As I recall, the state of this node had been WAIT_FOR_NEXT_INJECT. Late in my shift I decided to have a look and noticed that the current state is NONE. Should this node be returned to WAIT_FOR_NEXT_INJECT?
00:33 Running a2l. DTT plot shows high incoherence in YAW. Python problems? Not running from the MEDM screen.
00:52 Jeff K running calibration measurements
02:44 Checking a2l post-calibration. PIT is showing a good deal of misalignment. Going to run the script one more time before setting the intention bit.
03:02 H1 in Observing
He was informing me that they were going to go to Observing. I told him we had been there for a few hours already, but he brought to my attention the fact that GWIstat is reporting us as NOT OK. Anyone?
Apologies. We've been at NLN for about that long, but in Observing for only about 1 hour.
Seems like H1:DMT-CALIBRATED is 0 (zero) not 1. Is this related to the calibration task performed today?
Is this why GWIstat thinks that H1 is not OK?
Sent a message to Jeff Kissel, Aaron Viets and Alex Urban.
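For reference, one way to confirm the flag state offline is to query the segment database with gwpy; a minimal sketch, assuming the flag is published as H1:DMT-CALIBRATED:1 (the version suffix and the GPS span are assumptions for illustration):

# Minimal sketch: query the segment database for the DMT-CALIBRATED flag.
# The ':1' version suffix and the GPS span below are assumptions.
from gwpy.segments import DataQualityFlag

start, end = 1167559872, 1167559936
flag = DataQualityFlag.query('H1:DMT-CALIBRATED:1', start, end)

print(flag.active)  # segments where the flag was actually set
print(abs(flag.active), 's active out of', abs(flag.known), 's known')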
I tried a few things to see if I could figure out why the calibration flag wasn't set.
1) Restarted the redundant calibration pipeline. This probably caused some of the backup frames to be lost, but the primary and low-latency frames would not be affected. The Science_RSegs_H1 process (https://marble.ligo-wa.caltech.edu/dmt/monitor_reports/Science_RSegs_H1/Segment_List.html) is generating segments from the output of the (restarted) redundant pipeline, but it is getting the same results.
2) Checked for dataValid errors in the channels in the broadcaster frames. A dataValid error would probably cause the pipeline to flush the h(t) data. No such errors were found.
3) Checked for subnormal/NaN data in the broadcaster frames, another potential problem that might cause the pipeline to flush the data. No problems of this type were found either.
4) Checked the pipeline log file - nothing unusual.
5) Checked for frame errors or broadcaster restarts flagged by the broadcast receiver. The last restart was Dec 5!
So, I can see no reason for the h(t) pipeline to not be running smoothly.
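For completeness, a minimal sketch (not the actual pipeline check) of how one might scan a broadcaster-frame channel for NaN, infinite, or subnormal samples; the frame path and channel name below are placeholders:

# Minimal sketch: scan one channel of a GWF frame for NaN, Inf, or subnormal
# samples of the kind that could make the pipeline flush h(t) data.
# The frame path and channel name are placeholders, not the broadcaster setup.
import numpy as np
from gwpy.timeseries import TimeSeries

data = TimeSeries.read('/path/to/broadcaster_frame.gwf', 'H1:GDS-CALIB_STRAIN')
x = data.value

tiny = np.finfo(x.dtype).tiny
n_nan = np.count_nonzero(np.isnan(x))
n_inf = np.count_nonzero(np.isinf(x))
n_subnormal = np.count_nonzero((x != 0) & (np.abs(x) < tiny))

print('NaN: %d  Inf: %d  subnormal: %d' % (n_nan, n_inf, n_subnormal))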
Alex U. on behalf of the GDS h(t) pipeline team
I've looked into why the H1:DMT-CALIBRATED flag is not being set, and TL;DR: it's because of the kappa_TST and kappa_PU factors.
Some detail: the H1:DMT-CALIBRATED flag can only be active if we are OBSERVATION_READY, h(t) is being produced, the filters have settled in, and, since we're tracking time-dependent corrections at LHO, the kappa factors (except f_CC) must each be within range -- if any strays more than 10% from its nominal value, the DMT-CALIBRATED flag will fail to be set. (See the documentation for this on our wiki page: https://wiki.ligo.org/viewauth/Calibration/TDCalibReviewO1#CALIB_STATE_VECTOR_definitions_during_ER10_47O2)
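To make the range test concrete, here is a minimal sketch of the logic described above (not the DMT code itself); the names and nominal values are assumptions, and the real check also covers the imaginary parts of each kappa:

# Minimal sketch of the range test: each kappa (except f_CC) must be within
# 10% of its nominal value for DMT-CALIBRATED to be set. Names and nominal
# values here are assumptions; the real DMT check also covers imaginary parts.
nominal = {'KAPPA_TST': 1.0, 'KAPPA_PU': 1.0, 'KAPPA_C': 1.0}

def kappas_in_range(measured, nominal=nominal, tol=0.10):
    """Return True if every measured kappa is within tol of its nominal value."""
    return all(abs(measured[k] - nominal[k]) <= tol * abs(nominal[k])
               for k in nominal)

# A 25% excursion in kappa_TST (like today's) fails the test:
print(kappas_in_range({'KAPPA_TST': 1.25, 'KAPPA_PU': 0.98, 'KAPPA_C': 1.01}))  # False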
I attach below a timeseries plot of the real and imaginary parts of each kappa factor. (What's actually plotted is 1 + the imaginary part, to make them fit on the same axes.) As you can see, around half an hour or so in, the kappa_TST and kappa_PU factors go off the rails, straying 20-30% outside their nominal values. (kappa_C, which is a time-dependent gain on the sensing function, and f_CC both stay within range during this time period.)
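For anyone reproducing the figure, a minimal sketch of the plotting convention (real part and 1 + imaginary part on common axes), using placeholder data rather than the actual kappa channels:

# Sketch of the plotting convention in the attached figure: Re(kappa) and
# 1 + Im(kappa) on the same axes, with the +/-10% band marked. Data below are
# placeholders, not fetched from the GDS kappa channels.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 3600, 3600)                       # time [s], placeholder
kappa_re = 1.0 + 0.02 * np.sin(2 * np.pi * t / 900)  # placeholder real part
kappa_im = 0.01 * np.cos(2 * np.pi * t / 900)        # placeholder imaginary part

plt.plot(t, kappa_re, label='Re(kappa_TST)')
plt.plot(t, 1.0 + kappa_im, label='1 + Im(kappa_TST)')
plt.axhspan(0.9, 1.1, alpha=0.2, label='+/-10% range')
plt.xlabel('Time [s]')
plt.legend()
plt.show()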
Earlier today, Jeff reported on some work done with the L2/L3 actuation stages (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32933) which may in principle affect kappa_TST and kappa_PU. It's possible we will need a new set of time domain filters to absorb these changes into the GDS pipeline. (I also tried a test job from the DMT machine, but the problems with kappas were still present, meaning a simple restart won't solve the problem.)
GWIstat (also the similar display gwsnap) was reporting that H1 was down because of the h(t) production problem; it did not distinguish between that and a down state. I have now modified GWIstat (and gwsnap) to indicate if there is no good h(t) being produced but otherwise the detector is running.
The attached pdf shows that CALCS and GDS agree on the calculation of kappa_tst. I suspect we may need to calculate new EPICS. Jeff (or perhaps Evan or Darkhan) will need to confirm this based on the recent L2/L3 crossover changes that Alex pointed out.
Here is a comparison between h(t) computed in C00 frames (with kappas applied) and the "correct"-ish calibration, with no kappas applied. The first plot shows the spectra of the two from GPS time 1167559872 to 1167559936. The red line is C00, and the blue line has no kappas applied. The second plot is an ASD ratio (C00 / no-kappas-applied) during the same time period. The cache file that has the no-kappas-applied frames can be found in two locations:
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/H1_hoft_GDS_frames.cache
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/calibration/H1/gstreamer10_test/H1_hoft_GDS_frames.cache
Also, the file ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/H1_hoft_test_1167559680-320.txt is a text file that has only h(t) from GPS time 1167559680 to 1167600000.
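For anyone who wants to redo the comparison, a minimal sketch of the ASD-ratio computation using gwpy; it assumes gwpy will expand the LAL-format cache file directly and that the test frames carry the same channel name as the C00 data (both are assumptions):

# Sketch of the ASD-ratio comparison (C00 / no-kappas-applied). Assumes gwpy
# can expand the cache file directly and that the test frames use the same
# channel name as C00; both are assumptions.
from gwpy.timeseries import TimeSeries

start, end = 1167559872, 1167559936

c00 = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
test = TimeSeries.read('/home/aaron.viets/H1_hoft_GDS_frames.cache',
                       'H1:GDS-CALIB_STRAIN', start=start, end=end)

ratio = c00.asd(fftlength=16) / test.asd(fftlength=16)
ratio.plot().show()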
J. Kissel

I've taken the calibration measurement suite that shall be representative of post-winter break. Analysis to come; data files listed below.

Sensing Function Measurements:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs/
Swept Sine:
2017-01-03_H1DARM_OLGTF_4to1200Hz_25min.xml
2017-01-03_H1_PCAL2DARMTF_4to1200Hz_8min.xml
Broadband:
2017-01-03_H1_PCAL2DARMTF_BB_5to1000Hz.xml

Actuation Function Measurements:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/FullIFOActuatorTFs/2017-01-03/
UIM:
2017-01-03_H1SUSETMY_L1_iEXC2DARM_25min.xml
2017-01-03_H1SUSETMY_L1_PCAL2DARM_8min.xml
PUM:
2017-01-03_H1SUSETMY_L2_iEXC2DARM_17min.xml
2017-01-03_H1SUSETMY_L2_PCAL2DARM_8min.xml
TST:
2017-01-03_H1SUSETMY_L3_iEXC2DARM_8min.xml
2017-01-03_H1SUSETMY_L3_PCAL2DARM_8min.xml

Note that this includes the new/better L2/L3 crossover design re-installed earlier today (see LHO aLOG 32933), both in ETMY itself and in the CAL-CS replica that forms DELTAL_EXTERNAL_DQ. The mean data points for the ratio of PCAL to DELTAL_EXTERNAL (which should be unity if we've calibrated the data correctly) show a ~10%, frequency-dependent deviation, worst at ~200 Hz. We'll have to wait until time-dependent parameters are corrected for before deciding if anything is really "wrong" or incorrect. We know that we will have to adjust the actuation strength and sensing gain by a scalar ~1% because of mistakenly over-counting the gain of the analog anti-imaging and anti-aliasing filters (see LHO aLOG 32907), but this won't be the majority of the discrepancy.
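As an aside, the PCAL-to-DELTAL_EXTERNAL consistency check amounts to a transfer-function estimate that should sit at unity in the calibrated band; a minimal sketch with placeholder arrays (the sample rate and data here are assumptions, not the measurement itself):

# Sketch of the consistency check: the transfer function from PCAL to
# CAL-DELTAL_EXTERNAL should have magnitude ~1 where the calibration is good.
# pcal/deltal below are placeholder arrays, not the real channels.
import numpy as np
from scipy import signal

fs = 16384                                            # assumed sample rate [Hz]
rng = np.random.default_rng(0)
pcal = rng.standard_normal(fs * 64)                   # placeholder PCAL displacement
deltal = pcal + 0.05 * rng.standard_normal(fs * 64)   # placeholder DELTAL

f, Pxy = signal.csd(pcal, deltal, fs=fs, nperseg=fs * 4)
f, Pxx = signal.welch(pcal, fs=fs, nperseg=fs * 4)
tf = Pxy / Pxx                                        # PCAL -> DELTAL transfer function

band = (f > 20) & (f < 1000)
print('max |ratio - 1| in 20-1000 Hz band: %.3f' % np.abs(np.abs(tf[band]) - 1.0).max())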
Two alarms so far, 5 minutes apart. Trends don't really show anything obvious.
Thu 22nd Dec - Sat 24th Dec No restarts reported
Sun 25th Dec Many unexpected restarts of h1tw0 (05:35 - 13:10). System turned off to prevent further restarts.
Mon 26th Dec - Fri 30th Dec No restarts reported
Sat 31st Dec
2016_12_31 15:57 h1iopsusauxh34
2016_12_31 15:57 h1susauxh34
h1susauxh34 computer died, was power cycled.
Sun 1st Jan - Mon 2nd Jan No restarts reported
J. Kissel

I've grabbed traditional "charge" (effective bias voltage due to charge) measurements from H1 SUS ETMX and ETMY this afternoon during an earthquake. Measurements show that the effective bias voltage is still holding at around or under +/-10 [V] in all quadrants. Nice! Still on the to-do list: compare this against longitudinal actuation strength measurements via calibration lines, a la LHO aLOG 24547. Perhaps our New Year's resolution can be to start this regular comparison up again.
awgtpman issues
Jenne, Dave, Jim:
We experienced some TP issues this morning. The command-line "diag -l" was slow to start and did not support testpoints. First we restarted the models on h1susauxh34, since this computer showed errors over the break and had CRC errors; this did not fix the TPs. Next we restarted the awgtpman process on h1asc, and this did fix the problems. Remember that h1asc has a special awgtpman process to permit more testpoints to be opened. The reason for today's problem is unknown.
Guardian reboot
Dave, Jim:
To ensure the python leap-second updates were installed on all nodes, we rebooted h1guardian0 (it had been running for 33 days). All nodes came back with no problems. We recovered about 4GB of memory.
python gpstime leapseconds
Jim
gpstime package updated, see alog 32919 for details.
Jeff K, Jonathan, Jim, Dave:
For due diligence we performed some sanity tests on the DAQ to confirm the leap-seconds did not cause any problems.
Event at Known Time:
Jeff K dropped a ball onto the control room floor at a recorded time. We looked at a seismometer signal (e.g. H1:ISI-GND_STS_HAM2_X_DQ) using both NDS1 (dataviewer and command-line) and NDS2. The signal showed up in the latter part of the recorded second as expected.
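A minimal sketch of that NDS2 fetch with gwpy (the GPS time below is a placeholder, not the recorded drop time):

# Sketch: fetch the seismometer channel around the recorded ball-drop time via
# NDS2 and look for the impulse. The GPS time here is a placeholder.
from gwpy.timeseries import TimeSeries

t_drop = 1167523722   # placeholder GPS time of the ball drop
data = TimeSeries.fetch('H1:ISI-GND_STS_HAM2_X_DQ', t_drop - 2, t_drop + 2,
                        host='nds.ligo-wa.caltech.edu')
data.plot().show()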
Decode IRIG-B analog signal:
The digitized IRIG-B signal H1:CAL-PCALX_IRIGB_OUT_DQ was read by hand for an arbitrary GPS time. The time chosen is GPS = 1167523720, which corresponds to a UTC time of Jan 04 2017 00:08:22.
The decoded IRIG-B time is 00:08:40, which is UTC + 18 s. There have indeed been 18 leap seconds applied to UTC since the GPS epoch of Jan 1980, so this is correct.
For anyone interested in decoding irig-b by hand, the attached image shows the seconds, minutes, hours part of the analog signal along with the decoding table.
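As a worked example of that table, here is a minimal sketch of the BCD arithmetic for the time above; the bit values are written out for 00:08:40 rather than read off the attached trace, and the index-marker framing of the real signal is omitted:

# Sketch of the IRIG-B BCD decode. Seconds and minutes use bit weights
# 1,2,4,8 (units) then 10,20,40 (tens); hours use 1,2,4,8 then 10,20.
# The bits below are written out for 00:08:40, not read from the attachment.
def bcd(bits, weights):
    """Sum the weights of the bits that are set."""
    return sum(w for b, w in zip(bits, weights) if b)

sec_bits  = [0, 0, 0, 0, 0, 0, 1]   # -> 40
min_bits  = [0, 0, 0, 1, 0, 0, 0]   # -> 8
hour_bits = [0, 0, 0, 0, 0, 0]      # -> 0

seconds = bcd(sec_bits,  [1, 2, 4, 8, 10, 20, 40])
minutes = bcd(min_bits,  [1, 2, 4, 8, 10, 20, 40])
hours   = bcd(hour_bits, [1, 2, 4, 8, 10, 20])

print('%02d:%02d:%02d' % (hours, minutes, seconds))  # 00:08:40 = 00:08:22 UTC + 18 s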
TITLE: 01/03 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
Restoring From Holiday Log Notes:
John, Kyle, Alfredo, Gerardo: X2-8 battery charge a little low but still had lots of life left.
J. Kissel

Before the holiday break, I'd discovered that we had somehow lost the settings that improved the design of the L2/L3 (or PUM/TST) crossover -- see LHO aLOG 32540 for the bad-news discovery, and LHO aLOG 28746 for the original design. I've now fixed the problem, and we have the new improved crossover again. This required several steps (a sketch of the filter switching appears after this list):

(1) Turned on / switched over to the appropriate filters in the L2 and L3 DRIVEALIGN_L2L filter banks:
            Good            Bad
L2 L2L      (FM6, FM7)      (FM2, FM7, FM8)
L3 L2L      (FM4, FM5)      (FM3, FM4)
(2) Turned on / switched over to the appropriate filters in the corresponding replicas of those filter banks in the CAL-CS model, such that the calibration will be unaffected.
(3) Changed the LOWNOISE_ESD_ETMY state of the ISC_LOCK guardian, such that it now forces the new configuration instead of the old. Committed to the userapps repository.
(4) Accepted the changes in the H1SUSETMY and H1CALCS SDF systems.

Hopefully we won't lose these settings again!
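For steps (1) and (3), a minimal sketch (not the committed LOWNOISE_ESD_ETMY code) of how a guardian state can force the new configuration through ezca; the exact calls in the committed state may differ:

# Minimal sketch, not the committed guardian code: force the new ("good")
# L2/L3 crossover configuration in the DRIVEALIGN_L2L banks.
# `ezca` is the guardian-provided Ezca object available inside state code.
ezca.switch('SUS-ETMY_L2_DRIVEALIGN_L2L', 'FM2', 'FM8', 'OFF')
ezca.switch('SUS-ETMY_L2_DRIVEALIGN_L2L', 'FM6', 'FM7', 'ON')
ezca.switch('SUS-ETMY_L3_DRIVEALIGN_L2L', 'FM3', 'OFF')
ezca.switch('SUS-ETMY_L3_DRIVEALIGN_L2L', 'FM4', 'FM5', 'ON')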
Also, PSL AOM diffracted power is running at ≈7%, which by my recollection is kind of high(ish).