00:33 Running a2l. DTT plot shows high incoherence in YAW. Python problems? Not running from the MEDM screen
00:52 Jeff K running calibration measurements
02:44 Checking a2l, post calibration. PIT is showing a fair amount of misalignment. Going to run the script one more time before setting the intention bit.
03:02 H1 in Observing
He was informing me that they were going to go to Observing. I told him we had been there for a few hours already, but he brought to my attention the fact that GWIstat is reporting us as NOT OK. Anyone?
Apologies. We've been at NLN for about that long. In Observation for only about 1 hour.
Seems like H1:DMT-CALIBRATED is 0 (zero) not 1. Is this related to the calibration task performed today?
Is this why GWIstat thinks that H1 is not OK?
Sent a message to Jeff Kissel, Aaron Viets and Alex Urban.
I tried a few things to see if I could figure out why the calibration flag wasn't set.
1) Restarted the redundant calibration pipeline. This probably caused some of the backup frames to be lost, but the primary and low-latency frames would not be affected. The Science_RSegs_H1 process https://marble.ligo-wa.caltech.edu/dmt/monitor_reports/Science_RSegs_H1/Segment_List.html is generating segments from the output of the (restarted) redundant pipeline, but it is getting the same results.
2) Checked for dataValid errors in the channels in the broadcaster frames. dataValid errors would probably cause the pipeline to flush the h(t) data. No such errors were found.
3) Checked for subnormal/NaN data in the broadcaster frames (see the Python sketch below), another potential problem that might cause the pipeline to flush the data. No problems of this type were found either.
4) Checked the pipeline log file - nothing unusual.
5) Checked for frame errors or broadcaster restarts flagged by the broadcast receiver. The last restart was Dec 5!
So, I can see no reason for the h(t) pipeline to not be running smoothly.
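(For reference, the check in item 3 can be reproduced with a few lines of Python; this is only a rough sketch, assuming gwpy is available -- the frame file and channel name below are placeholders, not the actual broadcaster files:)

import numpy as np
from gwpy.timeseries import TimeSeries

# placeholder frame file and channel; the real check ran over the DMT broadcaster frames
data = TimeSeries.read('example_broadcast_frame.gwf', 'H1:CAL-DELTAL_EXTERNAL_DQ')
vals = data.value
print('NaNs:', np.isnan(vals).sum())
tiny = np.finfo(vals.dtype).tiny       # smallest normal value for this float type
print('subnormals:', np.count_nonzero((vals != 0) & (np.abs(vals) < tiny)))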
Alex U. on behalf of the GDS h(t) pipeline team
I've looked into why the H1:DMT-CALIBRATED flag is not being set, and TL;DR: it's because of the kappa_TST and kappa_PU factors.
Some detail: the H1:DMT-CALIBRATED flag can only be active if we are OBSERVATION_READY, h(t) is being produced, the filters have settled in, and, since we're tracking time-dependent corrections at LHO, the kappa factors (except f_CC) must each be within range -- if any of them strays more than 10% from its nominal value, the DMT-CALIBRATED flag will fail to be set. (See the documentation for this on our wiki page: https://wiki.ligo.org/viewauth/Calibration/TDCalibReviewO1#CALIB_STATE_VECTOR_definitions_during_ER10_47O2)
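(As a rough illustration of that range check -- not the pipeline's actual implementation; the nominal values here are placeholders:)

KAPPA_NOMINAL = {'kappa_tst': 1.0, 'kappa_pu': 1.0, 'kappa_c': 1.0}   # placeholder nominals
TOLERANCE = 0.10                                                      # the 10% band described above

def kappas_in_range(kappas):
    """True only if every tracked kappa is within 10% of its nominal value."""
    return all(abs(kappas[name] - nominal) / abs(nominal) <= TOLERANCE
               for name, nominal in KAPPA_NOMINAL.items())

# DMT-CALIBRATED can only be set when we are OBSERVATION_READY, h(t) is being produced,
# the filters have settled, and kappas_in_range(...) returns True (f_CC is not range-checked).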
I attach below a timeseries plot of the real and imaginary parts of each kappa factor. (What's actually plotted is 1 + the imaginary part, to make them fit on the same axes.) As you can see, around half an hour or so in, the kappa_TST and kappa_PU factors go off the rails, straying 20-30% outside their nominal values. (kappa_C, which is a time-dependent gain on the sensing function, and f_CC both stay within range during this time period.)
Earlier today, Jeff reported on some work done with the L2/L3 actuation stages (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32933) which may in principle affect kappa_TST and kappa_PU. It's possible we will need a new set of time domain filters to absorb these changes into the GDS pipeline. (I also tried a test job from the DMT machine, but the problems with kappas were still present, meaning a simple restart won't solve the problem.)
GWIstat (also the similar display gwsnap) was reporting that H1 was down because of the h(t) production problem; it did not distinguish between that and a down state. I have now modified GWIstat (and gwsnap) to indicate if there is no good h(t) being produced but otherwise the detector is running.
The attached pdf shows that CALCS and GDS agree on the calculation of kappa_tst. I suspect we may need to calculate new EPICS. Jeff (or perhaps Evan or Darkhan) will need to confirm this based on the recent L2/L3 crossover changes that Alex pointed out.
Here is a comparison between h(t) computed in C00 frames (with kappas applied) and the "correct"-ish calibration, with no kappas applied. The first plot shows the spectra of the two from GPS time 1167559872 to 1167559936. The red line is C00, and the blue line has no kappas applied. The second plot is an ASD ratio (C00 / no-kappas-applied) during the same time period. The cache file that has the no-kappas-applied frames can be found in two locations:
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/H1_hoft_GDS_frames.cache
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/calibration/H1/gstreamer10_test/H1_hoft_GDS_frames.cache
Also, the file ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/H1_hoft_test_1167559680-320.txt is a text file that has only h(t) from GPS time 1167559680 to 1167600000.
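(If anyone wants to reproduce the ASD-ratio plot, a rough sketch along these lines should work, assuming gwpy; the channel name, frametype and crude cache parsing are my assumptions, not necessarily what was used for the plots above:)

from gwpy.timeseries import TimeSeries

start, end = 1167559872, 1167559936
# C00 h(t) with kappas applied, from the archived frames (frametype is an assumption)
hoft_c00 = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end, frametype='H1_HOFT_C00')
# no-kappas-applied h(t) from the cache listed above; each LAL-cache line ends in a file URL
frames = [line.split()[-1].replace('file://localhost', '') for line in open('H1_hoft_GDS_frames.cache')]
hoft_test = TimeSeries.read(frames, 'H1:GDS-CALIB_STRAIN', start=start, end=end)

asd_c00 = hoft_c00.asd(fftlength=16, overlap=8)
asd_test = hoft_test.asd(fftlength=16, overlap=8)
ratio = asd_c00 / asd_test     # C00 / no-kappas-applied; ~1 wherever the kappas are benign
ratio.plot()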
J. Kissel
I've taken the calibration measurement suite that shall be representative of post-winter break. Analysis to come, data files listed below.
Sensing Function Measurements:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs/
Swept Sine:
2017-01-03_H1DARM_OLGTF_4to1200Hz_25min.xml
2017-01-03_H1_PCAL2DARMTF_4to1200Hz_8min.xml
Broadband:
2017-01-03_H1_PCAL2DARMTF_BB_5to1000Hz.xml
Actuation Function Measurements:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/FullIFOActuatorTFs/2017-01-03/
UIM:
2017-01-03_H1SUSETMY_L1_iEXC2DARM_25min.xml
2017-01-03_H1SUSETMY_L1_PCAL2DARM_8min.xml
PUM:
2017-01-03_H1SUSETMY_L2_iEXC2DARM_17min.xml
2017-01-03_H1SUSETMY_L2_PCAL2DARM_8min.xml
TST:
2017-01-03_H1SUSETMY_L3_iEXC2DARM_8min.xml
2017-01-03_H1SUSETMY_L3_PCAL2DARM_8min.xml
Note that this includes the new/better L2/L3 crossover design re-installed earlier today (see LHO aLOG 32933), both in ETMY itself and in the CAL-CS replica that forms DELTAL_EXTERNAL_DQ. The mean data points for the ratio of PCAL to DELTAL_EXTERNAL (which should be unity if we've calibrated the data correctly) show a ~10%, frequency-dependent deviation, worst at ~200 Hz. We'll have to wait until the time-dependent parameters are corrected for before deciding if anything is really "wrong" or incorrect.
We know that we will have to adjust the actuation strength and sensing gain by a scalar ~1% because of mistakenly over-counting the gain of the analog anti-imaging and anti-aliasing filters (see LHO aLOG 32907), but this won't account for the majority of the discrepancy.
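(For illustration of the PCAL-to-DELTAL sanity check, a rough sketch of a transfer-function estimate from the raw channels, assuming gwpy; the channel names, GPS times and FFT parameters are placeholders -- the real analysis uses the exported swept-sine / broadband templates listed above:)

from gwpy.timeseries import TimeSeriesDict

start, end = 1167559872, 1167559936          # placeholder stretch with a PCAL excitation running
chans = ['H1:CAL-PCALY_RX_PD_OUT_DQ', 'H1:CAL-DELTAL_EXTERNAL_DQ']
data = TimeSeriesDict.fetch(chans, start, end)
pcal, deltal = data[chans[0]], data[chans[1]]
tf = pcal.csd(deltal, fftlength=16) / pcal.psd(fftlength=16)   # PCAL -> DELTAL transfer function
# with both channels calibrated into meters of displacement, |tf| should be ~1;
# the measurements above instead show a ~10% frequency-dependent deviation, worst near 200 Hz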
Two alarms so far 5 minutes apart. Trends don't really show anything obvious.
Thu 22nd Dec - Sat 24th Dec No restarts reported
Sun 25th Dec Many unexpected restarts of h1tw0 (05:35 - 13:10). System turned off to prevent further restarts.
Mon 26th Dec - Fri 30th Dec No restarts reported
Sat 31st Dec
2016_12_31 15:57 h1iopsusauxh34
2016_12_31 15:57 h1susauxh34
h1susauxh34 computer died, was power cycled.
Sun 1st Jan - Mon 2nd Jan No restarts reported
J. Kissel I've grabbed traditional "charge" (effective bias voltage due to charge) measurements from H1 SUS ETMX and ETMY this afternoon during an earthquake. Measurements show that the effective bias voltage is still holding around / under +/-10 [V] in all quadrants. Nice! Still on the to-do list: compare this against longitudinal actuation strength measurements via calibration lines, a la LHO aLOG 24547. Perhaps our New Year's resolution can be to start this regular comparison up again.
awgtpman issues
Jenne, Dave, Jim:
we experienced some TP issues this morning. The command-line "diag -l" was slow to start and did not support testpoints. First we restarted the models on h1susauxh34, since this machine showed errors over the break and had CRC errors; this did not fix the TPs. Next we restarted the awgtpman process on h1asc, and this did fix the problem. Remember that h1asc has a special awgtpman process to permit more testpoints to be opened. The reason for today's problem is unknown.
Guardian reboot
Dave, Jim:
To ensure the python leap-second updates were installed on all nodes, we rebooted h1guardian0 (it had been running for 33 days). All nodes came back with no problems. We recovered about 4GB of memory.
python gpstime leapseconds
Jim
gpstime package updated, see alog 32919 for details.
Jeff K, Jonathan, Jim, Dave:
For due diligence we performed some sanity tests on the DAQ to confirm the leap-seconds did not cause any problems.
Event at Known Time:
Jeff K dropped a ball onto the control room floor at a recorded time. We looked at a seismometer signal (e.g. H1:ISI-GND_STS_HAM2_X_DQ) using both NDS1 (dataviewer and command-line) and NDS2. The signal showed up in the latter part of the recorded second as expected.
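(Not part of the test itself, but the NDS2 look can be repeated with a couple of lines of gwpy; the GPS time and NDS host below are placeholders:)

from gwpy.timeseries import TimeSeries

t_drop = 1167523700                    # placeholder: GPS second of the recorded ball drop
data = TimeSeries.fetch('H1:ISI-GND_STS_HAM2_X_DQ', t_drop - 2, t_drop + 2,
                        host='nds.ligo-wa.caltech.edu')
data.plot()                            # the impulse should sit in the latter part of the second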
Decode IRIG-B analog signal:
The digitized IRIG-B signal H1:CAL-PCALX_IRIGB_OUT_DQ was read by hand for an arbitrary GPS time. The time chosen is GPS = 1167523720, which corresponds to a UTC time of Jan 04 2017 00:08:22.
The decoded IRIG-B time is 00:08:40, which is UTC + 18 s. There have indeed been 18 leap seconds applied to UTC since the GPS epoch of Jan 1980, so this is correct.
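(As a quick cross-check, the arithmetic can be reproduced in a few lines of Python; a sketch only, since it hard-codes the current 18 s GPS-UTC offset rather than looking up the leap-second table:)

from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)      # GPS time zero (UTC)
GPS_MINUS_UTC = 18                    # leap seconds accumulated since the GPS epoch, as of Jan 2017

gps = 1167523720
utc = GPS_EPOCH + timedelta(seconds=gps - GPS_MINUS_UTC)
print(utc)                            # 2017-01-04 00:08:22, matching the UTC time quoted above
# The IRIG-B clock carries no leap-second correction, so it reads UTC + 18 s, i.e. 00:08:40.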
For anyone interested in decoding IRIG-B by hand, the attached image shows the seconds, minutes, and hours part of the analog signal along with the decoding table.
TITLE: 01/03 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
Restoring From Holiday Log Notes:
John, Kyle, Alfredo, Gerardo: X2-8 battery charge was a little low, but it still had lots of life left.
J. Kissel
Before the holiday break, I'd discovered that we had somehow lost the settings that improved the design of the L2/L3 (or PUM/TST) crossover -- see LHO aLOG 32540 for the bad-news discovery, and LHO aLOG 28746 for the original design. I've now fixed the problem, and we have the new improved crossover again. This required several steps:
(1) Turned on / switched over to the appropriate filters in the L2 and L3 DRIVEALIGN_L2L filter banks:
           Good          Bad
L2 L2L  (FM6, FM7)   (FM2, FM7, FM8)
L3 L2L  (FM4, FM5)   (FM3, FM4)
(2) Turned on / switched over to the appropriate filters in the corresponding replicas of those filter banks in the CAL-CS model, such that the calibration will be unaffected.
(3) Changed the LOWNOISE_ESD_ETMY state of the ISC_LOCK guardian, such that it now forces the new configuration instead of the old. Committed to the userapps repository.
(4) Accepted the changes in the H1SUSETMY and H1CALCS SDF systems.
Hopefully we won't lose these settings again!
While waiting for the ground to stop shaking, I ran through Betsy's annotated LVEA sweep. I didn't find anything out of place. I did run through the science-mode process for the PSL (unclear if that was necessary, but I got the impression from the checksheet on the PSL that it was). Everything else seemed okay. I don't believe the ends have been done, but access is dicey today.
This morning, Jason, Mark and I swapped the assumed-to-be-failing TCSY flow sensor, which has been showing epochs of glitching and low readout (while other indicators show normal flow; alogs 32712 and 32230). The process was as follows:
1) Key laser off at control box in rack, LVEA
2) Turn RF off at mezzanine rack, Mech room
3) Turn chiller off on mezzanine, Mech room
4) Turn power off on back of controller box in rack, LVEA (we also pulled the power cable to the sensor off the front of the controller, but it was probably overkill)
5) Close in-line valves under BSC chamber near yellow sensor to-be-swapped, LVEA
6) Quick-disconnect water tubes at manifold near table, LVEA
7) Pulled the yellow top off of the yellow sensor housing under the BSC at the piping, LVEA
8) Pulled the blue and black wires to the power receptacles inside the housing (see pic attached). Pulled the full grey cable out of the housing.
9) While carefully supporting the blue piping*, unscrewed the large white nut holding the housing/sensor to the piping (this was tough, in fact so tough that we later removed all of the teflon tape, which was unneeded in this joint)
10) Pull* straight up on the housing (hard) and it comes out of the piping.
11) Reverse all the above steps to insert the new housing/sensor and wires and turn everything back on. Watch for rolled O-rings on the housing and proper alignment of the notch feature when installing the new sensor. Verify the mechanical flow sensors in the piping line show a ~3-4 G/m readout when flow/chiller is restored to functionality.
12) Set up the new flow sensor head with settings: go to the other in-use sensor, pull off the top, and scroll through the menu items (red and white buttons on the unit, shown in pic). Set the new head to those values.
13) Verify the new settings on the head are showing a ~3 G/m readout on the MEDM screen. If not, possibly there is a setting on the sensor that needs to be revisited.
14) Monitor TCS to see that laser comes back up and stabilizes.
* The blue piping can crack, so be careful to always support it and avoid applying torque to it.
Note - with the sensor removed, we could see a lot of green murk in the blue piping where the paddle wheel sits. Still suffering green sludge in this system...
A few pictures to add to those already posted. The O-ring closest to the paddle wheel had a cut in it. It's not near the electronics, plus there's the other O-ring, so it doesn't look like water was getting into where the electronics are housed. There is some kind of stuff stuck to each blade (paddle?) of the paddle wheel. Not a good sign if the cooling water for the laser is meant to be clean.
Settings were as follows:
FLO Unit (Flow Unit) = G/m (default was L/m)
FActor (K-Factor) = 135.00 (default was 20)
AVErage (Average) = 0
SEnSit (Sensitivity) = 0
4 Set (4mA Set Point) = 0 G/m
20 Set (20mA Set Point) = 10 G/m (default was 160)
ContrAST (Contrast) = 3
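(For reference, the 4 mA / 20 mA set points define a simple linear current-to-flow map; a small sketch of that mapping, illustrative only -- the actual conversion happens in the controller/EtherCAT readout chain:)

def current_to_gpm(i_ma, i_lo=4.0, flow_lo=0.0, i_hi=20.0, flow_hi=10.0):
    """Map 4-20 mA loop current to flow in G/m using the set points above."""
    return flow_lo + (i_ma - i_lo) * (flow_hi - flow_lo) / (i_hi - i_lo)

print(current_to_gpm(4.0))    # 0.0 G/m at the 4 mA set point
print(current_to_gpm(20.0))   # 10.0 G/m at the 20 mA set point
print(current_to_gpm(8.8))    # ~3 G/m, the nominal readout we expect on the MEDM screen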
Here's both TCS systems' laser power and flow for the past day. The dropout in the ITMY data is our few-hour sensor replacement work. So far no glitching or low droops, although there weren't any in the last 24 hours on the old sensor either.
Attached is a 14-day-duration minute trend of the TCSy chiller flow rate and CO2 laser power since our swap of the TCSy flow sensor. There have been 7 glitches below 2 GPM, with 3 of those glitches being below 1 GPM; all 7 glitches occurred in the last week. Unless the spare flow sensor is also faulty (not beyond belief, but still a hard one to swallow), the root cause of our TCSy flow glitches lies elsewhere.
It might be a good idea to try swapping the laser controller chassis next. The electronics path for this flow meter is very simple - just the controller and then into the EtherCAT chassis where it's read by an ADC.
Also, PSL AOM diffracted power is running at ≈7%, which by my recollection is kind of high(ish).