Lowered actuator value to 34% open for Dewar fill this morning.
15:15 UTC
TITLE: 03/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 70Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 8mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.40 μm/s
QUICK SUMMARY:
14:50 Saw Roto-Rooter coming onto site
14:55 Peter King going into PSL enclosure
TITLE: 03/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY: Observing entire shift. No issues to report.
LOG:
09:51 UTC damped PI mode 27 by changing sign of gain
Bubba will escort Apollo to mid Y and then survey mid X
14:26 UTC Chris to mid Y, use snorkel lift to work around roof edge
14:41 UTC Ken to mechanical room to install controller
14:49 UTC Roto-Rooter? through gate to see Bubba
14:56 UTC Hanford fire department through gate
See attached.
Temperatures outside are warming, enough that I am comfortable turning off some of the heating elements in the LVEA. In particular, Zone 2B has been turned back this morning. I will continue to monitor temps closely.
Have remained in observing. No issues to report.
TITLE: 03/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 8mph Gusts, 7mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.51 μm/s
QUICK SUMMARY:
No issues to report.
TITLE: 03/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: One lockloss. No obvious environmental cause. Quickly recovered. Useism is creeping upward. TCS guardian kicked us out of Observe once. Not much else is going on.
LOG:
23:15 Gerardo back
23:30 Balers done
01:58 CO2 laser lost lock (I think it was CO2Y). Back to Observe a minute later
06:04 Lockloss
06:40 Back to Observe
WP6543, FRS7701 Jonathan and Carlos:
The primary 2FA CDS login server cdsssh was upgraded from Ubuntu 12.04 LTS to Debian 8. The upgrade was needed because Ubuntu 12.04 is end-of-life and will soon be unsupported.
TITLE: 03/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 16mph Gusts, 13mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY:
Been locked for 10.5 hours. Balers should be done by now (Bubba reported that they should be done by 16:30 local). Nothing else to report.
TITLE: 03/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
LOG:
15:00 Balers on Xarm
21:00 Betsy, TJ to Optics Lab, out 22:00
21:15 Fil & Richard on Xarm, working on vault
22:00 Gerardo to optics lab
21:30 Evan & Miriam doing blip glitch measurements while LLO is down
WP 6544, ECR E1700107
I have updated the following frontend models to implement the time-domain DCPD cross correlation technique:
- omc.mdl (master model)
- h1omc.mdl
- CAL_CS_MASTER.mdl
- h1calcs.mdl
The h1omc and h1calcs models will be installed and restarted tomorrow during the maintenance period.
[The changes]
Here is a list of things I added today in the OMC models (namely h1omc.mdl and omc.mdl). Attached are screenshots of the relevant parts of the models.
Here is a list of things I added today in the CAL CS models, e.g. h1calcs.mdl and CAL_CS_MASTER.mdl (including H(L)1:CAL-DELTAL_A and H(L)1:CAL-DELTAL_B). After these changes, I confirmed that the models compile without error. They are ready for tomorrow's installation. The models are checked into SVN.
Evan, Miriam,
While L1 was down from a small lock loss, we made a series of injections (with H1 in commissioning mode) in the H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_EXC channel. The injections are single sine-Gaussian pulses that simulate blip glitches (see https://ldas-jobs.ligo-wa.caltech.edu/~miriam.cabero/sine-gaussian.png ).
We started very quietly and slowly increased the amplitude, so that only the last 3 of the 9 injections appear in GDS-CALIB_STRAIN. The GPS times of the injections are:
1174684631
1174684680
1174684715
1174684745
1174684793
1174684854
1174684889 *
1174684945 *
1174685355 *
* Can be seen in CALIB_STRAIN
We will repeat this kind of injection at opportunistic times (L1 not observing) in the coming days, using different blip morphologies and different amplitudes.
Only the loudest of these saturated the noisemons, and it did so by hitting an analog limit at ±22000 counts. I projected the drive signal into noisemon counts and looked at the last three injections on the list. In the first two, the noisemon signal tracks the drive, and the subtraction of the two is just noise. In the last (loudest) injection, the noisemon hits an analog saturation at both +22000 and -22000 counts, leaving a huge glitch in the subtracted data. This is good because it suggests that the only important analog limit in the noisemon is this threshold. I don't have time to document it now, but I've tried the same with a set of loud detchar injections, which go up to hundreds of Hz, and I get the same behavior. So when the drive signal does not push the noisemon beyond 22000 counts, we can trust the subtraction, and anything we see has entered the signal between the DAC and the noisemon; it's a glitch in the electronics and not a result of the DARM loop. Attached are the three subtractions (noisemon minus projected drive signal).
Jim, Dave:
At 15:44:58 UTC (08:44:58 PDT) we received a timing error which lasted only one second. The error was reported by the CNS-II independent GPS receivers at both end stations: they both went into the 'Waiting for GPS lock' error state at 15:44:58, stayed there for one second, and then went good. The IRIG-B signals from these receivers are acquired by the DAQ (and monitored by GDS). The IRIG-B signals for the second prior, the second of the error, and the following two seconds (4 seconds in total) are shown below.
As can be seen, even though EX and EY both reported the error, only EX's IRIG-B is missing during the bad second.
The encoded seconds in the IRIG-B are shown in the table below. Note that the GPS signal does not have leap seconds applied, so GPS = UTC +18.
| Actual seconds | EX IRIG-B seconds | EY IRIG-B seconds |
| 15 | 15 | 15 |
| 16 | missing | 16 |
| 17 | 16 | 17 |
| 18 | 18 | 18 |
So EY was sequential through this period. EX slipped the 16 second by one second, skipped 17, and resynced at 18.
Summary: All problems were in the CNS II GPS channels at LHO. No problems were observed in the Trimble GPS channels at either site, nor in the LLO CNS II channels, with the exception of a change of -80ns in the LLO Trimble GPS PPSOFFSET a few seconds after the anomaly (see below). It seems that both LHO CNS II clocks simultaneously dropped from 10 to 3 satellites tracked for a single second. There is no channel recording the number of satellites locked by the Trimble clocks, but the RECEIVERMODEs at both sites remained at the highest level of quality, OverDeterminedClock (level 5 for the Trimbles), with no interruption at the time of the anomaly. It is unclear whether the LLO PPSOFFSET is causally related to the LHO event; the lack of other anomalous output from the LLO Trimble clock suggests that it is otherwise performing as intended.

Descriptions of the anomalous plots are below; all anomalous plots are attached.
- Dilution of precision at BOTH LHO CNS II clocks skyrockets to 100 around the event (nominal values around 1) (H1:SYS-TIMING_X_GPS_A_DOP, H1:SYS-TIMING_Y_GPS_A_DOP).
- Number of satellites tracked by BOTH LHO CNS II clocks plummets for two seconds from 10 to 3 (H1:SYS-TIMING_X_GPS_A_TRACKSATELLITES, H1:SYS-TIMING_Y_GPS_A_TRACKSATELLITES).
- In the second before the anomaly, both of the LHO CNS II clocks' RECEIVERMODEs went from 3DFix to 2DFix for exactly one second, as evidenced by a change in state from 6 to 5 in their channels' values (H1:SYS-TIMING_X_GPS_A_RECEIVERMODE, H1:SYS-TIMING_Y_GPS_A_RECEIVERMODE).
- The 3D speed spiked right around the anomaly for both LHO CNS clocks (H1:SYS-TIMING_X_GPS_A_SPEED3D, H1:SYS-TIMING_Y_GPS_A_SPEED3D).
- The LHO CNS II clocks' 2D speeds both climb up to ~0.1 m/s (obviously fictitious) (H1:SYS-TIMING_X_GPS_A_SPEED2D, H1:SYS-TIMING_Y_GPS_A_SPEED2D).
- The LHO Y-end CNS II clock calculated a drop in elevation of 1.5m following the anomaly (obviously spurious) (H1:SYS-TIMING_Y_GPS_A_ALTITUDE).
- The LHO X-end CNS II clock thinks it dropped by 25m following the anomaly! I'm not sure why this is so much more extreme than the Y-end calculated drop (H1:SYS-TIMING_X_GPS_A_ALTITUDE).
- The Livingston corner GPS PPSOFFSET went from its usual value of ~0±3ns to -80ns for a single second at t_anomaly + 3s (L1:SYS-TIMING_C_GPS_A_PPSOFFSET).
- The GPS error flag for both LHO CNS II clocks came on, of course (H1:SYS-TIMING_Y_GPS_A_ERROR_FLAG, H1:SYS-TIMING_X_GPS_A_ERROR_FLAG).
Using my very limited knowledge of Windows administration, I have attempted to list the events logged on h1ecatc1 from 8:00 - 10:00 AM on Feb. 27, 2017. Attached is a screenshot of what was reported. I don't see anything at the time in question. However, there is a quite reasonable chance that there are other places to look that I am not aware of, and/or I did not search correctly.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill not completed after 3600 seconds. LLCV set back to 15.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 1127 seconds. TC A did not register fill. LLCV set back to 44.0% open.
Raised CP4 to 45% open.
Manually overfilled CP3 from control room at 100% open (@1:35pm local). Took 9 min. to overfill. Raised nominal value to 17% open.
J. Kissel

I've gathered our "bi-weekly" calibration suite of measurements to track the sensing function, ensure all calibration is within reasonable uncertainty, and gather corroborating evidence for a time-dependent detuning spring frequency & Q. Trends of previous data have now confirmed time dependence -- see LHO aLOG 34967. Evan is processing the data and will add this day's suite to the data collection. We will begin analyzing the 7.93 Hz PCAL line that's been in place since the beginning of ER10, using a method outlined in T1700106, and check the time dependence in a much more continuous fashion. My suspicion is that the SRC detuning parameters will change on the same sort of time scale as the optical gain and cavity pole frequency.

Note also that I've grabbed a much longer data set for the broad-band injection, as requested by Shivaraj -- from 22:50:15 UTC to 22:54:20 UTC, roughly 4 minutes.

The data have been saved and committed to:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs
2017-03-21_H1DARM_OLGTF_4to1200Hz_25min.xml
2017-03-21_H1_PCAL2DARMTF_4to1200Hz_8min.xml
2017-03-06_H1_PCAL2DARMTF_BB_5to1000Hz_0p25BW_250avgs_5min.xml

The data have been exported with similar names to the same location in the repo. For time-tracking, this suite took ~38 minutes, from 2017-03-21 22:18 - 22:56 UTC.
J. Kissel

Because the calibration suite requires one to turn OFF all calibration lines before the measurements and back ON after, the time-dependent correction factor computation is temporarily spoiled. In the GDS pipeline, which uses FIR filters, it takes about 2 minutes for the calculation to return to normal functionality and produce sensible results (good! this is what's used to correct h(t)). However, because the front end's version of this calculation (NOT used in any corrections of any astrophysical or control room product) uses IIR filters, it remains polluted until one manually clears the history on all filter banks involved in the process.

Normally, as the ISC_LOCK guardian runs through the lock acquisition sequence, it clears these filter banks' history appropriately. However, the calibration suite configuration is still a manual action. Moral of the story -- I'd forgotten to do this history clearing until about 1 hr into the current observation stretch. The history was cleared at approximately 2017-03-22 00:10 UTC.

Why am I aLOGging it? Because clearing this history does NOT take us out of observation mode. Rightfully so in this case because, again, the front-end calculation is not yet used in any control system or to correct any data stream; it is merely a monitor. I aLOG it so that the oddball behavior shown at the tail end of today's UTC summary page has an explanation (both 20170321 and 20170322 show the effect). To solve this problem in the future, I'm going to create a new state in the ISC_LOCK guardian that does the simple configuration switches necessary so no one forgets.
J. Kissel
On the discussion of "Why Can't LLO Get the Same SNR / Coherence / Uncertainty below 10 Hz for These Sensing Function Measurements?":
It was re-affirmed by Joe on Monday's CAL check-in call that LLO cannot get SNR on 5-10 Hz data points. Two things have been investigated that could be the reason for this:
(1) The L1 DARM Loop Gain is too large ("much" larger than H1) at these frequencies, which suppresses the PCAL and SUS actuator drive signals.
(2) L1's choice of where to apply the optical plant's DC readout DARM offset and the SUS offset that avoids DAC zero-crossing glitching means there are single- vs. double-precision problems in using the very traditional DARM_IN1/DARM_IN2 location for the open loop gain transfer function.
Both investigations are described in LHO aLOG 32061.
They've convinced me that (2) is a small effect, and the major reason for the loss in SNR is the loop gain. However, Evan G. has put together a critique of the DARM loop (see G1700316), which shows that the difference in suppression over 5-10 Hz is only about a factor of 4. I've put in a screen cap of page 4, which shows the suppression.
I attach a whole bunch of supporting material showing the relevant ASDs during the lowest frequency points of both the DARM OLG TF and the PCAL 2 DARM TF:
- DACRequest -- shows that a factor of 4 increase in drive strength would not saturate any stage of the ETMY suspension actuators
- SNR_in_DARM_ERR -- shows the loop suppressed SNR of the excitation
- SNR_in_DELTAL_EXT -- shows the calibrated displacement driven
- SNR_in_OMC_DCPDs -- shows that a factor of 4 increase in drive strength would not saturate the OMC DCPDs
So ... is there something I'm missing?
Attached is a plot showing a comparison of PCal, CAL-DELTAL_EXTERNAL, and GDS for the broad band injection. As expected, GDS agrees better with the PCal injection signal. The code used to make the plot has been added to the SVN:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/PCAL/PcalBroadbandComparison20170321.m
Just to close out the question in Comment #2 above: LLO was indeed able to use LHO-like templates and drastically improve their SNR at low frequency; check out LLO aLOG 32495. Huzzah!
J. Kissel, E. Goetz The processed results for this data set are attached. For context of how this measurement fits in with the rest of measurements taken during ER10 / O2, check out LHO aLOG 35163.
That's 15:15 UTC.
15:20 UTC I also moved SUS ITMY to "SAFE" and took ITMY ISI to "OFFLINE" for CPS Noise investigations.
15:30 UTC Due to impending PCAL work, BRSX is also off. Current state: "VERY_WINDY_NOBRSXY"
18:24 Hugh has finished his work and ITMY has been returned to its nominal state(s)