WP6544
Daniel, Kiwamu, Dave:
New model code was installed for h1calcs and h1omc. This added one Dolphin IPC channel (omc sender, calcs receiver) and two 4 kHz DAQ channels from calcs.
Models were restarted, followed by a DAQ restart.
As a complement to last week's ETMy photos that Sudarshan took (aLog 34980), today I took PCal beam position photos of ETMx. I have sent them to Sudarshan for processing. Stay tuned for results.
See red and blue vs. green. Bumps are at roughly [4, 5, 6, 7, 8] × 12.125 Hz.
When it was really bad (red), it was easily identifiable on the detchar summary page; not so when it was mildly bad (blue).
I ran the coherence tool on these lines. I first tried to find a comb there but failed. Then I tried to find them as single lines. I looked for 6 single lines: 12.125 Hz, 48.5 Hz, 60.625 Hz, 72.75 Hz, 84.875 Hz, and 97 Hz. Two of them were not found in the coherence tool. Here are the results:
12.125Hz: https://ldas-jobs.ligo-wa.caltech.edu/~duo.tao/O2_line_12.125/index.html (Weak results, and the structure does not look like a single line)
48.5Hz: https://ldas-jobs.ligo-wa.caltech.edu/~duo.tao/O2_line_48.5/index.html (Found in four of the weeks, in some EX and EY magnetometers. There are some channels that do not look like a single line. But there seems to be something going on so I showed them.)
60.625Hz: https://ldas-jobs.ligo-wa.caltech.edu/~duo.tao/O2_line_60.625/index.html (Not found)
72.75Hz: https://ldas-jobs.ligo-wa.caltech.edu/~duo.tao/O2_line_72.75/index.html (Not found)
84.875Hz: https://ldas-jobs.ligo-wa.caltech.edu/~duo.tao/O2_line_84.875/index.html (Weak. Only found in one channel, and the structure does not look like a single line)
97Hz: https://ldas-jobs.ligo-wa.caltech.edu/~duo.tao/O2_line_97/index.html (Found in many EX and EY magnetometers)
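The six frequencies searched above are all harmonics of the 12.125 Hz fundamental (n = 1, 4, 5, 6, 7, 8), which is what makes them comb candidates in the first place. A minimal sketch of the candidate-frequency computation (illustrative only, not the coherence tool itself):

```python
# Candidate comb frequencies: harmonics of a 12.125 Hz fundamental.
# The six lines searched above are the harmonics n = 1, 4, 5, 6, 7, 8.
fundamental = 12.125  # Hz

def harmonic_freqs(f0, harmonics):
    """Return the frequencies f0 * n for each harmonic number n."""
    return [round(f0 * n, 3) for n in harmonics]

searched = harmonic_freqs(fundamental, [1, 4, 5, 6, 7, 8])
print(searched)  # [12.125, 48.5, 60.625, 72.75, 84.875, 97.0]
```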
Robert's measurements seem to suggest that this is the resonance of one of the baffles.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35166
15:15 UTC Lowered actuator value to 34% open for Dewar fill this morning.
15:20 UTC I also moved SUS ITMY to "SAFE" and took ITMY ISI to "OFFLINE" for CPS Noise investigations.
15:30 UTC Due to impending PCal work, BRSX is also off. Current state: "VERY_WINDY_NOBRSXY"
18:24 Hugh has finished his work and ITMY has been returned to its nominal state(s)
TITLE: 03/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 70Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 8mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.40 μm/s
QUICK SUMMARY:
14:50 Saw Roto-Rooter coming onto site
14:55 Peter King going into PSL enclosure
TITLE: 03/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY: Observing the entire shift. No issues to report.
LOG:
09:51 UTC Damped PI mode 27 by changing sign of gain
Bubba will escort Apollo to mid Y and then survey mid X
14:26 UTC Chris to mid Y, using snorkel lift to work around roof edge
14:41 UTC Ken to mechanical room to install controller
14:49 UTC Roto-Rooter(?) through gate to see Bubba
14:56 UTC Hanford fire department through gate
See attached.
Temperatures are warming outside, enough so that I am comfortable turning off some of the heating elements in the LVEA. In particular, Zone 2B was turned back this morning. I will continue to monitor temps closely.
Have remained in observing. No issues to report.
TITLE: 03/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 8mph Gusts, 7mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.51 μm/s
QUICK SUMMARY: No issues to report.
TITLE: 03/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: One lockloss. No obvious environmental cause. Quickly recovered. Useism is creeping upward. TCS guardian kicked us out of Observe once. Not much else is going on.
LOG:
23:15 Gerardo back
23:30 Balers done
01:58 CO2 laser lost lock (I think it was CO2Y). Back to Observe a minute later
06:04 Lockloss
06:40 Back to Observe
WP6543, FRS7701 Jonathan and Carlos:
The primary 2FA CDS login server cdsssh was upgraded from Ubuntu 12.04 LTS to Debian 8, because Ubuntu 12.04 is end-of-life and will soon be unsupported.
TITLE: 03/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 16mph Gusts, 13mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY: Been locked for 10.5 hours. Balers should be done by now (Bubba reported that they would be done by 16:30 local). Nothing else to report.
Jim, Dave:
At 15:44:58 UTC (08:44:58 PDT) we received a timing error which lasted only one second. The error was reported by the CNS-II independent GPS receivers at both end stations: both went into the 'Waiting for GPS lock' error state at 15:44:58, stayed there for one second, and then went good. The IRIG-B signals from these receivers are being acquired by the DAQ (and monitored by GDS). The IRIG-B signals for the second prior, the second of the error, and the following two seconds (4 seconds in total) are shown below.
As can be seen, even though EX and EY both reported the error, only EX's IRIG-B is missing during the bad second.
The encoded seconds in the IRIG-B signals are shown in the table below. Note that the GPS signal does not have leap seconds applied, so GPS = UTC + 18 s.
Actual second | EX IRIG-B second | EY IRIG-B second |
15 | 15 | 15 |
16 | missing | 16 |
17 | 16 | 17 |
18 | 18 | 18 |
So EY was sequential through this period. EX emitted second 16 one second late, skipped 17, and resynced at 18.
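The GPS = UTC + 18 s relation above also explains why the error second, 15:44:58 UTC, shows up in the table as second 16 of the minute: the IRIG-B stream encodes GPS time, which is 18 leap seconds ahead of UTC. A minimal sketch of that mapping (assuming the 2017 leap-second count):

```python
# IRIG-B here encodes GPS time, which does not apply leap seconds,
# so (as of 2017) GPS = UTC + 18 s.  The error second, 15:44:58 UTC,
# therefore appears in the IRIG-B stream as second 16 of the minute,
# matching the "missing" row in the table above.
GPS_MINUS_UTC = 18  # leap-second count as of 2017

def utc_sec_to_irigb_sec(utc_second_of_minute):
    """Map a UTC second-of-minute to the GPS second-of-minute encoded in IRIG-B."""
    return (utc_second_of_minute + GPS_MINUS_UTC) % 60

print(utc_sec_to_irigb_sec(58))  # 16
```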
Summary: All problems were in the CNS II GPS channels at LHO. No problems were observed in the Trimble GPS channels at either site, nor in the LLO CNS II channels, with the exception of a change of -80 ns in the LLO Trimble GPS PPSOFFSET a few seconds after the anomaly (see below). It seems that both LHO CNS II clocks simultaneously dropped from 10 to 3 satellites tracked for a single second. There is no channel recording the number of satellites locked by the Trimble clocks, but the RECEIVERMODEs at both sites remained at the highest level of quality, OverDeterminedClock (level 5 for the Trimbles), with no interruption at the time of the anomaly. It is unclear whether the LLO PPSOFFSET is causally related to the LHO event; the lack of other anomalous output from the LLO Trimble clock suggests that it is otherwise performing as intended.
Descriptions of the anomalous plots follow; all anomalous plots are attached.
- Dilution of precision at BOTH LHO CNS II clocks skyrockets to ~100 around the event (nominal values are around 1) (H1:SYS-TIMING_X_GPS_A_DOP, H1:SYS-TIMING_Y_GPS_A_DOP).
- Number of satellites tracked by BOTH LHO CNS II clocks plummets for two seconds from 10 to 3 (H1:SYS-TIMING_X_GPS_A_TRACKSATELLITES, H1:SYS-TIMING_Y_GPS_A_TRACKSATELLITES).
- In the second before the anomaly, both of the LHO CNS II clocks' RECEIVERMODEs went from 3DFix to 2DFix for exactly one second, as evidenced by a change in state from 6 to 5 in their channels' values (H1:SYS-TIMING_X_GPS_A_RECEIVERMODE, H1:SYS-TIMING_Y_GPS_A_RECEIVERMODE).
- The 3D speed spiked right around the anomaly for both LHO CNS clocks (H1:SYS-TIMING_X_GPS_A_SPEED3D, H1:SYS-TIMING_Y_GPS_A_SPEED3D).
- The LHO CNS II clocks' 2D speeds both climb to ~0.1 m/s (obviously fictitious) (H1:SYS-TIMING_X_GPS_A_SPEED2D, H1:SYS-TIMING_Y_GPS_A_SPEED2D).
- The LHO Y-end CNS II clock calculated a drop in elevation of 1.5 m following the anomaly (obviously spurious) (H1:SYS-TIMING_Y_GPS_A_ALTITUDE).
- The LHO X-end CNS II clock thinks it dropped by 25 m following the anomaly! I'm not sure why this is so much more extreme than the Y-end calculated drop (H1:SYS-TIMING_X_GPS_A_ALTITUDE).
- The Livingston corner GPS PPSOFFSET went from its usual value of ~0 +/- 3 ns to -80 ns for a single second at t_anomaly + 3 s (L1:SYS-TIMING_C_GPS_A_PPSOFFSET).
- The GPS error flag for both LHO CNS II clocks came on, of course (H1:SYS-TIMING_Y_GPS_A_ERROR_FLAG, H1:SYS-TIMING_X_GPS_A_ERROR_FLAG).
Using my very limited knowledge of Windows administration, I have attempted to list the events logged on h1ecatc1 from 8:00 to 10:00 AM on Feb. 27, 2017. Attached is a screenshot of what was reported. I don't see anything at the time in question. However, there is a quite reasonable chance that there are other places to look that I am not aware of, and/or that I did not search correctly.
J. Kissel
I've gathered our "bi-weekly" calibration suite of measurements to track the sensing function, ensure the calibration is within reasonable uncertainty, and gather corroborating evidence for a time-dependent detuning spring frequency & Q. Trends of previous data have now confirmed the time dependence -- see LHO aLOG 34967. Evan is processing the data and will add this day's suite to the data collection. We will begin analyzing the 7.93 Hz PCAL line that's been in place since the beginning of ER10, using a method outlined in T1700106, to check the time dependence in a much more continuous fashion. My suspicion is that the SRC detuning parameters will change on the same sort of time scale as the optical gain and cavity pole frequency.
Note also that I've grabbed a much longer data set for the broad-band injection, as requested by Shivaraj -- from 22:50:15 UTC to 22:54:20 UTC, roughly 4 minutes.
The data have been saved and committed to:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs
2017-03-21_H1DARM_OLGTF_4to1200Hz_25min.xml
2017-03-21_H1_PCAL2DARMTF_4to1200Hz_8min.xml
2017-03-06_H1_PCAL2DARMTF_BB_5to1000Hz_0p25BW_250avgs_5min.xml
The data have been exported with similar names to the same location in the repo.
For time-tracking: this suite took ~38 minutes, 2017-03-21 22:18 - 22:56 UTC.
J. Kissel
Because the calibration suite requires one to turn OFF all calibration lines before the measurements and back ON after, the time-dependent correction factor computation is temporarily spoiled. In the GDS pipeline, which uses FIR filters, it takes about 2 minutes for the calculation to return to normal functionality and produce sensible results (good! this is what's used to correct h(t)). However, because the front end's version of this calculation (NOT used in any corrections of any astrophysical or control-room product) uses IIR filters, it remains polluted until one manually clears the history on all filter banks involved in the process. Normally, as the ISC_LOCK guardian runs through the lock-acquisition sequence, it clears these filter banks' histories appropriately. However, the calibration-suite configuration is still a manual action.
Moral of the story -- I'd forgotten to do this history clearing until about 1 hr into the current observation stretch. The history was cleared at approximately 2017-03-22 00:10 UTC.
Why am I aLOGging it? Because clearing this history does NOT take us out of observation mode -- rightfully so in this case, because, again, the front-end calculation is not yet used in any control system or to correct any data stream; it is merely a monitor. I aLOG it so that the oddball behavior shown at the tail end of today's UTC summary page has an explanation (both 20170321 and 20170322 show the effect).
To solve this problem in the future, I'm going to create a new state in the ISC_LOCK guardian that performs the simple configuration switches, so no one forgets.
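The FIR-vs-IIR distinction above can be illustrated with a toy example: an FIR filter forgets a transient as soon as it passes out of the tap window, while an IIR filter's recursive state only decays asymptotically, so it stays "polluted" until the history is explicitly cleared. This is a minimal sketch, not the actual CAL-pipeline filters:

```python
# Toy illustration: FIR memory is finite (the length of the tap vector),
# while IIR memory is recursive and persists until the state is cleared.

def fir_filter(taps, x):
    """Direct-form FIR: each output depends only on the last len(taps) inputs."""
    out = []
    hist = [0.0] * len(taps)
    for s in x:
        hist = [s] + hist[:-1]          # shift the finite input history
        out.append(sum(t * h for t, h in zip(taps, hist)))
    return out

def iir_onepole(a, x, state=0.0):
    """One-pole IIR y[n] = a*y[n-1] + x[n]; 'state' is the filter history."""
    out = []
    y = state
    for s in x:
        y = a * y + s                   # recursive state never fully forgets
        out.append(y)
    return out

impulse = [1.0] + [0.0] * 9
print(fir_filter([0.5, 0.5], impulse))  # nonzero for only 2 samples, then exactly 0
print(iir_onepole(0.9, impulse))        # decays geometrically, never exactly zero
```

Clearing the filter-bank history corresponds to resetting `state` to zero, which is why the front-end IIR computation recovers immediately once that is done.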
J. Kissel
On the discussion of "Why can't LLO get the same SNR / coherence / uncertainty below 10 Hz for these sensing function measurements?": it was re-affirmed by Joe on Monday's CAL check-in call that LLO cannot get SNR on the 5-10 Hz data points. Two things have been investigated as possible reasons:
(1) The L1 DARM loop gain is too large ("much" larger than H1's) at these frequencies, which suppresses the PCAL and SUS actuator drive signals.
(2) L1's choice of location for applying the optical plant's DC-readout DARM offset and the avoiding-DAC-zero-crossing-glitching SUS offset means there are single- vs. double-precision problems in using the very traditional DARM_IN1/DARM_IN2 location for the open-loop-gain transfer function.
Both investigations are described in LHO aLOG 32061. They've convinced me that (2) is a small effect, and the major reason for the loss in SNR is the loop gain. However, Evan G. has put together a critique of the DARM loop (see G1700316), which shows that the difference in suppression between 5-10 Hz is only about a factor of 4. I put in a screen cap of page 4, which shows the suppression. I attach a whole bunch of supporting material showing relevant ASDs for both the lowest-frequency points of the DARM OLG TF and the PCAL 2 DARM TF:
- DACRequest -- shows that a factor of 4 increase in drive strength would not saturate any stage of the ETMY suspension actuators
- SNR_in_DARM_ERR -- shows the loop-suppressed SNR of the excitation
- SNR_in_DELTAL_EXT -- shows the calibrated displacement driven
- SNR_in_OMC_DCPDs -- shows that a factor of 4 increase in drive strength would not saturate the OMC DCPDs
So ... is there something I'm missing?
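The loop-gain argument above can be made quantitative: an excitation injected into the loop shows up in the error signal suppressed by |1/(1+G)|, where G is the open-loop gain at the excitation frequency. A minimal sketch with purely hypothetical gain values (the factor-of-4 gain difference is from the aLOG; the absolute numbers are illustrative, not the actual L1/H1 DARM loop gains):

```python
# Closed-loop suppression of an injected excitation: the excitation appears
# in the error signal attenuated by |1/(1+G)| at the injection frequency.
# Gain magnitudes below are hypothetical, chosen only to show the
# factor-of-4 difference in suppression discussed above.

def suppression(open_loop_gain_mag):
    """Suppression factor 1/(1+|G|), treating G as real for illustration."""
    return 1.0 / (1.0 + open_loop_gain_mag)

g_h1, g_l1 = 100.0, 400.0  # hypothetical loop-gain magnitudes near 7 Hz
ratio = suppression(g_h1) / suppression(g_l1)
print(round(ratio, 2))  # ~3.97, i.e. roughly the factor of 4 quoted above
```

This is consistent with the conclusion that a factor-of-4 stronger drive at LLO would recover H1-like SNR without saturating the actuators or the OMC DCPDs.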
Attached is a plot comparing PCal, CAL-DELTAL_EXTERNAL, and GDS for the broad-band injection. As expected, GDS agrees better with the PCal injection signal. The code used to make the plot has been added to the SVN:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/PCAL/PcalBroadbandComparison20170321.m
Just to close out the question in Comment #2 above: LLO was indeed able to use LHO-like templates and drastically improve their SNR at low frequency; check out LLO aLOG 32495. Huzzah!
J. Kissel, E. Goetz The processed results for this data set are attached. For context of how this measurement fits in with the rest of measurements taken during ER10 / O2, check out LHO aLOG 35163.
BTW: prior to the DAQ restart, the DAQ had been running for 28 days, 0 hours, and 38 minutes.