WP 6559
The PEM group reported possible issues with some of the PEM "Test" channels in one of the AA chassis in the CER (PEM/OAF rack, slots U7 & U6). Channels 23-32 were all verified to be working.
F. Clara, R. McCarthy
Soft closed GV 5 & 7 at 15:25 UTC and re-opened them at 19:00 UTC during the viewport Pcal camera installation. We let the accumulated gas load in the gate annuli burp in.
Took the opportunity to train a couple operators on stroking pneumatic valves: Jeff Bartlett and Nutsinee
Thanks to all for transitioning to laser safe for the installation.
Krishna
I recentered the BRS-Y this morning. The driftmon should end up around +5k counts, which gives ~20k counts of useful range before the next recentering. The drift rate is currently about 180 counts per day (see image), so the next recentering will be in ~3 months. As noted in 34145, recentering was much easier this time because the recentering rod prefers certain positions, which happened to be in an acceptable range this time.
BRS-Y is ready for SEI use.
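For reference, the recentering interval quoted above follows directly from the numbers in this entry; a trivial check (values copied from above):

    # Quick check of the recentering interval implied by the numbers above.
    useful_range = 20_000     # counts of useful driftmon range after recentering
    drift_rate = 180          # counts per day, current drift rate
    days = useful_range / drift_rate
    print(f"~{days:.0f} days (~{days / 30:.1f} months) until the next recentering")
    # -> ~111 days, in the same ballpark as the ~3 months quoted above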
I reset the PSL power watchdogs at 16:56 UTC (9:56 PDT). This closes FAMIS 3644.
J. Oberling, E. Merilh
This morning we swapped the oplev laser for the ETMy oplev, which has been having issues with glitching. The swap went smoothly with zero issues. Old laser SN is 130-1, new laser SN is 194-1. This laser operates at a higher power than the previous laser, so the SUM counts are now ~70k (used to be ~50k); the individual QPD segments are sitting between 16k and 19k counts. This laser will need a few hours to come to thermal equilibrium, so I will assess this afternoon whether or not the glitching has improved; I will keep the work permit open until this has been done.
For those investigating the possibility of these lasers causing a comb in DARM, the laser was off and the power unplugged for ~11 minutes. The laser was shut off and unplugged at 16:14 UTC (9:14 PDT); we plugged it back in and turned it on at 16:25 UTC (9:25 PDT).
Attached are spectrograms (15:00-18:00 UTC vs. 20-22 Hz) of the EY optical lever power sum over a 3-hour period today containing the laser swap, and of a witness magnetometer channel that appeared to indicate on March 14 that a change in laser power strengthened the 0.25-Hz-offset 1-Hz comb at EY. Today's spectrograms, however, don't appear to support that correlation: during the 11-minute period when the optical lever laser is off, the magnetometer spectrogram shows steady lines at 20.25 and 21.25 Hz.

For reference, corresponding 3-hour spectrograms are attached from March 14 that do appear to show the 20.25-Hz and 21.25-Hz teeth appearing right after a power change in the laser at about 17:11 UTC. Similarly, 3-hour spectrograms are attached from March 14 that show the same lines turning on at EX at about 16:07 UTC. Additional EX power sum and magnetometer spectrograms are also attached, to show that those two lines persist through a number of power level changes over an additional 8 hours. In my earlier correlation check, I noted the gross changes in the magnetometer spectra, but did not appreciate that the 0.25-Hz lines were relatively steady.

In summary, those lines strengthened at distinct times on March 14 (roughly 16:07 UTC at EX and 17:11 UTC at EY) that coincide, at least roughly, with power level changes in the optical lever lasers, but the connection is more obscure than I had appreciated and could be a chance coincidence with other maintenance work going on that day. Sigh. Can anyone recall some part of the operation of increasing the optical lever laser powers that day that could have increased coupling of combs into DARM, e.g., tidying up a rack by connecting previously unconnected cables? A shot in the dark, admittedly, but it's quite a coincidence that these lines started up at separate times at EX and EY right after those lasers were turned off (or blocked from shining on the power sum photodiodes) and back on again.

Spectrograms of optical lever power sum and magnetometer channels:
Fig 1: EY power - April 4 - 15:00-18:00 UTC
Fig 2: EY witness magnetometer - Ditto
Fig 3: EY power - March 14 - 15:00-18:00 UTC
Fig 4: EY magnetometer - Ditto
Fig 5: EX power - March 14 - 14:00-17:00 UTC
Fig 6: EX witness magnetometer - Ditto
Fig 7: EX power - March 14 - 17:00-22:00 UTC
Fig 8: EX witness magnetometer - Ditto
Fig 9: EX power - March 15 - 00:00-04:00 UTC
Fig 10: EX witness magnetometer - Ditto
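For anyone who wants to regenerate these spectrograms, a minimal sketch using gwpy is below; the channel, times, and plotting parameters are illustrative and not necessarily what was used for the attached figures:

    # Minimal sketch of a 20-22 Hz spectrogram of the EY oplev power sum
    # around today's laser swap. Assumes gwpy with NDS2/frame access; the
    # settings here are examples only.
    from gwpy.timeseries import TimeSeries

    start, end = "2017-04-04 15:00:00", "2017-04-04 18:00:00"
    chan = "H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ"

    data = TimeSeries.get(chan, start, end)
    spec = data.spectrogram2(fftlength=50, overlap=25) ** (1/2.)  # ASD spectrogram

    plot = spec.crop_frequencies(20, 22).plot(norm="log")
    plot.savefig("ey_oplev_sum_specgram.png")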
The laser continued to glitch after the swap; see the attachment from the 4/5/2017 ETMy oplev summary page. My suspicion is that the VEA temperature was just different enough from the Pcal lab (where we stabilize the lasers before installation) that the operating point of the laser, once installed, was just outside the stable range set in the lab. So during today's commissioning window I went to End Y and slightly increased the laser power, to hopefully return the operating point to within the stable range. Using the Current Mon port on the laser to monitor the power increase:
Preliminary results look promising, so I will let it run overnight and evaluate in the morning whether or not further tweaks to the laser power are necessary.
I have turned the heat off in Zone 3B in the LVEA. I will continue to closely monitor these temperatures.
TITLE: 04/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY:
Peter has transitioned the LVEA to laser safe. Chandra has closed the gate valves. IFO is down. IMC is set to OFFLINE. ISS second loop is off. ISI config is set to SC_OFF_NOBRSXY. Krishna is working at end Y. Filiberto is investigating PEM chassis in CER. Tumbleweed balers are on site. Rotorooter is here to backfill hole. Travis has started prep work for camera install. Ed and Jason are heading to end Y to swap optical lever laser.
TITLE: 04/04 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Travis
SHIFT SUMMARY: one lock loss, relocked and in Observe
LOG:
I noticed a feature in the ITMX GigE camera image when I pulled it up yesterday. There appears to be a very dark line going through the bright features on the left side of the image. After taking a snapshot and adjusting the image, I see what appears to be a dark line at the top and on the right side as well. I calculated the beam shift from the image, which is fraught with error but still a decent ballpark measurement, and came up with a beam shift in -X of 12 mm. It will be interesting to see what the high-res images show.
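For what it's worth, the ballpark calculation described above is roughly of the following form; the plate scale and the synthetic images are placeholders, not the real ITMX camera calibration or data:

    # Sketch of a ballpark beam-shift estimate: intensity-weighted centroid of
    # the camera image, converted to mm with a plate scale. All numbers and
    # images here are synthetic placeholders.
    import numpy as np

    def centroid(image):
        """Intensity-weighted centroid (row, col) of a 2-D image array."""
        rows, cols = np.indices(image.shape)
        total = image.sum()
        return (rows * image).sum() / total, (cols * image).sum() / total

    def gaussian_spot(shape, center, sigma=20.0):
        """Synthetic beam spot, standing in for a real camera frame."""
        rows, cols = np.indices(shape)
        return np.exp(-((rows - center[0])**2 + (cols - center[1])**2) / (2 * sigma**2))

    MM_PER_PIXEL = 0.5                               # placeholder plate scale at the optic
    before = gaussian_spot((480, 640), (240, 320))   # reference frame
    after = gaussian_spot((480, 640), (240, 296))    # spot moved 24 px in -X (columns)

    dx_mm = (centroid(after)[1] - centroid(before)[1]) * MM_PER_PIXEL
    print(f"Apparent beam shift in X: {dx_mm:+.1f} mm")   # -> about -12 mm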
TITLE: 04/04 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 72Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 6mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.49 μm/s
QUICK SUMMARY: locked over 12 hours, range is climbing, currently 71.5Mpc
TITLE: 04/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG:
Quiet shift; one PI (mode 27) needed addressing. Not much else to report.
Here is a comparison between O1's cross correlation noise and that of O2.
It is now more obvious that noise above 60 Hz is worse than that of O1 by a factor of a few everywhere (except for a broad peak at 1.2 kHz, which is gone in O2). The O2 data are from a long lock stretch from last night. I used the data starting at 03/Apr/2017 2:00:00 UTC, with a frequency resolution of 0.1 Hz and 5000 averages (corresponding to a ~7 hour integration).
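As a sanity check on the quoted integration time, assuming standard 50%-overlapping averages (my assumption; the actual windowing may differ):

    # Back-of-the-envelope check of the integration time quoted above,
    # assuming Welch-style 50%-overlapping segments.
    df = 0.1                                    # Hz, frequency resolution
    n_avg = 5000                                # number of averages
    seg = 1.0 / df                              # 10 s per segment
    duration = seg * (1 + (n_avg - 1) * 0.5)    # total span with 50% overlap
    print(f"{duration / 3600:.1f} hours")       # -> ~7.0 hours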
TITLE: 04/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: One lockloss due to EQ midway through the shift. Otherwise, no issues to report.
LOG: See previous aLogs.
GRB alert 22:23 UTC. Begin 1 hour standdown.
This may be a false alarm. Got a call from Anamaria at LLO who noticed that the GPS time reported for this GRB was ~20000 seconds before the alert arrived. I have confirmed that our CAL_INJ_CONTROL MEDM screen reports the same discrepancy for the last alert.
Following up on this earlier observation of a change point in DARM combs on March 14, and on comments added by Evan and Ansel, I tried looking at possible correlations between optical lever laser power levels and combs seen in EX and EY magnetometers. The bottom line is that there are such correlations, as documented in detail in the attached annotated slides with posted spectra at a variety of power levels (as measured by H1:SUS-ETMX/Y_L3_OPLEV_SUM_OUT_DQ). The figures below show EX and EY before/after spectra.

In summary, the spectra show narrow lines before March 14 in combs of ~1-Hz spacing with various offsets (EX worse than EY). During periods when the laser powers were lowered, the spectra were contaminated by broadband noise (motivation for keeping the power high, I believe). When the power was raised higher than it was prior to March 14, the broadband power dropped again, the lines persisted, and they were joined by new lines with 1-Hz spacing offset from integer frequencies by ~0.25 Hz, a comb observed to be present in DARM following March 14. The apparent creation of a new comb in DARM from these lasers may hint that other combs seen for a long time in DARM and in magnetometer spectra may also arise from these lasers. Whether the mechanism is through the optical lever damping, as suggested previously for glitches, or through another coupling, such as ESD power supply contamination from the laser supplies, is not clear to me.

Fig 1: EX magnetometer spectrum before maintenance work on March 14 (Interval "A" on 1st attached slide file)
Fig 2: EY magnetometer spectrum before maintenance work on March 14 (Interval "P" on 1st attached slide file)
Fig 3: EX magnetometer spectrum after maintenance work, on March 15 (Interval "Q" on 1st attached slide file)
Fig 4: EY magnetometer spectrum after maintenance work, on March 15 (Interval "V" on 1st attached slide file)
Attachment 1: Annotated slides showing spectra before/after many change points on March 14/15 of EX laser power
Attachment 2: Annotated slides showing spectra before/after a handful of change points on March 14/15 of EY laser power
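For anyone who wants to repeat this kind of check, below is a minimal sketch that compares the 0.25-Hz-offset comb teeth to the nearby background in a magnetometer spectrum; the channel name, times, and frequency band are examples only, not the exact settings behind the attached figures:

    # Sketch of a check for the 0.25-Hz-offset, 1-Hz-spaced comb in a
    # magnetometer spectrum. Channel, times, and band are examples.
    import numpy as np
    from gwpy.timeseries import TimeSeries

    chan = "H1:PEM-EY_MAG_VEA_FLOOR_X_DQ"          # example witness magnetometer
    data = TimeSeries.get(chan, "2017-03-14 18:00:00", "2017-03-14 19:00:00")
    asd = data.asd(fftlength=100, overlap=50)       # 0.01 Hz resolution

    freqs = asd.frequencies.value
    vals = asd.value
    df = freqs[1] - freqs[0]
    for f in np.arange(20.25, 30.0, 1.0):           # comb teeth between 20 and 30 Hz
        i = int(round((f - freqs[0]) / df))
        background = np.median(vals[i - 50: i + 50])   # crude local background
        print(f"{f:6.2f} Hz: tooth/background = {vals[i] / background:.1f}")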
Heads up on this: Jason plans to swap the ETMY laser tomorrow (see WP 6555), not necessarily because of this study, but it'll be another "before vs. after" data point. Also -- do we suspect a coupling mechanism for this correlation to DARM? The bandwidth of the optical lever damping loops is quite narrow around 0.3-0.5 [Hz], and control is applied only at the penultimate stage, which means it'll get a factor of 1/f^2 above ~5 Hz. Further, it's angular control as well, so you'd need a pretty high angle-to-length coupling coefficient... Did you see any improvement in the combs seen in DARM after Sheila installed a better cutoff filter (see LHO aLOG 34988)?
Following up on Jeff's question, below are spectral comparisons of pre-March 14, the period between March 14 and March 21, and the period afterward. I don't see evidence of a significant change in the comb structure in DARM after the damping loop rolloff adjustment on March 21. Included are zooms around four teeth in the 1-Hz comb on 0.25-Hz offsets from integer frequencies.
Fig 1: 20-60 Hz
Fig 2: Zoom around 20.25 Hz
Fig 3: Zoom around 21.25 Hz
Fig 4: Zoom around 22.25 Hz
Fig 5: Zoom around 23.25 Hz
As Krishna noted in aLog 35160, this machine does not have a lot of extra juice to run this. I should have closed out earlier, but I was watching the diagnostics. Jim turned off the sensor correction so that, in case closing the session glitched things, it would not glitch the ISI/IFO. It did not, and SC has been turned back on.
I've attached about 10 hours of BRS-Y driftmon data around the time of this crash. It looks like this crash was caused by trying to use BRS-Y in a bad range. Even if the data looks temporarily smooth, BRS-Y should not be used if driftmon is below -15k counts.
The close time association with the closing of the remote desktop session was likely just a coincidence. It is still advised not to log in to the machine remotely while BRS-Y is being used for feedforward, unless really necessary.
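For completeness, the usability rule above is trivial to script; a sketch is below, with the driftmon channel name being my guess (please verify before relying on it):

    # Trivial sketch of the usability check implied above: don't use BRS-Y
    # if driftmon is below -15k counts. Channel name is a guess.
    import epics   # pyepics

    DRIFTMON = "H1:ISI-GND_BRS_ETMY_DRIFTMON"   # assumed channel name
    value = epics.caget(DRIFTMON)
    if value is None:
        print("Could not read driftmon channel")
    elif value < -15000:
        print(f"driftmon = {value:.0f} counts: do NOT use BRS-Y for feedforward")
    else:
        print(f"driftmon = {value:.0f} counts: OK to use BRS-Y")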
HAM 6 pressure spiked at around 23:00 UTC time on 3/29.
This looks like about the same time that we were testing the fast shutter. Do we get this kind of pressure spike during a normal lockloss?
We don't typically see spikes like this. We have been monitoring pressures across the site, including HAM 6, in 48 hr increments for a few months now to detect peculiar spikes like this.
Note that we saw several spikes like this before the failure of the OMC which Daniel attributed to "liquid glass": https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=28840
Jim, Dave:
At 15:44:58 UTC (08:44:58 PDT) we received a timing error which only lasted for one second. The error was reported by the CNS-II independent GPS receivers at both end stations: they both went into the 'Waiting for GPS lock' error state at 15:44:58, stayed there for one second, and then went good. The IRIG-B signals from these receivers are being acquired by the DAQ (and monitored by GDS). The IRIG-B signals for the second prior, the second of the error, and the following two seconds (4 seconds in total) are shown below.
As can be seen, even though EX and EY both reported the error, only EX's IRIG-B is missing during the bad second.
The encoded seconds in the IRIG-B are shown in the table below. Note that the GPS signal does not have leap seconds applied, so GPS = UTC +18.
| Actual seconds | EX IRIG-B seconds | EY IRIG-B seconds |
| 15 | 15 | 15 |
| 16 | missing | 16 |
| 17 | 16 | 17 |
| 18 | 18 | 18 |
So EY was sequential through this period. EX slipped: it reported second 16 a second late (during actual second 17), skipped 17 entirely, and resynced at 18.
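For reference, a tiny sketch of the consistency check implied by the table, using the decoded values above (None marks the missing EX second):

    # Consistency check on decoded IRIG-B seconds, using the table above.
    def check_seconds(actual, decoded):
        """Flag missing or slipped seconds in a decoded IRIG-B sequence."""
        for a, d in zip(actual, decoded):
            if d is None:
                print(f"second {a}: IRIG-B missing")
            elif d != a:
                print(f"second {a}: decoded as {d} (slip of {d - a:+d} s)")
            else:
                print(f"second {a}: OK")

    actual = [15, 16, 17, 18]
    ex = [15, None, 16, 18]   # EX IRIG-B, from the table
    ey = [15, 16, 17, 18]     # EY IRIG-B, from the table

    print("EX:"); check_seconds(actual, ex)
    print("EY:"); check_seconds(actual, ey)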
Summary: All problems were in the CNS II GPS channels at LHO. No problems were observed in the Trimble GPS channels at either site, nor in the LLO CNS II channels, with the exception of a change of -80 ns in the LLO Trimble GPS PPSOFFSET a few seconds after the anomaly (see below). It seems that both LHO CNS II clocks simultaneously dropped from 10 to 3 satellites tracked for a single second. There is no channel recording the number of satellites locked by the Trimble clocks, but the RECEIVERMODEs at both sites remained at the highest level of quality, OverDeterminedClock (level 5 for the Trimbles), with no interruption at the time of the anomaly. It is unclear whether the LLO PPSOFFSET is causally related to the LHO event; the lack of other anomalous output from the LLO Trimble clock suggests that it is otherwise performing as intended.

Descriptions of the anomalous plots are below; all anomalous plots are attached.
- Dilution of precision at BOTH LHO CNS II clocks skyrockets to 100 around the event (nominal values around 1) (H1:SYS-TIMING_X_GPS_A_DOP, H1:SYS-TIMING_Y_GPS_A_DOP).
- Number of satellites tracked by BOTH LHO CNS II clocks plummets for two seconds from 10 to 3 (H1:SYS-TIMING_X_GPS_A_TRACKSATELLITES, H1:SYS-TIMING_Y_GPS_A_TRACKSATELLITES).
- In the second before the anomaly, both of the LHO CNS II clocks' RECEIVERMODEs went from 3DFix to 2DFix for exactly one second, as evidenced by a change in state from 6 to 5 in their channels' values (H1:SYS-TIMING_X_GPS_A_RECEIVERMODE, H1:SYS-TIMING_Y_GPS_A_RECEIVERMODE).
- The 3D speed also spiked right around the anomaly for both LHO CNS clocks (H1:SYS-TIMING_X_GPS_A_SPEED3D, H1:SYS-TIMING_Y_GPS_A_SPEED3D).
- The LHO CNS II clocks' 2D speeds both climb up to ~0.1 m/s (obviously fictitious) (H1:SYS-TIMING_X_GPS_A_SPEED2D, H1:SYS-TIMING_Y_GPS_A_SPEED2D).
- The LHO Y-End CNS II clock calculated a drop in elevation of 1.5 m following the anomaly (obviously spurious) (H1:SYS-TIMING_Y_GPS_A_ALTITUDE).
- The LHO X-End CNS II clock thinks it dropped by 25 m following the anomaly! I'm not sure why this is so much more extreme than the Y-End calculated drop (H1:SYS-TIMING_X_GPS_A_ALTITUDE).
- The Livingston corner GPS PPSOFFSET went from its usual value of ~0 +/- 3 ns to -80 ns for a single second at t_anomaly + 3 s (L1:SYS-TIMING_C_GPS_A_PPSOFFSET).
- The GPS error flag for both LHO CNS II clocks came on, of course (H1:SYS-TIMING_Y_GPS_A_ERROR_FLAG, H1:SYS-TIMING_X_GPS_A_ERROR_FLAG).
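If anyone wants to reproduce these trends, a minimal sketch for pulling a couple of the channels named above is below; the time window is a placeholder around the 15:44:58 UTC glitch (assuming the April 4 date of this shift page, so adjust as needed):

    # Minimal sketch for pulling a few of the timing diagnostics named above
    # around the anomaly. Window and channel subset are examples.
    from gwpy.timeseries import TimeSeriesDict

    channels = [
        "H1:SYS-TIMING_X_GPS_A_DOP",
        "H1:SYS-TIMING_Y_GPS_A_DOP",
        "H1:SYS-TIMING_X_GPS_A_TRACKSATELLITES",
        "H1:SYS-TIMING_Y_GPS_A_TRACKSATELLITES",
    ]
    data = TimeSeriesDict.get(channels, "2017-04-04 15:43:00", "2017-04-04 15:47:00")
    for name, series in data.items():
        series.plot().savefig(name.replace(":", "_") + ".png")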
Using my very limited knowledge of Windows administration, I have attempted to list the events logged on h1ecatc1 from 8:00 - 10:00 AM on Feb. 27 2017. Attached is a screenshot of what was reported. I don't see anything at the time in question. However, there is a quite reasonable chance that there are other places to look that I am not aware of and/or that I did not search correctly.
J. Kissel

I've gathered our "bi-weekly" calibration suite of measurements to track the sensing function, ensure all calibration is within reasonable uncertainty, and gather corroborating evidence for a time-dependent detuning spring frequency & Q. Trends of previous data have now confirmed the time dependence -- see LHO aLOG 34967. Evan is processing the data and will add this day's suite to the data collection. We will begin analyzing the 7.93 Hz PCAL line that has been in place since the beginning of ER10, using a method outlined in T1700106, and check the time dependence in a much more continuous fashion. My suspicion is that the SRC detuning parameters will change on the same sort of time scale as the optical gain and cavity pole frequency.

Note also that I've grabbed a much longer data set for the broad-band injection, as requested by Shivaraj -- from 22:50:15 UTC to 22:54:20 UTC, roughly 4 minutes.

The data have been saved and committed to /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs:
2017-03-21_H1DARM_OLGTF_4to1200Hz_25min.xml
2017-03-21_H1_PCAL2DARMTF_4to1200Hz_8min.xml
2017-03-06_H1_PCAL2DARMTF_BB_5to1000Hz_0p25BW_250avgs_5min.xml

The data have been exported with similar names to the same location in the repo. For time-tracking, this suite took ~38 minutes on 2017-03-21, 22:18 - 22:56 UTC.
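For the continuous tracking mentioned above, the core operation is just demodulating DARM and the PCAL readback at the 7.93 Hz line and taking their ratio. A minimal sketch of that kind of single-line demodulation follows; the channel names, GPS stretch, and averaging are illustrative, and this is not the T1700106 code:

    # Sketch of single-line demodulation at the 7.93 Hz PCAL line, of the kind
    # used to track time-dependent sensing parameters. All settings are examples.
    import numpy as np
    from gwpy.timeseries import TimeSeries

    f_line = 7.93                              # Hz, PCAL calibration line
    start, end = 1174089617, 1174090217        # placeholder ~10-minute GPS stretch

    darm = TimeSeries.get("H1:CAL-DELTAL_EXTERNAL_DQ", start, end)
    pcal = TimeSeries.get("H1:CAL-PCALY_RX_PD_OUT_DQ", start, end)

    def demod(ts, f):
        """Complex amplitude of ts at frequency f (simple boxcar average)."""
        t = ts.times.value - ts.times.value[0]
        lo = np.exp(-2j * np.pi * f * t)
        return 2 * np.mean(ts.value * lo)

    ratio = demod(darm, f_line) / demod(pcal, f_line)
    print(f"DARM/PCAL at {f_line} Hz: magnitude {abs(ratio):.3g}, "
          f"phase {np.degrees(np.angle(ratio)):.1f} deg")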
J. Kissel

Because the calibration suite requires one to turn OFF all calibration lines before the measurements and then back ON after, the time-dependent correction factor computation is temporarily spoiled. In the GDS pipeline, which uses FIR filters, it takes about 2 minutes for the calculation to return to normal functionality and produce sensible results (good! this is what's used to correct h(t)). However, because the front end's version of this calculation (NOT used in any corrections of any astrophysical or control-room product) uses IIR filters, it remains polluted until one manually clears the history on all filter banks involved in the process. Normally, as the ISC_LOCK guardian runs through the lock acquisition sequence, it clears these filter banks' histories appropriately. However, the calibration suite configuration is still a manual action.

Moral of the story -- I'd forgotten to do this history clearing until about 1 hr into the current observation stretch. The history was cleared at approximately 2017-03-22 00:10 UTC.

Why am I aLOGging it? Because clearing this history does NOT take us out of observation mode. Rightfully so in this case, because, again, the front-end calculation is not yet used in any control system or to correct any data stream; it is merely a monitor. I just aLOG it so that the oddball behavior shown at the tail end of today's UTC summary page has an explanation (both 20170321 and 20170322 show the effect).

To solve this problem in the future, I'm going to create a new state in the ISC_LOCK guardian that does the simple configuration switches necessary so no one forgets.
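As a placeholder for what that guardian state might call, here is a sketch of the history-clearing step; the filter-bank names are placeholders, and the RSET write is my understanding of what the MEDM "CLEAR HISTORY" button does, to be verified:

    # Sketch of a helper to clear front-end CAL filter-bank histories after the
    # calibration-line switching. Bank names are placeholders; writing 2 to a
    # filter module's _RSET channel is assumed to mirror "CLEAR HISTORY".
    import epics   # pyepics

    CAL_BANKS = [
        "H1:CAL-CS_TDEP_EXAMPLE_BANK_1",   # placeholder filter bank names
        "H1:CAL-CS_TDEP_EXAMPLE_BANK_2",
    ]

    for bank in CAL_BANKS:
        epics.caput(bank + "_RSET", 2)     # clear filter history
        print(f"cleared history on {bank}")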
J. Kissel
On the discussion of "Why Can't LLO Get the Same SNR / Coherence / Uncertainty below 10 Hz for These Sensing Function Measurements?"
It was re-affirmed by Joe on Monday's CAL check-in call that LLO cannot get SNR on 5- 10 Hz data points. There are two things that have been investigated that could be the reason for this:
(1) The L1 DARM Loop Gain is too large ("much" larger than H1) at these frequencies, which suppresses the PCAL and SUS actuator drive signals.
(2) L1's choice of location for applying the optical plant's DC readout DARM offset and the SUS offset used to avoid DAC zero-crossing glitching means there are single- vs. double-precision problems in using the very traditional DARM_IN1/DARM_IN2 location for the open loop gain transfer function.
Both investigations are described in LHO aLOG 32061.
They've convinced me that (2) is a small effect, and that the major reason for the loss in SNR is the loop gain. However, Evan G. has put together a critique of the DARM loop (see G1700316), which shows that the difference in suppression between 5 and 10 Hz is only about a factor of 4 (a toy version of this scaling is sketched at the end of this entry). I've put a screen cap of page 4, which shows the suppression.
I attach a whole bunch of supporting material that shows relevant ASDs for both during the lowest frequency points of the DARM OLG TF and the PCAL 2 DARM TF:
- DACRequest -- shows that a factor of 4 increase in drive strength would not saturate any stage of the ETMY suspension actuators
- SNR_in_DARM_ERR -- shows the loop suppressed SNR of the excitation
- SNR_in_DELTAL_EXT -- shows the calibrated displacement driven
- SNR_in_OMC_DCPDs -- shows that a factor of 4 increase in drive strength would not saturate the OMC DCPDs
So ... is there something I'm missing?
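To make the factor-of-4 argument concrete, here is the toy scaling I have in mind; the loop-gain numbers are made up for illustration, not taken from the actual DARM loop model:

    # Illustrative scaling only (numbers made up): an injected line measured at
    # an in-loop point is suppressed by |1 + G|, so against a noise floor that
    # is not similarly suppressed (e.g. sensing or numerical noise) the SNR
    # goes as 1/|1 + G| for a fixed drive amplitude.
    G_h1 = 10.0              # illustrative |loop gain| at 5-10 Hz for H1
    G_l1 = 4 * G_h1          # L1 suppression ~4x larger (per G1700316)

    snr_ratio = (1 + G_h1) / (1 + G_l1)
    print(f"SNR ratio L1/H1 for equal drive: {snr_ratio:.2f}")
    # -> ~0.27; recoverable by driving ~4x harder, which (per the attached
    #    DACRequest and OMC DCPD plots) would not saturate anything.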
Attached is a plot showing a comparison of PCal, CAL-DELTAL_EXTERNAL, and GDS for the broad band injection. As expected, GDS agrees better with the PCal injection signal. The code used to make the plot has been added to the SVN:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/PCAL/PcalBroadbandComparison20170321.m
Just to close out the question in Comment #2 above: LLO was indeed able to use LHO-like templates and drastically improve their SNR at low frequency; check out LLO aLOG 32495. Huzzah!
J. Kissel, E. Goetz

The processed results for this data set are attached. For context on how this measurement fits in with the rest of the measurements taken during ER10 / O2, check out LHO aLOG 35163.