Lowered CP3 actuator setting to 14% open after Dewar was filled this morning.
TITLE: 03/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.26 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY:
TITLE: 03/14 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: Quiet shift.
LOG: Nothing to report. Not even a peep from a PI.
TITLE: 03/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Nice quiet shift with H1 riding through a 6.0 India quake. H1 has been locked for over 24 hrs. A 5.9 Indonesia quake was expected to shake us at 6:53 UTC, but we have not seen any local evidence of it yet (as of 7:00 UTC).
TITLE: 03/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 68Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
Overall, the useism trends have slowly been getting quieter over the last 24 hrs, and winds are low as well (it's currently raining out).
QUICK SUMMARY:
Bartlett handed off an OBSERVING H1 (currently over 17.5hr of lock).
Noticed a User Message on the IFO Guardian Node (which is part of the Intent Bit): a flag about an SPM Diff for the HAM2 HEPI (Time Ramp values of 30 sec [vs 0 sec] for the horizontal & vertical channels). I reckon these could be taken back to 0 sec, but I don't want to do that while in OBSERVING.
Shift Summary: Ran A2L DTT check – Pitch is OK, Yaw is a bit elevated. The tumbleweed balers are starting to clear the Y-Arm today ahead of the N2 delivery to CP-7 on Tuesday.
There is a vehicle stuck in the sand approximately 500 meters down from End-X, near the beam tube. Bubba is investigating.
Good Observing shift. Environment is good. No issues to report.
Activity Log: Time - UTC (PT)
Around 10:30 AM local time, while inspecting the arm roads for tumbleweeds, John and I found a GSA vehicle stuck ~500 meters south of the Y End station and ~50 meters east of the access road. I have tracked the vehicle by license plate number to the fleet manager, and she is contacting the manager that the vehicle is assigned to. I told her that in the future, anyone entering the site should: 1. enter by the gate and not the desert, and 2. notify the control room when on site. I also asked that the vehicle be removed tomorrow during our maintenance period.
The manager of GWS has contacted me and agreed to remove the vehicle tomorrow during our maintenance period. He indicated that they did not think anyone was on site yesterday, which is why they chose to enter from the desert. I informed him that there is someone here 24/7 and to please inform the control room of their intentions henceforth.
GWS = Ground Water Services - ie well testing.
I distinctly remember the X ARM.....
model restarts logged for Sat 11/Mar/2017* No restarts reported
model restarts logged for Sat 11/Mar/2017 No restarts reported
model restarts logged for Fri 10/Mar/2017 No restarts reported
Note that the reporting code mistakenly reported Sun as Sat in this morning's report. This is because LHO transitioned from PST to PDT at 02:00 Sunday. My code runs at 5 minutes past midnight and goes back a period of 24 hours to get the previous day's date. When it ran at 00:05 PDT this morning, going back 24 hours (convert to GPS, subtract 24*60*60 seconds) effectively went back 25 wall-clock hours, to 23:05 Sat 11th. Other uses of relative time spanning this period may have similar surprises.
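As an illustration only (this is not the actual reporting code), the sketch below reproduces the effect with Python's zoneinfo: stepping back a fixed 86400 seconds in absolute time from just after local midnight lands 25 wall-clock hours earlier, on the previous calendar date's evening. The exact run date and the zoneinfo usage are assumptions.

```python
# Minimal sketch (not the site's reporting code) of how a fixed 24*60*60 s
# step behaves across the 2017-03-12 PST->PDT transition.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

pacific = ZoneInfo("America/Los_Angeles")

# Assumed report generation time: 00:05 local (PDT) on Monday 13 Mar 2017.
run_local = datetime(2017, 3, 13, 0, 5, tzinfo=pacific)

# GPS-style subtraction: work in absolute time and step back 86400 seconds.
run_utc = run_local.astimezone(timezone.utc)
target_utc = run_utc - timedelta(seconds=24 * 60 * 60)

# Converting back to local time lands 25 wall-clock hours earlier:
# 2017-03-11 23:05:00-08:00, i.e. Saturday, not Sunday.
print(target_utc.astimezone(pacific))
```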
Summary of the DQ shift from Thursday 9th March to Sunday 12th March (inclusive), click here for full report:
* The DQ Shift started with a duty cycle of 76%, which increased to 86% and then 95%, and on Sunday dropped back down to 60% due to extreme weather conditions of high wind and microseism ground motion.
* Range sensitivity was mainly around 65 Mpc, with the exception of Friday, which sat around 60 Mpc most of the time due to strong winds.
* Weather conditions have had an impact on sensitivity and range during this DQ shift.
* Not many locklosses; some were due to a PI ringing up, one was preceded by a soft saturation of H1:LSC-X_ARM_CTRL, and others were caused by high winds and microseism ground motion.
* The 'a2l' script has been run quite a few times during this shift due to the common presence of low-frequency noise (10-20 Hz), which mostly had no impact on range sensitivity.
* A few earthquakes occurred, but they did not cause much of a problem.
* IMPORTANT: Driving PI mode 23 (for instance when applying damping filters) caused glitchy narrow lines at 14.5, 64.0, 78.5 and 142.5 Hz, which dominated the Omicron glitchgram with a 60-80 Hz band of high-SNR glitches. Commissioners have noticed this and put in place a practice of not damping this mode anymore unless required for lock acquisition.
* IMPORTANT: The Gold Star Omicron glitch for the whole of Friday was a blip glitch coincident with all quadrants of H1:SUS-ETMY_L2_NOISEMON, and the highest New SNR glitch for Sunday reported by PyCBC Live 'short' was of this type too. This relates to the alog entry by Miriam, Hunter, and Andy, where evidence was reported that 'Some blip glitches may be due to ETMY L2 coil drive'.
* Observed high SNR glitches caused by scattering of SUS-SRM and SUS-PRM
* Increased limits of the X and Y arm LSC tidal HEPI off load for tidal control.
* PSL tripped with the standard Flow-1 error, which was caused by a drop in flow rate in head 3. This was quickly resolved.
* On Sunday, high winds and microseism caused repeated tidal alarms ("Tidal X/Y error") as the tidal arm servos were railing at their limits (solved by increasing their range); this had an impact on lock acquisition.
* On Sunday the 'a2l' script ran with errors due to execution permissions, leaving the gains at values worse than when the script started, which increased low-frequency noise. The permissions issue will be looked into soon, and the script may be modified so that when errors occur the gain values are reverted to the ones at the start (see the sketch after this list).
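As a purely illustrative sketch of that suggested safeguard (not the actual 'a2l' script), the snippet below snapshots the gain channels before the measurement and restores them if anything fails. The channel names and the pyepics caget/caput interface are assumptions for the example.

```python
# Hypothetical rollback wrapper for an a2l-style measurement: record the
# current gains, run the measurement, and restore the recorded gains if
# any error is raised. Channel names below are placeholders, not verified.
import epics  # pyepics

GAIN_CHANNELS = [
    "H1:SUS-ETMY_L2_DRIVEALIGN_P2L_GAIN",  # example channel name only
    "H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_GAIN",  # example channel name only
]

def run_with_gain_rollback(run_measurement):
    """Run the measurement; on any exception, revert the gains and re-raise."""
    snapshot = {ch: epics.caget(ch) for ch in GAIN_CHANNELS}
    try:
        run_measurement()
    except Exception:
        for ch, value in snapshot.items():
            epics.caput(ch, value)  # put back the pre-run value
        raise
```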
In Observing for past 12 hours. Environmental conditions are good. No issues or concerns at this time.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 2192 seconds. TC B did not register fill. LLCV set back to 19.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 2276 seconds. TC A did not register fill. LLCV set back to 41.0% open.
Increased each by 1%. CP3 now 20% open and CP4 42% open.
Reviewed the work permits for Tuesday's maintenance window
- The PSL will be down for most of the window
- Expecting 2 Nitrogen deliveries on Tuesday
- There will be tumbleweed baling on Monday & Tuesday
Tumbleweed baling Monday along the Y Arm
Mag 5.3 Earthquake near Ovalle, Chile
Seen on USGS, Seismon, and Terramon
USGS - Mag 5.3 EQ at 15:05 UTC
Terramon - Mag 5.1 EQ at 15:00 UTC, Predicted R-Wave of 0.822um/s arrival 15:52
Rise in the BLRMS around 15:17 UTC; Primary microseism peak of 0.1um/s
No apparent effect on H1
Miriam, Hunter, Andy

A subset of blip glitches appear to be due to a glitch in the ETMY L2 coil driver chain. We measured the transfer function from the ETMY L2 MASTER channel to the NOISEMON channel (specifically, for the LR quadrant). We used this to subtract the drive signal out of the noisemon, so what remains would be any glitches in the coil drive chain itself (and not just feedback from DARM). The subtraction works very well, as seen in plot 1, with the noise floor a factor of 100 below the signal from about 4 to 800 Hz.

We identified some blip glitches from Feb 11 and 12 as well as Mar 6 and 7. Some of the Omega scans of the raw noisemon signals look suspicious, so we performed the subtraction. The noisemons seem to have an analog saturation limit at +/- 22,000 counts, so we looked for cases where the noisemon signal is clearly below this. In some cases, there was nothing seen in the noisemon after subtraction, or what remained was small and seemed like it might be due to a soft saturation or nonlinearity in the noisemon. However, we have identified at least three times where there is a strong residual. These are the second through fourth plots.

We now plan to automate this process to look at many more blips and check all test mass L2 coils in all quadrants.
In case someone wants to know, the times we report here are:
1170833873.5
1170934017
1170975288.38
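As an illustrative sketch only (not the authors' actual code), the following shows one way to do this kind of subtraction: estimate the MASTER-to-NOISEMON transfer function, apply it to the drive signal, and subtract the predicted signal from the noisemon so that residual coil-driver glitches stand out. The function name, segment length, and pre-fetched array inputs are assumptions.

```python
# Sketch of transfer-function subtraction: remove the predicted drive
# contribution from the noisemon so residual coil-driver glitches remain.
# 'drive' and 'noisemon' are assumed to be equal-length numpy arrays
# sampled at the same rate fs.
import numpy as np
from scipy.signal import csd, welch

def subtract_drive(drive, noisemon, fs, nperseg=16384):
    """Return the noisemon residual after subtracting the predicted drive."""
    # Transfer function estimate H(f) = P_drive,noisemon(f) / P_drive,drive(f)
    f, p_dn = csd(drive, noisemon, fs=fs, nperseg=nperseg)
    _, p_dd = welch(drive, fs=fs, nperseg=nperseg)
    tf = p_dn / p_dd

    # Apply H(f) to the drive in the frequency domain and invert.
    n = len(drive)
    drive_fft = np.fft.rfft(drive)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    tf_interp = np.interp(freqs, f, tf.real) + 1j * np.interp(freqs, f, tf.imag)
    predicted = np.fft.irfft(drive_fft * tf_interp, n=n)

    return noisemon - predicted
```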
I have noticed similarly caused glitches on the 10th March, in particular for the highest SNR Omicron glitch for the day:

Looking at the OmegaScan of this glitch in H(t) and then the highest SNR coincident channels which are all the quadrants of H1:SUS-ETMY_L2_NOISEMON:


Hi Borja,
could you point us to the link to those omega scans? I would like to see the time series plots to check if the noisemon channels are saturating (we saw that sometimes they look like that in the spectrogram when it saturates).
I am also going to look into the blip glitches I got for March 10 to see if I find more of those (although I won't have glitches with as high an SNR as the one you posted).
Thanks!
Hi Miriam,
The above OmegaScan can be found here
Also, I noticed that yesterday the highest New SNR glitch for the whole day reported by PyCBC Live 'short' was of this type as well. The OmegaScan for this one can be found here.
Hope it helps!
Hi Miriam, Borja,
While following up on a GraceDB trigger, I looked at several glitches from March 1 which seem to match those that Borja posted. The omegascans are here, in case these are also of interest to you.
Hi,
Borja, in the first omega scan you sent, the noisemon channels are indeed saturated. In that case it is difficult to tell whether that is the reason the spectrogram looks like that, or whether it is indeed a glitch in the coil drive. Once Andy has a more final version of his code, we can check on that. In the second omega scan, the noisemon channels look just like the blip glitch does in the calib_strain channel, which means the blip was probably already in the DARM loop and the noisemon channels are just hearing it. Notice also that, besides the PyCBC_Live 'short', we have a version of PyCBC_Live dedicated specifically to finding blip glitches (see aLog 34257), so at some point we will be looking into times coming from there (I will keep in mind to look into the March 10 list).
Paul, those omega scans do not quite look like what we are looking for. We did look into some blip glitches where the noisemon channels looked like what you sent and we did not find any evidence for glitches in the coil drive. But thanks for your omega scans, I will be checking those times when Andy has a final version of the subtraction code.