TITLE: 07/07 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 143Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
While using the Carleton linefinder tool (which Daniel Nykamp is further developing) to follow up some lines for stochastic searches, I came across loud, broadband coherence between H1:IMC-F_OUT_DQ and CAL-DELTAL_EXTERNAL; see figure 1 (produced by FScan and provided via the linefinder tool).
Andrew Lundgren confirmed the same coherence is also visible with GDS-CALIB_STRAIN_CLEAN, and the noise source was identified as jitter, see: https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=361570
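For anyone wanting to reproduce this kind of check offline, here is a minimal gwpy sketch of a coherence measurement between the two channels named above. The GPS start time, duration, and FFT settings are placeholders (not the ones used by FScan or the linefinder), and the _DQ suffix on the DELTAL channel is an assumption.

from gwpy.timeseries import TimeSeries

# Placeholder GPS start time and duration -- substitute the stretch of interest.
start, duration = 1372300000, 600

imcf = TimeSeries.get('H1:IMC-F_OUT_DQ', start, start + duration)
darm = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', start, start + duration)

# Coherence spectrum; fftlength/overlap are illustrative choices.
coh = imcf.coherence(darm, fftlength=10, overlap=5)
plot = coh.plot()
plot.savefig('imcf_deltal_coherence.png')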
Lockloss after 46h25m in NLN. GPS 1372737707.
No obvious cause; DCPD saturation tag. DARM was the first loop to change, see attached plot.
NOISE_CLEAN reloaded as requested in 71124.
Back to NLN and Observing at 06:24 UTC.
This lockloss has once again rung up the violin modes, 71063. 1h25m in OMC_WHITENING. ITMY was slowest to damp.
The last lock we had with low violins was the 42hr lock after Tuesday maintenance, 06/27 21 UTC to 06/29 15 UTC. We should search for things that changed during that time:
All the modes (both the 500 Hz and kHz modes) have been damping down nicely in the last 6 hrs or so (EY20 is still higher than expected and we don't have a damping setting for it - we have tried finding one a few times but it needs more effort).
Daniel and Sheila note that the H1:FEC-8_ADC_OVERFLOW_0_{12,13} channels date from before the ADC was updated, so the real OMC channels are not overflowing; this is just a relic, but it can be used to see the OMC DCPDs getting closer to their saturation level. The same thing can equally be seen using H1:OMC-DCPD_{A,B}_WINDOW_{MIN,MAX}.
We checked that OMC gain settings were not changed on the 29th June.
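As a quick illustration of that window-channel check, here is a sketch of pulling the OMC DCPD window channels with cdsutils and printing their extremes. The ten-minute lookback and the printout are arbitrary choices, and the values would need to be compared against whatever the current DCPD ADC range actually is.

import cdsutils

# Trailing 10 minutes of the OMC DCPD window (min/max) channels.
for pd in ('A', 'B'):
    for ext in ('MIN', 'MAX'):
        chan = 'H1:OMC-DCPD_%s_WINDOW_%s' % (pd, ext)
        buf = cdsutils.getdata(chan, -600)
        if buf is not None:
            print('%s: min %g, max %g' % (chan, buf.data.min(), buf.data.max()))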
Since Dave needed us to go out of Observe for a few minutes to clear some memory so the CW hardware injections wouldn't stop overnight, I thought about changing the *way* in which the NonSENS noise estimate is turned off. However, I then realized that I wasn't 100% sure that it would work, so I backed out my changes. Unfortunately, I didn't get the guardian code reverted and reloaded before we went back to Observing, so I have asked the operators (currently Camilla and Oli) to reload the NOISE_CLEAN guardian next time we're out of Observing.
I had been thinking about whether there was a nice way to have the output of the NonSENS noise estimate be saved without actually sending a non-zero signal to the calibration pipeline, since the noise subtraction is currently turned off (and has been since we lowered our laser power). I had thought about doing this by turning off the output of the filter bank and having the gain be non-zero. But then I realized that I'm not 100% sure where NOISE_EST_DBL_DQ is saved from, and whether doing that would make NOISE_EST_DBL_DQ become non-zero, which would effectively turn the subtraction on (which I don't want). Since I was trying to be speedy, I just backed out my change and will spend some more time thinking about how I want to do this.
However, since I had set the NOISE_CLEAN guardian to "turn off" the noise estimate in the way I wasn't sure would work and had loaded it, we now need to reload the old version of the NOISE_CLEAN guardian. The reverted code is back in place and ready, but Dave was done and we went back to Observe before I could load it. If we don't reload it, then the next time we lock the operator will see SDF diffs on the OAF model. Please do not accept those OAF SDF diffs.
If we get to NomLowNoise and there is an SDF diff of OAF-NOISE_WHITENING_GAIN = 1, then reload the NOISE_CLEAN guardian, request it to DOWN, then request it to SUBTRACTING_NOISE. This should leave the output switch ON and the GAIN = 0.
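For reference, here is a minimal ezca-style sketch of the settings that SUBTRACTING_NOISE should end up restoring, written the way a Guardian state would do it. The filter-bank name OAF-NOISE_WHITENING is inferred from the SDF channel above; the actual NOISE_CLEAN code may do this differently.

# Hedged sketch only, not the actual NOISE_CLEAN code.
# Assumes the filter bank behind the SDF diff is H1:OAF-NOISE_WHITENING.
ezca.switch('OAF-NOISE_WHITENING', 'OUTPUT', 'ON')  # output switch ON
ezca['OAF-NOISE_WHITENING_GAIN'] = 0                # gain 0, so nothing is sent to the calibration pipeline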
Camilla, Dave:
WP11290. To free up memory on h1hwinj1 we restarted the psinject process on h1hwinj1. This in turn took H1 out of observe due to gain ramping of the INJ_CW filter module.
Before the restart h1hwinj1 had 2GB memory free, which would have lasted only until 4am PDT tomorrow (Friday). After the restart it now has 14GB available, which should take us through to late Sunday. A second restart will be needed over the weekend, and the memory leak will be fixed next Tuesday.
17:04 PDT Out of observe, code restart
17:11 PDT Back in observe
At 15:13 the IOC which serves the DTS environment EPICS channels stopped running, causing the EDC to go red with a disconnect count of 10. This was a delayed reaction to the loss of network connection between CDS and the GC network in the DTS room of the H2 building.
The timeline is:
13:13 PDT network connection went down, all DTS EPICS channels froze at their last values
14:00 PDT Network was restored, but IOC still had frozen channels
15:13 PDT The cdsioc0 systemd process which maintains the SSH-Tunnel restarted, establishing a good tunnel but causing the IOC to crash in the process
15:58 PDT I restarted the IOC, all the channels became available and the EDC went GREEN
TITLE: 07/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 8mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Locked for 41:40 and Observing.
Planning to come out of Observing around 5pm to restart CW injection system.
TITLE: 07/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 11mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
There was an internet outage, alogged here: alog 71112; it was quickly resolved.
There was an issue with H1:CDS-DTS_ENV_TEMPERATURE_DEGF that was caused by the outage and found later. Dave has since fixed this.
Current IFO Status:
Locked and Observing for 41 hours
A target-of-opportunity restart of the CW injection system, which will drop us out of Observing, is scheduled for 5pm tonight.
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:53 | FAC | Karen | MY | - | Technical cleaning | 16:10 |
| 14:54 | FAC | Kim | MX | - | Technical cleaning | 16:10 |
| 16:23 | FAC | Randy | EX & EY | N | getting measurements for brackets | 17:08 |
| 16:38 | VAC | Janos | End X | N | Checking VAC System at EX | 16:59 |
| 17:50 | Tour | Cassidy & Co | Ctrl RM | N | 2 tour groups coming into the control room. | 19:20 |
| 18:17 | FAC | Mitch | EX & EY | N | FAMIS HEPI Pump Checks | 20:17 |
| 23:01 | CDS | Dave B | H2 | N | Restart H1 EDC connections. | 01:01 |
| Location | Level (inches) | Difference from last reported levels | Drip Pans | Leaks |
|---|---|---|---|---|
| CS | 5 9/16 | | Clean | No unaddressed puddles |
| EX | 7 3/8 | +1/16 | Clean | No unaddressed puddles |
| EY | 8 3/8 | -1/16 | Clean | No unaddressed puddles |
EY is running smoothly, holding at -19 and temps are within range.
EX is running smoothly, holding at -22. Jim and I were unable to adjust to -19. Temps were within range.
Corner pump sounds louder than it should. Holding at -19. The temperature on the pump housing is running at 170F. I have shut it down for today; I will cycle it back on tomorrow and monitor the temp. This pump was rebuilt a couple of weeks ago.
15:23 UTC CAMERA_SERVO Guardian took us out of Observing again; after a minute it brought us back.
15:24 UTC back to Observing.
CDS & PHONE OUTAGE:
20:24 UTC There is currently a CDS Internet Connection issue. The CDS Connection to the outside world is down. Jonathan is working on rebooting the router.
LHO Control room is back on the internet.
Jonathan and Nyath power cycled the GC switch in the MSR and that resolved the issue.
As shown in the attached figure, the ETMX camera (PIT2 and YAW2) froze for 4 s and the camera guardian went to the WAIT FOR CAMERA state, coming back in 1 min. The ETMX camera freeze also happened last Wednesday, see alog 70933.
Inspected both wind fences; both seem in good shape. Also got a good reminder to be aware of wild animals: Mitch and I came within 20 feet of 2 bull elk that were hiding from the sun on the north side of the EX building. No pictures of them, sadly.
Details posted on the LLO logbook under the same subject line, just linking here: https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=66036
Perhaps unsurprisingly given its previous history, the strong 1.6611 Hz comb that disappeared (alog 69791) in late May has resurfaced. It shows up clearly in Fscans; I did some additional digging and it looks like the first traces appear on June 27th in the 12:00-14:00 UTC range. This corresponds in time with some of the work described in alog 70849, but OM2 heater changes don't account for the previous disappearance of the comb; Sheila confirms that the heater wasn't on earlier in May. So it's still not clear what's going on.
Update: it's coherent with H1_PEM-EX_VMON_ETMX_ESDPOWER48_DQ and H1_PEM-EX_VMON_ETMX_ESDPOWER18_DQ, and *not* with CS or EY VMON channels.
(Last time we tried to hunt this comb down, I think we didn't have high resolution coherence plots generated to high enough frequencies for these channels.)
Plots attached. The gray dots are harmonics of a separate 99.9989 Hz comb.
It looks like the behaviour of this comb changed again on July 13, shifting slightly in frequency, before disappearing again on July 14. It is as yet unclear what caused the changes. The attached weekly averaged Fscan from July 12-19 shows these changes, especially around 280 Hz.
This comb seems to have reappeared between 7:30 and 9:00 UTC on July 19, 2023. Hopefully this time range can point to something that specifically changed. See the attached daily Fscan image.
Over the weekend we ran into a few instances (alog 71043, alog 71026, alog 71008) where we tried to get data via the cdsutils getdata function in an ISC_LOCK guardian state and it returned nothing. This caused an error in ISC_LOCK, fixed by simply reloading the node, since the function just had to try again to get the data. This is not a new thing, but it's definitely another reminder that we have to be prepared for different outcomes any time we request data.
Some months ago, with Jonathan's help, I made a function wrapper that can be used to handle hung data grabs. While that's not the issue we saw over the weekend, it's still a good idea to use it whenever we try getting data in a Guardian node. The file is (userapps)/sys/h1/guardian/timeout_utils.py and there is either a decorator (@timeout) or a wrapper function (call_with_timeout) that can be used.
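The actual contents of timeout_utils.py may differ, but a minimal sketch of what a call_with_timeout wrapper could look like is below, using a worker thread so that a hung NDS call can't block the Guardian state indefinitely. Only the function name matches the real file; the body is illustrative.

import threading

def call_with_timeout(func, *args, timeout=30, **kwargs):
    """Run func(*args, **kwargs) in a worker thread, giving up after `timeout` seconds.

    Returns the function's result, or None if the call hung or raised.
    (Sketch only; the real timeout_utils.py may be implemented differently.)
    """
    result = [None]

    def worker():
        try:
            result[0] = func(*args, **kwargs)
        except Exception:
            result[0] = None

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    thread.join(timeout)
    # If the worker is still alive here the call has hung; return None and move on.
    return result[0]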
For the specific issue we saw over the weekend, a solution is to just do a simple check that the data is actually there before trying to do anything with it (i.e. "if data:"). Using this situation as an example:
# This wrapper should handle hung nds data grabs
popdata_prmi = call_with_timeout(cdu.getdata, 'LSC-POPAIR_B_RF90_I_ERR_DQ', -60)
# This conditional handles None data returned
if popdata_prmi.data:
    if popdata_prmi.data.max() < 20:
        log('no POPAIR RF90 flashes above 20, going to CHECK MICH FRINGES')
        return 'CHECK_MICH_FRINGES'
    else:
        self.timer['PRMI_POPAIR_check'] = 60
I should have added that this fix was loaded into ISC_LOCK by Tony during commissioning today and is ready for our next relock.
This threw the attached error at 2023-07-07 04:14 UTC. I edited ISC_LOCK's PRMI and DRMI checkers from 'if popdata_prmi.data:' to 'if popdata_prmi:'.
This seemed to work but I'm not sure it will cover every case. If this goes into error again I suggest the operator start by reloading ISC_LOCK and, if necessary, the "elif self.timer['PRMI_POPAIR_check']" block of code can be commented out. Tagging OpInfo.
After this edit and a reload, the checker seems to work well, logging that there were no RF18 flashes above 120 (true) and moving to PRMI locking before the old 5 minute 'try_PRMI' timer finished.
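For completeness, a more defensive version of that conditional, covering both a None return and a returned buffer with no data, might look like the sketch below. This is not what is currently loaded in ISC_LOCK, which just uses 'if popdata_prmi:'.

# Sketch only - a more defensive check than the one currently loaded.
if popdata_prmi is not None and getattr(popdata_prmi, 'data', None) is not None:
    if popdata_prmi.data.max() < 20:
        log('no POPAIR RF90 flashes above 20, going to CHECK MICH FRINGES')
        return 'CHECK_MICH_FRINGES'
    else:
        self.timer['PRMI_POPAIR_check'] = 60
else:
    log('POPAIR data grab returned nothing; will retry on the next cycle')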