The refcav transmission has been decaying quickly over the last day or so (down to about 0.7), and we've been getting a warning on DIAG_MAIN about the transmission being too low. I took a few minutes between locks to touch it up briefly; it's now at about 0.76. Not a major improvement, but it at least gets us away from the threshold we'd set for ourselves that we know is bad. This should (hopefully) get us through the long weekend, and then we can revisit whether we need to do any in-enclosure alignment during the break.
Since they've been skipped for the last few weeks (75271), today while ISC_LOCK was in OMC_WHITENING waiting for violins, we took in-lock charge measurements, 21:19 to 21:35 UTC.
There was a glitch at 21:33:10 UTC at the end of the ETMX injections/transition; we should check whether that's due to a mistake in the code.
Chiller 2 at EX had its Low Ambient Lockout enabled at 14 degrees. Early this morning, once the temperature dropped below 14 degrees, it went into inhibited mode. Since Chiller 2 is the lead unit, Chiller 1 attempted to come on and cool the loop, as its own Low Ambient Lockout was disabled; however, an apparent issue with the second refrigerant circuit in Chiller 1 caused it to fault. I disabled the Low Ambient Lockout on Chiller 2, which got it running again as the lead unit, and this has cooled the loop back to where it had been running the last several days. I checked the chillers at EY as well, found that Chiller 2 there also had its Low Ambient Lockout enabled at 14 degrees, and disabled it too. I will continue troubleshooting Chiller 1 at EX next week when the weather is less severe. Currently all chillers are set to run regardless of how cold it is outside.
[Tony, Jenne, Sheila]
It's been a struggle to lock this morning. We've made it up to some moderately-high locking states, but then the PRG looks totally fuzzy and a mess, and we lose lock before increasing power from 25 W. Since we've had some temperature excursions in the VEAs (due to the more than 25 F outside temperature change in the last day!!), I ran the rubbing check script (instructions and a copy of the script are in alog 64760). Everything looked fine except MC1 (and maybe MC2). RyanC notes that MC1 also looked a little like it had increased noise back in early December, in alog 74522.
In the spectra in the first attachment, the thick dashed lines are how the MC1 RT and LF OSEMs looked when things weren't going well this morning. They looked like that both when the IFO was locking and we were at 25 W, as well as when the IMC was set to OFFLINE and was not locked. The other 4 OSEMs (SD, T1, T2, T3) all looked fine, so I did not save references. The vertical signals didn't show a significant DC shift, but I put a vertical offset into the TEST bank anyway (10,000 counts), and saw that those 2 OSEMs started looking more normal, like the other 4. I then took away the vertical offset, expecting the RT and LF OSEMs to look bad again, but they actually stayed looking fine (the 6 live traces in that first attachment).
The ndscopes in the second attachment show the rough DC levels of the 6 MC1 OSEMs over the last week, to show that the motion in the RT OSEM over the last 12 hours is a significant anomaly. I'm not attaching it, but if you instead plot the L, P, Y degrees-of-freedom version of this, you see that MC1 yaw has been very strange over the last 12 hours, in exactly the same way the RT OSEM has been strange. It seems that putting in the 10,000 count offset got RT unstuck, and MC1's OSEM spectra have looked clean since then.
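As a side note, the poke that unstuck the OSEM only takes a few lines; here's a minimal sketch assuming the standard ezca interface and assuming the vertical TEST bank is SUS-MC1_M1_TEST_V (both the interface details and the bank name are my assumptions, not taken from the procedure above):

from ezca import Ezca

ezca = Ezca(ifo='H1')
bank = 'SUS-MC1_M1_TEST_V'          # assumed name of the M1 vertical TEST bank

ezca[bank + '_TRAMP'] = 5           # ramp the offset in gently over 5 s
ezca[bank + '_OFFSET'] = 10000      # the 10,000 count vertical offset
ezca.switch(bank, 'OFFSET', 'ON')   # engage the offset
# ...watch the RT/LF OSEM spectra, then remove the offset again:
ezca[bank + '_OFFSET'] = 0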
We're on our way to being locked, and we're now farther in the sequence than we've been in the last 4.5 hours, so I'm hopeful that this helped....
Julian, Camilla. WP 11614
With a 4 mW seed beam injected, we touched the quarter- and half-waveplates in the SEED path on SQZT0 to minimize H1:SQZ-CLF_FIBR_REJECTED_DC_POWER. Plot attached. Reduced the HAM7 rejected power from 0.32 mW to 0.12 mW (we've never previously gotten this to 0; see 65063).
Throughput to SQZT7 is 1.2 mW / 4 mW = 30%. We expect >40%, and we have previously gotten only 50%-75% through the fiber coupler switch alone (not including HAM7); see 65392 and 65445.
If we need more power than this we can try to walk the alignment; we tried touching only one mirror and didn't improve the throughput, and the second mirror was hard to reach from the local laser hazard area.
Dave called and informed us that he is getting an FMCS alarm about the H0:FMC-EX_CY_H2O_SUP_DEGF channel.
This information was passed down to Eric, and he went out to EX to check it out and clear the fault.
When he returned, he mentioned that the change he made there may also need to be made at EY.
We will be watching these channels throughout the day to make sure that solves the problem.
Fri Jan 12 10:10:30 2024 INFO: Fill completed in 10min 26secs
Gerardo confirmed a good fill curbside. Starting TCs were high, trip temp was increased to -60C for this fill. TCmins = -77C,-72C.
NLN Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1389100118
The lockloss tool tagged this as Windy, and the wind is gusting up to about 30 MPH, so this makes sense.
Lockloss plots and Log attached.
TITLE: 01/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 23mph Gusts, 14mph 5min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY:
When I came in, H1's ISC_LOCK was in LOCKING_ALS after a lockloss from NOMINAL_LOW_NOISE.
I noticed the earthquake, checked to see if Mr. Short had been called, and saw his alog.
The Chiller in the CER seems to have been fixed and the temperature is coming back down.
Ryan called and gave me the short story of the last few hours of the OWL shift, and did a wonderful handoff after calling Dave and Camilla in for help.
Camilla just arrived and will be helping resolve the SQZ issues.
My plan is to try to lock while she is resolving issues, and hopefully the SQZer will be ready before INJECT_SQUEEZING.
Taking H1 Initial_alignment since it's having a hard time locking beyond Find IR.
I am seeing a rise in LVEA ZONE 1A temperature that has not turned around yet.
Reminder that there is a python script which calculates the current windchills:
david.barker@opslogin0: windchill
EX temp 06.6(degF) wind 11.0(mph) windchill -8.5(degF) -22.5(degC)
MX temp 06.7(degF) wind 11.0(mph) windchill -8.4(degF) -22.4(degC)
CS temp 07.1(degF) wind 19.0(mph) windchill -12.2(degF) -24.6(degC)
MY temp 06.6(degF) wind 07.0(mph) windchill -5.1(degF) -20.6(degC)
EY temp 06.3(degF) wind 28.0(mph) windchill -16.7(degF) -27.0(degC)
wind chill average over site -10.2 degF -23.4 degC
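For reference, the numbers above are consistent with the standard NWS wind-chill formula; a minimal sketch of the calculation (the real script presumably also pulls temperature and wind speed from the site weather channels, which is omitted here):

def windchill_f(temp_f, wind_mph):
    # NWS wind-chill formula (valid for temp <= 50 F and wind > 3 mph)
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

print(round(windchill_f(6.6, 11.0), 1))   # -8.5, matching the EX line above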
H1 called for help this morning at 6:15 PST when it could not go to observing because the SQZ system was not functioning properly. Eventually I saw that the Slow Controls section of the CDS overview was red, so I called in Dave for assistance. So far we have not found evidence of an actual hardware failure, but Richard noted that a chiller in the CER had failed this morning, causing temperatures there to rise rapidly, potentially creating this issue with the SQZ Beckhoff terminals. Richard has turned on a fan and both chillers in the CER to bring the temperatures back down.
SQZ slow control Beckhoff is reporting errors in most systems: laser, fiber, pmc, shg, opo, clf and adf.
The fiber lock has a "temperature feedback limits" error, which triggered when the CER went above 26 C at 05:44 PST this morning.
As troubleshooting was ongoing, I went through the ObservationWithOrWithoutSqueezing wiki to make the changes necessary for observing without squeezing, since SQZ seemed to be the only system impacted and our range without squeezing was hovering just above 120 Mpc. I did not get the chance to actually start observing before H1 lost lock, but the intent bit was green and I could have begun observing.
To allow the IFO to observe without squeezing, I took the following actions:
All of this must be reverted once the SQZ Beckhoff issue is resolved, in order to observe with squeezing again.
This morning the chiller bank that feeds the CER quit chilling. It is quite cold outside. Normally this would not have been a major issue, but the secondary bank of chillers was shut down for noise issues in the IFO. With the second bank off, the loss of the running bank allowed the temperature to rise quickly, to a point we had not seen before (30 C).
I reset the failed bank and turned the secondary bank on as well. A fan was also added to bring the temperature down quickly; the fan has since been shut off.
The power supply that feeds the 5 Beckhoff squeezer chassis tripped off. I have not found a reason for this trip, as this power supply is less than 50% loaded, and its fan had already been replaced, so its temperature is no different from the other units around it.
I will get with Marc P. to have him measure temperatures of this supply.
Requested SQZ_MANAGER to SQZ_READY_IFO; as expected, it got stuck on a TTFSS error. Took H1:SQZ-LASER_HEAD_CRYSTALFREQUENCY from where it had railed at 200 down to 40 MHz, which brought the beatnote close to its nominal 160 MHz. It still wasn't locking, with a Temperature Feedback Error. I trended the TTFSS screen, and after trying H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_RESET (no effect), I toggled H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON off and on (in the "Slow Frequency Servo" box). The TTFSS then locked fine.
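For future reference, the equivalent recovery steps via pyepics would look something like this (a sketch only; the channel names are the ones above, but the 0/1 toggle values and the 1 s settle time are my assumptions):

import time
import epics

epics.caput('H1:SQZ-LASER_HEAD_CRYSTALFREQUENCY', 40, wait=True)      # down from railed 200
epics.caput('H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON', 0, wait=True)  # toggle the slow servo off...
time.sleep(1)
epics.caput('H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON', 1, wait=True)  # ...and back on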
All SQZ guardians are in their nominal states (SQZ_MANAGER at SQZ_READY_IFO), and I'll revert Ryan's "observing with no SQZ" changes.
Reverted all of Ryan's changes and added the new GRD states to the wiki; we'll need to accept SDFs once we get to NLN. I touched pitch sliders on FC1 (mainly) and FC2, following the SQZ wiki ("Issues with SQZ FC Guardian"), to get the FC to lock on green.
I checked the supply in question, H1-VDC-C2 U34 RHS (H1-ISC-SQZ SC). This supply originally drew 5 A at 24 V, but we have since added more Slow Controls (SC) chassis to this rail, so it now looks to be drawing 10 A.
Temperature at the front of the supply is 95F, rear of the supply is 104F, only 9F delta. The ambient air is cool and moving around averaging 68F. Air flow is not restricted on this supply, and it has a new fan.
I do not feel good airflow through this supply like I do through others, and I would expect a higher temperature delta: most supplies in that area sit at 75 F front and 105 F rear, a delta of 30 F.
If this supply trips again I recommend replacement. In either case, it will be replaced during the vent.
SDFs for observing with SQZ accepted.
I discovered this morning that my new HWS camera control code, which only talks to the camera when it needs to turn the camera on or off, was locking up in its caget of the IFO lock status. The last change was for the lock status function to return both the caget call status and the ifo_lock_state.
I reverted the code back to the previous version, and it is now running correctly on h1hwsex, h1hwsey and h1hwsmsr1, but for an unknown reason it is still locking up on caget on h1hwsmsr. When I run the caget command on h1hwsmsr's command line it works. Investigation is continuing.
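For the record, the kind of guard that avoids this sort of lock-up is a caget with a timeout; a sketch assuming pyepics and the usual guardian state channel (the actual HWS code and channel name may differ):

import epics

def get_ifo_lock_state(timeout=5.0):
    # caget with a timeout returns None on failure instead of blocking forever
    state = epics.caget('H1:GRD-ISC_LOCK_STATE_N', timeout=timeout)
    return (state is not None), state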
Note that due to this bug the HWS ITMX camera was mistakenly left ON overnight, until I manually turned it OFF at 08:33 PST this morning.
Code is running again on h1hwsmsr.
Tagging DetChar: the ITMX HWS camera was on at 5 Hz (comb visible in H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ) from 2024/01/09 20:20 UTC to 2024/01/11 16:33 UTC. Thank you Dave for finding this and turning it off.
Today I checked on the status of these cameras. IX and EX were working as expected and Dave's code is working well on all of them.
The HWS code on EY had crashed (or never correctly started) with a segmentation fault but started again fine.
ITMY was failing even though Dave's code had successfully turned the camera on. I couldn't restart the code; it gave the error "HWS.HS_Camera.AcquisitionFailure: Image acquisition failed: 1 timeouts and 1 overruns occurred." in state 2B, even though it successfully connected to the camera. I think we see similar errors when we try to stream images and take data at the same time. I tried re-initing the camera, but that didn't change anything. Streaming images gave pixelated noise, no image. This code had been working properly for the last several weeks... unsure of the issue.
Louis, Jenne, TJ, Sheila
Today we continued trying to transition to the new DARM configuration, which we had succeeded in doing in December but weren't able to repeat last week (75204).
In our first attempt today we tried a faster ramp time, 0.1 seconds. This caused immediate saturation of the ETMX ESD. We then struggled to relock because of the environment.
Because Elenna pointed out that the problem at roughly 3 Hz in the earlier transition attempts might have been the soft loops, we thought of trying the transition before the soft loops are engaged, after the other ASC is on. We first tried this before transitioning to DC readout, which wouldn't work because of the DARM filter changes. Then we made a second attempt at DC readout. We again lost lock due to a 2 Hz oscillation, even without the soft loops on.
Some GPS times of transitions and attempts:
Adding two more times to this list:
The second screenshot here shows the transitions from Dec 13th, 14th, and 19th. These are three slightly different configurations of the UIM filters and variations on which PUM boosts were on when we made the transition. On the 14th the oscillation was particularly small; this was with our new UIM filter (FM2 + FM6) and with both PUM boosts (L2 LOCK FM1, 2, 10) already on during the transition. This is the same configuration that has failed multiple times in the last two weeks.
Today I went back to three of these transitions: December 14th (1386621508, successful, no oscillation) and Jan 4th (1388444283) + Jan 5th (1388520329), which were unsuccessful attempts. It also seems that the only change to the filter file since the Dec 14th transition is a change copying the Qprime filter into L1 DRIVEALIGN, which has not been used in any of these attempts (it can't be used because tidal is routed through DRIVEALIGN).
In short, it doesn't seem that any mistaken change to these settings between December and January caused the transition to stop working.
Filter bank | SWSTAT | Filters on | Notes
L1 DRIVEALIGN L2L | 37888 | none |
L1 LOCK L | 37922 | FM2, FM6 (muBoostm, aL1L2) |
L2 DRIVEALIGN L2L | 37968 | FM5, FM7 (Q prime, vStopA) |
L2 LOCK L | 38403 | FM1, FM2, FM10 (boost, 3.5, 1.5:0^2, cross) | on the 5th, FM1+2 were still ramping while we did the transition
L3 DRIVEALIGN L2L | 37888 | none |
L3 LOCK L | 268474240 | FM8, FM9, FM10 (vStops 8+9, 4+5, 6+7) | gain ramping for 5 seconds
ETMX L3 ISCINF L | 37888 | none |
DARM2 | 38142 | FM2, 3, 4, 5, 6, 7, 8 |
DARM1 | 40782 | FM2, 3, 4, 7, 9, 10 |
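The SWSTAT values in the table decode bit-wise: bits 0-9 map to FM1-10, which is consistent with every row above; the remaining bit names below follow the standard CDS filter-module layout and are my assumption. A sketch:

FLAGS = {10: 'INPUT', 11: 'OFFSET', 12: 'OUTPUT', 15: 'DECIMATION'}

def decode_swstat(swstat):
    on = ['FM%d' % (b + 1) for b in range(10) if swstat & (1 << b)]
    on += [name for b, name in sorted(FLAGS.items()) if swstat & (1 << b)]
    return on

print(decode_swstat(37922))   # ['FM2', 'FM6', 'INPUT', 'OUTPUT', 'DECIMATION']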
I added start and end time windows for the successful transitions in LHO:75631.
The glitch at 21:33:08.36 UTC is when ESD_EXC_ETMX changes H1:SUS-ETMX_L3_DRIVEALIGN_L2L from DECIMATION, OUTPUT to INPUT, DECIMATION, FM4, FM5, OUTPUT. This confused me, as the gain was zero at the time; but 2 seconds earlier the gain had been changed from 1 to zero at the same moment the TRAMP was increased to 20 s, meaning the gain was still ramping and wasn't really zero yet. I've added a 2-second sleep between the gain change to zero and the TRAMP increase to avoid this in the future, and reloaded.
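In guardian terms the fix looks something like this (a sketch of the ordering only, using the standard ezca interface available in guardian code; not a copy of the actual ESD_EXC_ETMX code):

import time

bank = 'SUS-ETMX_L3_DRIVEALIGN_L2L'
ezca[bank + '_GAIN'] = 0       # ramps to zero over the short TRAMP currently in effect
time.sleep(2)                   # let that ramp actually finish...
ezca[bank + '_TRAMP'] = 20      # ...before lengthening TRAMP for the later steps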
Analyzed 75362