Section: H1
Task: TCS
Closes FAMIS #27818, last checked in 85024 (I am a week late)
TCSX: 30.5, no water added
TCSY: 10.6, no water added
No leak in water cup
Lockloss during commissioning at 2025-06-25 17:52 UTC after over 5.5 hours locked. Cause unknown, but probably not commissioning related.
Ansel, Sheila, Camilla
Last week, Ansel noticed that there is a 2Hz comb in DARM since the break, similar to the one we've seen from the HWS camera sync frequency and power supplies and fixed in 75876. The cabling has not been changed since, but the camera sync frequency has.
Our current camera sync frequencies are: ITMX = 2Hz, ITMY = 10Hz. We have typically seen these combs in H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ. With a 0.0005Hz BW on DTT I can't easily see these combs; see attached.
The comb may be difficult to see in a standard spectrum, but it can be seen clearly in the Fscan plots linked off of the summary pages. For the "observing" Fscan, the interactive spectrum plot marks the 2 Hz comb automatically. See the attached image of H1:GDS-CALIB_STRAIN_CLEAN.
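The comb-spacing check the Fscan tooling performs can be sketched in a few lines. The example below is a hedged illustration with synthetic data, not the actual Fscan code: it compares the spectrum at harmonics of the comb spacing against the nearby background.

```python
import numpy as np

def comb_snr(freqs, asd, spacing, n_teeth=20, half_width=1):
    """Compare the ASD at harmonics of `spacing` to the nearby background.

    Returns the ratio of the median on-tooth value to the median
    off-tooth value; a ratio well above 1 suggests a comb is present.
    """
    df = freqs[1] - freqs[0]
    on, off = [], []
    for k in range(1, n_teeth + 1):
        idx = int(round(k * spacing / df))
        if idx + half_width + 8 >= len(asd):
            break
        on.append(asd[idx - half_width:idx + half_width + 1].max())
        # background taken a few bins away from the tooth
        off.append(np.median(asd[idx + 3:idx + 8]))
    return float(np.median(on) / np.median(off))

# Synthetic spectrum: flat background with teeth every 2 Hz (0.1 Hz bins)
freqs = np.arange(0.0, 100.0, 0.1)
asd = np.ones_like(freqs) * 1e-23
asd[::20] *= 10
print(comb_snr(freqs, asd, spacing=2.0))  # well above 1: comb present
```

In real use, `freqs` and `asd` would come from an exported DTT or Fscan spectrum rather than synthetic data.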
Verified that the cabling has not changed since 75876.
The next steps, as listed in 75876, would be to try a different power supply or to lower the voltage to +12V. Alternatively, there is a note suggesting Fil could make a new cable to power both the camera and the CLinks via the external supply (14V is fine for both).
Thanks Camilla. If anything can be done more rapidly than waiting another week, it would be very much appreciated. Continuing to collect contaminated data is bad for CW searches.
Matt and I turned down the voltage supplied to each camera from 14V to 12V at ~22:00 UTC while the IFO was relocking. Verified the HWS cameras and code are still running.
We also plan to have Dave re-implement the hws_camera_control.py script he wrote in 74951 to turn the HWSs off in Observing until we fix this issue.
The 2 Hz comb is still present in H1:GDS-CALIB_STRAIN_CLEAN after the voltage change (before the software update)
Elenna, Sheila, Kevin, Matt, Camilla
For some thermalization tests, at 17:05UTC we stepped CO2 powers down from 1.7W to 0.9W each into IFO. Expect majority of thermalization to take ~1hour.
Beforehand, Sheila plugged in the freq noise injection cables in the LVEA PSL racks and Elenna turned on the AWG_LINES guardian.
I'm adding a detchar tag here in case anyone is wondering where all the lines are coming from in the data around this time: these are purposefully injected lines. If AWG_LINES is injecting, it will be in state 10. When IDLE (no injections), it is in state 2.
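A detchar-side check could key off those state numbers directly. A minimal sketch, using only the state numbers quoted above (everything else is hypothetical):

```python
# AWG_LINES guardian state numbers as described in this entry
AWG_LINES_STATES = {10: "INJECTING", 2: "IDLE"}

def lines_injected(state_value: int) -> bool:
    """True if AWG_LINES is actively injecting lines; unknown states
    are treated as not injecting."""
    return AWG_LINES_STATES.get(state_value) == "INJECTING"

print(lines_injected(10), lines_injected(2))  # True False
```

In practice `state_value` would come from the guardian's STATE_N channel for the time segment in question.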
Weekly TCS Chiller Water Level Top-Off FAMIS 27815
TCSX - 30.5, no water added
TCSY - 10.5, no water added
TJ, Camilla WP 12605, WP 12593
As HAM1 is now low enough in pressure, we removed the HAM1 +Y yellow VP covers, re-attached bellows and removed guillotines. ISCT1 was moved back into place on Friday 84850.
After TJ transitioned to Laser Hazard, we opened the ALS and PSL light pipes and turned back on the SQZ, CO2s and HWSs lasers.
After okay from VAC, IMC locked without issue, PRC and MICH alignments look good enough on AS AIR camera and we will begin realigning ISCT1 soon.
HWS servers now point to /ligo/data/hws as the data directory.
The old data directory, h1hwsmsr:/data, is now moved to h1hwsmsr:/data_old
The contents of the old directory were copied into the new directory, except H1/ITMX, H1/ITMY, H1/ETMX, H1/ETMY, under the assumption that these only contain outputs from the running processes.
HWS processes on h1hwsmsr, h1hwsmsr1, h1hwsex were stopped and restarted and are writing to the new directory.
h1hwsey had crashed previously and wasn't running. It was restarted and is also writing to the new directory
Addressed TCS Chillers (Mon [Jun 2] 1:00-1:25am local) & CLOSED FAMIS #27816:
TJ, Camilla. Checking as Sheila was concerned about this, since Georgia/Craig need to move the ITMX SPOT GAIN "CENTER" 84702.
Attached are the HWS Live plots comparing when we get to 10W and 120s later for ITMX and ITMY now, and also for ITMX and ITMY at our last 10W power up before the vent (note that then we didn't pause at 10W and continued increasing power during the 120s).
No obvious new point absorbers are visible: ITMY looks perfect and ITMX has the same point absorbers as before. It's possible there's a new point absorber on the right of the optic (orange arrow in attached), but it doesn't look scary. We will continue to check this, but it's possible that our alignment has simply changed (it has since 66198) so that we can now see this point.
Sheila and Elenna turned the CO2s on as we were powering up from 10W to 25W. Attached are the 2 minutes after the CO2s were turned on, while we sat at 15W for 4 minutes.
We can see the CO2 patterns on ITMX and ITMY. The IY CO2 looks good, but the IX CO2 looks off in yaw; we should check how this compares to the IFO spot at full power. It would be worth doing some dither tests to see if this needs adjusting.
Camilla, TJ
Sheila asked us to look for post vent point absorbers, but when we went to check, we found that the code hadn't been running since May 23rd.
Looking further into it, we could log into the machines and open the tmux sessions, but the code had been frozen since May 23 8:21 UTC and could not be Ctrl-C'd. I then tried a pkill on that python process; this also did not work. Trying to reboot the machines remotely also didn't work. I power cycled the machines from the MSR, and then we were finally able to start the code up. ITMX needed the camera initialized for some reason.
We are thinking about ways to catch this in the future, maybe having DIAG_MAIN monitor whether values are updating properly.
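One way a DIAG_MAIN-style test could catch a frozen process is a simple staleness check on the channel's last-update time. A hedged sketch; the 600 s threshold and the GPS-time inputs are placeholders, not an actual DIAG_MAIN test:

```python
def is_stale(last_update_gps: float, now_gps: float,
             max_age_s: float = 600.0) -> bool:
    """Flag a channel whose most recent new value is older than
    max_age_s. A code freeze like the May 23 one would trip this."""
    return (now_gps - last_update_gps) > max_age_s

# Example: last update ~12 minutes ago -> stale
print(is_stale(last_update_gps=1_000_000.0, now_gps=1_000_700.0))  # True
```

A real test would compare the channel's value history (or a heartbeat counter the HWS code increments) rather than trusting a single timestamp.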
Camilla, TJ
I went to turn on the CO2 lasers to prep for locking today. I found both power supplies on the mechanical room mezzanine needed to have their outputs turned on. For TCSX, I then was able to turn the controller on, turn the key, hit the gate button, then turn on the laser via medm as usual. For TCSY though, the controller complained of a Flow Alarm as soon as the unit was turned on, and turning the key or hitting the gate would not clear it. The flow was reading 2.45gpm according to the paddle wheel flow meter on the floor, and before the vent we were just above 2.5gpm on the floor and 3.4gpm at the chiller. We had reduced the flow of this chiller back in December with Robert (alog81246), so I tried bumping up the flow slightly to 2.6gpm on the floor in the hopes to clear the flow alarm. Now, the flow alarm wasn't present when turning the unit on, but then when I turned the key and hit the gate button, the flow alarm came back.
At this point Camilla and I checked cable connections, tried turning the chassis off and on while waiting a bit, patting our heads while rubbing our bellies, and some other non-fruitful things. Eventually we moved the flow back up to 3.8gpm at the chiller and 2.8gpm at the floor and the flow alarm never showed up. We tried to bring the flow down a bit, but the flow alarm would return. We have no idea why the controller isn't happy with us running the flow at levels that we have been for all of 2025. For now, sorry Crab Nebula.
FAMIS 31406
The sock filters had slight discoloration, but there was floating string-like particulate in both reservoirs (pic1 & pic2). This isn't a new finding, but I was hoping that after the last flush in October the floaters would have been reduced over time as they were filtered out; that doesn't seem to be the case.
Camilla C, TJ S
The two corner station HWS SLEDs were last swapped back in October 2023 (alog73371), going a bit longer than our usual ~1 year between swaps. Today we swapped them with fresh SLEDs following the T1500193 procedure, calibrated their power channels, and started the code back up with fresh references.
HWS starting values
Power as measured at the fiber launcher: IX: 480uW, IY: 180uW
Power reported in EPICS: IX: 1.88, IY: 0.36
SLEDs removed:
X - https://ics.ligo-la.caltech.edu/JIRA/browse/QSDM-790-5--00-11.21.380
Y - https://ics.ligo-la.caltech.edu/JIRA/browse/QSDM-840-5--00-03.20.479
HWS ending values
SLEDs installed:
X - https://ics.ligo-la.caltech.edu/JIRA/browse/QSDM-790-5--00-11.21.382 2.5mW = 165mA = 660mV (on TP with 250mA/V) Max current set to 155mA
Y - https://ics.ligo-la.caltech.edu/JIRA/browse/QSDM-840-5-0-00-06.18.005 2.5mW = 100mA = 400mV (on TP with 250mA/V) Max current set to 95mA
Power measured at fiber launcher: 2.2mW for both
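The test-point readbacks quoted above follow directly from the 250mA/V scaling; a one-liner reproduces them:

```python
def tp_mv(current_ma: float, scale_ma_per_v: float = 250.0) -> float:
    """Convert SLED drive current (mA) to the expected test-point
    voltage (mV), using the 250 mA/V scaling noted above."""
    return current_ma * 1000.0 / scale_ma_per_v

print(tp_mv(165), tp_mv(100))  # 660.0 400.0 (X and Y SLEDs)
```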
To calibrate the H1:TCS-ITM{X,Y}_HWS_SLEDPOWERMON channel, we turned off the SLEDs, found the dark offset, turned the SLED back on, and then adjusted the gain. These values are in the SDF screenshots attached.
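The dark-offset-then-gain procedure amounts to a two-point linear calibration. A sketch with made-up readings (these are not the actual SDF values):

```python
def calibrate(raw_dark: float, raw_lit: float, true_power_uw: float):
    """Two-point calibration of a power-monitor channel: offset from
    the dark reading, gain from a known applied power."""
    offset = -raw_dark
    gain = true_power_uw / (raw_lit + offset)
    return offset, gain

def apply_cal(raw: float, offset: float, gain: float) -> float:
    return (raw + offset) * gain

# Made-up readings: dark reads 0.02, lit reads 1.90 while the
# fiber launcher measures 480 uW
offset, gain = calibrate(0.02, 1.90, 480.0)
print(round(apply_cal(1.90, offset, gain), 1))  # 480.0
```

With the offset and gain set this way, the channel reads zero in the dark and the measured power when lit.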
* Added to ICS DEFECT-TCS-7753, will give to Christina for dispositioning once new stock has arrived.
New stock arrived and has been added to ICS. Will be stored in the totes in the TCS LVEA cabinet.
ISC has been updated. As of August 2023, have 2 spare SLEDs for each ITM HWS.
ICS has been updated. As of October 2023, have 1 spare SLED for each ITM HWS, with more ordered.
Spare 840nm SLEDs QSDM-840-5 09.23.313 and QSDM-840-5 09.23.314 arrived and will be placed in the TCS cabinets on Tuesday. We are expecting qty 2 790nm SLEDs too.
Spare 790nm SLEDs QSDM-790-5--00-01.24.077 and QSDM-790-5--00-01.24.079 arrived and will be placed in the TCS cabinets on Tuesday.
18:48 Back to NOMINAL_LOW_NOISE
Within the same ~15 seconds as the lockloss, we turned the CO2 powers down from 1.7W each to 0.9W each, in the hope of doing the thermalization tests we tried last week 85238.
We checked the lockloss today, and the LSC channels at the time we turned the CO2s down last week, and see no glitches corresponding to the (out-of-vac) CO2 waveplate change; we think the lockloss was unrelated.