I spent some time this morning on the HAM8-GS13 v1 issue reported here by Jim, which was causing a 0.6 Hz table oscillation. While I was working on it, the GS13 (or something along the chain) decided to fix itself. The problem has not reappeared (yet) since then.
1- Jim set the ISI to offline @ 18:21 UTC. With the ISI offline, the GS13 v1 ASD is a factor of 2 below v2 and v3 (fig1)
2- With the ISI offline, I sent a Z excitation and tripped the ISI @ 18:54 UTC.
3- Restarted the Z excitation with a smaller amplitude from 18:55 to 18:57 UTC. The three vertical GS13 ASDs now match (fig2)
4- With the excitation turned off, the v1 ASD still matches the other two vertical GS13s (fig3)
5- Turned the loops back on. No 0.6 Hz line anymore (fig4) ...
Puzzling... We'll continue to monitor.
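For reference, the ASD comparisons above were made from live spectra; a minimal offline equivalent is sketched below with synthesized data, since the actual GS13 channel names and sample rate are not given in this entry (the 256 Hz rate and the injected 0.6 Hz line are illustrative assumptions).

```python
import numpy as np
from scipy.signal import welch

# Synthesized stand-ins for two vertical GS13 channels (assumed 256 Hz
# sampling; real data would come from the frames/NDS, not shown here).
fs = 256
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
v1 = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 0.6 * t)  # 0.6 Hz line
v2 = rng.normal(size=t.size)

def asd(x, fs, nperseg):
    """Amplitude spectral density: square root of the Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    return f, np.sqrt(pxx)

f, a1 = asd(v1, fs, 16 * fs)
_, a2 = asd(v2, fs, 16 * fs)
bin06 = np.argmin(np.abs(f - 0.6))
ratio = a1[bin06] / a2[bin06]  # the 0.6 Hz feature stands out in v1
```

With real data, a factor-of-2 suppression of v1 relative to v2/v3 (as in fig1) would show up directly in this ratio across the band of interest.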
Lock recovery was fairly straightforward. We lost lock at LASER_NOISE_SUPPRESSION once; it looked like the ISS second loop would try to close while the diffracted power was changing too much, so it would back off and try again. It did this 4 times before we lost lock. On the next lock attempt the second loop had no issues.
[Aidan, Alena]
Per the autogeneration of the LLO coordinates from ZEMAX, I've run a Python script on the LHO ZEMAX ray trace to generate the corresponding data set of locations and distances for H1. Note that ITMY, ZM4, and OMC_in were not in the ZEMAX ray trace (the latter two were not in the model, and ITMY was not intercepted by any rays), so these optics do not appear in the PDF file. We'll work to resolve that omission.
The data is posted on the DCC: https://dcc.ligo.org/LIGO-E2100383-v4
I verified the coordinates for all the optics by directly interfacing with the ZEMAX model. The only slight difference was a 2mm offset of the pilot ray from center on the SQZ TFP - otherwise the rays intercepted the centers of all the optics with a precision better than 0.5mm.
Naoki, Daniel, Sheila, Camilla
Naoki found the OPO would not lock as usual. After he locked it on green, there was no CLF_REFL_RF6 signal. After checking settings, we followed the 73801 instructions to measure the NLG (17.8). In doing so, we found we needed to adjust the OPO temperature by 0.15 degrees. After this OPO temperature adjustment, it locked fine. Daniel swapped thermistor readouts today, which affected this even though the requested and measured OPO temperatures were the same before and after (74188).
Note that because the OPO LOCKED_SEED_NLG guardian state now goes through LOCKED_CLF_DUAL (which needs a locking OPO), we first dropped the CLF RF threshold by half (via SQZ Overview > HAM7 > OPO IR > OPO PZT1 > H1:SQZ-OPO_IR_RESONANCE_CLF_NOM); this has now been reverted.
Since we doubled up terminals, the terminal for the OPO thermistor physically changed. The resistance reported by the OPO thermistor changed by about 45 Ω, whereas the EL3692 measurement error for the 10 kΩ range is 50 Ω, so this is within expectations.
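For scale, a 45 Ω offset on a 10 kΩ NTC thermistor corresponds to roughly a tenth of a degree. A back-of-the-envelope check (the beta coefficient and operating temperature below are assumed typical values, not taken from the chassis documentation):

```python
# Back-of-the-envelope: how much temperature does a 45 ohm readout offset
# on a 10 kOhm NTC thermistor represent? Beta and T are assumed values.
R = 10e3       # nominal thermistor resistance, ohms
dR = 45.0      # observed readout change, ohms
beta = 3950.0  # assumed NTC beta coefficient, kelvin
T = 298.15     # assumed operating temperature, kelvin (25 C)

# NTC model: R(T) = R0 * exp(beta * (1/T - 1/T0)), so dR/dT = -beta * R / T**2.
# A small resistance offset therefore maps to a temperature offset of:
dT = dR * T**2 / (beta * R)   # magnitude, kelvin
```

Under these assumptions the 45 Ω shift is equivalent to roughly 0.1 degrees, comparable in scale to the 0.15 degree OPO temperature adjustment noted above.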
Today I did OMC scans, leaving the sidebands on, so that we can try to use the sidebands to calibrate the PZT voltage into frequency and estimate finesse from the scan.
I left the input power at 2W (1.8W on PRM), and did the scan slowly to try to avoid the saturations that Keita observed in 72254. I also took scans at three different speeds.
I took scans with the OMC ASC on the QPDs. We know that there is a slightly better alignment for single bounce, but don't think that this should matter much for the finesse measurement that we get this way.
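The calibration idea can be sketched as follows. All numbers below are illustrative placeholders, not fitted values; the RF sideband frequency and OMC FSR would come from the actual configuration.

```python
# Calibrate PZT volts to frequency using the RF sidebands, then estimate
# finesse. All numbers below are illustrative, not fitted from real scans.
F_SB = 45.5e6    # assumed RF sideband frequency, Hz
FSR = 264.0e6    # assumed OMC free spectral range, Hz

# Hypothetical peak centers fit from a scan, in PZT volts:
v_carrier = 5.00
v_upper_sb = 5.91     # upper RF sideband resonance
fwhm_volts = 0.012    # fitted FWHM of the carrier peak, volts

# The carrier-sideband spacing is exactly F_SB, giving the calibration:
hz_per_volt = F_SB / abs(v_upper_sb - v_carrier)

# Finesse = FSR / cavity linewidth (FWHM):
fwhm_hz = fwhm_volts * hz_per_volt
finesse = FSR / fwhm_hz
```

The same calibration can be cross-checked between the upper and lower sidebands, which also reveals any nonlinearity in the PZT volts-to-displacement response over the scan.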
Times:
WP 11523
Cabling for the manifold accelerometers pulled at both end stations. Cables pulled from TCS-C1 to the manifold. Cables need to be terminated.
LVEA has been swept following Tuesday Maintenance.
Note:
The OPO witness thermistor seems broken.
(Janos Cs.)
Output mode cleaner tube turbo station:
Scroll pump hours: 5563
Turbo pump hours: 5623
Crash bearing life is at 100%
X beam manifold turbo station:
Scroll pump hours: 785
Turbo pump hours: 787
Crash bearing life is at 100%
Y beam manifold turbo station:
Scroll pump hours: 1879
Turbo pump hours: 604
Crash bearing life is at 100%
FAMIS tasks 23523, 23595 and 23643.
Flush ports have also been added to the scroll pumps at all stations. These help keep the pumps clean and also improve cooling efficiency.
Closes FAMIS#25965, last checked 74134
BSC high-frequency noise is elevated for these sensors:
ITMX_ST2_CPSINF_H3
ITMX_ST2_CPSINF_V1
ITMX_ST2_CPSINF_V3
The elevated high-frequency noise in ITMX_ST2_CPSINF H3 and V1 has been present for the past several weeks (73847), and ITMX_ST2_CPSINF_V3 has popped back up as elevated after a month.
Here is just the ITMX ST2 plot. There are peaks around 12.5Hz for H1, H2, V1, V2, and V3 that don't seem to typically be there (last time the ITMX ST2 spectra looked similar was 08/25/23 - 72434).
All other spectra look okay.
This is a regular life-cycle replacement. Anthony Sanchez and Jonathan Hanks replaced sw-msr-h1aux. This resulted in an outage to the cameras and slow controls while we moved the connections. It generally went smoothly. Two things to note. First, because access to the front of the rack was blocked, we found it useful when removing the old switch to wrap a sling around it (through the access holes in the top of the rack) to support it (and not drop it on fibers and other equipment). Second, it looks like we ripped a tube/sleeve protecting fibers while putting the new switch in. The fibers did not look or feel damaged or kinked, so I think we just nicked the sleeve while positioning the replacement switch.
I physically power cycled h1digivideo0, h1digivideo1 and h1digivideo2. Dave power cycled cam3 (MC Trans) and cam21 (ITMX). cam3 came back. cam21 did not, but has been down for a long period prior to today, so may not be in use?
Tue Nov 14 10:06:26 2023 INFO: Fill completed in 6min 22secs
Fernando Marc Daniel
The following changes were done to the squeezer slow controls chassis:
This means thermistor readouts are now single channel for the ALS SHG, the SQZ SHG, the OPO, and the OFI (as well as the T-SAMS which were done last week). This leaves SFI1 and SFI2 still at dual channel readouts.
The in-lock charge measurements were swapping the MICHFF and SRCLFF onto ITMX when DARM control moved from ETMX to ITMX via LSC-OUTPUT_MTRX_3_{10,11}. This isn't required now that the LSC FF is on ETMY PUM (73420), so I removed this from the SUS_CHARGE guardian code and reloaded. Sheila noticed this this morning as excess low-frequency noise, caused by the LSC FF being sent to ITMX (as well as the nominal EY PUM), pushing MICH/SRCL noise onto ITMX. This would have been happening since the LSC FF change, i.e. from the 17 October 2023 measurement until today.
The SUS_CHARGE guardian didn't completely finish until 16:01:40 UTC, 2 minutes later than it should have, so I reduced the ITMX_L3_LOCK_BIAS ramp time used by SUS_CHARGE from 120 s to 60 s.
When the BS camera was frozen last Saturday (alog74157), the CAMERA_SERVO guardian recognized that the BS camera was frozen, turned off the camera servo, and went to the WAIT_FOR_CAMERA state. However, the camera checker in the WAIT_FOR_CAMERA state somehow did not work: the camera servo was turned on again and the guardian got stuck in an infinite loop. I have not figured out why the camera checker in WAIT_FOR_CAMERA does not work, but I think we no longer need the WAIT_FOR_CAMERA state. It was added to avoid the short camera-freeze issue in alog68756, but I think that issue was solved in alog71593. So I removed the WAIT_FOR_CAMERA state from the guardian.
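For illustration, the kind of freeze check involved could look like the sketch below. This is not the actual CAMERA_SERVO guardian code (which reads EPICS channels); the class, counter interface, and timeout are made up.

```python
# Illustrative freeze detector: a camera is "frozen" if its frame counter
# has not advanced within a timeout. Time is passed in explicitly so the
# logic is deterministic and testable.
class FreezeChecker:
    def __init__(self, timeout=5.0):
        self.timeout = timeout      # seconds without a new frame => frozen
        self.last_count = None      # last frame counter value seen
        self.last_change = None     # time the counter last advanced

    def update(self, frame_count, now):
        """Feed the latest frame counter; return True if frozen."""
        if self.last_change is None or frame_count != self.last_count:
            self.last_count = frame_count
            self.last_change = now
        return (now - self.last_change) > self.timeout
```

A loop like the old WAIT_FOR_CAMERA state would poll such a checker and only re-enable the servo once updates resume; the failure described above is consistent with the re-enable path not consulting the checker correctly.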
TITLE: 11/14 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.43 μm/s
QUICK SUMMARY: Locked for 79 hours and 44 min!!!! But sadly we had to end this O4 record to start maintenance work. SEI state about to transition.
TITLE: 11/14 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
H1 has now been locked about 71.75hrs! Might get close to 80hrs *knock on wood*.
LOG:
Noticed that the CO2s weren't exactly outputting 0 W at their NO_OUTPUT settings (plot attached), so I searched for home on both rotation stages. This brought them back much closer to zero. Daniel reminds us that "search for home" needs to be done after every Beckhoff reboot; unsure whether we did it after Beckhoff came back on Tuesday.
I further adjusted the CO2X calibration, as it hadn't been getting close to 1.7 W since we touched it on Tuesday (74044). TJ's bootstrapping was getting it closer to 1.7 W, but it works best when we start with a close power.
CO2Y rotation stage weirdness: On my final test, asking CO2Y to go to 1.7 W, it jumped to -700 degrees! I then asked it to go back to minimum power, which it slowly did. Very strange. A better way to take it back might have been to ask it to "search for home", but I remembered that clicking "abort" often crashes Beckhoff! Searched for home after this. Plot attached.
Before bootstrapping, CO2Y had only been getting to 1.5W injected with 1.7W requested. I adjusted the calibration (sdf attached) to bring this closer to 1.7W.
We've noticed this before: the CO2Y power meter reading drops when the rotation stage stops moving; maybe the RS slides back after it has finished rotating, changing the power by ~0.03 W. Plot attached. The CO2Y rotation stage is noisier than CO2X. We should check that we have a spare RS on hand.
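As background on the rotation-stage calibration adjusted above: assuming the usual half-wave-plate + polarizer arrangement, the delivered power follows a sin² law in waveplate angle, and the calibration inverts that to turn a requested power into an angle. A hedged sketch (all constants are assumed for illustration, not the actual Beckhoff values):

```python
import math

# Sketch of rotation-stage power calibration, assuming a half-wave-plate +
# polarizer arrangement so P(theta) follows a sin^2 law. The constants
# below are assumed for illustration, not the Beckhoff calibration.
P_MAX = 57.0    # full laser power, W (assumed)
THETA0 = 10.0   # waveplate angle of minimum power, degrees (assumed)

def power_at_angle(theta_deg):
    """Transmitted power at a given waveplate angle."""
    return P_MAX * math.sin(math.radians(2.0 * (theta_deg - THETA0))) ** 2

def angle_for_power(p_req):
    """Invert the sin^2 law to find the angle for a requested power."""
    frac = p_req / P_MAX
    return THETA0 + 0.5 * math.degrees(math.asin(math.sqrt(frac)))

theta = angle_for_power(1.7)  # e.g. the 1.7 W request in the log
```

Near the null the sin² curve is steep in fractional terms, which is why small offsets in the home position (THETA0) or backlash-like slips of the stage change the delivered power noticeably at low requested powers.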