Added the cables needed inside the SQZT0 enclosure for the PMC upgrade. They are connected to the feedthrough panels and are currently curled up in the cable tray.
Maintenance activities finished up around noon and we are currently going through an initial alignment. The COMM beatnote is at -16 dBm; we plan to go to ISCT1 today to touch it up to match our new PR3 position. More on that in another alog to come.
Over the weekend Tony noticed that the CO2Y chiller flow was dropping - alog74563. He added 4.5L at the time, so a leak was suspected. Today, Camilla and I checked on the table and inside the chiller for any signs of a leak, but found nothing. We refilled the chiller and will keep an extra eye on it in the coming weeks, but we don't want to swap it out just yet.
We've seen this mysterious loss of water before (example: alog68356), but we have yet to completely explain it. Our two working theories have been:
Since we haven't done much with the CO2Y chiller recently, we suspect that #2 was our cause here. Corey had checked on the chillers this past Thursday, Nov 30 (alog74499), so it's possible that the diffuser was dislodged at some point after that and started splashing water out, which then dried before making a noticeable puddle. When the level got low enough it might have started affecting the flow, though it's not supposed to.
Inside the chiller there are plenty of spots that show signs of a leak: corrosion, sediment, etc. The problem is that we don't know which ones might be new. I'll add some pictures here for chiller S/N:617 so we can maybe tell later which are new, but this chiller in particular has many spots already, making it harder to differentiate even with pictures.
This morning I tried turning off the L2L3LP filter in the FM6 slot of the L2 DRIVEALIGN filter bank. This immediately caused a lockloss. I later realized that I forgot to manually bring the gain (H1:SUS-ETMX_L2_DRIVEALIGN_L2L_GAIN) down to 0 before toggling FM6. I'm pretty sure this caused the lockloss, since the time lines up exactly with my filter change (see attached). Next time I'll be sure to ramp the gain down before toggling filters.
Lockloss UTC time was 2023-12-05 16:09:09Z
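For next time, here's a rough sketch of the ramp-first procedure using pyepics. The _TRAMP/_GAIN fields follow the usual CDS filter-module convention, but the specific ramp time and the step of restoring the gain afterwards are my own assumptions, not a verified recipe:

import time
from epics import caget, caput

BANK = 'H1:SUS-ETMX_L2_DRIVEALIGN_L2L'

old_gain = caget(BANK + '_GAIN')
caput(BANK + '_TRAMP', 5.0)      # 5 s ramp time (assumed value)
caput(BANK + '_GAIN', 0.0)       # gain ramps down over TRAMP seconds
time.sleep(6)                    # wait for the ramp to finish

# ... toggle FM6 here (e.g. from the MEDM screen) ...

caput(BANK + '_GAIN', old_gain)  # then ramp the gain back up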
FAMIS 19968
pH of PSL chiller water was measured to be between 10.0 and 10.5 according to the color of the test strip.
This morning we tried Gabriele's new FM8 MICHFF filter with more low frequency gain. It did not cause a lockloss. There was not enough time in this configuration to see the effect on DARM; see 74595. We can repeat this during a commissioning period.
In addition to Camilla's trend shown in 74595, we can see that the new MICHFF, despite having a large MICHFF_OUT, does not change the low frequency RMS of the DARM error signal.
This allows us to somewhat relax the aggressive roll-off we had been using in the MICHFF fits below 10 Hz.
Following last week's odd behavior of the BS OpLev after a power adjustment, today I inspected the optical lever setup and took a look at the laser.
I found no issues with the setup. The fiber was properly secured with no crimps, kinks, breaks, or crushed areas. I briefly opened the transceiver box to look at the fiber run, the launching telescope, and the QPD and found no issues there; everything looked good with no signs of damage or any obvious component failures. With the setup seemingly good to go, I next looked at the laser. First thing I did was use a power meter (an Ophir Vega with a 3W-capable stick head) to measure the power out of the laser, and immediately found the problem: The laser had failed, but was not completely dead. It was still outputting a small amount of power, 0.34 mW, but changing the adjustment knob did not change the output power (the knob changes the amount of electrical current delivered to the laser diode); it remained at 0.34 mW regardless of knob position, until the current was too low to support lasing and the output power dropped to 0. I don't have any guesses right now as to why the laser failed with a simple power adjustment, but the fact remains that it did.
I swapped in a spare laser to replace the old one; old laser is SN 258, new laser is SN 120-3. I set the new laser to output ~1.5 mW, which corresponds to 0.770 V on the monitor port for the laser diode current. I chose this because upon scanning through the full range of the laser's output power in the Optics Lab yesterday, this setting seemed to have the least amount of glitching. With this new laser the SUM counts were reading ~47k; each QPD segment was reading ~12k counts, which while OK (it's not saturating) is much higher than we typically operate this optical lever at. I went to the Output Configuration Switch (OCS) at the BS OpLev chassis in the biergarten and adjusted the whitening gain from 9 dB to 0 dB; the SUM counts now read ~16.5k, closer to where this OpLev has been in the past. I've updated T1500556 to -v8 to capture this change in the BS OCS state. The laser will take a few hours to reach thermal equilibrium in its new home, then we can start monitoring it for glitching. We'll keep an eye on this over the next several days.
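As a quick consistency check on the whitening-gain change (my own arithmetic, not part of the original measurement):

# Removing 9 dB of whitening (amplitude) gain should reduce the QPD SUM
# counts by a factor of 10**(9/20) ~ 2.8.
expected_counts = 47e3 / 10**(9 / 20)
print(round(expected_counts))   # ~16700, consistent with the ~16.5k observed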
I have the old laser, SN 258, in the OSB Optics Lab. I'll take a look at it in the coming days to see if I can find an obvious cause for the failure, but this laser will get returned to the vendor for repair. This completes WP 11561.
Tue Dec 05 10:08:14 2023 INFO: Fill completed in 8min 11secs
Jordan confirmed a good fill curbside. TCs started close to 0.0C
I took the opportunity to touch up the FSS RefCav alignment this morning during the maintenance window. With the ISS still ON and the IMC offline, I used the two picomotor-controlled mirrors in the FSS path to increase the signal on the TPD from 0.92V to 0.94V. Not a massive improvement, but now when the mode cleaner locks the signal shouldn't drop below 0.9V.
The PSL rotation stage had not been calibrated since July, so I took the time to do it again this morning during maintenance. The process was as follows:
| Power in (W) | D | B (minimum power angle) | C (minimum power) |
Old values | 100.917 | 1.990 | -24.827 | 0.000 |
New values | 102.9407 | 1.990 | -24.803 | 0.000 |
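For reference, a minimal sketch of how such a calibration fit could be done, assuming a waveplate-style model P(theta) = P_in * sin^2(D * (theta - B)) + C, with B the minimum-power angle and C the minimum power. This model and the placeholder data are my assumptions; the actual fit function and procedure used for the rotation stage may differ:

import numpy as np
from scipy.optimize import curve_fit

def model(theta_deg, P_in, D, B, C):
    # assumed form: minimum power C at theta = B, maximum roughly P_in + C
    return P_in * np.sin(np.radians(D * (theta_deg - B)))**2 + C

# placeholder data: a rotation-stage angle scan and the measured laser power
theta_deg = np.linspace(-30, 60, 19)
power_W = model(theta_deg, 102.9, 1.99, -24.8, 0.0)  # stand-in for real data

p0 = [100.0, 2.0, -25.0, 0.0]                        # initial guesses
popt, _ = curve_fit(model, theta_deg, power_W, p0=p0)
print("P_in = %.3f W, D = %.3f, B = %.3f deg, C = %.3f W" % tuple(popt))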
I ran the ETMX OPLEV charge measurement this morning. The charge is high but is trending towards zero on all DOFs except for LR_P and UR_Y.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1385827768
Lockloss from commissioning activities alog74604
The chillers were resetting their set points based on outdoor air temperature, so the last week of cold weather followed by several days of warmer weather was causing issues for the PID loops in the programming. I disabled the outdoor air temperature reset feature on the chillers to keep them from deviating from their initial set points.
Recent monthly Fscan spectra show a comb at multiples of 0.996785 Hz, in approximately the region 20-100 Hz. Subsequent investigation shows that it probably appeared between Sept 21 and Sept 24, although the exact date is difficult to tell.
Further details:
Note that there was a comb in O3 with spacing 0.996806 Hz (which, on inspection of the O3 spectrum, seems to have a double-peak structure). Although they are very close, the new comb does not precisely align with the O3 comb, nor with its second peak.
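A quick back-of-the-envelope comparison (mine, not from the original report) of how far apart the teeth of the two combs sit across the 20-100 Hz band:

import numpy as np

f_new, f_old = 0.996785, 0.996806            # comb spacings [Hz]
n = np.arange(int(np.ceil(20 / f_new)), int(100 // f_new) + 1)
offset = n * (f_old - f_new)                 # tooth-by-tooth offset [Hz]
print("teeth n = %d..%d" % (n[0], n[-1]))
print("offset: %.2f mHz at n=%d, %.2f mHz at n=%d"
      % (offset[0] * 1e3, n[0], offset[-1] * 1e3, n[-1]))

The offset grows to about 2 mHz by the 100th tooth, so at the upper end of the band the two spacings should be distinguishable in the monthly Fscan spectra.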
Ansel pointed out that on 20th September (72993) I adjusted the HWS ITMX camera frame rate from 5Hz to 1Hz, as the HWS SLED had decayed. I would expect the pixel brightness to be larger for ITMX relative to the amount of SLED power, but it's been lower than ITMY even with the slower camera sync frequency (ITMY was 5Hz, ITMX 1Hz); plot attached.
Today (20:10 UTC) I adjusted the HWS ITMX frame rate back from 1Hz to 5Hz. We've previously seen coupling from the HWS cameras (see 44847), but expected we'd fixed the issue by using external power supplies for the cameras (FRS4559). We could discuss turning all HWS off during observing if this is the cause of the comb.
Update: H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ sees this comb very clearly, and shows that it appeared part way through the day on Sept 20th. Will try to identify start time more clearly using this magnetometer channel.
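A minimal sketch (assuming gwpy and NDS data access; the times and the choice of tooth are placeholders, not the actual appearance time) of the kind of pre/post comparison that can bracket when the comb appeared in this channel:

from gwpy.timeseries import TimeSeries

chan = 'H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ'
pre  = TimeSeries.get(chan, 'Sep 20 2023 00:00', 'Sep 20 2023 02:00')
post = TimeSeries.get(chan, 'Sep 20 2023 20:00', 'Sep 20 2023 22:00')

# high-resolution ASDs so the ~1 Hz-spaced teeth are resolved
asd_pre  = pre.asd(fftlength=512, overlap=256)
asd_post = post.asd(fftlength=512, overlap=256)

tooth = 50 * 0.996785                        # the comb tooth nearest 50 Hz
print(asd_pre.crop(tooth - 0.05, tooth + 0.05).max(),
      asd_post.crop(tooth - 0.05, tooth + 0.05).max())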
Looks like it's gone in the magnetometer channel! Pre/post spectra attached.
Thank you Camilla for helping to mitigate this comb. I wonder if there are other combs that are being caused by the HWS system / power supplies. Can we turn off all HWS if they are not used during observing? We may find this would solve other problems in addition to this one. Thanks!
It's not pretty, but H1 is back in OBSERVING; the range started at 143 Mpc (but it's been nosediving over the last 15 min, down to 100 Mpc). Surprised to see the violin modes looking fairly normal (even after being out of lock for a while and with the high microseism).
SDF Diffs (see attach#1): Accepted diffs for SUS: ITMx, ITMy, & BS as well as SEI: ETMx Sensor Correction
Also needed to restart the nuc30 ALIGO DARM dtt session because it had timed out.
Microseism is still pretty high, sitting solidly on the 95th percentile mark.
Wireless Access Point in the LVEA was ON, so I turned it OFF. I left the WAP in the MSR ON.
DARM (see attach#2) is elevated from 10-70Hz (the reason for the low range) and also broadly at high frequency. Since this looked squeezer-ish, I checked the squeezer and saw that the SQZ MANAGER had a notification: "SQZ ASC AS42 not on?? Please RESET_SQZ_ASC". I was just about to post something in CHAT when I saw that Naoki was already on it; he posted a CHAT asking if I could take H1 out of OBSERVING for him to reset SQZ_ASC, so I did, and a few minutes later I took H1 back to OBSERVING. Now our range should look better...it's already back above 140Mpc! :)
Taking GRD-IFO to AUTO & H1-Manager to LOW NOISE.
Still see DARM higher from about 10-55Hz.
Jordan
We ran the functionality test on the main turbopumps at MY and EY during Tuesday Maintenance (11/28/23). The scroll pump is started to take the pressure down to the low 10^-2 Torr range, at which point the turbo pump is started; the system reaches the low 10^-8 Torr range after a few minutes. The turbo pump system is then left ON for about 1 hour, after which it goes through a shutdown sequence.
MY Turbo:
Bearing Life: 100%
Turbo Hours: 208
Scroll Pump Hours: 74
EY Turbo:
Scroll pump made a grinding sound after getting to ~5E-2 Torr. I closed all valves and stopped the test. The scroll pump only has 200 hours on it, so it will be disassembled to figure out the source of the noise. I have swapped the scroll pump with a new ISP250, but did not have time to run the turbo test. I will resume next Tuesday and add a comment to this alog with the EY results.
Closing WP 11544 and FAMIS 24917
After swapping the scroll pump, I ran the functionality test on the EY main turbopump during Tuesday maintenance; no issues were encountered during this test.
Turbo Hours: 1275
Scroll Pump Hours: 72
Bearing life: 100%
Closing WP 11553 and FAMIS 24941
Gabriele refit the 11/15 LSC FF data (74220), allowing a higher magnitude at low frequency to give us a better fit; fit attached. I lowered the Q of the 17.7 Hz feature and added this filter to MICHFF FM8 as "11-15-23B". We'll want to turn this FF on carefully, or just before Tuesday Maintenance, as the excess low-frequency gain could cause instability.
Plot attached: red is the current MICHFF FM7, blue is the new FM8 (double the strength of the current FF below 8 Hz), and green is Gabriele's original design before adjusting the Q of the 17.7 Hz feature (all including the high-pass filter in FM10).
I lowered the 17.7 Hz Q's by a factor of 4 from the original design, to minimize the effect on the KAPPA_TST error (74259).
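For illustration only (the actual feature in the FF fit is more complicated, and the Q values below are made up, not the real ones): lowering the Q of a simple resonance by 4x widens it by the same factor and knocks its peak down by ~12 dB, which is the sense in which the 17.7 Hz feature was softened:

import numpy as np
import scipy.signal as sig

f0 = 17.7                     # feature frequency [Hz]
w0 = 2 * np.pi * f0

def resonance(Q):
    # H(s) = w0^2 / (s^2 + (w0/Q) s + w0^2): unity DC gain, peak ~Q at f0
    return sig.TransferFunction([w0**2], [1.0, w0 / Q, w0**2])

w = 2 * np.pi * np.logspace(0, 2, 2000)          # 1-100 Hz
for Q in (40.0, 10.0):                           # made-up "original" Q and Q/4
    _, mag, _ = sig.bode(resonance(Q), w)
    print("Q = %4.1f: peak %.1f dB, width f0/Q = %.2f Hz" % (Q, mag.max(), f0 / Q))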
Turning this FM8 MICHFF on from 2023/12/05 16:07:00 UTC to 16:09:00 UTC did not cause a lockloss. Comparing to 2 minutes earlier with the current MICHFF FM7 (16:02:00 to 16:04:00 UTC), DARM looks better with the new FF, but this was only a couple of minutes into NLN while the ADS lines were turning off, and low-frequency DARM shows no obvious change. As this new FF has higher gain at low frequency, you can see a factor of ~5 higher output on MICHFF_OUT; trend attached.
Robert has closed up the viewports for now, ISCT1 team is wrapped up, lights are off, WAP is off.