The CDS alarms system status is now shown on the CDS Overview MEDM. The attached snapshots show the system when there are no alarms, and when two test alarms were raised.
Clicking on the "CDS ALARM" button on the overview opens the alarm MEDM.
Details:
The locklossalert EPICS IOC was modified to provide, as EPICS PVs, the ALARMS GPS time, the number of active alarms, and the channels which are in alarm.
locklossalert and alarms both run on cdslogin, so a temporary file is used for the interprocess communication. The alarms process updates the file on the minute and LLA reads it at the 30-second mark. If the temporary file stops being updated, the overview button turns magenta to indicate a stopped service.
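For illustration, here is a minimal sketch of the stale-file heartbeat check described above (the file path, file format, and 90-second threshold are assumptions, not the actual locklossalert/alarms code):

#!/usr/bin/env python3
# Minimal sketch of the stale-file heartbeat check described above.
# The path, file format, and 90-second threshold are assumptions for
# illustration; the real locklossalert/alarms code may differ.
import os
import time

ALARM_FILE = "/tmp/alarms_status.txt"   # hypothetical temporary file
STALE_AFTER = 90                        # seconds; alarms writes once per minute

def alarm_file_status(path=ALARM_FILE):
    """Return 'STOPPED' if the file is missing or stale, otherwise the
    number of active alarms read from its first line."""
    try:
        mtime = os.path.getmtime(path)
    except OSError:
        return "STOPPED"                 # missing file -> service down
    if time.time() - mtime > STALE_AFTER:
        return "STOPPED"                 # stale file -> overview button magenta
    with open(path) as f:
        return int(f.readline().strip()) # assumed: first line holds the alarm count

if __name__ == "__main__":
    print(alarm_file_status())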
With the IFO down due to an earthquake, I went out to the LVEA to verify the installation and location of Picomotor Driver E. This controller was slotted for the Optical Lever system but was never implemented.
Control and readback cables are connected to the Corner 2 Slow Controls Chassis (D1100680) in the CER. Cables labeled H1:IO_305 and H1:IO_306 are connected in slot 5 on the rear of the chassis. Cables are routed to the HAM6 ISC racks. No driver is installed in the LVEA. Cables are disconnected and coiled up in the cable tray.
A magnitude 7.0 earthquake off the coast of N. California tripped all ISIs and many suspensions.
Thu Dec 05 10:07:59 2024 INFO: Fill completed in 7min 56secs
Jordan confirmed a good fill curbside.
FranciscoL, SheilaD
Changed H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN by 3.63% using KappaToDrivealign.py at GPS 1417450784.
Script output:
Average H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT is -3.6267% from 1.
Accept changes of
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN from 191.711517 to 198.664299
Proceed? [yes/no]
yes
Changing
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 198.6643
The first figure (kappa2drivealign_ndscope) is an ndscope of the relevant channels. The second plot (top to bottom) is CAL-CS_TDEP_KAPPA_TST_OUT with a vertical marker aligned at a time where the uncertainty from CAL-CS_TDEP_PCAL_LINE3_UNCERTAINTY (third plot) increased. This increase in uncertainty is expected from turning off lines before the calibration measurement.
KAPPA_TST_OUT is around 1 from minute 5 to the marker, which is the objective of changing the drive gain.
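For reference, the new gain is just the old gain scaled by the measured kappa deviation; here is a minimal sketch reproducing the numbers in the script output above (not the actual KappaToDrivealign.py code):

# Sketch of the gain scaling implied by the script output above;
# not the actual KappaToDrivealign.py implementation.
old_gain = 191.711517       # H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN before the change
kappa_tst = 1 - 0.036267    # average CAL-CS_TDEP_KAPPA_TST_OUTPUT was -3.6267% from 1

# Scale the drivealign gain up by the fractional deficit of kappa_tst from 1
# so the effective TST actuation strength returns to unity.
new_gain = old_gain * (1.0 + (1.0 - kappa_tst))

# Prints 191.7115 -> 198.6643 (matches the script output up to rounding
# of the quoted -3.6267%).
print(f"{old_gain:.4f} -> {new_gain:.4f}")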
Following our usual instructions.
Simulines start:
PST: 2024-12-05 08:35:51.659267 PST
UTC: 2024-12-05 16:35:51.659267 UTC
GPS: 1417451769.659267
Simulines end:
PST: 2024-12-05 08:59:32.899615 PST
UTC: 2024-12-05 16:59:32.899615 UTC
GPS: 1417453190.899615
2024-12-05 16:59:32,822 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20241205T163552Z.hdf5
2024-12-05 16:59:32,830 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20241205T163552Z.hdf5
2024-12-05 16:59:32,834 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20241205T163552Z.hdf5
2024-12-05 16:59:32,838 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20241205T163552Z.hdf5
2024-12-05 16:59:32,843 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20241205T163552Z.hdf5
ICE default IO error handler doing an exit(), pid = 911686, errno = 32
We should return to Observing at 20 UTC.
Lock loss at 17:01 UTC, caused by commissioning. It ended a 37 hr 21 min lock, the 3rd longest in O4b.
TITLE: 12/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY: Locked for 36 hours! Looks like we just made it through a 5.6 from Alaska, as well as some smaller quakes in CA. Otherwise a calm environment and no alarms.
TITLE: 12/05 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Very quiet shift with H1 observing throughout; current lock stretch is now over 26 hours. Only thing to report is that the SQZ_PMC Guardian has occasionally been saying the PMC PZT volts are low; currently around 6V and has been low for a few hours (see attached trend), but hasn't unlocked.
FAMIS 28455, last checked in alog81106
Only things of note on these trends are that the ITMX spherical power has been dropping for the past week and a half, which Camilla agrees looks strange, and the ITMY SLED power has reached the lower limit of 1, so it will need replacing soon. Everything else looks normal compared to last check.
Two plots attached: one with the ITMX V, P, Y OSEMs over the same time frame as Ryan's plots. This leads me to believe that the spherical power change is just from normal ITMX movement. The second plot is a year+ trend, and I'd say it shows this is normal movement.
What caught my eye with these plots is that the CO2Y power has increased in the last few weeks. We've seen that happen after it warms up and relocks, but this seems to be trending that way after a few relocks. Not sure how to explain that one.
Also, it looks like the flow for CO2X isn't as stable as Y's, worth keeping an eye on.
Although ITMY POWERMON is below 1 (trend), the powermon seems to read lower than ITMX's; see 73371 where they were last swapped in October 2023: both fibers had 2.5mW out, but SLEDPOWERMON recorded 5 vs 3.
I checked that the data still makes sense and isn't considerably noisier: now vs after replacement. Maybe we can stretch out the life of the SLEDs for the end of O4 but should keep an eye on them.
You can see that the spherical power from ITMX is offset from zero, so we should take new references soon.
TITLE: 12/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY: H1 is running smoothly, has been observing for 20+ hours.
TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Locked for 20 hours. Very quiet shift with nothing to report.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:43 | FAC | Karen | Opt lab | n | Tech clean | 15:58 |
| 16:51 | FAC | Chris, Eric | EX | n | Fan bearing replacement in mech room | 19:49 |
| 18:12 | FAC | Kim | H2 encl | n | Tech clean | 19:02 |
| 18:58 | VAC | Janos | Opt lab | n | Vac checks | 19:04 |
FAMIS 21691
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Wed Dec 04 10:07:33 2024 INFO: Fill completed in 7min 30secs
Gerardo confirmed a good fill.
FAMIS 26019
I somehow missed this email last week, so it's a week late. Since the last time this was run on Nov 10, the main differences are:
Camilla C., TJ
Recently, the CO2Y laser that we replaced on Oct 22 has been struggling to stay locked for long periods of time (alog81271 and trend from today). We've found some loose or bad cables in the past that have caused us issues, so we went out to the table today to double-check that they are all okay.
The RF cables that lead into the side of the laser can slightly impact the output power when wiggled, in particular the ones with a BNC connector, but not to the point that we think it would be causing issues. The only cable that we found loose was for the PZT that goes to the head of the laser. The male portion of the SMA that comes out of the laser head was loose and cannot be tightened from outside of the laser. We verified that the connection from this to the cable was good, but wiggling it did still introduce glitches in the PZT channel. I don't think that we've convinced ourselves that this is the problem though, because the PZT doesn't seem to glitch when the laser loses lock; instead it runs away.
An unfortunate consequence of the cable wiggling was that one of the Beckhoff plugs at the feedthrough must have been unseated slightly, which caused our mask flipper readbacks to read incorrectly. The screws for this plug were not working, so we just pushed the plug back in to fully seat it and all seemed to work again.
We are still not sure why we've been having these lock losses lately; the 2nd and 3rd attachments show a few of them from the last day or so. They remind me of back in 2019 when we saw this - example1, example2. The fix then was ultimately a chiller swap (alog54980), but the flow and water temp seem more stable this time around. Not completely ruling it out yet, though.
We've only had two relocks in the last two weeks since we readjusted the cables. This is within its normal behavior. I'll close FRS32709 unless this suddenly becomes unstable again. Though there might be a larger problem of laser stability, I think closing this FRS makes sense since it references a specific instance of instability.
Both X & Y tend to have periods with long stretches where they don't relock, and periods where they have issues staying locked (attachment 2). Unless there are obvious issues with chiller supply temperature, loose cables, wrong settings, etc., I don't think we have a great grasp of why it loses lock sometimes.
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
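As an illustration of this kind of time-frequency comparison, here is a minimal sketch that tracks the peak frequency in a narrow band in two channels and compares the tracks (synthetic data and assumed parameters; not the actual analysis used here):

# Minimal sketch of comparing the time-frequency track of a narrow spectral
# peak in two channels (e.g. DARM and a microphone). Synthetic data for
# illustration only; in practice the two arrays would be DARM and microphone
# time series fetched from the frames.
import numpy as np
from scipy import signal

fs = 1024                        # Hz (assumed sample rate)
t = np.arange(0, 600, 1 / fs)    # ten minutes of data

# A common, slowly wandering tone near 60 Hz plus independent noise per channel
f_drift = 60 + 0.2 * np.sin(2 * np.pi * t / 300)
phase = 2 * np.pi * np.cumsum(f_drift) / fs
chan_a = np.sin(phase) + 0.5 * np.random.randn(t.size)   # stand-in for DARM
chan_b = np.sin(phase) + 2.0 * np.random.randn(t.size)   # stand-in for a microphone

def peak_track(x, fs, band=(55.0, 65.0), nperseg=4096):
    """Peak frequency inside `band` for each spectrogram time bin."""
    f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=nperseg)
    sel = (f >= band[0]) & (f <= band[1])
    return tt, f[sel][np.argmax(Sxx[sel, :], axis=0)]

tt, track_a = peak_track(chan_a, fs)
_, track_b = peak_track(chan_b, fs)
print("max difference between the two peak tracks [Hz]:",
      np.max(np.abs(track_a - track_b)))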
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but this made the chiller wobbly, so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much, as we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before that date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam from under the CO2Y chiller (it was flattened and no longer providing any damping) to new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0gpm, so our only change was a lower flow rate. The X chiller has been at 3.7gpm already, and the manual states that these chillers shouldn't be run below 3.8gpm, though this was a small note in the manual that could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ adjusted the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 GPM. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3gpm. This should push the chiller peak lower in frequency and further away from the Crab band.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering the flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before the 1st flow reduction), December 16 (before the most recent flow reduction) and December 18 (after the most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18
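For anyone wanting to reproduce this kind of zoom, here is a minimal gwpy sketch along these lines (the channel name, GPS times, and band edges are assumptions; the attached plots come from the usual detchar tooling, not this snippet):

# Minimal sketch of an h(t) ASD zoom near the Crab band, in the spirit of the
# attached plots (not the tooling that produced them). The channel name,
# GPS times, and band edges below are assumptions for illustration.
from gwpy.timeseries import TimeSeries

channel = "H1:GDS-CALIB_STRAIN"        # assumed h(t) channel
start, end = 1418500000, 1418507200    # placeholder GPS times, ~2 hours

data = TimeSeries.get(channel, start, end)
asd = data.asd(fftlength=512, overlap=256)   # ~2 mHz frequency resolution

# Zoom on the band around twice the Crab spin frequency (~59.2-59.3 Hz assumed)
zoom = asd.crop(59.0, 59.6)
plot = zoom.plot()
plot.gca().set_ylabel("Strain ASD [1/sqrt(Hz)]")
plot.savefig("crab_band_zoom.png")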