For about 50 minutes starting at 01:45 PST this morning we had another jump of the EY CNS-II reference GPS 1PPS compared with the LIGO timing. The previous jump was Mon 18 Nov 2024 05:55. Prior to that there had been no jumps for the 3 weeks since the power supply was replaced.
We had a slight mains power glitch at 07:00:26 PST this morning; the GC UPS reported being on battery power for 4 seconds.
Attached plot shows all three phases in the corner station CER.
Richard turned back on the high voltage and plugged back in the DB37 cable that was unplugged yesterday afternoon.
TITLE: 11/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM_EARTHQUAKE
Wind: 17mph Gusts, 11mph 3min avg
Primary useism: 0.40 μm/s
Secondary useism: 1.56 μm/s
QUICK SUMMARY:
Since the secondary microseism has been very high this evening and preventing H1 from locking, we decided to leave just the PMC locked (no FSS, ISS, or IMC) for an extended time and watch for any glitches. At around 23:45 UTC, we unlocked the FSS, Richard turned off the high voltage supply for the FSS, and Jason and I unplugged the DB37 cable from the FSS fieldbox in the PSL-R1 rack in order to ensure no feedback from the FSS made it to the NPRO. Pictures of the DB37 cable's location are attached.
The first attachment shows the changes seen when the FSS was unlocked. Since then, I've seen several instances of groups of glitches come through, such as those shown in the second and third attachments. These glitches in the PMC_MIXER channel are smaller than ones seen previously that have unlocked the IMC (like in alog81228). There have also been times when the PMC_MIXER channel gets "fuzzier" for a bit and then calms down, shown in the fourth attachment; possibly this is because the NPRO frequency is not being controlled, so the PMC sees some frequency changes. Finally, I only recall one instance of the NPRO jumping in power like in the final attachment; the PMC doesn't seem to care much about this, showing only one very small glitch at that time.
I'll leave the PSL in this configuration to collect more data overnight as the secondary microseism is still much too high for H1 to successfully lock.
A zoom-in on some of the glitches from the third figure above.
After Ryan's shift ended last night, there were some larger glitches, with amplitudes in the PMC mixer channel similar to the ones we saw unlock the reference cavity (81356) and the IMC (81228).
The first plot shows one of these times with larger glitches; the second zooms in on 60 ms when the glitches were frequent, which looks fairly similar to Peter's plot above.
The period of large glitches started around 2 am (7:37 UTC on Nov 20th) and ended when a power glitch turned off the laser at 7 am (15 UTC) 81376. Some of the small glitches in that time frame seem to be at the same time that the reference cavity was resonating (with low transmission), but many of the large glitches do not line up with times when the reference cavity was resonating.
I've zoomed in on most of the times when the PMC mixer glitches reached 0.1, and see that there are usually small jumps in NPRO power at the time of these glitches, although the times don't always line up well, and the small power glitches happen very often, so this might be a coincidence.
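For reference, here is a minimal sketch of the kind of threshold scan used to flag these times, assuming the PMC mixer data have already been pulled into a NumPy array (the channel fetching, sample rate, and grouping window below are illustrative, not the exact procedure used):

```python
import numpy as np

def find_glitch_times(data, t0, fs, threshold=0.1, min_gap=0.5):
    """Return times (s) where |data| crosses the threshold.

    Crossings closer together than min_gap seconds are grouped
    into a single glitch.
    """
    hits = np.flatnonzero(np.abs(data) >= threshold)
    times = []
    last = -np.inf
    for i in hits:
        t = i / fs
        if t - last >= min_gap:
            times.append(t0 + t)
        last = t
    return times

# Synthetic example: background noise plus two injected 0.1+ glitches
fs = 16384.0
mixer = 0.01 * np.random.randn(int(10 * fs))
mixer[int(2.5 * fs)] += 0.15
mixer[int(7.0 * fs)] -= 0.12
print(find_glitch_times(mixer, t0=0.0, fs=fs))  # ~[2.5, 7.0]
```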
Sheila, Jason, Vicky - Compared the PSL + PMC mixer glitches between last night (Nov 22, 2024; no FSS, no ISS) and the emergency vent (Aug 2024; PSL + PMC + FSS + ISS), as in 81354.
As a reference, "before" during the emergency vent in August 2024, the Laser + PMC + FSS + ISS were all locked with no PMC mixer glitches for >1 month.
Updating our matrix of tests to isolate the problem, and thinking things through:
| Before (vent Aug 2024) | Now (Nov 2024) |
|---|---|
| laser + PMC + FSS + ISS good = no glitches | laser + PMC + FSS bad = glitches 81356 |
| laser + PMC ??? (presumably good) | laser + PMC bad = same PMC mixer glitches 81371 |
1) Are these +/-0.1 V PMC mixer glitches the problem? Yes, probably.
2) Are these big PMC mixer glitches caused or worsened by the FSS? No. PMC mixer glitches basically same with FSS on 81356 and off 81371.
3) Are the laser + PMC mixer glitches new? Yes, probably. If these PMC glitches were always there, could it be that we were previously able to ride them out, and now we can't? But that would imply that, in addition to the new glitches, the FSS secondarily degraded. That seems very unlikely: several FSS problems have already been fixed (bad amp, new EOM needing a new notch, etc.), and the FSS OLTFs and in-loop spectra look good. FSS on/off does not change the PMC mixer glitches, so the problem is most likely the laser or the PMC.
Sheila, Daniel, Jason, Ryan S, many others
We think the problem is not the PMC, and likely the laser.
Daniel looked at PMC mixer glitches on the remote scope: 81390. If PMC mixer glitches are indicative of the problem, we can try to track down the origin of the glitches.
Talking with Jason and RyanS, some summary of the NPROs available:
TITLE: 11/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: None
SHIFT SUMMARY: H1 remained unlocked the whole evening due to very high secondary microseismic motion, and there is no plan to attempt locking overnight. In the meantime, we took the opportunity to leave just the PMC locked to look for glitches (more details in alog81371). Very quiet evening otherwise with nothing else to report.
At 17:52:10 PST Tue 19 Nov 2024 we had another sharp positive glitch in the BSC3 vacuum gauge signal (PT132_MOD2). VACSTAT did not alarm because no other gauge tripped around this time. The glitch was a 2 second wide square wave, and as before it was the delta-P which tripped.
VACSTAT was reset at 18:39 to restore it to its monitoring state.
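For readers unfamiliar with the trip logic described above, here is a rough illustration of the idea only, not the actual VACSTAT code; the dataclass, coincidence window, and values are made up:

```python
from dataclasses import dataclass

@dataclass
class GaugeTrip:
    gauge: str   # gauge label, e.g. "PT132"
    time: float  # GPS time of the delta-P trip

def should_alarm(trips, window=10.0):
    """Alarm only if two different gauges trip within `window` seconds.

    A lone delta-P trip on a single gauge (like the 2 s square-wave
    glitch above) is noted but does not raise an alarm.
    """
    for i, a in enumerate(trips):
        for b in trips[i + 1:]:
            if a.gauge != b.gauge and abs(a.time - b.time) <= window:
                return True
    return False

print(should_alarm([GaugeTrip("PT132", 1415900000.0)]))  # False: single gauge
```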
I started testing the PCALX_STAT Guardian node today.
/opt/rtcds/userapps/release/cal/h1/guardian/PCALX_STAT.py
It created a new .ini file, but the DAQ was not restarted after the new file was created.
As it currently stands, this is a draft of the final product that will be tested for a week and further refined.
This Guardian node does not make any changes to the IFO; its only job is to determine whether the PCALX arm is broken or not. TJ has already added it to the Guardian Ignore list.
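For context, here is a minimal sketch of what a monitor-only Guardian node of this kind can look like. This is not the contents of PCALX_STAT.py; the state names, channel, and threshold below are placeholders, and the module only runs under the Guardian daemon (which supplies ezca):

```python
# Illustrative monitor-only Guardian node (NOT the actual PCALX_STAT.py).
# It only reads channels and reports a status; it never writes to the IFO.
from guardian import GuardState

nominal = 'PCALX_OK'

class INIT(GuardState):
    def main(self):
        return True

class PCALX_OK(GuardState):
    request = True
    def run(self):
        # Placeholder health check: jump to PCALX_BROKEN if a Pcal X
        # readback drops below a placeholder threshold.
        if ezca['CAL-PCALX_TX_PD_OUT16'] < 100:  # placeholder channel/threshold
            return 'PCALX_BROKEN'
        return True

class PCALX_BROKEN(GuardState):
    request = False
    def run(self):
        if ezca['CAL-PCALX_TX_PD_OUT16'] >= 100:  # placeholder
            return 'PCALX_OK'
        return True

edges = [
    ('INIT', 'PCALX_OK'),
    ('PCALX_OK', 'PCALX_BROKEN'),
    ('PCALX_BROKEN', 'PCALX_OK'),
]
```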
TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Secondary microseism has been consistently high (above the 90th percentile) all day, rising even more in the last few hours of the shift to reach the top of the plot. It will likely get even worse tomorrow as the storm off the Pacific coast closes in. Wind has been pretty low today. ITMY5/6 is still kind of high.
As of 23:30 UTC we're back to sitting idle; we unlocked the FSS, ISS, and IMC, leaving just the PMC locked.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 17:55 |
| 16:08 | FAC | Eric | EndX | N | Glycol piping famis task, upright fallen portapotty by FCES | 16:50 |
| 16:14 | TCS | TJ, Camilla | LVEA | Y | CO2Y table cable jiggling | 17:01 |
| 16:17 | FAC | Kim & Karen | FCES | N | Tech clean | 16:31 |
| 16:19 | FAC | Tyler | LVEA | N | Verify strobes in H2/PSL area | 16:28 |
| 16:20 | FAC | Tyler + Cascade Fire | OSB, Ends | N | Fire alarm testing | 18:54 |
| 16:27 | FAC | Tyler, Tristan | EndX | N | Fire alarm testing | 18:47 |
| 16:30 | FAC | Kim | EndX | N | Tech clean | 17:19 |
| 16:31 | FAC | Karen | EndY | N | Tech clean | 17:35 |
| 17:00 | ISC | Sheila | LVEA | Y | Unplug ISS Feed forward | 17:08 |
| 16:34 | VAC | Jordan | LVEA | Y->N | Prep new scroll pumps, gauge swap | 20:04 |
| 16:41 | FAC | Chris + Fire | LVEA | Y | Check fire ext. | 17:10 |
| 16:59 | VAC | Gerardo | LVEA | Y->N | Join Jordan, Scroll pumps | 20:04 |
| 17:03 | OPS | Camilla | LVEA | Y -> N | LASER HAZARD transition | 17:20 |
| 17:08 | FAC | Chris + Cascade fire | EndY | N | Fire checks, End then mid | 18:54 |
| 17:10 | VAC | Janos | LVEA | Y | Join VAC team | 17:53 |
| 17:20 | CDS | Marc, Fil | EndX | N | DAC swap | 18:12 |
| 16:45 | EPO | Corey+1 | EndY | N | Pictures | 17:15 |
| 17:30 | FAC | Kim | FCES | N | Tech clean | 18:00 |
| 17:32 | VAC | Janos | LVEA | N | Pump/Gauge work | 18:35 |
| 17:36 | FAC | Karen | FCES | N | Tech clean | 18:00 |
| 17:42 | OPS | Oli | LVEA | N | Grab power meter | 17:47 |
| 17:47 | SEI | Jim | CR | N | BS and ITM sei tests | 18:22 |
| 17:00 | ISC | Daniel | LVEA | Y | Investigations, scope installation | 17:52 |
| 17:56 | SAF | LASER | LVEA | N | LVEA is LASER SAFE | 20:46 |
| 17:57 | ALS | Camilla, Oli | EndY | Y | Beam profiling | 19:54 |
| 18:01 | FAC | Richard | LVEA | N | Walk around | 18:21 |
| 18:02 | ISC | Keita | LVEA | N | Cable checks | 18:29 |
| 18:16 | ISC | Sheila | CER | N | Plug and unplug FF cable | 18:40 |
| 18:22 | VAC | Janos | EndX then EndY | N | Scroll pumps | 19:25 |
| 18:22 | SEI | Jim | LVEA Biergarten / CR | N | working on Seismic sub systems | 19:55 |
| 17:30 | EPO | Corey+1 | LVEA | N | B-Roll | 18:25 |
| 18:37 | FAC | Kim & Karen | LVEA | N | Tech clean, Kim out 19:37 | 19:53 |
| 18:54 | FAC | Tyler+Cascade | Mids | N | Fire checks | 20:20 |
| 19:25 | VAC | Janos | LVEA | N | Join VAC team | 20:04 |
| 19:56 | SEI | Jim | LVEA | N | Take pictures | 20:06 |
| 20:03 | OPS | Oli | LVEA | N | Return power meter | 20:11 |
| 20:11 | CAL | Tony | PCAL lab | Y | Make it laser safe | 21:07 |
| 20:40 | CAL | Francisco | PCAL lab | Y | PCAL work | 22:28 |
| 20:43 | OPS/TCS | TJ | LVEA | N-> Y -> N | HAZARD TRANSITION then CO2Y table adjustment, back to safe | 21:13 |
| 20:48 | TCS | Camilla | LVEA | Y | Join TJ, CO2Y table | 21:13 |
| 21:17 | SAF | LVEA LASER SAFE | LVEA | N | LVEA IS LASER SAFE | 01:17 |
| 21:31 | ISC | Keita | LVEA | N | AS_B SEG3 testing | 21:49 |
| 21:47 | OPS | Oli | LVEA | N | Sweep | 22:13 |
| 23:37 | PSL | RyanS, Jason | CER, PSL racks | N | Pull cable | |
IFO/Locking:
Some of the maintenance work completed includes:
We did not do a DAQ restart
Oli, Camilla WP12203. Repeat of some of the work done in 2019: EX: 52608, EY: 52636, older: part 1, part 2, part 3.
We misaligned ITMY and turned off the ALS-Y QPD servo with H1:ALS-Y_PZT_SWITCH and placed the Ophir Si scanning slit beam profiler to measure both the 532nm ALSY outgoing beam and the ALSY return beam in the HWS path.
The outgoing beam was a little oblong in the measurements but looked pretty clean and round by eye; the return beam did not! Photos of the outgoing and return beams are attached. The outgoing beam was 30mW, the return beam 0.75mW.
Attached are the 13.5% and D4sigma measurements; I also have photos of the 50% measurements if needed. Distances are measured from the optic where the HWS and ALS beams combine, ALS-M11 in D1400241.
We had previously removed HWS-M1B and HWS-M1C and translated HWS-M1A from what's shown in D1400241-v8 to remove clipping.
TJ, Camilla
We expanded on these measurements today and measured the positions of the lenses and mirrors in both ALS and HWS beampaths and took beamscan data further from the periscope, where the beam is changing size more. Data attached for today and all data together calculated from the VP. Photo of the beamscanner in the HWS return ALS beam path also attached.
Oli, Camilla
Today we took some beam measurements between ALS-L6 and ALS-M9. These are in the attached documents with today's data and all the data. The horizontal A1 measurements seemed strange before L6. We're unsure why, since further downstream, where the beam is larger and easier to see by eye, it looks round.
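As context for how scan data like these become a waist estimate: the scanning slit's 13.5% width is the 1/e² width (e⁻² ≈ 0.135), and for a Gaussian beam the D4σ diameter equals the 1/e² diameter, so either can be fit to the standard beam-expansion formula. Below is a hedged sketch of such a fit; the distances and radii are made-up placeholders rather than our measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

LAM = 532e-9  # ALS wavelength [m]

def w_of_z(z, w0, z0):
    """Gaussian beam 1/e^2 radius vs distance z from a reference point."""
    zR = np.pi * w0**2 / LAM
    return w0 * np.sqrt(1 + ((z - z0) / zR) ** 2)

# Placeholder data: distance from the reference optic [m], 1/e^2 radius [m]
z = np.array([0.2, 0.5, 0.8, 1.1, 1.4])
w = np.array([410e-6, 460e-6, 560e-6, 690e-6, 840e-6])

(w0, z0), _ = curve_fit(w_of_z, z, w, p0=[300e-6, 0.0])
print(f"fitted waist w0 = {w0 * 1e6:.0f} um at z0 = {z0:.2f} m")
```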
TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 10mph Gusts, 8mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 1.47 μm/s
QUICK SUMMARY: Due to very rapidly rising microseism, H1 will not be locking this evening or overnight, which allows us to run a long test for PSL glitch hunting by leaving just the PMC locked (no FSS, ISS, IMC). The DB37 cable was unplugged from the FSS fieldbox at 23:45 UTC.
Camilla C., TJ
Recently, the CO2Y laser that we replaced on Oct 22 has been struggling to stay locked for long periods of time (alog81271 and trend from today). We've found loose or bad cables in the past that have caused us issues, so we went out on the table today to double check that they are all ok.
The RF cables that lead into the side of the laser can slightly impact the output power when wiggled, in particular the ones with a BNC connector, but not to the point that we think it would be causing issues. The only cable that we found loose was for the PZT that goes to the head of the laser. The male portion of the SMA that comes out of the laser head was loose, and cannot be tightened from outside of the laser. We verified that the connection from this to the cable was good, but wiggling it did still introduce glitches in the PZT channel. I don't think we've convinced ourselves that this is a problem though, because the PZT doesn't seem to glitch when the laser loses lock; instead it will run away.
An unfortunate consequence of the cable wiggling was that one of the Beckhoff plugs at the feedthrough must have been unseated slightly and caused our mask flipper read backs to read incorrectly. The screws for this plug were not working so we just pushed the plug back in to fully seat it and all seemed to work again.
We still are not sure why we've been having these lock losses lately; the 2nd and 3rd attachments show a few of them from the last day or so. They remind me of back in 2019 when we saw this - example1, example2. The fix was ultimately a chiller swap (alog54980), but the flow and water temp seem more stable this time around. Not completely ruling it out yet though.
We've only had two relocks in the two weeks since we readjusted the cables. This is within its normal behavior. I'll close FRS32709 unless this suddenly becomes unstable again. Though there might be a larger problem of laser stability, I think closing this FRS makes sense since it references a specific instance of instability.
Both X&Y tend to have periods of long stretches where they don't relock, and periods where they have issues staying locked (attachment 2). Unless there are obvious issues with chiller supply temperature, loose cables, wrong settings, etc. I don't think that we have a great grasp as to why it loses lock sometimes.
LVEA has been swept. Just had to unplug a(n already off) PEM function generator and turn off the lights for the HAM5/6 giant cleanroom.
Today we swapped the PT154 gauge on the FC-A section that was reported faulty in alog 81078 with a brand new gauge, same make/model.
FCV1 & 2 were closed to isolate the FCA section, and the angle valve on the A2 cross closed to isolate the gauge. Volume was vented with dry nitrogen and gauges swapped. CF connection was helium leak checked, no He signal above the HLD background ~2e-10 Torr-l/s.
Closing WP 12200
Related alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81320
Some time ago (July 9, 2024) the analog DC gain of H1:ASC-AS_B_DC was set from "High" to "Low" when I was looking to extract more information from the WFSs about the power coming past the Fast Shutter when the shutter bounces down after high-power lock losses. Since this is not necessary any more (see plots in alog 81130), and since keeping it means we have to either adjust the dark offset once in a while and/or change the "shutter closed" threshold for the Fast Shutter test (alog 81320), I set the gain switch back to "HIGH" and disabled the +20dB digital gain in H1:ASC-AS_B_DC_SEG[1234] FM4.
Interestingly, when I flipped the gain switch, the SEG3 output didn't change (1st attachment). It could be that this particular DC channel is stuck at HIGH or LOW, or it could be that the analog offset happens to be really small for that channel. I cannot test it right now as we're leaving the IMC unlocked. As soon as the light comes back I will go to the floor and test.
The second attachment is the relevant MEDM screen (that shows the state it should be in now) and the third one is the picture of the interface chassis (the switch I flipped is circled in red, and it's supposed to be HIGH now).
After the IMC was relocked I went to the floor, opened the fast shutter, switched the analog gain switch back and forth several times and confirmed that all segments responded correctly.
I accepted the change in SDF in both SAFE and OBSERVE to keep FM4 off.
It looks like usually the PSL PMC + FSS can stay locked for long periods of time, for example >1 month over the emergency vent in August. See the attached trends of the # of locked days and hours in the last few months (ndscope saved as /ligo/home/victoriaa.xu/ndscope/PSL/pmc_fss_can_stay_locked_over_emergency_o4b_vent.yaml).
The last time the PMC was locked for almost a month ended on Sept 23, 2024. Since then the PSL PMC has not stayed locked for over 5 days, but this is most likely due to commissioning tests and debugging which started around then.
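As an aside on how locked stretches like these can be tallied, here is a hedged sketch of the segment-counting step, assuming a 0/1 lock flag has already been fetched and decimated (the actual PMC/FSS lock channels and the fetching are omitted):

```python
import numpy as np

def locked_segments(flag, fs):
    """Return durations (s) of contiguous flag == 1 stretches."""
    flag = np.asarray(flag, dtype=int)
    edges = np.diff(np.concatenate(([0], flag, [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return (ends - starts) / fs

# Toy example at 1 Hz: two locked stretches, 3 s and 5 s long
print(locked_segments([0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0], fs=1.0))
```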
Some trends showing several PSL signals over O4b, and the end of O4a.
Key points- PMC + FSS stayed locked continuously during both the O4a/b break (Feb 2024), and the emergency vent (Aug 2024), with minimal glitching in PMC TRANS and Ref Cav Trans PD (FSS-TPD).
Trying to compare PSL-related spectra between the original O4 NPRO and the current NPRO using DTT is kinda confusing.
Comparing a time from before the swap with the O4 NPRO, 21 Oct 2024 04:55 UTC (blue, 169 Mpc), against now with the O3 NPRO, 16 Nov 2024 08:20 UTC (red, 169.9 Mpc).
For the fast mon spectra in the top left plot, FSS FAST MON OUT looks 2-5x noisier now than before. H1:PSL-FSS_FAST_MON_OUT_DQ calibrated into Hz/rtHz using zpk = ([], [10], [1.3e6]) (see 81251, this makes H1 and L1 spectra into comparable Hz/rtHz units 81210).
But for FSS-TPD (ref cav trans), it looks like there's some extra 1-10 Hz noise, though otherwise the trans spectrum might be quieter. Similarly confusing, the ISS AOM control signal looks quieter. There's no clear takeaway from these spectra alone on how to compare the O3/O4 NPROs.
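For anyone reproducing the fast-mon comparison outside DTT, here is a hedged sketch of applying the zpk = ([], [10], [1.3e6]) calibration quoted above to an uncalibrated ASD. The data are synthetic, the sample rate is assumed, and the gain-normalization convention (unity at DC for the single pole) is an assumption about how the zpk is meant to be read, not something taken from DTT itself:

```python
import numpy as np
from scipy import signal

fs = 16384.0                       # assumed sample rate of the DQ channel [Hz]
x = np.random.randn(int(60 * fs))  # stand-in for H1:PSL-FSS_FAST_MON_OUT_DQ data

# Uncalibrated ASD (raw channel units / rtHz)
f, pxx = signal.welch(x, fs=fs, nperseg=int(16 * fs))
asd_raw = np.sqrt(pxx)

# One real pole at 10 Hz with gain 1.3e6, normalized to unity at DC
pole = 10.0
cal_mag = 1.3e6 / np.sqrt(1.0 + (f / pole) ** 2)

asd_hz = asd_raw * cal_mag         # ASD in Hz/rtHz
```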
2024 Nov 12
Neil, Fil, and Jim installed an HS-1 geophone in the biergarten (image attached). The HS-1 is threaded to a plate, and the plate is double-sided taped to the floor. The signal was essentially non-existent; a pre-amplifier must be installed to boost it.
2024 Nov 13
Neil and Jim installed an amplifier (SR560) to boost the HS-1 signal (images attached). The circuitry was checked to ensure the signal makes it to the racks. However, when left alone there is no signal coming through (image attached; see the blue line labelled ADC_5_29). We suspect the HS-1 is dead. The HS-1 and amplifier are now out of the LVEA; the HS-1's baseplate is still installed. We can check one or two more things, or wait for more HS-1s to compare.
Fil and I tried again today; we couldn't get this sensor to work. We started from the PEM rack in the CER, plugging the HS1 through the SR560 into the L4C interface chassis and confirming the HS1 would see something when we tapped it. We then moved out to the PEM bulkhead by HAM4 and again confirmed the HS1/SR560 combo still showed signal when tapping the HS1. Then we moved to the biergarten and plugged in the HS1/SR560 right next to the other seismometers. While watching the DAQ readout of the HS1 and one of the Guralps I have connected to the PEM AA, we could see that both sensors responded when I slapped the ground near the seismometers, but the signal was barely above what looks like electronics noise on the HS1, while the Guralp showed lots of signal that looked like ground motion. We tried gains from 50-200 on the SR560; none of them really seemed to improve the SNR of the HS1. The HS1 is still plugged in overnight, but I don't think this particular sensor is going to measure much ground motion.
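To put "barely above electronics noise" on a more quantitative footing, here is a hedged sketch of the comparison we were doing by eye, assuming the two raw DAQ time series have already been pulled into arrays (the fetching and real channel names are omitted, and the sample rate is a placeholder):

```python
import numpy as np
from scipy import signal

def asd(x, fs, seglen=64.0):
    """Amplitude spectral density via Welch averaging."""
    f, pxx = signal.welch(x, fs=fs, nperseg=int(seglen * fs))
    return f, np.sqrt(pxx)

fs = 256.0
hs1 = np.random.randn(int(600 * fs))     # stand-in for the HS1/SR560 readout
guralp = np.random.randn(int(600 * fs))  # stand-in for the adjacent Guralp

f, a_hs1 = asd(hs1, fs)
_, a_gur = asd(guralp, fs)

# If the HS1 were seeing real ground motion, a_hs1 should track the shape of
# a_gur (scaled by the sensor responses) rather than sitting flat at the
# electronics-noise floor across the microseism band.
ratio = a_hs1 / a_gur
```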
One check for broken sensors - A useful check is to be sure you can feel the mass moving when the HS-1 is in the correct orientation. A gentle shake in the vertical, inverted, and horizontal orientations will quickly reveal which orientation is correct.
After discussions with control room team: Jason, Ryan S, Sheila, Tony, Vicky, Elenna, Camilla
Conclusions: The NPRO glitches aren't new. Something changed so that we are less able to survive them while in lock. The NPRO was swapped, so it isn't the issue. Looking at the timing of these "IMC" locklosses, they are caused by something in the IMC or upstream (81155).
Tagging OpsInfo: Premade templates for looking at locklosses are in /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/PSL_lockloss_search_fast_channels.yaml and will come up with command 'lockloss select' or 'lockloss show 1415370858'.
Updated the list of things that have been checked above and attached a plot where I've split the IMC-only tagged locklosses (orange) from those tagged both IMC and FSS_OSCILLATION (yellow). The non-IMC ones (blue) are the normal lock losses and are (mostly) the only kind we saw before September.