The filter_bank_changes script, which shows how the filters and on/off switches within a filter bank have changed over a period of time (example output, original alog), is now executable within MEDM screens via right click > Execute > filter_bank_changes. It works similarly to ndscope and fmscan in that you have to select a channel on the MEDM screen. The channel can be any of the channels within the filter bank (e.g. _SWSTAT, _INMON, _Name00, etc.).
If you'd like to run it outside of an MEDM screen, the script can be found in /ligo/gitcommon/filterBankChanges/ and can be run with ./filterbankchanges.sh CHANNEL_NAME, where CHANNEL_NAME should end in _SWSTAT or one of the other aforementioned filter bank channel endings.
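For reference, here is a minimal sketch of the underlying idea, not the actual script: fetch a filter bank's _SWSTAT record over a span and print the times its value changed. The channel name and GPS times below are placeholder examples only.

# Minimal sketch (not filterbankchanges.sh itself): list the times a filter
# bank's _SWSTAT record changed value. Channel and GPS span are placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

channel = "H1:SUS-ETMX_L3_LOCK_L_SWSTAT"   # hypothetical example channel
start, end = 1412000000, 1412003600        # example GPS span

data = TimeSeries.get(channel, start, end)
values = data.value
changes = np.where(np.diff(values) != 0)[0] + 1

for i in changes:
    t = data.times.value[i]
    print(f"{t:.0f}: SWSTAT changed from {values[i-1]:.0f} to {values[i]:.0f}")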
Rick, Keita, Sheila, Daniel, remote help from Matt Heintze
This morning we swapped the NPRO laser controller S2200009 out for S2200008.
Settings before we started: Laser diode A set temperature 18.3C, Laser diode B set temperature 15.99C, laser diode injection current 2.134 Amps, laser crystal set temperature 26.04 C, laser crystal actual temperature 26.10 C.
We went to the diode room and followed the notes from the procedure Ryan Short outlined to me to shut down the amplifiers: turning off amp2, then amp1, then closing the shutter. We then went one step beyond his instructions and shut off the NPRO. We swapped the controller along with all the cables, including the interlock.
When we powered on S2200008, the laser diode A temperature was set to 17.14C and B to 22.96C. We adjusted the pots on the front panel until they matched the values we had written down from the original controller, and turned the injection current knob on the front panel to 0. Rick and I went to the laser diode room and turned the NPRO back on, and Keita confirmed that this turned on the controller. We noticed that the laser diode 1 and 2 set temperatures were what we had set them to by adjusting the pots for A and B, but the actual temperature readbacks weren't matching; we confirmed with photos that with the original controller the set and actual temperatures matched (I will attach a photo of this state). At the rack, Keita turned the injection current up to about 100mA, which didn't change the temperature readbacks.
We had a discussion with Matt Heintze, who agreed this is probably a readback issue and that it was safe to keep going. We think this is because we haven't adjusted the pots on the LIGO daughter board following T1900456. Keita slowly turned up the injection current knob while Rick and I watched from the diode room, and the NPRO power came back to 1.8W, the same as before. The laser diode actual power readback for diode 1 says 3.6W (it was 5W with the original controller), and diode 2 says 2.48W (it also said 5W with the original controller). Since the power output is the same, we think these are also just readback problems due to not following T1900456. We opened the shutter, turned on the amplifiers after a few minutes, and set the power watchdogs back on, as well as the pump diode watchdog.
The PMC and FSS locked without issue. Daniel found that the squeezer laser frequency had to increase by 850MHz to get the squeezer TTFSS locked, so we moved the NPRO temperature up on the FSS by about 0.3 K. After this the squeezer and ALS lasers locked easily, and Oli relocked the IFO.
The first attached photo is the NPRO screen on the beckhoff machine before the controller swap, the second photo is after.
While Oli was relocking we saw that there are still glitches; both of the attached screenshots were taken while the IFO was in the engage ASC state.
Seeing this, and based on Camilla's observation (80561) that the locklosses started on the same day that we redistributed gains to increase range in the IMC servo board (79998), we decided to revert the change, which was intended to help us ride out earthquakes. Daniel points out that this makes some sense: it might help to ride out the earthquake if we limit the range available to the CARM servo. We hope that this will help us ride out the glitches better without losing lock.
Rick went back to the diode room and reset the watchdogs after the laser had been running for about 2 hours.
After going down to swap the NPRO controller (80566), we are back to Observing
As part of yesterday's maintenance, the atomic clock has been resynchronized with GPS. The tolerance has been reduced to 1000ns again. Will see how long it lasts this time.
Wed Oct 09 10:07:43 2024 INFO: Fill completed in 7min 40secs
Gerardo confirmed a good fill curbside. Note new fill time of 10am daily.
I ran the OPLEV charge measurements for both of the ETMs yesterday morning.
ETMX's charge looks to be stagnant.
ETMY's charge looks to be on a small upward trend; the charge is above 50V on LL_{P,Y} and UL_Y.
At 17:03UTC I purposely broke lock so we could start corrective maintenance for the PSL. Not sure how long we will be down. We'll also be taking this opportunity to fix the camera issue that has been taking us out of Observing every hour since 8:25UTC this morning (80556).
I added a tag to the lockloss tool to tag "IMC" if the IMC loses lock within 50ms of the AS_A channel seeing the lockloss: MR !139. This will work for all future locklosses.
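Roughly, the tag compares when the IMC and AS_A signals each react at the lockloss time. Below is a minimal sketch of that kind of comparison, not the actual plugin code from MR !139; the channel names, thresholds, and GPS time are assumptions for illustration.

# Sketch of an "IMC lost lock within 50 ms of AS_A" style check (not the real
# plugin). Channels, thresholds, and the GPS time are illustrative assumptions.
from gwpy.timeseries import TimeSeries

lockloss_gps = 1410300819                 # example lockloss time
imc_chan = "H1:IMC-TRANS_OUT_DQ"          # assumed IMC lock indicator
asa_chan = "H1:ASC-AS_A_DC_NSUM_OUT_DQ"   # assumed AS port indicator

def departure_time(channel, t0, span=1.0, rel=0.5):
    # First time the signal departs from its pre-lockloss level by more
    # than a fractional amount rel.
    data = TimeSeries.get(channel, t0 - span, t0 + span)
    baseline = data.value[: len(data) // 4].mean()   # level well before the lockloss
    moved = abs(data.value - baseline) > abs(baseline) * rel
    return data.times.value[moved.argmax()] if moved.any() else None

t_imc = departure_time(imc_chan, lockloss_gps)
t_asa = departure_time(asa_chan, lockloss_gps)
if t_imc is not None and t_asa is not None and abs(t_imc - t_asa) < 0.050:
    print("tag: IMC")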
I then ran just the refined time plugin on all NLN locklosses from the end of the emergency break on 2024-08-21 until now, in my personal lockloss account. The first lockloss from NLN tagged "IMC" was 2024-09-13 1410300819 (plot here); they then got more and more frequent: 2024-09-19, 2024-09-21, then almost every day since 2024-09-23.
Oli found that we've been seeing PSL/FSS glitches since before June (80520; they didn't check before that). The first lockloss tagged FSS_OSCILLATION (where FSS_FAST_MON > 3 in the 5 seconds before the lockloss) was Tuesday September 17th (80366).
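For context, that tag is just a threshold test on the FSS fast monitor in the 5 seconds leading up to the lockloss. A minimal sketch of that test is below; this is not the lockloss tool's actual code, and the channel name and GPS time are assumptions.

# Sketch of the FSS_OSCILLATION-style test: FSS_FAST_MON exceeds 3 at some
# point in the 5 s before the lockloss. Channel and GPS time are assumptions.
from gwpy.timeseries import TimeSeries

lockloss_gps = 1410300819                # example lockloss time
fss_fast = "H1:PSL-FSS_FAST_MON_OUT_DQ"  # assumed readback channel

data = TimeSeries.get(fss_fast, lockloss_gps - 5, lockloss_gps)
if data.value.max() > 3:
    print("tag: FSS_OSCILLATION")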
Keita and Sheila discussed that on September 12th the IMC gain distribution was changed by 7dB (same output, but gain moved from the input of the board to the output): 79998. We think this shouldn't affect the PSL, but it could possibly have exacerbated the PSL glitching issues. We could try to revert this change if the PSL power chassis swap doesn't help.
On Wednesday 9th Oct the IMC gain redistribution was reverted: 80566. It seems like this change has helped reduce the locklosses from a glitching PSL/FSS but hasn't solved it completely.
TITLE: 10/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY:
Observing and have been Locked for 7 hours. Overnight we went in and out of Observing many times; not sure why yet, but it looks like it wasn't because of squeezing at least.
One of these Observing drops was for 90 seconds as the CO2Y laser unlocked and relocked at a slightly lower power. It is a known issue that the laser is coming to the end of its life, and we plan to swap it out in the coming ~weeks (79560).
Looks like the beamsplitter camera got stuck, causing CAMERA_SERVO to turn the camera servos off and then back on, which worked. Since Dave had just had to restart that camera an hour earlier because it got stuck (80555), I wonder if it came back in a weird state?
ISC_LOCK:
2024-10-09_08:24:57.089769Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_08:25:10.719794Z ISC_LOCK [NOMINAL_LOW_NOISE.run] Unstalling CAMERA_SERVO
2024-10-09_09:25:01.088160Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_10:25:04.090580Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_11:25:07.094947Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_12:25:10.092275Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_13:07:53.531208Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: TCS_ITMY_CO2_PWR: has notification
2024-10-09_13:25:13.087490Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_14:25:16.088011Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
CAMERA_SERVO example:
2024-10-09_14:25:16.020827Z CAMERA_SERVO [CAMERA_SERVO_ON.run] USERMSG 0: ASC-CAM_PIT1_INMON is stuck! Going back to ADS
2024-10-09_14:25:16.021486Z CAMERA_SERVO [CAMERA_SERVO_ON.run] USERMSG 1: ASC-CAM_YAW1_INMON is stuck! Going back to ADS
2024-10-09_14:25:16.078955Z CAMERA_SERVO JUMP target: TURN_CAMERA_SERVO_OFF
2024-10-09_14:25:16.079941Z CAMERA_SERVO [CAMERA_SERVO_ON.exit]
2024-10-09_14:25:16.080398Z CAMERA_SERVO STALLED
2024-10-09_14:25:16.144160Z CAMERA_SERVO JUMP: CAMERA_SERVO_ON->TURN_CAMERA_SERVO_OFF
2024-10-09_14:25:16.144402Z CAMERA_SERVO calculating path: TURN_CAMERA_SERVO_OFF->CAMERA_SERVO_ON
2024-10-09_14:25:16.144767Z CAMERA_SERVO new target: DITHER_ON
2024-10-09_14:25:16.146978Z CAMERA_SERVO executing state: TURN_CAMERA_SERVO_OFF (500)
2024-10-09_14:25:16.148768Z CAMERA_SERVO [TURN_CAMERA_SERVO_OFF.main] ezca: H1:ASC-CAM_PIT1_TRAMP => 0
2024-10-09_14:25:16.149295Z CAMERA_SERVO [TURN_CAMERA_SERVO_OFF.main] ezca: H1:ASC-CAM_PIT1_GAIN => 0
...
2024-10-09_14:25:29.432120Z CAMERA_SERVO [TURN_CAMERA_SERVO_OFF.main] ezca: H1:ASC-CAM_YAW3_SW1 => 4
2024-10-09_14:25:29.683381Z CAMERA_SERVO [TURN_CAMERA_SERVO_OFF.main] ezca: H1:ASC-CAM_YAW3 => OFF: INPUT
2024-10-09_14:25:29.757840Z CAMERA_SERVO REQUEST: CAMERA_SERVO_ON
2024-10-09_14:25:29.757840Z CAMERA_SERVO STALL cleared
2024-10-09_14:25:29.757840Z CAMERA_SERVO calculating path: TURN_CAMERA_SERVO_OFF->CAMERA_SERVO_ON
2024-10-09_14:25:29.824193Z CAMERA_SERVO EDGE: TURN_CAMERA_SERVO_OFF->DITHER_ON
2024-10-09_14:25:29.824193Z CAMERA_SERVO calculating path: DITHER_ON->CAMERA_SERVO_ON
2024-10-09_14:25:29.824193Z CAMERA_SERVO new target: TURN_ON_CAMERA_FIXED_OFFSETS
2024-10-09_14:25:29.824193Z CAMERA_SERVO executing state: DITHER_ON (300)
2024-10-09_14:25:59.577353Z CAMERA_SERVO [DITHER_ON.run] ezca: H1:ASC-ADS_YAW5_DOF_GAIN => 20
2024-10-09_14:25:59.581068Z CAMERA_SERVO [DITHER_ON.run] timer['wait'] = 0
2024-10-09_14:25:59.767573Z CAMERA_SERVO EDGE: DITHER_ON->TURN_ON_CAMERA_FIXED_OFFSETS
2024-10-09_14:25:59.768055Z CAMERA_SERVO calculating path: TURN_ON_CAMERA_FIXED_OFFSETS->CAMERA_SERVO_ON
2024-10-09_14:25:59.768154Z CAMERA_SERVO new target: CAMERA_SERVO_ON
2024-10-09_14:25:59.768948Z CAMERA_SERVO executing state: TURN_ON_CAMERA_FIXED_OFFSETS (405)
2024-10-09_14:25:59.769677Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.enter]
2024-10-09_14:25:59.770639Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.main] timer['wait'] done
...
2024-10-09_14:26:33.755960Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.run] ADS clk off: ASC-ADS_YAW5_OSC_CLKGAIN to 0
2024-10-09_14:26:33.761759Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.run] ezca: H1:ASC-ADS_YAW5_OSC_CLKGAIN => 0.0
2024-10-09_14:26:33.763017Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.run] ezca: H1:ASC-ADS_YAW5_DEMOD_I_GAIN => 0
2024-10-09_14:26:37.898356Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.run] timer['wait'] = 4
From the h1digivideo2 cam26 server logs (/tmp/H1-VID-CAM26_gst.log), at 25 minutes past the hour we see a timeout on the camera:
2024-10-09 12:25:16,806 UTC - INFO - startCamera done
2024-10-09 13:25:05,745 UTC - ERROR - Duplicate messages in the previous second: 0 ... psnap.snapCamera failed: timeout
2024-10-09 13:25:05,745 UTC - ERROR - Attempting to stop camera
2024-10-09 13:25:05,748 UTC - ERROR - Attempting to start camera
2024-10-09 13:25:05,750 UTC - INFO - Opened camera
2024-10-09 13:25:24,404 UTC - INFO - Duplicate messages in the previous second: 0 ... Opened camera
2024-10-09 13:25:24,427 UTC - INFO - setupCamera done
2024-10-09 13:25:24,441 UTC - INFO - startCamera done
2024-10-09 14:25:08,817 UTC - ERROR - Duplicate messages in the previous second: 0 ... psnap.snapCamera failed: timeout
2024-10-09 14:25:08,817 UTC - ERROR - Attempting to stop camera
2024-10-09 14:25:08,827 UTC - ERROR - Attempting to start camera
2024-10-09 14:25:08,828 UTC - INFO - Opened camera
2024-10-09 14:25:31,865 UTC - INFO - Opened camera
2024-10-09 14:25:31,888 UTC - INFO - Duplicate messages in the previous second: 0 ... setupCamera done
2024-10-09 14:25:31,904 UTC - INFO - startCamera done
2024-10-09 15:07:36,397 UTC - ERROR - Duplicate messages in the previous second: 0 ... psnap.snapCamera failed: could not grab frame
2024-10-09 15:07:36,397 UTC - ERROR - Attempting to stop camera
2024-10-09 15:07:36,446 UTC - ERROR - Attempting to start camera
2024-10-09 15:07:36,447 UTC - INFO - Opened camera
2024-10-09 15:07:36,463 UTC - INFO - setupCamera done
2024-10-09 15:07:36,476 UTC - INFO - startCamera done
2024-10-09 15:07:36,477 UTC - ERROR - setupReturn = True
2024-10-09 15:07:36,477 UTC - ERROR - Failed to get frame - try number: 1
2024-10-09 15:25:11,892 UTC - ERROR - Duplicate messages in the previous second: 0 ... psnap.snapCamera failed: timeout
2024-10-09 15:25:11,893 UTC - ERROR - Attempting to stop camera
2024-10-09 15:25:11,898 UTC - ERROR - Attempting to start camera
2024-10-09 15:25:11,900 UTC - INFO - Opened camera
2024-10-09 15:25:39,541 UTC - INFO - Opened camera
2024-10-09 15:25:39,564 UTC - INFO - setupCamera done
2024-10-09 15:25:39,580 UTC - INFO - startCamera done
Tagging DetChar - These glitches that happen once an hour around 11Hz are because of this camera issue (the glitches are caused by the camera ASC turning off and ADS turning on). We are still having this issue and are planning on trying a correction at the next lockloss.
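For context, CAMERA_SERVO declares a camera "stuck" when its centroid readback stops updating. A rough sketch of that kind of check is below; this is not the guardian's actual code, and the channel, GPS time, and window length are assumptions.

# Rough sketch of a "stuck centroid" check like the one CAMERA_SERVO performs
# (not the actual guardian code). Channel, time, and window are assumptions.
import numpy as np
from gwpy.timeseries import TimeSeries

chan = "H1:ASC-CAM_PIT1_INMON"   # camera pitch centroid readback
t_end = 1412431534               # example GPS time to check
window = 30                      # seconds of data to inspect

data = TimeSeries.get(chan, t_end - window, t_end)
if np.ptp(data.value) == 0:      # readback never changed over the window
    print(f"{chan} is stuck! Going back to ADS")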
TITLE: 10/09 Owl Shift: 0500-1430 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
H1 called me at midnight to tell me that it was stuck in ADS_TO_CAMERAS.
I read the DIAG main messages and it mentioned the CAMERA_SERVO Guardian was the issue. I opened that up to find that the ASC-YAW1 camera was stuck...
This is not all that helpful of an error message by itself.
So I checked the alogs for something like "ASC-CAM Stuck" and found Corey's alog!:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79685
Then I checked the BS camera (cam26) on h1digivideo2 and noticed that the centroids were not updating. Yup, same problem! So I tried to restart, then stop & start the service, none of which worked. Then I called Dave, who logged in and fixed it for me.
We are now Observing. Thanks Dave!
TITLE: 10/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
IFO is LOCKING at CHECK_IR
We're having locking issues, primarily due to an IMC that's taking a while to lock. There seems to be some glitchiness here too; sometimes when we lose lock, the power stays at 10W. We've gotten as high as PRMI/MICH_FRINGES but keep losing lock.
The last lock was fully auto (and DRMI locked immediately) with the one exception that IMC took 20 mins to lock.
EQ-Mode Worthy EQ coming from Canada (5.0 mag)
Lockloss 1: alog 80552
Lockloss 2: alog 80553
LOG:
None
FSS/PSL-caused lockloss - the IMC and ASC lost lock within 20ms of each other. Glitchy FSS behavior started 2 minutes before the lockloss; at a threshold, we lost lock. Screenshots attached.
Issues have been happening since mid-Sept.
FSS-caused lockloss only 4 minutes into Observing. The IMC and ASC trends lost lock within 4ms of one another (attached trend).
We have been dealing with more FSS/PSL related locklosses since Mid-Sept.
The IMC locked after nearly 20 minutes, but locking has been smooth otherwise, with DRMI locking immediately.
TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY:
IFO is LOCKING and in MAX_POWER
The PSL is glitchy, and commissioners are unsure of what it is, even after troubleshooting today. It's still causing FSS lockloss issues.
Genevieve and Sam identified the sources of many of the peaks that were vibrationally coherent with DARM in the 20 to 40 Hz region (8012). We suspected coupling at HAM1, and I tested this recently using local injections and the beating shaker technique.
With local injections at HAM1 using shakers and HEPI, we can produce much larger signals at the HAM1 table top than at the PSL or HAM2. The coupling site will likely have the same ratio of locally injected peaks to distant equipment peaks in its vibration sensors as the ratio of these peaks in DARM. The figure shows that the ratio of the injected peak at 25 Hz to the air handler peak at 26.35 Hz is consistent with coupling at HAM1 but not HAM2 (or any place else; most other potential coupling sites don't even detect the peak injected locally at HAM1).
The figure also shows that the beats from the beating shakers matched DARM at the HAM1 table top, but could also be consistent with coupling at HAM2, so this technique, as implemented, gave a consistency check but did not discriminate well between HAM1 and HAM2.
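As a rough illustration of the peak-ratio comparison described above (not the actual analysis code; the sensor channel name, GPS times, and ASD settings are assumptions, while the 25 Hz injection and 26.35 Hz air handler frequencies are from this entry):

# Sketch of the peak-ratio comparison: does a candidate site's vibration sensor
# show the same injected-peak / air-handler-peak ratio as DARM? Channel names,
# times, and ASD settings are illustrative assumptions.
import numpy as np
from gwpy.timeseries import TimeSeries

def peak_ratio(channel, start, end, f_inj=25.0, f_ahu=26.35):
    # Ratio of the injected peak to the air handler peak in the channel's ASD.
    asd = TimeSeries.get(channel, start, end).asd(fftlength=100, overlap=50)
    freqs = asd.frequencies.value
    inj = asd.value[np.argmin(abs(freqs - f_inj))]
    ahu = asd.value[np.argmin(abs(freqs - f_ahu))]
    return inj / ahu

start, end = 1412000000, 1412000600   # example span during the shaker injection
r_darm = peak_ratio("H1:GDS-CALIB_STRAIN", start, end)
r_ham1 = peak_ratio("H1:PEM-CS_ACC_HAM1_ISCTABLE_Y_DQ", start, end)  # assumed sensor name
print(f"DARM ratio {r_darm:.2f} vs HAM1 sensor ratio {r_ham1:.2f}")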
Not many options to improve HAM1 table motion until we can install an ISI, but we could reduce the motion of HAM2 using feedforward to HEPI via the 3dl4c path. I've tested this on HAM3 and it works. I added the ADC connections and moved the L4Cs to HAM2 HEPI this morning, but I only have spare vertical sensors right now, so we are limited to the Z DOF. Everything is set up to do a test during some future commissioning window; I could use some time to collect a couple of measurements to check the filter fit. The couple of quick tests I did during maintenance show that the filter from HAM3 works, though.