The operator was trying to lock in LOCKING_ARMS_GREEN[12] when ALS_YARM faulted while running SCAN_ALIGNMENT[57].
We took ALS_YARM to MANUAL, requested UNLOCKED[-28], then reloaded ALS_YARM.
The operator then took ISC_LOCK to GREEN_ARM_MANUAL[13].
Error Log:
ALS_YARM W: RELOADING @ SCAN_ALIGNMENT.run
2025-06-17_21:15:15.740544Z ALS_YARM [SCAN_ALIGNMENT.run] Generating waveforms
2025-06-17_21:15:16.229352Z ALS_YARM [SCAN_ALIGNMENT.run] Start time is 1434230144
2025-06-17_21:15:16.229580Z ALS_YARM [SCAN_ALIGNMENT.run] Opening awg streams
2025-06-17_21:15:16.232982Z awgSetChannel: failed awgnewchannel_1(chntype = 1, arg1 = 0, arg2 = 0, awg_clnt[99][0] = 31370384) H1:SUS-TMSY_M1_TEST_P_EXC
2025-06-17_21:15:16.232982Z Error code from awgSetChannel: -5
2025-06-17_21:15:16.233280Z ALS_YARM W: Traceback (most recent call last):
2025-06-17_21:15:16.233280Z File "/usr/lib/python3/dist-packages/guardian/worker.py", line 494, in run
2025-06-17_21:15:16.233280Z retval = statefunc()
2025-06-17_21:15:16.233280Z File "/opt/rtcds/userapps/release/als/common/guardian/ALS_ARM.py", line 1525, in run
2025-06-17_21:15:16.233280Z ex.open()
2025-06-17_21:15:16.233280Z File "/usr/lib/python3/dist-packages/awg.py", line 595, in open
2025-06-17_21:15:16.233280Z raise AWGStreamError("can't open stream to " + self.chan \
2025-06-17_21:15:16.233280Z awg.AWGStreamError: can't open stream to H1:SUS-TMSY_M1_TEST_P_EXC: Error setting up an awg slot for the channel (-120)
2025-06-17_21:15:16.256804Z ALS_YARM ERROR in state SCAN_ALIGNMENT: see log for more info (LOAD to reset)
2025-06-17_21:15:35.011679Z ALS_YARM MODE: MANUAL
2025-06-17_21:15:35.017804Z ALS_YARM MANAGER: removed
2025-06-17_21:15:41.635373Z ALS_YARM MODE: AUTO
The LVEA has been swept.
The Vac team still has a pump on HAM1 that can be heard throughout the VEA.
Since we plan to do some TCS tests on the IFO in the near future, I recommend we make use of the AWGLines guardian, which injects a variety of lines that can be used to monitor things like noise couplings and optical gain changes.
Taking a quick survey of the guardian, I don't see any immediate issues. The guardian injects these lines:
intensity noise: 47.3 Hz, 222 Hz, 4222 Hz
frequency noise: 25.2 Hz, 203 Hz, 4500 Hz
jitter: 111.3 Hz, 167.1 Hz, 387.4 Hz
LSC: MICH (40.7 Hz), PRCL (35.3 Hz), SRCL (38.1 Hz)
There is an additional state that engages the 8.125 Hz notches in the ASC and then injects one ASC line. The guardian is currently set up to inject in CHARD Y (8.125 Hz), but that can be adjusted to any ASC loop. The amplitude for the CHARD Y injection looks good, but it will need to be adjusted for other ASC loops. I don't know whether the laser noise and LSC noise line amplitudes are still appropriate, but we ran this guardian regularly before O4 started, so they are probably ok.
It does not appear that any notches or bandstops are engaged for the laser noise or LSC injections; I'm not sure if we want to change that. The cable for frequency noise will need to be plugged in to ensure the frequency noise actually gets injected.
The way the states are currently written, you will need to go to "SET UP ASC INJECTIONS" manually to ensure the ASC line gets injected before going to "INJECTING". The "STOP INJECTING" state will stop all the injections, including the ASC injections, and revert all changes such as switch and filter settings.
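As a rough illustration of how such a guardian can be laid out (a sketch only, not the actual AWGLines code; the filter-module slot, channel names, and the awg.Sine name/signature are all assumptions), the setup and injection states might look like:

from guardian import GuardState
import awg  # the site awg streaming module seen in the traceback earlier in this log

class SET_UP_ASC_INJECTIONS(GuardState):
    def main(self):
        # engage the 8.125 Hz ASC notch before the line is injected
        # (FM6 is a placeholder for whichever slot holds the notch;
        # ezca is provided by the guardian runtime)
        ezca.switch('ASC-CHARD_Y', 'FM6', 'ON')
        return True

class INJECTING(GuardState):
    def main(self):
        # single ASC line; the awg.Sine name/signature is an assumption,
        # check awg.py for the real interface. Stream objects expose open(),
        # which is the call seen failing in the traceback above.
        self.exc = awg.Sine('H1:ASC-CHARD_Y_EXC', freq=8.125, ampl=1e-3)
        self.exc.open()
        return True

    def run(self):
        return True  # hold here; STOP_INJECTING undoes all of the above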
Sheila, Matt, Camilla
We replaced the laser curtains around ISCT1/IOT2L that were removed for the HAM1 vent, so this area can now be taken to local laser hazard.
Tue Jun 17 10:10:11 2025 INFO: Fill completed in 10min 7secs
Late entry from last week. Roger's Machinery completed the new Kobelco installation and startup last Tuesday. Our last task before acceptance testing of the system was to replace the filter elements in the filter tree. When we opened the housings, we discovered that one of the elements was mangled and had evidence of rough tool use on it. When we removed the element, we could see that there was some evidence of galling in the threads where the element screws into the filter housing (see pics). We will order a new housing and replace it before continuing with testing the system.
We completed the upgrade of the H1 front ends to RCG5.5.0 at 09:51.
A detailed alog will be written, but surprises found (or re-remembered):
The EX Dolphin front ends need to enable an unused port on the EY switch because the EX switch has no port control (damaged in the April power outage).
The PSL DBB model had an obsolete Dolphin IPC sender left over from when it had a Dolphin card; the new RCG doesn't allow senders with no cards. We removed the sender from the model, and a DAQ restart is pending for this model.
We had upgraded h1omc0 to 5.5.0 some time ago, but the H1.ipc file had changed, so it needed a restart. Prior to the restart it was clobbering the SWWD IPCs between sush34 and seih23/seih45.
Here is a rough sequence of today's upgrade, all times local PDT:
07:22 rebuild H1EDC.ini to convince ourselves the first DAQ restart will be model changes only
07:43 h1cdsrfm powered down, this breaks the linkage between Dolphin locations
07:43 h1ecatmon0 upgrade (reboot)
07:45 Dolphin network manager started on h1vmboot5-5, causing the standard set of end station systems to crash (susey, seiey, iscey and susex). We PAUSE'd the remaining EX machines (iscex, seiex).
07:51 Reboot h1susaux[ex, ey, b123, h2, h34, h56] pem[mx, my] to upgrade them. susaux[h34, ey] got stuck and were power cycled via IPMI.
08:03 DAQ 0-leg restart for new INI files across the board.
08:18 DAQ 1-leg restart. At this point omc0, susaux's and pemmid have good DAQ data, everyone else has BAD DAQ data.
08:27 Power down EX machines, power up EY machines. SWWD IPCs working, Dolphin IPC checks out.
08:32 Power up EX machines, all at the same time because of the Dolphin switch issue. They do not start. After some head scratching we remembered that the startup sequence needs to activate a Dolphin switch port, which cannot happen at EX because the switch is damaged. The workaround is for all three EX front ends to switch an unused port on the EY switch. Once this was put into place, the EX machines started without anyone having to drive to the end station.
08:55 Reboot h1psl0 to upgrade PSL models (no dolphin cards, more about this later...)
08:56 Power down all non-upgraded corner station machines (SUS+SEI+ASC+LSC+OAF) but not h1omc0 (more about this later..)
09:00 h1psldbb is not running. It has an obsolete Dolphin IPC sender part in the model but no Dolphin card. RCG5.5.0 does not allow this. Rebuild model sans IPC part, starts running. Note PSLDBB DAQ data is BAD from this point till the second DAQ restart.
09:10 First power up h1sush2b, h1seih16 for HAM1 SEI work. SWWD IPC between the two working well (well, for these two, more later...)
09:20 Power up all remaining corner station computers
09:30 Discover weird SWWD IPC receive values for HAM3 and HAM4 (value should be 1.0, but is -0.002 or 0.000).
09:34 Try restarting h1iopsush34; IPC values still bad. But h1omc0 has not been restarted, so it's using the old IPC configuration and could be writing out-of-bounds.
09:35 restart h1omc0 models, SWWD IPC errors are resolved
09:44 power up h1cdsrfm. First EX is PAUSE'd, EY and CS are fenced. Long-range Dolphin starts with no issues. A new MEDM is generated from the new H1.ipc file.
09:51 Complete in 2 hours.
As Kevin and Vicky found a couple of weeks ago, the SQZ HD PD A seems to be loose: it stopped working, Vicky re-seated it, and it worked again. It has since stopped working again, so I poked it (it had started out pitched down) to make it work; it will now need realigning before the HD is functional again.
TITLE: 06/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
IFO is in DOWN for PLANNED MAINTENANCE
H1 had just gotten to NLN as I arrived. I then put it in DOWN for today's maintenance. Activities slated for today:
WP12536 RCG5.5.0 Upgrade
WP12624 SWWD variable bypass time h1iopsush2a
WP12625 Fix broken ADC selector h1hpiham4
WP12626 Fix partial ADC MEDM by adding named parts
Jonathan, Erik, EJ, Dave:
Yesterday, Mon 16jun2025, we did the final round of builds and installs of all the models against RCG5.5.0
h1build was rebooted against the new boot server (h1vmboot5-5) and was used to do the builds/installs.
After the builds, I ran each model through check_model_changes and then wrote a parser for the output files (a sketch of the idea is below):
All the INI changes were expected; most had the additional RCG channels expected from going from 5.1.4 to 5.5.0.
Exceptions were:
h1iopsusex had one fewer because it was upgrading from 5.3.0, so it already had the timing-card temperature channel
The following had additional ADC naming parts:
isibs, isiham3, isiham4, isietmx, isietmy, alsex, alsey, ascimc
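For what it's worth, the parsing idea reduces to a few lines. This sketch assumes check_model_changes writes one output file per model with added channels prefixed by '+'; the path, format, and expected count here are illustrative, not the real ones:

import glob

# most models gain the same set of new RCG 5.1.4 -> 5.5.0 channels;
# the expected count here is a stand-in, not the real number
EXPECTED_ADDITIONS = 2

for path in sorted(glob.glob('check_model_changes_out/*.txt')):
    with open(path) as f:
        added = [line.strip() for line in f if line.startswith('+')]
    if len(added) != EXPECTED_ADDITIONS:
        print('%s: %d added channels (expected %d)' % (path, len(added), EXPECTED_ADDITIONS))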
The build took about 2 hours; the install took 2hrs 13mins. Backing up targets to target_archive took 45GB of disk space; /opt/rtcds is at 89% capacity.
TITLE: 06/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We stayed locked the entire shift; high-frequency SQZing hasn't been the best.
LOG: No Log.
00:01 UTC Observing
Jennie W, Keita, Rahul
Today the three of us went into the optics lab to upgrade unit S1202965 with the c'bore washer and QPD clamp plate that give a ~2 degree tilt between the QPD plane and the mount plate. See detail D in D1300720.
Looking at the assembly SolidWorks file, LIGO-D1101059: if the back face of the photodiode array is facing to the back, the longer clamp leg points towards the front, and the notch on the tilt washer should be at approximately the 4 o'clock position.
We first checked the current alignment into the array photodiodes and realized the beam was off by a large amount in yaw from the entrance aperture.
Keita had to change the mounts for the PZT mirror and lens, as these were slightly tilted on the translation stage and it seemed like we needed a more robust alignment setup.
We then tried aligning with the upstream steering mirror and PZT mirror, but could see multiple beams on each array PD. To check that the beam is not too large at the input aperture, we want to re-profile the beam size on the way into the ISS assembly.
We left the setup with the M2MS beam profiler set up at the corner of the table and the beam roughly aligned into it; finer adjustment still needs to be done.
The reason the alignment was so far off is unknown. It was still off after turning on the PZT driver with an offset (2.5 V), so it cannot be the PZT mirror. Something might have been bumped in the past month or two.
Ryan S., Elenna
The MOVE_SPOTS state is taking 13 minutes (!) to complete, because the YAW3 ADS DOF is very far off and takes a significant time to converge. Both Jenne and I have found that slowly bumping up the YAW3 gain (PRM yaw) helps the loops converge much faster.
Ryan kindly helped me update the state code to slowly increase the gain if the convergence is taking too long. We added a new timer, 'ADS', that waits for one minute after the new A2L gains are ramped (so an additional minute after the 2 minute ramp time of the A2L gains). If, after that first minute, there is still no convergence, the YAW3 gain is doubled. After that, the 'ADS' timer waits 2 minutes and again doubles the gain. This process can happen up to three times, which would increase the YAW3 gain to a maximum value of 8. Jenne and I have found that the gain can go as high as 10 in this state. The two minute waits give the other ASC loops, like SRC1 and INP1 Y, time to converge as the ADS pulls the PRM in faster. Once the convergence checker returns true, the YAW3 gain is set back to 1.
We will monitor how this proceeds on this locking attempt. I updated the guardian notify statements so they report when the gain is increased.
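A minimal sketch of that logic (not the literal ISC_LOCK code; the channel names, convergence test, and bookkeeping are placeholders):

from guardian import GuardState

class MOVE_SPOTS(GuardState):
    # sketch only: ezca and notify are provided by the guardian runtime
    def main(self):
        self.doublings = 0
        self.timer['ADS'] = 120 + 60  # 2 min A2L gain ramp plus 1 min grace

    def yaw3_converged(self):
        # stand-in for the real ADS convergence checker
        return abs(ezca['ASC-ADS_YAW3_DOF_ERR']) < 0.1

    def run(self):
        if not self.yaw3_converged():
            if self.timer['ADS'] and self.doublings < 3:
                ezca['ASC-ADS_YAW3_GAIN'] *= 2  # 1 -> 2 -> 4 -> 8 max
                self.doublings += 1
                notify('YAW3 gain increased to %g' % ezca['ASC-ADS_YAW3_GAIN'])
                self.timer['ADS'] = 120  # wait 2 min before the next doubling
            return False
        ezca['ASC-ADS_YAW3_GAIN'] = 1  # converged: restore nominal gain
        return True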
This was a success: this run-through took only 7 minutes. I am shortening the 2 minute wait before increasing the gain to 90 seconds. If that still works, maybe we can go to 60 seconds.
To be more specific, the first attempt as described above meant the state took 6 minutes, 50 seconds. I loaded the change to reduce the wait time from 120 to 90 seconds, which only shortened the state length to 6 minutes, 30 seconds. The gain was only ramped to 8 for a very short period of time. I still think we can make this shorter, which we can do by making that wait time 60 seconds, and maybe taking bigger steps in the gain each time. However, we are still in the RCG upgrade, so I will hold off on changes to the guardian for now.
YAW3 is still limiting the length of the state. In this morning's relock, YAW3 convergence took nearly a minute longer than the other loops. Once we have caught YAW3 up to everything else, we could make the state even shorter by raising the gains of the other ADS loops. Two minutes of the state are taken up by the A2L gain ramp, so waiting for loop convergence takes an additional 4 minutes, 30 seconds.
Now it seems that PIT3 is taking too much time to converge, so I updated the guardian to also increase the PIT3 gain in the same way.
All looks well aside from the known issue with LAB2, and LVEA5 seems frozen; I'll investigate that tomorrow during maintenance.
LVEA5 being off is expected, it's a pumped dust monitor so we turned it off for observing.
Oli, Camilla, Sheila, RyanS
It was pointed out (84972) that our new SRCL offset is too big at the beginning of the lock, affecting the calibration and how well we are squeezing. Camilla had the idea of taking the unused-but-already-set-up THERMALIZATION guardian and repurposing the main state so it steps LSC-SRCL1_OFFSET from its value at the end of MAX_POWER to the official offset value given in lscparams (offset['SRCL_RETUNE']). This stepping starts at the end of MAX_POWER and runs for 90 minutes. Here is a screenshot of the code.
To go with this new stepping, we've commented out the line (~5641) in ISC_LOCK's LOWNOISE_LENGTH_CONTROL (ezca['LSC-SRCL1_OFFSET'] = lscparams.offset['SRCL_RETUNE']) so that the offset doesn't get set to that constant and instead keeps stepping.
To get this to run properly while observing, we did have to unmonitor the LSC_SRCL1_OFFSET value in the Observe sdf (sdf).
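The main state amounts to a slow linear ramp. A sketch of the idea (assuming names; the one-step-per-minute cadence is illustrative, not the literal code in the screenshot):

import numpy as np
from guardian import GuardState
import lscparams  # site module holding offset['SRCL_RETUNE']

RAMP_MINUTES = 90

class THERMALIZATION(GuardState):
    # sketch only: ezca is provided by the guardian runtime
    def main(self):
        start = ezca['LSC-SRCL1_OFFSET']          # value at the end of MAX_POWER
        target = lscparams.offset['SRCL_RETUNE']  # official offset value
        self.steps = iter(np.linspace(start, target, RAMP_MINUTES))
        self.timer['step'] = 0

    def run(self):
        if self.timer['step']:
            try:
                ezca['LSC-SRCL1_OFFSET'] = next(self.steps)
                self.timer['step'] = 60
            except StopIteration:
                return True  # ramp complete; hold the final offset
        return False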
Attached is a screenshot of the grafana page, highlighting the 33 Hz calibration line, which seems to be the most sensitive to thermalization. Before, when the SRCL offset was set statically, the 33 Hz line uncertainty started at about 1.09 and then decayed to about 1.02 over the first hour. With the thermalization adjustment of the SRCL offset from 0 to -455 over one hour, the 33 Hz uncertainty starts around 0.945 and then increases to 1.02 over the first hour. It seems we overshot in the other direction, so perhaps we could start closer to -200 and move to -455.
We decided to change the guardian so that it starts at -200 before then stepping its way up to -455 over the course of 75 minutes instead of 90 minutes.
With the update to the guardian to start at -200, each calibration line uncertainty has actually stayed very flat for these first 30 minutes of lock (except for the usual very large jump in the uncertainty for the first few minutes of the lock).
This shows the entire lock using the thermalization guardian with the SRCL offset ramping from -200 to -455. The line uncertainty holds steady the entire time, within 2-3%!
Kiet and Sheila,
Following up on the investigation posted in aLOG 84136, we examined the impact of higher-order violin mode harmonics on the contamination region.
We found that subtracting the violin peaks near 1000 Hz (1st harmonic) from those near 1500 Hz (2nd harmonic) results in frequency differences that align with many of the narrow lines observed in the contamination region around 500 Hz.
Violin peaks that we used (from the O4a+b run-average spectra):
F_n1 = {1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083} Hz
F_n2 = {1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333, 1486.36153, 1485.76708} Hz
Out of the 35 possible difference pairs (one frequency from each set), 27 matched known lines in the contamination region to within 1/1800 Hz (~0.56 mHz), most within 0.1 mHz. Considering that each region actually contains >30 peaks, the number of matching pairs likely increases significantly, helping explain the dense forest of lines in the contamination region.
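The pairing arithmetic is easy to reproduce; a short script along these lines generates all 35 differences for comparison against the line list (known_lines below is a placeholder for the contamination-region peak list):

import itertools

f_n1 = [1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083]  # ~1000 Hz
f_n2 = [1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333,
        1486.36153, 1485.76708]                                      # ~1500 Hz

tol = 1.0 / 1800  # one Fscan bin, ~0.56 mHz

diffs = sorted(f2 - f1 for f1, f2 in itertools.product(f_n1, f_n2))
print('\n'.join('%.5f Hz' % d for d in diffs))

# with the contamination-region peak list in hand (known_lines, a placeholder),
# the match count quoted above would be:
# sum(any(abs(d - line) < tol for line in known_lines) for d in diffs)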
Next steps:
The Fscan run average data are available here (interactive plots):
Fundamental region (500 Hz):
1st harmonic region (1000 Hz):
2nd harmonic region (1500 Hz):
Seems similar to this alog from Oli.