The DAQ was restarted for a second time today at 12:41 (0-leg) and 12:50 (1-leg) for three reasons:
1) h1psldbb had a bad DAQ status following model change at 09:15 to remove an IPC sender
2) H0EPICS_VACLX.ini has new PT100 HAM1 gauge channels
3) H1EPICS_DIGVIDEO.ini has new CAM37 ISCT1-REFL channels (camera was added recently)
I regenerated H1EPICS_DIGVIDEO.ini using generate_camera_daq_ini.py, which correctly added the CAM37 channels as a new server machine but reverted CAMs[21, 23, 24, 26] back to their "old" server settings. Remember these cameras were moved to a new server, but a lack of masking forced them back to the old server (digivideo2). At that point the EDC was expecting a new set of channels, so as a workaround I built a dummy IOC to simulate those channels and keep the EDC happy. To make a long story short, these cameras are back with their original PV set and digivideo_dummy_ioc is no longer needed. To test this I shut down the IOC with no loss of EDC connections, and verified that the Guardian CDS_COPY "from" list was still functioning.
Detailed list of DAQ channel changes (9097 additions, 91 removals)
To run the test that calculates how much loss we have between ASC-AS_C (the anti-symmetric port) and the DCPDs at the output of the OMC (from which strain is derived), run:
conda activate labutils
python auto_darm_offset_step.py
The script turns on one notch in the DARM LSC loop, then changes the PCAL line heights so all the power is in just two frequencies.
At the end it reverses both these changes.
It can be stopped using Ctrl-C, but this will not restore the PCAL line heights to their default values, so you may have to use the time machine.
After setting up the PCAL, the script steps the DARM offset level in 120 s steps; I think it takes about 21 minutes to run.
After the script finishes, please put the OMC ASC MASTER GAIN back to its original value.
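The stepping procedure described above can be sketched as follows. The offset values and the `set_offset` callback are hypothetical stand-ins; the real sequence lives in auto_darm_offset_step.py, which on site writes through EPICS.

```python
import time

# Hypothetical offset sequence; the real values are defined inside
# auto_darm_offset_step.py.
OFFSETS = [10, 14, 20, 28, 40, 28, 20, 14, 10]
DWELL = 120  # seconds per step, as described above

def step_darm_offset(set_offset, offsets=OFFSETS, dwell=DWELL, sleep=time.sleep):
    """Step the DARM offset through `offsets`, dwelling `dwell` s at each.

    `set_offset` stands in for the real EPICS write (e.g. an ezca put).
    """
    for offset in offsets:
        set_offset(offset)
        sleep(dwell)
    # 9 steps x 120 s = 18 min of stepping; PCAL setup/teardown accounts
    # for the rest of the ~21 min quoted above.

# Dry run with a recording stub instead of real hardware writes:
applied = []
step_darm_offset(applied.append, sleep=lambda s: None)
print(applied)  # [10, 14, 20, 28, 40, 28, 20, 14, 10]
```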
TITLE: 06/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
We are Observing and have been Locked for almost 1.5 hours. SQZ doesn't look good, so I might pop out of Observing in another hour to run sqz scan align. Also, if we lose lock I will be restarting a couple guardian nodes that use AWG since that was restarted earlier today, and we've already had one awg error pop up in ALS_YARM (85130).
TITLE: 06/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 22:26 UTC
TUES Maintenance Activities:
We had a very productive maintenance day during which the following was completed
LVEA was swept - alog 85128
*Only listing completed WP’s - close your permits if work is done.
Lock Reacquisition:
Lock recovery was quite rough due to a few issues
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | SAF | LASER SAFE | LVEA | SAFE | LVEA is LASER SAFE ദ്ദി( •_•) | 15:37 |
15:07 | FAC | Kim, Nellie | LVEA | N | Technical Cleaning | 16:12 |
15:40 | VAC | Jordan, Gerardo | LVEA | N | Anchor Drill work | 17:31 |
15:42 | OPS | Camilla | LVEA | N | Laser SAFE transition, ISCT1 check | 16:14 |
15:42 | FAC | Christina | LVEA | N | Walkabout | 17:08 |
16:05 | FAC | Randy | LVEA | N | Inventory | 20:49 |
16:12 | FAC | Kim | EX | N | Technical Cleaning | 17:58 |
16:13 | FAC | Nellie | EX | N | Technical Cleaning | 17:15 |
16:19 | PSL | Jason | LVEA | N | PSL Inventory + Cable pull | 17:08 |
16:23 | 3IFO | Betsy, Mitchell, LIGO India Delegation | LVEA | N | Tour, 3IFO Walkabout + Rick | 18:20 |
16:28 | VAC | Richard | LVEA | N | Check on vac | 18:48 |
16:30 | AOS | Jennie | Optics Lab | Local | ISS Array | 17:21 |
16:42 | ISS | Keita | Optics lab | Local | Joining Jennie W in the optics lab | 19:15 |
16:43 | CDS | Erik | Remote | N | Restarting EndX IOC | 16:53 |
16:57 | EE | Fil | LVEA & Both Ends | N | Looking for scopes & talking to Richard | 18:55 |
17:08 | FAC | Christina | End stations | N | Getting Pictures of hardware | 18:25 |
17:13 | ISC | Sheila, Matt, Camilla | LVEA | N | ISCT1 Laser Panels | 17:41 |
17:27 | Bee | Tyler | LVEA | N | Being a Busy Bee keeper and intercepting Betsy | 17:35 |
17:27 | ISS | Rahul | Optics Lab | Local | Helping Keita with ISS work | 19:15 |
17:28 | ISS | Jennie W | Optics Lab | Local | Working on ISS with Keita | 19:15 |
17:42 | ISC | Camilla | ISCT table | N | ISCT table work | 17:58 |
17:44 | PEM | Ryan C | LVEA | n | Checking Dust mons | 18:18 |
17:45 | VAC | Gerardo | Mid & EY | N | Gauge replacements | 18:56 |
17:58 | FAC | Nellie, Kim | FCES | N | Technical Cleaning | 18:32 |
18:20 | 3IFO | Mitchell, Jim, 3IFO team member | LVEA | N | 3IFO inventory check | 18:23 |
18:25 | 3IFO | Brijesh, Jim | LVEA | N | 3IFO Inventory | 18:43 |
18:34 | TCS | Camilla | LVEA | N | TCS Part Search | 18:45 |
18:57 | VAC | Gerardo | LVEA | N | Vac Gauge Reconnection | 19:06 |
19:58 | OPS | Tony | LVEA | N | Sweep | 20:49 |
20:49 | ISS | Keita, Rahul | Optics lab | LOCAL | ISS array work | 22:55 |
20:58 | FAC | Chris | EX, MX | N | Air handler checks | 22:12 |
21:11 | PSL | Rick, 3IFO Delegation | PSL Anteroom | N | PSL Anteroom Tour | 22:06 |
21:21 | ISS | Jennie W | Optics lab | Local | Working on ISS system | 22:51 |
A trend of H1:IOP-SUS_ITMY_WD_OSEM1_RMSOUT shows increased motion, including two peaks, during the 10 minutes post-RCG upgrade when OMC0 (see alog 85120) was clobbering IPCs.
The attached screenshot has cursors at the approximate start and end of OMC0 clobbering IPCs. RMS remained high until guardian was started 30 minutes later, after which ITMY continued to ring until guardian was again restarted.
We will attempt to trace the clobbered IPCs to see if they plausibly could have driven ITMY.
The attached list shows the mapping from OMC0 IPCs to IPCs that were clobbered during the ten minutes OMC0 was running on the wrong IPC table.
ITMX, which received the same clobbered channel as ITMY, also showed a spike in movement during the same period, but was properly stilled by guardian.
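Tracing clobbered IPCs amounts to finding channels whose IPC slot numbers collide between the old and new tables. A minimal sketch, assuming the ipc file is INI-style with an `ipcNum` key per channel section (the section names below are made up for illustration):

```python
import configparser

def ipc_numbers(text):
    """Map each channel section in an ipc-file snippet to its ipcNum."""
    cp = configparser.ConfigParser(strict=False)
    cp.read_string(text)
    return {s: cp[s]["ipcNum"] for s in cp.sections() if "ipcNum" in cp[s]}

def collisions(old_table, new_table):
    """For each old-table sender, list new-table channels sharing its ipcNum."""
    by_num = {}
    for name, num in ipc_numbers(new_table).items():
        by_num.setdefault(num, []).append(name)
    return {name: by_num.get(num, [])
            for name, num in ipc_numbers(old_table).items()}

# Illustrative only: a model writing to slot 12 of the old table lands on
# whatever channel owns slot 12 in the new table.
OLD = "[H1:OMC-OLD_SENDER]\nipcType=PCIE\nipcNum=12\n"
NEW = "[H1:SUS-SWWD_RCV]\nipcType=PCIE\nipcNum=12\n"
print(collisions(OLD, NEW))  # {'H1:OMC-OLD_SENDER': ['H1:SUS-SWWD_RCV']}
```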
WPs: 12577 and 12608
Previous work: alog 84871

This portion of the first work permit has been completed: "Migrate h0vaclx from the svn repo to the git repo. Also migrate it from using the C# code to the PowerShell code for generating the TwinCAT 3 Visual Studio solution. This would make it match h0vacmr and h0vacly in both of these regards. Update the PLC code on h0vaclx to add PT100 as an Inficon BCG552 gauge."

The installation of TwinCAT 3 on h0vaclx has been updated to version 3.1.4024.35. The PowerShell script to generate the TwinCAT 3 solution has been changed to use the TwinCAT XAE Shell instead of Visual Studio 2010 because I could not get it working with the latter. The new Inficon BCG552 EtherCAT gauge on HAM1 is connected and being read into EPICS. The code being used for the scripts is at commit d27c3ebfb424572f3aba003744e97e947e5a4873 in the git repo at https://git.ligo.org/cds/ifo/beckhoff/lho-vacuum. The shortcut in the TwinCAT autostart folder to start the EPICS IOC has been updated to point to the location of the checkout of this repo. The shortcut on the Desktop has similarly been updated.

Timeline of work:
9:56 Stopped the EPICS IOC. Set the TwinCAT runtime to Config. Started the installer for TwinCAT 3.1.4024.67.
10:02 A Windows Security dialog appeared three times: "Windows can't verify the publisher of this driver software". Clicked "Install this driver software anyway" each time. The installer took a very long time on "Installing Microsoft .NET Framework 5.6.1 Full".
10:22 The computer spontaneously logged me out during the installation.
10:24 Logged back in.
10:26 Started the installation of TwinCAT 3.1.4024.67 again and then soon canceled it.
10:35 Started the installer for TwinCAT 3.1.4024.35.
10:50 Restarted the computer to complete the install as prompted.
10:52 Logged back in. The installation appeared to be successful. Tried to generate the TwinCAT 3 solution from the scripts, but could not find a way around an error saying that the project template could not be found, despite it being at the path shown.
11:19 Ran 'shutdown /r' to restart the computer and try again, but still got the same error about the template after the restart. Changed the script to use the TwinCAT XAE Shell instead of Visual Studio 2010. The script froze. Logged out and back in. The script then succeeded in generating the solution. Scanned for terminals and did not see the BCG552 gauge on HAM1; Gerardo told me it was not connected and went to connect it. The gauge showed up and everything appears to be working. Updated the paths in the scripts for the IOC and the shortcuts to start the IOC. Checked all of the changes into git.
We keep having an SDF observe diff for the roll mode damping gain ramp time. I added a line to the DAMP BOUNCE ROLL guardian state to set the ramp time to 10 seconds when the gain is set. Apparently we've been engaging the damping with no gain ramp. Hopefully this observe diff will stop popping up now.
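The fix amounts to writing the ramp time before the gain, so the gain change actually ramps instead of stepping. A minimal sketch with a hypothetical channel prefix, using a plain dict in place of the real ezca object:

```python
def engage_roll_damping(ezca, gain, tramp=10,
                        base="SUS-ITMY_M0_DARM_DAMP_R"):
    """Set the filter-bank ramp time before writing the gain.

    `base` is a hypothetical channel prefix; the real DAMP_BOUNCE_ROLL
    state writes the site's actual roll-damping filter banks via ezca.
    """
    ezca[base + "_TRAMP"] = tramp  # seconds; with TRAMP=0 the gain steps
    ezca[base + "_GAIN"] = gain    # now ramps over `tramp` seconds

# Dry run against a dict standing in for ezca:
fake_ezca = {}
engage_roll_damping(fake_ezca, gain=0.02)
print(fake_ezca)
```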
The operator was trying to lock in LOCKING_ARMS_GREEN[12] when ALS_YARM faulted while attempting SCAN_ALIGNMENT[57].
We Manual'd ALS_YARM back to UNLOCKED[-28], then reloaded ALS_YARM.
The operator then took ISC_LOCK to GREEN_ARM_MANUAL[13].
Error Log:
ALS_YARM W: RELOADING @ SCAN_ALIGNMENT.run
2025-06-17_21:15:15.740544Z ALS_YARM [SCAN_ALIGNMENT.run] Generating waveforms
2025-06-17_21:15:16.229352Z ALS_YARM [SCAN_ALIGNMENT.run] Start time is 1434230144
2025-06-17_21:15:16.229580Z ALS_YARM [SCAN_ALIGNMENT.run] Opening awg streams
2025-06-17_21:15:16.232982Z awgSetChannel: failed awgnewchannel_1(chntype = 1, arg1 = 0, arg2 = 0, awg_clnt[99][0] = 31370384) H1:SUS-TMSY_M1_TEST_P_EXC
2025-06-17_21:15:16.232982Z Error code from awgSetChannel: -5
2025-06-17_21:15:16.233280Z ALS_YARM W: Traceback (most recent call last):
2025-06-17_21:15:16.233280Z File "/usr/lib/python3/dist-packages/guardian/worker.py", line 494, in run
2025-06-17_21:15:16.233280Z retval = statefunc()
2025-06-17_21:15:16.233280Z File "/opt/rtcds/userapps/release/als/common/guardian/ALS_ARM.py", line 1525, in run
2025-06-17_21:15:16.233280Z ex.open()
2025-06-17_21:15:16.233280Z File "/usr/lib/python3/dist-packages/awg.py", line 595, in open
2025-06-17_21:15:16.233280Z raise AWGStreamError("can't open stream to " + self.chan \
2025-06-17_21:15:16.233280Z awg.AWGStreamError: can't open stream to H1:SUS-TMSY_M1_TEST_P_EXC: Error setting up an awg slot for the channel (-120)
2025-06-17_21:15:16.256804Z ALS_YARM ERROR in state SCAN_ALIGNMENT: see log for more info (LOAD to reset)
2025-06-17_21:15:35.011679Z ALS_YARM MODE: MANUAL
2025-06-17_21:15:35.017804Z ALS_YARM MANAGER: removed
2025-06-17_21:15:41.635373Z ALS_YARM MODE: AUTO
Seems similar to this alog from Oli.
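Since the failure was a transient consequence of the awg restart, one defensive option would be to retry the stream open before erroring the node. This is a sketch only (not what ALS_ARM.py currently does), with a stand-in exception class in place of awg.AWGStreamError:

```python
import time

class AWGStreamError(Exception):
    """Stand-in for awg.AWGStreamError raised by the real awg.py."""

def open_stream_with_retry(open_stream, retries=3, wait=5.0, sleep=time.sleep):
    """Call `open_stream`, retrying on AWGStreamError up to `retries` times."""
    for attempt in range(1, retries + 1):
        try:
            return open_stream()
        except AWGStreamError:
            if attempt == retries:
                raise  # give up; guardian logs the traceback as seen above
            sleep(wait)

# Stub that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AWGStreamError("can't open stream")
    return "stream"

print(open_stream_with_retry(flaky, sleep=lambda s: None))  # stream
```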
The LVEA has been swept.
The Vac team still has a pump on HAM1 that can be heard throughout the VEA.
Since we plan to do some TCS tests on the IFO in the near future, I recommend we make use of the AWGLines guardian, which injects a variety of lines that can be used to monitor things like noise couplings and optical gain changes.
Taking a quick survey of the guardian, I don't see any immediate issues. The guardian injects these lines: intensity noise (47.3 Hz, 222 Hz, 4222 Hz), frequency noise (25.2, 203, 4500 Hz), jitter (111.3, 167.1, 387.4 Hz), and MICH (40.7 Hz), PRCL (35.3 Hz) and SRCL (38.1 Hz). There is an additional state that engages the 8.125 Hz notches in the ASC and then injects one ASC line. The guardian is currently set up to inject in CHARD Y (8.125 Hz), but that can be adjusted to any ASC loop. The amplitude for the CHARD Y injection looks good, but that will need to be adjusted for other ASC loops. I have no idea if the laser noise and LSC noise line amplitudes are "good", but we ran this guardian regularly before O4 started so they are probably ok.
It does not appear that any notches or bandstops are engaged for the laser noise or LSC injections. Not sure if we want to change that. The cable for frequency noise will need to be plugged in to ensure the frequency noise gets injected.
The way the states are currently written, you will need to manual to "SET UP ASC INJECTIONS" to ensure the ASC line gets injected before going to "INJECTING". The "STOP INJECTING" state will stop all the injections, including the ASC injections and revert all changes such as switches and filters.
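The line plan above can be captured as data, which also makes it easy to sanity-check that no two injections sit too close together. The frequencies are taken from the description above; the minimum-separation check is my addition, not part of the AWGLines guardian:

```python
# Frequencies (Hz) from the AWGLines guardian description above.
AWG_LINES = {
    "intensity": [47.3, 222.0, 4222.0],
    "frequency": [25.2, 203.0, 4500.0],
    "jitter": [111.3, 167.1, 387.4],
    "MICH": [40.7],
    "PRCL": [35.3],
    "SRCL": [38.1],
    "ASC CHARD Y": [8.125],
}

def min_separation(plan):
    """Smallest gap (Hz) between any two planned line frequencies."""
    freqs = sorted(f for group in plan.values() for f in group)
    return min(b - a for a, b in zip(freqs, freqs[1:]))

print(round(min_separation(AWG_LINES), 3))  # 2.6 (SRCL 38.1 vs MICH 40.7)
```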
Sheila, Matt, Camilla
We replaced the laser curtains around ISCT1/IOT2L that were removed for the HAM1 vent, so now this area can be taken to local laser hazard.
Late entry from last week. Roger's Machinery completed the new Kobelco installation and startup last Tuesday. Our last task before acceptance testing of the system was to replace the filter elements in the filter tree. When we opened the housings, we discovered that one of the elements was mangled and had evidence of rough tool use on it. When we removed the element, we could see that there was some evidence of galling in the threads where the element screws into the filter housing (see pics). We will order a new housing and replace it before continuing with testing the system.
We have completed the upgrade of H1 frontends to RCG5.5.0 at 09:51.
Detailed alog will be written but surprises found/re-remembered:
EX Dolphin frontends need to enable an unused port on the EY switch because the EX switch has no port control (damaged in the April power outage)
The PSL DBB model had an obsolete Dolphin IPC sender left over from when it had a Dolphin card. The new RCG doesn't allow senders with no cards. Removed the sender from the model; a DAQ restart is pending for this model
We had upgraded h1omc0 to 5.5.0 some time ago, but the H1.ipc file had changed so it needed a restart. Prior to restart it was clobbering the SWWD IPCs between sush34 and seih23, seih45.
Here is a rough sequence of today's upgrade, all times local PDT
07:22 rebuild H1EDC.ini to convince ourselves the first DAQ restart will be model changes only
07:43 h1cdsrfm powered down, this breaks the linkage between Dolphin locations
07:43 h1ecatmon0 upgrade (reboot)
07:45 Dolphin network manager started on h1vmboot5-5, causing the standard set of end station systems to crash (susey, seiey, iscey and susex). We PAUSE'd the remaining EX (iscex, seiex)
07:51 Reboot h1susaux[ex, ey, b123, h2, h34, h56] pem[mx, my] to upgrade them. susaux[h34, ey] got stuck and were power cycled via IPMI.
08:03 DAQ 0-leg restart for new INI files across the board.
08:18 DAQ 1-leg restart. At this point omc0, susaux's and pemmid have good DAQ data, everyone else has BAD DAQ data.
08:27 Power down EX machines, power up EY machines. SWWD IPCs working, Dolphin IPC checks out.
08:32 Power up EX machines, all at the same time because of the Dolphin switch issue. They do not start. After some head scratching we remembered that the startup sequence needs to activate a Dolphin switch port, which cannot happen at EX because the switch is damaged. The workaround is for all three EX front ends to switch an unused port on the EY switch. Once this was put into place the EX machines started without anyone having to drive to the end station.
08:55 Reboot h1psl0 to upgrade PSL models (no dolphin cards, more about this later...)
08:56 Power down all non-upgraded corner station machines (SUS+SEI+ASC+LSC+OAF) but not h1omc0 (more about this later..)
09:00 h1psldbb is not running. It has an obsolete Dolphin IPC sender part in the model but no Dolphin card. RCG5.5.0 does not allow this. Rebuild model sans IPC part, starts running. Note PSLDBB DAQ data is BAD from this point till the second DAQ restart.
09:10 First power up h1sush2b, h1seih16 for HAM1 SEI work. SWWD IPC between the two working well (well, for these two, more later...)
09:20 Power up all remaining corner station computers
09:30 Discovered weird SWWD IPC receive values for HAM3 and HAM4 (value should be 1.0, but is -0.002 or 0.000).
09:34 Tried restarting h1iopsush34; IPC values still bad. But h1omc0 had not been restarted, so it was using the old IPC configuration and could be writing out of bounds.
09:35 restart h1omc0 models, SWWD IPC errors are resolved
09:44 power up h1cdsrfm. First EX is PAUSE'd, EY and CS are fenced. Long-range Dolphin starts with no issues. A new MEDM is generated from the new H1.ipc file.
09:51 Complete in 2 hours.
Kiet and Sheila,
Following up on the investigation posted in aLOG 84136, we examined the impact of higher-order violin mode harmonics on the contamination region.
We found that subtracting the violin peaks near 1000 Hz (1st harmonic) from those near 1500 Hz (2nd harmonic) results in frequency differences that align with many of the narrow lines observed in the contamination region around 500 Hz.
Violin peaks that we used (from O4a+b run-average spectra):
F_n1 = {1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083} Hz
F_n2 = {1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333, 1486.36153, 1485.76708} Hz
Out of the 35 possible difference pairs (one peak from each set), 27 matched known lines in the contamination region to within 1/1800 Hz (~0.56 mHz), most within 0.1 mHz. Considering that each region actually contains >30 peaks, the number of matching pairs likely increases significantly, helping explain the dense forest of lines in the contamination region.
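The difference-pair census can be reproduced directly from the listed peaks. The tolerance here is one Fscan frequency bin for 1800 s SFTs, which is my assumption about where the 1/1800 Hz figure comes from:

```python
F_N1 = [1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083]  # Hz
F_N2 = [1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333,
        1486.36153, 1485.76708]  # Hz

# All 35 pairwise differences f2 - f1, one peak from each harmonic set.
diffs = sorted(f2 - f1 for f2 in F_N2 for f1 in F_N1)

TOL = 1.0 / 1800  # Hz (~0.56 mHz)

def match_count(candidates, known_lines, tol=TOL):
    """How many candidate differences land within tol of some known line."""
    return sum(any(abs(c - k) <= tol for k in known_lines) for c in candidates)

print(len(diffs), round(diffs[0], 5), round(diffs[-1], 5))
# 35 456.20569 481.25834 -> all differences fall in the ~456-481 Hz band,
# just below the 500 Hz contamination region's fundamental violin peaks.
```

With the full >30-peak lists per region, `match_count` run against the known contamination lines reproduces the pair counting described above.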
Next steps:
The Fscan run average data are available here (interactive plots):
Fundamental region (500 Hz):
1st harmonic region (1000 Hz):
2nd harmonic region (1500 Hz):
03:08 Observing