21:12 UTC We have recovered Observing after a long struggle with ASC. I had to accept a bunch of OFFSET SDF diffs from the dark offset script run earlier.
Inspected the wind fences today. The southernmost panel at EY continues to degrade; it may even save us the work by coming down on its own before the vent, when we plan to replace it. The rest of EY looks fine. EX also seems stable. I'm unsure of when we plan to repair it, but we have replacement panels and are starting to order hardware. First pic is EX, second is EY.
Preparing for an upgrade of the ASC model to add PM1 auto-centering. This includes new filter modules PM1_PIT and PM1_YAW, and IPC senders H1:ASC_PM1_PIT_SUSHTTS and H1:ASC_PM1_YAW_SUSHTTS.
For now the old auto-centering driving the MCL PZTs has been left in place.
Buttons for the new PM1 filter modules were added to the ASC MEDM screen as well.
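As a rough illustration of how the new filter modules could be exercised once the model is installed, here is a minimal ezca sketch (not the commissioned configuration); the buttons, gain, and ramp time are placeholders:

# Minimal sketch only: engaging the new PM1 centering filter modules via
# ezca's LIGOFilter helper. Gain, ramp time, and buttons are placeholders,
# not the commissioned settings.
import ezca as ezca_lib

ezca = ezca_lib.Ezca(ifo='H1')

for dof in ('PIT', 'YAW'):
    fm = ezca.get_LIGOFilter('ASC-PM1_%s' % dof)
    fm.turn_on('INPUT', 'OUTPUT', 'DECIMATION')  # enable the signal path
    fm.ramp_gain(1.0, ramp_time=5, wait=False)   # placeholder gain and ramp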
Jeff, Oli
WP 12370
ECR E1700228
In a continuation of preparing for the addition of PM1 (83168), Jeff and I have made the necessary model changes for the PR3 OpLev. As outlined in 83168, what needed to change in the model (to match the physical changes) was to move the PR3 OpLev channels. They were originally located in the top level of h1susim.mdl and sent via IPC to PR3, but we have moved them over to the h1suspr3.mdl model. This means that the outgoing IPC connection from h1susim into h1suspr3 has been removed (h1susim before, h1susim after), and the PR3 OpLev channels are now connected directly from ADC1 (h1suspr3 before, h1suspr3 after).
Both models were compiled successfully but have not yet been installed.
Model files can be found in /opt/rtcds/userapps/release/sus/h1/models/, and changes to h1susim.mdl and h1suspr3.mdl have been committed to the svn as revision 30905.
Wed Mar 05 10:11:50 2025 INFO: Fill completed in 11min 46secs
Gerardo confirmed a good fill curbside.
After a lockloss while attempting to relock this morning, I ran the dark offset script (/opt/rtcds/userapps/release/isc/h1/scripts/dark_offsets/dark_offsets_exe.py) after taking the IMC offline and shuttering ALS with the hope of fixing some of our ASC troubles.
I've attached a text file of the script output showing what all was changed, as well as screenshots of these being accepted in SDF safe.snap tables (TJ has more that he will add in a comment). These will likely need further accepting once we eventually reach NLN.
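For context, the dark-offset procedure amounts to averaging each photodiode with no light on it and writing the negated average into the filter module's OFFSET field. Below is a conceptual sketch, not the actual dark_offsets_exe.py; the channel in the list and the averaging time are hypothetical placeholders.

# Conceptual sketch of a dark-offset measurement, not the actual
# dark_offsets_exe.py: with the IMC offline and ALS shuttered, average each
# photodiode output and write the negated average into its OFFSET.
import cdsutils
import ezca as ezca_lib

ezca = ezca_lib.Ezca(ifo='H1')
AVG_SECONDS = 30                          # placeholder averaging time
pd_channels = ['ASC-EXAMPLE_PD_SEG1']     # placeholder, not the real list

for chan in pd_channels:
    dark = cdsutils.avg(AVG_SECONDS, 'H1:' + chan + '_OUT16')
    ezca[chan + '_OFFSET'] = -dark        # cancel the dark reading
    ezca.switch(chan, 'OFFSET', 'ON')     # make sure the offset is applied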
Closes FAMIS 37250, the first instance of this new task. All of the dust monitors look to be working as expected and are seeing counts, aside from LAB2, which is a known issue.
TITLE: 03/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 119Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.31 μm/s
QUICK SUMMARY:
H1 Glitches (Omicron) plot isn't updating
In some more camera fallout, we found that ISC_LOCK sets CAM18's exposure (H1:VID-CAM18_EXP) during PREP and CLOSE_BD, a channel that no longer exists. ISC_LOCK turned "blood red" for a few seconds complaining it couldn't connect, then it just moved on and went back to normal.
Those 2 lines have since been edited and loaded.
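For illustration only (the entry above just says the two lines were edited, so this is not necessarily what was done), a Guardian state method can be made tolerant of a stale channel by wrapping the write:

# Hypothetical guard inside a Guardian state method; the exposure value is a
# placeholder and the broad except is only for illustration.
try:
    ezca['VID-CAM18_EXP'] = 10000   # placeholder exposure value
except Exception as err:            # exact ezca exception class not assumed
    log('CAM18 exposure channel unavailable, skipping: %s' % err)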
Timing things that I don't understand but accepted; ALS WFS turned off, which I reverted in the hope that this will work again in the next lock; ESD limiters as expected from 83158
TITLE: 03/05 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in PREVENTATIVE_MAINTENANCE and LOCKING at DARM_TO_DC_READOUT
We have been unable to lock since the conclusion of maintenance. The issue is that PRMI's ASC does not improve the POP counts enough for DRMI to maintain lock.
While wind and microseism were the early suspected culprits for the locking issues, both have since calmed down, revealing PRMI_ASC as the problem. Since ~20:30 PT, we have been in a cycle of PRMI and DRMI where PRMI is able to lock and stay locked long enough to move on to DRMI, but then DRMI is unable to lock. When we stay at PRMI_ASC, it can stay locked indefinitely.
Elenna, Sheila, and I troubleshot this quite extensively by moving all relevant suspensions (PRM, BS, PR2). We also ran initial alignment, now a total of 3 times, without needing to change anything. At the end, Sheila suggested increasing the gain of LSC_POPAIR_B_RF18 from G = 2 to G = 4. This allowed DRMI triggering to activate a tad earlier than it usually would, which (excitingly) locked DRMI.
Relevant alog: alog 83178. The gain change fix has been commented out; it is on lines 564 and 565 of ISC_LOCK.py. The plan is that it will be reverted tomorrow, given a solution to the underlying issue of PRMI_ASC not improving.
Other:
LOG:
None.
Ibrahim increased H1:LSC-POPAIR_B_RF18_I_GAIN, which is normally 2 during lock acquisition, to 4 to get DRMI to lock, as he describes above. Normally the guardian then adjusts this gain to 3.991 when the 9 MHz modulation depth is reduced, and it gets reset to 2 in PREP_FOR_LOCKING. We commented out the lines in PREP_FOR_LOCKING that do this reset, but then I remembered that this is also in SDF revert, so I accepted 3.991 in the safe.snap file. And I uncommented the change to ISC_LOCK so that we won't have two places to remember to undo this.
Edited to add one more comment: We also turned off SRC2 P in DRMI ASC to stop a large oscillation. This isn't in the guardian.
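To summarize the gain lifecycle described above in one place, here is an illustrative Guardian-style sketch; this is not the actual ISC_LOCK.py code.

# Illustrative only; not the actual ISC_LOCK.py lines.

# Temporary workaround during DRMI acquisition (this gain normally sits at 2):
ezca['LSC-POPAIR_B_RF18_I_GAIN'] = 4

# Later in the sequence, when the 9 MHz modulation depth is reduced, the
# guardian adjusts the gain:
ezca['LSC-POPAIR_B_RF18_I_GAIN'] = 3.991

# PREP_FOR_LOCKING normally resets it to 2; that reset and the matching SDF
# entry are the two places that had to be kept consistent with the workaround:
ezca['LSC-POPAIR_B_RF18_I_GAIN'] = 2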
IFO is still LOCKING and has not been able to lock since MAINTENANCE ended.
At first, this was thought to be a combination of the microseism and wind speeds, but both have since come down to lockable states. We can also easily lock ALS and have gone through initial alignment fully automatically twice. What happens is that the PRMI_ASC alignment holds but the counts are not high enough, hovering around 50 and not improving. Then, when we attempt to lock DRMI, we lose lock every time. Elenna and I are troubleshooting the issue now. It certainly doesn't seem like the environment.
Now unable to make it past FIND_IR since the fully automatic initial alignment at ~7PM. Wind is just picking up and is now over 30mph again. The last 6 locklosses have been from FIND_IR, caused by ALSY slowly becoming unstable and losing lock. I will go into IDLE and attempt locking if the wind improves, which the forecast says is unlikely before shift end.
ALSY instability was caused by DOF2 P and Y slowly going in opposite directions, causing ALS to lose lock at FIND_IR. I turned them off at this point and was able to get past this state after 7 locklosses.
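For reference, turning the loops off amounts to ramping their gains to zero, along the lines of the sketch below; the filter module names are hypothetical placeholders, not necessarily the real ALS WFS channels.

# Sketch only: ramp the runaway DOF2 loops off. 'ALS-Y_WFS_DOF2_P'/'_Y' are
# placeholder names; the actual channels may differ.
for dof in ('P', 'Y'):
    fm = ezca.get_LIGOFilter('ALS-Y_WFS_DOF2_%s' % dof)
    fm.ramp_gain(0, ramp_time=2, wait=True)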
In other news, Elenna and I do not think this issue is environmental because we were able to hold PRMI lock for over 10 minutes but were not able to improve its counts past 60 with movements in all the relevant suspensions. DRMI is thus still failing to lock (and this is fresh after another initial alignment). The current hypothesis is that something is wrong with ASC.
The alignment on the AS AIR camera looks very poor in PRMI, and the buildups are lower than I expect (around 60 on the fast channel). We tried moving the beamsplitter, PRM, and PR2, and nothing seems to improve the buildup or the shape of the beam on the camera. I even tried stepping PR3 in all directions (I reverted my steps!) and nothing got better. DRMI locking isn't working; we see plenty of flashes but no triggers. My guess is the alignment is still so poor that we are not meeting the trigger threshold. However, we've done multiple initial alignments and even adjusted the alignment by hand with no luck. I am stumped about what the problem could be.
The PCALY_STAT Guardian node was created. It is a very basic Guardian node that just checks that a handful of channels are within a handful of historical thresholds to determine whether the PCAL subsystem is functioning.
This is just a copy of PCALX_STAT which has been running well for a number of months.
TJ has also put PCALY_STAT on the GUARD_OVERVIEW screen for easy access.
Thank you Dave for fitting us into this last DAQ restart today.
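For anyone curious what such a status node looks like, below is a minimal Guardian sketch of the pattern (read a handful of channels, compare against stored historical thresholds, notify if out of range); the channel names and limits are placeholders, not PCALY_STAT's actual configuration.

# Minimal Guardian-node sketch of the kind of check described above.
# Channel names and limits are placeholders; 'ezca' and 'notify' are
# provided by the Guardian runtime.
from guardian import GuardState

THRESHOLDS = {
    'CAL-PCALY_TX_PD_OUT16': (0.5, 2.0),   # placeholder channel and limits
    'CAL-PCALY_RX_PD_OUT16': (0.5, 2.0),
}

nominal = 'MONITORING'

class MONITORING(GuardState):
    request = True

    def run(self):
        for chan, (lo, hi) in THRESHOLDS.items():
            value = ezca[chan]
            if not lo <= value <= hi:
                notify('%s = %g outside [%g, %g]' % (chan, value, lo, hi))
        return True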
WP12362 Upgrade h1susex to latest 5.30 RCG
To address a difference in the RCG5.3.0 running on h1omc0 and h1susex, all models on h1susex were built against the latest 5.3.0. h1susex was rebooted as part of this upgrade. No DAQ restart was needed.
WP12358 Move cameras to new AMD servers
All the cameras from h1digivideo0 and h1digivideo1 were added to the four already running on h1digivideo4. This brings the number of cameras on this server to 15. All cameras are running at 25 frames-per-second limits.
The cameras on h1digivideo3 (deb11) were moved to a new server, h1digivideo5.
The h1digivideo[0,1,3] machines were powered down. No changes were made to h1digivideo2.
A new camera MEDM was built showing the new layout.
See Jonathan's alog for details.
WP12356 Install independent 1PPS generator at EY
An independent 1PPS function generator was installed at EY and connected to the 5th port of the comparator, next to the CNS-II clock. This will be used to investigate when the CNS-II occasionally has a 500 ns offset applied.
WP12355 Picket Fence Upgrade
Erik upgraded the picket fence system.
WP12332 BSC Temperature Monitors
Fil worked on the BSC1 (ITMY) temp monitor; it is now reading the correct temperature.
DAQ Restart.
An EDC DAQ restart was needed for three changes:
H1EPICS_DIGVIDEO.ini: reflect the move of cameras from old to new software.
H1EPICS_CDSMON.ini: remove h1digivideo3's epics load mon channels, add those for h1digivideo5
H1EPICS_GRD.ini: add channels for new Guardian node PCALY_STAT
The DAQ restart followed our new EDC procedure, whereby the trend-writers were stopped before the edc was restarted with its new configuration.
No major surprises, except FW1 spontaneously restarted itself after running for about seven minutes.
Tue04Mar2025
LOC TIME HOSTNAME MODEL/REBOOT
08:00:38 h1susex h1iopsusex <<< First we restarted h1susex models with new RCG
08:00:52 h1susex h1susetmx
08:01:06 h1susex h1sustmsx
08:01:20 h1susex h1susetmxpi
08:08:12 h1susex h1iopsusex <<< Then we rebooted h1susex
08:08:25 h1susex h1susetmx
08:08:38 h1susex h1sustmsx
08:08:51 h1susex h1susetmxpi
13:46:16 h1susauxb123 h1edc[DAQ] <<< New video, load_mon and grd ini loaded
13:47:04 h1daqdc0 [DAQ] <<< 0leg start
13:47:15 h1daqfw0 [DAQ]
13:47:15 h1daqtw0 [DAQ]
13:47:19 h1daqnds0 [DAQ]
13:47:24 h1daqgds0 [DAQ]
13:48:08 h1daqgds0 [DAQ] <<< gds0 needed a second restart
13:51:17 h1daqdc1 [DAQ] <<< 1leg start
13:51:26 h1daqfw1 [DAQ]
13:51:27 h1daqtw1 [DAQ]
13:51:28 h1daqnds1 [DAQ]
13:51:35 h1daqgds1 [DAQ]
14:00:36 h1daqfw1 [DAQ] <<< spontaneous restart of fw1
LVEA and CER CDS Wifi Problems
Erik reconfigured the LVEA and CER WAPs from DHCP to static IP addresses. This resolved the immediate connection problem between wap-control and these units. Details can be found in the FRS ticket.