Tue Jun 17 10:10:11 2025 INFO: Fill completed in 10min 7secs
Late entry from last week. Roger's Machinery completed the new Kobelco installation and startup last Tuesday. Our last task before acceptance testing of the system was to replace the filter elements in the filter tree. When we opened the housings, we discovered that one of the elements was mangled and had evidence of rough tool use on it. When we removed the element, we could see that there was some evidence of galling in the threads where the element screws into the filter housing (see pics). We will order a new housing and replace it before continuing with testing the system.
We have completed the upgrade of H1 frontends to RCG5.5.0 at 09:51.
Detailed alog will be written but surprises found/re-remembered:
EX dolphin frontends need to enable an unused port on the EY switch because the EX switch has no port control (damaged in Apr power outage)
PSL DBB model had an obsolete Dolphin IPC sender left over from when it used to have a Dolphin card. The new RCG doesn't allow senders with no cards. Removed the sender from the model; a DAQ restart is pending for this model
We had upgraded h1omc0 to 5.5.0 some time ago, but the H1.ipc file had changed so it needed a restart. Prior to restart it was clobbering the SWWD IPCs between sush34 and seih23, seih45.
Here is a rough sequence of today's upgrade, all times local PDT
07:22 rebuild H1EDC.ini to convince ourselves the first DAQ restart will be model changes only
07:43 h1cdsrfm powered down, this breaks the linkage between Dolphin locations
07:43 h1ecatmon0 upgrade (reboot)
07:45 Dolphin network manager started on h1vmboot5-5, causing the standard set of end station systems to crash (susey, seiey, iscey and susex). We PAUSE'd the remaining EX (iscex, seiex)
07:51 Reboot h1susaux[ex, ey, b123, h2, h34, h56] pem[mx, my] to upgrade them. susaux[h34, ey] got stuck and were power cycled via IPMI.
08:03 DAQ 0-leg restart for new INI files across the board.
08:18 DAQ 1-leg restart. At this point omc0, susaux's and pemmid have good DAQ data, everyone else has BAD DAQ data.
08:27 Power down EX machines, power up EY machines. SWWD IPCs working, Dolphin IPC checks out.
08:32 Power up EX machines, all at the same time because of the Dolphin switch issue. They do not start. After some head scratching we remembered that the startup sequence needs to activate a Dolphin switch port, which cannot happen at EX because the switch is damaged. The workaround is for all three EX front ends to enable an unused port on the EY switch. Once this was put into place, the EX machines started without anyone having to drive to the end station.
08:55 Reboot h1psl0 to upgrade PSL models (no dolphin cards, more about this later...)
08:56 Power down all non-upgraded corner station machines (SUS+SEI+ASC+LSC+OAF) but not h1omc0 (more about this later..)
09:00 h1psldbb is not running. It has an obsolete Dolphin IPC sender part in the model but no Dolphin card. RCG5.5.0 does not allow this. Rebuild model sans IPC part, starts running. Note PSLDBB DAQ data is BAD from this point till the second DAQ restart.
09:10 First power up h1sush2b, h1seih16 for HAM1 SEI work. SWWD IPC between the two working well (well, for these two, more later...)
09:20 Power up all remaining corner station computers
09:30 Discover weird SWWD IPC receive values for HAM3 and HAM4 (val should be 1.0, but is -0.002 or 0.000).
09:34 try restarting h1iopsush34, IPC values still bad. But h1omc0 has not been restarted, so it's using the old IPC configuration and could be writing out-of-bounds
09:35 restart h1omc0 models, SWWD IPC errors are resolved
09:44 power up h1cdsrfm. First EX is PAUSE'd, EY and CS are fenced. long range dolphin starts with no issues. A new MEDM is generated from the new H1.ipc file.
09:51 Complete in 2 hours.
As Kevin and Vicky found a couple of weeks ago, the SQZ HD PD A seems to be loose; it stopped working and Vicky re-seated it and it worked again. It had stopped working again, so I poked it (it was pitched down to start with) to make it work; now it will need realigning before the HD is functional again.
TITLE: 06/17 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
IFO is in DOWN for PLANNED MAINTENANCE
H1 had just gotten to NLN as I arrived. I then put it in DOWN for today's maintenance. Activities slated for today:
WP12536 RCG5.5.0 Upgrade
WP12624 SWWD variable bypass time h1iopsush2a
WP12625 Fix broken ADC selector h1hpiham4
WP12626 Fix partial ADC MEDM by adding named parts
Jonathan, Erik, EJ, Dave:
Yesterday, Mon 16jun2025, we did the final round of builds and installs of all the models against RCG5.5.0
h1build was rebooted against the new boot server (h1vmboot5-5) and was used to do the builds/installs.
After the builds, I ran each model through check_model_changes and then wrote a parser for the output files (a rough sketch of the idea follows the list of exceptions below):
All the INI changes were expected, most had the RCG additional channels expected from going from 5.1.4 to 5.5.0.
Exceptions were:
h1iopsusex had one fewer because it was upgrading from 5.3.0, so it already had the timing card temp chan
The following had additional ADC naming parts:
isibs, isiham3, isiham4, isietmx, isietmy, alsex, alsey, ascimc
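For reference, here is a rough sketch of the kind of output-file parser used for this check. It assumes check_model_changes writes one report per model with added/removed channels marked by leading '+'/'-' characters; that layout and the report path are illustrative assumptions, not the actual tool output.

#!/usr/bin/env python3
# Sketch: summarize per-model channel additions/removals from check_model_changes
# output. The report directory and the '+'/'-' line format are assumptions.
import glob
import os

REPORT_DIR = "/tmp/check_model_changes"  # assumed location of the per-model reports

def summarize(path):
    added = removed = 0
    with open(path) as f:
        for line in f:
            if line.startswith("+"):
                added += 1
            elif line.startswith("-"):
                removed += 1
    return added, removed

for report in sorted(glob.glob(os.path.join(REPORT_DIR, "h1*.txt"))):
    model = os.path.basename(report)[:-len(".txt")]
    added, removed = summarize(report)
    # models differing from the expected 5.1.4 -> 5.5.0 additions stand out here
    print(f"{model}: +{added} / -{removed}")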
Build took about 2 hours. Install took 2hrs 13mins. Backing up targets to target_archive took 45GB of disk space; /opt/rtcds is at 89%.
TITLE: 06/17 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We stayed locked the entire shift, high frequency SQZing hasn't been the best.
LOG: No Log.
00:01 UTC Observing
Jennie W, Keita, Rahul
Today the three of us went into the optics lab to upgrade unit S1202965 with the c'bore washer and qpd clamp plate that give a ~ 2 degree tilt between the QPD plane and the mount plate. See detail D in D1300720.
Looking at the assembly SolidWorks file, LIGO-D1101059, if the back face of the photodiode array is facing to the back, the longer clamp leg points towards the front, and the notch on the tilt washer should be at approximately the 4 o'clock position.
We first checked the current alignment into the array photodiodes and realised the beam was off by a large amount in yaw relative to the entrance aperture.
Keita had to change the mounts for the PZT mirror and lens as these were slightly tilted on the translation stage and it seemed like we needed some more robust alignment setup.
We then tried aligning with the upstream steering mirror and PZT mirror, but could see multiple beams on each array PD. To check that the beam is not too large at the input aperture, we want to re-profile the beam size on the way into the ISS assembly.
We left the setup with the M2MS beam profiler set up at the corner of the table and the beam roughly aligned into it; more fine adjustment needs to be done.
The reason why the alignment was totally off is unknown. It was still off after turning on the PZT driver with an offset (2.5V) so it cannot be the PZT mirror. Something might have been bumped in the past month or two.
TITLE: 06/16 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 20mph Gusts, 7mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Ryan S., Elenna
The MOVE_SPOTS state is taking 13 minutes (!) to complete, because the YAW3 ADS DOF is very far off and taking a significant time to converge. Both Jenne and I have found that slowly bumping up the YAW3 gain (PRM yaw) helps the loops converge much faster.
Ryan kindly helped me update the state code to slowly increase the gain if the convergence is taking too long. We added a new timer, 'ADS', that waits for one minute after the new A2L gains are ramped (so an additional minute after the 2-minute ramp time of the A2L gains). If, after that first minute, there is still no convergence, then the YAW3 gain is doubled. After that, the 'ADS' timer waits 2 minutes and again doubles the gain. This process can happen up to three times, which should increase the YAW3 gain to a maximum value of 8. Jenne and I have found that the gain can go as high as 10 in this state. The two-minute waits give the other ASC loops, like SRC1 and INP1 Y, time to converge as the ADS pulls the PRM in faster. Once the convergence checker returns true, the YAW3 gain is set back to 1.
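For illustration, here is a minimal guardian-style sketch of that stepping logic. The gain channel name, the ads_converged() placeholder, and the exact timer bookkeeping are stand-ins for the real MOVE_SPOTS code, not what was actually loaded.

from guardian import GuardState

def ads_converged():
    # Placeholder for the existing ADS convergence checker used in ISC_LOCK.
    return False

class MOVE_SPOTS_SKETCH(GuardState):
    # ezca is provided as a global by the guardian framework.
    def main(self):
        self.doublings = 0
        # wait one extra minute after the 2-minute A2L gain ramp
        self.timer['ADS'] = 2*60 + 60

    def run(self):
        if not self.timer['ADS']:
            return False                       # timer not yet expired
        if ads_converged():
            ezca['ASC-ADS_YAW3_GAIN'] = 1      # restore nominal gain (channel name assumed)
            return True
        if self.doublings < 3:                 # gain 1 -> 2 -> 4 -> 8 maximum
            ezca['ASC-ADS_YAW3_GAIN'] *= 2
            self.doublings += 1
            self.timer['ADS'] = 120            # wait 2 minutes before the next doubling
        return False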
We will monitor how this proceeds on this locking attempt. I updated the guardian notify statements so it states when the gain is increased.
This was a success: this run-through took only 7 minutes. I am shortening the 2-minute wait before increasing the gain to 90 seconds. If that still works, maybe we can go to 60 seconds.
To be more specific, in the first attempt described above the state took 6 minutes, 50 seconds. I loaded the change to reduce the wait time from 120 to 90 seconds, which only shortened the state length to 6 minutes, 30 seconds. The gain was only ramped to 8 for a very short period of time. I still think we can make this shorter, which we can do by making that wait time 60 seconds, and maybe by taking bigger steps in the gain each time. However, we are still in the RCG upgrade, so I will hold off on changes to the guardian for now.
YAW3 is still limiting the length of the state. In this morning's relock, YAW3 convergence took nearly a minute longer than the other loops. Once we have caught YAW3 up to everything else, we could make the state even shorter by raising the gain of other ADS loops. Two minutes of the state are taken up in the ramp of the A2L gain, so it is taking an additional 4 minutes, 30 seconds to wait for loop convergence.
Now it seems that PIT3 is taking too much time to converge, so I updated the guardian to also increase the PIT3 gain in the same way.
In preparation for the RCG upgrade, we are using the relocking time to reconcile SDF differences in the SAFE file.
Here are some of mine:
I have also determined that the unmonitored channel diffs in the LSC, ASC, and OMC models are guardian controlled values and do not need to be saved.
Not accepting or reverting the h1sqz, h1ascsqzfc, or slow controls cs_sqz SDFs (attached), as these have the same observe and safe files.
Accepted H1:TCS-ETMX_RH_SET{LOWER,UPPER}DRIVECURRENT as ndscope-ing shows they are normally at this value.
Some of these SDFs may have then led to diffs in the OBSERVE state. I have reverted the roll mode tRamp, and accepted the OSC gains in the CAL CS model.
I updated the OPTICALIGN OFFSETs for each suspension that we use those sliders on. I tried using my update_sus_safesnap.py script at first, but even though it's worked one other time in the past, it was not working any time I tried using it on more than one suspension at a time (it seems like it was only doing one out of each suspension group). I ended up being able to get them all updated eventually anyway. I'm attaching all their SDFs and will be working on fixing the script. Note that a couple of the ETM/TMS values might not match the setpoint exactly, since the screenshots happened during relocking after they had moved a bit with the WFS.
All looks well, aside from the known issue with LAB2; LVEA5 also seems frozen, so I'll investigate that tomorrow during maintenance.
LVEA5 being off is expected, it's a pumped dust monitor so we turned it off for observing.
Oli, Camilla, Sheila, RyanS
It was pointed out (84972) that our new SRCL offset is too big at the beginning of the lock, affecting the calibration and how well we are squeezing. Camilla had the idea of taking the unused-but-already-set-up THERMALIZATION guardian and repurposing the main state so it steps LSC-SRCL1_OFFSET from its value at the end of MAX_POWER to the official offset value given in lscparams (offset['SRCL_RETUNE']). This stepping starts at the end of MAX_POWER and runs for 90 minutes. Here is a screenshot of the code.
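Since the screenshot isn't reproduced here, below is a rough guardian-style sketch of the stepping described above. The one-minute step interval and the state layout are assumptions; LSC-SRCL1_OFFSET and lscparams.offset['SRCL_RETUNE'] are as referenced in this alog.

from guardian import GuardState
import lscparams

class THERMALIZATION_SKETCH(GuardState):
    # ezca is provided as a global by the guardian framework.
    def main(self):
        self.start = ezca['LSC-SRCL1_OFFSET']          # value at the end of MAX_POWER
        self.target = lscparams.offset['SRCL_RETUNE']  # official offset from lscparams
        self.nsteps = 90                               # one step per minute for 90 min (assumed)
        self.step = 0
        self.timer['step'] = 60

    def run(self):
        if self.step >= self.nsteps:
            return True                                # offset has reached the target
        if self.timer['step']:
            self.step += 1
            frac = self.step / self.nsteps
            ezca['LSC-SRCL1_OFFSET'] = self.start + frac*(self.target - self.start)
            self.timer['step'] = 60
        return False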
To go with this new stepping, we've commented out the line (~5641) in ISC_LOCK's LOWNOISE_LENGTH_CONTROL (ezca['LSC-SRCL1_OFFSET'] = lscparams.offset['SRCL_RETUNE']) so that the offset doesn't get set to that constant and instead keeps stepping.
To get this to run properly while observing, we did have to unmonitor the LSC_SRCL1_OFFSET value in the Observe sdf (sdf).
Attached is a screenshot of the grafana page, highlighting the 33 Hz calibration line, which seems to be the most sensitive to thermalization. Before, when the SRCL offset was set static, it appears that the 33 Hz line uncertainty starts at about 1.09 and then decays down to about 1.02 over the first hour. With the thermalization adjustment of the SRCL offset from 0 to -455 over one hour, the 33 Hz uncertainty starts around 0.945 and then increases to 1.02 over the first hour. Seems like we overshot in the other direction, so we could start closer to -200 perhaps and move to -455.
We decided to change the guardian so that it starts at -200 before then stepping its way up to -455 over the course of 75 minutes instead of 90 minutes.
With the update to the guardian to start at -200, each calibration line uncertainty has actually stayed very flat for these first 30 minutes of lock (except for the usual very large jump in the uncertainty for the first few minutes of the lock).
This shows the entire lock using the thermalization guardian with the SRCL offset ramping from -200 to -455; the line uncertainty holds steady the entire time within 2-3%!
Sheila, Elenna, Camilla
Sheila was questioning whether something is drifting such that we need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot, both while powering up / moving spots and while in NLN. It's unclear from the BS alignment inputs plot what's causing this.
This was also happening before the break (see below) but the operators were similarly needing more regular initial alignments before the break too. 1 year ago this was not happening, plot.
These large BS PIT changes began 5th to 6th July 2024 (plot). This is the day shift from the time that the first lock like this happened 5th July 2024 19:26UTC (12:26PT): 78877 at the time we were doing PR2 spot moves. There also was a SUS computer restart 78892 but that appeared to be a day after this started happening.
Sheila, Camilla
This reminded Sheila of when we were heating a SUS in the past and causing the bottom mass to pitch and the ASC to move the top mass to counteract this. Then, after lockloss, the bottom mass would slowly go back to its nominal position.
We do see this on the BS since the PR2 move; see attached (top 2 left plots). In the green bottom mass oplev trace, when the ASC is turned off on lockloss, the BS moves quickly and then continues to move slowly over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot before the PR2 move. Below is a list of other PR2 positions we tried; all the other positions have also produced this BS drift. The total PR2 move since the good place is ~3500urad in yaw.
To avoid this heating and BS drift, we should move back towards a PR2 YAW closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.
I did a bit of alog archaeology to re-remember what we'd done in the past.
To put back the soft turn-off of the BS ASC, I think we need to:
Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight. Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian, and loaded. We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (eg, from a DRMI lockloss).
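For context, here is a minimal sketch of what a 'soft let-go' can look like in guardian code, assuming it is done by setting a long ramp time on the BS M1 pitch lock bank and ramping the drive away rather than clearing it instantly; the channel/filter names and the 10-second ramp are assumptions, not the actual ISC_DRMI edit.

# Sketch only: soft let-go of BS pitch on the DRMI down path.
# Channel names, FM1 usage, and the ramp time are assumptions.
def soft_release_bs_pit(ezca):
    ezca['SUS-BS_M1_LOCK_P_TRAMP'] = 10            # long ramp time (assumed 10 s)
    ezca.switch('SUS-BS_M1_LOCK_P', 'FM1', 'OFF')  # let the low-frequency drive bleed off
    ezca['SUS-BS_M1_LOCK_P_GAIN'] = 0              # ramp the pitch ASC drive to zero

(The later note about FM1 being saved off in the SUSBS safe.snap and re-enabled in the PREP_*_ASC states suggests the real change works along these lines, but the specifics above are a guess.)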
This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different.
In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread). Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much, and is mostly flattened out after a shorter amount of time. I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time). However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.
We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments. If needed, we can try to tweak the time constant of the 'soft let-go' to further make the optical lever signal stay more overall flat.
The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert. The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.
RyanS, Jenne
We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically settled where it wants to be for the next acquisition of DRMI. Attached is one such example.
In contrast, a day or so ago Tony had to do an initial alignment. On that day, it seemed like the BS took much longer to get to its quiescent spot. I'm not yet sure why the behavior is different sometimes.
Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.