J. Kissel. Just stating for the record, since the 2023 aLOG records are a bit unclear (LHO:72106 and LHO:72130 are all I could find, and they claim -0.2): all DOFs of H1 SUS SRM's M1 damping loop gains have been at -0.5 since 2023-04-21 22:49 UTC. Prior to that, they were at the -1.0 design value (see the 2022 upgrade to "level 2.0", LHO:65310). See the attached trend confirming this.
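For reference, a check like the attached trend can be reproduced with a few lines of gwpy. This is just a sketch; the damping-gain channel names below follow the usual SUS naming convention and are an assumption, not copied from this entry.

# Sketch: trend the SRM M1 damping gains around the change (channel names assumed).
from gwpy.timeseries import TimeSeriesDict

channels = ['H1:SUS-SRM_M1_DAMP_%s_GAIN.mean,m-trend' % dof
            for dof in ('L', 'T', 'V', 'R', 'P', 'Y')]
data = TimeSeriesDict.get(channels, '2023-04-20', '2023-04-23')
for name, ts in data.items():
    # expect -1.0 before 2023-04-21 22:49 UTC and -0.5 afterwards
    print(name, ts.value.min(), ts.value.max())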
Tue Jun 24 09:40:55 2025 INFO: Fill completed in 8min 51secs
Today's fill was run at 09:33 to complete it before Patrick started his work on h0vacly.
Per WP 12570 we stopped the cameras on h1digivideo[45] and updated the pylon and pylon-camera-server software. This was to bring in a fix in the pylon library for leaked file descriptors when a camera dropped and reconnected.
Last week we had only restarted the software to close all the leaked file descriptors; this update should solve the problem for good. This was not a feature update of the camera software, just a rebuild against a newer pylon library.
This was done 8:50-9:00am local time (16:00 UTC).
The procedure was to:
1. run apt-get update
2. stop all the camera processes via the management web page
3. run apt-get install pylon pylon-camera-server
4. restart all the camera processes via the management web page
5. spot check a few cameras to make sure things come back.
For posterity, pylon was updated to 8.1.0 and pylon-camera-server was updated to 0.1.18.
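As an informal sanity check after this kind of restart (not part of the procedure above), one can keep an eye on the open file descriptor counts of the camera-server processes; a leak would show up as a steadily growing count. The process-name pattern in this sketch is an assumption.

# Count open file descriptors for processes matching a name pattern (Linux /proc).
import os, glob

def fd_counts(pattern='pylon-camera-server'):
    counts = {}
    for cmdline in glob.glob('/proc/[0-9]*/cmdline'):
        pid = cmdline.split('/')[2]
        try:
            with open(cmdline, 'rb') as f:
                if pattern.encode() not in f.read():
                    continue
            counts[pid] = len(os.listdir('/proc/%s/fd' % pid))
        except (FileNotFoundError, PermissionError):
            continue  # process exited or fd dir not readable
    return counts

print(fd_counts())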
A FAMIS task reminded us to update the leap second files on some core infrastructure that we don't update much.
These were updated either by updating the tzdata package or copying over a current leap-seconds.list + leapseconds file to /usr/share/zoneinfo.
h1daqd* machines were updated to tzdata 2025b-0+deb11u1
h1guardian1 already had this applied.
Erik is updating h1vmboot5-5 and its diskless roots
h1fs[01] have been updated
h1hwinj1 has been updated
h1digivideo2 has been updated
h1hwsmsr & h1hwse[xy] have been updated
h1fescript0 has been updated
This is regular maintenance. An example of what happens when we miss this is in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82769
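A quick way to tell whether a host's leap second file is getting stale is to read the expiry line out of leap-seconds.list. A rough sketch, assuming the standard IERS/NIST file format where the line starting with "#@" carries the expiry time in NTP seconds (seconds since 1900-01-01):

# Report the expiry date of the installed leap-seconds.list.
from datetime import datetime, timedelta, timezone

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def leap_file_expiry(path='/usr/share/zoneinfo/leap-seconds.list'):
    with open(path) as f:
        for line in f:
            if line.startswith('#@'):
                return NTP_EPOCH + timedelta(seconds=int(line.split()[1]))
    raise ValueError('no #@ expiry line found in %s' % path)

expiry = leap_file_expiry()
status = 'EXPIRED' if expiry < datetime.now(timezone.utc) else 'ok'
print('leap-seconds.list expires', expiry, '(%s)' % status)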
TITLE: 06/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Currently Observing and have been Locked for almost 4 hours. Magnetic injections just finished.
Workstations updated and rebooted. This was an OS packages update. Conda packages were not updated.
The FC TRANS GR (CAM33) camera looks to have crashed, or at least the channels for its controls were no longer accessible. The image was still viewable, at least from the FOM screenshots. I restarted the process via the browser interface linked from the camera overview and that did the trick. Back to Observing at 07:30 UTC.
TITLE: 06/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY: H1 lost lock due to one or more earthquakes this evening and is still working on relocking. Everything ran smoothly and automatically on the way back up, but there was a lockloss for an unknown reason during MAX_POWER as I type this, so H1 will be trying again on its own.
Lockloss @ 03:03 UTC after almost 6 hours locked - link to lockloss tool
Several quakes rolling through around this time; hard to say which was the real cause but likely a M5.7 in the Caribbean.
(Jordan V., Travis S., Gerardo M.)
Up to this morning the volume of HAM1 was being pumped by a turbo pump and an ion pump. We have now taken the turbo pump out of service by closing its isolation valve, but the pumping system itself remains on (the SS500 cart with scroll pump and turbo pump are still ON). The turbo pump was isolated to let the ion pump take over the pumping of HAM1. It took a few hours, but the ion pump seems to be doing a good job of managing the internal pressure of HAM1; see the attached trend. There is a small anomaly that can be noted on the same trend data, a little spike, which we are looking into. If the pressure continues to improve, we'll be able to turn off all other auxiliary systems and decouple everything from the turbo pump tomorrow.
FAMIS 31091
Nothing major to report; things are looking stable this week.
TITLE: 06/23 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Currently Observing at 150Mpc and have been Locked for just over 2 hours. Nothing too out of the ordinary today. We had two locklosses, but relocking was relatively straightforward.
LOG:
14:30 Observing at 140Mpc and have been locked for 11 hours
14:54 Out of Observing due to SQZ unlock
15:00 Staying out of Observing to start Commissioning - went to FIS
17:18 Lockloss (85239)
- Holding in DOWN for a bit while VAC team closes the pump valve for HAM1
- Running an initial alignment (accidentally did it before the newly mandated 30-60 min BS cooling period, but it went okay)
18:52 NOMINAL_LOW_NOISE
19:58 Lockloss (85249)
21:23 NOMINAL_LOW_NOISE
21:25 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:41 | FAC | Nellie | MY | n | Tech clean | 15:29 |
14:55 | FAC | Kim | MX | n | Tech clean | 15:49 |
16:13 | PCAL | Tony | PCAL Lab | y(local) | Packing up PCAL stuff | 17:06 |
16:52 | ISC | Sheila | LVEA | n | Plugging in a cable (in 10 mins ago) | 16:52 |
17:21 | VAC | Jordan, Travis | LVEA | n | Closing out pump valve | 17:29 |
20:21 | ISC | Keita | OpticsLab | y(local) | ISS array | 21:37 |
20:26 | FAC | Tyler | MX | n | Bees | 20:41 |
21:09 | Mitchell, Jason | MY | n | Verifying some stuff | 22:20 |
We're assembling the first unit that incorporates all upgrades, including the QPD tilt, and here are some minor problems we've stumbled upon. (No ISS array unit with the upgrade to tilt the QPD (E1400231) has been assembled before, as far as I can see, and nobody seems to have gotten around to updating all the drawings.)
The first picture is an example of the QPD before the upgrade. The QPD assembly (D1400139) and the cable connector assembly (D1300222) are mounted on the QPD platform by the QPD clamp plate (D1300963-v1, an older version) and a pair of split QPD connector clamps (D1300220). Two pieces of kapton insulation sheets protect the QPD assy from getting short-circuited to the platform.
After the upgrade, the QPD assy sits on top of a tilt washer (D1400146, called a beveled C-bore washer) that tilts the QPD by 1.41 deg in a plane 45 degrees between YAW and PIT (2nd picture). The bottom kapton will go between the washer and the QPD platform plate.
Problem 1: Insulation between the QPD clamp and the QPD pins is a bit sketchy.
A tilted QPD means that the bottom of the QPD assy is shifted significantly in YAW and PIT. A new asymmetric QPD clamp plate with tilted seating for the screws (D1300963-v2) has been manufactured to accommodate that. But we have no record of updated kapton insulators, so the center of the clamp bore doesn't agree with the kapton (3rd picture; note that the QPD rotation is incorrect in this picture, which had to be fixed when connecting the cable). Since the tilt washer is not captured by anything (it's just sandwiched between the clamp and the platform plate), it is possible to shift the QPD assy such that some of the QPD pins get grounded to the clamp and thus to the QPD platform plate.
You must check that there's no electrical connection between the QPD assy and the platform each time you adjust the QPD position in the lab.
Problem 2: New QPD connector clamp posts are too long, old ones are too short.
Old posts for the QPD connector are 13/16" long, which is too short for the upgrade because of the tilt washer; see the 4th picture where things are in a strange balance. It seems as if it's working OK, but if you wiggle the post a bit so it slides laterally relative to the clamp and/or the platform, it settles to a different angle and suddenly things become loose. To avoid that, you have to tighten the screws so hard that they start bending (which may already be starting to happen in this picture).
Also, because the clamp positions are 45 degrees away from the direction of tilt, one clamp goes higher than the other.
To address this, somebody procured 1" and 15/16" posts years ago, but they're too tall, to the point where the clamps are loose. If anything, what we need is probably something like 27/32" and 7/8" (maybe 7/8" works for both).
We ended up using the older 13/16" posts but added washers: two thin washers for the shorter clamp, two thin plus one thick for the taller one (5th picture). This works OK. The shorter screw is the original; the longer screw was too long but it works.
Problem 3: It's easy to set the rotation of the QPD wrong.
When retrofitting the tilt washer and the newer QPD clamp plate, you must take care to set the QPD rotation correctly.
I screwed up and put the QPD on the connector at a wrong angle. It's easy to catch the error because no quadrant responds to the laser, but it's better not to make a mistake in the first place. It will help if the QPD assy barrel is marked at the cathode-anode1 corner.
It seems that D1300222 and D1101059 must be updated. Systems people please have a look.
D1300222: A tilt washer (D1400146), a new QPD clamp (D1300963-v2) and two sheets of kapton insulation are missing. The spacers also need to be longer than 13/16".
D1101059: Explicitly state that part #28 (D1300963, QPD clamp) must be D1300963-v2.
I installed the beam dumps (which are two plates of filter glass, probably from Schott?) for the array after cleaning them according to E2100057.
There are marks that look like water spots and/or some fog that couldn't be removed by repeated drag wiping with methanol (see picture).
After installation, I found that these plates are only loosely captured between two metal plates (see the video); this seems to be by design. I don't like it, but the same design has been working in chamber for years.
TITLE: 06/23 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 126Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 has been locked for 1.5 hours.
Sheila reduced the OPO Trans setpoint from 95uW to 80uW (our nominal from before the break); we hope this will make the SQZ less sensitive to the chosen angle. She readjusted the OPO temp and measured the NLG to still be around 8 to 9. We are not sure why this has dropped since last week, but she noticed pump depletion when measuring the NLG with the setpoint at 95uW.
Lockloss at 2025-06-23 19:58 UTC. A commissioner was running a jitter noise injection at the time, but doesn't think that was the cause of the lockloss.
The DARM signals look very noisy just before the lockloss because I was injecting jitter noise. I don't think that caused the lockloss. The tool is tagging this as "ETM GLITCH", which I also think is wrong, because the extra noise was bringing the DARM signal above the threshold for the glitch checker.
21:25 UTC Back to Observing
We took 20 minutes of no-squeezing data for a cross correlation measurement today. I ran a script to collect the full 524 kHz data from DCPD A and B. The data is currently saved in 1 second hdf5 frames in /ligo/home/elenna.capote/OMC_DCPD/252306-092558_xcorr_data
I was able to read in each 1 second frame and generate a gwpy timeseries for each DCPD data set that I then saved as a single .gwf file. I also used the gwpy resample function to decimate the data to 64k so there is a smaller version. Both sets of files are saved in the same directory and can be read in with gwpy, which should also include metadata with sample rate, gps start, and channel name.
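For anyone who wants to repeat the stitching step, something along these lines should work with gwpy. This is only a sketch: the per-file naming pattern is a guess, since it depends on how the collection script wrote the hdf5 files.

# Stitch the 1 s hdf5 segments into one TimeSeries, then write full-rate and 64k .gwf files.
import glob
from gwpy.timeseries import TimeSeries, TimeSeriesList

datadir = '/ligo/home/elenna.capote/OMC_DCPD/252306-092558_xcorr_data'
segments = TimeSeriesList()
for fname in sorted(glob.glob(datadir + '/*DCPD_A*.hdf5')):  # file pattern is a guess
    segments.append(TimeSeries.read(fname))  # one 1 s, 524 kHz chunk per file
dcpd_a = segments.join()  # single continuous TimeSeries
dcpd_a.write(datadir + '/H1-OMC_DCPD_A_524k.gwf')  # full-rate frame
dcpd_a.resample(2**16).write(datadir + '/H1-OMC_DCPD_A_64k.gwf')  # decimated 64k copy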
Ryan S., Elenna
The MOVE_SPOTS state is taking 13 minutes (!) to complete, because the YAW3 ADS DOF is very far off and takes a significant time to converge. Both Jenne and I have found that slowly bumping up the YAW3 gain (PRM yaw) helps the loops converge much faster.
Ryan kindly helped me update the state code to slowly increase the gain if the convergence is taking too long. We added a new timer, 'ADS', that waits for one minute after the new A2L gains are ramped (so an additional minute after the 2 minute ramp time of the A2L gains). If, after that first minute, there is still no convergence, the YAW3 gain is doubled. The 'ADS' timer then waits 2 minutes and doubles the gain again. This process can happen up to three times, which should increase the YAW3 gain to a maximum value of 8. Jenne and I have found that the gain can go as high as 10 in this state. The two minute waits give the other ASC loops, like SRC1 and INP1 Y, time to converge as the ADS pulls PRM in faster. Once the convergence checker returns true, the YAW3 gain is set back to 1.
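To make the schedule concrete, here is a minimal stand-alone sketch of the gain-doubling logic described above (not the actual Guardian state code, just the arithmetic of the schedule):

# Gain starts at 1; each time the ADS timer expires without convergence the gain
# is doubled, up to three times (max 8); on convergence the gain returns to 1.
def yaw3_gain_schedule(converged, n_doublings, max_doublings=3):
    """Return (new_gain, new_n_doublings) for one ADS-timer expiry."""
    if converged:
        return 1, 0  # convergence reached: restore nominal gain
    if n_doublings < max_doublings:
        n_doublings += 1  # take one more doubling step
    return 2 ** n_doublings, n_doublings

# Example: three timer expiries without convergence, then convergence.
gain, steps = 1, 0
for conv in (False, False, False, True):
    gain, steps = yaw3_gain_schedule(conv, steps)
    print(gain)  # prints 2, 4, 8, 1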
We will monitor how this proceeds on this locking attempt. I updated the guardian notify statements so it states when the gain is increased.
This was a success: this run-through took only 7 minutes. I am shortening the 2 minute wait before increasing the gain to 90 seconds. If that still works, maybe we can go to 60 seconds.
To be more specific, the first attempt as described above meant the state took 6 minutes, 50 seconds. I loaded the change to reduce the wait time from 120 to 90 seconds, which only shortened the state length to 6 minutes, 30 seconds. The gain was only ramped to 8 for a very short period of time. I still think we can make this shorter, which we can do by making that wait time 60 seconds, and maybe taking bigger steps in the gain each time. However, we are still in the RCG upgrade, so I will hold off on changes to the guardian for now.
YAW3 is still limiting the length of the state. In this morning's relock, YAW3 convergence took nearly a minute longer than the other loops. Once we have caught YAW3 up to everything else, we could make the state even shorter by raising the gains of other ADS loops. Two minutes of the state are taken up in the ramp of the A2L gains, so it is taking an additional 4 minutes, 30 seconds to wait for loop convergence.
Now it seems that PIT3 is taking too much time to converge, so I updated the guardian to also increase the PIT3 gain in the same way.