Wed Oct 01 10:07:13 2025 INFO: Fill completed in 7min 9secs
Looked at the last 6 locklosses from Observing.
Could see nothing much in all but one of them, 1443315128, where ETMX_L3 and DARM get noisy in the 70ms before lockloss; plot attached.
TITLE: 10/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT_USEISM
Wind: 10mph Gusts, 7mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.51 μm/s
QUICK SUMMARY: Locked for 3.5 hours. Two GRB shorts passed through, E606087 and E606088, at 1418 and 1431 UTC. No alarms. The useism is just getting above the 90th percentile.
Plans for today...Observe!
H1 Manager woke me up complaining about having been out of Observing for over 10 minutes. I checked and found that the issue was that the OPO ISS was not able to stay locked. I took the detector to NO_SQZING, took the OPO guardian to LOCKED_CLF_DUAL_NO_ISS and noted that OPO trans was around 62 uW, much lower than our 80 uW setpoint, so I used 86767 as a checklist of what to do, since we had had the same issue a few weeks ago.
I checked the SHG temp (it had been updated earlier today, 87233), and the SHG output still looked good, above 90mW, so I went to the next step from that alog - adjusting the half wave plate that's in the SHG path between the 200 MHz AOM and the SHG Rejected Power PD. I had an ndscope open with H1:SQZ-OPO_TRANS_LF_OUTPUT and H1:SQZ-SHG_LAUNCH_DC_POWERMON, and adjusted the wave plate a step at a time until the SHG launch power was about/above what it had been before (~26uW) and OPO trans was above the 80 uW setpoint. Once that was done, I was able to relock and reinject squeezing with no issues and went back into Observing.
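For reference, below is a minimal sketch (not the actual procedure, just illustrative) of polling those two PDs from a terminal while stepping the wave plate, assuming pyepics and control-room EPICS access; the channel names are the ones quoted above, the rest is assumption.

```python
# Minimal polling sketch, assuming pyepics is available.
import time
from epics import caget

OPO_TRANS = "H1:SQZ-OPO_TRANS_LF_OUTPUT"      # was ~62 uW when the ISS dropped out
SHG_LAUNCH = "H1:SQZ-SHG_LAUNCH_DC_POWERMON"  # compare against the pre-drop level
OPO_SETPOINT = 80.0                           # uW, the setpoint quoted above

while True:
    opo = caget(OPO_TRANS)
    shg = caget(SHG_LAUNCH)
    flag = "OK" if opo is not None and opo >= OPO_SETPOINT else "LOW"
    print(f"OPO trans: {opo}  SHG launch: {shg}  [{flag}]")
    time.sleep(1)
```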
TITLE: 09/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Locking was not trivial (seems like first time in many weeks for me). Wish I could say I solved the issue, but really all I did was run a 2nd alignment and then let H1 lock.
LOG:
H1 had a lockloss about 90min ago for no obvious reason; all was quiet as well. Attempted to relock but DRMI did not look great; also went for Check MICH Fringes + PRMI, but could not even get PRMI to lock after letting it try for 20min. Ultimately went for an alignment, and that's where we are. Flashes look mostly decent and POP18/90 are flashing close to 200, but we just have not had any locks (only a few short ~1sec locks thus far). Microseism is just under the 95th percentile. Currently at 20min of DRMI locking.
After over 3hrs, H1 finally locked an "xRMI"!! It took lots of waiting, two alignments, and me leaving the Control Room (with Lockloss Alerts on my phone) to work on the JAC table, which stopped me from staring at DRMI and H1 trying/struggling and just let them do their thing! (I was about ready to make some phone calls!) Oh, and we are currently in the USEISM state.
4:05 UTC: Back to Observing (after being down since 0:51 UTC).
Famis task 26658 H1 ISI CPS Sensor Noise Spectra Check - Weekly
Comparing to https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=87025
HAM7_CPSINF_V2 & HAM7_CPSINF_V3 look a bit elevated around 36 Hz.
ETMX_ST2_CPSINF_V3 has a 16 Hz increase in it; ST1 does as well.
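For anyone repeating this check offline, a rough sketch of pulling the spectra with gwpy is below; the full DQ channel names and time range are assumptions (the log only quotes the short names), so check the ISI frames/MEDM for the real ones.

```python
# Sketch of reproducing the weekly CPS spectra comparison with gwpy.
from gwpy.timeseries import TimeSeries

CHANNELS = [
    "H1:ISI-HAM7_CPSINF_V2_IN1_DQ",      # assumed full name for HAM7 CPSINF V2
    "H1:ISI-HAM7_CPSINF_V3_IN1_DQ",      # assumed
    "H1:ISI-ETMX_ST2_CPSINF_V3_IN1_DQ",  # assumed
]
START, END = "2025-09-30 08:00", "2025-09-30 09:00"  # example quiet stretch

for chan in CHANNELS:
    data = TimeSeries.get(chan, START, END)
    asd = data.asd(fftlength=64, overlap=32)
    # look at the 16 Hz and 36 Hz bins called out above
    print(chan, asd.value_at(16), asd.value_at(36))
```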
TITLE: 09/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.40 μm/s
QUICK SUMMARY:
H1's been locked 3+hrs (locking post-Maintenance went smoothly according to TJ!).
There is a CE/ET beam tube meeting going on this week (we will be locking doors at close of business tonight).
Also got word from Erik that we can test a new ndscope with "ndscope -test".
Dave, TJ, Camilla
While Dave was troubleshooting a HWS file issue (started in 84865 but just noticed via HWS Live issues), TJ and I took new HWS references for ITMX and ITMY while we were re-locking at 25W input, in MOVE_SPOTS. This is not the best place to take new references, so we plan to take them again next Tuesday while in DOWN with a cool IFO.
Yesterday Matt and I realized that the ITMX HWS frequency had been at 57Hz rather than the nominal 1Hz since the power outage, which was making the SPHERICAL_POWER signal very noisy; we reverted it to 1Hz.
WP12822, Sheila, Camilla
Followed what we did in 82881, but did not need to adjust alignment through the EOM. Last done only ~6 weeks ago in 86323.
SHG power dropped when we opened the table and turned on the fan, and again when we closed up, so we adjusted the SHG TEC temperature twice; it ended up where it started, plot attached.
Starting with:
Then, with 0V on the ISS AOM controlmon, we increased throughput; this was a yaw move of the AOM, moving the AOM in +X. Then we put 5V on the controlmon and started to maximize the 1st-order beam. We decided not to completely maximize the 1st order as this reduces overall throughput, and our current aim is to increase total green throughput, not ISS AOM range.
Ended with:
Sheila then adjusted the alignment into the fiber and maximized H1:SQZ-OPO_REFL_DC_POWER (needed to reduce power going into fiber with pico-waveplate to avoid saturating PD).
This allowed us to have an OPO setpoint of 80uW with 25mW going into the fiber, with 5 on the controlmon and a spare ~20mW that we could give to the fiber. Note that after ~2 hours the controlmon had decreased from 5 to 4, so the power to the AOM may need to be increased next time we lose lock.
I checked the NLG and OPO temp in the process:
| OPO Setpoint | Amplified Max | Amplified Min | UnAmp | Dark | NLG |
|---|---|---|---|---|---|
| 80 | 0.0132987 | 0.000179 | 0.0005779 | -1.67e-5 | 22.4 |
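The NLG number in the table is consistent with the usual dark-corrected ratio of amplified to unamplified seed power; a quick check:

```python
# NLG check, assuming NLG = (amplified max - dark) / (unamplified - dark)
amp_max = 0.0132987
unamp   = 0.0005779
dark    = -1.67e-5

nlg = (amp_max - dark) / (unamp - dark)
print(f"NLG = {nlg:.1f}")   # ~22.4, matching the table
```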
TITLE: 09/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Fairly light maintenance day, with relocking being automated other than me testing out some tramps at find IR. As of right now we have been locked for 1 hour.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:48 | VAC | Pump | LVEA | N | AIP pumping on HAM6 | 18:12 |
| 14:52 | VAC | Jordan | LVEA | n | Turning off pump carts | 14:53 |
| 14:53 | SYS | Betsy, Randy | Opt Lab | n | Parts and such | 14:57 |
| 14:57 | SYS | Randy, Tyler | LVEA | n | Craning in N bay | 15:48 |
| 15:01 | FAC | Kim, Nelly | LVEA | n | Tech clean | 16:02 |
| 15:19 | FAC | Chris, Eric | EX | n | Fan noise investigation | 15:52 |
| 15:37 | CDS | Tony, Erik | EX, EY | n | Checking laptops and HWWD | 16:52 |
| 15:37 | PEM | Robert | LVEA | n | Setting up measurements | 19:08 |
| 15:37 | SUS/CDS | Fil | LVEA | n | SR2, PR2 sat amp swap | 16:18 |
| 15:41 | SEI | Jim, Dave, Fil | LVEA, remote | n | HAM6 model restarts and epoxying under HAM6 | 16:30 |
| 15:57 | ISC | Camilla | LVEA, Opt Lab | n | Putting things away and property | 16:18 |
| 16:04 | FAC | Nelly | EY | n | Tech clean | 16:50 |
| 16:06 | FAC | Kim | EX | n | Tech clean | 17:12 |
| 16:20 | CDS | Marc | LVEA | n | Looking for Fil | 16:31 |
| 16:31 | SEI | Jim | CR | n | Poking at HAM2/3/4 | 17:31 |
| 16:33 | FAC | Eric | CS | n | Fire pump tests | 16:34 |
| 16:45 | PSL | Rahul, Jenne | Opt Lab | LOCAL | ISS array work | 17:30 |
| 16:51 | PEM | Robert | LVEA | n | Measurement setup | 18:39 |
| 16:53 | FAC | Tyler | Mids | n | 3IFO checks, closing doors behind tour | 19:10 |
| 16:54 | CDS | Fil, Marc | MY, EY | n | Grabbing parts for R7 chassis | 19:10 |
| 17:13 | FAC | Kim, Nelly | FCES | n | Tech clean | 17:57 |
| 17:37 | VAC | Gerardo, Jordan | LVEA | n | Disconnecting pump carts by HAM5/6 | 18:12 |
| 17:44 | TCS | Camilla, Elenna | EY | YES | Taking power measurement | 18:22 |
| 18:04 | PSL | Jason, Christina | Opt Lab | n | Property labeling lasers | 18:11 |
| 18:05 | VAC | Norco | CS (CP1) | n | LN2 fill | 19:52 |
| 18:05 | VAC | Guests | LVEA | n | Beam tube workshop tour | 18:37 |
| 18:17 | VAC | Norco | MX (CP5) | n | LN2 fill | 19:53 |
| 18:22 | VAC | Mike, Matt | LVEA | n | Join tour | 18:37 |
| 18:24 | SUS | Oli | CR | n | PRM, SRM measurements | 19:24 |
| 18:37 | VAC | Guests | X arm | n | Drive down to X1 BT and on overpass | 19:09 |
| 18:47 | SQZ | Camilla, Sheila | LVEA | YES | Working on back of SQZT0 | 19:52 |
| 18:47 | SAF | Sheila | LVEA | YES | Transitioning to LASER HAZARD | 19:06 |
| 19:10 | - | Ryan C | LVEA | YES | Sweep | 19:20 |
| 19:31 | PSL | Rahul, Jenne | Opt Lab | LOCAL | ISS array | 20:29 |
WP#12819 Elenna, Camilla
Elenna and I went to the EY VEA, opened the ALS table, and measured the power out of the HWS collimator: there was 9uW out of the collimator and 31uW out of the fiber, before the collimator. We didn't take the WinCam down there, but I've attached two photos of the beam, one straight after the collimator and one before HWS-S2 D1400241. It looks very top-hat-like.
Edgard ran the OSEM calibration calculations for the measurements Jeff and I took last week for PRM (measurements, results) and SRM (measurements, results). Today, I updated the OSEMINF gains with the newly calibrated ones for their respective suspension and OSEM, and I also installed compensation gains into the DAMP filters for PRM and SRM to counter the apparent shift in alignment due to the updated OSEM calibration gains.
To update the OSEMINF gains I put the stage, OSEM, and gain values into a txt file and used my script /ligo/home/oli.patane/Documents/WIP/estimator/OSEMcalib_updateOSEMINFgains.py to go through and update all the OSEM gains. Here are the outputs for that - PRM, SRM.
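The script itself lives at the path above; as a rough illustration only (not the actual script), the update amounts to reading (stage, OSEM, gain) rows from the txt file and writing them to the OSEMINF gain channels, assuming the usual H1:SUS-<OPTIC>_<STAGE>_OSEMINF_<OSEM>_GAIN naming and a whitespace-separated file format.

```python
# Illustrative sketch only; the real script is the one referenced above.
from epics import caput

def update_oseminf_gains(optic, gains_file):
    """Read 'STAGE OSEM GAIN' rows from a txt file and write the OSEMINF gains."""
    with open(gains_file) as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue
            stage, osem, gain = line.split()
            pv = f"H1:SUS-{optic}_{stage}_OSEMINF_{osem}_GAIN"
            caput(pv, float(gain))
            print(f"{pv} -> {gain}")

# e.g. update_oseminf_gains("PRM", "prm_m1_osem_gains.txt")  # hypothetical filename
```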
To add the DAMP compensation gains in the DAMP filter bank, I put the dof and gain values into a txt file (only for M1) and used my script /ligo/home/oli.patane/Documents/WIP/estimator/OSEMcalib_DAMPcomp.py to remove the old filters that were in FM7 (they were not being used and are sdf'd if we end up needing them again) and replaced them with the compensation gains. Here are the outputs for that - PRM, SRM, and here are the filter changes before I loaded them in - PRM, SRM.
I then went through and turned FM7 on for DAMP M1 for PRM and SRM, and then went through the safe.snap and observe.snap and accepted the gain and filter status changes (PRM, SRM).
These updates mean that there will be an apparent change in alignment on PRM and SRM, but this change in alignment is not real.
There are two tramps in the LONG_SEARCH_IR state in ALS_DIFF for when we are trying to find IR. I've changed these to 1 second when stepping around a known value and 2 seconds when transitioning to a new known value; previously these were 4 and 10 seconds, respectively.
Years ago I tested this and found that I couldn't rush this PZT, but I didn't document why or to what extent. I've been frustrated with how long it has been taking, so I wanted to try it again to see if I could speed it up and/or find out why I couldn't before. I couldn't really create any issues, but perhaps they only rear their heads when we aren't well aligned. I ended up bringing the tramps down to the values listed above with repeated success, so I'll leave them there and continue locking.
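For context, the change amounts to something like the guardian-style snippet below; the channel names are placeholders, not the real ALS_DIFF channels, so treat it purely as an illustration of the 1s/2s (formerly 4s/10s) ramp choice.

```python
# Placeholder channel names, for illustration only.
OFFSET_CHAN = 'ALS-C_DIFF_IR_OFFSET'        # hypothetical
TRAMP_CHAN  = 'ALS-C_DIFF_IR_OFFSET_TRAMP'  # hypothetical

def step_offset(ezca, value, around_known_value=True):
    # 1 s ramp when stepping around a known value,
    # 2 s when transitioning to a new known value (previously 4 s and 10 s).
    ezca[TRAMP_CHAN] = 1 if around_known_value else 2
    ezca[OFFSET_CHAN] = value
```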
Continuing with our satamp swaps for the lower stages, Fil swapped out the M2 and M3 satamps for PR2 and SR2 today (87225). To properly compensate for this change, I used the TIA measurements Jeff took for each satamp's channels along with my script /ligo/svncommon/SusSVN/sus/trunk/Common/PythonTools/satampswap_bestpossible_filterupdate_ECR_E2400330.py to update the compensation filters (satampfilterupdate_output). I then checked that they looked good (PR2, SR2) and loaded them in.
I adjusted a PEM cable to be more out of the walking path by the TCSY table, and I also unplugged, rolled up, and moved an unused extension cable out of the way in the same area. I also folded away a ladder that had been left out next to SUS R4.
This morning while in the CER for other reasons, I heard a coil driver on HAM4 ISI making a ticking/chirping noise. Fil suggested this was probably a fan, so we've pulled the chassis to check it out. We've put in a spare and will run that until next maintenance probably.
Chassis pulled is S1100320, the spare we put in is S1103567. Frs is https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=35547
We swapped the old chassis back in yesterday, after Fil replaced the fan last week.
Since maintenance will start early next Tuesday, I've updated the PEM_MAG_INJ node to start its injections at 6:00am (13:00 UTC) and the SUS_CHARGE node will start at 6:25am (13:25 UTC). These should all be finished by 6:45am (13:45 UTC).
I've put both Tuesday morning automatic injection start times back to their usual with magnetic injections starting at 7:20am and in-lock charge at 7:45am (both in local time). Guardians have been loaded.
During Tuesday maintenance, we swapped the HAM6 AIP (Starcell). Note this annulus system is connected to HAM5 via the septum plate. We vented the lines with dry nitrogen and left a continuous nitrogen purge (~0.3 psi) on the line during the pump swap; nitrogen was attached to the HAM5 pump-out port while the HAM6 pump-out port was left open to atmosphere.
No issues during the swap; the annulus system is now pumping at both the HAM5 & 6 ports with an aux cart and turbo pair. As of the end of maintenance, the HAM6 cart was at ~3E-5 Torr and the HAM5 cart at ~1E-4 Torr. These pumps will continue running until the pressure is <1E-5 Torr, at which point the ion pumps will be powered on.
Carts are placed on foam for isolation, and a piece of foam was placed between the flex hose running up to the HAM6 pump-out port and the HAM6 chamber. See attached pictures.
Work permit will be closed once pumps are disconnected from chambers.
Update.
The IFO was out of lock due to an earthquake, so I went into the LVEA to check on the aux carts pumping down the annuli for HAM5 and HAM6. The HAM5 aux cart was good and pumping down on the annulus; however, the HAM6 aux cart safety valve somehow managed to trip between yesterday and today (the time is unknown as of now). I restored the aux cart and opened the valve. The aux cart for HAM6 had been reporting a dubious pressure of 1.26 x 10^-7 Torr.
After restoring pumping to HAM6 annulus, both aux carts are reporting more believable numbers.
(Jordan, TJ, Gerardo)
Late entry.
TJ powered ON the ion pumps over the weekend, which allowed the pumps to reach very good vacuum pressure on the shared annuli system. Then on Tuesday morning, Jordan isolated the HAM5 and HAM6 annuli system from the mechanical pumps and turned off the aux carts.
A couple of hours later we removed the small can turbos, flex hoses and aux carts from the HAM5/6 area, to conclude the replacement of the HAM5 annulus ion pump body.