Saturday standard coordinated calibration sweep
BB Start: 1444847709
BB End: 1444848020
Simulines Start: 1444848233
Simulines End: 1444849630
2025-10-18 19:06:16,282 | INFO | Finished gathering data. Data ends at 1444849593.0
2025-10-18 19:06:16,498 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2025-10-18 19:06:16,498 | INFO | Commencing data processing.
2025-10-18 19:06:16,498 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2025-10-18 19:06:52,301 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20251018T184336Z.hdf5
2025-10-18 19:06:52,309 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20251018T184336Z.hdf5
2025-10-18 19:06:52,314 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20251018T184336Z.hdf5
2025-10-18 19:06:52,319 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20251018T184336Z.hdf5
2025-10-18 19:06:52,324 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20251018T184336Z.hdf5
PDT: 2025-10-18 12:06:52.452839 PDT
UTC: 2025-10-18 19:06:52.452839 UTC
CALIBMONITOR attached (took screenshot after requesting NLN_CAL_MEAS - this might have been a mistake, but I'm not sure)
I generated the pydarm report from this measurement. I ended up changing "pro-spring" to False and regenerating.
Sat Oct 18 10:10:10 2025 INFO: Fill completed in 10min 6secs
TITLE: 10/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.54 μm/s
QUICK SUMMARY:
IFO is in ENGAGE_ASC_FOR_FULL_IFO
When I got in, H1 was locking after an auto initial alignment.
It seems that the wind has died down, but the microseism is still high with no signs of coming down. The POP and PWR signals look unstable due to the microseism. Hoping to get locked today.
TITLE: 10/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Tony
SHIFT SUMMARY: After the wind and microseism died down a bit, I attempted to lock H1 with moderate success. I did not run an alignment, but instead ran through a round of MICH_FRINGES and PRMI in order to lock DRMI. There were a few locklosses along the way that I originally attributed to the still-poor environmental conditions. Eventually, locklosses started happening at the same place in CARM_OFFSET_REDUCTION, specifically when the TR CARM offset was lowered from -12 to -40 (I even verified this by going line-by-line through the state after realizing). After finally remembering that this can happen due to poor alignment of the arms relative to each other, I started an initial alignment, which ran automatically. Once relocking after that alignment, H1 made it through CARM_OFFSET_REDUCTION without issue. H1 is continuing to relock automatically and is currently up to MAX_POWER.
LOG:
FAMIS 27428, last checked in alog87417
All fans look to be well within acceptable noise range and behavior is largely unchanged compared to last week.
TITLE: 10/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in IDLE due to MICROSEISM
Nature has not been good to Hanford today.
90%+ microseism and 40 mph gusts of wind have prevented locking.
We've been able to get past DRMI with some ops intervention (PRM and BS touching in PRMI).
21:00 UTC to 21:09 UTC OBSERVING - We were able to lock for a whole 9 minutes but had a lockloss due to the 1Hz ringup (alog 87547)
After losing lock, the microseism started trending down (yay!) but the wind speed started increasing rapidly (no!), so we are now unable to lock ALS. Thus, we are sitting in IDLE after 5+ failed ALS attempts.
Relevant alogs:
PRMI to MICH timer reduction (alog 87540)
IMC Request Gain Scaling (alog 87545, alog 86744)
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:39 | FAC | Kim | Optics Lab | N | Technical Cleaning | 14:52 |
| 14:55 | FAC | Randy | Beam Tube EX | N | Fixing holes on beam tube | 21:44 |
| 20:43 | OPT | Keita | Optics Lab | Local | ISS Array | 21:44 |
Known 1Hz ringup lockloss, but it did not come down as it usually does. Sheila floated increasing the CSOFT P gain by a further 5, from 25 to 30, but we didn't end up doing this.
TITLE: 10/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 40mph Gusts, 24mph 3min avg
Primary useism: 0.20 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY: H1 is currently down after a tough day of locking. Sounds like environmental conditions (mainly microseism and wind) are causing difficulties with getting H1 back up, and I'll be on the lookout for the strange 1Hz ringup from earlier today.
Since the power outage, where the IMC throughput was reduced, our power normalization has been artificially reducing gains by scaling with the IMC input power, which has now increased to give us the same input power on PRM.
This is the reason for the apparent change in optical gain, which Elenna noted in 87453 isn't a real change in optical gain in mA/m, but is a real change in optical gain in DARM ERR counts per meter. At 2W the IMC throughput is about the same as it was before the vent (since the power in the EOM is reduced), so the gain scalings that happen in OMC_LOCK and PREP_DC_READOUT aren't much impacted by the power outage. However, as we power up the interferometer the IMC throughput drops, while we scale gains by the IMC input power request. This means that we are reducing our loop gains as the IMC throughput drops.
This power scaling impacts all of our LSC and ASC loops, and would drop all of those gains by about 3% (60/62).
This power scaling is done by the laser power guardian. I've added a variable called power_channel that is used everywhere the power scale is set, and for now I've changed it to 'IMC-IM4_TRANS_NSUM_OUTMON'. I did not change the channel that is used to check whether the rotation stage is giving the requested power; that will still be the IMC input power. This means that the bootstrapping to adjust the rotation stage should continue to give us the same requested power that it did in the past, and should work even when the IMC is unlocked.
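To make the change concrete, here is a minimal sketch of the power_channel idea. This is not the actual laser power guardian code; apart from the IM4 trans channel name, all names, channels, and the reference power are illustrative assumptions.

```python
# Minimal sketch of the power_channel change described above -- NOT the
# actual laser power guardian code. Apart from the IM4 trans channel
# name, all names and the reference power are illustrative.

# Single place to define which channel the power scaling reads.
# Previously this was the IMC input power; it now points at IM4 trans:
power_channel = 'IMC-IM4_TRANS_NSUM_OUTMON'

def power_scale(read, reference_power_w):
    """Return the factor used to scale LSC/ASC loop gains with power.

    `read` is any callable returning a channel value (in Guardian this
    would be an ezca read). Reading IM4 trans means the scale tracks
    the power actually delivered to the interferometer, so loop gains
    are no longer reduced when IMC throughput drops during power-up.
    """
    return read(power_channel) / reference_power_w

# Stand-in reader showing the ~3% (60/62) gain reduction quoted above:
fake_values = {power_channel: 60.0}
print(power_scale(fake_values.__getitem__, reference_power_w=62.0))  # ~0.968
```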
This should cause an increase in kappa C relative to the calibration that Elenna pushed yesterday (87520).
If we switch to the IM4 trans channel for this scaling, we will increase all the loop gains by 5% compared to how they were before the power outage. It might make sense to do this and add correction factors into the loop gains, so that our power scaling is more accurate in the future. I don't want to do this now since we are having locking difficulties due to the microseism and the 1Hz ring-up today.
Fri Oct 17 10:08:33 2025 INFO: Fill completed in 8min 30secs
Jordan confirmed a good fill curbside.
Microseism is still preventing locking.
We made it up to TRANSITION_FROM_ETMX once and back past DRMI twice since the morning, but we still lose lock due to instability. Thankfully, it seems that the microseism has reached its peak and is coming back down.
Alignment is great and flashes are good and have been tuned (a few times). OPS OBS mode has been changed to MICROSEISM (and SEI_CONFIG is automatically in useism as it has been all day).
Jennie W, Sheila,
It took me a long time to post this as I have been working on other things...
We carried out a test (see LHO alog #86785) to look at the effect of DARM offset stepping on the power at OMC-DCPD_SUMS and OMC-REFL (transmitted through and reflected from the OMC). We did this with the heater on OM2 off as is nominal.
We then meant to redo these measurements once we heated up OM2 to change the mode-matching of the IFO to the OMC.
Unfortunately we lost lock at about 15:06 UTC while Corey was taking our first measurement, before heating up OM2.
The measurement is shown in this image; I have mislabelled it as 'third measurement' but it was the first. The optical gain just before this measurement was 0.994.
We then waited as long as we could, given our initial constraint of having OM2 finished cooling again by 1:45pm.
We took another measurement 1 hr 25 mins into lock, after two false starts where I forgot to turn off the ASC. The optical gain measured right before we started was 0.978, but the IFO was still thermalising.
We then took a third measurement 2 hrs 59 mins into lock; by then the IFO should have been thermalised, but the temperature of OM2 was still trending upwards a bit. The optical gain was 0.986.
We can use the slope of the power at the antisymmetric port (P_AS) vs. the power at the DCPDs (P_DCPD) as the DARM offset changes to estimate the throughput of the carrier through the OMC, which gives us one estimate of the loss.
The plots of this throughput are here for the cold state (minus the points taken after we lost lock), here for the partially thermalised state, and here for the thermalised state.
I am also in the middle of using the plot of P_AS varying with power at the OMC reflected port (P_REFL) to get a better estimate of the mode-mismatch between the interferometer and the OMC.
I plotted the loss between the antisymmetric port (calibrated into the power entering HAM6) and the power on the DCPDs. This is the inverse of the slopes in the graphs above.
All three are plotted on one graph, using plot_AS_vs_DCPD_changes.py in my own copy of the labutils repository at /ligo/home/jennifer.wright/git/local_git_copies/labutils/darm_offset_step/ .
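For illustration, a minimal sketch of the slope estimate described above, with made-up per-step powers; this is not plot_AS_vs_DCPD_changes.py, and the calibrations of the two channels are assumed correct.

```python
import numpy as np

# Made-up per-step median powers from a DARM offset sweep, in watts;
# real values would come from OMC-DCPD_SUM and ASC-AS_C.
p_dcpd = np.array([0.010, 0.020, 0.030, 0.040, 0.050])  # power on the DCPDs
p_as = np.array([0.013, 0.025, 0.038, 0.051, 0.063])    # power entering HAM6

# Fit P_AS = slope * P_DCPD + offset. With this convention the slope is
# 1/throughput from the AS port to the DCPDs, and 1 - throughput is one
# estimate of the output loss.
slope, offset = np.polyfit(p_dcpd, p_as, 1)
throughput = 1.0 / slope
print(f"slope = {slope:.3f}, throughput = {throughput:.3f}, "
      f"loss = {1.0 - throughput:.1%}")
```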
Sheila and Camilla both agreed the losses for the two bottom lines (purple and red) are too high; they imply that a hot OM2 gives us over 20 % output losses.
If we look at the increase in loss from cold OM2 to hot OM2, it is a factor of 2.1 (a 110 % increase).
Compare this to the fractional decrease in optical gain squared, which we expect to reflect the change in output losses:
(0.986^2 - 0.994^2) / 0.994^2 = -0.016 (a 1.6 % decrease).
We might have to check that the alignment of our optics was not changing while we changed the DARM offset.
Looking at OM1, OM2 and SRM alignment, it did change during the DARM offset steps, with the biggest change (in the third offset step measurement) being in OM2 pitch and yaw; this is only a change of around 6 microradians (Elenna and Jeff state this calibration is correct to within an order of magnitude). Not sure if this is enough to invalidate the loss values we measure. OM3 and the OMC sus did not change much, but this is because I purposely unlocked the OMC ASC while changing the DARM offset.
Jennie W, Matt T,
I plotted the antisymmetric power during the DARM offset step vs. the power reflected by the OMC and am now very confused, as the AS power looks to be smaller than the power reflected from the OMC. See the ndscope where I have zoomed in on the same time segment for both channels. The OMC-REFL channel is meant to be calibrated in mW, and the ASC-AS_C channel is meant to be calibrated in watts entering HAM6 (even though the actual pick-off is the transmission through OM1).
The two plots attached show how the ratio between AS and OMC-REFL power changes during one of the DARM offset measurements we did right after I took this ndscope data.
Plot 1 hr 25 mins into lock.
Plot 2 hrs 59 mins into lock.
For each point the code returns the median of the time series at each step; this might be less valid for OMC-REFL as it is a lot noisier than ASC-AS_C.
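A minimal sketch of the per-step median extraction, with made-up data whose noise levels mimic the two channels (means and calibrations assumed, chosen only to reproduce the AS < REFL puzzle above):

```python
import numpy as np

# Made-up 16 Hz time series for one DARM offset step; OMC-REFL is given
# much larger noise than ASC-AS_C, and a larger mean, to mimic the data.
rng = np.random.default_rng(0)
as_c = 1.0e-3 + 1e-5 * rng.standard_normal(960)      # ASC-AS_C [W, assumed]
omc_refl = 1.2e-3 + 2e-4 * rng.standard_normal(960)  # OMC-REFL [W, assumed]

# The median per step is robust against the larger noise on OMC-REFL,
# but a noisier channel still broadens the spread of the medians across
# repeated steps.
print(f"AS/REFL ratio at this step: "
      f"{np.median(as_c) / np.median(omc_refl):.3f}")
```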
I am still confused about the higher power at OMC-REFL and am wondering if:
a) I am confused about the calibration of one of these channels.
b) the calibration of one of these channels is wrong.
Sheila, Ibrahim
PRMI locking waited a long while with high counts before going into MICH so Sheila changed the timer from 20 mins to 10 mins.
Tony, Ibrahim, Sheila
Referencing 44348, Ibrahim and I went back to Sept 12 2018 and found a time when PRMI locked in 3 minutes. At that time, the MICH trigger level was 6, and based on the minute trend max, the MICH TRIG channel was around 70, so the trigger was at 9% of the max value. In locked DRMI, the trigger channel was 70 and the trigger level was 40, so 57% of the locked value, and the trigger matrix had a value of 1.
On March 31st, 2025, I changed the trigger matrix from 1 to 2 for PRMI and DRMI (83655). At that time the locked value on POPAIR 18 NORM was only 32, perhaps because the diode was misaligned; the trigger threshold was 37, so with the factor of 2 the trigger was still 43% of the locked value for DRMI. After the vent where ISCT1 was moved and realigned, POP 18 NORM has been back at around 70 counts when DRMI is locked.
We removed the factor of 2 from the input matrix, and loaded the DRMI guardian. After this we have had one short PRMI lock, where we can see that something is moving a lot and PRMI is not well aligned.
Lockloss seemingly due to ground motion instability. In the last 6 hours, the microseism increased by an order of magnitude.
PRG and ASC signals have been oscillating every few minutes since I started the shift (and the seismic configuration was in microseism).
TITLE: 10/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 7mph Gusts, 4mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.50 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 05:22 UTC
Range and wind are stable. Microseism is on the rise.
Plan is to stay OBSERVING
TITLE: 10/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Relocking and at MOVE_SPOTS. We were having a bit of a rough start trying to relock after the lockloss - for the first few minutes ALSX was buzzing weirdly, but then it went away on its own. I briefly checked the quad movements, and it looks like it may have been caused by ETMX Y moving to adjust the alignment. We then were having trouble locking DRMI or PRMI, so eventually I started an initial alignment and that helped a lot.
LOG:
23:30UTC Observing and have been Locked for 1 hour
23:49 Earthquake mode activated
23:50 Out of Observing to run ASC Hi Gains
23:59 GRB-Short E610763 https://gracedb.ligo.org/events/E610763/view/
00:09 Back into Observing after going back to nominal gains
00:09 Back to CALM
03:06 Lockloss https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=87534
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 23:32 | JOAT | Jennie | - | N | Putting away wipes | 23:35 |
Lockloss at 2025-10-17 03:06 UTC after over 4.5 hours Locked. Unsure of cause.
TITLE: 10/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: The DARM FOM struggled this morning as it wasn't getting data from GDS and sometimes nds2. We adjusted the ETMY beam spot (alog87508) and had to swap the ITMY L2 sat amp box (alog87515). A PRM filter was also turned off (alog87523), and I accepted some CDS calibration report related SDFs. After we relocked post sat amp swap there was a 1Hz ringup seen in ASC, but it calmed down.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:01 | FAC | Randy | Yarm beam tube | N | Beam tube inspection, mid to corner | 17:56 |
| 15:04 | FAC | Kim , Nellie | VAC prep lab | N | Tech clean | 15:31 |
| 18:19 | ISC | Jennie | Vac Prep lab | N | Looks for parts | 19:30 |
| 18:23 | FAC | Kim, Nellie | Receiving, then Mids | N | Tech clean and look for stuff | 18:49 |
| 18:25 | PSL | Keita | Optics lab | N | ISS array, Laser Safe | 19:43 |
| 18:32 | PSL | Rahul | Optics lab | N | Join Keita | 19:43 |
| 19:04 | FAC | Eric | Staging | N | Move boom lift to LeXi | 19:29 |
| 20:06 | EE | Fil | LVEA | N | ITMY L2 sat amp swap | 20:16 |
| 20:39 | FAC | Randy | Xarm | N | Beamtube caulking | 22:10 |
| 21:06 | PSL | Jennie | Optics lab, Vac prep | N | Check for parts/tools/cleanliness, Laser Safe | 22:00 |
| 21:34 | SUS | Rahul | LVEA, near PSL | N | Grab parts | 21:47 |
| 21:45 | PSL | Keita | Optics lab | N | ISS array work | 23:09 |
Summary: Since we have made many changes to DRMI over the last few weeks and locking is much slower than it was, I started trying to restore the old configuration, which worked pretty well. We have locked DRMI 6 times since; none of the acquisitions took longer than 8 minutes (see second attachment).
Lesson for the future:
We should be much more careful about making changes to DRMI acquisition, and look at locking statistics to evaluate whatever we do.
TJ Massinger pointed out that there is a page that already displays these times, the ISC_LOCK summary. If you scroll to the bottom and click on the state LOCK_DRMI_1F [101], you can see a list of the locking attempts and their durations. Craig and I used this data to make the histograms in the second attachment. We were locking much more quickly on Sept 9-12 than in the first days of October, and after making the changes tonight the times are closer to what we had in mid September, although the distribution will probably not be as good.
I think our approach going forward should be to wait a few days after making a change and get several tens of locking attempts for statistics before making another. If the locking statistics after the change are better than before, we can then go ahead and try another change if we want to.
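As an illustration of the statistics comparison suggested above, a minimal sketch with made-up acquisition durations; the real numbers would come from the LOCK_DRMI_1F list on the ISC_LOCK summary page.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up LOCK_DRMI_1F attempt durations in minutes; real values would
# be taken from the state's attempt list on the ISC_LOCK summary page.
sept_durations = np.array([2, 3, 3, 4, 5, 6, 8, 2, 4, 3])
oct_durations = np.array([10, 15, 25, 8, 30, 12, 22, 18, 9, 27])

# Overlaid histograms make the before/after comparison at a glance.
bins = np.arange(0, 35, 5)
plt.hist(sept_durations, bins=bins, alpha=0.5, label='Sept 9-12')
plt.hist(oct_durations, bins=bins, alpha=0.5, label='early Oct')
plt.xlabel('DRMI acquisition time [min]')
plt.ylabel('number of attempts')
plt.legend()
plt.savefig('drmi_locking_histogram.png')
```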
Details:
I compared changes in the DRMI acquisition settings for LSC using conlog. The two times used are Sept 12 2018 at 21:43 (1220823832), a week when we remember DRMI locking quickly (looking back at the data confirms that it was), and Oct 4th (see the table below). Now DRMI locking is consuming a lot of time, and we have made many changes and aren't sure which of them is the problem.
| | Sept 12 | Oct 4th | Changes made tonight |
|---|---|---|---|
| POP18 when locked | 145 | 179 (alignment onto diode) | |
| MICH trigg +FM trig on | 40 (27% of locked) | 40 (22% of locked level) | 48 (27%) |
| SRCL trigger on level | 40 (27%) | 30 (17%) | 48 (27%) |
| SRCL trig + FM trig off | 15 (10%) | 5 (3%) | 18 (10%) |
| MICH trigger threshold off | 15 (10%) | 5 (3%) | 18 (10%) |
| REFL9I to PRCL matrix | 3.5 | 2.2 | 3.5 |
| PRCL gain | 12 | 8 | 12 (ACQUIRE) |
| REFL9I to SRCL | -5.508 | -1.8 (44318) | not changed |
| MICH FM trig wait | 0.1 | 0.2 | 0.1 added to guardian |
| PRCL FM trig wait | 0.2 | 0.3 | 0.2 added to guardian |
| REFLair RF45 phase R | 82 | 97 | not changed |
| Refl air RF9 phase R | -21 | -23 | not changed |
| SRCL FM1 | was triggered, +6dB, 0.2 second ramp | no longer triggered, +106 dB to cancel -100dB in FM2 | reluctant to change |
I was reluctant to make changes to SRCL FM1 because that is the filter that has twice given huge and inexplicable outputs when we modified it.
Things I haven't done which could be done (after collecting some stats about this state, if we want to try to get closer to Sept):
The values of POP18 referred to above are H1:LSC-POPAIR_B_RF18_I_ERR_DQ. The channel used in the trigger matrix is actually normalized.
At the Oct 4th, 2018 time above, the normalized channel was 82. This means that in the above table, for the good locking time the MICH + SRCL trigger (40) was at 49% of the locked build-up.