Uploaded a new MEDM screen mapping the locations of the RGAs at the site. It is non-functional and only to be used as a display.
Most of the RGAs in use are Pfeiffer brand, capable of reading up to 100 amu; these have red badges with RGA on them. There are two types of Pfeiffer RGAs in use: the Prisma Pro and the Prisma Plus.
There is one RGA at the corner station with a gray badge, in the LVEA; this is an Extrel brand, capable of 1000 amu.
Outstanding!
TITLE: 09/18 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 15mph Gusts, 6mph 3min avg
Primary useism: 0.19 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
Still recovering from an earthquake, but TJ handed over H1 having just started an initial alignment. The plan is to proceed with locking as normal after the alignment, but we'll see how reacquisition goes since this is (1) our first locking attempt after a 40+hr lock AND (2) only our second lock since the 6 days of downtime from the power outage, when H1 needed some ISC_LOCK changes to get to NLN.
ALSO, since this EQ earlier today was a big one, we will also assess locking feasibility given possible aftershocks---if all looks good we'll proceed with the owl shift, but if the earth is a rumblin', we may need to cancel the owl---will consult with Jenne/TJ if this is the case.
Initial Alignment is almost done!
To avoid the very unlikely scenario that we have another power glitch that puts the dust monitor vacuum pump in reverse again, I've disconnected the vacuum line leading to the dust monitors in the PSL enclosure and the anteroom. The dust monitors themselves are left on, so the counts they report should be ignored. We will not have dust counts in these rooms until this is rectified.
I did this by removing the 3/4" tee underneath the south side of HAM1 and placing a length of 3/8" tubing with an unoccupied quick disconnect into the empty 3/4" tube. This 3/8" tube came from the other line of the tee, i.e., one line went to the PSL and one to this unoccupied line. See annotated pictures.
Lockloss from a M7.8 EQ near Kamchatka, Russia.
The following watchdogs all tripped:
ISIs: All, including both stages of the BS.
SUS: MC3, IM2, IM3, IM4, FC1, PR2, MC2, SRM, SR2, SR3, OPO
Those who hoped for a lockloss now have all afternoon to complete their "during lockloss" tasks.
Holding H1 in Idle while the world shakes.
Elenna, Camilla
We ran auto_darm_offset_step.py after activating labutils, following Jennie's 86623 instructions, from ~17:30 UTC to 17:45 UTC. We left the POP beam diverter open and did not turn off the OMC ASC. We were in No SQZ at the time.
Repeated at 18:20 to 18:35 UTC with the OMC ASC off.
I processed both measurements. With OMC ASC ON, the fit gives a contrast defect of 0.77-0.86 mW. However, with OMC ASC OFF, the fit gives a contrast defect of 1.23-1.28 mW. I am confused, because the motivation for running with the OMC ASC off was that "the OMC ASC goes bad when we changed the DARM offset and this makes the measurement worse".
I tried to look back at previous contrast defect measurements. Jennie measured the contrast defect in June while adjusting CO2 annular power (85434). Her measurements report values ranging between 1 and 1.1 mW for different CO2Y powers. Not sure if those are directly comparable, given her changes to the CO2s. I also believe she has been running these measurements with the ASC off.
I measured the contrast defect with the OMC ASC ON in 80804 (circa October 2024) to be 0.742 mW.
Comparing just the ASC ON measurements, the contrast defect has possibly gotten worse since October. At the 255 Hz line, the contrast defect has increased by 16% (0.862/0.742). At 410 Hz, the contrast defect is unchanged (0.771/0.771).
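For the record, a trivial sketch of the arithmetic behind those two comparisons (the values are the fitted contrast defects in mW quoted above):

```python
# Percent change of the ASC-ON contrast defect fits relative to the
# October 2024 measurement (values in mW, quoted above).
oct_2024 = {"255 Hz": 0.742, "410 Hz": 0.771}
sep_2025 = {"255 Hz": 0.862, "410 Hz": 0.771}

for line, old in oct_2024.items():
    new = sep_2025[line]
    print(f"{line}: {new:.3f}/{old:.3f} = {new/old:.3f} ({100*(new/old - 1):+.0f}%)")
# 255 Hz: 0.862/0.742 = 1.162 (+16%)
# 410 Hz: 0.771/0.771 = 1.000 (+0%)
```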
Thu Sep 18 10:09:17 2025 INFO: Fill completed in 9min 13secs
Calibration sweep ran at 8:30 local (15:30 UTC).
pydarm measure --run-headless bb
.....
...[computer noises]...
....
2025-09-18 08:37:08,412 bb measurement complete.
2025-09-18 08:37:08,413 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250918T153158Z.xml
2025-09-18 08:37:08,413 all measurements complete.
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_20250212.ini;gpstime
PDT: 2025-09-18 08:39:17.330118 PDT
UTC: 2025-09-18 15:39:17.330118 UTC
GPS: 1442245175.330118
...
..... [more computing noises] ....
...
2025-09-18 16:02:03,289 | INFO | Finished gathering data. Data ends at 1442246540.0
2025-09-18 16:02:03,505 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2025-09-18 16:02:03,505 | INFO | Commencing data processing.
2025-09-18 16:02:03,505 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2025-09-18 16:02:39,467 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250918T153918Z.hdf5
2025-09-18 16:02:39,474 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250918T153918Z.hdf5
2025-09-18 16:02:39,477 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250918T153918Z.hdf5
2025-09-18 16:02:39,481 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250918T153918Z.hdf5
2025-09-18 16:02:39,485 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250918T153918Z.hdf5
PDT: 2025-09-18 09:02:39.614492 PDT
UTC: 2025-09-18 16:02:39.614492 UTC
GPS: 1442246577.614492
pydarm report --skip-gds
...
.... [Final Computer Noises] ....
...
2025-09-18 09:08:18,990 report generation complete.
2025-09-18 09:08:18,990 report file: /ligo/groups/cal/H1/reports/20250918T153918Z/H1_calibration_report_20250918T153918Z.pdf
2025-09-18 09:08:18,990 displaying report /ligo/groups/cal/H1/reports/20250918T153918Z/H1_calibration_report_20250918T153918Z.pdf...
It looks like the ini file generated in this report has the old ETMX ESD bias actuation values, so I updated it and regenerated the report.
The L3 actuation strength differs by nearly 2% from the 8/23 report. This is possibly due to charging. Tagging sus.
TITLE: 09/18 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
H1 has been locked & Observing for 35.5 hours now.
All systems seem to be running smoothly.
Violins looking good at below 10e-17 this morning.
Potential Commissioning tasks today:
TITLE: 09/17 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Nice quiet shift with H1 now locked for about 26hrs.
LOG: 2340 Betsy & Jason out of the optics lab
TITLE: 09/17 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1's been locked since last night! (20.5hrs now) H1 was in need of an ISC_LOCK node LOAD, but then the H1 SQZ briefly dropped us out of Observing, so Tony was able to take this LOAD off the To Do List.
Seismically, there were a couple of EQs which required EQ Mode, but H1 rode right through them (last night and during Tony's shift)....no "ASC Hi Gn" transition was needed.
From last night, it looks like TJ & Jim took care of BRSy, and the violins were fixed by Ryan C (thanks for the note about their finicky gain!) :)
Smooth sailing for H1 with it being locked for 24.5hrs & observing.
TITLE: 09/17 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
H1 has mostly been locked and Observing all day.
But we dropped from Observing at 15:18 UTC to apply violin damping.
Back to Observing at 15:22 UTC.
And we dropped to commissioning again when the SQZer lost lock at 23:07 UTC.
Back to Observing at 23:11:12 UTC.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:08 | SPI | Jeff | Optics Lab | N | Dropping off parts | 18:26 |
| 16:39 | FAC | MacMiller | VPW | N | Contractors quietly working at VPW | 00:39 |
| 17:10 | EOM | Rick | Optics lab | N | Looking for parts. | 18:10 |
| 17:37 | PSL/IO | Jason & Betsy | Optics lab | N | Crystal Photography | 19:47 |
| 18:19 | SPI | Camilla | Optics lab | N | Helping Jeff & Jason | 19:00 |
| 21:58 | EOM | Jason & Betsy | Optics lab | N | Crystal Photography | 23:58 |
| 22:48 | SPI | Jeff | Optics Lab | N | Dropping off parts. | 23:00 |
SQZ lost lock as the PMC PZT got to the end of its range (plot). There is already a SQZ_MANAGER checker that will re-lock the PMC if the PZT is under 5V during FDS_READY_IFO (which it wasn't), so there's nothing that needs to be done to avoid this in the future.
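For reference, the logic of that checker amounts to something like the sketch below; only the 5V threshold and the FDS_READY_IFO condition come from this entry, the rest is illustrative and not the actual SQZ_MANAGER code:

```python
# Minimal sketch of the PZT-range check described above. Only the 5 V
# threshold and the FDS_READY_IFO condition come from this entry;
# everything else is illustrative.
PZT_RELOCK_THRESHOLD_V = 5.0

def pmc_needs_relock(pzt_volts: float, in_fds_ready_ifo: bool) -> bool:
    """True if the PMC should be relocked before the PZT runs out of
    range; the check only applies while SQZ_MANAGER is in FDS_READY_IFO."""
    return in_fds_ready_ifo and pzt_volts < PZT_RELOCK_THRESHOLD_V

assert pmc_needs_relock(3.2, True)        # low PZT voltage -> relock
assert not pmc_needs_relock(3.2, False)   # not in FDS_READY_IFO -> no action
```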
Since we have been struggling with the CARM offset reduction sequence, here is a quick look at the data from our most recent lock compared with a model that I borrowed from Sheila, see alog 62110.
I plotted the TRX norm data versus the REFL LF data, normalizing REFL LF to its value when only DRMI is locked (3.7 mW). Here are the TR CARM offsets (uncalibrated) as well for reference (a sketch of the normalization follows the table):
| TRX norm | REFL A LF (mW) | TR CARM offset |
|---|---|---|
| 4.68 | 3.65 | -3 |
| 33.3 | 3.58 | -8 |
| 42.2 | 3.55 | -9 |
| 808.63 | 1.96 | -40 |
| 1207.53 | 1.24 | -49 |
| 1359.8 | 0.885 | -52 |
| 1579.92 | 0.502 | -56 |
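Here is a minimal sketch of the normalization used above; the 3.7 mW DRMI-only value is from the text, and the plotting details are illustrative rather than Sheila's actual code from alog 62110:

```python
# Normalize REFL A LF to its DRMI-only value and plot against the
# normalized arm transmission, as described above.
import matplotlib.pyplot as plt

REFL_DRMI_ONLY_MW = 3.7  # REFL A LF with DRMI locked only

trx_norm = [4.68, 33.3, 42.2, 808.63, 1207.53, 1359.8, 1579.92]
refl_lf_mw = [3.65, 3.58, 3.55, 1.96, 1.24, 0.885, 0.502]

refl_norm = [p / REFL_DRMI_ONLY_MW for p in refl_lf_mw]

plt.plot(refl_norm, trx_norm, '*', label='2025-09-18 lock')
plt.xlabel('REFL A LF / DRMI-only value')
plt.ylabel('TRX (normalized)')
plt.legend()
plt.show()
```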
I found Sheila's code in the alog above, and added these data points to it to plot the transmission versus the refl dc power for different PRGs. This suggests, as we expect, that our PRG is quite high, the data points showing a PRG between 48 and 54. However, the very early points indicate a much lower PRG.
I also added Sheila's plot of the arm transmission versus CARM offset in picometers. Based on our transmission, it seems that in the "CARM 150 picometers" state, our CARM offset is actually pretty close to 150 pm. Similarly for CARM 5 pm, assuming that our PRG is close to 54. These points are marked with stars.
Anyway, this doesn't really help us understand what's going wrong with the offset reduction.
Jeff and I noticed that today's lock may have some elevated bounce and roll modes.
9.7 Hz bounce modes.
13.7-13.9 Hz roll modes.
Some fundamental 500 Hz violin modes for fun.
Another plot from 6-600 Hz from earlier in this lock.
I also took a look at a different lock ~10 days ago, pre power outage.
9 Hz bounce mode
13 Hz roll mode
Some exquisite-looking violins.
Tagging SUS.
We have now been locked for over 16 hours.
IMC REFL DC power is steady at 18.5 mW
IMC WFS A is at 0.95 mW and IMC WFS B is at 0.75 mW
The IMC power in is 62 W and the power at IM4 trans is 56.7 W
MC2 trans is about 9670 [mystery units]
This is reasonable power for IMC REFL, but the WFS power is very low. These are the jitter witnesses, and jitter subtraction is not performing as well as it was before the power outage. I can think of several possible reasons for this, but I'm sure that having less than a mW of power isn't helping.
We may want to consider either a) increasing the power on the IMC REFL path, b) changing the splitter between IMC REFL and IMC WFS to a 50/50 instead of a 90/10, or c) some combination of the first two options that gets us reasonable power on both IMC REFL and IMC WFS. A rough estimate for option (b) follows.
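A back-of-envelope sketch for option (b), assuming the ~20 mW now reaching the splitter is conserved and ignoring any losses between the splitter and the diodes:

```python
# Rough estimate of the diode powers under a 50/50 split, using the
# powers quoted above; losses downstream of the splitter are ignored
# (an assumption).
refl_mw = 18.5
wfs_total_mw = 0.95 + 0.75
total_mw = refl_mw + wfs_total_mw  # ~20.2 mW reaching the split

print(f"current split: {refl_mw/total_mw:.0%} REFL / {wfs_total_mw/total_mw:.0%} WFS")
print(f"50/50 estimate: REFL {0.5*total_mw:.1f} mW, WFS path {0.5*total_mw:.1f} mW")
# current split: 92% REFL / 8% WFS
# 50/50 estimate: REFL 10.1 mW, WFS path 10.1 mW (~6x more on the WFS,
# at the cost of roughly halving IMC REFL)
```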
The numbers are confirmed to have held through the entire 40 hours of this most recent lock (killed by earthquake).
Looking back before the power outage, the nominal IMC REFL and IMC WFS powers at 60 W PSL lock were 18.6 mW for IMC REFL, 0.95 mW for WFS A and 0.76 mW for WFS B. So, we are now back to operating with our nominal powers at these PDs, except that the waveplate was adjusted to reduce the power to these diodes by half.
So, if we went back to the old waveplate setting, we would have double these powers. This would be too much power for the IMC REFL diode.
We have chosen to not make any further adjustments to this path.
I created an updated blend for the SR3 pitch estimator. These are in the SUS SVN next to the first version (see LHO log 84452).
The design script is blend_SR3_pitchv2.m.
The Foton update script is make_SR3_Pitch_blend_v2.m.
These are both in {SUS_SVN}/HLTS/Common/FilterDesign/Estimator/ - revision 12608.
The update script will install the new blends into FM2 of SR3_M1_EST_P_FUSION_{MEAS/MODL}_BP with the name pit_v2. pit_v1 should still be in FM1. Turn on FM2, turn FM1 off (sketched below).
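For reference, the swap amounts to something like this minimal sketch in guardian/ezca style; the filter-bank names are from this entry, but treat the exact calls as an assumption and only run such a thing from the site workstations:

```python
# Sketch of the FM1 -> FM2 swap described above, in guardian/ezca style.
# The bank names follow this entry; the ezca calls are an assumption
# about the standard interface, and this writes to live EPICS channels.
from ezca import Ezca

ezca = Ezca(ifo='H1')

for path in ('MEAS', 'MODL'):
    bank = f'SUS-SR3_M1_EST_P_FUSION_{path}_BP'
    ezca.switch(bank, 'FM2', 'ON')   # pit_v2 blends
    ezca.switch(bank, 'FM1', 'OFF')  # pit_v1 blends
```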
Jeff and Oli tried the first one, and see that the first 2 modes (about 0.65 and 0.75 Hz) are seeing more motion with the estimator damping than with the normal damping. To correct this, _v2 adds OSEM signal to the estimator for those modes. See plots below - the first 2 are the _v2 blend and a zoom of the _v2 blend. Figure 3 shows the measured pitch plant vs. the OSEM path - the modes line up pretty well. There is a bit of shift because the peaks are close together; I hope this will not matter. Figure 4 shows the plant vs. the model path. Now all 4 modes are driven by the measured OSEM signal instead of the model.
It is interesting to see that the model was not doing a good job of predicting the motion at the first 2 peaks. This is (I guess) because either (a) the model and the plant are different, or (b) there are unmodeled drives pushing the plant (the suspension) that the model doesn't know about.
I'm guessing the answer is (b - unmodeled drives), likely from DAC noise. I think this because:
1 - The plant fit is smooth and really good.
2 - In the yaw analysis that Edgard and Ivey are doing (not yet posted), the first mode of the yaw plant can be seen with the OSEM, but the ISI motion is much too small to excite that level of motion. Since the OSEM does see motion, something else must be exciting it.
3 - DAC noise is the only other thing I can think of.
A quick chat with Jeff indicates that the DAC noise models at those frequencies are not well trusted. We'll try something anyway and see if it is close. I don't see how to use the estimator to deal with that noise - we'd need an accurate realtime measurement.
Updated make_SR3_Pitch_blend_v2.m to r12610 after fixing the filter name and the subblock it writes to. Will be loading these in the next time we lose lock.
Thanks Oli!
Also - as a note to myself - I've attached 1 sec of drive signal from the SR3 outputs at the time when both estimators were on (2025-08-21 16:40 UTC, see LHO log 86491). The pitch drive is about 30 to 50 counts pk-pk, and not particularly high frequency compared to the 16384 Hz model rate. This suggests that the low-frequency DAC noise is worth following up, and also that it could theoretically be improved with whitening filters. However, since the DC levels are ~10k counts to hold the alignments, a simple gain probably won't work. plz note - I am NOT suggesting any changes here, just logging some observations for followup.
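To put rough numbers on that whitening remark (the DC and pk-pk figures are the ones above; the 18-bit full scale is an assumption):

```python
# Why a simple gain won't buy much: the DC alignment offset eats the
# DAC headroom. Full scale assumes an 18-bit DAC (+/- 2**17 counts);
# the DC and drive figures are the ones quoted in this note.
FULL_SCALE = 2**17   # counts (assumption)
dc_counts = 10_000   # alignment offset held on the DAC
drive_pkpk = 50      # pitch drive, counts pk-pk

for gain in (1, 5, 10, 13):
    peak = gain * (dc_counts + drive_pkpk / 2)
    print(f"gain {gain:2d}: peak ~{peak:,.0f} counts "
          f"({peak/FULL_SCALE:.0%} of full scale)")
# A flat gain of ~13 is already at the rail because of the DC offset,
# which is why any improvement would need frequency-dependent whitening
# (gain only where the noise matters) rather than a simple gain.
```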
Overdue comment - The real problem seems to be that the L to P path was not installed, but it is now, see aLOG 86567.
We do need to look into the DAC noise, however. The extra motion has been fixed by adding the L2P path AND changing the blends.