We've recently been having some issues with SRM tripping during initial alignment and during DRMI acquisition, alog 87370 for example. After consulting with Jeff K, I've bumped up the trip thresholds as listed below. This was based on the last few trips, so hopefully there is enough of a buffer now. Saved in the safe and observe SDFs.
| Stage | Old | New |
|---|---|---|
| M1 | 150 | 250 |
| M2 | 200 | 350 |
| M3 | 225 | 400 |
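For reference, a minimal sketch of how threshold bumps like this can be scripted through EPICS; the PV names below are hypothetical placeholders, not the actual SRM watchdog records, so check the SUS MEDM screens before writing anything:

```python
# Hedged sketch: bump suspension watchdog trip thresholds via EPICS.
# The PV names are HYPOTHETICAL placeholders for the SRM watchdog records.
from epics import caget, caput

new_thresholds = {"M1": 250, "M2": 350, "M3": 400}

for stage, value in new_thresholds.items():
    pv = f"H1:SUS-SRM_{stage}_WDMON_TRIP_THRESH"  # placeholder PV name
    old = caget(pv)
    caput(pv, value, wait=True)
    print(f"{stage}: {old} -> {caget(pv)}")

# Remember to accept the new values in both the safe and observe SDF
# snapshot files afterwards, as noted above.
```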
Lock loss 1444317365
No obvious cause; it ended a 29.5-hour lock.
1659 Observing. Needed an initial alignment, but all auto.
TITLE: 10/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY: Locked for 28.7 hours, range is stable, no alarms, no alerts overnight.
TITLE: 10/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING (19 hr lock!)
Range is stable. Environment is stable. Locked for the whole shift.
LOG:
None
TITLE: 10/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 8mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.20 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING
Calibration sweep was not run this morning due to LLO not being thermalized. TJ plans on running it tomorrow with LLO.
Otherwise, LHO has been quietly observing for ~14 hours!
TITLE: 10/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Locked for almost 14 hours. Range has been stable. Public tours came through, but it's been a very quiet shift.
LOG:
Locked and observing for 9.5 hours. H1 and L1 have moved the calibration sweep to tomorrow since we could not get a coordinated time in today.
Sat Oct 11 10:08:39 2025 INFO: Fill completed in 8min 36secs
TITLE: 10/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY: Locked for almost 5 hours, no alarms, calm environment. A few GRB alerts and one lock loss over the night.
TITLE: 10/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 00:38 UTC
After the initial alignment at the start of the shift, IFO locked fully automatically. The wind has also since calmed down.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:52 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD ദ്ദി(⎚_⎚) | 13:52 |
| 15:05 | fac | randy | y-arm | - | Y2 BTE inspection | 22:53 |
| 16:38 | cheta | matt | CHETA.JAC | - | parts checks | 17:02 |
| 17:02 | psl | keita.jenny | opticslab | YES | ISS PD Array | 00:28 |
| 17:47 | pem | mitch | sitewide | - | dustmon famis | 18:40 |
Closes FAMIS 27427, Last checked in alog 87284
Corner Station looks good - nothing of note
Outstations look good too, except for EX, which had a peak on Tuesday. This isn't a problem; it is corroborated by Eric's alog 87343 reporting a fan bearing replacement on that fan. Everything has looked quiet since.
Screenshots attached.
TITLE: 10/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Started the day with high winds peaking at just under 40mph at 10am PDT, but they never really died down. Then in the early afternoon a huge M7.6 EQ between Argentina and Antarctica took over most of the afternoon. Ended the shift starting an initial alignment for the hand-off to Ibrahim (the EQ is steadily dropping, but it's still windy).
LOG:
Ibrahim ran a calibration measurement last night, report attached to this alog (again, the weird bug happened where it pulled the wrong report reference, so I had to regenerate it). Just from the first page, we can see that the PCAL/GDS transfer function, as measured by both PCAL X and Y, shows the uncertainty to be around 1% again. We can also see that the modeled actuation strength matches up very well with the current model that has been running since 8/28, indicating that I corrected the drivealign gain properly (ignoring all the mistakes I made in the middle).
The difference in the sensing function is mostly due to the loss of optical gain that came with the power outage. There might be some small change in the spring, but it is very minimal, which lines up with the fact that trying to adjust the SRCL offset yesterday only made things worse.
Comparing the GDS/PCAL broadband with previous measurements, we can see that this broadband (in red, Oct 10) is almost exactly the same as the broadband taken on Aug 28 (gray reference), when we pushed this particular calibration. There are small differences at low frequency, perhaps due to some change in the spring behavior of the sensing function, and at high frequency, which I don't have a good immediate explanation for.
Given that correcting the TST actuation strength brought the calibration almost exactly back to the model, I spoke with Joe to understand what the "mysterious" large deviation in the calibration was on Oct 4 (green trace). In the process, I also made a plot comparing the same measurement times, but instead looking at CAL DELTAL/PCAL, i.e. examining what our calibration would look like without TDCFs (I matched the colors with the GDS/PCAL plot). By comparing the CAL DELTAL measurement to the GDS measurement, we determined that the TDCFs are indeed doing something correctly; without them, on Oct 4 our uncertainty would be over 6%. However, they are clearly not doing a perfect job of correcting us back to the model. Joe thinks this is because the TDCF correction process may not completely account for the effect of the DARM OLG between each of the actuation TDCF measurement points. We trended what happened to kappa UIM and kappa PUM when I adjusted the L3 drivealign gain to bring kappa TST back to 1, and we see that kappa UIM dropped by 0.9% and kappa PUM by 0.4%. This happens because the DARM OLG changes as we adjust kappa TST, and kappa UIM and kappa PUM see some frequency-dependent change, but their calculation assumes that the DARM OLG is not changing.
In summary, the increased calibration uncertainty on Oct 4 is exactly due to the increase in kappa TST, even though the TDCFs were correcting GDS. We can see by comparing the broadbands from Sept 27 (purple trace) and Oct 4 that at both times the TDCFs were doing an imperfect job of correcting GDS, and that this scales with how far kappa TST deviates from 1; on Sept 27 kappa TST was about 1.02. Now that the actuation strength has been corrected, there is no need to update the calibration at this time.
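To make the OLG effect described above concrete, here is a toy numerical sketch. The loop shapes, stage split, and line frequencies below are invented for illustration (this is not the pyDARM TDCF pipeline); it only shows how rescaling the TST path changes |1+G| at low-frequency calibration lines, which a correction assuming a fixed DARM OLG would fold into the apparent kappa UIM/PUM:

```python
import numpy as np

# Toy stand-ins for the loop pieces; NOT the real H1 DARM model.
def loop_pieces(f):
    s = 2j * np.pi * f
    a_uim = 3e3 / s**3   # invented: UIM-like path, steep at low f
    a_pum = 2e4 / s**2   # invented: PUM-like path
    a_tst = 4e2 / s      # invented: TST-like path
    cd = 1.0             # sensing + digital gain lumped into a constant
    return a_uim, a_pum, a_tst, cd

def apparent_kappa_shift(f_line, kappa_tst):
    """Factor by which a fixed-OLG correction mis-scales a stage kappa
    at f_line when the true OLG has the TST path scaled by kappa_tst."""
    a_uim, a_pum, a_tst, cd = loop_pieces(f_line)
    g_nom = cd * (a_uim + a_pum + a_tst)
    g_true = cd * (a_uim + a_pum + kappa_tst * a_tst)
    return abs(1 + g_nom) / abs(1 + g_true)

for f_line in (15.6, 16.4):  # placeholder calibration-line frequencies
    shift = apparent_kappa_shift(f_line, kappa_tst=1.035)
    print(f"{f_line} Hz: apparent kappa scaled by {shift:.4f}")
```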
TITLE: 10/10 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 31mph Gusts, 22mph 3min avg
Primary useism: 0.92 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
IFO is in INITIAL_ALIGNMENT
Recovering from an M7.6 EQ in the Drake Passage (the gap between Argentina and Antarctica). Ground motion is looking good; wind is looking meh.
Plan is to lock, observe and stay that way.
Another big EQ rolling through. This one is down near Antarctica! Debated when to transition to "ASC Hi Gn", since the R-wave arrival was going to be 45-60 min out. But the S and P waves triggered EQ mode and we saw a yellow Picket Fence, so I clicked the ASC Hi Gn button; during the transition H1 lost lock. This might have been too big for ASC Hi Gn regardless, but I kind of wish I had clicked it earlier.
Winds were peaking just under 40mph around the lockloss as well.
Will be hanging out here for a few hours.
Now we wait for R-wave which is probably due to arrive around 2135-ish utc.
We lost lock ~3 seconds after the CHARD_Y filter bits were changed; there was also a big jump in ground motion at the same time, and it was gusting >30mph at the corner station.
[Joan-Rene Merou, Alicia Calafat, Anamaria Effler, Sheila Dwyer, Robert Schofield, Jenne Driggers] We have looked at the near-30 Hz and near-100 Hz combs (Detchar issue 340) in all of the LHO Fscan channels (the full 148-channel list can be found in O4_H1_Fscan_ch_info.yml) to find witnesses, and also channels where the amplitude and coherence change on the same dates as in DARM. The list of combs is the following (a minimal sketch for enumerating their harmonic frequencies follows the table):
| spacing (Hz) | offset (Hz) |
|---|---|
| 29.9695138888 | 0 |
| 99.99845486125 | 70.02888889 |
| 99.99845679 | 0 |
| 99.99846 | 29.9694 |
| 99.9984722225 | 89.90847222 |
| 99.99865 | 0 |
| 99.99845 | 10.08992 |
| 29.96952 | 520.17208 |
| 29.9695211 | 589.9007589 |
| 29.96951374 | 760.22840625 |
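A minimal sketch (assuming only numpy) for enumerating the harmonic frequencies of each comb in the table, e.g. to overlay them on an ASD or coherence plot:

```python
import numpy as np

# (spacing_Hz, offset_Hz) pairs taken from the table above
combs = [
    (29.9695138888, 0.0),
    (99.99845486125, 70.02888889),
    (99.99845679, 0.0),
    (99.99846, 29.9694),
    (99.9984722225, 89.90847222),
    (99.99865, 0.0),
    (99.99845, 10.08992),
    (29.96952, 520.17208),
    (29.9695211, 589.9007589),
    (29.96951374, 760.22840625),
]

def comb_frequencies(spacing, offset, f_max=2000.0):
    """All comb teeth offset + n*spacing below f_max (n >= 0, f > 0)."""
    n_max = int(np.floor((f_max - offset) / spacing))
    freqs = offset + spacing * np.arange(0, n_max + 1)
    return freqs[freqs > 0]

for spacing, offset in combs:
    teeth = comb_frequencies(spacing, offset)
    print(f"spacing {spacing} Hz, offset {offset} Hz: "
          f"{len(teeth)} teeth below 2 kHz, first few {teeth[:3]}")
```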
We list here the channels that show most of these combs. These same channels show changes in coherence between July 1st and July 7th 2024, but do not show changes in the amplitude of the combs.
- H1_IMC-F_OUT_DQ
- H1_LSC-MCL_IN1_DQ
- H1_LSC-MICH_IN1_DQ
- H1_LSC-SRCL_IN1_DQ
- H1_PEM-CS_MAG_EBAY_LSCRACK_X_DQ
- H1_PEM-CS_MAG_EBAY_LSCRACK_Y_DQ
- H1_PEM-CS_MAG_EBAY_LSCRACK_Z_DQ
- H1_PEM-CS_MAG_LVEA_INPUTOPTICS_X_DQ
- H1_PEM-CS_MAG_LVEA_INPUTOPTICS_Y_DQ
- H1_PEM-CS_MAG_LVEA_INPUTOPTICS_Z_DQ
In most channels, the comb amplitude tends to get quite low above ~1500 Hz. The following set of channels shows differences between X, Y and Z:
- H1_PEM-CS_MAG_EBAY_SUSRACK_X_DQ (Higher amplitudes and towards higher frequencies)
- H1_PEM-CS_MAG_EBAY_SUSRACK_Y_DQ (Lower comb amplitudes)
- H1_PEM-CS_MAG_EBAY_SUSRACK_Z_DQ (Lower comb amplitudes)
Regarding CS_MAG_LVEA_OUTPUTOPTICS, these combs can be seen best in X, weaker in Y and almost non-existent in Z. (In CS_MAG_LVEA_INPUTOPTICS they look roughly the same height)
- H1_PEM-CS_MAG_LVEA_OUTPUTOPTICS_X_DQ (Strongest)
- H1_PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ (Weaker lines)
- H1_PEM-CS_MAG_LVEA_OUTPUTOPTICS_Z_DQ (Almost no lines)
Same behavior at:
- H1_PEM-CS_MAG_LVEA_VERTEX_X_DQ (Strongest lines)
- H1_PEM-CS_MAG_LVEA_VERTEX_Y_DQ (Weaker lines)
- H1_PEM-CS_MAG_LVEA_VERTEX_Z_DQ (Almost no lines)
We can see that these combs mostly appear in the corner station; they do not appear in either EX or EY channels. However, the comb with spacing 99.99865 Hz and offset 0.000 Hz does appear in many EX and EY channels and becomes more coherent after July 7th. It is very close to 100 Hz, though, so it may be influenced by other round-number combs (?)
Looking at the 52 additional channels listed in the LHO ADC channels list, we have found the following:
The combs appear in the following channels with high peaks and high coherence:
- H1:PEM-CS_ADC_5_18_2K_OUT_DQ
- H1:PEM-CS_ADC_5_21_2K_OUT_DQ
- H1:PEM-CS_ADC_5_26_2K_OUT_DQ (all combs appear here, with high peaks)
Some combs appear in the following channels with low peaks and low coherence:
- H1:PEM-CS_ADC_5_22_2K_OUT_DQ
- H1:PEM-CS_ADC_5_23_2K_OUT_DQ
- H1:PEM-CS_ADC_5_24_2K_OUT_DQ
The following channels do not show the peaks but show an increase in coherence from July 1st to July 7th 2024:
- H1:PEM-CS_ADC_5_25_2K_OUT_DQ
- H1:PEM-CS_ADC_5_27_2K_OUT_DQ
- H1:PEM-CS_ADC_5_30_2K_OUT_DQ
Only the 99.99 Hz, offset 0 combs appear in the following channels:
- H1:PEM-CS_ADC_4_27_2K_OUT_DQ
- H1:PEM-CS_ADC_4_28_2K_OUT_DQ
- H1:PEM-CS_ADC_5_19_2K_OUT_DQ
- H1:PEM-CS_ADC_5_20_2K_OUT_DQ
- H1:PEM-CS_ADC_5_31_2K_OUT_DQ
In the arms, the following channels show coherence with only the 99.99 Hz, offset 0 combs:
- H1:PEM-EX_ADC_0_09_OUT_DQ
- H1:PEM-EX_ADC_0_13_OUT_DQ
- H1:PEM-EY_ADC_0_11_OUT_DQ
- H1:PEM-EY_ADC_0_12_OUT_DQ
- H1:PEM-EY_ADC_0_13_OUT_DQ
- H1:PEM-EY_ADC_0_14_OUT_DQ
The following channel shows the unexpected behavior of having the 99.99 Hz peak with coherence on July 1st, which then disappears on July 7th:
- H1:PEM-EX_ADC_0_12_OUT_DQ
In summary, after investigating the Fscan channel list and the additional channels, the ones that seem most promising for showing most of the lines with high coherence and high amplitude peaks are:
- H1:PEM-CS_ADC_5_18_2K_OUT_DQ
- H1:PEM-CS_ADC_5_21_2K_OUT_DQ
- H1:PEM-CS_ADC_5_26_2K_OUT_DQ
- H1:IMC-F_OUT_DQ
- H1:LSC-MCL_IN1_DQ
- H1:LSC-MICH_IN1_DQ
- H1:PEM-CS_MAG_EBAY_LSCRACK_X_DQ
- H1:PEM-CS_MAG_EBAY_LSCRACK_Y_DQ
- H1:PEM-CS_MAG_EBAY_LSCRACK_Z_DQ
- H1:PEM-CS_MAG_LVEA_INPUTOPTICS_X_DQ
- H1:PEM-CS_MAG_LVEA_INPUTOPTICS_Y_DQ
- H1:PEM-CS_MAG_LVEA_INPUTOPTICS_Z_DQ
- H1:PEM-CS_MAG_EBAY_SUSRACK_X_DQ
- H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_X_DQ
Of these, H1:PEM-CS_ADC_5_26_2K_OUT_DQ appears to be the one with the highest amplitudes. The following figure illustrates its ASD and coherence with DARM on July 7, 2024, showing the peaks for the harmonics of the combs in this study. As can be seen in the ASD, the highest peaks are those in the list of near-30 Hz and near-100 Hz combs; the only peaks higher than these are the power mains harmonics at multiples of 60 Hz. This channel shows all the combs listed, and most of them also show very high coherence with DARM.
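For reproducibility, a minimal gwpy sketch of the ASD/coherence comparison described above; the time span, fftlength, and overlap are arbitrary choices for illustration (and NDS/frame access is assumed), not the exact settings used in this analysis:

```python
from gwpy.timeseries import TimeSeries

# One of the witness channels identified above, on 2024-07-07 (UTC).
start, end = "2024-07-07 00:00", "2024-07-07 01:00"
pem = TimeSeries.get("H1:PEM-CS_ADC_5_26_2K_OUT_DQ", start, end)
darm = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, end)

# Match sample rates before computing coherence (the PEM channel is 2 kHz).
darm = darm.resample(pem.sample_rate)

asd = pem.asd(fftlength=64, overlap=32)               # witness channel ASD
coh = pem.coherence(darm, fftlength=64, overlap=32)   # coherence with DARM

asd.plot().savefig("pem_asd.png")
coh.plot().savefig("pem_darm_coherence.png")
```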
After determining in which channels the peaks appear, we have studied the coincidence of changes in the comb heights with the bias in H1:SUS-ITMY_L3_ESDAMON_DC_OUT16. The following figure shows the coincidences between the changes in the relative amplitude of the first harmonic of each comb (a sort of SNR) in DARM and the mean counts in H1:SUS-ITMY_L3_ESDAMON_DC_OUT16 across time. It can be seen that prior to May 2nd the channel count was set to 60. Once it changed to around -223 after that date, the SNR of the peaks increased suddenly overall. Afterwards, on June 13th, when the count was reduced to 0, most peaks simultaneously dropped to a much lower SNR.
Fri Oct 10 10:07:47 2025 INFO: Fill completed in 7min 43secs
Gerardo confirmed a good fill curbside.
This is for FAMIS #27398.
Laser Status:
NPRO output power is 1.857W
AMP1 output power is 70.67W
AMP2 output power is 140.3W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 16 days, 21 hr 4 minutes
Reflected power = 24.26W
Transmitted power = 106.6W
PowerSum = 130.8W
FSS:
It has been locked for 0 days 1 hr and 35 min
TPD[V] = 0.538V
ISS:
The diffracted power is around 3.9%
Last saturation event was 0 days 3 hours and 40 minutes ago
Possible Issues:
PMC reflected power is high
Comparing the last two calibration reports, we noticed that there has been a significant change in the systematic error, 87295. It's not immediately obvious what the cause is. Two current problems we are aware of: there is test mass charge and kappa TST is up by more than 3%, and we have lost another 1% of optical gain since the power outage.
One possible source is the SRC detuning changing (not sure how this could happen, but it might change).
Today, I tried to correct some of these issues.
Correcting actuation:
This is pretty straightforward: I measured the DARM OLG and adjusted the L3 DRIVEALIGN gain to bring the UGF back to about 70 Hz and kappa TST back to 1. This required about a 3.5% reduction in the drivealign gain, from 88.285 to 85.21. I confirmed that this did the right thing by comparing the DARM OLG to an old reference and watching kappa TST. I updated the guardian with this new gain, SDFed it, and changed the standard calibration pydarm ini file to have this new gain. I also remembered to update this gain in the CAL CS model.
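As a quick sanity check on the numbers above, the implied kappa TST can be recovered from the gain ratio (a trivial sketch, assuming the correction is simply the old gain divided by the measured kappa TST):

```python
old_gain = 88.285
new_gain = 85.21

implied_kappa_tst = old_gain / new_gain      # ~1.036, i.e. >3% high
reduction = 1 - new_gain / old_gain          # ~3.5% gain reduction
print(f"implied kappa TST = {implied_kappa_tst:.4f}, "
      f"gain reduction = {reduction:.2%}")
```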
Correcting sensing:
Next, Camilla took a sqz data set using FIS at different SRCL offsets, 87387. We did the usual 3 SRCL offset steps, but we were confused by the results, so we added a fourth at -450 ct. Part of this measurement requires us to guess how much SRCL detuning each measurement has, so we spent a bunch of time iterating to make our gwinc fit match the data in this plot. I'm still not sure we did it right, but it could also be something else wrong with the model. The linear fit suggests we need about a -435 ct offset, and we changed the offset following this result.
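For completeness, a minimal sketch of the kind of linear fit used to pick the new offset; the (offset, detuning) pairs below are hypothetical placeholders, not the measured values:

```python
import numpy as np

# HYPOTHETICAL (SRCL offset [ct], inferred SRC detuning) pairs --
# placeholders standing in for the four FIS measurements described above.
offsets = np.array([-250.0, -350.0, -400.0, -450.0])
detunings = np.array([0.30, 0.14, 0.06, -0.02])

# Linear fit, then solve for the offset where the detuning crosses zero.
slope, intercept = np.polyfit(offsets, detunings, 1)
zero_crossing = -intercept / slope
print(f"offset giving zero detuning ~ {zero_crossing:.0f} ct")
```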
Checking the result:
After these changes, Corey ran the usual calibration sweep. The broadband comparison showed some improvement in the calibration. However, the calibration report shows a significant spring in the sensing function. To compare how this looks relative to previous measurements, I plotted the last three sensing function measurements.
The calibration still had about 1% uncertainty on 9/27. On 10/4, the calibration uncertainty increased. Today, we changed the SRCL offset following our SQZ measurement. This plot compares those three times and includes the digital SRCL offset engaged at each time. I also took the ratio of each measurement to the 9/27 measurement to highlight the differences. It seems that the difference between the 9/27 and 10/4 calibrations cannot be attributed much to a change in the sensing. And clearly, this new SRCL offset gives the sensing function an even larger spring than before.
Therefore, I concluded that this was a bad change to the offset, and I reverted. Unfortunately, it's too late today to get a new measurement. Since we have changed parameters, we would need a calibration measurement before we could generate a new model to push. Hopefully we can get a good measurement this Saturday. Whatever has changed about the calibration, I don't think it's from the SRCL offset. Also, correcting the L3 actuation strength was useful, but it doesn't account for the discrepancy we are seeing.
It turns out that changing H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN was the WRONG thing to do. The point was only to update the SUS drivealign gain to bring us back to the N/ct actuation strength of the model; the CAL CS copy of the gain needed to stay at the model value. We lost lock shortly after I updated the drivealign gains, so I didn't realize the error until just now, when I checked the grafana page and saw that the monitoring lines were reporting 10% uncertainty!
Vlad and Louis helped me out. By going out of observing and changing the H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN back to the old value (88.285), I was able to bring the monitoring line uncertainty back down to its normal value (2%). I have undone the SDF in the CAL CS model to correct this.
The observing time section with this error is from 1444075529 to 1444081417.
Updating this alog after discussion with the cal team to include a detchar-request tag. Please veto the above time!
Request: veto time segment listed above.
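For anyone pulling this segment, a minimal sketch converting the GPS bounds above to UTC (assuming gwpy is available):

```python
from gwpy.time import from_gps

# GPS bounds of the observing time with the bad CAL-CS gain, from above.
start_gps, end_gps = 1444075529, 1444081417
print("start:", from_gps(start_gps))   # UTC datetime of segment start
print("end:  ", from_gps(end_gps))     # UTC datetime of segment end
print("duration: %d s" % (end_gps - start_gps))
```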
I also did revert the gain change in the pydarm_ini file, but forgot to mention it earlier.