Ran the usual calibration sweeps following the TakingCalibrationMeasurements wiki at 18:30 UTC; H1 had been locked for almost 7 hours. Calibration monitor MEDM screenshot attached, taken right before moving to NLN_CAL_MEAS.
Broadband: 18:31 to 18:37 UTC
Simulines: 18:38 to 19:01 UTC
Files written:
2024-09-14 19:07:37,346 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240914T183802Z.hdf5
2024-09-14 19:07:37,359 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240914T183802Z.hdf5
2024-09-14 19:07:37,369 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240914T183802Z.hdf5
2024-09-14 19:07:37,380 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240914T183802Z.hdf5
2024-09-14 19:07:37,390 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240914T183802Z.hdf5
H1 resumed observing at 19:04 UTC while data processing was running in the background.
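For anyone wanting to poke at these outputs, a minimal sketch (assuming only that the files are ordinary HDF5, and using h5py) to browse one of the files listed above without assuming anything about their internal layout:

import h5py

# One of the simulines output files listed above.
fname = '/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240914T183802Z.hdf5'

with h5py.File(fname, 'r') as f:
    # Walk the file and print every group and dataset name, plus dataset shapes.
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}  shape={obj.shape} dtype={obj.dtype}")
        else:
            print(name + "/")
    f.visititems(show)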
Sat Sep 14 08:11:53 2024 INFO: Fill completed in 11min 49secs
TITLE: 09/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 1mph Gusts, 0mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 has been locked and observing for almost 3 hours. Calibration measurements planned for today at 18:30 UTC.
TITLE: 09/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing at 160Mpc and have been locked for over 2.5 hours. Had one other lockloss today (besides the one we were relocking from when I came in), but other than running an initial alignment for that, it was hands off.
LOG:
23:00UTC Relocking and at ACQUIRE_DRMI_1F
23:35 Lockloss from LASER_NOISE_SUPPRESSION
00:49 NOMINAL_LOW_NOISE
01:07 Observing
01:09 Lockloss
01:09 Started an initial alignment
01:31 Initial alignment done, relocking
02:18 NOMINAL_LOW_NOISE
02:21 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:15 | PCAL | Dripta | PCAL Lab | y(local) | Quick PCAL trip | 00:21 |
Lockloss @ 09/14 01:09UTC, a few minutes after going back into Observing
02:21 Observing
Closes FAMIS#26328, last checked 79826
Corner Station Fans (attachment1)
- All fans are looking normal and within range.
Outbuilding Fans (attachment2)
- All fans are looking normal and within range.
Landed the new cables for the old type of controllers (Dual type) for IP1, IP2, and IP3. The MEDM screens will need to be updated to reflect the correct visual representation.
TITLE: 09/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Short lock stretches today for some reason. Fortunately, at least relocking has been mostly straightforward.
H1 is currently relocking, so far up to TRANSITION_FROM_ETMX.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
15:26 | PCAL | Tony, Karen | PCal Lab | Local | Technical cleaning | 16:36 |
16:05 | TCS | Camilla | Opt Lab | n | Packing parts | 16:49 |
16:42 | VAC | Travis | MX | n | Check nitrogen dewar | 17:12 |
17:16 | TCS | Camilla | Optics Lab | n | Packing Parts | 18:16 |
18:49 | PCAL | Tony | PCal Lab | Local | Testing | 19:27 |
21:12 | PCAL | Tony | PCal Lab | Local | Testing | 21:16 |
22:53 | TCS | Camilla | Opt Lab | n | Parts cleanup | 23:18 |
TITLE: 09/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 10mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Currently relocking and at MAX_POWER.
Lockloss @ 22:13 UTC - link to lockloss tool
No obvious cause; only locked for about 90 minutes. I don't see the ETMX glitch prior to the lockloss that I've seen in the others today.
01:07UTC Observing
FAMIS26008
Nothing looks out of the ordinary, and everything is consistent with the last time this FAMIS task was run (alog79990).
Lockloss @ 18:47 UTC - link to lockloss tool
No obvious cause; we were only locked for 20 minutes. ETMX started moving very slightly 43ms before the lockloss, but otherwise I don't see anything suspicious.
H1 back to observing at 20:42 UTC. Automated relock, except that I moved PRM to help during DRMI acquisition. One lockloss during TRANSITION_FROM_ETMX, but otherwise uneventful.
After a lockloss this morning, Sheila and I spent some time looking further into the mode spacing for ALS (alog80065). We each took an arm and looked at the mode spacing and peak heights at good alignments and some not-as-good alignments. This exercise didn't yield any major revelations, but did confirm that our Y arm definitely doesn't have perfect mode matching; the X arm seems to be better than Y.
Attachment 1 - ALS Y 00 mode to 2nd order mode spacing. This showed that yesterday Sheila and Ibrahim missed the small 1st order mode between these, and Sheila has since updated their FSR and FHOM values (alog80076).
Attachment 2 - An example of ALS Y locking below the max flashes, showing that some of our power is in the higher order modes.
Attachment 3 - ALS X equivalent of attachment 1
Attachment 4 & attachment 5 - ALS X 1st order peak spacing and height for an okay alignment (4) vs a better alignment (5).
Attachment 6 & attachment 7 - Same as above but for the 2nd order modes.
Lockloss @ 16:36 UTC - link to lockloss tool
No obvious cause, but there's a small hit on ETMX about 100ms before the lockloss. We've often seen similar motion before locklosses like this.
H1 back to observing at 18:29 UTC. Fully automated relock after some brief ALS commissioning.
Sheila, Ibrahim
Context: ALSY has been misbehaving (which on its own is not new). Usually, problems with ALS locking pertain to an inability to attain higher magnitude flashes. However, in recent locks we have consistently been able to reach values of 0.8-0.9 cts, which is historically very lockable, but ALSY has not been able to lock in these conditions. As such, Sheila and I investigated the extent of misalignment and mode-mismatching in the ALSY Laser.
Investigation:
We took two references: a "good" alignment, where ALSY caught swiftly with minimal swinging, and a "bad" alignment, where ALSY caught with frequent suspension swinging. We then compared their measured/purported higher order mode widths and magnitudes. The two attached screenshots are from two recent locks (last 24hrs) from which we took this data. We used the known free spectral range and g-factor along with the ndscope measurements to obtain the expected higher order mode spacing and then compared this to our measurements. While we did not get exact integer values (mode number estimate column), we convinced ourselves that these peaks were indeed our higher-order modes (to an extent that will be investigated further). After confirming that these peaks were our modes, we calculated the measured power distribution across them.
The data is in the attached table screenshot (pasting the table in directly was not very readable).
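As a rough sketch of the power-distribution step (the peak heights and mode labels below are placeholders, not the measured values in the attached table): with comparable linewidths, the fraction of power in each identified mode can be estimated by normalizing the peak heights read off the scan.

# Hypothetical peak heights (arbitrary units) read off an ALS-Y cavity scan in ndscope;
# the real values are in the attached table, these are placeholders.
peak_heights = {
    "TEM00": 0.85,
    "1st order": 0.04,
    "2nd order": 0.18,
}

total = sum(peak_heights.values())
for mode, height in peak_heights.items():
    # Transmitted peak height is roughly proportional to the power coupled
    # into that transverse mode when the peaks have similar widths.
    print(f"{mode}: {100 * height / total:.1f}% of total")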
Findings:
Next:
Investigation Ongoing
TJ and I had a look at this again this morning, and realized that yesterday we misidentified the higher order modes. In Ibrahim's screenshot, there is a small peak between the 00 mode and the one that is 18% of the total; this small peak is the misalignment mode, while the mode with 18% of the total is the mode mismatch. This fits with our understanding that part of the problem we have with ALSY locking is due to bad mode matching.
Attached is a quick script to find the arm higher order mode spacing; the FSR is 37.52kHz and the higher order mode spacing is 5.86kHz.
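A minimal sketch of the same calculation (the arm length and mirror radii of curvature below are nominal values I'm assuming, so with these numbers the results land near, but not exactly on, the values quoted above; the exact spacing depends on the as-built radii):

import numpy as np

c = 299792458.0          # speed of light [m/s]
L = 3994.469             # arm cavity length [m], nominal (assumed)
R_itm = 1934.0           # ITM radius of curvature [m], nominal (assumed)
R_etm = 2245.0           # ETM radius of curvature [m], nominal (assumed)

fsr = c / (2 * L)                           # free spectral range
g1, g2 = 1 - L / R_itm, 1 - L / R_etm       # cavity g-factors (both negative here)
gouy = np.arccos(-np.sqrt(g1 * g2))         # one-way Gouy phase for g1, g2 < 0
f_tms = gouy / np.pi * fsr                  # transverse mode spacing

print(f"FSR = {fsr/1e3:.2f} kHz")
print(f"Transverse mode spacing = {f_tms/1e3:.2f} kHz")
# In a scan, the first-order peak appears offset from the 00 mode by f_tms
# folded back into one FSR.
folded = min(f_tms % fsr, fsr - f_tms % fsr)
print(f"Folded spacing = {folded/1e3:.2f} kHz")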
A while ago Oli found that some earthquakes cause IMC splitmon saturations, possibly causing locklosses. I asked Daniel about this with respect to the tidal system to see if we could improve the offloading some. After some digging we found that some of the gains in the IMC get lowered to -32 dB at high power, greatly reducing the effective range of the IMC SPLITMON. He and Sheila decided that the best place to recover this gain was during the LASER_NOISE_SUPPRESSION state (575), so Sheila added some code to that state to redistribute some of the gain (lines 5778-5788):
self.gain_increase_counter = 0
if self.counter == 5 and self.timer['wait']:
    if self.gain_increase_counter < 7:  # increase the IMC fast gain by this many dB
        # redistribute gain in IMC servo so that we don't saturate splitmon in earthquakes, JW DS SED
        ezca['IMC-REFL_SERVO_IN1GAIN'] -= 1
        ezca['IMC-REFL_SERVO_IN2GAIN'] -= 1
        ezca['IMC-REFL_SERVO_FASTGAIN'] += 1
        time.sleep(0.1)
        self.gain_increase_counter += 1
    else:
        self.counter += 1
We tried running this, but an error in the code broke the lock. That's fixed now; the lines are commented out in ISC_LOCK and we'll try again some other day.
This caused 2 locklosses, so it took a little digging to figure out what was happening. The idea is to increase H1:IMC-REFL_SERVO_FASTGAIN to compensate for reducing H1:IMC-REFL_SERVO_IN1GAIN and H1:IMC-REFL_SERVO_IN2GAIN, all analog gains used in IMC/tidal controls. It turns out there is a decorator used in almost every state of IMC_LOCK that sets H1:IMC-REFL_SERVO_IN1GAIN to some value, so when ISC_LOCK changes all 3 of these gains, IMC_LOCK comes in afterward and resets FASTGAIN. This is shown in the attached trend: on the middle plot, the IN1 and IN2 gains step down like they are supposed to, but the FASTGAIN does a sawtooth caused by two guardians controlling this gain. The decorator is called IMC_power_adjust_func() in ISC_library.py and is applied as @ISC_library.IMC_power_adjust in IMC_LOCK. The decorator just looks at the value of the FASTGAIN; Daniel suggests that it would be best for this decorator to look at all of the gains and do this a little smarter. I think RyanS will look into this, but it looks like redistributing gain in the IMC is not straightforward.
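One purely hypothetical way the decorator could be made smarter, checking the sum of all three gains instead of forcing a single channel back (this is a sketch under those assumptions, not the actual IMC_power_adjust_func()):

import functools

# 'ezca' is provided by the guardian environment in the real code; stubbed here
# as a plain dict just so the sketch is self-contained.
ezca = {'IMC-REFL_SERVO_IN1GAIN': -32,
        'IMC-REFL_SERVO_IN2GAIN': -32,
        'IMC-REFL_SERVO_FASTGAIN': 10}

def imc_gain_check(nominal_total_db):
    """Hypothetical decorator: look at the sum of all three servo gains rather
    than a single channel, so a redistribution done by ISC_LOCK during
    LASER_NOISE_SUPPRESSION isn't undone by IMC_LOCK."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            chans = ['IMC-REFL_SERVO_IN1GAIN',
                     'IMC-REFL_SERVO_IN2GAIN',
                     'IMC-REFL_SERVO_FASTGAIN']
            if sum(ezca[c] for c in chans) != nominal_total_db:
                # Overall gain is actually wrong: restore it on IN1 only,
                # leaving any intentional IN2/FAST redistribution alone.
                ezca['IMC-REFL_SERVO_IN1GAIN'] = (
                    nominal_total_db
                    - ezca['IMC-REFL_SERVO_IN2GAIN']
                    - ezca['IMC-REFL_SERVO_FASTGAIN'])
            return func(self, *args, **kwargs)
        return wrapper
    return decorator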
Tagging this with SPI. This would be a good case to compare against: the SPI should reduce HAM2-3 motion and reduce IMC length changes coming directly from ground motion. If the IMC drive needs to match the arm length changes, then it won't help (unless we do some feedforward of the IMC control to the ISIs?).