Workstations were updated and rebooted. OS packages were updated. Conda packages were not updated.
Was alerted at 8:49 UTC.
Guardian was having trouble with automatic initial alignment due to ALS_XARM being stuck at CHECK_CRYSTAL_FREQ with no valid state to jump to afterwards.
I hit "Load" and restarted initial alignment and waited until Guardian got past this stage in initial alignment, which happened around 50 minutes later at 9:30.
Things seem to be working now so I set guardian/IFO_Notify back to self-managed and expect to be called again if we get stuck for any other reason.
TITLE: 11/07 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Locked for most of the evening, but high winds have kept H1 down for the last hour and a half.
Wind speeds aren't quite as bad as they were, so I'm going to set H1 to try locking as I hand off to Ibrahim for the night.
LOG:
No log for this shift.
Lockloss @ 06:22 UTC - assuming this is due to the suddenly high wind speeds. Currently seeing gusts over 50mph and the wind is loud against the outer walls of the control room.
First thing I noticed before the lockloss was PRG getting very unstable and ASC control signals working harder.
Unfortunately it doesn't look like the winds will be calming down much tonight according to the forecast.
Corner station wind spiked to 55mph a minute ago; holding H1 in DOWN for now as it's not having any success locking ALS.
State of H1: Observing at 158Mpc
H1 has been locked and observing for just over 4 hours. EX VEA temperature has come back to normal.
The first page of the figure shows that there was a gain of 2 or 3 Mpc when the EX HVAC was shut down. The second page shows DARM for this series of tests, with the most evident difference in the 38-45 Hz range. We have already realized much of the potential improvement from damping of the cryobaffle by significantly reducing the noise from fans in the 4 Hz region over the last couple of months and reducing the 52 Hz peak over the weekend, so I don't expect more than a couple of Mpc improvement from damping.
EX HVAC (fans, chillers, CW pump) on-off tests:
Start shutting down: 20:00
Start turning on: 20:15
Start shutting down: 20:30
Start turning on: 20:45
Start shutting down: 21:00
Start turning on: 21:15
FAMIS 26261, last checked in alog 73927
Fans have been turning on and off more than usual because of Robert's recent HVAC tests, but the noise levels of each fan look to be nominal and within range.
FAMIS 19967
pH of PSL chiller water was measured to be between 10.0 and 10.5 according to the color of the test strip.
FAMIS 20001
There's been a slight increase in temperature and humidity over the past week seen in the laser, ante-, chiller, and diode rooms, but not the LVEA.
The output power for the NPRO has been slowly trending upward, but the output for both amps has been trending down.
TITLE: 11/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 18mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.26 μm/s
QUICK SUMMARY: H1 has just reached NLN and camera servos have turned on. Camilla and Naoki are checking on SQZ before we start observing.
TITLE: 11/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: A lock loss at 22:52 UTC ended a 16.5-hour lock. During the day there was HVAC testing from Robert (alog 74026). We just got back to low noise.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:19 | FAC | Karen | OpticsLab | n | Tech clean | 17:23 |
| 17:24 | FAC | Karen | MY | n | Tech clean | 18:24 |
| 17:49 | FAC | Kim | MX | n | Tech clean | 18:27 |
| 17:49 | FAC | Randy | MX,MY | n | Parts hunt | 19:49 |
| 20:00 | PEM/FAC | Robert | EX | n | Turning off HVAC | 22:00 |
| 22:50 | FAC | Eric | EX | n | Changing supply fan configuration | 23:50 |
Naoki, Camilla
After noticing that it has been taking >20 minutes for the ADS servos to converge once we are at NLN (alog 73842), Jenne suggested increasing ADS gains by a factor of 3 once we are close to NLN (after TRANSITION_TO_ETMX).
After this afternoon's 22:52UTC lockloss, Naoki edited lscparams and CAMERA_SERVO.py to increase the ADS gains from 20 to 60 in the TURN_CAMERA_SERVO_ON state of CAMERA_SERVO guardian. This state is only requested in ADS_TO_CAMERA state of ISC_LOCK, where it should be safe to increase gains. We'll plan to watch this first relock.
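For reference, the shape of the change is roughly the following. This is an illustrative sketch only, not the actual lscparams/CAMERA_SERVO.py code, and the DOF names are placeholders:

```python
# Illustrative sketch, not the real CAMERA_SERVO.py. The actual edit raised
# the ADS gains from 20 to 60 in the TURN_CAMERA_SERVO_ON state so the
# servos converge faster once we're close to NLN.

NOMINAL_ADS_GAIN = 20
CONVERGENCE_BOOST = 3  # Jenne's suggested factor of 3

def turn_camera_servo_on_gains(dofs=("PIT1", "PIT2", "PIT3",
                                     "YAW1", "YAW2", "YAW3")):
    """Boosted gains applied while the ADS servos are converging."""
    return {dof: NOMINAL_ADS_GAIN * CONVERGENCE_BOOST for dof in dofs}
```

Since TURN_CAMERA_SERVO_ON is only requested from ADS_TO_CAMERA, the boosted gains never apply during observing.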
As part of Robert's ongoing work to look into the effects of the HVAC on DARM (most recent WP11504), they noticed that changing the supply fan configuration started to show up negatively in DARM: ~4 Hz peaks and their harmonics were visible in our DARM FOM on the wall.
They started this work at about 22:34 UTC. We lost lock at 22:52, but it's doubtful that this was the cause of the lock loss.
This lock loss had another one of the LSC-DARM wiggles ~100ms before the lock loss.
There was HVAC work going on at EX at the time, but I don't see much ground motion at that time, so I'm inclined to rule them out.
Over the last few days the TCS lasers have been relocking and then outputting slightly different powers after each relock, which causes us to inject a slightly different power into vacuum. The largest power difference is currently with CO2Y: last week the laser was outputting 45.0 W and putting 1.68 W into vacuum as measured by the power meter on the table; currently CO2Y is outputting 43.0 W and putting 1.59 W into vacuum. While this is not necessarily a new problem, there were more relocks in the past 5 days from front ends going down and temperature swings.
To help prevent this in the future, Camilla plans to recalibrate the rotation stages with the annular mask again tomorrow. We've also talked about putting some bootstrapping code in so that even if the lasers relock at a different output power, they will still inject the same power.
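The bootstrapping idea can be sketched as a small servo loop. Everything below is a hypothetical illustration: the function and parameter names are placeholders, not real TCS code or EPICS channels:

```python
def bootstrap_power(read_injected_mw, step_angle_deg, target_mw,
                    tol_mw=0.01, gain_deg_per_mw=2.0, max_iter=20):
    """Nudge the rotation stage until the injected power matches the target,
    regardless of what output power the laser relocked at.

    read_injected_mw: callable returning the measured injected power [mW]
    step_angle_deg:   callable that moves the rotation stage by an angle [deg]
    """
    for _ in range(max_iter):
        err_mw = target_mw - read_injected_mw()
        if abs(err_mw) <= tol_mw:
            return True  # converged on the target injected power
        # proportional step; the gain would come from the measured
        # power-vs-angle slope of the rotation stage calibration
        step_angle_deg(gain_deg_per_mw * err_mw)
    return False  # didn't converge -- leave it for a human
```

The point of the design is that the loop servos on the measured injected power, so a shift in the laser's raw output after a relock drops out.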
Locked for 14 hours. Robert has started his HVAC shutdowns starting at EX. The systems will be repeatedly turned on and off over the next hour or two.
Motivated by interesting results from Sheila's ESD tests last week (LHO:73913) and Gabriele finding that there is some nonlinear noise coupling in ETMX L3 from <4Hz to 15-25Hz in DARM (LHO:73937), I've started taking a closer look at the DARM loop to 1.) reduce ETMX L3 actuation below 4Hz (nominally by increasing L1/L2 actuation at low frequencies) and 2.) have the UIM stage roll off more aggressively, similar to what LLO has had in place since O1 (see for example page 10 of G1501372 and page 11 of G2000149). I'm uploading the "DARM critique" generated by pyDARM for H1 based on report 20231027T203619Z. This is still a work in progress; I'm just leaving these plots & links here for my own future reference.
Tagging CAL.
We've done three measurements to assess OMC loss, but we've found that there's something weird about the DCPD transient response (will attach another alog, but it seems as if something is railing inside the chamber, or maybe the OMC DCPD inductor is responding nonlinearly causing some kind of soft saturation, or maybe it's just a whitening-dewhitening mismatch).
Because of this, Measurement 1 below is suspect, Measurement 2 is definitely OK, Measurement 3 is probably OK too.
Analysis will come later.
Throughout the measurement, RF sidebands (9, 45 and 118) were OFF, H1:OMC-DCPD_A_GAINSET and B were set to low (a factor of 10 smaller than nominal), and IMC-PWR_IN was 10W.
Measurement 1. Scan OMC PZT and measure the MM loss.
Lock OMC, align OMC reasonably well, unlock, scan the PZT slowly. The best scan is between 19:26:25 and 19:34:44 UTC (t=[-31m,-23m] in the 1st attachment).
"Align reasonably well" was a challenge as we were using the OMC QPD for OMC ASC, changing alignment meant changing QPD offset, and doing so frequently railed OMC suspension. While changing the offsets, maximum DCPD_SUM I could reach was 16.9, but I kept bumping OMCS so I gave up (sensor correction was off for the first half of this effort and that didn't help). In the end, usable data was obtained with default offset that gave us ~16.6 when locked, but we know that that was NOT the best alignment.
As you can see, the peaks in DCPD_SUM during the scan are all about ~14, much smaller than 16-something. (At t=-36m, the OMC was held at resonance and DCPD_SUM was ~16.6. After the scan at t=-20m, DCPD_SUM was ~16.45. So the alignment drift wasn't much of a problem.)
It turns out we would have had to scan even slower than this (and this was already REALLY slow, ~8 minutes for one cycle) and/or lower the laser power.
Measurement 2. OMC throughput.
Lock the OMC to 00 resonance (19:37:27-19:38:28 UTC, roughly t=[-20m, -19m] on the 1st attachment).
Measure DCPD_SUM, input power (via ASC-OMC_A and ASC-OMC_B SUM) and reflected power (via OMC-REFL_A).
With the same alignment into OMC, find the time where the OMC was off-resonance (DCPD transmission was minimal 19:25:58-19:26:20 UTC), and measure DCPD_SUM, input power and reflected power.
Calculate the throughput.
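The throughput calculation in the last step is just a ratio of calibrated powers. A minimal sketch of the arithmetic (a generic helper, not the actual analysis script):

```python
def omc_throughput(p_trans_mw, p_in_mw, p_dark_mw=0.0):
    """Fraction of the power incident on the OMC that reaches the DCPDs,
    after subtracting any dark offset from the transmitted reading."""
    return (p_trans_mw - p_dark_mw) / p_in_mw
```

With the numbers from the analysis below (19.187 mW transmitted out of 23.207 mW incident on the breadboard), this gives ~82.7%.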
Measurement 3. OMC Finesse
I roughly kept the OMC at resonance by adjusting pzt voltage. DCPD_SUM was slowly drifting but it was about 16.0.
I started injecting into the PSL frequency with an amplitude of ~+-600kHz (1.2MHzpp) via IMC-L_EXC. I slowed down the injection frequency until the peak value got back to 16.0. (Second attachment.)
Best scan data is obtained 19:56:20-19:57:20 UTC.
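The finesse then follows from the cavity linewidth measured by the frequency sweep, finesse = FSR / FWHM. A sketch of the relation; the FSR and FWHM values in the comment are assumed illustrative numbers, not measured values from this alog:

```python
def cavity_finesse(fsr_hz, fwhm_hz):
    """Finesse from free spectral range and full-width-half-max linewidth."""
    return fsr_hz / fwhm_hz

# Example with assumed values: an FSR of ~264.8 MHz and a measured FWHM of
# ~0.66 MHz would give a finesse of ~401, in the ballpark of the analysis
# results below.
```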
OMC DCPD transient response
Let's see the 2nd attachment of the above alog (which is attached again).
At t=[-2m40s, -2m20s], nothing is railing at the ADC. You can tell because ADC saturation would mean OMC-DCPD_A_STAT_MAX and/or OMC-DCPD_A_STAT_MIN hitting +-512k (MAX only went to 120k and MIN only went to -80k in this case).
However, note that the DC values of MAX and MIN while the OMC was kept on resonance were about 48k and 47k, respectively. The fact that MIN went to negative 80k and MAX went to positive 120k means the transient response was huge.
Even though it was huge, if nothing was railing or saturating you'd still expect the peak height in DCPD_SUM to be the same as when the OMC was kept on resonance, but clearly that's not the case. At first, when the scan was fast, the peak value was maybe 60% of what it should have been, and as I slowed down the scan the peak gradually came back to ~99% or so.
In the case of the PZT scan, we're talking about a slow velocity where each 00 peak is ~0.6 s wide (and even that was too fast; we should have reduced the laser power. Anyway, see the 2nd attachment).
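The railing check described above is simple enough to write down. A sketch using the ±512k full-scale figure quoted above (the function itself is a placeholder, not a real tool):

```python
ADC_FULL_SCALE_COUNTS = 512 * 1024  # +-512k, i.e. 2**19 counts

def adc_railed(stat_max, stat_min, margin=1.0):
    """True if the per-interval MAX/MIN statistics touched the ADC rails."""
    limit = margin * ADC_FULL_SCALE_COUNTS
    return stat_max >= limit or stat_min <= -limit
```

With the values from this scan (MAX 120k, MIN -80k) this reports no railing, consistent with the conclusion above, which is exactly why the missing peak height has to come from somewhere else.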
Where does the mismatch come from?
A simple whitening-dewhitening mismatch? Is something railing inside the chamber, or maybe soft-saturating because of a large transient (e.g. a big coil)? I think we need help from Hartmut.
I'm using Keita's times from the OMC visibility measurement above to run the script described in 73873, and using the dark offset times from that alog as well. This time has more higher-order-mode content, and slightly higher overall efficiency, than the time in 73873, which is somewhat confusing.
Results:
Power on refl diode when cavity is off resonance: 22.764 mW
Incident power on OMC breadboard (before QPD pickoff): 23.207 mW
Power on refl diode on resonance: 2.081 mW
Measured efficiency (DCPD current / responsivity if QE=1) / incident power on OMC breadboard: 82.7 %
assumed QE: 100 %
power in transmission (for this QE) 19.187 mW
HOM content inferred: 8.979 %
Cavity transmission inferred: 91.712 %
predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 82.676 %
OMC efficiency for 00 mode (including pick-off BS, cavity transmission, and QE): 90.832 %
round trip loss: 685 (ppm)
Finesse: 392.814
assumed QE: 96.0 %
power in transmission (for this QE) 19.986 mW
HOM content inferred: 9.099 %
Cavity transmission inferred: 95.660 %
predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 82.676 %
OMC efficiency for 00 mode (including pick-off BS, cavity transmission, and QE): 90.952 %
round trip loss: 348 (ppm)
Finesse: 401.145
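As a sanity check on the HOM numbers, the mode-mismatch fraction can be estimated to zeroth order from the reflection visibility alone, ignoring the impedance-matching and QE corrections the script applies (which is why the script's 8.979-9.099 % differs slightly):

```python
def hom_fraction_estimate(p_refl_on_mw, p_refl_off_mw):
    """Zeroth-order HOM estimate: the light left on the refl diode on
    resonance is assumed to be non-00 content rejected by the cavity."""
    return p_refl_on_mw / p_refl_off_mw

# With the numbers above: 2.081 / 22.764 is roughly 9.1 %
```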
Tagging
- CAL (because of the suggestion that there's a mismatch between analog whitening and digital compensation for it [I doubt it]),
- CDS (because Ali James -- who was Hartmut's student that built the in-vac transimpedance amplifier -- has now been hired as our newest CDS analog electronics engineer),
- DetChar (because understanding this transient behavior may be a clue to some other DetChar / GW channel related transient features, since Keita refers to studying *the* DCPDs -- the OMC DCPDs).
The CHECK_CRYSTAL_FREQ state in the ALS_ARM (XARM) code, which checks the H1:ALS-{ARM}_FIBR_LOCK_BEAT_FREQUENCY channel, threw an index error that I thought I had handled, but apparently not. I added a try-except error handler to the loop where it checks the list of frequency spots. I also added a sleep timer, because it looks like the code had the beatnote within tolerance twice but then kept going and changing the frequency, which worsened the beatnote and brought it back out of tolerance; it just flashed through it. Hopefully the sleep timer will help with this; I'll think more about how to make the code better.
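The fix is roughly of this shape. This is an illustrative sketch only, not the actual ALS_ARM guardian code; apart from the channel quoted above, all names here are placeholders:

```python
import time

def check_crystal_freq(freq_spots, read_beat_mhz, tol_mhz=1.0, settle_s=5.0):
    """Walk the list of candidate frequency spots, guarding the indexed
    access and letting the beatnote settle before judging it."""
    for i in range(len(freq_spots)):
        try:
            spot = freq_spots[i]
        except IndexError:
            # the list changed under us; give up cleanly instead of
            # crashing the guardian state
            return False
        time.sleep(settle_s)  # let the beatnote settle before checking
        if abs(read_beat_mhz() - spot) < tol_mhz:
            return True  # within tolerance; stop changing the frequency
    return False
```

The sleep is the important part for the "flashed through it" failure mode: without it, the loop can step past a spot where the beatnote was momentarily within tolerance.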