We lost lock from the calibration, so we tried to lock ALS without the linearization (some background in this alog: 83278). An active measurement of the transfer function from DRIVEALIGN_L to MASTER out was 1 without the linearization and -0.757 with the linearization on. So I've changed the DRIVEALIGN gain to -1.3 in the ALS_DIFF guardian for when use_ESD_linearization is set to false.
We tried this once, and it stayed locked through the DARM gain step to 400 but unlocked as the UIM boosts were turning on. We tried it again but it also didn't lock DIFF, so it is now out of the guardian again.
I looked at a few more of the past ALS DIFF locks; in both successful and unsuccessful attempts we are saturating the ESD (either the DAC or the limiter in the linearization) in the first steps of locking DIFF. We do these steps quite slowly: stepping the DARM gain to 40, waiting for the DARM1 ramp time, stepping it to 400, waiting twice the ramp time, then engaging the boosts for offloading to L1. I reduced the ramp time from 5 seconds to 2 seconds to make this go faster. This worked on the first locking attempt, but that could be a coincidence.
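For reference, here is a minimal sketch of the gain-stepping sequence just described. The channel names and the ezca-style interface are hypothetical stand-ins, not the actual ALS_DIFF guardian code.

```python
# Hypothetical sketch of the DIFF gain-stepping sequence described above; the
# channel names and the ezca-style interface are stand-ins, not the real
# ALS_DIFF guardian code.
import time

RAMP_TIME = 2  # seconds; reduced from 5 s so we spend less time saturating the ESD


class FakeEzca(dict):
    """Stand-in for the EPICS (ezca) interface the guardian uses."""

    def __setitem__(self, channel, value):
        print(f"caput {channel} -> {value}")
        super().__setitem__(channel, value)


ezca = FakeEzca()


def engage_l1_boosts():
    # Placeholder for engaging the boosts that offload to L1; the real guardian
    # switches filter-module buttons rather than writing a single channel.
    print("engaging L1 offloading boosts")


def ramp_darm_gain():
    ezca['LSC-DARM1_TRAMP'] = RAMP_TIME  # hypothetical ramp-time channel
    ezca['LSC-DARM1_GAIN'] = 40          # first gain step
    time.sleep(RAMP_TIME)                # wait one ramp time
    ezca['LSC-DARM1_GAIN'] = 400         # second gain step
    time.sleep(2 * RAMP_TIME)            # wait twice the ramp time
    engage_l1_boosts()


if __name__ == '__main__':
    ramp_darm_gain()
```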
We will leave this in for a while so that we can compare how frequently we lose lock at LOCKING_ALS. In the last 7 days we've had 48 LOCKING_ALS locklosses and 19 locklosses from NLN, so roughly 2.5 ALS locklosses per lock stretch.
The calibration measurement caused another lock loss at the end of the measurement (alog83351)
Simulines start:
PDT: 2025-03-13 08:36:53.447733 PDT
UTC: 2025-03-13 15:36:53.447733 UTC
GPS: 1425915431.447733
End of script output:
2025-03-13 15:58:59,923 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 42.45, and amplitude 0.41412, is finished. GPS start and end time stamps: 1425916738, 1425916753
2025-03-13 15:58:59,923 | INFO | Scanning frequency 43.6 in Scan : L2_SUSETMX_iEXC2DARMTF on PID: 795290
2025-03-13 15:58:59,923 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 43.6, is now running for 23 seconds.
2025-03-13 15:59:01,039 | INFO | Drive, on DARM_OLGTF, at frequency: 1083.3, and amplitude 1e-09, is finished. GPS start and end time stamps: 1425916738, 1425916753
2025-03-13 15:59:01,039 | INFO | Scanning frequency 1200.0 in Scan : DARM_OLGTF on PID: 795280
2025-03-13 15:59:01,039 | INFO | Drive, on DARM_OLGTF, at frequency: 1200.0, is now running for 23 seconds.
2025-03-13 15:59:02,168 | INFO | Drive, on L1_SUSETMX_iEXC2DARMTF, at frequency: 11.13, and amplitude 13.856, is finished. GPS start and end time stamps: 1425916735, 1425916753
2025-03-13 15:59:02,168 | INFO | Scanning frequency 12.33 in Scan : L1_SUSETMX_iEXC2DARMTF on PID: 795287
2025-03-13 15:59:02,169 | INFO | Drive, on L1_SUSETMX_iEXC2DARMTF, at frequency: 12.33, is now running for 25 seconds.
2025-03-13 15:59:07,162 | ERROR | IFO not in Low Noise state, Sending Interrupts to excitations and main thread.
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L2_CAL_EXC
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L3_CAL_EXC
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:CAL-PCALY_SWEPT_SINE_EXC
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L1_CAL_EXC
2025-03-13 15:59:07,163 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:LSC-DARM1_EXC
ICE default IO error handler doing an exit(), pid = 795246, errno = 32
PDT: 2025-03-13 08:59:11.496308 PDT
UTC: 2025-03-13 15:59:11.496308 UTC
GPS: 1425916769.496308
Closes FAMIS 26034, last checked in alog83240
The script reports high freq noise is elevated for these BSCs.
Noise on all the HAMs looks to have reduced except for HAM8; in particular, a peak at just over 1 Hz in H3 and H1 looks reduced on almost all of the HAMs. HAM8's mid-frequency noise looks overall higher, and its high-frequency V3 peak (just over 100 Hz) looks elevated.
This happened at the end of the calibration measurement. This also happened last week (alog83210).
The last 3 calibration measurements have ended in locklosses: today's, Saturday's, and last Thursday's (these link to the lockloss tool webpages). The 3 locklosses look fairly similar, but looking at the ETMX signals, today's seemed like a slightly stronger lockloss compared to Saturday's and last Thursday's.
Last Thursday's lockloss: alog83239
Looking at the same channels that Vlad did in 82878 and Ryan's ETMX channels above, the growing 42 Hz wobble in DARM/ETMX seems to start as soon as the DARM1_EXC is ramped up to its full amplitude, plot attached. The 42 Hz is seen as an excitation in H1:SUS-ETMX_L2_CAL_EXC_OUT_DQ, plot attached. Maybe the ETM is not fully stable at that frequency?
Looking at a successful calibration sweep from Feb 22nd (82973), there was no instability when the 42 Hz excitation was turned on at ETMX L2, plot attached.
Overhaul of the failed roofing at the LSB finished on Tuesday. T. Guidry
We had a vacuum glitch in the LVEA at 01:57 Thu 13mar2025 PDT which was coincident with a lockloss. The glitch was below VACSTAT's alarm levels by an order of magnitude, so no VACSTAT alert was issued.
The glitch is seen in most LVEA gauges, and took about 10 minutes to pump down.
Attached plots show a sample of LVEA gauges, the VACSTAT channels for LY, and the ISC_LOCK lockloss.
Adding the lock loss tag and a link to the lock loss tool - 1425891454
There's no blatantly obvious cause. The wind was definitely picking up right before the lock loss, which can be seen in many of the ASC loops, but I'm not sure it was enough to cause the lock loss.
This may be a what-came-first scenario (the lockloss or the VAC spike), but looking at the past 3-hour trend of H1:CDS-VAC_STAT_LY_Y4_PT124B_VALUE (attached), it does not go above 3.48e-9. However, 3 seconds before the lockloss it jumps to 3.52e-9 (attached). This seems suspiciously like whatever caused the VAC spike came before the lockloss.
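For anyone wanting to reproduce this timing check, here is a quick sketch (assuming NDS2/frame access via gwpy from a LIGO workstation) of pulling the gauge channel around the lockloss time and finding when it peaks; the window and formatting are illustrative.

```python
# Sketch: when did the gauge channel peak relative to the lockloss?
# Assumes gwpy with NDS2/frame access; window and output format are illustrative.
from gwpy.timeseries import TimeSeries

LOCKLOSS_GPS = 1425891454  # from the lockloss tool link above
CHANNEL = 'H1:CDS-VAC_STAT_LY_Y4_PT124B_VALUE'

data = TimeSeries.get(CHANNEL, LOCKLOSS_GPS - 3 * 3600, LOCKLOSS_GPS + 60)
peak = data.value.argmax()
t_peak = data.times[peak].value
print(f"max value {data.value[peak]:.3g} at GPS {t_peak:.0f} "
      f"({t_peak - LOCKLOSS_GPS:+.0f} s relative to the lockloss)")
```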
There is a LL issue open to add a tag for this: #227, as it was previously seen in 82907.
The only plausible cause of a vacuum glitch without the laser hitting something first is an IP (ion pump) glitch. Gerardo is looking into this now. Otherwise, I would say the laser hit something, which didn't cause the lockloss right away (only 3 seconds later) but obviously caused the pressure spike. The rising wind is just too big of a coincidence to me.
Corner RGA caught small change in H2 and N2 at the time of the vacuum glitch. AMU 2 delta = 5.4e-10 amp, AMU 28 delta = 1.61e-10 amp
Attached is a screenshot of the RGA scan just before the vac glitch (1:57:02 AM 3/13/25); the second screenshot is ~10 seconds after. The top trace is the 0-100 AMU scan at that time, and the bottom trace is the trend of typical gas components (AMU 2, 18, 28, 32, 40, 44) over ~50 minutes. The vertical line on the bottom trace corresponds to the time the RGA scan was collected.
RGA is a Pfeiffer Prisma Plus, 0-100AMU, with 10ms dwell time, EM enabled with 1200V multiplier voltage. Scans run continuous 0-100AMU sweeps
The main ion pumps reacted to the "pressure spike" after it was noted by other instruments such as the vacuum gauges; see the first attached plot.
The second plot shows the different gauges located at the corner station; the "pressure spike" appears to be noted first by two gauges, PT120 (on the dome of BSC2) and PT152 (at the relay tube). The amplitude of the "pressure spike" was very small; its signature was only noted at the corner station, not at the Mids or Ends.
Two of the ion pumps at the filter cavity tube responded to the "pressure spike"; see the third attachment.
Also, the gauges located on the filter cavity tube noted the spike, including the "Relay Tube".
TITLE: 03/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 24mph Gusts, 18mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY: Locked for 1.5 hours. Planned calibration and commissioning today. Elevated winds already present, some gusts above 40mph. The PSL 101 dust monitor has been in alarm.
Our range has been diving down to 130 or so then recovering. It doesn't seem to be the PIs or the end station noise that we were looking at recently. Rumors in the control room are that it was the squeezer, stay tuned for another alog.
Sheila, TJ, Camilla. In 83330 we showed that the SQZ servo with edited ADF PLL phase 83270 was working well, but it's been running away this morning, plot attached. We're not sure yet what changed to make it become unstable but this is similar to why we turned it off in Feb.
Tagging DetChar as the times when the SQZ phase and IFO range jump a lot (2025/03/13 14:49 UTC and 15:10UTC) may have glitches.
As a potential optic for the new TT for the in-vac POPX WFS in HAM1, we tested the transmission of what is supposed to be the old ZM2 mirror from O3 days as a function of AOI.
The results summarized below don't make sense because it's least transmissive at 0 deg AOI and it's not HR at 45 deg (3.3% transmission). It's supposed to be E1700103 according to the label, but with a handwritten note saying "verify??? doesnt look the same" or something like that. Anyway, if it is indeed E1700103, it should be R>99.99% @ 1064nm for both P and S at 45 +- 5 deg AOI.
This mirror is probably usable for POPX in that the transmission is not disastrously big at the proposed AOI (~30deg or so?) but this is a mystery optic as far as I'm concerned. If the mirror in ZM1 is better, we'll use it.
AOI (deg) | power in transmission (mW) | power transmission (%) |
---|---|---|
0 | 0.05 | 0.03 |
10 | 0.08 | 0.04 |
15 | 0.09 | 0.05 |
20 | 0.11 | 0.061 |
25 | 0.16 | 0.089 |
30 | 0.28 | 0.16 |
35 | 0.44 | 0.24 |
40 | 1.23 | 0.68 |
45 | 5.94 | 3.3 |
Other things to note:
We mounted the mirror in a Class A Siskiyou mirror holder on a dirty rotational stage with a clean sheet of foil in between, so we can easily change the AOI in yaw.
The apparatus was NPRO - BS (to throw away some power) - HWP - PBS cube transmission (to select P-pol) - steering - ZM2 optic transmission - power meter.
The beam height between the steering mirror and the ZM2 optic was leveled, then ZM2 was adjusted so the beam retroreflects within +-2 mm over 19 inches; this became our AOI=0 reference point, with an uncertainty of ~(+-2 mm / 19 inches) / 2 rad ~ +-0.1 deg. We used the indicator on the rotational stage to determine the AOI, and the overall AOI error should have been smaller than 1 deg.
Before/after the measurement, the power impinging on the HR side of the mirror was 180.3 mW/176.1 mW. Since the transmitted power only had 2 digits of precision at best, I used 1.8e2 mW as the input power. The ambient light level was less than 10 uW and is ignored here.
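For completeness, the arithmetic behind the table and the AOI=0 uncertainty quoted above can be reproduced with a few lines (a sketch, using the 1.8e2 mW input power stated above):

```python
# Transmission percentages from the table above, using the ~1.8e2 mW input power.
import math

P_IN_MW = 1.8e2  # 180.3 mW before / 176.1 mW after the measurement

transmitted_mw = {  # AOI (deg): transmitted power (mW), from the table above
    0: 0.05, 10: 0.08, 15: 0.09, 20: 0.11, 25: 0.16,
    30: 0.28, 35: 0.44, 40: 1.23, 45: 5.94,
}

for aoi, p in transmitted_mw.items():
    print(f"AOI {aoi:2d} deg: T = {100 * p / P_IN_MW:.2g} %")

# AOI=0 reference uncertainty from the retroreflection check:
# +/-2 mm over 19 inches, halved because retroreflection doubles the angle.
aoi_err_rad = (2.0 / (19 * 25.4)) / 2
print(f"AOI=0 uncertainty ~ +/-{math.degrees(aoi_err_rad):.1f} deg")
```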
Another obvious candidate is ZM1 (again from O3 days). The mirror was not removed from TT after deinstallation though. We'll see if we can roughly determine the AOI while the mirror is still suspended, as it's a hassle to remove it from the suspension and there's a risk of chipping.
TITLE: 03/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Observing at 147Mpc and have been Locked for almost 16 hours. Nothing happened during my shift. This also means I didn't get to do any of the tests with the cameras or A2L changes.
LOG:
Observing my entire shift
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
20:06 | ISC/SUS | Keita | Opt Lab | Yes | WFS PM1 testing | 23:32 |
21:06 | ISC | Mayank | Opt Lab | Yes | ISS PD array round 2 | 00:50 |
Observing at 144Mpc and have been Locked for 14 hours. Our range has been moving around quite a bit unfortunately.
Closes FAMIS#28396, last checked (injections last made) 82537
The ITMX data initially wasn't analyzed due to some issue with grabbing the data yesterday, but I got it to work today by having it redownload the data, and this time there were no issues.
I'll be editing the process_single_charge_meas.py script so that if there is an error with the downloaded data, the script tries to redownload it instead of reading in previously downloaded corrupted/incomplete data, failing, and just giving up on that measurement.
Edits have been made to process_single_charge_meas.py so that in the future, if the timeseries data the script grabs and saves in rec_LHO_timeseries is somehow corrupted, running the script again will catch the read error and redownload the data. I'm not sure why this issue happened, since the first time I ran the script it had no problem grabbing the data for the other quads. Hopefully this edit will at least keep it from repeatedly failing on a bad file when the data itself is now available. This change has been committed to the svn as revision 31024.
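A minimal sketch of the fallback logic described above (the function and file handling below are illustrative, not the actual process_single_charge_meas.py code):

```python
# Illustrative read-from-cache-or-redownload pattern; names are hypothetical,
# not the actual process_single_charge_meas.py implementation.
import os
import pickle


def load_or_redownload(cache_path, download_func):
    """Return cached timeseries data, redownloading if the cache is missing or unreadable."""
    if os.path.exists(cache_path):
        try:
            with open(cache_path, 'rb') as f:
                return pickle.load(f)
        except Exception as err:
            print(f"Cached data in {cache_path} unreadable ({err}); redownloading.")
    data = download_func()  # e.g. an NDS2/gwpy fetch for the measurement times
    with open(cache_path, 'wb') as f:
        pickle.dump(data, f)
    return data
```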
WP9880
I removed the O3 ZM2 tip tilt from HAM5 and moved the SRM heater optics to be used in the upcoming temporary laser setup for OFI alignment.
Summary of activities:
Just for info: the optic used for ZM2 (HTTS, HAM5) is E1000425 - currently stored at LHO.
Since the time of this alog, around 19 UTC on March 13th, we've had 68 LOCKING_ALS locklosses and 12 NLN locklosses, so about 6 locklosses per successful lock. It seems, though, that the change to 2 seconds was never in place, and the guardian code still said 5 seconds. So this issue seems to be getting worse without any change.
Now I've loaded the change to 2 seconds, so this should be sped up after today's maintenance window.
I've looked at a bunch more of these locklosses, and they mostly happen while the DARM gain is ramping, less often as the boosts are coming on in L1, and one that I saw happened while COMM was locking.
In all the cases the linearization seems to hit its limiter before anything else goes wrong.