J. Driggers, S. Dwyer, J. Kissel, T. Shaffer

TJ has been having issues locking ALS all day, and has traced the problem down to the ALS X Fiber PLL. He can lower the threshold on ALS-X_FIBR_A_DEMOD_RFMON and squeak by enough to move on with the lock acquisition, but the problem has been consistent all day and should be addressed by other means.

Sheila suspects these are similar to issues we've had in the past with the ALS laser mode hopping (most recently in May 2023, LHO:69773 and LHO:69712, but in years past as well). However, today's issue is a marked step-function drop in DEMOD power, rather than a slow gradual decay from +10 dBm-ish to below -10 dBm. In the past we've gotten around the slow gradual decay by lowering the beat note threshold from its nominal -10 dBm to an unphysically low -30 dBm, leaving it there for a few days to get by, and once the beat note slowly came back up, re-raising the threshold to -10 dBm and moving on. Jenne has, for now, dropped the threshold (H1:ALS-X_FIBR_LOCK_BEAT_RFMIN) to -20 dBm.

Earlier this morning, TJ and Jason tried increasing the power into the fiber coming from the PSL (see LHO:70944), but were only able to gain a little more power, and it didn't help the issue. This further points to a problem with the ALS laser itself, rather than with what's coming in from the corner via the fiber. We have trended ALS-X_FIBR_A_DEMOD_RFMON against XVEA temperature metrics and we *do not* see any correlation. We tried flipping the laser's noise eater from ON to OFF and back ON again, and it had no effect.

We can again squeak by, quickly lowering the threshold and making it past this portion of the greater IFO lock acquisition sequence, but this is a bandaid for the real problem, which should be addressed ASAP. Sheila suggests that going to the X end station and adjusting the laser current and temperature *might* work. Meanwhile, the investigation continues.

Attached are the last 24 hours, the last 24 days, and the last 24 months of behavior from this beat note channel compared against the laser crystal frequency (slow laser control via laser temperature), the laser head PZT frequency (fast laser control via PZT), and the ALS PLL Common Mode Board control signal prior to splitting between fast and slow control.
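For reference, a minimal sketch (assuming pyepics access from a control-room workstation; not the exact commands Jenne ran) of how the beat note threshold can be temporarily dropped and later restored:

```python
# Sketch only: temporarily lower the ALS X beat note threshold to get past
# lock acquisition, then restore it once the beat note strength recovers.
from epics import caget, caput

CHAN = 'H1:ALS-X_FIBR_LOCK_BEAT_RFMIN'

print('current threshold [dBm]:', caget(CHAN))
caput(CHAN, -20)   # temporary, lowered threshold
# ... once the beat note comes back above the nominal level ...
caput(CHAN, -10)   # nominal threshold
```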
Opened FRS Ticket 28430 to track the issue.
One clarification, re: "...TJ and Jason tried increasing the power into the fiber coming from the PSL (see LHO:70944)..." The ALS lasers no longer get their PSL light for the PLL from the RefCav transmission, so improving the RefCav TPD will have no effect on the amount of power in the ALS fibers. The ALS fibers are now fed by a pickoff in the ALS path on the IOO side of the PSL table, after the PMC. From the as-built PSL layout, the fiber pickoff is ALS-PBS01, which directs the picked off beam into ALS-FC2; the IOO ALS path itself is from the transmission of IO_MB_M2, which is directly in front of the ISC EOM (IO_MB_EOM). This was installed in Oct 2019 in the break between O3a and O3b.
The beatnote strength stayed low overnight rather than coming back up. It seems like going to the end station to adjust the laser current and temperature before the long weekend is a good idea.
A note for the alog above: we looked at the fiber transmission and the fiber rejected-polarization PDs, and they do not show any problems. When the beatnote strength drops, there is also a jump in the crystal frequency needed to lock the PLL, which makes this look like mode hopping of the ALS laser.
In 70768 Peter noted noise bumps separated by 13 Hz (26, 39, 52 Hz). Fig 1 is a reproduced version of Peter's plot. Elenna followed up with plots of suspensions with modes around 13 Hz. This is a follow-up on both of those things, but with no cause identified.
The bumps are visible in the 24-hour spectrograms on the summary pages; the one that stands out most clearly is the 52 Hz bump (52 Hz = 4 × 13 Hz). Fig 2 shows a 51-53 Hz BLRMS of STRAIN overlaid with one of the 24-hour spectrograms, just showing that the BLRMS and the spectrogram roughly agree (ignore the blue trace).
The 52 Hz bump changes width over time, which complicates searching for correlations with the BLRMS. Fig 3 shows that times of high BLRMS correspond to a large, broad bump, while times of low BLRMS can correspond to either a low bump or a tall, narrow one.
Using gwdetchar-lasso-correlation we can search all LHO channels for correlations with this 51-53 Hz BLRMS. A summary of the 52 Hz bump lasso pages is below. ITMY/ITMX/BS SUS and ISI channels seem somehow related, as they show up multiple times (but not every time). On any given page there are other channels (random? e.g. VID CAM26 WY or humidity) which look plausibly related but don't show up on any other day. Maybe clues will jump out at you, but I only see some weak evidence. I think the correlation might be fooled by the bump height/width degeneracy in the BLRMS mentioned above.
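For reference, a minimal sketch (using gwpy; the channel name and GPS span are placeholders, not the exact script used) of how a 51-53 Hz BLRMS like the one fed to lasso can be built:

```python
# Sketch: build a 51-53 Hz band-limited RMS (BLRMS) time series from strain,
# to use as the primary channel in a lasso correlation search.
from gwpy.timeseries import TimeSeries

# placeholder channel and GPS span -- substitute the strain channel and day of interest
start, end = 1371945618, 1372032018
strain = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)

# band-pass to 51-53 Hz and take the RMS over 60 s strides to form the BLRMS
blrms = strain.bandpass(51, 53).rms(60)
blrms.plot()
```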
Figures 4-8 show some of the more convincingly correlated channels from the various days.
Finally, given the apparent 13 Hz base frequency, I had a look at various suspensions (guided by some links from Arnaud). There is strong 13 Hz motion in ITMX, ITMY, and BS HPI (Arnaud mentioned that the arm cavity baffles hang directly from HEPI). But the BLRMS of the 13 Hz band seems pretty stable and doesn't match the BLRMS of the 52 Hz bump (Fig 9).
So the resonances are likely more or less stable, and if they are the driver of the n*13 Hz bumps, the coupling must be modulated by something, such as alignment.
PI damping guardian has been edited to continue onto PI DAMPING after the first minute of OMC_WHITENING. Edits committed to SVN; diff and commit screenshot attached.
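For context (this is only an illustration, not the committed diff; the state and timer names are hypothetical), a timer-gated hand-off like this in a Guardian node typically looks something like:

```python
# Hypothetical Guardian-style sketch: hold for 60 s, then move on to PI_DAMPING.
from guardian import GuardState

class WAIT_BEFORE_PI_DAMPING(GuardState):
    def main(self):
        # start a one-minute timer when we enter this state
        self.timer['wait'] = 60

    def run(self):
        if self.timer['wait']:      # timer has expired
            return 'PI_DAMPING'
        return False                # not done yet; stay in this state
```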
Due to the DCPD glitches from OMC_WHITENING, I had previously not engaged PI_DAMPING (which turns on damping gains, coil drivers, etc.) during this lock stage. That was an oversight on my part: I didn't think about the case where we stay in that stage to damp violins for >1 hour. Violin damping looked really successful too, until PI24 ran away (nuc31 DCPDs screenshot). I guess this shows that we do indeed need active PI damping to avoid locklosses due to the 10.4 kHz PIs overlapping with the 2x HOMs.
NUC25 has been updated with a new PI-damping ndscope, which shows the PI damping's drive to the ESDs. Screenshot is annotated.
In the scope, the first 2 plots show the ETMY (10.4 kHz, PI 24+31) and ETMX (80.3 kHz, PI 28+29) PI mode monitors. The 3rd plot shows the ESD drives: for ETMY it should be ~1,000, and for ETMX ~50,000. If the ESD drive is 0, there is no PI damping.
To damp manually, one can do the following (e.g. for PI 24):
If the PI Guardian is in "PI_DAMPING" but you want to manually step the phase, take it to "IDLE" first.
This is what the PI_DAMPING guardian does (i.e., turns on the ESD switch, turns on the damping gain, resets the PLL integrator, then steps the phase around until the RMS decreases); a rough sketch follows.
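A loose sketch of that sequence in Guardian-style Python (the channel names, gain, and wait times are hypothetical placeholders, not the real PI 24 settings):

```python
# Hypothetical sketch of the damping sequence described above; channel names
# and values are placeholders, not the actual H1 PI channels.
import time

def damp_pi_mode(ezca, prefix='SUS-ETMY_PI24'):
    ezca[prefix + '_ESD_SWITCH'] = 1      # turn on the ESD output switch
    ezca[prefix + '_DAMP_GAIN'] = 1000    # turn on the damping gain
    ezca[prefix + '_PLL_RSET'] = 1        # reset the PLL integrator
    # step the damping phase and keep the setting that minimizes the mode RMS
    best_phase, best_rms = 0, float('inf')
    for phase in range(0, 360, 30):
        ezca[prefix + '_DAMP_PHASE'] = phase
        time.sleep(10)                    # give the mode time to respond
        rms = ezca[prefix + '_RMSMON']
        if rms < best_rms:
            best_phase, best_rms = phase, rms
    ezca[prefix + '_DAMP_PHASE'] = best_phase
```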
Looks like this guardian edit is working, and we can now do PI_DAMPING in OMC_WHITENING; screenshot attached.
Thu Jun 29 10:10:02 2023 INFO: Fill completed in 10min 1secs
Jordan confirmed a good fill curbside.
We've made it back to low noise, but we are waiting to damp the violins further before changing our OMC whitening. Not sure why the violin modes were rung up so badly by this lockloss.
Nominal settings for the violin modes seem to be working, although the rung-up modes are coming down very slowly.
Lost lock to PI 24; damping wasn't on while we were in OMC_WHITENING waiting for the violins to damp.
TJ noticed the RefCav TPD was down around 0.83V this morning. It's been hanging out around 0.9V, so it dropped over the last couple of days. During relocking there were some issues at ALS, so to be on the safe side TJ asked me to tweak the RefCav beam alignment. I was only able to get the TPD up to ~0.85V, so not much of an improvement (a whole 0.02V...). This is an indication that we'll have to go into the enclosure to manually tweak the FSS path beam alignment on the PSL table, likely during the next maintenance window (July 11) or sooner if it becomes a problem (i.e. TPD continues to drop). Will monitor.
Lockloss (tool link pending)
Ending our 42 hour long lock. No obvious cause.
TITLE: 06/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: Locked for almost 42 hours (O4 record). Plans for some commissioning time later today.
TITLE: 06/29 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
SHIFT SUMMARY: IFO has been locked for 42 hours. LHO's longest O4 lock so far.
LOG:
None of my owl shifts this week have required me to do anything, apart from the Tuesday maintenance measurements starting at 7am.
There was again coherence between H1 range and wind speeds last night, see attached and yesterday's 70909.
| Start Time | System | Name | Location | Laser Haz | Task | End Time |
|---|---|---|---|---|---|---|
| 13:18 | FAC | Karen | CR | N | Vacuuming in Control Room, tagging DetChar. Happens every weekday morning before 8am | 13:23 |
| 14:31 | S&K | Ken | VPW | N | Ken driving to VPW | 14:32 |
| 14:41 | FAC | Karen | Optics Lab | N | Technical Cleaning | Still out |
Here's an updated BruCo scan after yesterday's commissioning activities
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_GDS_1372064418/
Rachel McQueen, Camilla
Nominally, each ring heater lower segment has an RTD temperature sensor on it. These are the closest temperature sensors to the quad; in alog 46791 Aidan shows them witnessing LVEA temperature changes. We're giving this status update as Rachel is looking into these sensors as part of her summer SURF project.
The current status of the sensors at LHO is below: only ETMX has a readback, but this sensor is not attached correctly to the assembly.
STATE of H1: Observing at 146Mpc. Locked for 38 hours.
TITLE: 06/29 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 10mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
23:12 Super event candidate S230628ax
00:19 UTC H1 dropped out of OBSERVING and into COMMISSIONING due to the CAMERA_SERVO Guardian detecting that ASC-CAM_PIT2_INMON & ASC-CAM_YAW2_INMON were "stuck" for 5 seconds. The Guardian node took itself to TURN_CAMERA_SERVO_OFF, was in that state for less than a minute, then took itself back to CAMERA_SERVO_ON. Please see alog.
00:34 UTC We went back to observing.
05:52 UTC H0:VAC-EX_INSTAIR_PT599_PRESS_PSIG alarm sounding off. Tagging the Vacuum (VE) group.
Closes FAMIS 25072. Last done in alog 70652.
All V_eff values are remaining flat or trending towards zero. A comparison with the Oplev charge measurements is also attached, although we still need to understand the relation between the two.
TITLE: 06/29 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 11mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: IFO locked for the past 34 hours, longest O4 H1 lock so far.
00:19 UTC H1 dropped out of OBSERVING and into COMMISSIONING due to the CAMERA_SERVO Guardian detecting that ASC-CAM_PIT2_INMON & ASC-CAM_YAW2_INMON were "stuck" for 5 seconds. The Guardian node took itself to TURN_CAMERA_SERVO_OFF, was in that state for less than a minute, then took itself back to CAMERA_SERVO_ON. This was also seen in the Guardian log. Please see the attached ndscope screenshot.
00:34 UTC We went back to observing.
Looking back at yesterday's alog and some ndscopes, this appears to be the same issue we saw today; it was a mere coincidence that Elenna happened to hit the load button at the same time as the guardian node transitioned down to get the camera unstuck.
For such short camera freezes, we implemented the WAIT_FOR_CAMERA state in the camera guardian (alog 68756). When a camera freeze happens, the camera guardian turns off the camera servo and waits for the camera for 30 s without moving to ADS. If the cameras are OK after 30 s, the guardian turns the camera servo back on. So the camera guardian itself is working as expected.
In the camera guardian, is_chan_static in static_tester.py checks whether the cameras are stuck. If is_chan_static could be made to ignore such short camera freezes, that would be one possible solution; a sketch of the idea is below.
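A minimal sketch of the idea (this is not the actual static_tester.py implementation; the tolerance value and the read_value callable are assumptions):

```python
# Hypothetical is_chan_static-style check: only declare a camera channel "stuck"
# if its readback stays constant for longer than a tolerance window, so that
# short freezes (e.g. ~5 s) are ignored.
import time

def is_chan_static(read_value, channel, tolerance=10.0, poll=0.5):
    """Return True if `channel` holds a constant value for `tolerance` seconds.

    `read_value` is any callable returning the channel's current value
    (e.g. a wrapper around an EPICS caget).
    """
    first = read_value(channel)
    start = time.monotonic()
    while time.monotonic() - start < tolerance:
        time.sleep(poll)
        if read_value(channel) != first:
            return False       # the value moved; not stuck
    return True                # unchanged for the full window
```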
STATE of H1: Observing at 144Mpc. Been at NLN for 14 hours.
Could not see a difference in DARM between the windy and non-windy times above; see attached.