As part of an effort to streamline the FIND_IR step in the lock acquisition process, I put together a quick histogram of the DIFF offsets that have successfully locked DIFF IR since the start of O4b (April 10th). There are clearly two main regions where the offset needs to be in order to lock DIFF IR, so it may be beneficial to change how the IR search works: start by doing a narrow scan around these two regions, then expand the search from there if need be.
I may expand this to also check the values from O4a and the COMM IR offsets if we find that useful.
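To make the proposed search order concrete, here is a minimal sketch of a two-stage scan (not actual guardian code): the region centers, half-widths, step size, and the check_ir_lock() helper are hypothetical placeholders rather than values read off the histogram.

```python
import numpy as np

# Hypothetical offset regions suggested by the histogram (placeholder values).
REGIONS = [(-0.5, 0.1), (1.2, 0.1)]   # (center, half-width) in DIFF offset units
FULL_RANGE = (-3.0, 3.0)              # fallback range for the wide scan
STEP = 0.02

def check_ir_lock(offset):
    """Placeholder: apply the DIFF offset and report whether DIFF IR locks."""
    raise NotImplementedError

def find_ir_offset():
    # Stage 1: narrow scans around the two historically successful regions.
    for center, half_width in REGIONS:
        for offset in np.arange(center - half_width, center + half_width, STEP):
            if check_ir_lock(offset):
                return offset
    # Stage 2: fall back to a full-range scan if neither narrow scan succeeds.
    for offset in np.arange(*FULL_RANGE, STEP):
        if check_ir_lock(offset):
            return offset
    return None
```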
Sheila asked a good question the other day: did the SR2 alignment change between the beginning of O4b (when things were still good) and when we had the bad losses through the OFI (when things were bad, before the big shift)? The answer: no, I don't think SR2 moved very much (according to its top mass osems) when the losses through the OFI showed up. It did move about 10 urad in yaw (see table below), which I plan to look into further.
I looked at several times throughout the last few weeks when ALIGN_IFO guardian had just finished up state 58, SR2_ALIGN at 10 W, which it now does every time initial alignment is automatically run. These should all be single bounce off of ITMY, with the beam centered on AS_C by adjusting the SR2 sliders, for some given SR3 slider position (nothing automatic touches the SR3 sliders).
In the table, I summarize the SR3 and SR2 top mass osems. I've grouped the times into 3 categories of IFO situation, noted in the first column of the table.
Note that this table is not chronological, since I've grouped rows by IFO situation rather than time. The SR2 and SR3 osem values that are in bold are the ones to compare amongst each other. There does seem to be a 10 urad shift in SR2 yaw between the April 21st and April 23rd times. There are no other run-throughs of the SR2_ALIGN state of ALIGN_IFO between these times to check. This SR2 yaw shift (which is consistent even when we revert sliders to the 'pre shift' values and run SR2_ALIGN) is notable, but not nearly as large as what we ended up using for steering around the spot in the OFI.
IFO 'situation' | Date / time [UTC] | AS_C NSUM value | SR3 Pit [M1_DAMP_P_INMON] | SR3 Yaw [M1_DAMP_Y_INMON] | SR2 Pit [M1_DAMP_P_INMON] | SR2 Yaw [M1_DAMP_Y_INMON] |
---|---|---|---|---|---|---|
(1) before EQ, before loss, before alignment shift | 17 Apr 2024 00:19:00 | 0.0227 | -281.5 | -612.2 | 569.8 | 35.3 |
(1) after EQ, before loss, before alignment shift | 21 Apr 2024 20:08:30 | 0.0227 | -281.5 | -611.9 | 571.9 | 35.3 |
(2) after EQ, after loss, before alignment shift | 23 Apr 2024 23:10:00 | 0.0193 | -281.9 | -612.2 | 572.7 | 26.7 |
(2) after EQ, after loss, shift temporarily reverted to check | 7 May 2024 18:10:00 | 0.0187 | -282.3 | -616.0 | 558.5 | 23.2 |
(3) after EQ, after loss, after alignment shift | 25 Apr 2024 12:18:20 | 0.0226 | -291.7 | -411.1 | 599.6 | 1150.0 |
(3) after EQ, after loss, after alignment shift | 7 May 2024 19:11:15 | 0.0226 | -292.4 | -408.8 | 597.6 | 1149.9 |
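For reference, a minimal sketch of how OSEM values like these could be pulled with gwpy; the full channel names (H1:SUS-SR2_M1_DAMP_Y_INMON, etc.) and the 60 s averaging window are my assumptions, not something stated above.

```python
from gwpy.timeseries import TimeSeriesDict

# Assumed full channel names for the SR2/SR3 top mass OSEM damp inputs.
channels = [
    'H1:SUS-SR3_M1_DAMP_P_INMON', 'H1:SUS-SR3_M1_DAMP_Y_INMON',
    'H1:SUS-SR2_M1_DAMP_P_INMON', 'H1:SUS-SR2_M1_DAMP_Y_INMON',
]

# One of the table times (21 Apr 2024 20:08:30 UTC), averaged over 60 s.
data = TimeSeriesDict.get(channels, '2024-04-21 20:08:30', '2024-04-21 20:09:30')
for name, ts in data.items():
    print(f'{name}: {ts.mean().value:.1f}')
```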
After a quick re-look, that 10 urad move in SR2 yaw seems to have come during maintenance, sometime later than when the loss showed up.
In the attachment, the vertical t-cursors are at the times from the table in the parent comment on April 21st and 23rd. The top row is SR2 pitch and yaw, and the bottom row is SR3. The middle row shows our guardian state (i.e. when we were locked) and kappa_c, which is indicative of when we started to see loss. In particular, there are 3 locks right after the first t-cursor, and they all have quite similar OSEM values for SR2 yaw (the times between locks are also similar-ish). Those three locks are the last one with no loss, one with middling-bad loss, and one with the full loss. So, it wasn't until after we had our full amount of loss that SR2 moved in yaw. I haven't double-checked sliders yet, but this is probably a move that happened during maintenance day.
I'm using Jenne's times above to do a similar check, but looking at times when ALIGN_IFO was in state 65 (SRY align) because in that state the AS WFS centering servos are on. This state is run shortly after state 58, so I'll reuse Jenne's numbers to refer to times and the IFO situation.
This table indicates that the changes in AS power are consistent between AS_C and the AS WFS, so the beams transmitted and reflected by OM1 see similar losses. This makes it seem less likely that a bad spot on OM1 is the problem (and points to the OFI as the more likely culprit), although it's not impossible that a loss on OM1 affects transmission and reflection in the same way.
IFO situation | Date / time [UTC] | AS_C sum | AS_C normalized to first row | AS_A sum | AS_A normalized to first row | AS_B sum | AS_B normalized to first row |
---|---|---|---|---|---|---|---|
1 | April 17 00:20:15 UTC | 0.0626 | 1 | 5264 | 1 | 5104 | 1 |
1 | April 21 20:09:43 UTC | 0.0629 | 1.005 | 5283 | 1.004 | 5114 | 1.002 |
2 | April 23 23:34:15 UTC | 0.0534 | 0.853 | 4595 | 0.873 | 4359 | 0.854 |
2 | AS centering was not run this time | | | | | | |
3 | April 25 12:19:36 UTC | 0.0622 | 0.993 | 5209 | 0.989 | 5083 | 0.996 |
3 | May 7 19:12:30 UTC | 0.0624 | 0.997 | 5241 | 0.996 | 5118 | 1.003 |
DriptaB, RickS, FranciscoL
On Tuesday, May 7, we changed the following EPICS variables:
H1:CAL-PCALX_XY_COMPARE_CORR_FACT from 0.9979 to 0.9991
H1:CAL-PCALY_XY_COMPARE_CORR_FACT from 1.0013 to 1.0005
This corresponds to a change of 0.12% for X-End and -0.08% for Y-End in the calibration of the fiducial displacement factors.
We changed these factors because of an error in our previous calculation (see alog 77386), which led to a value that was 0.20% off from what we expected.
We will evaluate our changes once the interferometer acquires lock.
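As a sanity check on the quoted percentages, assuming they are defined as the fractional change (new - old) / old:

```python
# XY_COMPARE_CORR_FACT values from above: (station, old, new).
for station, old, new in [('X-End', 0.9979, 0.9991), ('Y-End', 1.0013, 1.0005)]:
    print(f'{station}: {100 * (new - old) / old:+.2f}%')
# X-End: +0.12%
# Y-End: -0.08%
```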
Functionality test was done on the corner station turbo pumps, see notes below:
Output mode cleaner tube turbo station:
Scroll pump hours: 5904.7
Turbo pump hours: 5966
Crash bearing life is at 100%
X beam manifold turbo station:
Scroll pump hours: 1948.0
Turbo pump hours: 1952
Crash bearing life is at 100%
Y beam manifold turbo station:
Scroll pump hours: 2280.8
Turbo pump hours: 953
Crash bearing life is at 100%
DriptaB, RickS, FranciscoL
On Tuesday, May 7, we turned off the pcal excitations for both end stations for a duration of 30 minutes. We want to record a timeseries of the laser power without any excitations.
GPS times:
Start - 1399131000
End - 1399132800
The channels that were *manually* turned off (and on) are:
H1:CAL-PCALX_SWEPT_SINE_ON
H1:CAL-PCALY_SWEPT_SINE_ON
H1:CAL-INJ_MASTER_SW
The guardian idle state turns off the following channels automatically in the 'PREP_FOR_LOCKING' state (these channels were not turned back on after the time window):
H1:CAL-PCALX_OSC_SUM_ON
H1:CAL-PCALY_OSC_SUM_ON
A screenshot with Rx and Tx timeseries is attached. Further analysis is pending.
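For the pending analysis, something like the following gwpy sketch could pull the quiet-window data; the Rx/Tx photodiode channel names here are my guess at the relevant DQ channels and are not confirmed in this entry.

```python
from gwpy.timeseries import TimeSeriesDict

# Assumed Pcal receiver/transmitter photodiode channels (names not confirmed).
channels = [
    'H1:CAL-PCALX_RX_PD_OUT_DQ', 'H1:CAL-PCALX_TX_PD_OUT_DQ',
    'H1:CAL-PCALY_RX_PD_OUT_DQ', 'H1:CAL-PCALY_TX_PD_OUT_DQ',
]

# Quiet window with the excitations off (GPS times from above).
data = TimeSeriesDict.get(channels, 1399131000, 1399132800)
plot = data.plot()
plot.savefig('pcal_quiet_window.png')
```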
Maintenance activities have finished for the day; we are starting to relock now.
I checked the Kepco power supplies located at the end and mid stations.
EY - Temps < 72F, Rack C1 17-19 (PCAL +/- 18V irregular vibration noted, low draw <1A per rail)
EX - Temps < 72F, no vibrations
MY - Temps < 72F, no vibrations
MX - Temps < 72F, no vibrations
Recommend replacing fans on EY PCAL +/- 18V supplies due to irregular vibrations.
J. Kissel, J. Freed
We took a walk-about into the LVEA this morning and confirmed that the systems drawings for the HAM3 and HAM4 chamber feedthrough (feedthrus, viewports) flange layouts -- D1002874 and D1002875, respectively -- are correct in that the top feedthrus are currently not used and have blanks installed. Pictures attached.
20240507_WHAM3_D8.jpg: WHAM3 D8 (what we plan to use for the HAM23 SPI Pathfinder [and CRS] optical fiber).
20240507_WHAM4_D8.jpg: WHAM4 D8 (if a future HAM54 SPI link comes to fruition).
Ryan S., Jason O.
Since we've been seeing slow alignment changes in the PMC over recent weeks (i.e. alog77646), Jason and I tried remotely adjusting alignment into the PMC this morning using the two picomotor-controlled mirrors with the ISS off. Unfortunately, we weren't able to get much improvement out of it, only increasing PMC transmitted power from about 108.0W to 108.1W (both readings taken with ISS on before and after our adjustments). This reinforces the hypothesis that there's some alignment drift or mode-matching change happening further upstream from the PMC, such as in the amplifiers, which would need a more in-depth procedure on-table.
While here, we also took the opportunity to improve the FSS RefCav alignment, as the FSS TPD signal has been lower than we'd like. Using the two picomotor-controlled mirrors in the FSS path, we saw a much more reassuring improvement here, increasing the RefCav transmitted signal from about 815mV to 880mV. We realized after the fact that the signal here was "breathing" more than it otherwise would have been, since we left the IMC locked during our adjustments.
Snapshot of the PSL quad camera images taken after our adjustments included for posterity.
Jonathan, Erik, Dave:
h1daqnds1, which is the default NDS for workstations and guardian, started reporting a full /run file system at 00:55 PDT Tue 07may2024. This caused daqd and nds processes to stop serving data.
Jonathan restarted the rts-nds service at 08:13 PDT which cleared out the /run/nds/jobs directory, freeing up the disk space.
We noted that nds0 had 1.2GB of nds jobs files, 5% of the 26GB. Jonathan pruned this down to 1% usage.
I have added a check to my hourly cds_report to warn if the nds /run file systems exceed 50% usage.
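The check is essentially a filesystem-usage threshold test; here is a minimal sketch of the idea (the /run mount point and the 50% threshold come from above, the rest is illustrative and not the actual cds_report code):

```python
import shutil

def check_run_usage(path='/run', threshold=0.5):
    """Warn if the given filesystem exceeds the usage threshold."""
    usage = shutil.disk_usage(path)
    fraction = usage.used / usage.total
    if fraction > threshold:
        print(f'WARNING: {path} is {fraction:.0%} full '
              f'({usage.used / 1e9:.1f} GB of {usage.total / 1e9:.1f} GB)')
    return fraction

check_run_usage()
```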
Opened FRS31124
FAMIS 19973
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Tue May 07 10:11:46 2024 INFO: Fill completed in 11min 42secs
Gerardo confirmed a good fill curbside.
Tom Dent, Derek Davis
We noted that at the start of the observing segment beginning May 07 2024 02:14:08 UTC (GPS 1399083266), there was broadband excess noise for the first few minutes of the segment. This appears to be due to a line subtraction malfunction that injected broadband excess noise. We don't know the cause of the line subtraction malfunction.
Spectrograms and spectra of the H1:GDS-CALIB_STRAIN and H1:GDS-CALIB_STRAIN_NOLINES channels show that the excess noise was introduced during the line subtraction, as the broadband noise is not present in the data before line subtraction.
It has been previously noted that it is common for the line subtraction to "turn on" up to a few minutes into an observing segment (see alog 72239), but this is the first time we have noted this type of broadband excess noise being injected.
Due to the severity of the excess noise, the impacted time period (1399083266 - 1399083406) will need to be CAT1 vetoed. This includes a few seconds where there was no excess broadband noise but calibration lines were not subtracted.
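For anyone reproducing the comparison, a minimal gwpy sketch over the impacted period (the 8 s FFT length and the plotting details are my choices, not taken from the analysis above):

```python
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

# Impacted period from above; compare strain with and without line subtraction.
channels = ['H1:GDS-CALIB_STRAIN', 'H1:GDS-CALIB_STRAIN_NOLINES']
data = TimeSeriesDict.get(channels, 1399083266, 1399083406)

# The broadband excess should only appear in the line-subtracted channel.
asds = [data[c].asd(fftlength=8, overlap=4) for c in channels]
plot = Plot(*asds, xscale='log', yscale='log')
plot.gca().legend(channels)
plot.savefig('line_subtraction_comparison.png')
```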
TITLE: 05/07 Day Shift: 14:30-23:30 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 9mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY: A few issues on arrival. OMC_LOCK isn't able to find a carrier; it actually can't find any peaks at all. The SEI_ENV guardian node is in error, which Ibrahim noticed as well; it looks like it can't grab data for some reason. Now that I type this out, I'm thinking there's an issue with Guardian nodes getting data, which would explain why OMC_LOCK sees 0 peaks in its scan: the peak list is empty if the data list is empty.
Maintenance can start early, I'll bring the IFO down as soon as I finish some quick investigations.
Workstations updated and rebooted. This was an OS package update. Conda packages were not updated.
TITLE: 05/07 Eve Shift: 23:00-08:00 UTC (16:00-01:00 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 02:14 UTC (5 hr 58 min lock)
When the wind died down, I was able to successfully relock with an initial alignment. SDF Diff Screenshots attached.
Other:
The Guardian SEI_ENV node error keeps happening (3 times now after hitting load). It seems the issue happens when CONF runs through an 1800 s (30 min) checking loop. Jim is on leave, so I didn't contact him. There was an E_X saturation a few minutes before it happened for the first time, though that's a tenuous link and I don't think it is connected. Screenshot 3 shows the error log.
Had some trouble restarting Nuc5 (I was going into it to get channel names). The startup code looks like it's running but is taking a while to display anything (still the case after more than 5 mins as the shift is ending).
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
01:34 | VAC | Janos | CP3, MX | N | Pump transport | 02:05 |
The correct value for ETMY_L3_LOCK_BIAS is -4.9 (alog 77656). ISC_LOCK has now been loaded, so we'll need to accept the SDF diff on relock today.
Anamaria, Sheila, Robert
We scanned the biases for three test masses to find the coupling minimum for currents that we injected onto the building ground. The optimal ETMY bias was 115 V in January of 2023, 170 V in August (72308), and 200 V now. With ITMX at 0 V, the optimal for ITMY was 60 V in March of 2023 (68053) and -222 V now. The figure shows some of the bias scans.
On the 2nd, Robert, Anamaria, and I found settings that would set the biases to these values, but the EY value hasn't been correctly applied in the automated relocks since (77647).
For ETMY, L3_LOCK_BIAS_OFFSET should be -4.9 while L3_LOCK_BIAS_GAIN is -1 to produce a voltage readback of 200 V. I've reset this in the guardian; I may have made a typo when I set it last week.
This has now been loaded, so it should be at the correct value from 16:00 UTC 07 May 2024 (77674).
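For reference, a minimal sketch of the kind of setting involved, assuming guardian's ezca interface and these full channel names (neither is confirmed in the entries above):

```python
from ezca import Ezca  # in a guardian state, an ezca instance is already provided

# Assumed channel names; only the OFFSET/GAIN values come from the entry above.
ezca = Ezca(ifo='H1')
ezca['SUS-ETMY_L3_LOCK_BIAS_OFFSET'] = -4.9
ezca['SUS-ETMY_L3_LOCK_BIAS_GAIN'] = -1.0
```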