STATE of H1: Aligning
Lockloss @ 18:25 after a 9.5 hour lock stretch, cause still under investigation (but ETMX started oscillating suddenly about 1 second before lockloss). ITM camera error signals are far off and beatnotes are poor, so I've started an initial alignment.
Sun May 28 10:07:54 2023 INFO: Fill completed in 7min 53secs
TITLE: 05/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY: Taking over from Austin; H1 has been Observing for 6.5 hours.
TITLE: 05/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 131 Mpc
SHIFT SUMMARY:
- Arrived to a locking IFO; again there were no flashes on DRMI and the AS air camera looked shoddy at best, so I will be doing an IA
- Post IA, I had to adjust SRM by a few microradians in YAW, similar to what Ryan did the previous morning to improve buildups when engaging DRMI ASC
- Lock #1:
- Acquired NLN @ 8:44 UTC, in OBSERVING @ 8:54
- Odd note: the violin guardian was NOT damping until a few minutes into NLN, when it should nominally turn on in DAMP_VIOLINS_FULL_POWER
- As Tony stated in his alog, the nominal settings for ITMX mode 12 are ringing up the mode, but changing the gain to -1 seems to be working (EDIT: a gain of -2 seems to work better)
- EX saturation @ 12:02
Passing the IFO to Ryan locked (~6 hours) and in Observing.
LOG:
No log for this shift.
TITLE: 05/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 0Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Lockloss from CHECK_MICH_FRINGES https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1369267224
Lockloss from TRANSITION_FROM_ETMX https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1369271157
2:15 UTC made it back to NOMINAL_LOW_NOISE even though the wind is gusting up to 40 mph out at EX.
2:27 UTC made it back to observing!!
Violins:
ITMX mode 12 was increasing, so I turned the damping off.
I tried turning the gain back on to the nominal +2 after a while, and it still shot right up as soon as the gain was applied.
Now trying a gain of -1 on ITMX mode 12; it seems to work well.
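For reference, a minimal sketch (not the actual procedure used) of how this kind of damping-gain change can be made with ezca; the SUS-ITMX_L2_DAMP_MODE12 bank name is my assumption of the usual violin-mode naming pattern and should be checked against the MEDM screen:

# Sketch only: flip the sign of a violin-mode damping gain with ezca.
# The bank name below is an assumed example, not verified from this log.
from ezca import Ezca

ezca = Ezca()                                # picks up the IFO prefix (H1:)
MODE = 'SUS-ITMX_L2_DAMP_MODE12'             # hypothetical bank for ITMX mode 12

ezca[MODE + '_GAIN'] = 0                     # damping off while the mode is watched
# ... wait and watch the mode monitor before applying the new sign ...
ezca[MODE + '_GAIN'] = -1                    # -1 (later -2) damped instead of ringing up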
Lockloss right as Austin sat down to take over the IFO at 7:00 UTC: https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1369292468
Unknown cause
TITLE: 05/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 0Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
- Arrived when the IFO just lost lock :(
- Going to start relocking ASAP
- CDS/SEI ok
TITLE: 05/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 28mph Gusts, 19mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Lockloss as soon as I arrived in the control room, possibly due to the wind.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1369263776
In the process of relocking now.
The EY EBay has once again gotten too hot; it got over 40 C in there today.
The FMCS computer does not show an unreasonable temperature in the Y-end EBay, though.
TITLE: 05/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 130Mpc
SHIFT SUMMARY:
Locking:
H1 is currently running initial alignment following a ~2 hour lock stretch which ended in a lockloss from what looked like an ASC ringup (CSOFT_P is the one I saw most obviously, but I'm not sure which rung up first exactly).
Temperatures are closer now to what they were 3-4 days ago, but still slightly different. Temps are also on the rise, possibly due to the building heating up this afternoon.
Quiet day otherwise on site today; see my earlier alog about LVEA temperatures over recent days. Handing off to Tony for the evening.
Sat May 27 10:06:51 2023 INFO: Fill completed in 6min 51secs
I lowered the FC-ASC trigger threshold's ON value from 0.8 to 0.7, since it hadn't been triggering on during Friday's locks (see trends). I noticed because the green beam spot on the FC green trans camera was clearly off from normal. This FC trans trigger was first set up in LHO:68587, which says the trigger_on threshold was 0.6 -- but it was 0.6 for less than 24 hours, then changed to 0.8, where it's been since. I have SDF'd the trigger_on threshold as 0.7.
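As an illustration only, the change above amounts to a single threshold write plus an SDF accept; the channel name here is an assumption, not the verified FC ASC trigger channel:

# Sketch only: lower the FC ASC trigger-on threshold from 0.8 to 0.7.
# Channel name is hypothetical; the real one lives on the FC ASC trigger screen.
from ezca import Ezca

ezca = Ezca()
ezca['SQZ-FC_ASC_TRIGGER_THRESH_ON'] = 0.7   # was 0.8; accept the diff in SDF afterwards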
Also, the ISS tripped off again, so I reset it. From the SQZ_OPO_LR guardian, I requested "LOCKED_CLF_DUAL_NO_ISS" (which turns off the ISS and resets the lockloss counter), then requested "LOCKED_CLF_DUAL" so the guardian turns the ISS on again. Sheila did this a couple of days ago (69890), but I didn't change the ISS setpoint this time. I think this is related to SHG output power drifts; we could check the on-table pump AOM alignment. We shouldn't be driving the AOM this marginally given the SHG output power and ISS setpoint: we used to drive ~4-6 V to maintain this trans power at even lower SHG output powers, and now we're often driving around 2-4 V, so the ISS is often in a marginal state compared to before.
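A hedged sketch of this ISS reset done by writing to the SQZ_OPO_LR guardian request channel (the GRD-... channel names are assumed from the standard guardian naming convention, and the waits are illustrative):

# Sketch only: cycle SQZ_OPO_LR through LOCKED_CLF_DUAL_NO_ISS (ISS off,
# lockloss counter reset) and back to LOCKED_CLF_DUAL (ISS re-engaged).
# Guardian channel names are assumed, not copied from the request screen.
import time
from ezca import Ezca

ezca = Ezca()

ezca['GRD-SQZ_OPO_LR_REQUEST'] = 'LOCKED_CLF_DUAL_NO_ISS'
while ezca['GRD-SQZ_OPO_LR_STATE'] != 'LOCKED_CLF_DUAL_NO_ISS':
    time.sleep(1)                             # wait for the node to reach the state
ezca['GRD-SQZ_OPO_LR_REQUEST'] = 'LOCKED_CLF_DUAL'   # guardian turns the ISS back on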
At the start of the 05:07 UTC lock, Vicky realized the FC beam spot was not in the correct location on the camera, so she lowered the FC_ASC threshold further from 0.7 to 0.5. Accepted in SDF.
We decreased the OPO TEC temperature while maximizing CLF_REFL_RF6, see attached. We expect this will need changing more with the LVEA temperature fluctuations. Vicky noticed this needed to be done because WFS_A_Q wasn't low enough and RFL_QPD_A_SUM was lower than normal. See the attached SQZ scopes at the time of the FC ASC turn-on and the OPO temperature improvement.
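A minimal sketch of the kind of temperature scan this involves: step the OPO TEC setpoint, let it settle, and keep the value that maximizes CLF_REFL_RF6. Both channel names and the scan range/settle time are assumptions for illustration:

# Sketch only: scan the OPO crystal temperature for maximum CLF_REFL_RF6.
# TEC_SET and RF6_MON are placeholder channel names, not verified.
import time
import numpy as np
from ezca import Ezca

ezca = Ezca()
TEC_SET = 'SQZ-OPO_TEC_SETTEMP'              # hypothetical TEC setpoint channel
RF6_MON = 'SQZ-CLF_REFL_RF6_ABS_OUTPUT'      # hypothetical RF6 magnitude channel

t0 = ezca[TEC_SET]
readings = []
for temp in np.arange(t0 - 0.05, t0 + 0.05, 0.005):   # small scan around current setpoint
    ezca[TEC_SET] = float(temp)
    time.sleep(30)                            # let the crystal temperature settle
    readings.append((temp, ezca[RF6_MON]))
best_temp, _ = max(readings, key=lambda r: r[1])
ezca[TEC_SET] = float(best_temp)              # sit at the temperature with the largest RF6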
TITLE: 05/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY: Taking over from Austin. H1 is currently finishing initial alignment and I'll start locking now.
TITLE: 05/27 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Relocking
SHIFT SUMMARY:
- Arrived with the IFO locked and in OBSERVING
- EX saturation @ 9:59/12:41 UTC
- LVEA temps have been fluctuating - looks to be dropping now - screenshot attached
- LOCKLOSS @ 13:55 - BS saturation
- Lock #1:
LOG:
No log for this shift.
The IFO is locked (just made 4 hours) and OBSERVING, acquired @ 6:57 UTC.
TITLE: 05/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 126Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Richard told me I need to keep an eye on LVEA temps because a fan was shut off; we expect to see temps fall.
The temp in the Y EBay seemed warm at 26.1 C, so I trended back the last 10 days and noticed that H1:PEM-Y_EBAY_RACK1_TEMPERATURE reached over 45 C in the EBay. I'm not sure if that's accurate or not, but it seems like something to make a note of.
H1:PEM-Y_EBAY_RACK1_TEMPERATURE seems too high when I compare it to H1:PEM-X_EBAY_RACK1_TEMPERATURE, which hasn't changed more than a degree in the last 10 days. My previous alog tags FMCS.
SR3 returned to pitch 437.1, yaw -149.1, which is where they were on April 13 at 3:00 UTC.
2:40 UTC: PI mode 29 rang up; the filter cycled through and caught it before it grew large enough to cause a lockloss.
Lockloss at 5:27 UTC, reason currently unknown:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1369200467
Relocking without an initial alignment went through PRMI -> Check Fringes -> PRMI -> DRMI and then went all the way up to NLN.
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 17:03 | FAC | Kim | High Bay | - | Technical cleaning | 19:03 |
| 18:55 | VAC | Gerardo | FCES | - | Moving ion pumps | 19:52 |
| 22:56 | ISC | Sheila, TJ | LVEA | Local | Realign beatnotes | 23:46 |
| 23:22 | Tour | Camilla & Parents | LVEA & Roof | N | Just popping their heads in to see the IFO | 23:42 |
TITLE: 05/27 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 126Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
- IFO has been locked and in OBSERVING as of 06:57 UTC
- Great job to all who helped with troubleshooting H1 and getting it back up and running!
IFO status: Locked in Nominal Low Noise in Observing mode.
LVEA temps look reasonable.
But the temp in the Y EBay seemed warm at 26.1 C, so I trended back the last 10 days and noticed that H1:PEM-Y_EBAY_RACK1_TEMPERATURE reached over 45 C in the EBay. I'm not sure if that's accurate or not, but it seems like something to make a note of.
H1:PEM-Y_EBAY_RACK1_TEMPERATURE seems too high when I compare it to H1:PEM-X_EBAY_RACK1_TEMPERATURE, which hasn't changed more than a degree in the last 10 days.
Jeff wrote the more detailed alog of the problems with the interferometer and how we tried to track down the issue, but the short version: I installed a bad blend filter that has been causing an offset to accumulate in the ISI RZ locations for all of the HAMs except HAM8. The accumulation takes about 2 weeks to reach the 1 urad level. I designed the filter for HAM8 and it has been running there with no problems since last year, so I installed it on HAM4-5 on Dec 11 last year and on HAM2-3 on March 15 this year. Unfortunately, there are some issues with the foton implementation, either because I missed some cleanup in the filter or because I got bit by some issue with writing the filters from Matlab to the foton file. The attached screenshot shows some of the zpk coefficients from foton: there are 3 poles at or near 0 Hz, and there are two negative zeros at 1e-6 Hz. I certainly didn't use those in my design in Matlab, so I guess I missed a step or two in cleaning up the filters before install. I think I've got a fix for that now, but will wait for Tuesday to try it. They shouldn't accumulate much offset over the long weekend.
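To make the mechanism concrete, here's a minimal sketch (scipy, with illustrative zpk values only, not the actual foton coefficients) of why a pole at 0 Hz in a blend stage acts as an integrator and turns a small static input into a steadily accumulating offset:

# Sketch only: a blend stage with a pole at 0 Hz integrates a constant input,
# so a small CPS offset accumulates instead of settling. The zeros/poles below
# are illustrative, chosen to mimic the screenshot, not the real filter.
import numpy as np
from scipy import signal

zeros = [-2 * np.pi * 1e-6] * 2                       # two tiny negative zeros (~1e-6 Hz)
poles = [0.0, -2 * np.pi * 0.1, -2 * np.pi * 0.1]     # one pole exactly at 0 Hz -> integrator
sys = signal.ZerosPolesGain(zeros, poles, 1.0).to_ss()

t = np.arange(0.0, 3600.0, 1.0)                       # one hour, 1 s steps
_, y, _ = signal.lsim(sys, U=np.ones_like(t), T=t)    # constant (DC) input
print(y[len(y) // 2], y[-1])                          # output keeps growing -- the "drift"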
I've also added a test to DIAG_MAIN to warn if the errant integration reaches 1 urad on any of the HAM chambers. This should take 2 weeks or more at the current rate, but if any of the HAMs reaches a 1 urad offset, DIAG_MAIN will report:
"ISI {} RZ CPS residual bad"
The test in DIAG_MAIN is:
@SYSDIAG.register_test
def HAM_RZ_RESIDUAL():
    """HAM RZ residual test"""
    # HAM chambers whose CPS RZ residual we watch
    chambers = ['HAM2', 'HAM3', 'HAM4', 'HAM5', 'HAM6', 'HAM7', 'HAM8']
    tots = []
    for chamber in chambers:
        # the residual monitor is in nanoradians; warn above 1000 nrad = 1 urad
        if ezca['ISI-' + chamber + '_CPS_RZ_RESIDUALMON'] > 1000:
            tots.append(chamber)
    if tots:
        yield "ISI {} RZ CPS residual bad".format(tots)
Also, it's clear to me after thinking about this a bit that if we need to "reset" this, we don't need to restart the ISI like we did this afternoon. It will be sufficient to turn off the RZ isolation loop, push the "clear history" buttons on the RZ blend filters, wait 20 or so seconds for the filters to settle, then turn the RZ loop back on.
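A minimal sketch of that reset sequence, if it were scripted; the ISO_RZ gain channel and BLND filter-bank names below are assumptions based on typical ISI naming, and writing 2 to a filter bank's _RSET channel as the "clear history" action is worth verifying before use:

# Sketch only: the in-place reset for one HAM chamber, no model restart.
# Channel / bank names are assumed from typical ISI conventions, not verified.
import time
from ezca import Ezca

ezca = Ezca()

def reset_rz_drift(chamber='HAM2'):
    iso_gain = 'ISI-{}_ISO_RZ_GAIN'.format(chamber)
    blnd = 'ISI-{}_BLND_RZ_CPS_CUR'.format(chamber)    # hypothetical RZ blend bank
    ezca[iso_gain] = 0                # 1. open the RZ isolation loop
    ezca[blnd + '_RSET'] = 2          # 2. clear history on the RZ blend filter (assumed)
    time.sleep(20)                    # 3. let the filters settle
    ezca[iso_gain] = 1                # 4. close the RZ loop again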
S. Dwyer, J. Kissel, E. von Reis, T. Shaffer, R. Short

Information is rolling in fast, and there are five folks working on diagnosing / solving the issues, but we think we've both identified and temporarily solved a major issue with the HAM-ISIs slowly driving away from their target alignment value on a slllooooowwww time-scale of weeks, resetting to their desired alignment position only after front-end model restarts. This aLOG picks up where Sheila left off with the POP WFS falling off the ISCT1 diodes (LHO:69930) and adds more to the investigation of the HAM ISI issues started with LHO:69924.

The most incriminating plot is the first attachment, 2023-05-26_HAMISI_RZ_Residual_sinceNov2023.png. This shows the "RZ residual" for all HAM ISIs, a comparison of the capacitive position sensor (CPS) live RZ position (composed of each table's three horizontal CPS) against a digitally fixed "target" value. The units of the y axis are nanoradians, so when you see drifts on the scale of 30000 nrad, that's 30 urad. Most recently, for example, due to opposing signs of drift, HAM2 and HAM3 have drifted apart by 10 urad, which we think has been causing all the ALS issues that we've been trying to solve with PR3 alignment and ISCT1 table alignments.

Here's what we know (and things we've ruled out):
- All HAM ISIs except HAM8 -- HAM2, HAM3, HAM4, HAM5, HAM6, and HAM7 -- show this slow accumulation of drift in RZ.
- The degree to which they drift is not consistent between chambers: some chambers drift as much as 30 urad, some as little as 0.5 urad, before getting their drift reset.
- The sign of the drift (either +RZ or -RZ) is inconsistent between chambers, but for a given chamber is consistent between drift resets.
  - HAM2, HAM5, HAM6, and HAM7 consistently drift in +RZ; HAM3 and HAM4 consistently drift in -RZ.
- On a linear y-scale, by eye, the drift looks "exponential," but it's not logarithmic with base 10. See 2023-05-26_HAMISI_RZ_Residual_sinceNov2023_logyaxis.png for the same drift with a log-base-10 y axis.
- The rate of accumulation appears inconsistent between chambers and non-linear, but as a verbal approximation for all chambers the rate could be generalized as 0.5 urad/week or 5 urad/month.
- We have confirmed (as best we can) with SUS top-mass alignment slider drives and optical levers that a real drift in the RZ residual causes a real drift in table alignment.
  - The story is *always* muddy with these SUS alignment sliders and optical levers over the course of months, but we convinced ourselves with front-end model restarts that a model restart restores the yaw alignment of the table.
- The story of "when did this start?" is muddied by chamber vents, but we believe that prior to ~Nov 2023 we did NOT see any of this drift.
  - We DID NOT see this in O3A or O3B in H1.

On this kind of time-scale of consistent accumulation of drift, and given that
- we don't see this at L1, and
- HAM8 is not synchronized and it's the only one not drifting,
the first thing we point our fingers toward is the CPS timing (L1 has *not* synchronized their CPS), thinking that it's a slow de-synchronization of the timing system.

We're convinced it's NOT the CPS timing. Here's why:
- We know the problem is ONLY with the H1 HAM ISIs' horizontal CPS, most easily seen in the RZ DOF.
  - We DO NOT see this in any BSC-ISIs.
  - We DO NOT see any drifts in these HAM-ISI vertical residuals (see third attachment).
  - We DO NOT see any drifts in a quick recent trend of L1's HAM-ISI RZ sensors.
- We know that the analog arrangement of the timing synchronization has grown organically enough that there's *nothing* in common across all 6 corner station / LVEA HAM2-7 ISIs.
  - The BSC and HAM ISIs are synchronized with two separate CPS fan-out chassis. (OK, that's the one thing that the HAMs have in common.)
  - However, each HAM ISI's six CPS readouts are grouped into two analog satellite amplifiers (crates, racks), each of which has 4 channels.
  - The sensors are grouped in pairs of horizontal and vertical, i.e. "H1V1" for corner 1's horizontal and vertical sensors, so verticals are synchronized in the same way as horizontals, and they're spread differently across the 8 available channels.
  - Here's the organic layout:
    - HAMs 2, 3, 7, and 8's channels are grouped (H1V1 H3V3) and (H2V2 xxxx).
    - HAMs 4, 5, and 6's channels are grouped (H1V1 H2V2) and (H3V3 xxxx).
  - HAMs 2, 3, 4, 5, and 6 are the older-style shielded coax cabling systems.
  - HAMs 7 and 8 are the newer triax cabling system.
  - HAMs 7 and 8 verticals are FINE sensors, rather than COARSE sensors.
  - Again, HAM8 is NOT synchronized.
- An odd additional symptom of this drift is that the high-frequency noise floor of the sensor is increasing. However, this is *also* consistent with the gaps between the horizontal sensors increasing because of the RZ drift.
- The drift resets when the front-end model is restarted. The times that the drifts reset are:
  - May 23 2023 15:40 UTC HAM6 reset [h1seiham16 computer restart LHO:69843]
  - May 10 2023 20:27 UTC HAM45 reset [partial corner station dolphin crash 69492]
  - Apr 11 2023 15:09 UTC ALL HAMs reset [RCG upgrade LHO:68595]
  - Feb 14 2023 18:00 UTC [RCG upgrade LHO:67411]

Seeing this, we asked Erik to restart the HAM 2-7 front-end models. The best "out of loop" metric we have for this is the optical lever position for PR3 and SR3:
- pr3_oplev_ham2_restart.png: the PR3 position during the HAM2 reboot reports muddy information. Jim turned off the ISI HAM2 RZ loop alone (which takes the CPS position drift out of the feedback), and we saw the optical lever change. Then we turned off all the isolation loops and saw no change in position. Then we restarted the model and ... it came back to a DIFFERENT position, neither the drifted nor the non-isolated one. Also, Sheila has reason to believe that the optical lever calibration gain is off by as much as a factor of 10, so 1 urad reported by the optical lever is actually 10 urad of real motion. #facepalm #muddywaters
- sr3_oplev_ham5_restart.png: the SR3 position didn't change that much after the restart because the drift wasn't that large to begin with. #facepalm #muddywaters

The real proof that this reset worked came after all of today's HAM2-7 model restarts, which reset the table alignments, and then restoring the PR3 and SR3 alignment sliders to values from a time when all the tables had just been reset (Apr 13 2023 03:00 UTC) -- because we immediately recovered all signals on ISCT1. And now we're almost to NOMINAL_LOW_NOISE, and we think we've temporarily solved the issue.

So, it's on Jim to investigate why these RZ CPS are causing a drift. Given that it's not the timing system, our blame now falls solely on things digital. He's got a few weeks before the drift gets bad again... Stay tuned.
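As a side note on what the trended quantity is, here's a minimal sketch of the "RZ residual" idea: a live RZ estimate built from the three horizontal CPS minus the frozen target, in nanoradians. The equal-weight averaging and the single effective radius are illustrative assumptions; the real ISI model uses its own calibrated CPS-to-RZ projection:

# Sketch only: RZ residual = (live RZ estimated from 3 horizontal CPS) - target.
# Equal-weight averaging and one effective radius are assumptions for illustration.
import numpy as np

def rz_residual_nrad(h_cps_nm, radius_m, rz_target_nrad):
    """h_cps_nm: tangential readings [nm] of the table's 3 horizontal CPS."""
    rz_nrad = np.mean(h_cps_nm) / radius_m    # nm of tangential motion per m of radius = nrad
    return rz_nrad - rz_target_nrad

# Example: readings sitting ~30 nrad above the frozen target
print(rz_residual_nrad([520.0, 480.0, 500.0], radius_m=1.0, rz_target_nrad=470.0))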
The problem seems to be a bad blend on the RZ DOF, with spurious poles and zeros at "0 Hz"; see alog 69955. I think I have a fix; there are DIAG_MAIN tests to warn if the drifts get too big, and we can "reset" without restarting the model. I hope to get this fixed this coming Tuesday.