Sheila, Jennie W
Sheila noticed our range was really high in the lock at 14:51 UTC on April 17th.
All camera offsets are the same at the good range time on the 17th (16:19:53 UTC, left cursor) as in a lock from last night at 2024/04/24 13:20:30 UTC (right cursor).
The current offsets (second cursor) are not the values in lsc_params, and since we had better build-ups on the 17th (first cursor), we should fix these offsets and see if we can optimise the A2L gains again in the next commissioning window, after the IFO range is recovered.
The best build-ups were immediately after this good time, when we changed PIT2 and YAW3, but this made the range worse.
We looked at this again today and realised the camera offsets had been set back to their former values, from before our April 17th changes to PIT2 and YAW3, when the camera servo guardian went through state 405 (TURN ON CAMERA FIXED OFFSETS). We then realised we may have forgotten to reload the camera servo guardian when we made those changes, so we reloaded it, took it to the DITHER ON state, then to CAMERA SERVO ON, and now we have CAM PIT2 OFFSET = -173 and CAM YAW3 OFFSET = -349.5, as set in lsc_params.
FIXED ISSUE
Ryan C, Rahul
Rahul and I did a bunch of SUS health checks in which we did not see anything out of the ordinary.
1) I ran the rubbing script (/opt/rtcds/userapps/release/sus/h1/scripts/SUS_TopMassOSEMs.py), comparing to a time last week; no issues (a rough sketch of this kind of comparison is included after this list).
2) We took OFI and OM1-3 spectra and compared them to spectra from Monday. The OMs are experiencing some overflows (H1SUSIFOOUT), but we're not sure why; overall it looks fine.
3) QUADs: Binary I/Os, OSEMs, Voltmons. Trending over the past week didn't reveal anything, except that ITMX L1 OSEM UL might be very slowly dying.
4) OPLEV sums
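For reference on item 1, here is a minimal sketch of the kind of comparison the rubbing script performs: overlay a top-mass OSEM spectrum from now against a reference time. This is not the actual SUS_TopMassOSEMs.py; the channel name and GPS times below are placeholders.

# Rough sketch (not the actual SUS_TopMassOSEMs.py) of a rubbing check:
# overlay the ASD of a top-mass OSEM/damping channel at a reference time and now.
# The channel name and GPS times are placeholders.
from gwpy.timeseries import TimeSeries

chan = 'H1:SUS-ITMX_M0_DAMP_P_IN1_DQ'   # example top-mass channel
ref_start, dur = 1397300000, 600        # reference time last week (example)
now_start = 1397900000                  # comparison time (example)

ref = TimeSeries.get(chan, ref_start, ref_start + dur)
now = TimeSeries.get(chan, now_start, now_start + dur)

# Rubbing typically shifts or suppresses the suspension resonances,
# so look for differences between the two ASDs below ~10 Hz.
plot = ref.asd(fftlength=64, overlap=32).plot(label='reference')
ax = plot.gca()
ax.plot(now.asd(fftlength=64, overlap=32), label='current')
ax.set_xlim(0.1, 10)
ax.legend()
plot.show()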
Two lock acquisitions last night do NOT show the 9 Hz buzzing at state 557 TRANSITION_FROM_ETMX seen in 77359.
Plots at 2024/04/24 04:21 UTC and 07:32 UTC attached. We should also check the two lock acquisitions that we lost before NLN prior to this.
Wed Apr 24 10:09:59 2024 INFO: Fill completed in 9min 56secs
Gerardo confirmed a good fill curbside.
Well pump has been started to replenish the fire water tank. The pump will run for 4 hours.
TITLE: 04/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 0mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY: Still locked but with a range around 120 Mpc. Lots of noise across almost the entire spectrum.
TJ took no-SQZ time this morning from 15:31:30 to 15:45:00 UTC. Plot attached comparing now to the no-SQZ time from April 12th (77133).
TJ and I looked at the OFI (plot); over the last 5 days, the RT voltmons have changed more than usual. We aren't actively damping the OFI, though, so we are unsure whether these VOLTMONs are just reading noise...
We checked the HWS _PROBE_TOTAL_PIXEL_VALUE channels and the position of the IFO beam on the IX and IY HWS, from the last good long-range power-up at 2024/04/21 20:55 UTC (1397768177) to last night's power-up at 2024-04-24 07:26 UTC (1397978835). No difference was seen between the two times. The HWS cares about SR3.
TITLE: 04/24 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C - OWL CANCELLED
SHIFT SUMMARY:
IFO is LOCKING and in ENGAGE_DRMI_ASC
5:24 UTC - Reached NLN but waiting as SQZ team (Naoki) tries to better align squeezer
5:40 UTC - Observing
6:39 UTC - Lockloss DRMI
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:18 | STROLL | Patrick | MY | N | Walkin' | 01:26 |
01:26 | PEM | Robert | LVEA | N | Ungrounding SUS Racks | 01:56 |
IFO is in LOCKING_ARMS_GREEN and still locking after maintenance.
There is a LOT of ongoing and past troubleshooting mostly summarized by Jenne in alog 77368. Since this was posted, I looked at the HAM6 Picomotors and determined that there was no suspicious activity (esp in the on/off switching) since locking issues began (approx 15 hrs ago now).
We just got to OMC Whitening but the violins were horrendous. We stayed there for 30 minutes before an unknown lockloss (DRMI). Sheila is on TS helping with SQZ alignment/instructions.
[Jenne, Ibrahim, RyanS, Sheila, Robert, TJ, Elenna, Jennie, Oli, Jim, others]
Something is different with the IFO today, and it's not great. Since we haven't pinpointed what the problem is, we don't really know if it is related to our mediocre range last night and struggles to relock after that, or if it is new from maintenance day. So far, the solution to get back to (almost) NLN has been to put in *massive* offsets in AS_C. This did enable us to get locked (and avoided the ASC ringup that was causing locklosses this afternoon), but it left us with a large scatter shelf in DARM. Robert had the good suggestion that we see if, once locked, we could walk the offsets back to their nominal values of zero; doing this caused an ASC ringup, and it's probably the same thing that we'd been seeing throughout the day. So, indeed going toward the nominal offset of zero for pit and yaw on AS_C is not an okay place right now.
We began to narrow in on AS_C since, during initial alignment, it is in-loop for SR2 and SRY alignments, and it was causing SR2 to be pulled very far in yaw. We were seeing this visually on the AS_AIR camera, which looked really unusually terrible. In the end, Sheila hand-aligned SRY, and then put offsets into AS_C such that the ASC now servos to that point. But, the offsets used are +0.46 in pit and -0.85 in yaw, so really, really big. However, with these in place, we were able to let DRMI ASC run, and it ran fine.
Since that worked (the large AS_C offsets), we let the IFO relock the rest of the way, and it kept on working. Per a suggestion from Elenna from earlier in the afternoon, after we completed REDUCE_RF45 I manual-ed to ADJUST_POWER and increased the power 5W at a time (waiting for ASC to mostly converge between steps) until we were at 60W, then I manual-ed back to LOWNOISE_ASC. After that, I just selected NomLowNoise and guardian finished things. We ended up 'stuck' in OMC_WHITENING, although the violins were coming down with the damping that guardian was doing. It was around here that I tried ramping down the AS_C yaw offset with 10 sec ramp times to see if it would reduce the scatter shelf that we saw in DARM. See first attachment, with the scatter shelf circled. My first step was to -0.80, second step was to -0.70, and we had an ASC ringup and lockloss at this step. I wasn't sure after the first step, but definitely as we were ramping to the second step (before we lost lock) RyanS, Robert, and I all agreed that the scatter shelf was getting worse, not better. But, we lost lock before I could put the offset back.
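For the record, stepping an offset like this from the command line is just a couple of ezca writes; a minimal sketch, assuming the offsets live in AS_C filter banks with the channel names below (which I have not verified):

import time
from ezca import Ezca

ezca = Ezca(ifo='H1')

# 10 s ramp time on the offset, as used above (channel names are assumptions)
ezca['ASC-AS_C_YAW_OFFSET_TRAMP'] = 10
for step in [-0.80, -0.70]:      # the two steps tried before the lockloss
    ezca['ASC-AS_C_YAW_OFFSET'] = step
    time.sleep(15)               # let the ramp finish and watch DARM before the next step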
We still don't understand *why* we need these large offsets in AS_C, just that we do. Since I hadn't yet SDF-ed them, we had 2 locklosses from ENGAGE_DRMI_ASC when the nominal zero-offsets were used in AS_C. I have since saved the AS_C offset values, and the offset switch being on, in both the safe.snap and observe.snap SDF files. The second and third attachments show these.
We've been trying to think through what from maintenance day could possibly have had anything to do with this, and the only things we've come up with are the h1asc reboot and the grounding of SUS racks. From Jeff's alog about the h1asc changes, it seems quite clear that those are just additions of monitor points, and that shouldn't have had any effect. While we were unlocked after our one successful attempt, Robert went into the LVEA and un-grounded those two racks that Fil grounded earlier today. Robert is quite sure that he's checked the grounding of them in the past without any consequence to our ability to lock, but just in case we've got them undone for tonight. So far, that does not seem to have changed the fact that we need these crazy offsets in AS_C. Just in case, Robert suggests we consider rebooting h1asc tomorrow to back out other changes that happened today (even though those shouldn't have anything to do with anything).
For right now (as of ~7:20pm), my best plan is to see if we can get to Observing, and should a GW candidate come through, have the detchar and parameter estimation folks exclude all data below ~80 Hz. This is a really very poor solution though, since this huge scatter shelf will be quite detrimental to our data quality and could cause our search pipelines to trigger on this scattering.
Related thought, but not yet thoroughly investigated (at least by me), is whether our problems actually started sometime yesterday or last night, and aren't at all related to maintenance. As partial 'evidence' in this direction, I'll note that kappa_c has been dramatically low for the last two Observing segments last night. We haven't gotten all the way to NLN tonight yet (OMC_WHITENING didn't finish before we lost lock), but kappa_c looks like it might be even lower now than yesterday. The 4th attachment shows kappa_c for last night's locks (screenshot taken before we locked today). So, maybe we've had some clipping or severe degradation of alignment in the last day or so. Jim checked all of the ISIs, and nothing seems suspicious there.
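If useful, the kappa_c trend over the last couple of days can be pulled quickly; a minimal sketch, assuming the usual CAL-CS TDEP channel name and an illustrative GPS window:

# Trend kappa_c over the past ~2 days to see when the optical gain dropped.
# The GPS window is an example; adjust to the times of interest.
from gwpy.timeseries import TimeSeries

kappa = TimeSeries.get('H1:CAL-CS_TDEP_KAPPA_C_OUTPUT', 1397800000, 1397990000)
plot = kappa.plot(ylabel='kappa_c')
plot.show()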
Last thing - since we had run the dark offsets script earlier this afternoon (probably was a red herring) and I saved all of the SDFs in the safe.snap, we will have diffs for Observe.snap. All diffs that have _OFFSET in them should be accepted (and screenshot-ed).
Oh, also, the squeezing was not at all good when we locked. I suspect that it's because the squeezer alignment isn't matched to this bad-but-working alignment of the rest of the IFO. Sheila should be able to log on in a little while and can hopefully take a quick look to see if there's anything to do.
Adding a screenshot that summarizes/supports what Jenne is saying above.
A little over 1 day ago, there was a drop in optical gain. There isn't an indication of a shift in alignment of SR2, or the OMs at that time. AS_C QPD is in loop, so that signal being zero only shows that we weren't using any offsets at the time. The drives on OM1 + OM2 show that their alignment also didn't shift at the time. Since they are used to center the beam on AS_A and AS_B, their drives would need to shift if there is a large shift in the real beam position on AS_C. You can see that for today's lock, where there were large offsets on AS_C, the alignment of SR2 is different, as well as the alignment of OM1 + 2 (indicating that the alignment onto AS_C is different). We might be clipping on the OFI, or elsewhere at the AS port.
Here's an image of the AS camera:
You can toggle through these three in different browser tabs to see that the alignment didn't seem to shift until we added the offsets. So all indications are that these offsets aren't good, and we should not be using them, except that they appear to have allowed us to lock the IFO.
Editing to add another screenshot: as Ibrahim checked several times last night, the OSEMs indicate that SR3 and PR3 haven't moved. In the early Tuesday lock where the optical gain first dropped, the power recycling gain was unchanged; with the introduction of offsets last night it seems slightly lower (and the optical gain also took another step down). This attachment shows that the offsets moved OM3 and the OMC, which I did not expect (since the AS centering fixes the beam axis arriving on OM3, I wouldn't have expected the AS_C offsets to have moved these optics). But they didn't move for the Tuesday morning low-optical-gain lock.
Jennie W, Sheila
Sheila suggested I use the oplevs for SR3, BS, ITMY, ITMX, ETMY, and ETMX to compare our alignment out of lock right before we started locking the main IFO (i.e. after locking the green arms). I chose two times when we were in FIND_IR (state 16 in ISC_LOCK).
One of these times was after a lockloss from a very high range lock from Monday afternoon (~157 Mpc).
GPS reference time in lock = 1397864624 (16:43:26 PDT)
GPS reference time in FIND_IR presumably before our current weird alignment/locking problems = 1397868549 (17:48:51 PDT)
The other time was during our last locking sequence on Tuesday morning, before Tuesday maintenance started. We did not get to NLN but fell out of lock before that, at ENGAGE_ASC_FOR_FULL_IFO (state 430 in ISC_LOCK).
GPS reference time in FIND_IR = 1397918790 (07:46:12 PDT)
The main oplev signals that changed by more than 1 microradian were:
ITMY PIT (down by 1.24 microradians)
ETMY PIT (up by 1.02 microradians)
ETMY YAW (down by 1.14 microradians)
ETMX PIT (up by 2.35 microradians)
ETMX YAW (up by 3.21 microradians)
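For reference, here is a minimal sketch of this comparison, fetching oplev signals at the two FIND_IR times and differencing their means. The channel form is an assumption (test-mass oplevs are read from the L3 stage; SR3 and BS use different stages), so adjust before relying on it.

from gwpy.timeseries import TimeSeries

t_good = 1397868549   # FIND_IR after the good Monday lock
t_bad  = 1397918790   # FIND_IR on Tuesday morning before maintenance

# Assumed channel form; only the test masses are looped over here.
for optic in ['ITMX', 'ITMY', 'ETMX', 'ETMY']:
    for dof in ['PIT', 'YAW']:
        chan = f'H1:SUS-{optic}_L3_OPLEV_{dof}_OUT_DQ'
        good = TimeSeries.get(chan, t_good, t_good + 60).mean()
        bad = TimeSeries.get(chan, t_bad, t_bad + 60).mean()
        print(f'{optic} {dof}: {bad.value - good.value:+.2f} urad')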
While SRY was locked (the last step in initial alignment), we moved the OFI as much as we could by putting large offsets in the actuator filter banks. No discernible power level changes are visible. This isn't surprising given how weak the OFI actuators are, but it seemed worth eliminating the possibility.
Because of the locking problems, I disconnected the grounds that were installed earlier today (77350).
Corey, Robert
Derek reported that S240420aw was likely caused by scattering noise so I looked into the problem. The figure shows that a large seismic pulse (~2 orders of magnitude above background) started something swinging at 1.44 Hz, with a Q of about 300. The scattering shelf's cutoff slowly dropped in frequency over ten minutes, so there were plenty of chirps reaching different frequencies. It is tempting to guess that the source of scattering is a TMS, which have transverse resonances of about 1.4 Hz, but I checked and the Qs of TMS motion were much lower than the Q of 300 of the scatter source. The time of the seismic pulse matches that of the M2.8 earthquake in Richland on Friday night, 24 surface km from the site and 8 km deep. There is quite a bit of light scattered, but at normal ground motion levels I would expect the scattering noise shelf to reach only about 5 Hz, so I don’t think this scattering noise source affects us during normal operation. Andy L. and Beverly B. noticed similar scattering noise back in 2017 ( 37947 ) for a smaller nearby quake.
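As a rough cross-check of that ~5 Hz estimate, the usual single-bounce relation for the scattering-shelf cutoff is f_max = 2*v_max/lambda, with v_max the peak velocity of the scattering surface. A back-of-envelope sketch, with illustrative numbers:

# Scattering-shelf cutoff for a surface oscillating at f_osc with amplitude x_amp.
import math

lam = 1.064e-6   # laser wavelength [m]

def shelf_cutoff(x_amp, f_osc):
    """Maximum fringe frequency [Hz] for a scatterer with amplitude x_amp [m]
    oscillating at f_osc [Hz] (single bounce)."""
    v_max = 2 * math.pi * f_osc * x_amp
    return 2 * v_max / lam

print(shelf_cutoff(0.3e-6, 1.44))   # ~0.3 um of 1.44 Hz motion -> shelf near 5 Hz
print(shelf_cutoff(30e-6, 1.44))    # ~100x more motion -> shelf above 500 Hz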
Jennie W, Elenna, Jenne, TJ
We have been having trouble locking and keep falling out during the MAX_POWER state of ISC_LOCK as we power up (a and b are examples showing MICH and DHARD during two of these four locklosses).
During some of these we noticed that MICH or DHARD could be ringing up.
We measured the MICH open-loop gain with Elenna's template, a version of which I have saved in /ligo/home/jennifer.wright/Documents/ASC/MICH_P_OLG_broadband_shaped.xml. We found the gain to be about 10% too low, so we increased the loop gain from -2.4 to -2.7 in the loop servo. We also measured the DHARD open-loop gain with the template DHARD_2W_P_OLG_broadband_shaped.xml in this folder and found it looks normal.
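The arithmetic behind that gain change, as a small sketch (the 10% figure is approximate, as stated above):

# If the measured open-loop gain is ~10% below where it should be at the
# reference frequency, scale the servo gain up by the same factor.
old_gain = -2.4
measured_over_nominal = 0.89          # OLG measured roughly 10% low (approximate)
new_gain = old_gain / measured_over_nominal
print(round(new_gain, 2))             # ~ -2.7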
Investigations ongoing...
Jennie, TJ, Camilla
The Operator team has been seeing more locklosses at state 557 TRANSITION_FROM_ETMX, more so when the wind is high. Times: 1397719797, 1397722896, 1397725803, 1397273349, 1397915077
Last night we had a lockloss from 558 LOWNOISE_ESD_ETMX with a 9 Hz ITMX L3 oscillation, see attached. Compare to a successful transition (which still has the glitch).
Note that there is a glitch ~30 s before state 558 in both cases. The H1:SUS-ETMX_L3_DRIVEALIGN_L2L and _L3_LOCK_L filter changes happen here. Are the ramp times finished before these changes?
The timing of the glitch is 2 min 55 s after we reach state 557; this is the same timing as in 4 of the last 6 state 557 locklosses.
Louis, Camilla. Investigations ongoing, but the timing of this glitch is suspiciously close to when the H1:SUS-ETMX_L1_LOCK_L gain and filters are changed; the TRAMP is 1 second, but the FM2 and FM6 foton filters have a 3 second ramp. There is an INPUT to this filter bank before the gain and filters are turned on. Plot attached.
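A minimal sketch of how one might check the glitch timing against the guardian state, assuming the usual guardian state-number channel and an example SUS drive channel (names and the GPS window below are placeholders, not verified):

from gwpy.timeseries import TimeSeries

t0, t1 = 1397978000, 1397978600   # example window around one of the lock attempts

state = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', t0, t1)
drive = TimeSeries.get('H1:SUS-ITMX_L3_MASTER_OUT_UL_DQ', t0, t1)

# time we first reach TRANSITION_FROM_ETMX (state 557)
t557 = state.times[state.value == 557][0].value
print('entered state 557 at GPS', t557)

# look at the drive in the ~3 minutes after entering 557 to locate the glitch
plot = drive.crop(t557, t557 + 240).plot()
plot.show()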
In 77640 we show that the DARM1 filter change is the cause of the glitches/locklosses.