Lost lock at 2107 UTC - 1370207267
No immediate cause for this yet.
Fully automated relock. Guardian waited 12 minutes to turn on the OMC whitening due to elevated violin modes.
I've modified the THERMALIZATION guardian and the PRCL2 gain in ISC_LOCK to match the proposed changes from last night in alog 70211. I loaded both THERMALIZATION and ISC_LOCK just before we got to LownoiseLengthControl, so it's now running with these updated gains.
Fil informed me that the PSL chiller was alarming. It had a low level alarm, so I added 150 mL to bring it back to its max level. The last time it was filled was the end of April, according to the sheet at the chiller.
Wed Jun 07 10:07:33 2023 INFO: Fill completed in 7min 32secs
Gerardo confirmed a good fill curbside.
Lost lock at 1635 - lock loss 1370190932
CSOFT P may have been increasing slightly, but it didn't seem to reach output levels much higher than usual. The lockloss tool reports DCPD saturations, but I didn't notice much in the plots there.
LSC DARM and SRCL also saw a slight rise in their IN1 signals.
I did a quick check to see if this lockloss was related to the ring-ups we've seen in PRCL, SRCL, and MICH recently, but it does not seem to be caused directly by these channels. This is just a comment so we have a record of it.
Well pump started now to replenish fire water tank.
I'm not sure anyone has looked at the well pump before, so for reference the GPS start time was 1370190407 +/- 2 s. Bubba says this pump runs automatically, generally overnight for 4 hours. Today's run should also last 4 hours unless the pump is manually turned off.
Edit: Tagging DetChar explicitly, not just in task
A reminder that the well pump run time trend plots for 1 week, 1 month, and 3 months are available on the CDS web server.
If anyone is using tconvert, today it is producing the warning text:
tconvert WARNING: Leap-second info in /ligo/data/tcleaps/tcleaps.txt is no longer certain to be valid, and we were unable to get updated info from any LDAS web server. Continuing with possibly-outdated info.
The leap-second file expired one hour ago, at 2023-06-07 08:08:28.000000 PDT.
This is just a warning message; the resulting GPS time is still correct. There have been no leap seconds applied to UTC since Dec 2016.
I encourage people to use the gpstime utility, which is better supported and less prone to these kinds of failures, instead of tconvert.
I've extended the expiration time for tconvert's tcleaps.txt to 2026-08-07 17:55:08.000000 PDT.
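For reference, a minimal sketch of the GPS-to-UTC arithmetic both tools perform, assuming the fixed 18 s GPS-UTC offset that has held since the Dec 2016 leap second (the function name is mine, not from either tool):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC = 18  # leap seconds; constant since the Dec 2016 leap second

def gps_to_utc(gps_seconds):
    """Convert a GPS timestamp to a UTC datetime (fixed-offset assumption)."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC)

# e.g. the 21:07 UTC lock loss earlier in this log:
# gps_to_utc(1370207267) -> 2023-06-07 21:07:29 UTC
```

This of course breaks if another leap second is ever added, which is exactly why the real tools consult a leap-second file.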
There are a number of models with filter-module stages that are engaged even though no filter is defined for that stage. This causes some confusion when scanning for unresponsive filter stages, as these clutter the results.
Attached is an example of what this looks like on the MEDM screen. FM2 is enabled, but nothing is loaded in that stage, so the second box never turns green.
My guess is that these stages used to have a filter defined for them, but it was removed. The solution is to finish the removal by disabling the stage and saving the new state with SDF.
Full listing of filter modules/stages that are enabled but have no filter defined for the enabled stage in the filter file:
h1susetmy : [('SUS-ETMY_L2_DAMP_MODE19', 'FM1'), ('SUS-ETMY_L2_DAMP_MODE9', 'FM4')],
h1sussqzin : [('SUS-ZM1_M1_WD_OSEMAC_RMSLP_LL', 'FM6'), ('SUS-ZM2_M1_LOCK_L', 'FM6')]
h1hpietmx : [('HPI-ETMX_3DL4CINF_C_Z', 'FM1'), ('HPI-ETMX_3DL4CINF_C_Y', 'FM1'), ('HPI-ETMX_3DL4CINF_C_X', 'FM1'), ('HPI-ETMX_3DL4CINF_B_Z', 'FM1'), ('HPI-ETMX_3DL4CINF_B_Y', 'FM1'), ('HPI-ETMX_3DL4CINF_B_X', 'FM1'), ('HPI-ETMX_3DL4CINF_A_Z', 'FM1'), ('HPI-ETMX_3DL4CINF_A_Y', 'FM1'), ('HPI-ETMX_3DL4CINF_A_X', 'FM1')]
h1lsc : [('LSC-EXTRA_AI_2', 'FM2')]
h1lscaux : [('LSC-LOCKIN_1_DEMOD_9_I', 'FM1'), ('LSC-LOCKIN_1_DEMOD_9_Q', 'FM1')]
h1oaf : [('OAF-CAL_SUM_DARM_L1', 'FM3'), ('OAF-CAL_SUM_DARM_L1', 'FM6'), ('OAF-CAL_SUM_DARM_L3', 'FM3')]
h1calcs : [('CAL-CS_DARM_ANALOG_ETMY_L1', 'FM4')]
h1susproc : [('SUS-ETMY_L2_DAMP_MODE19_BL', 'FM1'), ('SUS-ITMY_L2_DAMP_MODE18_BL', 'FM1'), ('SUS-ITMY_L2_DAMP_MODE19_BL', 'FM1')]
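A minimal sketch of how such a scan can flag enabled-but-empty stages, assuming the common front-end convention that bits 0-9 of a filter module's _SWSTAT readback flag FM1-FM10 as engaged (the bit layout and both function names are my assumptions, not the actual scan code):

```python
def engaged_stages(swstat):
    """Stages flagged as engaged in a filter module's SWSTAT word.

    Assumes bits 0-9 map to FM1-FM10; verify against the RCG documentation.
    """
    return {f"FM{n + 1}" for n in range(10) if swstat & (1 << n)}

def enabled_but_empty(swstat, defined):
    """Engaged stages that have no filter defined in the foton file."""
    return sorted(engaged_stages(swstat) - set(defined))

# e.g. FM2 engaged (bit 1 set) but only FM1 defined in the filter file:
# enabled_but_empty(0b10, ["FM1"]) -> ["FM2"]
```

The `defined` set would come from parsing the model's foton filter file for stages with nonzero SOS sections.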
Edited to remove any filter stages under local control.
Some filter modules have front-end control over which filter stages are on. Typically, more than one filter stage is used for, say, an automatic boost, but only a subset is actually loaded. So a filter stage may be on but empty. This is the expected behaviour. Changing which filter stages are participating would require a front-end model change.
A better solution may be to turn the second box on for empty filter stages when they are switched on.
After reviewing all the SUS filter banks mentioned above, these empty filter banks are on either because they were turned on by mistake (the ZM1 and ZM2 filters) and blindly accepted into SDF, or because the bank had never been in active use for control or monitoring (the violin MODE control and monitoring filters); in the latter case someone was probably playing around with a filter design and cleared it out but forgot to turn off the stage. In all SUS cases, the filter module should be turned off and accepted as such in SDF. We'll make a point to clean this up and clear out the confusion next Tuesday, or during the next convenient lock loss.
h1sussqzin ZM1 and ZM2 filters in question have been turned off -- see LHO:70245.
h1hpietmx filters in question have been turned off -- see LHO:70251.
The h1oaf and h1calcs filters in question have been turned OFF -- see LHO:70255.
The h1susetmy and h1susproc filter modules in question have been addressed -- see LHO:70264.
To Daniel's point: another choice is to populate the filter with a gain=1 stage. Then it turns on but doesn't do anything. SEI does this with some of the calibrations, e.g. FM1 is the manufacturer's calibration, and FM2 tweaks the calibration based on measurements. If the sensor is very close to spec, FM2 can just be a gain of 1. Then all the automation works more smoothly.
I started pumping at exactly 8:31:00.
Here is the stackup. The pressure was 27 microns at the start of the pumping.
As discussed with Robert, I generated some noise at EX, then switched the pump on and off a few times. The specifics:
Stomping (I used a steel bar; you should see ~2 Hz noise):
- Start: 12:44:00
- Stop: 12:44:10
- Start: 12:44:40
- Stop: 12:44:50
Pump on/off sequence:
- Off: 12:46:00
- On: 12:47:00
- Off: 12:48:00
- On: 12:49:00
- Off: 12:50:00
- On: 12:51:00
The pump is now running; the pressure was 14 microns (which means that if we could run it until Thursday, that would be sufficient). If there is a lockloss, I will switch the pump off.
Pumping was stopped at 16:23, as the data analysis hadn't been finished yet.
I had a look at the EX accelerometers and, while I see the stomping noise, I do not see the pump turning on and off. The accelerometers are more sensitive than DARM to vibration, so I think it is fine to pump with this pump isolation setup while we are in observing mode.
Tagging a few systems-level groups who should be made aware of Robert's conclusion that it's OK to run these pumps. (Not resisting, indeed supporting, Robert's conclusion by making sure other people see it and have a reference to point to when it's brought up by the vacuum team.)
We have been having some locklosses during the TURN_ON_BS_STAGE2 step. Looking at a couple of events, there are transients seen in the BS St2 CPS during this step from engaging the isolation-loop boosts. The boost filters have ramp times; some were 5 seconds, probably changed after this alog, but the RX and RY loops still had 2-second ramps. All of these are now set to 5 seconds, which looks much smoother in the ISI CPS. Otherwise, these filters have not been changed since 2020. I also tried using the "Always On" option in foton; this went poorly and tripped the ISI.
The first attached plot is one of the previous transitions; the middle row shows the SEI_BS guardian state transition and the ISC_LOCK state. The top and bottom rows are the St2 CPS, RX/RY on top, X/Y on the bottom. It seems likely that the 2-second ramp for RX/RY is too fast, and this causes X/Y to move around a bit through tilt-to-horizontal coupling. The second attached plot is the DRMI lock just now, and the CPS transition is much smoother.
Nothing else has changed on these filters or loops in many months, I don't know why we would be having more problems now, but maybe we're just paying more attention.
This is, of course, exactly the opposite of what I did back in 2019, when I was decreasing ramp times to get the boost engaged as quickly as possible. Shorter ramps seemed to fix locklosses at this state back then. But we were turning the stage 2 loops completely off to acquire DRMI at the time, and I wasn't using the St2 RX/RY loops then either. Now we only turn off the boosts, which leaves the dc isolation loops mostly engaged.
I've counted six lock attempts since loading the new ramp-time settings in the boost filters, and I don't see any kicks in the St2 CPS when engaging the boost filters. I also don't see strings of locklosses during this step like before. So at the least, it didn't make things worse.
TITLE: 06/05 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 133Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 7mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
Current IFO Status: NOMINAL_LOW_NOISE & OBSERVING
11:16 UTC Lockloss Update: There were SUS ETMX L3 Saturations right before the Lockloss.
LVEA Temperature Update:
LVEA Temp has stayed elevated but stable over the last several hours.
Relocking:
The INCREASE_FLASHES state and I are having a hard time getting ALS-C_TRY_LF above 80%, due to the DIFF beatnote being low, in the -40 range.
I'm going to try going through some of the manual initial alignment to see if I can get that a little higher. That was not a good idea, because the DIFF beatnote got even worse, going down to the -50s. I took the sliders back to GPS time 1369976970, which is right before the most recent lock; that did not help the beatnotes or make the Y-arm flashes any better, so I took the sliders back to where they were 5 minutes ago (GPS time 1370001722) and touched them up manually. I was able to get a COMM beatnote of -10 (not ideal) and a DIFF beatnote of -7 (which I think is good).
First locking attempt: lost lock at TRANSITION_DRMI_TO_3F. While watching H1:LSC-POPAIR_B_RF90_I_ERR_DQ, I saw a kick happen during the transition.
For the next locking attempt I waited in ENGAGE_DRMI_ASC for about a minute for the ASC signals to converge before moving on. DRMI lost lock, but it wasn't a full lockloss. I tried again but waited about 4 minutes, and it smoothly transitioned to DRMI_LOCKED_CHECK_ASC.
Got to NOMINAL_LOW_NOISE at 13:35, but the CAMERA_SERVO guardian had yet to get all the way up to CAMERA_SERVO_ON. Turns out I was being impatient and it would have happened eventually.
13:51 UTC OBSERVING
This looks to me like the ADS was actually still converging this whole time and there wasn't an issue with the guardian(s). The convergence of the smooth channels (e.g. H1:ASC-ADS_PIT3_SMOOTH_INMON) seems to take longer than usual for a reason I haven't looked into. The convergence checker in the CAMERA_SERVO node runs instantaneous checks of these smooth channels, looking for values below 0.0025. Since we are looking at 6 different channels (4, 5, 6 P&Y for each), when we are below that threshold for all of them at the same time, we are probably converged enough.
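A sketch of the kind of threshold check described above (the helper name and calling convention are illustrative, not the actual Guardian code):

```python
ADS_THRESHOLD = 0.0025  # convergence threshold on the ADS smooth channels

def ads_converged(smooth_values):
    """True when every ADS smooth-channel value is below threshold at once.

    smooth_values: instantaneous readings of the six smooth channels,
    e.g. H1:ASC-ADS_PIT3_SMOOTH_INMON and friends.
    """
    return all(abs(v) < ADS_THRESHOLD for v in smooth_values)
```

Because the checks are instantaneous, a single channel hovering just above threshold holds the whole node back, which would look exactly like the slow progress seen here.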
I think I've fixed the issue with the bad blends we found on Friday, so we shouldn't have any more slow drifts in RZ CPS on the HAMs. The first attached screenshot shows the zeros and poles as read by foton for the new filter (top row, installed as SUPERSENS5 on all the tables) and the old bad blend (bottom row, SUPERSENS4). The issue with the bad blend was 3 poles at "0 Hz" and 3 zeros at "0 Hz" in the high pass; the low passes appear to be identical. I missed a step doing a "minimal realization" of the filters in my design script, and it wasn't caught by the checks I usually do, so I'm going to look at adding some alarms for that. These erroneous poles and zeros don't show up in any normal bode plot or step response (second image).
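The missed step amounts to cancelling coincident zero/pole pairs. A minimal sketch of that pass (a simplified stand-in for what the design script should do, not the actual code):

```python
def minreal(zeros, poles, tol=1e-9):
    """Cancel zero/pole pairs that coincide within tol.

    Unmatched roots are kept; e.g. three spurious zeros and three spurious
    poles at 0 Hz cancel each other out, leaving the intended filter.
    """
    remaining_poles = list(poles)
    kept_zeros = []
    for z in zeros:
        match = next((p for p in remaining_poles if abs(z - p) < tol), None)
        if match is not None:
            remaining_poles.remove(match)  # cancel the pair
        else:
            kept_zeros.append(z)
    return kept_zeros, remaining_poles
```

Since cancelled pairs contribute nothing to the frequency response, their absence (or presence) is invisible in a bode plot or step response, consistent with what was seen here.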
For now, I've installed the fixed filters on all chambers in the SUPERSENS5 path, set and tested the blend guardians to use the new filter, and accepted the settings in SDF for all of the chambers. I've left the old filters in, but they aren't being used; I'll remove them in a couple of weeks, as I want to use them as a check against the filter I installed today. I'm also going to leave the DIAG_MAIN test in, but once I'm convinced the problem is definitely fixed I'd like to remove it.
Since this fix, the RZ RESIDUAL monitors have been rock solid, oscillating with low, normal levels of noise around 0.0 nrad. This indicates that the tables are no longer slowly drifting away in yaw and that the problem is fixed. The attached plot shows the long-term trend of the RESIDUALMON channels from all HAM platforms before the fix and over the few weeks since. The crosshair pinpoints the date/time of the installed fix. (The nastiness seen in HAM7, shown in brown, a few days later was from the unrelated overnight issues with that chamber's ISI interface chassis; see LHO:70117.) Nice work Jim!
Using the lockloss select tool, I saw that ASC-SRC2_Y seemed to step out a bit further than usual. ASC_AS_C feeds into SRC2_Y and is read out at OM1, per our optical sensor layout (DCC G1601619). OM1 appears to show some abnormal motion seconds before the lockloss. But trending back further, this behavior shows up in the two previous locklosses (16:35 UTC, 09:07 UTC) too, although not as strongly, so maybe it's just another witness.