Lockloss @ 03:49 UTC - no obvious cause as of yet.
After waiting for high winds to calm down, H1 has just locked and started observing as of 03:02 UTC.
FAMIS 26059
There is no measurement this week for ETMX as the IFO lost lock before it was finished.
The V_eff values for ETMY have quickly changed direction since the last analysis and are now trending away from zero. All other test masses look okay.
This afternoon, the ITMX ISI tripped, was continuously glitching, and couldn't be recovered. Even when I put the ISI and HEPI in non-actuating states, the ISI continued to get glitches that almost saturated the seismometers. Fil went out to the rack, and while I watched the ISI he turned off the coil drivers one at a time. It wasn't until he turned off the third coil driver that the glitching stopped and the ISI started behaving normally. This is exactly the same as the event in July, where I found the St2 H3 sensor seemed to be misbehaving. Looking at the current and voltage mons, it seems the St2 H3 coil was again misbehaving; the attached trend shows the St2 coil voltage and current mons. The only outliers are the St2 H3 channels. This window should be during a time when the ISI masterswitch was off, so the drives should be ~0.
We should pull the corner 3 coil driver at the next opportunity. I think we have a spare in the EE shop.
Now corresponds to FRS Ticket 29225.
Took ~1 hr to find the problem, but the IFO was down for several hours because of the wind.
TITLE: 09/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 27mph Gusts, 16mph 5min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY:
TITLE: 09/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Pretty unsuccessful day for locking; we weren't able to relock after the morning LL due to wind.
15:38UTC Superevent S23927be - we didn't get a verbal alarm for this event, just a phone call.
I noticed this bump around 120Hz this week; I'm not sure if it's usually there (tagging detchar).
The range started to drop in the morning; it seemed correlated with the winds picking up (gusts over 40mph).
17:45UTC lockloss
17:50UTC Yarm struggled on the relock; we kept losing Yarm at LOCKING_ALS. After over an hour of trying, and with the weather forecast calling for it to get worse before it gets better, I decided to wait in DOWN for a bit. The corner station was getting gusts of 50mph throughout the afternoon; we could easily hear the wind from the control room.
ITMX ST1 & ST2 ISI watchdogs tripped at 21:52UTC, while we were in DOWN, from the ISI sensors glitching out. The sensors settled after a few minutes but then continued to glitch, potentially an electronics issue? Fil went to the CER to investigate; I moved ITMX SUS to SAFE and Jim turned off the master switch on the ISI while we investigated further. Fil did a power cycle of the chassis and it cleared the issue, so it seems to have been a coil driver issue? Related to alog71566.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
19:56 | FAC | Randy | LVEA | N | Quick checks | 20:05 |
21:18 | VAC | Gerardo | FCES | N | Parts search | 21:45 |
21:34 | FAC | Randy | Mech room | N | Move stuff around | 22:34 |
22:16 | EE | Fil | CER | N | ITMX ISI checks | 22:30 |
Oli, Camilla
ISC_LOCK takes longer to recognize and move to the LOCKLOSS state when a lockloss occurs than we want it to, sometimes leading to confusion regarding lockloss causes. Example: 71659
The part of ISC_LOCK that determines whether we have lost lock is the is_locked function in ISC_library.py (attachment 1). The current parameters for determining a lockloss rely on the H1:LSC-TR_X_NORM_INMON channel: once that channel's value falls below 500, ISC_LOCK recognizes that we have lost lock and takes us to the LOCKLOSS state.
Currently, when we are locked, LSC-TR_X_NORM_INMON sits right around 1540. When a lockloss happens, it takes ~0.55-0.65 s after the lockloss start (based on ASC-AS_A_DC_NSUM_OUT) for the channel to drop below 500, and then another ~0.07-0.15 s for ISC_LOCK to register this and take us to LOCKLOSS. To minimize this latency, and since LSC-TR_X_NORM_INMON never drops below 1450 once we reach NOMINAL_LOW_NOISE, Camilla and I think the LSC-TR_X_NORM_INMON threshold in is_locked should be raised significantly so that ISC_LOCK registers locklosses faster than it does currently. Raising the threshold to 1300, for example (attachment 2), would cut the time between the lockloss start and ISC_LOCK changing states by a factor of 2-3.
During locking, state 420 (CARM_TO_ANALOG) is the first state to refer to the value of TR_X to determine a lockloss, and every state after that also looks at TR_X. By the time we reach CARM_TO_ANALOG, the value of TR_X is already at 1500, but it dips down and sometimes approaches 1300 between states 508-557 (attachment 1). Because of this, we want to create a new dof, specifically for when we are in NOMINAL_LOW_NOISE, to pass into the is_locked function that checks that the value of TR_X is above 1300; see the sketch below.
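For illustration, here is a minimal sketch of how such a state-dependent check could look. This is not the actual ISC_library.py code: the 'XARM_NLN' dof name, the function structure, and passing ezca in as an argument are assumptions; only the channel name and the 500/1300 thresholds come from the discussion above.

    # Sketch only - not the real is_locked() in ISC_library.py.
    # Thresholds on H1:LSC-TR_X_NORM_INMON (normalized X-arm transmission):
    TRX_LOCK_THRESHOLD = 500       # current threshold used during acquisition
    TRX_NLN_LOCK_THRESHOLD = 1300  # proposed threshold for NOMINAL_LOW_NOISE

    def is_locked(dof, ezca):
        """Return True if the requested degree of freedom looks locked.
        'ezca' is assumed to be the Guardian EPICS access object."""
        if dof == 'XARM':
            # Loose threshold so partial buildups during acquisition
            # are not flagged as locklosses.
            return ezca['LSC-TR_X_NORM_INMON'] > TRX_LOCK_THRESHOLD
        elif dof == 'XARM_NLN':
            # TR_X stays above ~1450 in NOMINAL_LOW_NOISE, so a 1300
            # threshold trips ~2-3x sooner after a lockloss.
            return ezca['LSC-TR_X_NORM_INMON'] > TRX_NLN_LOCK_THRESHOLD
        else:
            raise ValueError('unknown dof: %r' % dof)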
Late entry.
Rogers Machinery technicians showed up Tuesday with the components to fix the leak on the compressor. The intercooler and aftercooler assemblies were removed, the cooling lines on both components were cleaned and inspected, all sealing surfaces were inspected and cleaned, and the system was then reassembled and tested; the technicians reported no issues. The compressor is back in usable form. The dry air system was turned on and left running for 1 hour; after the hour I measured a dew point of -41.7 °C right before the pressure regulator.
Note: the compressor needs its yearly maintenance, but this can't be completed right now because we are waiting on a back-ordered kit, expected perhaps by December.
The wind has been picking up, which seems to be dropping the range. We lost lock at 17:45UTC and have been struggling to relock ALS, particularly Yarm, due to the higher wind speeds. It's been over an hour and we're still not able to get ALS locked. Hopefully it'll calm down soon.
The forecast from windy.com unfortunately predicts the wind will get worse before it gets better this afternoon (peaks around 2pm).
I'm going to hold us in DOWN for a bit to wait out some of the wind.
Wed Sep 27 10:27:31 2023 INFO: Fill completed in 27min 26secs
Travis confirmed a good fill curbside. Long fill, close to the 30 min timeout; Gerardo is going to tune the LLCV setting.
Quick lockloss, no obvious reason, aside from very windy site conditions.
Following the confusion last Friday when tracing DAQ CRC errors on h1susauxh2 only to discover it had actually rebooted during the early morning power glitch, I have added an indicator to the CDS overview MEDM which shows if the model has been restarted in the past 24 hours. If the uptime days EPICS record is zero, a thin purple rectangle is shown third from the right. Attachment shows the models which were started during yesterday's maintenance.
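The same "restarted in the last 24 hours" condition can be cross-checked from the command line; a minimal pyepics sketch is below. The H1:FEC-<dcuid>_UPTIME_DAY channel naming pattern and the DCU IDs are assumptions for illustration; only the "uptime days == 0" test comes from the entry above.

    # Sketch only: list front-end models reporting zero uptime days,
    # i.e. restarted within the last 24 hours (assumed PV naming).
    from epics import caget

    def restarted_today(dcuids):
        recent = []
        for dcuid in dcuids:
            uptime_days = caget('H1:FEC-%d_UPTIME_DAY' % dcuid)  # assumed PV name
            if uptime_days == 0:
                recent.append(dcuid)
        return recent

    print(restarted_today([10, 11, 12]))  # hypothetical DCU IDs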
TITLE: 09/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 9mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.42 μm/s
QUICK SUMMARY:
TITLE: 09/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: H1 lost lock early in the shift and struggled to relock, but after running an alignment locking went smoothly. H1 has now been locked for over 5 hours.
Temperatures in the LVEA from the excursion this morning have recovered (trend attached).
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:21 | CAL | Tony | PCAL lab | LOCAL | PCAL work, in at 19:30 | 23:31 |
Lockloss @ 23:21 UTC - no obvious cause; LSC-DARM_IN1 saw the first movement.
Back to observing at 01:58 UTC
TITLE: 09/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 25mph Gusts, 15mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY:
Right after lockloss DIAG_MAIN was showing:
- PSL_ISS: Diffracted power is low
- OPLEV_SUMS: ETMX sums low
05:41UTC LOCKING_ARMS_GREEN: the detector couldn't see ALSY at all. I noticed L2 saturations on the ETMs/ITMs (attachment 1, L2 ITMX MASTER_OUT as an example) on the saturation monitor (attachment 2), and noticed the ETMY oplev moving around wildly (attachment 3).
05:54 I took the detector to DOWN and could immediately see ALSY on the cameras and ndscope; the L2 saturations were all still there.
05:56 and 06:05 I went to LOCKING_ARMS_GREEN again and GREEN_ARMS_MANUAL, respectively, but wasn't able to lock anything. I could get to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL without the saturations getting too bad, but both ALSX and ALSY eventually locked for a few seconds each, then unlocked and went to basically 0 on the cameras and ndscope (attachment 4). Sometime after this, ALSX and ALSY went to FAULT with the messages "PDH" and "ReflPD A".
06:07 Tried going to INITIAL_ALIGNMENT but Verbal kept calling out all of the optics as saturating, ALIGN_IFO wouldn't move past SET_SUS_FOR_ALS_FPMI, and the oplev overview screen showed ETMY, ITMX, and ETMX moving all over the place (I was worried something would go really wrong if I left it in that configuration)
I tried waiting a bit and then tried INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN again but had the same results.
Naoki came on and thought the saturations might be due to the camera servos, so he cleared the camera servo history; all of the saturations disappeared (attachment 5), and an INITIAL_ALIGNMENT is now running fine.
Not sure what caused the saturations in the camera servos, but it'll be worth looking into whether the saturations are what ended up causing this lockloss.
As shown in the attached figure, the output of the camera servo for PIT suddenly became 1e20 and the lockloss happened 0.3 s later, so this lockloss is likely due to the camera servo. I checked the input signal of the camera servo. The bottom left is the input of PIT1 and should be the same as the BS camera output (bottom right). The BS camera output itself looks OK, but the input of PIT1 became a red dashed line and the output became 1e20. The situation is the same for PIT2 and PIT3. I am not sure why this happened.
Opened FRS29216. It looks like a transient NaN on the ETMY_Y camera centroid channel caused a latching of 1e20 on the ASC CAM_PIT filter module outputs.
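As a side note, the "latching" behavior itself is easy to reproduce offline: once a NaN enters a recursive (IIR) filter's history it contaminates every subsequent output until the history is cleared, which is consistent with the saturations only going away when the camera servo history was cleared above. The sketch below is illustrative only; the substitution of 1e20 is the front end's handling of the bad output and is not reproduced here.

    # Illustrative only: a single NaN sample latches an IIR filter's output.
    import numpy as np
    from scipy.signal import lfilter

    b, a = [0.1], [1.0, -0.9]   # simple first-order low-pass (arbitrary)
    x = np.ones(10)
    x[3] = np.nan               # one transient NaN on the input
    y = lfilter(b, a, x)
    print(y)                    # every output sample from index 3 onward is NaN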
Camilla, Oli
We trended the ASC-CAM_PIT{1,2,3}_INMON/OUTPUT channels alongside ASC-AS_A_DC_NSUM_OUT_DQ (attachment 1) and found that, according to ASC-AS_A_DC_NSUM_OUT_DQ, the lockloss started (cursor 1) ~0.37 s before PIT{1,2,3}_INMON read NaN (cursor 2), so the lockloss was not caused by the camera servos.
It is possible that the NaN values are linked to the light dropping off of the PIT2 and PIT3 cameras right after the lockloss: when the cameras come back online ~0.2 s later, both the PIT2 and PIT3 cameras read 0, and looking back over several locklosses, these two cameras tend to drop to 0 between 0.35 and 0.55 s after the lockloss starts. However, the PIT1 camera still registers light for another 0.8 s after coming back online (typical for this camera).
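For anyone wanting to repeat this timing comparison, a minimal gwpy sketch is below (the GPS time is a placeholder and NDS2 access to these channels is assumed):

    # Sketch only: pull the channels around the lockloss and overlay them
    # to read off the relative timing of the drop and the camera NaN/dropout.
    from gwpy.timeseries import TimeSeriesDict

    t_lockloss = 1379999999  # placeholder GPS time of the lockloss
    channels = [
        'H1:ASC-AS_A_DC_NSUM_OUT_DQ',  # lockloss reference
        'H1:ASC-CAM_PIT1_INMON',
        'H1:ASC-CAM_PIT2_INMON',
        'H1:ASC-CAM_PIT3_INMON',
    ]
    data = TimeSeriesDict.get(channels, t_lockloss - 2, t_lockloss + 2)
    plot = data.plot()
    plot.show()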
Patrick updated the camera server to solve the issue in alog73228.
Back observing as of 05:04 UTC
I don't see anything concerning or weird in the lockloss select scope signals; there's a ~3.5 Hz oscillation in DARM and PRCL.