Rahul, Camilla, Jonathan, Erik, Dave:
At 07:33 PST during a measurement this morning the ETMX test mass was set into motion which exceeded the user-model, SWWD and HWWD RMS trigger levels. This was very similar to the 02 Dec 2023 event which eventually led to the tripping of the ETMX HWWD.
The 02 Dec event details can be found in T2300428.
Following that event, it was decided to reduce the time the SUS SWWD takes to issue a local SUS DACKILL from 20 minutes to 15 minutes. It was this change which prevented the ETMX HWWD from tripping today.
The attached time plot shows the details of today's watchdog events.
The top plot (green) shows the h1susetmx user model's M0 watchdog input RMS channels, with the trigger level (black) of 25000.
The second plot (blue) shows the h1susetmx user model's R0 watchdog input RMS channels, with the trigger level (black) of 25000.
The lower plot shows the HWWD countdown minutes (black), the SUS SWWD state (red), and the SEI SWWD state (blue).
The timeline is:
07:33 ETMX is rung up; the M0 watchdog exceeds its trigger level and trips, while the R0 watchdog almost reaches its trigger level but does not trip.
At this point we have a driven R0 and an undriven M0, which was also the case on 02 Dec; this keeps ETMX rung up above the SWWD and HWWD trigger levels.
The HWWD starts its 20 minute countdown
The SWWD starts its 5/15 minute countdown
+5min: SEI SWWD starts its 5 minute countdown
+10min: SEI SWWD issues DACKILL, no change to motion
+15min: SUS SWWD issues DACKILL, R0 drive is removed which resolves the motion
HWWD stops its countdown with almost 5 minutes to spare.
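As a back-of-envelope sketch (not site code; durations taken from the timeline above, in minutes after the trip), the reason the 20-to-15 minute SUS SWWD change saved the HWWD today can be written as:

```python
# Hedged sketch from the timeline above: the HWWD trips if the motion
# persists for its full countdown; the SUS SWWD DACKILL removes the R0
# drive, which resolves the motion and stops the HWWD countdown.
HWWD_LIMIT = 20  # minutes: HWWD countdown length

def hwwd_margin(sus_swwd_dackill_min):
    """Minutes to spare before the HWWD would have tripped, given when
    the SUS SWWD issues its local DACKILL."""
    return HWWD_LIMIT - sus_swwd_dackill_min

assert hwwd_margin(15) == 5   # today's 15-min setting: ~5 minutes to spare
assert hwwd_margin(20) <= 0   # the old 20-min setting: HWWD trips, as on 02 Dec
```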
We have opened a work permit to reduce the SUS quad models' R0 trigger level so that M0 and R0 hopefully always trip together, which will prevent this in the short term. The longer-term solution requires a model change to alter the DACKILL logic.
During this timeline I also cleared filter history on L2_LOCK_L (very high counts before clearing) and M0_DAMP_L (no difference after clearing); details in 74889.
The channels used for the calibration measurement injections are listed in LHO:74919.
Dave, Rahul
We lost lock this afternoon and I took this opportunity to quickly implement the R0 watchdog changes. The new thresholds are given below:
ITMX R0 chain WD rms threshold - 20k counts
ITMY R0 chain WD rms threshold - 20k counts
ETMX R0 chain WD rms threshold - 18k counts
ETMY R0 chain WD rms threshold - 18k counts
I have accepted the above changes in the SDF and posted the screenshot below.
The threshold limit for the ETMs is lower than that of the ITMs based on the ndscope trends for the last 30 days: the safe limit for the ITMs seems to be around 20k counts and for the ETMs 18k counts.
A longer-term safety fix will be implemented in January 2024 via some model changes.
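A minimal sketch of the threshold choice above (the helper name is hypothetical, not site code; the safe limits are the trend-derived values from this entry, capped at the previous 25k setting):

```python
# Trend-derived safe limits from the 30-day ndscope trends (counts).
SAFE_LIMITS = {
    'ITMX': 20000, 'ITMY': 20000,
    'ETMX': 18000, 'ETMY': 18000,
}

def r0_threshold(optic, cap=25000):
    """New R0 chain WD RMS threshold: the trend-derived safe limit for
    this optic, never above the previous 25k-count setting."""
    return min(SAFE_LIMITS[optic], cap)

assert r0_threshold('ETMX') == 18000
assert r0_threshold('ITMY') == 20000
```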
WP 11587 Closed.
WP 11566
A thermocouple was installed on the floor between BSC2 and BSC3 to monitor temperature deviations from the LVEA floor and LVEA ambient air. Thermocouple connected to LX vacuum slow controls chassis, Terminal M21, Channel 4 (Connector G, pins 29 & 30 on side of rack).
F. Clara, G. Moreno
Using the Fluke 62 Mini Thermometer (IR), I checked the status of the input and output air on all of the Kepco power supplies at EX, EY, and the CER Mezzanine per WP11586.
EX and CER tested normal. Air flow is good, all supplies humming away, no odd vibrations.
At EY, the ISC-YC1-TCS -24V supply looked to have a stuck fan. The supply is part of the ISC-YC1-TCS +/-24V pair located in slot 18 of the C2 power rack on the right hand side (RHS).
Ambient air at EY is 65F, air measured at the fan bracket is 110F, and output air measured at 80F. This supply shows -24V at 0A; the low current draw is the reason it has not tripped already.
This pair of supplies powers a pair of RF oscillators and RF amplifiers at EY. This is part of the low-noise power system, thus its current draw on the +/-24V is very low, as it is a reference.
The last checkup of the supplies was September 19th, linked here: ALOG72968; all supplies checked out normal. No issues were reported between then and now.
We replaced the supply with an upgraded spare with a new ball-bearing fan installed per WP11588, and we can hope for smooth sailing through the Christmas break.
Kepco with Failed Fan SN = S1300290
Kepco with Ball Bearing Fan SN = S1201931
M. Pirello, F. Clara
Lights and Mega CR light off, paging system unplugged, WAP off.
As TJ found last week, Robert's shaker is still connected near HAM2; it is plugged in but powered off.
All else looked good, followed T1500386
Maintenance day ran a bit long due to an issue with the HEPI pump station at EX and an emergency power supply swap at EY, but all activities have now finished and H1 has started initial alignment.
H1 has started observing as of 22:35 UTC
Seismon and several other IOCs, including lveatemps, picket fence, and external alerts, were restarted.
Camilla, Ansel
At 17:46 UTC, Camilla changed the ITMX and ITMY HWS to 'sem 3', which is external trigger mode. This seemed to work and stopped the code from taking photos.
It appears that the HWS-associated combs are gone in the magnetometer witness channel after this change. Pre/post 1-hour spectra attached. (All known HWS-associated combs overlaid, just to check-- the near-7Hz is the one actually present in the "pre" spectrum which corresponds to current HWS sync frequency settings.)
Nice work Camilla and Ansel! Let's hope this solves the problem for good.
At 17:38 UTC I turned on the camera/CLink and restarted the HWS EX code, which had been off since 74738.
At 17:46 UTC I changed ITMX and ITMY to 'sem 3' and left them in this configuration so that they are not taking HWS data now. We now need to write a script to read the H1:GRD-ISC_LOCK_STATE_N state and adjust the camera mode to be sem 2 when we are in states < 580 (locking) and sem 3 when > 580 (locked). This is not trivial, as the camera computers are separate from EPICS.
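A hedged sketch of the script described above: the EPICS channel name and the 580 state boundary are from this entry, but the camera-command hook is a placeholder, since the camera computers are not on the EPICS network and how to reach them from an EPICS-capable host is exactly the non-trivial part.

```python
# Sketch only, not site code: poll ISC_LOCK's state number and switch the
# HWS camera between free-run ('sem 2', taking data while locking) and
# external-trigger ('sem 3', idle once locked).
import time

LOCKED_STATE = 580  # ISC_LOCK state number; > 580 means locked (per the log)

def desired_mode(state):
    """'sem 3' once locked (states > 580), 'sem 2' while locking."""
    return 'sem 3' if state > LOCKED_STATE else 'sem 2'

def set_camera_mode(mode):
    # Placeholder: would forward the 'sem N' command to the camera host
    # (serial/ssh interface to be worked out).
    print('would set camera to:', mode)

def monitor(poll_s=10):
    import epics  # pyepics; assumes this host can reach EPICS
    last = None
    while True:
        mode = desired_mode(epics.caget('H1:GRD-ISC_LOCK_STATE_N'))
        if mode != last:
            set_camera_mode(mode)
            last = mode
        time.sleep(poll_s)
```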
Sheila, Naoki, Vicky - SQZ OMC mode scans with cold OM2
Taking SQZ-OMC mode scans, using DTT template saved in $(userapps)/sqz/h1/Templates/dtt/OMC_SCANS/Dec19_2023_PSAMS_OMC_scan_coldOM2.xml
PSAMS 200/200, cold OM2
PSAMS 120/120, cold OM2
Dark
1.9 W PSL input power, PSL-OMC mode scans, Cold OM2 - Sheila
with sidebands ON:
with sidebands OFF:
Dark: 1387052098 - 1387052350
Total scans from today here with zoom-in on SQZ/PSL scans.
Tue Dec 19 10:06:51 2023 INFO: Fill completed in 6min 47secs
Over the past few weeks we've occasionally been seeing the notice on DIAG_MAIN that the IMC WFS need to be centered, so I did that during the maintenance window this morning. The process was as follows:
I'm leaving the IMC offline for SQZ/OMC scans.
TITLE: 12/19 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 8mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
After a test this morning caused H1 to lose lock at 15:33 (see alog74887), maintenance activities have begun. Camilla is in the process of untripping SEI WDs.
Camilla, Louis
We dropped out of Observing at 7am PST to test the new DARM loop config:
- The new DARM loop config, reached by going to NLN_ETMY -> NEW_DARM, seems to be stable. We were able to sit there for several minutes without issue.
- Camilla took a quick look at MICH FF settings while in NEW_DARM. She will follow up with more info on that.
- We lost lock again while trying to run cal measurements in the NEW_DARM state. Still trying to figure out what's happening there; clearly reducing the amplitudes by 50% (LHO:74883) wasn't nearly enough. It seems we kicked the IFO hard enough for the ISI_ETMX watchdogs to trip.
- Due to the lockloss, we did not get to test the DARM_RECOVER guardian state.
When the excitations started, the ETMX HEPI, ISI and SUS WDs tripped and (~5 minutes later) the hardware watchdogs for both SEI and SUS.
With all watchdogs tripped, I cleared the history of the L2 LOCK L stage of ETMX as this was outputting very large numbers. M0 DAMP also seemed to be outputting very large numbers; I tried clearing its history, but it seems the suspension was physically just moving a LOT, plot attached.
Rahul, via phone, walked me through checking that ETMX is ready to be untripped: output counts changing by hundreds, not tens of thousands. We then reset the SUS SWWD (cds > SWWD > ETMX) and untripped the SUS ETMX WD. Rahul said if it was still ringing very badly we would turn on the M0 DAMP damping loops one by one, but they already looked on, so I took the guardian to DAMPED then ALIGNED.
After we got the ETMX SUS back to usual, I reset the ETMX SEI SWWD and followed the procedure to untrip SEI (from the wiki: HEPI, then ISI). Once this was done Ryan took us to maintenance mode.
The new DARM configuration is very effective in reducing the ESD RMS, as shown in the plot below. The blue curve is one hour of the normal configuration, showing the ESD total RMS as a function of time. The orange trace is during the test: the first few hundred seconds are still the nominal configuration, while between ~1000 and ~2000 seconds the new DARM configuration is running. The RMS is reduced well below the minimum of the normal configuration.
Unfortunately there was no clean data collected during this time to see the effect on the strain.
The MICH FF FM4 that we fit for cold OM2 (74877) was actually very good for this new DARM configuration (green trace), much better than the FM1 we fit for the new DARM configuration (74817).
DARM didn't look great at 30-100 Hz in either configuration; maybe this is because the calibration is wrong.
The simulines measurements were injected into the following channels:
H1:LSC-DARM1_EXC
H1:CAL-PCALY_SWEPT_SINE_EXC
H1:SUS-ETMX_L1_CAL_EXC
H1:SUS-ETMX_L2_CAL_EXC
H1:SUS-ETMX_L3_CAL_EXC
Injections began at approximately GPS 1387035241.
Summary of strain coherence with all channels: https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_1386992886_STRAIN/
It looks like there is no coherence with either MICH or SRCL.
At low frequency, there is the usual coherence with DHARD_Y.
The only noticeable difference now with respect to hot OM2 is the increased coherence with beam jitter.
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_1386992886_STRAIN_CLEAN/
Coherences for the CLEAN channel, with jitter removed
TITLE: 12/18 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
IFO is LOCKING and at CARM_TO_ANALOG
16:09 UTC - Lockloss from NLN - alog 74864
16:48 UTC - Lockloss at Max_Power - due to 6.0 EQ in China
17:00(ish) UTC - Power cycled Nuc35 due to the screen crashing and not responding.
17:12 UTC - Fire alarm panel went off. Tyler called a few seconds after saying it was a false alarm/to be expected due to work he was doing. Alarm stopped shortly after
18:00 UTC - Reached OMC_Whitening, but violins are quite high - NLN reached at 19:07 UTC. It took 1 hour and 7 mins for violins to be low enough to go through whitening (tagging SUS).
19:07 UTC - Reached OBSERVING
22:01 UTC - COMMISSIONING - went into planned commissioning. Expected to continue until around 00:00 UTC.
22:35 - 22:39 UTC - Took a broadband calibration measurement. (screenshot)
23:46 UTC - Lockloss during commissioning during Simulines calibration - alog 74874
Other: MX/Woodshop Access Door Issue - people with access can’t get in. Fil is working on it and seems to have fixed it.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 23:11 | EE | Nutsinee | Optics Lab | Local | Running tests | 00:04 |
| 17:59 | SUS | Randy | CS, EX, EY | N | Site tour of all locks | 19:28 |
| 17:21 | SUS | Randy | MX | N | Inventory | 19:25 |
| 17:40 | FAC | Kim | MX | N | Technical Cleaning | 19:25 |
| 16:41 | FAC | Karen | Optics Lab, MY | N | Technical Cleaning | 18:16 |
the broadband measurement mentioned is located at /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231218T223413Z.xml. We had this measurement taken to evaluate the state of the calibration with OM2 in the cold state. The OM2 was cooled off again in LHO:74861.
As Ryan did last week (74741), I have taken PEM_MAG_INJ and SUS_CHARGE from WAITING to DOWN so that they do not run tomorrow. Instead, tomorrow Louis will try the risky DARM loop swaps and calibration. To re-enable the automated measurements, the nodes should be requested to INJECTIONS_COMPLETE before next Tuesday.
I've requested both the SUS_CHARGE and PEM_MAG_INJ nodes to INJECTIONS_COMPLETE so that the automated injections will run again starting next Tuesday.