Today Jason and I continued with the CO2 beam scan work in 74760. We did a rough alignment along the profiler, swapping the final mirror from 1" to 2" for easier alignment, and checked that the profiler could see the beam. With the 7.5" FL lens on the front, the beam is too large for the profiler after ~300mm. It's also quite large before the lens.
Parts are stored in the same locations as last week; optics and the Nanoscan are still in the CO2X enclosure. The first optic was moved out of the beam path, but we drew around its base with red pen. We should think carefully about whether this data will be useful before continuing, as we will need to find a different lens to successfully scan.
Rahul, Camilla, Jonathan, Erik, Dave:
At 07:33 PST during a measurement this morning the ETMX test mass was set into motion which exceeded the user-model, SWWD and HWWD RMS trigger levels. This was very similar to the 02 Dec 2023 event which eventually led to the tripping of the ETMX HWWD.
The 02 Dec event details can be found in T2300428
Following that event, it was decided to reduce the time the SUS SWWD takes to issue a local SUS DACKILL from 20 minutes to 15 minutes. It was this change which prevented the ETMX HWWD from tripping today.
The attached time plot shows the details of today's watchdog events.
The top plot (green) shows the h1susetmx user model's M0 watchdog input RMS channels and the trigger level (black) of 25000
The second plot (blue) shows the h1susetmx user model's R0 watchdog input RMS channels and the trigger level (black) of 25000
The lower plot shows the HWWD countdown minutes (black), the SUS SWWD state (red), and the SEI SWWD state (blue)
The timeline is:
07:33 ETMX is rung up, the M0 watchdog exceeds its trigger level and trips, the R0 watchdog almost reaches its trigger level but does not trip.
At this point we have a driven R0 and an undriven M0, which was also the case on 02 Dec; this keeps ETMX rung up above the SWWD and HWWD trigger levels
The HWWD starts its 20 minute countdown
The SWWD starts its 5/15 minute countdown
+5min: SEI SWWD starts its 5 minute countdown
+10min: SEI SWWD issues DACKILL, no change to motion
+15min: SUS SWWD issues DACKILL, R0 drive is removed which resolves the motion
HWWD stops its countdown with almost 5 minutes to spare (a sketch of this timing follows below).
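To make the timing explicit, here is a minimal sketch (not site code) using only the timeouts quoted above, showing why moving the SUS SWWD DACKILL from 20 to 15 minutes keeps the HWWD from tripping:

HWWD_TIMEOUT_MIN = 20   # hardware watchdog countdown, started at the trip
# The SEI SWWD DACKILL at +10 min does not remove the R0 drive, so only the
# SUS SWWD DACKILL time matters for whether the HWWD runs out.

def hwwd_margin(sus_dackill_min):
    """Minutes left on the HWWD when the SUS SWWD DACKILL removes the R0 drive."""
    return HWWD_TIMEOUT_MIN - sus_dackill_min

for sus_dackill_min in (20, 15):
    margin = hwwd_margin(sus_dackill_min)
    outcome = "HWWD trips" if margin <= 0 else "HWWD stops with about %d min to spare" % margin
    print("SUS SWWD DACKILL at +%d min: %s" % (sus_dackill_min, outcome))
# 20 min (the old setting): no margin, the HWWD trips as it did on 02 Dec 2023.
# 15 min (current setting): ~5 minutes to spare, matching today's event.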
We have opened a work permit to reduce the SUS quad models' R0 trigger level so that M0 and R0 hopefully always trip together, which will prevent this in the short term. The longer-term solution requires a model change to alter the DACKILL logic.
During this timeline I also cleared filter history on L2_LOCK_L (very high counts before clearing) and M0_DAMP_L (no difference after clearing); details in 74889.
The channels used for the calibration measurement injections are listed in LHO:74919.
Dave, Rahul
We lost lock this afternoon and I took this opportunity to quickly implement the R0 watchdog changes. The new thresholds are given below:
ITMX R0 chain WD rms threshold - 20k counts
ITMY R0 chain WD rms threshold - 20k counts
ETMX R0 chain WD rms threshold - 18k counts
ETMY R0 chain WD rms threshold - 18k counts
I have accepted the above changes in the SDF and posted the screenshot below.
The threshold limit for the ETMs is lower than that for the ITMs, based on the ndscope trends for the last 30 days: the safe limit for the ITMs looks to be around 20k and for the ETMs around 18k.
A longer-term safety fix will be implemented in January 2024 by making some model changes.
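For reference, a hypothetical sketch of how thresholds like these could be set with pyepics; the channel names below are placeholders, not the real R0 watchdog threshold channels, and any such change then has to be accepted in SDF:

from epics import caput   # pyepics

# Placeholder channel names; the real R0 watchdog RMS threshold channels differ.
thresholds = {
    "H1:SUS-ITMX_R0_WD_RMS_MAX": 20000,
    "H1:SUS-ITMY_R0_WD_RMS_MAX": 20000,
    "H1:SUS-ETMX_R0_WD_RMS_MAX": 18000,
    "H1:SUS-ETMY_R0_WD_RMS_MAX": 18000,
}

for channel, counts in thresholds.items():
    caput(channel, counts)   # each change then has to be accepted in SDF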
WP 11587 Closed.
WP 11566
A thermocouple was installed on the floor between BSC2 and BSC3 to monitor temperature deviations from the LVEA floor and LVEA ambient air. Thermocouple connected to LX vacuum slow controls chassis, Terminal M21, Channel 4 (Connector G, pins 29 & 30 on side of rack).
F. Clara, G. Moreno
Using the Fluke 62 Mini Thermometer (IR), I checked the status of the input and output air on all of the Kepco power supplies at EX, EY, and the CER Mezzanine, per WP11586.
EX and CER tested normal. Air flow is good, all supplies humming away, no odd vibrations.
At EY, ISC-YC1-TCS -24V supply looked to have a stuck fan. The supply is part of the ISC-YC1-TCS +/- 24V pair located in slot 18 of the C2 Power rack on the right hand side (RHS).
Ambient air at EY is 65F, air measured at the fan bracket is 110F, and output air measured at 80F. This supply shows -24V at 0A; the low current draw is the reason it has not tripped already.
This pair of supplies powers a pair of RF oscillators and RF amplifiers at EY. It is part of the Low Noise Power system, so its current draw on the +/-24V is very low, as it is a reference.
The last checkup of the supplies was September 19th (ALOG72968); all supplies checked out normal then, and no issues were reported between then and now.
We replaced the supply with an upgraded spare that has a new ball-bearing fan installed, per WP11588, and we can hope for smooth sailing through the Christmas break.
Kepco with Failed Fan SN = S1300290
Kepco with Ball Bearing Fan SN = S1201931
M. Pirello, F. Clara
Lights and Mega CR light off, paging system unplugged, WAP off.
As TJ found last week, Robert's shaker is still connected near HAM2; it is plugged in but powered off.
All else looked good, followed T1500386
Maintenance day ran a bit long due to an issue with the HEPI pump station at EX and an emergency power supply swap at EY, but all activities have now finished and H1 has started initial alignment.
H1 has started observing as of 22:35 UTC
Seismon and several other IOCs, including lveatemps, picket fence, and external alerts, were restarted.
Camilla, Ansel
At 17:46 UTC, Camilla changed the ITMX and ITMY HWS to 'sem 3', which is external trigger mode. This seemed to work and stopped the code from taking photos.
It appears that the HWS-associated combs are gone in the magnetometer witness channel after this change. Pre/post 1-hour spectra attached. (All known HWS-associated combs overlaid, just to check-- the near-7Hz is the one actually present in the "pre" spectrum which corresponds to current HWS sync frequency settings.)
Nice work Camilla and Ansel! Let's hope this solves the problem for good.
At 17:38 UTC I turned on the camera/CLink and restarted the HWS EX code, which had been off since 74738.
At 17:46 UTC I changed ITMX and ITMY to 'sem 3' and left them in this configuration, so they are not taking HWS data now. We still need to write a script that reads the H1:GRD-ISC_LOCK_STATE_N state and sets the camera mode to sem 2 when we are in states < 580 (locking) and sem 3 when > 580 (locked); a sketch is below. This is not trivial, as the camera computers are separate from EPICS.
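As a starting point, here is a hedged sketch of what that script might look like; the set_camera_mode() helper is a placeholder, since pushing the mode to the camera computers (which are not on EPICS) is exactly the non-trivial part:

import time
from epics import caget   # pyepics

LOCKED_THRESHOLD = 580   # ISC_LOCK state number separating locking from locked

def set_camera_mode(mode):
    # Placeholder: the HWS camera computers are not on EPICS, so this would
    # need to push 'sem 2' / 'sem 3' to them some other way (e.g. over ssh).
    print("would set ITMX/ITMY HWS cameras to '%s'" % mode)

last_mode = None
while True:
    state = caget("H1:GRD-ISC_LOCK_STATE_N")
    mode = "sem 3" if state is not None and state > LOCKED_THRESHOLD else "sem 2"
    if mode != last_mode:
        set_camera_mode(mode)   # sem 2 while locking, sem 3 (external trigger) once locked
        last_mode = mode
    time.sleep(10)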
Sheila, Naoki, Vicky - SQZ OMC mode scans with cold OM2
Taking SQZ-OMC mode scans, using DTT template saved in $(userapps)/sqz/h1/Templates/dtt/OMC_SCANS/Dec19_2023_PSAMS_OMC_scan_coldOM2.xml
PSAMS 200/200, cold OM2
PSAMS 120/120, cold OM2
Dark
1.9 W PSL input power, PSL-OMC mode scans, Cold OM2 - Sheila
with sidebands ON:
with sidebands OFF:
Dark: 1387052098 - 1387052350
Total scans from today here with zoom-in on SQZ/PSL scans.
Tue Dec 19 10:06:51 2023 INFO: Fill completed in 6min 47secs
Over the past few weeks we've occasionally been seeing the notice on DIAG_MAIN that the IMC WFS need to be centered, so I did that during the maintenance window this morning. The process was as follows:
I'm leaving the IMC offline for SQZ/OMC scans.
Summary of strain coherence with all channels: https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_1386992886_STRAIN/
It looks like there is no coherence with MICH or SRCL
At low frequency, the usual coherence with DHARD_Y
The only noticeable difference now with respect to hot OM2 is the increased coherence with beam jitter
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_1386992886_STRAIN_CLEAN/
Coherences for the CLEAN channel, with jitter removed
TITLE: 12/18 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
IFO is LOCKING and at CARM_TO_ANALOG
16:09 UTC - Lockloss from NLN - alog 74864
16:48 UTC - Lockloss at Max_Power - due to 6.0 EQ in China
17:00(ish) UTC - Power cycled Nuc35 due to the screen crashing and not responding.
17:12 UTC - Fire alarm panel went off. Tyler called a few seconds after saying it was a false alarm/to be expected due to work he was doing. Alarm stopped shortly after
18:00 UTC Reached OMC_Whitening but violins are quite high - NLN reached at 19:07 UTC. It took 1 hour and 7 mins for violins to be low enough to go through whitening (Tagging SUS).
19:07 UTC - Reached OBSERVING
22:01 UTC - COMMISSIONING - went into planned commissioning. Expected to continue until around 00:00 UTC.
22:35 - 22:39 UTC - Took a broadband calibration measurement. (screenshot)
23:46 UTC - Lockloss during commissioning during Simulines calibration - alog 74874
Other: MX/Woodshop Access Door Issue - people with access can’t get in. Fil is working on it and seems to have fixed it.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 23:11 | EE | Nutsinee | Optics Lab | Local | Running tests | 00:04 |
| 17:59 | SUS | Randy | CS, EX, EY | N | Site tour of all locks | 19:28 |
| 17:21 | SUS | Randy | MX | N | Inventory | 19:25 |
| 17:40 | FAC | Kim | MX | N | Technical Cleaning | 19:25 |
| 16:41 | FAC | Karen | Optics Lab, MY | N | Technical Cleaning | 18:16 |
The broadband measurement mentioned is located at /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231218T223413Z.xml. We took this measurement to evaluate the state of the calibration with OM2 in the cold state. OM2 was cooled off again in LHO:74861.
As Ryan did last week (74741), I have taken PEM_MAG_INJ and SUS_CHARGE from WAITING to DOWN so that they do not run tomorrow. Instead, tomorrow Louis will try the risky DARM loop swaps and calibration. To re-enable the automated measurements, the nodes should be requested to INJECTIONS_COMPLETE before next Tuesday.
I've requested both the SUS_CHARGE and PEM_MAG_INJ nodes to INJECTIONS_COMPLETE so that the automated injections will run again starting next Tuesday.
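For reference, a hedged sketch of one way these node requests could be made from Python, assuming the usual H1:GRD-<NODE>_REQUEST channels (normally this is done from the guardian MEDM screens, so treat the exact channel names and request strings as assumptions):

from epics import caput   # pyepics

NODES = ("SUS_CHARGE", "PEM_MAG_INJ")

def request(node, state):
    """Write a guardian request for the given node."""
    caput("H1:GRD-%s_REQUEST" % node, state)

# To keep the automated injections from running on maintenance day:
for node in NODES:
    request(node, "DOWN")

# Later, to re-enable them before the next Tuesday:
# for node in NODES:
#     request(node, "INJECTIONS_COMPLETE")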
Here's a comparison of Pcal to DeltaL External and GDS Calib Strain broadband measurements with OM2 hot and cold.
File path to DTT template: /ligo/home/dana.jones/Documents/OM2_heating/OM2_hot_v_cold.xml
Dana, Louis
These broadband measurements were taken as part of the regular calibration sweeps when H1 was fully thermalized. To get the measurements, I used this command (for a sample report ID):
pydarm ls -r 20230802T000812Z | grep PCALY2DARM_BB | grep pcal
which returned:
>> pcal: /ligo/groups/cal/H1/reports/20230802T000812Z/PCALY2DARM_BB_20230727T161527Z.xml
Then to get the exact start time of the injection I used:
grep TestTime /ligo/groups/cal/H1/reports/20230802T000812Z/PCALY2DARM_BB_20230727T161527Z.xml
See this link for a list of calibration measurements. To calibrate the Delta L External and Pcal measurements I used the following files found in /ligo/home/dana.jones/Documents/OM2_heating/:
deltal_external_calib_dtt_20230621T211522Z.txt
deltal_external_calib_dtt_20230628T015112Z.txt
deltal_external_calib_dtt_20230716T034950Z.txt
deltal_external_calib_dtt_20230823T213958Z.txt
deltal_external_calib_dtt_20230830T213653Z.txt
deltal_external_calib_dtt_20231027T203619Z.txt
pcal_calib_dtt_20230621T211522Z.txt
pcal_calib_dtt_20230628T015112Z.txt
pcal_calib_dtt_20230716T034950Z.txt
pcal_calib_dtt_20230823T213958Z.txt
pcal_calib_dtt_20230830T213653Z.txt
pcal_calib_dtt_20231027T203619Z.txt
These files were generated using the following two commands (again, for a sample report ID):
pydarm export -r 20230802T000812Z --deltal_external_calib_dtt
and
pydarm export -r 20230802T000812Z --pcal_calib_dtt
Note: For the most recent curve (23/12/10), I applied the 23/10/27 calibration TF as this was the latest valid one available.
In addition, for GDS_CALIB_STRAIN I applied a gain of 3995 in the Poles/zeros tab to convert strain to meters (3995 m being roughly the arm length; see .xml file).
User note for the calibration tab in DTT: when applying different calibration transfer functions to each curve, make sure you set the start time appropriately. You can't just use the same time for all of them, or the system won't know which one to choose; for each measurement, set the corresponding calibration start time to, say, 1 day before the measurement.
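For anyone wanting to apply these exported calibration TFs outside DTT, here is a hedged sketch; the three-column (frequency, real, imaginary) layout is an assumption about the exported text format, so check the file header before using it:

import numpy as np

def load_calib_tf(path):
    """Load an exported calibration TF and return (frequency, complex TF)."""
    data = np.loadtxt(path)            # assumed columns: freq, real, imag
    return data[:, 0], data[:, 1] + 1j * data[:, 2]

def apply_calib(meas_freq, meas_asd, calib_path):
    """Interpolate the calibration TF magnitude onto the measurement bins and apply it."""
    f_cal, tf = load_calib_tf(calib_path)
    return meas_asd * np.interp(meas_freq, f_cal, np.abs(tf))

# e.g. apply_calib(freq, deltal_asd, "deltal_external_calib_dtt_20231027T203619Z.txt")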
Here's a PCALY-to-GDS_CALIB_STRAIN broadband comparison with OM2 in the hot state (gold trace, 2023-12-10) and in the cold state (cyan trace, 2023-12-18), see bb_hot_cold_om2.png. Both measurements were taken while the IFO was thermalized. Pcal corrections have been applied to PCALY_RX_PD_OUT. The two traces don't line up exactly, but their differences are at the percent level. Sheila is pulling up GDS_CALIB_STRAIN spectra from before and after the OM2 cooling for comparison. This plot suggests she will be able to overlay and compare the two, as long as we're not interested in making any determinations at the ~few percent level.
TITLE: 12/18 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
- Following the EQ lockloss from earlier, H1 is back to NLN @ 0:16/OBSERVE @ 0:36 (SQZ was having trouble tuning the sqz angle due to commissioner changes, but Naoki reverted these changes so we should have no further issue)
- 0:29 - inc 4.9 EQ from Mexico
- Other than the EQ it was an uneventful night, nothing else to report
LOG:
No log for this shift.
The SQZ trouble happened after I changed the ADF servo flag in sqzparams to False. I wanted to see how SQZ drifts without the ADF servo, but FC IR kept unlocking after this change, so I reverted it. I am not sure why FC IR became unstable without the ADF servo.
We found this was due to incorrect logic in SQZ_MANAGER. If the use_sqz_angle_adjust flag is False, the node never returns True and drops down to SQZ_READY_IFO, which unlocked the FC after 120s because it thought it was failing to reach IR_LOCKED. Edited in svn 26967; a sketch of the buggy versus fixed logic is below.
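To illustrate the failure mode, here is a paraphrased sketch (not the actual SQZ_MANAGER code), written in the guardian style where a state's run() returning True means the state is complete; the function names around the flag are placeholders:

use_sqz_angle_adjust = False   # flag from sqzparams

def adjust_sqz_angle():
    pass   # placeholder for the ADF-servo angle adjustment

# Buggy logic: with the flag False the state never returns True, so the
# manager times out after ~120 s, decides IR locking failed, and drops to
# SQZ_READY_IFO, unlocking the FC.
def run_buggy():
    if use_sqz_angle_adjust:
        adjust_sqz_angle()
        return True
    # no return here -> state never completes when the flag is False

# Fixed logic (the intent of the svn 26967 edit): complete the state whether
# or not the angle adjustment is enabled.
def run_fixed():
    if use_sqz_angle_adjust:
        adjust_sqz_angle()
    return True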