TITLE: 01/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 13mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.27 μm/s
QUICK SUMMARY:
IFO Locked for 24 hours!
Planned commissioning time today from 16:30-19:30 UTC, where we will drop from observing for commissioning and calibration activities.
Alarm handler:
PSL Dust 101 & 102
Red but not actively alarming:
Vacuum alert: H0:VAC-LX_Y0_PT110_MOD1_PRESS_TORR
Trending this channel back, it looks like it's been red for days, and this aLog mentions that it's not currently running.
At 17:48:23 PST on Wed 15 Jan 2025, all end station receivers of long-range Dolphin IPC channels originating from h1lsc saw a single IPC receive error.
The models h1susetm[x,y], h1sustms[x,y] and h1susetm[x,y]pi all receive a single channel from h1lsc and recorded a single receive error at the same time. No other end station models receive from h1lsc.
On first investigation there doesn't appear to be anything going on with h1lsc at this time to explain this.
FRS33085 is an umbrella ticket covering any IPC errors seen during O4.
Yesterday's IPC receive error was the fourth occurrence during O4; we are averaging roughly one every six months.
I have cleared the end station SUS errors with a DIAG_RESET while H1 was out of observe.
TITLE: 01/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 14:57 UTC (15 hr lock!)
Ultra-smooth shift - nothing of note.
LOG:
None
TITLE: 01/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Great Day of Observing!
Dropped down to Commissioning twice:
Once due to an earthquake that caused the SQZr to lose lock, but we stayed in NLN.
And another time due to an Xtreme PI Damping event; once again we stayed in NLN.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:45 | PSL | Ryan C | PSL Chiller | N | Checking the PSL Chiller water level | 18:15 |
17:46 | FAC | Kim | Optics Lab | Yes | Technical cleaning | 16:46 |
17:56 | PEM | Ryan C | VAC Prep | n | Tracing Dust mon issues and wires | 18:16 |
19:05 | PCAL | Francisco & Kim | PCAL | Yes | Technical Cleaning | 19:34 |
21:13 | R&R | JC | Overpass | N | Walking over to the Overpass | 21:23 |
22:46 | VAC | Janos | EX | n | Work in maintenance room | 23:09 |
00:28 | PCAL | Francisco | PCAL Lab | Yes | PCAL measurements | 01:08 |
TITLE: 01/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 14:57 UTC (9 hr lock!)
23:39:06 UTC Xtreme PI damping took us out of Observing.
We got back into Observing at 23:40:06 UTC
TITLE: 01/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
Dropped from Observing at 15:22:22 UTC because the SQZ system lost lock. We did not drop out of Nominal_Low_Noise.
Looking slightly more into this, it looks like a 5.7-magnitude earthquake may have unlocked the SQZ system.
As an update to 80919, I have finished editing all* plotallsus_tfs.m scripts to include plotting of cross-coupling. There is an on/off boolean, along with a matrix near the top of each script, that lets you choose whether or not to plot cross-coupling, and between which DOFs.
This can help when we want to check for cross-coupling before vs after a period of time/vent/etc.
* When I say all I mean every sus matlab script whose name is a variation of plotallsus_tfs.m or plotallsus_tfs_M1.m. I did not update the plotallsus_spectra.m or plotallsus_tfs_{some other stage}.m since those are not used as often. All changes verified to work and committed to svn.
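The boolean-plus-matrix selection described above can be sketched as follows. This is a hypothetical Python analogue of the MATLAB scripts' logic (the real scripts are MATLAB, and every name here is invented for illustration):

```python
# Hypothetical sketch of the on/off boolean and DOF-selection matrix
# described above. All names are invented; the real scripts are MATLAB.
PLOT_CROSS_COUPLING = True

# Top-mass (M1) degrees of freedom; a 1 at [response][drive] requests that TF.
DOFS = ["L", "T", "V", "R", "P", "Y"]
COUPLING = [[1 if i == j else 0 for j in range(6)] for i in range(6)]
COUPLING[0][4] = 1  # also request the P-drive -> L-response cross-coupling TF

def tfs_to_plot(enable_cross, matrix, dofs):
    """Return the (drive, response) pairs selected for plotting."""
    pairs = []
    for i, resp in enumerate(dofs):
        for j, drive in enumerate(dofs):
            # Diagonal TFs are always plotted; off-diagonal (cross-coupling)
            # TFs are plotted only when the boolean is on AND the matrix asks.
            if matrix[i][j] and (i == j or enable_cross):
                pairs.append((drive, resp))
    return pairs

print(tfs_to_plot(PLOT_CROSS_COUPLING, COUPLING, DOFS))
```

With the boolean off, only the six diagonal TFs survive, which matches the "before vs after" comparison use case described above.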
This comparison was suggested to help evaluate whether the narrow line contamination seen around violin mode regions during ring-ups is related to aliasing. The idea (as I follow it) is that because the 64 kHz channel has different aliasing, the noise ought to look different in the two channels if it is aliasing-related. In fact the two channels look similar, albeit not identical, which looks like evidence against the aliasing hypothesis. However, this test may not catch all the places where aliasing could occur, so it may not be conclusive. See the associated detchar-request issue for ongoing discussion.
The first two plots are from 2023, during time periods which were previously identified as having line contamination around the violin modes (71800, 79825). The last plot is from just a few days ago, on Jan 12.
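As a toy illustration of the reasoning above: a narrow line above the 16 kHz channel's Nyquist frequency folds to a different apparent frequency than in the 64 kHz channel, so aliasing-produced contamination should generally land in different places in the two channels. This is only a sketch of the folding arithmetic, not the actual detchar analysis, and the 9 kHz line is an invented example:

```python
# Toy check of the aliasing argument: where does a line at f_line (Hz)
# appear after sampling at rate fs (Hz)? (Invented example frequencies.)
def aliased_frequency(f_line, fs):
    """Apparent frequency of a line at f_line after sampling at fs."""
    f = f_line % fs
    # Frequencies above Nyquist fold back into [0, fs/2].
    return fs - f if f > fs / 2 else f

for fs in (16384.0, 65536.0):
    appears_at = aliased_frequency(9000.0, fs)
    print(f"fs={fs:.0f} Hz: 9 kHz line appears at {appears_at:.0f} Hz")
```

Since the fold points differ between the two sample rates, aliasing-related noise would not look the same in both channels, which is the logic behind the comparison.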
Wed Jan 15 10:06:29 2025 ALERT: Fill done (errors) in 6min 25sec
TCs started very high, around +40C, and therefore TCmins [-28C, -27C] did not exceed the trip temp (-60C). After the TCs had flatlined I manually cancelled the fill.
OAT (-1C, 31F) and foggy.
Diag Main now has a PSL_Chiller message telling me to check the PSL Chiller. I ran the PSL Status script to hopefully get some insight.
Laser Status:
NPRO output power is 1.842W
AMP1 output power is 70.15W
AMP2 output power is 137.0W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 28 days, 23 hr 15 minutes
Reflected power = 25.55W
Transmitted power = 102.4W
PowerSum = 127.9W
FSS:
It has been locked for 0 days 3 hr and 32 min
TPD[V] = 0.781V
ISS:
The diffracted power is around 4.2%
Last saturation event was 0 days 3 hours and 32 minutes ago
Possible Issues:
PMC reflected power is high
Check chiller (probably low water)
PSL probably wants some water and someone to chill with. The plots below seem like it might be a little lonely.
I just got the verbal alarms alert to Check the PSL Chiller.
I topped off the PSL chiller with ~100mL of water.
TITLE: 01/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
When I walked in H1 had been locked and Observing for 33 minutes!
Unknown lockloss this morning at 13:17 UTC
H1 Manny relocked without assistance this morning and no one was even woken up.
PS: It's a little icy out in the parking lot, so be careful stepping out of your car.
ETMY mode 1's damping wasn't going well this morning and the mode was slowly rising (it did this a few times over the previous weekend as well). I went from +60 of phase to +30 and flipped the sign of the gain (+0.1 -> -0.1); it has been damping for the past hour and has damped past where it was turning around with the previous settings.
Camilla, TJ, Marc, Fil. WP#12281
We attached the AOM drive cable from the back of the D1300649 chassis to the lowest TEST point in the PEM feed-through (photo) on the CO2X table. We used two barrel connectors (photos attached) to do this, as it looks like there used to be an AOM driver on the table that the signal went into before going to the AOM.
We thought that we could use the digital filters in the h1tcscs model to create a loop with this output and feed it to the Synrad UC-2000 PWM controller (needs 0-10VDC). The max of CTRL2 was capped at +/-2 (unsure why), and this was actually +/-0.6V on the BNC as measured with an oscilloscope. We'll need to increase this by ~x10 to get the PWM to work. Reverted changed SDFs.
There was an unknown cable, also labeled AOM drive, coming into the table and not connected to anything (photo).
Matt Todd and I checked that neither of these BNC barrels were grounded to the CO2X table.
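A back-of-envelope check of the numbers in this entry; the linear counts-to-volts mapping is inferred from the single oscilloscope measurement, so treat it as an assumption:

```python
# Rough scaling check for driving the Synrad UC-2000 from CTRL2 (values
# taken from this entry; the linear counts->volts mapping is assumed).
ctrl2_limit = 2.0      # counts: observed cap on CTRL2 output
measured_v = 0.6       # volts seen on the BNC at that cap
pwm_full_scale = 10.0  # UC-2000 analog control input spans 0-10 VDC

volts_per_count = measured_v / ctrl2_limit  # V/count as currently wired
needed_gain = pwm_full_scale / measured_v   # gain to reach full 0-10 V
print(f"{volts_per_count:.2f} V/count; x{needed_gain:.1f} for full scale")
```

At the ~x10 increase mentioned above, the BNC would reach roughly 6 V, which covers mid-range duty cycles but not the controller's full 0-10 V span.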
TITLE: 01/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 05:40 UTC (20 min lock!)
Overall, bad recovery from maintenance. Here’s what happened in 7 short stories:
Big thanks to Erik and Sheila who helped a lot with troubleshooting.
LOG:
None
The changes here 82263 were reverted.
h1omc0 models were reverted to RCG version 5.30, which uses the linear ramp. The IFO was consistently losing lock after a filter ramp-down.
WP12272 h1omc0 new RCG, quadratic filter ramping
Erik, Dave:
h1omc0 models were built against RCG 5.31, which introduces quadratic smoothing for ramped filter switching. All the models running on this front end now have the new RCG (h1iopomc0, h1omc, h1omcpi).
The CDS overview was modified to show that h1omc0 has a different RCG than h1susex by colour-coding the RCG version: dark_blue = 5.31 (quadratic filter ramp and variable duotone frequency), light_blue = 5.30 (LIGO DAC), and green = 5.14 (standard).
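As a sketch of why the quadratic ramp matters: a linear ramp has a slope discontinuity at the start and end of the switch, while a quadratic ramp can have zero slope at both ends, avoiding the kick that can glitch the filter output. The functions below only illustrate this idea; the actual RCG 5.31 implementation may differ.

```python
# Illustration only: linear vs. piecewise-quadratic ramp from 0 to 1 over
# normalized ramp time t. The real RCG 5.31 formula may differ.
def linear_ramp(t):
    """Linear ramp (RCG <= 5.30 style): slope jumps at t=0 and t=1."""
    return min(max(t, 0.0), 1.0)

def quadratic_ramp(t):
    """Piecewise-quadratic ramp with zero slope at both endpoints."""
    t = min(max(t, 0.0), 1.0)
    # Accelerate for the first half, decelerate for the second half.
    return 2.0 * t * t if t < 0.5 else 1.0 - 2.0 * (1.0 - t) ** 2
```

Near t = 0 the quadratic ramp rises much more slowly than the linear one (0.02 vs 0.1 at t = 0.1), which is the smoothing that the switch benefits from.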
WP12274 h1guardian1 reboot
TJ, Erik:
TJ rebooted h1guardian1 to reload all the nodes. The hope is that this will eliminate the leap-second warnings we have been seeing on certain nodes.
Tue14Jan2025
LOC TIME HOSTNAME MODEL/REBOOT
08:06:21 h1omc0 h1iopomc0 <<< Install RCG5.31
08:06:35 h1omc0 h1omc
08:06:49 h1omc0 h1omcpi
17:30:10 h1omc0 h1iopomc0 <<< Revert back to RCG5.30
17:30:24 h1omc0 h1omc
17:30:38 h1omc0 h1omcpi
In 81638 we removed a DARM1 FM1 boost because a glitch when ramping it off was causing locklosses during ESD transitions, in preparation for switching back to ETMX. Today the CDS team updated the H1OMC0 models with improved filter ramping: 82263. We hope this will allow us to keep the boost on, which gives us more range against high microseism while relocking (it's always off by NLN).
Uncommented line 3058 of ISC_LOCK.py and reloaded: ezca.get_LIGOFilter('LSC-DARM1').turn_on('FM1'). If we have locklosses at ISC_LOCK state 557 or 558, the operator can re-comment this line out. Tagging OpsInfo.
Sheila, Camilla, Erik
We lost lock 12s after this DARM1 FM1 filter was turned off; we're not sure if the filter changes are the cause. We are trying to relock again.
We think we were a little confused and had been turning on FM1 the whole time, as it was still being turned on in PREP_DC_READOUT_TRANSITION. Unsure if it was just luck that the glitch disappeared when we made the change on Dec 5th. Will look into it more...
Re-commented line 3058 today at 1:25 UTC after losing lock at the same state, LOWNOISE_ESD_ETMX (558).