We've been Locked for 6.5 hours now and are Observing at 152Mpc.
TITLE: 09/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
19:02 UTC PI24 rang up.
19:11 UTC A few seconds after I found the right setting to bring PI24 down, a lockloss happened.
Relocking notes:
X arm Increase Flashes ran. Lockloss at Locking ALS.
Relocking attempt 2:
20:19 UTC Nominal Low Noise reached
20:34 UTC Lockloss before the Camera Servos turned on.
Relocking attempt 3:
Started at 20:36 UTC
Nominal Low Noise reached at 21:20 UTC
Observing reached at 21:34 UTC
LOG (testing site-independent reservation system):
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:34 | test | test | test | test | test | 16:35 |
TITLE: 09/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 12mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Detector has been Observing and Locked for almost 2 hours now. Winds look like they have spiked up just past 20mph in the last half hour, and microseism is low.
The LVEA zone 5 temp looks like it peaked about an hour ago, but didn't go up as high or as sharply as yesterday's temperature peak.
Unknown Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1379536511
This Lockloss happened while waiting for the Camera Servo Guardian to get to CAMERA_SERVO_ON.
Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1379531482
19:02 UTC PI24 starts to ring up.
I let the PI Guardian try to sort it out at first. When I noticed it was cycling too quickly through the PI damping settings, I tried fighting it a little. Eventually I turned off the PI Guardian and tried damping by hand. Unfortunately my first guess at what would damp PI24 actually made it worse, and when I finally found the setting that was turning it around, the IFO unlocked.
I struggled to find the right settings to damp it.
19:11 UTC A few seconds after I found the right setting to bring PI24 down, a lockloss happened.
Naoki checked that PI24 is still at the correct frequency; his plot is attached.
We checked that the SUS_PI guardian was doing what was expected, and it was. It stopped at 40deg PHASE, which seemed to be damping, but as the RMSMON started rising it continued to cycle (see attached). The issue appears to be that the guardian couldn't find the exact frequency to damp the mode faster than it was ringing up.
Naoki and I have edited SUS_PI to take steps of 45 degrees rather than 60 degrees to fix this. We've left the timer at 10 seconds between steps, but this may need to be reduced later. SUS_PI needs to be reloaded when next out of Observing; tagging OpsInfo.
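For context, here is a minimal sketch (Python, not the actual SUS_PI guardian code) of the step-and-wait damping logic described above; the channel access is abstracted into placeholder callables, and the numbers mirror the new 45-degree / 10-second settings:

```python
# Minimal sketch of the phase-stepping PI damping logic (not the real SUS_PI code).
# read_rms / set_phase / get_phase are hypothetical callables standing in for the
# guardian's channel access.
import time

STEP_DEG = 45   # was 60 deg; smaller steps land closer to the optimal damping phase
WAIT_SEC = 10   # dwell time between steps; may need to be reduced later

def damp_pi_mode(read_rms, set_phase, get_phase, timeout=600):
    """Step the damping phase until the mode RMS starts falling, then hold."""
    start = time.time()
    last_rms = read_rms()
    while time.time() - start < timeout:
        time.sleep(WAIT_SEC)
        rms = read_rms()
        if rms < last_rms:
            return get_phase()                          # this phase is damping the mode; stay here
        set_phase((get_phase() + STEP_DEG) % 360)       # still ringing up; try the next phase
        last_rms = rms
    raise TimeoutError("PI mode did not damp within the timeout")
```

With 45-degree steps the best available setting is never more than 22.5 degrees from the ideal damping phase, versus 30 degrees with 60-degree steps, at the cost of more settings to cycle through.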
Sat Sep 23 10:11:53 2023 INFO: Fill completed in 11min 49secs
TITLE: 09/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 0mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Still have the following CDS lights:
H1:DAQ-DC0_H1IOPSUSAUXH2_CRC_SUM
H1:DAQ-DC0_H1SUSAUXH2_CRC_SUM
Dave put in an alog about this:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=73055
This has not been resolved; I believe Dave has been wanting to resolve it but likely has to wait until Tuesday.
TITLE: 09/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Majority of my shift was quiet. We were out of Observing for less than two minutes due to the TCS ITMX CO2 laser unlocking (73061), and then at 05:39UTC we lost lock (73064) and had saturations in L2 of all ITMs/ETMs that delayed relocking. We are currently in INITIAL_ALIGNMENT with Corey helping us lock back up.
23:00UTC Detector Observing and Locked for 7.5 hours
23:40 Taken out of Observing by TCS ITMX CO2 laser unlocking (73061)
23:42 SDF Diffs cleared by themselves and we went back into Observing
05:39 Lockloss (73064)
LOG:
no log
Right after lockloss DIAG_MAIN was showing:
- PSL_ISS: Diffracted power is low
- OPLEV_SUMS: ETMX sums low
05:41UTC In LOCKING_ARMS_GREEN the detector couldn't see ALSY at all; I noticed the ETM/ITM L2 saturations (attachment1, L2 ITMX MASTER_OUT as an example) on the saturation monitor (attachment2) and saw the ETMY oplev moving around wildly (attachment3)
05:54 I took the detector to DOWN, and immediately could see ALSY on the cameras and ndscope; L2 saturations were all still there
05:56 and 06:05 I went to LOCKING_ARMS_GREEN again and GREEN_ARMS_MANUAL, respectively, but wasn't able to lock anything. I was able to go to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL without the saturations getting too bad, but both ALSX and Y eventually locked for a few seconds each, then unlocked and went to basically 0 on the cameras and ndscope (attachment4). Sometime after this, ALSX and Y went to FAULT and were giving the messages "PDH" and "ReflPD A"
06:07 Tried going to INITIAL_ALIGNMENT, but Verbal kept calling out all of the optics as saturating, ALIGN_IFO wouldn't move past SET_SUS_FOR_ALS_FPMI, and the oplev overview screen showed ETMY, ITMX, and ETMX moving all over the place (I was worried something would go really wrong if I left it in that configuration)
I tried waiting a bit and then tried INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN again but had the same results.
Naoki came on and thought the saturations might be due to the camera servos, so he cleared the camera servo history and all of the saturations disappeared (attachment5) and an INITIAL_ALIGNMENT is now running fine.
Not sure what caused the saturations in the camera servos, but it'll be worth looking into whether the saturations are what ended up causing this lockloss.
As shown in the attached figure, the output of the camera servo for PIT suddenly became 1e20 and the lockloss happened 0.3s later, so this lockloss is likely due to the camera servo. I checked the input signal of the camera servo. The bottom left is the input of PIT1 and should be the same as the BS camera output (bottom right). The BS camera output itself looks OK, but the input of PIT1 became red dashed and the output became 1e20. The situation is the same for PIT2 and 3. I am not sure why this happened.
Opened FRS29216. It looks like a transient NaN on the ETMY_Y camera centroid channel caused a latching of 1e20 on the ASC CAM_PIT filter module outputs.
Camilla, Oli
We trended the ASC-CAM_PIT{1,2,3}_INMON/OUTPUT channels alongside ASC-AS_A_DC_NSUM_OUT_DQ (attachment1) and found that, according to ASC-AS_A_DC_NSUM_OUT_DQ, the lockloss started (cursor 1) ~0.37s before PIT{1,2,3}_INMON read NaN (cursor 2), so the lockloss was not caused by the camera servos.
It is possible that the NaN values are linked to the light dropping off of the PIT2 and 3 cameras right after the lockloss: when the cameras come back online ~0.2s later, both the PIT2 and PIT3 cameras are at 0, and looking back over several locklosses it looks like these two cameras tend to drop to 0 between 0.35 and 0.55s after the lockloss starts. However, the PIT1 camera is still registering light for another 0.8s after coming back online (typical for this camera).
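For reference, a rough sketch of this kind of trend comparison using gwpy (assumed available); the GPS time is taken from the lockloss link earlier in the thread, and the H1: prefixes and the availability of the INMON channels via NDS/frames are assumptions:

```python
import numpy as np
from gwpy.timeseries import TimeSeriesDict

t0 = 1379536511  # GPS time from the lockloss link above
channels = [
    'H1:ASC-CAM_PIT1_INMON', 'H1:ASC-CAM_PIT2_INMON', 'H1:ASC-CAM_PIT3_INMON',
    'H1:ASC-AS_A_DC_NSUM_OUT_DQ',
]
data = TimeSeriesDict.get(channels, t0 - 5, t0 + 5)

# When does each camera-servo input first read NaN?
for name in channels[:3]:
    ts = data[name]
    nan_idx = np.flatnonzero(np.isnan(ts.value))
    if nan_idx.size:
        print(name, 'first NaN at', ts.times[nan_idx[0]])

# When does AS_A_DC_NSUM drop well below its pre-lockloss level?
nsum = data['H1:ASC-AS_A_DC_NSUM_OUT_DQ']
pre = nsum.value[:int(2 * nsum.sample_rate.value)]          # first 2 s, before the lockloss
drop_idx = np.flatnonzero(nsum.value < 0.1 * np.nanmedian(pre))
if drop_idx.size:
    print('NSUM dropped at', nsum.times[drop_idx[0]])
```

Comparing the two printed times is the same ~0.37s check described above.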
Patrick updated the camera server to solve the issue in alog73228.
Observing at 150Mpc and have been up for almost 12 hours now. We were taken out of Observing at 23:40UTC due to the ITMX TCS CO2 laser unlocking(73061), but were able to get back to Observing pretty quickly. Wind was up to 20mph for a while but has come back down, and microseism is trending downwards as well.
Closes FAMIS#26253, last checked 72917
Corner Station Fans (attachment1)
All fans are looking good and are at values that are consistent with the previous week
Outbuilding Fans (attachment2)
Most fans are looking good and consistent, except for EX_FAN1_570_2. Starting 09/15 it suddenly became much noisier, and for the past week it has had frequent, quick, high jumps in vibration (attachment3).
The TCS CO2 laser on ITMX unlocked at 09/22 23:40UTC and took us out of Observing for a minute and a half until it could relock itself. We went back into Observing at 23:42UTC. (Didn't contact Dave to clear the DAQ CRC errors since TCS locked back up so quickly)
A couple weeks ago I had noted (72653) that the TCS ITMX CO2 laser had been losing lock consistently every ~2.8 days, leading to a steady decline in the laser head power (72627).
On 09/14 at 22:13UTC Camilla lowered the H1:TCS-ITMX_CO2_CHILLER_SET_POINT_OFFSET by 0.3 degrees C (72887). In the week since then, it looks like (at least based on the 3 TCSX CO2 laser unlocks we've had since then) the average time between unlocks has gone up slightly from every ~2.8 days to every ~3.4 days (attachment1 - T1 is when the offset was changed). Granted, there are only two data points currently, but so far it looks like we've gained a bit of extra time between unlocks.
Unfortunately, the laser head power still looks to be decreasing at about the same rate as before the offset change.
TITLE: 09/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Acquired the IFO while it was relocking.
Have the following CDS lights:
H1:DAQ-DC0_H1IOPSUSAUXH2_CRC_SUM
H1:DAQ-DC0_H1SUSAUXH2_CRC_SUM
Dave put in an alog about this:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=73055
This has not been resolved and Dave has been waiting for us to fall out of Observing to reset this.
DARM FOM was having trouble with GDS again. I've had to VNC into nuc30 twice today to fix it.
Got into NOMINAL_LOW_NOISE at 15:36 UTC
The SQZer ISS unlocked too many times, so I referenced alog 70050 again.
Camilla jumped in and started following the instructions found in 70050.
Camilla is adjusting the H1:SQZ-OPO_TEC_SETTEMP
Camilla's alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=73053
Before 16:07 UTC Ryan Short adjusted the ISS diffracted power for the PSL.
We may have to readjust the ISS diffracted power as the PSL thermalizes in a few hours.
This has not happened yet, as we have been in Observing the entire day.
Observing reached 16:10 UTC
Zone 5 LVEA temps increased by 3/4 of a degree over 8 hours, but have since turned around and started to cool off. Please keep an eye on that Zone 5 temp change.
Locked for 7.5 hours and Observing the entire time.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:09 | PSL | RyanS | Remote | - | Restarting PSL | 14:19 |
16:53 | FAC | Tyler | EY | N | Chiller and Supply and return Hose work | 17:10 |
20:11 | FAC | Tyler | Air Handler1 | N | Checking AH 1 | 20:15 |
TITLE: 09/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 11mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Detector Observing and has been Locked for almost 8 hours.
Fri Sep 22 10:09:28 2023 INFO: Fill completed in 9min 24secs
Travis confirmed a good fill curbside.
Note that the TCs only dropped to -150C and -144C for A, B respectively because the outside air temp was only 49F at 10am (due to fog?). We might have to increase the trip temp from -130C to -120C soon.
Around the time of the first power glitch this morning both models on h1susauxh2 recorded a single DAQ CRC error by both DC0 and DC1.
The first attached plot shows the 3 mains phases for this glitch; the second plot shows one phase together with one of the CRC errors (they all happened concurrently).
This glitch was more severe than usual, with two phases being significantly reduced in voltage for several cycles.
I will clear the CRCs next time H1 is out-of-observe.
At 13:47 UTC, Corey put in an alog noting that H1 was unlocked and the PSL was down. I checked the control room screenshots page and saw that DIAG_MAIN was reporting the PMC high voltage was off, and Richard reported that there were two site power glitches early this morning, so I was immediately reminded of a very similar situation in early August where the high voltage supply for PMC locking was tripped off during a power glitch. I got in contact with Corey and Richard (who was on-site) and we coordinated turning the 375V power supply in the CER mezzanine back on. Richard also checked other HV supplies and reported they were on. Once the PMC HV was back on, I proceeded with recovering the PSL from home and ran into no issues. Corey is now relocking H1.
Since the PSL had just turned on, I needed to adjust the ISS RefSignal from -2.00V to -2.06V to keep the diffracted power between 2.0% and 2.5%. This will need to be adjusted again as the PSL thermalizes to keep the diffracted power where we want it; I'll check on this when I'm on-site later this morning.
Attached is a trend of the site power glitch at 10:42 UTC this morning and the PMC HV going down as a result.
Just FYI, here at LLO the PMC HV power supply was the most sensitive to a power glitch (it is what would trip off first, even if the PSL stayed up)
After a couple of attempts, we have swapped in the UPS talked about in LLO alog 66808. To date we have had no issues since this particular UPS was installed. We even simulated power outages/power glitches and the PMC HV supply stayed on.
Created FRS 29181 to track PMC HV trips and noted LLO's implementation of an online UPS.
Detchar, please tell us if the 1.66Hz comb is back.
We changed the OM2 heater driver configuration from what was described in alog 72061.
We used a breakout board with jumpers to connect all OM2 thermistor readback pins (pin 9, 10, 11, 12, 22, 23, 24, 25) to Beckhoff at the back of the driver chassis. Nothing else (even the DB25 shell on the chassis) is connected to Beckhoff.
Heater voltage inputs (pin 6 for positive and 19 for negative) are connected to the portable voltage reference powered by a DC power supply to provide 7.15V.
BTW, somebody powered the OM2 heater off at some point in time, i.e. OM2 has been cold for some time but we don't know exactly how long.
When we went to the rack, half of the power supply terminal (which we normally use for 9V batteries) was disconnected (1st picture), and there was no power to the heater. Baffling. FYI, if it's not clear, the power terminal should look like the second picture.
Somebody must have snagged the cables hard enough to disconnect them, and didn't even bother to check.
Next time you do it, since reconnecting is NOT good enough, read alog 72286 to learn how to set the voltage reference to 7.15V and turn off the auto-turn-off function, then do it, and tell me you did it. I promise I will thank you.
Thermistors are working.
There is an OSEM pitch shift of OM2 at the end of the maintenance period 3 weeks ago (Aug 29)
Having turned the heater back on will likely affect our calibration. It's not a bad thing, but it is something to be aware of.
Indeed it now seems that there is a ~5Mpc difference in the range calculations between the front-ends (SNSW) and GDS (SNSC) compared to our last observation time.
It looks like this has brought back the 1.66 Hz comb. Attached is an averaged spectrum for 6 hours of recent data (Sept 20 UTC 0:00 to 6:00); the comb is the peaked structure marked with yellow triangles around 280 Hz. (You can also see some peaks in the production Fscans from the previous day, but it's clearer here.)
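As a rough sketch of how an averaged spectrum like this can be reproduced with gwpy (assumptions: H1:GDS-CALIB_STRAIN as the channel and a median-averaged ASD; the production Fscans use their own pipeline and resolution):

```python
from gwpy.timeseries import TimeSeries

# 6 hours of data, Sept 20 2023 00:00-06:00 UTC, as in the entry above
hoft = TimeSeries.get('H1:GDS-CALIB_STRAIN', 'Sep 20 2023 00:00', 'Sep 20 2023 06:00')

# Median-averaged ASD; 64 s FFTs give ~16 mHz resolution, enough to resolve ~1.66 Hz teeth
asd = hoft.asd(fftlength=64, overlap=32, method='median')

plot = asd.plot()
plot.gca().set_xlim(270, 290)   # zoom in on the comb structure around 280 Hz
plot.show()
```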
To see if one of the Beckhoff terminals for thermistors is kaput, I disconnected thermistor 2 (pins 9, 11, 22 and 24) from Beckhoff at the back of the heater driver chassis.
For a short while the Beckhoff cable itself was disconnected but the cable was connected back to the breakout board at the back of the driver chassis by 20:05:00 UTC.
Thermistor 1 is still connected. Heater driver input is still receiving voltage from the voltage reference.
I checked a 3-hour span starting at 04:00 UTC today (Sept 21) and found something unusual. There is a similar structure peaked near 280 Hz, but the frequency spacing is different. These peaks lie on integer multiples of 1.1086 Hz, not 1.6611 Hz. Plot attached.
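A quick sketch of the spacing check being made here: test whether a set of measured peak frequencies lies on integer multiples of a candidate spacing (the peak values below are illustrative placeholders, not the measured ones):

```python
import numpy as np

def comb_residuals(peaks_hz, spacing_hz):
    """Distance of each peak from the nearest integer multiple of spacing_hz."""
    peaks = np.asarray(peaks_hz, dtype=float)
    return peaks - np.round(peaks / spacing_hz) * spacing_hz

peaks = [277.15, 278.26, 279.37, 280.48]   # placeholder peaks near 280 Hz
for spacing in (1.6611, 1.1086):
    res = comb_residuals(peaks, spacing)
    print(f"{spacing} Hz spacing: max |residual| = {np.abs(res).max():.4f} Hz")
```

Residuals near zero for one spacing and not the other are what distinguish a 1.1086 Hz comb from a 1.6611 Hz one.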
Detchar, please see if there's any change in the 1.66Hz comb.
At around 21:25 UTC, I disconnected OM2 thermistor 1 (pins 10, 12, 23, 25 of the cable at the back of the driver chassis) from Beckhoff and connected thermistor 2 (pins 9, 11, 22, 24).
Checked 6 hours of data starting at 04:00 UTC Sept 22. The comb structure persists with spacing 1.1086 Hz.
Electrical grounding of the Beckhoff systems has been modified as a result of this investigation -- see LHO:73233.
Corroborating Daniel's statement that the OM2 heater power supply was disrupted on Tuesday Aug 29th 2023 (LHO:72970), I've zoomed in on the pitch *OSEM* signals for both (a) the suspected time of the power disruption (first attachment) and (b) the time when the power and function of OM2 were restored (second attachment). One can see that upon power restoration, resuming the HOT configuration of the TSAMS on 2023-09-19 (time (b) above), OM2 pitch *decreases* by 190 [urad] over the course of ~1 hour, with a characteristic "thermal time constant" exponential shape to the displacement evolution. Heading back to 2023-08-29 (time (a) above), we can see a similarly shaped event that causes OM2 pitch to *increase* by 160 [urad] over the course of ~40 minutes. Then at 40 minutes IFO recovery from maintenance begins, and we see the OM2 pitch *sliders* adjusted to account for the new alignment, as had been done several times before with the OM2 ON vs. OFF state changes. I take this to be consistent with: the OM2 TSAMS heater was inadvertently turned OFF and COLD on 2023-08-29 at 18:42 UTC (11:42 PDT), and the OM2 TSAMS heater was restored to ON and HOT on 2023-09-19 18:14 UTC (11:14 PDT).
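For reference, the "characteristic thermal time constant" exponential shape mentioned here is the usual single-pole step response; a model-form-only sketch (no time constant fitted):

```latex
% Single-pole thermal step response of OM2 pitch to a heater step;
% \tau is the thermal time constant (not fitted here)
\theta(t) = \theta_{\infty} + \left(\theta_{0} - \theta_{\infty}\right)\, e^{-t/\tau}
```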