TITLE: 09/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
PSL_ISS diffracted power is still low
EX saturations at 16:17, 18:55, & 21:37 UTC
The wind is starting to pick up.
Almost nothing to report really.
LOG: n/a
TITLE: 09/24 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
Taking over for Tony - we are Observing and have been Locked for 11 hours. Wind has gone up just past 20mph in the last hour, and useism is also trending upwards.
Alan Knee, Beverly Berger
A newly regular ground motion has appeared in the corner station, which was first noticed in the 10-30 Hz SEI/BLRMS (figure 1) and some of the CS ACC floor sensors (figure 2, notice the short blips near 30 Hz). These blips appear consistently every 80 or so minutes. The motion is getting picked up by various other sensors, including SUS/OpLev/BLRMS pitch for SR3 and ITMX (figure 3) and yaw for ITMX in the same frequency range, and a PSL table microphone (figure 4).
I created spectrograms for H1:PEM-CS_ACC_LVEAFLOOR_XCRYO_Z_DQ (figure 5) and H1:GDS-CALIB_STRAIN_CLEAN (figure 6) over a 2-hour period starting Sep 22 at 8:00 UTC, which show that this noise is somehow coupling into the strain channel, though it is fairly faint.
The feature in the SEI/BLRMS channel has appeared from time to time before, although not with the frequency and regularity seen now. It seems to have started around 23:00 UTC on Sep 21 and persisted through Sep 22 and 23. It's still present today (Sep 24), though its cadence has seemingly changed (figure 7).
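In case it's useful for others looking at this, here is a minimal sketch of how a spectrogram like figure 5 can be made with gwpy (assuming NDS2/frame access to the channel; the FFT parameters and plot limits are illustrative, not the settings used for the attached figures):

```python
# Sketch only: spectrogram of the CS accelerometer over the 2-hour window quoted above.
from gwpy.timeseries import TimeSeries

channel = 'H1:PEM-CS_ACC_LVEAFLOOR_XCRYO_Z_DQ'
start, end = '2023-09-22 08:00:00', '2023-09-22 10:00:00'   # 2-hour window from above

data = TimeSeries.get(channel, start, end)                  # fetch the raw time series
spec = data.spectrogram2(fftlength=8, overlap=4) ** (1/2.)  # ASD spectrogram

plot = spec.plot(norm='log')
ax = plot.gca()
ax.set_yscale('log')
ax.set_ylim(5, 100)        # zoom in around the 10-30 Hz band where the blips show up
ax.colorbar(label='ASD')
plot.save('cs_acc_lveafloor_spectrogram.png')
```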
Laser Status:
NPRO output power is 1.829W (nominal ~2W)
AMP1 output power is 67.07W (nominal ~70W)
AMP2 output power is 134.8W (nominal 135-140W)
NPRO watchdog is RED
AMP1 watchdog is RED
AMP2 watchdog is RED
PMC:
It has been locked 2 days, 4 hr 17 minutes
Reflected power = 16.19W
Transmitted power = 109.8W
PowerSum = 126.0W
FSS:
It has been locked for 0 days 7 hr and 33 min
TPD[V] = 0.8411V
ISS:
The diffracted power is around 1.8%
Last saturation event was 0 days 7 hours and 33 minutes ago
Possible Issues:
NPRO watchdog is inactive
AMP1 watchdog is inactive
AMP2 watchdog is inactive
ISS diffracted power is low
It seems I had forgotten to turn the power watchdogs back on after recovering the PSL Friday morning (this shows exactly why we have this weekly FAMIS task to check for these things). I was able to log in and turn the watchdogs on shortly after 01:00 UTC this evening. H1 did not have to drop observing for this as the power watchdogs are just Beckhoff controlled and not monitored by SDF.
The ISS diffracted power still needs adjusting; we'll wait for a lockloss to do that.
Sun Sep 24 10:11:55 2023 INFO: Fill completed in 11min 51secs
TITLE: 09/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 0mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
Still have the following CDS lights on:
H1:DAQ-DC0_H1IOPSUSAUXH2_CRC_SUM
H1:DAQ-DC0_H1SUSAUXH2_CRC_SUM
Otherwise at first glance it looks good.
TITLE: 09/24 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Quiet evening with not even a Verbals saturation callout! We are still Observing and have now been Locked for close to 10 hours. PSL_ISS diffracted power is still low (around 1.9%).
23:00 UTC Detector Observing and Locked for 2 hours
LOG:
no log
We've been Locked for 6.5 hours now and are Observing at 152Mpc.
TITLE: 09/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
19:02 UTC PI24 ring-up.
19:11 UTC A few seconds after I found the right setting to bring PI24 down, a lockloss happened.
Relocking notes:
Xarm Increase Flashes ran. Lockloss at Locking ALS.
Relocking attempt 2:
20:19 UTC Nominal Low Noise reached
20:34 UTC Lockloss before the Camera Servos turned on.
Relocking attempt 3:
Started at 20:36 UTC
Nominal Low Noise reached at 21:20 UTC
Observing reached at 21:34 UTC
LOG (Testing site independent Reservation system) :
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:34 | test | test | test | test | test | 16:35 |
TITLE: 09/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 12mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Detector has been Observing and Locked for almost 2 hours now. Winds look like they have spiked up just past 20mph in the last half hour, and microseism is low.
The LVEA zone 5 temp looks like it peaked about an hour ago, but didn't go up as high or as sharply as yesterday's temperature peak.
Unknown Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1379536511
This lockloss happened while waiting for the Camera Servo Guardian to get to CAMERA_SERVO_ON.
Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1379531482
19:02 UTC PI24 starts to ring up.
I let the PI Guardian try to sort it out at first. When I noticed it was cycling too quickly through the PI damping settings, I tried fighting it a little. Eventually I turned off the PI Guardian and tried damping it by hand. Unfortunately, my first guess at what would damp PI24 actually made it worse, and when I finally found the setting that was turning it around, the IFO unlocked.
I struggled to find the right settings to damp it.
19:11 UTC A few seconds after I found the right setting to bring PI24 down, a lockloss happened.
Naoki checked that PI24 is still at the correct frequency; his plot is attached.
We checked that the SUS_PI guardian was doing what was expected, and it was. It stopped at a 40 deg PHASE, which seemed to be damping, but as the RMSMON started rising it continued to cycle, see attached. The issue appears to be that the guardian couldn't find the exact frequency to damp the mode quicker than it was ringing up.
Naoki and I have edited SUS_PI to take steps of 45 degrees rather than 60 degrees to fix this. We've left the timer at 10 seconds between steps, but this may need to be reduced later. SUS_PI needs to be reloaded when next out of observe, tagging OpsInfo.
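For context, here is a toy sketch of the phase-stepping idea described above (this is not the actual SUS_PI guardian code; the read/write hooks and the ring-up check are placeholders):

```python
# Toy illustration of stepping the PI damping phase in 45-degree increments;
# NOT the real SUS_PI guardian code. read_rmsmon/write_phase stand in for EPICS access.
import time

PHASE_STEP = 45   # degrees per step (reduced from 60 as described above)
STEP_WAIT = 10    # seconds between steps (left at 10 s for now)

def mode_still_ringing_up(read_rmsmon, settle=2.0):
    """Return True if the PI RMS monitor is still growing over a short window."""
    before = read_rmsmon()
    time.sleep(settle)
    return read_rmsmon() > before

def step_damping_phase(read_rmsmon, write_phase, start_phase=0.0):
    """Step the damping phase in PHASE_STEP increments until the ring-up turns around."""
    phase = start_phase
    while mode_still_ringing_up(read_rmsmon):
        phase = (phase + PHASE_STEP) % 360
        write_phase(phase)        # placeholder for the guardian's EPICS write
        time.sleep(STEP_WAIT)     # give the mode time to respond before re-evaluating
    return phase                  # phase at which damping took hold
```

With 45-degree steps the guardian samples eight phases per revolution instead of six, so it should land closer to the optimal damping phase before the mode rings up too far.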
Sat Sep 23 10:11:53 2023 INFO: Fill completed in 11min 49secs
TITLE: 09/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 0mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Still have the following CDS lights on:
H1:DAQ-DC0_H1IOPSUSAUXH2_CRC_SUM
H1:DAQ-DC0_H1SUSAUXH2_CRC_SUM
Dave put in an alog about this:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=73055
This has not been resolved; I believe Dave wants to resolve it but will likely have to wait until Tuesday.
TITLE: 09/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Corey
SHIFT SUMMARY: The majority of my shift was quiet. We were out of Observing for less than two minutes due to the TCS ITMX CO2 laser unlocking (73061), and then at 05:39 UTC we lost lock (73064) and had saturations in L2 of all ITMs/ETMs that delayed relocking. We are currently in INITIAL_ALIGNMENT with Corey helping us lock back up.
23:00 UTC Detector Observing and Locked for 7.5 hours
23:40 Taken out of Observing by TCS ITMX CO2 laser unlocking (73061)
23:42 SDF Diffs cleared by themselves and we went back into Observing
05:39 Lockloss (73064)
LOG:
no log
Right after lockloss DIAG_MAIN was showing:
- PSL_ISS: Diffracted power is low
- OPLEV_SUMS: ETMX sums low
05:41 UTC LOCKING_ARMS_GREEN: the detector couldn't see ALSY at all, and I noticed the ETM/ITM L2 saturations (attachment1, L2 ITMX MASTER_OUT as an example) on the saturation monitor (attachment2), and saw the ETMY oplev moving around wildly (attachment3)
05:54 I took the detector to DOWN, and immediately could see ALSY on the cameras and ndscope; L2 saturations were all still there
05:56 and 06:05 I went to LOCKING_ARMS_GREEN again and GREEN_ARMS_MANUAL, respectively, but wasn't able to lock anything. I was able to go to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL without the saturations getting too bad, but both ALSX and ALSY eventually locked for a few seconds each, then unlocked and dropped to basically 0 on the cameras and ndscope (attachment4). Sometime after this, ALSX and ALSY went to FAULT with the messages "PDH" and "ReflPD A"
06:07 Tried going to INITIAL_ALIGNMENT but Verbal kept calling out all of the optics as saturating, ALIGN_IFO wouldn't move past SET_SUS_FOR_ALS_FPMI, and the oplev overview screen showed ETMY, ITMX, and ETMX moving all over the place (I was worried something would go really wrong if I left it in that configuration)
I tried waiting a bit and then tried INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN again but had the same results.
Naoki came on and thought the saturations might be due to the camera servos, so he cleared the camera servo history; all of the saturations disappeared (attachment5), and an INITIAL_ALIGNMENT is now running fine.
Not sure what caused the saturations in the camera servos, but it'll be worth looking into whether the saturations are what ended up causing this lockloss.
As shown in the attached figure, the output of the camera servo for PIT suddenly became 1e20, and the lockloss happened 0.3 s later, so this lockloss is likely due to the camera servo. I checked the input signal of the camera servo. The bottom left is the input of PIT1, which should be the same as the BS camera output (bottom right). The BS camera output itself looks OK, but the input of PIT1 went to the red dashed line and the output became 1e20. The situation is the same for PIT2 and PIT3. I am not sure why this happened.
Opened FRS29216. It looks like a transient NaN on the ETMY_Y camera centroid channel caused a latching of 1e20 on the ASC CAM_PIT filter module outputs.
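For reference, here is a small plain-Python/scipy demonstration of why a single transient NaN can latch an IIR filter module: once the NaN enters the filter's feedback history it never clears on its own. The 1e20 seen on the CAM_PIT outputs is presumably how the front end represents or clamps that corrupted state; the coefficients below are illustrative, not the real CAM_PIT filters.

```python
# Demonstration: one NaN input sample poisons an IIR filter's state indefinitely.
import numpy as np
from scipy.signal import lfilter, lfilter_zi

b, a = [0.1], [1.0, -0.9]    # simple one-pole low-pass, purely illustrative
x = np.ones(10)
x[3] = np.nan                # one transient NaN, like the bad camera centroid sample

zi = lfilter_zi(b, a) * x[0] # start the filter in steady state
y, _ = lfilter(b, a, x, zi=zi)
print(y)                     # every output from the NaN onward stays NaN: the filter "latches"
```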
Camilla, Oli
We trended the ASC-CAM_PIT{1,2,3}_INMON/OUTPUT channels alongside ASC-AS_A_DC_NSUM_OUT_DQ (attachment1) and found that, according to ASC-AS_A_DC_NSUM_OUT_DQ, the lockloss started (cursor 1) ~0.37 s before PIT{1,2,3}_INMON read NaN (cursor 2), so the lockloss was not caused by the camera servos.
It is possible that the NaN values are linked to the light dropping off of the PIT2 and PIT3 cameras right after the lockloss: when the cameras come back online ~0.2 s later, both the PIT2 and PIT3 cameras read 0, and looking back over several locklosses these two cameras tend to drop to 0 between 0.35 and 0.55 s after the lockloss starts. However, the PIT1 camera is still registering light for another 0.8 s after coming back online (a typical time for this camera).
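For anyone repeating this check, a minimal sketch of the channel comparison described above (assuming gwpy with NDS2/frame access; the window is centered on the approximate 05:39 UTC lockloss time from the shift summary):

```python
# Sketch: trend the camera-servo inputs against AS_A around the lockloss.
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

channels = [
    'H1:ASC-CAM_PIT1_INMON', 'H1:ASC-CAM_PIT2_INMON', 'H1:ASC-CAM_PIT3_INMON',
    'H1:ASC-AS_A_DC_NSUM_OUT_DQ',
]
t0 = to_gps('2023-09-24 05:39:00')                       # approximate lockloss time
data = TimeSeriesDict.get(channels, t0 - 60, t0 + 60)

plot = Plot(*data.values(), separate=True, sharex=True)  # one panel per channel
plot.save('cam_pit_vs_as_a.png')
```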
Patrick updated the camera server to solve the issue in alog73228.
Observing at 150Mpc and have been up for almost 12 hours now. We were taken out of Observing at 23:40 UTC due to the ITMX TCS CO2 laser unlocking (73061), but were able to get back to Observing pretty quickly. Wind was up to 20mph for a while but has come back down, and microseism is trending downwards as well.
Closes FAMIS#26253, last checked 72917
Corner Station Fans (attachment1)
All fans are looking good and are at values consistent with the previous week.
Outbuilding Fans (attachment2)
Most fans are looking good and consistent, except for EX_FAN1_570_2. Starting 09/15 it suddenly became much noisier, and for the past week it has had frequent, quick, high jumps in vibration (attachment3).