SDF diffs: changes were made today. I'm not sure if they were intentional or not, but since we are in NOMINAL_LOW_NOISE I'm going to accept them.
#edit: at 2AM when I first posted this alog the picture of the SDF diffs was wrong. I have since fixed that.
Same issue with ALS-Y_REFL_SERVOIN1GAIN and ALS-Y_REFL_SERVO_COMBOOST; I just accepted the changes.
TITLE: 09/16 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
INCOMING OPERATOR: Tony
LOG:
H1 recently had a lock end after just over 8hrs. Currently waiting for H1 to get back to NLN.
Environmentally all is well.
TITLE: 09/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Relocking after the lockloss (72911) was a bit strange but seemed like a one-off. Rest of the day was quiet.
15:00UTC In Observing, have been Locked for 18 hours
16:38 Lockloss (72911)
16:51 LOCKLOSS_PRMI during third time going through ACQUIRE_PRMI, took itself to DOWN and restarted
17:02 I took it into INITIAL_ALIGNMENT
17:32 INITIAL_ALIGNMENT completed, heading to NOMINAL_LOW_NOISE
17:42 Lockloss at OFFLOAD_DRMI_ASC
18:24 Reached NOMINAL_LOW_NOISE
18:41 Into Observing
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
20:42 | - | Austin+1 | LExC, OSB, Overpass | n | Tour | 22:04 |
All looks nominal for the site HVAC fans (did catch a transition between fans at MY for this one). Closing FAMIS 26251.
TITLE: 09/16 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.24 μm/s
QUICK SUMMARY:
H1's been locked just under 5hrs & all looks nominal from here (slight continued drift up in microseism from yesterday).
Sat Sep 16 10:10:29 2023 INFO: Fill completed in 10min 25secs
Lockloss @ 09/16 16:38UTC. EX saturation right before/as the lockloss occurred.
18:41 Observing
I was able to confirm that LSC-DARM_IN1 does see the lockloss before ASC-AS_A_DC_NSUM_OUT does (attachment1), which had been brought up by TJ (72890), but I want to clarify that this isn't a recent occurrence - these locklosses from July 27th (attachment2) and August 10th (attachment3) also show the lockloss in DARM_IN1 a few milliseconds before the light falls off the AS_A photodiode.
In the case of this lockloss I could not find an obvious cause, but since there was an EX callout right before the lockloss, I looked into the ETMX suspension channels that we had examined a bit yesterday (72896).
The lockloss was seen in DARM and the DCPDs before being seen in ETMX_L3_MASTER_OUT_{UL,UR,LL,LR}_DQ channels (attachment4), followed by the AS_A channel later. However, SUS-ETMX_L3_MASTER_OUT... goes through a few filters/functions before arriving at this reading and so might have a higher latency than the DARM and DCPD channels.
The EX glitch occurred ~700ms before DARM saw the lockloss, and similarly, it was seen in DARM and the DCPDs before the ETMX L3 channels (attachment5). I'm not sure this glitch could have caused the lockloss (at least not on its own), since we have had glitches this big (and much bigger) in ETMX that were also seen in DARM but did not cause a lockloss; see attachment6 for an example from almost a day earlier.
Thinking about what Camilla said in 72896, "ETMX moving would cause a DARM glitch (so the DARM BLRMs to increase) or vice versa, DARM changing would cause ETMX to try to follow": in this case, depending on the latency, either could still be true. But I don't see why we would lose lock from a (relatively) mid-size saturation yet hold on through a larger one, unless there was some other factor (which there probably usually is).
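For reference, a minimal sketch of how this kind of channel-timing comparison could be reproduced with gwpy. This is not the tool used to make the attachments; the "_DQ" suffixes on the channel names and the GPS time below are my assumptions, so substitute the exact lockloss time before running.

```python
# Sketch: pull DARM and AS_A around the lockloss and plot them together so the
# few-millisecond offset between the two channels is visible.
from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:LSC-DARM_IN1_DQ",          # DARM error signal (assumed _DQ frame name)
    "H1:ASC-AS_A_DC_NSUM_OUT_DQ",  # light on the AS_A photodiode (assumed _DQ frame name)
]
t0 = 1378917498  # approximate GPS time of the 09/16 16:38 UTC lockloss (assumption)

data = TimeSeriesDict.get(channels, t0 - 2, t0 + 2)
plot = data.plot()
plot.savefig("lockloss_channel_timing.png")
```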
Relocking after this lockloss:
Lost lock after going through ACQUIRE_PRMI three times, so I decided to run an initial alignment. Initial alignment completed fine and we started locking and moving through states quickly, but then we lost lock at OFFLOAD_DRMI_ASC. I let it try locking again and everything was fine, so that OFFLOAD_DRMI_ASC lockloss seems to have been a one-off.
Closes FAMIS#26209, last completed Sept 8th
Laser Status:
NPRO output power is 1.833W (nominal ~2W)
AMP1 output power is 67.19W (nominal ~70W)
AMP2 output power is 134.8W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 41 days, 0 hr 12 minutes
Reflected power = 16.6W
Transmitted power = 109.0W
PowerSum = 125.6W
FSS:
It has been locked for 0 days 19 hr and 29 min
TPD[V] = 0.8165V
ISS:
The diffracted power is around 2.4%
Last saturation event was 0 days 19 hours and 29 minutes ago
Possible Issues: None
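As a rough illustration only, this is the kind of scripted range check that could back up a report like this. The EPICS channel names below are hypothetical placeholders (not the real H1 PSL channels), and the nominal ranges are taken loosely from the values above.

```python
# Hypothetical sketch of a scripted PSL status check; channel names are
# placeholders and the ranges are only loosely based on the report above.
from epics import caget  # pyepics

CHECKS = {
    # hypothetical channel name         (low, high) nominal range
    "H1:PSL-NPRO_OUTPUT_POWER_W":  (1.8, 2.2),     # NPRO output ~2 W
    "H1:PSL-AMP1_OUTPUT_POWER_W":  (65.0, 75.0),   # AMP1 ~70 W
    "H1:PSL-AMP2_OUTPUT_POWER_W":  (134.0, 141.0), # AMP2 135-140 W, with margin
}

for channel, (low, high) in CHECKS.items():
    value = caget(channel)
    ok = value is not None and low <= value <= high
    print(f"{channel}: {value} [{'OK' if ok else 'CHECK'}]")
```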
TITLE: 09/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.24 μm/s
QUICK SUMMARY:
Everything is looking good this morning. We're Observing and have been Locked for 18hrs.
Earthquake mode was activated between 13:04-13:14UTC.
TITLE: 09/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Uneventful shift (mostly) with H1 approaching 10hrs of lock.
LOG:
Bumped out of Observing due to the TCS ITMX CO2 laser unlocking.
( 2023-09-16_03:01:35.894099Z TCS_ITMX_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point )
The TCS_ITMX_CO2 guardian node was able to restore the laser within ~70 sec, and then I took H1 back to OBSERVING.
TITLE: 09/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
Got a rundown of the day from Oli (not much to report for H1 other than the known investigations into EX glitches prior to locklosses). Microseism has had a slow increase over the last 24hrs and winds are low.
(My first time in the Control Room for a while, and it is noticeably quieter in here, i.e. less fan noise?)
TITLE: 09/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Have now been Observing and Locked for 2 hours. Fairly quiet day today with only the one lockloss.
15:00UTC Detector in Observing and Locked for 16hrs 22mins
18:55 Earthquake mode activated due to earthquake from Chile
19:05 Back to calm
19:54 Lockloss
21:04 Reached Nominal Low Noise
21:19 Observing
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
12:14 | FAC | Karen | Vac Prep | n | Tech clean | 15:14 |
16:52 | FAC | Cindi | Wood Shop | n | Overpass clean | 17:01 |
17:21 | FAC | Kim | H2 | n | Tech clean | 17:30 |
20:07 | LAS | Travis | FCES | n | Laser glasses check | 20:21 |
The BBB channel was continually going into the alarm range. It's been removed from Picket Fence FOM and EPICS.
The steps for removal were:
1. Edited /opt/rtcds/userapps/release/isi/h1/scripts/Picket-Fence/LHO-picket-fence.py. Commented out the block at line 39 that adds the "BBB" channel.
2. VNC to nuc5. Close the Picket Fence window.
3. ssh as controls to nuc5. Run "start/launch.sh". This restarts the Picket Fence FOM, which also sets the EPICS variables.
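A hypothetical sketch of the kind of edit made in step 1; this is not the actual contents of LHO-picket-fence.py, and the station names, network codes, and SeedLink selector below are invented for illustration.

```python
# Hypothetical illustration of dropping the BBB station from a station list.
STATIONS = [
    ("UW", "STA1"),   # placeholder PNSN stations
    ("UW", "STA2"),
    # ("CN", "BBB"),  # BBB commented out: it kept sitting in the alarm range
]

def subscribe_all(client, stations=STATIONS):
    """Ask a SeedLink-style client for data from each remaining station."""
    for network, station in stations:
        client.select_stream(network, station, "HHZ")  # selector is an assumption
```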
Thanks Erik.
FYI - There are a bunch of stations around the Vancouver area, but BBB is the only one hosted by PNSN. We've reached out to see if we can get access to a similar low-latency server so that we can hopefully find a quieter station to use. These stations are useful for monitoring incoming motion from Alaska.
21:19UTC Back Observing