TITLE: 02/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 15mph Gusts, 14mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY: No issues to report.
All is well in H1 land. There was one GRB alert. The lock is just past 34 hours. It has started snowing lightly.
Event time: 18:07:12
Verbal Alert Time: 18:08:02
Tom Evans at LLO contacted me via TeamSpeak to verify that I had received the alarm as well.
After a dazzlingly confusing narrative in aLogs and e-mails about what to do with the INJ_TRANS Guardian node, I chose to manually take the node to INJECT_KILL. I shall return the state to INJECT_SUCCESS after the requisite 1-hour stand-down time. Keita assured me that this is OK to do and won't harm or change anything, but that it is no longer necessary to do manually. I spoke with Tom at LLO and he told me that they are still doing the manual injections pause as well. I infer that they don't use the same method for handling this type of event that we do? I hope it isn't just me who is confused about this and about why it isn't being handled identically across the entire project.
Been locked for 26 hrs 47 mins. Two GRB alerts but LLO was down. I also switched INJ_TRANS guardian state to INJECT_SUCCESS earlier as requested by Keita.
Quick Summary: Been locked and in Observe for 23 hours. Noticed excessive noise in DARM around 100 Hz at one point, but it has since gone away.
TITLE: 02/05 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Remained in observing the entire shift. One GRB alert. Damped PI mode 26. Reset timing error on H1IOPASC0.
LOG:
00:19 UTC Damped PI mode 26 by changing phase and sign of gain
00:49 UTC Corey leaving. Drove his car halfway up the beamtube overpass to take pictures at the end of his shift.
06:30 UTC Restarted video4
06:35 UTC Diag reset timing error on H1IOPASC0
Have remained locked and in observing. Had to damp PI mode 26. GRB alert at 03:58 UTC. No other issues.
TITLE: 02/05 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 70Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY: No issues to report.
TITLE: 02/04 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
Locked entire shift (14+ hrs) with a range hovering just under 70Mpc. No new snow & temps got up to 29degF.
LOG:
Swapped out (3) halogen light bulbs in the Control Room (but found out we had LED ones after the fact). D'oh!
This afternoon I was looking at the state of the INJ_TRANS Guardian node (see attached trend of INJ_TRANS state). It is currently in the INJECT_KILL (value=210) state, and trending it over O2 shows it has been there for a while. Before that, it was in the WAIT_FOR_NEXT_INJECT (value=20) state. INJ_TRANS's NOMINAL STATE is selected as "NONE".
What do we want this Injection Guardian Node to be?
Note: Ed posted an alog about this, #32951, and Patrick/Keita made entries about this earlier (#32491).
If we are standing down from a GW or external trigger, then the state should be set to INJECT_KILL, otherwise the set point should be INJECT_SUCCESS.
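For quick checks, a minimal read-only sketch using pyepics is below. The H1:GRD-INJ_TRANS_* channel names are assumptions (verify against the Guardian MEDM screen), and the script only prints the rule above; it does not change anything.

    # Minimal read-only sketch (assumptions, not a site procedure): print the
    # INJ_TRANS node's current and requested states and what the rule above says.
    from epics import caget  # pyepics

    def rule_says(standing_down):
        """Set point per the rule: INJECT_KILL during a stand-down, else INJECT_SUCCESS."""
        return "INJECT_KILL" if standing_down else "INJECT_SUCCESS"

    state   = caget("H1:GRD-INJ_TRANS_STATE_S")    # current state (string), assumed channel name
    request = caget("H1:GRD-INJ_TRANS_REQUEST_S")  # requested state (string), assumed channel name
    print("INJ_TRANS state:  ", state)
    print("INJ_TRANS request:", request)
    print("rule says request:", rule_says(standing_down=False))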
Continuing with a 10+ hr lock. Over the last 3 hrs we've inched up to a current average of 69Mpc (touching 71Mpc a couple of times).
I'm off to make brunch.
TITLE: 02/04 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.28 μm/s (useism slowly trending down over last 4-6hrs?)
Light dusting of snow overnight & temps around 23degF. No issues on main roads, but other roads may be icy, so be safe!
QUICK SUMMARY:
Nutsinee handed off a nice OBSERVING H1. She mentioned VERBAL_ALARM crashed during the lockloss in her shift.
TITLE: 02/04 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Not much. One lockloss (alog33885). No issues relocking.
Nothing obvious except for excessive ground motion in the 3-10 Hz band. A chunk of snow falling off, perhaps? I didn't hear anything in the control room.
Verbal alarm also crashed.
NameError: global name 'month_num' is not defined
09:54 UTC Back to Observe
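For reference, the NameError above is an ordinary Python error: a name is looked up at call time but was never assigned at module (global) scope. The snippet below is a minimal reproduction of that failure mode, not the VerbalAlarms source, and the function name is made up.

    # Minimal reproduction of the failure mode (an assumption about the cause,
    # not the VerbalAlarms code): a module-level name is used before it is ever
    # assigned, so the lookup fails when the function runs.
    def announce_date():
        return "month %s" % month_num   # no local or global 'month_num' exists

    announce_date()  # Python 2 raises: NameError: global name 'month_num' is not defined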
02:27 UTC The TCS_ITMY_CO2 guardian node has transitioned to FIND_LOCK_POINT. The number of SDF differences is varying. One instance of them is attached.
The guardian node has returned to LASER_UP. However, a number of SDF differences remain. I have accepted them (screenshot attached). Set back to observing at 02:40 UTC. Just got kicked back to commissioning again. Same issue...
Trying again. Accepted SDF differences attached.
Got kicked out of observing again while I was out of the room. Setting back to observing. SDF differences attached.
Attached are some plots of the event, for comparison with what happened last time.
This looks like it started with a spike in the lsrpwr_hd_pd channel, which is the measurement channel for the laser output power used to stabilize the laser.
There is then a corresponding correction to the PZT position, and a change in current to the laser associated with this move. After that the slower temperature change happens to bring the PZT voltage back to the middle of its range. This all happens in the first 1/3 of the plots shown here. By the middle of the plot, the laser is unlocked and trying to relock.
Firstly, I suspect that the spike in laser power that triggered this may not be real. We should take a closer look at it, but it may be related to the other spikes and jumps you're seeing on the Y-arm laser.
Secondly, I think we should revisit the intentions for this laser locking system. It is meant to keep the laser power relatively stable, not to kick us out of observation mode.
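For anyone who wants to reproduce these trends, a minimal sketch using gwpy is below. The full channel name and GPS span are placeholders/assumptions (the log only gives the short name lsrpwr_hd_pd), so substitute the real channel and times.

    # Minimal sketch (assumptions, not the attached plots): trend the CO2 laser
    # power readback around the glitch with gwpy.
    from gwpy.timeseries import TimeSeries

    CHANNEL = "H1:TCS-ITMY_CO2_LSRPWR_HD_PD_OUT_DQ"   # assumed full name for "lsrpwr_hd_pd"
    START, END = 1170400000, 1170400600               # placeholder GPS times around the event

    data = TimeSeries.get(CHANNEL, START, END)        # fetch from NDS/frames
    plot = data.plot()
    ax = plot.gca()
    ax.set_ylabel("Laser power readback [arb. units]")
    ax.set_title("CO2 laser power around the FIND_LOCK_POINT transition")
    plot.savefig("co2_lsrpwr_trend.png")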
19:10 UTC I've returned the above-mentioned node to INJECT_SUCCESS. Below is a screenshot of what the node looked like when the alert occurred and then what it looked like after I returned it to INJECT_SUCCESS (left to right, respectively).
Minimal things you need to know about GRB alerts:
Most important: You don't have to do anything on your own if the guardian is working as it should.
How to tell if the guardian is working correctly (a rough code sketch of this check follows the list):
Go to gracedb and find the "event time". If the current time is still within one hour of the "event time", the injection guardian should be in EXTTRIG_ALERT_ACTIVE and the requested state (usually INJECT_SUCCESS) should remain untouched.
If you are out of this one hour window, the guardian should be back to WAIT_FOR_NEXT_INJECT (assuming that the requested state is INJECT_SUCCESS).
If the requested state is INJECT_KILL, the guardian should be in INJECT_KILL no matter what.
If none of the above is true when there's no outstanding injection, the guardian is not working as it should. You should manually transition to INJECT_KILL, wait for an hour, then request INJECT_SUCCESS. Notify the LLO operator, write an alog, and send an email.
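As a rough illustration only (not the guardian code itself), the check in the list above boils down to the following; the state names come from the list, everything else is assumed.

    # Rough illustration of the health check described above -- not guardian code.
    from datetime import datetime, timedelta, timezone

    def expected_inj_trans_state(event_time, requested, now=None):
        """State we expect the INJ_TRANS node to be in if it is behaving."""
        now = now or datetime.now(timezone.utc)
        if requested == "INJECT_KILL":
            return "INJECT_KILL"                 # a kill request wins no matter what
        if now - event_time <= timedelta(hours=1):
            return "EXTTRIG_ALERT_ACTIVE"        # inside the 1-hour stand-down window
        return "WAIT_FOR_NEXT_INJECT"            # window expired, back to waiting

    # Hypothetical example: an event 30 minutes ago with INJECT_SUCCESS requested
    evt = datetime.now(timezone.utc) - timedelta(minutes=30)
    print(expected_inj_trans_state(evt, "INJECT_SUCCESS"))   # -> EXTTRIG_ALERT_ACTIVE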
What if I've just heard a GRB alert but the gracedb "event time" is already more than an hour ago? Do I need to do anything?
No. Or, if you're curious:
Go to gracedb and find the "event time" as well as the alert time labeled as "submitted". The latency here could be more than an hour on rare occasions.
The control room audible alarm is triggered after the "submitted" time which is by definition later than the "event time".
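As a trivial worked example with made-up timestamps, the alert latency is just the difference between the two gracedb times:

    # Made-up timestamps for illustration: latency = "submitted" minus "event time".
    from datetime import datetime

    event_time = datetime(2017, 2, 6, 18, 7, 12)    # example gracedb "event time"
    submitted  = datetime(2017, 2, 6, 19, 30, 0)    # example gracedb "submitted" time
    print("alert latency:", submitted - event_time)  # 1:22:48 -> past the 1-hour window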
What if you requested INJECT_KILL after a GRB alert when the guardian is fine?
No harm whatsoever, though not necessary.
Remember to request INJECT_SUCCESS later. If you confirm that the guardian is working as it should, you can request INJECT_SUCCESS even before the one-hour window expires; otherwise, wait for an hour.
In the case of the GRB reported by Ed, I see that the guardian was working as it should. After the alert was detected, the guardian jumped from WAIT_FOR_NEXT_INJECT to EXTTRIG_ALERT_ACTIVE without disturbing the requested state, which was INJECT_SUCCESS (his first screenshot).