Summary of the DQ shift from Thursday 2nd February 00:00 - Sunday 5th February 23:59 (UTC); see the linked report for full details.
Laser Status:
SysStat is good
Front End Power is 34.05W (should be around 30 W)
HPO Output Power is 164.3W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 7 days, 21 hr 17 minutes (should be days/weeks)
Reflected power = 14.81 W
Transmitted power = 65.69 W
PowerSum = 80.50 W (reflected + transmitted).
FSS:
It has been locked for 0 days 14 hr and 11 min (should be days/weeks)
TPD[V] = 1.753V (min 0.9V)
ISS:
The diffracted power is around 2.0% (should be 3-5%)
Last saturation event was 0 days 14 hours and 12 minutes ago (should be days/weeks)
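These checklist comparisons are easy to script. Below is a minimal sketch in Python using pyepics; the EPICS channel names are hypothetical placeholders (not the actual H1 channel list), and the limits are just the nominal values quoted above.

# Minimal sketch of an automated PSL status check (illustrative only).
# ASSUMPTION: the channel names below are hypothetical placeholders,
# not real H1 EPICS channels; the limits are the nominal values above.
from epics import caget  # pyepics

CHECKS = [
    # (hypothetical channel,       low,  high, label)
    ("H1:PSL-PMC_REFL_POWER",     None,  20.0, "PMC reflected [W]"),
    ("H1:PSL-PMC_TRANS_POWER",    60.0,  None, "PMC transmitted [W]"),
    ("H1:PSL-FSS_TPD",             0.9,  None, "FSS TPD [V] (min 0.9)"),
    ("H1:PSL-ISS_DIFFRACTED_PCT",  3.0,   5.0, "ISS diffracted power [%] (3-5 nominal)"),
]

def check(channel, low, high, label):
    value = caget(channel)  # returns None if the channel can't be read
    if value is None:
        print(f"{label}: no reading from {channel}")
        return
    ok = (low is None or value >= low) and (high is None or value <= high)
    print(f"{label}: {value:.2f} {'OK' if ok else '** OUT OF RANGE **'}")

for row in CHECKS:
    check(*row)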
Possible Issues:
ISS diffracted power is low.
I've looked at two images of the IO beam dump steering mirror, from Oct 2012 and Nov 2016. Comparing the two suggests the beam may have moved enough to be clipping on the edge of the steering mirror, which would cause significant scattering. The Nov 2016 image does not show the entire steering mirror, so there is non-trivial potential for error in identifying the location of the beam on the mirror; my estimate for that error is around 25%. If the beam is actually 25% closer to the Oct 2012 location, the risk of clipping is small, but if it is 25% farther away, the risk of clipping is high. Attached: image of the beam dump and steering mirror from Nov 2016, and an analysis of the images (2.4MB).
Joe is using the machine on the sidewalks east of the OSB. Low-frequency noise is audible in the control room.
TITLE: 02/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 63Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY: Locked the whole shift, I had nothing to do.
LOG:
Observing at 62Mpc for 9hrs. The range still seems to be going down very slowly, now with a bit of extra loss from Hanford traffic. Still not sure of the cause.
The site is a bit slippery, but the rest of the roads seemed good when I came in.
Out at 08:31UTC while LLO is down to run a2l.
Back to observing at 08:42UTC.
Intention bit actually flipped at 08:50UTC. My click didn't register, and I didn't notice that it hadn't worked until now.
TITLE: 02/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 8mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.30 μm/s
QUICK SUMMARY: The range seems to have been trending down over the last 24hr; you could maybe blame the wind and/or useism, but I've been fooled before. 4hr lock so far.
TITLE: 02/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: One lock loss. New longest O2 lock stretch of 41:53:43. No major issues relocking. Damped PI modes 26 and 27. Wind has come back down.
LOG:
03:19 UTC TCSY chiller flow is low verbal alarm. MEDM shows it to be around 3.1, which appears normal. Must have been a glitch.
03:47 UTC Lock loss. Verbal alarms crashed.
04:28 UTC Observing. Damped PI modes 26 and 27 beforehand.
04:28 UTC Observing. No major issues relocking. Damped PI modes 26 and 27.
Reacquiring after lock loss at 03:47 UTC. Cause not clear.
TITLE: 02/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 15mph Gusts, 14mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY:
No issues to report.
All is well in H1 land. There was one GRB alert. The lock is just past 34 hours. It has started snowing lightly.
Event time: 18:07:12
Verbal Alert Time: 18:08:02
Tom Evans at LLO contacted me via TeamSpeak to verify that the alarm was received by me as well.
After a dazzlingly confusing narrative in aLogs and e-mails about what to do with the INJ_TRANS Guardian node, I elected to manually take the node to INJECT_KILL. I shall return the state to INJECT_SUCCESS after the requisite 1hr stand-down time. When I spoke to Keita he assured me that this is OK to do and won't harm or change anything, but that it is no longer necessary to do manually. I spoke with Tom at LLO and he told me that they are still doing the manual injection pause as well. I infer that they don't use the same method for handling this type of event that we do? I hope it isn't just me who is confused about this and about why it isn't being handled identically by the entire project.
19:10UTC I've returned the above-mentioned node to INJECT_SUCCESS. Below is a screenshot of what the node looked like when the alert occurred and what it looked like after I returned it to INJECT_SUCCESS (left to right, respectively).
Minimal things you need to know about GRB alerts:
Most important: You don't have to do anything on your own if the guardian is working as it should.
How to tell if the guardian is working correctly:
Go to gracedb and find the "event time". If the current time is still within one hour of the "event time", the injection guardian should be in EXTTRIG_ALERT_ACTIVE and the requested state (usually INJECT_SUCCESS) should remain untouched.
If you are outside this one-hour window, the guardian should be back to WAIT_FOR_NEXT_INJECT (assuming that the requested state is INJECT_SUCCESS).
If the requested state is INJECT_KILL, the guardian should be in INJECT_KILL no matter what.
If none of the above is true when there's no outstanding injection, the guardian is not working as it should. You should manually transition to INJECT_KILL, wait for an hour, and then request INJECT_SUCCESS. Notify the LLO operator, write an alog entry, and send an email. (These rules are sketched in the example below.)
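To make the rules above concrete, here is a minimal sketch of the same logic in Python. Guardian itself is Python-based, but this function is only an illustration of the checklist above, not actual Guardian code; it also assumes no injection is in progress, and the example date is illustrative.

# Sketch of the "is the injection guardian behaving?" rules above.
# Illustration only, not Guardian code. State names are from the INJ_TRANS node.
# Assumes there is no outstanding injection in progress.
from datetime import datetime, timedelta

STAND_DOWN = timedelta(hours=1)

def guardian_ok(event_time, now, requested, current):
    """True if the INJ_TRANS node state is consistent with the rules above."""
    if requested == "INJECT_KILL":
        # With INJECT_KILL requested, the node should be there no matter what.
        return current == "INJECT_KILL"
    if requested == "INJECT_SUCCESS":
        if now - event_time < STAND_DOWN:
            # Inside the one-hour window: alert state, request left untouched.
            return current == "EXTTRIG_ALERT_ACTIVE"
        # Outside the window: back to waiting for the next injection.
        return current == "WAIT_FOR_NEXT_INJECT"
    return False  # anything else: investigate, go to INJECT_KILL manually

# Example (date assumed): 23 minutes after a GRB event time.
event = datetime(2017, 2, 6, 18, 7, 12)
print(guardian_ok(event, event + timedelta(minutes=23),
                  "INJECT_SUCCESS", "EXTTRIG_ALERT_ACTIVE"))  # True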
What if I've just heard a GRB alert but the gracedb "event time" is already more than an hour ago? Do I need to do something?
No. Or, if you're curious:
Go to gracedb and find the "event time" as well as the alert time labeled "submitted". The latency between the two can be more than an hour on rare occasions.
The control room audible alarm is triggered after the "submitted" time, which is by definition later than the "event time".
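As a worked example using the times Ed reported above (event time 18:07:12, verbal alert 18:08:02): the latency there was only 50 seconds, but the point is that the stand-down window is anchored to the "event time", not to when the alarm sounded. A quick sketch, with the date assumed for illustration:

# The one-hour stand-down is measured from the gracedb "event time",
# not from the "submitted" time that triggers the audible alarm.
from datetime import datetime, timedelta

event_time = datetime(2017, 2, 6, 18, 7, 12)  # gracedb "event time" (date assumed)
alert_time = datetime(2017, 2, 6, 18, 8, 2)   # control room verbal alert

print("alert latency:", alert_time - event_time)              # 0:00:50 here
print("window closes:", event_time + timedelta(hours=1), "UTC")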
What if you requested INJECT_KILL after a GRB alert when the guardian was actually fine?
No harm whatsoever, though not necessary.
Remember to request INJECT_SUCCESS later. If you confirm that the guardian is working as it should, you can request INJECT_SUCCESS even before the one-hour window expires; otherwise, wait for the full hour.
In the case of the GRB reported by Ed, I see that the guardian was working as it should. After the alert was detected, the guardian jumped from WAIT_FOR_NEXT_INJECT to EXTTRIG_ALERT_ACTIVE without touching the requested state, which was INJECT_SUCCESS (his first screenshot).