TITLE: 05/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing
OUTGOING OPERATOR: Cheryl
QUICK SUMMARY: No issues were handed off. Lock is 4.5 hours old.
It looks like the storm brought down part of the new FMCS, possibly the communications to the LSB. All other buildings appear to be working fine with both the old and new systems.
TITLE: 05/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Other
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: A large lightning storm swept through the area and caused numerous power glitches to the site. Since then it has been an effort to recover. Just to make things a bit more hectic, we had a M5.8 earthquake in Tajikistan to shake up the recovery effort. Unfortunately I didn't write down times for the recovery process, but it went something like this:
Thank you to Dave Barker, Jason Oberling, Patrick Thomas, Richard McCarthy, and Keita Kawabe for the help!
I have restored all suspensions back to where they were during our last lock and made it through initial alignment up to SRC align without any major issues. Now handing off to Cheryl.
LOG:
I managed to clear the DAQ errors for h1pemm[x,y] by restarting the IOP models; it took more restarts than should be necessary for them to latch to GPS time via EPICS. I suspect their EPICS GPS source was having issues earlier. Anyhow, all DAQ data is now good.
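For reference, a minimal sketch (not the actual recovery procedure) of how one could confirm over EPICS that an IOP model has latched to a valid, advancing GPS time before declaring its DAQ data good. The PV name below is an illustrative placeholder, not a real H1 channel, and the script assumes pyepics and site channel access.

import time
from epics import caget  # pyepics

IOP_GPS_PV = "H1:FEC-IOP_PEM_MX_GPS_TIME"  # placeholder PV name, not the real channel

def gps_is_latched(poll_s=5, checks=3):
    """Return True if the IOP GPS time channel reports a positive, advancing value."""
    last = caget(IOP_GPS_PV)
    for _ in range(checks):
        time.sleep(poll_s)
        now = caget(IOP_GPS_PV)
        if now is None or last is None or not (now > last > 0):
            return False  # channel unreadable, stuck, or not yet valid
        last = now
    return True

if __name__ == "__main__":
    print("GPS latched" if gps_is_latched() else "not latched; another IOP restart may be needed")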
Greg checked on the LDAS and GDS systems and found no problems. We should be good to go to observation mode when ready.
I checked the HEPI pump controller computers; they did not reboot and seem to have ridden through the whole thing. BTW, they are running a 2.6.18 kernel, which predates the day208 bug.
In summary, I don't think we need to do anything more at this point.
We think the worst of the storm may be over. I'll let TJ fill in the details; I know of at least three glitches that took down front end models. The first only took out the ISC models at the end stations (suspect timing card power supplies). The second (at 8:40pm) was the biggest and rebooted every front end computer. Most front ends came back up after the reboots, but h1ioppsl0 had a timing excursion which took about 15 minutes to clear. At that point the only problem was bad DAQ data from both PEM systems at the midstations. A further glitch caused a DAC error on h1ioplsc0; I restarted all the LSC models to clear it.
The Beckhoff slow control machines also rebooted. All code restarted except for the three PLCs at EX. These have just been started.
At time of writing, the only computer issue I see is the bad data from the midstation PEM front end computers to the DAQ. If memory serves, these send their DAQ data back to the end stations via ethernet-to-fibre-optic converters and then piggy-back on the main switch-to-switch link to the MSR. Maybe the converters need a power cycle.
Greg is checking whether hoft would be fine with invalid midstation PEM channels.
All of LDAS at LHO survived the power glitches.
Data from CDS and the DMT continue to flow, and I see H1 raw, trend, low-latency, and aggregated hoft arriving on the clusters at LHO and CIT. The summary pages are up to date.
Thus, I think things are fine from the point of view of LDAS, and hoft should be marked as good and ready for low-latency analysis downstream once H1 gets back into observation mode.
Lightning strike. I just came back inside and looked at the camera to see it hit us. Dave is on the phone now.
Forgot I was logged in from when we were in maintenance for cleanroom cleaning. Logged out at ~23:51 UTC.
TITLE: 05/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY: LLO is still down, but we have been running at 65Mpc for 4 hrs.
TITLE: 05/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 63Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: In and out of Observing a few times for cleanroom work and an OpLev swap, plus one lockloss likely caused by the cleaning crew (blame withheld). Otherwise no issues to report.
LOG:
17:45 Nutsinee to both ends to reinstall HWS plate
17:50 Hugh to MX
Looking at the trends posted in LHO alog 36024, the problem with the ITMy oplev appears to be a failing laser diode. Since LLO was down, it was decided to swap this laser with one I recently stabilized in the lab. The motivation is to have a before/after record of the ITMy pointing for the vent beginning next week; it was thought better to perform the swap now than to have it happen during the vent (whether for preventive maintenance or due to laser failure) and have that affect the alignment reading of the oplev (in theory it shouldn't, but we all know that old saying about theory and practice...). New laser SN is 120-2, old laser SN is 189-1. As usual, the laser will need a couple of hours to come to thermal equilibrium with its new home, then I can assess it for glitchiness (see the sketch below). I will leave WP 6616 open until the laser is operating glitch-free.
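As an aside, a minimal sketch of one way the post-settling glitch check could be done offline: trend the oplev SUM channel over the settling window and count large excursions from the median. The channel name, time window, and 10% threshold below are assumptions for illustration only, not the actual channel or criteria used for this assessment, and the sketch assumes gwpy with NDS2/frame data access.

import numpy as np
from gwpy.timeseries import TimeSeries

CHANNEL = "H1:SUS-ITMY_L3_OPLEV_SUM_OUT_DQ"  # assumed channel name for illustration
START, END = "2017-05-04 22:00", "2017-05-05 00:00"  # example two-hour window

sum_ts = TimeSeries.get(CHANNEL, START, END)  # fetch data via NDS2 or frame files
median = np.median(sum_ts.value)
# Flag samples deviating from the median by more than 10% as potential glitches
glitchy = np.abs(sum_ts.value - median) > 0.10 * median
print(f"Fraction of samples outside +/-10% of median: {glitchy.mean():.3%}")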
OpLev laser swap is complete. Back to Observing.
For Jason to swap ITMy OpLev laser. WP#6616.
Shifter: Beverly Berger
LHO fellow: Evan Goetz
For full results see here.
Attaching screenshot of plots. CLOSING out FAMIS #4726.
Everything in these trends looks normal except for the ITMy SUM, which now appears to show a failing laser diode (previous issue reported here); the oplev laser was therefore swapped this afternoon while LLO was down.
For the HAM2 oplev, this can likely be removed from the script as the HAM oplevs are now set for decommissioning (ECR E1700123 and ECR tracker 7884).