Reported here by Jim. This trip was caused by the 35W FE power watchdog tripping, thereby shutting down the PSL. It appears to have been caused by the NPRO shutting off. Attachment 1 shows the output powers of the 35W FE and the HPO, as well as the power watchdogs for each laser. It is clear that the FE watchdog trips several seconds before the HPO watchdog trips. The second attachment shows the output power of the NPRO and the 35W FE power watchdog; it is clear the watchdog trip coincides with the NPRO shutting off. At this time the cause of the NPRO shut-down is unclear.
The PSL itself was restarted without issue, but as Jim mentioned in his above-linked alog there was an issue relocking the ISS; this was resolved by locking the PMC (the in-loop PD for the ISS is downstream of the PMC, so if the PMC is unlocked the ISS will not lock). I had an issue with my remote login not being able to bring up a sitemap (command not found?), so I drove out to the site to investigate. When I arrived onsite, the lock request for the PMC was OFF, which is obviously not the nominal configuration for a locked PMC. When the laser was restarted I asked Jim if the PMC was locked and he indicated it was, but when I arrived it was not; I am not sure what happened here.

I know there is a script responsible for turning the PMC and FSS off in the event of laser power loss, and I believe it also turns them back on once power is restored; I will follow up with TJ, the script's author, about this functionality. In the previous 2017 laser trips this issue was not encountered (i.e. open the HPO external shutter and the PMC and FSS locked right up, no further action required). Maybe the script didn't quite work right this time?

At any rate, everything is functioning normally now and the IFO is currently relocking (engaging ASC as I type this). I will investigate more in the morning; for now I'm going to try to get some sleep.
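For reference while we follow up with TJ, here is a minimal sketch of the kind of monitor-and-restore logic described above. This is not the production script: it assumes pyepics, and the channel names and power threshold are illustrative placeholders, not the actual H1 PSL channels.

```python
# Sketch only: monitor laser power and drop/restore the PMC and FSS lock
# requests around a power loss. Channel names below are hypothetical.
import time
from epics import caget, caput

PMC_LOCK_REQ = "H1:PSL-PMC_LOCK_REQUEST"   # hypothetical channel name
FSS_AUTOLOCK = "H1:PSL-FSS_AUTOLOCK_ON"    # hypothetical channel name
LASER_POWER  = "H1:PSL-POWER_OUTPUT_W"     # hypothetical channel name
POWER_THRESH = 10.0                        # W, illustrative threshold

def on_power_loss():
    """Drop the lock requests so the servos don't rail while the laser is down."""
    caput(PMC_LOCK_REQ, 0)
    caput(FSS_AUTOLOCK, 0)

def on_power_restored():
    """Re-assert the lock requests once output power is back above threshold."""
    caput(PMC_LOCK_REQ, 1)
    caput(FSS_AUTOLOCK, 1)

def monitor(poll=5.0):
    was_up = caget(LASER_POWER) > POWER_THRESH
    while True:
        is_up = caget(LASER_POWER) > POWER_THRESH
        if was_up and not is_up:
            on_power_loss()
        elif is_up and not was_up:
            # If this branch is skipped or fails, the operator would find the
            # PMC lock request still OFF after the laser comes back, as we did.
            on_power_restored()
        was_up = is_up
        time.sleep(poll)
```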
Submitted FRS 8049 for the PSL trip.
Jason's remote SSH execution of the MEDM sitemap highlighted a problem that some CDS accounts are missing the standard .profile (or .bash_profile) files in their home directories. I have corrected this by copying in the template .profile file into home directories of those affected accounts.
We will go through Jason's script problem with TJ.
TITLE: 05/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
LOG:
It was quiet until a PSL watchdog trip broke the 30 hr lock. Called Jason ~5:00 UTC for remote help, but ISS wouldn't lock. He drove in, and it turned out the PMC wasn't requesting Lock. As soon as he hit that button, PMC locked and ISS locked. I didn't change the state of the PMC, so I don't know what happened here. Sad.
We're back trying to lock now.
TITLE: 05/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 59Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Quiet shift, locked for over 24hrs. Range is still down around 60Mpc, possibly from the rising useism?
LOG:
Locked for almost 20hrs but the range has been trending down slowly for the past ~10hrs, with the exception of Cheryl's IM alignment change ~5hrs ago.
Looking through the summary pages for clues, the only things I could find are that the strain seems a bit noisier around 12-13 Hz and the CAL accuracy ASD ratio also seems a bit worse. Roll modes don't look any higher than usual, so I don't know where this 13 Hz noise is coming from, and the CAL ratio is most likely a symptom far down a chain that could have originated in many places.
TITLE: 05/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 62Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 13mph Gusts, 11mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY: Going on 16hrs at 63Mpc. Calm environment.
H1 Status:
SITE:
Patrick, Kiwamu, and I reversed these IM moves this morning. It looks like after this move the recycling gain dropped by about 2%, the reflected power dropped by about 13% (which is not necessarily a bad thing), and for some reason MC2 trans sum became noisier. I do not understand why MC2 trans sum would become noisier, since the mode cleaner alignment didn't change during this time.
Right now we are having trouble locking because of low recycling gain, which is why we have reverted.
TITLE: 05/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG:
Nothing much happened. Winds picked up for a little while and there was a small earthquake; both affected the range somewhat.
TITLE: 05/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: One lock loss for an unknown reason. The PSL seemed to struggle and the FSS would lose lock repeatedly, so bringing it back up took a bit longer than usual.
LOG:
This is pure speculation on my part, but if the input mode cleaner is swinging around for whatever reason - which means that it is pulling the laser frequency around - the injection locking of the oscillator may have saturated. Unfortunately there is no monitor of the saturation status of the SR560 that was added to the injection locking servo, but I have seen occasions where the system indicates that it is locked while the SR560 red indicator light is on. A trend of the oscillator PZT output, or of its injection locking status, at the time(s) that the FSS is having a hard time might indicate this. The other possibility is that with the change of input mode cleaner alignment, the alignment onto the second-loop ISS photodiode array may also have changed.
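If it helps the follow-up, here is a minimal sketch of pulling and plotting such a trend, assuming gwpy and NDS access are available. The channel names and time span below are placeholders for illustration only, not verified H1 channels or the actual trip time.

```python
# Sketch: fetch the oscillator PZT monitor and injection-lock status around the
# FSS trouble and plot them together. Channel names are hypothetical.
from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:PSL-OSC_PZT_MON",      # hypothetical PZT monitor channel
    "H1:PSL-OSC_INJ_LOCKED",   # hypothetical injection-lock status channel
]
start, end = "2017-05-06 20:00", "2017-05-06 22:00"  # illustrative UTC span

data = TimeSeriesDict.get(channels, start, end)
plot = data.plot()
plot.savefig("osc_pzt_trend.png")
```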
No obvious cause. I will move the IMs back to where Cheryl suggested and then align and lock.
Back to Observing at 22:39 UTC
Attached is a 24-hour minute-trend plot of the Beckhoff MSR temperature channels, starting at 7am Friday morning PDT. The current situation is that the doors between the control room and the MSR are closed, the doors between the MSR and the hallway are open, and a fan is directing air behind the racks. The temperature variation changed around 18:45 PDT yesterday and is now more constant and slightly elevated. Reminder that if either channel exceeds 30C (86F), cell phone alarms will be generated.
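For anyone watching these temperatures remotely, a quick check against the alarm threshold could look like the sketch below. This assumes pyepics; the channel names are placeholders, not the actual Beckhoff PVs.

```python
# Sketch: read the MSR temperature channels and flag anything over the
# 30C (86F) cell-phone alarm threshold. Channel names are hypothetical.
from epics import caget

MSR_TEMP_CHANNELS = [
    "H1:AUX-MSR_TEMP_1_DEGC",   # hypothetical PV name
    "H1:AUX-MSR_TEMP_2_DEGC",   # hypothetical PV name
]
ALARM_C = 30.0                  # alarm threshold in degrees C

for pv in MSR_TEMP_CHANNELS:
    temp_c = caget(pv)
    status = "ALARM" if temp_c is not None and temp_c > ALARM_C else "ok"
    print(f"{pv}: {temp_c} C [{status}]")
```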
Looks like upgrading scipy will take a bit more work. An FRS (FRS 8048) was opened yesterday for this issue.
TITLE: 05/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY: Locked for 4.5hrs with a calm environment. DIAG_MAIN is reporting that IM1/2 Yaw is out of its nominal range; Cheryl gave me some values to put in if we lose lock.
TITLE: 05/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG:
23:30 Apollo on site to work on MSR cooling, I never saw when he left
23:30 Lockloss and MikeL in CR with a tour
6:50 ITMY TCS guardian takes us out of OBSERVE for 20 minutes, no other obvious effect on the IFO
This seems to be what has been causing some of the trouble with the time machine and a2l scripts. This version of scipy does not expose the "special" attribute that these scripts rely on.
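If this is the usual submodule-import behavior, the likely fix in the affected scripts is to import the submodule explicitly rather than relying on scipy.special being available after a bare "import scipy". A small sketch is below; the erf call is just a stand-in for whatever functions the scripts actually use.

```python
# A bare "import scipy" does not guarantee submodules are loaded, so the
# "special" attribute can be missing depending on the scipy version.
import scipy
# hasattr(scipy, "special") may be False at this point

# Importing the submodule explicitly works across versions:
import scipy.special            # or: from scipy import special
print(scipy.special.erf(1.0))   # erf is only an example function
```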