After calling the control room I drove past CS @ 23:58 and arrived at Y-Mid station @ 00:01.
Started filling CP3 @ 00:03, opened the exhaust check valve bypass valve, then opened LLCV bypass valve 1/2 turn, closed the flow 25 seconds later. 5 minutes later I closed the exhaust check valve bypass valve.
Started driving back from Y-Mid @ 00:12, arrived at the CS @ 00:16.
At 23:58 UTC, Gerardo checked in about filling CP3 at the Y-mid station. He is going to Mid Y to fill the dewar.
H1 abruptly dropped out of lock. Did not see any obvious reason why at first, but then, while waiting for Guardian to bring us back, noticed that we have no light from the PSL.
IMC_LOCK Guardian is saying: FSS Unlocked. (Looking for instructions/alogs about how to address this.)
But I also see that we have 0W output power for the Laser. Might go to the Laser Diode Room to see if there are any obvious messages.
There is no light on the PSL video screens.
Talking with Jason right now.
Happy New Year all!
H1 has historically not had RF beat / whistle problems as bad as L1's. In fact, the last alog report is for data on October 30th. But Dec 31st shows a high density of glitches above 100Hz, densest above 500Hz, which have the signature shape of RF beats we've seen before and are correlated with PRCL and SRCL, similar to one of the manifestations of whistles seen at LLO.
Note 1: we produce auto omega scans for the loudest glitches that hveto finds, and these whistles are all very quiet. If anyone wants to follow these up in more detail, you can get GPS times for the high-frequency, low-SNR triggers for Dec 31 correlated with PRCL and SRCL at those links.
Note 2: Dec 31 looks like the worst day, but there may have been some weak whistles occurring on other days too; we'd have to follow up some low-SNR triggers on those days to confirm (e.g. today, Jan 1, and the past few days).
Quick update: RF beats / whistles are still happening today, Jan 3. The hveto page shows whistles in rounds 4 and 8, coincident this time with H1:ASC-REFL_B_RF45_Q_PIT_OUT_DQ (and not PRCL and SRCL as above, so a different line/VCO frequency must be at play). They are still low SNR, but faintly visible in omega scans. Some example omega scans are attached and linked here. Text files of glitches coincident with ASC REFL B RF45 are here for round 4 and round 8.
Since one can miss recent changes to operating H1 (due to not getting your hands on H1 during long locks, being away between shifts... or maybe just not having a great memory, like me!), there are times when one might not be up to speed on the latest news, workarounds, and tricks for running H1.
This is why we have an Operator Sticky Notes wiki page. During recent shifts, there were a few items/scenarios which were new to me. To help reduce searching, I added several new items to the list; here are some of the newest ones (most recent first) I noticed/came across this week:
Please help to keep this page current with any new changes you come across.
As I walked in on shift, TJ was working on bringing H1 back. I'm fairly certain he was just going for locking H1 back up without trying an alignment. As Guardian was in the first few steps, TJ was touching up arm powers. A pretty cool thing was seeing DRMI lock within seconds(!), and then Guardian continued taking H1 up, until...
Guardian Message: "OMC not ready" In DC_READOUT_TRANSITION
I had heard other operators having to deal with an issue with the OMC not locking over recent weeks, but I had not personally dealt with this. So, when Guardian gave the message "OMC not ready" in DC_READOUT_TRANSITION, I had to do an alog search (and also left a voicemail with the on-call commissioner). I eventually found some alogs that gave some things to try, which are basically:
Made it to NLN.
Guardian Additional Notes: (with TJ's help on the phone)
Did get an ISC_LOCK user message of: "node OMC_LOCK: STOLEN (by USER)". This is due to my taking OMC_LOCK to Auto (#2 above). To clear this message, one can go to ISC_LOCK, click MANUAL, and select INIT (this should take OMC_LOCK back to being managed by ISC_LOCK).
Additionally, I also happened to notice some red values for OMC_LOCK & IMC_LOCK, which were the *_LOCK_EXECTIME_LAST channels. TJ said these are normally red, so I didn't worry about them.
FINALLY: Back to Observing at 17:08, roughly 80min after the EQ (could have been much quicker if I hadn't had the OMC issue!). Had a few big glitches at the beginning of the lock, which could have been related to my issues noted above (maybe?). But over the last few hours, H1 has been running with a nice range around the usual 80Mpc.
Sheila walked me through how to check in the file I changed to the SVN.
Since I was logged in as ops on the operator0 workstation, I had to ssh into another computer:
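For reference, a rough sketch of the kind of commands this involves; the username, host, and file path below are hypothetical placeholders, not the actual ones used:

ssh albert.einstein@cdsws1           # hypothetical: log in under your own account on another workstation
cd /path/to/svn/checkout             # hypothetical checkout location
svn status                           # show which files have local modifications
svn diff the_changed_file            # review the change before committing it
svn commit -m "describe the change" the_changed_file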
The attached email shows the central ntp server reporting an NTP Stratum Change Alarm around the year change. Maybe this is related to conlog's problem? Further investigation is needed.
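If someone wants to follow up, the NTP daemon's own view of its stratum and leap status can be checked with standard ntpq queries, run on the server in question:

ntpq -p       # peer list; the "st" column shows each peer's stratum
ntpq -c rv    # system variables, including "stratum" and the "leap" indicator bits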
TITLE: 1/1 DAY Shift: 16:00-00:00UTC (08:00-04:00PDT), all times posted in UTC
STATE of H1: DOWN
Outgoing Operator: TJ
Quick Summary:
Walked in to find H1 down, but TJ was taking it right back to locking....specifics on this in an upcoming alog.
O1 days 104,105
No restarts reported for either day.
BTW: the DAQ has now been running for 31 days continuously. Only h1broadcaster (reconfiguration) and h1nds1 (crash) have been restarted in this time period. Both frame writers and trend writers have been stable.
While we were down I walked to the onsite warehouse and visually inspected the DCS (LDAS) cluster. Monitoring of the computers and HVAC showed all things were fine, and I've now verified this visually. It was also good to see that the HVAC system is running fine in this cold (18 degree F) weather.
Title: 1/1 OWL Shift: 08:00-16:00UTC (00:00-8:00PDT), all times posted in UTC
State of H1: Relocking
Shift Summary: Locked for almost my entire shift; dropped 10min before the shift change due to an earthquake.
Incoming Operator: Corey
Activity Log:
Most likely due to:
Title: 1/1 OWL Shift: 08:00-16:00UTC (00:00-8:00PDT), all times posted in UTC
State of H1: Observing at 79Mpc for 2hours
Outgoing Operator: Jeff B
Quick Summary: Happy New Year! Wind <6mph, useism 0.3um/s, CW inj running.
Activity Log: All Times in UTC (PT)
00:00 (16:00) Take over from Corey
00:44 (16:44) GRB alert. Spoke to LLO. In one hour hold for collecting background statistical data
01:30 (17:30) Dave B. called to say Conlog was down due to UTC year switch
01:44 (17:44) End one hour GRB hold
01:50 (17:50) Power reset Video2 to free up hung FOMs
02:40 (18:40) Lockloss – Due to mag 6.3 EQ south of Australia
02:55 (18:55) Dave B. has restarted Conlog
03:07 (19:07) Seismic up to 0.3um/s – Put IFO into down until the earth settles down a bit
05:10 (21:10) GRB Alert – LHO & LLO down due to EQ – Ignored alert
06:23 (22:23) IFO relocked and in Observing mode
08:00 (00:00) Turn over to TJ
End of Shift Summary:
Title: 12/31/2015, Evening Shift 00:00 – 08:00 (16:00 – 00:00), all times in UTC (PT)
Support: Mike
Incoming Operator: TJ
Shift Detail Summary: Lost lock about 3 hours into the shift due to a 6.3 mag EQ. After the seismic noise quieted, did an initial alignment and relocked the IFO with relatively little trouble. IFO is currently in Observing mode with 21.8W of power and 79Mpc of range. Environmental conditions are generally good. HAPPY 2016!
After the seismic 0.03-0.1Hz band dropped below 0.1um/s, ran through initial alignment. NOTE: Had no problems with the Guardian INPUT_ALIGN process completing successfully. Was able to "fine tune" MICH_DARK by hand. Relocked the IFO on the first try with no problems and was in Observing mode at 06:23 (10:23). Power is at 21.8W and range is at 82Mpc. NOTE: This is the first relock I've done in the past couple of weeks where FIND_IR Diff completed under Guardian control. On all previous locks I had to tune IR Diff by hand.
IFO is down due to the EQ reported in an earlier post. Seismic was dropping nicely and was just about to a point where it might be possible to relock, when it took another leg up and is back around 1.0um/s. Will give it some more time before doing an initial alignment.
The conlog process on h1conlog1-master failed soon after the UTC new year. I'm pretty sure it did the same last year but I could not find an alog confirming this. I followed the wiki instructions on restarting the master process. I did initially try to mysqlcheck the databases, but after 40 minutes I abandoned that. I started the conlog process on the master and configured it for the channel list. After a couple of minutes all the channels were connected and the queue size went down to zero. H1 was out of lock at the time due to an earthquake.
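For reference, a mysqlcheck of the databases would look roughly like this; the credentials and database name here are placeholders, not the real ones:

mysqlcheck -u root -p --all-databases        # checks every table; very slow on a database this size
mysqlcheck -u root -p --databases conlog     # hypothetical database name: restrict the check to the conlog tables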
For next year's occurrence, here is the log file error this time around:
root@h1conlog1-master:~# grep conlog: /var/log/syslog
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
This indicates that it tried to write an entry for H1:SUS-SR3_M1_DITHER_P_OFFSET twice with the same Unix time stamp of 1451606400.000453212. This corresponds to Fri, 01 Jan 2016 00:00:00 GMT. I'm guessing there was a leap second applied.
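A quick way to confirm that correspondence (standard GNU date, nothing site-specific):

date -u -d @1451606400
# prints: Fri Jan  1 00:00:00 UTC 2016, i.e. the second the duplicate entry landed on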
Of course there was no actual leap second scheduled for Dec 31 2015, so we need to take a closer look at what happened here.
The previous line before the error reports the application of a leap second. I'm not sure why, since you are right, none were scheduled.
Dec 31 15:59:59 h1conlog1-master kernel: [14099669.303998] Clock: inserting leap second 23:59:60 UTC
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
Gerardo was done at 00:20 UTC.