Observing for 2 hours at 75-80 Mpc. There seems to be a bit of excess noise below ~30 Hz in DARM, which seems to correlate with some excess noise in PRCL and MICH. Wind and seismic are both calm.
In getting back to Low Noise, I had to change the OMC PZT starting voltage in omcparams.py back to -24 from the -30 that Corey changed it to earlier today. I did NOT commit the changes to SVN since they seem to be in flux.
Because the H1:PSL-PMC-REF slider was adjusted to land on a value with more precision than the SDF file can handle, it was in alarm out to the 10e-16 digit, which could not be cleared with the ACCEPT button. The correct way to clear this bug is to write a lower-precision value to the EPICS slider (truncate it to the 10e-8 digit or so). However, we are all the way back to Nominal Low Noise and didn't want to risk this minute slider adjustment - SO, we set the channel to be NOT MONITORED until the next lock loss, when we should:
- Set this channel (H1:PSL-PMC-REF) to be MONITORED again in SDF
- Type the SDF setpoint value (1.18615) into the slider and check that it is no longer in error on SDF.
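If one wanted to do the equivalent from the command line, something like the pyepics snippet below should work. This is only a sketch: it assumes pyepics is installed on the workstation and that the channel is directly writable from there.

# Sketch: write the SDF setpoint back to the channel at reduced precision,
# assuming pyepics is installed and H1:PSL-PMC-REF is writable from this machine.
from epics import caget, caput

pv = 'H1:PSL-PMC-REF'
setpoint = 1.18615                 # value recorded in the SDF file

print('before:', caget(pv))        # e.g. a value carrying ~16 significant digits
caput(pv, round(setpoint, 8))      # truncate to the 1e-8 digit, as suggested above
print('after: ', caget(pv))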
After a remarkably smooth recovery from the PSL incursion, we are back to Observing.
J. Oberling, C. Gray
After this morning's NPRO shut-off, we noticed the FSS RefCav TPD was down to ~1 V (the RefCav Refl spot showed that the pitch had moved significantly as well). At the rate of decay (see attached 7 day trend), there was a good possibility of the TPD not surviving until maintenance Tuesday (1-5-2016), so we decided, with M. Landry's permission, to go in and adjust it. In the interest of returning to operation quickly, we only adjusted the periscope mirrors to bring the TPD back up; no investigation as to the cause of the alignment shift was done.
We began by adjusting only pitch on the top periscope mirror. This only got us back about halfway, so I decided to include the bottom periscope mirror and walk the beam. Walking the beam down got the TPD to ~1.47 V. We then did a small yaw tweak to the top periscope mirror only, which got us to ~1.51 V. We then measured the visibility of the cavity, turned the ISS back ON, and left the PSL enclosure. Total time in the enclosure was ~30 minutes.
J. Oberling, C. Gray
At 21:01:51 UTC the PSL NPRO shut off (see attached trend of NPRO diode power, NPRO power, and PSL power watchdog), reason unknown. The PSL watchdog tripped 3 seconds later at 21:01:54 UTC, so the NPRO definitely shut off first. After Corey's phone call I drove to the site to reset the PSL. The only thing I found wrong was that the NPRO power supply was ON, but not sending power to the NPRO (key ON, power supply OFF). The UPS the NPRO power supply is plugged into did not show an error, and there were no other obvious indications that something went wrong, so I turned the NPRO back ON. It powered up without issue, and we turned the 35W FE laser on from the LDR. After fiddling with the PMC and FSS (as usual after a laser shutdown), everything with the PSL was locked and operating properly. It is still unknown why the NPRO shut off. At this point I suspect a glitch with the UPS; we have seen this behavior before, where a glitch in the UPS causes the NPRO to shut off with no indication of what went wrong.
At this point we noticed the FSS TPD was down to ~1 V, which is getting close to the point where ALS becomes unhappy. A 7 day trend of the RefCav TPD is attached. At that rate of decay the FSS might not have lasted until maintenance day on 1-5-2016, so we decided (with M. Landry's permission) to go in and do a quick adjustment of the RefCav alignment. I will detail that in a follow-up alog.
In order to get the PMC locked, we also tweaked H1:PSL-PMC_REF from 1.35 V to 1.19 V, which caused a notification in SDF.
Thanks Thomas, I had forgotten we made that change.
TITLE: 1/1 DAY Shift: 16:00-00:00UTC (08:00-04:00PDT), all times posted in UTC
STATE of H1: DOWN (PSL recovered, Initial Alignment by Travis)
Incoming Operator: Travis
Support: Chatted with TJ, Sheila on the phone, Thomas Vo helping with operator duties.
Quick Summary:
Nice first half of the shift; in the latter half, H1 went down due to the PSL NPRO tripping off.
After calling the control room I drove past CS @ 23:58 and arrived at Y-Mid station @ 00:01.
Started filling CP3 @ 00:03, opened the exhaust check valve bypass valve, then opened LLCV bypass valve 1/2 turn, closed the flow 25 seconds later. 5 minutes later I closed the exhaust check valve bypass valve.
Started driving back from Y-Mid @ 00:12, arrived at the CS @ 00:16.
At 23:58 UTC, Gerardo checked in about filling CP3 at the Y-mid station. He is going to Mid Y to fill the dewar.
Gerardo done at 0:20 UTC.
H1 abruptly dropped out of lock. Did not see any obvious reason why at first, but then, while waiting for Guardian to bring us back, noticed that we have no light from the PSL.
IMC_LOCK Guardian is saying: FSS Unlocked. (Looking for instructions/alogs about how to address this.)
But I also see that we have 0 W output power from the laser. Might go to the Laser Diode Room to see if there are any obvious messages.
There is no light on the PSL video screens.
Talking with Jason right now.
Happy New Year all!
H1 has historically not had RF beat / whistle problems as bad as L1 has. In fact, the last alog report is for data on October 30th. But Dec 31st shows a high density of glitches above 100 Hz, and densest above 500 Hz, which have the signature shape of RF beats we've seen before and are correlated with PRCL and SRCL, similar to one of the manifestations of whistles seen at LLO.
Note 1: we produce auto omega scans for the loudest glitches that hveto finds, and these whistles are all very quiet. If anyone wants to follow these up in more detail, you can get GPS times for the high-frequency, low-SNR triggers for the Dec 31 correlations with PRCL and SRCL at those links.
Note 2: Dec 31 looks like the worst day, but there might have been some weak whistles occurring on other days too; we'd have to follow up some low-SNR triggers on those days (e.g. today, Jan 1, and the past few days).
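For anyone wanting to follow up one of those trigger times by hand, a quick Q-scan along these lines should do it. This is a sketch using gwpy; the GPS time below is a placeholder, not one of the triggers from the linked lists, and the channel choice is just a reasonable default.

# Sketch: make a quick omega/Q scan around a candidate whistle time with gwpy.
# The GPS time is a placeholder; substitute a trigger time from the lists above.
from gwpy.timeseries import TimeSeries

gps = 1135641617                                          # placeholder: ~Jan 1 2016 00:00 UTC
chan = 'H1:GDS-CALIB_STRAIN'

data = TimeSeries.get(chan, gps - 8, gps + 8)             # 16 s of data around the trigger
qspec = data.q_transform(outseg=(gps - 0.5, gps + 0.5))   # zoom the output to +/- 0.5 s

plot = qspec.plot()
plot.gca().set_yscale('log')
plot.savefig('whistle_qscan.png')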
Quick update: RF beats / whistles are still happening today, Jan 3. The hveto page shows whistles in rounds 4 and 8 coincident this time with H1:ASC-REFL_B_RF45_Q_PIT_OUT_DQ (and not PRCL and SRCL as above, so a different line/VCO frequency must be at play). They are still low SNR, but faintly visible in omega scans. Some example omega scans are attached and linked here. Text files of glitches coincident with ASC REFL B RF45 are here for round 4 and round 8.
There can be times when one misses recent changes to operating H1 (due to long locks keeping your hands off H1, being away between shifts... or maybe you just don't have a great memory, like me!) and so is not up to speed on the latest news, workarounds, and tricks for running H1.
This is why we have an Operator Sticky Notes wiki page. During recent shifts, there were a few items/scenarios which were new to me. To help reduce searching, I added several new items to the list, and here are some of the newest ones (most recent first) I noticed/came across this week:
Please help to keep this page current with any new changes you come across.
As I walked in on shift, TJ was working on bringing H1 back. I'm fairly certain he was just going for locking H1 back up without trying an alignment. As Guardian was in the first few steps, TJ was touching up arm powers. A pretty cool thing was seeing DRMI lock within seconds(!), and then Guardian continued taking H1 up, until,...
Guardian Message: "OMC not ready" In DC_READOUT_TRANSITION
I had heard other operators having to deal with an issue with the OMC not locking over recent weeks, but I had not personally dealt with it. So, when Guardian gave the message "OMC not ready" in DC_READOUT_TRANSITION, I had to do an alog search (and also left a voicemail with the on-call commissioner). I eventually found some alogs which gave some things to try, basically:
Made it to NLN.
Guardian Additional Notes: (with TJ's help on the phone)
Did get an ISC_LOCK user message of: "node OMC_LOCK: STOLEN (by USER)". This is due to my taking OMC_LOCK to Auto (#2 above). To clear this message, one can go to ISC_LOCK, click MANUAL, and select INIT (this should take OMC_LOCK back to being managed by ISC_LOCK).
Additionally, I also happened to notice some red values for OMC_LOCK & IMC_LOCK, which were the *_LOCK_EXECTIME_LAST channels. TJ said these are normally red, so I didn't worry about them.
FINALLY: Back to Observing at 17:08, roughly 80 min after the EQ (could have been much quicker if I didn't have the OMC issue!). Had a few big glitches at the beginning of the lock, but this could have been related to my issues noted above (maybe?). Over the last few hours, H1 has been running with a nice range around the usual 80 Mpc.
Sheila walked me through how to check in the file I changed to the SVN.
Since I was logged in as ops on the operator0 workstation, I had to ssh into another computer:
The attached email shows the central NTP server reporting an NTP Stratum Change Alarm around the year change. Maybe this is related to conlog's problem? Further investigation is needed.
TITLE: 1/1 DAY Shift: 16:00-00:00UTC (08:00-04:00PDT), all times posted in UTC
STATE of H1: DOWN
Outgoing Operator: TJ
Quick Summary:
Walked in to find H1 down, but TJ was taking it right back to locking....specifics on this in upcoming alog.
O1 days 104,105
No restarts reported for either day.
BTW: The DAQ has now been running for 31 days continuously. Only h1broadcaster (reconfiguration) and h1nds1 (crash) have been restarted in this time period. Both frame writers and trend writers have been stable.
The conlog process on h1conlog1-master failed soon after the UTC new year. I'm pretty sure it did the same last year but I could not find an alog confirming this. I followed the wiki instructions on restarting the master process. I did initially try to mysqlcheck the databases, but after 40 minutes I abandoned that. I started the conlog process on the master and configured it for the channel list. After a couple of minutes all the channels were connected and the queue size went down to zero. H1 was out of lock at the time due to an earthquake.
For next year's occurrence, here is the log file error from this time around:
root@h1conlog1-master:~# grep conlog: /var/log/syslog
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
This indicates that it tried to write an entry for H1:SUS-SR3_M1_DITHER_P_OFFSET twice with the same Unix time stamp of 1451606400.000453212. This corresponds to Fri, 01 Jan 2016 00:00:00 GMT. I'm guessing there was a leap second applied.
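As a quick sanity check on that conversion (a minimal sketch using only the Python standard library):

# Confirm the duplicated Unix timestamp lands exactly on the UTC new year.
from datetime import datetime, timezone

ts = 1451606400.000453212          # timestamp from the duplicate-key error above
print(datetime.fromtimestamp(ts, tz=timezone.utc))
# 2016-01-01 00:00:00.000453+00:00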
Of course there was no actual leap second scheduled for Dec 31, 2015, so we need to take a closer look at what happened here.
The previous line before the error reports the application of a leap second. I'm not sure why, since you are right, none were scheduled.
Dec 31 15:59:59 h1conlog1-master kernel: [14099669.303998] Clock: inserting leap second 23:59:60 UTC
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
This was actually a temporary change we had to make to help the PMC lock, which I had forgotten about. This setting can be reverted to its original value of 1.35 V once the channel is set to be MONITORED again in the SDF.