TITLE: 12/13 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 72.9978Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 8mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
8:11-9:30UTC (12:11-01:30am PST): No operator coverage. (Due to winter weather this week, I'll be taking the OWL shift for the next 3 nights.)
Arrived to find H1 in OBSERVING, & a quick scan looks like we have optimal conditions (range is at 70Mpc). Glancing at the range over the last 12hrs, it has had an 8hr period of going up to 75Mpc & down to 70Mpc (now), & I don't see anything seismic which matches that (useism has had a constant downward trend over the last 24+hrs). There are also a few more glitches (i.e. drops in range) for this lock.
Low Useism: Seismically, we appear to be a tad under the 50th percentile for useism (a personal comparison note: at the 90th+ percentile over the weekend you would definitely see useism waves/oscillations on the Tidal striptool, whereas now it is virtually flat).
Violin Modes: The fundamental (~500Hz) looks to be just above 1e-19 on DARM. The second harmonic (~1kHz) is just above 1e-15; this is the only notable feature on the "H1Glitches (DMT Omega)" tool on nuc0 (perhaps this is something which could be damped at an opportune time [Maintenance Day?]).
Talked with Fyffe @ LLO to let them know they are no longer flying solo.
Weather Conditions: No precipitation & cloudy.
Road Conditions: (driving in via Twin Bridges) Other than my driveway at home & the driveway on-site, the roads were dry and clear (able to easily drive the speed limit); made it to site in the standard 20-25min. I did come across a herd of deer right near the bridges.
TITLE: 12/13 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 70.922Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY: Locked for over 10 hours. Only out of observing to run a2l. Two GRB alerts. There will not be an owl shift operator due to inclement weather. I will leave the IFO in observing unattended. I have allowed remote access for the users that Keita requested.
LOG:
21:44 UTC At NLN. Had to set LOCKLOSS_SHUTTER_CHECK node request to HIGH_ARM_POWER in order to go to observing. Have 'TCS_HWS: HWSY Code Stopped Running' diag message.
21:55 UTC Observing
22:13 UTC Out of observing to run a2l. '/opt/rtcds/userapps/release/isc/common/scripts/decoup/a2l_min_LHO.py'
22:19 UTC Observing
23:00 UTC Changed phase for PI mode 26
23:27 UTC Requested INJECT_KILL for the INJ_TRANS guardian
00:35 UTC Requested INJECT_SUCCESS for the INJ_TRANS guardian
01:50 UTC Changed sign of gain for PI mode 28
07:05 UTC GRB alert. LLO not locked.
Have remained locked at NLN since 21:44 UTC. Stood down for a GRB at 23:20 UTC. It appeared that the injections were not automatically blocked for the GRB, so I set the INJ_TRANS guardian node to INJECT_KILL manually. Requested INJECT_SUCCESS at 00:35 UTC. Damped PI modes 26 and 28. Ran a2l near beginning of lock.
About the GRB, it turns out that the guardian was doing the right thing.
The alert was received at 2016-12-12 23:20:22 UTC, but the actual GRB event was at 2016-12-12 15:38:59, hours before the alert.
Since the alert came after our standard window of one hour, the guardian didn't block the injection.
We just got another GRB alert at 07:05 UTC. This time the event time is around the same time as the created time. The INJ_TRANS guardian did not change again. Perhaps this is because LLO is not locked?
Patrick and I looked at the trend of H1:GRD-INJ_TRANS_STATE_N, and the GRB actually did make the guardian transition to state 30 (EXTTRIG_ALERT_ACTIVE) for an hour (attached).
The GRB event is this one: https://gracedb.ligo.org/events/view/E265603
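For reference, the stand-down decision boils down to comparing the event time with the alert arrival time: only alerts that arrive within the standard one-hour window after the event should drive INJ_TRANS into EXTTRIG_ALERT_ACTIVE. A minimal Python sketch of that check (illustrative only, not the actual guardian code; the names are made up):

EXTTRIG_WINDOW_S = 3600  # standard one-hour stand-down window

def alert_in_window(event_gps, alert_gps, window=EXTTRIG_WINDOW_S):
    """True if the alert arrived within `window` seconds of the event."""
    return 0 <= (alert_gps - event_gps) <= window

# 2016-12-12 example: event at 15:38:59 UTC, alert received at 23:20:22 UTC,
# i.e. roughly 7.7 hours late, so no injection block is expected.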
Some of the SDF systems use safe.snap instead of OBSERVE.snap (attached) during observation, which doesn't make sense. I don't remember which FECs should be on safe and which should be on OBSERVE.
For the sake of consistency, if the settings don't change for a specific FEC, just copy safe.snap to OBSERVE.snap and use that all the time.
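If we do go that route, the copy itself is trivial. A sketch (the model name here is a placeholder; the path follows the same target-directory pattern used elsewhere in this log, and any real change should go through the usual SDF bookkeeping):

# Illustrative only: mirror safe.snap to OBSERVE.snap for a front end whose
# settings are meant to be identical in both states.
import os, shutil

target = '/opt/rtcds/lho/h1/target/h1examplefec/h1examplefecepics'
shutil.copyfile(os.path.join(target, 'safe.snap'),
                os.path.join(target, 'OBSERVE.snap'))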
Dave suggested that if HWSY induces glitches in HWSX again, maybe we can do a cable swap test to see if the effect is reversed. However, both cameras ran fine simultaneously after a computer restart, so the test didn't happen.
I did a little follow-up to see if there's any pattern to these glitches. Both the HWSX and HWSY codes used to be able to run together for >10 days prior to Ubuntu 14. These glitches happened on random days (not every maintenance Tuesday, for instance), and not necessarily in the same order (sometimes HWSX stopped running first, sometimes they both stopped at the same time). This time I only left HWSX running to see if the code would run for a much longer time. If there's still an issue, this might suggest that the problem is not with the PCI-e card.
The attached plot shows the 10-minute trend of HWSX and HWSY spherical power over a hundred-some days, including the time prior to Ubuntu 14 (installed Nov 4, 2016). A flat line indicates that the HWS code stopped writing data (due to a camera glitch). Sorry for the messed-up time axes; Dataviewer doesn't always plot nicely when I ask for too much data. The hand-written times on the data after Ubuntu 14 were acquired with another second-trend plot (not shown).
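As a rough way of pulling the "code stopped running" stretches out of these trends automatically, one could flag runs of consecutive samples that stop changing. A sketch, assuming the spherical-power trend is already loaded into a numpy array (the tolerance and minimum run length are guesses):

import numpy as np

def flat_segments(trend, min_samples=6, tol=1e-12):
    """Yield (start, end) index pairs of runs where the trend is constant."""
    flat = np.abs(np.diff(trend)) < tol
    start = None
    for i, is_flat in enumerate(flat):
        if is_flat and start is None:
            start = i
        elif not is_flat and start is not None:
            if i - start >= min_samples:
                yield (start, i)
            start = None
    if start is not None and len(flat) - start >= min_samples:
        yield (start, len(flat))

# With 10-minute trend data, min_samples=6 flags anything flat for an hour or more.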
After consulting with Peter K., I have adjusted the PSL dust monitor alarm levels to those listed in the US-209E Cleanroom Standards chart. This has relaxed the 0.3um particle alarm levels a bit. Will continue to monitor the dust counts in the PSL. We are continuing to look into the causes of the elevated counts in the enclosure and what impact dust contamination could have on the laser system. There is a chance the dust monitor is out of calibration. I will swap out the PSL-101 unit as soon as I can get access to the PSL enclosure.
I recall back in the "early days" that we had dust monitors that seemed to be sensitive to dry air. If I remember, static electricity was blamed?
TITLE: 12/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY: Starting early to cover the end of Cheryl's shift. On the way to NLN.
No restarts on any of these days except for Mon 05/Dec/2016
model restarts logged for Mon 05/Dec/2016
2016_12_05 11:17 h1ascimc
2016_12_05 11:20 h1lsc
2016_12_05 11:20 h1odcmaster
2016_12_05 11:20 h1susmc1
2016_12_05 11:20 h1susmc2
2016_12_05 11:20 h1susmc3
2016_12_05 11:51 h1broadcast0
2016_12_05 11:51 h1dc0
2016_12_05 11:51 h1fw0
2016_12_05 11:51 h1fw1
2016_12_05 11:51 h1fw2
2016_12_05 11:51 h1nds0
2016_12_05 11:51 h1nds1
2016_12_05 11:51 h1tw0
2016_12_05 11:51 h1tw1
Maintenance Monday: sped up the h1ascimc model to 16kHz, with associated model restarts; added channels to the broadcaster, with an associated DAQ restart.
/ligo file system is showing occasional freeze-ups; investigation continues into cdsfs0's RAID controller.
TITLE: 12/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 0.0Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY:
19:00UTC (11AM) Update:
Mode 26 rang up; this is one that is known to require some phase tweaking over long locks. Due to road conditions, there wasn't an operator here at the time, so the usual phase changes didn't happen.
My day shift summary - have to hand off at 21:15UTC (1:15PM PT)
Activities:
The note about the Fast Shutter issue above mentions that it was in Error and that Cheryl resolved it. Just wanted to make a note about this since it will happen again: sometimes the HAM6 Fast Shutter trips, and when it does, we have a command in our ISC_LOCK scripts which tests the Fast Shutter, so until we get to that Fast Shutter test the shutter will stay in this Error state. Fearing the shutter was down, Fil went out to check on the power supply for the Fast Shutter; it was on and operational.
Once Cheryl took H1 to a state where the test was run, the Error went away. This was marked for about 10min of downtime as CORRECTIVE MAINTENANCE, and an FRS (#6917) was filed.
Corey suggested looking at DIAG_SDF log, and there is activity that coincides with H1 going out of Observe:
19:19:45UTC - H1 out of Observe, and DIAG_SDF shows:
Now, how do I know what "USERMSG 0: DIFFS: sysecaty1plc2: 1" is?
Keita's alog 32134 - instructions on how to look for channels that changed
My bad: while investigating, I looked at the SDF and kicked H1 out of Observe:
DIAG_SDF log:
From Keita:
I took
/opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap
and stripped unnecessary information, split it into 20-line chunks, and
put them here:
/ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2
Could you again run the lockloss tool by
for ii in ecaty1plc2*; do lockloss -c ${ii} plot -w '[-10,10]' gpstime; done
This morning (Monday Dec 12) I ran the lockloss script and I can see that H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON was flipping (see attached, second column from the left). Other things like LASER_HEAD_CRYSTALFREQUENCY, CRYSTALTEMPERATURE and VCO_TUNEOFS were also changing, but these were not monitored.
Anyway, it's strange that this was not found when Cheryl and Corey ran the lockloss tool. Maybe NDS2 was misbehaving?
Just to make sure, what I did is:
cd /ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2
for ii in ecaty1plc2_a*; do lockloss -c ${ii} plot -w '[-10, 10]' 1165432802; done
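As a cross-check of the lockloss tool output, the same channel can also be pulled around the lockloss GPS time directly, e.g. with gwpy (a sketch only; it assumes NDS access works from the workstation in question, and the output file name is arbitrary):

from gwpy.timeseries import TimeSeries

gps = 1165432802
chan = 'H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON'
data = TimeSeries.fetch(chan, gps - 10, gps + 10)
plot = data.plot()
plot.savefig('ecaty1plc2_tempcontrols_lockloss.png')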
With the ASC IMC model now running at 16384 Hz, we look at the coherence of jitter as measured by the IMC WFS and other channels up to 7.4 kHz. Not sure we can conclude anything except that pointing errors contaminate everything.
We can compare this with an older 900-Hz bandwidth measurement from alog 31631 which was taken before the piezo peak fix (alog 31974).
Note that the 1084 Hz feature doesn't have coherence with the IMC WFS.
Can you check the DC sum channels for the IMC WFS as well? They are the ones that hVeto keeps finding as related to the 1080 Hz noise, and they see a modulation in the noise rather than a steady spectrum.
Done; again nothing for the bump in question, though there are coherence bumps for f > 1100 Hz and f < 800 Hz.
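For anyone repeating these checks, the coherence estimate itself is straightforward once the two time series are in hand. A sketch using scipy (the arrays here are random stand-ins for an IMC WFS channel and DARM; real data would come from NDS or frames):

import numpy as np
from scipy.signal import coherence

fs = 16384  # Hz, the ASC IMC model rate after the upgrade
imc_wfs = np.random.randn(fs * 60)  # stand-in for an IMC WFS channel
darm = np.random.randn(fs * 60)     # stand-in for DARM

f, Cxy = coherence(imc_wfs, darm, fs=fs, nperseg=fs * 4)  # 0.25 Hz resolution
print('coherence near 1084 Hz:', Cxy[np.argmin(np.abs(f - 1084))])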
02:26UTC Verbal Alarm said the Intention Bit was set to Commissioning. Checking the SDF, I found that the EX ECAT PLC2 was blinking. Below is a screen cap of the channel that was blinking. DIAG_MAIN was also showing a message in the log, seen in the other screenshot. The channel listed in the SDF and the message in DIAG_MAIN correlate in terms of time of occurrence and subsystem (ALS). Some troublesome ALS channels were unmonitored last night by Keita. Looking back at his aLog, this appears to be a straggler. I was able to un-monitor this channel and reset the Intention Bit.
02:27UTC Intention bit set to Observe
Good Catch, Ed.
I am going to paste the channel name in here so it can come up during a search (because we had the same issue with the Y-arm a week later). The channel in question here (which is in Ed's snapshot) is:
H1:ALS-X_FIBR_LOCK_TEMPERATURECONTROLS_ON
As requested by Jeff and the calibration review committee, I've done a number of checks related to tracking the behavior of PCAL lines in the online-calibrated strain. (Most of these checks accord with the "official" strain curve plots contained in https://dcc.ligo.org/DocDB/0121/G1501223/003/2015-10-01_H1_O1_Sensitivity.pdf) I report on these review checks below.
I started by choosing a recent lock stretch at LHO that includes segments in which the H1:DMT-CALIBRATED flag is both active and inactive (so that we can visualize the effect of both gated and ungated kappas on strain, with the expected behavior that gstlal_compute_strain defaults each kappa factor to its last computed median if ${IFO}:DMT-CALIBRATED is inactive). There is a 4-hour period from 8:00 to 12:00 UTC on 30 November 2016 that fits the bill (see https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161130/). I re-calibrated this stretch of data in --partial-calibration mode without kappas applied, and stored the output to
LHO: /home/aurban/O2/calibration/data/H1/
All data were computed with a 32-second FFT length and a 120-second stride. The following plots are attached:
The script used to generate these plots, and a LAL-formatted cache pointing to re-calibrated data from the same time period but without any kappa factors applied, are checked into the calibration SVN at https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Runs/PreER10/H1/Scripts/TDkappas/. A similar analysis on a stretch of Livingston data is forthcoming.
I have re-run the same analysis over 24 hours of Hanford data spanning the full UTC day on December 4th (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161204/), during which time LHO was continuously locked. This time the lowest-frequency PCAL line has a PCAL-to-DARM ratio that improves when kappas are applied, which is the expected behavior. This suggests that whatever was going on in the November 30 data, where the 36.5 Hz line briefly strayed to having worse agreement with kappas applied, was transient, but the issue may still be worth looking into.
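For reference, the per-line check amounts to demodulating both PCAL and strain at each calibration line frequency over 32-second FFT segments placed every 120 seconds, and tracking the ratio of the resulting complex amplitudes. A rough sketch of the single-line version, assuming the two time series are already in hand (this is not the official gstlal_compute_strain machinery):

import numpy as np

def line_ratio(pcal, strain, fs, f_line, fft_len=32, stride=120):
    """Track the PCAL-to-strain ratio at one calibration line frequency."""
    nfft = int(fft_len * fs)
    step = int(stride * fs)
    t = np.arange(nfft) / fs
    phasor = np.exp(-2j * np.pi * f_line * t)  # demodulate at the line
    ratios = []
    for start in range(0, len(pcal) - nfft + 1, step):
        seg = slice(start, start + nfft)
        ratios.append(np.sum(pcal[seg] * phasor) / np.sum(strain[seg] * phasor))
    return np.array(ratios)

# e.g. line_ratio(pcal, strain, fs=16384, f_line=36.5) for the lowest PCAL line.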
Nutsinee has requested that someone reload the DIAG_MAIN guardian when the IFO loses lock or we go out of observing (see whiteboard).