TITLE: 12/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY:
Starting early to cover end of Cheryl's shift. On the way to NLN.
No restarts on any of these days except for Mon 05/Dec/2016
Model restarts logged for Mon 05/Dec/2016:
2016_12_05 11:17 h1ascimc
2016_12_05 11:20 h1lsc
2016_12_05 11:20 h1odcmaster
2016_12_05 11:20 h1susmc1
2016_12_05 11:20 h1susmc2
2016_12_05 11:20 h1susmc3
2016_12_05 11:51 h1broadcast0
2016_12_05 11:51 h1dc0
2016_12_05 11:51 h1fw0
2016_12_05 11:51 h1fw1
2016_12_05 11:51 h1fw2
2016_12_05 11:51 h1nds0
2016_12_05 11:51 h1nds1
2016_12_05 11:51 h1tw0
2016_12_05 11:51 h1tw1
Maintenance Monday: sped up the h1ascimc model to 16 kHz (with associated model restarts) and added channels to the broadcaster (with an associated DAQ restart).
The /ligo file system is showing occasional freeze-ups; investigation continues into cdsfs0's RAID controller.
TITLE: 12/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 0.0Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY:
9:40 am local
Took about 30 sec. each to overfill CP3 & CP4. Both nominal LLCV settings were a bit high based on the non-zero exhaust pressures and TC readings prior to the fill. I've lowered CP3 to 18% and CP4 to 37%.
Lowered CP4 LLCV further to 36% open.
Lowered further to 35% open.
The /trend file system on h1tw0 became corrupt this morning at about 6:05 AM PST. Rebooted the computer to force an unmount and fsck of the file system. Many errors were discovered and repaired. It appears there may be a bad drive in the RAID; it is being replaced. The daqd process is also dying when it encounters an improperly sized file; we will probably just remove that file.
There were 9445 corrupted files in /trend/minute_raw which needed to be removed.
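(For reference, a minimal sketch of how one could flag improperly sized files under /trend/minute_raw; this is not the procedure actually used, and the record size below is only an assumed placeholder:)
RECORD_SIZE=16   # assumed placeholder, not the real minute-trend record length
find /trend/minute_raw -type f | while read -r f; do
    s=$(stat -c %s "$f")
    [ $((s % RECORD_SIZE)) -ne 0 ] && echo "odd-sized: $f ($s bytes)"
done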
About 24.5 hours of continuous lock, still alive.
Due to road conditions, many people will stay home or come in late. I'm covering the operator shift as much as I can.
At about 17:20 UTC the lock was lost, probably due to PI mode 10 or 26 (or both) ringing up. I heard "PI mode something ringing up" but wasn't paying much attention, as I was on a telecon.
While trending the PSL enclosure temperatures for the past few days, I remembered the AD590-based sensor on the table near the exit of the reference cavity chamber. There's a sudden jump in its temperature reading. I'm not sure why this would be the case; the reference cavity was locked at the time.
H1 has been locked and Observing for 19 hours 43 min.
Range is vacillating between 63 and 73 Mpc (as per the range MEDM screen).
The 1009.5 Hz violin mode has neither rung down nor up.
Environmental conditions are nominally good.
All seismic BLRMS are trending downward.
Below are trends from the past two days. I can see a definite correlation between wind speed and the 0.3 μm particle count in the PSL 101 dust monitor about two days ago. It's hard to tell for the most recent event. The particle count is definitely high enough to set off the alarm, but can wind speeds under 10 mph be to blame?
I don't see any aLogs about VEA environmental changes at EY, so I don't quite know what is going on with dust monitor 1. Again, I see wind speeds ≤ 10 mph.
Corey suggested looking at DIAG_SDF log, and there is activity that coincides with H1 going out of Observe:
19:19:45 UTC - H1 out of Observe, and DIAG_SDF shows:
Now, how do I know what "USERMSG 0: DIFFS: sysecaty1plc2: 1" is?
Keita's alog 32134 - instructions on how to look for channels that changed
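(As a rough illustration of "looking for channels that changed" - not necessarily the method in that alog - one could compare live EPICS values against the Observe snap file, assuming each .snap channel line starts with the channel name and ends with the setpoint:)
SNAP=/opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap
grep '^H1:' "$SNAP" | while read -r chan rest; do
    # print the snap setpoint next to the current value from caget, for eyeballing diffs
    echo "$chan  snap=${rest##* }  now=$(caget -t "$chan")"
done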
My bad - while investigating, I looked at SDF and kicked H1 out of Observe:
DIAG_SDF log:
From Keita:
I took
/opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap
and stripped unnecessary information, split it into 20-line chunks, and
put them here:
/ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2
Could you again run the lockloss tool by
for ii in ecaty1plc2*; do lockloss -c ${ii} plot -w '[-10,10]' gpstime; done
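(The stripping/splitting step above presumably amounted to something like the following - an assumed sketch, not Keita's exact commands - producing the ecaty1plc2_aa, ecaty1plc2_ab, ... chunk files used below:)
cd /ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2
grep '^H1:' /opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap \
    | awk '{print $1}' | split -l 20 - ecaty1plc2_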
This morning (Monday, Dec 12) I ran the lockloss script and I can see that H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON was flipping (see attached, second column from the left). Other channels like LASER_HEAD_CRYSTALFREQUENCY, CRYSTALTEMPERATURE and VCO_TUNEOFS were also changing, but these were not monitored.
Anyway, it's strange that this was not found when Cheryl and Corey ran the lockloss tool. Maybe NDS2 was misbehaving?
Just to make sure, what I did is:
cd /ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2
for ii in ecaty1plc2_a*; do lockloss -c ${ii} plot -w '[-10, 10]' 1165432802; done
19:00 UTC (11 AM) Update:
Mode 26 rang up: this is one that is known to require some phase tweaking over long locks. Due to road conditions, there wasn't an operator here at the time, so the usual phase changes didn't happen.
My day shift summary - have to hand off at 21:15 UTC (1:15 PM PT)
Activities:
A note about the Fast Shutter issue mentioned above (it was in Error, and Cheryl resolved it). I just wanted to record this since it will happen again. Sometimes the HAM6 Fast Shutter trips. When it does, we have a command in our ISC_LOCK scripts which tests the Fast Shutter, so until we get to that Fast Shutter test, the shutter will stay in this Error state. Fearing the shutter was down, Fil went out to check on the power supply for the Fast Shutter; it was on and operational.
Once Cheryl took H1 to a state where the test was run, the Error went away. This was marked as about 10 minutes of downtime under CORRECTIVE MAINTENANCE, and an FRS ticket (#6917) was filed.