PSL trip took down the IFO, or vice versa. No commissioners are on site currently, so I'm calling it a night (with Corey's approval). Stefan and other commissioners made noise about coming back in to try to recover, but I don't have the procedure to restart the PSL.
TITLE: 09/29 Day Shift: 23:00-7:00 UTC
STATE of H1: PSL down
SHIFT SUMMARY: Locking was generally going well until the PSL went down; PIs were a learning experience, vigilance is required.
LOG:
23:00 IFO almost locked at end of Corey's shift
1:15 DRMI flashes non-existent, did initial alignment through MICH_DARK
2:00 NLN, PI Mode 26 immediately started ringing up, but tweaking phase brought it back down.
3:30 4 PIs rang up (17, 25, 27 & 2), breaking lock before I could make any improvements on any of them; 17 rang up first
3:50 NLN, PI 17 rang up around power up, but managed to bring it down again
4:47 Lockloss, PSL is down
Carlos, Dave:
h1fs1 apparently kernel panicked during its 08:39 PDT backup of h1fs0. We rebooted it via power cycle and manually got the ZFS file system back into sync so the hourly cron jobs could continue to run.
TITLE: 09/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Jim W.
SHIFT SUMMARY:
OK shift. Had issues locking in the middle of the day (completely dead DRMI), but Sheila tracked this down to a misaligned PRM (Stefan alogged this). After that, took H1 up to a state for the Commissioners, and that is where they are at right now.
LOG:
Updated Nuc6 so the PI Strip Tool has Terra's new mode.
(Betsy, Gerardo)
This morning we added 500 ml of water and walked/crawled the length of the pipe looking for water drops; none were found. The inside of the TCS-Y table was not inspected.
model restarts logged for Wed 28/Sep/2016
2016_09_28 11:38 h1nds0
2016_09_28 13:29 h1nds0
Two unexpected nds0 restarts; hopefully the memory upgrade will fix this.
WP 6199 The memory for h1nds0 was increased from 24 GB to 48 GB and the daqd was reconfigured to make use of the additional RAM. This was done in an attempt to solve the nds0 restarts that have been occurring. h1nds0 now has the same amount of memory as h1nds1. The daqd for h1nds1 was also reconfigured to use more of the available RAM, but was not restarted. The change will be effective with the next restart of the DAQ. Note that nds0 and nds1 are now configured the same.
Daniel, Stefan
We measured the ISS out of loop RIN using a straight shot beam to the OMC at 25W (35.9mA DCPD_SUM). After choosing some better misaligned positions, we got about 1e-8/rtHz between 150Hz and 800Hz (see attached plot).
The first attempt was limited by scatter noise from misaligned optics. The following new misalignment TEST offsets cut down a lot on that scatter noise:
PRM_P 520 unchanged
PRM_Y -3200 (was -6200, +3000) - this one was not updated yet, because nominally the PRM should dump the beam at a beam dump on top of HAM1.
SRM_P 707
SRM_Y -1910 (was 1090, -3000)
ITMY P 40.3
ITMY Y 53.1 (was -46.9, +100)
None of those new offsets are in SDF yet.
Here is a plot of some of the noise projections made from broadband injections last night.
The frequency noise coupling is from Evan's measurement (alog 29893) and the spectrum of the (almost) in loop sensor refl 9 I, so the sensor noise imposed by the loop still needs to be added. It is interesting that some of our jitter peaks show up in frequency noise. Frequency noise is above the DARM noise floor for the peak at 950 Hz. As Jenne wrote (29817), this peak has sometimes gone away when the 9MHz modulation depth is reduced, although not always.
In the bucket, MICH, SRCL, frequency and intensity noise are all too small to explain our broadband noise.
Edit: Attaching the plot this time
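For reference, a minimal sketch of how a projection like this is formed once a coupling transfer function and a witness spectrum are in hand (the file names and the numpy-based workflow are illustrative assumptions, not the actual projection scripts):

import numpy as np

# Assumed inputs, exported on a common frequency vector (illustrative file names):
# coupling_tf.txt  - |TF(f)| from the broadband injection, e.g. DARM per unit of witness
# witness_asd.txt  - ASD of the witness channel during normal running
freq, tf_mag = np.loadtxt('coupling_tf.txt', unpack=True)
_, wit_asd = np.loadtxt('witness_asd.txt', unpack=True)

# Noise projection: predicted contribution of this coupling path to DARM
projection = tf_mag * wit_asd
np.savetxt('projection.txt', np.column_stack([freq, projection]))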
Stefan will be conducting OMC measurements during this meeting.
TITLE: 09/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 6mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1 dropped out at about 3:30am PST, hovering just under 80 Mpc.
AS AIR video looked obviously misaligned. Just opted to start an Initial Alignment.
Kiwamu, Stefan
We drove the IM4 in PIT and YAW, and measured the beam jitter to OMC_DCPD_RIN transfer function. We calibrated the drive signal in Relative Beam Jitter (RBJ), so we can directly compare it with measured beam jitters at other places (for calibration, see below).
We found a transfer function from IM4 RBJ to DCPD_RIN of about 2e-2 RIN/RBJ for yaw, and 3e-3 RIN/RBJ for pitch. We then tried to minimize the coupling by putting offsets into the INP1 ASC loop that controls IM4.
For YAW, we found a minimum (with sign change) for -1300cts OFFSET, resulting in about the same transfer function as for the undisturbed PIT.
For PIT, the improvement was minimal, and we didn't find a minimum. But the best coupling was achieved at +1800cts PIT OFFSET.
With those offsets the recycling gain dropped a bit, but there still was a net range improvement.
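As a rough illustration of the offset scan described above (a sketch only, run from a Guardian/ezca shell; the exact INP1 offset channel name and the settle time are assumptions):

import time

# Step the INP1 ASC yaw offset and record the IM4 RBJ -> DCPD_RIN coupling
# at each point to locate the minimum / sign change.
offsets = [-2600, -1950, -1300, -650, 0]    # cts, bracketing the minimum we found
for off in offsets:
    ezca['ASC-INP1_Y_OFFSET'] = off         # assumes ezca is provided by the shell
    time.sleep(60)                          # let the ASC loop settle on the new offset
    # ...drive IM4 and measure the transfer function here, then log it against 'off'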
Calibration:
========
DCPD_SUM: The channel is in mA. The attached txt file includes the closed loop correction, as well as the factor 1/20mA, casting the channel into RIN.
H1:SUS-IM4_M1_OPTICALIGN_P_OUT: The channel should be calibrated in urad at DC, and the PIT pendulum frequency is 1Hz. Also H1:SUS-IM4_M1_OPTICALIGN_P_GAIN is 0.212. Thus, the calibration in rad is 1e-6/0.212 = 4.72e-6 rad/ct + 2 poles at 1Hz
H1:SUS-IM4_M1_OPTICALIGN_Y_OUT: The channel should be calibrated in urad at DC, and the YAW pendulum frequency is 0.727Hz. Also H1:SUS-IM4_M1_OPTICALIGN_Y_GAIN is 0.388. Thus, the calibration in rad is 1e-6/0.388 = 2.58e-6 rad/ct + 2 poles at 0.727Hz
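A quick numeric sketch of the counts-to-radians calibration above (the two poles are written here as a simple magnitude roll-off):

import numpy as np

# DC calibration: 1 urad per count at the OPTICALIGN input, divided by the gain
cal_p = 1e-6 / 0.212      # 4.72e-6 rad/ct for PIT
cal_y = 1e-6 / 0.388      # 2.58e-6 rad/ct for YAW

def opticalign_to_rad(f, dc_cal, f_pend):
    # magnitude of the counts-to-rad calibration with 2 poles at the pendulum frequency
    return dc_cal / np.abs(1 + 1j * f / f_pend)**2

print(cal_p, cal_y)                          # ~4.72e-06, ~2.58e-06
print(opticalign_to_rad(300.0, cal_p, 1.0))  # PIT calibration magnitude at 300 Hz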
Calibration in Relative Beam Jitter (RBJ):
The field overlap between two Gaussian beams with k=2*pi/lambda, w the beam spot size, and Theta the angle between the beams is
ovl = exp(-Theta^2/(2*Theta_w^2)).
Thus, the mode-mismatch M = 1 - |ovl|^2 = 1 - exp(-Theta^2/Theta_w^2) ≈ (Theta/Theta_w)^2,
where Theta_w = lambda/(pi*w).
From T1200470 and the attached wCalc.m file, we find the beam spot on IM4 to be 2.12mm. Thus Theta_w for IM4 is 160urad.
Finally, there is a factor of two between the beam deflection angle Theta and the mirror angle Phi: Theta = 2*Phi.
Thus, we have to divide the calibration for H1:SUS-IM4_M1_OPTICALIGN_P_OUT and H1:SUS-IM4_M1_OPTICALIGN_Y_OUT by 80urad to get relative beam jitter (RBJ):
H1:SUS-IM4_M1_OPTICALIGN_P_OUT: 0.0590 RBJ/ct
H1:SUS-IM4_M1_OPTICALIGN_Y_OUT: 0.0323 RBJ/ct
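Those numbers can be reproduced directly (a small check, using the 2.12 mm spot size from T1200470):

import numpy as np

lam = 1.064e-6                  # laser wavelength [m]
w_im4 = 2.12e-3                 # beam spot size on IM4 [m]

theta_w = lam / (np.pi * w_im4) # ~160 urad
phi_w = theta_w / 2.0           # mirror angle per beam divergence, ~80 urad

print(theta_w)                  # ~1.60e-4 rad
print((1e-6 / 0.212) / phi_w)   # ~0.059 RBJ/ct for PIT
print((1e-6 / 0.388) / phi_w)   # ~0.032 RBJ/ct for YAW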
Jason (on phone), Jim, Sheila, Stefan, Terra, Kiwamu,
The PSL tripped twice tonight, once at around 2:06 UTC and again at 3:36 UTC. For the first trip, Jason remotely walked us through the PSL recovery procedure on the phone. For the second trip, we could not get Jason on the phone, but since the fault status was exactly the same, we repeated the same recovery procedure by ourselves. Both times the laser came back up without issues. The attachment shows the PSL status screen before we cleared the errors the first time. Each time, we had to add a few hundred ml of water to the chiller to clear the warning. Also, when we went to the chiller the second time, there was water not only on the floor but also on the door and the chiller itself. It looked as if water had been sprayed. Not sure if this is the same as what Sheila reported a few days ago (29964).
Filed FRS #6324.
Title: 09/28/2016, Evening Shift 23:00 – 07:00, All times in UTC
State of H1: NLN, Kiwamu and Stefan plan to leave it undisturbed
Commissioning:
02:00 PSL trip, Jason was called, he helped Kiwamu and Sheila recover
04:30 second PSL trip, Kiwamu and Stefan recovered
ey_mode[9].only_on('INPUT', 'FM10', 'FM4', 'OUTPUT', 'DECIMATION')
ey_mode[9].GAIN.put(0.001)
Channel to monitor: H1:SUS-ETMY_L2_DAMP_MODE9_OUTPUT
rms channel: H1:SUS-ETMY_L2_DAMP_MODE10_RMSLP_LOG10_OUTMON
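For trending the damping offline, one way to pull the logged RMS channel is through the NDS2 client (a sketch; the server, port, and GPS times are placeholders):

import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)   # placeholder server/port
bufs = conn.fetch(1159300000, 1159300600,                  # placeholder GPS span
                  ['H1:SUS-ETMY_L2_DAMP_MODE10_RMSLP_LOG10_OUTMON'])
rms_log10 = bufs[0].data   # log10 of the band-limited RMS; watch for it trending up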
Sheila, Kiwamu, Robert, Daniel, Stefan
After a lot of discussion about possible ways for jitter noise to end up as intensity noise, still nothing really added up. Nevertheless, here are some related numbers:
Quoting the noise around 300Hz, we have
For beam jitter:
- DBB measurement of the laser amplifier: relative beam jitter (rad/divergence angle): 2-10 e-6/rtHz (alog 28729)
- IM4 relative beam jitter (PIT/SUM, YAW/SUM): 2-5e-8/rtHz (plot 1)
For Intensity noise:
- Excess noise in DCPD-sum 1.5e-7mA/rtHz --> 20mA --> 8e-9/rtHz RIN (plot 2)
- RIN coupling from entering main IFO to DCPD (measured) up to 0.05 RIN/RIN around 300Hz (alog 29644), but varying a lot.
- thus we need the equivalent of ~1e-7/rtHz RIN entering the IFO (see the arithmetic sketch after this list)
- The REFL_A_LF_OUT_DQ, calibrated in RIN, reports a RIN of 4e-9/rtHz (plot 3)
- The ISS out of loop PD reports a RIN of 7e-9/rtHz (alog 30026)
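The arithmetic behind that equivalent-RIN estimate, using the numbers above (a back-of-the-envelope sketch; the 0.05 coupling is the "up to" value and varies a lot):

excess_dcpd = 1.5e-7 / 20.0    # excess DCPD-sum noise over 20 mA -> ~8e-9 RIN/rtHz
coupling = 0.05                # input RIN -> DCPD RIN around 300 Hz (alog 29644)
rin_needed = excess_dcpd / coupling
print(rin_needed)              # ~1.5e-7 /rtHz, i.e. the ~1e-7 level quoted above
# For comparison, the measured input RIN is only ~4e-9 (REFL_A_LF) to ~7e-9
# (ISS out-of-loop) /rtHz, a factor of ~20-40 short of this.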
The number we were missing is an experimental measurement of the (power) transfer function from beam jitter after the IMC (e.g. added through the IMs) directly to DCPD_SUM RIN.
Also, the spectrum of plot 2 was taken at a time when almost none of the jitter peaks showed up in DARM - however there is still a clear broadband hump visible. We seem to be fighting two different phenomena here. Robert also had a plot that shows that this broadband stuff is coherent with the ISS_SECONDLOOP control signal... as I promised, no conclusion.
After an Alignment, I wanted to run an INIT state for ISC_LOCK. Noticed that it was nowhere to be found. Mentioned this to TJ, and he ended up running the INIT by hand via Guardian command line.
This was due to a bug in the guardian MEDM states screen generation. Fixed in version 1.0.1 which was pushed yesterday.
As this lock progressed, the 100 to 200 Hz region was improving and the range hit 80 Mpc right after 8:30 UTC. This is despite some huge noise in the 30 to 50 Hz region that came up at the same time. Around 8:49, the detector lost lock. The first attachment is the range, and the second shows the two parts of the spectrum going in opposite directions. It's not clear why the lock was lost, although the PI summary page shows something in the 3-8 Hz band growing exponentially (blue trace, third plot). Maybe it's this line at 4735 Hz (fourth plot)? It gets bigger and grows ugly sidebands as time goes on. I don't see it identified on the PI MEDM page though (I looked for 11649 and 28033 Hz as likely aliases). Edit: Actually, this could be the 10th order EY violin mode; see alog 19608. Is it possible something is going wrong with the damping and it's getting out of control? Or maybe it's nothing to worry about?
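For reference, those two candidate alias frequencies are just 4735 Hz reflected about the Nyquist frequencies of 16384 Hz and 32768 Hz sampled channels (a quick check; which sampling rates are relevant is my assumption):

f_obs = 4735.0                   # Hz, where the feature appears
for fs in (16384.0, 32768.0):    # candidate original sampling rates
    print(fs - f_obs)            # frequencies that would fold down to 4735 Hz: 11649.0, 28033.0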
I was able to damp it with a 0.1 Hz wide Butterworth +100 dB filter and a gain of +0.001, and I was able to blow it up by flipping the sign of the gain. As Evan already mentioned in alog 19612, this line indeed belongs to ETMY. Terra mentioned that this is not a PI due to its non-exponential growth.
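For what it's worth, here is a rough offline sketch of a 0.1 Hz wide band-pass around this line in scipy (this only illustrates the filter shape, assuming a 16384 Hz model rate; the actual damping filter lives in foton, and the "+100dB" gain piece is not reproduced here):

from scipy import signal

fs = 16384.0                  # assumed model rate
f0, bw = 4735.0, 0.1          # line frequency and filter width [Hz]

# 2nd-order Butterworth band-pass, designed as second-order sections for
# numerical stability at this very narrow relative bandwidth
sos = signal.butter(2, [f0 - bw / 2, f0 + bw / 2], btype='bandpass',
                    fs=fs, output='sos')
w, h = signal.sosfreqz(sos, worN=2**18, fs=fs)   # inspect the response near f0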
I opened DBB shutter for HPO to make some measurement. After 30 seconds or so, the DBB moved to "interlock" mode. Resetting the interlock by going to "stand by" and then to "manual" immediately brings the system back to "interlock" mode.
This is not triggered by the oscillation monitor (the so-called "software interlock" in T0900576), as the oscillation monitor output (H1:PSL-DBB_INTERLOCK_LP_OUT) never goes above the threshold (currently set to 1000). It should be the so-called "hardware interlock". However, the PSL documentation is kind of convoluted and I have a hard time finding what closes the shutter when there's a "hardware" interlock trigger: is it the DBB front end, or is there a parallel hardware circuit that closes the shutter regardless of the DBB front end?
Anyway, according to Jason this has been a problem for a while, and then it went away on the day of the power outage.
Filed FRS #6330. I asked Fil to take a look at the AA and AI chassis associated with the DBB during the next maintenance window (10/4/2016). If those check out the problem is likely in the DBB control box that lives in the PSL rack in the LVEA.
I have restarted the PSL. Looking at the PSL status screen, I determined that the situation was exactly the same as yesterday's (30063). However, because it was already late in the night, I decided not to call the experts. Instead, I did the same recovery procedure myself as I did yesterday. I added 200 ml of water to the chiller.
Thanks for handling the PIs, Jim. You saved a few locks.
Got in right as Jim was leaving. Looking back at the spectrum, it was only MODE17 that rang up during the 3:30 time; all other modes just had increasing RMS due to bleed-over from the extremely heightened mode (note MODE27 did not ring up, it was MODE10 - I've changed the color to make this more distinguishable). This one had not been problematic since I implemented stepping filters and a smarter guardian to handle it (several days now).
At the 3:50 power up, MODE17 was still significantly rung up (since Jim relocked so quickly), hence we saw it right away. If lock is lost due to a PI and you immediately relock, pause at DC_READOUT before powering up to ensure the mode is damped first. I'll go through this with operators.
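One way to make that check concrete (a sketch only, from a Guardian/ezca shell; the channel name is a guess by analogy with the violin-mode monitors above, the threshold is a placeholder, and in practice simply watching the RMS monitor on the PI screen is fine):

import time

RMS_CHAN = 'SUS-ETMY_L2_DAMP_MODE17_RMSLP_LOG10_OUTMON'   # assumed naming; the PI screen has the real monitor
THRESHOLD = 0.0                                           # placeholder log10(RMS) level considered "damped"

while ezca[RMS_CHAN] > THRESHOLD:   # assumes ezca is provided by the shell
    time.sleep(10)                  # hold at DC_READOUT until the mode is damped
# ...then proceed with the power up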