model restarts logged for Wed 28/Sep/2016
2016_09_28 11:38 h1nds0
2016_09_28 13:29 h1nds0
Two unexpected nds0 restarts; hopefully the memory upgrade will fix this.
WP 6199: The memory for h1nds0 was increased from 24GB to 48GB, and the daqd was reconfigured to make use of the additional RAM. This was done in an attempt to solve the nds0 restarts that have been occurring. h1nds0 now has the same amount of memory as h1nds1. The daqd for h1nds1 was also reconfigured to use more of the available RAM, but was not restarted; the change will take effect with the next restart of the DAQ. Note that nds0 and nds1 are now configured the same.
Daniel, Stefan
We measured the ISS out of loop RIN using a straight shot beam to the OMC at 25W (35.9mA DCPD_SUM). After choosing some better misaligned positions, we got about 1e-8/rtHz between 150Hz and 800Hz (see attached plot).
The first attempt was limited by scatter noise from misaligned optics. The following new misalignment TEST offsets cut down a lot on that scatter noise:
PRM_P 520 unchanged
PRM_Y -3200 (was -6200, +3000) - this one was not updated yet, because nominally the PRM should dump the beam at a beam dump on top of HAM1.
SRM_P 707
SRM_Y -1910 (was 1090, -3000)
ITMY P 40.3
ITMY Y 53.1 (was -46.9, +100)
None of those new offsets are in SDF yet.
Here is a plot of some of the noise projections made from broadband injections last night.
The frequency noise coupling is from Evan's measurement (alog 29893) and the spectrum of the (almost) in loop sensor refl 9 I, so the sensor noise imposed by the loop still needs to be added. It is interesting that some of our jitter peaks show up in frequency noise. Frequency noise is above the DARM noise floor for the peak at 950 Hz. As Jenne wrote (29817), this peak has sometimes gone away when the 9MHz modulation depth is reduced, although not always.
In the bucket, MICH, SRCL, frequency and intensity noise are all too small to explain our broadband noise.
Edit: Attaching the plot this time
Stefan will be conducting OMC measurements during this meeting.
TITLE: 09/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 6mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1 dropped out at about 3:30 am PST, hovering just under 80 Mpc.
AS AIR video looked obviously misaligned. Just opted to start an Initial Alignment.
Kiwamu, Stefan
We drove IM4 in PIT and YAW, and measured the beam-jitter to OMC_DCPD_RIN transfer function. We calibrated the drive signal in Relative Beam Jitter (RBJ), so we can directly compare it with measured beam jitter at other places (for calibration, see below).
We found a transfer function from IM4 RBJ to DCPD_RIN of about 2e-2 RIN/RBJ for yaw, and 3e-3 RIN/RBJ for pitch. We then tried to minimize the coupling by putting offsets into the INP1 ASC loop that controls IM4.
For YAW, we found a minimum (with sign change) for -1300cts OFFSET, resulting in about the same transfer function as for the undisturbed PIT.
For PIT, the improvement was minimal, and we didn't find a minimum. But the best coupling was achieved at +1800cts PIT OFFSET.
With those offsets the recycling gain dropped a bit, but there still was a net range improvement.
Calibration:
========
DCPD_SUM: The channel is in mA. The attached txt file includes the closed loop correction, as well as the factor 1/20mA, casting the channel into RIN.
H1:SUS-IM4_M1_OPTICALIGN_P_OUT: The channel should be calibrated in urad at DC, and the PIT pendulum frequency is 1Hz. Also H1:SUS-IM4_M1_OPTICALIGN_P_GAIN is 0.212. Thus, the calibration in rad is 1e-6/0.212 = 4.72e-6 rad/ct + 2 poles at 1Hz
H1:SUS-IM4_M1_OPTICALIGN_Y_OUT: The channel should be calibrated in urad at DC, and the YAW pendulum frequency is 0.727Hz. Also H1:SUS-IM4_M1_OPTICALIGN_Y_GAIN is 0.388. Thus, the calibration in rad is 1e-6/0.388 = 2.58e-6 rad/ct + 2 poles at 0.727Hz
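For reference (not part of the original entry), the stated calibration can be sketched as a transfer function in Python/scipy; treating "2 poles at 1Hz" as two real poles is an assumption about the intended pole shape:

import numpy as np
from scipy import signal

dc_gain = 1e-6 / 0.212                    # PIT: 4.72e-6 rad/ct at DC
w0 = 2 * np.pi * 1.0                      # PIT pendulum pole at 1 Hz [rad/s]
# two poles at 1 Hz; the w0**2 factor keeps the DC gain at dc_gain
cal_pit = signal.ZerosPolesGain([], [-w0, -w0], dc_gain * w0**2)
f = np.logspace(-1, 3, 500)               # 0.1 Hz to 1 kHz
_, resp = signal.freqresp(cal_pit, 2 * np.pi * f)   # rad/ct vs frequency

For YAW, use dc_gain = 1e-6/0.388 and a pole frequency of 0.727 Hz.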
Calibration in Relative Beam Jitter (RBJ):
The field overlap between two Gaussian beams, with k = 2*pi/lambda, w the beam spot size, and Theta the angle between the beams, is
ovl = exp( -(1/2)*(Theta/Theta_w)^2 ),
so the mode mismatch is
M = 1 - |ovl|^2 = 1 - exp( -(Theta/Theta_w)^2 ) ~= (Theta/Theta_w)^2 for small angles,
where Theta_w = lambda/(pi*w) is the divergence angle.
From T1200470 and the attached wCalc.m file, we find the beam spot on IM4 to be 2.12mm. Thus Theta_w for IM4 is 160urad.
Finally, there is a factor of two between the beam deflection angle Theta and the mirror angle Phi: Theta = 2*Phi.
Thus, we have to divide the calibrations for H1:SUS-IM4_M1_OPTICALIGN_P_OUT and H1:SUS-IM4_M1_OPTICALIGN_Y_OUT by Theta_w/2 = 80urad to get relative beam jitter (RBJ):
H1:SUS-IM4_M1_OPTICALIGN_P_OUT: 0.0590 RBJ/ct
H1:SUS-IM4_M1_OPTICALIGN_Y_OUT: 0.0323 RBJ/ct
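These numbers can be reproduced in a few lines of Python (lambda = 1064nm is assumed; the spot size is from T1200470 / wCalc.m above):

from math import pi

lam = 1.064e-6                  # laser wavelength [m] (assumed)
w = 2.12e-3                     # beam spot size on IM4 [m]
theta_w = lam / (pi * w)        # divergence angle, ~160urad
# beam angle Theta = 2*Phi (mirror angle), so divide rad/ct by theta_w/2 = 80urad
print((1e-6 / 0.212) / (theta_w / 2))   # PIT: ~0.0590 RBJ/ct
print((1e-6 / 0.388) / (theta_w / 2))   # YAW: ~0.0323 RBJ/ct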
Jason (on phone), Jim, Sheila, Stefan, Terra, Kiwamu,
The PSL tripped twice tonight, once at around 2:06 UTC and again at 3:36 UTC. For the first trip, Jason remotely walked us through the PSL recovery procedure on the phone. For the second, we could not reach Jason on the phone, but since the fault status was exactly the same, we repeated the same recovery procedure by ourselves. Both times, the laser came back up without issues. The attachment shows the PSL status screen before we cleared the errors the first time. Each time, we had to add a few hundred ml of water to the chiller to clear the warning. Also, when we went to the chiller the second time, there was water not only on the floor but also on the door and the chiller itself; it looked as if water had been sprayed. Not sure if this is the same as what Sheila reported a few days ago (29964).
Filed FRS #6324.
Title: 09/28/2016, Evening Shift 23:00 – 07:00, All times in UTC
State of H1: NLN; Kiwamu and Stefan plan to leave it undisturbed
Commissioning:
02:00 PSL trip, Jason was called, he helped Kiwamu and Sheila recover
04:30 second PSL trip, Kiwamu and Stefan recovered
# ETMY L2 violin-mode damping settings:
ey_mode[9].only_on('INPUT', 'FM10', 'FM4', 'OUTPUT', 'DECIMATION')  # enable only these pieces of the damping filter bank
ey_mode[9].GAIN.put(0.001)  # set the damping gain to +0.001
Channel to monitor: H1:SUS-ETMY_L2_DAMP_MODE9_OUTPUT
rms channel: H1:SUS-ETMY_L2_DAMP_MODE10_RMSLP_LOG10_OUTMON
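For reference, a minimal pyepics sketch for watching the RMS monitor while adjusting the damping gain (assumes pyepics and EPICS channel access are available on a control room workstation):

import time
from epics import caget

# log10 of the low-passed mode RMS; it should trend downward if the damping sign is right
while True:
    rms = caget('H1:SUS-ETMY_L2_DAMP_MODE10_RMSLP_LOG10_OUTMON')
    print(time.strftime('%H:%M:%S'), rms)
    time.sleep(10)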
Sheila, Kiwamu, Robert, Daniel, Stefan
After a lot of discussion about possible ways for jitter noise to end up as intensity noise, still nothing really added up. Nevertheless, here are some related numbers:
Quoting the noise around 300Hz, we have
For beam jitter:
- DBB measurement of the laser amplifier: relative beam jitter (rad/divergence angle): 2-10 e-6/rtHz (alog 28729)
- IM4 relative beam jitter (PIT/SUM, YAW/SUM): 2-5e-8/rtHz (plot 1)
For Intensity noise:
- Excess noise in DCPD sum: 1.5e-7 mA/rtHz, divided by the 20 mA DC current --> 8e-9/rtHz RIN (plot 2)
- RIN coupling from entering main IFO to DCPD (measured): up to 0.05 RIN/RIN around 300Hz (alog 29644), but varying a lot.
- thus we need the equivalent of about 1e-7/rtHz RIN entering the IFO (see the arithmetic check after this list)
- The REFL_A_LF_OUT_DQ, calibrated in RIN, reports a RIN of 4e-9/rtHz (plot 3)
- The ISS out of loop PD reports a RIN of 7e-9/rtHz (alog 30026)
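Putting the intensity-noise numbers together (a quick arithmetic check using only the values quoted above):

excess_dcpd_rin = 1.5e-7 / 20               # ~8e-9 /rtHz RIN at the DCPDs (plot 2)
coupling = 0.05                             # input RIN -> DCPD RIN near 300Hz (alog 29644)
required_input_rin = excess_dcpd_rin / coupling   # ~1.5e-7 /rtHz
# measured input RIN: 4e-9 (REFL_A_LF, plot 3) and 7e-9 (ISS out-of-loop, alog 30026)
# --> the measured input RIN is a factor of ~20 too small to explain the excess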
The number we are still missing is an experimental measurement of the (power) transfer function from beam jitter after the IMC (e.g. jitter added through the IMs) directly to DCPD_SUM RIN.
Also, the spectrum of plot 2 was taken at a time when almost none of the jitter peaks showed up in DARM - however, there is still a clear broadband hump visible. We seem to be fighting two different phenomena here. Robert also had a plot showing that this broadband noise is coherent with the ISS_SECONDLOOP control signal... as I promised, no conclusion.
WP6195
I have modified the reservation system to permit users to supply work-permit numbers (if applicable) as an additional argument, and to display these as a new column. Full details are on the wiki page:
https://lhocds.ligo-wa.caltech.edu/wiki/H1CommissioningReservationSystem
An example display screen is shown below (with made-up tasks).
Resetting the IMs to their O1-start and O1-end alignments, I have identified the IMC as the biggest contributor to pitch beam-position changes at PRM, and IM1-3 as the biggest contributors to yaw beam-position changes at PRM.
| O1 changes | delta IM1-3 | delta IMC |
| im4t p | 0.103 | -0.376 |
| im4t y | -0.209 | 0.231 |
| iss2 p (-y) | 0.197 | -0.632 |
| iss2 y (-p) | -0.817 | 0.081 |
Summary and detailed charts attached.
The copper seat has long been bottomed out due to over-torquing and gazillions of cycles. The resulting air leak into the RGA volume when the chamber is vented necessitates frequent baking of the RGA volume, which in turn requires that the electronics module be removed and reinstalled. These repeated re-installations of the electronics module, along with the associated "hit or miss" alignment of the feed-through pins and sockets, have caused the sockets to recess into their connector. Postponing the fix for one problem is soon to result in a second, avoidable problem.
The first couple of hours of today were spent addressing the TCSy chiller, which went down overnight. After that, we went through an initial alignment and had a few locks. Unfortunately, later locking attempts broke lock during ASC guardian states (probably related to the changes Sheila noted in her alog last night); specifics below.
Day's Activities:
ISC Locking Note:
OPS INFO Notes:
PRM ALIGN: If AS_AIR is flashing, you are not locked, most likely due to a misaligned PRM. Take ALIGN_IFO to DOWN, then tweak PRM while watching AS_AIR. Then go back to PRM_ALIGN for the alignment.
Reminder that two weeks ago we swapped out the 16bit DAC card in h1oaf0, but got a dac-zero'ed error on Sunday (9/25). Yesterday (9/27) we swapped the first and third ADCs and are waiting to see if we get another DAC error. On Keith's suggestion we are running a script on h1oaf0 which will report to a log file if the DAC FIFO status is anything other than OK. We suspect that a FIFO full or empty error is precipitating the zeroing of the DAC channels.
The script is called monitor_iop_dac_status.bsh, running in the background on h1oaf0's general core. It appends text to the log file:
/opt/rtcds/lho/h1/target/h1iopoaf0/logs/h1iopoaf0_16bitDAC_fifo_status.txt
It runs every minute, outputting a date stamp and any errors. The DAC FIFO status is read from the /proc/h1iopoaf0/status file.
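The actual script is in bash, but its logic amounts to the following (a Python sketch; the exact line format of the status file is an assumption):

import time
from datetime import datetime

STATUS = '/proc/h1iopoaf0/status'
LOG = '/opt/rtcds/lho/h1/target/h1iopoaf0/logs/h1iopoaf0_16bitDAC_fifo_status.txt'

while True:
    with open(STATUS) as f:
        lines = f.read().splitlines()
    # keep any DAC FIFO status line that does not read OK (line format assumed)
    bad = [l for l in lines if 'FIFO' in l and 'OK' not in l]
    if bad:
        with open(LOG, 'a') as log:
            log.write(datetime.utcnow().isoformat() + '\n')
            log.write('\n'.join(bad) + '\n')
    time.sleep(60)   # runs every minute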
After an alignment, I wanted to run the INIT state for ISC_LOCK and noticed that it was nowhere to be found. I mentioned this to TJ, and he ended up running INIT by hand via the Guardian command line.
This was due to a bug in the guardian MEDM states screen generation. Fixed in version 1.0.1 which was pushed yesterday.
As this lock progressed, the 100 to 200 Hz region was improving, and the range hit 80 Mpc right after 8:30 UTC. This is despite some huge noise in the 30 to 50 Hz region that came up at the same time. Around 8:49, the detector lost lock. The first attachment is the range, and the second shows the two parts of the spectrum going in opposite directions. It's not clear why the lock was lost, although the PI summary page shows something in the 3-8 Hz band growing exponentially (blue trace, third plot). Maybe it's this line at 4735 Hz (fourth plot)? It gets bigger and grows ugly sidebands as time goes on. I don't see it identified on the PI MEDM page though (I looked for 11649 and 28033 Hz as likely aliases, i.e. 16384 - 4735 Hz and 32768 - 4735 Hz). Edit: Actually, this could be the 10th-order EY violin mode; see alog 19608. Is it possible something is going wrong with the damping and it's getting out of control? Or maybe it's nothing to worry about?
I was able to damp it with a 0.1 Hz wide butterworth +100dB filter and a gain of +0.001, and I was able to blow it up by flipping the sign of the gain. As Evan already mentioned in alog 19612, this line indeed belongs to ETMY. Terra mentioned that this is not PI, due to its non-exponential growth.
I opened the DBB shutter for the HPO to make some measurements. After 30 seconds or so, the DBB moved to "interlock" mode. Resetting the interlock by going to "stand by" and then to "manual" immediately brings the system back to "interlock" mode.
This is not triggered by the oscillation monitor (the so-called "software interlock" in T0900576), as the oscillation monitor output (H1:PSL-DBB_INTERLOCK_LP_OUT) never goes above threshold (currently set to 1000). It should be the so-called "hardware interlock". However, the PSL documentation is convoluted enough that I have a hard time finding what closes the shutter when there's a "hardware" interlock trigger: is it the DBB frontend, or is there a parallel hardware circuit that closes the shutter regardless of the DBB frontend?
Anyway, according to Jason this has been a problem for a while, and it went away on the day of the power outage.
Filed FRS #6330. I asked Fil to take a look at the AA and AI chassis associated with the DBB during the next maintenance window (10/4/2016). If those check out, the problem is likely in the DBB control box that lives in the PSL rack in the LVEA.