I started welding the turbo header piping under the beam tube at Mid Y back together today. The first weld was started at approximately 10:17 PT and lasted for ~45 seconds. The next welds were at ~10:36 and ~10:46 PT and each lasted for ~90 seconds. There will be more welding next week.
FRS #3896 The ~hinj account on h1hwinj1 is now being backed up to the daqsvn.ligo-la.caltech.edu computer every hour. This should close out the FRS ticket.
Sharing learning laterally across the Observing Sites
The Detector Engineers and Operators review together the faults in the FRS that caused observing-time loss.
The circumstances surrounding each fault are reviewed, along with a discussion of whether the response in each case was appropriate and what is now in place to mitigate repeat occurrences.
See LLO log entry 24196
https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=24196
J. Kissel

In order to begin exploring the systematic discrepancy between model and measurement below ~30 [Hz] in the IFO's sensing function (see, e.g., LHO aLOG 24709), we've taken PCAL2DARM and DARMOLGTF transfer functions at the normal amplitude and at half amplitude. Sadly, the IFO lost lock during the last few (low-frequency) data points of the last half-amplitude DARMOLGTF, but I think we have enough data to make a comparison.

Preliminary message: there's no obvious, unexpected difference between normal and half amplitude. Coherence is less, so the data points are a little more scattered, but no surprise there.

One thing of note: while watching the DARM ASD during the measurement, there are no signs of higher harmonics in the PCAL excitation, but once the DARM OLGTF (as driven by the ESD and subsequent upper stages) reaches ~20 Hz, one can clearly see a second harmonic in the ASD marching along right behind the fundamental excitation frequency. To give a feel, during the normal-amplitude drive, once the excitation hit ~15 [Hz], the second harmonic's amplitude was roughly 3.5 orders of magnitude below the fundamental (but still clearly visible thanks to the discrepancy between the 10 and 30 [Hz] sensitivity). Recall that the linearization for ETMY is *not* on in the IFO's lowest-noise state. The EY bias voltage remains -380 [V] (with an effective bias voltage from charge of ~ -20 [V]; see LHO aLOG 24547).

We leave the IFO down for the PCAL team to explore PCALX problems (see LHO aLOG 24726), and so Betsy can grab this week's charge measurements on ETMY.

Analysis of the data to come later, but the files live here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/DARMOLGTFs/
2016-01-08_H1_DARM_OLGTF_7to1200Hz.xml
2016-01-08_H1_DARM_OLGTF_7to1200Hz_halfamp.xml
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/PCAL
2016-01-08_PCALY2DARMTF_7to1200Hz.xml
2016-01-08_PCALY2DARMTF_7to1200Hz_halfamp.xml
Later, I processed the transfer functions that Jeff took and made a comparison. I do not see any systematic change between the measurements with the nominal amplitudes and the ones with half amplitudes. See the attached pdf for more details. Note that we are seeking a systematic as large as 20% at 10 Hz. There are seemingly statistical errors in the measurement, but they do not look systematic.
The analysis code lives in the SVN at: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Scripts/DARMOLGTFs/HalfAmp_20160112.m
The resultant plot can be found at: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Results/DARMOLGTFs/2016-01-08_HalfAmp.pdf
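The actual analysis lives in the Matlab script above; as an illustrative sketch only (synthetic data, a toy single-pole plant, and a crude 3-sigma test, none of which come from the real measurement), the comparison logic amounts to forming the complex ratio of the two transfer functions and asking whether the mean magnitude ratio deviates from unity by more than the point-to-point scatter:

```python
import numpy as np

def compare_tf_amplitudes(tf_nominal, tf_halfamp):
    """Compare two transfer-function measurements of the same plant.

    Returns the complex ratio (half / nominal) per frequency point, the
    estimated systematic offset of |ratio| from unity, and a crude
    3-sigma verdict on whether that offset exceeds the scatter.
    """
    ratio = np.asarray(tf_halfamp) / np.asarray(tf_nominal)
    mag = np.abs(ratio)
    mean_dev = np.mean(mag) - 1.0               # systematic offset estimate
    sem = np.std(mag) / np.sqrt(len(mag))       # standard error of the mean
    systematic = abs(mean_dev) > 3 * sem        # crude 3-sigma test
    return ratio, mean_dev, systematic

# Synthetic demonstration: same underlying plant measured twice, with the
# half-amplitude drive noisier (lower coherence, hence more scatter).
rng = np.random.default_rng(0)
f = np.logspace(np.log10(7), np.log10(1200), 50)
plant = 1.0 / (1.0 + 1j * f / 100.0)            # toy single-pole response
tf_nom = plant * (1 + 0.005 * rng.standard_normal(len(f)))
tf_half = plant * (1 + 0.010 * rng.standard_normal(len(f)))
ratio, dev, syst = compare_tf_amplitudes(tf_nom, tf_half)
print(f"mean |ratio|-1 = {dev:.4f}, systematic: {bool(syst)}")
```

With only statistical noise in the synthetic data, the offset comes out consistent with zero, mirroring the "statistical but not systematic" conclusion above.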
Found h1nds0 to be dead with a kernel panic; rebooted. Did not cycle power, just rebooted.
The h1nds0 computer died January 7, 22:39:00 UTC which was 14:39 PST. Monit did not tell us about it because the computer was locked up, so we were dependent on an observant operator or administrator to notice it. Apparently we need a better system to alert people to faults of this nature.
LLO CDS uses regular SSH queries of all machines via Nagios to keep ahead of these things. Contact Michael Thomas for details.
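The LLO Nagios configuration isn't described here, but the idea of an external liveness probe can be sketched as follows (a minimal, hypothetical example, not the actual check: host names are illustrative, and a real deployment would report results through Nagios rather than print). An external TCP check on the SSH port catches a wedged host even when on-host monitors like monit cannot report:

```python
import socket

def ssh_reachable(host, port=22, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll_hosts(hosts):
    """Return the subset of hosts that fail the SSH-port check."""
    return [h for h in hosts if not ssh_reachable(h)]

if __name__ == "__main__":
    # Illustrative host list only.
    for h in poll_hosts(["h1nds0", "h1nds1"]):
        print(f"ALERT: {h} unreachable on port 22")
```

Run from a machine other than the one being monitored, this kind of probe would have flagged h1nds0 shortly after the kernel panic instead of waiting for someone to notice a white box.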
TITLE: 1/8 DAY Shift: 16:00-00:00UTC (08:00-04:00PDT), all times posted in UTC
STATE of H1: OBSERVING (but taken out at 8am for calib)
Outgoing Operator: Patrick
Quick Summary:
Nice H1 hand-off from Patrick (H1 has been locked since Cheryl locked it last night and is going on 10+ hrs). Range has been hovering at a flat & quiet 80 Mpc. The useism looks like it's been dropping over the last 12 hrs, and the LVEA is currently at 0.55 um/s. Winds are under 4 mph.
Patrick mentioned nds0 being down (symptoms: (1) nds0 white boxed & (2) edcu red-boxed on overview) & that David has been notified.
Jeff took H1 promptly at 16:00utc for Calibration.
TITLE: 01/08 [OWL Shift]: 08:00-16:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing @ ~79 Mpc
SHIFT SUMMARY: Remained in observing the entire shift. 3 SUS ETMY saturations. Left a message with Dave that nds0 is down. H1:LSC-POP_A_LF_OUTPUT has been stable.
INCOMING OPERATOR: Corey
Have remained in observing at ~79 Mpc. 3 SUS ETMY saturations. H1:LSC-POP_A_LF_OUTPUT stable. nds0 is still down; I have left a message for Dave.
Ops Eve Summary: 00:00-08:00 UTC (16:00-23:59 PT)
State of H1: Observe, range 79Mpc, POP_A_LF stable
Fellow: Darkhan
Shift Summary:
Calibration measurements until about an hour into my shift
Locking went well, and made it past DRMI
Calibrations needed H1 out of DRMI, so I broke the lock
Calibrations continued, then back to locking
Even after having been out of lock most of the day, H1 locked and went back to Low Noise
Relocking Issues:
Changes:
I restored the guardian to request 23 W instead of 10 W. JeffK had suggested it, and I emailed Sheila, and she said to go ahead.
TITLE: 01/08 [OWL Shift]: 08:00-16:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing @ ~78 Mpc
OUTGOING OPERATOR: Cheryl
QUICK SUMMARY: From the cameras: The lights are off in the LVEA. The lights are off in the PSL enclosure. The lights are off at end Y. I cannot tell if the lights are on or off at mid X, mid Y, or end X. The winds are less than 10 mph. From pinging: CDS WAP is off at the LVEA, end X, and end Y. CDS WAP is on at mid X and mid Y. nds0 is down and the EDCU is red. Screenshots of the seismic bands and ISI blends are attached.
J. Kissel, K. Izumi
We're too exhausted to write the aLOG this deserves, so we'll just say "we did it!" and leave plots and conclusions until tomorrow. The message: from what preliminary analysis plots we have made, we can say that these new results are consistent with those taken during ER8, modulo the well-tracked change in kappa_{TST} (the EY L3 ESD actuation strength). Stay tuned!
We can also say that using ALS DIFF instead of a single arm locked on red has drastically improved the statistical uncertainty of the Free Swinging Michelson method -- thanks LLO!
Manually filled CP3 -> Opened exhaust check valve bypass -> Opened LLCV bypass valve 1/2 turn ccw -> LN2 evident at exhaust after 29 minutes 30 seconds -> Closed LLCV bypass -> Closed exhaust bypass valve after several minutes -> Confirmed with Gerardo that we are performing this exercise following the same procedure -> I cannot explain the discrepancy between the fill times today vs. the previous fill unless the LLCV bypass valve's stop-to-stop is very coarse and our "1/2 turns" differ significantly. Next manual over fill of CP3 is scheduled for Saturday before 4:00 pm.
Several minutes prior to filling CP3, I had used an SS wire brush to remove the Q putty that had been applied during our leak hunting of the GNB spool a few months ago (in the VEA) to see if, by doing so, it changed the pressure in the Y-mid -> no change -> I then applied Vacseal to the previously repaired leak site on the GNB spool -> no change -> Finally, I applied Vacseal to the area around the repaired leak site -> no change. This effort was an attempt to explain the ongoing slow pressure rise observed in the Y-mid station, which began shortly after the problems with CP3 that required manual filling of CP3.
Activity Log: All Times in UTC (PT)
20:00 (12:00) Take over from Corey
21:22 (13:22) Office supply delivery for Bubba
21:54 (13:54) Office supply truck off site
22:21 (14:21) GRB – Ignored due to calibration work
22:26 (14:26) Acknowledged HWJ stopped message
22:45 (14:45) Kyle – Going to Mid-Y to top off CP3
23:45 (15:45) Calibration work finished for the day. Starting to relock
00:00 (16:00) Turn over to Cheryl

End of Shift Summary:
Title: 01/07/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: Jeff K., Kiwamu, Gary
Incoming Operator: Cheryl
Shift Detail Summary: Supporting calibration work. 23:45 (15:45) – Tried relocking the IFO without running initial alignment.
Posted are the December data files from the temperature and relative humidity data loggers for the 3IFO desiccant cabinet and the Dry storage boxes. All appears to be normal.
Calibration work continues.
[Jenne, Kiwamu, Sheila, JeffK]
Today, the interferometer decided that it didn't want to go from ASC_PART2 to ASC_PART3 without some hand-tuning each lock. However, after some investigation, it's still not entirely clear what the right thing to do is :(
The things we were playing with were the offsets in the CSOFT and DSOFT pitch filter banks (can only be tweaked *after* the loops are on, since there's an integrator always on in each of those filter banks) and the transmon qpds' ASC-[x,y]_TR_[a,b]_[pit,yaw]_OFFSETs. These things are kind of the same, but you need to adjust the transmon offsets if you want to do some adjusting before the loops are closed, and you need to adjust the CSOFT and DSOFT offsets if you want to do some adjusting after the loops are closed.
We've had locks where we didn't survive if we didn't re-zero the qpd offsets before going to part3, and we've had locks where we didn't survive part3 if we didn't adjust the CSOFT and DSOFT offsets. But, we've also had locks where the opposite was true.
Anyhow, at this point, I think our advice is to try going through the guardian like normal (no by-hand offsetting), and if that fails one or more times, call a commissioner (it's me for now, so please call). For example, the last lock, I re-set the qpd offsets during asc part 2, but forgot to engage the CSOFT and DSOFT offsets, and we seemed to be just fine. So, it's not totally clear yet what the final prescription should be.
Tonight we had one OK lock with the offsets as Jenne alogged above, which we lost because of an EQ (see Cheryl's shift summary). On relocking without changing the offsets or doing anything special, we lost lock due to the CSOFT pitch instability that has plagued us when we have changed the TMS QPD offsets in the past. (There was a small EQ that arrived on site shortly after this lockloss, but clearly we lost lock because of the pitch instability before it arrived.) After this, Cheryl and I let the IFO lock and engage ASC with the offsets that Jenne had found this afternoon, then once the soft loops were closed very slowly reverted to the old offsets (see Cheryl's alog) at 2 Watts. Our recycling gain was around 41 when we started, and although it dropped as we reverted the offsets it returned to 41-42 once they were all reverted. We powered up slowly and the recycling gain stayed above 40. After about 20 minutes I thought that things seemed better than they have in several days, so I suggested we offload the full lock ASC. Sadly this broke the lock.
Since this situation is very similar to what happened Dec 3rd-4th, I re-ran the script I used then to look at build-ups (23959) for the past 10 days. The results are very similar: the drop-off in the POP DC trace is very similar to the drop-off in the arm transmissions (the ratio of arm transmissions to POP DC is shown in the middle panel, and is changing by at worst 1% while the powers drop by nearly 15%).
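The actual script is the one from alog 23959, not reproduced here; purely as an illustrative sketch with synthetic data (the channel values and sag profile below are invented), the diagnostic is just the fractional variation of the arm-transmission / POP DC ratio over a lock, which stays flat when both build-ups sag together:

```python
import numpy as np

def buildup_ratio_drift(arm_trans, pop_dc):
    """Fractional peak-to-peak variation of arm transmission / POP DC."""
    ratio = np.asarray(arm_trans) / np.asarray(pop_dc)
    return (ratio.max() - ratio.min()) / ratio.mean()

# Synthetic lock segment: both build-ups sag ~15% together over the lock,
# so the ratio stays flat -- mimicking the behavior described above.
t = np.linspace(0, 1, 200)
common_sag = 1.0 - 0.15 * t                         # shared 15% droop
arm = 1000.0 * common_sag                           # arbitrary counts
pop = 20.0 * common_sag * (1 + 0.002 * np.sin(40 * t))  # tiny independent wiggle
drift = buildup_ratio_drift(arm, pop)
print(f"ratio drift: {drift * 100:.2f}% while powers drop 15%")
```

A flat ratio despite the power drop is what argues against a DARM offset or OMC alignment change and points instead at a common loss upstream.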
One explanation we considered in December was that the OMC was becoming misaligned, causing the DARM offset to change. The second attachment shows the OMC ASC control signals and the QPDs, which are out of loop. You can see that they move around even in locks where the recycling gain is stable and don't seem to be any worse in the last few days, so it seems like this is not our problem.
Jim has just relocked with a recycling gain of 41, which has been stable for nearly 20 minutes, so this is looking more promising. One thing that we did is edit lscparams so that the guardian will go to 10 Watts in the increase power state instead of the normal set point of 23 W.
Until this is reverted the steps to follow for power up are:
1) Request INCREASE_POWER (it is important to change the PSL power only in this state, since it automatically takes care of gain scalings for you).
2) Wait until the power reaches approximately 10 watts and the recycling gain seems stable. We have been waiting a few minutes here to make sure it stays stable; that should not normally be necessary.
3) Open the PSL rotation stage medm screen (from the site map, under IOO on the top right-hand side, Rotation Stage).
4) Type 23 in the requested power dialog box and hit "go to power".
5) Only once the rotation stage has finished moving is it safe to leave the INCREASE_POWER state.
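The wait-until-stable part of the procedure above is the kind of logic a guardian state would script. As a hypothetical sketch only (the power readback is an injected callable standing in for a real EPICS read, since no channel names or guardian code are given here):

```python
import time

def wait_for_stable_power(read_power, target_w, tol_w=0.5,
                          hold_s=5.0, poll_s=0.5, timeout_s=600.0):
    """Block until read_power() stays within tol_w of target_w for hold_s.

    read_power is an injected callable (standing in for an EPICS read).
    Returns True once the power has held near target for hold_s seconds,
    or False if timeout_s elapses first.
    """
    deadline = time.monotonic() + timeout_s
    stable_since = None
    while time.monotonic() < deadline:
        if abs(read_power() - target_w) <= tol_w:
            if stable_since is None:
                stable_since = time.monotonic()       # start the hold timer
            elif time.monotonic() - stable_since >= hold_s:
                return True                           # held long enough
        else:
            stable_since = None                       # excursion: reset timer
        time.sleep(poll_s)
    return False

if __name__ == "__main__":
    # Demonstration with a fake readback that ramps up to 10 W over 1 s.
    t0 = time.monotonic()
    ramp = lambda: min(10.0, 2.0 + 8.0 * (time.monotonic() - t0))
    print("stable:", wait_for_stable_power(ramp, 10.0, hold_s=1.0,
                                           poll_s=0.1, timeout_s=30.0))
```

Requiring the power to *hold* near target, rather than merely touch it once, matches the intent of waiting a few minutes to confirm the recycling gain stays stable.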
For the record, here is another similar example that Nutsinee and I experienced back in October (alog 22575 and comments). At that time, we concluded that it was due to suboptimal alignment in TMSY.
Opened FRS ticket
Fault Report 4186 - Trouble advancing past Guardian state ENGAGE_ASC_PART2
(This was done as "for-real" practice with R. Oram)
Attached are 7-day pitch, yaw, and sum trends for all active H1 optical levers.
Centering:
Glitching (via DetChar summary pages):
Update on the BS oplev. It appears the cessation of glitching was only temporary, as the last couple of days have seen the laser go wild. Will attempt a power adjustment during the next maintenance period (as discussed here). If that is unsuccessful, then I will swap back in the now re-stabilized laser SN 130-1 (which I just recently removed from this oplev and found to be functioning normally, just in need of a slight tweak to its TEC setpoint).