As this lock progressed, the 100 to 200 Hz region was improving and the range hit 80 Mpc right after 8:30 UTC. This is despite some huge noise in the 30 to 50 Hz region that came up at the same time. Around 8:49, the detector lost lock. The first attachment is the range, and the second shows the two parts of the spectrum going in opposite directions. It's not clear why the lock was lost, although the PI summary page shows something in the 3-8 Hz band growing exponentially (blue trace, third plot). Maybe it's this line at 4735 Hz (fourth plot)? It gets bigger and grows ugly sidebands as time goes on. I don't see it identified on the PI MEDM page though (I looked for 11649 and 28033 Hz as likely aliases). Edit: Actually, this could be the 10th order EY violin mode; see alog 19608. Is it possible something is going wrong with the damping and it's getting out of control? Or maybe it's nothing to worry about?
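For the record, here is the alias arithmetic behind those two numbers; a minimal sketch, assuming the line is observed at 4735 Hz and that the channels of interest are sampled at 16384 Hz or 32768 Hz:

# Minimal sketch of the alias arithmetic: which true frequencies would fold
# down onto an observed 4735 Hz line for a given sample rate.  Candidates are
# |k*fs +/- f_obs|; only values above Nyquist and below 40 kHz are kept.
f_obs = 4735.0

for fs in (16384.0, 32768.0):
    nyquist = fs / 2.0
    candidates = sorted(
        {int(k * fs + sign * f_obs)
         for k in (1, 2, 3)
         for sign in (-1, 1)
         if nyquist < k * fs + sign * f_obs < 40e3}
    )
    print(f"fs = {fs:.0f} Hz: {candidates}")

# fs = 16384 Hz gives 11649 Hz (among others); fs = 32768 Hz gives 28033 Hz --
# the two values I checked against the PI MEDM page.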
I was able to damp it with a 0.1 Hz wide Butterworth +100 dB filter and a gain of +0.001, and I was able to blow it up by flipping the sign of the gain. As Evan already mentioned in alog 19612, this line indeed belongs to ETMY. Terra mentioned that this is not a PI, given its non-exponential growth.
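For reference, a quick scipy sketch of a filter with that shape. This is just an offline illustration, not the foton filter in the damping path, and the 16384 Hz sample rate is my assumption:

# Rough offline sketch of a 0.1 Hz wide Butterworth bandpass around the
# 4735 Hz line with +100 dB of gain, as used for the damping test above.
import numpy as np
from scipy import signal

fs = 16384.0          # assumed sample rate of the damping path
f0, bw = 4735.0, 0.1  # line frequency and bandpass width from the log
gain_db = 100.0

# second-order sections keep such a narrow band numerically well behaved
sos = signal.butter(2, [f0 - bw / 2, f0 + bw / 2], btype="bandpass",
                    fs=fs, output="sos")

# check the response at the line and a few Hz away
w, h = signal.sosfreqz(sos, worN=[f0 - 5, f0, f0 + 5], fs=fs)
mag_db = 20 * np.log10(np.abs(h)) + gain_db
for f, m in zip(w, mag_db):
    print(f"{f:8.1f} Hz: {m:+7.1f} dB")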
Title: 09/27/2016, Evening Shift 23:00 – 07:00 (All times in UTC)
State of H1: Locked, commissioning
Commissioning:
23:00 Daniel and Keita had the IFO when I came in, doing HP ISS measurements.
01:00 When they left, I did initial alignment, then struggled with locking because several guardians needed to be restarted following the Beckhoff gateway (I think) work earlier. The OMC wouldn't lock because a switch at a rack had been flipped (High-Z something?). It has been locked since Stefan found and fixed that.
I did an upgrade of the following packages this morning:
The guardian and cdsutils packages are now using a new versioning scheme that is not tied to their SVN commit id. This was done in preparation for moving to git for upstream maintenance. The gpstime package is newly split out of cdsutils so that it may be more easily maintained and distributed.
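Since gpstime is now a standalone package, a quick usage sketch. This is from memory, so treat the exact function names as assumptions and check the package documentation:

# Rough usage sketch of the standalone gpstime package (API from memory --
# the function names here are assumptions, check the package docs).
import gpstime

now_gps = gpstime.gpsnow()                     # current GPS time as a float
t = gpstime.parse('2016-09-27 16:00:00 UTC')   # parse a date/time string
print(now_gps, t.gps())                        # .gps() gives the GPS seconds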
Notable changes in these releases:
- The DCPD whitening chassis was found to be switched to Low-Z - transitioned back to High-Z.
- We added notch filters for the 4.01 Hz and 5.01 Hz SRM dither lines into the SRCLFF path. They have a Q of 60 and are 40 dB deep. The high Q was picked to minimize the phase distortion - it is actually slightly too high for the line width (which is driven by coupling fluctuations). With this modification, the SRM alignment dither can be running in LOW_NOISE in case we need it. (A sketch of this notch shape follows the list below.)
- We also revived the pr2spotmove.py and pr3spotmove.py scripts (stored in /ligo/home/controls/sballmer/20160927/). They move the spot position on PR2 and PR3 respectively, leaving all other things equal. We were hoping to see a change in the auxiliary loop length noise - especially SRCL (they are higher than in O1 - see Sheila's attachment).
- First, we moved the PR2 spot position (PR3 by 10 urad in pitch and 5 urad in yaw). Next, we did the same for the PR3 spot position (10 urad on PRM). Unfortunately we did not see any change in the auxiliary loop noise.
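As mentioned in the SRCLFF item above, here is a minimal sketch of a notch with the quoted parameters. This is generic depth-limited notch algebra, not the actual foton design; in particular, taking the quoted Q as the pole (width-setting) Q is an assumption:

# Minimal sketch of a depth-limited notch:
#   H(s) = (s^2 + (w0/Qz) s + w0^2) / (s^2 + (w0/Qp) s + w0^2)
# The attenuation at w0 is Qp/Qz, so a 40 dB deep notch needs Qz = Qp * 10**(40/20).
import numpy as np
from scipy import signal

def notch(f0_hz, q_pole=60.0, depth_db=40.0):
    w0 = 2 * np.pi * f0_hz
    q_zero = q_pole * 10 ** (depth_db / 20.0)
    num = [1.0, w0 / q_zero, w0 ** 2]
    den = [1.0, w0 / q_pole, w0 ** 2]
    return signal.TransferFunction(num, den)

# Notches for the 4.01 Hz and 5.01 Hz SRM dither lines
for f0 in (4.01, 5.01):
    tf = notch(f0)
    w = 2 * np.pi * np.linspace(f0 - 0.5, f0 + 0.5, 20001)
    _, mag, phase = tf.bode(w)
    print(f"{f0} Hz notch: depth {mag.min():.1f} dB, "
          f"|phase| at +/-0.5 Hz: {max(abs(phase[0]), abs(phase[-1])):.2f} deg")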
Some additional notes:
Yes, there is a big 1 Hz comb, presumably from the SRM dither lines. The attached plots are a 30-minute spectrum from 7:30 UTC, a little after going into the undisturbed state. The second plot is zoomed in on a 10 Hz region to show the comb in more detail. There's some funny structure here that I'd like to understand. I'll also come back to this in a little while, if the lock holds, to see if anything changes.
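For anyone who wants to make a similar plot, a sketch with gwpy; the channel and time span below are placeholders, not the exact ones used for the attachments:

# Sketch of how to make a comparable 30-minute spectrum and zoom in on a 10 Hz
# band.  The channel name and times are placeholders.
from gwpy.timeseries import TimeSeries

chan = 'H1:GDS-CALIB_STRAIN'                  # placeholder channel
start = 'Sep 28 2016 07:30:00 UTC'            # placeholder start time
end = 'Sep 28 2016 08:00:00 UTC'

data = TimeSeries.get(chan, start, end)
asd = data.asd(fftlength=64, overlap=32)      # 1/64 Hz resolution resolves a 1 Hz comb

plot = asd.plot()
ax = plot.gca()
ax.set_xlim(40, 50)                           # zoom to a 10 Hz band to show the comb
plot.savefig('comb_zoom.png')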
Thank you for catching this. Yes this would have been an inadvertent switch while working on the chassis. I thought I checked it. Must have looked before we were done working on it. SORRY.
The plot shows that the current vibration levels measured by the 6 accelerometers on the PSL table are about the same as on Boxing Day during O1. I also checked several other O1 times, and the vibration levels were very similar to the Boxing Day levels. With all of the recent work on the chiller and its lines, the recent levels, in contrast, have been changing quite a bit. This log: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=29764, shows a level that was higher than O1 and, most interestingly, a level on July 9 that was about a factor of 2 below O1. The vibration level is very sensitive to the flow rate, and Jason thinks that the HPO crystal circuit flow rate might have been lower by a couple of l/m on July 9th than it is now (this circuit has the highest flow rate and is the predominant source of vibration, but doesn’t have a flow meter).
For O1 we set the flow rate as low as suggested by Ollie, but the actual flow rate turned out to have been higher than we thought, because of a poorly calibrated flow sensor. Thus it would not be surprising at all if we could beat the O1 vibration levels and still not be that close to the minimum flow limit - I think we should reduce the flow by at least a couple of l/m.
Another source of vibration is the flow fluctuations associated with bubbles in the system. Jeff bled quite a bit of air at the filters under the PSL table today. However, a measurement taken a couple of hours later after the fans were shut down did not show improvement in the table vibration. But I think that there is still a lot of air in the lines and repeated bleeding, along with reducing the air intake, might give us significant vibration improvement.
The observation that the vibration levels are about the same as in O1, and yet the peaks from the top mount on the periscope are about twice as large on IM4 trans diodes, suggests that the resonances of the mount are no longer as well tuned into a valley in the periscope peaks. So I think we could also get some improvement by re-tuning the resonances.
And it also seems that the jitter coupling is larger. Sheila thinks not by much more than a factor of two, but it seems worth expanding on Sheila's piezo mirror injections with some table shaking injections.
Robert S., Jason O., Jeff B.
I restarted the calibration pipeline at Hanford at GPS time 1159058357. Based on tests, the current latency is ~5-10 seconds. The filters file used can be found in the calibration SVN: aligocalibration/trunk/Runs/PreER10/GDSFilters/H1GDS_1158989379.npz
See this aLOG for information on the filters: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=30013
The command line used to run the pipeline was:
gstlal_compute_strain --data-source=lvshm --shared-memory-partition=$LIGOSMPART --filters-file=$filter_file --ifo=H1 --frame-duration=4 --frames-per-file=1 --write-to-shm-partition=$HOFTSMPART --compression-scheme=6 --compression-level=3 --control-sample-rate=4096 --expected-fcc=341.0 --partial-calibration
Note: the redundant pipeline on h1dmt2 was started at GPS time 1159062755, a bit after the primary pipeline.
IO Chassis separate power for timing slave boards WP6193
Fil, Richard, Jim:
h1lsc0, h1iscex and h1iscey were powered down to re-install the separate DC power supplies for the IO Chassis timing slave assembly.
PSL-ISS model WP6185:
Daniel, Keita, Dave:
new h1psliss model was installed. DAQ was restarted. Only slow channels were added to the frame.
Frame Writer 10GE SFP-Fiber restoration WP6189
Jim:
Following the experiment of swapping h1fw0 and h1fw1's 10GE SFPs, switch ports, and fiber-optic cables to see if fw0's instability was hardware related, we undid this experiment and the systems are back to their original configuration.
Deactivating internal EPICS gateways, configuring EDCU and Guardian to run without them WP6188:
Jamie, Jonathan, Dave, Jim:
We turned off the EPICS gateway h1slow-h1fe and restarted the DAQ. The EDCU did not connect to any Beckhoff channels. Jonathan made a change to /etc/init.d/daqd_dc0 to define the channel access environment variables, which got the EDCU to run without this gateway present. We then turned off the gateways h1aux-h1fe and h1cds-h1fe and did a final DAQ restart.
Jamie did the same on the Guardian system. We now have 6 EPICS gateways running (h0ve-cds, h1ve-h1fe, h1slow-cds, h1aux-cds, h0ve-cds, h1fe-cds).
h1oaf0 swap first ADC card in IO Chassis WP6191:
Nutsinee, Betsy, Jim, Dave:
Prior to powering down the IO Chassis, we powered down the 16-bit DAC AI chassis which drives the TCS chillers. Nutsinee and Betsy were on the Mech Room Mezzanine and noted that the chiller did not trip; it just started driving the temperature downwards as expected. We kept the AI chassis powered down while the water lines were being flushed.
We opened the h1oaf0 IO Chassis and swapped the first ADC with its right-hand-side (as viewed from the front) neighbor. This is an experiment to see if the DAC errors are related to ADC errors in this chassis.
Guardian new core code
Jamie:
new guardian core code installed today.
h1hwsex cloned from h1hwsey WP6162
Jonathan, Carlos:
h1hwsey's boot disk was downgraded to a smaller HDD, and then a clone of this was made for h1hwsex. Both systems are now running Ubuntu 14.04 and are identical.
New GDS code release WP6181
Jim:
new GDS code release was installed.
tconvert leap seconds data file update WP6177
Jim:
In preparation for the end of year addition of a leap second to UTC, the tconvert data file was updated.
Keita, Daniel
We took another look at the intensity noise at 50 W out of the IMC. The attached plot shows the relative intensity noise both in-loop and out-of-loop. The brown curve depicts the dark noise as measured by alog 30001. The in-loop curves ERR1 and RIN_INNER now agree fairly well. Between 20 Hz and 300 Hz the two curves are ADC noise limited. There is some additional noise in RIN_INNER below 20 Hz which is not understood. The amber line depicts the sum of the shot and the dark noise multiplied by √2. This is the expected noise level of the outer loop sensor, if we are gain limited. The outer loop sensor shows a fair amount of excess noise below 30 Hz. The red curve is the intensity noise measured by the first loop. It is identical to the control signal of the second loop.
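For completeness, a back-of-the-envelope version of that expected level, as a minimal sketch. The photocurrent and dark-noise numbers are placeholders rather than the measured values, and taking the "sum" as a quadrature sum is our assumption:

# Back-of-the-envelope version of the "sqrt(2) x (shot + dark)" level quoted
# above.  The photocurrent and dark-noise values are made-up placeholders, and
# the "sum" is taken as a quadrature sum of the two ASDs (an assumption).
import numpy as np

e = 1.602e-19            # electron charge [C]
i_dc = 10e-3             # assumed DC photocurrent on one ISS PD [A]
dark_rin = 1e-9          # assumed dark-noise-equivalent RIN [1/sqrt(Hz)]

shot_rin = np.sqrt(2 * e / i_dc)                 # shot-noise-limited RIN
single_pd = np.sqrt(shot_rin**2 + dark_rin**2)   # one sensor, shot + dark
expected_outer = np.sqrt(2) * single_pd          # gain-limited outer-loop level

print(f"shot RIN      : {shot_rin:.2e} /rtHz")
print(f"expected outer: {expected_outer:.2e} /rtHz")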
I opened the DBB shutter for the HPO to make some measurements. After 30 seconds or so, the DBB moved to "interlock" mode. Resetting the interlock by going to "stand by" and then to "manual" immediately brings the system back to "interlock" mode.
This is not triggered by the oscillation monitor (the so-called "software interlock" in T0900576), as the oscillation monitor output (H1:PSL-DBB_INTERLOCK_LP_OUT) never goes above threshold (currently set to 1000). It should be the so-called "hardware interlock". However, the PSL documentation is convoluted enough that I have a hard time finding what closes the shutter when there's a "hardware" interlock trigger: is it the DBB frontend, or is there a parallel hardware circuit that closes the shutter regardless of the DBB frontend?
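For whoever picks this up next, the software-interlock check above boils down to something like this pyepics sketch; the channel and the 1000 threshold are as quoted above:

# Quick check of the software interlock: oscillation monitor output vs. the
# threshold (currently 1000).
from epics import caget

THRESHOLD = 1000.0   # current oscillation-monitor threshold, per this entry

lp_out = caget('H1:PSL-DBB_INTERLOCK_LP_OUT')
print(f"oscillation monitor output = {lp_out}")
if lp_out is not None and lp_out > THRESHOLD:
    print("software interlock would trip")
else:
    print("below threshold -- this trip was not the software interlock")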
Anyway, according to Jason this has been a problem for a while, and then it went away on the day of the power outage.
Filed FRS #6330. I asked Fil to take a look at the AA and AI chassis associated with the DBB during the next maintenance window (10/4/2016). If those check out the problem is likely in the DBB control box that lives in the PSL rack in the LVEA.
3pm local Manually overfilled CP3 today (one day early) to test the redundant thermocouples at the exhaust (TE202A & TE202B) after rewiring at the terminal. Fill took 50 sec. with 1/2 turn open on the bypass LLCV. Left the bypass exhaust valve open. TCs were both wired backward. Also rewired the CP4 exhaust TCs but still need to test (heat gun first, then manual LN2 overfill). The plan is to use TC readback coupled with exhaust pressure to automate CP3 fills. NOTE: while I was at CP4 I happened to catch an event where the exhaust pressure rose and the pump level fluctuated a lot with a drop in LLCV value. This seems to be an anomaly and could have been a delayed reaction from the Norco rep opening Dewar valves this morning during the annual inspection.
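To make the automation plan concrete, here's a rough sketch of the fill-termination logic I have in mind. Every channel name, threshold, and timing below is a hypothetical placeholder, not something that exists in the vacuum controls yet:

# Rough sketch of the intended CP3 auto-fill logic: open the bypass LLCV, then
# close it once the exhaust thermocouples see LN2 (sharp temperature drop) or
# the exhaust pressure rises, with a hard timeout as a backstop.  All channel
# names, thresholds, and timings are hypothetical placeholders.
import time
from epics import caget, caput

TC_COLD_C = -50.0        # exhaust TC reading indicating LN2 at the exhaust (placeholder)
PRESSURE_LIMIT = 2.0     # exhaust pressure rise limit (placeholder units)
TIMEOUT_S = 120          # never leave the bypass open longer than this

caput('HVE-CP3:BYPASS_LLCV_OPEN', 1)          # hypothetical channel
t0 = time.time()
try:
    while time.time() - t0 < TIMEOUT_S:
        tc_a = caget('HVE-CP3:TE202A_DEGC')   # hypothetical channels
        tc_b = caget('HVE-CP3:TE202B_DEGC')
        pressure = caget('HVE-CP3:EXHAUST_PRESSURE')
        if min(tc_a, tc_b) < TC_COLD_C or pressure > PRESSURE_LIMIT:
            break                             # LN2 reached the exhaust: done
        time.sleep(1)
finally:
    caput('HVE-CP3:BYPASS_LLCV_OPEN', 0)      # always close the bypass valve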
The "MEDM (screen shots)" link on the lhocds.ligo-wa.caltech.edu web page has been updated to include suspension screens.
intentional move:
unintentional move due to poor IM drive diagonalization:
change on ISS2 qpd:
this change may have affected the ISS2 pd centering as well, so recentering may be beneficial
Betsy, Jason, Nutsinee, Alastair (on phone)
Procedure (written based on Alastair's experience at Livingston; somewhat useful): T1600050
Today we successfully flushed both TCS chillers. First we switched off the keys at the controller boxes. Then we switched off all the power supplies on the mezzanine (two TCS power supplies and an RF oscillator; the AOM power supply was already off since the AOMs were taken out of the water system and not being used). Dave was going to turn off the OAF AI chassis anyway, so we did a little test to see if turning off the AI chassis would trip the chillers right away. The result was that the chillers were slowly driven to low temperature; they did not trip instantaneously.
Then we continued -- once Dave confirmed that the AI chassis had been turned off and the result was clear, we turned off the chillers, unplugged the power and I/O cables, removed the chillers from the water systems (using the quick connects), then pulled the chillers away from the wall (which has a power outlet; we covered it with tape just in case). We had issues trying to drain the chillers following the instructions: opening and closing the drain plug did not let water through, so we had to unscrew and loosen the drain "pipe" itself, then unscrew the drain plug and plug in hoses to drain the water into a bucket. None of this was written in the instructions.
Then we noticed a lot of green "stuff" sitting in the water container inside the chiller. The Y chiller was worse than the X chiller. We took some samples and tried our best to wipe the container clean. That red spot didn't come off, by the way.
Then we reconnected the outlet pipes but left the inlet pipes disconnected. The inlet pipes had their quick connects removed at one end, which pointed into the bucket. We refilled the chillers with clean lab water, plugged the power cables back in, and started flushing. We didn't plug in the I/O cables at this point since the AI chassis was still down and there was no need to communicate with the front end; plus they would just get in the way.
Each chiller was flushed with 10 gallons of water. The flushed water looked clean (no green tint, no floating particulates). We put the quick connects back on the ends of the inlet hoses and reconnected them to the chillers. Once we put in new filters (which we rinsed with lab water), the filters started turning green right away. Once we made sure nothing leaked, we recovered the TCS.
The drain pipe now has an extra fitting that allows a drain hose to fit over it for easy draining, if we ever have to do this again.
Walked in to an H1 which was locked at NLN (Nominal Low Noise). After that, spent a good chunk of the day focused on Maintenance; for the most part it was completed by 2:30. Once we got the PSL back (it had tripped), High Power measurements were first on the Commissioning agenda.
Day's Activities:
Fielded a couple of Remote Access requests.
Both servers h1hwsex and h1hwsey are now running Ubuntu 14.04.
Work Permit 6193
The 12V power for the timing slave boards was moved to separate power supplies (see list below). Front end computers and IO chassis were power cycled.
IO Chassis LSC (CER)
IO Chassis ISC (EX)
IO Chassis ISC (EY)
F. Clara, D. Baker, J. Batch
The laser tripped at 21:28:53 UTC (14:28:53 PDT); according to the status screen it was the power meter flow that tripped the system. This was likely a leftover from the work we did earlier today. The system came back up without issue and is now fully up and running.
(M. Pirello)
Per Richard's Work Permit #6092, I reprogrammed both CPS fanouts in the LVEA and the ones at the X and Y end stations. Programming was successful and the heartbeat LEDs are now off.
S1400610 EY
S1400612 EX
S1400599 LVEA
S1400613 LVEA