Posted below are the October temperature & humidity data from the 3IFO-Des cabinet in the LVEA and the 2 Dry Boxes in the VPW. Data for 3IFO-Des and DB1 looks relatively normal. The DB4 data for October was corrupt and has not been posted.
IFO has been locked at Low Noise in Observing mode for the past 8.5 hours. The range is currently 81 Mpc. Environmental conditions are good. Microseism is still a bit high, but is slowly coming down. All appears normal at this time.
Attached is 40 days of trends from the first Pressure Sensor on the Pump Stations. These all look fine. There is a drop on PS1 & PS4 from when I swapped some channels around for a study. Hmmm, I don't understand why the power outage fault seen about 2/3 of the way through the data doesn't show on PS2 & 3 in the corner...? When EndY came back after the power outage, there is a ~2 psi shift...hmm, could be that the AOFF for that channel isn't in the database. Hey though, don't ya just love those daily and weekly glitches!?
Bottom line: none of the Pump Stations have changed their output, indicating the pumps are okay.
Our monitoring of the omega scans for the top BBH/BNS triggers has turned up some interesting glitches from October 30 and 31 at the Hanford detector.

First off, we have the following scan vetoed by the UPV, demonstrating solid UPV performance:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct30/BBH/GW/1130235508/
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct30/BBH/1130235508/#H1:ASC-Y_TR_A_NSUM_OUT_DQ
A few interesting things to note here: the signal in the vetoed channel is clearly visible in the CALIB_STRAIN scan, although the signal is moved to a higher frequency. Secondly, the cause of the loud glitch signal at lower frequency is unclear, although there is some potential connection to other auxiliary channels.

Next we have another loud glitch signal showing up in the CALIB_STRAIN channel, but the veto seems to have failed here:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct31/BBH/GW/1130327173/
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct31/BBH/1130327173
This time is indeed vetoed, however the veto channel was not strong enough to be mapped (SNR threshold set at 6). Even so, there is clear correlation between the CALIB_STRAIN signal and a number of auxiliary channels. It's worth noting that there are loud signals across a good many channels (and a couple of different types of detectors) at this time. This scan is abnormally noisy in general.

Here we have a glitch in CALIB_STRAIN with a strong connection to a particular auxiliary channel, missed by the UPV:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct31/BBH/GW/1130326931/
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct31/BBH/1130326931/#H1:ASC-REFL_A_RF45_Q_PIT_OUT_DQ
As can be seen in the uniform signal across the auxiliary channels, there was a very loud, intermittent event here that was detected by a great number of different auxiliary detectors.

There is an interesting glitch here with an unclear cause:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct31/BNS/GW/1130314219/

And finally, the big glitch that we have been chasing for the past few days was seen again:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct30/BBH/GW/1130226154/
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Oct30/BBH/1130226154/#H1:PEM-EY_MAG_EBAY_SUSRACK_X_DQ
We know this glitch is correlated with the following channel, however we believe that channel is simply reacting to the same thing that is causing the glitch, and that the true cause lies elsewhere. We will continue to update as we try to isolate the root issue.

For further viewing of the daily omega scans, see the calendar page here:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/ResultsCalender.html
Four attachments here comparing Magnitude & Phase for the L4Cs and the IPS from 8 April and 30 October (before & after grouting).
The before grouting picture is always in the top panel.
Summary--On the L4Cs, much nicer, cleaner, and quieter between 20 & 50 Hz. There are differences above these frequencies, but it is hard to say if they are better or worse. For the L4C phase comparison, better coherence and smoother phase; that phase gets to -180 sooner (60 vs 70 Hz), though. For the IPS, it's subtle, but a horizontal zero mode might be softer and has moved down from 70 to 60 Hz. Otherwise, eh. For the phase on the IPS, it gets to -180 about 10 Hz sooner, 40 vs 50 Hz.
Several operators have had trouble with some SDF diffs left over after running a2l recently. This is my fault - I think it happened when I was trying to hand-merge and reconcile our version of the a2l script with the svn version. It should be fixed now.
At this point, I think we should continue running a2l on Tuesdays in the measure-only mode that we've been using, so that we do have some long-term monitoring of our spot positions, but it is no longer important to run a2l every lock stretch. I have removed the sticky note from the ops workstation, and from the ops StickyNote wiki page.
PSL Status:
SysStat: All Green, except VB program offline & LRA out of range
Output power: 31.9 W
Frontend Watch: Green
HPO Watch: Red
PMC:
Locked: 12 days, 23 hours, 40 minutes
Reflected power: 1.8 W
Transmitted power: 23.7 W
Total Power: 25.6 W
ISS:
Diffracted power: 8.4%
Last saturation event: 0 days, 8 hours, 49 minutes
FSS:
Locked: 0 days, 8 hours, 49 minutes
Trans PD: 1.467 V
SEI: Hugh – Finished HAM1 data collection over the weekend
CDS: Several staff out this week. Limited activities
PSL: All OK – Nothing to report
VAC: Kyle – Working at X28. Kyle – Expects failure of Annulus Ion pumps throughout the site, which will be replaced as they fail.
FMC: Bubba – The repair work on the beam tube enclosure continues.
Planned Tuesday Maintenance Work:
Bubba – Grease supply fans at CS, Mids, and Ends (WP #5586)
Bubba – Remove grout forms from HAM1
Hugh – Power cycle ITM-Y ISI coil driver (WP #5587)
Jodi – Forklifting storage load from 1-Ton into Mid-X (WP #5589)
Rick – Photograph ETM surface looking for contamination (WP #5585)
Jason – Inspect, photograph, and document Bio switch states for all H1 OpLevs (WP #5588)
Richard – Continue installing PEM temperature sensors
Checking Beckhoff cabling
Solar panel installation on X-Arm (maybe)
Keita – Measure dark offset of ISS Second Loop
Check whitening gain for Baffle Diode at CS, and Ends
LN2 deliveries scheduled for Mid-X and Mid-Y
O1 days 42 - 45
model restarts logged for Sun 01/Nov/2015 No restarts reported
model restarts logged for Sat 31/Oct/2015 No restarts reported
model restarts logged for Fri 30/Oct/2015 No restarts reported
model restarts logged for Thu 29/Oct/2015 No restarts reported
Title: 11/02/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: 16:00 (08:00) The IFO is locked at NOMINAL_LOW_NOISE, in Observing mode
Outgoing Operator: Jim
Quick Summary: IFO locked at NOMINAL_LOW_NOISE, 22.3 W, 78 Mpc. Wind is a light breeze (4-7 mph); seismic is looking better. Although the microseism is still a bit high (0.5 to 0.3 um/s), the trend is downward.
Title: 11/2 Owl Shift 8:00-16:00 UTC
State of H1: Observing
Shift Summary: I finished the initial alignment just in time for an earthquake in Alaska to hit. It was small, however, and didn't seem to affect locking afterwards. A few Guardian and ISS hiccups slowed up locking, but it seems the winds have died down and the microseism is slowly receding. All night, low frequency oscillations seemed to be affecting IMC-F and maybe EY tidal? The ASC loops are also unhappy, but the lock holds.
Activity log:
8:45 Finish up initial alignment, an earthquake hits but eventually things quiet enough to lock
9:30 Locking hangs on DC readout; an INIT takes ISC_LOCK to DOWN
10:00 Locking possibly held up by a hung ISS loop
10:30 Low noise finally; I run A2L, into Observe ~11:00
Attempting to get the IFO back into Observing, but A2L tripped me up with a page or two of SDF diffs, which I reverted. Now ETMY shows violin mode damping diffs. I'm accepting them, but posting a screenshot of the changes so someone who knows the whys can check.
These filters should be turned on (alog22816). If you're unable to drop out of Observing mode to turn on the gains, please keep an eye on the 1008.45 Hz line and make sure it's not ringing up.
To clarify, we have been reverting these gains. Guardian doesn't turn them on (Nutsinee/Evan: should we have Guardian just do it?), but we have been turning them on by reverting the gains to the non-zero values that SDF has.
Next time we are about to go to Observe, we should put these gains back to their non-zero values, and accept them in SDF. NB: They should be zero while we're locking, and can be turned to their non-zero values any time after the BOUNCE_VIOLIN_MODE_DAMPING state.
Jenne, Nutsinee
We just put this in the Guardian code. Since the set points of 0 have been accepted, SDF will show the difference next time. Please ACCEPT the new EPICS values for the ETMY MODE3 and MODE9 gains.
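For reference, a rough sketch of what such a Guardian addition could look like, assuming the standard GuardState/ezca user-code interface; the channel names and gain values below are placeholders for illustration, not the production settings (those should come from SDF):

    # Sketch only: restore the ETMY violin mode damping gains once it is safe
    # (i.e. after BOUNCE_VIOLIN_MODE_DAMPING). Channel names and gains are
    # placeholders, not the real SDF values.
    from guardian import GuardState  # Guardian normally provides GuardState and ezca in the module namespace

    MODE_GAINS = {
        'SUS-ETMY_L2_DAMP_MODE3_GAIN': 0.01,   # placeholder value
        'SUS-ETMY_L2_DAMP_MODE9_GAIN': 0.01,   # placeholder value
    }

    class TURN_ON_ETMY_VIOLIN_DAMPING(GuardState):
        def main(self):
            # Gains are held at zero during lock acquisition; set them to their
            # nominal non-zero values here so the settings match what SDF has accepted.
            for chan, gain in MODE_GAINS.items():
                ezca[chan] = gain
            return True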
For the second time tonight, I've lost lock because ISC_LOCK reached some state and decided it was time for a break. Each time, locking was finally going okay, with no obvious issues, then Guardian reaches some state and refuses to move on. No amount of loading, pausing, exec'ing, or requesting higher states and then switching to Manual and going back to the current state will get it to move on. Only an INIT works, which then causes ISC_LOCK to go to DOWN.
The most recent "freeze" may have been due to ISS second loop engagement issues. I don't remember what state I was in when I lost lock last time, but I just had to go through manual engagement of the ISS second loop, and I was at first worried I was stuck again. The first freeze was on DC Readout, so it definitely wasn't an ISS issue.
Could we get some kind of error message if the second loop fails to engage after a couple minutes? I only figured this out because the log said: "2015-11-02T10:51:13.61264 ISC_LOCK new target: ENGAGE_ISS_2ND_LOOP", and I remembered that the ISS 2nd loop had issues.
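For what it's worth, one rough way such a warning could look in the Guardian code, assuming the usual timer and notify interfaces; the two-minute threshold and the engagement check below are made-up placeholders, not the real logic in ISC_LOCK:

    # Sketch only: warn the operator if the ISS second loop hasn't engaged
    # within a couple of minutes, instead of waiting silently.
    from guardian import GuardState  # notify() and self.timer come from Guardian user code

    def second_loop_is_engaged():
        # Placeholder: a real check would read an ISS second loop readback via ezca.
        return False

    class ENGAGE_ISS_2ND_LOOP(GuardState):
        def main(self):
            # start a watchdog timer when we begin trying to engage the loop
            self.timer['iss_engage'] = 120  # seconds; threshold is a guess

        def run(self):
            if second_loop_is_engaged():
                return True
            if self.timer['iss_engage']:
                # surface a message on the Guardian screen for the operator
                notify('ISS second loop has not engaged after ~2 minutes; may need manual engagement')
            return False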
INIT should be safe if you go to Manual, select INIT, then select the state that you had been at, and *then* go back to Auto. If you're in Auto while in INIT, it will go to DOWN.
Title: 11/1 Eve Shift 23:00-7:00 UTC (16:00-24:00 PST). All times in UTC.
State of H1: Initial alignment
Shift Summary: I have been having trouble getting past ALS since the lockloss earlier. I dug through the aLog and found only one mention of the problem I seem to be having (see aLog 22543 for a description of the identical problem). Following the logic of that post, I ran a dither alignment on TMSx and started initial alignment. Handing off to Jim with IA partially complete.
Incoming operator: Jim
Activity log:
Covered in earlier aLogs and summary.
Lost lock at 6:53 UTC. At first glance, it appears the ASC loops started running away and a bunch of SUSes saturated.