Back to relocked in Observing mode after lockloss due to runaway PI Modes 27 & 28. PI Mode 27, on its second ring-up, did not respond to my attempts to suppress it; changes in phase or gain seemed to have no effect. The modes are generally behaving right now, but are still high enough to need close scrutiny. Environmental conditions are generally good, though there is currently heavy fog over the site and a chance of freezing rain later in the evening.
Ops Shift Transition: 01/19/2017, Evening Shift 00:00 – 08:00 UTC (16:00 – 00:00 PT)
These plots are from a project I'm just starting, but I thought they were interesting. Both plots show, from top to bottom, LLO (blue) & LHO (red) max wind speed minute trends (mph), microseism (nm/s) band means, and range (Mpc) means. The first plot is 30 days during Nov/Dec of O1; the second plot is for all of the Dec 2016 O2 data. The x-axis is supposed to be in minutes, but I suspect something is screwed up there. The first plot was made in MATLAB; the second was done with Python because the local nds Python client will do gap handling, which is not available in GWdata. Stare at them long enough and you might be able to make some guesses about what kind of environment each IFO can tolerate. The other non-image files are the code I used to make the plots.
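For anyone reproducing this without the nds client's gap handler, a minimal hand-rolled sketch of the same idea — break the plotted line at data gaps by inserting NaNs so matplotlib doesn't draw across missing stretches — might look like the following. The cadence and sample data here are made up, not taken from the actual trend files.

```python
import numpy as np

def insert_gaps(times, values, cadence=60.0):
    """Insert a NaN sample wherever consecutive timestamps jump by more
    than one cadence, so a line plot breaks at the gap instead of
    drawing a straight segment across the missing data."""
    out_t, out_v = [], []
    for i in range(len(times)):
        if i > 0 and times[i] - times[i - 1] > cadence:
            out_t.append(times[i - 1] + cadence)  # placeholder sample in the gap
            out_v.append(np.nan)
        out_t.append(times[i])
        out_v.append(values[i])
    return np.array(out_t), np.array(out_v)

# Minute trend with a dropout between t=120 s and t=420 s (made-up data)
t = np.array([0.0, 60.0, 120.0, 420.0, 480.0])
v = np.array([1.0, 1.1, 1.2, 1.3, 1.4])
tg, vg = insert_gaps(t, v)
```

Plotting `tg` vs `vg` then leaves a visible break over the dropout rather than a misleading straight line.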
We wanted to make the OAF BLRMS ranges more useful, so we added notch filters to the OAF BLRMS channels based on their respective power spectra, which were dominated by lines at certain frequencies. In SDF, we unmonitored 105 OAF-RANGE channels so that we could make changes online without knocking the interferometer out of observing mode. The OAF range calculation is not related to SENSMON range.
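As an illustration of the kind of line notching involved, here is a hedged sketch using scipy; the line frequency, Q, and sample rate below are assumptions for illustration, not the actual production OAF BLRMS filter settings.

```python
import numpy as np
from scipy import signal

# Hypothetical line frequency and sample rate; the real OAF BLRMS
# notches target whatever lines dominate each band's power spectrum.
fs = 256.0      # Hz, sample rate (assumed)
f_line = 60.0   # Hz, line to notch out (assumed)
b, a = signal.iirnotch(f_line, Q=30.0, fs=fs)

# Check the response: deep attenuation at the line, near-unity nearby
freqs, h = signal.freqz(b, a, worN=[50.0, 60.0, 70.0], fs=fs)
mag = np.abs(h)
```

A narrow, high-Q notch like this removes the line's contribution to the band RMS while leaving the rest of the band essentially untouched.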
TITLE: 01/19 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: A 15 hr lock, coincident with LLO, was ruined by an earthquake. Waiting for it to settle. I switched SEI_CONF to its large-earthquake setting.
LOG:
6.8M Kirakira, Solomon Islands
Was it reported by Terramon, USGS, SEISMON? Yes, Yes, NA
Magnitude (according to Terramon, USGS, SEISMON): 6.8, 6.8, NA
Location: 71km W of Kirakira, Solomon Islands; LAT: 10.351°S 161.280°E
Starting time of event (ie. when BLRMS started to increase on DMT on the wall): ~23:10 UTC
Lock status? Broke lock
EQ reported by Terramon BEFORE it actually arrived? I was at the ops meeting, so I did not see.
Terramon seems to be reporting aftershocks (some of which may even be the same quake) that are not reported by USGS.
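One way to chase this down would be to match the two event lists by arrival time. The sketch below flags Terramon entries that have no USGS counterpart within a matching window; the `unmatched_events` helper, the window, and all event times are hypothetical illustrations, not real catalog data.

```python
def unmatched_events(terramon, usgs, window=120.0):
    """Return Terramon event times (seconds) with no USGS event
    within `window` seconds -- candidate aftershocks or duplicate
    reports of the same quake. Window choice is an assumption."""
    return [t for t in terramon
            if not any(abs(t - u) <= window for u in usgs)]

terramon_times = [1000.0, 1050.0, 5000.0]  # two entries near the same quake (made up)
usgs_times = [1010.0]
print(unmatched_events(terramon_times, usgs_times))  # [5000.0]
```

Here both Terramon entries near t=1000 match the single USGS event, while the t=5000 entry has no USGS counterpart.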
J. Kissel

For future reference, I attach the differences between
- the O1 H1SUSETMY L2/L3 crossover design (critiqued in G1501372), which had been accidentally running for the first month of O2 (see LHO aLOG 32540), and
- the improved design (created by E. Goetz in July 2016; see LHO aLOG 28746), now used since the restart of the run post-winter break (see LHO aLOG 32933).

Recall the goals of the improvement:
- To high-pass the TST/L3 drive at low frequency more aggressively
- To low-pass the PUM/L2 drive at high frequency in a more simple, sensible way, to get isolation from the TST/L3 stage down to much lower frequency
- To improve the interaction between PUM/L2 and TST/L3 such that the super-actuator transfer function was not a wiggly mess

The .pdf attachments show the old vs. new design in the form of actuator authority, in test-mass displacement in [m] per LSC input [ct] (where the hybrid offloaded / distributed hierarchy filters have been converted to entirely distributed for sanity's sake). In the new design, the extremely aggressive, high-Q, many-order elliptic 900 Hz low-pass filter in the PUM stage has been removed and replaced with a much softer-Q low-pass at lower frequency. Similarly, on the low-frequency end, an additional high-pass has been added to the L3 stage filtering.

The .png attachments show the requested DAC outputs for all globally driven stages of H1 SUS ETMY, for the new design (taken on 2017-01-19) and the old design (taken on 2016-12-21). The spectra were gathered under similar environmental conditions (0.5 - 0.7 [um/s] RMS microseism, ~5 [mph] winds), and when the IFO was performing roughly the same (at 65-70 [Mpc] BNS range).
Although we can't see everything from the data taken in the past (the 2016-12-21, old-design data), we can see that in the new (2017-01-19) data the PUM/L2 stage requested actuation is consistently ~2 orders of magnitude below the TST/L3 stage, where it used to be only ~1 order of magnitude above a few hundred [Hz]. We can also see that, although the difference is not huge, the TST/L3 DAC request is now slightly below the PUM/L2 request, as one might intuitively hope from a reasonable design. In summary: all goals have been achieved, the design is now sensible, and the balance of requested output from each stage is better.
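The filter-design change described above — swapping an aggressive, high-order elliptic low-pass for a softer, low-Q design at lower frequency — can be illustrated with scipy. The orders, ripple/stopband specs, corner frequencies, and sample rate below are illustrative stand-ins, not the actual H1SUSETMY filters.

```python
import numpy as np
from scipy import signal

fs = 16384.0  # Hz, a typical front-end model rate (assumed)

# Old-style: aggressive, high-Q, high-order elliptic low-pass at 900 Hz
# (illustrative parameters only)
b_old, a_old = signal.ellip(8, 1.0, 80.0, 900.0, btype='low', fs=fs)

# New-style: a much softer, low-order, low-Q low-pass at lower frequency
b_new, a_new = signal.butter(2, 300.0, btype='low', fs=fs)

f = np.linspace(10.0, 900.0, 500)
_, h_old = signal.freqz(b_old, a_old, worN=f, fs=fs)
_, h_new = signal.freqz(b_new, a_new, worN=f, fs=fs)

# The elliptic design ripples across its passband (a "wiggly" response);
# the soft design rolls off monotonically through the crossover region.
ripple_old = np.max(np.abs(h_old)) - np.min(np.abs(h_old[f < 800.0]))
monotonic_new = np.all(np.diff(np.abs(h_new)) < 1e-12)
```

The tradeoff is the usual one: the elliptic filter buys sharp attenuation at the cost of passband ripple and high-Q features, while the softer design gives a smooth, well-behaved super-actuator transfer function.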
I have had a few TCSY chiller flow alarms today, but all of them recovered almost immediately. The one that just happened at 21:52 UTC lasted for about 5 min. The flow stayed below 2.5 gpm, so out of curiosity I checked the laser temperature to see if it was getting hotter from the lack of flow. The temp had risen by about 0.15 C.
I got Jason on the case, and the flow rate went back up as soon as I finished talking to him. He checked the chiller just in case, and only noticed that there may have been a few more air bubbles in the reservoir than on Tuesday; everything else looked fine.
I had a closer look at the change on the temperature channel. The temperature of the laser changes in roughly 0.1 s, simultaneously with the flow rate drop. This is far too fast to be a real thermal change; it can only be electrical in nature.
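That reasoning can be sketched in code: thermal processes in a chiller loop slew slowly, so any sample-to-sample jump faster than some assumed thermal rate bound must be electrical. The channel rate and the rate bound below are assumptions, not measured properties of the TCS system.

```python
import numpy as np

def flag_nonthermal_steps(temps, dt, max_thermal_rate=0.05):
    """Flag intervals whose temperature change rate (degC/s) exceeds
    an assumed bound on what a real thermal process could produce.
    `max_thermal_rate` is a guessed threshold, not a measured one."""
    rates = np.abs(np.diff(temps)) / dt
    return rates > max_thermal_rate

# Assumed 16 Hz channel; a 0.15 degC jump within one ~0.06 s sample
temps = np.array([20.00, 20.00, 20.15, 20.15])
flags = flag_nonthermal_steps(temps, dt=1.0 / 16.0)
```

A 0.15 degC step in a single sample corresponds to ~2.4 degC/s, orders of magnitude above any plausible thermal slew, so it gets flagged as electrical.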
Went over the completed maintenance items on the whiteboard.
IFO Update: Weather is impacting operator coverage and noise. Noise investigations continue; Robert has more vetting to do. We will have to run with the lower range until we get at least 5 days of coincident data.
Apollo will be back in the control room on Tuesday.
Went through work permits.
Locked for 12+ hours at ~60 Mpc. The 4.7k mode hasn't risen too high. a2l should be run soon if we get single-IFO time. A handful of dust alarms at EY, all in the >0.3 u size bin. Finally above freezing here, which might melt some of the ice.
The full report can be found on the detchar wiki: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20170116

Below I summarise the main highlights of this shift:
• The overall duty cycle through 3 days was ~10 hours (19.3%).
• On the 16th, maintenance continued from the previous day for 38 hours, diagnosing issues related to possible rubbing of ITMX and the X-arm compensation plate, possibly due to the cold outside vs. warm inside.
• On the 17th, the duty cycle was 7 hours (29.7%). The range was 50 - 65 Mpc in the first lock, then declined drastically in the second lock for unknown reasons.
• Preventive maintenance ran from 16:30 UTC on Tuesday to ~20:00 UTC on Wednesday due to the storm. There was an outstanding winter weather alert in effect until mid-day Wednesday, so most of the people went home.
• On the 18th, the duty cycle was 3 hours (28.4%). The range was about 60 Mpc. Only the front-end calibration data is shown in the range plot.
• The scattering tool shows noise from H1:SUS-PRM_M1_DAMP_L_IN1_DQ and H1:SUS-SRM_M1_DAMP_L_IN1_DQ during the lock, although the glitch rate and Omicron glitchgrams don't seem to suggest anything bad going on in DARM.
• Software saturations: H1:ASC-POP_X_PZT_PIT and H1:OAF-DARM_AUDIO seem to be saturating continuously over the lock stretch from Tuesday the 17th to Wednesday the 18th.
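For reference, the per-day duty cycles quoted above are just summed lock-segment durations over the span. A minimal sketch, with made-up segment times:

```python
def duty_cycle(segments, span_seconds):
    """Duty cycle (%) from a list of (start, stop) lock segments, in seconds."""
    locked = sum(stop - start for start, stop in segments)
    return 100.0 * locked / span_seconds

# Example: 7 hours of lock in a 24-hour day, roughly the figure
# quoted for the 17th (segment boundaries here are invented)
day = 24 * 3600.0
segs = [(0.0, 5 * 3600.0), (10 * 3600.0, 12 * 3600.0)]  # 5 h + 2 h
print(round(duty_cycle(segs, day), 1))  # 29.2
```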
I have been looking into the software saturations issues reported on the DQ shift for channels H1:ASC-POP_X_PZT_PIT and H1:OAF-DARM_AUDIO.
While the report says that they were saturating during the few days covered by the DQ shift, I have noticed that these two channels have always been usual suspects, so a closer look was required.
First of all, notice that software saturations are reported by comparing the OUTPUT of the filter banks with their LIMIT setting, such that saturation is declared when OUTPUT >= LIMIT * 0.99, to account for precision problems.
Taking this into account, let's look at the MEDM screens of the filter banks associated with both channels above; see the attached files.
* H1:ASC-POP_X_PZT_PIT: see attached file named: MEDM_FilterBank_channel_ASC_POP_X_PZT_PIT.png
In this case we observe that the LIMIT is set at 20000 (circled in red). The signal going through this LIMIT can have values of a few thousand during normal operation, so all should be OK regarding saturation. However, immediately after this limit there is an OFFSET/BIAS of 16000 (circled in blue), which can easily bring the OUTPUT to values bigger than the LIMIT. This does not mean that there is any real issue, because the LIMIT applies before the offset.
The problem is with the way the software saturations are being generated. Most if not all filter banks have an OUTMON after the LIMIT (circled in yellow); if we were to compare that channel with the LIMIT, instead of the OUTPUT, then this issue would be resolved.
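To make the failure mode concrete, here is a minimal sketch of a filter-bank output path with a post-LIMIT offset, comparing the current OUTPUT-based saturation test with an OUTMON-based one. All numbers are hypothetical, loosely modeled on the POP_X_PZT_PIT case above.

```python
def saturated(value, limit, margin=0.99):
    """Software-saturation test as described above:
    |value| >= LIMIT * 0.99 (margin accounts for precision issues)."""
    return abs(value) >= limit * margin

# Hypothetical numbers: LIMIT = 20000, post-limit offset = 16000,
# and a modest in-band signal of a few thousand counts.
limit, offset, sig = 20000.0, 16000.0, 5000.0

outmon = min(max(sig, -limit), limit)  # OUTMON: signal just after the LIMIT
output = outmon + offset               # OUTPUT: offset applied after the LIMIT

print(saturated(output, limit))  # True  -- false positive caused by the offset
print(saturated(outmon, limit))  # False -- an OUTMON-based check stays clean
```

The limiter never actually clipped anything here; only the choice of test point makes it look saturated.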
* H1:OAF-DARM_AUDIO: (see attached file named: MEDM_FilterBank_channel_OAF_DARM_AUDIO)
This is a more standard type of filter bank, where there is no BIAS or OFFSET after the LIMIT; however, with the current setup the signal is always +-1. Notice, though, that this channel is only used to listen to DARM from the control room, so it does not feed anywhere else in the system and is not dangerous. If need be, it could be turned off.
The software saturations were seen continuously over the lock stretches of H1:ASC-POP_X_PZT_PIT and H1:OAF-DARM_AUDIO, at ~11:30 - 13:43 UTC on Tuesday the 17th and at 20:30 - 23:31 UTC on Wednesday the 18th. (More specifically, H1:OAF-DARM_AUDIO showed the saturation over half of the lock stretch at 20:30 - 23:31 UTC on Wednesday the 18th.) Please see the figures linked below.
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/software-saturations/day/20170117/H1-ASC_POP_X_PZT_PIT-1168646418-86400.png
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/software-saturations/day/20170117/H1-OAF_DARM_AUDIO-1168646418-86400.png
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/software-saturations/day/20170118/H1-ASC_POP_X_PZT_PIT-1168732818-86400.png
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/software-saturations/day/20170118/H1-OAF_DARM_AUDIO-1168732818-86400.png
The DQ shift report about this can be found on the detchar wiki: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20170116
TITLE: 01/19 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 59.8838Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.50 μm/s
QUICK SUMMARY: Road update: Main roads are mostly clear of ice, but still a bit slippery. In my neighborhood, one person after another got stuck. I myself was stuck in the parking lot here for a few minutes while trying to back into a spot.
Been locked for 8 hours, range around 60Mpc.
Looks like ETMY 4735 Violin mode may be ringing up again.
TITLE: 01/19 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 58.582Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Locked in Observing for the entire shift.
LOG: Nothing to report.
Perhaps the clearest trend of reference cavity transmission versus room temperature that I've seen thus far.
TITLE: 01/19 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 60.6086Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.63 μm/s
QUICK SUMMARY: Jeff left me with a freshly locked, Observing IFO. Roads into the site were terrifying, especially Route 10 and the site entrance road.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 344 seconds. LLCV set back to 15.0% open. Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 1094 seconds. TC A did not register fill. LLCV set back to 33.0% open.
Raised CP4 LLCV from 33% to 34% open. Left CP3 as is. May increase after tomorrow's fill.