TITLE: 01/07 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 65.8398Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Locked for the majority of the shift other than an EQ lockloss and a commissioning lockloss.
LOG: See previous aLogs for details.
4.7M Cuajinicuilapa, Mexico
Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No
Magnitude (according to Terramon, USGS, SEISMON): 4.7, 4.7, NA
Location 17km SW of Cuajinicuilapa, Mexico; LAT: 16.4, LON: -98.5
Starting time of event (ie. when BLRMS started to increase on DMT on the wall): 6:35 UTC
Lock status? Locked
EQ reported by Terramon BEFORE it actually arrived? Not sure, I didn't notice.
Back to Observing after the EQ rang down.
Added 300 mL H2O to Xtal chiller. Diode chiller is OK. Filters appear clean.
5.7M Port Hardy, Canada
Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No
Magnitude (according to Terramon, USGS, SEISMON): 5.5, 5.7, NA
Location 175km WSW of Port Hardy, Canada; LAT: 50.2, LON: -129.8
Starting time of event (ie. when BLRMS started to increase on DMT on the wall): 3:18 UTC
Lock status? Unlocked
EQ reported by Terramon BEFORE it actually arrived? No, lockloss and BLRMS reported simultaneously. Terramon reported later.
5.5 EQ Port Hardy Canada
After IA, we are back to Observing. No issues were encountered during locking after the ETMY chassis power cycles.
[Sheila, JeffK, Jenne, Vern, AndyL]
In answer to our call for help (alog 33030), Andy looked at the noisemons of the ETMY L2 coil driver, and found that it looked suspicious, similar to how it was looking in mid-Dec. See Andy's alog 32730 on finding the mid-Dec glitches relating to EY L2, other alogs on Monday Dec 19th on the swap and later swap back to original PUM driver, and Robert's alog 32885 affirming that it could be a humidity-related issue.
Anyhow, Vern, Sheila, and I just went down to the Yend station after a lockloss, and power cycled the L2 AI chassis and the coil driver. Travis is now bringing us back to lock to see if this helped.
I am somewhat concerned that it won't have helped though, if the actual answer is that the humidity in the building is too low. Robert suggested I look at the humidity now, and indeed it is quite low. Attached is a 90 day trend of the humidity in both of the end stations. The other time that it dips low is the day we had trouble in mid-Dec with glitchiness, and currently it's lower than it was that day. It looks like perhaps the sensor can't measure below 10% relative humidity? Anyhow, it's not likely to improve on its own, since it is supposed to stay quite cold. Robert suggested asking Bubba and John to look into turning on humidifiers that we may have at the end stations, but that will likely wait until Monday.
TITLE: 01/07 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 9mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.52 μm/s
QUICK SUMMARY: Purposely lost lock at 0:27 UTC when Sheila turned on the boost filter detailed in WP 6419. Opportunistically, Vern, Sheila, and Jenne went to EY to reboot an AI chassis in hopes of helping with the 1080Hz issues described in aLog 33030. Currently back to IA and locking attempts.
Jeff, Jenne, Sheila, Daniel, Young-Min
Earlier today Jim turned off ETMX sensor correction because of a problem with the BRS. This caused an increase in the motion of ETMX at low frequencies, which will cause an increase in the frequency fluctuations of the laser through the CARM loop. The OMC length loop has to follow the laser frequency. Our current thinking is that the OMC length locking residual becomes too large when the sensor correction is off, so that the bilinear coupling of OMC length to DARM adds a lot of noise. We redid this test from 22:40 UTC to 22:50 UTC; in the attached screenshot, the reference traces are from the time when sensor correction was off and the red traces are from times when sensor correction was on. Young-Min did BruCo scans for the sensor correction on and off times, and saw broad coherence with OMC-LSC_I.
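To build intuition for the bilinear coupling argument above, here is a rough toy model (not site code; all rates, frequencies, and amplitudes are made up for illustration) showing how a low-frequency OMC length residual multiplying the fixed 1084 Hz dither line smears the line into sidebands in a DARM-like signal, with the noise growing as the residual grows:

    import numpy as np
    from scipy.signal import welch

    fs = 16384                          # illustrative sample rate [Hz]
    t = np.arange(0, 64, 1.0 / fs)

    dither = np.sin(2 * np.pi * 1084.0 * t)             # OMC length dither line
    residual_on = 1e-3 * np.sin(2 * np.pi * 0.15 * t)   # small residual (sensor correction on)
    residual_off = 1e-2 * np.sin(2 * np.pi * 0.15 * t)  # 10x larger residual (sensor correction off)

    # Bilinear coupling: residual x dither puts sidebands around 1084 Hz,
    # with power scaling with the size of the length residual.
    for label, res in [('on', residual_on), ('off', residual_off)]:
        f, p = welch(res * dither, fs=fs, nperseg=16 * fs)
        band = (f > 1080) & (f < 1088)
        print('sensor correction %-3s: peak sideband ASD %.2e' % (label, np.sqrt(p[band].max())))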
In the past I tried to estimate noise in DARM due to OMC length noise: 30510. If you look closely at the 3rd attachment to that alog, you will see that there is a feature around 1084Hz in the estimated OMC length spectrum. This is probably where the peak in our spectrum which has been causing bothersome glitches comes from. You can also see in the 5th attachment that this feature is predicted to be about a factor of 10 below DARM, so the coupling must be underestimated somehow. New realization: We reduced the amplitude of the dither line on October 11 to reduce the acoustic coupling at HAM6 (alog 30380), which coincides pretty well with the appearance of these glitches on the summary pages. The measurements I used for the estimates in alog 30510 were all taken before this change.
A second reason to suspect that the coupling is underestimated in alog 30510 is that the same model of OMC length noise would have suggested that the 16Hz comb on the dither line would not have appeared in DARM, but we know that it did: alog 25703 and comments. Understanding the OMC length coupling seems important because we have seen that we can add broadband noise to DARM with OMC length excitations.
According to the model of the OMC loop, we have room to add a boost to the OMC length loop, to see if this helps with the feature at 1084Hz (we tried this once and it broke the lock; we should remeasure the loop before transitioning to DC readout to try again). First I would like to try reverting the change to the OMC length dither amplitude, but we will hold off on doing that for now.
Interesting find. Another test one could try would be to switch all four BSCs to 45 mHz blends + sensor correction, when wind speeds are low. This will give maximum suppression at the microseism, but will increase lower frequency motion and be more wind-sensitive.
Here are the BruCo scans for the time (19:44:49 UTC) when the sensor correction was off:
https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/ER10-O2A/H1/Jan06/H1-OMC-DCPD-1136144706-600/
For comparison, the BruCo scans for the good time (19:30:00 UTC) when the sensor correction was on are at https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/ER10-O2A/H1/Jan06/H1-OMC-DCPD-1136143817-600/
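As a lightweight cross-check of what the BruCo pages show, something like the following gwpy snippet could be used to compare the OMC-LSC/DARM coherence between the two stretches. The channel names (H1:GDS-CALIB_STRAIN, H1:OMC-LSC_I_OUT_DQ) and the data-access method are assumptions here, not taken from the BruCo configuration:

    from gwpy.time import to_gps
    from gwpy.timeseries import TimeSeries

    def omc_darm_coherence(start_utc, duration=600, fftlength=8):
        # Fetch DARM and the OMC length error signal, then compute their coherence.
        start = to_gps(start_utc)
        darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + duration)
        omc = TimeSeries.get('H1:OMC-LSC_I_OUT_DQ', start, start + duration)
        return darm.coherence(omc, fftlength=fftlength)

    coh_off = omc_darm_coherence('2017-01-06 19:44:49')  # sensor correction off
    coh_on = omc_darm_coherence('2017-01-06 19:30:00')   # sensor correction on
    print(coh_off.crop(1000, 1200).max(), coh_on.crop(1000, 1200).max())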
TITLE: 01/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65.2739Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY:
H1 has been getting more and more glitchy over the last 12 hours, and we're not sure why. Sheila, JeffK, Young-Min and I are trying to look into it, but we would very much appreciate Detchar help. If someone sees something that we should look into more closely, feel free to call the control room in addition to posting, so that we can get H1 back up and on its non-glitchy feet quickly.
Attached is a screenshot of the range from the last day, but we're most concerned about the last 2 locks.
During the most recent lock, the constant low range toward the end (19:45 - 20:00 UTC 6Jan2017) appears to be from the sensor correction toggling of the Xend station ISI (see Jim's alog 33028 for details on why that was necessary), which was coincident with a broad band of extra glitchiness on the DMT Omega plot between 700Hz-1100Hz. We'd like to look into this separately, but we'll prioritize it below the major glitchiness at lower frequencies that we've had the last 2 locks.
Some things that could be helpful:
Young-Min, Sheila, Jeff, Cheryl, Jim, Jenne
We had a lock stretch with large glitches and poor range from around 18:40 UTC to 20:09 UTC on Jan 6th. Jenne is writing about some of the other issues with this lock stretch, but there might be a useful clue for investigating the 1084 Hz glitches in this data.
The behavior of the 1084Hz peak seemed normal until Jim Warner turned off the sensor correction at ETMX because of a problem with the BRS (alog 33028) at 19:44:49 UTC. The 1084Hz peak got much larger and became very non-stationary, and other non-stationary noise appeared at high frequencies (see the comparison between red and green traces in the attached screenshot and the spectrogram on the summary page). This continued until 19:59:28 when Jim turned the sensor correction back on with the BRS damper on. We were planning to try an on/off test, but lost lock at 20:09 UTC for unknown reasons.
To check the times when sensor correction is on, the channel to look at is H1:ISI-ETMX_ST1_SENSCOR_GND_STS_X_FIR_GAIN, which is 0 when sensor correction is off and 1 when it is on. To check on BRS damping, the channel is H1:ISI-GND_BRS_ETMX_DAMPBIT; it was on from 19:40:12 UTC to 19:54:32 UTC.
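For reference, a minimal gwpy sketch (assuming NDS2 or frame access from an LDAS machine) for pulling these two state channels and finding when they flipped during the stretch in question:

    from gwpy.timeseries import TimeSeriesDict

    channels = [
        'H1:ISI-ETMX_ST1_SENSCOR_GND_STS_X_FIR_GAIN',  # 1 = sensor correction on, 0 = off
        'H1:ISI-GND_BRS_ETMX_DAMPBIT',                 # 1 = BRS-X damper on
    ]
    data = TimeSeriesDict.get(channels, '2017-01-06 19:30', '2017-01-06 20:10')

    for name, ts in data.items():
        # Print the GPS times at which the state changes value
        flips = ts.times[1:][ts.value[1:] != ts.value[:-1]]
        print(name, list(flips.value))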
Because of the other problems with this lock stretch (which could be scattered light) Cheryl is now doing an initial alignment.
Ideas we've been spitballing around the room, which may help guide DetChar studies:
- We've been getting alarms that the Y END VEA temperature is swinging around. A theory: temperature causes ETMY and TMSY to drift -- perhaps differentially -- and the global ASC control system is forced to follow it. As it follows slowly, it begins to steer the IFO into places where scattered light causes more problems, and hence glitchiness. A correlation study of temperature in the Y VEA with alignment and glitch rate would be sweet. (Note Cheryl has already trended the position of the ETMs, and found that ETMY is moving around much more than ETMX.)
- Cheryl has also been moving around the IMs and has found that the IMC has moved substantially over the course of a few days. Another angle of attack would be alignment of the input optics vs. glitch rate, where again the idea is that if the input pointing to the IFO is moving around, then the global ASC system is forced to follow it, and perhaps there are bad corners of alignment space that are more susceptible to scattered light and/or non-stationarity.
- I'll state for the record that this has NOTHING to do with the calibration change (LHO aLOGs 33025 and 33026), which changes the response in a very stationary way.
- Jim's sensor correction ON/OFF (LHO aLOG 33031) and/or BRS damping ON/OFF study (LHO aLOG 33028) is another interesting avenue to explore.
Because of all of the above speculation, we're:
- Restoring the IM alignment to its "nominal" location from a Dec 8 2016 lock stretch early in O1.
- Running initial alignment.
- Planning more sensor correction ON/OFF studies when we get back up, if LLO is still not observing.
But yeah -- we'll take any and all help we can get. Feel free to hop on the LIGO Control Rooms TeamSpeak channel.
Here are some hveto results for today:
Between 1-2048Hz - https://ldas-jobs.ligo-wa.caltech.edu/~laura.nuttall/detchar/hveto/20170106/
Between 10-1000Hz - https://ldas-jobs.ligo-wa.caltech.edu/~laura.nuttall/detchar/hveto/20170106_10_1000kHz/
We just had a repeat of my alog 32847, so Detchar may want to look more closely at the BRS-X damping state during lock stretches. The first attached trend shows the BRS velocity, angle, damp bit (which indicates when the damper switches on and off) and IMC-F. As before, I saw on the wall traces that IMC-F and an arm signal (ETMX L1 DRIVEALIGN L?) showed a sudden 120 sec period oscillation, so I immediately suspected the BRS-X damping. When I ramped the ETMX sensor correction off, the IMC and arm signals immediately settled down some, though you can see that IMC-F is noisier with sensor correction off at the end of the plot (from increased microseismic motion). I think that we may want to adjust some thresholds for BRS-X, but that carries some risks of its own. What is weird is that around 19:15 BRS-X showed a sudden increase in motion (second plot). I'm not sure what happened there, maybe some HVAC kicked off?
There are a couple of 'easy' options that may fix this: 1) have a notch in the sensor correction at the BRS-X resonance, or 2) have the operator check the BRS-X amplitude periodically and damp BRS-X (by adjusting thresholds) when not in 'Observe'. In 'Observe' mode we could keep the damping off. BRS-X is unlikely to ring up to large amplitudes over a short stretch unless it gets super windy.
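To put a rough number on option 1, here is a scipy sketch of what such a notch might look like, assuming the ~120 s period oscillation seen in IMC-F sits at the BRS-X resonance and using an illustrative sample rate; the real filter would of course be designed in foton against the actual sensor-correction path:

    import numpy as np
    from scipy import signal

    fs = 64.0            # illustrative sample rate [Hz], not the real ISI rate
    f0 = 1.0 / 120.0     # ~8.3 mHz, from the observed 120 s oscillation period
    Q = 2.0              # wide, shallow notch to limit phase loss at the microseism

    b, a = signal.iirnotch(f0, Q, fs=fs)

    # Check what the notch does at the resonance and at the secondary microseism (~0.15 Hz)
    freqs = np.array([f0, 0.15])
    _, h = signal.freqz(b, a, worN=freqs, fs=fs)
    print('|H| at %.4f Hz: %.2e' % (f0, abs(h[0])))
    print('phase at 0.15 Hz: %.2f deg' % np.degrees(np.angle(h[1])))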
Another fix would be to reduce the autocollimator non-linearity in the BRS-X, which will allow much larger amplitudes before damping is needed.
Starting CP3 fill. TC A error. TC B error. Fill aborted. Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 34 seconds. LLCV set back to 35.0% open.
Manually over filled CP3 from the control room, LLCV was set to 50% open for 33 seconds, temperature at both thermocouples dropped down to well below -90 °C, LLCV was returned to 15%.
After looking some more at the behavior of the thermocouples at CP3 and the amount of time that it took to over fill it, I changed the LLCV setting to 14% from 15%.
I have produced new DCS filters that incorporate the recent changes in the front-end calibration model, including the updated L2/L3 crossover. The filters can be found in the calibration SVN at this location:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1DCS_1167436818.npz
For information on the changes in the calibration model, see these aLOGs:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33004
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32933
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32907
The filters were produced using this Matlab script in SVN revision 4095:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1167436818.m
The parameter files used (all in revision 4095) were:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/Common/params/IFOindepParams.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1params.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-03/H1params_2017-01-03.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1_TDparams_1167436818.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/D20170103_H1_CAL_EPICS_VALUES.m
Several plots are attached. The first four (png files) are spectrum comparisons between DCS, CALCS, and GDS at GPS time 1167559872 (Jan 04, 2017, 10:10 UTC). Kappas were applied in making the DCS and GDS spectra, both using the EPICS from the filters file. The filters used to produce the GDS spectrum were produced from the same parameter files (see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33021 ). The reason the GDS and DCS spectra do not agree may be that the new calibration model had not yet been pushed to the front-end calibration at the time the comparison was made. This comparison will be redone once the new model is pushed. The last pdf contains a time series of the kappas computed by DCS and CALCS. Note that they do not agree. This is expected, as the EPICS were not updated in the front end.
Here are updated plots from the first lock stretch after the new calibration model was pushed (Jan 6, 2017, 19 UTC). Note that DCS and GDS now agree as expected.
TITLE: 01/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY:
I looked a little bit at this lockloss, and it seems that the way in which the earthquake broke the lock was by saturating PR3 M3 with a yaw drive due to low frequency motion (20 second period).
We should improve our offloading of PR3 drives to the top mass, which should be easy to do just by increasing the gain in the top mass filter banks.
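A hedged sketch of what that change might look like from a control room workstation, using pyepics; the PR3 M1 lock filter-bank gain channel names below are guesses for illustration only and would need to be checked against the real MEDM screens before touching anything:

    from epics import caget, caput

    for dof in ('P', 'Y'):
        chan = 'H1:SUS-PR3_M1_LOCK_%s_GAIN' % dof   # hypothetical channel name
        current = caget(chan)
        print(chan, 'is currently', current)
        # For example, doubling the gain would push more of the drive to the top mass:
        # caput(chan, 2 * current)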
Updated to 5.7M on USGS.