H1 has been getting more and more glitchy over the last 12 hours, and we're not sure why. Sheila, JeffK, Young-Min and I are trying to look into it, but we would very much appreciate Detchar help. If someone sees something that we should look into more closely, feel free to call the control room in addition to posting, so that we can get H1 back up and on its non-glitchy feet quickly.
Attached is a screenshot of the range from the last day, but we're most concerned about the last 2 locks.
During the most recent lock, the constant low range toward the end (19:45 - 20:00 UTC, 6 Jan 2017) appears to be from toggling the sensor correction of the X-end station ISI (see Jim's alog 33028 for details on why that was necessary), which was coincident with a broad band of extra glitchiness between 700 Hz and 1100 Hz on the DMT Omega plot. We'd like to look into this separately, but we'll prioritize it below the major glitchiness at lower frequencies that we've had the last 2 locks.
Some things that could be helpful:
Young-Min, Sheila, Jeff, Cheryl, Jim, Jenne
We had a lock stretch with large glitches and poor range from about 18:40 UTC to 20:09 UTC on Jan 6th. Jenne is writing about some of the other issues with this lock stretch, but there might be a useful clue for investigating the 1084 Hz glitches in this data.
The behavior of the 1084 Hz peak seemed normal until Jim Warner turned off the sensor correction at ETMX because of a problem with the BRS (alog 33028) at 19:44:49 UTC. The 1084 Hz peak then got much larger and became very non-stationary, and other non-stationary noise appeared at high frequencies (see the comparison between the red and green traces in the attached screenshot and the spectrogram on the summary page). This continued until 19:59:28 UTC, when Jim turned the sensor correction back on with the BRS damper on. We were planning to try an on/off test, but lost lock at 20:09 UTC for unknown reasons.
To check the times when sensor correction is on, the channel to look at is H1:ISI-ETMX_ST1_SENSCOR_GND_STS_X_FIR_GAIN, which is 0 when sensor correction is off and 1 when it is on. To check on BRS damping, the channel is H1:ISI-GND_BRS_ETMX_DAMPBIT; it was on from 19:40:12 UTC to 19:54:32 UTC.
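For anyone who wants to line these states up against glitch times, a minimal sketch (assuming gwpy and NDS access from a LIGO workstation; the channel names and times are the ones quoted above) might look like:

import numpy as np
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:ISI-ETMX_ST1_SENSCOR_GND_STS_X_FIR_GAIN',  # 1 = sensor correction on, 0 = off
    'H1:ISI-GND_BRS_ETMX_DAMPBIT',                 # BRS-X damper on/off bit
]
data = TimeSeriesDict.get(channels, 'Jan 6 2017 19:30', 'Jan 6 2017 20:10')

# print the GPS times at which each channel switches state
for name, series in data.items():
    state = (series.value > 0.5).astype(int)
    times = series.times.value                     # GPS seconds
    flips = np.flatnonzero(np.diff(state)) + 1
    print(name)
    for i in flips:
        print('  %.0f  ->  %s' % (times[i], 'ON' if state[i] else 'OFF'))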
Because of the other problems with this lock stretch (which could be scattered light), Cheryl is now doing an initial alignment.
Ideas we've been spitballing around the room, which may help guide DetChar studies:
- We've been getting alarms that the Y END VEA temperature is swinging around. A theory: temperature causes ETMY and TMSY to drift -- perhaps differentially -- and the global ASC control system is forced to follow it. As it follows slowly, it begins to steer the IFO into places where scattered light causes more problems, and hence glitchiness. A correlation study of temperature in the Y VEA with alignment and glitch rate would be sweet (a rough sketch of such a study is below). (Note Cheryl has already trended the position of the ETMs, and found that ETMY is moving around much more than ETMX.)
- Cheryl has also been moving around the IMs and has found that the IMC has moved substantially over the course of a few days. Another angle of attack would be alignment of the input optics vs. glitch rate, where again the idea is that if the input pointing to the IFO is moving around, then the global ASC system is forced to follow it, and perhaps there are bad corners of alignment space that are more susceptible to scattered light and/or non-stationarity.
- I'll state for the record that this has NOTHING to do with the calibration change (LHO aLOGs 33025 and 33026), which changes the response in a very stationary way.
- Jim's sensor correction ON/OFF (LHO aLOG 33031) and/or BRS damping ON/OFF study (LHO aLOG 33028) is another interesting avenue to explore.

Because of all of the above speculation, we're:
- Restoring the IM alignment to its "nominal" location from a Dec 8 2016 lock stretch early in O2.
- Running initial alignment.
- Planning more sensor correction ON/OFF studies when we get back up, if LLO is still not observing.

But yeah -- we'll take any and all help we can get. Feel free to hop on the LIGO Control Rooms TeamSpeak channel.
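Here's the rough sketch of the temperature-vs-alignment correlation study mentioned above (assuming gwpy/NDS access). The channel names are placeholders -- I haven't looked up the actual EY VEA temperature or ETMY alignment witness channels -- so substitute the real ones:

import numpy as np
from gwpy.timeseries import TimeSeriesDict

# placeholder channel names: swap in the real EY VEA temperature and ETMY
# alignment (e.g. optical lever) channels
chans = ['H1:EY_VEA_TEMPERATURE', 'H1:ETMY_PITCH_WITNESS']
data = TimeSeriesDict.get(chans, 'Jan 5 2017 20:00', 'Jan 6 2017 20:00')

# put both channels on the slower sample rate and compute a simple Pearson correlation
a, b = data[chans[0]], data[chans[1]]
rate = min(a.sample_rate.value, b.sample_rate.value)
a, b = a.resample(rate), b.resample(rate)
n = min(a.size, b.size)
r = np.corrcoef(a.value[:n], b.value[:n])[0, 1]
print('Pearson r = %.2f' % r)

The same kind of comparison could be repeated against an Omicron/Omega glitch-rate time series to get at the glitch-rate half of the question.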
Here are some hveto results for today:
Between 1-2048Hz - https://ldas-jobs.ligo-wa.caltech.edu/~laura.nuttall/detchar/hveto/20170106/
Between 10-1000Hz - https://ldas-jobs.ligo-wa.caltech.edu/~laura.nuttall/detchar/hveto/20170106_10_1000kHz/
We just had a repeat of my alog 32847, so Detchar may want to look more closely at the BRS-X damping state during lock stretches. The first attached trend shows the BRS velocity, angle, damp bit (which indicates when the damper switches on and off) and IMC-F. As before, I saw on the wall traces that IMC-F and an arm signal (ETMX L1 DRIVEALIGN L?) showed a sudden 120 sec period oscillation, so I immediately suspected the BRS-X damping. When I ramped the ETMX sensor correction off, the IMC and arm signals immediately settled down some, though you can see that IMC-F is noisier with sensor correction off at the end of the plot (from increased microseismic motion). I think that we may want to adjust some thresholds for BRS-X, but that carries some risks of its own. What is weird is that around 19:15 BRS-X showed a sudden increase in motion (second plot). I'm not sure what happened there; maybe some HVAC kicked off?
There are a couple of 'easy' options that may fix this: 1) add a notch in the sensor correction at the BRS-X resonance, or 2) have the operator check the BRS-X amplitude periodically and damp BRS-X (by adjusting thresholds) when not in 'Observe'. In 'Observe' mode we could keep the damping off. BRS-X is unlikely to ring up to large amplitudes over a short stretch unless it gets super windy.
Another fix would be to reduce the autocollimator non-linearity in the BRS-X, which will allow much larger amplitudes before damping is needed.
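To give a feel for option (1), here is a very rough sketch of a notch at the BRS-X resonance. The ~8.3 mHz frequency is inferred from the 120 s oscillation noted above; the 512 Hz sample rate and the Q are assumptions, and the real filter would of course be designed and installed in foton rather than scipy:

import numpy as np
from scipy import signal

fs = 512.0          # assumed sample rate of the sensor correction path
f0 = 1.0 / 120.0    # ~8.3 mHz, from the 120 s period oscillation
Q  = 5.0            # notch quality factor (assumption)

b, a = signal.iirnotch(f0 / (fs / 2.0), Q)

# check the depth of the notch at the resonance
w, h = signal.freqz(b, a, worN=[2.0 * np.pi * f0 / fs])
depth = 20.0 * np.log10(max(abs(h[0]), 1e-20))
print('gain at %.1f mHz: %.1f dB' % (f0 * 1000, depth))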
Starting CP3 fill. TC A error. TC B error. Fill aborted. Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 34 seconds. LLCV set back to 35.0% open.
Manually overfilled CP3 from the control room. The LLCV was set to 50% open for 33 seconds; the temperature at both thermocouples dropped to well below -90 °C, and the LLCV was returned to 15%.
After looking some more at the behavior of the thermocouples at CP3 and the amount of time that it took to overfill it, I changed the LLCV setting to 14% from 15%.
[Jeff K, Alex U, Aaron V] I restarted the primary and redundant GDS calibration pipelines at GPS second 1167764080. This restart implements a new filters file with the latest calibration model: see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33021 At about the same time, Jeff pushed the updated EPICS records associated with this model (see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33025 ), which has restored the kappas to their correct values.
J. Kissel, A. Viets

I've taken the opportunity during the IFO downtime to update the CAL-CS front-end calibration and associated EPICS records using the model described / verified last night (see LHO aLOG 33004). Since the changes to the DELTAL_RESIDUAL and DELTAL_CTRL paths were minor (having already taken care of compensating the change in the L2/L3 crossover on Tuesday; see LHO aLOG 32933), I installed the changes by hand instead of using the automated script to populate the front end.

The changes to the DELTAL_CTRL (actuator) path are:

Filter Bank                      Module   Name          Former Value   Current Value
H1:CAL-CS_DARM_ANALOG_ETMY_L1    FM3      "N_per_cnt"   8.164e-8       8.091e-8
H1:CAL-CS_DARM_ANALOG_ETMY_L2    FM3      "N_per_cnt"   6.841e-10      6.768e-10
H1:CAL-CS_DARM_ANALOG_ETMY_L3    FM3      "N_per_cnt"   4.389e-12      4.357e-12

where I've *replaced* the old values with the new; both sets are copied from the MCMC fit results in LHO aLOG 32989.

The changes in the DELTAL_RESIDUAL (sensing) path are in the H1:CAL-CS_DARM_ERR filter bank, in which I've switched to using FM7 & FM8, from FM2 & FM3:

                 Module   Name       Design String
Formerly used    FM2      O2SRCD2N   zpk([346.7;7.2231;-7.5587],[0.1;0.1;7000],1,"n")gain(5458.55)
                 FM3      O2Gain     gain(8.673e-7)
Currently used   FM7      O2B_D2N    zpk([360.0;6.7496;-7.0660],[0.1;0.1;7000],1,"n")gain(4768.4)
                 FM8      O2BGain    gain(9.191e-07)

The new design comes from the nlinfit results reported in LHO aLOG 33004.

For the updates to the EPICS records, the changes are too plentiful to quote directly, but I attach the log file of the function used to automatically push the changes, which is Runs/O2/H1/Scripts/CAL_EPICS/writeH1_CAL_EPICS.m (r4095, lc: r4094). Note that though the records were generated today, we've renamed the files before committing to the repository to match the date of the model, which is 20170103. All EPICS record changes have been accepted in SDF, and both filter files and snapshot files have been committed to the userapps repo.
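As a quick sanity check on the size of the actuator-path change (this is just arithmetic on the values quoted above, not part of the installation procedure), the new N_per_cnt values differ from the old by roughly a percent:

former  = {'L1': 8.164e-8, 'L2': 6.841e-10, 'L3': 4.389e-12}
current = {'L1': 8.091e-8, 'L2': 6.768e-10, 'L3': 4.357e-12}
for stage in ('L1', 'L2', 'L3'):
    pct = 100.0 * (current[stage] - former[stage]) / former[stage]
    print('ETMY %s N_per_cnt: %.3e -> %.3e  (%+.2f %%)' % (stage, former[stage], current[stage], pct))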
I have produced new DCS filters that incorporate the recent changes in the front-end calibration model, including the updated L2/L3 crossover. The filters can be found in the calibration SVN at this location:

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1DCS_1167436818.npz

For information on the changes in the calibration model, see these aLOGs:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33004
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32933
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32907

The filters were produced using this Matlab script in SVN revision 4095:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1167436818.m

The parameters files used (all in revision 4095) were:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/Common/params/IFOindepParams.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1params.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-03/H1params_2017-01-03.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1_TDparams_1167436818.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/D20170103_H1_CAL_EPICS_VALUES.m

Several plots are attached. The first four (png files) are spectrum comparisons between DCS, CALCS, and GDS at GPS time 1167559872 (Jan 04, 2017, 10:10 UTC). Kappas were applied in making the DCS and GDS spectra, both using the EPICS from the filters file. The filters used to produce the GDS spectrum were produced from the same parameters files (see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33021 ). The reason the GDS and DCS spectra do not agree may be that the new calibration model had not been pushed to the front-end calibration at the time the comparison was made. This comparison will be redone once the new model is pushed. The last pdf contains a time series of the kappas computed by DCS and CALCS. Note that they do not agree. This is expected, as the EPICS were not updated in the front end.
Here are updated plots from the first lock stretch after the new calibration model was pushed (Jan 6, 2017, 19 UTC). Note that DCS and GDS now agree as expected.
I have produced new GDS filters that incorporate the recent changes in the front-end calibration model, including the updated L2/L3 crossover. The filters can be found in the calibration SVN at this location:

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1GDS_1167602808.npz

For information on the changes in the calibration model, see these aLOGs:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33004
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32933
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32907

The filters were produced using this Matlab script in SVN revision 4095:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1167602808.m

The parameters files used (all in revision 4095) were:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/Common/params/IFOindepParams.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1params.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-03/H1params_2017-01-03.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1_TDparams_1167602808.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/D20170103_H1_CAL_EPICS_VALUES.m

Several plots are attached. The first two (png files) are spectrum comparisons between GDS and CALCS at GPS time 1167559872 (Jan 04, 2017, 10:10 UTC). Kappas were applied in making the GDS spectrum, using the EPICS from the filters file (the EPICS in the frames at this time are incorrect). The last plot contains a time series of the kappas computed by GDS and CALCS. Note that they do not agree. This is expected, as the EPICS were not updated in the front end.
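For anyone wanting to reproduce the flavor of these comparisons without the full pipeline, a minimal sketch (assuming gwpy and frame/NDS access) is below. It compares the GDS strain against the front-end CAL-DELTAL_EXTERNAL output scaled by the ~3995 m arm length; unlike the attached plots it applies no kappa corrections, so treat it as a rough cross-check only:

from gwpy.timeseries import TimeSeries

start = 1167559872              # Jan 04 2017 10:10 UTC, the comparison time above
end   = start + 600

gds   = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
calcs = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', start, end) / 3995.0  # metres -> strain

asd_gds   = gds.asd(fftlength=16, overlap=8)
asd_calcs = calcs.asd(fftlength=16, overlap=8)

plot = asd_gds.plot(label='GDS-CALIB_STRAIN')
ax = plot.gca()
ax.plot(asd_calcs, label='CAL-DELTAL_EXTERNAL / 3995 m')
ax.set_xlim(10, 2000)
ax.legend()
plot.savefig('gds_vs_calcs_asd.png')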
WP 6416 The lalapps_Makefakedata_v4 binary was rebuilt to correct a leap second issue. Rebuild procedure followed what was done at Livingston documented in FRS ticket 7014. Monit was used to stop psinject, the new binary moved into place, and Monit was used to restart psinject. Examination of running processes on h1hwinj showed the processes being stopped, and new processes being started after the restart. Perhaps someone from the injection group could take a look to see that all is working as it should?
TITLE: 01/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY:
I looked a little bit at this lockloss, and it seems that the way in which the earthquake broke the lock was by saturating PR3 M3 with a yaw drive due to low frequency motion (20 second period).
We should improve our offloading of PR3 drives to the top mass, which should be easy to do just by increasing the gain in the top mass filter banks.
WP 6418 While the IFO was unlocked due to an earthquake, I moved raw minute trend files on h1tw1 and reconfigured and restarted the NDS servers on h1nds0 and h1nds1 to read the renamed trend files. This is the start of the process to move the raw minute trend files to /frames/trend/minute_raw. This is routine maintenance that occurs every 6 months or so to avoid filling the file system where the raw minute trends are stored.
TITLE: 01/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
A very small earthquake showed up on the BLRMS at the same time as the lockloss, but nothing that should have knocked us out. Perhaps it was already a bit misaligned. Temp at EY is back to 68.0F.
Generic lockloss attached
Back to Observing at 13:54. I had to do an initial alignment because even though I could lock DRMI, it was very weak and I could not increase it no matter what I adjusted. IA showed a big BS adjustment, but after it ran through we are sitting at 74Mpc right now.
The range has been slowly decreasing since the start of the lock, but I could not figure out why. There are no obvious reasons for the range drop or lockloss that I can see. EY temp is back up to 67.6F from 67.0F at the beginning of my shift.
Generic lockloss template attached. It shows the ASC picking up about 20 sec before the lockloss, but I'm not sure what caused it.
Back to Observing 11:57
I had to move TMSY a bit to bring the green arm power up, but other than that it was pretty straightforward.
TITLE: 01/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT: Wind: 7mph Gusts, 5mph 5min avg Primary useism: 0.01 μm/s Secondary useism: 0.23 μm/s
QUICK SUMMARY: Ed brought it right back up for me after a lockloss that he couldn't explain. I checked and then ran a2l once we were all the way up. EY temp is still low ~67F. Everything else seems good, cruising at 70Mpc.
07:22UTC Lockloss
I set the Observation Mode to Environment "Seismic" because I think the tiny rise in the EQ band is what did it, at this point. The Y arm is kinda messed up, so I'm going to do an Initial Alignment, but I'm leaving the mode set as mentioned.
07:44 Begin Initial alignment