Used the latest version of the VEA Sweep Checklist (T1500386), and we look good.
WP 6457, ECR E1700032, FRS 7099, DCC T1700025
This model update takes the binary coil status data out of the WD tripping code. If this data goes bad for 60 seconds, a red MEDM light labeled OVERTEMP will show on the ISI platform overview screen. If the coil driver actually does experience an overtemperature condition, the hardware itself will trip, and of course the ISI will trip as the actuators let go and the seismometers rail, you know, cats & dogs living together, mass hysteria. So there is little risk that anything too bad would result from this change.
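Just to spell out the intent, here is a minimal Python sketch of the timeout behaviour described above. This is illustrative only; the real logic lives in the front-end model, and the 60 second figure is the only number taken from the change itself.

OVERTEMP_TIMEOUT = 60.0  # seconds of bad coilmon status before the warning latches

class CoilmonOvertempIndicator:
    """Illustrative sketch: light a warning after a sustained bad status,
    instead of tripping the watchdog immediately."""

    def __init__(self, timeout=OVERTEMP_TIMEOUT):
        self.timeout = timeout
        self.bad_since = None            # time the status first went bad, or None

    def update(self, status_ok, now):
        """Return True if the OVERTEMP light should be lit at time 'now'."""
        if status_ok:
            self.bad_since = None        # status recovered; clear the timer
            return False
        if self.bad_since is None:
            self.bad_since = now         # status just went bad; start the timer
        return (now - self.bad_since) >= self.timeout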
The change was made to prevent unnecessary tripping of the ISI when the binary signal from the coil driver went erroneously bad. This happened at LHO in Oct 2015 (alog 22977), and the model was changed to allow a 10 sec wait before tripping the watchdog. In May 2016 this was extended to the HAMs. In Sept 2016, the 10 second delay proved insufficient for the LHO BS ISI. It then became a problem for LLO earlier this month. So that is a short, likely incomplete, history of the problem leading to this model change.
LHO CDS wiki (aka 'daqwiki') has been moved outside of the CDS network to a new server at https://cdswiki.ligo-wa.caltech.edu/wiki/. Redirections have been put in place on the CDS web server to ease the migration.
WP 5855
Removed the temporary Beckhoff rotation stage setup by HAM1/PSL enclosure. The following items were disconnected:
1. Power supply
2. Rotation stage interface board
3. Beckhoff terminals
4. Two network cables
The two cables going into the PSL enclosure (DB15 and Conec 2W2) still need to be pulled out, but this requires someone inside the enclosure to guide/feed them out.
At the beginning of the maintenance window, at about 16:27:20 UTC, I increased the OMC LSC dither amplitude by a factor of two while reducing the OMC LSC-I gain by the same factor, so the overall OMC LSC control bandwidth doesn't change. I wanted to repeat the low/high comparison but the IFO broke lock (not because of this test).
Anyway, just for this one test, I don't see any difference.
The changes were reverted.
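For the record, the scaling argument behind this (my paraphrase, not part of the test itself): in a dither lock the demodulated error-signal slope is proportional to the dither amplitude, so doubling the dither while halving the LSC-I gain should leave the product, and hence the loop bandwidth, unchanged. A trivial check with arbitrary numbers:

# Arbitrary illustrative numbers -- just showing the product is unchanged.
dither_amp, lsc_i_gain = 1.0, 1.0
loop_gain_before = dither_amp * lsc_i_gain
loop_gain_after = (2 * dither_amp) * (lsc_i_gain / 2)
assert loop_gain_after == loop_gain_before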
TJ and I went to EndY to address the potential mis-centering of the STS colocated on the BRS Table. Krishna reported the spectra of the STS did not look ideal and suggested centering may be needed.
We checked the mass position voltage readings before doing any centering, and they were well out of spec: U, V, and W were -3.7, -1.8, and -12.0 V. Less than 2.5 V is essential, and under 2 V is even better. We then shorted the resonant-frequency jumper to make it a 1 sec instrument and hit the centering button. This is all done on the monitor port of the field satellite box, about 10 feet from the BRS where the STS is located.
Our repeated attempts and observations over the next couple of hours suggest the instrument masses would just go to the opposite rail and stay there, especially the W mass. I thought we waited long enough, as we gave each centering attempt several minutes (~10 min) before trying again. I did not think that the instrument was poorly leveled: in that case a mass would float but just not in the center, or it would sit over on one rail, rather than going back and forth rail to rail at each centering attempt and staying there, as it was doing. Still, we eventually decided to check the bubble level on the STS, and it was beautifully leveled.
So either the masses take much longer than I expect to come off the rail (not my experience), or maybe the jumper to make it a 1 sec machine did not work, and if we were to look later we'd see the positions to be different. Or something is wrong in the instrument, with either the centering process or the masses themselves.
So I'd like to look at the masses again at a later time, if the coordinator would grant access to the VEA.
I have produced filters for offline calibration of Hanford data starting at GPS time 1169326080. The filters can be found in the calibration SVN at this location:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1DCS_1169326080.npz
For information on the associated change in calibration, see: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33585
For suggested command line options to use when calibrating this data, see: https://wiki.ligo.org/Calibration/GDSCalibrationConfigurationsO2
The filters were produced using this Matlab script in SVN revision 4251:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1169326080.m
The parameters files used (all in revision 4251) were:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-24/modelparams_H1_2017-01-24.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-24/H1_TDparams_1169326080.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/D20170124_H1_CAL_EPICS_VALUES.m
Several plots are attached. The first four (png files) are spectrum comparisons between CALCS, GDS, and DCS. GDS and DCS agree to the expected level. Kappas were applied in both the GDS plots and the DCS plots with a coherence uncertainty threshold of 0.4%. Time domain vs. frequency domain comparison plots of the filters are also attached. Lastly, brief time series of the kappas and coherences are attached, for comparison with CALCS.
Here is a plot that compares the ratios of GDS and DCS (C01) data (expected vs. measured). Above ~8 Hz, the expected and measured ratios agree. Below ~8 Hz we see a difference. This comparison doesn't account for the FIR implementation of the ~9 Hz high-pass filter used in the GDS and DCS data. If there is a difference in how this is implemented, it could produce the difference we see here (this needs to be checked). The code used to make this plot has been added to the SVN:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CALCS_FE/CALCSvsDARMModel_20170124.m
This is just an updated version of the code Jeff used to make the CALCS and GDS comparison.
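Along the lines of the 'needs to be checked' remark above, here is a rough scipy sketch of the kind of comparison meant: design a placeholder FIR high-pass and a placeholder IIR high-pass near 9 Hz and look at the ratio of their magnitude responses below the corner. These are not the actual GDS/DCS filters; the orders and design choices below are made up purely for illustration.

import numpy as np
from scipy import signal

fs = 16384.0                        # sample rate (Hz)
fc = 9.0                            # nominal high-pass corner (Hz)
freqs = np.logspace(0, 2, 500)      # 1-100 Hz

# Placeholder designs (NOT the real calibration filters)
fir = signal.firwin(16385, fc, pass_zero=False, fs=fs)        # windowed FIR high-pass
b_iir, a_iir = signal.butter(4, fc, btype='highpass', fs=fs)  # Butterworth high-pass

_, h_fir = signal.freqz(fir, worN=freqs, fs=fs)
_, h_iir = signal.freqz(b_iir, a_iir, worN=freqs, fs=fs)

ratio = np.abs(h_fir) / np.abs(h_iir)
for f, r in zip(freqs, ratio):
    if f < 10:                      # the region where GDS/DCS differ
        print("%6.2f Hz   |FIR|/|IIR| = %.3f" % (f, r))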
Most of our activities are complete. We have a few items which are still ongoing:
Betsy mentioned Test Mass charge measurements after their Scattering work.
This is per alog 33735 and regards the FM9 filter for the 4.7 kHz violin mode.
Patrick, TJ
The EX high voltage turned off after some Beckhoff work this morning. We drove down and did the normal routine: flipped the switches, set the voltage to 430 V, set the current to 80 mA, and then hit Output ON. Patrick had some notes saying the current should end up around 3.8 mA, and it started out around there, but ended up around 24 mA. Not too sure which is the correct current.
PyCBC analysts, Thomas Dent, Andrew Lundgren
Investigation of some unusual and loud CBC triggers led to identifying a new set of glitches which occur a few times a day, looking like one or two cycles of extremely high-frequency scattering arches in the strain channel. One very clear example is this omega scan (26th Jan) - see particularly LSC-REFL_A_LF_OUT_DQ and IMC-IM4_TRANS_YAW spectrograms for the scattering structure. (Hence the possible name SPINOSAURUS, for which try Googling.)
The cause is a really strong transient excitation at around 30 Hz (aka 'thud') hitting the central station, seen in many accelerometer, seismometer, HEPI, ISI and SUS channels. We made some sound files from a selection of these channels:
PEM microphones, interestingly, don't pick up the disturbance in most cases - so probably it is coming through the ground.
Note that the OPLEV accelerometer shows ringing at ~60-something Hz.
The working hypothesis is that the thud is exciting some resonance/relative motion of the input optics, which is causing light to be reflected off places where it shouldn't be.
The frequency of the arches (~34 per second) would indicate that whatever is causing the scattering has a motion frequency of about 17 Hz, since an arch is produced twice per motion cycle (see e.g. https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=154054 as well as the omega scan above).
Maybe someone at the site could recognize what this is from listening to the .wav files?
A set of omega scans of similar events on 26th Jan (identified by thresholding on ISI-GND_STS_HAM2_Y) can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/
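For anyone who wants to reproduce the event selection, here is a rough sketch of thresholding on the HAM2 ground channel, assuming gwpy with NDS access. The channel name, band, and threshold below are guesses for illustration, not necessarily what was actually used.

import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1169337618, 1169341218         # example GPS span (~1 hour)
chan = 'H1:ISI-GND_STS_HAM2_Y_DQ'           # assumed channel name

data = TimeSeries.get(chan, start, end)
blrms = data.bandpass(10, 50).rms(1.0)      # 10-50 Hz band-limited RMS, 1 s stride

threshold = 5 * np.median(blrms.value)      # ad-hoc threshold: 5x the median
times = blrms.times.value[blrms.value > threshold]
print("candidate thud times (GPS):", times)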
Wow, that is pretty loud. It seems like it is even seen (though just barely) on seismometers clear out at EY, with about the right propagation delay for air or ground propagation in this band (about 300 m/s). It's like a small quake near the corner station, or something really heavy, like the front loader, going over a big bump or setting its shovel down hard. Are other similar events during working hours also seen at EY or EX?
It's hard to spot any pattern in the GPS times. As far as I have checked, the disturbances are always much stronger in the CS/LVEA than at the end stations (if seen at all at EX/EY).
More times can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan23/ https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan24/
Hveto investigations have uncovered a bunch more times - some are definitely not in working hours, eg https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ac-HPI-HAM2/scans/1169549195.98/ (02:46 local) https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ab-HPI-HAM2/scans/1168330222.84/ (00:10 local)
Here's a plot which may be helpful as to the times of disturbances in the CS, showing that the great majority of occurrences were on the 23rd, 26th-27th, and early on the 28th of Jan (all times UTC). This ought to be correlated with local happenings.
The ISI-GND HAM2 channel also has loud triggers at times where there are no strain triggers because the IFO was not observing. The main times I see are approximately (UTC):
Jan 22 : hours 13, 18, 21-22
Jan 23 : hours 0-1, 20
Jan 24 : hours 0, 1, 3-6, 10, 18-23
Jan 25 : hours 21-22
Jan 26 : hours 17-19, 21-22
Jan 27 : hours 1-3, 5-6, 10, 15-17, 19, 21, 23
Jan 28 : hours 9-10
Jan 29 : hours 19-20
Jan 30 : hours 17, 19-20
Hmm. Maybe this shows a predominance of times around hours 19-21 UTC, i.e. 11-13 PST. Lunchtime?? And what was special about the 24th and 27th?
Is this maybe snow falling off the buildings? The temps started going above the teens on the 18th or so and were staying near freezing by the 24th. Fil reported seeing a chunk fall that he thought could be ~200 lbs.
Ice Cracking On Roofs?
In addition to the ice/snow falls mentioned by Jim, I thought I'd mention audible bumps I heard from the Control Room during some snowy evenings a few weeks ago (alog 33199). Beverly Berger emailed me suggesting this could be ice cracking on the roof. We currently do not have tons of snow on the roofs, but there are some drifts which might be on the order of 1' tall.
MSR Door Slams?
After hearing the audio files from Thomas' alog, I was sensitive to the noise this morning. Because of this, I thought I'd note some times this morning when I heard a noise similar to Thomas' audio; the noise was the door slamming when people were entering the MSR (the Mass Storage Room adjacent to the Control Room; there was a pile of boxes which the door would hit when opened... I have since slid them out of the way). I realize this isn't as big a force as what Robert mentions or the snow falls, but I just thought I'd note some times when people were in/out of the room this morning:
I took a brief look at the times in Corey's previous 'bumps in the night' report, I think I managed to deduce correctly that it refers to UTC times on Jan 13. Out of these I could only find glitches corresponding to the times 5:32:50 and 6:09:14. There were also some loud triggers in the ISI-GND HAM2 channel on Jan 13, but only one corresponded in time with Corey's bumps: 1168320724 (05:31:46).
The 6:09 glitch seems to be a false alarm, a very loud blip glitch at 06:09:10 (see https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/H1_1168322968/) with very little visible in aux channels. The glitch would be visible on the control room glitchgram and/or range plot but is not associated with PEM-CS_SEIS or ISI-GND HAM2 disturbances.
The 5:32:50 glitch was identified as a 'PSL glitch' some time ago - however, it also appears to be a spinosaurus! So, a loud enough spinosaurus will also appear in the PSL.
Evidence : Very loud in PEM-CS_SEIS_LVEA_VERTEX channels (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155306) and characteristic sail shape in IMC-IM4 (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155301).
The DetChar SEI/Ground BLRMS Y summary page tab has a good witness channel, see the 'HAM2' trace in this plot for the 13th - ie if you want to know 'was it a spinosaurus' check for a spike in HAM2.
Here is another weird-audio-band-disturbance-in-CS event (or series of events!) from Jan 24th ~17:00 UTC :
https://ldas-jobs.ligo-wa.caltech.edu/~tdent/detchar/o2/PEM-CS_ACC_LVEAFLOOR_HAM1_Z-1169312457.wav
Could be someone walking up to a piece of the instrument, dropping or shifting some heavy object then going away .. ??
Omega scan: https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/psl_iss/1169312457.3/
The time mentioned in the last entry turns out to have been a scheduled Tuesday maintenance where people were indeed in the LVEA doing work (and the ifo was not observing, though locked).
Robert has placed two portable humidifiers - one in the CER and one at EY. Bubba and I topped up the water tanks on both units.
EY was ~1/2 full while the CER was completely dry.
Also, we located the circuit breaker for the EY building humidifier, checked its operation and turned it on to 50% output (5 volts control signal).
I reset both PSL power watchdogs at 18:19 UTC (10:19 PST). This closes FAMIS task 3635.
I lowered it even more to 14% at 21:40 UTC.
Temps are still low and exhaust pressure is still high, so I lowered it incrementally all the way down to 1%. Pressure fell to 0 psi but exhaust temps still read low (-33 C).
The Dewar was filled to a higher level than normal (86% full).
Raising LLCV to 12% and will go out to check actuator to make sure it's not frozen (we got snow today).
Changed the HV setting on IP7 controller (last of the old multivac Varian controllers in LVEA) from 5.6 kV to 7 kV.
PT-140 shows a pressure rise from turning the HV off for a couple of minutes.
The controller unit was powered down and disconnected to change its output to 5.6 kV from 7.0 kV; a re-configuration/calibration was done afterwards via the front panel.
The cable was reconnected and the controller powered back up. Attached is a pressure trend of the volumes near X2-8 (2 hours).
Per WP#6447
The coilmon status channels are now monitored in DIAG_MAIN, so the control room will get notifications if the status bit goes to zero. We'll need to modify the test code when we do HAMs 2&3.
@SYSDIAG.register_test
def SEI_COILMON_ALL_OK():
    """ISI coilmon status"""
    hams = ['HAM4', 'HAM5', 'HAM6']
    bscs = ['BS', 'ITMX', 'ITMY', 'ETMX', 'ETMY']
    # BSC-ISI chambers
    for chamber in bscs:
        if ezca['ISI-' + chamber + '_COILMON_STATUS_ALL_OK'] != 1:
            yield "ISI %s coilmon drop out" % chamber
    # HAM-ISI chambers (note the extra _BIO_IN_ in the channel name)
    for chamber in hams:
        if ezca['ISI-' + chamber + '_BIO_IN_COILMON_STATUS_ALL_OK'] != 1:
            yield "ISI %s coilmon drop out" % chamber