While Jason and Fil were looking at the TCSY laser, Corey had left ISC_LOCK sitting at ENGAGE_SRC_ASC. We noticed that DIAG_MAIN and the ALSX guardian were both complaining about the X-arm fiber polarization. We thought we could ignore this because the arms were no longer "on ALS", but when ISC_LOCK got to the SHUTTER_ALS state, it couldn't proceed because the ALSX guardian was in fault.
To move forward, I had to take the ALSX guardian to manual and put it in the SHUTTERED state. Buuttt... now ALSX wasn't monitored by ISC_LOCK. When I got to NLN, the TCSCS was in safe (from earlier work?) and it had a bunch of differences in SDF; the TCS_ITMY_CO2_PWR guardian was also complaining (where is this screen? I had to use "guardctrl medm TCS_ITMY_CO2_PWR" to launch it; it recovered after INITing); and ALSX was controlled by USER. This last one I fixed with a caput: caput H1:GRD-ALS_XARM_MANAGER ISC_LOCK. Normally, that would be fixed by INITing the parent node, but for ISC_LOCK that means going through DOWN and breaking lock.
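For reference, the same fix can be scripted. Here is a minimal sketch using pyepics (assuming it is available on the workstation), mirroring the command-line caput above:

import epics

# Hand the ALS_XARM node back to ISC_LOCK without INITing ISC_LOCK,
# which would send it through DOWN and break lock.
epics.caput('H1:GRD-ALS_XARM_MANAGER', 'ISC_LOCK')
print(epics.caget('H1:GRD-ALS_XARM_MANAGER'))  # should now report ISC_LOCK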
Of course, after fixing all of that and surviving an earthquake, I lost lock due to PIs that seem to have shifted because of the TCS outage.
STUDYING RF PHASE DRIFTS IN THE H1 RF SOURCE OSCILLATOR SYSTEM IN CER: WORK PERMIT 6453
Dick Gustafson, Tues maintenance 1/25, 1700 UTC (0900 PST) to 2000 UTC (1200 PST), at CER ISC Racks C# and C4. Maintenance period access.

Motivation: This was motivated by a search for the cause of a 1/2 Hz and 1 Hz comb of lines seen in long CW GW searches (multi-day integrations), and by apparent OSC phase drifts noticed in the commissioning/shakedown era. The scenario is that the 1 PPS (1 Hz) basic correction interval modulates the RF in a random but sharp-interval way. A second 1 Hz theory was power modulation associated with the 1 Hz on/off of perhaps 100 LEDs switching perhaps 1 amp in the OSC SOURCE power system; reprogramming, i.e. removing this 1 Hz variable, perhaps changed but did not eliminate the 1/2 Hz comb effect.

Results so far: We verified that the 10 MHz OSC-VCO phase drifts/swings (relative to a 10.000... MHz reference) by +/- ~5 to 25 ns, over randomly different swing periods of ~4 to 30 sec. The phase swings back and forth relative to a frequency-tweaked reference oscillator signal as seen on a TEK 3032 oscilloscope; the reference OSC triggers the scope, and observations are made with the Mk I eyeball. The phase drift seems ~constant at two drift speeds for the + and - directions, with swing/drift periods varying from ~<4 to ~30 sec.

The 10 MHz OSC was selected first (today) because convenient, stable, tunable reference oscillator units are available at that frequency (<< 0.1 ns/sec drift): a "tunable" SRS SC-10, a very stable 10.000000 MHz VCO; and an SRS FS725 Rb-87 rubidium frequency standard ("atomic clock") at 10.000 MHz. This provides a stabilized 10.00000000000 MHz reference, very slowly tunable over a very limited range. These were used here first to verify the stability and tunability of the reference oscillators and to learn the subtleties.

PLAN: We hope to devise a practical scheme to test all or most of the oscillators in the LHO CER (are they the "same"? or what?) next maintenance or opportunity, and to come up with a plausible drift model and understand the consequences.
I have started Conlog running on conlog-master and conlog-replica. conlog-3.0.0 is running on conlog-master. conlog-flask-3.2.0 is running on conlog-replica. There are 6,653 unmonitored channels (attached). Some of these I can connect to from a CDS workstation, but not from conlog-master. I'm not certain why yet.
I had to set the following environment variables:

EPICS_CA_AUTO_ADDR_LIST=NO
EPICS_CA_ADDR_LIST=10.101.0.255 10.106.0.255 10.105.0.255 10.20.0.255 10.22.15.255

I did this in the systemd config file at /lib/systemd/system/conlog.service:

[Unit]
Description=Conlog daemon
After=mysql.service

[Service]
Type=simple
Environment="EPICS_CA_AUTO_ADDR_LIST=NO"
Environment="EPICS_CA_ADDR_LIST=10.101.0.255 10.106.0.255 10.105.0.255 10.20.0.255 10.22.15.255"
ExecStart=/usr/sbin/conlogd

[Install]
WantedBy=multi-user.target

There are now 1992 unmonitored channels (attached), but it appears that these are channels that no longer exist.
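As a quick cross-check of which of the remaining unmonitored channels actually connect from a given host, something like the sketch below could be run (assuming pyepics is installed; 'unmonitored.txt' is a placeholder for the attached channel list):

# Set the CA address list before importing epics so the CA context picks it up.
import os
os.environ['EPICS_CA_AUTO_ADDR_LIST'] = 'NO'
os.environ['EPICS_CA_ADDR_LIST'] = ('10.101.0.255 10.106.0.255 '
                                    '10.105.0.255 10.20.0.255 10.22.15.255')
import epics

with open('unmonitored.txt') as f:
    channels = [line.strip() for line in f if line.strip()]

dead = []
for name in channels:
    pv = epics.PV(name)
    if not pv.wait_for_connection(timeout=2.0):
        dead.append(name)
    pv.disconnect()

print('%d of %d channels did not connect' % (len(dead), len(channels)))

Running this on both a CDS workstation and conlog-master should show whether the remaining difference is network reachability or channels that genuinely no longer exist.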
TITLE: 02/01 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
We were in OBSERVING for a good part of the shift, and then at 21:18 UTC (1:18pm PST) we had a lockloss AND the TCSy laser tripped (IR Sensor). The rest of the shift was focused on the electronics box for the TCSy.
The End Station temperatures were increased due to current & upcoming cold weather. We will want to keep an eye on the ETMs (there are StripTools looking at the vertical M0 sensors for the ETMs on the operator0 workstation).
Final Note: ALS Xarm polarization alarmed High while we were in a holding pattern for H1. Jim, or a future operator, will need to minimize this after the next lockloss.
LOG:
While heading to NLN, the TCSy laser tripped again. Fil & Jason are working on this now (& Guardian is paused).
J. Oberling, F. Clara
Fil removed the comparator box and ran some quick tests in the EE lab; he found nothing obviously wrong with it. The box was reinstalled without issue. Using a multimeter, Fil measured the input signal from the IR sensor. The interesting thing here is that the input signal changed depending on how the box was positioned (trip point is at ~96mV). Hanging free in mid-air the input signal measured ~56mV; with Fil touching the box, still hanging free, the signal dropped to ~36mV; holding the box on the side of the TCS enclosure the signal changed yet again to ~25mV; and finally, placing the box on top of the TCS enclosure (its usual home), the signal dropped yet again to ~15mV. There is definitely something fishy going on with this comparator box; grounding issue or cable/connection problem maybe?
As a final check I opened the side panel of the TCS enclosure to check the viewport, to ensure there were no obvious signs of damage. Using a green flashlight I found no obvious signs of damage on either optic in the TCS viewport; in addition, nothing obviously looked amiss with the viewport itself, so for now this seems to be an issue with either the comparator box or the IR sensor. Seeing as how it appears to be working again (famous last words, I know...) we restarted the TCSy CO2 laser. Everything came up without issue; we will continue to monitor this.
Wondering if this is related to the glitchy signals from the TCSY laser controller. They all run through that same controller box (though of course we did try swapping that out). The lifting-it-up / putting-it-down behavior sounds like it could be a weird grounding issue.
Give me a call if you need any help on this tonight - I'll email my number to you.
Here are a couple pictures for informational purposes. The first is the TCS laser controller chassis, and shows which light is lit when the IR sensor is in alarm. The second shows the comparator box in alarm. This box sits on top of the TCSy enclosure, on the north-east corner.
J. Oberling, F. Clara
At around the same time as our recent lockloss, the IR Sensor alarm popped up on TCSy and would not clear. I went out and reset the laser chassis, no luck. I then wiggled the cables to and from the IR Sensor comparator box to no avail. Found Fil for assistance and we set about trying to get the alarm to clear.
This is when things got a little odd. Fil picked up the little comparator box (sits on top of the TCSy enclosure) and the alarm instantly cleared. He set it back down again and the alarm returned. He unplugged and reseated the cable that goes from the comparator box to the TCSy laser chassis; the alarm cleared, but then returned before we could restart the laser. He then unplugged and reseated both connectors for the comparator box and the alarm did not come back. We wiggled the cables and Fil tapped on the comparator box, both connectors, and the cables with a small screw driver; we could not get the alarm to return. As it now appears to be working we decided to leave it as is; we will monitor this. Regardless, something appears to not be quite right with this comparator box...
The CO2 laser was restarted without issue.
Owing to the colder temperatures last night and those expected over the next several nights, I have increased the heat at both end stations by 1 mA.
Tagging DetChar and OpsInfo. This may mean that suspensions will start to wander around and sag. Maybe. Keep your eyes peeled!
John and I started up HU-2 yesterday at Y End to add some humidity to the VEA. In the process of starting this unit up we removed the lid on the reservoir and found that a large piece (~9" square) of bubble wrap had been left in the water discharge area of the reservoir. We removed the bubble wrap and started the humidifier. It is set at ~50% (5V on a 0-10V scale). The 5-day plot shows an increase in the humidity. Note: The 14 kW heater is powered from an electrical panel located in the VEA and was cycling several times a minute.
Tagging DetChar, OpsInfo, and ISC. There is suspicion from DetChar and Schofield that humidity, or the lack thereof, at EY is causing electronics to glitch, and that these glitches propagate through the interferometer. So this *may* start to improve the IFO glitch rate. Maybe. Also note the "cycling several times a minute": the humidifier's water heater is powered from the YVEA, and it's controlled with a bang-bang servo, so look for changes in electronic / electromagnetic signals that show some characteristic ON / OFF signatures. Again -- keep your eyes peeled!
We have been working on a low latency blip glitch hunter. It uses PyCBC Live but with a smaller template bank focused on finding possibly only blip glitches. We also generate summary pages, updated every 5 minutes:
https://ldas-jobs.ligo.caltech.edu/~miriam.cabero/blips_live/H1/day/20170209/detchar/pycbc_live/
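For anyone curious about the underlying technique: the following is not the PyCBC Live pipeline or our actual template bank, just a minimal sketch of a single-template matched filter with the public PyCBC API, where the masses, sample rate, durations, and simulated noise are all made-up illustrative choices.

from pycbc.waveform import get_td_waveform
from pycbc.filter import matched_filter
from pycbc.psd import aLIGOZeroDetHighPower
from pycbc.noise import noise_from_psd

sample_rate = 4096   # Hz, illustrative
duration = 16        # seconds of simulated data
flow = 30.0          # low-frequency cutoff

# Simulated aLIGO-like noise as a stand-in for real strain data.
delta_f = 1.0 / duration
flen = int(sample_rate / (2 * delta_f)) + 1
psd = aLIGOZeroDetHighPower(flen, delta_f, flow)
data = noise_from_psd(duration * sample_rate, 1.0 / sample_rate, psd, seed=0)

# One short-duration, high-mass template (the kind of morphology that tends
# to respond to blip glitches); parameters are purely illustrative.
hp, _ = get_td_waveform(approximant='SEOBNRv4_opt', mass1=60, mass2=60,
                        delta_t=1.0 / sample_rate, f_lower=flow)
hp.resize(len(data))
template = hp.cyclic_time_shift(hp.start_time)

snr = matched_filter(template, data, psd=psd, low_frequency_cutoff=flow)
peak = abs(snr).numpy().argmax()
print('peak |SNR| = %.2f at t = %.3f s' % (abs(snr[peak]), snr.sample_times[peak]))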
Unfortunately it has only been running stably since Feb 3, so we cannot directly compare whether the humidifier added on Feb 1 made a significant change in the rate of blips. However, I started using this code on Jan 30, so there are some results for previous days, although the code was still a little bit unstable. They can be looked at anyway:
https://ldas-jobs.ligo-wa.caltech.edu/~miriam.cabero/blips_live/H1/day/20170131/detchar/pycbc_live/
By glancing at this plot from Jan 31
and from e.g. yesterday
it doesn't look like there has been an improvement...
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 15 seconds. LLCV set back to 12.0% open. Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 106 seconds. TC A did not register fill. LLCV set back to 37.0% open.
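For illustration only, the manual fill sequence above maps naturally onto a small script. The channel names and the thermocouple threshold below are placeholders, not the real CDS vacuum channels, so treat this as a sketch of the sequence rather than something to run against the live system:

import time
import epics

# Hypothetical channel names -- NOT the real vacuum channels.
LLCV_MODE = 'H0:VAC-CP3_LLCV_MODE'      # hypothetical
LLCV_POS  = 'H0:VAC-CP3_LLCV_POSITION'  # hypothetical
TC_A      = 'H0:VAC-CP3_TC_A_TEMP'      # hypothetical

epics.caput(LLCV_MODE, 'MANUAL')   # LLCV to manual control
epics.caput(LLCV_POS, 50.0)        # open to 50%

t0 = time.time()
while epics.caget(TC_A) > -180.0:  # assumed proxy for the TC registering the fill
    if time.time() - t0 > 300:     # give up after 5 minutes
        break
    time.sleep(1)

print('fill took %.0f seconds' % (time.time() - t0))
epics.caput(LLCV_POS, 12.0)        # restore nominal 12% open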
Jason, Betsy
Yesterday during the maintenance period, Jason and I took a look at the beams coming from the ITMY and CPY in order to "map" where those beams are relative to their back-scatter into the chamber elsewhere. This is similar to what LLO did when they repointed their CPY mechanically in-chamber in an attempt to mitigate scatter (alog 28417). Attached is our "map", which will eventually tie into a larger picture of scatter and "correct" pointing of CPY... TBD. A few facts that I dug up for the record:
LHO CPY CP05 has the thin side to the +X side
LLO CPY CP03 has the thin side to the +X side
The original installation of CPY and ITMY IAS pointing (with 0,0 OSEM alignment offsets) roughly matches where we find them now (with 0,0 OSEM alignment offsets), namely the CPY pitch was offset from ITMY-AR by a couple of mRad. No major drift in alignment over 2 years, although it is further out from this now by a factor of 2; not sure why. ITMY alignment has changed, though, due to commissioning/locking work over 2 years. Recall that the original large pointing spec for the CP of "within ~1.4mRad" was set by other factors such as ESD capability, and not necessarily for scatter. We may be refining this spec now for scatter mitigation.
Our map as shown in the attached:
CPY-HR is 2.2mRad away from ITMY-AR in PITCH, while not too far off in YAW (well under a mRad). IAS spec was ~1.4mRad; we installed it at ~0.9mRad.
The CPY pitch pointing could be adjusted to better match ITMY-AR, which may improve scattering, although we do not currently have enough electronic range, so this will need to be done mechanically during a vent. In the meantime we could point the CPY as far as electronically possible and re-evaluate for scatter.
(Thought I posted this earlier!)
TITLE: 02/01 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 9mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.45 μm/s
Beginning of the shift saw clear skies with a balmy 17degF temperature
QUICK SUMMARY:
H1 was handed off in Observing & it has been in Observing since 15:15 UTC (7:15am PST).
Video3 died & was rebooted.
Ran the weekly SEI CPS checks.
BSC:
No major changes here. Note some lines around ~50Hz for ETMy Stage1 (I took measurements at multiple times and saw them there); just a feature I wanted to note, but not a huge high-frequency change, which is what we are supposed to watch out for according to the procedure.
HAM:
No major changes---high frequency looks mostly flat for all HAM ISIs.
Modified the output of the HAM6 ion pump controller from -5.6 kV to -7.0 kV, then did a re-configuration/calibration via the front panel.
Work done under WP#6461.
PyCBC analysts, Thomas Dent, Andrew Lundgren
Investigation of some unusual and loud CBC triggers led to identifying a new set of glitches which occur a few times a day, looking like one or two cycles of extremely high-frequency scattering arches in the strain channel. One very clear example is this omega scan (26th Jan) - see particularly LSC-REFL_A_LF_OUT_DQ and IMC-IM4_TRANS_YAW spectrograms for the scattering structure. (Hence the possible name SPINOSAURUS, for which try Googling.)
The cause is a really strong transient excitation at around 30Hz (aka 'thud') hitting the central station, seen in many accelerometer, seismometer, HEPI, ISI and SUS channels. We made some sound files from a selection of these channels :
PEM microphones, interestingly, don't pick up the disturbance in most cases - so probably it is coming through the ground.
Note that the OPLEV accelerometer shows ringing at ~60-something Hz.
Working hypothesis is that the thud is exciting some resonance/relative motion of the input optics which is causing light to be reflected off places where it shouldn't be ..
The frequency of the arches (~34 per second) would indicate that whatever is causing the scattering has a motion frequency of about 17Hz, since the scattered-light fringe rate peaks twice per cycle of the scatterer's motion (see eg https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=154054 as well as the omega scan above).
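To make the 17 Hz inference concrete, here is a small numerical sketch of the standard scattered-light fringe argument; the motion amplitude is a made-up illustrative number, and the only real point is the factor-of-two relation between arch rate and motion frequency:

import numpy as np

lam = 1064e-9   # laser wavelength [m]
f_m = 17.0      # assumed scatterer motion frequency [Hz]
A = 0.5e-6      # assumed motion amplitude [m], illustrative only

t = np.linspace(0.0, 1.0, 100000)
x = A * np.sin(2 * np.pi * f_m * t)                    # scatterer displacement
v = 2 * np.pi * f_m * A * np.cos(2 * np.pi * f_m * t)  # scatterer velocity

# The phase of the back-scattered field is phi(t) = 4*pi*x(t)/lam, so the
# instantaneous fringe (arch) frequency is f_fringe(t) = 2*|v(t)|/lam.
f_fringe = 2.0 * np.abs(v) / lam

# |v| peaks twice per mechanical cycle, so arches appear at 2*f_m = 34 per
# second, matching the ~34 arches per second quoted above.
print('arches per second: %.0f' % (2 * f_m))
print('peak fringe frequency: %.0f Hz' % f_fringe.max())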
Maybe someone at the site could recognize what this is from listening to the .wav files?
A set of omega scans of similar events on 26th Jan (identified by thresholding on ISI-GND_STS_HAM2_Y) can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/
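For anyone who wants to reproduce this kind of event hunting, a rough sketch with gwpy is below; the exact channel name and the threshold are assumptions and would need adjusting:

import numpy as np
from gwpy.timeseries import TimeSeries

# Channel name is an assumption; substitute the real DQ channel name.
chan = 'H1:ISI-GND_STS_HAM2_Y_DQ'
data = TimeSeries.get(chan, 'Jan 26 2017 00:00', 'Jan 27 2017 00:00')

# Band-pass around the ~30 Hz 'thud' band, then take a 1-second RMS.
blrms = data.bandpass(10, 50).rms(1.0)

# Flag stretches well above the median band-limited RMS (factor is arbitrary).
threshold = 5 * np.median(blrms.value)
loud_times = blrms.times[blrms.value > threshold]
print('\n'.join(str(t) for t in loud_times))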
Wow, that is pretty loud; it seems like it is even seen (though just barely) on seismometers clear out at EY, with about the right propagation delay for air or ground propagation in this band (about 300 m/s). It's like a small quake near the corner station, or something really heavy, like the front loader, going over a big bump or setting its shovel down hard. Are other similar events during working hours, and are they also seen at EY or EX?
It's hard to spot any pattern in the GPS times. As far as I have checked the disturbances are always much stronger in CS/LVEA than in end station (if seen at all in EX/EY ..).
More times can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan23/ https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan24/
Hveto investigations have uncovered a bunch more times - some are definitely not in working hours, eg https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ac-HPI-HAM2/scans/1169549195.98/ (02:46 local) https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ab-HPI-HAM2/scans/1168330222.84/ (00:10 local)
Here's a plot which may be helpful as to the times of disturbances in CS showing the great majority of occurrences on the 23rd, 26th-27th and early on 28th Jan (all times UTC). This ought to be correlated with local happenings.
The ISI-GND HAM2 channel also has loud triggers at times where there are no strain triggers as the ifo was not observing. The main times I see are approximately (UTC time)
Jan 22 : hours 13, 18, 21-22
Jan 23 : hours 0-1, 20
Jan 24 : hours 0, 1, 3-6, 10, 18-23
Jan 25 : hours 21-22
Jan 26 : hours 17-19, 21-22
Jan 27 : hours 1-3, 5-6, 10, 15-17, 19, 21, 23
Jan 28 : hours 9-10
Jan 29 : hours 19-20
Jan 30 : hours 17, 19-20
Hmm. Maybe this shows a predominance of times around hour 19-20-21 UTC i.e. 11-12-13 PST. Lunchtime?? And what was special about the 24th and 27th ..
Is this maybe snow falling off the buildings? The temps started going above the teens on the 18th or so and started staying near freezing by the 24th. Fil reported seeing a chunk he thought could be ~200 lbs fall.
Ice Cracking On Roofs?
In addition to the ice/snow falls mentioned by Jim, I thought I'd mention audible bumps I heard from the Control Room during some snowy evenings a few weeks ago (alog33199). Beverly Berger emailed me suggesting this could be ice cracking on the roof. We currently do not have tons of snow on the roofs, but there are some drifts which might be on the order of 1' tall.
MSR Door Slams?
After hearing the audio files from Thomas' alog, I was sensitive to the noise this morning. Because of this, I thought I'd note some times this morning when I heard a noise similar to Thomas' audio; in this case the noise was the door slamming when people were entering the MSR (the Mass Storage Room adjacent to the Control Room; there was a pile of boxes which the door would hit when opened...I have since slid them out of the way). I realize this isn't as big a force as what Robert mentions or the snow falls, but I just thought I'd note some times when people were in/out of the room this morning:
I took a brief look at the times in Corey's previous 'bumps in the night' report, I think I managed to deduce correctly that it refers to UTC times on Jan 13. Out of these I could only find glitches corresponding to the times 5:32:50 and 6:09:14. There were also some loud triggers in the ISI-GND HAM2 channel on Jan 13, but only one corresponded in time with Corey's bumps: 1168320724 (05:31:46).
The 6:09 glitch seems to be a false alarm, a very loud blip glitch at 06:09:10 (see https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/H1_1168322968/) with very little visible in aux channels. The glitch would be visible on the control room glitchgram and/or range plot but is not associated with PEM-CS_SEIS or ISI-GND HAM2 disturbances.
The 5:32:50 glitch was identified as a 'PSL glitch' some time ago - however, it also appears to be a spinosaurus! So, a loud enough spinosaurus will also appear in the PSL.
Evidence : Very loud in PEM-CS_SEIS_LVEA_VERTEX channels (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155306) and characteristic sail shape in IMC-IM4 (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155301).
The DetChar SEI/Ground BLRMS Y summary page tab has a good witness channel, see the 'HAM2' trace in this plot for the 13th - ie if you want to know 'was it a spinosaurus' check for a spike in HAM2.
Here is another weird-audio-band-disturbance-in-CS event (or series of events!) from Jan 24th ~17:00 UTC :
https://ldas-jobs.ligo-wa.caltech.edu/~tdent/detchar/o2/PEM-CS_ACC_LVEAFLOOR_HAM1_Z-1169312457.wav
Could be someone walking up to a piece of the instrument, dropping or shifting some heavy object then going away .. ??
Omega scan: https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/psl_iss/1169312457.3/
The time mentioned in the last entry turns out to have been a scheduled Tuesday maintenance where people were indeed in the LVEA doing work (and the ifo was not observing, though locked).
There are more SDF diffs in TCSCS. Looks like these should probably be unmonitored.
More TCS diffs.
To clarify a few Guardian operations here: please be careful, anyone who tries to put a node back to managed by clicking the "AUTO" button, because this will make USER the manager, NOT the normal, correct manager node. The way for a manager to regain control of its subordinates is to go to INIT, as Jim states. It is true that if you select INIT from ISC_LOCK while not in Manual mode, it will go to DOWN after completing the INIT state; but if you keep ISC_LOCK in Manual, you can wait for the INIT state to complete and then click the next state that ISC_LOCK should execute. That last part is the tricky part, though: if you reselect the state that you stopped at before going to INIT, you may run the risk of losing lock because it will rerun that state. It may not break lock, but some states will. Jim used the other way to regain control of a node, by caput'ing the manager name into H1:GRD-{subordinate_node}_MANAGER. This also works, but is kind of the "back door" approach (although it may be a bit clearer depending on circumstances).
As for the TCS_ITMY_CO2_PWR node, all nodes are on the Guardian Overview MEDM screen. All TCS nodes are under the TCS group in the top right, near the BRS nodes. Perhaps we should make sure that these are also accessible from the TCS screens.