I have started Conlog running on conlog-master and conlog-replica. conlog-3.0.0 is running on conlog-master. conlog-flask-3.2.0 is running on conlog-replica. There are 6,653 unmonitored channels (attached). Some of these I can connect to from a CDS workstation, but not from conlog-master. I'm not certain why yet.
I had to set the following environment variables:

EPICS_CA_AUTO_ADDR_LIST=NO
EPICS_CA_ADDR_LIST=10.101.0.255 10.106.0.255 10.105.0.255 10.20.0.255 10.22.15.255

I did this in the systemd config file at /lib/systemd/system/conlog.service:

[Unit]
Description=Conlog daemon
After=mysql.service

[Service]
Type=simple
Environment="EPICS_CA_AUTO_ADDR_LIST=NO"
Environment="EPICS_CA_ADDR_LIST=10.101.0.255 10.106.0.255 10.105.0.255 10.20.0.255 10.22.15.255"
ExecStart=/usr/sbin/conlogd

[Install]
WantedBy=multi-user.target

There are now 1,992 unmonitored channels (attached), but it appears that these are channels that no longer exist.
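If it helps with chasing down the remaining unmonitored channels, here is a minimal sketch (not part of Conlog) that checks which of the listed channels still answer a CA search from the host it is run on, assuming pyepics is installed; the file name unmonitored_channels.txt is a stand-in for the attached list (one PV name per line):

```python
import os

# CA environment must be in place before pyepics creates its CA context;
# these mirror the settings in the service file above.
os.environ.setdefault("EPICS_CA_AUTO_ADDR_LIST", "NO")
os.environ.setdefault("EPICS_CA_ADDR_LIST",
                      "10.101.0.255 10.106.0.255 10.105.0.255 10.20.0.255 10.22.15.255")

from epics import PV

# "unmonitored_channels.txt" is a hypothetical name for the attached channel list.
with open("unmonitored_channels.txt") as f:
    names = [line.strip() for line in f if line.strip()]

dead = []
for name in names:
    pv = PV(name, connection_timeout=2.0)
    if not pv.wait_for_connection(timeout=2.0):
        dead.append(name)
    pv.disconnect()

print(f"{len(dead)} of {len(names)} channels did not connect from this host")
```

Running the same script on conlog-master and then on a CDS workstation should show whether the difference is host/network routing or channels that are truly gone.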
TITLE: 02/01 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
In OBSERVING for a good part of the shift, until 21:18 UTC (1:18pm PST), when we had a lockloss and the TCSy laser also tripped (IR Sensor). The rest of the shift was focused on the electronics box for the TCSy.
The End Station temperatures were increased due to current & upcoming cold weather. We will want to keep an eye on the ETMs (there are StripTools looking at the vertical M0 sensors for the ETMs on the operator0 workstation).
Final Note: The ALS Xarm polarization alarmed HIGH while we were in a holding pattern for H1. Jim, or a future operator, will need to minimize this after the next lockloss.
LOG:
While heading to NLN, the TCSy laser tripped again. Fil & Jason are working on this now (& Guardian is paused).
J. Oberling, F. Clara
Fil removed the comparator box and ran some quick tests in the EE lab; he found nothing obviously wrong with it. The box was reinstalled without issue. Using a multimeter, Fil measured the input signal from the IR sensor. The interesting thing here is that the input signal changed depending on how the box was positioned (trip point is at ~96mV). Hanging free in mid-air the input signal measured ~56mV; with Fil touching the box, still hanging free, the signal dropped to ~36mV; holding the box on the side of the TCS enclosure the signal changed yet again to ~25mV; and finally, placing the box on top of the TCS enclosure (its usual home), the signal dropped yet again to ~15mV. There is definitely something fishy going on with this comparator box; grounding issue or cable/connection problem maybe?
As a final check I opened the side panel of the TCS enclosure to check the viewport and ensure there were no obvious signs of damage. Using a green flashlight I found no obvious signs of damage on either optic in the TCS viewport; in addition, nothing looked obviously amiss with the viewport itself, so for now this seems to be an issue with either the comparator box or the IR sensor. Seeing as how it appears to be working again (famous last words, I know...) we restarted the TCSy CO2 laser. Everything came up without issue; we will continue to monitor this.
Wondering if this is related to the glitchy signals from the TCSY laser controller. They all run through that same controller box (though of course we did try swapping that out). The lifting-it-up / putting-it-down behavior sounds like it could be a weird grounding issue.
Give me a call if you need any help on this tonight - I'll email my number to you.
Here are a couple pictures for informational purposes. The first is the TCS laser controller chassis, and shows which light is lit when the IR sensor is in alarm. The second shows the comparator box in alarm. This box sits on top of the TCSy enclosure, on the north-east corner.
J. Oberling, F. Clara
At around the same time as our recent lockloss, the IR Sensor alarm popped up on TCSy and would not clear. I went out and reset the laser chassis, no luck. I then wiggled the cables to and from the IR Sensor comparator box to no avail. Found Fil for assistance and we set about trying to get the alarm to clear.
This is when things got a little odd. Fil picked up the little comparator box (sits on top of the TCSy enclosure) and the alarm instantly cleared. He set it back down again and the alarm returned. He unplugged and reseated the cable that goes from the comparator box to the TCSy laser chassis; the alarm cleared, but then returned before we could restart the laser. He then unplugged and reseated both connectors for the comparator box and the alarm did not come back. We wiggled the cables and Fil tapped on the comparator box, both connectors, and the cables with a small screwdriver; we could not get the alarm to return. As it now appears to be working, we decided to leave it as is; we will monitor this. Regardless, something does not appear to be quite right with this comparator box...
The CO2 laser was restarted without issue.
Owing to the colder temperatures last night and expected over the next several nights, I have increased the heat at both end stations by 1 mA.
Tagging DetChar and OpsInfo. This may mean that suspensions will start to wander around and sag. Maybe. Keep your eyes peeled!
John and I started up HU-2 yesterday at Y End to add some humidity to the VEA. In the process of starting this unit up we removed the lid on the reservoir and found that a large piece (~9" square) of bubble wrap had been left in the water discharge area of the reservoir. We removed the bubble wrap and started the humidifier. It is set at ~50% (5V on a 0-10V scale). The 5-day plot shows an increase in the humidity. Note: The 14 kW heater is powered from an electrical panel located in the VEA and was cycling several times a minute.
Tagging DetChar, OpsInfo, and ISC. There is suspicion from DetChar and Schofield that humidity, or the lack thereof, at EY is causing electronics glitches which propagate through the interferometer. So this *may* start to improve the IFO glitch rate. Maybe. Also note the "cycling several times a minute." The humidifier's water heater is powered from the YVEA, and it's controlled with a bang-bang servo, so look for changes in electronic / electromagnetic signals that show characteristic ON / OFF signatures. Again -- keep your eyes peeled!
We have been working on a low latency blip glitch hunter. It uses PyCBC Live, but with a smaller template bank targeted at finding (ideally only) blip glitches. We also generate summary pages, updated every 5 minutes:
https://ldas-jobs.ligo.caltech.edu/~miriam.cabero/blips_live/H1/day/20170209/detchar/pycbc_live/
Unfortunately it has only been running stably since Feb 3, so we cannot really compare directly whether the humidifier added on Feb 1 made a significant change in the rate of blips. However, I started using this code on Jan 30, so there are some results for previous days, although the code was still a little bit unstable. It can be looked at anyway:
https://ldas-jobs.ligo-wa.caltech.edu/~miriam.cabero/blips_live/H1/day/20170131/detchar/pycbc_live/
By glancing at this plot from Jan 31
and from e.g. yesterday
it doesn't look like there has been an improvement...
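To put a rough number on "no improvement", here is a sketch of a simple before/after rate comparison; the daily counts below are placeholders, NOT values read from the summary pages, and would need to be replaced with the per-day blip counts from the pages linked above:

```python
# Hypothetical before/after comparison of daily blip counts (placeholder numbers).
import numpy as np

before = np.array([40, 35, 42])      # e.g. daily blip counts before Feb 1 (placeholders)
after = np.array([38, 36, 41, 39])   # e.g. daily counts from the stable running period (placeholders)

rate_b, rate_a = before.mean(), after.mean()
# Poisson (sqrt-N) uncertainty on the mean daily rate
err_b = np.sqrt(before.sum()) / len(before)
err_a = np.sqrt(after.sum()) / len(after)

print(f"before: {rate_b:.1f} +/- {err_b:.1f} per day, after: {rate_a:.1f} +/- {err_a:.1f} per day")
print("difference / combined error =", (rate_b - rate_a) / np.hypot(err_b, err_a))
```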
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 15 seconds. LLCV set back to 12.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 106 seconds. TC A did not register fill. LLCV set back to 37.0% open.
Jason, Betsy
Yesterday during the maintenance period, Jason and I took a look at the beams coming from the ITMY and CPY in order to "map" where the beams are relative to their back-scatter into the chamber in other places. This is similar to what LLO did when they repointed their CPY mechanically in-chamber in an attempt to mitigate scatter (alog 28417). Attached is our "map", which will eventually tie into a larger picture of scatter and "correct" pointing of CPY... TBD. A few facts that I dug up for the record:
LHO CPY CP05 has the thin side to the +X side
LLO CPY CP03 has the thin side to the +X side
The original installation of CPY and ITMY IAS pointing (with 0,0 OSEM alignment offsets) roughly matches where we find them now (with 0,0 OSEM alignment offsets), namely the CPY pitch was offset from ITMY-AR by a couple mRad. No major drift over 2 years in alignment, although it is further out from this now by a factor of 2, not sure why. ITMY alignment has changed though, due to commissioning/locking work over 2 years. Recall, the original large pointing spec of the CP of "within ~1.4mRad" was for other factors such as ESD capability, and not necessarily for scatter. We may be refining this spec now for scatter mitigation.
Our map as shown in the attached:
CPY-HR is 2.2mRad away from ITMY-AR in PITCH, while not too far off in YAW (well under a mRad). IAS spec was ~1.4mRad; we installed it at ~0.9mRad.
The CPY pitch pointing could be adjusted to better match ITMY-AR, which may mitigate scatter, although we do not currently have enough electronic range, so this will need to be done mechanically during a vent. We could point the CPY as far as electronically possible and re-evaluate for scatter.
(Thought I posted this earlier!)
TITLE: 02/01 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 9mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.45 μm/s
Beginning of the shift saw clear skies with a balmy 17degF temperature
QUICK SUMMARY:
H1 was handed off in Observing & has been in Observing since 15:15 UTC (7:15am PST).
Video3 died & was rebooted.
Ran the weekly SEI CPS checks.
BSC:
No major changes here. Note some lines around ~50Hz for ETMy Stage1 (I took measurements at multiple times and saw them there); just a feature I wanted to note, but not a large high-frequency change, which is what we are supposed to watch out for according to the procedure.
HAM:
No major changes---high frequency looks mostly flat for all HAM ISIs.
Loss of heat in the midX VEA, and then its return Monday afternoon, caused pressure fluctuations.
| Work Permit | Date | Description | alog/status |
| --- | --- | --- | --- |
| 6464.html | 2017-01-30 12:50 | THIS AFFECTS DELTAL_EXTERNAL_DQ ONLY and DOES NOT AFFECT ANY ASTROPHYSICALLY CONSUMED DATA. Update the relative delay between the DELTAL_CTRL and DELTAL_RESIDUAL paths, which accounts for computational delays and high-frequency response approximations. This will be done by populating the existing CTRL_DELAY bank with a new Thiran-computed filter that allows for non-integer clock-cycle delays and more precision (an illustrative Thiran fractional-delay sketch follows this table). In addition, I'll update the control room wall figure of merit to include the latest model improvements, which had not been done in the 2017-01-03 upgrade. | 33788 |
| 6463.html | 2017-01-30 10:52 | Back-fill clogged sensing line with UHP N2, undo tubing fitting atop CP4 and utilize 1/16" wire rope and 0.080" copper wire (solid) inserted into clogged line to probe obstruction | 33779 |
| 6462.html | 2017-01-30 10:50 | Perform scheduled maintenance on scroll compressors #3 and #4 @ X-MID vent/purge-air supply skid. Maintenance activity will require the compressors to run for brief periods of time. Lock-out/tag-out power to skid. | 33785 |
| 6461.html | 2017-01-30 10:45 | Modify ion pump controller settings to 7.0 kV from 5.6 kV. To maintain a stable pressure in HAM6, a different controller will be used to power the ion pump for the extent of the work. The pressure of HAM6 will be monitored very closely via PT110. | 33737, 33786 |
| 6460.html | 2017-01-30 09:07 | Set the addresses on the beckhoff terminals installed at the tables. | 33782 |
| 6459.html | 2017-01-30 09:06 | As part of a test for the Beckhoff system we installed remote controls of the End Station illuminators. EY is not functioning so we need to troubleshoot and potentially fix this problem. | 33781 |
| 6458.html | 2017-01-28 14:36 | Xend station PCal bi-monthly calibration. Xend will be transitioned to LASER HAZARD. | 33783 |
| 6457.html | 2017-01-26 13:43 | Remove Coil Driver BIO Connections to WD: Modify & Compile models; DeIsolate Platform; Restart model; Isolate ISI; DAQ ReStart required. | 33776, 33799 |
| 6456.html | 2017-01-25 16:11 | I will do a 15-minute live-chat with Nagoya City Science Museum at the control room in the early morning (2-ish am) of 26th and 27th. | |
| 6455.html | 2017-01-25 9:38 | Make repairs to a leak on the OSB roof above the office area, east end. | |
| Previous W.P. | |||
| 6431.html | 2017-01-12 09:16 | Update and restart the Beckhoff PLC code on h0vacmr and h0vacex to reflect the change to Gamma controllers for IP5 and IP12. Update MEDM screens. Requires DAQ restart for change of channel names. Will trip off HV at end X. | 33385, 33755, 33799 |
| 6428.html | 2017-01-10 09:19 | Replace TCSY Laser Controller D1200745. This work is related to the flow sensor alarms/glitches. The in-line flow sensor was replaced on Dec. 20, but glitches were seen after swap. See Alog 32776. Planning on doing work this or next Tues. | 33129, 33769 |
| 5855.html | 2016-04-29 13:21 | Beckhoff Rotation Stage: Install a remote rotation stage driver by the HAM1 feed cables into the PSL for the rotation stage. This is only an alternative to troubleshooting the rotation stage. It will run on the same EtherCAT as the vacuum PT170/PT180 that will be moved Tuesday. | 33774 |
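As an aside on WP 6464 above: a Thiran all-pass is a standard way to realize a non-integer-sample delay digitally. The sketch below is a generic illustration of that filter design, not the actual filter installed in the delay bank, and the delay/order values are examples only:

```python
# Generic Thiran all-pass fractional-delay illustration (NOT the installed CAL filter).
import math
import numpy as np
from scipy import signal

def thiran_allpass(delay, order):
    """Return (b, a) for an order-N Thiran all-pass approximating `delay` samples."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    for k in range(1, order + 1):
        prod = 1.0
        for n in range(order + 1):
            prod *= (delay - order + n) / (delay - order + k + n)
        a[k] = (-1) ** k * math.comb(order, k) * prod
    b = a[::-1]            # all-pass: numerator is the reversed denominator
    return b, a

b, a = thiran_allpass(delay=2.3, order=3)   # e.g. a 2.3-sample (non-integer) delay
w, gd = signal.group_delay((b, a))          # group delay is ~2.3 samples at low frequency
```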
Modified the output of the HAM6 ion pump controller to -7.0 kV from -5.6 kV, then did a re-configuration/calibration via the front panel.
Work done under WP#6461.
Kyle, Chandra
WP 6463
Today we fed a 1/16" diam. wire rope through the bottom sensing lines of CP 3 & 4. A length of 109.5" penetrated CP4 (measured from the 1/4" swagelok fitting). The length for CP3 was 105.5". The sensing line is 1" longer on CP4 than on CP3 (based on how much it protrudes from the nipple welded to the CP outer body).
The theoretical length from drawings V049-4-005, V049-4-090, and V049-4-121 is 111.7", and then add 1.2" for CP3 and 2.2" for CP4 for the additional length of the swagelok connections.
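A quick arithmetic check on these numbers (my own, not from the original measurement notes) shows how far short of the theoretical end of each line the wire stopped:

```python
# Compare measured wire penetration with the theoretical sensing-line lengths above.
drawing_length = 111.7                       # inches, from V049-4-005/-090/-121
extra = {"CP3": 1.2, "CP4": 2.2}             # swagelok connection allowances
measured = {"CP3": 105.5, "CP4": 109.5}      # measured wire penetration

for cp in ("CP3", "CP4"):
    theoretical = drawing_length + extra[cp]
    print(f"{cp}: stopped {theoretical - measured[cp]:.1f} in short of the theoretical end")
# CP3: stopped ~7.4 in short; CP4: stopped ~4.4 in short
```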
We felt resistance toward the end, where I thought we were hitting the 90 deg bend just past the bibraze joint (until looking at these measurements more closely); then we were able to push another ~2". The vertical length is 2.25". The gap between the inner and outer vessel walls at the bottom is only 7/8". The last 4-5" of the wire was cold and frosted when we pulled it out. We then cycled between ~1 Torr (diaphragm pump) and 100 psig pressure on the CP4 lower sensing line. We did not detect any breakthrough.
We left both CP3 and CP4 upper and lower sensing lines plumbed together through the shunt valve, essentially bypassing the transducer. Currently not pumping or pressurizing CP4 line.
PyCBC analysts, Thomas Dent, Andrew Lundgren
Investigation of some unusual and loud CBC triggers led to identifying a new set of glitches which occur a few times a day, looking like one or two cycles of extremely high-frequency scattering arches in the strain channel. One very clear example is this omega scan (26th Jan) - see particularly LSC-REFL_A_LF_OUT_DQ and IMC-IM4_TRANS_YAW spectrograms for the scattering structure. (Hence the possible name SPINOSAURUS, for which try Googling.)
The cause is a really strong transient excitation at around 30Hz (aka 'thud') hitting the central station, seen in many accelerometer, seismometer, HEPI, ISI and SUS channels. We made some sound files from a selection of these channels:
PEM microphones, interestingly, don't pick up the disturbance in most cases - so probably it is coming through the ground.
Note that the OPLEV accelerometer shows ringing at ~60-something Hz.
Working hypothesis is that the thud is exciting some resonance/relative motion of the input optics which is causing light to be reflected off places where it shouldn't be ..
The frequency of the arches (~34 per second) would indicate that whatever is causing the scattering has a motion frequency of about 17Hz, since each cycle of the scatterer's motion produces two arches (see eg https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=154054 as well as the omega scan above).
Maybe someone at the site could recognize what this is from listening to the .wav files?
A set of omega scans of similar events on 26th Jan (identified by thresholding on ISI-GND_STS_HAM2_Y) can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/
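For anyone wanting to reproduce the event selection, here is a rough GWpy sketch of the kind of thresholding described above; the exact DQ channel name, the band, the threshold factor, and the GPS span (roughly Jan 26 UTC) are all my assumptions:

```python
# Rough sketch: flag 'thud' times by thresholding a band-limited RMS of the
# HAM2 ground seismometer (assumed channel name and arbitrary threshold).
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1169424018, 1169510418              # approx. 2017-01-26 00:00 to 01-27 00:00 UTC
data = TimeSeries.get('H1:ISI-GND_STS_HAM2_Y_DQ', start, end)

blrms = data.bandpass(10, 50).rms(1.0)           # 1-second RMS in a 10-50 Hz band
threshold = 5 * np.median(blrms.value)           # arbitrary factor; tune by eye
loud_times = blrms.times.value[blrms.value > threshold]
print(loud_times)                                # GPS times to feed to omega scans
```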
Wow, that is pretty loud; it seems like it is even seen (though just barely) on seismometers clear out at EY, with about the right propagation delay for air or ground propagation in this band (about 300 m/s). Like a small quake near the corner station, or something really heavy, like the front loader, going over a big bump or setting its shovel down hard. Are other similar events during working hours, and are they also seen at EY or EX?
It's hard to spot any pattern in the GPS times. As far as I have checked the disturbances are always much stronger in CS/LVEA than in end station (if seen at all in EX/EY ..).
More times can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan23/ https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan24/
Hveto investigations have uncovered a bunch more times - some are definitely not in working hours, eg https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ac-HPI-HAM2/scans/1169549195.98/ (02:46 local) https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ab-HPI-HAM2/scans/1168330222.84/ (00:10 local)
Here's a plot which may be helpful as to the times of disturbances in the CS, showing that the great majority of occurrences were on the 23rd, 26th-27th, and early on the 28th Jan (all times UTC). This ought to be correlated with local happenings.
The ISI-GND HAM2 channel also has loud triggers at times where there are no strain triggers because the ifo was not observing. The main times I see are approximately (UTC time):
Jan 22 : hours 13, 18, 21-22
Jan 23 : hours 0-1, 20
Jan 24 : hours 0, 1, 3-6, 10, 18-23
Jan 25 : hours 21-22
Jan 26 : hours 17-19, 21-22
Jan 27 : hours 1-3, 5-6, 10, 15-17, 19, 21, 23
Jan 28 : hours 9-10
Jan 29 : hours 19-20
Jan 30 : hours 17, 19-20
Hmm. Maybe this shows a predominance of times around hour 19-20-21 UTC i.e. 11-12-13 PST. Lunchtime?? And what was special about the 24th and 27th ..
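For what it's worth, a quick tally of the hours listed above (ranges expanded; my own bookkeeping, not an automated count) supports the 19-21 UTC clustering:

```python
# Histogram of the UTC hours listed in the entry above.
from collections import Counter

hours_by_day = {
    "Jan 22": [13, 18, 21, 22],
    "Jan 23": [0, 1, 20],
    "Jan 24": [0, 1, 3, 4, 5, 6, 10, 18, 19, 20, 21, 22, 23],
    "Jan 25": [21, 22],
    "Jan 26": [17, 18, 19, 21, 22],
    "Jan 27": [1, 2, 3, 5, 6, 10, 15, 16, 17, 19, 21, 23],
    "Jan 28": [9, 10],
    "Jan 29": [19, 20],
    "Jan 30": [17, 19, 20],
}
counts = Counter(h for hrs in hours_by_day.values() for h in hrs)
for hour in sorted(counts):
    print(f"{hour:02d} UTC: {'#' * counts[hour]}")
```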
Is this maybe snow falling off the buildings? The temps started going above the teens on the 18th or so and started staying near freezing by the 24th. Fil reported seeing a chunk he thought could be ~200 lbs fall.
Ice Cracking On Roofs?
In addition to the ice/snow falls mentioned by Jim, thought I'd mention audible bumps I heard from the Control Room during some snowy evenings a few weeks ago (alog 33199)... Beverly Berger emailed me suggesting this could be ice cracking on the roof. We currently do not have tons of snow on the roofs, but there are some drifts which might be on the order of 1' tall.
MSR Door Slams?
After hearing the audio files from Thomas' alog, I was sensitive to the noise this morning. Because of this, I thought I'd note some times this morning when I heard a noise similar to Thomas' audio; the noise was the door slamming when people were entering the MSR (Mass Storage Room, adjacent to the Control Room; there was a pile of boxes which the door would hit when opened... I have since slid them out of the way). I realize this isn't as big a force as what Robert mentions or the snow falls, but just thought I'd note some times when they were in/out of the room this morning:
I took a brief look at the times in Corey's previous 'bumps in the night' report; I think I managed to deduce correctly that it refers to UTC times on Jan 13. Out of these I could only find glitches corresponding to the times 5:32:50 and 6:09:14. There were also some loud triggers in the ISI-GND HAM2 channel on Jan 13, but only one corresponded in time with Corey's bumps: 1168320724 (05:31:46).
The 6:09 glitch seems to be a false alarm, a very loud blip glitch at 06:09:10 (see https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/H1_1168322968/) with very little visible in aux channels. The glitch would be visible on the control room glitchgram and/or range plot but is not associated with PEM-CS_SEIS or ISI-GND HAM2 disturbances.
The 5:32:50 glitch was identified as a 'PSL glitch' some time ago - however, it also appears to be a spinosaurus! So, a loud enough spinosaurus will also appear in the PSL.
Evidence : Very loud in PEM-CS_SEIS_LVEA_VERTEX channels (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155306) and characteristic sail shape in IMC-IM4 (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155301).
The DetChar SEI/Ground BLRMS Y summary page tab has a good witness channel, see the 'HAM2' trace in this plot for the 13th - ie if you want to know 'was it a spinosaurus' check for a spike in HAM2.
Here is another weird-audio-band-disturbance-in-CS event (or series of events!) from Jan 24th ~17:00 UTC :
https://ldas-jobs.ligo-wa.caltech.edu/~tdent/detchar/o2/PEM-CS_ACC_LVEAFLOOR_HAM1_Z-1169312457.wav
Could be someone walking up to a piece of the instrument, dropping or shifting some heavy object then going away .. ??
Omega scan: https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/psl_iss/1169312457.3/
The time mentioned in the last entry turns out to have been a scheduled Tuesday maintenance where people were indeed in the LVEA doing work (and the ifo was not observing, though locked).
Summary of the DQ shift from Thursday 26th to Sunday 29th (inclusive), click here for full report:
For reference:
I have learned that the H1:PEM-CS_ACC_HAM6 accelerometers, while not in vacuum but just beside the HAM6 chamber, are sensitive to the shutter closing down to protect the output PDs after lockloss. Therefore spikes in these channels will always be present during a lockloss (see the attached picture, where I plot this channel together with the power built up in the PR cavity, a good indicator of lockloss).
I also looked at the possibility that the overflow of H1:ASC-AS_C_SEG4 and H1:ASC-AS_C_SEG2 may actually have caused the lockloss (more to come from this analysis, worthy of an aLog in itself), but note that these signals are used, together with SEG1 and SEG3, to generate the alignment signals for the SR2 and SRM mirrors.
Finally, in order to look at the possible effect that the A2L script had on the excess low frequency noise, I will run BruCo before and after running the A2L script, which is run occasionally to re-center the beam on the mirrors. Sheila has mentioned that it should not affect frequencies above 25Hz, though.
The BruCo reports before and after the A2L script was run: in each case I look at 600 seconds, from GPS time 1169564417 for the 'before' case and 1169569217 for the 'after' case, in the frequency band between 40 and 100Hz. As an example, this is the command used for the 'before' case:
./bruco.py --ifo H1 --channel=OMC-DCPD_SUM_OUT_DQ --gpsb=1169564417 --length=600 --outfs=4096 --naver=100 --dir=~/public_html/detchar/O2/bruco/Before_at_1169564417_600_OMC_DCPD --top=100 --webtop=20 --xlim=40:100 --ylim=1e-10:1 --excluded=share/lho_excluded_channels.txt
Another look at accelerometers/seismometers.
In addition to the external (i.e. in-air) accelerometers, we also have in-vacuum seismometers (GS13s mounted within the Seismic Isolation [ISI] tables) which we can look at. Attached is a look at one of the HAM6 ISI GS13s (H1:ISI-HAM6_GS13INF_H1_IN1_DQ) and a HAM6 in-air/external accelerometer (H1:PEM-CS_ACC_HAM6_OMC_X_DQ, probably mounted on the HAM door). When the lockloss occurs, the HAM6 Fast Shutter (referred to as "the Toaster") pops up and shakes the HAM6 table (sometimes tripping the HAM6 ISI). This motion is seen by the GS13 inside and the accelerometer outside (which was a surprise to me).
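A minimal GWpy sketch of this comparison (my own, not from the original entry); the GPS time below is a placeholder for an actual lockloss time:

```python
# Fetch and plot the in-vacuum GS13 and the external HAM6 accelerometer around a lockloss.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

lockloss_gps = 1169500000          # placeholder; substitute a real lockloss GPS time
channels = ['H1:ISI-HAM6_GS13INF_H1_IN1_DQ',
            'H1:PEM-CS_ACC_HAM6_OMC_X_DQ']

data = TimeSeriesDict.get(channels, lockloss_gps - 5, lockloss_gps + 5)
plot = Plot(data[channels[0]], data[channels[1]], separate=True, sharex=True)
plot.savefig('ham6_shutter_kick.png')
```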