I made two mistakes in judgment before being reminded that we have an Acceptable/Not Acceptable activities list at the OPS station. First, I let Chris move the forklift from outside of OSB receiving to the chemical bunker from 22:10-22:26 UTC. Second, I let Bubba and John onto the roof of the OSB for an inspection from 22:18-22:21 UTC. LLO was not in Observing at the time, so hopefully I can be forgiven. The forklift makes sense to me, but I was surprised that the Observation Deck is off limits.
I just did EX to complete the set.
The receiver was off between 12:32:58 and 12:41:59 PST.
EX CNS-II receiver label (on bottom of unit):
Model: CNSC02-C
Serial Number: 404358
Options: 2
Cruising along at ~70 Mpc. H1 has been locked for 29 hours and in Observing for 23.5 hours.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 992 seconds. LLCV set back to 15.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 1298 seconds. TC A did not register fill. LLCV set back to 36.0% open.
Increased both by one percent:
CP3 now 16% open.
CP4 now 37% open.
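For reference, each of the fills above follows the same pattern: enable the LLCV, switch it to manual, open to the fill level, wait for the exhaust thermocouples to register the fill, then restore the nominal opening. A minimal Python/EPICS sketch of that pattern is below; the PV names, fill-detection threshold, and timeout are placeholders, not the actual CDS channel names or settings (the real fills are run with the standard tools).

# Sketch of one manual CP fill cycle (placeholder PV names; the real system also
# has separate enable and manual/auto mode PVs).
from epics import caput, caget
import time

LLCV_POS = 'HVE-MY:CP3_LLCV_POS_REQ'    # placeholder: LLCV position request (% open)
TC_A     = 'HVE-MY:CP3_EXHAUST_TC_A'    # placeholder: exhaust thermocouple A

def manual_fill(fill_percent, nominal_percent, tc_cold=-50.0, timeout=1800):
    """Open the LLCV to fill_percent, wait for the exhaust TC to go cold
    (i.e. the fill is registered), then restore the nominal opening.
    Returns the elapsed fill time in seconds."""
    caput(LLCV_POS, fill_percent)
    t0 = time.time()
    while time.time() - t0 < timeout:
        if caget(TC_A) < tc_cold:        # TC registers the fill
            break
        time.sleep(5)
    elapsed = time.time() - t0
    caput(LLCV_POS, nominal_percent)     # back to the nominal opening
    return elapsed

# e.g. the CP3 fill above: open to 50%, return to the (new) nominal 16%
# print(manual_fill(50, 16))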
The EE guys straightened me or the gear out (likely the former), and I was able to center the T240 at EndY today. This seismometer is currently on the floor just +X,+Y of the BRSY and, with respect to the ISI STS, is +1 m in X, same Y. Anyway, if it performs okay, I'll move it into the BRS thermal housing, exchanging it for the STS that I haven't been able to get running well enough of late.
The first attachment shows the ground seismos and the BRS. The two-hour stretch shows the T240 coming online around the large centering excursions; the ISI ground STS with my VEA movement; and the BRS getting rung up and now nicely quieted.
The second attachment is the current spectrum of the T240 channels and the ISI STS. The thin red and blue REF lines are the bad STS after the glitching stopped, when it still was not matching the ISI STS--see alog 34198 for how poorly it compared. The current traces for ADC_0_ are now the T240, and you can see that it matches the ISI STS much better down to tens of mHz. The lowest frequencies will be poor while the instrument thermalizes, and we don't care too much about the stuff above 10 Hz or so. I can't vouch for the calibration yet for comparison to the STS, but it is pretty good at the useism.
On Tuesday, if it continues to do okay, I will swap it in; it will likely then need a day or so of further thermalizing before it is ready for use.
I made the jumper modification inside the CNS-II GPS receiver at EY. The J39 jumper was moved from spanning pins 3-5 (amplitude modulation) to pins 1-3 (pulse width modulation). The procedure was:
The EY unit label (located on the unit bottom):
Model CNSC02-C
Serial Number: 404357
Options: 2
Using dataviewer, I verified that the EY IRIG-B DQ channel is now PWM.
I'll modify the EX unit at the next opportunity.
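A quick offline cross-check of the modulation change is to pull a couple of seconds of the IRIG-B DQ channel and look at the waveform: PWM shows up as constant-amplitude pulses of varying width rather than an amplitude-modulated carrier. A rough gwpy sketch is below; the channel name and GPS time are placeholders, not the actual DQ channel name.

# Sketch: fetch ~2 s of the EY IRIG-B DQ channel and plot the waveform (placeholder names).
from gwpy.timeseries import TimeSeries

CHAN = 'H1:PEM-EY_IRIGB_DQ'           # placeholder; substitute the actual EY IRIG-B DQ channel
start = 1171800000                    # placeholder GPS time after the jumper change
data = TimeSeries.get(CHAN, start, start + 2)
plot = data.plot()                    # PWM: fixed-amplitude pulses with varying width
plot.show()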
Operator notification of loss of end station GPS receiver: I verified that when the EY GPS receiver was powered down, SDF did not report any diffs (and therefore H1 was not knocked out of observation mode); the timing summary status on the CDS overview screen went RED (see attachment, a time-machine plot from 10:30:00 PST); and there was no verbal alarm.
The first two of these are the expected behaviour; the lack of a verbal alarm is a fault, and we will add this channel to the verbal alarm system.
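For illustration, the missing notification amounts to watching one more status channel and announcing when it goes bad. A minimal sketch of that kind of poll is below; the PV name is a placeholder and the real verbal alarm system is an existing, separate piece of code that would simply gain a channel.

# Sketch: poll a timing-status PV and flag a fault (placeholder PV name).
import time
from epics import caget

TIMING_PV = 'H1:SYS-TIMING_Y_GPS_RECEIVER_OK'   # placeholder for the EY GPS receiver status PV

while True:
    if not caget(TIMING_PV):                     # the real system would speak this alarm aloud
        print('ALARM: EY GPS receiver status bad')
    time.sleep(60)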
Hugh was in the VEA at End Y between 18:05 and 18:31 UTC doing BRS STS work. At Vern's suggestion, we kept the IFO in Observing Mode and just logged the times of the incursion.
Dave B. was in the EY electronics bay for a similar period of time.
To clarify, I deployed a T240 seismometer on the floor near the BRS. The BRS was not opened. The T240 will swap in for the non-functioning STS inside the BRS enclosure on Tuesday, if the T240 seems to perform better than the STS--we expect it will, as it is brand new.
To clarify, I was in the front room, working in the computer rack. The analog signals (1PPS to the comparator, IRIG-B to the h1iscey PEM-AA Chassis) were unpowered between 10:23:31 and 10:38:55 PST.
Not much to see here. Measurements show that all is nominally ok.
WeeklyLaser plot shows a continuing downward trend of PMC Trans power. Also, attached is the PMC medm screen to show the ratio.
Everything else looks to be nominally ok.
Work Permit | Date | Description | alog/status |
6493.html | 2017-02-20 14:21 | Restore filter settings on OAF range BLRMS. This is not going to affect the interferometer operation or DMT range calculation. | 34301 |
6492.html | 2017-02-17 17:25 | Removed beam dump from the FLIR camera path (both CO2X and Y). This will be the new nominal configuration. | scheduled for Feb 28 |
6491.html | 2017-02-17 17:21 | Upgrade h1dmtlogin (marble) to current Debian OS (8). | 34297 |
6490.html | 2017-02-17 11:54 | Move the IRIG-B source signal from the IRIG-B Fanout to the CNS-II GPS receiver. | 34293 |
6489.html | 2017-02-16 16:35 | Fix tconvert on Debian 8 workstations, as it is now, it cannot convert dates of the form "Oct 15 2016 12:34:56 UTC" when the month is Oct. Every other month works. | |
6488.html | 2017-02-15 10:31 | Switch HAM1 Z Sensor Correction from HAM2 STS to ITMY STS--This allows us to troubleshoot the EndY PEM STS by borrowing the STS Host Box. The TF between the HAM2 and ITMY STSs is unity w/ 0 phase below ~2Hz covering the band of the SC. | 34176 |
6487.html | 2017-02-15 10:30 | Duplicate of WP 6488. |
TITLE: 02/22 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 8mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY: Locked for 25 hours. No issues were passed from Corey.
TITLE: 02/22 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 70Mpc
INCOMING OPERATOR: Travis
SHIFT SUMMARY:
Nice and very quiet shift with nothing to hand off to Travis. This is just how we like it!
LOG:
After 12min of OBSERVING, had another PSL trip.
Went to the PSL Chillers and there must have been an explosion of water in there. The Crystal Chiller cap did NOT pop off. The Crystal & Diode chillers both have Error Alarms (Crystal has Flow Sensor Alarm & Diode has F2-Error).
Will probably call Jason shortly.
In CORRECTIVE MAINTENANCE state.
As with the first trip documented here, this one also appears to be caused by an erratic flow reading in the Laser Head 2 flow sensor. As before, the flow reading from the Head 2 sensor becomes erratic before the trip and begins to drop, eventually hitting the trip point and tripping the interlock. The first attachment shows the 3 flow rates from the laser head circuit (once again, the 4th flow sensor is forced to a specific value in Beckhoff so cannot contribute to the trip). The second attachment is a zoomed-in view of the Laser Head 2 flow sensor signal.
As for the F2 error on the diode chiller, this error did not cause the chiller to turn off. I remotely turned it off and asked Corey to power cycle the chiller. This cleared the error.
The plug not popping off can be a problem. When the pump stops, there is a back surge of water into the reservoir, which is what blows off the plug and sends a shower of water onto the floor. If the plug is too tight to allow the pressure from the back surge to be relieved, there is a chance of cracking the reservoir. I will call Technotrans to see if there is a better way to relieve this back pressure. Until I hear from them, PLEASE DO NOT reef down tight on the plug. Leave it loose. Cleaning up a bit of water on the floor is a lot better than swapping out a damaged chiller.
13:01 UTC: H1 & the H1 PSL Front End went down.
Will make a call to Jason.
On the PSL SYSSTAT medm, we had the Oscillator/Head 1-4 flow error in a Fault/red state. Jason said this meant that we had a "glitch in laser head flow sensors". Jason brought back the PSL remotely. I topped off the PSL Chiller (the cap did NOT pop off) with 375mL of water.
13:58 Back to LOCK ACQUISITION
FRS #7460 filed for ~1 hr of CORRECTIVE_MAINTENANCE down time.
Cause appears to be a trip of the Laser Head 1-4 flow interlock, specifically Head 2. The first attachment shows the 3 laser head flow rates (the 4th is currently being forced to a value in the Beckhoff software, so cannot contribute to the trip). It is clear that Head 2 became erratic and began to drop in flow, eventually passing below the trip point and tripping the laser. The second attachment shows a zoomed-in view of only the Head 2 flow. Possible debris passing through the flow sensor?
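For anyone repeating this check, the trip time can be read off the flow-sensor channel by finding where it first drops below the trip point. A rough sketch is below; the channel name, trip level, and GPS window are placeholders, not the actual Beckhoff channel name or setpoint.

# Sketch: locate the first sample where the Head 2 flow drops below the trip point.
import numpy as np
from gwpy.timeseries import TimeSeries

CHAN = 'H1:PSL-OSC_FLOW_HEAD2'    # placeholder for the Laser Head 2 flow channel
TRIP = 0.4                        # placeholder trip level, in the channel's flow units

# Window bracketing the trip; adjust to the actual trip time.
flow = TimeSeries.get(CHAN, 1171803000, 1171804000)
below = np.nonzero(flow.value < TRIP)[0]
if below.size:
    print('first crossing below the trip point at GPS', flow.times.value[below[0]])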
Went through an updated LVEA Sweep checklist (T1500386) after Maintenance Day activities. Issues/Notes are below (and some photos are attached).
Cleanroom curtains on ISC Tables:
Do we worry about this? Found "mechanical shorts" of curtains contacting various ISC Tables. Below is what I found for curtains/tables and the actions taken (if any):
Other Items:
Blip Glitch Set-Up: EXTEND Until End of O2
This is for Robert. We should update the note on this set-up to say "Set-up until End of O2".
Tom Dent, Miriam Cabero
We have identified a sub-set of blip glitches that might originate from PSL glitches. A glitch with the same morphology as a blip glitch shows up in the PSL-ISS_PDA_REL_OUT_DQ channel at the same time as a blip glitch is seen in the GDS-CALIB_STRAIN channel.
We have started identifying times of these glitches using omicron triggers from the PSL-ISS_PDA_REL_OUT_DQ channel with 30 < SNR < 150 and central frequencies between ~90 Hz and a few hundred Hz. A preliminary list of these times (ongoing; only the period Nov 30 - Dec 6 so far) can be found in the file
https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt
or, with omega scans of both channels (and with a few quieter glitches), in the wiki page
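For anyone who wants to reproduce the starting point of this selection, a rough sketch with the standard gwtrigfind/gwpy tools is below. The trigger-file discovery call, table format, and column names are my assumptions about the usual Omicron output, and the 500 Hz upper cut is just an example stand-in for "a few hundred Hz"; the resulting times still need to be vetted by eye as described above.

# Sketch: select PSL-ISS Omicron triggers with 30 < SNR < 150 and 90 Hz < f < 500 Hz.
from gwtrigfind import find_trigger_files
from gwpy.table import EventTable

CHAN = 'H1:PSL-ISS_PDA_REL_OUT_DQ'
start, end = 1164499217, 1165104017        # approximately Nov 30 - Dec 7, 2016

cache = find_trigger_files(CHAN, 'omicron', start, end)
trigs = EventTable.read(cache, format='ligolw', tablename='sngl_burst',
                        columns=['peak_time', 'peak_time_ns', 'snr', 'peak_frequency'])
candidates = trigs.filter('snr > 30', 'snr < 150',
                          'peak_frequency > 90', 'peak_frequency < 500')
for row in candidates:
    print(row['peak_time'] + 1e-9 * row['peak_time_ns'], row['snr'])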
Only two of those times have full omega scans for now:
The whitened time-series of the PSL channel looks like a typical loud blip glitch, which could be helpful to identify/find times of this sub-set of blip glitches by other methods more efficient than the omicron triggers:
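As one starting point for such a method, the whitened series itself is easy to reproduce offline. A minimal gwpy sketch is below, using one of the times from the list above that is also confirmed as a coincident PSL/GDS blip in the comments below; the whitening parameters are my own choice, not necessarily those used for the attached plot.

# Sketch: reproduce the whitened PSL time series around a known blip time.
from gwpy.timeseries import TimeSeries

t0 = 1164882018.545                        # coincident PSL/GDS blip from the list above
psl = TimeSeries.get('H1:PSL-ISS_PDA_REL_OUT_DQ', t0 - 16, t0 + 16)
white = psl.whiten(fftlength=4, overlap=2)
plot = white.crop(t0 - 0.5, t0 + 0.5).plot()   # the blip shows up as a short, sharp transient
plot.show()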
The CBC wiki page has been moved to https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips
I ran PCAT on H1:GDS-CALIB_STRAIN and H1:PSL-ISS_PDA_REL_OUT_DQ from November 30, 2016 to December 31, 2016 with a relatively high threshold (results here: https://ldas-jobs.ligo-wa.caltech.edu/~cavaglia/pcat-multi/PSL_2016-11-30_2016-12-31.html). Then I looked at the coincidence between the two channels. The list of coincident triggers is:
List of triggers common to PSL Type 1 and GDS Type 1:
#1: 1164908667.377000
List of triggers common to PSL Type 1 and GDS Type 10:
#1: 1164895965.198000
#2: 1164908666.479000
List of triggers common to PSL Type 1 and GDS Type 2:
#1: 1164882018.545000
List of triggers common to PSL Type 1 and GDS Type 4:
#1: 1164895924.827000
#2: 1164895925.031000
#3: 1164895925.133000
#4: 1164895931.640000
#5: 1164895931.718000
#6: 1164895958.491000
#7: 1164895958.593000
#8: 1164895965.097000
#9: 1164908667.193000
#10: 1164908667.295000
#11: 1164908673.289000
#12: 1164908721.587000
#13: 1164908722.198000
#14: 1164908722.300000
#15: 1164908722.435000
List of triggers common to PSL Type 1 and GDS Type 7:
#1: 1166374569.625000
#2: 1166374569.993000
List of triggers common to PSL Type 1 and GDS Type 8:
#1: 1166483271.312000
I followed up with omega scans, and among the triggers above only 1164882018.545000 is a blip glitch. The others are ~1 sec broadband glitches with frequencies between 512 and 1024 Hz. A few scans are attached to the report.
Hi Marco,
your 'List of triggers common to PSL Type 1 and GDS Type 4' (15 times in two groups) are all during the known times of telephone audio disturbance on Dec 4 - see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32503 and https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLGlitches
I think these don't require looking into any further; the other classes may tell us more.
The GDS glitches that look like blips in the time series seem to be type 2, 7, and 8. You did indeed find that the group of common glitches PSL - GDS type 2 is a blip glitch. However, the PSL glitches in the groups with GDS type 7 and 8 do not look like blips in the omega scan. The subset we identified clearly shows blip glitch morphology in the omega scan for the PSL channel, so it is not surprising that those two groups turned out not to be blips in GDS.
It is surprising, though, that you only found one time with a coincident blip in both channels, when we identified several more in just one week of data from the omicron triggers. What was the "relatively high threshold" you used?
Hi. Sorry for taking so long with this. I reran PCAT on the PSL and GDS channels between 2016-11-30 and 2016-12-31 with a lower threshold for glitch identification (glitches with amplitude > 4 sigma above the noise floor) and with a larger coincidence window (coincident glitches within 0.1 seconds). The list of coincident glitches found is attached to the report. Four glitches in Miriam's list [https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt] show up in the list: 1164532915.0 (type 1 PSL/type 3 GDS), 1164741925.6 (type 1 PSL/type 1 GDS), 1164876857.0 (type 8 PSL/type 1 GDS), and 1164882018.5 (type 1 PSL/type 8 GDS). I looked at the other glitches in these types and found only one additional blip, at 1166374567.1 (type 1 PSL/type 1 GDS), out of 9 additional coincident glitches.
The typical waveforms of the GDS glitches show that the blip type(s) in GDS are type 1 and/or type 8. There are 1998 (type 1) and 830 (type 8) glitches in these classes. I looked at a few examples in class 8 and indeed found several blip glitches which are not coincident with any glitch in the PSL channel. I would conclude that PCAT does not produce much evidence for a strong correlation of blip glitches in GDS and PSL; if there is one, PSL-coincident glitches must be a small subset of blip glitches in h(t). However, some blips *are* coincident with glitches in the PSL, so looking more into this may be a good idea.
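For reference, the 0.1 s coincidence test itself is simple to reproduce given two plain lists of trigger times; below is a minimal numpy sketch (the input file names are placeholders, and this is not the PCAT code).

# Sketch: find PSL trigger times with a GDS trigger within +/- 0.1 s (placeholder file names).
import numpy as np

psl = np.sort(np.loadtxt('psl_trigger_times.txt'))   # one GPS time per line
gds = np.sort(np.loadtxt('gds_trigger_times.txt'))
window = 0.1                                         # coincidence window in seconds

idx = np.searchsorted(gds, psl)                      # nearest GDS neighbour for each PSL time
left = np.clip(idx - 1, 0, len(gds) - 1)
right = np.clip(idx, 0, len(gds) - 1)
nearest = np.minimum(np.abs(psl - gds[left]), np.abs(psl - gds[right]))
print(psl[nearest < window])                         # PSL times with a GDS trigger in the window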
Hi,
thanks Marco for looking into this. We already expected that it was a small sub-set of blip glitches, because we only found very few of them and we knew the total number of blip glitches was much higher. However, I believe that not all blip glitches have the same origin and that it is important to identify sub-sets, even if small, to possibly fix whatever could be fixed.
I have extended the wiki page https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips and the list of times https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt up to yesterday. It is interesting to see that I did not identify any PSL blips between, e.g., Jan 20 and Jan 30, but that they come back more often after Feb 9. Unfortunately, it is not easy to automatically identify the PSL blips: the criteria I used for the omicron triggers (SNR > 30, central frequency ~few hundred Hz) do not always yield blips but also things like https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=156436, which also affects CALIB_STRAIN but not in the form of blip glitches.
None of the times I added up to December appear in your list of coincident glitches, but that could be because their SNR in the PSL is not very high and they only leave a very small imprint in CALIB_STRAIN compared with the ones from November. In January and February there are several louder ones with a bigger effect on CALIB_STRAIN, though.
The most recent iteration of PSL-ISS flag generation showed three relatively loud glitch times:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170732596.35/
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170745979.41/
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170212/latest/scans/1170950466.83/
The first 2 are both on Feb 10; in fact a PSL-ISS channel was picked by Hveto on that day (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/#hveto-round-8), though not with very high significance.
PSL not yet glitch-free?
Indeed the PSL is not yet glitch-free, as I already pointed out in my comment from last week.
Imene Belahcene, Florent Robinet
At LHO, a simple command line works well at printing PSL blip glitches:
source ~detchar/opt/virgosoft/environment.sh
omicron-print channel=H1:PSL-ISS_PDA_REL_OUT_DQ gps-start=1164500000 gps-end=1167500000 snr-min=30 freq-max=500 print-q=1 print-duration=1 print-bandwidth=1 | awk '$5==5.08&&$2<2{print}'
GPS times must be adjusted to your needs.
This command line returns a few GPS times not contained in Miriam's blip list; these must be checked to confirm that they are actual blips.
The PSL has different types of glitches that match those requirements. When I look at the Omicron triggers, I do indeed check that they are blip glitches before adding the times to my list. Therefore it is perfectly consistent that you find GPS times with those characteristics that are not in my list. However, feel free to check again if you want/have time. Of course I am not error-free :)
I believe the command I posted above is an almost-perfect way to retrieve a pure sample of PSL blip glitches. The key is to only print low-Q Omicron triggers.
For example, GPS=1165434378.2129 is a PSL blip glitch and it is not in Miriam's list.
There is nothing special about what you call a blip glitch: any broadband and short-duration (hence low-Q) glitch will produce the rain-drop shape in a time-frequency map. This is due to the intrinsic tiling structure of Omicron/Omega.
Next time I update the list (probably some time this week) I will check the GPS times given by the command line you suggest. It would be nice if it does indeed work perfectly at finding only these glitches; then we'd have an automated PSL blip finder!