TITLE: 01/24 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Mostly a quiet shift
LOG:
19:00 Kyle to MY, back 20:00
22:00 Kyle to MY, back 22:30
Simply by chance I happened to notice the Diode Chiller warning light flash red momentarily on the laser system status screen. I informed the operator (Jim W.) and went to take a look. I watched for a minute and it happened again, so I added 500 ml. I'm not certain whether it could have taken more, but it seemed as if the level was just at the threshold where the turbulence was barely tripping the warning.
From the plot, it looks as if this hasn't been happening for a long period of time.
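To check how long the warning has been marginal, one could trend the chiller flow/warning channel over the past day or so. A minimal gwpy sketch, with a placeholder channel name (the real PSL chiller channel should be taken from the laser status MEDM screen), not the plot referenced above:

from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

chan = 'H1:PSL-OSC_DIODE_CHILLER_FLOW'    # hypothetical channel name
start = to_gps('2017-01-23 22:00:00')      # ~1 day before the flashes were seen
end = to_gps('2017-01-24 22:00:00')

flow = TimeSeries.get(chan, start, end)    # fetch via frames/NDS
plot = flow.plot()
plot.gca().set_ylabel('flow (arb. units)')
plot.savefig('diode_chiller_flow_trend.png')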
https://lhocds.ligo-wa.caltech.edu will now display the Shibboleth Discovery Service page when it prompts you to log in. If you are familiar with accessing FRS, DCC, or other LIGO services, this is the "which organization are you from?" window that prompts you to pick between LIGO, KAGRA, backup IdPs, etc. This replaces the recently superseded mechanism (the hidden bit that automatically pointed you to a LIGO IdP).
This change was made around 22:45 UTC.
All plots look to be in normal, nominal ranges. There are obvious humidity increases that are consistent with the temperature increases that have been happening in the LVEA. Also, on 1/20 Robert Schofield was measuring water flow noise and made a 9% increase in the flows, which is apparent in the chiller pressure plots.
Agree with Ed's analysis, everything looks normal.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 17 seconds. TC B did not register fill. LLCV set back to 17.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 2275 seconds. LLCV set back to 36.0% open.
Raised CP4 LLCV from 36% to 37% open.
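For reference, the fill sequence logged above boils down to an open/wait/restore loop. A rough guardian-style sketch with hypothetical channel names (the real LLCV and thermocouple channels should be read off the vacuum MEDM screens), not the actual fill script:

import time

def fill_cp(ezca, llcv_chan, tc_chan, fill_percent, nominal_percent,
            tc_threshold=-20.0, timeout=2400):
    """Open the LLCV to fill_percent, wait until the exhaust thermocouple
    reads cold (liquid overflow), then restore the nominal opening.
    All channel names are passed in; none are verified here."""
    ezca[llcv_chan] = fill_percent
    t0 = time.time()
    while time.time() - t0 < timeout:
        if ezca[tc_chan] < tc_threshold:   # assumed convention: cold TC = fill complete
            break
        time.sleep(1)
    ezca[llcv_chan] = nominal_percent
    return time.time() - t0

# e.g. CP4: open to 70%, restore to 36% (channel names are placeholders)
# fill_cp(ezca, 'CDS-VAC_CP4_LLCV_POS_REQUEST', 'CDS-VAC_CP4_TC_A', 70, 36)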
There have been a couple of broadband PCal injections done at LHO during O2 so far, to ascertain the calibration over the full frequency band of ~10-200 Hz, as opposed to the select frequencies used for sweeps. Attached are plots showing the comparison of PCal and various calibrated DARM data during those injection times. The PCal, CAL-DELTAL-EXTERNAL and GDS data were obtained from frames, and the DCS data (offline data that applies time-varying corrections) were generated with kappa values applied by hand. Since calibration lines weren't available during the injections, the kappas that track the time-varying corrections were obtained from data that followed the injection times and applied by hand for these comparisons.
The first plot shows the comparison for the injection done on Nov 30, 2016 (a-log 31994). In the plot, we see that the DCS data agrees well with PCal in the frequency band of ~10-200 Hz, while the other two (CAL-DELTAL-EXTERNAL and GDS) have reasonable (?) agreement with PCal. The DCS data was generated using the following command:
GPS_START_TIME = 1164509182
GPS_END_TIME = 1164509732
FILTERS = 'H1DCS_1163173888.npz'
GST_DEBUG=3 gstlal_compute_strain --gps-start-time $(GPS_START_TIME) --gps-end-time $(GPS_END_TIME) \
    --frame-cache H1_raw_frames.cache --data-source=frames --filters-file $(FILTERS) --ifo=H1 \
    --full-calibration --control-sample-rate 16384 --frame-duration=4 --frames-per-file=1 \
    --compression-scheme=6 --compression-level=3 --frame-type H1_TEST --chan-prefix DCS- --chan-suffix _C01 \
    --expected-fcc=341.0 --coherence-uncertainty-threshold=0.0001 \
    --apply-kappatst --apply-kappapu --apply-kappac \
    --expected-kappatst-real=1.009 --expected-kappapu-real=1.005 --expected-kappac=0.982
The kappa values used in the above command were taken from the DCS frames (just after 09:30:00 UTC). The code used to produce this plot has been added to the SVN at /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/PCAL/PcalBroadbandComparison20161130.m
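The time-varying kappas themselves can be pulled from the calibration frames with, e.g., gwpy. This is only a sketch; the kappa channel names below follow the DCS-/_C01 prefix/suffix convention of the command above but have not been verified against the frame contents:

from gwpy.timeseries import TimeSeries

# assumed channel names, built from the DCS- prefix and _C01 suffix used above
kappa_channels = [
    'H1:DCS-CALIB_KAPPA_TST_REAL_C01',
    'H1:DCS-CALIB_KAPPA_PU_REAL_C01',
    'H1:DCS-CALIB_KAPPA_C_C01',
]
start, end = 1164509732, 1164509796    # ~1 minute of data just after the injection segment

for chan in kappa_channels:
    ts = TimeSeries.get(chan, start, end)   # may need an explicit frame cache/frametype
    print(chan, float(ts.value.mean()))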
The second plot shows a similar comparison for the injection done on Jan 04, 2017 (a-log 32942). Here we see that the DCS data agrees well with PCal, while both CAL-DELTAL-EXTERNAL and GDS show significant differences. This confirms the trouble we were having with the GDS data during that week (see a-log 32973). It also confirms that the DCS data (available in C01 frames) rectifies some of those problems. The DCS data was generated using the following command:
GPS_START_TIME = 1167533440
GPS_END_TIME = 1167533990
FILTERS = 'H1DCS_1167436818.npz'
GST_DEBUG=3 gstlal_compute_strain --gps-start-time $(GPS_START_TIME) --gps-end-time $(GPS_END_TIME) \
    --frame-cache H1_raw_frames.cache --data-source=frames --filters-file $(FILTERS) --ifo=H1 \
    --full-calibration --control-sample-rate 16384 --frame-duration=4 --frames-per-file=1 \
    --compression-scheme=6 --compression-level=3 --frame-type H1_TEST --chan-prefix DCS- --chan-suffix _C01 \
    --expected-fcc=341.0 --coherence-uncertainty-threshold=0.0001 \
    --apply-kappatst --apply-kappapu --apply-kappac \
    --expected-kappatst-real=1.0035 --expected-kappapu-real=1.00 --expected-kappac=1.0042
The kappa values used in the above command were taken from the DCS frames (just after 03:30:00 UTC). The code used to produce this plot has been added to the SVN at /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/PCAL/PcalBroadbandComparison20170104.m
model restarts logged for Sun 22/Jan/2017 - Sat 21/Jan/2017 No restarts reported
model restarts logged for Fri 20/Jan/2017
2017_01_20 10:42 h1oaf
2017_01_20 10:44 h1dc0
2017_01_20 10:44 h1fw0
2017_01_20 10:45 h1fw1
2017_01_20 10:46 h1fw2
2017_01_20 10:46 h1nds0
2017_01_20 10:46 h1nds1
2017_01_20 10:46 h1tw1
2017_01_20 10:50 h1broadcast0
h1oaf model change with DAQ restart
model restarts logged for Thu 19/Jan/2017 No restarts reported
model restarts logged for Wed 18/Jan/2017
2017_01_18 12:49 h1sysecatc1plc1sdf
2017_01_18 12:49 h1sysecatc1plc2sdf
2017_01_18 12:49 h1sysecatc1plc3sdf
2017_01_18 12:51 h1pslopcsdf
2017_01_18 12:51 h1sysecatx1plc1sdf
2017_01_18 12:51 h1sysecatx1plc2sdf
2017_01_18 12:51 h1sysecatx1plc3sdf
2017_01_18 12:51 h1sysecaty1plc1sdf
2017_01_18 12:51 h1sysecaty1plc2sdf
2017_01_18 12:51 h1sysecaty1plc3sdf
2017_01_18 12:54 h1hpipumpctrlsdf
Restarts of Beckhoff SDF targets following unexpected crash of h1build.
Jim and I got the BRS seismometer working in the following temporary PEM channels:
X-axis: H1:PEM-EY_ADC_0_09_OUT_DQ
Y-axis: H1:PEM-EY_ADC_0_10_OUT_DQ
Z-axis: H1:PEM-EY_ADC_0_19_OUT
Robert
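A quick way to look at these temporary channels is to pull a stretch of data and compare their spectra. A minimal gwpy/matplotlib sketch (the time stretch below is a placeholder):

import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeriesDict
from gwpy.time import to_gps

channels = {
    'H1:PEM-EY_ADC_0_09_OUT_DQ': 'X',
    'H1:PEM-EY_ADC_0_10_OUT_DQ': 'Y',
    'H1:PEM-EY_ADC_0_19_OUT': 'Z',    # not a _DQ channel, so may only be available live via NDS
}
start = to_gps('2017-01-24 08:00:00')   # placeholder stretch of time
data = TimeSeriesDict.get(list(channels), start, start + 600)

fig, ax = plt.subplots()
for name, axis in channels.items():
    asd = data[name].asd(fftlength=64, overlap=32)
    ax.loglog(asd.frequencies.value, asd.value, label='{} ({}-axis)'.format(name, axis))
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('ASD [counts/rtHz]')
ax.legend()
fig.savefig('brs_seismometer_asds.png')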
TITLE: 01/23 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Fairly quiet night. No major issues relocking.
LOG:
11:07 UTC Damped PI mode 27 by changing sign of gain
11:12 UTC Lock loss
11:14 UTC Set observatory mode to corrective maintenance to look at BRSY. Logged into h1brsey as controls and double clicked the 'BRS C#' shortcut on the desktop. The program started and BRSY is no longer reporting a fault.
11:20 UTC Set observatory mode back to lock acquisition.
12:21 UTC NLN
12:23 UTC Observing
12:43 UTC PI mode 27 started ringing up. Changing the sign of the gain made it ring up faster. Setting the sign of the gain back made it ring back down. It is now elevated but holding steady. Looking at DTT it appears the peak is currently at around 18038 Hz. The selected BP filter module matches this but the PLL set frequency does not (it is at 240.2). Since changing it would take us out of observing and we are currently in dual coincidence I will leave it as is.
13:07 UTC PI mode 27 is back down in the noise floor.
13:08 UTC Just realized I forgot to set the observatory mode to observing and did so. Is there any way this could be done automatically when we hit the intent bit?
13:54 UTC Damped PI mode 28 by changing sign of gain
13:56 UTC Restarted video0
15:00 UTC Granted remote access to Sebastien to work on seismon code
Back to observing at 12:23 UTC.
Lost lock at 11:12 UTC. Reason unclear. Took the opportunity to fix the BRSY fault. Just locked DRMI.
TITLE: 01/23 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 3mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.73 μm/s
QUICK SUMMARY: BRSY is in fault. Leaving alone unless we lose lock.
TITLE: 01/23 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: A bit of commissioning earlier in the evening. BRSY is faulty (alog33522). Patrick suggested I should leave the Beckhoff alone until we lose lock so I did.
LOG:
02:44 Out of observing, Sheila and Robert doing some injection
03:44 Back to Observe.
05:27 BRSY went fault.
Jim (on phone), Nutsinee
I noticed the BRSY fault message so I dug around and found BRS_ETMY_RX_INMON and BRS_ETMY_VEL to be flat -- so I called Jim. As Jim requested, I attached spectra of the subtracted and the unsubtracted BRSY signals, before and after it went into fault. The two signals are exactly the same after the BRS went into fault. I also attached timeseries of the bit channels (T1600103 -- see Status Bit troubleshooting). I couldn't find DRIFT BIT so I attached DRIFTMON instead. CBIT and DRIFTMON went bad/flat around 05:27 UTC.
According to Jim BRSY is not feeding anything (flat signal?) to the ISI right now. End Y is currently using BLEND_Quiet_250_SC_BRS. As long as the wind is low this configuration should be okay.
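A quick way to confirm this from the control room is to trend the BRSY status channels across the fault time. A gwpy sketch, where the CBIT channel name follows the DIAG_MAIN check further below, while the DRIFTMON/RX_INMON names are assumptions built from the same prefix (and all of these are slow EPICS records, so they must be in the DAQ EDCU to be fetchable):

from gwpy.timeseries import TimeSeriesDict
from gwpy.time import to_gps

t_fault = to_gps('2017-01-23 05:27:00')    # UTC time the fault appeared
channels = [
    'H1:ISI-GND_BRS_ETMY_CBIT',            # C# code heartbeat bit
    'H1:ISI-GND_BRS_ETMY_DRIFTMON',        # assumed drift monitor name
    'H1:ISI-GND_BRS_ETMY_RX_INMON',        # assumed full name of the flat channel
]
data = TimeSeriesDict.get(channels, t_fault - 600, t_fault + 600)
for name, ts in data.items():
    after = ts.crop(t_fault)
    print(name, 'min/max after fault:', after.value.min(), after.value.max())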
As per the "Troubleshooting guide" mentioned above, the CBIT being down indicates that the C# code crashed. This code reads the CCD camera and calculates the BRS angle. It is the first time this code has crashed at EY. Odd...
To restart, one should remote login to BRS-Y beckhoff computer as described in the guide above, then close the current BRS2 C# code (labelled as such) and the BRSY EPICS script. Then follow the startup procedure from Step 9 onwards. If this doesn't work, there may be a hardware problem, which will need more steps to diagnose.
At 11:12 UTC the IFO lost lock. I logged into h1brsey and double clicked on the 'BRS C#' shortcut on the desktop. The program started and BRSY is no longer reporting a fault.
I've added a test to DIAG_MAIN to catch this (I think we used to have something similar when we just had one BRS?). Before, DIAG_MAIN just looked for the BRS guardian nodes to go into fault, and would only report that the node was in fault. I've now included an explicit check for the CBIT, so DIAG_MAIN will now tell you the C code has crashed.
def BRS_CHECK():
    """Check the two end station Beam Rotation Sensors to make sure that
    they are not in FAULT (as read from their Guardian status node state).
    Also checks the status of the CBIT, to see if the C code is live.
    """
    for end in ['X', 'Y']:
        if ezca['ISI-GND_BRS_ETM{}_CBIT'.format(end)] == 0:
            yield 'BRS {} C code has stopped'.format(end)
        elif ezca['GRD-BRS{}_STAT_STATE'.format(end)] == 'FAULT':
            yield 'BRS {} is in FAULT'.format(end)
Tom Dent, Miriam Cabero
We have identified a sub-set of blip glitches that might originate from PSL glitches. A glitch with the same morphology as a blip glitch shows up in the PSL-ISS_PDA_REL_OUT_DQ channel at the same time as a blip glitch is seen in the GDS-CALIB_STRAIN channel.
We have started identifying times of these glitches using omicron triggers from the PSL-ISS_PDA_REL_OUT_DQ channel with 30 < SNR < 150 and central frequencies between ~90 Hz and a few hundred Hz. A preliminary list of these times (ongoing; only the period Nov 30 - Dec 6 so far) can be found in the file
https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt
or, with omega scans of both channels (and with a few quieter glitches), in the wiki page
Only two of those times have full omega scans for now:
The whitened time-series of the PSL channel looks like a typical loud blip glitch, which could be helpful to identify/find times of this sub-set of blip glitches by other methods more efficient than the omicron triggers:
The CBC wiki page has been moved to https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips
I ran PCAT on H1:GDS-CALIB_STRAIN and H1:PSL-ISS_PDA_REL_OUT_DQ from November 30, 2016 to December 31, 2016 with a relatively high threshold (results here: https://ldas-jobs.ligo-wa.caltech.edu/~cavaglia/pcat-multi/PSL_2016-11-30_2016-12-31.html). Then I looked at the coincidence between the two channels. The list of coincident triggers is:
List of triggers common to PSL Type 1 and GDS Type 1:
#1: 1164908667.377000
List of triggers common to PSL Type 1 and GDS Type 10:
#1: 1164895965.198000
#2: 1164908666.479000
List of triggers common to PSL Type 1 and GDS Type 2:
#1: 1164882018.545000
List of triggers common to PSL Type 1 and GDS Type 4:
#1: 1164895924.827000
#2: 1164895925.031000
#3: 1164895925.133000
#4: 1164895931.640000
#5: 1164895931.718000
#6: 1164895958.491000
#7: 1164895958.593000
#8: 1164895965.097000
#9: 1164908667.193000
#10: 1164908667.295000
#11: 1164908673.289000
#12: 1164908721.587000
#13: 1164908722.198000
#14: 1164908722.300000
#15: 1164908722.435000
List of triggers common to PSL Type 1 and GDS Type 7:
#1: 1166374569.625000
#2: 1166374569.993000
List of triggers common to PSL Type 1 and GDS Type 8:
#1: 1166483271.312000
I followed up with omega scans and, among the triggers above, only 1164882018.545000 is a blip glitch. The others are ~1 sec broadband glitches with frequency between 512 and 1024 Hz. A few scans are attached to the report.
Hi Marco,
your 'List of triggers common to PSL Type 1 and GDS Type 4' (15 times in two groups) are all during the known times of telephone audio disturbance on Dec 4 - see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32503 and https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLGlitches
I think these don't require looking into any further; the other classes may tell us more.
The GDS glitches that look like blips in the time series seem to be type 2, 7, and 8. You did indeed find that the group of common glitches PSL - GDS type 2 is a blip glitch. However, the PSL glitches in the groups with GDS type 7 and 8 do not look like blips in the omega scan. The subset we identified clearly shows blip glitch morphology in the omega scan for the PSL channel, so it is not surprising that those two groups turned out not to be blips in GDS.
It is surprising, though, that you only found one time with a coincident blip in both channels, when we identified several more times in just one week of data from the omicron triggers. What was the "relatively high threshold" you used?
Hi. Sorry for taking so long with this. I reran PCAT on the PSL and GDS channels between 2016-11-30 and 2016-12-31 with a lower threshold for glitch identification (glitches with amplitude > 4 sigma above the noise floor) and with a larger coincidence window (coincident glitches within 0.1 seconds). The list of found coincident glitches is attached to the report.
Four glitches in Miriam's list [https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt] show up in the list: 1164532915.0 (type 1 PSL/type 3 GDS), 1164741925.6 (type 1 PSL/type 1 GDS), 1164876857.0 (type 8 PSL/type 1 GDS), 1164882018.5 (type 1 PSL/type 8 GDS). I looked at other glitches in these types and found only one additional blip at 1166374567.1 (type 1 PSL/type 1 GDS) out of 9 additional coincident glitches. The typical waveforms of the GDS glitches show that the blip type(s) in GDS are type 1 and/or type 8. There are 1998 (type 1) and 830 (type 8) glitches in these classes. I looked at a few examples in category 8 and indeed found several blip glitches which are not coincident with any glitch in the PSL channel.
I would conclude that PCAT does not produce much evidence for a strong correlation of blip glitches in GDS and PSL. If there is one, the PSL-coincident glitches must be a small subset of the blip glitches in h(t). However, some blips *are* coincident with glitches in the PSL, so looking more into this may be a good idea.
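For reference, this kind of time coincidence (two lists of GPS times matched within a 0.1 s window) only takes a few lines of numpy. A generic sketch of the method, not the PCAT code itself:

import numpy as np

def coincident(times_a, times_b, window=0.1):
    """Return pairs (ta, tb) with |ta - tb| <= window, assuming both inputs
    are 1-D arrays of GPS times."""
    a = np.sort(np.asarray(times_a, dtype=float))
    b = np.sort(np.asarray(times_b, dtype=float))
    idx = np.searchsorted(b, a)
    pairs = []
    for ta, i in zip(a, idx):
        # only the nearest neighbours in b can be within the window
        for tb in b[max(i - 1, 0):i + 1]:
            if abs(ta - tb) <= window:
                pairs.append((ta, tb))
    return pairs

# example with two of the times quoted above
print(coincident([1164882018.545], [1164882018.5], window=0.1))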
Hi,
thanks Marco for looking into this. We already expected that it was a small sub-set of blip glitches, because we only found very few of them and we knew the total number of blip glitches was much higher. However, I believe that not all blip glitches have the same origin and that it is important to identify sub-sets, even if small, to possibly fix whatever could be fixed.
I have extended the wiki page https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips and the list of times https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt up to yesterday. It is interesting to see that I did not identify any PSL blips in, e.g., Jan 20 to Jan 30, but that they come back more often after Feb 9. Unfortunately, it is not easy to automatically identify the PSL blips: the criteria I used for the omicron triggers (SNR > 30, central frequency ~few hundred Hz) do not always yield blips but also things like https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=156436, which also affect CALIB_STRAIN but not in the form of blip glitches.
None of the times I added up to December appear in your list of coincident glitches, but that could be because their SNR in PSL is not very high and they only leave a very small imprint in CALIB_STRAIN compared with the ones from November. In January and February there are several louder ones with bigger effect on CALIB_STRAIN though.
The most recent iteration of PSL-ISS flag generation showed three relatively loud glitch times:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170732596.35/
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170745979.41/
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170212/latest/scans/1170950466.83/
The first two are both on Feb 10; in fact a PSL-ISS channel was picked by Hveto on that day (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/#hveto-round-8), though not with very high significance.
PSL not yet glitch-free?
Indeed PSL is not yet glitch free, as I already pointed out in my comment from last week.
Imene Belahcene, Florent Robinet
At LHO, a simple command line works well at printing PSL blip glitches:
source ~detchar/opt/virgosoft/environment.sh
omicron-print channel=H1:PSL-ISS_PDA_REL_OUT_DQ gps-start=1164500000 gps-end=1167500000 snr-min=30 freq-max=500 print-q=1 print-duration=1 print-bandwidth=1 | awk '$5==5.08&&$2<2{print}'
GPS times must be adjusted to your needs.
This command line returns a few GPS times not contained in Miriam's blip list: must check that they are actual blips.
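If one wants to script this, the same pipeline can be driven from Python. A minimal sketch, assuming (not verified here) that the GPS time is the first whitespace-separated column of the omicron-print output:

import subprocess

cmd = (
    "source ~detchar/opt/virgosoft/environment.sh && "
    "omicron-print channel=H1:PSL-ISS_PDA_REL_OUT_DQ "
    "gps-start=1164500000 gps-end=1167500000 snr-min=30 freq-max=500 "
    "print-q=1 print-duration=1 print-bandwidth=1 "
    "| awk '$5==5.08&&$2<2{print}'"
)
out = subprocess.run(cmd, shell=True, executable='/bin/bash',
                     capture_output=True, text=True, check=True)
# assumes the first column is the trigger GPS time (check the omicron-print header)
gps_times = [float(line.split()[0]) for line in out.stdout.splitlines() if line.strip()]
print(len(gps_times), 'candidate PSL blip times')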
The PSL has different types of glitches that match those requirements. When I look at the Omicron triggers, I do indeed check that they are blip glitches before adding the times to my list. Therefore it is perfectly consistent that you find GPS times with those characteristics that are not in my list. However, feel free to check again if you want/have time. Of course I am not error-free :)
I believe the command I posted above is an almost-perfect way to retrieve a pure sample of PSL blip glitches. The key is to only print low-Q Omicron triggers.
For example, GPS=1165434378.2129 is a PSL blip glitch and it is not in Miriam's list.
There is nothing special about what you call a blip glitch: any broadband and short-duration (hence low-Q) glitch will produce the rain-drop shape in a time-frequency map. This is due to the intrinsic tiling structure of Omicron/Omega.
Next time I update the list (probably some time this week) I will check the GPS times given by the command line you suggest. It would be nice if it does indeed work perfectly at finding only these glitches; then we'd have an automated PSL blips finder!
I have had a few of the TCSY chiller flow alarms today, but all of them have almost immediately recovered. The one that just happened at 21:52 UTC lasted for about 5 min. The flow stayed below 2.5 gpm, and I checked the laser temperature out of curiosity to see if it was getting hotter from the lack of flow. The temperature had risen by about 0.15 C.
I got Jason on the case and the flow rate went back up as soon as I finished talking to him. He checked the chiller just in case, and only noticed that there may have been a few more air bubbles in the reservoir than on Tuesday; everything else looked fine.
I had a closer look at the change on the temperature channel. The temperature of the laser changes in roughly 0.1s and is simultaneous with the flow rate drop. This is far too fast to be a real thermal change. It can only be electrical in nature.
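One way to see this is to overlay the flow and laser temperature channels at full rate around 21:52 UTC. A rough gwpy sketch with placeholder channel names (the real TCSY chiller channels should be read off the TCS MEDM screens):

import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeriesDict
from gwpy.time import to_gps

t0 = to_gps('2017-01-24 21:52:00')          # UTC time of the flow alarm
channels = [
    'H1:TCS-ITMY_CO2_CHILLER_FLOW',          # hypothetical channel name
    'H1:TCS-ITMY_CO2_LASER_TEMPERATURE',     # hypothetical channel name
]
data = TimeSeriesDict.get(channels, t0 - 30, t0 + 30)

fig, axes = plt.subplots(2, 1, sharex=True)
for ax, name in zip(axes, channels):
    ts = data[name]
    ax.plot(ts.times.value - float(t0), ts.value, label=name)
    ax.legend()
axes[-1].set_xlabel('Time [s] relative to 21:52 UTC')
fig.savefig('tcsy_flow_vs_temp.png')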
Nothing noteworthy to report. Plots show the normal consistency between humidity fluctuations and front end diode powers. Incursions for FSS alignment are evident in the environmental plots.
Looking back, I believe the task number is a typo. It's supposed to read 6130. Oops.