Ops Shift Transition: 02/15/2017, Day Shift 16:00 – 00:00 UTC (08:00 – 16:00 PT)
model restarts logged for Tue 14/Feb/2017
2017_02_14 11:27 h1isiham3
2017_02_14 11:29 h1oaf
2017_02_14 12:21 h1broadcast0
2017_02_14 12:21 h1dc0
2017_02_14 12:21 h1fw0
2017_02_14 12:21 h1fw1
2017_02_14 12:21 h1fw2
2017_02_14 12:21 h1nds0
2017_02_14 12:21 h1nds1
2017_02_14 12:21 h1tw1
2017_02_14 14:32 h1broadcast0
2017_02_14 14:32 h1dc0
2017_02_14 14:32 h1nds1
2017_02_14 14:34 h1fw0
2017_02_14 14:34 h1fw1
2017_02_14 14:34 h1fw2
2017_02_14 14:34 h1nds0
2017_02_14 14:34 h1tw1
Maintenance day. New code for ISI-HAM3 and OAF with associated DAQ restart. New code for h1ecatc1plc2 (not shown) with its DAQ restart. h1guardian0 was rebooted (not shown).
Work Permit | Date | Description | alog/status |
6486.html | 2017-02-14 11:46 | Change configuration on CNS II clocks to position hold (from 3D fix). Requires TacPlus software & PC. | |
6485.html | 2017-02-14 11:23 | In addition to adding hardware for the bullseye QPD test to measure beam size changes coming out of the HPO, we need to make some software changes in Beckhoff so that we can use a spare whitening chassis. | 34152 34154 |
6484.html | 2017-02-14 10:26 | Prototype fix to inability to reset SUS watchdogs on Debian screens by re-writing underlying script in python. Test on a few QUADs. | 34125 34137 |
6483.html | 2017-02-13 15:29 | Install HFS spare server as h1hfwsmas1 and split the HFS cameras into the two servers h1hfwsmsr and h1hfwsmsr1. We will connect one camera from each ITM onto each of those servers and restart the code. | 34152 |
6482.html | 2017-02-13 13:29 | Gil Hibbs from Hibbs Engineering will be onsite Tues. Feb. 14th at 1:30 pm to look at MX or MY to scope out cryopump bake enclosure design in preparation for cryopump decommissioning post O2. | |
6481.html | 2017-02-13 13:08 | Interferometer studies when LLO is down. This will include injections to attempt to distinguish between scattering and clipping at the ITM elliptical baffles, adjusting IMC_WFS offsets, injection to study scattering at ERMY and X, noise measurements immediately after A to L, and diagnostic injections if scattering becomes evident. | |
6480.html | 2017-02-13 13:00 | Update GDS to gds-2.17.13-1 on the DMT. As per https://bugs.ligo.org/redmine/issues/5110, John Z. says the changes are: The new version (gds-2.17.13-1) contains a bunch of infrastructure tweaks that have been well tested offline. These include: * correct tagging of dataValid when reading data segments that are shorter than the length of a frame * interleaved reading of data from multiple online frame streams, i.e. if there are two online data streams providing input data to a monitor, the data are read from both streams as they arrive rather than reading data from one stream first, followed by data from the second stream. The main reason that I would like to install the package now is that it includes the shared memory partition multiplexer. This will be necessary for implementing the dual broadcaster configuration and I would like to have this package installed and running before switching to a dual-broadcaster configuration, hopefully next Tuesday while I am at LLO. | 34152 |
6479.html | 2017-02-13 12:54 | Update gstlal-calibration on the DMT to gstlal-calibration-1.1.4-v1 and restart it. As per https://bugs.ligo.org/redmine/issues/5115, the changes are: * The NOGATE channels for the calibration factors (kappas) have incorrectly computed values when the kappas are gated. (This does not affect the applied values.) * The kappas are being gated by the observation-ready bit of the ODC state vector, corrupting the data during broadband injections. The calibration team holds that gating with the coherence of the calibration lines is sufficient, and the state vector gating is unnecessary and detrimental. * The pipeline exits on bad dataValid flags, a feature that Greg Mendell would like removed for C01 production. The new version will fill in those times with zeros and mark the data as bad via the h(t)-ok bit of the calib_state_vector. * Removal of all audiorate and lal_reblock elements. These are no longer necessary to clean up the data, since this is all done at the beginning of the pipeline by an element called lal_insertgap. Removal of these redundant elements is expected to reduce CPU usage. * Rearrangement of queues in the pipeline. This is aimed at the eventual goal of reducing the latency of the pipeline from ~10 seconds to ~1-2 seconds, but is not currently expected to have any noticeable effect. | 34126 34152 |
6478.html | 2017-02-13 11:36 | Power on all CDS WAPs (CS, EX, EY) [Wireless Access Points] to verify sysadmins get an email and reminders while these are ON. | 34152 |
6477.html | 2017-02-13 11:34 | Check that the IRIG-B channel connected to the PCAL AA chassis is coming from the IRIG-B fanout. | 34152 |
6476.html | 2017-02-13 09:53 | Troubleshoot readbacks for the panels on table 6. Install new feed-throughs on TCS and HWS tables for interlock monitoring. While on the TCS table, get some serial numbers from lasers. | 34149 |
6475.html | 2017-02-10 16:11 | Test remotely powering up the FLIR cameras, capturing an image, powering down camera. Change Beckhoff C1PLC3 SDF to flag an error if the FLIR power is ON. | 34106 34152 |
6474.html | 2017-02-10 14:13 | set up damping for new PI (alog 34046). Since we are in observe I will try to unmonitor the things that need to be changed, set them up, then monitor those that should be monitored again. | 34129 |
6473.html | 2017-02-10 13:10 | Make changes to range calculation imported from LLO to avoid large numbers from the inversion and add a matrix to select which bands go into the sum. | 34152 |
6472.html | 2017-02-09 16:29 | De-energize LLCV electric actuators, remove electric actuator housing domes, lift 4-20mA and 24VDC wires from actuator and remove 1/2" flexible conduit from housings, tighten 1/2 conduit elbows to ensure good contact with their sealing o-rings, reconnect conduit to elbows, re-land wires, re-install housing dome and re-energize actuators | 34133 34151 |
Previous W.P. | |||
6457.html | 2017-01-26 13:43 | Remove Coil Driver BIO Connections to WD: Modify & Compile models; DeIsolate Platform; Restart model; Isolate ISI; DAQ ReStart required. | 33776, 33799 34152 |
H1 Locked upon my departure at ≈68Mpc for a couple of hours.
09:35 Lockloss. Nothing on BLRMS. Tidal looked fine until the moment of loss. PRMI doesn't even look close. Going to initial alignment.
10:39 NLN
10:41 Intention Bit - Observe
Everything seems fairly normal. The PMC seems to be refusing 25% as of a day or so ago. Perhaps a remote alignment is in order?
TITLE: 02/15 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 71Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY: Sheila and I went into the LVEA at the beginning of the shift. She installed a cable for the bullseye detector and we took a transfer function of the IMC. It is not clear what the problem was with locking the X arm on IR that Cheryl and Sheila were diagnosing, but we were able to get to NLN nonetheless. I did a sweep of the LVEA, unplugged two unused extension cords, and set the PSL make-up air to 20%. I used diag to clear some testpoints that were left when Robert's measurement crashed. I changed the PLL set frequency for mode 26 from 210 to 209.5 while damping it. There have been no issues since.
Done earlier in the evening before going to observing. Now monitoring 266,462 channels (list attached).
In observing. No further issues since damping PI mode 26.
02:55 UTC Set to observing. LVEA swept. Accepted attached SDF differences.
03:17 - 03:20 UTC Dropped out of observing to change H1:SUS-PI_PROC_COMPUTE_MODE26_PLL_SET_FREQ to 209.5 (to match H1:SUS-PI_PROC_COMPUTE_MODE26_PLL_FREQ_COUNT_OUTPUT) while damping PI mode 26. Accepted SDF difference (attached).
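(Not the procedure actually used, and offered only as a hedged illustration: a minimal pyepics sketch of matching the PLL set frequency to the counted frequency for PI mode 26. The channel names are taken from the entry above; the availability of pyepics on the workstation, and permission to write while out of observing, are assumptions.)

# Sketch only: nudge the PI mode 26 PLL set point onto the measured line.
# Assumes pyepics is installed and EPICS writes are allowed (not in Observing).
from epics import caget, caput

SET  = "H1:SUS-PI_PROC_COMPUTE_MODE26_PLL_SET_FREQ"
MEAS = "H1:SUS-PI_PROC_COMPUTE_MODE26_PLL_FREQ_COUNT_OUTPUT"

measured = caget(MEAS)            # e.g. ~209.5 at the time of this entry
print("Measured PI mode 26 PLL frequency count: %s" % measured)
caput(SET, round(measured, 1))    # set point now matches the measured value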
During the LVEA sweep I changed the H1 PSL make up air speed from 100% to 20%.
Sheila, Jenne, Mark, Richard, Fil, Daniel
We made some progress on installing a prototype of the bullseye detector that Mark has been building in the PSL today.
We will hopefully get the whitened signals into the ADC channels that are already set aside and connected to AS_D in the ASC model tomorrow.
Fil and Richard verified the cabling for this setup. Coming out of the whitening was a 9-pin ISC cable (250) which ran to the ADC0 AA chassis in ISC-C1, port 7. It was unplugged from that port and moved to the ADC5 AA chassis, port 6, channels 21-24. We still need to verify operation.
This morning I checked the signals from the bullseye detector using a scope in the ISC rack, and saw that all 4 channels had a large oscillation at 52 kHz. I took it off the table, and Mark, Richard, and Fil spent some time in the EE shop and confirmed that, indeed, attaching a long cable to the detector made it oscillate. Mark changed the 50 Ohm series resistors to 100 Ohm, which fixed the problem. While we were there, Mark also confirmed with a laser pointer that the middle segment is on pin 2 of the connector.
While the detector was in the shop, Vaishali and I went back to the rack and checked which pins on the input to the whitening chassis showed up as which segments in the digital system. This was not as I had expected; the correct mapping is in this table:
Head | pins | segment in AS_D |
center | 2+7 | 3 |
bottom | 4+9 | 1 |
HAM1 side | 3+8 | 2 |
anteroom side | 1+6 | 4 |
Robert and I went back into the PSL and put the detector back on the table, and removed the lens so that we now have close to equal amounts of power in the center of the bullseye and in the outer ring. By the time we came out, the alignment seemed to have shifted a little.
Keita checked on the whitening, and it seems to be working OK. We set dark offsets with the diode unplugged, one stage of whitening on and 24dB of whitening gain. We didn't measure the dark noise, so if anyone gets a chance to go back in it would be good to both measure the dark noise and try to readjust the alignment.
I went into the PSL yesterday, the 14th, during the maintenance period, as described in WP6522. I did the following activities.
By the way, here is a picture of the bullseye setup.
J. Kissel, S. Kandhasamy

As we begin to produce the systematic error budget for H1's response function, we're looking to figure out how to address the clipping of light going into H1 PCAL Y's RX PD that was revealed in early January (see LHO aLOGs 33108 and 33187). Because we got extremely lucky and took the 2017-01-03 reference measurements at a time when the clipping -- which had been found to vary slowly as a function of time -- had briefly returned to nominal, this problem has no impact on the static, frequency-dependent part of the calibration pipeline. However, because the small, time-dependent correction factors are calculated using the H1 PCALY RXPD, these correction factors are systematically biased.

Working through the math of T1500377, specifically Eqs. 9, 12, 15, and 16, one can see that while the cavity pole frequency estimate would not be impacted, the scalar correction factors are. If

    x_pcal' = eps * x_pcal

where eps is a real number (the number that converts the apparent displacement, x_pcal, into the real displacement, x_pcal'), then

    kappa_TST' = eps * kappa_TST

and

    kappa_PU' = (1 / A_0^PU) [A_total' - kappa_TST' A_0^TST]    (A_total' = eps * A_total, from Eq. 11)
              = (eps / A_0^PU) [A_total - kappa_TST A_0^TST]
              = eps * kappa_PU

and finally

    S' = (1 / C_res) [ x_pcal' / d_err - D_0 (kappa_TST' A_0^TST + kappa_PU' A_0^PU) ]^-1
       = (1 / eps) * S

such that

    kappa_C' = |S'|^2 / Re[S'] = (1 / eps) * kappa_C

which means that we can simply scale the entire response function by this time-dependent systematic error:

    R' = [1 / (kappa_C' C_0)] + [kappa_PU' A_0^PU + kappa_TST' A_0^TST]
       = [eps / (kappa_C C_0)] + eps * [kappa_PU A_0^PU + kappa_TST A_0^TST]
       = eps * R

I attach a 77-day minute trend of the ratio between RXPD and TXPD obtained from dataviewer***. I'm not yet advocating that this time-series be used as the representative systematic error, but I post it to be demonstrative. Stay tuned on how this data is incorporated into the uncertainty / error budget.

*** Because it's dataviewer, it spits out some dumb Julian calendar time vector. Just subtract off the first value, and you get a time vector in days since the start of the trend, which is Nov 30 2016 00:54:44 UTC.

The data also lives in the CAL repo here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/PCAL/
    2017-02-14_H1PCALY_RXPD_Trend.txt
    2017-02-14_H1PCALY_TXPD_Trend.txt

The script used to process this data and plot the figure is here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/PCAL/
    plot_h1pcaly_RXvsTXPD_trend_20170214.m
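(For reference, a compact restatement of the scaling relations above in LaTeX form, using the notation of T1500377; nothing new is derived here.)

\begin{align*}
  x_{\rm pcal}'     &= \epsilon \, x_{\rm pcal} \\
  \kappa_{\rm TST}' &= \epsilon \, \kappa_{\rm TST}, \qquad
  \kappa_{\rm PU}'   = \epsilon \, \kappa_{\rm PU} \\
  S'                &= \tfrac{1}{\epsilon} \, S
  \;\Rightarrow\;
  \kappa_C' = \frac{|S'|^2}{\mathrm{Re}[S']} = \frac{1}{\epsilon}\,\kappa_C \\
  R'                &= \frac{1}{\kappa_C' C_0}
                       + \kappa_{\rm PU}' A_0^{\rm PU}
                       + \kappa_{\rm TST}' A_0^{\rm TST}
                     = \epsilon \, R
\end{align*}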
J. Kissel, on behalf of S. Karki and C. Cahillane. The data provided by Sudarshan via SLM tool that represents the PCAL clipping actually used by Craig in his systematic error budget for CBC analysis chunks 2 & 3 during O2 -- i.e. from Nov 30 2016 17:09:39 UTC to Jan 22 2017 04:36:04 (1164560996 to 1169094982) -- has been committed to the svn here: ${CalSVN}/aligocalibration/trunk/Runs/O2/H1/Results/PCAL/ O2_RxPD_TxPD_factor_dcsready.txt I attach a .png of the data also committed to the same location.
Attached is a list of the perl scripts under /opt/rtcds/userapps/release/ at LHO. Their creation date is also shown.
ECR E1700056 approves migrating these to python if and when applicable.
The last one in the list, wdreset_all.pl, has already been migrated.
Follow the progress on converting these scripts (and/or determining whether they're obsolete/unused) with Integration Issue / ECR Tracker Ticket 7394.
Note - There are several ISI scripts which are written in perl but are not in the list Dave made. This is because I wrote them a really long time ago and didn't realize you should add the .pl to the end of the name. Sigh. Naturally, these should also be updated to python or deprecated. Included are v1 guardian scripts in /opt/rtcds/userapps/trunk/isi/common/scripts/bsc/ (e.g. goto_DAMPED), stuff used at build time (e.g. create_hamisi_payload), and a bunch of other stuff, some useful, some not. Here is the full list of isi/common/scripts:
BSCISIchecker
BSCISItool
BSCISItool_old
BSCISItoolwrapper.sh
HAMISIchecker
HAMISIchecker.orig
HAMISIchecker.patch
HAMISItool
HAMISItoolwrapper.sh
align_restore_isi
align_save_isi
bsc
buildBranchModel
check_filter.pl
create_hamisi_payload
create_hamisi_payload_with_links
create_isi_payload
ff01off
ff01on
ff12off
ff12on
gps_filters
gpsget
ham
hitChannelWithOnes
makeUserappsBranch
masterSwitchBlendFilters
medm_auto_concat.pl
resetWatchdogThresholds
restart_guardians
sensor_hilo
setCartBiasSetpoints.pl
setCartBiasWrapper.sh
setFilterOffsets
storeCartBiasTargets.pl
storeTargetOffsets.pl
svnUpMedmDirs
svnUpUserappsBuildDirs
switchBlendFilters
wd_dackill_reset.pl
wd_plots
Note - I'm a fan of the conversion, just don't want to forget anything. -Brian
There is no requirement that scripts in any language have extensions. Scripts generally do not have extensions, whereas library files do. One should not assume that all perl scripts have .pl extensions. Ditto for python (.py) or bash/csh (.sh).
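(As a hedged illustration of that point, here is a minimal python sketch, not an existing tool, that inventories perl scripts under a directory by shebang rather than by extension. The path is taken from Brian's note above; the function names are mine.)

#!/usr/bin/env python
# Sketch only: find perl scripts by shebang rather than by .pl extension,
# since many of these scripts carry no extension at all.
import os

def looks_like_perl(path):
    """Return True if the file's first line is a perl shebang."""
    try:
        with open(path, "rb") as f:
            first = f.readline(256)
    except (OSError, IOError):
        return False
    return first.startswith(b"#!") and b"perl" in first

def find_perl_scripts(root):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if name.endswith(".pl") or looks_like_perl(path):
                hits.append(path)
    return sorted(hits)

if __name__ == "__main__":
    for p in find_perl_scripts("/opt/rtcds/userapps/trunk/isi/common/scripts"):
        print(p)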
MODE 26 has been fixed: its amplitude had been elevated due to an incorrect high-pass placement implemented on 2/10 when we were setting up new filters. I've fixed this, and it should go back to normal amplitudes and damping gain needs. Thanks, operators, for alerting me and dealing with this.
(TRY QPD INP FILT #1 had accidentally been switched from a 14.8-15.8 kHz BP filter to a 32kHz high pass. I've reverted this. OMC DCPD INP FILT #3 still has its correct 31kHz high pass.)
Sorry, probably my mistake.
Tom Dent, Miriam Cabero
We have identified a sub-set of blip glitches that might originate from PSL glitches. A glitch with the same morphology as a blip glitch shows up in the PSL-ISS_PDA_REL_OUT_DQ channel at the same time as a blip glitch is seen in the GDS-CALIB_STRAIN channel.
We have started identifying times of these glitches using omicron triggers from the PSL-ISS_PDA_REL_OUT_DQ channel with 30 < SNR < 150 and central frequencies between ~90 Hz and a few hundred Hz. A preliminary list of these times (ongoing; only the period Nov 30 - Dec 6 so far) can be found in the file
https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt
or, with omega scans of both channels (and with a few quieter glitches), in the wiki page
Only two of those times have full omega scans for now:
The whitened time-series of the PSL channel looks like a typical loud blip glitch, which could be helpful to identify/find times of this sub-set of blip glitches by other methods more efficient than the omicron triggers:
The CBC wiki page has been moved to https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips
I ran PCAT on H1:GDS-CALIB_STRAIN and H1:PSL-ISS_PDA_REL_OUT_DQ from November 30, 2016 to December 31, 2016 with a relatively high threshold (results here: https://ldas-jobs.ligo-wa.caltech.edu/~cavaglia/pcat-multi/PSL_2016-11-30_2016-12-31.html). Then I looked at the coincidence between the two channels. The list of coincident triggers is:
-----------------------------------------------------
List of triggers common to PSL Type 1 and GDS Type 1:
#1: 1164908667.377000
List of triggers common to PSL Type 1 and GDS Type 10:
#1: 1164895965.198000
#2: 1164908666.479000
List of triggers common to PSL Type 1 and GDS Type 2:
#1: 1164882018.545000
List of triggers common to PSL Type 1 and GDS Type 4:
#1: 1164895924.827000
#2: 1164895925.031000
#3: 1164895925.133000
#4: 1164895931.640000
#5: 1164895931.718000
#6: 1164895958.491000
#7: 1164895958.593000
#8: 1164895965.097000
#9: 1164908667.193000
#10: 1164908667.295000
#11: 1164908673.289000
#12: 1164908721.587000
#13: 1164908722.198000
#14: 1164908722.300000
#15: 1164908722.435000
List of triggers common to PSL Type 1 and GDS Type 7:
#1: 1166374569.625000
#2: 1166374569.993000
List of triggers common to PSL Type 1 and GDS Type 8:
#1: 1166483271.312000
-----------------------------------------------------
I followed up with omega scans; among the triggers above, only 1164882018.545000 is a blip glitch. The others are ~1 s broadband glitches with frequencies between 512 and 1024 Hz. A few scans are attached to the report.
Hi Marco,
your 'List of triggers common to PSL Type 1 and GDS Type 4' (15 times in two groups) are all during the known times of telephone audio disturbance on Dec 4 - see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32503 and https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLGlitches
I think these don't require looking into any further; the other classes may tell us more.
The GDS glitches that look like blips in the time series seem to be type 2, 7, and 8. You did indeed find that the group of common glitches PSL - GDS type 2 is a blip glitch. However, the PSL glitches in the groups with GDS type 7 and 8 do not look like blips in the omega scan. The subset we identified clearly shows blip glitch morphology in the omega scan for the PSL channel, so it is not surprising that those two groups turned out not to be blips in GDS.
It is though surprising that you only found one time with a coincident blip in both channels, when we identified several more times in just one week of data from the omicron triggers. What was the "relatively high threshold" you used?
Hi. Sorry for taking so long with this. I reran PCAT on the PSL and GDS channels between 2016-11-30 and 2016-12-31 with a lower threshold for glitch identification (glitches with amplitude > 4 sigma above the noise floor) and with a larger coincidence window (coincident glitches within 0.1 seconds). The list of coincident glitches found is attached to the report. Four glitches in Miriam's list [https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt] show up in the list: 1164532915.0 (type 1 PSL/type 3 GDS), 1164741925.6 (type 1 PSL/type 1 GDS), 1164876857.0 (type 8 PSL/type 1 GDS), 1164882018.5 (type 1 PSL/type 8 GDS). I looked at other glitches in these types and found only one additional blip at 1166374567.1 (type 1 PSL/type 1 GDS) out of 9 additional coincident glitches. The typical waveforms of the GDS glitches show that the blip type(s) in GDS are type 1 and/or type 8. There are 1998 (type 1) and 830 (type 8) glitches in these classes. I looked at a few examples in category 8 and indeed found several blip glitches which are not coincident with any glitch in the PSL channel. I would conclude that PCAT does not produce much evidence for a strong correlation of blip glitches in GDS and PSL. If there is one, PSL-coincident glitches must be a small subset of blip glitches in h(t). However, some blips *are* coincident with glitches in the PSL, so looking into this more may be a good idea.
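(For anyone repeating the coincidence step by hand, here is a minimal python sketch of windowed coincidence matching between two lists of trigger times. The 0.1 s window follows the comment above; the file names and the one-GPS-time-per-line format are assumptions, not the actual PCAT output format.)

# Sketch only: find PSL/GDS trigger pairs that coincide within a time window.
import numpy as np

def coincident(times_a, times_b, window=0.1):
    """Return (t_a, t_b) pairs with |t_a - t_b| <= window seconds."""
    times_b = np.sort(np.asarray(times_b, dtype=float))
    pairs = []
    for t in np.asarray(times_a, dtype=float):
        i = np.searchsorted(times_b, t)
        for j in (i - 1, i):
            if 0 <= j < len(times_b) and abs(times_b[j] - t) <= window:
                pairs.append((t, times_b[j]))
    return pairs

# Hypothetical input files: one GPS time per line.
psl = np.loadtxt("psl_trigger_times.txt")
gds = np.loadtxt("gds_trigger_times.txt")
for t_psl, t_gds in coincident(psl, gds, window=0.1):
    print("%.3f  %.3f" % (t_psl, t_gds))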
Hi,
thanks Marco for looking into this. We already expected that it was a small sub-set of blip glitches, because we only found very few of them and we knew the total number of blip glitches was much higher. However, I believe that not all blip glitches have the same origin and that it is important to identify sub-sets, even if small, to possibly fix whatever could be fixed.
I have extended the wiki page https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips and the list of times https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt up to yesterday. It is interesting to see that I did not identify any PSL blips in, e.g., Jan 20 to Jan 30, but that they come back more often after Feb 9. Unfortunately, it is not easy to automatically identify the PSL blips: the criteria I used for the omicron triggers (SNR > 30, central frequency ~few hundred Hz) do not always yield blips but also things like https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=156436, which also affects CALIB_STRAIN but not in the form of blip glitches.
None of the times I added up to December appear in your list of coincident glitches, but that could be because their SNR in PSL is not very high and they only leave a very small imprint in CALIB_STRAIN compared with the ones from November. In January and February there are several louder ones with bigger effect on CALIB_STRAIN though.
The most recent iteration of PSL-ISS flag generation showed three relatively loud glitch times:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170732596.35/
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170745979.41/
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170212/latest/scans/1170950466.83/
The first 2 are both on Feb 10, in fact a PSL-ISS channel was picked by Hveto on that day (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/#hveto-round-8) though not very high significance.
PSL not yet glitch-free?
Indeed PSL is not yet glitch free, as I already pointed out in my comment from last week.
Imene Belahcene, Florent Robinet
At LHO, a simple command line works well at printing PSL blip glitches:
source ~detchar/opt/virgosoft/environment.sh
omicron-print channel=H1:PSL-ISS_PDA_REL_OUT_DQ gps-start=1164500000 gps-end=1167500000 snr-min=30 freq-max=500 print-q=1 print-duration=1 print-bandwidth=1 | awk '$5==5.08&&$2<2{print}'
GPS times must be adjusted to your needs.
This command line returns a few GPS times not contained in Miriam's blip list; these should be checked to confirm they are actual blips.
The PSL has different types of glitches that match those requirements. When I look at the Omicron triggers, I do indeed check that they are blip glitches before adding the times to my list. Therefore it is perfectly consistent that you find GPS times with those characteristics that are not in my list. However, feel free to check again if you want/have time. Of course I am not error-free :)
I believe the command I posted above is an almost-perfect way to retrieve a pure sample of PSL blip glitches. The key is to only print low-Q Omicron triggers.
For example, GPS=1165434378.2129 is a PSL blip glitch and it is not in Miriam's list.
There is nothing special about what you call a blip glitch: any broadband and short-duration (hence low-Q) glitch will produce the rain-drop shape in a time-frequency map. This is due to the intrinsic tiling structure of Omicron/Omega.
Next time I update the list (probably some time this week) I will check the GPS times given by the command line you suggest. It would be nice if it does indeed work perfectly at finding only these glitches; then we'd have an automated PSL blip finder!
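(Along those lines, a minimal python sketch of wrapping the command line above into a script. The column layout of the omicron-print output, in particular that the GPS time is the first field, is an assumption; adjust the parsing to the actual output, and source the detchar environment first as described above.)

# Sketch only: collect candidate PSL blip times using the selection above.
import subprocess

def psl_blip_times(gps_start, gps_end):
    cmd = (
        "omicron-print channel=H1:PSL-ISS_PDA_REL_OUT_DQ "
        "gps-start=%d gps-end=%d " % (gps_start, gps_end) +
        "snr-min=30 freq-max=500 print-q=1 print-duration=1 print-bandwidth=1"
        " | awk '$5==5.08&&$2<2{print}'"
    )
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
    # Assumes the GPS time is the first column of each printed trigger.
    return [float(line.split()[0]) for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for t in psl_blip_times(1164500000, 1167500000):
        print("%.4f" % t)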