TITLE: Nov 17 EVE Shift 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE Of H1: Down
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: I've been asked to stand down the swing shift due to deplorable wind conditions that are expected to last until 11:00 UTC. LHO wind speed indicators are reporting winds in excess of 50 mph at this time. WOW! I just saw a 75 mph gust. By the looks of Terramon, the Alaska quake didn't cause too much of a ruckus. I'm not aware of any way to remotely see the µSeism. The Observatory Mode was left in 'Preventative Maintenance'. I will continue to monitor the situation. Nasty situation, science fans!
I put a strip chart of the wind speed over the DARM spectrum on nuc3 so that it would be posted to the web.
Jonathan, Jim, Patrick, Dave:
Looks like we got away lightly with this glitch (famous last words). We only needed to reboot h1lsc0 and h1iscey. We got permission from Mike to make the h1lsc model change requested by Duncan and DetChar to promote the channel H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ to the science frame (WP5614). We restarted the DAQ to resync the lsc model. Interestingly, while the DAQ was down, my EPICS freeze alerts went crazy until the data concentrator came back. This did not happen during the 13:30 DAQ restart, but it might suggest a link between the DAQNET and FE-LAN ports on the front ends.
We have restarted all the Beckhoff SDF systems to green up the DAQ EDCU. At the time of writing the CDS overview is green with the exception of an excitation on SUS-BS.
The wind continues to hammer on the wall of the CER, hopefully no more power glitches tonight.
It looks to me that all DCS computers/services running in the LSB and the warehouse survived the power glitch. (I'm not sure if it was site-wide.)
Here is what the overview looked like before we started the recovery.
[Patrick, Kiwamu, Jenne]
After we recovered the PSL, the PMC wasn't locking. Rick reminded us that we might have to enable the output of the high voltage power supplies in the power supply mezzanine for the PZT. We went up there, and enabled the output on the PMC HV supply (labeled PSL or something like that?) as well as on the OMC PZT supply (labeled HAM6 PZTs). For the PMC power supply, the Vset was correct (according to the sticker on the front panel), but Iset needed to be set. On the OMC power supply we set the Vset, and it came back (Iset was fine).
DIAG_MAIN reminded us about the OMC HV supply, but perhaps TJ can add the PMC HV supply to that list?
DIAG_MAIN is also telling us that both ESD drivers are off, but since probably no locking will be happening tonight, we'll deal with that in the morning when the wind decides to die down.
We have known for a while that the front ends (and DAQ computers) do not set things up so that EPICS traffic is limited to the one intended Ethernet port (FE-LAN) rather than going out on all connected networks. This is straightforward to correct for executables started in a shell (where the environment variable can be asserted); it is a bit harder for ones started from an init script. Certainly something to test on a test stand.
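As a minimal sketch of the shell-started case (the variable names are the standard EPICS client address-list settings, but which variable the front-end code actually honors and the FE-LAN broadcast address are assumptions here), restricting a process to one interface could look something like:

    # Sketch: start an executable with EPICS traffic restricted to FE-LAN.
    # EPICS_CA_AUTO_ADDR_LIST / EPICS_CA_ADDR_LIST are standard EPICS client
    # variables; the broadcast address below is a placeholder for the FE-LAN
    # subnet, and the executable name is made up for illustration.
    import os
    import subprocess

    env = os.environ.copy()
    env["EPICS_CA_AUTO_ADDR_LIST"] = "NO"         # don't use every interface
    env["EPICS_CA_ADDR_LIST"] = "10.101.0.255"    # placeholder FE-LAN broadcast

    # An init script would need to export the same variables before exec'ing
    # the executable.
    subprocess.Popen(["some_frontend_executable"], env=env)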
I acknowledged a trouble alert on the fire control panel in the CUR in the OSB which was triggered by the 16:11 PST power glitch.
We would like to know the range of counts available for doing hardware injections with PCAL. Hardware injections are filtered with an inverse actuation filter to get from strain to counts; this is the CAL-PINJX_HARDWARE filterbank. Here I've filtered waveforms with the CAL-PINJX_HARDWARE filterbank to see how many counts they require. I've attached plots of the time series in counts. Last Thursday a filter was added to compensate for the upsampling that is done in the analog electronics. This has a significant effect on the counts. I did a test with the coherentbbh0 waveform (used in hardware injection tests before) and saw that it increased its counts by a factor of ~3. I've attached plots for before and after adding this filter. The maximum number of counts for each waveform:
* CW injections: ~1500 counts
* 1.4-1.4 non-spinning SEOBNRv2 waveform (target SNR ~15): 7e7 counts
* 25-25 non-spinning SEOBNRv2 waveform (target SNR ~15): ~10000 counts
* coherentbbh0 waveform from hardware injection tests (recovered SNR ~18-23): ~20000 counts
Note these SNRs are approximate and can change depending on sky location. It was quoted that perhaps the CW injections could change by a factor of 2. There is also a higher-frequency line that gets injected in PCALX, and we need to take this into consideration as well.
I filtered the 1.4-1.4 non-spinning BNS template with and without the inverse anti-imaging filter that was added last week. The first plot is the h(t) time series for the waveform. It is near the time of the merger where the maximum number of counts occurs. The second plot is the filtered time series in counts without the inverse anti-imaging filter. The maximum is 5e6 counts. The third plot is the filtered time series in counts with the inverse anti-imaging filter. The maximum is 7e7 counts.
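For reference, the counts numbers above come from pushing the strain time series through the inverse-actuation filtering and taking the peak absolute value. A minimal sketch of that calculation (assuming the CAL-PINJX_HARDWARE filter has been exported as second-order sections; the file names are placeholders, not the actual injection tools):

    # Sketch: estimate the peak DAC counts a hardware injection would need.
    import numpy as np
    from scipy import signal

    fs = 16384.0                                 # injection sample rate (Hz)
    strain = np.loadtxt("waveform_strain.txt")   # placeholder h(t) time series
    sos = np.load("pinjx_hardware_sos.npy")      # placeholder export of the
                                                 # CAL-PINJX_HARDWARE filter

    counts = signal.sosfilt(sos, strain)         # strain -> DAC counts
    print("peak |counts| = %.3g" % np.max(np.abs(counts)))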
With the release of HWInjReport v2.2, it was necessary to repeat the prior analysis run spanning the first half of O1, Sep 12 2015 00:00:00 UTC (1126051217) to Oct 23 2015 20:14:43 UTC. In the previous iteration of this run, there were a number of anomalies that necessitated reexamining the logic and implementation of HWInjReport, leading to the creation of version 2.2. Some apparent anomalies turned out not to be anomalies and are now removed, but a few true anomalies remain, as expected from properly functioning injection analysis software.
The output report, the generated log file, and the input schedule file containing the scheduled injections are attached to provide details of the results. Below is a summary analysis of the results.
This run was performed with the following parameters:
GPS Start Time = 1126051217  # Beginning of time span, in GPS seconds, to search for injections
GPS End Time = 1129666500  # Ending of time span, in GPS seconds, to search for injections
Check Hanford IFO = True  # Check for injections in the Hanford IFO frame files.
Check Livingston IFO = True  # Check for injections in the Livingston IFO frame files.
IFO Coinc Time = 0.012  # Time window, in seconds, for coincidence between IFO injection events.
Check ODC_HOFT = True  # Check ODC-MASTER_CHANNEL_OUT_DQ channel in HOFT frames.
Check ODC_RAW = True  # Check ODC-MASTER_CHANNEL_OUT_DQ channel in RAW frames.
Check ODC_RDS = True  # Check ODC-MASTER_CHANNEL_OUT_DQ channel in RDS frames.
Check GDS_HOFT = True  # Check GDS-CALIB_STATE_VECTOR channel in HOFT frames.
Check CAL_RAW = True  # Check CAL-INJ_ODC_CHANNEL_OUT_DQ channel in RAW frames.
Report Normal = True  # Include normal injections in report (IFO-coincident, consistent, and scheduled for network injections; consistent and scheduled for IFO injections)
Report Anomalous = True  # Include anomalous (non-IFO-coincident, inconsistent, or unscheduled) injections in report
Use CONDOR optimizations = True  # Enable optimizations that assume execution on a CONDOR machine
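For reference, each of the "Check" options above amounts to reading the named channel out of the frames and testing an injection bit. A minimal sketch of that kind of test (assuming gwpy for frame access; the frame file, channel choice, and bit index are placeholders, not the real bit assignments):

    # Sketch: test whether an injection bit is asserted in an ODC/state channel.
    from gwpy.timeseries import TimeSeries

    data = TimeSeries.read("H-H1_R-1126051216-64.gwf",      # placeholder file
                           "H1:ODC-MASTER_CHANNEL_OUT_DQ")
    INJECTION_BIT = 4                   # placeholder bit index, not the real one
    mask = 1 << INJECTION_BIT

    # Samples with the bit set are flagged as having an injection in progress.
    asserted = (data.value.astype(int) & mask) > 0
    print("bit asserted in %d of %d samples" % (asserted.sum(), asserted.size))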
The schedule file contained 63 injections. Of these, 43 were found to occur in at least one IFO, and 20 were found not to occur in any IFO. Some of the non-occurring scheduled injections were listed with zero amplitude scaling, so it is reasonable that these injections would not be detected, as they have no amplitude.
This run yielded 28 normal network injections, all of which were CBC injections.
There were a significant number of UNSCHEDULED injections, all of which appear to be single-IFO injections.
There were a number of CAL-INJ resets; however, every such reset had the peculiar property of showing as occurring in only two of the frame channels or bits, with all other channels or bits showing as non-occurring. Further, only specific doublet combinations of frame channels/bits occurred in this fashion. These doublet combinations were:
In other words, for all the reported CAL-INJ resets, the frame flags show the injection occurring in only one of the above combinations, while all other frame flags show the injection as non-occurring in their associated channel/bit. This phenomenon was verified for several of the CAL-INJ resets, so the report of this phenomenon appears reliable.
There were 3 UNKNOWN injections that occurred only in L1:
These are known hardware injections that were inserted without specifying the type of the injection.
There were several pairs of H1 and L1 single-IFO injections that matched to the same scheduled injection. These are injections that should have been IFO-coincident but had time differences greater than the set IFO coincidence time of 12 ms. These anomalous injections can be seen as occurring together, one after the other, with both being matched to the same scheduled injection. The easiest way to spot this is that a specific Inj UID value will occur twice in immediate succession. In this case, Inj UIDs 29, 30, 31, 38, 40, 48, and 54 occur in this fashion.
It is hypothesized that these anomalies may be due to a known bug that existed at the time, in which the amplitude threshold for setting the ODC bits for the occurrence of an injection was not checked against the absolute value of the output waveform, but only against positive crossings of the waveform. If the waveforms sent to the IFOs are significantly out of phase (such as may occur when one IFO inverts the waveform relative to the other), then at low frequencies the ODC bits for one IFO would not register the occurrence of the waveform until significantly after the other IFO. While one IFO outputs a signal crossing into the positive, and thus registers an injection at the time of crossing, the other IFO outputs a signal crossing into the negative and thus does not register an injection even though one is occurring. Not until the second IFO outputs a signal crossing into the positive does it register an injection, and this can be significantly outside the 12 ms IFO coincidence window. In fact, one case, Inj UID 31, was found to have a 31 ms (spooky!) difference between the start-of-injection times for H1 and L1.
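To illustrate the hypothesis with a purely numerical sketch (illustrative numbers only; this is not HWInjReport or ODC code): a 20 Hz waveform that is inverted at one IFO registers roughly half a cycle (~25 ms) late under a positive-only threshold check, but at the same time under an absolute-value check.

    # Sketch: detection-time difference for positive-only vs absolute-value
    # threshold checks when one IFO's waveform is inverted.
    import numpy as np

    fs = 16384.0
    t = np.arange(0, 0.5, 1 / fs)
    h1 = np.sin(2 * np.pi * 20.0 * t)    # 20 Hz waveform sent to one IFO
    l1 = -h1                             # same waveform, inverted at the other
    threshold = 0.5

    def first_positive_crossing(x):
        """First sample exceeding +threshold (the buggy check)."""
        return np.argmax(x > threshold)

    def first_abs_crossing(x):
        """First sample exceeding the threshold in magnitude (the fixed check)."""
        return np.argmax(np.abs(x) > threshold)

    dt_buggy = (first_positive_crossing(l1) - first_positive_crossing(h1)) / fs
    dt_fixed = (first_abs_crossing(l1) - first_abs_crossing(h1)) / fs
    print("positive-only check: %.1f ms apart" % (1e3 * dt_buggy))   # ~25 ms
    print("absolute-value check: %.1f ms apart" % (1e3 * dt_fixed))  # 0 ms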
It is presumed that corrections have been made to the threshold checks for determining injection occurrence, but I have not yet verified that this is the case. It is possible there are other issues that could conspire to evince the above phenomenon.
A follow-on run is currently underway covering the period from Oct 23 2015 00:00:00 UTC to Nov 17 2015. The intent is to catch up on the analysis opportunities missed while the software was being fixed.
We experienced a power glitch at approximately 16:11 PST. We are in the process of recovering.
Due to a number of bugs and other issues, analysis reports from HWInjReport had been suspended indefinitely until these issues could be resolved. I am happy to report that these issues have been resolved and have resulted in the release of HWInjReport v2.2. This version contains a number of fixes to the analysis logic to improve accuracy and reliability.
Several features were added to HWInjReport for this version. First, the application now uses direct access to the data nodes when running on a CONDOR machine; this improves runtime by a factor of 3 or better because it no longer needs to wait on the archive robot to load files, as was the case when running from LDAS-pcdev1, for instance. As part of this, HWInjReport now uses gw_data_find to find frame files instead of the deprecated ligo_data_find. Second, it supports analyzing the CAL-INJ channel in RAW frame files and checking the TRANSIENT bit within that channel for hardware injections. Third, it supports recognizing STOCHASTIC injections.
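For reference, the datafind query HWInjReport now issues looks roughly like the following (a sketch assuming the standard gw_data_find command-line options; the frame type and GPS times are placeholders):

    # Sketch: locate frame files with gw_data_find (replacement for the
    # deprecated ligo_data_find; frame type and times are placeholders).
    import subprocess

    cmd = [
        "gw_data_find",
        "--observatory", "H",
        "--type", "H1_R",                  # placeholder frame type
        "--gps-start-time", "1126051217",
        "--gps-end-time", "1126051517",
        "--url-type", "file",
    ]
    frame_urls = subprocess.check_output(cmd).decode().split()
    print("%d frame files found" % len(frame_urls))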
Also with this version, the output format has been compressed somewhat for better readability (the output is not as wide as it was originally), and the threshold for IFO coincidence between Hanford and Livingston has been increased from 10 ms to 12 ms.
With these changes, version 2.2 is the latest stable release of HWInjReport. It is still expected that changes and improvements to the software will continue into the future as necessary; however, hopefully, these changes will not necessitate another cessation of analysis using HWInjReport due to validation issues.
TITLE: 11/17 [DAY Shift]: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Down
SHIFT SUMMARY: Too windy to lock. Entire day devoted to maintenance.
INCOMING OPERATOR: Ed (holding off on coming in upon instructions from Mike)
ACTIVITY LOG:
15:00 Ken installing GPS clock in control room
15:07 Bubba and Peter K. to H2 PSL enclosure to check on makeup air fan
15:53 Jeff and Jodi to H2 PSL enclosure for OMC extraction
15:59 Jim B. restarting broadcaster daqd
16:03 Jim B. done restarting broadcaster daqd
16:16 Gerardo to LVEA to join Jeff and Jodi
16:16 Filiberto pulling cable from electronics room to H1 PSL racks over HAM1 and HAM2 for EOM monitoring chassis
16:21 Joe D. to check charge in scissor lift at end Y
16:23 Keita to H1 PSL enclosure to start installation of chassis. Karen going with him to clean.
16:34 Hugh to end stations to check HEPI fluid levels
16:34 Sudarshan to MSR to check PCAL camera
16:37 Jason to H1 PSL diode room to reset PSL watchdog
16:41 Jason done resetting PSL watchdog, going to end X and then end Y to zero optical levers
16:43 Vinnie and Borja to end Y for glitch hunting
16:52 Kyle going back and forth to X28 in preparation for baking ion pump
17:08 Bubba done checking H2 PSL enclosure makeup air fan
17:16 LN2 delivery through gate (two trucks)
17:20 Joe D. back from end Y
17:21 Ryan working on alog
17:24 Sand truck through gate
17:24 Travis at end Y restarting PCAL camera
17:28 Travis done
17:28 Rick and Evan to end stations to put tamper stickers on PCAL hardware
17:37 Hugh and Travis back
17:37 Jason done at end X, going to end Y
17:39 Filiberto done pulling cable in LVEA, going to end Y to pull fiber for vacuum gauge
17:40 LN2 truck through gate
17:42 Bubba and John to end Y to check for noise from trap primers
17:54 Karen and Christina done cleaning in the LVEA, going to end stations. Christina reports a problem with the card reader between the LVEA and high bay.
17:56 Ryan done working on alog
18:17 Jodi and Jeff done. OMC extracted and craned over beamtube.
18:23 Gerardo out of LVEA
18:14 Started trying to lock ALS.
18:31 Arms would not stay locked on green. Gave up and put guardian back to down state.
18:32 Bubba and John back from end Y. Bubba getting meter left in LVEA.
18:37 Jason done zeroing optical levers at end stations, going to zero optical levers in LVEA
18:37 Offloaded SR3 M2 per "SR3 Cage Servo warning: Align SR3 to offload M2 actuators" guardian notice. Jenne put instructions in the ops "Troubleshooting the IFO" wiki.
18:42 Karen and Christina done at end Y, heading to end X
18:47 Betsy to electronics room to take picture of wifi cable for wiki
18:48 Rick and Evan done putting tamper stickers on PCAL hardware
18:56 Betsy to turn on wifi in LVEA for Jason in LVEA
19:01 Keita done installing chassis, starting rephasing
19:04 US Linen delivery, LN2 truck leaving
19:08 Jeff B. working on dust monitor plugging near HAM4 and HAM5
19:13 Pressed 'DAQ Clear Accumulated CRC' on CDS overview
19:15 Joe D. to end Y to unplug battery charger
19:26 Travis to end X and end Y to install protection covers on PCAL hardware
19:31 Karen and Christina done cleaning, leaving end X
19:38 Restarted CDS and OPS overview on video2. Dave changed the OPS overview medm to increase the gracedb query failure notice timeout time.
19:47 Filiberto done pulling fiber for vacuum gauge at end Y, going to pull fiber for vacuum gauge at end X
19:49 Hugh to LVEA to inventory 3IFO equipment
19:50 Jeff B. done
19:56 Joe D. back
19:58 Jeff B. back to LVEA to make one connection on dust monitor plumbing
20:11 Keita done rephasing
20:11 Betsy starting charge measurements
20:15 Jason done
20:21 Hugh done
20:30 Madeline W. updating calibration filters (WP5605). Notified over teamspeak.
20:31 Travis back
20:33 Madeline W. done. "The DMT/GDS h(t) pipeline is restarted and running again at LHO"
20:55 Vinnie and Borja back
21:20 Charge measurements done
21:21 Filiberto back
21:31 Dave restarting DAQ
21:31 Keita and Kiwamu remeasuring phase
21:50 Kyle and Gerardo to X28 to drill anchor holes
22:23 Keita and Kiwamu done
22:36 Bubba to mid X to check water/fogging on surveillance camera lens
22:47 Keita to PSL enclosure to measure RF level
22:56 Vinnie and Borja back to end Y to recheck glitch hunting results
22:58 Beckhoff SDF crashed, Dave leaving off, EDCU is red
23:07 Bubba back from mid X
23:07 Kyle and Gerardo back
23:20 Keita done
00:11 Power glitch !!!
00:14 Vinnie and Borja back
00:18 Dave restarting h1lsc0
With help from Peter King and Ken from S&K Electric we determined that the disconnect was turned off for the H-2 PSL make up air fan. I turned the disconnect on and the fan is working properly now. The fan disconnect was not labeled, however, Peter said he would print a label and install it on the disconnect.
Cleared counters in attached screenshot.
I took a few charge measurements this morning; however, a few of each set were garbage due to an attempted MICH lock, and the others look noisy due to the ongoing 30-70 mph winds we've felt all morning. I'll try to catch a quieter period for more measurements this week. Plots coming...
Winds are too high to lock so maintenance is running past noon.
I've updated tinj.m to use CAL-PINJX_TINJ* instead of CAL-INJ_TINJ*. The new changes have been implemented at LHO. The EPICS records changed are:
* H1:CAL-PINJX_TINJ_STATE
* H1:CAL-PINJX_TINJ_START
* H1:CAL-PINJX_TINJ_ENDED
* H1:CAL-PINJX_TINJ_TYPE
* H1:CAL-PINJX_TINJ_ENABLE
* H1:CAL-PINJX_TINJ_PAUSE
* H1:CAL-PINJX_TINJ_OUTCOME
To do this:
(1) Edited tinj.m to replace CAL-INJ_TINJ* with CAL-PINJX_TINJ*. The only exception was CAL-INJ_EXTTRIG_ALERT_TIME, since ext_alert.py still uses this channel and has not been switched over yet.
(2) At 20:57 UTC turned run_tinj off with the monit web interface.
(3) Compiled run_tinj using: mcc -R -nojvm -R -nodisplay -R -singleCompThread -m run_tinj
(4) At 21:00 UTC turned run_tinj on with the monit web interface.
(5) Committed the changes.
I checked tinj.log and it was updating as expected. We will need to coordinate the switch-over of CAL-INJ_EXTTRIG_ALERT_TIME to CAL-PINJX_EXTTRIG_ALERT_TIME, and update LLO to this version as well.
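As a quick sanity check after the switch-over, the renamed records can be read back directly (a minimal sketch assuming pyepics is available; this is not part of tinj itself):

    # Sketch: read back the renamed tinj EPICS records (assumes pyepics).
    from epics import caget

    records = [
        "H1:CAL-PINJX_TINJ_STATE",
        "H1:CAL-PINJX_TINJ_START",
        "H1:CAL-PINJX_TINJ_ENDED",
        "H1:CAL-PINJX_TINJ_TYPE",
        "H1:CAL-PINJX_TINJ_ENABLE",
        "H1:CAL-PINJX_TINJ_PAUSE",
        "H1:CAL-PINJX_TINJ_OUTCOME",
    ]
    for name in records:
        print(name, "=", caget(name))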
Because Dave finds that these values change frequently, causing him to commit more SDF snap files to the svn than he had originally planned for, I finished setting all of the SUS OPTICALIGN PIT and YAW channels (just 2 per SUS) to not be monitored by SDF. I also started cleaning house on the new Beckhoff SDF. In OBSERVE, this system sees more channel settings that differ from lock stretch to lock stretch. The following were set to NOT MON in SDF:
H1:*-*_*_LIMITCOUNT (counter channels which have just been incrementing since the beginning of the run)
H1:ALS-C_COMM_VCO_CONTROLS_SETFREQUENCY (Changes by very small amounts from lock to lock for ALS locking)
Today I re-zeroed all of the active H1 optical levers. The table below shows their pitch/yaw values before re-zeroing and the values as they read when I finished each oplev (there could be relaxation in the motors causing a slow, slight drift, especially in the BS and HAM2 oplevs which are zeroed with piezo motors steering a 2" turning mirror). This closes work permit #5606.
Optical Lever | Old Pitch (µrad) | Old Yaw (µrad) | New Pitch (µrad) | New Yaw (µrad)
ETMx | -2.4 | -2.2 | -0.2 | 0.3
ETMy | -1.3 | -6.7 | 0.1 | 0.0
ITMx | -10.2 | -8.2 | -0.2 | 0.0
ITMy | 7.9 | -2.6 | -0.2 | -0.0
PR3 | -1.1 | -2.9 | 0.0 | 0.0
SR3 | 10.1 | -5.7 | 0.0 | 0.2
BS | -13.3 | -9.2 | -0.4 | 0.4
HAM2 | -20.6 | -39.0 | 0.1 | -0.1
Tagging SUS, DetChar, and ISC for future reference.
The control room screen shots include the seismic DMT plots:
https://lhocds.ligo-wa.caltech.edu/cr_screens/