The querying failure showed up and didn't go away like it had been doing, so I logged onto the script0 machine and found the script was down. I restarted it in the same screen session (pid: 4403) and made sure that ext_alert.pid contained the same PID.
Yesterday while I was on shift I noticed that the querying failure would repeatedly pop up but then disappear seconds later. I logged in to script0 and the script hadn't stopped; it was just having a very hard time connecting, and a "Critical connection error" was printed frequently. Today it did not show the same signs of connection issues, at least until I had to restart it.
The ext_alert.py script has not reported two different GRBs in the past two days that are on GraceDB (E194592 & E194919). Both of the external events had high latency times (3004 s and 36551 s). I am not sure if this is the reason they were not reported. I didn't see anything in ext_alert.py or rest.py that would filter out events with high latency, but maybe this is done on the GraceDB side of things? I'll look into this more on Monday when I get a chance.
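Something like this (a minimal sketch, assuming the ligo-gracedb REST client is installed and the default server is reachable) could pull the two events so their reported times can be eyeballed by hand:

from ligo.gracedb.rest import GraceDb

client = GraceDb()  # defaults to the production GraceDB server

for graceid in ["E194592", "E194919"]:
    event = client.event(graceid).json()
    # 'gpstime' is the event time; 'created' is when GraceDB ingested it.
    print(graceid, event.get("gpstime"), event.get("created"))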
Title: 10/23 Day Shift 15:00-23:00 UTC (8:00-16:00 PDT). All times in UTC.
State of H1: Observing
Shift Summary: Locked in Observing Mode for my entire shift. During a period when LLO was down, some PEM injections, electronics cabling investigations, and GW injections took place opportunistically.
Incoming operator: Jim
Activity log:
16:37 Kyle to Y28 and Mid Y
18:23 Kyle done
20:52 Commissioning mode while Sheila and Keita go to LVEA to look at cabling. I reloaded a corrected SR3_CAGE_SERVO guardian at the same time.
20:59 Sheila and Keita done
21:10 turned off light in high bay/computer room after Keita noticed it was still on
21:22 Jordan starting PEM injection
21:34 Jordan done
21:47 CW injection alarm
22:02 Stochastic injection alarm
22:09 TJ restarting DIAG_MAIN guardian
22:15 Stochastic injection complete
22:19 CW injection started
22:21 a2L script ran
22:30 Observing Mode
Summary:
- Range ~75 Mpc, observing 55% (low % largely caused by power outage)
- RF45 glitches still present
- anthropogenic noise (train band) caused many glitches, including 8+ newSNR triggers in BBH
- DQ shift page: https://wiki.ligo.org/DetChar/DataQuality/DQShiftH120151019
CW injections started around 22:19 UTC. We transitioned to Observing Mode at 22:30 UTC with the CW injections running.
Daniel, Sheila, Evan
Over the past 45 days, we had two instances where the common-mode length control on the end-station HEPIs hit the 250 µm software limiter. One of these events seems to have resulted in a lockloss.
The attached trends show the ISC drives to the HEPIs, the IMC-F control signal, and the IMC-F offloading to the suspension UIMs over the past 45 days. One can see two saturation events: one on 25 September, and another on 11 October.
We survived the event on 11 October: the EY HEPI hit the rail, and counts began to accumulate on the EY UIM, but the control signal turned around and HEPI came away from the rail. On the 25th of September, both end-station HEPIs hit the rail, and after about 2 hours of counts accumulating on the UIMs, IMC-F ran away (second attachment). Note that both HEPI drives started at 0 µm at the beginning of the lock stretch.
Both of these periods showed large common-mode drifts that did not repeat with the 24-hour period expected of a pure tidal excitation. This may indicate a problem with the reference cavity temperature and the PSL/LVEA temperature during these days.
Added PSL Temp Trends log in 22881.
Chris B, Joe B

After LLO had locked again, Joe and I took the opportunity to perform the coherent stochastic injection. CW injections were off at both sites, and the intent bit was off at both sites. For the LLO counterpart aLog entry see: LLO aLog 21999.

Waveform: The waveforms injected were: https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/stoch/Waveform/SBER8V3.txt

Injection: At H1 Chris performed the injection with the command:
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 SBER8V3_H1.txt 1.0 1129673117 -d -d > log_stoch.txt
I've attached the log. As we were doing the injection we noticed a range drop of ~10%.

IMPORTANT ACTION ITEM: The end of the stochastic waveform was not tapered, so when the injection ended it introduced a large transient into ETMY. Robot voice was activated. This needs to be fixed. The beginning was properly tapered.
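One possible way to apply the missing end taper to the waveform file (a minimal sketch only: it assumes a single-column ASCII waveform at 16384 Hz, a 1 s taper length, and an illustrative output file name):

import numpy as np

SAMPLE_RATE = 16384
TAPER_SECONDS = 1.0

data = np.loadtxt("SBER8V3_H1.txt")
n = int(TAPER_SECONDS * SAMPLE_RATE)

# Half-Hann ramp that falls smoothly from 1 to 0 over the last n samples.
ramp = 0.5 * (1.0 + np.cos(np.pi * np.arange(n) / n))
data[-n:] *= ramp

np.savetxt("SBER8V3_H1_tapered.txt", data)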
It looks like no one set the CAL-INJ_TINJ_TYPE EPICS channel prior to running awgstream. At LHO it happened to be equal to 2, so this stochastic injection was logged in H1 ODC bits and in the DQ segment database as a burst injection. At LLO it happened to be equal to 0, so this stochastic injection did not flip any of the type-specific bits in L1:CAL-INJ_ODC, although it did still flip the TRANSIENT ODC bit. So, this stochastic injection should be represented in the segment database with the ODC-INJECTION_TRANSIENT flag, but not with the ODC-INJECTION_STOCHASTIC flag.
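For future injections the type channel could be set ahead of time, for example with pyepics (a sketch only; the numeric code used here for a stochastic injection is a placeholder and should be checked against the site's TINJ_TYPE convention):

from epics import caput, caget  # pyepics, assumed available on the workstation

STOCH_TYPE = 3  # placeholder value: confirm the actual stochastic type code
caput("H1:CAL-INJ_TINJ_TYPE", STOCH_TYPE, wait=True)
print("TINJ_TYPE now:", caget("H1:CAL-INJ_TINJ_TYPE"))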
The re-calibrated C01 hoft data generated by DCS is ready for use. It is available via NDS2 and will be published into LDR soon. The calibration factors were not applied when generating this version of hoft. The calibration group will comment separately on the uncertainties in the C01 calibration. (The calibration group is working on applying the calibration factors, which will result in another version of hoft.)

1. The times covered by the re-calibrated C01 hoft are:
H1: 1125969920 == Sep 11 2015 01:25:03 UTC to 1128398848 == Oct 09 2015 04:07:11 UTC
L1: 1126031360 == Sep 11 2015 18:29:03 UTC to 1128398848 == Oct 09 2015 04:07:11 UTC
as per these alogs:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=22392
https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=21464
and as documented here:
https://wiki.ligo.org/Calibration/GDSCalibrationConfigurations
https://wiki.ligo.org/LSC/JRPComm/ObsRun1#Calibrated_Data_Generation_Plans_and_Status
https://dcc.ligo.org/LIGO-T1500502

2. Users should use C00 hoft and DQ flags before/after the above times. This means the O1 analysis chunk including Oct 09 2015 04:07:11 UTC might need parts a and b, using C01 and C00 data respectively.

3. These C01-specific DQ flags exist:
H1:DCS-MISSING_H1_HOFT_C01:1
H1:DCS-SCIENCE_C01:1
H1:DCS-UP_C01:1
H1:DCS-CALIBRATED_C01:1
H1:DCS-ANALYSIS_READY_C01:1
H1:DCS-INVALID_CALIBRATED_DATA_TST_C01:1
H1:DCS-INVALID_CALIBRATED_DATA_THREE_C01:1
H1:DCS-CALIB_FILTER_NOT_OK_C01:1
L1:DCS-MISSING_L1_HOFT_C01:1
L1:DCS-SCIENCE_C01:1
L1:DCS-UP_C01:1
L1:DCS-CALIBRATED_C01:1
L1:DCS-ANALYSIS_READY_C01:1
L1:DCS-INVALID_CALIBRATED_DATA_TST_C01:1
L1:DCS-INVALID_CALIBRATED_DATA_THREE_C01:1
L1:DCS-CALIB_FILTER_NOT_OK_C01:1
Summaries of the % "live" time are here:
https://ldas-jobs.ligo.caltech.edu/~gmendell/DCS_SegGen_Runs/H1_C01/html_out/Segment_List.html
https://ldas-jobs.ligo.caltech.edu/~gmendell/DCS_SegGen_Runs/L1_C01/html_out/Segment_List.html
Due to non-zero dataValid flags, L1:DCS-ANALYSIS_READY_C01:1 will not include these segments:
1127128496 1127128592 96
1127353008 1127353104 96
1127387760 1127387856 96
1127435760 1127435856 96
1127687856 1127687952 96
1128122032 1128122075 43
1128122086 1128122128 42
1128320816 1128320912 96
Total duration = 661 seconds.
but will gain these, due to gaps that have been filled in:
1126981012 1126981079 67
1126981095 1126981474 379
1126981888 1126983568 1680
1126989208 1126989232 24
Total duration = 2150 seconds.
We will not lose any H1:DMT-ANALYSIS_READY:1 time, but will gain these, due to gaps that have been filled in:
1126627282 1126627298 16
1126645233 1126645244 11
1126988688 1126988732 44
1127059230 1127059246 16
1127480460 1127480471 11
1127563235 1127563246 11
Total duration = 109 seconds.

4. For analysis ini files:
i. The frame types are H1_HOFT_C01 and L1_HOFT_C01.
ii. The STRAIN channels are:
H1:DCS-CALIB_STRAIN_C01 16384
L1:DCS-CALIB_STRAIN_C01 16384
iii. State and DQ information is also in these channels:
H1:DCS-CALIB_STATE_VECTOR_C01 16
H1:ODC-MASTER_CHANNEL_OUT_DQ 16384
L1:DCS-CALIB_STATE_VECTOR_C01 16
L1:ODC-MASTER_CHANNEL_OUT_DQ 16384
The bits in the STATE VECTOR are documented here: https://wiki.ligo.org/LSC/JRPComm/ObsRun1#Calibrated_Data_Generation_Plans_and_Status
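A minimal sketch of fetching the C01 strain over NDS2 with gwpy (the GPS span and NDS2 host here are illustrative assumptions, not prescriptions):

from gwpy.timeseries import TimeSeries

start, stop = 1126259446, 1126259478  # any span inside the C01 coverage listed above
strain = TimeSeries.fetch("H1:DCS-CALIB_STRAIN_C01", start, stop,
                          host="nds.ligo.caltech.edu")
state = TimeSeries.fetch("H1:DCS-CALIB_STATE_VECTOR_C01", start, stop,
                         host="nds.ligo.caltech.edu")
print(strain.sample_rate, state.sample_rate)  # expect 16384 Hz and 16 Hz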
Following instructions written by Keith Thorne at LLO (https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=21998), I was able to get psinject to run under control of monit. The same problem existed here as at LLO, including an existing psinject process that monit didn't know about. The ~hinj/Details/pulsar/RELEASE link points to O1test at LHO, as at LLO. I have left psinject turned off (via monit) for now.
Joe B, Chris B
Command will be:
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 SBER8V3_H1.txt 1.0 1129673117 -d -d > log_stoch.txt
This test is over. More details later.
H1 has been locked for the past ~8 hours in Observing Mode. Just cruising along listening for gravitational waves.
TITLE: 10/23 OWL Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: In Observation at 77Mpc for 4+hrs.
Incoming Operator: Travis
Support: Nutsinee, Landry (chatted on phone about status), Kiwamu (on-call, but not called)
Quick Summary: H1 running in the special state for 45 mHz blend filters due to high useism, with low gain on the ASC SOFT loops.
Shift Activities:
GRB Alarm.
In response to Corey's alog I have turned on an additional heater in the LVEA.
HC2B has been set to 9 mA, or one stage of heat. The discharge air is responding, so I know the heater is working.
The power outage earlier this week has messed up our FMCS trending capability, and Bubba and I have been trying to sort this out. There are a few bad data samples which mess up the plotting routine, and apparently no way to delete these data points, according to our vendor! For the interim we can use Data Viewer while at LIGO, but from home it is difficult to see trends.
I noticed that the end stations were running out of control range, so I have incremented the heat there as well. Both end stations' heaters are now set to 9 mA on the control signal. These are variac-fed units, so there should be no significant current spikes as in the corner station, where the heaters are either off or on.
LVEA Low Temperatures For Last Three Evenings
Have had a few "CS temperature is low" Verbal Alarms. Looking at trends (see attached 7-day plot), it seems we've had dips in temperature for the last three nights. Sending Bubba & John an email.
H1 Status
Other than the lockloss and slow recovery, H1 has been doing decently at 76Mpc.
Overview
As Nutsinee was beginning to head out, she noticed (at 9:23) oscillations on tidal & ASC signals. H1 then had a lockloss within 4 minutes (see Nutsinee's lockloss post). Spent quite a bit of time trying to get back up, with a mix of different issues.
ALS troubles
During the 2nd lock acquisition attempt, saw that the ALS had troubles. Alignment looked fine, but the arms would oscillate and then H1 went DOWN. Tried staying DOWN for a while to let the mirrors calm down, but not sure what the problem was here. Eventually H1 made it through this step, but we spent on the order of 15-25 minutes just in ALS.
ISS 2nd Loop "Not Great"
Once the ALS issues were behind us, H1 had locklosses at various Guardian states. On about the 3rd attempt after the ALS issues above, we made it all the way to ENGAGE_ISS_2ND_LOOP, but were stuck there on the order of 15-20 minutes. Nutsinee pointed me to Kiwamu's alog for troubles during this state, and we followed his procedure for engaging the ISS by hand.
NOTE: While we were in this state the ISS diff power was just under 9%. We tried adjusting the slider to lower this, to no avail. After engaging by hand, it went up and then came down to about 8.2%.
SDF Differences: LSC ARM INPUT MATRX 2_2
Once we were at NLN, we noticed two SDF differences for the LSC, related to LSC ARM Input Matrix element 2_2. The value was 0, but SDF wanted it to be 14. We reverted the two channels, but we actually had to click "Load Matrix" for the matrix element to change. Not sure why Guardian missed this.
Summary
So, went through Guardian the way TJ instructed me to at the end of his shift (going to Manual during ENGAGE_ASC_PART3 and staying in low gain for ASC SOFT; this is all for running in the special state with blend filters for high useism).
Corey, Nutsinee
A few minutes before the lockloss we noticed DHARD, IMC-F_OUT16, and ETMY DRIVEALIGN running away on the FOM. No earthquake report from USGS within half an hour before/after the lockloss, and no seismic activity in the earthquake band. I've attached some lockloss plots below. Pretty much all the ASC signals were running away except DSOFT and CSOFT. WITNESS channels show BS, ETMX, ETMY, PRM, and SR2 running away. I've (temporarily) added IMC-F, ETMX, and ETMY DRIVEALIGN to the plot.
I have taken a look at the re-calibrated C01 frames for LLO and LHO. See LLO alog 21961 for the LLO version of this.

I grabbed the H1:GDS-CALIB_STRAIN (online) and H1:DCS-CALIB_STRAIN_C01 (re-calibrated) data from the Caltech cluster and generated the ratio of the spectra at a time when we expect both the C01 and C00 to be correct. The data is stored in the calibration svn at:
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Runs/O1/H1/Results/C01
It was generated by the script:
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Runs/O1/H1/Scripts/CHECKS/compare_C00_C01.m

Below I attach a ratio of the strain between C01 and C00, from GPS 1128249017 (Oct 7, 2015 10:30 UTC). This is a time period when the front-end models and GDS pipeline matched our best knowledge of the instrument and were used to generate the re-calibrated C01 data. This should be compared to LHO alog 22291, first attachment.

Similarly to LLO (see LLO alog 21357), we believe the small 2% differences at low frequency are due to minor mismatches in the front-end filter realization against our full calibration model, which the GDS filters match well. The large, narrow 5% difference around the DARM calibration line (at 37.3 Hz) is similar to LLO (see 21961). The other, higher-frequency discrepancy is, I believe, related to the front-end filtering's handling of certain notches, for which Kiwamu had to reduce the Q in the calibration. See LHO alogs 21332 and 22738.
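For reference, a rough gwpy sketch of this kind of C01/C00 ratio (not the MATLAB script above; the NDS2 host, stretch length, and FFT parameters are illustrative assumptions):

from gwpy.timeseries import TimeSeries

start = 1128249017          # Oct 7, 2015 10:30 UTC, the time quoted above
stop = start + 600          # a ten-minute stretch, purely illustrative

c00 = TimeSeries.fetch("H1:GDS-CALIB_STRAIN", start, stop, host="nds.ligo.caltech.edu")
c01 = TimeSeries.fetch("H1:DCS-CALIB_STRAIN_C01", start, stop, host="nds.ligo.caltech.edu")

# Ratio of amplitude spectral densities, C01 over C00.
ratio = c01.asd(fftlength=16, overlap=8) / c00.asd(fftlength=16, overlap=8)
plot = ratio.plot(xscale="log", ylabel="C01 / C00 amplitude ratio")
plot.savefig("C01_over_C00_ratio.png")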
The issue with the front-end filters around the violin modes (see 22631) was only present in the C00 calibration between 0200 Sep 11 and 1600 Sep 14, so I think this does not appear in this comparison (which Joe says uses data from Oct 7). The +/-5% discrepancy in magnitude at high frequency is around 600Hz, maybe it's a calibration line?
It might be worth making a plot like this for a stretch of time when the 508Hz noise was present in the C00 frames.
Because there have been some indications that the now-missing HAM3 0.6 Hz line was electronic, I checked that it was still gone after this morning's power outage. It is, so far. The attached spectra are from the GS-13s.
It looks like the line reappeared for a while on Oct 13. It is visible in the DetChar summary page for HAM3 at the link below https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20151013/plots/H1-ISI_ODC_1B8A8B_MEDIAN_RATIO_SPECTROGRAM-1128729617-86400.png I've attached a png of the plot which is linked to above.