The DMT Viewer sessions (for seismic trends) on NUC5 had been closing on their own for the last few days. The NUC5 instructions say this is probably due to the machine running out of memory, so I rebooted the computer by pressing the button on the NUC itself.
Unfortunately, the computer did not obviously come back up on the TVs. Luckily Jim was here; he tinkered with the TVs and found that they had switched to a different input source, so we had to switch them back to HDMI (the upper TV also had to be switched, which required a ladder).
I couldn't log in to the computer, so I pulled a keyboard and mouse from another computer and logged in (after this I could share the screen remotely).
I opened up the seismic DMT sessions. I wasn't sure how to open the RF45 session (Cheryl posted the link to that file... but do I need to ssh into a different session to run this?).
Final note: I could not get the mouse onto the bottom TV screen.
TITLE: 10/25 OWL Shift: 07:00-15:00UTC (00:00-08:00PDT), all times posted in UTC
STATE of H1: H1 In Observing @ 79 Mpc
Outgoing Operator: Jim
Support: On Call-->Kiwamu until 7am & then Sheila from 7am - 4pm
Quick Summary: H1 running nicely with a high range hovering at 80Mpc (due to low useism?). Jim mentioned a peak in the running DARM spectrum which looked new (and is not seen in our 10/14 reference). This peak is at ~16.75Hz (what is this?).
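One way to check whether the ~16.75Hz feature is actually new would be to compare DARM spectra from the current lock against the 10/14 reference time. A rough gwpy sketch is below; the GPS times are placeholders and the channel choice (calibrated strain) is my assumption, so treat it as illustrative only.

    # Compare the DARM ASD from the current lock against the 10/14 reference.
    # GPS times below are placeholders -- fill in the actual lock times.
    import matplotlib.pyplot as plt
    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:GDS-CALIB_STRAIN'                 # assumed calibrated-strain channel
    REF_START, REF_END = 1128902417, 1128903017  # placeholder: 10/14 reference stretch
    NOW_START, NOW_END = 1129852817, 1129853417  # placeholder: current lock stretch

    ref = TimeSeries.fetch(CHAN, REF_START, REF_END).asd(fftlength=32)
    now = TimeSeries.fetch(CHAN, NOW_START, NOW_END).asd(fftlength=32)

    plt.loglog(ref.frequencies.value, ref.value, label='10/14 reference')
    plt.loglog(now.frequencies.value, now.value, label='current lock')
    plt.axvline(16.75, color='k', linestyle='--', label='~16.75 Hz peak')
    plt.xlim(10, 30)
    plt.xlabel('Frequency [Hz]')
    plt.ylabel('ASD [1/rtHz]')
    plt.legend()
    plt.show()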
TITLE: 23:00-7:00UTC
STATE of H1: Locked in Observation Mode @ 77Mpc
Support: I ran into Kiwamu in the kitchen? Miquel was here.
Quick Summary: lost lock halfway through the shift; relocking was somewhat slow
Shift Activities:
3:10 Lock loss, I was making food at the time, but I suspect RF45?
3:45 DRMI and PRMI are no good, start initial alignment
4:15 Initial alignment done
5:00 relocked, ran A2L; it left some diffs in the OMC that I reverted, which brought up a change in the LSC ARM input matrix. Not sure what it does, but the IFO is running, so I accepted it.
RF45 has been slightly noisy tonight, and it seems to coincide with ETMY saturations. I couldn't find the directions on how to deal with it: someone threw away the sticky note on the ops workstation, and it seems the directions haven't made it into the ops wiki.
Please see the new Operator Sticky Note wiki page. (This is where we look for the latest/greatest news for operators running H1; the RF45 instructions from Stefan's 9/30 alog are posted there.)
Lost lock at about 3:00 UTC and struggled a little to get it back. The IFO is relocked now; I'm running A2L before resuming observing, which should take just a couple more minutes.
TITLE: 15:00-23:00UTC, 8:00-16:00PT, all times in UTC
STATE of H1: Locked in Observation Mode @ 77Mpc
Incoming Operator: Jim
Support: Jordan Palamos (fellow)
Quick Summary: H1 locked entire shift
Shift Activities:
Glitches in the range:
17:36:19UTC - ETMY saturation, glitch in range, 17:38:00UTC
18:04:01UTC - ETMY saturation, glitch in range, 18:06:00UTC
18:38:00UTC - glitch in range, not announced as ETMY
18:55:00UTC - glitch in range, not announced as ETMY
19:03:20UTC - ETMY saturation, glitch in range, 19:05:00UTC
19:28:19UTC - ETMY saturation, glitch in range, 19:30:00UTC
20:35:16UTC - ETMY saturation, glitch in range, 20:37:00UTC
21:04:00UTC - glitch in range, not announced as ETMY
22:00:56UTC - ETMY saturation, no glitch in range
22:21:00UTC - glitch in range, not announced as ETMY
Quiet shift with no earthquake activity, and only one burst of wind around 18:50UTC that got up to 20mph.
The 45 MHz FOM is userapps/isc/h1/scripts/H1_45MHz_Demod_FOM.xml
The six-panel seismic FOM screen is /ligo/home/controls/FOMs/Seismic_FOM.xml
TITLE: 10/24 OWL Shift: 07:00-15:00UTC (00:00-08:00PDT), all times posted in UTC
STATE of H1: Locked in Observation Mode @ 76Mpc
Incoming Operator: Cheryl
Support: Kiwamu (on-call, and on phone for 1.5-2hrs)
Quick Summary: H1 back with 90mHz blends for the BSC-ISIs & ASC Soft filters OFF, since useism has been trending down over the last 24hrs (it's currently about 0.18um/s).
Shift Activities:
(Corey, Kiwamu)
With no obvious headway, Jim and I were both thinking another Initial Alignment was looking inevitable. After he left around 1am, I went about starting an Initial Alignment.
During the alignment, while in ALS, I noticed one of the control signals not converging (ETMX M0 LOCK Y OUT 16). Is this normal? I went ahead and offloaded Green WFS.
Then, when starting Input Align, the Xarm simply would not lock. It would get fringes with powers up to 1.0, but they lasted less than a second. The Mode Cleaner would drop out every few minutes as well. When there were fringes in the Xarm, one could see the ASAIR spot shift in yaw a little. These were the symptoms. I waited for about 20min, then woke up Kiwamu at about 1:45am PDT.
PR2 Tweaked In Yaw
Since the IMC kept dropping, we addressed input pointing by tweaking PR2 in yaw. I moved it 1.0 units (from 4313.6 to 4312.6).
Dark Offsets
Kiwamu suspected bad dark offsets, so we pitched MC2 to prevent IMC locking, and then ran the dark offset script (/opt/rtcds/userapps/release/asc/common/scripts/setQPDoffsets)
After running this, the offsets were not zero (more like 4!). We found offsets in the SUM filter banks (which are at sitemap/LSC/Photodetectors overview/X_TR_{A & B}/SUM {full}). We took these to zero & were then happy with the offsets.
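For the record, zeroing leftover filter-bank offsets like this can also be done with pyepics; the channel names below are my guesses based on the screen path quoted above (standard CDS filter modules expose an _OFFSET field), so double-check them against the MEDM screen before using.

    # Sketch of zeroing the leftover SUM offsets with pyepics.
    # Channel names are assumptions inferred from the screen path above.
    from epics import caput

    for pd in ('X_TR_A', 'X_TR_B'):
        ch = 'H1:LSC-%s_SUM_OFFSET' % pd   # assumed channel name
        caput(ch, 0.0)
        print('zeroed', ch)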
At this point, I went to get water while Kiwamu watched X-arm locking.
LSC XARM Gain
We tried to give the Xarm a kick by enabling the ETMX M0 LOCK L OFFSET (& also disabled the BOOST [FM4]), but this didn't have much of an effect...
While I was getting water, Kiwamu changed a gain for the XARM (located at sitemap/LSC/Overview/XARM). The gain is usually 0.05 during INPUT_ALIGN, but Kiwamu took it to 0.1 & this finally locked the XARM. So to get the XARM to lock, one doubles the gain, waits for it to lock up, and then drops it back down (i.e. to 0.05).
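For future reference, here is a minimal sketch of that gain trick using pyepics. The channel name is a guess (read the real one off the LSC XARM screen), so this is illustration only, not a vetted procedure.

    # Double the XARM gain to help the arm grab lock, then restore it.
    # GAIN_CH is an assumed channel name -- confirm it on the LSC XARM screen.
    from epics import caget, caput

    GAIN_CH = 'H1:LSC-XARM_GAIN'

    nominal = caget(GAIN_CH)        # normally 0.05 during INPUT_ALIGN
    caput(GAIN_CH, 2 * nominal)     # double the gain so the arm can grab lock

    input('Press Enter once the X arm has locked...')
    caput(GAIN_CH, nominal)         # drop the gain back to its nominal value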
At this point we decided I should start over with the alignment, and that Kiwamu would go back to sleep...I think we were on the phone for about 1-1.5hrs.
Back to Initial Alignment & Locking
Ran through an Initial Alignment with no noticeable issues. (Dark Mich took a while to settle down; that seemed new to me, and Jim noticed this too when I saw him aligning.)
Finally went for locking. Had issues at various points. For ALS, there were VCO/PDH Errors popping up on Guardian. Going to the Yarm VCO showed errors on that screen (ALS_CUST_LOWNOISEVCO.adl). So, from Sheila's Training powerpoint file, I saw that I should take the Tune Ofs slider to zero, and that took care of that.
ASC Soft FM1 -20dB filters: OFF!
Since useism has been trending down over the last 12 hrs, and since the BSC-ISI blends are back to the 90mHz ones, I opted to let Guardian Turn OFF the ASC SOFT filters. This is different from previous lock segments!
Made it to NOMINAL_LOW_NOISE!
After over 4.5hrs H1 was finally back to NLN! SDF had a few Diffs, so I went through them:
O1 CDS Overview RED TIMING ERROR
Before going to Observation Mode, I noticed SUSETMY had a TIMING RED box. Clicked "Diag Reset" to clear.
[Time for lunch]
I forgot to run the A2L measurement!! :-/
TITLE: 10/24 OWL Shift: 07:00-15:00UTC (00:00-08:00PDT), all times posted in UTC
STATE of H1: Jim Aligning
Outgoing Operator: Jim
Support: On Call-->Kiwamu
Quick Summary: Arrived to see Jim starting an alignment. Started attempting to lock. For DRMI, we had a quick wrong-mode lock but could not get the powers up, so we tried PRMI, but this never locked up. After some head scratching, we are figuring out: To Align (again), Or Not To Align. I'm trying to lock one more time with this alignment. If there is no luck here, I might try another initial alignment. Stay tuned.
OK, PRMI only locked once in 10min. I'm going to try Initial Alignment #2 shortly.
Title: 10/24 EVE Shift 23:00-7:00 UTC
State of H1: Relocking
Shift Summary: Locked in Observing Mode for most of the shift, lost lock shortly before Corey arrived
Activity log:
23:06 Lock loss
00:30 Switch back to 90mHz blends, useism and winds are both quiet
1:00 Lock is reacquired
6:37 lock loss, no clear cause. Winds are quiet, useism is still going down, and none of the other FOMs show anything obviously suspicious. I am starting a new initial alignment because the ALS COMM beatnote has been trending down (it was 2 dB at the beginning of the 1:00 lock stretch and is 1 dB now, with ALS X being difficult to hold).
Detailed summary: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20151012
Summary:
- Range ~75 Mpc; duty cycle: 75.83%, 49.74%, 58.81% respectively
- EY Magnetic Glitches: Periodic 60Hz magnetic glitches are still present and are vetoed out using the channel H1:SUS-ETMY_M0_DAMP_Y_IN1_DQ.
- Loud Glitches: The loud glitches are still there, but the number of loud glitches during analysis-ready time is relatively lower than on previous days. As usual, a few of them were vetoed using the channel ASC-AS_A_RF45_Q_YAW_OUT_DQ. The other channels that showed up as significant in the significance-drop plots are the X and Y transmon QPDs. This has been true for all the previous days since the EX beam diverter issue was fixed.
- 300-400Hz Glitches: These glitches seem to be related to the PSL periscope. They were mostly vetoed using the channel IMC-WFS_A_DC_SUM_OUT_DQ at round 12. These glitches are not present after Tuesday's maintenance (alog 22482, related to the IMC WFS offset being disabled). Two relevant alogs: 22418, 22154.
- RF45 Glitches: RF45 glitches can be noticed again on the 12th when the interferometer was not locked, but at the beginning of the 13th's lock we can see these glitches clearly in DARM. Some relevant alogs: 22498, 22515, 22527. There were no RF45 glitches after ~3:30 UTC.
- Low Frequency Glitches: Apart from the RF45 glitches, there were a few clusters of glitches that showed up on the 12th and are still present. HVeto (mainly round 10, plus some other rounds) as well as the UPV results from the 12th showed that corner-station seismic channels were associated with these glitches (related alogs 22494, 22710). These glitches seem to be related to ground motion in the 3-10Hz range; HVeto mainly used corner-station seismic channels to veto them.
The querying failure showed up and didn't go away as it usually has, so I logged onto the script0 machine and found the script was down. I restarted it in the same screen session (pid: 4403) and made sure that ext_alert.pid had the same pid.
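As a sanity check, verifying that the pid in ext_alert.pid matches a live process can be scripted; a minimal sketch is below (the pidfile path is an assumption about where the script writes it).

    # Confirm that the pid recorded in ext_alert.pid is actually running.
    import os

    PIDFILE = 'ext_alert.pid'   # assumed location of the pidfile

    with open(PIDFILE) as f:
        pid = int(f.read().strip())

    try:
        os.kill(pid, 0)          # signal 0 only tests existence/permissions
        print('pid %d is running' % pid)
    except OSError:
        print('pid %d is NOT running -- restart the script' % pid)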
Yesterday while I was on shift, I noticed that the querying failure would repeatedly pop up and then disappear seconds later. I logged in to script0 and the script hadn't stopped; it was just having a very hard time connecting yesterday, and a "Critical connection error" was printed frequently. Today it did not show the same signs of connection issues, at least until I had to restart it.
The ext_alert.py script has not reported 2 different GRBs from the past two days that are on GraceDB (E194592 & E194919). Both of the external events had high latency times (3004 sec and 36551 sec). I am not sure if this is the reason they were not reported. I didn't see anything in ext_alert.py or rest.py that would filter out events with high latency, but maybe this is done on the GraceDB side of things? I'll look into this more on Monday when I get a chance.
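To illustrate the suspicion, the kind of latency cut that could explain the missed events would look something like the sketch below. None of these names or thresholds come from ext_alert.py; they are placeholders for the idea only.

    # Hypothetical latency cut: skip external events whose reporting latency
    # exceeds some threshold.  Names and threshold are illustrative only.
    MAX_LATENCY_S = 1800   # assumed cutoff, not a known setting

    def should_alert(event_gps, reported_gps, max_latency=MAX_LATENCY_S):
        """Alert only if the external event arrived promptly enough."""
        return (reported_gps - event_gps) <= max_latency

    # The two missed events had latencies of 3004 s and 36551 s, so any cut
    # below ~3000 s would have dropped both of them.
    for latency in (3004, 36551):
        print(latency, should_alert(0, latency))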
Title: 10/23 Day Shift 15:00-23:00 UTC (8:00-16:00 PDT). All times in UTC.
State of H1: Observing
Shift Summary: Locked in Observing Mode for my entire shift. During a period when LLO was down, some PEM injections, electronics cabling investigations, and GW injections took place opportunistically.
Incoming operator: Jim
Activity log:
16:37 Kyle to Y28 and Mid Y
18:23 Kyle done
20:52 Commissioning mode while Sheila and Keita go to LVEA to look at cabling. I reloaded a corrected SR3_CAGE_SERVO guardian at the same time.
20:59 Sheila and Keita done
21:10 turned off light in high bay/computer room after Keita noticed it was still on
21:22 Jordan starting PEM injection
21:34 Jordan done
21:47 CW injection alarm
22:02 Stochastic injection alarm
22:09 TJ restarting DIAG_MAIN guardian
22:15 Stochastic injection complete
22:19 CW injection started
22:21 a2L script ran
22:30 Observing Mode
Summary:
- Range ~75 Mpc, observing 55% (low % largely caused by power outage)
- RF45 glitches still present
- anthropogenic noise (train band) caused many glitches, including 8+ newSNR triggers in BBH
- DQ shift page: https://wiki.ligo.org/DetChar/DataQuality/DQShiftH120151019
CW injections started around 22:19 UTC. We transitioned to Observing Mode at 22:30 UTC with the CW injections running.
Daniel, Sheila, Evan
Over the past 45 days, we had two instances where the common-mode length control on the end-station HEPIs hit the 250 µm software limiter. One of these events seems to have resulted in a lockloss.
The attached trends show the ISC drives to the HEPIs, the IMC-F control signal, and the IMC-F offloading to the suspension UIMs over the past 45 days. One can see two saturation events: one on 25 September, and another on 11 October.
We survived the event on 11 October: the EY HEPI hit the rail, and counts began to accumulate on the EY UIM, but the control signal turned around and HEPI came away from the rail. On the 25th of September, both end-station HEPIs hit the rail, and after about 2 hours of counts accumulating on the UIMs, IMC-F ran away (second attachment). Note that both HEPI drives started at 0 µm at the beginning of the lock stretch.
Both of these periods experienced large common drifts, whereas a pure tidal excitation would repeat itself after 24 hours. This may indicate a problem with the reference cavity temperature and the PSL/LVEA temperature during these days.
Added PSL Temp Trends log in 22881.
We're still having operators run A2L before going to Observe or when leaving Observe (e.g. for maintenance), so there's more data to come, but I wanted to post what we do have, so that we can compare with LLO's aLog 21712.
The 2 plots are the same data: The first uses the same axis limits as the LLO plot, while the second is zoomed in by a factor of 5.
It certainly looks like we aren't moving nearly as much as LLO is.
Note that the values for both LLO and LHO are missing a coupling factor of L2P->L3L, which will change the absolute value of the spot position displacements. However, since we're both wrong by the same factor, our plots are still comparable. See aLog 22096 (the final comment in the linked thread) for details on the factor that we still need to include.
Today I went through all of the A2L data that has been collected so far, and pulled out for analysis all of the runs for each optic that had acceptable data. Here, I'm defining "acceptable" as data sets that had reasonable linear fits, as displayed in the auto-generated figures.
In particular, for almost all of the ETMY data sets, there is one data point that is very, very far from the others, with large error bars. I'm not sure yet why we seem to have so much trouble collecting data for ETMY. I've emailed Marie at LLO to see if they have similar symptoms.
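For concreteness, an acceptance test of this flavor could look like the sketch below: fit a line to each run and reject runs with a wild outlier or with one point whose error bar dwarfs the others. The thresholds and array names are illustrative, not the values actually used for these plots.

    # Rough acceptance test for an A2L run: weighted line fit plus outlier checks.
    import numpy as np

    def acceptable_run(x, y, yerr, max_resid_sigma=5.0, max_err_ratio=10.0):
        """Return True if a weighted linear fit to (x, y +/- yerr) looks reasonable."""
        coeffs = np.polyfit(x, y, 1, w=1.0 / yerr)    # weighted least-squares line
        resid = y - np.polyval(coeffs, x)
        # reject runs with a point far from the fit (in units of its own error bar)
        if np.any(np.abs(resid / yerr) > max_resid_sigma):
            return False
        # reject runs where one point's error bar dwarfs the others
        if np.max(yerr) / np.median(yerr) > max_err_ratio:
            return False
        return True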
For all of the times that we have taken measurements, I also checked whether the IFO had been on for more than 30 minutes or not. In both plots below, the cold blue data points are times when the IFO had been at Nominal Low Noise for less than 30 minutes prior to the measurement, while the hot pink data points are times when the IFO had been at Nominal Low Noise for at least 30 minutes.
The first plot is all of the data points that I accepted, plotted versus time. This is perhaps somewhat confusing, but each of the y-axes is centered about the mean spot position for that optic and degree of freedom, plus or minus 3 mm. So, each y-axis has a range of 6 mm, although they're all centered around different values.
For the second plot, I find all the times that I have both acceptable pitch and yaw measurements, and plot the spots on a grid representing the face of the optic. Note that since I had zero acceptable ETMY Yaw data points, nothing is plotted for ETMY at all.
Interestingly, the ITM spots seem fairly consistent, regardless of how long the interferometer has been on, while the ETMX spots have a pretty clear trend of moving as the IFO heats up. None of our spots are moving more than ±1.5 mm or so, so we are quite consistent.
After a few hours I noticed an odd display on the low-frequency seismic trends. Then I looked at the date for this session and it was from some time in August! Not sure why this was the case. I just hit RUN, then closed the session out and hit RUN again, and this time it was live with today's date.
Now I can see that over the last 12hrs the useism has been trending up noticeably.