The comparison yesterday was between times with very different ground motion, so it bears repeating with a time of similar ground motion.
Attached is a comparison from 4am Tuesday (reference traces) and 2am this morning. Below 10 Hz on X, Y & Z there is nothing to mention. Between roughly 10 and 100 Hz on X & Y, there are bands that are both more and less noisy, not obviously due to the ground motion. The Z DOF looks very comparable. On the rotational DOFs, there are a few areas of interest: RX 0.5 Hz; RY 0.1--0.5 Hz & 20 Hz; RZ 20--80 Hz. I'll repeat this tomorrow to see whether these notables are repeatable.
JeffK & HughR
Here are TFs and coherences between the ground STSs near HAM2 and the HEPI floating L4Cs. These comparisons are from 3am Monday and Wednesday, with pale reference traces from before the grouting on Monday. The grouted data look mostly the same as before grouting, but there are a couple of bands where things look better now: Y at 1 and 35 Hz. Again, to be repeated.
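For reference, this kind of TF/coherence estimate can be sketched with scipy.signal on two synthetic sensor records; the sample rate, averaging length, and gain below are illustrative assumptions, not the actual DTT settings used for the measurement:

```python
# Sketch of a ground-to-L4C transfer function and coherence estimate.
# "ground" stands in for the STS record; "l4c" is the same motion
# through a simple assumed gain of 0.8, plus independent sensor noise.
import numpy as np
from scipy import signal

fs = 256.0                       # assumed sample rate [Hz]
t = np.arange(0, 600, 1 / fs)    # 10 minutes of data

rng = np.random.default_rng(0)
ground = rng.standard_normal(t.size)
l4c = 0.8 * ground + 0.05 * rng.standard_normal(t.size)

nperseg = 4096
f, Pxy = signal.csd(ground, l4c, fs=fs, nperseg=nperseg)
_, Pxx = signal.welch(ground, fs=fs, nperseg=nperseg)
tf = Pxy / Pxx                   # transfer function estimate (ground -> L4C)
_, coh = signal.coherence(ground, l4c, fs=fs, nperseg=nperseg)
```

With this much averaging, the coherence sits near 1 and |tf| recovers the assumed gain of 0.8 across the band where the signal dominates the noise.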
J. Kissel, K. Izumi, N. Kijbunchoo

We've gathered new DARM OLG TF measurements to continue our long-term investigation of the slow evolution of the calibration parameters. Based on what I see in DTT alone, the uncorrected loop gain and the Pcal to CAL-CS transfer function remain within 5% and 5 [deg] of the reference model. Very good! I attach screen shots of the raw measurements and a conlog of the relevant settings; more detailed analysis to come.

The new DTT results live here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/2015-10-28_H1_DARM_OLGTF_7to1200Hz.xml
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/2015-10-28_PCALY2DARMTF_7to1200Hz.xml
and have been exported to the following:

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs
1 / (1 + G) =
2015-10-28_H1_DARM_OLGTF_7to1200Hz_A_ETMYL3LOCKIN2_B_ETMYL3LOCKEXC_coh.txt
2015-10-28_H1_DARM_OLGTF_7to1200Hz_A_ETMYL3LOCKIN2_B_ETMYL3LOCKEXC_tf.txt
-G =
2015-10-28_H1_DARM_OLGTF_7to1200Hz_A_ETMYL3LOCKIN2_B_ETMYL3LOCKIN1_coh.txt
2015-10-28_H1_DARM_OLGTF_7to1200Hz_A_ETMYL3LOCKIN2_B_ETMYL3LOCKIN1_tf.txt

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/
C / (1 + G) =
2015-10-28_PCALY2DARMTF_7to1200Hz_A_PCALRX_B_DARMIN1_coh.txt
2015-10-28_PCALY2DARMTF_7to1200Hz_A_PCALRX_B_DARMIN1_tf.txt
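The two exported loop quantities are tied together by simple loop algebra: from the closed-loop suppression 1/(1+G) one can recover the open-loop gain and cross-check it against the directly exported -G. A minimal sketch of that consistency check, using a toy loop model rather than the real DARM open-loop model:

```python
# Consistency check between the two exported loop quantities:
# given S = 1/(1+G), the open-loop gain follows as G = (1-S)/S.
# The loop model here is a toy 1/f^2 gain with a small delay,
# NOT the real DARM open-loop model.
import numpy as np

f = np.logspace(np.log10(7), np.log10(1200), 200)    # measurement band [Hz]
ugf = 60.0                                           # assumed unity-gain frequency
G = (ugf / f) ** 2 * np.exp(-2j * np.pi * f * 1e-4)  # toy open-loop gain

S = 1.0 / (1.0 + G)          # what the LOCKIN2/LOCKEXC TF measures
minus_G = -G                 # what the LOCKIN2/LOCKIN1 TF measures

G_recovered = (1.0 - S) / S  # loop algebra inverts the closed-loop TF
```

If the two exported files are mutually consistent, G_recovered from one should match the negative of the other at every frequency point.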
Sudarshan, Darkhan, RickS
Using data from the SLM tool for 67 minutes just before the measurement of the Pcal2Darm TF, we calculated the TF for the 1 kHz and 3 kHz Pcal lines.
The plots attached below show:
1) The "kappas"
2) The measured TF, with coherence and the fractional uncertainty
3) The data for the 1 kHz line and comparison with TF values interpolated from the measured TF data
4) The data for the 3 kHz line
At 1 kHz, the transfer coefficient calculated from the SLM data differs from the coefficient interpolated from the measured transfer function by 1.01% in amplitude and 0.15 deg. in phase.
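The comparison described above — picking off the swept-TF value at the Pcal line frequency by interpolation and differencing it against the line-derived coefficient — can be sketched as follows; all frequencies and TF values here are made up for illustration, not the actual measurement data:

```python
# Sketch of comparing a single calibration-line TF value against one
# interpolated from a swept measurement. Values are invented stand-ins.
import numpy as np

f_meas = np.array([900.0, 950.0, 1000.5, 1100.0])           # sweep points [Hz]
tf_meas = 2e-6 * np.exp(1j * np.deg2rad([10, 12, 14, 18]))  # complex swept TF

f_line = 1000.0                                             # Pcal line frequency
# Interpolate magnitude and (unwrapped) phase separately, then recombine,
# which avoids artifacts from interpolating real/imag parts directly.
mag = np.interp(f_line, f_meas, np.abs(tf_meas))
ph = np.interp(f_line, f_meas, np.unwrap(np.angle(tf_meas)))
tf_line_interp = mag * np.exp(1j * ph)

tf_line_slm = 2.02e-6 * np.exp(1j * np.deg2rad(13.95))      # stand-in SLM value
amp_diff_pct = 100 * (np.abs(tf_line_slm) / np.abs(tf_line_interp) - 1)
phase_diff_deg = np.rad2deg(np.angle(tf_line_slm / tf_line_interp))
```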
The script and plots will be committed to the SVN tomorrow; I will add a comment giving their location once they are.
Jonathan, Ryan, Jim, Carlos, Dave
We have configured and started the syslog-ng daemons on all the front end computers. This was done while H1 was out of observing mode. We have also configured sshd on the front end computers to log all successful and unsuccessful login attempts.
The new Beckhoff SDF system was showing some channels as different, but the difference values were below the resolution recorded in the OBSERVE.snap file. I accepted these channels and wrote a new OBSERVE.snap file. Some channels were changing continuously; I have un-monitored these. The SDF is now running with zero diffs.
A WSU TC undergraduate group toured the site this morning. Arrival time at LSB = 9:30 AM. Departure time = 11:40 AM. Group size = ~15 adults. Vehicles at the LSB = ~5 passenger cars. The group was in the control room around 11:30 AM.
O1 days 32 to 40
model restarts logged for Tue 27/Oct/2015
2015_10_27 09:43 h1sysecatc1plc1sdf
2015_10_27 10:18 h1sysecatc1plc1sdf
2015_10_27 11:52 h1broadcast0
2015_10_27 11:52 h1dc0
2015_10_27 11:52 h1nds0
2015_10_27 11:52 h1nds1
2015_10_27 11:52 h1tw0
2015_10_27 11:52 h1tw1
2015_10_27 11:55 h1nds0
2015_10_27 11:55 h1nds1
2015_10_27 11:56 h1nds0
2015_10_27 11:57 h1nds0
2015_10_27 12:05 h1broadcast0
2015_10_27 12:05 h1dc0
2015_10_27 12:05 h1tw0
2015_10_27 12:05 h1tw1
2015_10_27 12:06 h1nds0
2015_10_27 12:06 h1nds1
2015_10_27 13:22 h1sysecatc1plc1sdf
2015_10_27 13:29 h1sysecatc1plc1sdf
2015_10_27 14:08 h1sysecatc1plc1sdf
2015_10_27 14:40 h1sysecatc1plc2sdf
2015_10_27 14:42 h1sysecatc1plc3sdf
2015_10_27 14:44 h1sysecatx1plc1sdf
2015_10_27 14:44 h1sysecatx1plc2sdf
2015_10_27 14:46 h1sysecatx1plc3sdf
2015_10_27 14:48 h1sysecaty1plc1sdf
2015_10_27 14:48 h1sysecaty1plc2sdf
2015_10_27 14:50 h1sysecaty1plc3sdf
Maintenance Tuesday. Roll out of new Beckhoff SDF system, showing up as pseudo front ends (marked in grey). DAQ restarted for new EDCU channels. Restart needed following NDS issues with Beckhoff SDF's high DCU ids.
model restarts logged for Mon 26/Oct/2015 No restarts reported.
model restarts logged for Sun 25/Oct/2015
2015_10_25 14:37 h1nds1
2015_10_25 19:00 h1nds1
2015_10_25 19:03 h1nds1
Unexpected crash on h1nds1.
model restarts logged for Sat 24/Oct/2015 No restarts reported.
model restarts logged for Fri 23/Oct/2015 No restarts reported.
model restarts logged for Thu 22/Oct/2015 No restarts reported.
model restarts logged for Wed 21/Oct/2015 No restarts reported.
model restarts logged for Tue 20/Oct/2015
Power outage at 06:33. Full restart report attached for the full day. Maintenance day restarts not associated with power outage recovery are:
2015_10_20 12:30 h1calcs
2015_10_20 12:30 h1calex
model restarts logged for Mon 19/Oct/2015 No restarts reported.
TITLE: 10/28 [DAY Shift]: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Observing at 78 Mpc
OUTGOING OPERATOR: Ed
QUICK SUMMARY: The IFO has been locked almost 19 hours. MICH breathing continues. Wind below 5 mph. µSeism reaching 0.5 µm/s (90th percentile dashed line). Nominal seismic activity in the EQ band.
TITLE: Oct 28 OWL Shift 7:00-15:00UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Observing
SUPPORT: N/A
LOCK DURATION: Entire shift
INCOMING OPERATOR: Nutsinee
END-OF-SHIFT SUMMARY:
IFO still locked at 78 Mpc. Wind calm. Sei and µSei remain the same. 2 more ETMY glitches. No further oscillations occurred. Curious that when LLO broke lock we also experienced a dip in BNS range.
SUS E_T_M_Y saturating (Oct 28 08:19:22 UTC)
SUS E_T_M_Y saturating (Oct 28 08:32:10 UTC)
SUS E_T_M_Y saturating (Oct 28 09:19:04 UTC)
SUS E_T_M_Y saturating (Oct 28 09:57:46 UTC)
SUS E_T_M_Y saturating (Oct 28 11:39:19 UTC)
SUS E_T_M_Y saturating (Oct 28 11:57:32 UTC)
MID-SHIFT SUMMARY:
ACTIVITY LOG:
07:34 Oscillations: duration ~ 5 minutes. May correlate with a high SNR LF glitch that I observed on DMT Omega.
07:48 MICH (live) on NUC4 FOMW seems to be “breathing” somewhere between 6Hz and 20Hz. Perhaps this is tied in with the previous log entry?
07:57 Oscillation: @08:04 IMC-F_OUTMON (tidal) grew to +/- 5 cts. ASC-DHARD_Y & P grew to +/- 2000 cts. Total duration ~12 minutes. Correlating(?) DMT Omega glitch shows up at approximately the start time of the oscillation.
10:00 Reset GraceDB External Notification script
13:25 DARM glitched between 60Hz and 300Hz
I had to restart GWIstat to clear a problem with the reported GEO600 status that seems to have been caused by a network interruption during the Tuesday maintenance period. For LHO and LLO, the reported duration of the current "OK+Intent" status (as of 5:53 PDT) will be incorrect until the status changes, because GWIstat keeps track of that time. By the way, there is now another status display page produced in parallel with GWIstat: https://ldas-jobs.ligo.caltech.edu/~gwistat/gwsnap.html . It's a cleaner layout, but doesn't have the links to summary pages.
MID-SHIFT SUMMARY: IFO locked and Observing, coincidentally with Livingston, @ ~81.3 Mpc. No change in EQ Sei bands. µSei has increased to 0.6 µm/s. Wind is calm. There have been 4 ETMY saturations and two rogue oscillations, detailed in a previous aLog. I attempted to restart the DTT RF45MHzAM display on NUC5, to no avail.
Ed, Patrick
There have been some oscillations occasionally occurring that Patrick noticed on the previous shift and that I have been watching. They aren't destructive to the lock and they wax and wane on their own. Patrick and I also noticed that the "MICH live" DMT trace is "breathing" up from its reference trace between ~3Hz to ~20Hz. I also noticed that the last time these oscillations occurred there was a high SNR glitch in the LF range of the DMT Omega plot.
Excerpt from Activity Log:
07:57 Oscillation: @08:04 IMC-F_OUTMON (tidal) grew to +/- 5 cts. ASC-DHARD_Y & P grew to +/- 2000 cts. Total duration ~12 minutes. Correlating(?) DMT Omega glitch shows up at approximately the start time of the oscillation. (08:04 UTC + 9 minutes)
TITLE: Oct 28 OWL Shift 7:00-15:00UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Observing
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: IFO is in Observing @ ~78 Mpc. EQ sei bands are all in the 0.22 micron range. µSei is around 0.4 µ. Wind is ≤10 mph. There is an occasional oscillation, first noticed by Patrick, that can be seen in POP_A_LF_OUTPUT and is also reflected in IMC-F_OUT16 (tidal) and ASC-DHARD_Y_OUTMON. It doesn't seem to be a destructive oscillation. The 45 MHz RFAM plot seems to have crashed: the live trace appeared to lie right on top of the reference, whereas there is usually a good amount of space between the two.
TITLE: 10/27 [EVE Shift]: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing @ ~79 Mpc
SHIFT SUMMARY: Remained in observing the entire shift. Low frequency ASC oscillations came and went a few times. At some point the RF45 noise DMT monitor stopped updating. I restarted it but it still wouldn't run, so I left it closed. I had to restart the GraceDB query script; it is still occasionally failing. Seismic and winds remain about the same.
INCOMING OPERATOR: Ed
Low frequency oscillations in the ASC loops have come and gone twice now. Attached are full data plots from 2:30 UTC to 5:30 UTC on Oct. 28 2015.
As part of maintenance recovery, I realigned the IMs after the HAM2 ISI tripped.
IM2 pitch showed a big change from before to after the ISI trip, at -42 urad; now corrected.
There had been several notifications on the CAL_INJ_CONTROL medm that the GraceDB querying had failed, but each time it succeeded shortly after. Finally it seemed to fail and stay failed. I logged into h1fescript0 and killed the screen listed in the PID file (4403). It looks like there are two other leftover screens in which it also failed (22363 and 21590); I'm leaving those for diagnosis. I restarted the script in a new screen (25515). The medm updated that it succeeded. It has failed and succeeded again twice since. Logging into the new screen, it reports:
[ext-alert 1130044041] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044071] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044343] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044373] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044644] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044674] CRITICAL: Error querying gracedb: timed out
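The intermittent fail-then-succeed behavior seen here — transient query timeouts that only occasionally become a hard failure — can be sketched generically as a retry loop. The query function below is a hypothetical stand-in; this is not the actual ext-alert script and does not reproduce the real GraceDB client:

```python
# Hedged sketch: keep polling, treat a timeout as transient, and only
# raise a hard failure after several consecutive misses.
import time

def poll(query, retries=3, wait=1e-3):
    """Return query()'s result, tolerating up to `retries`-1 consecutive timeouts."""
    failures = 0
    while True:
        try:
            return query()
        except TimeoutError:
            failures += 1
            if failures >= retries:
                raise          # give up: report a hard failure upstream
            time.sleep(wait)   # back off briefly before retrying
```

For example, a query that times out twice and then succeeds would be reported as a success rather than triggering a restart.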
When I restarted the GraceDB script, the verbal alarms script reported a GRB alert. It appears this was an old one, though.
Gamma Ray Burst (Oct 28 05:06:07 UTC) Gamma Ray Burst Acknowledge event?
Gamma Ray Burst (Oct 28 05:06:12 UTC) Gamma Ray Burst Acknowledge event?
Gamma Ray Burst (Oct 28 05:06:18 UTC) Gamma Ray Burst Acknowledge event?
Due to the violin mode problem on 10/25, Sheila asked me to investigate when this mode really started to ring up. The first plot attached shows that the amplitude of the 1008.45 Hz mode was consistent between the day before the power glitch and three hours before the power glitch (the small difference you see is within the mode's normal fluctuation range). The second plot shows that the 1008.45 Hz mode got rung up by an order of magnitude during the first lock acquired after the power glitch, just like the others. Because this mode didn't have a damping filter at the time, the amplitude should ideally have stayed where it was. However, the final plot shows that the amplitude became worse as time progressed, while other modes were either stable or being damped, until it caused the problem on October 25th. Could anything that happened during the power loss have caused the mode to change its phase, since it seems to have been slowly rung up by ETMY MODE3, which has existed since before O1? Note that this violin mode had never rung up before. The investigation continues.
To ensure that the 1008.45 Hz line hasn't been slowly ringing up all this time, I've looked back at the ASD amplitude of this mode as far as October 1st. The first plot attached shows the amplitude/sqrt(Hz) versus frequency for this particular mode, one plot per day. The second plot shows log amplitude versus time. I only plotted one data point per day (10:00:00 UTC if data was available, or any time when the BNS range was stable and the IFO had been locked for at least an hour). The last data point is today (10/28 02:00:00 UTC). This mode fluctuated between 1e-22 and 1e-21 from the beginning of the month (10/01) until 10/20. You can see clearly that the amplitude begins to rise above its nominal level on 10/21, after the power outage on 10/20, and continues to grow exponentially until it started to cause problems on 10/25. This indicates that the amplitude growth was caused by positive feedback, which Sheila found to be ETMY MODE3.
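The exponential-growth claim can be checked by fitting a line to log amplitude versus time and reading off the e-folding time; a minimal sketch on synthetic daily amplitudes (the real points come from the SLM tool, and the growth rate here is an assumed value):

```python
# Sketch of testing for exponential ring-up: if amp(t) = A0*exp(t/tau),
# then log(amp) is linear in t with slope 1/tau.
import numpy as np

days = np.arange(8)                 # days since the power outage (assumed)
tau = 2.5                           # assumed e-folding time [days]
amp = 2e-22 * np.exp(days / tau)    # synthetic daily mode amplitudes

slope, intercept = np.polyfit(days, np.log(amp), 1)
tau_fit = 1.0 / slope               # recovered e-folding time [days]
```

A clean linear trend in log amplitude (as the second plot shows) is the signature of positive feedback; noisy but bounded fluctuation would instead show no consistent slope.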
To conclude this study: this mode was not ringing up before October 20th. Why it started to ring up after the power outage is unclear. I can't think of anything else; something must have changed during the power outage to cause this mode to change phase.
Should we worry...?
Was there a significant temperature excursion during the power outage?
Yes.
I've attached a plot of the average temperature in the VEAs. After the power outage, the LVEA average temperature had three big dips of about half a degree. The average temperature at EY seems to fluctuate more often, and EX had a couple of large drops.
According to John, this turns out to be just a coincidence with the power outage.