It's been a rough one so far. ETMX ISI was rung up when I came in. I've tried a bunch of things, and I'm not sure what, or if, I've fixed anything. Currently, the Z sensor correction is running on the ISI and the RY loop is running on St2. I've switched everything back to nominal, and I'll keep watching the ISI, but for now everything looks OK. Shifting the Z sensor correction was what I thought earlier was the fix, but when I got almost to NLN, the EX ISI rang up and blew the lock. I've just gotten to NLN. I've now turned on the St2 RY isolation loop, and the ISI may have settled down, but I've been here before. The current St2 RY blend is a 750mHz blend that injects a lot of GS13 noise at low frequency and CPS noise at high frequency, so it's not ideal, and I have a hard time believing that this configuration is really necessary for the ISI's stability, but it's difficult to argue with qualified success.
Attached are a few spectra of the EX ISI St2 sensors. The first plot is the RX (blue) and RY (brown) GS13s, with the GS13 noise curve (black). You can see that turning on the loop makes the microseism a little better, but worse at the blend cross-over and at a couple Hz. Maybe the high microseism is what's giving us problems? The next plot shows the CPSs: red is RX, green is RY; I kept the GS13 noise in for scale. Again, the microseism is suppressed, but GS13 noise is making the low frequency worse by almost an order of magnitude below 70mHz.
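For context on the blend trade-off above, here's a minimal sketch (not taken from the real blend filters; the filter shapes and noise levels are made up) of how a complementary blend pair combines the CPS and GS13, and why the crossover choice decides whose noise shows up where in the super sensor:

```python
import numpy as np

# Toy complementary blend pair: the CPS (displacement) path is low-passed,
# the GS13 (inertial) path is high-passed, and the two must sum to unity.
f = np.logspace(-2, 1, 400)              # Hz
fc = 0.75                                # a 750mHz blend, like the St2 RY one
s = 1j * 2 * np.pi * f
wc = 2 * np.pi * fc

lp = wc**2 / (s**2 + 2 * wc * s + wc**2)   # 2nd-order low-pass (CPS path)
hp = 1.0 - lp                              # complementary high-pass (GS13 path)

cps_noise = 1e-10 * np.ones_like(f)        # flat-ish CPS noise, made up
gs13_noise = 1e-9 / f**2                   # GS13 noise rising at low f, made up
super_noise = np.abs(lp) * cps_noise + np.abs(hp) * gs13_noise
# Moving fc up or down shifts where each sensor's noise dominates the super
# sensor, which is the trade-off visible in the attached spectra.
```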
Turning on the St2 RY loop looked like it settled the ISI down, but I've backed everything out, and I think I can go back to observing. The EX ISI still needs monitoring.
It's not clear to me that trying to turn on the St2 RY loop was the fix here. It's entirely possible that I tried that just as the microseism receded below some threshold, or something. I don't recommend other operators try this if the ISI is rung up. Not yet, anyway.
Ops Eve Summary: 00:01-08:00UTC (16:00-23:59PT)
State of H1: down, ETMX ISI is oscillating
Help: Keita and Vern
Incoming Operator: JimW, who's looking into the ISI now
Shift Summary:
Currently:
As I was leaving I remembered an issue with ETMX guardian during my shift.
I ran INIT on the guardian for ETMX seismic, thinking it wouldn't do anything to the ISI, just refresh all the settings, so I would know that I was working with a good ISI guardian state.
What happened was the guardian flashed through and then recovered, but in the process it kicked up the ETMX oscillations.
I thought all INIT states were safe to run and not disturb hardware, but that's not how it turned out today.
According to alog 23420, "When useism is high, however, we have to use a 45mhz blends..." so I've changed all of them on ETMY and ETMX to 45mHz.
Status is that all BSC ISIs have all blend filters at 45mHz.
It appears that changing the blend filters rang up ETMX. I'm trying to get it back. I have the ISI in Isolated Damped, and I manually changed the GS13 filters and ST2 ISO filters and gains to engage ST2 ISO, which has worked; however, I'm at 0.01 gain for the loops when they are typically at 1 for ISO_X, Y, Z, and RZ. I did not engage ISO RX or RY.
6:44UTC - oscillations come back.
DRMI has locked for the second time. The first time was from 3:42-3:59UTC. Will attempt to progress the lock toward Low Noise.
Two plots attached: Dec. 1st and Dec. 5th, both while H1 is locked in Low Noise.
Dec. 1st plot shows BS optical lever sum glitching by 200+ counts.
Dec. 5th plot of the same signals shows ETMY optical lever sum glitching 50-70 counts, and ETMX optical lever sum glitching 200+ counts, but BS sum is now quiet and +/-25 counts from the mean sum value.
Is this glitching, or an artifact of something else, and could this be affecting the IFO's locking?
Red herring... Variation in signal is a small percentage of the sum.
TITLE: 12/5 DAY Shift: 16:00-00:00UTC (08:00-16:00PDT), all times posted in UTC
STATE of H1: Down for last 4hrs
Incoming Operator: Cheryl
Support: Made a call to Keita (& Vern) to give them status/update
Quick Summary: H1 was great for the first half of the shift and then down for the 2nd half. I have posted what I've done thus far. useism was high 24hrs ago, went down a little, and appears to be inching up again (see attached snapshot...I couldn't fix nuc5 so that it's viewable online). Cheryl was able to lock it up with similar conditions last night, so hopefully she can do what she did yesterday.
Woes Continue
LVEA useism has been high for last 6hrs. It's currently at 0.9um/s. Not the worst period of useism, but certainly similar to other times when we had issues locking. (see plot of last 55 days).
Also running with all BSCs in their standard 45mHz configuration, EXCEPT for ETMx which Jim alogged about earlier.
TITLE: 12/5 DAY Shift: 16:00-00:00UTC (08:00-16:00PDT), all times posted in UTC
STATE of H1: NLN at 80Mpc
Outgoing Operator: Jim
Quick Summary: Icy drive in. Going on 12+hrs of lock. useism band is elevated to 0.7um/s for LVEA & winds are under 10mph. Had an FMCS Air Handler MAJOR alarm for LVEA REHEAD 2B (think this guy has been alarming off and on for a week +; a known issue). Ah, looks like L1 is joining us for some double coincidence time.
Continue to run with 45mHz Blends enabled, BUT with ETMx with 90mHz for x & y.
O1 day 78
model restarts logged for Fri 04/Dec/2015: No restarts reported
Title: 12/5 owl Shift 8:00-16:00 UTC
State of H1: NLN
Shift Summary:Quiet night, locked entire shift
Activity log:
Nothing to report. Cheryl locked after the earlier earthquakes, winds have stayed quiet, and the microseism may be reversing its downward trend.
Ops Eve Shift: 00:01-08:00UTC (16:00-23:59PT)
State of H1: locked in Observe since 04:53UTC, 3+ hours, and range is 80Mpc
Shift Details:
Ops Eve Shift: 00:01-08:00UTC (16:00-23:59PT)
State of H1: relocked and in Observe after an earthquake, range is 82Mpc
1. The re-calibrated C01 hoft data generated by DCS is ready for use up to 1129508864 == Oct 22 2015 00:27:27 UTC. It is available at CIT and via NDS2, and is being transferred via LDR to the other clusters. The times the re-calibrated C01 hoft now cover are:
   H1: 1125969920 == Sep 11 2015 01:25:03 UTC to 1129508864 == Oct 22 2015 00:27:27 UTC
   L1: 1126031360 == Sep 11 2015 18:29:03 UTC to 1128398848 == 1129508864 == Oct 22 2015 00:27:27 UTC
   This information has been updated here:
   https://wiki.ligo.org/Calibration/GDSCalibrationConfigurations
   https://wiki.ligo.org/LSC/JRPComm/ObsRun1#Calibrated_Data_Generation_Plans_and_Status
   https://dcc.ligo.org/LIGO-T1500502
   (Note that jobs are running to generate C01 hoft up to Dec 03 2015 11:54:23 UTC, but this data will not be ready until after Dec 9.)
2. For analysis ini files:
   i. The frame-types are H1_HOFT_C01 and L1_HOFT_C01
   ii. The STRAIN channels are:
      H1:DCS-CALIB_STRAIN_C01 16384
      L1:DCS-CALIB_STRAIN_C01 16384
   iii. State and DQ information is also in these channels:
      H1:DCS-CALIB_STATE_VECTOR_C01 16
      H1:ODC-MASTER_CHANNEL_OUT_DQ 16384
      L1:DCS-CALIB_STATE_VECTOR_C01 16
      L1:DCS-CALIB_STRAIN_C01 16384
   The bits in the STATE VECTOR are documented here:
   https://wiki.ligo.org/LSC/JRPComm/ObsRun1#Calibrated_Data_Generation_Plans_and_Status
3. For analysis segments and missing data segments use these C01 specific flags:
   H1:DCS-ANALYSIS_READY_C01:1
   L1:DCS-ANALYSIS_READY_C01:1
   H1:DCS-MISSING_H1_HOFT_C01:1
   L1:DCS-MISSING_H1_HOFT_C01:1
   Summaries of the C01 DQ segments are here:
   https://ldas-jobs.ligo.caltech.edu/~gmendell/DCS_SegGen_Runs/H1_C01_09to22Oct2015/html_out/Segment_List.html
   https://ldas-jobs.ligo.caltech.edu/~gmendell/DCS_SegGen_Runs/L1_C01_09to22Oct2015/html_out/Segment_List.html
4. Note that the same filter files are used for all C01 hoft generation:
   H1DCS_1128173232.npz
   L1DCS_1128173232.npz
   However, the C01 Analysis Ready time has been reduced from H1:DMT-ANALYSIS_READY:1 and L1:DMT-ANALYSIS_READY:1 by 3317 seconds and 9386 seconds respectively. This is because these options were used: --factors-averaging-time=128 and --filter-settle-time 148. (This allows study of the calibration factors stored in the C01 hoft frames, even though the calibration factors are not applied in C01.)
5. See these previous alogs:
   https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=22779
   https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=22000
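If it's useful, here is a minimal sketch of pulling the C01 data with gwpy, using the channel, frame-type, and flag names quoted above (the start/stop times are arbitrary placeholders inside the stated coverage, and the use of gwpy is an assumption, not part of the announcement):

```python
from gwpy.timeseries import TimeSeries
from gwpy.segments import DataQualityFlag

# Placeholder GPS times inside the H1 C01 coverage listed above
start, end = 1126000000, 1126004096

# C01-specific analysis-ready segments (flag name from item 3 above)
segs = DataQualityFlag.query('H1:DCS-ANALYSIS_READY_C01:1', start, end)

# Re-calibrated strain from the C01 frames (channel and frame-type from item 2)
hoft = TimeSeries.get('H1:DCS-CALIB_STRAIN_C01', start, end,
                      frametype='H1_HOFT_C01')
```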
While the EQ had the IFO down for a time, I took TFs to check the TILT decoupling on the ISI to see if the coupling was bad.
When we blend the inertial sensors into the super sensor at lower frequencies, we must make sure there is no tilt coupling. The tilting inertial sensor will inject the tilt into the platform motion, obviously not a good thing.
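A quick aside on why tilt is the thing to watch at low frequency (this is just the standard tilt-horizontal coupling argument, not anything measured here): a horizontal inertial sensor can't distinguish a platform tilt theta from a horizontal acceleration, so a tilt shows up as an apparent acceleration g*theta, i.e. an apparent displacement

x_apparent(f) = g*theta / (2*pi*f)^2

which grows rapidly below ~100mHz, so any residual tilt coupling in the blended super sensor gets driven right into the platform.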
First attached is a look at the saved plot from decoupling Jim or I did way back; the decoupling factors have been the same numbers for over a year (I checked conlog, thank you Patrick.)
The blue curve is the final decoupled TF between the X drive and the T240 response. The red curve is with no decoupling. It's fairly clear that the coupling shows up below 60mHz and that the decoupling factors do a very good job of eliminating the tilt; a nice linear trace down to DC is what you want.
The second attachment is the current measurement showing that the tilt decoupling is still very good. The blue trace is the original value (same as above) and the red and black traces are with the 45 and 90mHz blends respectively. The blend should not matter when looking for the decoupling but the 7.1Mag EQ gave the time so I ran both.
Now I do see some yuck in the 30-70mHz band but these are only 3 averages and the blue trace is 10 averages so that may be the reason for the yuck.
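For anyone repeating this, here is a minimal sketch of how a drive-to-T240 TF estimate with a chosen number of averages could be put together offline (the channel stand-ins, sample rate, and toy coupling are all placeholders, not the real measurement setup):

```python
import numpy as np
from scipy import signal

# Placeholder data standing in for the injected X drive and the T240 response
fs = 256.0
rng = np.random.default_rng(0)
x_drive = rng.standard_normal(int(600 * fs))
t240 = 0.1 * x_drive + 0.01 * rng.standard_normal(x_drive.size)

nperseg = int(60 * fs)     # 60 s segments, no overlap -> 10 averages over 600 s
f, Pxx = signal.welch(x_drive, fs=fs, nperseg=nperseg, noverlap=0)
_, Pxy = signal.csd(x_drive, t240, fs=fs, nperseg=nperseg, noverlap=0)
_, coh = signal.coherence(x_drive, t240, fs=fs, nperseg=nperseg, noverlap=0)

tf = Pxy / Pxx             # H1 estimator of the drive -> response TF
# Fewer averages (fewer/shorter segments) leave more scatter in |tf| at low
# frequency, which is one plausible source of the 30-70mHz "yuck" above.
```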
In 23939 Evan pointed out low frequency glitches. These look very similar to the scattering glitches seen at LLO every day (there, likely driven by OM suspension motion). I think these glitches at LHO are probably related to SRM or PRM optic motion, for a few reasons.
Figures 1 and 2 show the SNR/Freq plots from hveto of Strain and Mich, respectively. Both show the relationship between SNR and frequency that you'd expect for driven fringes: the shelves stick out farther above the noise the higher in frequency they are driven.
Figures 3 and 4 show omega scans of Strain and Mich showing pretty clear arches. The arches are stronger in Mich than in Strain (in terms of SNR).
Figures 5 and 6 show the fringe frequency prediction based on the velocity of the PRM and SRM optics. The period is about right for both. The results for other optics are shown here. The code is here. And the scattering summary page for this day is here. The dominant velocity component looks to be at about 0.15Hz, judged by eye from the timeseries.
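For reference, the fringe-frequency prediction behind those figures follows the usual scattered-light relation; here is a minimal sketch with a made-up optic displacement timeseries (the 1um, 0.12Hz motion is only a placeholder):

```python
import numpy as np

# Scattered-light fringe frequency: for a scattering surface moving with
# velocity v(t), f_fringe(t) = 2*|v(t)| / lambda for a single bounce,
# with lambda = 1064 nm.
lam = 1.064e-6                               # m
fs = 256.0                                   # Hz
t = np.arange(0, 60, 1 / fs)
x = 1e-6 * np.sin(2 * np.pi * 0.12 * t)      # placeholder optic motion (m)

v = np.gradient(x, 1 / fs)                   # velocity estimate (m/s)
f_fringe = 2 * np.abs(v) / lam               # predicted fringe frequency (Hz)
print(f_fringe.max())                        # peak predicted fringe frequency
```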
Figure 7 shows that during the glitchy times (green and yellow) the SRM and PRM optics are moving more at about 0.12Hz (probably compatible with the largest velocity frequency above). There's also a pretty strong 0.6Hz resonance in PRM, but this is the same in good and bad times.
I ran EXCAVATor over a 4 hour period when the low-frequency glitches were visible, see the results here. The top 10 channels are all related to SRM, but the absolute value of the derivative (as usual in case of scattered light) of H1:SUS-SRM_M2_WIT_L_DQ wins by a hair and also seems to have a decent use-percentage. Using this channel as a veto, most of the channels related to SRM drop in relevance in the second round. This round is won by H1:ASC-DHARD_P_OUT_DQ with some signals related to ETMX/ITMX close behind.
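As a rough illustration of the kind of ranking involved (only a toy stand-in, not EXCAVATor's actual code), one can ask what fraction of the glitch times coincide with a large |d/dt| in a witness channel such as H1:SUS-SRM_M2_WIT_L_DQ:

```python
import numpy as np

def veto_use_fraction(aux, fs, t0, glitch_times, threshold):
    """Fraction of glitches coincident with |d(aux)/dt| above threshold.

    aux: witness channel samples, fs: its sample rate, t0: GPS start of aux,
    glitch_times: GPS times of the glitches. This is a toy ranking statistic,
    not the real EXCAVATor algorithm.
    """
    flagged = np.abs(np.gradient(aux, 1 / fs)) > threshold
    idx = np.round((np.asarray(glitch_times) - t0) * fs).astype(int)
    idx = idx[(idx >= 0) & (idx < flagged.size)]
    return flagged[idx].mean() if idx.size else 0.0
```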