O1 day 78
model restarts logged for Fri 04/Dec/2015 No restarts reported
Title: 12/5 owl Shift 8:00-16:00 UTC
State of H1: NLN
Shift Summary: Quiet night, locked the entire shift
Activity log:
Nothing to report. Cheryl relocked after the earlier earthquakes; winds have stayed quiet, and the microseism is maybe reversing its downward trend.
Ops Eve Shift: 00:01-08:00UTC (16:00-23:59PT)
State of H1: locked in Observe since 04:53UTC, 3+ hours, and range is 80Mpc
Shift Details:
Ops Eve Shift: 00:01-08:00UTC (16:00-23:59PT)
State of H1: relocked and in Observe after an earthquake, range is 82Mpc
1. The re-calibrated C01 hoft data generated by DCS is ready for use up to 1129508864 == Oct 22 2015 00:27:27 UTC. It is available at CIT and via NDS2, and is being transferred via LDR to the other clusters. The times the re-calibrated C01 hoft now cover are:
H1: 1125969920 == Sep 11 2015 01:25:03 UTC to 1129508864 == Oct 22 2015 00:27:27 UTC
L1: 1126031360 == Sep 11 2015 18:29:03 UTC to 1129508864 == Oct 22 2015 00:27:27 UTC
This information has been updated here:
https://wiki.ligo.org/Calibration/GDSCalibrationConfigurations
https://wiki.ligo.org/LSC/JRPComm/ObsRun1#Calibrated_Data_Generation_Plans_and_Status
https://dcc.ligo.org/LIGO-T1500502
(Note that jobs are running to generate C01 hoft up to Dec 03 2015 11:54:23 UTC, but this data will not be ready until after Dec 9.)
2. For analysis ini files:
i. The frame-types are H1_HOFT_C01 and L1_HOFT_C01.
ii. The STRAIN channels are:
H1:DCS-CALIB_STRAIN_C01 16384
L1:DCS-CALIB_STRAIN_C01 16384
iii. State and DQ information is also in these channels:
H1:DCS-CALIB_STATE_VECTOR_C01 16
H1:ODC-MASTER_CHANNEL_OUT_DQ 16384
L1:DCS-CALIB_STATE_VECTOR_C01 16
L1:ODC-MASTER_CHANNEL_OUT_DQ 16384
The bits in the STATE VECTOR are documented here:
https://wiki.ligo.org/LSC/JRPComm/ObsRun1#Calibrated_Data_Generation_Plans_and_Status
3. For analysis segments and missing data segments use these C01-specific flags:
H1:DCS-ANALYSIS_READY_C01:1
L1:DCS-ANALYSIS_READY_C01:1
H1:DCS-MISSING_H1_HOFT_C01:1
L1:DCS-MISSING_L1_HOFT_C01:1
Summaries of the C01 DQ segments are here:
https://ldas-jobs.ligo.caltech.edu/~gmendell/DCS_SegGen_Runs/H1_C01_09to22Oct2015/html_out/Segment_List.html
https://ldas-jobs.ligo.caltech.edu/~gmendell/DCS_SegGen_Runs/L1_C01_09to22Oct2015/html_out/Segment_List.html
4. Note that the same filter files are used for all C01 hoft generation:
H1DCS_1128173232.npz
L1DCS_1128173232.npz
However, the C01 Analysis Ready time has been reduced from H1:DMT-ANALYSIS_READY:1 and L1:DMT-ANALYSIS_READY:1 by 3317 seconds and 9386 seconds respectively. This is because these options were used: --factors-averaging-time=128 and --filter-settle-time=148. (This allows study of the calibration factors stored in the C01 hoft frames, even though the calibration factors are not applied in C01.)
5. See these previous alogs:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=22779
https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=22000
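As a quick illustration of items 2 and 3, here is a minimal sketch (not an official recipe) of pulling the C01 strain and its analysis-ready segments with gwpy, assuming gwpy is installed and you have NDS2/LDR access; the GPS window is just an example inside the covered epoch.

# Sketch only: channel, frame-type, and flag names are from the entry above;
# the GPS times are an arbitrary example window, assumed for illustration.
from gwpy.timeseries import TimeSeries
from gwpy.segments import DataQualityFlag

start, end = 1126051217, 1126054817  # example hour inside the C01 epoch

# C01 analysis-ready segments (item 3)
ready = DataQualityFlag.query('H1:DCS-ANALYSIS_READY_C01:1', start, end)

# Re-calibrated C01 strain from the H1_HOFT_C01 frames (items 1 and 2)
hoft = TimeSeries.get('H1:DCS-CALIB_STRAIN_C01', start, end,
                      frametype='H1_HOFT_C01')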
While the EQ had the IFO down for a time, I took TFs to check the tilt decoupling on the ISI, to see if the coupling was bad.
When we blend the inertial sensors into the super sensor at lower frequencies, we must make sure there is no tilt coupling: a tilting inertial sensor will inject that tilt into the platform motion, which is obviously not a good thing.
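For concreteness, here is an illustrative sketch (not the actual ISI blend design) of a first-order complementary filter pair of the kind used to build the super sensor; the 45 mHz corner is one of the two blend frequencies discussed below.

# Illustrative only: a first-order complementary blend pair. The low-pass
# weights the displacement sensor and the high-pass weights the inertial
# sensor; below the blend frequency the inertial sensor is rolled off, so
# any tilt it sees enters the super sensor with weight |HP(f)|.
import numpy as np

f = np.logspace(-3, 1, 400)          # frequency axis [Hz]
w0 = 2 * np.pi * 0.045               # 45 mHz blend corner [rad/s]
s = 2j * np.pi * f                   # Laplace variable on the jw axis

lp = w0 / (s + w0)                   # applied to the displacement sensor
hp = s / (s + w0)                    # applied to the inertial (T240) sensor

assert np.allclose(lp + hp, 1.0)     # complementary: the pair sums to unity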
First attached is the saved plot from the decoupling Jim or I did way back; the decoupling factors have been the same numbers for over a year (I checked conlog, thank you Patrick).
The blue curve is the final decoupled TF between X drive and T240 response. The red curve is with no decoupling. Fairly clear: the coupling shows up below 60mHz, and the decoupling factors do a very good job of eliminating the tilt; a nice linear trace down to DC is what you want.
The second attachment is the current measurement, showing that the tilt decoupling is still very good. The blue trace is the original value (same as above), and the red and black traces are with the 45 and 90mHz blends respectively. The blend should not matter when looking for the decoupling, but the 7.1-mag EQ gave me the time, so I ran both.
Now I do see some yuck in the 30-70mHz band, but these are only 3 averages while the blue trace is 10 averages, so that may be the reason for the yuck.
I took advantage of the lock loss and went down to End Y to look for any obstructions near the temperature sensors in the VEA. I found that the small garbing room on the north wall was blocking one of the sensors. I placed the end curtains, and the curtains that were in front of the sensor, on top of the garbing room. There was also one of the large aluminum discs in front of a return-air opening; I relocated the disc. We have had a difficult time regulating the temperature accurately at this end station; hopefully this will alleviate some of that problem.
Activity Log: All Times in UTC (PT)
16:00 (08:00) Take over from Jim
17:53 (09:53) Locksmith on site – Bubba will escort
18:26 (10:26) Betsy – Working in the optics lab
19:55 (11:55) Betsy – Out of the optics lab
22:45 (14:45) Lockloss – 7.1 mag EQ in the southern Indian Ocean
22:50 (14:50) Reset timing error on H1SUSETMY
23:13 (15:13) Bubba – Going to End-Y to move garb room curtains away from temperature sensor
23:49 (15:49) Bubba – Back from End-Y
23:58 (15:58) Hugh – Running TFs on End-X ISI
00:00 (16:00) Turn over to Cheryl

End of Shift Summary:
Title: 12/04/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: None
Incoming Operator: Cheryl
Shift Detail Summary: Overall a good observing shift until the 7.1 magnitude EQ in the southern Indian Ocean. Have the IFO in DOWN until the seismic conditions settle down. Took the opportunity of being down to reset the timing error on H1SUSETMY.
Lockloss at 22:45 (14:45) due to 7.1 mag EQ in Southern Indian Ocean. Seismic is up to 2.5um/s and increasing. Have the IFO in a DOWN state until things settle down.
Good observing for first half of shift. Range is just over 80Mpc. Wind and seismic are OK. Microseism is still high but is starting to trend downward.
The 0.25 Hz bandwidth coherence results between H1 and L1 as generated by stochmon (see Fig. 9) are showing a new peak at 24.75 Hz to 25.0 Hz:
https://ldas-jobs.ligo.caltech.edu/~thomas.callister/stochmonO1/stochmon.html
The 0.1 Hz coherence results (Fig. 11) show finer structure, with peaks at 24.9, 25.0 and 25.5 Hz. Those coherence plots are also posted below.

Pat Meyers ran STAMP-PEM for H1 and L1 around these frequencies. For H1:
https://ldas-jobs.ligo-wa.caltech.edu/~meyers/stamppem/results/HTML/11325/H1-GDS-CALIB_STRAIN-HTML-1132574417-3600-245-255.html
There was coherence at these frequencies with lots of PEM-CS accelerometers/mics, and then getting into the suspensions. The STAMP-PEM results at L1 did not show anything interesting at these frequencies:
https://ldas-jobs.ligo-la.caltech.edu/~meyers/stamppem/results/HTML/11325/L1-GDS-CALIB_STRAIN-HTML-1132577658-359-245-255.html

Nathaniel Strauss ran the coherence tool over the data for the most recent O1 week using 1 mHz resolution. The full set of coherence tool results for the last O1 week for H1 can be found here:
https://ldas-jobs.ligo-wa.caltech.edu/~eric.coughlin/O1/LineSearch/H1_COH_1131753615_1132358415_SHORT_1_webpage/
Nathaniel found much structure in this 25 Hz region in tons of channels. His summary is here:
https://ldas-jobs.ligo-wa.caltech.edu/~nchriste/O1/noise_lines/25Hz/Lines_at_24-25_Hz.pdf

Keith Riles has speculated that in this observed H1-L1 coherence an H1 line at 24.9 Hz is getting tangled up with the L1 line at 24.82 Hz. He also notes that there are plentiful 1-Hz combs in both IFOs with various offsets from zero, including lines that appear at about 24.5 and 25.0 Hz. I am also posting spectra that Keith made from H1 and L1 in that band from the first week of O1.

Nelson, Nathaniel, Pat, Keith
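For reference, a rough sketch of how one could reproduce this narrow-band H1-L1 coherence with gwpy (assuming NDS2 access; the hour is the one from the H1 STAMP-PEM run above, and fftlength sets the bin width):

# Sketch only: reproduce the ~25 Hz H1-L1 coherence at 0.1 Hz resolution.
from gwpy.timeseries import TimeSeries

start, end = 1132574417, 1132578017  # hour from the H1 STAMP-PEM run above

h1 = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
l1 = TimeSeries.get('L1:GDS-CALIB_STRAIN', start, end)

# fftlength=10 s -> 0.1 Hz bins (use 4 s for the 0.25 Hz stochmon bandwidth)
coh = h1.coherence(l1, fftlength=10, overlap=5)
print(coh.crop(24, 26))              # inspect the 24-26 Hz band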
Maybe obvious, and I don't have a lot of detail; others have already shown (maybe indirectly) that the blends of the ISI impact the IMC. But here is a coherence plot showing huge coherence between the ISI inertial X motion and IMC-F between 50 and 130mHz. The blends here are the Quite_90s; I'll look to get some comparison when we run the 45mHz blends.
Checked all the ISI coil driver status bits for dropouts (since 3 Nov these no longer drop us out of lock by tripping the ISI WD). None showed any drops from the good state. I saved DV templates for these channels in /ligo/home/hugh.radkins/DataViewerTemps/ with names that should be obvious. Maybe I'll set up a FAMIS task to do this monthly.
In 23939 Evan pointed out low frequency glitches. These look very similar to the scattering glitches seen at LLO every day (there, likely driven by OM suspension motion). I think these glitches at LHO are probably related to SRM or PRM optic motion, for a few reasons.
Figures 1 and 2 show the SNR/frequency plots from hveto for Strain and Mich, respectively. These both show the relationship between frequency and SNR that you expect for driven fringes: the shelves stick out farther above the noise the higher in frequency they are driven.
Figures 3 and 4 show omega scans of Strain and Mich with pretty clear arches. The arches are stronger in Mich than in Strain (in terms of SNR).
Figures 5 and 6 show the fringe frequency prediction based on the velocity of the PRM and SRM optics. The period is about right for both. The results for other optics are shown here. The code is here. And the scattering summary page for this day is here. The dominant velocity frequency looks like about 0.15Hz, judged by eye from the timeseries.
Figure 7 shows that during the glitchy times (green and yellow) the SRM and PRM optics are moving more at about 0.12Hz (probably compatible with the dominant velocity frequency above). There's also a pretty strong 0.6Hz resonance in PRM, but this is the same in good and bad times.
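For anyone redoing the prediction by hand, here is a hedged sketch of the standard scattered-light fringe estimate (fringe frequency = 2|v|/lambda for a single bounce), not the linked code itself; the witness channel name is the one from the EXCAVATor study below, its micron calibration and the GPS window are assumptions.

# Sketch only: predict the scattering fringe frequency from optic velocity.
# Assumes the witness channel is calibrated in microns.
import numpy as np
from gwpy.timeseries import TimeSeries

LAMBDA = 1.064e-6                      # laser wavelength [m]
start, end = 1133173217, 1133173817    # example 10-minute window (assumed)

x = TimeSeries.get('H1:SUS-SRM_M2_WIT_L_DQ', start, end)
v = np.gradient(x.value * 1e-6, x.dt.value)   # optic velocity [m/s]
f_fringe = 2.0 * np.abs(v) / LAMBDA           # single-bounce fringe freq [Hz]
print('peak predicted fringe frequency: %.1f Hz' % f_fringe.max())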
I ran EXCAVATor over a 4 hour period when the low-frequency glitches were visible; see the results here. The top 10 channels are all related to SRM, but the absolute value of the derivative (as usual in the case of scattered light) of H1:SUS-SRM_M2_WIT_L_DQ wins by a hair and also seems to have a decent use-percentage. Using this channel as a veto, most of the channels related to SRM drop in relevance in the second round. This round is won by H1:ASC-DHARD_P_OUT_DQ, with some signals related to ETMX/ITMX close behind.
Title: 12/04/2015, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)
State of H1: 00:00 (16:00), the IFO is locked at NOMINAL_LOW_NOISE, 22.2W, 79Mpc
Outgoing Operator: Jim
Quick Summary: IFO locked in Observing mode for the past 5.5 hours. Environmental conditions are mixed – wind is up to a light breeze (2-7mph), seismic activity is quiet. Microseism is still on the high side at 0.6um/s. The timing error on SUS ETM-Y has returned after Travis reset it last night.
Title: 12/4 owl Shift 8:00-16:00 UTC
State of H1: NLN
Shift Summary: Winds calmed down shortly after I arrived. Struggled some with ISS 2nd loop and ETMX ISI.
Activity log:
Travis had finished IA a while before I arrived, but he was struggling with high winds. After he left, the winds tapered off, but the ISS 2nd loop was refusing to engage and I lost lock twice because of it. The ETMX ISI was also acting up and may have been causing troubles for Travis as well. That ISI is in a different configuration than the other chambers, and may cause issues if the microseism comes up any more.

O1 days 76,77
model restarts logged for Thu 03/Dec/2015 No restarts reported
model restarts logged for Wed 02/Dec/2015 No restarts reported
[Sheila, Jenne, Travis, JeffK, EvanH]
The transmitted powers through the IFO are dropping on a several-hour timescale, and we don't know why. It looks like this was also happening at the end of yesterday's 31 hour lock. (POP_LF is shown for the last day or so in the attached plot.) These are the only 2 locks in the last 10 days that have this trend - for all the others the powers stay nice and steady.
The power into the interferometer as measured by both IMC_Trans and IM4_Trans is steady, so it's not anything from the PSL or IMC.
We have looked at all the alignment and length control signals that we can think of, as well as the witness channels on the bottoms of the optics, and we aren't seeing anything that jumps out at us as a cause of this power drop.
Intriguingly, the REFL power is dropping as well as the transmitted powers, so perhaps we're losing mode matching throughout the lock? Sheila found that the TCS CO2 power is different after this week's Tuesday maintenance, although it's not changing throughout these locks.
Anyhow, we're not sure what is wrong, so we're not sure what we would tweak if we could, so we're leaving the IFO alone. But I suspect that once POP_LF gets down near 15,000 counts we'll lose this lock.
This morning Keita, Evan and I had another look at these two locks where the POP power dropped.
The first attachment shows various power build-ups, normalized to their medians during this 2.5 day stretch of data. The lines for POPDC, TRX, and TRY are almost on top of each other, and drop by about 10% in the 5-10 hours before the lockloss. The lower plot shows the arm transmissions normalized by the POP power, which is mostly stable but increases by about 10% at the end of the locks. From this we can conclude that the problem doesn't seem to be either a mode mismatch or misalignment of the arm cavities, but something happening in the vertex.
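The normalization itself is simple; here is a sketch under assumed placeholder channel names (substitute the real POP and arm-transmission channels), showing both the median-normalized build-ups and the arm/POP ratio used in the lower plot:

# Sketch only: placeholder channel names and GPS span, not the ones used for
# the attached plot. Assumes matching sample rates for the ratio.
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1133300000, 1133516000   # assumed ~2.5 day stretch

pop = TimeSeries.get('H1:LSC-POPDC', start, end)   # placeholder name
trx = TimeSeries.get('H1:LSC-TRX', start, end)     # placeholder name

pop_norm = pop / np.median(pop.value)  # build-up relative to its median
trx_norm = trx / np.median(trx.value)

# Arm transmission per unit vertex power: flat here means the arms are fine
# and the ~10% drop is happening in the vertex.
ratio = trx_norm / pop_norm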
In both cases the REFL power drops sooner than the POP and arm powers, and it drops by almost 20%. AS90 and AS_C are fairly stable.
One possibility is that the OMC was somehow becoming misaligned and the DARM offset was increasing to compensate. The second plot shows the OMC ASC control signals (normalized to their medians) and the OMC QPDs (detrended) during this time. Although there do seem to be small excursions in these signals at the end of the locks, it's not very conclusive. The difference of the X and Y tidal control signals in the bottom panel might have shown us a change in DARM offset, but it seems like the tidal signal is large compared to anything that is correlated with the power drops.
Note: there was a typo in the script used above; in the first plot's lower subplot I was not plotting the ratios that I thought I was. The conclusions are not changed.
An unnecessary trip of the ISI occurs every time the complete platform is de-isolated and then re-isolated.
The model code keeps the T240 saturations out of the watchdog bank for tripping the ISI whenever all the isolation gains are zero. But if the T240s are riled up, the saturations still accumulate. As soon as the T240 monitor has alerted the Guardian that the T240s have settled and the Guardian starts isolating, the watchdog trips because the accumulated T240 saturations are too many. This only trips the ISI, so the T240s do not get upset again, and after the operator has untripped the watchdog (clearing the saturations), the ISI isolates fine.
It seems we missed this loophole. If HEPI does not trip, the T240s often don't get too upset, so it isn't a problem. Otherwise, usually something is happening (EQ, platform restart, etc.) and the operator (Jim, and me too) just untrips and chalks it up to whatever.
This should be fixed, and I'm sure Jamie/Hugo will have some ideas, but I suggest something like adding the following after line 51 in .../guardian/isiguardianlib/isolation/states.py:

reset (push) H1:ISI-{platform}_SATCLEAR
wait 60+ seconds
Issues:
1) The reset will clear all saturations, not just the T240s'.
2) The wait is required because the saturation bleed-off code still has the bug of needing a bleed cycle to execute, so any reset can take up to 60 seconds. That is wasted time not locking/observing.
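To make the suggestion concrete, a hedged sketch of what the addition might look like in a Guardian state (assumptions: the GuardState/ezca interface; the SATCLEAR channel named above; the surrounding state structure is invented, not copied from states.py):

# Sketch only, not the actual isiguardianlib code: 'ezca' is provided by the
# Guardian runtime, and PLATFORM stands in for the chamber name.
import time
from guardian import GuardState

class ISOLATING(GuardState):
    def main(self):
        # Clear the accumulated saturations (all of them, per issue 1)
        # before the isolation gains come on, so stale T240 counts cannot
        # trip the watchdog.
        ezca['ISI-%s_SATCLEAR' % PLATFORM] = 1
        # The saturation bleed-off still needs one bleed cycle to execute
        # (issue 2), so allow up to 60 seconds for the reset to take.
        time.sleep(61)
        # ... continue with the existing isolation sequence ...
        return True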
Integration Issue #1163 filed
JeffK HughR
Looking closer, it looks like the model has logic to send a reset into the T240 WD when isolation starts, but it may have been fouled by the WD saturation bleed-off upgrade done a couple of months ago. Continuing.
I just checked and it looks like you have the latest models svn up'ed on your machines. We need to look into the models/code. My notes are attached.
Something that might be the issue: your version of /opt/rtcds/userapps/release/isi/common/src/WD_SATCOUNT.c is out of date (see below). It looks like there was a bug fix to the saturation counter code that you did not receive. Updating is pretty invasive (recompile/restart all the ISI models). We need to make sure that this will solve all the issues you pointed out first.
controls@opsws2:src 0$ pwd
/opt/rtcds/userapps/release/isi/common/src
On the SVN:
controls@opsws2:src 0$ svn log -l 5 ^/trunk/isi/common/src/WD_SATCOUNT.c
------------------------------------------------------------------------
r11267 | brian.lantz@LIGO.ORG | 2015-08-11 16:36:13 -0700 (Tue, 11 Aug 2015) | 1 line
fixed the CLEAR SATURATIONS bug - cleanup of comments
------------------------------------------------------------------------
r11266 | brian.lantz@LIGO.ORG | 2015-08-11 16:32:19 -0700 (Tue, 11 Aug 2015) | 1 line
fixed the CLEAR SATURATIONS bug
------------------------------------------------------------------------
r11131 | hugo.paris@LIGO.ORG | 2015-07-30 18:37:24 -0700 (Thu, 30 Jul 2015) | 1 line
ISI update detailed in T1500206 part 2/2
------------------------------------------------------------------------
On the computers at LHO:
controls@opsws2:src 0$ svn log -l 5 WD_SATCOUNT.c
------------------------------------------------------------------------
r11131 | hugo.paris@LIGO.ORG | 2015-07-30 18:37:24 -0700 (Thu, 30 Jul 2015) | 1 line
ISI update detailed in T1500206 part 2/2
------------------------------------------------------------------------
controls@opsws2:src 0$