According to alog 23420, "When useism is high, however, we have to use a 45mHz blends..." so I've changed all of them on ETMY and ETMX to 45mHz.
Status: all BSC ISIs are now on 45mHz blend filters.
It appears that changing the blend filters rung up ETMX. I'm trying to get it back. I have the ISI in Isolated Damped, and manually changed GS13 filters and ST2 ISO filters and gains to engage ST2 ISO. This has worked; however, I'm at a gain of 0.01 for the ISO_X, Y, Z, and RZ loops when they are typically at 1. I did not engage ISO RX or RY.
06:44 UTC - oscillations came back.
DRMI has locked for the second time. The first time was from 3:42-3:59UTC. Will attempt to progress the lock toward Low Noise.
Two plots attached: Dec. 1st and Dec. 5th, both while H1 is locked in Low Noise.
Dec. 1st plot shows BS optical lever sum glitching by 200+ counts.
Dec. 5th plot of the same signals shows ETMY optical lever sum glitching 50-70 counts, and ETMX optical lever sum glitching 200+ counts, but BS sum is now quiet and +/-25 counts from the mean sum value.
Is this glitching, or an artifact of something else, and could this be affecting the IFO's locking?
Red herring... Variation in signal is a small percentage of the sum.
TITLE: 12/5 DAY Shift: 16:00-00:00UTC (08:00-16:00PST), all times posted in UTC
STATE of H1: Down for last 4hrs
Incoming Operator: Cheryl
Support: Made a call to Keita (& Vern) to give them status/update
Quick Summary: H1 great for the first half of the shift and then down for the 2nd half. Have posted what I've done thus far. useism was high 24hrs ago, went down a little, and appears to be inching up again (see attached snapshot; I couldn't get nuc5 working so that it's viewable online). Cheryl was able to lock it up under similar conditions last night, so hopefully she can do what she did yesterday.
Woes Continue
LVEA useism has been high for the last 6hrs. It's currently at 0.9um/s. Not the worst period of useism, but certainly similar to other times when we have had issues locking (see plot of last 55 days).
Also running with all BSCs in their standard 45mHz configuration, EXCEPT for ETMX, which Jim alogged about earlier.
TITLE: 12/5 DAY Shift: 16:00-00:00UTC (08:00-16:00PST), all times posted in UTC
STATE of H1: NLN at 80Mpc
Outgoing Operator: Jim
Quick Summary: Icy drive in. Going on 12+hrs of lock. useism band is elevated to 0.7um/s for LVEA & winds are under 10mph. Had an FMCS Air Handler MAJOR alarm for LVEA REHEAD 2B (think this guy has been alarming off and on for a week +; a known issue). Ah, looks like L1 is joining us for some double coincidence time.
Continue to run with 45mHz blends enabled, BUT with ETMX on 90mHz blends for X & Y.
O1 day 78
model restarts logged for Fri 04/Dec/2015
No restarts reported
Title: 12/5 Owl Shift 08:00-16:00 UTC
State of H1: NLN
Shift Summary: Quiet night, locked entire shift
Activity log:
Nothing to report. Cheryl locked after earlier earthquakes, winds have stayed quiet, and the microseism is maybe reversing its downward trend.
Ops Eve Shift: 00:01-08:00UTC (16:00-23:59PT)
State of H1: locked in Observe since 04:53UTC, 3+ hours, and range is 80Mpc
Shift Details:
Ops Eve Shift: 00:01-08:00UTC (16:00-23:59PT)
State of H1: relocked and in Observe after an earthquake, range is 82Mpc
1. The re-calibrated C01 hoft data generated by DCS is ready for use up to 1129508864 == Oct 22 2015 00:27:27 UTC. It is available at CIT and via NDS2, and is being transferred via LDR to the other clusters. The times the re-calibrated C01 hoft now cover are:

H1: 1125969920 == Sep 11 2015 01:25:03 UTC to 1129508864 == Oct 22 2015 00:27:27 UTC
L1: 1126031360 == Sep 11 2015 18:29:03 UTC to 1128398848 == 1129508864 == Oct 22 2015 00:27:27 UTC

This information has been updated here:
https://wiki.ligo.org/Calibration/GDSCalibrationConfigurations
https://wiki.ligo.org/LSC/JRPComm/ObsRun1#Calibrated_Data_Generation_Plans_and_Status
https://dcc.ligo.org/LIGO-T1500502

(Note that jobs are running to generate C01 hoft up to Dec 03 2015 11:54:23 UTC, but this data will not be ready until after Dec 9.)

2. For analysis ini files:
i. The frame-types are H1_HOFT_C01 and L1_HOFT_C01
ii. The STRAIN channels are:
H1:DCS-CALIB_STRAIN_C01 16384
L1:DCS-CALIB_STRAIN_C01 16384
iii. State and DQ information is also in these channels:
H1:DCS-CALIB_STATE_VECTOR_C01 16
H1:ODC-MASTER_CHANNEL_OUT_DQ 16384
L1:DCS-CALIB_STATE_VECTOR_C01 16
L1:DCS-CALIB_STRAIN_C01 16384
The bits in the STATE VECTOR are documented here:
https://wiki.ligo.org/LSC/JRPComm/ObsRun1#Calibrated_Data_Generation_Plans_and_Status

3. For analysis segments and missing data segments use these C01 specific flags:
H1:DCS-ANALYSIS_READY_C01:1
L1:DCS-ANALYSIS_READY_C01:1
H1:DCS-MISSING_H1_HOFT_C01:1
L1:DCS-MISSING_H1_HOFT_C01:1
Summaries of the C01 DQ segments are here:
https://ldas-jobs.ligo.caltech.edu/~gmendell/DCS_SegGen_Runs/H1_C01_09to22Oct2015/html_out/Segment_List.html
https://ldas-jobs.ligo.caltech.edu/~gmendell/DCS_SegGen_Runs/L1_C01_09to22Oct2015/html_out/Segment_List.html

4. Note that the same filter files are used for all C01 hoft generation:
H1DCS_1128173232.npz
L1DCS_1128173232.npz
However, the C01 Analysis Ready time has been reduced from H1:DMT-ANALYSIS_READY:1 and L1:DMT-ANALYSIS_READY:1 by 3317 seconds and 9386 seconds respectively. This is because these options were used: --factors-averaging-time=128 and --filter-settle-time 148. (This allows study of the calibration factors stored in the C01 hoft frames, even though the calibration factors are not applied in C01.)

5. See these previous alogs:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=22779
https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=22000
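For anyone who wants a quick look at the re-calibrated data, a minimal sketch along these lines should work (assuming gwpy is installed and you have NDS2/segment-database access; the channel and flag names are the C01 names listed above, and the GPS times are just an example window inside the covered span):

# Minimal sketch using gwpy (assumed available); channel/flag names are the
# C01 names listed above, GPS times are an arbitrary example window.
from gwpy.timeseries import TimeSeries
from gwpy.segments import DataQualityFlag

start, end = 1126259446, 1126259478        # example 32 s window

# Fetch the re-calibrated strain (frame type H1_HOFT_C01, or via NDS2)
strain = TimeSeries.get('H1:DCS-CALIB_STRAIN_C01', start, end)

# Query the C01-specific analysis-ready flag for the same window
flag = DataQualityFlag.query('H1:DCS-ANALYSIS_READY_C01:1', start, end)

print(strain.sample_rate)                  # expect 16384 Hz
print(flag.active)                         # analysis-ready segments in the window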
While the EQ had the IFO down for a time, I took TFs to check the tilt decoupling on the ISI to see if the coupling was bad.
When we blend the inertial sensors into the super sensor at lower frequencies, we must make sure there is no tilt coupling: a tilting inertial sensor will inject that tilt into the platform motion, obviously not a good thing.
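As a toy illustration of why the blend frequency matters here (these are NOT the real ISI blend filters, just a first-order complementary low-pass/high-pass pair for the sake of argument), the sketch below compares how much weight the inertial sensor gets below 100mHz with a 45mHz versus a 90mHz blend; the lower blend trusts the (tilt-prone) inertial sensor to lower frequencies, which is why the decoupling has to be good:

# Toy first-order complementary blend pair (not the real ISI filters):
# LP(s) = wc/(s+wc) on the displacement sensor, HP(s) = s/(s+wc) on the
# inertial sensor, so LP + HP = 1 at every frequency.
import numpy as np

def blend_pair(f, fc):
    s = 2j * np.pi * f
    wc = 2 * np.pi * fc
    return wc / (s + wc), s / (s + wc)    # (displacement path, inertial path)

f = np.array([0.01, 0.03, 0.06, 0.10])    # Hz
for fc in (0.045, 0.090):                 # 45 mHz vs 90 mHz blend frequency
    lp, hp = blend_pair(f, fc)
    assert np.allclose(lp + hp, 1)        # complementary by construction
    # Larger |HP| at low frequency means more trust in the tilt-prone
    # inertial sensor there; the 45 mHz blend gives noticeably more.
    print(fc, np.abs(hp).round(3))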
First attached is a look at the saved plot from the decoupling Jim or I did way back; the decoupling factors have been the same numbers for over a year (I checked conlog, thank you Patrick).
The blue curve is the final decoupled TF between X drive and T240 response; the red curve is with no decoupling. It is fairly clear that the coupling shows up below 60mHz and that the decoupling factors do a very good job of eliminating the tilt: a nice linear trace down to DC is what you want.
The second attachment is the current measurement showing that the tilt decoupling is still very good. The blue trace is the original value (same as above), and the red and black traces are with the 45 and 90mHz blends respectively. The blend should not matter when looking for the decoupling, but the 7.1-mag EQ gave me the time, so I ran both.
Now I do see some yuck in the 30-70mHz band, but these traces are only 3 averages while the blue trace is 10 averages, so that may be the reason for the yuck.
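For reference, the kind of estimate behind these traces is just an averaged cross-spectrum ratio; the rough sketch below (the arrays are random stand-ins, not the actual drive/T240 data) shows the Pxy/Pxx estimator and why 3 averages will look noisier than 10 at these low frequencies:

# Rough sketch of an averaged TF estimate (drive -> T240 response) showing
# why fewer averages give a noisier trace; the arrays are placeholders.
import numpy as np
from scipy import signal

fs = 8.0                                  # Hz, placeholder sample rate
x = np.random.randn(60000)                # stand-in for the X drive
y = 0.1 * x + np.random.randn(60000)      # stand-in for the T240 response

def tf_estimate(x, y, fs, n_avg):
    nperseg = len(x) // n_avg             # n_avg non-overlapping segments
    f, pxy = signal.csd(x, y, fs=fs, nperseg=nperseg, noverlap=0)
    _, pxx = signal.welch(x, fs=fs, nperseg=nperseg, noverlap=0)
    return f, pxy / pxx                   # H1 estimator: Pxy / Pxx

f3, h3 = tf_estimate(x, y, fs, 3)         # 3 averages: more scatter
f10, h10 = tf_estimate(x, y, fs, 10)      # 10 averages: smoother estimate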
I took advantage of the lock loss and went down to End Y to look for any obstructions near the temperature sensors in the VEA. I found that the small garbing room on the north wall was blocking one of the sensors. I placed the end curtains and the curtains in front of the sensor on top of the garbing room. There was also one of the large aluminum discs in front of a return air opening; I relocated the disc. We have had a difficult time regulating the temperature accurately at this end station, so hopefully this will alleviate some of the problem.
Activity Log: All Times in UTC (PT)
16:00 (08:00) Take over from Jim
17:53 (09:53) Locksmith on site – Bubba will escort
18:26 (10:26) Betsy – Working in the optics lab
19:55 (11:55) Betsy – Out of Optics lab
22:45 (14:45) Lockloss – 7.1mag EQ in the south Indian Ocean
22:50 (14:50) Reset Timing error on H1SUSETMY
23:13 (15:13) Bubba – Going to End-Y to move garb room curtains away from temperature sensor
23:49 (15:49) Bubba – Back from End-Y
23:58 (15:58) Hugh - Running TFs on End-X ISI
00:00 (16:00) Turn over to Cheryl

End of Shift Summary:
Title: 12/04/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: None
Incoming Operator: Cheryl
Shift Detail Summary: Overall a good observing shift until the 7.1 magnitude EQ in the Southern Indian Ocean. Have the IFO in DOWN until the seismic conditions settle down. Took the opportunity of being down to reset the timing error on H1SUSETMY.
Lockloss at 22:45 (14:45) due to 7.1 mag EQ in Southern Indian Ocean. Seismic is up to 2.5um/s and increasing. Have the IFO in a DOWN state until things settle down.
In 23939 Evan pointed out low frequency glitches. These look very similar to the scattering glitches seen at LLO every day (there, likely driven by OM suspension motion). I think these glitches at LHO are probably related to SRM or PRM optic motion, for a few reasons.
Figures 1 and 2 show the SNR/frequency plots from hveto of Strain and Mich, respectively. Both show a relationship between amplitude and SNR that is what you would expect for driven fringes: the shelves stick out farther above the noise the higher in frequency they are driven.
Figures 3 and 4 show omega scans of Strain and Mich showing pretty clear arches. The arches are stronger in Mich than in Strain (in terms of SNR).
Figures 5 and 6 show the fringe frequency prediction based on the velocity of the PRM and SRM optics. The period is about right for both. The results for other optics are shown here. The code is here. And the scattering summary page for this day is here. The dominant velocity component looks like it is at about 0.15Hz, judged by eye from the timeseries.
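For context, the prediction in those figures is based on the standard scattered-light relation: a surface moving with velocity v(t) along the beam produces fringes at f_fringe(t) = 2|v(t)|/lambda (single bounce assumed). A minimal sketch of that calculation, with a placeholder sinusoid standing in for the measured optic motion:

# Minimal sketch of the standard scattered-light fringe-frequency prediction,
# f_fringe(t) = 2 * |v(t)| / lambda (single bounce assumed). The motion here
# is a placeholder, not the actual SRM/PRM witness data.
import numpy as np

lam = 1.064e-6                            # m, laser wavelength
fs = 16.0                                 # Hz, witness-channel sample rate
t = np.arange(0.0, 60.0, 1.0 / fs)
x = 1e-6 * np.sin(2 * np.pi * 0.12 * t)   # ~1 um of motion at ~0.12 Hz

v = np.gradient(x, 1.0 / fs)              # optic velocity along the beam
f_fringe = 2.0 * np.abs(v) / lam          # predicted fringe frequency vs time

print(round(f_fringe.max(), 2), "Hz peak fringe frequency")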
Figure 7 shows that during the glitchy times (green and yellow) the SRM and PRM optics are moving more at about 0.12Hz (probably compatible with the largest velocity frequency above). There's also a pretty strong 0.6Hz resonance in PRM, but this is the same in good and bad times.
I ran EXCAVATor over a 4 hour period when the low-frequency glitches were visible, see the results here. The top 10 channels are all related to SRM, but the absolute value of the derivative (as usual in case of scattered light) of H1:SUS-SRM_M2_WIT_L_DQ wins by a hair and also seems to have a decent use-percentage. Using this channel as a veto, most of the channels related to SRM drop in relevance in the second round. This round is won by H1:ASC-DHARD_P_OUT_DQ with some signals related to ETMX/ITMX close behind.
An unnecessary trip of the ISI occurs every time the complete platform is de-isolated and then re-isolated.
The model code keeps the T240 saturations out of the watchdog bank (for tripping the ISI) whenever all the isolation gains are zero. But if the T240s are riled up, the saturations still accumulate. As soon as the T240 monitor has alerted the Guardian that the T240 has settled and the Guardian starts isolating, the watchdog trips because too many T240 saturations have accumulated. This only trips the ISI, so the T240 does not get upset again, and after the operator has untripped the watchdog (clearing the saturations) the ISI isolates fine.
It seems we missed this loophole: if the HEPI does not trip, the T240s often don't get too upset, so it isn't a problem. Otherwise, usually something is happening (EQ, platform restart, etc.) and the operator (Jim & me too) just untrips and chalks it up to whatever.
This should be fixed, and I'm sure Jamie/Hugo will have some ideas, but I suggest something like adding the following lines (a rough sketch follows the list):
reset (push) H1:ISI-{platform}_SATCLEAR
wait 60+ seconds
after line 51 in .../guardian/isiguardianlib/isolation/states.py
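A rough Guardian-style sketch of what I have in mind (this is NOT the actual isiguardianlib code; the SATCLEAR channel name just follows the H1:ISI-{platform}_SATCLEAR pattern suggested above, and ezca is the usual Guardian EPICS interface):

# Rough sketch only; not the actual isiguardianlib isolation state code.
import time

def clear_t240_saturations(ezca, platform, settle=65):
    """Clear accumulated saturations before the isolation loops are engaged."""
    # Push the saturation-clear button (channel name pattern assumed above).
    ezca['ISI-%s_SATCLEAR' % platform] = 1
    # The saturation bleed-off code can take up to one ~60 s cycle to act,
    # hence the wait; this is exactly the wasted-time issue noted below.
    time.sleep(settle)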
ISSUES: 1) The reset will clear all saturations, not just the T240s'. 2) The wait is required because the saturation bleed-off code still has the bug of needing a bleed cycle to execute, so any reset can take up to 60 seconds: wasted time not locking/observing.
Integration Issue #1163 filed
JeffK HughR
Looking closer and studying, it looks like the model has logic to send a reset into the T240 WD when isolation starts, but it may have been fouled by the WD saturation bleed-off upgrade done a couple of months ago. Continuing.
I just checked and it looks like you have the latest models svn up'ed on your machines. We need to look into the models/code. My notes are attached.
Something that might be the issue: your version of /opt/rtcds/userapps/release/isi/common/src/WD_SATCOUNT.c is outdated (see below). It looks like there was a bug fix to the saturation counter code that you did not receive. Updating is pretty invasive (recompile/restart all the ISI models), so we need to make sure that this will solve all the issues you pointed out first.
controls@opsws2:src 0$ pwd
/opt/rtcds/userapps/release/isi/common/src
On the SVN:
controls@opsws2:src 0$ svn log -l 5 ^/trunk/isi/common/src/WD_SATCOUNT.c
------------------------------------------------------------------------
r11267 | brian.lantz@LIGO.ORG | 2015-08-11 16:36:13 -0700 (Tue, 11 Aug 2015) | 1 line
fixed the CLEAR SATURATIONS bug - cleanup of comments
------------------------------------------------------------------------
r11266 | brian.lantz@LIGO.ORG | 2015-08-11 16:32:19 -0700 (Tue, 11 Aug 2015) | 1 line
fixed the CLEAR SATURATIONS bug
------------------------------------------------------------------------
r11131 | hugo.paris@LIGO.ORG | 2015-07-30 18:37:24 -0700 (Thu, 30 Jul 2015) | 1 line
ISI update detailed in T1500206 part 2/2
------------------------------------------------------------------------
On the computers at LHO
controls@opsws2:src 0$ svn log -l 5 WD_SATCOUNT.c
------------------------------------------------------------------------
r11131 | hugo.paris@LIGO.ORG | 2015-07-30 18:37:24 -0700 (Thu, 30 Jul 2015) | 1 line
ISI update detailed in T1500206 part 2/2
------------------------------------------------------------------------
controls@opsws2:src 0$