TITLE: 08/04 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 52Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: No issues to report.
TITLE: 08/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 52Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Quiet, except for an earthquake
LOG:
23:30 Chandra to MY
0:30 Lockloss when I switched to an earthquake state during the EQ
4:00 A2L while LLO is down
The attached figure shows a cWB outlier that Brennan H. sent around to DetChar. It is compared to likely raven pecks discussed in this log: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=37630 The match is quite good (see figure), though in this case the peck did not produce a very loud sound. The above log suggests that the coupling path for pecks is not through the air, but through the GN2 vent to the vacuum enclosure. While the sound for the cWB outlier was not that loud, the vibration of the enclosure was strong and was sensed by the BSC10 accelerometers. The vacuum enclosure motion likely couples to DARM through the P-Cal periscope (see above log). It looks like there is a broader resonance in the vent/vacuum-enclosure system, excited by ice-craving ravens, that overlaps with the 94 Hz resonance of the P-Cal periscope, which is strongly coupled to DARM through scattered light.
Sheila, TVo
After a few attempts to fully switch the DARM control from ETMY to ETMX on the L3, L2, and L1 stages, which resulted in locklosses, we decided to switch only the L3 stage to ETMX to test whether or not differences in test mass charge are the culprit for this 10-80 Hz noise.
The python script we used to swap is attached as well as the resultant DARM signals.
After stably swapping to ETMX, we also flipped the bias sign on ETMY and reduced the biases to zero. Then Sheila grounded the ESD to the chamber. None of these configurations changed the DARM spectrum.
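The attached script is the actual tool used; purely as a hedged illustration of the kind of ramped cross-fade such a hand-off involves (channel names and ramp parameters below are assumptions, not taken from the script), a minimal pyepics sketch could look like:

# Hedged sketch only -- not the attached script. Channel names are
# illustrative placeholders for the L3 LOCK filter-bank gains.
import time
import epics  # pyepics

ETMY_GAIN = 'H1:SUS-ETMY_L3_LOCK_L_GAIN'  # assumed channel name
ETMX_GAIN = 'H1:SUS-ETMX_L3_LOCK_L_GAIN'  # assumed channel name

def crossfade(steps=50, dt=0.2):
    # Slowly hand the L3 DARM drive from ETMY to ETMX in small steps.
    y0 = epics.caget(ETMY_GAIN)  # starting ETMY gain (nominally 1.0)
    for i in range(1, steps + 1):
        f = float(i) / steps
        epics.caput(ETMY_GAIN, y0 * (1.0 - f))  # ramp ETMY drive down
        epics.caput(ETMX_GAIN, f)               # ramp ETMX drive up
        time.sleep(dt)

# crossfade()  # run only on a test setup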
I have slowly been looking at suspension pitch sensors around the times of the EQ; here are a few more plots.
Quads during comparable EQs
From December 10th to January 25th, we had three EQs which were larger (in the 30-100 mHz BLRMS of vertical ground velocity) than the July 6th Montana EQ. (One of these EQs was also larger in the 100-300 mHz band.) The attached plots are directly comparable to the plots in 37799, except that each color is the time period between a different pair of EQs. While there are some shifts in the top mass (smaller than what we had in the Montana EQ), there are not comparable shifts in the relationship between the top mass pitch and the oplev pitch.
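As an aside on how such comparisons can be made, here is a minimal sketch (assuming gwpy; the BLRMS channel name and GPS times are illustrative, not the ones behind the attached plots) of pulling the peak 30-100 mHz ground BLRMS around an EQ:

# Hedged sketch: compare peak 30-100 mHz vertical ground BLRMS around EQs.
from gwpy.timeseries import TimeSeries

CHAN = 'H1:ISI-GND_STS_ITMY_Z_BLRMS_30M_100M'  # assumed BLRMS channel name

def peak_blrms(gps_center, half_window=1800):
    # Peak BLRMS in a +/- 30 minute window around an EQ arrival time.
    data = TimeSeries.get(CHAN, gps_center - half_window, gps_center + half_window)
    return data.max()

# Example with placeholder GPS times (not the EQs discussed above):
# print(peak_blrms(1167500000), peak_blrms(1183350000))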
Triples
I was hoping that the triples would be easier to understand than the quads, since all the sensors are relative to the cage. In the end I don't think this is very illuminating, but I am posting the plots anyway. Attached are plots for all the small triples showing scatter plots of different OSEMs before and after the earthquake, analogous to the plots attached to 37799.
You can see that for some of the triples there is no change in the linear relationship between top mass torque and pitch, or between top mass and intermediate mass pitch, and only small offsets between the intermediate mass and bottom mass. These could just be unreliable readings from the bottom mass OSEMs (MC1 and MC3 are good examples). PR2 and SR2 seem to have a real shift similar to the shifts we see on all the quads.
At mid shift all is good. Robert will be doing PEM injections until 13:00 PT. Air quality from the wildfire smoke continues to be an issue. For 0.3um particle size, the outside air is a little better, having dropped from 10 million particles per CF to 8.9 million particles per CF. In the control room the counts are: 0.3um = 11,000; 0.5um = 5,500; 1.0um = 2,300; all per CF. Particle counts for all sizes in the LVEA and both VEAs remain under 1000.
I took the opportunity to change the high frequency calibration line from 4001.3 Hz to 3501.3 Hz when the IFO was changed to commissioning for PEM injection.
AlanW was curious about the changes happening at ENDX Pcal in the last ten days. All these changes are consistent with the work I am doing at ENDX Pcal to gather data for high frequency calibration. Attached is a screenshot of the changes in excitation and the corresponding changes in the TxPD signal. RxPD may show more erratic changes, but that is because the RxPD is clipping. We will make sure the Pcal goes back to its original configuration once the data gathering is completed.
TITLE: 08/03 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 54Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:
LOG:
TITLE: 08/03 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 54Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: Quiet shift; Dave seems to have fixed the hardware watchdog issue that had plagued us the last 2 nights.
LOG:
23:00 Sheila and TVo were preparing a measurement when I arrived; Jeff was just getting the IFO to NLN.
PI mode 26 popped up a couple of times; otherwise nothing much happened.
FAMIS 6909
HAM3 V1 seems elevated.
BS ST1 all DOFs seem elevated.
ETMY ST1 H2 seems elevated.
ETMY ST2 V2 seems elevated.
ITMX ST1 all DOFs seem elevated.
ITMX ST2 V3 seems elevated.
ITMY ST1 all DOFs seem elevated.
J. Kissel, R. McCarthy

Some investigation on this...

Date:      Jun 26 2017                Aug 02 2017
Time GPS:  1182506160 - 1182506760    1185730620 - 1185731220
Time UTC:  09:55:42 - 10:05:42        17:36:42 - 17:46:42
Time PDT:  02:55:42 - 03:05:42        10:36:42 - 10:46:42

Richard points to data from the Jun 26th FAMIS check, in LHO aLOG 37138, worried that this might be exposing something wrong after the July 6th Montana EQ.
- I've trended the global seismic configuration, and we were in "WINDY" at both these times, so this rules out a different configuration of the ST1 controls (i.e. I'd thought it was maybe that we were in a higher blend filter, or that sensor correction for the site was off, or something).
- The summary pages don't show a difference between the ST2 / suspension point performance on these two days, which means whatever excess ST1 is seeing is controlled below the sensor noise of the ST2 GS13s, which is good. (Note: summary pages are median spectra for the entire UTC day, not just for the 10 minute periods used for this FAMIS test.)
- Then I realized there should be a large difference in the 1-3 Hz and 3-10 Hz input ground motion, just due to the difference between 3am local and 10am local anthropogenic activity. I attach spectra comparing ground motion (as measured by the STSs on the ground in all VEAs), and they agree with what's shown in the ST1 CPSs -- in the 0.8 to 20 Hz region, there are features that show roughly an order of magnitude more motion in all buildings comparing the Jun 26th time and the Aug 02 time. This is not at all indicative of anything wrong. (Aug 02 is the reference, Jun 26 is the non-reference data.)

We should standardize what time of day we use to gather data for inspection in this FAMIS task. The test was designed to look for elevation in the *sensor noise* of the ISI's capacitive position sensors, indicative of problems we've seen with the electronics -- i.e. the flat, featureless part of the spectra above 10 Hz will be elevated above the black line if there's badness. There will likely *always* be feature-full residual seismic motion visible in these spectra that can be different from test to test, especially on stage 1 in the 1-30 Hz range, because ST1 does not isolate this region (that job is left for stage 2 / ST2). One can't necessarily *know* that the feature-full stuff is "real" residual seismic data, but this test is designed for you to ignore that stuff and focus on the high frequency flat portion of the spectra. Standardizing that we take the data in the middle of the night, local time, when there is less 1-30 Hz input ground motion (since most people are asleep), means the platform will be moving less, exposing more CPS sensor noise, and this'll be a more focused test.
I've updated both the HAM and BSC python scripts to look at 2 am local time using gpstime.tconvert('2am today'). I've also left code in, commented out, so that the measurement time can be specified in the terminal. It would be nice to have some easier-to-find or easier-to-use documentation for some of these libraries; I knew there was tconvert python stuff, but had no idea where to find how to use it.
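For reference, a minimal sketch of that time-selection logic (not the actual HAM/BSC scripts; only the gpstime.tconvert('2am today') call is taken from above, the rest is illustrative):

# Hedged sketch of the measurement-time selection, not the actual
# HAM/BSC FAMIS scripts.
import sys
import gpstime

if len(sys.argv) > 1:
    # optional override: pass a GPS start time on the command line
    gps_start = float(sys.argv[1])
else:
    # default: 2 am local time today, when 1-30 Hz ground motion is low
    gps_start = gpstime.tconvert('2am today')

duration = 600  # assumed 10-minute stretch for the CPS spectra
print('CPS check start: GPS %d, duration %d s' % (gps_start, duration))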
After our last failed attempt to transition to ETMX in low noise, I thought the problem might have been a pitch instability at 2.8 Hz. This partially motivated turning down the CSOFT gain. I copied the L2 L2P filters from ETMY to ETMX after talking to Kiwamu and remembering that we haven't actually used the L2 length-to-angle decoupling on ETMX.
TVo and I again redid the 75% transition, measured the OLTF down to 10 Hz and saw that it agreed well with the measurement taken when we were completely on ETMY, and then tried the full transition. We stayed locked for about 60 seconds after the ramp ended, starting at 20:41:40 UTC on August 2nd, then unlocked with a similar instability at around 2.8 Hz.
We probably need to measure the OLTF down to 2 Hz, or measure the crossovers. This time the problem didn't look like pitch; it didn't show up in the oplevs (so the L2P decoupling worked at least).
The attached screenshot shows the spectrum before and after this test, as well as the spectrum after the reboots described in 37969
(Reference alog 37923 and its comments) FRS 8666
In the past two evenings H1 was taken out of observation mode by a transient SDF difference on SUSITMX. Conlog reports that the channel being changed is H1:SUS-ITMX_HWWD_STATE (I had previously incorrectly said conlog did not see any change, but my query was in error).
Trending the ITMX_HWWD_STATE does show it flashing an LED error once in a while, which has been a known issue from before O2 and is presumed related to the longer monitor cable in the corner station between the HWWD unit in the CER and the satellite-amp box in the Biergarten. No such transients are seen in EY, where the cable run is shorter. Trends show the HWWD LED glitching every day at a rate of 5-10 per day. So my first question was: why has this not taken H1 out of observation mode before? Here is the answer:
We know the LED monitor voltage dips below the trip level during loss-of-lock and lock-acquisition when the DAC outputs are being driven more aggressively. Trends show that prior to this week all the HWWD glitches had occurred when H1 did not have a range, an indication it was not in observation mode (an example 24 hour trend is the bottom plot of the attachment).
On Monday and Tuesday evening this week the HWWD LED glitched for 3 seconds each time while the DACs were relatively quiet. The top plot of the attachment shows Tuesday's event, the middle plot shows Monday's event. This could be an indicator that this signal is slowly degrading. As for why it only happened once per day, and each day between the local times of 5pm and 6pm, we can only assume this is a coincidence.
On reflection, the SDF should not be monitoring these HWWD STATE channels. With Vern's approval I have taken them out of the OBSERVE.snap for the four quad suspensions (these are the only systems with HWWD units).
Attached are three 24 hour minute-trend plots. In each plot, the upper channel is the H1 range and the lower is the ITMX HWWD STATE (0=good, 8=LED error). The bottom plot is a normal situation where there are no HWWD diffs when the IFO is in observation mode. The top plot shows Tuesday's loss of observation mode (spike near the left margin); the middle plot shows Monday's loss of observation mode (second spike in from the left).
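For completeness, a minimal sketch of that kind of coincidence check (assuming gwpy minute trends; the range channel name is an assumption, the HWWD channel is the one named above):

# Hedged sketch: find minute-trend samples where the ITMX HWWD flagged an
# LED error (state 8) while H1 had a nonzero range (i.e. was plausibly
# observing). The range channel name is an assumption.
from gwpy.timeseries import TimeSeries

HWWD = 'H1:SUS-ITMX_HWWD_STATE.max,m-trend'
RANGE = 'H1:DMT-SNSH_EFFECTIVE_RANGE_MPC.min,m-trend'  # assumed channel

def hwwd_glitches_in_observing(start, end):
    hwwd = TimeSeries.get(HWWD, start, end)
    rng = TimeSeries.get(RANGE, start, end)
    times = []
    for t, h, r in zip(hwwd.times.value, hwwd.value, rng.value):
        if h >= 8 and r > 0:
            times.append(t)
    return times

# Example over a placeholder GPS day:
# print(hwwd_glitches_in_observing(1185753618, 1185840018))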
Here are the SDF changes to the OBSERVE.snap files:
-H1:SUS-ETMY_HWWD_STATE 1 0.000000000000000e+00 1
+H1:SUS-ETMY_HWWD_STATE 1 0.000000000000000e+00 0
-H1:SUS-ETMX_HWWD_STATE 1 0.000000000000000e+00 1
+H1:SUS-ETMX_HWWD_STATE 1 0.000000000000000e+00 0
-H1:SUS-ITMX_HWWD_STATE 1 0.000000000000000e+00 1
+H1:SUS-ITMX_HWWD_STATE 1 0.000000000000000e+00 0
-H1:SUS-ITMY_HWWD_STATE 1 0.000000000000000e+00 1
+H1:SUS-ITMY_HWWD_STATE 1 0.000000000000000e+00 0
WP 7101
Sheila, Richard, Fil, Sudarshan, Dave:
We power cycled the front end computers and their associated IO Chassis for the systems h1susb123 (ITMX, ITMY, BS, ITMPI), h1susex (ETMX, TMSX, ETMXPI) and h1susey (ETMY, TMSY, ETMYPI). Prior to the reboots, Sheila checked the SUS safe.snap SDF files to see if they were up to date (which they were).
The power down sequence for each computer was:
The power up sequence was:
The power sequence in the corner station went well. We had problems at both end stations:
EX: the power up of h1susex caused the h1iscex computer to freeze, which in turn caused a Dolphin glitch on h1seiex.
EY: the power up of h1susey caused a Dolphin glitch on this fabric; all ISC and SEI models were glitched.
Both problems were unexpected, unexplained, and worrisome.
h1iscex was found to be frozen but powered on. Richard power cycled the computer.
The recovery from the Dolphin glitches at both end stations was the same:
Note: h1iopseiey had a slight IRIG-B excursion to +50, which recovered in a few minutes.
Once all the models were running correctly, the system was cleaned up by resetting the IOP software watchdogs (SWWD), clearing the latched errors with DIAG_RESET, and clearing the DAQ CRC errors.
Sudarshan reports a PCAL guardian issue with HIGH_FREQ_LINES node, which did not like h1calex being reset to its safe.snap settings.
While we were rebooting h1susey, Richard and I took a look at the BIOS settings on this computer (one of the faster models). We found that the 'Power Technology' setting is set to 'Max Performance', which Gerrit reports could be the source of our glitching.
J. Kissel

I'm behind on my documentation as I slowly process all the data that I'm collecting these days. This aLOG is to document that on this past Tuesday (2017-07-25) I took standard top-to-top mass transfer functions for the Triple SUS (BS, HLTS, and HSTS; 10 SUS in total), as I've done for the QUADs (see LHO aLOG 37689 and associated comments). I saw no evidence of rubbing during the act of measurement, but I'd like to confirm with a thorough comparison. As such, I'll post comparisons against previous measurements, other suspensions, and the appropriate model in due time. This leaves: 3 doubles, 9 singles.

Data is stored and committed here:

/ligo/svncommon/SusSVN/sus/trunk/BSFM/H1/BS/SAGM1/Data/
2017-07-25_1501_H1SUSBS_M1_WhiteNoise_L_0p01to50Hz.xml
2017-07-25_1501_H1SUSBS_M1_WhiteNoise_P_0p01to50Hz.xml
2017-07-25_1501_H1SUSBS_M1_WhiteNoise_R_0p01to50Hz.xml
2017-07-25_1501_H1SUSBS_M1_WhiteNoise_T_0p01to50Hz.xml
2017-07-25_1501_H1SUSBS_M1_WhiteNoise_V_0p01to50Hz.xml
2017-07-25_1501_H1SUSBS_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/SAGM1/Data/
2017-07-25_1507_H1SUSPR3_WhiteNoise_L_0p01to50Hz.xml
2017-07-25_1507_H1SUSPR3_WhiteNoise_P_0p01to50Hz.xml
2017-07-25_1507_H1SUSPR3_WhiteNoise_R_0p01to50Hz.xml
2017-07-25_1507_H1SUSPR3_WhiteNoise_T_0p01to50Hz.xml
2017-07-25_1507_H1SUSPR3_WhiteNoise_V_0p01to50Hz.xml
2017-07-25_1507_H1SUSPR3_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/
2017-07-25_H1SUSSR3_M1_WhiteNoise_L_0p01to50Hz.xml
2017-07-25_H1SUSSR3_M1_WhiteNoise_P_0p01to50Hz.xml
2017-07-25_H1SUSSR3_M1_WhiteNoise_R_0p01to50Hz.xml
2017-07-25_H1SUSSR3_M1_WhiteNoise_T_0p01to50Hz.xml
2017-07-25_H1SUSSR3_M1_WhiteNoise_V_0p01to50Hz.xml
2017-07-25_H1SUSSR3_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/
PR2/SAGM1/Data/2017-07-25_1607_H1SUSPR2_M1_WhiteNoise_L_0p01to50Hz.xml
PR2/SAGM1/Data/2017-07-25_1607_H1SUSPR2_M1_WhiteNoise_P_0p01to50Hz.xml
PR2/SAGM1/Data/2017-07-25_1607_H1SUSPR2_M1_WhiteNoise_R_0p01to50Hz.xml
PR2/SAGM1/Data/2017-07-25_1607_H1SUSPR2_M1_WhiteNoise_T_0p01to50Hz.xml
PR2/SAGM1/Data/2017-07-25_1607_H1SUSPR2_M1_WhiteNoise_V_0p01to50Hz.xml
PR2/SAGM1/Data/2017-07-25_1607_H1SUSPR2_M1_WhiteNoise_Y_0p01to50Hz.xml
PRM/SAGM1/Data/2017-07-25_1607_H1SUSPRM_M1_WhiteNoise_L_0p03to50Hz.xml
PRM/SAGM1/Data/2017-07-25_1607_H1SUSPRM_M1_WhiteNoise_P_0p01to50Hz.xml
PRM/SAGM1/Data/2017-07-25_1607_H1SUSPRM_M1_WhiteNoise_R_0p01to50Hz.xml
PRM/SAGM1/Data/2017-07-25_1607_H1SUSPRM_M1_WhiteNoise_T_0p01to50Hz.xml
PRM/SAGM1/Data/2017-07-25_1607_H1SUSPRM_M1_WhiteNoise_V_0p01to50Hz.xml
PRM/SAGM1/Data/2017-07-25_1607_H1SUSPRM_M1_WhiteNoise_Y_0p01to50Hz.xml
SR2/SAGM1/Data/2017-07-25_1715_H1SUSSR2_M1_WhiteNoise_L_0p01to50Hz.xml
SR2/SAGM1/Data/2017-07-25_1715_H1SUSSR2_M1_WhiteNoise_P_0p01to50Hz.xml
SR2/SAGM1/Data/2017-07-25_1715_H1SUSSR2_M1_WhiteNoise_R_0p01to50Hz.xml
SR2/SAGM1/Data/2017-07-25_1715_H1SUSSR2_M1_WhiteNoise_T_0p01to50Hz.xml
SR2/SAGM1/Data/2017-07-25_1715_H1SUSSR2_M1_WhiteNoise_V_0p01to50Hz.xml
SR2/SAGM1/Data/2017-07-25_1715_H1SUSSR2_M1_WhiteNoise_Y_0p01to50Hz.xml
SRM/SAGM1/Data/2017-07-25_1814_H1SUSSRM_M1_WhiteNoise_L_0p01to50Hz.xml
SRM/SAGM1/Data/2017-07-25_1814_H1SUSSRM_M1_WhiteNoise_P_0p01to50Hz.xml
SRM/SAGM1/Data/2017-07-25_1814_H1SUSSRM_M1_WhiteNoise_R_0p01to50Hz.xml
SRM/SAGM1/Data/2017-07-25_1814_H1SUSSRM_M1_WhiteNoise_T_0p01to50Hz.xml
SRM/SAGM1/Data/2017-07-25_1814_H1SUSSRM_M1_WhiteNoise_V_0p01to50Hz.xml
SRM/SAGM1/Data/2017-07-25_1814_H1SUSSRM_M1_WhiteNoise_Y_0p01to50Hz.xml
More detailed plots of the BS, compared against previous measurements and the model. We see perfect agreement with the model and the previous measurement, so this SUS is definitely clear of rubbing.
More detailed plots of PR3 and SR3. Both are clear of rubbing. The new measurements agree with old measurements of the same suspension, the model, and other suspensions of their type. PR3's L2L transfer function shows "extra" unmodeled resonances that were not there before, but they line up directly with the Y modes. This is likely because, during the measurement, the Y modes got rung up, and the power is so large that it surpasses the balance of the sensors, so they're not subtracted well. I can confirm that these frequencies are incoherent with the excitation, and we've seen such inconsequential cross coupling before. Nothing about which to be alarmed.
More detailed plots of PRM, SRM, and SR2 compared against previous measurements and the model. We see good agreement with the model and previous measurements, so these SUS are clear of rubbing. There is a subtle drop in response scale factor for all of these suspensions (and in retrospect it's seen on the other SUS types too), and I suspect this is a result of the OSEM LEDs slowly losing current over the series of measurements; see the attached 4 year trends.
While PR2 shows all resonances in the right place, there is a suspicious drop in scale for the L and Y DOFs with respect to prior measurements. However, this is the first measurement where we've measured the response with the nominal alignment offsets needed to run the IFO (!!). These DOFs (L and Y) have the LF and RT OSEM sensors / actuators in common (see E1100109 for the top mass OSEM layout), so I checked the OSEM sensors, and indeed the LF OSEM sensor is on the very edge of its range at ~1400 [ct] out of 32000 (or 15000 [ct] if it were perfectly centered). I'll confirm that the suspension is free and OK tomorrow by retaking the measurements at a variety of alignment offsets. I really do suspect we're OK, and the measurement is just pushing the OSEM flag past its "closed light" voltage so that the excitation is becoming non-linear, therefore reducing the linear response. I attach the transfer function data and a 4 year trend of the LF and RT OSEM values to show that we've been operating like this for years, and there's been no significant change after the Jul 6th EQ.
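As an illustration of that OSEM range check, a minimal pyepics sketch (the readback channel name and 10% threshold are assumptions; only the ~1400 ct out of ~32000 ct figures come from above):

# Hedged sketch: flag a top-mass OSEM whose readback is near the edge of
# its open/closed-light range. Channel name and threshold are illustrative.
import epics  # pyepics

CHAN = 'H1:SUS-PR2_M1_OSEMINF_LF_OUT16'  # assumed readback channel
FULL_RANGE_CT = 32000.0                  # approximate open-light counts
EDGE_FRACTION = 0.1                      # warn below 10% of full range

counts = epics.caget(CHAN)
if counts is not None and counts < EDGE_FRACTION * FULL_RANGE_CT:
    print('%s = %.0f ct: flag is near its closed-light edge' % (CHAN, counts))
else:
    print('%s = %s ct: looks OK' % (CHAN, counts))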
I'd forgotten to post about the OMCS data I took on 2017-07-25 as well.
The data lives here:
/ligo/svncommon/SusSVN/sus/trunk/OMCS/H1/OMC/SAGM1/Data/
2017-07-25_1812_H1SUSOMC_M1_WhiteNoise_L_0p02to50Hz.xml
2017-07-25_1812_H1SUSOMC_M1_WhiteNoise_P_0p02to50Hz.xml
2017-07-25_1812_H1SUSOMC_M1_WhiteNoise_R_0p02to50Hz.xml
2017-07-25_1812_H1SUSOMC_M1_WhiteNoise_T_0p02to50Hz.xml
2017-07-25_1812_H1SUSOMC_M1_WhiteNoise_V_0p02to50Hz.xml
2017-07-25_1812_H1SUSOMC_M1_WhiteNoise_Y_0p02to50Hz.xml
Detailed plots are now attached, and they show that the OMC is clear of rubbing; the data looks as it has for the past few years, and what differences we see between LHO and LLO are in the lower-stage pitch modes, which are arbitrarily influenced by ISC electronics cabling running down the chain (as we see for the reaction masses on the QUADs).
After the 10:16 UTC lockloss, DIAG_MAIN began reporting that, in addition to the known PCAL Y issues, PCAL X was also off by more than 1%. Trending the RX PD OUTPUT shows that the output changed in a step coincident with the lockloss. I'll not attempt to do anything about this at the moment and let those in the land of the living decide what actions to take.
The Y-end TX PD doesn't show any changes during this time, and hence that end would not have been the cause of any problem. The X-end changes are small and hence wouldn't have been a cause either. Actually the full rate X-end channel (16 kHz DQ) doesn't show any changes (first plot), so the changes we see in the '*OUTPUT' channel might be some artifact. The small change we see at the Y-end is a response to the lock loss. The second zoomed-in plot shows that the changes at the Y-end happen after the lock loss and only in the RX PD. This is an indication of the test mass moving (oscillating) after the lock loss, due to the kick. The PCal could cause lock losses when most of its power (~100%) changes suddenly or the optical follower servo loop becomes unstable. That wasn't the case here.
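For illustration, a minimal sketch of the kind of 1%-deviation check DIAG_MAIN reports on (assuming pyepics; the channel name and nominal value are placeholders, only the 1% threshold comes from above):

# Hedged sketch: report if a Pcal RX PD output has drifted more than 1%
# from a stored nominal value. Channel name and nominal are placeholders.
import epics  # pyepics

CHAN = 'H1:CAL-PCALX_RX_PD_OUT16'  # assumed RX PD readback channel
NOMINAL = 1.0                      # placeholder nominal value [arb.]

value = epics.caget(CHAN)
if value is None:
    raise RuntimeError('could not read %s' % CHAN)
deviation = abs(value / NOMINAL - 1.0)
if deviation > 0.01:
    print('%s off nominal by %.1f%%' % (CHAN, 100 * deviation))
else:
    print('%s within 1%% of nominal' % CHAN)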