The Master switch needs to be closed, so Guardian has the wrong setpoint. Not sure how long this has been the case, as the channels are not trendable. Guardian may need a reload, or an init cycle. Otherwise, no need to worry; these guardian fixes can wait until we are next out of lock.
Title: 9/21 Day Shift 15:00-23:00 UTC (8:00-16:00 PDT). All times posted in UTC.
State of H1: Observing
Outgoing Operator: Jeff Bartlett
Summary: Locked in Observing for ~9 hours.
Laser Status:
Activity Log: All times in UTC (PT)
07:00 (00:00) Take over from TJ
07:09 (00:09) IFO locked at NOMINAL_LOW_NOISE, 22.7W, 71Mpc
07:16 (00:16) Set Intent Bit to Observing
08:17 (01:17) ETMY saturation
08:36 (01:36) ETMY saturation
09:43 (02:43) ETMY saturation
12:03 (05:03) ETMY saturation
13:33 (06:33) ETMY saturation
14:41 (07:41) ETMY saturation
14:45 (07:45) Christina moving 1-ton truck from Carpenters shop to LSB
15:00 (08:00) Turn over to Travis

End of Shift Summary:
Title: 09/21/2015, Owl Shift 07:00 – 15:00 (00:00 – 08:00) All times in UTC (PT)
Support: None needed
Incoming Operator: Travis
Shift Summary:
- 07:09 Relocked IFO after lockloss (possibly the 6.3-mag EQ in Chile). Set the Intent bit to Observing at 07:16 (00:16).
- Smooth Owl shift last night. IFO locked in NOMINAL_LOW_NOISE at 22.7W with 68 to 72Mpc range for 8 hours. No significant seismic events during the shift. The wind fluctuated between 20+ mph and single-digit speeds during the shift. This morning wind speeds are back up to high teens to low 20s mph. 6 ETMY saturation events during the shift, with minor effect on range.
Looking at the past 12 months of laser output power trend data (see image PastYear.png): during the first long stretch, the 202 days between 10-21-2014 and 5-10-2015, the output power decay is approximately 10 mW/day, going by a crude fit by eye. During the second stretch, which coincides with when the diode current was last increased, the output power decay is approximately 20 mW/day. Also attached are trends from the past 120 days. The image Fit120.png shows the regression fit to the data as given by Grace. The fitted decay rates are ~15 mW/day and ~20 mW/day. So over the ~90 days of the observing run we might lose ~1.8 W from the laser, which would lower the power into the IMC accordingly. At some point we need to ask ourselves: at what point do we increase the diode current to raise the laser power and preserve the status quo? Alternatively, is a ~1.8 W power loss by the end of the run enough to worry about?
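For reference, a decay rate like the ones quoted can be extracted with an ordinary least-squares fit rather than a by-eye estimate. This is a minimal sketch with made-up trend values; the real numbers would come from the laser output power channel trends:

```python
# Minimal sketch: least-squares estimate of laser power decay rate.
# The day/power values below are synthetic placeholders, not real trend data.
import numpy as np

days = np.array([0, 30, 60, 90, 120])              # days since stretch start
power_w = np.array([33.0, 32.55, 32.1, 31.65, 31.2])  # output power [W]

slope_w_per_day, offset = np.polyfit(days, power_w, 1)
decay_mw_per_day = -slope_w_per_day * 1e3
print("decay rate: %.1f mW/day" % decay_mw_per_day)

# projected loss over a ~90-day observing run
print("90-day loss: %.2f W" % (decay_mw_per_day * 90 / 1e3))
```

With the ~15-20 mW/day rates from the Grace fit, the same arithmetic gives the ~1.8 W loss quoted above.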
H1 in Observing mode for past 4 hours. Wind has dropped to a light breeze. There was some low frequency seismic noise at END-Y for the past two hours (could be wind related) that has tapered off. Three reported ETM-Y saturations over the past 4 hours. IFO is stable with a 74Mpc range. All appears normal.
Transition Summary: Title: 09/21/2015, Owl Shift 07:00 – 15:00 (00:00 – 08:00) All times in UTC (PT)
State of H1: At 07:09 (00:09) relocked at NOMINAL_LOW_NOISE, 22.7W, 71Mpc
Outgoing Operator: TJ
Quick Summary: At the shift change TJ was relocking the IFO after a lockloss. At 07:16 (00:16) set the Intent bit to Observing. Seismic activity is ringing down after the 6.3 mag EQ in Chile. Wind is mid-teens to low 20s mph.
Title: 09/20 Eve Shift 23:00-7:00 UTC (16:00-0:00 PDT). All times in UTC.
State of H1: Relocking
Shift Summary: Relocked after some high winds and then stayed locked for 6+ hours. Small issue with an RF45 EOM driver that took us out of Observing while Stefan fixed it. Lost lock at 6:20 and have been on our way back up since.
Incoming Operator: Jeff B.
Log:
Lockloss @ 6:20 UTC
Terramon showed a 6.3 earthquake, and when I saw LLO go down I turned on the DHARD Y boost at 6:19, as Sheila suggested in alog 21708. A minute or so later we lost lock. I'm not sure whether it was the earthquake that brought us down, the new boost, or something else entirely.
There is a problem with the network switch connecting LDAS to GC in the DCS room in the onsite warehouse.
This means the cluster head nodes, ldas-grid, ldas-pcdev1, detchar, and the nds2 server at LHO are unreachable.
This may be why the summary pages are not updating.
However the computers are up and jobs are running.
I've been to the site, talked with the operator, then walked from the control room to the warehouse to check on the network switch. It appears to be dead. (See my email below.)
Also, ldas-pcdev2 and the web server (located in a different room) are up and reachable and can see the data for the summary pages. I've contacted Duncan Macleod to see if there is a way to get the summary pages updating again, using the working connections.
We may not be able to replace the broken switch until tomorrow.
Note that this does not affect the flow of data to LDAS at LHO or CIT.
-------- Original Message --------
Subject: Re: [DASWG] Re: Problem with the head nodes at LHO
Date: Sun, 20 Sep 2015 20:39:06 -0700
From: Gregory Mendell <gmendell@ligo-wa.caltech.edu>
To: daswg@ligo.org
CC: LDAS_ADMIN_ALL <ldas_admin_all@ligo.caltech.edu>, "detchar@ligo.org" <detchar@ligo.org>
On 9/20/15 7:16 PM, Gregory Mendell wrote:
> The problem appears to be that nds, ldas-grid, and ldas-pcdev1 are
> unreachable from outside the internal ldas network.
>
> These computers are up and jobs are running.
>
> I will be checking on the GC switch in the ldas room at LHO once I get
> out to the site (in about 30 minutes).
The switch has no link lights on any of its copper or fiber connections
and no indication it is getting power.
Power cycling and pushing the reset button several times did not work.
Moving the power cord to another circuit did not work. (Other switches and
computers are all up and showing lights on the same circuits, so it was
unlikely this would have worked anyway.)
I'm in touch with Dan Moraru. If we don't have a spare we'll have to
replace the broken switch tomorrow.
Regards,
Greg
This problem was fixed by Dan Moraru this morning, thanks to a loaner switch from Ryan Blair.
H1 is Locked and Observing. We seem to have fixed the problem with the RF45 EOM driver. Winds have died down and seismic is looking good.
While I was investigating alog 21712 I found that the ISC_LOCK Guardian node went into error at 22:43 UTC with "AttributeError: 'REFL_IN_VACUO' has no attribute 'failure'". It seems like a code error, but I don't see a log of whether it was fixed or just reloaded. I will investigate further when I get a chance.
That was my fault, due to an errant button click while frantically trying to relock before your shift started. This was during initial alignment, in which I had the ISC_LOCK guardian in manual mode in order to take it to DOWN. I believe I was trying to click another window below the open 'ALL LOCK STATES' window (note that the REFL_IN_VACUO button is the bottom left button in this window). I noticed it went into error trying to get to this state, but attributed it to my own mistake, reloaded the node, and failed to log it since I was doing other things at the time. Once I went back to AUTO, the error state cleared.
The DIAG_MAIN node will go into error a little less than once a day, and report:
File "/ligo/apps/linux-x86_64/cdsutils-497/lib/python2.7/site-packages/cdsutils/avg.py", line 67, in avg
2015-09-21T01:10:35.46180 for buf in conn.iterate(*args):
RuntimeError: Requested data were not found
(screenshot attached)
While this only requires the operator to reload the node, the cdsutils avg package is used in other nodes such as ISC_LOCK and OMC_LOCK (and others that I can't think of off the top of my head). This flakiness is also why its use is omitted from the DIAG_CRIT node, for fear that it will not be able to find the data, bring the node into error, and then take us out of Observation.
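One possible mitigation, sketched here purely as an illustration (the avg_with_retry wrapper is hypothetical, not an existing cdsutils API, and whether retrying is appropriate for a given node is a design decision not settled here), would be to retry the average a few times before letting the node go into error:

```python
# Hypothetical sketch: retry wrapper for transient "Requested data were
# not found" RuntimeErrors from an NDS average call.
import time

def avg_with_retry(avg_func, *args, **kwargs):
    """Call avg_func(*args), retrying a few times on RuntimeError."""
    retries = kwargs.pop('retries', 3)
    wait = kwargs.pop('wait', 1.0)
    for attempt in range(retries):
        try:
            return avg_func(*args, **kwargs)
        except RuntimeError:
            if attempt == retries - 1:
                raise          # give up: let the node go into error
            time.sleep(wait)   # brief pause before asking NDS again

# usage (hypothetical): avg_with_retry(cdsutils.avg, 2, 'H1:LSC-DARM_OUT')
```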
Operators: Please log and screenshot the Guardian log any time that a Guardian node goes into error. This will help us diagnose the problem, or others, faster.
The logs are plain text, so you can just copy/paste them into the log as well.
We again have the non-stationary noise described in alog 21353. Interestingly, the noise also seems to move around in frequency. Attached is a power spectrum at 3 different UTC times, of both DELTAL_EXTERNAL and OMC_DCPD_SUM. (OMC_DCPD_NULL does not show it.)
.. was due to the RF45 EOM driver acting up. Attached are the same spectra, from the same times, but including the H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ channel, as well as its coherence with DCPD_SUM. H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ is clearly elevated when this noise shows up. As a temporary fix (because we are still in a CAMLAND alert) we lowered the modulation index by 1 dB:
- H1:LSC-MOD_RF45_AM_RFSET was 23.2, is now 22.2
- and, related (because this lowered the light entering HAM6), H1:ASC-ODC_AS_A_DC_LT_TH was -13000, is now -11000 (this could just stay there - no need to revert)
Remarks:
- Currently the DOWN script will NOT set H1:LSC-MOD_RF45_AM_RFSET back on lock loss.
- This change in the SB power will lower the modulation index (normally ~0.3) by 1 dB.
- Thus the carrier power (~(1-Gamma^2/2)) will go up by about 1%.
- Updated the guardian to reset H1:LSC-MOD_RF45_AM_RFSET to 23.2 in the DOWN state (ISC_LOCK.py, svn revision 11674). It was reloaded at 2:59 UTC.
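As a cross-check of the ~1% figure, a quick sketch of the arithmetic under the small-modulation-index approximation quoted above (carrier power ~ 1 - Gamma^2/2):

```python
# Sketch: effect of a 1 dB reduction in modulation drive on carrier power,
# using the small-index approximation P_carrier ~ (1 - Gamma^2/2).
gamma_nominal = 0.3                              # modulation index from the log
gamma_new = gamma_nominal * 10 ** (-1.0 / 20)    # amplitude 1 dB lower

carrier_old = 1 - gamma_nominal ** 2 / 2
carrier_new = 1 - gamma_new ** 2 / 2
change_pct = (carrier_new / carrier_old - 1) * 100
print("carrier power change: %.2f %%" % change_pct)   # ~1%
```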
Tagging a few more subsystems so more people get the memo to help investigate, and note the change in configuration for this lock stretch.
J. Kissel, T. Schaffer
During the brief period we were down and attempting to lock the IFO this afternoon, TJ's robot -- we'll call him Jeeves today -- was announcing that the SUS SRM and SUS OMC's Software Watchdogs were tripping. The software watchdogs have not tripped. This watchdog noise has been going on for quite some time, and I wanted to get to the bottom of it. Here're my conclusions:
(1) Neither the SUS SRM nor the SUS OMC is in danger.
(2) The robot is claiming a "trip," but it is merely reporting that the EPICs records
H1:IOP-SUS_$(OPTIC)_WD_RMS
H1:IOP-SUS_$(OPTIC)_DK_SIGNAL
are indicating that the BLRMS of the top mass OSEMs is briefly surpassing threshold. Recall that we need 300 [sec] = 5 [min] of these indicators at solid "bad" = "red" before *any* action is taken by the software watchdog. Thus, the robot is claiming a much worse thing has happened than what has really happened.
(3) The suspensions' top masses are moving "so much" because of ISC control fed to those stages of the SUS (for the SRM, it's occasional blasts from the angular controls, and for the OMC it's more consistent blasting in all L, P, and Y DOFs -- but also for angular control). I imagine that because the ground motion is rather large right now with the wind, the top masses are moving more from residual ground motion AND the ISC request is larger trying to account for it.
(4) The threshold for the software watchdog is set right where it should be, given how long and consistently the threshold must be saturated before any action is taken. The USER watchdogs, which watch the same RMS signal (just not calibrated into volts as the software watchdog's is), were at 20% / 75% of their trip threshold for SRM / OMC (respectively) during this time. I don't argue that these have been set any more carefully, but in any case, the USER WD should trip first, and in this case it will.
(5) We could probably treat the suspensions a little nicer by being a little smarter about when we turn on the ASC control, or by putting software limits on the control signals, but they're big girls -- they can handle it.
I suggest the only thing we change is the EPICs record that Jeeves watches, to H1:IOP-SUS_OPTIC_DK_WD_OUT, so that it reports when the software watchdogs have actually tripped. This record tells you that 5 minutes have passed in which the OSEMs have surpassed their threshold, and that the SEI system's countdown to shutdown (another 5 minutes later) has been triggered. I attach screenshots of the ISC Lock Guardian's state, the RMS values of the trigger signals for both the USER and SW WDs for both SRM and OMC, as well as a trend of the ISC requested drive to each respective suspension. These trends, and my knowledge of the design of each of these watchdogs, are what have convinced me of the above conclusions.
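The 300-second persistence rule described in conclusion (2) can be illustrated with a minimal sketch (an illustration of the described behavior only, not the production watchdog code):

```python
# Sketch: software watchdog acts only after the BLRMS stays above threshold
# continuously for hold_time seconds; any dip below threshold resets it.
def wd_trips(blrms_samples, threshold, dt=1.0, hold_time=300.0):
    """Return True if blrms stays above threshold for hold_time seconds."""
    over = 0.0
    for x in blrms_samples:
        over = over + dt if x > threshold else 0.0   # reset on any dip
        if over >= hold_time:
            return True
    return False

# a brief 10 s blast above threshold, like those Jeeves announced,
# does NOT trip the watchdog
print(wd_trips([2.0] * 10 + [0.1] * 600, threshold=1.0))   # False
```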
I've changed it in "Jeeves" aka VerbalAlarms
It was noted recently elsewhere that there is a pair of lines in DARM near 41 Hz that may be the roll modes of triplet suspensions. In particular, there is a prediction of 40.369 Hz for the roll mode labeled ModeR3. Attached is a zoom of the displacement spectrum in that band from 50 hours of early ER8 data. Since one naively expects a bounce mode at 1/sqrt(2) of the roll mode, also attached is a zoom of that region, for which the evidence of bounce modes seems weak. The visible lines are much narrower, and one coincides with an integer frequency.
For completeness, I also looked at various potential subharmonics and harmonics of these lines, in case the 41-Hz pair comes from some other source with non-linear coupling. The only ones that appeared at all plausible were at about 2/3 of 41 Hz. Specifically, the peaks at 40.9365 and 41.0127 Hz have potential 2/3 partners at 27.4170 and 27.5025 Hz (ratios: 0.6697 and 0.6706) -- see 3rd attachment. The non-equality of the ratios with 0.6667 is not necessarily inconsistent with a harmonic relation, since we've seen that quad suspension violin modes do not follow a strict harmonic progression, and triplets are almost as complicated as quads. On the other hand, I do not see any evidence at all for the 4th or 5th harmonics in the data set, despite the comparable strain strengths seen for the putative 2nd and 3rd harmonics.
Notes:
* The frequency ranges of the three plots are chosen so that the two peaks would appear in the same physical locations in the graphs if the nominal sqrt(2) and 2/3 relations were exact.
* There is another, smaller peak of comparable width between the two peaks near 27 Hz, which may be another mechanical resonance.
* The 27.5025-Hz line has a width that encompasses a 27.5000-Hz line that is part of a 1-Hz comb with a 0.5-Hz offset reported previously.
We are looking for the source of the 41 Hz noise lines. We used the coherence tool results for a week of ER8, with 1 mHz resolution:
https://ldas-jobs.ligo-wa.caltech.edu/~eric.coughlin/ER7/LineSearch/H1_COH_1123891217_1124582417_SHORT_1_webpage/
and as a guide looked at the structure of the 41 Hz noise, as seen in the PSD posted above by Keith. Michael Coughlin then ran the tool that plots coherence vs channels,
https://ldas-jobs.ligo-wa.caltech.edu/~mcoughlin/LineSearch/bokeh_coh/output/output-pcmesh-40_41.png
and made the following observations. I would take a look at the MAGs listed; they only seem to be spiking at these frequencies.

The channels that spike just below 40.95:
H1:SUS-ETMY_L3_MASTER_OUT_UR_DQ
H1:SUS-ETMY_L3_MASTER_OUT_UL_DQ
H1:SUS-ETMY_L3_MASTER_OUT_LR_DQ
H1:SUS-ETMY_L3_MASTER_OUT_LL_DQ
H1:SUS-ETMY_L2_NOISEMON_UR_DQ
H1:SUS-ETMY_L2_NOISEMON_UL_DQ
H1:PEM-CS_MAG_EBAY_SUSRACK_Z_DQ
H1:PEM-CS_MAG_EBAY_SUSRACK_Y_DQ
H1:PEM-CS_MAG_EBAY_SUSRACK_X_DQ

The channels that spike just above 41.0 are:
H1:SUS-ITMY_L2_NOISEMON_UR_DQ
H1:SUS-ITMY_L2_NOISEMON_UL_DQ
H1:SUS-ITMY_L2_NOISEMON_LR_DQ
H1:SUS-ITMY_L2_NOISEMON_LL_DQ
H1:SUS-ITMX_L2_NOISEMON_UR_DQ
H1:SUS-ITMX_L2_NOISEMON_UL_DQ
H1:SUS-ITMX_L2_NOISEMON_LR_DQ
H1:SUS-ITMX_L2_NOISEMON_LL_DQ
H1:SUS-ETMY_L3_MASTER_OUT_UR_DQ
H1:SUS-ETMY_L3_MASTER_OUT_UL_DQ
H1:SUS-ETMY_L3_MASTER_OUT_LR_DQ
H1:SUS-ETMY_L3_MASTER_OUT_LL_DQ
H1:SUS-ETMY_L2_NOISEMON_UR_DQ
H1:SUS-ETMY_L2_NOISEMON_UL_DQ
H1:SUS-ETMY_L2_NOISEMON_LR_DQ
H1:SUS-ETMY_L2_NOISEMON_LL_DQ
H1:SUS-ETMY_L1_NOISEMON_UR_DQ
H1:SUS-ETMY_L1_NOISEMON_UL_DQ
H1:SUS-ETMY_L1_NOISEMON_LR_DQ
H1:SUS-ETMY_L1_MASTER_OUT_UR_DQ
H1:SUS-ETMY_L1_MASTER_OUT_UL_DQ
H1:SUS-ETMY_L1_MASTER_OUT_LR_DQ
H1:SUS-ETMY_L1_MASTER_OUT_LL_DQ
H1:SUS-ETMX_L2_NOISEMON_UR_DQ
H1:SUS-ETMX_L2_NOISEMON_LL_DQ
H1:PEM-EY_MAG_EBAY_SUSRACK_Z_DQ
H1:PEM-EY_MAG_EBAY_SUSRACK_Y_DQ
H1:PEM-EY_MAG_EBAY_SUSRACK_X_DQ
H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS
H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_QUAD_SUM_DQ

The magnetometers do show coherence at the two spikes seen in Keith's plot. The SUS channels are also showing coherence at these frequencies, sometimes broad in structure, sometimes narrow. See the coherence plots below.

Nelson, Michael Coughlin, Eric Coughlin, Pat Meyers
Nelson, et al. Interesting list of channels. Though they seem scattered, I can imagine a scenario where the SRM's highest roll mode frequency is still the culprit.

All of the following channels you list are the drive signals for DARM. We're currently feeding back the DARM signal to only ETMY. So, any signal you see in the calibrated performance of the instrument, you will see here -- they are part of the DARM loop.
H1:SUS-ETMY_L3_MASTER_OUT_UR_DQ
H1:SUS-ETMY_L3_MASTER_OUT_UL_DQ
H1:SUS-ETMY_L3_MASTER_OUT_LR_DQ
H1:SUS-ETMY_L3_MASTER_OUT_LL_DQ
H1:SUS-ETMY_L2_NOISEMON_UR_DQ
H1:SUS-ETMY_L2_NOISEMON_UL_DQ
H1:SUS-ETMY_L2_NOISEMON_LR_DQ
H1:SUS-ETMY_L2_NOISEMON_LL_DQ
H1:SUS-ETMY_L1_NOISEMON_UR_DQ
H1:SUS-ETMY_L1_NOISEMON_UL_DQ
H1:SUS-ETMY_L1_NOISEMON_LR_DQ
H1:SUS-ETMY_L1_MASTER_OUT_UR_DQ
H1:SUS-ETMY_L1_MASTER_OUT_UL_DQ
H1:SUS-ETMY_L1_MASTER_OUT_LR_DQ
H1:SUS-ETMY_L1_MASTER_OUT_LL_DQ

Further -- though we'd have to test this theory by measuring the coherence between, say, the NoiseMon channels and these SUS rack magnetometers -- I suspect these magnetometers are just sensing the requested DARM drive control signal:
H1:PEM-EY_MAG_EBAY_SUSRACK_Z_DQ
H1:PEM-EY_MAG_EBAY_SUSRACK_Y_DQ
H1:PEM-EY_MAG_EBAY_SUSRACK_X_DQ

Now comes the harder part. Why are the ITMs and corner station magnetometers firing off? The answer: SRCL feed-forward / subtraction from DARM, and perhaps even angular control signals. Recall that the QUADs' electronics chains are identical, in construction and probably in emission of magnetic radiation.
H1:PEM-CS_MAG_EBAY_SUSRACK_Z_DQ
H1:PEM-CS_MAG_EBAY_SUSRACK_Y_DQ
H1:PEM-CS_MAG_EBAY_SUSRACK_X_DQ
sound like they're in the same location for the ITMs as the EY magnetometer is for the ETMs. We push SRCL feed-forward to the ITMs, SRM is involved in SRCL, and there is also residual SRCL-to-DARM coupling left over from the imperfect subtraction. That undoubtedly means that the ~41 [Hz] mode of the SRM will show up in DARM, SRCL, the ETMs, and the ITMs.
Also, since the error signal / IFO light for the arm cavity (DARM, CARM -- SOFT and HARD) angular control DOFs has to pass through HSTSs as it comes out of the IFO (namely SRM and SR2 -- the same SUS involved in SRCL motion), they're also potentially exposed to this HSTS resonance. We feed the arm cavity ASC control signal to all four test masses. That would also explain why the coil driver monitor signals show up on your list:
H1:SUS-ITMY_L2_NOISEMON_UR_DQ
H1:SUS-ITMY_L2_NOISEMON_UL_DQ
H1:SUS-ITMY_L2_NOISEMON_LR_DQ
H1:SUS-ITMY_L2_NOISEMON_LL_DQ
H1:SUS-ITMX_L2_NOISEMON_UR_DQ
H1:SUS-ITMX_L2_NOISEMON_UL_DQ
H1:SUS-ITMX_L2_NOISEMON_LR_DQ
H1:SUS-ITMX_L2_NOISEMON_LL_DQ

The 41 Hz showing up in
H1:SUS-ETMX_L2_NOISEMON_UR_DQ
H1:SUS-ETMX_L2_NOISEMON_LL_DQ
(and not in the L3 or L1 stage) also supports the ASC control signal theory -- we only feed ASC to the L2 stage, and there is no LSC (i.e. DARM) request to ETMX (which we *would* spread among the three L3, L2, and L1 stages). Also note that there's a whole integration issue about how these noise monitor signals are untrustworthy (see Integration Issue #9); the ETMX noise mons have not been cleared as "OK," and in fact have been called out explicitly for their suspicious behavior in LHO aLOG 17890.

I'm not sure where this magnetometer lives:
H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS
H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_QUAD_SUM_DQ
but it's clear from the channel names that these are just two different versions of the same magnetometer.

I'm surprised that other calibrated LSC channels, like H1:CAL-CS_PRCL_DQ (and the corresponding SRCL and MICH channels), don't show up on your list. I'm staring at the running ASD of these channels on the wall and there's a line at 41 [Hz] in both the reference trace and the current live trace (though, because PRCL, SRCL, and MICH all involve light that bounces off of HSTSs, I suspect that you might find slightly different frequencies in each).
"I see your blind list of channels that couple, and raise you a plausible coupling mechanism that explains them all. How good is your hand?!"
I neglected to state explicitly that the spectra I posted are taken from non-overlapped Hann-windowed 30-minute SFTs, hence with bins 0.5556 mHz wide and a BW of about 0.83 mHz.
Attached are close-in zooms of the bands around the 41 Hz peaks, from the ER8 50-hour data integration, allowing an estimate of their Q's (a request from Peter F).
For the peak at about 40.9365 Hz: FWHM ~ 0.0057 Hz -> Q = 40.94/0.0057 = 7,200
For the peak at about 41.0127 Hz: FWHM ~ 0.0035 Hz -> Q = 41.01/0.0035 = 12,000
Also attached are zooms and close-in zooms for the peak at 40.9365 Hz from 145 hours of ER7 data, when the noise floor and the peak were both higher. The 41.0127-Hz peak is not visible in this data set integration. In the ER7 data set, one has for 40.9365 Hz: FWHM ~ 0.0049 Hz -> Q = 40.94/0.0049 = 8,400
Peter expected Q's as high as 4000-5000 and no lower than 2000 for a triplet suspension. These numbers are high enough to qualify.
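The Q estimates above follow directly from Q = f0 / FWHM, with the FWHM read off the amplitude spectrum by eye from the zoomed plots. A minimal sketch:

```python
# Sketch: quality factor from line center frequency and full width at
# half maximum, as used for the 41 Hz peaks above.
def quality_factor(f0_hz, fwhm_hz):
    return f0_hz / fwhm_hz

# reproduces the ~7,200 and ~12,000 values quoted above (to rounding)
print(quality_factor(40.9365, 0.0057))
print(quality_factor(41.0127, 0.0035))
```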
Andy Lundgren pointed out that there is a line at about 28.2 Hz that might be close enough to 40.9365/sqrt(2) = 28.95 Hz to qualify as the bounce-mode counterpart to the suspected roll mode. So I've checked its Q in the 50-hour ER8 set and the 145-hour ER7 set and am starting to think Andy's suspicion is correct (see attached spectra). I get Q's of about 9400 for ER8 and 8600 for ER7, where the line in ER7 is much higher than in ER8, mimicking what is seen at 41 Hz.
In an email Gabriele Vajente has stated, "...the noise might be correlated to PRCL." There is a coherence spike between h(t) and H1:LSC-PRCL_OUT_DQ at 40.936 Hz. Here is the coherence for a week in ER8.
Peter F asked if a Q of ~10,000 for bounce and roll modes was plausible. The answer is yes. We have evidence that the material loss can be at least a factor of 2 better than 2e-4 -- e.g. see our paper (due to be published soon in Rev. Sci. Instrum.), P1400229, where we got an average loss of 1.1 x 10^-4 for music wire. Q = 1/loss.
[Stuart A, Jeff K, Norna R]
After having looked through acceptance measurements, taken in-chamber (Phase 3), for all H1 HSTSs, it should be noted that our focus was on the lower-frequency modes of the suspensions, so we have little data to refine the estimates of the individual mode frequencies for each suspension. No vertical (modeV3 at ~27.3201 Hz) or roll (modeR3 at ~40.369 Hz) modes are present in the M1-M1 (top-to-top) stage TFs of the suspensions. Some hints of modes can be observed in the M2-M2 and M3-M3 TFs (see attached below), as follows:
1) M2-M2: all DOFs suffer from poor coherence above 20 Hz. However, there are some high-Q features that stand out in the L DOF for SRM, at frequencies of 27.46 Hz and 40.88 Hz. In Pitch, there is a high-Q feature at 27.38 Hz for PR2. In Yaw, a feature at 40.81 Hz is just visible for MC1.
2) M3-M3: again, all DOFs suffer very poor coherence above 20 Hz. However, a feature can be seen standing above the noise at 26.7 Hz for MC2 in the L DOF. Also, a small peak is present at 40.92 Hz for SR2 in the Yaw DOF.
We currently don't have any bandstops for these modes on the triples, except in the top-stage length path to SRM and PRM. It would not impact our ASC loops to add bandstops to the P+Y input on all triples. We will do this next time we have a chance to put some changes in.
Ryan Derosa mentioned that he took some low resolution measurements that include an L1 SR2 roll mode at 41.0 Hz.
I have now looked at the data for all the MCs, to complement the PRs and SRs above in log 21741. Screenshots of the data are attached; a list of the modes found is below.
H1
SUS bounce (Hz) roll (Hz)
MC1 27.38 40.81
MC2 27.75 40.91
MC3 27.43? 40.84
L1
SUS bounce (Hz) roll (Hz)
MC1 27.55? 40.86
MC2 --- 40.875
MC3 27.53 40.77
Error bars of +- 0.01 Hz.
I am not sure about the bounce modes for H1 MC3 and L1 MC1 since the peaks are pretty small. I couldn't find any data on L1 MC2 showing a bounce mode.
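As a quick sanity check on the table above, here is a minimal sketch comparing the measured roll/bounce ratios against the naive sqrt(2) expectation, using only the pairs with unambiguous bounce peaks (the question-marked entries are omitted):

```python
# Sketch: measured roll/bounce frequency ratios vs the naive sqrt(2)
# expectation, for the unambiguous pairs in the table above.
import math

pairs = {                       # (bounce Hz, roll Hz)
    'H1 MC1': (27.38, 40.81),
    'H1 MC2': (27.75, 40.91),
    'L1 MC3': (27.53, 40.77),
}
for sus in sorted(pairs):
    bounce, roll = pairs[sus]
    print("%s: roll/bounce = %.4f (naive sqrt(2) = %.4f)"
          % (sus, roll / bounce, math.sqrt(2)))
```

The measured ratios come out near 1.47-1.49, a few percent above sqrt(2), in line with the earlier notes that the simple bounce-roll relation need not hold exactly for real triple suspensions.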
Expanding the channel list to include all channels in the detchar O1 channel list:
https://wiki.ligo.org/DetChar/O1DetCharChannels
I ran a coherence study for a half hour of data towards the end of ER8.
I see particularly high coherence at 40.93 Hz in many channels in LSC, OMC, the ITM suspensions, and also a suspension for PR2. It seems to me that this particularly strong line is probably due to PR2, based on these results, Keith's ASDs, and Brett's measurements, and it is very highly coherent.
Full results with coherence matrices and data used to create them (color = coherence, x axis = frequency, y axis = channels) broken down roughly by subsystem can be found here:
https://ldas-jobs.ligo-wa.caltech.edu/~meyers/coherence_matrices/1126258560-1801/bounce_roll4.html
Attached are several individual coherence spectra that lit up the coherence matrices with the frequency of maximum coherence in that range picked out.
-Pat
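For anyone wanting to reproduce a single coherence spectrum outside the full pipeline, here is a minimal sketch of the underlying estimate using scipy, with synthetic data standing in for DARM and a witness channel (the signal content is invented for illustration; real studies would pull the channels from NDS):

```python
# Sketch: magnitude-squared coherence between two channels sharing a
# common 40.93 Hz line, estimated with Welch-averaged segments.
import numpy as np
from scipy import signal

np.random.seed(0)                             # deterministic synthetic data
fs = 256.0
t = np.arange(0, 600, 1 / fs)                 # 10 minutes of data
line = np.sin(2 * np.pi * 40.93 * t)          # common 40.93 Hz line
darm = line + np.random.randn(t.size)         # stand-in for DARM
aux = 0.5 * line + np.random.randn(t.size)    # stand-in for a witness channel

# 16 s segments -> 0.0625 Hz resolution, ~70 averages
f, coh = signal.coherence(darm, aux, fs=fs, nperseg=int(16 * fs))
print("coherence peaks at %.2f Hz with value %.2f" %
      (f[np.argmax(coh)], coh.max()))
```

The averaging over many segments is what suppresses the coherence floor for unrelated noise; a single segment would show coherence near 1 everywhere.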
J. Kissel, E. King, M. Oliver, T. Sadecki
Unsure where Robert and Nutsinee left us off on whether we could enjoy our Sunday afternoon in the control room (LHO aLOG 21180), I figured I'd log that we've turned on music using the "big" speaker in the control room at a medium-to-light volume (max volume from my laptop, bass and treble roughly in the middle of the range, just a hair under the first volume tick of the "line in" port). We're listening to Lake Street Dive's two albums "Bad Self Portraits" and "Lake Street Dive"; some relaxing Sunday modern pop folk rock, with a hint of 60s Motown and some southern flair. Track one of Bad Self Portraits started at 19:36:30 UTC (12:36:30 PDT).
The bass-heavy one showed up on a microphone but no evidence of it coupled into DARM. Enjoy the music =)
By "the bass-heavy one," I believe Nutsinee is referring to the song that was used *during* the PEM injections (Underworld's "Sola Sistem"), *not* what we're listening to now. P.S. We've now switched artists -- at around 20:45 UTC, we switched to listening through Hiatus Kaiyote's "Choose Your Weapon." A little bit more bass-heavy, neo soul / R&B, with jazz influences. Volume and bass settings are the same.
Music has been turned OFF at 03:38 UTC. (Or at least mine).