I'm still not sure what happened: the SUS_CHARGE Guardian reported a connection error for the H1:SUS-ITMX_L3_DRIVEALIGN_L2L filter bank, but it looks like the channel it wanted should have been the GAIN. The same thing happened last week as well (alog 71899). Before Camilla came in and pointed that out to me, I first tried stopping the node and re-exec'ing it to create new EPICS connections, but that didn't work. I then took the node to DOWN and tried the injections again, but stopped it a bit too early. I also converted some tabs to spaces that might have been adding to the Guardian confusion.
Fixed now and we ran it past the error point then brought it to DOWN for maintenance.
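For anyone chasing a similar tabs-vs-spaces issue later, here is a minimal sketch (the file name is a placeholder, not the actual node code path) of how one could scan a Guardian module for mixed indentation, which Python rejects and which can produce confusing node errors:

```python
# Minimal sketch: scan a Guardian module for lines whose leading whitespace
# mixes tabs and spaces.  The path below is a placeholder; point it at the
# actual node code you are debugging.
from pathlib import Path

def find_mixed_indentation(path):
    """Return (line_number, line) pairs whose indentation mixes tabs and spaces."""
    flagged = []
    for num, line in enumerate(Path(path).read_text().splitlines(), start=1):
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent and " " in indent:
            flagged.append((num, line))
    return flagged

for num, line in find_mixed_indentation("SUS_CHARGE.py"):
    print(f"line {num}: {line!r}")
```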
TITLE: 08/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY: SUS_CHARGE ran into a connection error (see separate alog), and maintenance has begun with minor activities on site.
TITLE: 08/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Came in with the detector in the process of relocking. During my shift we had two locklosses, one unknown and one known, but the detector relocked quickly each time without needing an initial alignment or any other help. We are currently Observing and have been Locked for 2hrs 25mins.
23:00 Relocking from lockloss that happened before I arrived (72201)
23:46:40 NOMINAL_LOW_NOISE
00:00:26 Observing
02:09 Lockloss(72209), I am taking us to LOWNOISE_LENGTH_CONTROL for 102Hz peak test(72214)
03:13 Reached NOMINAL_LOW_NOISE
03:57 Lockloss caused by me and Jenne, we are each taking 50% of the blame :( (72211)
04:41 Reached NOMINAL_LOW_NOISE
05:11 Observing
LOG:
no log
In investigating the cause of the peak at 102Hz (72064, 72108), the cause had been narrowed down to either a calibration-line issue or an LSC FF issue. Since the change to the LSC FF ramp times did not fix the peak (72188), Jeff and Ryan S added a new ISC_LOCK state today (well, 8/14 22:45UTC) (72205), TURN_ON_CALIBRATION_LINES, so that the calibration lines aren't turned on until late in the locking process.
In the following locking sequence (72201), the peak still appeared around the LOWNOISE_LENGTH_CONTROL and TURN_ON_CALIBRATION_LINES states, so it was decided that the next time we lost lock, we would pause on the way back up at LOWNOISE_LENGTH_CONTROL to determine which of the two states the peak was linked to.
We lost lock at 2:09UTC (72209), so I set the detector to only go through to the state right before LOWNOISE_LENGTH_CONTROL (I selected LOWNOISE_ESD_ETMY when it should've been LOWNOISE_ESD_ETMX, but that presumably wouldn't change which of the two states the peak turned on at), and then selected LOWNOISE_LENGTH_CONTROL; the 102Hz peak appeared quickly after (full spectrum, zoomed in). Once we had been in that state for a bit, I moved to TURN_ON_CALIBRATION_LINES to see if that would cause the peak to change in any way, but it didn't (zoomed in - not the best screenshot, sorry). So the peak at 102Hz is caused by one of the filter gains engaged in LOWNOISE_LENGTH_CONTROL.
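For reference, a sketch of how this zoomed-in spectrum check could be reproduced offline with GWpy; the channel, time span, and FFT settings below are assumptions, not the settings used for the attached screenshots:

```python
# Sketch: zoom in on the strain ASD around 102 Hz during the test.
from gwpy.timeseries import TimeSeries

# placeholder 10-minute span while sitting in LOWNOISE_LENGTH_CONTROL
data = TimeSeries.get('H1:GDS-CALIB_STRAIN',
                      'Aug 15 2023 02:45 UTC', 'Aug 15 2023 02:55 UTC')
asd = data.asd(fftlength=16, overlap=8)   # 1/16 Hz resolution near 102 Hz
plot = asd.crop(95, 110).plot()
plot.gca().set_yscale('log')
plot.show()
```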
Lockloss @ 3:57 due to me having selected the wrong state to stop at before going into LOWNOISE_LENGTH_CONTROL for our 102Hz peak test :(. Currently relocking and everything is going well!!
Back to Observing at 5:11UTC!
After losing lock at 2:09UTC, I started relocking the detector but had it stop right before it got to LOWNOISE_LENGTH_CONTROL so we could see whether the 102Hz peak (72064) is related to the LSC filter gains turning on or the calibration lines turning on. We got our answer, so I continued and we are currently in NOMINAL_LOW_NOISE waiting for ADS to converge so we can go into Observing.
Lockloss at 2:09UTC. Got an EX saturation callout immediately before.
In relocking, I will be setting ISC_LOCK so we go to LOWNOISE_LENGTH_CONTROL instead of straight to NOMINAL_LOW_NOISE, to see if the 102Hz line is caused by LOWNOISE_LENGTH_CONTROL or by the calibration lines engaging at TURN_ON_CALIBRATION_LINES (72205).
I noticed a weird set of glitches on the glitchgram FOM that took place between 3:13 and 3:57UTC (spectrogram, omicron triggers), ramping up in frequency from 160-200Hz over that timespan. Even though we weren't Observing when this was happening, the diagonal line on many of the summary page plots is hard to miss, so I wanted to post this and tag DetChar to explain why these glitches appeared and why they (presumably) shouldn't be seen again.
This was after we had reached NOMINAL_LOW_NOISE but were not yet Observing because we were waiting for ADS to converge. Although I don't know the direct cause of these glitches, they appeared, and ADS failed to converge, because I had selected the wrong state to pause at before going into LOWNOISE_LENGTH_CONTROL (for the 102Hz peak test), so even though I was able to proceed to NOMINAL_LOW_NOISE, the detector wasn't in the correct configuration. Once we lost lock trying to correct the issue, relocking automatically went through the correct states. So these glitches occurred during a non-nominal NOMINAL_LOW_NOISE.
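For completeness, a sketch of how the 03:13-03:57 UTC span could be re-examined with a GWpy spectrogram; the channel and FFT settings are assumptions, and the summary pages use their own Omicron/spectrogram pipelines rather than this code:

```python
# Sketch: ASD spectrogram over the span where the upward-drifting glitches appear.
from gwpy.timeseries import TimeSeries

data = TimeSeries.get('H1:GDS-CALIB_STRAIN',
                      'Aug 15 2023 03:13 UTC', 'Aug 15 2023 03:57 UTC')
specgram = data.spectrogram2(fftlength=4, overlap=2) ** (1/2.)  # ASD units
plot = specgram.plot(norm='log')
ax = plot.gca()
ax.set_yscale('log')
ax.set_ylim(100, 300)   # the 160-200 Hz drifting glitches sit in this band
plot.show()
```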
R. Short, J. Kissel
Following investigations into the 102Hz feature that has been showing up for the past few weeks (see alogs 72064 and 72108 for context), it was decided to try a lock acquisition with the calibration lines off until much later in the locking sequence. What this means in more specific terms is as follows:
For each of these points where the calibration lines are turned off or on, the code structure is now consistent. Each time "the lines are turned on/off," the cal lines for ETMX stages L1, L2, and L3 (CLK, SIN, and COS), PCal X, PCal Y, and DARMOSC are all toggled.
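As an illustration only of that consistent structure (the channel names below are guesses, not the ones actually used in ISC_LOCK, and the CLK/SIN/COS gains for the SUS stages are abbreviated to one entry each), the on/off toggling amounts to a single helper that touches every oscillator gain in one place:

```python
# Illustration only: channel names are assumptions, and real line amplitudes
# are not all equal.  In Guardian code, 'ezca' is provided to each state.
CAL_LINE_GAIN_CHANNELS = [
    'SUS-ETMX_L1_CAL_LINE_CLKGAIN',      # ETMX UIM oscillator   (assumed name)
    'SUS-ETMX_L2_CAL_LINE_CLKGAIN',      # ETMX PUM oscillator   (assumed name)
    'SUS-ETMX_L3_CAL_LINE_CLKGAIN',      # ETMX TST oscillator   (assumed name)
    'CAL-PCALX_PCALOSC1_OSC_CLKGAIN',    # PCal X                (assumed name)
    'CAL-PCALY_PCALOSC1_OSC_CLKGAIN',    # PCal Y                (assumed name)
    'CAL-CS_DARM_OSC_CLKGAIN',           # DARM oscillator       (assumed name)
]

def set_cal_lines(ezca, on, nominal_gains):
    """Toggle every calibration-line oscillator gain in one consistent place.

    nominal_gains maps channel name -> the amplitude used when the lines are on.
    """
    for chan in CAL_LINE_GAIN_CHANNELS:
        ezca[chan] = nominal_gains[chan] if on else 0
```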
The OMC and SUSETMX SAFE SDF tables have been updated appropriately (screenshots attached).
These changes were implemented on August 14th around 22:45 UTC, are committed to svn, and ISC_LOCK has been loaded.
TITLE: 08/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Commissioning time this morning while L1 was down and observing this afternoon until a lockloss shortly before the end of shift.
H1 is relocking automatically, currently locking PRMI.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:38 | FAC | Cindi | MX | - | Technical cleaning | 17:46 |
| 16:39 | ISC | Elenna | CR | - | ASC injections | 19:27 |
| 16:39 | PEM | Robert | LVEA/CR | - | PEM injections | 20:19 |
| 16:41 | FAC | Randy | LVEA N-bay | - | Hardware checks | 17:12 |
| 16:42 | VAC | Travis | MY | - | Turbopump maintenance (back/forth all day) | 20:33 |
| 17:13 | FAC | Randy | EY | - | Plug in scissor lift | 18:13 |
TITLE: 08/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
Detector went down a bit before I arrived (72201) for unknown causes. Currently working its way back up and is doing well.
Lockloss @ 22:44 UTC - cause currently unknown, seemed fast
00:00 Back into Observing
Today I took "unbiased" OLGs of INP1 P and Y (67187). I have plotted the measurements with error shading.
INP1 P has a UGF of about 0.036 Hz and a phase margin of 87 deg. This UGF seems very low for the target Gabriele and I had when we redesigned this loop (69108); I think it should be closer to 0.1 Hz. INP1 Y has a UGF of about 0.25 Hz with a phase margin of 35 deg, which is higher than I would have expected for our target. Time permitting, I will look into the design of both of these loops and see if there are any adjustments worth making.
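For anyone re-deriving these numbers from the exported data, here is a minimal sketch; the 'freq' and 'olg' array names are assumptions about what gets loaded from the exported files, not a fixed format:

```python
# Minimal sketch: read UGF and phase margin off a measured open-loop gain.
# Assumes a single unity-gain crossing and a measured phase between -180 and 0 deg
# at that crossing (as for both INP1 loops above).
import numpy as np

def ugf_and_phase_margin(freq, olg):
    """Return (UGF in Hz, phase margin in degrees) from complex OLG data."""
    mag = np.abs(olg)
    idx = np.argmin(np.abs(np.log10(mag)))   # point closest to |OLG| = 1
    ugf = freq[idx]
    phase_margin = 180 + np.degrees(np.angle(olg[idx]))
    return ugf, phase_margin
```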
You can find the measurement templates, exported data, and processing code in '/ligo/home/elenna.capote/DRMI_ASC/INP1'.
The templates for these measurements are also saved in [userapps]/asc/h1/templates/INP1 as 'INP1_{P,Y}_olg_broadband_shaped.xml'.
As a reminder, INP1 controls IM4 and is sensed on a combination of REFL RF45 WFS.
Follow up on previous tests (72106)
First I injected noise on SR2_M1_DAMP_P and SR2_M1_DAMP_L to measure the transfer function to SRCL. The results show that the shapes are different and the ratio is not constant in frequency, so we probably can't cancel the coupling of SR2_DAMP_P to SRCL by rebalancing the driving matrix (although I haven't thought carefully about whether there is some loop correction I need to apply to those transfer functions). I measured and plotted the DAMP_*_OUT to SRCL_OUT transfer functions. It might still be worth trying to change the P driving matrix while monitoring a P line to minimize the coupling to SRCL.
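A sketch of the offline version of this transfer-function estimate with GWpy follows; the _DQ channel names and FFT settings are assumptions, and the attached plots came from the injection templates, not this code:

```python
# Sketch: CSD/PSD estimate of the SR2 P damping -> SRCL transfer function
# over the SR2_M1_DAMP_P injection window logged below.
from gwpy.timeseries import TimeSeriesDict

start, end = 1375718856, 1375719066   # P injection window (GPS), from the log below
chans = ['H1:SUS-SR2_M1_DAMP_P_OUT_DQ', 'H1:LSC-SRCL_OUT_DQ']   # assumed stored names
data = TimeSeriesDict.get(chans, start, end)

inj, srcl = data[chans[0]], data[chans[1]]
tf = inj.csd(srcl, fftlength=32, overlap=16) / inj.psd(fftlength=32, overlap=16)

plot = abs(tf).plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlim(1, 10)      # injection band was 1-10 Hz
plot.show()
```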
Then I reduced the damping gains for SR2 and SR3 even further. We are now running with SR2_M1_DAMP_*_GAIN = -0.1 (was -0.5 for all except P, which was -0.2 since I reduced it yesterday) and SR3_M1_DAMP_*_GAIN = -0.2 (was -1). This has improved the SRCL motion a lot and also improved the DARM RMS. It looks like it also improved the range.
Tony has accepted this new configuration in SDF.
Detailed log below for future reference.
Time with SR2 P gain at -0.2 (but before that too)
from PDT: 2023-08-10 08:52:40.466492 PDT
UTC: 2023-08-10 15:52:40.466492 UTC
GPS: 1375717978.466492
to PDT: 2023-08-10 09:00:06.986101 PDT
UTC: 2023-08-10 16:00:06.986101 UTC
GPS: 1375718424.986101
H1:SUS-SR2_M1_DAMP_P_EXC butter("BandPass",4,1,10) ampl 2
from PDT: 2023-08-10 09:07:18.701326 PDT
UTC: 2023-08-10 16:07:18.701326 UTC
GPS: 1375718856.701326
to PDT: 2023-08-10 09:10:48.310499 PDT
UTC: 2023-08-10 16:10:48.310499 UTC
GPS: 1375719066.310499
H1:SUS-SR2_M1_DAMP_L_EXC butter("BandPass",4,1,10) ampl 0.2
from PDT: 2023-08-10 09:13:48.039178 PDT
UTC: 2023-08-10 16:13:48.039178 UTC
GPS: 1375719246.039178
to PDT: 2023-08-10 09:17:08.657970 PDT
UTC: 2023-08-10 16:17:08.657970 UTC
GPS: 1375719446.657970
All SR2 damping at -0.2, all SR3 damping at -0.5
start PDT: 2023-08-10 09:31:47.701973 PDT
UTC: 2023-08-10 16:31:47.701973 UTC
GPS: 1375720325.701973
to PDT: 2023-08-10 09:37:34.801318 PDT
UTC: 2023-08-10 16:37:34.801318 UTC
GPS: 1375720672.801318
All SR2 damping at -0.2, all SR3 damping at -0.2
start PDT: 2023-08-10 09:38:42.830657 PDT
UTC: 2023-08-10 16:38:42.830657 UTC
GPS: 1375720740.830657
to PDT: 2023-08-10 09:43:58.578103 PDT
UTC: 2023-08-10 16:43:58.578103 UTC
GPS: 1375721056.578103
All SR2 damping at -0.1, all SR3 damping at -0.2
start PDT: 2023-08-10 09:45:38.009515 PDT
UTC: 2023-08-10 16:45:38.009515 UTC
GPS: 1375721156.009515
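For reference, the UTC/GPS pairs above can be reproduced with gwpy's tconvert (a sketch; the gpstime command-line tool quoted later in this log gives the same numbers):

```python
# Sketch: convert between UTC strings and GPS seconds.
from gwpy.time import tconvert

print(tconvert('2023-08-10 16:45:38 UTC'))   # -> 1375721156 (start of the last block above)
print(tconvert(1375721156))                  # -> 2023-08-10 16:45:38 UTC
```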
If our overall goal is to remove peaks from DARM that dominate the RMS, reducing these damping gains is not the best way to achieve that. The SR2 L damping gain was reduced by a factor of 5 in this alog, and a resulting 2.8 Hz peak is now being injected into DARM from SRCL. This 2.8 Hz peak corresponds to a 2.8 Hz SR2 L resonance. There is no length control on SR2, so the only way to suppress length motion of SR2 is via the top-stage damping loops. The same can be said for SR3, whose gains were reduced by 80%. It may be that we are reducing sensor noise injected into SRCL from 3-6 Hz by reducing these gains, hence the improvement Gabriele has noticed.
Comparing a DARM spectrum before and after this change to the damping gains, you can see that the reduction in the damping gain did reduce DARM and SRCL above 3 Hz, but also created a new peak in DARM and SRCL at 2.8 Hz. I also plotted spectra of all dofs of SR2 and SR3 before and after the damping gain change showing that some suspension resonances are no longer being suppressed. All reference traces are from a lock on Aug 9 before these damping gains were reduced and the live traces are from this current lock. The final plot shows a transfer function measurement of SR2 L taken by Jeff and me in Oct 2022.
Since we fell out of lock, I took the opportunity to make SR2 and SR3 damping gain adjustments. I have split the difference on the gain reductions in Gabriele's alog. I increased all the SR2 damping gains from -0.1 to -0.2 (nominal is -0.5). I increased the SR3 damping gains from -0.2 to -0.5 (nominal is -1).
This is guardian controlled in LOWNOISE_ASC, because we need to acquire lock with higher damping gains.
Once we are back in lock, I will check the presence of the 2.8 Hz peak in DARM and determine how much different the DARM RMS is from this change.
There will be SDF diffs in observe for all SR2 and SR3 damping dofs. They can be accepted.
SR2 and SR3 damping gains changes that Elenna made have been accepted
The DARM RMS increases by about 8% with these new, slightly higher gains. These gains are a factor of 2 (SR2) to 2.5 (SR3) greater than Gabriele's reduced values. The 2.8 Hz peak in DARM is down by 21%.
This is a somewhat difficult determination to make, given all the nonstationary noise from 20-50 Hz, but it appears the DARM sensitivity is slightly improved from 20-40 Hz with a slightly higher SR2 gain. I randomly selected several times from the past few locks with the SR2 gains set to -0.1 and recent data from the last 24 hours where SR2 gains were set to -0.2. There is a small improvement in the data with all SR2 damping gains = -0.2 and SR3 damping gains= -0.5.
I think we need to do additional tests to determine exactly how SR2 and SR3 motion limit SRCL and DARM so we can make more targeted improvements to both. My unconfirmed conclusion from this small set of data is that while we may be able to reduce reinjected sensor noise above 3 Hz with a damping gain reduction, we will also limit DARM if there is too much motion from SR2 and SR3.
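Not the actual analysis, but a sketch of the kind of cumulative-RMS comparison behind numbers like the 8% above; the channel, FFT settings, and GPS spans are placeholders:

```python
# Sketch: compare DARM cumulative RMS between two epochs.
import numpy as np
from gwpy.timeseries import TimeSeries

def darm_cumulative_rms(start, end, channel='H1:GDS-CALIB_STRAIN'):
    """Return ASD frequencies and the RMS accumulated from the highest frequency down."""
    asd = TimeSeries.get(channel, start, end).asd(fftlength=8, overlap=4)
    df = asd.df.value
    cum_rms = np.sqrt(np.cumsum((asd.value[::-1] ** 2) * df))[::-1]
    return asd.frequencies.value, cum_rms

# placeholder epochs: one stretch with gains at -0.1, one with gains at -0.2
f, rms_low_gain = darm_cumulative_rms(1375718856, 1375719066)
_, rms_higher_gain = darm_cumulative_rms(1375720740, 1375721056)
print('total RMS change: %+.1f%%' % (100 * (rms_higher_gain[0] / rms_low_gain[0] - 1)))
```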
J. Kissel, A. Neunzert, E. Goetz, V. Bossilkov

As we continue the investigation into understanding why the noise in the region around 102.13 Hz gets SUPER loud at the beginning of nominal low noise segments, and why the calibration line seems to be reporting a huge amount of systematic error (see investigations in LHO:72064), Ansel has found that some new electronics noise appeared at the end station as of Saturday Aug 5 2023 around 05:30a PT, at a frequency extremely, and unluckily, close to the 102.13000 Hz calibration line -- something at 102.12833 Hz; see LHO:72105.

While we haven't yet ID'd the cause, and thus have no solution, we can still change the calibration line frequency to move it away from this feature in hopes that the two aren't beating together as terribly as they are now. I've changed the calibration line frequency to 104.23 Hz as of 21:13 UTC on Aug 09 2023. This avoids (a) LLO's similar frequency at 101.63 Hz, and (b) the pulsar band constraint: because the former frequency, 102.13 Hz, was near the upper edge of the 9.33 Hz wide [92.88, 102.21) Hz pulsar spin-down "non-vetoed" band, the new frequency, 104.23 Hz, skips up to the next 18.55 Hz wide "non-veto" band between [104.22, 122.77) Hz, according to LHO:68139.

Stay tuned
- to see if this band-aid fix actually helps, or just spreads out the spacing between the comb, and
- as we continue to investigate where this thing came from.

Other things of note. Since
- this feature is *not* related to the calibration line itself,
- this calibration line is NOT used to generate any time-dependent correction factors, so neither the calibration pipeline itself nor the data it produces is affected,
- this calibration line is used only to *monitor* the calibration systematic error, and
- this feature is clearly identified in an auxiliary PEM channel -- and that same channel *doesn't* see the calibration line,
we conclude that there *isn't* some large systematic error occurring; it's just the calculation that's getting spoiled and misreporting large systematic error. Thus, we make NO plan to do anything further on the calibration or systematic error estimate side of things. We anticipate that this now falls squarely on the noise subtraction pipeline's shoulders. Given that this 102.12833 Hz noise has a clear witness channel, and the noise creates non-linear nastiness, I expect this will be an excellent candidate for offline non-linear / NONSENS cleaning.

Here's the latest list of calibration lines:

| Freq (Hz) | Actuator | Purpose | Channel that defines Freq | Changes Since Last Update (LHO:69736) |
|---|---|---|---|---|
| 15.6 | ETMX UIM (L1) SUS | \kappa_UIM excitation | H1:SUS-ETMY_L1_CAL_LINE_FREQ | No change |
| 16.4 | ETMX PUM (L2) SUS | \kappa_PUM excitation | H1:SUS-ETMY_L2_CAL_LINE_FREQ | No change |
| 17.1 | PCALY | actuator kappa reference | H1:CAL-PCALY_PCALOSC1_OSC_FREQ | No change |
| 17.6 | ETMX TST (L3) SUS | \kappa_TST excitation | H1:SUS-ETMY_L3_CAL_LINE_FREQ | No change |
| 33.43 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC4_OSC_FREQ | No change |
| 53.67 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC5_OSC_FREQ | No change |
| 77.73 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC6_OSC_FREQ | No change |
| 104.23 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC7_OSC_FREQ | FREQUENCY CHANGE; THIS ALOG |
| 283.91 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC8_OSC_FREQ | No change |
| 284.01 | PCALY | PCALXY comparison | H1:CAL-PCALY_PCALOSC4_OSC_FREQ | No change |
| 410.3 | PCALY | f_cc and kappa_C | H1:CAL-PCALY_PCALOSC2_OSC_FREQ | No change |
| 1083.7 | PCALY | f_cc and kappa_C monitor | H1:CAL-PCALY_PCALOSC3_OSC_FREQ | No change |
| n*500+1.3 (n=[2,3,4,5,6,7,8]) | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC1_OSC_FREQ | No change |
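Just to make the band bookkeeping above explicit (numbers copied from the text; this is only arithmetic):

```python
# Quick numeric sanity check of the new line frequency choice.
new_freq = 104.23          # Hz, new PCALX systematic error line
old_freq = 102.13          # Hz, previous frequency
llo_line = 101.63          # Hz, LLO's comparable line
band = (104.22, 122.77)    # Hz, next "non-vetoed" pulsar band per LHO:68139

assert band[0] <= new_freq < band[1]
assert abs(new_freq - llo_line) > abs(old_freq - llo_line)
print(f"{new_freq} Hz sits {new_freq - band[0]:.2f} Hz above the lower band edge "
      f"and {new_freq - llo_line:.2f} Hz above LLO's line")
```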
Just a post-facto proof that this calibration line frequency change from 102.13 to 104.23 Hz drastically improved the symptom in which the response function systematic error, as computed from this ~100 Hz line, was huge for hours while the actual 102.128333 Hz line was loud. The attached screenshot shows two days before and two days after the change (again, on 2023-08-09 at 21:13 UTC). The green trace shows that there is no longer an erroneously reported large error as computed by the 102.13 and then 104.23 Hz lines at the beginning of nominal low noise segments.
Benoit, Ansel, Derek
Benoit noticed that for recent locks, the 102.13 Hz calibration line is much louder than typical for the first few hours of the lock. An example of this behavior is shown in the attached spectrogram of H1 strain data on August 5 - this is the first day this behavior appeared. Ansel noted that this feature includes a comb-like structure around the line that is only present in the H1:GDS-CALIB_STRAIN_NOLINES channel and not H1:GDS-CALIB_STRAIN (see spectra for CALIB_STRAIN and CALIB_STRAIN_NOLINES on Aug 5). This issue is also visible in the PCAL trends for the 102.13 Hz line.
We are not sure if the excess noise near 102.13 Hz is from the calibration line itself or another noise source that is near the line. However, the behavior has been present for every lock since 12:30 UTC on August 5 2023.
FYI,
$ gpstime Aug 05 2023 12:30 UTC
PDT: 2023-08-05 05:30:00.000000 PDT
UTC: 2023-08-05 12:30:00.000000 UTC
GPS: 1375273818.000000
so... this behavior seems to have started at 5:30a local time on a Saturday. Therefore *very* unlikely that the start of this issue is intentional / human change driven.
The investigation continues....
making sure to tag CAL.
Other facts and recent events:
- Attached are 2 screenshots that show the actual *digital* excitation is not changing with time in any way.
:: 2023-08-08_H1PCALEX_OSC7_102p13Hz_Line_3mo_trend.png shows the specific oscillator -- PCALX's OSC7, which drives the 102.13 Hz line -- via the EPICS channel version of its output. The minute trend shows the max, min, and mean of the output, and there's no change in amplitude.
:: 2023-08-08_H1PCALEX_EXC_SUM_3mo_trend.png shows a trend of the total excitation sum from PCAL X. This also shows *no* change in time in amplitude.
Both trends show the Aug 02 2023 change-in-amplitude kerfuffle I caused, which Corey found and a bit later rectified -- see LHO:71894 and subsequent comments -- but that was done, over with, and solved, definitely by Aug 03 2023 UTC, and unrelated to the start of this problem.
It's also well after I installed new oscillators and rebooted the PCALX, PCALY, and OMC models on Aug 01 2023 (see LHO:71881).
The front-end version of the calibration's systematic error at 102.13 Hz also shows the long, time-dependent issue -- this will allow us to trend the issue against other channels.
Folks in the calibration group have found that the online monitoring system for the
- overall DARM response function systematic error
- (absolute reference) / (Calibrated Data Product) [m/m]
- ( \eta_R ) ^ (-1)
- (C / 1+G)_pcal / (C / 1+G)_strain
- CAL-DELTAL_REF_PCAL_DQ / GDS-CALIB_STRAIN
(all different ways of saying the same thing; see T1900169) at each PCAL calibration line frequency -- the "grafana" pages -- is showing *huge* amounts of systematic error during the times when the amplitude of the line is super loud.
Though this metric is super useful because it makes it dreadfully obvious that things are going wrong, it is not stored in any normal frame structure, so you can't compare it against other channels to find out what's causing the systematic error.
However -- remember -- we commissioned a front-end version of this monitoring during ER15 -- see LHO:69285.
That means the channels
H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ << the frequency of the monitor
H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM << the magnitude of the systematic error
H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG << the phase of the systematic error
tell you (what's supposed to be***) equivalent information.
*** One might say that "what's supposed to be" really means "roughly equivalent," for the following reasons:
(1) because we're human, the one system is displaying the systematic error \eta_R, and the other is displaying the inverse ( \eta_R ) ^ (-1)
(2) Because this is early-days in the front-end system, it uses the "less complete" calibrated channel CAL-DELTAL_EXTERNAL_DQ rather than the "fully correct" channel GDS-CALIB_STRAIN
But because the problem is so dreadfully obvious in these metrics, even though they're only *roughly* equivalent, you can see the same thing.
In the attached screenshot, I show both metrics for the most recent observation stretch, between 10:15 and 14:00 UTC on 2023-Aug-09.
Let's use this front-end metric to narrow down the problem via trending.
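For offline trending, one could also compute a rough version of the same ratio by demodulating both channels at the line frequency; the 60 s averaging window and start time below are placeholders, and a DELTAL (m) vs. strain (dimensionless) comparison would still need the arm-length scaling:

```python
# Hedged sketch: single-frequency demodulation of the PCAL reference and the
# calibrated strain channel, then their ratio at the calibration line.
import numpy as np
from gwpy.timeseries import TimeSeries

def demod(channel, start, end, f0):
    """Complex amplitude of 'channel' at frequency f0 over [start, end)."""
    data = TimeSeries.get(channel, start, end).detrend()
    t = data.times.value - data.times.value[0]
    return 2 * np.mean(data.value * np.exp(-2j * np.pi * f0 * t))

start, end, f0 = 1375717978, 1375718038, 104.23   # placeholder 60 s stretch
pcal = demod('H1:CAL-DELTAL_REF_PCAL_DQ', start, end, f0)
strain = demod('H1:GDS-CALIB_STRAIN', start, end, f0)
ratio = pcal / strain
print(f"|ratio| = {abs(ratio):.3g}, phase = {np.degrees(np.angle(ratio)):.1f} deg")
```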
There appears to be no change in the PCALX analog excitation monitors either. Attached is a trend of some key channels in the optical follower servo -- the analog feedback system that serves as intensity stabilization and excitation power linearization for the PCAL laser light that gets transmitted to the test mass -- the actuator of which is an acousto-optic modulator (an AOM). There seem to be no major differences in the max, min, and mean of these signals before vs. after these problems started on Aug 05 2023.
H1:CAL-PCALX_OFS_PD_OUT_DQ
H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUT_DQ
I believe this is caused by the presence of another line very close to the 102.13 Hz pcal line. This second line is present at the start of a lock stretch but seems to go away as the lock stretch continues. I have attached a plot showing a zoom-in on an ASD around 102.1-102.2 Hz right after the start of a lock stretch (orange), where the second peak is evident, and well into a lock stretch (blue), where the PCAL line is still present but the second peak right below it in frequency is gone. This ASD is computed using an hour of data for each curve, so we can get the needed resolution for these two peaks.
I don't know the origin of this second line. However, a quick fix to the issue could be moving the PCAL line over by about a Hz. The second attached plot shows that the spectrum looks pretty clean from 101-102 Hz, so somewhere in there would probably be okay as a new location for the PCAL line.
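A quick note on why resolving this takes so much data; the arithmetic below just restates the resolution requirement:

```python
# Frequency-resolution check for separating the two nearby peaks.
f_cal, f_noise = 102.13000, 102.12833      # Hz
separation = abs(f_cal - f_noise)          # ~1.67 mHz
min_fftlength = 1.0 / separation           # seconds per FFT needed to split them at all
print(f"separation = {separation * 1e3:.2f} mHz -> fftlength >= {min_fftlength:.0f} s, "
      "so an hour of data gives only a handful of averages")
```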
Since it looks like the additional noise is at 102.12833 Hz, I did a quick check in Fscan data from Aug 5 for channels where there is high coherence with DELTAL_EXTERNAL at 102.12833 Hz but *not* at 102.13000 Hz. This narrows it down to just a few channels:
(lines git issue opened as we work on this.)
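Outside of the Fscan pipeline, a similar spot check is possible with GWpy coherence; the witness channel below is only a placeholder for whichever channels the Fscan search flagged:

```python
# Sketch: coherence of a candidate witness with DELTAL_EXTERNAL, read off at
# the noise frequency vs. the calibration line frequency.
from gwpy.timeseries import TimeSeriesDict

start, end = 1375273818, 1375277418   # ~1 hour starting Aug 5 2023 12:30 UTC
chans = ['H1:CAL-DELTAL_EXTERNAL_DQ',
         'H1:PEM-EX_MAG_EBAY_SUSRACK_X_DQ']   # witness channel is a placeholder
data = TimeSeriesDict.get(chans, start, end)
coh = data[chans[0]].coherence(data[chans[1]], fftlength=600, overlap=300)
for f in (102.12833, 102.13000):
    idx = int(round(f / coh.df.value))
    print(f"coherence at {f:.5f} Hz: {coh.value[idx]:.3f}")
```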
As a result of Ansel's discovery, and conversation on the CAL call today -- I've moved the calibration line frequency from 102.13 to 104.23 Hz. See LHO:72108.
This line may have appeared in the previous lock the day before (Aug 4). The daily spectrogram for Aug 4 shows a line near 100 Hz starting at 21:00 UTC.
Looking at alogs leading up to the time Derek notes above, I noticed that Gabriele retuned and tested new LSC FF. This change may be related to this new peak. Remembering some issues we had recently where DHARD filter impulses were ringing up violin modes, I checked the new LSC FF filters and how they are engaged in the guardian. Some of them have no ramp time, and the filter bank is turned on immediately along with the filters in the guardian. I have no idea why that would cause a peak at 102 Hz, but I updated those filters to have a 3 second ramp.
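One common pattern for this kind of ramp-time fix (a sketch only; the bank name is a placeholder and this is not necessarily how the guardian or foton files were actually edited) is to set the bank's TRAMP before changing its gain, so the change ramps instead of stepping:

```python
# Hedged sketch of setting a filter bank ramp time via EPICS.
from ezca import Ezca   # inside Guardian code, 'ezca' is already provided

ezca = Ezca(ifo='H1')                 # constructor arguments may differ by setup
ezca['LSC-SRCLFF1_TRAMP'] = 3         # seconds; placeholder filter bank name
ezca['LSC-SRCLFF1_GAIN'] = 1.0        # this gain change now ramps over 3 s
```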
Reloaded the H1LSC model to load in Elenna's filter changes
Now that the calibration line has been moved, the comb-like structure at the calibration line frequency is no longer present (checked in the CLEAN channel).
We can also see the shape of the 102.12833 Hz line much more clearly without the overlapping calibration line. I have attached a plot for reference on the width and shape.
As discussed in today's commissioning meeting, I checked TMSX and ETMX movement for a kick during locking and couldn't see anything suspicious. I did find some increased motion/noise every 8Hz in TMSX starting 1s into ENGAGE_SOFT_LOOPS, when ISC_LOCK isn't explicitly doing anything; plot attached. However, this noise was present prior to Aug 4th (July 30th attached).
TMS is suspicious since Betsy found that the TMSs have violin modes at ~103-104Hz.
Jeff draws attention to 38295, showing modes of the quad blade springs above 110Hz, and 24917, showing quad top wire modes above 300Hz.
Elenna notes that with the calibration lines off (as we are experimenting with for the current lock), we can see this 102Hz peak at the ISC_LOCK state ENGAGE_ASC_FOR_FULL_IFO. We were mistaken.
For documentation purposes: this problem has now been solved, with more details in 72537, 72319, and 72262.
The cause of this peak was a spurious, narrow 102 Hz feature in the SRCL feedforward filter that we didn't catch when the filter was made. This has been fixed, and the cause of the mistake is documented in the first alog listed above so that we hopefully don't repeat this error.