Closes 26262, last completed in alog 74035
Looks like there were a couple glitches for both EX fans ~4 days ago, but they have since leveled out. There were a couple glitches over the past couple days for the CS fan vibrometers as well.
I increased the amplitude gain of the TST SUS line at 17.6 Hz from 0.085 to 0.17 using the same line command from LHO:71947. The change took place at GPS 1383694441. This change was in response to LHO:74113 and LHO:74136. It's meant to be a temporary measure until we can try the new MICH FF without the 17.7 Hz zero/pole pair (LHO:74139). I've attached a scope of the ETMX L3 SUS line uncertainty to show that it's now down to about 0.5%, which is below the 1% threshold implemented by the GDS pipeline (LHO:72944).
Here is the command I used:
val=0.17 && caput H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN $val && caput H1:SUS-ETMX_L3_CAL_LINE_SINGAIN $val && caput H1:SUS-ETMX_L3_CAL_LINE_COSGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN $val
Output:
Old : H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN 0.085
New : H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN 0.17
Old : H1:SUS-ETMX_L3_CAL_LINE_SINGAIN 0.085
New : H1:SUS-ETMX_L3_CAL_LINE_SINGAIN 0.17
Old : H1:SUS-ETMX_L3_CAL_LINE_COSGAIN 0.085
New : H1:SUS-ETMX_L3_CAL_LINE_COSGAIN 0.17
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN 0.085
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN 0.17
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN 0.085
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN 0.17
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN 0.085
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN 0.17
tagging DetChar: please be on the lookout for any artifacts that may have been caused by increasing this ETMX L3 line at 17.6 Hz.
lscparams.py has been updated with the new SUS ETMX L3 gain
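For context on the numbers above: doubling the line gain roughly halving the uncertainty is consistent with the fractional uncertainty of a demodulated calibration line scaling inversely with the line's SNR in DARM. A minimal sketch with hypothetical SNR numbers (not measured values):

```python
def line_uncertainty(line_snr: float) -> float:
    """Approximate fractional uncertainty of a demodulated calibration-line
    measurement: sigma ~ 1/SNR for a line well above the noise floor.
    Simplified model for illustration only."""
    return 1.0 / line_snr

# Hypothetical SNRs: doubling the line amplitude doubles the SNR in DARM,
# halving the uncertainty -- matching the ~1% -> ~0.5% change seen here.
print(line_uncertainty(100.0))  # 0.01  (1%)
print(line_uncertainty(200.0))  # 0.005 (0.5%)
```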
TITLE: 11/11 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
Earthquake mode activated twice today for 3 different 5.0-magnitude earthquakes out of Iceland and Haiti.
18:40 UTC Dropped out of commissioning during an earthquake; the SQZ manager fell out of its nominal state,
with the following message in the SQZ_MANAGER log:
2023-11-10_18:40:38.829674Z SQZ_MANAGER [FREQ_DEP_SQZ.run] Unstalling SQZ_FC
2023-11-10_18:40:38.834696Z SQZ_MANAGER [FREQ_DEP_SQZ.run] FC-IR UNLOCKED!
2023-11-10_18:40:38.895063Z SQZ_MANAGER JUMP target: SQZ_READY_IFO
Tagging SQZ
Back to observing at 18:47 UTC.
Dropped out of Observing at 19:06 for some commissioning while LLO is relocking.
Back to Observing at 19:37 UTC.
21:00 LVEA transitions to LASER HAZARD
21:09 UTC Dropped out of Observing to do commissioning.
Lost lock at 21:13 UTC due to commissioning activities.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=74141
Multiple earthquakes above 5.0 struck within a short period of time just after the lockloss.
21:42 UTC Starting Initial Alignment while earthquakes roll through.
22:09 Relocking started
Lockloss due to what looks to be ground motion, seen on the peakmon channel H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON.
Robert came back from the LVEA to tell us that HAM1 is making a strange sound, like a picomotor stuck on.
Robert and a few others went back into the LVEA and called me while standing next to HAM1; I could hear the noise over the phone. Robert says HEPI is singing at about 1000 Hz. Gerardo asked me to make sure no one is running an excitation, so I killed all test points from the CDS Overview screen button.
Camilla found a channel where she could see excess noise from HAM1.
Robert has found what he believes to be the source of the noise; it seems to be the HEPI pump. He has damped it using foam.
Tagging SEI
See Camilla's alog:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=74140
LVEA is still LASER HAZARD
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:55 | SEI | Michel | Mid X & Y | N | Searching for feedthrough parts | 17:27 |
| 17:51 | VAC | Travis, Janos, +2 | Mid Y | N | Tour of Mid Y for CIT visitors | 18:51 |
| 19:06 | Chilled water system | Eric | Mid X | N | Working on chilled water system | 19:23 |
| 19:07 | PEM | Robert | LVEA | Y | Getting ready to do some PEM tests | 19:39 |
| 19:08 | OPS | Ryan C | LVEA | Y | LASER HAZARD transition | 19:33 |
| 21:11 | PEM | Robert | LVEA | Y | PEM measurements | 21:39 |
| 21:15 | TCS | Camilla | CTRL RM | N | Making TCS changes | 21:25 |
| 21:26 | PEM | Robert | End X | N | Getting shaker | 21:56 |
| 22:05 | PEM | Robert | LVEA | Y | Setting up more measurements + checking on shaker and HEPI sound | 00:05 |
| 22:06 | VAC | Gerardo, +3 | LVEA | Y | LVEA tour | 22:06 |
| 22:18 | VAC | Jordan, +4 | FTCE + overpass | N | CIT tour | 22:48 |
| 22:25 | VAC | Gerardo + Mitch | LVEA | Y | Vacuum prep work | 22:55 |
| 23:15 | CDS | Jonathan | Mid Y | N | Looking at a switch configuration | 23:34 |
TITLE: 11/11 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 6mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.47 μm/s
QUICK SUMMARY:
- IFO is locked but ongoing commissioning work for the next ~2 hours or so
- Violin modes are still high, particularly the usual ITMY 5/6 modes
- CDS/DMs ok
- LVEA is LASER HAZARD
With approval of the cleaning review committee, NonSENS cleaning is now set up to run for high-frequency laser noise cleaning during Observing.
Attached is the effect right now, less than half an hour into the lock; it should improve over the next few hours.
The NOISE_CLEANING guardian takes care of turning it on, and I've accepted SDFs in Observe.snap so that we should be good to go also for the next lock.
J. Kissel, O. Patane
Oli and I have finished building up the infrastructure on the x1susquad and x1sustriple computers with front-end models, MEDM screens, EPICS settings, and filter files for upcoming dynamical testing of the Bigger Beam Splitter Suspension (BBSS) and the HAM Relay Triple Suspension (HRTS). These should now behave as though they were a fully-functional part of the production system: the team should be able to take calibrated ASDs of OSEMs as well as full, driven transfer functions as normal "health check" transfer functions. This completes all the critical "before Jeff leaves" actions from the controls testing and commissioning checklist, T2300376. Below, I outline the work we finished today.
- Finished designing and debugging the MEDM screens started yesterday (continuing the work described in LHO:74097). The final results are committed to the userapps repo, under /opt/rtcds/userapps/release/:
    sus/common/medm/
        x1suslo1_overview_macro.txt rev 26712
        x1susbs_overview_macro.txt rev 26712
    sus/common/medm/bbss/
        SUS_CUST_BBSS_OVERVIEW.adl rev 26712 (and all referenced subscreens SUS_CUST_BBSS*.adl)
    sus/common/medm/hxts/
        SUS_CUST_HRTS_OVERVIEW.adl rev 26712 (and all referenced subscreens SUS_CUST_H?TS*.adl)
- Installed basis transformation matrices as defined in SWG:12125, using
    /ligo/svncommon/SeiSVN/seismic/Common/MatlabTools/
        fill_matrix_values rev 9790
    and /ligo/svncommon/SusSVN/sus/trunk/
        BBSS/Common/MatlabTools/make_susbbss_projections.m rev 11666
        HRTS/Common/MatlabTools/make_susbbss_projections.m
- Installed the correct actuator sign gains in the COILOUTF banks, given the magnet arrangements in the BSFM and HRTS controls arrangement posters (E1100108 and E2300341) and the sign conventions established in T1200015.
- Installed OSEMINF sensor calibration filters. These are standard for every OSEM signal chain, so I just hand-copied them over from the H1 production system to all the BBSS BS and HRTS LO1 OSEMINF filter banks:
    - FM1 "10:0.4" zpk([10],[0.4],1,"n")
    - FM5 "to_um" zpk([],[],0.0233333,"n")
- Installed the appropriate COILOUTF actuator / coil driver frequency response compensation for the top masses on the test stands. The QUAD (SUS BSC) test stand and (SUS) Triple test stand are driving their top masses with QUAD and Triple TOP drivers. Although these drivers are standard production electronics with the *ability* to switch their frequency response, there's no binary IO switch-ability on these test stands to do it remotely like there is on the production system. So we installed all of the frequency response compensation for a TOP driver:
    - FM2 SimLPM1 zpk([9.99999],[1],1,"n")
    - FM6 AntiAcqM1 zpk([0.9],[30.9996],1,"n")
    - FM7 AntiLPM1 zpk([1],[9.99999],1,"n")
  Because the future production BBSS system *will* have switch-ability, we've installed the usual digital infrastructure to remotely switch both the analog and digital filters. But since the analog filter can't switch, and is thus in its default configuration with the LP OFF (State 1), the digital system *must always remain in State 1.*
  The HRTS, on the other hand, will in the future production system be driven by HAM-A drivers with their LP switch jumpered to the ON position, and there's no plan for remote control. As such, we ripped out the remote switching infrastructure, and we've hard-coded the text on the HRTS MEDM screens to suggest that the COILOUTF is in STATE 2. The HAM-A drivers also have a totally different frequency response than the TOP drivers. So -- under the hood, on the test stand only, we should leave the COILOUTF in the STATE 1 configuration (FM2, FM6, and FM7 ON) to correctly compensate the configuration of the TOP driver.
- Installed the modern standard watchdog system, including BANDLIM and RMSLP filters. These are standard for every OSEM signal chain, so I just hand-copied them over from the H1 production system to all the BBSS BS and HRTS LO1 BANDLIM and RMSLP filter banks:
    - BANDLIM FM1 "acBandLim" zpk([0;8192;-8192],[0.1;9.99999;9.99999],10.1002,"n")
    - RMSLP FM "10sLP" butter("LowPass",4,0.1)
  Then arbitrarily set the threshold to 100; this can be changed to be however loose or tight you find is helpful / protective for the hardware as you test.
- Installed rudimentary, robust damping filters that should be stable for any suspension type. For test stand dynamical testing, we don't care about sensor noise re-injection. All we care about is "are the loops stable?" and "does the suspension damp in ~1 sec after experiencing an impulse?" As such, we installed a simple, generic damping filter with the same non-specific frequency response for every degree of freedom:
    - FM1 "0:25,25" zpk([0],[25;25],1,"n") -- simple velocity damping: a zero at 0 Hz and a two-pole roll-off at 25 Hz
    - FM5 "from_um" zpk([],[],43.478,"n") -- standard "anti" calibration of the OSEM sensors == 1 / "to_um"
    - FM10 "RLP11" zpk([1+i*59.9917;1-i*59.9917],[5.73577+i*8.19152;5.73577-i*8.19152;1.71429+i*23.9387;1.71429-i*23.9387],1,"n") -- a non-aggressive, low-Q, elliptic extra roll-off filter (stolen from Elenna and Sheila's HAUX damping loop design)
  BUT we did NOT install a "gain" filter in FM4 as is standard, nor a standard -1.0 in the DAMP_${DOF}_GAIN EPICS field. This damping gain should be determined and set empirically; just make sure the gain is negative (and that'll be the easiest thing to do, rather than create a model of what the "best" gain should be).
- Saved all the settings in the right configuration in the test stand SDF system.
Next steps -- as indicated in T2300376:
- Create BBSS/X1/BS/SAGM1/Data/ and HRTS/X1/${LO1, LO2, LO3, OBS}/SAGM1/Data/ folders in the SusSVN.
- Create DTT templates to take "health check" transfer functions, and save them into the above-mentioned new data folders.
    :: For BBSS, probably good to copy from the BSFM templates.
    :: For HRTS, *also* copy from BSFM templates, because you'll get the F1 F2 F3 LF RT SD OSEMINF channels, BUT don't drive anything until you tune the amplitude below.
- Actually assemble, build, and hook up the OSEMs of a real suspension.
- Measure open light current values, and install the appropriate compensating OFFSETs and GAINs in the OSEMINF banks.
- Do crude "push around the suspension with OFFSETs" tests in the TEST banks to make sure the DAC is actually driving the suspension, and that all of the degrees of freedom are doing the right thing and the sign conventions are all correct:
    :: "Plus L OFFSET in the TEST L bank creates more positive L motion in the DAMP L IN1 channel"
    :: "Plus V OFFSET in the TEST V bank creates more positive V motion in the DAMP V IN1 channel"
    :: "Plus T OFFSET in the TEST T bank creates more positive T motion in the DAMP T IN1 channel"
- Drive transfer functions using the DTT templates.
    :: You'll likely have to tune the frequency response and amplitude of the template to get good coherence.
- Save the transfer functions -- making sure to be good librarians about filenames associated with the suspension.
    :: Even though the channel names in the DTT template are all going to *say* the optic is LO1, if you're hooking up LO2, and then LO3 later, save the files with that name!
    :: Also, be diligent about saving file names with the correct frequency vector (e.g. 0p01to50Hz for a careful 0.01 Hz BW measurement, vs. 0p03to50Hz for a fast 0.03 Hz BW measurement).
- Export the data to text files for reading into matlab.
- Create the matlab infrastructure to process individual DTT transfer functions against the matlab dynamical model, and save the results to a BBSS/X1/BS/SAGM1/ and HRTS/X1/${LO1, LO2, LO3, OBS}/SAGM1/Results/ directory structure.
- Create the matlab infrastructure to compare multiple measurements of the BBSS, and separately of the HRTS, against each other.
Jeff should be back by the time you're done with all that.
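For anyone repeating the loop-stability checks above, the shape of the generic FM1 "0:25,25" damping filter can be sketched quickly. The pure-Python sketch below evaluates an s-plane zero-pole-gain response with roots given in Hz; note that foton's "n" flag also applies a gain normalization that is omitted here, so only the shape, not the absolute scale, is meaningful:

```python
import cmath

def zpk_response(zeros_hz, poles_hz, gain, f_hz):
    """Evaluate a zero-pole-gain response at frequency f_hz, with real
    roots given in Hz (s-plane roots at s = -2*pi*f_root). Simplified
    stand-in for foton's zpk(..., "n"); the "n" normalization is omitted."""
    s = 1j * 2 * cmath.pi * f_hz
    num = gain + 0j
    for z in zeros_hz:
        num *= s + 2 * cmath.pi * z
    den = 1.0 + 0j
    for p in poles_hz:
        den *= s + 2 * cmath.pi * p
    return num / den

# FM1 "0:25,25": a zero at 0 Hz (differentiator, i.e. velocity damping)
# and a double pole at 25 Hz to roll the drive off above the band of interest.
shape = [abs(zpk_response([0], [25, 25], 1, f)) for f in (0.1, 25.0, 500.0)]
# Response rises ~f below 25 Hz and falls ~1/f above it, peaking near 25 Hz.
assert shape[0] < shape[1] and shape[2] < shape[1]
```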
Lockloss from NOMINAL_LOW_NOISE
Lost lock at 21:13 UTC due to commissioning activities. Likely a button related to the L2 lock was pushed that unlocked us.
As the MICH FF change on Oct 12th affected the Calibration team's sensing of KAPPA_TST (74136), I copied our current MICH FF FM5 into FM6 without the 17.7 Hz zero/pole pair. The TF didn't look too different (plots attached), so if we get some thermalized commissioning time we can try this FM6 and compare it to FM5 using /lsc/h1/scripts/feedforward/MICH_excitation_comparison.xml.
Ideally we would install a new MICH FF filter designed with this 17.7 Hz feature notched out; that work is pending.
alog for Tony, Robert, Jim, Gerardo, Mitchell, Daniel
At 22:16 UTC the HAM1 HEPI started "ringing". Robert heard this when he was in the LVEA as a 1000 Hz "ringing" that he tracked to HAM1. Plot attached.
Gerardo, Mitchell, and Robert investigated the HEPI pumps in the mechanical room mezzanine and didn't find anything wrong. Robert physically damped the part of HEPI that was vibrating with some foam around 22:40 UTC and the "ringing" stopped, with readbacks going back to nominal background levels. It can be seen clearly in the H1:PEM-CS_ACC_HAM3_PR2_Y_MON plot as well as the H1:HPI-HAM1_OUTF_H1_OUTPUT channels, plots attached. It must be down-converting to be visible in the 16 Hz HEPI channels. The HAM1 vertical IPSINF channels also looked strange, plot attached.
Jim checked the HEPI readbacks are now okay.
We don't know why it started. The current plan is that it's okay now, and more thorough checks will be done on Tuesday.
Snapshot of peak mon right after a lockloss during this time.
Lockloss page from the lockloss that happened during this event.
Robert reports it at 1 kHz, but it seems there are a number of features at 583, 874, and 882 Hz. Can't tell if there are any higher, because HEPI is only a 2k model. The attached plot shows the H1 L4C ASDs: red is from a couple weeks ago, blue is when HAM1 was singing, pink is after Robert damped the hydraulic line. Seems like the HAM1 motion is back to what it was a couple weeks ago. Not sure what this was; I'll look at the chamber when I get a chance on Monday or Tuesday, unless it becomes an emergency before then...
The second set of ASDs compares all the sensors during the "singing" to the time in October. Red and light green are the October data, blue and brown are the choir; the top row are the H L4Cs, the bottom row are the V. The ringing is generally loudest in the H sensors, though H2 is quieter than the other 3 H sensors.
Noticed that the CO2 lasers weren't exactly outputting 0 W at their NO_OUTPUT settings (plot attached), so I searched for home on both rotation stages. This brought them back much closer to zero. Daniel reminds us that "search for home" needs to be done after every Beckhoff reboot; unsure if we did it after Beckhoff came back on Tuesday.
I further adjusted the CO2X calibration as it hadn't been getting close to 1.7 W since we touched it on Tuesday (74044). TJ's bootstrapping was getting it closer to 1.7 W, but it works best when we start with a close power.
CO2Y rotation stage weirdness: on my final test, asking CO2Y to go to 1.7 W, it jumped to -700 degrees! I then asked it to go back to minimum power, which it slowly did. Very strange. A better way to take it back might have been to ask it to "search for home", but I remembered that clicking "abort" often crashes Beckhoff! Searched for home after this. Plot attached.
Before bootstrapping, CO2Y had only been getting to 1.5 W injected with 1.7 W requested. I adjusted the calibration (SDF attached) to bring this closer to 1.7 W.
We've noticed this before, but the CO2Y power meter reading drops when the rotation stage stops moving; maybe the RS slides back after it's finished rotating, changing the power by ~0.03 W. Plot attached. The CO2Y rotation stage is noisier than CO2X. We should check that we have a spare RS on hand.
This is a follow-up on LHO:74113. Indeed, there is a high-Q feature in LSC_MICHFF FM5 (10-12-23-EY) right at 17.7 Hz that is coupling into DARM and conflicting with the LINE3 SUS line at 17.6 Hz (see attached). It can also be seen in the LSC FF posted in LHO:73428. The resonant peak is about an order of magnitude higher than before the filter changes on October 12. Options forward include:
1. Revert the MICH FF until the 17.7 Hz feature can be removed.
2. Increase the L3 SUS line even more to accommodate it.
Addon: Camilla asked me what physically is causing the peak. From talking to JoeB, Vlad, and Gabriele: it's caused by the beamsplitter bounce mode V3. It's listed in https://awiki.ligo-wa.caltech.edu/wiki/Resonances/LHO. Oddly, it's listed there as being at 17.522 Hz, but the alog the record points to (via a wrong link), LHO:49643, pegged it right at 17.79 Hz(!)
Joe & Vlad: We should notch the bounce mode in MICH to avoid driving it during the excitation, e.g. when retuning the MICH FF.
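To illustrate the notch Joe & Vlad suggest: a second-order notch centered on the bounce mode passes the MICH excitation essentially untouched away from 17.7 Hz while nulling the drive right at the mode. The sketch below uses the textbook notch transfer function with an arbitrary Q, not the filter actually installed at H1:

```python
import cmath

F_BOUNCE = 17.7  # Hz, beamsplitter bounce mode frequency from this entry

def notch_mag(f_hz, f0=F_BOUNCE, q=10.0):
    """Magnitude of a standard second-order notch
    H(s) = (s^2 + w0^2) / (s^2 + (w0/Q)*s + w0^2).
    Illustrative only; Q = 10 is an arbitrary choice."""
    w0 = 2 * cmath.pi * f0
    s = 1j * 2 * cmath.pi * f_hz
    return abs((s**2 + w0**2) / (s**2 + (w0 / q) * s + w0**2))

# Unity gain away from the notch, a deep null right at the bounce mode.
assert notch_mag(17.7) < 1e-9
assert abs(notch_mag(1.0) - 1.0) < 0.01
assert abs(notch_mag(100.0) - 1.0) < 0.01
```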
Fri Nov 10 10:06:11 2023 INFO: Fill completed in 6min 7secs
Travis confirmed a good fill curbside.
BSC high freq noise is elevated for these sensor(s)!!!
ITMX_ST2_CPSINF_H3
ITMX_ST2_CPSINF_V1
But this is a trend going back several weeks already.
I made focused shutdowns yesterday of just one or a few fans at a time. The range and spectra were not strongly affected, and I did not find a particularly bad fan.
Nov. 9 UTC
CER ACs
Off 16:30
On 16:40
Off 16:50
On 17:00
Turbine shutdowns
SF4 off: 17:10
SF4 on: 17:21
SF3 and 4 off: 17:30
SF3 and 4 back on: 17:40
SF3 off: 17:50
SF3 back on: 18:00
SF1 and 4 off: 18:30
SF1 and 4 on: 18:35
SF1 and 4 off: 19:00
SF1 and 4 on: 19:18
SF1 and 3 off: 19:30
SF1 and 3 back on: 19:40
SF1 off: 19:50
SF1 back on: 20:00
SF1 and 4 off 20:10
SF1 and 4 back on: 20:20
SF3 off: 22:50
SF3 on: 23:00
SF3 off: 3:41
SF3 on: 3:51
Nov 10 UTC
SF5,6 off 0:00
SF5,6 back on 0:10
TITLE: 11/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.52 μm/s
QUICK SUMMARY:
H1 Is locked in Nominal_Low_Noise and Observing.
The ITMY mode 5 & 6 violins are elevated, ~1e-15 on DARM, but they are trending downward.
The latest settings are still working fine; both IY05 and IY06 are going down as shown in the attached plot (which shows both narrow and broad filters along with the drive output). However, it will take some time before they get down to their nominal level.
ITMY08 is also damping down nicely.
TITLE: 11/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
- Lockloss @ 0:40 UTC, cause unknown
- Took the opportunity to reload the HAM2 ISI GDS TP while we were down
- Alignment looked horrible, so will be running an initial alignment
- H1 lost lock at 25 W, DCPD saturation
- Back to NLN/OBSERVE @ 3:55, attached are some ALS EY SDFs which I accepted
- Superevent S231110g
- 4:06 - inc 5.9 EQ from Indonesia
- Lockloss @ 5:27 - cause unknown
- Relocking was automated, reached NLN/OBSERVE @ 6:46 UTC
LOG:
No log for this shift.
Looking at the DCPD signals during the 25 W state before this evening's lockloss, during the lockloss, and during the next 25 W relock: they looked typical (~8-12k) during the lock that was lost, but there was a weird glitch about halfway through the state's duration, and they were diverging when the final glitch and lockloss occurred. The relock after the 25 W lockloss had higher DCPD signals, ~40k higher than usual, and the previous lock also had higher-than-usual DCPDs at this state, which were damped down over the long lock. So something caused the violin modes (particularly ITMY modes 5/6 and 8, the problem children) to ring up during the lock acquisition following the 25 W lockloss; they didn't have enough time to damp down during that lock, so after the 5:27 lockloss they were still high during acquisition.
JoeB, M. Wade, A. Viets, L. Dartez
Maddie pointed out an issue with the computation of kappa_TST starting on 10/13. After looking into it a bit further, Aaron found a noisy peak at 17.7 Hz, just above the kappa_TST line (which is very close, at 17.6 Hz). It turns out that the peak has been there for a while, but it got louder on 10/13 and has been interfering with the kappa_TST calculation since then. For a quick reference, take a look at the top left plot on the calibration summary pages for October 12 and October 13.
Taking a look at the DARM spectra for those days, JoeB noticed a glitch near 17.7 Hz on 10/12 at about 23:30 UTC (darm_spect_oct12_2023.png). Interestingly, he noted that it looks like the 17.7 Hz line, which was present before the glitch, got louder after the glitch (darm_spect_oct13_2023.png). I've attached an ndscope screenshot of the moment that the glitch happens (darm_17p7hz_glitch_kappaTST_uncertainty.png). Indeed, there is a glitch at around 23:30 on 10/12, and it is seen by the kappa_TST line and the line's uncertainty. Interestingly, after the glitch the TST line uncertainty stayed high by about 0.6% compared to its value before the glitch occurred. This 0.6% increase pushed the mean kappa_TST line uncertainty above 1%, which is also the threshold applied by the GDS pipeline to determine when to begin gating that metric (see comment LHO:72944 for more info on the threshold itself).
It's not clear to us what caused the glitch or why the uncertainty stayed higher afterwards. I noticed that the glitch at 23:30 was preceded by a smaller glitch a few hours earlier. Oddly, the mean kappa_TST uncertainty also increased (and stayed that way) then too. There are three distinct "steps" in the kappa_TST uncertainty shown in the ndscope I attached. I'll note that I initially looked for changes to the 17.7 Hz line before and after the TCS changes on 10/13 (LHO:73445), but did not find any evidence that the two are related.
Until we identify what is causing the 17.7 Hz line and fix it, we'll need to do something to help the kappa_TST estimation. I'd like to see if I can slightly increase the kappa_TST line height in DARM to compensate for the presence of this noisy peak and improve the coherence of the L3 SUS line TF to DARM.
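For reference, the standard coherence-based estimate of a line transfer-function measurement's fractional uncertainty shows why a noisy peak next to the line can push the uncertainty over a fixed gating threshold. Whether the GDS pipeline uses exactly this estimator is an assumption here, and the numbers below are hypothetical:

```python
import math

GATE_THRESHOLD = 0.01  # the 1% GDS gating threshold cited in LHO:72944

def line_tf_uncertainty(coherence: float, n_avg: int) -> float:
    """Fractional uncertainty of a transfer-function estimate from its
    (magnitude-squared) coherence with DARM over n_avg averages:
    sigma = sqrt((1 - C) / (2 * N * C)). Standard formula; illustrative use."""
    return math.sqrt((1.0 - coherence) / (2.0 * n_avg * coherence))

# Hypothetical: a nearby noisy peak degrades the effective coherence of the
# 17.6 Hz SUS line with DARM, pushing the uncertainty past the gate.
print(line_tf_uncertainty(0.999, 10) < GATE_THRESHOLD)  # clean line: True
print(line_tf_uncertainty(0.95, 10) > GATE_THRESHOLD)   # contaminated: True
```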
The TCS ring heater changes were reverted on 16 October (74116).
On Oct 12th we retuned the MICH and SRCL LSC feedforwards and moved the actuation from ETMX to ETMY PUM (73420).
The MICH FF has always had a pole/zero pair at about 17.7 Hz. In the latest filter, the peak is a few dB higher than in previous iterations.
The change in H1:CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY is when the MICH and SRCL feedforward filters were changed. See attached.
This is due to a 17.7 Hz resonance in the new MICH FF. See LHO:74136.