Follow up to Ryan C.'s alog. Previously, when running into issues engaging DRMI ASC, we would wait in ENGAGE DRMI ASC for a few minutes to let the signals converge. In this case we would lose lock after a few seconds: we noticed that a few ASC signals would start to converge, then suddenly swing wildly, causing the lockloss. This was most apparent in the MICH, SRC1, and PRC2 signals, both in P and Y. Keita and I decided to try walking the suspensions that feed these signals, as their input signals were well into the thousands (into the tens of thousands for PRC2). After walking the BS (MICH), SRM (SRC1), and PR2 (PRC2), we were able to get the ASC signals closer to 0; however, this didn't appear to help once we went back to ENGAGE DRMI ASC.
Here are the original values for the 3 suspensions I moved in case they need to be reverted at a future time:
PR2 P: 1582.4 Y: 3236.0
BS P: 98.61 Y: -395.81
SRM P: 2228.3 Y: -3144.0
This image shows what the ASC signals looked like pre-movement of the three suspensions, and this is what the signals looked like post-movement (sorry, these scopes have poor scaling). Keita suggested we start looking into the ASC loops themselves, particularly SRC2. Before the DRMI ASC loops turned on, we turned OFF the SRC2 servo, then went back to ENGAGE DRMI ASC. This seemed to hold us in ENGAGE ASC. At this point we tried turning the SRC2 loop back on, but with a halved gain for both P/Y (the original P gain was 60, Y was 100). With half the gain we were still seeing a good amount of 1 Hz motion in the signal, so we tried setting the gain to a quarter of the original value...same result.
Next, we tried halving the SRC1 gains from 4 down to 2. Then we tried to add back in the quartered SRC2 gains, which still yielded the same result. At this point we decided to leave the SRC2 loop off entirely, while keeping the halved SRC1 P/Y loops, and try to continue locking - this worked and we were able to keep locking. Eventually guardian took over the ASC loops once we got to ENGAGE ASC, and we had no issues relocking afterwards. This should be looked at tomorrow during the day, but at least now we have a temporary workaround if we lose lock again.
Steps taken to bypass this issue - Tagging OpsInfo (a rough sketch of steps 2 and 3 as EPICS writes follows the list):
1) Wait in DRMI LOCKED PREP ASC
2) Turn OFF the SRC2 P/Y loops
3) Set SRC1 P/Y gains to 2 (originally 4) - note this step was extra caution on our part, since the SRC2 oscillation was coupling into SRC1
4) Continue the locking process - guardian will eventually take over and set the control loops to their nominal state
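For reference, here is a rough sketch of steps 2 and 3 as EPICS writes (e.g. from a pyepics/cdsutils session). The ASC channel names below are my assumption and should be checked against the MEDM screens before use:

# Hedged sketch only: verify channel names on the ASC overview before writing anything.
from epics import caget, caput

# 2) Turn OFF the SRC2 P/Y loops (here by zeroing the gains; the loop output
#    switches on the filter banks would work as well)
for dof in ("P", "Y"):
    caput(f"H1:ASC-SRC2_{dof}_GAIN", 0)

# 3) Set the SRC1 P/Y gains to 2 (originally 4)
for dof in ("P", "Y"):
    print(f"SRC1 {dof} gain was", caget(f"H1:ASC-SRC1_{dof}_GAIN"))
    caput(f"H1:ASC-SRC1_{dof}_GAIN", 2)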
Back to NLN @ 10:12 UTC.
TITLE: 08/11 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
Lock#1:
We rode through a 5.9 from Japan and a 5.1 from NZ which came through within a few minutes of each other around 00:45 UTC (peakmon maxed out at 1100).
Superevent S23081n
Lockloss @ 04:25
Lock#2:
Couldn't get any flashes at DRMI or PRMI, went through CHECK_MICH, still nothing at PRMI, so H1MANAGER took us to initial alignment. Lost it during OFFLOAD_DRMI_ASC with SRM, SR2, and BS saturations, then lockloss. LSC-POPAIR_B_RF90_I_ERR_DQ starts oscillating and growing a minute or so before we lose it.
Lock#3:
Lost it at OFFLOAD_DRMI_ASC again, same situation.
Lock#4:
Lost it at FIND_IR
Lock#5-6:
The IMC seems to be taking longer to lock. Lost it at DRMI. The flashes were noticeably worse during ACQUIRE_DRMI, so I decided to try another initial alignment.
XARM was struggling, looked weird, and was very fuzzy on the scope, but the camera image looked fine? The signals weren't converging, so I requested XARM to UNLOCKED; it then went into INCREASE_FLASHES and locked, but encountered the same issue. It was able to get to offload on this attempt and get past GREEN_ARMS. During SRY the OM suspensions weren't getting cleared, so there were a lot of IFO_OUT saturations from OM1_Y and OM3_P. Finished initial alignment, then went back into locking.
Same issue: there's a ringup during DRMI_LOCKED_CHECK_ASC that kills it. I called Keita for some help near the end of the shift and told the incoming operator about the issues; at Keita's suggestion I tried holding us in TURN_ON_BS_STAGE2, since it may be the ASC signals that are causing issues. We're holding here as of 07:00 UTC.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:03 | PEM | Robert | EX | N | PEM injection | 22:54 |
No clear reason, quick lockloss
STATE of H1: Observing at 151Mpc
We've been locked for 18:13, everything's stable.
I injected acoustically today in the PSL, the LVEA, and EX. These are to update the coupling functions for the automated detection vetting system after the drop from 75 to 60W.
TITLE: 08/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Dropped into Commissioning at 16:07 UTC
ISC_LOCK taken to NLN_CAL_MEAS for an injection at 17:04 UTC
Patrick working on the FMCS FOMs and alerts, which are making the alarms alarm. 20:32 UTC
Back to OBSERVING 22:55 UTC
H1 Current Status: NOMINAL_LOWNOISE & OBSERVING
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:24 | Electrical | Ken | Mid Y | N | Swapping out light fixtures | 19:24 |
| 16:05 | SRCL | Gabrielle | Remote | N | SRCL and DARM Measurements | 16:58 |
| 16:58 | ASC | Elena | Ctrl Rm | N | ASC tests & measurements | 17:13 |
| 17:18 | SQZ | Sheila | Ctrl Rm | N | 20 Minutes of no SQZn | 17:58 |
| 18:08 | Vac | Travis | FMCE | N | Looking for parts | 18:23 |
| 18:19 | Commish | Sheila | LVEA | N | Plugging in Freq Injection cable. | 18:28 |
| 18:37 | FAC | Karen | Optics & VAC Labs | N | Technical cleaning | 19:07 |
| 18:43 | Commish | Elena | Ctrl Rm | N | Noise budget injections | 18:49 |
| 19:11 | PEM | Robert | LVEA | N | Plugging in a cable | 19:32 |
| 19:46 | PEM | Robert | EX | N | Turning on an amp | 21:02 |
| 20:27 | FMCS | Patrick | Ctrl Rm | N | Working on the FMCS Screens | 20:42 |
| 22:03 | PEM | Robert | EX | N | PEM injections | 23:33 |
TITLE: 08/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Today I took "unbiased" OLGs of DHARD P and CHARD P (see 67187 for discussion of unbiased measurements and methods). Craig previously measured these loops at 60W input power before I did some additional redesign of the loops (68698, 67488, 67518).
I have plotted the open loop gain with error shading in the attached plots. You can find the measurement templates, exported data, and processing code in '/ligo/home/elenna.capote/ARM_ASC/{DHARD,CHARD}'.
The templates for these measurements are also saved in [userapps]/asc/h1/templates/{DHARD,CHARD} as '{DHARD,CHARD}_P_olg_broadband_shaped.xml'.
CHARD P has a UGF of 3 Hz with a phase margin of 33 deg. DHARD P has a UGF of 3.4 Hz with a phase margin of 27 deg.
DHARD has two other UGFs at 1.6 Hz and 2.4 Hz. CHARD has additional UGFs around 2.5 Hz, 1.2 Hz, 0.95 Hz and 0.65 Hz.
These are recent measurements by Gabriele of DHARD Y and CHARD Y, plotted with the error shading as well.
CHARD Y has a UGF of 3.3 Hz and 44 deg of phase margin. DHARD Y has a UGF of 5 Hz and a phase margin of 22 deg.
DHARD Y has additional UGFs at 2.1 and 1.5 Hz. CHARD Y has some interesting peak features that cross zero a few times between 2.3 and 3.3 Hz. There also appear to be additional UGFs around 1.2, 0.9, 0.5, 0.4 and 0.3 Hz.
Taking another look at these measurements, DHARD P has about 6 dB of gain margin at the highest UGF. DHARD Y's gain margin is about 3 or 4 dB. Tagging CAL because this comment might be useful for DARM loop investigations.
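For anyone redoing this from the exported data, here is a minimal sketch of pulling UGFs and phase margins out of a measured OLG. The 3-column ASCII layout and filename are my assumption, not the actual template export format:

import numpy as np

# Assumed export format: freq [Hz], real, imag (check against the actual exported data)
f, re, im = np.loadtxt("DHARD_P_olg_export.txt", unpack=True)   # hypothetical filename
olg = re + 1j * im

mag = np.abs(olg)
phase = np.angle(olg, deg=True)                        # wrapped to (-180, 180]
crossings = np.where(np.diff(np.sign(mag - 1.0)))[0]   # samples where |G| crosses unity

for i in crossings:
    # nearest-sample estimates; phase margin = 180 deg + loop phase at the crossing,
    # valid as long as the wrapped phase there sits between -180 and 0 deg
    print(f"UGF ~ {f[i]:.2f} Hz, phase margin ~ {180.0 + phase[i]:.0f} deg")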
Elenna, Sheila
We got data today to rerun our noise budget with the current noise (150 Mpc). We got quiet time with no squeezing for 10 minutes starting at 1375723813, with no large glitches. We ran excitations for LSC, laser noise, and ASC. We had quiet time with squeezing injected from the previous night of observing; I chose 1375695779 as a time with high range and no large glitches. This is commit 50358cda
Elenna, Sheila
We ran the noise budget code for this no squeezing time.
This is all committed as 0f9ffe0e
Sheila, Vicky - we have re-run the noise budget for the following times:
Noise budget with squeezing. Changes here: using GDS instead of CAL-DELTAL, closer thermalized FDS time to no-sqz, using updated IFO gwinc parameters related to quantum noise calculation.
(Edit: was a glitch in the old time; updated to an FDS time without glitches. All plots updated.)
PDT: 2023-08-10 08:45:00.000000 PDT
UTC: 2023-08-10 15:45:00.000000 UTC
GPS: 1375717518.000000
PDT: 2023-08-10 09:35:52.000000 PDT
UTC: 2023-08-10 16:35:52.000000 UTC
GPS: 1375720570.000000
Noise budget with no squeezing. Same time as above, now calculates using gwinc quantum noise calculation instead of semiclassical calculation used previously.
PDT: 2023-08-10 10:18:11.000000 PDT
UTC: 2023-08-10 17:18:11.000000 UTC
GPS: 1375723109.000000
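(For convenience when reproducing these times, a quick sketch of converting between UTC and GPS with gwpy; this is just bookkeeping, not part of the budget code:)

from gwpy.time import to_gps, from_gps

print(to_gps("2023-08-10 17:18:11"))   # 1375723109, the no-squeezing time above
print(from_gps(1375723813))            # start of the 10-minute no-SQZ quiet stretch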
Both sqz & no-sqz noise budgets now use the correlated quantum noise calculation from gwinc, instead of semiclassical calculations for SN & QRPN. The gwinc budget parameters related to quantum noise calculation are consistent with the recent sqz data set (8/2, alog 72565), with readout losses evenly split between IFO output losses that influence optical gain (20%) and SQZ injection losses (20%), parameters in plot title here. This is high on SQZ injection losses, and slightly conservative on IFO output losses. This updated FDS time is thermalized and closer to the No-SQZ time; the time used previously was several hours earlier near the start of lock, w/ ifo not yet thermalized.
Unlike before, both budgets now show GDS-CALIB STRAIN, which on 8/10 was more accurately calibrated (see Louis's alog on Aug 8, LHO:72075, comparing CAL-DELTAL and GDS vs. PCAL sweep, and his record from 72531). CAL-DELTAL was previously overestimating range due to calibration inaccuracies. We got GDS-CALIB_STRAIN data from nds servers, and at first weren't able to get input jitter data from nds, due to the sampling rate change of IMC-WFS channels from 2k to 16k, 71242. Jonathan H. helped us fix this issue, so we can now pull GDS data and input jitter data from nds.ligo-wa.caltech.edu:31200 -- thank you Jonathan!! With this, the input jitter sub-budget is kind of interesting, looks to be mostly IMC-WFS in YAW.
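For the record, the kind of gwpy call we used to pull data over NDS2 looks roughly like the sketch below; the channel and times are from this entry, but the exact keyword arguments are from memory and worth double-checking:

from gwpy.timeseries import TimeSeries

# Fetch GDS strain for the 10-minute no-squeezing stretch from the LHO NDS2 server
strain = TimeSeries.fetch(
    "H1:GDS-CALIB_STRAIN", 1375723813, 1375723813 + 600,
    host="nds.ligo-wa.caltech.edu", port=31200,
)
asd = strain.asd(fftlength=8, overlap=4)   # ASD that feeds the budget comparison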
A quick thought on the discrepancy between expected and measured DARM below several hundred Hz -- I don't know if this could be related to the recent update to gwinc CTN parameters (high/low index loss angles), to quantum noise, or to mystery noise. The recent gwinc CTN update seemed to have dropped the calculated CTN level slightly (maybe 10-15% or so). In April 2023, Kevin helped update CTN parameters LHO:68499 to reconcile the H1 budget with the official gwinc parameters, while Evan made a correlated noise measurement 68482 where noise in the bucket seems more consistent with the older CTN estimate from gwinc (or very slightly higher). Another idea is that it could be related to quantum noise, such as SRCL detuning or sqz angle, which could've changed since the sqz dataset, as quantum noise can also affect the noise in this region.
All pushed as git commit 28cf2664.
Edit: All pushed again as git commit 33ffd60b.
Added noise budgets with squeezing for HVAC off time on August 17 from alogs 72308, 72297.
When comparing this HVAC off time on Aug 17 with the noise budget from above on Aug 10, it's interesting to note the broadband difference in input jitter (Aug 10 vs Aug 17, HVAC off). Between these times, it's worth noting that I think there were several additional improvements (like LSC FF or SUS-related changes) as well.
Edit: updated 8/10 input jitter budget to the less glitchy noise budget time.
Much of the gap between expected DARM (black traces) and measured DARM (red traces) in the noise budget looks compatible with elevating the CTN trace. Budget plots with 100 Hz CTN @ 1.45e-20 m/rtHz are attached below for the no-HVAC times. This is almost 30% higher than the new gwinc nominal CTN at 100 Hz (i.e., 1.128e-20 m/rtHz --> 1.45e-20 m/rtHz). Compared to the old gwinc estimate of 1.3e-20, this is ~11% higher. Quantum noise calculation unchanged here.
This CTN level is similar to the 30% of excess correlated noise that Evan H. observed in April 2023, see LHO:68482. His cross-correlation measurement sees ~30% excess correlated noise around 100 Hz after subtracting input jitter noise, where that "30%" is using the newer gwinc CTN estimate of 1.128e-20 m/rtHz @ 100 Hz. This elevated correlated noise, if attributed to CTN, corresponds to CTN @ 100 Hz of about 1.3*1.128 = 1.46e-20 m/rtHz. See this git merge request for the gwinc CTN update; this update lowered the expected CTN at 100 Hz by ~15%, from 1.3e-20 (old) to 1.1e-20 m/rtHz (new), based on updated MIT measurements.
For reference, I have plotted these various CTN levels as dotted traces in the thermal sub-budget.
To elevate the CTN level by 30% in the budget code, I scaled both high and low index loss angles by a factor of 1.8, specifically Philhighn 3.89e-4 --> 7e-4 and Phillown 2.3e-5 --> 4.14e-5. It seems like going much higher than this ~1.45e-20 level would be difficult to reconcile with the full budget.
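As a quick numerical sanity check of those statements (assuming, as the budget code does, that the CTN amplitude scales as the square root of the coating loss angles):

import numpy as np

print(np.sqrt(1.8))            # ~1.34: scaling both loss angles by 1.8 raises the CTN ASD by ~30%
print(1.45e-20 / 1.128e-20)    # ~1.29: target 100 Hz level vs. the new gwinc nominal
print(1.45e-20 / 1.3e-20)      # ~1.12: vs. the old gwinc estimate, i.e. ~11% higher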
Noteworthy w.r.t. squeezing: from the laser noise sub-budget, laser frequency noise looks within 33% of squeezed shot noise with ~3.7dB of squeezing. By contrast, the L1 noise budget from Aug 2023 (LLO:66532) shows laser noise at the ~20% level of squeezed shot noise with 5.3 dB of squeezing -- i.e. a lower laser noise floor past shot noise.
The following plots can be found in /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/lho_all_noisebudgets_081723_noHVAC_elevatedCTN, and are not yet committed to the git repo.
Plots with higher CTN are attached here for the SQZ / no-SQZ proper noise budget times from 8/10, when injections were run.
Comparing the sqz vs. no-sqz budgets suggests there might be more to understand here, to tease apart the contributions from coating thermal noise (CTN) vs. quantum noise in the bucket. In particular, something disturbing stands out: I had imagined that if elevated CTN is the physical effect we're missing, it would reconcile both NBs, with and without squeezing. However, there is still some discrepancy in the un-squeezed budget which was not resolved by CTN and seems to have a consistent shape. I'm wondering if this is related to the IFO configuration as it affects the quantum noise without squeezing. I think this could result from a non-zero but small SRCL detuning, since it looks like elevated noise, with a clear shape, that increases below the DARM pole. Simply elevating CTN to match the no-sqz budget would put us in conflict with squeezed DARM, so I don't think it makes sense to elevate CTN further. The budget currently has 0 SRCL detuning as it "seems small-ish", but this parameter is somewhat unconstrained in the quantum noise models.
In models, the readout angle is upper-bounded by Sheila's contrast defect measurement, though in principle it could be anything lower than that too, which could be worth exploring. It might be helpful to have an external measurement of the thermalized physical SRCL detuning, or to allow the SRCL detuning to vary in the models, to explore how it fits or is constrained by the fuller noise budget picture.
Plots with squeezing can be found in /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/lho_all_noisebudgets. No squeezing plots are in /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/lho_darm_nosqz_noisebudget.
I pushed to git commit 70ca191c without elevated CTN and the associated extra traces. The relevant parameters are left commented out at the bottom of the QuantumParams file, and relevant code to plot the extra traces is commented out in the lho_all_noisebudgets script.
WP 11362. I logged into fmcs-epics-cds and restarted the IOC. This did not clear the RO alarm.
I restored the FMCS alarm levels. Code to do this is under my home account in an fmcs directory.
J. Kissel, for the Calibration Team

After Louis updated the CAL-CS portion of the DARM calibration by re-installing the 3.2 kHz pole that was missing from the model of the TST stage ESD driver (see LHO:72043), we have a preliminary, by-hand estimate of the H1 detector's systematic error in calibration that did not include proper PCAL calibration (see LHO:72075). In addition, yesterday I moved one of the continuous calibration line frequencies from 102.13 Hz to 104.23 Hz (see LHO:72108). So I wanted to show the before vs. after of that.

Take a look at the following three attachments comparing before vs. after the changes. Both plots show the *modeled* response function systematic error (formed by amalgamating all of the uncertainty and systematic error from the individual DARM model components) against the *measured* response function systematic error (directly measured from PCAL lines continuously injected into the data stream).

2023-08-07 22:50 UTC (Archive Folder :: 1375483828) -- uncertainty_consistency_check_H1_1375480228_1375483828_GDS-CALIB_STRAIN.png attached
2023-08-07 00:15 UTC -- Louis re-installs the missing 3.2 kHz pole.
2023-08-08 07:50 UTC (Archive Folder :: 1375516226) -- uncertainty_consistency_check_H1_1375512626_1375516226_GDS-CALIB_STRAIN.png

In this before vs. after, one sees the drastic improvement in *measured* systematic error at ~78-80 Hz and its alignment within the 68% confidence interval of the *modeled* systematic error. The key here is that the astrophysical pipelines are still using the *modeled* systematic error, and until the 2023-08-07 00:15 UTC fix, the modeled systematic error had been *under-reporting* the true systematic error (as revealed by the *measured* systematic error). Not to worry though -- since we've now identified what the problem *was*, we can, and will, go back and re-calculate the systematic error for later offline consumption.

Of lesser importance:
2023-08-09 21:13 UTC -- Jeff moves the 102.13 Hz calibration line to 104.23 Hz
2023-08-10 02:50 UTC (Archive Folder :: 1375671030) -- uncertainty_consistency_check_H1_1375667431_1375671030_GDS-CALIB_STRAIN.png

This is a culmination of an amazing amount of work by the whole team, especially Louis Dartez. WELL DONE!! Because we live in the insane world of 1%-level calibration, we can never be happy for too long. Still on the to-do list for the low-latency systematic error:
- Incorporate the results of the PCALX roaming line into the *modeled* systematic error. This will refine the 68% confidence interval above 1 kHz.
- Incorporate *some* model of the detector's sensing function change during thermalization into the *modeled* systematic error. This will refine the model during the first 2-3 hrs of observation-ready data right after a PSL laser power-up lock acquisition.
Bubba, Richard, Patrick, Dave:
The reverse osmosis system in the woodshop has been in alarm for over 24 hours, which has caused cell phone alarms to be sent.
While this issue is being worked, Bubba has requested the cell alarms to be bypassed.
Bypass will expire:
Sun 20 Aug 2023 11:32:36 AM PDT
For channel(s):
H0:FMC-CS_WS_RO_ALARM
Thu Aug 10 10:25:32 2023 INFO: Fill completed in 25min 27secs
Note that TC-A was hovering close to the -130C trip point, and this fill was close to being timed-out at 30mins.
Follow up on previous tests (72106)
First I injected noise on SR2_M1_DAMP_P and SR2_M1_DAMP_L to measure the transfer function to SRCL. The result shows that the shapes are different and the ratio is not constant in frequency, so we probably can't cancel the coupling of SR2_DAMP_P to SRCL by rebalancing the driving matrix (although I haven't thought carefully about whether there is some loop correction I need to apply to those transfer functions; what I measured and plotted are the DAMP_*_OUT to SRCL_OUT transfer functions). It might still be worth trying to change the P driving matrix while monitoring a P line to minimize the coupling to SRCL.
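For clarity, the check amounts to asking whether the ratio of the two measured transfer functions is flat in both magnitude and phase. Here is a minimal sketch, assuming the measurements were exported as 3-column ASCII files on a common frequency vector (file names and format are hypothetical):

import numpy as np

f, pr, pi = np.loadtxt("SR2_DAMP_P_to_SRCL.txt", unpack=True)   # freq [Hz], real, imag
_, lr, li = np.loadtxt("SR2_DAMP_L_to_SRCL.txt", unpack=True)
tf_p = pr + 1j * pi
tf_l = lr + 1j * li

# If this ratio were constant across the band, a single drive-matrix element
# could cancel the P coupling into SRCL; a frequency-dependent ratio means it can't.
ratio = tf_p / tf_l
print(np.abs(ratio))
print(np.degrees(np.angle(ratio)))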
Then I reduced the damping gains for SR2 and SR3 even further. We are now running with SR2_M1_DAMP_*_GAIN = -0.1 (was -0.5 for all but P, which was -0.2 since I reduced it yesterday) and SR3_M1_DAMP_*_GAIN = -0.2 (was -1). This has improved the SRCL motion a lot and also improved the DARM RMS. It looks like it also improved the range.
Tony has accepted this new configuration in SDF.
Detailed log below for future reference.
Time with SR2 P gain at -0.2 (but before that too)
from PDT: 2023-08-10 08:52:40.466492 PDT
UTC: 2023-08-10 15:52:40.466492 UTC
GPS: 1375717978.466492
to PDT: 2023-08-10 09:00:06.986101 PDT
UTC: 2023-08-10 16:00:06.986101 UTC
GPS: 1375718424.986101
H1:SUS-SR2_M1_DAMP_P_EXC butter("BandPass",4,1,10) ampl 2
from PDT: 2023-08-10 09:07:18.701326 PDT
UTC: 2023-08-10 16:07:18.701326 UTC
GPS: 1375718856.701326
to PDT: 2023-08-10 09:10:48.310499 PDT
UTC: 2023-08-10 16:10:48.310499 UTC
GPS: 1375719066.310499
H1:SUS-SR2_M1_DAMP_L_EXC butter("BandPass",4,1,10) ampl 0.2
from PDT: 2023-08-10 09:13:48.039178 PDT
UTC: 2023-08-10 16:13:48.039178 UTC
GPS: 1375719246.039178
to PDT: 2023-08-10 09:17:08.657970 PDT
UTC: 2023-08-10 16:17:08.657970 UTC
GPS: 1375719446.657970
All SR2 damping at -0.2, all SR3 damping at -0.5
start PDT: 2023-08-10 09:31:47.701973 PDT
UTC: 2023-08-10 16:31:47.701973 UTC
GPS: 1375720325.701973
to PDT: 2023-08-10 09:37:34.801318 PDT
UTC: 2023-08-10 16:37:34.801318 UTC
GPS: 1375720672.801318
All SR2 damping at -0.2, all SR3 damping at -0.2
start PDT: 2023-08-10 09:38:42.830657 PDT
UTC: 2023-08-10 16:38:42.830657 UTC
GPS: 1375720740.830657
to PDT: 2023-08-10 09:43:58.578103 PDT
UTC: 2023-08-10 16:43:58.578103 UTC
GPS: 1375721056.578103
All SR2 damping at -0.1, all SR3 damping at -0.2
start PDT: 2023-08-10 09:45:38.009515 PDT
UTC: 2023-08-10 16:45:38.009515 UTC
GPS: 1375721156.009515
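For reference, the excitations above used foton-style butter("BandPass",4,1,10) band-limited noise. A minimal scipy sketch of an equivalent shaping filter is below (illustration only; the real injections were made with the standard awg excitation tools, and foton's order convention may not map one-to-one onto scipy's N):

import numpy as np
from scipy import signal

fs = 256   # assumed sample rate for this sketch only
# 4th-order Butterworth band-pass between 1 and 10 Hz
sos = signal.butter(4, [1, 10], btype="bandpass", fs=fs, output="sos")

# band-limited noise, amplitude ~2 as used on SUS-SR2_M1_DAMP_P_EXC (0.2 for DAMP_L)
rng = np.random.default_rng()
exc = 2 * signal.sosfilt(sos, rng.standard_normal(60 * fs))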
If our overall goal is to remove peaks from DARM that dominate the RMS, reducing these damping gains is not the best way to achieve that. The SR2 L damping gain was reduced by a factor of 5 in this alog, and a resulting 2.8 Hz peak is now being injected into DARM from SRCL. This 2.8 Hz peak corresponds to a 2.8 Hz SR2 L resonance. There is no length control on SR2, so the only way to suppress any length motion of SR2 is via the top stage damping loops. The same can be said for SR3, whose gains were reduced by 80%. It may be that we are reducing sensor noise injected into SRCL from 3-6 Hz by reducing these gains, hence the improvement Gabriele has noticed.
Comparing a DARM spectrum before and after this change to the damping gains, you can see that the reduction in the damping gain did reduce DARM and SRCL above 3 Hz, but also created a new peak in DARM and SRCL at 2.8 Hz. I also plotted spectra of all dofs of SR2 and SR3 before and after the damping gain change showing that some suspension resonances are no longer being suppressed. All reference traces are from a lock on Aug 9 before these damping gains were reduced and the live traces are from this current lock. The final plot shows a transfer function measurement of SR2 L taken by Jeff and me in Oct 2022.
Since we fell out of lock, I took the opportunity to make SR2 and SR3 damping gain adjustments. I have split the difference on the gain reductions in Gabriele's alog. I increased all the SR2 damping gains from -0.1 to -0.2 (nominal is -0.5). I increased the SR3 damping gains from -0.2 to -0.5 (nominal is -1).
This is guardian controlled in LOWNOISE_ASC, because we need to acquire lock with higher damping gains.
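As a rough sketch, the lines in LOWNOISE_ASC setting these gains would look something like the following (ezca is provided by the guardian environment; the channel names are my reconstruction from the damping filter bank naming, not copied from the actual guardian code):

# Inside the LOWNOISE_ASC state, after acquiring lock with the higher damping gains
for dof in ("L", "T", "V", "R", "P", "Y"):
    ezca[f"SUS-SR2_M1_DAMP_{dof}_GAIN"] = -0.2   # nominal -0.5
    ezca[f"SUS-SR3_M1_DAMP_{dof}_GAIN"] = -0.5   # nominal -1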
Once we are back in lock, I will check the presence of the 2.8 Hz peak in DARM and determine how much different the DARM RMS is from this change.
There will be SDF diffs in observe for all SR2 and SR3 damping dofs. They can be accepted.
The SR2 and SR3 damping gain changes that Elenna made have been accepted.
The DARM RMS increases by about 8% with these new slightly higher gains. These gains are a factor of 2 (SR2) and 2.5 (SR3) greater than Gabriele's reduced values. The 2.8 Hz peak in DARM is down by 21%.
This is a somewhat difficult determination to make, given all the nonstationary noise from 20-50 Hz, but it appears the DARM sensitivity is slightly improved from 20-40 Hz with a slightly higher SR2 gain. I randomly selected several times from the past few locks with the SR2 gains set to -0.1 and recent data from the last 24 hours where SR2 gains were set to -0.2. There is a small improvement in the data with all SR2 damping gains = -0.2 and SR3 damping gains= -0.5.
I think we need to do additional tests to determine exactly how SR2 and SR3 motion limit SRCL and DARM so we can make more targeted improvements to both. My unconfirmed conclusion from this small set of data is that while we may be able to reduce reinjected sensor noise above 3 Hz with a damping gain reduction, we will also limit DARM if there is too much motion from SR2 and SR3.
Today I took "unbiased" OLGs of MICH P and MICH Y (see 67187 for a discussion of unbiased measurements and methods). These loops have not been measured since Gabriele and I updated the loop design in May (69370).
The templates for these measurements are saved in [userapps]/asc/h1/templates/MICH as 'MICH_{P,Y}_olg_broadband_shaped.xml'. I obtained 40 averages at a 0.015 Hz bandwidth, so it took about 20 minutes to run each measurement.
I have plotted the open loop gain with error shading in the attached plots. You can find the same measurement templates, exported data, and processing code in '/ligo/home/elenna.capote/DRMI_ASC/MICH'.
MICH P appears to have a UGF of 1 Hz with a phase margin of 46 deg. MICH Y appears to have a UGF of 0.55 Hz with a phase margin of 35 deg.
I believe Gabriele and I sought to reduce the UGF of MICH Y more than MICH P because at the time, MICH Y contributed more to the ASC subbudget from 10-30 Hz. However, we are now seeing significant upconversion of low frequency motion in DARM that limits the sensitivity from 20-40 Hz. I will revisit this loop design and prioritize more low frequency suppression to see if we can reduce the DARM RMS further.
Based on this measurement of MICH Y, it appeared the loop would be stable with a 13 dB increase in gain, which would put the UGF closer to 1 Hz like MICH P. I raised the gain by a factor of 4 (about 12 dB), the loop is stable, and there doesn't appear to be excess noise in DARM. I ran a quick injection to check the level of MICH Y relative to DARM.
MICH Y gain is now -2.4 (was -0.6). This is updated in the guardian (lownoise ASC) and SDFed.
A few minutes after posting, Sheila and I noticed CSOFT Y motion increased significantly and the noise in DARM between 20-30 Hz worsened. This extra noise and motion reduced as I reduced the MICH Y gain back to -0.6. Looking at the spectra and RMS of both MICH Y IN1 and LSC DARM IN1, it appears that although the overall RMS of both decreases with the higher gain, the higher gain also increases a 1.3 Hz peak in both MICH Y and DARM. I am undoing these changes and keeping the MICH Y gain nominal (-0.6).
Taking advantage of the fact that we're not locked, I put the missing ETMX "HFPole" filter module (LHO:72030) back in the H1CAL-CS_DARM_ANALOG_ETMX_L3 filter bank. From inspecting the filter archive, it looks like the "HFPole" ETMX filter module was removed on 4/25/2023. This is around the time we were rolling out the cmd-dev infrastructure for the calibration group. The plan is to follow up with a broadband measurement later tonight or at the earliest opportunity to establish whether or not to keep this filter in place.

The zpk string I used is zpk([], [3226.75], 1, "n"). The value 3226.75 was calculated by summing the poles for all four ESD quadrants from LHO:46773, as per LHO:27150. I've attached screenshots of the ETMX filter bank and the GDS TP window.

GDS table diff:
324c324
< # DESIGN CS_DARM_ANALOG_ETMX_L3 2 zpk([],[3226.75],1,"n")
---
> # DESIGN CS_DARM_ANALOG_ETMX_L3 2 zpk([],[],9.787382864894167e-13,"n")
343c343
< CS_DARM_ANALOG_ETMX_L3 2 21 1 0 0 HFPole 4.158812836234200838170239e-01 -0.1682374327531596 0.0000000000000000 1.0000000000000000 0.0000000000000000
---
> CS_DARM_ANALOG_ETMX_L3 2 21 1 0 0 TEST_Npct_50W 9.787382864894166725851836e-13 0.0000000000000000 0.0000000000000000 0.0000000000000000 0.0000000000000000
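For context, the restored module is a single real pole at 3226.75 Hz; a quick scipy sketch of its response is below (illustration only, this is not the foton design and does not reproduce foton's "n" normalization exactly):

import numpy as np
from scipy import signal

p = 2 * np.pi * 3226.75                 # pole frequency in rad/s
freqs_hz = np.logspace(1, 4, 500)
w, h = signal.freqs([p], [1, p], worN=2 * np.pi * freqs_hz)   # H(s) = p / (s + p)

# roughly -0.4 dB and -17 deg at 1 kHz, -3 dB at ~3.2 kHz: small but relevant at the percent level
i1k = np.argmin(np.abs(freqs_hz - 1000))
print(20 * np.log10(np.abs(h[i1k])), np.degrees(np.angle(h[i1k])))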
The above aLOG covers another *solution* to the on-going studies about the ~5-10% systematic error in the calibration -- namely, what's unique to LHO and *left over* after the flaw in GDS filters that was fixed in LHO:71787. The filter was loaded by 2023-08-07 17:15 UTC.
This change has been added to the LHO record of calibration pipeline changes for O4, DCC:T2300297
Correction to the timing of this filter update -- The filter was loaded by 2023-08-07 17:15 PDT -- i.e. 2023-08-08 00:15 UTC
* Added to ICS DEFECT-TCS-7753, will give to Christina for dispositioning once new stock has arrived.
New stock arrived and has been added to ICS. Will be stored in the totes in the TCS LVEA cabinet.
ICS has been updated. As of August 2023, we have 2 spare SLEDs for each ITM HWS.
ICS has been updated. As of October 2023, we have 1 spare SLED for each ITM HWS, with more ordered.
Spare 840nm SLEDs QSDM-840-5 09.23.313 and QSDM-840-5 09.23.314 arrived and will be placed in the TCS cabinets on Tuesday. We are expecting qty 2 790nm SLEDs too.
Spare 790nm SLEDs QSDM-790-5--00-01.24.077 and QSDM-790-5--00-01.24.079 arrived and will be placed in the TCS cabinets on Tuesday.
In 84417, we swapped:
The removed SLEDs have been dispositioned, DEFECT-TCS-7839.