Following instructions from the TakingCalibrationMeasurements wiki, broadband PCal and Simulines sweeps were run at 18:30 UTC after dropping observing.
Broadband start:
PDT: 2024-04-06 11:31:05.690691 PDT
UTC: 2024-04-06 18:31:05.690691 UTC
GPS: 1396463483.690691
Simulines start:
PDT: 2024-04-06 11:37:45.702793 PDT
UTC: 2024-04-06 18:37:45.702793 UTC
GPS: 1396463883.702793
Files written:
2024-04-06 18:59:41,732 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240406T183810Z.hdf5
2024-04-06 18:59:41,740 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240406T183810Z.hdf5
2024-04-06 18:59:41,745 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240406T183810Z.hdf5
2024-04-06 18:59:41,750 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240406T183810Z.hdf5
2024-04-06 18:59:41,755 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240406T183810Z.hdf5
H1 resumed observing at 19:01 UTC.
Sat Apr 06 10:06:39 2024 INFO: Fill completed in 6min 35secs
Dave confirmed a good fill curbside.
FAMIS 26488, last checked in alog75989
There are 12 T240 proof masses out of range ( > 0.3 [V] )!
ITMX T240 1 DOF X/U = -1.14 [V]
ITMX T240 1 DOF Y/V = 0.364 [V]
ITMX T240 1 DOF Z/W = 0.471 [V]
ITMX T240 3 DOF X/U = -1.177 [V]
ITMY T240 3 DOF X/U = -0.535 [V]
ITMY T240 3 DOF Z/W = -1.549 [V]
BS T240 1 DOF Y/V = -0.368 [V]
BS T240 3 DOF Y/V = -0.326 [V]
BS T240 3 DOF Z/W = -0.464 [V]
HAM8 1 DOF X/U = -0.366 [V]
HAM8 1 DOF Y/V = -0.365 [V]
HAM8 1 DOF Z/W = -0.647 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = -0.065 [V]
ETMX T240 1 DOF Y/V = -0.021 [V]
ETMX T240 1 DOF Z/W = -0.058 [V]
ETMX T240 2 DOF X/U = -0.259 [V]
ETMX T240 2 DOF Y/V = -0.209 [V]
ETMX T240 2 DOF Z/W = -0.212 [V]
ETMX T240 3 DOF X/U = -0.01 [V]
ETMX T240 3 DOF Y/V = -0.14 [V]
ETMX T240 3 DOF Z/W = -0.003 [V]
ETMY T240 1 DOF X/U = 0.122 [V]
ETMY T240 1 DOF Y/V = 0.136 [V]
ETMY T240 1 DOF Z/W = 0.202 [V]
ETMY T240 2 DOF X/U = -0.059 [V]
ETMY T240 2 DOF Y/V = 0.187 [V]
ETMY T240 2 DOF Z/W = 0.114 [V]
ETMY T240 3 DOF X/U = 0.218 [V]
ETMY T240 3 DOF Y/V = 0.15 [V]
ETMY T240 3 DOF Z/W = 0.142 [V]
ITMX T240 2 DOF X/U = 0.17 [V]
ITMX T240 2 DOF Y/V = 0.274 [V]
ITMX T240 2 DOF Z/W = 0.277 [V]
ITMX T240 3 DOF Y/V = 0.176 [V]
ITMX T240 3 DOF Z/W = 0.151 [V]
ITMY T240 1 DOF X/U = 0.111 [V]
ITMY T240 1 DOF Y/V = 0.095 [V]
ITMY T240 1 DOF Z/W = 0.014 [V]
ITMY T240 2 DOF X/U = 0.065 [V]
ITMY T240 2 DOF Y/V = 0.24 [V]
ITMY T240 2 DOF Z/W = 0.114 [V]
ITMY T240 3 DOF Y/V = 0.084 [V]
BS T240 1 DOF X/U = -0.185 [V]
BS T240 1 DOF Z/W = 0.117 [V]
BS T240 2 DOF X/U = -0.077 [V]
BS T240 2 DOF Y/V = 0.038 [V]
BS T240 2 DOF Z/W = -0.138 [V]
BS T240 3 DOF X/U = -0.169 [V]
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -4.089 [V]
STS EY DOF Z/W = 2.831 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.506 [V]
STS A DOF Y/V = -0.75 [V]
STS A DOF Z/W = -0.641 [V]
STS B DOF X/U = 0.431 [V]
STS B DOF Y/V = 0.933 [V]
STS B DOF Z/W = -0.456 [V]
STS C DOF X/U = -0.681 [V]
STS C DOF Y/V = 0.854 [V]
STS C DOF Z/W = 0.421 [V]
STS EX DOF X/U = -0.072 [V]
STS EX DOF Y/V = 0.052 [V]
STS EX DOF Z/W = 0.046 [V]
STS EY DOF Y/V = 0.101 [V]
STS FC DOF X/U = 0.252 [V]
STS FC DOF Y/V = -1.001 [V]
STS FC DOF Z/W = 0.692 [V]
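For reference, a minimal sketch (not the actual FAMIS script) of the kind of range check reported above, assuming EPICS access via pyepics; the channel names below are placeholders, not the real proof-mass monitor channels:

from epics import caget  # pyepics

T240_LIMIT = 0.3  # [V] threshold for T240 proof-mass monitors
STS_LIMIT = 2.0   # [V] threshold for STS proof-mass monitors

# Placeholder (hypothetical) channel names, for illustration only
channels = [
    ("ITMX T240 1 DOF X/U", "H1:EXAMPLE-ITMX_T240_1_U_MON", T240_LIMIT),
    ("STS EY DOF X/U", "H1:EXAMPLE-STS_EY_U_MON", STS_LIMIT),
]

out_of_range = []
for label, chan, limit in channels:
    value = caget(chan, timeout=2.0)
    if value is None:
        print(f"No response from {chan}")
    elif abs(value) > limit:
        out_of_range.append((label, value))

print(f"There are {len(out_of_range)} proof masses out of range!")
for label, value in out_of_range:
    print(f"{label} = {round(value, 3)} [V]")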
TITLE: 04/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY: H1 lost lock seconds before I arrived; working on relocking now. The network is expected to go down sometime soon.
The calibration has been updated at LHO using Cal report 20240330T211519Z. The TDCF EPICS channel changes and the CALCS filter changes are attached.
In O4b, the Calibration group is using a slightly different scheme for keeping track of cal reports, the front-end pipeline settings, the GDS pipeline, and the hourly online uncertainty budgets. The biggest change is that each cal report is now also a git repository with its own history. This will allow for better tracking in situations where it is deemed necessary to regenerate or reprocess calibration measurements. Additionally, there are now two new channels available on the front end: H1:CAL-CALIB_REPORT_HASH_INT and H1:CAL-CALIB_REPORT_ID_INT. These channels are populated by the pyDARM tools when someone in the control room runs 'pydarm export --push'.
New channels and their purpose:
H1:CAL-CALIB_REPORT_HASH_INT: numeric representation of the git commit hash for the report that was used to generate the current calibration pipeline
H1:CAL-CALIB_REPORT_ID_INT: numeric representation of the report id (e.g. 20240330T211519Z) that the current calibration pipeline is configured with
Current channel values:
caget H1:CAL-CALIB_REPORT_HASH_INT H1:CAL-CALIB_REPORT_ID_INT
H1:CAL-CALIB_REPORT_HASH_INT 4.14858e+07
H1:CAL-CALIB_REPORT_ID_INT 1.39587e+09
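Side note (my own sanity check, not part of the procedure below): the H1:CAL-CALIB_REPORT_ID_INT value appears to be consistent with the GPS time corresponding to the report id timestamp. A minimal Python sketch of that conversion, assuming astropy is available:

from astropy.time import Time

# Report id 20240330T211519Z corresponds to 2024-03-30 21:15:19 UTC
report_utc = Time("2024-03-30T21:15:19", scale="utc")
print(report_utc.gps)  # 1395868537.0, consistent with the 1.39587e+09 read back by caget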
End-to-end procedure for updating the Calibration pipeline:
0. Take new calibration measurements following the instructions in OpsWiki/TakingCalibrationMeasurements.
1. Make sure that the current pyDARM deployment version is up to date.
1.a) Run pydarm -v and check that the returned version (e.g. 20240405.1) matches the latest 'production release' tag listed at https://git.ligo.org/Calibration/pydarm/-/tags.
1.b) If the tags do not match, have a member of the Calibration group deploy the latest pyDARM tools to the site and the ldas cluster. They should follow the instructions laid out here.
2. Generate a new cal report.
2.a) Run pydarm report (if this measurement set should be considered an epoch in sensing or actuation, apply the appropriate command line options as listed in the pyDARM help menu, pydarm report -h). Report generation will now populate the report directory at /ligo/groups/cal/H1/reports/<report id>/ with various 'export' products. These include dtt calibration files, inverse sensing foton exports, and the TDCF EPICS records that would be updated if this report were exported to the front end. A quick list of some of the products stored at this step:
pcal_calib_dtt.txt: Pcal calibration into meters of displacement
deltal_external_calib.txt: calibration of DELTAL_EXTERNAL into strain
pydarm_version: the pyDARM tag indicating the version of pyDARM used to generate the report
export_epics_records.txt: list of each EPICS channel name and the value it would be set to when the report is exported to the front end
gstlal_compute_strain_C00_filters_H1.npz: a set of GDS filters and metadata that is sent to the GDS pipeline when the report is exported
3. Inspect the plots in the cal report to make sure they're reasonable. Typically this is done by a member of the Calibration group who is well acquainted with the IFO and the calibration pipeline.
3.a) If the cal report is valid, set the 'valid' tag in the cal report: touch /ligo/groups/cal/H1/reports/<report id>/tags/valid
3.b) If the cal report was marked valid in 3.a), 'commit' the report now that its contents have changed: pydarm commit <report id>. If you have not done this before, you may see a message from git complaining about dubious ownership. If that happens, follow the instructions in the message and try committing again. If you continue to have trouble, reach out to me, Jamie Rollins, or another member of the Calibration group who is knowledgeable about the new infrastructure.
4. If the report is 'valid', export the new calibration to the front end.
4.a) To first compare the cal report against the currently installed calibration pipeline, run pydarm status.
4.b) To have pyDARM list all of the changes it would make if exported, run pydarm export.
4.c) Once you are certain that you want to update the calibration, run pydarm export --push. This will write to all of the EPICS channels listed in export_epics_records.txt and perform various CAL-CS front-end foton filter changes.
4.d) Reload the CAL-CS front-end coefficients via the MEDM screen to make sure the new changes are loaded into place.
4.e) Add an 'exported' tag to the current report (touch /ligo/groups/cal/H1/reports/<report id>/tags/exported) and commit it again (pydarm commit <report id>).
5. Upload the newly exported report to the ldas cluster.
5.a) Run pydarm upload <report id>.
5.b) Wait about 1-2 minutes after the upload to allow time for the systemd timers on the ldas cluster to recognize that the new report exists. You can confirm that the latest report is recognized by the ldas cluster by verifying that https://ldas-jobs.ligo-wa.caltech.edu/~cal/archive/H1/reports/latest/ points to the correct report.
6. Restart the GDS pipeline.
6.a) Run pydarm gds restart to begin the process of restarting the GDS pipeline. This will show prompts from the DMT machines (DMT1 and DMT2) asking you to confirm the hash of the GDS pipeline package (gstlal_compute_strain_C00_filters_H1.npz). The prompts will contain the following line:
b1c9f6cd1ba3c202a971c6b56c7a1774afb1931625a7344e9a24e6795f3837d7 gstlal_compute_strain_C00_filters_H1.npz
To confirm that the hash above is correct, run sha256sum /ligo/groups/cal/H1/reports/<report id>/gstlal_compute_strain_C00_filters_H1.npz and verify that the two hashes are identical. If they are the same, type 'yes' and continue with the GDS restart process. After performing this process for the second DMT machine, pyDARM will continue with the pipeline restart. The GDS pipeline currently takes about 12 minutes to fully reboot and begin producing data again. During this time, no GDS calibration data will be available.
6.b) If the two hashes are not the same and all of the above checks were done, then something is likely wrong with the pyDARM+GDS pipeline system and you cannot continue with the calibration push. Take the following steps to reset the calibration to its former state:
1. Open the CAL-CS SDF table and revert all of the EPICS channel pushes listed in export_epics_records.txt.
2. Reset the foton filters by reverting to the last h1calcs filter file installed before you exported the calibration report.
3. Remove the exported tag from the new report (rm /ligo/groups/cal/H1/reports/<report id>/tags/exported), commit it (pydarm commit <report id>), and re-upload it to the ldas cluster (pydarm upload <report id>).
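Aside (not part of the official procedure): if you want to double-check the hash in step 6.a from Python rather than with sha256sum, here is a minimal sketch; the path keeps the <report id> placeholder:

import hashlib

# Substitute the actual report id for the placeholder below
npz_path = "/ligo/groups/cal/H1/reports/<report id>/gstlal_compute_strain_C00_filters_H1.npz"
prompt_hash = "b1c9f6cd1ba3c202a971c6b56c7a1774afb1931625a7344e9a24e6795f3837d7"

sha = hashlib.sha256()
with open(npz_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

print("hashes match" if sha.hexdigest() == prompt_hash else "HASH MISMATCH, do not type 'yes'")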
Summary of pyDARM commands (for use in the control room):
pydarm report [<args ...> <report id>]: generate a calibration report based on the measurement set at <report id>. See the output of pydarm report -h for additional customization.
pydarm status [<args ...> <report id>]: compare the current calibration pipeline against what the pipeline would be if report <report id> were exported to the front end.
pydarm commit [<args ...> <report id>]: make a new commit in the report <report id> git repository. This creates a new hash and should be done any time the report's contents are changed.
pydarm upload <report id>: upload/sync the report <report id> with the ldas cluster.
pydarm gds restart: initiate a GDS pipeline restart.
pydarm ls -r: list all reports.
TITLE: 04/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We are in Observing and have been Locked for 8 hours now. Super quiet shift but we had a gw candidate a bit ago (S240406aj).
LOG:
23:00UTC Detector in NOMINAL_LOW_NOISE
23:07 Observing
04:10 Kicked out of Observing when squeezer lost lock
04:14 Squeezer relocked itself and we went back into Observing
06:29 Superevent S240406aj
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:08 | PCAL | Francisco | PCAL Lab | y(local) | PCALing | 00:03 |
23:36 | RUN | Camilla | MY | n | RUN | 00:06 |
We're Observing at 160 Mpc and have been Locked for just over four hours. Quiet evening so far.
Coherence with PRCL and REFL_RIN is back, so maybe the PRCL offset tuned a few days ago is not optimal anymore.
Sheila, Jennie W
Today Sheila changed the guardian to use the camera offsets that we determined via beam walking yesterday. Since we improved the camera offset that sets the spot on the BS (CAM YAW1) we decided to optimise the YAW to Length gains for the ITMs to improve on this change.
Sheila and I tuned the Y2L drive align gains to see whether each change increased or decreased our coupling to DHARD and CHARD.
We used templates at each step to check the transfer function and coherence from H1:ASC-{CHARD,DHARD}_Y_SM to DARM by injecting broadband noise.
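For reference, this is not the DTT template itself, just a sketch of the same transfer-function and coherence estimate in scipy, with synthetic data standing in for the ASC and DARM channels:

import numpy as np
from scipy.signal import welch, csd, coherence

fs = 512.0               # assumed common sample rate [Hz]
nperseg = int(fs * 64)   # 64 s FFT segments

def tf_and_coherence(asc, darm):
    # Transfer function estimate (CSD/PSD) and coherence from the ASC drive to DARM
    f, pxx = welch(asc, fs=fs, nperseg=nperseg)
    _, pxy = csd(asc, darm, fs=fs, nperseg=nperseg)
    _, coh = coherence(asc, darm, fs=fs, nperseg=nperseg)
    return f, pxy / pxx, coh

# Synthetic example standing in for the real broadband-noise injection data
rng = np.random.default_rng(0)
asc = rng.normal(size=int(fs * 600))
darm = 0.1 * asc + rng.normal(size=asc.size)
f, tf, coh = tf_and_coherence(asc, darm)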
The nominal Y2L gains are 2.1 for ITMX and -1.9 for ITMY.
We found a minimum in the coupling with both a common and differential step (that brought us back to our nominal gain for ITMX) but changed the gain on ITMY DRIVEALIGN to -2.5. The yellow trace in both plots shows the transfer function and coherence for this configuration between dhard and darm and chard and darm where nominal is in dark blue.
Templates are saved in /ligo/home/sheila.dwyer/Alignment/DHARD/DHARD_A2L_tuning.xml and
/ligo/home/sheila.dwyer/Alignment/CHARD/CHARD_A2L_tuning.xml
I updated the gains in SDF and Sheila has also put them in the guardian.
I accepted the current DRIVEALIGN gain for SUS-ITMX_L2_DRIVEALIGN_Y2L_SPOT_GAIN (which is unmonitored). We ended up leaving it at its set point of 2.1, but since this had been set by the guardian it had not yet been accepted in SDF, so I accepted it.
We updated the Y2L gain for SUS-ITMY_L2_DRIVEALIGN_Y2L_SPOT_GAIN (also unmonitored) to -2.5, so I accepted this in SDF as well.
I also reverted the changes we had made to the ramp times (TRAMPs) for both of these, as they were making diffs in OBSERVE.snap. We might want to shorten these TRAMPs after the guardian changes made to the camera servos today, but I figured we should do that outside of observing.
New SRCL feedforward is in place, with some improvement from 10-100 Hz. There is still work to be done to improve above 100 Hz, but SRCL coherence is lower there (see the Bruco from 76927).
The measurement was taken yesterday in 76967. The old filter is in FM3 and the new one is in FM1; this was accepted in the safe and observe SDF tables, and ISC_LOCK was updated.
TITLE: 04/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1:
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:15 | FAC | Kim | H2 | - | Technical cleaning | 16:23 |
16:41 | VAC | Gerardo | LVEA | - | Retrieving aux cart | 16:48 |
21:37 | VAC | Gerardo, Jordan | MX | - | Retrieving cable trays | 22:26 |
TITLE: 04/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 7mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.29 μm/s
QUICK SUMMARY:
Detector just got into NOMINAL_LOW_NOISE
Broadband PCAL measurement taken this afternoon starting at:
PDT: 2024-04-05 15:19:02 PDT
UTC: 2024-04-05 22:19:02 UTC
GPS: 1396390760
Measurement ran for ~5 minutes.
FAMIS 26238
I added the PDWD to this script after the first time I ran this.
Laser Status:
NPRO output power is 1.811W (nominal ~2W)
AMP1 output power is 66.96W (nominal ~70W)
AMP2 output power is 137.3W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 3 days, 3 hr 59 minutes
Reflected power = 17.61W
Transmitted power = 108.3W
PowerSum = 125.9W
FSS:
It has been locked for 0 days 4 hr and 45 min
TPD[V] = 0.8019V
ISS:
The diffracted power is around 2.3%
Last saturation event was 0 days 1 hours and 29 minutes ago
Possible Issues: None reported
We've added a new path to the CAMERA_SERVO guardian, which will now be the default: we set the camera offsets to fixed values stored in lscparams rather than resetting them each lock based on the ADS set points. This is because we found last night that the build-ups were better at a different offset for the BS camera (76961).
We caused one lockloss while doing this switch, which we think was because I changed the order of the switch. We've since edited the guardian in a way that we think should not produce a large transient.
Eric, Camilla, Naoki
Attached are some plots showing several FOM for the impact of changing the ZM4 and ZM5 PSAMS on the squeezing seen in DARM. The axes of these plots show the strain gauge readouts for the PSAMs since the piezos are known to have issues with hysteresis. Our nominal setting is (7.2, -0.72).
Initially we had relatively little reason to believe that the current settings were optimal. Both PSAMS were centered in their voltage range after offloading during the commissioning break. We decided to start with a broad scan (see PSAMS.jpg) and chose to prioritize regions not covered by the PSAM scan in 76507. We note, however, that the scans in 76507 were completed prior to the recommissioning of the SQZ alignment optimization scripts, which may impact those results to some degree. We learned that the current point is actually not too far from optimal.
We then narrowed in on points closer to (7.2, -0.72) (see PSAMS_zoom.jpg). It looks like there may be a point which produces slightly higher levels of ASQZ. We would like to explore the area around this PSAM setting more carefully.
A few comments:
We've added a script to pull the data for this analysis at /opt/rtcds/userapps/release/sqz/h1/scripts/PSAMS_data/pull_PSAMS_data.py.
It looks at channels = ['H1:AWC-ZM4_PSAMS_STRAIN_VOLTAGE', 'H1:AWC-ZM5_PSAMS_STRAIN_VOLTAGE', 'H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG', 'H1:SQZ-DCPD_RATIO_1_DB_MON', 'H1:SQZ-DCPD_RATIO_3_DB_MON', 'H1:SQZ-DCPD_RATIO_4_DB_MON', 'H1:SQZ-DCPD_RATIO_5_DB_MON', 'H1:SQZ-OMC_TRANS_RF3_DEMOD_RFMON'], where the DCPD_RATIO channels are BLRMS at 85 Hz, 350 Hz, a broad 1 kHz band, and 1.7 kHz.
You need to feed it a list of ASQZ and SQZ GPS times as a text file (example attached); we used data from 76949 and 76925. A rough sketch of the core data pull is included after these comments.
Results attached. As expected they mainly agree with Eric's.
The first plot, zoom_heatmap.png, has reduced color bars to show the details of which of the better PSAMS positions are best; heatmap.png has full color bars, showing how bad the minimum PSAMS settings are.
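This is not the actual pull_PSAMS_data.py, just a sketch of the core data pull assuming NDS access via gwpy; the averaging window and the one-GPS-time-per-line file name are my own choices for illustration:

import numpy as np
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:AWC-ZM4_PSAMS_STRAIN_VOLTAGE', 'H1:AWC-ZM5_PSAMS_STRAIN_VOLTAGE',
    'H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG', 'H1:SQZ-DCPD_RATIO_1_DB_MON',
    'H1:SQZ-DCPD_RATIO_3_DB_MON', 'H1:SQZ-DCPD_RATIO_4_DB_MON',
    'H1:SQZ-DCPD_RATIO_5_DB_MON', 'H1:SQZ-OMC_TRANS_RF3_DEMOD_RFMON',
]

def mean_values(gps, span=60):
    # Average each channel over `span` seconds starting at the given GPS time
    data = TimeSeriesDict.get(channels, gps, gps + span)
    return {chan: float(np.mean(data[chan].value)) for chan in channels}

# gps_times.txt is a hypothetical file with one GPS time per line
for gps in np.loadtxt('gps_times.txt'):
    print(gps, mean_values(float(gps)))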
I was able to request and transition to the new DARM state without issue. ETMX still saw a pretty large kick; I'll have to circle back on how large compared to previous attempts. We will also want to take a look at any effects from moving the integrator on L2 LOCK L. Note: we do get a few SUS_PI warnings shortly after transitioning to this state.
Quiet time to monitor for non-stationarity:
Start GPS: 1394344327
Stop GPS: 1394346169
Turned off cal lines at GPS 1394346770.39. Elenna started a Bruco after the cal lines were turned off.
Here is a screenshot of DARM in the new configuration. 20-90 Hz looks high and below 15 Hz looks low. It's hard to tell how much of this is real until CAL-CS is calibrated and that calibration is propagated to the DTT template.
I started an L2 LOCK IN1/IN2 injection using the noise recorder at 1394347034.426. We used a tuned broadband measurement that Craig and I put together. Apparently it was too strong, because we lost lock from this injection. It also tripped EX. I requested DOWN and reset the EX trip alarm.
Bruco is here: https://ldas-jobs.ligo-wa.caltech.edu/~elenna.capote/brucos/New_DARM/
Just from first glance looks like some residual LSC coherence, but not enough to explain the strange shape of New DARM.
Here are plots comparing the MASTER OUTS on ETMX during Louis's quiet time here with L3 offloaded, versus Gabriele's quiet time in alog 76278 during Nominal DARM. We are concerned only with the filter changes that Louis and Sheila made to offload more of L3's length actuation onto L2. Because L2 is used to control both length and angular degrees of freedom, it can be easy to ask too much of L2. This lock seems to indicate that this configuration is fairly stable.
I looked at the UL MASTER OUTs for L1, L2, and L3 on ETMX:
1) The RMS of the L3 drive is halved. There is much less L3 drive from 1 to 6 Hz, which dominates the RMS.
2) The L2 drives are largely unchanged.
3) The L1 drives are changed, but the RMS remains similar. There is much less HF content in the L1 drive with L3 offloaded, and the shape of the resonances around 3 and 6 Hz is altered.
Overall it's hard to tell from these PSDs which stage is picking up L3's slack. I believe the intention was to offload to L2, but we don't see any obvious change in the control signal being sent to the L2 stage. This could simply mean that the angular controls are relatively stronger in the L2 controllers. We'll look at the DRIVEALIGN signals to try to figure that out quantitatively.
The new DARM loop configuration reduces the DARM noise non-stationarity at low frequency.
The first plot compares the ESD drive with the Old DARM and the New DARM, confirming that the RMS is significantly reduced, especially at the relevant frequencies.
The second and third plots are spectrograms and whitened spectrograms of GDS-CALIB_STRAIN in the two configurations. Despite GDS-CALIB_STRAIN being wrongly calibrated with the New DARM, it is clear that the low-frequency non-stationarity is gone in the New DARM.
The last two plots are the bicoherence of DARM with the ESD drives, showing that in the Old DARM there is still some bicoherence with noise in the 10-30 Hz region, while in the New DARM this is gone.
These transitions last night were made with a different L2 LOCK filter (which is in L2 LOCK L FM6, replacing the filter used in earlier new DARM configurations that was at FM2). The attached screenshot shows the filter change: I replaced the poles at zero with poles at 0.03 Hz to get rid of the integrator without changing the phase at the crossover much. This was done with guardian version 27211.
Plots of the actuators during the transitions are attached (here and here); they can be compared to the one that Louis posted, where we used L2 LOCK FM2. This suggests that the change to these poles didn't help reduce the transient during the transition.
Today we tried another change to the transition: this time Evan and I moved the poles in L2 LOCK L from 0.03 Hz to 0.1 Hz and changed the ramp time for the transition to 10 seconds (from 5). The model is shown in the attached PDF, where the new filter is in place in the transition traces. This transition wasn't smoother than the others; see here.
The new UGF is 70 Hz with 20° of phase margin. The crossover between L2 and L3 is at 18 Hz with probably about 40° of phase margin (low coherence due to interference with calibration lines). We have not measured the L1 to L2 crossover yet.
S. Dwyer, E. Capote, E. Hall, S. Pandey, L. Dartez
Here are some notes from our efforts to measure IN1/IN2 at the L1 LOCK L input.
- Sheila adjusted the UIM measurement template for the new DARM config. This template is at /opt/rtcds/userapps/release/lsc/h1/templates/DARM/UIM_crossover.xml.
- Evan ran the template initially and saw that the UGF is near 1Hz. He adjusted the excitation amplitude along the way to improve coherence for the next time we run this measurement.
- Evan added a high-pass filter in the L3 DRIVEALIGN bank with a cutoff frequency at 5 Hz
- the first filter attempt was at 8 Hz; this possibly excited a roll mode near 13.75 Hz
- the second filter attempt was at 5 Hz; this seemed to reduce the roll mode excitation
We ended up losing lock shortly after the injection finished due to PRC activity.
Comparison of DARM ESD drive from end of O4a versus a few days ago. The microseism was about 0.2 µm/s in both cases. The rms DAC drive from 0.1 Hz to 0.3 Hz is about 400 ct, so even in cases of exceptionally high microseism it will be subdominant to the 7000 ct rms that is accumulated above 1 Hz.
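For illustration (made-up numbers, not the measured PSD): band-limited RMS values like the 400 ct and 7000 ct quoted above come from integrating the drive PSD over the band of interest. A minimal numpy sketch:

import numpy as np

def band_rms(freqs, psd, f_lo, f_hi):
    # RMS accumulated between f_lo and f_hi from a one-sided PSD in ct^2/Hz
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sqrt(np.trapz(psd[band], freqs[band]))

# Synthetic PSD standing in for the measured ESD DAC drive spectrum
freqs = np.linspace(0.05, 100, 20000)
psd = 1e6 / (1 + (freqs / 2.0) ** 4)   # arbitrary shape, ct^2/Hz

print(band_rms(freqs, psd, 0.1, 0.3))  # microseism band
print(band_rms(freqs, psd, 1.0, 100))  # above 1 Hz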