Reports until 15:14, Thursday 27 February 2025
H1 General
ryan.crouch@LIGO.ORG - posted 15:14, Thursday 27 February 2025 (83094)
OPS Thursday EVE shift start

TITLE: 02/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.57 μm/s
QUICK SUMMARY:

LHO FMCS (PEM)
oli.patane@LIGO.ORG - posted 15:12, Thursday 27 February 2025 (83093)
Mystery noise in ISI GND STS EX channels

TJ, Eric, Oli

Yesterday TJ noticed that the 3-10 Hz and 10-30 Hz seismic blrms for the EX X-axis shifted up at some point and then shifted back down later in the day (attachment 1). The same thing happened today. It looks a lot like something turning on and off, so I looked through a bunch of FMCS and PEM channels to try to narrow down what could be causing it. Looking through the PEM End Floor summary pages, the noise shows up in the EBAY FLOOR accelerometers much more than in the VEA FLOOR accelerometers (attachments 2, 3). It also looks like this on/off noise started showing up on February 20th.

I plotted the EX EBAY FLOOR accelerometer channel along with the 3-30 Hz blrms channels; when the blrms channels are sitting higher, the EX floor accelerometer is less noisy, which seems backwards to me (attachment 4). The fan accelerometer readings also match this movement (attachment 5). There also seems to be some correlation between the acceleration and whether the EX_AH_DSCHRG temperature is higher or lower: when the acceleration is larger, the discharge air temperature is generally lower, and vice versa, but that doesn't seem to hold 100% of the time.

I also don't know whether this noise couples into DARM - I briefly checked the range blrms and didn't see any obvious drops or rises in the different bands coinciding with the changes in acceleration. I also didn't see any glitches on the glitchgram at the times things switch from on to off (or higher to lower output, or whatever is changing). I've told Eric about it.
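For reference, the kind of band-limited RMS (blrms) being trended above can be sketched in a few lines of numpy. This is a simplified FFT-based stand-in, not how the real H1 blrms channels are computed, but it illustrates why a narrowband disturbance shows up in one band and not the other:

```python
import numpy as np

def blrms(x, fs, f_lo, f_hi):
    """Band-limited RMS of time series x (sample rate fs, Hz).
    By Parseval's theorem, the RMS in a band is the square root of the
    summed one-sided power in that band."""
    n = len(x)
    X = np.fft.rfft(x) / n                  # normalized one-sided spectrum
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (f >= f_lo) & (f < f_hi)
    # factor of 2 accounts for the folded negative frequencies
    return np.sqrt(2.0 * np.sum(np.abs(X[band]) ** 2))

# A 5 Hz disturbance lands in the 3-10 Hz band, not the 10-30 Hz band
fs = 256.0
t = np.arange(0, 64, 1 / fs)
x = np.sin(2 * np.pi * 5.0 * t)
print(blrms(x, fs, 3, 10))   # ~0.707, the RMS of a unit-amplitude sine
print(blrms(x, fs, 10, 30))  # ~0
```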

Images attached to this report
H1 CAL (CAL, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 14:58, Thursday 27 February 2025 (83088)
Calibration Push of Report ID 20250222T193656Z; Improved Freq. Dep. of Systematic Error between 10 and 80 Hz.
J. Betzwieser, V. Bossilkov, L. Dartez, J. Kissel

We've updated the calibration to account for the errors that have been in play for a while, as discussed in LHO aLOG 82804; this comes after trying last week and running into issues (see 82935 and 83083). The highlights of this change are:
    (1) Employing a detuned sensing function for the first time in O4.
    (2) Updating the computational delay in the actuation stages
    (3) Updating the UIM distribution filters to match what's installed in the true ETMX_L1_LOCK bank
    (4) Making sure the ETMX L3 DRIVEALIGN GAIN is self-consistent everywhere.

The pushed model parameter set includes measured interferometer parameters based on the 20250222T193656Z measurement (covering (1) and (2)); the other two changes, (3) and (4), are digital filter and settings self-consistency updates that were done by hand.

The summary of the resulting change in systematic error is shown in the only attachment: black is "before," and blue and red are "after," with blue taken with not-yet-converged TDCFs and red taken after 15 minutes of TDCF "burn-in."

%%%%% Process %%%%
After Joe / Louis / Vlad concocted a model parameter set they were happy with and tagged it as "valid," we:
(Quotes of command lines that start with 
   - "$" indicate that it's on the LHO control room workstation
   - "@" indicate that it's on the LHO LDAS cluster)

- Confirmed that I *don't* need to activate any special conda environment on the local LHO control room workstations,

- Checked the pydarm version to ensure it is what Louis wants it to be:

$ pydarm --version
20250227.0
i.e. this "tag" of the pydarm code: https://git.ligo.org/Calibration/pydarm/-/tags/20250227.0.
(Note, to find these tags, go to the pydarm homepage https://git.ligo.org/Calibration/pydarm, then use the sidebar to navigate to "code" > "tags")

- Locally checked the about-to-be-pushed calibration parameter set with
     $ pydarm export 20250222T193656Z
Here, I trusted that all the "various DARM model transfer function values at calibration line frequencies" -- the so-called "EPICS records" -- had been validated by Joe / Louis / Vlad, but we went through the expected filter changes together.

- Also looked through the (somewhat more) human-readable .json file for the report in the report directory,
     /ligo/groups/cal/H1/reports/20250222T193656Z $ firefox foton_filters_to_export.json
and sanity checked that version of what was about to be exported.

- HIT GO ON THE EXPORT
    $ pydarm export --push 20250222T193656Z
which implicitly includes addressing highlight (2) from above, since this is in the pushed model parameter set.

Addressing highlight (1) from above,
- By hand, loaded filter coefficients on the CALCS model to install the three new DARM_ERR filters for the updated (inverse) sensing function model.

- By hand, turned on the new FM8 SRCD2N filter in the CAL-CS_DARM_ERR bank (FM9 and FM10 for the [inverse] cavity pole and optical gain were already on, since they've been used consistently throughout the O4 run).

- In the CALCS SDF OBSERVE.snap file (which is the same as the safe.snap file), I individually accepted the CAL-CS_DARM_ERR FM8 filter being ON first, THEN accepted the ~80 "EPICS records" for the "model values at calibration line frequencies."

Addressing highlight (4) from above,
- We collectively reviewed that 
    . H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN (real, in loop value)
    . H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN (CALCS replica actuation model value)
    . [actuation_x_arm] pydarm parameter tst_drive_align_gain (the pydarm parameter replica of the actuation model value)
were all self-consistently the current value of 198.664.

Addressing highlight (3) from above,
- By hand, Louis copied the H1:SUS-ETMX_L1_LOCK_L FM6 "aL1L3" filter coefficients into the equivalently named replica H1:CAL-CS_DARM_FE_ETMX_L1_LOCK_L FM6 filter, and loaded filter coefficients in the CALCS model again.

Then on to restarting the GDS pipeline,
- Accepted the H1CDSSDF table to capture CALIB_REPORT_ID, HASH, and GDS HASH.

- Joe pushed the validated report to the LHO LDAS cluster,
     $ pydarm upload 20250222T193656Z

- I logged into the LDAS cluster and made sure that the report made it to the cluster OK,
    $ ssh jeffrey.kissel@ldas-grid.ligo-wa.caltech.edu

    @ cat /home/cal/archive/H1/reports/last-exported/id
    20250222T193656Z

- Back on the workstation, I printed out the checksum for the GDS filters we were about to restart the GDS pipeline with,
    $ cd /ligo/groups/cal/H1/reports/20250222T193656Z/
    $ sha256sum gstlal_compute_strain_C00_filters_H1.npz
    3c12baf1aac516042212f233d3f2f574a8b77ef25a893da9a6244a2950c42d1e  gstlal_compute_strain_C00_filters_H1.npz

- HIT GO ON RESTARTING THE GDS PIPELINE, back on the work station
    $ pydarm gds restart
and watched the command-line output for the checksums it spat back about what it installed, to make sure they matched the value above,
    [...]
    target GDS filter file sha256 checksum: 3c12baf1aac516042212f233d3f2f574a8b77ef25a893da9a6244a2950c42d1e
    [...]
    target GDS filter file sha256 checksum: 3c12baf1aac516042212f233d3f2f574a8b77ef25a893da9a6244a2950c42d1e
    [...]
    Connection to h1guardian1 closed.
    2025-02-27 10:39:23 PDT ==============
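The by-eye comparison of the two checksum printouts can also be done programmatically. A minimal sketch using Python's standard hashlib (the filename below is the one from the log; the commented assertion is only illustrative):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 (as `sha256sum` does),
    without loading the whole file into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Expected value printed by `sha256sum` before the GDS restart:
expected = "3c12baf1aac516042212f233d3f2f574a8b77ef25a893da9a6244a2950c42d1e"
# assert sha256_of("gstlal_compute_strain_C00_filters_H1.npz") == expected
```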

- Waited 12 minutes, looked for data from H1 on the web sites, and ran
    $ pydarm gds status

- Now started to validate whether the push worked:
    . Went to NLN_CAL_MEAS

    . ran excitation template, 
         $ diaggui /ligo/home/louis.dartez/pcaly2darm_bb_excitation.xml &
    . Let the 300-average, ~10-minute excitation run to completion, but ignored its answer, since it's a poorly calibrated DELTAL_EXTERNAL / PCAL TF.

    (A) After 4 to 6 minutes of running the excitation template, waiting for PCAL to show up in the DTT-accessible frames, we ran the better-calibrated template,
         $ diaggui /ligo/home/louis.dartez/pcaly2darm_broadbands/pcaly2darm_broadbands_compare.xml &
updating the date and time to fall within the above BB excitation, and saving the answer to
         /ligo/home/jeffrey.kissel/2025-02-27/2025-02-27_185617UTC_H1_PCAL2DELTAL_BB_300avgs.xml

    (B) Using the same "better calibrated offline measurement" template, we also gathered "before" data from last Saturday's suite.

    . We thought (A) was definitely better than (B), but saw a hump at ~70 Hz. Suspecting the GDS kappas had not yet burned in to good measured values, we turned on the calibration lines for ~15 minutes, allowing GDS to compute good updated kappa values to be used during the broadband measurement.

    (C) Took it again after having calibration lines ON for ~15 minutes.
        2025-02-27_192525UTC_H1_PCAL2DELTAL_BB_300avgs.xml

The results are attached -- see 2025-02-27_192525UTC_H1_PAL2GDSL_BB_300avgs.png.
This transfer function, a measure of the systematic error in the calibration, has much less error below 50 Hz, which was the goal.
We're sad that there's still a ~2% error hump at 70 Hz, and that it's inverted from before, but we consider this good enough.

The observing segment that started 2025-02-27 20:18 UTC has this new calibration.

For final clean up, I committed the following affected files to the userapps repo:
CALCS filter file
     /opt/rtcds/userapps/release/cal/h1/filterfiles
         H1CALCS.txt
CALCS SDF safe / observe file
     /opt/rtcds/userapps/release/cal/h1/burtfiles
         h1calcs_safe.snap

CDSSDF SDF safe / observe file
     /opt/rtcds/userapps/release/cds/h1/burtfiles/h1cdssdf/
         h1cdssdf_safe.snap rev 30829
Images attached to this report
H1 ISC
thomas.shaffer@LIGO.ORG - posted 14:45, Thursday 27 February 2025 - last comment - 15:24, Thursday 27 February 2025(83089)
Ran A2L script again

We ran the a2l_min_multi_pretty.py script again today since we made changes to the OMC offsets (alog83087). There were minimal gain changes, the only one I updated in lscparams.py was ETMY P from 5.49 -> 5.52.

The range coherence template we normally use was giving odd results during the quiet commissioning times after the OMC reverting: no coherence is seen above 20 Hz. Sheila mentioned that this is something she has seen before and isn't physical. We ran the script expecting changes but didn't see much change in the gains or the range.

RESULTS

|      |   | Initial | Final |  Diff |
| ETMX | P |   3.23  |  3.24 |  0.01 |
| ETMX | Y |    4.9  |   4.9 |   0.0 |
| ETMY | P |   5.49  |  5.52 |  0.03 |
| ETMY | Y |   1.35  |  1.34 | -0.01 |
| ITMX | P |  -0.53  | -0.54 | -0.01 |
| ITMX | Y |   3.21  |  3.22 |  0.01 |
| ITMY | P |   0.06  |  0.05 | -0.01 |
| ITMY | Y |  -2.74  | -2.75 | -0.01 |
 

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 15:24, Thursday 27 February 2025 (83096)

Matt reminded me that I need to run diaggui_test with the range coherence template, so when I ran it the correct way, we can see that the a2l script didn't change much, or maybe even made it slightly worse. In the attached screenshot the left is before a2l was run, the right is after.

Images attached to this comment
H1 ISC (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 14:41, Thursday 27 February 2025 (83092)
SRC2 P gain brought down to 40

We've been having some issues with SRC2 P starting to go unstable after engaging for DRMI (alog 83080, alog 82997). Ryan C tested out lowering the gain from 60 -> 40 on Sheila's suggestion yesterday and it seemed to work (alog 83078), and I also bashed it in during a relock today that started to go unstable. Since this seems to work, and the gain is brought back to 60 later in ENGAGE_ASC_FOR_FULL_IFO, we've made the change in ISC_DRMI and loaded the guardian.

H1 SUS (SEI, SUS)
joshua.freed@LIGO.ORG - posted 14:09, Thursday 27 February 2025 (83090)
ETMY BOSEM Noise Injections

J. Freed

ETMY shows strong coupling of BOSEM noise between 9-14 Hz, about a factor of 20 below DARM from most sensors.

Today I did damping loop injections on all 6 BOSEMs on the ETMY M0. This is a continuation of the work done previously for ITMX, ITMY, PR2, PR3, PRM, SR2, SR3, SRM, and ETMX. As with PRM, data at gains of 300 and 600 were collected (300 is labeled as ln or L). Calibration lines were off. Data was collected in two parts; as such, F1, F2, F3 had a different background than the LF, RT, SD BOSEMs.

The plots, code, and flagged frequencies are located at /ligo/home/joshua.freed/bosem/ETMY/scripts, while the diaggui files are at /ligo/home/joshua.freed/bosem/ETMY/data and data2. I used Sheila's code at /ligo/home/joshua.freed/bosem/ETMY/scripts/osem_budgeting.py to produce the sum of all contributions as well as the individual plots.
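The "sum of all contributions" is, assuming the BOSEM noise sources are incoherent, a quadrature sum of the individual projected amplitude spectral densities. A minimal stand-in sketch (not the actual osem_budgeting.py):

```python
import numpy as np

def quad_sum(asds):
    """Quadrature sum of a list of equal-length amplitude spectral
    densities: the total ASD of incoherent noise contributions."""
    return np.sqrt(np.sum(np.square(np.asarray(asds)), axis=0))

# Two toy single-bin contributions of 3 and 4 sum in quadrature to 5
print(quad_sum([[3.0], [4.0]]))  # [5.]
```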
 
main.png shows the current noise plot for most of the suspensions done, excluding ITMX, ITMY, and PR2.
 
ETMY1.png shows the estimated contributions to DARM from the F1, F2, F3 BOSEMs, using only the 600 gain data.
 
ETMY2.png shows the estimated contributions to DARM from the LF, RT, SD BOSEMs, using only the 600 gain data. They show very strong coupling around the bounce mode, but I need to check the correlation between the increased bounce mode and my measurements. This is why I collected the 300 gain data.
 
Reference numbers in the diaggui files for ETMY in data 1:
Background time: (ref0 DARM, ref1 F1_out, ref2 F2_out, ref3 F3_out, ref4 LF_out, ref5 RT_out, ref6 SD_out)
F1L time: (ref7 DARM, ref8 F1_out)
F1 time:  (ref9 DARM, ref10 F1_out)
F2L time: (ref11 DARM, ref12 F2_out)
F2 time:  (ref13 DARM, ref14 F2_out)
F3L time: (ref15 DARM, ref16 F3_out)
F3 time:  (ref17 DARM, ref18 F3_out)
 
Reference numbers in the diaggui files for ETMY in data 2:
Background time: (ref0 DARM, ref1 F1_out, ref2 F2_out, ref3 F3_out, ref4 LF_out, ref5 RT_out, ref6 SD_out)
LFL time: (ref7 DARM, ref8 F1_out)
LF time:  (ref9 DARM, ref10 F1_out)
RTL time: (ref11 DARM, ref12 F2_out)
RT time:  (ref13 DARM, ref14 F2_out)
SDL time: (ref15 DARM, ref16 F3_out)
SD time:  (ref17 DARM, ref18 F3_out)
 
Images attached to this report
H1 ISC
jennifer.wright@LIGO.ORG - posted 11:00, Thursday 27 February 2025 (83087)
Reverted OMC ASC QPD offsets

Jennie W, Sheila

Since our range has been decreasing over the last week and the OMC QPD offsets did not give us a definite long-term gain in kappa C, we have reverted the change we made on Monday.

We might want to improve these offsets again once we sort out any other problems that are making larger impacts on our range currently.

See the below ndscope for how this affected the range and optical gain in the short term - this picture is slightly confusing because the calibration was being updated, so kappa C will have been reset at some point before or during our measurements.

The other two attached images show the QPD offsets being accepted in OBSERVE and SAFE snap files.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:15, Thursday 27 February 2025 - last comment - 10:39, Thursday 27 February 2025(83084)
Thu CP1 Fill

Thu Feb 27 10:05:41 2025 INFO: Fill completed in 5min 38secs

For this fill I tested that the rate-of-change trip value can be changed in the configuration yaml file and then loaded into the code. For this test it was reduced from 60.0 to 50.0 DEGC.

The code which generates the plot now dynamically reads the trip PV to set the horizontal bar.

It looks like we tripped on a TC-A sputter just before the LN2 got flowing, but it looks like a good fill.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:21, Thursday 27 February 2025 (83085)

The code currently stops updating the ROC channels at the time the fill is terminated. These channels are then zeroed at the start of the next fill. This is the reason the lower panel in the plot has bars behind the legend from yesterday's fill, and that these channels flat line from the end time onwards.

I'll work on a code change to continue calculating the ROC for a period of time after the fill ends.
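The trip logic described above can be sketched as follows. This is a hypothetical simplification (the dict stands in for the yaml configuration file, and the real code reads the trip threshold back from a PV): the fill is flagged when any thermocouple's rate of change exceeds the configured value.

```python
# Stand-in for the configuration yaml; the trip value was reduced
# from 60.0 to 50.0 DEGC for this test fill.
config = {"roc_trip_degc": 50.0}

def roc(samples, dt):
    """Rate of change between consecutive temperature samples,
    in DEGC per dt seconds."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

def tripped(samples, dt, trip=config["roc_trip_degc"]):
    """True if any consecutive-sample ROC magnitude reaches the trip value."""
    return any(abs(r) >= trip for r in roc(samples, dt))

# A TC plunging 60 DEGC in one interval trips at the 50.0 threshold
print(tripped([20.0, -40.0], 1.0))  # True
print(tripped([20.0, 10.0], 1.0))   # False
```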

david.barker@LIGO.ORG - 10:39, Thursday 27 February 2025 (83086)

New code has been loaded which continues with ROC calcs for 10 minutes post fill. I will test this during tomorrow's fill.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 08:42, Thursday 27 February 2025 (83082)
Lockloss

Lockloss @ 2025-02-27 16:36 UTC during commissioning after 22 minutes locked. People had just entered the LVEA.

H1 CDS
david.barker@LIGO.ORG - posted 08:28, Thursday 27 February 2025 (83081)
Running local NDS version of DARM FOM diaggui

Oli, Jonathan, Dave:

The DARM FOM in the control room (running on nuc30) stopped updating and could not be restarted. Jonathan tracked it down to an ongoing NDS2 issue which will hopefully be resolved soon. In the meantime I started the "local" version of this fom which only uses the local NDS and does not try to connect to NDS2.

I'm running the diaggui by hand from a terminal as controls on nuc30, it was started with:

cd /opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc30

diaggui --fom ./H1_DARM_FOM_cds.xml

H1 General
oli.patane@LIGO.ORG - posted 07:33, Thursday 27 February 2025 - last comment - 08:05, Thursday 27 February 2025(83079)
Ops Day Shift Start

TITLE: 02/27 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 3mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.72 μm/s
QUICK SUMMARY:

Currently relocking and at CARM_TO_TR

Comments related to this report
oli.patane@LIGO.ORG - 08:05, Thursday 27 February 2025 (83080)ISC

So last night Ryan adjusted the SRC2 P gain down from 60 to 40 during DARM_LOCKED_CHECK_ASC to try to stop the oscillations, and it worked (83074). That value, though, was just a test and wasn't changed in the code, so I would have expected the oscillations to return the next time we passed through those states, but they haven't. In the three times we've passed through DARM ASC-ville since then, with the SRC2 P gain at its original value of 60, that hasn't happened.

H1 General
ryan.crouch@LIGO.ORG - posted 22:00, Wednesday 26 February 2025 (83074)
OPS Wednesday EVE shift summary

TITLE: 02/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY: 1 lockloss and we're on our way back past the troublesome CHECK_ASC now. I had to reduce the gain of SRC2_p from 60 to 40. We're about to start powering up now.
LOG: No log.

H1 ISC (IOO, PSL)
mayank.chaturvedi@LIGO.ORG - posted 19:09, Wednesday 26 February 2025 - last comment - 14:40, Monday 03 March 2025(83077)
Opened a new ISS PD array

Jennie Siva Keita Mayank

Following our previous attempt here, we opened a new ISS PD array (S.N. 1202965).
This unit is in great condition, i.e.:

1) No sign of contamination.
2) All the optics are intact (No chipping)

We tried interfacing the QPD-cable S1203257 with the QPD but it turned out that they are not compatible.
We will look for the updated version of the QPD cable.   

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:49, Thursday 27 February 2025 (83091)EPO

More photos I took of the unboxed unit,

Keita holding part of QPD connector that connects to cable,

zoom in of part of prisms close to PD array to show they don't look damaged like the previous one we unboxed,

dcc and serial number of the baseplate (this is a different part for each observatory due to differing beam heights).

Keita explaining the QPD cable clamp to Shiva (right) and Mayank (left).

View of optics with periscope upper mirror on the left.

View of part of prisms close to periscope.

View of back of array and strain relief.

plus a picture of a packaged optic that was sitting on top of this capsule while it was in the storage cupboard.

 

Images attached to this comment
jennifer.wright@LIGO.ORG - 16:02, Thursday 27 February 2025 (83099)

For future reference, all the ISS arrays and their serial numbers are listed in the DCC entry for the assembly drawing LIGO-D1101059-v5.

matthewrichard.todd@LIGO.ORG - 14:40, Monday 03 March 2025 (83143)

[Matthew Mayank Siva Keita]

On Friday (2025-02-28) we moved the optics onto taller posts so that we do not have to pitch the beam up too much (in hindsight, we probably would've been okay doing this) when we align the beam into the input port of the ISS array. We have not aligned the beam yet and most likely should re-profile it (we may not need to) to ensure that the planned lens position is correct.

We also spent some time checking the electronics box for proper connections and polarity; then we tested the upper row of PDs (the 4 top ones) by plugging each cathode/anode into the respective port. On the output DSUB we used a breakout board and put each channel onto an oscilloscope -- it seems that all four of the top-row PDs are functioning as anticipated.


Important Note:

Keita and I looked at the "blue glass" plates that serve as beam dumps, but just looking at the ISS array we do not know how to mount them properly. We think there may be a missing component that clamps them to the array, so we repackaged the blue glass in its excessive lens paper.

Images attached to this comment
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 18:46, Wednesday 26 February 2025 - last comment - 19:47, Wednesday 26 February 2025(83076)
02:44 UTC lockloss

02:44 UTC lockloss

Comments related to this report
ryan.crouch@LIGO.ORG - 19:47, Wednesday 26 February 2025 (83078)ISC

I stopped at CHECK_ASC and reduced the gain of SRC2 P from 60 to 40, and we were able to stay locked as the oscillation came and went; I ran the OLG measurement. After the measurement I tried to reduce the gain down to 30, but I fat-fingered it and made it bigger, and we lost lock.

Images attached to this comment
H1 CAL (CAL)
joseph.betzwieser@LIGO.ORG - posted 12:31, Thursday 20 February 2025 - last comment - 10:15, Thursday 27 February 2025(82935)
Calibration debugging
[Vlad, Louis, Jeff, Joe B]
So while exercising the latest pydarm code with an eye towards correcting the issues noted in LHO alog 82804, we ran into a few issues which we are still trying to resolve.

First, Vlad was able to recover all but the last data point from the simulines run on Feb 15th, which lost lock at the very end of the sweeps.  See his LHO alog 82904 on that process.

I updated the pydarm_H1.ini file to account for the current drive align gains and point at the current L1SUSETMX foton file (saved in the calibration svn as /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1susetmx/H1SUSETMX_1421694933.txt ).  However, I had to also merge some changes for it submitted from offsite.  Specifically https://git.ligo.org/Calibration/ifo/H1/-/commit/05c153b1f8dc7234ec4d710bd23eed425cfe4d95, which is associated with MR 10 which was intended to add some improvements to the FIR filter generation.

Next, Louis updated the pydarm install at LHO from tag 20240821.0 to tag 202502220.0.

We then generated the report and associated GDS FIR filters.  This is /ligo/groups/cal/H1/reports/20250215T193653Z.  The report and fits to the sweeps looked reasonable, however, the FIR generation did not look good.  The combination of the newly updated pydarm and ini changes was producing some nonsensical filter fits.  This is the first attachment.

We reverted the .ini file changes, and this helped recover more expected GDS filters; however, there's still a small ~1% change visible around 10 Hz (instead of a flat 1.0 ratio) in the filter response using the new pydarm tag versus the old one, which we don't quite understand yet and would like to before updating the calibration. I'm hoping we can do this in the next day or two after going through the code changes between the versions.

Given the measured ~6 Hz SRC detuning spring frequency (as seen in the reports), we will need to include that effect in the calcs front end to eliminate a non-trivial error when we do get around to updating the calibration.  I created a quick plot based off the 20250215T193653Z measured parameters, comparing the full model over a model without the SRC detuning included.  This is the attached New_over_nosrc.png image.
Images attached to this report
Non-image files attached to this report
Comments related to this report
joseph.betzwieser@LIGO.ORG - 10:15, Thursday 27 February 2025 (83083)
The slight difference we were seeing between the old-report and new-report GDS filters was actually due to an MCMC fitting change. We had changed pydarm_cmd_H1.yaml to fit down to 10 or 15 Hz instead of 40 Hz, which means it is in fact properly fitting the SRC detuning, which in turn means the model the FIR filter generation is correcting has changed significantly at low frequencies.

We have decided to use the FIR filter fitting configuration settings we've been using for the entire run for the planned export today.

Louis has pushed to LHO a pydarm code which we expect will properly install the SRC detuning into the h1calcs model.

I attach a text file of the diff of the FIR filter fitting configuration settings in the pydarm_H1.ini file between Aaron's new proposal (which seems to work better for DCS offline filters, based on looking at only ~6 reports) and the ones we've been using so far this run to fit the GDS online filters.

The report we are proposing to push today is: 20250222T193656Z





Non-image files attached to this comment