L. Dartez, J. Kissel

More details to come, but as of 2023-08-31 19:10:00 UTC (12:10 PDT), we've updated several corners of the calibration for the first time since Jun 21 2023 (see LHO:70693) in order to:
- Update the static model of the test mass actuation strength, to better match the current time-dependent correction factor value (because it had gotten large enough that approximations used in all TDCF calculations would have started to break down) (LHO:72416)
- Update the "DARM loop modeled transfer functions at calibration line frequencies" EPICS records in order to account for the new DARM2 FM8 boost (LHO:72562 and LHO:72569)
- Update the sensing function (only a little bit) because we're now regularly operating with OM2 "hot" as of 2023-07-19 (LHO:72523)
- Start using the newly re-organized pydarm librarianship, including the use of new simulines-measured IFO sensing and actuation function data (aLOG pending)
- Fix an unimpactful bug in the front-end-computed version of the live measured response function systematic error, in which the local oscillator frequency for the demodulator tracking the calibration line recently moved from 102.13 to 104.23 Hz had not been updated (to be commented below)

The exciting news is that, with all the metrics we have on hand, these calibration updates made everything better.
- 1st attachment: at the boundary of the change, we see the "relative" time-dependent correction factors change rapidly from non-unity values to unity values (and the cavity pole doesn't change, as expected).
- 2nd attachment: at the boundary of the change, we see the front-end-computed live measured response function systematic error go from large values to values close to unity magnitude and zero phase.

We're still tracking down some bugs in the *modeled* systematic error budget, which has been broken since yesterday, Aug 30 2023 19:50 UTC, *and* we're not sure if the *GDS*-processed live measured response function systematic error is running yet, but we'll keep you posted. The comments below will also contain some updated details on the process for this update.
Attaching SDF tables for the cal update and for the H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ change. All changes have been accepted and saved in the OBSERVE and safe snap files.
I'm also including a screenshot of the H1CALCS filter updates (H1CALCS_DIFF.png).
The interferometric-measurement-informed portion of this calibration push comes from report 20230830T213653Z, whose measurement is from LHO:72573.

parameter       foton value    physical units value
---------       -----------    --------------------
1/Hc [m/ct]     2.93957e-07    3.4019e+06 [ct/m]   (* 2475726 [mA/ct] * 1e-12 [m/pm] = 8.422 [mA/pm])
f_CC [Hz]       438.694
L1/EX [N/ct]    7.53448e-08    1.60487 [N/A]
L2/EX [N/ct]    6.24070e-10    0.03047 [N/A]
L3/EX [N/ct]    1.02926e-12    2.71670e-11 [N/V^2] (with 3.3 [DAC V_bias] * 40 [ESD V_bias / DAC V_bias] = 132 [ESD V_bias])
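For reference, the unit conversions in the 1/Hc row check out with simple arithmetic; a minimal sketch:

# Quick arithmetic check of the sensing-function unit conversions quoted above.
inv_Hc_foton = 2.93957e-07          # foton value of 1/Hc, [m/ct]
omc_dcpd_tf_mA_per_ct = 2475726     # OMC DCPD SUM to DARM_ERR magnitude at 5 Hz, [mA/ct]

Hc_ct_per_m = 1.0 / inv_Hc_foton                              # ~3.4019e+06 [ct/m]
Hc_mA_per_pm = Hc_ct_per_m * omc_dcpd_tf_mA_per_ct * 1e-12    # [ct/m]*[mA/ct]*[m/pm] = [mA/pm]

print("%.4e ct/m = %.3f mA/pm" % (Hc_ct_per_m, Hc_mA_per_pm))  # ~3.4019e+06 ct/m, ~8.422 mA/pm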
I attach here a log of the process for updating the calibration. A lot of the work is much like it was in June -- see LHO:70735 -- but there are a few new bells and whistles that we used. Plus, there are a few extra steps at the end to validate that downstream products look good -- namely, checking the end-game plot from https://ldas-jobs.ligo-wa.caltech.edu/~cal/ to confirm that the *measured* and *modeled* systematic errors agree. Indeed, in doing this, we found some bugs that we're still sorting out. I also note that Louis did a TON of work leading up to today: generating the last ~2 months of reports, re-organizing and re-creating them, defining epoch tags, etc. So steps (0) through (5) were taken care of before today, and we started around step (6). Steps (6)-(9) out of (11) -- using today's procedure's numbering -- worked really well and went super smoothly. The procedure is getting quite good!
Following the usual instructions on the wiki, I took a broadband measurement followed by the simulines.
Start time:
PDT: 2023-08-31 12:35:20.025399 PDT
UTC: 2023-08-31 19:35:20.025399 UTC
GPS: 1377545738.025399
End time:
PDT: 2023-08-31 12:57:25.118313 PDT
UTC: 2023-08-31 19:57:25.118313 UTC
GPS: 1377547063.118313
2023-08-31 19:57:24,730 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230831T193521Z.hdf5
2023-08-31 19:57:24,760 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230831T193521Z.hdf5
2023-08-31 19:57:24,771 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230831T193521Z.hdf5
2023-08-31 19:57:24,782 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230831T193521Z.hdf5
2023-08-31 19:57:24,793 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230831T193521Z.hdf5
Note, this is the first measurement taken *after* the 2023-08-31 19:10 UTC calibration update (LHO:72594). Also:
$ gpstime 1377545738
PDT: 2023-08-31 12:35:20.000000 PDT
UTC: 2023-08-31 19:35:20.000000 UTC
GPS: 1377545738
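The same GPS-to-UTC lookup can be reproduced in Python; a minimal sketch assuming astropy is available (the site gpstime tool remains the authoritative reference):

from astropy.time import Time

# Reproduce the gpstime lookup above (GPS -> UTC, leap seconds handled by astropy).
t = Time(1377545738, format="gps")
print(t.utc.iso)   # expect 2023-08-31 19:35:20.000, matching the output above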
While running initial alignments, we have been getting saturations for OM1 and OM2 when running through the DOWN state of ALIGN_IFO. These have been traced back to the yaw integrators for each suspension's top mass not being properly cleared. Since it seems like the integrators are, in fact, being cleared but then immediately filled up again, I've moved the clear integrators step to be the last step done in the DOWN state and added a one second wait timer right before it. This hopefully gives more time for everything to settle before the top mass integrators are cleared, but we'll see the next time an initial alignment is run.
ALIGN_IFO has been loaded and changes committed to svn.
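For reference, a minimal sketch of the kind of ordering change described above -- not the actual ALIGN_IFO code; the channel names are illustrative placeholders, and 'ezca' is the EPICS accessor that the guardian runtime provides:

import time

def down_state_cleanup(ezca):
    # ... all other DOWN-state cleanup runs first ...

    # new: wait ~1 s so the top-mass loops settle before the integrators are touched
    time.sleep(1)

    # new: clear the top-mass yaw integrators as the very last step of DOWN
    for optic in ('OM1', 'OM2'):
        ezca['SUS-%s_M1_LOCK_Y_RSET' % optic] = 2   # RSET=2 clears the filter history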
All dust monitor pumps are operating within temp and pressure specs.
Thu Aug 31 10:07:07 2023 INFO: Fill completed in 7min 3secs
TITLE: 08/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY: Locked for 21 hours. PR2, SR2, MC2 saturation at 1132 UTC, but no obvious seismic or other environmental noise at that time.
CDS overview - OK
Below is the summary of the DQ shift for the week of 21-27 August 2023
The complete DQ shift report may be found here.
TITLE: 08/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
Very quiet evening with one candidate event, S230831e. H1 was locked and observing for the whole shift; current lock stretch is at 13 hours.
LOG:
No log for this shift.
State of H1: Observing at 151Mpc
H1 has been locked for 9 hours and observing the whole shift. One candidate event two hours ago, S230831e, quiet evening otherwise.
Elenna, Gabriele, Camilla
This afternoon we updated the MICH feedforward; it is now back to around the level it was at last Friday (comparison attached). This was last done in 72430. It may have needed to be redone so soon because of the 72497 alignment changes on Friday.
The code for excitations and analysis has been moved to /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/
Elenna updated the guardian to engage FM1 rather than FM9, and the SDF change was accepted. New filter attached. I forgot to accept this in the h1lsc safe.snap and will ask the operators to accept MICHFF FM1 when we lose lock or come out of observe (72431), tagging OpsInfo.
Attached is a README file with instructions.
Accepted FM1 in the LSC safe.snap
Calling out a line from the above README instructions that Jenne pointed me to, which confirms my suspicion about the *reason* the bad FF filter's high-Q feature showed up at 102.128888 Hz, right next to the 102.13 Hz calibration line: "IFO in Commissioning mode with Calibration Lines off (to avoid artifacts like in alog#72537)." In other words -- go to NLN_CAL_MEAS to turn off all calibration lines before taking active measurements that inform any LSC feedforward filter design.

Elenna says the same thing -- quoting the paragraph from LHO:72537 (later added in an edit): How can we avoid this problem in the future? This feature is likely an artifact of running the injection to measure the feedforward with the calibration lines on, so a spurious feature right at the calibration line appeared in the fit. Since it is so narrow, it required incredibly fine resolution to see it in the plot. For example, Gabriele and I had to Bode plot in foton from 100 to 105 Hz with 10000 points to see the feature. However, this feature is incredibly evident just by inspecting the zpk of the filter, especially if you use the "mag/Q" display in foton and look for the poles and zeros with a Q of 3e5 (!!). If we ensure to both run the feedforward injection with cal lines off and/or do a better job of checking our work after we produce a fit, we can avoid this problem.
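One way to automate the "inspect the zpk" check described above is to scan a fitted filter's poles and zeros for high-Q pairs near a calibration line before installing it. A minimal sketch -- the Q threshold, line list, and example pole pair below are assumptions for illustration, not part of the real workflow:

import numpy as np

def flag_high_q(roots, label, cal_lines_hz=(102.13,), q_thresh=1e4, df_hz=0.5):
    """Print any complex s-plane roots (rad/s) with Q above q_thresh that sit
    within df_hz of a calibration line."""
    for r in np.atleast_1d(roots):
        if np.imag(r) == 0:
            continue
        f0 = abs(r) / (2 * np.pi)            # natural frequency [Hz]
        q = abs(r) / (2 * abs(np.real(r)))   # Q of the complex pair
        if q > q_thresh and any(abs(f0 - fl) < df_hz for fl in cal_lines_hz):
            print(f"{label}: f0 = {f0:.4f} Hz, Q = {q:.3g}  <-- suspicious")

# Example: a pole pair right at 102.128888 Hz with Q ~ 3e5, like the bad MICH FF fit.
f0, Q = 102.128888, 3e5
w0 = 2 * np.pi * f0
poles = np.array([-w0 / (2 * Q) + 1j * w0 * np.sqrt(1 - 1 / (4 * Q**2)),
                  -w0 / (2 * Q) - 1j * w0 * np.sqrt(1 - 1 / (4 * Q**2))])
flag_high_q(poles, "MICH FF candidate poles")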
FAMIS 26055, last run in alog 72310
Most V_eff values are trending upwards while a few are staying flat, with ETMX's increase being the most dramatic (likely more noticeable because the ETMX measurement has not been run in over a month, according to this week's plot).
Thanks to Camilla for pointing out that I should've been paying attention to the y-axes on the V_eff value plots. Although many of them are indeed trending upwards, they are also moving towards zero, so there's little cause for concern.
Vicky, Naoki, Sheila
Summary: We tried to improve squeezing by walking alignments; this wasn't successful. We did see that we were able to increase the ADF IQ SUM while decreasing sqz, which we didn't understand.
Since we saw a small improvement in sqz on the homodyne yesterday after moving ZM3, today we tried to move the filter cavity axis to see if we could reduce clipping while injecting sqz into the IFO. We injected the ADF and set offsets on the FC green trans QPD A, and watched for higher ADF. The QPD offset didn't have much of an effect itself; we did, however, see a 10% increase in the ADF IQ SUM channel while the AS42 loops had a large transient because of our QPD step.
We were able to recreate the higher ADF IQ SUM by setting an offset of -0.3 in AS_A_RF42_YAW, but this didn't visibly improve the squeezing or range (we tried switching the offset back and forth several times; the initial improvement wasn't reproduced).
We took some further steps and saw that we could increase the ADF IQ SUM even further, but the squeezing level was reduced with the higher ADF. We wondered if we were being confused by a change in the squeezing angle. We checked the sqz angle with and without the yaw offset engaged; rotating the sqz angle didn't explain the difference we saw with the alignment offset.
After regenerating the NonSENS c-code and restarting the h1bos and h1oaf models yesterday, I tried some new cleaning during our commissioning time this afternoon. Now I can get all three of the LSC, jitter, and laser noise subtractions to work, with no mysterious minus signs needed.
Attached is my DTT 'NonSENS budget' from a time when we were still in commissioning. The blue is the CALIB_STRAIN_NOLINES channel, and red is CALIB_STRAIN_CLEAN. The black is the sum of all the other colors' noise estimates. This LSC subtraction was from a time before the MICH FF was retuned, so it may not have as big an effect were I to turn it on now that the MICH FF has been updated.
Today I turned on the jitter and laser noise subtraction during commissioning (without having retrained it), and it performed at a similar level to this plot from last week. The LSC subtraction was not as effective as in this plot, but that's because the LSC in-loop feedforward was tuned later in the week last week, so there was just less that needed to be subtracted.
TITLE: 08/30 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 12mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 started observing about 15 minutes ago after the ADC issue was fixed. Wind is not as bad as yesterday.
TITLE: 08/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Lock#1:
Lockloss at 16:18UTC
The squeezer kept losing lock all morning, and the ISS saturated.
Lock#2:
After some investigation while we were relocking, it was found that FC1 was having trouble. Fil power-cycled the chassis and then swapped the coil driver and satellite box for spares, but none of these actions fixed the issue.
We reacquired NLN at 17:52, but we stayed out of Observing while the FC1 investigation/work was ongoing. It was narrowed down to an issue with the T3 BOSEM on FC1. An old (2011) ADC card ended up being swapped for a new one (2021), which was the root of the issue! Fil swapped back the original coil drivers and satellite box. Alog 72558 documents the process.
I took a broadband and simulines calibration measurement during the investigation time, then a second one later on in the afternoon.
Back into Observing at 22:46UTC after squeezer commissioning and MICHFF testing.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:25 | FAC | Cindy | MidX | N | Tech clean | 16:53 |
17:39 | EE | Fil | CER | N | Investigate FC1 | 17:51 |
17:59 | EE | Fil | MidY | N | Get a spare satellite box | 18:22 |
18:22 | EE | Fil | CER | N | Coil driver, Satellite box swap FC1 | 18:50 |
18:50 | FAC | Cindy | Mech room | N | Tech clean | 19:37 |
19:40 | SUS | Fil, Rahul | CER | N | Investigate whether to replace AA chassis | 20:07 |
19:41 | FAC | Cindy | H2 | N | Tech clean | 19:58 |
19:50 | LSC | Elenna | Remote | N | Test new MICHFF, OMC injection | 20:19 |
20:15 | EE | Fil, Dave | MSR, CER | N | HAM7 FC1 AA chassis swap, restore swapped parts (Coil drivers, satellite box) | 20:40 |
20:42 | ASC | Elenna | Remote | N | Measurements, 8Hz line | 21:20 |
20:45 | SQZ | Vicky, Sheila, Naoki | CR | N | SQZ commissioning | 22:28 |
21:16 | FAC | Randy | LVEA, west bay | N | Grab parts | 21:21 |
22:19 | FAC | Tyler | EndY chiller yard | N | Check on chiller | 22:33 |
22:37 | LSC | Elenna, Camilla | Remote | N | Test new filter | 22:45 |
I ran a 2nd calibration sweep today, starting with broadband:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20230830T212846Z.xml
Simulines:
2023-08-30 21:58:00,615 | INFO | Commencing data processing.
2023-08-30 21:58:56,567 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230830T213653Z.hdf5
2023-08-30 21:58:56,585 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230830T213653Z.hdf5
2023-08-30 21:58:56,611 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230830T213653Z.hdf5
2023-08-30 21:58:56,636 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230830T213653Z.hdf5
2023-08-30 21:58:56,661 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230830T213653Z.hdf5
GPS start: 1377466629.697354
GPS stop: 1377467954.983395
We think this is a more thermalized measurement of the IFO after installing the new DARM2 FM8 boost filter, and we'll likely use *this* measurement to inform a calibration update.
aLOGs of the DARM2 FM8 boost filter change -- LHO:72562 and LHO:72569
Previous unthermalized measurement that also had the new DARM filter in place -- LHO:72560
This measurement has been processed by pydarm and can now be found under the report 20230830T213653Z, attached here for reference. This measurement served as the basis for the update to the calibration on 2023-08-31 -- see LHO:72594. I've measured the OMC DCPD "rough [mA]" to DARM_ERR [ct] transfer function during this measurement, and found the magnitude to be 2475726 [mA/ct] at 5 [Hz]. The DTT template is committed to the CalSVN under /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O3/H1/Measurements/FullIFOSensingTFs as 2023-08-30_2130UTC_H1_OMCDCPDSUM_to_DARMIN1.xml
I ran a calibration sweep this afternoon, starting with the broadband:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20230830T190736Z.xml
Simulines:
2023-08-30 19:36:43,647 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230830T191430Z.hdf5
2023-08-30 19:36:43,663 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230830T191430Z.hdf5
2023-08-30 19:36:43,674 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230830T191430Z.hdf5
2023-08-30 19:36:43,684 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230830T191430Z.hdf5
2023-08-30 19:36:43,696 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230830T191430Z.hdf5
GPS start: 1377458086.806734
GPS stop: 1377459422.017018
This is the first calibration measurement with the new DARM boost filter (alog 72565)
... however, we realized too late that the IFO had not yet thermalized during this measurement. As such, we're not confident that this measurement conveys a representative sensing function suitable for an update to the calibration pipeline parameters, so we will likely throw this *sensing* measurement in the "not yet thermalized" bin. It is still a totally fine actuation measurement. For a more thermalized measurement in the same IFO configuration a few hours later, see LHO:72573.

The attachment shows a trend of the TDCFs (specifically the relative optical gain, \kappa_C, and cavity pole frequency, f_CC -- second and third rows) vs. the arm cavity power (bottom row). The first dashed vertical line is the start of this measurement; the second dashed vertical line is the start of the LHO:72573 measurement. At the second dashed line the arm cavity power is much more like the previous stretch's thermalized arm power, and less "on the exponential rise." One can see this *less* so in the \kappa_C and f_CC trends, but we feel it is better to be safe than sorry and use the LHO:72573 measurement from 2.5 hours later.
The nominal value of the squeezer laser diode current was changed to 1.863 from 1.95. The tolerance is unchanged at 0.1. Looking at trends, we sometimes read a value just below 1.85, leading to a failed-laser condition, which in turn triggers a relocking of the squeezer laser. However, since we are already locked, all we see is the fast and common gains ramping down and up.
Looking at this diode current trend over the past 500 days, we see it fairly stable but trending down very slowly. It may have lost 10 mA over the past year. Resetting the nominal value should keep us in the good band for a while if this trend continues.
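For clarity, a minimal sketch of the nominal-plus-tolerance check being retuned here, assuming the failed-laser condition is a simple window comparison on the diode-current readback (the actual SQZ laser watchdog logic may differ):

# Minimal sketch of the nominal +/- tolerance window being retuned above.  The
# comparison and constants are assumptions, not the real SQZ watchdog code.
NOMINAL_A = 1.863     # new nominal diode current (was 1.95)
TOLERANCE_A = 0.1     # unchanged

def laser_current_ok(readback):
    """Return True if the diode-current readback is inside the allowed window."""
    return abs(readback - NOMINAL_A) <= TOLERANCE_A

# With the old nominal of 1.95, a readback just below 1.85 fell outside the
# window (1.95 - 0.1 = 1.85) and triggered the failed-laser / relock sequence;
# with the new nominal it is comfortably inside.
print(laser_current_ok(1.849))   # True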
So far this seems to have fixed the TTFSS gain changing issue! Haven't seen gain changes while locked in the past couple days, since Daniel changed the laser diode nominal current (bottom purple trend).
In the past week there wasn't a single TTFSS gain-ramping incident during lock. The fast and common gains are again monitored in SDF.
Since the results from yesterday's quarter-bias test (66810) seem inconsistent with Wednesday's test (66751), I'm trying a repeat of the test with lines on and off so that we can have the same number of averages for all these configurations.
The SRCL cleaning probably needs retuning because of the ring heater change, we've got pretty high SRCL coherence up to 50Hz right now. This does reproduce that the noise is higher with 1/4 bias, and the result doesn't seem to depend on having the lines on or off. For now I've left the IFO with all the lines back on and full bias, Robert will try some injections in a little while. I'll add a plot to this alog later on.
The first attachment here shows the main result of this test: the noise from 20-40 Hz is higher with the reduced bias, in a similar way to the first test (66751). A secondary thing to note is that we still do not see a broad reduction in noise when the ADS lines are off, as was seen at LLO.
One possible explanation for this increase in noise would be the ESD nonlinearity. We currently don't run with any linearization on the ESD. The ESD actuation is described by Eq 3 in T1700446, and in many other places.
Rearranging Eq. 3 into terms linear and quadratic in the signal voltage (and dropping the static terms):
F = [2(gamma - alpha) V_bias + beta - beta2] * V_signal + (alpha + gamma) * V_signal^2
Aside: understanding the gain scaling we needed to match the linear response.
The table in 66751 shows how I adjusted the digital gain in the signal electrode paths to keep the overall loop gain the same. If we reduce V_bias from V_b1 to V_b2, we compensate with digital gain in the signal path to keep the linear force the same (so V_s becomes g*V_s); the gain we need to apply is:
g = [2(gamma - alpha) V_b1 + beta - beta2] / [2(gamma - alpha) V_b2 + beta - beta2]     (V_b1 = -447 V, V_b2 = -124 V)
We can check this against the gain that we needed using some old in-lock charge measurements: if the beta terms are both zero, we'd see linear gain scaling with the bias (g = 3.6). For the coefficients measured in 56613 we'd expect g = 1.34, and for the coefficients measured in 38656 we'd expect g = 2.34. So, some up-to-date in-lock charge measurements could help us understand if the gain scaling we see makes sense with this math, but the variation in past measurements has been more than enough to encompass the gain scaling that we saw this time. This means that if we were to run at a reduced bias, our ESD actuation strength would probably vary more with the distribution of charge.
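A short numerical sketch of the gain-scaling formula above; the alpha/gamma values below are placeholders, not the measured coefficients from 56613 or 38656:

def esd_gain_scaling(alpha, gamma, beta_minus_beta2, V_b1=-447.0, V_b2=-124.0):
    """Digital gain needed in the signal path to keep the linear ESD force
    unchanged when the bias is reduced from V_b1 to V_b2 (formula above)."""
    linear_coeff = lambda Vb: 2 * (gamma - alpha) * Vb + beta_minus_beta2
    return linear_coeff(V_b1) / linear_coeff(V_b2)

# With the beta terms set to zero, the scaling is just the bias ratio:
print(esd_gain_scaling(alpha=1.0, gamma=2.0, beta_minus_beta2=0.0))  # 447/124 = 3.60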
Projection of quadratic contribution from ESD:
As we lower the bias and increase the voltage applied to the signal electrodes, the quadratic term will become larger and might introduce noise to DARM. The quadratic term in the signal voltage is (alpha + gamma) * V_s^2. I've added this to the noise budget with the coefficients measured in 56613, using the LVESDAMON channel, which is calibrated into volts applied to the ESD (see ESD_Quadratic in budget.py). The second and third attachments show the projections this makes in the quarter-bias and normal (full-bias) configurations. While this does predict upconversion around the calibration and ADS lines with the quarter bias, it doesn't predict all of the extra noise introduced in the quarter-bias test. I'm hoping to do a repeat of the quarter-bias test with a line injected on the ESD to measure the quadratic term directly rather than inferring it from the old charge measurements.
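A minimal sketch (with synthetic data) of how this quadratic-term projection can be formed: square the calibrated signal-electrode voltage, scale by (alpha + gamma), and take the spectrum. The coefficient value and time series below are placeholders; the real projection uses the LVESDAMON channel and then propagates the force to DARM through the L3 actuation path:

import numpy as np
from scipy.signal import welch

fs = 16384.0
t = np.arange(0, 64, 1 / fs)
V_s = 5.0 * np.sin(2 * np.pi * 102.13 * t) + 0.5 * np.random.randn(t.size)  # [V], placeholder

alpha_plus_gamma = 1e-10                 # [N/V^2], placeholder coefficient
F_quad = alpha_plus_gamma * V_s**2       # quadratic ESD force term

f, psd = welch(F_quad, fs=fs, nperseg=int(16 * fs))
asd = np.sqrt(psd)                       # force ASD [N/rtHz]
# The sin^2 part puts a line at 2*102.13 Hz, and the cross term with the broadband
# part of V_s spreads noise around 102.13 Hz itself -- the upconversion around the
# lines described above.
print(f[1:][np.argmax(asd[1:])])         # dominant non-DC peak, ~204.26 Hz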
Lance is using the times above for a comparison to recent data, and we noticed that I made a typo above. As the legend in the screenshot indicates, the full-bias, lines-off time is 17:49 UTC on 1/14/2023.