Camilla takes the first sensing function, 3h 18m after hitting 60 W. Note that the ETM ring heaters are still thermalizing [takes ~24 hours], even though the arm powers and PRG have thermalized. Camilla ran this from her computer:

$ pydarm report

which produces the first MCMC and GPR, and generates report ID 20230621T191615Z.

I run

$ pydarm report    # again; as long as the latest measurements match the latest report, this "just" displays that current report

We see that it looks like there's a *little* bit of pro-spring. Louis reminds us that we tried -165 for a while, but after some trending and aLOG hunting we find that -165 was only used in the first few days after we went up to 78 W: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=68554

But, still, after thinking about it a bit on a sticky note, we feel like decreasing the magnitude of the offset from -175 to -165 will bring the sensing function from slightly pro-spring to flat. As such, we change the SRCL offset to -165, and take another measurement.

In order to get the 20230621T191615Z measurement to show up on the *next* report, we have to "validate" that report by creating an empty file in the "tags" directory of that report:

$ cd /ligo/groups/cal/H1/reports
$ touch 20230621T191615Z/tags/valid
$ pydarm ls -r    # lists all the *validated* reports and their tags
20230504T055052Z epoch valid epoch-sensing epoch-actuation
20230505T012609Z valid
20230505T174611Z valid
20230505T200419Z valid
20230506T182203Z valid epoch-sensing
20230508T180014Z valid
20230509T070754Z valid
20230510T062635Z exported valid
20230517T163625Z valid
20230616T161654Z valid
20230620T234012Z valid
20230621T191615Z valid

We want to rerun Camilla's report (because we hadn't "validated" the 20230621T191615Z report before she ran it):

$ pydarm report --force --no-gds-filters

The new measurement is in the directory 20230621T201733Z. We find that -165 shows no obvious difference (from what we can see with the ylims as they stand on the report), so we decide to revert to -175.
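Since a tag is just an empty file under a report's tags/ directory, the listing behavior of `pydarm ls -r` can be sketched as below. This is only our reading of what the tool does, not pydarm's actual implementation; the function name is ours.

```shell
# Sketch only: a report is "validated" iff an (empty) file tags/valid exists
# inside it. Run from the reports directory (e.g. /ligo/groups/cal/H1/reports).
list_valid_reports() {
    local d
    for d in */; do
        if [ -e "${d}tags/valid" ]; then
            # print the report ID followed by all of its tags
            echo "${d%/}" $(ls "${d}tags")
        fi
    done
}
```

So "validating" a report really is nothing more than `touch <report>/tags/valid`, and un-validating it would be removing that file.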
Take another measurement, now that we're back at a SRCL offset of -175 [ct]: 20230621T211522Z.

Upon first processing, we see that the MCMC has found some large value for the spring frequency, fs = 5.2 Hz with Qs = 37.88:
/ligo/groups/cal/H1/archive/20230621T211522Z_badMCMConfsandQ

We don't want the MCMC to think there's a spring at all. For some reason, this 20230621T211522Z data, with much the same measurement values as 20230621T191615Z, yields an MCMC fit that finds a spring. Louis changed, in /ligo/groups/cal/H1/ifo/pydarm_H1.ini,

is_pro_spring = False

and the report looks great.

Now, we have to apply the appropriate tags to define the new EPOCH. Note, we saved the report with the full history here:
/ligo/groups/cal/H1/archive/20230621T211522Z_goodMCMC_withFull75WHistory

Now we'll tag the 20230621T191615Z measurement -- the first -175 SRCL offset measurement of today -- as the start of an epoch:

/ligo/groups/cal/H1/reports$ touch 20230621T191615Z/tags/epoch-sensing

such that we now see

$ pydarm ls -r
20230504T055052Z epoch valid epoch-sensing epoch-actuation
20230505T012609Z valid
20230505T174611Z valid
20230505T200419Z valid
20230506T182203Z valid epoch-sensing
20230508T180014Z valid
20230509T070754Z valid
20230510T062635Z exported valid
20230517T163625Z valid
20230616T161654Z valid
20230620T234012Z valid
20230621T191615Z valid epoch-sensing

NOW we re-run the report AGAIN, because this will clear out the 75/76 W history of measurements, so we have a "clean" GPR that uses only 60 W data:

$ pydarm report --force

Looks great:
/ligo/groups/cal/H1/reports/20230621T211522Z/H1_calibration_report_20230621T211522Z.pdf

Now tag as *valid* and get ready to push.
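The effect of the epoch-sensing tag on the GPR history, as described above, can be sketched like this. This is our reading of the behavior ("reports before the most recent epoch-sensing tag are excluded"), not pydarm's actual code; it leans on the fact that report IDs are fixed-width UTC timestamps, so lexical order is chronological order.

```shell
# Sketch (our interpretation, not actual pydarm code): find the most recent
# report tagged epoch-sensing, then keep only reports at or after it.
# Run from the reports directory. Report IDs sort lexically == chronologically.
reports_in_current_sensing_epoch() {
    local d epoch=""
    for d in */; do
        # glob expansion is sorted, so the last match is the latest epoch
        [ -e "${d}tags/epoch-sensing" ] && epoch=${d%/}
    done
    for d in */; do
        # keep d if epoch <= d (portable string comparison via sort)
        if [ "$(printf '%s\n%s\n' "${d%/}" "$epoch" | sort | head -n 1)" = "$epoch" ]; then
            echo "${d%/}"
        fi
    done
}
```

Under this reading, tagging 20230621T191615Z with epoch-sensing is exactly what drops the 75/76 W reports from the GPR's measurement history.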
/ligo/groups/cal/H1/reports/20230621T211522Z$ touch tags/valid

$ pydarm ls -r
20230504T055052Z epoch valid epoch-sensing epoch-actuation
20230505T012609Z valid
20230505T174611Z valid
20230505T200419Z valid
20230506T182203Z valid epoch-sensing
20230508T180014Z valid
20230509T070754Z valid
20230510T062635Z exported valid
20230517T163625Z valid
20230616T161654Z valid
20230620T234012Z valid
20230621T191615Z valid epoch-sensing
20230621T211522Z valid

So now we're ready for export. Let's do a dry run first, to see what will be exported:

$ pydarm export
searching for 'last' report...
found report: 20230621T211522Z
using model from report: /ligo/groups/cal/H1/reports/20230621T211522Z/pydarm_H1_site.ini
filter file: /opt/rtcds/lho/h1/chans/H1CALCS.txt
Hc: 3.4207e+06 :: 1/Hc: 2.9234e-07
Fcc: 4.3933e+02 Hz
Hau: 4.6947e-08 N/A :: 7.5083e-08 N/ct
Hap: 2.0483e-08 N/A :: 6.2353e-10 N/ct
Hat: 3.7886e-02 N/A :: 9.5026e-13 N/ct
filters (filter:bank | name:design string):
CS_DARM_ERR:10 O4_Gain:zpk([], [], 2.9234e-07)
CS_DARM_CFTD_ERR:10 O4_Gain:zpk([], [], 2.9234e-07)
CS_DARM_ERR:9 O4_NoD2N:zpk([439.32644887584786], [7000], 1.0000e+00)
CS_DARM_ANALOG_ETMX_L1:4 Npct_O4:zpk([], [], 7.5083e-08)
CS_DARM_ANALOG_ETMX_L2:4 Npct_O4:zpk([], [], 6.2353e-10)
CS_DARM_ANALOG_ETMX_L3:4 Npct_O4:zpk([], [], 9.5026e-13)

$ pydarm status

shows a comparison between the current values in the front end, which presumably should be the same as what's in the report directory that was last tagged as "exported". Reviewed these, and they look OK.

LET'S PUSH:

$ pydarm export --push

We thought long about whether to push the changes to the actuator, and decided to go for it even though there are no new measurements.
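One quick sanity check on the dry-run printout: the O4_Gain design string installed into CS_DARM_ERR:10 (and CS_DARM_CFTD_ERR:10) should simply be 1/Hc. A throwaway one-liner (illustration only, not part of the pydarm workflow) confirms the numbers are self-consistent:

```shell
# Illustration only: the exported O4_Gain zpk gain (2.9234e-07) should be the
# reciprocal of the printed optical gain Hc = 3.4207e+06.
awk 'BEGIN { printf "1/Hc = %.4e\n", 1 / 3.4207e6 }'
# prints: 1/Hc = 2.9234e-07
```

That matches the `1/Hc: 2.9234e-07` line and both O4_Gain design strings above. The three Npct_O4 gains likewise match the N/ct columns of Hau, Hap, and Hat.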
$ touch 20230621T211522Z/tags/exported

jeffrey.kissel@cdsws03:/ligo/groups/cal/H1/reports$ pydarm ls -r
20230504T055052Z epoch valid epoch-sensing epoch-actuation
20230505T012609Z valid
20230505T174611Z valid
20230505T200419Z valid
20230506T182203Z valid epoch-sensing
20230508T180014Z valid
20230509T070754Z valid
20230510T062635Z exported valid
20230517T163625Z valid
20230616T161654Z valid
20230620T234012Z valid
20230621T191615Z valid epoch-sensing
20230621T211522Z exported valid

Now commit the two reports to the cluster (mostly to get the FIR filter npzs over to the cluster):

jeffrey.kissel@cdsws03:/ligo/groups/cal/H1/reports$ arx commit 20230621T191615Z
jeffrey.kissel@cdsws03:/ligo/groups/cal/H1/reports$ arx commit 20230621T211522Z

Now check in on the DMT machines:

$ pydarm gds status

then

$ pydarm gds restart

THIS PROMPTS THE USER TO CONFIRM:

$ pydarm gds restart
==================== dmt1 restart ====================
Already up to date.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  112M  100  112M    0     0   100M      0  0:00:01  0:00:01 --:--:--  100M
5945beec4e860275c7c02cd2a42dbf25f8a272a66e40c9b4f335aab31d5cf3e3  gstlal_compute_strain_C00_filters_H1.npz
restart calibration pipeline?
type 'yes' to confirm and restart:

You must type yes (and hit enter) for both the dmt1 and dmt2 machines.
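The checksum comparison done by hand in the next step could be wrapped in a small helper like the one below. This is our own sketch (the function name is ours); it uses `sha256sum`, which produces the same output format as the `shasum -a256` used in this log.

```shell
# Sketch: compare a filter file's SHA-256 against the hash printed by
# `pydarm gds restart`, so a mismatch can't slip through by eyeballing.
check_filter_hash() {
    local file=$1 expected=$2 actual
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK: $file"
    else
        echo "checksum MISMATCH: $file" >&2
        return 1
    fi
}
```

Usage would look like `check_filter_hash gstlal_compute_strain_C00_filters_H1.npz 5945beec...` with the full hash copied from the restart output.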
Then, ~20 seconds later, run

$ pydarm gds status
==================== dmt1 status ====================
ifo config git status:
commit 229a0fe93f1f89b83c436c0f3ff6dfceb2d96b17 (HEAD -> main, origin/main, origin/HEAD)
Author: Louis Dartez
Date:   Wed May 24 02:45:35 2023 -0700

    change kappa_c to True after MR253 in pydarm
----------------------------------------
filter checksum:
5945beec4e860275c7c02cd2a42dbf25f8a272a66e40c9b4f335aab31d5cf3e3  /home/dmtexec/calibration/gstlal_compute_strain_C00_filters_H1.npz
----------------------------------------
systemd service status:
● gstlal-calibration.service - Calibration pipeline DMT process
   Loaded: loaded (/home/dmtexec/.config/systemd/user/gstlal-calibration.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2023-06-21 15:57:43 PDT; 2min 32s ago
 Main PID: 3956864 (gstlal_compute_)
   CGroup: /user.slice/user-1002.slice/user@1002.service/gstlal-calibration.service
           └─3956864 /usr/bin/python3.6 /usr/bin/gstlal_compute_strain --frame-duration=1 --frames-per-file=1 --config-file=/home/dmtexec/calibration/ifo/gstlal_compute_strain_C00_H1.ini --filters-file=/home/dmtexec/calibration/gstlal_compute_strain_C00_filters_H1.npz
========================================

Then check that the file is the right one, by comparing the checksum of the file /home/dmtexec/calibration/gstlal_compute_strain_C00_filters_H1.npz, reported above as
5945beec4e860275c7c02cd2a42dbf25f8a272a66e40c9b4f335aab31d5cf3e3
against

$ shasum -a256 gstlal_compute_strain_C00_filters_H1.npz
5945beec4e860275c7c02cd2a42dbf25f8a272a66e40c9b4f335aab31d5cf3e3  gstlal_compute_strain_C00_filters_H1.npz

and they are the same. Great!

Ran through the SDF and accepted that in both OBSERVE and safe.snap. After ~10 minutes, confirmed that the grafana pages were again reading out live data. Eventually, the