Fri Jun 23 10:09:21 2023 INFO: Fill completed in 9min 20secs
Jordan confirmed a good fill curbside.
Here's a new BruCo scan for a good range time after the LSC FF improvements from yesterday:
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_GDS_1371513618
In summary:
TITLE: 06/23 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
SHIFT SUMMARY:
LOCK#1:
LOCK#2:
LOCKs Post Alignment & with Laser Noise Suppression issue
LOG:
TITLE: 06/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
As posted earlier, there were issues getting through the LASER NOISE SUPPRESSION step. Luckily, Evan saw my posts on ligo.chat and gave me suggestions for things to check based on the status descriptions I posted. In addition to the earlier note, we had various messages:
From this, Evan pointed me to ISS 2nd Loop Offset alogs:
Luckily, with the above changes, for the next lock attempt, LASER NOISE SUPPRESSION was not an issue.
Once at NOMINAL LOW NOISE, it took the Camera Servo about 22min before we could go to OBSERVING.
And after looking at the range for the last hour, it appears to be lower than what we had for the last lock, so perhaps my changes were not optimal and could be improved.
I'm not sure if I turned the 2nd Loop "OFF" the correct way. On the 2nd Loop screen, I toggled the 2nd Loop via H1:PSL-ISS_SECONDLOOP_CLOSED (on the right of the screen), but I'm thinking I should have been toggling H1:PSL-ISS_SECONDLOOP_ENABLE (middle of the 2nd Loop screen) instead.
Noting here for future reference... (and tagging OpsInfo)
The "cleanest" way we have to engage/disengage the secondloop is with the IMC_LOCK guardian. In the LOCKED state, the IMC is locked and the secondloop is OFF. When the ISS_ON state is requested, the guardian moves to CLOSE_ISS (which actually does the process of engaging the secondloop), and then to ISS_ON, where the secondloop is considered to be ON. The secondloop can be turned off again by requesting the LOCKED state, where the guardian will move through OPEN_ISS.
Immediately after the long lock, the green arms looked bad, and later PRMI did as well, so I manually tweaked these up. But for DRMI, there were repeated locklosses when ASC was engaged, so I finally decided on an Initial Alignment, and this got H1 through DRMI.
Just had a lockloss at a later stage: LASER NOISE SUPPRESSION.
H1's Longest Lock Duration: 26hrs 51min
There was no obvious environmental reason for a lockloss at 0751utc.
Green arms were a bit off (as well as the BS for PRMI). The first locking attempt failed while ASC was engaging for DRMI.
1371541886 I can see no LSC or ASC instabilities before the lockloss. The lockloss was not "FAST" (>30ms) as classified in G2201762, and it was not upstream of the IMC, as the IMC stayed locked for 0.3s after light fell off AS_A; see attached.
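For anyone repeating this kind of check offline, a hedged gwpy sketch around the lockloss time quoted above (the AS_A and IMC channel names here are my assumptions, not taken from the lockloss tool):

  # fetch a few seconds of data around the lockloss and plot the AS port
  # light against the IMC transmission to see which dropped first
  from gwpy.timeseries import TimeSeriesDict
  t0 = 1371541886
  data = TimeSeriesDict.get(
      ['H1:ASC-AS_A_DC_NSUM_OUT_DQ', 'H1:IMC-TRANS_OUT_DQ'],
      t0 - 5, t0 + 5)
  plot = data.plot()
  plot.savefig('lockloss_timing.png')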
TITLE: 06/23 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 8mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
H1 is humming along w/ just over 26hrs of lock & the current OBSERVING segment is just over 7.5hrs with a range around 140Mpc. It was breezy walking in, but gusts are under 10mph. (Wow! Nice to see H1 doing so great!)
TJ wanted me to confirm that CAL_AWG_LINES remained in the IDLE state (NOTE: it did after the lockloss).
TITLE: 06/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
SHIFT SUMMARY: Locked for 26 hours, range stable, environment calm.
LOG:
We have been locked for 24 hours and 46 minutes currently, this beats our previous longest O4 lock of 23 hours 44min on June 3rd. Let's see how long this one will go!
I have updated the LHO and LLO references in the H1_DARM_FOM.xml template (see userapps, isc/h1/scripts). LLO's new trace uses 100 avgs of observing GDS calib strain from today (multiplied by 3995). LHO's new trace is 100 avgs of Cal Delta L right after we updated the MICH FF and improved the low frequency sensitivity. Gabriele, Vicky, Masayuki, and I determined that LHO currently has 3.7 dB of FDS and LLO has 5.5 dB of FDS, so the legend is updated correctly. We also determined LHO is at about 143 Mpc range and LLO is at about 152 Mpc range.
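For anyone wanting to reproduce a trace like this outside of DTT, a hedged gwpy sketch (the time span and FFT settings here are illustrative, not the exact ones used for the template):

  # averaged ASD of LLO GDS calib strain, scaled by the 3995 m arm length
  # to convert strain into an equivalent DARM displacement trace
  from gwpy.timeseries import TimeSeries
  strain = TimeSeries.get('L1:GDS-CALIB_STRAIN',
                          'June 23 2023 00:00', 'June 23 2023 00:30')
  asd = strain.asd(fftlength=8, overlap=4, method='median')
  darm_m = asd * 3995.0   # strain -> meters of DARM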
Our sensitivity is now improved below 30 Hz relative to our last reference trace, taken in January, after updates to things like the cleaning, LSC, and ASC. The gains below 30 Hz are mostly due to the improvements made to the ASC and LSC. Going back to 60W input has been very beneficial for our low-frequency improvement!
The first attachment to this log compares the measurement I used as our new reference (in red, live trace), to the January 2023 reference in yellow. There is also the gwinc trace for 400 kW and 4.5 dB FDS in black.
The second attachment shows the new LHO reference in yellow and the new LLO reference in blue in the DARM FOM.
A variety of range metrics are attached comparing O4 performance to O3. (The times used for the sensitivity curves here are not the same as for the wall plot.)
Hanford's comoving volume sensitivity (proportional to its ideal detection rate) is now 1.8 times larger for systems below about 40 solar masses total. The volume increase for 1000 solar mass IMBH binaries is more than a factor of 4.
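For context on that factor, the surveyed comoving volume (and hence the ideal detection rate) scales as the cube of the range, so a factor of 1.8 in volume corresponds to roughly a 22% improvement in range:

  V \propto R^3  \Rightarrow  R_{new}/R_{old} = 1.8^{1/3} \approx 1.22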
Also uploaded to the DCC: https://dcc.ligo.org/T2300239
PEM measurements just completed and Robert swept the LVEA.
TITLE: 06/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 9mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: The IFO has been locked for 18 hours, and we've recovered the Mpc that were lost because I misread the SDF table. Currently finishing up some PEM measurements; then we should be back to observing within the hour.
R. Short, J. Driggers
Daniel had noticed that the ISS secondloop gain had been set back to -5 dB when the IFO relocked to 60W last night. Since we now want this at -2 dB in our 60W configuration (alog 70684), I've updated the ISS_acquisition_gain in lscparams.py to be -2 and committed to svn. While we were out of Observing this afternoon starting at 22:45 UTC, I changed the gain to -2 dB, accepted the change in SDF, and reloaded the IMC_LOCK guardian.
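For reference, the lscparams.py change amounts to a one-line edit along these lines (a sketch; the exact variable formatting and surrounding context in the file may differ):

  # lscparams.py (userapps) -- ISS secondloop acquisition gain, sketch only
  ISS_acquisition_gain = -2   # dB; 60 W configuration per alog 70684 (was -5)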
This usually doesn't work without adjusting the input offset. Also, you would want to test this with the IMC alone, since the AC-coupling is finicky. On the other hand, changing the gain after the servo is engaged is trivial.
Relock was fully automatic after one lockloss while finding IR. There was a missing comma that brought ISC_LOCK into error in LOWNOISE_LENGTH_CONTROL; an easy fix.
There were a few SDF diffs that looked like they needed to be accepted based on alog70648. Accepted with screenshots attached.
I turned on the CAL_AWG_LINES Guardian at request of Jeff. I had to change this node's nominal state to LINES_ON for it to be OK.
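For reference, a guardian node's nominal state is declared in its node code, so the change was effectively a one-line edit like the sketch below (the surrounding contents of the CAL_AWG_LINES module may differ):

  # CAL_AWG_LINES guardian node -- nominal-state declaration, sketch only
  nominal = 'LINES_ON'   # so the node reports OK while the extra lines run (presumably was 'IDLE')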
I'm thinking that this lower range is related to the squeezer. I've attached a screenshot of the FDS DARM FOM, where the live trace is above the reference at the same frequencies where DARM seems to be higher than normal. I followed the instructions on the Troubleshooting SQZ wiki to adjust the squeeze angle, but I wasn't able to make anything better, only worse.
I adjusted the sqz angle from 0630-0640 UTC.
Our range is staying low at ~125Mpc; there seems to be extra noise from 20-60Hz. Investigating.
There's a lot more coherence of PRCL and SRCL in the ongoing lock than the previous one. The thermalization cal lines are also very high in DARM and CHARD - maybe they weren't turned on until this new lock though.
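A hedged sketch of how that coherence could be checked offline with gwpy (the SRCL channel name and time span are my assumptions for illustration):

  # coherence of DARM (GDS strain) with the SRCL error signal in the ongoing lock
  from gwpy.timeseries import TimeSeries
  t0, t1 = 'June 23 2023 07:00', 'June 23 2023 07:10'
  darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', t0, t1)
  srcl = TimeSeries.get('H1:LSC-SRCL_OUT_DQ', t0, t1)
  # bring both onto a common sample rate before computing coherence
  rate = min(darm.sample_rate.value, srcl.sample_rate.value)
  coh = darm.resample(rate).coherence(srcl.resample(rate), fftlength=16, overlap=8)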
Tagging CAL regarding the turn on of CAL_AWG_LINES for this 60W lock stretch -- thanks TJ!
Tagging DetChar as well -- to note that we have 8 extra calibration lines on during this nominal low noise stretch that started at June 22 2023 05:04 UTC
These are in because we want to characterize the thermalization of the detector's DARM loop sensing and response functions now that we're operating at 60W rather than 75/76W. I hope to get a few more of these lock acquisitions with these extra lines on, and then we'll turn them off as we had done for the start of the engineering run.
If you'd like to create a data quality flag, you can find the status of these lines "in one go" by looking at the CAL_AWG_LINES guardian state channel,
H1:GRD-CAL_AWG_LINES_STATE_N
The numerical value of the channel is 10.0 when the extra calibration lines are ON (the state is called LINES_ON), and 2.0 when the lines are OFF (the state is called IDLE). See CAL_AWG_LINES_StateGraph for the flow of the state graph.
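A minimal sketch of turning that channel into ON/OFF segments with gwpy (the flag name and the one-hour span here are illustrative; the lock-stretch start time is from the entry above):

  # threshold the guardian state channel: LINES_ON = 10, IDLE = 2
  from gwpy.timeseries import TimeSeries
  state = TimeSeries.get('H1:GRD-CAL_AWG_LINES_STATE_N',
                         'June 22 2023 05:04', 'June 22 2023 06:04')
  lines_on = (state > 5).to_dqflag(name='H1:CAL-AWG_EXTRA_LINES_ON:1', round=True)
  print(lines_on.active)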
A retrospective on the lower range: The cause was a failure to capture the SRCLFF1 gain in SDF / Guardian during yesterday's power reduction from 75W to 60W, LHO:70648. See LHO:70712 and LHO:70710, where Tony recovered the correct gain of 2.1.
@DetChar -- it might be worth creating a data quality flag for this:
Observation segment start (with SRCLFF1 gain at 1.0): 2023-06-22 05:04:38 UTC / 22:04:38 PDT / 1371445496 GPS
Observation segment stop (with SRCLFF1 gain at 1.0): 2023-06-22 13:53:47 UTC / 06:53:47 PDT / 1371477245 GPS
RyanC just added the SRCLFF1 gain of 2.1 to lscparams, saved, and reloaded the ISC_LOCK guardian, so if we need to relock it'll come back with the correct gain. Note, though, that we expect to update this again later today.
Here's that SDF screenshot that I said I was going to attach. Turns out my tired brain had flipped setpoint and EPICS value in the tables. Doh! My fault.
Adding a quick comment to Jeff's note about lines, with a bit of relevant info from ER15.
Abby Wang and Athena Baches recently analyzed lines in May 2023 data, grouping those that evolve similarly in time. They found a cluster of lines corresponding to the awg lines, but not including any other entries. This is good news; it implies that there are *not* strong narrow artifacts with very similar histories-- i.e. these lines aren't causing unexpected strong lines elsewhere, which ought to have shown up in the same cluster. (Note: it's still possible that there are weak artifacts which aren't caught by this analysis.)
The attached plots show what the time evolution looks like: each row is a line (corresponding to 11.475, 11.575, 15.175, 15.275, 24.4, and 24.5 Hz), yellow = above threshold and blue = below threshold.
L. Dartez, J. Kissel
More details to come, but as of Jun 21 2023 23:30 UTC, we have updated the calibration to reflect the new IFO with input power back at 60W and all the other associated configuration changes, including but not limited to a SRCL offset of -175 [ct].
The calibration update was pushed based on the second -175 [ct] SRCL offset sensing function data taken in LHO:70683. Even though no new measurements of the actuators were taken, the N/ct actuation strength "free" parameters were also updated with, essentially, a new MCMC run on the last, most recent, old data from May 17 2023 (LHO:69684). Here are the "free parameter" values exported to foton:
$ pydarm export
searching for 'last' report...
found report: 20230621T211522Z
using model from report: /ligo/groups/cal/H1/reports/20230621T211522Z/pydarm_H1_site.ini
filter file: /opt/rtcds/lho/h1/chans/H1CALCS.txt
Hc: 3.4207e+06 :: 1/Hc: 2.9234e-07
Fcc: 439.33 Hz
Hau: 7.5083e-08 N/ct
Hap: 6.2353e-10 N/ct
Hat: 9.5026e-13 N/ct
filters (filter:bank | name:design string):
CS_DARM_ERR:10            O4_Gain:zpk([], [], 2.9234e-07)
CS_DARM_CFTD_ERR:10       O4_Gain:zpk([], [], 2.9234e-07)
CS_DARM_ERR:9             O4_NoD2N:zpk([439.32644887584786], [7000], 1.0000e+00)
CS_DARM_ANALOG_ETMX_L1:4  Npct_O4:zpk([], [], 7.5083e-08)
CS_DARM_ANALOG_ETMX_L2:4  Npct_O4:zpk([], [], 6.2353e-10)
CS_DARM_ANALOG_ETMX_L3:4  Npct_O4:zpk([], [], 9.5026e-13)
The calibration report on the MCMC fitting for free parameters, as well as the GPR fit based on the two measurements at -175 [ct] (20230621T211522Z in LHO:70683 and 20230621T191615Z in LHO:70671), is attached below for convenience, but has been archived on the LDAS cluster under H1_calibration_report_20230621T211522Z.pdf
For the primary metric of how the calibration's quality changed across the 75W-to-60W transition, the SRCL offset change, and the calibration push, see LHO:70705. I copy and paste the attached image here for convenience. Also repeating Louis: the DTT template for this measurement is stored in
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O4/H1/Measurements/FullIFOSensingTFs/20230621_systematic_error_deltal_external_gds_calib_strain.xml
We changed is_pro_spring to False in the pyDARM parameter model set. Commit: 353de502.
Attached are the raw, blow-by-blow notes I took during yesterday's calibration push, which highlight all the command-line commands and actions we needed to take in order to update the calibration. Recapping here with a little more procedural clarity. (Any command recalled without a path are/were/may be run in any new, fresh terminal; we did not need to invoke any special conda environment thanks to the hard work done behind the scenes by the pydarm-cmd team.)

(0) If at all possible, understand what you expect to change in the calibration ahead of time. If that is *limited* to something that can only be measured with the full IFO, i.e. you expect *only* a change in the "free parameters" (overall sensing function gain, DARM cavity pole frequency, or any of the three ETMX UIM, PUM, TST actuator strengths), then you run through the process outlined below as we did yesterday. Other changes to the DARM loop, like electronics changes or changes to the computational arrangement, mean you have to do a more in-depth characterization of that thing, update the DARM loop model parameter set, *then* start at (1).

(1) Measure something new about the IFO. In this case we *knew* that we expected a change in the interferometric response of the IFO because of the ring heater changes and input power change, so we remeasured the sensing function, expecting only the optical gain and the cavity pole to change.
$ pydarm measure --run-headless bb sens pcal
We, of course, should be out of observing, and the ISC_LOCK guardian should be in NLN_CAL_MEAS. When the measurement is complete, you can do steps (2) through (6) with the IFO *back* in NOMINAL_LOW_NOISE, and you can even go back into OBSERVING during that time.

(2) Process that measurement, and create the folder of material that's required for that processing, as though it were a part of the ongoing "epoch" of measurements where you expect nothing to have changed about the DARM loop other than the time-dependent corrections to the free parameters. This gives you a "report" that shows the residuals between the last installed model of the IFO and your current measurement, compared to the rest of the measurement/model residuals in the inventory for that last "epoch." In this way, you can confirm or refute your expectations of what has changed.
$ pydarm report
which generates the folder in /ligo/groups/cal/H1/reports/20230621T191615Z/

(3) Looking at the first results, we were disappointed that occasionally the MCMC fit would land on a parameter hyperspace island that had large SRC detuning spring frequencies, even though the lower frequency limit of the data fed into the fitter was ~80 Hz. As such, we adjusted the *default* model parameter set, /ligo/groups/cal/H1/ifo/pydarm_H1.ini, changing the following parameter:
Line 15    is_pro_spring = False
and re-ran the report,
$ pydarm report --force
in order to re-run the MCMC. This worked, so we committed pydarm_H1.ini to the ifo/H1/ repo as git hash 353de502.

(4) After looking through the history of measurement/model residuals, you should then have an understanding of what you want to *tag* as "valid" and of whether your new measurement *is* in fact the boundary of a new epoch. This may also be the time when you *don't* like what you see, so you modify the controls settings of the IFO to change it further and go back to step (1). As you can see from LHO aLOGs 70671, 70677, and 70683, we were doing just that.
In the end, we had *two* measurements in "the new epoch" that we liked, and one measurement in the middle -- technically its *own* epoch -- that we didn't like. So, after processing all the data and making no updates to the report tags so we could see the whole history of the sensing function, we tagged the reports in the following way:
$ cd /ligo/groups/cal/H1/reports/
$ pydarm ls -r    # before tagging
20230620T234012Z    valid    # Last valid 75W sensing function data set
20230621T191615Z             # First new 60W data set, with SRCL offset -175 [ct]
20230621T201733Z             # Second new 60W data set, with SRCL offset -165 [ct]
20230621T211522Z             # Third new 60W data set, with SRCL offset -175 [ct]
$ # validating the First and Third 60W data sets, both with -175 [ct] SRCL offset
$ touch 20230621T191615Z/tags/valid
$ touch 20230621T191615Z/tags/epoch-sensing
$ touch 20230621T211522Z/tags/valid
$ pydarm ls -r    # after tagging
20230620T234012Z    valid
20230621T191615Z    valid epoch-sensing
20230621T201733Z
20230621T211522Z    valid

(5) Now that these tags are set up -- and specifically the epoch-sensing tag -- only these new 60W data sets are included in the history, which means only those data sets are stacked and fit with the GPR. Thus, this report generation run will be the "final" report that generates what we end up exporting out to the calibration pipeline. Importantly, even though the epoch boundary is defined by the *first* measurement, 20230621T191615Z, the parameters that will be installed are defined by the MCMC of the *latest* measurement, 20230621T211522Z. This works because we're assuming the IFO is the same within this entire boundary, so we should get equivalent answers (within uncertainty, and modulo time-dependent correction factors) if we MCMC any of the measurements in the epoch.
$ pydarm report --force
yields a good report, with "free parameters," foton exports, FIR filters, MCMC posteriors, and GPR fits that are ready to export to the calibration pipeline. Also note that all of these re-runs of the report ("pydarm report --force") are *over-writing* the contents of the report, so if you want to save any interim products you must move them out of the way to a different location and/or different name.

(6) We can validate what we are about to push out into the world with the dry run command,
$ pydarm export
where, if you don't specify the report ID, it exports the latest report. In this case, the latest is 20230621T211522Z, so we do want this simplest use of the command. That spits out text like what's shown in LHO:70699. Another option is
$ pydarm status
which spits out a comparison between what's *actually* installed in the front end and the latest report (which, in this case, is what we're *about* to install).

(7) If you're happy with what you see, then it's time to shove "the calibration" out into the world.
(a) Presumably, the IFO is still locked, in NOMINAL_LOW_NOISE, and maybe even in OBSERVING. Warn folks that you're about to take the IFO out of OBSERVING, and that the DARM_FOM on the wall is about to go nuts, but the IFO is fine. Try to do steps (b) through (g) as quickly but accurately/completely/carefully as possible.
(b) Push EPICS records to the front end and save new foton files.
$ pydarm export --push
(c) Open up the CAL-CS GDS-TP screen, and look at the DIFF of filter coefficients. Hit the LOAD COEFFICIENTS button if you see what you expect from the DIFF.
(d) On the same screen, open up the SDF OVERVIEW.
Review the changes and accept if you see what you expect. Now everything's updated in the front end, so it's time to migrate stuff out to the cluster so the GDS and uncertainty pipelines get updated.
(e) Add an additional tag to the report which you just pushed:
$ touch /ligo/groups/cal/H1/reports/20230621T211522Z/tags/exported
$ pydarm ls -r
20230620T234012Z    valid
20230621T191615Z    valid epoch-sensing
20230621T211522Z    exported valid
(f) Archive all the reports that are a part of this wonderful new epoch, which pushes the whole folder so that it includes the tags. Having the "exported" tag is particularly important for the GDS pipeline.
/ligo/groups/cal/H1/reports$ arx commit 20230621T191615Z
/ligo/groups/cal/H1/reports$ arx commit 20230621T211522Z
(g) Restart the GDS pipeline, which picks up the .npz of filter coefficients from the latest report marked with the "exported" tag.
$ pydarm gds restart
This opens up prompts from both DMT machines, dmt1 and dmt2, to say "yes" to confirm that you want to restart. After restarting the GDS pipeline, you can check the status of the machines as well:
$ pydarm gds status

(8) Once you're done with the GDS pipeline restart, you've got to wait ~2-5 minutes for the pipeline to complete its restart. To check whether the pipeline is back up and running, head to the "grafana" calibration monitoring page. Presumably, the IFO is still in NOMINAL_LOW_NOISE, so eventually the live measurement of response function systematic error should begin to reappear, hopefully even closer to 1.0 mag, 0.0 deg phase. While you're waiting, you can also pull up trends of
- the front-end computed TDCFs, to see if those move closer to 1.0 (or, in the case of f_cc, closer to the MCMC value)
- the front-end computed DELTAL_EXTERNAL / PCAL systematic error transfer function, to see if that moves closer to 1.0
(see the sketch at the end of this walkthrough for one way to pull these trends). In addition, the newest, latest *modeled* systematic error budget only gets triggered once every hour, so you just have to be patient on that one and check in later.

(9) Once everything is settled, take the ISC_LOCK guardian to NLN_CAL_MEAS, and take a broad-band PCAL injection for a final, high-density frequency resolution, post-install validation a la LHO:70705
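As a complement to step (8), a hedged sketch of trending the front-end TDCFs with gwpy (the CAL-CS TDEP channel names below are my best guess and should be checked against the CAL MEDM screens):

  # trend a few of the front-end computed TDCFs across the calibration push
  from gwpy.timeseries import TimeSeriesDict
  chans = ['H1:CAL-CS_TDEP_KAPPA_C_OUTPUT',
           'H1:CAL-CS_TDEP_KAPPA_TST_REAL_OUTPUT',
           'H1:CAL-CS_TDEP_F_C_OUTPUT']
  tdcfs = TimeSeriesDict.get(chans, 'Jun 21 2023 22:00', 'Jun 22 2023 02:00')
  plot = tdcfs.plot()
  plot.savefig('tdcf_trends.png')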