The FMCS team has its own set of alarms for the reverse osmosis (RO) system, so I've removed the RO alarm from the Alarm Handler system in the control room. Operators will no longer need to alert the FMCS team of RO-related issues unless there are major problems.
The file for this lives in /opt/rtcds/lho/h1/alh/fmcs/ as a symlink to /opt/rtcds/userapps/release/cds/h1/alarmfiles/fmcs.alhConfig, which is under svn control.
Four pressure gauges (2 per supply and return line) were replaced today after the previous set was damaged during the runaway of EY Chiller 1. There is some disagreement between most of these gauges across all AHUs. Because the new gauges all read nearly identical pressures, I will likely replace most or all of the others in the near future. I also observed about 10 psi of pressure loss since replenishing on Friday. Glycol was added via the makeup tank to get back to a 30 psi operating pressure. The EY makeup tank is now at about 50% capacity. T. Guidry
2053 UTC: Lock loss (GPS 1377550449) after exactly 27 hours. No immediate cause.
L. Dartez, J. Kissel

More details to come, but as of 2023-08-31 19:10:00 UTC (12:10 PDT), we've updated several corners of the calibration for the first time since Jun 21 2023 (see LHO:70693) in order to:
- Update the static model of the test mass actuation strength to better match the current time-dependent correction factor value, because it had gotten large enough that the approximations used in all TDCF calculations would have started to break down (LHO:72416).
- Update the "DARM loop modeled transfer functions at calibration line frequencies" EPICS records to account for the new DARM2 FM8 boost (LHO:72562 and LHO:72569).
- Update the sensing function (only a little bit) because we're now regularly operating with OM2 "hot" as of 2023-07-19 (LHO:72523).
- Start using the newly re-organized pydarm librarianship, including the use of new simulines-measured IFO sensing and actuation function data (aLOG pending).
- Fix an unimpactful bug in the front-end computed version of the live measured response function systematic error, in which the local oscillator frequency for the demod of the calibration line that recently moved from 102.13 to 104.23 Hz had not been updated (to be commented below).

The exciting news is that, with all the metrics we have on hand, these calibration updates made everything better.
- 1st attachment: at the boundary of the change, we see the "relative" time-dependent correction factors change rapidly from non-unity values to unity values (and the cavity pole doesn't change, as expected).
- 2nd attachment: at the boundary of the change, we see the front-end computed live measured response function systematic error go from large values to values close to unity magnitude and zero phase.

We're still tracking down some bugs in the *modeled* systematic error budget, which has been broken since yesterday, Aug 30 2023 19:50 UTC, *and* we're not sure if the *GDS*-processed live measured response function systematic error is running yet, but we'll keep you posted. The comments below will also contain some updated details on the process for this update.
Attaching SDF tables for the cal update and for the H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ change. All changes have been accepted and saved in the OBSERVE and safe snap files.
I'm also including a screenshot of the H1CALCS filter updates (H1CALCS_DIFF.png).
The interferometric-measurement portion of this calibration push was informed by report 20230830T213653Z, whose measurement is from LHO:72573.

    parameter         foton value    physical units value
    ----------------  -------------  -----------------------
    1/Hc   [m/ct]     2.93957e-07    3.4019e+06 [ct/m]  (* 2475726 [mA/ct] * 1e-12 [m/pm] = 8.422 [mA/pm])
    f_CC   [Hz]       438.694
    L1/EX  [N/ct]     7.53448e-08    1.60487 [N/A]
    L2/EX  [N/ct]     6.24070e-10    0.03047 [N/A]
    L3/EX  [N/ct]     1.02926e-12    2.71670e-11 [N/V^2]  (with 3.3 [DAC V_bias] * 40 [ESD V_bias / DAC V_bias] = 132 [ESD V_bias])
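As a sanity check of the unit bookkeeping in the table above, here is a small Python sketch reproducing the 1/Hc conversion chain (the 2475726 [mA/ct] figure is the OMC DCPD to DARM_ERR magnitude quoted with the measurement below); this is plain arithmetic on the quoted numbers, not pydarm output.

# Plain-arithmetic check of the sensing numbers in the table above (not pydarm).
inv_Hc_foton = 2.93957e-07          # 1/Hc foton value [m/ct]
Hc_ct_per_m = 1.0 / inv_Hc_foton
print("Hc = %.4e ct/m" % Hc_ct_per_m)            # ~3.4019e+06 ct/m

mA_per_ct = 2475726.0               # OMC DCPD [mA] per DARM_ERR [ct], measured at 5 Hz
Hc_mA_per_pm = Hc_ct_per_m * mA_per_ct * 1e-12   # 1e-12 m per pm
print("Hc = %.3f mA/pm" % Hc_mA_per_pm)          # ~8.422 mA/pm

esd_bias_volts = 3.3 * 40           # DAC bias [V] * driver gain = 132 V ESD bias
print("ESD bias = %d V" % esd_bias_volts)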
I attach here a log of the process for updating the calibration. A lot of the work is much like it was in June -- see LHO:70735 -- but there are a few new bells and whistles that we used. Plus, there are a few extra steps at the end to validate that downstream products look good -- namely, that the end-game plots of the *measured* and *modeled* systematic error from https://ldas-jobs.ligo-wa.caltech.edu/~cal/ agree. Indeed, in doing this, we found some bugs that we're still sorting out. I also note that Louis did a TON of work leading up to today, generating the last ~2 months of reports, re-organizing and re-creating them, defining epoch tags, etc. So steps (0) through (5) were taken care of before today, and we started around step (6). Steps (6)-(9) out of (11) -- using today's procedure's numbering -- worked really well and went super smoothly. The procedure is getting quite good!
Following the usual instructions on the wiki, I took a broadband measurement followed by the simulines.
Start time:
PDT: 2023-08-31 12:35:20.025399 PDT
UTC: 2023-08-31 19:35:20.025399 UTC
GPS: 1377545738.025399
End time:
PDT: 2023-08-31 12:57:25.118313 PDT
UTC: 2023-08-31 19:57:25.118313 UTC
GPS: 1377547063.118313
2023-08-31 19:57:24,730 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230831T193521Z.hdf5
2023-08-31 19:57:24,760 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230831T193521Z.hdf5
2023-08-31 19:57:24,771 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230831T193521Z.hdf5
2023-08-31 19:57:24,782 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230831T193521Z.hdf5
2023-08-31 19:57:24,793 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230831T193521Z.hdf5
Note, this is the first measurement taken *after* the 2023-08-31 19:10 UTC calibration update (LHO:72594). Also:
$ gpstime 1377545738
PDT: 2023-08-31 12:35:20.000000 PDT
UTC: 2023-08-31 19:35:20.000000 UTC
GPS: 1377545738
While running initial alignments, we have been getting saturations for OM1 and OM2 when running through the DOWN state of ALIGN_IFO. These have been traced back to the yaw integrators for each suspension's top mass not being properly cleared. Since it seems like the integrators are, in fact, being cleared but then immediately filled up again, I've moved the clear integrators step to be the last step done in the DOWN state and added a one second wait timer right before it. This hopefully gives more time for everything to settle before the top mass integrators are cleared, but we'll see the next time an initial alignment is run.
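For illustration, here is a minimal guardian-style sketch (not the actual ALIGN_IFO code) of the reordering described above: everything else in DOWN runs first, then a one-second settle, then the top-mass yaw integrators are cleared last. The channel names and the RSET convention are placeholders for whatever the real code uses.

# Sketch only -- not the real ALIGN_IFO guardian code. Channel names are placeholders.
import time

def down_reordered(ezca):
    # ... all other DOWN-state actions run first (omitted here) ...

    # Give the outputs a moment to settle so nothing immediately re-fills the integrators.
    time.sleep(1)

    # Clear the OM1/OM2 top-mass yaw integrator history as the very last step.
    for optic in ('OM1', 'OM2'):
        ezca['SUS-{}_M1_LOCK_Y_RSET'.format(optic)] = 2  # placeholder channel; RSET=2 clears filter history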
ALIGN_IFO has been loaded and changes committed to svn.
All dust monitor pumps are operating within temp and pressure specs.
Thu Aug 31 10:07:07 2023 INFO: Fill completed in 7min 3secs
TITLE: 08/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY: Locked for 21 hours. PR2, SR2, MC2 saturations at 1132 UTC, but no obvious seismic or other environmental noise at that time.
CDS overview - OK
Below is the summary of the DQ shift for the week of 21-27 August 2023
The complete DQ shift report may be found here.
TITLE: 08/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
Very quiet evening with one candidate event, S230831e. H1 was locked and observing for the whole shift; current lock stretch is at 13 hours.
LOG:
No log for this shift.
State of H1: Observing at 151Mpc
H1 has been locked for 9 hours and observing the whole shift. One candidate event two hours ago, S230831e, quiet evening otherwise.
Elenna, Gabriele, Camilla
This afternoon we updated the MICH feedforward; it is now back to around the level it was last Friday (comparison attached). This was last done in 72430. It may have needed to be redone so soon because of the 72497 alignment changes on Friday.
The code for excitations and analysis has been moved to /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/
Elenna updated the guardian to engage FM1 rather than FM9, and the SDF change was accepted. New filter attached. I forgot to accept this in the h1lsc safe.snap and will ask the operators to accept MICHFF FM1 when we lose lock or come out of observe (72431), tagging OpsInfo.
Attached is a README file with instructions.
Accepted FM1 in the LSC safe.snap
Calling out a line from the above README instructions that Jenne pointed me to, which confirms my suspicion about *why* the bad FF filter's high-Q feature showed up at 102.128888 Hz, right next to the 102.13 Hz calibration line:
"IFO in Commissioning mode with Calibration Lines off (to avoid artifacts like in alog#72537)."
In other words: go to NLN_CAL_MEAS to turn off all calibration lines before taking active measurements that inform any LSC feedforward filter design.
Elenna says the same thing -- quoting the paragraph later added in an edit to LHO:72537:
How can we avoid this problem in the future? This feature is likely an artifact of running the injection to measure the feedforward with the calibration lines on, so a spurious feature right at the calibration line appeared in the fit. Since it is so narrow, it required incredibly fine resolution to see it in the plot. For example, Gabriele and I had to bode plot in foton from 100 to 105 Hz with 10000 points to see the feature. However, this feature is incredibly evident just by inspecting the zpk of the filter, especially if you use the "mag/Q" view in foton and look for the poles and zeros with a Q of 3e5 (!!). If we both run the feedforward injection with cal lines off and do a better job of checking our work after we produce a fit, we can avoid this problem.
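A quick way to automate the zpk inspection described above: given a fitted filter's s-plane poles and zeros, compute each root's Q and flag anything suspiciously narrow. This is a generic sketch, not the actual fitting code; the example roots below are made up.

# Sketch: flag suspiciously high-Q poles/zeros in a fitted filter.
# The root values below are made-up placeholders, not the real MICH FF fit.
import numpy as np

def q_of_root(root_hz):
    """Q of a complex s-plane root expressed as f_real + 1j*f_imag in Hz."""
    root = 2 * np.pi * np.asarray(root_hz, dtype=complex)
    return np.abs(root) / (2 * np.abs(root.real))

def flag_high_q(roots_hz, q_threshold=100):
    return [(r, q_of_root(r)) for r in roots_hz if q_of_root(r) > q_threshold]

# Example: one benign pole, and one ~3e5-Q pole sitting right next to 102.13 Hz.
poles_hz = [-30 + 45j, -0.00017 + 102.1289j]
for root, q in flag_high_q(poles_hz):
    print("high-Q root at |f| = %.4f Hz, Q = %.3g" % (abs(root), q))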
I ran a 2nd calibration sweep today, starting with broadband:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20230830T212846Z.xml
Simulines:
2023-08-30 21:58:00,615 | INFO | Commencing data processing.
2023-08-30 21:58:56,567 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230830T213653Z.hdf5
2023-08-30 21:58:56,585 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230830T213653Z.hdf5
2023-08-30 21:58:56,611 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230830T213653Z.hdf5
2023-08-30 21:58:56,636 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230830T213653Z.hdf5
2023-08-30 21:58:56,661 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230830T213653Z.hdf5
GPS start: 1377466629.697354
GPS stop: 1377467954.983395
We think this is a more thermalized measure of the IFO after installing the new DARM2 FM8 boost filter, and we'll likely use *this* measurement to inform a calibration update. aLOGs of the DARM2 FM8 boost filter change: LHO:72562 and LHO:72569. Previous unthermalized measurement that also had the new DARM filter in place: LHO:72560.
This measurement has been processed by pydarm and can now be found under the report 20230830T213653Z; attached here for reference. This measurement served as the basis for the update to the calibration on 2023-08-31 -- see LHO:72594.
I've measured the OMC DCPD "rough [mA]" to DARM_ERR [ct] transfer function during this measurement, and found the magnitude to be 2475726 [mA/ct] at 5 [Hz]. The DTT template is committed to the CalSVN under /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O3/H1/Measurements/FullIFOSensingTFs as 2023-08-30_2130UTC_H1_OMCDCPDSUM_to_DARMIN1.xml
Vicky, Naoki, Sheila, Daniel
Details of homodyne measurement:
This morning Daniel and Vicky reverted the cable change to allow us to lock the local oscillator loop on the homodyne (undoing the change described in 69013). Vicky then locked the OPO on the seed using the dither lock and increased the power into the seed fiber to 75mW (it can't go above 100mW for the safety of the fiber switch). We then reduced the LO power so that the seed and LO powers were matched on PDA, and adjusted the alignment of the sqz path to get good (~97%) visibility measured on PDA. We removed the half wave plate from the seed path without adjusting its rotation. With it removed, we checked the visibility on PDB and saw that the powers were imbalanced.
Polarization issue (revisiting the polarization of sqz beam, same conclusion as previous work):
There is a PBS in the LO path close to the homodyne, so we believe that the polarization should be set to horizontal at the beamsplitter in that path. The LO power on the two PDs is balanced (imbalanced by 0.4%), so we believe this means that the beamsplitter angle was set correctly for p-polarized light as we found it, and there is no need to adjust the beamsplitter angle. However, when we switched to the seed power, there was a 10% difference between the power on the two PDs without the half wave plate in the path. We put the half wave plate back, and the powers were again balanced (with the HWP angle as we found it). We believe this means that the polarization of the sqz path is not horizontal arriving at the homodyne, and that the half wave plate is restoring the polarization to horizontal. If the polarization rotation is happening on SQZT7, the half wave plate should be able to mitigate the problem; if it's happening in HAM7, it will look like a loss for squeezing in the IFO. Vicky re-adjusted the alignment of the sqz path after we put the HWP back in, because it slightly shifts the alignment. After this the visibility measured on PDA is 95.7% (efficiency of 91.6%) and on PDB the visibility is 96.9% (efficiency of 93.9%).
SQZ measurements, unclipping:
While the IFO was relocking Vicky and Naoki measured SQZ, SN, ASQZ and mean SQZ on the homodyne and found 4.46dB sqz, 10.4dB mean sqz and 13.14dB anti-sqz measured from 500-550Hz. Vicky then checked for clipping, and saw some evidence of small clipping (order 1% clipping with 10urad yaw dither on ZM2). We went to the table to check that the problem wasn't in the path to the IR PD and camera, we adjusted the angle of the 50/50 beamsplitter that sends light to the camera, and set the angle of the camera to be more normal to the PD path. This improved the image quality on the camera. Vicky moved ZM3 to reduce the clipping seen by the IR PD slightly. She restored good visibility by maximizing the ADF, and also adjusted both PSAMs, moving ZM4 from 100V to 95V. (We use different PSAMs for the homodyne than the IFO). After this, she re-measured sqz at 800-850Hz: 5.2dB sqz, 13.6dB anti-sqz, and 10.6dB mean sqz.
Using the nonlinear gain of 11 (Naoki and Vicky checked its calibration yesterday) and the equations from Aoki, this sqz/asqz level implies a total efficiency of 0.72 without phase noise; the mean sqz measurement implies a total efficiency of 0.704. From the sqz loss spreadsheet we have 6.13% known HAM7 losses; if we also use the lower visibility measured using PDA, we should have a total efficiency for the homodyne of 0.916*0.9387 = 0.86. This means that we would infer an extra 16-18% losses from these homodyne measurements, which seems too large for homodyne PD QE and optics losses in the path. Since we believe that the polarization issue is reflected in the visibility, these are extra losses in addition to any losses the IFO sees due to the polarization issue.
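For reference, a minimal sketch of the kind of calculation described above, assuming the standard single-mode lossy-OPO model (Aoki-style expressions) at frequencies well below the OPO linewidth and neglecting phase noise and dark noise; with a nonlinear gain of 11 and the 5.2 dB / 13.6 dB measurement it gives a total efficiency of roughly 0.72.

# Sketch: infer total efficiency from measured sqz/anti-sqz and the OPO nonlinear
# gain, using the standard lossy single-mode OPO model (Aoki-style), well below
# the OPO linewidth. Phase noise and dark noise are ignored here.
import numpy as np

def efficiency_from_sqz(sqz_db, asqz_db, nonlinear_gain):
    # nonlinear gain g relates to the normalized pump amplitude sx: g = 1/(1 - sx)**2
    sx = 1 - 1 / np.sqrt(nonlinear_gain)
    gen_sqz  = 4 * sx / (1 + sx)**2      # generated squeezing factor (below shot noise)
    gen_asqz = 4 * sx / (1 - sx)**2      # generated anti-squeezing factor
    eta_from_sqz  = (1 - 10**(-sqz_db / 10)) / gen_sqz
    eta_from_asqz = (10**(asqz_db / 10) - 1) / gen_asqz
    return eta_from_sqz, eta_from_asqz

print(efficiency_from_sqz(5.2, 13.6, 11))    # ~(0.72, 0.71)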
Screenshot from Vicky shows the measurement made including the dark noise.
Including losses from phase noise of 20 mrad, dark noise 21 dB below shot noise, and a more accurate calibration of our measured nonlinear gain to generated sqz level (from the ADF paper vs. the Aoki paper Sheila referenced), the total efficiency could marginally be increased to 0.74. This suggests 26% loss based on sqz/asqz, which is also consistent with the 27% loss calculated separately from the mean sqz and generated sqz levels.
From the sqz wiki, we could budget 17% known homodyne losses. This includes 7% in-chamber loss to the homodyne (opo escape efficiency * ham7 optics losses * bdiverter loss) and 11% HD on-table losses (incl. 2% optics losses on SQZT7, and visibility losses of 1 - 91.6% ≈ 8% as Sheila said above; note this visibility was measured before changing alignments for the -5.2dB measurement, so there remains some uncertainty from visibility losses).
In total, after including more loss effects (phase noise, dark noise), a more accurate generated sqz level, and updating the known losses -- of the 27% total HD losses observed, we can plausibly account for 17% known losses, lowering the unexplained homodyne losses to ~10-11% (this is still high).
From Sheila's alog LHO:72604 regarding the quantum efficiency of the homodyne photodiodes (99.6% QE for PDA, and 95% QE for PDB), if we accept this at face value (which could be plausible due to e.g. the angle of incidence on PD B), this would change the 1% budgeted HD PD QE loss to 5% loss.
This increases the total budgeted/known homodyne losses to ~21%: 1 - [0.985 (OPO) * 0.953 (HAM7) * 0.99 (bdiverter) * 0.98 (on-table optics loss) * 0.95 (PD B QE) * 0.916 (HD visibility)].
From the 27% total HD losses observed, we can then likely account for about 21% known losses (~7% in-chamber, ~15% on-table), lowering unexplained homodyne losses to < 7%.
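A trivial check of the budget arithmetic above (numbers copied from the previous paragraph):

# Known homodyne efficiency budget, numbers copied from the text above.
terms = {
    "OPO escape":      0.985,
    "HAM7 optics":     0.953,
    "beam diverter":   0.99,
    "on-table optics": 0.98,
    "PD B QE":         0.95,
    "HD visibility":   0.916,
}
eta_known = 1.0
for name, t in terms.items():
    eta_known *= t
print("known efficiency = %.3f, known loss = %.1f%%" % (eta_known, 100 * (1 - eta_known)))  # ~0.79, ~21%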
Since the results from yesterday's quarter-bias test (66810) seem inconsistent with Wednesday's test (66751), I'm trying a repeat of the test with lines on and off so that we can have the same number of averages for all of these configurations.
The SRCL cleaning probably needs retuning because of the ring heater change; we've got pretty high SRCL coherence up to 50 Hz right now. This does reproduce the earlier result that the noise is higher with 1/4 bias, and it doesn't seem to depend on having the lines on or off. For now I've left the IFO with all the lines back on and full bias; Robert will try some injections in a little while. I'll add a plot to this alog later on.
The first attachment here shows the main result of this test: the noise from 20-40 Hz is higher with the reduced bias, in a similar way to the first test (66751). A secondary thing to note is that we still do not see a broad reduction in noise when the ADS lines are off, as was seen at LLO.
One possible explanation for this increase in noise would be the ESD nonlinearity. We currently don't run with any linearization on the ESD. The ESD actuation is described by Eq 3 in T1700446, and in many other places.
Rearranging Eq. 3 into terms linear and quadratic in the signal voltage (and dropping the static terms):
F = [2*(gamma - alpha)*V_bias + beta - beta2] * V_signal + (alpha + gamma) * V_signal^2
Aside: understanding the gain scaling we needed to match the linear response.
The table in 66751 shows how I adjusted the digital gain in the signal electrode paths to keep the overall loop gain the same. If we reduce V_bias from V_b1 to V_b2 and compensate with digital gain in the signal path to keep the linear force the same (so V_s becomes g*V_s), the gain we need to apply is:
g = [ 2(gamma - alpha) V_b1 + beta - beta2 ] / [ 2(gamma - alpha) V_b2 + beta - beta2 ]    (V_b1 = -447 V, V_b2 = -124 V)
We can check this against the gain that we needed using some old in-lock charge measurements. If the beta terms are both zero, we'd see the gain scale linearly with the bias (g = 3.6). For the coefficients measured in 56613 we'd expect g = 1.34, and for the coefficients measured in 38656 we'd expect g = 2.34. So, some up-to-date in-lock charge measurements could help us understand if the gain scaling we see makes sense with this math, but the variation in past measurements has been more than enough to encompass the gain scaling that we saw this time. This means that if we were to run at a reduced bias, our ESD actuation strength would probably vary more with the distribution of charge. A worked sketch of this gain-scaling formula follows below.
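A minimal sketch of the gain-scaling formula above. The only numbers taken from the text are the two bias voltages; the coefficient values passed in are arbitrary placeholders (the 56613 / 38656 coefficients are not reproduced here), and the zero-beta limit reproduces the g = 3.6 linear scaling mentioned above.

# Sketch of the signal-path gain scaling needed when reducing the ESD bias,
# per the formula above. Coefficient values here are placeholders.
def gain_scale(alpha, gamma, beta, beta2, v_b1, v_b2):
    """Digital gain g that keeps the linear ESD force term unchanged
    when the bias voltage is reduced from v_b1 to v_b2."""
    num = 2 * (gamma - alpha) * v_b1 + beta - beta2
    den = 2 * (gamma - alpha) * v_b2 + beta - beta2
    return num / den

v_b1, v_b2 = -447.0, -124.0   # bias voltages from the test [V]

# Limit with both beta terms zero: pure linear scaling with bias, g = V_b1/V_b2 ~ 3.6
print(gain_scale(alpha=1.0, gamma=2.0, beta=0.0, beta2=0.0, v_b1=v_b1, v_b2=v_b2))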
Projection of quadratic contribution from ESD:
As we lower the bias and increase the voltage applied to the signal electrodes, the quadratic term will become larger and might introduce noise to DARM. The quadratic term in the signal voltage is (alpha + gamma) * V_s^2. I've added this to the noise budget with the coefficients measured in 56613, using the LVESDAMON channel, which is calibrated into volts applied to the ESD (see ESD_Quadratic in budget.py). The second and third attachments show the projections this makes with the quarter-bias and normal (full-bias) configurations. While this does predict upconversion around the calibration and ADS lines with the quarter bias, it doesn't predict well all the extra noise introduced in the quarter-bias test. I'm hoping to do a repeat of the quarter-bias test with a line injected on the ESD to measure the quadratic term directly rather than inferring it from the old charge measurements. A sketch of the quadratic-term projection idea follows below.
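A minimal sketch of the idea behind the quadratic-term projection (not the actual ESD_Quadratic budget code): square a synthetic signal-electrode voltage time series containing a strong line plus broadband drive, scale by an assumed (alpha + gamma) coefficient, and look at the spectrum to see the upconversion around the line. All numbers here are placeholders.

# Sketch of the quadratic ESD force term projection -- NOT the real
# ESD_Quadratic budget code. Coefficient and signal values are placeholders.
import numpy as np
from scipy.signal import welch

fs, dur = 2048, 64                       # sample rate [Hz], duration [s]
t = np.arange(int(fs * dur)) / fs

# Placeholder signal-electrode voltage: a strong line plus broadband drive
rng = np.random.default_rng(0)
v_signal = 5.0 * np.sin(2 * np.pi * 33.0 * t) + 0.5 * rng.standard_normal(t.size)

alpha_plus_gamma = 2e-10                 # placeholder coefficient [N/V^2]

f_quadratic = alpha_plus_gamma * v_signal**2   # quadratic force term [N]

# The spectrum of the quadratic term shows a DC offset, a line at 2*33 Hz, and
# upconverted noise sidebands around the line -- the effect discussed above.
freqs, psd = welch(f_quadratic, fs=fs, nperseg=8 * fs)
asd = np.sqrt(psd)
print("quadratic-term ASD near 66 Hz: %.2e N/rtHz" % asd[np.argmin(np.abs(freqs - 66))])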
Lance is using the times above for a comparison to recent data, and we noticed that I made a typo above. As the legend in the screenshot indicates, the full-bias, lines-off time is 17:49 UTC 1/14/2023.