Jennie W, Sheila
Since our range has been decreasing over the last week and the OMC QPD offsets did not give us a definitive long-term gain in kappa C, we have reverted the change we made on Monday.
We may want to revisit these offsets once we sort out the other problems that are currently having a larger impact on our range.
See the ndscope below for how this affected the range and optical gain in the short term. This picture is slightly confusing because the calibration was being updated, so kappa C will have been reset at some point before or during our measurements.
The other two attached images show the QPD offsets being accepted in OBSERVE and SAFE snap files.
Thu Feb 27 10:05:41 2025 INFO: Fill completed in 5min 38secs
For this fill I tested that the rate-of-change trip value can be changed in the configuration yaml file and then loaded into the code. For this test it was reduced from 60.0 to 50.0 DEGC.
The code which generates the plot now dynamically reads the trip PV to set the horizontal bar.
It looks like we tripped on a TC-A sputter just before the LN2 got flowing, but it looks like a good fill.
The code currently stops updating the ROC channels at the time the fill is terminated. These channels are then zeroed at the start of the next fill. This is why the lower panel in the plot has bars behind the legend from yesterday's fill, and why these channels flatline from the end time onwards.
I'll work on a code change to continue calculating the ROC for a period of time after the fill ends.
New code has been loaded which continues with ROC calcs for 10 minutes post fill. I will test this during tomorrow's fill.
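For reference, a minimal sketch (with hypothetical file, key, and function names, not the actual CP1 fill code) of how a trip threshold like this can be read from the yaml configuration and applied to the thermocouple rate-of-change values:

```python
# Hypothetical sketch only: the config file name, key name, and units handling
# are assumptions, not the real fill-monitoring code.
import yaml

with open("cp1_fill_config.yaml") as f:     # hypothetical config file
    config = yaml.safe_load(f)

# rate-of-change trip threshold, e.g. reduced from 60.0 to 50.0 DEGC for this test
roc_trip = float(config.get("roc_trip_degc", 60.0))

def roc_tripped(roc_value):
    """Return True if a thermocouple rate-of-change exceeds the configured trip value."""
    return abs(roc_value) >= roc_trip
```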
Lockloss @ 2025-02-27 16:36 UTC during commissioning after 22 minutes locked. People had just entered the LVEA.
Oli, Jonathan, Dave:
The DARM FOM in the control room (running on nuc30) stopped updating and could not be restarted. Jonathan tracked it down to an ongoing NDS2 issue which will hopefully be resolved soon. In the meantime I started the "local" version of this FOM, which only uses the local NDS and does not try to connect to NDS2.
I'm running the diaggui by hand from a terminal as controls on nuc30; it was started with:
cd /opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc30
diaggui --fom ./H1_DARM_FOM_cds.xml
TITLE: 02/27 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.72 μm/s
QUICK SUMMARY:
Currently relocking and at CARM_TO_TR
So last night Ryan adjusted the SRC2 P gain down from 60 to 40 during DARM_LOCKED_CHECK_ASC to try to stop the oscillations, and it worked (83074). That value of course was just a test and wasn't changed in the code, so I would have expected the oscillations to come back the next time we passed through those states, but they haven't. In the three times we've passed through DARM ASC-ville since then, with the SRC2 P gain at its original value of 60, that hasn't happened.
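For illustration only, the kind of change described above could look roughly like the following in guardian/ezca terms; the channel name is my assumption rather than something copied from the ISC_LOCK code, and Ryan's test was done by hand, not committed like this.

```python
# Rough sketch only -- the gain channel name is assumed, not verified, and
# inside a real guardian node the `ezca` object is injected into the module
# namespace rather than imported.
from guardian import GuardState

class DARM_LOCKED_CHECK_ASC(GuardState):
    def main(self):
        # temporarily drop the SRC2 pitch ASC gain from its nominal 60
        ezca['ASC-SRC2_P_GAIN'] = 40  # assumed channel name
        return True
```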
TITLE: 02/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY: 1 lockloss, and we're on our way back past the troublesome CHECK_ASC. I had to reduce the gain of SRC2_P from 60 to 40. We're about to start powering up now.
LOG: No log.
Jennie Siva Keita Mayank
Following our previous attempt here . We opened a new ISS PD array (S.N. 1202965).
This unit is in great condition. i.e.
1) No sign of contamination.
2) All the optics are intact (No chipping)
We tried interfacing the QPD-cable S1203257 with the QPD but it turned out that they are not compatible.
We will look for the updated version of the QPD cable.
More photos I took of the unboxed unit,
Keita holding part of QPD connector that connects to cable,
zoom-in of the prisms close to the PD array, showing they don't look damaged like the previous unit we unboxed,
DCC and serial number of the baseplate (this is a different part for each observatory due to differing beam heights).
Keita explaining the QPD cable clamp to Shiva (right) and Mayank (left).
View of optics with periscope upper mirror on the left.
View of part of prisms close to periscope.
View of back of array and strain relief.
plus a picture of a packaged optic that was sitting on top of this capsule while it was in the storage cupboard.
For future reference, all the ISS arrays and their serial numbers are listed in the DCC entry for the assembly drawing LIGO-D1101059-v5.
[Matthew Mayank Siva Keita]
On Friday (2025-02-28) we moved the optics onto taller posts so that we did not have to pitch the beam up too much (in hindsight, we probably would've been okay doing this) when we align the beam into the input port of the ISS array. We have not aligned the beam yet and most likely should re-profile it (though we may not need to) to ensure that the planned lens position is correct.
We also spent some time checking the electronics box for proper connections and polarity; then we tested the upper row of PDs (the top 4) by plugging each cathode/anode into the respective port. For the output DSUB we used a breakout board and put each channel onto an oscilloscope -- it seems that all four of the top-row PDs are functioning as anticipated.
Important Note:
Keita and I looked at the "blue glass" plates that serve as beam dumps, but just looking at the ISS array we do not know how to mount them properly. We think there may be some component missing that clamps them to the array. So we repackaged the blue-glass in its excessive lens paper.
02:44 UTC lockloss
I stopped at CHECK_ASC and reduced the gain of SRC2 P from 60 to 40, and we were able to stay locked as the oscillation came and went, so I ran the OLG measurement. After the measurement I tried to reduce the gain down to 30, but I fat-fingered it and made it bigger instead, and we lost lock.
Ibrahim, Oli
Attached are the most recent BBSS model comparisons with the most recent BBSS parameter settings (+/- from FDR):
d0=+3.0mm
l1 = -3.0mm
m2 = +0.3kg
FDR, d1 = +3.5mm (which is equivalent to BP = -4mm)
TITLE: 02/27 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
LOG:
15:30UTC Relocking
- ISC_LOCK in PREP_FOR_LOCKING; H1_MANAGER had gone into error less than a minute before (15:29 UTC)
- I ran an initial alignment
- We got those saturations again during OFFLOAD_DRMI_ASC (ndscope)
- Lockloss during LOWNOISE_ESD_ETMX due to an incoming earthquake
- Holding in DOWN until earth settles
- Started another initial alignment
- Another set of saturations during OFFLOAD_DRMI_ASC - this time we lost lock from them (ndscope2)
- Saturations again but got through them this time
18:15 NOMINAL_LOW_NOISE
18:20 Observing
21:16 Commissioning to adjust A2L and PRCL FF
21:51 Back to Observing
22:48 Superevent S250226dl
00:13 Superevent S250227e
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:13 | FAC | Nellie | OpticsLab | n | Tech clean | 16:28 |
16:19 | FAC | Tyler | MX, MY | n | 3IFO | 18:19 |
17:20 | FAC | Kim | MY, MX | n | Tech clean | 20:47 |
17:29 | VAC | Janos | EY | N | Parts Inventory | 17:44 |
20:40 | FIT | Matt | XARM | n | Running | 21:31 |
22:49 | OPT | Jennie, Keita, Siva, Mayank | OpticsLab | n | Unwrapping class A parts (Jennie out 00:10) | 00:36 |
TITLE: 02/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 6mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.43 μm/s
QUICK SUMMARY:
This weird noise was first noticed in 82997 during OFFLOAD_DRMI_ASC. The first instance of this happening while relocking was three days ago, 2025-02-23 03:59:19 UTC. It started showing up in LSC-PR_GAIN, LSC-POPAIR_B_RF90_I_NORM, and the lower stages of SR2 and SRM. Almost every relock since then has seen these glitches. They almost always start showing up in OFFLOAD_DRMI_ASC, but I've found a few instances (one, two, three) where the noise clearly starts during DRMI_LOCKED_CHECK_ASC, the state before OFFLOAD_DRMI_ASC. Next time we lose lock we will be sitting in DRMI_LOCKED_CHECK_ASC, so we will probably see this ringup then.
Based on Oli's range checks (alog 83063) we decided to run the a2l script and a PRCLFF measurement after getting run coordinator approval. Below are the results of the a2l script, with a few gains moving quite a bit. These have been added into lscparams.py and they are not in SDF (a purely hypothetical sketch of how such gains might be stored follows the results table). Jenne will post the PRCLFF results.
RESULTS
| Optic | DOF | Initial | Final | Diff |
|---|---|---|---|---|
| ETMX | P | 3.34 | 3.23 | -0.11 |
| ETMX | Y | 4.9 | 4.91 | 0.01 |
| ETMY | P | 5.56 | 5.49 | -0.07 |
| ETMY | Y | 1.28 | 1.35 | 0.07 |
| ITMX | P | -0.66 | -0.53 | 0.13 |
| ITMX | Y | 2.97 | 3.21 | 0.24 |
| ITMY | P | -0.06 | 0.06 | 0.12 |
| ITMY | Y | -2.51 | -2.74 | -0.23 |
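As flagged above, here is a purely hypothetical sketch of how the new gains might be stored; the actual structure of lscparams.py is not reproduced here.

```python
# Hypothetical layout only -- not the real lscparams.py structure.
a2l_gains = {
    'ETMX': {'P': 3.23, 'Y': 4.91},
    'ETMY': {'P': 5.49, 'Y': 1.35},
    'ITMX': {'P': -0.53, 'Y': 3.21},
    'ITMY': {'P': 0.06, 'Y': -2.74},
}
```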
We reran the PRCLFF injection template to try to improve the range in the 20-50 Hz band. The template is at /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/PRCL_excitation_gain_adjust.xml.
The right two panels are the magnitude and phase of the PRCL to DARM transfer function.
We tried gains of 0.9 (the starting value), 1.0, 1.3, and 1.5 (see the sketch at the end of this entry).
We found the TF was minimised in the 20-50Hz range using a gain of 1.3 in PRCLFF (pink trace).
The live trace was our last step (red) which was a gain of 1.5.
The coherences are shown in the bottom left corner (the labels are correct but the colours don't match the TF trace colours).
(NB: the Ref 1 DARM spectrum in the top left is an old reference - ignore it.)
TJ added the value to lscparams and I accepted it in the OBSERVE snap file.
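A rough sketch of the gain scan described in this entry (the channel name and the ezca setup are assumptions; in practice the steps were made by hand while the DTT template was running):

```python
# Illustration only: the feedforward gain channel name is assumed.
import time
from ezca import Ezca

ezca = Ezca(ifo='H1')
trial_gains = [0.9, 1.0, 1.3, 1.5]   # 0.9 was the starting value; 1.3 worked best

for gain in trial_gains:
    ezca['LSC-PRCLFF_GAIN'] = gain   # assumed channel name for the PRCL feedforward gain
    time.sleep(120)                  # let the PRCL-to-DARM TF average at this gain
```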
TITLE: 02/26 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.31 μm/s
QUICK SUMMARY:
Currently in PREP_FOR_LOCKING, looks like H1 MANAGER went into error at 15:29 UTC. I reloaded it and we've just started an initial alignment.
This is another instance of guardian issue #83. I think this is a race condition that we run into about twice a year; I need to find a way around it.
Sheila, Matt, Camilla
We aligned and took some SQZ HD data this morning but realized that the ZM optics were noisy with HAM7 tripped and the increased wind (60mph!!). After un-tripping HAM7, Matt and Sheila rechecked balancing and visibility; it started at 98.3% but was noisy, and improved to 99.0%. Since then HAM7 tripped and was un-tripped again, but we assume nothing has changed.
NLG Measurements:
| opo_grTrans_setpoint_uW | Amplified Max | Amplified Min | UnAmp | Dark | NLG (usual) |
|---|---|---|---|---|---|
| 120 | 0.108354 | 0.000573291 | 0.0018491 | 6.6e-5 | 60 |
| 110 | 0.0656167 | 0.000599855 | | | 36.8 |
| 100 | 0.0413918 | 0.00062871 | | | 23.2 |
| 80 | 0.0209032 | 0.000681607 | | | 11.7 |
| 60 | 0.0121804 | 0.000745147 | 0.0018428 | 6.2e-5 | 6.8 |
| 40 | 0.00710217 | 0.000852173 | | | 3.98 |
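For reference, these NLG values are consistent with the usual dark-noise-subtracted ratio of the amplified maximum to the unamplified level; a quick check (my assumption of the formula, which matches the table within rounding):

```python
# Quick check of the NLG definition assumed above (dark-subtracted ratio).
def nlg(amp_max, unamp, dark):
    return (amp_max - dark) / (unamp - dark)

# 120 uW row: (0.108354 - 6.6e-5) / (0.0018491 - 6.6e-5) ~ 60.7
print(nlg(0.108354, 0.0018491, 6.6e-5))
#  60 uW row: ~ 6.8, matching the table
print(nlg(0.0121804, 0.0018428, 6.2e-5))
```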
Data saved to camilla.compton/Documents/sqz/templates/dtt/20250225_GOOD_HD.xml
FC2 is misaligned during this dataset.
| Type | NLG | SQZ dB @ 1kHz | Angle | DTT Ref | Notes |
|---|---|---|---|---|---|
| Dark Noise | N/A | N/A | | ref 0 | This and shot noise was noisy <300Hz at times when the OPO was locking or scanning, unsure why. |
| Shot Noise | N/A | N/A | | ref 10 | Blocked SEED, LO only. |
| SQZ | 60 | -7.4 | 167 | ref1 | opo_grTrans_setpoint_uW = 120uW, OPO gain -14 |
| ASQZ | 60 | 22.6 | 225 | ref2 | |
| MSQZ | 60 | 19.6 | N/A | ref3 | |
| SQZ | 37 | -7.3 | 164 | ref4 | opo_grTrans_setpoint_uW = 110uW |
| ASQZ | 37 | 20.2 | 229 | ref5 | |
| MSQZ | 37 | 17.2 | N/A | ref6 | |
| SQZ | 23 | -7.4 | 161 | ref7 | opo_grTrans_setpoint_uW = 100uW |
| ASQZ | 23 | 18.0 | 233 | ref8 | |
| MSQZ | 23 | 15.0 | N/A | ref9 | |
| SQZ | 12 | -7.3 | 151 | ref11 | opo_grTrans_setpoint_uW = 80uW, OPO gain -8 |
| ASQZ | 12 | 14.5 | 246 | ref12 | |
| MSQZ | 12 | 11.7 | N/A | ref13 | |
| SQZ | 7 | -7.0 | 143 | ref14 | opo_grTrans_setpoint_uW = 60uW |
| ASQZ | 7 | 11.7 | 253 | ref15 | |
| MSQZ | 7 | 8.7 | N/A | ref16 | |
| SQZ | 4 | -5.9 | 132 | ref17 | opo_grTrans_setpoint_uW = 40uW |
| ASQZ | 4 | 8.9 | (-)85 | ref18 | |
| MSQZ | 4 | 6.0 | N/A | ref19 | |
I'm also attaching pictures of SR785 measurements of various loop gains.
Measurement | Time | Notes |
LO OLG | 11:28 | 2.5kHz UGF |
OPO OLG | 12:06 PST | -8 dB OPO loop gain |
OPO OLG | 12:09 PST | -14 dB OPO loop gain |
CLF OLG | 12:10 PST | 13 kHz UGF |
This data set is nicely fit by standard squeezing equations, and suggests we have 6% unexplained losses and very little phase noise.
In the attached PDF I took the median of the ASD from 500-700 Hz, subtracted dark noise in quadrature from each ASD, then calculated the dB relative to shot noise for squeezing, anti-squeezing, and mean squeezing. I used the squeezing, anti-squeezing, and OPO transmission numbers to fit for the OPO threshold (in units of transmitted power), the total efficiency of squeezing, and the phase noise. The attached plot shows the resulting model plotted against the data, including the mean squeezing and NLG measurements that were not used for the fit. The NLG plot does suggest that we are slightly underestimating our NLG with our measurements.
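As a minimal sketch of the "standard squeezing equations" referred to here (not the actual fitting script; the parameter values are simply the fitted numbers quoted in the next paragraph, and phase noise is set to zero for simplicity):

```python
import numpy as np

P_thr = 156.0   # uW transmitted, fitted OPO threshold (quoted below)
eta   = 0.83    # fitted total efficiency (quoted below)
theta = 0.0     # rms phase noise [rad], taken as negligible here

def detected_sqz_asqz_dB(P_uW):
    """Detected (anti-)squeezing in dB re. shot noise for a given OPO transmitted power."""
    x = np.sqrt(P_uW / P_thr)                 # normalized pump amplitude
    v_sqz  = 1.0 - 4.0 * x / (1.0 + x) ** 2   # generated squeezing
    v_asqz = 1.0 + 4.0 * x / (1.0 - x) ** 2   # generated anti-squeezing
    # phase noise mixes the quadratures; optical loss admixes unsqueezed vacuum
    s = eta * (v_sqz * np.cos(theta) ** 2 + v_asqz * np.sin(theta) ** 2) + (1 - eta)
    a = eta * (v_asqz * np.cos(theta) ** 2 + v_sqz * np.sin(theta) ** 2) + (1 - eta)
    return 10 * np.log10(s), 10 * np.log10(a)

# 120 uW -> roughly (-7.6, +22.9) dB, close to the measured -7.4 / +22.6 dB above
print(detected_sqz_asqz_dB(120.0))
```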
This suggests that the OPO threshold is at 156uW transmitted power, and that the total efficiency is 83%. This can be compared to 73365 and to the expected losses from the loss tracking sheet and SQZ wiki. Expected losses:
opo escape efficiency | 0.985 | |
3 SFI passes | (0.99)^3 = 0.97 | |
B:BS1 | 0.9897 | HAM7 total = 0.946 |
SQZT7 | 0.98 | |
visibility | 0.99^2 |
homodyne QE | 0.977 | |
total expected in homodyne | 0.887 |
With the measured efficiency of 0.833, this means we have 6% unexplained losses in HAM7 or SQZT7.
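For bookkeeping, the ~6% figure follows directly from the numbers above:

```python
# Expected homodyne efficiency from the table above vs the fitted value.
expected = 0.985 * 0.99**3 * 0.9897 * 0.98 * 0.99**2 * 0.977   # ~0.887
measured = 0.833                                                # from the fit
print(measured / expected)   # ~0.94, i.e. roughly 6% unexplained loss
```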
[Vlad, Louis, Jeff, Joe B]

While exercising the latest pydarm code with an eye towards correcting the issues noted in LHO alog 82804, we ran into a few issues which we are still trying to resolve.

First, Vlad was able to recover all but the last data point from the simulines run on Feb 15th, which lost lock at the very end of the sweeps; see his LHO alog 82904 on that process. I updated the pydarm_H1.ini file to account for the current drive align gains and to point at the current H1SUSETMX foton file (saved in the calibration SVN as /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1susetmx/H1SUSETMX_1421694933.txt). However, I also had to merge some changes for it submitted from offsite, specifically https://git.ligo.org/Calibration/ifo/H1/-/commit/05c153b1f8dc7234ec4d710bd23eed425cfe4d95, which is associated with MR 10 and was intended to add some improvements to the FIR filter generation.

Next, Louis updated the pydarm install at LHO from tag 20240821.0 to tag 20250222.0. We then generated the report and associated GDS FIR filters; this is /ligo/groups/cal/H1/reports/20250215T193653Z. The report and fits to the sweeps looked reasonable; however, the FIR generation did not look good. The combination of the newly updated pydarm and the ini changes was producing some nonsensical filter fits (first attachment). We reverted the .ini file changes, and this helped recover more expected GDS filters. However, there is still a small ~1% change visible around 10 Hz (instead of a flat 1.0 ratio) in the filter response using the new pydarm tag versus the old pydarm tag, which we don't understand quite yet and would like to before updating the calibration. I'm hoping we can do this in the next day or two after going through the code changes between the versions.

Given the measured ~6 Hz SRC detuning spring frequency (as seen in the reports), we will need to include that effect in the CALCS front end to eliminate a non-trivial error when we do get around to updating the calibration. I created a quick plot based on the 20250215T193653Z measured parameters, comparing the full model against a model without the SRC detuning included; this is the attached New_over_nosrc.png image.
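For context (my notation, not taken from the report), the SRC detuning enters the sensing function as an optical-spring term of roughly the form used in the O3-era calibration papers:

$$ C(f) \propto \frac{H_C}{1 + i f / f_{cc}} \cdot \frac{f^2}{f^2 + f_s^2 - i f f_s / Q_s} \cdot e^{-2\pi i f \tau_C} $$

where $f_{cc}$ is the coupled-cavity pole, $f_s \approx 6$ Hz is the detuned SRC spring frequency, $Q_s$ its quality factor, and $\tau_C$ the sensing delay; setting $f_s = 0$ recovers the no-detuning model that New_over_nosrc.png is ratioed against.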
The slight difference we were seeing between the old-report GDS filters and the new-report GDS filters was actually due to an MCMC fitting change. We had changed pydarm_cmd_H1.yaml to fit down to 10 or 15 Hz instead of 40 Hz, which means it is in fact properly fitting the SRC detuning, which in turn means the model the FIR filter generation is correcting has changed significantly at low frequencies.

We have decided to use the FIR filter fitting configuration settings we've been using for the entire run for the planned export today. Louis has pushed to LHO a pydarm version which we expect will properly install the SRC detuning into the h1calcs model. I attach a text file with the diff of the FIR filter fitting configuration settings in the pydarm_H1.ini file between Aaron's new proposal (which seems to work better for DCS offline filters, based on looking at only ~6 reports) and the ones we've been using this run so far to fit the GDS online filters.

The report we are proposing to push today is: 20250222T193656Z