We were in NLN but couldn't go into Observing due to some SDF diffs that I had been warned about. I accepted the diffs and we went right into Observing!
TITLE: 02/21 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 22:01 UTC (12 hr lock!)
Extremely calm shift with no locklosses. Of note:
LOG:
None
TJ, Oli
Starting around 02/20 at 11:30 UTC, the range had a step down, where it stayed for the rest of the lock. After this step down, the range was also much noisier than it had been before the step (ndscope1). Jane Glanzer ran Lasso for us during this lock stretch (lasso), and the channel that came back with the highest correlation to this range drop was H1:SUS-PI_PROC_COMPUTE_MODE8_NORMLOG10RMSMON, with the other channels all having much lower correlation coefficients. This was weird to us because although we bandpass and downconvert mode 8, we don't actively monitor or damp it, and we don't even turn on its PLL. When you plot the correlated channel along with related PLL channels (SUS-PI_PROC_COMPUTE_MODE8_PLL_AMP_FILT_OUTPUT, SUS-PI_PROC_COMPUTE_MODE8_PLL_LOCK_ST, and SUS-PI_PROC_COMPUTE_MODE8_PLL_ERROR_SIG), you can see there was some weird noise in several of these channels that started when the range dropped (ndscope2).
I tried plotting a series of PI_PROC_COMPUTE_MODE channels for every mode (there are 32 in total), and out of all of them, only mode 8 and mode 24 showed any change in any of their respective channels around the time of the range drop (ndscope3). That it is only these two modes is very interesting. PI mode 8 comes from the TRX QPD and has a bandpass between 10 and 10.8 kHz. Like I mentioned earlier, we do not actively do anything with this channel. Mode 24, on the other hand, we definitely monitor, and we are damping it a lot of the time. Mode 24 is read in from the DCPDs and has a bandpass around its PI, which is centered at 10.431 kHz. It is damped using the ESD on ETMY. Mode 24 actually has more channels that correlate better and have larger amplitudes than mode 8, but the Mode 8 NORMLOG10RMSMON correlated better with Lasso over the entire lock stretch.
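(As an aside, here is a minimal sketch of how one could repeat this 32-mode survey with gwpy. The MODE{n} naming pattern is assumed to generalize from mode 8, which is the only name confirmed above, and the before/after windows are arbitrary.)

    # Sketch: compare each PI NORMLOG10RMSMON channel before vs after the range step.
    # Assumes the MODE{n} naming pattern generalizes from mode 8.
    import numpy as np
    from gwpy.time import to_gps
    from gwpy.timeseries import TimeSeriesDict

    t_drop = to_gps('2025-02-20 11:30')   # approximate time of the range step
    channels = [f"H1:SUS-PI_PROC_COMPUTE_MODE{n}_NORMLOG10RMSMON" for n in range(1, 33)]

    pre = TimeSeriesDict.get(channels, t_drop - 3600, t_drop)      # hour before the step
    post = TimeSeriesDict.get(channels, t_drop, t_drop + 3600)     # hour after the step

    for chan in channels:
        shift = np.median(post[chan].value) - np.median(pre[chan].value)
        print(f"{chan}: median change = {shift:+.3f}")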
Zooming into when the range drop started yesterday, we actually see that the large drop in range happened about 12 minutes before we see the huge error signals in the mode 24 PLL, and it is at the very beginning of the rise in mode 8 and 24 NORMLOG10RMSMON (ndscope4).
Zooming out, this excess noise in those PI channels seems to have started early on February 5th, aka after relocking following the February 4th maintenance day (ndscope5). On other days, though, the range doesn't seem to have been affected by this noise, at least nowhere near the amount that it was affected yesterday. Sometimes during these periods of noise the range won't be very good (below 155), but other times we'll see this noise in modes 8 and 24 and still be right around or above 160.
Since it looks like the range drop yesterday started before the PI channels really started changing, we're pretty sure the issue is somewhere else and is just bleeding into these two downconverted channels. Because these two PI channels go through bandpasses in the 10 kHz regime, there might be something in that frequency range. It is interesting, though, that another PI channel we actively monitor, Mode 31, which is also in the same frequency area (centered at 10.428 kHz), is read from the OMC DCPDs, and is damped using ETMY, all just like PI24, doesn't seem to show any coupling into its channels.
Either way, some pretty good leads were made this morning towards finding the actual cause of this range drop. TJ and Camilla looked into the many glitches at 60 and 48 Hz that appeared during that time, and noted that the line at 46.1 Hz had grown, which is a known PR3 roll mode (82924).
The glitching related to this range drop appears to have subsided in the most recent lock. Comparison of glitchgrams from
When these glitches were occurring, they appeared on roughly a 6-minute cadence.
TITLE: 02/21 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.31 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 18:02 UTC (6hr 20 min lock)
EY 5_6 Violins are still elevated. Rahul is trying out new settings.
TITLE: 02/21 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Lock loss at the start of the shift again, this one caused by , followed by a commissioning period and a calibration measurement. ITMY modes 5&6 rang up slowly overnight and continue to be an issue to damp (82940). As of right now, I'm not sure we've found a setting that is actually damping these modes. We've been locked for 6.5 hours, observing for 2.5 of those.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
19:34 | SAF | Laser Haz | LVEA | YES | LVEA is laser HAZARD!!! | 06:13 |
17:55 | FAC | Tyler | MY | n | Fire panel check | 18:55 |
17:56 | PEM | Robert | LVEA | yes | Turn on amps | 19:22 |
18:15 | ISC | Sheila, Mayank, Matt | LVEA | yes | ISCT table work with POP | 18:43 |
18:28 | FAC | Kim | MY, MX | n | Tech clean | 19:31 |
FAMIS27809
I didn't add any water to either chiller as they both looked near the max. Both filters looked good and the leak detection Dixie cup was dry. The T2200289 was updated.
Matthew Sheila
I ran a2l measurements using TJ and Sheila's a2l_min code, and I've attached the new settings below.
I've also attached plots of the coherence checks with lsc and asc channels and darm.
Results of the A2L measurement
Optic/DOF | Initial | Final | Diff
---|---|---|---
ETMX P | 3.6 | 3.34 | -0.26
ETMX Y | 4.92 | 4.9 | -0.02
ETMY P | 5.64 | 5.56 | -0.08
ETMY Y | 1.35 | 1.28 | -0.07
ITMX P | -0.64 | -0.66 | -0.02
ITMX Y | 3.0 | 2.97 | -0.03
ITMY P | -0.03 | -0.06 | -0.03
ITMY Y | -2.53 | -2.51 | 0.02
Matthew Mayank Sheila
We went out to ISCT1 to measure the power coming to the POP AIR port; components are labeled in Sheila's alog. It seems the 50/50 beamsplitter is acting appropriately.
2025-02-20 10:28:11.348428 PST:
H1:ASC-POP_X_DC_NSUM_OUT16 = 1.1193946024467205
H1:LSC-POP_A_LP_OUT16 = 39.499373468859446
BS9010 power: 3.6 mW
POPX power: 1.8 mW
2025-02-20 10:35:24.229534 PST
H1:ASC-POP_X_DC_NSUM_OUT16 = 1.1307608842849726
H1:LSC-POP_A_LP_OUT16 = 39.88992869059245
These average power measurements were taken using
The large beam size coming off the periscope meant it was difficult to use the power meter until downstream of the lens. That's why the first measurement was directly after the 90/10 BS.
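As a quick cross-check of the 50/50 claim: 1.8 mW / 3.6 mW = 0.50, i.e. half the power in each arm to within the power meter's accuracy. (If the 1.8 mW branch is the one feeding the POP_X QPD, that would also imply roughly 1.8 mW / 1.13 ≈ 1.6 mW per count on H1:ASC-POP_X_DC_NSUM_OUT16, though that mapping is my assumption rather than something measured here.)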
TJ, Rahul
ITMY mode 05 and mode 06 have been rung up since this morning, and TJ and I have tried several settings to bring them down. The settings which are currently working are listed below in bold font.
New settings for ITMY 05/06 (not yet committed to the lscparams file):
FM5 + FM6 + FM10 Gain 0.01 (phase -60 degrees)
The other settings which I tried unsuccessfully are as follows:
-60deg phase -0.01 gain - increase
zero phase -0.01 gain - increase
+30degree phase -0.01 gain - increase
Similarly, TJ's efforts (from morning to noon) are also given below:
Settings tried for this lock below. There was commissioning going on during this time, so I worry that a few of the ones I tried were made worse by measurements.
Phase, gain - result
-30deg , -0.01 - Slow increase
-60deg. -0.01 - Slow increase
-90deg, +0.01 - very slow increase
+30deg, +0.01 - no increase, no decrease
+60deg, +0.01 - maybe no increase or decrease?
0deg, +0.01 - increased (a commissioning measurement might have confused this one)
Looks like IY05 is now increasing (as per narrow filter, broad filter shows decreasing) and mode 06 is decreasing. Since mode 06 is higher than mode 05, I will let the current settings damp it for now.
Current damping using FM6 + FM7 + FM10 Gain 0.01 (-30degree phase) - not sure if this is working yet. Will keep an eye on it tonight.
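For the record, a minimal sketch of how a trial setting like this can be applied with ezca; the H1:SUS-ITMY_L2_DAMP_MODE5/6 bank names are my guess at where the violin damping for these modes lives (not copied from lscparams), and the phase choice is assumed to sit in whichever FMs hold the phase filters:

    # Sketch only: engage a trial violin damping setting for ITMY modes 5/6.
    # Filter-bank names are assumed, not verified against lscparams.
    from ezca import Ezca

    ezca = Ezca()                                      # IFO prefix taken from the environment
    for mode in (5, 6):
        bank = f'SUS-ITMY_L2_DAMP_MODE{mode}'
        ezca[bank + '_GAIN'] = 0                       # zero the gain before touching filters
        ezca.switch(bank, 'FM6', 'FM7', 'FM10', 'ON')  # engage the chosen filter modules
        ezca[bank + '_GAIN'] = 0.01                    # trial gain; the bank's ramping applies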
Ran a calibration measurement using an older version of simulines that the CAL group updated in the wiki. We were locked for about 3.5 hours when I started the broad band.
Simulines start:
PST: 2025-02-20 13:36:12.224035 PST
UTC: 2025-02-20 21:36:12.224035 UTC
GPS: 1424122590.224035
Simulines end:
PST: 2025-02-20 14:00:14.443589 PST
UTC: 2025-02-20 22:00:14.443589 UTC
GPS: 1424124032.443589
Files:
2025-02-20 22:00:14,365 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250220T213613Z.hdf5
2025-02-20 22:00:14,373 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250220T213613Z.hdf5
2025-02-20 22:00:14,378 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250220T213613Z.hdf5
2025-02-20 22:00:14,383 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250220T213613Z.hdf5
2025-02-20 22:00:14,388 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250220T213613Z.hdf5
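(As a sanity check, not part of the measurement, the UTC/GPS stamps above can be cross-checked with gwpy:)

    # Cross-check of the simulines start/end stamps listed above.
    from gwpy.time import to_gps, from_gps

    print(to_gps('2025-02-20 21:36:12.224035'))   # expect ~1424122590.224035 (start)
    print(from_gps(1424124032.443589))            # expect 2025-02-20 22:00:14 UTC (end)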
Jennie W, Sheila
Today I got a chance to redo some of my output chain measurements (alog #82555) that gave us a confusing result when we were heating up and cooling down the OM2 heater. The confusion was that our optical gain got better for cold OM2 compared to hot, but the loss through HAM6 estimated by stepping the DARM offset and comparing the DCPD power to the power at the antisymmetric port suggested that the loss was worse with cold OM2, and gave us an unreasonably low (~65%) throughput estimate for the fundamental TM00 mode through HAM6. We realised that the OM3 and OMC alignment are being changed by the ASC during this time, so that could be affecting the comparison.
Steps:
Turn off OMC ASC at 18:43:00 UTC.
Run auto_darm_offset_step.py from /ligo/gitcommon/labutils/darm_offset_step at 18:45:01 UTC.
Results: P_AS = 62.996 mW + 1.162 * P_DCPD
This makes the throughput estimate for the TM00 mode through HAM6 to the DCPDs 86.0% of the power at the AS port, which seems more reasonable than the 65% we got last time. We need to do this measurement again once we have improved kappa C somehow, to check that it gives us the same loss estimate between the two times.
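In other words, reading the fit slope as an inverse throughput: each extra mW on the DCPDs costs 1.162 mW at the AS port, so the TM00 throughput estimate is 1/1.162 ≈ 0.861, i.e. the 86.0% quoted above.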
We were then going to purposely change kappa C as a comparison to the estimate the DARM offset step gives us, by changing the QPD offsets based on this measurement 82383 (I got the sign wrong last time I did this).
I turned the OMC ASC back on before doing this; however, when I changed H1:ASC-OMC_A_PIT_OFFSET to 0.45 it saturated the OM3 suspension, so it was set back to nominal.
I think I got the signs wrong yet again, so we will try what we now think are the correct ones during another commissioning period, but maybe do it slowly so as not to cause saturations.
Jennie, Sheila
Sheila wanted me to check whether we can use the OMC REFL power to calculate how much of the TM00 light does not make it through the OMC. If the mode-matching of the light at the AS port to the OMC is good, we would expect not to see much variation at the OMC REFL port as we change the DARM offset, and indeed we don't. See this image, with REFL power at the top.
[Vlad, Louis, Jeff, Joe B]

While exercising the latest pydarm code with an eye towards correcting the issues noted in LHO alog 82804, we ran into a few issues which we are still trying to resolve.

First, Vlad was able to recover all but the last data point from the simulines run on Feb 15th, which lost lock at the very end of the sweeps; see his LHO alog 82904 on that process. I updated the pydarm_H1.ini file to account for the current drive align gains and to point at the current H1SUSETMX foton file (saved in the calibration svn as /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1susetmx/H1SUSETMX_1421694933.txt). However, I also had to merge some changes for it submitted from offsite, specifically https://git.ligo.org/Calibration/ifo/H1/-/commit/05c153b1f8dc7234ec4d710bd23eed425cfe4d95, which is associated with MR 10 and was intended to add some improvements to the FIR filter generation.

Next, Louis updated the pydarm install at LHO from tag 20240821.0 to tag 202502220.0. We then generated the report and associated GDS FIR filters: /ligo/groups/cal/H1/reports/20250215T193653Z. The report and fits to the sweeps looked reasonable; however, the FIR generation did not look good. The combination of the newly updated pydarm and the ini changes was producing some nonsensical filter fits (first attachment). We reverted the .ini file changes, and this helped recover more expected GDS filters; however, there is still a small ~1% change visible around 10 Hz (instead of a flat 1.0 ratio) in the filter response using the new pydarm tag versus the old pydarm tag, which we don't understand quite yet and would like to before updating the calibration. I'm hoping we can do this in the next day or two after going through the code changes between the versions.

Given the measured ~6 Hz SRC detuning spring frequency (as seen in the reports), we will need to include that effect in the CALCS front end to eliminate a non-trivial error when we do get around to updating the calibration. I created a quick plot based off the 20250215T193653Z measured parameters, comparing the full model over a model without the SRC detuning included. This is the attached New_over_nosrc.png image.
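To illustrate roughly how large the SRC detuning effect is at low frequency, here is a sketch using the detuned-SRC optical-spring factor commonly written as f^2 / (f^2 + f_s^2 - i f f_s / Q); the ~6 Hz spring frequency is from the report, but the Q value and spring sign below are placeholders rather than the fitted parameters:

    # Sketch: magnitude ratio of a sensing model with a ~6 Hz detuned-SRC spring
    # to the same model without it (the no-spring factor is 1).
    # Q is a placeholder; sign conventions differ for spring vs anti-spring.
    import numpy as np

    f = np.logspace(0, 3, 500)            # 1 Hz - 1 kHz
    f_s, Q = 6.0, 10.0                    # quoted spring frequency, assumed Q

    spring = f**2 / (f**2 + f_s**2 - 1j * f * f_s / Q)
    for freq in (5, 10, 20, 50):
        i = np.argmin(np.abs(f - freq))
        print(f"{f[i]:6.1f} Hz: |ratio| = {abs(spring[i]):.3f}")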
The slight difference we were seeing in the old report GDS filters vs the new report GDS filters was actually due to an MCMC fitting change. We had changed the pydarm_cmd_H1.yaml to fit down to 10 or 15 Hz instead of 40 Hz, which means it is in fact properly fitting the SRC detuning, which in turn means the model the FIR filter generation is correcting has changed significantly at low frequencies. We have decided to use the FIR filter fitting configuration settings we've been using for the entire run for the planned export today. Louis has pushed to LHO a pydarm code which we expect will properly install the SRC detuning into the h1calcs model. I attach a text file of the diff of the FIR filter fitting configuration settings for the pydarm_H1.ini file between Aaron's new proposal (which seems to work better for DCS offline filters, based on looking at only ~6 reports) and the ones we've been using this run so far to fit the GDS online filters. The report we are proposing to push today is: 20250222T193656Z
Attached is the DARM BLRMS before and after some filters were edited and added to remove some lines; all changes are attached apart from another adjustment to the RBP_2 notch at 33.375 Hz.
Back to observing at 2004 after a lock loss and commissioning period. We are not thermalized yet, so we will have to run a calibration measurement later.
Matt Jennie Mayank TJ Sheila
We wanted to estimate whether we were clipping on the scraper baffle of PR2, so during single bounce we moved the PR3 yaw sliders quite a bit to get an idea of when we were clipping, using AS_C_NSUM to gauge whether we were starting to clip.
Most of the steps are the same as in Sheila's previous alog, except that we are in single bounce and are therefore not looking for the ALS beatnotes; instead we are looking at the AS_C_NSUM value. You want to take ISC_LOCK to PR2_SPOT_MOVE so that the guardian will adjust the IM4 and PR2 yaw sliders while you adjust the PR3 sliders, so that most everything stays aligned. You will have to adjust the pitch every once in a while due to the cross couplings in IM4 and PR2.
Steps taken in this measurement:
We think the yaw offset value that puts the spot on the center of PR2 is around -230 urad (we started here around 09:00 PST)
3. By going in steps to -900 in yaw offset, while pitching PR3 every so often to keep everything centered, we were able to move across PR2 to the left edge of the baffle, clipping about 10% of the power at the ASC-AS_C QPD (finished 09:45 PST).
4. Then we reset the sliders to their starting values and followed the above steps going to the right (we stopped at +440 yaw offset). Here we recorded around a 10% loss on the right edge of the baffle (finished 10:44 PST).
Mayank has some plots analyzing the results of this, which he will add in a comment.
Useful ndscopes
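A minimal sketch of the stepping/monitoring loop this procedure amounts to, assuming the usual top-mass slider channel H1:SUS-PR3_M1_OPTICALIGN_Y_OFFSET and the H1:ASC-AS_C_NSUM_OUT16 readback (both names are my assumption; in practice we stepped the sliders by hand with ISC_LOCK in PR2_SPOT_MOVE):

    # Sketch only: step PR3 yaw and watch AS_C NSUM for the onset of clipping.
    # Channel names are assumed; ISC_LOCK should be in PR2_SPOT_MOVE so IM4/PR2 follow.
    import time
    from ezca import Ezca

    ezca = Ezca()                               # IFO prefix from the environment
    slider = 'SUS-PR3_M1_OPTICALIGN_Y_OFFSET'
    readback = 'ASC-AS_C_NSUM_OUT16'

    reference = ezca[readback]
    for _ in range(30):
        ezca[slider] = ezca[slider] - 20        # arbitrary 20 urad steps toward the edge
        time.sleep(30)                          # let the guardian and pitch touch-ups settle
        loss = 1 - ezca[readback] / reference
        print(f"yaw offset {ezca[slider]:+.0f} urad, power loss {loss:.1%}")
        if loss > 0.10:                         # stop around the ~10% clip used above
            break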
We modeled the PR2Baffle beam clipping
Matt Jennie Mayank TJ Sheila Keita
We extracted the beam path and geometry for the PRM-PR2Baffle-PR2-PR2Baffle-PR3 leg of the input light from the following documents, D1200573, D1102451 (cookie cutter), D0901098, D020023 etc.
The angle of the PRM-PR2Baffle beam with respect to the X axis was changed from about -0.1 radian to +0.1 radian (over and above the existing value of 0.339 degrees) such that the PR2 beam spot moves from approximately -25 mm to +25 mm. The PR2 yaw was adjusted such that the PR2-PR3 beam always hits the same spot on PR3. This ensures that the beam spots on PRM and PR3 remain unchanged while the beam spot on PR2 changes.
Attachment 1: Shows the overall modelled geometry
Attachment 2: Zoom view around PR2Baffle along with the beam paths.
Attachment 3: Python script.
Attachment 4: Experiment_Data_ 19Feb2025.
Attachment 5: Experiment_Data_ 5July2025.
Plot 1 shows the PR2 beam spot motion with respect to the angle.
Plot 2 shows the following distances vs. the PR2 beam spot location:
a) the distance between the PRM-PR2 beam and the lower edge of the PR2Baffle (D1)
b) the distance between the PR2-PR3 beam and the upper edge of the PR2Baffle (D2)
Plot 3 shows the transmission of the beam with respect to the PR2 beam spot (as the beam comes closer to the baffle edge, some of the beam power is blocked by the baffle edge and hence the transmission decreases).
Plot 4 shows the net transmission for the beam (the multiplication of the upper-edge and lower-edge transmissions).
It also shows the experimentally measured data from above.
The two curves do not match exactly; they differ in position by around 2 mm. It seems the PR2_Baffle is approximately in the right place, and it may not be necessary to move it.
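For context, the per-edge transmission in a model like this is just the Gaussian knife-edge integral; a sketch below, where the beam radius w is a placeholder rather than the value used in the attached script:

    # Sketch: fraction of a Gaussian beam (1/e^2 radius w) transmitted past a baffle
    # edge sitting a distance d from the beam centre; the net transmission is the
    # product of the two edge terms, as in Plot 4.
    import numpy as np
    from scipy.special import erf

    def knife_edge_transmission(d, w):
        return 0.5 * (1.0 + erf(np.sqrt(2.0) * d / w))

    w = 2.0e-3                                  # placeholder beam radius on the baffle [m]
    for d_mm in (4, 2, 1, 0):
        T = knife_edge_transmission(d_mm * 1e-3, w)
        print(f"edge at {d_mm} mm: T = {T:.3f}")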
Due to so much SQZ strangeness over the weekend, Sheila set the sqzparams.py use_sqz_ang_servo to False and I changed the SQZ_ANG_ADJUST nominal state to DOWN and reloaded SQZ_MANAGER and SQZ_ANG_ADJUST.
We set the H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG to a normal good value of 190. If the operator thinks the SQZ is bad and wants to change this to maximize the range AFTER we've been locked 2+ hours, they can. Tagging OpsInfo.
Daniel, Sheila, Camilla
This morning we set the SQZ angle to 90deg and scanned to 290deg using 'ezcastep H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG -s '3' '+2,100''. Plot attached.
You can see that the place with the best SQZ isn't in a good linear range of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG, which is why the SQZ angle servo has been going unstable. We are leaving the SQZ angle servo off.
Daniel noted that we expect the ADF I and Q channels to rotate around zero, which they aren't doing, so we should check that the math calculating these is what we expect. We struggled to find the SQZ-ADF_VCXO model block; it's in the h1oaf model (so that the model runs faster).
Today Mayank and I scanned the ADF phase via 'ezcastep H1:SQZ-ADF_VCXO_PLL_PHASE -s '2' '+2,180''. You can see in the attached plot that the I and Q phases show sine/cosine functions as expected. We think we may be able to adjust this phase to improve the linearity of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG around the best SQZ, so that we can again use the SQZ ANG servo. We started testing this (plot), but found that the SQZ was very frequency dependent and needed the alignment changed (83009), so we ran out of time.
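For reference, my reading of that ezcastep call (180 steps of +2 deg with a 2 s pause, i.e. a full 360 deg scan) written out as a plain loop:

    # Sketch: equivalent of `ezcastep H1:SQZ-ADF_VCXO_PLL_PHASE -s '2' '+2,180'`
    # as I read the arguments: 180 steps of +2 deg, sleeping 2 s between steps.
    import time
    from ezca import Ezca

    ezca = Ezca()                            # IFO prefix from the environment
    chan = 'SQZ-ADF_VCXO_PLL_PHASE'

    for _ in range(180):
        ezca[chan] = ezca[chan] + 2          # +2 deg per step
        time.sleep(2)                        # dwell, matching the -s argument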
I reviewed the weekend lockloss where lock was lost during the calibration sweep on Saturday.
I've compared the calibration injections and what DARM_IN1 is seeing [ndscopes], relative to the last successful injection [ndscopes].
Looks pretty much the same but DARM_IN1 is even a bit lower because I've excluded the last frequency point in the DARM injection which sees the least loop suppression.
It looks like this time the lockloss was a coincidence. BUT. We desperately need to get a successful sweep to update the calibration.
I'll be reverting the cal sweep INI file, in the wiki, to what was used for the last successful injection (even though it includes that last point which I suspected caused the last 2 locklosses), out of an abundance of caution and hoping the cause of the locklosses is something more subtle that I'm not yet catching.
Despite the lockloss, I was able to utilise the log file saved in /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/ (the log file is used as input into simulines.py) to regenerate the measurement files.
As you can imagine, the points where the data is incomplete are missing, but 95% of the sweep is present and the fitting all looks great.
So it is in some ways reassuring that if we lose lock during a measurement, the data can be salvaged and processed just fine.
Report attached.
How to salvage data from any failed simulines injection attempt:
Logs are saved in /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/{IFO}/ for IFO=L1,H1
Run './simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/{time-name}.log'
for time-name resembling 20250215T193653Z. 'simuLines.py' is the simulines executable and can have some full path like the calibration wiki does: '/ligo/groups/cal/src/simulines/simulines/simuLines.py'.