Wed Jun 25 10:12:58 2025 INFO: Fill completed in 12min 55secs
MEDM
Confusingly we have been running with two separate copies of the vacuum overview MEDM.
CDS_OVERVIEW has been using /ligo/lho/h0/ve/medm/beckhoff/H0_VAC_SITE_OVERVIEW_CUSTOM.adl (not under any version control)
SITEMAP has been using /opt/rtcds/userapps/release/vacuum/h0/medm/Target/H0_VAC_SITE_OVERVIEW_CUSTOM.adl (under git version control)
The second one, which is under git version control, is the one that should be used. I took the opportunity while fixing the MEDMs to remove use of the /ligo files and move CDS_OVERVIEW over to using the Target file.
Changes made to the site overview and committed to git (see attachment) are:
RED - HAM1 now only shows the new PT100_MOD2 gauge
YELLOW - IP24,IP25 have new names
PURPLE - text to show the local git working directory and git repo information
VACSTAT
I added the new HAM1 PT100_MOD2 gauge to VACSTAT last night. Initially I also removed the temporary H1 PT100B gauge, but then remembered this has channels in H1EPICS_VACSTAT.ini which caused EDC disconnects.
So for now VACSTAT has the new H0 PT100_MOD2 (not in DAQ) and the old H1 PT100B (in DAQ, but needs to be removed)
Trip levels were returned to pre-vent values.
I just took some No SQZing time (starting at 2025-06-25 15:01 UTC (1434898893)) and compared it to the pre-vent No SQZ time of 2025-03-26 16:01 UTC (1427040078). I used the command python3 range_compare.py 1427040078 1434898893 --span 600. I originally wanted to use a time span of half an hour instead of 10 minutes, but the March time had some sort of glitch that messed up its range calculation, and the range calculation for today's time was the same either way, so I went with 10 minutes. Here's the range comparison.
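For context, the kind of band-limited range comparison a script like range_compare.py performs can be sketched as follows. This is a hypothetical re-implementation, not the actual script; `band_range_ratio` and its figure of merit are my own, assuming an inspiral-range-like integral sqrt(∫ f^(-7/3)/S(f) df) computed from two strain PSDs:

```python
import numpy as np

def _trapz(y, x):
    # Version-agnostic trapezoidal integration (np.trapz was removed in NumPy 2.0)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def band_range_ratio(freqs, psd1, psd2, fmin=10.0, fmax=1000.0):
    """Ratio (time2 / time1) of inspiral-range-like figures of merit.

    Range scales as sqrt(integral of f**(-7/3) / S(f) df), so a ratio
    below 1 means the second time has lost range in this band.
    """
    band = (freqs >= fmin) & (freqs <= fmax)
    f = freqs[band]
    i1 = f ** (-7.0 / 3.0) / psd1[band]
    i2 = f ** (-7.0 / 3.0) / psd2[band]
    return np.sqrt(_trapz(i2, f) / _trapz(i1, f))
```

For example, a PSD uniformly a factor of 2 higher at the second time gives a ratio of 1/sqrt(2) ≈ 0.71, i.e. a ~29% range loss in that band.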
TITLE: 06/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Observing and have been Locked for almost 2.5 hours. Range is okay at 145Mpc, though it looks like it's been drifting down a bit since the lock started.
Looking at the lockloss from last night (2025-06-25 11:02 UTC - note that the 'refined lockloss time' is one whole second earlier than the actual lockloss time), it's immediately clear that we lost lock due to a quick ringup, but it is unclear where that ringup came from or what frequency it was actually at. We can see an 8Hz oscillation in DARM one second before the lockloss, but looking at the LSC channels, SRCL sees a 1.5 Hz oscillation right before the lockloss, and PRCL has a 4.5 Hz oscillation (the PRCL oscillation could be unrelated, although it does look like it grows a bit larger). In the ASC channels, MICH P OUT has a ringup at ~3.8 Hz in that last second, and some of the other ASC channels look like they may also have some sort of excursion right before the lockloss.
Signal railed at about 5:18 PM local time. I checked trend data for PT120 and PT180, and no pressure rise was noted inside the main volume. Attached is a 3-day trend of the pump behavior, which has been very glitchy for a long while already.
System will be evaluated as soon as possible. The AIP was last replaced in 2015, see aLOG 18261.
Well, it appears the pump still has some life; just a few minutes ago it started to pump the annulus system again, for now.
(Jordan V., Gerardo M.)
Late entry, activity took place last Tuesday 07/22/2025.
The annulus ion pump signal railed again, so this time we decided to replace the controller. It does not seem like the new controller improved the ion pump behavior, since the current signal is swinging more than before; see attached plot. We are keeping an eye on this system.
Lockloss From Nominal_Low_Noise @ 03:01:08 UTC.
I'm not exactly sure what caused this LL.
With the HAM1 ion pump performing nobly, the HAM1 turbo pump was spooled down and the associated SS-500 pump cart was disconnected from the HAM1 volume. All mechanical sources of vacuum equipment noise should be in nominal observing mode.
The HAM1 pressure gauge, PT-100, was connected via a standard gauge pigtail cable, directly from the gauge body sub-D to the supply CPC. All of the temporary cabling was removed, including the breakout board and analog voltage readout pigtail, so PT-100 is solely on EtherCAT communication.
Famis 28411 Weekly In-Lock SUS Charge Measurement.
This command is very useful to determine if the SUS charge measurements ran in the last week:
ls /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO -lt | head -n 6
Coherence for bias_drive_bias_off is 0.06426009151210332, which is below the threshold of 0.1. Skipping this measurement
Cannot calculate beta/beta2 because some measurements failed or have insufficient coherence!
Cannot calculate alpha/gamma because some measurements failed or have insufficient coherence!
Something went wrong with analysis, skipping ITMX_13_Hz_1434811847
TITLE: 06/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Currently Observing at 150Mpc and have been Locked for just over 2 hours. Relocking went pretty smoothly, with the only issue being that DRMI/PRMI still took very long to catch. During maintenance today, the auxiliary pumps around HAM1 were turned off, and the jitter noise that we had been seeing around 550-600Hz is almost fully gone.
LOG:
14:30 Observing at 145Mpc and have been Locked for almost 4 hours
14:45 Out of Observing to run SUS Charge measurements
15:00 Lockloss during SWAP_BACK_ETMX
- It doesn't look like it was due to anything that was happening with the swap
19:31 Started relocking
- Initial alignment
- Lockloss from MOVE_SPOTS
22:33 NOMINAL_LOW_NOISE
22:34 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:03 | FAC | Chris, Eric | XARM | n | Bee-am sealing | 17:32 |
15:09 | FAC | Randy | LVEA | n | Drilling holes in a pipe | 16:17 |
15:10 | FAC | Nellie, Kim | LVEA | n | Tech clean | 16:27 |
15:21 | VAC | Gerardo, Jordan | LVEA, FCETube | n | Turning off HAM1 pump, replacing gauge in FCETube, craning onto termination slab | 19:25 |
15:33 | VAC | Janos | EX, EY | n | Pumping line parts collection | 18:40 |
15:39 | FAC | Tyler | XARM | n | Driving Big Red to move spool | 16:39 |
15:45 | SQZ | Camilla, Sheila | LVEA | y(local) | SQZT7 work | 18:49 |
15:45 | PSL | Jason | LVEA | n | Inventory | 17:32 |
15:46 | FAC | Mitchell | LVEA | n | Inventory | 16:09 |
15:53 | CDS | Jonathan | remote | n | Rebuilding camera software | 16:01 |
15:54 | VAC | Travis | LVEA | n | Helping Gerardo and Jordan | 17:21 |
16:07 | SUS | RyanC | CR | N | OPLEV charge measurements, EY & EX | 17:38 |
16:14 | FAC | Richard, Ken | LVEA | n | Looking at cable tray near HAM6 | 16:27 |
16:15 | | Christina | LVEA | n | Inventory | 16:33 |
16:16 | VAC | Jackie | FCETube | n | Joining Jordan and Gerardo | 17:14 |
16:18 | CDS | Patrick | remote | n | Reimaging VAC Beckhoff computer | 19:05 |
16:33 | FAC | Randy | MX | n | Craning | 18:20 |
16:33 | | Christina | MY | n | Taking a photo | 17:01 |
16:36 | SEI | Jim | remote | n | HAM1 ISI measurements | 19:05 |
16:52 | FAC | Mitchell | FCETube | n | Inventory | 17:59 |
16:52 | FAC | Tyler | XARM | n | Meeting up with Chris, Eric, and the bees | 17:07 |
16:57 | | Keita | LVEA | n | Checking analog camera inventory | 17:33 |
16:59 | FAC | Kim, Nellie | HAM Shack | n | Tech clean | 17:27 |
17:05 | EE | Fil, Marc | LVEA | n | Fixing BSC temp sensor | 18:35 |
17:11 | | Jennie, Leo, Tooba | LVEA | n | Tour | 17:42 |
17:24 | | Betsy, Brian O, LIGO India people | LVEA | n | LIGO India planning | 18:09 |
17:24 | | Christina | LVEA | n | Inventory | 17:33 |
17:27 | FAC | Kim | MX | n | Tech clean | 18:28 |
17:28 | FAC | Nellie | MY | n | Tech clean | 18:58 |
17:34 | | Christina | HAM Shack | n | Looking for a forklift | 18:17 |
17:42 | SQZ | Jennie, Leo | LVEA | y(local) | Joining on SQZT7 | 18:31 |
18:16 | | Betsy + others | EY | n | Looking at stuff | 19:05 |
18:18 | | Christina | LVEA | n | Still looking for a forklift | 18:48 |
18:49 | | Camilla | OpticsLab | n | | 18:51 |
19:11 | | RyanC | LVEA | n | Sweep | 19:35 |
19:37 | | Matt | LVEA | n | Unplugging a cable | 19:40 |
19:48 | EE | Fil | MX | n | Dropping stuff off | 20:18 |
20:19 | FAC | Chris | MY, EY | n | Checking filters | 21:16 |
20:23 | FAC | Tyler | XARM | n | Checking out more bees | 20:51 |
20:36 | VAC | Gerardo, Jordan | FCETube | n | Opening FC gate valve | 21:09 |
22:10 | PCAL | Tony | PCAL Lab | y(local) | Taking stuff | 22:16 |
Ryan S., Elenna
Ryan and I are still trying to speed up the MOVE_SPOTS state. Today, Ryan implemented new code that checks the convergence of the loops and only ramps up the ADS gains of loops that are not yet converged, to help them converge faster. This appeared to work well, although the state is still slow. We are now taking the spots to the FINAL spots that the camera servos go to, instead of some old spot, so it's possible that which loops are far off has changed.
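The selective-ramp idea can be sketched like this (a hypothetical illustration, not the actual Guardian code; the loop names, thresholds, and gain values are made up):

```python
def ramp_unconverged(errors, gains, threshold=1.0, high_gain=10.0):
    """Return per-loop ADS gains: ramp up only the loops whose error
    signal has not yet converged below `threshold`; leave converged
    loops at their nominal gain."""
    new_gains = dict(gains)
    for loop, err in errors.items():
        if abs(err) >= threshold:
            new_gains[loop] = high_gain   # not converged: raise gain
    return new_gains

# Example: PIT3 still far off, YAW3 already converged
errors = {"PIT3": 4.2, "YAW3": 0.3}
gains = {"PIT3": 1.0, "YAW3": 1.0}
print(ramp_unconverged(errors, gains))  # → {'PIT3': 10.0, 'YAW3': 1.0}
```

In the real code this check would run repeatedly (e.g. once a second) until every loop is converged, at which point all gains go back to nominal.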
Ryan also pointed out that the ENGAGE_ASC_FOR_FULL_IFO state is taking a while because it is limited by the convergence of the PIT3 ADS. This is likely because the POP A offset used in DRMI ASC is not quite right, so I adjusted it for pitch so the PRM should be closer to the full lock position. SDFed.
With regards to ENGAGE_ASC_FOR_FULL_IFO, over the three locks we've had since the adjustment made yesterday, the state has taken an average of 4.5 minutes to get through. Before this change it was taking an average of 8.5 minutes (looking at the four locks before the change), so this has made a big improvement for this state!
However, it looks like the main reason this state still takes a pretty long time compared to most other states is that it still needs to wait a long time for the PIT3 and YAW3 ADS to converge (ndscope). Here's the log from the last time we went through ENGAGE_ASC, and you can see that most of the time is spent waiting for the ADS. The actual wait timers in there only amount to 50 seconds of waiting, so the rest of the wait timers (the one-second timers) are just from the convergence checker.
I updated the POP A yaw offset so that PRC1 in DRMI will bring the PRM closer to the full lock point and hopefully make convergence in this state faster.
The last work that was done for the estimator was back on May 21st, 84548. The data that I got then was of the open-loop TFs with the SR3 OSEMINF gains at nominal (aka what they are now) vs our 'more calibrated' values that are shown in 84367. I then wrote a script that takes in the OLTF traces that were exported from diaggui and divides them for each DOF (results). The average of the two divided traces is what we are going to use as the gain values needed to counter the 'more calibrated' values. I put these gains in as FM7 (which previously held old unused gains) in the DAMP filter bank.
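The divide-and-average step can be sketched as follows (a hypothetical reconstruction, not the actual script; the real one reads diaggui exports, and the file handling is omitted here):

```python
import numpy as np

def gain_correction(oltf_nominal, oltf_calibrated):
    """Per-DOF scalar gain correction: divide the two OLTF magnitude
    traces and average the ratio across frequency, so the 'more
    calibrated' OSEMINF gains reproduce the nominal open-loop gain."""
    ratio = np.abs(oltf_nominal) / np.abs(oltf_calibrated)
    return ratio.mean()

# Toy example: the calibrated loop has 2x the nominal gain everywhere,
# so the correction to apply is 0.5
f = np.logspace(-1, 1, 100)
nominal = 1.0 / (1j * f)
calibrated = 2.0 / (1j * f)
print(gain_correction(nominal, calibrated))  # → 0.5
```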
Today I changed the M1 coil driver state to 1, set the OSEMINF gains to the ones listed in 84367, turned on FM7 for all degrees of freedom in the DAMP filter bank to turn on the gain corrections, changed the gain for DAMP Y to -0.1 instead of the nominal -0.5, and then took SUSPOINT to M1 transfer function measurements. The measurements can be found in: /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/Common/Data/2025-06-24_1700_H1ISIHAM5_ST1_WhiteNoise_SR3SusPoint_{L,T,V,R,P,Y}_0p02to50Hz.xml, in svn as r12356.
After that, I took regular transfer functions with damping on for SR3 M1 to M1. Unfortunately I accidentally took the Pitch transfer functions with the wrong bin width, so the original xml file there has twice as many points as the other ones. The saved .mat file, though, has the data all the same size. The measurements can be found in: /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-06-24_1900_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml, in svn as r12359. The results are found in: /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Results/2025-06-24_1900_H1SUSSR3_M1_ALL_TFs.pdf, also r12359.
After these measurements, everything was put back to how it was before, including the OSEMINF gains, the DAMP Y gain, and the DAMP FM7 filters turned back off.
Range comparison plot:
Command Ran: python3 range_compare.py 1396833438 1434842418 --span 1800
time 1 is April 11th 2024
time 2 is today at 23:20 UTC June 24th 2025
I ran two brucos, one on NOLINES and one on CLEAN.
Overall, the message of Tony's alog is that, relative to our best range, we have lost 10 Mpc by the time we reach 100 Hz, and an additional 5 Mpc by the time we reach 1 kHz. The brucos above show a lot of low level coherence with: MICH, SRCL, PRCL, REFL RIN. There is a chance that making additional improvements to the feedforward can help. Right now it's hard to tell how many Mpc that gets us back, but it's where we should start.
I made a range comparison with a time from last night when we got up to 155Mpc (Jun 25, 2025 05:07:30 UTC (1434863268)) and compared it to a time of good range before the vent, Apr 01, 2025 03:05:42 UTC (1427511960). Here's the range comparison.
This is interesting because it indicates that the range loss at low frequency in DARM now versus right before the vent is much smaller, only 3 Mpc. But looking back to our best range in April 2024, there is even more loss in sensitivity at low frequency.
Sheila, Camilla. WP#12573
Sheila and I realigned the Homodyne after it needed to be touched to re-seat one of the PDs in 85116. Sheila balanced powers using the LO waveplate and measured visibility to be 98.4% (11.4mV min 1.46 V max).
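For reference, the quoted visibility follows from the standard fringe-contrast formula applied to the min/max homodyne voltages above:

```python
# Fringe visibility from the measured homodyne min/max voltages
v_min, v_max = 11.4e-3, 1.46   # volts, from the measurement above
visibility = (v_max - v_min) / (v_max + v_min)
print(visibility)
```

This evaluates to ≈0.9845, consistent with the quoted 98.4%.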
We then used SQZ_MANAGER SQZ_READY_HD (needed to change LO gain, see sdf). Attached plot.
Last done by Kevin/Vicky in 84661, they got 6.4 to 7 dB of SQZ, 14.6 dB of anti-SQZ, and 12.1 dB of mean squeezing.
Type | NLG | Angle | SQZ (@300Hz) | DTT Ref |
LO shot noise | N/A | N/A | Used as 0dB | ref 1 |
ASQZ | 10 | (+) 204 | 14.3dB | ref 2 |
SQZ | 10 | (-) 114 | -7dB | ref 3 |
Mean SQZ | 10 | N/A | 10.7dB | ref 4 |
OPO Setpoint | Amplified Max | Amplified Min | UnAmp | Dark | NLG | OPO Gain |
95uW | 0.0530 | 0.001913 | 0.005307 | 9.1e-5 | 10.16 | -8 |
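The NLG figure in the table is consistent with the dark-offset-corrected ratio of amplified-max to unamplified power. A hypothetical reconstruction of that arithmetic (the small difference from the quoted 10.16 presumably comes from rounding in the table):

```python
# NLG from the OPO scan values in the table above, dark-offset corrected
amp_max, unamp, dark = 0.0530, 0.005307, 9.1e-5
nlg = (amp_max - dark) / (unamp - dark)
print(round(nlg, 2))  # → 10.14
```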
After Sheila and Camilla realigned the homodyne, I tried estimating the NLG by looking at the ADF beatnote while varying the squeezing angle to map out the ADF-LO ellipse as described in the ADF paper.
The channels used to measure the beatnote were H1:SQZ-ADF_HD_DIFF_NORM_{I,Q}. The squeeze angle was varied by sweeping H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG from 0 to 275 degrees with both polarities of the CLF servo (H1:SQZ-CLF_SERVO_IN2POL).
I then fit the data to an ellipse. The ratio of the semi-major to semi-minor axes is denoted by G in the ADF paper and was then used to compute the NLG. This results in
G: 5
NLG: 8.8
By comparison, an NLG of 10.2 is equivalent to G=5.4, which is consistent with this measurement given the accuracy of the data. This kind of measurement could probably be improved in the future by taking more data near the anti-squeezing points on the ellipse.
It's not obvious from the ADF paper how to convert from G to NLG and back. For posterity, the functions I used for these conversions are also attached. These use the "alternative OPO configuration" shown in Fig 6(a) of the ADF paper.
The data is located at /ligo/gitcommon/squeezing/sqzutils/data/NLG_HD_06_24_2025.h5.
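As a rough cross-check on those conversions, a simple lossless single-mode OPO model (my own approximation, not necessarily the paper's alternative-configuration formula) relates the ellipse axis ratio G and the NLG through the normalized pump amplitude x, with G = (1+x)/(1-x) and NLG = 1/(1-x)^2. It reproduces the numbers above to within a few percent: G ≈ 5.39 for NLG = 10.2, and NLG ≈ 9.0 for G = 5 (versus the quoted 8.8):

```python
import numpy as np

def nlg_to_g(nlg):
    """Ellipse axis ratio G from nonlinear gain, lossless OPO model."""
    x = 1.0 - 1.0 / np.sqrt(nlg)   # normalized pump amplitude
    return (1.0 + x) / (1.0 - x)

def g_to_nlg(g):
    """Inverse: nonlinear gain from ellipse axis ratio."""
    x = (g - 1.0) / (g + 1.0)
    return 1.0 / (1.0 - x) ** 2

print(nlg_to_g(10.2))  # ≈ 5.39
print(g_to_nlg(5.0))   # ≈ 9.0
```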
Ansel, Sheila, Camilla
Last week, Ansel noticed that there has been a 2Hz comb in DARM since the break, similar to the one we've seen from the HWS camera sync frequency and power supplies, which was fixed in 75876. The cabling has not been changed since then, but the camera sync frequency has.
Our current camera sync frequencies are: ITMX = 2Hz, ITMY = 10Hz. We have typically seen these combs in H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ. With a 0.0005Hz BW on DTT I can't easily see these combs, see attached.
It may be difficult to see in a standard spectrum, but it can be clearly seen in the Fscan plots linked off of the summary pages. For the "observing" Fscan, the interactive spectrum plot marks the 2 Hz comb automatically. See the attached image of H1:GDS-CALIB_STRAIN_CLEAN.
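A minimal illustration of that kind of comb detection (a hypothetical toy, not the actual Fscan code): inject a weak comb into a flat spectrum at multiples of 2 Hz, then compare the mean power in the comb bins against the off-comb background:

```python
import numpy as np

bw = 0.0005                        # Hz, bin width as in the DTT spectrum
freqs = np.arange(0.0, 200.0, bw)
psd = np.ones_like(freqs)          # flat background

# Inject a weak comb at multiples of 2 Hz
harmonics = np.arange(2.0, 200.0, 2.0)
idx = np.round(harmonics / bw).astype(int)
psd[idx] += 20.0

# Comb statistic: mean power on the comb vs median background;
# a value well above 1 flags a comb even if no single line stands out
comb_snr = psd[idx].mean() / np.median(psd)
print(comb_snr)  # → 21.0
```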
Verified that the cabling has not changed since 75876.
The next steps we should follow, as listed in 75876, would be to try using a different power supply or lowering the voltage to +12V. Alternatively, there is a note suggesting Fil could make a new cable to power both the camera and CLinks via the external supply (14V is fine for both).
Thanks Camilla. If anything can be done more rapidly than waiting another week, it would be very much appreciated. Continuing to collect contaminated data is bad for CW searches.
Matt and I turned down the voltage supplied to each camera from 14V to 12V at ~22:00 UTC while the IFO was relocking. Verified the HWS cameras and code are still running.
We also plan to have Dave reimplement the hws_camera_control.py script he wrote in 74951 to turn the HWSs off in Observing until we fix this issue.
The 2 Hz comb is still present in H1:GDS-CALIB_STRAIN_CLEAN after the voltage change (before the software update)