Comparing SQZ times from this morning, Jun 11 2025 (GPS time 1433689084), and a 15-minute span from March 26th 2025 (GPS time 1427073110).
Command run: python3 range_compare.py 1433689084 1427073110 --span 900
Today's time is well calibrated, well thermalized.
We can see new peaks in the ASD, and some of our old peaks are higher, especially at 10-15 Hz, 25-30 Hz, 500-600 Hz, and around 2 kHz.
We also seem to have a broadband decrease in both sensitivity and range: the sensitivity loss is small at most frequencies, but it spans a wide frequency range.
Unfortunately, our range has dropped between 80 Hz and 2 kHz by up to ~15 Mpc. See the first page.
The good news is that there are some very slight gains in range in the 50-80 Hz band. See the 3rd page.
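For reference, here is a minimal sketch of the kind of ASD/range comparison that range_compare.py performs (this is not the actual script; the channel name, FFT settings, and use of gwpy are assumptions):

# Hypothetical sketch (not the real range_compare.py); channel name and FFT settings are assumptions.
from gwpy.timeseries import TimeSeries
from gwpy.astro import inspiral_range

SPAN = 900  # seconds, matching --span 900

def asd_and_range(gps_start, span=SPAN):
    """Fetch calibrated strain, compute its ASD, and estimate the BNS inspiral range."""
    hoft = TimeSeries.get('H1:GDS-CALIB_STRAIN', gps_start, gps_start + span)
    asd = hoft.asd(fftlength=8, overlap=4)        # strain ASD [1/sqrt(Hz)]
    bns_range = inspiral_range(asd ** 2, snr=8)   # BNS range [Mpc] from the PSD
    return asd, bns_range

asd_new, range_new = asd_and_range(1433689084)    # Jun 11 2025 SQZ time
asd_old, range_old = asd_and_range(1427073110)    # Mar 26 2025 SQZ time
print(f"Range: {range_new:.1f} (new) vs {range_old:.1f} (old)")
(asd_new / asd_old).plot()                        # ratio > 1 shows where the noise got worse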
Wed Jun 11 10:10:49 2025 INFO: Fill completed in 10min 46secs
Good fill verified by Gerardo curbside.
SDF Overview looks great, except this HPIHAM1 channel was not found.
TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1 is still locked 8 hours and 30 minutes later!
All systems seem to be functioning.
There was talk of doing a calibration measurement, which I started right after making sure there wasn't anyone still working inside the LVEA.
I ran a PCAL BroadBand with this command:
pydarm measure --run-headless bb
2025-06-11 07:44:58,555 config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
2025-06-11 07:44:58,571 available measurements:
pcal: PCal response, swept-sine (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_.xml)
bb : PCal response, broad-band (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml)
The BroadBand finished, but I did not run the Simulines. It was believed by the calibration gurus that we don't need it before Observing because our calibration
"monitoring lines show a pretty good uncertainty for LHO this morning: https://gstlal.ligo.caltech.edu/grafana/d/StZk6BPVz/calibration-monitoring?orgId=1&var-DashDatasource=lho_calibration_monitoring_v3&var-coh_threshold=%22coh_threshold%22%20%3D%20%27cohok%27%20AND&var-detector_state=&from=1749629890225&to=1749652518797 Roughly +/-2% wiggle "
~Joe B
Clicked the button for Observing, and we went right into Observing without any SDF issues!
Went into observing at 14:57 UTC
There are messages, though, mostly from the SEI system, all of which are setpoint changes; see SPM DIFFS for differences for HAMs 2, 3, 4, and 5.
But these have not stopped us from getting into Observing.
I have attached a screenshot of the broadband measurement from this morning. It shows that the calibration uncertainty is within ±2%, which means that our new calibration is excellent!
For those who want to plot the latest PCAL broadband, you can use a template that I have saved in /opt/rtcds/userapps/release/cal/h1/dtt_templates/PCAL_BB_template.xml (aka [userapps] cal/h1/dtt_templates/)
In order to use this template, you must find the GPS time of the start of the broadband measurement, which I found today by converting the timestamp in Tony's post above into GPS time. This template pulls data from NDS2 because it uses GDS channels, so you will also need to go to the "Input" tab and put your current GPS time in the "Epoch stop" entry within the "NDS2 selection" box. The current time will hopefully be after the start time of the broadband measurement, which ensures that the full span of the data you need is requested from NDS2. If you don't do this, the template will give you an error when you try to run it.
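For example, a quick way to do that timestamp-to-GPS conversion and to grab the current GPS time for the "Epoch stop" entry (a sketch assuming gwpy is available on the workstations; the timestamp below is one of the broadband start times quoted later in this log):

# Sketch: convert a UTC timestamp to GPS and get the current GPS time for the
# NDS2 "Epoch stop" entry in the DTT template.
from gwpy.time import to_gps, tconvert

bb_start_gps = to_gps('2025-06-11 06:36:03')   # broadband start (UTC) -> 1433658981
now_gps = tconvert('now')                      # current GPS time
print(bb_start_gps, now_gps)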
TITLE: 06/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 135 Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY:
We are Observing! We've been Locked for 1 hour. We got to NOMINAL_LOW_NOISE an hour ago after a couple locklosses with Elenna's help (see below) and then took a couple unthermalized broadband calibration measurements (84959, 84960). I also just adjusted the sqz angle and was able to get better squeezing for the 1.7 kHz band, but the 350 Hz band squeezing is very bad. I am selecting DOWN so that if we unlock, we don't relock.
Early in the relocking process, we were having issues with DRMI and PRMI not catching, even though we had really good DRMI flashes. I finally gave up and went to run an initial alignment, but we had a bit of a detour when an error in SDFing caused Big Noise (TM) to be sent into PM1, tripping the software WD, which then caused the HAM1 ISI and HAM1 HEPI to trip as well. Once we got that figured out, we went through a full initial alignment with no issues.
Relocking, we had two locklosses from LOWNOISE_ASC from the same spot. Here are their logs (first, second). There were no ASC oscillations before the locklosses, so it doesn't seem to be due to the 1Hz issues from earlier (849463). Looking at the logs, they both happened right after turning on FM4 for DHARD P, DcntrlLP. Elenna took a look at that filter and noticed that the ramping on time might be too short, and changed it from 5s to 10s, and updated the wait time in the guardian to match. She loaded that all in, and it worked!!
As a strange aside, after the second LOWNOISE_ASC lockloss, I went into manual IA to align PRX, but there was no light on ASC-AS_A. Left Manual IA, went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. Looked at the ASC screen and noticed that the fast shutter was closed. Selected OPEN for the fast shutter, and it opened fine. This was a weird issue??
LOG:
23:30UTC Locked and getting data for the new calibration
23:43 Lockloss
- Started an initial alignment, trying to do automatically after PRC align was bypassed in the state graph (84950)
- Tried relocking, couldn't get DRMI or PRMI to catch, even with really good DRMI flashes
- Went to manual initial alignment to just do PRX by hand, but saw the HAM1 ISI IOP DACKILL had tripped
- Then HAM1 HEPI tripped, and I had to put PM1 in SAFE because huge numbers were coming in through the LOCK filter
- It was due to an SDF error and was corrected
- Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
- Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
- Tried going to manual IA to align PRX, but there was no light on ASC-AS_A. Left Manual IA, went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. Looked at the ASC screen and noticed that the fast shutter was closed. Selected OPEN for the fast shutter, and it opened fine.
06:03 NOMINAL_LOW_NOISE
06:07 Started BB calibration measurement
06:12 Calibration measurement done
06:36 BB calibration measurement started
06:41 Calibration measurement done
07:02 Back into Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:50 | VAC | Gerardo | LVEA | YES | Climbing around on HAM1 | 00:58 |
Unfortunately, since we don't want the ifo trying to relock all night if we lose lock, I have to select DOWN, but that means the request for ISC_LOCK is not in the right spot for us to stay in Observing. So we won't be Observing overnight, but we will be locked (at least until we lose lock, at which point we will be in DOWN).
Here is some more information about some of the problems Oli faced last night and how they were fixed.
PM1 saturations:
Unfortunately, this problem was an error on my part. Yesterday, Sheila and I were making changes to the DC6 centering loop, which feeds back to PM1. As a part of updating the loop design, I SDFed the new filter settings, but inadvertently also SDFed the input of DC6 to be ON in safe. We don't want this; SDF is supposed to revert all the DC centering loop inputs to OFF when we lose lock. Since I made this mistake, a large junk signal came in through the input of DC6 and then was sent to the suspension, which railed PM1 and then tripped the HAM1 ISI. Once I realized what was happening, I logged in and had Oli re-SDF the inputs of DC6 P and Y to be OFF.
You can see this mistake in my attached screenshot of the DC6 SDF; I carelessly missed the "IN" among the list of differences.
DHARD P filter engagement:
In order to avoid some control instabilities, Sheila and I have been reordering some guardian states. Specifically, we moved the LOWNOISE ASC state to run after LOWNOISE LENGTH CONTROL. This should not have caused any problems, except Oli noticed that we lost lock twice at the exact same point in the locking process, right at the end of LOWNOISE ASC when the DHARD P low noise controller is engaged, FM4. I attached the two guardian logs Oli sent me demonstrating this.
I took a look at the FM4 step response in foton, and noticed that the step response is actually quite long, and the ramp time of the filter was set to 5 seconds. I also looked at the DARM signal right before lockloss, and noticed that the DARM IN1 signal had a large motion away from zero just before lockloss, like it was being kicked. My hypothesis is that the impulse of the new DHARD P filter was kicking DARM during engagement. This guardian state used to be run BEFORE we switched the coil drivers to low bandwidth, so maybe the low bandwidth coil drivers can't handle that kind of impulse.
I changed the ramp time of the filter to 10 seconds, and we proceeded through the state on the next attempt just fine.
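As a rough illustration of that kind of check (a sketch with a stand-in low-pass filter, not the actual DcntrlLP design from foton), one can compare a filter's step-response settling time against the ramp and wait times:

# Sketch: estimate how long a filter's step response takes to settle, to compare
# against the guardian ramp and wait times. The filter below is a stand-in 0.1 Hz
# low-pass, NOT the actual DHARD P FM4 (DcntrlLP) design.
import numpy as np
from scipy import signal

fs = 512                                           # sample rate for this sketch [Hz]
sos = signal.butter(4, 0.1, btype='low', fs=fs, output='sos')

t = np.arange(0, 60, 1 / fs)                       # 60 s of a unit step input
resp = signal.sosfilt(sos, np.ones_like(t))

final = resp[-1]
outside = np.where(np.abs(resp - final) > 0.01 * np.abs(final))[0]
settle_time = t[outside[-1]] if outside.size else 0.0
print(f"Step response settles to 1% in ~{settle_time:.1f} s (compare to a 5 s or 10 s ramp)")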
We took another broadband measurement after having been at max power for 40 minutes in our quest to confirm the newest calibration. Of course, since we have only been at max power for 40 minutes, we are still unthermalized.
Start: 2025-06-11 06:36:03 UTC (1433658981)
End: 2025-06-11 06:41:12 UTC (1433659290)
Output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250611T063603Z.xml
We should still take another measurement when we are more thermalized, but just after 40 minutes of NLN the broadband results look good. I also checked the calibration line grafana page, and all the uncertainties are within 5%. Most are within 2-3% except the 33 Hz line which is at 4%.
As soon as we got to NLN, we took a broadband calibration measurement to check out the new calibration (84953). We had just gotten to max power 12 minutes before starting this measurement, so of course we are very unthermalized. We are hoping to take another measurement once we're thermalized.
Start: 2025-06-11 06:07:20 UTC (1433657258)
End: 2025-06-11 06:12:30 UTC (1433657568)
Output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250611T060720Z.xml
Currently trying to lock - We were almost there but lost lock during LOWNOISE_ASC for an unknown reason - there were no ringups leading up to the lockloss.
Self-explanatory from title. Set 'manual_control' to False so that when relocking, we automatically go into LOCKING_ARMS_GREEN instead of GREEN_ARMS_MANUAL, and so we go into PRMI and MICH after timing out of DRMI, instead of staying in DRMI.
Reloaded ISC_LOCK and ISC_DRMI
Francisco, Elenna, help online from Joe B
We used the thermalized calibration measurement that Tony took in alog 84949, and ran the calibration report, generating report 20250610T224009Z. We had previously done this process for a slightly earlier calibration measurement with guidance from Joe. Upon inspection of the report, Joe recommended that we change the parameter is_pro_spring from False to True, which significantly improved the fit of the calibration. The report that Tony uploaded in his alog includes that fit change. Since we were happy with this fit, Francisco reran the pydarm report, this time requesting the generation of the GDS filters. After this completed, we inspected the comparison of the FIR filters with the DARM model, and saw very good agreement between 10 and 1000 Hz.
Two things we want to point out: the nonsens filter fits included a lot of ripple at low frequency, but it still looks small enough that we think it is "ok"; and we saw some large line features at high frequency in the TST filters, which Joe had previously assured us were ok.
While online with Joe, we had also confirmed that the DARM actuation parameters, such as gains and filters, matched in three locations: in the suspension model itself, in the CAL CS model, and in the pydarm ini file.
Since we confirmed this was all looking good, Francisco and I proceeded with the next steps, which we followed from Jeff's alog here, 83088. We ran these commands in this order:
pydarm commit 20250610T224009Z --valid
pydarm export --push 20250610T224009Z
pydarm upload 20250610T224009Z
pydarm gds restart
At this point, Jeff notes that he had to wait 12 minutes (checking "pydarm gds status") before running the broadband measurement to confirm the calibration is good. Francisco and I also knew we needed to check the status of the calibration lines on grafana. However, a few minutes after we started the clock on this wait time, the IFO lost lock.
We think the calibration is good, but we have not actually been able to confirm this, which means we cannot go into observing tomorrow (Wednesday) before making this confirmation.
Doing so requires some locked time with calibration lines on and a broadband injection for a final verification of this new calibration. The hope is that we can achieve this tonight, but if not, we must do so tomorrow before going into observing. (Note: because of the different rules of "engineering" data versus "observing" data, we could go into observing mode tonight without this confirmation).
We confirmed this new calibration is good in this alog: 84963.
I am going to add a few more details and thoughts about this calibration here:
Currently, we are operating with a digital offset in SRCL, which is counteracting about 1.4 degrees of SRCL detuning. Based on the calibration measurement, operating with this offset seems to have compensated most of the anti-spring that has previously been evident in the sensing function. However, our measurements still show non-flat behavior at low frequency, which was actually best fit with a spring (aka "pro-spring"). That said, the full behavior of this feature looks more like some L2A2L coupling. It may be worthwhile to test this coupling by trying different ASC gains and running sensing function measurements.
Joe pointed out to me this morning in the cal lines grafana (and we also saw it in the very early broadband measurement last night, 84959) that the calibration looks very bad just at the start of lock, with uncertainties nearing 10%. This seems to level off within about 30 minutes of the start of the lock. Since that is pretty bad, we might want to consider what to do on the IFO side to compensate. Maybe our SRCL offset is too large for the first 30 minutes of lock, or there is something else we can do to mitigate this response.
Just watching the grafana for this recent lock acquisition, it took about 1 hour for the uncertainty of the 33 Hz line to drop from 8% to 2%.
Jennie W, Sheila D
I compared our optical gain and power-recycling gain between this afternoon once we were thermalised at 22:42:59 UTC and a thermalised time just before the vent on April 1st at 07:34:01 UTC.
Our optical gain looks like it has decreased by around 1% and our PRG from 52 to 50 W/W.
This might make it worth tweaking our OMC alignment to improve optical gain, but the PRG hasn't changed much, so it's maybe not worth trying to improve this before the run starts by tweaking camera servo offsets.
We should be careful with the PRG comparison- I am not sure the change in the PRG is "real" because before the vent, we had not updated the PRG calibration to account for reduced power on IM4 trans that occurred after the O4a/O4b break. However, I did update the PRG calibration last week to account for it. It could still be correct; one reason why I didn't update the PRG calibration before was because it seemed "good enough", but I'm not sure if it's good enough to a few percent to make this kind of comparison.
No-SQZ time taken today, 21:47:00 UTC to 21:59:00 UTC.
Two seconds of data in this span are missing.
Our tools can't pull the entire stretch for this no-SQZ time.
Johnathan has identified the 2 seconds that are missing from the 12-minute data stretch:
"I don't have good news for you. There is a 2s gap in there at 1433627912-1433627913 on H-H1_llhoft. The H-H1_HOFT_C00 is worse, I don't see frames in the 1433627.... range at all." ~ Johnathan H.
We were able to salvage 674 seconds from this time that can be useful.
Useful GPS span: 1433627238 - 1433627912
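As an illustration of how one might pull the stretch and confirm the gap (a sketch; the channel name and the gap-padded NDS2 fetch are assumptions, not what our tools actually do):

# Sketch: fetch the full 12-minute no-SQZ stretch with gaps padded as NaN, report
# the missing data, then pull only the usable 674 s span. Channel name is assumed.
import numpy as np
from gwpy.timeseries import TimeSeries

full = TimeSeries.fetch('H1:GDS-CALIB_STRAIN', 1433627238, 1433627958, pad=np.nan)
missing_s = np.isnan(full.value).sum() / full.sample_rate.value
print(f"{missing_s:.1f} s of data missing in the 21:47-21:59 UTC stretch")

usable = TimeSeries.fetch('H1:GDS-CALIB_STRAIN', 1433627238, 1433627912)   # 674 s, gap-free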
I'm working on going through some Observe SDFs, so that we're ready for observing soon.
Jim is currently working on going through many of the SEI SDFs. The rest of the diffs I need to check with other commissioners to be sure about before we clear them, but I think we're getting close to having our SDFs cleared!
h1tcshws sdfs attached.
Reverted the Baffle PDs to what they were 3 months ago (attached); unsure why they would have changed.
SQZ ADF frequency SDFs accepted; we do not know why these would have been accepted at the value of -600 that they've been at for some of the past 2 weeks.
ASC SDFs were from changes to DC6, cleared.
Cleared these SDFs for the phase changes for LSC REFL A and B.
I trended these, and see that FM2 was on in all three of these last time we were in observing, so these must have been erroneously accepted in the observing snap.
I also accepted the HAM7_DK_BYPASS time from 1200 to 999999 after checking with Dave, as attached.
Sheila, Elenna, Camilla
Sheila was questioning whether something is drifting, since we need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot, both while powering up / moving spots and while in NLN. It's unclear from the BS alignment inputs plot what's causing this.
This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments back then too. One year ago this was not happening (plot).
These large BS PIT changes began 5th-6th July 2024 (plot). The first lock like this happened 5th July 2024 19:26 UTC (12:26 PT); 78877 is the day-shift log from that time, when we were doing PR2 spot moves. There was also a SUS computer restart (78892), but that appeared to be a day after this started happening.
Sheila, Camilla
This reminded Sheila of when we were heating a SUS in the past, causing the bottom mass to pitch and the ASC to move the top mass to counteract this. Then after lockloss, the bottom mass would slowly go back to its nominal position.
We do see this on the BS since the PR2 move; see attached (top 2 left plots). In the green bottom-mass oplev trace, when the ASC is turned off on lockloss, the BS moves quickly and then keeps moving slowly over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot from before the PR2 move. Below is a list of other PR2 positions we tried; all of the other positions have also produced this BS drift. The total PR2 move since the good place is ~3500 urad in yaw.
To avoid this heating and BS drift, we should move back towards a PR2 YAW closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.
I did a bit of alog archaeology to re-remember what we'd done in the past.
To put back the soft turn-off of the BS ASC, I think we need to:
Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight. Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian, and loaded. We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (eg, from a DRMI lockloss).
This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different.
In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had looked for a long time (as Camilla notes in previous logs in this thread). Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much and is mostly flattened out after a shorter amount of time. I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time). However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.
We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments. If needed, we can try to tweak the time constant of the 'soft let-go' to further make the optical lever signal stay more overall flat.
The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert. The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.
RyanS, Jenne
We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and at where it wants to be for the next acquisition of DRMI. Attached is one such example.
In contrast, a day or so ago Tony had to do an initial alignment; on that day, it seemed like the BS took much longer to get to its quiescent spot. I'm not yet sure why the behavior is different sometimes.
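One way to quantify the cooldown in each case would be to fit an exponential to the BS pitch oplev trend after the let-go (a sketch with synthetic stand-in data; the channel named in the comment and the single-exponential model are assumptions):

# Sketch: fit y(t) = A*exp(-t/tau) + C to the BS pitch oplev relaxation to get a
# cooldown time constant. Data here are synthetic; in practice one would trend
# something like H1:SUS-BS_M3_OPLEV_PIT_OUT_DQ after the slow let-go.
import numpy as np
from scipy.optimize import curve_fit

def relax(t, amp, tau, offset):
    return amp * np.exp(-t / tau) + offset

t = np.arange(0, 2400, 10.0)                                        # 40 min of trend, 10 s spacing
y = relax(t, 5.0, 300.0, -2.0) + np.random.normal(0, 0.1, t.size)   # fake oplev data

(amp, tau, offset), _ = curve_fit(relax, t, y, p0=(1.0, 600.0, 0.0))
print(f"tau ~ {tau / 60:.1f} min; ~3*tau ({3 * tau / 60:.1f} min) to mostly settle")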
Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.