TITLE: 06/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 135 Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY:
We are Observing! We've been Locked for 1 hour. We reached NOMINAL_LOW_NOISE an hour ago, after a couple of locklosses, with Elenna's help (see below), and then took a couple of unthermalized broadband calibration measurements (84959, 84960). I also just adjusted the sqz angle and was able to get better squeezing in the 1.7 kHz band, but the 350 Hz band squeezing is very bad. I am selecting DOWN so that if we unlock, we don't relock.
Early in the relocking process, we were having issues with DRMI and PRMI not catching, even though we had really good DRMI flashes. I finally gave up and went to run an initial alignment, but we had a bit of a detour when an SDF error caused Big Noise (TM) to be sent into PM1, tripping the software WD and, in turn, the HAM1 ISI and HAM1 HEPI. Once we got that figured out, we went through a full initial alignment with no issues.
Relocking, we had two locklosses from LOWNOISE_ASC at the same spot. Here are their logs (first, second). There were no ASC oscillations before the locklosses, so it doesn't seem to be due to the 1 Hz issues from earlier (849463). Looking at the logs, they both happened right after turning on FM4 (DcntrlLP) for DHARD P. Elenna took a look at that filter, noticed that the ramp-on time might be too short, and changed it from 5 s to 10 s, updating the wait time in the guardian to match. She loaded that all in, and it worked!!
As a strange aside, after the second LOWNOISE_ASC lockloss, I went into manual IA to align PRX, but there was no light on ASC-AS_A. Left Manual IA, went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. Looked at the ASC screen and noticed that the fast shutter was closed. Selected OPEN for the fast shutter, and it opened fine. This was a weird issue??
LOG:
23:30UTC Locked and getting data for the new calibration
23:43 Lockloss
- Started an initial alignment, trying to run it automatically now that PRC align has been bypassed in the state graph (84950)
- Tried relocking, couldn't get DRMI or PRMI to catch, even with really good DRMI flashes
- Went to manual initial alignment to just do PRX by hand, but saw the HAM1 ISI IOP DACKILL had tripped
- Then HAM1 HEPI tripped, and I had to put PM1 in SAFE because huge numbers were coming in through the LOCK filter
- It was due to an SDF error and was corrected
- Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
- Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
- Tried going to manual IA to align PRX, but there was no light on ASC-AS_A. Left Manual IA, went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. Looked at the ASC screen and noticed that the fast shutter was closed. Selected OPEN for the fast shutter, and it opened fine.
06:03 NOMINAL_LOW_NOISE
06:07 Started BB calibration measurement
06:12 Calibration measurement done
06:36 BB calibration measurement started
06:41 Calibration measurement done
07:02 Back into Observing
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:50 | VAC | Gerardo | LVEA | YES | Climbing around on HAM1 | 00:58 |
Unfortunately, since we don't want the ifo trying to relock all night if we lose lock, I have to select DOWN, which means the ISC_LOCK request is not in the right state for us to stay Observing. So we won't be Observing overnight, but we will stay locked (at least until we lose lock, at which point we'll be in DOWN).
Here is some more information about some of the problems Oli faced last night and how they were fixed.
PM1 saturations:
Unfortunately, this problem was an error on my part. Yesterday, Sheila and I were making changes to the DC6 centering loop, which feeds back to PM1. As a part of updating the loop design, I SDFed the new filter settings, but inadvertently also SDFed the input of DC6 to be ON in safe. We don't want this; SDF is supposed to revert all the DC centering loop inputs to OFF when we lose lock. Since I made this mistake, a large junk signal came in through the input of DC6 and then was sent to the suspension, which railed PM1 and then tripped the HAM1 ISI. Once I realized what was happening, I logged in and had Oli re-SDF the inputs of DC6 P and Y to be OFF.
You can see this mistake in my attached screenshot of the DC6 SDF; I carelessly missed the "IN" among the list of differences.
DHARD P filter engagement:
In order to avoid some control instabilities, Sheila and I have been reordering some guardian states. Specifically, we moved the LOWNOISE ASC state to run after LOWNOISE LENGTH CONTROL. This should not have caused any problems, except Oli noticed that we lost lock twice at the exact same point in the locking process, right at the end of LOWNOISE ASC when the DHARD P low noise controller is engaged, FM4. I attached the two guardian logs Oli sent me demonstrating this.
I took a look at the FM4 step response in foton and noticed that it is actually quite long, while the ramp time of the filter was set to only 5 seconds. I also looked at the DARM signal right before lockloss, and noticed that the DARM IN1 signal had a large excursion away from zero just before lockloss, like it was being kicked. My hypothesis is that the impulse of the new DHARD P filter was kicking DARM during engagement. This guardian state used to be run BEFORE we switched the coil drivers to low bandwidth, so maybe the low bandwidth coil drivers can't handle that kind of impulse.
I changed the ramp time of the filter to 10 seconds, and we proceeded through the state on the next attempt just fine.
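Elenna's ramp-time fix can be illustrated with a toy model: treat filter engagement as a linear cross-fade between the old and new control signals, and compare the worst transient rate for a 5 s versus a 10 s ramp. This is only a sketch with a stand-in low-pass filter, not the actual DHARD P FM4 design.

```python
# Toy model of filter engagement; the Butterworth low-pass is a stand-in
# for the real DcntrlLP filter, chosen only to have a slow step response.
import numpy as np
from scipy import signal

fs = 256                       # sample rate (Hz), arbitrary for illustration
t = np.arange(0, 30, 1 / fs)
b, a = signal.butter(4, 0.5, fs=fs)   # stand-in filter, 0.5 Hz corner
x = np.ones_like(t)            # steady control signal before engagement
y = signal.lfilter(b, a, x)    # filtered version, with a settling transient

def engagement_kick(ramp_s):
    """Peak rate of change when cross-fading to the new filter over ramp_s."""
    w = np.clip(t / ramp_s, 0.0, 1.0)  # linear ramp weight, like foton's
    out = (1 - w) * x + w * y
    return np.max(np.abs(np.diff(out))) * fs

kick_5s = engagement_kick(5)
kick_10s = engagement_kick(10)
assert kick_10s < kick_5s      # the longer ramp gives a gentler transient
```

In this toy model the longer ramp clearly reduces the peak transient, consistent with the idea that a slower engagement gives the low-bandwidth coil drivers a gentler impulse.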
We took another broadband measurement after having been at max power for 40 minutes in our quest to confirm the newest calibration. Of course, since we have only been at max power for 40 minutes, we are still unthermalized.
Start: 2025-06-11 06:36:03 UTC (1433658981)
End: 2025-06-11 06:41:12 UTC (1433659290)
Output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250611T063603Z.xml
We should still take another measurement when we are more thermalized, but just after 40 minutes of NLN the broadband results look good. I also checked the calibration line grafana page, and all the uncertainties are within 5%. Most are within 2-3% except the 33 Hz line which is at 4%.
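The grafana inspection amounts to a simple threshold test over the calibration lines. A toy version of that check, where the line frequencies and values are made up apart from the 5% criterion and the ~4% reading on the 33 Hz line mentioned above:

```python
# Toy version of the grafana uncertainty check; the line list and values
# are illustrative, not real monitor data.
line_uncertainty = {17.1: 0.02, 33.43: 0.04, 410.3: 0.03, 1083.7: 0.02}
THRESHOLD = 0.05  # 5 % criterion from the shift notes

# any line at or above threshold would flag the calibration as suspect
over = {f: u for f, u in line_uncertainty.items() if u >= THRESHOLD}
assert not over  # all calibration lines within 5 %
```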
As soon as we got to NLN, we took a broadband calibration measurement to check out the new calibration (84953). We had just gotten to max power 12 minutes before starting this measurement, so of course we are very unthermalized. We are hoping to take another measurement once we're thermalized.
Start: 2025-06-11 06:07:20 UTC (1433657258)
End: 2025-06-11 06:12:30 UTC (1433657568)
Output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250611T060720Z.xml
Currently trying to lock - We were almost there but lost lock during LOWNOISE_ASC for an unknown reason - there were no ringups leading up to the lockloss.
Self-explanatory from title. Set 'manual_control' to False so that when relocking, we automatically go into LOCKING_ARMS_GREEN instead of GREEN_ARMS_MANUAL, and so we go into PRMI and MICH after timing out of DRMI, instead of staying in DRMI.
Reloaded ISC_LOCK and ISC_DRMI
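As a toy sketch of what the 'manual_control' flag changes (hypothetical code, not the actual guardian source; only the state names come from the log):

```python
# Hypothetical illustration of the 'manual_control' flag described above;
# the logic is a guess at the intent, not the real ISC_LOCK code.
manual_control = False

def after_arms_green():
    # with manual control off, go straight into automated green-arm locking
    return 'GREEN_ARMS_MANUAL' if manual_control else 'LOCKING_ARMS_GREEN'

def after_drmi_timeout():
    # with manual control off, fall back to PRMI (then MICH) instead of
    # sitting in DRMI
    return 'DRMI' if manual_control else 'PRMI'

assert after_arms_green() == 'LOCKING_ARMS_GREEN'
assert after_drmi_timeout() == 'PRMI'
```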
Francisco, Elenna, help online from Joe B
We used the thermalized calibration measurement that Tony took in alog 84949, and ran the calibration report, generating report 20250610T224009Z. We had previously done this process for a slightly earlier calibration measurement with guidance from Joe. Upon inspection of the report, Joe recommended that we change the parameter is_pro_spring
from False to True, which significantly improved the fit of the calibration. The report that Tony uploaded in his alog includes that fit change. Since we were happy with this fit, Francisco reran the pydarm report, this time requesting the generation of the GDS filters. After this completed, we inspected the comparison of the FIR filters with the DARM model, and saw very good agreement between 10 and 1000 Hz.
Two things we want to point out: the nonsens filter fits included a lot of ripple at low frequency, but it still looks small enough that we think it is "ok". We also saw some large line features at high frequency in the TST filters, which Joe had previously assured us were ok.
While online with Joe, we had also confirmed that the DARM actuation parameters, such as gains and filters, matched in three locations: in the suspension model itself, in the CAL CS model, and in the pydarm ini file.
Since we confirmed this was all looking good, Francisco and I proceeded with the next steps, which we followed from Jeff's alog here, 83088. We ran these commands in this order:
pydarm commit 20250610T224009Z --valid
pydarm export --push 20250610T224009Z
pydarm upload 20250610T224009Z
pydarm gds restart
At this point, Jeff notes that he had to wait 12 minutes before running "pydarm gds status" and the broadband measurement to confirm the calibration is good. Francisco and I also knew we needed to check the status of the calibration lines on grafana. However, a few minutes after we started the clock on this wait time, the IFO lost lock.
We think the calibration is good, but we have not actually been able to confirm this, which means we cannot go into observing tomorrow (Wednesday) before making this confirmation.
Doing so requires some locked time with calibration lines on and a broadband injection for a final verification of this new calibration. The hope is that we can achieve this tonight, but if not, we must do so tomorrow before going into observing. (Note: because of the different rules of "engineering" data versus "observing" data, we could go into observing mode tonight without this confirmation).
We confirmed this new calibration is good in this alog: 84963.
I am going to add a few more details and thoughts about this calibration here:
Currently, we are operating with a digital offset in SRCL, which is counteracting about 1.4 degrees of SRCL detuning. Based on the calibration measurement, operating with this offset seems to have compensated most of the anti-spring that has been previously evident in the sensing function. However, our measurements still show non-flat behavior at low frequency, which was actually best fit with a spring (aka "pro-spring"). However, the full behavior of this feature appears more like some L2A2L coupling. It may be worthwhile to test out this coupling by trying different ASC gains and running sensing function measurements.
Joe pointed out to me this morning on the cal lines grafana, and we also saw in the very early broadband measurement last night (84959), that the calibration looks very bad right at the start of a lock, with uncertainties nearing 10%. This seems to level off within about 30 minutes of the start of the lock. Since that is pretty bad, we might want to consider what to do on the IFO side to compensate. Maybe our SRCL offset is too large for the first 30 minutes of lock, or there is something else we can do to mitigate this response.
Just watching the grafana for this recent lock acquisition, it took about 1 hour for the uncertainty of the 33 Hz line to drop from 8% to 2%.
TITLE: 06/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Locking notes
Started Manual_Initial_Alignment
Touched up ALS-Y alignment.
Requested Initial_Alignment_Offloaded on the ALSX&Y Guardians.
Took Align_IFO to Input_Align_Offloaded, then to PRX_LOCKED. Maxed out PRM alignment.
Requested Mich_Bright_Offloaded and then finished the Manual_Initial_Alignment.
15:00 UTC Norco Truck heading down to EY.
TripLite Power plug is still on. Dave is turning it off. 15:55 UTC
16:00 UTC TEST T1572825
Nominal Low Noise Reached 16:15 UTC
Unknown Lockloss from NLN @ 16:18 UTC
Sat in DRMI For a while pushing around the Beam Splitter to get better flashes.
I returned the BS to its position from the end of the last initial alignment.
Running the return slider values script in Print Only Mode reveals the following optics needed to move
Before: SUS-PRM_M1_OPTICALIGN_P_OFFSET: -1681.2,
After: SUS-PRM_M1_OPTICALIGN_P_OFFSET: -1679.4,
Before: SUS-PRM_M1_OPTICALIGN_Y_OFFSET: -41.9,
After: SUS-PRM_M1_OPTICALIGN_Y_OFFSET: -35.7,
Before: SUS-PR2_M1_OPTICALIGN_Y_OFFSET: -230.6,
After: SUS-PR2_M1_OPTICALIGN_Y_OFFSET: -228.9,
Before: SUS-SRM_M1_OPTICALIGN_P_OFFSET: 2544.1,
After: SUS-SRM_M1_OPTICALIGN_P_OFFSET: 2532.9,
Eventually we gave up and did another Man.Initial_Alignment. But maybe I should have been moving PRM instead of BS.
While relocking there was a temperature excursion in the +Y side of the LVEA. Tyler took a look at it and was able to make the changes needed to reverse it.
15 minutes of NOMINAL_LOW_NOISE & Observing!!! 21:29 - 21:46 UTC !
The Squeeze is not Squoze for 15 minutes.
GRB-Short E572842 22:22 UTC
Three Calibration Sweeps !!! 84932 84940 84949
GRB-Short E572844 @ 23:12 UTC
Kevin K. started some ADF sweeps:
GPS time for start of one ADF sweep
1433628819
Kevin K. also started one more a bit earlier at
1433628286
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD | 22:54 |
14:35 | FAC | Eric | Mech Room | N | Working in mech room on mezzanine | 14:51 |
14:36 | GAC | Nelly & Kim | EY & EX | N | Technical cleaning | 16:36 |
14:44 | FAC | Randy | LVEA | y | finding path and Moving Forklift. | 14:51 |
15:12 | VAC | Gerardo & Travis | LVEA HAM1 | y | Checking VAC Status | 15:20 |
15:33 | FAC | Eric | Mech Mezz | N | Working in mech room on mezzanine | 15:50 |
16:01 | PEM | Richard | CER | N | Putting PEM wire Coil back up | 16:16 |
16:21 | FAC | Randy | Mid stations | N | Storing parts. | 17:21 |
16:25 | OPS | Camilla, TJ | LVEA | Y | Sweep the LVEA | 16:52 |
16:48 | VAC | Travis | CER | N | Getting parts | 16:51 |
16:59 | VAC | Jordan | LVEA Output Arm | Y | Shutting down pump | 17:05 |
17:25 | Tour | Jeff +15 GWANW | Control Room & Overpass | N | GWANW tour | 17:26 |
17:26 | Tour | Rick + 4 ET | LVEA Mech Room | y | Giving Tour to ET Scientists | 18:26 |
17:44 | SEI | Patrick | EY | N | Getting USB stick for BSRY | 18:51 |
18:07 | FAC | Chris | ALL Mids & Ends | N | Testing Phone functionality | 18:51 |
18:31 | FAC | Tyler | VAC Prep | N | Storing tools & parts. | 18:46 |
18:55 | Tour | Rick Robert & ET | EY | N | Giving tour to ET scientists | 19:27 |
19:09 | IMC | Sheila | LVEA | Y | Plug in SR75 into psl racks | 19:23 |
23:07 | Tour | Rick, Robert +ET | Roof | N | Heading to the roof for a good view | 23:37 |
Jennie W, Sheila D
I compared our optical gain and power-recycling gain between this afternoon once we were thermalised at 22:42:59 UTC and a thermalised time just before the vent on April 1st at 07:34:01 UTC.
Our optical gain looks like it has decreased by around 1%, and our PRG has dropped from 52 to 50 W/W.
This might make it worth tweaking our OMC alignment to improve the optical gain, but since the PRG hasn't changed much, it's maybe not worth trying to improve this before the run starts by tweaking camera servo offsets.
We should be careful with the PRG comparison: I am not sure the change in the PRG is "real", because before the vent we had not updated the PRG calibration to account for the reduced power on IM4 trans that occurred after the O4a/O4b break. However, I did update the PRG calibration last week to account for it. It could still be correct; one reason why I didn't update the PRG calibration before was that it seemed "good enough", but I'm not sure it's good enough at the few-percent level to make this kind of comparison.
Oli, Ibrahim, Sheila
In preparation for automated locking tonight, we've removed the PRC align steps from the INIT_ALIGN guardian by adding a path around them in the graph.
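A minimal sketch of the graph trick, assuming Guardian-style shortest-path routing between states (state names other than the PRC align step are made up): adding a bypass edge routes around a state without deleting it.

```python
from collections import deque

# Hypothetical state graph; only the PRC align step reflects the log.
graph = {
    'INPUT_ALIGN': ['PRC_ALIGN'],
    'PRC_ALIGN':   ['MICH_ALIGN'],
    'MICH_ALIGN':  ['DONE'],
    'DONE':        [],
}
graph['INPUT_ALIGN'].append('MICH_ALIGN')  # the added bypass edge

def route(graph, start, goal):
    """Breadth-first search for the shortest path between two states."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# the router now skips PRC_ALIGN, but the state remains in the graph
assert route(graph, 'INPUT_ALIGN', 'DONE') == ['INPUT_ALIGN', 'MICH_ALIGN', 'DONE']
```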
We heard you like Calibration sweeps with a Thermalized IFO... So we Thermalized your IFO and took a 3rd CAL sweep!
pydarm measure --run-headless bb
2025-06-10 15:33:52,565 config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
2025-06-10 15:33:52,573 available measurements:
pcal: PCal response, swept-sine (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_.xml)
bb : PCal response, broad-band (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml)
sens: sensing function (/ligo/groups/cal/H1/ifo/templates/DARMOLG_SS__template_.xml)
act1x: actuation X L1 (UIM) stage response (/ligo/groups/cal/H1/ifo/templates/SUSETMX_L1_SS__template_.xml)
act2x: actuation X L2 (PUM) stage response (/ligo/groups/cal/H1/ifo/templates/SUSETMX_L2_SS__template_.xml)
act3x: actuation X L3 (TST) stage response (/ligo/groups/cal/H1/ifo/templates/SUSETMX_L3_SS__template_.xml)
2025-06-10 15:33:52,574 measurement sequence:
2025-06-10 15:33:52,574 ['bb']
2025-06-10 15:33:52,606 ##########
2025-06-10 15:33:52,606 measurement: bb: PCal response, broad-band measurement
2025-06-10 15:33:52,606 measurement timestamp: 20250610T223352Z
2025-06-10 15:33:52,606 measurement template: /ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml
2025-06-10 15:33:52,606 measurement output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250610T223352Z.xml
2025-06-10 15:33:52,607 executing headless bb measurement...
...
~ computer noises ~
...
notification: end of measurement
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250610T223352Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250610T223352Z.xml saved
diag> quit
EXIT KERNEL
2025-06-10 15:39:02,645 bb measurement complete.
2025-06-10 15:39:02,645 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250610T223352Z.xml
2025-06-10 15:39:02,645 all measurements complete.
Simulines:
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_20250212.ini;gpstime
PDT: 2025-06-10 15:40:08.872455 PDT
UTC: 2025-06-10 22:40:08.872455 UTC
GPS: 1433630426.872455
2025-06-10 22:40:09,675 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250610T224009Z.hdf5
2025-06-10 22:40:09,690 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250610T224009Z.hdf5
2025-06-10 22:40:09,695 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250610T224009Z.hdf5
2025-06-10 22:40:09,702 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250610T224009Z.hdf5
2025-06-10 22:40:09,718 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250610T224009Z.hdf5
2025-06-10 22:40:09,718 | INFO | Overall driving parameters:
2025-06-10 22:40:09,718 | INFO | Preparing to scan the following sweeps ['DARM_OLGTF', 'PCALY2DARMTF', 'L1_SUSETMX_iEXC2DARMTF', 'L2_SUSETMX_iEXC2DARMTF', 'L3_SUSETMX_iEXC2DARMTF']
...
~ computer noises ~
...
2025-06-10 23:02:52,500 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2025-06-10 23:03:29,523 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250610T224009Z.hdf5
2025-06-10 23:03:29,531 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250610T224009Z.hdf5
2025-06-10 23:03:29,537 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250610T224009Z.hdf5
2025-06-10 23:03:29,544 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250610T224009Z.hdf5
2025-06-10 23:03:29,551 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250610T224009Z.hdf5
PDT: 2025-06-10 16:03:29.690650 PDT
UTC: 2025-06-10 23:03:29.690650 UTC
GPS: 1433631827.690650
This morning we had a 1Hz instability that started in the LOWNOISE_LENGTH_CONTROL state.
The next lock, we went through this state step by step, and the step of reducing the SRCL1 gain from -18 to -7.5 is what started the issue. Setting this back to -18 seemed to get rid of the 1 Hz signal in SRCL, but the oscillation continued to grow in PRG and other channels.
Then Elenna set CHARD P back to the high-bandwidth configuration by undoing line 4569 in ISC_LOCK, doing:
ezca.switch('ASC-CHARD_P', 'FM9', 'ON')
ezca.switch('ASC-CHARD_P', 'FM3', 'FM8', 'OFF')
ezca['ASC-CHARD_P_GAIN'] = 80
This stopped the oscillation; we were then able to set the SRCL gain back down to -7.5 without issue, and then undo the CHARD P changes, again without issue.
We don't understand this, but we have again moved the location of LOWNOISE_ASC in the guardian, so that the CHARD changes will happen before the SRCL gain reduction next time we relock.
Note to operators: If you see this 1 Hz oscillation in the PRG, put CHARD into its high-gain state by pasting the above lines into a guardian shell, all at once.
TITLE: 06/10 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 13mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
We are Locked and just finished another set of calibration measurements
Following instructions from 76751, I ran a CARM OLG measurement after we had been at high power for a bit more than 4 hours; see attached. The most recent comparison I can find for low noise is 80956; it looks like we would want to increase the gain by roughly 6 dB to get back to a 17 kHz UGF as seen there.
Randy, Ibrahim
Today, Randy and I fit-checked the BBSS Structure Support Plate (D2500013) and the Lifting Bar Assembly (D1100802) for the BBSS.
Overall, everything fit. See pictures below. Some relevant notes:
No SQZ time taken today, 21:47:00UTC to 21:59:00UTC.
2 seconds of data are missing from this span.
Our tools can't pull the entire 12-minute stretch for this no-SQZ time.
Johnathan identified the 2 seconds that are missing from the 12-minute data stretch:
"I don't have good news for you. There is a 2s gap in there at 1433627912-1433627913 on H-H1_llhoft. The H-H1_HOFT_C00 is worse, I don't see frames in the 1433627.... range at all." ~ Johnathan H.
We were able to salvage 674 seconds from this time that can be useful.
Useful GPS time: 1433627238 - 1433627912
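For reference, the GPS times above can be converted to UTC with a small helper. This sketch hard-codes the current 18 s GPS-UTC leap-second offset; a proper tool (e.g. gwpy's tconvert) handles leap seconds for arbitrary epochs.

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset, valid since 2017

def gps_to_utc(gps_seconds):
    """Convert a GPS timestamp to UTC (assumes the 18 s offset applies)."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)

print(gps_to_utc(1433627238))  # 2025-06-10 21:47:00+00:00, the no-SQZ start
```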
I'm working on going through some Observe SDFs, so that we're ready for observing soon.
Jim is currently working on going through many of the SEI SDFs. The rest of the diffs I need to check with other commissioners to be sure about before we clear them, but I think we're getting close to having our SDFs cleared!
h1tcshws sdfs attached.
Reverted Baffle PDs to what they were 3 months ago (attached); unsure why they would have changed.
SQZ ADF frequency SDFs accepted; we do not know why these would have been accepted at the -600 values they've been at for some of the past 2 weeks.
ASC SDFs were from changes to DC6, cleared.
Cleared these SDFs for the phase changes for LSC REFL A and B.
I trended these, and see that FM2 was on in all three of these last time we were in observing, so these must have been erroneously accepted in the observing snap.
I also accepted the HAM7_DK_BYPASS time from 1200 to 999999 after checking with Dave, as attached.