Received phone call at 1:20am PDT.
Saw that H1 was at NLN, and ready to go to Observing, but could not due to SDF Diffs for LSC & SUSETMY (see attached screenshot):
1) LSC
I was not familiar with these channels, so I went through the exercise of trying to find their MEDM screen, but for the life of me I could not get there! The closest I got was the LSC Overview / IMC-MCL Filter Bank, but they were not on that MEDM (probably spent 30 min looking everywhere and in between with no luck). Looked at these channels in ndscope, and they were at their nominals for the last lock. Also looked in the alog and only saw SDF entries for them from 2019 & 2020. Ultimately, I just decided to do a REVERT (and luckily, H1 did not lose lock).
2) SUSETMY
Then H1 automatically went back to Observe.
Maybe Guardian, for some reason, took these channels to these settings? At any rate, going to try to go back to sleep since it has been an hour already (hopefully this does not happen for the next lock!).
These MCL trigger thresholds come from the IMC_LOCK Guardian and are set in the 'DOWN' and 'MOVE_TO_OFFLINE' states.
In 'DOWN', the trigger ON and trigger OFF thresholds are set at 1300 and 200, respectively, for the IMC to prepare to lock as seen in the setpoints from Corey's screenshot.
In 'MOVE_TO_OFFLINE', the trigger ON and trigger OFF thresholds are set at 100 and 90, respectively (for <4W input), as seen in the EPICS values from Corey's screenshot.
So, it would seem that after the lower thresholds were set when taking the IMC offline sometime recently, they were incorrectly accepted in SDF. I'll accept them as the correct values in the OBSERVE.snap table once H1 is back up to low noise, as I expect they'll show up as a difference again.
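For reference, here is a minimal sketch of how a Guardian state can set these thresholds through the ezca interface. This is illustrative only, not the actual IMC_LOCK code; the channel names are my guess based on the IMC-MCL_FM_TRIG diffs in the screenshot, and the values are the ones quoted above.

```python
# Hypothetical sketch of Guardian states setting the MCL trigger thresholds.
# Channel names are guesses; 'ezca' is provided by the Guardian environment.
from guardian import GuardState

class DOWN(GuardState):
    def main(self):
        # thresholds for the IMC to prepare to lock
        ezca['IMC-MCL_FM_TRIG_THRESH_ON'] = 1300
        ezca['IMC-MCL_FM_TRIG_THRESH_OFF'] = 200
        return True

class MOVE_TO_OFFLINE(GuardState):
    def main(self):
        # lower thresholds used when taking the IMC offline (<4 W input)
        ezca['IMC-MCL_FM_TRIG_THRESH_ON'] = 100
        ezca['IMC-MCL_FM_TRIG_THRESH_OFF'] = 90
        return True
```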
TITLE: 06/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Observing at 148 Mpc and have been Locked for almost 6 hours. I just turned the Roll damping off for ETMY so we don't get any SDF diffs if we have to relock overnight.
When we first got back up, we noticed the ETMY roll mode ringing up again, which we had hoped was solved by Elenna's changes to the SRCL FF (84998). We tried turning damping back on where we had had it in previous locks (at 20), but this didn't seem to be doing anything this time. We eventually tried a gain of 30 instead, and this finally started damping it.
Near the beginning of the lock, we also had the SQZer unlock, and it was having trouble relocking by itself because the LO could not relock due to the ASC having pulled ZM4 and ZM6 out of range. To fix this, Camilla took us to NO_SQZ, then cleared the history on the P and Y lock filters on ZM4/6. Afterwards, she verified that we would've also been able to do this by just taking SQZ manager to RESET_SQZ_ASC, and then back to FDS once that finishes.
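For the record, a hedged sketch of that manual fix (clearing the filter history on those banks) as it might be done from a Python shell. The filter bank names below are my guesses at the ZM4/ZM6 lock banks, not verified channel names; writing RSET = 2 is the standard CDS "clear history" operation on a filter module.

```python
# Illustrative only: clear the history of the ZM4/ZM6 P and Y lock filters.
from ezca import Ezca

ezca = Ezca(ifo='H1')
for optic in ('ZM4', 'ZM6'):
    for dof in ('P', 'Y'):
        # RSET = 2 issues CLEAR HISTORY on a standard filter module
        ezca[f'SUS-{optic}_M1_LOCK_{dof}_RSET'] = 2
```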
In the hours after that, we've had the SQZer unlock multiple times, which is why we've been popping in and out of Observing so much, but each time it has been able to relock itself fine.
LOG:
21:30 Working on PRMI
21:41 Decided to just try and relock because wind is rising
- Started an IA due to DRMI not being able to catch
- Lost lock during ENGAGE_ASC_FOR_FULL_IFO (the glitch that happens during it got too big)
23:32 NOMINAL_LOW_NOISE
23:39 Observing
23:41 Quickly out of Observing to turn on ETMY Roll damping
23:41 Back into Observing
23:54 Out of Observing due to turning ETMY roll damping back on
23:54 Back into Observing
00:01 Out of Observing due to SQZer unlocking
- LO could not relock due to the ASC running away
00:15 Back into Observing
00:17 Out of Observing to try bumping up the ETMY Roll damp gain
00:17 Back into Observing
02:17 Out of Observing due to SQZ unlock
02:20 Back into Observing
02:49 Out of Observing due to SQZ unlock
02:57 Back into Observing
03:37 Out of Observing due to SQZ unlock
03:38 Back into Observing
05:11 Out of Observing to turn the ETMY roll damping off
05:12 Back into Observing
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:34 | PEM | Robert | LVEA | YES | Getting LVEA ready for Observing | 22:02 |
Observing at 145 Mpc and have been Locked for 4 hours. We've been bumped out of Observing a couple times due to SQZ unlocking, but it's been able to get everything back in order by itself each time.
Jennie, Sheila, Camilla
Ran the userapps/.../sqz/h1/scripts/SCAN_PSAMS.py script, having it optimize SQZ using the 350 Hz BLRMS rather than the high-frequency ones to set the SQZ angle.
We tried the below values; see attached ndscope:
Note we don't think that the 350 Hz BLRMS is a good measure of the range, as the times when the yellow 350 Hz BLRMS is good don't agree with the times of good range. We should add an 80-250 Hz BLRMS in place of BLRMS_2 in the hope that it would follow the range better.
TITLE: 06/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is LOCKING in ACQUIRE_DRMI_1F
(Started my shift cover at around 10AM PT)
Today we had planned commissioning until 21:00 UTC. We were locked for the majority of it, with much of the commissioning time being spent on SQZ.
We experienced an ETM glitch lockloss (not human- nor environment-caused) - alog 85004. After this, we stopped at PRMI_LOCKED to work on PRMI ASC, but after trying for a while and not making much progress, commissioners decided to continue normal locking.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD ദ്ദി(⎚_⎚) | 05:18 |
16:05 | PEM | Robert | LVEA | Y | Remove VP covers for CPY evaluation WP#12618 | 22:55 |
17:05 | FAC | Eric | EX | N | EX Chiller Work | 17:35 |
17:06 | FAC | Tyler, Philip the Bee Guy | EX | N | Bee Removal | 17:06 |
17:06 | FAC | Tyler | EX | N | Tyler | 17:06 |
19:16 | PSL | Jennie, Keita | Optics Lab | Local | ISS Work | 22:16 |
21:34 | PEM | Robert | LVEA | YES | Getting LVEA ready for Observing | 22:34 |
Lost lock due to the ETM Glitch while commissioning.
2025-06-12 23:39 UTC Back to Observing
Accepted SDFs for SUSPROC (filter for Roll modes) and for LSC (IMC-MCL_FM_TRIG thresholds) to get into Observing. The control room at the time didn't know who changed the MCL thresholds, so I just accepted them.
TITLE: 06/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 28mph Gusts, 21mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Currently unlocked and trying to relock the detector before the wind gets worse. Previously, work was being done to fix PRMI ASC, but that's been abandoned for now since the wind is supposed to get worse and we can't keep the green arms locked for long with the wind this bad.
WP12620
Dave, Oli:
I have isolated the raw minute trends for the time period 16dec2024 - 12jan2025 from the current data; these static files are now ready for transfer to spinning media.
h1daqnds0 was reconfigured to serve the past 6 months from this temporary location, and with Oli's permission its DAQD process was restarted at 13:44.
Sheila, Camilla
Aim was to follow 83594 with more FIS data points every 10 deg to get a better SQZ picture and skip FIS ASQZ. Plan was to do this in the same lock stretch as Kevin's ADF scan so that we could compare results, but we lost lock before the data was taken.
Type | Time (UTC) | Angle | DTT Ref |
---|---|---|---|
No SQZ | 20:07:00 - 20:12:00 | N/A | ref 0 |
opo_grTrans_setpoint_uW | Amplified Max | Amplified Min | UnAmp | Dark | NLG (usual) | OPO Gain |
---|---|---|---|---|---|---|
95 | 0.005316 | 9.54e-5 | 0.0002687 | -1.26e-5 | 18.9 | -8 |
After a suggestion from Vicky, I checked the FC de-tuning (it was changed from -32 to around -26 Hz when we changed the SRC de-tuning). The current setting of -26 Hz seems fine, so I left it there. Plot attached.
This post reports on the results from SRCL dither measurement I ran in January, briefly reported in alog 82248. It's taken me a long time to write up this report because I spent significant time processing the results in different ways to try to account for some of the possible pitfalls of this measurement. The overall problem is that, in O4, this measurement has traditionally reported very low arm power compared to our other methods of estimating arm power (for example, see Craig's work in 66860). I have been doing an exhaustive study to understand why that might be.
A full derivation of this measurement can be found in Craig's dissertation Section 3.2.2, which also includes references to work by Daniel and Kiwamu that originally inspired this method. To summarize, the idea of the measurement is that dithering the SRM creates differential amplitude sidebands due to the radiation pressure coupling in the SRC. This response can be read out as a transfer function from the differential arm length to the relative intensity noise on the arm transmission QPDs. The resulting transfer function takes the simple form DARM/RIN = alpha/f^2. The arm power is calculated via P_arm = 1/2 * alpha * pi^2 * M * c (M = test mass mass, c = speed of light).
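As a quick numerical illustration of that formula (a sketch assuming the 40 kg O4 test masses; the alpha value below is made up to land near the results quoted later):

```python
# P_arm = 1/2 * alpha * pi^2 * M * c, with alpha from DARM/RIN = alpha/f^2
import numpy as np

M = 40.0          # test mass mass [kg]
c = 299_792_458   # speed of light [m/s]

def arm_power(alpha):
    return 0.5 * alpha * np.pi**2 * M * c

# an illustrative alpha of ~5.4e-6 m*Hz^2 corresponds to roughly 320 kW:
print(f"{arm_power(5.4e-6) / 1e3:.0f} kW")
```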
Here are some possible issues with the measurement:
In order to study this properly, I ran a Bayesian inference on my measurement data with both the amplitude and power law as free parameters. I wanted to confirm that the 1/f^2 trend agrees with the data, and if it agrees, get the arm power estimate (note: if the power law is different from -2, the overall amplitude of the fit cannot be used to measure the arm power). In the process of setting up the inference, I set about making sure that the uncertainty on the line height in both DARM and the TMS QPDs was being correctly calculated.
Uncertainty and Bias:
Appendix E of Craig's thesis is a nice reference for line uncertainty.
Testing frequency dependence:
I set up a Bayesian inference using a model that fits the frequency dependence and "power", assuming a Gaussian distribution of my uncertainty. Following Craig's discussion in his thesis appendix E, with SNR > 5 the ASD distribution of a line can be well-approximated by a Gaussian distribution. I assume a flat prior, with possible powers ranging from 0-1000 kW, and possible power laws from -3 to -1. I fix the inference to assume the same arm power in each arm, so it uses both A and B QPDs in each arm to fit one X arm power and one Y arm power. I then fix the frequency dependence to be the same for both arms and both QPDs in each arm.
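A minimal sketch of this kind of inference setup (illustrative only: a single arm, a placeholder data file, and made-up walker settings; the real analysis fit one power per arm across both QPDs with a shared slope):

```python
# Fit DARM/RIN = alpha(P) * f^m with flat priors on P (0-1000 kW) and m (-3, -1).
import numpy as np
import emcee

M, c = 40.0, 299_792_458                  # test mass [kg], speed of light [m/s]
f, y, sigma = np.loadtxt("darm_rin_tf.txt", unpack=True)  # placeholder data file

def log_prob(theta):
    P_kW, m = theta
    if not (0.0 < P_kW < 1000.0 and -3.0 < m < -1.0):
        return -np.inf                    # flat priors
    alpha = 2.0 * (P_kW * 1e3) / (np.pi**2 * M * c)
    model = alpha * f**m                  # m = -2 recovers alpha / f^2
    # Gaussian likelihood, justified for line SNR > 5 (Craig's appendix E)
    return -0.5 * np.sum(((y - model) / sigma) ** 2)

ndim, nwalkers = 2, 32
p0 = np.column_stack([np.random.uniform(100, 500, nwalkers),
                      np.random.uniform(-2.5, -1.5, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000, progress=True)
```

Note the caveat from above is baked into the model: when m floats away from -2, alpha no longer maps cleanly onto an arm power.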
The results from this inference do not favor a slope of -2, with a fit that gives m = -2.016 ± 0.014 (95% CI).
However, fixing the slope to -2, following the model, gives the following power results (95% CI):
X arm power = 319.6 ± 2.8 kW
Y arm power = 303.5 ± 2.4 kW
These are the highest power results achieved with this method during O4. However, they are still very low compared to what we know about the interferometer, namely that our PRG at full power is about 50 and we have about 56-57 W of power on the PRM, which predicts about 360 kW of arm power, assuming an arm gain of 260. Craig, Sheila, and I have all done work to verify these numbers before and during O4. This result also indicates a significant mismatch in the arm powers, a surprising result since our pre-O4 test mass replacement of ITMY should have made the arms better matched to each other than in O3.
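For completeness, the back-of-envelope arithmetic behind that ~360 kW prediction (standard bookkeeping, with the factor of 2 from the split at the beamsplitter):

```python
# P_arm ~= P_PRM * PRG / 2 * arm_gain
P_prm, PRG, arm_gain = 56.5, 50, 260   # W on PRM, recycling gain, arm gain
print(f"{P_prm * PRG / 2 * arm_gain / 1e3:.0f} kW")  # -> 367 kW, "about 360 kW"
```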
Some commentary:
One aspect of this measurement that is very constraining on the slope of the transfer function is the overall uncertainty on each transfer function measurement point, which is very narrow. This makes sense, since we are achieving fairly good SNR overall. However, while processing the data I did notice that there is modulation of the DARM/RIN transfer function on the order of a Hz to a few Hz. My guess is that this is coming from the ASC modulating DARM, SRCL, or both. I'm not sure overall what effect this could have on the estimation of the transfer function or the uncertainty. Returning to the results from Dan's finesse model, the sparseness of points in this measurement also makes it harder to determine if the slope has diverged from -2 at lower frequency due to a different effect, such as some differential mismatch in the test mass radii of curvature.
If we choose to use these measurement results, they would certainly place a lower bound on our possible arm power, which is compatible with some of Sheila's quantum noise modeling work (see 82097). However, those models require minimal or no readout losses to achieve such a low arm power, which is incompatible with some of our other results measuring readout loss, such as the work Jennie Wright has been doing.
Back to the measurement itself, we could try to improve the result slightly by integrating longer at each point, and measuring more points to get a better idea of the slope. Instead of running Craig's swept sine measurement, I injected several lines by hand because I found it easier to verify that we were achieving good SNR this way. I'm not sure if there is a way to overcome the modulation of the injection.
I am adding a link here to Jennie's alog where she measured the throughput to be 86%, suggesting 14% readout losses, 83586. She also later measured a readout loss of 12.2%, 83008.
You can see from Sheila's quantum noise fitting alog (82097) that the fit using low power, 327 kW, requires very low readout loss. Her low power model uses a readout efficiency of 91.6%.
Therefore, it seems our current readout loss measurements are at odds with the results of this SRCL dither measurement.
Francisco, Matthew, Sheila
Summary: We compared DARM noise before and after the vent. We see excess quantum noise above 800 Hz.
We wanted to quickly evaluate the squeezer after the vent. For this, we compared the DARM noise before and after the vent, with and without squeezing. Matt provided me the gps times at which our range was good with/without squeezing, before and after the vent, for four timeseries.
In the attached figure (sqz_compare_output), we plotted the ASD of GDS-CALIB_STRAIN_CLEAN (STRAIN_CLEAN provides a calibrated, reduced-noise channel) for each timeseries, and the quadratic difference sqrt( post^2 - pre^2 ) for SQZ and no SQZ.
The ASDs with SQZ (red and purple traces) confirm a source of excess noise after the vent. The difference between the two ASDs (dashed brown and green traces) reveals excess quantum noise above 800 Hz. Jitter in the 200-400 Hz range makes it hard to assess whether the excess noise is quantum. We plan on taking 30 minutes of data next week to see if the pumps are affecting the jitter, i.e., making it worse and visible in the 200-400 Hz band.
The plot was acquired with gwpy. The script to acquire and plot the data is found at /ligo/home/francisco.llamas/COMMISSIONING/scripts/sqz_compare.py
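A hedged sketch of the core of such a comparison with gwpy (the GPS spans below are placeholders; the actual script is the sqz_compare.py path above):

```python
# Compare pre/post-vent ASDs of the cleaned strain channel and take the
# quadratic difference sqrt(post^2 - pre^2).
import numpy as np
from gwpy.timeseries import TimeSeries

CHAN = "H1:GDS-CALIB_STRAIN_CLEAN"
spans = {"pre": (1417000000, 1417001800),     # placeholder GPS times
         "post": (1433000000, 1433001800)}

asds = {}
for label, (t0, t1) in spans.items():
    ts = TimeSeries.get(CHAN, t0, t1)
    asds[label] = ts.asd(fftlength=8, overlap=4)

# quadratic difference; NaN where the pre-vent ASD exceeds the post-vent one
diff = np.sqrt(asds["post"].value**2 - asds["pre"].value**2)
```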
Thu Jun 12 10:08:50 2025 INFO: Fill completed in 8min 46secs
The Legend button has been returned to the CDS Overview. It has been updated to include the RCG version tag.
TITLE: 06/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
H1 is still locked and has been for 5 hours and 45 minutes.
But the DCPDs are diverging again, like there is some sort of roll mode ringing up again.
Turns out it's the same roll mode from yesterday. I applied the same gain from yesterday to the roll mode and it turned around immediately. I have accepted this as an SDF diff so we can stay in Observing.
GRB-Short E573164 @ 1454 UTC stand down
It has been requested of me to run the PSAMS script if we were still locked this morning. Camilla just walked in, and she and I will start working on PSAMS when the stand down ends.
Dropped out of Observing for Commissioning at 15:51UTC. Optimistic plan for commissioning time today:
Elenna, Camilla, Jenne, Ryan Short, Tony
We saw an increase in the DCPD monitor screen, which the roll monitor identified as the ETMX roll mode. The existing settings from a long time ago used AS A YAW to damp roll modes; I attempted to damp this by actuating on ETMY instead, which seems to have worked with a gain of 20 (settings in screenshot).
Camilla trended the monitor, and sees that this started to ring up slowly around 7 am today.
We've gone back to observing with this gain setting, but we plan to turn it off when this looks well damped (if we are still locked by the end of the eve shift, we should turn it off so the owl operator doesn't get woken up by SDF diffs).
Since we were able to actuate on ETMY, this is likely the ETMY roll mode, which indicates that either the ASC or the LSC feedforward is driving it.
I just took a look at the SRCL feedforward, and I see a pair of high Q zeros/poles (Q ~ 1600!) at about 13.5 Hz that I missed (facepalm). That frequency seems a bit low, since this roll mode appears to be around 13.7 Hz, but we should still remove it anyway. We can take care of this tomorrow during commissioning, and that might prevent the driving of this mode.
I can't think of any ASC control that would have a significant drive at 13.7 Hz.
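For intuition, here is a small scipy sketch (made-up Qs and frequencies, not the actual SRCL FF filter) showing how narrow and how large the gain ripple from a nearly-cancelling high-Q zero/pole pair near 13.5 Hz can be:

```python
# Frequency response of a hypothetical high-Q zero/pole pair near the roll mode.
import numpy as np
from scipy import signal

f_z, f_p, Q = 13.5, 13.6, 1600            # illustrative zero/pole freqs and Q
wz, wp = 2 * np.pi * f_z, 2 * np.pi * f_p
num = [1, wz / Q, wz**2]                  # s^2 + (w0/Q) s + w0^2
den = [1, wp / Q, wp**2]

f = np.linspace(13.0, 14.0, 2001)
_, h = signal.freqs(num, den, worN=2 * np.pi * f)
mag_db = 20 * np.log10(np.abs(h))
print(f"peak gain near {f[np.argmax(mag_db)]:.2f} Hz: {mag_db.max():.1f} dB")
```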
I just removed two high Q features at 6.5 and 13.5 Hz that were in the SRCL feedforward. I kept the same filter but removed the features, so there should be no SDF or guardian changes. My hope is that this will prevent the roll mode from being rung up, so I have turned the gain off and SDFed it. Attached is a screenshot of the SDF change for ETMY roll.
Unfortunately, it appears that the ETMY roll mode is still ringing up, so the SRCL feedforward is not the cause. Another possibility is the ASC. The CHARD P control signal is larger now around 13.7 Hz than it was in April. The attached plot shows a reference trace from April and live trace from last night's lock. I don't know if this is enough to drive this mode. The bounce roll notches are engaged on ETMY L2, and have 40 dB attenuation for the roll mode between 13.4 and 13.7 Hz.
I went down to End Y to retrieve the USB stick that I remotely copied the c:\slowcontrols directory on h1brsey to, and also to try to connect h1brsey to the KVM switch in the rack. I eventually realized that what I thought was a VGA port on the back of h1brsey was probably not, and instead I found some odd-seeming wiring connected from what I am guessing is an HDMI or DVI port on the back of h1brsey, to some kind of converter device, then to a USB port on a network switch. I'm not sure what this is about, so I am attaching pictures.
Sheila, Elenna, Camilla
Sheila was questioning whether something is drifting, given that we need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot, both while powering up / moving spots and while in NLN. Unsure from the BS alignment inputs plot what's causing this.
This was also happening before the break (see below) but the operators were similarly needing more regular initial alignments before the break too. 1 year ago this was not happening, plot.
These large BS PIT changes began 5th to 6th July 2024 (plot). The first lock like this happened 5th July 2024 19:26 UTC (12:26 PT), during the day shift: 78877; at the time we were doing PR2 spot moves. There was also a SUS computer restart 78892, but that appeared to be a day after this started happening.
Sheila, Camilla
This reminded Sheila of when we were heating a SUS in the past, causing the bottom mass to pitch and the ASC to move the top mass to counteract it. Then after lockloss, the bottom mass would slowly go back to its nominal position.
We do see this on the BS since the PR2 move, see attached (top 2 left plots). In the green bottom mass oplev trace, when the ASC is turned off on lockloss, the BS moves quickly and then slowly moves again over the next ~30 minutes; we do not see similar things on PR3. Attached is the same plot before the PR2 move. And below is a list of other PR2 positions we tried; all the other positions have also made this BS drift. The total PR2 move since the good place is ~3500 urad in Yaw.
To avoid this heating and BS drift, we should move back towards a PR2 YAW of closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.
I did a bit of alog archaeology to re-remember what we'd done in the past.
To put back the soft turn-off of the BS ASC, I think we need to:
Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight. Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian, and loaded. We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (eg, from a DRMI lockloss).
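Since the guardian code isn't quoted here, a hedged guess at what such a 'slow let-go' can look like (the filter bank name and ramp time below are assumptions for illustration, not the actual ISC_DRMI implementation):

```python
# Instead of abruptly clearing the BS pitch ASC at lockloss, ramp the loop
# gain to zero over a long ramp so the top mass is released gently.
def soft_let_go_bs_pit():
    fb = ezca.get_LIGOFilter('ASC-MICH_P')      # assumed BS pitch ASC bank
    fb.ramp_gain(0, ramp_time=30, wait=False)   # slow ramp instead of hard off
```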
This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different.
In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread). Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much, and is mostly flattened out after a shorter amount of time. I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time). However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.
We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments. If needed, we can try to tweak the time constant of the 'soft let-go' to further make the optical lever signal stay more overall flat.
The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert. The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.
RyanS, Jenne
We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and where it wants to be for the next acquisition of DRMI. Attached is one such example.
Alternatively, a day or so ago Tony had to do an initial alignment. On that day, it seemed like the BS took much longer to get to its quiescent spot. I'm not yet sure why the behavior is different sometimes.
Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.