All looks well aside from the known issue with LAB2; LVEA5 seems frozen, so I'll investigate that tomorrow during maintenance.
LVEA5 being off is expected; it's a pumped dust monitor, so we turned it off for observing.
Closes FAMIS37206, last checked in alog84428
HEPI pump trends look mostly normal; HPI-PUMP_LO_CONTROL_VOUT has dropped slightly (~30).
After editing SQZ_MANAGER to use the correct ASQZ angle of (-)80 deg (noted as an issue over the weekend, 85072), I ran SCAN_ALIGNMENT_FDS. This appeared to work fine and we had ASQZ around 14 dB. However, after it was done, SQZ didn't look good, so we turned on the ASC for a few minutes; this improved the high-frequency SQZ, with the low-frequency SQZ remaining the same (maybe even slightly better), see plot. I'm unsure why SCAN_ALIGNMENT doesn't give as good SQZ as the ASC, but the ASC appears to be working well once we are thermalized.
Later in the week we might try using the ASC again, either from the start of the lock if we think the new THERMALIZATION GRD is working well 85083, or after we have thermalized.
A single blank line was added to the end of the filter file for H1SUSAUXEX to test whether the IFO can enter the OBSERVE state with unloaded filter changes.
If loaded, this change would have no effect on the behavior of the filters.
We should be ready to go to laser safe in the LVEA.
Elenna noted that we will have CP-Y spots on the ballast baffle even when the CP is properly aligned (which is why we eventually want to remove/fix the baffle), so I swept the CP in pitch and yaw and found a minimum in scatter coupling at about -300 p, 300 y. The varying noise from the pump on HAM1 interfered with the measurement, so I think I can further minimize the coupling once the pump is off.
Accepted in SAFE and OBSERVE SDF tables as Robert left them. This new position has been causing ITMY saturations during LOWNOISE_COIL_DRIVERS due to the increased drive from the DAC, but this doesn't seem to be causing any noticeable locking issues.
Ryan was having a hard time locking PRMI, so we did a repeat of the REFLAIR 45 phasing as in 84630.
We started by checking the PRCL OLG and saw odd features there (1st attachment). We then phased REFLAIR45 (2nd attachment shows spectra, 3rd shows SDF), and the PRCL OLG now looks normal again. This partially reverses the phase change we made in 84630: we had moved the phase from 97 to 87, and it is now back at 93.
Oli, Camilla, Sheila, RyanS
It was pointed out (84972) that our new SRCL offset is too big at the beginning of the lock, affecting the calibration and how well we are squeezing. Camilla had the idea of taking the unused-but-already-set-up THERMALIZATION guardian and repurposing its main state so that it steps LSC-SRCL1_OFFSET from its value at the end of MAX_POWER to the official offset value given in lscparams (offset['SRCL_RETUNE']). The stepping starts at the end of MAX_POWER and runs for 90 minutes. Here is a screenshot of the code.
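For reference, a minimal sketch of what such a stepping state could look like, assuming the usual guardian GuardState/ezca/timer machinery and a one-step-per-minute cadence; the state name, cadence, and step size here are illustrative assumptions, not the actual code in the screenshot.

# Illustrative sketch only -- not the actual THERMALIZATION guardian code.
# ezca is provided by the guardian environment; lscparams is the site params module.
from guardian import GuardState
import lscparams

class RAMP_SRCL_OFFSET(GuardState):   # hypothetical state name
    def main(self):
        # start from wherever the offset was left at the end of MAX_POWER
        self.start = ezca['LSC-SRCL1_OFFSET']
        self.target = lscparams.offset['SRCL_RETUNE']
        self.nsteps = 90                 # assumed: one step per minute for 90 minutes
        self.counter = 0
        self.timer['step'] = 60

    def run(self):
        if self.counter < self.nsteps and self.timer['step']:
            self.counter += 1
            frac = self.counter / float(self.nsteps)
            ezca['LSC-SRCL1_OFFSET'] = self.start + frac * (self.target - self.start)
            self.timer['step'] = 60
        return True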
To go with this new stepping, we've commented out the line (~5641) in ISC_LOCK's LOWNOISE_LENGTH_CONTROL (ezca['LSC-SRCL1_OFFSET'] = lscparams.offset['SRCL_RETUNE']) so that the offset doesn't get set to that constant and instead keeps stepping.
To get this to run properly while observing, we did have to unmonitor the LSC-SRCL1_OFFSET value in the Observe SDF (sdf).
Attached is a screenshot of the grafana page, highlighting the 33 Hz calibration line, which seems to be the most sensitive to thermalization. Before, when the SRCL offset was held static, the 33 Hz line uncertainty started at about 1.09 and then decayed down to about 1.02 over the first hour. With the thermalization adjustment of the SRCL offset from 0 to -455 over one hour, the 33 Hz uncertainty starts around 0.945 and then increases to 1.02 over the first hour. It seems like we overshot in the other direction, so we could perhaps start closer to -200 and move to -455.
We decided to change the guardian so that it starts at -200 and then steps its way to -455 over the course of 75 minutes instead of 90.
With the update to the guardian to start at -200, each calibration line uncertainty has actually stayed very flat for these first 30 minutes of lock (except for the usual very large jump in the uncertainty for the first few minutes of the lock).
This shows the entire lock using the thermalization guardian with the SRCL offset ramping from -200 to -455; the line uncertainty holds steady the entire time, within 2-3%!
FAMIS 31090
Temperatures have been coming down following the weather, and PMC REFL has been moving around, but no major events of note otherwise.
The figure shows that the “mystery” beam spot on the HAM3 spool piece dropped dramatically in intensity over the break. I don’t know if it was the compensation plate move or something else that we did. I’m a little skeptical that it was the compensation plate move because, before the break, I moved the compensation plate while filming the spot and saw no change (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82252). However, I think this is worth doing again to double-check.
Mon Jun 16 10:10:49 2025 INFO: Fill completed in 10min 45secs
TC-A was below nominal range for this fill.
J. Kissel

Executive Summary
I've tuned the lens position and measured the projected beam profile of one of three trial fiber collimators to be used for the SPI. The intent is to do this measurement before vs. after a vacuum bake, as was done for previous collimators (Section 3.3 of E1500384), to see if the bake will cause any negative impact on the performance of the collimator. It also helped clear out a few "rookie mistake" bugs in my data analysis, though there's still a bit of inconsequential confusion in post-processing the fit of the beam profile.

Full Context
The SPI intends to use supposedly vacuum compatible Schaefter + Kirchhoff (SuK) fiber collimators (D2500094, 60FC-0-A11-03-Ti) in order to launch its two measurement (MEAS) and reference (REF) beams from their fiber optical patchcords attached to feedthroughs (D2500175) into free space and throughout the ISIK transceiver / interferometer (D2400107). However, unlike their (more expensive) stainless steel counterparts from MicroSense / LightPath used by the SQZ team, SuK doesn't have the facilities to specify a temperature at which they believe the titanium + D-ZK3 glass lens + CuBe & Ti lens retaining ring assembly will fail (see E2300454). As such, we're going to run one SuK collimator through the same baking procedure as the MicroSense / LightPath collimators (see Section 3.3 of E1500384) -- a slow 6 [hr] ramp up to 85 [deg C], a 24 hour hold, and a similarly slow ramp down -- and see if it survives. A catastrophic "if it survived" result would be that the lens cracks under the stress of differential heating between the titanium, CuBe, and glass. As we don't expect this type of failure, in order to characterize "if it still functions," we want a quantitative "before" vs. "after" metric. Since I have to learn how to "collimate" this type of collimator anyway (where "collimate" = mechanically tune the lens position such that the emanating beam's waist is positioned as close to the lens as possible), I figure we use a 2D beam profile as our quantitative metric. We expect the symptom of a "failed" fiber collimator to be that "before the bake it projects a lovely, symmetric, tunable, Gaussian beam, and after the bake the profile looks asymmetric / astigmatic, or the lens position is no longer consistently/freely adjustable." The SPI design requires a beam whose waist (1/e^2) radius is w0 = 1.050 mm +/- 0.1 mm. That puts the Rayleigh range at zR = pi * w0^2 / lambda = 3.25 [m], so in order to get a good fit on where the waist lies, we need at least one or two data points beyond the Rayleigh range. That means projecting the beam over a large distance, say at least ~5-6 [m].

Measurement Details
2025-06-03_SPIFC_OpticalSetup.pdf shows the optical setup. Pretty simple; it just consumes a lot of optical table space (which we thankfully have at the moment). The "back" optical table (in IFO coordinates, the -X table, with the long direction oriented in the Y direction) already has a pre-existing fiber-coupled, 1064 nm laser capable of delivering variable power of at least 100 [mW], so I started with that. Its output has an FC/APC "angled physical contact" connector, whereas the SuK fiber collimators have an FC/PC (flat) "physical contact" connector. I thus used a ThorLabs P5-980PM-FC-2 APC-to-PC fiber patch cord to couple to the SuK fiber collimator, assembled with a (12 mm)-to-(1 inch) optic mount adapter (AD12NT) and mounted in a standard 1" mirror mount, set at 5 inch beam height.
Ensuring I positioned the free-space face of the collimator at the 0 [in] position, I projected the beam down the width of the optical table and placed steering mirrors at 9 [ft] 9 [in] (the maximum +Y grid holes), separated by 6 [in], in order to return the beam down the table. Along this beam, I marked out grid-hole positions to measure the beam profile with a NanoScan head at z = [0.508, 0.991, 1.499, 2.007, 3.251, 4.496, 5.41] [m]. These are roughly even half-meter points that one gets from "finding the 1 inch hole position that gets you close to 0.5 [m] increments," i.e. z = [1'8", 3'3", 4'11", 6'7", 10'8", 14'9", and 17'9"]. Using the SuK proprietary tooling, I then loosened the set screws that had secured the lens in position with the 1.2 mm flat-head (9D-12), and adjusted the position of the lens with their short eccentric key (60EX-4), per their user manual (T2500062 == Adjustment_60FC.pdf), with the NanoScan positioned at the 5.41 [m] position. At the 5.41 [m] position, we expect the beam radius to be w(z=5.41 m) = w0 * sqrt(1 + (z/zR)^2) = 2.036 [mm], or a beam diameter of d(z=5.41 m) = 4.072 [mm]. Regretfully, I made the rookie mistake of interpreting the NanoScan's beam widths (diameters) as radii on the fly, and "could not get a beam 'radius' lower than 3.0 [mm]" at the z = 5.41 [m] position, as adjustments to the eccentric key / lens position approached a "just breathe near it and it'll get larger" level at that beam size. This will be totally fine for "the real collimation" process, as beam widths (diameters) of 4.0 [mm] required a much less delicate touch and were quite repeatable (I found, while struggling to achieve a 3.0 [mm] width). Regardless, once I said "good enough" on the lens position, I re-secured the set screws holding the lens in place. That produced a d = 3.4 [mm] beam diameter, or 1.7 [mm] radius, at z = 5.41 [m]. I then moved the NanoScan head to the remaining locations (aligning the beam into the head at each position, as needed, with either the steering mirrors or the fiber collimator itself) to measure the rest of the beam profile as a function of distance. 2025-06-03_SPIFC_BeamScans_vs_Position.pdf shows the NanoScan head position and raw data at each z position after I tuned the lens position at z = 5.41 [m]. Having understood during the data analysis that the NanoScan software returns either 1/e^2 or D4sigma beam *diameters*, I followed modern convention, used the mean D4sigma values, and converted to radii with the factor of 2, w(z) = d(z) / 2. One can see from the raw data that the beam profile at each point is quite near "excellently Gaussian," which is what we hope will remain true *after* the bake.

Results
2025-06-03_spifc_S0272502_prebake_beamprofile_fit.pdf shows the output of the attached script spifc_beamprofile_S0272502_prebake_20250603.m, which uses Sheila's copy of a la mode to fit the profile of the beam.

Discussion
The data show a pretty darn symmetric beam in X (parallel to the table) and Y (perpendicular to the table), reflecting what had already been seen in the raw profiles. The fit predicts a waist of (w0x, w0y) = (0.89576, 0.90912) [mm] at position (z0x, z0y) = (1.4017, 1.3469) [m] downstream from the lens. It makes total sense that the z position of the waist is *not* at the lens position, given that I tried to get the beam to as small a *diameter* as possible at the 5.41 [m] position, rather than what I should have done, which is to tune the lens position / beam diameter to the desired 4.072 [mm].
What doesn't make sense to me is that -- in trying to validate the fit and/or show that the beam behaves as an ideal Gaussian beam would -- I also plot the predicted beam radius at the Rayleigh range, w(zR) [model] = w0 * sqrt(1 + (zR/zR)^2) = w0 * sqrt(2), or (wzRx, wzRy) [model] = (1.2668, 1.2857) [mm], which is much larger than the fit predicts, (wzRx, wzRy) [fit] = (0.9677, 0.9958) [mm], at a Rayleigh range zR = pi * w0^2 / lambda of (zRx, zRy) [from fit w0] = (2.3691, 2.4404) [m]. Similarly confusing: if I plot a line from the waist position (z0x, z0y) to the end of the position vector (6 [m]), whose angle from the x-axis is the divergence angle theta_R = lambda / (pi * w0), i.e. predicting a beam radius at zEnd = 6 [m] of w(zEnd) = (zEnd - z0) * atan(theta_R), this results in a beam radius at zEnd much smaller than the fit. Most demo plots, e.g. from wikipedia:Rayleigh_length or wikipedia:Beam_divergence, show that the slope of this line should start to match the beam profile just after the Rayleigh range. To be fair, these demos never have quantitative axes, but still. At least the slope seems to match, even though the line is quite offset from the measured / fit z > 6 [m] asymptote.

Conclusion
Though I did not adjust the lens position to place the beam waist at the desired position, I now have a good measurement setup and data processing regime to do so for the "production" pair of fiber collimators. For this collimator, since we're just looking at "before" vs. "after" bake data, this data set is good enough. Let's bake!
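As a quick sanity check of the design numbers quoted above (separate from the a la mode fit), the Rayleigh range and the expected spot size at the z = 5.41 [m] scan position follow directly from the standard Gaussian-beam relations; a short numpy sketch:

import numpy as np

lam = 1064e-9      # [m] laser wavelength
w0  = 1.050e-3     # [m] SPI design waist (1/e^2) radius

zR = np.pi * w0**2 / lam          # Rayleigh range

def w(z, w0=w0, zR=zR):
    # 1/e^2 beam radius at distance z from the waist
    return w0 * np.sqrt(1.0 + (z / zR)**2)

print(zR)            # ~3.25 [m]
print(2 * w(5.41))   # ~4.07e-3 [m], i.e. the ~4.07 [mm] diameter expected at the 5.41 [m] position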
I took a moment to revisit the "cross checks" of the fit -- namely, the waist at the Rayleigh range, w(zR), and the divergence angle's (theta_R) prediction of what the radius should be in the "far field," w(zEnd) -- and solved the "mysteries."

(1) For the prediction of the waist radius at the Rayleigh range: all the above math was correct; I merely did not properly offset the z position w.r.t. where the origin of the plot lies (at the fiber collimator). If I adjust the z position for that offset, (zRx, zRy) [from fit w0] = (z0x + zRx, z0y + zRy) [from the fiber collimator, i.e. z = 0 [m]] = (1.4017, 1.3469) + (2.3691, 2.4404) [m] = (3.7708, 3.7899) [m], then the waist radius (w0 * sqrt(2)) lands bang-on the fit and measurement.

(2) For why the simplistic model of divergence angle doesn't appear to match the divergence of the measurements or model: this is merely a question of what really is "near" and "far" field. It turns out that if you expand the z range to what I would call "way" out -- from 6 [m] to 20 [m] -- you see convergence between the two models.

The pdf attached here is a new version of the plots showing the same data and model as above, but with the simplistic models fixed. I now show two pages: page one with the original zoom from z = 0 to 6 [m], aka the "near" field, demonstrating (1), and page two zoomed out from z = 0 to 20 [m], demonstrating (2). I also attach the updated version of the code, spifc_beamprofile_S0272502_prebake_20250603.m.
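To make (1) and (2) concrete, here is the same arithmetic done numerically with the fitted X-axis numbers from above (a sketch only; the real curves come from spifc_beamprofile_S0272502_prebake_20250603.m):

import numpy as np

lam = 1064e-9
w0x, z0x = 0.89576e-3, 1.4017     # fitted waist radius [m] and waist position [m] from the collimator

zRx = np.pi * w0x**2 / lam        # ~2.369 [m], measured from the *waist*, not the collimator

def w(z):
    # beam radius vs distance z from the collimator (z = 0 at the collimator face)
    return w0x * np.sqrt(1.0 + ((z - z0x) / zRx)**2)

# (1) offset the Rayleigh-range marker by z0x and it lands on the fit:
print(z0x + zRx, w(z0x + zRx), w0x * np.sqrt(2))   # ~3.77 [m]; both radii ~1.267 [mm]

# (2) the far-field divergence line only merges with the hyperbola well beyond zR:
theta = lam / (np.pi * w0x)                        # divergence half-angle [rad]
for z in (6.0, 20.0):
    print(z, w(z), (z - z0x) * theta)              # ~11% apart at 6 [m], ~1% apart at 20 [m]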
WP12620 TW0 offload
The file copy to the permanent archive ran from Fri 09:25 to Sat 10:42 (25hrs 17mins).
This morning, when H1 was out of lock and with the operator's permission, I restarted NDS0 with its new daqdc file.
At 10:06 I started the deletion of the old files on h1daqtw0 in nice mode. Starting SSD-RAID disk usage was 92%.
Deletion completed at 12:08 (took 2hrs 3mins). This completes WP12620.
TITLE: 06/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: 1 lockloss with an automated relock without an IA. At the start of both locks today the SQZers' ASC took us in the wrong direction and degraded the range; I was not able to recover it the second time.
LOG: No log
There were two issues with the SCAN_ALIGNMENT_FDS alignment scans:
SQZ_ALIGNMENT_FDS is outdated as we haven't used it since we got the SQZ_ASC working well.
To do: find the optimum value/sign of ASQZ and put into SQZ_ALIGNMENT_FDS. Then we can try to rerun this and compare with where SQZ ASC takes us.
H1 dropped observing from 18:30 to 19:00 UTC for regularly scheduled calibration measurements, which ran without issue. A screenshot of the calibration monitor medm and the calibration report are attached.
Broadband runtime: 18:30:45 to 18:35:54 UTC
Simulines runtime: 18:36:41 to 18:59:56 UTC
We had to rerun the report to account for pro-spring in the model. Calibration looks better now -- sensing model is within 2% above 20 Hz and 5% below 20 Hz, report attached. I also updated the .ini file to now account for the pro-spring behavior.
More detailed steps:
- Set is_pro_spring to True in the pydarm_H1.ini in report 20250614T183642Z.
- Reran report 20250614T183642Z (in terminal, ran $ pydarm report --regen --skip-gds 20250614T183642Z).
- Copied the pydarm_H1.ini file at /ligo/groups/cal/H1/ifo as pydarm_H1.ini.250610 to save the previous configuration.
- In /ligo/groups/cal/H1/ifo/pydarm_H1.ini, set is_pro_spring to True.
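For reference, the flag itself is a one-line change in the model file; assuming it lives under the [sensing] section as in other pydarm model ini files, the edit looks something like:

[sensing]
...
is_pro_spring = True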
Yesterday we ran a bruco on Francisco's post-vent SQZ time from 84996. Link to bruco here.
Command used: python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1433758866 --length=600 --outfs=4096 --fres=0.1 --dir=/home/camilla.compton/public_html/brucos/1433758866 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_DARM_excluded.txt
Links to a few coherences, although I haven't done a deep dive: SRCL (some 100-250Hz), PRCL, MICH (bad 2-4Hz), PSL ISS 2nd loop, can see the jitter peaks in IMC WFS
The high coherence with CHARD P is probably coming from excess noise in CHARD P from HAM1, 84863. Jim is set to do further HAM1 ISI tuning tomorrow, so we can recheck this coherence later. We also have plans to rerun the noise budget injections to check if the CHARD coupling has changed.
We could do an iterative feedforward to take care of the residual LSC coherence, which mainly seems to be coming from MICH LSC.
We should also determine how much the MICH ASC coherence is limiting DARM and maybe change the loop design again.
Much of the other coherence seems to be jitter.
Sheila, Elenna, Camilla
Sheila was questioning whether something is drifting for us to need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot, both while powering up / moving spots and while in NLN. It's unclear from the BS alignment inputs plot what's causing this.
This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments before the break. 1 year ago this was not happening, plot.
These large BS PIT changes began 5th-6th July 2024 (plot). This is the day shift from when the first lock like this happened, 5th July 2024 19:26 UTC (12:26 PT): 78877; at the time we were doing PR2 spot moves. There was also a SUS computer restart 78892, but that appeared to be a day after this started happening.
Sheila, Camilla
This reminded Sheila of when we were heating a SUS in the past, causing the bottom mass to pitch and the ASC to move the top mass to counteract this. Then, after lockloss, the bottom mass would slowly go back to its nominal position.
We do see this on the BS since the PR2 move, see attached (top 2 left plots). In the green bottom-mass oplev trace, when the ASC is turned off at lockloss, the BS moves quickly and then slowly moves again over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot from before the PR2 move. And below is a list of other PR2 positions we tried; all the other positions have also caused this BS drift. The total PR2 move since the good place is ~3500 urad in yaw.
To avoid this heating and BS drift, we should move back towards a PR2 yaw closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.
I did a bit of alog archaeology to re-remember what we'd done in the past.
To put back the soft turn-off of the BS ASC, I think we need to:
Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight. Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian, and loaded. We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (eg, from a DRMI lockloss).
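For context on what "soft let-go" means in guardian terms, here is a rough sketch of the idea (an illustration under assumptions, not the actual ISC_DRMI code: the filter bank name, ramp time, and which filter modules get toggled are guesses; the real implementation details, including FM1, are noted further below):

# Sketch: instead of clearing the BS pitch ASC drive at lockloss, ramp it away slowly
# so the top mass (and hence the bottom-mass pitch) relaxes gently.
# ezca is provided by the guardian environment; channel names are illustrative.
ezca.switch('SUS-BS_M1_LOCK_P', 'FM1', 'OFF')   # stop the integrator from holding the offload
ezca['SUS-BS_M1_LOCK_P_TRAMP'] = 60             # long output ramp time [s] (illustrative value)
ezca['SUS-BS_M1_LOCK_P_GAIN'] = 0               # bleed the drive off over the ramp instead of zeroing it instantly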
This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different.
In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread). Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much, and is mostly flattened out after a shorter amount of time. I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time). However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.
We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments. If needed, we can try to tweak the time constant of the 'soft let-go' to make the optical lever signal stay flatter overall.
The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on by an SDF revert. The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.
RyanS, Jenne
We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and where it wants to be for the next acquisition of DRMI. Attached is one such example.
Alternatively, a day or so ago Tony had to do an initial alignment. On that day, it seemed like the BS took much longer to get to its quiescent spot. I'm not yet sure why the behavior is different sometimes.
Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.