Closes FAMIS37255, last checked in alog85779
Everything looks as expected, LVEA5 is off during observing.
Closes FAMIS26544, last checked in alog86165
Laser Status:
NPRO output power is 1.86W
AMP1 output power is 70.07W
AMP2 output power is 140.6W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked for 1 day, 2 hr, 21 minutes
Reflected power = 23.57W
Transmitted power = 105.3W
PowerSum = 128.9W
FSS:
It has been locked for 0 days 5 hr and 47 min
TPD[V] = 0.8582V
ISS:
The diffracted power is around 4.1%
Last saturation event was 0 days 5 hours and 48 minutes ago
Possible Issues:
PMC reflected power continues to be high; we can see a drop during the Tuesday incursion work, but it has risen a bit since then.
Today, we measured the calibration at three different ESD biases. First, we measured at the current bias of 269 V, then at our O4 standard bias of 136 V. Then, I stepped up to a higher bias of 409 V.
ESDAMON value | Bias Offset | L3 Drivealign gain | Calibration report | Notes |
---|---|---|---|---|
269 V | 6.0542453 | 88.28457 | alog 86337, report 20250813T153848Z | Only 1 hour thermalized at measurement time; current operating bias |
136 V | 3.25 | 198.6643 | alog 86339, report 20250813T162026Z | Nominal O4 bias; calibration model fit at this bias |
409 V | 8.89 | 57.587 | alog 86341, report 20250813T174921Z | ESD saturation warnings while at this bias. Took 5 minutes of quiet time, cal lines on, at this new bias (start: 18:12:54 UTC, end: 18:18:00 UTC) |
The attached plot compares the three broadband measurements at each ESD bias. It seems like the overall systematic error decreases as we increase the ESD bias.
To step up the ESD bias, I used guardian code that Sheila attached to this alog. Another relevant alog comparing simulines results at different biases is here.
Attaching figures comparing the sensing function, the actuation function, and the open loop gain (OLG). All the figures are formatted in the same way: the left side shows the bode plot from each report, and the right shows the ratio of each measurement to a reference. I used the latest exported calibration measurement "20250719T225835Z" as the reference. From 10 Hz to 1 kHz the sensing and actuation function residuals are within 5%. The OLG is within 10%, with one outlier at 410 Hz.
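For reference, a minimal sketch (not the pydarm report code) of the ratio-to-reference residual check described above, assuming the measured and reference transfer functions have been exported as complex arrays on a common frequency vector:

# Flag frequencies where a measured transfer function deviates from the
# 20250719T225835Z reference by more than a given fraction.
import numpy as np

def residual_outliers(freq, tf_meas, tf_ref, band=(10.0, 1000.0), tol=0.05):
    """Return frequencies in `band` where |tf_meas/tf_ref| is more than `tol` from 1."""
    ratio = tf_meas / tf_ref
    in_band = (freq >= band[0]) & (freq <= band[1])
    bad = in_band & (np.abs(np.abs(ratio) - 1.0) > tol)
    return freq[bad], ratio[bad]

# Usage (placeholder arrays; in practice these come from the exported measurements):
freq = np.logspace(1, 3, 200)
tf_ref = np.ones_like(freq, dtype=complex)
tf_meas = tf_ref * (1.0 + 0.02 * np.random.randn(freq.size))
bad_f, _ = residual_outliers(freq, tf_meas, tf_ref, tol=0.05)
print(f"{bad_f.size} points outside the 5% band between 10 Hz and 1 kHz")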
We are trying to understand how the systematic error is changing at each bias voltage, even though we think we are correcting the drivealign gain to account for the actuation change.
Francisco and I made some plots of how the modeled error changes. First, we pulled the model from the 20250719T225835Z report, since that is our current calibration model. Then, we pulled the kappas at the time of the lowest bias voltage measurement, since that is the bias voltage that our model is based on. We applied the kappas from that measurement time, and then calculated a new response function assuming an additional TST actuation change, ranging from no change (0%) to 1.5% change. Then, we compared each of these response functions to the kappa-corrected model.
To be clear, we are calculating the new response function as:
R = 1/C_model + (error_factor*TST_model + PUM_model + UIM_model) * D_model
The "model" in this case also has the kappa-corrected values applied, which are:
{'c': 0.98335475,
'f_c': 447.65558,
'uim': 1.0052187,
'pum': 1.0012773,
'tst': 1.0183398}
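For anyone who wants to reproduce this without digging into the script, here is a minimal numpy sketch of the calculation (not the actual compare_models_tst_err.py), assuming the model transfer functions C, D, and the per-stage actuation A_uim/A_pum/A_tst have already been evaluated on a common frequency vector (e.g. exported from the 20250719T225835Z pydarm report):

import numpy as np

kappas = {'c': 0.98335475, 'f_c': 447.65558,
          'uim': 1.0052187, 'pum': 1.0012773, 'tst': 1.0183398}

def response(C, D, A_uim, A_pum, A_tst, tst_err=1.0):
    """R = 1/C + (tst_err*A_tst + A_pum + A_uim) * D, with the kappa scalings applied.
    (The cavity-pole frequency update from f_c is omitted in this sketch.)"""
    C_corr = kappas['c'] * C
    A = (tst_err * kappas['tst'] * A_tst
         + kappas['pum'] * A_pum
         + kappas['uim'] * A_uim)
    return 1.0 / C_corr + A * D

# Systematic error from an extra TST actuation change, relative to the kappa-only model:
# for tst_err in (1.000, 1.005, 1.010, 1.015):
#     err = response(C, D, A_uim, A_pum, A_tst, tst_err) / response(C, D, A_uim, A_pum, A_tst)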
Looking at our results side-by-side with the broadband pcal measurement, we see some similarities. However, it's not exactly the same, since the frequency dependence appears slightly different in the measurement than in the model.
There are some other comparisons to be made, but we can start with these. The script I used to make this plot is saved in /ligo/groups/cal/H1/ifo/scripts/compare_models_tst_err.py
Similar to alog 86227, the BTRP adapter flange and GV were installed on Tuesday at the MY station. Leak checking was completed today with no signal seen above the ~e-12 torrL/s background of the leak detector.
Pumping on this volume will continue until next Tuesday, so some additional noise may be seen by DetChar. This volume is valved out of the main volume, so the pressure readings from the PT-243 gauges can be ignored until further notice.
Here are the first and last pictures of the leak detector values. The max was 3.5E-12 torrL/s; 90% of the time it stayed below 1E-12 torrL/s.
As of Tuesday, August 19, the pumps have been shut off and removed from this system, and the gauge tree valved back in to the main volume. Noise/vibration and pressure monitoring at MY should be back to nominal.
The pumping cart was switched off, and the dead volume was valved back in to the main volume. The pressure dropped rapidly to ~5E-9 within a few minutes, and it continues to drop. Also, we (Travis & Janos) added some more parts (an 8" CF to 6" CF tee; CF to ISO adapters, and an ISO valve) to the assembly, and also added a Unistrut support to the tee; see attached photo. Next step is to add the booster pump itself, and anchor it to the ground.
LOTO has now been applied to the handles of both the hand angle valve and the hand gate valve.
At the time of starting these measurements, we had been Locked for over 3 hours, so we were fully thermalized
CALIBRATION_MONITOR screen and pydarm report are attached
Broadband
2025-08-13 17:42:30 - 17:47:47 UTC
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250813T174237Z.xml
Simulines
2025-08-13 17:48:52 - 18:12:09 UTC
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250813T174853Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250813T174853Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250813T174853Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250813T174853Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250813T174853Z.hdf5
Wed Aug 13 10:07:22 2025 INFO: Fill completed in 7min 18secs
Gerardo confirmed a good fill curbside.
At the time of starting these measurements, we had only been Locked for 1.5 hours, so we were not fully thermalized
CALIBRATION_MONITOR screen and pydarm report are attached
Broadband
2025-08-13 16:13:34 - 16:18:45 UTC
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250813T161334Z.xml
Simulines
2025-08-13 16:19:57 - 16:43:04 UTC
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250813T161958Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250813T161958Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250813T161958Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250813T161958Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250813T161958Z.hdf5
At the time of starting these measurements, we had only been Locked for 1 hour (1hr10min since MAX_POWER), so we were not fully thermalized
CALIBRATION_MONITOR screen and pydarm report are attached
Broadband
2025-08-13 15:31:30 - 15:37:02 UTC
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250813T153151Z.xml
Simulines
2025-08-13 15:38:19 - 16:01:22 UTC
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250813T153820Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250813T153820Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250813T153820Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250813T153820Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250813T153820Z.hdf5
TITLE: 08/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Currently Observing at 153 Mpc and have been Locked for 10 minutes
Last night's lockloss at 2025-08-13 12:59UTC has no obvious cause, but I did notice that the dumped power into HAM6 has an interesting shape, with two bumps instead of the usual one. However, I did find a lockloss from 2025-07-28 03:13UTC whose power has the same shape, so it's probably just an alignment thing, especially since the last lockloss was positioned in such a way that the fast shutter didn't even need to fire (86325).
TITLE: 08/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Nice shift with H1 locked 6hrs15min. We did have 3 drop-outs due to TCS ITMy CO2 (all quick and automatic recoveries). ~45min after the 3rd drop, there was superevent S250813k. Ended the night checking out peak Perseids just before the moon was rising.
LOG:
Just had a couple of drops from Observing due to TCS_ITMY_CO2 guardian saying laser is unlocked and needed to find a new locking point. (Attached are the two occurrences thus far).
TITLE: 08/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Currently Observing at 145 Mpc and have been Locked for 50 minutes. Relocking went pretty well today with the only concern being the fast shutter not firing during the last lockloss (86324), but luckily it seems like we were okay and that was expected in that situation.
LOG:
14:30UTC Locked for 5.5 hours and running magnetic injections
14:40 Back into Observing
14:45 Out of Observing for SUS charge measurements
14:57 Lockloss
19:18 Started relocking
- Initial alignment - BS ADS convergence took 16+ minutes (86320)
20:47 NOMINAL_LOW_NOISE
20:55 Observing
21:12 Lockloss
22:44 NOMINAL_LOW_NOISE
22:46 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
19:33 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD | 15:33 |
15:00 | FAC | Kim, Nelly | LVEA | YES | Tech clean | 15:09 |
15:04 | FAC | Randy, Chris | LVEA | n | Craning around BSC2 | 18:46 |
15:09 | FAC | Kim | EX | n | Tech clean | 16:14 |
15:10 | FAC | Nelly | EY | n | Tech clean | 16:10 |
15:10 | PSL | Jason | PSL | YES | RefCav Alignment | 16:38 |
15:20 | VAC | Janos, Travis | MX, MY | n | Pump install | 19:21 |
15:22 | EE | Fil | CER, LVEA | n | Cable pulling | 18:14 |
15:24 | | Camilla | LVEA | n | Transitioning to LASER SAFE | 15:40 |
15:27 | EE | Marc | CER/LVEA | n | Pulling cables | 18:47 |
15:31 | VAC | Gerardo | LVEA | n | Removing turbo pump | 17:31 |
15:42 | | Christina, Nichole | LVEA | n | 3IFO inventory | 18:25 |
15:55 | | Richard | LVEA | n | Surveying the floor (people) for any weaknesses | 16:12 |
15:56 | PEM | Sam | LVEA | n | Talking to Fil and looking at accelerometers | 16:12 |
16:12 | FAC | Nelly | HAM Shack | n | Tech clean | 17:09 |
16:13 | EPO | Amber, Tour | LVEA | n | Tour | 16:31 |
16:16 | FAC | Kim | HAM Shack | n | Tech clean | 17:09 |
16:16 | SEI | Jim | LVEA | n | HEPI accumulator checks | 17:34 |
16:35 | EPO | Sam, Tooba, +1 | LVEA | n | Tour | 17:31 |
17:05 | EPO | Mike +2 Spokane Review | LVEA | n | Tour | 18:34 |
17:06 | EE | Jackie | LVEA | n | Joining Fil and Marc | 18:47 |
17:13 | FAC | Nelly, Kim | LVEA | n | Tech clean | 18:19 |
17:34 | | Richard | LVEA | n | Checking on work | 17:51 |
17:44 | | Camilla | LVEA | n | Looking for Richard | 17:51 |
18:02 | | Richard, Tooba, +1 | Roof | n | Being on the roof | 18:14 |
18:34 | Camilla | LVEA | YES | Transitioning LVEA to laser hazard | 18:47 | |
18:41 | SQZ | Sheila, Matt, Jennie | LVEA | YES | SQZT0 table work | 20:04 |
18:46 | EPO | Mike, Spokane Review | YARM | n | Driving down YARM | 20:21 |
18:47 | SQZ | Camilla | LVEA | YES | Joining SQZ crew | 20:04 |
18:49 | LASER | LVEA is LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 09:49 |
20:21 | VAC | Janos, Travis | MY | n | Continuing pump work | 22:17 |
20:29 | | Christina, Nichole | MX, MY | n | 3IFO | 21:59 |
23:15 | VAC | Janos | MY | n | Turning off pump | 00:45 |
TITLE: 08/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 8mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Got the hand-off from Oli, which was standard other than their mention of the Fast Shutter note after a lockloss they had from their first locking attempt post-Maintenance. H1 has currently been locked for almost an hour.
(oh and Oli did also mention on leaving that they & Elenna noticed there was a Verbal Alarm for a PR3 saturation which happened about 20min after being in observing---they mentioned this is an odd thing for PR3.)
Operator Checksheet NOTES:
Ivey, Edgard, and Brian have created new estimator fits (86233) and blend filters (86265) for the SR3 Y estimator, and we have new rate channels (86080), so we were excited to be able to take new estimator measurements (last time 85615).
Unfortunately, there were issues with installing the new filters, so I had to make do with the old filters: for the estimator filters, I used the fits from fits_H1SR3_2025-06-30.mat, and the blend filters are from Estimator_blend_doublenotch_SR3yaw.m, aka the DBL_notch filter and not the new skinny notch. These are the same filters used in the testing from 85615.
So the only difference between the last estimator test and this one is that the last test had the generic satamp compensation filters (85471), and this measurement has the more precise 'best possible' compensation filters (85746). Good for us to see how much of a difference the generic vs best possible compensation filters make.
Unfortunately, due to the filter installation issues, as well as still needing to re-set up the estimator channels following the channel name changes, I didn't have much time to run the tests, so the actual test with the estimator was only 5 minutes long. Hopefully this is enough for at least a preliminary view of how it's working, and next week we can run a full test with the more recent filters. Like last time, the transition between the OSEM damping and the estimator damping was very smooth, and the noise out of the estimator was visibly smaller than with the regular damping (ndscope1).
Measurement times
SR3 Y damp -0.1
2025-08-12 18:28:00 - 18:44:00 UTC
SR3 Y damp -0.1, OSEM damp -0.4
2025-08-12 18:46:46 - 19:03:41 UTC
SR3 Y damp -0.1, Estimator damp -0.4
2025-08-12 19:09:00 - 19:16:51 UTC
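(A rough sketch of how these windows can be pulled and compared with gwpy is below; the witness channel name is an assumption for illustration, not necessarily the exact channel plotted.)

from gwpy.timeseries import TimeSeries

witness = 'H1:SUS-SR3_M3_OPLEV_YAW_OUT_DQ'  # hypothetical channel name
windows = {
    'SR3 Y damp -0.1':                 ('2025-08-12 18:28:00', '2025-08-12 18:44:00'),
    'SR3 Y damp -0.1, OSEM damp -0.4': ('2025-08-12 18:46:46', '2025-08-12 19:03:41'),
    'SR3 Y damp -0.1, Estimator -0.4': ('2025-08-12 19:09:00', '2025-08-12 19:16:51'),
}

asds = {}
for label, (start, end) in windows.items():
    data = TimeSeries.get(witness, start, end)        # fetch via NDS
    asds[label] = data.asd(fftlength=64, overlap=32)  # amplitude spectral density

# Overlay the three traces to compare damping performance around the 1-3 Hz resonances,
# e.g. plot = asds['SR3 Y damp -0.1, Estimator -0.4'].plot()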
Attached below are plots of the OSEM yaw signal, the M3 yaw optical lever witness sensor signal, and the drive request from light damping, full damping (current setting), and estimator damping modes from Oli's recent estimator test.
The blue trace is the light damping mode, the red trace is the full damping mode, and the yellow trace is the estimator damping.
The first plot is of the OSEM signal. The spectrum is dominated by OSEM noise. The blue, light damping trace shows where the suspension resonances are (around 1, 2, and 3 Hz). Under estimator damping, the resonances don't show up, as expected.
The second plot is of the OPLEV signal. It is much more obvious from this plot that the estimator is damping at the resonances as expected. Between the first and second peaks, as well as the second and third peaks, the yellow trace of the estimator damping mode is below the red trace of the full damping mode. This is good, because the estimator damping is expected to be better than the current full damping mode between the peaks. There is some estimator noise between 3 and 4 Hz. The light damping trace also shows a noticeable amount of excess noise between 10 and 15 Hz. We suspect this is due to ground motion from maintenance: the third, fourth, and fifth plots show comparisons between ground motion in July (when the light damping trace was 'normal') and August. There is excess noise in X, Y, and Z in August when compared to July.
The sixth plot is of the drive requests. This data was pulled from a newly installed 512 samples/sec channel, while the previous analysis for a test in July (see: LHO: 85745) was done using a channel that was sampling at 16 samples/sec. The low frequency full damping drive request differs significantly between July and August, likely because aliasing effects caused the July data to be unreliable. Otherwise, the estimator is requesting less drive above 5 Hz as expected. We note that the estimator rolls off sharply above 10 Hz.
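As a purely illustrative example of that aliasing concern (not the July analysis itself): a tone above the 8 Hz Nyquist frequency of a 16 samples/sec channel folds back to low frequency if the data are stored without an anti-alias filter.

import numpy as np
from scipy import signal

fs_hi, fs_lo = 512, 16
t = np.arange(0, 64, 1 / fs_hi)
x = np.sin(2 * np.pi * 11.0 * t)                 # 11 Hz content, above the 8 Hz Nyquist

naive = x[:: fs_hi // fs_lo]                     # raw downsample, no anti-alias filter
proper = signal.decimate(x, fs_hi // fs_lo, ftype='fir')  # filtered decimation

f_n, p_n = signal.welch(naive, fs=fs_lo, nperseg=256)
print(f"naive downsample: peak at {f_n[np.argmax(p_n)]:.1f} Hz (11 Hz aliased)")
# the filtered version suppresses the 11 Hz content instead of folding it down to 5 Hz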
The last plot is of the theoretical drive requests overlaid onto the empirical drive requests. We see that the major features of the estimator drive request are accounted for, as expected.
Oli intends to install the filter and the new, clean fits (see LHO: 86366) next Tuesday to test the yaw estimator once more. Hopefully the installation is smooth!
I would like to clarify a statement from my initial alog: when I said that "the only difference between the last estimator test and this one is that the last test had the generic satamp compensation filters", that was a lie!! The measurements used for calibrating and figuring out the correct response drives were taken before the satellite amplifiers were swapped for SR3, so even the OSEMINF calibration was not done with the new satellite amplifiers in mind. The calibration we had in place at the time was therefore not very accurate to what we actually had going on, so we can't really compare this measurement to the last one.
Oli told me that the TCS CO2Y Chassis tripped off during Maintenance this morning. This is not surprising, as there was a lot of craning work going on near that rack and the CO2 chassis are known to be fussy, see FRS 6639.
When I went to untrip it, both indicator lights on it were red, but after keying it off and on, it turned on with no issues.
WP12746 h1omc0 Low Noise ADC Autocal Test
EJ, Jonathan, Dave:
For a first test we restarted h1iopomc0 three times to run an autocal on the low-noise ADC -- each time the autocal failed. The next test will be to power cycle the existing card, possibly leading to its replacement.
WP12755 TW1 Raw Minute Trend Offload
Dave:
h1daqtw1 was configured to write its raw minute trends into a new local directory to isolate the last 6 months of data from the running system. The NDS service on h1daqnds1 was restarted to serve these data from this location while the file transfer progresses.
Restarts
Tue12Aug2025
LOC TIME HOSTNAME MODEL/REBOOT
09:06:26 h1omc0 h1iopomc0 <<< three ADC AUTOCAL tests
09:08:12 h1omc0 h1iopomc0
09:08:58 h1omc0 h1iopomc0
09:09:12 h1omc0 h1omc <<< start user models
09:09:26 h1omc0 h1omcpi
11:51:19 h1daqnds1 [DAQ] <<< reconfigure for TW1 offload
TW1 offload status as of 07:30 Wed: 70% complete. ETA 15:15 this afternoon.
Yesterday, we installed the BTRP (Beam Tube Roughing Pump) adapter flange on the 13.25" gate valve just to the -X side of GV13. This included installing an 8" GV onto the roughing pump port of the adapter, moving the existing gauge tree onto the new adapter, and installing a 2.75" blank on an unused port. All of the new CF joints were helium leak tested and no signal was seen above the ~9e-11 torrL/s background of the leak detector.
The assembly is currently valved out of the BT vacuum volume via the 13.25" GV, and is being pumped down via a small turbo and aux cart. Therefore, the PT-343 gauge reading is only reporting on the BTRP assembly pressure, not the main BT pressure, so it can be ignored until further notice of it being valved back in. This system has been pumping via aux cart or leak detector since ~2pm yesterday, and will continue to be pumped until it is in the pressure range of the BT volume. The aux cart is isolated by foam under the wheels, but some noise may be noticed by DetChar folks, hence the DetChar tag on this report.
A before - after pair of photos. As the conductance is very bad in this complex volume, we're aiming to pump it until next Tuesday. The estimated pressure rise of the main volume after valving in this small volume next Tuesday is less than E-12 Torr (after equalizing) - in other words, negligible.
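For the record, a back-of-envelope version of that estimate; all of the volumes and pressures below are illustrative placeholders, not measured values:

# Ideal-gas mixing of the small BTRP assembly volume into the beam tube volume.
P_small = 1e-7    # Torr, pumped-down BTRP assembly (hypothetical)
P_main  = 1e-9    # Torr, main beam tube (hypothetical)
V_small = 0.05    # m^3, adapter/gauge-tree dead volume (hypothetical)
V_main  = 1.0e4   # m^3, beam tube volume (hypothetical)

P_final = (P_small * V_small + P_main * V_main) / (V_small + V_main)
print(f"pressure rise ~ {P_final - P_main:.1e} Torr")   # ~5e-13 Torr with these numbers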
Some backstage snapshots of the great teamwork of Travis, Janos, and me on installing these: Pic. 1 - "before"; 2,3 - 90% complete.
As of Tuesday, August 12, the pumps have been shut off and removed from this system, and the gauge tree valved back in to the main volume. Noise/vibration and pressure monitoring at MX should be back to nominal.
LOTO has now been applied to the handles of both the hand angle valve and the hand gate valve. Also, components have been added to the header; we are only 1 piece away from the booster pump.
Something kicked SRM and caused this lockloss. SRM was also kicked 20 seconds earlier, but we were able to recover from that.
J. Kissel

Executive Summary
I've tuned the lens position and measured the projected beam profile of one of three trial fiber collimators to be used for the SPI. The intent is to do this measurement before vs. after a vacuum bake like that done for previous collimators (Section 3.3 of E1500384), to see if the bake will cause any negative impact on the performance of the collimator. It also helped clear out a few "rookie mistake" bugs in my data analysis, though there's still a bit of inconsequential confusion in post-processing the fit of the beam profile.

Full Context
The SPI intends to use supposedly vacuum-compatible Schaefter + Kirchhoff (SuK) fiber collimators (D2500094, 60FC-0-A11-03-Ti) in order to launch its two measurement (MEAS) and reference (REF) beams from their fiber optical patchcords attached to feedthroughs (D2500175) into free space and throughout the ISIK transceiver / interferometer (D2400107). However, unlike their (more expensive) stainless steel counterparts from MicroSense / LightPath used by the SQZ team, SuK doesn't have the facilities to specify a temperature at which they believe the titanium + D-ZK3 glass lens + CuBe & Ti lens retaining ring assembly will fail (see E2300454). As such, we're going to run one SuK collimator through the same baking procedure as the MicroSense / LightPath (see Section 3.3 of E1500384) -- a slow 6 [hr] ramp up to 85 [deg C], hold for 24 hours, and a similar slow ramp down -- and see if it survives. A catastrophic "if it survived" result would be that the lens cracks under the stress of differential heating between the titanium, CuBe, and glass. As we don't expect this type of failure, in order to characterize "if it still functions," we want a quantitative "before" vs. "after" metric. Since I have to learn how to "collimate" this type of collimator anyway (where "collimate" = mechanically tune the lens position such that the emanating beam's waist is positioned as close to the lens as possible), I figure we use a 2D beam profile as our quantitative metric. We expect the symptom of a "failed" fiber collimator to be that "before bake it projects a lovely, symmetric, tunable, Gaussian beam, and after bake the profile looks asymmetric / astigmatic or the lens position is no longer consistently/freely adjustable."

The SPI design requires a beam whose waist (1/e^2) radius is w0 = 1.050 mm +/- 0.1 mm. That puts the Rayleigh range at zR = pi * (w0^2) / lambda = 3.25 [m], so in order to get a good fit on where the waist lies, we need at least one or two data points beyond the Rayleigh range. So that means projecting the beam over a large distance, say at least ~5-6 [m].

Measurement Details
2025-06-03_SPIFC_OpticalSetup.pdf shows the optical setup. Pretty simple; just consuming a lot of optical table space (which we thankfully have at the moment). The "back" optical table (in IFO coordinates the "-X" table, with the long direction oriented in the Y direction) already has a pre-existing fiber-coupled, 1064 nm laser capable of delivering variable power of at least 100 [mW], so I started with that. Its output has an FC/APC "angled physical contact" connector, whereas the SuK fiber collimators have an FC/PC (flat) "physical contact" connector. I thus used a ThorLabs P5-980PM-FC-2 APC-to-PC fiber patch cord to couple to the SuK fiber collimator, assembled with a (12 mm)-to-(1 inch) optic mount adapter (AD12NT) and mounted in a standard 1" mirror mount, set at 5 inch beam height.
Ensuring I positioned the free-space face of the collimator at the 0 [in] position, I projected the beam down the width of the optical table and placed steering mirrors at 9 [ft] 9 [in] (the maximum +Y grid holes), separated by 6 [in], in order to return the beam down the table. Along this beam, I marked out grid-hole positions to measure the beam profile with a NanoScan head at
z = [0.508 0.991 1.499 2.007 3.251 4.496 5.41] [m]
These are roughly even half-meter points that one gets from "finding the 1 inch hole position that gets you close to 0.5 [m] increments", i.e. z = [1'8", 3'3", 4'11", 6'7", 10'8", 14'9", and 17'9"].

Using the SuK proprietary tooling, I then loosened the set screws that had secured the lens in position with the 1.2 mm flat-head (9D-12), and adjusted the position of the lens with their short eccentric key (60EX-4), per their user manual (T2500062 == Adjustment_60FC.pdf), with the NanoScan positioned at the 5.41 [m] position. At the 5.41 [m] position, we expect the beam radius to be
w(z=5.41 m) = w0 * sqrt(1 + (z/zR)^2) = 2.036 [mm],
or a beam diameter of d(z=5.41 m) = 4.072 [mm]. Regretfully, I made the rookie mistake of interpreting the NanoScan's beam widths (diameters) as radii on the fly, and "could not get a beam 'radius' lower than 3.0 [mm]" at the z = 5.41 [m] position, as adjustments to the eccentric key / lens position approached a "just breathe near it and it'll get larger" level at that beam size. This will be totally fine for "the real collimation" process, as beam widths (diameters) of 4.0 [mm] required much less of a delicate touch and were quite repeatable (I found, while struggling to achieve a 3.0 [mm] width). Regardless, once I said "good enough" at the lens position, I re-secured the set screws holding the lens in place. That produced a d = 3.4 [mm] beam diameter, or 1.7 [mm] radius, at z = 5.41 [m]. I then moved the NanoScan head to the remaining locations (aligning the beam into the head at each position, as needed, with either the steering mirrors or the fiber collimator itself) to measure the rest of the beam profile as a function of distance.

2025-06-03_SPIFC_BeamScans_vs_Position.pdf shows the NanoScan head position and raw data at each z position after I tuned the lens position at z = 5.41 [m]. Having understood during the data analysis that the NanoScan software returns either 1/e^2 or D4sigma beam *diameters*, I followed modern convention and used the mean D4sigma values, and converted to radii with the factor of 2, w(z) = d(z) / 2. One can see from the raw data that the beam profile at each point is quite near ''excellently Gaussian,'' which is what we hope will remain true *after* the bake.

Results
2025-06-03_spifc_S0272502_prebake_beamprofile_fit.pdf shows the output of the attached script spifc_beamprofile_S0272502_prebake_20250603.m, which uses Sheila's copy of a la mode to fit the profile of the beam.

Discussion
The data show a pretty darn symmetric beam in X (parallel to the table) and Y (perpendicular to the table), reflecting what had already been seen in the raw profiles. The fit predicts a waist w0 of
(w0x, w0y) = (0.89576, 0.90912) [mm]
at position
(z0x, z0y) = (1.4017, 1.3469) [m]
downstream from the lens. It makes total sense that the z position of the waist is *not* at the lens position, given that I tried to get the beam to as small a *diameter* as possible at the 5.41 [m] position, rather than what I should have done, which is to tune the lens position / beam diameter to be the desired 4.072 [mm].
What doesn't make sense to me is that -- in trying to validate the fit and/or show that the beam behaves as an ideal Gaussian beam would -- I also plot the predicted beam radius at the Rayleigh range,
w(zR) [model] = w0 * sqrt(1 + (zR/zR)^2) = w0 * sqrt(2),
or
(wzRx, wzRy) [model] = (1.2668, 1.2857) [mm],
which is much larger than the fit predicts,
(wzRx, wzRy) [fit] = (0.9677, 0.9958) [mm],
at a Rayleigh range, zR = pi * w0^2 / lambda, of
(zRx, zRy) [from fit w0] = (2.3691, 2.4404) [m].
Similarly confusing: if I plot a line from the waist position (z0x, z0y) to the end of the position vector (6 [m]), whose angle from the x-axis is the divergence angle
theta_R = lambda / (pi * w0),
i.e. predicting a beam radius at zEnd = 6 [m] of
w(zEnd) = (zEnd - z0) * tan(theta_R),
this results in a beam radius at zEnd much smaller than the fit. Most demo plots, e.g. from wikipedia:Rayleigh_length or wikipedia:Beam_divergence, show that the slope of this line should start to match the beam profile just after the Rayleigh range. To be fair, these demos never have quantitative axes, but still. At least the slope seems to match, even though the line is quite offset from the measured / fit z > 6 [m] asymptote.

Conclusion
Though I did not adjust the lens position to place the beam waist at the desired position, I now have a good measurement setup and data processing regime to do so for the "production" pair of fiber collimators. For this collimator, since we're just looking at "before" vs. "after" bake data, this data set is good enough. Let's bake!
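For future reference when collimating the "production" pair, the same w(z) fit can be done compactly with scipy instead of a la mode; the measured radii are left as placeholders to be filled in from the NanoScan mean D4sigma diameters / 2, not fabricated here.

import numpy as np
from scipy.optimize import curve_fit

lam = 1064e-9  # m

def w_of_z(z, w0, z0):
    """Gaussian beam radius vs distance: w(z) = w0*sqrt(1 + ((z - z0)/zR)^2)."""
    zR = np.pi * w0**2 / lam
    return w0 * np.sqrt(1 + ((z - z0) / zR) ** 2)

z = np.array([0.508, 0.991, 1.499, 2.007, 3.251, 4.496, 5.41])  # m, head positions above

def fit_waist(z, w_meas):
    """Fit measured 1/e^2 radii (m) at positions z (m); returns (w0, z0)."""
    popt, _ = curve_fit(w_of_z, z, w_meas, p0=[1.0e-3, 1.0])
    return popt

# w0, z0 = fit_waist(z, w_meas)   # w_meas = NanoScan mean D4sigma diameters / 2, in meters
# print(f"w0 = {w0*1e3:.3f} mm at z0 = {z0:.3f} m downstream of the collimator")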
I took a moment to revisit the "cross checks" of the fit -- namely the waist at the Rayleigh range (w(zR)) and the divergence angle's (theta_R) prediction of what the waist should be in the "far field" (w(zEnd)) -- and solved the "mysteries."

(1) For the prediction of the waist radius at the Rayleigh range: all the above math was correct; it's merely that I did not properly offset the z position w.r.t. where the origin of the plot lies (at the fiber collimator). If I adjust the z position for that offset,
(zRx, zRy) [from fit w0] = (z0x + zRx, z0y + zRy) [from the fiber collimator, i.e. z = 0 [m]]
= (1.4017, 1.3469) + (2.3691, 2.4404) [m]
= (3.7708, 3.7899) [m]
then the waist radius (w0 * sqrt(2)) lands bang-on the fit and measurement.

(2) For why the simplistic model of the divergence angle doesn't appear to match the divergence of the measurements or model: this is merely a question of what really is "near" and "far" field. It turns out that if you just expand the z range to what I would call "way" out -- from 6 [m] to 20 [m] -- you see convergence between the two models.

The pdf attached here is a new version of the plots showing the same data and model as above, but with the simplistic models fixed. I now show two pages: page one with the original zoom from z = 0 to 6 [m], aka the "near" field, demonstrating (1); and page two zoomed out from z = 0 to 20 [m], demonstrating (2). I also attach the updated version of the code, spifc_beamprofile_S0272502_prebake_20250603.m.