After editing SQZ_MANAGER to use the correct ASQZ angle of (-)80 deg, which was noted as an issue over the weekend (85072), I ran SCAN_ALIGNMENT_FDS. This appeared to work fine and we had ASQZ around 14 dB. However, after it was done the SQZ didn't look good, so we turned on the ASC for a few minutes; this improved the high-frequency SQZ with the low-frequency SQZ remaining the same (maybe even slightly better), see plot. I'm unsure why SCAN_ALIGNMENT doesn't give as good SQZ as the ASC, but the ASC appears to be working well once we are thermalized.
Later in the week we might try using the ASC again, either from the start of the lock if we think the new THERMALIZATION GRD is working well (85083), or after we have thermalized.
A single blank line was added to the end of the filter file for H1SUSAUXEX to test whether the IFO can enter the OBSERVE state with unloaded filter changes.
If loaded, this change would have no effect on the behavior of the filters.
We should be ready to go to laser safe in the LVEA.
Elenna noted that we will have CP-Y spots on the ballast baffle even when the CP is properly aligned (which is why we eventually want to remove/fix the baffle), so I swept the CP in pitch and yaw and found a minimum in scatter coupling at about -300 pitch, 300 yaw. The varying noise from the pump on HAM1 interfered with the measurement, so I think I can further minimize the coupling once the pump is off.
Accepted in SAFE and OBSERVE SDF tables as Robert left them. This new position has been causing ITMY saturations during LOWNOISE_COIL_DRIVERS due to the increased drive from the DAC, but this doesn't seem to be causing any noticeable locking issues.
Ryan was having a hard time with locking PRMI, so we did a repeat of the REFLAIR 45 phasing as in 84630.
We started by checking the PRCL OLG and saw odd features there (1st attachment). We then phased REFLAIR45 (2nd attachment shows spectra, 3rd shows SDF), and the PRCL OLG now looks normal again. This partially reverses the change to the phase that we made in 84630: there we moved the phase from 97 to 87, and we've now moved it back to 93.
Oli, Camilla, Sheila, RyanS
It was pointed out (84972) that our new SRCL offset is too big at the beginning of the lock, affecting the calibration and how well we are squeezing. Camilla had the idea of taking the unused-but-already-set-up THERMALIZATION guardian and repurposing its main state so that it steps LSC-SRCL1_OFFSET from its value at the end of MAX_POWER to the official offset value given in lscparams (offset['SRCL_RETUNE']). This stepping starts at the end of MAX_POWER and goes for 90 minutes. Here is a screenshot of the code.
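Since the screenshot isn't reproduced here, below is a minimal sketch of the stepping logic as described above. The channel, target, and 90-minute duration are from the text; the state name, step cadence, and exact guardian structure are assumptions, not the actual THERMALIZATION guardian code.

```python
# Illustrative sketch only -- not the actual THERMALIZATION guardian code.
# Steps LSC-SRCL1_OFFSET from whatever value it has at the end of MAX_POWER
# to lscparams.offset['SRCL_RETUNE'] over 90 minutes.
import time
from guardian import GuardState
import lscparams  # site config module, assumed importable from the guardian path
# (ezca is provided as a global by the guardian environment)

class ADJUST_SRCL_OFFSET(GuardState):  # hypothetical state name
    def main(self):
        self.start = ezca['LSC-SRCL1_OFFSET']         # value at end of MAX_POWER
        self.target = lscparams.offset['SRCL_RETUNE']
        self.duration = 90 * 60                       # seconds
        self.t0 = time.time()

    def run(self):
        frac = min((time.time() - self.t0) / self.duration, 1.0)
        ezca['LSC-SRCL1_OFFSET'] = self.start + frac * (self.target - self.start)
        return frac >= 1.0   # state completes once the official offset is reached
```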
To go with this new stepping, we've commented out the line (~5641) in ISC_LOCK's LOWNOISE_LENGTH_CONTROL (ezca['LSC-SRCL1_OFFSET'] = lscparams.offset['SRCL_RETUNE']) so that the offset doesn't get set to that constant and instead keeps stepping.
To get this to run properly while observing, we did have to unmonitor the LSC-SRCL1_OFFSET value in the Observe SDF (sdf).
Attached is a screenshot of the grafana page, highlighting the 33 Hz calibration line, which seems to be the most sensitive to thermalization. Before, when the SRCL offset was static, it appears that the 33 Hz line uncertainty started at about 1.09 and then decayed down to about 1.02 over the first hour. With the thermalization adjustment of the SRCL offset from 0 to -455 over one hour, the 33 Hz uncertainty starts around 0.945 and then increases to 1.02 over the first hour. It seems like we overshot in the other direction, so perhaps we could start closer to -200 and move to -455.
We decided to change the guardian so that it starts at -200 and then steps to -455 over the course of 75 minutes instead of 90 minutes.
With the update to the guardian to start at -200, each calibration line uncertainty has actually stayed very flat for these first 30 minutes of lock (except for the usual very large jump in the uncertainty for the first few minutes of the lock).
This shows the entire lock using the thermalization guardian with the SRCL offset ramping from -200 to -455. The line uncertainty holds steady the entire time, within 2-3%!
FAMIS 31090
Temperatures have been coming down following the weather, and PMC REFL has been moving around, but no major events of note otherwise.
The figure shows that the “mystery” beam spot on the HAM3 spool piece dropped dramatically in intensity over the break. I don’t know if it was the compensation plate move or something else that we did. I’m a little skeptical that it was the compensation plate move because, before the break, I moved the compensation plate while filming the spot and saw no change (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82252). However, I think that this is worth doing again to double-check.
Mon Jun 16 10:10:49 2025 INFO: Fill completed in 10min 45secs
TC-A was below nominal range for this fill.
J. Kissel

Executive Summary
I've tuned the lens position and measured the projected beam profile of one of three trial fiber collimators to be used for the SPI. The intent is to do this measurement before vs. after a vacuum bake like those done for previous collimators (Section 3.3 of E1500384), to see if the bake causes any negative impact on the performance of the collimator. The exercise also helped clear out a few "rookie mistake" bugs in my data analysis, though there's still a bit of inconsequential confusion in post-processing the fit of the beam profile.

Full Context
The SPI intends to use supposedly vacuum-compatible Schaefter + Kirchhoff (SuK) fiber collimators (D2500094, 60FC-0-A11-03-Ti) in order to launch its two measurement (MEAS) and reference (REF) beams from their fiber optical patchcords attached to feedthroughs (D2500175) into free space and throughout the ISIK transceiver / interferometer (D2400107). However, unlike their (more expensive) stainless steel counterparts from MicroSense / LightPath used by the SQZ team, SuK doesn't have the facilities to specify a temperature at which they believe the titanium + D-ZK3 glass lens + CuBe & Ti lens retaining ring assembly will fail (see E2300454). As such, we're going to run one SuK collimator through the same baking procedure as the MicroSense / LightPath (see Section 3.3 of E1500384) -- slow 6 [hr] ramp up to 85 [deg C], hold for 24 hours, similar slow ramp down -- and see if it survives. The catastrophic "did not survive" result would be the lens cracking under the stress of differential heating between the titanium, CuBe, and glass. As we don't expect this type of failure, in order to characterize "if it still functions," we want a quantitative "before" vs. "after" metric. Since I have to learn how to "collimate" this type of collimator anyway (where "collimate" = mechanically tune the lens position such that the emanating beam's waist is positioned as close to the lens as possible), I figure we use a 2D beam profile as our quantitative metric. We expect the symptom of a "failed" fiber collimator to be that "before the bake it projects a lovely, symmetric, tunable, Gaussian beam, and after the bake the profile looks asymmetric / astigmatic or the lens position is no longer consistently/freely adjustable."

The SPI design requires a beam whose waist (1/e^2) radius is w0 = 1.050 mm +/- 0.1 mm. That puts the Rayleigh range at zR = pi * w0^2 / lambda = 3.25 [m], so in order to get a good fit on where the waist lies, we need at least one or two data points beyond the Rayleigh range. That means projecting the beam over a large distance, say at least ~5-6 [m].

Measurement Details
2025-06-03_SPIFC_OpticalSetup.pdf shows the optical setup. Pretty simple; just consuming a lot of optical table space (which we thankfully have at the moment). The "back" optical table (in IFO coordinates the -X table, with the long direction oriented in the Y direction) already has a pre-existing fiber-coupled, 1064 nm laser capable of delivering variable power of at least 100 [mW], so I started with that. Its output has an FC/APC "angled physical contact" connector, whereas the SuK fiber collimators have an FC/PC (flat) "physical contact" connector. I thus used a ThorLabs P5-980PM-FC-2 APC-to-PC fiber patch cord to couple to the SuK fiber collimator, assembled with a (12 mm)-to-(1 inch) optic mount adapter (AD12NT) and mounted in a standard 1" mirror mount, set at 5 inch beam height.
Ensuring I positioned the free-space face of the collimator at the 0 [in] position, I projected the beam down the width of the optical table and placed steering mirrors at 9 [ft] 9 [in] (the maximum +Y grid holes), separated by 6 [in], in order to return the beam down the table. Along this beam, I marked out grid-hole positions to measure the beam profile with a NanoScan head at z = [0.508, 0.991, 1.499, 2.007, 3.251, 4.496, 5.41] [m]. These are roughly the half-meter points one gets from "finding the 1 inch hole position that gets you close to 0.5 [m] increments," i.e. z = [1'8", 3'3", 4'11", 6'7", 10'8", 14'9", and 17'9"].

Using the SuK proprietary tooling, I then loosened the set screws that had secured the lens in position with the 1.2 mm flat-head (9D-12) and adjusted the position of the lens with their short eccentric key (60EX-4), per their user manual (T2500062 == Adjustment_60FC.pdf), with the NanoScan positioned at the 5.41 [m] position. At the 5.41 [m] position, we expect the beam radius to be w(z=5.41 m) = w0 * sqrt(1 + (z/zR)^2) = 2.036 [mm], or a beam diameter of d(z=5.41 m) = 4.072 [mm]. Regretfully, I made the rookie mistake of interpreting the NanoScan's beam widths (diameters) as radii on the fly, and "could not get a beam 'radius' lower than 3.0 [mm]" at the z = 5.41 [m] position, as adjustments to the eccentric key / lens position approached a "just breathe near it and it'll get larger" level at that beam size. This will be totally fine for "the real collimation" process, as beam widths (diameters) of 4.0 [mm] required much less of a delicate touch and were quite repeatable (I found, while struggling to achieve a 3.0 [mm] width). Regardless, once I said "good enough" at the lens position, I re-secured the set screws holding the lens in place. That produced a d = 3.4 [mm] beam diameter, or 1.7 [mm] radius, at z = 5.41 [m].

I then moved the NanoScan head to the remaining locations (aligning the beam into the head at each position, as needed, with either the steering mirrors or the fiber collimator itself) to measure the rest of the beam profile as a function of distance. 2025-06-03_SPIFC_BeamScans_vs_Position.pdf shows the NanoScan head position and raw data at each z position after I tuned the lens position at z = 5.41 [m]. Having understood during the data analysis that the NanoScan software returns either 1/e^2 or D4sigma beam *diameters*, I followed modern convention and used the mean D4sigma values, and converted to radii with a factor of 2, w(z) = d(z) / 2. One can see from the raw data that the beam profile at each point is quite near "excellently Gaussian," which is what we hope will remain true *after* the bake.
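As a cross-check, the expected beam sizes quoted above follow directly from the ideal Gaussian-beam relations; here is a short standalone calculation (values taken from the text, nothing instrument-specific):

```python
# Quick check of the Gaussian-beam numbers quoted above (standalone, SI units).
import numpy as np

lam = 1064e-9          # wavelength [m]
w0  = 1.050e-3         # target waist (1/e^2) radius [m]

zR = np.pi * w0**2 / lam                     # Rayleigh range -> ~3.25 m
def w(z, w0=w0, zR=zR):
    """1/e^2 beam radius at distance z from the waist."""
    return w0 * np.sqrt(1 + (z / zR)**2)

z_meas = np.array([0.508, 0.991, 1.499, 2.007, 3.251, 4.496, 5.41])  # [m]
print(f"zR = {zR:.2f} m")
print(f"w(5.41 m) = {w(5.41)*1e3:.3f} mm, diameter = {2*w(5.41)*1e3:.3f} mm")
print("expected radii [mm]:", np.round(w(z_meas)*1e3, 3))
```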
Results
2025-06-03_spifc_S0272502_prebake_beamprofile_fit.pdf shows the output of the attached script spifc_beamprofile_S0272502_prebake_20250603.m, which uses Sheila's copy of a la mode to fit the profile of the beam.

Discussion
The data show a pretty darn symmetric beam in X (parallel to the table) and Y (perpendicular to the table), reflecting what had already been seen in the raw profiles. The fit predicts a waist w0 of (w0x, w0y) = (0.89576, 0.90912) [mm] at position (z0x, z0y) = (1.4017, 1.3469) [m] downstream from the lens. It makes total sense that the z position of the waist is *not* at the lens position, given that I tried to get the beam to as small a *diameter* as possible at the 5.41 [m] position, rather than what I should have done, which is to tune the lens position / beam diameter to the desired 4.072 [mm].

What doesn't make sense to me is that -- in trying to validate the fit and/or show that the beam behaves as an ideal Gaussian beam would -- I also plot the predicted beam radius at the Rayleigh range, w(zR) [model] = w0 * sqrt(1 + (zR/zR)^2) = w0 * sqrt(2), or (wzRx, wzRy) [model] = (1.2668, 1.2857) [mm], which is much larger than the fit predicts, (wzRx, wzRy) [fit] = (0.9677, 0.9958) [mm], at a Rayleigh range zR = pi * w0^2 / lambda of (zRx, zRy) [from fit w0] = (2.3691, 2.4404) [m]. Similarly confusing: if I plot a line from the waist position (z0x, z0y) to the end of the position vector (6 [m]), whose angle from the x-axis is the divergence angle theta_R = lambda / (pi * w0), i.e. predicting a beam radius at zEnd = 6 [m] of w(zEnd) = (zEnd - z0) * tan(theta_R), this results in a beam radius at zEnd much smaller than the fit. Most demo plots, e.g. from wikipedia:Rayleigh_length or wikipedia:Beam_divergence, show that the slope of this line should start to match the beam profile just after the Rayleigh range. To be fair, these demos never have quantitative axes, but still. At least the slope seems to match, even though the line is quite offset from the measured / fit z > 6 [m] asymptote.

Conclusion
Though I did not adjust the lens position to place the beam waist at the desired position, I now have a good measurement setup and data-processing regime to do so for the "production" pair of fiber collimators. For this collimator, since we're just looking at "before" vs. "after" bake data, this data set is good enough. Let's bake!
WP12620 TW0 offload
The file copy to permanent archive ran from Fri 09:25 - Sat 10:42 (25hrs 07mins).
This morning, when H1 was out of lock and with the operator's permission, I restarted NDS0 with its new daqdrc file.
At 10:06 I started the deletion of the old files on h1daqtw0 in nice mode. Starting SSD-RAID disk usage was 92%.
Deletion completed at 12:08 (took 2hrs 3mins). This completes WP12620.
TITLE: 06/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 0mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY: Looks like just one lockloss overnight with an automatic recovery. H1 has been locked and observing for 7 hours. Commissioning time today slated for 15:00 to 21:00 UTC.
TITLE: 06/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: 1 lockloss with an automated relock without an initial alignment (IA). At the start of both locks today, the SQZers' ASC took us in the wrong direction and degraded the range; I was not able to recover it the second time.
LOG: No log
There were two issues with the SCAN_ALIGNMENT_FDS alignment scans:
SQZ_ALIGNMENT_FDS is outdated as we haven't used it since we got the SQZ_ASC working well.
To do: find the optimum value/sign of ASQZ and put into SQZ_ALIGNMENT_FDS. Then we can try to rerun this and compare with where SQZ ASC takes us.
Ryan C, Jonathan, Dave:
Starting around 18:28 Sun 15jun2025, the control room reported name resolution issues within CDS. The GC WiFi also went offline.
The CDS alarm system froze up at 18:28, which agrees with the time the other services went offline.
Jonathan is reporting issues contacting GC DNS and management machines, indicating this could be a GC issue.
Jonathan is heading to the site to investigate.
From the control room perspective:
teamspeak continues to run on the verbal machine.
phones continue to work
alog is accessible if the IP address is used instead of the name.
scripts are failing if they need to resolve names; this is preventing squeezer work, and H1's range is down to the 80s.
the alarm/alert system cannot resolve twilio's address, so no alarm texts/emails can be sent.
The issue has been resolved by power cycling the sw-osb163-0 switch. This is what DNS and a few other key services hang off of.
I restarted the switch around 8:14pm local time. Ryan C. confirms that he has access to the alog. I can get to the management machines and the dns servers, both locally and via offsite routes.
Alarms restarted itself at 20:20 and I restarted alerts at 20:54. Test messages confirmed these services are working correctly.
Opened FRS34439 to cover this, specifically how it impacted control room operations.
H1 dropped observing from 18:30 to 19:00 UTC for regularly scheduled calibration measurements, which ran without issue. A screenshot of the calibration monitor medm and the calibration report are attached.
Broadband runtime: 18:30:45 to 18:35:54 UTC
Simulines runtime: 18:36:41 to 18:59:56 UTC
We had to rerun the report to account for pro-spring in the model. Calibration looks better now -- sensing model is within 2% above 20 Hz and 5% below 20 Hz, report attached. I also updated the .ini file to now account for the pro-spring behavior.
More detailed steps:
Set is_pro_spring to True in the pydarm_H1.ini in report 20250614T183642Z (in terminal, ran $ pydarm report --regen --skip-gds 20250614T183642Z).
Copied the pydarm_H1.ini file at /ligo/groups/cal/H1/ifo as pydarm_H1.ini.250610 to save the previous configuration.
In /ligo/groups/cal/H1/ifo/pydarm_H1.ini, set is_pro_spring to True.
Yesterday we ran a bruco on Francisco's post-vent SQZ time from 84996. Link to bruco here.
Command used: python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1433758866 --length=600 --outfs=4096 --fres=0.1 --dir=/home/camilla.compton/public_html/brucos/1433758866 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_DARM_excluded.txt
Links to a few coherences, although I haven't done a deep dive: SRCL (some 100-250 Hz), PRCL, MICH (bad 2-4 Hz), PSL ISS 2nd loop, and the jitter peaks can be seen in IMC WFS.
The high coherence with CHARD P is probably coming from excess noise in CHARD P from HAM1, 84863. Jim is set to do further HAM1 ISI tuning tomorrow, so we can recheck this coherence later. We also have plans to rerun the noise budget injections to check if the CHARD coupling has changed.
We could do an iterative feedforward to take care of the residual LSC coherence, which mainly seems to be coming from MICH LSC.
We should also determine how much the MICH ASC coherence is limiting DARM and maybe change the loop design again.
Much of the other coherence seems to be jitter.
This post reports on the results from the SRCL dither measurement I ran in January, briefly reported in alog 82248. It's taken me a long time to write up this report because I spent significant time processing the results in different ways to try to account for some of the possible pitfalls of this measurement. The overall problem is that, in O4, this measurement has traditionally reported very low arm power compared to our other methods of estimating arm power (for example, see Craig's work in 66860). I have been doing an exhaustive study to understand why that might be.
A full derivation of this measurement can be found in Craig's dissertation, Section 3.2.2, which also includes references to the work by Daniel and Kiwamu that originally inspired this method. To summarize, the idea of the measurement is that dithering the SRM creates differential amplitude sidebands due to radiation pressure coupling in the SRC. This response can be read out as a transfer function from the differential arm length to the relative intensity noise on the arm transmission QPDs. The resulting transfer function takes the simple form DARM/RIN = alpha/f^2. The arm power is calculated via P_arm = 1/2 * alpha * pi^2 * M * c (M = test mass mass, c = speed of light).
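For reference, a minimal sketch of how a fitted line amplitude converts to an arm power under this model; the alpha value below is a placeholder for illustration, not a measured result, and M is taken as the nominal 40 kg test mass:

```python
# Convert the fitted dither transfer-function amplitude to arm power:
#   DARM/RIN = alpha / f^2   ->   P_arm = 1/2 * alpha * pi^2 * M * c
import numpy as np

M = 40.0           # nominal test mass [kg]
c = 299792458.0    # speed of light [m/s]

def arm_power_from_alpha(alpha):
    """alpha in m*Hz^2 per unit RIN, as fit from DARM/RIN = alpha/f^2."""
    return 0.5 * alpha * np.pi**2 * M * c

alpha_example = 5.0e-6   # placeholder amplitude, for illustration only
print(f"P_arm ~ {arm_power_from_alpha(alpha_example)/1e3:.0f} kW")
```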
Here are some possible issues with the measurement:
In order to study this fully and properly, I ran a Bayesian inference on my measurement data with both the amplitude and the power law as free parameters. I wanted to confirm that the 1/f^2 trend agrees with the data and, if it does, get the arm power estimate (note: if the power law is different from -2, the overall amplitude of the fit cannot be used to measure the arm power). In the process of setting up the inference, I set about making sure that the uncertainty on the line height in both DARM and the TMS QPDs was being correctly calculated.
Uncertainty and Bias:
Appendix E of Craig's thesis is a nice reference for line uncertainty.
Testing frequency dependence:
I set up a Bayesian inference using a model that fits the frequency dependence and "power", assuming a Gaussian distribution for my uncertainty. Following Craig's discussion in his thesis Appendix E, with SNR > 5 the ASD distribution of a line can be well-approximated by a Gaussian distribution. I assume a flat prior, with possible powers ranging from 0-1000 kW and possible power laws from -3 to -1. I fix the inference to assume the same arm power in each arm, so it uses both the A and B QPDs in each arm to fit one X arm power and one Y arm power. I then fix the frequency dependence to be the same for both arms and both QPDs in each arm.
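Here is a minimal sketch of this kind of inference, using emcee as an example sampler (the actual tool used isn't stated above); the data arrays are placeholders, and the per-QPD (A/B) bookkeeping is collapsed to one trace per arm:

```python
# Sketch of the inference described above: per-arm power P and a shared power
# law m, with DARM/RIN modeled as alpha(P) * f^m and a Gaussian likelihood on
# the measured line heights.  Flat priors: P in (0, 1000] kW, m in (-3, -1).
# This illustrates the approach only; it is not the analysis code that was run.
import numpy as np
import emcee

M, c = 40.0, 299792458.0                       # nominal test mass [kg], c [m/s]

def model(f, P_arm, m):
    alpha = 2.0 * P_arm / (np.pi**2 * M * c)   # invert P_arm = 1/2 alpha pi^2 M c
    return alpha * f**m

def log_prob(theta, f, tf_x, sig_x, tf_y, sig_y):
    Px, Py, m = theta
    if not (0.0 < Px < 1.0e6 and 0.0 < Py < 1.0e6 and -3.0 < m < -1.0):
        return -np.inf                         # outside the flat prior
    rx = (tf_x - model(f, Px, m)) / sig_x
    ry = (tf_y - model(f, Py, m)) / sig_y
    return -0.5 * (np.sum(rx**2) + np.sum(ry**2))

# Placeholder data standing in for the measured DARM/RIN line heights and
# their (Gaussian) uncertainties at each injected frequency.
f = np.array([10.0, 15.0, 20.0, 30.0])
tf_x = model(f, 320e3, -2.0); sig_x = 0.02 * tf_x
tf_y = model(f, 300e3, -2.0); sig_y = 0.02 * tf_y

ndim, nwalkers = 3, 32
p0 = np.column_stack([np.random.uniform(2e5, 4e5, nwalkers),
                      np.random.uniform(2e5, 4e5, nwalkers),
                      np.random.uniform(-2.2, -1.8, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(f, tf_x, sig_x, tf_y, sig_y))
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)   # posterior on (Px, Py, m)
```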
The results from this inference do not favor a slope of -2, with a fit that gives m=-2.016 +-0.014 (95% CI).
However, fixing the slope to -2, following the model, gives the following power results (95% CI):
X arm power = 319.6 +- 2.8 kW
Y arm power = 303.5 +- 2.4 kW
These are the highest power results achieved with this method during O4. However, they are still very low compared to what we know about the interferometer, namely that our PRG at full power is about 50 and we have about 56-57 W of power on the PRM, which predicts about 360 kW of arm power, assuming an arm gain of 260. Craig, Sheila, and I have all done work to verify these numbers before and during O4. This result also indicates a significant mismatch in the arm powers, which is surprising since our pre-O4 test mass replacement of ITMY should have made the arms better matched to each other than in O3.
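As a back-of-envelope check of that ~360 kW expectation (numbers from the sentence above; the factor of 2 assumes the recycled power splits equally between the arms at the beamsplitter):

```python
# Expected arm power from input power, power recycling gain, and arm gain.
P_prm, PRG, G_arm = 56.5, 50, 260       # W on PRM, PRG, arm gain (from text)
P_arm = P_prm * PRG * G_arm / 2         # PRC power splits equally between the arms
print(f"P_arm ~ {P_arm/1e3:.0f} kW")    # ~367 kW, i.e. "about 360 kW"
```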
Some commentary:
One aspect of this measurement that is very constraining on the slope of the transfer function is the overall uncertainty on each transfer function point, which is very narrow. This makes sense, since we are achieving fairly good SNR overall. However, while processing the data I did notice that there is modulation of the DARM/RIN transfer function on the order of a Hz to a few Hz. My guess is that this is coming from the ASC modulating DARM, SRCL, or both. I'm not sure what effect this could have on the estimation of the transfer function or its uncertainty. Returning to the results from Dan's finesse model, the sparseness of points in this measurement also makes it harder to determine if the slope has diverged from -2 at lower frequency due to a different effect, such as some differential mismatch in the test mass radii of curvature.
If we choose to use these measurement results, they would certainly place a lower bound on our possible arm power, which is compatible with some of Sheila's quantum noise modeling work (see 82097). However, those models require minimal or no readout losses to achieve such a low arm power, which is incompatible with some of our other results measuring readout loss, such as the work Jennie Wright has been doing.
Back to the measurement itself, we could try to improve the result slightly by integrating longer at each point, and measuring more points to get a better idea of the slope. Instead of running Craig's swept sine measurement, I injected several lines by hand because I found it easier to verify that we were achieving good SNR this way. I'm not sure if there is a way to overcome the modulation of the injection.
I am adding a link here to Jennie's alog where she measured the throughput to be 86%, suggesting 14% readout losses (83586). She also later measured a readout loss of 12.2% (83008).
You can see from Sheila's quantum noise fitting alog (82097) that the fit using low power, 327 kW, requires very low readout loss. Her low-power model uses a readout efficiency of 91.6%.
Therefore, it seems our current readout loss measurements are at odds with the results of this SRCL dither measurement.
Sheila, Elenna, Camilla
Sheila was questioning whether something is drifting that would explain our needing an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot, both while powering up / moving spots and while in NLN. It's unclear from the BS alignment inputs plot what's causing this.
This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments before the break too. One year ago this was not happening, plot.
These large BS PIT changes began 5th to 6th July 2024 (plot). This is the day shift from when the first lock like this happened, 5th July 2024 19:26 UTC (12:26 PT): 78877; at the time we were doing PR2 spot moves. There was also a SUS computer restart (78892), but that appeared to be a day after this started happening.
Sheila, Camilla
This reminded Sheila of when we were heating a SUS in the past, causing the bottom mass to pitch and the ASC to move the top mass to counteract it. Then, after lockloss, the bottom mass would slowly go back to its nominal position.
We do see this on the BS since the PR2 move, see attached (top 2 left plots). In the green bottom-mass oplev trace, when the ASC is turned off on lockloss, the BS moves quickly and then slowly moves again over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot before the PR2 move. Below is a list of other PR2 positions we tried; all of the other positions have also caused this BS drift. The total PR2 move since the good place is ~3500 urad in yaw.
To avoid this heating and BS drift, we should move back toward a PR2 yaw closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.
I did a bit of alog archaeology to re-remember what we'd done in the past.
To put back the soft turn-off of the BS ASC, I think we need to:
Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight. Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian and loaded it. We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (e.g., from a DRMI lockloss).
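For concreteness, here is a rough sketch of what a "soft let-go" of the BS pitch drive could look like in guardian-style code; the channel names, ramp time, and placement in the lockloss handling are hypothetical, not the actual ISC_DRMI implementation:

```python
# Hypothetical sketch of a "soft let-go" of the BS pitch ASC drive.
# Instead of clearing the control signal instantly at lockloss, set a long
# ramp time and ramp the loop output gain to zero, so the top-mass drive
# (and hence the bottom-mass pitch) relaxes slowly rather than stepping.
# Channel names and the 30 s ramp are illustrative only.
SOFT_RAMP = 30  # seconds, illustrative

ezca['ASC-MICH_P_TRAMP'] = SOFT_RAMP  # hypothetical BS pitch loop filter module
ezca['ASC-MICH_P_GAIN'] = 0           # drive decays over SOFT_RAMP instead of snapping off
```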
This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different.
In the attached plots, the sharp spike-up-and-decay-down behavior around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread). Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much and is mostly flattened out after a shorter amount of time. I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time). However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.
We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments. If needed, we can try to tweak the time constant of the 'soft let-go' to further make the optical lever signal stay more overall flat.
The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in an SDF revert. The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1; I may need to go through and ensure it's on for MICH initial alignment.
RyanS, Jenne
We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and at where it wants to be for the next acquisition of DRMI. Attached is one such example.
Alternatively, a day or so ago Tony had to do an initial alignment. On that day, it seemed like the BS took much longer to get to its quiescent spot. I'm not yet sure why the behavior is different sometimes.
Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.