Matt Todd, Jennie Wright, Sheila Dwyer
Today we lost lock right before the commissioning window, so we took the opportunity while out of lock to make another attempt at moving the spot on PR2, correcting some mistakes made previously. Here's an outline of the steps to take:
When relocking:
Today, we did not pico on these QPDs, but we need to. We will plan to do that Monday or Tuesday (next time we relock), and then we will need to update the offsets.
Today, I also forgot to revert the change to ISC_DRMI before we went to observing. So, I've now edited it to turn back on the PRC1 + PRC2 loops, but someone will need to load ISC_DRMI at the next opportunity.
J. Kissel, at the prodding of S. Dwyer, A. Effler, D. Sigg, B. Weaver, and P. Fritschel

Context
The calibration of the DC alignment range / position of the ITMY CP, aka CPy, has been called into question recently under the microscope of "how misaligned is the ITMX CP, and do we have the actuation range to realign it?", given that it's been identified to be a cause of excess scattered light (see e.g. LHO:82252 and LHO:82396).

What's in question / What Metrics Are Valid to Compare
Some work has been done in LHO:77557 to identify that we think CPy is misaligned "down," i.e. in positive pitch, by 0.55 [mrad] = 550 [urad]. Peter reminds folks in LHO:77587 that the DC range of the top mass actuators should be 440 [urad], and estimates that this drives ~45 [mA] through the coils, pointing to my calibration of the coil current readbacks from LHO:77545. But in that same LHO:77587, he calls out that the slider and OSEM calibrations into [urad] disagree by a factor of 1130 / 440 = 2.5x. #YUCK1. And Daniel points out that there's a factor of 2x error in my interpretation of the coil current calibration from LHO:77545. #YUCK2.

Note -- there's conversation about the optical lever readback disagreeing with these metrics as well, but the optical lever looks at the HR surface of the main chain test mass, so it's a false comparison to suggest that this is also "wrong." Yes, technically the optical lever beam hits and reflects off some portion of all surfaces of the QUAD, but by the time these spots all hit the optical lever QPD, they're sufficiently spatially separated that we have to choose one, and the install team works hard to make sure that they've directed the reflection off of the test mass HR surface onto the QPD and no other reflection. That being said, the fact that these optical lever readings of the test mass have been identified to be wrong in the past as well (see LHO:63833 for ITMX and LHO:43816 for ITMY) doesn't help the human sort out which wrong metrics are the valid ones to complain about in this context. #FACEPALM

So, yes, there are a lot of confusing metrics floating around, and all the ones we *should* be comparing disagree -- and seemingly by large factors of 2x to 4x. So let's try to sort out the #YUCKs.

Comparing the big picture of all the things that "should" be the same -- #YUCK1
In our modeling and calibrating, we assume:
(1) All ITM reaction chains have the same dynamical response (in rotation, for the on-diagonal terms, that's in units of [rad]/[N.m]),
(2) All OSEM sensors on all suspensions have been normalized to have the same "ideal" calibration from ADC counts to [um],
(3) All mechanical arrangements of OSEMs are the same, so we can use the same lever arm to convert an individually sensed OSEM [um] into a rotation in [urad], and, vice versa, a requested [N.m] drive in the EULER basis creates the same force [N] at each OSEM coil, and
(4) All OSEM top-mass actuator chains are the same (with 18- or 20-bit DACs, QTOP coil drivers, and 10x10 magnets), so the same DAC counts produce the same force at the OSEM's location.

In order to check "are the same sensors / actuators reporting different values for (ideally) the same mechanical system?", I used our library of historical data. The yaw comparison, allquads_2025-02-06_AllITMR0Chains_Comparison_R0_Y-Y_TF_zoomed.pdf, looks consistent across suspensions. However, for pitch, we do see a good bit of difference in allquads_2025-02-06_AllITMR0Chains_Comparison_R0_P-P_TF_zoomed.pdf.
Of course, we're used to looking at these plots over many orders of magnitude and calling what we see "good enough" as long as the resonances are all in the right place. If I actually call out the DC magnitude of the transfer functions in the comparisons, you do actually see several factors of two, and differences between our four instantiations of the same suspension:

ITM R0 P2P TF  | DC magnitude | Model/Meas |       | L1 ITMY / Others
Model          | 0.184782     |            |       |
Meas, L1 ITMX  | 0.0675491    | 2.7355     | ~3x   | 1.4028
Meas, L1 ITMY  | 0.0939456    | 1.9501     | ~2x   | 1
Meas, H1 ITMX  | 0.0517654    | 3.5696     | ~3.5x | 1.8305
Meas, H1 ITMY  | 0.0947546    | 1.9501     | ~2x   | ~1

So, there is definitely something different about these -- ideally identical -- suspensions. I think it's an amazing testament to the install teams that both L1's and H1's ITMY have virtually identical DC magnitude (and AC transfer functions).

Of course, "ideal," in terms of mechanics, is muddled by the cables that are laced thru the UIM / PUM / TST stages -- we've seen (from LONG ago) that specific (most) arrangements of the cables can stiffen the reaction chain, and setting the cables in such a way that they do *not* influence the pitch dynamics is hard -- see LHO:1769, LHO:2085, and LHO:2117. I attach the R0 P2P plot from LHO:2117 that shows how much influence the cabling *can* have. I had the impression that this was only an impact on the "3rd" mode of the transfer function, but when you actually look at it with these "factors of two at DC" in mind, the data clearly shows cabling impact on the DC stiffness as well, and again factors of two are possible with different cable arrangements.

So, in PITCH, when we request the actuators to push these suspensions at DC, we may get a different answer at the optic, i.e. the compensation plate, or CP. This may be some source of the disagreement between the OSEM *sensors* and the requested drive from the OSEM coil sliders.

Resolving how much current is being driven through the coils, as reported by the FASTIMON or RMSIMON channels -- #YUCK2
(A) At H1, I can confirm that all the QUADs' top masses, both main chain and reaction chain, are using QTOP coil drivers, as designed, with no modifications -- see the e-Travelers within the "Quad Top Coil Drivers" serial numbers listed as related to H1 SUS C5 (S1301872).
(B) I was about to make the same claim for L1, but in doing the due diligence with L1 SUS C5 (S1105375), I see that the S1000369 Quad Top Driver was modified to give more drive strength on ITMY R0 F1, F2, F3 -- the pitch and yaw coils -- and there's no follow-up record suggesting it was reverted. The work permit from Stuart Aston mentioned in LLO:28375 indicates a request to increase the strength by 25%. The action is also documented by Carl Adams and Michael Laxen in LHO:28301. It would be helpful to confirm whether this mod is still in place, and if not, the e-Traveler should be updated with a record of the reversion. I'm guessing the mod is still in place, because there's mention of the serial number that was originally there being swapped in elsewhere in 2019 -- see LLO:46238.
(C) That being said, I can at least make the statement confidently that all QUAD TOP coil drivers in play are using the same, original noise monitor circuit, D070480.
(D) Looking back at all the content on the DCC page, Daniel's right about my mis-calibration of the coil driver current monitor from LHO:77545. This darn monitor circuit will be the death of me.
The error comes from a misunderstanding of how the single-ended output of the current monitor circuit is piped into our differential ADCs -- namely the line "single-ended voltage piped into only one leg of differential ADC -> factor of two" regarding its DB25 output J1 in the interconnect drawing, because of which I added the factor of 2 [V_DF] / 1 [V_SE] to the calibration. If you look at the interconnect drawing, you can see that the "F" (for FASTIMON) and "S" (for slow RMSIMON) single-ended voltages are piped into the output DB25's positive pins, and the negative pins are connected to 0 V. This is a bit unusual, because typical LIGO differential ADC driver circuits copy and invert the single-ended voltage, piping the original single-ended voltage to the positive leg and the inverted copy to the negative leg, such that V_SE = V_{D+} = -V_{D-}.

Comparing these two configurations:
(i) piping a single-ended voltage into only one leg, and 0 V into the other, yields
    (V_{SE} - V_{REF}) - (0 - V_{REF}) = V_{SE}
(ii) copying and inverting the single-ended voltage yields
    (V_{SE+} - V_{REF}) - (V_{SE-} - V_{REF}) = (V_{SE} - V_{REF}) - (-V_{SE} - V_{REF}) = 2 V_{SE}

So, I'd used the (ii) configuration's calibration rather than (i), which is the case for the current monitors (and everything on that noise monitor board). The corrected RMSIMON calibration is thus

    calibration_QTOP [ct/A] = 2 * 40.00 [V/A] * (10e3 / 30e3) * 1 * (2^16 / 40 [ct/V]) = 4.3691e+04 [ct/A]

or 43.691 [ct/mA], or 0.0229 [mA/ct].

Taking the values Peter shows for the F1 RMSIMON in his ndscope session in LHO:77587:

Slider [urad] | RMSIMON [ct] | RMSIMON [mA]
440           | 4022.74      | 92.0732
0             | 113.972      | 2.6086
Delta         | 3908.77      | 89.4646

So, we're already driving a lot of coil current into the BOSEMs, if this calibration doesn't have any more flaws in it. I'd also like to confirm with LLO that they've still got 25% more range on their ITMY QUAD top coil driver, because if they're consistently using any substantial amount of that supposed range, then they've been holding these BOSEMs at larger than 100 [mA] for a long time, which goes against Dennis' old modeled requirement (see LLO:13456). I'll follow up next Tuesday with some cold, hard measurements to back up the model of the coil driver and its current monitor.
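To keep the arithmetic straight, here's a minimal Python sketch that just reproduces the corrected calibration number and the count-to-current conversion quoted above; the slider/RMSIMON values are those from Peter's ndscope in LHO:77587, and this is only an arithmetic check, not a substitute for the promised bench measurement.

```python
# Arithmetic check only: reproduce the corrected RMSIMON calibration and the
# count-to-current conversion quoted above. Values are those stated in this
# entry; this is not a substitute for a bench measurement of the monitor.

V_per_A  = 40.00           # [V/A] current monitor gain, per this entry
divider  = 10e3 / 30e3     # resistive divider on the monitor output
se_to_df = 2.0             # single-ended -> differential factor used above
ct_per_V = 2**16 / 40.0    # [ct/V] 16-bit ADC over a 40 V span

ct_per_A = se_to_df * V_per_A * divider * ct_per_V
print(f"calibration_QTOP = {ct_per_A:.4e} ct/A = {ct_per_A/1e3:.3f} ct/mA")

# Peter's F1 RMSIMON values from LHO:77587, converted with the above:
for label, counts in [("440 urad slider", 4022.74),
                      ("0 urad slider", 113.972),
                      ("delta", 3908.77)]:
    print(f"{label:>16s}: {counts / (ct_per_A / 1e3):8.4f} mA")
```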
Closes FAMIS#26029, last checked 82536
Compared to last week:
All corner station (except HAM8) St1 H2 is elevated at 5.8Hz
All corner station (except HAM8) St1 elevated in all sensors at 3.5 Hz
HAM 5/6 H3 elevated at 8Hz
ITMX ST2 H3 elevated between 5.5 and 8.5 Hz
ITMY ST2 H1/H2/H3 elevated between 5.5 and 8 Hz
BS ST2 H1/H2/H3 elevated between 5.5 and 8 Hz
Thu Feb 06 10:05:35 2025 INFO: Fill completed in 5min 32secs
TCmins [-62C, -44C] OAT (-2C, 28F) DeltaTempTime 10:05:47
Overnight, the electric reheat in the ducting of the VEA continued heating the space, even though the automation system was commanding the heat off. The space rose several degrees over setpoint, which made relocking the IFO infeasible. I checked the control circuit of the heater and didn't find any obvious problems. Once I re-energized the control circuit, the heat remained off, which makes it difficult to pin down the cause of the problem. I watched the heater cycle normally per the automation command, so for the time being it is working correctly. I will monitor it throughout the day.
Planned Saturday Calibration sweep done using the usual wiki.
Simulines start
PST: 2025-02-06 08:36:47.419400 PST
UTC: 2025-02-06 16:36:47.419400 UTC
GPS: 1422895025.419400
Simulines stop, we lost lock in the middle of the measurement.
PST: 2025-02-06 08:58:38.495605 PST
UTC: 2025-02-06 16:58:38.495605 UTC
GPS: 1422896336.495605
TITLE: 02/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY:
H1 Manager contacted me because we couldn't get into squeezing; the filter cavity was having trouble locking even in green. It was locking on the wrong modes (02 and 01), with FC_TRANS_C_LF_OUTPUT reading below 100. Referencing 72084, I paused the FC guardian, closed the servo for SQZ green, and checked the FC2 driftmon. FC2 had drifted a LOT over the past few hours (ndscope). I tried moving the sliders until we were back to where we were 4 hours ago, when our squeezing was good, but this wasn't enough to get back to a 00 mode, and moving the sliders around more in other directions didn't get me closer either. I then checked the SQZ troubleshooting wiki and plotted the M3 witness channels along with the driftmon for both FC1 and FC2. I adjusted the sliders according to the witness channels for FC1 and we got back to a 00 mode! I then fiddled back and forth between using the driftmon channels and the witness channels to maximize FC_TRANS_C_LF_OUTPUT until we looked good, then I unpaused the FC node and we were able to enter FDS and then Observing!
TITLE: 02/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
To Do List: Hit LOAD for VIOLIN_DAMPING guardian at next lockloss and use settings for IY5 that RyanC mentions in his summary.
Overall, the ITMY mode 5/6 violins are slowly being damped down with Ryan's settings, and I have been monitoring to see that they continue to damp down. Hopefully H1 can stay locked overnight to damp these modes down.
Had about 60-90 min of snow falling and sticking tonight (less than about 0.5").
LOG:
For FAMIS #26359: All looks well for the last week for all site HVAC fans (see attached trends).
3-month trend for the SUS HWWDs.
Over this 3-month stretch, we get at least ONE bit for all FOUR test masses. No further action needed.
Attached are monthly TCS trends for CO2 and HWS lasers. (FAMIS link)
TITLE: 02/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 11mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
H1's been locked 5.75 hrs. Ryan-C mentioned the issues with the IY5 violin mode, but it sounds like he has a setting that is slowly damping it down. If there is a lockloss, I need to do a LOAD of the VIOLIN_DAMPING guardian (so IY5 damps with 0 gain) & then enter the settings which work for him.
Microseism has been trending down over the last 8-ish hours and winds look a little calmer compared to 24hrs ago.
TITLE: 02/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: 1 lockloss with an easy relock, we've been locked for 6 hours.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LASER SAFE ( •_•) | LVEA | SAFE! | LVEA SAFE!!! | 19:08 |
17:13 | PSL | Jason | CR | N | ISS loop adjustment | 17:15 |
17:15 | CAL | Tony | PCAL lab | Y | PS11 measurement | 17:40 |
18:03 | FAC | Kim | Receiving | N | Cardboard | 18:54 |
18:12 | VAC | Travis | LVEA | N | Quick part check near high bay | 18:16 |
19:31 | FIT | Matt | Xarm | N | Runnin' | 20:13 |
21:36 | ISC | Matt | Optics lab | N | CHETA optics mounting | 22:28 |
21:40 | CAL | Tony | PCAL lab | Y | PCAL work | 22:28 |
17:10 UTC lockloss
18:34 UTC Observing
ITMY mode 5/6 damping didn't seem to be going well today; I've been monitoring it on DTT after seeing the tall line in DARM and the DCPD min/max plot looking flat. I've found some new settings that seem to be bringing it down, based on the long & short monitors and DTT, but more testing is needed to confirm. The new settings are FM5 + FM6 + FM10 with G = -0.01, removing FM8 (tagging SUS, OPS). I've set the gain to 0 in lscparams but I have not loaded the VIOLIN_DAMPING guardian.
21:52 UTC superevent S250205ee
00:00 - 00:03 UTC we dropped observing as the SUS_PI guardian fought (successfully) PI 24 in the new "XTREME_PI_DAMPING" state
Sheila, Camilla, follow on from 82640.
We made some more changes to SQZ_MANAGER to hopefully simplify it:
Saved and added to svn but not loaded.
Once reloaded, as states have been changed, any open SQZ_MANAGER MEDM screens should be closed and reopened.
I did load this today, there don't seem to have been any issues in this lock.
Sheila and I are continuing to check various PD calibrations (82260). Today we checked the POP A LF calibration.
Currently there is a filter labeled "to_uW" that is a gain of 4.015. After some searching, Sheila tracked this to an alog by Kiwamu, 13905, with [cnts/W] = 0.76 [A/W] x 200 [Ohm] x 2^16 / 40 [cnts/V]. Invert this number and multiply by 1e6 to get uW/ct.
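As a quick self-consistency check of that gain (values exactly as quoted above), a short sketch:

```python
# Check of the "to_uW" gain traced to Kiwamu's alog 13905, using the values
# quoted above.
responsivity   = 0.76          # [A/W]
transimpedance = 200.0         # [Ohm]
adc_ct_per_V   = 2**16 / 40.0  # [cnts/V]

ct_per_W = responsivity * transimpedance * adc_ct_per_V
print(f"{ct_per_W:.1f} cnts/W  ->  {1e6 / ct_per_W:.3f} uW/ct")   # ~4.015 uW/ct
```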
Trusting our recalibration of IM4 trans, we have 56.6 W incident on PRM. We trust our PRG is about 50 at this time, so 2.83 kW are in the PRC. PR2 transmission is 229 ppm (see galaxy optics page). Then, the HAM1 splitter is 5.4% to POP (see logs like 63523, 63625). So we expect 34 mW on POP. At this time, there was about 30.5 mW measured on POP according to Kiwamu's calibration.
I have added another filter to the POP_A_LF bank called "to_W_PRC", that should calibrate the readout of this PD to Watts of power in the PRC.
POP_A_LF = T_PR2 * T_M12 * PRC_W, and T_PR2 is 229 ppm and T_M12 is 0.054. I also added a gain of 1e-6 since FM10 calibrates to uW of power on the PD.
Both FM9 (to_W_PRC) and FM10 (to_uW) should be engaged so that POP_A_LF_OUT reads out the power in the PRC.
I loaded the filter but did not engage it.
More thoughts about these calibrations!
I trended back to last Wednesday to get more exact numbers.
input power = 56.8 W
PRG = 51.3
POP A LF (Kiwamu calibration) = 30.7 mW
predicted POP A LF = 0.054 * 229 ppm * 56.8 W * 51.3 W/W = 36 mW
ratio = 30.7 mW / 36 mW = 0.852
If the above calibrations of PRG and input power are correct, we are missing about 15% of the power on POP.
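For reference, here's a minimal sketch of this consistency check using the numbers above; the only inputs are the quoted T_PR2, T_M12, input power, PRG, and the measured POP A LF value.

```python
# Consistency check with the trended numbers above. T_PR2 and T_M12 are the
# values quoted in this entry (229 ppm and 5.4%).
P_in  = 56.8      # [W] input power
PRG   = 51.3      # [W/W] power recycling gain
T_PR2 = 229e-6    # PR2 transmission
T_M12 = 0.054     # HAM1 splitter fraction sent to POP

P_PRC      = P_in * PRG                  # [W] circulating power in the PRC
P_POP_pred = P_PRC * T_PR2 * T_M12       # [W] predicted power on POP
P_POP_meas = 30.7e-3                     # [W] POP A LF, Kiwamu calibration

print(f"PRC power               : {P_PRC / 1e3:.2f} kW")
print(f"predicted POP A LF      : {P_POP_pred * 1e3:.1f} mW")
print(f"measured / predicted    : {P_POP_meas / P_POP_pred:.3f}")   # ~0.85

# Inverting the same chain is what the new FM9 "to_W_PRC" filter does:
print(f"PRC power implied by POP: {P_POP_meas / (T_PR2 * T_M12) / 1e3:.2f} kW")
```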
Sheila, Mayank
We tried measuring the width of the aperture of the scraper baffle in front of PR2.
The plan was to change the beam spot position on PR2 while simultaneously monitoring the circulating power in the X arm (ASC-X_PWR_CIRC_OUT, a calibrated channel derived from ASC-X_TR_A_SUM_OUT_16 and ASC-X_TR_B_SUM_OUT_16). When the beam starts hitting the edge of the scraper baffle aperture, the circulating power in X will drop.
In order to change the PR2 beam spot, we changed the PR3 yaw slider while running the PR2spotmove script (it adjusts the slider values of PR2, PRM, and IM4 in such a way that the light going from PR3 to the interferometer remains unchanged while the spot on PR2 moves).
Steps we followed
A2L procedure
What we did
2. Performed A2L on the three mirrors PR2, PRM, PR3 to estimate the original position. The gains which minimized the three injection peaks were.
3. The PR3 yaw slider was decreased to 75.2 such that the TRX value dropped to 0.047 (6%).
Performed A2L on the three mirrors PR2, PRM, PR3 to estimate one edge of the baffle. The gains which minimized the three injection peaks were
4. The PR3 yaw slider was increased to 103 such that the TRX value dropped to 0.047 (6%).
Performed A2L on the three mirrors PR2, PRM, PR3 to estimate the other edge of the baffle. The gains which minimized the three injection peaks were
As expected the beam spot does not move on PR3 and PRM.
The PR2 beam spot moved by 1.612 mm.
Thoughts: Sheila suggested that this movement of 1.612 mm is too small to see clipping at the baffle; most likely the reduction in circulating power was due to something else. To probe this, we tried to go further on the PR3 yaw slider (greater than 103 and less than 75.2), but the X arm was not locking. This was probably because we were not getting the required PDH signal for X arm locking at the POP port due to misalignment. Sheila suggested that we can
[M. Todd, C. Compton, G. Vajente, S. Dwyer]
To understand the effect of the Relative Intensity Noise (RIN) of the CO2 laser (Access 5W L5L) proposed for CHETA on the DARM loop, we've done a brief study to check whether adding the RIN as displacement noise in deltaL will cause saturation at several key points in the DARM loop, such as the ESD driver and the DCPDs. The estimates we've made of the RIN at these points are calibrated with the DARM model in pydarm, which models the DARM loop during Nominal Low Noise; however, appropriate checks have been made that these estimates are accurate, or at least over-estimate the effects, during lower power stages (when the CHETA laser will be on).
This estimate is done by propagating displacement noise in deltaL (how CHETA RIN is modeled, m/rtHz) to counts RMS of the ESD DAC. The RMS value of this should stay below 25% or so of the saturation level of the DAC, which is 2**19. To do this, we multiply the loop suppressed CHETA RIN (calibrated into DARM) by the transfer functions mapping deltaL to ESD counts (all are calculated at NLN using pydarm).
The CHETA RIN in ESD cts RMS is 0.161% of the saturation level, and in L2 coil cts RMS is 1.098%, and in L3 coil cts RMS is 0.015%. It is worth noting that the CHETA RIN RMS at these points is around 10x higher than that which we expect with just DARM during NLN.
We also checked to make sure that the ESD cts RMS during power-up states is not higher than that during NLN, meaning the calibration using NLN values gives us a worst case scenario of the CHETA RIN impact on ESD cts RMS.
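To make the bookkeeping concrete, here's a rough sketch of the ESD check described above. The transfer functions and RIN coupling below are illustrative placeholders only (the real study uses the pydarm products listed in the figures), so the printed number is not meaningful; it just shows how the RMS-versus-saturation comparison is formed.

```python
import numpy as np

# Illustrative placeholders only -- the real calculation uses pydarm transfer
# functions; shapes and levels here are made up just to show the bookkeeping.
f = np.logspace(1, 4, 2000)                       # [Hz]; RMS integrated from 10 Hz up
rin_cheta     = 1e-5 * np.ones_like(f)            # CHETA RIN [1/rtHz] (flat, assumed)
rin_to_darm   = 1e-13 / f**2                      # RIN -> deltaL coupling [m] (toy)
loop_suppress = np.abs(1j * f / (1j * f + 50.0))  # |1/(1+G)| (toy one-pole loop)
darm_to_l3dac = 1e16 * np.ones_like(f)            # deltaL_ctrl -> ESD DAC [cts/m] (toy)

asd_dac = rin_cheta * rin_to_darm * loop_suppress * darm_to_l3dac  # [cts/rtHz]
rms_dac = np.sqrt(np.trapz(asd_dac ** 2, f))                       # RMS above 10 Hz

dac_full_scale = 2 ** 19
print(f"ESD DAC RMS = {100 * rms_dac / dac_full_scale:.2f}% of full scale (want < ~25%)")
```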
List of Figures:
1) Loop Model Diagram with labeled nodes
2) CHETA RIN in ESD cts RMS
3) CHETA RIN in L2coil cts RMS
4) CHETA RIN in L1coil cts RMS
5) DARM Open Loop Gain - pydarm
6) DARM Sensing Function - pydarm
7) DARM Control Function (Digitals) - pydarm
8) Transfer Function: L3DAC / DARM_CTRL - pydarm
9) Transfer Function: L2DAC / DARM_CTRL - pydarm
10) Transfer Function: L1DAC / DARM_CTRL - pydarm
11) ASD/RMS ESD cts during power-up states - diaggui H1:SUS-ETMX-L3_MASTER_OUT_UL_DQ
12) CHETA RIN ASD (raw)
This estimate is done by propagating displacement noise in deltaL (how CHETA RIN is modeled, m/rtHz) to counts RMS of the DCPD ADC. The RMS value of this should stay below 25% or so of the saturation level of the ADC, which is 2**15. To do this, we multiply the loop-suppressed CHETA RIN (calibrated into DARM) by the transfer functions mapping deltaL to DCPD ADC counts, using the filters in the Foton files. This gives us the whitened ADC counts, so by multiplying by the anti-whitening filter we get the unwhitened DCPD ADC cts RMS, which is what is at risk of saturation.
The CHETA RIN in DCPD cts RMS is 3.651% of the saturation level. Again, it is worth noting that the CHETA RIN RMS at this point is around 10x higher than that which we expect with just DARM during NLN.
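A similar sketch for the DCPD ADC path, again with placeholder inputs (toy count level and a toy anti-whitening shape), just to illustrate the anti-whitening step and the comparison against ~25% of 2**15:

```python
import numpy as np

# Placeholder inputs -- this only illustrates the anti-whitening step and the
# comparison against ~25% of the 16-bit ADC range.
f = np.logspace(1, 4, 2000)                        # [Hz]
asd_whitened_cts = 1e-1 * np.ones_like(f)          # suppressed CHETA RIN at the DCPD,
                                                   # in whitened ADC counts [cts/rtHz] (toy)
# Toy anti-whitening: invert a single whitening stage (zero at 1 Hz, pole at 100 Hz)
antiwhiten = np.abs((1 + 1j * f / 100.0) / (1 + 1j * f / 1.0))

asd_unwhitened = asd_whitened_cts * antiwhiten     # unwhitened counts, at risk of saturating
rms_adc = np.sqrt(np.trapz(asd_unwhitened ** 2, f))

adc_full_scale = 2 ** 15
print(f"DCPD ADC RMS = {100 * rms_adc / adc_full_scale:.2f}% of full scale (want < ~25%)")
```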
We also checked to make sure that the DCPD-A ADC channel is coherent with DARM_ERR. In short, it is coherent up to 300 Hz, where control noise dominates our signal; above 300 Hz, shot noise becomes the dominant noise source and reduces the coherence.
List of Figures:
1) Loop Model Diagram with labeled nodes
2) CHETA RIN in DCPD ADC cts RMS
3) Transfer Function: DCPD-ADC / DELTAL_CTRL
4) Coherence: DCPD-A / DARM_ERR
Calibrating CHETA RIN to ESD cts RMS
Calibrating CHETA RIN to DCPD ADC cts RMS
Previous related alogs:
1) alog 82456
Is the propagation of RIN into displacement consistent with the photothermal calculations done by Braginsky and Cerdonio? One can use Eq. 8 of Braginsky (1999) except with the replacement of the absorbed shot noise power 2 hbar omega_0 Wabs with the absorbed classical laser power. Then using
alpha = 0.6 ppm/K
sigma = 0.17
rho = 2200 kg/m^3
C = 700 J/(kg K)
r0 = 53 mm / sqrt(2)
I find sqrt(Sxx) = 1.6e-18 m/rtHz as the displacement from a single test mass assuming a CHETA RIN of 1e-5/rtHz and an absorbed power of 1 W.
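A minimal sketch of that estimate, assuming the adiabatic-limit form of Braginsky's Eq. 8 with the shot-noise power fluctuation sqrt(2 hbar omega_0 W_abs) replaced by the classical fluctuation RIN x W_abs, and evaluated at 100 Hz (the evaluation frequency and the exact prefactor are my assumptions, not stated above):

```python
import numpy as np

# Parameters as listed above
alpha = 0.6e-6               # [1/K] thermal expansion coefficient
sigma = 0.17                 # Poisson ratio
rho   = 2200.0               # [kg/m^3] density
C     = 700.0                # [J/(kg K)] specific heat
r0    = 53e-3 / np.sqrt(2)   # [m] beam radius parameter

RIN   = 1e-5                 # [1/rtHz] CHETA RIN
W_abs = 1.0                  # [W] absorbed power
freq  = 100.0                # [Hz] evaluation frequency (assumed)

# Adiabatic photothermal response per test mass, with the shot-noise power
# fluctuation replaced by the classical one, RIN * W_abs:
sqrt_Sxx = (alpha * (1 + sigma) / (rho * C)
            * RIN * W_abs / (np.pi * r0**2 * 2 * np.pi * freq))
print(f"sqrt(Sxx) ~ {sqrt_Sxx:.1e} m/rtHz")   # ~1.6e-18 m/rtHz
```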
[M. Todd, E. Hall]
Indeed, the propagation of RIN into DARM laid out in T050064 is consistent with the work done by Braginsky and Cerdonio. The calibration follows the form in Figure 1.
Attached is a comparison plot of the two propagations, using the parameters set above in Evan's comment.
Updating this post with some busier plots that show how other CO2 laser noise is projected into the various stages, as well as adding flat RIN curve propagations to give an intuition as to which RIN levels we do not even need to worry about in NLN.
I've also reattached the codes used because of a correction to the way the ASD integration was being done.
The plots also extend to lower frequency to show the behavior of the RIN propagation to each channel (mostly falling off below 10 Hz). This is why we take the "RMS" value to be the integrated value of the ASD at 10 Hz and compare that to the saturation limit. It also gives a better display of the RMS from DARM in NLN as propagated to the above channels, showing that overall the RIN should have a small effect on these drives and ADCs.
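For clarity, "the integrated value of the ASD at 10 Hz" here is the reverse-cumulative RMS evaluated at 10 Hz; a small sketch (with a made-up ASD) of how that number is formed:

```python
import numpy as np

def rms_above(f, asd, f_lo=10.0):
    """Reverse-cumulative RMS of an ASD evaluated at f_lo:
    RMS(f_lo) = sqrt( integral from f_lo to f_max of ASD(f)^2 df )."""
    sel = f >= f_lo
    return np.sqrt(np.trapz(asd[sel] ** 2, f[sel]))

# Made-up ASD that rolls off below 10 Hz: the RMS above 10 Hz captures
# essentially all of the broadband content.
f = np.logspace(0, 3, 3000)
asd = 1.0 / (1.0 + (10.0 / f) ** 2)
print(rms_above(f, asd, 10.0), rms_above(f, asd, 1.0))   # nearly identical
```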
Sheila, Dave, Ryan Crouch, Tony
This afternoon after the maintenance window, when we first started using guardian, once the ASC safe.snap was loaded by SDF revert we started sending large signals to the quads. We found that this was due to the camera servos having their gains set to large numbers; this was how they were set in the safe.snap file.
After I set these to zero in safe.snap (which is really down.snap), Ryan again went through the guardian down, and this time we started to saturate the quads because of the arm ASC loops (which we probably didn't notice the first time because we ran down as soon as we saw that there was a problem, and down would turn these off but not the camera servos).
Dave looked in the SVN for this file, which he had committed this morning with this set of diffs: diffs from this morning's svn commit. Looking through these, it kind of seems like the safe.snap may somehow have been overwritten with the observe.snap file.
Dave reverted that to the file from 7 days ago, which has Elenna's changes to the POP QPD offsets. Then I reverted all the diffs, so that we set all settings back to 7 days ago except those that are not monitored.
After this, Mayank and I were using various initial alignment states to make some clipping checks, which Mayank will alog. We noticed that the INP1Y loop (to IM4) was oscillating, so we reduced the gain in that from 10 to 40, on line 917 of ALIGN_IFO.py. We also saw that there is an oscillation in the PRC ASC if we sit in PRX, but we haven't fixed that. These should not be due to whatever our safe.snap problem is, we hope.
Edit to add: We looked at the last lockloss, when the guardian went through SVN revert at 7 am yesterday, Feb 3rd. It looks like the camera gains were 0 in the safe.snap at that time, but they were 100 by the time we did the SDF revert at 20:51 UTC (1 pm Pacific) today.
We need to add one more step to this procedure: pico on the POP QPDs 82683
For more context, here's a brief history of where our spot has been:
Today, we have some extra nonstationary noise between 20-50 Hz, which we hoped would be fixed by pico'ing on the POP QPDs, but it hasn't been fixed, as you can see from the range and Rayleigh statistic in the attachments.
Back in May 2024, we had an unrelated squeezer problem that caused some confusion: 78033. We were in this alignment from 5/20/24 at 19 UTC until 5/23/24 at 15 UTC. We did not see this large glitchy behavior at that time, and there was a stretch of time when the range was 160 Mpc, although there were also times when the range was lower.