Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
IFO is in ENGAGE_ASC_FOR_FULL_IFO and LOCKING
Got called at 2:30 AM after Guardian failed to relock ALS during 30 mph wind gusts (which seem to have been the cause of the lockloss itself). The reason for the call was that ISC_LOCK had stalled after a few attempts; I simply unstalled it. However, DRMI was still having trouble locking and alignment looked poor. A 5.7 EQ (plus an aftershock) delayed my running initial alignment, but it ran automatically after the EQ passed. Locking has so far been fully automatic and DRMI locked with a much better signal. The ISC_LOCK stalling issue doesn't seem to have recurred while relocking.
TITLE: 07/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Lockloss at the beginning of the shift, high winds made relocking a struggle till they calmed down in the last 1.5 hours. We've been locked for about an hour as of 05:00 UTC.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:56 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD ദി(⎚_⎚) | 14:34 |
| 22:46 | ISS | Keita | Optics Lab | No | Working on PSL ISS system | 00:08 |
00:11 UTC Observing
00:23 UTC lockloss after only 15 minutes
The winds are hovering around 30-40 mph for gusts and 20-30 mph for the 3-minute average
We've lost lock in the same spot twice during CARM_OFFSET_REDUCTION with a large DHARD oscillation and a "Low recycling gain" warning. Potentially meaning alignment is off somewhere?
02:39 - 02:57 UTC Initial Alignment
04:08 UTC Observing
Originally, the suspensions that have already had their satamps swapped for ECR E2400330 had their compensation filters updated to a generic 5.31:0.0969 zp filter, which was put in using my script /ligo/svncommon/SusSVN/sus/trunk/Common/PythonTools/satampswap_generic_filterupdate_ECR_E2400330.py (r12449).
However, Jeff went through the entire set of new satamps and tested each channel to determine the actual transimpedance and frequency response of each channel. We can use the measured frequency response to replace the generic filter with more precise compensation filters, so I have made a script that does just that - it updates the compensation filters in OSEMINF FM1 for whichever suspension you want with the more accurate zero and pole values for each channel. This script can be found at /ligo/svncommon/SusSVN/sus/trunk/Common/PythonTools/satampswap_bestpossible_filterupdate_ECR_E2400330.py (r12449).
I have used this script to update the compensation filters for the suspensions that have already had their satamps swapped (see txt file): SR3, SRM, BS, PRM, PR3, ITMX, ITMY, SR2. These new filters have all been loaded in.
Running the Best Possible Filter Update Script
This script can be found at /ligo/svncommon/SusSVN/sus/trunk/Common/PythonTools/satampswap_bestpossible_filterupdate_ECR_E2400330.py. To run it, just pass the suspension name(s) with the --opt (-o) flag.
Examples:
python3 /ligo/svncommon/SusSVN/sus/trunk/Common/PythonTools/satampswap_bestpossible_filterupdate_ECR_E2400330.py --opt SR2
Or to update multiple suspensions at once:
python3 /ligo/svncommon/SusSVN/sus/trunk/Common/PythonTools/satampswap_bestpossible_filterupdate_ECR_E2400330.py -o SR2 ITMX PRM
Running the Generic Filter Update Script
In case the measurements aren't done but you would like to update the compensation filters, you can use the generic filter script, satampswap_generic_filterupdate_ECR_E2400330.py; an example invocation is below.
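Assuming the generic script takes the same --opt/-o argument as the best-possible script (an assumption on my part; confirm against the script itself):
python3 /ligo/svncommon/SusSVN/sus/trunk/Common/PythonTools/satampswap_generic_filterupdate_ECR_E2400330.py --opt SR2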
Ivey,
When fitting transfer functions for the OSEM estimator, I tested several transfer function fitting programs to find one that is both accurate and efficient.
A brief side note: in the 6/24 M1-to-M1 dataset that Oli took, we observed an unexpected right-hand zero near 1.2 Hz (see attached plot), where we had expected a left-hand zero. After comparing with the 4/15 and 4/18 datasets, we found no corresponding right-hand zero, suggesting the 6/24 zero is likely due to noise.
I wrote a document highlighting the pros and cons of each of the fitting programs here: TF Fitting Comparison (Google Doc). The programs include:
- Vectfit3 (I recommend using this with strong inverse weighting): Uses vector fitting. Created by Bjorn Gustavsen. Not built into MATLAB. Performs nearly identically to Vectfit4, but has more documentation.
- Rational: Built-in MATLAB function. Uses the AAA algorithm.
- Spectrumest: Built-in MATLAB function. Chooses an internal algorithm based on the data.
- Rationalfit: Built-in MATLAB function. Uses vector fitting.
Generally, these programs fit poles and end behavior well, but often compromise the accuracy of the fit in the zero regions. In those cases, we often have to manually refine the fit in the zero regions. However, Vectfit3 performs very well on the 6/24 M1-to-M1 and SusPoint-to-M1 datasets when used with strong inverse weighting. We were able to produce better fits than manual methods, and expect this approach to be more efficient for future use. Example plots are available here: Plot Comparison Slides (Google Slides).
Attached is the MATLAB code used for the M1-to-M1 fits. To use vectfit3, you must download the package here. All other functions used are built into MATLAB's toolboxes.
Measurement file paths:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/Common/Data/2025-06-24_1700_H1ISIHAM5_ST1_WhiteNoise_SR3SusPoint_L_to_Y_0p02to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-06-24_1900_H1SUSSR3_M1_WhiteNoise_L_to_Y_0p02to50Hz.xml
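The attached MATLAB code is the authoritative implementation; purely as an illustration of what "strong inverse weighting" means here, below is a minimal Python/numpy sketch of a linearized rational transfer-function fit in which each frequency point is weighted by 1/|H|^p, so the fit cannot gloss over the notch (zero) regions where |H| is small. The model orders and weighting exponent are illustrative assumptions, not the values used for the SR3 fits.

import numpy as np

def fit_rational_inverse_weighted(freq, H, n_num=2, n_den=4, weight_power=2.0):
    # Linearized (Levy-style) weighted least-squares fit of H(s) ~ N(s)/D(s),
    # with s = j*2*pi*f and D(s) normalized so its constant coefficient is 1.
    # weight_power > 0 emphasizes points where |H| is small (the zero regions).
    s = 2j * np.pi * np.asarray(freq, dtype=float)
    H = np.asarray(H)
    w = 1.0 / np.abs(H) ** weight_power               # "strong" inverse weighting
    # Residual to drive to zero: w * (N(s) - H * D(s))
    cols = [w * s**k for k in range(n_num + 1)]           # numerator coeffs b_k
    cols += [-w * H * s**k for k in range(1, n_den + 1)]  # denominator coeffs a_k
    A = np.column_stack(cols)
    rhs = w * H                                           # from the fixed a_0 = 1 term
    # Solve the complex least-squares problem via stacked real/imag parts
    x, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                            np.concatenate([rhs.real, rhs.imag]), rcond=None)
    b = x[:n_num + 1]                            # numerator, ascending powers of s
    a = np.concatenate([[1.0], x[n_num + 1:]])   # denominator, ascending powers of s
    return b, a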
TITLE: 07/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Two locklosses today, one of which was during commissioning time, from a M6.8 earthquake out of Panama. We're still recovering from the second lockloss, and even after running an initial alignment, there have been several locklosses at low states, possibly from rising wind speeds.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:56 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 14:34 |
| 14:50 | FAC | Randy | MY | N | Checking on forklift | 15:05 |
| 15:46 | PEM | Robert, Sam | LVEA/CR | - | Acoustic injections | 17:55 |
| 16:29 | FAC | Kim | MY | N | Technical cleaning | 17:30 |
| 16:29 | FAC | Nellie | MX | N | Technical cleaning | 17:13 |
| 17:16 | FAC | Tyler | BT near EX | N | Bees | 17:40 |
| 17:33 | PSL | Jason | MX | N | Inventory | 18:04 |
| 17:59 | PEM | Robert | LVEA | - | Replacing viewport covers | 18:16 |
| 22:46 | ISS | Keita | Optics Lab | N | ISS array work | Ongoing |
TITLE: 07/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 25mph Gusts, 16mph 3min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Lockloss @ 21:56 UTC after almost 2 hours locked - link to lockloss tool
This appeared to line up with an incoming M5.8 EQ from Japan, but the R-waves hadn't fully arrived and the ground wasn't moving all that much at the time. Also, the first errant thing I can see happening is a hit on DARM, which seems unusual.
00:11 UTC Observing
Ivey, Edgard
Follow up to Oli's estimator test [LHO: 85615].
Ivey took the data for the three tests that Oli ran and got some spectra to show that the estimator is working as expected. We can see three traces in each one of the figures attached:
The first figure shows the M1 Yaw signals as seen by the M1 OSEMs. As you can see, the spectrum is completely dominated by sensor noise, except at the suspension resonances. Under light damping, the OSEM actually sees the suspension modes ring, but under 'OSEM only' and Estimator damping, the resonances don't show up anymore, as expected. We note that there is a bit of the 3 Hz resonance that seems to not be fully damped by the estimator.
The second figure is one of the money plots. It shows the M3 Yaw as seen by the SR3 OPLEV. In it, we can see that the OSEM-only and the Estimator damping are both successful at damping down the resonances of the suspension. This means that the Estimator is working as intended. We note that in the estimator case, there is a bit of excess noise at 2 Hz.
The last figure shows the total drive request for the M1 OSEMs in drive cts. As expected, the Estimator is requesting less drive above 5 Hz when compared to the OSEM only damping (which is the current setting). Since this is completely aligned with our expectations, we believe this implies that the estimator is working as intended and we are likely reducing the SR3 M1 Y OSEM noise injection into M3 Y by a factor of 5 at 10 Hz and above.
We also note that the drive request at 2 Hz has a bump likely due to model mismatch. This effect is likely the cause for the excess 2Hz noise seen in the OPLEV signal. We might need to model it to see why it would lead to a coherent addition of motion that can be captured by the M3 OPLEV.
Since this test was so successful, our expectation is that we will be putting in place the OSEM calibrations, a Yaw estimator, and then a Pitch estimator for SR3 in the next few weeks. Stay tuned!
Jim, Elenna
We had a 6.6 earthquake begin rolling in from Panama, so Jim and I tried to take the ASC arm control loops to the high bandwidth state. I also turned off the LSC feedforward which drives the ETMY PUM.
This obviously creates a lot of noise in DARM, but we are curious to see if it helps us ride out a large earthquake.
This consists of ramping up the CHARD and DSOFT gains, switching several CHARD and DHARD filters, and zeroing the LSC feedforward gains, as shown in the code below.
Some of these things can be done by hand, but others, like transitioning filters and gains together, have to be done with guardian code to ensure they happen at the same time. I copied and pasted lines of code into a guardian shell.
These are the lines of code that will do everything I mentioned above:
ezca.get_LIGOFilter('ASC-CHARD_Y').ramp_gain(300, ramp_time=10, wait=False)
ezca.switch('ASC-CHARD_Y', 'FM3', 'FM8', 'FM9', 'OFF')
ezca.switch('ASC-DHARD_Y', 'FM1', 'FM3', 'FM4', 'FM5', 'FM8', 'OFF')
ezca.switch('ASC-CHARD_P', 'FM9', 'ON')
ezca.switch('ASC-CHARD_P', 'FM3', 'FM8', 'OFF')
ezca['ASC-CHARD_P_GAIN'] = 80
ezca.get_LIGOFilter('ASC-DSOFT_Y').ramp_gain(30, ramp_time=5, wait=False)
ezca.get_LIGOFilter('ASC-DSOFT_P').ramp_gain(10, ramp_time=5, wait=False)
ezca.switch('ASC-DHARD_P', 'FM4', 'FM8', 'OFF')
ezca['LSC-PRCLFF_GAIN'] = 0
ezca['LSC-MICHFF_GAIN'] = 0
ezca['LSC-SRCLFF1_GAIN'] = 0
I saved this as a script called "lownoise_asc_revert.py" in my home directory. This is a bit of a misnomer since it also reverts the LSC feedforward.
We are still locked so far, but we are waiting to see how this goes (R wave just arrived).
We lost lock when the ground motion got to be about 2.5 micron/s.
This was a "large earthquake" aka within the yellow band on the EQ response zone plot. Since these earthquakes are highly likely to cause lockloss, Jim and I are thinking we could try this high bandwidth control reversion for these earthquakes to see if this can help us survive the earthquake. This would take us out of observing and kill the range (we were at about 70 Mpc before the lockloss), but we could then go back to lownoise once the earthquake passes.
Jim also thinks he can make some adjustments to other seismic controls, but I'll let him explain how that would work since he is the expert.
I refined the script to include some sleeps, and wrote another script to revert the reversion so it will put the ASC and LSC feedforward back in the nominal low noise state.
Both scripts are attached. They should be tested!
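For illustration only (the attached scripts are the real thing), the "add some sleeps" refinement presumably looks like the pattern sketched below, letting each gain ramp finish before the next change; the sleep durations and the ezca setup lines are assumptions, not copied from the attached scripts.

import time
from ezca import Ezca   # assumed setup for running outside a guardian shell
ezca = Ezca()
ezca.get_LIGOFilter('ASC-CHARD_Y').ramp_gain(300, ramp_time=10, wait=False)
time.sleep(10)   # placeholder: let the CHARD_Y gain ramp complete
ezca.switch('ASC-CHARD_Y', 'FM3', 'FM8', 'FM9', 'OFF')
ezca.get_LIGOFilter('ASC-DSOFT_Y').ramp_gain(30, ramp_time=5, wait=False)
time.sleep(5)    # placeholder: let the DSOFT_Y gain ramp complete
ezca['LSC-PRCLFF_GAIN'] = 0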
I have added buttons to run modified versions of Elenna's scripts to my seismic overview ISI_CONFIG screen, the smaller red and blue buttons that say "ASC Hi Gn" and "ASC Low noise" in the upper left, but I don't think we are ready to suggest anyone use them yet. I added a bit of code to ask if you are sure, so they shouldn't be very easy to launch accidentally. I'm trying to compile some data to estimate how much run time we lose or could gain before investing much more effort in automating this.
FAMIS 31094
FSS TPD signal has been dropping over the past week, so it could probably do with some RefCav alignment adjustments soon. Otherwise, not much else to report.
I plan on taking a look at RefCav alignment during maintenance tomorrow; Ryan will perform a remote RefCav alignment today ONLY if a valid T.O.O. presents itself (i.e. a large EQ has us down for some time). The last time I did a remote alignment, on July 1st, I was not able to get back to the previous TPD voltage, an indication that something upstream of the picomotors is moving (this something is usually the double-pass AOM). Further, the RefCav REFL spot on the PSL Quad Display (the lower-left image) hints that an enclosure incursion may be necessary; the central spot is mostly centered between the upper and lower lobes (which hints that the alignment into the RefCav is likely OK), but the lobes themselves are much brighter than normal. This is generally an early indication that an on-table RefCav beam alignment is in our near future.
The plan at this point is for me to try a remote alignment early in the maintenance period, unless Ryan is able to try it during a T.O.O. today. Should the TPD return to above 0.8 V, then nothing further happens and we continue to monitor this as we normally do; if it cannot, then I will go into the enclosure and perform an on-table RefCav beam alignment (I have filed WP 12683 in case I do have to go into the enclosure). The IMC will need to be UNLOCKED while I am working on the RefCav, regardless of whether I'm in the Control Room or in the PSL enclosure (the same goes for if Ryan gets a T.O.O. to do the remote alignment today).
A M6.6 EQ out of Panama knocked H1 out of lock this morning, so I took that opportunity to touch up the RefCav alignment. I was only able to increase the signal on the TPD from about 0.700 V to 0.764 V, so not quite back to what it was before, meaning Jason will need to adjust alignment on-table tomorrow as planned.
Jennie W, Rahul
Yesterday we made some measurements to calibrate the spot size on the QPD as we scan the beam position across it.
We used a connector Fil made for us to plug the OT301 QPD amplifier into a DC power supply, after checking that it contained voltage regulators that could cope with a voltage between 12 and 19 V (the unit says it expects a DC supply, but the previous supply we were using was AC with a 100 mA current rating and was getting too hot, so we assume that was the incorrect one). We hooked it up at 16 V (this draws about 150 mA of current). The QPD readout looks normal and does not have any of the strange sawtooth we saw with the original power cable.
We moved the M2MS beam measurement system out of the way of the translation stage.
To calibrate the QPD we need to change the lateral position of the M1 mirror and lens to change the yaw positioning on the QPD and measure the X and Y voltages from the QPD.
We need to check we are centred first. The QPD bullseye readout shows the beam is off a tiny bit in yaw but this was as good as we could get at centering the beam when we moved the QPD. All 8 PDs are reading about 4.6 V so this means the beam is well centred in the array plane.
We measure 11000 counts on the bullseye qpd readout at this M1 position.
| Translation Stage (inch) | QPD X (V) | QPD Y (V) |
|---|---|---|
| 4.13 | 239e-3 | -1.77 |
| 4.14 | 252e-3 | -1.84 |
| 4.15 | 2.34 | -1.60 |
| 4.16 | 4.46 | -1.17 |
| 4.17 | 4.26 | -1.11 |
| 4.18 | 5.62 | -835e-3 |
| 4.19 | 7.80 | -600e-3 |
| 4.20 | 7.81 | -321e-3 |
| 4.21 | 8.45 | -222e-3 |
| 4.22 | 8.82 | +70.6e-3 |
| 4.23 | 9.19 | 771e-3 |
| 4.24 | 9.28 | 1.12 |
| 4.25 | 9.37 | 1.88 |
| 4.26 | 9.36 | 2.36 |
| 4.27 | 9.37 | 2.38 |
| 4.28 | 9.44 | 2.71 |
| 4.29 | 9.47 | 3.10 |
| 4.30 | 9.50 | 3.41 |
| 4.31 | 9.49 | 3.35 |
| 4.32 | 9.51 | 3.72 |
| 4.33 | 9.55 | 4.27 |
| 4.34 | 9.58 | 4.55 |
| 4.35 | 9.62 | 4.86 |
| 4.36 | 9.66 | 5.44 |
| 4.37 | 9.65 | 5.31 |
| 4.38 | 9.63 | 5.60 |
| 4.39 | 9.69 | 5.75 |
| 4.40 | 9.69 | 6.00 |
| 4.41 | 9.70 | 6.15 |
| 4.42 | 9.70 | 6.16 |
| 4.43 | 9.71 | 6.33 |
| 4.44 | 9.71 | 6.50 |
| 4.45 | 9.72 | 6.72 |
| 4.46 | 9.74 | 7.09 |
| 4.47 | 9.73 | 6.87 |
| 4.48 | 9.74 | 7.40 |
| 4.49 | 9.75 | 7.46 |
| 4.50 | 9.74 | 7.46 |
| 4.51 | 9.76 | 7.75 |
| 4.52 | 9.74 | 7.67 |
| 4.53 | 9.73 | 7.82 |
| 4.54 | 9.74 | 7.96 |
| 4.55 | 9.73 | 8.08 |
| 4.56 | 9.72 | 8.33 |
| 4.57 | 9.71 | 8.43 |
| 4.58 | 9.70 | 8.50 |
| 4.13 | 448e-3 | -1.98 |
| 4.12 | -1.02 | -2.34 |
| 4.11 | -1.17 | -2.14 |
| 4.10 | -2.92 | -2.43 |
| 4.09 | -4.63 | -3.30 |
| 4.08 | -5.91 | -3.18 |
| 4.07 | -6.97 | -3.40 |
| 4.06 | -8.17 | -4.24 |
| 4.05 | -8.13 | -4.28 |
| 4.04 | -8.52 | -4.47 |
| 4.03 | -8.76 | -4.77 |
| 4.02 | -8.89 | -5.27 |
| 4.01 | -9.01 | -5.45 |
| 4.0 | -9.08 | -5.44 |
| 3.99 | -9.10 | -5.85 |
| 3.98 | -9.11 | -5.91 |
| 3.97 | -9.11 | -5.93 |
| 3.96 | -9.12 | -6.16 |
| 3.95 | -9.12 | -6.18 |
| 3.94 | -9.13 | -6.34 |
| 3.93 | -9.13 | -6.49 |
| 3.92 | -9.12 | -6.49 |
| 3.91 | -9.13 | -6.61 |
| 3.90 | -9.11 | -6.55 |
| 3.89 | -9.11 | -6.70 |
| 3.88 | -9.10 | -6.46 |
I plotted the data from the lowest reading on the translation stage to the highest and fitted the linear region using Calibrate_QPD.m, which is attached.
Data is shown in attached pdf.
The slope of the linear region is 112 V/inch, which means that if the beam moved 8.93e-3 inches on the QPD in yaw, the yaw readout would change by 1 Volt.
I altered the code to plot in mm and the constant is 4.4 V/mm.
D'oh I read the scale on the translation stage wrong so the x readings are actually lower by a factor of 10.
This makes the slope 44.1 V/mm which is more in line with the 65.11 V/mm Mayank and Shiva found for the QPD calibration here.
Ours could be different because we have a slightly different beam size and we moved the QPD in its housing to centre it which could have changed X to Y coupling in the QPD readout.
This implies our beam diameter on the QPD is around 0.4mm which makes a lot more sense considering the diode is 3mm!
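As a quick numerical check of the unit conversion and the beam-size estimate (a sketch only, assuming the roughly ±9 V rails of the X readout bound the linear region):

slope_v_per_inch = 112.0                   # fitted slope of the linear region
slope_v_per_mm = slope_v_per_inch / 25.4   # ~4.41 V/mm, as first quoted
slope_corrected = 10 * slope_v_per_mm      # stage scale misread by 10x -> ~44.1 V/mm
full_swing_v = 9.7 - (-9.1)                # approximate rails of the X readout
beam_diameter_mm = full_swing_v / slope_corrected   # ~0.43 mm, consistent with ~0.4 mm
print(slope_corrected, beam_diameter_mm)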
As a cross-check we used the QPD 'bullseye' readout unit; Rahul changed the translation stage in yaw and we measured the beam dropping from 10400 counts in the middle of the QPD to 100s of counts at the edges.
| Translation Stage [inch] | QPD Sum Counts |
|---|---|
| 0.413 | 10400 |
| 0.365 | 500 |
| 0.413 | 10400 |
| 0.49 | 400 |
diode size ~ ((0.49-0.365)*0.0254*1000) = 3.175 mm.
I redid the graphs for the horizontal motion of the input beam to X motion on the QPD with better labels (first attached graph) and did a fit for the Y data on the QPD collected at each horizontal position of the input beam (second attached graph). The third graph attached is comparing both fits on one graph.
If we take into account that the input beam's horizontal axis is not aligned with the QPD axes, we can work out the resultant calibration relative to the mirror displacement as:
V change along mirror displacement axis = sqrt((change in V in X)^2 + (change in V in Y)^2)
Calibration = V change along mirror displacement axis / change in mirror position
= 4.644 V/mm.
angle of QPD horizontal axis with mirror displacement axis = tan^-1(voltage change in Y / voltage change in X) = 38.8 degrees.
I got the above calculation of the QPD calibration in the horizontal direction wrong, as I used the total change in voltage we measured across the whole range of the horizontal scan and not just the linear region where the beam is close to centred on the QPD.
The horizontal beam scan calibration is actually:
sqrt(11.8^2 + 44.1^2) ≈ 45.7 V/mm
with an angle of tan^-1(11.8/44.1) = 14.9 degrees to the X direction on the QPD.
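As a quick check of the combined-slope arithmetic (a sketch, assuming the two linear-region slopes quoted above: 44.1 V/mm along QPD X and 11.8 V/mm along QPD Y):

import math
slope_x = 44.1                                          # V/mm along QPD X (yaw readout)
slope_y = 11.8                                          # V/mm along QPD Y
combined = math.hypot(slope_x, slope_y)                 # ~45.7 V/mm along the displacement axis
angle_deg = math.degrees(math.atan2(slope_y, slope_x))  # ~15.0 degrees to the QPD X axis
print(combined, angle_deg)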
$ python3 generate_measurement_data.py --WS PS4 --date 2025-06-24
Reading in config file from python file in scripts
../../../Common/O4PSparams.yaml
PS4 rho, kappa, u_rel on 2025-06-24 corrected to ES temperature 299.2 K :
-4.70052522573445 -0.0002694340454223 2.66508565972755e-05
Copying the scripts into tD directory...
Connected to nds.ligo-wa.caltech.edu
martel run
reading data at start_time: 1435423390
reading data at start_time: 1435423770
reading data at start_time: 1435424085
reading data at start_time: 1435424665
reading data at start_time: 1435425020
reading data at start_time: 1435425335
reading data at start_time: 1435425435
reading data at start_time: 1435426070
reading data at start_time: 1435426405
Ratios: -0.46199911560110457 -0.4661225769446798
writing nds2 data to files
finishing writing
Background Values:
bg1 = 9.235188; Background of TX when WS is at TX
bg2 = 5.284960; Background of WS when WS is at TX
bg3 = 9.145166; Background of TX when WS is at RX
bg4 = 5.413446; Background of WS when WS is at RX
bg5 = 9.219525; Background of TX
bg6 = 0.642557; Background of RX
The uncertainty reported below are Relative Standard Deviation in percent
Intermediate Ratios
RatioWS_TX_it = -0.461999;
RatioWS_TX_ot = -0.466123;
RatioWS_TX_ir = -0.455904;
RatioWS_TX_or = -0.461457;
RatioWS_TX_it_unc = 0.092717;
RatioWS_TX_ot_unc = 0.098001;
RatioWS_TX_ir_unc = 0.097458;
RatioWS_TX_or_unc = 0.092076;
Optical Efficiency
OE_Inner_beam = 0.986610;
OE_Outer_beam = 0.990080;
Weighted_Optical_Efficiency = 0.988345;
OE_Inner_beam_unc = 0.062698;
OE_Outer_beam_unc = 0.063147;
Weighted_Optical_Efficiency_unc = 0.088986;
Martel Voltage fit:
Gradient = 1636.767545;
Intercept = 0.229197;
Power Imbalance = 0.991154;
Endstation Power sensors to WS ratios:
Ratio_WS_TX = -1.077445;
Ratio_WS_RX = -1.391120;
Ratio_WS_TX_unc = 0.058121;
Ratio_WS_RX_unc = 0.042422;
========== Values for Force Coefficients ==========
Key Pcal Values :
GS = -5.135100; Gold Standard Value in (V/W)
WS = -4.700525; Working Standard Value
costheta = 0.988362; Angle of incidence
c = 299792458.000000; Speed of Light
End Station Values :
TXWS = -1.077445; Tx to WS Rel responsivity (V/V)
sigma_TXWS = 0.000626; Uncertainity of Tx to WS Rel responsivity (V/V)
RXWS = -1.391120; Rx to WS Rel responsivity (V/V)
sigma_RXWS = 0.000590; Uncertainity of Rx to WS Rel responsivity (V/V)
e = 0.988345; Optical Efficiency
sigma_e = 0.000879; Uncertainity in Optical Efficiency
Martel Voltage fit :
Martel_gradient = 1636.767545; Martel to output channel (C/V)
Martel_intercept = 0.229197; Intercept of fit of Martel to output (C/V)
Power Loss Apportion :
beta = 0.998895; Ratio between input and output (Beta)
E_T = 0.993606; TX Optical efficiency
sigma_E_T = 0.000442; Uncertainity in TX Optical efficiency
E_R = 0.994705; RX Optical Efficiency
sigma_E_R = 0.000443; Uncertainity in RX Optical efficiency
Force Coefficients :
FC_TxPD = 7.903342e-13; TxPD Force Coefficient
FC_RxPD = 6.193451e-13; RxPD Force Coefficient
sigma_FC_TxPD = 5.805084e-16; TxPD Force Coefficient
sigma_FC_RxPD = 3.826232e-16; RxPD Force Coefficient
data written to ../../measurements/LHO_EndX/tD20250701/
Comment regarding the missing signal on the EX Pcal MEDM:
We noticed that H1:CAL-PCALX_OFS_DRIVE_MON was not working as expected a few weeks ago. On this expedition to make ES measurements, Dripta and I used a few breakout boards to verify that the "OFS drive monitor" signal was coming out of the Pcal chassis and into the ADC chassis. We confirmed that there was a signal coming out of the Pcal Interface Chassis back board (D1400149V1), at the DB9 output labeled "To Fast ADC", pins 1 and 6 (not the same name, but by process of elimination we assume that "OFS drive monitor" is what the drawing calls "InLoopOut±"), so we can rule out the Pcal chassis. Due to lack of time, however, we were not able to pinpoint the step at which this signal is lost.
Tony found the next step in our hunt: page 3 of D1300226V13 shows the ADC side of "To Fast ADC"; specifically, "ADC CHAN±00" is assigned in the drawing to InLoopOut±. I am tagging CDS for any insight on their side. Discussions and a follow-up plan are in progress to find the signal.