FranciscoL, TonyS, MattT, RickS
On Tuesday, March 25, we reverted the PCALX lower beam to its nominal center. We expect to see a change of 4 HOPs in \chi_XY -- returning to the value it had two weeks ago.
Target was placed with a 33 degree offset, as seen in the first attachment (TARGET_ON -- featuring a responsible scientist wearing laser goggles). The individual beam voltage values, as found, were very similar to the values recorded at the end of the move done last week.
The following table shows the voltage values as read by the Keithley voltmeter we use during the procedure:
| Step | Comment | Readout [V] |
|---|---|---|
| 1 | Both beams - target off | 3.379 |
| 2 | Both beams - target on (as found) (BEFORE_MOVE) | 2.937 |
| 3 | Lower beam after actuation | 1.395 |
| 4 | Both beams - target on (AFTER_MOVE) | 2.903 |
| 5 | Both beams - target off | 3.394 |
The IFO has not regained lock at the time of writing this alog, which limits further observations from this move.
WP 12393
The FE and IO chassis for h1seih16 were powered down for in-rack cabling of the ISI electronics. All cables are now routed and dressed. The long field cables were left disconnected; we will wait until they are connected at the flange.
The AI chassis in U38 was removed, and the AA chassis from U39 was moved down. This matches LLO's rack configuration, alog 75328.
CDS Team
RyanC, TJ
I wrote a new decorator in ISC_library (@ISC_library.bring_unlocked_imc_2w_decorator(nodes)) for the specific scenario of the IMC losing lock while we're at 10 W, which would give it trouble relocking - alog 82436. The decorator checks for the IMC being unlocked with the power above 2 W, and if the rotation stage is stationary it requests the LASER_PWR guardian back to 2 W.
We sprinkled the decorator into ALIGN_IFO and INIT_ALIGN (in the MICH, SRC, and AS_CENTERING states), and I also added it to CHECK_MICH_FRINGES and MICH_OFFLOADED in ISC_LOCK. We successfully tested it today during an initial alignment by breaking the IMC lock during these states.
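For reference, a rough sketch of the decorator's shape (this is not the actual ISC_library code -- the channel names, thresholds, and request string below are placeholders, and ezca and nodes are provided by the Guardian runtime):

```python
# Placeholder sketch only -- not the real ISC_library implementation.
def bring_unlocked_imc_2w_decorator(nodes):
    def decorator(run_method):
        def wrapper(self):
            # All channel names and thresholds here are illustrative guesses.
            imc_unlocked = ezca['IMC-MC2_TRANS_SUM_OUT16'] < 100      # IMC lock check (placeholder)
            power_high = ezca['IMC-PWR_IN_OUT16'] > 2                 # input power above 2 W (placeholder)
            rot_stage_still = ezca['PSL-POWER_SCALE_MOVING'] == 0     # rotation stage not moving (placeholder)
            if imc_unlocked and power_high and rot_stage_still:
                nodes['LASER_PWR'] = 'POWER_2W'   # schematic request; actual node-request syntax may differ
            return run_method(self)
        return wrapper
    return decorator
```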
Tue Mar 25 10:12:47 2025 INFO: Fill completed in 12min 43secs
TITLE: 03/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY: H1 is currently relocking up to MOVE_SPOTS. Looks like H1 was able to lock twice last night and most recently lost lock about an hour ago. Maintenance day today, but I'll let H1 continue until those activities begin.
Halted locking for the start of maintenance day. ISC_LOCK to 'IDLE' and seismic environment to 'MAINTENANCE' at 14:40 UTC.
Power cycling the dust monitor did not help; later in the day I restarted the IOC using telnet, which was successful.
Workstations were updated and rebooted. OS packages were updated, and conda packages were also updated.
In the CDS conda environment, gwpy was upgraded to version 3.0.12. Among other improvements, this version fixes an issue that could cause matplotlib to lock up while creating plots after importing gwpy.
IFO is in LOCKING at FIND_IR
I got called at 03:11 AM because the ALSY Guardian went into fault during initial alignment and stayed there for an hour, at which point IFO Notify was triggered to call the OWL Ops. I couldn't see anything immediately wrong with the Y-arm other than that it had gotten itself into a weird state. Long story short, the ALSY IR DC power is low, and right around the threshold for the ALSY guardian faulting.
I went into INIT to try another initial alignment, but ALSY got into the same state. Upon further investigation, this was due to the ALS Guardian detecting an error in the locking related to ALS_Y_LASER_IR_DC. I noticed that ALSY would only fault if the power went below 4.5 mW, which I later realized was right on the border, fluctuating between "too low" and just over 4.5. I set the threshold on the same page (attached pic) to 4.48 mW (lower by 0.02 mW), which seems to have fixed the issue. I waited until ALS locked in both INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN to ensure that the fix worked. I waited a bit more and can confirm we can lock DRMI (though we lost lock a few minutes thereafter).
I have accepted and screenshotted the SDF corresponding to the power change in SAFE.SNAP just in case we lose lock on the way to NLN (since this gets reset with each lockloss). I will likely get called again and need to accept the SDF in OBSERVE later, though that may be during DAY ops time. Other screenshots are attached, including the screen where I made the power threshold change. I've reset my OWL and will stay logged in, expecting a call for potential SDF confirmation.
Accepted ALSY IR DC Power Change in OBSERVE (attached). H1 is now OBSERVING.
Ryan Short, Sheila
Last November the power out of this laser dropped 13% after the current was reduced to get it operating in a more stable region during the PSL NPRO difficulties, 81117. Daniel lowered some thresholds and raised some gains at the time, but not the ALS Y LASER IR threshold. It has been close to the limit since then, but only drifted below last night due to its normal long, slow drift. I've lowered the threshold to 4 mW now, so that we have a similar level of headroom for this error as we did before the laser current drop. I've accepted this in safe.snap but not in observe.
Ryan Short has also been looking at some of our other ALS locking problems. He noticed that sometimes the ALS PLL Beckhoff gives an error based on the reference cavity transmission. This is leftover from when the ALS pick-off was in transmission of the reference cavity, but it has since been moved, so this error check isn't helpful anymore. We've set the threshold to -1 for both ALS X and Y (H1:ALS-Y_FIBR_LOCK_REFCAV_TRANSLIM and its X-arm counterpart), and also accepted these in safe.snap, but not observe.
TITLE: 03/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Rough shift, with DRMI locking all of a sudden being VERY finicky. Hoping the POPAIR gain change helps with locking (after the end-of-shift alignment).
LOG:
The shift started with Ryan looking into why ALSy was causing grief in the afternoon (it would later clear up on its own), but since the beginning of the EVE shift, DRMI locking has been the issue.
Have gone through two initial alignments, as well as several successful PRMIs and CHECK MICH FRINGES.
DRMI Symptoms:
DRMI would start out the first few seconds with strong flashes seen on the camera and decent flashes on ndscopes, but
then the behavior would switch to an ugly camera image: DRMI would look like it was going to catch, but within a second POP18 & POP90 (and also the camera) would "wobble" back down to zero in a "slow flash".
(It feels like DRMI has been like this over the weekend as well, but after some of these wobbles to zero following the "slow flashes," DRMI would lock. Not the case tonight---we have had zero DRMI locks going on 6+ hrs.)
It feels like the alignment is decent and there are no environmental issues, but the LSC control doesn't seem to trigger or start when these flashes begin (hence the "slow" drop of POP18+90).
What's Been Tried:
Now: We Wait
Just before Sheila and I were going to go with this new gain, an M6.7 EQ rolled through (of course). We probably have another 30-90 min before the ground calms down, but the hope is we'll have the same luck Elenna, Ibrahim, and Sheila had on March 4th with DRMI locking right up!
TITLE: 03/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: H1 was locked this morning following the windstorm of last night, but lost lock during commissioning time. Relocking after that was simple, but after again losing lock about an hour later, H1 has not been able to relock since then.
I'll write up a separate and more detailed entry later, but most of the struggle with relocking stemmed from an issue I started noticing with ALS-Y that would cause spontaneous locklosses at states after LOCKING_ARMS_GREEN. At seemingly random intervals, the ALS-Y PLL would jump from its Locked state into Ramp Gain, which flags the PLL as "not okay," causing Guardian to assume a lockloss. Since I don't know what causes this state transition, I'm not sure what the underlying issue might be, but so far when this happens the first thing I can see changing is this FIBR_LOCK_STATE. Eventually, these glitches stopped late in the afternoon. An example of one of these instances (in this case during FIND_IR) is shown in the ndscope screenshot attached.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 19:57 | SAF | LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 20:34 |
| 15:22 | FAC | Nellie | MY | N | Technical cleaning | 15:48 |
| 15:58 | FAC | Kim | MX | N | Technical cleaning | 16:46 |
| 16:30 | FAC | Tyler | X-arm | N | Tumbleweed inventory | 16:44 |
| 17:06 | CAL | Jeff | LVEA | - | OMC DCPD measurement | 17:16 |
| 17:17 | AOS | Jason | Opt Lab | N | Inventory | 17:36 |
| 18:26 | ISC | Mayank, Jennie | Opt Lab | Local | ISS array work | 00:24 |
| 18:45 | FAC | Tyler | CR | N | Swapping EX fans | 18:47 |
| 19:26 | VAC | Janos | EX MER | N | Looking for parts | 19:43 |
| 19:52 | VAC | Travis | EY MER | N | Measurement | 20:09 |
| 20:06 | SAF | Tony | LVEA | - | Transition to SAFE | 20:34 |
| 21:02 | CAL | Tony | PCal Lab | Local | Measurement to prepare | 21:37 |
TITLE: 03/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 18mph Gusts, 11mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.27 μm/s
QUICK SUMMARY:
H1's been down, and Ryan S shared issues he was observing (see what he says about ALSy in an upcoming alog). He was running an initial alignment during our shift handoff. Arm locking was fine, but DRMI looked pretty bad. Went through PRMI a couple of times (successfully completed) as well as a round of CHECK MICH FRINGES---DRMI still looked bad. Now running through a second consecutive alignment.
Winds are much calmer (under 20 mph) than last night, as is the microseism.
I'm starting to look at the nice data set Camilla took with different nonlinear gains in the interferometer, 83370. A few weeks ago we took a similar dataset on the homodyne, see 83040.
OPO threshold and NLG measurements by both methods now make sense:
Based on the important realization in 83032 that we previously had pump depletion while we were measuring NLG by injecting a seed beam, we reduced the seed power for these two more recent datasets. This means that we can rely on the NLG measurements to fit the OPO threshold, and not have the OPO threshold as a free parameter in this dataset. (In the homodyne dataset I had the OPO threshold as a free parameter, and it fit the NLG data well. With the IFO data we want to fit more parameters, so it's nice to be able to fit the threshold independently.)

The first attachment shows a plot of nonlinear gain measured two ways, from Camilla's table in 83370. The first method of NLG calculation is the one we normally use at LHO, where we measure the amplified seed while scanning the seed PZT, and then block the green and scan the OPO to measure the unamplified seed level (blue dots in the 1st attachment). The second method is to measure the amplified and deamplified seed while the seed PZT is scanning (max and min), and nlg = {[1+sqrt(amplified/deamplified)]/2}^2 (orange pluses in the attached plot). The fit to the amplified/unamplified method gives a threshold of 158.1 uW OPO transmitted power in this case, while the amplified/deamplified method gives 157.5 uW.
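For concreteness, a minimal sketch of the two NLG calculations and the threshold fit (assuming the standard below-threshold parametric gain model 1/(1 - sqrt(P/P_thresh))^2; variable names are mine):

```python
import numpy as np
from scipy.optimize import curve_fit

def nlg_amp_unamp(amplified, unamplified):
    # Method 1 (usual LHO method): amplified seed peak over unamplified seed level.
    return amplified / unamplified

def nlg_amp_deamp(amplified, deamplified):
    # Method 2: from the max/min of the scanned seed,
    # nlg = ([1 + sqrt(amplified/deamplified)] / 2)^2
    return ((1.0 + np.sqrt(amplified / deamplified)) / 2.0) ** 2

def nlg_model(P_green, P_thresh):
    # Standard below-threshold parametric gain vs OPO (green) pump power.
    x = np.sqrt(P_green / P_thresh)
    return 1.0 / (1.0 - x) ** 2

# With P_green (OPO transmitted power, uW) and nlg_measured from either
# nlg_amp_unamp(...) or nlg_amp_deamp(...):
# popt, _ = curve_fit(nlg_model, P_green, nlg_measured, p0=[160.0])
# print('fitted OPO threshold: %.1f uW' % popt[0])
```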
Mean squeezing lump and estimate of eta:
The second attachment here shows all the spectra that Camilla saved in 83370. As she mentioned there, there is something strange happening in the mean squeezing spectra from 300-400 Hz, which is probably because the ADF at 322 Hz was on while the LO loop was unlocked. In the future it would be nice to turn off the ADF when we measure mean squeezing so that we don't see this. This could also be adding noise at low frequencies, making the mean squeezing measurement confusing.
Mean squeezing is measured by injecting squeezing with the LO loop unlocked, which means that it averages over the squeezing angles; if we know the nonlinear gain (the generated squeezing), the mean squeezing level is determined only by the total efficiency. With x = sqrt(P/P_thresh),
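and assuming frequency-independent total efficiency $\eta$ and negligible phase noise, the standard expression for the angle-averaged (mean) variance relative to shot noise is

$$ V_{\mathrm{mean}} = 1 - \eta + \frac{\eta}{2}\left[\frac{(1+x)^2}{(1-x)^2} + \frac{(1-x)^2}{(1+x)^2}\right] = 1 + \frac{8\,\eta\,x^2}{(1-x^2)^2}, $$

so a measured mean squeezing level, together with x from the NLG fit, gives an estimate of the total efficiency $\eta$.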
RyanS reported getting HAM3 CPS noise notifications from DIAG_MAIN. Looking at the log, there have been several periods where the blrms noise monitors have gone over threshold. Digging into HAM3's performance this morning, the CPS noise seems to be affecting mostly RZ, but the other horizontal dofs are affected as well. The first attached image is a comparison with HAM2: the top ASDs are the rotational dofs, the bottom are the X/Y/Z dofs. HAM3 RX/RY/Z vertical dofs are all mostly similar to HAM2, but the cyan trace vs the pink trace on the top plot shows HAM3 RZ is moving much more than HAM2 RZ.
The second image shows ASDs of the individual sensors; the blue trace on the top plot is the H2 CPS, which has a higher noise floor than the other sensors, so I suspect this sensor is the root cause of HAM3's problems. All of the other sensors seem more or less healthy. If I get a chance, or if the ISI becomes unstable, I would like to go to the floor or the CER to power cycle some stuff. Otherwise, I can wait till tomorrow.
This trend shows when the H2 CPS started misbehaving. The top two plots are the 1-3 Hz log blrms for the RZ GS13s: HAM2 is pretty steady, but something happens at ~10:30am PST on the 18th. The bottom plot shows the horizontal CPS during this time; H1 and H3 stay more or less the same, but H2 shows an increase in noise that has persisted until now.
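For reference, this kind of band-limited RMS trend can be reproduced with gwpy (the channel name below is a placeholder for the actual HAM3 H2 CPS channel):

```python
from gwpy.timeseries import TimeSeries

# Channel name is a placeholder; substitute the real HAM3 corner-2 CPS channel.
chan = 'H1:ISI-HAM3_CPSINF_H2_IN1_DQ'
data = TimeSeries.get(chan, 'Mar 18 2025 17:00', 'Mar 18 2025 20:00')  # times in UTC
blrms = data.bandpass(1, 3).rms(60)   # 1-3 Hz band-limited RMS, 60 s stride
blrms.plot()
```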
Got permission from Sheila to try addressing this because locking is not going well. Powered off the CPS racks for ~10 secs and noise was gone when the signals came back up. Will keep monitoring.
This morning at 09:04:42 Mon 24mar2025 we had a BSC3 sensor glitch, which VACSTAT normally logs as a single-gauge event and does not send alarms. Today, however, many LVEA gauges also tripped and alarms were sent to the Vacuum Team.
The reason is that on 13mar2025 we had a lockloss due to an LVEA vacuum event which was below VACSTAT's nominal trip level. To counter this, some LVEA slope trip levels were reduced from 1.0e-10 to 3.0e-12 (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=83366).
Fast forward to today and the sequence was:
09:04:42 standard BSC3 sensor glitch, num_glitched = 1, all other gauges set to sensitive mode, meaning 3.0e-13 for most LVEA gauges
09:05:03 2nd LVEA sensor glitched on noise while in sensitive mode, num_glitched = 2, Alarms sent to Vacuum Team
09:11:07 3rd LVEA sensor glitched on noise while in sensitive mode
09:12:37 4th LVEA sensor glitched on noise while in sensitive mode
09:18:35 5th LVEA sensor glitched on noise while in sensitive mode
The VACSTAT code was changed to permit a sensitivity multiplier to be applied on a gauge-by-gauge basis. The multiplier for those LVEA gauges which have a 3.0e-12 slope trip was set to 1.0 (i.e., no additional sensitivity following a BSC3 trip). The increased sensitivity for the delta-P trip level was retained. The updated prod.yaml is below, followed by a sketch of how the multiplier is applied.
prod.yaml
---
ifo: H1
glitch_monitor:
  lookback_times: [60, 300, 600]
  proc_period_secs: 10
vacuum_gauges:
  default:
    glitch_press_rate: [1.0e-10, 1.0e-10, 1.0e-10]
    glitch_delta_press: [1.0e-08, 1.0e-08, 1.0e-08]
    valid_value_min: 1.0e-10
    valid_value_max: 1.0e-04
    sensitivity_press_multiplier: 10.0
    sensitivity_deltap_multiplier: 10.0
  H0:VAC-LY_X0_PT100B_PRESS_TORR:
    description: "Corner Station HAM1"
  H0:VAC-LY_Y1_PT120B_PRESS_TORR:
    description: "Corner Station BSC2"
    glitch_press_rate: [3.0e-12, 3.0e-12, 3.0e-12]
    sensitivity_press_multiplier: 1.0
  H0:VAC-LX_Y8_PT132_MOD2_PRESS_TORR:
    description: "Corner Station BSC3"
  H0:VAC-LX_Y0_PT110_MOD1_PRESS_TORR:
    description: "Corner Station HAM6"
  H0:VAC-LY_Y3_PT114B_PRESS_TORR:
    description: "Corner Station CP1"
    glitch_press_rate: [3.0e-12, 3.0e-12, 3.0e-12]
    sensitivity_press_multiplier: 1.0
  H0:VAC-LX_X3_PT134B_PRESS_TORR:
    description: "Corner Station CP2"
    glitch_press_rate: [3.0e-12, 3.0e-12, 3.0e-12]
    sensitivity_press_multiplier: 1.0
  H0:VAC-LY_Y4_PT124B_PRESS_TORR:
    description: "Corner Station Y-Arm"
    glitch_press_rate: [3.0e-12, 3.0e-12, 3.0e-12]
    sensitivity_press_multiplier: 1.0
  H0:VAC-LX_X4_PT144B_PRESS_TORR:
    description: "Corner Station X-Arm"
    glitch_press_rate: [3.0e-12, 3.0e-12, 3.0e-12]
    sensitivity_press_multiplier: 1.0
  H0:VAC-MY_Y1_PT243B_PRESS_TORR:
    description: "Mid-Y Y1 Beam Tube"
  H0:VAC-MY_Y5_PT246B_PRESS_TORR:
    description: "Mid-Y Y2 Beam Tube"
  H0:VAC-EY_Y6_PT427_MOD1_PRESS_TORR:
    description: "End-Y Y6 BT Ion Pump"
  H0:VAC-EY_Y1_PT423B_PRESS_TORR:
    description: "End-Y Beam Tube"
  H0:VAC-EY_Y3_PT410B_PRESS_TORR:
    description: "End-Y BSC10"
  H0:VAC-MX_X1_PT343B_PRESS_TORR:
    description: "Mid-X X1 Beam Tube"
  H0:VAC-MX_X5_PT346B_PRESS_TORR:
    description: "Mid-X X2 Beam Tube"
  H0:VAC-EX_X6_PT527_MOD1_PRESS_TORR:
    description: "End-X X6 BT Ion Pump"
  H0:VAC-EX_X1_PT523B_PRESS_TORR:
    description: "End-X Beam Tube"
  H0:VAC-EX_X3_PT510B_PRESS_TORR:
    description: "End-X BSC9"
I'm looking again at the OSEM estimator we want to try on PR3 - see https://dcc.ligo.org/LIGO-G2402303 for description of that idea.
We want to make a yaw estimator, because that should be the easiest one for which we have a hope of seeing some difference (vertical is probably easier, but you can't measure it). One thing which makes this hard is that the cross coupling from L drive to Y readout is large.
But - a quick comparison (first figure) shows that the L to Y coupling (yellow) does not match the Y to L coupling (purple). If this were a drive from the OSEMs, then these should match. This is actually a drive from the ISI, so it is not strictly reciprocal - but the ideas are still relevant. For an OSEM drive we know that mechanical systems are reciprocal, so, to the extent that yellow doesn't match purple, this coupling cannot be in the mechanics.
Nevertheless, the similarity of the Length-to-Length and the Length-to-Yaw TFs indicates that there is likely a great deal of cross-coupling in the OSEM sensors. We see that the Y response shows a bunch of the L resonances (L to L is the red TF); you drive L, and you see L in the Y signal. This smells of a coupling where the Y sensors see L motion. This is quite plausible if the two L OSEMs on the top mass are not calibrated correctly - because they are very close together, even a small scale-factor error will result in a pretty big Y response to L motion.
Next - I did a quick fit (figure 2). I took the Y<-L TF (yellow, measured back in LHO alog 80863) and fit the L<-L TF to it (red), then subtracted the L<-L component. The fit coefficient which gives the smallest response at the 1.59 Hz peak is about -0.85 rad/meter.
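A sketch of that fit-and-subtract step, assuming the measured Y<-L and L<-L transfer functions are available as complex arrays on a common frequency vector (the names and the simple peak-minimizing scan are mine):

```python
import numpy as np

def best_subtraction_coeff(f, tf_y_from_l, tf_l_from_l,
                           f_peak=1.59, coeffs=np.linspace(-2.0, 0.0, 401)):
    # Scan a real coefficient c and return the value that minimizes
    # |Y<-L - c * L<-L| at the f_peak resonance (expect c ~ -0.85 rad/m here).
    idx = np.argmin(np.abs(f - f_peak))
    residual = np.abs(tf_y_from_l[idx] - coeffs * tf_l_from_l[idx])
    return coeffs[np.argmin(residual)]

# c = best_subtraction_coeff(f, tf_y_from_l, tf_l_from_l)
# tf_y_from_l_cleaned = tf_y_from_l - c * tf_l_from_l
```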
In figure 3 you can see the result in green, which is generally much better. The big peak at 1.59 Hz is much smaller, and the peak at 0.64 Hz is reduced. There is more left from the peak at 0.75 Hz (this is related to pitch - why should the yaw OSEMs see pitch motion? Maybe transverse motion of the little flags? I don't know, and it's going to be a headache).
The improved Y<-L (green) and the original L<-Y (purple) still don't match, even though they are much closer than the original yellow/purple pair. Hence there is more which could be gained by someone with more cleverness and time than I have right now.
figure 4 - I've plotted just the Y<-Y and Y<-L improved.
Note - the units are wrong: the drive units are all meters or radians, not forces and torques, and we know, because of the d-offset in the mounting of the top wires from the suspoint to the top mass, that an L drive of the ISI puts first-order L and P forces and torques on the top mass. I still need to calculate how much pitch motion we expect to see in the yaw response for the mode at 0.75 Hz.
In the meantime - this argues that the yaw motion of PR3 could be reduced quite a bit with a simple update to the SUS large triple model; I suggest a matrix similar to the CPS align in the ISI. I happen to have the PR3 model open right now because I'm trying to add the OSEM estimator parts to it. Look for an ECR in a day or two...
This is run from the code {SUS_SVN}/HLTS/Common/MatlabTools/plotHLTS_ISI_dtttfs_M1_remove_xcouple
-Brian
ah HA! There is already a SENSALIGN matrix in the model for the M1 OSEMs - this is a great place to implement corrections calculated in the Euler basis. No model changes are needed, thanks Jeff!
If this is a gain error in one of the L OSEMs, how big is it? About 15%.
Move the top mass, let osem #1 measure a distance m1, and osem #2 measure m2.
Give osem #2 a gain error, so its response is really (1+e) times the true distance.
Translate the top mass by d1 with no rotation, and the two signals will be m1= d1 and m2=d1*(1+e)
L is (m1 + m2)/2 = d1/2 + d1*(1+e)/2 = d1*(1+e/2)
The angle will be (m1 - m2)/s where s is the separation between the osems.
I think that s=0.16 meters for top mass of HLTS (from make_sus_hlts_projections.m in the SUS SVN)
Angle measured is (d1 - d1(1+e))/s = -d1 * e /s
The angle/length for a length drive is
-(d1 * e /s)/ ( d1*(1+e/2)) = 1/s * (-e/(1+e/2)) = -0.85 in this measurement
if e is small, then e is approx = 0.85 * s = 0.85 rad/m * 0.16 m = 0.14
so a 14% gain difference between the rt and lf OSEMs will give you about a 0.85 rad/meter cross-coupling (actually closer to 15%: 0.15/(1 + 0.075) = 0.1395, but the approximation is pretty good).
15% seems like a lot to me, but that's what I'm seeing.
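A quick numerical check of that estimate:

```python
s = 0.16        # separation of the two L OSEMs on the HLTS top mass [m]
c = -0.85       # measured Y<-L cross-coupling [rad/m]

# Invert  c = -(e/s) / (1 + e/2)  exactly for the gain error e:
e = -c * s / (1 + c * s / 2)
print(e)        # ~0.146, i.e. roughly a 15% gain mismatch
```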
I'm adding another plot from the set to show vertical-roll coupling.
fig 1 - Here you see that the vertical-to-roll cross-coupling is large. This is consistent with a miscalibrated vertical sensor causing common-mode vertical motion to appear as roll. Spoiler alert - Edgard just predicted this to be true, and he thinks that sensor T1 is off by about 15%. He also thinks the right sensor is 15% smaller than the left.
-update-
fig 2 - I've also added the vertical-pitch plot. Here again we see significant response of the vertical motion in the pitch DOF. We can compare this with what Edgard finds. This will be a smaller difference because the pitch sensors (T2 and T3, I think) are very close together (9 cm total separation, see below).
Here are the spacings as documented in SUS_SVN/HLTS/Common/MatlabTools/make_sushlts_projections.m
I was looking at the M1 ---> M1 transfer functions last week to see if I could do some OSEM gain calibration.
The details of the proposed sensor rejiggling are a bit involved, but the basic idea is that the part of the M1-to-M1 transfer function coming from the mechanical plant should be reciprocal (up to the impedances of the ISI). I tried to symmetrize the measured plant by changing the gains of the OSEMs, then later by including the possibility that the OSEMs might be seeing off-axis motion.
Three figures and three findings below:
0) Finding 1: The reciprocity only allows us to find the relative calibrations of the OSEMs, so all of the results below are scaled to the units where the scale of the T1 OSEM is 1. If we want absolute calibrations, we will have to use an independent measurement, like the ISI-->M1 transfer functions. This will be important when we analyze the results below.
1) Figure 1: shows the full 6x6 M1-->M1 transfer function matrix between all of the DOFs in the Euler basis of PR3. The rows represent the output DOF and the columns represent the input DOF. The dashed lines represent the transpose of the transfer function in question for easier comparison. The transfer matrix is not reciprocal.
2) Finding 2: The diagonal correction (relative to T1) is given by:
I will post more analysis in the Euler basis later.
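To make the reciprocity idea concrete, here is a simplified sketch in the OSEM basis (my notation; it assumes the actuator gains are correct, so that M_ij/M_ji reduces to the sensor gain ratio g_i/g_j when the mechanical plant is reciprocal):

```python
import numpy as np

def relative_sensor_gains(M, ref=0):
    """M: 6x6 complex M1->M1 transfer matrix in the OSEM basis, evaluated at a
    frequency where the mechanical plant dominates.  Returns the sensor gains
    relative to the reference sensor (index ref, e.g. T1), under the assumption
    that the actuator gains are correct and the plant is reciprocal."""
    ratios = np.abs(M / M.T)        # |M_ij / M_ji| ~ g_i / g_j
    return ratios[:, ref]           # g_i / g_ref for each sensor i
```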
Here's a view of the plant model for the HLTS - damping off, motion of M1. These are for reference as we look at which cross-couplings should exist. (Spoiler - not many.)
First plot is the TF from the ISI to the M1 osems.
L is coupled to P, T & R are coupled, but that's all the coupling we have in the HLTS model for ISI -> M1.
Second plot is the TF from the M1 drives to the M1 osems.
L & P are coupled, T & R are coupled, but that's all the coupling we have in the HLTS model for M1 -> M1.
These plots are Magnitude only, and I've fixed the axes.
For the OSEM to OSEM TFs, the level of the TFs in the blank panels is very small - likely numerical issues. The peaks are at the 1e-12 to 1e-14 level.
@Brian, Edgard -- I wonder if some of this ~10-20% mismatch in OSEM calibration is that we approximate the D0901284-v4 sat amp whitening stage with a compensating filter of z:p = (10:0.4) Hz? (I got on this idea through modeling the *improvement* to the whitening stage that is already in play at LLO and will be incoming to LHO this summer; E2400330.)

If you math out the frequency response from the circuit diagram and component values, the response is defined by

    Vo / Vi = (-1) * R181 / (Z_in^upper || Z_in^lower)
            = (-1) * (R181 / R182) * (1 + s * (R180 + R182) * C_total) / (1 + s * R180 * C_total)

So for the D0901284-v4 values of R180 = 750, R182 = 20e3, C150 = 10e-6, C151 = 10e-6, R181 = 20e3 (with C_total = C150 + C151), that creates a frequency response of

    f.zero = 1 / (2*pi*(R180 + R182)*C_total) = 0.3835 [Hz]
    f.pole = 1 / (2*pi*R180*C_total) = 10.6103 [Hz]

I attach a plot that shows the ratio of this "circuit component value ideal" response to the approximate response; the response ratio hits 7.5% by 10 Hz and ~11% by 100 Hz. This is, of course, for one OSEM channel's signal chain. I haven't modeled how this systematic error in compensation would stack up with linear combinations of slight variants of this response given component value precision/accuracy, but ...

... I also am quite confident that no one really wants to go through and measure and fit the zero and pole of every OSEM channel's sat amp frequency response, so maybe you're doing the right thing by "just" measuring it with this technique and compensating for it in the SENSALIGN matrix. Or at least measure one sat amp box's worth, and see how consistent the four channels are and whether they're closer to 0.4:10 Hz or 0.3835:10.6103 Hz.

Anyways -- I thought it might be useful to be aware of the many steps along the way where we've been lazy about the details in calibrating the OSEMs, and this would be one way to "fix it in hardware."
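A quick numerical sketch reproducing those ratios from just the zero/pole values above:

```python
import numpy as np

def zp_response(f, f_zero, f_pole):
    # Frequency response of a single zero:pole stage, normalized to unity at DC.
    s = 2j * np.pi * f
    return (1 + s / (2 * np.pi * f_zero)) / (1 + s / (2 * np.pi * f_pole))

f = np.array([10.0, 100.0])                 # Hz
ideal = zp_response(f, 0.3835, 10.6103)     # from R180, R182, C_total = C150 + C151
approx = zp_response(f, 0.4, 10.0)          # the z:p = 0.4:10 Hz approximation in use
print(np.abs(ideal / approx))               # ~[1.07, 1.11], i.e. ~7% at 10 Hz, ~11% at 100 Hz
```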
The new clean air supply (compressor, tank, dryer, and a series of extra filters), which was received at the end of 2024, was installed in the EX mechanical room as a replacement for the old system. The installation work was carried out over the last few weeks; the last step - the startup by Rogers Machinery - happened today.

This new system has a 69 cfm air delivery, operating with 3 motors of 7.5 HP each (22.5 HP in sum). In comparison, the old system had a 50 cfm air delivery, operating with 5 motors of 5 HP each (25 HP in sum). Moreover, the new system (unlike the old one) has an automatic dew point monitor and a complete pair of redundant dryer towers, so this new system is a major improvement. The reason for the 69 cfm limit (and not more) is that the cooling of the compressor units in the MER remains feasible at that rate; moreover, the filters and the airline do not need any upgrades, as they can still accommodate the airflow. Both the new and old systems are able to produce at least -40 deg F dew point air on paper. During the startup, however, the new system was able to produce much better than this - it was ~-70 deg F (and dropping) - as you can see in the attached photo.

Last but not least, a huge congratulations to the Vacuum team for the installation, as this was the first time the installation of a clean air system was carried out by LIGO staff, so this is indeed a huge achievement. Also, big thanks to Chris, who cleaned some parts for the compressor, to Tyler, who helped a lot with the heavy lifting, and to Richard & Ken, who did the electrical wiring. From next week on, we will repeat this same installation at the EY station.