SUMMARY: Back To OBSERVING, but got here after going in a few circles.
H1 had a lockloss before the shift, but when I arrived H1 was at NLN, BUT SQZ had issues.
I opened up the SQZ Overview screen and could see that the SQZ_SHG guardian node was bonkers (I had the "all node" screen up the whole time...it was crazy because it was frantically moving through states trying to get LOCKED, but could not). But I also saw:
1) DIAG_MAIN
DIAG_MAIN had notifications flashing which said: "ISS Pump is off. See alog 70050." So, this is where I immediately switched focus.
2) Alog70050: "What To Do If The SQZ ISS Saturates"
Immediately followed the instructions from alog 70050, which were pretty straightforward. (Remember: H1 was not Observing, so I jumped on the alog instructions, wanting to get back to Observing ASAP.) I took opo_grTrans_setpoint_uW from 80 to 50 and tried to get SQZ back to FDS, but no go (SQZ Manager stuck....and SQZ_SHG still bonkers!).
At this point, I saw that there were several other sub-entries with updated instructions and notes. So I went through them and took opo_grTrans_setpoint_uW to 60 (no FDS + SQZ_SHG still crazy), and finally set opo_grTrans_setpoint_uW = 75 (but still no FDS + SQZ_SHG still crazy).
At this point, I was assuming DIAG_MAIN had sent me on a wild goose chase. Soooooo, I focused on the erratic SQZ_SHG......
3) Alog search: "SQZ_SHG" ---> H1:SQZ-SHG_TEC_SETTEMP Taken to 33.9
This did the trick! The search took me to early-Feb 2025 alogs: (1) RyanS's alog 82599, which sounded like what I had, and then (2) Ibrahim's alog 82581, which laid out instructions for adjusting the SHG TEC set temperature (went from 33.7 to 33.9; see attachment #1). AND---during these adjustments the infamous SQZ_SHG finally LOCKED!!
After this it was easy and straightforward: I took the SQZ Manager to FDS and got H1 back to Observing.
NOTE: I wanted to see the last time this set temperature was adjusted: it was Feb 17, 2025. An alog search on just "SHG" + tag: SQZ took me to when it was last adjusted: by ME! During an OWL wake-up call, I adjusted this set point from ~35.1 to 33.7 on 02/17/2025 at 1521 UTC / 7:21am PST (see attachment #2).
The only SDF to ACCEPT was H1:SQZ-SHG_TEC_SETTEMP = 33.7 (see attachment #3). BUT remember: the other changes I made (when I erroneously thought I had to adjust the OPO TEC TEMP) are not in SDF.
Hindsight is 20/20, but if I addressed the "bonkos SQZ_SHG" via instructions from an alog search first, I would have saved some time! :)
The SHG guardian was cycling through its states because of the hard fault checker, which checks for errors on the SQZ laser, the PMC trans diode, the SHG demod, and the phase shifter.
The demod had an error because the RF level was too high; indeed, the RF level was above this threshold during this time and then dropped back to normal, allowing Corey to lock the squeezer.
The second screenshot shows a recent time when the SHG scanned and locked successfully. In this case, as the PZT scans, the RF level goes up as expected when the cavity is close to resonance, and also goes above the 0 dBm threshold for a moment, causing the demod to have an error. This must not have happened at the moment the guardian was checking for this error, so the guardian allowed it to continue to lock.
It doesn't make sense to throw an error about this RF level when the cavity is scanning, so I've commented out the demod check from the hardfault checker.
Also, looking at this hard fault checker, I noticed that it checks for a fault on the PMC trans PD. It would be preferable to have the OMC guardian do whatever checks it needs to do on the PMC, and to trust the SQZ manager to correctly not ask the SHG to lock when the PMC is unlocked. Indeed, SQZ manager has a PMC check when it is asking the SHG to lock, so I've commented out this PMC check in the SHG guardian. The same logic applies to the check on the squeezer laser, leaving us with only a check on the SHG phase shifter in the hard fault checker.
Editing to add: I wondered why Corey got the message about the pump ISS. DIAG_MAIN has two checks for the squeezer: first that SQZ_MANAGER is in the nominal state, and second that the pump ISS is on. I added an elif to the pump ISS check, so if the SQZ manager isn't in the nominal state, that will be the only message the operator sees. Ryan Short and I looked at SQZ_MANAGER, and indeed it seems there isn't a check for the pump ISS in FREQ_DEP_SQZ.
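The check ordering described above can be sketched as follows (hypothetical function and message strings; the real DIAG_MAIN guardian code, state names, and message text differ):

```python
# Hypothetical sketch of the DIAG_MAIN check ordering described above;
# not the actual guardian code.
def sqz_messages(sqz_manager_state, pump_iss_on):
    """Return the operator-facing messages for the two squeezer checks."""
    messages = []
    if sqz_manager_state != 'FREQ_DEP_SQZ':
        # Manager not nominal: report only this, since downstream symptoms
        # (like the pump ISS being off) are expected in that case.
        messages.append('SQZ_MANAGER not in nominal state')
    elif not pump_iss_on:
        messages.append('ISS Pump is off. See alog 70050.')
    return messages
```

With the elif, an operator in Corey's situation would see only the SQZ_MANAGER message instead of being pointed at the ISS.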
SQZ_SHG guardian and DIAG_MAIN will need to be reloaded at the next opportunity.
TITLE: 03/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY:
H1 just made it to NLN after a 7.75 hr lock overnight (lockloss), but has a SQZ ISS Pump Off issue. Microseism and winds are both low.
TITLE: 03/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S.
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 8mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.20 μm/s
SHIFT SUMMARY:
H1 was locked for 7 hours and 17 minutes, until a sudden, unknown lockloss struck at 5:24 UTC. Screenshots of lockloss ndscopes attached.
I took the last half hour of my shift to run an initial alignment before the start of Ryan's shift.
H1 is currently just past CARM_TO_TR.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:32 | WFS | Keita | Optics Lab | Yes | Parts for WFS | 01:36 |
BRS Drift Trends -- Monthly FAMIS 26452
BRSs are not trending beyond their red thresholds.
I assembled the 45MHz WFS unit in the optics lab. Assembly drawing: D1102002.
BOM:
I confirmed that the baseplate and the WFS body are electrically isolated from each other.
There were many black spots on the WFS body (2nd pic) as well as the aluminum foil used for wrapping (3rd pic). It seems that this is a result of rubbing of aluminum against aluminum. I cannot wipe it off but this should be aluminum powder and not some organic material.
QPD orientation is such that the tab on the can is at 1:30 o'clock position seen from the front (4th pic). You cannot tell it from the picture but there's a hole punched to the side of the can.
Clean SMP - dirty SMA cables are in a bag inside the other clean room in the optics lab. DB25 interface cable is being made (or was made?) by Fil.
This WFS Assembly (D1102002) has been given the dcc-generated Serial Number of: S1300637 (with its electronics installed & sealed with 36MHz & 45MHz detection frequencies). As Keita notes, this s/n is etched by hand on the WFS Body "part" (D1102004 s/n016).
Here is ICS information for this new POP WFS with the Assembly Load here: ASSY-D1102002-S1300637
(NOTE: When this WFS is installed in HAM1, we should also move this "ICS WFS Assy Load" into the next Assy Load up: "ISC HAM1 Assembly" (ICS LINK: ASSY-D1000313-H1).)
Tested the in-vac POP_X sensor in the optics lab:
All electronics tests passed! We are ready for installation.
TITLE: 03/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 16mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY:
H1 has been locked and Observing for 2 hours and 23 minutes.
All systems are running well, though the range seems a bit low.
TITLE: 03/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing at 147 Mpc and have been Locked for 2.5 hours. Relocking after the lockloss during the calibration measurements was fully automatic and went relatively smoothly.
LOG:
20:35UTC Lockloss during calibration measurements
22:07 NOMINAL_LOW_NOISE
22:10 Observing
23:25 Three military jet planes flew overhead at a very low altitude (tagging Detchar)
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:02 | FAC | Nelly | Opt lab | n | Tech clean | 16:14 |
19:21 | SQZ | Sheila, Mayank | LVEA - SQZ | n | SQZ rack meas. | 19:52 |
21:05 | TOUR | Sheila, Nately | LVEA | N | Tour | 21:29 |
21:53 | ISC | Matt, Siva, Mayank | OpticsLab | n | ISS Array Alignment | 22:48 |
23:55 | VAC | Jordan, Fifer reps | Mids | n | Vacuum work | 00:39 |
00:32 | WFS | Keita | Optics Lab | Yes | Parts for WFS | 02:32 |
I found evidence of possible scattered light while looking at some data from a lock yesterday. Attached is a whitened spectrogram of 30 minutes of data starting at GPS 1425273749. It looks like the peaks are around 28, 38, and 48 Hz, but they are broad and it's hard to tell the exact frequency and spacing. Sheila thinks this may have appeared after Tuesday maintenance. Tagging detchar request so some tests can be run to help us track down the source!
Ryan Short and I have been looking through the summary pages to see what we could learn about this.
Our range has been shaggy and low since Tuesday, which does line up well with Tuesday maintenance. Comparing the glitch rate before and after Tuesday isn't as easy; Iara Ota pointed me to the DMT omega glitch pages to make the comparison for Wednesday, when omicron wasn't working. The DMT omega pages don't show the problem very clearly, but the omicron-based ones do show more glitches with SNR 8 and higher since Tuesday maintenance; compare Monday to Thursday.
Hveto does flag something interesting: the ETMX optical lever vetoes a lot of these glitches. Both the pitch and yaw channels are picked up by hveto, and they don't seem related to glitches in other channels. The oplev wasn't appearing in hveto before Tuesday.
In recent weeks (every day after Feb 26), there have been large jumps in the amplitude of ground motion between 10-30 Hz at ETMX during the night. A good example of this behavior is on March 1 (see the relevant summary page plot from this page). This jump in ground motion occurs around 3 UTC and then returns to the lower level after 16 UTC. The exact times of the jumps change from night to night, but the change in seismic state is quite abrupt, and seems to line up roughly with the time periods when this scattering appears.
Ryan found this alog from Oli about this noise: 83093. Looking back through the summary pages, it does seem that this started turning off and on Feb 20th; before the 20th, this BLRMS was constantly at the level of 200 nm/s.
Comparing the EX ground BLRMS to the optical lever spectra: whenever this ground noise is on, you can see it in the optical lever pitch and yaw, indicating that the optical lever is sensing ground motion. Feb 22nd is a nice example of the optical lever getting quieter when the ground does. However, at that time we don't yet see the glitches in DARM, and hveto doesn't pick up the optical lever channel until yesterday. I'm having a hard time telling when this ground motion started to line up with glitches in DARM; it does for the last two days.
I ran each chilled water pump at the Mid Stations in order to exercise their seals and bearings, since these pumps are currently off for the season. This will cause the temperature trending of the loop water to show a decrease for a brief period. The Mid Station chillers are currently shut down and will remain off until late Spring.
F. Clara, D. Barker, J. Kissel, O. Patane, D. Sigg
ECR E1700228
(0) Confirm the analog cabling plan implicit in D0902810-v10 -- see LHO:83168.
(1) Move PR3 optical lever from h1sush2b over to h1sush2a to make room for PM1 on h1sush2b, WP 12370. Front-end model prep work: LHO:83194
(2) Add new POP WFS controls path to ASC front-end model for driving PM1, WP 12374. Front-end model prep work: LHO:83195
(3) Add PM1 as an "HSSS_FF_MASTER.mdl" into the h1sushtts.mdl, following the other HTTS in HAM1, RM1 and RM2, WP 12375. Front-end model prep work: LHO:83196 and LHO:83211
Note, as Oli mentions in LHO:83196, we'll likely need one more model change to h1sushtts.mdl and the future h1isiham1.mdl model to send up some fresh-and-better ISI feed-forward signals to the RMs and PM1, but this will come in due time.
The goal is to execute all of this prep work and work-permits during the upcoming maintenance period, Mar 11 2025 (with a back-up date of Mar 18 2025 if needed).
J. Kissel
ECR E1700228
One last thing in prep for PM1 from the simulink model perspective: in addition to the "controls" model that Oli and I prepped yesterday (LHO:83196, LHO:83194), we need to add the HAM-A coil driver voltage monitors, the "VMONs," to the h1susauxh2 model.
Currently the HAM12 wiring diagram D0902810-v10 is in conflict, indicating that these VMON channels are to be piped in
(1) on page 12 to "IN25-28" of the SUS-C4 AA chassis at U11, or
(2) on page 14 to "IN25-28" of the SUS-C3 AA chassis at U31,
which either way feed into ADC0 of the h1susauxh2 IO chassis/computer. I'm 100% confident that (2) is right, and will redline the drawing. As such, I've populated the new PM1 VMON filters with the ADC0 channels 24-27. See attached screenshots.
I've tested that the model compiles successfully on the h1build machine. The updates to the model have been committed to the userapps repo as /opt/rtcds/userapps/release/sus/h1/models/h1susauxh2.mdl rev 30909. No library parts were used or impacted.
Following the usual Cal meas wiki instructions, I ran a calibration broadband and simulines measurement, but the IFO lost lock near the very end of simulines. All test points were cleared with the script wrapup.
Simulines start:
PST: 2025-03-06 12:12:00.699516 PST
UTC: 2025-03-06 20:12:00.699516 UTC
GPS: 1425327138.699516
Simulines output near end:
2025-03-06 20:35:04,501 | INFO | Drive, on DARM_OLGTF, at frequency: 1083.3, and amplitude 1e-09, is finished. GPS start and end time stamps: 1425328503, 1425328518
2025-03-06 20:35:04,502 | INFO | Scanning frequency 1200.0 in Scan : DARM_OLGTF on PID: 861436
2025-03-06 20:35:04,502 | INFO | Drive, on DARM_OLGTF, at frequency: 1200.0, is now running for 23 seconds.
2025-03-06 20:35:05,621 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 43.6, and amplitude 0.40648, is finished. GPS start and end time stamps: 1425328503, 1425328518
2025-03-06 20:35:05,622 | INFO | Scanning frequency 51.05 in Scan : L2_SUSETMX_iEXC2DARMTF on PID: 861446
2025-03-06 20:35:05,622 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 51.05, is now running for 23 seconds.
2025-03-06 20:35:06,760 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 289.84, and amplitude 0.49024, is finished. GPS start and end time stamps: 1425328503, 1425328518
2025-03-06 20:35:07,271 | INFO | 4 still running.
2025-03-06 20:35:11,138 | INFO | Drive, on PCALY2DARMTF, at frequency: 7.68, and amplitude 16534, is finished. GPS start and end time stamps: 1425328496, 1425328523
2025-03-06 20:35:11,279 | INFO | 3 still running.
2025-03-06 20:35:11,721 | ERROR | IFO not in Low Noise state, Sending Interrupts to excitations and main thread.
2025-03-06 20:35:11,721 | ERROR | Ramping Down Excitation on channel H1:LSC-DARM1_EXC
2025-03-06 20:35:11,721 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
Process Process-12:
Traceback (most recent call last):
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 340, in checkIfLocked
os.kill(pid,signal.SIGINT)
ProcessLookupError: [Errno 3] No such process
2025-03-06 20:35:24,341 | INFO | Drive, on L1_SUSETMX_iEXC2DARMTF, at frequency: 12.86, and amplitude 19.723, is finished. GPS start and end time stamps: 1425328522, 1425328538
Process Process-9:
Traceback (most recent call last):
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 678, in generateSignalInjection
results[scan] = tempObj
File "", line 2, in __setitem__
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/managers.py", line 817, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
2025-03-06 20:35:30,666 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 51.05, and amplitude 0.34489, is finished. GPS start and end time stamps: 1425328529, 1425328544
Process Process-10:
Traceback (most recent call last):
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 678, in generateSignalInjection
results[scan] = tempObj
File "", line 2, in __setitem__
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/managers.py", line 817, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
ICE default IO error handler doing an exit(), pid = 861402, errno = 32
PST: 2025-03-06 12:35:30.912878 PST
UTC: 2025-03-06 20:35:30.912878 UTC
GPS: 1425328548.912878
22:10UTC Back to Observing after lockloss
Thu Mar 06 10:13:00 2025 INFO: Fill completed in 12min 56secs
Just before calibration and commissioning time we lost lock. Just like the lockloss before it, the only sign of anything moving beforehand is a very small wiggle in the ETMY output tens of ms prior. This might be nothing, but it's the only thing I've found so far.
Once we get back we will jump straight into unthermalized commissioning followed by a calibration.
TITLE: 03/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.32 μm/s
QUICK SUMMARY: Currently in a stand down, we've been locked for 5.5 hours. Calm environment, no alarms.
TITLE: 03/06 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 10mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.29 μm/s
SHIFT SUMMARY:
Once LHO H1 was relocked at the very beginning of my shift, it stayed locked for the next 4 hours and 40 minutes.
Everything has been running well.
LOG:
No Log
I'm looking again at the OSEM estimator we want to try on PR3 - see https://dcc.ligo.org/LIGO-G2402303 for description of that idea.
We want to make a yaw estimator, because that should be the easiest one for which we have a hope of seeing some difference (vertical is probably easier, but you can't measure it). One thing which makes this hard is that the cross coupling from L drive to Y readout is large.
But - a quick comparison (first figure) shows that the L to Y coupling (yellow) does not match the Y to L coupling (purple). If this were a drive from the OSEMs, then these should match. This is actually a drive from the ISI, so it is not strictly reciprocal - but the ideas are still relevant. For an OSEM drive, we know that mechanical systems are reciprocal; so, to the extent that yellow doesn't match purple, this coupling cannot be in the mechanics.
Nevertheless, the similarity of the Length-to-Length and the Length-to-Yaw TFs indicates that there is likely a great deal of cross-coupling in the OSEM sensors. We see that the Y response shows a bunch of the L resonances (L to L is the red TF): you drive L, and you see L in the Y signal. This smells of a coupling where the Y sensors see L motion. That is quite plausible if the two L OSEMs on the top mass are not calibrated correctly - because they are very close together, even a small scale-factor error will result in a pretty big Y response to L motion.
Next - I did a quick fit (figure 2). I took the Y<-L TF (yellow, measured back in LHO alog 80863) and fit the L<-L TF to it (red), and then subtracted the L<-L component. The fit coefficient which gives the smallest response at the 1.59 Hz peak is about -0.85 rad/meter.
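The fit-and-subtract step can be sketched like this (a minimal stand-in, not the actual SVN script; the arrays below are toy data in place of the measured DTT transfer functions):

```python
import numpy as np

# Minimal sketch of the single-coefficient fit described above: find the
# scale factor c that best cancels the L<-L content from the measured
# Y<-L transfer function, then subtract it.
def fit_and_subtract(LL, YL):
    """Least-squares complex scale factor c minimizing |YL - c*LL|^2,
    and the residual after subtracting the fitted L<-L content."""
    c = np.vdot(LL, YL) / np.vdot(LL, LL)  # vdot conjugates LL
    return c, YL - c * LL

# Toy demonstration: build a Y<-L TF that is purely leaked L<-L at -0.85 rad/m.
f = np.linspace(0.2, 5.0, 500)
LL = 1.0 / (1.0 + 1j * f / 1.59)   # stand-in for the measured L<-L TF
YL = -0.85 * LL                    # pure L leakage
c, residual = fit_and_subtract(LL, YL)
# c recovers -0.85 and the residual is ~0
```

On real data the residual would retain the genuinely yaw-driven peaks, which is the green trace in figure 3.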
In figure 3, you can see the result in green, which is generally much better. The big peak at 1.59 Hz is much smaller, and the peak at 0.64 Hz is reduced. There is more from the peak at 0.75 Hz (this is related to pitch; why should the yaw OSEMs see pitch motion? Maybe transverse motion of the little flags? I don't know, and it's going to be a headache).
The improved Y<-L (green) and the original L<-Y (purple) still don't match, even though they are much closer than the original yellow/purple pair. Hence there is more which could be gained by someone with more cleverness and time than I have right now.
figure 4 - I've plotted just the Y<-Y and Y<-L improved.
Note - the units are wrong: the drive units are all meters or radians, not forces and torques, and we know, because of the d-offset in the mounting of the top wires from the suspoint to the top mass, that an L drive of the ISI produces first-order L and P forces and torques on the top mass. I still need to calculate how much pitch motion we expect to see in the yaw response for the mode at 0.75 Hz.
In the meantime - this argues that the yaw motion of PR3 could be reduced quite a bit with a simple update to the SUS large triple model, I suggest a matrix similar to the CPS align in the ISI. I happen to have the PR3 model open right now because I'm trying to add the OSEM estimator parts to it. Look for an ECR in a day or two...
This is run from the code {SUS_SVN}/HLTS/Common/MatlabTools/plotHLTS_ISI_dtttfs_M1_remove_xcouple
-Brian
ah HA! There is already a SENSALIGN matrix in the model for the M1 OSEMs - this is a great place to implement corrections calculated in the Euler basis. No model changes are needed, thanks Jeff!
If this is a gain error in one of the L OSEMs, how big is it? About 15%.
Move the top mass; let OSEM #1 measure a distance m1, and OSEM #2 measure m2.
Give OSEM #2 a gain error, so its response is really (1+e) times the true distance.
Translate the top mass by d1 with no rotation, and the two signals will be m1 = d1 and m2 = d1*(1+e).
L is (m1 + m2)/2 = d1/2 + d1*(1+e)/2 = d1*(1+e/2)
The angle will be (m1 - m2)/s, where s is the separation between the OSEMs.
I think that s = 0.16 meters for the top mass of the HLTS (from make_sus_hlts_projections.m in the SUS SVN).
Angle measured is (d1 - d1*(1+e))/s = -d1 * e / s
The angle/length ratio for a length drive is
-(d1 * e / s) / (d1*(1+e/2)) = (1/s) * (-e/(1+e/2)) = -0.85 in this measurement
If e is small, then e ≈ 0.85 * s = 0.85 rad/m * 0.16 m = 0.14,
so a 14% gain difference between the rt and lf OSEMs will give you about a 0.85 rad/meter cross-coupling (actually closer to 15%: 0.15/(1 + 0.075) = 0.1395, but the approximation is pretty good).
15% seems like a lot to me, but that's what I'm seeing.
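As a numeric check of the algebra above (s and the fitted coupling are the values quoted in the text):

```python
# Numeric check of the OSEM gain-error estimate; values are from the text.
s = 0.16         # OSEM separation [m], from make_sus_hlts_projections.m
coupling = 0.85  # magnitude of the fitted Y<-L coupling [rad/m]

# Small-e approximation: e ~ coupling * s
e_approx = coupling * s                               # ~0.136, i.e. ~14%

# Exact inversion of coupling = (e / (1 + e/2)) / s
e_exact = coupling * s / (1.0 - coupling * s / 2.0)   # ~0.146, i.e. ~15%

# Round trip: plugging e_exact back in reproduces the fitted coupling
roundtrip = (e_exact / (1.0 + e_exact / 2.0)) / s     # ~0.85 rad/m
```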
I'm adding another plot from the set to show vertical-roll coupling.
fig 1 - Here you see that the vertical-to-roll cross-coupling is large. This is consistent with a miscalibrated vertical sensor causing common-mode vertical motion to appear as roll. Spoiler alert: Edgard just predicted this to be true, and he thinks that sensor T1 is off by about 15%. He also thinks the right sensor is 15% smaller than the left.
-update-
fig 2 - I've also added the vertical-pitch plot. Here again we see significant response of the vertical motion in the pitch DOF. We can compare this with what Edgard finds. This will be a smaller difference because the pitch sensors (T2 and T3, I think) are very close together (9 cm total separation, see below).
Here are the spacings as documented in SUS_SVN/HLTS/Common/MatlabTools/make_sushlts_projections.m
I was looking at the M1 ---> M1 transfer functions last week to see if I could do some OSEM gain calibration.
The details of the proposed sensor rejiggling are a bit involved, but the basic idea is that the part of the M1-to-M1 transfer function coming from the mechanical plant should be reciprocal (up to the impedances of the ISI). I tried to symmetrize the measured plant by changing the gains of the OSEMs, and later by including the possibility that the OSEMs might be seeing off-axis motion.
Three figures and three findings below:
0) Finding 1: The reciprocity only allows us to find the relative calibrations of the OSEMs, so all of the results below are scaled to the units where the scale of the T1 OSEM is 1. If we want absolute calibrations, we will have to use an independent measurement, like the ISI-->M1 transfer functions. This will be important when we analyze the results below.
1) Figure 1: shows the full 6x6 M1-->M1 transfer function matrix between all of the DOFs in the Euler basis of PR3. The rows represent the output DOF and the columns represent the input DOF. The dashed lines represent the transpose of the transfer function in question, for easier comparison. The transfer matrix is not reciprocal.
2) Finding 2: The diagonal correction (relative to T1) is given by:
I will post more analysis in the Euler basis later.
Here's a view of the Plant model for the HLTS - damping off, motion of M1. These are for reference as we look at which cross-coupling should exist. (spoiler - not many)
First plot is the TF from the ISI to the M1 osems.
L is coupled to P, T & R are coupled, but that's all the coupling we have in the HLTS model for ISI -> M1.
Second plot is the TF from the M1 drives to the M1 osems.
L & P are coupled, T & R are coupled, but that's all the coupling we have in the HLTS model for M1 -> M1.
These plots are Magnitude only, and I've fixed the axes.
For the OSEM to OSEM TFs, the level of the TFs in the blank panels is very small - likely numerical issues. The peaks are at the 1e-12 to 1e-14 level.
@Brian, Edgard -- I wonder if some of this ~10-20% mismatch in OSEM calibration is that we approximate the D0901284-v4 sat amp whitening stage with a compensating filter of z:p = (10:0.4) Hz? (I got on this idea through modeling the *improvement* to the whitening stage that is already in play at LLO and will be incoming into LHO this summer; E2400330.)
If you math out the frequency response from the circuit diagram and component values, the response is defined by
%  Vo                 R180
% ---- = (-1) * --------------------------------
%  Vi           Z_{in}^{upper} || Z_{in}^{lower}
%
%               R181   (1 + s * (R180 + R182) * C_total)
%      = (-1) * ---- * ---------------------------------
%               R182   (1 + s * (R180) * C_total)
So for the D0901284-v4 values of
R180 = 750; R182 = 20e3; C150 = 10e-6; C151 = 10e-6; R181 = 20e3;
that creates a frequency response of
f.zero = 1/(2*pi*(R180+R182)*C_total) = 0.3835 [Hz];
f.pole = 1/(2*pi*R180*C_total) = 10.6103 [Hz];
I attach a plot that shows the ratio of this "circuit component value ideal" response to the approximate response; the response ratio hits 7.5% by 10 Hz and ~11% by 100 Hz. This is, of course, for one OSEM channel's signal chain. I haven't modeled how this systematic error in compensation would stack up with linear combinations of slight variants of this response given component value precision/accuracy, but ...
... I also am quite confident that no one really wants to go through and measure and fit the zero and pole of every OSEM channel's sat amp frequency response, so maybe you're doing the right thing by "just" measuring it with this technique and compensating for it in the SENSALIGN matrix. Or at least measure one sat amp box's worth, and see how consistent the four channels are and whether they're closer to 0.4:10 Hz or 0.3835:10.6103 Hz.
Anyways -- I thought it might be useful to be aware of the many steps along the way where we've been lazy about the details in calibrating the OSEMs, and this would be one way to "fix it in hardware."
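A quick numeric restatement of the component-value math above (values as quoted from D0901284-v4; the assumption here is that C150 and C151 combine to a single C_total of 20 uF):

```python
import math

# D0901284-v4 sat amp whitening stage component values (from the text).
R180, R181, R182 = 750.0, 20e3, 20e3
C150, C151 = 10e-6, 10e-6
C_total = C150 + C151   # assumption: the two caps combine to 20 uF

# Zero/pole frequencies implied by the circuit response quoted above.
f_zero = 1.0 / (2.0 * math.pi * (R180 + R182) * C_total)  # ~0.3835 Hz
f_pole = 1.0 / (2.0 * math.pi * R180 * C_total)           # ~10.61 Hz
```

These reproduce the 0.3835:10.6103 Hz pair, versus the 0.4:10 Hz approximation used in the compensating filter.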
Jeff, Oli
ECR E1700228
More preparation to make way for PM1 - Jeff and I went into the h1sushtts simulink model and added in PM1 and its necessary connections (h1sushtts before). It was basically a copy of the RM1 and RM2 control blocks, with the input ADC channels taking 24 - 27, and channels 8 - 11 on the DAC (h1sushtts after - PM1+output).
We also copied the RMs' PCIe inputs, but the channels coming in from the TTL4C on HAM1 are going to be removed for the RMs when the ISI is installed in HAM1 and replaced with the new ISI channels, so PM1 will never have the HAM1_TTL4C channels. Since we want to be able to compile and test the model before then, I have put in a constant 0 in place of the TTL4C channels for PM1 (h1sushtts after - IPC INP). Once we have the new ISI channels, we can add these connections in using those channels, as well as update the channels for the RMs.
Daniel just added in the new ASC channels for PM1 (83195), so I was able to successfully compile h1sushtts. It has not yet been installed.
The model file can be found in /opt/rtcds/userapps/release/sus/h1/models/, and the changes to h1sushtts.mdl have been committed to the svn as revision 30907.
Just a few "slept on it, and remembered we should" things to add: (1) Attached is the DAQ channel list that comes with the installation of PM1. We didn't cover it explicitly above because it comes standard with the /opt/rtcds/userapps/release/sus/common/models/HSSS_FF_MASTER.mdl library part, but as it is new (small) weight on the DAQ, it's worth calling out: 4x new channels stored at 512 Hz, and 13x at 256 Hz. (2) Also, the oft-forgotten coil driver output voltage monitor channels, the so-called VMONs, needed to be absorbed by the h1susauxh2 front-end too, so we've now done the model prep for that as well -- see LHO:83211.