Famis 25079
In-Lock SUS Charge Measurement
While searching for the files created by the in-lock SUS charge measurements, I noticed that there were multiple files for several of the optics created today in the directory /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO:
ls -l | grep "Aug 15"
-rw-r--r-- 1 1010 controls 160 Aug 15 07:50 ETMY_12_Hz_1376146243.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 08:22 ETMY_12_Hz_1376148152.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 07:50 ITMX_14_Hz_1376146241.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 08:22 ITMX_14_Hz_1376148154.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 07:50 ITMY_15_Hz_1376146220.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 08:08 ITMY_15_Hz_1376147322.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 08:21 ITMY_15_Hz_1376148134.txt
Listing all files, filtering for only those that contain the string ETMX, and then filtering those for "Aug 15", with the following command:
ls -l | grep "ETMX" | grep "Aug 15"
returned no files, which means that while it looks like the measurement was run twice, it never completed for ETMX.
I'm not sure whether the analysis will run without all of the files.
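For reference, a quick way to check which optics produced measurement files today (a minimal sketch, assuming the same rec_LHO directory and the OPTIC_freq_Hz_GPS.txt naming seen above):

import datetime
import glob
import os

rec_dir = ("/opt/rtcds/userapps/release/sus/common/scripts/quad/"
           "InLockChargeMeasurements/rec_LHO")
today = datetime.date.today()

# Count today's files per optic; an empty list flags an optic that never completed.
for optic in ("ETMX", "ETMY", "ITMX", "ITMY"):
    files = sorted(
        os.path.basename(f)
        for f in glob.glob(os.path.join(rec_dir, optic + "_*_Hz_*.txt"))
        if datetime.date.fromtimestamp(os.path.getmtime(f)) == today
    )
    print(optic, len(files), files)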
SUS_CHARGE LOG:
2023-08-15_15:26:18.969345Z SUS_CHARGE LOAD ERROR: see log for more info (LOAD to reset)
2023-08-15_15:26:53.512031Z SUS_CHARGE LOAD REQUEST
2023-08-15_15:26:53.524359Z SUS_CHARGE RELOAD requested. reloading system data...
2023-08-15_15:26:53.527151Z SUS_CHARGE Traceback (most recent call last):
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 566, in run
2023-08-15_15:26:53.527151Z self.reload_system()
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 327, in reload_system
2023-08-15_15:26:53.527151Z self.system.load()
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 400, in load
2023-08-15_15:26:53.527151Z module = self._load_module()
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 287, in _load_module
2023-08-15_15:26:53.527151Z self._module = self._import(self._modname)
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 159, in _import
2023-08-15_15:26:53.527151Z module = _builtin__import__(name, *args, **kwargs)
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 1109, in __import__
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap_external>", line 786, in exec_module
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap_external>", line 923, in get_code
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap_external>", line 853, in source_to_code
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
2023-08-15_15:26:53.527151Z File "/opt/rtcds/userapps/release/sus/h1/guardian/SUS_CHARGE.py", line 67
2023-08-15_15:26:53.527151Z ezca.get_LIGOFilter('SUS-ETMX_L3_DRIVEALIGN_L2L').ramp_gain(lscparams.ETMX_GND_MIN_DriveAlign_gain, ramp_time=20, wait=False)
2023-08-15_15:26:53.527151Z ^
2023-08-15_15:26:53.527151Z IndentationError: unindent does not match any outer indentation level
2023-08-15_15:26:53.527151Z SUS_CHARGE LOAD ERROR: see log for more info (LOAD to reset)
2023-08-15_15:29:10.009828Z SUS_CHARGE LOAD REQUEST
2023-08-15_15:29:10.011001Z SUS_CHARGE RELOAD requested. reloading system data...
2023-08-15_15:29:10.050137Z SUS_CHARGE module path: /opt/rtcds/userapps/release/sus/h1/guardian/SUS_CHARGE.py
2023-08-15_15:29:10.050393Z SUS_CHARGE user code: /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
2023-08-15_15:29:10.286761Z SUS_CHARGE system archive: code changes detected and committed
2023-08-15_15:29:10.331427Z SUS_CHARGE system archive: id: 9b481a54e45bfda96fa2f39f98978d76aa6ec7c0 (162824613)
2023-08-15_15:29:10.331427Z SUS_CHARGE RELOAD complete
2023-08-15_15:29:10.332868Z SUS_CHARGE calculating path: SWAP_TO_ITMX->INJECTIONS_COMPLETE
2023-08-15_15:29:14.129521Z SUS_CHARGE OP: EXEC
2023-08-15_15:29:14.129521Z SUS_CHARGE executing state: SWAP_TO_ITMX (11)
2023-08-15_15:29:14.135913Z SUS_CHARGE W: RELOADING @ SWAP_TO_ITMX.main
2023-08-15_15:29:14.158532Z SUS_CHARGE [SWAP_TO_ITMX.enter]
2023-08-15_15:29:14.276536Z SUS_CHARGE [SWAP_TO_ITMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_TRAMP => 10
2023-08-15_15:29:14.277081Z SUS_CHARGE [SWAP_TO_ITMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN => 0
2023-08-15_15:29:17.820392Z SUS_CHARGE REQUEST: DOWN
2023-08-15_15:29:17.821281Z SUS_CHARGE calculating path: SWAP_TO_ITMX->DOWN
2023-08-15_15:29:17.822235Z SUS_CHARGE new target: DOWN
2023-08-15_15:29:17.822364Z SUS_CHARGE GOTO REDIRECT
2023-08-15_15:29:17.822669Z SUS_CHARGE REDIRECT requested, timeout in 1.000 seconds
2023-08-15_15:29:17.824392Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:17.895303Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:17.958976Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.018262Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.079443Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.130595Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.197848Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.253456Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.318549Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.378993Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.446375Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.507978Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.576823Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.641493Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.695114Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.774571Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.822999Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.823662Z SUS_CHARGE REDIRECT timeout reached. worker terminate and reset...
2023-08-15_15:29:18.831141Z SUS_CHARGE worker terminated
2023-08-15_15:29:18.849938Z SUS_CHARGE W: initialized
2023-08-15_15:29:18.871834Z SUS_CHARGE W: EZCA v1.4.0
2023-08-15_15:29:18.872835Z SUS_CHARGE W: EZCA CA prefix: H1:
2023-08-15_15:29:18.872835Z SUS_CHARGE W: ready
2023-08-15_15:29:18.872980Z SUS_CHARGE worker ready
2023-08-15_15:29:18.883790Z SUS_CHARGE EDGE: SWAP_TO_ITMX->DOWN
2023-08-15_15:29:18.884081Z SUS_CHARGE calculating path: DOWN->DOWN
2023-08-15_15:29:18.886386Z SUS_CHARGE executing state: DOWN (2)
2023-08-15_15:29:18.891745Z SUS_CHARGE [DOWN.enter]
2023-08-15_15:29:18.893116Z Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
2023-08-15_15:29:20.216958Z SUS_CHARGE [DOWN.main] All nodes taken to DOWN, ISC_LOCK should have taken care of reverting settings.
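For context, the traceback above shows the guardian failing to reload because of an indentation mismatch at line 67 of SUS_CHARGE.py (later fixed; see the note near the end of this log). A minimal, purely hypothetical reproduction of this class of error, not the actual guardian code:

def main():
    if True:
        x = 1
      y = 2  # dedented to a level that matches no enclosing block ->
             # IndentationError: unindent does not match any outer indentation level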
ESD_EXC_ETMX LOG:
2023-08-01_15:07:01.324869Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-01_15:07:01.325477Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX calculating path: DOWN->DOWN
ESD_EXC_ITMX LOG:
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Restoring things to the way they were before the measurement
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Ramping on bias on ITMX ESD
2023-08-15_15:22:16.034430Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 0
2023-08-15_15:22:18.266457Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_SW1 => 8
2023-08-15_15:22:18.517569Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS => OFF: OFFSET
2023-08-15_15:22:18.518166Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 20
2023-08-15_15:22:18.518777Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 1.0
2023-08-15_15:22:38.431399Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 2.0
2023-08-15_15:22:41.264244Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_SW1S => 5124
2023-08-15_15:22:41.515470Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L => ONLY ON: INPUT, DECIMATION, FM4, FM5, OUTPUT
2023-08-15_15:22:41.515470Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] all done
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX EDGE: RESTORE_SETTINGS->COMPLETE
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX calculating path: COMPLETE->COMPLETE
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX executing state: COMPLETE (30)
2023-08-15_15:22:41.636417Z ESD_EXC_ITMX [COMPLETE.enter]
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX REQUEST: DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX calculating path: COMPLETE->DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX new target: DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX GOTO REDIRECT
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX REDIRECT requested, timeout in 1.000 seconds
2023-08-15_15:22:41.768046Z ESD_EXC_ITMX REDIRECT caught
2023-08-15_15:22:41.768046Z ESD_EXC_ITMX [COMPLETE.redirect]
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX EDGE: COMPLETE->DOWN
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX calculating path: DOWN->DOWN
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX executing state: DOWN (1)
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping bias_drive_bias_on
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping L_drive_bias_on
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping bias_drive_bias_off
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping L_drive_bias_off
2023-08-15_15:22:41.923244Z ESD_EXC_ITMX [DOWN.main] Clearing bias_drive_bias_on
2023-08-15_15:22:42.059154Z ESD_EXC_ITMX [DOWN.main] Clearing L_drive_bias_on
2023-08-15_15:22:42.216133Z ESD_EXC_ITMX [DOWN.main] Clearing bias_drive_bias_off
2023-08-15_15:22:42.349505Z ESD_EXC_ITMX [DOWN.main] Clearing L_drive_bias_off
2023-08-15_15:29:20.260953Z ESD_EXC_ITMX REQUEST: DOWN
2023-08-15_15:29:20.260953Z ESD_EXC_ITMX calculating path: DOWN->DOWN
2023-08-15_15:18:31.953103Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.enter]
2023-08-15_15:18:34.481594Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:18:34.482160Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:19:36.482043Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] timer['Injection duration'] done
2023-08-15_15:19:36.516842Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] Injection finished
2023-08-15_15:19:38.908908Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:19:39.011256Z ESD_EXC_ITMX EDGE: L_DRIVE_WITH_BIAS->TURN_BIAS_OFF
2023-08-15_15:19:39.011836Z ESD_EXC_ITMX calculating path: TURN_BIAS_OFF->COMPLETE
2023-08-15_15:19:39.012099Z ESD_EXC_ITMX new target: BIAS_DRIVE_NO_BIAS
2023-08-15_15:19:39.018534Z ESD_EXC_ITMX executing state: TURN_BIAS_OFF (15)
2023-08-15_15:19:39.019024Z ESD_EXC_ITMX [TURN_BIAS_OFF.enter]
2023-08-15_15:19:39.019710Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] Ramping off bias on ITMX ESD
2023-08-15_15:19:39.020547Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 0
2023-08-15_15:19:58.934813Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_OFFSET => 0
2023-08-15_15:19:58.935544Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 2
2023-08-15_15:19:58.935902Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 1
2023-08-15_15:20:01.140528Z ESD_EXC_ITMX EDGE: TURN_BIAS_OFF->BIAS_DRIVE_NO_BIAS
2023-08-15_15:20:01.141391Z ESD_EXC_ITMX calculating path: BIAS_DRIVE_NO_BIAS->COMPLETE
2023-08-15_15:20:01.142015Z ESD_EXC_ITMX new target: L_DRIVE_NO_BIAS
2023-08-15_15:20:01.143337Z ESD_EXC_ITMX executing state: BIAS_DRIVE_NO_BIAS (16)
2023-08-15_15:20:01.144372Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.enter]
2023-08-15_15:20:03.673255Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_LOCK_BIAS_EXC
2023-08-15_15:20:03.673786Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:21:05.674028Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] timer['Injection duration'] done
2023-08-15_15:21:05.697880Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] Injection finished
2023-08-15_15:21:07.987796Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_LOCK_BIAS_EXC
2023-08-15_15:21:08.072581Z ESD_EXC_ITMX EDGE: BIAS_DRIVE_NO_BIAS->L_DRIVE_NO_BIAS
2023-08-15_15:21:08.072581Z ESD_EXC_ITMX calculating path: L_DRIVE_NO_BIAS->COMPLETE
2023-08-15_15:21:08.073301Z ESD_EXC_ITMX new target: RESTORE_SETTINGS
2023-08-15_15:21:08.076744Z ESD_EXC_ITMX executing state: L_DRIVE_NO_BIAS (17)
2023-08-15_15:21:08.079417Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.enter]
2023-08-15_15:21:10.597939Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:21:10.598481Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:22:12.598413Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] timer['Injection duration'] done
2023-08-15_15:22:12.633547Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] Injection finished
2023-08-15_15:22:15.937968Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:22:16.018077Z ESD_EXC_ITMX EDGE: L_DRIVE_NO_BIAS->RESTORE_SETTINGS
2023-08-15_15:22:16.018395Z ESD_EXC_ITMX calculating path: RESTORE_SETTINGS->COMPLETE
2023-08-15_15:22:16.018676Z ESD_EXC_ITMX new target: COMPLETE
2023-08-15_15:22:16.019499Z ESD_EXC_ITMX executing state: RESTORE_SETTINGS (25)
2023-08-15_15:22:16.019891Z ESD_EXC_ITMX [RESTORE_SETTINGS.enter]
2023-08-15_15:22:16.020220Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Finished with all excitations
2023-08-15_15:22:16.033260Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Saved GPS times in logfile: /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO/ITMX_14_Hz_1376148154.txt
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Restoring things to the way they were before the me
Despite some great efforts to track down the source from Jenne and Jeff, we are still seeing a 102 Hz line rung up right at the end of the lownoise_length_control state. Since we had a random lockloss, I asked Tony to take us to lownoise_esd_etmx and I tried walking through lownoise length control by hand (copying the guardian code line by line into the shell).
The lines 5427-5468 ramp various gains to zero, set up the filters and drive matrix for the LSC feedforward, and prepare for the SRCL offset. These lines run fine and do not ring up the 102 Hz line.
I was able to run the first action line of the run state, which sets the MICH FF gain to 1 (line 5480). This ran fine, no 102 Hz line. Then I ran the next line to turn on the SRCL FF gain (line 5481). This caused an immediate lockloss (huh?), despite the fact that this code has run many times just fine.
On the next lock attempt, I tried running the MICH and SRCL gain lines at the exact same time. Also immediate lockloss.
I have no idea why this is such an issue. All it does is ramp the gains to 1 (the tramps are set to 3 seconds on a previous line).
Both of these locklosses seem to ring up a test mass bounce mode, suggesting that the SRCL FF (I assume) is kicking a test mass pretty hard.
This might be a red herring, or maybe it's a clue. I don't see any 102 Hz line during these locklosses though.
The offending lines:
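(The lines themselves were not copied into this entry. Based on the TRAMP channels named below, they presumably look roughly like the following guardian-style sketch, where ezca is provided by the guardian environment; this is my reconstruction, with assumed channel names, not a verbatim copy of ISC_LOCK:)

# Approximate reconstruction of ISC_LOCK run-state lines ~5480-5481 (not verbatim):
ezca['LSC-MICHFF_GAIN'] = 1    # turn on MICH feedforward, ramping over the TRAMP set in main
ezca['LSC-SRCLFF1_GAIN'] = 1   # turn on SRCL feedforward; the step that coincides with the kick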
I think it's pretty clear that this is an LSC feedforward problem. I attached two ndscopes of the ETMX L3 master outs, one zoomed in and one zoomed out. The massive oscillation in the signal is the 102 Hz line, which I first began to see in the time series starting at 5:24:32 UTC and some milliseconds. This corresponds exactly to the time in the guardian log when the LSC feedforward gain is ramped on (see the copied guardian log below).
I have added two new lines to lownoise_length_control that increase the LSC FF ramp time from 3 to 10 seconds. These lines are at the end of the main state, right before the run state. I also increased the timer in the first step of the run state to wait 10 seconds after the FF gains are set, before moving to the next part of the run state, which changes LSC gains.
This will result in two SDF diffs for the feedforward ramp times in the LSC model. They can be accepted. Tagging Ops.
SDF changes accepted; picture attached.
Reverted these changes to make it past LOWNOISE_LENGTH_CONTROL
5471 ezca['LSC-MICHFF_TRAMP'] = 3 Changed back to 3 from 10
5472 ezca['LSC-SRCLFF1_TRAMP'] = 3 Changed back to 3 from 10
And
5488 self.timer['wait'] = 1 #Changed back to 1 from 10.
This may not be a problem with a filter kick, but with the filter making some loop unstable and driving up the 102 Hz line. I suspect that changing the aux DOF gains immediately afterwards makes it stable again. If so, slowing down the transition only makes it worse. We may need to reorder the steps.
I_X saturation at 2:52 UTC, right before this lockloss.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376189496
Cause unknown. Likely not an earthquake.
Acquired the IFO while it was in Manual Initial Alignment.
Had trouble with SRM in ACQUIRE_SRY. Touched up SRM one last time and it locked just fine.
Started Locking, and ALSX & Y were way off.
Intervened, and eventually Jenne said we should redo initial alignment with the ITMs because the ITM camera servo error signals were large.
Started initial alignment; got to PRC_ALIGNING and PRM was continuously saturating.
We then took ISC_LOCK to down and MANUAL_INITIAL_ALIGNMENT.
Worked through PREP_FOR_PRX and manually moved PRM to maximize Peaks and Valleys on H1:ASC-AS_DC_NSUM_OUT16.
Then moved through MICH_BRIGHT OFFLOADED and SR2 ALIGN manually.
I did have to move SRM again.
After this, locking happened quickly, starting at 00:55 UTC.
Reached NLN at 1:39 UTC and Observing at 2:03 UTC.
Dust alarms in the optics labs are going off. Wind is only 15 mph.
Current IFO Status: NOMINAL_LOW_NOISE & OBSERVING.
E. Capote, J. Kissel, T. Sanchez
Since TJ ran the dark offsets script during maintenance recovery this afternoon (LHO:72251), we had to accept all of those new offsets in observe. Here are the screenshots of all the models that had offsets that we accepted.
LSC-REFL_A_RF offsets were accepted.
OAF-LASERNOISE_NONSENS_COEFF changes were accepted.
Here's the OMC DCPDs. Tagging CAL just in case.
Accepted SDFs for ASCIMC, ISCEY, ISCEX and ASC (three screenshots).
We've done three measurements to assess the OMC loss, but we've found that there's something weird about the DCPD transient response (will attach another alog, but it seems as if something is railing inside the chamber, or maybe the OMC DCPD inductor is responding nonlinearly and causing some kind of soft saturation, or maybe it's just a whitening-dewhitening mismatch).
Because of this, Measurement 1 below is suspect; Measurement 2 is definitely OK; Measurement 3 is probably OK too.
Analysis will come later.
Throughout the measurement, the RF sidebands (9, 45 and 118 MHz) were OFF, H1:OMC-DCPD_A_GAINSET and B were set to low (a factor of 10 smaller than nominal), and IMC-PWR_IN was 10 W.
Measurement 1. Scan OMC PZT and measure the MM loss.
Lock the OMC, align it reasonably well, unlock, then scan the PZT slowly. The best scan is between 19:26:25 and 19:34:44 UTC (t=[-31m,-23m] in the 1st attachment).
"Align reasonably well" was a challenge: since we were using the OMC QPD for OMC ASC, changing the alignment meant changing the QPD offset, and doing so frequently railed the OMC suspension. While changing the offsets, the maximum DCPD_SUM I could reach was 16.9, but I kept bumping OMCS so I gave up (sensor correction was off for the first half of this effort, which didn't help). In the end, usable data was obtained with the default offset, which gave us ~16.6 when locked, but we know that was NOT the best alignment.
As you can see, the peaks in DCPD_SUM during the scan are all about ~14, much smaller than 16-something. (At t=-36m, the OMC was held at resonance and DCPD_SUM was ~16.6. After the scan, at t=-20m, DCPD_SUM was ~16.45. So alignment drift wasn't much of a problem.)
It turns out that we would have needed to scan even slower than this, even though this was already REALLY slow (~8 minutes for one cycle), and/or lower the laser power.
Measurement 2. OMC throughput.
Lock the OMC to 00 resonance (19:37:27-19:38:28 UTC, roughly t=[-20m, -19m] on the 1st attachment).
Measure DCPD_SUM, input power (via ASC-OMC_A and ASC-OMC_B SUM) and reflected power (via OMC-REFL_A).
With the same alignment into OMC, find the time where the OMC was off-resonance (DCPD transmission was minimal 19:25:58-19:26:20 UTC), and measure DCPD_SUM, input power and reflected power.
Calculate the throughput.
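A minimal sketch of the bookkeeping these steps imply (my own arithmetic, ignoring dark offsets and calibration factors; not the actual analysis script):

# p_in       : power incident on the OMC breadboard (from ASC-OMC_A/B SUM, calibrated)
# p_refl_on  : reflected power with the OMC locked on the 00 resonance (OMC-REFL_A)
# p_refl_off : reflected power off resonance, same alignment
# p_trans_on : transmitted power on resonance, inferred from DCPD_SUM
def omc_visibility_and_throughput(p_in, p_refl_on, p_refl_off, p_trans_on):
    visibility = 1.0 - p_refl_on / p_refl_off   # fraction of the input coupled into the cavity
    throughput = p_trans_on / p_in              # overall transmitted fraction
    return visibility, throughput

# Example with the powers (mW) quoted in the follow-up analysis below:
print(omc_visibility_and_throughput(23.207, 2.081, 22.764, 19.187))  # ~(0.909, 0.827)

These land close to the ~90.8% 00-mode efficiency and ~82.7% measured efficiency quoted in that analysis, which additionally accounts for the QPD pick-off and the DCPD quantum efficiency.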
Measurement 3. OMC Finesse
I roughly kept the OMC at resonance by adjusting the PZT voltage. DCPD_SUM was slowly drifting, but it was about 16.0.
I started injecting into the PSL frequency with an amplitude of ~+-600 kHz (1.2 MHz pp) via IMC-L_EXC. I slowed down the injection frequency until the peak value got back to 16.0. (Second attachment.)
Best scan data is obtained 19:56:20-19:57:20 UTC.
OMC DCPD transient response
See the 2nd attachment of the above alog (attached again here).
At t=[-2m40s, -2m20s], nothing is railing at the ADC. You can tell because ADC saturation would mean that OMC-DCPD_A_STAT_MAX and/or OMC-DCPD_A_STAT_MIN hit +-512k (MAX only went to 120k and MIN only went to -80k in this case).
However, note that the DC values of MAX and MIN during the time the OMC was kept on resonance were about 48k and 47k, respectively. The fact that MIN went to negative 80k and MAX went to positive 120k means that the transient response was huge.
Even though it was huge, if nothing was railing or saturating you'd still expect the peak height in DCPD_SUM to be the same as when the OMC was kept on resonance, but clearly that's not the case. At first, when the scan was fast, the peak value was maybe 60% of what it should have been, and as I slowed down the scan the peak gradually came back to ~99% or so.
In the case of the PZT scan, we're talking about a slow velocity where each 00 peak is ~0.6 sec wide (and even that was too fast; we should have reduced the laser power. Anyway, see the 2nd attachment).
Where does the mismatch come from?
A simple whitening-dewhitening mismatch? Is something railing inside the chamber, or maybe soft-saturating because of a large transient (e.g. a big coil)? I think we need help from Hartmut.
I'm using Keita's times from the OMC visibility measurement above to run the script described in 73873, and using the dark offset times from that alog as well. This time includes more higher-order-mode content, and slightly higher overall efficiency, than the time in 73873, which is somewhat confusing.
Results:
Power on refl diode when cavity is off resonance: 22.764 mW
Incident power on OMC breadboard (before QPD pickoff): 23.207 mW
Power on refl diode on resonance: 2.081 mW
Measured efficiency (DCPD current/responsivity if QE=1)/ incident power on OMC breadboard: 82.7 %
assumed QE: 100 %
power in transmission (for this QE) 19.187 mW
HOM content inferred: 8.979 %
Cavity transmission inferred: 91.712 %
predicted efficiency () (R_inputBS * mode_matching * cavity_transmission * QE): 82.676 %
omc efficiency for 00 mode (including pick off BS, cavity transmission, and QE): 90.832 %
round trip loss: 685 (ppm)
Finesse: 392.814
assumed QE: 96.0 %
power in transmission (for this QE) 19.986 mW
HOM content inferred: 9.099 %
Cavity transmission inferred: 95.660 %
predicted efficiency () (R_inputBS * mode_matching * cavity_transmission * QE): 82.676 %
omc efficiency for 00 mode (including pick off BS, cavity transmission, and QE): 90.952 %
round trip loss: 348 (ppm)
Finesse: 401.145
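As a rough consistency check (my own arithmetic, not part of the script output above): assuming the usual high-finesse relation F ~ 2*pi / (T_in + T_out + L_rt), both finesse/loss pairs imply the same combined coupler transmission, i.e. the analysis appears to hold the coupler transmission fixed while trading QE against round-trip loss.

import math

def implied_coupler_transmission_ppm(finesse, round_trip_loss_ppm):
    # Solve F ~ 2*pi / (T_in + T_out + L_rt) for T_in + T_out, in ppm
    return 2e6 * math.pi / finesse - round_trip_loss_ppm

print(implied_coupler_transmission_ppm(392.814, 685))  # ~15310 ppm (QE = 100% case)
print(implied_coupler_transmission_ppm(401.145, 348))  # ~15315 ppm (QE = 96% case)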
Tagging
- CAL (because of the suggestion that there's a mismatch between analog whitening and digital compensation for it [I doubt it]),
- CDS (because Ali James -- who was Hartmut's student that built up the in-vac transimpedance amplifier -- has now been hired as our newest CDS analog electronics engineer),
- DetChar (because understanding this transient behavior may be a clue to some other DetChar / GW channel related transient features, since Keita refers to studying *the* DCPDs -- the OMC DCPDs).
Sheila and I examined the new ASC subbudget today (see 72245) and we did not like the shape of CHARD Y above 20 Hz. This new shape is a result of Gabriele's adjustment to CHARD Y to give more low frequency suppression (71866, 71927).
Since this feature appears above 20 Hz, and a recent measurement of CHARD Y shows that the 3.3 Hz UGF has a 44 deg phase margin, I figured we could design a 20 Hz cutoff filter that removes this feature with minimal effect on loop stability.
Attached is a screenshot of the new design (in red) compared with the old design (in gold). I first turned off "JLP200", which is a very high frequency low pass that I think is unnecessary. Then I constructed a low-Q elliptic low pass at 20 Hz with 10 dB of suppression. The result is 6 deg less phase margin at 3.3 Hz. I minimized the passband ripple so that the gain increase around 10 Hz is only 1 dB.
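For reference, a rough scipy sketch of a comparable filter (not the actual foton design; the order, ripple, and model rate below are illustrative guesses):

import numpy as np
from scipy import signal

fs = 2048  # assumed ASC model rate [Hz]; use the real rate when checking this
# Low-Q elliptic low pass near 20 Hz: ~1 dB passband ripple, ~10 dB stopband suppression
sos = signal.ellip(2, 1, 10, 20, btype='lowpass', fs=fs, output='sos')

# Check the cost at the 3.3 Hz CHARD Y UGF and the gain around 10 Hz
w, h = signal.sosfreqz(sos, worN=np.array([3.3, 10.0, 20.0]), fs=fs)
for fi, hi in zip(w, h):
    print(f"{fi:5.1f} Hz: {20*np.log10(abs(hi)):+6.2f} dB, {np.degrees(np.angle(hi)):+7.2f} deg")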
I updated the guardian. It will no longer use FM10 (200 Hz low pass), and will turn on FM8 (20 Hz low pass) along with Gabriele's filters. No gain change is required.
This filter has been successfully tested.
TITLE: 08/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Tony
SHIFT SUMMARY: A slightly longer maintenance day, and then a few small issues that have us just now finishing up initial alignment. After maintenance activities were done and we started initial alignment, it was noticed that the sidebands had not been turned back on. After they were turned back on, we were still not able to get through the INPUT_ALIGN state. We could very briefly catch, but then the IMC would lose lock. This ended up being because the H1:LSC-POP_A_RF45_I_OFFSET needed to be changed from -72 to -68 during this state. This offset is specific to this state, because we change the POP A whitening away from its nominal setting during this particular part of initial alignment.
Before this offset was found, I had run the dark offset script, but I forgot that our POP A whitening was in the initial alignment setting. I manually changed these back to where they were and re-accepted them.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:31 | FAC | Karen | EX | n | Tech clean | 17:28 |
| 15:31 | FAC | Randy | EY | n | Cleanroom curtains | 18:22 |
| 15:32 | FAC | Cindi | FCES | n | Tech clean | 17:18 |
| 15:32 | CDS/SEI | Jim | HAM1 | n | HAM1 model restart | 15:52 |
| 15:33 | CDS | Dave | remote | n | OAF model restart | 15:43 |
| 15:37 | SEI | Jim | LVEA-HAM1 | n | HAM1 HEPI accumulator swap | 15:52 |
| 15:40 | TCS | Camilla | Mech room | n | TCS chiller swap | 16:10 |
| 15:41 | FAC | Chris | EY, MY, CS, MX, EX | n | FAMIS checks | 16:49 |
| 15:45 | PSL | Jason, Ryan | CR | n | PMC ref cav alignment | 16:21 |
| 15:47 | SEI/CDS | Fil | EX | n | Cabling for SEI HEPI | 17:47 |
| 15:52 | SEI | Jim | EX | n | Replace zerks | 16:35 |
| 16:05 | PEM | Robert, Lance, Genevieve | LVEA, EY | n | Move shaker from LVEA to EY | 17:55 |
| 16:14 | TCS | Camilla | LVEA | N | Turn on CO2 lasers after chiller swap | 16:35 |
| 16:28 | PCAL | Rick, Julianna, Tony | EX | YES | PCAL measurement | 18:43 |
| 16:53 | FAC | Christina | FCES | n | Property search | 19:22 |
| 17:02 | CDS | f2f tour group | LVEA, roof | n | Tour | 18:04 |
| 17:13 | CC | Mitch, Ibrahim | Ends | - | Dust monitor famis check | 17:56 |
| 17:19 | FAC | Cindi, Karen | LVEA | n | Tech clean | 18:52 |
| 17:41 | SUS | Jeff | EX | n | TMSX measurement | 18:59 |
| 17:55 | PEM | Robert | EX, EY | n | Moving equipment from EX to EY | 19:06 |
| 18:04 | VAC | Janos, Gerardo | outbuildings, LVEA | n | Checks (started around 1530) | 18:40 |
| 18:06 | CDS | Fil | EY | n | SEI HEPI cabling | 19:02 |
| 18:10 | ISC | Keita | CR | n | OM2 measurement | 19:40 |
| 22:11 | VAC | Gerardo, Janos | LVEA | n | Grab equipment | 22:20 |
TITLE: 08/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 9mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
Current IFO Status: Aligning
Julianna Lewis, TonyS, RickS
This morning we moved the upper (inner) Pcal beam at X-end back to center, then horizontally by 5 mm at the entrance aperture of the Pcal Rx sensor, to test the impact on the calibration of the Pcal system at X-end. This work is a continuation of the previous movement on August 8th, 2023; see the alog from August 8th, 2023.
We expect that the impact on the calibration of the Xend Pcal will be given by the dot product of the Pcal and IFO displacement vectors on the ETM.
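For reference, the geometric coupling being probed is usually written (in the Pcal literature) as a fractional correction proportional to the dot product of the Pcal center-of-force offset and the IFO beam offset on the optic. A sketch, with illustrative (not documented) test-mass values:

import numpy as np

def pcal_rotation_coupling(a_pcal, b_ifo, M=39.6, I=0.419):
    # Fractional change in the Pcal-induced displacement sensed by the IFO:
    #   x_sensed ~ (F / (M*omega**2)) * (1 + (M / I) * dot(a_pcal, b_ifo))
    # a_pcal, b_ifo are 2D offset vectors [m] from the ETM center; M [kg] and
    # I [kg*m^2] are nominal test-mass values used here for illustration only.
    return (M / I) * float(np.dot(a_pcal, b_ifo))

# e.g. a 5 mm horizontal Pcal center-of-force offset against a hypothetical 1 mm IFO beam offset:
print(pcal_rotation_coupling(np.array([5e-3, 0.0]), np.array([1e-3, 0.0])))  # ~5e-4, i.e. ~0.05%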
The work proceeded as follows:
We will gather data for the following 7 days and observe any changes in the X/Y comparison.
J. Kissel

Executive Summary: Driving TMSX in YAW from the top mass (M1) OSEM coils, and using the same-stage M1 OSEM sensors as my response channels, thus far I can only report null results for finding any excitable mechanical features at 102.12833 Hz. I found a feature at 103.585 +/- 0.004 Hz, but it's only a single feature, so I suspect it's the M1 "lower" blade bending mode rather than any evidence for violin modes. But, given the arrangement of sensors, actuators, and the wires on the transmon, in retrospect I am not surprised not to have found violin modes.

I'm looking to confirm whether the frequency that the IFO has been ringing up recently, 102.12833 Hz, is a violin mode of TMSX, which is currently "the most likely suspect" for the giant feature that rings up for the first ~3 hours of a lock stretch, after "apparently" being "rung up" by the turn-on process for the LSC feedforward (see, among others, LHO aLOGs 72064, 72214, 72221).

Recall the TMTS is a double suspension -- see D0901880 for the top-level assembly drawing. Recall that TMSX is a "production" TMS, rather than a "First Article" TMS, but I can't find a single source link that tells us the difference clearly and concisely.
- From the ISI, there are two blades (no drawing! only an excel spreadsheet of characterization under D1200116) and two wires suspending the top mass (M1).
- From M1, there are also two blades (no drawing! only an excel spreadsheet of characterization under D1200117) and *four* wires suspending the optical bench and telescope assembly. The TMS is unique in that it's the only 4-wire suspension clamped from 2 blade tips (see D1101163).

So, if we drive the M1 stage (the only place we can), and with no IFO all we have are M1 stage sensors, then we would hope to see:
- two violin modes from the Sus. Point to M1 "upper" wires, *maybe* each split into two orthogonal DOFs if the Q is high enough and the bending points move asymmetrically enough,
- four violin modes from the M1 to M2 "lower" wires, again *maybe* split into eight.

T1300876, table 3.11, for the "production" TMTS predicts a single violin mode frequency from first principles (the properties of the wires and the load on the wires) to be:
- Upper -- 331.7 Hz
- Lower -- 104.2 Hz

Looking at the OSEM arrangement for the TMTS (see E1200045) -- which is *like* the QUAD, but rotated 90 deg such that only one SD OSEM senses / actuates in Longitudinal -- I chose to start the exploration for violin modes by driving in Yaw around 100-110 Hz using awggui, and measuring the responses, transfer functions, and coherence simultaneously in DTT. I slowly narrowed in my 6th-order elliptic band pass on the region, such that I could drive more and more power. I had plenty of SNR in the M1 OSEM to M1 OSEM transfer function at these frequencies, so I could get coherent transfer functions.

I did find a feature at 103.585 +/- 0.002 Hz, but since it's only one feature rather than four (or eight), I can't claim that "this is *the* violin mode(s)"; I guess instead that it's a blade bending mode of the "lower" M1-to-M2 blades. Why?
:: These TMTS M1-to-M2 blades are much like the QUAD's M0-to-L1 and L1-to-L2 blades -- copies and pastes of the UIM blades -- which have bending modes at around ~110 Hz (see analysis for the QUAD blade conditions in T1300595).
:: While I expect energy is getting into the violin modes from the M1 Yaw drive, I guess I'm not surprised that the energy of those violin modes does not couple back through the blade clamps, the blades, and the M1 mass to the OSEM flags enough to be seen above the OSEM sensor noise.

Here's some of the data:
- 1st attachment: The transfer function from M1 Y drive to M1 Yaw and Transverse OSEM response, with an FFT length of 128 seconds for a frequency resolution of 1/128 = 0.0078125 Hz (and an effective noise bin width of (1/128)*1.5 = 0.01171875 Hz, given that I'm using the default Hanning window, which has an NENBW of 1.5 bins). This was the first clue that I wouldn't find anything at 102.129 Hz, and that there was something at 103.585 Hz.
- 2nd attachment: Yaw drive with a tighter band-pass filter from 102 to 103 Hz, measured with the same 0.0078 Hz (7.8 mHz) resolution; we see nothing in this band.
- 3rd attachment: Yaw drive with a higher-frequency 1 Hz band pass, from 103 to 104 Hz, measured with a much higher frequency resolution -- FFT length of 512 seconds, for a resolution of 1.95 mHz and an effective noise bin width of 2.92 mHz.
awggui settings and drive amplitudes:
- Attachment 1: drive settings for the 101 to 105 Hz band pass.
- Attachment 2: drive settings for the 102 to 103 Hz band pass.
- Attachment 3: drive settings for the 103 to 104 Hz band pass.
- Attachment 4: TMTSs at H1 still have 18-bit DACs, i.e. a saturation limit of 2^17 = 131072 ct_peak, or an rms limit of 2^17/sqrt(2) = 92681 ct_rms. I drove the TMS with the top mass coil driver in state 1 (analog low pass OFF), with an F2 and F3 DAC output request RMS of 40000 ct_rms (this is captured during the 103 to 104 Hz band pass settings).
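For reference, the T1300876-style first-principles numbers quoted above come from the standard ideal-wire violin-mode expression; a sketch with made-up placeholder wire parameters (not the actual TMTS wire properties):

import numpy as np

def violin_mode_freq(tension_N, length_m, linear_density_kg_m, n=1):
    # n-th violin mode of an ideal wire: f_n = (n / (2*L)) * sqrt(T / mu)
    return (n / (2.0 * length_m)) * np.sqrt(tension_N / linear_density_kg_m)

# Placeholder numbers only: ~75 kg load shared by 4 wires, 0.6 m long, 1.3 mm diameter steel wire
mu = 7800 * np.pi * (1.3e-3 / 2) ** 2            # linear mass density [kg/m]
print(violin_mode_freq(75 * 9.81 / 4, 0.6, mu))  # ~111 Hz, same ballpark as the 104.2 Hz prediction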
Elenna, Sheila
We got data today to rerun our noise budget with the current noise (150 Mpc). We got quiet time with no squeezing for 10 minutes starting at 1375723813, with no large glitches. We ran excitations for LSC, laser noise, and ASC. We had quiet time with squeezing injected from the previous night of observing; I chose 1375695779 as a time with high range and no large glitches. This is commit 50358cda.
Elenna, Sheila
We ran the noise budget code for this no squeezing time.
This is all committed as 0f9ffe0e.
Sheila, Vicky - we have re-run the noise budget for the following times:
Noise budget with squeezing. Changes here: using GDS instead of CAL-DELTAL, closer thermalized FDS time to no-sqz, using updated IFO gwinc parameters related to quantum noise calculation.
(Edit: there was a glitch in the old time; updated to an FDS time without glitches. All plots updated.)
PDT: 2023-08-10 08:45:00.000000 PDT
UTC: 2023-08-10 15:45:00.000000 UTC
GPS: 1375717518.000000
PDT: 2023-08-10 09:35:52.000000 PDT
UTC: 2023-08-10 16:35:52.000000 UTC
GPS: 1375720570.000000
Noise budget with no squeezing. Same time as above, now calculates using gwinc quantum noise calculation instead of semiclassical calculation used previously.
PDT: 2023-08-10 10:18:11.000000 PDT
UTC: 2023-08-10 17:18:11.000000 UTC
GPS: 1375723109.000000
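(A quick sketch of the UTC-to-GPS conversion for anyone reproducing these times, assuming gwpy is available:)

from gwpy.time import to_gps, from_gps

print(to_gps("2023-08-10 17:18:11"))  # 1375723109
print(from_gps(1375723109))           # 2023-08-10 17:18:11 UTC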
Both sqz & no-sqz noise budgets now use the correlated quantum noise calculation from gwinc, instead of semiclassical calculations for SN & QRPN. The gwinc budget parameters related to quantum noise calculation are consistent with the recent sqz data set (8/2, alog 72565), with readout losses evenly split between IFO output losses that influence optical gain (20%) and SQZ injection losses (20%), parameters in plot title here. This is high on SQZ injection losses, and slightly conservative on IFO output losses. This updated FDS time is thermalized and closer to the No-SQZ time; the time used previously was several hours earlier near the start of lock, w/ ifo not yet thermalized.
Unlike before, both budgets now show GDS-CALIB_STRAIN, which on 8/10 was more accurately calibrated (see Louis's alog on Aug 8, LHO:72075, comparing CAL-DELTAL and GDS vs. the PCAL sweep, and his record from 72531). CAL-DELTAL was previously overestimating range due to calibration inaccuracies. We got GDS-CALIB_STRAIN data from the nds servers, and at first weren't able to get input jitter data from nds, due to the sampling rate change of the IMC-WFS channels from 2k to 16k, 71242. Jonathan H. helped us fix this issue, so we can now pull GDS data and input jitter data from nds.ligo-wa.caltech.edu:31200 -- thank you Jonathan!! With this, the input jitter sub-budget is kind of interesting; it looks to be mostly IMC-WFS in YAW.
A quick thought on the discrepancy between expected and measured DARM below several hundred Hz: I don't know if this could be related to the recent update to the gwinc CTN parameters (high/low index loss angles), to quantum noise, or to mystery noise. The recent gwinc CTN update seems to have dropped the calculated CTN level slightly (maybe 10-15% or so). In April 2023, Kevin helped update the CTN parameters LHO:68499 to reconcile the H1 budget with the official gwinc parameters, while Evan made a correlated noise measurement 68482 where the noise in the bucket seems more consistent with the older CTN estimate from gwinc (or very slightly higher). Another idea is that it could be related to quantum noise, such as SRCL detuning or sqz angle, which could have changed since the sqz dataset, as quantum noise can also affect the noise in this region.
All pushed as git commit 28cf2664.
Edit: All pushed again as git commit 33ffd60b.
Added noise budgets with squeezing for HVAC off time on August 17 from alogs 72308, 72297.
When comparing this HVAC off time on Aug 17 with the noise budget from above on Aug 10, it's interesting to note the broadband difference in input jitter (Aug 10 vs Aug 17, HVAC off). Between these times, worth noting that I think there were several additional improvements (like LSC FF or SUS-related) as well.
Edit: updated 8/10 input jitter budget to the less glitchy noise budget time.
Much of the gap between expected DARM (black traces) and measured DARM (red traces) in the noise budget looks compatible with elevating the CTN trace. Budget plots with 100 Hz CTN @ 1.45e-20 m/rtHz are attached below for the no-HVAC times. This is almost 30% higher than the new gwinc nominal CTN at 100 Hz (i.e., 1.128e-20 m/rtHz --> 1.45e-20 m/rtHz). Compared to the old gwinc estimate of 1.3e-20, this is ~11% higher. Quantum noise calculation unchanged here.
This CTN level is similar to the 30% of excess correlated noise that Evan H. observed in April 2023, see LHO:68482. His cross-correlation measurement sees ~30% excess correlated noise around 100 Hz after subtracting input jitter noise, where that "30%" is using the newer gwinc CTN estimate of 1.128e-20 m/rtHz @ 100 Hz. This elevated correlated noise, if attributed to CTN, corresponds to CTN @ 100 Hz of about 1.3*1.128 = 1.46e-20 m/rtHz. See this git merge request for the gwinc CTN update ; this update lowered the expected CTN at 100 Hz by ~15%, from 1.3e-20 (old) to 1.1e-20 m/rtHz (new), based on updated MIT measurements.
For reference, I have plotted these various CTN levels as dotted traces in the thermal sub-budget.
To elevate the CTN level by ~30% in the budget code, I scaled both the high- and low-index loss angles by a factor of 1.8 (the thermal noise amplitude scales roughly as the square root of the loss angle), specifically Philhighn 3.89e-4 --> 7e-4 and Phillown 2.3e-5 --> 4.14e-5. It seems like a level much higher than ~1.45e-20 might be difficult to reconcile with the full budget.
Noteworthy w.r.t. squeezing: from the laser noise sub-budget, laser frequency noise looks to be within 33% of squeezed shot noise with ~3.7 dB of squeezing. By contrast, the L1 noise budget from Aug 2023 (LLO:66532) shows laser noise at the ~20% level of squeezed shot noise with 5.3 dB of squeezing, i.e. a lower laser noise floor relative to squeezed shot noise.
The following plots can be found in /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/lho_all_noisebudgets_081723_noHVAC_elevatedCTN, and are not yet committed to the git repo.
Plots with higher CTN are attached here for the SQZ / no-SQZ proper noise budget times from 8/10, when injections were run.
Comparing the sqz vs. no-sqz budgets suggests there might be more to understand here, to tease apart the contributions from coating thermal noise (CTN) vs. quantum noise in the bucket. In particular, something disturbing that stands out is that I had imagined that, if elevated CTN were the physical effect we're missing, it would reconcile both NBs, with and without squeezing. However, there is still some discrepancy in the un-squeezed budget which was not resolved by CTN and which seems to have a consistent shape. I'm wondering if this is related to the IFO configuration as it affects the quantum noise without squeezing. I think this could result from a non-zero but small SRCL detuning, since it looks like elevated noise, with a clear shape, that increases below the DARM pole. Simply elevating CTN to match the no-sqz budget would put us in conflict with squeezed DARM, so I don't think it makes sense to elevate CTN further. The budget currently has 0 SRCL detuning as it "seems small-ish", but this parameter is somewhat unconstrained in the quantum noise models.
In models, the readout angle is upper-bounded by Sheila's contrast defect measurement, though in principle it could probably be anything lower than that too, which could be worth exploring. It might be helpful to have an external measurement of the thermalized physical SRCL detuning, or in the models allowing the SRCL detunings to vary, to explore how it fits or is constrained by the fuller noise budget picture.
Plots with squeezing can be found in /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/lho_all_noisebudgets. No squeezing plots are in /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/lho_darm_nosqz_noisebudget.
I pushed to git commit 70ca191c without elevated CTN and the associated extra traces. The relevant parameters are left commented out at the bottom of the QuantumParams file, and relevant code to plot the extra traces is commented out in the lho_all_noisebudgets script.
Follow up on previous tests (72106)
First I injected noise on SR2_M1_DAMP_P and SR2_M1_DAMP_L to measure the transfer functions to SRCL. The result shows that the shapes are different and the ratio is not constant in frequency. Therefore we probably can't cancel the coupling of SR2_DAMP_P to SRCL by rebalancing the driving matrix, although I haven't thought carefully about whether there is some loop correction I need to apply to those transfer functions. I measured and plotted the DAMP_*_OUT to SRCL_OUT transfer functions. It might still be worth trying to change the P driving matrix while monitoring a P line to minimize the coupling to SRCL.
Then I reduced the damping gains for SR2 and SR3 even further. We are now running with SR2_M1_DAMP_*_GAIN = -0.1 (was -0.5 for all but P, which was -0.2 since I reduced it yesterday), and SR3_M1_DAMP_*_GAIN = -0.2 (was -1). This has improved the SRCL motion a lot and also improved the DARM RMS. It looks like it also improved the range.
Tony has accepted this new configuration in SDF.
Detailed log below for future reference.
Time with SR2 P gain at -0.2 (but before that too)
from PDT: 2023-08-10 08:52:40.466492 PDT
UTC: 2023-08-10 15:52:40.466492 UTC
GPS: 1375717978.466492
to PDT: 2023-08-10 09:00:06.986101 PDT
UTC: 2023-08-10 16:00:06.986101 UTC
GPS: 1375718424.986101
H1:SUS-SR2_M1_DAMP_P_EXC butter("BandPass",4,1,10) ampl 2
from PDT: 2023-08-10 09:07:18.701326 PDT
UTC: 2023-08-10 16:07:18.701326 UTC
GPS: 1375718856.701326
to PDT: 2023-08-10 09:10:48.310499 PDT
UTC: 2023-08-10 16:10:48.310499 UTC
GPS: 1375719066.310499
H1:SUS-SR2_M1_DAMP_L_EXC butter("BandPass",4,1,10) ampl 0.2
from PDT: 2023-08-10 09:13:48.039178 PDT
UTC: 2023-08-10 16:13:48.039178 UTC
GPS: 1375719246.039178
to PDT: 2023-08-10 09:17:08.657970 PDT
UTC: 2023-08-10 16:17:08.657970 UTC
GPS: 1375719446.657970
All SR2 damping at -0.2, all SR3 damping at -0.5
start PDT: 2023-08-10 09:31:47.701973 PDT
UTC: 2023-08-10 16:31:47.701973 UTC
GPS: 1375720325.701973
to PDT: 2023-08-10 09:37:34.801318 PDT
UTC: 2023-08-10 16:37:34.801318 UTC
GPS: 1375720672.801318
All SR2 damping at -0.2, all SR3 damping at -0.2
start PDT: 2023-08-10 09:38:42.830657 PDT
UTC: 2023-08-10 16:38:42.830657 UTC
GPS: 1375720740.830657
to PDT: 2023-08-10 09:43:58.578103 PDT
UTC: 2023-08-10 16:43:58.578103 UTC
GPS: 1375721056.578103
All SR2 damping at -0.1, all SR3 damping at -0.2
start PDT: 2023-08-10 09:45:38.009515 PDT
UTC: 2023-08-10 16:45:38.009515 UTC
GPS: 1375721156.009515
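For reference, a scipy sketch of the band-limiting used for these excitations (the butter("BandPass",4,1,10) specs above); the sample rate and the way the awg "ampl" scaling is applied are my assumptions, not the actual awg implementation:

import numpy as np
from scipy import signal

fs = 16384                   # assumed SUS model rate [Hz]
rng = np.random.default_rng(0)

# 4th-order Butterworth band pass, 1-10 Hz, applied to white noise and scaled,
# roughly mimicking 'butter("BandPass",4,1,10) ampl 2' on SR2_M1_DAMP_P_EXC
sos = signal.butter(4, [1, 10], btype='bandpass', fs=fs, output='sos')
exc = 2.0 * signal.sosfilt(sos, rng.standard_normal(int(fs * 60)))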
If our overall goal is to remove peaks from DARM that dominate the RMS, reducing these damping gains is not the best way to achieve that. The SR2 L damping gain was reduced by a factor of 5 in this alog, and a resulting 2.8 Hz peak is now being injected into DARM from SRCL. This 2.8 Hz peak corresponds to a 2.8 Hz SR2 L resonance. There is no length control on SR2, so the only way to suppress any length motion of SR2 is via the top-stage damping loops. The same can be said for SR3, whose gains were reduced by 80%. It may be that we are reducing sensor noise injected into SRCL from 3-6 Hz by reducing these gains, hence the improvement Gabriele has noticed.
Comparing a DARM spectrum before and after this change to the damping gains, you can see that the reduction in the damping gain did reduce DARM and SRCL above 3 Hz, but also created a new peak in DARM and SRCL at 2.8 Hz. I also plotted spectra of all dofs of SR2 and SR3 before and after the damping gain change showing that some suspension resonances are no longer being suppressed. All reference traces are from a lock on Aug 9 before these damping gains were reduced and the live traces are from this current lock. The final plot shows a transfer function measurement of SR2 L taken by Jeff and me in Oct 2022.
Since we fell out of lock, I took the opportunity to make SR2 and SR3 damping gain adjustments. I have split the difference on the gain reductions in Gabriele's alog. I increased all the SR2 damping gains from -0.1 to -0.2 (nominal is -0.5). I increased the SR3 damping gains from -0.2 to -0.5 (nominal is -1).
This is guardian controlled in LOWNOISE_ASC, because we need to acquire lock with higher damping gains.
Once we are back in lock, I will check the presence of the 2.8 Hz peak in DARM and determine how much different the DARM RMS is from this change.
There will be SDF diffs in observe for all SR2 and SR3 damping dofs. They can be accepted.
SR2 and SR3 damping gains changes that Elenna made have been accepted
The DARM RMS increases by about 8% with these new, slightly higher gains. These gains are a factor of 2 (SR2) and 2.5 (SR3) greater than Gabriele's reduced values. The 2.8 Hz peak in DARM is down by 21%.
This is a somewhat difficult determination to make, given all the non-stationary noise from 20-50 Hz, but it appears the DARM sensitivity is slightly improved from 20-40 Hz with the slightly higher SR2 gain. I randomly selected several times from the past few locks with the SR2 gains set to -0.1, and recent data from the last 24 hours where the SR2 gains were set to -0.2. There is a small improvement in the data with all SR2 damping gains = -0.2 and SR3 damping gains = -0.5.
I think we need to do additional tests to determine exactly how SR2 and SR3 motion limit SRCL and DARM so we can make more targeted improvements to both. My unconfirmed conclusion from this small set of data is that while we may be able to reduce reinjected sensor noise above 3 Hz with a damping gain reduction, we will also limit DARM if there is too much motion from SR2 and SR3.
These are both valid charge measurements; we could analyze either or both (and check that the answer is the same). We repeated the measurements while troubleshooting the issue in 72219. We have now fixed the issue (a typo) in SUS_CHARGE that was preventing the last ETMX measurement from being taken.
I just analyzed the first batch of in-lock charge measurements.
There are 13-14 plot points on most of the other plots but only 10 for ETMX.