H1 ISC
jenne.driggers@LIGO.ORG - posted 11:01, Wednesday 16 August 2023 (72276)
Reboot of h1oaf with updated NonSENS c-code helped subtraction ability

Since it's been such a struggle to get jitter subtraction and laser noise subtraction to work simultaneously, I regenerated the c-code and simulink model that go into the h1oaf model, and Dave rebooted it on Tuesday.

During a brief out-of-observing segment this morning, I tried implementing both jitter and laser noise subtraction, and was successful. I'll keep working on this with some new offline testing infrastructure that I'm building around the CDS group's librts, but I'll likely bring this to the subtraction review committee for their consideration soon.

Attached are spectra of the NOLINES channel (blue) versus CLEAN (red), along with the noise estimates (green + yellow = black). Over almost all of the frequency band (really, everywhere but the 60 Hz line) the subtraction has either had no effect or has made an improvement.
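
For anyone who wants to reproduce this kind of comparison offline, here is a minimal sketch using gwpy (the channel names and GPS window are placeholders; substitute the actual NOLINES/CLEAN channels and times):

from gwpy.timeseries import TimeSeries

start, end = 1376238000, 1376238512  # placeholder GPS window
nolines = TimeSeries.get('H1:OAF-NOLINES_PLACEHOLDER', start, end)
clean = TimeSeries.get('H1:OAF-CLEAN_PLACEHOLDER', start, end)
# overlay the two ASDs, colored as in the attached plot
plot = nolines.asd(fftlength=16, overlap=8).plot(color='blue', label='NOLINES')
ax = plot.gca()
ax.plot(clean.asd(fftlength=16, overlap=8), color='red', label='CLEAN')
ax.legend()
plot.show()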

The second attachment is the range plot from the summary pages, showing that the subtraction is indeed making a small improvement to the range (blue is above pink, rather than being identical to pink).

I accepted the SDF diffs for the updated subtraction coefficients, but we are not applying any subtraction in observing (and won't until given the all-clear from the review committee).

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:20, Wednesday 16 August 2023 (72274)
Wed CP1 Fill

Wed Aug 16 10:09:29 2023 INFO: Fill completed in 9min 25secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 10:17, Wednesday 16 August 2023 (72273)
Went out of Observing from 1647-1716 UTC for brief commissioning

Opportunistic commissioning time while LLO was down.

H1 DetChar
ansel.neunzert@LIGO.ORG - posted 09:20, Wednesday 16 August 2023 - last comment - 12:54, Wednesday 16 August 2023(72269)
1.6611 Hz comb gone after OM2 Beckhoff cable disconnected

I checked a 3-hour block of time after the work described in 72241, and it appears that the 1.6611 Hz comb was successfully removed. I've attached pre (Aug 15 10:00 UTC) vs post (Aug 16 06:00 UTC) plots, each with an averaging time of 3 hours for direct comparison. The 1.6611 Hz comb is the structure around 280 Hz marked with yellow triangles on the first plot, and absent on the second plot. There are also some untagged lines belonging to the comb around 180 Hz, which also disappear. Note that the small line still present around 269.7 Hz is not part of the comb, and the blue squares are an unrelated comb at 4.98 Hz.
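
For reference, a minimal Python sketch of where the 1.6611 Hz comb teeth should fall in the bands discussed above (frequencies from this report; harmonics assumed to start at the fundamental):

f0 = 1.6611  # Hz, comb spacing
for lo, hi in [(175.0, 185.0), (265.0, 285.0)]:
    # integer harmonics of f0 that land inside this band
    teeth = [round(n * f0, 4) for n in range(1, int(hi / f0) + 1)
             if lo <= n * f0 <= hi]
    print(f'{lo:.0f}-{hi:.0f} Hz: {teeth}')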

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:41, Wednesday 16 August 2023 (72270)CDS, ISC
Tagging ISC and CDS.

Nice work Keita and Ansel!
keita.kawabe@LIGO.ORG - 12:42, Wednesday 16 August 2023 (72272)

We can leave the temporary setup in place for a while (it's jury-rigged to use a DC power supply). But the question is what to do next.

E2100049 shows that the Beckhoff voltage output comes out of an EL4134 and is supposed to be directly connected to the positive and negative inputs of the driver chassis (pin 6 and pin 19 of the DB25, respectively; see D2000212).

The EL4134 is a 4-channel DAC module and its output is single-ended (an "O" terminal and a GND terminal). If you bother to download the catalog from the Beckhoff web page, it turns out that all GND terminals are connected together inside. These GND terminals are not connected to the GND terminals of neighboring Beckhoff modules, the Beckhoff power ground, nor the Beckhoff chassis (I checked this with Marc in the shop). It seems as if the Beckhoff GND output is floating relative to everything else.

I don't know why there's a 1.66 Hz comb between the Beckhoff GND terminal and the driver ground (pin 13 on the DB25), but maybe we can connect them together? (Unfortunately E2100049 doesn't show which Beckhoff terminal is connected to which pin on the driver chassis. I assume that the GND terminal goes to the negative input (pin 19), but I'm not sure. We have to make sure which is GND before making any connection.) However, if we do that, we probably don't want to repeat it for the second T-SAMS in the future, assuming that the second DAC output in the same EL4134 module will be used, or we'll be making a ground loop.

Anyway, if the noise comes back after doing that, we could add an adjustable resistive divider using a trim pot inside the driver chassis to supply the necessary voltage, as a kind of inconvenient mid-term solution. We could even try to connect the Beckhoff cable back to the driver chassis to regain readback after disconnecting the DAC output inside the Beckhoff chassis.
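
For what it's worth, the divider arithmetic is simple. A sketch, assuming the ~9 V supply from the temporary setup and an illustrative 10 kohm lower leg:

# Resistive divider to derive the 7.15 V driver input from a 9 V supply.
V_in, V_out = 9.0, 7.15
R2 = 10e3  # illustrative lower-leg resistance, ohm
R1 = R2 * (V_in - V_out) / V_out
print(f'R1 = {R1:.0f} ohm, R2 = {R2:.0f} ohm -> {V_in * R2 / (R1 + R2):.2f} V')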

I'll be gone for a month and cannot do these things, so it's up to Daniel.

daniel.sigg@LIGO.ORG - 12:54, Wednesday 16 August 2023 (72278)

Drawings and wiring tables for this Beckhoff chassis can be found here E1200377.

We should also check that the noise isn't propagated through the shield of the wire.

LHO General
thomas.shaffer@LIGO.ORG - posted 07:56, Wednesday 16 August 2023 (72266)
Ops Day Shift Start

TITLE: 08/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY: Locked for 9.5 hours. Looks like we rode through a 6.5 from Vanuatu 2 hours ago!

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 00:12, Wednesday 16 August 2023 (72265)
Tuesday Ops Eve Shift End

TITLE: 08/16 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Tony
INCOMING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 19mph Gusts, 14mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.13 μm/s
QUICK SUMMARY:

Acquired the IFO in manual initial alignment.
Had trouble with SRM in ACQUIRE_SRY. Touched SRM one last time and it locked just fine.
Started locking, and ALSX & Y were way off.
Intervened, and eventually Jenne said we should redo initial alignment with the ITMs because the ITM camera servo error signals were large.

Initial alignment started; got to PRC_ALGNING and PRM was continuously saturating.
We then took ISC_LOCK to DOWN and MANUAL_INITIAL_ALIGNMENT.
Worked through PREP_FOR_PRX and manually moved PRM to maximize the peaks and valleys on H1:ASC-AS_DC_NSUM_OUT16.

Then moved through MICH_BRIGHT OFFLOADED and SR2 ALIGN manually.
I did have to move SRM again.


After this, locking happened quickly.

Dust alarms in the optics labs are going off. Wind is only 15 mph.

Lockloss of unknown cause at 02:51:18 UTC
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376189496

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72261


Elenna and I tried walking through LOWNOISE_LENGTH_CONTROL manually, by running the commands in the terminal, to find out where the 102 Hz noise is coming from.
Lost lock after running line 5481: ezca['LSC-SRCLFF1_GAIN'] = lscparams.gain['SRCLFF1'] * lscparams.dc_readout['sign']
Tried walking through that state again and we lost lock again, on the same line as above, at 4:14 UTC.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262

Made it back to NOMINAL_LOW_NOISE @ 5:26 UTC
Made it back to OBSERVING at 5:40 UTC
 

H1 General
anthony.sanchez@LIGO.ORG - posted 23:14, Tuesday 15 August 2023 - last comment - 22:12, Thursday 17 August 2023(72264)
FAMIS 25079

FAMIS 25079
In-lock SUS charge measurement

While searching for the files created by the in-lock SUS charge measurements, I noticed that there were multiple copies of a few of the files created today in the directory /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO:


ls -l  | grep "Aug 15"
-rw-r--r-- 1               1010 controls   160 Aug 15 07:50 ETMY_12_Hz_1376146243.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 08:22 ETMY_12_Hz_1376148152.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 07:50 ITMX_14_Hz_1376146241.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 08:22 ITMX_14_Hz_1376148154.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 07:50 ITMY_15_Hz_1376146220.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 08:08 ITMY_15_Hz_1376147322.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 08:21 ITMY_15_Hz_1376148134.txt


Listing all files, filtering for only those that contain the string ETMX, and then filtering those for files that contain "Aug 15", with the following command:
ls -l  | grep "ETMX" | grep "Aug 15"

Returned no files, which means that while it looks like the measurement was run twice, it never completed ETMX.
I'm not sure whether the analysis will run without all the files.
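
A minimal Python version of the same completeness check (the directory path is the one quoted above):

import datetime
import glob
import os

rec_dir = ('/opt/rtcds/userapps/release/sus/common/scripts/quad/'
           'InLockChargeMeasurements/rec_LHO')
target = datetime.date(2023, 8, 15)
for optic in ('ETMX', 'ETMY', 'ITMX', 'ITMY'):
    # count this optic's measurement files modified on Aug 15
    hits = [f for f in glob.glob(os.path.join(rec_dir, optic + '_*.txt'))
            if datetime.date.fromtimestamp(os.path.getmtime(f)) == target]
    print(optic, len(hits))  # ETMX prints 0 -> the measurement never completed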

SUS_CHARGE LOG:
2023-08-15_15:26:18.969345Z SUS_CHARGE LOAD ERROR: see log for more info (LOAD to reset)
2023-08-15_15:26:53.512031Z SUS_CHARGE LOAD REQUEST
2023-08-15_15:26:53.524359Z SUS_CHARGE RELOAD requested.  reloading system data...
2023-08-15_15:26:53.527151Z SUS_CHARGE Traceback (most recent call last):
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 566, in run
2023-08-15_15:26:53.527151Z     self.reload_system()
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 327, in reload_system
2023-08-15_15:26:53.527151Z     self.system.load()
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/system.py", line 400, in load
2023-08-15_15:26:53.527151Z     module = self._load_module()
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/system.py", line 287, in _load_module
2023-08-15_15:26:53.527151Z     self._module = self._import(self._modname)
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/system.py", line 159, in _import
2023-08-15_15:26:53.527151Z     module = _builtin__import__(name, *args, **kwargs)
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 1109, in __import__
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap_external>", line 786, in exec_module
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap_external>", line 923, in get_code
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap_external>", line 853, in source_to_code
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
2023-08-15_15:26:53.527151Z   File "/opt/rtcds/userapps/release/sus/h1/guardian/SUS_CHARGE.py", line 67
2023-08-15_15:26:53.527151Z     ezca.get_LIGOFilter('SUS-ETMX_L3_DRIVEALIGN_L2L').ramp_gain(lscparams.ETMX_GND_MIN_DriveAlign_gain, ramp_time=20, wait=False)
2023-08-15_15:26:53.527151Z                                                                                                                                  ^
2023-08-15_15:26:53.527151Z IndentationError: unindent does not match any outer indentation level
2023-08-15_15:26:53.527151Z SUS_CHARGE LOAD ERROR: see log for more info (LOAD to reset)
2023-08-15_15:29:10.009828Z SUS_CHARGE LOAD REQUEST
2023-08-15_15:29:10.011001Z SUS_CHARGE RELOAD requested.  reloading system data...
2023-08-15_15:29:10.050137Z SUS_CHARGE module path: /opt/rtcds/userapps/release/sus/h1/guardian/SUS_CHARGE.py
2023-08-15_15:29:10.050393Z SUS_CHARGE user code: /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
2023-08-15_15:29:10.286761Z SUS_CHARGE system archive: code changes detected and committed
2023-08-15_15:29:10.331427Z SUS_CHARGE system archive: id: 9b481a54e45bfda96fa2f39f98978d76aa6ec7c0 (162824613)
2023-08-15_15:29:10.331427Z SUS_CHARGE RELOAD complete
2023-08-15_15:29:10.332868Z SUS_CHARGE calculating path: SWAP_TO_ITMX->INJECTIONS_COMPLETE
2023-08-15_15:29:14.129521Z SUS_CHARGE OP: EXEC
2023-08-15_15:29:14.129521Z SUS_CHARGE executing state: SWAP_TO_ITMX (11)
2023-08-15_15:29:14.135913Z SUS_CHARGE W: RELOADING @ SWAP_TO_ITMX.main
2023-08-15_15:29:14.158532Z SUS_CHARGE [SWAP_TO_ITMX.enter]
2023-08-15_15:29:14.276536Z SUS_CHARGE [SWAP_TO_ITMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_TRAMP => 10
2023-08-15_15:29:14.277081Z SUS_CHARGE [SWAP_TO_ITMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN => 0
2023-08-15_15:29:17.820392Z SUS_CHARGE REQUEST: DOWN
2023-08-15_15:29:17.821281Z SUS_CHARGE calculating path: SWAP_TO_ITMX->DOWN
2023-08-15_15:29:17.822235Z SUS_CHARGE new target: DOWN
2023-08-15_15:29:17.822364Z SUS_CHARGE GOTO REDIRECT
2023-08-15_15:29:17.822669Z SUS_CHARGE REDIRECT requested, timeout in 1.000 seconds
2023-08-15_15:29:17.824392Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:17.895303Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:17.958976Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.018262Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.079443Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.130595Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.197848Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.253456Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.318549Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.378993Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.446375Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.507978Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.576823Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.641493Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.695114Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.774571Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.822999Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.823662Z SUS_CHARGE REDIRECT timeout reached. worker terminate and reset...
2023-08-15_15:29:18.831141Z SUS_CHARGE worker terminated
2023-08-15_15:29:18.849938Z SUS_CHARGE W: initialized
2023-08-15_15:29:18.871834Z SUS_CHARGE W: EZCA v1.4.0
2023-08-15_15:29:18.872835Z SUS_CHARGE W: EZCA CA prefix: H1:
2023-08-15_15:29:18.872835Z SUS_CHARGE W: ready
2023-08-15_15:29:18.872980Z SUS_CHARGE worker ready
2023-08-15_15:29:18.883790Z SUS_CHARGE EDGE: SWAP_TO_ITMX->DOWN
2023-08-15_15:29:18.884081Z SUS_CHARGE calculating path: DOWN->DOWN
2023-08-15_15:29:18.886386Z SUS_CHARGE executing state: DOWN (2)
2023-08-15_15:29:18.891745Z SUS_CHARGE [DOWN.enter]
2023-08-15_15:29:18.893116Z Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
2023-08-15_15:29:20.216958Z SUS_CHARGE [DOWN.main] All nodes taken to DOWN, ISC_LOCK should have taken care of reverting settings.
 

ESD_EXC_ETMX LOG:
2023-08-01_15:07:01.324869Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-01_15:07:01.325477Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX calculating path: DOWN->DOWN


ESD_EXC_ITMX log:
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Restoring things to the way they were before the measurement
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Ramping on bias on ITMX ESD
2023-08-15_15:22:16.034430Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 0
2023-08-15_15:22:18.266457Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_SW1 => 8
2023-08-15_15:22:18.517569Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS => OFF: OFFSET
2023-08-15_15:22:18.518166Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 20
2023-08-15_15:22:18.518777Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 1.0
2023-08-15_15:22:38.431399Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 2.0
2023-08-15_15:22:41.264244Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_SW1S => 5124
2023-08-15_15:22:41.515470Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L => ONLY ON: INPUT, DECIMATION, FM4, FM5, OUTPUT
2023-08-15_15:22:41.515470Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] all done
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX EDGE: RESTORE_SETTINGS->COMPLETE
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX calculating path: COMPLETE->COMPLETE
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX executing state: COMPLETE (30)
2023-08-15_15:22:41.636417Z ESD_EXC_ITMX [COMPLETE.enter]
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX REQUEST: DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX calculating path: COMPLETE->DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX new target: DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX GOTO REDIRECT
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX REDIRECT requested, timeout in 1.000 seconds
2023-08-15_15:22:41.768046Z ESD_EXC_ITMX REDIRECT caught
2023-08-15_15:22:41.768046Z ESD_EXC_ITMX [COMPLETE.redirect]
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX EDGE: COMPLETE->DOWN
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX calculating path: DOWN->DOWN
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX executing state: DOWN (1)
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping bias_drive_bias_on
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping L_drive_bias_on
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping bias_drive_bias_off
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping L_drive_bias_off
2023-08-15_15:22:41.923244Z ESD_EXC_ITMX [DOWN.main] Clearing bias_drive_bias_on
2023-08-15_15:22:42.059154Z ESD_EXC_ITMX [DOWN.main] Clearing L_drive_bias_on
2023-08-15_15:22:42.216133Z ESD_EXC_ITMX [DOWN.main] Clearing bias_drive_bias_off
2023-08-15_15:22:42.349505Z ESD_EXC_ITMX [DOWN.main] Clearing L_drive_bias_off
2023-08-15_15:29:20.260953Z ESD_EXC_ITMX REQUEST: DOWN
2023-08-15_15:29:20.260953Z ESD_EXC_ITMX calculating path: DOWN->DOWN
2023-08-15_15:18:31.953103Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.enter]
2023-08-15_15:18:34.481594Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:18:34.482160Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:19:36.482043Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] timer['Injection duration'] done
2023-08-15_15:19:36.516842Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] Injection finished
2023-08-15_15:19:38.908908Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:19:39.011256Z ESD_EXC_ITMX EDGE: L_DRIVE_WITH_BIAS->TURN_BIAS_OFF
2023-08-15_15:19:39.011836Z ESD_EXC_ITMX calculating path: TURN_BIAS_OFF->COMPLETE
2023-08-15_15:19:39.012099Z ESD_EXC_ITMX new target: BIAS_DRIVE_NO_BIAS
2023-08-15_15:19:39.018534Z ESD_EXC_ITMX executing state: TURN_BIAS_OFF (15)
2023-08-15_15:19:39.019024Z ESD_EXC_ITMX [TURN_BIAS_OFF.enter]
2023-08-15_15:19:39.019710Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] Ramping off bias on ITMX ESD
2023-08-15_15:19:39.020547Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 0
2023-08-15_15:19:58.934813Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_OFFSET => 0
2023-08-15_15:19:58.935544Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 2
2023-08-15_15:19:58.935902Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 1
2023-08-15_15:20:01.140528Z ESD_EXC_ITMX EDGE: TURN_BIAS_OFF->BIAS_DRIVE_NO_BIAS
2023-08-15_15:20:01.141391Z ESD_EXC_ITMX calculating path: BIAS_DRIVE_NO_BIAS->COMPLETE
2023-08-15_15:20:01.142015Z ESD_EXC_ITMX new target: L_DRIVE_NO_BIAS
2023-08-15_15:20:01.143337Z ESD_EXC_ITMX executing state: BIAS_DRIVE_NO_BIAS (16)
2023-08-15_15:20:01.144372Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.enter]
2023-08-15_15:20:03.673255Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_LOCK_BIAS_EXC
2023-08-15_15:20:03.673786Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:21:05.674028Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] timer['Injection duration'] done
2023-08-15_15:21:05.697880Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] Injection finished
2023-08-15_15:21:07.987796Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_LOCK_BIAS_EXC
2023-08-15_15:21:08.072581Z ESD_EXC_ITMX EDGE: BIAS_DRIVE_NO_BIAS->L_DRIVE_NO_BIAS
2023-08-15_15:21:08.072581Z ESD_EXC_ITMX calculating path: L_DRIVE_NO_BIAS->COMPLETE
2023-08-15_15:21:08.073301Z ESD_EXC_ITMX new target: RESTORE_SETTINGS
2023-08-15_15:21:08.076744Z ESD_EXC_ITMX executing state: L_DRIVE_NO_BIAS (17)
2023-08-15_15:21:08.079417Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.enter]
2023-08-15_15:21:10.597939Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:21:10.598481Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:22:12.598413Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] timer['Injection duration'] done
2023-08-15_15:22:12.633547Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] Injection finished
2023-08-15_15:22:15.937968Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:22:16.018077Z ESD_EXC_ITMX EDGE: L_DRIVE_NO_BIAS->RESTORE_SETTINGS
2023-08-15_15:22:16.018395Z ESD_EXC_ITMX calculating path: RESTORE_SETTINGS->COMPLETE
2023-08-15_15:22:16.018676Z ESD_EXC_ITMX new target: COMPLETE
2023-08-15_15:22:16.019499Z ESD_EXC_ITMX executing state: RESTORE_SETTINGS (25)
2023-08-15_15:22:16.019891Z ESD_EXC_ITMX [RESTORE_SETTINGS.enter]
2023-08-15_15:22:16.020220Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Finished with all excitations
2023-08-15_15:22:16.033260Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Saved GPS times in logfile: /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO/ITMX_14_Hz_1376148154.txt
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Restoring things to the way they were before the me

Comments related to this report
camilla.compton@LIGO.ORG - 14:55, Wednesday 16 August 2023 (72280)

These are both valid charge measurements; we could analyze either or both (and check the answers are the same). We repeated the measurements while troubleshooting the issue in 72219. We have now fixed the issue (a typo) in SUS_CHARGE that was preventing the last ETMX measurement from being taken.

anthony.sanchez@LIGO.ORG - 22:12, Thursday 17 August 2023 (72310)

I just analyzed the first batch of in-lock charge measurements.

There are 13-14 plot points on most of the other plots but only 10 for ETMX.
 

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 22:06, Tuesday 15 August 2023 - last comment - 23:13, Wednesday 16 August 2023(72262)
Failed attempt to debug Lownoise Length Control

Despite some great efforts from Jenne and Jeff to track down the source, we are still seeing a 102 Hz line rung up right at the end of the lownoise_length_control state. Since we had a random lockloss, I asked Tony to take us to lownoise_esd_etmx and I tried walking through lownoise length control by hand (copying the guardian code line by line into the shell).

Lines 5427-5468 ramp various gains to zero, set up the filters and drive matrix for the LSC feedforward, and prepare for the SRCL offset. These lines run fine and do not ring up the 102 Hz line.

I am able to run the first action line of the run state, which sets the MICH FF gain to 1 (line 5480). This runs fine, no 102 Hz line. Then, I ran the next line to turn on the SRCL FF gain (line 5481). This caused an immediate lockloss (huh?), despite the fact that this code has run many times just fine.

On the next lock attempt, I tried running the MICH and SRCL gain lines at the exact same time. Also immediate lockloss.

I have no idea why this is such an issue. All it does is ramp the gains to 1 (the tramps are set on a previous line to 3 seconds).

Both of these locklosses seem to ring up a test mass bounce mode, suggesting that the SRCL FF (I assume) is kicking a test mass pretty hard.

This might be a red herring, or maybe it's a clue. I don't see any 102 Hz line during these locklosses though.

The offending lines:

            ezca['LSC-MICHFF_GAIN']  = lscparams.gain['MICHFF']
            ezca['LSC-SRCLFF1_GAIN'] = lscparams.gain['SRCLFF1'] * lscparams.dc_readout['sign']
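
If this recurs, one hypothetical way to narrow it down (not something the guardian currently does) would be to step the gain up in stages rather than applying the full ramp at once:

import time

# Hypothetical debugging sketch; ezca and lscparams are in scope in the
# guardian context, as in the lines above.
target = lscparams.gain['SRCLFF1'] * lscparams.dc_readout['sign']
ezca['LSC-SRCLFF1_TRAMP'] = 3
for frac in (0.25, 0.5, 0.75, 1.0):
    ezca['LSC-SRCLFF1_GAIN'] = frac * target
    time.sleep(5)  # let each 3 s ramp finish before the next step
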
Comments related to this report
elenna.capote@LIGO.ORG - 22:50, Tuesday 15 August 2023 (72263)

I think it's pretty clear that this is an LSC feedforward problem. I attached two ndscopes of the ETMX L3 master outs, one zoomed in and one zoomed out. The massive oscillation in the signal is the 102 Hz line, which first appears in the time series at UTC 05:24:32 and some milliseconds. This corresponds exactly to the time in the guardian log when the LSC feedforward gain is ramped on (see the copied guardian log below).

2023-08-16_05:24:22.551541Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_1_10 => 1
2023-08-16_05:24:22.551849Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_10 => -1
2023-08-16_05:24:22.552715Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_1_11 => 1
2023-08-16_05:24:22.553143Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_11 => -1
2023-08-16_05:24:22.554026Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_1_13 => 1
2023-08-16_05:24:22.554332Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_13 => -1
2023-08-16_05:24:22.554746Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCL1_OFFSET => 0
2023-08-16_05:24:22.555266Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] timer['wait'] = 10
2023-08-16_05:24:32.555489Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] done
2023-08-16_05:24:32.593316Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-MICHFF_GAIN => 1
2023-08-16_05:24:32.594884Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-SRCLFF1_GAIN => 1

2023-08-16_05:24:32.595219Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] = 1
2023-08-16_05:24:33.595397Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] done
2023-08-16_05:24:33.660429Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-SRCL1_GAIN => -7.5
2023-08-16_05:24:33.660711Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-PRCL1_GAIN => 10.0
2023-08-16_05:24:33.661306Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-MICH1_GAIN => 3.2
Images attached to this comment
elenna.capote@LIGO.ORG - 16:38, Wednesday 16 August 2023 (72282)OpsInfo

I have added two new lines to lownoise_length_control that increase the LSC FF ramp time from 3 to 10 seconds. These lines are at the end of the main state, right before the run state. I also increased the timer in the first step of the run state to wait 10 seconds after the FF gains are set, before moving to the next part of the run state, which changes the LSC gains.
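
Concretely, the change amounts to the following (reconstructed from the revert quoted in a later comment, which gives the guardian line numbers):

 5471       ezca['LSC-MICHFF_TRAMP'] = 10   # was 3
 5472       ezca['LSC-SRCLFF1_TRAMP'] = 10  # was 3
 5488       self.timer['wait'] = 10         # was 1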

This will result in two SDF diffs for the feedforward ramp times in the LSC model. It can be accepted. Tagging Ops

anthony.sanchez@LIGO.ORG - 18:23, Wednesday 16 August 2023 (72285)

SDF changes accepted; picture attached.

Images attached to this comment
anthony.sanchez@LIGO.ORG - 23:13, Wednesday 16 August 2023 (72289)

Reverted these changes to make it past LOWNOISE_LENGTH_CONTROL

 5471       ezca['LSC-MICHFF_TRAMP'] = 3  Changed back to 3 from 10
 5472       ezca['LSC-SRCLFF1_TRAMP'] = 3 Changed back to 3 from 10

And

5488 self.timer['wait'] = 1 #Changed back to 1 from 10. 
Images attached to this comment
daniel.sigg@LIGO.ORG - 23:01, Wednesday 16 August 2023 (72290)

This may not be a problem with a filter kick, but with the filter making some loop unstable and driving up the 102 Hz line. I suspect that changing the aux DOF gains immediately afterwards makes it stable again. If so, slowing down the transition only makes it worse. We may need to reorder the steps.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 21:25, Tuesday 15 August 2023 (72261)
Unknown Lockloss

I_X saturation at 2:52 UTC, right before this lockloss.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376189496

Cause unknown. Likely not an earthquake.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 19:07, Tuesday 15 August 2023 (72260)
Ops Tuesday Mid Shift Report

Acquired the IFO while it was in manual initial alignment.
Had trouble with SRM in ACQUIRE_SRY. Touched SRM one last time and it locked just fine.
Started locking, and ALSX & Y were way off.
Intervened, and eventually Jenne said we should redo initial alignment with the ITMs because the ITM camera servo error signals were large.

Initial alignment started; got to PRC_ALGNING and PRM was continuously saturating.
We then took ISC_LOCK to down and MANUAL_INITIAL_ALIGNMENT.
Worked through PREP_FOR_PRX and manually moved PRM to maximize the peaks and valleys on H1:ASC-AS_DC_NSUM_OUT16.

Then moved through MICH_BRIGHT OFFLOADED and SR2 ALIGN manually.
I did have to move SRM again.
After this, locking happened quickly, starting at 00:55 UTC.
Reached NLN at 1:39 UTC and Observing at 2:03 UTC


Dust alarms in the optics labs are going off. Wind is only 15 mph.


Current IFO Status: NOMINAL_LOW_NOISE & OBSERVING.
 

H1 ISC (ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 18:45, Tuesday 15 August 2023 - last comment - 18:49, Tuesday 15 August 2023(72255)
Accepted Many ISC PD Electronics Offsets in OBSERVE
E. Capote, J. Kissel, T. Sanchez

Since TJ ran the dark offsets script during maintenance recovery this afternoon (LHO:72251), we had to accept all of those new offsets in observe. Here are the screenshots of all the models that had offsets we accepted.
Comments related to this report
anthony.sanchez@LIGO.ORG - 18:49, Tuesday 15 August 2023 (72256)

LSC-REFL_A_RF offsets were accepted.

OAF-LASERNOISE_NONSENS_COEFF changes were accepted.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 18:45, Tuesday 15 August 2023 (72257)CAL
Here's the OMC DCPDs. Tagging CAL just in case.
Images attached to this comment
elenna.capote@LIGO.ORG - 18:47, Tuesday 15 August 2023 (72258)

Accepted SDFs for ASCIMC, ISCEY, ISCEX and ASC (three screenshots).

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 16:52, Tuesday 15 August 2023 - last comment - 18:48, Tuesday 15 August 2023(72252)
Added 20 Hz CHARD Y low pass

Sheila and I examined the new ASC subbudget today (see 72245) and we did not like the shape of CHARD Y above 20 Hz. This new shape is a result of Gabriele's adjustment to CHARD Y to give more low frequency suppression (71866, 71927).

Since this feature appears above 20 Hz, and a recent measurement of CHARD Y shows that the 3.3 Hz UGF has a 44 deg phase margin, I figured we could design a 20 Hz cutoff filter that removes this feature with minimal effect on loop stability.

Attached is a screenshot of the new design (in red) compared with the old design (in gold). I first turned off "JLP200", which is a very high frequency low pass that I think is unnecessary. Then I constructed a low-Q elliptical low pass at 20 Hz with 10 dB of suppression. The result is 6 deg less phase margin at 3.3 Hz. I minimized any passband ripple so that the gain increase around 10 Hz is only 1 dB.
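
For anyone wanting to play with the shape offline, a rough scipy sketch of this kind of design (illustrative only; the real FM8 filter lives in foton, the filter order here is an assumption, and the ripple/attenuation are the values quoted above):

import numpy as np
from scipy import signal

f_cut = 20.0  # Hz corner
# 2nd-order elliptic low pass: ~1 dB passband ripple, 10 dB stopband suppression
z, p, k = signal.ellip(2, 1, 10, 2 * np.pi * f_cut, 'low',
                       analog=True, output='zpk')
freqs = np.array([3.3, 10.0, 20.0])
w, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * freqs)
for f, resp in zip(freqs, h):
    print(f'{f:5.1f} Hz: |H| = {20 * np.log10(abs(resp)):+5.1f} dB, '
          f'phase = {np.degrees(np.angle(resp)):+6.1f} deg')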

I updated the guardian. It will no longer use FM10 (200 Hz low pass), and will turn on FM8 (20 Hz low pass) along with Gabriele's filters. No gain change is required.

This filter has been successfully tested.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 18:48, Tuesday 15 August 2023 (72259)

Observe SDF accepted (see screenshot)

Images attached to this comment
H1 ISC (AWC, DetChar-Request, ISC)
keita.kawabe@LIGO.ORG - posted 13:39, Tuesday 15 August 2023 - last comment - 20:52, Wednesday 16 August 2023(72241)
OM2 Beckhoff cable disconnected, voltage reference is used as a heater driver input

As a follow-up to alog 72061, a battery-operated voltage reference was connected to the OM2 heater chassis. The Beckhoff cable was disconnected for now.

Please check if the 1.66Hz comb is still there.

Comments related to this report
keita.kawabe@LIGO.ORG - 13:45, Tuesday 15 August 2023 (72244)

The Beckhoff output was 7.15 V across the positive and negative inputs of the driver chassis (when the cable was connected to the chassis), so the voltage reference was set to 7.15 V.

We used the REED R8801 because its output was clean (4th pic), while the CALIBRATORS DVC-350A was noisy (5th pic).

Images attached to this comment
ansel.neunzert@LIGO.ORG - 13:50, Tuesday 15 August 2023 (72247)

detchar-request git issue for tracking purposes.

keita.kawabe@LIGO.ORG - 11:10, Wednesday 16 August 2023 (72277)

As you can see from one of the pictures above, the unit is powered from an AC supply, so we can leave it for a while.

keita.kawabe@LIGO.ORG - 20:52, Wednesday 16 August 2023 (72286)CDS, ISC

How to recover from power outage

If there is a power outage, the voltage reference won't come back automatically. Though I hope we never need this instruction, I'll be gone for a month and Daniel will be gone for a week, so I'm writing this down just in case.

0. The instruction manual for the voltage reference (R8801) is kept in the unit's case, inside the cabinet in the EE shop where all the voltage references are stored. Find it and bring it to the floor.

1. The voltage reference and the DC power supply are on top of the work table by HAM6. See the 2nd picture in the above alog.

2. The DC supply will be ON as soon as the power comes back. Confirm that the output voltage is set to ~9V. If not, set it to 9V.

3. Press the yellow power button of the voltage reference to turn it on. You'll have to press it longer than you think is required. See the 1st picture in the above alog.

4. Press the "V" button to set the unit to voltage source mode. Set the voltage to 7.15V. Use right/left buttons to move cursor to the decimal place you'd like to change, and then use up/down buttons to change the number.

5. Most likely, a funny icon that you'll never guess to mean "Auto Power Off" will be displayed at the top left corner of the LCD. Now is the time to look at the LCD description on page 4 of the manual to confirm that it's indeed the Auto Power Off icon.

6. If the icon is indeed there (i.e. the unit is in Auto Power Off mode), press power button and V button at the same time to cancel Auto Power Off. You'll have to press the buttons longer than you think is required. If the icon doesn't go away, repeat.

7. Confirm that the LCD of R8801 looks exactly like the 1st picture of the above alog. You're done.

H1 SUS (DetChar, INJ, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 09:52, Tuesday 15 August 2023 - last comment - 14:26, Monday 23 October 2023(72221)
ETMX M0 Longitudinal Damping has been fed to TMTS M1 Unfiltered Since Sep 28 2021; Now OFF.
J. Kissel, J. Driggers

I was brainstorming why LOWNOISE_LENGTH_CONTROL would be ringing up a Transmon M1 to M2 wire violin mode (modeled to be at 104.2 Hz for a "production" TMTS; see table 3.11 of T1300876) for the first time on Aug 4 2023 (see current investigation recapped in LHO:72214), and I remembered "TMS tracking..."

In short: we found that the ETMX M0 L OSEM damping error signal has been fed directly to the TMSX M1 L global control path, without filtering, since Sep 28 2021. Yuck!

On Aug 30 2021, I resolved the discrepancies between L1 and H1 end-station SUS front-end models -- see LHO:59772. Included in that work, I cleaned up the Tidal path, cleaned up the "R0 tracking" path (where QUAD L2 gets fed to QUAD R0), and installed the "TMS tracking" path as per ECR E2000186 / LLO:53224. In short, "TMS tracking" couples the ETM M0 longitudinal OSEM error signal to the TMS M1 longitudinal "input to the drivealign bank" global control path, with the intent of matching the velocity of the two top masses to reduce scattered light.

On Aug 31 2021, the model changes were installed during an upgrade to the RCG -- see LHO:59797, and we've confirmed that I turned both TMSX and TMSY paths OFF, "to be commissioned later, when we have an IFO, if we need it" at
    Tuesday -- Aug 31 2021 21:22 UTC (14:22 PDT) 

However, 28 days later,
    Tuesday -- Sept 28 2021 22:16 UTC (15:16 PDT)
the TMSX filter bank got turned back on, and must have been blindly SDF-saved as such -- with no filter in place -- after an EX IO chassis upgrade; see LHO:60058. At the time, RCG 4.2.0 still had the infamous "turn on a new filter with its input ON, output ON, and a gain of 1.0" feature, which has since been resolved with RCG 5.1.1. So ... maybe, somehow, even though the filter was already installed on Aug 31 2021, the IO chassis upgrade rebuild, reinstall, and restart of the h1sustmsx.mdl front end model re-registered the filter as new? Unclear. Regardless, this direct ETMX M0 L to TMSX M1 L path has been on, without filtering, since Sep 28 2021. Yuck!

Jenne confirms the early 2021 timeline in the first attachment here.
She also confirms, via a ~2 year trend of the H1:SUS-TMSY_M1_FF_L filter bank's SWSTAT, that no filter module has *ever* been turned on, confirming that there's *never* been filtering.

Whether this *is* the source of 102.1288 Hz problems and that that frequency is the TMSX transmon violin mode is still unclear. Brief investigations thus far include
    - Jenne briefly gathered ASDs of ETMX M0 L (H1:SUS-ETMX_M0_DAMP_L_IN_DQ) and the TMSX M1 L OSEMs' error signal (H1:SUS-TMSX_M1_DAMP_L_IN1_DQ) around the time of Oli's LOWNOISE_LENGTH_CONTROL event, but found that at 100 Hz the OSEMs are limited by their own sensor noise and don't see anything.
    - She also looked through the MASTER_OUT DAC requests, in hopes that the requested control signal would show something more or different, but found nothing suspicious around 100 Hz there either.
    - We HAVE NOT, but could, look at H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ, since this FF control filter should be the only control signal going through that path. I'll post a comment with this; a minimal sketch of the check follows below.
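
A minimal sketch of that check with gwpy (channel name from the bullet above; the GPS window is a placeholder for a lock stretch of interest):

from gwpy.timeseries import TimeSeries

start, end = 1376189000, 1376189400  # placeholder lock-stretch window
drive = TimeSeries.get('H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ', start, end)
asd = drive.asd(fftlength=8, overlap=4)
print(asd.crop(100, 105))  # look for structure near 102.1288 Hz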

Regardless, having this path on with no filter is clearly wrong, so we've turned off the input, output, and gain, and accepted the filter as OFF, OFF, and OFF in the SDF system (for TMSX, the safe.snap is the same as the observe.snap).
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:39, Tuesday 15 August 2023 (72226)
No obvious blast in the (errant) path between ETMX M0 L and TMSX M1 L, i.e. the control channel H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ, during the turn-on of the LSC FF.

Attached is a screenshot highlighting one recent lock acquisition, after the addition / separation / clean up of calibration line turns ons (LHO:72205):
    - H1:GRD-ISC_LOCK_STATE_N -- the state number of the main lock acquisition guardian,
    - H1:LSC-SRCLFF1_GAIN, H1:LSC-PRCLFF_GAIN, H1:LSC-MICHFF_GAIN -- EPICS records showing the timing of when the LSC feed forward is turned on
    - The raw ETMX M0 L damping signal, H1:SUS-ETMX_M0_DAMP_L_IN1_DQ -- stored at 256 Hz
    - The same signal, mapped (errantly) as a control signal to TMSX M1 L -- also stored at 256 Hz
    - The TMSX M1 L OSEMs H1:SUS-TMSX_M1_DAMP_L_IN1_DQ, which are too limited by their own self noise to see any of this action -- but also only stored at 256 Hz.

In the middle of the TRANSITION_FROM_ETMX (state 557), DARM control is switching from ETMX to some other collection of DARM actuators. That's when you see the ETMX M0 L (and equivalent TMSX_M1_DRIVEALIGN) channels go from relatively noisy to quiet.

Then, at the very end of the state, or the start of the next state, LOW_NOISE_ETMX_ESD (state 558), DARM control returns to ETMX, and the main chain top mass, ETMX M0 gets noisy again. 

Then, several seconds later, in LOWNOISE_LENGTH_CONTROL (state 560), the LSC feed forward gets turned on. 

So, while there are control request changes to the TMS, at least according to channels stored at 256 Hz, we don't see any obvious kicks / impulses to the TMS during this transition.
This decreases my confidence that something was kicking up a TMS violin mode, but not substantially.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:33, Wednesday 16 August 2023 (72275)DetChar, DetChar-Request
@DetChar -- 
This errant TMS tracking has been on throughout O4 until yesterday.

The last substantial nominal low noise segment before this change (with the errant, bad TMS tracking) was on
     2023-08-15       04:41:02 to 15:30:32 UTC
                      1376109680 - 1376148650
The first substantial nominal low noise segment after this change is
     2023-08-16       05:26:08 - present
                      1376198786 - 1376238848 

Apologies for the typo in the main aLOG above, but *the* channels to understand the state of the filter bank that's been turned off are 
    H1:SUS-TMSX_M1_FF_L_SWSTAT
    H1:SUS-TMSX_M1_FF_L_GAIN

if you want to use that for an automated way of determining whether the TMS tracking is on vs. off.

If the SWSTAT channel has a value of 37888 and the GAIN channel has a gain of 1.0, then the errant connection between ETMX M0 L and TMSX M1 L was ON. Those channels now have values of 32768 and 0.0, respectively, indicating that it's OFF. (Remember, for a standard filter module a SWSTAT value of 37888 is the bitword representation for "Input, Output, and Decimation switches ON." A SWSTAT value of 32768 is the same bitword representation for just "Decimation ON.")
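
A minimal decoder for those two values (Python; the INPUT/OUTPUT/DECIMATION bit positions -- 1<<10, 1<<12, 1<<15 -- are derived from the values quoted above, and bits 0-9 are the FM1-FM10 buttons):

def decode_swstat(value):
    """Decode a filter-module SWSTAT bitword into switch names."""
    names = [f'FM{i + 1}' for i in range(10) if value & (1 << i)]
    for bit, name in ((1 << 10, 'INPUT'), (1 << 12, 'OUTPUT'),
                      (1 << 15, 'DECIMATION')):
        if value & bit:
            names.append(name)
    return names

print(decode_swstat(37888))  # ['INPUT', 'OUTPUT', 'DECIMATION'] -> tracking was ON
print(decode_swstat(32768))  # ['DECIMATION'] -> tracking now OFF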

Over the next few weeks, can you build up an assessment of how the IFO has performed a few weeks before vs. a few weeks after?
     I'm thinking, in particular, in the corner of scattered light arches and glitch rates (also from scattered light), but I would happily entertain any other metric you think are interesting given the context.

     The major difference being that TMSX is no longer "following" ETMX, so there's a *change* in the relative velocity between the chains. No claim yet that this is a *better* change or worse, but there's definitely a change. As you know, the creation of this scattered-light-impacting, relative velocity between the ETM and TMS is related to the low frequency seismic input motion to the chamber, specifically between the 0.05 to 5 Hz region. *That* seismic input evolves and is non-stationary over the few weeks time scale (wind, earthquakes, microseism, etc.), so I'm guessing that you'll need that much "after" data to make a fair comparison against the "before" data. Looking at the channels called out in the lower bit of the aLOG I'm sure will be a helpful part of the investigation.

I chose "a few weeks" simply because the IFO configuration has otherwise been pretty stable "before" (e.g., we're in the "representative normal for O4" 60 W configuration rather than the early O4 75 W configuration), but I leave it to y'all's expertise and the data to figure out a fair comparison (maybe only one week, a few days, or even just the single "before" vs. "after" is enough to see a difference).
ansel.neunzert@LIGO.ORG - 14:31, Monday 21 August 2023 (72357)

detchar-request git issue for tracking purposes.

jane.glanzer@LIGO.ORG - 09:12, Thursday 05 October 2023 (73271)DetChar
Jane, Debasmita

We took a look at the Omicron and Gravity Spy triggers before and after this tracking was turned off. The time segments chosen for this analysis were:

TMSX tracking on: 2023-07-29 19:00:00 UTC - 2023-08-15 15:30:00 UTC, ~277 hours observing time
TMSX tracking off: 2023-08-16 05:30:00 UTC - 2023-08-31 00:00:00 UTC, ~277 hours observing time

For the analysis, the Omicron parameters chosen were SNR > 7.5 and a frequency between 10 Hz and 1024 Hz. The Gravity Spy glitches were required to have a classification confidence of > 90%.

The first pdf contains glitch rate plots. In the first plot, we have the Omicron glitch rate comparison before and after the change. The second and third plots show the comparison of the Omicron glitch rates before and after the change as a function of SNR and frequency. The fourth plot shows the Gravity Spy classifications of the glitches. What we can see from these plots is that when the errant tracking was on, the overall glitch rate was higher (~29 per hour when on, ~15 per hour when off). It was particularly high in the 7.5-50 SNR range and the 10 Hz - 50 Hz frequency range, which is typically where we observe scattering. The Gravity Spy plot shows that scattered light is the most common glitch type both when the tracking is on and when it is off, but it is reduced after the tracking is off.
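
For reference, a minimal sketch of the trigger selection and rate computation described above (assuming the Omicron triggers are already loaded into a numpy structured array with 'snr' and 'frequency' fields):

import numpy as np

def glitch_rate(triggers, livetime_hours):
    """Rate of triggers passing the SNR > 7.5, 10-1024 Hz selection."""
    keep = ((triggers['snr'] > 7.5) &
            (triggers['frequency'] >= 10) & (triggers['frequency'] <= 1024))
    return keep.sum() / livetime_hours

# With ~277 hr of observing time in each epoch (from above):
#   glitch_rate(triggers_on, 277)  -> ~29 per hour (tracking on)
#   glitch_rate(triggers_off, 277) -> ~15 per hour (tracking off)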

We also looked into whether these scattering glitches were coincident in "H1:GDS-CALIB_STRAIN" and "H1:ASC-X_TR_A_NSUM_OUT_DQ", which is shown in the last pdf. From the few examples we looked at, there does seem to be some excess noise in the transmitted monitor channel when the tracking was on. If necessary, we can look into more examples of this.
Non-image files attached to this comment
debasmita.nandi@LIGO.ORG - 14:26, Monday 23 October 2023 (73674)
Debasmita, Jane

We have plotted the ground motion trends in the following frequency bands and DOFs

1. Earthquake band (0.03 Hz--0.1 Hz) ground motion at ETMX-X, ETMX-Z and ETMX-X tilt-subtracted
2. Wind speed (0.03 Hz--0.1 Hz) at ETMX
3. Micro-seismic band (0.1 Hz--0.3 Hz) ground motion at ETMX-X

We have also calculated the mean and median of the ground motion trends for two weeks before and after the tracking was turned off. It seems that while motion in all the other bands remained almost the same, the microseismic band ground motion (0.1-0.3 Hz) increased significantly (from a mean value of 75.73 nm/s to 115.82 nm/s) after the TMS-X tracking was turned off. Even so, there was less scattering than before, when the TMS-X tracking was on.

The plots and the table are attached here.
Non-image files attached to this comment
H1 TCS
camilla.compton@LIGO.ORG - posted 09:51, Tuesday 15 August 2023 - last comment - 16:07, Friday 18 August 2023(72220)
Swapped CO2X and CO2Y Chillers

Closes WP11368.

We've been seeing the CO2X laser regularly unlock (alog 71594), which takes us out of observing, so today we swapped the CO2X and CO2Y chillers to see if this issue follows the chiller. Previously, swapping CO2Y with the spare stopped CO2Y from unlocking (alog 54980).

The old CO2X chiller (S/N ...822) seems to be reporting an unsteady flow at the LVEA flow meter (see attached), suggesting the S/N ...822 chiller isn't working too well. This is the chiller TJ and I rebuilt in February (67265).

Swap: following some of the procedure listed in alog#61325, we turned off both lasers via medm, turned off and unplugged (electrical and water connections) both chillers, swapped the chillers, plugged everything back in, turned the chillers back on (one needed to be turned on via medm), checked the water level (nothing added), and turned the CO2 lasers back on via medm and the chassis. Post-it notes have been added to the chillers. Both lasers relocked with ~45 W power.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:08, Wednesday 16 August 2023 (72253)

Jason, TJ, Camilla 

The worse chiller's (S/N 822) flow rate dropped low enough for the CO2Y laser to trip off, so we swapped CO2Y back to its original chiller (S/N 617) and installed the spare chiller (S/N 813) for CO2X. We flushed the spare (instructions in 60792) as it hadn't been used since February (67265). Both lasers are now running again and the flow rates so far look good.

The first batch of water we ran through the spare (S/N 813) chiller had small brass or metal pieces in it (caught in the filter), see attached. Once we drained this and added clean water there was no evidence of metal, so we connected it to the main CO2X circuit.

Looking at the removed CO2X chiller (rebuilt in February, 67265), it had some black gunk in it, see attached. This is worrying, as this water has been running through the CO2X lines since February and was running in the CO2Y system for ~5 hours. I should have checked the reservoir water before swapping the chillers.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 08:14, Wednesday 16 August 2023 (72267)

Overnight they seem stable as well, but the new TCSX chiller (617) looks very slightly noisier and perhaps has a slight downward trend to its flow. We'll keep watching this and see if it continues.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 08:19, Wednesday 16 August 2023 (72268)

I spoke too soon. Looks like TCSX relocked at 08:27 UTC last night.

Images attached to this comment
camilla.compton@LIGO.ORG - 16:07, Friday 18 August 2023 (72327)

On Tuesday evening, the removed chiller (S/N 822) drained slowly. No water came out of the drain valve, only the outlet, which was strange. Today I took the cover off the chiller but couldn't see any issues with the drainage. I left the chiller with all valves and the reservoir open so the last of the water can dry out of it.

Displaying reports 13981-14000 of 84136.Go to page Start 696 697 698 699 700 701 702 703 704 End