Please disregard FMCS chiller alarms for the next hour while Robert runs his tests, which require the chillers to be shut down for short periods.
TITLE: 08/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 6mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY: Locked for 9 hours, range has a few points almost touching 160Mpc. Nice.
TITLE: 08/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Lockloss 23:29 UTC
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376263782
A change was made to ISC_LOCK that will lead to an SDF Diff that will need to be accepted.
Trouble locking DRMI even after PRMI
Ran Initial Alignment.
Elevated Dust levels in Optics labs again.
locking process started at 00:31 UTC
1:16 UTC made it to NOMINAL_LOW_NOISE
1:35 UTC Made it to Observing
Lockloss from NLN @ 2:22 UTC, at the time attributed almost certainly to a PI ring-up, followed by a series of locklosses at LOWNOISE_LENGTH_CONTROL. Edit*: It was not certain at all, in fact.
Relocking went smoothly until we lost lock at LOWNOISE_LENGTH_CONTROL @ 3:11 UTC
Relocking went through PRMI and took a while; lost lock at LOWNOISE_LENGTH_CONTROL again @ 4:14 UTC
I have lost lock twice at LOWNOISE_LENGTH_CONTROL tonight. I am concerned that it may be due to alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262,
which describes a change to that state.
The counterargument to this is that I made it past this state earlier in my first lock of the night, which was AFTER ISC_LOCK was loaded.
I'm doing an Initial Alignment again tonight to see if it's just poor alignment instead, and to buy myself some time to investigate.
I posted my findings in Mattermost and in the lockloss alog above, then called in the commissioners to see if there was anything else I should look at.
Lines 5471 and 5472 were changed, and with help from Danielle and Jenne pointing to line 5488 for another change that was reverted in ISC_LOCK.py, locking went well from LOWNOISE_LENGTH_CONTROL all the way up to NLN.
See comments in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262 for specifics.
Made it to NOMINAL_LOW_NOISE @ 5:58 UTC
Made it to Observing @ 6:02 UTC
LOG: empty
Initial unknown lockloss from NLN (maybe PI related?):
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376274187
ScreenShot 1 ScreenShot 2 ScreenShot 3
2023-08-17_02:05:06.841133Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:05:12.031195Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:05:22.397983Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:15:15.962160Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:15:21.158619Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:22:49.089168Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SQZ_MANAGER: has notification
2023-08-17_02:22:49.343171Z ISC_LOCK [NOMINAL_LOW_NOISE.run] Unstalling IMC_LOCK
2023-08-17_02:22:49.571864Z ISC_LOCK JUMP target: LOCKLOSS
2023-08-17_02:22:49.578399Z ISC_LOCK [NOMINAL_LOW_NOISE.exit]
2023-08-17_02:22:49.629938Z ISC_LOCK JUMP: NOMINAL_LOW_NOISE->LOCKLOSS
2023-08-17_02:22:49.629938Z ISC_LOCK calculating path: LOCKLOSS->NOMINAL_LOW_NOISE
2023-08-17_02:22:49.632228Z ISC_LOCK new target: DOWN
2023-08-17_02:22:49.636266Z ISC_LOCK executing state: LOCKLOSS (2)
2023-08-17_02:22:49.647637Z ISC_LOCK [LOCKLOSS.enter]
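Timestamped guardian lines like the ones above can be picked apart mechanically when hunting for lockloss times. A small sketch; the line format is taken from the log above, and `lockloss_times` is a hypothetical helper, not a site tool:

```python
import re
from datetime import datetime

# Guardian log lines look like "2023-08-17_02:22:49.571864Z NODE message".
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}_\d{2}:\d{2}:\d{2}\.\d+)Z\s+"
    r"(?P<node>\S+)\s+(?P<msg>.*)$"
)

def lockloss_times(lines):
    """Return timestamps of 'JUMP target: LOCKLOSS' messages in a guardian log."""
    hits = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m and m.group("msg") == "JUMP target: LOCKLOSS":
            hits.append(datetime.strptime(m.group("ts"),
                                          "%Y-%m-%d_%H:%M:%S.%f"))
    return hits
```

Feeding it the excerpt above would pull out the 02:22:49 jump while ignoring the surrounding state-machine chatter.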
Followed by a lockloss at LOWNOISE_LENGTH_CONTROL
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376277430
2023-08-17_03:16:40.676453Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_13 => -1
2023-08-17_03:16:40.676886Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCL1_OFFSET => 0
2023-08-17_03:16:40.677417Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-MICHFF_TRAMP => 10
2023-08-17_03:16:40.677727Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCLFF1_TRAMP => 10
2023-08-17_03:16:40.677943Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] timer['wait'] = 10
2023-08-17_03:16:50.678146Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] done
2023-08-17_03:16:50.717927Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-MICHFF_GAIN => 1
2023-08-17_03:16:50.718315Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-SRCLFF1_GAIN => 1
2023-08-17_03:16:50.718524Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] = 10
2023-08-17_03:16:52.342688Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] Unstalling OMC_LOCK
2023-08-17_03:16:52.344038Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] Unstalling IMC_LOCK
2023-08-17_03:16:52.576674Z ISC_LOCK JUMP target: LOCKLOSS
2023-08-17_03:16:52.576674Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.exit]
2023-08-17_03:16:52.634809Z ISC_LOCK JUMP: LOWNOISE_LENGTH_CONTROL->LOCKLOSS
2023-08-17_03:16:52.634809Z ISC_LOCK calculating path: LOCKLOSS->NOMINAL_LOW_NOISE
2023-08-17_03:16:52.635554Z ISC_LOCK new target: DOWN
2023-08-17_03:16:52.674469Z ISC_LOCK executing state: LOCKLOSS (2)
2023-08-17_03:16:52.674675Z ISC_LOCK [LOCKLOSS.enter]
Relocking went through PRMI and took a while; lost lock at LOWNOISE_LENGTH_CONTROL again @ 4:14 UTC
2023-08-17_04:13:58.490058Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_11 => -1
2023-08-17_04:13:58.490906Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_1_13 => 1
2023-08-17_04:13:58.491239Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_13 => -1
2023-08-17_04:13:58.491568Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCL1_OFFSET => 0
2023-08-17_04:13:58.492108Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-MICHFF_TRAMP => 10
2023-08-17_04:13:58.492395Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCLFF1_TRAMP => 10
2023-08-17_04:13:58.492619Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] timer['wait'] = 10
2023-08-17_04:14:08.492962Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] done
2023-08-17_04:14:08.528400Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-MICHFF_GAIN => 1
2023-08-17_04:14:08.528870Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-SRCLFF1_GAIN => 1
2023-08-17_04:14:08.529186Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] = 10
2023-08-17_04:14:10.595463Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] Unstalling IMC_LOCK
2023-08-17_04:14:10.830018Z ISC_LOCK JUMP target: LOCKLOSS
2023-08-17_04:14:10.830018Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.exit]
2023-08-17_04:14:10.893208Z ISC_LOCK JUMP: LOWNOISE_LENGTH_CONTROL->LOCKLOSS
2023-08-17_04:14:10.893208Z ISC_LOCK calculating path: LOCKLOSS->NOMINAL_LOW_NOISE
2023-08-17_04:14:10.895425Z ISC_LOCK new target: DOWN
2023-08-17_04:14:10.899923Z ISC_LOCK executing state: LOCKLOSS (2)
2023-08-17_04:14:10.900490Z ISC_LOCK [LOCKLOSS.enter]
I have lost lock twice at LOWNOISE_LENGTH_CONTROL tonight. I am concerned that it may be due to alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262,
which describes a change to that state.
The counterargument to this is that I made it past this state earlier in my first lock of the night, which was AFTER ISC_LOCK was loaded.
I'm doing an Initial Alignment again tonight to see if it's just poor alignment instead.
If I still cannot get past LOWNOISE_LENGTH_CONTROL after that, I will revert the changes mentioned in the alog above.
Lockloss from NOMINAL_LOW_NOISE
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376263782
Lockloss during commissioning time.
We no longer see excess noise when controlling DARM with ETMX bias -125V, 72111, and one potential change that might explain this would be the recent reduction in DARM RMS (this wouldn't be a normal upconversion as it is seen at -125V and not at +125V). To test the idea that the improvement in DARM RMS is the reason for the change, we went back to the -125V configuration and added an injection into the DARM error point at 2.1Hz.
We don't see any change in DARM for a factor of 2-3 increase in the DARM error signal RMS, but we also don't see the injection in the control signal at this level because the drive to the ETM is dominated by SRCL FF at this frequency 71994. When we increased the error point injection enough that we start to see it in the drive to the ESD, we are increasing the error signal RMS by 2 orders of magnitude, and start to see it upconverted into the GW band, unsurprisingly. If we want to pursue this further we may need to think more carefully about where to inject.
While transitioning DARM control back to the usual configuration, we lost lock. I think that the configuration I was setting things back to looks OK (ie, there was no typo or anything), so I'm not sure what happened.
TITLE: 08/16 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 8mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Inherited an IFO in NOMINAL_LOW_NOISE and COMMISSIONING.
TITLE: 08/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Locked for 17.5 hours. We are finishing up some commissioning measurements and will return to observing as LLO gets back to locked.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:13 | FAC | Cindi | H2 | n | Tech clean | 18:05 |
18:38 | - | Richard | Xarm | n | Walking arm checks | 18:44 |
18:45 | VAC | Gerardo | MX | n | CP6 check | 19:15 |
19:26 | ISC | Elenna | CR | n | ASC meas | 19:59 |
20:12 | PEM | Robert | EY | n | Shaker injection | 22:57 |
20:13 | SEI | Jim | HAM1 | n | HEPI FF inj | 20:28 |
22:23 | ISC | Sheila | CR | n | DARM RMS quarter bias test | 23:17 |
Today I took "unbiased" OLGs of SRC1 and SRC2 P and Y (67187). I have plotted the measurements with error shading.
For reference, SRC1 controls SRM and is sensed using the AS RF72 WFS (118 MHz-45MHz). SRC2 controls SRM and SR2 and is sensed using the AS_C QPD.
SRC1 P has a UGF around 0.01 Hz (!), with a phase margin of about 90 deg. SRC1 Y is similar, 0.01 Hz UGF and phase margin of 100 deg.
SRC2 P has a UGF around 0.05 Hz with a phase margin of 42 deg. SRC2 Y has a UGF around 0.02 Hz and a phase margin of about 46 deg.
These UGFs are lower than I expected. I remember Gabriele and I targeting UGFs around 100 mHz when reworking the design of these loops. Everything runs fine and we don't seem to have any troubles with the SRC ASC during O4. But if we need more microseism suppression in the SRC, we obviously need to do a redesign. I will make a point to do some analysis soon to determine if we need more suppression of angular motion in that cavity.
These loops are definitely hard to measure and for all four measurements I was only able to get decent coherence up to 0.1 Hz. If I had more time I would run these much longer and attempt to target an even lower frequency measurement. However, the templates are still useful and I saved them with the other ASC templates for future use.
You can find the measurement templates, exported data, and processing code in '/ligo/home/elenna.capote/DRMI_ASC/SRC{1,2}'.
The templates for these measurements are also saved in [userapps]/asc/h1/templates/SRC{1,2} as 'SRC{1,2}_{P,Y}_olg_broadband_shaped.xml'.
These measurements complete my goal to measure all the ASC loops!
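As a side note on the UGF and phase-margin numbers quoted above: given exported OLG data as frequency and complex-response arrays, they can be extracted as below. This is a hedged sketch, not the processing code saved in the SRC{1,2} directories; `ugf_and_phase_margin` is a hypothetical helper and it assumes a single unity-gain crossing from above within the measured band.

```python
import numpy as np

def ugf_and_phase_margin(freq, olg):
    """Unity-gain frequency (Hz) and phase margin (deg) from a measured OLG.

    freq: 1-D array of frequencies in Hz, increasing.
    olg:  complex open-loop gain samples at those frequencies.
    Assumes |OLG| crosses unity once, from above, inside the band.
    """
    s = np.log10(np.abs(olg))
    cross = np.where((s[:-1] > 0) & (s[1:] <= 0))[0]
    if cross.size == 0:
        raise ValueError("no unity-gain crossing in the measured band")
    i = cross[0]
    # interpolate log-gain vs log-frequency to locate the crossing
    lf = np.log10(freq)
    f_ugf = 10 ** np.interp(0.0, [s[i + 1], s[i]], [lf[i + 1], lf[i]])
    phase = np.angle(olg, deg=True)
    pm = 180.0 + np.interp(f_ugf, freq, phase)
    return f_ugf, pm
```

For a loop like SRC2 P (UGF ~0.05 Hz, margin ~42 deg), this kind of crossing-point interpolation is only as good as the coherence near the crossing, which is exactly where these measurements are hardest.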
I checked a 3-hour block of time after the work described in 72241, and it appears that the 1.6611 Hz comb was successfully removed. I've attached pre (Aug 15 10:00 UTC) vs post (Aug 16 06:00 UTC) plots, each with an averaging time of 3 hours for direct comparison. The 1.6611 Hz comb is the structure around 280 Hz marked with yellow triangles on the first plot, and absent on the second plot. There are also some untagged lines belonging to the comb around 180 Hz, which also disappear. Note that the small line still present around 269.7 Hz is not part of the comb, and the blue squares are an unrelated comb at 4.98 Hz.
Tagging ISC and CDS. Nice work Keita and Ansel!
We can leave the temporary setup for a while (it's jury-rigged to use a DC power supply). But the question is what to do next.
E2100049 shows that the Beckhoff voltage output comes out of EL4134 and is supposed to be directly connected to the positive and negative input of the driver chassis (pin 6 and pin 19 of the DB25, respectively, see D2000212).
EL4134 is a 4-channel DAC module and its output is single-ended ("O" terminal and GND terminal). If you download the catalog from the Beckhoff web page, it turns out that all GND terminals are connected together inside. These GND terminals are not connected to the GND terminals of neighboring Beckhoff modules, the Beckhoff power ground, or the Beckhoff chassis (I checked this with Marc in the shop). It seems as if the Beckhoff GND output is floating relative to everything else.
I don't know why there's a 1.66Hz comb between the Beckhoff GND terminal and the driver ground (pin 13 on the DB25), but maybe we can connect them together? (Unfortunately E2100049 doesn't show which Beckhoff terminal is connected to which pin on the driver chassis. I assume that the GND terminal goes to the negative input (pin 19) but am not sure. We have to make sure which is GND before making any connection.) However, if we do that, we probably don't want to repeat it for the second T-SAMS in the future, assuming that the second DAC output in the same EL4134 module will be used, or we'll be making a ground loop.
Anyway, if the noise comes back by doing that, we could add an adjustable resistive divider using a trim pot inside the driver chassis to supply necessary voltage as a kind of inconvenient mid-term solution. We could even try to connect the Beckhoff cable back to the driver chassis to regain readback after disconnecting the DAC output inside the Beckhoff chassis.
I'll be gone for a month and cannot do these things, so it's up to Daniel.
Drawings and wiring tables for this Beckhoff chassis can be found here E1200377.
We should also check that the noise isn't propagated through the shield of the wire.
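For the trim-pot divider idea above, the arithmetic is just the unloaded divider equation. A quick sketch, assuming the ~9 V DC bench supply used here and a hypothetical 10 kΩ trim pot; loading by the driver input impedance is ignored:

```python
def divider_vout(vin, r_top, r_bottom):
    """Unloaded resistive-divider output: Vout = Vin * R_bottom / (R_top + R_bottom)."""
    return vin * r_bottom / (r_top + r_bottom)

def divider_ratio(vin, vout):
    """Fraction of a pot's track below the wiper for a target Vout."""
    return vout / vin

# e.g. 7.15 V from a 9 V supply with a 10 kohm trim pot:
frac = divider_ratio(9.0, 7.15)                      # ~0.794 of the track
vout = divider_vout(9.0, 10e3 * (1 - frac), 10e3 * frac)
```

In practice the pot would need to be trimmed in place anyway, since the driver input impedance loads the wiper and shifts the output slightly.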
Famis 25079
inLock SUS Charge Measurement
While searching for the files created by the in-lock SUS charge measurements, I noticed that there were multiple copies of a few of the files created today in the directory: /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO
ls -l | grep "Aug 15"
-rw-r--r-- 1 1010 controls 160 Aug 15 07:50 ETMY_12_Hz_1376146243.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 08:22 ETMY_12_Hz_1376148152.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 07:50 ITMX_14_Hz_1376146241.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 08:22 ITMX_14_Hz_1376148154.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 07:50 ITMY_15_Hz_1376146220.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 08:08 ITMY_15_Hz_1376147322.txt
-rw-r--r-- 1 1010 controls 160 Aug 15 08:21 ITMY_15_Hz_1376148134.txt
Listing all files, filtering for only those that contain the string ETMX, and then filtering those for "Aug 15" with the following command:
ls -l | grep "ETMX" | grep "Aug 15"
returned no files, which means that while it looks like the measurement was run twice, it never completed ETMX.
I'm not sure if the analysis will run without all the files or not.
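The completeness check above can also be scripted so it's easy to rerun after the next measurement. A sketch; `missing_optics` is a hypothetical helper, assuming the OPTIC_<freq>_Hz_<GPS>.txt naming seen in the listing:

```python
from pathlib import Path

MEAS_OPTICS = ("ETMX", "ETMY", "ITMX", "ITMY")

def missing_optics(rec_dir, optics=MEAS_OPTICS):
    """Return the optics that have no *_Hz_*.txt measurement file in rec_dir.

    Filenames are assumed to start with the optic label, e.g.
    ETMY_12_Hz_1376146243.txt as in the directory listing above.
    """
    present = {p.name.split("_")[0] for p in Path(rec_dir).glob("*_Hz_*.txt")}
    return [o for o in optics if o not in present]
```

Run against the rec_LHO directory from today's listing, it would report ETMX as the only optic with no file at all (this sketch doesn't filter by date, so duplicates from reruns don't affect the answer).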
SUS_CHARGE LOG:
2023-08-15_15:26:18.969345Z SUS_CHARGE LOAD ERROR: see log for more info (LOAD to reset)
2023-08-15_15:26:53.512031Z SUS_CHARGE LOAD REQUEST
2023-08-15_15:26:53.524359Z SUS_CHARGE RELOAD requested. reloading system data...
2023-08-15_15:26:53.527151Z SUS_CHARGE Traceback (most recent call last):
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 566, in run
2023-08-15_15:26:53.527151Z self.reload_system()
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 327, in reload_system
2023-08-15_15:26:53.527151Z self.system.load()
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 400, in load
2023-08-15_15:26:53.527151Z module = self._load_module()
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 287, in _load_module
2023-08-15_15:26:53.527151Z self._module = self._import(self._modname)
2023-08-15_15:26:53.527151Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 159, in _import
2023-08-15_15:26:53.527151Z module = _builtin__import__(name, *args, **kwargs)
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 1109, in __import__
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap_external>", line 786, in exec_module
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap_external>", line 923, in get_code
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap_external>", line 853, in source_to_code
2023-08-15_15:26:53.527151Z File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
2023-08-15_15:26:53.527151Z File "/opt/rtcds/userapps/release/sus/h1/guardian/SUS_CHARGE.py", line 67
2023-08-15_15:26:53.527151Z ezca.get_LIGOFilter('SUS-ETMX_L3_DRIVEALIGN_L2L').ramp_gain(lscparams.ETMX_GND_MIN_DriveAlign_gain, ramp_time=20, wait=False)
2023-08-15_15:26:53.527151Z ^
2023-08-15_15:26:53.527151Z IndentationError: unindent does not match any outer indentation level
2023-08-15_15:26:53.527151Z SUS_CHARGE LOAD ERROR: see log for more info (LOAD to reset)
2023-08-15_15:29:10.009828Z SUS_CHARGE LOAD REQUEST
2023-08-15_15:29:10.011001Z SUS_CHARGE RELOAD requested. reloading system data...
2023-08-15_15:29:10.050137Z SUS_CHARGE module path: /opt/rtcds/userapps/release/sus/h1/guardian/SUS_CHARGE.py
2023-08-15_15:29:10.050393Z SUS_CHARGE user code: /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
2023-08-15_15:29:10.286761Z SUS_CHARGE system archive: code changes detected and committed
2023-08-15_15:29:10.331427Z SUS_CHARGE system archive: id: 9b481a54e45bfda96fa2f39f98978d76aa6ec7c0 (162824613)
2023-08-15_15:29:10.331427Z SUS_CHARGE RELOAD complete
2023-08-15_15:29:10.332868Z SUS_CHARGE calculating path: SWAP_TO_ITMX->INJECTIONS_COMPLETE
2023-08-15_15:29:14.129521Z SUS_CHARGE OP: EXEC
2023-08-15_15:29:14.129521Z SUS_CHARGE executing state: SWAP_TO_ITMX (11)
2023-08-15_15:29:14.135913Z SUS_CHARGE W: RELOADING @ SWAP_TO_ITMX.main
2023-08-15_15:29:14.158532Z SUS_CHARGE [SWAP_TO_ITMX.enter]
2023-08-15_15:29:14.276536Z SUS_CHARGE [SWAP_TO_ITMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_TRAMP => 10
2023-08-15_15:29:14.277081Z SUS_CHARGE [SWAP_TO_ITMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN => 0
2023-08-15_15:29:17.820392Z SUS_CHARGE REQUEST: DOWN
2023-08-15_15:29:17.821281Z SUS_CHARGE calculating path: SWAP_TO_ITMX->DOWN
2023-08-15_15:29:17.822235Z SUS_CHARGE new target: DOWN
2023-08-15_15:29:17.822364Z SUS_CHARGE GOTO REDIRECT
2023-08-15_15:29:17.822669Z SUS_CHARGE REDIRECT requested, timeout in 1.000 seconds
2023-08-15_15:29:17.824392Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:17.895303Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:17.958976Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.018262Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.079443Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.130595Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.197848Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.253456Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.318549Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.378993Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.446375Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.507978Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.576823Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.641493Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.695114Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.774571Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.822999Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.823662Z SUS_CHARGE REDIRECT timeout reached. worker terminate and reset...
2023-08-15_15:29:18.831141Z SUS_CHARGE worker terminated
2023-08-15_15:29:18.849938Z SUS_CHARGE W: initialized
2023-08-15_15:29:18.871834Z SUS_CHARGE W: EZCA v1.4.0
2023-08-15_15:29:18.872835Z SUS_CHARGE W: EZCA CA prefix: H1:
2023-08-15_15:29:18.872835Z SUS_CHARGE W: ready
2023-08-15_15:29:18.872980Z SUS_CHARGE worker ready
2023-08-15_15:29:18.883790Z SUS_CHARGE EDGE: SWAP_TO_ITMX->DOWN
2023-08-15_15:29:18.884081Z SUS_CHARGE calculating path: DOWN->DOWN
2023-08-15_15:29:18.886386Z SUS_CHARGE executing state: DOWN (2)
2023-08-15_15:29:18.891745Z SUS_CHARGE [DOWN.enter]
2023-08-15_15:29:18.893116Z Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
2023-08-15_15:29:20.216958Z SUS_CHARGE [DOWN.main] All nodes taken to DOWN, ISC_LOCK should have taken care of reverting settings.
ESD_EXC_ETMX LOG:
2023-08-01_15:07:01.324869Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-01_15:07:01.325477Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX calculating path: DOWN->DOWN
ESD_EXC_ITMX log:
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Restoring things to the way they were before the measurement
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Ramping on bias on ITMX ESD
2023-08-15_15:22:16.034430Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 0
2023-08-15_15:22:18.266457Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_SW1 => 8
2023-08-15_15:22:18.517569Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS => OFF: OFFSET
2023-08-15_15:22:18.518166Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 20
2023-08-15_15:22:18.518777Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 1.0
2023-08-15_15:22:38.431399Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 2.0
2023-08-15_15:22:41.264244Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_SW1S => 5124
2023-08-15_15:22:41.515470Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L => ONLY ON: INPUT, DECIMATION, FM4, FM5, OUTPUT
2023-08-15_15:22:41.515470Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] all done
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX EDGE: RESTORE_SETTINGS->COMPLETE
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX calculating path: COMPLETE->COMPLETE
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX executing state: COMPLETE (30)
2023-08-15_15:22:41.636417Z ESD_EXC_ITMX [COMPLETE.enter]
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX REQUEST: DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX calculating path: COMPLETE->DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX new target: DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX GOTO REDIRECT
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX REDIRECT requested, timeout in 1.000 seconds
2023-08-15_15:22:41.768046Z ESD_EXC_ITMX REDIRECT caught
2023-08-15_15:22:41.768046Z ESD_EXC_ITMX [COMPLETE.redirect]
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX EDGE: COMPLETE->DOWN
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX calculating path: DOWN->DOWN
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX executing state: DOWN (1)
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping bias_drive_bias_on
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping L_drive_bias_on
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping bias_drive_bias_off
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping L_drive_bias_off
2023-08-15_15:22:41.923244Z ESD_EXC_ITMX [DOWN.main] Clearing bias_drive_bias_on
2023-08-15_15:22:42.059154Z ESD_EXC_ITMX [DOWN.main] Clearing L_drive_bias_on
2023-08-15_15:22:42.216133Z ESD_EXC_ITMX [DOWN.main] Clearing bias_drive_bias_off
2023-08-15_15:22:42.349505Z ESD_EXC_ITMX [DOWN.main] Clearing L_drive_bias_off
2023-08-15_15:29:20.260953Z ESD_EXC_ITMX REQUEST: DOWN
2023-08-15_15:29:20.260953Z ESD_EXC_ITMX calculating path: DOWN->DOWN
2023-08-15_15:18:31.953103Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.enter]
2023-08-15_15:18:34.481594Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:18:34.482160Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:19:36.482043Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] timer['Injection duration'] done
2023-08-15_15:19:36.516842Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] Injection finished
2023-08-15_15:19:38.908908Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:19:39.011256Z ESD_EXC_ITMX EDGE: L_DRIVE_WITH_BIAS->TURN_BIAS_OFF
2023-08-15_15:19:39.011836Z ESD_EXC_ITMX calculating path: TURN_BIAS_OFF->COMPLETE
2023-08-15_15:19:39.012099Z ESD_EXC_ITMX new target: BIAS_DRIVE_NO_BIAS
2023-08-15_15:19:39.018534Z ESD_EXC_ITMX executing state: TURN_BIAS_OFF (15)
2023-08-15_15:19:39.019024Z ESD_EXC_ITMX [TURN_BIAS_OFF.enter]
2023-08-15_15:19:39.019710Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] Ramping off bias on ITMX ESD
2023-08-15_15:19:39.020547Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 0
2023-08-15_15:19:58.934813Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_OFFSET => 0
2023-08-15_15:19:58.935544Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 2
2023-08-15_15:19:58.935902Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 1
2023-08-15_15:20:01.140528Z ESD_EXC_ITMX EDGE: TURN_BIAS_OFF->BIAS_DRIVE_NO_BIAS
2023-08-15_15:20:01.141391Z ESD_EXC_ITMX calculating path: BIAS_DRIVE_NO_BIAS->COMPLETE
2023-08-15_15:20:01.142015Z ESD_EXC_ITMX new target: L_DRIVE_NO_BIAS
2023-08-15_15:20:01.143337Z ESD_EXC_ITMX executing state: BIAS_DRIVE_NO_BIAS (16)
2023-08-15_15:20:01.144372Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.enter]
2023-08-15_15:20:03.673255Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_LOCK_BIAS_EXC
2023-08-15_15:20:03.673786Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:21:05.674028Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] timer['Injection duration'] done
2023-08-15_15:21:05.697880Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] Injection finished
2023-08-15_15:21:07.987796Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_LOCK_BIAS_EXC
2023-08-15_15:21:08.072581Z ESD_EXC_ITMX EDGE: BIAS_DRIVE_NO_BIAS->L_DRIVE_NO_BIAS
2023-08-15_15:21:08.072581Z ESD_EXC_ITMX calculating path: L_DRIVE_NO_BIAS->COMPLETE
2023-08-15_15:21:08.073301Z ESD_EXC_ITMX new target: RESTORE_SETTINGS
2023-08-15_15:21:08.076744Z ESD_EXC_ITMX executing state: L_DRIVE_NO_BIAS (17)
2023-08-15_15:21:08.079417Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.enter]
2023-08-15_15:21:10.597939Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:21:10.598481Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:22:12.598413Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] timer['Injection duration'] done
2023-08-15_15:22:12.633547Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] Injection finished
2023-08-15_15:22:15.937968Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:22:16.018077Z ESD_EXC_ITMX EDGE: L_DRIVE_NO_BIAS->RESTORE_SETTINGS
2023-08-15_15:22:16.018395Z ESD_EXC_ITMX calculating path: RESTORE_SETTINGS->COMPLETE
2023-08-15_15:22:16.018676Z ESD_EXC_ITMX new target: COMPLETE
2023-08-15_15:22:16.019499Z ESD_EXC_ITMX executing state: RESTORE_SETTINGS (25)
2023-08-15_15:22:16.019891Z ESD_EXC_ITMX [RESTORE_SETTINGS.enter]
2023-08-15_15:22:16.020220Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Finished with all excitations
2023-08-15_15:22:16.033260Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Saved GPS times in logfile: /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO/ITMX_14_Hz_1376148154.txt
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Restoring things to the way they were before the me
These are both valid charge measurements; we could analyze either or both (and check the answer is the same). We repeated the measurements while troubleshooting the issue in 72219. We have now fixed the issue (a typo) in SUS_CHARGE that was preventing the last ETMX measurement from being taken.
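The traceback in the SUS_CHARGE log above is the generic Python failure mode for a mismatched unindent. A minimal reproduction of that error class; the function body here is invented for illustration and is not the actual SUS_CHARGE.py code:

```python
# A body line dedented to a level that was never opened (levels here are
# 0 and 8, then a line at 4) triggers the same IndentationError that
# guardian hit when loading SUS_CHARGE.py.
bad_source = (
    "def swap_to_itmx():\n"
    "        gain = 0\n"
    "    tramp = 10\n"   # unindent to a level that doesn't exist
)
try:
    compile(bad_source, "SUS_CHARGE_sketch.py", "exec")
    raised = False
except IndentationError as err:
    raised = True
    message = err.msg
```

Guardian only reports this at LOAD time, which is why the node sat in LOAD ERROR until the file was fixed and reloaded.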
I just analyzed the first batch of in-lock charge measurements.
There are 13-14 plot points on most of the other plots but only 10 for ETMX.
Despite great efforts from Jenne and Jeff to track down the source, we are still seeing a 102 Hz line rung up right at the end of the lownoise_length_control state. Since we had a random lockloss, I asked Tony to take us to lownoise_esd_etmx and I tried walking through lownoise length control by hand (copying the guardian code line by line into the shell).
The lines 5427-5468 ramp various gains to zero, set up the filters and drive matrix for the LSC feedforward, and prepare for the SRCL offset. These lines run fine and do not ring up the 102 Hz line.
I am able to run the first action line of the run state, which sets the MICH FF gain to 1 (line 5480). This runs fine, no 102 Hz line. Then, I ran the next line to turn on the SRCL FF gain (line 5481). This caused an immediate lockloss (huh?), despite the fact that this code has run many times just fine.
On the next lock attempt, I tried running the MICH and SRCL gain lines at the exact same time. Also immediate lockloss.
I have no idea why this is such an issue. All it does is ramp the gains to 1 (the tramps are set on a previous line to 3 seconds).
Both of these locklosses seem to ring up a test mass bounce mode, suggesting that the SRCL FF (I assume) is kicking a test mass pretty hard.
This might be a red herring, or maybe it's a clue. I don't see any 102 Hz line during these locklosses though.
The offending lines:
I think it's pretty clear that this is an LSC feedforward problem. I attached two ndscopes of the ETMX L3 master outs, one zoomed in and one zoomed out. The massive oscillation in the signal is the 102 Hz line, which first appears in the time series starting at UTC 5:24:32 and some milliseconds. This corresponds exactly to the time in the guardian log when the LSC feedforward gain is ramped on (see copied guardian log below).
I have added two new lines to lownoise_length_control that increase the LSC FF ramp time from 3 to 10 seconds. These lines are at the end of the main state, right before the run state. I also increased the timer in the first step of the run state to wait 10 seconds after the FF gains are set, before moving to the next part of the run state, which changes LSC gains.
This will result in two SDF diffs for the feedforward ramp times in the LSC model. They can be accepted. Tagging Ops.
SDF changes accepted picture attached.
Reverted these changes to make it past LOWNOISE_LENGTH_CONTROL:
5471 ezca['LSC-MICHFF_TRAMP'] = 3 Changed back to 3 from 10
5472 ezca['LSC-SRCLFF1_TRAMP'] = 3 Changed back to 3 from 10
And
5488 self.timer['wait'] = 1 #Changed back to 1 from 10.
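For reference, what a TRAMP value means for the gain trajectory. A sketch assuming a linear ramp sampled at an arbitrary 16 Hz; the exact front-end ramp shape and rate are assumptions here, the point is only the timing:

```python
import numpy as np

def tramp_profile(start, end, tramp, fs=16.0):
    """Gain trajectory for a linear ramp of `tramp` seconds at fs samples/s."""
    t = np.arange(0.0, tramp, 1.0 / fs)
    g = start + (end - start) * (t / tramp)
    return t, g

# the reverted 3 s ramp vs the tried 10 s ramp for a 0 -> 1 FF gain:
t3, g3 = tramp_profile(0.0, 1.0, 3.0)
t10, g10 = tramp_profile(0.0, 1.0, 10.0)
```

With the 10 s TRAMP the feedforward spends more than three times as long at partial gain, which is consistent with the comment that slowing the transition could make an instability during the transition worse rather than better.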
This may not be a problem with a filter kick, but with the filter making some loop unstable and driving up the 102Hz line. I suspect that changing the aux dofs gain immediately afterwards makes it stable again. If so, slowing down the transition only makes it worse. We may need to reorder the steps.
As a follow-up to alog 72061, a battery-operated voltage reference was connected to the OM2 heater chassis. The Beckhoff cable was disconnected for now.
Please check if the 1.66Hz comb is still there.
Beckhoff output was 7.15V across the positive and negative input of the driver chassis (when the cable was connected to the chassis), so the voltage reference was set to 7.15V.
We used REED R8801 because its output was clean (4th pic) while CALIBRATORS DVC-350A was noisy (5th pic).
detchar-request git issue for tracking purposes.
As you can see from one of the pictures above, the unit is powered from an AC supply, so we can leave it for a while.
If there is a power outage, the voltage reference won't come back automatically. Though I hope we never need this instruction, I'll be gone for a month and Daniel will be gone for a week, so I'm writing this down just in case.
0. Instruction manual for the voltage reference (R8801) is found in the case of the unit inside a cabinet where all voltage references are stored in the EE shop. Find it and bring it to the floor.
1. The voltage reference and the DC power supply are on top of the work table by HAM6. See the 2nd picture in the above alog.
2. The DC supply will be ON as soon as the power comes back. Confirm that the output voltage is set to ~9V. If not, set it to 9V.
3. Press the yellow power button of the voltage reference to turn it on. You'll have to press it longer than you think is required. See the 1st picture in the above alog.
4. Press the "V" button to set the unit to voltage source mode. Set the voltage to 7.15V. Use right/left buttons to move cursor to the decimal place you'd like to change, and then use up/down buttons to change the number.
5. Most likely, a funny icon that you'll never guess to mean "Auto Power Off" will be displayed at the top left corner of the LCD. Now is the time to look at the LCD description on page 4 of the manual to confirm that it's indeed the Auto Power Off icon.
6. If the icon is indeed there (i.e. the unit is in Auto Power Off mode), press power button and V button at the same time to cancel Auto Power Off. You'll have to press the buttons longer than you think is required. If the icon doesn't go away, repeat.
7. Confirm that the LCD of R8801 looks exactly like the 1st picture of the above alog. You're done.
I'm not seeing any PI modes coming up during the 02:22 UTC lockloss, or any of the other locklosses from yesterday.
By "lockloss from NLN @ 2:22 UTC almost certainly because of a PI ring-up" I really mean that I thought I had a smoking gun for that first lockloss yesterday, but just didn't understand the arbitrary "y" cursors on the plots for the PI monitors.
My apologies to the PI team for making poor assumptions.