Jeff Kissel, Elenna Capote
Unfortunately, we just had another TMSX lockloss, after having swapped the TMSX F1/F2/F3/LF top mass coil driver. Jeff and I pulled our hair out for a bit checking every signal we could think of. We returned to the FASTIMON signals and found a discrepancy between them and the MASTER OUT (the digital request to the DAC). The F2 FASTIMON (coil driver output monitor) shows a jump in current, which Jeff and I calibrated to be about a 0.5 mA jump (using the conversion factor of 0.0228 mA/ct from this alog), corresponding to about a 50 mV jump in voltage (using the calibration factor of 9.943 mA/V from this table). However, no such signal is present in the MASTER OUTs, indicating that the DAC is the possible culprit. The usual RMS of the FASTIMON channel seems to be about 0.002 mA, which is about 0.2 mV RMS. Therefore, this is a huge impulse being sent to the suspension, which has a 0.35 Hz resonance (T1200404). We believe this jump is causing the TMS suspension to shake at 0.35 Hz, and the motion is too large for our other slow servos to follow.
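As a quick sanity check of the calibration quoted above, the sketch below converts a FASTIMON excursion from counts to mA and mV using the two conversion factors cited in this entry. The ~22-count jump size used in the example is illustrative, not a measured value.

```python
# FASTIMON calibration sanity check (factors quoted in the entry above).
MA_PER_COUNT = 0.0228   # mA per FASTIMON count (from the referenced alog)
MA_PER_VOLT = 9.943     # coil driver mA per V (from the referenced table)

def fastimon_counts_to_physical(counts):
    """Convert a FASTIMON excursion in counts to (mA, mV)."""
    current_mA = counts * MA_PER_COUNT
    voltage_mV = current_mA / MA_PER_VOLT * 1000.0
    return current_mA, voltage_mV

# e.g. a ~22 count jump reproduces the ~0.5 mA / ~50 mV numbers above,
# versus a typical RMS of ~0.002 mA (~0.2 mV)
print(fastimon_counts_to_physical(22))
```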
We see the jump in the F2 FASTIMON channel but not the F3, for example, in this particular lockloss. Jeff is finding similarly huge jumps in the F2 FASTIMON for some of the other locklosses (he is looking through them now).
Some corroborating evidence that this is consistently happening during (as far as I looked) all of these now-called TMSX yaw excursions or oscillations:
Attachment 1 -- 2025-07-29 21:42 UTC (1437860551 lock loss): A repeat of Elenna's plot, in my formatting, showing the same thing -- the large (uncalibrated) excursion in the F2 FASTIMON where no MASTER_OUT control request is made.
Attachment 2 -- 2025-07-27 13:39 UTC (1437277872 lock loss)
Attachment 3 -- 2025-07-23 10:49 UTC (1437303004 lock loss)
Attachment 4 -- 2025-07-23 03:50 UTC (1437277872 lock loss)
WP12709 Replace EX Dolphin IXS600 Switch
Jonathan, Erik, EJ, Dave:
The Dolphin IX switch at EX was damaged by the 06apr2025 power glitch; it continued to function as a switch, but its network interface stopped working. This meant we couldn't fence a particular EX front end from the Dolphin fabric by disabling its switch port via the network interface. Instead we were using the IPC_PAUSE function of the IOP models. Also, because the RCG needs to talk over the network to a switch on startup, Erik configured the EX front ends to control an unused port on the EY switch.
This morning Erik replaced the broken h1rfmfecex0 with a good spare. The temporary control-EY-switch-because-EX-is-broken change was removed.
Before this work the EX SWWD was bypassed on h1iopseiex and h1cdsrfm was powered down.
During the startup of the new switch, the IOP models for SUS and ISC were time glitched, putting them into a DACKILL state. All models on h1susex and h1iscex were restarted to recover from this.
Several minutes later h1susex spontaneously crashed, requiring a reboot. Everything has been stable from this point onwards.
WP12687 Add STANDDOWN EPICS channels to DAQ
Dave:
I added a new H1EPICS_STANDDOWN.ini to the DAQ; it was installed as part of today's DAQ restart.
WP12719 Add two FCES Ion Pumps and One Gauge To Vacuum Controls
Gerardo, Janos, Patrick, Dave
Patrick modified the h0vacly Beckhoff to read out two new FCES Ion Pumps and a new Gauge.
The new H0EPICS_VACLY.ini was added to the DAQ, requiring an EDC+DAQ restart.
WP12689 Add SUS SR3/PR3 Fast Channels To DAQ
Jeff, Oli, Brian, Edgard, Dave:
New h1sussr3 and h1suspr3 models (HLTS suspensions) were installed this morning. Each model added two 512Hz fast channels to the DAQ. Renaming of subsystem parts resulted in the renaming of many fast and slow DAQ channels. A summary of the changes:
In sus/common/models three files were changed (svn version numbers shown):
HLTS_MASTER_W_EST.mdl production=r31259 new=32426
SIXOSEM_T_STAGE_MASTER_W_EST.mdl production=r31287 new=32426
ESTIMATOR_PARTS.mdl production=r31241 new=32426
HLTS_MASTER_W_EST.mdl:
The only change is to the DAQ_Channels list: two channels, M1_ADD_[P,Y]_TOTAL, were added.
SIXOSEM_T_STAGE_MASTER_W_EST.mdl:
At top level, change the names of the two ESTIMATOR_HXTS_M1_ONLY blocks:
PIT -> EST_P
YAW -> EST_Y
Inside the ADD block:
Add two testpoints P_TOTAL, Y_TOTAL (referenced by HLTS mdl)
ESTIMATOR_PARTS.mdl:
Rename block EST -> FUSION
Rename filtermodule DAMP_EST -> DAMP_FUSION
Rename epicspart DAMP_SIGMON -> OUT_DRIVEMON
Rename testpoint DAMP_SIG -> OUT_DRIVE
DAQ_Channels list changed according to the above renames.
DAQ Changes:
This results in a large number of DAQ changes for SR3 and PR3. For each model:
+496 slow chans, -496 slow chans (rename of 496 channels).
+64 fast chans, -62 fast chans (add 2 chans, rename 62 chans).
DAQ Restart
Jonathan, Dave:
The DAQ was restarted for several changes:
New SR3 and PR3 INI, fast and slow channel renames, addition of 512Hz fast channels.
New H0EPICS_VACLY.ini, adding Ion Pumps and Gauge to EDC.
New H1EPICS_STANDDOWN.ini, adding ifo standdown channels to EDC.
This was a full EDC DAQ restart. Procedure was:
stop TW0 and TW1, then restart EDC
restart DAQ 0-leg
restart DAQ 1-leg
As usual, GDS1 needed a second restart. Unusually, FW1 spontaneously restarted itself after having run for 55 minutes, an uncommon late restart.
Jonathan tested new FW2 code which sets the run number in one place and propagates it to the various frame types.
Detailed DAQ changes in attached file
Tue29Jul2025
LOC TIME HOSTNAME MODEL/REBOOT
09:02:53 h1susex h1iopsusex <<< Restarts following EX Dolphin IXS600 switch replacement
09:02:57 h1iscex h1iopiscex
09:03:07 h1susex h1susetmx
09:03:11 h1iscex h1pemex
09:03:21 h1susex h1sustmsx
09:03:25 h1iscex h1iscex
09:03:35 h1susex h1susetmxpi
09:03:39 h1iscex h1calex
09:03:53 h1iscex h1alsex
09:11:30 h1susex h1iopsusex <<< h1susex crash
09:11:43 h1susex h1susetmx
09:11:56 h1susex h1sustmsx
09:12:09 h1susex h1susetmxpi
12:15:20 h1sush2a h1suspr3 <<< New models, EST rename and 2 fast chans added
12:16:28 h1sush56 h1sussr3
12:19:29 h1susauxb123 h1edc[DAQ] <<< EDC for new VAC-LY and STANDDOWN
12:21:00 h1daqdc0 [DAQ] <<< 0-leg
12:21:08 h1daqfw0 [DAQ]
12:21:08 h1daqtw0 [DAQ]
12:21:10 h1daqnds0 [DAQ]
12:21:17 h1daqgds0 [DAQ]
12:24:10 h1daqdc1 [DAQ] <<< 1-leg
12:24:17 h1daqfw1 [DAQ]
12:24:17 h1daqtw1 [DAQ]
12:24:20 h1daqnds1 [DAQ]
12:24:27 h1daqgds1 [DAQ]
12:25:18 h1daqgds1 [DAQ] <<< GDS1 2nd restart
13:19:29 h1daqfw1 [DAQ] <<< spontaneous restart
16:17:57 h1susex h1iopsusex <<< Replace TMSX 18bit-DAC
16:18:10 h1susex h1susetmx
16:18:23 h1susex h1sustmsx
16:18:36 h1susex h1susetmxpi
I wrote a script that looks at sudden range drops for both H1 and L1 and searches those times for ETMX glitches. With this script I have been able to confirm that LLO gets ETMX glitches that they're able to ride out. However, we don't know if the glitches cause locklosses for them too.
I used /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation.ipynb to look for ETMX glitches that would cause the range to drop below 100 Mpc. I have only looked over a few days at each ifo, but it's already clear that they definitely have ETMX glitches, or some glitch that presents itself very similarly. The plots for LHO can be found in /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation/H1/, and the LLO plots in /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation/L1/. I've attached a couple here of each as examples.
I wanted to make the plots between the two ifos as similar as possible to help better judge glitch size and channels it appears in. Both ifos have matching ylims for L1, L2, and L3, and although I couldn't use the same ylims for the DCPDs, I have scaled them so the delta between the ymin and ymax is 0.3 mA for both ifos. Unfortunately, I was not able to do any scaling for DARM or CALIB STRAIN due to the amount they vary between both locks as well as between ifos.
Both LHO and LLO seem to have ETMX glitches that appear both alone and in groups. As you can see, LLO generally has much noisier ETMX L3, DCPD, and DARM channels. This hides the true morphology of the glitches in ETMX L3, and may be preventing us from seeing the glitch appear in the DCPDs and DARM as often as it appears in LHO's DCPD and DARM channels. In LLO's examples, you can see very small glitches in the DCPDs and DARM at the same time, but relative to the entire trace, they aren't affecting those channels as much as they do at LHO. Feel free to take a look through the rest of the glitch examples in the directories to get a better idea of the range of how these glitches can present and affect the different parts of the ifo.
Through working with this script I've also found good thresholds to use for searching for these glitches at LLO (their DARM and ETMX L3 channels are much noisier than ours), so it would be straightforward to implement an ETMX glitch lockloss search/tag for them.
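For reference, here is a minimal sketch of the kind of search the notebook performs, written with gwpy. It is not the actual notebook: the range and ETMX L3 channel names and the glitch threshold are illustrative assumptions.

```python
# Sketch of a range-drop / ETMX-glitch search (channel names and thresholds
# are assumptions standing in for those used in the actual notebook).
import numpy as np
from gwpy.timeseries import TimeSeries

RANGE_CHAN = 'H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC'   # assumed range channel
GLITCH_CHAN = 'H1:SUS-ETMX_L3_MASTER_OUT_UR_DQ'              # assumed ETMX L3 drive channel
RANGE_THRESHOLD = 100.0    # Mpc, per the text above
GLITCH_SIGMA = 5.0         # excursion threshold in units of the local RMS (assumption)

def find_range_drops(start, end):
    """Return GPS times where the range first dips below RANGE_THRESHOLD."""
    rng = TimeSeries.get(RANGE_CHAN, start, end)
    below = rng.value < RANGE_THRESHOLD
    onsets = np.flatnonzero(below & ~np.roll(below, 1))   # first sample of each dip
    return rng.times.value[onsets]

def has_etmx_glitch(gps, pad=10):
    """True if the ETMX L3 drive shows a large excursion within +/- pad seconds of gps."""
    drive = TimeSeries.get(GLITCH_CHAN, gps - pad, gps + pad).detrend()
    return np.max(np.abs(drive.value)) > GLITCH_SIGMA * np.std(drive.value)

for t in find_range_drops(1437800000, 1437886400):        # example GPS span
    print(int(t), 'ETMX glitch near this drop:', has_etmx_glitch(t))
```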
21:42 UTC lockloss. We were starting to shake from an M5.6 earthquake from El Salvador, but it looks to be another ASC_Y / TMSX_Y oscillation lockloss.
Sheila, Camilla WP# 12716
We followed the setup in 80451; additionally, I plugged BNCs inside SQZT7 to directly connect the tee'd-off OPO_IR_PD and OPO_TRANS PDs to the table feed-through, as OPO REFL "test in" and "test out", see photo. This let us do all the work in laser safe mode, and the BNCs have been left plugged in.
We started in the second leftmost spot from the right (85589, 85488, 85297); we had been there since late June, but the spot degraded quickly and the green power needed to achieve 80uW from the OPO was now ~30mW, which is close to the maximum available.
We first checked where we were by going to the left edge of the crystal and then returned to the leftmost spot.
Once there we measured NLG, moved the crystal in steps of 100 counts, re-optimized the OPO temperature, and measured NLG again, similar to 85589. Once we got to an NLG above 15 with 80uW of OPO TRANS, we stopped. The green power required to achieve 80uW was now halved, from 30mW to 15mW, much better. We started with the OPO temperature set to 31.2 deg and ended at 32.3 deg, which is closer to where we were operating 3 years ago, photo.
| Crystal Move | OPO Setpoint | Temp | Amplified Max | UnAmp | Dark | NLG | Notes |
| Starting | | 31.202 | | | | | starting spot photo |
| 45x50 to left, 4x10 to left | 45uW | | 0.0266166 | 0.007152 | -3.7e-5 | 3.7 | |
| 41x50 to left | | | | | | | |
| 29x50 to right, 12 x 10m to right | 80uW | 31.1435 | 0.058468 | | | 8.13 | leftmost spot photo |
| 10 to left | | same | | | | | Unsure we were moving here; later plugged OPO PZT scanning back in to watch |
| 30 to left | | same | | | | | |
| 50 to left | | same | | | | | |
| 100 to left | | 31.2256 | 0.0606349 | | | 8.4 | photo |
| 100 to left | | 31.306 | 0.06549 | | | 9.1 | |
| 100 to left | | 31.404 | 0.070145 | | | 9.75 | |
| 100 to left | | | 0.074379 | | | 10.3 | |
| 100 to left | | 31.6236 | 0.0788554 | | | 11.0 | |
| 100 to left | | 31.730 | 0.0802738 | | | 11.2 | |
| 100 to left | | 31.8546 | 0.0947171 | | | 13.2 | |
| 100 to left | | 31.9676 | 0.10171 | | | 14.1 | |
| 100 to left | | 32.352 | 0.112529 | 0.007088 | -2.2e-5 | 15.8 | Leaving here |
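For reference, the NLG values in the table are consistent with the usual estimate NLG = (amplified max - dark) / (unamplified - dark); a minimal sketch of that arithmetic, using the first and last fully-measured rows of the table, is below.

```python
def nlg(amplified_max, unamplified, dark):
    """Nonlinear gain estimate from the amplified, unamplified, and dark PD levels."""
    return (amplified_max - dark) / (unamplified - dark)

print(round(nlg(0.0266166, 0.007152, -3.7e-5), 1))   # -> 3.7  (first measured spot)
print(round(nlg(0.112529, 0.007088, -2.2e-5), 1))    # -> 15.8 (final spot, "Leaving here")
```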
As the next step in fine-tuning the SR3 Y estimator, we needed to retake the SUSPOINT to M1 measurements as well as OLTFs so they could be used to calculate the filter modules for the suspoint and drive estimators. I took those measurements today.
General setup:
- in HEALTH_CHECK (but damping back on)
- damping for Y changed from -0.5 to -0.1
- OSEMINF gains and DAMP FM7 turned on (and left on afterwards 86070)
SUSPOINT to M1:
Data for those measurements can be found in /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/Common/Data/2025-07-29_1730_H1ISIHAM5_ST1_WhiteNoise_SR3SusPoint_{L,T,V,R,P,Y}_0p02to50Hz.xml r12492
M1 to M1 (OLTFs):
After this, the next step is to take regular transfer functions with the above setup (Y having -0.1 damping).
That data is in /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-07-29_1830_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz_OpenLoopGainTF.xml r12493
Reminder that these open loop transfer functions were taken with the damping Y gain of -0.1, so they should not be taken as 'nominal' OLTFs.
The M1 to M1 TFs were supposed to be regular TFs so here is the alog for those: 86202
Shivaraj Kandhasam posted in MatterMost that the EY BRS signal looked off, and had been since last Tuesday (July 22). I talked to Jim about this and indeed the velocity of the BRS was consistently very high. Jim said that it looked like it needed to grab new frames for the C code calculations. Immediately after Jim reset this, by logging into the BRS computer and clicking the "Grab New Frames" button on the GUI, the velocity slowed down and it was quickly brought back to a normal realm.
I trended this back and it's happened a few times, most recently from May 21 to June 5. To catch this the next time it happens, I've done two things:
We previously swapped the satamp boxes for TMSX M1 F1/F2/F3/LF (85980), and at the time I had just put in the generic 5.31:0.0969 zp compensation filters, since the 'best possible' filters for that satamp were listed under the OMC optic name in the txt files. Now that we've decided we're going to keep this satamp box in, we have fixed the naming in the txt files, so I was able to update the compensation filters to the 'best possible' for each satamp channel (output). These were loaded in.
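For context, a minimal sketch of the shape of the generic "5.31:0.0969" zp compensation, read literally as a single zero at 5.31 Hz and a single pole at 0.0969 Hz (the overall gain normalization of the actual foton filter is not asserted here):

```python
# Frequency response of a zero at 5.31 Hz over a pole at 0.0969 Hz,
# normalized here to unity gain at high frequency (an assumption).
import numpy as np
from scipy import signal

f_zero, f_pole = 5.31, 0.0969
sys = signal.ZerosPolesGain([-2 * np.pi * f_zero], [-2 * np.pi * f_pole], 1.0)

freqs = np.logspace(-3, 2, 500)                     # Hz
_, resp = signal.freqresp(sys, 2 * np.pi * freqs)
print('gain at 0.01 Hz:', abs(resp[np.argmin(abs(freqs - 0.01))]))   # ~ f_zero/f_pole ~ 55
print('gain at 100 Hz :', abs(resp[np.argmin(abs(freqs - 100))]))    # ~ 1
```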
As part of the estimator work, we measured and calculated OSEMINF gains (85907) and compensating gain filter modules for the DAMP bank (86026). We believe these new values to be correct, so we have gone ahead and permanently updated the OSEMINF gains as well as turned on the compensating gains in FM7 in the DAMP filter bank for SR3.
This update means that there will be a difference in the apparent location of the DAMP INs, but this is just because of the change in the OSEMINF gains. These gains were changed (along with the compensating gains filter modules turned on) at 2025-07-29 16:58 UTC. They were put into SDF safe and will be saved in OBSERVE once we get there.
| OSEM | Old OSEMINF gain | New OSEMINF gain |
| T1 | 1.478 | 3.213 |
| T2 | 0.942 | 1.517 |
| T3 | 0.952 | 1.494 |
| LF | 1.302 | 1.733 |
| RT | 1.087 | 1.494 |
| SD | 1.290 | 1.793 |
The compensation gains put into the DAMP filter bank are in FM7, and they are the following (a rough sketch of applying these settings follows the list):
L: 0.740
T: 0.732
V: 0.548
R: 0.550
P: 0.628
Y: 0.757
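For illustration only, a rough sketch of how settings like these could be applied through the guardian ezca interface; the channel names follow the standard SUS naming convention but are written here as an assumption, and this is not the procedure that was actually run.

```python
from ezca import Ezca

ezca = Ezca(prefix='H1:')   # EPICS prefix assumed

new_oseminf_gains = {'T1': 3.213, 'T2': 1.517, 'T3': 1.494,
                     'LF': 1.733, 'RT': 1.494, 'SD': 1.793}
damp_comp_dofs = ['L', 'T', 'V', 'R', 'P', 'Y']

# write the new OSEMINF input gains (assumed channel names)
for osem, gain in new_oseminf_gains.items():
    ezca.write('SUS-SR3_M1_OSEMINF_%s_GAIN' % osem, gain)

# enable the FM7 compensation gain in each M1 DAMP filter bank
for dof in damp_comp_dofs:
    ezca.switch('SUS-SR3_M1_DAMP_%s' % dof, 'FM7', 'ON')
```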
Randy, Jim, and Mitchell
One section of the work platform was installed on the -X side of BSC2 this morning.
LVEA has been swept.
Closes WP 12719. Patrick T., Gerardo M., Dave B. "Update the h0vacly Beckhoff TwinCAT 3 system manager and PLC code to add the IPFCC6 and IPFCC8 filter cavity tube ion pumps and the PTCC7 filter cavity tube gauge. Will require a restart of the PLC and IOC. Will require a DAQ restart." This has been completed. The code on h0vacly is now at commit 6a6e7bc55cb87bc9188312eee1c9a8129bdf7946. No issues. Dave did a burt restore. A DAQ restart is pending.
Trying to narrow down why TMSX is involved in locklosses, we have replaced the TMS coil driver that drives F1, F2, F3, and LF. Chassis S1102670 was replaced with S1102666. The operator returned the system to damping. This is a wait-and-see test.
Tue Jul 29 10:08:48 2025 INFO: Fill completed in 8min 44secs
After Richard swapped the TMSX coil driver chassis I took a look at the raw OSEM counts for TMSX (F1, F2, F3, LF), comparing to an earlier time when TMSX and ISC_LOCK were both in the same states (ALIGNED, DOWN). I see that LF's counts are ~2500 counts lower post-swap. I'm not sure if there are any other checks to run.
TITLE: 07/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 0mph Gusts, 0mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
When SUS_CHARGE was finishing up and in SWAPPING_BACK_TO_ETMX, ETMX L3 started to constantly saturate and there was a big line in DARM at ~150 Hz. This behavior persisted even after the GRD was done and went back to waiting (tagging SUS).
At 15:02 UTC I induced a lockloss by flipping the polarity of H1:IMC-REFL_SERVO_IN1POL, to break the lock and stop the EX saturations.
I took a look at why there were saturations, and I think it's because of our changes yesterday. When the measurement finished, the ETMX bias was reverted to the new value that Sheila and I used yesterday, but the L3 drivealign gain was set to the old value of 198 instead of the new value of 88. I looked at the script and I don't understand exactly how that happened, since it appears that it pulls the values right from LSCparams, but the ndscope I attached shows this is what happened.
I think this was because the SUS_CHARGE GRD was not reloaded after lscparams was changed; I've reloaded the GRD.
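A toy, self-contained illustration of why this happens: Python caches imported modules, so a running guardian node keeps the lscparams values captured at load time until the node is reloaded. The file and attribute names below are stand-ins, not the real lscparams.

```python
import importlib, pathlib, sys, tempfile

tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)
params_file = pathlib.Path(tmp) / 'lscparams_demo.py'

params_file.write_text('ETMX_L3_DRIVEALIGN_GAIN = 198\n')   # "old" value at node load time
import lscparams_demo
print(lscparams_demo.ETMX_L3_DRIVEALIGN_GAIN)               # 198

params_file.write_text('ETMX_L3_DRIVEALIGN_GAIN = 88\n')    # file edited on disk ("new" value)
print(lscparams_demo.ETMX_L3_DRIVEALIGN_GAIN)               # still 198: the import is cached

importlib.reload(lscparams_demo)                            # what reloading the GRD effectively does
print(lscparams_demo.ETMX_L3_DRIVEALIGN_GAIN)               # 88
```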
Fil, Elenna, Oli
In an effort to find and solve the ASC excursion locklosses that seem to be linked to TMSX (85973), Fil swapped out the satellite amplifier that he had installed last Tuesday (85770) that had serial number S1100150 with another modified satamp that he had on hand, S1100122 (originally meant for OMC T1 T2 T3 LF). We are hoping this fixes the problem. Since this is partially for testing and since we originally were planning to use this satellite amplifier for the OMC, for now I've replaced the OSEMINF compensation filters for TMSX (which had the specific tuned filters) with the generic 5.31:0.0969 zp filters. We can update these later when we figure out which satamp will be staying there. I've loaded these filters in and brought TMSX back.
taken out: S1100150
put in: S1100122 (originally meant for OMC T1 T2 T3 LF)
Filters updated to best possible for this new satamp: 86071
Ivey used the ISO calibration measurements that I took earlier (85906) to calculate what the OSEMINF gains should be on SR3 (85907), and this script also calculates what it thinks the compensation gain in the DAMP filter bank should be.
The next step is to use OLG TFs to measure what values we would use in the DAMP filter bank to compensate for the change in OSEMINF gains, and we can compare them to the calculated values to see how close they are.
I took two sets of OLG measurements for SR3:
- a set with the nominal OSEMINF gains
T1: 1.478
T2: 0.942
T3: 0.952
LF: 1.302
RT: 1.087
SD: 1.290
- a set with the OSEMINF gains changed to the values in 85907
T1: 3.213
T2: 1.517
T3: 1.494
LF: 1.733
RT: 1.494
SD: 1.793
Measurement settings:
- SR3 in HEALTH_CHECK but with damping loops on
- SR3 damping nominal (all -0.5)
- HAM5 in ISOLATED
Nominal gain set:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-07-22_1700_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz_OpenLoopGainTF.xml r12478
New gain set:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-07-22_1800_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz_OpenLoopGainTF.xml r12478
Once I had taken these measurements, I exported txt files for each DOF's OLG and used one of my scripts, /ligo/svncommon/SusSVN/sus/trunk/HLTS/Common/MatlabTools/divide_traces_tfs.m, to plot the OLG for each DOF, compare the traces between the two OSEMINF gain settings, and then divide the traces and take an average of the ratio, which becomes the compensation gain put in as a filter in the DAMP filter bank (plots; a sketch of this divide-and-average step is at the end of this entry). The values I got for the compensation gains are below:
L: 0.740
T: 0.732
V: 0.548
R: 0.550
P: 0.628
Y: 0.757
| DOF | OLTF measured and calculated DAMP Compensation gains | ISO Calibration measurement calculated compensation gains (85907) | Percent difference (%) |
| L | 0.740 | 0.740 | 0.0 |
| T | 0.732 | 0.719 | 1.8 |
| V | 0.548 | 0.545 | 0.5 |
| R | 0.550 | 0.545 | 0.9 |
| P | 0.628 | 0.629 | 0.2 |
| Y | 0.757 | 0.740 | 2.3 |
These are pretty similar to what my script had found them to be last time, before the satamp swap (85288), as well as being very similar to the values that Ivey's script had calculated.
Maybe the accuracy of Ivey's script means that in the future we don't need to run the double sets of OLG transfer functions and can just use the values that the script gives.
The compensation gains have been loaded into the SR3 DAMP filter bank in FM7, as well as being updated in the estimator damp banks for P and Y. They have been loaded in but, of course, are currently left off for nominal operations since the OSEMINF gains haven't been updated yet.
The OSEMINF gains and these new DAMP compensating gains have been turned on together: 86070
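As a rough illustration of the divide-and-average step described above (standing in for divide_traces_tfs.m, not a copy of it), assuming the exported txt files hold frequency, real, and imaginary columns:

```python
import numpy as np

def compensation_gain(olg_nominal_txt, olg_newgain_txt, fmin=0.1, fmax=10.0):
    """Average |OLG_nominal / OLG_newgain| over a band to get one scalar DAMP gain."""
    f, re_o, im_o = np.loadtxt(olg_nominal_txt, unpack=True)   # assumed column layout
    _, re_n, im_n = np.loadtxt(olg_newgain_txt, unpack=True)
    ratio = np.abs((re_o + 1j * im_o) / (re_n + 1j * im_n))
    band = (f >= fmin) & (f <= fmax)                           # averaging band is an assumption
    return ratio[band].mean()

# hypothetical usage:
# gain_Y = compensation_gain('SR3_M1_Y_OLG_nominal.txt', 'SR3_M1_Y_OLG_newgains.txt')
```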