H1 ISC (Lockloss, SUS)
oli.patane@LIGO.ORG - posted 15:05, Tuesday 29 July 2025 (86045)
ETMX glitch comparison between LHO and LLO

I wrote a script that looks at sudden range drops for both H1 and L1 and searches those times for ETMX glitches. With this script I have been able to confirm that LLO gets ETMX glitches that they're able to ride out. However, we don't know if the glitches cause locklosses for them too.

I used /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation.ipynb to look for ETMX glitches that would cause the range to drop below 100 Mpc. I have only looked over a few days at each ifo, but it's already clear that they definitely have ETMX glitches, or some glitch that presents itself very similarly. The plots for LHO can be found in /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation/H1/, and the LLO plots in /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation/L1/. I've attached a couple here of each as examples.
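For anyone who wants to reproduce the basic search outside the notebook, here is a minimal sketch of the range-drop hunt. It assumes gwpy is available; the channel names are illustrative guesses, not necessarily what the notebook uses.

    from gwpy.timeseries import TimeSeries

    # Illustrative channel names -- the notebook may use different ones.
    RANGE_CHAN = '{ifo}:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC'
    ETMX_CHAN = '{ifo}:SUS-ETMX_L3_MASTER_OUT_UR_DQ'

    def find_range_drops(ifo, start, end, threshold=100):
        """Return GPS times where the range first dips below threshold (Mpc)."""
        rng = TimeSeries.get(RANGE_CHAN.format(ifo=ifo), start, end)
        below = rng.value < threshold
        # Rising edges of the below-threshold mask mark the start of each drop.
        return rng.times.value[1:][below[1:] & ~below[:-1]]

    def fetch_etmx(ifo, t0, pad=5):
        """Pull a short stretch of an ETMX L3 channel around a drop for plotting."""
        return TimeSeries.get(ETMX_CHAN.format(ifo=ifo), t0 - pad, t0 + pad)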

 

I wanted to make the plots from the two ifos as similar as possible to help judge glitch size and the channels it appears in. Both ifos have matching ylims for L1, L2, and L3, and although I couldn't use the same ylims for the DCPDs, I scaled them so the delta between ymin and ymax is 0.3 mA for both ifos. Unfortunately, I was not able to do any scaling for DARM or CALIB STRAIN due to how much they vary between locks as well as between ifos.
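The DCPD scaling amounts to fixing the y-span while letting the center float per panel; a minimal matplotlib-style sketch of that idea (the function name is mine, not from the notebook):

    import numpy as np

    def match_dcpd_ylim(ax, data, span_mA=0.3):
        """Center the y-axis on the trace median and fix the total span so
        DCPD panels from both ifos are directly comparable even though their
        absolute levels differ."""
        mid = np.median(data)
        ax.set_ylim(mid - span_mA / 2, mid + span_mA / 2)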

LHO example1, LHO example2

LLO example1, LLO example2

Both LHO and LLO seem to have ETMX glitches that appear both alone and in groups. As you can see, LLO generally has much noisier ETMX L3, DCPD, and DARM channels. This hides the true morphology of the glitches in ETMX L3, and may be preventing us from seeing the glitch appear in the DCPDs and DARM as often as it does in LHO's DCPD and DARM channels. In LLO's examples, you can see very small glitches in the DCPDs and DARM at the same times, but proportional to the entire trace, they aren't affecting those channels as much as they do at LHO. Feel free to look through the rest of the glitch examples in the directories to get a better idea of the range of ways these glitches can present and affect the different parts of the ifo.

Through working with this script I've also been able to find good thresholds for searching for these glitches at LLO, whose DARM and ETMX L3 channels are much noisier than ours, so it would be very easy to implement an ETMX glitch lockloss search/tag for them.

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 14:48, Tuesday 29 July 2025 - last comment - 15:14, Tuesday 29 July 2025(86077)
21:42 UTC lockloss

21:42 UTC lockloss. We were starting to shake from a M5.6 earthquake from El Salvador, but it looks to be another ASC_Y / TMSX_Y oscillation lockloss.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 15:14, Tuesday 29 July 2025 (86078)

The TMSX_Y oscillation is also being seen in ALS_X.

Images attached to this comment
H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 13:58, Tuesday 29 July 2025 (86067)
OPO Crystal Moved to Leftmost Spot

Sheila, Camilla WP# 12716

We followed the setup in 80451; additionally, I plugged in BNCs inside SQZT7 to directly connect the tee'd-off OPO_IR_PD and OPO_TRANS PDs to the table feed-through, as OPO REFL "test in" and "test out" (see photo). This let us do all the work in laser safe, and the BNCs have been left plugged in.

We started in the second leftmost spot from the right (855898548885297), which we had been using since late June, but the spot degraded quickly: the green power required to achieve 80uW from the OPO was now ~30mW, which is close to the max available.

We first checked where we were by going to the left edge of the crystal and then returned to the leftmost spot.

Once there, we measured the NLG, moved the crystal in steps of 100 counts, re-optimized the OPO temperature, and measured the NLG again at each step, similar to 85589. Once we got an NLG above 15 with 80uW of OPO TRANS, we stopped. The green power required to achieve 80uW was now halved, from 30mW to 15mW, much better. We started with the OPO temp set to 31.2deg and ended at 32.3deg, which is closer to where we were operating 3 years ago (photo).

Data below along with photos.  
Crystal Move                       OPO Setpoint  Temp (degC)  Amplified Max  UnAmp     Dark     NLG    Notes
Starting                                         31.202                                                starting spot photo
45x50 to left, 4x10 to left        45uW                       0.0266166      0.007152  -3.7e-5  3.7
41x50 to left
29x50 to right, 12 x 10m to right  80uW          31.1435      0.058468                          8.13   leftmost spot photo
10 to left                                       same                                                  Unsure we were moving here; later plugged OPO PZT scanning back in to watch.
30 to left                                       same
50 to left                                       same
100 to left                                      31.2256      0.0606349                         8.4    photo
100 to left                                      31.306       0.06549                           9.1
100 to left                                      31.404       0.070145                          9.75
100 to left                                                   0.074379                          10.3
100 to left                                      31.6236      0.0788554                         11.0
100 to left                                      31.730       0.0802738                         11.2
100 to left                                      31.8546      0.0947171                         13.2
100 to left                                      31.9676      0.10171                           14.1
100 to left                                      32.352       0.112529       0.007088  -2.2e-5  15.8   Leaving here
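For reference, the NLG numbers here are consistent with the dark-corrected ratio of the amplified and unamplified levels. Using the final row as a worked example:

    # NLG = (Amplified Max - Dark) / (UnAmp - Dark)
    amp, unamp, dark = 0.112529, 0.007088, -2.2e-5
    nlg = (amp - dark) / (unamp - dark)   # -> 15.83, matching the logged 15.8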
 
Note to Operators: This crystal spot move will mean that the OPO temperature will need to be optimized more often for the next ~week; we can now do that using Sheila's new guardian (85532). Do this while we're relocking, or drop out of Observing. Take the SQZ_OPO_LR guardian to SCAN_OPOTEMP (it will take 3 minutes; you can watch via !SQZ Scopes > OPO temp). Once it's done, you can take SQZ_OPO_LR back to LOCKED_CLF_DUAL and go back to Observing once SQZ is back up. The OPO temp plot will be made and put at https://lhocds.ligo-wa.caltech.edu/exports/SQZ/GRD/OPOTEMP_SCAN/
Images attached to this report
H1 SUS
oli.patane@LIGO.ORG - posted 13:53, Tuesday 29 July 2025 - last comment - 10:08, Thursday 21 August 2025(86075)
SR3 SUSPOINT and OLTFs taken for estimator fit

As the next step in fine-tuning the SR3 Y estimator, we needed to retake the SUSPOINT to M1 measurements as well as OLTFs so they could be used to calculate the filter modules for the suspoint and drive estimators. I took those measurements today.

General setup:
- in HEALTH_CHECK (but damping back on)
    - damping for Y changed from -0.5 to -0.1
- OSEMINF gains and DAMP FM7 turned on (and left on afterwards 86070)

SUSPOINT to M1:
Data for those measurements can be found in /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/Common/Data/2025-07-29_1730_H1ISIHAM5_ST1_WhiteNoise_SR3SusPoint_{L,T,V,R,P,Y}_0p02to50Hz.xml  r12492

M1 to M1 (OLTFs):
After this, the next step was to take regular transfer functions with the above setup (Y damping at -0.1).
That data is in /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-07-29_1830_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz_OpenLoopGainTF.xml  r12493
Reminder that these open loop transfer functions were taken with the damping Y gain of -0.1, so they should not be taken as 'nominal' OLTFs.

Comments related to this report
oli.patane@LIGO.ORG - 10:08, Thursday 21 August 2025 (86495)

The M1 to M1 TFs were supposed to be regular TFs, so here is the alog for those: 86202

H1 SEI (DetChar)
thomas.shaffer@LIGO.ORG - posted 13:52, Tuesday 29 July 2025 (86074)
BRSY Corrupted Frame Caused High Velocity for a Week, Catch Implemented

Shivaraj Kandhasam posted in MatterMost that the EY BRS signal looked off, and had been since last Tuesday (July 22). I talked to Jim about this, and indeed the velocity of the BRS was consistently very high. Jim said that it looked like it needed to grab new frames for the C code calculations. Immediately after Jim reset this, by logging into the BRS computer and clicking the "Grab New Frames" button on the GUI, the velocity slowed down and was quickly brought back to a normal range.

I trended this back and it's happened a few times, most recently from May 21-June 5. To catch this the next time it happens, I've done two things:

  1. Lowered the threshold that the BRS{X,Y}_STAT nodes use before moving into their DAMPER_ON_HIGH_VEL state. When these nodes go to this state, the ST1 ETM sensor correction nodes remove the BRS-corrected signal from sensor correction, since it is assumed that the sensor is not in a reliable state. The velocity during the corrupted-frame times tends to be +/- 3600 or so, so I changed the threshold from 10000 to 6800. We rarely get above this value, and when we do it's from maintenance activities, very large earthquakes, or wind so extreme that locking isn't happening anyway. This is commonly shared code, so I had to check that this value would work for LLO as well; I've only been able to go back a week or two, though, before running into data issues. I'll double-check this with LLO, then coordinate with them to update the value if desired.
  2. Added a test in DIAG_MAIN to watch the BRS{X,Y}_STAT nodes: if one of them is in the DAMPER_ON_HIGH_VEL state for more than 6 hours straight, it will notify. This should be enough time to let large earthquakes ring down, and it allows a dip in extreme winds to reset the timer. This test is not common code, but LLO can copy it if they so choose; a sketch of the timer logic is below.
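A sketch of the 6-hour persistence logic in the new test follows. The `nodes` lookup and the function name are placeholders for DIAG_MAIN's actual conventions; treat this as an illustration, not the installed code.

    import time

    # When each node was first seen in the high-velocity state.
    _high_vel_since = {}

    def BRS_HIGH_VEL_CHECK():
        """Notify if a BRS STAT node has sat in DAMPER_ON_HIGH_VEL for more
        than 6 hours, long enough to outlast earthquakes and windy stretches."""
        for end in ('X', 'Y'):
            node = 'BRS%s_STAT' % end
            if nodes[node] == 'DAMPER_ON_HIGH_VEL':  # placeholder state lookup
                first = _high_vel_since.setdefault(node, time.time())
                if time.time() - first > 6 * 3600:
                    yield '%s in DAMPER_ON_HIGH_VEL > 6 hrs, BRS may need new frames' % node
            else:
                _high_vel_since.pop(node, None)      # any recovery resets the timer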
H1 SUS
oli.patane@LIGO.ORG - posted 13:32, Tuesday 29 July 2025 (86071)
TMSX satamp compensation updated to best possible

We previously swapped the satamp boxes for TMSX M1 F1/F2/F3/LF (85980), and at the time I had just put in the generic 5.31:0.0969 zp compensation filters, since the 'best possible' filters for that satamp were listed under the OMC's optic name in the txt files. Now that we've decided we're going to keep this satamp box in, we have fixed the naming in the txt files, and I was able to update the compensation filters to the 'best possible' for each satamp channel (output). These were loaded in.
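For reference, a scipy sketch of what that generic zero/pole pair looks like (this is just a visualization of the named filter, not the foton implementation, and the overall gain normalization is an assumption):

    import numpy as np
    from scipy import signal

    # Generic satamp compensation: zero at 5.31 Hz, pole at 0.0969 Hz.
    z = [-2 * np.pi * 5.31]
    p = [-2 * np.pi * 0.0969]
    k = 1.0  # assumed high-frequency normalization
    f = np.logspace(-3, 2, 500)                       # Hz
    w, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)
    mag_db = 20 * np.log10(np.abs(h))                 # rises below the 5.31 Hz zero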

Images attached to this report
H1 SUS
oli.patane@LIGO.ORG - posted 13:20, Tuesday 29 July 2025 - last comment - 14:05, Tuesday 29 July 2025(86070)
Apparent alignment shift in SR3 due to OSEM calibration update

As part of the estimator work, we measured and calculated OSEMINF gains (85907) and compensating gain filter modules for the DAMP bank (86026). We believe these new values to be correct, so we have gone ahead and permanently updated the OSEMINF gains as well as turned on the compensating gains in FM7 in the DAMP filter bank for SR3.
This update means that there will be a difference in the apparent location of the DAMP INs, but this is just because of the change in the OSEMINF gains. These gains were changed (along with the compensating gains filter modules turned on) at 2025-07-29 16:58 UTC. They were put into SDF safe and will be saved in OBSERVE once we get there.

OSEM   Old OSEMINF gain   New OSEMINF gain
T1     1.478              3.213
T2     0.942              1.517
T3     0.952              1.494
LF     1.302              1.733
RT     1.087              1.494
SD     1.290              1.793

The compensation gains put into the DAMP filter bank are in FM7, and they are the following:

L: 0.740
T: 0.732
V: 0.548
R: 0.550
P: 0.628
Y: 0.757

 

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 14:05, Tuesday 29 July 2025 (86076)

Accepted these changes in SDF.

Images attached to this comment
H1 General
mitchell.robinson@LIGO.ORG - posted 13:12, Tuesday 29 July 2025 (86069)
Work platform section installed on the -X side of BSC2

Randy, Jim, and Mitchell

One section of the work platform was installed on the -X side of BSC2 this morning.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 12:49, Tuesday 29 July 2025 (86068)
LVEA Sweep

LVEA has been swept.

H1 CDS
patrick.thomas@LIGO.ORG - posted 11:58, Tuesday 29 July 2025 (86066)
Added IPFCC6, IPFCC8, and PTCC7 to h0vacly
Closes WP 12719.

Patrick T., Gerardo M., Dave B.

"Update the h0vacly Beckhoff TwinCAT 3 system manager and PLC code to add the IPFCC6 and IPFCC8 filter cavity tube ion pumps and the PTCC7 filter cavity tube gauge. Will require a restart of the PLC and IOC. Will require a DAQ restart."

This has been completed. The code on h0vacly is now at commit 6a6e7bc55cb87bc9188312eee1c9a8129bdf7946. No issues. Dave did a burt restore. A DAQ restart is pending.
H1 SUS (CDS, Lockloss, SUS, SYS)
richard.mccarthy@LIGO.ORG - posted 11:01, Tuesday 29 July 2025 (86064)
EX TMS Coil Driver Replacement

Trying to narrow down why TMSX is involved in locklosses, we have replaced the TMS coil driver that drives F1, F2, F3, and LF. Chassis S1102670 was replaced with S1102666. The operator returned the system to damping. This is a wait-and-see test.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:32, Tuesday 29 July 2025 (86062)
Tue CP1 Fill

Tue Jul 29 10:08:48 2025 INFO: Fill completed in 8min 44secs

 

Images attached to this report
H1 SUS
ryan.crouch@LIGO.ORG - posted 10:23, Tuesday 29 July 2025 - last comment - 11:41, Tuesday 29 July 2025(86061)
TMSX coil driver chassis swap OSEM check

After Richard swapped the TMSX coil driver chassis, I took a look at the raw OSEM counts for TMSX (F1, F2, F3, LF), comparing to an earlier time when TMSX and ISC_LOCK were both in the same states (ALIGNED, DOWN). I see that LF's counts are ~2500 counts lower post-swap. I'm not sure if there are any other checks to run.
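For the record, a minimal sketch of that kind of before/after comparison, assuming gwpy; the raw-count channel pattern here is a guess and may differ from what I actually looked at:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:SUS-TMSX_M1_OSEMINF_{osem}_INMON'  # illustrative channel pattern

    def mean_counts(t0, duration=60):
        """Average raw OSEM counts over a quiet stretch for F1/F2/F3/LF."""
        return {o: np.mean(TimeSeries.get(CHAN.format(osem=o), t0, t0 + duration).value)
                for o in ('F1', 'F2', 'F3', 'LF')}

    # before, after = mean_counts(pre_swap_gps), mean_counts(post_swap_gps)
    # deltas = {o: after[o] - before[o] for o in before}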

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 11:41, Tuesday 29 July 2025 (86065)

I took damped and undamped spectra of the OSEMs.

Images attached to this comment
H1 CDS
erik.vonreis@LIGO.ORG - posted 09:49, Tuesday 29 July 2025 - last comment - 16:44, Tuesday 29 July 2025(86059)
Dolphin IX switch replaced at EX

The Dolphin IX switch at EX had a broken management interface, which prevented fencing front ends from the Dolphin network for safe reboots.

I installed a new switch. As part of the process, h1cdsrfm was turned off to avoid crashing it and, in turn, crashing the CS and EY Dolphin networks.

When I turned h1cdsrfm back on, the cdsrfm port on the switch was already enabled, which led to timing glitches on h1susex and h1iscex.  This could have been prevented by re-fencing h1cdsrfm after the new switch was turned on.

I restarted the models on h1susex and h1iscex.  A few minutes later, h1susex crashed in the dolphin driver and had to be rebooted.  The crash was reminiscent of several susex (plus all of ey) crashes related to restarts of dis_networkmgr on the bootserver.  See 67389 for one example.

Comments related to this report
erik.vonreis@LIGO.ORG - 16:44, Tuesday 29 July 2025 (86089)

Switch SN ISX600-HN-000224 was replaced with ISX600-HN-000258.

H1 General
ryan.crouch@LIGO.ORG - posted 07:34, Tuesday 29 July 2025 - last comment - 10:34, Tuesday 29 July 2025(86050)
OPS Tuesday day shift start

TITLE: 07/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 0mph Gusts, 0mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 08:09, Tuesday 29 July 2025 (86051)Lockloss, SUS

When SUS_CHARGE was finishing up and in SWAPPING_BACK_TO_ETMX, ETMX L3 started to constantly saturate and there was a big line in DARM at ~150 Hz. This behavior persisted even after the GRD was done and went back to waiting (tagging SUS).

 

At 15:02 UTC I induced a lockloss by flipping the polarity of H1:IMC-REFL_SERVO_IN1POL, to break the lock and stop the EX saturations.

elenna.capote@LIGO.ORG - 10:12, Tuesday 29 July 2025 (86060)

I took a look at why there were saturations, and I think it's because of our changes yesterday. When the measurement finished, the ETMX bias was reverted to the new value that Sheila and I used yesterday, but the L3 drivealign gain was set to the old value of 198 instead of the new value of 88. I looked at the script and don't understand exactly how that happened, since it appears to pull the values right from lscparams, but the ndscope I attached shows this is what happened.

Images attached to this comment
ryan.crouch@LIGO.ORG - 10:34, Tuesday 29 July 2025 (86063)

I think this was because the SUS_CHARGE GRD was not reloaded after lscparams was changed; I've reloaded the GRD.

H1 SUS
oli.patane@LIGO.ORG - posted 17:26, Thursday 24 July 2025 - last comment - 13:32, Tuesday 29 July 2025(85980)
TMSX F1/F2/F3/LF satamp box swapped

Fil, Elenna, Oli

In an effort to find and solve the ASC excursion locklosses that seem to be linked to TMSX (85973), Fil swapped out the satellite amplifier he had installed last Tuesday (85770), serial number S1100150, with another modified satamp he had on hand, S1100122 (originally meant for OMC T1 T2 T3 LF). We are hoping this fixes the problem. Since this is partially for testing, and since we originally planned to use this satellite amplifier for the OMC, for now I've replaced the OSEMINF compensation filters for TMSX (which had the specific tuned filters) with the generic 5.31:0.0969 zp filters. We can update these later when we figure out which satamp will be staying there. I've loaded these filters in and brought TMSX back.

 

taken out: S1100150

put in: S1100122 (originally meant for OMC T1 T2 T3 LF)

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 13:32, Tuesday 29 July 2025 (86072)

Filters updated to best possible for this new satamp: 86071

H1 SUS
oli.patane@LIGO.ORG - posted 12:54, Tuesday 22 July 2025 - last comment - 13:39, Tuesday 29 July 2025(85918)
Measuring SR3 OLG TFs to get DAMP filter compensation gains

Ivey used the ISO calibration measurements that I took earlier (85906) to calculate what the OSEMINF gains should be on SR3 (85907), and this script also calculates what it thinks the compensation gain in the DAMP filter bank should be.
The next step is to use OLG TFs to measure what values we would use in the DAMP filter bank to compensate for the change in OSEMINF gains, and we can compare them to the calculated values to see how close they are.

I took two sets of OLG measurements for SR3:
- a set with the nominal OSEMINF gains
    T1: 1.478
    T2: 0.942
    T3: 0.952
    LF: 1.302
    RT: 1.087
    SD: 1.290
- a set with the OSEMINF gains changed to the values in 85907
    T1: 3.213
    T2: 1.517
    T3: 1.494
    LF: 1.733
    RT: 1.494
    SD: 1.793

Measurement settings:
- SR3 in HEALTH_CHECK but with damping loops on
- SR3 damping nominal (all -0.5)
- HAM5 in ISOLATED

Nominal gain set:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-07-22_1700_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz_OpenLoopGainTF.xml r12478

New gain set:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-07-22_1800_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz_OpenLoopGainTF.xml r12478

Once I had taken these measurements, I exported txt files for each dof's OLG and used one of my scripts, /ligo/svncommon/SusSVN/sus/trunk/HLTS/Common/MatlabTools/divide_traces_tfs.m, to plot the OLG for each dof, compare the traces between the two OSEMINF gain settings, and then divide the traces and take an average of the ratio, which becomes the compensation gain put in as a filter in the DAMP filter bank (plots; a sketch of this divide-and-average step follows the comparison table below). The values I got for the compensation gains are below:
    L: 0.740
    T: 0.732
    V: 0.548
    R: 0.550
    P: 0.628
    Y: 0.757

DOF   OLTF-measured DAMP compensation gain   ISO calibration calculated compensation gain (85907)   Percent difference (%)
L     0.740                                  0.740                                                  0.0
T     0.732                                  0.719                                                  1.8
V     0.548                                  0.545                                                  0.5
R     0.550                                  0.545                                                  0.9
P     0.628                                  0.629                                                  0.2
Y     0.757                                  0.740                                                  2.3
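A numpy sketch of the divide-and-average step mentioned above (the averaging band is my choice here; the MATLAB script may weight or window differently):

    import numpy as np

    def compensation_gain(f, olg_nominal, olg_newgains, band=(0.1, 10.0)):
        """Average the OLG magnitude ratio over a band where both measurements
        are clean; this single number is the DAMP gain that undoes the loop
        gain change from the new OSEMINF gains."""
        ratio = np.abs(olg_nominal) / np.abs(olg_newgains)
        mask = (f >= band[0]) & (f <= band[1])
        return ratio[mask].mean()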

These are pretty similar to what my script found last time, before the satamp swap (85288), and they are also very similar to the values that Ivey's script calculated.
Maybe that accuracy means that in the future we don't need to run the double sets of OLG transfer functions and can just use the values that Ivey's script gives.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 13:00, Monday 28 July 2025 (86026)

The compensation gains have been loaded into the SR3 DAMP filter bank in FM7, and have also been updated in the estimator damp banks for P and Y. They have been loaded in but, of course, are currently left off for nominal operations since the OSEMINF gains haven't been updated yet.

Images attached to this comment
oli.patane@LIGO.ORG - 13:39, Tuesday 29 July 2025 (86073)

The OSEMINF gains and these new DAMP compensating gains have been turned on together: 86070
