H1 General
thomas.shaffer@LIGO.ORG - posted 01:25, Wednesday 30 July 2025 - last comment - 17:03, Wednesday 30 July 2025(86095)
Ops Owl Update

Looks like the first attempt at locking, after the ground motion had settled enough, failed at FIND_IR. I was notified, but we should change this to start an initial alignment first.

Comments related to this report
thomas.shaffer@LIGO.ORG - 04:27, Wednesday 30 July 2025 (86096)

These aftershocks just keep coming!

Had a decent attempt at DRMI and PRMI before this latest one rolled through, but it just wouldn't catch. After another initial alignment, where I had to touch SRM again to get SRY to lock, we're ready for another lock attempt. The ground is still moving a bit though, so I'll try to be patient.

thomas.shaffer@LIGO.ORG - 06:32, Wednesday 30 July 2025 (86097)

DRMI has locked a few times, but we can't seem to make it much farther now. While the ground motion seems to have finally gone away, the FSS isn't locking. Looks like the TPD voltage is lower than where we want it, but maybe that's just because it's unlocked?

After futzing with it for 40 minutes, it eventually locked once I gave the autolocker a break for ~10 minutes. I had actually forgotten to turn it back on while I was digging through alogs, but that did the trick!

Another seismon warning of an incoming aftershock.

DRMI just locked again! Looks like it's progressing well now, so I'll set H1_MANAGER back up.

Images attached to this comment
jason.oberling@LIGO.ORG - 17:03, Wednesday 30 July 2025 (86111)OpsInfo, PSL

Re: FSS issues, Ryan S. and I have been having a chat about this.  From what we can tell the FSS was oscillating pretty wildly during this period, see the attached trends that Ryan put together of different FSS signals during this time.  We do not know what caused this oscillation.  The FSS autolocker is bouncing between states 2 and 3 (left center plot), an indication of the loop oscillating, which is also seen in the PC_MON (upper center plot) and FAST_MON (center center plot) signals.  The autolocker was doing its own gain changing during this time as well (upper right plot), but this isn't enough to clear a really bad loop oscillation.  The usual cure for this is to manually lower the FSS Fast and Common gains to the lowest slider value of -10 and wait for the oscillation to clear (it usually clears pretty quickly), then slowly raise them back to their normal values (usually in 1dB increments); there is an FSS guardian node that does this, but we have specifically stopped it from doing this during RefCav lock acquisition as it has a tendency to delay RefCav locking (the autolocker and the guardian node aren't very friendly with each other).

In the future, should this happen again try manually lowering the FSS gain sliders (found on the FSS MEDM screen under the PSL tab on the Sitemap) to their minimum and wait a little bit.  If this doesn't clear the oscillation then contact either myself or Ryan S. for further assistance.
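For reference, here is a minimal sketch of that gain-walk in Python/pyepics, a rough rendering of the procedure described above rather than the FSS guardian code. The gain channel names are assumptions; check the FSS MEDM screen for the actual slider records, and when in doubt just use the sliders themselves.

import time
import epics

FAST_GAIN   = "H1:PSL-FSS_FASTGAIN"     # assumed channel name
COMMON_GAIN = "H1:PSL-FSS_COMMON_GAIN"  # assumed channel name

def clear_fss_oscillation(settle_s=30, step_db=1.0, pause_s=5):
    """Drop both FSS gains to the -10 minimum, wait for the oscillation to
    clear, then walk them back up to their previous values in small steps."""
    nominal = {ch: epics.caget(ch) for ch in (FAST_GAIN, COMMON_GAIN)}
    for ch in nominal:
        epics.caput(ch, -10.0)      # lowest slider value
    time.sleep(settle_s)            # the oscillation usually clears quickly
    for ch, target in nominal.items():
        gain = -10.0
        while gain < target:
            gain = min(gain + step_db, target)
            epics.caput(ch, gain)   # raise in ~1 dB increments
            time.sleep(pause_s)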

Images attached to this comment
Working Groups DetChar (DetChar)
shivaraj.kandhasamy@LIGO.ORG - posted 23:00, Tuesday 29 July 2025 (86094)
DQ Shift report for the week from 2025-07-21 to 2025-07-27

Below is the summary of the DQ shift for the week from 2025-07-21 to 2025-07-27

The full DQ shift report with day-by-day details is available at https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20250721

LHO General
ryan.short@LIGO.ORG - posted 22:07, Tuesday 29 July 2025 (86093)
Ops Eve Shift Summary

TITLE: 07/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: TJ
SHIFT SUMMARY: No observing time this shift due to the M8.8 (revised) earthquake off the Russian coast this afternoon and about a dozen M5.0+ aftershocks since. I waited until around 04:50 UTC to start untripping suspensions and seismic platforms to give the Earth time to calm down so they wouldn't just immediately trip again; everything came back up without issue, although IFO alignment looks poor. Since the ground is still shaking and the seismic environment is still in earthquake mode, H1 won't be locking for a while. However, I did set up the auto owl shift so that H1_MANAGER will start the process (and likely also need to run an alignment) once we're out of earthquake mode, even if that won't be for another hour or two.

H1 General (ISC)
anthony.sanchez@LIGO.ORG - posted 17:18, Tuesday 29 July 2025 (86092)
Daily DRMI Locking attempts since the Vent

I have attached a PDF that contains 54 plots with minute-trend data of the daily DRMI locking process since the vent.
The DRMI locking process is defined here as channel H1:GRD-ISC_LOCK_STATE_N being between states 18 and 101.
All times are in UTC.

These plots were made by sc_iterator.py, which relies on statecounter 3.0.
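For anyone without access to sc_iterator.py or statecounter, here is a rough sketch of the underlying state-counting idea (not the actual code). It assumes NDS2 access through gwpy and simply collects the stretches where the guardian state sits in the 18-101 range:

from gwpy.timeseries import TimeSeries

def drmi_locking_segments(start, end, lo=18, hi=101):
    """Return (t0, t1) GPS pairs where H1:GRD-ISC_LOCK_STATE_N was in [lo, hi]."""
    state = TimeSeries.get("H1:GRD-ISC_LOCK_STATE_N", start, end)
    in_range = (state.value >= lo) & (state.value <= hi)
    times = state.times.value
    segments, t0 = [], None
    for t, flag in zip(times, in_range):
        if flag and t0 is None:
            t0 = t
        elif not flag and t0 is not None:
            segments.append((t0, t))
            t0 = None
    if t0 is not None:
        segments.append((t0, times[-1]))
    return segments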

 

 

Non-image files attached to this report
H1 CAL (CDS)
francisco.llamas@LIGO.ORG - posted 17:11, Tuesday 29 July 2025 (86091)
EX Pcal cabling...

FranciscoL, TonyS

Summary: We connected a BNC from the "Out Mon" of D1300599 to "In 2" of D1400423. The channel H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUT16 now plots what the AOM is doing. Problem solved.


Earlier today I went to track down where in the chain of connections the InLoopOut signal was "lost" (I made that claim in 85753). All of the InLoopOut signals were working all the way out to the TCS rack. However, after a closer look at all the schematics for the Pcal and TCS boards, I noticed that I had been tracking the wrong channel. In other words, the InLoopOut channel was not the problematic channel -- that would otherwise be a super urgent matter, as the InLoopOut is the signal coming straight from the OFS PD -- but rather the AOM Mon channel (for the interested reader, look at D1300226V13 --> page 3 --> X1-TCS-C1-4 --> second (top to bottom) 9-pin D-Sub male output, to find the channel). By the time I realized my mistake, the maintenance period was long over, so I left it as a pending task for our next visit to EX.

The world gifted me with very high seismic activity, and Tony and I finally fixed the problem by connecting the missing BNC, per the D1300226 schematic and as stated in the summary above. Attached are screenshots of the ndscope from before (earlier today, as I was looking for the problem) and after plugging the BNC into the chassis, in that order. The change is seen in the green trace on the second (top to bottom) plot.

Tagging CDS so we are all aware that the problem has been solved and this issue is now concluded.

Images attached to this report
H1 General
ryan.short@LIGO.ORG - posted 16:50, Tuesday 29 July 2025 (86090)
Very Large Earthquake; H1 Down for the Time Being

A large M8.7 earthquake off the coast of Petropavlovsk-Kamchatsky, Russia (USGS link) tripped most seismic systems in the corner and many suspensions. This one is off several charts. H1 will be down for a while.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 16:30, Tuesday 29 July 2025 (86052)
OPS Tuesday day shift summary

TITLE: 07/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: A 6.6 from Tonga held up locking a little bit, then we held in DOWN again at the end of the shift to swap a DAC card for TMSX.
LOG:

Start Time System Name Location Laser_Haz Task Time End
21:06 SAF LVEA IS LASER HAZARD LVEA Y LVEA IS LASER HAZARD 15:28
14:42 FAC CBEX Survey crew MidY N Surveying 18:32
15:02 VAC Gerardo, Jordan FCES N Cabling 18:25
15:02 FAC Kim EndX N Tech clean 16:11
15:03 FAC Nellie EndY N Tech clean 15:43
15:07 CDS Erik MSR -> EX N EndX dolphin work, EX to SAFE 16:25
15:00 EE Ken LVEA Y -> N EE work at HAM1, in and out. 18:15
15:14 OPS TJ LVEA Y -> N HAZARD TRANSITION to SAFE 15:28
15:14 SQZ Camilla LVEA Y -> N Plug in a cable at SQZT7, table work 18:15
15:25 FAC Chris EndY N FAMIS filter checks 17:11
15:28 SQZ Sheila LVEA N Join Camilla at SQZT7 18:53
15:36 FAC Mitch, Randy LVEA N Craning, Nbay to HAM3. Mitch out 17:42 18:18
15:38 FAC Richard LVEA Y -> N Walkthrough in with Ken 15:38
15:41 FAC Eric EndY N Air handler checks/tests 17:15
15:48 VAC/CDS Patrick His office N VAC gauge FCES work 17:10
15:53 FAC Tyler MidY N Join survey crew 18:31
15:55 ISC Betsy LVEA N Grab some parts 16:15
12:30 FAC Tyler Fire water tower N Moving Big Red 13:00
15:59 SEI Jim LVEA N Find Mitch & Randy 17:45
16:02 CDS Jonathan His office N CDS monitor restart 16:35
16:04 VAC Tony FCES N Join Gerardo 18:09
16:09 CAL Francisco PCAL lab LOCAL Grab equipment 16:30
16:18 SUS/EE Richard EndX N Coil driver chassis swap for TMSX 16:58
16:30 CAL Francisco EndX N Measurements 18:04
16:31 FAC Nellie, Kim LVEA N Tech clean 17:23
16:41 ISC Keita LVEA N Inventory 17:08
17:00 SUS Oli CR N SR3, PR3 measurements 19:00
17:02 SAF Richard LVEA N Safety checks 17:30
17:12 FAC Chris VAC prep lab N Painting and taping 19:02
17:15 FAC Eric EndX N Airhandler checks 17:47
17:44 SEI Jim EndY N Clean off HEPI pump stand 19:02
17:48 FAC Kim, Nellie FCES N Tech clean 18:11
18:16 VAC Janos, Anna LVEA N VAC checks 18:44
18:16 EPO Camilla+ LVEA N Tour 18:53
18:18 EPO Mike, Amber + KNDU-TV LVEA N Filming, Interview 19:22
18:36 SQZ Sheila LVEA N SQZT7 table work 19:29
18:48 ISC Betsy LVEA N Take a picture of the new platform 18:54
18:53 SQZ Camilla LVEA N Join Sheila 19:29
19:08 EE McCarthy LVEA HAM1 N Checking on Ken's lighting 19:39
19:37 OPS Tony LVEA N Sweep 19:48
21:13 ISC Betsy Optics lab N Parts 21:13
22:56 CDS Dave, Erik EndX N Swap TMSX DAC card 23:10
23:01 CAL Tony, Francisco EndX N Plug in BNC, check PCAL 23:25

The work completed this morning includes but is not limited to:

 

H1 CDS
david.barker@LIGO.ORG - posted 16:09, Tuesday 29 July 2025 - last comment - 16:43, Tuesday 29 July 2025(86086)
h1susex powered down for 3rd 18bit-DAC replacement

Elenna, Jeff, Richard, Jonathan, Oli, Ryan C, Dave:

We are replacing the 3rd 18bit-DAC in h1susex (cardnum=2). This DAC is exclusively used by h1sustmsx and could be causing lock-losses.

The drawing can be found in DCC D1301004 and is shown in the attachment, with the DAC to be replaced circled.

h1susex is powered down, SWWD has been bypassed.

 

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 16:14, Tuesday 29 July 2025 (86087)

We briefly entertained the idea of upgrading this 18bit-DAC to a 20bit-DAC, but because it is interleaved between the existing 20bit-DACs, the card numbers would change for h1susetmx and h1susetmxpi, requiring those models to be changed too. So for expediency we are replacing 18bit with 18bit, and no model/DAQ changes are needed.

erik.vonreis@LIGO.ORG - 16:43, Tuesday 29 July 2025 (86088)

Replaced the third 18-bit DAC (18AO8 sn  101208-29) with another (18AO8 sn 110425-48).  h1susex is back up and running.

LHO General
ryan.short@LIGO.ORG - posted 16:00, Tuesday 29 July 2025 (86085)
Ops Eve Shift Start

TITLE: 07/29 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 3mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY: H1 is holding in 'IDLE' as team CDS works on replacing a DAC at EX that may be causing the TMS issues; see alog86079. Otherwise it sounds like it's been a busy day of maintenance and earthquakes.

H1 SUS (DAQ, SEI)
brian.lantz@LIGO.ORG - posted 15:42, Tuesday 29 July 2025 (86081)
MEDM update for SUS estimator parts

I have updated the MEDM screens for the OSEM Estimator and committed them to Userapps. This captures the name changes made to the models for SR3 and PR3 at LHO. These name changes just make the names more sensible, and do not change any functionality. LLO can pull the updates and it will make no difference because nothing at LLO uses these screens.

userapps/trunk/sus/common/medm$ svn ci -m"Mods to estimator screens to match name update in the estimator models, BTL"
Sending        estim/CONTROL_6.adl
Sending        estim/ESTIMATOR_OVERVIEW.adl
Sending        estim/FADE_CONTROL.adl
Sending        hxts/SUS_CUST_HLTS_OVERVIEW_W_EST.adl
Transmitting file data ....done
Committing transaction...
Committed revision 32544.

Note - the indicator lights for the filters on the Estimator Overview screen are not working correctly, and I plan to fix this in the next few days. The indicators on the CONTROL_6 screen have been fixed.

(They should be grey = output off; green = output on, settings as expected; red = output on, settings not correct.)

There are many settings to capture in the SDF file. Most of these are the filter settings; those will come up in the OFF setting, and that is good and fine to capture.

A few things do need to be updated (see the sketch after this list). These include:

In the Estimator Overview screen

set the ramp time to what it was before - 5 seconds is fine

set the Initial channel to 1 (this is OFF)

set the next channel to 1 (this is also off, and it should match the initial channel. This channel will change during operation, so maybe you don't want to monitor it?)

click the OSEM select and then set the correct channel to 1 (the 'hint' at the bottom says which one it is: for the yaw estimator pick yaw, and for the pitch estimator choose pitch)
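Below is a hedged pyepics sketch of making the above changes from the command line. Every channel name is a hypothetical placeholder -- read the real record names off the Estimator Overview MEDM screen (or simply use the screen itself) before doing anything like this, and let SDF capture the result.

import epics

PREFIX = "H1:SUS-SR3_M1_EST_Y"             # hypothetical prefix; one per suspension and DOF

epics.caput(PREFIX + "_FADE_RAMPTIME", 5)  # ramp time back to 5 seconds
epics.caput(PREFIX + "_FADE_INITIAL", 1)   # initial channel = 1 (OFF)
epics.caput(PREFIX + "_FADE_NEXT", 1)      # next channel = 1 (OFF, matching the initial channel)
epics.caput(PREFIX + "_OSEM_SELECT", 1)    # set the correct OSEM-select channel per the on-screen hint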

 

 

H1 AOS
elenna.capote@LIGO.ORG - posted 15:32, Tuesday 29 July 2025 - last comment - 15:58, Tuesday 29 July 2025(86079)
....I think it's the DAC: The latest in the TMSX Saga

Jeff Kissel, Elenna Capote

Unfortunately, we just had another TMSX lockloss, after having swapped the TMSX F1/F2/F3/LF top-mass coil driver. Jeff and I tore our hair out for a bit checking every signal we could think of. We returned to the FASTIMON signals and found a discrepancy between them and the MASTER OUT (the digital request to the DAC). The F2 FASTIMON (coil driver output monitor) shows a jump in current that Jeff and I calibrated to be about a 0.5 mA jump (using the conversion factor of 0.0228 mA/ct from this alog), which corresponds to about a 50 mV jump in voltage (using the calibration factor of 9.943 mA/V from this table). However, no such signal is present in the MASTER OUTs, indicating that the DAC is the possible culprit. The usual RMS of the FASTIMON channel is about 0.002 mA, which is about 0.2 mV RMS. Therefore, this is a huge impulse being sent to the suspension, which has a 0.35 Hz resonance (T1200404). We believe this jump is causing the TMS suspension to shake at 0.35 Hz, and the motion is too large for our other slow servos to follow.
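As a quick sanity check on those numbers (conversion factors as quoted above, jump size read off the FASTIMON trace):

cts_to_mA = 0.0228               # FASTIMON calibration, mA per count (from the referenced alog)
mA_per_V  = 9.943                # coil driver transconductance, mA/V (from the referenced table)

jump_mA = 0.5                    # observed F2 FASTIMON jump
print(jump_mA / cts_to_mA)       # ~21.9 counts at the FASTIMON
print(1e3 * jump_mA / mA_per_V)  # ~50.3 mV equivalent voltage jump

rms_mA = 0.002                   # typical FASTIMON RMS
print(1e3 * rms_mA / mA_per_V)   # ~0.2 mV, so the jump is roughly 250x the usual RMS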

We see the jump in the F2 FASTIMON channel but not the F3, for example, in this particular lockloss. Jeff is finding similarly huge jumps in the F2 FASTIMON for some of the other locklosses (he is looking through them now).

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:58, Tuesday 29 July 2025 (86084)
Some corroborating evidence that this is consistently happening during (as far as I looked) all of these now-called TMSX yaw excursions or oscillations:
Attachment 1 -- 2025-07-29 21:42 UTC (1437860551 lock loss)
     A repeat of Elenna's plot, showing the same thing in my formatting -- the large (uncalibrated) excursion in the F2 FASTIMON where no MASTER_OUT control request is made.

Attachment 2 -- 2025-07-27 13:39 UTC (1437277872 lock loss) 
Attachment 3 -- 2025-07-23 10:49 UTC (1437303004 lock loss) 
Attachment 4 -- 2025-07-23 03:50 UTC (1437277872 lock loss) 

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 15:28, Tuesday 29 July 2025 - last comment - 08:29, Wednesday 30 July 2025(86080)
CDS Maintenance Summary: Tuesday 29th July 2025

WP12709 Replace EX Dolphin IX S600 Switch

Jonathan, Erik, EJ, Dave:

The Dolphin IX switch at EX was damaged by the 06apr2025 power glitch; it continued to function as a switch, but its network interface stopped working. This meant we couldn't fence a particular EX front end from the Dolphin fabric by disabling its switch port via the network interface. Instead we were using the IPC_PAUSE function of the IOP models. Also, because the RCG needs to talk over the network to a switch on startup, Erik configured the EX front ends to control an unused port on the EY switch.

This morning Erik replaced the broken h1rfmfecex0 with a good spare. The temporary control-EY-switch-because-EX-is-broken change was removed.

Before this work the EX SWWD was bypassed on h1iopseiex and h1cdsrfm was powered down.

During the startup of the new switch, the IOP models for SUS and ISC were time glitched putting them into a DACKILL state. All models on h1susex and h1iscex were restarted to recover from this.

Several minutes later h1susex spontaneously crashed, requiring a reboot. Everything has been stable from this point onwards.

WP12687 Add STANDDOWN EPICS channels to DAQ

Dave:

I added a new H1EPICS_STANDDOWN.ini to the DAQ; it was installed as part of today's DAQ restart.

WP12719 Add two FCES Ion Pumps and One Gauge To Vacuum Controls

Gerardo, Janos, Patrick, Dave

Patrick modified h0vacly Beckhoff to read out two new FCES Ion Pumps and a new Gauge.

The new H0EPICS_VACLY.ini was added to the DAQ, requiring an EDC+DAQ restart.

WP12689 Add SUS SR3/PR3 Fast Channels To DAQ

Jeff, Oli, Brian, Edgard, Dave:

New h1sussr3 and h1suspr3 models (HLTS suspensions) were installed this morning. Each model added two 512Hz fast channels to the DAQ. Renaming of subsystem parts resulted in the renaming of many fast and slow DAQ channels. A summary of the changes:

In sus/common/models three files were changed (svn version numbers shown):

HLTS_MASTER_W_EST.mdl production=r31259 new=32426

SIXOSEM_T_STAGE_MASTER_W_EST.mdl  production=r31287 new=32426

ESTIMATOR_PARTS.mdl production=r31241 new=32426

HLTS_MASTER_W_EST.mdl:

only change is to the DAQ_Channels list, added two chans M1_ADD_[P,Y]_TOTAL

SIXOSEM_T_STAGE_MASTER_W_EST.mdl:

At top level, change the names of the two ESTIMATOR_HXTS_M1_ONLY blocks:

PIT -> EST_P

YAW -> EST_Y

Inside the ADD block:

Add two testpoints P_TOTAL, Y_TOTAL (referenced by HLTS mdl)

ESTIMATOR_PARTS.mdl:

Rename block EST -> FUSION

Rename filtermodule DAMP_EST -> DAMP_FUSION

Rename epicspart DAMP_SIGMON -> OUT_DRIVEMON

Rename testpoint DAMP_SIG -> OUT_DRIVE

DAQ_Channels list changed according to the above renames.

DAQ Changes:

This results in a large number of DAQ changes for SR3 and PR3. For each model:

+496 slow chans, -496 slow chans (rename of 496 channels).

+64 fast chans, -62 fast chans (add 2 chans, rename 62 chans).

DAQ Restart

Jonathan, Dave:

The DAQ was restarted for several changes:

New SR3 and PR3 INI, fast and slow channel renames, addition of 512Hz fast channels.

New H0EPICS_VACLY.ini, adding Ion Pumps and Gauge to EDC.

New H1EPICS_STANDDOWN.ini, adding ifo standdown channels to EDC.

This was a full EDC DAQ restart. Procedure was:

stop TW0 and TW1, then restart EDC

restart DAQ 0-leg

restart DAQ 1-leg

As usual, GDS1 needed a second restart; unusually, FW1 spontaneously restarted itself after having run for 55 minutes, an uncommon late restart.

Jonathan tested new FW2 code which sets the run number in one place and propagates it to the various frame types.

Comments related to this report
david.barker@LIGO.ORG - 08:29, Wednesday 30 July 2025 (86101)

Detailed DAQ changes in attached file

Non-image files attached to this comment
david.barker@LIGO.ORG - 08:20, Wednesday 30 July 2025 (86100)

Tue29Jul2025
LOC TIME HOSTNAME     MODEL/REBOOT
09:02:53 h1susex      h1iopsusex  <<< Restarts following EX Dolphin IXS600 switch replacement
09:02:57 h1iscex      h1iopiscex  
09:03:07 h1susex      h1susetmx   
09:03:11 h1iscex      h1pemex     
09:03:21 h1susex      h1sustmsx   
09:03:25 h1iscex      h1iscex     
09:03:35 h1susex      h1susetmxpi 
09:03:39 h1iscex      h1calex     
09:03:53 h1iscex      h1alsex     


09:11:30 h1susex      h1iopsusex  <<< h1susex crash
09:11:43 h1susex      h1susetmx   
09:11:56 h1susex      h1sustmsx   
09:12:09 h1susex      h1susetmxpi 


12:15:20 h1sush2a     h1suspr3    <<< New models, EST rename and 2 fast chans added
12:16:28 h1sush56     h1sussr3    


12:19:29 h1susauxb123 h1edc[DAQ] <<< EDC for new VAC-LY and STANDDOWN
12:21:00 h1daqdc0     [DAQ] <<< 0-leg
12:21:08 h1daqfw0     [DAQ]
12:21:08 h1daqtw0     [DAQ]
12:21:10 h1daqnds0    [DAQ]
12:21:17 h1daqgds0    [DAQ]
12:24:10 h1daqdc1     [DAQ] <<< 1-leg
12:24:17 h1daqfw1     [DAQ]
12:24:17 h1daqtw1     [DAQ]
12:24:20 h1daqnds1    [DAQ]
12:24:27 h1daqgds1    [DAQ]
12:25:18 h1daqgds1    [DAQ] <<< GDS1 2nd restart


13:19:29 h1daqfw1     [DAQ] <<< spontaneous restart


16:17:57 h1susex      h1iopsusex  <<< Replace TMSX 18bit-DAC
16:18:10 h1susex      h1susetmx   
16:18:23 h1susex      h1sustmsx   
16:18:36 h1susex      h1susetmxpi 
 

H1 ISC (Lockloss, SUS)
oli.patane@LIGO.ORG - posted 15:05, Tuesday 29 July 2025 (86045)
ETMX glitch comparison between LHO and LLO

I wrote a script that looks at sudden range drops for both H1 and L1 and searches those times for ETMX glitches. With this script I have been able to confirm that LLO gets ETMX glitches that they're able to ride out. However, we don't know if the glitches cause locklosses for them too.

I used /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation.ipynb to look for ETMX glitches that would cause the range to drop below 100 Mpc. I have only looked over a few days at each ifo, but it's already clear that they definitely have ETMX glitches, or some glitch that presents itself very similarly. The plots for LHO can be found in /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation/H1/, and the LLO plots in /ligo/home/oli.patane/Documents/WIP/etmglitch/range_drop_investigation/L1/. I've attached a couple here of each as examples.
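The real analysis lives in the notebook above; as a rough illustration of the range-drop half of the search, something like the sketch below would pick out candidate times (the range FOM channel names are assumptions):

from gwpy.timeseries import TimeSeries

RANGE_CHANNEL = {"H1": "H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC",   # assumed range FOMs
                 "L1": "L1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC"}

def range_drop_times(ifo, start, end, threshold=100):
    """Return GPS times where the BNS range first dips below `threshold` Mpc,
    as candidate times to inspect for ETMX glitches."""
    rng = TimeSeries.get(RANGE_CHANNEL[ifo], start, end)
    below = rng.value < threshold
    times = rng.times.value
    # keep only the downward crossings into the low-range state
    return [t for t, was_low, is_low in zip(times[1:], below[:-1], below[1:])
            if is_low and not was_low]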

 

I wanted to make the plots between the two ifos as similar as possible to help better judge glitch size and channels it appears in. Both ifos have matching ylims for L1, L2, and L3, and although I couldn't use the same ylims for the DCPDs, I have scaled them so the delta between the ymin and ymax is 0.3 mA for both ifos. Unfortunately, I was not able to do any scaling for DARM or CALIB STRAIN due to the amount they vary between both locks as well as between ifos.
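For reference, a tiny sketch of that fixed-span scaling, assuming matplotlib and centering the window on the median (my guess at a sensible reference point, not necessarily what the script does):

import numpy as np

def fixed_span_ylim(ax, data, span=0.3):
    """Give a DCPD axis a fixed 0.3 mA y-span centered on the data."""
    mid = np.median(data)
    ax.set_ylim(mid - span / 2, mid + span / 2)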

LHO example1, LHO example2

LLO example1, LLO example2

Both LHO and LLO seem to have ETMX glitches that appear both alone and in groups. As you can see, LLO generally has much noisier ETMX L3, DCPD, and DARM channels. This hides the true morphology of the glitches in ETMX L3, and may be preventing us from seeing the glitch appear in the DCPDs and DARM as often as it appears in LHO's DCPDs and DARM channels. In LLO's examples, you can see very small glitches in the DCPDs and DARM at the same time, but relative to the entire trace they aren't affecting those channels as much as they do at LHO. Feel free to take a look through the rest of the glitch examples in the directories to get a better idea of the range of ways these glitches can present and affect the different parts of the ifo.

Through messing with this script I've also found good thresholds for searching for these glitches at LLO (their DARM and ETMX L3 channels are much noisier than ours), so it would be very easy to implement an ETMX glitch lockloss search/tag for them.

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 14:48, Tuesday 29 July 2025 - last comment - 15:14, Tuesday 29 July 2025(86077)
21:42 UTC lockloss

21:42 UTC lockloss. We were starting to shake from an M5.6 from El Salvador, but it looks to be another ASC_Y / TMSX_Y oscillation lockloss.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 15:14, Tuesday 29 July 2025 (86078)

The TMSX_Y oscillation is also being seen in ALS_X.

Images attached to this comment
H1 CDS
erik.vonreis@LIGO.ORG - posted 09:49, Tuesday 29 July 2025 - last comment - 16:44, Tuesday 29 July 2025(86059)
Dolphin IX switch replaced at EX

The Dolphin IX switch at EX had a broken management interface, which prevented fencing front ends from the Dolphin network for safe reboots.

I installed a new switch.  As part of the process, h1cdsrfm was turned off to avoid crashing it and in turn crashing CS and EY dolphin networks. 

When I turned h1cdsrfm back on, the cdsrfm port on the switch was already enabled, which led to timing glitches on h1susex and h1iscex.  This could have been prevented by re-fencing h1cdsrfm after the new switch was turned on.

I restarted the models on h1susex and h1iscex.  A few minutes later, h1susex crashed in the dolphin driver and had to be rebooted.  The crash was reminiscent of several susex (plus all of ey) crashes related to restarts of dis_networkmgr on the bootserver.  See 67389 for one example.

Comments related to this report
erik.vonreis@LIGO.ORG - 16:44, Tuesday 29 July 2025 (86089)

Switch SN ISX600-HN-000224 was replaced with ISX600-HN-000258.

H1 AOS (DetChar)
preeti.sharma@LIGO.ORG - posted 10:46, Friday 18 July 2025 - last comment - 15:57, Tuesday 29 July 2025(85848)
Investigation of Lock losses due to Earthquakes

Preeti, Gaby

With the help of Ashley Patron's Eqlock script, we calculated the locklosses caused by EQs for each observing run. This is part of a study investigating the correlation between microseism and duty cycle (alog), so we chose the winter months (Nov, Dec, Jan, and Feb) of each observing run and calculated the vertical ground velocity from the Z channel, and the horizontal velocity as the quadrature sum of the X and Y channels, at the time of each lockloss due to an EQ. We also did the same study for LLO (alog).
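A minimal sketch of that horizontal/vertical velocity calculation is below; the ground STS channel names are assumptions for illustration, and the raw counts-to-velocity calibration is omitted:

import numpy as np
from gwpy.timeseries import TimeSeries

def peak_ground_velocity(start, end,
                         x_ch="H1:ISI-GND_STS_ITMY_X_DQ",   # assumed ground STS channels
                         y_ch="H1:ISI-GND_STS_ITMY_Y_DQ",
                         z_ch="H1:ISI-GND_STS_ITMY_Z_DQ"):
    """Return (peak horizontal, peak vertical) ground motion around a lockloss;
    horizontal is the quadrature sum of the X and Y channels."""
    x = TimeSeries.get(x_ch, start, end)
    y = TimeSeries.get(y_ch, start, end)
    z = TimeSeries.get(z_ch, start, end)
    horizontal = np.sqrt(x.value**2 + y.value**2)
    return horizontal.max(), np.abs(z.value).max()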

Conclusion:

Images attached to this report
Comments related to this report
preeti.sharma@LIGO.ORG - 12:04, Thursday 24 July 2025 (85964)

After removing EQ events which happened during the locklosses, the probability of surviving lock during an EQ was found to be 54% in the O2 winter, 64% in the O3b winter, 73% in the O4a winter, and 41% in the O4b winter.

preeti.sharma@LIGO.ORG - 15:57, Tuesday 29 July 2025 (86082)

Some corrections have been made to the script after getting Derek's feedback, and I have attached the updated scatter plot of peak horizontal ground motion versus peak vertical ground motion for earthquake events, along with a data table including the total number of EQs and the lock probability in each observing run's winter. The number of surviving EQ events is still lower in O4 than in O3, though.

Images attached to this comment