LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:50, Sunday 08 December 2024 (81684)
OPS Day Shift Summary

TITLE: 12/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is DOWN due to Earthquake

H1 was only able to lock for 36 minutes today (lockloss alog) in between earthquakes. It's been a very quakey day.

We tried locking after the first 6.3 magnitude EQ died down, only to have another 6.3 magnitude EQ 3 hrs later (and then another 6.3 ten minutes after that). I've attached a screenshot of our microseism. The secondary microseism is also over the 90% threshold, which makes us more susceptible to lower-magnitude EQs. We've essentially been in useism and earthquake mode all day.
 

LOG:

None

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:36, Sunday 08 December 2024 (81683)
Ops Eve Shift Start

TITLE: 12/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM_EARTHQUAKE
    Wind: 6mph Gusts, 3mph 3min avg
    Primary useism: 1.11 μm/s
    Secondary useism: 0.86 μm/s
QUICK SUMMARY:

Currently in DOWN due to an earthquake in Japan. Microseism is still pretty high, so we were having a hard time getting back up anyway.

H1 SUS
oli.patane@LIGO.ORG - posted 13:56, Sunday 08 December 2024 - last comment - 19:29, Monday 09 December 2024(81668)
Suspension watchdog trips during the large earthquake

Back in March (76269) Jeff and I updated all the suspension watchdogs (besides OFIS, OPOS, and HXDS, which were already up to date) to use better BLRMS filtering and to report their output in µm. We set the suspension watchdog thresholds to values between 100 and 300 µm, but these values were somewhat arbitrary since there had previously been no way to see how far the stages move in different scenarios. We have already raised a few of the thresholds after some suspensions tripped when they probably shouldn't have, and this is a continuation of that work.
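As a rough sketch of what one of these BLRMS watchdogs computes (a minimal illustration only; the band edges, sample rate, window, and threshold below are placeholders, not the production SUS watchdog design):

    # Rough sketch of a band-limited RMS (BLRMS) watchdog on one OSEM
    # signal calibrated into micrometers. Band edges, sample rate,
    # window, and threshold are illustrative placeholders.
    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 256                # sample rate (Hz), placeholder
    THRESHOLD_UM = 150.0    # trip level (um), like the values in this entry

    # Bandpass roughly around suspension motion frequencies (placeholder band)
    SOS = butter(4, [0.1, 10.0], btype="bandpass", fs=FS, output="sos")

    def blrms(osem_um, window_sec=1.0):
        """Sliding band-limited RMS of an OSEM signal, in um."""
        band = sosfilt(SOS, osem_um)
        n = int(window_sec * FS)
        mean_sq = np.convolve(band ** 2, np.ones(n) / n, mode="same")
        return np.sqrt(mean_sq)

    def watchdog_tripped(osem_um):
        """True if the BLRMS ever exceeds the threshold."""
        return bool(np.max(blrms(osem_um)) > THRESHOLD_UM)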
During the large earthquake that hit us on December 5th, 2024 18:46 UTC, all ISI watchdogs tripped, as did some stages on several suspensions. After a cursory look, every suspension that tripped had only the bottom or bottom+penultimate stage trip, meaning that, with the exception of the single suspensions, M1 stage damping should have stayed on.

We wanted to go through and check whether the trips may have just been because of the movement from the ISIs tripping. If that is the case, we want to raise the suspension watchdog thresholds for those stages so that these suspensions don't trip every single time their ISI trips, especially if the amount that they are moving is still not very large.
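This kind of check can be sketched with gwpy: did the stage only start moving hard after its ISI watchdog tripped? All channel names below are hypothetical placeholders, not verified H1 channels:

    # Compare peak stage motion before vs after the ISI watchdog trip.
    from gwpy.timeseries import TimeSeriesDict

    LOCKLOSS_GPS = 1417459578   # placeholder lockloss GPS time
    SUS_CHAN = "H1:SUS-MC3_M3_WDMON_RMS_OUT"   # hypothetical channel name
    ISI_CHAN = "H1:ISI-HAM2_WD_MON_STATE"      # hypothetical channel name

    data = TimeSeriesDict.get([SUS_CHAN, ISI_CHAN],
                              LOCKLOSS_GPS - 30, LOCKLOSS_GPS + 120)
    sus, isi = data[SUS_CHAN], data[ISI_CHAN]

    # First sample where the ISI watchdog left its initial (untripped)
    # state; assumes the trip happens inside this window.
    trip_idx = (isi.value != isi.value[0]).argmax()
    trip_t = isi.times.value[trip_idx]

    before = sus.crop(sus.t0.value, trip_t).value.max()
    after = sus.crop(trip_t, sus.times.value[-1]).value.max()
    print(f"ISI trip at {trip_t:.0f}: max BLRMS {before:.0f} um before, "
          f"{after:.0f} um after")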

Suspension stages that tripped:

Triples:
- MC3 M3
- PR3 M2, M3
- SRM M2, M3
- SR2 M2, M3

Singles:
- IM1 M1
- OFI M1
- TMSX M1

MC3 (M3) (ndscope1)

When the earthquake hit and we lost lock, all stages were moving due to the earthquake, but once HAM2 ISI tripped 5 seconds after the lockloss, the rate at which the OSEMs were moving quickly accelerated, so the excess motion looks to mainly be due to the ISI trip (ndscope2).

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 86 | 150 (unchanged)
M2 | 150 | 136 | 175
M3 | 150 | 159 | 200

 

PR3 (M2, M3) (ndscope3)

Looks to be the same issue as with MC3.

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 72 | 150 (unchanged)
M2 | 150 | 162 | 200
M3 | 150 | 151 | 200

 

SRM (M2, M3) (ndscope4)

Once again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 84 | 150 (unchanged)
M2 | 150 | 165 | 200
M3 | 150 | 174 | 225

 

 SR2 (M2, M3) (ndscope5)

Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM4 saturated and the ISI watchdogs tripped.

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 102 | 150 (unchanged)
M2 | 150 | 182 | 225
M3 | 150 | 171 | 225

 

IM1 (M1) (ndscope6)

Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM2 saturated and the ISI watchdogs tripped.

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 175 | 225

 

OFI (M1) (ndscope7)

Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 209 | 250

 

TMSX (M1) (ndscope8)

This one seems a bit questionable - it looks like some of the OSEMs were already moving quite a bit before the ISI tripped, and there isn't as clear a point where they started moving more once the ISI had tripped (ndscope9). I will still be raising the suspension trip threshold for this one, since it doesn't need to be raised very much and stays within a reasonable range.

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 100 | 185 | 225
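As a rough rule of thumb (my inference from the tables above, not how the values were actually picked), most of the new thresholds sit about 25% above the observed maximum, rounded up to the nearest 25 µm:

    # Rule-of-thumb threshold suggestion. This reproduces most, but not
    # all, of the new values in this entry (e.g. it gives 250 for TMSX M1
    # where 225 was chosen), so treat it as illustrative only.
    import math

    def suggest_threshold(max_blrms_um, margin=1.25, quantum=25):
        return quantum * math.ceil(max_blrms_um * margin / quantum)

    for stage, peak in [("MC3 M2", 136), ("MC3 M3", 159), ("TMSX M1", 185)]:
        print(stage, suggest_threshold(peak))   # -> 175, 200, 250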
Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 19:09, Sunday 08 December 2024 (81686)

We just had an earthquake come through and trip some of the ISIs, including HAM2, and with that tripped IM2 (ndscope1). I checked whether the movement in IM2 was caused by the ISI trip, and sure enough it was (ndscope2). I will be raising the suspension watchdog threshold for IM2 up to 200.

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 152 | 200
Images attached to this comment
oli.patane@LIGO.ORG - 22:45, Sunday 08 December 2024 (81691)
Images attached to this comment
oli.patane@LIGO.ORG - 19:29, Monday 09 December 2024 (81716)

Yet another earthquake! The earthquake that hit us December 9th 23:10 UTC tripped almost all of our ISIs, and we had three suspension stages trip as well, so here's another round of trying to figure out whether they tripped because of the earthquake or because of the ISI trips. The three suspensions that tripped are different from the ones I updated the thresholds for earlier in this alog.

I will not be making these changes right now since that would knock us out of Observing, but the next time we are down I will make the changes to the watchdog thresholds for these three suspensions.

Suspension stages that tripped:

- MC2 M3

- PRM M3

- PR2 M3

MC2 (M3) (ndscope1)

It's hard to tell what caused M3 to trip here (ndscope2), but I will up the threshold for M3 anyway, since even if the trip was directly caused by the earthquake, the ISI tripping definitely wouldn't have helped!

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 88 | 150 (unchanged)
M2 | 150 | 133 | 175
M3 | 150 | 163 | 200

 

PRM (M3) (ndscope3)

For this one it's pretty clear that it was because of the ISI tripping (ndscope4).

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 44 | 150 (unchanged)
M2 | 150 | 122 | 175
M3 | 150 | 153 | 200

 

PR2 (M3) (ndscope5)

Again, it's pretty clear that this one was because of the ISI tripping (ndscope6).

Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm)
M1 | 150 | 108 | 150 (unchanged)
M2 | 150 | 129 | 175
M3 | 150 | 158 | 200
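A sketch of applying these new values with pyepics once we are next out of Observing; the WDMON threshold channel naming below is a guess and should be checked against the real channel list before use:

    # Apply the new watchdog thresholds for the three tripped suspensions.
    from epics import caput

    NEW_THRESHOLDS = {
        "MC2": {"M2": 175, "M3": 200},
        "PRM": {"M2": 175, "M3": 200},
        "PR2": {"M2": 175, "M3": 200},
    }

    for optic, stages in NEW_THRESHOLDS.items():
        for stage, value in stages.items():
            chan = f"H1:SUS-{optic}_{stage}_WDMON_RMS_MAX"  # hypothetical name
            caput(chan, value)
            print(f"set {chan} = {value}")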
Images attached to this comment
H1 ISC
ibrahim.abouelfettouh@LIGO.ORG - posted 12:09, Sunday 08 December 2024 (81682)
OPS Day Midshift Update

While the microseism has come down a small bit, locking has still been difficult due to EQs.

First, two back-to-back EQs (mag 4s and 5s) killed a potential lock near NLN (LASER_NOISE_SUPPRESSION).

Then, we got to NLN and were OBSERVING for 36 mins, but another two back-to-back EQs (mag 4s and 5s) caused a lockloss.

Finally, just now while relocking, two larger EQs (mag 6.5 and 6.3) from Japan are coming through.

All of this is made much worse by the secondary microseism, which is already over 1 µm/s.

As such, I will bring H1 to DOWN and ENVIRONMENT and wait until the Earth rings down a bit.

H1 PSL (PSL)
ibrahim.abouelfettouh@LIGO.ORG - posted 11:20, Sunday 08 December 2024 (81681)
PSL Weekly Report - Weekly FAMIS 26329

Closes FAMIS 26329, Last checked in alog 81539


Laser Status:
    NPRO output power is 1.849W
    AMP1 output power is 70.27W
    AMP2 output power is 137.2W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 11 days, 16 hr 6 minutes
    Reflected power = 22.82W
    Transmitted power = 105.8W
    PowerSum = 128.7W

FSS:
    It has been locked for 0 days 0 hr and 21 min
    TPD[V] = 0.8091V

ISS:
    The diffracted power is around 3.4%
    Last saturation event was 0 days 0 hours and 21 minutes ago


Possible Issues: None reported

H1 PEM (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 10:55, Sunday 08 December 2024 (81679)
Lockloss 18:54 UTC

Lockloss 36 mins into NLN due to two back-to-back EQs (4.5 and 5.8) that happened during very high microseism. Relocking now.

LHO VE
david.barker@LIGO.ORG - posted 10:27, Sunday 08 December 2024 (81677)
Sun CP1 Fill

Sun Dec 08 10:12:22 2024 Fill completed in 12min 19secs

Note to VAC: Texts were sent for this fill, but no emails

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:23, Sunday 08 December 2024 - last comment - 16:51, Sunday 08 December 2024(81676)
alarms issue 01:00 Sunday 08dec2024

Jonathan, Dave:

The alarms service on cdslogin stopped reporting around 1am this morning. Symptoms: the status file was not being updated (causing the alarm block on the CDS Overview MEDM to turn PURPLE), and the report file was not being updated. Presumably no alarms were sent from this time onwards.

At 08:10 I restarted the alarms.service on cdslogin. A new report file was created but not written to, and the /tmp/alarm_status.txt file was not changed (still frozen at 01:00), but I did get a startup text. Then, 14 minutes later, the files started being written. I raised a test alarm and got a text, but no email.

At 09:38, after not getting the 09:00 keepalive email or any SSH login emails, I rebooted cdslogin. Same behavior as at 08:10: report file created but not written to, tmp file not created, startup text sent successfully. After 14 minutes alarms started running and writing to the file system; test alarms were texted, but no emails at all.
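For reference, the frozen-status-file symptom is the kind of thing a minimal staleness check would catch; the path below is from this entry, while the age limit is a placeholder rather than what the production alarms code uses:

    # Alarm if the status file has not been touched recently.
    import os
    import time

    STATUS_FILE = "/tmp/alarm_status.txt"
    MAX_AGE_SEC = 15 * 60   # placeholder staleness limit

    def status_is_stale(path=STATUS_FILE, max_age=MAX_AGE_SEC):
        try:
            return time.time() - os.path.getmtime(path) > max_age
        except FileNotFoundError:
            return True

    if status_is_stale():
        print("alarms status file is stale -- service may be wedged")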

Jonathan is going to check on bepex.

Comments related to this report
david.barker@LIGO.ORG - 10:53, Sunday 08 December 2024 (81678)

Jonathan rebooted bepex, which fixed the no-email problem with alarms and alerts. I raised a test alarm and alert to myself and got both texts and emails.

david.barker@LIGO.ORG - 11:01, Sunday 08 December 2024 (81680)
david.barker@LIGO.ORG - 16:51, Sunday 08 December 2024 (81685)

Alarms got stuck again around noon today, presumably due to a recurring bepex issue. I have edited the code to skip bepex and use only twilio for texts. alarms.service was restarted on cdslogin at 16:48 PST.
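A sketch of what texting directly through Twilio might look like; whether the alarms code uses the twilio Python client this way is an assumption, and the SID, token, and phone numbers are placeholders:

    # Send an alarm text straight through Twilio, bypassing bepex.
    from twilio.rest import Client

    client = Client("ACCOUNT_SID", "AUTH_TOKEN")
    client.messages.create(
        to="+15095550100",     # recipient (placeholder)
        from_="+15095550199",  # Twilio-provisioned number (placeholder)
        body="H1 alarm: test alarm raised",
    )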

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:32, Sunday 08 December 2024 - last comment - 10:23, Sunday 08 December 2024(81674)
OPS Day Shift Start

TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 20mph Gusts, 16mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.71 μm/s
QUICK SUMMARY:

IFO is in ENVIRONMENT and LOCKING. IFO stayed in DOWN all last night due to high winds and microseism.

Since last night, the microseism has leveled off and even gone down a bit. The wind hasn't changed much. Attempting to lock to see where we get to.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 10:23, Sunday 08 December 2024 (81675)

OBSERVING as of 18:12 UTC.

Got very close to NLN earlier (LASER_NOISE_SUPPRESSION) but lost lock due to three back-to-back mid-magnitude (4s and 5s) EQs, exacerbated by very high microseism.

H1 General (ISC)
oli.patane@LIGO.ORG - posted 18:42, Saturday 07 December 2024 (81673)
Ops Eve Shift End

TITLE: 12/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: TJ/Oli
SHIFT SUMMARY: Currently unlocked. We've been sitting in DOWN for the past hour while the secondary microseism stays really high. We also currently have a smallish earthquake coming through. I will set the IFO to start trying to relock again.

Earlier, while trying to relock, we were having issues with the ALSX crystal frequency. When this is a consistent issue, we have to fix it by going out to the end station to adjust the crystal temperature. I trended the ALSX channels alongside the EX VEA temperatures, and it looks like a couple of the temperatures went down (section D by almost one degree F) right around when we started having crystal frequency issues. The wind was also blowing into the VEA, which we know because the dust counts were high then. I believe it's possible that the wind was cooling down the air in the part of the VEA near the ALS box and changing the temperature of the crystal enough to affect the beatnote. I only have this one screenshot right now (ndscope), but I trended back a few months and saw a possible correlation between when we get into the CHECK_CRYSTAL_FREQUENCY state for ALSX, the temperature inside the EX VEA, and the dust counts indicating wind entering the VEA. It's hard to know for sure, especially because the air/wind outside is now much colder than it was a couple months ago, but it would be interesting to know the location of the D section and look for these correlations more closely. Tagging ISC.
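A sketch of that trend comparison with gwpy; all channel names here are hypothetical placeholders, to be swapped for the real H1 trend channels for the ALS-X crystal state, EX VEA temperatures, and dust counts:

    # Pull a few months of trends and plot each channel for comparison.
    from gwpy.timeseries import TimeSeriesDict

    START, END = "2024-10-01", "2024-12-08"
    CHANNELS = [
        "H1:ALS-X_CRYSTAL_FREQUENCY",    # hypothetical
        "H1:PEM-EX_TEMPERATURE_VEA_D",   # hypothetical
        "H1:PEM-EX_DUST_VEA_300NM",      # hypothetical
    ]

    data = TimeSeriesDict.get(CHANNELS, START, END)
    for name, series in data.items():
        series.plot().savefig(name.replace(":", "_") + ".png")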
LOG:

22:15 started an initial alignment
22:40 initial alignment done, relocking
    - ALSX beatnote issue - CHECK_CRYSTAL_FREQUENCY
        - toggled force/no force
        - finally caught with no force
    - ALSX beatnote issue again
        - toggled force/no force and enable/disable
    00:01 Put ifo in DOWN since we can't get past DRMI due to the high microseism
    00:29 tried relocking
    01:06 back to DOWN
02:38 Trying relocking again                                                                                                                              

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
23:03 | PEM | Robert | LVEA | YES | Finish setting up for Monday | 23:48
Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:27, Saturday 07 December 2024 (81672)
OPS Day Shift Summary

TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is DOWN due to MICROSEISM/ENVIRONMENT since 22:09 UTC

First 6-7 hrs of shift were very calm and we were in OBSERVING for the majority of the time.

The plan is to stay in DOWN and intermittently try to lock, but the last few attempts have resulted in 6 pre-DRMI LLs with 0 DRMI acquisitions. Overall, microseism is just very high.

LOG:                                                                                                                                                                       

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
23:03 | PEM | Robert | LVEA | YES | Finish setting up for Monday | 23:48
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 06:09
H1 General
oli.patane@LIGO.ORG - posted 16:21, Saturday 07 December 2024 (81671)
Ops Eve Shift Start

TITLE: 12/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT_USEISM
    Wind: 15mph Gusts, 9mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.78 μm/s
QUICK SUMMARY:

Currently in DOWN and trying to wait out the microseism a bit. Thankfully the wind has gone back down.

H1 ISC (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:12, Saturday 07 December 2024 (81670)
Investigating SRM M3 WD Trips During Initial Alignment Part 2

Trying to gather more info about the nature of these M3 SRM WD trips in light of OWL Ops being called (at least twice in recent weeks) to press one button.

Relevant Alogs:

Part 1 of this investigation: 81476

Tony OWL Call: alog 81661

TJ OWL Call: alog 81455

TJ OWL Call: alog 81325

It's mentioned in a few more OPS alogs, but with no new info.

Next Steps:

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 14:11, Saturday 07 December 2024 (81667)
Lockloss

Lockloss @ 12/07 22:09 UTC. Possibly due to a gust of wind, since the wind at EY jumped from the low 20s to almost 30mph in the same minute as the lockloss. A possible contributor could also be the secondary microseism - it has been quickly rising over the last several hours and is now up to 2 µm/s.

H1 General (SUS)
anthony.sanchez@LIGO.ORG - posted 03:03, Saturday 07 December 2024 - last comment - 15:26, Saturday 07 December 2024(81661)
SRM Watchdog trip

TITLE: 12/07 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Aligning
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.28 μm/s
QUICK SUMMARY:

IFO stuck in initial alignment because the SRM watchdog H1:SUS-SRM_M3_WDMON_STATE tripped.
The watchdog tripped while we were in initial alignment, not before, and was not due to ground motion.


I logged in and discovered the trip, reset the watchdog, and reselected myself for Remote OWL notifications.
 

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 03:49, Saturday 07 December 2024 (81662)SUS

SUS SDF drivealign L2L gain change accepted.

Images attached to this comment
ibrahim.abouelfettouh@LIGO.ORG - 15:26, Saturday 07 December 2024 (81669)

Just commenting that this is not a new issue. TJ and I were investigating it earlier and had early thoughts that SRM was catching on the wrong mode during SRC alignment in ALIGN_IFO, either during the re-alignment of SRM (pre-SRC align) or after the re-misalignment of SRM. This results in the guardian thinking that SRC is aligned when it actually isn't, which leads to saturations and trips. Again, we think this is the case as of 11/25 but are still investigating. I have an alog about it here: 81476.
