H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 20:54, Wednesday 27 September 2023 - last comment - 09:24, Thursday 28 September 2023(73148)
Lockloss @ 03:49

Lockloss @ 03:49 UTC - no obvious cause as of yet.

Comments related to this report
ryan.short@LIGO.ORG - 23:02, Wednesday 27 September 2023 (73150)

Back observing as of 05:04 UTC

ryan.crouch@LIGO.ORG - 09:24, Thursday 28 September 2023 (73152)

I don't see anything concerning or weird in the lockloss select scope signals; there's a ~3.5 Hz oscillation in DARM and PRCL.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 20:04, Wednesday 27 September 2023 (73147)
Ops Eve Mid Shift Report

After waiting for high winds to calm down, H1 has just locked and started observing as of 03:02 UTC.

H1 SUS
ryan.short@LIGO.ORG - posted 17:46, Wednesday 27 September 2023 (73130)
In-Lock SUS Charge Measurement

FAMIS 26059

There is no measurement this week for ETMX as the IFO lost lock before it was finished.

The V_eff values for ETMY have quickly changed direction since the last analysis and are now trending away from zero. All other test masses look okay.

Images attached to this report
H1 SEI (Lockloss)
jim.warner@LIGO.ORG - posted 16:41, Wednesday 27 September 2023 - last comment - 11:42, Thursday 28 September 2023(73146)
ITMX ST2 H3 Coil driver caused ISI trip again

This afternoon, the ITMX ISI tripped, was continuously glitching, and couldn't be recovered. Even with the ISI and HEPI in non-actuating states, the ISI continued to see glitches that nearly saturated the seismometers. Fil went out to the rack, and while I watched the ISI he turned off the coil drivers one at a time; the glitching didn't stop until he turned off the third coil driver, after which the ISI started behaving normally. This is exactly the same as the event in July, where I found the St2 H3 sensor seemed to be misbehaving. Looking at the current and voltage mons, it appears the St2 H3 coil was again misbehaving; the attached trend shows the St2 coil voltage and current mons. The only outliers are the St2 H3 channels, and this window should be during a time when the ISI masterswitch was off, so the drives should be ~0.

We should pull the corner 3 coil driver at the next opportunity. I think we have a spare in the EE shop.
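As a rough sketch of the check described above: with the ISI masterswitch off, every coil voltage/current monitor should read ~0, so any channel with a large mean is the misbehaving one. The channel names and the outlier threshold below are illustrative placeholders, not the exact H1 channel names.

# Hedged sketch: flag coil driver monitor channels that are not ~0 while the masterswitch is off.
# Channel name pattern and threshold are assumptions for illustration only.
from gwpy.timeseries import TimeSeriesDict

channels = [f'H1:ISI-ITMX_ST2_{dof}_{mon}'
            for dof in ('H1', 'H2', 'H3', 'V1', 'V2', 'V3')
            for mon in ('VOLTMON', 'CURRENTMON')]   # placeholder suffixes

data = TimeSeriesDict.get(channels, 'Sep 27 2023 21:55', 'Sep 27 2023 22:10')
for name, ts in data.items():
    if abs(ts.value.mean()) > 0.1:   # threshold is a guess; drives should be ~0
        print(f'{name} is an outlier: mean = {ts.value.mean():.3g}')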

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:27, Thursday 28 September 2023 (73153)
Now corresponds to FRS Ticket 29225.
jim.warner@LIGO.ORG - 11:42, Thursday 28 September 2023 (73154)

It took ~1 hr to find the problem, but the IFO was down for several hours because of wind.

LHO General
ryan.short@LIGO.ORG - posted 16:03, Wednesday 27 September 2023 (73145)
Ops Eve Shift Start

TITLE: 09/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 27mph Gusts, 16mph 5min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.34 μm/s
QUICK SUMMARY:

H1 General (DetChar)
ryan.crouch@LIGO.ORG - posted 16:01, Wednesday 27 September 2023 (73133)
OPS Wednesday day shift summary

TITLE: 09/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Pretty unsuccessful day for locking; we weren't able to relock after the morning lockloss due to wind.

 

15:38 UTC: Superevent S23927be. We didn't get a verbal alarm for this event, just a phone call.

I noticed this bump around 120 Hz this week; I'm not sure if it's usually there (tagging DetChar).

The range started to drop in the morning; it seemed correlated with the winds picking up (gusts over 40 mph).

17:45UTC lockloss

17:50 UTC: The Y-arm struggled on the relock; we kept losing it at LOCKING_ALS and losing lock. After over an hour of trying, and with the weather forecast calling for it to get worse before it gets better, I decided to wait in DOWN for a bit. The corner station was getting gusts of 50 mph throughout the afternoon, and we could easily hear the wind from the control room.

The ITMX ST1 & ST2 ISI watchdogs tripped at 21:52 UTC while we were in DOWN, due to the ISI sensors glitching. The sensors settled after a few minutes but then continued to glitch, potentially an electronics issue. Fil went to the CER to investigate; I moved the ITMX SUS to safe and Jim turned off the master switch on the ISI while we investigated further. Fil power-cycled the chassis, which cleared the issue; it seems to have been a coil driver problem. Related to alog 71566.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
19:56 FAC Randy LVEA N Quick checks 20:05
21:18 VAC Gerardo FCES N Parts search 21:45
21:34 FAC Randy Mech room N Move stuff around 22:34
22:16 EE Fil CER N ITMX ISI checks 22:30
Images attached to this report
H1 GRD
oli.patane@LIGO.ORG - posted 15:49, Wednesday 27 September 2023 - last comment - 16:18, Thursday 28 September 2023(73144)
ISC_LOCK State-Change Delay on Lockloss

Oli, Camilla

ISC_LOCK takes longer than we would like to recognize a lockloss and move to the LOCKLOSS state, sometimes leading to confusion about lockloss causes. Example: 71659

The part of ISC_LOCK that determines whether we have lost lock is the is_locked function in ISC_library.py (attachment 1). The current check for a lockloss relies on the H1:LSC-TR_X_NORM_INMON channel: once that channel's value falls below 500, ISC_LOCK recognizes that we have lost lock and takes us to the LOCKLOSS state.

Currently, when we are locked, LSC-TR_X_NORM_INMON sits right around 1540. When a lockloss happens, it takes ~0.55-0.65 s after the lockloss start (based on ASC-AS_A_DC_NSUM_OUT) for the channel to drop below 500, and then another ~0.07-0.15 s for ISC_LOCK to register this and take us to LOCKLOSS. To minimize this latency, and since LSC-TR_X_NORM_INMON never drops below 1450 once we reach NOMINAL_LOW_NOISE, Camilla and I think the is_locked threshold on LSC-TR_X_NORM_INMON should be raised significantly so that ISC_LOCK registers locklosses faster than it does now. Raising the threshold to 1300, for example (attachment 2), would reduce the time between the lockloss start and ISC_LOCK changing states by a factor of 2-3.
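A minimal sketch of the kind of transmission-based check described above (not the actual ISC_library.py code); ezca is assumed to be the Guardian EPICS interface, and the function name and signature are illustrative.

# Hedged sketch of an is_locked-style arm transmission check.
TR_X_THRESHOLD = 500        # current threshold cited in this report
TR_X_THRESHOLD_NLN = 1300   # proposed tighter threshold for NOMINAL_LOW_NOISE

def arm_is_locked(ezca, threshold=TR_X_THRESHOLD):
    """Return True while the normalized X-arm transmission is above threshold.

    With the threshold at 500, TR_X takes ~0.55-0.65 s after a lockloss to
    cross it; raising it toward 1300 should cut that delay by a factor of 2-3.
    """
    return ezca['LSC-TR_X_NORM_INMON'] > threshold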

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 16:18, Thursday 28 September 2023 (73161)

During locking, state 420 (CARM_TO_ANALOG) is the first state to use the value of TR_X to determine a lockloss, and every state after that also looks at TR_X. By the time we reach CARM_TO_ANALOG, TR_X is already around 1500, but it dips and sometimes nears 1300 between states 508-557 (attachment 1). Because of this, we want to create a new DOF, used only once we are in NOMINAL_LOW_NOISE, that gets passed into the is_locked function and checks that the value of TR_X is above 1300.
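A hedged sketch of the NOMINAL_LOW_NOISE-specific check proposed above; the state-index logic and the NOMINAL_LOW_NOISE index used here are illustrative assumptions, not the real Guardian code.

# Hedged sketch: only use the tight TR_X threshold once we are in NOMINAL_LOW_NOISE,
# since TR_X can dip toward 1300 during acquisition (states ~508-557).
def is_locked_for_state(ezca, state_index):
    if state_index >= 600:      # assumed NOMINAL_LOW_NOISE index, for illustration only
        threshold = 1300        # proposed NLN-only threshold
    else:
        threshold = 500         # existing acquisition-safe threshold
    return ezca['LSC-TR_X_NORM_INMON'] > threshold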

Images attached to this comment
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 12:23, Wednesday 27 September 2023 (73140)
Kobelco Compressor Update.

Late entry.

Rogers Machinery technicians showed up Tuesday with the components to fix the leak on the compressor. The intercooler and aftercooler assemblies were removed, the cooling lines on both components were cleaned and inspected, all sealing surfaces were inspected and cleaned, and then the system was reassembled and tested; the technicians reported no issues. The compressor is back in usable form. The dry air system was turned on and left running for 1 hour, after which I measured a dew point of -41.7 °C just before the pressure regulator.
Note: the compressor is due for its yearly maintenance, but that can't be completed right now because we are waiting on a back-ordered kit, perhaps arriving by December.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 12:02, Wednesday 27 September 2023 - last comment - 12:23, Wednesday 27 September 2023(73136)
OPS Wednesday day shift midshift update

The wind has been picking up, which seems to be dropping the range. We lost lock at 17:45 UTC and have been struggling to relock ALS, particularly the Y-arm, due to the higher wind speeds. It's been over an hour and we're still not able to get ALS. Hopefully it'll calm down soon.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 12:08, Wednesday 27 September 2023 (73139)

The forecast from windy.com unfortunately predicts the wind will get worse before it gets better this afternoon (peaks around 2pm).

ryan.crouch@LIGO.ORG - 12:23, Wednesday 27 September 2023 (73141)

I'm going to hold us in DOWN for a bit to wait out some of the wind.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 11:11, Wednesday 27 September 2023 (73138)
Wed CP1 Fill

Wed Sep 27 10:27:31 2023 INFO: Fill completed in 27min 26secs

Travis confirmed a good fill curbside. It was a long fill, close to the 30-minute timeout; Gerardo is going to tune the LLCV setting.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 10:46, Wednesday 27 September 2023 (73137)
Lockloss at 17:45UTC

Quick lockloss, no obvious reason, aside from very windy site conditions.

H1 CDS
david.barker@LIGO.ORG - posted 08:58, Wednesday 27 September 2023 (73134)
CDS Overview now shows if model has been restarted in the past 24 hours

Following the confusion last Friday when tracing DAQ CRC errors on h1susauxh2, only to discover it had actually rebooted during the early-morning power glitch, I have added an indicator to the CDS overview MEDM showing whether each model has been restarted in the past 24 hours. If the uptime-days EPICS record is zero, a thin purple rectangle is shown third from the right. The attachment shows the models that were started during yesterday's maintenance.
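The underlying logic is simple enough to sketch; the record name below is a placeholder, not the real H1 uptime channel, and this is only an illustration of the check, not the MEDM implementation.

# Hedged sketch: flag a front-end model as recently restarted if its
# uptime-days EPICS record reads zero (i.e. restarted within the past 24 hours).
from epics import caget

def restarted_in_last_day(model='h1susauxh2'):
    uptime_days = caget(f'H1:DAQ-{model.upper()}_UPTIME_DAY')  # hypothetical record name
    return uptime_days == 0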

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 08:00, Wednesday 27 September 2023 (73132)
OPS Wednesday day shift start

TITLE: 09/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 9mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.42 μm/s
QUICK SUMMARY:

 

LHO General
ryan.short@LIGO.ORG - posted 00:00, Wednesday 27 September 2023 (73131)
Ops Eve Shift Summary

TITLE: 09/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: H1 lost lock early in the shift and struggled to relock, but after running an alignment locking went smoothly. H1 has now been locked for over 5 hours.

Temperatures in the LVEA from the excursion this morning have recovered (trend attached).

LOG:       

Start Time System Name Location Lazer_Haz Task Time End
21:21 CAL Tony PCAL lab LOCAL PCAL work, in at 19:30 23:31
Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 16:32, Tuesday 26 September 2023 - last comment - 18:59, Tuesday 26 September 2023(73128)
Lockloss @ 23:21 UTC

Lockloss @ 23:21 UTC - no obvious cause; LSC-DARM_IN1 saw the first movement.

Comments related to this report
ryan.short@LIGO.ORG - 18:59, Tuesday 26 September 2023 (73129)

Back to observing at 01:58 UTC

LHO General
ryan.short@LIGO.ORG - posted 16:01, Tuesday 26 September 2023 (73127)
Ops Eve Shift Start

TITLE: 09/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 25mph Gusts, 15mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.62 μm/s
QUICK SUMMARY:

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 22:42, Friday 22 September 2023 - last comment - 12:22, Tuesday 03 October 2023(73064)
Lockloss

Lockloss @ 09/23 05:39UTC

Comments related to this report
oli.patane@LIGO.ORG - 01:03, Saturday 23 September 2023 (73066)

Right after lockloss DIAG_MAIN was showing:

- PSL_ISS: Diffracted power is low

- OPLEV_SUMS: ETMX sums low

 

05:41 UTC: LOCKING_ARMS_GREEN. The detector couldn't see ALSY at all, and I noticed ETM/ITM L2 saturations (attachment 1, L2 ITMX MASTER_OUT as an example) on the saturation monitor (attachment 2), and noticed the ETMY oplev moving around wildly (attachment 3).

05:54: I took the detector to DOWN and immediately could see ALSY on the cameras and ndscope; the L2 saturations were all still there.

05:56 and 06:05: I went to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL, respectively, but wasn't able to lock anything. I was able to go to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL without the saturations getting too bad, but ALSX and Y each eventually locked for a few seconds, then unlocked and went to basically 0 on the cameras and ndscope (attachment 4). Sometime after this, ALSX and Y went to FAULT, giving the messages "PDH" and "ReflPD A".

06:07: Tried going to INITIAL_ALIGNMENT, but Verbal kept calling out all of the optics as saturating, ALIGN_IFO wouldn't move past SET_SUS_FOR_ALS_FPMI, and the oplev overview screen showed ETMY, ITMX, and ETMX moving all over the place (I was worried something would go really wrong if I left it in that configuration).

I tried waiting a bit and then tried INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN again but had the same results.

Naoki came on and thought the saturations might be due to the camera servos, so he cleared the camera servo history; all of the saturations disappeared (attachment 5), and an INITIAL_ALIGNMENT is now running fine.

Not sure what caused the saturations in the camera servos, but it'll be worth looking into whether the saturations are what ended up causing this lockloss.

Images attached to this comment
naoki.aritomi@LIGO.ORG - 13:50, Monday 25 September 2023 (73091)

As shown in the attached figure, the output of the camera servo for PIT suddenly became 1e20, and the lockloss happened 0.3 s later, so this lockloss is likely due to the camera servo. I checked the input signal of the camera servo: the bottom left is the input of PIT1, which should be the same as the BS camera output (bottom right). The BS camera output itself looks OK, but the input of PIT1 became red dashed and the output became 1e20. The situation is the same for PIT2 and PIT3. I am not sure why this happened.

Images attached to this comment
david.barker@LIGO.ORG - 13:41, Tuesday 26 September 2023 (73119)

Opened FRS 29216. It looks like a transient NaN on the ETMY_Y camera centroid channel caused the outputs of the ASC CAM_PIT filter modules to latch at 1e20.
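As an illustration (not the CDS front-end code) of why a single transient NaN can latch a filter module until its history is cleared, the toy IIR filter below shows the mechanism: once NaN enters the filter state, every subsequent output is bad even after the input recovers, consistent with the fix of clearing the camera servo history noted above. The filter coefficients are arbitrary stand-ins.

# Hedged sketch: a transient NaN poisons an IIR filter's history.
import numpy as np
from scipy.signal import lfilter, lfilter_zi

b, a = [0.1], [1.0, -0.9]          # simple one-pole IIR, stands in for the CAM_PIT filters
zi = lfilter_zi(b, a) * 0.0        # clean initial history

x = np.ones(10)
x[3] = np.nan                      # one-sample transient NaN, like the bad centroid read

y, zf = lfilter(b, a, x, zi=zi)
print(y)        # every sample from index 3 onward is NaN
print(zf)       # the filter history itself is now NaN ...

y2, _ = lfilter(b, a, np.ones(5), zi=zf)
print(y2)       # ... so later good inputs still produce bad outputs until the history is reset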

oli.patane@LIGO.ORG - 11:40, Wednesday 27 September 2023 (73135)

Camilla, Oli

We trended the ASC-CAM_PIT{1,2,3}_INMON/OUTPUT channels alongside ASC-AS_A_DC_NSUM_OUT_DQ (attachment 1) and found that, according to ASC-AS_A_DC_NSUM_OUT_DQ, the lockloss started (cursor 1) ~0.37 s before PIT{1,2,3}_INMON read NaN (cursor 2), so the lockloss was not caused by the camera servos.

It is possible that the NaN values are linked to the light dropping off of the PIT2 and PIT3 cameras right after the lockloss: when the cameras come back online ~0.2 s later, both the PIT2 and PIT3 cameras read 0, and looking back over several locklosses, these two cameras tend to drop to 0 between 0.35 and 0.55 s after the lockloss starts. The PIT1 camera, however, still registers light for another 0.8 s after coming back online (typical for this camera).
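For reference, the timing comparison described above can be scripted rather than read off cursors; the sketch below is a rough version under assumptions: the GPS window and drop threshold are placeholders, and the CAM_PIT INMON channel may only be reachable via raw/NDS data rather than standard frames.

# Hedged sketch: compare the time AS_A_DC_NSUM starts to drop with the time
# the camera-servo input first reads NaN.
import numpy as np
from gwpy.timeseries import TimeSeries

t0, t1 = 1379482700, 1379482720     # placeholder GPS window around the lockloss

as_a = TimeSeries.get('H1:ASC-AS_A_DC_NSUM_OUT_DQ', t0, t1)
cam1 = TimeSeries.get('H1:ASC-CAM_PIT1_INMON', t0, t1)

drop = as_a.times.value[np.argmax(as_a.value < 0.5 * as_a.value[0])]  # first large drop
nan_t = cam1.times.value[np.argmax(np.isnan(cam1.value))]             # first NaN sample

if nan_t > drop:
    print(f'AS drop precedes the camera NaN by {nan_t - drop:.2f} s')
else:
    print('camera NaN precedes the AS drop')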

 

Images attached to this comment
naoki.aritomi@LIGO.ORG - 12:22, Tuesday 03 October 2023 (73239)

Patrick updated the camera server to solve the issue in alog73228.
