H1 CDS
david.barker@LIGO.ORG - posted 16:42, Wednesday 18 June 2025 - last comment - 10:44, Monday 23 June 2025(85164)
DAQ DC0 crash

At 16:14:31 PDT h1daqdc0 crashed. Its EPICS IOC stopped running, resulting in white boxes on MEDM. Its last log entry was at 15:56.
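
White boxes on MEDM correspond to EPICS channels that no longer connect. A minimal way to confirm that symptom from the command line, sketched with pyepics; the channel name is a placeholder, not necessarily one served by the dc0 IOC:

# Channel name below is illustrative only.
from epics import PV

pv = PV('H1:DAQ-DC0_GPS')  # hypothetical channel on the dc0 IOC
pv.wait_for_connection(timeout=5.0)
print('IOC alive' if pv.connected else 'PV did not connect: IOC likely down')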

I connected a monitor to its VGA port; the console was showing the login prompt. The cursor was not flashing and an attached keyboard was unresponsive.

Erik and I rebooted the machine by pressing the front panel RESET button. It booted and started with no problems.

Currently we don't know why dc0 froze this way.

Comments related to this report
david.barker@LIGO.ORG - 08:44, Thursday 19 June 2025 (85177)

FW0 full frame gap due to crash and restart:

Jun 18 16:14 H-H1_R-1434323584-64.gwf
Jun 18 16:26 H-H1_R-1434324288-64.gwf
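
The gap size follows directly from the GPS start times in the file names (each raw frame spans 64 s); a quick worked check:

# Frame files are named H-H1_R-<gps_start>-64.gwf
last_before = 1434323584
first_after = 1434324288
gap = first_after - (last_before + 64)
print(gap, 'seconds =', gap // 64, 'missing 64 s frames')
# 640 seconds = 10 missing 64 s frames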

david.barker@LIGO.ORG - 10:44, Monday 23 June 2025 (85244)
H1 General
oli.patane@LIGO.ORG - posted 16:35, Wednesday 18 June 2025 (85161)
Ops Eve Shift Start

TITLE: 06/18 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 27mph Gusts, 14mph 3min avg
    Primary useism: 0.11 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:

Currently relocking and in MOVE_SPOTS. Wind and ground motion are still a bit high.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:16, Wednesday 18 June 2025 - last comment - 16:52, Wednesday 18 June 2025(85160)
Lockloss 21:28 UTC

Unknown cause lockloss. It could be attributed to the wind, which is on the rise and expected to continue tonight; gusts were over 30 mph at the time of the lockloss.

It seems there was a length kick in EY that was the first sign of the lockloss. Lockloss report.

Comments related to this report
elenna.capote@LIGO.ORG - 16:40, Wednesday 18 June 2025 (85163)

This lockloss could have been caused by a ring-up that appears at a frequency between 11 and 13 Hz in the LSC loops.

Here is the lockloss tool link: https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1434317297
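
To reproduce this look offline, a minimal sketch using gwpy (the channel choice is illustrative; any of the LSC error signals would do):

# Spectrogram of an LSC channel in the minute before the lockloss,
# zoomed around the 11-13 Hz band.
from gwpy.timeseries import TimeSeries

gps = 1434317297  # lockloss GPS time from the lockloss tool
data = TimeSeries.get('H1:LSC-DARM_IN1_DQ', gps - 60, gps)
specgram = data.spectrogram2(fftlength=4, overlap=2) ** (1/2.)
plot = specgram.plot(norm='log', yscale='log')
plot.gca().set_ylim(5, 50)
plot.savefig('lsc_ringup.png')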

Images attached to this comment
oli.patane@LIGO.ORG - 16:52, Wednesday 18 June 2025 (85166)

23:50 Observing

H1 General (ISC, OpsInfo)
elenna.capote@LIGO.ORG - posted 14:07, Wednesday 18 June 2025 (85158)
Some observe SDF diffs reverted

There were a few pre-observing SDF diffs in ramp times that I think came from running various scripts, like the A2L and DARM offset step scripts. I reverted all of the changes so we (hopefully) don't get another SDF diff next time we lock.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 11:57, Wednesday 18 June 2025 (85156)
DC6 P does not contribute noise to DARM

I ran a noise budget injection into DC6 (centering loop for POP WFS), using a broadband excitation from 10-100 Hz. Based on the results in the DTT template (attached), there is no measurable contribution to noise in DARM when injecting about 100x above ambient in the DC6 P control (bottom left plot; ref trace shows quiet time, live trace shows injection). We can include this channel in the ASC noise budget, but our code won't even generate a trace, since the reference DARM and injection DARM shown here are exactly the same (top left plot; blue reference trace shows quiet time, red live trace shows injection time).
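
For context, the projection arithmetic behind this kind of measurement: the excess DARM during the injection, divided by the injected witness level, gives a coupling transfer function, which then scales the quiet-time witness spectrum. A minimal numpy sketch (inputs are ASDs on a common frequency axis; the function name is a placeholder):

import numpy as np

def project_ambient(darm_inj, darm_quiet, wit_inj, wit_quiet):
    # Excess DARM attributable to the injection (quadrature difference).
    excess = np.sqrt(np.clip(darm_inj**2 - darm_quiet**2, 0.0, None))
    coupling = excess / wit_inj   # coupling transfer function estimate
    return coupling * wit_quiet   # projected ambient contribution to DARM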

At a later time I will check DC6 Y.

Images attached to this report
H1 General
ibrahim.abouelfettouh@LIGO.ORG - posted 11:47, Wednesday 18 June 2025 - last comment - 13:46, Wednesday 18 June 2025(85155)
A2L All Run

Ran the A2L script for all suspensions during commissioning today. Screenshot attached.


Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 13:46, Wednesday 18 June 2025 (85157)

These values have been added to ISC_LOCK.py (screenshot of specific gains attached).

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:23, Wednesday 18 June 2025 (85154)
Wed CP1 fill

Wed Jun 18 10:11:15 2025 INFO: Fill completed in 11min 11secs


Images attached to this report
H1 SEI
anthony.sanchez@LIGO.ORG - posted 09:49, Wednesday 18 June 2025 (85153)
H1 ISI CPS Noise Spectra Check - Weekly

H1 ISI CPS Noise Spectra Check - FAMIS 26048

NEW and IMPROVED H1 ISI CPS Noise Spectra check now includes HAM1!

HAM1 currently has some very loud VAC equipment attached to it, which is running and may be why HAM1 looks so terrible relative to the rest of the HAMs.

Non-image files attached to this report
H1 AOS
camilla.compton@LIGO.ORG - posted 09:39, Wednesday 18 June 2025 - last comment - 14:13, Wednesday 18 June 2025(85149)
SQZ ASC and FC beamspot control back on

Sheila, Camilla

We can see large SQZ changes dependent on the OPO PZT value; we've seen this before. Some alignment changes from this PZT should be compensated by the FC ASC and FC beamspot control. The FC beamspot control has been off since the vent, but we've turned it on again in the hope of reducing this dependency.

Yesterday we needed to turn the ASC on to improve high-freq sqz 85147, and since we've started using the THERMALIZATION guardian 85083 to slowly adjust the SRCL offset, the changes in our squeezing and ASC error signals are slightly reduced (see below). We have turned the SQZ ASC back on, as we expect this new guardian will stop the ASC from running away.

Now that we have the THERMALIZATION guardian working, the ADF-measured sqz angle change has reduced (see below). We want to try turning SQZ_ANG_SERVO back on, which will take a little tuning of settings. You can see in this plot that when the OPO PZT changed, the servo would have adjusted the sqz angle too.

Also touched the SHG launch waveplates to decrease the rejected power in H1:SQZ-SHG_FIBR_REJECTED_DC_POWER.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:13, Wednesday 18 June 2025 (85159)

SQZ ASC was running away at the start of the lock, so I've turned it off again.

I tried re-measuring the sensing matrix. The result was different from that measured in September 80373 with the YAW sensor swapped (see output of running python AS42_sensing_matrix_cal.py in /sqz/h1/scripts/ASC/ below), but when I tried it again later the sensing matrix seemed different still. I expect you need to start in good squeezing for it to work well, which we were not when I tried it.

Plan to remeasure the sensing matrix more carefully (as in 80373) and then try a new one again.

Also tried the SQZ_ANG_ADJUST servo today using the ADF; since the ASC was running away this was confusing, so it was left off.

Using ZM4 and ZM6.
PIT Sensing Matrix is:
[[-0.0006984 -0.0011916]
 [ 0.00143    0.001    ]]
PIT Input Matrix is:
[[  994.44305222  1184.97834103]
 [-1422.05356468  -694.51902767]]
YAW Sensing Matrix is:
[[-0.0031535  0.       ]
 [ 0.        -0.00165  ]]
YAW Input Matrix is:
[[-317.10797527   -0.        ]
 [  -0.         -606.06060606]]
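
As a sanity check, each input matrix quoted above is the matrix inverse of the corresponding sensing matrix; a couple of lines of numpy reproduce the PIT numbers to within the rounding of the printed sensing matrix:

import numpy as np

pit_sensing = np.array([[-0.0006984, -0.0011916],
                        [ 0.00143,    0.001    ]])
print(np.linalg.inv(pit_sensing))
# [[  994.43  1184.97]
#  [-1422.04  -694.51]]
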
H1 CDS
david.barker@LIGO.ORG - posted 09:07, Wednesday 18 June 2025 (85152)
h1guardian1 rebooted to fix AWG issue

Jonathan, Erik, Dave, Ibrahim, Tony:

Following the front end restarts yesterday, there has been a spate of Guardian AWG connection issues, e.g. alog 85130.

Erik recommended that we reboot h1guardian1 at the next opportunity to force new front end connections for all Guardian nodes, rather than restart each node as the problem arises.

Following a lock loss this morning, the control room gave us the go-ahead to reboot h1guardian1 at 08:49 Wed 18 Jun 2025 PDT. Like TJ's last reboot 12 days ago, h1guardian1 came back and restarted all of its nodes quickly and without any problems.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 09:06, Wednesday 18 June 2025 (85151)
Lockloss 14:56

Unknown cause lockloss

However, our two guesses are:

H1 CDS
david.barker@LIGO.ORG - posted 08:59, Wednesday 18 June 2025 (85150)
New HAM1 vacuum gauge PT100_MOD2 added to MEDM and FOM

Reminder that for the past month HAM1 pressure was being reported by a temporary "H1" version of the old cold-cathode PT100A (called H1:VAC-LY_X0_PT100B_PRESS_TORR), which uses an ADC in h0vacly and calculates the pressure from the raw voltage signal.
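
For reference, analog gauge readouts of this type typically use a log-linear volts-to-torr conversion; a sketch of that style of calculation (the slope and offset are placeholders, not the actual PT100A calibration):

# Hypothetical log-linear conversion: each volt spans one pressure decade.
# Coefficients are illustrative only.
def volts_to_torr(v, volts_per_decade=1.0, offset_v=10.0):
    return 10.0 ** ((v - offset_v) / volts_per_decade)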

Yesterday Patrick installed the h0vaclx full Beckhoff readout of this gauge via an ethernet connection. The channel name for this gauge from now onwards is H0:VAC-LX_X0_PT100_MOD2_PRESS_TORR

For now I'm showing both channels on the Vacuum MEDM and FOMs: H1:VAC-LY_X0_PT100B_PRESS_TORR upper, H0:VAC-LX_X0_PT100_MOD2_PRESS_TORR lower.

Note that the H1 channel reads slightly higher; the voltage signal increased slightly when Gerardo plugged the ethernet cable into h0vaclx yesterday.

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:33, Wednesday 18 June 2025 (85148)
OPS Day Shift Start

TITLE: 06/18 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 11:57 UTC

Looks like we were able to recover fully automatically from a 5.8 earthquake in Mexico last night.


H1 General (GRD)
oli.patane@LIGO.ORG - posted 22:04, Tuesday 17 June 2025 (85147)
Ops Eve Shift End

TITLE: 06/18 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Currently Observing at 145 Mpc and have been Locked for almost 2 hours. Everything is looking good.

Two hours into our previous lock, our range was slowly dropping and we could see that SQZ wasn't looking very good. Trending back the optic alignments, Camilla saw that ZM4 and ZM6 weren't where they were supposed to be because the ASC hadn't been offloaded before all the updates earlier today (ndscope1). To fix this, we popped out of Observing and turned on the SQZ ASC for three minutes, which made the squeezing better (ndscope2)! This better squeezing has persisted into this next lock stretch.

After the 01:30 lockloss (85145), I sat in DOWN for a couple of minutes while restarting a few Guardian nodes, since awg was restarted earlier today and this should help with the guardian node errors (like in 85130). These are the ones I could think of that probably all use awg: ALS_{X,Y}ARM, ESD_EXC_{I,E}TM{X,Y}, SUS_CHARGE, PEM_MAG_INJ. Tagging Guardian.
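
For the record, a sketch of that restart step, assuming the usual guardctrl command-line interface and the node list above:

# Restart the guardian nodes most likely to hold stale awg connections.
import subprocess

nodes = (['ALS_XARM', 'ALS_YARM', 'SUS_CHARGE', 'PEM_MAG_INJ']
         + [f'ESD_EXC_{o}TM{a}' for o in 'IE' for a in 'XY'])
subprocess.run(['guardctrl', 'restart'] + nodes, check=True)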

LOG:

23:30 Observing and have been Locked for over 1 hour
00:24 Popped out of Observing quickly to run the SQZ ASC for a few minutes
00:27 Back into Observing
01:30 Lockloss
    - Sat in DOWN for a couple minutes while I restarted some Guardian nodes
    - We ended up in CHECK_MICH_FRINGES but that didn't help, so I started an IA
03:05 NOMINAL_LOW_NOISE
03:08 Observing                                                                                                          

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
15:22 | SAF | LASER SAFE | LVEA | SAFE | LVEA is LASER SAFE ദ്ദി( •_•) | 15:37
20:49 | ISS | Keita, Rahul | Optics lab | LOCAL | ISS array work (Rahul out 23:45) | 00:28
Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 18:49, Tuesday 17 June 2025 - last comment - 20:44, Tuesday 17 June 2025(85145)
Lockloss

Lockloss @ 06/18 01:30 UTC

Comments related to this report
oli.patane@LIGO.ORG - 20:44, Tuesday 17 June 2025 (85146)

03:08 Observing

H1 SUS (CDS)
erik.vonreis@LIGO.ORG - posted 16:12, Tuesday 17 June 2025 - last comment - 17:50, Tuesday 17 June 2025(85135)
Excessive movement in ITMY may have been caused by bad IPC table in OMC0

A trend of H1:IOP-SUS_ITMY_WD_OSEM1_RMSOUT shows increased motion, including two peaks, during the 10 minutes post-RCG upgrade when OMC0 was clobbering IPCs (see alog 85120).
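
The same trend can be pulled with gwpy (the times here are placeholders for the upgrade window; the channel is from this entry):

# Max minute-trend of the ITMY watchdog OSEM RMS around the upgrade.
from gwpy.timeseries import TimeSeries

rms = TimeSeries.get('H1:IOP-SUS_ITMY_WD_OSEM1_RMSOUT.max,m-trend',
                     'June 17 2025 21:00', 'June 18 2025 00:00')
rms.plot().savefig('itmy_osem_rms.png')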

The attached screenshot has cursors at the approximate start and end of OMC0 clobbering IPCs. The RMS remained high until guardian was started 30 minutes later, after which ITMY continued to ring until guardian was restarted again.

We will attempt to trace the clobbered IPCs to see if they plausibly could have driven ITMY.

Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 17:37, Tuesday 17 June 2025 (85143)

The attached list shows the mapping from OMC0 IPCs to the IPCs that were clobbered during the ten minutes OMC0 was running on the wrong IPC table.


Non-image files attached to this comment
erik.vonreis@LIGO.ORG - 17:50, Tuesday 17 June 2025 (85144)

ITMX, which received the same clobbered channel as ITMY, also showed a spike in movement during the same period, but was properly stilled by guardian.


Images attached to this comment