H1 AOS
robert.schofield@LIGO.ORG - posted 18:02, Sunday 30 June 2024 (78767)
More ITM bias adjustment during observation

This is a continuation of the work discussed in 78734

 

Start of change (GPS) | End of change (GPS) | ITMX bias at start (V) | ITMX bias at end (V) | ITMY bias at start (V) | ITMY bias at end (V)
1403804070            | 1403804192          | 0                      | 0                    | 0                      | 77
1403804365            | 1403804489          | 0                      | 0                    | 77                     | -80
1403807629            | 1403807751          | 0                      | 0                    | -80                    | -20
1403809555            | 1403809816          | 0                      | -40                  | -20                    | -40
1403812204            | 1403812324          | -40                    | 36                   | -40                    | -40
1403816907            | 1403817127          | 36                     | 36                   | -40                    | 0
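For anyone who wants to look at the data around these adjustments, here is a minimal sketch (not part of the original measurement) of pulling one of the spans above with gwpy. The channel name is a placeholder assumption; substitute the actual ITMX/ITMY bias readback channel before using it.

    from gwpy.timeseries import TimeSeries

    # First span from the table above (GPS seconds)
    start, end = 1403804070, 1403804192

    # Placeholder channel name -- not confirmed here; use the real ITMX bias readback
    chan = "H1:SUS-ITMX_BIAS_PLACEHOLDER"

    bias = TimeSeries.get(chan, start, end)  # fetches via NDS2 or frame files
    print(bias.value[0], bias.value[-1])     # bias at the start and end of the span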

H1 PSL
ryan.short@LIGO.ORG - posted 17:11, Sunday 30 June 2024 (78766)
PSL FSS RefCav Remote Alignment Tweak

After H1 lost lock this afternoon, I took the opportunity to do a quick RefCav alignment tweak since the TPD was showing the transmission was low, at around 680 mV. Using the two picomotor-controlled mirrors in the FSS path, mostly adjusting in pitch, I was able to improve the signal from about 680 mV to 870 mV. That's more of an increase than I was expecting, but it should be good to last until the PMC is swapped out on Tuesday (at which point this alignment may need to be done again). I suspect I could've spent more time to improve this further, but I stopped here so as not to delay IFO locking any longer.

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 16:49, Sunday 30 June 2024 - last comment - 18:04, Sunday 30 June 2024(78765)
Lockloss @ 23:39 UTC

Lockloss @ 23:39 UTC - link to lockloss tool

Locked for 26 minutes. No obvious cause, but I see the ETMX motion about 100ms before the lockloss this time.

I'm going to take this opportunity to do a quick FSS RefCav alignment adjustment so that it can hopefully make it until Tuesday when we swap out the PMC.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 18:04, Sunday 30 June 2024 (78768)

H1 back to observing at 00:57 UTC. Pretty much a fully automated relock; I just slightly adjusted PRM to make buildups during PRMI better, but it might've caught on its own.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:25, Sunday 30 June 2024 (78764)
Ops Day Shift End

TITLE: 06/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Just got back to low noise after we had a lock loss that ended a 15-hour lock. Relocking was fully auto and it didn't even run an initial alignment.
LOG:

Start Time | System | Name     | Location | Lazer_Haz | Task                     | Time End
16:08      | SAF    | LVEA     | LVEA     | YES       | LVEA IS LASER HAZARD     | 10:08
18:23      | SQZ    | Terry    | Opt Lab  | local     | SHG work                 | 21:47
21:05      | SQZ    | Kar Meng | Opt Lab  | local     | SHG work                 | 01:05
22:40      | PEM    | Robert   | LVEA     | y         | Looking for noise source | 23:00

 

LHO General
ryan.short@LIGO.ORG - posted 16:09, Sunday 30 June 2024 - last comment - 16:22, Sunday 30 June 2024(78761)
Ops Eve Shift Start

TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 7mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 lost lock about an hour ago but is relocking well; so far up to TRANSITION_FROM_ETMX.

Comments related to this report
ryan.short@LIGO.ORG - 16:22, Sunday 30 June 2024 (78762)PSL

The RefCav TPD is reading down to 0.69 V, showing a warning on DIAG_MAIN, and has been falling over the past couple of weeks. Since this is likely due to the increased PMC loss lowering the output of the cavity, I don't expect fixing the RefCav alignment will gain much, but I can try an alignment this evening if the IFO loses lock.

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 15:21, Sunday 30 June 2024 (78760)
Lock loss 2216 UTC

Lockloss 1403820932

Ending a 15-hour lock. This lock had the ETMX wiggles as well, but they happened almost a second prior to the lock loss and were larger than I have seen before.

Images attached to this report
H1 PSL
thomas.shaffer@LIGO.ORG - posted 10:45, Sunday 30 June 2024 (78759)
PSL Status Report - Weekly

FAMIS 26260


Laser Status:
    NPRO output power is 1.821W (nominal ~2W)
    AMP1 output power is 67.15W (nominal ~70W)
    AMP2 output power is 137.3W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 32 days, 23 hr 34 minutes
    Reflected power = 22.75W
    Transmitted power = 105.2W
    PowerSum = 127.9W

FSS:
    It has been locked for 0 days 11 hr and 42 min
    TPD[V] = 0.6936V

ISS:
    The diffracted power is around 2.0%
    Last saturation event was 0 days 11 hours and 43 minutes ago


Possible Issues:
    PMC reflected power is high
    FSS TPD is low

LHO VE
david.barker@LIGO.ORG - posted 10:15, Sunday 30 June 2024 (78758)
Sun CP1 Fill

Sun Jun 30 10:09:55 2024 INFO: Fill completed in 9min 52secs

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:33, Sunday 30 June 2024 (78756)
Ops Day Shift Start

TITLE: 06/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 7mph Gusts, 6mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 7.5 hours, noise and range look okay.

LHO General
ryan.short@LIGO.ORG - posted 01:00, Sunday 30 June 2024 (78755)
Ops Eve Shift Summary

TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Three locklosses this shift, all with unknown causes, but only two showed the ETMX motion beforehand that we've been seeing. Recovery from each was fairly straightforward; however, the ALS-X PLL continued to be problematic and kept unlocking while trying to recover from the first lockloss.

H1 has been observing for 1 hour.

LOG: No log for this shift.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 23:16, Saturday 29 June 2024 - last comment - 00:03, Sunday 30 June 2024(78753)
Lockloss @ 06:00 UTC

Lockloss @ 06:00 UTC - link to lockloss tool

Locked for 49 minutes. No obvious cause; larger ETMX motion right before the lockloss this time.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 00:03, Sunday 30 June 2024 (78754)

H1 back to observing at 07:01 UTC. Automatic relock except for manually adjusting ETMX to lock ALS X faster.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 21:24, Saturday 29 June 2024 - last comment - 22:15, Saturday 29 June 2024(78751)
Lockloss @ 04:15 UTC

Lockloss @ 04:15 UTC - link to lockloss tool

Locked for 38 minutes. No obvious cause, but I see the familiar small ETMX hit almost a half second before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 22:15, Saturday 29 June 2024 (78752)

H1 back to observing at 05:13 UTC

H1 AOS
robert.schofield@LIGO.ORG - posted 17:35, Saturday 29 June 2024 (78749)
Aborted HVAC shutdowns

We lost lock just as I was beginning  HVAC shutdowns to take advantage of the nearly 160 Mpc range. When we regained lock, the range was only about what it was for my last shutdown (77477), so I will defer.  Here are the times for what I did do:

Start of shutdown (UTC) | Start of end of shutdown (UTC) | Equipment shut down
16:25                   | 16:37                          | Office area HVAC
16:50                   | 16:57 (lock loss 16:55:49)     | Chiller, all turbines, office area HVAC, split minis in CER

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 17:08, Saturday 29 June 2024 - last comment - 20:43, Saturday 29 June 2024(78748)
Lockloss @ 23:57 UTC

Lockloss @ 23:57 UTC - link to lockloss tool

Ends a 6-hour lock. No obvious cause, and I don't see the ETMX motion prior to this lockloss that we've seen in the past.

Comments related to this report
ryan.short@LIGO.ORG - 20:43, Saturday 29 June 2024 (78750)

H1 back to observing at 03:41 UTC.

ALS X PLL unlocking caused frequent interruptions in this relock. Eventually made it to DRMI, where PRM needed adjustments after going through MICH_FRINGES.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:20, Saturday 29 June 2024 (78741)
Ops Day Shift End

TITLE: 06/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lock loss with a straightforward relock, and then a delayed calibration measurement. We've been locked for 5.5 hours.
LOG:

Start Time | System  | Name   | Location | Lazer_Haz | Task                 | Time End
16:08      | SAF     | LVEA   | LVEA     | YES       | LVEA IS LASER HAZARD | 10:08
16:23      | PEM/FAC | Robert | Site     | n         | HVAC shutdowns       | 19:02
19:03      | SQZ     | Terry  | Opt Lab  | local     | SHG work             | 00:57
H1 CAL
thomas.shaffer@LIGO.ORG - posted 14:34, Saturday 29 June 2024 - last comment - 18:27, Wednesday 03 July 2024(78746)
Calibration Sweep 2106 UTC

Calibration sweep taken today at 2106 UTC in coordination with LLO and Virgo. This was delayed today since we weren't thermalized at 11:30 PT.

Simulines start:

PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107

End:

PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
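As a quick cross-check (a sketch, assuming gwpy is available in the environment used for these measurements), the GPS stamps above convert back to the quoted UTC times:

    from gwpy.time import tconvert

    print(tconvert(1403730723.566107))  # expect 2024-06-29 21:11:45.566107 UTC (start)
    print(tconvert(1403732006.154689))  # expect 2024-06-29 21:33:08.154689 UTC (end)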
 

I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure if this pops up frequently and this is just the first time I've caught it.

Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
 

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 09:48, Sunday 30 June 2024 (78757)

One item to note is that h1susex has been running a different version of awgtpman since last Tuesday.

erik.vonreis@LIGO.ORG - 10:01, Monday 01 July 2024 (78777)

This almost certainly failed to start the excitation.

I tested a 0-amplitude excitation on the same channel using awggui with no issue.

There may be something wrong with the environment the script is running in.

louis.dartez@LIGO.ORG - 13:48, Monday 01 July 2024 (78784)
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. 

Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
louis.dartez@LIGO.ORG - 15:17, Wednesday 03 July 2024 (78842)
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment...which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
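One quick way to see which environment and awg module simulines would actually pick up (a minimal sketch, not an official procedure) is to run the following from the same shell used to launch the script:

    import sys
    import awg

    print(sys.executable)  # which Python interpreter is in use (base cds env vs. a cal-managed env)
    print(awg.__file__)    # which awg module actually gets imported
    print(getattr(awg, "__version__", "awg has no __version__ attribute"))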
erik.vonreis@LIGO.ORG - 16:51, Wednesday 03 July 2024 (78846)

The CDS environment was stable between June 22 (last good run) and Jun 29.

There may have been another failure on June 27, which would make two failures and no successes since the upgrade.

The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period.  Compare with the graph from June 22.

Images attached to this comment
erik.vonreis@LIGO.ORG - 18:27, Wednesday 03 July 2024 (78851)

On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
