Reports until 16:09, Sunday 30 June 2024
LHO General
ryan.short@LIGO.ORG - posted 16:09, Sunday 30 June 2024 - last comment - 16:22, Sunday 30 June 2024(78761)
Ops Eve Shift Start

TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 7mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 lost lock about an hour ago but is relocking well; so far up to TRANSITION_FROM_ETMX.

Comments related to this report
ryan.short@LIGO.ORG - 16:22, Sunday 30 June 2024 (78762)PSL

The RefCav TPD is reading down to 0.69V, showing a warning on DIAG_MAIN, and has been falling over the past couple of weeks. Since this is likely due to the increased PMC loss lowering the output of the cavity, I don't expect a RefCav realignment to recover much, but I can try an alignment this evening if the IFO loses lock.

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 15:21, Sunday 30 June 2024 (78760)
Lock loss 2216UTC

Lockloss 1403820932

Ending a 15 hour lock. This lock had the ETMX wiggles as well, but they happened almost a second prior to the lock loss and were larger than any I have seen.

Images attached to this report
H1 PSL
thomas.shaffer@LIGO.ORG - posted 10:45, Sunday 30 June 2024 (78759)
PSL Status Report - Weekly

FAMIS 26260


Laser Status:
    NPRO output power is 1.821W (nominal ~2W)
    AMP1 output power is 67.15W (nominal ~70W)
    AMP2 output power is 137.3W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 32 days, 23 hr 34 minutes
    Reflected power = 22.75W
    Transmitted power = 105.2W
    PowerSum = 127.9W

FSS:
    It has been locked for 0 days 11 hr and 42 min
    TPD[V] = 0.6936V

ISS:
    The diffracted power is around 2.0%
    Last saturation event was 0 days 11 hours and 43 minutes ago


Possible Issues:
    PMC reflected power is high
    FSS TPD is low
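
These values come from PSL EPICS channels; a minimal sketch of pulling such a snapshot with pyepics is below. The channel names are hypothetical placeholders, not verified H1:PSL-* channels (the real names should come from the PSL MEDM screens).

# Sketch only (not the actual weekly report script): PSL status snapshot.
# All channel names below are hypothetical placeholders.
from epics import caget

CHANNELS = {
    "NPRO output power [W]": "H1:PSL-NPRO_OUTPUT_POWER",  # hypothetical
    "AMP1 output power [W]": "H1:PSL-AMP1_OUTPUT_POWER",  # hypothetical
    "AMP2 output power [W]": "H1:PSL-AMP2_OUTPUT_POWER",  # hypothetical
    "PMC reflected [W]":     "H1:PSL-PMC_REFL_POWER",     # hypothetical
    "PMC transmitted [W]":   "H1:PSL-PMC_TRANS_POWER",    # hypothetical
    "FSS TPD [V]":           "H1:PSL-FSS_TPD",            # hypothetical
}

for label, chan in CHANNELS.items():
    print(f"{label:24s} {caget(chan)}")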

LHO VE
david.barker@LIGO.ORG - posted 10:15, Sunday 30 June 2024 (78758)
Sun CP1 Fill

Sun Jun 30 10:09:55 2024 INFO: Fill completed in 9min 52secs

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:33, Sunday 30 June 2024 (78756)
Ops Day Shift Start

TITLE: 06/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 7mph Gusts, 6mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 7.5 hours, noise and range look okay.

LHO General
ryan.short@LIGO.ORG - posted 01:00, Sunday 30 June 2024 (78755)
Ops Eve Shift Summary

TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Three locklosses this shift, all with unknown causes; only two showed the ETMX motion beforehand that we've been seeing. Recovery from each was fairly straightforward, although the ALS-X PLL kept unlocking and was problematic while recovering from the first lockloss.

H1 has been observing for 1 hour.

LOG: No log for this shift.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 23:16, Saturday 29 June 2024 - last comment - 00:03, Sunday 30 June 2024(78753)
Lockloss @ 06:00 UTC

Lockloss @ 06:00 UTC - link to lockloss tool

Locked for 49 minutes. No obvious cause; larger ETMX motion right before the lockloss this time.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 00:03, Sunday 30 June 2024 (78754)

H1 back to observing at 07:01 UTC. Automatic relock except for manually adjusting ETMX to lock ALS X faster.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 21:24, Saturday 29 June 2024 - last comment - 22:15, Saturday 29 June 2024(78751)
Lockloss @ 04:15 UTC

Lockloss @ 04:15 UTC - link to lockloss tool

Locked for 38 minutes. No obvious cause, but I see the familiar small ETMX hit almost a half second before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 22:15, Saturday 29 June 2024 (78752)

H1 back to observing at 05:13 UTC

H1 AOS
robert.schofield@LIGO.ORG - posted 17:35, Saturday 29 June 2024 (78749)
aborted HVAC shutdowns

We lost lock just as I was beginning HVAC shutdowns to take advantage of the nearly 160 Mpc range. When we regained lock, the range was only about what it was for my last shutdown (77477), so I will defer. Here are the times for what I did do:

Start of shutdown (UTC)   Start of end of shutdown (UTC)   Equipment shut down
16:25                     16:37                            Office area HVAC
16:50                     16:57 (lock loss 16:55:49)       Chiller, all turbines, office area HVAC, split minis in CER

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 17:08, Saturday 29 June 2024 - last comment - 20:43, Saturday 29 June 2024(78748)
Lockloss @ 23:57 UTC

Lockloss @ 23:57 UTC - link to lockloss tool

Ends lock at 6 hours. No obvious cause, and I don't see the ETMX motion prior to this lockloss as we've seen in the past.

Comments related to this report
ryan.short@LIGO.ORG - 20:43, Saturday 29 June 2024 (78750)

H1 back to observing at 03:41 UTC.

ALS X PLL unlocking caused frequent interruptions during this relock. Eventually made it to DRMI, where PRM needed adjustments after going through MICH_FRINGES.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:20, Saturday 29 June 2024 (78741)
Ops Day Shift End

TITLE: 06/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lock loss with a straightforward relock, and then a delayed calibration measurement. We've been locked for 5.5 hours.
LOG:

Start Time  System   Name    Location  Laser_Haz  Task                  Time End
16:08       SAF      LVEA    LVEA      YES        LVEA IS LASER HAZARD  10:08
16:23       PEM/FAC  Robert  Site      n          HVAC shutdowns        19:02
19:03       SQZ      Terry   Opt Lab   local      SHG work              00:57
LHO General
ryan.short@LIGO.ORG - posted 16:06, Saturday 29 June 2024 (78747)
Ops Eve Shift Start

TITLE: 06/29 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 has been locked for 5 hours.

H1 CAL
thomas.shaffer@LIGO.ORG - posted 14:34, Saturday 29 June 2024 - last comment - 18:27, Wednesday 03 July 2024(78746)
Calibration Sweep 2106 UTC

Calibration sweep taken today at 2106 UTC in coordination with LLO and Virgo. This was delayed today since we weren't thermalized at 11:30 PT.

Simulines start:

PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107

End:

PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
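
For reference, GPS stamps like these can be cross-checked against UTC with astropy (a sketch, not the tool used to generate the times above; astropy applies the current 18 s GPS-UTC offset internally):

# Sketch: cross-checking the simulines GPS timestamps against UTC.
from astropy.time import Time

start = Time(1403730723.566107, format="gps")
end = Time(1403732006.154689, format="gps")

print(start.utc.iso)   # 2024-06-29 21:11:45.566
print(end.utc.iso)     # 2024-06-29 21:33:08.155
print(f"sweep duration: {(end - start).sec:.0f} s")   # ~1283 s (~21 min)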

I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure if this pops up frequently and this is just the first time I've caught it.

Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
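
The failure surfaces where SignalInjection calls drive.start(). A defensive sketch of how that call could be guarded is below; this is not the actual simuLines.py code, and the retry count and logging are assumptions:

# Sketch only: guarding drive.start() against transient AWG slot failures.
# drive, rampUp, and awg.AWGError are as they appear in the traceback above;
# the retry loop itself is an assumption, not actual simuLines.py code.
import time
import awg

def start_with_retry(drive, ramp_up, retries=3, wait=2.0):
    """Try to start an excitation, retrying if the AWG slot can't be set."""
    for attempt in range(1, retries + 1):
        try:
            drive.start(ramptime=ramp_up)  # blocking; starts on a GPS second
            return
        except awg.AWGError as err:
            print(f"attempt {attempt}/{retries} failed: {err}")
            time.sleep(wait)
    raise RuntimeError(f"could not start excitation on {drive.chan}")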

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 09:48, Sunday 30 June 2024 (78757)

One item to note is that h1susex has been running a different version of awgtpman since last Tuesday.

erik.vonreis@LIGO.ORG - 10:01, Monday 01 July 2024 (78777)

This almost certainly failed to start the excitation.

I tested a 0-amplitude excitation on the same channel using awggui with no issue.

There may be something wrong with the environment the script is running in.

louis.dartez@LIGO.ORG - 13:48, Monday 01 July 2024 (78784)
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. 

Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
louis.dartez@LIGO.ORG - 15:17, Wednesday 03 July 2024 (78842)
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base CDS environment, which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
erik.vonreis@LIGO.ORG - 16:51, Wednesday 03 July 2024 (78846)

The CDS environment was stable between June 22 (last good run) and June 29.

There may have been another failure on June 27, which would make two failures and no successes since the upgrade.

The attached graph for June 27 shows an excitation at EY but no associated excitation at EX during the same period. Compare with the graph from June 22.

Images attached to this comment
erik.vonreis@LIGO.ORG - 18:27, Wednesday 03 July 2024 (78851)

On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.

H1 SQZ
andrei.danilin@LIGO.ORG - posted 11:21, Saturday 29 June 2024 - last comment - 11:21, Saturday 29 June 2024(78726)
Transfer Function from the FC control signal counts to displacement in μm

Andrei, Naoki, Sheila

For the upcoming measurement of the FC backscattering, we need to calibrate the length change of the FC. To do this, we calculated the transfer function from the GS_SL FC control signal [Hz] to FC2 displacement [μm]. We followed the steps in Diagram.png to get the result. The plot bode_all_datasets.png contains all the datasets used.

The resulting transfer function is presented in Tranfer_func.png (where the Result curve is the transfer function). The result was exported as a frequency/magnitude/phase dataset and can be found in result_data.txt. The remaining .txt files contain all the datasets used for this calculation.

Assuming that a shift of the FC resonance frequency Δf = c/(2L) corresponds to an FC length change ΔL = λ/2 (λ = 532 nm, L = 300 m), then Δf/ΔL = c/(L·λ) = 1.88×10^12 Hz/m = 1.88×10^6 Hz/μm. Multiplying the transfer function by this coefficient gives an open-loop unity gain frequency of 39.4 Hz. The open-loop gain plot (after multiplication) can be found in openloop_gain.png.
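
A quick numerical check of that coefficient (plain arithmetic, shown as a Python sketch):

# Check of the Hz -> μm conversion coefficient used above.
# A resonance shift Δf = c/(2L) corresponds to a length change ΔL = λ/2,
# so Δf/ΔL = c / (L * λ).
c = 299_792_458.0   # speed of light [m/s]
L = 300.0           # filter cavity length [m]
lam = 532e-9        # green wavelength [m]

coeff = c / (L * lam)                           # [Hz/m]
print(f"{coeff:.3e} Hz/m = {coeff * 1e-6:.3e} Hz/um")
# -> 1.878e+12 Hz/m = 1.878e+06 Hz/um, i.e. the 1.88e12 quoted above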

Images attached to this report
Non-image files attached to this report
Comments related to this report
naoki.aritomi@LIGO.ORG - 13:17, Friday 28 June 2024 (78728)

For the FC2 suspension plant, we used sus_um in the H1:CAL-CS_SUM_PRCL_PRM filter bank. The sus_um is the PRM suspension plant in units of μm/count. Although FC2 and PRM are both HSTS suspensions, the FC2 and PRM M2 and M3 actuation strengths differ by a factor of 0.326/2.83 according to the suspensions control design summary table on the control room door, shown in the attachment (TACQ for FC2, TACQ*** for PRM). So we added this factor to the FC2 M3 path, as sketched below.
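
A minimal sketch of applying that scale factor (the variable holding the PRM plant is illustrative, not a real API call):

# Sketch: scaling the PRM M3 plant (sus_um, μm/count) to FC2 using the
# TACQ actuation-strength ratio from the suspensions summary table.
fc2_over_prm = 0.326 / 2.83   # ≈ 0.115

def fc2_m3_plant(prm_m3_plant_um_per_count):
    """Scale the PRM M3 suspension plant response to FC2 (illustrative)."""
    return prm_m3_plant_um_per_count * fc2_over_prm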

Images attached to this comment
H1 ISC
thomas.shaffer@LIGO.ORG - posted 11:00, Saturday 29 June 2024 (78744)
Changed TMSX Opticalign tramp from 20->2

The long ramp time was confusing me while I was trying to adjust the TMS to speed up locking, so I looked into what needed such a long ramp. I thought it was the TMS servo, but that seems to only use the TEST bank. I couldn't find anywhere else in ISC_LOCK or the ALS guardians that specifically referenced this bank, so I changed it to 2 sec, like for TMSY, and then accepted it in SDF safe and observe. We made it through this acquisition at 2 sec, so maybe we're OK. A guardian-style sketch of the change is below.
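
For reference, the change amounts to writing the filter bank's TRAMP field. In this sketch the Ezca constructor usage and the channel names are assumptions based on the usual SUS naming pattern, not verified against the model:

# Sketch only: setting the TMSX optic-align ramp time via ezca.
# Channel names and constructor arguments are assumptions.
from ezca import Ezca

ezca = Ezca(ifo="H1")
ezca["SUS-TMSX_M1_OPTICALIGN_P_TRAMP"] = 2  # was 20 s; now matches TMSY
ezca["SUS-TMSX_M1_OPTICALIGN_Y_TRAMP"] = 2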

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 10:08, Saturday 29 June 2024 - last comment - 11:03, Saturday 29 June 2024(78742)
Lock loss 1655 UTC

Lockloss 1403715367

There was ground motion from an earthquake just after the lock loss, but the lock loss itself seemed quite sudden. ETMX shows the usual wiggles that we often see.

Robert had just started running HVAC shutdown tests, but this seems very unlikely to be the cause.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 11:03, Saturday 29 June 2024 (78745)

Back to Observing at 1800UTC

I touched up TMSX P, PRM Y, and a bit of BS P to speed up acquisition and avoid an initial alignment.

Our ALSX PLL beatnote seems to have gone quite bad ~10 hours ago, dropping below -30 dBm. It recovered during this acquisition, but I worry we will have more issues with this.

Images attached to this comment