H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 18:46, Sunday 01 September 2024 - last comment - 20:53, Sunday 01 September 2024(79860)
Lockloss @ 01:35 UTC - PI modes 28/29

Lockloss @ 01:35 UTC - link to lockloss tool

PI mode 28 suddenly rang up about 5 minutes before the lockloss (almost exactly 2 hours after reaching NLN), and callouts for mode 29 started soon after. It appears that Guardian was trying to damp mode 29 instead of 28; I recall these modes being fairly close together, but I wonder whether this is the best way to damp this mode, since Guardian was unsuccessful. Unfortunately, I was not quick enough to intervene.

While the PIs were rung up shortly before the lockloss, I noticed the OMC TRANS camera start shaking as it has before (most recently in alog79748); I don't recall seeing it do that at any other point this weekend.
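For anyone revisiting this, a minimal sketch of pulling the mode 28/29 RMS monitors around this lockloss to confirm which mode rang up first. The channel names below follow the usual SUS-PI pattern but are an assumption here and should be verified against the PI MEDM screens:

from gwpy.timeseries import TimeSeriesDict

# Assumed channel names; verify before trusting this.
channels = [
    "H1:SUS-PI_PROC_COMPUTE_MODE28_RMSMON",
    "H1:SUS-PI_PROC_COMPUTE_MODE29_RMSMON",
]

# ~10 minutes leading up to the 01:35 UTC lockloss (2024-09-02 UTC).
data = TimeSeriesDict.get(channels, "2024-09-02 01:25", "2024-09-02 01:36")

plot = data.plot()
plot.gca().set_ylabel("RMS monitor [arb.]")
plot.savefig("pi_modes_28_29.png")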

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 20:53, Sunday 01 September 2024 (79862)ISC

H1 back to observing at 03:46 UTC. Fully automatic relock preceded by an initial alignment.

While relocking following this lockloss, I changed the nominal input max power back to 60W at Naoki's recommendation to hopefully avoid these 80kHz PIs.

H1 General (SQZ)
ryan.short@LIGO.ORG - posted 17:54, Sunday 01 September 2024 (79859)
H1 Back to Observing - SQZ SHG Fiber Launch Power Reduced

Once H1 relocked to NLN this evening, I noticed it didn't look like FDS was being injected, and then saw a notification on the SQZ_OPO_LR Guardian node saying, "too much pump light into fiber? Adjust the half wave plate." Tony pointed me to alog78696 where he needed to make a similar adjustment. I attempted to follow that procedure to lower the green SHG launch power below 35 (the threshold in Guardian) from where it was sitting around 36.6 using the half-wave plate, but was unsuccessful. Moving the 'SQZ SHG FIBR HWP/QWP' picomotor (motor 3) made the launch power signal shake as the wave plate moved, but the level did not change. I called Naoki for assistance and he corrected me: I should have been moving the half-wave plate located before the launch and rejected PDs rather than the one after them, making the correct motor 'SQZ Laser FIBR HWP/SHG GR power' (motor 2). He made the adjustments necessary to bring the launch power down and the rejected power up, then used Guardian to inject squeezing without issue.
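As a small aid for next time, a sketch of checking the green launch power against the Guardian threshold before requesting squeezing injection. The channel name below is a placeholder (this entry doesn't give the EPICS name), so substitute the real SHG fiber launch-power PD channel:

from epics import caget

# Placeholder name; replace with the actual SQZ SHG fiber launch-power PD channel.
LAUNCH_POWER_CH = "H1:SQZ-SHG_LAUNCH_POWER_PLACEHOLDER"
THRESHOLD = 35  # Guardian limit noted above

power = caget(LAUNCH_POWER_CH)
if power is not None and power < THRESHOLD:
    print(f"Launch power {power:.1f} is below {THRESHOLD}; OK to request SQZ injection.")
else:
    print(f"Launch power {power}; adjust the HWP before the launch/rejected PDs (motor 2).")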

H1 started observing at 00:18 UTC.

H1 General (SEI)
anthony.sanchez@LIGO.ORG - posted 16:32, Sunday 01 September 2024 (79858)
Sunday Ops Day Shift End

TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

The 11-hour lock was lost due to a 6.4-magnitude earthquake in the Solomon Islands.

Initial alignment was started at 21:42 UTC, once ground motion as seen by PeakMon had been consistently below 800 for 5 minutes.
Initial alignment completed while the ground motion was still above 200.
Locking started at 22:09 UTC.

22:40 UTC: Lockloss from LOWNOISE_COIL_DRIVER when a rogue increase in ground motion struck.
NOMINAL_LOW_NOISE reached at 23:30 UTC.
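For reference, a minimal sketch of the kind of PeakMon check described above; the channel name is a guess at the EQ-band peak ground motion monitor and should be confirmed against the actual PeakMon display:

from gwpy.timeseries import TimeSeries

# Assumed peak ground motion channel; verify the real name on site.
CHAN = "H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON"

# The 5 minutes before initial alignment was started at 21:42 UTC.
data = TimeSeries.get(CHAN, "2024-09-01 21:37", "2024-09-01 21:42")

print("max over window:", data.value.max())
print("stayed below 800:", bool((data.value < 800).all()))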
 

LOG:
No log

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:00, Sunday 01 September 2024 (79857)
Ops Eve Shift Start

TITLE: 09/01 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: EARTHQUAKE
    Wind: 8mph Gusts, 5mph 5min avg
    Primary useism: 0.15 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY: H1 is relocking following a large EQ and possible aftershock. Tony ran an initial alignment and H1 just locked DRMI.

H1 General
anthony.sanchez@LIGO.ORG - posted 13:24, Sunday 01 September 2024 - last comment - 13:43, Sunday 01 September 2024(79855)
Sunday Mid shift report.

TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 7mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Just chilling out max on this sunny Sunday!
Neil and his parents stopped by the control room for a quick chat about what we do here and a quick stroll to the overpass.
It's been great over here for exactly 11 hours.

~!--.--!~
There is an incoming 6.6-magnitude earthquake from the Solomon Islands...

 

Comments related to this report
anthony.sanchez@LIGO.ORG - 13:43, Sunday 01 September 2024 (79856)Lockloss, SEI

Lockloss from an earthquake.

USGS: 6.6 Mag near Solomon Islands.

Holding ISC_LOCK in IDLE while waiting for the ground motion to settle.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 08:25, Sunday 01 September 2024 - last comment - 09:01, Sunday 01 September 2024(79852)
VACSTAT BSC3 glitch, looks like a false positive

VACSTAT reported a glitch in BSC3, H0:VAC-LX_Y8_PT132_MOD2_PRESS_TORR at 02:21:57 Sun 01 Sep 2024 PDT.

This looks like a sensor glitch: it is only 6 seconds wide and has no characteristic pump-down curve. Nothing was seen in the neighbouring BSC2 gauge at this time.

Attachment shows VACSTAT MEDM and ndscope of PT132 covering about 40 mins.
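For anyone who wants to reproduce the ndscope view, a minimal sketch pulling the same ~40 minutes of PT132 data with gwpy (02:21:57 PDT corresponds to 09:21:57 UTC):

from gwpy.timeseries import TimeSeries

CHAN = "H0:VAC-LX_Y8_PT132_MOD2_PRESS_TORR"

# ~20 minutes either side of the 09:21:57 UTC glitch on 2024-09-01.
data = TimeSeries.get(CHAN, "2024-09-01 09:02", "2024-09-01 09:42")

plot = data.plot()
plot.gca().set_ylabel("Pressure [Torr]")
plot.savefig("pt132_glitch.png")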

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 08:43, Sunday 01 September 2024 (79853)

VACSTAT restarted at 08:42 to clear this glitch.

david.barker@LIGO.ORG - 09:01, Sunday 01 September 2024 (79854)

PT132 7-day trend showing two other spikes in the past week.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 08:17, Sunday 01 September 2024 (79851)
Sun CP1 Fill

Sun Sep 01 08:11:50 2024 INFO: Fill completed in 11min 46secs

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 08:02, Sunday 01 September 2024 (79850)
Sunday Ops Morning shift Start

TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 0mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

When I walked in this morning the IFO had been locked and observing for 5 hours.
IFO_Notify did not contact anyone over the OWL shift.
The IFO did lock itself overnight; it just took some time and had multiple locklosses before doing an initial alignment, which helped it get to NLN @ 9:28 UTC.


NUC 33 survived the night!

 

LHO General
ryan.short@LIGO.ORG - posted 23:26, Saturday 31 August 2024 (79848)
Ops Eve Shift Summary

TITLE: 09/01 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Two locklosses this evening, but relocking was straightforward and so far fully automated each time. H1 is currently relocking and is now locking the green arms.

LOG:

No log for this shift.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 23:06, Saturday 31 August 2024 (79849)
Lockloss @ 05:45 UTC

Lockloss @ 05:45 UTC - link to lockloss tool

No obvious cause, but it looks like LSC-MICH started seeing something wobble about 2 seconds before the lockloss.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:38, Saturday 31 August 2024 (79846)
Saturday Ops Day Shift End

TITLE: 08/31 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
I came in and H1 was locked, but it lost lock shortly after from the dreaded double PI ring-up.

Relocked without an initial alignment.

Another unknown lockloss.
I requested an initial alignment, which went easy peasy.
The locking process had a DRMI lockloss but didn't fully lose lock all the way to DOWN.

Got back up and running in NLN around 18:32 UTC, you know, just in time to postpone the calibration until 22:00.

Ran Francisco's ETMX drivealign script to try to get KAPPA back to 1.

Did a calibration sweep.

@ 23:00 UTC I gave NUC 33 a hard shutdown, and found out that the spare NUC I'd like to replace it with is mounted to the wall in such a way that it is not easily removed without dismounting the monitor bracket.

LOG:                                                                                                                                                                        

Start Time | System | Name | Location | Laser_Haz | Task | Time End
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24
17:12 | Vac | Gerardo | VPW | N | Checking on parts | 17:53
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 16:23, Saturday 31 August 2024 - last comment - 17:21, Saturday 31 August 2024(79844)
Lockloss @ 23:14 UTC

Lockloss @ 23:14 UTC - link to lockloss tool

No obvious cause; maybe some small motion by ETMX immediately before lockloss like we've seen before, but it's much smaller than usual.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 17:21, Saturday 31 August 2024 (79847)CAL

H1 back to observing at 00:14 UTC.

To go to observing, I reverted the SDF diff on the susetmx model for the new ETMX drivealign L2L gain provided by Francisco's script (screenshot attached; see alog79841). This had not been updated in Guardian-space, so it was set to the previous setpoint during TRANSITION_FROM_ETMX. I've updated the gain to 191.711514 in lscparams.py, saved it, and loaded ISC_LOCK.

Images attached to this comment
H1 CAL
anthony.sanchez@LIGO.ORG - posted 16:01, Saturday 31 August 2024 (79843)
Calibration Sweep!!

The following gains were set to zero (a scripted version of this set-and-restore is sketched after the list):

caput H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN 0
caput H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN 0
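
A minimal sketch of scripting the same set-and-restore with pyepics (assuming caget/caput access from a control room workstation), mirroring the manual caput steps above and the restore at the end of this entry:

from epics import caget, caput

PCAL_GAINS = [
    "H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN",
    "H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN",
    "H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN",
    "H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN",
    "H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN",
    "H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN",
    "H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN",
    "H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN",
    "H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN",
    "H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN",
    "H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN",
    "H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN",
]

# Remember the current values, then zero the oscillators for the measurement.
saved = {ch: caget(ch) for ch in PCAL_GAINS}
for ch in PCAL_GAINS:
    caput(ch, 0)

# ... run the broadband / simulines measurements here ...

# Restore the previous gains afterwards.
for ch, val in saved.items():
    caput(ch, val)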


Then I took ISC_LOCK to NLN_CAL_MEAS

22:13 UTC: ran the following calibration command:
    pydarm measure --run-headless bb

notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml saved
diag> quit
EXIT KERNEL

2024-08-31 15:18:13,008 bb measurement complete.
2024-08-31 15:18:13,008 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml
2024-08-31 15:18:13,008 all measurements complete.
anthony.sanchez@cdsws29:


anthony.sanchez@cdsws29: gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini;gpstime
PDT: 2024-08-31 15:18:49.210990 PDT
UTC: 2024-08-31 22:18:49.210990 UTC
GPS: 1409177947.210990


2024-08-31 22:41:50,395 | INFO | Commencing data processing.
Traceback (most recent call last):
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
    run(args.inputFile, args.outPath, args.record)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
    digestedObj[scan] = digestData(results[scan], data)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
    coh = np.float64( cohArray[index] )
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
    out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0
ICE default IO error handler doing an exit(), pid = 2858202, errno = 32
PDT: 2024-08-31 15:41:53.067044 PDT
UTC: 2024-08-31 22:41:53.067044 UTC
GPS: 1409179331.067044
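The IndexError above means digestData was handed a coherence array of length zero, i.e. the data fetch for that scan appears to have come back empty. A minimal sketch of the kind of guard that would make this failure clearer (hypothetical helper, not the actual simuLines code):

import numpy as np

def coherence_at(cohArray, index, channel="unknown"):
    """Return cohArray[index] as float64, failing loudly if the array came back empty."""
    if len(cohArray) == 0:
        raise RuntimeError(
            f"Empty coherence array for {channel}: the data fetch likely returned no samples."
        )
    return np.float64(cohArray[index])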

 

 

These gains were then restored to their previous values:
H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN  
H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN
H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN

I then took ISC_LOCK back to NOMINAL_LOW_NOISE.
 

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:00, Saturday 31 August 2024 (79842)
Ops Eve Shift Start

TITLE: 08/31 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY: H1 has been locked for 4.5 hours. Tony and I are wrapping up some calibration time to take the regular sweeps while Louis helped troubleshoot (see their alogs for details). Will resume observing soon.

H1 General (CAL, ISC)
anthony.sanchez@LIGO.ORG - posted 15:06, Saturday 31 August 2024 - last comment - 09:23, Thursday 26 September 2024(79841)
ETMX Drive align L2L Gain changed

anthony.sanchez@cdsws29: python3 /ligo/home/francisco.llamas/COMMISSIONING/commissioning/k2d/KappaToDrivealign.py

Fetching from 1409164474 to 1409177074

Opening new connection to h1daqnds1... connected
    [h1daqnds1] set ALLOW_DATA_ON_TAPE='False'
Checking channels list against NDS2 database... done
Downloading data: |█████████████████████████████████████████████████████████████████████████████████████| 12601.0/12601.0 (100%) ETA 00:00

Warning: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN changed.


Average H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT is -2.3121% from 1.
Accept changes of    
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN from 187.379211 to 191.711514 and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN from 184.649994 to 188.919195
Proceed? [yes/no]
yes
Changing
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 191.7115136134197
anthony.sanchez@cdsws29:

 

Comments related to this report
louis.dartez@LIGO.ORG - 16:20, Saturday 31 August 2024 (79845)
I'm not sure if the value set by this script is correct. 

KAPPA_TST was 0.976879 (-2.3121%) at the time this script looked at it. The L2L drivealign gain in H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN was 184.65 at the time of our last calibration update, which is when KAPPA_TST was set to 1. So to offset the drift in the TST actuation strength, we should change the drivealign gain to 184.65 * 1.023121 = 188.919. This script instead updated the gain to 191.711514, which is 187.379211 * 1.023121, where 187.379211 was the gain value at the time the script was run. At that time, the drivealign gain was already accounting for a 1.47% drift in the actuation strength (this has so far not been properly compensated for in pyDARM and may be contributing to the error we're currently seeing... more on that later this weekend in another post).

So I think this script should be basing its corrections on percentages applied with respect to the drivealign gain value at the time the kappas were last set (i.e. just after the last front-end calibration update), *not* at the current time.

Also, the output from that script claims it updated H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN as well, but I trended that channel and it hadn't been changed. Those print statements should be cleaned up.
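
To make the two candidate corrections concrete, here is the arithmetic using the numbers quoted in this thread:

kappa_tst = 0.976879                 # measured KAPPA_TST (-2.3121% from 1)
corr = 1 + (1 - kappa_tst)           # +2.3121% correction factor, as the script applies it

gain_when_run = 187.379211           # drivealign gain at the time the script ran
gain_last_cal = 184.649994           # drivealign gain at the last front-end calibration update

print(gain_when_run * corr)          # 191.7115... -> value the script set
print(gain_last_cal * corr)          # 188.919...  -> value argued for in this comment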
louis.dartez@LIGO.ORG - 09:23, Thursday 26 September 2024 (80304)
To close out this discussion: it turns out the drivealign adjustment script is doing the correct thing. Each time the drivealign gain is adjusted to counteract the effect of ESD charging, the percent change reported by KAPPA_TST should be applied to the drivealign gain at that time, rather than to what the gain was when the kappa calculations were last updated.