H1 General (GRD)
anthony.sanchez@LIGO.ORG - posted 12:29, Tuesday 15 October 2024 (80682)
CDS_CA_COPY Guardian issue

The CDS_CA_COPY Guardian node broke.
TJ had me restart it with the following:

anthony.sanchez@cdsws29: guardutil print CDS_CA_COPY
ifo: H1
name: CDS_CA_COPY
module:
  /opt/rtcds/userapps/release/sys/h1/guardian/CDS_CA_COPY.py
CA prefix:
nominal state: COPY
initial request: COPY
states (*=requestable):
  10 * COPY
   0   INIT
anthony.sanchez@cdsws29: guardctrl restart CDS_CA_COPY
INFO: systemd: restarting nodes: CDS_CA_COPY
anthony.sanchez@cdsws29:

Log output:
------------------------------------------------------

                                            #3  0x00007fed0c85003b _ZN13tcpSendThread3runEv (libca.so.3.15.5 + 0x4203b)
                                             #4  0x00007fed0c7d40bb epicsThreadCallEntryPoint (libCom.so.3.15.5 + 0x3f0bb)
                                             #5  0x00007fed0c7d9f7b start_routine (libCom.so.3.15.5 + 0x44f7b)
                                             #6  0x00007fed144d4ea7 start_thread (libpthread.so.0 + 0x7ea7)
                                             #7  0x00007fed14257a6f __clone (libc.so.6 + 0xfba6f)
2024-10-15_18:45:39.055649Z guardian@CDS_CA_COPY.service: Failed with result 'watchdog'.
2024-10-15_18:45:39.056668Z guardian@CDS_CA_COPY.service: Consumed 1w 18.768s CPU time.
2024-10-15_19:06:45.533022Z Starting Advanced LIGO Guardian service: CDS_CA_COPY...
2024-10-15_19:06:45.994090Z ifo: H1
2024-10-15_19:06:45.994090Z name: CDS_CA_COPY
2024-10-15_19:06:45.994090Z module:
2024-10-15_19:06:45.994090Z   /opt/rtcds/userapps/release/sys/h1/guardian/CDS_CA_COPY.py
2024-10-15_19:06:46.019449Z CA prefix:
2024-10-15_19:06:46.019449Z nominal state: COPY
2024-10-15_19:06:46.019449Z initial request: COPY
2024-10-15_19:06:46.019449Z states (*=requestable):
2024-10-15_19:06:46.019905Z   10 * COPY
2024-10-15_19:06:46.019905Z    0   INIT
2024-10-15_19:06:46.794201Z CDS_CA_COPY Guardian v1.5.2
2024-10-15_19:06:46.800567Z cas warning: Configured TCP port was unavailable.
2024-10-15_19:06:46.800567Z cas warning: Using dynamically assigned TCP port 45771,
2024-10-15_19:06:46.800567Z cas warning: but now two or more servers share the same UDP port.
2024-10-15_19:06:46.800567Z cas warning: Depending on your IP kernel this server may not be
2024-10-15_19:06:46.800567Z cas warning: reachable with UDP unicast (a host's IP in EPICS_CA_ADDR_LIST)
2024-10-15_19:06:46.803537Z CDS_CA_COPY EPICS control prefix: H1:GRD-CDS_CA_COPY_
2024-10-15_19:06:46.803687Z CDS_CA_COPY system archive: /srv/guardian/archive/CDS_CA_COPY
2024-10-15_19:06:47.114773Z CDS_CA_COPY system archive: id: 2b2d8bae8f5cee7885337837576f9e5d47f5891a (45275322)
2024-10-15_19:06:47.114773Z CDS_CA_COPY system name: CDS_CA_COPY
2024-10-15_19:06:47.115042Z CDS_CA_COPY system CA prefix: None
2024-10-15_19:06:47.115042Z CDS_CA_COPY module path: /opt/rtcds/userapps/release/sys/h1/guardian/CDS_CA_COPY.py
2024-10-15_19:06:47.115165Z CDS_CA_COPY initial state: INIT
2024-10-15_19:06:47.115165Z CDS_CA_COPY initial request: COPY
2024-10-15_19:06:47.115165Z CDS_CA_COPY nominal state: COPY
2024-10-15_19:06:47.115165Z CDS_CA_COPY CA setpoint monitor: False
2024-10-15_19:06:47.115355Z CDS_CA_COPY CA setpoint monitor notify: True
2024-10-15_19:06:47.115355Z CDS_CA_COPY daemon initialized
2024-10-15_19:06:47.115355Z CDS_CA_COPY ============= daemon start =============
2024-10-15_19:06:47.132272Z CDS_CA_COPY W: initialized
2024-10-15_19:06:47.165033Z CDS_CA_COPY W: EZCA v1.4.0
2024-10-15_19:06:47.165517Z CDS_CA_COPY W: EZCA CA prefix: H1:
2024-10-15_19:06:47.165517Z CDS_CA_COPY W: ready
2024-10-15_19:06:47.165615Z CDS_CA_COPY worker ready
2024-10-15_19:06:47.165615Z CDS_CA_COPY ========== executing run loop ==========
2024-10-15_19:06:47.165779Z Started Advanced LIGO Guardian service: CDS_CA_COPY.
2024-10-15_19:06:48.010127Z CDS_CA_COPY OP: EXEC
2024-10-15_19:06:48.010884Z CDS_CA_COPY MODE: AUTO
2024-10-15_19:06:48.012375Z CDS_CA_COPY calculating path: INIT->COPY
2024-10-15_19:06:48.012922Z CDS_CA_COPY new target: COPY
2024-10-15_19:06:48.046196Z CDS_CA_COPY executing state: INIT (0)
2024-10-15_19:06:48.046407Z CDS_CA_COPY [INIT.enter]
2024-10-15_19:06:48.071850Z CDS_CA_COPY JUMP target: COPY
2024-10-15_19:06:48.072301Z CDS_CA_COPY [INIT.exit]
2024-10-15_19:06:48.139271Z CDS_CA_COPY JUMP: INIT->COPY
2024-10-15_19:06:48.139587Z CDS_CA_COPY calculating path: COPY->COPY
2024-10-15_19:06:48.140156Z CDS_CA_COPY executing state: COPY (10)
2024-10-15_19:06:48.269602Z Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
                                         Stack trace of thread 1702297:
                                             #0  0x00007fed1

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 12:19, Tuesday 15 October 2024 (80681)
Dolphin firmware was updated on three front ends

[EJ, Dave, Erik, Jonathan]

Dolphin firmware was updated on h1sush7, h1cdsh8, and h1oaf0 from version 08 to 97.  All other front ends already had the correct version.  This version will help avoid crashes such as the one described here:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80574

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 12:15, Tuesday 15 October 2024 (80680)
Workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

H1 TCS
camilla.compton@LIGO.ORG - posted 11:15, Tuesday 15 October 2024 (80679)
Checked CO2Y beam centering prior to laser swap

TJ, Camilla WP# 12121

Current CO2Y laser: 20510-20816D, which was re-gassed in 2016 and installed in October 2022 (alog 65264).

To prepare for the CO2Y laser swap next week, we checked the CO2Y beam centering (images attached).

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:23, Tuesday 15 October 2024 (80677)
Tue CP1 Fill

Tue Oct 15 10:11:38 2024 INFO: Fill completed in 11min 34secs

TCs looking good again.

Images attached to this report
H1 General (Laser Transition)
anthony.sanchez@LIGO.ORG - posted 09:39, Tuesday 15 October 2024 (80676)
LVEA is Now LASER HAZARD
Camilla has transitioned the LVEA to LASER HAZARD for WP 12121.
The LVEA is now LASER HAZARD.
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 08:57, Tuesday 15 October 2024 (80675)
13:01 UTC lockloss

13:01 UTC lockloss from NLN (12:40 lock)

~1000 Hz oscillation in DARM, in the violin 1st harmonic area. There was a small DARM wiggle 200 ms before the lockloss and a much larger one 100 ms before; the IMC lost lock the usual 1/4 second after ASC-AS_A.

Images attached to this report
H1 SEI (SUS)
jeffrey.kissel@LIGO.ORG - posted 08:35, Tuesday 15 October 2024 (80673)
H1ISIHAM2 SUSPOINT EUL2CART Matrix Populated with Projection from H1SUSPR3 Suspension Point Euler Basis
J. Kissel
WP 12140

Per investigations described in WP 12140, I've installed elements for driving H1 ISI HAM2 in the H1 SUS PR3 suspension point's Euler basis into the ISI's H1:ISI-HAM2_SUSPOINT_EUL2CART matrix. The elements had been pre-calculated using the math from T1100617, and were pulled from a pre-saved collection of these matrices in 
    /opt/rtcds/userapps/release/isc/common/projections/ISI2SUS_projection_file.mat


To install them, I ran the following code in MATLAB from a control room workstation:
>> addpath /ligo/svncommon/SeiSVN/seismic/Common/MatlabTools/ % this adds fill_matrix_values.m to the path, needed below
>> cd /opt/rtcds/userapps/release/isc/common/projections/
>> load ISI2SUS_projection_file.mat
>> drvmatrx = ISI2SUSprojections.h1.pr3.EUL2CART
drvmatrx =
            1      -0.0112            0      -0.0115      -1.0243      -0.1768
       0.0112            1            0       1.0243      -0.0115      -0.3308
            0            0            0            0            0            1
            0            0            1       0.1805       0.3288            0
            0            0            0            1      -0.0112            0
            0            0            0       0.0112            1            0
>> fill_matrix_values('H1:ISI-HAM2_SUSPOINT_EUL2CART',drvmatrx) % this actually installs the coefficients into EPICS.


These channels are *NOT* monitored in SDF. However, to make sure they stick, I've accepted the values in both the safe and OBSERVE safe snap files.
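For a quick readback check, here is a minimal sketch (not part of the procedure above; it assumes the usual _<row>_<col> EPICS element naming for H1:ISI-HAM2_SUSPOINT_EUL2CART and that pyepics is available on a control room workstation) to print the installed coefficients for comparison against drvmatrx:

# Sketch only: read back the installed EUL2CART elements and print them row by row.
# Assumes (not confirmed above) 1-indexed _<row>_<col> element channels and pyepics.
from epics import caget

PREFIX = 'H1:ISI-HAM2_SUSPOINT_EUL2CART'

for row in range(1, 7):
    vals = [caget('%s_%d_%d' % (PREFIX, row, col)) for col in range(1, 7)]
    print('  '.join('%10.4f' % v for v in vals))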
Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 07:43, Tuesday 15 October 2024 - last comment - 16:46, Tuesday 15 October 2024(80671)
Tuesday Maintenance Ops Shift Start

TITLE: 10/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.38 μm/s
QUICK SUMMARY:

When I arrived, ISC_LOCK was in INITIAL_ALIGNMENT, INIT_ALIGN was in LOCKING_GREEN_ARMS, and the ALSY arm had no light on it.
Last lockloss was 2024-10-15_13:01:37Z: ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS.
TJ said he got called by H1 Manny just before 7:30.

I will be running Oli's script as described in alog 76606 to save all slider values before the CDS team does their DAQ restart.

Comments related to this report
anthony.sanchez@LIGO.ORG - 07:50, Tuesday 15 October 2024 (80672)

Output of Oli's script:


anthony.sanchez@cdsws29: python3 update_sus_safesnap.py
No optics or groups entered. -h or --help to display options.
anthony.sanchez@cdsws29: python3 update_sus_safesnap.py -h
usage: update_sus_safesnap [-h] [-p] [-o [OPTICS ...]]

Reads the current OPTICALIGN_{P,Y}_OFFSET values and uses those to overwrite
the OPTICALIGN_{P,Y}_OFFSET values in the burt files linked from their
respective safe.snap files.

options:
  -h, --help            show this help message and exit
  -p, --print-only      Use -p or --print-only to only print the before and
                        after values without updating the values saved in
                        safe.snap. (default: False)
  -o [OPTICS ...], --optics [OPTICS ...]
                        Type the names of optic groups (ex. itm) or individual
                        optic names (ex. pr2), with spaces in between. Case
                        does not matter.
                         OptGroup | Optics       OptGroup | Optics     
                           itm       ITMX          etm       ETMX      
                                     ITMY                    ETMY      
                                                                       
                            bs       BS            tms       TMSX      
                                                             TMSY      
                            rm       RM1                               
                                     RM2           pr        PRM       
                                                             PR2       
                            im       IM1                     PR3       
                                     IM2                               
                                     IM3           mc        MC1       
                                     IM4                     MC2       
                                                             MC3       
                            sr       SRM                               
                                     SR2           ifo_out   OFI       
                                     SR3    (om if w/o OFI)  OMC       
                                                             OM1       
                            fc       FC1                     OM2       
                                     FC2                     OM3       
                                                                       
                            zm_in    OPO           zm_out    ZM4       
                                     ZM1                     ZM5       
                                     ZM2                     ZM6       
                                     ZM3                               
                         (default: none)
anthony.sanchez@cdsws29: python3 update_sus_safesnap.py -o itm bs rm im sr fc zm_in etm tms pr mc ifo_out zm_out
Running $(USERAPPS)/isc/h1/guardian/update_sus_safesnap.py
Updating and printing out saved OPTICALIGN_{P,Y}_OFFSET values from safe.snap vs current values for these optics:
ITMX ITMY BS RM1 RM2 IM1 IM2 IM3 IM4 SRM SR2 SR3 FC1 FC2 OPO ZM1 ZM2 ZM3 ETMX ETMY TMSX TMSY PRM PR2 PR3 MC1 MC2 MC3 OFI OMC OM1 OM2 OM3 ZM4 ZM5 ZM6
Traceback (most recent call last):
  File "/opt/rtcds/userapps/trunk/isc/h1/scripts/update_sus_safesnap.py", line 318, in <module>
    old_new = replace_offsets(opt_dicts(optics), print_only=print_only)
  File "/opt/rtcds/userapps/trunk/isc/h1/scripts/update_sus_safesnap.py", line 201, in replace_offsets
    cur_vals.append(ezca[pchanname])
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/ezca/ezca.py", line 304, in __getitem__
    return self.read(channel)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/ezca/ezca.py", line 294, in read
    pv = self.connect(channel)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/ezca/ezca.py", line 272, in connect
    raise EzcaConnectError("Could not connect to channel (timeout=%ds): %s" % (self._timeout, name))
ezca.errors.EzcaConnectError: Could not connect to channel (timeout=2s): H1:SUS-OFI_M1_OPTICALIGN_P_OFFSET
anthony.sanchez@cdsws29: python3 update_sus_safesnap.py -o zm_out
Running $(USERAPPS)/isc/h1/guardian/update_sus_safesnap.py
Updating and printing out saved OPTICALIGN_{P,Y}_OFFSET values from safe.snap vs current values for these optics:
ZM4 ZM5 ZM6

SUS-ZM4_M1_OPTICALIGN_P_OFFSET
   Old Saved: -7.92342523045482380439e+02
   New Saved: -772.2246009467159
SUS-ZM4_M1_OPTICALIGN_Y_OFFSET
   Old Saved: -9.85740674423158566242e+02
   New Saved: -981.3613025567112

SUS-ZM5_M1_OPTICALIGN_P_OFFSET
   Old Saved: -1.15000000000000000000e+02
   New Saved: -115.0
SUS-ZM5_M1_OPTICALIGN_Y_OFFSET
   Old Saved: -4.60000000000000000000e+02
   New Saved: -460.0

SUS-ZM6_M1_OPTICALIGN_P_OFFSET
   Old Saved: 1.39848934620929139783e+03
   New Saved: 1408.6829424213722
SUS-ZM6_M1_OPTICALIGN_Y_OFFSET
   Old Saved: -2.83802124439147348767e+02
   New Saved: -260.05899996129443
anthony.sanchez@cdsws29: python3 update_sus_safesnap.py -o ifo_out
Running $(USERAPPS)/isc/h1/guardian/update_sus_safesnap.py
Updating and printing out saved OPTICALIGN_{P,Y}_OFFSET values from safe.snap vs current values for these optics:
OFI OMC OM1 OM2 OM3
Traceback (most recent call last):
  File "/opt/rtcds/userapps/trunk/isc/h1/scripts/update_sus_safesnap.py", line 318, in <module>
    old_new = replace_offsets(opt_dicts(optics), print_only=print_only)
  File "/opt/rtcds/userapps/trunk/isc/h1/scripts/update_sus_safesnap.py", line 201, in replace_offsets
    cur_vals.append(ezca[pchanname])
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/ezca/ezca.py", line 304, in __getitem__
    return self.read(channel)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/ezca/ezca.py", line 294, in read
    pv = self.connect(channel)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/ezca/ezca.py", line 272, in connect
    raise EzcaConnectError("Could not connect to channel (timeout=%ds): %s" % (self._timeout, name))
ezca.errors.EzcaConnectError: Could not connect to channel (timeout=2s): H1:SUS-OFI_M1_OPTICALIGN_P_OFFSET
anthony.sanchez@cdsws29:


oli.patane@LIGO.ORG - 16:46, Tuesday 15 October 2024 (80696)

That error from my update_sus_safesnap script shouldn't happen again - I had put the OFI in as an option for overwriting the OPTICALIGN offset settings, but it turns out that the OFI doesn't have OPTICALIGN offsets! Neither does the OPO, so I've removed both of them from the code.

It seems to now be working correctly (but it's tricky to test) so hopefully there are no more issues with running it.
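For reference, a minimal sketch (hypothetical helper, not the actual update_sus_safesnap.py code) of guarding the reads so that an optic without OPTICALIGN channels is skipped instead of raising the EzcaConnectError seen above:

# Hypothetical sketch only; the real script and its channel handling may differ.
from ezca import Ezca
from ezca.errors import EzcaConnectError

ezca = Ezca(ifo='H1')  # assumption: the IFO prefix is set this way in this environment

def read_offsets(optics, dofs=('P', 'Y')):
    """Return {channel: value} for the OPTICALIGN offsets that actually connect."""
    vals = {}
    for optic in optics:
        for dof in dofs:
            chan = 'SUS-%s_M1_OPTICALIGN_%s_OFFSET' % (optic, dof)
            try:
                vals[chan] = ezca[chan]
            except EzcaConnectError:
                print('Skipping %s: could not connect' % chan)
    return vals

print(read_offsets(['ZM4', 'ZM5', 'ZM6', 'OFI']))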

LHO General
ryan.short@LIGO.ORG - posted 22:04, Monday 14 October 2024 (80670)
Ops Eve Shift Summary

TITLE: 10/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Very quiet shift tonight. H1 relocked at the start of the shift and has been observing with a steady range since.

H1 has now been locked for almost 5 hours.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 16:37, Monday 14 October 2024 - last comment - 17:24, Monday 14 October 2024(80668)
Lockloss @ 23:29 UTC

Lockloss @ 23:29 UTC - link to lockloss tool

No obvious cause, but looks to be some ETMX L3 motion right before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 17:24, Monday 14 October 2024 (80669)

H1 back to observing at 00:23 UTC. Fully automatic relock.

H1 General
ryan.crouch@LIGO.ORG - posted 16:31, Monday 14 October 2024 (80658)
OPS Monday day shift summary

TITLE: 10/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: We've been locked for almost 8 hours, nearly the entire shift.
LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
15:26 | FAC | Karen, Kim | FCES | N | Tech clean, out at 15:46 | 17:59
19:25 | FAC | Karen | MidY | N | Tech clean | 20:20
LHO General
ryan.short@LIGO.ORG - posted 16:00, Monday 14 October 2024 (80667)
Ops Eve Shift Start

TITLE: 10/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 16mph Gusts, 13mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.50 μm/s
QUICK SUMMARY: H1 has been locked for over 7 hours and observing for the past 2 hours.

LHO General
tyler.guidry@LIGO.ORG - posted 14:25, Monday 14 October 2024 - last comment - 08:48, Tuesday 15 October 2024(80666)
Well Pump Enabled - Well Tank Cleaned
The well pump has been cycled twice during our tank cleaning work. The pump is currently running on a manual cycle time of 20 minutes, at which point we will check the tank level and record it for future reference. For those interested, the well pump takes ~40 seconds to begin filling after being cycled on.

C. Soike E. Otterman T. Guidry
Comments related to this report
tyler.guidry@LIGO.ORG - 08:48, Tuesday 15 October 2024 (80674)
25 minutes of pump runtime adds roughly 48" of water to the well tank.
H1 SQZ
camilla.compton@LIGO.ORG - posted 14:00, Monday 14 October 2024 (80664)
Squeezing Data set

Sheila, Camilla

17:23 - 17:45 Turned off SQZ IFO ASC and ran SCAN_SQZ_ALIGNMENT (changed ZMs by ~20 urad max, plot, but BLRMS got worse) and then SCAN_SQZANG (changed the angle by 4 deg; BLRMS improved but still worse than original). DTT comparison shows that the AS42 SQZ IFO ASC did better than the guardian SCAN_SQZ alignment and angle scans.

17:50:30 - 18:30:30 UTC - Took No Squeezing time. IFO had been in NLN for 2h10m. Checked NLG = 13.8 (0.12133/0.008785), following 76542.  OPO temperature was already at best.

Went back to FDS with ASC and ran SCAN_SQZANG. After the first FDS data set, we turned off ASC.

FDS Data set:

State | Time (UTC) | Start time (GPS) | Angle | DTT Ref | Notes
FDS (ASC on) | 17:16 | | | 1 |
FDS SCAN_ALIGN | 17:49 | | | 2 | (ASC off). SQZ worse than with AS42: plot
No SQZ | 17:50:30 - 18:00:30 (10min) | 1412963448 | N/A | 0 |
FDS | 18:06:00 - 18:12:00 (6min) | 1412964378 | 182.4 | 3 |
FDS +5deg | 18:19:30 - 18:21:30 (2min) | 1412965188 | 187.2 | N/A | Glitch 2 minutes in, looked similar to FDS
Mid SQZ + | 18:26:00 - 18:32:00 (6min) | 1412965578 | 216.6 | 4 | Aimed for No SQZ level at 2 kHz
A SQZ | 18:36:00 - 18:42:00 (6min) | 1412966178 | (-)105.6 | 5 |
A SQZ +10deg | 18:42:15 - 18:48:15 (6min) | 1412966553 | (-)115.1 | 6 |
A SQZ -10deg | 18:48:30 - 18:54:30 (6min) | 1412966928 | (-)95.2 | 7 |
Mid SQZ - | 18:56:00 - 19:02:00 (6min) | 1412967378 | 136.9 | 8 | Aimed for No SQZ level at 2 kHz
FDS +10deg | 19:03:30 - 19:09:30 (6min) | 1412967828 | 192.8 | 9 |
FDS -10deg | 19:10:00 - 19:16:00 (6min) | 1412968218 | 172.0 | 10 |
FDS (repeat, ASC off) | 19:16:30 - 19:22:30 (6min) | | 182.4 | 11 | Slightly worse than ref 3; 1h10 later with ASC off

Plot attached showing ASQZ and Mid sqz trends and FDS trends. Saved as camilla.compton/Documents/sqz/templates/dtt/14Oct2024_SQZ_FDS_ASQZ/SQZ.xml.

Then did SCAN_SQZANG (adjusted 182.4 to 183.9) and turned on ASC in FDS. Then turned off ASC and went to FIS (un-managed from ISC_LOCK). Tweaked the SQZ angle as it wasn't optimized at high frequency.

FIS Data set:

State | Time (UTC) | Start time (GPS) | Angle | DTT Ref | Notes
FIS | 19:40:30 - 19:46:30 (6min) | 1412970048 | 193.4 | 3 |
Mid SQZ + | 19:47:30 - 19:53:30 (6min) | 1412970468 | 218.0 | 4 | Aimed for No SQZ level at 2 kHz (SEI state changed to USEISM at 19:52 UTC)
A SQZ | 19:56:00 - 20:02:00 (6min) | 1412970978 | (-)103.3 | 5 |
A SQZ +10deg | 20:02:30 - 20:08:30 (6min) | 1412971368 | (-)113.8 | 6 |
A SQZ -10deg | 20:09:00 - 20:15:00 (6min) | 1412971758 | (-)93.8 | 7 |
Mid SQZ - | 20:16:30 - 20:22:30 (6min) | 1412972208 | 147.0 | 8 | Aimed for No SQZ level at 2 kHz
FIS +10deg | 20:24:30 - 20:30:30 (6min) | 1412972688 | 203.8 | 9 |
FIS -10deg | 20:31:00 - 20:34:30 (3min 30s) | 1412973078 | 183.9 | N/A | Got distracted and changed angle too soon
FIS -10deg | 20:35:45 - 20:41:45 (6min) | 1412973363 | 183.9 | 10 |
FIS (repeat) | 20:42:00 - 20:48:00 (6min) | 1412973738 | 193.4 | 11 | Very similar to ref 3, maybe slightly better

Plot attached showing FIS ASQZ and Mid sqz trends and FDS trends. Saved as camilla.compton/Documents/sqz/templates/dtt/14Oct2024_SQZ_FIS_ASQZ/SQZ.xml.

No SQZ from 20:49:00 UTC to 20:51:00 UTC while we went back to FDS; I started it at 185 deg and ran SCAN_SQZANG, which adjusted the angle to 189 deg. Back to observing at 20:57 UTC.

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 12:32, Monday 14 October 2024 (80665)
PSL 10-Day Trends

FAMIS 31055

The PSL has been taken down a few times in the past week as part of the glitching investigations, and this is seen clearly on several trends. Also, since the NPRO controller was swapped 5 days ago but the potentiometers on the daughter board were not adjusted on the one swapped in (alog80566), some of the readbacks for the NPRO channels are reporting inaccurate values, namely the laser diode powers. The controller is planned to be swapped back to the original tomorrow so these readbacks will be fixed.

Another item of note: every time the PMC was unlocked last week, it seems to have relocked with a slightly higher reflected power and correspondingly lower transmitted power (see stabilization trends). Looking at the PMC REFL camera image, there could be something to gain from an alignment tweak into the PMC, so I'll try that tomorrow during maintenance and see if there's any improvement.

Images attached to this report
H1 DetChar (DetChar, Lockloss)
bricemichael.williams@LIGO.ORG - posted 11:33, Thursday 12 September 2024 - last comment - 16:04, Wednesday 30 October 2024(80001)
Lockloss Channel Comparisons

-Brice, Sheila, Camilla

We are looking to see if there are any aux channels that are affected by certain types of locklosses. Knowing whether a threshold is reached in the last few seconds prior to a lockloss can help determine the type of lockloss and which channels are affected more than others.

We have gathered a list of lockloss times (using https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi) with:

  1. only Observe and Refined tags (plots, histogram)
  2. only Observe, Refined, and Windy tags (plots, histogram)
  3. only Observe, Refined, and Earthquake tags (plots, histogram)
  4. Observe, Refined, and Microseism tags (note: all of these also have an EQ tag, and all but the last 2 have an anthropogenic tag) (plots, histogram)

(issue: the plots for the first 3 lockloss types wouldn't upload to this aLog. Created a dcc for them: G2401806)

We wrote a Python script to pull the data of various auxiliary channels for the 15 seconds before a lockloss. A graph is created for each channel, a trace for each lockloss time is stacked on each graph, and the graphs are saved to a png file. All traces have been shifted so that the time of lockloss is at t=0.

Histograms are created for each channel comparing the maximum displacement from zero for each lockloss time. There is also a stacked histogram based on 12 quiet microseism times (all taken between 0900-0930 UTC on 4.12.24). The histograms are created using only the last second of data before lockloss, are normalized by dividing by the number of lockloss times, and are saved to a separate png file from the plots.

The channels are provided via a list inside the Python file and can easily be adjusted to fit a user's needs. We used the following channels (a sketch of the fetch/plot loop follows the list):

channels = ['H1:ASC-AS_A_DC_NSUM_OUT_DQ','H1:ASC-DHARD_P_IN1_DQ','H1:ASC-DHARD_Y_IN1_DQ','H1:ASC-MICH_P_IN1_DQ', 'H1:ASC-MICH_Y_IN1_DQ','H1:ASC-SRC1_P_IN1_DQ','H1:ASC-SRC1_Y_IN1_DQ','H1:ASC-SRC2_P_IN1_DQ','H1:ASC-SRC2_Y_IN1_DQ', 'H1:ASC-PRC2_P_IN1_DQ','H1:ASC-PRC2_Y_IN1_DQ','H1:ASC-INP1_P_IN1_DQ','H1:ASC-INP1_Y_IN1_DQ','H1:ASC-DC1_P_IN1_DQ', 'H1:ASC-DC1_Y_IN1_DQ','H1:ASC-DC2_P_IN1_DQ','H1:ASC-DC2_Y_IN1_DQ','H1:ASC-DC3_P_IN1_DQ','H1:ASC-DC3_Y_IN1_DQ', 'H1:ASC-DC4_P_IN1_DQ','H1:ASC-DC4_Y_IN1_DQ']
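For illustration, a minimal sketch of the fetch/stack/plot loop described above (assumptions: gwpy with NDS access, a trimmed channel list, and a placeholder lockloss GPS time; the actual script differs):

# Sketch of the analysis described above; channel subset and GPS time are placeholders.
from gwpy.timeseries import TimeSeriesDict
import matplotlib.pyplot as plt

channels = ['H1:ASC-DHARD_P_IN1_DQ', 'H1:ASC-DC2_P_IN1_DQ']  # subset for illustration
lockloss_gps = [1412963448]  # hypothetical lockloss GPS times

fig, axes = plt.subplots(len(channels), 1, sharex=True, figsize=(8, 6))
for gps in lockloss_gps:
    data = TimeSeriesDict.get(channels, gps - 15, gps)  # 15 s of data before lockloss
    for ax, chan in zip(axes, channels):
        ts = data[chan]
        ax.plot(ts.times.value - gps, ts.value, alpha=0.5)  # shift so lockloss is at t=0
        ax.set_ylabel(chan, fontsize=6)
axes[-1].set_xlabel('Time relative to lockloss [s]')
fig.savefig('lockloss_channels.png')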
Images attached to this report
Comments related to this report
bricemichael.williams@LIGO.ORG - 17:03, Wednesday 25 September 2024 (80294)DetChar, Lockloss

After talking with Camilla and Sheila, I adjusted the histogram plots. I excluded the last 0.1 sec before lockloss from the analysis, because in the original post's plots the H1:ASC-AS_A_NSUM_OUT_DQ channel has most of its last-second (blue) histogram at a value of 1.3x10^5, indicating that the last second of data is capturing the lockloss itself causing a runaway in the channels. I also combined the ground motion locklosses (EQ, Windy, and Microseism) into one set of plots (45 locklosses) and left the Observe (and Refined) tagged locklosses as another set of plots (15 locklosses). Both groups of plots have 2 stacked histograms for each channel (a short sketch of the two weightings follows the list below):

  1. Blue:
    • the max displacement from zero between one second before and 0.1 second before lockloss, for each lockloss. 
    • The data is one second before until 0.1 second before lockloss, for each lockloss
    • the histogram is the max displacement from zero for each lockloss
    • The counts are weighted as 1/(number of locklosses in this data set) (i.e: the total number of counts in the histogram)
  2. Red:
    • I took all the data points from eight seconds before until 2 seconds before lockloss for each lockloss.
    • I then down-sampled the data points from 256 Hz to 16Hz sampling rate by taking every 16th data point.
    • The histogram is the displacement from zero of these down-sampled points
    • The counts are weighted as 1/(number of down-sampled data points for each lockloss) (i.e., the total number of counts in the histogram)
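As referenced above, a short sketch of the two weighting schemes (assuming ts is a 256 Hz numpy array ending at the lockloss; illustrative only, not the analysis code):

# Illustrative sketch of the blue/red histogram populations described above.
import numpy as np

fs = 256  # ASC channel sample rate [Hz]

def blue_entry(ts):
    """Max |value| from 1 s to 0.1 s before lockloss (one entry per lockloss)."""
    return np.max(np.abs(ts[-int(1.0 * fs):-int(0.1 * fs)]))

def red_entries(ts):
    """All samples from 8 s to 2 s before lockloss, downsampled 256 Hz -> 16 Hz."""
    return ts[-8 * fs:-2 * fs:16]  # keep every 16th point

# Blue counts are later weighted by 1/N_locklosses; red counts by 1/N_points.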

Take notice of the histogram for the H1:ASC-DC2_P_IN1_DQ channel for the ground motion locklosses. In the last second before lockloss (blue), we can see a bimodal distribution with the right grouping centered around 0.10. The numbers above the blue bars are the percentage of the counts in that bin: about 33.33% is in the grouping around 0.10. This is in contrast to the distribution for the Observe/Refined locklosses, where the entire (blue) distribution is under 0.02. This could indicate that a threshold could be placed on this channel for lockloss tagging. More analysis will be required before that (I am going to look next at times without locklosses for comparison).

Images attached to this comment
bricemichael.williams@LIGO.ORG - 14:17, Wednesday 09 October 2024 (80568)DetChar, Lockloss

I started looking at the DC2 channel and the REFL_B channel, to see if there is a threshold in REFL_B that could be used for a new lockloss tag. I plotted the last eight seconds before lockloss for the various lockloss times. This time I split the times into different graphs based on whether the DC2 max displacement from zero in the last second before lockloss was above 0.06 (based on the histogram in the previous comment): Greater = the max displacement is greater than 0.06, Less = the max displacement is less than 0.06. However, I discovered that some of the locklosses that are above 0.06 for the DC2 channel are failing the logic test in the code: they get counted as having a max displacement less than 0.06 and get plotted on the lower plots. I wonder if this is also happening in the histograms, but that would only mean we are underestimating the number of locklosses above the threshold. This could be suppressing possible bimodal distributions for other histograms as well. (Looking into debugging this.)

I split the locklosses into 5 groups of 8 and 1 group of 5 to make it easier to distinguish between the lines in the plots.

Based on the plots, I think a threshold of 0.06 in the last 3 seconds prior to lockloss would work for H1:ASC-REFL_B_DC_PIT_OUT_DQ; an illustrative sketch of the DC2 pass/fail split is below.
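As mentioned, an illustrative sketch of the intended pass/fail split on the DC2 channel (hypothetical data structure; not the actual script, which had the comparison bug described above):

# Illustrative only; lockloss_data is a placeholder {gps: last 8 s of DC2 data}.
import numpy as np

fs = 256          # sample rate [Hz]
threshold = 0.06  # max-displacement threshold on H1:ASC-DC2_P_IN1_DQ

lockloss_data = {1412963448: np.zeros(8 * fs)}  # hypothetical example data

greater, less = [], []
for gps, ts in lockloss_data.items():
    max_disp = np.max(np.abs(ts[-1 * fs:]))  # last second before the lockloss
    (greater if max_disp > threshold else less).append(gps)

print('Greater:', greater, 'Less:', less)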

Images attached to this comment
bricemichael.williams@LIGO.ORG - 11:30, Tuesday 15 October 2024 (80678)DetChar, Lockloss

Fixed the logic issue for splitting the plots into pass/fail of the 0.06 threshold, as seen in the plot.

The histograms were unaffected by the issue.

Images attached to this comment
bricemichael.williams@LIGO.ORG - 16:04, Wednesday 30 October 2024 (80949)

Added the code to GitLab.
