H1 SUS
jeffrey.kissel@LIGO.ORG - posted 11:50, Tuesday 24 June 2025 - last comment - 12:15, Tuesday 24 June 2025(85281)
Table of Six-OSEM SUS M0/R0/M1 Damping Loop EPICS Gain Values
J. Kissel

Following LHO:85273, I realize I've lost track of which 6-OSEM top-mass suspensions have had their damping loop gains adjusted through commissioning efforts. All of the suspensions below have had their loop control filters updated with a "Level 2.0" design; during the upgrade from Level 1.0 to Level 2.0, their EPICS gains were reset to -1.0 (any tuning of gains is built into the filter banks). However, because it's far easier to check, revert, and set a gain without knowing the details of the frequency dependence of the loop shaping, it is typical to adjust the overall loop gain by increasing or decreasing the EPICS gain in the [M0/R0/M1]_DAMP_[DOF] banks, and then check the impact on ISC loops, including the DARM noise performance and ASC or ISC loop stability. 
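
For concreteness, such a tweak is a one-line call from the interactive guardian shell (a minimal sketch; the channel and value below simply reproduce the ETMX M0 yaw entry in the table that follows):
    # Scale the overall ETMX M0 yaw damping loop gain via its EPICS gain slot
    ezca['SUS-ETMX_M0_DAMP_Y_GAIN'] = -0.5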

Here's a list of the current status of all top mass DAMP gains for QUADs, Triples, and Doubles that have a 6-OSEM sensor/actuator array:
    {'ETMX M0': [-1.0, -1.0, -1.0, -1.0, -1.0, -0.5],      # Aug 2023 commissioning LHO:71927
     'ETMX R0': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],            # "Gain" filters have the feedback sign in them, rather than the EPICS gain
     'ETMY M0': [-1.0, -1.0, -1.0, -1.0, -1.0, -0.5],
     'ETMY R0': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],            # "Gain" filters have the feedback sign in them, rather than the EPICS gain
     'ITMX M0': [-1.0, -1.0, -1.0, -1.0, -1.0, -0.5],
     'ITMX R0': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],            # "Gain" filters have the feedback sign in them, rather than the EPICS gain
     'ITMY M0': [-1.0, -1.0, -1.0, -1.0, -1.0, -0.5],
     'ITMY R0': [-10.0, -10.0, -2.0, -0.3, -0.1, -0.1]}    # "Gain" filters are OFF; these EPICS gains take their place

    {'BS': [-1.0, -1.0, -1.0, -1.0, -3.0, -3.0],           # P and Y gains have been this way since 2015 at least
     'MC1': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
     'MC2': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
     'MC3': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
     'PRM': [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5],          # Aug 2023 commissioning, LHO:72130, LHO:72106
     'PR2': [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5],          #   | 
     'SR2': [-0.2, -0.2, -0.2, -0.2, -0.2, -0.2],          #   |
     'SRM': [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5],          #   V
     'FC1': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
     'FC2': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
     'PR3': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
     'SR3': [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5],          # Aug 2023 commissioning
     'TMSX': [-3.0, -1.0, -1.0, -1.0, -3.0, -1.0],         # L and P gains have been this way since 2015 at least
     'TMSY': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
     'OMC': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0]}

Even though the gains seem quite scattered and inconsistent at first glance, most have either been that way through all observing runs to date or were adjusted during a concerted Aug 2023 commissioning effort by Gabriele and Elenna.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:55, Tuesday 24 June 2025 (85283)
To generate the above dictionaries of gains, I invoked the interactive guardian shell with 
    $ guardian -i 


then wrote these quick for loops to grab all the triples' and doubles' gains:
gain_dict = {}
# Collect the M1 DAMP EPICS gain for each Euler DOF of every triple/double
for thisOptic in ['BS','MC1','MC2','MC3','PRM','PR2','SR2','SRM','FC1','FC2','PR3','SR3','TMSX','TMSY','OMC']:
    gains = []
    for jEUL in ['L','T','V','R','P','Y']:
        gains.append(ezca['SUS-'+thisOptic+'_M1_DAMP_'+jEUL+'_GAIN'])
    gain_dict[thisOptic] = gains   # store once per optic, after the DOF loop



To grab the quad gains:
quadgain_dict = {}
# Same for the QUADs, grabbing both main (M0) and reaction (R0) chain gains
for thisOptic in ['ETMX','ETMY','ITMX','ITMY']:
    m0gains = []
    r0gains = []
    for jEUL in ['L','T','V','R','P','Y']:
        m0gains.append(ezca['SUS-'+thisOptic+'_M0_DAMP_'+jEUL+'_GAIN'])
        r0gains.append(ezca['SUS-'+thisOptic+'_R0_DAMP_'+jEUL+'_GAIN'])
    quadgain_dict.update({thisOptic+' M0':m0gains})
    quadgain_dict.update({thisOptic+' R0':r0gains})
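
To render the dictionaries in the format shown above, any pretty-printer will do, e.g.:
    from pprint import pprint
    pprint(gain_dict)
    pprint(quadgain_dict)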
elenna.capote@LIGO.ORG - 12:15, Tuesday 24 June 2025 (85284)

To make this extra fun, some of these gains are set all the time, and some are changed as part of the locking process. Specifically, in the LOWNOISE_ASC guardian state, all test mass M0 Y gains are reduced from -1 to -0.5, and all SR2 DOF damping gains are reduced to -0.2 (I think they start at -1, but I'm not sure).
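
In ezca terms, those LOWNOISE_ASC reductions amount to something like the following (a hypothetical sketch following the channel pattern of LHO:85283, not a quote from the actual guardian code):
    # Hypothetical illustration of the gain reductions described above
    for optic in ['ETMX', 'ETMY', 'ITMX', 'ITMY']:
        ezca['SUS-' + optic + '_M0_DAMP_Y_GAIN'] = -0.5   # test mass M0 yaw: -1 -> -0.5
    for dof in ['L', 'T', 'V', 'R', 'P', 'Y']:
        ezca['SUS-SR2_M1_DAMP_' + dof + '_GAIN'] = -0.2   # all SR2 M1 DOFs -> -0.2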

During commissioning, Gabriele and I found that noise reinjected from the damping loops limited the RMS of other control loops (like DHARD or SRCL), which in turn limited the RMS of DARM at low frequency; we have assumed this can all be sourced to noise in the BOSEMs. However, for some loops, having too low a gain, either all the time or at certain parts of the locking process, disturbs locking, causing oscillations from lack of control at suspension resonances, or instabilities because the plants used to design certain controllers include the presence of the damping loop (this seems especially problematic when we hand off one controller to another, e.g. going from high-bandwidth to low-bandwidth ASC). As a reminder, reducing the DARM RMS has been an area of significant work throughout O4 commissioning and observation.

H1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 10:53, Tuesday 24 June 2025 (85274)
OPLEV charge measurements, ETMX, ETMY

There were some light end-station VAC pump checks/disassemblies, but they didn't seem to affect the measurements; the error bars are much smaller than last time, when it was windy out.

ETMY's charge is high (>50 V) on half of the quadrants; it appears stable if we discount the previous bad measurement.

ETMX's charge is hovering within +/-10 V of 50 V except for LR_P; it also appears stable, ignoring the previous measurement.

Images attached to this report
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 10:38, Tuesday 24 June 2025 (85277)
H1SUSSRM Transverse <-> Side OSEM2EUL / EUL2OSEM Elements Changed to +1.0
J. Kissel

Following LHO:85198, I've changed the sign of the OSEM2EUL and EUL2OSEM matrix elements that transform from "SD" to "T" and vice versa, as SRM is one of the few HSTSs that has its "SD" OSEM mounted on the "Opposite Side" (again, see G2402388 and E1100109).
    Matrix Element                  Was     Is now
    H1:SUS-SRM_M1_OSEM2EUL_2_6      -1.0    +1.0
    H1:SUS-SRM_M1_EUL2OSEM_6_2      -1.0    +1.0
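
For reference, the equivalent change from an interactive guardian shell is just the following (a sketch using the ezca interface of LHO:85283; the actual edit may well have been made via MEDM):
    ezca['SUS-SRM_M1_OSEM2EUL_2_6'] = +1.0   # "SD" OSEM signal -> T sensing element
    ezca['SUS-SRM_M1_EUL2OSEM_6_2'] = +1.0   # T request -> "SD" OSEM drive element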

I attach a comparison of two sets of new health check TFs, one UNDAMPED and the other DAMPED, against a previous 2024 measurement. This metric (and any other metric we have) shows no change, as expected (see the "Shouldn't we have discovered this with the 'health check TFs?'" section of LHO:85198).

The SUS was in the new "HEALTH CHECK" guardian state, but with the alignment offsets ON*** and all DOFs' worth of damping loop gains set to -0.5, per the nominal configuration since 2023 (see LHO:85273). 
The ISI/HEPI system was isolated, but with sensor correction OFF because it's maintenance day.
*** "It is known" that in the current mechanical alignment vs. OSEM position, that with the alignment offsets OFF, the LF and RT OSEMs aren't far above ADC 0 counts, and thus the magnitude of the TFs drops by almost a factor of two, and coherence gets worse even with twice the excitation amplitude.

I also attach SDF screenshots of me accepting these new values in the safe and OBSERVE .snap files.

The data sets are committed to
    /ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/SRM/SAGM1/Data/
        2025-06-24_1558_H1SUSSRM_M1_WhiteNoise_L_0p02to50Hz.xml
        2025-06-24_1558_H1SUSSRM_M1_WhiteNoise_P_0p02to50Hz.xml
        2025-06-24_1558_H1SUSSRM_M1_WhiteNoise_R_0p02to50Hz.xml
        2025-06-24_1558_H1SUSSRM_M1_WhiteNoise_T_0p02to50Hz.xml
        2025-06-24_1558_H1SUSSRM_M1_WhiteNoise_V_0p02to50Hz.xml
        2025-06-24_1558_H1SUSSRM_M1_WhiteNoise_Y_0p02to50Hz.xml

        2025-06-24_1629_H1SUSSRM_M1_WhiteNoise_L_0p02to50Hz.xml
        2025-06-24_1629_H1SUSSRM_M1_WhiteNoise_P_0p02to50Hz.xml
        2025-06-24_1629_H1SUSSRM_M1_WhiteNoise_R_0p02to50Hz.xml
        2025-06-24_1629_H1SUSSRM_M1_WhiteNoise_T_0p02to50Hz.xml
        2025-06-24_1629_H1SUSSRM_M1_WhiteNoise_V_0p02to50Hz.xml
        2025-06-24_1629_H1SUSSRM_M1_WhiteNoise_Y_0p02to50Hz.xml
Images attached to this report
Non-image files attached to this report
H1 PSL
ryan.crouch@LIGO.ORG - posted 10:29, Tuesday 24 June 2025 (85278)
PSL Status Report - Weekly

Laser Status:
    NPRO output power is 1.859W
    AMP1 output power is 70.23W
    AMP2 output power is 139.9W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 7 days, 0 hr 17 minutes
    Reflected power = 23.15W
    Transmitted power = 105.5W
    PowerSum = 128.6W

FSS:
    It has been locked for 0 days 2 hr and 28 min
    TPD[V] = 0.8134V

ISS:
    The diffracted power is around 3.8%
    Last saturation event was 0 days 2 hours and 28 minutes ago


Possible Issues:
    PMC reflected power is high

H1 CDS
david.barker@LIGO.ORG - posted 10:00, Tuesday 24 June 2025 (85276)
Bypass h0vacly cell phone alarms while Patrick has this IOC down for upgrade.

Tue Jun 24 09:59:20 PM PDT 2025
For channel(s):
    H0:VAC-LY_CP1_100_LLCV_MAN_POS_PCT
    H0:VAC-LY_CP1_LT105_DEWAR_LEVEL_PCT
    H0:VAC-LY_CP1_LT105_DEWAR_LEVEL_PCT_ERROR
    H0:VAC-LY_CP1_PT101_DISCHARGE_PRESS_PSIG
    H0:VAC-LY_CP1_PT101_DISCHARGE_PRESS_PSIG_ERROR
    H0:VAC-LY_CP1_TE102A_DISCHARGE_TEMP_DEGC
    H0:VAC-LY_CP1_TE102A_DISCHARGE_TEMP_DEGC_ERROR
    H0:VAC-LY_CP1_TE102B_DISCHARGE_TEMP_DEGC
    H0:VAC-LY_CP1_TE102B_DISCHARGE_TEMP_DEGC_ERROR
    H0:VAC-LY_GV5_ZSM159A_VALVE_ANIM
    H0:VAC-LY_GV6_ZSM169A_VALVE_ANIM
    H0:VAC-LY_Y3_PT114B_PRESS_TORR
    H0:VAC-LY_Y3_PT114B_PRESS_TORR_ERROR
    H0:VAC-LY_Y4_PT124B_PRESS_TORR
    H0:VAC-LY_Y4_PT124B_PRESS_TORR_ERROR
    H1:DAQ-H1EDC_CHAN_NOCON
 

H1 SUS (CSWG)
jeffrey.kissel@LIGO.ORG - posted 09:59, Tuesday 24 June 2025 (85273)
For the Record: SRM Damping Loop EPICs Gains have been set at -0.5 since Apr 21 2023
J. Kissel

Just stating for the record, as the 2023 aLOG records are a bit unclear (LHO:72106 and LHO:72130 are all I could find, and they claim -0.2): all DOFs of H1 SUS SRM's M1 damping loop gains have been at -0.5 since 2023-04-21 22:49 UTC. Prior to that, they were at the -1.0 as designed (see the 2022 upgrade to "Level 2.0", LHO:65310). See the attached trend confirming this to be true.
Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 09:57, Tuesday 24 June 2025 (85275)
Tue CP1 Fill (note earlier time)

Tue Jun 24 09:40:55 2025 INFO: Fill completed in 8min 51secs

Today's fill was run at 09:33 so it would complete before Patrick started his work on h0vacly.

Images attached to this report
H1 CDS
jonathan.hanks@LIGO.ORG - posted 09:06, Tuesday 24 June 2025 - last comment - 09:08, Tuesday 24 June 2025(85271)
WP 12570 Update digivideo servers with a new build of the pylon camera software (no application/server changes)

Per WP 12570 we stopped the cameras on h1digivideo[45] and updated the pylon and pylon-camera-server software.  This was to bring in a fix in the pylon library for file descriptors leaked when a camera dropped and reconnected.

Last week we had just restarted the software to close all the leaked file descriptors; this update should solve the problem properly.  This was not a feature update of the camera software, just a rebuild against a newer pylon library.

This was done 8:50-9:00am local time (16:00 UTC).

The procedure was to:

1. run apt-get update

2. stop all the camera processes via the management web page

3. run apt-get install pylon pylon-camera-server

4. restart all the camera processes via the management web page

5. spot check a few cameras to make sure things come back.

Comments related to this report
jonathan.hanks@LIGO.ORG - 09:08, Tuesday 24 June 2025 (85272)

For posterity, pylon was updated to 8.1.0 and pylon-camera-server was updated to 0.1.18

H1 CDS
jonathan.hanks@LIGO.ORG - posted 08:43, Tuesday 24 June 2025 (85270)
Updated leap second databases on core infrastructure

A FAMIS task reminded us to update the leap second files on some core infrastructure that we don't update much.

These were updated either by updating the tzdata package or copying over a current leap-seconds.list + leapseconds file to /usr/share/zoneinfo.

h1daqd* machines were updated to tzdata 2025b-0+deb11u1
h1guardian1 already had this applied.
Erik is updating h1vmboot5-5 and its diskless roots
h1fs[01] have been updated
h1hwinj1 has been updated
h1digivideo2 has been updated
h1hwsmsr & h1hwse[xy] have been updated
h1fescript0 has been updated

This is regular maintenance.  An example of what happens when we miss this is in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82769

H1 General
oli.patane@LIGO.ORG - posted 07:40, Tuesday 24 June 2025 (85269)
Ops Day Shift Start

TITLE: 06/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 2mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Currently Observing and have been Locked for almost 4 hours. Magnetic injections just finished.

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:37, Tuesday 24 June 2025 (85268)
Workstations updated

Workstations updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

H1 General (CDS)
thomas.shaffer@LIGO.ORG - posted 00:35, Tuesday 24 June 2025 (85267)
Ops Owl Update

The FC TRANS GR (CAM33) camera looks to have crashed, or at least the channels for its controls were no longer accessible. The image was still viewable, at least from the screenshots. I restarted the process via the browser interface linked from the camera overview and that did the trick. Back to Observing at 0730 UTC.

LHO General
ryan.short@LIGO.ORG - posted 22:05, Monday 23 June 2025 (85266)
Ops Eve Shift Summary

TITLE: 06/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY: H1 lost lock due to one or more earthquakes this evening and is still working on relocking. Despite everything running smoothly and automatically on the way back up, there was a lockloss for some reason during MAX_POWER as I type this, so H1 will be trying again on its own.

H1 General (Lockloss, SEI)
ryan.short@LIGO.ORG - posted 20:18, Monday 23 June 2025 (85265)
Lockloss @ 03:03 UTC

Lockloss @ 03:03 UTC after almost 6 hours locked - link to lockloss tool

Several quakes rolling through around this time; hard to say which was the real cause but likely a M5.7 in the Caribbean.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 18:06, Monday 23 June 2025 (85264)
HAM1 Pumpdown Update

(Jordan V., Travis S., Gerardo M.)

Up to this morning the volume of HAM1 was being pumped by a turbo pump and an ion pump. We have now removed the turbo pump from service by closing its isolation valve, though the pumping system itself remains on (the SS500 cart with scroll pump, and the turbo pump, are still ON). The turbo pump was isolated to let the ion pump take over the pumping of HAM1; it took a few hours, but the ion pump seems to be doing well managing the internal pressure of HAM1, see attached trend. We did have a small anomaly that can be noted on the same trend data, a little spike, which we are looking into. If the pressure continues to improve, we'll be able to turn off all other auxiliary systems and decouple everything from the turbo pump tomorrow.

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 16:36, Monday 23 June 2025 (85263)
PSL 10-Day Trends

FAMIS 31091

Nothing major to report; things are looking stable this week.

Images attached to this report
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 12:03, Friday 20 June 2025 - last comment - 10:39, Tuesday 24 June 2025(85198)
SRM Transverse OSEM, Mounted on Opposite Side, has incorrect OSEM2EUL / EUL2OSEM matrix sign; Inconsequential, but should be Fixed for Sanity's Sake.
J. Kissel

I'm building up some controls design documentation for the derivations of the OSEM2EUL / EUL2OSEM matrices for the existing suspension types (see G2402388), in prep for deriving new ones for, e.g., the HRTS, and for the case that we upgrade any SUS to use the 2-DOF AuSEMs.

In doing so, I re-remembered that the HLTS, HSTS, OMC controls arrangement poster (E1100109), which defines the now-called "6HT" OSEM arrangement of T2400265, calls out two possible positions for the transverse sensor, the "side" and the "opposite side," which I'll abbreviate as SD and OS, respectively, from here on.

If the transverse sensor is mounted in the SD position, then as the suspension moves in +T, the flag further occults the OSEM's LED beam, creating a more negative ADC signal. Thus, the OSEM2EUL matrix's SD to T element should be -1.0.
If the transverse sensor is mounted in the OS position, then as the suspension moves in +T, the flag withdraws, revealing more of the OSEM's LED beam and creating a more positive ADC signal. Thus, the OSEM2EUL matrix's SD to T element should be +1.0.
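
That rule compresses to a one-line lookup (a sketch of the convention just stated, not production code):
    # OSEM2EUL "SD"->T element as a function of the transverse OSEM's mount position,
    # per the occultation argument above
    OSEM2EUL_T_ELEMENT = {'SD': -1.0,   # +T occults more of the LED beam -> more negative signal
                          'OS': +1.0}   # +T reveals more of the LED beam -> more positive signal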

Not *actually* remembering yesterday that the HLTSs PR3 and SR3, and two of the 9 HSTSs, SR2 and SRM, use OS as their transverse sensor position, and missing the note from Betsy in the abstract of E1100109 to look at each SUS's Systems Level SolidWorks assembly for the transverse sensor location assignment (prior to this morning the note was not in red, nor did it call out explicitly which suspensions have their transverse sensor mounted in the OS position), I was worried that we'd missed this when defining the sign of *all* HLTS / HSTS / OMCS OSEM2EUL / EUL2OSEM matrices, and had assumed they were all installed as SD OSEMs with -1.0 OSEM2EUL and EUL2OSEM matrix elements.

Below, I inventory the status with 
    - suspension name, 
    - a reference to a picture of the transverse OSEM (or the corresponding flag w/o the OSEM), 
    - confirmation that the SW drawing matches the picture, 
    - the current value / sign of the OSEM2EUL / EUL2OSEM matrix element (H1:SUS-${OPTIC}_M1_OSEM2EUL_2_6 or H1:SUS-${OPTIC}_M1_EUL2OSEM_6_2)
    - a conclusion of "all good" or what's wrong.


Optic   T Sensor        aLOG pic        SW check        OSEM2EUL /      Conclusion
        Mount                                           EUL2OSEM value
MC1     SD              LHO:6014        D0901088 g      -1.0            all good
MC3     SD              LHO:39098       D0901089 g      -1.0            all good
PRM     SD              LHO:39682       D0901090 g      -1.0            all good
PR3     OS              LHO:39682       D0901086 g      +1.0            all good

MC2     SD              LHO:85195       D0901099 g      -1.0            all good
PR2     SD              LHO:85195       D0901098 g      -1.0            all good

SR2     OS              LHO:41768       D0901128 g      +1.0            all good

SRM     OS              LHO:60515       D0901133 g      -1.0            OSEM2EUL/EUL2OSEM wrong!
SR3     OS              LHO:60515       D0901132 g      +1.0            all good

FC1     SD              LHO:61710       D1900364 g      -1.0            all good
FC2     SD              LHO:65530       D1900368 g      -1.0            all good

OMC     SD              LHO:75529       D1300240 g      -1.0            all good (see also G1300086)


So, as the title of this aLOG states, we've got the sign wrong on SRM.

Shouldn't we have discovered this with the "health check TFs?"
Why doesn't this show as a difference in the "plant" ("health check") transfer functions when comparing against other SUS that have the sign right?
Why *don't* we need a different sign on SRM's transverse damping loop? 

Because the signs in the EUL2OSEM drive and the OSEM2EUL sensed motion are self-consistent:

When the SRM EUL2OSEM matrix requests a drive in +T as though it had an OSEM coil in the "SD" position, it's actually driving in -T, because the OSEM coil is in the OS position. 
On the sensing side, the OSEM2EUL matrix corrects for an "SD" OSEM ("more negative when moving in +T") and has the (incorrect) -1.0 in the OSEM2EUL matrix. But since the SUS is actually moving in -T, making the flag occult more of the OS OSEM's LED beam and yielding a more negative ADC signal, that -T motion is reported as +T in the DAMP bank because of the minus sign in the "SD"-style OSEM2EUL matrix. 
So the phase between DAMP OUT and DAMP IN at DC is still zero, as though "everything was normal," because a requested physical drive in +T is sensed as +T.

Thus, since the sensor/drive phase is zero at DC like every other HSTS, we can use the same -1.0 feedback sign as every other DOF and every other HSTS.
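
This self-consistency can be checked with a toy sign chain (a hypothetical illustration, signs only: mount_sign is the ADC response per unit of physical +T motion, -1 for "SD" and +1 for "OS", and the same sign relates an "SD"-convention coil drive to the physical motion it produces):
    # Toy DC sign chain: does a requested +T come back sensed as +T?
    def dc_loop_sign(matrix_element, mount_sign):
        physical_T = matrix_element * mount_sign                # EUL2OSEM request -> physical motion
        reported_T = matrix_element * mount_sign * physical_T   # physical motion -> OSEM2EUL report
        return reported_T                                       # per unit of requested +T

    print(dc_loop_sign(-1.0, +1))   # SRM before the fix: +1.0, i.e. +T request sensed as +T
    print(dc_loop_sign(+1.0, +1))   # SRM after the fix:  +1.0, identical DC behavior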

Does this matter for the IFO?
No. This sensor is only used to damp transverse motion, i.e. motion transverse to the main IFO beam. If there are no defects on the SRM optic's HR surface, and the transverse displacement doesn't span a large fraction of the beam width, then there should be no coupling into L, P, or Y, which are the DOFs to which the IFO should be sensitive. 
This is corroborated by Josh's recent work, where he measured the coupling of the "SD" OSEM drive (actually in the OS position, driving -T) and found it to be negligible; see LHO:83277, specifically this SRM plot.
Not only is the IFO insensitive to the transverse drive from the OSEM, but the absolute sign, whether it's +T or -T, also doesn't matter, since there are no *other* sensors measuring this DOF against whose signs we'd have to compare.

Should we fix it?
I vote yes, but with a low priority, perhaps during maintenance when we have the time to gather a "post transverse sensor sign change fix" set of transfer functions.
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:39, Tuesday 24 June 2025 (85279)
H1 SUS SRM's Basis Transformation Matrix elements for Transverse have been rectified as of 2025-06-24. See LHO:85277.