LHO FMCS (PEM)
oli.patane@LIGO.ORG - posted 20:25, Friday 11 August 2023 (72166)
HVAC Fan Vibrometers (FAMIS 25594)

Closes FAMIS #25594, last checked in 71978

Corner Station Fans (Attachment1)
Corner station fans are all well within range. The line ~2.5 days ago was due to the corner station fans shutting off from the Fire Alarm (72097).

Outbuilding Fans (Attachment2)
All outbuilding fans well within range.

Other observations
-On August 5th at 18:07 UTC, something happened to MY_FAN1_270_1/2 that caused MY_FAN1_270_1 to go from ~0.7 to ~0.3.

Images attached to this report
H1 CAL (CAL)
richard.savage@LIGO.ORG - posted 16:17, Friday 11 August 2023 (72161)
Assessing impact of displacement of Xend Pcal beam during Tuesday maintenance earlier this week

During Tuesday maintenance this week, we moved the upper (inner) beam of the Xend Pcal system down from its nominal location by 2.5 mm on the surface of the ETM (see aLog entry 72063).

The observed change in the Pcal X/Y comparison should give a measurement of the vertical component of the displacement of the interferometer beam from the center of the ETM.  This displacement, denoted b_y, is related to the observed change by

Delta_XY =  c/2 * b_y * M / I

where Delta_XY is the observed change in the Pcal X/Y comparison (after minus before), c is the change in the vertical position of the upper Pcal beam, the factor of 1/2 results from only moving one of the two Pcal beams, b_y is the vertical component of the displacement of the interferometer beam from the center of the optic, M is the mass of the optic, and I is the moment of inertia of the ETM for rotation about an axis parallel to the face of the optic and through the center of the face of the optic.

Thus the interferometer beam displacement can be estimated by b_y  = 2* Delta_XY / (c * M / I).

For the Xend ETM, M / I = 0.94e-4 / mm^2 and c = - 2.5 mm.  Thus b_y = -0.85e4 * Delta_XY mm

Using DTT (1024-sec FFTs, 50% overlap, 10 avgs), we analyzed data during a lock stretch the day before we moved the beam (08/07 from about 08:00 to 23:00 UTC) and after the move (08/09 between 00:30 and 06:30 and between 11:30 and 15:30).  We observe a change in the X/Y comparison of about -24e-4 (see attached plot).  This would indicate an interferometer beam offset of about 20.7 mm in the positive y direction, i.e. ABOVE the center of the optic.
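
A minimal numeric check of the arithmetic above (all values taken from this entry; illustrative only):

# Estimate the vertical interferometer beam offset from the observed change
# in the Pcal X/Y comparison, using b_y = 2 * Delta_XY / (c * M / I)
M_over_I = 0.94e-4   # [1/mm^2], Xend ETM mass over moment of inertia (from above)
c = -2.5             # [mm], vertical move of the upper Pcal beam
Delta_XY = -24e-4    # observed change in the Pcal X/Y comparison (after minus before)

b_y = 2 * Delta_XY / (c * M_over_I)
print("b_y = %.1f mm" % b_y)   # ~ +20 mm, i.e. above the center of the optic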

Info provided by JenneD (see below) indicates that pitch measurements using the electrostatic actuators show the interferometer beam offset by about 14.3 mm BELOW the center of the ETM.

We will look at more data from before and after the beam position move and double-check our calculation to make sure we aren't missing a minus sign somewhere.

Next Tuesday, we plan to move the beam back to its nominal vertical position and offset it to the left (when viewing the face of the optic from the BS side) by 2.5 mm to assess the horizontal component of the interferometer beam offset from center.

-------------------------
From JenneD on 7/21/23:

Folder for getting the spot position is /opt/rtcds/userapps/trunk/isc/common/scripts/decoup/BeamPosition/
Using matlab....
help a2l_lookup:  look up spot position for a given a2l gain on a test mass
Input:
1) 'PIT' or 'YAW'
2) a2l gain
Output:
1) spot position in mm from test mass center ("spot position" is really the actuation node position; if the spot is co-located with the actuation node (eg. servo-ed there) then this also represents the spot position)
Sign convention for spot position: up (+Vert on SUS screens) is positive for pitch and farther to the left (+Trans on SUS screens) is positive for yaw.

ETMX:

a2l_lookup('PIT',4.0) Spot is -14.3 mm from the PIT center of the optic
a2l_lookup('YAW',4.4) Spot is 16.2 mm from the YAW center of the optic

ETMY:

a2l_lookup('PIT',4.60) Spot is -17.1 mm from the PIT center of the optic
a2l_lookup('YAW',3.2) Spot is 11.8 mm from the YAW center of the optic

Non-image files attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:09, Friday 11 August 2023 (72162)
Ops EVE Shift Start

TITLE: 08/11 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 10mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:

Taking over for Ryan S. Observing and Locked for 3 hours.

LHO General
ryan.short@LIGO.ORG - posted 16:02, Friday 11 August 2023 (72156)
Ops Day Shift Summary

TITLE: 08/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Shift started troubleshooting locking issues eventually tracked down to recently modified SR2/3 damping gains (see alog 72152). Two short lock stretches in the morning to do some PEM injections while L1 was down. Back to observing by early afternoon, now locked for almost 3 hours.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
15:56 FAC Randy MX - Inventory 17:56
15:56 PEM Robert EY - PEM injections 16:51
16:15 FAC Cindi MY - Cleaning 16:50
17:45 FAC Cindi MY - Tech clean 19:20
17:59 PEM Robert EY - PEM injections 19:10
18:54 SEI Jim MX, MY - 3IFO inventory 20:25
19:52 PEM Robert EY - Shutting off amps, turning off lights 20:18
20:25 SEI Jim EY - Wind fence pic 20:40
21:39 CAL Tony, Genevieve PCal Lab - Cleaning up equipment 22:32
H1 SEI
jim.warner@LIGO.ORG - posted 14:45, Friday 11 August 2023 (72158)
August Wind Fence inspection

Finished a walk through of the wind fences today. Both fences looked fine, no damage. Attached photo is of the EX fence. No pics of the EY fence, still waiting on my replacement phone, but there's nothing to show.

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 11:28, Friday 11 August 2023 (72155)
PSL Status Report - Weekly

FAMIS 25491


Laser Status:
    NPRO output power is 1.83W (nominal ~2W)
    AMP1 output power is 67.19W (nominal ~70W)
    AMP2 output power is 134.8W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN

PMC:
    It has been locked 5 days, 3 hr 15 minutes
    Reflected power = 17.33W
    Transmitted power = 108.6W
    PowerSum = 125.9W

FSS:
    It has been locked for 0 days 1 hr and 45 min
    TPD[V] = 0.9241V

ISS:
    The diffracted power is around 2.1%
    Last saturation event was 0 days, 1 hour, and 45 minutes ago


Possible Issues: None

LHO VE
david.barker@LIGO.ORG - posted 10:20, Friday 11 August 2023 (72153)
Fri CP1 Fill

Fri Aug 11 10:17:15 2023 INFO: Fill completed in 17min 10secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 ISC (SUS)
jenne.driggers@LIGO.ORG - posted 10:04, Friday 11 August 2023 - last comment - 10:51, Friday 11 August 2023(72152)
Locking is better with higher SR2, SR3 damping gains

After the locking struggles this morning (alogs 72145 and 72149), Gabriele suggested reverting the SR2 and SR3 damping gains back to a higher value.  RyanS did that by hand, and the IFO got all the way to NLN the next lock with no further assistance (I believe). 

While the IFO was relocking, I added a few lines to the end of LOWNOISE_ASC to set the SR2 and SR3 damping gains to their lower values from alog 72130.
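
For reference, a minimal guardian-style sketch of what such lines could look like (gain values from alog 72130; the ezca object is provided by guardian, and this is an illustration of the idea, not the code actually added to LOWNOISE_ASC):

# Lower the SR2/SR3 top-stage damping gains at the end of LOWNOISE_ASC.
# DOF list is the usual M1 damping set; values follow alog 72130.
for dof in ['L', 'T', 'V', 'R', 'P', 'Y']:
    ezca['SUS-SR2_M1_DAMP_%s_GAIN' % dof] = -0.1
    ezca['SUS-SR3_M1_DAMP_%s_GAIN' % dof] = -0.2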

TJ and RyanS are working right now on a way to ensure that we have the higher gain values for lock acquisition and initial alignment, but also the lower gains accepted in the Observe.snap file (as of early this morning, the safe.snap and observe.snap files are still linked).

Comments related to this report
thomas.shaffer@LIGO.ORG - 10:51, Friday 11 August 2023 (72154)

The safe and observe files for SR2 and SR3 are now unlinked. The higher gain values are saved in the safe.snaps and the lower gains have been accepted in the observe.snaps.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 08:02, Friday 11 August 2023 (72150)
Ops Day Shift Start

TITLE: 08/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 lost lock at 11:55 UTC and has been struggling to relock since; appears to be the same issue encountered by Ryan C. and Austin last night (alogs 72145 and 72149). Starting with troubleshooting now.

H1 ISC (OpsInfo)
austin.jennings@LIGO.ORG - posted 03:15, Friday 11 August 2023 (72149)
Troubleshooting Mysterious DRMI Locklosses - 8/11

Follow up to Ryan C.'s alog. Previously when running into issues engaging DRMI ASC, we would wait in ENGAGE DRMI ASC for a few minutes to let the signals converge. In this case we would lose lock after a few seconds; we noticed that a few ASC signals would start to converge and then suddenly swing wildly, causing the lockloss. These were most apparent in the MICH, SRC1, and PRC2 signals, both in P and Y. Keita and I decided to try walking the suspensions that feed into these signals, as their signal inputs were well into the thousands (into the tens of thousands for PRC2). After walking the BS (MICH), SRM (SRC1), and PR2 (PRC2), we were able to get the ASC signals closer to 0; however, this didn't appear to help once we went back to ENGAGE DRMI ASC.

Here are the original values for the 3 suspensions I moved in case they need to be reverted at a future time:

PR2 P: 1582.4 Y: 3236.0

BS P: 98.61 Y: -395.81

SRM P: 2228.3 Y: -3144.0

One attached image shows what the ASC signals looked like before moving the three suspensions, and another shows what the signals looked like after the move (sorry, these scopes have poor scaling). Keita suggested we start looking into the ASC loops themselves, particularly SRC2. Before the DRMI ASC loops turned on, we turned OFF the SRC2 servo, then went back to ENGAGE DRMI ASC. This seemed to be able to hold us in ENGAGE ASC. At this point, we tried turning the SRC2 loop back on, but with a halved gain for both P/Y (the original P gain was 60, Y was 100). When turning on the servo with half the gain, we were still seeing a good amount of 1 Hz motion in the signal, so we tried setting the gain to a quarter of the original value... same result.

Next, we tried halving the SRC1 gains, from 4 down to 2. Then we tried to add back in the quartered SRC2 gains, which still yielded the same result. At this point, we decided to leave the SRC2 loop off entirely while keeping the halved SRC1 P/Y gains and try to continue locking - this worked and we were able to continue locking. Eventually guardian took over the ASC loops once we got to ENGAGE ASC, and we had no issues relocking afterwards. This should be looked at tomorrow during the day, but at least now we have a temporary workaround if we lose lock again.

Steps taken to bypass this issue - Tagging OpsInfo:

1) Wait in DRMI LOCKED PREP ASC

2) Turn OFF the SRC2 P/Y loops

3) Set SRC1 P/Y gains to 2 (originally 4) - note this step was us erring on the side of caution, since the SRC2 oscillation was coupling into SRC1

4) Continue the locking process - guardian will eventually take over and set the control loops to their nominal state

Back to NLN @ 10:12 UTC.
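
For completeness, a hedged ezca-style sketch of steps 2-3 above (zeroing the gain is one way to take the SRC2 loops out; the input/output switches are another, and the exact method used above is not specified, so treat the channel writes below as an assumption based on the standard H1:ASC naming):

# Temporary DRMI ASC workaround (ezca provided, e.g. in a guardian shell).
for dof in ['P', 'Y']:
    ezca['ASC-SRC2_%s_GAIN' % dof] = 0    # step 2: disable the SRC2 loops
    ezca['ASC-SRC1_%s_GAIN' % dof] = 2    # step 3: halve SRC1 (nominal 4)
# step 4: continue locking; guardian restores the nominal configuration later.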
 

Images attached to this report
H1 AOS
ryan.crouch@LIGO.ORG - posted 00:30, Friday 11 August 2023 (72145)
OPS Thursday eve shift summary

TITLE: 08/11 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY:

Lock#1:

We rode through a 5.9 from Japan and a 5.1 from NZ which came through within a few minutes of each other around 00:45 UTC (peakmon maxed out at 1100).

Superevent S23081n

Lockloss @ 04:25

Lock#2:

Couldn't get any flashes at DRMI or PRMI, went through CHECK_MICH, still nothing on PRMI, so H1MANAGER took us to initial alignment. Lost it during OFFLOAD_DRMI_ASC: SRM, SR2, and BS saturations, then lockloss. LSC-POPAIR_B_RF90_I_ERR_DQ starts oscillating and growing a minute or so before we lose it.

Lock#3:

Lost it at OFFLOAD_DRMI_ASC again, same situation.

Lock#4:

Lost it at FIND_IR

Lock#5-6:

The IMC seems to be taking longer to lock; lost it at DRMI. The flashes were noticeably worse during acquire_drmi, so I decided to try another initial alignment.

Xarm was struggling: it looked weird and was very fuzzy on the scope, but the camera image looked fine. The signals weren't converging, so I requested XARM to unlocked; it then went into increase flashes and locked, but encountered the same issue. It was able to get to offload this attempt and get past green_arms. During SRY the OM suspensions weren't getting cleared, so there were a lot of IFO_OUT saturations from OM1_Y and OM3_P.  Finished initial alignment, then went back into locking.

Same issue: there's a ringup during DRMI_LOCKED_CHECK_ASC that kills it. I called Keita for some help near the end of the shift and told the incoming operator about the issues; at his suggestion we tried to hold in TURN_ON_BS_STAGE2, since it may be the ASC signals that are causing issues. We're holding here as of 07:00 UTC.

LOG:                                                                                                                                                                         

Start Time System Name Location Lazer_Haz Task Time End
22:03 PEM Robert EX N PEM injection 22:54
Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 21:26, Thursday 10 August 2023 - last comment - 23:43, Thursday 10 August 2023(72147)
Lockloss @ 04:45

No clear reason, quick lockloss

Comments related to this report
ryan.crouch@LIGO.ORG - 23:43, Thursday 10 August 2023 (72148)

Relocking is struggling; there's some ringup during DRMI_LOCKED_CHECK_ASC that keeps killing it.

Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 20:06, Thursday 10 August 2023 (72146)
OPS Thursday eve shift midshift update

STATE of H1: Observing at 151Mpc

We've been locked for 18:13; everything's stable.

H1 SUS
gabriele.vajente@LIGO.ORG - posted 10:14, Thursday 10 August 2023 - last comment - 14:43, Tuesday 15 August 2023(72130)
Reducing SR2 and SR3 damping

Follow up on previous tests (72106)

First I injected noise on SR2_M1_DAMP_P and SR2_M1_DAMP_L to measure the transfer function to SRCL. The result shows that the shapes are different and the ratio is not constant in frequency. Therefore we probably can't cancel the coupling of SR2_DAMP_P to SRCL by rebalancing the driving matrix, although I haven't thought carefully about whether there is some loop correction I need to apply to those transfer functions. I measured and plotted the DAMP_*_OUT to SRCL_OUT transfer functions. It might still be worth trying to change the P driving matrix while monitoring a P line to minimize the coupling to SRCL.
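
A minimal offline sketch of this kind of transfer-function estimate with gwpy, using the DAMP_P injection window logged below; the exact DQ channel names are assumptions based on the channels quoted in this entry, and no loop correction is applied:

# Estimate the SR2_M1_DAMP_P -> SRCL transfer function via CSD/PSD.
from gwpy.timeseries import TimeSeries

start, end = 1375718856, 1375719066   # GPS window of the DAMP_P injection (from log below)
exc  = TimeSeries.fetch('H1:SUS-SR2_M1_DAMP_P_OUT_DQ', start, end)
srcl = TimeSeries.fetch('H1:LSC-SRCL_OUT_DQ', start, end)

# downsample both to a common rate before the estimate
rate = min(exc.sample_rate, srcl.sample_rate).value
exc, srcl = exc.resample(rate), srcl.resample(rate)

fft, olap = 32, 16   # seconds, 50% overlap
tf  = exc.csd(srcl, fftlength=fft, overlap=olap) / exc.psd(fftlength=fft, overlap=olap)
coh = exc.coherence(srcl, fftlength=fft, overlap=olap)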

Then I reduced the damping gains for SR2 and SR3 even further. We are now running with SR2_M1_DAMP_*_GAIN = -0.1 (was -0.5 for all DOFs except P, which was -0.2 since I reduced it yesterday) and SR3_M1_DAMP_*_GAIN = -0.2 (was -1). This has improved the SRCL motion a lot and also improved the DARM RMS. It looks like it also improved the range.

Tony has accepted this new configuration in SDF.

Detailed log below for future reference.

 

Time with SR2 P gain at -0.2 (but before that too)
from    PDT: 2023-08-10 08:52:40.466492 PDT
        UTC: 2023-08-10 15:52:40.466492 UTC
        GPS: 1375717978.466492
to      PDT: 2023-08-10 09:00:06.986101 PDT
        UTC: 2023-08-10 16:00:06.986101 UTC
        GPS: 1375718424.986101

H1:SUS-SR2_M1_DAMP_P_EXC  butter("BandPass",4,1,10) ampl 2
from    PDT: 2023-08-10 09:07:18.701326 PDT
        UTC: 2023-08-10 16:07:18.701326 UTC
        GPS: 1375718856.701326
to      PDT: 2023-08-10 09:10:48.310499 PDT
        UTC: 2023-08-10 16:10:48.310499 UTC
        GPS: 1375719066.310499

H1:SUS-SR2_M1_DAMP_L_EXC  butter("BandPass",4,1,10) ampl 0.2
from    PDT: 2023-08-10 09:13:48.039178 PDT
        UTC: 2023-08-10 16:13:48.039178 UTC
        GPS: 1375719246.039178
to      PDT: 2023-08-10 09:17:08.657970 PDT
        UTC: 2023-08-10 16:17:08.657970 UTC
        GPS: 1375719446.657970

All SR2 damping at -0.2, all SR3 damping at -0.5
start   PDT: 2023-08-10 09:31:47.701973 PDT
        UTC: 2023-08-10 16:31:47.701973 UTC
        GPS: 1375720325.701973
to      PDT: 2023-08-10 09:37:34.801318 PDT
        UTC: 2023-08-10 16:37:34.801318 UTC
        GPS: 1375720672.801318

All SR2 damping at -0.2, all SR3 damping at -0.2
start   PDT: 2023-08-10 09:38:42.830657 PDT
        UTC: 2023-08-10 16:38:42.830657 UTC
        GPS: 1375720740.830657
to      PDT: 2023-08-10 09:43:58.578103 PDT
        UTC: 2023-08-10 16:43:58.578103 UTC
        GPS: 1375721056.578103

All SR2 damping at -0.1, all SR3 damping at -0.2
start   PDT: 2023-08-10 09:45:38.009515 PDT
        UTC: 2023-08-10 16:45:38.009515 UTC
        GPS: 1375721156.009515
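
For reference, a minimal Python sketch of an excitation shaped like the butter("BandPass",4,1,10) design quoted above (the awg/foton order convention may differ from scipy's, and the sample rate here is an assumption; illustrative only):

# Butterworth band-pass, 1-10 Hz, applied to white noise to mimic the injection shape.
import numpy as np
from scipy import signal

fs = 256                                   # Hz, assumed excitation sample rate
sos = signal.butter(4, [1, 10], btype='bandpass', fs=fs, output='sos')

rng = np.random.default_rng(0)
white = rng.normal(size=60 * fs)           # 60 s of white noise
colored = 2 * signal.sosfilt(sos, white)   # amplitude 2, as in the SR2 DAMP_P injection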

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 16:30, Friday 11 August 2023 (72159)

If our overall goal is to remove peaks from DARM that dominate the RMS, reducing these damping gains is not the best way to achieve that. The SR2 L damping gain was reduced by a factor of 5 in this alog, and as a result a 2.8 Hz peak is now being injected into DARM from SRCL. This 2.8 Hz peak corresponds to a 2.8 Hz SR2 L resonance. There is no length control on SR2, so the only way to suppress any length motion of SR2 is via the top stage damping loops. The same can be said for SR3, whose gains were reduced by 80%. It may be that we are reducing sensor noise injected into SRCL from 3-6 Hz by reducing these gains, hence the improvement Gabriele has noticed.

Comparing a DARM spectrum before and after this change to the damping gains, you can see that the reduction in the damping gain did reduce DARM and SRCL above 3 Hz, but also created a new peak in DARM and SRCL at 2.8 Hz. I also plotted spectra of all dofs of SR2 and SR3 before and after the damping gain change showing that some suspension resonances are no longer being suppressed. All reference traces are from a lock on Aug 9 before these damping gains were reduced and the live traces are from this current lock. The final plot shows a transfer function measurement of SR2 L taken by Jeff and me in Oct 2022.
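
A hedged gwpy sketch of this kind of before/after comparison (the time windows are placeholders, not the exact stretches used for the attached plots):

# Compare DARM ASDs before and after the damping-gain change.
from gwpy.timeseries import TimeSeries

before = TimeSeries.fetch('H1:GDS-CALIB_STRAIN', 'Aug 9 2023 12:00', 'Aug 9 2023 13:00')
after  = TimeSeries.fetch('H1:GDS-CALIB_STRAIN', 'Aug 11 2023 01:00', 'Aug 11 2023 02:00')

asd_before = before.asd(fftlength=64, overlap=32)
asd_after  = after.asd(fftlength=64, overlap=32)

plot = asd_before.plot(label='before (Aug 9)')
ax = plot.gca()
ax.plot(asd_after, label='after (Aug 11)')
ax.set_xlim(1, 30)   # region around the 2.8 Hz peak and the 3-6 Hz band
ax.legend()
plot.show()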

Images attached to this comment
elenna.capote@LIGO.ORG - 16:16, Monday 14 August 2023 (72204)

Since we fell out of lock, I took the opportunity to make SR2 and SR3 damping gain adjustments. I have split the difference on the gain reductions in Gabriele's alog. I increased all the SR2 damping gains from -0.1 to -0.2 (nominal is -0.5). I increased the SR3 damping gains from -0.2 to -0.5 (nominal is -1).

This is guardian controlled in LOWNOISE_ASC, because we need to acquire lock with higher damping gains.

Once we are back in lock, I will check the presence of the 2.8 Hz peak in DARM and determine how much different the DARM RMS is from this change.

There will be SDF diffs in observe for all SR2 and SR3 damping dofs. They can be accepted.

oli.patane@LIGO.ORG - 16:55, Monday 14 August 2023 (72206)

The SR2 and SR3 damping gain changes that Elenna made have been accepted.

Images attached to this comment
elenna.capote@LIGO.ORG - 17:47, Monday 14 August 2023 (72208)

The DARM RMS increases by about 8% with these new, slightly higher gains. These gains are a factor of 2-2.5 greater than Gabriele's reduced values. The 2.8 Hz peak in DARM is down by 21%.

Images attached to this comment
elenna.capote@LIGO.ORG - 14:43, Tuesday 15 August 2023 (72249)

This is a somewhat difficult determination to make, given all the nonstationary noise from 20-50 Hz, but it appears the DARM sensitivity is slightly improved from 20-40 Hz with a slightly higher SR2 gain. I randomly selected several times from the past few locks with the SR2 gains set to -0.1 and recent data from the last 24 hours where SR2 gains were set to -0.2. There is a small improvement in the data with all SR2 damping gains = -0.2 and SR3 damping gains= -0.5.

I think we need to do additional tests to determine exactly how SR2 and SR3 motion limit SRCL and DARM so we can make more targeted improvements to both. My unconfirmed conclusion from this small set of data is that while we may be able to reduce reinjected sensor noise above 3 Hz with a damping gain reduction, we will also limit DARM if there is too much motion from SR2 and SR3.

Images attached to this comment
H1 DetChar (CAL, DetChar)
derek.davis@LIGO.ORG - posted 15:01, Tuesday 08 August 2023 - last comment - 21:49, Tuesday 29 August 2023(72064)
Excess noise near 102.13 Hz calibration line

Benoit, Ansel, Derek

Benoit noticed that for recent locks, the 102.13 Hz calibration line is much louder than typical for the first few hours of the lock. An example of this behavior is shown in the attached spectrogram of H1 strain data on August 5 - this is the first day this behavior appeared. Ansel noted that this feature includes a comb-like structure around the line that is only present in the H1:GDS-CALIB_STRAIN_NOLINES channel and not H1:GDS-CALIB_STRAIN (see spectra for CALIB_STRAIN and CALIB_STRAIN_NOLINES on Aug 5). This issue is also visible in the PCAL trends for the 102.13 Hz line.

We are not sure if the excess noise near 102.13 Hz is from the calibration line itself or another noise source that is near the line. However, the behavior has been present for every lock since 12:30 UTC on August 5 2023. 

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:21, Wednesday 09 August 2023 (72094)CAL
FYI, 
$ gpstime Aug 05 2023 12:30 UTC
    PDT: 2023-08-05 05:30:00.000000 PDT
    UTC: 2023-08-05 12:30:00.000000 UTC
    GPS: 1375273818.000000
so... this behavior seems to have started at 5:30 a.m. local time on a Saturday, therefore it is *very* unlikely that the start of this issue is intentional / human-change driven.

The investigation continues....
making sure to tag CAL.
jeffrey.kissel@LIGO.ORG - 09:42, Wednesday 09 August 2023 (72095)
Other facts and recent events:
- Attached are 2 screenshots that show the actual *digital* excitation is not changing with time in any way.
    :: 2023-08-08_H1PCALEX_OSC7_102p13Hz_Line_3mo_trend.png shows the EPICS channel version of the output of the specific oscillator -- PCALX's OSC7, which drives the 102.13 Hz line. The minute trend shows the max, min, and mean of the output, and there's no change in amplitude.
    :: 2023-08-08_H1PCALEX_EXC_SUM_3mo_trend.png shows a trend of the total excitation sum from PCAL X. This also shows *no* change in amplitude over time.

Both trends show the Aug 02 2023 change-in-amplitude kerfuffle I caused, which Corey found and a bit later rectified -- see LHO:71894 and subsequent comments -- but that was done, over with, and solved definitely by Aug 03 2023 UTC, and it is unrelated to the start of this problem.

It's also well after I installed new oscillators and rebooted the PCALX, PCALY, and OMC models on Aug 01 2023 (see LHO:71881).
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:44, Wednesday 09 August 2023 (72100)
The front-end version of the calibration's systematic error at 102.13 Hz also shows the long, time-dependent issue -- this will allow us to trend the issue against other channels

Folks in the calibration group have found that the online monitoring system for the
    -  overall DARM response function systematic error
    - (absolute reference) / (Calibrated Data Product) [m/m] 
    - ( \eta_R ) ^ (-1) 
    - (C / 1+G)_pcal / (C / 1+G)_strain
    - CAL-DELTAL_REF_PCAL_DQ / GDS-CALIB_STRAIN
(all different ways of saying the same thing; see T1900169) at each PCAL calibration line frequency -- the "grafana" pages -- is showing *huge* amounts of systematic error during these times when the amplitude of the line is super loud.

Though this metric is super useful because it makes it dreadfully obvious that things are going wrong, it is not in any normal frame structure, so you can't compare it against other channels to find out what's causing the systematic error.

However -- remember -- we commissioned a front-end version of this monitoring during ER15 -- see LHO:69285.

That means the channels 
     H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ        << the frequency of the monitor
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM        << the magnitude of the systematic error
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG        << the phase of the systematic error 

tell you (what's supposed to be***) equivalent information.

*** One might say that "what's supposed to be" really means only "roughly equivalent" here, for the following reasons:
    (1) because we're human, the one system is displaying the systematic error \eta_R, and the other is displaying the inverse ( \eta_R ) ^ (-1) 
    (2) Because this is early-days in the front-end system, it uses the "less complete" calibrated channel CAL-DELTAL_EXTERNAL_DQ rather than the "fully correct" channel GDS-CALIB_STRAIN

But because the problem is so dreadfully obvious in these metrics, even though they're only *roughly* equivalent, you can see the same thing.
In the attached screenshot, I show both metrics for the most recent observation stretch, between 10:15 and 14:00 UTC on 2023-Aug-09.

Let's use this front-end metric to narrow down the problem via trending.
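
A hedged sketch of how these front-end channels can be trended offline with gwpy (the time window matches the observation stretch quoted above; this is illustrative only, not the grafana backend):

# Trend the front-end systematic-error monitor channels at the PCAL line.
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ',
    'H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM',
    'H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG',
]
data = TimeSeriesDict.fetch(channels, 'Aug 9 2023 10:15', 'Aug 9 2023 14:00')

plot = data['H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM'].plot()
plot.show()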
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:58, Wednesday 09 August 2023 (72102)CAL, DetChar
There appears to be no change in the PCALX analog excitation monitors either.

Attached is a trend of some key channels in the optical follower servo -- the analog feedback system that serves as intensity stabilization and excitation power linearization for the PCAL laser light that gets transmitted to the test mass -- the actuator of which is an acousto-optic modulator (AOM). There seem to be no major differences in the max, min, and mean of these signals before vs. after these problems started on Aug 05 2023.

H1:CAL-PCALX_OFS_PD_OUT_DQ
H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUT_DQ
Images attached to this comment
madeline.wade@LIGO.ORG - 11:34, Wednesday 09 August 2023 (72104)

I believe this is caused by the presence of another line very close to the 102.13 Hz Pcal line.  This second line is present at the start of a lock stretch but seems to go away as the lock stretch continues.  I have attached a plot showing a zoom-in on an ASD around 102.1-102.2 Hz right after the start of a lock stretch (orange), where the second peak is evident, and well into a lock stretch (blue), where the PCAL line is still present but the second peak just below it in frequency is gone.  This ASD is computed using an hour of data for each curve, so we can get the needed resolution for these two peaks.

I don't know the origin of this second line.  However, a quick fix to the issue could be moving the PCAL line over by about a Hz.  The second attached plot shows that the spectrum looks pretty clean from 101-102 Hz, so somewhere in there would probably be okay for a new location of the PCAL line.
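
For reference, the two peaks are separated by only ~1.7 mHz, so an hour-long FFT (~0.28 mHz resolution) resolves them comfortably. A hedged gwpy sketch of such a zoom (the time window is a placeholder):

# High-resolution ASD around the 102.13 Hz line using a single hour-long FFT.
from gwpy.timeseries import TimeSeries

data = TimeSeries.fetch('H1:GDS-CALIB_STRAIN', 'Aug 5 2023 13:00', 'Aug 5 2023 14:00')
asd = data.asd(fftlength=3600, overlap=0)

plot = asd.plot()
plot.gca().set_xlim(102.1, 102.2)
plot.show()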

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:47, Wednesday 09 August 2023 (72105)

Since it looks like the additional noise is at 102.12833 Hz, I did a quick check in Fscan data from Aug 5 for channels where there is high coherence with DELTAL_EXTERNAL at 102.12833 but *not* at 102.13000 Hz. This narrows down to just a few channels:

  • H1:PEM-EX_MAG_EBAY_SEIRACK_{Z,Y}_DQ
  • H1:PEM-EX_ADC_0_09_OUT_DQ
  • H1:ASC-OMC_A_YAW_OUT_DQ. Note that other ASC-OMC channels (Fscan tracks A,B and PIT,YAW) see high coherence at both frequencies.

(lines git issue opened as we work on this.)
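
A hedged sketch of this kind of narrowband coherence check with gwpy (times are placeholders, the auxiliary channel is one of those listed above, and this is not the Fscan pipeline itself):

# Compare coherence with DELTAL_EXTERNAL at the two nearby frequency bins.
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 'Aug 5 2023 13:00', 'Aug 5 2023 15:00'
darm = TimeSeries.fetch('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)
aux  = TimeSeries.fetch('H1:PEM-EX_MAG_EBAY_SEIRACK_Z_DQ', start, end)

rate = min(darm.sample_rate, aux.sample_rate).value
darm, aux = darm.resample(rate), aux.resample(rate)

coh = darm.coherence(aux, fftlength=1800, overlap=900)
for f in (102.12833, 102.13000):
    i = np.argmin(np.abs(coh.frequencies.value - f))
    print(f, coh.value[i])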

jeffrey.kissel@LIGO.ORG - 14:45, Wednesday 09 August 2023 (72110)
As a result of Ansel's discovery, and conversation on the CAL call today -- I've moved the calibration line frequency from 102.13 to 104.23 Hz. See LHO:72108.
derek.davis@LIGO.ORG - 13:28, Friday 11 August 2023 (72157)

This line may have appeared in the previous lock the day before (Aug 4). The daily spectrogram for Aug 4 shows a line near 100 Hz starting at 21:00 UTC. 

Images attached to this comment
elenna.capote@LIGO.ORG - 16:49, Friday 11 August 2023 (72163)

Looking at alogs leading up to the time Derek notes above, I noticed that Gabriele retuned and tested new LSC FF. This change may be related to this new peak. Remembering some issues we had recently where DHARD filter impulses were ringing up violin modes, I checked the new LSC FF filters and how they are engaged in the guardian. Some of them have no ramp time, and the filter bank is turned on immediately along with the filters in the guardian. I have no idea why that would cause a peak at 102 Hz, but I updated those filters to have a 3 second ramp.

oli.patane@LIGO.ORG - 16:51, Friday 11 August 2023 (72164)

Reloaded the H1LSC model to load in Elenna's filter changes

ansel.neunzert@LIGO.ORG - 13:36, Monday 14 August 2023 (72197)

Now that the calibration line has been moved, the comb-like structure at the calibration line frequency is no longer present (checked in the CLEAN channel).

We can also see the shape of the 102.12833 Hz line much more clearly without the overlapping calibration line. I have attached a plot for reference on the width and shape.

Images attached to this comment
camilla.compton@LIGO.ORG - 16:33, Monday 14 August 2023 (72203)ISC

As discussed in today's commissioning meeting, I checked TMSX and ETMX movement for a kick during locking and couldn't see anything suspicious. I did find some increased motion/noise every 8 Hz in TMSX 1 s into ENGAGE_SOFT_LOOPS, when ISC_LOCK isn't explicitly doing anything (plot attached). However, this noise was present prior to Aug 4th (July 30th attached).

TMS is suspicious because Betsy found that the TMSs have violin modes at ~103-104 Hz.

Jeff draws attention to 38295, showing modes of the quad blade springs above 110 Hz, and 24917, showing quad top wire modes above 300 Hz.

Elenna notes that with calibration lines off (as we are experimenting with for the current lock) we can still see this 102 Hz peak at ISC_LOCK state ENGAGE_ASC_FOR_FULL_IFO. We were mistaken.

Images attached to this comment
elenna.capote@LIGO.ORG - 21:49, Tuesday 29 August 2023 (72544)

To preserve documentation: this problem has now been solved, with more details in 72537, 72319, and 72262.

The cause of this peak was a spurious, narrow 102 Hz feature in the SRCL feedforward that we didn't catch when the filter was made. This has been fixed, and the cause of the mistake has been documented in the first alog listed above so we hopefully don't repeat this error.

H1 OpsInfo
ryan.short@LIGO.ORG - posted 15:51, Saturday 05 August 2023 - last comment - 15:35, Friday 11 August 2023(71990)
nln_not_obs Timer in IFO_NOTIFY Increased

At 13:00 UTC this morning, H1 had relocked automatically all the way to NOMINAL_LOW_NOISE, and the only thing preventing the observation intent bit from being set to OBSERVE was waiting for the ADS to converge so we could switch over to the camera servos. Even though this is expected behavior, IFO_NOTIFY sent an alert to me as the operator because H1 had been in NLN for 3 minutes and the intent bit had not been flipped. After 13 minutes of waiting in NLN for the camera servos to turn on, the intent bit was set to OBSERVE automatically without any intervention.

Since we've seen ADS take up to 15 minutes to converge while in NLN before we can go to observing, I've increased IFO_NOTIFY's nln_not_obs timer from 3 minutes to 15 to avoid unnecessary notifications while ADS is converging as expected.

Comments related to this report
ryan.short@LIGO.ORG - 15:35, Friday 11 August 2023 (72160)

I've made this alert condition a bit smarter. IFO_NOTIFY will now wait for 20 minutes after reaching NOMINAL_LOW_NOISE for the camera servos to turn on before moving to 'ALERT_ACTIVE.' If ADS converges and the camera servos turn on before then (as we'd expect), the timer is stopped and a new 3 minute timer starts to indicate we've actually reached the point where nominally we would move to OBSERVE. If that timer expires, IFO_NOTIFY moves to 'ALERT_ACTIVE' to indicate something is preventing the move to OBSERVE.

These changes are loaded into the guardian and committed to svn, revision 26132.
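
For readers unfamiliar with how such timers look in guardian code, here is a schematic sketch of the logic described above (state name, helper checks, and structure are placeholders, not the code actually committed in revision 26132):

# Schematic guardian-style sketch of the IFO_NOTIFY timing logic;
# camera_servos_on() and in_observing() are placeholder checks.
from guardian import GuardState

class CHECK_NLN_NOT_OBS(GuardState):
    def main(self):
        self.timer['camera_wait'] = 20 * 60   # allow 20 min for camera servos after NLN
        self.cameras_on = False

    def run(self):
        if not self.cameras_on:
            if camera_servos_on():
                self.cameras_on = True
                self.timer['obs_wait'] = 3 * 60   # nominal NLN -> OBSERVE window
            elif self.timer['camera_wait']:
                return 'ALERT_ACTIVE'             # cameras never converged
        elif in_observing():
            return True                           # all good
        elif self.timer['obs_wait']:
            return 'ALERT_ACTIVE'                 # cameras on, but still not observing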
