Reports until 21:13, Monday 14 August 2023
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:13, Monday 14 August 2023 - last comment - 23:16, Monday 14 August 2023(72211)
Lockloss

Lockloss @ 3:57 due to me having selected the wrong state to stop at before going into LOWNOISE_LENGTH_CONTROL for our 102Hz peak test :(. Currently relocking and everything is going well!!

Comments related to this report
oli.patane@LIGO.ORG - 22:12, Monday 14 August 2023 (72213)

Back to Observing at 5:11UTC!

H1 General
oli.patane@LIGO.ORG - posted 20:32, Monday 14 August 2023 (72210)
Ops EVE MidShift Report

After losing lock at 2:09UTC, I started relocking the detector but had it stop right before it got to LOWNOISE_LENGTH_CONTROL so we could see whether the 102Hz peak (72064) is related to the LSC filter gains turning on or the calibration lines turning on. We got our answer, so I continued and we are currently in NOMINAL_LOW_NOISE waiting for ADS to converge so we can go into Observing.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 19:15, Monday 14 August 2023 - last comment - 23:51, Monday 14 August 2023(72209)
Lockloss

Lockloss at 2:09UTC. Got an EX saturation callout immediately before.

In relocking, I will be setting ISC_LOCK so we go to LOWNOISE_LENGTH_CONTROL instead of straight to NOMINAL_LOW_NOISE, to see if the 102Hz line is caused by LOWNOISE_LENGTH_CONTROL or by the calibration lines engaging at TURN_ON_CALIBRATION_LINES (72205).

Comments related to this report
oli.patane@LIGO.ORG - 23:51, Monday 14 August 2023 (72216)DetChar

I noticed a weird set of glitches on the glitchgram FOM that took place between 3:13 and 3:57 UTC (spectrogram, omicron triggers), ramping up in frequency from 160 to 200 Hz over that timespan. Even though we weren't Observing when this was happening, the diagonal line on many of the summary page plots is hard to miss, so I wanted to post this and tag DetChar to explain why these glitches appeared and why they (presumably) shouldn't be seen again.

This was after we had reached NOMINAL_LOW_NOISE but were not Observing yet because we were waiting for ADS to converge. Although I don't know the direct cause of these glitches, their appearance, along with ADS not converging, was because I had selected the wrong state to pause at before going into LOWNOISE_LENGTH_CONTROL (for the 102Hz peak test), so even though I was then able to proceed to NOMINAL_LOW_NOISE, the detector wasn't in the correct configuration. Once we lost lock trying to correct the issue, relocking automatically went through the correct states. So these glitches occurred during a non-nominal NOMINAL_LOW_NOISE.

Images attached to this comment
H1 CAL (CAL, GRD, ISC, OpsInfo)
ryan.short@LIGO.ORG - posted 16:48, Monday 14 August 2023 (72205)
Added TURN_ON_CALIBRATION_LINES State to ISC_LOCK

R. Short, J. Kissel

Following investigations into the 102Hz feature that has been showing up for the past few weeks (see alogs 72064 and 72108 for context), it was decided to try a lock acquisition with the calibration lines off until much later in the locking sequence. What this means in more specific terms is as follows:

For each of these points where the calibration lines are turned off or on, the code structure is now consistent. Each time "the lines are turned on/off," the cal lines for ETMX stages L1, L2, and L3 (CLK, SIN, and COS), PCal X, PCal Y, and DARMOSC are all toggled.
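As an illustration of that consistency, here is a minimal Guardian-style sketch of a single helper that every on/off point could call. This is not the actual ISC_LOCK code; the channel suffixes, the DARMOSC channel name, and the amplitudes are placeholder assumptions.

# Minimal sketch (not the real ISC_LOCK implementation) of one helper used by
# every point in the locking sequence that toggles the calibration lines.
# Channel names beyond those quoted in this alog, and all amplitudes, are
# placeholders.

CAL_LINE_GAIN_CHANNELS = {
    # SUS ETMX cal-line oscillators, all three quadratures per stage
    'SUS-ETMX_L1_CAL_LINE_CLKGAIN': 1.0,
    'SUS-ETMX_L1_CAL_LINE_SINGAIN': 1.0,
    'SUS-ETMX_L1_CAL_LINE_COSGAIN': 1.0,
    # ... L2 and L3 stages follow the same pattern ...
    # PCal X, PCal Y, and the DARM oscillator
    'CAL-PCALX_PCALOSC1_OSC_SINGAIN': 1.0,
    'CAL-PCALY_PCALOSC1_OSC_SINGAIN': 1.0,
    'CAL-INJ_DARMOSC_SINGAIN': 1.0,   # name assumed
}

def set_cal_lines(ezca, on=True):
    """Turn every calibration-line oscillator on (to its nominal amplitude)
    or off (to zero) in one place, so the toggling stays consistent."""
    for chan, nominal in CAL_LINE_GAIN_CHANNELS.items():
        ezca[chan] = nominal if on else 0.0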

The OMC and SUSETMX SAFE SDF tables have been updated appropriately (screenshots attached).

These changes were implemented on August 14th around 22:45 UTC, are committed to svn, and ISC_LOCK has been loaded.

Images attached to this report
LHO General (DetChar)
ryan.short@LIGO.ORG - posted 16:04, Monday 14 August 2023 (72192)
Ops Day Shift Summary

TITLE: 08/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Commissioning time this morning while L1 was down and observing this afternoon until a lockloss shortly before the end of shift.

H1 is relocking automatically, currently locking PRMI.

LOG:

Start Time | System | Name   | Location   | Lazer_Haz | Task                                        | Time End
16:38      | FAC    | Cindi  | MX         | -         | Technical cleaning                          | 17:46
16:39      | ISC    | Elenna | CR         | -         | ASC injections                              | 19:27
16:39      | PEM    | Robert | LVEA/CR    | -         | PEM injections                              | 20:19
16:41      | FAC    | Randy  | LVEA N-bay | -         | Hardware checks                             | 17:12
16:42      | VAC    | Travis | MY         | -         | Turbopump maintenance (back/forth all day)  | 20:33
17:13      | FAC    | Randy  | EY         | -         | Plug in scissor lift                        | 18:13
H1 General
oli.patane@LIGO.ORG - posted 16:03, Monday 14 August 2023 (72202)
Ops EVE Shift Start

TITLE: 08/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 7mph Gusts, 4mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.16 μm/s
QUICK SUMMARY:

The detector went down a bit before I arrived (72201) for unknown reasons. It is currently working its way back up and is doing well.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 16:02, Monday 14 August 2023 - last comment - 21:43, Monday 14 August 2023(72201)
Lockloss @ 22:44 UTC

Lockloss @ 22:44 UTC - cause currently unknown, seemed fast

Comments related to this report
oli.patane@LIGO.ORG - 21:43, Monday 14 August 2023 (72212)

00:00 Back into Observing

H1 ISC
elenna.capote@LIGO.ORG - posted 15:54, Monday 14 August 2023 (72200)
INP1 Open Loop Gain Measurements

Today I took "unbiased" OLGs of INP1 P and Y (67187). I have plot the measurements with error shading.

INP1 P has a UGF of about 0.036 Hz and a phase margin of 87 deg. This UGF seems very low for the target Gabriele and I had when we redesigned this loop (69108); I think it should be closer to 0.1 Hz. INP1 Y has a UGF of about 0.25 Hz with a phase margin of 35 deg, which is higher than I would have expected for our target. Time permitting, I will look into the design of both of these loops and see if there are any adjustments worth making.

You can find the measurement templates, exported data, and processing code in '/ligo/home/elenna.capote/DRMI_ASC/INP1'.
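For reference, a rough sketch of the kind of number-pulling described above, assuming the exported data is three columns of frequency, real part, and imaginary part (the file name here is hypothetical; the real processing code is in the directory above):

import numpy as np

# Hypothetical export: frequency [Hz], real and imaginary parts of the measured OLG
freq, re, im = np.loadtxt('INP1_P_olg_export.txt', unpack=True)
olg = re + 1j * im

mag = np.abs(olg)
phase = np.rad2deg(np.unwrap(np.angle(olg)))

idx = np.argmin(np.abs(mag - 1.0))   # sample closest to unity gain
print('UGF ~ %.3f Hz, phase margin ~ %.0f deg' % (freq[idx], 180.0 + phase[idx]))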

The templates for these measurements are also saved in [userapps]/asc/h1/templates/INP1 as 'INP1_{P,Y}_olg_broadband_shaped.xml'.

As a reminder, INP1 controls IM4 and is sensed on a combination of REFL RF45 WFS.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 15:42, Monday 14 August 2023 (72199)
PRC2 Open Loop Gain Measurements

Today I took "unbiased" OLGs of PRC2 P and Y (67187). I have plot the measurements with error shading.

PRC2 P has a UGF of about 0.12 Hz with a phase margin of 46 deg. PRC2 Y has a UGF of about 0.17 Hz with a phase margin of 53 deg.

This lines up with the target UGF Gabriele and I had when we redesigned this loop (69108).

You can find the measurement templates, exported data, and processing code in '/ligo/home/elenna.capote/DRMI_ASC/PRC2'.

The templates for these measurements are also saved in [userapps]/asc/h1/templates/PRC2 as 'PRC2_{P,Y}_olg_broadband_shaped.xml'.

As a reminder, PRC2 controls PR2 and is the "only" ASC loop that controls the PRC. We do not run PRC1 in full lock, and we only control PRM angle with the (very low bandwidth) camera servo. PRC2 is currently sensed on a combination of REFL RF9 WFS.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 14:47, Monday 14 August 2023 - last comment - 17:24, Tuesday 05 September 2023(72198)
Picket Fence MEDM

I've written a python program to generate an H1CDS_PICKET_FENCE.adl MEDM screen (see attached). This can be opened from the SITEMAP as the last entry in the SEI pull-down.

All the non-string PVs can be trended using the DAQ.
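For anyone curious what such a generator looks like, here is a toy sketch that writes one "text update" widget per PV. This is not Dave's script; the widget boilerplate is a pared-down guess at the .adl syntax (a real screen also needs the "file", "display", and "color map" header blocks), and the channel names are placeholders.

# Toy .adl generator: one "text update" widget per channel.
WIDGET = '''"text update" {{
    object {{
        x={x}
        y={y}
        width=160
        height=18
    }}
    monitor {{
        chan="{chan}"
        clr=14
        bclr=4
    }}
}}
'''

def write_screen(filename, channels):
    with open(filename, 'w') as f:
        # header blocks ("file", "display", "color map") omitted in this sketch
        for i, chan in enumerate(channels):
            f.write(WIDGET.format(x=10, y=10 + 22 * i, chan=chan))

write_screen('H1CDS_PICKET_FENCE.adl',
             ['H1:CDS-PICKET_FENCE_STATION_%d_STATUS' % n for n in range(5)])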

Images attached to this report
Comments related to this report
edgard.bonilla@LIGO.ORG - 17:24, Tuesday 05 September 2023 (72702)

Dave, I love this!

What do I need to do to get this script so I can monitor the picket fences while doing debugging here at Stanford too?

 

Edgard

H1 General
ryan.short@LIGO.ORG - posted 13:19, Monday 14 August 2023 (72196)
H1 Resumes Observing

H1 has wrapped up commissioning activities and is back observing as of 20:18 UTC

H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 11:48, Monday 14 August 2023 - last comment - 10:36, Thursday 14 September 2023(72195)
Unmonitored syscssqz channels that have been taking IFO out of observing

Naoki and I unmonitored H1:SQZ-FIBR_SERVO_COMGAIN and H1:SQZ-FIBR_SERVO_FASTGAIN from the syscssqz observe.snap. They have been regularly taking us out of observing (72171) by changing value when the TTFSS isn't actually unlocking; see 71652. If the TTFSS really unlocks, there will be other SDF diffs and the SQZ guardians will unlock.

We still plan to investigate this further tomorrow. We can monitor if it keeps happening using the channels.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 10:42, Tuesday 15 August 2023 (72227)

Daniel, Sheila

We looked at one of these incidents to see what information we could get from the Beckhoff error checking. The attached screenshot shows that when this happened on August 12th at 12:35 UTC, the Beckhoff error code for the TTFSS was 2^20; counting down the automated error screen (second attachment), the 20th error is "Beatnote out of range of frequency comparator". We looked at the beatnote error EPICS channel, which does seem to be well within the tolerances. Daniel thinks that the error is happening faster than it can be recorded by EPICS. He proposes that we go into the Beckhoff code and add a condition that the error condition has to be met for 0.1 s before throwing the error.
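For illustration, the proposed persistence requirement amounts to something like the following (written here in Python as a sketch; the real change would live in the Beckhoff/TwinCAT code):

import time

class DebouncedError:
    """Only assert an error after the raw condition has been continuously
    true for `hold` seconds (0.1 s in Daniel's proposal)."""
    def __init__(self, hold=0.1):
        self.hold = hold
        self._since = None

    def update(self, raw_error, now=None):
        now = time.monotonic() if now is None else now
        if not raw_error:
            self._since = None
            return False
        if self._since is None:
            self._since = now
        return (now - self._since) >= self.hold

checker = DebouncedError(hold=0.1)
print(checker.update(True, now=0.00))   # False: condition just appeared
print(checker.update(True, now=0.05))   # False: only held for 50 ms
print(checker.update(True, now=0.15))   # True: held for >= 0.1 s, throw the error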

Images attached to this comment
camilla.compton@LIGO.ORG - 10:17, Friday 18 August 2023 (72317)

In the last 5 days these channels would have taken us out of observing 13 times if they were still monitored, plot attached. Worryingly, 9 of those times were in the last 14 hours, see attached.

Maybe something has changed in SQZ to make the TTFSS more sensitive. The IFO has been locked for 35 hours, and in long locks like this we sometimes get close to the edges of our PZT ranges due to temperature drifts.
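A quick way to re-check counts like these later, hedged as a rough estimate (it just counts value changes of each channel and assumes NDS access; the time span is illustrative):

import numpy as np
from gwpy.timeseries import TimeSeries

for chan in ('H1:SQZ-FIBR_SERVO_COMGAIN', 'H1:SQZ-FIBR_SERVO_FASTGAIN'):
    data = TimeSeries.get(chan, 'Aug 13 2023', 'Aug 18 2023')
    n_changes = np.count_nonzero(np.diff(data.value))
    print('%s changed value %d times' % (chan, n_changes))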

Images attached to this comment
victoriaa.xu@LIGO.ORG - 12:25, Tuesday 22 August 2023 (72372)SQZ

I wonder if the TTFSS 1611 PD is saturated as power from the PSL fiber has drifted. Trending RFMON and DC volts from the TTFSS PD, it looks like in the past 2-3 months, the green beatnote's demod RF MON has increased (its RF max is 7), while the bottom gray DC volts signal from the PD has flattened out around -2.3V. Also looks like the RF MON got noisier as the PD DC volts saturated.

This PD should see the 160 MHz beatnote between the PSL (via fiber) and SQZ laser (free space). From LHO:44546, it looks like this PD "normally" would have around 360uW on it, with 180uW from each arm. If we trust the PD calibrations, then current PD values report ~600uW total DC power on the 1611 PD (red), with 40uW transmitted from the PSL fiber (green trend). Pick-offs for the remaining sqz laser free-space path (i.e. the sqz laser seed/LO PDs) don't see power changes, so it's unlikely the saturations are coming from upstream sqz laser alignment. Not sure if there are some PD calibration issues going on here. In any case, all fiber PDs seem to be off from their nominal values, consistent with their drifts in the past few months.

I adjusted the TTFSS waveplates on the PSL fiber path to bring the FIBR PDs closer to their nominal values, and at least so we're not saturating the 1611. TTFSS and squeezer locks seem to have come back fine. We can see if this helps the SDF issues at all.

Images attached to this comment
camilla.compton@LIGO.ORG - 10:36, Thursday 14 September 2023 (72881)

These were re-monitored in 72679 after Daniel adjusted the SQZ Laser Diode Nominal Current, stopping this issue.

LHO VE
david.barker@LIGO.ORG - posted 10:22, Monday 14 August 2023 (72194)
Mon CP1 Fill

Mon Aug 14 10:11:36 2023 INFO: Fill completed in 11min 32secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 SUS
gabriele.vajente@LIGO.ORG - posted 10:14, Thursday 10 August 2023 - last comment - 14:43, Tuesday 15 August 2023(72130)
Reducing SR2 and SR3 damping

Follow up on previous tests (72106)

First I injected noise on SR2_M1_DAMP_P and SR2_M1_DAMP_L to measure the transfer function to SRCL. The result shows that the shapes are different and the ratio is not constant in frequency, so we probably can't cancel the coupling of SR2_DAMP_P to SRCL by rebalancing the driving matrix, although I haven't thought carefully about whether there is some loop correction I need to apply to those transfer functions. I measured and plotted the DAMP_*_OUT to SRCL_OUT transfer functions. It might still be worth trying to change the P driving matrix while monitoring a P line to minimize the coupling to SRCL.
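A rough sketch of how such a transfer function estimate can be made offline from the injection stretches logged below, ignoring any loop correction (the channel names are assumed to be frame-stored; the GPS span is the DAMP_P injection from the log):

from gwpy.timeseries import TimeSeriesDict
from scipy.signal import csd, welch

chans = ['H1:SUS-SR2_M1_DAMP_P_OUT_DQ', 'H1:LSC-SRCL_OUT_DQ']
data = TimeSeriesDict.get(chans, 1375718856, 1375719066)   # DAMP_P injection span
exc, srcl = (data[c].resample(256) for c in chans)

fs = 256
f, Pxy = csd(exc.value, srcl.value, fs=fs, nperseg=fs * 32)
_, Pxx = welch(exc.value, fs=fs, nperseg=fs * 32)
tf = Pxy / Pxx    # H(f) estimate from the excitation to SRCL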

Then I reduced the damping gains for SR2 and SR3 even further. We are now running with SR2_M1_DAMP_*_GAIN = -0.1 (was -0.5 for all but P, which was -0.2 since I reduced it yesterday) and SR3_M1_DAMP_*_GAIN = -0.2 (was -1). This has improved the SRCL motion a lot and also improved the DARM RMS. It looks like it also improved the range.

Tony has accepted this new configuration in SDF.

Detailed log below for future reference.

 

Time with SR2 P gain at -0.2 (but before that too)
from    PDT: 2023-08-10 08:52:40.466492 PDT
        UTC: 2023-08-10 15:52:40.466492 UTC
        GPS: 1375717978.466492
to      PDT: 2023-08-10 09:00:06.986101 PDT
        UTC: 2023-08-10 16:00:06.986101 UTC
        GPS: 1375718424.986101

H1:SUS-SR2_M1_DAMP_P_EXC  butter("BandPass",4,1,10) ampl 2
from    PDT: 2023-08-10 09:07:18.701326 PDT
        UTC: 2023-08-10 16:07:18.701326 UTC
        GPS: 1375718856.701326
to      PDT: 2023-08-10 09:10:48.310499 PDT
        UTC: 2023-08-10 16:10:48.310499 UTC
        GPS: 1375719066.310499

H1:SUS-SR2_M1_DAMP_L_EXC  butter("BandPass",4,1,10) ampl 0.2
from    PDT: 2023-08-10 09:13:48.039178 PDT
        UTC: 2023-08-10 16:13:48.039178 UTC
        GPS: 1375719246.039178
to      PDT: 2023-08-10 09:17:08.657970 PDT
        UTC: 2023-08-10 16:17:08.657970 UTC
        GPS: 1375719446.657970

All SR2 damping at -0.2, all SR3 damping at -0.5
start   PDT: 2023-08-10 09:31:47.701973 PDT
        UTC: 2023-08-10 16:31:47.701973 UTC
        GPS: 1375720325.701973
to      PDT: 2023-08-10 09:37:34.801318 PDT
        UTC: 2023-08-10 16:37:34.801318 UTC
        GPS: 1375720672.801318

All SR2 damping at -0.2, all SR3 damping at -0.2
start   PDT: 2023-08-10 09:38:42.830657 PDT
        UTC: 2023-08-10 16:38:42.830657 UTC
        GPS: 1375720740.830657
to      PDT: 2023-08-10 09:43:58.578103 PDT
        UTC: 2023-08-10 16:43:58.578103 UTC
        GPS: 1375721056.578103

All SR2 damping at -0.1, all SR3 damping at -0.2
start   PDT: 2023-08-10 09:45:38.009515 PDT
        UTC: 2023-08-10 16:45:38.009515 UTC
        GPS: 1375721156.009515

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 16:30, Friday 11 August 2023 (72159)

If our overall goal is to remove peaks from DARM that dominate the RMS, reducing these damping gains is not the best way to achieve that. The SR2 L damping gain was reduced by a factor of 5 in this alog, and a resulting 2.8 Hz peak is now being injected into DARM from SRCL. This 2.8 Hz peak corresponds to a 2.8 Hz SR2 L resonance. There is no length control on SR2, so the only way to suppress any length motion of SR2 is via the top stage damping loops. The same can be said for SR3, whose gains were reduced by 80%. It may be that we are reducing sensor noise injected into SRCL from 3-6 Hz by reducing these gains, hence the improvement Gabriele has noticed.

Comparing a DARM spectrum before and after this change to the damping gains, you can see that the reduction in the damping gain did reduce DARM and SRCL above 3 Hz, but also created a new peak in DARM and SRCL at 2.8 Hz. I also plotted spectra of all dofs of SR2 and SR3 before and after the damping gain change showing that some suspension resonances are no longer being suppressed. All reference traces are from a lock on Aug 9 before these damping gains were reduced and the live traces are from this current lock. The final plot shows a transfer function measurement of SR2 L taken by Jeff and me in Oct 2022.

Images attached to this comment
elenna.capote@LIGO.ORG - 16:16, Monday 14 August 2023 (72204)

Since we fell out of lock, I took the opportunity to make SR2 and SR3 damping gain adjustments. I have split the difference on the gain reductions in Gabriele's alog. I increased all the SR2 damping gains from -0.1 to -0.2 (nominal is -0.5). I increased the SR3 damping gains from -0.2 to -0.5 (nominal is -1).

This is guardian controlled in LOWNOISE_ASC, because we need to acquire lock with higher damping gains.

Once we are back in lock, I will check the presence of the 2.8 Hz peak in DARM and determine how much different the DARM RMS is from this change.

There will be SDF diffs in observe for all SR2 and SR3 damping dofs. They can be accepted.

oli.patane@LIGO.ORG - 16:55, Monday 14 August 2023 (72206)

The SR2 and SR3 damping gain changes that Elenna made have been accepted.

Images attached to this comment
elenna.capote@LIGO.ORG - 17:47, Monday 14 August 2023 (72208)

The DARM RMS increases by about 8% with these new, slightly higher gains. These gains are a factor of 2 (SR2) and 2.5 (SR3) greater than Gabriele's reduced values. The 2.8 Hz peak in DARM is down by 21%.

Images attached to this comment
elenna.capote@LIGO.ORG - 14:43, Tuesday 15 August 2023 (72249)

This is a somewhat difficult determination to make, given all the nonstationary noise from 20-50 Hz, but it appears the DARM sensitivity is slightly improved from 20-40 Hz with a slightly higher SR2 gain. I randomly selected several times from the past few locks with the SR2 gains set to -0.1 and recent data from the last 24 hours where SR2 gains were set to -0.2. There is a small improvement in the data with all SR2 damping gains = -0.2 and SR3 damping gains= -0.5.

I think we need to do additional tests to determine exactly how SR2 and SR3 motion limit SRCL and DARM so we can make more targeted improvements to both. My unconfirmed conclusion from this small set of data is that while we may be able to reduce reinjected sensor noise above 3 Hz with a damping gain reduction, we will also limit DARM if there is too much motion from SR2 and SR3.

Images attached to this comment
H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 14:41, Wednesday 09 August 2023 - last comment - 17:29, Monday 14 August 2023(72108)
102 Hz Feature is NOT a Calibration Line Issue; Regardless, Calibration Systematic Error Monitor Line MOVED from 102.13 to 104.23 Hz
J. Kissel, A. Neunzert, E. Goetz, V. Bossilkov

As we continue the investigation into why the noise in the region around 102.13 Hz gets SUPER loud at the beginning of nominal low noise segments, and why the calibration line seems to be reporting a huge amount of systematic error (see investigations in LHO:72064), Ansel has found that some new electronics noise appeared in the end station as of Saturday Aug 5 2023 around 05:30a PT at a frequency extremely, and unluckily, close to the 102.13000 Hz calibration line -- something at 102.12833 Hz; see LHO:72105.

While we haven't yet ID'd the cause, and thus don't have a solution -- we can still change the calibration line frequency to move it away from this feature, in hopes that the two aren't beating together terribly like they are now.

I've changed the calibration line frequency to 104.23 Hz as of 21:13 UTC on Aug 09 2023.

This avoids 
    (a) LLO's similar frequency at 101.63 Hz, and 
    (b) the pulsar spin-down bands: because the former frequency, 102.13 Hz, was near the upper edge of the 9.33 Hz wide [92.88, 102.21) Hz "non-vetoed" band, the new frequency 104.23 Hz skips up to the next 18.55 Hz wide "non-veto" band between [104.22, 122.77) Hz, according to LHO:68139.

Stay tuned 
   - to see if this band-aid fix actually helps, or just spreads out the spacing between the comb, and
   - as we continue to investigate where this thing came from.

Other things of note: 

Since
   - this feature is *not* related to the calibration line itself, 
   - this calibration line is NOT used to generate any time-dependent correction factors, and thus neither the calibration pipeline itself nor the data it produces is affected,
   - this calibration line is used only to *monitor* the calibration systematic error, 
   - this feature is clearly identified in an auxiliary PEM channel -- and that same channel *doesn't* see the calibration line
we conclude that there *isn't* some large systematic error that is occurring; it's just the calculation that's getting spoiled and misreporting large systematic error.
Thus, we make NO plan to do anything further with the calibration or systematic error estimate side of things.

We anticipate that this now falls squarely onto the noise subtraction pipeline's shoulders. Given that this 102.12833 Hz noise has a clear witness channel, and the noise creates non-linear nastiness, I expect this will be an excellent candidate for offline non-linear / NONSENS cleaning.


Here's the latest list of calibration lines:
Freq (Hz)   Actuator                   Purpose                      Channel that defines Freq             Changes Since Last Update (LHO:69736)     
15.6        ETMX UIM (L1) SUS          \kappa_UIM excitation        H1:SUS-ETMY_L1_CAL_LINE_FREQ          No change
16.4        ETMX PUM (L2) SUS          \kappa_PUM excitation        H1:SUS-ETMY_L2_CAL_LINE_FREQ          No change
17.1        PCALY                      actuator kappa reference     H1:CAL-PCALY_PCALOSC1_OSC_FREQ        No change
17.6        ETMX TST (L3) SUS          \kappa_TST excitation        H1:SUS-ETMY_L3_CAL_LINE_FREQ          No change
33.43       PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC4_OSC_FREQ        No change
53.67         |                            |                        H1:CAL-PCALX_PCALOSC5_OSC_FREQ        No change
77.73         |                            |                        H1:CAL-PCALX_PCALOSC6_OSC_FREQ        No change
104.23        |                            |                        H1:CAL-PCALX_PCALOSC7_OSC_FREQ        FREQUENCY CHANGE; THIS ALOG
283.91        V                            V                        H1:CAL-PCALX_PCALOSC8_OSC_FREQ        No change
284.01      PCALY                      PCALXY comparison            H1:CAL-PCALY_PCALOSC4_OSC_FREQ        No change
410.3       PCALY                      f_cc and kappa_C             H1:CAL-PCALY_PCALOSC2_OSC_FREQ        No change
1083.7      PCALY                      f_cc and kappa_C monitor     H1:CAL-PCALY_PCALOSC3_OSC_FREQ        No change
n*500+1.3   PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC1_OSC_FREQ        No change (n=[2,3,4,5,6,7,8])
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:29, Monday 14 August 2023 (72207)
Just a post-facto proof that this calibration line frequency change from 102.13 to 104.23 Hz drastically improved the symptom that the response function systematic error, as computed by this ~100 Hz line, was huge for hours while the actual 102.128333 Hz line was loud.

The attached screenshot shows two days before and two days after the change (again, on 2023-08-09 at 21:13 UTC). The green trace shows that there is no longer an erroneously reported large error, as computed by the 102.13 and then 104.23 Hz lines, at the beginning of nominal low noise segments.
Images attached to this comment
H1 DetChar (CAL, DetChar)
derek.davis@LIGO.ORG - posted 15:01, Tuesday 08 August 2023 - last comment - 21:49, Tuesday 29 August 2023(72064)
Excess noise near 102.13 Hz calibration line

Benoit, Ansel, Derek

Benoit noticed that for recent locks, the 102.13 Hz calibration line is much louder than typical for the first few hours of the lock. An example of this behavior is shown in the attached spectrogram of H1 strain data on August 5 - this is the first day this behavior appeared. Ansel noted that this feature includes a comb-like structure around the line that is only present in the H1:GDS-CALIB_STRAIN_NOLINES channel and not H1:GDS-CALIB_STRAIN (see spectra for CALIB_STRAIN and CALIB_STRAIN_NOLINES on Aug 5). This issue is also visible in the PCAL trends for the 102.13 Hz line.

We are not sure if the excess noise near 102.13 Hz is from the calibration line itself or another noise source that is near the line. However, the behavior has been present for every lock since 12:30 UTC on August 5 2023. 

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:21, Wednesday 09 August 2023 (72094)CAL
FYI, 
$ gpstime Aug 05 2023 12:30 UTC
    PDT: 2023-08-05 05:30:00.000000 PDT
    UTC: 2023-08-05 12:30:00.000000 UTC
    GPS: 1375273818.000000
so... this behavior seems to have started at 5:30a local time on a Saturday. Therefore *very* unlikely that the start of this issue is intentional / human change driven.

The investigation continues....
making sure to tag CAL.
jeffrey.kissel@LIGO.ORG - 09:42, Wednesday 09 August 2023 (72095)
Other facts and recent events:
- Attached are 2 screenshots that show the actual *digital* excitation is not changing with time in anyway.
    :: 2023-08-08_H1PCALEX_OSC7_102p13Hz_Line_3mo_trend.png shows the specific oscillator, PCALX's OSC7 (which drives the 102.13 Hz line), via the EPICS channel version of its output. The minute trend shows the max, min, and mean of the output, and there's no change in amplitude.
    :: 2023-08-08_H1PCALEX_EXC_SUM_3mo_trend.png shows a trend of the total excitation sum from PCAL X. This also shows *no* change in time in amplitude.

Both trends show the Aug 02 2023 change-in-amplitude kerfuffle I caused, which Corey found and a bit later rectified -- see LHO:71894 and subsequent comments -- but that was done, over with, and solved, definitely by Aug 03 2023 UTC, and is unrelated to the start of this problem.

It's also well after I installed new oscillators and rebooted the PCALX, PCALY, and OMC models on Aug 01 2023 (see LHO:71881).
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:44, Wednesday 09 August 2023 (72100)
The front-end version of the calibration's systematic error at 102.13 Hz also shows the long, time-dependent issue -- this will allow us to trend the issue against other channels

Folks in the calibration group have found that the online monitoring system for the
    -  overall DARM response function systematic error
    - (absolute reference) / (Calibrated Data Product) [m/m] 
    - ( \eta_R ) ^ (-1) 
    - (C / (1+G))_pcal / (C / (1+G))_strain
    - CAL-DELTAL_REF_PCAL_DQ / GDS-CALIB_STRAIN
(all different ways of saying the same thing; see T1900169) in calibration at each PCAL calibration line frequency -- the "grafana" pages -- are showing *huge* amounts of systematic error during these times when the amplitude of the line is super loud.

Though this metric is super useful because it's dreadfully obvious that things are going wrong -- this metric is not in any normal frame structure, so you can't compare it against other channels to find out what's causing the systematic error.

However -- remember -- we commissioned a front-end version of this monitoring during ER15 -- see LHO:69285.

That means the channels 
    H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ   << the frequency of the monitor
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM      << the magnitude of the systematic error
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG      << the phase of the systematic error

tell you (what's supposed to be***) equivalent information.

*** One might say that "what's supposed to be" is the same as "roughly equivalent" for the following reasons: 
    (1) because we're human, the one system is displaying the systematic error \eta_R, and the other is displaying the inverse ( \eta_R ) ^ (-1) 
    (2) Because this is early-days in the front-end system, it uses the "less complete" calibrated channel CAL-DELTAL_EXTERNAL_DQ rather than the "fully correct" channel GDS-CALIB_STRAIN

But because the problem is so dreadfully obvious in these metrics, even though they're only *roughly* equivalent, you can see the same thing.
In the attached screenshot, I show both metrics for the most recent observation stretch, between 10:15 and 14:00 UTC on 2023-Aug-09.

Let's use this front-end metric to narrow down the problem via trending.
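As a back-of-the-envelope cross-check, the same kind of line-frequency comparison can be estimated offline by demodulating the two calibrated channels at the PCAL line frequency and taking the ratio. This is only a rough stand-in for the front-end and grafana calculations (the GPS span is illustrative, and the overall length-to-strain scaling between the two channels is ignored here):

import numpy as np
from gwpy.timeseries import TimeSeries

f_line = 104.23                      # Hz (102.13 before the move)
start, end = 1375700000, 1375700600  # illustrative GPS span

def line_amplitude(chan):
    data = TimeSeries.get(chan, start, end).detrend()
    t = data.times.value - data.times.value[0]
    # single-bin complex demodulation at the line frequency
    return np.mean(data.value * np.exp(-2j * np.pi * f_line * t))

ratio = (line_amplitude('H1:CAL-DELTAL_REF_PCAL_DQ') /
         line_amplitude('H1:GDS-CALIB_STRAIN'))
print('|syserror| ~ %.3g, phase ~ %.1f deg'
      % (np.abs(ratio), np.degrees(np.angle(ratio))))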
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:58, Wednesday 09 August 2023 (72102)CAL, DetChar
There appears to be no change in the PCALX analog excitation monitors either.

Attached is a trend of some key channels in the optical follower servo -- the analog feedback system that serves as intensity stabilization and excitation power linearization for the PCAL's laser light that gets transmitted to the test mass -- the actuator of which is an acousto-optic modulator (an AOM). There seem to be no major differences in the max, min, and mean of these signals before vs. after these problems started on Aug 05 2023.

H1:CAL-PCALX_OFS_PD_OUT_DQ
H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUT_DQ
Images attached to this comment
madeline.wade@LIGO.ORG - 11:34, Wednesday 09 August 2023 (72104)

I believe this is caused by the presence of another line very close to the 102.13 Hz pcal line.  This second line is present at the start of a lock stretch but seems to go away as the lock stretch continues.  I have attached a plot showing a zoom-in on an ASD around 102.1-102.2 Hz right at the start of a lock stretch (orange), where the second peak is evident, and well into a lock stretch (blue), where the PCAL line is still present but the second peak right below it in frequency is gone.  This ASD is computed using an hour of data for each curve, so we can get the needed resolution for these two peaks.

I don't know the origin of this second line.  However, a quick fix to the issue could be moving the PCAL line over by about a Hz.  The second attached plot shows that the spectrum looks pretty clean from 101-102 Hz, so somewhere in there would probably be okay for a new location of the PCAL line.
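For reference, separating two lines only ~1.7 mHz apart needs FFT lengths of at least ten minutes or so; a sketch of how such a zoomed, high-resolution ASD can be made (the times here are illustrative, and half-hour FFTs are used):

from gwpy.timeseries import TimeSeries

data = TimeSeries.get('H1:GDS-CALIB_STRAIN', 'Aug 5 2023 13:00', 'Aug 5 2023 14:00')
asd = data.asd(fftlength=1800, overlap=900, method='median')
zoom = asd.crop(102.0, 102.3)   # zoom in around the 102.1 Hz features
zoom.plot().show()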

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:47, Wednesday 09 August 2023 (72105)

Since it looks like the additional noise is at 102.12833 Hz, I did a quick check in Fscan data from Aug 5 for channels where there is high coherence with DELTAL_EXTERNAL at 102.12833 but *not* at 102.13000 Hz. This narrows down to just a few channels:

  • H1:PEM-EX_MAG_EBAY_SEIRACK_{Z,Y}_DQ
  • H1:PEM-EX_ADC_0_09_OUT_DQ
  • H1:ASC-OMC_A_YAW_OUT_DQ. Note that other ASC-OMC channels (Fscan tracks A,B and PIT,YAW) see high coherence at both frequencies.

(lines git issue opened as we work on this.)

jeffrey.kissel@LIGO.ORG - 14:45, Wednesday 09 August 2023 (72110)
As a result of Ansel's discovery, and conversation on the CAL call today -- I've moved the calibration line frequency from 102.13 to 104.23 Hz. See LHO:72108.
derek.davis@LIGO.ORG - 13:28, Friday 11 August 2023 (72157)

This line may have appeared in the previous lock the day before (Aug 4). The daily spectrogram for Aug 4 shows a line near 100 Hz starting at 21:00 UTC. 

Images attached to this comment
elenna.capote@LIGO.ORG - 16:49, Friday 11 August 2023 (72163)

Looking at alogs leading up to the time Derek notes above, I noticed that Gabriele retuned and tested new LSC FF. This change may be related to this new peak. Remembering some issues we had recently where DHARD filter impulses were ringing up violin modes, I checked the new LSC FF filters and how they are engaged in the guardian. Some of them have no ramp time, and the filter bank is turned on immediately along with the filters in the guardian. I have no idea why that would cause a peak at 102 Hz, but I updated those filters to have a 3 second ramp.
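One guardian-side pattern for avoiding an abrupt turn-on is to ramp the bank gain when engaging; a minimal sketch is below (the bank name and filter-module numbers are placeholders, and the actual fix described above was to give the filter modules themselves a 3 second ramp):

def engage_srcl_ff(ezca):
    ff = ezca.get_LIGOFilter('LSC-SRCLFF1')     # bank name is a placeholder
    ff.switch_on('FM1', 'FM2')                  # engage the retuned FF filters
    ff.ramp_gain(1.0, ramp_time=3, wait=False)  # 3 s gain ramp instead of a step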

oli.patane@LIGO.ORG - 16:51, Friday 11 August 2023 (72164)

Reloaded the H1LSC model to load in Elenna's filter changes

ansel.neunzert@LIGO.ORG - 13:36, Monday 14 August 2023 (72197)

Now that the calibration line has been moved, the comb-like structure at the calibration line frequency is no longer present (checked in the CLEAN channel).

We can also see the shape of the 102.12833 Hz line much more clearly without the overlapping calibration line. I have attached a plot for reference on the width and shape.

Images attached to this comment
camilla.compton@LIGO.ORG - 16:33, Monday 14 August 2023 (72203)ISC

As discussed in today's commissioning meeting, I checked TMSX and ETMX movement for a kick during locking and couldn't see anything suspicious. I did find some increased motion/noise every 8 Hz in TMSX 1 s into ENGAGE_SOFT_LOOPS, when ISC_LOCK isn't explicitly doing anything, plot attached. However, this noise was present prior to Aug 4th (July 30th attached).

TMS is suspicious, as Betsy found that the TMSs have violin modes at ~103-104 Hz.

Jeff draws attention to 38295, showing modes of quad blade springs above 110 Hz, and 24917, showing quad top wire modes above 300 Hz.

Elenna notes that with calibration lines off (as we are experimenting with for the current lock), we can see this 102Hz peak at ISC_LOCK state ENGAGE_ASC_FOR_FULL_IFO. We were mistaken.

Images attached to this comment
elenna.capote@LIGO.ORG - 21:49, Tuesday 29 August 2023 (72544)

To preserve documentation, this problem has now been solved, with more details in 72537, 72319, and 72262.

The cause of this peak was a spurious, narrow, 102 Hz feature in the SRCL feedforward that we didn't catch when the filter was made. This has been fixed, and the cause of the mistake has been documented in the first alog listed above so we hopefully don't repeat this error.
