H1 ISC
jenne.driggers@LIGO.ORG - posted 09:55, Tuesday 15 August 2023 (72224)
LSC FF Filters accepted in safe.snap (not just observe.snap)

The LSC FF filters that we've been using since Aug 4th (alog 71961) had their settings saved in the Observe.snap, but not in the safe.snap.  Normally we'd think this is fine, but it seems that these FF filters are potentially causing the ~102 Hz line to ring up each lock (see, eg, alog 72214). Based on our recent learning from the DHARD filter turn on sequence kicking the violin modes of the quads, I saved the current LSCFF filter settings in the safe.snap so that the filters don't get changed unnecessarily during lock acquisition.

Separately, Jeff is writing a thorough alog on how it might be that this LSC FF is causing issues with what is potentially a TMSX wire violin mode (alog 72221).

Images attached to this report
H1 SUS (DetChar, INJ, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 09:52, Tuesday 15 August 2023 - last comment - 14:26, Monday 23 October 2023(72221)
ETMX M0 Longitudinal Damping has been fed to TMTS M1 Unfiltered Since Sep 28 2021; Now OFF.
J. Kissel, J. Driggers

I was brainstorming why LOWNOISE_LENGTH_CONTROL would be ringing up a Transmon M1 to M2 wire violin mode (modeled to be at 104.2 Hz for a "production" TMTS; see table 3.11 of T1300876) for the first time on Aug 4 2023 (see current investigation recapped in LHO:72214), and I remembered "TMS tracking..."

In short: we found that the ETMX M0 L OSEM damping error signal has been fed directly to the TMSX M1 L global control path, without filtering, since Sep 28 2021. Yuck!

On Aug 30 2021, I resolved the discrepancies between L1 and H1 end-station SUS front-end models -- see LHO:59772. Included in that work, I cleaned up the Tidal path, cleaned up the "R0 tracking" path (where QUAD L2 gets fed to QUAD R0), and installed the "TMS tracking" path as per ECR E2000186 / LLO:53224. In short, "TMS tracking" couples the ETM M0 longitudinal OSEM error signal to the TMS M1 longitudinal "input to the drivealign bank" global control path, with the intent of matching the velocity of the two top masses to reduce scattered light.

On Aug 31 2021, the model changes were installed during an upgrade to the RCG -- see LHO:59797, and we've confirmed that I turned both TMSX and TMSY paths OFF, "to be commissioned later, when we have an IFO, if we need it" at
    Tuesday -- Aug 31 2021 21:22 UTC (14:22 PDT) 

However, 28 days later,
    Tuesday -- Sept 28 2021 22:16 UTC (15:16 PDT)
the TMSX filter bank got turned back on, and must have been blindly SDF saved as such -- with no filter in place -- after an EX IO chassis upgrade -- see LHO:60058. At the time, RCG 4.2.0 still had the infamous "turn on a new filter with its input ON, output ON, and a gain of 1.0" feature, which has since been resolved in RCG 5.1.1. So ... maybe, somehow, even though the filter was already installed on Aug 31 2021, the IO chassis upgrade rebuild, reinstall, and restart of the h1sustmsx.mdl front end model re-registered the filter as new? Unclear. Regardless, this direct ETMX M0 L to TMSX M1 L path has been on, without filtering, since Sep 28 2021. Yuck!

Jenne confirms the early 2021 timeline in the first attachment here.
She also confirms, via a ~2 year trend of the H1:SUS-TMSY_M1_FF_L filter bank's SWSTAT, that no filter module has *ever* been turned on -- i.e., there has *never* been any filtering in this path.

Whether this *is* the source of the 102.1288 Hz problems, and whether that frequency is the TMSX transmon violin mode, is still unclear. Brief investigations thus far include
    - Jenne briefly gathered ASDs of ETMX M0 L (H1:SUS-ETMX_M0_DAMP_L_IN_DQ) and the TMSX M1 L OSEMs' error signal (H1:SUS-TMSX_M1_DAMP_L_IN1_DQ) around the time of Oli's LOWNOISE_LENGTH_CONTROL test, but found that at 100 Hz the OSEMs are limited by their own sensor noise and don't see anything.
    - She also looked through the MASTER_OUT DAC requests, in hopes that the requested control signal would show something more or different, but found nothing suspicious around 100 Hz there either.
    - We HAVE NOT, but could look at H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ since this FF control filter should be the only control signal going through that path. I'll post a comment with this.
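
For anyone who wants to repeat these checks, here's a minimal sketch (assuming gwpy with NDS2 access to H1 data; the GPS window below is a placeholder, not the actual lock time of interest) of pulling the relevant 256 Hz channels and comparing their ASDs around 100 Hz:

    from gwpy.timeseries import TimeSeriesDict

    channels = [
        'H1:SUS-ETMX_M0_DAMP_L_IN1_DQ',        # ETMX top-mass L damping error signal (256 Hz)
        'H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ',  # the errant ETMX M0 L -> TMSX M1 L drive (256 Hz)
        'H1:SUS-TMSX_M1_DAMP_L_IN1_DQ',        # TMSX top-mass L OSEM error signal (256 Hz)
    ]
    start, end = 1376100000, 1376100300        # placeholder GPS window; use the real lock time

    data = TimeSeriesDict.get(channels, start, end)
    for name, ts in data.items():
        asd = ts.asd(fftlength=60, overlap=30)        # ~0.017 Hz resolution
        freqs = asd.frequencies.value
        band = (freqs > 100) & (freqs < 105)
        print(name, 'peak ASD in 100-105 Hz:', asd.value[band].max())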

Regardless, having this path on with no filter is clearly wrong, so we've turned off the input, output, and gain, and accepted the filter as OFF, OFF, and OFF in the SDF system (for TMSX, the safe.snap is the same as the observe.snap).
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:39, Tuesday 15 August 2023 (72226)
No obvious blast in the (errant) path between ETMX M0 L and TMSX M1 L, the control channel H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ, during the turn on of the LSC FF.

Attached is a screenshot highlighting one recent lock acquisition, after the addition / separation / clean-up of the calibration line turn-ons (LHO:72205):
    - H1:GRD-ISC_LOCK_STATE_N -- the state number of the main lock acquisition guardian,
    - H1:LSC-SRCLFF1_GAIN, H1:LSC-PRCLFF_GAIN, H1:MICHFF_GAIN -- EPICS records showing the timing of when the LSC feed forward is turned on
    - The raw ETMX M0 L damping signal, H1:SUS-ETMX_M0_DAMP_L_IN1_DQ -- stored at 256 Hz
    - The same signal, mapped (errantly) as a control signal to TMSX M1 L -- also stored at 256 Hz
    - The TMSX M1 L OSEMs H1:SUS-TMSX_M1_DAMP_L_IN1_DQ, which are too limited by their own self noise to see any of this action -- but also only stored at 256 Hz.

In the middle of the TRANSITION_FROM_ETMX (state 557), DARM control is switching from ETMX to some other collection of DARM actuators. That's when you see the ETMX M0 L (and equivalent TMSX_M1_DRIVEALIGN) channels go from relatively noisy to quiet.

Then, at the very end of the state, or the start of the next state, LOW_NOISE_ETMX_ESD (state 558), DARM control returns to ETMX, and the main chain top mass, ETMX M0 gets noisy again. 

Then, several seconds later, in LOWNOISE_LENGTH_CONTROL (state 560), the LSC feed forward gets turned on. 

So, while there are control request changes to the TMS, at least according to channels stored at 256 Hz, we don't see any obvious kicks / impulses to the TMS during this transition.
This decreases my confidence that something was kicking up a TMS violin mode, but not substantially.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:33, Wednesday 16 August 2023 (72275)DetChar, DetChar-Request
@DetChar -- 
This errant TMS tracking has been on throughout O4 until yesterday.

The last substantial nominal low noise segment before this change (with errant, bad TMS tracking) was on
     2023-08-15       04:41:02 to 15:30:32 UTC
                      1376109680 - 1376148650
and the first substantial nominal low noise segment after this change is
     2023-08-16       05:26:08 - present
                      1376198786 - 1376238848 

Apologies for the typo in the main aLOG above, but *the* channels to understand the state of the filter bank that's been turned off are 
    H1:SUS-TMSX_M1_FF_L_SWSTAT
    H1:SUS-TMSX_M1_FF_L_GAIN

if you want to use that for an automated way of determining whether the TMS tracking is on vs. off.

If the SWSTAT channel has a value of 37888 and the GAIN channel has a gain of 1.0, then the errant connection between ETMX M0 L and TMSX M1 L was ON. Those channels now have values of 32768 and 0.0, respectively, indicating that it's OFF. (Remember, for a standard filter module a SWSTAT value of 37888 is the bitword representation for "Input, Output, and Decimation switches ON." A SWSTAT value of 32768 is the same bitword representation for just "Decimation ON.")
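
For anyone scripting this up, here's a minimal sketch of that check, assuming the usual filter-module SWSTAT bit layout (FM1-FM10 in bits 0-9, input = bit 10, output = bit 12, decimation = bit 15), which is consistent with the 37888 / 32768 values quoted above:

    INPUT_BIT, OUTPUT_BIT = 1 << 10, 1 << 12   # decimation is bit 15 (32768)

    def tms_tracking_on(swstat, gain):
        """Return True if the errant ETMX M0 L -> TMSX M1 L path was active."""
        switches_on = bool(int(swstat) & INPUT_BIT) and bool(int(swstat) & OUTPUT_BIT)
        return switches_on and gain != 0.0

    print(tms_tracking_on(37888, 1.0))   # True  -> tracking was ON (before Aug 15 2023)
    print(tms_tracking_on(32768, 0.0))   # False -> tracking is OFF now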

Over the next few weeks, can you build up an assessment of how the IFO has performed a few weeks before vs. a few weeks after?
     I'm thinking, in particular, in the corner of scattered light arches and glitch rates (also from scattered light), but I would happily entertain any other metrics you think are interesting given the context.

     The major difference being that TMSX is no longer "following" ETMX, so there's a *change* in the relative velocity between the chains. No claim yet that this is a *better* change or worse, but there's definitely a change. As you know, the creation of this scattered-light-impacting, relative velocity between the ETM and TMS is related to the low frequency seismic input motion to the chamber, specifically between the 0.05 to 5 Hz region. *That* seismic input evolves and is non-stationary over the few weeks time scale (wind, earthquakes, microseism, etc.), so I'm guessing that you'll need that much "after" data to make a fair comparison against the "before" data. Looking at the channels called out in the lower bit of the aLOG I'm sure will be a helpful part of the investigation.
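
For context, the standard single-bounce scattered-light fringe relation behind this velocity argument is f_fringe = 2 * v_rel / lambda; a tiny illustrative sketch (the velocity number is made up, not measured):

    wavelength = 1.064e-6    # m, main IFO laser wavelength
    v_rel = 1e-6             # m/s, illustrative relative ETM-TMS velocity

    f_fringe = 2 * v_rel / wavelength    # single-bounce scattering path
    print(f"{f_fringe:.1f} Hz")          # ~1.9 Hz; larger v_rel pushes the arches up toward the measurement band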

I chose "a few weeks" simply because the IFO configuration has otherwise been pretty stable "before" (e.g., we're in the "representative normal for O4" 60 W configuration rather than the early O4 75 W configuration), but I leave it to y'all's expertise and the data to figure out a fair comparison (maybe only one week, a few days, or even just the single "before" vs. "after" is enough to see a difference).
ansel.neunzert@LIGO.ORG - 14:31, Monday 21 August 2023 (72357)

detchar-request git issue for tracking purposes.

jane.glanzer@LIGO.ORG - 09:12, Thursday 05 October 2023 (73271)DetChar
Jane, Debasmita

We took a look at the Omicron and Gravity Spy triggers before and after this tracking was turned off. The time segments chosen for this analysis were:

TMSX tracking on: 2023-07-29 19:00:00 UTC - 2023-08-15 15:30:00 UTC, ~277 hours observing time
TMSX tracking off: 2023-08-16 05:30:00 UTC - 2023-08-31 00:00:00 UTC, ~277 hours observing time

For the analysis, the Omicron triggers were selected with SNR > 7.5 and frequency between 10 Hz and 1024 Hz. The Gravity Spy glitches were required to have a classification confidence of > 90%.
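
For reproducibility, a minimal sketch of the selection and rate calculation (assuming the Omicron triggers are loaded into pandas DataFrames with 'snr' and 'frequency' columns; the live times are the ~277 observing hours quoted above):

    import pandas as pd

    def glitch_rate(triggers: pd.DataFrame, livetime_hours: float) -> float:
        """Rate of triggers passing the SNR/frequency cuts, per hour of observing time."""
        sel = (triggers['snr'] > 7.5) & (triggers['frequency'].between(10, 1024))
        return sel.sum() / livetime_hours

    # rate_on  = glitch_rate(omicron_tracking_on,  277)   # ~29 / hr, quoted below
    # rate_off = glitch_rate(omicron_tracking_off, 277)   # ~15 / hr, quoted below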

The first pdf contains glitch rate plots. The first plot shows the Omicron glitch rate comparison before and after the change. The second and third plots show the comparison of the Omicron glitch rates before and after the change as a function of SNR and frequency. The fourth plot shows the Gravity Spy classifications of the glitches. What we can see from these plots is that when the errant tracking was on, the overall glitch rate was higher (~29 per hour when on, ~15 per hour when off). It was particularly high in the 7.5-50 SNR range and the 10 Hz - 50 Hz frequency range, which is typically where we observe scattering. The Gravity Spy plot shows that scattered light is the most common glitch type both when the tracking is on and when it is off, but its rate is reduced after the tracking is turned off.

We also looked into whether these scattering glitches were coincident in "H1:GDS-CALIB_STRAIN" and "H1:ASC-X_TR_A_NSUM_OUT_DQ", which is shown in the last pdf. From the few examples we looked at, there does seem to be some excess noise in the transmitted monitor channel when the tracking was on. If necessary, we can look into more examples of this.
Non-image files attached to this comment
debasmita.nandi@LIGO.ORG - 14:26, Monday 23 October 2023 (73674)
Debasmita, Jane

We have plotted the ground motion trends in the following frequency bands and DOFs

1. Earthquake band (0.03 Hz--0.1 Hz) ground motion at ETMX-X, ETMX-Z and ETMX-X tilt-subtracted
2. Wind speed (0.03 Hz--0.1 Hz) at ETMX
3. Micro-seismic band (0.1 Hz--0.3 Hz) ground motion at ETMX-X

We have also calculated the mean and median of the ground motion trends for two weeks before and after the tracking was turned off. It seems that while motion in all the other bands remained almost the same, the microseismic band (0.1-0.3 Hz) ground motion increased significantly (from a mean value of 75.73 nm/s to 115.82 nm/s) after the TMS-X tracking was turned off. Even so, there was less scattering than before, when the TMS-X tracking was on.
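
A sketch of how this kind of before/after comparison can be scripted (assuming gwpy/NDS2 access; treat the BLRMS channel name and the exact change GPS time as assumptions):

    import numpy as np
    from gwpy.timeseries import TimeSeries

    chan = 'H1:ISI-GND_STS_ETMX_X_BLRMS_100M_300M'   # assumed 0.1-0.3 Hz BLRMS channel, nm/s
    change = 1376148650                              # approx. GPS of the tracking turn-off
    two_weeks = 14 * 86400

    for label, (t0, t1) in [('before', (change - two_weeks, change)),
                            ('after', (change, change + two_weeks))]:
        ts = TimeSeries.get(chan, t0, t1)            # a minute-trend would be lighter to fetch
        print(label, 'mean %.2f' % np.mean(ts.value), 'median %.2f' % np.median(ts.value))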

The plots and the table are attached here.
Non-image files attached to this comment
H1 TCS
camilla.compton@LIGO.ORG - posted 09:51, Tuesday 15 August 2023 - last comment - 16:07, Friday 18 August 2023(72220)
Swapped CO2X and CO2Y Chillers

Closes WP11368.

We've been seeing the CO2X laser regularly unlocking (alog 71594), which takes us out of observing, so today we swapped the CO2X and CO2Y chillers to see if this issue follows the chiller. Previously, swapping CO2Y with the spare stopped CO2Y from unlocking (alog 54980).

The old CO2X chiller (S/N ...822) seems to be reporting an unsteady flow at the LVEA flow meter, see attached, suggesting the S/N ...822 chiller isn't working too well. This is the chiller TJ and I rebuilt in February (alog 67265).

Swap: following some of the procedure listed in alog 61325, we turned off both lasers via medm, turned off and unplugged (electrical and water connections) both chillers, swapped the chillers, plugged them back in, turned the chillers back on (one needed to be turned on via medm), checked the water level (nothing added), and turned the CO2 lasers back on via medm and chassis. Post-it notes have been added to the chillers. Both lasers relocked with ~45W power.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:08, Wednesday 16 August 2023 (72253)

Jason, TJ, Camilla 

The worse chiller's (S/N 822) flow rate dropped low enough for the CO2Y laser to trip off, so we swapped CO2Y back to its original chiller (S/N 617) and installed the spare chiller (S/N 813) for CO2X. We flushed the spare (instructions in alog 60792) as it hadn't been used since February (alog 67265). Both lasers are now running again and the flow rates so far look good.

The first batch of water we ran through the spare (S/N 813) chiller had small brass or metal pieces in it (caught in the filter), see attached. Once we drained this and added clean water there was no evidence of metal, so we connected it to the main CO2X circuit.

Looking at the removed CO2X chiller (rebuilt in February, alog 67265), it had some black gunk in it, see attached. This is worrying as it has been running through the CO2X lines since February and was running in the CO2Y system for ~5 hours. I should have checked the reservoir water before swapping the chillers.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 08:14, Wednesday 16 August 2023 (72267)

Overnight they seem stable as well, but the new TCSX chiller (617) looks very slightly noisier and perhaps has a slight downward trend to its flow. We'll keep watching this and see if it continues.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 08:19, Wednesday 16 August 2023 (72268)

I spoke too soon. Looks like TCSX relocked at 08:27UTC last night.

Images attached to this comment
camilla.compton@LIGO.ORG - 16:07, Friday 18 August 2023 (72327)

On Tuesday evening, the removed chiller (S/N 822) drained slowly. No water came out of the drain valve, only the outlet, which was strange.  Today I took the cover off the chiller but couldn't see any issues with the drainage. I left the chiller with all valves and the reservoir open so the last of the water can dry out of it.

H1 PSL
jason.oberling@LIGO.ORG - posted 09:39, Tuesday 15 August 2023 - last comment - 09:41, Tuesday 15 August 2023(72222)
PSL PMC/RefCav Remote Alignment Tweak

J. Oberling, R. Short

This morning we tweaked the beam alignment into the PMC and FSS RefCav, remotely from the Control Room.  With the ISS OFF, the PMC started and ended at:

Some improvement, but not much.  With the ISS ON PMC Refl is unchanged and PMC Trans = 109.2 W; we did have to adjust the ISS RefSignal for the improvement in PMC Trans, it changed from -1.98V to -2.0V.  Moving on to the FSS RefCav, with ISS ON we started and ended at:

Again, not much improvement, but some.  Also noting that the laser power out of both amplifiers is down a little bit (~2W on each), so it looks like it's getting close to time to adjust the pump diode operating currents; this could also be why we can't get much improvement with an alignment tweak (we've had higher PMC Trans and lower PMC Refl in the past), as we could be seeing output mode changes due to the natural slow drop in pump diode power.  Will keep an eye on things.

We accepted the RefSignal change in the ISS safe.snap SDF, and also a couple of 2nd loop ISS diffs that we confirmed are the correct values when the IFO is down.  Ryan will attach the screenshots as a comment.

Comments related to this report
ryan.short@LIGO.ORG - 09:41, Tuesday 15 August 2023 (72223)

PSLISS SAFE SDF accepted diffs attached.

Images attached to this comment
H1 SUS
thomas.shaffer@LIGO.ORG - posted 08:35, Tuesday 15 August 2023 (72219)
SUS_CHARGE node running into connection error

I'm still not sure what happened, but the SUS_CHARGE Guardian had a connection error for the H1:SUS-ITMX_L3_DRIVEALIGN_L2L filter bank, though it looks like it should have been for the GAIN channel. Looks like this happened last week as well - alog 71899. Before Camilla came in and pointed that out to me, I first tried stopping the node and then re-exec'ing it to create new EPICS connections, but this didn't work. I took the node to DOWN and then tried the injections again, but stopped them a bit too early. I also converted some tabs to spaces that might have been creating some guardian confusion.

It's fixed now; we ran it past the error point, then brought it to DOWN for maintenance.

LHO General
thomas.shaffer@LIGO.ORG - posted 08:11, Tuesday 15 August 2023 (72218)
Ops Day Shift Start

TITLE: 08/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s
QUICK SUMMARY: SUS_CHARGE ran into a connection error (see separate alog); maintenance has begun with minor activities on site.

H1 General
oli.patane@LIGO.ORG - posted 00:10, Tuesday 15 August 2023 (72217)
Ops EVE Shift End

TITLE: 08/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Came in with the detector in the process of relocking, and during my shift we had two locklosses, one unknown and one known, but the detector locked itself back up quickly each time without needing an initial alignment or any other help. We are currently Observing and have been Locked for 2hrs 25mins.

23:00 Relocking from lockloss that happened before I arrived (72201)

23:46:40 NOMINAL_LOW_NOISE
00:00:26 Observing

02:09 Lockloss(72209), I am taking us to LOWNOISE_LENGTH_CONTROL for 102Hz peak test(72214)
03:13 Reached NOMINAL_LOW_NOISE

03:57 Lockloss caused by me and Jenne, we are each taking 50% of the blame :( (72211)
04:41 Reached NOMINAL_LOW_NOISE
05:11 Observing


LOG:

no log

H1 ISC (CAL, DetChar)
oli.patane@LIGO.ORG - posted 22:28, Monday 14 August 2023 (72214)
Narrowing Down Cause of 102Hz Peak

In investigating the peak at 102Hz (72064, 72108), the cause had been narrowed down to either a calibration lines issue or an LSC FF issue. Since the change to the LSC FF ramp times did not fix the peak (72188), Jeff and Ryan S added a new ISC_LOCK state today (well, 8/14 22:45 UTC) (72205), TURN_ON_CALIBRATION_LINES, so that the calibration lines aren't turned on until late in the locking process.

In the following locking sequence (72201), the peak still appeared around the LOWNOISE_LENGTH_CONTROL and TURN_ON_CALIBRATION_LINES states, so it was decided that the next time we lost lock, we would pause on the way back up at LOWNOISE_LENGTH_CONTROL to determine which of the two states the peak was linked to.

We lost lock at 2:09 UTC (72209), so I set the detector to only go through to the state right before LOWNOISE_LENGTH_CONTROL (I selected LOWNOISE_ESD_ETMY when it should've been LOWNOISE_ESD_ETMX, but that presumably wouldn't affect which of the two states the peak turned on at), and then selected LOWNOISE_LENGTH_CONTROL; the 102Hz peak appeared quickly after (full spectrum, zoomed in). Once we had been in that state for a bit, I moved to TURN_ON_CALIBRATION_LINES to see if that would cause the peak to change in any way, but it didn't (zoomed in - not the best screenshot, sorry). So the peak at 102Hz is caused by one of the filter gains in LOWNOISE_LENGTH_CONTROL.

Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:13, Monday 14 August 2023 - last comment - 23:16, Monday 14 August 2023(72211)
Lockloss

Lockloss @ 3:57 due to me having selected the wrong state to stop at before going into LOWNOISE_LENGTH_CONTROL for our 102Hz peak test :(. Currently relocking and everything is going well!!

Comments related to this report
oli.patane@LIGO.ORG - 22:12, Monday 14 August 2023 (72213)

Back to Observing at 5:11UTC!

H1 General
oli.patane@LIGO.ORG - posted 20:32, Monday 14 August 2023 (72210)
Ops EVE MidShift Report

After losing lock at 2:09UTC, I started relocking the detector but had it stop right before it got to LOWNOISE_LENGTH_CONTROL so we could see whether the 102Hz peak (72064) is related to the LSC filter gains turning on or the calibration lines turning on. We got our answer, so I continued and we are currently in NOMINAL_LOW_NOISE waiting for ADS to converge so we can go into Observing.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 19:15, Monday 14 August 2023 - last comment - 23:51, Monday 14 August 2023(72209)
Lockloss

Lockloss at 2:09UTC. Got an EX saturation callout immediately before.

In relocking, I will be setting ISC_LOCK so we go to LOWNOISE_LENGTH_CONTROL instead of straight to NOMINAL_LOW_NOISE, to see if the 102Hz line is caused by LOWNOISE_LENGTH_CONTROL or by the calibration lines engaging at TURN_ON_CALIBRATION_LINES (72205).

Comments related to this report
oli.patane@LIGO.ORG - 23:51, Monday 14 August 2023 (72216)DetChar

I noticed a weird set of glitches on the glitchgram FOM that took place between 3:13 and 3:57 UTC (spectrogram, omicron triggers), ramping up in frequency from 160-200Hz over that timespan. Even though we weren't Observing when this was happening, the diagonal line on many of the summary page plots is hard to miss, so I wanted to post this and tag DetChar to give info as to why these glitches appeared and why they (presumably) shouldn't be seen again.

This was after we had reached NOMINAL_LOW_NOISE but were not Observing yet because we were waiting for ADS to converge. Although I don't know the direct cause of these glitches, their appearance, along with ADS not converging, was because I had selected the wrong state to pause at before going into LOWNOISE_LENGTH_CONTROL (for the 102Hz peak test), so even though I was then able to proceed to NOMINAL_LOW_NOISE, the detector wasn't in the correct configuration. Once we lost lock trying to correct the issue, relocking automatically went through the correct states. So these glitches occurred during a non-nominal NOMINAL_LOW_NOISE.

Images attached to this comment
H1 CAL (CAL, GRD, ISC, OpsInfo)
ryan.short@LIGO.ORG - posted 16:48, Monday 14 August 2023 (72205)
Added TURN_ON_CALIBRATION_LINES State to ISC_LOCK

R. Short, J. Kissel

Following investigations into the 102Hz feature that has been showing up for the past few weeks (see alogs 72064 and 72108 for context), it was decided to try a lock acquisition with the calibration lines off until much later in the locking sequence. What this means in more specific terms is as follows:

For each of these points where the calibration lines are turned off or on, the code structure is now consistent. Each time "the lines are turned on/off," the cal lines for ETMX stages L1, L2, and L3 (CLK, SIN, and COS), PCal X, PCal Y, and DARMOSC are all toggled.
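
As an illustration only (not the actual ISC_LOCK code, and with hypothetical channel-name patterns), the idea is that every on/off toggle goes through one helper so all the lines switch together and stay consistent:

    def set_calibration_lines(ezca, on):
        """Toggle all cal-line oscillator gains together (illustrative sketch only)."""
        gain = 1.0 if on else 0.0
        for stage in ('L1', 'L2', 'L3'):
            for part in ('CLK', 'SIN', 'COS'):
                ezca['SUS-ETMX_%s_CAL_LINE_%sGAIN' % (stage, part)] = gain   # hypothetical names
        for osc in ('CAL-PCALX_PCALOSC7_OSC', 'CAL-PCALY_PCALOSC2_OSC'):     # hypothetical names
            ezca[osc + '_SINGAIN'] = gain
        # ... plus the DARMOSC line, toggled the same way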

The OMC and SUSETMX SAFE SDF tables have been updated appropriately (screenshots attached).

These changes were implemented on August 14th around 22:45 UTC, are committed to svn, and ISC_LOCK has been loaded.

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 16:02, Monday 14 August 2023 - last comment - 21:43, Monday 14 August 2023(72201)
Lockloss @ 22:44 UTC

Lockloss @ 22:44 UTC - cause currently unknown, seemed fast

Comments related to this report
oli.patane@LIGO.ORG - 21:43, Monday 14 August 2023 (72212)

00:00 Back into Observing

H1 SUS
gabriele.vajente@LIGO.ORG - posted 10:14, Thursday 10 August 2023 - last comment - 14:43, Tuesday 15 August 2023(72130)
Reducing SR2 and SR3 damping

Follow up on previous tests (72106)

First I injected noise on SR2_M1_DAMP_P and SR2_M1_DAMP_L to measure the transfer function to SRCL. The result shows that the shape is different and the ratio is not constant in frequency, so we probably can't cancel the coupling of SR2_DAMP_P to SRCL by rebalancing the driving matrix (although I haven't thought carefully about whether there is some loop correction I need to apply to those transfer functions; I measured and plotted the DAMP_*_OUT to SRCL_OUT transfer functions). It might still be worth trying to change the P driving matrix while monitoring a P line to minimize the coupling to SRCL.
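
For reference, a minimal sketch of how such a DAMP_*_OUT to SRCL_OUT transfer function estimate can be computed offline (assuming both time series are already fetched at a common sample rate; this is not the measurement code actually used):

    from scipy import signal

    def transfer_function(witness, response, fs, seglen=64):
        """CSD/PSD transfer function estimate plus coherence to judge where it's valid."""
        nper = int(seglen * fs)
        f, pxx = signal.welch(witness, fs=fs, nperseg=nper)
        _, pxy = signal.csd(witness, response, fs=fs, nperseg=nper)
        _, coh = signal.coherence(witness, response, fs=fs, nperseg=nper)
        return f, pxy / pxx, coh

    # f, tf, coh = transfer_function(sr2_damp_p_out, srcl_out, fs=256)
    # only trust |tf| and its phase where coh ~ 1, i.e. inside the 1-10 Hz injection band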

Then I reduced the damping gains for SR2 and SR3 even further. We are now running with SR2_M1_DAMP_*_GAIN = -0.1 (was -0.5 for all but P, which was -0.2 since I reduced it yesterday), and SR3_M1_DAMP_*_GAIN = -0.2 (was -1). This has improved the SRCL motion a lot and also improved the DARM RMS. It looks like it also improved the range.

Tony has accepted this new configuration in SDF.

Detailed log below for future reference.

 

Time with SR2 P gain at -0.2 (but before that too)
from    PDT: 2023-08-10 08:52:40.466492 PDT
        UTC: 2023-08-10 15:52:40.466492 UTC
        GPS: 1375717978.466492
to      PDT: 2023-08-10 09:00:06.986101 PDT
        UTC: 2023-08-10 16:00:06.986101 UTC
        GPS: 1375718424.986101

H1:SUS-SR2_M1_DAMP_P_EXC  butter("BandPass",4,1,10) ampl 2
from    PDT: 2023-08-10 09:07:18.701326 PDT
        UTC: 2023-08-10 16:07:18.701326 UTC
        GPS: 1375718856.701326
to      PDT: 2023-08-10 09:10:48.310499 PDT
        UTC: 2023-08-10 16:10:48.310499 UTC
        GPS: 1375719066.310499

H1:SUS-SR2_M1_DAMP_L_EXC  butter("BandPass",4,1,10) ampl 0.2
from    PDT: 2023-08-10 09:13:48.039178 PDT
        UTC: 2023-08-10 16:13:48.039178 UTC
        GPS: 1375719246.039178
to      PDT: 2023-08-10 09:17:08.657970 PDT
        UTC: 2023-08-10 16:17:08.657970 UTC
        GPS: 1375719446.657970

All SR2 damping at -0.2, all SR3 damping at -0.5
start   PDT: 2023-08-10 09:31:47.701973 PDT
        UTC: 2023-08-10 16:31:47.701973 UTC
        GPS: 1375720325.701973
to      PDT: 2023-08-10 09:37:34.801318 PDT
        UTC: 2023-08-10 16:37:34.801318 UTC
        GPS: 1375720672.801318

All SR2 damping at -0.2, all SR3 damping at -0.2
start   PDT: 2023-08-10 09:38:42.830657 PDT
        UTC: 2023-08-10 16:38:42.830657 UTC
        GPS: 1375720740.830657
to      PDT: 2023-08-10 09:43:58.578103 PDT
        UTC: 2023-08-10 16:43:58.578103 UTC
        GPS: 1375721056.578103

All SR2 damping at -0.1, all SR3 damping at -0.2
start   PDT: 2023-08-10 09:45:38.009515 PDT
        UTC: 2023-08-10 16:45:38.009515 UTC
        GPS: 1375721156.009515

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 16:30, Friday 11 August 2023 (72159)

If our overall goal is to remove peaks from DARM that dominate the RMS, reducing these damping gains is not the best way to achieve that. The SR2 L damping gain was reduced by a factor of 5 in this alog, and a resulting 2.8 Hz peak is now being injected into DARM from SRCL. This 2.8 Hz peak corresponds to a 2.8 Hz SR2 L resonance. There is no length control on SR2, so the only way to suppress any length motion of SR2 is via the top stage damping loops. The same can be said for SR3, whose gains were reduced by 80%. It may be that we are reducing sensor noise injected into SRCL from 3-6 Hz by reducing these gains, hence the improvement Gabriele has noticed.

Comparing a DARM spectrum before and after this change to the damping gains, you can see that the reduction in the damping gain did reduce DARM and SRCL above 3 Hz, but also created a new peak in DARM and SRCL at 2.8 Hz. I also plotted spectra of all dofs of SR2 and SR3 before and after the damping gain change showing that some suspension resonances are no longer being suppressed. All reference traces are from a lock on Aug 9 before these damping gains were reduced and the live traces are from this current lock. The final plot shows a transfer function measurement of SR2 L taken by Jeff and me in Oct 2022.

Images attached to this comment
elenna.capote@LIGO.ORG - 16:16, Monday 14 August 2023 (72204)

Since we fell out of lock, I took the opportunity to make SR2 and SR3 damping gain adjustments. I have split the difference on the gain reductions in Gabriele's alog. I increased all the SR2 damping gains from -0.1 to -0.2 (nominal is -0.5). I increased the SR3 damping gains from -0.2 to -0.5 (nominal is -1).

This is guardian controlled in LOWNOISE_ASC, because we need to acquire lock with higher damping gains.
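
For the record, a sketch of the kind of gain hand-off a guardian state would do here (the ezca LIGOFilter ramp helper and the 5-second ramp time are assumptions on my part, not the literal LOWNOISE_ASC code):

    def lower_sr_damping(ezca, ramp=5):
        """Ramp SR2/SR3 top-mass damping gains down after lock acquisition (sketch)."""
        for dof in ('L', 'T', 'V', 'R', 'P', 'Y'):
            ezca.get_LIGOFilter('SUS-SR2_M1_DAMP_' + dof).ramp_gain(-0.2, ramp_time=ramp, wait=False)
            ezca.get_LIGOFilter('SUS-SR3_M1_DAMP_' + dof).ramp_gain(-0.5, ramp_time=ramp, wait=False)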

Once we are back in lock, I will check the presence of the 2.8 Hz peak in DARM and determine how much different the DARM RMS is from this change.

There will be SDF diffs in observe for all SR2 and SR3 damping dofs. They can be accepted.

oli.patane@LIGO.ORG - 16:55, Monday 14 August 2023 (72206)

SR2 and SR3 damping gains changes that Elenna made have been accepted

Images attached to this comment
elenna.capote@LIGO.ORG - 17:47, Monday 14 August 2023 (72208)

The DARM RMS increases by about 8% with these new, slightly higher gains. These gains are a factor of 2 to 2.5 greater than Gabriele's reduced values. The 2.8 Hz peak in DARM is down by 21%.

Images attached to this comment
elenna.capote@LIGO.ORG - 14:43, Tuesday 15 August 2023 (72249)

This is a somewhat difficult determination to make, given all the nonstationary noise from 20-50 Hz, but it appears the DARM sensitivity is slightly improved from 20-40 Hz with a slightly higher SR2 gain. I randomly selected several times from the past few locks with the SR2 gains set to -0.1 and recent data from the last 24 hours where SR2 gains were set to -0.2. There is a small improvement in the data with all SR2 damping gains = -0.2 and SR3 damping gains= -0.5.

I think we need to do additional tests to determine exactly how SR2 and SR3 motion limit SRCL and DARM so we can make more targeted improvements to both. My unconfirmed conclusion from this small set of data is that while we may be able to reduce reinjected sensor noise above 3 Hz with a damping gain reduction, we will also limit DARM if there is too much motion from SR2 and SR3.

Images attached to this comment
H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 14:41, Wednesday 09 August 2023 - last comment - 17:29, Monday 14 August 2023(72108)
102 Hz Feature is NOT a Calibration Line Issue; Regardless, Calibration Systematic Error Monitor Line MOVED from 102.13 to 104.23 Hz
J. Kissel, A. Neunzert, E. Goetz, V. Bossilkov

As we continue the investigation into understanding why the noise in the region around 102.13 Hz gets SUPER loud at the beginning of nominal low noise segments, and why the calibration line seems to be reporting a huge amount of systematic error (see investigations in LHO:72064), Ansel has found that some new electronics noise appeared at the end station as of Saturday Aug 5 2023 around 05:30a PT, at a frequency extremely, and unluckily, close to the 102.13000 Hz calibration line -- something at 102.12833 Hz; see LHO:72105.

While we haven't yet ID'd the cause, and thus have no solution -- we can still change the calibration line frequency to move it away from this feature, in hopes that the two aren't beating together as terribly as they are now.

I've changed the calibration line frequency to 104.23 Hz as of 21:13 UTC on Aug 09 2023.

This avoids 
    (a) LLO's similar frequency at 101.63 Hz, and 
    (b) because the former frequency, 102.13 Hz, was near the upper edge of the 9.33 Hz wide [92.88, 102.21) Hz pulsar spin-down "non-vetoed" band, the new frequency, 104.23 Hz, skips up to the next 18.55 Hz wide "non-veto" band between [104.22, 122.77) Hz, according to LHO:68139.
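
A quick sanity-check sketch of the new frequency against these constraints (bands taken from the text above; the 0.5 Hz separation from the LLO line is an arbitrary illustrative threshold, not a requirement):

    NON_VETOED_BANDS = [(92.88, 102.21), (104.22, 122.77)]   # Hz, from LHO:68139
    LLO_LINE = 101.63                                        # Hz

    def frequency_ok(f, min_sep=0.5):
        """True if f sits in a non-vetoed band and is well separated from the L1 line."""
        in_band = any(lo <= f < hi for lo, hi in NON_VETOED_BANDS)
        return in_band and abs(f - LLO_LINE) > min_sep

    print(frequency_ok(104.23))   # True -- the new choice
    print(frequency_ok(103.00))   # False -- falls between the two non-vetoed bands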

Stay tuned 
   - to see if this band-aid fix actually helps, or just spreads out the spacing between the comb, and
   - as we continue to investigate the issue of from where this thing came.

Other things of note: 

Since
   - this feature is *not* related to the calibration line itself, 
   - this calibration line is NOT used to generate any time-dependent correction factors, and thus neither the calibration pipeline itself nor the data it produces is affected
   - this calibration line is used only to *monitor* the calibration systematic error, 
   - this feature is clearly identified in an auxiliary PEM channel -- and that same channel *doesn't* see the calibration line
we conclude that there *isn't* some large systematic error occurring; it's just the calculation that's getting spoiled and misreporting large systematic error.
Thus, we make NO plan to do anything further with the calibration or systematic error estimate side of things.

We anticipate that this now falls squarely on the noise subtraction pipeline's shoulders. Given that this 102.12833 Hz noise has a clear witness channel, and the noise creates non-linear nastiness, I expect this will be an excellent candidate for offline non-linear / NONSENS cleaning.


Here's the latest list of calibration lines:
Freq (Hz)   Actuator                   Purpose                      Channel that defines Freq             Changes Since Last Update (LHO:69736)     
15.6        ETMX UIM (L1) SUS          \kappa_UIM excitation        H1:SUS-ETMY_L1_CAL_LINE_FREQ          No change
16.4        ETMX PUM (L2) SUS          \kappa_PUM excitation        H1:SUS-ETMY_L2_CAL_LINE_FREQ          No change
17.1        PCALY                      actuator kappa reference     H1:CAL-PCALY_PCALOSC1_OSC_FREQ        No change
17.6        ETMX TST (L3) SUS          \kappa_TST excitation        H1:SUS-ETMY_L3_CAL_LINE_FREQ          No change
33.43       PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC4_OSC_FREQ        No change
53.67         |                            |                        H1:CAL-PCALX_PCALOSC5_OSC_FREQ        No change
77.73         |                            |                        H1:CAL-PCALX_PCALOSC6_OSC_FREQ        No change
104.23        |                            |                        H1:CAL-PCALX_PCALOSC7_OSC_FREQ        FREQUENCY CHANGE; THIS ALOG
283.91        V                            V                        H1:CAL-PCALX_PCALOSC8_OSC_FREQ        No change
284.01      PCALY                      PCALXY comparison            H1:CAL-PCALY_PCALOSC4_OSC_FREQ        No change
410.3       PCALY                      f_cc and kappa_C             H1:CAL-PCALY_PCALOSC2_OSC_FREQ        No change
1083.7      PCALY                      f_cc and kappa_C monitor     H1:CAL-PCALY_PCALOSC3_OSC_FREQ        No change
n*500+1.3   PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC1_OSC_FREQ        No change (n=[2,3,4,5,6,7,8])
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:29, Monday 14 August 2023 (72207)
Just a post-facto proof that this calibration line frequency change from 102.13 to 104.23 Hz drastically improved the symptom that the response function systematic error, as computed by this ~100 Hz line, was huge for hours while the actual 102.128333 Hz line was loud.

The attached screenshot shows two days before and two days after the change (again, on 2023-08-09 at 21:13 UTC). The green trace shows that there is no longer an erroneously reported large error as computed by the 102.13 and then 104.23 Hz lines at the beginning of nominal low noise segments.
Images attached to this comment
H1 DetChar (CAL, DetChar)
derek.davis@LIGO.ORG - posted 15:01, Tuesday 08 August 2023 - last comment - 21:49, Tuesday 29 August 2023(72064)
Excess noise near 102.13 Hz calibration line

Benoit, Ansel, Derek

Benoit noticed that for recent locks, the 102.13 Hz calibration line is much louder than typical for the first few hours of the lock. An example of this behavior is shown in the attached spectrogram of H1 strain data on August 5 - this is the first day this behavior appeared. Ansel noted that this feature includes a comb-like structure around the line that is only present in the H1:GDS-CALIB_STRAIN_NOLINES channel and not H1:GDS-CALIB_STRAIN (see spectra for CALIB_STRAIN and CALIB_STRAIN_NOLINES on Aug 5). This issue is also visible in the PCAL trends for the 102.13 Hz line.

We are not sure if the excess noise near 102.13 Hz is from the calibration line itself or another noise source that is near the line. However, the behavior has been present for every lock since 12:30 UTC on August 5 2023. 

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:21, Wednesday 09 August 2023 (72094)CAL
FYI, 
$ gpstime Aug 05 2023 12:30 UTC
    PDT: 2023-08-05 05:30:00.000000 PDT
    UTC: 2023-08-05 12:30:00.000000 UTC
    GPS: 1375273818.000000
so... this behavior seems to have started at 5:30a local time on a Saturday. It is therefore *very* unlikely that the start of this issue is intentional / driven by a human change.

The investigation continues....
making sure to tag CAL.
jeffrey.kissel@LIGO.ORG - 09:42, Wednesday 09 August 2023 (72095)
Other facts and recent events:
- Attached are 2 screenshots that show the actual *digital* excitation is not changing with time in anyway.
    :: 2023-08-08_H1PCALEX_OSC7_102p13Hz_Line_3mo_trend.png shows the specific oscillator -- PCALX's OSC7, which drives the 102.13 Hz line -- via the EPICS channel version of its output. The minute trend shows the max, min, and mean of the output, and there's no change in amplitude.
    :: 2023-08-08_H1PCALEX_EXC_SUM_3mo_trend.png shows a trend of the total excitation sum from PCAL X. This also shows *no* change in time in amplitude.

Both trends show the Aug 02 2023 change-in-amplitude kerfuffle I caused, which Corey found and a bit later rectified -- see LHO:71894 and subsequent comments -- but that was done, over with, and solved definitely by Aug 03 2023 UTC, and is unrelated to the start of this problem.

It's also well after I installed new oscillators and rebooted the PCALX, PCALY, and OMC models on Aug 01 2023 (see LHO:71881).
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:44, Wednesday 09 August 2023 (72100)
The front-end version of the calibration's systematic error at 102.13 Hz also shows the long, time-dependent issue -- this will allow us to trend the issue against other channels

Folks in the calibration group have found that the online monitoring system for the
    -  overall DARM response function systematic error
    - (absolute reference) / (Calibrated Data Product) [m/m] 
    - ( \eta_R ) ^ (-1) 
    - (C / 1+G)_pcal / (C / 1+G)_strain
    - CAL-DELTAL_REF_PCAL_DQ / GDS-CALIB_STRAIN
(all different ways of saying the same thing; see T1900169) in calibration at each PCAL calibration line frequency -- the "grafana" pages -- are showing *huge* amounts of systematic error during these times when the amplitude of the line is super loud.

Though this metric is super useful because it's dreadfully obvious that things are going wrong -- this metric is not in any normal frame structure, so you can't compare it against other channels to find out what's causing the systematic error.

However -- remember -- we commissioned a front-end version of this monitoring during ER15 -- see LHO:69285.

That means the channels 
    H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ     << the frequency of the monitor
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM        << the magnitude of the systematic error
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG        << the phase of the systematic error

tell you (what's supposed to be***) equivalent information.

*** One might say that "what's supposed to be" is the same as "roughly equivalent" due to the following reasons: 
    (1) because we're human, one system is displaying the systematic error \eta_R, and the other is displaying the inverse ( \eta_R ) ^ (-1) 
    (2) Because this is early-days in the front-end system, it uses the "less complete" calibrated channel CAL-DELTAL_EXTERNAL_DQ rather than the "fully correct" channel GDS-CALIB_STRAIN

But because the problem is so dreadfully obvious in these metrics, even though they're only *roughly* equivalent, you can see the same thing.
In the attached screenshot, I show both metrics for the most recent observation stretch, between 10:15 and 14:00 UTC on 2023-Aug-09.

Let's use this front-end metric to narrow down the problem via trending.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:58, Wednesday 09 August 2023 (72102)CAL, DetChar
There appears to be no change in the PCALX analog excitation monitors either.

Attached is a trend of some key channels in the optical follower servo -- the analog feedback system that serves as intensity stabilization and excitation power linearization for the PCAL's laser light that gets transmitted to the test mass -- the actuator of which is an acousto-optic modulator (an AOM). There seem to be no major differences in the max, min, and mean of these signals before vs. after these problems started on Aug 05 2023.

H1:CAL-PCALX_OFS_PD_OUT_DQ
H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUT_DQ
Images attached to this comment
madeline.wade@LIGO.ORG - 11:34, Wednesday 09 August 2023 (72104)

I believe this is caused by the presence of another line very close to the 102.13 Hz PCAL line.  This second line is present at the start of a lock stretch but seems to go away as the lock stretch continues.  I have attached a plot showing a zoom-in on an ASD around 102.1-102.2 Hz right after the start of a lock stretch (orange), where the second peak is evident, and well into a lock stretch (blue), where the PCAL line is still present but the second peak right below it in frequency is gone.  This ASD is computed using an hour of data for each curve, so we can get the needed resolution for these two peaks.

I don't know the origin of this second line.  However, a quick fix to the issue could be moving the PCAL line over by about a Hz.  The second attached plot shows that the spectrum looks pretty clean from 101-102 Hz, so somewhere in there would probably be okay for a new location of the PCAL line.
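
For reference, the resolution argument in one line (pure arithmetic, no assumptions beyond the two frequencies quoted above):

    df = 102.13000 - 102.12833                               # Hz, separation of the two peaks
    print('need T of at least ~%.0f s per FFT' % (1 / df))   # ~600 s, so hour-long averages make sense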

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:47, Wednesday 09 August 2023 (72105)

Since it looks like the additional noise is at 102.12833 Hz, I did a quick check in Fscan data from Aug 5 for channels where there is high coherence with DELTAL_EXTERNAL at 102.12833 but *not* at 102.13000 Hz. This narrows down to just a few channels:

  • H1:PEM-EX_MAG_EBAY_SEIRACK_{Z,Y}_DQ .
  • H1:PEM-EX_ADC_0_09_OUT_DQ
  • H1:ASC-OMC_A_YAW_OUT_DQ. Note that other ASC-OMC channels (Fscan tracks A,B and PIT,YAW) see high coherence at both frequencies.

(lines git issue opened as we work on this.)
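
For anyone repeating this by hand, a minimal sketch of the narrowband coherence check described above (assuming the candidate witness channel and DELTAL_EXTERNAL are already fetched as gwpy TimeSeries over a stretch long enough to resolve ~1.7 mHz):

    def coherence_at(witness, deltal, f1=102.12833, f2=102.13000, fftlength=1200):
        """Coherence at the noise frequency vs. the cal-line frequency (gwpy TimeSeries in)."""
        coh = deltal.coherence(witness, fftlength=fftlength, overlap=fftlength / 2)
        freqs = coh.frequencies.value
        value_near = lambda f: coh.value[abs(freqs - f).argmin()]
        return value_near(f1), value_near(f2)   # want: high at f1 (new noise), low at f2 (cal line)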

jeffrey.kissel@LIGO.ORG - 14:45, Wednesday 09 August 2023 (72110)
As a result of Ansel's discovery, and conversation on the CAL call today -- I've moved the calibration line frequency from 102.13 to 104.23 Hz. See LHO:72108.
derek.davis@LIGO.ORG - 13:28, Friday 11 August 2023 (72157)

This line may have appeared in the previous lock the day before (Aug 4). The daily spectrogram for Aug 4 shows a line near 100 Hz starting at 21:00 UTC. 

Images attached to this comment
elenna.capote@LIGO.ORG - 16:49, Friday 11 August 2023 (72163)

Looking at alogs leading up to the time Derek notes above, I noticed that Gabriele retuned and tested new LSC FF. This change may be related to this new peak. Remembering some issues we had recently where DHARD filter impulses were ringing up violin modes, I checked the new LSC FF filters and how they are engaged in the guardian. Some of them have no ramp time, and the filter bank is turned on immediately along with the filters in the guardian. I have no idea why that would cause a peak at 102 Hz, but I updated those filters to have a 3 second ramp.

oli.patane@LIGO.ORG - 16:51, Friday 11 August 2023 (72164)

Reloaded the H1LSC model to load in Elenna's filter changes

ansel.neunzert@LIGO.ORG - 13:36, Monday 14 August 2023 (72197)

Now that the calibration line has been moved, the comb-like structure at the calibration line frequency is no longer present (checked in the CLEAN channel).

We can also see the shape of the 102.12833 Hz line much more clearly without the overlapping calibration line. I have attached a plot for reference on the width and shape.

Images attached to this comment
camilla.compton@LIGO.ORG - 16:33, Monday 14 August 2023 (72203)ISC

As discussed in today's commissioning meeting, I checked TMSX and ETMX movement for a kick during locking and couldn't see anything suspicious. I did find some increased motion/noise every 8Hz in TMSX 1s into ENGAGE_SOFT_LOOPS, when ISC_LOCK isn't explicitly doing anything, plot attached. However, this noise was present prior to Aug 4th (July 30th attached).

TMS is suspicious as Betsy found that the TMSs have violin modes at ~103-104Hz.

Jeff draws attention to 38295, showing modes of quad blade springs above 110Hz, and 24917, showing quad top wire modes above 300Hz.

Elenna notes that with calibration lines off (as we are experimenting with for the current lock) we can see this 102Hz peak at ISC_LOCK state ENGAGE_ASC_FOR_FULL_IFO. We were mistaken.

Images attached to this comment
elenna.capote@LIGO.ORG - 21:49, Tuesday 29 August 2023 (72544)

To preserve documentation, this problem has now been solved, with more details in 72537, 72319, and 72262.

The cause of this peak was a spurious, narrow 102 Hz feature in the SRCL feedforward filter that we didn't catch when the filter was made. This has now been fixed, and the cause of the mistake has been documented in the first alog listed above so we hopefully don't repeat this error.
