LHO FMCS
bubba.gateley@LIGO.ORG - posted 09:28, Wednesday 07 June 2023 - last comment - 10:14, Wednesday 07 June 2023(70222)
Well Pump Running
Well pump started now to replenish fire water tank. 
Comments related to this report
thomas.shaffer@LIGO.ORG - 09:35, Wednesday 07 June 2023 (70223)DetChar

I'm not sure anyone has looked at the well pump before, so for reference the GPS start time was 1370190407 +/- 2 sec. Bubba says that this pump runs automatically, generally overnight for 4 hours. Today's run should also last 4 hours unless the pump is manually turned off.

Edit: Tagging DetChar explicitly, not just in task

david.barker@LIGO.ORG - 10:14, Wednesday 07 June 2023 (70228)

A reminder that the well pump run time trend plots for 1 week, 1 month, and 3 months are available on the CDS web server.

1-week

H1 CDS
david.barker@LIGO.ORG - posted 09:06, Wednesday 07 June 2023 - last comment - 10:21, Wednesday 07 June 2023(70221)
Note to tconvert users: we are working on the expired leap-seconds data file

If anyone is using tconvert, today it is producing the warning text:

tconvert WARNING: Leap-second info in /ligo/data/tcleaps/tcleaps.txt is no longer certain to be valid, and we were unable to get updated info from any LDAS web server.  Continuing with possibly-outdated info.

The leap-seconds file expired one hour ago, at 2023-06-07 08:08:28.000000 PDT.

This is just a warning message; the resulting GPS time is still correct. There have been no leap seconds applied to UTC since Dec 2016.

Comments related to this report
jameson.rollins@LIGO.ORG - 09:46, Wednesday 07 June 2023 (70224)

I encourage people to use the gpstime utility, which is better supported and less prone to these kinds of failures, instead of tconvert.
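For anyone switching over, here is a minimal sketch of gpstime usage (hedged; the API is as I recall it, so check the package documentation):

import gpstime

print(gpstime.gpsnow())                       # current GPS time as a float
t = gpstime.parse('2023-06-07 16:28:00 UTC')  # parse a UTC date string
print(t.gps())                                # -> GPS seconds, e.g. for alog timestamps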

david.barker@LIGO.ORG - 10:21, Wednesday 07 June 2023 (70230)

I've extended the expiration time for tconvert's tcleaps.txt to 2026-08-07 17:55:08.000000 PDT. 

H1 DetChar (DetChar)
ansel.neunzert@LIGO.ORG - posted 08:04, Wednesday 07 June 2023 - last comment - 06:03, Tuesday 11 July 2023(70219)
Combs contaminate many spectral bins in H1 weekly averaged data

The good: we now have access to longer segments of observing-quality data and can get a clearer look at narrow spectral artifacts. It looks like the line and comb situation is fairly stable and consistent with previous observations in smaller data sets. We're generally not seeing new problems arising, but are better understanding the existing problems and their scope.

The bad: Combs are more pervasive than previously known, especially at intermediate and high frequencies. In particular, a large number of spectral bins are contaminated by a set of ~9.47 Hz combs over a wide spectral range, which is problematic for CW searches.

The details:

The 4.98423 Hz comb is still present and clearly visible at low frequencies. The ~9.47 Hz combs (specifically 9.47431, 9.475383, and 9.480526 Hz) are weaker, but the larger data set reveals that they are more pervasive. They contaminate a spectral region spanning from about 200 Hz to 930 Hz (98th harmonic). There are 3 distinct combs involved in this structure; the triple peak can only be seen at high spectral resolution.

There is also a 29.969515 Hz comb which is visible up to its 60th harmonic at about 1800 Hz, and a 99.99864 Hz comb which is visible up to at least 2 kHz.
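As a rough illustration of the scale of the contamination, here is a sketch that counts comb-contaminated bins (the 1 mHz bin width is an assumption for illustration; the actual analysis resolution isn't stated here):

import numpy as np

def comb_bins(f0, f_lo, f_hi, df):
    """Harmonics of comb fundamental f0 within [f_lo, f_hi], and the index
    of the frequency bin (width df) each harmonic lands in."""
    n = np.arange(np.ceil(f_lo / f0), np.floor(f_hi / f0) + 1, dtype=int)
    harmonics = n * f0
    return n, harmonics, np.round(harmonics / df).astype(int)

# The three ~9.47 Hz combs over the contaminated 200-930 Hz span:
for f0 in (9.47431, 9.475383, 9.480526):
    n, freqs, bins = comb_bins(f0, 200.0, 930.0, 0.001)
    print(f"{f0} Hz comb: harmonics {n[0]}-{n[-1]}, {len(set(bins))} bins")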

Coherence information:

We also have averaged coherence data over the same time period. As a reminder, Fscan tracks a limited set of high-priority channels and not the full DetChar list.

Prior related alogs: 68261, 66925

Attached figures: (1) 200-930 Hz plot demonstrating the range of the 9.47 Hz combs, as well as some of the other noted combs. (2) Zoom on the triple peak of these combs. (3) Low frequency spectrum demonstrating the 4.98 Hz comb.

Images attached to this report
Comments related to this report
ansel.neunzert@LIGO.ORG - 06:03, Tuesday 11 July 2023 (71222)

Carolina Li, Ansel Neunzert

Summary

It looks like the pervasive ~9.47 Hz combs are coherent with:

  • H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
  • H1:IMC-WFS_B_Q_YAW_OUT_DQ
  • H1:IMC-WFS_A_DC_YAW_OUT_DQ
  • H1:PEM-CS_ACC_PSL_TABLE1_Y_DQ

Related to jitter?

(The WFS channels see the 29.97 Hz comb too.)

Background

For reasons of computational cost, Fscan only tracks a limited channel list, which does not include these channels. However, daily bruco scans are generated through the STAMP-PEM monitor (thanks Kiet!), which are lower resolution but cover many more channels. Carolina has been working to cross-reference the STAMP-PEM data with Fscan data to extract hints about promising channels for noise-hunting, working around the frequency resolution by leveraging the fact that combs show up in multiple spectral bins and should have the same coherences in all of them. Carolina generated a heat map counting the number of times that various channels were coherent with h(t) in the STAMP-PEM frequency bins (low resolution) corresponding to Fscan auto-generated combs (high resolution). Fig 1 shows her initial test case, which clearly highlights the listed channels. This prompted me to follow up with higher-resolution Fscan spectra and confirm.
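The counting logic amounts to something like the following sketch (names and the coherence threshold are hypothetical illustrations, not Carolina's actual code):

import numpy as np

def comb_coherence_counts(stamp_coh, stamp_df, comb_freqs, threshold=0.5):
    """stamp_coh: dict of channel name -> low-resolution coherence array.
    stamp_df: STAMP-PEM bin width (Hz). comb_freqs: high-resolution Fscan
    comb-tooth frequencies (Hz). Returns {channel: count} for the heat map."""
    bins = np.unique(np.round(np.asarray(comb_freqs) / stamp_df).astype(int))
    return {chan: int(np.sum(coh[bins] > threshold))
            for chan, coh in stamp_coh.items()}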

In parallel, Elenna had recommended taking a look at a wider set of channels related to jitter noise; as it happens, these channels are part of that set...

Attached figures

1: heat map generated using STAMP-PEM coherence data and Fscan comb lists
2-5: high resolution Fscan coherence spectra for the channels listed, overlaid with Fscan comb lists

Attached data is for July 6.

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 07:58, Wednesday 07 June 2023 (70220)
Ops Day Shift Start

TITLE: 06/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: Locked for 4 hours 20min. Quiet on site otherwise.

H1 General
camilla.compton@LIGO.ORG - posted 07:57, Wednesday 07 June 2023 (70217)
OPS Owl Shift Summary

TITLE: 06/07 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
SHIFT SUMMARY: Locked in Observing for the majority of the shift; the IFO has been locked for the last 4h20m.
LOG:

Lock #1 Lockloss at 09:07UTC 1370164044 after 9h54 in NLN.

Unsure of cause; the lockloss tool tagged "DCPD saturation", but violins were low, there were no PIs (checked sitemap > PI > PI Overview > ! DTT Templates > 10.4kHz), and this wasn't a PRCL 11 Hz ring-up. This lockloss rang up the violins.

Relocking

Green arms locked well with good beatnotes (-4 dB COMM and -8 dB DIFF). PRMI didn't lock, so I moved PR3 yaw from 154.6 to 154.1 (direction suggested by TJ in 70176; he moved 153.5 to 154.6 due to a temperature excursion). This move dropped the COMM beatnote from -4 to -6 dB but locked PRMI. Attached are the PR3 sliders and oplevs today and over the last 2 months (y-cursors at current values). All automatic apart from the PR3 touch.

Had to wait in OMC_WHITENING for ~20 minutes for the violins to damp. 

Lock #2 NLN and Observing at 10:38UTC.

Attempted to speed up EY01 violin damping, but neither increasing nor turning off the gain worked; reverted to nominal settings. Slow and steady; see attached plot.

Images attached to this report
H1 General
camilla.compton@LIGO.ORG - posted 03:57, Wednesday 07 June 2023 (70218)
OPS Owl Mid-shift Summary

STATE of H1: Observing at 133Mpc (Been in NLN for 20 minutes)

Lockloss at 09:07UTC 1370164044 after 9h54 in NLN.
Unsure of cause; the lockloss tool tagged "DCPD saturation", but there were no PIs (checked sitemap > PI > PI Overview > ! DTT Templates > 10.4kHz) and this wasn't a PRCL 11 Hz ring-up. This lockloss rang up the violins.
I moved PR3 yaw from 154.6 to 154.1 (direction suggested by TJ in 70176).
Back to NLN and Observing at 10:38UTC. Had to wait in OMC_WHITENING for ~20 minutes for the violins to damp. 
H1 General
camilla.compton@LIGO.ORG - posted 00:03, Wednesday 07 June 2023 (70216)
OPS Owl Shift Start

TITLE: 06/07 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 9mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY: Been in NLN for 7h50m. TJ notes I may need to move PR3 on a relock (70176).

H1 General
ryan.crouch@LIGO.ORG - posted 23:57, Tuesday 06 June 2023 (70208)
OPS Tuesday Eve shift summary

TITLE: 06/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 135Mpc
SHIFT SUMMARY:

Lock#1:

TJ was reacquiring lock at MAX_POWER when I arrived; we made it to NLN at 23:11 UTC. I had to restart the DARM FOM after it crashed twice this evening.

In observing at 23:29UTC, out at 00:31UTC for some SQZ work. Back in observing at 01:19UTC

EX saturations at 23:21UTC, 00:35UTC, 01:14UTC, 01:54UTC

LOG:                                                                                                                                                                                                                                                                                                   

Start Time | System | Name  | Location  | Lazer_Haz | Task                       | Time End
23:09      | VAC    | Janos | MidX/EndX | N         | Turn off pump and put away | 23:29
00:31      | SQZ    | Vicky | CR        | N         | SQZ work                   | 01:18
H1 General
ryan.crouch@LIGO.ORG - posted 20:03, Tuesday 06 June 2023 (70214)
OPS Tuesday Eve mid shift update

STATE of H1: Observing at 131Mpc

Images attached to this report
H1 ISC (DetChar)
victoriaa.xu@LIGO.ORG - posted 19:46, Tuesday 06 June 2023 (70215)
Some things about the recent excess 120Hz jitter noise in DARM

There's been a very noticeable excess of jitter noise in DARM, especially the broad peak around 120 Hz (pointed out by many, including today in 70206). The broad 120 Hz noise peak, plus some extra lines and noise in that band, seems clearly to be jitter.

Gabriele and Andrew L. have seen the peaks show up as early as May 24, and appearing clearly in every lock since 5/27. Could this be related to ISCT1 on-table alignments the week of 5/24-5/27? That's something I've seen in alogs from that time period. If so, is there an alignment that recovers the previous scenario, when this jitter was better? Some events that week:
1) ISCT1 alignments done Tuesday 5/24 (69897), the 120 Hz then briefly appears (are we sure it was not in any locks before ISCT1 work?),
2) then there weren't many locks for the week,
3) the HAM ISI issues were resolved briefly starting Friday 5/26 (69934) with on-table alignment re-done one last time ~5/26 23:30 UTC (69960),
4) the 120 Hz jitter is visible in locks ever since

Andrew Lundgren sees that this 120 Hz is coherent with jitter witnesses, H1:IMC-WFS_A_DC_PIT/YAW, and that there is both more coupling and more jitter. Jenne's cleaning 70054 sees this jitter clearly, but it does not clean all of it away.
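One way to reproduce this kind of check with gwpy (a hedged sketch; the exact channel names, times, and the need to match sample rates are assumptions):

from gwpy.timeseries import TimeSeries

start, end = '2023-06-06 12:00:00', '2023-06-06 12:10:00'  # any locked stretch
strain  = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
witness = TimeSeries.get('H1:IMC-WFS_A_DC_PIT_OUT_DQ', start, end)
# Downsample strain to the witness rate before computing coherence.
coh = strain.resample(witness.sample_rate).coherence(witness, fftlength=16)
print(coh.value_at(120))  # coherence near the 120 Hz jitter peak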

H1 SUS
ryan.crouch@LIGO.ORG - posted 19:40, Tuesday 06 June 2023 (70213)
Inlock charge measurement - FAMIS

FAMIS 25069

The inlock charge measurements again caused a lockloss (alog 70170) while swapping from ITMX to ETMX, but the measurement was able to successfully complete on all 4 quads.

Images attached to this report
H1 SQZ
victoriaa.xu@LIGO.ORG - posted 19:28, Tuesday 06 June 2023 (70212)
Notes from briefly trying some different gen sqz levels today

Jenne, Vicky. We took ~1 hour for sqz commissioning, from hours ~1-2 of lock.

The overall change is an increase in the generated SQZ level from 12 dB (60 uW, last lock) to 14 dB starting this lock (75 uW). I don't think sqz is solely responsible for the lower range this evening (possibly some extra no-sqz noise). And it feels like trends in the SQZ BLRMS are dominated by drifts of the technical noise and SRCL detuning over thermalization, so there isn't a clear correlation of more sqz = more range or better BLRMS. So while I do think more sqz is better, I think right now this test was probably limited by the elevated and drifting technical noise in the IFO, so I backed off the squeezing. Maybe with a thermalized IFO with stable technical noise, we'd see clearer trends of total noise vs. injected squeezing. Some notes from today:

Images attached to this report
H1 DetChar
ryan.crouch@LIGO.ORG - posted 16:36, Tuesday 06 June 2023 (70206)
OPS Tuesday Eve shift update

We had Janos down at MidX/EndX from 23:10 UTC to 23:23 UTC turning off and putting away a pump. We reacquired NLN around this time (23:11) and saw some features on DARM whose exact cause we weren't sure of, such as a line at 105 Hz. Tagging DetChar.

H1 CDS
david.barker@LIGO.ORG - posted 16:34, Tuesday 06 June 2023 - last comment - 16:37, Tuesday 06 June 2023(70205)
CDS Maintenance Summary: Tuesday 6th June 2023

WP11226 Mount spare NAT router

Jonathan:

The spare router was rack-mounted in the MSR.

WP11245 TW0 raw minute trend offloading

Dave:

I have started the copy of the past 8 months' raw minute trend from h1daqtw0's SSD raid to h1daqframes-0 SATABOY for permanent archival. The copy is expected to take several days.

nds0 DAQD was restarted to serve these files from their temporary location while the copy is ongoing. Other than nds0, there were no DAQ restarts today.

Comments related to this report
david.barker@LIGO.ORG - 16:37, Tuesday 06 June 2023 (70207)

Tue06Jun2023
LOC TIME HOSTNAME     MODEL/REBOOT
11:58:22 h1daqnds0    [DAQ]

LHO VE (PEM, VE)
gerardo.moreno@LIGO.ORG - posted 16:27, Tuesday 06 June 2023 - last comment - 16:57, Tuesday 06 June 2023(70200)
Vacuum Rack 1Y16 Door is ON

(Jordan V., Betsy W., Gerardo M.)

The front door for the rack was successfully installed today. We could not assess the noise mitigation because we finished at the same time the fire alarm started, so it was very hard to hear the net effect of the door installation.

A fiber chassis installed in the very top slot was obstructing the door path; it protruded a bit and prevented the door from closing. The brackets for this fiber chassis were removed, and the chassis was gently pushed back enough to allow the door to close. This will be revisited next week to determine whether we can reinstall the removed brackets in a different configuration.

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 16:37, Tuesday 06 June 2023 (70203)CDS, VE

Note: While at the rack, I noticed an "error screen" on the vacuum controls display (Beckhoff computer); see the attached photo for more information regarding the error. I did not do anything with the screen; after taking a photo, I slowly walked away from the rack, then notified Patrick.

Images attached to this comment
patrick.thomas@LIGO.ORG - 16:57, Tuesday 06 June 2023 (70210)
From a quick internet search it may be automatic updates running for Visual Studio. We should schedule a time to log in and try to disable them.
LHO VE (CDS, SEI, VE)
janos.csizmazia@LIGO.ORG - posted 08:38, Tuesday 06 June 2023 - last comment - 11:47, Wednesday 07 June 2023(70178)
CP8 (EX) dewar jacket pumping
I have started pumping exactly at 8:31:00.
Comments related to this report
janos.csizmazia@LIGO.ORG - 09:06, Tuesday 06 June 2023 (70180)
Here is the stackup. The pressure was 27 microns at the start of the pumping.
Images attached to this comment
janos.csizmazia@LIGO.ORG - 13:00, Tuesday 06 June 2023 (70189)
As discussed with Robert, I generated some noise at EX, then switched the pump on/off a few times. The specifics:
Stomping (I used a steel bar; you should see ~2 Hz noise):
- Start: 12:44:00
- Stop: 12:44:10
- Start: 12:44:40
- Stop: 12:44:50

Pump on/off sequences:
- Off: 12:46:00
- On: 12:47:00
- Off: 12:48:00
- On: 12:49:00
- Off: 12:50:00
- On: 12:51:00

The pump is now running; the pressure was 14 microns, which means that if we can run it until Thursday, that would be sufficient. If there is a lockloss, I will switch the pump off.
janos.csizmazia@LIGO.ORG - 16:33, Tuesday 06 June 2023 (70204)
The pumping was finished at 16:23, as the data analysis hadn't been finished yet.
robert.schofield@LIGO.ORG - 09:48, Wednesday 07 June 2023 (70225)

I had a look at EX accelerometers and, while I see the pounding noise, I do not see the pump turn on and off. The accelerometers are more sensitive than DARM to vibration, so I think that it is fine to pump with this pump isolation setup while we are in observing mode.
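For reference, this kind of check can be done with a gwpy sketch like the one below (hedged: the accelerometer channel name is a guess, and the times are the 12:46/12:47 PDT off/on intervals above converted to UTC):

from gwpy.timeseries import TimeSeries

chan = 'H1:PEM-EX_ACC_BSC9_ETMX_X_DQ'  # hypothetical EX accelerometer channel
off = TimeSeries.get(chan, '2023-06-06 19:46:00', '2023-06-06 19:47:00')
on  = TimeSeries.get(chan, '2023-06-06 19:47:00', '2023-06-06 19:48:00')
# A pump line would appear as a peak in the on/off spectral ratio.
ratio = on.asd(fftlength=8) / off.asd(fftlength=8)
print(ratio.max(), ratio.frequencies[ratio.argmax()])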

jeffrey.kissel@LIGO.ORG - 11:47, Wednesday 07 June 2023 (70237)DetChar, ISC, OpsInfo, PEM, SYS
Tagging a few systems-level groups who should be made aware of Robert's conclusion that it's OK to run these pumps.
(Not resisting, indeed supporting Robert's conclusion by making sure other people see it and have a reference to point to when it's brought up by the vacuum team.)
H1 ISC
jenne.driggers@LIGO.ORG - posted 19:13, Monday 05 June 2023 - last comment - 17:59, Tuesday 06 June 2023(70160)
Increased PRCL2 gain further

Since Brina found in alog 70153 that we have still been having locklosses due to too-low PRCL gain, I asked RyanS to take us out of Observing for a few minutes (ended up being ~10 mins) so that I could measure the PRCL open loop gain.  It looked like the PRCL gain was a bit low, compared to the latest reference from May 31st.  I increased the PRCL2 gain from 1.5 to 1.7, and saved that value in the Observe SDF. 

In the first screenshot, I show the SDF.

In the second attached screenshot, you can see the reference from May 31st ("Prev reference"), the somewhat low-ish PRCL gain "As found, 4 hrs locked", and the higher PRCL OLG after I increased the PRCL2 gain "current measurement". 

I'm hopeful that this helps us stay locked a little longer.  We may need some more spot measurements of the PRCL gain.  I have not yet made any changes to guardian, and I don't think I can switch us to the safe.snap SDF file, so I suspect that next lock, this value will come back as 1.5.  We should probably measure the PRCL OLG early in a lock, but then modify this gain wherever it was set to 1.5, to now be 1.7.  And we should figure out why it needed to be increased!

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 07:37, Tuesday 06 June 2023 (70173)

Ran the template in lsc/h1/templates/PRCL/PRCL/PRCL_OLG_NOISE_FULL_LOCK_NLN.xml and saved results in camilla.compton/Documents (see attached).

This was with H1:LSC-PRCL2_GAIN at 1.7 and after 23 minutes at NLN. The cursor is at 41.5 Hz: gain 1, phase 21 deg. Unsure what the references are.

Images attached to this comment
jenne.driggers@LIGO.ORG - 07:54, Tuesday 06 June 2023 (70174)

Thanks!  I think this means that we can (just barely) justify putting a PRCL2 gain of 1.7 into guardian, rather than the 1.5 we'd been using. I'll do that when I get out of my morning set of meetings.

jenne.driggers@LIGO.ORG - 09:52, Tuesday 06 June 2023 (70181)

I've just done this in the ISC_LOCK guardian, so next lock it'll automatically be set to a gain of 1.7.  I'll watch this as we come up from maintenance in a few hours.

jenne.driggers@LIGO.ORG - 17:05, Tuesday 06 June 2023 (70209)

Since we had not quite gone into Observe yet, I took a quick PRCL OLG measurement, 15 mins after we got to NomLowNoise.  The PRCL UGF is quite high at 50 Hz, shown in red in the attachment (pink is the same old reference as the red trace in Camilla's plot above).  I could see some gain peaking in the DRMI DTT on the wall, but that relaxed quite quickly, which is consistent with Camilla's plot 23 mins into a lock looking like a much more sensible loop with less gain peaking.  Since the goal UGF is around 30 Hz, a better thing to do will be to modify the thermalization guardian to increase the PRCL gain a little bit when we're first locked, but then much more than it currently is later in the lock, and put PRCL2 gain back to its nominal value of 1. 

Images attached to this comment
jenne.driggers@LIGO.ORG - 17:59, Tuesday 06 June 2023 (70211)

I think I have a candidate new 'equation' for the PRCL gain thermalization. In the attached plot, the overall digital PRCL gain (that is, PRCL1 gain times PRCL2 gain) is plotted versus minutes into the lock, for the first 6 hours during which the thermalization guardian is changing things. After the first 6 hours, it just holds whatever gain was in there.

Blue is what the thermalization guardian was set up to do, with a variable in the thermalization called GAIN_SCALE set to 3.0, and PRCL2 gain set to 1.

Orange is what we've been running with for the last several weeks, with PRCL2 gain set to 1.5. 

Green is what we've been running the last few locks, with PRCL2 gain set to 1.7.

The three circle markers are estimates of what the gain ought to be to keep the UGF at 30 Hz, from measurements that Camilla and I have taken.

The pink trace is a candidate thermalization gain equation that would replace the blue one, and we'd use with PRCL2 gain set back to 1.  For this, the only change to the thermalization guardian is that the GAIN_SCALE would be set to 5.55 (rather than the current value of 3.0), and the PRCL2 gain would be set back to 1.
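In rough pseudocode, the relationship between these traces is something like the sketch below (assuming the guardian simply scales a stored base ramp by GAIN_SCALE and multiplies by the static PRCL2 gain; the base ramp here is a placeholder, not the real guardian schedule):

import numpy as np

def total_prcl_gain(t_min, gain_scale, prcl2_gain, base):
    """Total digital PRCL gain = PRCL1(t) * PRCL2, with PRCL1 = gain_scale * base(t)."""
    return gain_scale * base(np.asarray(t_min, dtype=float)) * prcl2_gain

# Placeholder base ramp: rises over the first 6 hours, then holds.
base = lambda t: 1.0 + np.clip(t / 360.0, 0.0, 1.0)

t = np.linspace(0, 360, 13)                   # minutes into lock
blue   = total_prcl_gain(t, 3.00, 1.0, base)  # original guardian setup
orange = total_prcl_gain(t, 3.00, 1.5, base)  # last several weeks
green  = total_prcl_gain(t, 3.00, 1.7, base)  # last few locks
pink   = total_prcl_gain(t, 5.55, 1.0, base)  # candidate: GAIN_SCALE=5.55, PRCL2 back to 1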

I'll make this change next time the IFO is unlocked and I'm around.

Images attached to this comment