Reports until 00:00, Friday 07 July 2023
H1 General
ryan.crouch@LIGO.ORG - posted 00:00, Friday 07 July 2023 (71131)
OPS Friday Owl shift start

TITLE: 07/07 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:

H1 DetChar
kamiel.janssens@LIGO.ORG - posted 23:53, Thursday 06 July 2023 (71130)
Broadband jitter noise (100-200Hz)

When using the Carleton linefinder tool (which Daniel Nykamp is further developing) to follow up some lines for stochastic searches, I came across loud, broadband coherence between H1:IMC-F_OUT_DQ and CAL-DELTAL_EXTERNAL; see figure 1 (produced by FScan, provided by the linefinder tool).

Andrew Lundgren confirmed the same coherence is also visible with GDS-CALIB_STRAIN_CLEAN, and the noise source was identified as jitter; see: https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=361570
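For illustration, a coherence check of this kind can be sketched with scipy in place of the FScan/linefinder tooling; everything below (sample rate, synthetic "jitter" series, band edges) is an assumption for the sketch, not real H1 data.

```python
# Illustrative coherence estimate between two channels that share a common
# broadband "jitter" component; the data here is synthetic, not real H1 data.
import numpy as np
from scipy.signal import coherence

fs = 16384                                    # assumed sample rate
rng = np.random.default_rng(0)
jitter = rng.standard_normal(fs * 8)          # common broadband component
imc_f = jitter + 0.5 * rng.standard_normal(fs * 8)
deltal = jitter + 0.5 * rng.standard_normal(fs * 8)

f, coh = coherence(imc_f, deltal, fs=fs, nperseg=fs)
band = (f >= 100) & (f <= 200)
mean_coh = coh[band].mean()                   # high where the jitter dominates
```

With the chosen signal-to-noise ratio the expected coherence is about 1/(1.25^2) ≈ 0.64, so a broadband common component shows up clearly across the whole band.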

 

Images attached to this report
H1 General (Lockloss)
camilla.compton@LIGO.ORG - posted 21:21, Thursday 06 July 2023 - last comment - 14:33, Thursday 20 July 2023(71126)
Lockloss @ 04:01UTC

Lockloss after 46h25 in NLN. 1372737707.

No obvious cause; DCPD saturation tag. DARM was the first loop to change, see attached plot.

NOISE_CLEAN reloaded as requested in 71124.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 23:25, Thursday 06 July 2023 (71129)ISC, SUS

Back to NLN and Observing at 06:24UTC.

This lockloss has again rung up the violins, 71063. 1h25m in OMC_WHITENING. ITMY was slowest to damp.

The 500Hz violins were low before the lockloss; the 1000Hz and higher harmonics are still a little elevated. DARM at 03:50UTC attached.
The OMC channels H1:FEC-8_ADC_OVERFLOW_0_{12,13} were still overflowing; see the attached ndscope and compare the zero overflows in the "-8 days ago" lock to today's ~500/second. Is this expected?

The last lock we had with low violins was the 42hr lock after Tuesday maintenance, 06/27 21UTC to 06/29 15UTC. Searching for things that changed during that time:

  • During Tuesday maintenance: We changed OM2 TSAMs 70849 but no arm alignment changes seen 70886, ISS gain was increased 70895, OMC single bounce measurement was taken 70866
  • During the long lock: Jenne adjusted the LSC input matrix 70919
  • During 06/29 lock acquisition: We had issues with ALSX PLL 70951, 70959 and the revCav alignment was tweaked 70944.
Images attached to this comment
rahul.kumar@LIGO.ORG - 07:44, Friday 07 July 2023 (71135)SUS

All the modes (both 500Hz and kHz) have been damping down nicely in the last 6 hrs or so (EY20 is still higher than expected and we don't have a setting for it - have tried finding one a few times but it needs more effort).

camilla.compton@LIGO.ORG - 14:33, Thursday 20 July 2023 (71560)

Daniel and Sheila note that the H1:FEC-8_ADC_OVERFLOW_0_{12,13} channels date from before the ADC was updated, so the real OMC channels are not overflowing; this is just a relic. It can still be used to see when the OMC DCPDs are closer to the saturating level, which could equally be seen using H1:OMC-DCPD_{A,B}_WINDOW_{MIN,MAX}.
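As an aside, the brace shorthand above expands to four channel names; a tiny hypothetical helper like this (using itertools.product) is handy when scripting nds/ndscope queries over such channel families.

```python
# Expand a brace-style channel pattern such as
# H1:OMC-DCPD_{A,B}_WINDOW_{MIN,MAX} into explicit channel names.
import itertools

def expand(fmt, *option_sets):
    """Fill fmt's {} slots with every combination from option_sets."""
    return [fmt.format(*opts) for opts in itertools.product(*option_sets)]

channels = expand('H1:OMC-DCPD_{}_WINDOW_{}', ('A', 'B'), ('MIN', 'MAX'))
# channels[0] is 'H1:OMC-DCPD_A_WINDOW_MIN'
```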

We checked that OMC gain settings were not changed on the 29th June.

H1 General
camilla.compton@LIGO.ORG - posted 20:04, Thursday 06 July 2023 (71125)
OPS Eve Mid-shift Summary

STATE of H1: Observing at 143Mpc
IFO has been locked for 45h25, a new O4 record!

We went out of Observing 00:04 to 00:11UTC to restart the CW hardware injection code 71122 and for a TOO (target of opportunity) edit to NOISE_CLEAN 71124.

H1 ISC
jenne.driggers@LIGO.ORG - posted 17:27, Thursday 06 July 2023 (71124)
Made a change to NOISE_CLEAN, reverted before going back to Observe

Since Dave needed us to go out of Observe for a few minutes to clear some memory so the CW hardware injections wouldn't stop overnight, I took the opportunity to change the *way* in which the NonSENS noise estimate is turned off. However, I then realized that I wasn't 100% sure that it would work, so I backed out my changes. I didn't, though, get the guardian code reverted and reloaded before we went back to Observing, so I have asked the operators (currently Camilla and Oli) to reload the NOISE_CLEAN guardian next time we're out of Observing.


I had been thinking about whether there was a nice way to save the output of the NonSENS noise estimate without actually sending a non-zero signal to the calibration pipeline, since the noise subtraction is currently turned off (and has been since we lowered our laser power). I had thought about doing this by turning off the output of the filter bank while keeping the gain non-zero. But then I realized that I'm not 100% sure where NOISE_EST_DBL_DQ is saved from, and whether doing that would have made NOISE_EST_DBL_DQ non-zero, which would effectively turn the subtraction on (which I don't want). Since I was trying to be speedy, I just backed out my change, and will spend some more time thinking about how I want to do this.

However, since I had set the NOISE_CLEAN guardian to "turn off" the noise estimate in the way I wasn't sure would work, and had loaded it, we now need to reload the old version of the NOISE_CLEAN guardian. The reverted code is ready, but Dave was done and we went back to Observe before I could load it. If we do not reload, then next time we lock the operator will see SDF diffs on the OAF model. Please do not accept those OAF SDF diffs.

If we get to NomLowNoise and there is an SDF diff of OAF-NOISE_WHITENING_GAIN = 1, then reload the NOISE_CLEAN guardian, request it to DOWN, then request it to SUBTRACTING_NOISE. This should leave the output switch ON and the GAIN = 0.
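For illustration only, that end state (output switch ON, gain zero, so the estimate is recorded but nothing is subtracted) might be expressed as below. This is NOT the real NOISE_CLEAN code: the dict stands in for the Guardian ezca EPICS interface, and every channel name other than OAF-NOISE_WHITENING_GAIN (quoted above) is a made-up placeholder.

```python
# Hypothetical sketch of the intended safe configuration; the switch
# channel name is an assumption, not a real EPICS record.
def subtracting_noise_safe(ezca):
    ezca['H1:OAF-NOISE_WHITENING_SW_OUTPUT'] = 1  # output switch ON (assumed name)
    ezca['H1:OAF-NOISE_WHITENING_GAIN'] = 0       # gain 0: no subtraction applied
    return ezca

state = subtracting_noise_safe({})
```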

 

H1 CDS
david.barker@LIGO.ORG - posted 17:24, Thursday 06 July 2023 - last comment - 17:26, Thursday 06 July 2023(71122)
Brief time out of observe to restart CW hardware injection code

Camilla, Dave:

WP11290. To free up memory on h1hwinj1 we restarted its psinject process. This in turn took H1 out of observe due to gain ramping of the INJ_CW filter module.

Before the restart h1hwinj1 had 2GB memory free, which would have lasted only until 4am PDT tomorrow (Friday). After the restart it now has 14GB available, which should take us through to late Sunday. A second restart will be needed over the weekend, and the memory leak will be fixed next Tuesday.
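As a rough consistency check of those numbers (times approximate; my own back-of-envelope arithmetic, not from the work permit):

```python
# 2GB free at the ~17:10 PDT Thursday restart would have run out by
# ~04:00 Friday, i.e. roughly 11 hours, giving the leak rate below.
free_before_gb = 2.0
hours_to_exhaustion = 11.0
leak_rate_gb_per_hr = free_before_gb / hours_to_exhaustion   # ~0.18 GB/hr

# At that rate, 14GB free buys about 77 hours from Thursday evening,
# which indeed lands late Sunday.
free_after_gb = 14.0
hours_remaining = free_after_gb / leak_rate_gb_per_hr
```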

Comments related to this report
david.barker@LIGO.ORG - 17:26, Thursday 06 July 2023 (71123)

17:04 PDT Out of observe, code restart

17:11 PDT Back in observe

H1 CDS
david.barker@LIGO.ORG - posted 16:16, Thursday 06 July 2023 (71120)
DTS environment IOC stopped, delayed reaction to network outage

At 15:13 the IOC which serves the DTS environment EPICS channels stopped running, causing the EDC to go red with a disconnect count of 10. This was a delayed reaction to the loss of network connection between CDS and the GC network in the DTS room of the H2 building.

The timeline is:

13:13 PDT network connection went down, all DTS EPICS channels froze at their last values

14:00 PDT Network was restored, but IOC still had frozen channels

15:13 PDT The cdsioc0 systemd process which maintains the SSH tunnel restarted, establishing a good tunnel but causing the IOC to crash in the process

15:58 PDT I restarted the IOC, all the channels became available and the EDC went GREEN

H1 General
oli.patane@LIGO.ORG - posted 16:15, Thursday 06 July 2023 (71119)
OPS EVE Shift Start
TITLE: 07/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 8mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.12 μm/s 
QUICK SUMMARY:

Locked for 41:40 and Observing.
Planning to come out of Observing around 5pm to restart CW injection system.
H1 General
anthony.sanchez@LIGO.ORG - posted 16:09, Thursday 06 July 2023 (71118)
Thursday Ops Day Shift End

TITLE: 07/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 11mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:
There was an internet outage, which was alogged here: alog 71112; it was quickly resolved.
There was an issue with H1:CDS-DTS_ENV_TEMPERATURE_DEGF that was caused by the outage and found later. Dave has since fixed this.

Current IFO Status:
Locked and Observing for 41 hours.
A target-of-opportunity restart of the CW injection system, which will drop us out of Observing, is scheduled for 5pm tonight.

Start | System | Name | Location | Laser Haz | Task | End
14:53 | FAC | Karen | MY | - | Technical cleaning | 16:10
14:54 | FAC | Kim | MX | - | Technical cleaning | 16:10
16:23 | FAC | Randy | EX & EY | N | Getting measurements for brackets | 17:08
16:38 | VAC | Janos | End X | N | Checking VAC system at EX | 16:59
17:50 | Tour | Cassidy & Co | Ctrl Rm | N | 2 tour groups coming into the control room | 19:20
18:17 | FAC | Mitch | EX & EY | N | FAMIS HEPI pump checks | 20:17
23:01 | CDS | Dave B | H2 | N | Restart H1 EDC connections | 01:01
H1 General
mitchell.robinson@LIGO.ORG - posted 14:26, Thursday 06 July 2023 (71115)
Hepi fluid level check
Location | Level (inches) | Difference from last reported levels | Drip Pans | Leaks
CS | 5 9/16 | | Clean | No unaddressed puddles
EX | 7 3/8 | +1/16 | Clean | No unaddressed puddles
EY | 8 3/8 | -1/16 | Clean | No unaddressed puddles
H1 General
mitchell.robinson@LIGO.ORG - posted 14:22, Thursday 06 July 2023 (71114)
Monthly Dust Monitor Vacuum Pump Check

EY is running smoothly, holding at -19 and temps are within range.

EX is running smoothly, holding at -22. Jim and I were unable to adjust to -19. Temps were within range.

The corner pump sounds louder than it should. Holding at -19. The temp on the pump housing is running at 170F. I have shut it down for today; I will cycle it back on tomorrow and monitor the temp. This pump was rebuilt a couple of weeks ago.

H1 General (CDS, OpsInfo)
anthony.sanchez@LIGO.ORG - posted 13:37, Thursday 06 July 2023 - last comment - 16:43, Thursday 06 July 2023(71112)
Thursday Ops Day shift mid day update

15:23 UTC CAMERA_SERVO Guardian took us out of Observing again; after a minute it brought us back.
15:24 UTC Back to Observing.

CDS & PHONE OUTAGE:
20:24 UTC There is currently a CDS internet connection issue; the CDS connection to the outside world is down. Jonathan is working on rebooting the router.

Comments related to this report
anthony.sanchez@LIGO.ORG - 14:04, Thursday 06 July 2023 (71113)

LHO Control room is back on the internet.
Jonathan and Nyath power cycled the GC switch in the MSR, and that resolved the issue.

naoki.aritomi@LIGO.ORG - 16:43, Thursday 06 July 2023 (71121)

As shown in the attached figure, the ETMX camera (PIT2 and YAW2) froze for 4s and the camera guardian went to the WAIT FOR CAMERA state, coming back in 1 min. The ETMX camera freeze also happened last Wednesday in alog 70933.
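A freeze like this (several seconds of literally unchanged samples) can be spotted offline with a simple run-length check. This is a hypothetical sketch on synthetic data, not the camera guardian's actual logic, and the 16 Hz rate is an assumption.

```python
# Find stretches where a readback is exactly constant for >= min_seconds.
import numpy as np

def frozen_spans(x, fs, min_seconds=4.0):
    """Return (start, stop) sample indices of constant runs of x
    lasting at least min_seconds at sample rate fs."""
    x = np.asarray(x, dtype=float)
    same = np.diff(x) == 0          # True where consecutive samples match
    spans, start = [], None
    for i, s in enumerate(same):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if (i - start + 1) >= min_seconds * fs:
                spans.append((start, i + 1))
            start = None
    if start is not None and (len(same) - start + 1) >= min_seconds * fs:
        spans.append((start, len(x)))
    return spans

# Synthetic example: 5 s of frozen readback embedded in noise at 16 Hz.
fs = 16
rng = np.random.default_rng(1)
x = np.concatenate([rng.standard_normal(64),
                    np.full(5 * fs, 3.7),      # the "frozen" stretch
                    rng.standard_normal(64)])
```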

Images attached to this comment
H1 SEI
jim.warner@LIGO.ORG - posted 12:57, Thursday 06 July 2023 (71111)
July Wind Fence Inspection

Inspected both wind fences; both seem in good shape. Also got a good reminder to be aware of wild animals: Mitch and I came within 20 feet of 2 bull elk that were hiding from the sun on the north side of the EX building. No pictures of them, sadly.

Images attached to this report
H1 DetChar (DetChar)
ansel.neunzert@LIGO.ORG - posted 11:06, Thursday 06 July 2023 (71109)
Odd harmonics of an 11.904 Hz comb present in both L1 and H1

Details posted on the LLO logbook under the same subject line, just linking here: https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=66036

H1 DetChar (DetChar)
ansel.neunzert@LIGO.ORG - posted 10:45, Thursday 06 July 2023 - last comment - 09:12, Wednesday 26 July 2023(71108)
1.6611 Hz comb re-appeared June 27

Perhaps unsurprisingly given its previous history, the strong 1.6611 Hz comb that disappeared (alog 69791) in late May has resurfaced. It shows up clearly in Fscans; I did some additional digging and it looks like the first traces appear on June 27th in the 12:00-14:00 UTC range. This corresponds in time with some of the work described in alog 70849, but OM2 heater changes don't account for the previous disappearance of the comb; Sheila confirms that the heater wasn't on earlier in May. So it's still not clear what's going on.

Comments related to this report
ansel.neunzert@LIGO.ORG - 14:31, Friday 07 July 2023 (71144)

Update: it's coherent with H1_PEM-EX_VMON_ETMX_ESDPOWER48_DQ and H1_PEM-EX_VMON_ETMX_ESDPOWER18_DQ, and *not* with CS or EY VMON channels.

(Last time we tried to hunt this comb down, I think we didn't have high resolution coherence plots generated to high enough frequencies for these channels.)

Plots attached. The gray dots are harmonics of a separate 99.9989 Hz comb.
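The gray-dot bookkeeping amounts to checking whether a peak sits on a harmonic of a known comb spacing; a minimal hypothetical helper (the tolerance value is an assumption):

```python
# Flag peaks that lie on harmonics of one comb so they can be grayed
# out while studying another.
def is_harmonic(freq, spacing, tol=0.01):
    """True if freq is within tol Hz of a positive integer multiple of spacing."""
    n = round(freq / spacing)
    return n >= 1 and abs(freq - n * spacing) <= tol

peaks = [3 * 1.6611, 2 * 99.9989, 100 * 1.6611]
gray = [is_harmonic(p, 99.9989) for p in peaks]   # only the middle peak matches
```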

Images attached to this comment
evan.goetz@LIGO.ORG - 12:34, Wednesday 19 July 2023 (71507)DetChar
It looks like the behaviour of this comb changed again on July 13, shifting slightly in frequency, before then disappearing again on July 14. It is as yet unclear what caused the changes.

The attached weekly average Fscan from July 12 - 19 shows these changes, especially around 280 Hz.
Images attached to this comment
evan.goetz@LIGO.ORG - 09:12, Wednesday 26 July 2023 (71726)
This comb seems to have reappeared between 7:30 and 9:00 UTC on July 19, 2023. Hopefully this time range can point to something that specifically changed. See the attached daily Fscan image.
Images attached to this comment
H1 ISC (GRD, OpsInfo)
thomas.shaffer@LIGO.ORG - posted 15:14, Wednesday 05 July 2023 - last comment - 21:39, Thursday 06 July 2023(71078)
Added wrapper to DRMI/PRMI states that use getdata

Over the weekend we ran into a few cases (alog 71043, alog 71026, alog 71008) where we tried to get data via the cdsutils getdata function in an ISC_LOCK guardian state and it returned nothing. This caused an error in ISC_LOCK, fixed by simply reloading the node, since the function just had to try again to get the data. This is not a new thing, but it's definitely another reminder that we have to be prepared for different outcomes anytime we request data.

Some months ago, with Jonathan's help, I made a function wrapper that can be used to handle hung data grabs. While not the issue we saw over the weekend, it's still a good idea to use this whenever we grab data in a Guardian node. The file is (userapps)/sys/h1/guardian/timeout_utils.py and there is either a decorator (@timeout) or a wrapper function (call_with_timeout) that can be used.
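For reference, a wrapper along these lines could be sketched as below. This is a guess at the general shape of such a helper, not the actual contents of timeout_utils.py; the default timeout and function names are assumptions.

```python
# Hypothetical sketch of a call_with_timeout/@timeout pair: run a call in a
# worker thread and give up (returning None) if it hangs past the timeout.
import concurrent.futures
import functools

def call_with_timeout(func, *args, timeout=10, **kwargs):
    """Call func(*args, **kwargs); return None if it takes longer than timeout s."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(func, *args, **kwargs)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        return None  # caller must be prepared to handle a None result
    finally:
        pool.shutdown(wait=False)  # don't block on the hung worker

def timeout(seconds=10):
    """Decorator form of the same guard."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return call_with_timeout(func, *args, timeout=seconds, **kwargs)
        return wrapper
    return decorator
```

Note the trade-off of the thread-based approach: the hung call is abandoned, not killed, so it may keep a worker thread alive in the background.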

For the specific issue we saw over the weekend, a solution is to do a simple check that the data is actually there before trying to do anything with it (i.e. if data:). Using this situation as an example:

# This wrapper should handle hung nds data grabs
popdata_prmi = call_with_timeout(cdu.getdata, 'LSC-POPAIR_B_RF90_I_ERR_DQ', -60)

# This conditional handles None data returned
if popdata_prmi.data:
    if popdata_prmi.data.max() < 20:
        log('no POPAIR RF90 flashes above 20, going to CHECK MICH FRINGES')
        return 'CHECK_MICH_FRINGES'
    else:
        self.timer['PRMI_POPAIR_check'] = 60
Comments related to this report
thomas.shaffer@LIGO.ORG - 15:30, Wednesday 05 July 2023 (71079)

I should have added that this fix was loaded into ISC_LOCK by Tony during commissioning today and is ready for our next relock.

camilla.compton@LIGO.ORG - 21:39, Thursday 06 July 2023 (71127)OpsInfo

This threw the attached error at 2023-07-07 04:14UTC. I edited the ISC_LOCK PRMI and DRMI checkers from 'if popdata_prmi.data:' to 'if popdata_prmi:'.

This seemed to work, but I'm not sure it will cover every case. If this goes into error again, I suggest the operator start by reloading ISC_LOCK; if necessary, the "elif self.timer['PRMI_POPAIR_check']" block of code can be commented out. Tagging OpsInfo.

After this edit and a reload, the checker seems to work well, logging that there were no RF18 flashes above 120 (true) and moving to PRMI locking before the old 5-minute 'try_PRMI' timer finished.

Images attached to this comment