H1 CDS
david.barker@LIGO.ORG - posted 10:32, Saturday 02 December 2023 - last comment - 12:06, Saturday 02 December 2023(74539)
ETMX Hardware Watchdog Tripped at 07:23 this morning

We are investigating why the ETMX HWWD saw high SUS RMS for 20 minutes which caused it to trip, whereas all the other HWWDs saw less than 1 minute of high RMS. It would appear that ETMX's high RMS stopped within one minute of the time the HWWD powered down the ETMX ISI coil drivers.

Comments related to this report
david.barker@LIGO.ORG - 11:06, Saturday 02 December 2023 (74540)

Jim, Fil, Tony, Dave:

Tony spoke with Jim on the phone and I spoke with Fil; we have a procedure to untrip the ETMX HWWD:

We are keeping the EX SWWD tripped at this point (no ISI or HEPI DAC drives)

Tony will go to EX with a CDS laptop, communicating via teamspeak

Tony to verify initial state: HWWD is tripped, and ISI coil drivers are powered down

Tony will untrip the HWWD via its front panel button, verify the coil drivers spring back to life. If any need a reset, they will be reset.

Wait for several minutes to see if the SUS ETMX RMS rings up

Untrip the SWWD so DAC drive is restored, wait several minutes to see if RMS rings up

Tony heads back to the control room.

If at any point the RMS rings up, Tony will power down the ISI coil drivers via their back panel power switch, one at a time to see if we can identify which unit(s) is doing this. At this point Jim and/or Fil will need to go to the site.

 

david.barker@LIGO.ORG - 11:37, Saturday 02 December 2023 (74541)

Timeline of ETMX watchdogs (all times local)

07:04:20 HWWD and SWWD detect high SUS motion RMS and start their countdowns

07:09:22 SWWD h1iopsusex 1st countdown completes, starts h1iopseiex countdown

07:14:21 SWWD h1iopseiex countdown completes, all ISI and HEPI DAC drives are zeroed. RMS continues

07:24:01 HWWD 20 minute countdown completes, all three ISI Coil Driver Chassis are POWERED DOWN

07:24:31 RMS motion of SUS ETMX below trip point for both SWWD and HWWD.
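A minimal Python sketch of the staged trip logic implied by this timeline (this is not the real HWWD firmware or SWWD front-end code; the stage durations are read off the timestamps above, the trip level from the OSEM trend below, and the function name is illustrative):

TRIP_LEVEL_MV  = 110.0    # top-mass OSEM RMS trip level (mV), see the OSEM trend below
SWWD_SUS_STAGE = 5 * 60   # h1iopsusex countdown (~5 minutes, from the timestamps above)
SWWD_SEI_STAGE = 5 * 60   # h1iopseiex countdown (~5 minutes)
HWWD_COUNTDOWN = 20 * 60  # hardware watchdog: 20 minutes of sustained high RMS

def watchdog_actions(rms_mv, seconds_above_trip):
    """Sketch of which watchdog stages would have fired after the RMS has
    been above the trip level for the given number of seconds."""
    if rms_mv < TRIP_LEVEL_MV:
        return []  # countdowns only run while the RMS stays above the trip level
    actions = []
    if seconds_above_trip >= SWWD_SUS_STAGE:
        actions.append("SWWD: h1iopsusex countdown complete, start h1iopseiex countdown")
    if seconds_above_trip >= SWWD_SUS_STAGE + SWWD_SEI_STAGE:
        actions.append("SWWD: zero all ISI and HEPI DAC drives")
    if seconds_above_trip >= HWWD_COUNTDOWN:
        actions.append("HWWD: power down the three ISI coil driver chassis")
    return actions

For example, watchdog_actions(150.0, 20 * 60) returns all three stages, matching the 07:24:01 entry above.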

 

Images attached to this comment
david.barker@LIGO.ORG - 11:39, Saturday 02 December 2023 (74542)

11:27 Tony untripped the ETMX HWWD, all came back with no problems.

After 2 minutes with no ring-up, I untripped the SUS and SEI SWWD, no problems.

Tony started both ISI and HEPI drives, again no problems

After 5 minutes Tony left EX and is heading back to the control room.

david.barker@LIGO.ORG - 11:50, Saturday 02 December 2023 (74543)

Here is a trend of SUS ETMX top OSEM RMS channels from 07:00 to 07:30.

All OSEMs ring up and exceed their 110 mV trip level at 07:04:20. The F1, F2, F3 OSEMs keep the trip active throughout, while the others (LF, RT, SD) ring back down. The trip+20-minute time mark is shown, at which time the ISI coil drivers are powered down. The RMS rings down to below the trip level over the next 34 seconds.
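For reference, a minimal gwpy sketch (assuming NDS data access) of reproducing this kind of check; the RMS channel names follow the usual H1:SUS-ETMX_M0 watchdog naming but are assumptions here, so substitute the channels actually plotted:

from gwpy.timeseries import TimeSeriesDict

osems = ["F1", "F2", "F3", "LF", "RT", "SD"]
chans = [f"H1:SUS-ETMX_M0_WD_OSEMAC_{o}_RMSMON" for o in osems]  # assumed channel names
data = TimeSeriesDict.get(chans, "2023-12-02 15:00", "2023-12-02 15:30")  # 07:00-07:30 PST

TRIP_MV = 110.0
for name, ts in data.items():
    frac = (ts.value > TRIP_MV).mean()
    print(f"{name}: {100 * frac:.1f}% of samples above the {TRIP_MV} mV trip level")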

Images attached to this comment
anthony.sanchez@LIGO.ORG - 12:06, Saturday 02 December 2023 (74544)

While trying to figure out exactly what was wrong and the proper order for resetting all of these systems, I spotted a ring-up happening. We have a before and an after of the DCPDs.

Images attached to this comment
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 08:45, Saturday 02 December 2023 (74538)
Gauge for Ion Pump Y2-8 Not Reporting

Just like the gauge at X2-8, the gauge at Y2-8 is not reporting a pressure as of 1 hour ago, due to the lack of sunlight needed to charge its batteries via solar panels. No action is needed at this time.

Images attached to this report
H1 General (SEI)
anthony.sanchez@LIGO.ORG - posted 08:43, Saturday 02 December 2023 (74537)
Saturday OPS Day Shift Start

TITLE: 12/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Owl Canceled.
CURRENT ENVIRONMENT:
    SEI_ENV state: LARGE_EQ
    Wind: 15mph Gusts, 9mph 5min avg
    Primary useism: 5.06 μm/s
    Secondary useism: 0.66 μm/s
QUICK SUMMARY:

When I walked in, H1 was being saturated by a 7.6 magnitude earthquake whose peak is above the top of our earthquake FOM scale, along with its gang of aftershocks of magnitude 6.4 down to ~5.
I tried resetting the watchdogs for the ISIs and HEPIs, but the HEPI tripped again.
ISC_LOCK was in INITIAL_ALIGNMENT; I took it to IDLE, which stopped Verbals from letting us know that the "IFO OUT" saturation limits were being hit.

Current DIAG_MAIN messages:
ESD_DRIVER: ESD X driver OFF
IOP_DACKILL_MON: iopsusex (87) IOP DACKILL Tripped
IOP_DACKILL_MON: iopsusex (90) IOP DACKILL Tripped
SEI_IOP_WD: SEI IOP ['ETMX'] coilmon drop out
SEI_STATE: ['ETMX'] is not nominal

I will begin resolving these issues.
 

Images attached to this report
H1 General (CDS, DetChar, OpsInfo, SEI, SQZ, SUS)
corey.gray@LIGO.ORG - posted 01:29, Saturday 02 December 2023 - last comment - 11:05, Tuesday 05 December 2023(74536)
H1 Back to Observing After Storms, Drifts, & Clipping

It's not pretty, but H1 is back in OBSERVING; range started at 143 Mpc (but it's been nosediving over the last 15 min to 100 Mpc).  Surprised to see violin modes looking fairly normal (even after being out of lock for a while and with the high useism).

SDF Diffs (see attach #1):  Accepted diffs for SUS: ITMx, ITMy, & BS, as well as SEI: ETMx Sensor Correction

Also needed to restart the nuc30 ALIGO DARM dtt session because it Timed Out.

Microseism is still pretty high, solidly on the 95th percentile mark.

Wireless Access Point in the LVEA was ON, so I turned it OFF.  I left the WAP in the MSR ON.

DARM (see attach #2) is elevated from 10-70 Hz (the reason for the low range) and also broadly at high frequency. Since this looked squeezer-ish, I checked the squeezer and saw that the SQZ MANAGER had a notification: "SQZ ASC AS42 not on?? Please RESET_SQZ_ASC".  When I saw this, I was just about to post something in CHAT, but saw that Naoki was already on it; he posted in CHAT asking if I could take H1 out of OBSERVING for him to reset SQZ_ASC, so I did, and a few minutes later I took H1 back to OBSERVING.  Now our range should look better...it's already back above 140 Mpc!  :)

Taking GRD-IFO to AUTO & H1-Manager to LOW NOISE.

Still see DARM higher from about 10-55Hz.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 11:05, Tuesday 05 December 2023 (74605)
If Naoki hadn't been available, the Troubleshooting SQZ wiki (Justin's > OPs wiki > Troubleshooting SQZ) shows there are instructions for the message "SQZ ASC AS42 not on??" in alog 71083.
H1 AOS
corey.gray@LIGO.ORG - posted 00:21, Saturday 02 December 2023 (74532)
Fri Ops EVE Summary

TITLE: 12/02 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: N/A
SHIFT SUMMARY:

See mid-shift summary for beginning of the shift. 

For the 2nd half of the shift we ended up moving away from alignments and locking, with Sheila focusing on the PRMI lock---namely why POP18 & 90 were so low in power.  Toward the end of the shift Sheila noticed how POP18 had been slowly drifting down in power for PRMI_ASC.  So she started walking PR2 + PR3 (along with a little PRM) in yaw, and she was able to increase the PRMI powers.

Remember this is with several Optics with new Offsets!  

I continued walking PR2 + PR3 (along with a little PRM) in pitch (& also revisiting yaw), but I don't know if I improved the powers much more than what Sheila did....maybe a little.  After going in circles walking optics for PRMI w/ NO ASC (POP18 was touching 80 counts), went to PRMI_ASC---this took POP18 up to 90+!  (at the beginning of the shift it was down around 50).

Let ISC LOCK continue, and DRMI locked all on its own.

I will take ISC_LOCK to NOMINAL LOW NOISE and see how it goes.  I imagine there could be some possible violin mode issues, but we'll see.

I should say, more walking of PRMs could possibly improve PRMI's POP18, but basically we are back to PRM powers we had back around Nov17 (NOTE:  on Nov10, POP18 was up to almost 100).

H1 currently at MOVE_SPOTS.

LOG:

H1 ISC
sheila.dwyer@LIGO.ORG - posted 23:05, Friday 01 December 2023 (74535)
PRMI alignment search, suspect clipping in PRC

Corey, TJ, Sheila

Summary: We've had some clipping in the PRC, which has been getting worse over the last couple of weeks.  This could easily be the cause of our bad noise and locking difficulties.  We had some success unclipping it by moving yaw, but we aren't yet back to the POP18 build-ups we had in mid November. Sensors for looking at PR3 drifts are confusing.

Corey was struggling with PRMI locking this evening.  When I logged in he and others had been working through initial alignment steps, and Corey was able to lock PRMI but with low buildups.  Corey and I spent some time looking at some history of POP18 build ups when PRMI is locked over the last several weeks:

As you can see, the POP 18 build up has been slowly decreasing, even in times when we were locking. We then ran PRMI ASC (REFL 9 to PRM, AS 45 to BS) while we moved PR2, and saw that this did have an impact on the build ups, which it shouldn't if we aren't clipping (see first attachment for pitch move of PR2).  This pitch move made the build ups more stable, but we couldn't improve the build up with PR2 pitch alone.  We saw similar behavior with yaw, and moved PR2 yaw in the negative direction.  The second attached screenshot shows a walk of moving both PR3 and PR2 in the negative yaw direction, which did increase the POP 18 build up to about 80 counts.  This was done with ASC off.

At this time we had trouble keeping PRMI locked for a little bit, it looked like the "mitosis" glitches that Ibrahim saw Wed which we believe were caused by the BS oplev (BS optical lever damping runs while PRMI ASC runs, we tried to turn it off but that was unstable).  We tried a few more rounds of yaw walking with the ASC off, but we saw POP18 drift up to 86.  I think the next step would be to try to walk PR3 and PR2 in pitch to get back to the POP18 build ups close to 100 that we had a few weeks ago, Corey is trying that now.  We probably would be able to lock the IFO with the current value of 86, if all else fails (but if we are still clipping in the PRC that could be causing bad noise).

What's the drift?

The third attachment here is a trend of several sensors of PR3 alignment over the last few weeks; they do not agree with each other.  The optical lever would indicate that PR3 has drifted in the positive yaw direction, while the top mass osems would indicate that it has drifted in the negative direction.  When I moved PR3, both sensors agreed that the sign of the move was negative yaw, so there is not a sign error in one of them.  This reminds me a lot of 70008, and the incident that inspired that alog, where the ISI was drifting so the suspension osems didn't show the drift.  TJ points out this alog about the PR3 sensor calibrations: 70197

 

 

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 20:06, Friday 01 December 2023 (74534)
Mid-Shift Status

After the Suspension Offset changes from the end of the day shift, ran a Manual Initial Alignment (except for Input Align).  H1 made it to Acquire DRMI, but flashes did not look great.  Went with the LSC setting change RyanS used a few nights ago (alog 74487), and this allowed for short locks.  Tried to touch it up with SRM, but did not have luck keeping it locked long enough to let ASC take over.

Revisited Input_Align, since we skipped this earlier.  And it completed, however...

Returned to locking, but the alignment looked horrible for PRMI.  

Returning to Manual Alignment and skipping Input Align.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:33, Friday 01 December 2023 (74531)
FCT Augmentation Update, Installation of More O-ring Valves

(Jordan V., Gerardo M.)
Late entry.
Last Tuesday Jordan and I installed 4 o-ring valves on the following crosses, FC-C2, FC-C3, FC-C4 and FC-C5.
All o-ring valves were installed on the +X 2.75" CF port for all crosses, all valves were leak tested and all new joints passed.

 

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 16:24, Friday 01 December 2023 (74530)
Fri Ops EVE Transition

TITLE: 12/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.47 μm/s
QUICK SUMMARY:

Austin just exited the LVEA after finishing BS oplev investigations.

Walked in to manual alignment by Tony & TJ, but they are currently troubleshooting the SR optics.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:23, Friday 01 December 2023 - last comment - 17:36, Friday 01 December 2023(74526)
Friday Ops day Shift Dec 1st

TITLE: 12/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
LOG:

 

Microseism is still pretty high early in the morning.
Ran an Initial Alignment which completed successfully.

Paused ISC_LOCK on Acquire_DRMI_1F
Because DRMI was in a PRMI - DRMI loop

Changed the following channels to try to replicate Ryan's technique (a pyepics sketch of this kind of change is given below):
H1:LSC-PRCL2_GAIN ->1.2 -1.5
H1:LSC-SRCL2_GAIN ->1.4 -1.5 just trying different settings.

H1:LSC-MICH_TRIG_THRESH_ON -> 25
H1:LSC-SRCL_TRIG_THRESH_ON -> 25
Moved SRM, SR2, and the beamsplitter.
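A minimal pyepics sketch of making this kind of change from a script (on site this is normally done through MEDM or Guardian/ezca); the target values below are my reading of the "old new" notation above and should be treated as illustrative:

from epics import caget, caput

changes = {
    "H1:LSC-PRCL2_GAIN": 1.5,              # logged above as "1.2 -1.5", read as old -> new
    "H1:LSC-SRCL2_GAIN": 1.5,              # logged above as "1.4 -1.5"
    "H1:LSC-MICH_TRIG_THRESH_ON": 25,
    "H1:LSC-SRCL_TRIG_THRESH_ON": 25,
}

for pv, value in changes.items():
    old = caget(pv)
    caput(pv, value, wait=True)            # wait for the write to be confirmed
    print(f"{pv}: {old} -> {caget(pv)}")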
 


Eventually ISC_LOCK being paused caused an issue with ALS due to X arm getting unlocked while TJ and Austin changed the power on OPLEV out by HAM4.

I then took ISC_LOCK back to execute. This caused an immediate lockloss.
While relocking we made changes to ISC_LOCK.py to increase the time that we wait before going to Acquire_PRMI.
Lines 1480 and 1481 were changed to:
self.timer['try_PRMI'] = 3600 # Was 600, reduce to 300 24Oct2019 JCD
self.timer['DRMI_POPAIR_check'] = 3600
This is a temporary change to try to lock while the microseism is elevated.

Pushed around SRM, SR2 and the BS trying to get DRMI locked.
Eventually I took ISC_LOCK back to initial alignment.
That completed and we got back to ACQUIRE_DRMI_1F, and I increased the gains again, but it still would not lock.

Checked the Whitening filters for changes over the last week that would affect Acquire_DRMI_1F.
I have changed the gains:
H1:LSC-PRCL2_GAIN ->1.2 -1.8
H1:LSC-SRCL2_GAIN ->1.4 -1.8 just trying different settings.

None of this has worked out well.


Trended all the quads and noticed that they have all changed in their vertical axis with the outside air temperature. Please see comment 74533.
(Attached: ITMY vertical motion plots.)

Changed H1:SUS-ITMY_M0_TEST_V_OFFSET to 90,000
and H1:SUS-ITMX_M0_TEST_V_OFFSET to 80,000
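These are TEST filter-module offsets, so the offset switch has to be enabled before the OFFSET value does anything. A minimal sketch, assuming Guardian's ezca is available from a control-room workstation:

from ezca import Ezca

ezca = Ezca(ifo="H1")                                  # prefix all channel names with H1:

# Enable the TEST offset in each filter module, then write the offset value.
ezca.switch("SUS-ITMY_M0_TEST_V", "OFFSET", "ON")
ezca["SUS-ITMY_M0_TEST_V_OFFSET"] = 90000

ezca.switch("SUS-ITMX_M0_TEST_V", "OFFSET", "ON")
ezca["SUS-ITMX_M0_TEST_V_OFFSET"] = 80000

Changes made this way show up as SDF differences that have to be accepted or reverted, as noted in comment 74533 below.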

Ran dither align script.
And starting Initial Alignment now.
While initial alignment was running SRC_ALIGN, ISI HAM4 tripped, likely due to the BS oplev re-adjustments.
We are having problems aligning SRM.


Start  System  Name                  Location      Laser Haz  Task                               End
17:06  FAC     Chris                 Site          n          Clear snow w/ John Deere tractor   18:36
18:18  OPS     TJ and Austin         LVEA          N          Adjusting Oplev power              18:48
19:02  OPS     TJ & Austin           LVEA HAM4     N          Adjusting BS OPLEV                 19:27
19:39  FAC     Kim                   Laundry room  N          Swifting laundry room              19:49
21:15  OPS     Austin                LVEA HAM4     N          Readjusting BS OPLEV               21:40
21:22  SUS     Ryan Crouch & Austin  Optics Lab    N          Magnet testing                     22:22
23:21  OPlev   Austin                LVEA HAM4     N          Readjusting OPLEV                  23:51

 

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 17:36, Friday 01 December 2023 (74533)

Today, while trying to troubleshoot the DRMI locking issues we have been having, TJ, Oli, and I found that ALL of the suspensions have moved, and that the movement is likely coupled to the outside air temperature, not just the QUADs.
Temperature channels used can be found by SITEMAP->PEM->Weather Summary.


The vertical motion channels we used are found by opening the MEDM screen for each optic and clicking H1:SUS-ETMX_M0_DAMP_V_INMON (and its equivalents for the other optics), as shown in this screenshot.

Using those channels we were able to make the plots attached to this alog. I mentioned this in the alog from my shift, but I felt like it needed more context.
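A minimal gwpy sketch (assuming NDS data access) of the kind of trend shown in the attached plots; the outside-temperature channel name is an assumption, so take the real one from the PEM Weather Summary screen, and the minute-trend suffix just keeps the fetch small:

from gwpy.plot import Plot
from gwpy.timeseries import TimeSeries

start, end = "2023-11-24", "2023-12-01"
vert = TimeSeries.get("H1:SUS-ETMX_M0_DAMP_V_INMON.mean,m-trend", start, end)
temp = TimeSeries.get("H1:PEM-CS_TEMP_OUTSIDE_DEGF.mean,m-trend", start, end)  # assumed name

plot = Plot(vert, temp, separate=True, sharex=True)   # one panel per channel, shared time axis
plot.savefig("etmx_vertical_vs_outside_temp.png")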

We showed those plots to Sheila and she asked us to move ITMY M0 to -87.6 and ITMY R0 V to -11.3.
And ITMX R0 to 77, and M0 to -38.

For this I changed: 
H1:SUS-ITMX_M0_TEST_V_SW1=ON
H1:SUS-ITMX_M0_TEST_V_OFFSET = 80000

H1:SUS-ETMX_M0_TEST_V_SW1 = ON
H1:SUS-ITMX_R0_TEST_V_OFFSET = 85700

And Oli Changed
H1:SUS-ITMX_R0_TEST_V_OFFSET = 85700
H1:SUS-ITMY_R0_TEST_V_OFFSET = 88000

We then saved the SDFs and started an initial alignment.
But the initial alignment did not go well, due to the HAM4 ISI tripping and beamsplitter alignment issues.
Later on in the day we decided to make the same change to the beamsplitter.
H1:SUS-BS_M1_TEST_V_OFFSET = 19000
BS has no reaction chain and thus no reaction chain offset is needed.
I did this twice because we forgot that SDF revert would undo all the changes.
Saved these to SDF (SDF1 and SDF2).
We then started an Initial Alignment which is still ongoing.
 

Images attached to this comment
H1 SUS
austin.jennings@LIGO.ORG - posted 16:07, Friday 01 December 2023 (74529)
BS Oplev Power Adjustment

In an attempt to troubleshoot the BS oplev glitching, it was noted by Jason that a good way to help with this problem solving was to try and maximize the oplev laser power (if it wasn't maxed already). I went into the LVEA to try and adjust/maximize the laser power on the oplev box to see if the sum counts would go up, but they did not. After this adjustment, Jason noticed that the oplev sum channel on detchar was very noisy at this point (screenshot attached). He recommended lowering the power a bit, so I went back in to do so, but noticed upon doing this that the sum counts started dropping very rapidly. Not sure what's going on, but Jason and I are both thinking that we might need to swap the oplev laser so we can open up the current one and look into it. Ultimately, we decided to readjust the power output back to the second adjustment where the noise was prevalent on detchar (as the sum counts were consistent at this power) to get us through the weekend.

Images attached to this report
H1 SUS
ryan.crouch@LIGO.ORG - posted 12:22, Friday 01 December 2023 (74522)
SUS rubbing script

I think for my previous SUS rubbing check, alog 74493, I used a bad time while the SUSs were still ringing down from a lockloss, so I ran it a few more times checking different times against the same reference (a sketch of the comparison is given after the list below).

1385436898 (12/01 03:34 UTC): The only thing of note here is that MC1 has a broadband noise increase.
1385284902 (11/29 09:21UTC): The ITMs have a 1 Hz peak but only in L and P, MC1 has the same broadband noise increase, and OM3 has a ~1.5Hz peak.
1385222616 (11/28 16:03UTC): The ITMs have the same 1Hz features and OM3 has the same features around ~1.5Hz. MC1 did not have the noise increase here.
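For anyone without the notebook, a minimal sketch of the kind of comparison the rubbing script makes: the ASD of a top-mass witness channel at a check time divided by the ASD at a quiet reference time. The channel and the reference GPS time here are illustrative assumptions; the check time is the first one listed above.

from gwpy.timeseries import TimeSeries

CHAN = "H1:SUS-MC1_M1_DAMP_L_IN1_DQ"   # assumed witness channel
REF_GPS = 1384700000                    # assumed quiet reference time (IFO in DOWN)
CHECK_GPS = 1385436898                  # 12/01 03:34 UTC, from the list above
DUR = 600                               # 10 minutes of data

ref = TimeSeries.get(CHAN, REF_GPS, REF_GPS + DUR).asd(fftlength=64, overlap=32)
chk = TimeSeries.get(CHAN, CHECK_GPS, CHECK_GPS + DUR).asd(fftlength=64, overlap=32)

ratio = chk / ref                       # > 1 where the check time is noisier
plot = ratio.plot()
ax = plot.gca()
ax.set_yscale("log")
ax.set_ylabel("ASD ratio (check / reference)")
plot.savefig("mc1_rubbing_check.png")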
Images attached to this report
Non-image files attached to this report
H1 SUS
ryan.crouch@LIGO.ORG - posted 09:44, Friday 01 December 2023 - last comment - 14:00, Friday 01 December 2023(74517)
BS OPLEV glitching

The BS OPLEV is glitching mostly in PIT (a few glitches in YAW, but nowhere near as many). This doesn't seem to be new behavior; it's been going on for at least the past year, judging from the ndscope.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 14:00, Friday 01 December 2023 (74525)

This may not have been a great trend; the glitches seen may just be the kicks from losing lock.

H1 ISC
camilla.compton@LIGO.ORG - posted 16:10, Thursday 30 November 2023 - last comment - 14:56, Friday 01 December 2023(74502)
ASC Loops Comparing 3 times, Good Darm, Fuzzy Darm and Bad Low Freq Darm

ASC loops at three times are compared below in DTT, all 0-232 Hz, 0.05 Hz BW with 50 averages and 45% overlap, so ~8 minutes of data used. Good DARM: 2023/11/23 6:45 UTC; Fuzzy DARM: 2023/11/23 7:15 UTC; Bad Low Freq DARM: 2023/11/30 7:45 UTC (useism high), range 140 MPc.
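A minimal gwpy sketch (assuming NDS data access) of reproducing those DTT settings outside DTT: 0.05 Hz BW corresponds to 20 s FFTs, 45% overlap to 9 s, and 50 averages to the ~8-9 minutes of data quoted above. The CHARD channel name follows the usual ASC convention but is an assumption here.

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

CHAN = "H1:ASC-CHARD_P_OUT_DQ"          # assumed ASC loop channel
FFTLEN, OVERLAP = 20, 9                 # 0.05 Hz BW, 45% overlap
DUR = FFTLEN + 49 * (FFTLEN - OVERLAP)  # ~50 averages

start = to_gps("2023-11-23 06:45")      # "good DARM" time from above (UTC)
data = TimeSeries.get(CHAN, start, start + DUR)
asd = data.asd(fftlength=FFTLEN, overlap=OVERLAP, method="welch")  # mean-averaged, like DTT

plot = asd.plot()
plot.gca().set_xlim(0.05, 232)
plot.savefig("chard_p_good_darm.png")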

Reused Jenne's 64760 rubbing script for these loops with 10 minutes of data. PDFs attached. Comparing to the 2023/11/23 6:45 UTC good DARM time:

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:05, Friday 01 December 2023 (74523)SUS

Because I had just picked a time with a glitch, I compared CHARD in the lock before Tuesday maintenance (11/28 08:58 UTC) and the lock after (11/29 06:37 UTC). You can clearly see the 1.1 and 3.4 Hz peaks increase; see plot.

Interestingly, there is a peak at 10.8 Hz that wasn't there on the 23rd before or after the DARM fuzziness started, but was there on 11/28 before maintenance started; see zoomed plot.

Rahul didn't see anything strange on the quads in 74503, but I think that measurement is taken with damping off. Tagging SUS.

Images attached to this comment
camilla.compton@LIGO.ORG - 13:54, Friday 01 December 2023 (74524)SUS

Sheila noted that the 3.3Hz peak isn't visible from the top mass (won't show in Ryan's 74522). Looking at the Quad L2 witness channels when we are unlocked, there is no difference before and after Tuesday maintenance, no features above 3Hz, plot attached.

When we are locked, the 1.1 and 3.4 Hz features are visible in all quads in the lock after Tuesday maintenance, but not in the lock before; plot attached.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 14:56, Friday 01 December 2023 (74528)

This is a plot that shows the same times as Camilla's above, plotting the R0 L2DAMP IN signals, which are the same signals as L2 WIT.  This plot suggests that the extra motion at 1.1 and 3.3 Hz is worse for ITMY than ITMX.  (This is pretty much the same as Camilla's plot, but with the ITMs plotted on top of each other.)

Images attached to this comment
H1 SUS (ISC, OpsInfo)
ryan.crouch@LIGO.ORG - posted 15:55, Thursday 30 November 2023 - last comment - 11:20, Friday 01 December 2023(74493)
Ran Jenne's sus rubbing ipynb

I ran Jenne's rubbing script using a reference time of us being in DOWN on the 22nd, before the fuzziness started on the 23rd around 07:00 UTC, and a check time of 10:00 UTC on the 23rd while also in DOWN. I didn't really see anything noteworthy, just some noise increase in the 0.1 Hz area (microseism)?

I also ran using the same reference time against a time this morning, and there were a couple of suspensions that looked interesting: there were some spikes in the 1.0 - 10 Hz band for PR2, PR3, PRM, SR3, SR2, MC3, MC1, and OM3. In the 0.0 - 1.0 Hz band the quads saw an increase in noise, specifically the ITMs, TMSs, and ETMY. SRM and SR2 have some large spikes just under 1 Hz. The ITMs' temperature has slightly increased, ~0.25 - 0.3 degrees F, but these changes are less than 0.5 degrees so it's probably not a worry.

 

Later in the day I ran the script a 3rd time using a time from yesterday (13:54UTC 11/29) with the same reference from the 22nd and most of the features I saw are no longer there, except for ITMX. Maybe the time I chose earlier was just a bad time?

Images attached to this report
Non-image files attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 13:40, Thursday 30 November 2023 (74500)SEI

I checked the ISI motion on the ITMs and ETMs using the BLEND GS13 channels. I also took spectra of SRM and SR2. They all pretty much look as they did a month ago, except SR2's low frequency noise is higher, 0.1 Hz and below.

Images attached to this comment
rahul.kumar@LIGO.ORG - 12:43, Thursday 30 November 2023 (74503)SUS

I took raw osem spectra of all four QUADs (the 0.45Hz peak seen over here is a suspension mode - reference shown here) and Beam splitter and they look fine - please see the screenshot attached below. The measurements were taken when the IFO was in DOWN state.

Since we had time (like 5-10 mins), I also took quick transfer function measurements on ITMY - P (F1, F2, F3 BOSEM), T (SD BOSEM), V (LF, RT BOSEM) dof - and they look healthy. ITMY has been restored to its nominal state after the measurements were complete.

Ibrahim is now trying to lock the IFO.

Images attached to this comment
rahul.kumar@LIGO.ORG - 14:17, Thursday 30 November 2023 (74505)SUS

Since Jenne was concerned about SRM, SR2, SR3 and OM3 based on Ryan's plot above, I took raw osem spectra of the same and they look fine to me - see plot attached below. OM3 has a small peak at 2.28 Hz and at 40Hz approximately - which I am currently investigating by going back in time (update - similar features also visible when I run these measurements three weeks back in time).

Images attached to this comment
sheila.dwyer@LIGO.ORG - 10:50, Friday 01 December 2023 (74519)

I used dtt to reproduce Ryan's plot for SR2 using the gps times in his second pdf attachment.  Looking at the DAMP IN signals, this reproduces Ryan's result.  Looking at the individual osem signals (right column plots), it looks like the problem is mostly visible in the LF and RT osems.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 11:20, Friday 01 December 2023 (74520)

Looking at a time series of the time from Ryan's second attachment, that time is just a few seconds after a lockloss.  So the alarming things that we are seeing are just ring-downs of the lockloss transient.

Images attached to this comment
H1 General (ISC)
oli.patane@LIGO.ORG - posted 14:57, Thursday 30 November 2023 - last comment - 11:43, Friday 01 December 2023(74504)
Follow-up on 11/29 AS72 Gain Locklosses

I looked into the two locklosses from November 29th, 01:39 UTC and 04:25 UTC, during the time when the AS72 gain was increased from 12 dB to 21 dB (74457), before it was reverted due to potentially having caused the next two locklosses.

It doesn't seem to me that both locklosses had the same cause, since there seem to be some discrepancies between the two locks as well as between the actual locklosses. Based on looking through the lockloss ndscopes, it looks like the first lockloss was probably caused by the AS72 gain, but I don't think the second one was, or at least it wasn't the only cause. Here's a quick breakdown of some differences:

ASC Loops (ASC-01:39UTC, ASC-04:25UTC)
01:39 Lockloss - Signals look normal up to lockloss
04:25 Lockloss - Many signals are much larger than during the previous lock, and some are increasing in the last ~10s in the INMON channel.

LSC Loops (LSC-01:39UTC, LSC-04:25UTC)
01:39 Lockloss - LSC loops look much different than usual
04:25 Lockloss - Loops look pretty normal

SUS MASTER_OUTs (SUS-bothtimes, SUS-01:39UTC, SUS-04:25UTC)
01:39 Lockloss - MASTER_OUT channel values are more or less stable once in NOMINAL_LOW_NOISE
04:25 Lockloss - During the last ~15 minutes of lock, many MASTER_OUT values double and stay that large up to the lockloss

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 11:43, Friday 01 December 2023 (74521)

Sheila, Oli

I was looking for any overflows from AS72 before the locklosses, and there were none, as seen here (01:39 UTC, 04:25 UTC; zoomed out for the sake of better viewing, but I found the time of lockloss using AS_A_NSUM_OUT while zoomed in). The rise in accumulated overflows doesn't actually happen until after the locklosses, which is normal.

So we can maybe say that the increase in the AS72 gain was not the cause of these two locklosses.

An extra small note about the 04:25UTC lockloss: Regular oscillations can be seen(ndscope) on ASC-AS_A_NSUM_OUT starting -9s until LL and on LSC-PR_GAIN_OUT starting -11s until LL.

Images attached to this comment