H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 16:59, Tuesday 05 November 2024 - last comment - 09:21, Monday 11 November 2024(81084)
Lockloss

Lockloss @ 11/06 00:58UTC

Comments related to this report
oli.patane@LIGO.ORG - 17:09, Tuesday 05 November 2024 (81085)

This lockloss seems to show the AOM driver monitor glitching right before the lockloss (ndscope), similar to what I noticed in the two other locklosses from this past weekend (81037).

Images attached to this comment
oli.patane@LIGO.ORG - 19:51, Tuesday 05 November 2024 (81087)

03:45UTC Observing

camilla.compton@LIGO.ORG - 08:44, Monday 11 November 2024 (81189)Lockloss, PSL

In 81073 (Tuesday 05 November) Jason/Ryan S gave the ISS more range to stop it running out of range and going unstable. But on Tuesday evening, Oli saw this type of lockloss again (81089). Not sure if we've seen it since then.

The channel to check in the ~second before the lockloss is H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ. I added this to the lockloss trends shown by the 'lockloss' command-line tool via /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/psl_fss_imc.yaml
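For reference, a minimal sketch of pulling up that channel around a lockloss (assuming gwpy and NDS access from a control-room workstation; the GPS time below is a placeholder, not a real lockloss time):

    from gwpy.timeseries import TimeSeries

    t_lockloss = 1414000000  # placeholder GPS time of a lockloss
    # Fetch the AOM driver monitor for a few seconds around the lockloss
    mon = TimeSeries.get('H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ',
                         t_lockloss - 5, t_lockloss + 1)
    plot = mon.plot()
    plot.savefig('aom_driver_mon_lockloss.png')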

I checked all the "IMC" tagged locklosses since Wednesday 6th and didn't see any more of these.

jason.oberling@LIGO.ORG - 09:21, Monday 11 November 2024 (81190)

This happened before we fixed the NPRO mode hopping problem, which we did on Wednesday, Nov 6th.  Not seeing any more of these locklosses since then lends credence to our suspicion that the ISS was not responsible for these locklosses; the NPRO mode hopping was. (NPRO mode hops cause the power output from the PMC to drop; the ISS sees this drop and does its job by lowering the diffracted power %; once the diffracted power % hits 0, the ISS unlocks and/or goes unstable.)

H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 16:57, Tuesday 05 November 2024 (81083)
SQZ ASC and SQZ angle servo now reset in RESET_SQZ_ASC

Vicky, Camilla

I've updated instructions in the SQZ wiki: https://cdswiki.ligo-wa.caltech.edu/wiki/Troubleshooting%20SQZ

If the squeezing runs away, you should now be able to take SQZ_MANAGER to RESET_SQZ_ASC_FDS (turns off and clears the ASC, turns off the angle servo, and will also reset the SQZ angle to 190 deg if it's outside the 100-250 range) and then request FREQ_DEP_SQZ (will go through SQZ_ASC_FDS) to turn SQZ ASC and angle control back on.

This is to avoid issues as in 81027. We still might need to think about what should happen to the FC ASC.

H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 16:39, Tuesday 05 November 2024 - last comment - 12:29, Thursday 07 November 2024(81080)
Installation of the AS power monitor in the AS_AIR camera enclosure

I've roughly copied the LLO configuration for the AS power monitor (that won't saturate after lock losses) and installed an additional photodiode in the AS_AIR camera enclosure. PD output goes to H1:PEM-CS_ADC_5_19_2K_OUT_DQ for now.

The GigE used to receive ~40ppm of the power coming into HAM6. I replaced the steering mirror in front of the GigE with a 90:10 splitter; the camera now receives ~36ppm and the beam going to the photodiode is ~4ppm. But I installed an ND1.0 filter in front of the PD, so the actual power on the PD is ~0.4ppm.

See the attached cartoon (1st attachment) and the picture (2nd attachment).

Details:

  1. Replaced the HR mirror with 90:10 splitter (BS1-1064-90-1025-45P), roughly kept the beam position on the camera, and installed a Thorlabs PDA520 (Si detector, ~0.3AW) with OD1.0 absorptive ND filter in the transmission. I set the gain of PDA520 to 0dB (transimpedance=10kOhm).
  2. Reflection of the GigE was hitting the mirror holder of 90:10, so I inserted a beam dump there. The beam dump is not blocking the forward-going beam at all (3rd attachment).
  3. Reflection of the ND filter hits the black side panel of the enclosure. I thought of using a magnetic base, but the enclosure material is non-magnetic. Angling the PD to steer the reflection into the beam dump for the GigE reflection is possible but takes time (the PD is in an inconvenient location for seeing the beam spot).
  4. Fil made a custom cable to route power and signal through the unused DB9 feedthrough on the enclosure.  Pin 1 = Signal, Pin 6 = Signal GND, Pin 4 = +12V, Pin 5 = -12V, Pin 9 = power GND. (However, the power GND and signal GND are connected inside the PD.) All pins are isolated from the chamber/enclosure as the 1/2" metal post is isolated from the PD via a very short (~1/4"?) plastic adapter.
  5. Calibration of this channel (using ASC-AS_C_NSUM) seems to be about 48mW/Ct.
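A rough sanity check of how these numbers hang together (illustrative only; the input power is an arbitrary example, and the responsivity/transimpedance are the nominal PDA520 values quoted in item 1):

    ppm = 1e-6
    frac_to_pd = 4 * ppm        # ~4 ppm of the HAM6 input reaches the PD path
    nd_trans = 10 ** -1.0       # ND1.0 -> roughly 10x attenuation
    responsivity = 0.3          # A/W, nominal PDA520 at 1064 nm
    transimpedance = 10e3       # ohm, PDA520 at 0 dB gain

    P_ham6 = 1.0                                   # W into HAM6 (example only)
    P_pd = P_ham6 * frac_to_pd * nd_trans          # W actually on the photodiode
    V_pd = P_pd * responsivity * transimpedance    # expected PD output voltage
    print(f"{P_pd*1e6:.2f} uW on the PD -> {V_pd*1e3:.1f} mV out")
    # Converting V_pd to ADC counts (and hence checking the ~48 mW/ct figure)
    # would also need the PEM ADC volts-per-count, which isn't quoted here.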
Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 17:11, Tuesday 05 November 2024 (81086)

This is a first look at a lock loss from 60 W. At least the channel didn't saturate, but we might need more headroom (it should rail at 32k counts).

The peak power in this example is something like 670 W. (I cannot figure out for now which AA filter, and possibly decimation filter, is in place for this channel; these might be impacting the shape of the peak.)

Operators, please check if this channel rails after locklosses. If it does I have to change the ND filter.

Also, it would be nice if the lock loss tool automatically triggers a script to integrate the lock loss peak (which is yet to be written).
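A minimal sketch of what such an integration script could look like (assuming gwpy access; this is not the actual lockloss-tool hook, the GPS time is a placeholder, and it uses the ~48 mW/ct calibration quoted above):

    from gwpy.timeseries import TimeSeries

    CAL_W_PER_CT = 0.048      # ~48 mW/ct from the ASC-AS_C_NSUM comparison above
    t0 = 1414000000           # placeholder lockloss GPS time
    mon = TimeSeries.get('H1:PEM-CS_ADC_5_19_2K_OUT_DQ', t0 - 1, t0 + 2)
    power = mon * CAL_W_PER_CT                  # counts -> watts (HAM6-equivalent)
    baseline = power[:int(0.5 * mon.sample_rate.value)].mean()
    energy = ((power - baseline).sum() * mon.dt).value   # crude integral -> joules
    print(f"peak ~{power.max().value:.0f} W, integrated ~{energy:.2f} J")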

Images attached to this comment
ryan.crouch@LIGO.ORG - 09:20, Wednesday 06 November 2024 (81096)ISC, OpsInfo

Tagging Opsinfo

I also checked out the channel during the last 3 high-power locklosses this morning (from NLN, OMC_WHITENING, and MOVE_SPOTS). For the NLN lockloss, it peaked at ~16.3k cts 80ms after the IMC lost lock. Dropping from OMC_WHITENING only saw ~11.5k cts 100ms after ASC lost it. Dropping from MOVE_SPOTS saw a much higher reading (at the railing value?) of ~33k cts, also ~100ms after ASC and IMC lost lock.

Images attached to this comment
ryan.crouch@LIGO.ORG - 11:05, Wednesday 06 November 2024 (81102)

Camilla taking us down at 10W earlier this morning did not rail the new channel; it saw about ~21k cts.

Images attached to this comment
keita.kawabe@LIGO.ORG - 17:34, Wednesday 06 November 2024 (81111)

As for the filtering associated with the 2k DAQ, PEM seems to have a standard ISC AA filter, but the most impactful filter is an 8x decimation filter (16k -> 2k). Erik told me that the same 8x filter is implemented in src/include/drv/daqLib.c at line 187 (bi-quad form) and line 280 (not bi-quad); one is mathematically transformed into the other and vice versa.

In the attached, it takes ~1.3ms for the step response of the decimation filter to reach its first unity point, which is not great but OK for what we're observing, as the lock loss peaks seem to be ~10ms FWHM. For now I'd say it's not unreasonable to use this channel as is.
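For illustration, one way to eyeball that kind of step response offline; this uses scipy's default order-8 Chebyshev decimation design as a stand-in, not the actual daqLib filter coefficients:

    import numpy as np
    from scipy import signal

    fs, q = 16384, 8                        # 16 kHz model rate, decimating by 8x
    b, a = signal.cheby1(8, 0.05, 0.8 / q)  # scipy.signal.decimate's default design
    step = signal.lfilter(b, a, np.ones(fs))
    step /= step[-1]                        # normalize to the settled value
    t_unity = np.argmax(step >= 1.0) / fs   # first time it reaches unity (overshoot)
    print(f"first unity crossing at {t_unity*1e3:.2f} ms")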

Images attached to this comment
keita.kawabe@LIGO.ORG - 12:29, Thursday 07 November 2024 (81112)

I added ND0.6, which will buy us about a factor of 4.

(I'd have used ND2.0 instead of ND1.0 plus ND0.6, but it turns out that Thorlabs ND2.0 is more transparent at 1um relative to 532nm than ND1.0 and ND0.6 are. Looking at their data, ND2.0 seems to transmit ~4 or 5%. ND 1.0 and ND 0.6 are closer to nominal optical density at 1um.)

New calibration for H1:PEM-CS_ADC_5_19_2K_OUT_DQ using ASC-AS_C_NSUM_OUT (after the ND was increased to 1.0+0.6) is ~0.708/4.00 ≈ 0.177 W/count.
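Quick arithmetic check on those numbers (using only the figures quoted above):

    # Adding OD 0.6 cuts the light on the PD by ~4x, so the W/ct calibration grows ~4x
    factor = 10 ** 0.6        # ~3.98
    old_cal = 0.048           # W/ct with ND1.0 alone (earlier comparison above)
    print(factor, old_cal * factor)   # ~0.19 W/ct, consistent with the measured ~0.177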

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 16:36, Tuesday 05 November 2024 (81065)
Tues DAY Ops Summary

TITLE: 11/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

For most of the shift it was a bit windy; at around 2:30pm PT the winds died down for a bit. Microseism continues to drop.

The PSL group (Jason & RyanS) made a temperature change (there was no PSL Room incursion) and this required changes to the ALS & SQZ lasers (by Daniel & Vicky).  H1 made it to NLN just after 3pm PT and work began with the Squeezer (Vicky, Camilla), but then there was a lockloss.

H1's currently on its way back to NLN and the hope is the Squeezer will lock after the squeezer adjustments by Vicky/Camilla.
LOG:

LHO VE (VE)
travis.sadecki@LIGO.ORG - posted 16:16, Tuesday 05 November 2024 - last comment - 16:53, Tuesday 05 November 2024(81078)
PT-154 issues

Jim and Fil alerted me that the MKS 390 gauge on A2 cross of FC-A (between HAM7 and BSC3) was making a clicking noise. While I was elsewhere moving compressors around, Patrick and Fil attempted to reset it via software, but with no luck.  H1 lost lock while I was looking at trends of this gauge, so I went to the LVEA and power cycled the gauge.  After power cycling, the "State" light on the gauge changed to blinking alternating green and amber, where before power cycling it was blinking only green, but the clicking sound is still present, although maybe at a reduced rate (see attached videos).   Looking at trends, the gauge started to get noisy ~2 days ago and got progressively worse through today.  

The vacuum overview MEDM is currently reporting this gauge as "nan".

Images attached to this report
Non-image files attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 16:53, Tuesday 05 November 2024 (81082)
Sorry for the confusion, we didn't attempt to reset it in software, just looked to see if it was reporting errors, which it was (see attached screenshots).

Looking these codes up, it appears to be a hardware error which should be reported to the manufacturer.
Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 16:10, Tuesday 05 November 2024 - last comment - 16:50, Tuesday 05 November 2024(81077)
Ops Eve Shift Start

TITLE: 11/06 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 8mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.25 μm/s
QUICK SUMMARY:

Currently in DRMI and trying to relock.

Comments related to this report
oli.patane@LIGO.ORG - 16:50, Tuesday 05 November 2024 (81081)

11/06 00:50UTC Observing

X1 SUS
oli.patane@LIGO.ORG - posted 16:03, Tuesday 05 November 2024 - last comment - 12:05, Wednesday 13 November 2024(81075)
10/31 BBSS M1 Transfer function measurements

Following the raising of the entire BBSS by 5mm (80931), we took transfer function measurements and compared them to the previous measurements from October 11th (80711), where the M1 blade height was the same but the BBSS had not yet been moved up by the 5mm. They are looking pretty similar to the previous measurements, but there are extra peaks in V2V that line up with the peaks in pitch.

The data for these measurements can be found in:

/ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Data/2024-10-31_2300_tfs/

The results for these measurements can be found in:

/ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Results/2024-10-31_2300_tfs/

The results comparing the October 11th measurements to the October 31st measurements can be found in:

/ligo/svncommon/SusSVN/sus/trunk/BBSS/Common/Data/allbbss_2024-Oct11-Oct31_X1SUSBS_M1/

Non-image files attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 12:05, Wednesday 13 November 2024 (81253)

BBSS status of drift since moving the whole BBSS up at the Top Stage to compensate for the TM blades moving down (alog 80931)

Plots:

BBSSF1DriftPUMDoFsCurrentConfig: 6 day trend of LPY micron counts for the PUM.

  • There seems to be a ~80 micron +YAW movement.
  • Minimal (20 micron) drift in L.
  • No P drift.
  • There is no previous PUM screenshot because the flags were too low in the BOSEMs to be considered accurately sensing (which is one of the main reasons we raised the BBSS from the Top Stages to begin with).

BBSSF1DriftTMDoFsCurrentConfig: 6 day trend of LTVRPY micron counts for the Top Mass.

  • There seems to be a ~80 micron +YAW movement.
  • Minimal (+40 micron) drift in P, amounting to 6.77 micron/day. No other drift.

BBSSF1DriftTMDoFsPreviousConfig: 18 day trend of LTVRPY micron counts for the Top Mass.

  • No drift observed. The 57 micron displacement in P amounts to 3.17 micron/day of drift, which is negligible.

BBSSF1DriftTMBOSEMssPreviousConfig: 18 day trend of BOSEM counts for Top Mass.

  • No drift observed. The 196 ct displacement in F1 amounts to 10.89 ct/day of drift, which is negligible.

BBSSF1DriftTMBOSEMsCurrentConfig: 6 day trend of BOSEM counts for Top Mass.

  • Minimal drift observed in F1, amounting to ~60 ct/day of drift, which is slightly worse than the previous config. In general, this is about 6x more drift than the previous config, but 1/3 the drift of our past, more drifty configs.

First Look Findings:

  • There is a YAW drift in the current config compared with the previous one. This is somewhat understandable, since blade movements at the top stage risk YAW displacement, but why is it drifting rather than just showing a static offset?
  • There is slightly more F1/P drift in the current config than the previous, but I would consider it largely minimal compared to the general amount of drift we've seen in other configurations.
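For reference, the per-day rates quoted above are simply the total displacement divided by the trend length (a trivial check):

    print(57 / 18)    # ~3.2 micron/day P drift, previous config (18-day trend)
    print(196 / 18)   # ~10.9 ct/day F1 BOSEM drift, previous config (18-day trend)
    print(40 / 6)     # ~6.7 micron/day P drift, current config (quoted as 6.77 from the exact trend endpoints)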
Images attached to this comment
H1 PSL (ISC, SQZ)
victoriaa.xu@LIGO.ORG - posted 13:41, Tuesday 05 November 2024 - last comment - 17:00, Tuesday 05 November 2024(81074)
Adjusted SQZ + ALS Y / X laser temperatures

Daniel, Camilla, Vicky

After PSL laser frequency change 81073, we adjusted the SQZ + ALS X + ALS Y laser crystal temperatures again, as before in lho80922.

Again the steps we took:

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 17:00, Tuesday 05 November 2024 (81079)

Squeezer recovery at NLN was complicated after this temp change, but after fixing laser mode hopping and updating OPO temperature, SQZ is working again at NLN.

OPO TEC temperature adjusted from 31.468 C --> 31.290 C, to maximize seed NLG. The change is consistent with last time (lho80926), though it's not obvious that laser freq changes mean the red/green OPO co-resonance should change, but ok.

Squeezer laser was likely modehopping at the new settings above. Symptoms:
   - Screenshot of TTFSS beatnote before / after / afterAfter the PSL laser freq change.
   - PMC_TRANS was 430 (nominal H1:SQZ-PMC_TRANS_DC_POWERMON~675)
   - PMC_REFL was 370  (nominal H1:SQZ-PMC_REFL_DC_POWERMON~44)
   - Bad PMC transmission meant not enough SHG power, and while OPO could lock, the OPO PUMP ISS could not reach the correct pump power. So aside from mode hopping being bad, it would have meant bad squeezing.

Camilla and I went to the floor. SQZ Laser controller changes made to avoid mode hopping:

  • SQZ Laser current 1.935 A --> 1.885 A
  • SQZ Laser crystal temp 27.02 C --> 27.07 C

This adjustment seemed quite sensitive to laser current and crystal temperature, with a smaller tunable range than I expected.

Compared to previous SQZ laser adjustments to avoid mode-hopping (see old laser 51238, 51135, 49500, 49327), this temperature adjustment seems small for the size of the current (amps) adjustment, but it seemed to work.

PMC cavity scans look clean at these new settings (laser looks single-mode), see screenshot. For +/- 200 MHz steps of crystal frequency, still see a 160 MHz beatnote. Later could double-check a PMC cavity scan at +200 MHz just to be sure. PMC cavity scan template added to medm SQZT0 screen (! PMC) and at $(userapps)/sqz/h1/Templates/ndscope/pmc_cavity_scan.yaml.

Images attached to this comment
H1 PSL
jason.oberling@LIGO.ORG - posted 12:58, Tuesday 05 November 2024 - last comment - 16:11, Tuesday 05 November 2024(81073)
PSL Instability Since NPRO Swap

R. Short, J. Oberling

Today we were originally planning to take a look at the TTFSS, but noticed something before going into the enclosure that we think is the cause of some of the PSL instability since the completion of the NPRO swap.  We were looking at some trends from yesterday's ISS OFF test, recreating the plots Ryan C. had made, when we noticed the ISS get angry and enter an unlock/relock cycle that it couldn't get out of.  While it was doing this we saw that the PMC Refl power was up around 30W and "breathing" with PMC Trans; as PMC Refl would increase, PMC Trans would decrease, but the sum of the 2 was unchanged.  We then unlocked the ISS.

As we sat and watched things for a bit after the ISS unlock, we saw that PMC Refl would change by many Watts as it was breathing.  We tried unlocking the IMC; the behavior was unchanged.  We then tried unlocking the FSS RefCav, and the behavior was still unchanged.  It was at this point we noticed that the moves in PMC Refl were matched by very small moves (1-2 mW) in the NPRO output power; as the NPRO output power went up, PMC Refl went down, and vice versa.  This looked a whole lot like the NPRO was mode hopping, with 2 or more modes competing against each other.  This would explain the instability we've seen recently.

The ISS PDs are located after the PMC, so any change in PMC output (like from competing modes) would get a response from the ISS.  So if PMC Refl would go up, as we had been seeing, and PMC Trans drops in response, also as we had been seeing, the ISS would interpret this as a drop in power and reduce the ISS AOM diffraction in response.  When the ISS ran out of range on the AOM diffraction it would become unstable and unlock; it would then try to lock again, see it "needed" to provide more power, run out of range on the AOM, and unlock.  While this was happening the FSS RefCav TPD would get very noisy and drop, as power out of the PMC was dropping.  You can see it doing this in the final 3 plots from the ISS OFF test.  To give the ISS more range we increased the power bank for the ISS by moving the ISS Offset slider from 3.3 to 4.1; this moved the default diffraction % from ~3% to ~4%.  We also increased the ISS RefSignal so when locked the ISS is diffracting ~6% instead of ~2.5%.

To try to fix this we unlocked the PMC and moved the NPRO crystal temperature, via the FSS MEDM screen, to a different RefCav resonance to see if the mode hopping behavior improved.  We went up 2 RefCav resonances and things looked OK, so we locked the FSS here and then the ISS.  After a couple minutes PMC Refl began running away higher and the ISS moved the AOM lower in response until it ran out of range and unlocked.  With the increased range on the diffracted power it survived a couple of these excursions, but then PMC Refl increased to the point where the ISS emptied its power bank again and unlocked.  So we tried moving down by a RefCav resonance (so 1 resonance away from our starting NPRO crystal temperature instead of 2) and it was the same: things looked good, we locked the stabilization systems, and after a couple minutes PMC Refl would run away again (again the ISS survived a couple of smaller excursions, but then a large one would empty the bank and unlock it again).  So we decided to try going up a 3rd resonance on the RefCav.  Again, things looked stable, so we locked the FSS and ISS here and let it sit for a while.

After almost an hour we saw no runaway on PMC Refl, so we're going to try operating at this higher NPRO crystal temperature for a bit and see how things go (we went 2 hours with the ISS OFF yesterday and saw nothing, so this comes and goes at potentially longer time scales).  The NPRO crystal temperature is now 25.2 °C (it started at 24.6 °C), and the temperature control reading on the FSS MEDM screen is now around 0.5 (it was at -0.17 when we started).  We have left the ISS Offset at 4.1, and have moved the RefSignal so it is diffracting ~4%; this is to give the ISS a little more range since we've been seeing it move a little more with this new NPRO (should the crystal temperature change solve the PMC Refl runaway issue we could probably revert this change, but I want to keep it a little higher for now).  Ryan S. will edit DIAG_MAIN to change the upper limit threshold on the diffracted power % to clear the "Diffracted power is too high" alert, and Daniel, Vicky, and Camilla have been re-tuning the SQZ and ALS lasers to match the new PSL NPRO crystal temperature.  Also, the ISS 2nd loop will have to be adjusted to run with this higher diffracted power %.

Edit: Corrected the initial high PMC Refl from 40W to 30W.  We saw excursions up towards 50W later on, as can be seen in the plots Ryan posted below.

Comments related to this report
ryan.short@LIGO.ORG - 16:11, Tuesday 05 November 2024 (81076)

Sadly, after about an hour and a half of leaving the PSL in this configuration while initial alignment was running for the IFO, we saw behavior similar to what was causing the ISS oscillations seen last night (see attachment). There's a power jump in the NPRO output power (while the FSS was glitching, but likely unrelated), followed by a dip in NPRO power that lines up with a ~2W increase in PMC reflected power, likely indicating a mode hop. This power excursion was too much of a change in the power out of the PMC for the ISS to handle, so the ISS lost lock. Since then, there have been small power excursions (up to around 18-19W), but nothing large enough to cause the ISS to give out.

As Jason mentioned, I've edited the range of ISS diffracted power checked by DIAG_MAIN to be between 2.5% and 5.5% as we'll be running with the ISS at around 4% at least for now. I also checked with Daniel on how to adjust the secondloop tuning to account for this change, and he advised that the ISS_SECONDLOOP_REFERENCE_DFR_CAL_OFFSET should be set to be the negative value of the diffracted power percentage, so I changed it to be -4.2 and accepted it in SDF.
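For context, a sketch of what a range check like this can look like in a Guardian-style test (illustrative only, not the actual DIAG_MAIN code; the channel name and the ezca argument are assumptions):

    # Illustrative diffracted-power range test (hypothetical, DIAG_MAIN-style)
    DIFF_LOW, DIFF_HIGH = 2.5, 5.5   # percent, per the new nominal ~4%

    def check_iss_diffracted_power(ezca):
        # ezca passed in for illustration; in Guardian it is a module-level object
        diff = ezca['PSL-ISS_DIFFRACTION_AVG']   # assumed channel name
        if not DIFF_LOW < diff < DIFF_HIGH:
            yield f"ISS diffracted power {diff:.1f}% outside {DIFF_LOW}-{DIFF_HIGH}%"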

Last attachments are of the situation as we first saw it this morning before making adjustments, and a zoomed-out trend over our whole work this morning.

Images attached to this comment
H1 PSL
ryan.short@LIGO.ORG - posted 12:58, Tuesday 05 November 2024 (81072)
PSL Cooling Water pH Test

FAMIS 21614

pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.

LHO VE
david.barker@LIGO.ORG - posted 10:12, Tuesday 05 November 2024 (81070)
Tue CP1 Fill

Tue Nov 05 10:06:12 2024 INFO: Fill completed in 6min 8secs

 

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 03:31, Tuesday 05 November 2024 - last comment - 11:21, Tuesday 05 November 2024(81062)
OPS OWL Shift Summary

IFO is in NLN and OBSERVING as of 11:15 UTC.

IMC/EX Rail/Non-Lockloss Lockloss Investigation:

In this order and according to the plots, this is what I believe happened.

I believe EX saturated, prompting IMC to fault while guardian was in DRMI_LOCKED_CHECK_ASC (23:17 PT), putting guardian in a weird fault state but keeping it in DRMI, without unlocking (still a mystery). Ryan C noticed this (23:32 PT) and requested IMC_LOCK to DOWN (23:38 PT), tripping MC2’s M2 and M3 (M3 first by a few ms). This prompted Guardian to call me (23:39 PT). What is strange is that even after Ryan C successfully put IMC in DOWN, guardian was unable to realize that IMC was in DOWN, and STAYED in DRMI_LOCKED_CHECK_ASC until Ryan C requested it to go to INIT. Only after then did the EX Saturations stop. While the EX L3 stage is what saturated before IMC, I don’t know what caused EX to saturate like this. The wind and microseism were not too bad so this could definitely be one of the other known glitches happening before all of this, causing EX to rail.

Here’s the timeline (Times in PT)

23:17: EX saturates. 300ms later, IMC faults as a result.

23:32: Ryan C notices this weird behavior in the IMC lock and MC2 and texts me. He noticed that the IMC lost lock and faulted, but that this didn't prompt an IFO lockloss. Guardian was still at DRMI_LOCKED_CHECK_ASC, not understanding that the IMC was unlocked and EX was still railing.

23:38: In response, Ryan C put IMC Lock to DOWN, which tripped MC2’s M3 and M2 stages. This called me. I was experiencing technical issues logging in, so ventured to site (made it on-site 00:40 UTC).

00:00: Ryan C successfully downed IFO by requesting INIT. Only then did EX stop saturating.

00:40: I start investigating this weird IMC fault. I also untrip MC2 and start an initial alignment (fully auto). We lose lock at LOWNOISE_ESD_ETMX, seemingly due to large suspension instability, probably from the prior railing, since the current wind and microseism aren't absurdly high (EY, IX, HAM6, and EX saturate). The LL tool is showing an ADS excursion tag.

03:15: NLN and OBSERVING achieved. We got to OMC_Whitening at 02:36 but violins were understandably quite high after this weird issue.

Evidence in plots explained:

Img 1: M3 trips 440ms before M2. Doesn't say much, but it was suspicious before I found out that EX saturated first. It was the first thing I investigated since it was the reason for the call (I think).

Img 2: M3 and M2 stages of MC2 showing IMC fault beginning 23:17 as a result of EX saturations (later img). All the way until Ryan C downs IMC at 23:38, which is also when the WD tripped and when I was called.

Img 3: IMC lock faulting but NOT causing ISC_LOCK to lose lock. This plot shows that guardian did not put the IFO in DOWN or cause it to lose lock. The IFO is in DRMI_LOCKED_CHECK_ASC but the IMC is faulting. Even when Ryan C downed the IMC (which is where the crosshair is), this did not cause ISC_LOCK to go to DOWN. The end of the time axis is when Ryan C put the IFO in INIT, finally causing a lockloss and ending the railing in EX (00:00).

Img 4: EX railing for 42 minutes straight, from 23:17 to 00:00. 

Img 5: Ex beginning to rail 300ms before IMC faults. 

Img 6: EX L2 and L3 OSEMs at the time of the saturation. Interestingly, L2 doesn't saturate, but there is erratic behavior before the saturation. Once this noisy signal stops, L3 saturates.

Img 7: EX L1 and M0 OSEMs at the time of the saturation, zoomed in. It seems that there is a short but loud, noisy signal (possibly a glitch, or due to microseism?) in the M0 stage that may have kicked off this whole thing.

Img 8: EX L1 and M0 OSEMs at the whole duration of saturation. We can see the moves that L1 took throughout the 42 minutes of railing, and the two kicks when the railing started and stopped.

Img 9 and 10: An OSEM from each stage of ETMX, including one extra from M0 (since signals were differentially noisy). Img 9 is zoomed in to try to capture what started railing first. Img 10 shows the whole picture with L3’s railing. I don’t know what to make of this.

Further investigation:

See what else may have glitched or lost lock first. Ryan C's induced lockloss, which ended the constant EX railing, doesn't seem to show up in the LL tool, so this would have to be done by going through the likely suspects in order. I've never seen this behavior before, so I'm not sure what this was; curious whether anyone else has seen it.

Other:

Ryan’s post EVE update: alog 81061

Ryan’s EVE Summary: alog 81057

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:46, Tuesday 05 November 2024 (81068)ISC

Couple of strange things that happened before this series of events that Ibrahim has written out:

  • At 06:54:50 UTC ISC_LOCK stops checking the convergences in DRMI_LOCKED_CHECK_ASC
    • The log shows nothing, last log from ISC_LOCK:
      • 2024-11-05_06:54:50.340024Z ISC_LOCK [DRMI_LOCKED_CHECK_ASC.run] INP1 Pit not converged, thresh=5
      • 2024-11-05_06:54:50.340243Z ISC_LOCK [DRMI_LOCKED_CHECK_ASC.run] PRC1 Pit not converged, thresh=400
      • 2024-11-05_07:38:20.006093Z ISC_LOCK REQUEST: DOWN
  • At 07:17 UTC, the green arms and DRMI lose lock; this should have triggered ISC_LOCK to lose lock but didn't:
    • DRMI_LOCKED_CHECK_ASC has a @ISC_library.assert_dof_locked_gen(['IMC', 'XARM_GREEN', 'YARM_GREEN', 'DRMI']) checker
      • IMC-PWR_IN_OUTPUT stayed high so may not have triggered
      • 'XARM_GREEN', 'YARM_GREEN' should have triggered a lockloss, as the arms started in TRANSITION, then X_ARM went to FAULT, and then both went to CHECK_CRYSTAL_FREQUENCY. It should lose lock if not in one of: 'End Locked', 'Slow Engaged', 'Transition', 'Red Locked'.
      • The DRMI checker checks that LSC-MICH/PRCL/SRCL_TRIG_MON are all > 0.5. This was not true, so it should have also lost lock.
  • At the same time, the FSS loses lock and IMC_LOCK goes into FAULT; the rest is noted by Ibrahim/Ryan.
Why did ISC_LOCK stop running the DRMI_LOCKED_CHECK_ASC convergence checker? ISC_LOCK_STATUS was RUN (2) the whole time.
And why did ISC_LOCK not notice the DRMI/Green arms lockloss? Plot attached
Images attached to this comment
camilla.compton@LIGO.ORG - 10:09, Tuesday 05 November 2024 (81069)

TJ suggested checking on the ISC_DRMI node: it seemed fine, and was in DRMI_3F_LOCKED from 06:54 until DRMI unlocked at 07:17 UTC, then it went to DOWN.

2024-11-05_06:54:59.650968Z ISC_DRMI [DRMI_3F_LOCKED.run] timer['t_DRMI_3f'] done
2024-11-05_07:17:40.442614Z ISC_DRMI JUMP target: DOWN
2024-11-05_07:17:40.442614Z ISC_DRMI [DRMI_3F_LOCKED.exit]
2024-11-05_07:17:40.442614Z ISC_DRMI STALLED
2024-11-05_07:17:40.521984Z ISC_DRMI JUMP: DRMI_3F_LOCKED->DOWN
2024-11-05_07:17:40.521984Z ISC_DRMI calculating path: DOWN->DRMI_3F_LOCKED
2024-11-05_07:17:40.521984Z ISC_DRMI new target: PREP_DRMI
2024-11-05_07:17:40.521984Z ISC_DRMI executing state: DOWN (10)
2024-11-05_07:17:40.524286Z ISC_DRMI [DOWN.enter]

TJ also asked what H1:GRD-ISC_LOCK_EXECTIME was doing; it kept getting larger and larger (e.g. after 60s it was at 60), as if ISC_LOCK had hung, see attached (bottom left plot). It started getting larger at 6:54:50 UTC, the same time as the last message from ISC_LOCK, and reached a maximum of 3908 seconds (~65 minutes) before Ryan reset it using INIT. Another simpler plot here.

Images attached to this comment
camilla.compton@LIGO.ORG - 11:21, Tuesday 05 November 2024 (81071)

TJ worked out that this is due to a call to cdu.avg without a timeout.

The ISC_LOCK DRMI_LOCKED_CHECK_ASC convergence checker must have returned True, so it went ahead to the next lines, which contained a call to NDS via cdu.avg().

We've had previous issues with similar calls getting hung. TJ has already written a fix to avoid this, see 71078.

'from timeout_utils import call_with_timeout' was already imported, as it is used for the PRMI checker. I edited the calls to cdu.avg in ISC_LOCK to use the timeout wrapper:

E.g. I edited: self.ASA_36Q_PIT_INMON = cdu.avg(-10, 'ASC-AS_A_RF36_Q_PIT_INMON') 
To: self.ASA_36Q_PIT_INMON = call_with_timeout(cdu.avg, -10, 'ASC-AS_A_RF36_Q_PIT_INMON')
 
##TODO: we've seen and fixed similar issues in SQZ_FC and SUS_PI; there could still be calls without the timeout wrapper in other guardian nodes.
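For reference, the general pattern here is to push the blocking NDS call into a child process and abandon it if it doesn't return in time. A minimal sketch along those lines (an illustration of the technique, not the contents of the actual timeout_utils module):

    import multiprocessing

    def call_with_timeout(func, *args, timeout=30, default=None):
        """Run func(*args) in a child process; give up after `timeout` seconds."""
        # Assumes a fork-capable platform (Linux), so the lambda target is fine
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target=lambda q: q.put(func(*args)), args=(queue,))
        proc.start()
        proc.join(timeout)
        if proc.is_alive():       # e.g. NDS never answered
            proc.terminate()
            return default
        return queue.get() if not queue.empty() else default

With this pattern, a hung cdu.avg() call costs at most the timeout instead of stalling the node indefinitely.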