Reports until 21:19, Sunday 16 February 2025
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:19, Sunday 16 February 2025 (82846)
Lockloss

Lockloss @ 02/17 05:16 UTC after 3 hours locked, due to an earthquake. The earthquake didn't shake the ground too hard, but we still lost lock because secondary microseism is also really high.

LHO General (SQZ)
ryan.short@LIGO.ORG - posted 17:05, Sunday 16 February 2025 (82840)
Ops Day Shift Summary

TITLE: 02/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Oli
SHIFT SUMMARY: H1 had been locked for 3 hours, but as I'm posting this, H1 is unlocked and running an alignment. One lockloss this morning, then much of the shift was spent attempting to relock H1 with several failed attempts. After relocking, we noticed the range was noticeably lower than recent locks and that DARM at higher frequencies looked poor, pointing to SQZ not being optimized.

SQZ Troubleshooting: I started by just trying to take SQZ_MANAGER to 'DOWN' then back up to 'FREQ_DEP_SQZ' to see if that would fix the range issue, but the FC ASC was not turning on after the FC locked. The SQZ-FC_ASC_TRIGGER_INMON was not reaching its threshold of 3 to turn ASC on, so I thought improving FC alignment by adjusting FC2 might help. Ultimately, after following the FC alignment instructions from the SQZ troubleshooting wiki, the FC would not lock at all in IR, so I reverted my FC2 move and then attempted to adjust the OPO temperature. Again following the instructions on the wiki, with SQZ_MANAGER in 'LOCK_CF', I was able to improve SQZ-CLF_REFL_RF6_ABS_OUTPUT and the FC was able to lock successfully with ASC on. After then taking SQZ_MANAGER all the way up to 'FREQ_DEP_SQZ', I again adjusted the OPO temperature (in the other direction) as both the inspiral range and SQZ BLRMS looked worse than when we had started. I was able to bring these back to about where they were when I dropped observing, so ultimately this adventure didn't provide much improvement. H1 then returned to observing.
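The OPO temperature adjustment described above is essentially a one-dimensional maximization of the RF6 readback versus crystal temperature: step the setpoint, watch the signal, reverse when it degrades. A minimal sketch of that step-and-check search, using a mock readback in place of the real channel (the peak location and signal shape below are invented for illustration):

```python
def hill_climb(read_signal, temp, step=0.001, max_iters=50):
    """Step a setpoint in one direction while the readback improves;
    reverse direction once when it degrades, then stop at the local max."""
    best = read_signal(temp)
    direction = +1
    reversed_once = False
    for _ in range(max_iters):
        candidate = temp + direction * step
        value = read_signal(candidate)
        if value > best:
            temp, best = candidate, value
        elif not reversed_once:
            direction, reversed_once = -direction, True
        else:
            break
    return temp, best

# Mock RF6 readback, peaked at a hypothetical optimal temperature of 31.4 C.
mock_rf6 = lambda t: 6.0 - 40.0 * (t - 31.4) ** 2

temp, rf6 = hill_climb(mock_rf6, 31.39)
```

In practice the operator does this by hand from the SQZT0 MEDM screen while watching the signal settle; the point of the sketch is only the reverse-once-then-stop search logic.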

 LOG:

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 16:53, Sunday 16 February 2025 - last comment - 18:15, Sunday 16 February 2025(82842)
Lockloss

Lockloss @ 02/17 00:49 UTC after over 3 hours locked

Comments related to this report
oli.patane@LIGO.ORG - 18:15, Sunday 16 February 2025 (82843)

02:11 Back to Observing

H1 General
oli.patane@LIGO.ORG - posted 16:45, Sunday 16 February 2025 - last comment - 18:52, Sunday 16 February 2025(82839)
Ops Eve Shift Start

TITLE: 02/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 140 Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.53 μm/s
QUICK SUMMARY:

Currently Observing and have been Locked for 3 hours. Our range was bad, which we figured was due to squeezing being way worse than last lock and not getting better with thermalization. We popped out of Observing to adjust squeezing, but it was giving us issues and we had trouble getting IR to lock (see Ryan's alog). We were finally able to get the filter cavity locked by adjusting the OPO temp, but our squeezing was still bad, so Ryan adjusted the OPO temperature again and got a tiny bit of squeezing out (-1 dB), but not much. Range is basically back to where it was before all this.

Comments related to this report
oli.patane@LIGO.ORG - 18:52, Sunday 16 February 2025 (82845)

Ryan's alog: 82840

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 10:24, Sunday 16 February 2025 - last comment - 13:36, Sunday 16 February 2025(82837)
Lockloss @ 17:03 UTC

Lockloss @ 17:03 UTC - link to lockloss tool

ETMX glitch lockloss, no other obvious cause.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 13:36, Sunday 16 February 2025 (82838)

H1 back to observing at 21:35 UTC. A long but pretty much automatic relock; the delay was due to two locklosses at TRANSITION_FROM_ETMX and many more around ALS and DRMI, possibly because of the high microseism.

LHO VE
david.barker@LIGO.ORG - posted 10:21, Sunday 16 February 2025 (82836)
Sun CP1 Fill

Sun Feb 16 10:11:43 2025 INFO: Fill completed in 11min 40secs

TCmins [-54C, -53C] OAT (+1C, 34F) DeltaTempTime 10:11:46

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 08:23, Sunday 16 February 2025 (82835)
Ops Day Shift Start

TITLE: 02/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 7mph Gusts, 5mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.62 μm/s
QUICK SUMMARY: H1 has been locked for 4.5 hours. Inch or two of new snow on the ground this morning.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Saturday 15 February 2025 (82834)
OPS Eve Shift Summary

TITLE: 02/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 04:21 UTC

Overall calm shift with one lockloss (alog 82832). Lock re-acquisition was slow due to high and increasing microseism: while we didn't lose lock during re-acquisition, DRMI, PRMI, and MICH_FRINGES took a long time. Squeezing is still bad, but the range trace seems to be on the rise.

LOG:

Start Time System Name Location Laser_Haz Task Time End
19:34 SAF Laser Haz LVEA YES LVEA is laser HAZARD!!! 06:13
00:09 PEST Robert Optics Lab N Termite Wellness Check (and part inventory) 03:08
H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 18:32, Saturday 15 February 2025 - last comment - 20:22, Saturday 15 February 2025(82832)
Lockloss 02:29 UTC

Lockloss - no immediate environmental or other cause. Will continue investigating as we relock.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 20:22, Saturday 15 February 2025 (82833)

Back to NLN as of 04:21 UTC

LHO General (OpsInfo)
ryan.short@LIGO.ORG - posted 16:38, Saturday 15 February 2025 (82829)
Ops Day Shift Summary

TITLE: 02/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: One lockloss this shift at the end of calibration sweeps and H1 has been trying to relock since. [OpsInfo] I've put new damping settings for ETMY mode 20 into the Guardian and loaded; these have been working well the past couple of days.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:14, Saturday 15 February 2025 - last comment - 17:05, Saturday 15 February 2025(82830)
OPS Eve Shift Start

TITLE: 02/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.60 μm/s
QUICK SUMMARY:

IFO is in LOCKING

Having a lot of issues, starting with a lockloss due to a failed cal sweep. Since then, day ops (Ryan S) has experienced locking issues at ALS, IR, DRMI, and TRANSITION_FROM_ETMX. Interestingly, despite high microseism, ALS locking doesn't look noisy and the IFO seems to lose lock without evidence of high ground motion.

Will continue investigating, troubleshooting and locking.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 17:05, Saturday 15 February 2025 (82831)

Back to OBSERVING as of 01:04 UTC

H1 PSL
ryan.short@LIGO.ORG - posted 16:02, Saturday 15 February 2025 (82828)
PSL FSS RefCav Remote Alignment Tweak

While attempting to lock H1 this afternoon, I would occasionally notice the ALS-X transmission get a bit noisy, then it would drop out entirely and there would be a lockloss. On a couple of the more recent instances of this, there was also an MC2 saturation callout right before the lockloss, leading me to believe the IMC was having trouble staying locked. The RefCav alignment has been dropping over the past several days (likely due to the change in weather) and has needed an adjustment, so, thinking that might help the IMC, I paused between lock attempts to touch up the RefCav alignment using the picomotor mirrors in the FSS path. In the few minutes I spent, I was only able to improve the signal on the TPD from about 630mV to 740mV, but I think I was about at the limit of what improvement I could get. I'm not sure if this helped at all as H1 still has yet to relock, but I figure this is still an improvement.

H1 CAL (Lockloss)
ryan.short@LIGO.ORG - posted 12:16, Saturday 15 February 2025 (82827)
Lockloss after Broadband and Simulines Calibration Sweeps

Following the wiki instructions, I ran broadband and Simulines calibration sweeps this morning. This was with the updated Simulines sweep from Vlad with the last frequency point cut out, however H1 again lost lock towards the end of the Simulines injections (link to lockloss tool). See previous investigations into these locklosses in alog82704.

The following injections were the ones still running at the time of the lockloss at 20:01:53 UTC:

2025-02-15 20:01:30,766 | INFO | Scanning frequency 7.68 in Scan : PCALY2DARMTF on PID: 1977568
2025-02-15 20:01:30,766 | INFO | Drive, on PCALY2DARMTF, at frequency: 7.680030000117188, is now running for 35 seconds.
2025-02-15 20:01:35,038 | INFO | Drive, on L1_SUSETMX_iEXC2DARMTF, at frequency: 13, and amplitude 20.726, is finished. GPS start and end time stamps: 1423684888, 1423684908
2025-02-15 20:01:35,038 | INFO | Scanning frequency 14.5 in Scan : L1_SUSETMX_iEXC2DARMTF on PID: 1977571
2025-02-15 20:01:35,038 | INFO | Drive, on L1_SUSETMX_iEXC2DARMTF, at frequency: 14.5, is now running for 28 seconds.
2025-02-15 20:01:36,152 | INFO | Drive, on DARM_OLGTF, at frequency: 875.25, and amplitude 3.7558e-10, is finished. GPS start and end time stamps: 1423684888, 1423684908
2025-02-15 20:01:36,152 | INFO | Scanning frequency 1083.7 in Scan : DARM_OLGTF on PID: 1977565
2025-02-15 20:01:36,153 | INFO | Drive, on DARM_OLGTF, at frequency: 1083.7, is now running for 28 seconds.
2025-02-15 20:01:42,692 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 131.75, and amplitude 0.36336, is finished. GPS start and end time stamps: 1423684896, 1423684916
2025-02-15 20:01:42,692 | INFO | Scanning frequency 155.75 in Scan : L3_SUSETMX_iEXC2DARMTF on PID: 1977577
2025-02-15 20:01:42,692 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 155.75, is now running for 28 seconds.
2025-02-15 20:01:43,808 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 37.25, and amplitude 0.47133, is finished. GPS start and end time stamps: 1423684896, 1423684916
2025-02-15 20:01:43,808 | INFO | Scanning frequency 43.75 in Scan : L2_SUSETMX_iEXC2DARMTF on PID: 1977574
2025-02-15 20:01:43,808 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 43.75, is now running for 28 seconds.
2025-02-15 20:01:54,427 | ERROR | IFO not in Low Noise state, Sending Interrupts to excitations and main thread.

Broadband sweep ran from 19:30:50 to 19:36:06 UTC, Simulines sweep ran from 19:36:53 to 20:01:53 UTC (ended at lockloss). The 7.68Hz DARM injection is clearly seen on the lockloss trends.
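Picking out which injections were still running at the lockloss can be done mechanically from log lines like the ones above. A small sketch (the regexes assume the two line formats shown here and may not cover every simulines message):

```python
import re

# 'is now running' lines announce a new drive; 'is finished' lines close one out.
RUNNING = re.compile(r"Drive, on (\S+?), at frequency: ([\d.]+), is now running")
FINISHED = re.compile(r"Drive, on (\S+?), at frequency: ([\d.]+),.* is finished")

def active_drives(log_text):
    """Track which excitations are still running: the latest 'is now running'
    message per scan wins, and an 'is finished' message clears that scan."""
    running = {}
    for line in log_text.splitlines():
        if (m := RUNNING.search(line)):
            running[m.group(1)] = float(m.group(2))
        elif (m := FINISHED.search(line)):
            running.pop(m.group(1), None)
    return running
```

Running this over the excerpt above would leave the PCALY2DARMTF, DARM_OLGTF, and the three SUSETMX drives as the injections active at 20:01:53 UTC.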

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 13:41, Friday 14 February 2025 - last comment - 18:51, Sunday 16 February 2025(82809)
Lockloss @ 21:15 UTC

Lockloss @ 21:15 UTC - link to lockloss tool

The lockloss tool tags this as WINDY (although wind speeds were only up to around 20mph), and there seems to be an 11Hz oscillation that starts about a second before the lockloss, seen by all quads, PRCL, MICH, SRCL, and DARM.
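One way to hunt for this kind of narrow pre-lockloss oscillation across many channels is to window the last second or two of data and look for a dominant spectral peak. A toy sketch on synthetic data (the sample rate and signal here are fabricated; real use would pull the quad and LSC channels from frames):

```python
import numpy as np

def peak_frequency(x, fs):
    """Return the frequency of the largest spectral peak (excluding DC)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[1:][np.argmax(spec[1:])]

fs = 256  # Hz, assumed sample rate for this toy example
t = np.arange(0, 2.0, 1.0 / fs)  # last 2 s before the lockloss
rng = np.random.default_rng(0)
# Fabricated data: an 11 Hz ring-up buried in white noise.
x = np.sin(2 * np.pi * 11.0 * t) + 0.1 * rng.standard_normal(t.size)

f_peak = peak_frequency(x, fs)
```

With a 2 s window the frequency resolution is 0.5 Hz, enough to separate an 11 Hz line from nearby suspension resonances.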

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 15:42, Friday 14 February 2025 (82815)ISC

H1 back to observing at 23:11 UTC. Fully automatic relock after I started an initial alignment soon after the last lockloss.

After H1 reached NLN, I ran the A2L script (unthermalized) for both P & Y on all quads. Results here:

        Initial  Final   Diff
ETMX P    3.35    3.60  +0.25
ETMX Y    4.94    4.92  -0.02
ETMY P    5.48    5.64  +0.16
ETMY Y    1.29    1.35  +0.06
ITMX P   -0.67   -0.64  +0.03
ITMX Y    2.97    3.00  +0.03
ITMY P   -0.06   -0.03  +0.03
ITMY Y   -2.51   -2.53  -0.02

New A2L gains were updated in lscparams and ISC_LOCK was loaded. I also REVERTED all outstanding SDF diffs from running the script (time-ramps on quads and ADS matrix changes in ASC). The A2L gains themselves are not monitored.
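As a sanity check before committing new gains to lscparams, the Diff column above is just Final minus Initial, which a few lines of Python can verify (values copied from the table, not read from any channel):

```python
# optic/dof: (initial, final) A2L gains, transcribed from the table above
a2l = {
    ("ETMX", "P"): (3.35, 3.60),
    ("ETMX", "Y"): (4.94, 4.92),
    ("ETMY", "P"): (5.48, 5.64),
    ("ETMY", "Y"): (1.29, 1.35),
    ("ITMX", "P"): (-0.67, -0.64),
    ("ITMX", "Y"): (2.97, 3.00),
    ("ITMY", "P"): (-0.06, -0.03),
    ("ITMY", "Y"): (-2.51, -2.53),
}

# Round to 2 decimals to avoid float noise in the differences.
diffs = {k: round(final - initial, 2) for k, (initial, final) in a2l.items()}
```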

Images attached to this comment
ryan.short@LIGO.ORG - 16:19, Friday 14 February 2025 (82817)

Ran a coherence check for a time right after returning to observing to check the A2L gains (see attached). Sheila comments that this looks just a bit better than the last time this was run and checked, on Feb 11th (alog 82737).

Images attached to this comment
oli.patane@LIGO.ORG - 18:51, Sunday 16 February 2025 (82844)

Another lockloss that looks just like this one was seen at 02/16 09:48 UTC: the same 11 Hz oscillation 1 second before the lockloss, seen in the same places.

H1 CDS (CDS)
corey.gray@LIGO.ORG - posted 03:07, Friday 14 February 2025 - last comment - 09:06, Tuesday 18 February 2025(82799)
H1 Wake-Up Due to BS Camera No Longer Updating & Taking CAMERA_SERVO Out of Nominal

BS Camera stopped updating, just like in previous alogs.

This takes the Camera Servo guardian into a never-ending loop (and takes ISC_LOCK out of Nominal and H1 out of Observe). See attached screenshot.

So, I had to wake up Dave so he could restart the computer & process for the BS Camera.  (Dave mentioned there is a new computer for this camera to be installed soon and it should help with this problem.)

As soon as Dave got the BS camera back, the CAMERA_SERVO node got back to nominal, but I had accepted the SDF diffs for ASC that appeared when this issue started, so I had to go back and ACCEPT the correct settings. Then we automatically went back to Observing.

OK, back to trying to go back to sleep again!  LOL

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 14:58, Friday 14 February 2025 (82814)

Full procedure is:

1. Open the BS (cam26) image viewer, verify it is a blue-screen (it was), and keep the viewer running.

2. Verify we can ping h1cam26 (we could) and keep the ping running.

3. ssh onto sw-lvea-aux from cdsmonitor using the command "network_automation shell sw-lvea-aux".

4. Run the IOS commands "enable", "configure terminal", "interface gigabitEthernet 0/35".

5. Power down h1cam26 with the "shutdown" IOS command; verify pings to h1cam26 stop (they did).

6. After about 10 seconds, power the camera back up with the IOS command "no shutdown".

7. Wait for h1cam26 to start responding to pings (it did).

8. ssh onto h1digivideo2 as user root.

9. Delete the h1cam26 process (kill -9 <pid>), where the pid is given in file /tmp/H1-VID-CAM26_server.pid.

10. Wait for monit to restart CAM26's process; verify the image starts streaming on the viewer (it did).
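This procedure lends itself to scripting. Below is a dry-run sketch that only records the commands it would issue (host names, switch port, and the PID-file path are taken from the procedure above; the IOS session is simplified to one ssh call per command for clarity, and real use would need the network_automation shell, credentials, ping checks, and error handling):

```python
def restart_camera(cam="h1cam26", switch="sw-lvea-aux",
                   port="gigabitEthernet 0/35", server="h1digivideo2",
                   pid_file="/tmp/H1-VID-CAM26_server.pid", run=print):
    """Power-cycle a camera via its switch port, then kill the stale
    server process so monit restarts it. With run=print this is a dry run."""
    # Power down the switch port feeding the camera.
    for ios_cmd in ("enable", "configure terminal",
                    f"interface {port}", "shutdown"):
        run(f"ssh {switch} '{ios_cmd}'")
    # (Wait ~10 s here in real use, then power the port back up.)
    run(f"ssh {switch} 'no shutdown'")
    # Kill the camera server process; monit restarts it automatically.
    run(f"ssh root@{server} 'kill -9 $(cat {pid_file})'")

restart_camera()
```

Passing a list's append method as `run` makes the sketch testable without touching any hardware.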

corey.gray@LIGO.ORG - 16:48, Sunday 16 February 2025 (82841)

FRS:  https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=33320

corey.gray@LIGO.ORG - 09:06, Tuesday 18 February 2025 (82876)

Forgot once again to note the timing for this wake-up. It was at 2:33am PST (10:33 UTC), and I was roughly done with the work about 45 minutes after phoning Dave for help.
