Reports until 07:28, Sunday 07 July 2024
H1 General
ryan.short@LIGO.ORG - posted 07:28, Sunday 07 July 2024 (78916)
Ops Owl Shift Summary

H1 called for assistance at 13:30 UTC because it was stuck trying to lock ALSY. I gave ETMY two clicks in yaw and that seemed to be all it needed; H1 is relocking smoothly and up to PREP_DC_READOUT.

LHO General
thomas.shaffer@LIGO.ORG - posted 01:00, Sunday 07 July 2024 (78914)
Ops Eve Shift End

TITLE: 07/07 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

The first lock acquisition had the same issues that Corey saw during his shift, with ALS unable to stay locked or unable to get past FIND IR. After some time it started to have longer stretches between lock losses and it eventually made it past ALS. I noticed that the COMM beat note was low (-18 dBm, then grew to -16 dBm over 30 min), which might explain part of the trouble. The second lock acquisition was a very similar situation, but the beat note started at -16 dBm so ALS was more stable sooner. Maybe we found the threshold, and I did NOT touch PR3. I also had to run an initial alignment since DRMI was a bit too far off to catch. We are now on DC readout and moving forward.

ITMX mode 13 was rung up when I arrived, and I was able to damp it fairly quickly with Rahul's settings and then by slowly increasing the gain.

LOG:

H1 General
thomas.shaffer@LIGO.ORG - posted 00:13, Sunday 07 July 2024 - last comment - 09:05, Monday 08 July 2024(78913)
Lock loss 0642

Lock loss 1404369750

Again, not seeing much of anything on any of the plots. Another 4-5 hour lock.
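For reference, the GPS stamp above can be cross-checked against the UTC time in the title with gwpy's time tools (a minimal sketch; assumes gwpy is installed):

# Convert the logged GPS time to UTC and back (sketch; assumes gwpy is available).
from gwpy.time import tconvert, to_gps

print(tconvert(1404369750))            # -> 2024-07-07 06:42:12 UTC, i.e. the "0642" lock loss
print(to_gps("2024-07-07 06:42:12"))   # inverse conversion back to GPS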

Comments related to this report
vladimir.bossilkov@LIGO.ORG - 07:22, Sunday 07 July 2024 (78915)

Hey, so I had a quick squiz at whether your locklosses might be PI related. The regularity of your lock lengths is very suspicious.

You had at least one lockloss from a ~80296 Hz PI on July 5th just after 6am UTC.
Since then you have been passing through it, exciting it, but surviving the PI-Fly-By.

On those plots you will notice a broader feature that moves slowly up in frequency and excites modes as it passes (kinda vertical-diagonalish on those plots); that smells like the interacting optical mode to me.

Last couple of days you have lost lock when that feature reaches ~80306 Hz, where there are a couple of modes which grow a little in amplitude as the broad feature approaches.
It is hard for me to say which mechanical mode had the super high gain that would make you lose lock on the scale of seconds [because these modes move wildly in frequency at LHO, as I discuss in the link at the end of this line], but it's in that ballpark, and I quote myself: avoid at all costs.
Please investigate if this is PI and whether you need to tweak up your TCS.

EDIT: The longer lock on the 8th makes me think the 80 kHz PIs are probably fine. Worth double-checking carefully in the 20-25 kHz region since your mode spacing looks pretty high. It might be subtle to see in the PI plot on the lock pages since that aliases a lot of noise all over the place.
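For anyone following up, a minimal gwpy sketch of the kind of check described above: pull a stretch of OMC DCPD / PI monitor data before a lockloss and look for a ring-up in a spectrogram. The channel name below is a placeholder, not the actual PI monitor channel.

# Sketch: look for a PI ring-up in the minutes before a lockloss.
from gwpy.timeseries import TimeSeries

lockloss_gps = 1404346170                   # one of the locklosses discussed on this page
chan = "H1:OMC-PI_MONITOR_PLACEHOLDER"      # placeholder; substitute the real PI/DCPD channel

data = TimeSeries.get(chan, lockloss_gps - 600, lockloss_gps)
spec = data.spectrogram(10, fftlength=5, overlap=2.5) ** (1/2.)   # ASD spectrogram

plot = spec.plot(norm="log")
plot.colorbar(label="ASD")
plot.savefig("pi_check.png")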

camilla.compton@LIGO.ORG - 09:05, Monday 08 July 2024 (78937)ISC

Thanks for pointing out these PI locklosses, Vlad; Ryan also caught one of them in 78867 / 78874.

The lockloss website for the July 5th 0637 UTC lockloss 1404196646 tags OMC_DCPD, and elevated noise can also be seen on the "PI Monitor" dropdown on the website, with peaks maybe at 15 and 30 kHz.

Maybe the reason we saw these around July 5th is that our PR2 moves (78878) changed the amount of circulating power we saw in the arms; plot attached.
We still need to check if we're seeing any PI locklosses in the last ~2 days with the higher circulating power: 1404445312 has the OMC_DCPD tag and shows some elevated noise in the PI bands; 1404346170 has the OMC_DCPD tag but no elevated noise in the PI bands.
Images attached to this comment
vladimir.bossilkov@LIGO.ORG - 08:40, Monday 08 July 2024 (78938)

Edit2: There are some elevated peaks at many frequencies before a lockloss I chose to look at, but nothing alarming enough. The largest peak change is something at 3619 Hz, which could be a red herring or aliased from a much higher frequency, but I don't know what that peak is.

H1 General
thomas.shaffer@LIGO.ORG - posted 18:11, Saturday 06 July 2024 - last comment - 00:38, Sunday 07 July 2024(78910)
Lock loss 0009 UTC

Lockloss 1404346170

Quite sudden; I'm not seeing anything on the lock loss ndscopes, even the many new ones. DIAG_MAIN reports that the ref cav transmission is low, which it seems to have been for the past 4 days. This seems really low and potentially an issue, but we've had locks within these last 4 days so perhaps not. I'm running into the same issue that Corey had this morning with ALS: it seems that it won't stay locked. I'm still catching up on the week's alogs, so maybe I'll find a hint soon.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 19:34, Saturday 06 July 2024 (78911)

Back to observing at 0232 UTC.

ALS wouldn't hold a lock, and I still can't figure out exactly why. Just like with Corey on his shift, it just decided to hold long enough for DRMI. There was one lock loss here, but then a bit later it worked again and we got off of ALS and then it was all the way up to low noise. ITMX mode 13 is still just as high as before; I'll continue to try Rahul's settings.

thomas.shaffer@LIGO.ORG - 00:38, Sunday 07 July 2024 (78912)

Noticing now that the COMM beat note was very low and slowly trended up over time. This perhaps allowed for longer locks, and eventually long enough to get off ALS. Looking back at the last few locks, it's definitely lower than it has been even over the last few days. I'm hesitant to touch PR3, but this doesn't seem to be working.
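A quick way to see this trend (a sketch only; the channel name is a placeholder for whichever ALS COMM beat-note monitor is appropriate):

# Sketch: trend the COMM beat-note level over the troublesome relocks.
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

chan = "H1:ALS-C_COMM_BEATNOTE_PLACEHOLDER"   # placeholder PV name
start = to_gps("2024-07-06 22:00")
end = to_gps("2024-07-07 08:00")

beatnote = TimeSeries.get(chan, start, end)
plot = beatnote.plot(ylabel="COMM beat note [dBm]")
plot.gca().axhline(-17, color="r", linestyle="--")   # rough level that seemed marginal tonight
plot.savefig("comm_beatnote_trend.png")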

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 16:29, Saturday 06 July 2024 (78899)
Sat DAY Ops Summary

TITLE: 07/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:

H1 was locked at the beginning of the shift, but had a lockloss just after 10am local.  Relocking sadly took 3hrs, and it isn't clear why it took so long: after multiple attempts, eventually running an alignment, and continuing with more locking attempts, H1 finally made it past DRMI (and locking was mostly automatic from there), but within an hour one of the 1kHz violins rung up (ITMX13).  I made some fairly large gain drops/increases which probably didn't help---currently Rahul is closely damping with much smaller gains.  The DARM spectrum on nuc30 is glitchy/ugly---assuming that's due to the noisier violins.

Sadly, no Calibration run today.

Guardian Notes:

LOG:

LHO General
thomas.shaffer@LIGO.ORG - posted 16:09, Saturday 06 July 2024 (78909)
Ops Eve Shift Start

TITLE: 07/06 Eve Shift: 2330-0800 UTC (1630-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Corey G
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 17mph Gusts, 14mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 3 hours but the 2nd order violins are quite rung up. They are damping, but slowly. A calibration measurement was not run today due to a short lock and now the high violins. If these damp down I'll run one.

H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 15:51, Saturday 06 July 2024 (78908)
Violin mode - ITMX mode 13 was rung up

Received a text message from Corey that ITMX mode 13 was rung up and the nominal settings were making it worse. Corey tried increasing/decreasing the gain settings without any luck. At this point I applied the following settings to try and damp the mode,

Gain = -1, phase -90 degrees - mode increases
Gain = +1, phase -60 degrees - mode increases
Gain = +0.1, no phase - mode increases
Gain = +0.1, phase +60 degrees - mode increases

The above settings did not work, but the one shown below seems to be working for now, as shown in the plot,

FM1+FM2+FM4+FM10 gain +0.1 (phase -90 degrees). After some time I will try ramping up the gain to see if I can bring IX13 down at a faster rate.
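For the record, the "slowly ramp up the gain" step could be scripted along these lines (a sketch only; the PV names are assumptions based on the usual SUS damping-filter naming, not verified channels):

# Sketch: slowly step up the mode-13 damping gain while checking that the
# mode amplitude keeps falling, backing off if it starts to grow.
import time
from epics import caget, caput

GAIN_PV = "H1:SUS-ITMX_L2_DAMP_MODE13_GAIN"      # assumed PV name
MON_PV = "H1:SUS-ITMX_L2_DAMP_MODE13_RMSMON"     # assumed monitor PV name

gain = 0.1
last_amp = caget(MON_PV)
while abs(gain) < 1.0:
    caput(GAIN_PV, gain)
    time.sleep(120)                 # let the mode respond before judging
    amp = caget(MON_PV)
    if amp > last_amp:              # ringing up: step back and stop
        gain /= 2
        caput(GAIN_PV, gain)
        break
    last_amp = amp
    gain *= 2                       # otherwise double the gain and repeat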

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 12:39, Saturday 06 July 2024 (78907)
Mid-Shift Status (Sat)

H1 had a 4.5hr lock after last night's Timing Error, but then there was a random lockloss this morning just after 10am local.  

Sadly, locking has not been trivial, with many locklosses for the green arms (a fully complete & automatic Initial Alignment fixed those).  Then had hiccups with DRMI, oddly.  Had about 3-4 locklosses around engaging the ASC for DRMI.  I texted Camilla at this point to give a heads-up for help, but as we chatted on TeamSpeak, H1 made it past DRMI ASC!  (So leaving her on top of the call list.)  Currently H1 is engaging ASC for the full IFO.

Fred will soon enter the control room with a high school class from New Zealand.  Robert is also on-site.

H1 General
corey.gray@LIGO.ORG - posted 10:43, Saturday 06 July 2024 - last comment - 13:14, Saturday 06 July 2024(78904)
Lockloss & Locking Notes

H1 just had a lockloss and the green arms looked decent, but it kept hanging up with locklosses (6 so far) at LOCKING ALS.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 11:26, Saturday 06 July 2024 (78905)
  • Initial Alignment was fully automated & no issues.
  • Paused at earlier problem spots (LOCKING ALS & also FIND IR)---no issues.
  • DRMI immediately locked, BUT
    • lockloss at DRMI_LOCKED_CHECK_ASC
    • 2nd DRMI lock took around 4 minutes.  Held at DRMI_LOCKED_PREP_ASC a few minutes & then moved to ENGAGE_DRMI_ASC....
    • lockloss as the ASC control signals were being offloaded (probably same happened for previous lock).
    • Will investigate steps of ENGAGE DRMI ASC.
corey.gray@LIGO.ORG - 13:14, Saturday 06 July 2024 (78906)

Will attempt to tweak up SRM (maybe PRM/BS) at ACQUIRE DRMI 1F to help ease ASC for DRMI.

  • SRM made no change
  • PRM pit 2-clicks positively (dropped POP18 & lockloss)

Attempt #2:  automatically taken to check mich fringes then back to drmi...

  • I took to PRMI since drmi looked bad, but then a lockloss.

Attempt #3:  Went to PRMI fine.

  • Lockloss at engage drmi asc.

Attempt #4:  DRMI ASC completed FINE this time!  (I had phoned Camilla before this lock with fears I'd need help!  Luckily, we both watched it get past DRMI.)

  • Went back to OBSERVING at 1:11pm (2011utc)....just as Fred was giving a tour to a high school class from Auckland, New Zealand!
LHO VE
david.barker@LIGO.ORG - posted 08:27, Saturday 06 July 2024 (78900)
Sat CP1 Fill, new time 08:00

Because of possible heat issues we are running the CP1 fills earlier in the morning, 8am instead of 10am. For today's fill the outside temp at 8am is 30C (85F).

Sat Jul 06 08:17:25 2024 INFO: Fill completed in 17min 21secs

This run used the standard "summer" trip temps of -130C. Both TCs quickly got to -100C and TCA crept down to -100C before the fill completed. I have lowered the trip to -140C to give a margin of safety.
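A trivial sketch of how the TC readbacks could be compared against the trip setting going forward (PV names here are placeholders, not the real fill TC channels):

# Sketch: report each fill thermocouple reading relative to the trip setting.
from epics import caget

TRIP_C = -140                                                          # trip temperature set in this entry
TC_PVS = ["H1:VAC-CP1_TCA_PLACEHOLDER", "H1:VAC-CP1_TCB_PLACEHOLDER"]  # placeholder names

for pv in TC_PVS:
    temp = caget(pv)
    print(f"{pv}: {temp:.1f} C ({temp - TRIP_C:.1f} C above the trip setting)")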

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 07:41, Saturday 06 July 2024 (78898)
Sat DAY Ops Transition

TITLE: 07/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 10mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.13 μm/s
QUICK SUMMARY:

H1's been locked/observing for 2hrs after the historic efforts in recovery from the timing/dolphin crash last night.  All looks decent at the moment (winds died about 1hr ago, but the X-arm stations (EX + MX) are both above 10mph, oddly).

It is Saturday, so that means it's Calibration Day in about 4hrs *knock on wood*.

The front entrance gate won't close all the way, so I have exited and entered without having to swipe in (I've been told it's due to the heat).

H1 General
ryan.short@LIGO.ORG - posted 05:38, Saturday 06 July 2024 (78897)
H1 Recovery After CDS Timing Error - Ops Owl Shift Summary

Tony handed off the IFO to me at the end of his shift, after the CDS team had recovered following a timing error and Dolphin glitch; see alog 78892 for details on that. Generally alignment was very bad, not surprising since multiple seismic platforms tripped as a result of the glitch.

Executive summary: H1 is back to observing as of 12:35 UTC after a lengthy alignment recovery process. See below for my general outline of notes I kept through the process.

---
Post Timing Glitch Recovery - 6 July 2024 (Owl)
 
IMC won't lock; alignment on camera looks bad
Restored optic sliders to their values from after the last IA (GPS 1404280186); see the lookup sketch at the end of these notes
Seeing that several optics not actually in right place according to OSEMs (not unexpected with SEI trips) and no light on ALS cams
Trending PRC optics...
PR3 move-
P: -117.1 -> -122.2
Y: 152.4 -> 99.8
PR2 needed slight adjustment also
Got light back on ALS cams
Moved ETMs to get green alignment back
Completed green arms part of IA
 
XARM IR locking on wrong mode during INPUT_ALIGN
Adjusted PR2 to improve IR flashes; still not a great AS AIR cam image (haven't brought SRC back at this point yet)
Completed INPUT_ALIGN part of IA
 
Completed PRC_ALIGN part of IA, no intervention
 
AS AIR cam looks bad at start of MICH_BRIGHT_ALIGN
Trending SRC optics...
SR3 move-
P: 437.9 -> 438.8
Y: -148.9 -> 122.3
Moved SR2 by large amount in P & Y also
MICH won't lock, restoring BS alignment
ASC-AS_A_DC_NSUM gets very noisy whenever trying to ACQUIRE_MICH_BRIGHT; spot on AS AIR cam flashes rapidly
Skipping MICH for now, onto SR2 align
 
Completed SR2_ALIGN part of IA, no intervention
 
Tried MICH again, no improvement, onto SRC_ALIGN
Seeing same issue with ASC-AS_A_DC_NSUM signal getting very noisy and camera spot flashing when trying to ACQUIRE_SRY
SRCL_TRIG conditions are being met; can occasionally get to SRY_LOCKED for a second or two
Since alignment actually doesn't look too bad, going to try and lock mostly out of curiosity
 
Lock acquisition went very well; DRMI on first try, no locklosses going up
Reached NLN @ 12:33 UTC, observing @ 12:35 UTC
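As mentioned in the slider-restore step above, the target values came from trending back to just after the last initial alignment. A minimal sketch of that lookup (channel names follow the usual SUS OPTICALIGN convention but are assumptions here):

# Sketch: pull alignment slider values from just after the last initial alignment.
from gwpy.timeseries import TimeSeriesDict

GOOD_GPS = 1404280186                 # just after the last initial alignment
optics = ["PR2", "PR3", "SR2", "SR3"]
chans = [f"H1:SUS-{o}_M1_OPTICALIGN_{dof}_OFFSET"      # assumed channel naming
         for o in optics for dof in ("P", "Y")]

data = TimeSeriesDict.get(chans, GOOD_GPS, GOOD_GPS + 60)
for name, ts in data.items():
    print(f"{name}: {ts.value.mean():.1f}")            # slider value to restore by hand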
LHO FMCS (PEM)
anthony.sanchez@LIGO.ORG - posted 02:07, Saturday 06 July 2024 (78896)
FAMIS Vibrometer HVAC fans

FAMIS 26314
There looks to be a slight increase in noise on MR_FAN6 about 5 days ago.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 01:12, Saturday 06 July 2024 - last comment - 09:26, Saturday 06 July 2024(78892)
Timing Error, restart of SUS frontends needed

Tony, Jim, Erik, Dave:

We had a timing error which caused DACKILLs on h1susb123, h1sush34, h1sush56 and DAC_FIFO errors on h1sush2a.

There was no obvious cause of the timing error which caused the Dolphin glitch; we noted that h1calcs was the only model with a DAQ CRC error (see attached).

After a diag-reset and crc-reset, only the SUS DACKILL and FIFO errors persisted.

We restarted all the models on h1susb123, h1sush2a, h1sush34, h1sush56 after bypassing the SWWD SEI systems for BSC1,2,3 and HAM2,3,4,5,6.

SUS models came back OK, we removed the SEI SWWD bypasses and handed the system over to Tony.

Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 01:46, Saturday 06 July 2024 (78894)

H1SUSH2A DACs went into error 300 ms before the CRC SUM increased on h1calcs. 

 

The DAQ (I believe) reports CRC errors 120 ms after a dropped packet, leaving 180 ms unaccounted for.

Images attached to this comment
david.barker@LIGO.ORG - 09:17, Saturday 06 July 2024 (78901)

FRS 31532 created for this issue. It has been closed as resolved-by-restarting.

david.barker@LIGO.ORG - 09:26, Saturday 06 July 2024 (78902)

Model restart logs from this morning:

Sat06Jul2024
LOC TIME HOSTNAME     MODEL/REBOOT
01:12:22 h1susb123    h1iopsusb123
01:12:33 h1sush2a     h1iopsush2a 
01:12:39 h1sush34     h1iopsush34 
01:12:43 h1susb123    h1susitmy   
01:12:47 h1sush2a     h1susmc1    
01:12:56 h1sush56     h1iopsush56 
01:12:57 h1susb123    h1susbs     
01:12:59 h1sush34     h1susmc2    
01:13:01 h1sush2a     h1susmc3    
01:13:10 h1sush56     h1sussrm    
01:13:11 h1susb123    h1susitmx   
01:13:13 h1sush34     h1suspr2    
01:13:15 h1sush2a     h1susprm    
01:13:24 h1sush56     h1sussr3    
01:13:25 h1susb123    h1susitmpi  
01:13:27 h1sush34     h1sussr2    
01:13:29 h1sush2a     h1suspr3    
01:13:38 h1sush56     h1susifoout 
01:13:52 h1sush56     h1sussqzout 
 
