LHO General
corey.gray@LIGO.ORG - posted 16:11, Wednesday 25 December 2024 (81988)
Wed EVE Ops Transition

TITLE: 12/26 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 8mph Gusts, 4mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.70 μm/s
QUICK SUMMARY:

Just got the Christmas handoff from Tony after his surprisingly lively/busy day! But all looks well in hand, and this is where I take H1.
Microseism was trending down overnight, which is nice, but it has drifted up to around the 95th-percentile line over the last 12 hrs. Hoping it doesn't skyrocket again, but we do have some big weather locally (at least for the mountains, with a snow dump).

H1 CDS
david.barker@LIGO.ORG - posted 15:12, Wednesday 25 December 2024 (81986)
BSC3 PT132 vacuum gauge glitch, reset VACSTAT

At 14:36 Wed 25dec2024 PST we had another BSC3 PT132 gauge glitch, which put VACSTAT into sensitive mode. At 15:10 I reset VACSTAT.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 11:57, Wednesday 25 December 2024 (81984)
Wed CP1 Fill

Wed Dec 25 10:13:18 2024 INFO: Fill completed in 13min 14secs

 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:46, Wednesday 25 December 2024 - last comment - 15:39, Wednesday 25 December 2024(81979)
Vacuum event at EY 02:42:53 PST

We saw a vacuum event at EY at 02:42:53 PST which propagated to MY. The increase in pressure was about 50% of the level that would have triggered VACSTAT, so no VACSTAT alarm was raised.

The event was coincident with a lock loss.

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 09:58, Wednesday 25 December 2024 (81980)
Images attached to this comment
daniel.sigg@LIGO.ORG - 11:18, Wednesday 25 December 2024 (81982)

Looking at the plots of this event, there is about a 4-5 second delay between the lock loss and the first response of the vacuum gauges at EY.

The EY transmon shows a decay of the intra-cavity power over about 100-200 ms, which is normal. There was nothing significant on the PEM channels. There is no obvious reason why this lock loss would have caused a vacuum spike.

The delay between EY and MY is about 8 minutes.
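
For anyone who wants to redo the timing check, here is a minimal sketch (not the actual analysis) of pulling the EY gauge data around the lockloss with gwpy and estimating when the pressure starts to respond. The gauge channel name and the 10% threshold are placeholders, and the GPS time is the one from the lockloss-tool link in Tony's comment below.

    # Sketch only: estimate the delay between the lockloss and the first gauge response.
    # Assumes NDS2 access via gwpy; the vacuum channel name below is a placeholder.
    from gwpy.timeseries import TimeSeries

    t0 = 1419158677  # lockloss GPS, from the lockloss-tool link below
    vac = TimeSeries.get('H1:VAC-EY_PT410_PRESS_TORR', t0 - 60, t0 + 120)  # placeholder channel

    baseline = vac.crop(t0 - 60, t0).mean()
    # First sample where the pressure rises noticeably (here 10%) above the pre-lockloss baseline.
    rise_times = vac.times.value[vac.value > 1.1 * baseline.value]
    if len(rise_times):
        print(f'gauge response begins {rise_times[0] - t0:+.1f} s relative to the lockloss')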

Images attached to this comment
anthony.sanchez@LIGO.ORG - 12:12, Wednesday 25 December 2024 (81983)CDS, DetChar-Request, Lockloss, VE

Christmas Lockloss # 1

There was a lockloss from Observing last night that was followed by an increase in pressure at End Station Y and Mid Station Y, but not at the corner station.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1419158677

Out of an abundance of caution about melting things in the EY chamber (or an overactive imagination), I held us in CHECK_VIOLINS_BEFORE_POWERUP and called Daniel to determine whether I should continue powering up the IFO.
He asked if the fast shutter shuttered.
Yes, the fast shutter shuttered.
Then he mentioned checking the circulating power in the arms.
Attached is an ndscope showing both the fast shutter popping up and the circulating power.

He also checked the microphones at EY and didn't see anything at that time.

I checked the ground motion  and didn't see anything that stood out as a major event.
 

After speaking with Daniel, my fear of accidentally melting something was resolved, so I started to power up beyond 10 W, eventually reaching NLN.
 

Images attached to this comment
gerardo.moreno@LIGO.ORG - 15:04, Wednesday 25 December 2024 (81985)VE

It seems that we may have lost an ion pump, or there was a glitch related to an ion pump controller; this pump is located at Y2-8, near the end station. Heading to the site; I'll be there sometime around 4.

Attached is a trend of most of the relevant signals at Y-End; the glitch is first noticed on the gauge located at Y2-9 (in red), then on the IP controller signal (also in red).

Images attached to this comment
david.barker@LIGO.ORG - 15:39, Wednesday 25 December 2024 (81987)

Ion pump IP17 voltage glitch corresponds with H1 lock loss to within a second. It takes about 10 seconds for the molecules to make their way to EY.

Images attached to this comment
H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 07:59, Wednesday 25 December 2024 - last comment - 16:11, Wednesday 25 December 2024(81975)
Merry Christmas from LIGO Hanford!!!

TITLE: 12/25 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 15mph Gusts, 11mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.43 μm/s
QUICK SUMMARY:

H1 is currently locked and in Nominal Low Noise, but one of the ADS cameras is not working correctly.
Erik has joined the Teamspeak to help me restart it.


Info I've collected so far:
The ADS camera servo for the beamsplitter is glitched out again and needs to be restarted.
h1cam26 is hosted on h1digivideo2.
nslookup returns 10.106.0.46.
Pinging h1cam26 returns data, though.
Pinging 10.106.0.46 also returns data.

Comments related to this report
erik.vonreis@LIGO.ORG - 08:26, Wednesday 25 December 2024 (81976)

Camera now works after a restart.

david.barker@LIGO.ORG - 08:50, Wednesday 25 December 2024 (81977)

BS camera h1cam26 went offline at 05:32 PST this morning, 25dec2024. This was the first test of my blue-screen monitor for this camera in cds_report, which worked well.
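
For context, a toy version of this sort of check (not the actual cds_report code) could grab a snapshot from the camera server and flag a nearly uniform blue frame; the snapshot URL and the thresholds below are made up for illustration.

    # Rough sketch of a blue-screen check for a frozen GigE camera image.
    # The snapshot URL and the thresholds are hypothetical.
    import io
    import urllib.request
    from PIL import Image, ImageStat

    URL = 'http://h1digivideo2/cam26.jpg'  # hypothetical snapshot URL for h1cam26

    def looks_like_blue_screen(url=URL):
        with urllib.request.urlopen(url, timeout=5) as resp:
            img = Image.open(io.BytesIO(resp.read())).convert('RGB')
        stat = ImageStat.Stat(img)
        r_mean, g_mean, b_mean = stat.mean
        r_sd, g_sd, b_sd = stat.stddev
        # A nearly uniform frame dominated by blue means the feed is likely frozen/offline.
        return b_mean > 2 * max(r_mean, g_mean) and max(r_sd, g_sd, b_sd) < 5

    if __name__ == '__main__':
        print('blue screen?', looks_like_blue_screen())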

Images attached to this comment
anthony.sanchez@LIGO.ORG - 16:11, Wednesday 25 December 2024 (81989)

We needed to restart the service running on h1digivideo2 again after the reboot. This is why it didn't work earlier this morning.

H1 General
thomas.shaffer@LIGO.ORG - posted 05:55, Wednesday 25 December 2024 (81974)
Out of Observing from BS camera

Looks like the BS camera (h1cam26) has frozen. This happened last in October (alog 80555: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80555). I'll give CDS a call to get it back momentarily.

LHO General
corey.gray@LIGO.ORG - posted 22:01, Tuesday 24 December 2024 - last comment - 22:29, Tuesday 24 December 2024(81970)
Tues EVE Ops Summary

TITLE: 12/25 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: TJ
SHIFT SUMMARY:

H1 managed to make it close to NLN, BUT it has been stuck at OMC_WHITENING for most of this shift due to rung-up violins (the violins were fine yesterday, btw). The rung-up violin modes were:

Since H1 hasn't made it to NLN during the shift, I have been chatting with TJ, and the plan is to continue letting H1 wait out the rung-up violins and let autopilot take H1 to NLN/OBSERVING (hopefully!).

(H1 missed out on a GW candidate [S241225c] which occurred at 0425 UTC.)

LOG:

Comments related to this report
corey.gray@LIGO.ORG - 22:29, Tuesday 24 December 2024 (81973)

0626 UTC: And H1 has made it to OBSERVING, finally!! (It took about 5 hrs to damp out those pesky violins!)

And at 0628 UTC we had an H1 range data point of 155 Mpc via the CDS Overview and 160 Mpc on the range FOM.

LHO General
corey.gray@LIGO.ORG - posted 19:58, Tuesday 24 December 2024 (81972)
Mid-Shift-ish Status

H1's been at OMC_WHITENING for 2.5 hrs as we continue to wait for rung-up violins to damp down.

After consulting with Rahul, I have turned OFF damping for ITMy MODE5/6.

For ETMx MODE8 & ETMy MODE1, I slowly increased their gains to their nominal values.

Slowly but surely the modes are dropping, and we are getting close to being able to turn on whitening. (Crossing fingers for no lockloss!!)
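
For the record, "slowly increased their gains" was done by hand in small steps. A rough sketch of that kind of ramp is below; the channel name, nominal value, and step timing are assumptions for illustration, not the actual settings used.

    # Sketch only: walk a violin-mode damping gain back toward nominal in small steps.
    # Channel name, nominal gain, and timing are assumptions; run from a proper ezca environment.
    import time
    import numpy as np
    from ezca import Ezca

    ezca = Ezca()  # assumes the IFO prefix is picked up from the environment

    CHAN = 'SUS-ETMX_L2_DAMP_MODE8_GAIN'  # assumed damping-gain channel
    NOMINAL = 0.02                        # assumed nominal gain
    STEP_WAIT = 30                        # seconds to wait between steps

    current = ezca[CHAN]
    for gain in np.linspace(current, NOMINAL, 10)[1:]:
        ezca[CHAN] = float(gain)
        print(f'{CHAN} -> {gain:.4f}')
        time.sleep(STEP_WAIT)  # let the mode respond before the next step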

H1 General
anthony.sanchez@LIGO.ORG - posted 16:36, Tuesday 24 December 2024 (81971)
Christmas Eve day shift end.

TITLE: 12/25 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
16:37 UTC Norco arrived.
17:00 Test T538484 announced on Verbals.
EXTTRIG: SNEWS alert active.
18:33 UTC Norco truck starts leaving & locking started.
Several locking attempts later I started thinking that maybe there was a pattern to the locklosses, but it turns out H1 was just being finicky & I was just being hungry.
Ate lunch, and H1 started getting fairly high in the locking process. ISC_LOCK got as high as OMC_WHITENING, but due to legendary-sized violin modes trying to hide behind the DARM legend, we had to wait for those to be damped. We lost lock before reaching NLN.

LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
22:08 | OPS | LVEA | LVEA | N | LASER SAFE | 15:07
22:08 | PCAL | Dripta | PCAL Lab | Yes | Taking pictures of PCAL lab slider positions | 22:23

 

LHO General
corey.gray@LIGO.ORG - posted 16:26, Tuesday 24 December 2024 (81969)
Tues EVE Ops Transition

TITLE: 12/25 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 3mph Gusts, 2mph 3min avg
    Primary useism: 0.07 μm/s
    Secondary useism: 0.63 μm/s 
QUICK SUMMARY:
Happy Christmas Eve! 

Just got the lowdown on Tony's fighting the good fight with our high microseism. Sad to hear that the violins were rung up and that prevented him from progressing through OMC_WHITENING and getting H1 finally back to OBSERVING, but at least he got us that much closer!

H1 just completed Tony's alignment and has returned to locking. Microseism is just about where it was 24 hrs ago at the beginning of my shift (it increased a little 24 hrs ago and then returned to its spot just above the 95th percentile of the secondary microseism).

At least winds are calmer than 24 hrs ago (they died down about 13 hrs ago).

H1 ISC
anthony.sanchez@LIGO.ORG - posted 13:04, Tuesday 24 December 2024 (81968)
troubleshooting instantaneous Resonance Locklosses.

I noticed a pattern of locklosses in the locking process around CARM_TO_REFL or Resonance.
The LOCK_AQU survey was updated yesterday with 3 locklosses from Resonance.
Requesting CARM_TO_REFL and allowing that state to complete before moving on to Resonance allowed me to see that there seems to be a consistent lockloss around the point where we load a matrix:
H1:LSC-PD_DOF_MTRX_LOAD_MATRIX => 1

Test:
Comment out the load-matrix line and sit in CARM_TO_REFL for a while:
ISC_library.intrix.load()
Commenting out line 2670 from ISC_LOCK allowed me to stay locked in CARM_TO_REFL, which seems stable.

For some reason loading this matrix seems to be unlocking the IFO.
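
To make this kind of test repeatable without hand-editing line 2670, something like the sketch below could gate the load behind a flag. This is only an illustration assuming the usual Guardian state layout; apart from the ISC_library.intrix.load() call quoted above, the names here (the flag, the surrounding state body) are hypothetical.

    # Sketch only: gate the input-matrix load behind a test flag instead of
    # commenting out line 2670 by hand. Assumes the standard Guardian GuardState
    # layout; everything around the intrix.load() call is hypothetical.
    from guardian import GuardState
    import ISC_library

    TEST_SKIP_INTRIX_LOAD = True  # hypothetical switch for this lockloss test

    class CARM_TO_REFL(GuardState):
        def main(self):
            # ... existing CARM_TO_REFL setup ...
            if not TEST_SKIP_INTRIX_LOAD:
                # The call suspected of unlocking the IFO.
                ISC_library.intrix.load()
            return True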

After that test, which seemed to confirm my suspicion that loading the matrix unlocks the IFO, I had to unlock H1, try again... and undo my changes.

And of course H1 waltzed through CARM_TO_REFL & Resonance without issue...

Conclusion: I should have just eaten my lunch.
 
