X1 SUS (EPO, SUS)
corey.gray@LIGO.ORG - posted 10:15, Saturday 05 October 2024 - last comment - 10:58, Saturday 05 October 2024(80406)
The Saga Of The aLIGO Fiber Puller At LHO (up ~9/25/2024)

Editor's Note:  To document recent SUS Fiber Puller activities and share the overall status of work in the Advanced LIGO Fiber Puller Lab (located in the LSB building here at LHO), and in response to requests to post our notes/summaries (which in the last few years had been relegated to notes in logbooks, email chains, hearsay, etc.), we will be posting these updates in the LHO Logbook, tagging the ALOG section as "X1" with a Primary Task of "SUS".

Summary Of The Last 2-Years

(Alphabetical list of people who have helped/been involved:  C.Gray, J.Kissel, R.McCarthy, J.Oberling, T.Sadecki, R.Savage, TJ Shaffer, R.Short, B.Weaver)

Upgrade To 200W Laser (Fall 2022)

Until recently, the last batch of fibers had been pulled in Fall 2022.  At that time, the Fiber Puller started having issues related to the laser (f100 CO2 laser by Synrad; 100W at 10.6um)--this laser was well over a dozen years old and at the end of its operational life.  Higher-power lasers had already been purchased for the Fiber Puller about 5 years ago in preparation for an upgrade and a possible need to pull thicker-diameter silica fibers for heavier suspensions.  Because of this, it was decided to upgrade the Fiber Puller with one of these 200W lasers (f201 CO2 laser by Synrad; 200W at 10.6um).  The 200W laser is bigger than the previous laser, and because of this the back of the laser hangs off the edge of the optics table; Rick made a nice breadboard which offers a platform to support the back of the laser (see image 1).  Cabinets and furniture were removed along the wall to allow extra space for walking around this area of the lab.

After RyanS and I installed the new laser & re-did the chiller plumbing, we moved on to aligning the laser into the Fiber Puller.  Since alignment work was next, we took this opportunity to swap in new upper & lower conical mirrors in the Fiber Puller (Travis ordered these a while ago).

Alignment Work Begins (2023-2024)

Then, with the help of Karl Toland's thesis (Chapter 2), we started Fiber Puller alignment.  This included mechanical checks of the Fiber Puller, which looked mostly OK; it was noticed there is a slight tilt forward of the tower.  We decided not to change this.  The periscope which points the beam into the Fiber Puller was reconfigured to give its translation stages (namely the horizontal one) better adjustability for pointing into the Fiber Puller--previously the horizontal translation stage only moved forward and back, which did not allow for horizontal translations of the beam into the Fiber Puller (this only required rotating the translation stage 90deg).

After this it was all optical alignment work, and this proved to be non-trivial.  Additionally, one of the gold mirrors was burned.  This mirror had a glass substrate, so it was replaced with a gold mirror with a copper substrate (also took this opportunity to replace several dirty input gold mirrors).

Then there were several months of no progress with alignment.  Eventually Travis & Jason checked out the status of the alignment, and in Aug 2024 Jason was able to give us a decent-looking beam on our stock--and after almost 2 years, fibers were finally pulled again!

Return To Fiber Pulling!  ...For A Little While.

Between Aug 2-20th, six fibers were pulled (4 passes & 2 fails).  Then, while preparing to pull another fiber, one of the 1/8" ID tubing lines to the shutter burst on Aug 20, 2024.

NOTE:  Here I will paste in the email summary I sent to the group on Sept 26th.


Recovery From The Plumbing Failure On Aug 20, 2024

[start of email]

Summary:  

After the 1/8" chilling hose burst (8/20), weeks were spent with recovery and buying new plumbing hardware.  There is now better hosing for the shutter + beam bender (mirror mount) and most of the fittings have been replaced with push-to-connects (excepted for barbed fitting of the beam bender (see image 6)).  We have been running the chiller w/ new plumbing since 9/17/2024.

The plumbing fix above was done in-situ, without removing the shutter or mirror mount, in the hope of preserving the alignment; but first looks at the alignment by myself and Jason at separate times did show a change on our fiber stock target.  On Sept 25th, we were both able to work on the Fiber Puller, and Jason was able to get us a decent alignment back.  We then pulled our first fiber since the 8/20 chiller plumbing failure---and this fiber also passed on 9/26!

PLUMBING (see image 5):

The backstory: years ago a water-cooled shutter & mirror mount were installed for the Fiber Puller (1/8" ID plumbing using a mix of push-to-connect & barbed fittings).  The water-cooled shutter & mirror mount were added in series with the laser (which has 1/2" OD tubing) via an aluminum manifold.  During the 100W-to-200W laser swap by RyanS and me, we discovered everything downstream of the laser (the 1/8" ID lines) had been clogged (most likely for years), from the manifold down to the shutter + mirror mount.  This led to us replacing all the lines, cleaning the manifold, and starting to use OptiShield in the chiller water in hopes of preventing any corrosion clogs.  This worked for a while during the upgrade and even through the six NEW fibers pulled in August, but after the 6th fiber there was a huge burst in the tubing at the shutter.

This led to repair work, with a few issues/notes observed:  

  1. Non-Water Tubing FAIL:  The line which failed (I had reordered what was originally installed) was very bendy and easy to work with.  BUT over several purchase orders, it was noticed that this line was rated for "air" use.  This led to being more mindful of the type of tubing, making sure we had (1) water-rated tubing and (2) tubing with a decent max-pressure rating.  Perhaps overkill, but we ultimately went with tubing with a max pressure rating of 120psi.  This new tubing is definitely much stiffer than the previous tubing.
  2. Metric & Inch:   Previously--between the manifold, shutter and beam-bender we had a mix of different fitting connections.  Additionally, because we were looking for: water-rated, high max-pressure rating, consistent fittings, etc, it limited options.  Ultimately, went with metric tubing + push-to-connect fittings (see image 4).  Able to make this change for everything, except for the mirror mount (it remained with its barbed fittings, but it has 3 barbs and it feels like a good connection).  Additionally, our pressures are pretty low (under 50psi)---not measured for the new 6mm lines, but the chiller's pressure dial reads ~25psi for the system.
  3. 90deg Elbow Fittings Under Shutter (see images 2 & 3):  When the group checked out this plumbing work a few weeks ago, TJ suggested elbow fittings under the shutter (mainly because this is where the previous tubing burst and the lines do take a bit of bend under the shutter here).  New push-to-connect elbows were added.
  4. Pressure/Flow Values & Adding Sensors:  We had not really paid attention to the pressure & flow of the system through all this work, but after the failure it was brought up by Travis and Jason.  We have a flow sensor on the supply line (downstream of the laser) and a pressure gauge (also on the supply side, at the chiller).  Just for a rough idea of numbers, here are the values observed (a rough back-of-the-envelope check on them is sketched after this list):
    1. Flow sensor:  this read 1.0gpm with just the laser, but after the new 6mmOD system was added it increased to 2.0gpm
    2. Pressure gauge:  this was running around 30psi at some points during the rework (more consistently at 26psi after the new 6mmOD system was added)
    3. TO DO list:  More Sensors!  It would be nice to add more flow/pressure sensors at more points in the plumbing system--especially for the 6mmOD lines.  With the focus on getting back to operation, I did not get a chance to look into ordering these, but we should do this at some point.
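As a very rough sanity check on those numbers, here is a minimal back-of-the-envelope sketch (Python).  It assumes the 6mmOD push-to-connect tubing has roughly a 4mm ID and, as a worst case, that the full 2.0gpm passes through the small-bore branch; in reality the manifold splits the flow between the laser and the shutter/beam-bender loop, so the true velocity in the 6mm lines is lower.

import math

GPM_TO_M3_PER_S = 3.785e-3 / 60.0   # 1 US gallon = 3.785 liters

def flow_velocity(gpm, inner_diameter_m):
    """Mean flow velocity (m/s) for a given volumetric flow and tube ID."""
    area = math.pi * (inner_diameter_m / 2.0) ** 2
    return gpm * GPM_TO_M3_PER_S / area

# Laser-only loop: 1.0 gpm, treating the 1/2" figure as an ID for simplicity.
print(f'laser loop: {flow_velocity(1.0, 0.0127):.2f} m/s')
# Worst case for the new lines: all 2.0 gpm through a single ~4 mm ID tube.
print(f'6 mm lines: {flow_velocity(2.0, 0.004):.1f} m/s (upper bound)')

The point is just that the small-bore lines see a much higher flow velocity than the 1/2" laser loop for the same volumetric flow, which is consistent with wanting higher-pressure-rated tubing and extra sensors on that branch.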

ALIGNMENT Once Again:

For all of the plumbing work above, I opted to do it "in situ" by not "touching" the shutter or beam bender (I did not remove them from the beam path and only connected/disconnected their tubing)---this was all in the hopes of not changing the alignment---we had a nice alignment which produced decent fibers in August! 

Although I attempted to be careful with all of the plumbing work above, first checks by both myself and Jason later showed we were off a little when we looked at the fiber stock target with the red beam.  

[On Sept 25, 2024], Jason and I were both able to work together in the fiber lab.  Here are some of the notes I took while Jason did his alignment magic!

Alignment Attempt #1:

Red beam alignment

Jason went through alignment touch-up via the periscope mirrors.  He was quickly able to get a decent beam on the fiber stock target.  At this point, he checked to see how this beam looked with the alignment irises which have been in place all the while (between the periscope and Fiber Puller).  After aligning beam to the irises, he once again had a decent beam on the fiber stock.  In both cases, there was a "hot spot" noted on the "back" side of the fiber stock.  

CO2 beam alignment

With the main CO2 beam on a new fiber stock, we needed to turn the power up to about 55-60% on the laser to be able to see the beam via the cameras in the LabVIEW app.  With the cameras, I tweaked the periscope mirrors to get a better alignment (which allowed us to lower the power to about 50%).  Jason had a try at tweaking the alignment via the cameras (I believe this was his first chance to do this—especially since the side camera was better-focused on our stock back in August).

We were able to optimize a little more, but it still did not look great (see image 7).  I thought it might look like what we had when we were pulling fibers in August, but we could clearly see it wasn't great.  There was an angle on the fiber stock, and the rear camera could see the angle even more clearly.  I mentioned we could probably pull a fiber, but we'd definitely have to increase the laser power.  We did not pull a fiber, however.  We broke for lunch with hopes of more checks in the afternoon.

Alignment Attempt #2 With Mechanical Offset Check Between Top & Bottom Clamp Translation Stages:

Translation Stages:

Thinking there was still something amiss mechanically, Jason wanted to check the Top & Bottom Clamp Translation Stages.  We do this using a special tool to set the Upper translation stage and the Lower translation stage (the latter we have not touched for a while).

Jason went through several iterations of making small adjustments to both the upper and lower translation stages (see image 8).  On one of the final adjustments, he intentionally overshot the upper translation stages.  This was to hopefully help us later (it did!).

Tower:

NOTE:  At this point, Jason continued with mechanical checks and we looked at the Tower with a bubble level.  There is an observable tilt forward of the tower, albeit small (this is mentioned above).  Karl's thesis mentions this should be fixed, but no tolerances or go/no-go values are given, and it's not clear how to do this without a major change to the system.  There are a lot of surface-to-surface mechanical contacts in the tower, so one would need to loosen lots of bolts, possibly install shims, and then re-torque everything down.  Anyway---a big job.  Since it is a small tilt, we are just noting it and decided to move on.  In the future the Fiber Puller will need to be upgraded to a taller tower, and this will definitely give us the opportunity to remove the tilt from the tower.  (Note:  this could be a reason for [the known years-long issue of] why we always have to make adjustments to the upper clamp translation stage almost every time we pull a fiber.)

Red beam alignment

Once again, Jason very quickly went through an alignment onto our fiber stock.  One nice thing:  no "hot spot" on back side of stock!

CO2 beam alignment

A new stock was installed, and we heated it.  We could clearly see the beam with the cameras at a lower power of 45%!  (lower power than Attempt #1!   Our 1st good sign!)

And the 2nd good sign was that the forward & back alignment of the stock was GOOD (in other words, there was no offset in this direction--another sign the translation stage mechanical checks did us good!).  The stock did show an offset in the other direction ("left/right"), so the translation stage was adjusted accordingly.

Additionally, the spot on the stock looked MUCH better on the cameras (3rd good sign! [see image 9]).  There were minimal angles seen with the "ring spot" on the stock!  Touched up the alignment a little more, but it was already better than how we looked in the morning.  We were ready to pull!

Vaporization

Before pulling, we wanted to get an idea of what laser power we could go up to before silica vaporization begins.  Ryan can correct me here, but I believe Alan told us we should do pulls at laser powers just below the power at which silica vaporization occurs.  The laser power was slowly ramped up, and I started seeing vaporization somewhere around 75-80% (it was very bright on the stock).

Polish & Pull

Not sure what the ideal values to use here are, but to just get going, we went with:

Although it was late in the afternoon, we went through the whole process.  The fiber was profiled and then analyzed:

This is where we ended for that day.

We have continued to pull more fibers and get more data points, with an eye on the fundamental (violin) mode frequencies.  One knob we have here is those Polish & Pull powers.

NOTE:  A reminder that even with the 100W laser toward the end, our fibers have been populating the lower range of violin mode frequencies (see fiber table).  Something to think about.

Additional Note On Alignment & Power

With our nice alignment, it should also be noted that we are still running at higher powers than what we are used to (which was 85%, or ~85W, with the 100W laser).  With the current 200W laser, we were running at ~140W in Aug for pulls, and yesterday we were at ~135W.  This would point to the overall alignment still being in need of some improvement.
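Just to put numbers on that comparison (a minimal sketch; it assumes the laser's percent setting scales roughly linearly with delivered power, which may not be exact):

# Power-on-target comparison between the old and new pull settings.
old_pull_W = 0.85 * 100   # ~85W: 85% of the old 100W laser
aug_pull_W = 140          # ~140W quoted for the August pulls with the 200W laser
print(f'old pulls ~{old_pull_W:.0f}W, recent pulls ~{aug_pull_W}W '
      f'-> {aug_pull_W / old_pull_W:.2f}x more power on target')

Needing roughly 1.6x more power on target, even with a cleaner-looking spot, is why the overall alignment still looks like it has room for improvement.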

[end of email]

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 10:58, Saturday 05 October 2024 (80481)EPO, SUS

Here is a video DCC Link to fiber (S2400752) pulled on Aug 19, 2024:

https://dcc.ligo.org/LIGO-E2400339

LHO VE
david.barker@LIGO.ORG - posted 08:24, Saturday 05 October 2024 (80480)
Sat CP1 Fill

Sat Oct 05 08:09:22 2024 INFO: Fill completed in 9min 18secs

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 07:35, Saturday 05 October 2024 (80478)
Sat DAY Ops Transition

TITLE: 10/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.17 μm/s
QUICK SUMMARY:

H1 is locked (43min) and overnight had 1-3hr locks with decent relock times.  Microseism continues to slowly approach the 50th percentile & there are light breezes.

H1 General
ryan.crouch@LIGO.ORG - posted 22:00, Friday 04 October 2024 (80473)
OPS Friday EVE shift summary

TITLE: 10/05 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: 1 lockloss, automated relock; the wind is decreasing and the microseism also seems to be (slowly...). We've been locked for just over an hour.
LOG: No log

Images attached to this report
H1 General (ISC, Lockloss, PSL)
ryan.crouch@LIGO.ORG - posted 21:43, Friday 04 October 2024 - last comment - 13:51, Saturday 05 October 2024(80475)
Locklosses today, FSS

More investigation similar to alog80358

NLN 1412064049 lockloss tool, no tag, FSS chans look stable.

AS_A_DC and IMC-TRANS lose lock within ~250 ms

PREP_DC_READOUT_TRANSITION 1412066248 lockloss tool, FSS tag

AS_A_DC and IMC-TRANS lose lock within 1 ms of each other, but the FSS chans look stable beforehand

NLN 1412077192 lockloss tool, tool failed

AS_A_DC and IMC-TRANS lose lock within ~5 ms, small FSS glitching starts almost 2 mins before LL, large glitch 1/2 a sec before

NLN 1412084422 lockloss tool, No FSSery

AS_A_DC and IMC-TRANS lose lock within ~250 ms

NLN 1412092628 lockloss tool, FSS tag

AS_A_DC and IMC-TRANS lose lock within ~5 ms, FSS glitching starts 5 secs before LL

NLN 1412101888 lockloss tool, FSS tag

AS_A_DC and IMC-TRANS lose lock within ~250 ms

TRANSITION_FROM_ETMX 1412105312 lockloss tool, FSS tag

AS_A_DC and IMC-TRANS lose lock within ~13 ms, the FSS starts glitching 40 seconds before LL, FSS_PC_MON_OUTPUT starts rising ~40 seconds before LL

LOW_NOISE_ESD_ETMX 1412113952 lockloss tool, no FSSery

AS_A_DC and IMC-TRANS lose lock within ~250 ms

NLN 1412131865  lockloss tool, FSS tag

AS_A_DC and IMC-TRANS lose lock within ~1 ms, there's an FSS glitch 1 second before the LL
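For reference, here is a minimal sketch (Python/gwpy) of the kind of check behind the "within ~X ms" statements above: compare when AS_A_DC and IMC-TRANS each drop at a lockloss.  The full channel names and the 50%-drop threshold are assumptions for illustration only; the lockloss tool and ndscope traces are what was actually used.

from gwpy.timeseries import TimeSeries

gps = 1412077192          # one of the lockloss times above
channels = {
    "AS_A_DC":   "H1:ASC-AS_A_DC_NSUM_OUT_DQ",   # assumed full channel name
    "IMC-TRANS": "H1:IMC-TRANS_OUT_DQ",          # assumed full channel name
}

drop_time = {}
for label, chan in channels.items():
    data = TimeSeries.get(chan, gps - 5, gps + 2)
    # "drop" = first time the signal falls below 50% of its pre-lockloss mean
    pre_lockloss = data.value[: int(2 * data.sample_rate.value)].mean()
    below = data.times.value[data.value < 0.5 * pre_lockloss]
    drop_time[label] = below[0] if len(below) else None

if None not in drop_time.values():
    dt_ms = 1e3 * (drop_time["IMC-TRANS"] - drop_time["AS_A_DC"])
    print(f"IMC-TRANS dropped {dt_ms:+.1f} ms after AS_A_DC")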

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 22:32, Friday 04 October 2024 (80477)Lockloss, OpsInfo, PSL

To summarize, Ryan looked at 8 locklosses from today: 6 of them had the FSS tag, 5 of them had the IMC lose lock within 15ms of the IFO losing lock, and only 3 of them had the pattern that Ian and Camilla found was normal in O4a, where the IMC loses lock ~250 ms after the IFO loses lock.  This FSS issue seems to be having a serious impact on our duty cycle.

I had a helpful conversation with Rick this afternoon about this issue; he suggested that we try looking for the glitches in other places in the PSL, so that we could think about tests where we don't run some of the feedback servos and see if we still see the issue.  I made an ndscope template with all the PSL fast channels I could find; it is saved in sheila.dwyer/ndscope/PSL/PSL_fast_channels.yaml

Looking at 1412111658 with this scope, it looks like several of the channels that we are saving at a fast rate might not be connected; for example, the OSC and HPL channels are around 0.  The only other channel that clearly shows these glitches is H1:PSL-PWR_NPRO_OUT_DQ.
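For anyone wanting to reproduce this kind of look offline, below is a minimal sketch (Python/gwpy, assuming NDS access) that grabs a couple of the fast channels around one of these lockloss times.  H1:PSL-PWR_NPRO_OUT_DQ is named above; the FSS channel name here is a guess and should be checked against the ndscope template.

from gwpy.timeseries import TimeSeriesDict

gps = 1412111658
channels = [
    "H1:PSL-PWR_NPRO_OUT_DQ",       # named above
    "H1:PSL-FSS_FASTMON_OUT_DQ",    # hypothetical name, check the template
]

# Fetch 10 s before and 2 s after the lockloss and save a quick-look plot.
data = TimeSeriesDict.get(channels, gps - 10, gps + 2)
plot = data.plot()
plot.savefig("psl_fast_channels_around_lockloss.png")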

Request for operators:  If you are having locking difficulties over the weekend, it may be due to the PSL/FSS problems.  To help narrow down the cause of the problem, one could sit for an hour or two with the FSS unlocked and the ISS unlocked.  If the glitches show up in the NPRO channel even without these servos locked, then we will know that the problem is not from the FSS, but from the NPRO.

Images attached to this comment
ryan.crouch@LIGO.ORG - 13:51, Saturday 05 October 2024 (80484)ISC, Lockloss, PSL

Using Sheila's template for a few of these LLs.

1412077192

NLN LL, It looks like the FSS started glitching first then the NPRO.

1412105312

TRANSITION_FROM_ETMX LL, here it looks like the NPRO saw the first glitch then the FSS started glitching.

1412131865

NLN  LL, the NPRO power starts dropping 28 seconds before LL then the ISS, FSS, PMC and NPRO all glitch a ms before LL.

1412066248

PREP_DC_READOUT_TRANSITION LL, not much going on here, some small FSS glitches and PSL_ILS_MIXER drops.

1412092628

NLN LL, the FSS and NPRO seem to glitch at the same time ~5 sec before the LL, glitching gets more frequent closer to the LL.

Images attached to this comment
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 19:51, Friday 04 October 2024 - last comment - 21:00, Friday 04 October 2024(80472)
Lockloss 02:50 UTC

02:50 UTC lockloss

Comments related to this report
ryan.crouch@LIGO.ORG - 20:10, Friday 04 October 2024 (80474)ISC, Lockloss

Lockloss tool tagged FSS_OSCILATION

AS_A_DC and IMC-TRANS lost lock within 1 ms of each other

Images attached to this comment
ryan.crouch@LIGO.ORG - 21:00, Friday 04 October 2024 (80476)

03:59 UTC Observing, automated relock with 1 round of check_mich, and 2 of PRMI and DRMI.

LHO General
corey.gray@LIGO.ORG - posted 16:26, Friday 04 October 2024 (80459)
Fri Ops DAY Shift Summary

TITLE: 10/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

A bit of a rough day with H1.  Had two short locks and then it was rough going with locking, and then it got worse with winds in the afternoon.  BUT, at the end of the shift H1 was able to get to OMC WHITENING---the violin modes are elevated.
LOG:

H1 General
ryan.crouch@LIGO.ORG - posted 16:08, Friday 04 October 2024 - last comment - 16:34, Friday 04 October 2024(80470)
OPS Friday EVE shift start

TITLE: 10/04 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 29mph Gusts, 16mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 16:34, Friday 04 October 2024 (80471)

23:27 UTC Observing

H1 General
corey.gray@LIGO.ORG - posted 13:27, Friday 04 October 2024 - last comment - 15:21, Friday 04 October 2024(80468)
Mid Shift Status

H1 continues to have a rough day with short locks, and it's gotten worse with H1 dropping out at various states while trying to get to NLN.  Although H1 had been locking DRMI sort of decently, I ran an Initial Alignment to see if this would help (it had been over 24hrs since the last IA was run).

For the last lock about 3hrs ago, OPO temperature was adjusted and this yielded a better range, but it was only locked 47min.

Comments related to this report
corey.gray@LIGO.ORG - 15:21, Friday 04 October 2024 (80469)

Locking woes continue.  We have had a couple of locks make it all the way past MAX POWER only to have locklosses, but most of our locklosses of late are from early ISC_LOCK states.  This is some weather: winds are now in the 30mph range and microseism is above the 50th percentile.

Images attached to this comment
H1 PSL (PSL)
marc.pirello@LIGO.ORG - posted 12:10, Friday 04 October 2024 - last comment - 17:59, Monday 11 November 2024(80467)
PSL FSS PZT Locked Comparison H1 and L1

In the process of investigating the locklosses due to FSS glitching and working on spare chassis for the FSS in the PSL, we compared the power spectrum of the PZT monitor between H1 and L1.  We found some difference in the power spectrum, plot attached.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 17:59, Monday 11 November 2024 (81210)PSL

I discovered today that LLO is indeed doing some additional digital filtering on their FSS_FASTMON channel, which would very likely explain the difference in spectra Marc shows above.  Just looking at the MEDM screen for the filter bank, it shows three filters in use (called "cts2V", "NPRO", and "toMHz") while LHO is using none; parameters of these are attached in a screenshot.  I'm not entirely sure what the purpose of these is, but from what I can tell there is an additional pole at 10Hz, which would explain the 1/f-looking drop in noise towards higher frequencies.
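For intuition on that last point, the magnitude response of a single real pole at 10Hz is flat below the pole and falls as 1/f (about -20dB/decade) above it.  A minimal sketch (Python/scipy), independent of the actual LLO filter coefficients:

import numpy as np
from scipy import signal

f_pole = 10.0   # Hz
# H(s) = 1 / (s/(2*pi*f_pole) + 1): unity gain at DC, single real pole at 10 Hz
sys = signal.TransferFunction([1.0], [1.0 / (2 * np.pi * f_pole), 1.0])

freqs = np.logspace(-1, 3, 5)                 # 0.1 Hz ... 1 kHz
_, mag, _ = signal.bode(sys, w=2 * np.pi * freqs)
for f, m in zip(freqs, mag):
    print(f"{f:8.2f} Hz : {m:6.1f} dB")       # ~ -20 dB/decade above 10 Hz

So a channel recorded through such a filter would show its noise dropping roughly as 1/f above ~10Hz relative to the unfiltered LHO channel.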

Images attached to this comment
H1 OpsInfo (SQZ)
sheila.dwyer@LIGO.ORG - posted 10:40, Friday 04 October 2024 - last comment - 08:52, Wednesday 08 January 2025(80461)
OPO temperature adjustment instructions

Because we moved the OPO crystal position yesterday (8045), the crystal absorption will be changing for a few weeks, and we will need to adjust the temperature setting for the OPO until it settles, as Tony and I did last night (80455).  Here are two sets of instructions: one that can be used while in observing to adjust this, and one that can be done when we lose lock or are out of observing for some other reason.

For these first few days, please follow the out of observing instructions when relocking, if this hasn't been done in the last few hours, and please try to do it in the last few hours of the evening shift so that the temperature is close to well tuned at the start of the owl shift.

Out of observing instructions (to be done while recovering from a lockloss, can be done while ISC_LOCK is locking the IFO) (screenshot) :

In observing instructions:

Both the OPO temperature and the SQZ phase are ignored in SDF and can be adjusted while we are in observing, but it's important to log the times of any adjustments made. 

Changes made to make this easier (Thanks Corey for beta testing the instructions):

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 08:52, Wednesday 08 January 2025 (82177)

Update to the In observing instructions:

Both the OPO temperature and the SQZ phase are ignored in SDF, but it's important to go out of observing for the change and to log any adjustments made.

  • To adjust the temperature in full lock, go to the purple SQZ_SCOPE button and choose the OPO temp scope, adjust the OPO temperature to maximize CLF_REFL_RF6_ABS (the green trace). 
  • After doing this it might be necessary to adjust the SQZ PHASE, which can be adjusted in steps of 5 degrees or so.  Set this to maximize the squeezing level, which you can do by watching the DARM FDS dtt template (available from the SQZ DTT purple button, or on NUC33).  It is easiest to adjust this for high-frequency squeezing (around 1kHz); this should get you close to good squeezing at 100 Hz.  This step isn't needed anymore, as the SQZ_ANG_ADJUST servo should take care of this.
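To make the "step the temperature, watch RF6" idea concrete, here is a deliberately hypothetical sketch (Python/pyepics).  Both channel names are stand-ins and have NOT been checked against the real SQZ screens; in practice this adjustment is done by hand from the SQZ_SCOPE screen (with times logged), not by script.

import time
from epics import caget, caput

OPO_TEMP_SETPOINT = "H1:SQZ-OPO_TEC_SETTEMP"          # hypothetical channel name
RF6_MONITOR       = "H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT"  # hypothetical channel name

def try_step(delta, settle_s=30):
    """Nudge the OPO temperature setpoint; keep the step only if RF6 improves."""
    before = caget(RF6_MONITOR)
    caput(OPO_TEMP_SETPOINT, caget(OPO_TEMP_SETPOINT) + delta)
    time.sleep(settle_s)                 # let the crystal temperature settle
    after = caget(RF6_MONITOR)
    if after < before:                   # got worse: undo the step
        caput(OPO_TEMP_SETPOINT, caget(OPO_TEMP_SETPOINT) - delta)
    return after

# A handful of small steps in one direction; flip the sign if RF6 keeps dropping.
for _ in range(5):
    try_step(+0.001)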
H1 ISC
sheila.dwyer@LIGO.ORG - posted 10:05, Friday 04 October 2024 (80464)
one DRMI lockloss investigation

Corey has had a series of locklosses from DRMI/PRMI and ALS, I've looked into one of these: 1412094082

In this case it looks like there is a large glitch seen in all DRMI LSC signals (and ASC, but it seems to start in LSC), with a small glitch about 4 seconds before lockloss, which DRMI recovers from.  Then there is a larger glitch about 1 second before lockloss, where POP18 drops below the trigger threshold and the DRMI guardian goes to DOWN; at the same time the ALS Y arm loses green lock, which then unlocks the ALS X arm.

This glitch doesn't seem to be caused by the DRMI ASC, although we were in the process of engaging that ASC when it happened.  It also isn't due to the BS ISI isolation coming on, which hadn't happened yet.

Edit:  I looked into a second lockloss, from PRMI: 1412095134.  This one is different in that it happens very quickly and the PRMI and ALS signals drop out at close to the same time.  This happened while the DRMI guardian was waiting for the refl WFS centering to converge.

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 15:55, Thursday 03 October 2024 - last comment - 11:47, Friday 04 October 2024(80453)
Lockloss @ 22:19 UTC

Lockloss @ 22:19 UTC - link to lockloss tool

No obvious cause, but it looks like the IMC lost lock at the same time as the arms. However, I don't suspect the FSS in this case.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 11:47, Friday 04 October 2024 (80466)
Images attached to this comment
H1 General (Lockloss, PSL)
anthony.sanchez@LIGO.ORG - posted 16:29, Wednesday 02 October 2024 - last comment - 11:21, Friday 04 October 2024(80436)
Control room has Internet access again and an unrelated Lockloss

TITLE: 10/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 21mph Gusts, 13mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.45 μm/s
QUICK SUMMARY:

Sudden lockloss @ 21:55 UTC, very likely caused by a PSL FSS issue.

The IMC had a hard time relocking after the lockloss.

NLN Reached @ 22:54 UTC
Observing reached @ 22:56 UTC

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 11:21, Friday 04 October 2024 (80465)

Definitely looks like the FSS had a large glitch and lost lock before DARM saw the lockloss.  This lockloss didn't have the FSS glitches happening beforehand, though.

Images attached to this comment