H1 ISC (PSL)
mayank.chaturvedi@LIGO.ORG - posted 16:13, Monday 03 February 2025 (82612)
M2 Measurement of the Laser for ISS PD alignment

[ Jennie, Siva, Mayank]
Laser: Axcel, Model Designation: BF-A64-0130-PP

We removed the lens (f = 250 mm) in front of the M2 meter and placed a lens with a 1 m focal length roughly 1 m away from the M2 meter.
The lens was adjusted so that the beam spot falls roughly at the center of the travel range of the stage.

We measured the M2 of the beam, which gave the following results:
M²x = 1.09 and M²y = 1.10
Beam Waist Diameter X = 702 µm
Beam Waist Diameter Y = 703 µm
Beam Waist Position X = 132 mm
Beam Waist Position Y = 121 mm
Rayleigh Length X = 332 mm
Rayleigh Length Y = 331 mm
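
As a quick consistency check (this assumes the usual 1064 nm wavelength, which isn't stated above), the Gaussian-beam relation

    z_R = \pi w_0^2 / (M^2 \lambda)

with w_0 = 351 µm (half the 702 µm waist diameter) and M² = 1.09 gives z_R ≈ 334 mm, in good agreement with the measured Rayleigh length of 332 mm.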

The M2 value was better than in the measurement done with the 250 mm lens. However, the beam waist positions in X and Y were offset from each other by about 10 mm.
This astigmatism could be due to a mismatch between the beam and the optical axis of the lens.
We tried manually adjusting the lens, but this was the minimum we could get by hand.
We can try tomorrow placing the lens on a vertical x-y stage to get finer movement.

 

Images attached to this report
Non-image files attached to this report
LHO General
ryan.short@LIGO.ORG - posted 15:59, Monday 03 February 2025 (82611)
Ops Eve Shift Start

TITLE: 02/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 20mph Gusts, 14mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.34 μm/s
QUICK SUMMARY: H1 has been locked for almost 5 hours.

H1 SQZ
camilla.compton@LIGO.ORG - posted 12:12, Monday 03 February 2025 (82604)
Improving SQZ guardians to avoid a repeat of issues over the weekend

Sheila, Camilla, Matt Todd

Sheila, Ibrahim, Tony and Ryan S had lots of SQZ issues over the weekend: 82588, 82599, 82597, 82585, 82581

Changes made to avoid these:

I'm also updating the SQZ troubleshooting wiki to hopefully be clearer and allow the operator team to more easily solve future SQZ issues.

Images attached to this report
H1 ISC
jim.warner@LIGO.ORG - posted 11:19, Monday 03 February 2025 - last comment - 10:32, Monday 03 March 2025(82608)
ESD glitch limit added to ISC_LOCK

During commissioning this morning, we added the final part of the ESD glitch limiting by adding the actual limit to ISC_LOCK. I added a limit value of 524188 to the ETMX_L3_ESD_UR/UL/LL/LR filter banks, which are the upstream part of the 28-bit DAC configuration for SUS ETMX. These limits are engaged in LOWNOISE_ESD_ETMX, but turned off again in PREP_FOR_LOCKING.

In LOWNOISE_ESD_ETMX I added:

            log('turning on esd limits to reduce ETMX glitches')
            for limits in ['UL','UR','LL','LR']:
                ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')

So if we start losing lock at this step, these lines could be commented out. The limit turn-off in PREP_FOR_LOCKING is probably benign.

Diffs have been accepted in SDF.

I think the only way to tell if this is working is to wait and see if we have fewer ETMX glitch locklosses, or if we start riding through glitches that would have caused locklosses in the past.

Comments related to this report
camilla.compton@LIGO.ORG - 11:48, Monday 03 February 2025 (82609)

Using the lockloss tool, we've had 115 Observe locklosses since Dec 01; 23 of those were also tagged ETM glitch, which is around 20%.

camilla.compton@LIGO.ORG - 12:15, Monday 10 February 2025 (82723)SEI

Since Feb 4th, we've had 13 locklosses from Observing, 6 of these tagged ETM_GLITCH: 02/10, 02/09, 02/09, 02/08, 02/08, 02/06.

sheila.dwyer@LIGO.ORG - 11:30, Tuesday 11 February 2025 (82743)

Jim, Sheila, Oli, TJ

We are thinking about how to evaluate this change. In the meantime we made a comparison similar to Camilla's: in the 7 days since this change, we've had 13 locklosses from observing, with 7 tagged by the lockloss tool as ETM glitch (and more than that identified by operators), compared to the 7 days before the change, when we had 19 observe locklosses, of which 3 had the tag.
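
One way to put a rough number on whether the tagged fraction really changed (just a sketch of how such a comparison could be done, not something we ran) is a Fisher exact test on the before/after counts:

    # Sketch: compare the ETM-glitch-tagged fraction of observe locklosses
    # in the 7 days after the change (7 of 13) vs. the 7 days before (3 of 19).
    from scipy.stats import fisher_exact

    table = [[7, 13 - 7],   # after the change: tagged, not tagged
             [3, 19 - 3]]   # before the change: tagged, not tagged
    result = fisher_exact(table, alternative='two-sided')
    print(result)           # odds ratio and p-value

With counts this small the uncertainty is large, which is consistent with the plan to collect more data before drawing conclusions.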

We will leave the change in for at least another week to get more data on what its impact is.

jim.warner@LIGO.ORG - 10:32, Monday 03 March 2025 (83138)

I forgot to post this at the time: we took the limit turn-on out of the guardian on Feb 12, with the last lock ending at 14:30 PST. Locks since that date have had the filters engaged, but since they multiply to 1, they shouldn't have an effect without the limit. We ran this scheme from Feb 3 17:40 UTC until Feb 12 22:15 UTC.

Camilla asked about turning this back on, and I think we should do that. All that needs to be done is uncommenting the lines (currently 5462-5464 in ISC_LOCK.py):

            #log('turning on esd limits to reduce ETMX glitches')
            #for limits in ['UL','UR','LL','LR']:
            #    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')

The turn-off of the limit is still in one of the very early states of ISC_LOCK, so nothing beyond accepting new SDFs should be needed.
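
For reference, that early-state turn-off presumably mirrors the turn-on; a sketch of the pattern (not copied from ISC_LOCK.py) would be:

            # Sketch only: switch the ESD limits back off, mirroring the turn-on above.
            for limits in ['UL', 'UR', 'LL', 'LR']:
                ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s' % limits).switch_off('LIMIT')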

LHO VE
david.barker@LIGO.ORG - posted 10:49, Monday 03 February 2025 (82607)
Mon CP1 Fill

Mon Feb 03 10:07:31 2025 INFO: Fill completed in 7min 28secs

TCmins [-89C, -87C] OAT (1C, 34F) DeltaTempTime 10:07:40

Images attached to this report
H1 ISC
ibrahim.abouelfettouh@LIGO.ORG - posted 10:11, Monday 03 February 2025 - last comment - 17:01, Monday 03 February 2025(82605)
Low Range Coherence Check

Coherence check prompted by the low range we've been having in our last lock; attached below.

Shown is worse range from 10 Hz to 55 Hz compared to Dec 15 '24.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 10:18, Monday 03 February 2025 (82606)

Using the range comparison script, it also looks like most of the range drop is from below 100 Hz.

Non-image files attached to this comment
ibrahim.abouelfettouh@LIGO.ORG - 17:01, Monday 03 February 2025 (82615)

DARM coherence check after running the A2L scripts.

Images attached to this comment
H1 PSL (PSL)
ibrahim.abouelfettouh@LIGO.ORG - posted 07:51, Monday 03 February 2025 (82603)
PSL Weekly Report - Weekly FAMIS 26355

Closes FAMIS 26355; last checked in alog 82451.


Laser Status:
    NPRO output power is 1.845W
    AMP1 output power is 70.07W
    AMP2 output power is 135.9W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 5 days, 21 hr 55 minutes
    Reflected power = 27.05W
    Transmitted power = 101.4W
    PowerSum = 128.4W

FSS:
    It has been locked for 0 days 0 hr and 49 min
    TPD[V] = 0.7664V

ISS:
    The diffracted power is around 3.2%
    Last saturation event was 0 days 0 hours and 49 minutes ago


Possible Issues:
    PMC reflected power is high

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:42, Monday 03 February 2025 (82602)
OPS Day Shift Start

TITLE: 02/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.36 μm/s
QUICK SUMMARY:

IFO is LOCKING at ACQUIRE_PRMI

IFO was locking when I came in, having lost lock only 30 mins ago (14:57 UTC).

LHO General
ryan.short@LIGO.ORG - posted 00:39, Monday 03 February 2025 (82601)
Ops Eve Shift Summary

TITLE: 02/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Mostly quiet shift with a drop from observing due to SQZ and a lockloss near the end of the shift. H1 just finished an initial alignment and has started relocking.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 21:52, Sunday 02 February 2025 (82600)
Lockloss @ 05:36 UTC

Lockloss @ 05:36 UTC - link to lockloss tool

No obvious cause, and I don't really see evidence of an ETM glitch this time.

Images attached to this report
H1 SQZ
ryan.short@LIGO.ORG - posted 20:01, Sunday 02 February 2025 (82599)
SHG Temperature Adjusted

At 01:02 UTC, H1 dropped from observing due to the SHG dropping out, and it would not come back on its own. I brought SQZ_MANAGER to 'DOWN' and raised the SHG temperature (H1:SQZ-SHG_TEC_SETTEMP) from 34.6 to 35.2 to raise the OPO ISS control signal and SHG green power back up to around 2.8 and 120mW, respectively. After that, I requested SQZ_MANAGER back to 'FREQ_DEP_SQZ' and everything came back without issue. Once I accepted the new SHG temperature in SDF, H1 returned to observing at 01:13 UTC.
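
For context, the adjustment itself is just a setpoint write followed by watching the readbacks recover; a minimal sketch of that kind of step (assumes a guardian/ezca session; the POWERMON readback channel name below is a guess based on the "SHG DC POWERMON" trend mentioned elsewhere on this page):

    import time

    # Step the SHG TEC setpoint and watch the green power recover.
    # H1:SQZ-SHG_TEC_SETTEMP is from this entry; the readback name is a guess.
    for temp in (34.8, 35.0, 35.2):
        ezca.write('SQZ-SHG_TEC_SETTEMP', temp)
        time.sleep(30)  # let the TEC settle before reading back
        print(temp, ezca.read('SQZ-SHG_GR_DC_POWERMON'))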

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:32, Sunday 02 February 2025 (82597)
OPS Day Shift Summary

TITLE: 02/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 20:31 UTC

Many, many issues today, all SQZ related. In short, our range is terrible and we know it is SQZ related.

Most of the story can be found in alog 82588 (and comments) but to summarize:

Otherwise, Dave did a vacstat restart due to a glitch - alog 82595

LOG:

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:04, Sunday 02 February 2025 (82596)
Ops Eve Shift Start

TITLE: 02/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 22mph Gusts, 18mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.41 μm/s
QUICK SUMMARY: H1 has been locked for 3.5 hours. Ibrahim is bringing me up to speed on the SQZ issues of the morning.

H1 CDS
david.barker@LIGO.ORG - posted 13:51, Sunday 02 February 2025 (82595)
VACSTAT BSC3 sensor glitch detected, service restarted

VACSTAT detected a single BSC3 sensor glitch at 13:34 this afternoon. Last glitch was 20 days ago. I restarted vacstat_ioc.service on cdsioc0 at 13:46 and disabled HAM6's gauge.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 11:10, Sunday 02 February 2025 - last comment - 12:57, Sunday 02 February 2025(82592)
Lockloss 19:02 UTC

Unknown cause lockloss. Sheila and I were troubleshooting a host of SQZ mysteries when this happened (not actually touching anything, just discussing). It's tempting to say it was SQZ related, since all our issues this weekend have been, but the cause is as yet unknown.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 12:57, Sunday 02 February 2025 (82594)

Back to NLN as of 20:31 UTC

Sheila and I troubleshot more SQZ stuff, ultimately deciding to turn off SQZ ANG ADJUST since it was oscillating the phase, and thus our range, unnecessarily. I edited sqzparams.py to turn it off and then had to edit the SQZ ANG guardian to make DOWN the nominal state (see the sketch below). Now we're observing, but we will need to manually change the angle to optimize range, which I just did and will do again once thermalized.
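
For reference, making DOWN the nominal state is a one-line, module-level change in the guardian code; a sketch of the sort of edit described (not the actual SQZ_ANG file, and the sqzparams flag name is hypothetical):

    # In the SQZ_ANG guardian module (sketch):
    nominal = 'DOWN'   # guardian now parks here instead of the angle-adjust state

    # In sqzparams.py (sketch; the real parameter name may differ):
    use_sqz_ang_adjust = False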

H1 SQZ (Lockloss, SQZ)
ibrahim.abouelfettouh@LIGO.ORG - posted 08:41, Sunday 02 February 2025 - last comment - 16:47, Sunday 02 February 2025(82588)
SQZ Morning Troubles (Lockloss 16:17)

SQZ Morning Troubles

After reading Tony's alog, I suspected that this was the SHG power being too low, causing lock issues. I trended Sheila's SHG power channel while adjusting the SHG temp to maximize it. This locked SQZ almost immediately - screenshot below.

However, despite SQZ locking, it looked like in trying to lock for hours the SQZ_ANG_ASC had gone awry and, as the wiki puts it, "run away". I requested the reset SQZ_ANG guardian state but this didn't work. Then I saw another SQZ angle reset state, called "RESET_SQZ_ANG_FDS", which immediately broke guardian. The node was in error and a bunch of settings in ZM4 and ZM5 got changed. I reloaded the SQZ guardian and successfully got back to FREQ_DEP_SQZ. I saw there were some SDF changes that had occurred, so I undid them, successfully making guardian complain less and less. Then there were 2 TCS SDF OFFSETs, the ZM4/5 M2 SAMS OFFSETs, which were off by 100V, sitting at their max of 200V. I recalled that when SQZ would drop out, it would notify something along the lines of "is this thing on?". I then assumed that because this slider was at its max, all I'd need to do was revert this change, which had happened over 3 hrs ago. I did that and then we lost lock immediately.

What I think happened in order:

  1. We get to NLN, but SQZ FC can't lock due to low SHG power. It had degraded from the 2.2 set point Sheila and I had left it at down to 1.45.
  2. Guardian maxes out its ability to attempt locking by using ZMs and SQZ Ang ASC (if those aren't the same thing) and sits there, not having FC locked and not being able to move any more.
  3. Tony gets called and realizes the issue, but can only get to a certain level before LOCK_LO_FDS rails, I assume due to the above change. This means that even if we readjust the power, the ZM sliders would have to be adjusted.
  4. I get to site, change the temp to maximize SHG power, and SQZ locks, but in this weird state where ZM5 and ZM4 have saturated M2 SAMS OFFSETs. I readjust, but the slider change is so violent that it causes a lockloss.

Now I'm relocking, having reverted the M2 SAMS OFFSETs that guardian made and having re-maximized the SHG power via the temperature (which interestingly had to go down, contrary to yesterday). I've also attached a screenshot of the SDF change related to AWC-ZM4_M2_SAMS_OFFSET, because I'm about to revert the change on the assumption it is erroneous.

What I find interesting is that lowering the SHG temp increases the SHG power first linearly, then past some value with one big and final jump (kind of like a temperature resonance, if you can excuse my science fiction). You can see this in the SHG DC POWERMON trend, which shows a huge final jump.

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 10:05, Sunday 02 February 2025 (82589)

Recovered from Lockloss and got to OBSERVING automatically - 17:58 UTC.

The only SDF change was the one I made re-adjusting the SHG temperature to maximize its power.

Images attached to this comment
ibrahim.abouelfettouh@LIGO.ORG - 10:48, Sunday 02 February 2025 (82591)

Investigating further, I realize that once I fixed the SHG temp issue, the flashing notification was "SQZ ASC AS42 not on??", for which I should have followed alog 71083. Instead, SDF led me to what the specific issue with ASC was. I will continue to monitor and investigate. Since the range is fluctuating, I will probably go into commissioning to try to optimize squeezing once thermalized.

sheila.dwyer@LIGO.ORG - 11:44, Sunday 02 February 2025 (82593)

Ibrahim summarized this over the phone for me, and here's our summary of 4 things that need a fix or looking into (sometime):

  • The SQZ angle shouldn't be adjusted when there is no AS42 light. The ASC has a check for this condition (no AS42 light means the light from the squeezer is so misaligned it isn't reaching the AS WFS or the OMC), but SQZ_ANG_ADJUST seems not to check.
  • The SQZ_MANAGER RESET_SQZ_ANG_FDS state has some kind of typo that made the SQZ manager go into error (log attached: main sets a self.timer['wait'] but then run refers to self['wait']; this state must never have been run before). I fixed the typo and loaded it, but haven't tested it (see the sketch after this list).
  • RESET_SQZ_ASC sets the ZM4 and ZM5 P and Y lock gains to 1, but that is not what they are normally set to. We can either re-write this to check the gains and reset them, or move these gains to the servo filters. (I haven't done either of these things this morning.)
  • Why did the PSAMs offsets get set to 200V at 5:49 AM local time? 
    • I looked at this; it seems that SQZ_MANAGER was just sitting in FDS_READY_IFO at the time, the strain gauge servo settings didn't change, and the two changes happened at exactly the same time, so SDF seems like the most likely thing to have changed these. I had trouble figuring out which SDF table these are in. It does seem like the strain gauge is at the same location it was before all this happened, so the PSAMS should be at the same curvature.
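
A minimal sketch of the timer pattern the fixed RESET_SQZ_ANG_FDS state presumably follows (standard guardian convention, not the actual SQZ_MANAGER code):

    from guardian import GuardState

    class RESET_SQZ_ANG_FDS(GuardState):
        def main(self):
            # set a countdown timer; note self.timer['wait'], not self['wait']
            self.timer['wait'] = 5

        def run(self):
            # self.timer['wait'] evaluates True once the 5 s have elapsed
            if not self.timer['wait']:
                return False   # stay in this state until the timer expires
            return True        # done; allow the transition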
Non-image files attached to this comment
ryan.short@LIGO.ORG - 16:47, Sunday 02 February 2025 (82598)

Regarding Sheila's final bullet point above, the change to the PSAMS offsets happened as a result of Tony's misclick (alog 82585) where he brought SQZ_MANAGER to 'SET_ZM_SLIDERS_IFO [52]', as reported by the Guardian log at that time:

ryan.short@cdsws25[~]: guardctrl log -a 1422539317 -b 1422539318 SQZ_MANAGER
2025-02-02_13:48:19.274841Z SQZ_MANAGER [DOWN.run] timer['pause'] done
2025-02-02_13:48:19.445441Z SQZ_MANAGER EDGE: DOWN->SET_ZM_SLIDERS_IFO
2025-02-02_13:48:19.445441Z SQZ_MANAGER calculating path: SET_ZM_SLIDERS_IFO->SET_ZM_SLIDERS_IFO
2025-02-02_13:48:19.449362Z SQZ_MANAGER executing state: SET_ZM_SLIDERS_IFO (52)
2025-02-02_13:48:19.450448Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.enter]
2025-02-02_13:48:19.491549Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:SUS-FC2_M1_OPTICALIGN_P_OFFSET => 252.9999999999991
2025-02-02_13:48:19.546438Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:SUS-ZM4_M1_OPTICALIGN_P_OFFSET => -559.3071
2025-02-02_13:48:19.708738Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:SUS-FC2_M1_OPTICALIGN_Y_OFFSET => 43.900000000000425
2025-02-02_13:48:19.792385Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:AWC-ZM4_M2_SAMS_OFFSET => 200
2025-02-02_13:48:19.837611Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:AWC-ZM5_M2_SAMS_OFFSET => 200

I also show this interaction in the attached ndscope.

Images attached to this comment