Reports until 15:11, Friday 14 March 2025
H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 15:11, Friday 14 March 2025 (83375)
Lockloss 22:10 UTC

ETM Glitch Lockloss. Relocking now.

H1 ISC
ibrahim.abouelfettouh@LIGO.ORG - posted 13:11, Friday 14 March 2025 - last comment - 14:39, Friday 14 March 2025(83373)
A2L All Run

During commissioning, I ran the a2l script for all test masses. Results and screenshots below; a short tabulation sketch follows the list.

DOF    | Initial | Final | Diff
ETMX P | 3.23    | 3.22  | -0.01
ETMX Y | 4.9     | 4.95  | 0.05
ETMY P | 5.52    | 5.4   | -0.12
ETMY Y | 1.35    | 1.4   | 0.05
ITMX P | -0.53   | -0.56 | -0.03
ITMX Y | 3.21    | 3.24  | 0.03
ITMY P | 0.06    | 0.02  | -0.04
ITMY Y | -2.74   | -2.77 | -0.03
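
For quick comparison, here is a minimal Python sketch (not part of the a2l script itself) that tabulates the gain changes above; the 0.1 "large change" flag threshold is only an illustrative choice.

# Tabulate the A2L gain changes listed above.  Numbers are copied from
# this entry; the layout and the 0.1 threshold are illustrative, not site code.
a2l_gains = {
    "ETMX P": (3.23, 3.22),
    "ETMX Y": (4.9, 4.95),
    "ETMY P": (5.52, 5.4),
    "ETMY Y": (1.35, 1.4),
    "ITMX P": (-0.53, -0.56),
    "ITMX Y": (3.21, 3.24),
    "ITMY P": (0.06, 0.02),
    "ITMY Y": (-2.74, -2.77),
}

for dof, (initial, final) in a2l_gains.items():
    diff = final - initial
    flag = "  <-- largest change" if abs(diff) > 0.1 else ""
    print(f"{dof}: {initial:+.2f} -> {final:+.2f} (diff {diff:+.2f}){flag}")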

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 14:39, Friday 14 March 2025 (83374)

Reverted SDFs (QUAD TRamps and ASC Matrices) Attached.

Images attached to this comment
H1 SQZ
camilla.compton@LIGO.ORG - posted 12:10, Friday 14 March 2025 (83370)
SQZ NLG Data Set

Sheila, Camilla

NLG measurements (following the procedure in 76542); a sketch of the formulas used appears after the table:

OPO grTrans setpoint (uW) | Amplified Max | Amplified Min | UnAmp    | Dark   | NLG (usual) | NLG (maxmin) | OPO Gain
120                       | 0.05615       | 0.000255      | 0.000946 | 2.5e-5 | 61.0        | 62.7         | -12
110                       | 0.03241       | 0.000273      |          |        | 35.2        | 35.4         | -11
90                        | 0.01494       | 0.000288      |          |        | 16.2        | 16.8         | -9
70                        | 0.008177      | 0.000319      |          |        | 8.9         | 9.2          | -7
40                        | 0.003717      | 0.000392      |          |        | 4.0         | 4.2          | -4
25                        | 0.002531      | 0.000464      |          |        | 2.7         | 2.8          | -2
80                        | 0.010866      | 0.000302      |          |        | 11.8        | 12.2         | -8
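
For reference, a short Python sketch of how the two NLG columns appear to be computed; these expressions reproduce the tabulated values, but see 76542 for the actual measurement procedure.

import math

def nlg_usual(amp_max, unamp, dark):
    """NLG from the amplified max vs. the unamplified seed level, dark-subtracted."""
    return (amp_max - dark) / (unamp - dark)

def nlg_maxmin(amp_max, amp_min):
    """NLG estimated from the amplified max/min ratio alone."""
    r = math.sqrt(amp_max / amp_min)
    return ((r + 1.0) / 2.0) ** 2

# First row of the table (120 uW setpoint):
print(nlg_usual(0.05615, 0.000946, 2.5e-5))   # ~61.0
print(nlg_maxmin(0.05615, 0.000255))          # ~62.7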

For the 25 uW setpoint we needed to reduce H1:SQZ-OPO_PZT_1_SCAN_TRIGGER_CHANNEL_1_LEVEL from 0.5 to 0.25.

Data saved to camilla.compton/Documents/sqz/templates/dtt/20250314_NLG.xml
An example of the data is attached. Interestingly, when we took the initial NO SQZ data, DARM looked different than in 2024 (comparison attached), using H1:CAL-DELTAL_EXTERNAL_DQ.

FC2 was misaligned during this dataset. We aimed for 3 min 30 s of data at each point; a sketch for re-fetching these spans is given after the table below.

In the Mean SQZ data there was a strange peak around 300 Hz; we checked whether this was seeding by moving H1:SQZ-LO_SERVO_SLOWOUTOFS, but this made no difference.

Type        | Time (UTC)          | NLG | SQZ dB @ 1 kHz | Angle  | DTT Ref
No SQZ      | 16:18:30 - 16:22:30 | N/A | N/A            | N/A    | ref0
ASQZ        | 16:34:30 - 16:38:00 | 61  | 21.2           | (-)86  | ref1
ASQZ -10deg | 16:38:30 - 16:42:00 | 61  | 20.1           | (-)75  | ref2
SQZ         | 16:45:00 - 16:48:30 | 61  | -4.5           | 166    | ref3
MSQZ        | 16:49:00 - 16:52:30 | 61  | 18.6           | N/A    | ref4
ASQZ        | 17:04:30 - 17:08:00 | 35  | 19.2           | (-)88  | ref5
SQZ         | 17:11:30 - 17:15:00 | 35  | -4.6           | 170    | ref6
MSQZ        | 17:15:30 - 17:19:00 | 35  | 16.3           | N/A    | ref7
ASQZ        | 17:31:00 - 17:34:30 | 16  | 15.4           | (-)93  | ref8
SQZ         | 17:38:00 - 17:41:30 | 16  | -4.6           | 157    | ref9
MSQZ        | 17:42:00 - 17:45:30 | 16  | 12.4           | N/A    | ref10
ASQZ        | 17:55:30 - 17:59:00 | 9   | 12.5           | (-)96  | ref11
SQZ         | 18:03:00 - 18:06:30 | 9   | -4.5           | 161    | ref12
MSQZ        | 18:07:00 - 18:10:30 | 9   | 9.5            | N/A    | ref13
ASQZ        | 18:18:30 - 18:22:00 | 4   | 8.2            | (-)114 | ref14
SQZ         | 18:25:00 - 18:28:30 | 4   | -3.9           | 155    | ref15
MSQZ        | 18:29:00 - 18:32:30 | 4   | 5.5            | N/A    | ref16
ASQZ        | 18:48:00 - 18:51:30 | 2.7 | 5.9            | (-)115 | ref17
SQZ         | 18:52:30 - 18:56:00 | 2.7 | -3.4           | 155    | ref18
MSQZ        | 18:57:30 - 19:01:00 | 2.7 | 3.6            | N/A    | ref19
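
A minimal sketch, assuming gwpy and NDS access from a control-room or LDAS machine, for re-fetching one of the spans above outside of DTT; the span chosen here is just an example.

from gwpy.timeseries import TimeSeries

# SQZ at NLG 61 (ref3); channel and times are from this entry
start = "2025-03-14 16:45:00"
end   = "2025-03-14 16:48:30"

darm = TimeSeries.get("H1:CAL-DELTAL_EXTERNAL_DQ", start, end)
asd = darm.asd(fftlength=8, overlap=4)
print(asd.value_at(1000))   # ASD value at 1 kHz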
Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:39, Friday 14 March 2025 (83371)
Fri CP1 Fill

Fri Mar 14 10:13:51 2025 INFO: Fill completed in 13min 48secs

 

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:29, Friday 14 March 2025 (83369)
OPS Day Shift Start

TITLE: 03/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 0mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.27 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 23:59 UTC (14hr 40 min lock!)

We have planned commissioning today from 6 AM - 2 PM (though we will stay in OBSERVING until a commissioner shows up).

H1 General
oli.patane@LIGO.ORG - posted 22:00, Thursday 13 March 2025 (83368)
Ops Eve Shift End

TITLE: 03/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Currently Observing at 148Mpc and have been Locked for 5 hours. The range is still low and fluctuating a bit, but it's not as bad as it was before I made the changes to the multiple phases (83367). Secondary microseism is going back down, and the wind is low.
LOG:

23:30UTC Relocking

23:50 NOMINAL_LOW_NOISE
    - Ran SQZ ANG ADJUST to check if the SQZ servo was in a location to pull the ADF phase away
23:59 Observing

02:55 Out of Observing to check sqz angle
    - Ran SQZ ANG ADJUST and had to adjust some phases (83367)

03:13 Back into Observing

Start Time System Name Location Lazer_Haz Task Time End
21:21 ISC Camilla OpticsLab y(local) Beam dump positioning for PM1 22:30
21:47 SEI Jim LVEA n SEI rack part grab 21:50
22:46 ISC Mayank Opt lab Yes ISS PD array 00:48
22:58 ISC Keita, Rahul Opt Lab Yes PM1 optic testing 00:03
23:38 ISC Siva OpticsLab y(local) ISS PD array work 00:48
23:53 ISC Camilla OpticsLab y(local) Checking in 00:01

 

H1 General (SQZ)
oli.patane@LIGO.ORG - posted 20:51, Thursday 13 March 2025 (83367)
Ops EVE Midshift Status: SQZ Edition

We are Observing at 150Mpc and have been Locked for 4 hours.

We just got back into Observing after I took us out to touch up some sqz phases. This morning Camilla and Sheila had found (83350) that the SQZ servo was pulling the ADF phase away and causing large range drops. The same thing also happened the lock after that.

When we first got to NOMINAL_LOW_NOISE, I tried running SQZ ANG ADJUST to check if we were in a spot where the ADF phase would end up getting pulled away. At the location where the squeezing was best, the ADF sqz angle was pretty good and the demod error was near 0 (ndscope1). Because of this I didn't adjust anything, just went back into FDS and then into Observing. I planned on monitoring the ADF phase angle and then doing these tests again once we were thermalized.

Over the course of thermalization, the same issue with the ADF phase was happening, although luckily the servo wasn't taking the phase in circles like it had been doing the previous couple of locks (ndscope2). Once we were thermalized, I took us out of Observing and ran SQZ ANG ADJUST. From that I saw that the best location for the sqz phase was now very close to a turning point (ndscope3). To move away from this location, I adjusted SQZ-ADF_VCXO_PLL_PHASE from 25 down to 10, and then adjusted SQZ-ADF_OMC_TRANS_PHASE from -107 to -117 to get SQZ-CLF_REFL_RF6_PHASE_PHASEDEG to oscillate around 0. The yellow BLRMS looks a lot better now and our range has gone up by 10Mpc, but my changes did make the purple BLRMS worse by a bit (ndscope4, ndscope5). I accepted the phase changes in SDF and we went back into Observing (SDF diffs).
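
For the record, a minimal sketch of the equivalent channel writes, assuming pyepics access from a control-room workstation; in practice these were made from the SQZ screens and SQZ ANG ADJUST rather than a script.

from epics import caget, caput

# Phase moves described above
caput("H1:SQZ-ADF_VCXO_PLL_PHASE", 10)       # was 25
caput("H1:SQZ-ADF_OMC_TRANS_PHASE", -117)    # was -107

# Check that the ADF demod error now oscillates around zero
print(caget("H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG"))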

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 16:44, Thursday 13 March 2025 (83366)
VACSTAT LVEA trip levels lowered to detect this morning's event

Janos, Gerardo, Dave:

The slope trip level for some LVEA gauges was reduced from 1.0e-10 Torr to 3.0e-12 Torr, so that if an event similar to this morning's were to happen again, VACSTAT would alarm on it.

Trending the SLOPE channels over the past week showed that BSC3's gauge (PT132) cannot have its trip level lowered without causing many false-positive alarms. The attachments show the trend and which gauges now have the lower trip level.
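
A rough sketch of the kind of false-positive check described above, using a synthetic slope trend rather than real gauge data (the actual check was done by trending the SLOPE channels over the past week).

import numpy as np

def count_trips(slope_trend, trip_level):
    """Count how many samples of a slope trend would exceed a candidate trip level."""
    return int(np.sum(np.asarray(slope_trend) > trip_level))

# Synthetic one-week, one-minute trend of a gauge's slope (units: Torr per unit time)
rng = np.random.default_rng(0)
fake_trend = np.abs(rng.normal(0, 1e-12, size=7 * 24 * 60))

print(count_trips(fake_trend, 3.0e-12))   # new, lower trip level
print(count_trips(fake_trend, 1.0e-10))   # old trip level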

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:30, Thursday 13 March 2025 (83363)
Ops Day Shift End

TITLE: 03/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Calibration and commissioning time this morning, but we have also been struggling a bit with relocking. We aren't entirely sure what particular problem is causing the lock losses on acquisition, but the elevated wind and useism aren't helping. We are now on DC readout though, so we're making progress.
LOG:

Start Time System Name Location Lazer_Haz Task Time End
16:01 PEM Robert LVEA n Setup shaker 16:22
16:22 IAS Jason LVEA n FARO info grab 16:38
16:31 PEM Robert EX n Shaker setup 16:59
17:57 SEI Jim, Mitchell LVEA,FCES n Check on racks 17:58
21:21 ISC Camilla OpticsLab y(local) Beam dump positioning for PM1 22:30
21:47 SEI Jim LVEA n SEI rack part grab 21:50
22:46 ISC Mayank Opt lab Yes ISS PD array ongoing
22:58 ISC Keita, Rahul Opt Lab Yes PM1 optic testing ongoing
H1 General
oli.patane@LIGO.ORG - posted 16:22, Thursday 13 March 2025 (83365)
Ops Eve Shift Start

TITLE: 03/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 25mph Gusts, 16mph 3min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.34 μm/s
QUICK SUMMARY:

At ENGAGE_ASC_FOR_FULL_IFO and working on relocking after lots of lower-state locklosses. These seem to be due to various causes, including high-ish winds and gusts. Secondary microseism is also a bit elevated.

LHO FMCS
eric.otterman@LIGO.ORG - posted 14:55, Thursday 13 March 2025 (83361)
H2 HVAC sub panel breaker replacement
I replaced the three-pole, 25-amp circuit breaker that controls the line voltage for an HVAC circuit in the H2. Voltage output was 490 volts. The previous breaker was damaged by heat caused by a loose connection.
H1 General
thomas.shaffer@LIGO.ORG - posted 14:54, Thursday 13 March 2025 (83360)
Lock loss 2132UTC

1425936756

No obvious cause, but there was a small change in the PRG a second or two before the lock loss.

H1 ISC (OpsInfo)
sheila.dwyer@LIGO.ORG - posted 11:57, Thursday 13 March 2025 - last comment - 11:22, Tuesday 18 March 2025(83357)
ALS DIFF sped up a bit

We lost lock from the calibration, so we tried to lock ALS without the linearization (some background in this alog: 83278). An active measurement of the transfer function from DRIVEALIGN_L to MASTER out was 1 without the linearization and -0.757 with the linearization on, so I've changed the DRIVEALIGN gain to -1.3 in the ALS_DIFF guardian for when use_ESD_linearization is set to false.

We tried this once, and it stayed locked through a DARM gain of 400 but unlocked as the UIM boosts were turning on. We tried this again but it also didn't lock DIFF, so it is now out of the guardian again.

I looked at a few more of the past ALS DIFF locks; in both successful and unsuccessful attempts we are saturating the ESD (either the DAC or the limiter in the linearization) in the first steps of locking DIFF. We do these steps quite slowly: stepping the DARM gain to 40, waiting for the DARM1 ramp time, stepping it to 400, then waiting twice the ramp time, then engaging the boosts for offloading to L1. I reduced the ramp time from 5 seconds to 2 seconds to make this go faster. This worked on the first locking attempt, but that could be a coincidence.
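
A sketch of the sequence described above: the real logic lives in the ALS_DIFF guardian, pyepics is assumed here, and the DARM1 gain/ramp channel names are assumptions, not copied from the guardian code.

import time
from epics import caput

RAMP_TIME = 2  # seconds; reduced from 5 s

caput("H1:LSC-DARM1_TRAMP", RAMP_TIME)   # assumed ramp-time channel
caput("H1:LSC-DARM1_GAIN", 40)           # assumed gain channel
time.sleep(RAMP_TIME)                    # wait for the DARM1 ramp time
caput("H1:LSC-DARM1_GAIN", 400)
time.sleep(2 * RAMP_TIME)                # wait twice the ramp time
# ...then engage the boosts for offloading to L1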

We will leave this in for a while so that we can compare how frequently we lose lock at LOCKING_ALS. In the last 7 days we've had 48 LOCKING_ALS locklosses and 19 locklosses from NLN, so roughly 2.5 ALS locklosses per lock stretch.

Comments related to this report
sheila.dwyer@LIGO.ORG - 10:14, Tuesday 18 March 2025 (83427)

Since the time of this alog, around 19 UTC on March 13th, we've had 68 LOCKING_ALS locklosses and 12 NLN locklosses, so about 6 locklosses per successful lock. It seems, though, that the change to 2 seconds was never in place and the guardian code still said 5 seconds, so this issue seems to be getting worse without any change.

Now I've loaded the change to 2 seconds, so this should be sped up after today's maintenance window.

sheila.dwyer@LIGO.ORG - 11:22, Tuesday 18 March 2025 (83429)

I've looked at a bunch more of these locklosses; they mostly happen while the DARM gain is ramping, less often as the boosts are coming on in L1, and one I saw happened while COMM was locking.

In all the cases the linearization seems to hit its limiter before anything else goes wrong.

H1 CAL
thomas.shaffer@LIGO.ORG - posted 11:11, Thursday 13 March 2025 (83347)
Failed Calibration Sweep 1530 UTC

The calibration measurement caused another lock loss at the end of the measurement (alog83351)

Simulines start:

PDT: 2025-03-13 08:36:53.447733 PDT
UTC: 2025-03-13 15:36:53.447733 UTC
GPS: 1425915431.447733
 

End of script output:

2025-03-13 15:58:59,923 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 42.45, and amplitude 0.41412, is finished. GPS start and end time stamps: 1425916738, 1425916753
2025-03-13 15:58:59,923 | INFO | Scanning frequency 43.6 in Scan : L2_SUSETMX_iEXC2DARMTF on PID: 795290
2025-03-13 15:58:59,923 | INFO | Drive, on L2_SUSETMX_iEXC2DARMTF, at frequency: 43.6, is now running for 23 seconds.
2025-03-13 15:59:01,039 | INFO | Drive, on DARM_OLGTF, at frequency: 1083.3, and amplitude 1e-09, is finished. GPS start and end time stamps: 1425916738, 1425916753
2025-03-13 15:59:01,039 | INFO | Scanning frequency 1200.0 in Scan : DARM_OLGTF on PID: 795280
2025-03-13 15:59:01,039 | INFO | Drive, on DARM_OLGTF, at frequency: 1200.0, is now running for 23 seconds.
2025-03-13 15:59:02,168 | INFO | Drive, on L1_SUSETMX_iEXC2DARMTF, at frequency: 11.13, and amplitude 13.856, is finished. GPS start and end time stamps: 1425916735, 1425916753
2025-03-13 15:59:02,168 | INFO | Scanning frequency 12.33 in Scan : L1_SUSETMX_iEXC2DARMTF on PID: 795287
2025-03-13 15:59:02,169 | INFO | Drive, on L1_SUSETMX_iEXC2DARMTF, at frequency: 12.33, is now running for 25 seconds.
2025-03-13 15:59:07,162 | ERROR | IFO not in Low Noise state, Sending Interrupts to excitations and main thread.
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L2_CAL_EXC
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L3_CAL_EXC
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:CAL-PCALY_SWEPT_SINE_EXC
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L1_CAL_EXC
2025-03-13 15:59:07,163 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
2025-03-13 15:59:07,163 | ERROR | Ramping Down Excitation on channel H1:LSC-DARM1_EXC
ICE default IO error handler doing an exit(), pid = 795246, errno = 32
PDT: 2025-03-13 08:59:11.496308 PDT
UTC: 2025-03-13 15:59:11.496308 UTC
GPS: 1425916769.496308
 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:22, Thursday 13 March 2025 - last comment - 16:24, Thursday 13 March 2025(83345)
LVEA vacuum glitch coincident with H1 lockloss

We had a vacuum glitch in the LVEA at 01:57 PDT on Thu 13 Mar 2025, coincident with a lockloss. The glitch was an order of magnitude below VACSTAT's alarm levels, so no VACSTAT alert was issued.

The glitch is seen in most LVEA gauges, and took about 10 minutes to pump down.

Attached plots show a sample of LVEA gauges, the VACSTAT channels for LY, and the ISC_LOCK lockloss.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 08:29, Thursday 13 March 2025 (83346)

Adding the lock loss tag and a link to the lock loss tool - 1425891454

There's no blatantly obvious cause. The wind was definitely picking up right before the lock loss, which can be seen in many of the ASC loops, but I'm not sure it was enough to cause the lock loss.
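
A quick check, assuming gwpy is available, that the lockloss GPS time quoted above lines up with the 01:57 PDT vacuum glitch.

from gwpy.time import from_gps

# GPS time from the lockloss tool link above
print(from_gps(1425891454))   # 2025-03-13 08:57:16 UTC, i.e. 01:57:16 PDT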

camilla.compton@LIGO.ORG - 09:58, Thursday 13 March 2025 (83353)

This may be a what-came-first scenario (the lockloss or the VAC spike), but looking at the past 3-hour trend of H1:CDS-VAC_STAT_LY_Y4_PT124B_VALUE (attached), it does not go above 3.48e-9. However, 3 seconds before the lockloss it jumps to 3.52e-9 (attached). This seems suspiciously like whatever caused the VAC spike came before the lockloss.

There is a LL issue open to add a tag for this: #227, as it was previously seen in 82907.

Images attached to this comment
janos.csizmazia@LIGO.ORG - 13:35, Thursday 13 March 2025 (83359)
The only plausible cause of a vacuum glitch without the laser hitting something first is an ion pump (IP) glitch. Gerardo is looking into this now.

Otherwise, I would say the laser hit something, which didn't cause a lockloss right away (only 3 seconds later) but obviously caused the pressure spike.

The rising wind is just too big of a coincidence to me...
jordan.vanosky@LIGO.ORG - 15:14, Thursday 13 March 2025 (83362)

The corner RGA caught a small change in H2 and N2 at the time of the vacuum glitch: AMU 2 delta = 5.4e-10 amp, AMU 28 delta = 1.61e-10 amp.

Attached is a screenshot of the RGA scan just before the vac glitch (1:57:02 AM, 3/13/25); the second screenshot is ~10 seconds after. The top trace is the 0-100 AMU scan at that time; the bottom trace is the trend of typical gas components (AMU 2, 18, 28, 32, 40, 44) over ~50 minutes. The vertical line on the bottom trace corresponds to the time the scan was collected.

The RGA is a Pfeiffer Prisma Plus, 0-100 AMU, with 10 ms dwell time and the electron multiplier enabled at 1200 V. Scans run as continuous 0-100 AMU sweeps.

Images attached to this comment
gerardo.moreno@LIGO.ORG - 16:24, Thursday 13 March 2025 (83364)VE

The main ion pumps reacted to the "pressure spike" after it was noted by other instruments such as the vacuum gauges; see the first attached plot.

The second plot shows the different gauges located at the corner station; the "pressure spike" appears to have been noted first by two gauges, PT120 (on the dome of BSC2) and PT152 (at the relay tube). The amplitude of the "pressure spike" was very small; its signature was only noted at the corner station, not at the mids or ends.

Two of the ion pumps at the filter cavity tube responded to the "pressure spike"; see the third attachment.

Also, the gauges located on the filter cavity tube noted the spike, including the "Relay Tube".

Images attached to this comment
H1 SUS (ISC)
thomas.shaffer@LIGO.ORG - posted 10:49, Wednesday 12 March 2025 - last comment - 12:50, Thursday 13 March 2025(83332)
pi31 slow ringup causing range drop

We noticed that the range was dropping even lower than our already low ~145Mpc, and there was a lot of low-frequency noise creeping up. Sheila suggested that we look at the PIs, and sure enough, PI31 had started to slowly creep up at the same time as the range degradation (see attached). The SUS_PI guardian node turns on the damping when the H1:SUS-PI_PROC_COMPUTE_MODE31_RMSMON channel goes above 3 and turns it back off when it gets below 3. For now we tried changing that threshold to 0 to allow continuous damping. This let the mode damp down completely and brought our range back up.

We should think more about damping this mode down lower and having two thresholds, one for on and one for off.
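
A minimal sketch of the two-threshold (hysteresis) idea; the thresholds here are illustrative, and the SUS_PI guardian currently uses a single threshold.

ON_THRESHOLD = 3.0   # turn damping on above this RMS
OFF_THRESHOLD = 1.0  # only turn it off again below this lower RMS

def update_damping(rmsmon, damping_on):
    """Return the new damping state given the current RMS monitor value."""
    if not damping_on and rmsmon > ON_THRESHOLD:
        return True
    if damping_on and rmsmon < OFF_THRESHOLD:
        return False
    return damping_on

# Example: the mode rings up past 3, then is damped back down below 1
state = False
for rms in [0.5, 2.0, 3.5, 2.5, 1.5, 0.8, 0.5]:
    state = update_damping(rms, state)
    print(rms, state)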

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 11:06, Wednesday 12 March 2025 (83334)

Changed the threshold to 1. At 0, the node would continuously search for a new phase, thinking that it needed to damp the mode further.

sheila.dwyer@LIGO.ORG - 13:16, Wednesday 12 March 2025 (83335)

The first attached screenshot shows the spectrum around 10.4 kHz (edited to fix a typo); the mode which rang up and caused this issue is at 10428.38 Hz, previously identified as being in the Y arm (Oli's slides).

The behavior seemed pretty similar to what we saw a few weeks ago (82961), with broad nonstationary noise in DARM.

The downconversion doesn't seem to be caused by the damping: it started to have a noticeable impact on the range while the damping was off, and when the damping came on and reduced the amplitude of the mode, the range improved.

In the attached ndscope screenshot you can see the range that we are using on the DCPDs' ADC. This is a 20-bit ADC, so it would saturate at 524k counts; when this PI was at its highest today it was using 10% of the range, and the DARM offset takes up about 20% of the ADC range.
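
The headroom numbers quoted above, worked out explicitly (signed full scale of a 20-bit converter is 2**19 counts).

full_scale = 2**19               # 524288 counts ("524k")
pi_peak = 0.10 * full_scale      # PI at its worst today, ~52k counts
darm_offset = 0.20 * full_scale  # DARM offset, ~105k counts
print(full_scale, pi_peak, darm_offset)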

The third attachment is the DARM spectrum at the time of this issue, as requested by Peter. As described in 83330, our range is decreasing with thermalization in each lock; the spectrum shows the typical degradation we have seen in these last few days since the SQZ angle servo has been keeping the squeezing more stable. The excess noise seen when the PI was rung up has a similar spectrum to the excess noise we get after thermalization.

Also, for some information about these 10.4kHz PI modes, see G1901351

Images attached to this comment
sheila.dwyer@LIGO.ORG - 12:50, Thursday 13 March 2025 (83358)

Jeff pointed me to 82686 with the suggestion to check the channels that have different digital AA. 

The attached plot shows DCPD sum and two alternatives: in red is H1:OMC-DCPD_16K_SUM1_OUT_DQ, which has a high pass to reduce the single-precision noise and no digital AA filters (and some extra lines); SUM2 has one digital AA filter and the high pass; and DCPD sum, the DARM channel, has two digital AA filters and no high pass. The broadband noise is the same for all of these, so digital aliasing doesn't seem to be involved in adding this broadband noise.

Images attached to this comment
H1 PEM (AOS)
robert.schofield@LIGO.ORG - posted 18:15, Tuesday 25 February 2025 - last comment - 12:04, Friday 14 March 2025(83050)
Annular and other 'mystery' beams found in vertex

I have recently reported that the "mystery" beam on the spool piece wall near HAM3 was coming from the direction of ITMX (82252). To further this investigation, I started photographing the area around the ITMs and BS as best I could through our viewports (there is not a good view towards HAM3). I found several unexpected distributions of light in the vertex:

1. 20 degree conical annular beams from ITMs

Both ITMX/CPX (Figure 1) and ITMY/CPY (Figure 3) cast an expanding annular “beam” towards the BS with a cone half angle of roughly 20 degrees from the main beam. My best guess is that it is produced by arm cavity light hitting the bevel of the ITMs (see cartoon in Figure 1). A good test of this would be to install the new test mass cage baffles at one or more of the ITMs at LLO (presuming Anamaria finds this beam at LLO) this upcoming break. The baffle should hide the bevel and eliminate the ring of light.

2. 45 degree conical annular beams from BS

The BS appears to cast an expanding annular “beam” with a cone half angle of 45 degrees, centered around the -X, -Y direction (Figure 2), and likely another annular beam, also with a half angle of 45 degrees, centered around  the  -X, +Y direction (evidence in Figure 3). I tried to find a geometry where the bevels were also the source of these beams but didn’t. My best guess is that the annular cone is produced by reflections of light from PR3 and the ITMs off of the inner surface of the circular cage around the BS, or the inside surface of the circular barrel of the BS itself (see drawings on third page of Figure 2). 

3. Reflection of BS in ITM elliptical baffles likely visible at ITMY and HAM3

The BS beam spot is reflected towards ITMY by the slanted piece of the ITMX elliptical baffle (Figure 3). While the actual beam is not reflected towards ITMY (it isn’t clipped), the baffle reflects light towards ITMY that is scattered out of the main beam by only a few degrees.

Non-image files attached to this report
Comments related to this report
timothy.ohanlon@LIGO.ORG - 12:04, Friday 14 March 2025 (83372)

Similar images were taken at LLO for comparison (see Alog 75626)
