H1 SUS
jeffrey.kissel@LIGO.ORG - posted 12:03, Friday 20 June 2025 - last comment - 10:39, Tuesday 24 June 2025(85198)
SRM Transverse OSEM, Mounted on Opposite Side, has incorrect OSEM2EUL / EUL2OSEM matrix sign; Inconsequential, but should be Fixed for Sanity's Sake.
J. Kissel

I'm building up some controls design documentation for the derivations of the OSEM2EUL / EUL2OSEM matrices for existing suspension types (see G2402388), in prep for deriving new ones, e.g. for the HRTS, and in case we upgrade any SUS to use the 2-DOF AuSEMs.

In doing so, I re-remembered that the HLTS / HSTS / OMC controls arrangement poster (E1100109), which defines what T2400265 now calls the "6HT" OSEM arrangement, calls out two possible positions for the transverse sensor: the "side" and "opposite side," which I'll abbreviate as SD and OS, respectively, from here on.

If the transverse sensor is mounted in the SD position, then as the suspension moves in +T, the flag further occults the OSEM's LED beam, creating a more negative ADC signal. Thus, the OSEM2EUL matrix's SD to T element should be -1.0.
If the transverse sensor is mounted in the OS position, then as the suspension moves in +T, the flag moves out of the way, revealing more of the LED beam and creating a more positive ADC signal. Thus, the OSEM2EUL matrix's SD to T element should be +1.0.

Yesterday, not *actually* remembering that the HLTSs PR3 and SR3, and two of the 9 HSTSs, SR2 and SRM, use OS as their transverse sensor position, and missing the note from Betsy in the abstract of E1100109 to check each SUS' systems-level SolidWorks assembly for the transverse sensor location assignment (prior to this morning the note was not in red, nor did it call out explicitly which suspensions have their transverse sensor mounted in the OS position), I was worried that we'd missed this when defining the signs of *all* of the HLTS / HSTS / OMCS OSEM2EUL / EUL2OSEM matrices, and had assumed they were all installed as SD OSEMs with -1.0 OSEM2EUL and EUL2OSEM matrix elements.

Below, I inventory the status with:
    - suspension name, 
    - a reference to a picture of the transverse OSEM (or the corresponding flag w/o the OSEM), 
    - confirmation that the SolidWorks (SW) drawing matches the picture, 
    - the current value / sign of the OSEM2EUL / EUL2OSEM matrix element (H1:SUS-${OPTIC}_M1_OSEM2EUL_2_6 or H1:SUS-${OPTIC}_M1_EUL2OSEM_6_2; a hypothetical sketch of polling these channels follows the table), and
    - a conclusion of "all good" or what's wrong.


Optic   T Sensor        aLOG pic        SW check        OSEM2EUL/       Conclusion
        Mount                                           EUL2OSEM value
MC1     SD              LHO:6014        D0901088 g      -1.0            all good
MC3     SD              LHO:39098       D0901089 g      -1.0            all good
PRM     SD              LHO:39682       D0901090 g      -1.0            all good
PR3     OS              LHO:39682       D0901086 g      +1.0            all good

MC2     SD              LHO:85195       D0901099 g      -1.0            all good
PR2     SD              LHO:85195       D0901098 g      -1.0            all good

SR2     OS              LHO:41768       D0901128 g      +1.0            all good

SRM     OS              LHO:60515       D0901133 g      -1.0            OSEM2EUL/EUL2OSEM wrong!
SR3     OS              LHO:60515       D0901132 g      +1.0            all good

FC1     SD              LHO:61710       D1900364 g      -1.0            all good
FC2     SD              LHO:65530       D1900368 g      -1.0            all good

OMC     SD              LHO:75529       D1300240 g      -1.0            all good (see also G1300086)
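
For reference, here is a minimal sketch of how one might poll these matrix elements over EPICS to reproduce the value column above. This is hypothetical helper code (assuming a Python environment with pyepics and channel access to the H1 front ends), not the procedure actually used to build the table.

    # Hypothetical sketch: read the transverse-sensor basis-transformation elements
    # for the suspensions inventoried above, assuming pyepics and channel access.
    from epics import caget

    # Expected sign of the (SD -> T) / (T -> SD) element, keyed by mounting:
    # SD-mounted transverse OSEMs should be -1.0, OS-mounted should be +1.0.
    EXPECTED = {
        'MC1': -1.0, 'MC3': -1.0, 'PRM': -1.0, 'PR3': +1.0,
        'MC2': -1.0, 'PR2': -1.0, 'SR2': +1.0, 'SRM': +1.0,  # SRM currently reads -1.0
        'SR3': +1.0, 'FC1': -1.0, 'FC2': -1.0, 'OMC': -1.0,
    }

    for optic, expected in EXPECTED.items():
        osem2eul = caget(f'H1:SUS-{optic}_M1_OSEM2EUL_2_6')
        eul2osem = caget(f'H1:SUS-{optic}_M1_EUL2OSEM_6_2')
        status = 'all good' if osem2eul == expected == eul2osem else 'check sign!'
        print(f'{optic:4s} OSEM2EUL_2_6 = {osem2eul:+.1f}  '
              f'EUL2OSEM_6_2 = {eul2osem:+.1f}  ({status})')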


So, as the title of this aLOG states, we've got the sign wrong on SRM.

Shouldn't we have discovered this with the "health check TFs?"
Why doesn't this show as a difference in the "plant" ("health check") transfer functions when comparing against other SUS that have the sign right?
Why *don't* we need a different sign on SRM's transverse damping loop? 

Because the sign flip is self-consistent between the EUL2OSEM drive and the OSEM2EUL sensed motion:

When the SRM EUL2OSEM matrix requests a drive in +T as though the OSEM coil were in the SD position, it actually drives in -T, because the OSEM coil is in the OS position.
On the sensing side, the OSEM2EUL matrix corrects for an "SD" OSEM ("more negative when moving in +T") and so has the (incorrect) -1.0 element. But since the SUS is actually moving in -T, the flag occults more of the OS OSEM's LED beam, yielding a more negative ADC signal, and that -T motion is reported as +T in the DAMP bank because of the minus sign in the "SD" OSEM2EUL matrix.
So the phase between DAMP OUT and DAMP IN at DC is still zero, as though "everything were normal," because a requested +T drive comes back sensed as +T.

Thus, since the sensor/drive phase is zero at DC just like every other HSTS, we can use the same -1.0 feedback sign as every other DOF and every other HSTS.
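
To make the sign bookkeeping concrete, here is a minimal DC sign-chain sketch (my own illustration, not site code), assuming the convention above that an SD-mounted OSEM reports a more negative signal for +T and an OS-mounted OSEM a more positive one, and that the coil actuation sign follows the same mounting convention:

    # Minimal DC sign-chain sketch for the transverse path, assuming:
    #   - an SD-mounted OSEM reports a more-negative signal for +T (sign -1),
    #     an OS-mounted OSEM a more-positive signal for +T (sign +1);
    #   - the coil actuation sign follows the same mounting convention.

    def dc_gain(matrix_sign, mount_sign):
        """Return the requested-T -> sensed-T DC gain for one choice of
        EUL2OSEM/OSEM2EUL sign (matrix_sign) and physical mounting (mount_sign)."""
        coil_drive = matrix_sign * 1.0        # EUL2OSEM: requested +T -> coil drive
        physical_T = mount_sign * coil_drive  # coil drive -> physical T motion
        adc_signal = mount_sign * physical_T  # physical T -> OSEM ADC signal
        sensed_T = matrix_sign * adc_signal   # OSEM2EUL: ADC -> reported T
        return sensed_T

    # SRM today: OS-mounted OSEM (+1) with the "SD" matrix sign (-1):
    print(dc_gain(matrix_sign=-1.0, mount_sign=+1.0))   # +1.0, zero phase at DC
    # A correctly configured SD suspension, e.g. MC1:
    print(dc_gain(matrix_sign=-1.0, mount_sign=-1.0))   # +1.0, same damping sign works

In every case the requested-to-sensed DC gain is +1, because the matrix sign and the mounting sign each enter the chain twice; only the absolute physical direction of T (matrix sign times mounting sign) flips. That is why neither the health-check TFs nor the damping-loop sign can tell SRM apart from a correctly configured suspension.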

Does this matter for the IFO?
No. This sensor is only used to damp transverse motion, i.e. motion transverse to the main IFO beam. If there are no defects on the SRM optic's HR surface, and the transverse displacement doesn't span a large fraction of the beam width, then there should be no coupling into L, P, or Y, which are the DOFs to which the IFO should be sensitive.
This is corroborated by Josh's recent work, where he measured the coupling of the "SD" OSEM drive (actually in the OS position, driving -T) and found it to be negligible; see LHO:83277, specifically the SRM plot.
Not only is the IFO insensitive to the transverse drive from the OSEM, but the absolute sign of whether it's +T or -T also doesn't matter, since there are no *other* sensors measuring this DOF against which we'd have to compare signs.

Should we fix it?
I vote yes, but with a low priority, perhaps during maintenance when we have the time to gather a "post transverse sensor sign change fix" set of transfer functions.
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:39, Tuesday 24 June 2025 (85279)
H1 SUS SRM's Basis Transformation Matrix elements for Transverse have been rectified as of 2025-06-24. See LHO:85277.
H1 ISC (GRD, OpsInfo)
elenna.capote@LIGO.ORG - posted 11:33, Friday 20 June 2025 - last comment - 16:46, Friday 20 June 2025(85200)
PRC Align in ALIGN IFO looks good

While Corey ran an initial alignment, I went to PREP_PRC_ALIGN to check the WFS error signal for that state. Both pitch and yaw looked good and I confirmed the signals cross zero when expected, so I turned on the PRC ALIGN state, and the initial alignment engaged and offloaded properly. Seems like we can use PRC align now!

I tagged OpsInfo so operators know. I recommend that operators pay attention during PRC Align over the next few days to ensure nothing is going wrong (watch the build-ups as the ASC engages and confirm it's moving in the right direction).

Comments related to this report
ryan.short@LIGO.ORG - 16:46, Friday 20 June 2025 (85212)OpsInfo

Since PRC_ALIGN should be working again, I removed the edge in the INIT_ALIGN Guardian between 'INPUT_ALIGN_OFFLOADED' and 'MICH_BRIGHT_ALIGNING' that was added in alog 84950, so that automatic initial alignments will now include PRC.
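
For readers not familiar with Guardian: a node module declares its allowed state-graph transitions in an `edges` list of (from, to) tuples, so "removing the edge" amounts to deleting one entry. A hypothetical sketch (not the real INIT_ALIGN code):

    # Hypothetical sketch, not the real INIT_ALIGN module: Guardian declares its
    # allowed state-graph transitions in an `edges` list of (from, to) tuples.
    edges = [
        # ... nominal INIT_ALIGN edges, routing through the PRC align states ...
        # The temporary bypass below (added in alog 84950) is now removed, so
        # automatic initial alignments go through PRC again:
        # ('INPUT_ALIGN_OFFLOADED', 'MICH_BRIGHT_ALIGNING'),
    ]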

LHO VE
david.barker@LIGO.ORG - posted 11:11, Friday 20 June 2025 (85199)
Fri CP1 Fill

Fri Jun 20 10:07:53 2025 INFO: Fill completed in 7min 49secs

 

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 10:40, Friday 20 June 2025 (85197)
DARM offset stepping, unrelated lockloss

Corey and I ran the DARM offset stepping script using Jennie Wright's instructions from 85136.

Writing data to  data/darm_offset_steps_2025_Jun_20_15_11_46_UTC.txt  Moving DARM offset H1:OMC-READOUT_X0_OFFSET to 4 at 2025 Jun 20 15:11:46 UTC UTC (GPS 1434467524)
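
For the record, the offset moves themselves are just EPICS writes with dwell times between them; a minimal hypothetical sketch of that pattern (not the actual script, whose instructions live in 85136) looks like:

    # Hypothetical sketch of the DARM-offset stepping pattern; assumes pyepics
    # and write access. The real script and step list are described in alog 85136.
    import time
    from epics import caput

    OFFSET_CHAN = 'H1:OMC-READOUT_X0_OFFSET'
    STEPS = [4, 6, 8, 10]   # example offset values only; the real list is in the script
    DWELL = 120             # seconds to dwell/average at each step (assumed)

    for value in STEPS:
        print(f'Moving DARM offset {OFFSET_CHAN} to {value}')
        caput(OFFSET_CHAN, value)
        time.sleep(DWELL)   # let data accumulate before the next step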

We lost lock 2 minutes after this script finished. Kevin was running an ADF sweep, but it was not doing anything at the time of the lockloss, so this seems like an unrelated lockloss. 

 

H1 PSL (PSL)
corey.gray@LIGO.ORG - posted 10:39, Friday 20 June 2025 (85196)
PSL Status Report (FAMIS #26427)

This is for FAMIS #26427.
Laser Status:
    NPRO output power is 1.844W
    AMP1 output power is 70.07W
    AMP2 output power is 140.3W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 3 days, 0 hr 25 minutes
    Reflected power = 23.23W
    Transmitted power = 105.5W
    PowerSum = 128.7W

FSS:
    It has been locked for 0 days 0 hr and 5 min
    TPD[V] = 0.8216V

ISS:
    The diffracted power is around 4.0%
    Last saturation event was 0 days 0 hours and 32 minutes ago


Possible Issues:
    PMC reflected power is high

LHO General
corey.gray@LIGO.ORG - posted 07:38, Friday 20 June 2025 (85193)
Fri Ops Day Transition

TITLE: 06/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

H1's been locked 8.75 hrs with no Owl shift notifications and fairly quiet environmental conditions (breezes picking up a little, just over 10 mph).

After the lockloss Oli notes at the end of their shift, H1 relocked fine without a need for another alignment.

Commissioning is also scheduled from 8am to 2pm PDT today.

H1 General (SQZ)
oli.patane@LIGO.ORG - posted 22:08, Thursday 19 June 2025 (85192)
Ops Eve Shift End

TITLE: 06/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently relocking and in FIND_IR.

We've been having short locks all day, two of them during my shift, with the locklosses seemingly due to different or unknown causes. None of them seem to be environmental, as wind and ground motion have been low (secondary useism was a bit elevated, but has been coming down and is nowhere near the amount that typically gives us problems). We've also now had two locklosses during the last two relock attempts from ENGAGE_ASC_FOR_FULL_IFO, at the spot where we've been having that glitchiness in the PR gain since coming back from the vent. Also, like Ibrahim mentioned in his end-of-shift alog (85186), DRMI has been taking a really long time to catch even though the flashes have been great. It's been taking us down to CHECK_MICH multiple times, and each time the alignment looks great.

During the relock following the 2025-06-20 00:09 UTC lockloss (85188), once we got back up to NLN, the squeezer was still working on trying to relock. It wasn't able to lock the OPO and was giving the messages "Scan timeout. Check trigger level" and "Cannot lock OPO. Check pump light on SQZT0". However, I just tried re-requesting LOCKED_CLF_DUAL on the OPO guardian, and it locked fine on the first try. So I guess that wasn't actually a problem? Tagging SQZ anyway.


LOG:

23:30 Relocking
23:59 NOMINAL_LOW_NOISE
    00:01 Observing
00:09 Lockloss
    - Running a manual initial alignment
    - Lockloss at ENGAGE_ASC_FOR_FULL_IFO
    - Lockloss from CARM_TO_TR
02:05 NOMINAL_LOW_NOISE
    02:11 Observing
03:32 Lockloss
    - Running a manual initial alignment
    - Lockloss at ENGAGE_ASC_FOR_FULL_IFO
   

Start Time System Name Location Lazer_Haz Task Time End
23:36 FAC Tyler XARM n Checking on his beloved bees 23:52
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 20:33, Thursday 19 June 2025 (85191)
Lockloss

Lockloss at 2025-06-20 03:32 UTC after 1.5 hours locked. Not sure why we're having all these short locks today

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 17:12, Thursday 19 June 2025 - last comment - 19:12, Thursday 19 June 2025(85188)
Lockloss

Lockloss at 2025-06-20 00:09 UTC after only 10 minutes locked

Comments related to this report
oli.patane@LIGO.ORG - 19:12, Thursday 19 June 2025 (85190)

02:11 Back to Observing

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:32, Thursday 19 June 2025 (85186)
OPS Day Shift Summary

TITLE: 06/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is in DHARD_WFS and LOCKING

Overall a quiet shift with three seemingly different locklosses. Two of these happened less than 2 hours into the lock.

Lockloss 1: During Simulines - Alog 85178. Calibration sweep could not produce the report but BB ran successfully - Alog 85179

Lockloss 2: ETM Glitch - Alog 85182

Lockloss 3: EY Kick right before - Alog 85184

I also reverted, at Camilla's request, the edits that Camilla and Oli made to SQZ_FC beamspot control, turning it back on (SDFs attached).

Other than that, it seems that DRMI is taking a very long time to lock without doing an initial alignment, despite excellent flashes. Touching mirrors does not seem to help, and whenever DRMI does lock, it stays locked, so I do not think this has to do with alignment or too much motion (ground or otherwise).

Microseism is on the rise, and there are regular M5.0 quakes happening at the Mid-Atlantic Ridge.

LOG:

None

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:16, Thursday 19 June 2025 (85185)
Ops Eve Shift Start

TITLE: 06/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.13 μm/s
QUICK SUMMARY:

Currently relocking and in ACQUIRE_DRMI. Flashes are great but it won't catch. It caught PRMI pretty fast, but now we're back in DRMI.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 15:47, Thursday 19 June 2025 - last comment - 17:02, Thursday 19 June 2025(85184)
Lockloss 22:25 UTC

Unknown-cause lockloss, around the same amount of time into NLN as the previous one, which was an ETM glitch. The H1 Lockloss tool does not show an ETM glitch this time, though; rather, it shows that EY L2 was the first to have a small glitch-like shake, followed by the lockloss.

Comments related to this report
oli.patane@LIGO.ORG - 17:02, Thursday 19 June 2025 (85187)

00:01 Observing

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 12:16, Thursday 19 June 2025 (85182)
Lockloss 19:05 UTC

ETM Glitch Lockloss. There were no earthquakes at the time of the lockloss. Microseism is on the rise, though it is nowhere near a lock-affecting amount; same for wind. 

The H1 lockloss report for the lock that just ended shows it was the ETM Glitch.

LHO VE
david.barker@LIGO.ORG - posted 10:10, Thursday 19 June 2025 (85181)
Thu CP1 Fill

Thu Jun 19 10:08:15 2025 INFO: Fill completed in 8min 12secs

 

Images attached to this report
H1 CAL
ibrahim.abouelfettouh@LIGO.ORG - posted 09:06, Thursday 19 June 2025 - last comment - 15:28, Friday 20 June 2025(85179)
Calibration Sweep 06/19 - Failed - Lockloss during Simulines -

Headless Start: 1434382652

2025-06-19 08:42:24,391 bb measurement complete.
2025-06-19 08:42:24,391 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250619T153714Z.xml
2025-06-19 08:42:24,392 all measurements complete.
Headless End: 1434382962

Simulines Start: 1434383042

2025-06-19 15:55:14,510 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 8.99, and amplitude 0.53965, is finished. GPS start and end time stamps: 1434383704, 1434383727
2025-06-19 15:55:14,510 | INFO | Scanning frequency 10.11 in Scan : L3_SUSETMX_iEXC2DARMTF on PID: 2067315
2025-06-19 15:55:14,511 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 10.11, is now running for 28 seconds.
2025-06-19 15:55:17,845 | ERROR | IFO not in Low Noise state, Sending Interrupts to excitations and main thread.
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:LSC-DARM1_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L2_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L3_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:CAL-PCALY_SWEPT_SINE_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L1_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
PDT: 2025-06-19 08:55:22.252424 PDT
UTC: 2025-06-19 15:55:22.252424 UTC
Simulines End GPS: 1434383740.252424

Could not generate a report using the wiki instructions (probably due to incomplete sweep).

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:28, Friday 20 June 2025 (85209)

Here is a screenshot of the broadband measurement. The calibration still looks good!

Images attached to this comment
H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 08:57, Thursday 19 June 2025 - last comment - 19:21, Thursday 19 June 2025(85178)
Lockloss 15:55 UTC

Lockloss whilst running simulines. Since this has happened before within the last two weeks, caused by the sweep, and there are no other immediate causes, the sweep is probably what caused the lockloss. We were in Observing for nearly 10 hrs.

Comments related to this report
oli.patane@LIGO.ORG - 19:21, Thursday 19 June 2025 (85189)

I wasn't able to confirm for sure that the lockloss was caused by the calibration sweep. It looks like the QUAD channels all had an excursion at the same time, before the lockloss was seen in DARM (ndscope1). It does look like, in the seconds before the lockloss, there were two excitations ramping up (H1:SUS-ETMX_L2_CAL_EXC_OUT_DQ at ~6 Hz (ndscope2) and H1:SUS-ETMX_L3_CAL_EXC_OUT_DQ at ~10 Hz (ndscope3)). Their ramping up is not seen in the QUAD MASTER_OUT channels, however, so it's hard to know if those excitations were the cause.

Images attached to this comment
H1 General (OpsInfo, SQZ)
oli.patane@LIGO.ORG - posted 22:32, Wednesday 18 June 2025 - last comment - 09:47, Thursday 19 June 2025(85174)
Ops Eve Shift End

TITLE: 06/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Currently relocking and at MOVE_SPOTS. We lost lock earlier after having been locked for 4.5 hours, with half of that locked time spent messing with the SQZ filter cavity and trying to keep it from unlocking (85169). We still haven't figured out why it was unlocking. For the night we are keeping the FC beam spot control off; SDFs have been accepted to keep the FC beam spot control filter inputs off (sdf). We don't really think that was the issue, but we got our longest span of a locked squeezer (15 mins before the LL) after turning it back off once the unlocking issue had started. We'll look more into the squeezing issues tomorrow, so for Tony: if we have SQZ issues overnight, just go to Observing without SQZing and it'll get dealt with tomorrow.


LOG:

23:30 Relocking and in MOVE_SPOTS
    23:40 Earthquake mode activated
23:48 NOMINAL_LOW_NOISE
    23:50 Observing
    00:10 Back to CALM
    01:40 Out of Observing due to SQZ unlocking
    01:43 Back into Observing
    01:47 Out of Observing due to SQZ unlocking
    01:50 Back into Observing
    01:56 Out of Observing due to SQZ unlocking
    02:31 Back into Observing
    02:32 Out of Observing due to SQZ unlocking
    03:16 Back into Observing
    03:17 Out of Observing due to SQZ unlocking
    03:29 Back into Observing
    03:29 Out of Observing due to SQZ unlocking
    03:45 Back into Observing
    03:51 Out of Observing due to SQZ unlocking
    03:57 Back into Observing
    03:59 Out of Observing due to SQZ unlocking
    04:03 Back into Observing
04:21 Lockloss
    - Manual initial alignment

Start Time System Name Location Lazer_Haz Task Time End
23:41 FAC Tyler MX n Checking out the bees 00:41
Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:47, Thursday 19 June 2025 (85180)SQZ

Ibrahim turned FC Beamspot control back on while we were relocking this morning, as we do not think it was related to the FC locklosses yesterday (85172); more likely it was the high wind. 

H1 SUS (ISC, SEI, TCS)
jeffrey.kissel@LIGO.ORG - posted 16:59, Thursday 30 November 2017 - last comment - 09:19, Friday 20 June 2025(39590)
B&K Hammer Done in HAM3 Baffles, SR2 Cage, and of Primary/Final/Large in-vac TCS Steering Mirrors in BSC2
J. Kissel, S. Pai

Siddhesh and I B&K hammered the new MC2 and PR2 Scraper Baffles, the SR2 Cage, and the Primary/Final/Large? in-vac TCS Steering Mirrors in BSC2. More details, pictures, and results to follow.

Notes for myself later: 
- SR2 cage accelerometer was oriented with X Y Z aligned with the IFO global coordinates. 
- All other measurements had acc Y aligned with IFO Y, and acc Z aligned with IFO X.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:19, Friday 20 June 2025 (85195)
Adding an oldy-but-goldy picture of MC2 (background) and PR2 (foreground) taken by Conor Mow-Lowry the day of this BnK exercise. The picture is taken from the -Y HAM3 door, so IFO +X and the beam splitter are to the right, and +L for both suspensions is to the left, as their HR surfaces point back toward HAM2 (to the left).

Critical to today's yak shaving: the OSEM measuring Transverse at the top mass (M1) stage is toward the camera for both suspensions, i.e. in their +T direction, which means it's the "SIDE," or SD OSEM, per E1100109 (rather than the OPPOSITE SIDE, or OS).

(Yak-shaving: while putting together the derivation of the OSEM2EUL matrices in G2402388, I'm reminded that some HSTS were assembled with the transverse sensor in the OS position, and I'm on the hunt as to which ones -- and making sure the (SD to T) & (T to SD) elements of the OSEM2EUL & EUL2OSEM matrices, respectively, are correctly -1.0 if SD and +1.0 if OS.)

MC2 and PR2's OSEM2EUL matrix value H1:SUS-[MC2,PR2]_M1_OSEM2EUL_2_6 is correctly -1.0.
Images attached to this comment