H1 General (DetChar, ISC, SUS, TCS)
derek.davis@LIGO.ORG - posted 11:27, Wednesday 11 December 2024 (81764)
LASSO investigations into sharp turn-off of Dec 11 glitching

The broadband glitching that was present in the early hours of Dec 11 (UTC) appears to have suddenly and entirely stopped at 10:36:30 UTC; this sharp feature can be seen in the daily range plot. I completed a series of LASSO investigations around this time in the hopes that such a sharp feature would make it easier for LASSO to identify correlations. I find a number of trend channels, related to TCS-ITMY_CO2, ALS-Y_WFS, and SUS-MC3_M1, that show drastic changes at the same time as this turn-off point.

The runs I completed are linked here: 

  1. LASSO run of the first half of Dec 11 with the sensemon range as the primary channel 
  2. LASSO run of times near the turn-off point with TCS-ITMY_CO2 as the primary channel 
  3. LASSO run of times near the turn-off point with the sensemon range as the primary channel

Run #1 was a generic run of LASSO in the hopes of identifying a correlation. While no channel was highlighted as strongly correlated to the entire time period, this run does identify H1:TCS-ITMY_CO2_QPD_B_SEG2_OUTPUT (rank 11) and H1:TCS-ITMY_CO2_QPD_B_SEG2_INMON (rank 15) as having a drastic feature at the turn-off point (example figure). Based on this information, I launched targeted runs #2 and #3.

Run #2 is a run of LASSO using H1:TCS-ITMY_CO2_QPD_B_SEG2_OUTPUT as the primary channel to correlate against. This was designed to identify any additional channels that may show a drastic change in behavior at the same time. Channels of interest from this run include H1:ALS-Y_WFS_B_DC_SEG3_OUT16 (example figure) and H1:ALS-Y_WFS_B_DC_MTRX_Y_OUTMON (example figure). SEISMON channels were also found to be correlated, but this is likely a coincidence. 

Run #3 targets the same turn-off point, but with the standard sensemon range as the primary channel. This run revealed an additional channel with a change in behavior at the time of interest, H1:SUS-MC3_M1_DAMP_P_INMON (example figure).

Based on these runs, the TCS-ITMY_CO2 and ALS-Y_WFS channels are the best leads for additional investigations into the source of this glitching.
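
For context, a minimal sketch of the kind of trend-channel LASSO regression these runs perform (this is not the LASSO tool's actual code; the data shapes, channel count, and alpha are illustrative assumptions):

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import StandardScaler

    # Illustrative stand-ins: rows are minute-trend samples, columns are aux channels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(720, 50))   # e.g. 12 hours of minute trends, 50 channels
    y = 150 + 3 * X[:, 11] + rng.normal(scale=0.5, size=720)  # primary channel (range)

    # Standardize aux channels so LASSO coefficients are comparable, then fit;
    # the L1 penalty zeroes most coefficients, leaving the correlated channels.
    model = Lasso(alpha=0.1)
    model.fit(StandardScaler().fit_transform(X), y - y.mean())
    ranking = np.argsort(np.abs(model.coef_))[::-1]
    print("top channels by |coefficient|:", ranking[:5])

A targeted run like #2 or #3 is the same fit restricted to a short window around the turn-off point, with the chosen primary channel in place of the range.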

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 07:27, Wednesday 11 December 2024 - last comment - 12:52, Wednesday 11 December 2024(81758)
OPS Wednesday DAY shift start

TITLE: 12/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.32 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 07:39, Wednesday 11 December 2024 (81759)

I ran the lowrange coherence check for a good and a bad range time during this current lock.
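
For context, a minimal sketch (with synthetic data; the actual lowrange script and its channel list are not shown here) of the kind of DARM/aux coherence this check computes:

    import numpy as np
    from scipy.signal import coherence

    # Synthetic stand-ins for a DARM-like channel and an aux channel at 256 Hz.
    fs = 256
    rng = np.random.default_rng(1)
    n = fs * 600                                  # 10 minutes of data
    shared = rng.normal(size=n)                   # common noise source
    darm = shared + rng.normal(size=n)
    aux = 0.5 * shared + rng.normal(size=n)

    # 16 s segments with 50% overlap; coherence near 1 flags a correlated band.
    f, coh = coherence(darm, aux, fs=fs, nperseg=16 * fs, noverlap=8 * fs)
    print("peak coherence: %.2f" % coh.max())

Running this at a good-range and a bad-range time and comparing the two coherence spectra is the comparison described above.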

Images attached to this comment
ryan.crouch@LIGO.ORG - 10:53, Wednesday 11 December 2024 (81761)

I looked through the suspension driftmon scopes from the IFO_ALIGN_COMPACTEST MEDM screen, and most looked normal compared to other locks. The main thing that looked strange was a small step in MC3_P at the same time as the range improved; however, I didn't see this behaviour in previous locks with bad range stretches.

I looked at the OPLEV BLRMS as well; the main thing I saw on these scopes was that the BS BLRMS increased, largely in yaw, during the bad-range times of this current lock. I don't see as obvious a jump in other lock stretches.

Images attached to this comment
camilla.compton@LIGO.ORG - 11:49, Wednesday 11 December 2024 (81765)

Jenne suggested that we look at the top mass vertical OSEMs to check for sagging that could be causing touching. I checked the quads, BS, and output arm suspensions and see no drifts that correlate with the low-range periods.

Images attached to this comment
ryan.crouch@LIGO.ORG - 11:53, Wednesday 11 December 2024 (81766)

I looked at verticals for MC{1,2,3}, PR{M,2,3}, and FC{1,2}, and the only odd thing I noticed is that FC2 vertical seems to move more during the bad range times than the good. Most of them show a small seasonal downward sag over the past 40 days.

jenne.driggers@LIGO.ORG - 12:27, Wednesday 11 December 2024 (81767)

I've looked at spectra (using 64 sec of data split into 16 second chunks with 50% overlap) of the top mass OSEMs for all suspensions, comparing between start times of 1417933576 (bad time) and 1417950319 (a little while after the sharp improvement). None of the spectra have any of the classic 'we're rubbing' peaks. I've noted a few that I want to re-plot and zoom in on (RM1 L, RM2 L, OMC L P V, FC1 P Y R T). I'll also re-look at MC3, since it is one of our most 'suspicious' optics right now.

I attach the spectra I made of these potentially suspicious optics. The main conclusion is that none of them are actually very suspicious, so since these are my *most* suspicious, probably we are not rubbing. But I'll make a few more plots of these suspensions. In all of these plots the blue 'reference time' is when the IFO is locked with good sensitivity, and the orange 'check time' is when the IFO is locked with poor sensitivity.
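
For reference, a gwpy sketch of this kind of reference/check ASD comparison (the channel name is a hypothetical example, and NDS2 data access is assumed; the GPS times are the ones quoted above):

    from gwpy.timeseries import TimeSeries

    CHANNEL = 'H1:SUS-MC3_M1_DAMP_P_IN1_DQ'   # hypothetical example channel
    BAD, GOOD = 1417933576, 1417950319        # GPS times from this comment

    asds = {}
    for label, start in [('check (bad)', BAD), ('reference (good)', GOOD)]:
        data = TimeSeries.get(CHANNEL, start, start + 64)   # 64 s of data
        asds[label] = data.asd(fftlength=16, overlap=8)     # 16 s FFTs, 50% overlap

    plot = asds['reference (good)'].plot(label='reference (good)')
    ax = plot.gca()
    ax.plot(asds['check (bad)'], label='check (bad)')
    ax.legend()
    plot.show()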

Images attached to this comment
jenne.driggers@LIGO.ORG - 12:52, Wednesday 11 December 2024 (81768)

I replotted the 'suspicious' top mass spectra using DTT.  I don't find anything suspicious or interesting on FC1 or MC3.

OMC is a little strange, in that it has a set of peaks that all change frequency in the same way (first attachment).  I'm not sure that this is meaningful for today's investigation though.

RM1 and RM2 are both quite strange looking in the 5-9 Hz range.  They both pick up a forest of peaks in Length (and a little bit in Pit, and maybe a teeny bit in Yaw).  Second attachment.  Going to look further into these, maybe at other times as well. Robert said that these both saw some motion on the summary page, but their motion didn't seem to correlate with the reduction in range.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 22:00, Tuesday 10 December 2024 - last comment - 10:59, Wednesday 11 December 2024(81757)
Ops Eve Shift End

TITLE: 12/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 102Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Currently Observing and have been Locked for almost 5 hours.  Our range is still all over the place unfortunately. I jumped in and out of Observing a few times by turning squeezing on and off to check for differences with the range (81753), but didn't find anything.
LOG:

00:30 Relocking
01:08 NOMINAL_LOW_NOISE
01:14 Observing

    02:57 Went out of Observing and turned off SQZ to see if that fixes the mystery noise 
    03:09 Back to FDS
    03:19 Turned off SQZ
    03:33 Back to FDS and back to Observing

Comments related to this report
camilla.compton@LIGO.ORG - 09:04, Wednesday 11 December 2024 (81760)ISC, SQZ

Attached plot shows that the low range glitchy behavior happens independent of whether we have SQZ injected or not. Traces show the SQZ/NO SQZ times Oli noted in green/yellow with comparisons of good SQZ and NO SQZ times in blue and red.

  • Light and dark green = Squeezing injected. Can see 20-100Hz noise.
  • Yellow and orange = NO SQZ. Can still see 20-100Hz noise.
  • Blue = Squeezing injected. Good stable range.
  • Red = NO SQZ. No noise.

Yesterday, in 81724, it seemed like the behavior stopped before we took the no-SQZ time. Each trace: 0.1 Hz bandwidth (10 s FFTs), 50% overlap, 100 averages ≈ 500 seconds, ~10 minutes. /ligo/home/camilla.compton/Documents/H1_DARM_FOM_s_glitchy.xml

Conclusions:

  • Neither the injected squeezed light nor any backscatter from HAM7/8 is causing the low range times
  • It could still be a SQZ electronics issue, given the hveto work that DetChar and Gravity Spy are doing linking glitches to SQZ channels; links in chat and pasted here: hveto/Dec9/no_sqz/, wandering lines SQZ-FC, lines CO2, gravityspy, 20241209/detchar/hveto/81730, 81587
  • Even in good range times, SQZ adds noise at 20-40 Hz (compare red to blue): we could retune the FC de-tuning to improve this.
Images attached to this comment
camilla.compton@LIGO.ORG - 10:59, Wednesday 11 December 2024 (81762)

I ran BRUCOs on the bad (1417932916: 2024/12/11 06:14UTC) and good (1417949016: 2024/12/11 10:43UTC) times in the attached plot. 
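
(BRUCO is a brute-force coherence scan over the full channel list; below is a schematic sketch of the idea, not the actual tool, with hypothetical data arrays:)

    import numpy as np
    from scipy.signal import coherence

    fs = 256  # assumed common sample rate after decimation

    def bruco_style_scan(darm, aux_channels, fmin=20, fmax=100):
        """Rank aux channels by mean coherence with DARM in a band (schematic)."""
        scores = {}
        for name, data in aux_channels.items():
            f, coh = coherence(darm, data, fs=fs, nperseg=16 * fs, noverlap=8 * fs)
            band = (f >= fmin) & (f <= fmax)
            scores[name] = coh[band].mean()
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Toy usage with random data; real runs loop over thousands of channels.
    rng = np.random.default_rng(2)
    darm = rng.normal(size=fs * 600)
    aux = {'CH%d' % i: rng.normal(size=fs * 600) for i in range(3)}
    print(bruco_style_scan(darm, aux)[:3])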

Main differences in 20-100Hz region:

Images attached to this comment
H1 PSL (PSL)
masayuki.nakano@LIGO.ORG - posted 20:44, Tuesday 10 December 2024 (81756)
PMC Heater Calibration and Actuation Efficiency Analysis

[Jason, Masayuki]

Summary

The PMC heater calibration was performed last week (Tuesday). The calibration involved adjusting the temperature loop set point and monitoring the corresponding changes in PMC temperature and length. The results were validated against previous measurements and compared to similar evaluations at LLO. Additionally, the necessity of the heater for JAC operations was assessed, highlighting that it would be needed for long-term operation.

Details

PMC Heater Calibration

Reference to LLO Calibration

Monthly Drift Analysis

Implications for JAC Operations

This measurement shows the PZT could rail within a day, so we would need the heater for JAC operation.

Additional Observations

Spikes observed at LHO caused glitches in transmitted power, as shown in the attached plot, and may require resolution. Redesigning the filters could potentially mitigate these anomalies.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 19:11, Tuesday 10 December 2024 - last comment - 19:51, Tuesday 10 December 2024(81753)
Out of Observing to check SQZ

Just went out of Observing and took SQZ manager to no squeezing to check if the noise issues are related to squeezing. We'll be doing no squeezing for 10 minutes, SQZ for 10, no SQZ for 10, and then back to squeezing and Observing.

Comments related to this report
oli.patane@LIGO.ORG - 19:37, Tuesday 10 December 2024 (81754)

Back to just Observing as of 03:33 UTC

oli.patane@LIGO.ORG - 19:51, Tuesday 10 December 2024 (81755)

Looks like the range was still changing a decent amount while squeezing was off (ndscope), which lines up with previous observations: we were also seeing this noise during the lock stretches a few nights ago where we weren't squeezing for multiple hours.

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 17:07, Tuesday 10 December 2024 (81752)
Checking PRG calibration

Today Sheila realigned the beam on the IM4 trans QPD (81735). This beam has been clipped at least since the O4 break for various reasons (in this particular alog we tried to fix it but then undid the fix; I'm adding it here so we have a general time reference for when we first noticed the problem: 76291). As a result, the PRG calibration for O4b has been fishy, since it is normalized by the IM4 trans QPD power. Today, with the beam properly centered, I checked the PRG calibration to make sure it's still good.

Craig did this after ITMY was replaced: 58327. I followed his same steps with help from Ryan S and Sheila. First, we ran initial alignment on the y arm, which uses the green camera references. We waited until that alignment was converged and then stepped through the input_align_yarm states in the ALIGN_IFO guardian. I recorded the IM4 trans value as well as the y arm transmission value.

When we run a normal initial alignment, we lock input align with the x arm, so I used that opportunity to similarly record the x arm transmission value and IM4 trans. As a note: the green input alignment references are set when we converge the full IFO ASC, therefore I believe the single arm alignment after we run the green initial alignment is probably very close to the full IFO alignment.

Then, we paused in DARM_TO_DC_READOUT in the ISC LOCK guardian; this is just after all the full IFO ASC converges at 2W. I grabbed the x and y arm transmission values and IM4 trans.

Channels: H1:IMC-IM4_TRANS_NSUM_OUT16, H1:LSC-TR_X_NORM_OUT16, H1:LSC-TR_Y_NORM_OUT16

Single arm lock values:

Y-arm trans: 0.91, IM4 trans: 1.95

X-arm trans: 1.01, IM4 trans: 1.98

Full lock values:

Y-arm trans: 1684

X-arm trans: 1708

IM4 trans: 1.98

Calculation:

PRM transmission = 0.031

PRG = Tp × [Y-arm trans (full IFO) / Y-arm trans (Y-arm only)] × [input power (Y-arm only) / input power (full IFO)] (copied directly from Craig's alog)

PRG from Y arm = 56.5

PRG from X arm = 54.0

PRG reported from H1:LSC-PR_GAIN_OUT16 at the 2W lock time: 54
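
As a quick cross-check, the Y-arm numbers plug into Craig's formula as follows (a worked version of the calculation above, nothing new):

    Tp = 0.031                           # PRM power transmission
    tr_full, tr_single = 1684.0, 0.91    # Y-arm trans: full lock vs. single arm
    pin_single, pin_full = 1.95, 1.98    # IM4 trans (input power): single arm vs. full lock
    prg = Tp * (tr_full / tr_single) * (pin_single / pin_full)
    print(round(prg, 1))                 # 56.5, matching the value quoted above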

Therefore, I think our PRG calibration is correct. We can begin to cultivate a good O4b power up data set for modeling.

LHO General
ryan.short@LIGO.ORG - posted 16:40, Tuesday 10 December 2024 (81749)
Ops Day Shift Summary

TITLE: 12/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Long maintenance day with a great many activities, wrapped up late afternoon, and initial alignment started around 23:40. The biggest issue we've encountered is that the FSS autolocker is having trouble locking the RefCav; it can grab the resonance but loses it after about a second, so it's been taking quite a while to lock (I locked it manually twice this afternoon to speed things along). However, once the FSS is locked it seems happy; the only issue appears to be with relocking. Otherwise, initial alignment ran smoothly and main locking started at 00:24. Currently up to PREP_ASC_FOR_FULL_IFO.
LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 16:45
15:40 | PEM | Robert | LVEA | Y | Viewports | 15:52
15:41 | TCS | Camilla | LVEA | Y | Guillotines | 15:52
15:58 | FAC | Chris | EndY then X | N | Check paths are clear for crane inspections | 18:27
16:00 | PEM | Robert | LVEA | Y | Viewports | 16:04
16:02 | FAC | Nelly | FCES | N | Tech clean | 16:51
16:03 | FAC | Karen | EndY | N->Y | Tech clean | 16:51
16:03 | FAC | Kim | EndX | N | Tech clean | 17:10
16:03 | CAL | Tony, Dripta, Francisco | PCAL lab | LOCAL | Grab measurement equipment | 16:10
16:11 | CAL | Tony, Dripta | EndY | Y | PCAL measurement | 19:30
16:12 | PEM | Robert | LVEA | Y | Viewports, in and out till ~16:30 | 16:16
16:14 | OPS | Camilla | LVEA | Y -> N | LASER transition | 16:35
16:15 | CDS | Jonathan, Dave | Remote | N | OAF0 work, virtual machines, H0 will go down | 22:06
16:24 | VAC | Jordan | Mech room | N | Start up purge air | 16:49
16:46 | FAC/OPS | Richard | LVEA | N | Walkaround | 17:01
16:52 | PSL | Jason, RyanS | PSL enc | Y | PMC mode matching | 19:21
16:53 | OPS | LVEA | LVEA | N | LVEA IS LASER HAZARD | 04:16
16:53 | FAC | Tyler | LVEA | N | Crane inspections | 19:51
17:13 | EE | Fil | HAM7 | N | VAC gauge, out at 19:00 | 19:48
17:13 | VAC | Janos, Jordan | Ends | N | Mech room pump checks | 18:09
17:29 | FAC | Kim | LVEA | N | Tech clean | 19:02
17:37 | EE | Marc, Fernando | LVEA | N | ISC picomotors inspection, in at 18:18 | 19:07
17:54 | FAC | Eric | EndX | N | Ceiling sensors | 18:34
17:59 | VAC | Travis | LVEA | N | Close gate valves 5 & 7 | 18:27
18:09 | VAC | Jordan | LVEA | N | Join Travis, gate valves | 18:27
18:37 | TCS | TJ, Camilla | EndX | Y | HWS work | 20:02
19:17 | VAC | Travis, Jordan, Gerardo | LVEA | N | Close GVs 5 & 7 | 19:39
19:50 | VAC | Fil, Gerardo | LVEA, HAM7 | N | VAC gauge, reset HV | 20:05
20:19 | TCS | Camilla | LVEA | N | Untrip CO2X | 20:22
20:42 | FAC | Tyler | EndX then Y | N | Crane inspection | 23:00
20:57 | PEM | Robert | LVEA | N | Set up shaker for commissioning later this week | 21:57
21:14 | IAS | Jason, RyanC, Mitchell | LVEA | N | FARO surveying | 23:30
21:15 | CAL | Tony, Dripta | PCal Lab | LOCAL | Post-maintenance measurement | 22:12
21:29 | FAC | Chris | LVEA | N | FAMIS checks | 21:52
21:47 | ISC | Camilla | EX | YES | Beam profiling | 23:12
22:44 | VAC | Gerardo | LVEA | N | Picture on HAM7 | 23:00
23:39 | SAF | Oli, Ibrahim | LVEA | YES | Sweep & transition to HAZARD | 00:15
23:44 | VAC | Jordan, Janos | LVEA | - | Turn off purge air | 23:58
23:49 | SAF | Fil | LVEA | - | Moving crane | 00:15
23:51 | CAL | Tony | PCal Lab | Local | Measurement | 23:58
H1 General (PEM, SUS)
camilla.compton@LIGO.ORG - posted 16:39, Tuesday 10 December 2024 (81751)
SUS_CHARGE and PEM_MAG_INJ back to nominal start times next week

After our CO2 tests this morning 81723 we've reverted the PEM_MAG_INJ and SUS_CHARGE back to their nominal Tuesday start times of 7:20 and 7:45am. This was changed in 81474 but today was the first chance we got to turn off the CO2s before Tuesday maintenance started.

H1 General
oli.patane@LIGO.ORG - posted 16:34, Tuesday 10 December 2024 (81750)
Ops Eve Shift Start

TITLE: 12/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C / Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 4mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.43 μm/s
QUICK SUMMARY:

Just started relocking after finishing an initial alignment. Little bit of a spike in the secondary microseism but nothing too bad

H1 GRD
thomas.shaffer@LIGO.ORG - posted 16:26, Tuesday 10 December 2024 (81747)
Tested modified SRY locking in ALIGN_IFO

Sheila D, TJ S

There has been a bout of SRM M3 watchdog trips around the SRY locking step of initial alignment over the last few weeks (alog 81670 for example). Today Sheila and I discussed locking without the front-end triggering at all, and instead having the ALIGN_IFO node watch for good flashes and turn on the necessary filters and banks. I tried this out today, but I wasn't able to get it to lock any more reliably than what we have now. No matter the method, it would tend to catch with low AS_A values, ~2500 on NSUM vs. the normal 5000. From there, SRM would start to be driven before the node realized that it wasn't quite locked. I tried adding some code in ACQUIRE_SRY to turn the SRM M1 LOCK L input and the SRCL FM4 filters on and off, and even clear the history if necessary, but this didn't work without long settling periods between attempts.

I ended up keeping ISC_library.is_locked('SRY') defined as AS_A_DC_NSUM_OUTPUT > 4000. This value is a bit higher but safer: it will let the node run through DOWN, reset drives and integrators, and let SRM settle a bit. I don't think this is a fix; it's barely even a band-aid. It will need some more thought on how to only catch on the correct mode.
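
For illustration, a schematic of the threshold check described here (the real ISC_library.is_locked is site code not reproduced in this alog; the channel prefix is an assumption):

    # Hypothetical Guardian-style sketch of the SRY lock check described above.
    AS_A_LOCK_THRESHOLD = 4000  # counts on AS_A_DC_NSUM; above the ~2500-level false catches

    def sry_looks_locked(ezca):
        """True only when the AS_A flash is strong enough to be a real catch."""
        return ezca['ASC-AS_A_DC_NSUM_OUTPUT'] > AS_A_LOCK_THRESHOLD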

Images attached to this report
LHO FMCS
ibrahim.abouelfettouh@LIGO.ORG - posted 16:21, Tuesday 10 December 2024 (81748)
LVEA Sweep 12/10

Oli, Ibrahim

IFO has been swept.

Of note:

Images attached to this report
H1 CDS (CDS, ISC, SYS, TCS)
fernando.mera@LIGO.ORG - posted 16:00, Tuesday 10 December 2024 (81746)
Picomotor controller inspections for LVEA

Per the WP12246:

Visual inspections were performed in the LVEA to trace the wiring and verify the picomotor controllers' existence, connections, and spares (physically only). The information gathered was updated in document E1200072. Important findings were made, and the document is now very close to the actual installation. Electrical investigations will follow to determine the spares. The rack names and the physical locations of the controllers were verified using the O5 ISC wiring diagram D1900511.

Marc, Fernando
