Reports until 20:05, Monday 30 September 2024
H1 General
oli.patane@LIGO.ORG - posted 20:05, Monday 30 September 2024 (80383)
Ops EVE Midshift Status

We've been Observing since before the start of my shift and we have been Locked for 5 hours. Our range hasn't gotten better and I haven't had a chance to tune up the squeezing. Squeezing is pretty bad, with both the 350Hz and the 1.7kHz bands around -1.2; the lowest they reached was ~-1.5 a couple of hours into the lock. Besides the abysmal range, everything is going well.

H1 DetChar (DetChar)
yashasvininad.moon@LIGO.ORG - posted 18:47, Monday 30 September 2024 (80382)
DQ Shift LHO 16 Sept- 22 Sept 2024
1. The average duty cycle for Hanford for this week was 69.25%, with a high of 95.6% on Friday
2. The BNS range fluctuated between 150 and 160 Mpc; the lower end (150 Mpc) was due to high winds

3. Strain lines in h(t):
   Some of the strain lines, especially at lower frequencies, can be explained by ground motion
   There are noise lines in the h(t) spectrogram every day at 500 Hz; most of them correlate with the period just after H1 gets locked back, but some persist. I have attached a plot highlighting this


4. Some glitch clusters happen right before losing lock and can be explained by comparing with alogs; others (the one on Tuesday) are unexplained

5. Lockloss:
   Some locklosses can be explained by increased ground motion (e.g. the Monday, Wednesday, and Friday locklosses)
   First FSS_OSCILLATION lockloss from NLN in a year on Wednesday 18 Sept
   Another lockloss tagged FSS on Thursday
   A lockloss with the FSS oscillation tag on Saturday

6. PEM tab:
   Most of the noise in the corner-station BSCs happens when H1 is not observing
   Noise in the acoustic channels can be related to environmental factors like wind
   There is a recurring noise line at ~11 Hz in BSC1 motion X (and sometimes in other BSCs) every day starting at 15:00 UTC

7. H1:ASC-DHARD_Y_OUT_DQ and H1:PEM-EX_EFM_BSC9_ETMX_Y_OUT_DQ appear multiple times in the hveto tab; there are also some new channels.
8. The fscan plots have appeared and the line count varies from 600 to ~850 with an average of 684.
Images attached to this report
H1 AOS
ryan.short@LIGO.ORG - posted 16:53, Monday 30 September 2024 (80380)
Ops Day Shift Summary

TITLE: 09/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Planned commissioning time this morning, then a lockloss in the early afternoon followed by a simple relocking process, made for a relatively light day.

H1 has now been locked and observing for 1.5 hours.

LOG:

Start Time | System | Name         | Location | Laser_Haz | Task                 | End Time
23:58      | SAF    | H1           | LVEA     | YES       | LVEA is laser HAZARD | Ongoing
16:45      | CAL    | Tony, Jackie | PCal Lab | local     | Measuring            | 17:26
17:20      | FAC    | Kim          | MX       | n         | Technical cleaning   | 18:27
17:32      | PEM    | Robert       | LVEA     | -         | Shaker tests         | 18:32
H1 General
oli.patane@LIGO.ORG - posted 16:29, Monday 30 September 2024 (80381)
Ops Eve Shift End

TITLE: 09/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 3mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.17 μm/s
QUICK SUMMARY:

Observing at 140 Mpc and have been Locked for 1.5 hours. Might drop out of Observing to run sqz align if Livingston loses lock or drops out of observing as well.

LHO General
filiberto.clara@LIGO.ORG - posted 14:29, Monday 30 September 2024 (80378)
OSB - Lightning Rod Repaired

WP 12095

Damaged lightning rod and mounting bracket on the OSB viewing platform replaced.

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 14:20, Monday 30 September 2024 - last comment - 15:04, Monday 30 September 2024(80376)
Lockloss @ 20:26 UTC

Lockloss @ 20:26 UTC - link to lockloss tool

This would appear to be another FSS-related lockloss, as evidenced by the fact that the IMC and arms lost lock at the same time, we see some glitches in the FSS fast channel, and nothing else jumps out to me for another cause.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 15:04, Monday 30 September 2024 (80379)

H1 back to observing at 22:04 UTC. I needed to adjust SRM slightly to lock DRMI, but otherwise a fully automated relock.

H1 SQZ
camilla.compton@LIGO.ORG - posted 12:25, Monday 30 September 2024 (80373)
AS42 sensing matrix re-measured

Sheila, Camilla

Continuing what Naoki/Vicky started, I moved ZM4, ZM6, and ZM5 by -50urad and then +50urad and recorded the change in AS_A/B_AS42_PIT/YAW. It may have been easier if I'd changed the AS_A/B AS42 offsets to zero the PIT/YAW outputs to start with...

Plot attached of ZM4, ZM5 and ZM6. 

There is some cross coupling, and ZM5 gave very strange results in pitch and yaw, with overshoot and with AS42 moving in the same direction for opposite directions of alignment move. This suggests we should use ZM4 and ZM6, as we do for our SCAN_SQZ_ALIGNMENT script.

Sensing/Input matrices calculated using /sqz/h1/scripts/ASC/AS42_sensing_matrix_cal.py

Using ZM4 and ZM6.
PIT Sensing Matrix is:
[[-0.0048 -0.0118]
 [ 0.0071  0.0016]]
PIT Input Matrix is:
[[ 21.02496715 155.05913272]
 [-93.29829172 -63.07490145]]
YAW Sensing Matrix is:
[[-0.00085 -0.009  ]
 [ 0.0059   0.0029 ]]
YAW Input Matrix is:
[[  57.2726375   177.74266811]
 [-116.52019354  -16.78680754]]
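
For reference, the input matrices quoted above are just the matrix inverses of the corresponding sensing matrices. A minimal numpy sketch reproducing the numbers (this is only a cross-check, not the contents of AS42_sensing_matrix_cal.py):

import numpy as np

# Sensing matrices from above: rows are the AS_A/AS_B AS42 signals,
# columns are the ZM4/ZM6 moves.
pit_sensing = np.array([[-0.0048, -0.0118],
                        [ 0.0071,  0.0016]])
yaw_sensing = np.array([[-0.00085, -0.009 ],
                        [ 0.0059,   0.0029]])

# The input matrix that maps sensor signals back to ZM4/ZM6 drives is the
# inverse of the sensing matrix.
print(np.linalg.inv(pit_sensing))   # recovers the PIT input matrix above
print(np.linalg.inv(yaw_sensing))   # recovers the YAW input matrix above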

Images attached to this report
H1 General
ryan.short@LIGO.ORG - posted 12:04, Monday 30 September 2024 (80375)
Ops Day Mid Shift Report

H1 went out of observing from 15:35 to 18:37 UTC for planned commissioning activities, which included PRCL FF testing, an update to the SQZ_PMC Guardian, shaker tests at HAM1, ITMY compensation plate sweeps, and measuring the AS42 sensing matrix by moving ZMs.

H1 has now been locked for 8 hours.

H1 CAL
louis.dartez@LIGO.ORG - posted 11:48, Monday 30 September 2024 (80372)
Troubleshooting Cal
This is a late entry to list everything we checked on Friday afternoon (which spilled to roughly midnight CT).

-- Current Status --
LHO is currently running the same cal configuration as in 20240330T211519Z. The error reported over the weekend using the PCAL monitoring lines suggests that LHO is well within the 10% magnitude & 10 degree error window after about 15 minutes into each lock (attached image). Here is a link to the 'monitoring line' grafana page that we often look at to keep track of the PCALY/GDS_CALIB_STRAIN response at the monitoring line frequencies. The link covers Saturday to today. This still suggests the presence of calibration error near 33Hz that is not currently understood but, as mentioned earlier, it's now well within the 10%/10deg window. We think that the SRCL offset adjustment in LHO:80334 and LHO:80331 accounts for the largest contribution to the improvement of the calibration at LHO in the past week. 
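
For context, the monitoring-line check amounts to requiring the GDS_CALIB_STRAIN/PCAL ratio at each line frequency to stay within 10% in magnitude and 10 degrees in phase of unity. A minimal sketch under that assumption (the demodulated complex line amplitudes are taken as given; the actual channel handling in the monitoring pipeline is not shown):

import numpy as np

def monitoring_line_error(pcal_line, strain_line, mag_tol=0.10, phase_tol_deg=10.0):
    """Compare the complex line amplitude recovered from GDS_CALIB_STRAIN
    against the PCAL drive at the same monitoring-line frequency.
    Inputs here are hypothetical demodulated complex amplitudes."""
    ratio = strain_line / pcal_line              # ideally 1.0 at 0 degrees
    mag_err = abs(abs(ratio) - 1.0)
    phase_err_deg = np.degrees(np.angle(ratio))
    ok = (mag_err < mag_tol) and (abs(phase_err_deg) < phase_tol_deg)
    return mag_err, phase_err_deg, ok

# Example: ~3% magnitude and ~2 degree error sits comfortably inside the window.
print(monitoring_line_error(1.0 + 0.0j, 1.03 * np.exp(1j * np.radians(2.0))))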

-- What's Been Tried --
No attempts by CAL (me, JoeB, Vlad) to correct the 33Hz error have been successful. Many things were tried throughout the week (mostly on Tuesday & Friday) that went without being properly logged. Here's a non-exhaustive list of checks we've tried (in no particular order):

  • We cross-checked every single front end filter and gain parameter in the pyDARM model file against what's installed at LHO. Other than non-impacting foton file changes (e.g. out-of-path module changes), the only mismatch we identified is that the bias voltage in pyDARM was set to 3.3 instead of the IFO's 3.25.
  • We quickly implemented SRC detuning by copying the LLO setup. I did this by making on-the-fly changes to pyDARM on a local copy of the repo. I'll have to follow up with what these exact adjustments entailed. This seemed to do what we expected but it did not significantly impact the large 33Hz error.
  • Some GDS pipeline-affecting parameters made their way into the pydarm ini at LHO. These changes were made around the time of the Cal F2F over the summer. I reverted them, regenerated GSTLAL filters and installed them in the GDS pipeline. This didn't help. The commits in question are d2bfe349, 8717a9f5, and dc7208e0.
  • We don't believe the delays being reported by the pyDARM fits of the actuation delay on the PUM stage. We made adjustments to the fitting parameters for this stage to increase the fit range to roughly 400Hz in the hopes that pyDARM would produce a somewhat believable number. More details on this item as time allows; we are going to tune our injections to reduce uncertainty on these measurements in particular. This tuning will need to be done in tandem with other tunings (e.g. reducing PCAL at low frequencies to avoid leakage from polluting simultaneous higher frequency measurements).
  • We made similar adjustments to the sensing fit parameters. We extended the low end of the fit range to better capture the detuning near 10Hz in the fit.
  • Once it approached midnight on Friday night we reverted everything and left the IFO back in the state we found it in. This process is also not straightforward and requires creating a new 'fake' pyDARM report each time due to recently discovered bugs in the pyDARM infrastructure. This bug is being tracked here.
-- What Changed? --
We think the SRC change(s) on September 5 (LHO:79929, also discussed in LHO:79903) line up well with the time frame at which the 33Hz error issue popped up. Here is a screenshot showing that the 33Hz error showed up on September 5. I've also attached a shot of ndscope showing the SRCL offset change on 09/05 from -175cts to -290cts here. cal_better_after_srcl_changed.png shows the error tracked by the monitoring lines before and after the SRCL offset was changed back to -190cts. The last image is a scope shot of that change (here).
Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 11:01, Monday 30 September 2024 (80374)
PSL 10-Day Trends

FAMIS 31053

The NPRO power, amplifier powers, and a few of the laser diode readings inside the amps show some drift over the past several days that tracks between channels, sometimes inversely. Looking at past checks, this relationship isn't new, but the NPRO power doesn't normally drift around this much. Since it's still not a huge difference, I'm mostly just noting it here and don't think it's a major issue.

The rise in PMC reflected power has definitely slowed down, and has even dropped noticeably in the past day or so by almost 0.5W.

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 10:16, Monday 30 September 2024 (80371)
SQZ PMC PZT checker added to SQZ_MANAGER

The operator team has been noticing that we are dropping out of observing for the PMC PZT to relock (80368, 80214, 80206...).

There's already a PZT_checker() in SQZ_MANAGER at FDS_READY_IFO and FIS_READY_IFO to check that the OPO and SHG PZTs are not at the end of their range (the OPO PZT re-locks if not between 50-110V and the SHG if not between 15-85V). If they are, it requests them to unlock and relock. This is to force them into a better place before we go into observing.

I've added PMC to the PZT_checker; it will relock if the PMC PZT is outside of 15-85V (full range is 0-100V). SQZ_MANAGER has been reloaded. Plan to take the GRD through DOWN and back to FDS.
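
For illustration, a minimal Guardian-style sketch of the kind of range check described above. The channel names and the structure below are placeholders, not the actual SQZ_MANAGER code; the voltage windows are the ones quoted in this entry:

# Hypothetical sketch of a PZT range check in the Guardian/ezca style.
PZT_RANGES = {
    'OPO': ('H1:SQZ-OPO_PZT_VOLTS', 50, 110),   # relock if outside 50-110 V
    'SHG': ('H1:SQZ-SHG_PZT_VOLTS', 15, 85),    # relock if outside 15-85 V
    'PMC': ('H1:SQZ-PMC_PZT_VOLTS', 15, 85),    # newly added; full range is 0-100 V
}

def pzt_checker(ezca):
    """Return the list of cavities whose PZT is near the end of its range
    and should be unlocked/relocked before going to observing."""
    bad = []
    for name, (chan, lo, hi) in PZT_RANGES.items():
        volts = ezca[chan]
        if not (lo < volts < hi):
            bad.append(name)
    return bad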

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 09:50, Monday 30 September 2024 (80370)
HAM2 Annulus Ion Pump Failure

The annulus ion pump failed at about 4:04 AM. Since the annulus volume is shared by HAM1 and HAM2, the other pump is actively working on the system, as noted on the attached trend plot.

The system will be evaluated as soon as possible to determine whether the pump or the controller needs replacing.

Images attached to this report
H1 ISC
camilla.compton@LIGO.ORG - posted 09:29, Monday 30 September 2024 - last comment - 11:32, Thursday 03 October 2024(80369)
PRCL FF measurements taken as FM6 fit didn't reduce PRCL noise.

I tried Elenna's FM6 from 80287; this made the PRCL coupled noise worse, see the first attached plot.

Then Ryan turned off the CAL lines and we retook the preshaping (PRCLFF_excitation_ETMYpum.xml) and PRCL injection (PRCL_excitation.xml) templates. I took the PRCL_excitation.xml injection with the PRCL FF off and increased the amplitude from 0.02 to 0.05 to increase coherence above 50Hz. Exported as prclff_coherence/tf.txt and prcl_coherence/tf_FFoff.txt. All in /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 13:50, Monday 30 September 2024 (80377)

Elenna pointed out that I tested the wrong filter; the new one is actually FM7, labeled "new0926". We can test that on Thursday.

elenna.capote@LIGO.ORG - 11:32, Thursday 03 October 2024 (80444)

The "correct" filter today in FM7 was tested today and still didn't work. Possibly because I still didn't have the correct pre-shaping applied in this fit. I will refit using the nice measurement Camilla took in this alog.

LHO VE
david.barker@LIGO.ORG - posted 08:31, Monday 30 September 2024 (80367)
Mon CP1 Fill

Mon Sep 30 08:11:07 2024 INFO: Fill completed in 11min 3secs

Jordan confirmed a good fill curbside.

Because of cold outside temps (8C, 46F) the TCs only just cleared the -130C trip. I have increased the trip temps to -110C, the early-winter setting.
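
As I read it, the fill is flagged complete once the exhaust TCs fall below the trip temperature, so a warmer trip setting gives more margin on cold mornings. A toy check under that assumption (the numbers below are illustrative only):

def fill_tripped(tc_temps_c, trip_temp_c=-110.0):
    """Toy check: did any exhaust thermocouple clear (fall below) the trip
    temperature during the fill? -110C is the early-winter setting noted
    above; -130C was the previous setting."""
    return min(tc_temps_c) < trip_temp_c

# A fill whose TCs bottom out near -132C barely clears a -130C trip,
# but clears -110C with plenty of margin.
print(fill_tripped([-125.0, -132.0], trip_temp_c=-130.0))  # True, but marginal
print(fill_tripped([-125.0, -132.0], trip_temp_c=-110.0))  # True, with margin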

Images attached to this report
H1 ISC (ISC, Lockloss, PSL)
ryan.short@LIGO.ORG - posted 08:13, Monday 30 September 2024 - last comment - 09:20, Monday 30 September 2024(80358)
Looking into Recent Locklosses with FSS_OSCILLATION Tags

I don't consider this a full investigative report into why we've been having these FSS_OSCILLATION tagged locklosses, but here are some of my quick initial findings:

Please feel free to add comments with anything else anyone finds.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:20, Monday 30 September 2024 (80366)

Two interesting additional notes:

  1. The first lockloss with the FSS_OSCILLATION tag was on Tuesday September 17th, during the fire alarm test; we've since had 28 more locklosses (from all states) with this tag. Could either the Tuesday maintenance activities (ETMX DAC swapped) in alog 80153 or the fire alarm lockloss 1410638748 have started this?
  2. In the FSS_OSCILLATION tagged locklosses (and some that aren't tagged), H1:ASC-AS_A_DC_NSUM_OUT_DQ and H1:IMC-TRANS_OUT_DQ are losing lock at the same time, which is very rare in O4. Iain Morton checked in G2401576 (locklosses tagged "SAME") that we'd had fewer than 3 of these locklosses in the whole of O4 before the emergency vent. In G2201762 O3a_O3b_summary.pdf, I called this a TOGETHER and FAST lockloss, and we saw it exclusively in the second half of O3b. A normal lockloss has AS_A losing lock > 100ms before the IMC. See the attached plot of the last 3 NLN locklosses; the left plot is normal and the right two are the strange type where the IMC loses lock at the same time.
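
For illustration, a minimal sketch of the timing comparison described above, assuming the two time series (e.g. H1:ASC-AS_A_DC_NSUM_OUT_DQ and H1:IMC-TRANS_OUT_DQ) have already been fetched onto a common time axis; the 50% drop threshold is an arbitrary choice for this sketch, not what the lockloss tool actually uses:

import numpy as np

def drop_time(t, x, frac=0.5):
    """First time x falls below frac of its median level over the first
    quarter of the window (threshold choice is arbitrary for this sketch)."""
    level = np.median(x[: len(x) // 4])
    below = np.flatnonzero(x < frac * level)
    return t[below[0]] if below.size else None

def classify_lockloss(t, as_a_nsum, imc_trans):
    """'NORMAL' if AS_A drops > 100 ms before the IMC, 'TOGETHER' otherwise."""
    t_as = drop_time(t, as_a_nsum)
    t_imc = drop_time(t, imc_trans)
    if t_as is None or t_imc is None:
        return 'UNKNOWN'
    if t_imc - t_as > 0.1:   # AS_A loses lock well before the IMC
        return 'NORMAL'
    return 'TOGETHER'        # the rare type: both drop at essentially the same time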
Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 07:37, Monday 30 September 2024 (80365)
Ops Day Shift Start

TITLE: 09/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.13 μm/s
QUICK SUMMARY: H1 has been locked and observing for almost 4 hours. One lockloss overnight, which doesn't have an obvious cause but looks to have a sizeable ETMX glitch. Commissioning time is scheduled today from 15:30 to 18:30 UTC.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:59, Sunday 29 September 2024 - last comment - 21:16, Monday 30 September 2024(80364)
Lockloss

Lockloss @ 09/30 04:54UTC

We went out of Observing at 04:53:56, and then lost lock four seconds later at 04:54:01.

Comments related to this report
ryan.short@LIGO.ORG - 09:56, Monday 30 September 2024 (80368)PSL, SQZ

It looks like the reason for dropping observing at 04:53:56 UTC (four seconds before lockloss) was due to the SQZ PMC PZT exceeding its voltage limit, so it unlocked and Guardian attempted to bring things back up. This has happened several times before, where Guardian is usually successful in bringing things back and H1 returns to observing within minutes, so I'm not convinced this is the cause of the lockloss.

However, when looking at Guardian logs around this time, I noticed one of the first things that could indicate a cause for a lockloss was from the IMC_LOCK Guardian, where at 04:54:00 UTC it reported "1st loop is saturated" and opened the ISS second loop. While this process was happening over the course of several milliseconds, the lockloss occurred. Indeed, it seems the drive to the ISS AOM and the second loop output suddenly dropped right before the first loop opened, but I don't notice a significant change in the diffracted power at this time (see attached screenshot). It's unclear as of yet why this would have happened and caused this lockloss.

Other than the ISS, I don't notice any other obvious cause for this lockloss.

Images attached to this comment
oli.patane@LIGO.ORG - 21:16, Monday 30 September 2024 (80384)

After comparing Ryan's channels to DARM, I'm still not sure whether this lockloss was caused by something in the PSL or not; see attached.

Images attached to this comment