Reports until 12:31, Tuesday 01 October 2024
H1 General
ryan.crouch@LIGO.ORG - posted 12:31, Tuesday 01 October 2024 (80399)
OPS Tuesday day shift update

The TTFSS investigation is still ongoing. Microseism has risen greatly, to above the 90th percentile, and the wind is also rising, but it's only ~20 mph.

H1 General
nyath.maxwell@LIGO.ORG - posted 12:22, Tuesday 01 October 2024 (80398)
Alog, awiki, svn downtime due to VM host issues in GC
There was an issue today where the VM hypervisor running alog, svn, awiki, ldap, and some other services lost its network. It has been restored.
H1 PEM
ryan.crouch@LIGO.ORG - posted 10:41, Tuesday 01 October 2024 (80392)
High Microseism today

The microseism today has risen fully above the 90th percentile. Recreating Robert's plot from alog 74510, the seismometers seem to show that the largest phase difference is between EX and the CS, and the 2nd highest is with EY, which suggests it's because of the motion from our coast? From windy.com, there's currently a low-pressure system with 10 meter waves off the coast of WA that's pretty much along the axis of the Xarm. These seismometers are also dominated by a ~0.12 Hz oscillation.
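For reference, a minimal sketch (not the script behind the attached plot) of how the relative microseism phase between two seismometers, e.g. EX versus the corner station, could be estimated; data fetching and channel names are left out, and the inputs are assumed to be plain numpy arrays:

    # Minimal sketch, assuming the seismometer data are already in hand as numpy
    # arrays sampled at fs; estimates the relative phase near the ~0.12 Hz
    # microseism peak from the cross-spectral density.
    import numpy as np
    from scipy.signal import csd

    def microseism_phase_deg(x, y, fs, f0=0.12, nperseg=4096):
        """Phase (degrees) of the cross spectrum of x and y at the bin nearest f0."""
        f, pxy = csd(x, y, fs=fs, nperseg=nperseg)
        idx = np.argmin(np.abs(f - f0))
        return np.degrees(np.angle(pxy[idx]))

    # Synthetic check: y lags x by 45 degrees at 0.12 Hz, so the returned phase
    # should have magnitude ~45 (sign follows scipy's cross-spectrum convention).
    fs = 16.0
    t = np.arange(4096) / fs
    x = np.sin(2 * np.pi * 0.12 * t)
    y = np.sin(2 * np.pi * 0.12 * t - np.pi / 4)
    print(microseism_phase_deg(x, y, fs))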

Images attached to this report
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 10:41, Tuesday 01 October 2024 - last comment - 14:50, Thursday 03 October 2024(80396)
OPO PZT voltage seems related to low range overnight

From yesterday afternoon until 2 am, we had low range because the squeezing angle was not well tuned. As Naoki noted in 78529, this can happen when the OPO PZT is at a lower voltage than we normally operate at. This probably could have been improved by running SCAN_SQZANG.

I've edited the OPO_PZT_OK checker so that it requires the OPO PZT to be between 70 and 110 V (it used to be 50 to 110 V). This might mean that sometimes the OPO has difficulty locking (e.g., 76642), which will cause the IFO to call for help, but it will avoid running with low range when it needs to run SCAN_SQZANG.
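For illustration only, a minimal sketch of this kind of range check with the new 70-110 V limits; this is not the actual SQZ Guardian OPO_PZT_OK code, and the monitor channel named in the comment is an assumption:

    # Illustrative sketch only (not the actual SQZ Guardian checker).
    OPO_PZT_MIN_V = 70.0   # raised from 50 V in this change
    OPO_PZT_MAX_V = 110.0

    def opo_pzt_ok(voltage_v):
        """True if the OPO PZT voltage is inside the allowed band."""
        return OPO_PZT_MIN_V <= voltage_v <= OPO_PZT_MAX_V

    # A checker like this would read the OPO PZT monitor channel (name assumed,
    # e.g. H1:SQZ-OPO_PZT_1_MON) and, when it returns False, prompt a relock or
    # a SCAN_SQZANG rather than running with low range.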

 

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:50, Thursday 03 October 2024 (80447)

Reverted OPO checker back to 50-110V as we moved the OPO crystal spot. 

H1 AOS
jim.warner@LIGO.ORG - posted 10:28, Tuesday 01 October 2024 (80397)
Monthly wind fence inspection FAMIS Request ID:21389

No new damage noted. First pic is EX, second is EY.

Images attached to this report
H1 CDS (ISC)
erik.vonreis@LIGO.ORG - posted 10:02, Tuesday 01 October 2024 (80395)
h1iscey frontend replaced

The h1iscey frontend server was replaced because of a possible bad PCI slot that was preventing connection to one of the Adnaco PCIe expansion boards in the IO chassis; see here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80035

The new frontend is running without issue, connected to all four Adnacos.

The new frontend is a Supermicro X11SRL-F, the same model as the old frontend.

LHO VE
david.barker@LIGO.ORG - posted 08:46, Tuesday 01 October 2024 (80390)
Tue CP1 Fill

Tue Oct 01 08:09:24 2024 INFO: Fill completed in 9min 20secs

 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:43, Tuesday 01 October 2024 - last comment - 09:22, Tuesday 01 October 2024(80389)
VACSTAT detected BSC3 glitch at 02:13:44 PDT Tue 01 Oct 2024

VACSTAT issued alarms for a BSC vacuum glitch at 02:13 this morning. Attachment shows main VACSTAT MEDM, the latch plots for BSC3, and a 24 hour trend of BSC2 and BSC3.

The glitch is a square wave, 3 seconds wide. The pressure goes up from 5.3e-08 to 7.5e-08 Torr for these 3 seconds. Neighbouring BSC2 shows no glitch in the trend.

Looks like a PT132_MOD2 sensor glitch.
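As a toy illustration only (this is not the VACSTAT code), a step excursion of this kind can be flagged by comparing each sample to a baseline level:

    # Toy sketch, not VACSTAT: flag samples sitting above a multiple of the
    # median baseline, as in the 3-second excursion from 5.3e-08 to 7.5e-08 Torr.
    import numpy as np

    def flag_pressure_glitch(pressure_torr, ratio=1.3, min_samples=3):
        """Indices of an excursion above ratio * median baseline, if it lasts
        at least min_samples samples."""
        baseline = np.median(pressure_torr)
        idx = np.flatnonzero(pressure_torr > ratio * baseline)
        return idx if idx.size >= min_samples else np.array([], dtype=int)

    # Example shaped like this morning's event (one sample per second):
    p = np.array([5.3e-8] * 10 + [7.5e-8] * 3 + [5.3e-8] * 10)
    print(flag_pressure_glitch(p))   # -> [10 11 12]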

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 09:06, Tuesday 01 October 2024 (80391)

vacstat_ioc.service was restarted on cdsioc0 at 09:04 to clear the latched event.

david.barker@LIGO.ORG - 09:22, Tuesday 01 October 2024 (80393)

This is the second false-positive VACSTAT PT132 glitch detected.

02:13 Tue 01 Oct 2024
02:21 Sun 01 Sep 2024

alog for 01sep glitch

H1 General (Lockloss)
thomas.shaffer@LIGO.ORG - posted 08:23, Tuesday 01 October 2024 (80388)
The 2 overnight lock losses

1411809803

The FSS glitching isn't seen in this lockloss, which ended an 11-hour lock, but our old friend the DARM wiggle reared its head again. (Attachment 1)

1411824447

Useism came up fast, so many of the signals are moving a bit more, but DARM sees a 400 ms ringup that I don't see elsewhere. Again, the FSS didn't seem to be glitching at this time.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:33, Tuesday 01 October 2024 (80387)
Ops Day Shift Start

TITLE: 10/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 2mph Gusts, 1mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.73 μm/s
QUICK SUMMARY: The IFO had just lost lock at low noise coil drivers as I arrived. I put it into IDLE, and maintenance is already starting. Useism has grown to above the 90th percentile, perhaps the cause of the lockloss, but I still need to look into this.

H1 CDS
erik.vonreis@LIGO.ORG - posted 07:19, Tuesday 01 October 2024 (80386)
Workstations and Displays updated

Workstations and wall displays were updated and rebooted. This was an OS package update. Conda packages were not updated.

H1 General
oli.patane@LIGO.ORG - posted 22:01, Monday 30 September 2024 (80385)
Ops Eve Shift End

TITLE: 10/01 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Observing at 140Mpc and have been locked for 7 hours. Range is still low, unfortunately, but it's been a quiet evening, and we did get another superevent!
LOG:

23:00 Observing and have been Locked for 1 hour

23:47 Superevent S240930du

H1 General
oli.patane@LIGO.ORG - posted 20:05, Monday 30 September 2024 (80383)
Ops EVE Midshift Status

We've been Observing since before the start of my shift and have been Locked for 5 hours. Our range hasn't gotten better, and I haven't had a chance to tune up the squeezing. Squeezing is pretty bad, with both the 350 Hz and the 1.7 kHz bands being around -1.2, the lowest they reached being ~-1.5 a couple of hours into the lock. Besides the abysmal range, everything is going well.

H1 DetChar (DetChar)
yashasvininad.moon@LIGO.ORG - posted 18:47, Monday 30 September 2024 (80382)
DQ Shift LHO 16 Sept- 22 Sept 2024
1. The average duty cycle for Hanford this week was 69.25%, with a high of 95.6% on Friday.
2. The BNS range fluctuated between 150-160 Mpc; the lower end of 150 Mpc is due to high winds.

3. Strain lines in h(t):
   Some of the strain lines, especially at lower frequencies, can be explained by ground motion.
   Noise lines appear in the h(t) spectrogram every day at 500 Hz; most of them can be correlated with H1 getting locked back, but some of them persist. I have attached a plot highlighting this.


4. Some glitch clusters happen right before losing lock and can be explained by comparing to alogs; others (the one on Tuesday) are unexplained.

5. Locklosses:
   Some locklosses can be explained by increased ground motion (e.g., the Monday, Wednesday, and Friday locklosses).
   First FSS_OSCILLATION lockloss from NLN in a year, on Wednesday 18th Sept.
   Another lockloss tagged FSS on Thursday.
   Lockloss with the FSS oscillation tag on Saturday.

6. PEM tab:
   Most of the noise in the corner BSCs happens when H1 is not observing.
   Noise in acoustics can be related to environmental factors like wind.
   A recurring noise line appears in BSC1 motion X (and sometimes other BSCs too) at ~11 Hz every day, starting at 15:00 UTC.

7. H1:ASC-DHARD_Y_OUT_DQ and H1:PEM-EX_EFM_BSC9_ETMX_Y_OUT_DQ appear multiple times in the hveto tab; there are also some new channels.
8. The fscan plots have appeared, and the line count varies from 600 to ~850, with an average of 684.
Images attached to this report
H1 AOS
ryan.short@LIGO.ORG - posted 16:53, Monday 30 September 2024 (80380)
Ops Day Shift Summary

TITLE: 09/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Planned commissioning time this morning, then a lockloss in the early afternoon followed by a simple relocking process, made for a relatively light day.

H1 has now been locked and observing for 1.5 hours.

LOG:

Start Time | System | Name | Location | Laser_Haz | Task | Time End
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | Ongoing
16:45 | CAL | Tony, Jackie | PCal Lab | local | Measuring | 17:26
17:20 | FAC | Kim | MX | n | Technical cleaning | 18:27
17:32 | PEM | Robert | LVEA | - | Shaker tests | 18:32
H1 General
oli.patane@LIGO.ORG - posted 16:29, Monday 30 September 2024 (80381)
Ops Eve Shift Start

TITLE: 09/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 3mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.17 μm/s
QUICK SUMMARY:

Observing at 140 Mpc and have been Locked for 1.5 hours. Might drop out of Observing to run sqz align if Livingston loses lock or also drops out.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:59, Sunday 29 September 2024 - last comment - 21:16, Monday 30 September 2024(80364)
Lockloss

Lockloss @ 09/30 04:54UTC

We went out of Observing at 04:53:56, and then lost lock four seconds later at 04:54:01.

Comments related to this report
ryan.short@LIGO.ORG - 09:56, Monday 30 September 2024 (80368)PSL, SQZ

It looks like the reason for dropping observing at 04:53:56 UTC (four seconds before the lockloss) was that the SQZ PMC PZT exceeded its voltage limit, so it unlocked and Guardian attempted to bring things back up. This has happened several times before, where Guardian is usually successful in bringing things back and H1 returns to observing within minutes, so I'm not convinced this is the cause of the lockloss.

However, when looking at Guardian logs around this time, I noticed one of the first things that could indicate a cause for the lockloss was from the IMC_LOCK Guardian, where at 04:54:00 UTC it reported "1st loop is saturated" and opened the ISS second loop. While this process was happening over the course of several milliseconds, the lockloss occurred. Indeed, it seems the drive to the ISS AOM and the second loop output suddenly dropped right before the first loop opened, but I don't notice a significant change in the diffracted power at this time (see attached screenshot). It's unclear as of yet why this would have happened and caused this lockloss.

Other than the ISS, I don't notice any other obvious cause for this lockloss.

Images attached to this comment
oli.patane@LIGO.ORG - 21:16, Monday 30 September 2024 (80384)

After comparing Ryan's channels to DARM, I'm still not sure whether this lockloss was caused by something in the PSL or not; see attached.

Images attached to this comment
H1 AOS (SEI)
neil.doerksen@LIGO.ORG - posted 20:18, Thursday 18 July 2024 - last comment - 09:31, Tuesday 01 October 2024(79231)
Dominant Axes During EQ Lockloss

Edit : Neil knows how to make links, now. Tony and Camilla were instrumental in this revelation.

This is an update to my previous post (Neil's previous post on this topic) about looking for possible explanations for why similar seismic wave velocities on-site may or may not knock us out of lock.

The same channels are used, in addition to:

SEI- * ARM_GND_BLRMS_30M_100M
SEI- * ARM_GND_BLRMS_100M_300M
Where * is C, D, X, or Y.

I have looked at all earthquake events in O4b, but only the ones which knocked us out of lock. This is to simplify the pattern search, for now. Here are the results.

Total events : 29

Events with ISI y-motion dominant : 11 (30-100 Hz) : 8 (100-300 Hz)
Events with ISI x-motion dominant : 3 (30-100 Hz) : 0 (100-300 Hz)
Events with ISI z-motion dominant : 9 (30-100 Hz) : 19 (100-300 Hz)

Events with ISI xy-motion dominant : 1 : 0 (Both axes are similar in amplitude.)
Events with ISI yz-motion dominant : 0 : 1
Events with ISI xz-motion dominant : 0 : 1

Events with ISI xyz-motion dominant : 5 : 0

Total SEI- * ARM recorded events : 8
CARM dominant events : 7 : 8
C/XARM dominant events : 1 : 0

The conclusion is that in the 30-100 Hz band, z- and y-axis motion are about equally likely to be dominant. In the 100-300 Hz band, the ratio of y- to z-dominant events is about 1:2 during lockloss.

Clearly, common modes are a common (^_^) cause of lockloss.

Note that velocity amplitudes should be explored more.
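For clarity, a minimal sketch of the tallying described above (not the actual analysis script): given per-axis peak BLRMS values for one event, pick the dominant axis, or a combination when several axes peak at similar amplitude, and count the results over all events:

    # Minimal sketch of the bookkeeping, with made-up peak values; axes within
    # `tol` of the maximum are counted as jointly dominant (e.g. 'YZ', 'XYZ').
    from collections import Counter

    def dominant_axes(peaks, tol=0.1):
        top = max(peaks.values())
        winners = sorted(ax for ax, v in peaks.items() if v >= (1 - tol) * top)
        return "".join(winners)

    events = [{"X": 0.2, "Y": 0.9, "Z": 0.85}, {"X": 0.1, "Y": 0.3, "Z": 0.9}]
    print(Counter(dominant_axes(e) for e in events))   # Counter({'YZ': 1, 'Z': 1})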

Comments related to this report
neil.doerksen@LIGO.ORG - 09:31, Tuesday 01 October 2024 (80394)

NOTE : All units are in mHz, not Hz.
