H1 TCS
ryan.short@LIGO.ORG - posted 16:17, Wednesday 04 December 2024 - last comment - 10:42, Friday 03 January 2025(81622)
TCS Monthly Trends

FAMIS 28455, last checked in alog81106

The only things of note on these trends are that the ITMX spherical power has been dropping for the past week and a half, which Camilla agrees looks strange, and that the ITMY SLED power has reached the lower limit of 1, so it will need replacing soon. Everything else looks normal compared to the last check.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 07:50, Thursday 05 December 2024 (81625)

Two plots: the first shows the ITMX V, P, and Y OSEMs over the same time frame as Ryan's plots, which makes me believe that the spherical power change is just from the normal ITMX movement. The second is a year+ trend, and I'd say it shows this is normal movement.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 07:56, Thursday 05 December 2024 (81626)

What caught my eye with these plots is that the CO2Y power has increased in the last few weeks. We've seen that happen after it warms up and relocks, but this seems to be trending that way after a few relocks. Not sure how to explain that one.

Also, it looks like the flow for CO2X isn't as stable as for CO2Y; worth keeping an eye on.

Images attached to this comment
camilla.compton@LIGO.ORG - 10:42, Friday 03 January 2025 (82098)

Although ITMY POWERMON is below 1 (trend), that powermon seems to read lower than ITMX's; see 73371, where the SLEDs were last swapped in October 2023: both fibers had 2.5mW out, but SLEDPOWERMON recorded 5 vs 3.

I checked that the data still makes sense and isn't considerably noisier: now vs. after replacement. Maybe we can stretch the life of the SLEDs out to the end of O4, but we should keep an eye on them.
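
To keep an eye on them, a quick trend can be pulled with gwpy. The sketch below is minimal, and the channel name is only my guess at the ITMY SLED power monitor, so it should be checked against the actual TCS channel list before use.

from gwpy.timeseries import TimeSeries

# Hypothetical channel name; verify against the TCS channel list before using.
chan = "H1:TCS-ITMY_HWS_SLEDPOWERMON"

# Pull the last week of data (GPS times or date strings both work).
data = TimeSeries.get(chan, "2024-12-27", "2025-01-03")

# Plot and save the trend for a quick visual check of drift or added noise.
plot = data.plot()
ax = plot.gca()
ax.set_ylabel("SLED power [arb. units]")
ax.set_title(chan)
plot.savefig("itmy_sled_power_trend.png")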

You can see that the spherical power from ITMX is offset from zero, so we should take new references soon.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:09, Wednesday 04 December 2024 (81621)
Ops Eve Shift Start

TITLE: 12/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.17 μm/s
QUICK SUMMARY: H1 is running smoothly, has been observing for 20+ hours.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:00, Wednesday 04 December 2024 (81620)
Ops Day Shift End

TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Locked for 20 hours. Very quiet shift with nothing to report.
LOG:                                                                                        

Start Time System Name Location Lazer_Haz Task Time End
15:43 FAC Karen Opt lab n Tech clean 15:58
16:51 FAC Chris, Eric EX n Fan bearing replacement in mech room 19:49
18:12 FAC Kim H2 encl n Tech clean 19:02
18:58 VAC Janos Opt lab n Vac checks 19:04
H1 PSL
ryan.short@LIGO.ORG - posted 15:23, Wednesday 04 December 2024 (81619)
PSL Cooling Water pH Test

FAMIS 21691

pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.

LHO VE
david.barker@LIGO.ORG - posted 10:10, Wednesday 04 December 2024 (81618)
Wed CP1 Fill

Wed Dec 04 10:07:33 2024 INFO: Fill completed in 7min 30secs

Gerardo confirmed a good fill.

Images attached to this report
H1 SEI
thomas.shaffer@LIGO.ORG - posted 08:22, Wednesday 04 December 2024 (81615)
H1 ISI CPS Noise Spectra Check - Weekly

FAMIS 26019

I somehow missed this email last week, so it's a week late. Compared to the last time this was run, on Nov 10, the main differences are:

 

Non-image files attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:36, Wednesday 04 December 2024 (81614)
Ops Day Shift Start

TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.16 μm/s
QUICK SUMMARY: Locked for 12 hours, range looks good.

LHO General
tyler.guidry@LIGO.ORG - posted 07:03, Wednesday 04 December 2024 (81604)
DGR Storage Building Progress
The slab has been poured and finished, and the steel erection is well underway. Civil inspections took place today, and I discussed with Jake a peripheral slab, adjoining the walking path to the man door, for an air compressor. Insulation is beginning to get shaken out while siding and roofing go up. Progress against the initial DGR schedule looks good.
Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 22:01, Tuesday 03 December 2024 (81613)
Ops Eve Shift Summary

TITLE: 12/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Mostly quiet shift with just one lockloss, preceded by an ETMX glitch; relocking afterwards was simple. H1 has now been observing for 2.5 hours.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 20:50, Tuesday 03 December 2024 - last comment - 20:50, Tuesday 03 December 2024(81610)
Lockloss @ 02:07 UTC

Lockloss @ 02:07 UTC - link to lockloss tool

Looks like an ETMX glitch about 200ms before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 19:51, Tuesday 03 December 2024 (81612)

H1 back to observing at 03:37 UTC. BS and PRM needed alignment help to lock DRMI, but otherwise no issues relocking.

H1 PSL
ryan.short@LIGO.ORG - posted 19:50, Tuesday 03 December 2024 (81611)
PSL 10-Day Trends

FAMIS 31062

For some reason, the NPRO current has been very slightly rising, correlating with a rise in power from both NPRO diodes, but overall output NPRO power is largely unchanged.

Jason's alignment tweaks of the PMC and RefCav this morning (alog81600) can be clearly seen on the stabilization trends. In addition to alignment changes improving PMC and RefCav transmission, the signal on PMC REFL and the average ISS diffracted power are both significantly less noisy.

Images attached to this report
H1 ISC
marc.pirello@LIGO.ORG - posted 17:30, Tuesday 03 December 2024 (81609)
Kepco Power Supply Replacement

WP12221

Kepco supplies with failed fans noted last week were replaced with refurbished Kepco power supplies. These supplies have the updated sealed ball-bearing fan motor. Alog from last week: 81498.

The following supplies were replaced.
EX VDD-1 U22-U24 +/- 24V: this powers ISC-R1; we replaced both supplies.
** One of these supplies has a bad voltmeter; we will replace it at the next opportunity.
CER C5 U9-U11 +/- 24V: this powers ISC-R2 & R4; we replaced both supplies.

H1 SEI
jim.warner@LIGO.ORG - posted 17:03, Tuesday 03 December 2024 (81607)
Wind fence inspection for December, multiple broken wires at EX

Did the wind fence inspection today. The EY fence looks fine, with no further damage to the section that was found damaged previously. The EX fence has at least 8 broken connections, scattered over the length of the entire fence. All of the breaks are where the original wires go through the split clamps that attach the wires to the uprights. I think it might be possible to patch many of the breaks to get by, but I don't know when we will have time to do the work.

EX is shown in the first two images, EY in the last two.

 

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:33, Tuesday 03 December 2024 (81606)
Ops Day Shift End

TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Network outage and maintenance day today. The network outage did not have a direct effect on the relocking of the interferometer, which was straightforward aside from my testing of ALS automation code (alog81597). At this point I believe all systems have been recovered and H(t) looks to be going offsite. The LVEA was swept.
LOG:                                                                                                 

Start Time System Name Location Lazer_Haz Task Time End
16:33 FAC Karen EY n Tech clean 17:44
16:34 FAC Kim EX n Tech clean 17:21
16:34 FAC Nelly FCES n Tech clean 17:00
16:40 Fire Chris, FPS OSB, EY, MY, MX, EX n Fire alarm testing 18:35
16:48 CDS Erik CER, EY n Checking on CDS laptops 17:40
17:00 CDS Marc, Fernando EX n Power supply fan replacement 18:04
17:13 TCS Camilla LVEA n Turn CO2s back on 17:20
17:14 SEI Jim, Neil LVEA n Swap seismometer 17:31
17:15 SEI,CC Mitchell Ends, CS mech n FAMIS checks 19:04
17:22 FAC Kim LVEA n Tech clean 18:57
17:32 SEI Jim Ends, CS n FAMIS checks 18:21
17:32 TCS/ALS Camilla, Oli EY YES Table measurements 18:50
17:42 FAC Eric EX n Check on fan bearing 18:01
18:14 PSL/ISC Sheila, Masayuki LVEA n Check on flanges and ISCT1 distances 18:43
18:14 PSL Jason CR n ISS, PMC alignment tweaks 18:46
18:15 VAC Gerardo, Jordan EX n Check on purge air system 19:04
18:16 VAC Janos EX n Mech room checks 19:11
18:20 CDS Marc, Fernando CER n Power supply swap 18:59
18:20 FAC Tyler, contractor OSB roof n Roof inspection 18:36
18:35 FAC Chris, Pest LVEA, Yarm, Xarm n Pest checks 19:44
18:57 FAC Karen LVEA n Tech clean 19:08
19:05 VAC Gerardo LVEA n Pump pictures 19:11
19:15 VAC Gerardo EY n Check on purge air 19:36
19:37 GRD TJ CR n ALS alignment testing 20:48
20:07 - Camilla LVEA n Sweep 20:27
H1 General
jonathan.hanks@LIGO.ORG - posted 16:31, Tuesday 03 December 2024 (81608)
Network/GC issues today
As a follow-up to https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81595 with some more information.

Today many of the site's general computing services were down.

While preparing for WP 12215, Nyath was migrating systems on the GC hypervisor cluster in order to make a configuration change on the nodes while they were not running anything. At the end of moving one of the VMs, the network switches connecting the hypervisors and storage went into a bad state. We saw large packet loss on the systems connected to the switches, which manifested itself as the systems seeing disk I/O errors (due to timeouts when trying to write/read data). This had wide-ranging impacts on GC. It also caused issues on the GC-to-CDS switch, setting a key link to a blocking state so that no traffic flowed between GC and CDS (which points to the issues being related to a spanning-tree problem). I will note that migrating VMs is part of the designed feature set of the system and part of the normal procedure for doing maintenance on hypervisor nodes.

The first steps of work were to get access to the hypervisors and storage and make sure those items were in a good state. Later, after working through restarts on various components and consulting with Dan and Erik, the main switch stack for the VM system was rebooted, and that seems to have cleared up the issues.

Work in the control room continued using the local controls account, though we did have to make a change to the system config that needs to be looked at. We have several KDCs configured so that authentication can go to multiple locations and does not need to rely on DNS, but that setup caused us issues. To get things working we commented out the KDC lines in the krb5.conf file. This essentially stopped the krb5 (LIGO.ORG) authentication but allowed local auth to go forward (which is what we had designed it for, so we will re-check the configs).
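
For reference, the change amounts to commenting out the kdc entries for the realm in krb5.conf. The snippet below is only a generic sketch with placeholder hostnames, not our actual KDC list:

[libdefaults]
    default_realm = LIGO.ORG

[realms]
    LIGO.ORG = {
#       kdc = kdc1.example.org
#       kdc = kdc2.example.org
        admin_server = kerberos.example.org
    }

With the kdc lines disabled, LIGO.ORG authentication effectively stops, while local accounts keep working.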
LHO General
ryan.short@LIGO.ORG - posted 16:00, Tuesday 03 December 2024 (81605)
Ops Eve Shift Start

TITLE: 12/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.21 μm/s
QUICK SUMMARY: Despite the network outage and maintenance day activities today, H1 seems to have recovered easily and has been observing for 2 hours.

LHO VE
david.barker@LIGO.ORG - posted 15:13, Tuesday 03 December 2024 (81602)
Tue CP1 Fill

Tue Dec 03 10:11:42 2024 Fill completed in 11min 39secs

Gerardo confirmed a good fill curbside. Late entry due to morning network issues.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 14:09, Tuesday 03 December 2024 - last comment - 15:14, Tuesday 03 December 2024(81595)
LHO Network Down

The LHO offsite network has been down since 06:30 PST this morning (Tue 03 Dec 2024). Alog has just become operational, but network access to CDS is still down.

Comments related to this report
david.barker@LIGO.ORG - 15:14, Tuesday 03 December 2024 (81603)

Everything is operational now.

H1 TCS
thomas.shaffer@LIGO.ORG - posted 14:29, Tuesday 19 November 2024 - last comment - 09:24, Wednesday 04 December 2024(81362)
Recent TCS CO2Y lock loss and table work today

Camilla C., TJ

Recently, the CO2Y laser that we replaced on Oct 22 has been struggling to stay locked for long periods of time (alog81271 and trend from today). We've found some loose or bad cables in the past that have caused us issues, so we went out to the table today to double-check that they are all OK.

The RF cables that lead into the side of the laser can slightly impact the output power when wiggled, in particular the ones with a BNC connector, but not to the point that we think it would be causing issues. The only cable that we found loose was for the PZT that goes to the head of the laser. The male portion of the SMA that comes out of the laser head was loose and cannot be tightened from outside of the laser. We verified that the connection from this to the cable was good, but wiggling it did still introduce glitches in the PZT channel. I don't think we've convinced ourselves that this is the problem, though, because the PZT doesn't seem to glitch when the laser loses lock; instead it runs away.

An unfortunate consequence of the cable wiggling was that one of the Beckhoff plugs at the feedthrough must have been slightly unseated, causing our mask flipper readbacks to read incorrectly. The screws for this plug were not working, so we just pushed the plug back in to fully seat it, and all seemed to work again.

We still are not sure why we've been having these lock losses lately; the 2nd and 3rd attachments show a few of them from the last day or so. They remind me of what we saw back in 2019 - example1, example2. The fix then was ultimately a chiller swap (alog54980), but the flow and water temperature seem more stable this time around. Not completely ruling it out yet, though.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 09:24, Wednesday 04 December 2024 (81617)

We've only had two relocks in the last two weeks since we readjusted the cables, which is within its normal behavior. I'll close FRS32709 unless this suddenly becomes unstable again. Though there might be a larger problem of laser stability, I think closing this FRS makes sense since it references a specific instance of instability.

Both X & Y tend to have long stretches where they don't need to relock, and periods where they have issues staying locked (attachment 2). Unless there are obvious issues with chiller supply temperature, loose cables, wrong settings, etc., I don't think we have a great grasp on why they lose lock sometimes.

Images attached to this comment