Reports until 07:36, Wednesday 04 December 2024
LHO General
thomas.shaffer@LIGO.ORG - posted 07:36, Wednesday 04 December 2024 (81614)
Ops Day Shift Start

TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.16 μm/s
QUICK SUMMARY: Locked for 12 hours, range looks good.

LHO General
tyler.guidry@LIGO.ORG - posted 07:03, Wednesday 04 December 2024 (81604)
DGR Storage Building Progress
The slab has been poured and finished, and steel erection is well underway. Civil inspections took place today, and I discussed with Jake a peripheral slab, adjoining the walking path to the man door, for an air compressor. Insulation is beginning to be shaken out while siding and roofing go up. Progress against the initial DGR schedule looks good.
Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 22:01, Tuesday 03 December 2024 (81613)
Ops Eve Shift Summary

TITLE: 12/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Mostly quiet shift with just one lockloss with an ETMX glitch; relocking afterwards was simple. H1 has now been observing for 2.5 hours.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 20:50, Tuesday 03 December 2024 - last comment - 20:50, Tuesday 03 December 2024(81610)
Lockloss @ 02:07 UTC

Lockloss @ 02:07 UTC - link to lockloss tool

Looks like an ETMX glitch about 200ms before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 19:51, Tuesday 03 December 2024 (81612)

H1 back to observing at 03:37 UTC. BS and PRM needed alignment help to lock DRMI, but otherwise no issues relocking.

H1 PSL
ryan.short@LIGO.ORG - posted 19:50, Tuesday 03 December 2024 (81611)
PSL 10-Day Trends

FAMIS 31062

For some reason, the NPRO current has been very slightly rising, correlating with a rise in power from both NPRO diodes, but overall output NPRO power is largely unchanged.

Jason's alignment tweaks of the PMC and RefCav this morning (alog81600) can be clearly seen on the stabilization trends. In addition to alignment changes improving PMC and RefCav transmission, the signal on PMC REFL and the average ISS diffracted power are both significantly less noisy.

Images attached to this report
H1 ISC
marc.pirello@LIGO.ORG - posted 17:30, Tuesday 03 December 2024 (81609)
Kepco Power Supply Replacement

WP12221

Kepco supplies with failed fans noted last week were replaced with refurbished Kepco power supplies.  These supplies have the updated sealed-ball-bearing fan motor.  See last week's alog: 81498.

The following supplies were replaced:
EX VDD-1 U22-U24, +/-24V (powers ISC-R1); both supplies replaced.
**  One of these supplies has a bad voltmeter; we will replace it at the next opportunity.
CER C5 U9-U11, +/-24V (powers ISC-R2 & R4); both supplies replaced.

H1 SEI
jim.warner@LIGO.ORG - posted 17:03, Tuesday 03 December 2024 (81607)
Wind fence inspection for December, multiple broken wires at EX

Did the wind fence inspection today. The EY fence looks fine, with no further damage to the section that was found damaged previously. The EX fence has at least 8 broken connections, scattered over the length of the entire fence. All of the breaks are where the original wires go through the split clamps that attach the wires to the uprights. I think it might be possible to patch many of the breaks to get by, but I don't know when we will have time to do the work.

EX is shown in the first two images, EY in the last two.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:33, Tuesday 03 December 2024 (81606)
Ops Day Shift End

TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Network outage and maintenance day today. The network outage did not have a direct effect on relocking the interferometer, which was straightforward aside from my testing of ALS automation code (alog81597). At this point I believe all systems have been recovered and H(t) looks to be going offsite. The LVEA was swept.
LOG:                                                                                                 

Start Time System Name Location Lazer_Haz Task Time End
16:33 FAC Karen EY n Tech clean 17:44
16:34 FAC Kim EX n Tech clean 17:21
16:34 FAC Nelly FCES n Tech clean 17:00
16:40 Fire Chris, FPS OSB, EY, MY, MX, EX n Fire alarm testing 18:35
16:48 CDS Erik CER, EY n Checking on CDS laptops 17:40
17:00 CDS Marc, Fernando EX n Power supply fan replacement 18:04
17:13 TCS Camilla LVEA n Turn CO2s back on 17:20
17:14 SEI Jim, Neil LVEA n Swap seismometer 17:31
17:15 SEI,CC Mitchell Ends, CS mech n FAMIS checks 19:04
17:22 FAC Kim LVEA n Tech clean 18:57
17:32 SEI Jim Ends, CS n FAMIS checks 18:21
17:32 TCS/ALS Camilla, Oli EY YES Table measurements 18:50
17:42 FAC Eric EX n Check on fan bearing 18:01
18:14 PSL/ISC Sheila, Masayuki LVEA n Check on flanges and ISCT1 distances 18:43
18:14 PSL Jason CR n ISS, PMC alignment tweaks 18:46
18:15 VAC Gerardo, Jordan EX n Check on purge air system 19:04
18:16 VAC Janos EX n Mech room checks 19:11
18:20 CDS Marc, Fernando CER n Power supply swap 18:59
18:20 FAC Tyler, contractor OSB roof n Roof inspection 18:36
18:35 FAC Chris, Pest LVEA, Yarm, Xarm n Pest checks 19:44
18:57 FAC Karen LVEA n Tech clean 19:08
19:05 VAC Gerardo LVEA n Pump pictures 19:11
19:15 VAC Gerardo EY n Check on purge air 19:36
19:37 GRD TJ CR n ALS alignment testing 20:48
20:07 - Camilla LVEA n Sweep 20:27
H1 General
jonathan.hanks@LIGO.ORG - posted 16:31, Tuesday 03 December 2024 (81608)
Network/GC issues today
As a follow-up to https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81595 with some more information.

Today many of the site's general computing services were down.

While preparing for WP 12215, Nyath was migrating systems on the GC hypervisor cluster.  This was to make a configuration change on the nodes while they were not running anything.  At the end of moving one of the VMs, the network switches connecting the hypervisors and storage went into a bad state.  We saw large packet loss on the systems connected to the switches.  This manifested itself as disk I/O errors on those systems (due to timeouts when trying to read/write data).  This had wide-ranging impacts on GC.  It also caused issues on the GC-to-CDS switch, setting a key link to a blocking state so that no traffic flowed between GC and CDS (which points to the issue being related to a spanning-tree problem).  I will note that migrating systems is part of the designed feature set of the cluster and part of the normal procedure for doing maintenance on hypervisor nodes.

The first steps were to get access to the hypervisors and storage and make sure those systems were in a good state.  Later, after working through restarts of various components and consulting with Dan and Erik, the main switch stack for the VM system was rebooted, and that seems to have cleared up the issues.

Work in the control room continued using the local controls account, though we did have to make a change to the system config that needs to be looked at.  We have several KDCs configured so that authentication can go to multiple locations and does not need to rely on DNS, but that setup caused us issues.  To get things working we commented out the KDC lines in the krb5.conf file.  This essentially stopped krb5 (LIGO.ORG) authentication but allowed local auth to go forward, which is what we had designed it for, so we will re-check the configs.
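
For the record, the temporary workaround amounted to commenting out the kdc entries for the realm in /etc/krb5.conf, roughly as sketched below (illustrative only; the hostnames here are placeholders, not our actual KDC servers):

    [realms]
        LIGO.ORG = {
            # kdc = kdc1.example.org      <- commented out; stops LIGO.ORG auth but lets local accounts work
            # kdc = kdc2.example.org
            admin_server = kerberos-admin.example.org
        }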
LHO General
ryan.short@LIGO.ORG - posted 16:00, Tuesday 03 December 2024 (81605)
Ops Eve Shift Start

TITLE: 12/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.21 μm/s
QUICK SUMMARY: Despite the network outage and maintenance day activities today, H1 seems to have recovered easily and has been observing for 2 hours.

LHO VE
david.barker@LIGO.ORG - posted 15:13, Tuesday 03 December 2024 (81602)
Tue CP1 Fill

Tue Dec 03 10:11:42 2024 Fill completed in 11min 39secs

Gerardo confirmed a good fill curbside. Late entry due to morning network issues.

Images attached to this report
H1 PSL
jason.oberling@LIGO.ORG - posted 15:06, Tuesday 03 December 2024 (81600)
PSL Remote PMC and RefCav Beam Alignment Tweaks

After seeing the reflected spots on the PSL Quad display this morning and the "ISS diffracted power low" notification, and since no one was using the beam today, I performed a remote beam alignment tweak for both the PMC and RefCav.  I started shortly after 10am, after Masayuki and I finished his PMC measurements and sufficient time had passed for the PMC to recover.  The PMC Refl spot looked brighter on the left side vs. the right, indicating a potential alignment shift in yaw, while the RefCav Refl spot looked like it needed a pitch adjustment.

Starting with the PMC, with the ISS OFF there was ~103.3 W in transmission and ~25.0 W in reflection.  Tweaking the beam angle (entirely in yaw as expected based on the Refl spot; pitch adjustment achieved nothing) resulted in ~105.2 W transmitted and ~22.7 W reflected; walking the beam got things a little better, with ~105.5 W transmitted and ~22.3 W reflected.  It seems this drop in PMC transmission was due to a slow alignment drift as the enclosure recovered from our NPRO swap ~10 days ago.  With the ISS ON and diffracting ~3.8% (I had to adjust the RefSignal to -1.97 V from -1.95 V due to the increase in PMC transmission) the PMC is now transmitting ~105.7 W and reflecting ~22.4 W.
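
As a rough figure of merit (my own back-of-envelope check, not anything official), the fraction of incident power reflected by the PMC dropped from about 19.5% to about 17.4% with the alignment tweak:

    # Quick check of the PMC reflected fraction with the ISS OFF, using the numbers above
    before_trans, before_refl = 103.3, 25.0   # W
    after_trans, after_refl = 105.5, 22.3     # W

    frac_before = before_refl / (before_trans + before_refl)
    frac_after = after_refl / (after_trans + after_refl)
    print(f"reflected fraction: {frac_before:.1%} -> {frac_after:.1%}")
    # reflected fraction: 19.5% -> 17.4%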

Moving on to the RefCav, with the IMC locked the RefCav TPD was ~0.79 V.  Adjusting the beam angle got things a little better, with a TPD of ~0.82 V; as expected this adjustment was almost entirely in pitch.  While walking the beam alignment the IMC unlocked and was having a hard time relocking, so I asked TJ to take it to OFFLINE while I finished the alignment; I note this because, with the IMC unlocked, the RefCav TPD is generally a little bit higher than with the IMC locked.  With the IMC now unlocked I was able to get the RefCav TPD to ~0.86 V by walking the beam alignment.  While I was not able to get the TPD higher than this, the Refl spot on the PSL Quad display still looks like there's some alignment work to do; the central spot is usually centered in the ring that surrounds it, but now it looks a little low and left.  I don't have an explanation for this currently; we'll keep an eye on it in the coming days and see if/how things change.  TJ relocked the IMC and it did not have a problem this time.

H1 GRD (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 14:30, Tuesday 03 December 2024 (81597)
Increase flashes has now been replaced with SCAN_ALIGNMENT

Following on from last week (alog81493), I've now replaced the Increase_Flashes (IF) state calls with SCAN_ALIGNMENT (SA). The former still exists (I've created parallel paths in the ALS arm nodes), but it's no longer called by ISC_LOCK or INIT_ALIGN. If there is any trouble, reverting ISC_LOCK and INIT_ALIGN should be all that's needed (no need to touch the ALS nodes).

Last week I wanted to restore the functionality that immediately stops the scan if it sees a flash above threshold. I added this back in, but since it has to watch slow channels, it rarely triggers. Even so, SA seems to be slightly faster than IF, and the accuracy seems similar.

I've also added some more fault checking and a way to "gracefully" escape the state. I would still like to go through and clean up the code a bit, since it's currently a Frankenstein of comments from my integration work, including some log messages and notifications that might still reference IF. That's for another time.
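
For anyone unfamiliar with the structure, the early-stop logic is conceptually along the lines of the sketch below, as it would sit inside a Guardian node module (schematic only, not the actual ALS node code; the channel name, threshold, and timeout are placeholders, and ezca/log are provided by the Guardian framework):

    from guardian import GuardState

    # Placeholders for illustration; the real node uses its own channel and threshold.
    FLASH_CHANNEL = 'ALS-X_PLACEHOLDER_FLASH_MON'
    FLASH_THRESHOLD = 0.9
    SCAN_TIMEOUT = 120  # seconds

    class SCAN_ALIGNMENT(GuardState):
        def main(self):
            # start the scan and arm a timeout for the scan window
            self.timer['scan'] = SCAN_TIMEOUT

        def run(self):
            # Early stop: since this watches a slow channel, a brief flash can be
            # missed between Guardian cycles, so this rarely triggers in practice.
            if ezca[FLASH_CHANNEL] > FLASH_THRESHOLD:
                log('flash above threshold, ending scan early')
                return True
            # Otherwise finish once the scan window has elapsed.
            return bool(self.timer['scan'])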

H1 General
thomas.shaffer@LIGO.ORG - posted 14:13, Tuesday 03 December 2024 (81596)
Observing 2212 UTC

Maintenance wrapped up; the most notable item was an internet outage. We are back to Observing.

H1 CDS
david.barker@LIGO.ORG - posted 14:09, Tuesday 03 December 2024 - last comment - 15:14, Tuesday 03 December 2024(81595)
LHO Network Down

The LHO offsite network has been down since 06:30 PST this morning (Tue 03 Dec 2024). Alog has just become operational, but network access to CDS is still down.

Comments related to this report
david.barker@LIGO.ORG - 15:14, Tuesday 03 December 2024 (81603)

Everything is operational now.

H1 General (ISC)
thomas.shaffer@LIGO.ORG - posted 17:33, Sunday 01 December 2024 - last comment - 15:12, Tuesday 03 December 2024(81570)
A few lock losses from Transition_From_ETMX

We've now had two lock losses from the Transition_From_ETMX state, or immediately after while trying to reacquire. Unlike back on Nov 22 (alog81430), SRCL seems fine. For the first one, the lockloss tool (1417133960) shows the IMC-SERVO_SPLITMON channel saturating ~5 sec before the lock loss, then there are some odd LSC signals 40 ms before the tool tagged the lock loss (attachment 1); this might just be the lock loss itself, though. The second one (lockloss tool 1417136668) hasn't tagged anything yet, but ETMY has a glitch 2.5 sec before the lock loss and ETMX seems to move more from that point on (attachment 2).
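
For anyone wanting to look at these channels by hand rather than through the lockloss tool, the query is roughly as sketched below (the GPS time is from the first lock loss above; the exact DAQ channel name/suffix is an assumption and should be checked):

    from gwpy.timeseries import TimeSeries

    lockloss_gps = 1417133960  # first lock loss above
    # Channel name is illustrative; adjust to the actual stored DAQ channel.
    data = TimeSeries.get('H1:IMC-SERVO_SPLITMON', lockloss_gps - 10, lockloss_gps + 1)
    plot = data.plot()
    plot.show()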

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 19:33, Sunday 01 December 2024 (81571)

Another one while I was looking at the code for the Transition_From_ETMX state. We were there for a few minutes before I noticed the CHARD & DHARD inputs ringing up. Unsure how to save it, I just requested it to move on, but that led to a lock loss.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 20:08, Sunday 01 December 2024 (81573)

I ended up changing SUS-ITMX_L3_LOCK_BIAS_TRAMP from 30 to 25 to hopefully move to a safer place sooner. Since it was already shortened from 60 a week ago, I didn't want to go too short. It worked this one time.
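
For the record, this kind of ramp-time change is a single EPICS write; one way to do it is shown below (a sketch only; the actual change may well have been made via MEDM or Guardian/ezca instead):

    from epics import caput  # pyepics

    # Ramp time change described above: 30 s -> 25 s
    caput('H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP', 25)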

camilla.compton@LIGO.ORG - 15:04, Monday 02 December 2024 (81582)

Sheila, Camilla

We had another lockloss this morning with an ETMX_L2 glitch beforehand (plot), and it seems that even a successful transition this morning had a glitch too, though it was smaller (plot). We looked at the ISC_LOCK.py code and it's not yet obvious what's causing this glitch. The successful transition also had a DARM wobble up to 2e6 (plot), but when we have the locklosses, DARM goes to ~10e6.

While looking at all the filters, we found that the ETMX_L2_LOCK_L ramp time is 20 s (screenshot), although we only wait 10 s in ISC_LOCK. We will edit this tomorrow when we are not observing. We don't think this will affect the glitch, as there is no input/output to this filter at the time of the glitch.

The only thing that seems like it could cause the glitch is DARM1 FM1 being turned off; we don't yet understand how, and we had similar issues that we thought we had solved in 77640.

Images attached to this comment
camilla.compton@LIGO.ORG - 15:12, Tuesday 03 December 2024 (81601)

This morning I edited the ETMX_L2_LOCK_L FM1 ramp time down to 10 s and reloaded the coefficients.

H1 ISC (TCS)
camilla.compton@LIGO.ORG - posted 16:04, Tuesday 19 November 2024 - last comment - 15:09, Tuesday 03 December 2024(81358)
Beam Profile Measurements of ALS-Y path

Oli, Camilla. WP12203. Repeat of some of the work done in 2019: EX: 52608, EY: 52636; older: part 1, part 2, part 3.

We misaligned ITMY and turned off the ALS-Y QPD servo with H1:ALS-Y_PZT_SWITCH and placed the Ophir Si scanning slit beam profiler to measure both the 532nm ALSY outgoing beam and the ALSY return beam in the HWS path.

The outgoing beam was a little oblong in the measurements but looked pretty clean and round by eye; the return beam did not! Photos of the outgoing and return beams are attached.  The outgoing beam was 30 mW, the return beam 0.75 mW.

Attached are the 13.5% and D4sigma measurements; I also have photos of the 50% measurements if needed. Distances are measured from the optic where the HWS and ALS beams combine, ALS-M11 in D1400241.

We had previously removed HWS-M1B and HWS-M1C and translated HWS-M1A from what's shown in D1400241-v8 to remove clipping.
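
If it's useful for anyone reproducing the analysis: beam-width-vs-distance data like this is usually fit to the Gaussian beam propagation formula w(z) = w0*sqrt(1 + ((z - z0)/zR)^2) to recover the waist size and position. A minimal sketch is below; the distance and width arrays are placeholder values, not our measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_beam_w(z, w0, z0, wavelength=532e-9):
        """Beam radius vs distance for a Gaussian beam with waist w0 located at z0."""
        zr = np.pi * w0**2 / wavelength  # Rayleigh range
        return w0 * np.sqrt(1 + ((z - z0) / zr)**2)

    # Placeholder data: distance from the reference optic [m], measured beam radius [m].
    z_data = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
    w_data = np.array([1.05e-3, 1.38e-3, 1.71e-3, 2.05e-3, 2.39e-3])

    popt, pcov = curve_fit(gaussian_beam_w, z_data, w_data, p0=[3e-4, -1.0])
    w0_fit, z0_fit = popt
    print(f"waist w0 = {w0_fit*1e3:.2f} mm at z0 = {z0_fit:.2f} m")
    # -> roughly w0 = 0.25 mm at z0 = -1.0 m for this synthetic data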

Images attached to this report
Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 11:07, Tuesday 26 November 2024 (81491)

TJ, Camilla

We expanded on these measurements today: we measured the positions of the lenses and mirrors in both the ALS and HWS beampaths and took beamscan data further from the periscope, where the beam is changing size more. Data are attached for today, plus all data together calculated from the VP.  A photo of the beamscanner in the HWS return ALS beam path is also attached.

Images attached to this comment
Non-image files attached to this comment
camilla.compton@LIGO.ORG - 15:09, Tuesday 03 December 2024 (81599)

Oli, Camilla

Today we took some beam measurements between ALS-L6 and ALS-M9. These are in the attached documents, with today's data and all the data. The horizontal A1 measurements seemed strange before L6; we're unsure why, as further downstream, where the beam is larger and easier to see by eye, it looks round.

Images attached to this comment
Non-image files attached to this comment
H1 PEM (DetChar, PEM, TCS)
robert.schofield@LIGO.ORG - posted 18:06, Thursday 14 November 2024 - last comment - 10:19, Thursday 19 December 2024(81246)
TCS-Y chiller is likely hurting Crab sensitivity

Ansel reported that a peak in DARM that interfered with the sensitivity to the Crab pulsar followed a time-frequency path similar to that of a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don't usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.

Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).

I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air. 

Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound. 

Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.

For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.

Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:12, Monday 25 November 2024 (81472)DetChar, TCS

This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but since this made the chiller wobbly we placed thinner foam under CO2X.

Images attached to this comment
keith.riles@LIGO.ORG - 08:10, Thursday 28 November 2024 (81525)DetChar
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion.

Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
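For anyone wanting to reproduce this kind of narrow-band zoom-in with gwpy, the computation is roughly as below (a sketch only; the day, strain channel, FFT settings, and band edges around the Crab are assumptions):

    from gwpy.timeseries import TimeSeries

    # Placeholder stretch of data and assumed calibrated strain channel.
    data = TimeSeries.get('H1:GDS-CALIB_STRAIN', 'Nov 27 2024 00:00', 'Nov 27 2024 02:00')
    asd = data.asd(fftlength=600, overlap=300, method='median')
    zoom = asd.crop(59.0, 59.5)  # approximate band around the Crab GW frequency
    plot = zoom.plot()
    plot.gca().set_yscale('log')
    plot.show()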
Images attached to this comment
camilla.compton@LIGO.ORG - 15:02, Tuesday 03 December 2024 (81598)DetChar, TCS

This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to change much; we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before that date.

Opened FRS 32812.

There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).

camilla.compton@LIGO.ORG - 11:27, Thursday 05 December 2024 (81634)TCS

Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.

Images attached to this comment
keith.riles@LIGO.ORG - 06:04, Saturday 07 December 2024 (81663)
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
Images attached to this comment
thomas.shaffer@LIGO.ORG - 15:53, Tuesday 10 December 2024 (81745)TCS

I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.

These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so the only change we could make was to lower the flow rate. The X chiller has been at 3.7 gpm already, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.

Two questions came from this:

  1. Why are we running so close to the 3.8gpm minimum?
  2. Why is the flow rate for the X chiller so low?

The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.

Images attached to this comment
keith.riles@LIGO.ORG - 07:52, Friday 13 December 2024 (81806)
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? 

Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.



Images attached to this comment
camilla.compton@LIGO.ORG - 11:34, Tuesday 17 December 2024 (81866)TCS

TJ adjusted the CO2 chiller flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 14:16, Tuesday 17 December 2024 (81875)

The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab band.

keith.riles@LIGO.ORG - 10:19, Thursday 19 December 2024 (81902)
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before 1st flow reduction), December 16 (before most recent flow reduction) and December 18 (after most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.

Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18 
Images attached to this comment