H1 CDS
david.barker@LIGO.ORG - posted 11:21, Wednesday 12 February 2025 - last comment - 13:16, Wednesday 12 February 2025(82769)
Hardware Injection process failed at 07:43 PST and failed to restart

At 07:43 Wed 12feb2025 PST the psinject process on h1hwinj1 crashed and did not restart automatically. H1 was not locked at this time.

The failure to restart was tracked to an expired leap-seconds file on h1hwinj1. This is a Scientific Linux 7 machine; the OS is obsolete and updated tzdata packages are no longer available for it. As a workaround to get psinject running again, I hand-copied a Debian 12 version of the /usr/share/zoneinfo/[leapseconds, leap-seconds.list] files. At that point monit was able to successfully restart the psinject systemd process.
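For reference, the expiry of the leap-second file can be checked directly: leap-seconds.list files carry their expiration date on a line beginning with "#@", expressed in seconds since the NTP epoch (1900-01-01). A minimal sketch of that check (illustrative parsing only, not part of psinject):

from datetime import datetime, timedelta, timezone

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def leap_file_expiry(path="/usr/share/zoneinfo/leap-seconds.list"):
    # The "#@" line holds the file's expiration time in NTP seconds.
    with open(path) as f:
        for line in f:
            if line.startswith("#@"):
                return NTP_EPOCH + timedelta(seconds=int(line.split()[1]))
    raise ValueError(f"no '#@' expiry line found in {path}")

expiry = leap_file_expiry()
status = "EXPIRED" if datetime.now(timezone.utc) > expiry else "ok"
print(f"leap-seconds.list expires {expiry:%Y-%m-%d} ({status})")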

In conversation with Mike Thomas at LLO, he said he had seen this problem several weeks ago and implemented the same solution (a hand copy of the leap-second files). Both sites will schedule an upgrade of their hwinj machines to Rocky Linux post-O4.

Timeline is (all times PST):

07:43 psinject process fails, H1 is unlocked

08:41 H1 ready for observe, but blocked due to lack of hw-injections

09:34 psinject problem resolved, H1 able to transition to observe.

Lost observation time: 53 minutes.

We cannot definitively say why psinject failed today at LHO and several weeks ago at LLO. Mike suspects a local cache expired, causing the code to re-read the leap-seconds file and discover that it had expired on 28 Dec 2024.

Post Script:

While investigating the loss of the h1calinj CW_EXC testpoint related to this failure, EJ found that the root file system on h1vmboot1 (primary boot server) was 100% full. We deleted some 2023 logs to quickly bring this down to 97% full. At this time we don't think this had anything to do with the psinject issue.

Comments related to this report
david.barker@LIGO.ORG - 13:16, Wednesday 12 February 2025 (82770)

This has happened before; details are in FRS30046

H1 TCS
ryan.crouch@LIGO.ORG - posted 10:39, Wednesday 12 February 2025 (82747)
TCS Chiller Water Level Top Off - Biweekly

FAMIS27808

I added 140mL to both chillers, bringing TCS_X from 29.5 to 30.0, and bringing TCS_Y from 9.8 to 10.3. Both the filters were clean and there was no water in the dixie cup.

LHO VE
david.barker@LIGO.ORG - posted 10:16, Wednesday 12 February 2025 (82768)
Wed CP1 Fill

Wed Feb 12 10:05:23 2025 INFO: Fill completed in 5min 20secs

Very cold morning. TCmins [-61C, -60C] OAT (-8C, 17F) DeltaTempTime 10:05:24

Images attached to this report
H1 SEI
oli.patane@LIGO.ORG - posted 10:02, Wednesday 12 February 2025 (82766)
ISI CPS Noise Spectra Check Weekly FAMIS

Closes FAMIS#26030, last checked 82671

Everything looking good this week - many sensors that were elevated last week are now lower.

Non-image files attached to this report
LHO FMCS
eric.otterman@LIGO.ORG - posted 07:48, Wednesday 12 February 2025 (82763)
LVEA temperatures
Because of the severe cold, the VEAs have been struggling to hold temperature. I increased the supply air set points of the air handlers so the spaces can hold onto more of the heat they're producing. This is what caused the increase on the trends. I returned the air handlers to auto set point so the temperatures will even out as the morning goes on.  
LHO General
thomas.shaffer@LIGO.ORG - posted 07:41, Wednesday 12 February 2025 - last comment - 10:22, Wednesday 12 February 2025(82762)
Ops Day Shift Start

TITLE: 02/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
   SEI_ENV state: CALM
   Wind: 3mph Gusts, 1mph 3min avg
   Primary useism: 0.02 μm/s
   Secondary useism: 0.34 μm/s
QUICK SUMMARY: Lost lock just before I got in. Looks like ALS has been unable to stay locked even with good buildup, something we've seen over the last week or so. Running an initial alignment will generally fix this, and so far it has. VEA temperatures are not very stable, but Eric is looking into it.
 

Comments related to this report
ryan.crouch@LIGO.ORG - 10:22, Wednesday 12 February 2025 (82767)SUS

ITMY mode8's damping wasn't going well this morning; it was slowly ringing up. I'm guessing it's due to the slight temperature excursions in the LVEA. I turned off the -30 phase filter and used a positive gain, and it started to damp. I'll put the gain to zero in lscparams just in case. I reloaded VIOLIN_DAMPING after we lost lock, so the changes should be live.

So the new settings I've found are FM1 + FM10, G = +0.3
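For the record, the change amounts to something like the following; the dictionary layout here is hypothetical (the real lscparams structure may differ) and only illustrates zeroing the nominal gain while noting the filter settings that worked today:

# Hypothetical lscparams-style entry -- layout is illustrative only.
# Nominal gain zeroed "just in case"; FM1 + FM10 with G = +0.3 is what
# actually damped ITMY mode 8 this morning.
violin_damping_overrides = {
    'ITMY': {
        'MODE8': {
            'filters': ['FM1', 'FM10'],   # -30 deg phase filter turned off
            'gain': 0.0,                  # set to +0.3 by hand when damping
        },
    },
}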

LHO General (SQZ)
ibrahim.abouelfettouh@LIGO.ORG - posted 22:01, Tuesday 11 February 2025 - last comment - 08:20, Wednesday 12 February 2025(82761)
OPS Eve Shift Summary

TITLE: 02/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 05:58 UTC

Pretty quiet shift with one EQ-caused lockloss (alog bla bla)

We also dropped out of OBSERVING briefly since SQZ unlocked. The SQZ_OPO_LR was not reaching its count threshold of 80. Following this wiki page, I first attempted to change the SHG temperature in order to increase the SHG power, which was indeed reduced compared to lock acquisition time. This did not work, so I reverted my -0.015 C change and instead lowered the threshold from 80 to 74 in SQZ params, which did work. After this, I retuned the OPO temperature, which prompted SQZ to automatically re-lock. After the SQZ ANGLE guardian optimized the angle, we were able to get back into lock. There were no SDFs to accept before going back to OBSERVING. Total troubleshooting/outage time was 40 minutes, from 02:21 UTC to 03:01 UTC.

LOG:

None

Comments related to this report
camilla.compton@LIGO.ORG - 08:20, Wednesday 12 February 2025 (82764)SQZ

The SHG power drop was probably due to LVEA temp swings; I touched the wave plates in the SHG path to reduce rejected SHG light (in SQZT0 and HAM7). Then I tried setting opo_grTrans_setpoint_uW back to 80 uW, but controlmon was at 1.5 (too low). Reverted to 75 uW so that H1:SQZ-OPO_ISS_CONTROLMON is at 2.4 (still not ideal). Didn't recheck the OPO temperature.
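A quick way to keep an eye on this trade-off after a setpoint change is to read back the control monitor; a minimal sketch using pyepics (only the channel name is taken from above, and the 2.0 cutoff is an illustrative number, not an official limit):

from epics import caget   # pyepics

# Read the OPO ISS control monitor after changing opo_grTrans_setpoint_uW.
controlmon = caget("H1:SQZ-OPO_ISS_CONTROLMON")
if controlmon is None:
    print("channel unavailable")
elif controlmon < 2.0:    # ~1.5 was judged too low above
    print(f"CONTROLMON = {controlmon:.2f}: little ISS headroom, consider a lower setpoint")
else:
    print(f"CONTROLMON = {controlmon:.2f}: acceptable")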

Images attached to this comment
H1 SEI (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 20:40, Tuesday 11 February 2025 (82760)
Lockloss 4:38 UTC

Earthquake-caused lockloss (very local, magnitude 4.6)

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:05, Tuesday 11 February 2025 (82757)
OPS Eve Shift Start

TITLE: 02/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 4mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.42 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 21:26 UTC

 

H1 TCS
camilla.compton@LIGO.ORG - posted 15:56, Tuesday 11 February 2025 (82756)
Dis-assembled TCS VP ASSY-D1700340-001 for cleaning

Gerardo, Jordan, Camilla 

We disassembled ASSY-D1700340-001 (installed in 39957, removed in 60667) with the plan of re-cleaning the VAC surface and reusing the metal parts with new DAR ZnSe in the upcoming vent. 

Sadly, I chipped the 0.5" thick D1700340 window when removing it (photo attached); I will add a note to the ICS record. The dust damaged the in-vac o-ring, so we threw away the pair of o-rings, but we have spares in the LVEA TCS cabinets.

D1700338 is at C&B; other parts are stored in the PCAL lab VAC storage, and the ZnSe will be placed in the LVEA TCS cabinets.

Images attached to this report
LHO General
tyler.guidry@LIGO.ORG - posted 14:26, Tuesday 11 February 2025 (82754)
Chilled Water Pump Swapped
Gerardo notified me earlier today that he had observed an abnormal noise and some wetness coming from the Y+ chilled water pump. When Eric and I arrived, we observed more or less the same. It is most likely another failed pump seal, which will have to be replaced. One will be ordered shortly, assuming demand has lifted and there is available inventory. The pumps were swapped, and no interruption to temperatures should be seen.

E. Otterman T. Guidry
H1 TCS
matthewrichard.todd@LIGO.ORG - posted 14:15, Tuesday 11 February 2025 - last comment - 13:54, Friday 14 February 2025(82752)
Beam measurements taken at EX HWS - EX ALS

Matthew, Camilla

Today we went to EX to try and measure some beam profiles of the HWS beam as well as the refl ALS beam.

Without analyzing the profiles too much, it seems that the HWS beam, at least, matches data previously taken by Camilla and TJ.

The ALS beam still has the same behavior as reported a few years ago by Georgia and Keita (alog 52608; could not find a referencing alog), who saw two blobs instead of a nice Gaussian, just as we see now.
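For the profile analysis itself, the usual approach is to fit the measured spot sizes to the Gaussian-beam expansion to extract the waist size and location; a minimal sketch with scipy (the z/w arrays below are placeholders, not our data, and the wavelength should be set to whichever beam is being profiled):

import numpy as np
from scipy.optimize import curve_fit

wavelength = 532e-9  # green ALS [m]; use the HWS source wavelength for the HWS beam

def spot_size(z, w0, z0):
    # w(z) = w0 * sqrt(1 + ((z - z0)/zR)^2), with zR = pi*w0^2/lambda
    zR = np.pi * w0**2 / wavelength
    return w0 * np.sqrt(1.0 + ((z - z0) / zR) ** 2)

z_m = np.array([0.10, 0.20, 0.30, 0.40, 0.50])                  # positions along the table [m]
w_m = np.array([3.05e-4, 3.21e-4, 3.45e-4, 3.76e-4, 4.12e-4])   # measured 1/e^2 radii [m]

(w0, z0), _ = curve_fit(spot_size, z_m, w_m, p0=[3e-4, 0.0])
print(f"waist w0 = {w0*1e3:.2f} mm at z0 = {z0*1e3:.0f} mm")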

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 13:54, Friday 14 February 2025 (82772)

Attached is the data we took of the HWS beam coming out of the collimator, both the data from 11th Feb and 10th Dec (81741) together. 

We also took data of the return ALS beam, which, as Matt showed, is shaped like two lobes (attached here). We measured the beam further downstream than when we did this at EY (81358) because the ALS return beam is very large at the ALS-M11 beamsplitter. Table layout: D1800270. Distances on the table were measured in 62121; since then HWS_L3 has been removed.

We didn't take any outgoing ALS data as we ran out of time.

 

Non-image files attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 14:14, Tuesday 11 February 2025 (82749)
Ops Day Shift End

TITLE: 02/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: The shift started with the IFO unlocked. I kept it down since we had a full 4 hours of maintenance planned for the day. Relocking presented a few odd ALS issues, but Sheila managed to figure them out (see below). Other than that it was smooth relocking, and we have now been observing for an hour.
LOG:

Start Time System Name Location Lazer_Haz Task Time End
16:54 SAF Laser SAFE LVEA YES LVEA is SAFE 16:54
15:45 FAC Kim, Nelly EX, EY, FCES n Tech clean 17:31
15:56 VAC Ken EX mech n Compressor electrical 20:01
16:13 PCAL Francisco EX EX VEA PCAL spot move 17:48
16:15 FAC Chris LVEA yes FAMIS checks 16:38
16:20 TCS Camilla LVEA yes Table pictures 16:58
16:22 VAC Jordan LVEA yes RGA power cycle 16:57
16:32 FAC Eric Mech room n HVAC heater element replacement 17:46
16:37 VAC Gerardo Mech room n Kabelco startup 17:07
16:37 IAS Jason, Ryan LVEA n Alignment FARO work 19:32
16:38 FAC Chris outbuildings - FAMIS checks 18:24
16:39 ISC Daniel LVEA n ISC-R2 rack 36MHz rewire 17:55
16:43 FAC Tyler LVEA, Mids - 3IFO checks 18:13
17:08 VAC Gerardo LVEA n Check on vac things 17:22
17:23 VAC Gerardo, Jordan EY n Ion pump replacement 19:03
17:24 SEI Jim, Fil EY n SEI cabling 18:13
17:31 CDS Marc MY n Parts hunt 17:56
17:35 HWS/ALS Camilla, Matt EX Yes Table beam measurement 19:48
17:51 FAC Kim, Nelly LVEA n Tech clean 18:50
17:57 CDS Marc, Sivananda LVEA n Rack check 18:04
18:18 SEI Jim, Mitchell Mids n Find parts 18:42
18:51 FAC Kim, Nelly EX yes Tech clean 19:47
19:20 FAC Tyler, Eric EY n Looking at chilled water pump 19:33
19:51 VAC Gerardo LVEA n Dew point meas 20:17
19:52 - Ryan C LVEA n Sweep 20:07
20:53 ISC Sheila LVEA Local POP_X align 21:28
21:36 VAC/TCS Camilla, Gerardo, Jordan, Travis Opt Lab n Look at ZnSe viewport ongoing

H1 OpsInfo
thomas.shaffer@LIGO.ORG - posted 13:43, Tuesday 11 February 2025 (82753)
Minor edit to the ALS_ARM guardians

I made two minor changes to these nodes today:

  1. Certain states now check if the Beckhoff locking thinks we have unlocked, and the Guardian node will then go back to LOCKING. This is a continuation from alog81870.
  2. Changed all nds calls to use the call_with_timeout wrapper to avoid any potential hangs.
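For reference, the idea of such a wrapper is to run the blocking NDS fetch off to the side and give up after a deadline instead of hanging the node; a conceptual sketch only (not the actual call_with_timeout used in our Guardian code):

import multiprocessing as mp

def call_with_timeout(func, *args, timeout=5, default=None):
    # Run func(*args) in a child process (fork-based, as on Linux) and
    # abandon it if it exceeds the deadline, so the Guardian node never hangs.
    queue = mp.Queue()
    proc = mp.Process(target=lambda q: q.put(func(*args)), args=(queue,))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return default            # e.g. treat a hung NDS call as "no data"
    return queue.get() if not queue.empty() else default

Under a spawn start method (non-Linux) the lambda target would need to be a module-level function instead.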
H1 ISC
sheila.dwyer@LIGO.ORG - posted 13:30, Tuesday 11 February 2025 (82751)
POP X path photo with optics labeled, POP X PZT relieved

The POP X PZT was railed after yesterday's PR2 spot move, so Jennie and I went to the table and relieved it using the steering mirror right after the PZT.

Elenna is looking for some information about the POP X path, so Jennie W and I took this photo and added optic labels to it.

Images attached to this report
H1 ISC
daniel.sigg@LIGO.ORG - posted 10:42, Tuesday 11 February 2025 - last comment - 16:21, Tuesday 11 February 2025(82742)
POP-X at 36.4 MHz

The POP-X readout chain has been moved from 45.5MHz to 36.4MHz. All RF chains have been checked and they look fine.

H1:ASC-POP_X_RF_DEMOD_LONOM was changed from 24 to 23.

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 16:09, Tuesday 11 February 2025 (82758)

POP_X 45MHz signal trends while powering up. If we believe the POP_X_DC calibration it detects around 12mW at full input power.

Images attached to this comment
daniel.sigg@LIGO.ORG - 16:21, Tuesday 11 February 2025 (82759)

Same trend with 36MHz signals.

All RF signals have an analog whitening gain of 21 dB. There is also a digital gain in the I/Q input segments of ~2.8 at the beginning and ~5.6 at the end of the power-up. At the same time as the gain change, the 45 MHz modulation index is reduced by 6 dB, whereas the 9 MHz one is reduced by 3 dB. This means the 45 MHz signals are correctly compensated, whereas the 36 MHz signals are not, and the outputs become 3 dB smaller than expected. The maximum observed INMON signal is around 8000 counts, a factor of ~4 from saturating.
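One way to make that bookkeeping explicit, assuming the 36.4 MHz signal is the beat between the 45.5 MHz and 9.1 MHz sidebands and therefore scales with the product of the two modulation indices:

\[
\Delta_{45.5} = -6\,\mathrm{dB} + 6\,\mathrm{dB} = 0\,\mathrm{dB},
\qquad
\Delta_{36.4} = (-6\,\mathrm{dB} - 3\,\mathrm{dB}) + 6\,\mathrm{dB} = -3\,\mathrm{dB},
\]

where the +6 dB is the digital gain step from ~2.8 to ~5.6.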

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 12:17, Wednesday 05 February 2025 - last comment - 14:28, Tuesday 11 February 2025(82656)
Quick check of POP A LF calibration

Sheila and I are continuing to check various PD calibrations (82260). Today we checked the POP A LF calibration.

Currently there is a filter labeled "to_uW" that is a gain of 4.015. After some searching, Sheila tracked this to an alog by Kiwamu, 13905, with [cnts/W] = 0.76 [A/W] x 200 [Ohm] x 2^16 / 40 [cnts/V]. Invert this number and multiply by 1e6 to get uW/ct.
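As a quick sanity check, inverting that expression reproduces the filter gain (just the arithmetic written out):

responsivity = 0.76        # A/W
transimpedance = 200       # Ohm
adc_gain = 2**16 / 40      # cnts/V (16-bit ADC over a 40 V span)

cnts_per_W = responsivity * transimpedance * adc_gain   # ~2.49e5 cnts/W
print(f"{1e6 / cnts_per_W:.3f} uW/ct")                  # ~4.015, matching the filter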

Trusting our recalibration of IM4 trans, we have 56.6 W incident on PRM. We trust our PRG is about 50 at this time, so 2.83 kW are in the PRC. PR2 transmission is 229 ppm (see galaxy optics page). Then, the HAM1 splitter is 5.4% to POP (see logs like 63523, 63625). So we expect 34 mW on POP. At this time, there was about 30.5 mW measured on POP according to Kiwamu's calibration.

Comments related to this report
elenna.capote@LIGO.ORG - 12:56, Monday 10 February 2025 (82724)

I have added another filter to the POP_A_LF bank called "to_W_PRC", that should calibrate the readout of this PD to Watts of power in the PRC.

POP_A_LF = T_PR2 * T_M12 * PRC_W, and T_PR2 is 229 ppm and T_M12 is 0.054. I also added a gain of 1e-6 since FM10 calibrates to uW of power on the PD.

Both FM9 (to_W_PRC) and FM10 (to_uW) should be engaged so that POP_A_LF_OUT reads out the power in the PRC.

I loaded the filter but did not engage it.
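A short sketch of the full chain the two filters implement, using the numbers quoted in this entry (this is just the stated scaling written out, not the filter module itself):

T_PR2 = 229e-6    # PR2 transmission
T_M12 = 0.054     # HAM1 splitter fraction sent to POP
to_uW = 4.015     # FM10: POP_A_LF counts -> uW on the PD

def prc_power_W(pop_a_lf_counts):
    # FM10 then FM9: counts -> uW on the PD -> W circulating in the PRC.
    pd_uW = pop_a_lf_counts * to_uW
    return pd_uW * 1e-6 / (T_PR2 * T_M12)

# Example: ~30.5 mW measured on the PD corresponds to roughly 2.5 kW in the PRC.
print(f"{prc_power_W(30.5e3 / to_uW):.0f} W")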

elenna.capote@LIGO.ORG - 14:28, Tuesday 11 February 2025 (82755)

More thoughts about these calibrations!

I trended back to last Wednesday to get more exact numbers.

input power = 56.8 W

PRG = 51.3

POP A LF (Kiwamu calibration) = 30.7 mW

predicted POP A LF = 0.054 * 229 ppm * 56.8 W * 51.3 W/W = 36 mW

ratio = 30.7 mW / 36 mW = 0.852

If the above calibrations of PRG and input power are correct, we are missing about 15% of the power on POP.
