Reports until 15:51, Tuesday 07 May 2024
H1 General
thomas.shaffer@LIGO.ORG - posted 15:51, Tuesday 07 May 2024 (77699)
Observing 2240 UTC

Maintenance recovered, back to Observing at 2240 UTC.

We struggled with a handful of lock losses at inconsistent spots, plus two lock losses just after TRANSITION_FROM_ETMX. Jenne and Ryan S. tracked these down to some changes made yesterday and adjusted a filter (alog 77698), after which we made it through once. We're not sure yet whether this fix will hold on the next relock or whether it will interfere with locking.

H1 ISC (OpsInfo)
ryan.short@LIGO.ORG - posted 13:29, Tuesday 07 May 2024 (77692)
DIFF IR Offset Values in O4b

As part of an effort to streamline the FIND_IR step in the lock acquisition process, I put together a quick histogram of the DIFF offsets that have successfully locked DIFF IR since the start of O4b (April 10th). Clearly, there are two main regions where the offset needs to be in order to lock DIFF IR, so it may be beneficial to change how the IR search works: start with a narrow scan around these two regions, then expand the search from there if need be.

I may expand this to also check the values from O4a and the COMM IR offsets if we find that useful.
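For reference, a minimal sketch of how such a histogram can pick out the two candidate scan regions. The offset values here are made up for illustration; the real ones would come from the lock-acquisition records:

```python
import numpy as np

# Hypothetical sample of DIFF IR offsets that successfully locked
# (arbitrary illustrative values, not real data).
offsets = np.array([-0.15, -0.14, -0.16, -0.15, 0.21, 0.22, 0.20, 0.21, -0.13, 0.23])

# Bin the offsets and pick out the two most-populated bins,
# i.e. the regions a narrow scan should try first.
counts, edges = np.histogram(offsets, bins=10)
best = np.argsort(counts)[::-1][:2]
regions = [(edges[i], edges[i + 1]) for i in sorted(best)]
print(regions)
```

A scan routine could then sweep each region in turn before falling back to a full-range search.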

Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 13:12, Tuesday 07 May 2024 - last comment - 10:51, Friday 10 May 2024(77690)
SR2 alignment when beam centered on AS_C, before vs after SR3 shift

Sheila asked a good question the other day: did SR2 alignment change between the beginning of O4b (when things were still good) and when we had the bad losses through the OFI (when things were bad, before the big shift)?  The answer: no, I don't think SR2 moved very much (according to its top mass OSEMs) when the losses through the OFI showed up. It did move about 10 urad in yaw (see table below), which I plan to look into further.

I looked at several times throughout the last few weeks when ALIGN_IFO guardian had just finished up state 58, SR2_ALIGN at 10 W, which it now does every time initial alignment is automatically run.  These should all be single bounce off of ITMY, with the beam centered on AS_C by adjusting the SR2 sliders, for some given SR3 slider position (nothing automatic touches the SR3 sliders). 

In the table, I summarize the SR3 and SR2 top mass OSEMs. I've grouped the times into three categories of IFO situation, labeled (1)-(3) in the table below.

Note that this table is not chronological, since I've grouped rows by IFO situation rather than time.  The SR2 and SR3 OSEM values in bold are the ones to compare against each other.  There does seem to be a 10 urad shift in SR2 yaw between the April 21st and April 23rd times.  There are no other run-throughs of the SR2_ALIGN state of ALIGN_IFO between these times to check.  This SR2 yaw shift (which is consistent even when we revert sliders to the 'pre-shift' values and run SR2_ALIGN) is notable, but not nearly as large as the move we ended up using to steer around the spot in the OFI.

IFO 'situation'                                               | Date / time [UTC]    | AS_C NSUM | SR3 Pit [M1_DAMP_P_INMON] | SR3 Yaw [M1_DAMP_Y_INMON] | SR2 Pit [M1_DAMP_P_INMON] | SR2 Yaw [M1_DAMP_Y_INMON]
(1) before EQ, before loss, before alignment shift            | 17 Apr 2024 00:19:00 | 0.0227    | -281.5                    | -612.2                    | 569.8                     | 35.3
(1) after EQ, before loss, before alignment shift             | 21 Apr 2024 20:08:30 | 0.0227    | -281.5                    | -611.9                    | 571.9                     | 35.3
(2) after EQ, after loss, before alignment shift              | 23 Apr 2024 23:10:00 | 0.0193    | -281.9                    | -612.2                    | 572.7                     | 26.7
(2) after EQ, after loss, shift temporarily reverted to check | 7 May 2024 18:10:00  | 0.0187    | -282.3                    | -616.0                    | 558.5                     | 23.2
(3) after EQ, after loss, after alignment shift               | 25 Apr 2024 12:18:20 | 0.0226    | -291.7                    | -411.1                    | 599.6                     | 1150.0
(3) after EQ, after loss, after alignment shift               | 7 May 2024 19:11:15  | 0.0226    | -292.4                    | -408.8                    | 597.6                     | 1149.9

 

Comments related to this report
jenne.driggers@LIGO.ORG - 14:08, Tuesday 07 May 2024 (77693)

After a quick re-look, that 10 urad move in SR2 yaw seems to have happened during maintenance, or at least sometime after the loss showed up.

In the attachment, the vertical t-cursors are at the times from the table in the parent comment on April 21st and 23rd.  The top row is SR2 pitch and yaw, and the bottom row is SR3.  The middle row shows our guardian state (i.e. when we were locked) and kappa_c, which is indicative of when we started to see loss.  In particular, there are 3 locks right after the first t-cursor, and they all have quite similar OSEM values for SR2 yaw (the times between locks are also similar-ish).  Those three locks are the last one with no loss, one with middling-bad loss, and one with the full loss.  So, it wasn't until after we had our full amount of loss that SR2 moved in yaw.  I haven't double-checked sliders yet, but probably this is a move that happened during maintenance day.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 10:51, Friday 10 May 2024 (77762)

I'm using Jenne's times above to do a similar check, but looking at times when ALIGN_IFO was in state 65 (SRY align) because in that state the AS WFS centering servos are on.  This state is run shortly after state 58, so I'll reuse Jenne's numbers to refer to times and the IFO situation.

This table indicates that changes in AS power are consistent between AS_C and the AS WFS, so the beam transmitted by OM1 and reflected by OM1 see similar losses.  This makes it seem less likely that a bad spot on OM1 is the problem (and points to probably being an issue with the OFI), although it's not impossible that a loss on OM1 is seen in the same way for transmission and reflection.

Situation | Date / time [UTC]                  | AS_C sum | AS_C normalized to first row | AS_A sum | AS_A normalized to first row | AS_B sum | AS_B normalized to first row
1         | April 17 00:20:15                  | 0.0626   | 1                            | 5264     | 1                            | 5104     | 1
1         | April 21 20:09:43                  | 0.0629   | 1.005                        | 5283     | 1.004                        | 5114     | 1.002
2         | April 23 23:34:15                  | 0.0534   | 0.853                        | 4595     | 0.873                        | 4359     | 0.854
2         | AS centering was not run this time |          |                              |          |                              |          |
3         | April 25 12:19:36                  | 0.0622   | 0.993                        | 5209     | 0.989                        | 5083     | 0.996
3         | May 7 19:12:30                     | 0.0624   | 0.997                        | 5241     | 0.996                        | 5118     | 1.003
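As a sanity check, the 'normalized to first row' column is just each reading divided by its April 17 value; recomputing the AS_C column from the table reproduces the quoted numbers to within rounding:

```python
# AS_C sum values copied from the table above, in row order
# (skipping the row where AS centering was not run).
as_c = [0.0626, 0.0629, 0.0534, 0.0622, 0.0624]

# Normalize each reading to the first (April 17) value.
as_c_norm = [round(v / as_c[0], 3) for v in as_c]
print(as_c_norm)
```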

 

H1 CAL (CAL)
francisco.llamas@LIGO.ORG - posted 13:08, Tuesday 07 May 2024 (77689)
Changes to pcal force coefficients EPICS variables

DriptaB, RickS, FranciscoL

On Tuesday, May 7, we changed the following EPICS variables:

H1:CAL-PCALX_XY_COMPARE_CORR_FACT from 0.9979 to 0.9991
H1:CAL-PCALY_XY_COMPARE_CORR_FACT from 1.0013 to 1.0005

This corresponds to a change of 0.12% for X-End and -0.08% for Y-End in the calibration of the fiducial displacement factors.

We changed these factors because of an error in our previous calculation (see alog 77386), which led to a value that was 0.20% off from what we expected.
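The quoted percent changes follow directly from the old and new correction factors; a quick arithmetic check:

```python
# Old and new EPICS correction factor values from above.
old_x, new_x = 0.9979, 0.9991
old_y, new_y = 1.0013, 1.0005

# Fractional change of each factor, in percent.
dx = (new_x - old_x) / old_x * 100
dy = (new_y - old_y) / old_y * 100
print(round(dx, 2), round(dy, 2))  # 0.12 -0.08
```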

We will evaluate our changes once the interferometer acquires lock.

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 12:55, Tuesday 07 May 2024 (77691)
Corner Station FAMIS Task for Turbo Pumps

Functionality test was done on the corner station turbo pumps, see notes below:

Output mode cleaner tube turbo station;
     Scroll pump hours: 5904.7
     Turbo pump hours: 5966
     Crash bearing life is at 100%

X beam manifold turbo station;
     Scroll pump hours: 1948.0
     Turbo pump hours: 1952
     Crash bearing life is at 100%

Y beam manifold turbo station;
     Scroll pump hours: 2280.8
     Turbo pump hours: 953
     Crash bearing life is at 100%

FAMIS tasks 23530, 23602 and 23650.

H1 CDS (SQZ)
filiberto.clara@LIGO.ORG - posted 12:32, Tuesday 07 May 2024 (77688)
SQZT0 - Laser Locking Fiber Beat Note Chassis

WP 11846

Installed the repaired Laser Locking Fiber Beat Note Chassis in SQZT0; the unit had previously been removed for repair, see alog 77418. The removed unit S2300259 is a working unit and is now part of the H1 spares.

Installed Chassis S2300258
Removed Chassis S2300259

F. Clara, C. Compton

H1 CAL
francisco.llamas@LIGO.ORG - posted 12:15, Tuesday 07 May 2024 (77683)
Pcal excitations off for recording of timeseries

DriptaB, RickS, FranciscoL

On Tuesday, May 7, we turned off the pcal excitations for both end stations for a duration of 30 minutes. We want to record a timeseries of the laser power without any excitations.

GPS times:

Start - 1399131000
End - 1399132800
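The GPS window matches the stated 30-minute duration:

```python
# GPS start/end times of the excitation-off window, from above.
start, end = 1399131000, 1399132800
duration_min = (end - start) / 60
print(duration_min)  # 30.0
```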

The channels that were *manually* turned off (and on) are:

H1:CAL-PCALX_SWEPT_SINE_ON
H1:CAL-PCALY_SWEPT_SINE_ON
H1:CAL-INJ_MASTER_SW

The guardian idle state automatically turns off the following channels in the 'PREP_FOR_LOCKING' state (these channels were not turned back on after the time window):

H1:CAL-PCALX_OSC_SUM_ON
H1:CAL-PCALY_OSC_SUM_ON

A screenshot with Rx and Tx timeseries is attached. Further analysis is pending.

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 11:44, Tuesday 07 May 2024 (77687)
Maintenance concluded, started relocking

Maintenance activities have finished for the day; we are starting to relock now.

H1 ISC
marc.pirello@LIGO.ORG - posted 11:42, Tuesday 07 May 2024 (77686)
Kepco Health Checkup

I checked the Kepco power supplies located at the end and mid stations.

EY - Temps < 72F, Rack C1 17-19 (PCAL +/- 18V irregular vibration noted, low draw <1A per rail)
EX - Temps < 72F, no vibrations
MY - Temps < 72F, no vibrations
MX - Temps < 72F, no vibrations

Recommend replacing fans on EY PCAL +/- 18V supplies due to irregular vibrations.

H1 SEI (SYS, VE)
jeffrey.kissel@LIGO.ORG - posted 11:19, Tuesday 07 May 2024 (77684)
HAM3 and HAM4 D8 Feedthrus are Indeed Free of Stuff
J. Kissel, J. Freed

We took a walk-about into the LVEA this morning and confirmed that the systems drawings for the HAM3 and HAM4 chamber feedthrough (viewport) flange layouts -- D1002874 and D1002875, respectively -- are correct: the top feedthrus are currently not used and have blanks installed. Pictures attached.

20240507_WHAM3_D8.jpg WHAM3 D8 (what we plan to use for HAM23 SPI Pathfinder [and CRS] optical fiber).

20240507_WHAM4_D8.jpg WHAM4 D8 (if a future HAM54 SPI link comes to fruition)
Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 11:17, Tuesday 07 May 2024 (77685)
PSL PMC and FSS RefCav Remote Alignment Tweak

Ryan S., Jason O.

Since we've been seeing slow alignment changes in the PMC over recent weeks (i.e. alog77646), Jason and I tried remotely adjusting alignment into the PMC this morning using the two picomotor-controlled mirrors with the ISS off. Unfortunately, we weren't able to get much improvement out of it, only increasing PMC transmitted power from about 108.0W to 108.1W (both readings taken with ISS on before and after our adjustments). This reinforces the hypothesis that there's some alignment drift or mode-matching change happening further upstream from the PMC, such as in the amplifiers, which would need a more in-depth procedure on-table.

While here, we also took the opportunity to improve the FSS RefCav alignment, as the FSS TPD signal has been lower than we'd like. Using the two picomotor-controlled mirrors in the FSS path, we saw a much more reassuring improvement, increasing the RefCav transmitted signal from about 815mV to 880mV. We realized after the fact that the signal was "breathing" more than it needed to be because we had left the IMC locked during our adjustments.

Snapshot of the PSL quad camera images taken after our adjustments included for posterity.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:52, Tuesday 07 May 2024 - last comment - 14:19, Tuesday 07 May 2024(77682)
DAQ NDS1 stopped serving data due to full /run file system

Jonathan, Erik, Dave:

h1daqnds1, which is the default NDS for workstations and guardian, started reporting a full /run file system at 00:55 PDT Tue 07may2024. This caused daqd and nds processes to stop serving data.

Jonathan restarted the rts-nds service at 08:13 PDT which cleared out the /run/nds/jobs directory, freeing up the disk space.

We noted that nds0 had 1.2GB of nds jobs files, 5% of the 26GB. Jonathan pruned this down to 1% usage.

I have added a check to my hourly cds_report to warn if the nds /run file systems exceed 50% usage.
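A minimal sketch of the kind of check described (the path and threshold here are assumptions for illustration, not the actual cds_report code):

```python
import shutil

def run_usage_warning(path="/run", threshold_pct=50):
    """Return a warning string if the filesystem at `path` exceeds
    `threshold_pct` percent usage, else None."""
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    if pct > threshold_pct:
        return f"WARNING: {path} at {pct:.0f}% usage"
    return None

print(run_usage_warning("/"))
```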

Comments related to this report
david.barker@LIGO.ORG - 14:19, Tuesday 07 May 2024 (77695)

Opened FRS31124

H1 PSL
ryan.short@LIGO.ORG - posted 10:52, Tuesday 07 May 2024 (77681)
PSL Cooling Water pH Test

FAMIS 19973

pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.

LHO VE
david.barker@LIGO.ORG - posted 10:29, Tuesday 07 May 2024 (77680)
Tue CP1 Fill

Tue May 07 10:11:46 2024 INFO: Fill completed in 11min 42secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 DetChar (CAL, DetChar, DetChar-Request)
derek.davis@LIGO.ORG - posted 10:15, Tuesday 07 May 2024 (77679)
Severe line subtraction malfunction at start of 240507 lock segment

Tom Dent, Derek Davis

We noted that at the start of the observing segment beginning May 07 2024 02:14:08 (GPS 1399083266), there was broadband excess noise for the first few minutes of the segment. This excess noise appears to have been injected by the line subtraction. We don't know the cause of the line subtraction malfunction.

Spectrograms and spectra of the H1:GDS-CALIB_STRAIN and H1:GDS-CALIB_STRAIN_NOLINES channels show that the excess noise was introduced during the line subtraction, as the broadband noise is not present in the data before line subtraction:

It has been previously noted that it is common for the line subtraction to "turn on" up to a few minutes into an observing segment (see alog 72239), but this is the first time we have noted this type of broadband excess noise being injected.

Due to the severity of the excess noise, the impacted time period (1399083266 - 1399083406) will need to be CAT1 vetoed.  This includes a few seconds where there was no excess broadband noise but calibration lines were not subtracted. 

Images attached to this report
H1 ISC
camilla.compton@LIGO.ORG - posted 10:56, Monday 06 May 2024 - last comment - 11:47, Wednesday 08 May 2024(77640)
TRANSITION_FROM_ETMX lockloss troubleshooting

Sheila, Camilla.

After this morning's windy lockloss from TRANSITION_FROM_ETMX, we continued the troubleshooting from alog 77366.

Sheila manually stepped through TRANSITION_FROM_ETMX:

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 15:40, Tuesday 07 May 2024 (77698)

[RyanS, TJ, Jenne]

We've had 2 locklosses today that seem to occur around when the MadHatter DARM FM2 filter gets turned off.  But, since Sheila and Camilla moved when the filter turns off, the locklosses are now happening a little later.

As Camilla points out, the ramp time of that filter in Foton is very short.  I've increased it to 3 seconds and we're about to give it a try (since I don't have a better plan right now).  What we haven't been able to test yet (since we're already locked) is whether the longer ramp causes trouble for the engagement of that filter, in DARM_OFFSET.

EDIT: Indeed we made it through the turning-off of the MadHatter filter with this 3 second ramp time (rather than the previous 0.1 sec ramp).  I am hopeful that it won't matter for the turning-on of the filter, since that happens in a quite stable part of the locking sequence.  But, we'll just have to see over the next few lock acquisitions.

Also EDIT: If we are unable to relock with this ramp time, attached is a screenshot of the previous ramp time, that we should revert to.

Images attached to this comment
jenne.driggers@LIGO.ORG - 11:47, Wednesday 08 May 2024 (77716)

We've now relocked twice with the longer ramp time in the MadHatter filter in place for the turn-on and turn-off actions of that filter, so it seems fine to leave in place. 

H1 DetChar (DetChar, ISC)
evan.goetz@LIGO.ORG - posted 14:11, Thursday 02 May 2024 - last comment - 14:27, Tuesday 07 May 2024(77579)
DuoTone signal seen in h(t)
This may already be known, but in case it is not I am posting it in the aLOG again. In Fscan spectra, we can see the 960 Hz and 961 Hz DuoTone signal in h(t). I don't recall seeing this in previous observing run data. Is the DuoTone signal expected to be seen in h(t)? It is hard to see on the summary pages, but it can be seen in the Fscan plots or interactive spectrum. I attach a zoom from the interactive plots from May 1, but this can also be seen as far back as the start of O4a. It is also seen at L1.
Images attached to this report
Comments related to this report
joseph.betzwieser@LIGO.ORG - 14:27, Tuesday 07 May 2024 (77696)
Just wanted to add in a DARK spectrum as reference to this, indicating this is coming from the local electronics chain, and given I see some of this on the same ADC but non-DCPD channels, likely within the chassis itself.  I also see even more of a 1 Hz forest than Evan's plot.  See for example LLO alog: 71027.
Images attached to this comment
H1 DAQ
daniel.sigg@LIGO.ORG - posted 12:18, Wednesday 07 February 2024 - last comment - 15:54, Monday 15 July 2024(75761)
Previous/spare timing master

The previous timing master which was again running out of range on the voltage to the OCXO, see alogs 68000 and  61988, has been retuned using the mechanical adjustment of the OCXO.

Today's readback voltage is at +3.88V. We will keep it running over the next few months to see, if it eventually settles.

Comments related to this report
daniel.sigg@LIGO.ORG - 10:33, Wednesday 21 February 2024 (75912)

Today's readback voltage is at +3.55V.

daniel.sigg@LIGO.ORG - 16:18, Monday 18 March 2024 (76497)

Today's readback voltage is at +3.116V.

daniel.sigg@LIGO.ORG - 15:25, Tuesday 07 May 2024 (77697)

Today's readback voltage is at +1.857V.

daniel.sigg@LIGO.ORG - 15:54, Monday 15 July 2024 (79142)

Today's readback voltage is at +0.951V.
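The readback values logged in this thread can be turned into a rough drift rate between consecutive checks, to judge whether the OCXO tuning voltage is settling; a minimal sketch using the dates and voltages above:

```python
from datetime import date

# (check date, readback voltage in V) from the comments above.
readings = [
    (date(2024, 2, 7), 3.88),
    (date(2024, 2, 21), 3.55),
    (date(2024, 3, 18), 3.116),
    (date(2024, 5, 7), 1.857),
    (date(2024, 7, 15), 0.951),
]

# Drift rate between consecutive checks, in mV per day.
rates = [
    (v2 - v1) / (d2 - d1).days * 1000
    for (d1, v1), (d2, v2) in zip(readings, readings[1:])
]
print([round(r, 1) for r in rates])
```

The voltage is still falling at every check, though the most recent interval shows the slowest drift so far.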
