H1 General
thomas.shaffer@LIGO.ORG - posted 12:21, Thursday 13 February 2025 - last comment - 16:09, Thursday 13 February 2025(82788)
Observing 2018 UTC

Back to Observing after a lock loss and some commissioning time.

New Y2L PR2 gain SDF accepted. This might need to be accepted in the SAFE.snap as well.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 13:11, Thursday 13 February 2025 (82789)

The SQZr lost its lock, and Camilla and I needed to further reduce the OPO trans setpoint that Camilla had just reduced (alog 82787). We also had to adjust the OPO TEC temp.

Out of Observing 2046-2101UTC

thomas.shaffer@LIGO.ORG - 16:09, Thursday 13 February 2025 (82792)

I've also accepted this SDF in SAFE

H1 SQZ
camilla.compton@LIGO.ORG - posted 12:13, Thursday 13 February 2025 (82787)
SHG temp and alignment adjusted. Only improved from 105 to 107mW. OPO trans setpoint reduced again

Camilla, Sheila

As Sheila/Daniel did in 82057, I touched the SHG steering picos (plot) and temperature (this was annoying and not clear-cut) but could only increase the SHG power from 105 to 107 mW. Our best is ~120 mW. We've seen this before with LVEA temp swings.

Couldn't lock with the OPO trans setpoint at 75 uW (Ibrahim had reduced it from 80 uW in 82761), so I further reduced it to 70 uW as in the wiki and adjusted the OPO temp. H1:SQZ-OPO_ISS_CONTROLMON is at 1.5 V, so not great, but hopefully good enough, especially as the LVEA temperature stabilizes.
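
(For reference, a minimal pyepics sketch for spot-checking that control signal, assuming EPICS access from a control-room machine; the 1-9 V band below is just an illustrative threshold, not an official limit.)

    # Minimal spot check (assumes pyepics and site EPICS access) of the ISS control
    # signal quoted above; the 1-9 V band is an illustrative threshold, not a limit.
    from epics import caget

    value = caget("H1:SQZ-OPO_ISS_CONTROLMON")
    print(f"H1:SQZ-OPO_ISS_CONTROLMON = {value} V")
    if value is None or not (1.0 < value < 9.0):
        print("ISS control signal outside the assumed comfortable range, check SQZ")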

We also increased the SQZ_ANG_SERVO gain back from 0.1 to 0.5, undoing part of Sheila's Feb 3rd edit.

Images attached to this report
H1 AOS
sheila.dwyer@LIGO.ORG - posted 11:43, Thursday 13 February 2025 (82786)
pico'd LSC POP

Matt, Sheila

We pico'd on the HAM3 ALS/pico to try to get better centered on the LSC POP diode. We did this while the IFO was thermalized, so it was a little hard to tell if we've restored the power to what it was before our moves. We looked at coherence similar to 81329; today our clipping isn't bad enough to show up as a reduction in coherence as was seen there.
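
(For anyone repeating this, a rough gwpy sketch of the kind of coherence check referenced above; the channel names and the 10-minute span are assumptions for illustration, not necessarily what was used in 81329.)

    # Rough illustration (not the exact channels/settings from 81329) of a coherence
    # check between the LSC POP diode and DARM with gwpy; channel names and the
    # 10-minute span are assumptions.
    from gwpy.timeseries import TimeSeriesDict

    chans = ["H1:LSC-POP_A_LF_OUT_DQ", "H1:LSC-DARM_IN1_DQ"]  # assumed channel names
    data = TimeSeriesDict.get(chans, "2025-02-13 19:00", "2025-02-13 19:10")

    coh = data[chans[0]].coherence(data[chans[1]], fftlength=16, overlap=8)
    plot = coh.plot()
    ax = plot.gca()
    ax.set_ylabel("Coherence")
    ax.set_xlim(5, 500)
    plot.savefig("pop_darm_coherence.png")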

Images attached to this report
H1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 10:57, Thursday 13 February 2025 (82785)
Rubbing script check

I reran the rubbing script using a time in the middle of when we were sitting in DOWN for much longer this morning (-t 1423505877, 18:17 UTC), and compared it to a time sitting in DOWN last Tuesday (-r 1422726343, 02/04/25 17:45 UTC). I had to change environments to get the script to run; I used labutils. Now I don't think I see any rubbing; the blue trace is the 4th, the orange trace is today.
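
(For illustration only: the actual check is the labutils rubbing script; the sketch below just compares an OSEM spectrum at the two DOWN times with gwpy, with an assumed channel and an assumed 10-minute stretch.)

    # Not the labutils rubbing script, just a rough gwpy illustration of the same
    # comparison: the spectrum of one vertical OSEM signal at the two DOWN times.
    # The channel name and the 10-minute stretch are assumptions.
    from gwpy.timeseries import TimeSeries

    chan = "H1:SUS-ITMX_M0_DAMP_V_IN1_DQ"                      # assumed example channel
    ref = TimeSeries.get(chan, 1422726343, 1422726343 + 600)   # 02/04/25 DOWN time
    now = TimeSeries.get(chan, 1423505877, 1423505877 + 600)   # today's DOWN time

    asd_ref = ref.asd(fftlength=64, overlap=32)
    asd_now = now.asd(fftlength=64, overlap=32)

    plot = asd_ref.plot(label="2025-02-04", color="tab:blue")
    ax = plot.gca()
    ax.plot(asd_now, label="2025-02-13", color="tab:orange")
    ax.set_xlim(0.1, 10)
    ax.legend()
    plot.savefig("rubbing_check_example.png")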

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:24, Thursday 13 February 2025 (82784)
Thu CP1 Fill

Thu Feb 13 10:06:36 2025 INFO: Fill completed in 6min 33secs

Jordan confirmed a good fill curbside. TCmins [-69C, -68C] OAT (-7C, 20F) DeltaTempTime 10:06:41

Images attached to this report
H1 ISC
matthewrichard.todd@LIGO.ORG - posted 09:55, Thursday 13 February 2025 (82783)
Added shims to raise up ALS DCPD-X on ISCT1

[Matthew, Camilla, Sheila]

ALS was having trouble locking X, so in light of us moving around spots on PR2 and pitching PR3, Sheila tried pitching PR3 up temporarily to see if that helped (it did). Instead of leaving it that way, we went out to ISCT1 and added two thin shims to the DCPD post, which returned the signal to its previous values.

Attached are photos of the shims on the DCPD post.

We are slightly missing the beam dump on the reflection off the DCPD, but it's low power so we may be able to ignore it; if we see excess scatter for some reason, we can check.

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 08:27, Thursday 13 February 2025 (82781)
Lock loss 1619 UTC

1423498761

Just before calibration and commissioning time, we lost lock at 1619 UTC. This doesn't have the usual ETM glitch a few hundred ms before the lock loss, but DARM and ETMX_L3 out do see some oddness.
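
(Sketch of how one could pull the relevant channels around this time; assumes gwpy/NDS2 access, and the channel names are guesses at representative channels, not necessarily the ones in the plots attached here.)

    # Sketch (assumes gwpy/NDS2 access; channel names are guesses at representative
    # channels) for pulling DARM and the ETMX L3 drive around the lock loss to look
    # for the usual pre-lock-loss glitch.
    import matplotlib.pyplot as plt
    from gwpy.timeseries import TimeSeries

    GPS = 1423498761
    chans = ["H1:LSC-DARM_IN1_DQ", "H1:SUS-ETMX_L3_MASTER_OUT_UR_DQ"]  # assumed

    fig, axes = plt.subplots(len(chans), 1, sharex=True)
    for ax, chan in zip(axes, chans):
        ts = TimeSeries.get(chan, GPS - 5, GPS + 1)
        ax.plot(ts.times.value - GPS, ts.value)
        ax.set_ylabel(chan.split(":")[1], fontsize=7)
    axes[-1].set_xlabel("Time relative to lock loss [s]")
    fig.savefig("lockloss_1423498761.png")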

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:35, Thursday 13 February 2025 (82780)
Ops Day Shift Start

TITLE: 02/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 9mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY: Locked for 3 hours, cold and breezy this morning, snow expected this afternoon.

LHO General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 22:11, Wednesday 12 February 2025 (82779)
OPS Eve Shift Summary

TITLE: 02/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is LOCKING at LOCKING_ARMS_GREEN.

We just lost lock after a very calm shift as I was writing this - cause unknown.

Accepted SDFs attached - nothing else of note.
LOG:

Start | System | Name       | Location   | Laser Haz | Task                               | End
16:54 | SAF    | Laser SAFE | LVEA       | YES       | LVEA is SAFE                       | 18:22
18:15 | SAF    | Sheila     | LVEA       | n->YES    | LVEA laser transition to HAZARD    | 18:47
18:15 | PEM    | Robert     | LVEA       | -         | Setup VP                           | 19:09
18:16 | ISC    | Sheila     | LVEA       | yes       | ISCT1 alignment                    | 19:09
18:26 | OPS    | RyanC      | Mech room  | N         | TCS CO2 chillers FAMIS             | 18:41
18:50 | VAC    | Janos      | Arms       | n         | CP checks                          | 19:32
19:09 | FAC    | Kim        | H2         | N         | Tech clean                         | 19:25
19:33 | PEM    | Robert     | LVEA       | yes       | Setting up injections for tomorrow | 19:51
19:34 | SAF    | Laser Haz  | LVEA       | YES       | LVEA is laser HAZARD!!!            | 06:13
20:39 | OPS    | Corey      | Optics lab | N         | Look for parts                     | 21:04
22:04 | EE     | Marc, Fil  | MY         | N         | Cable inventory                    | 22:10
Images attached to this report
LHO General (CDS, PEM)
filiberto.clara@LIGO.ORG - posted 17:19, Wednesday 12 February 2025 (82778)
BSC1 and BSC3 temperature sensors

BSC1 and BSC3 temperature sensors showed a drop of more than 5 degrees within the last few hours. To rule out possible issues with the AA chassis, I looked at the corner station Mains Mon channels, whose signals are all on the same AA chassis. They showed no change. Possible issue with the PEM Power Distribution Chassis in the biergarten. Will look at it tomorrow or during Tuesday maintenance.

H1:PEM-CS_TEMP_BSC1_ITMY
H1:PEM-CS_TEMP_BSC3_ITMX
H1:PEM-CS_MAINS_MON_EBAY_1
H1:PEM-CS_MAINS_MON_EBAY_2
H1:PEM-CS_MAINS_MON_EBAY_3
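
(A quick gwpy sketch, assuming NDS2 access, for trending these channels side by side to confirm the drop is only in the BSC temperature sensors and not in the mains monitors on the same AA chassis; the time span is illustrative.)

    # Sketch (assumes gwpy and NDS2 access from a CDS workstation): trend the BSC
    # temperature channels against the mains monitors on the same AA chassis to
    # confirm the drop is only in the temperature sensors. Time span is illustrative.
    from gwpy.timeseries import TimeSeriesDict

    channels = [
        "H1:PEM-CS_TEMP_BSC1_ITMY",
        "H1:PEM-CS_TEMP_BSC3_ITMX",
        "H1:PEM-CS_MAINS_MON_EBAY_1",
        "H1:PEM-CS_MAINS_MON_EBAY_2",
        "H1:PEM-CS_MAINS_MON_EBAY_3",
    ]
    data = TimeSeriesDict.get(channels, "2025-02-12 18:00", "2025-02-13 00:00")
    for name, ts in data.items():
        print(f"{name}: min = {ts.min().value:.2f}, max = {ts.max().value:.2f}")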

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:08, Wednesday 12 February 2025 (82775)
OPS Eve Shift Start

TITLE: 02/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 7mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

IFO is LOCKING at ENGAGE_ASC_FOR_FULL_IFO

Some ALS locking issues as we were locking, thought to be due to recent table work that required a new TRX gain normalization, which conflicted with the ALS Guardian scan alignment. The gain of 3.0 (up from 1.5) has been temporarily accepted in the SAFE.snap until more tuning is done tomorrow (attached SDF).

Images attached to this report
H1 SUS (FMP, ISC)
sheila.dwyer@LIGO.ORG - posted 15:42, Wednesday 12 February 2025 - last comment - 09:00, Thursday 13 February 2025(82773)
vertical osems at an extreme, short locks

We've had short locks today, and the vertical OSEMs suggest that the test masses are at an extreme not seen since last February. Eric noted that he had to change settings for the air handler (82763).

We've had very short locks today; Jim has reverted his change from just over a week ago, although we don't think that's likely to be the problem.

Perhaps one of our optics is rubbing.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 15:59, Wednesday 12 February 2025 (82774)SUS

I ran the rubbing script using 2 DOWN times, 1 from today (1423423402) and 1 from earlier this week (1423098322).

SRM_{P,Y} and BS_P look a little funky?

Images attached to this comment
ryan.crouch@LIGO.ORG - 16:20, Wednesday 12 February 2025 (82776)

I put the images together to make it easier to look through.

Images attached to this comment
ryan.crouch@LIGO.ORG - 16:33, Wednesday 12 February 2025 (82777)SUS

All the OPLEV sums have been dropping over the past ~2 days, except for SR3. PR3 sees the largest drop, but that's expected from all the spot moves?

Images attached to this comment
sheila.dwyer@LIGO.ORG - 09:00, Thursday 13 February 2025 (82782)

There were only 4 seconds of down time at the first time given to this rubbing script, 1423423402. We probably need to sit in DOWN for the whole duration the script uses (I'm not sure how much data it gets).

LHO General
thomas.shaffer@LIGO.ORG - posted 13:57, Wednesday 12 February 2025 (82765)
Ops Day Shift End

TITLE: 02/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Locked for 2 hours so far. Two relocks during my shift, both needed initial alignment. On the second one, Sheila went onto ISCT1 to better align ALS Diff in the hopes that this would help ALSX locking.
LOG:

Start | System | Name       | Location   | Laser Haz | Task                               | End
16:54 | SAF    | Laser SAFE | LVEA       | YES       | LVEA is SAFE                       | 18:22
18:15 | SAF    | Sheila     | LVEA       | n->YES    | LVEA laser transition to HAZARD    | 18:47
18:15 | PEM    | Robert     | LVEA       | -         | Setup VP                           | 19:09
18:16 | ISC    | Sheila     | LVEA       | yes       | ISCT1 alignment                    | 19:09
18:26 | OPS    | RyanC      | Mech room  | N         | TCS CO2 chillers FAMIS             | 18:41
18:50 | VAC    | Janos      | Arms       | n         | CP checks                          | 19:32
19:09 | FAC    | Kim        | H2         | N         | Tech clean                         | 19:25
19:33 | PEM    | Robert     | LVEA       | yes       | Setting up injections for tomorrow | 19:51
19:34 | SAF    | Laser Haz  | LVEA       | YES       | LVEA is laser HAZARD!!!            | 06:13
20:39 | OPS    | Corey      | Optics lab | N         | Look for parts                     | 21:04
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 13:53, Wednesday 12 February 2025 (82771)
TAKE TWO : Calibration for QUAD/Triple Top Coil Drivers' FASTIMON (assuming Broadband Monitor Board)
J. Kissel

Context
Recent investigations into how much DC alignment offsets the quad reaction chains (LHO:82669) drove the discovery that the calibration of the current monitor channels, aka FASTIMON channels, that I'd modeled in 2024 (LHO:77545) was wrong. Because the collection of circuits involved has a lot of single-ended to differential conversions along the way, and factors of two are easy to get wrong, I took the time to measure a QUAD TOP coil driver in the lab with trusty multi-meters and SR785s.

Executive Summary
For all the work validating the model below, the bottom line is that the corrected FASTIMON calibrations are as follows (each exactly a factor of two different from those in LHO:77545):
-----------------
    calibration_QTOP [V/A] = 2 * 40.00 [V/A] * (10e3 / 30e3)
                           = 26.667 [V/A]
          or 0.0267 [V/mA]
          or 37.5 [mA/V]

    calibration_QTOP [ct/A] = 2 * 40.00 [V/A] * (10e3 / 30e3) * 1 * (2^16 / 40 [ct/V])
                            = 4.3691e+04 [ct/A]
          or 43.691 [ct/mA]
          or 0.0228 [mA/ct]

-----------------
    calibration_TTOP [V/A] = 2 * 29.83 [V/A] * (10e3 / 30e3)
                           = 19.887 [V/A]
          or 0.0199 [V/mA]
          or 50.285 [mA/V]

    calibration_TTOP [ct/A] = 2 * 29.83 [V/A] * (10e3 / 30e3) * 1 * (2^16 / 40 [ct/V])
                            = 3.2582e+04 [ct/A]
          or 32.582 [ct/mA]
          or 0.0307 [mA/ct]

-----------------
    calibration_OTOP [V/A] = 2 * 279.63 [V/A] * (10e3 / 30e3)
                           = 186.42 [V/A]
          or 0.1864 [V/mA]
          or 5.3642 [mA/V]

    calibration_OTOP [ct/A] = 2 * 279.63 [V/A] * (10e3 / 30e3) * 1 * (2^16 / 40 [ct/V])
                            = 3.0543e+05 [ct/A]
          or 305.43 [ct/mA]
          or 0.00327 [mA/ct]


Review of the Model

In the EE shop, I pulled the spare TOP driver, S1102666, which happens to be yet another "the only difference is the output impedance" flavor of TOP driver, the Transmon version, D1001650, which has the same output impedance as the QUAD TOP, D0902747. But choosing a "different" TOP driver gives me an excuse to review the model and why the output impedance matters for the calibration of the current monitor.

Recall, for these top drivers, 
    Z_OUT   = (R5+R1) || (R90+R91) 
            = (R5+R1) * (R90+R91) / (R5+R1 + R90+R91)

    Z_OUT^(QuadTOP) = Z_OUT^(TransmonTOP) = (44 * 440) / (44 + 440) = 40.00 [V/A]
    Z_OUT^(TripleTOP) = (32 * 440) / (32 + 440) = 29.83 [V/A]
    Z_OUT^(OMCTOP) = (292 * 6600) / (292 + 6600) = 279.63 [V/A]

Which changes the calibration of the coil current monitor, because 
         ADC [ct]        [                         "R1" [V_SE] ]    1 [V_DF]    2^16 [ct]      
    ----------------   = [ 2 * Z_OUT [V_DF / A] *  ----------- ] *  -------- * ----------- 
    Coil Current [A]     [                         "R2" [V_DF] ]    1 [V_SE]    40 [V_DF] 
where you'll notice that *the* difference between the equation here and that in LHO:77545 is the "conversion" from the single-ended output of the current monitor, which is piped to the positive leg of its external differential output while the negative leg is held at 0V. So the ADC, which reads (positive - negative), will just read out a value that's equivalent to the original single-ended voltage.
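
As a sanity check, here's a small Python snippet (not part of the measurement, just re-evaluating the model above with the Z_OUT values, divider ratio, and ADC gain quoted in this entry) that reproduces the Executive Summary numbers:

    # Re-evaluates the corrected model above with the Z_OUT values, divider ratio,
    # and ADC gain quoted in this entry; nothing here is measured.
    def fastimon_cal(z_out):
        """Return (V/A, ct/A) for a given coil driver output impedance [V/A]."""
        r1_over_r2 = 10e3 / 30e3           # "R1"/"R2" monitor divider ratio
        adc_gain = 2**16 / 40.0            # ADC counts per differential volt
        v_per_a = 2 * z_out * r1_over_r2   # factor of 2: single-ended -> differential
        return v_per_a, v_per_a * adc_gain

    for name, z_out in [("QTOP", 40.00), ("TTOP", 29.83), ("OTOP", 279.63)]:
        v_per_a, ct_per_a = fastimon_cal(z_out)
        print(f"{name}: {v_per_a:7.3f} V/A, {ct_per_a:.4e} ct/A, {1e3/ct_per_a:.4f} mA/ct")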

Measurement Setup

I first attach diagrams of the measurement setup, so that there's no confusion about factors of two from which pins I read out and how.

(1) (pages 1 and 2 of CoilDriver_FASTIMON_Calibration_Diagrams.pdf) Validating the calibration at DC :: Using the SR785 and an SR785 accessory box to drive a range of DC voltage offsets differentially into the DAC input of the driver, I measured the voltage across a dummy-OSEM set of resistors -- as a proxy for the coil current -- and the differential output voltages of the FASTIMON and the SLOW RMS I MON. Although we didn't expect it to matter, I did this in two configurations to make sure the answer didn't change with the low-pass switch ON vs. OFF.

(2) (pages 3 thru 5 of CoilDriver_FASTIMON_Calibration_Diagrams.pdf) Validating the calibration at AC (~1 to 100 Hz) :: Using a similar setup, but now making sure the inputs and outputs are read out truly differentially (making sure all BNC shields are connected to the chassis 0V), I drove a swept-sine excitation through the driver from 0.1 to 1000 Hz in various configurations, in order to capture
   (a) The "standard" transconductance measurement, Coil Current per DAC input voltage
   (b) The transfer function we really want, which is the FAST I MON per Coil Current
   (c) A bonus, FAST I MON per DAC input voltage

Results

The rest of the .pdfs show the results of these set-ups. While mildly interesting, they all agree with the model.
Non-image files attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 11:21, Wednesday 12 February 2025 - last comment - 13:16, Wednesday 12 February 2025(82769)
Hardware Injection process failed at 07:43 PST and failed to restart

At 07:43 Wed 12feb2025 PST the psinject process on h1hwinj1 crashed and did not restart automatically. H1 was not locked at this time.

The failure to restart was tracked to an expired leap-seconds file on h1hwinj1. This is a Scientific Linux 7 machine; the OS is obsolete and updated tzdata packages are not available. As a workaround to get psinject running again, I hand-copied a Debian 12 version of the /usr/share/zoneinfo/[leapseconds, leap-seconds.list] files. At this point monit was able to successfully restart the psinject systemd process.
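
(For future reference, a small Python sketch, not the fix itself, that checks whether a host's leap-seconds file has expired; it reads the "#@" expiry line of leap-seconds.list, which is an NTP timestamp, i.e. seconds since 1900-01-01 UTC.)

    # Small check (not the fix itself) of whether a host's leap-seconds file has
    # expired: the "#@" line of leap-seconds.list carries the expiry as an NTP
    # timestamp (seconds since 1900-01-01 UTC).
    from datetime import datetime, timedelta, timezone

    NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

    def leap_file_expiry(path="/usr/share/zoneinfo/leap-seconds.list"):
        with open(path) as f:
            for line in f:
                if line.startswith("#@"):
                    return NTP_EPOCH + timedelta(seconds=int(line.split()[1]))
        return None

    expiry = leap_file_expiry()
    expired = expiry is not None and expiry < datetime.now(timezone.utc)
    print(f"leap-seconds.list expires {expiry}" + (" (EXPIRED)" if expired else ""))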

In conversation with Mike Thomas at LLO, he said he had seen this problem several weeks ago and implemented the same solution (hand copying the leap-second files). Both sites will schedule an upgrade of their hwinj machines to Rocky Linux post-O4.

Timeline is (all times PST):

07:43 psinject process fails, H1 is unlocked

08:41 H1 ready for observe, but blocked due to lack of hw-injections

09:34 psinject problem resolved, H1 able to transition to observe.

Lost observation time: 53 minutes.

We cannot definitively say why psinject failed today at LHO and several weeks ago at LLO. Mike suspects a local cache expired, causing the code to re-read the leap-seconds file and discover that it had expired 28dec2024.

Post Script:

While investigating the loss of the h1calinj CW_EXC testpoint related to this failure, EJ found that the root file system on h1vmboot1 (primary boot server) was 100% full. We deleted some 2023 logs to quickly bring this down to 97% full. At this time we don't think this had anything to do with the psinject issue.

Comments related to this report
david.barker@LIGO.ORG - 13:16, Wednesday 12 February 2025 (82770)

This had happened before; details are in FRS30046.
