H1 ISC
jenne.driggers@LIGO.ORG - posted 18:55, Monday 08 December 2025 (88432)
Locked as far as DRMI

[Anamaria, RyanS, Jenne, Oli, RyanC, MattT, JeffK]

We ran through an initial alignment (more on that in a moment), and have gotten as far as locking DRMI for a few minutes.  Good progress, especially for a day when the environmental conditions have been much less than favorable (wind, microseism, and earthquakes).  We'll try to make more progress tomorrow after the wind has died down overnight.

During initial alignment, we followed Sheila's suggestion and locked the green arms.  The comm beatnote was still very small (something like -12 dBm).  PR3 is in the slider/osem position that it was in before the power outage. We set Xarm ALS to use only the ETM_TMS WFS, and not use the camera loop.  We then walked ITM to try to improve the COMM beatnote.  At the time, I thought I had only gotten the comm beatnote up to -9 dBm or so (which is about where it was before the power outage), but later it seems that maybe I went too far and it's all the way up at -3 dBm.  We may consider undoing some of that ITM move.  The ITMX, ETMX, and TMSX yaw osem values nicely matched where they had been before the power outage.  All three suspensions' pitch osems are a few urad different, but going closer to the pre-outage place made the comm beatnote worse, so I gave up trying to match the pitch osems.

We did not reset any camera setpoints, so probably we'll want to do the next initial alignment (if we do one) using only ETM_TMS WFS for Xgreen.  

The rest of initial alignment went smoothly, after we checked that all other optics' sliders were in their pre-outage locations.  Some were tens of urad off on the sliders, which doesn't make sense.  We had to help the alignment in several places by hand-aligning the optics a bit, but made no by-hand changes to control servos or dark offsets or anything like that.

When trying to lock, we struggled to hold Yarm locked and lock COMM and DIFF until the seismic configuration auto-switched to the microseism state.  Suddenly things were much easier.  

We caught DRMI lock twice on our first lock attempt, although we lost DRMI lock during ASC.  We also were able to lock PRMI, but lost lock while I was trying to adjust PRM.  Later, we locked PRMI, and were able to offload the PRMI ASC (to PRM and BS).

The wind has picked back up and it's a struggle to catch and hold DRMI lock, so we're going to try again tomorrow.

 

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 17:23, Monday 08 December 2025 (88430)
HAM7 is vented

(Randy, Jordan, Travis, Filiberto, Gerardo)

We closed four small gate valves: two at the relay tube, RV-1 and RV-2, and two at the filter cavity tube between BSC3 and HAM7, FCV-1 and FCV-2.  The purge air system has been on since last week, with a dew point of -66 °C reported by the dryer tower and -44.6 °C measured chamber side.  Particulate was measured at the port by the chamber: zero for all sizes.  HAM7 ion pump was valved out.

Filiberto helped us make sure high voltage was off at HAM7; we double-checked with procedure M1300464.  Then the system was vented per procedure E2300169 with no issues.

Other activities at the vented chamber:

Currently the chamber has the purge air active at a very low setting.

 

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 16:35, Monday 08 December 2025 (88419)
OPS Monday Day shift summary

TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:

HAM7 was prepped then vented today and there are 4 bolts left on each door; the HAM7 HV was turned off (alog 88421). I changed the nuc30 DARM FOM to the NO_GDS template in the launch.yaml file. Team CDS has been working their way through some of the EDC channel disconnects; the list shrinks every time I look at it.

We wanted to lock for health checks today but the Earth disagreed: a 7.6 from Japan rumbled in around 14:00 UTC and then a 6.7 from the same region at 22:00 UTC, and the wind started to pick up around 21:00 UTC. Windy.com reports the wind will increase/remain elevated until it peaks around 10/11 PM PST (06/07 UTC), then it should start to decrease. Ground motion and wind are still elevated as of the end of the shift.

LOG:

Start Time System Name Location Laser_Haz Task End Time
15:45 FAC Nellie LVEA N->Y->N Tech clean 18:21
16:03 FAC Kim LVEA N->Y->N Tech clean 18:21
16:14 FAC Randy LVEA N Door prep, HAM6/7 17:07
16:44 SAF Sheila LVEA N -> Y LASER HAZARD transition to HAZARD 16:53
16:53   LASER HAZARD LVEA Y LVEA IS LASER HAZARD 18:00
16:54 ISC Sheila LVEA Y SQZT7 work 17:53
17:08 CAL Tony PCAL lab Y PCAL measurement 18:28
17:13 EE Fil Mid/EndY N Power cycle electronics, timing issue 17:46
17:36 EE Marc, Daniel LVEA Y Check on racks by SQZT7 18:46
17:47 EE Fil LVEA Y Join Marc 18:13
17:55 ISC Matt Prep lab N Checks, JOT lab 18:28
18:05 VAC Travis LVEA N Prep for HAM7 vent 19:20
18:10 EE Fil LVEA n Shutting HV off for HAM7 19:10
18:17 SAF Richard LVEA N Check on Fil and Marc 18:28
18:21 VAC Gerardo LVEA N HAM7 checks 19:36
18:29 CAL Tony, Yuri FCES N   19:19
20:29 CDS Dave MidY and EndY N Plug in switch 21:16
20:46 CAL Tony PCAL lab LOCAL Grab goggles 20:48
21:48 VAC Randy LVEA N Door bolts 22:58
22:05 VAC Gerardo LVEA N HAM7 doors 23:20
22:12 VAC Jordan LVEA N HAM7 door bolts 22:38
22:16 FAC Tyler +1 MY, EY N Fire inspections 00:06
22:18 ISC Matt Prep Lab N Parts on CHETA table 22:53
22:43 VAC Travis LVEA N Join door crew 23:19
22:58   Anamaria, Rene, Alicia LVEA N Checks by PSL 23:30
23:55 CAL Tony PCAL lab LOCAL Take a quick picture 00:03
23:57 ISC Jennie Prep lab, LVEA N Gather parts Ongoing

18:32 UTC SEI_CONF back to AUTO from MAINTENANCE where it was all weekend

18:58 UTC HAM7 ISI tripped

22:03 UTC Earthquake mode as a 6.6 from Japan hit us

H1 SUS (CDS, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 16:14, Monday 08 December 2025 (88415)
Weekend ETMY Software Watchdog Trips were Because of L2 to R0 Longitudinal Tracking not being blocked by USER WD
J. Kissel, J. Warner

Trending around this morning to understand the reported ETMY software watchdog (SWWD) trips over the weekend (LHO:88399 and LHO:88403), Jim and I conclude that -- while unfortunate -- nothing in software, electronics, or hardware is misbehaving or broken; we just had a whopper Alaskan earthquake (see USGS report for EQ us6000rsy1 at 2025-12-06 20:41:49 UTC) and a few big aftershocks.

Remember, since the upgrade to the 32CH, 28-bit DACs last week, both end stations' DAC outputs will "look CRAZY" to anyone used to looking at the counts of a 20-bit DAC. Namely, the maximum number of counts is a factor of 2^8 = 256x larger than previously, saturating at +/- 2^27 = +/- 134217728 [DAC counts] (as opposed to +/- 2^19 = +/- 524288 [DAC counts]).
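
For reference, the arithmetic as a two-line check (plain Python, nothing site-specific):

# Sanity check of the new vs. old full-scale DAC counts
# (signed converters, so the usable range is +/- 2**(N-1) for an N-bit DAC).
old_full_scale = 2**19    # 20-bit DAC: +/- 524,288 counts
new_full_scale = 2**27    # 28-bit DAC: +/- 134,217,728 counts
print(new_full_scale // old_full_scale)   # -> 256, i.e. 2**8 larger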

The real conclusion: Both SWWD thresholds and USER WD Sensor Calibration need updating; they were overlooked in the change of the OSEM Sat Amp whitening filter from 0.4:10 Hz to 0.1:5.3 Hz per ECR:E2400330 / IIET:LHO:31595.
The watchdogs use a 0.1 to 10 Hz band-limited RMS as their trigger signal, and the digital ADC counts they use (calibrated into either raw ADC voltage or microns, [um], of top mass motion) will see a factor of anywhere from 2x to 4x increase in RMS value for the same OSEM sensor PD readout current. In other words, the triggers are "erroneously" a factor of 2x to 4x more sensitive to the same displacement.
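
As a minimal sketch of where that 2x-4x figure comes from (standard numpy/scipy only; the zero:pole frequencies are the nominal ECR values, not the per-channel measured ones):

import numpy as np
from scipy import signal

def whitening(zero_hz, pole_hz):
    # Satamp whitening stage H(s) = (s/z + 1) / (s/p + 1), unity gain at DC
    z, p = 2 * np.pi * zero_hz, 2 * np.pi * pole_hz
    return signal.TransferFunction([1.0 / z, 1.0], [1.0 / p, 1.0])

# Evaluate both whitening shapes across the 0.1-10 Hz watchdog band
f = np.logspace(np.log10(0.1), np.log10(10), 200)
w = 2 * np.pi * f
_, mag_old, _ = signal.bode(whitening(0.4, 10.0), w)   # old 0.4:10 Hz satamps
_, mag_new, _ = signal.bode(whitening(0.1, 5.3), w)    # new 0.1:5.3 Hz satamps

# Ratio = how much bigger the uncompensated watchdog input is for the same motion
ratio = 10 ** ((mag_new - mag_old) / 20.0)
print(ratio.min(), ratio.max())   # roughly 1.4x near 0.1 Hz, peaking near 3.7x around 1 Hz

The RMS change for a real signal depends on where the motion spectrum puts its power within the band, which is why the quoted range is 2x-4x rather than a single number.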

As these two watchdog trigger systems are currently mis-calibrated, I put all references to their RMS amplitudes in quotes, i.e. ["um"]_RMS for the USER WDs and ["mV"]_RMS for the SWWDs, and quote a *change* in value when possible.
Note -- any quote of OSEM sensors (i.e. the OSEM basis OSEMINF_{OSEM}_OUT_DQ and EULER basis DAMP_{DOF}_IN1_DQ channels) in [um] is correctly calibrated, and the ground motion sensors (and any band-limited derivatives thereof; the BLRMS and PeakMons) are similarly well-calibrated.

Also: The L2 to R0 tracking went into oscillation because the USER WDs didn't trip. AGAIN -- we really need to TURN OFF this loop programmatically until high in the lock acquisition sequence. It's too hidden -- from a user interface standpoint -- for folks to realize that it should never be used, and is always suspect, when the SUS system is barely functional (e.g. when we're vented, or after a power outage, or after a CDS hardware / software change, etc.)

Here's the timeline leading up to the first SUS/SEI software watchdog trip, which helped us understand that there's nothing wrong with the software / electronics / hardware; instead it was the giant EQ that tripped things originally, and subsequent trips were because of an overlooked watchdog trigger sensor vs. threshold mis-calibration coupled with the R0 tracking loops.
2025-12-04 
    20:25 Sitewide Power Outage.
    22:02 Power back on.

2025-12-05
    02:35 SUS-ETMY watchdog untripped, suspension recovery
    20:38 SEI-ETMY system back to FULLY ISOLATED (large gap in recovery between SUS and SEI due to SEI GRD non-functional because the RTCDS file system had not yet recovered)
    20:48 Locking/Initial alignment start for recovery.

2025-12-06 
    20:41:49 Huge 7.0 Mag EQ in Alaska

    20:46:30 First P- and S-waves hit the observatory; corner station peakmon (in Z) is around 15 [um/s]_peak (30-100 mHz band).
             SUS-ETMY sees this larger motion; motion on M0 OSEM sensors in the 0.1 to 10 Hz band increases from 0.01 ["um"]_RMS to 1 ["um"]_RMS.
             SUS-SWWD, using the same sensors in the same band but calibrated into ADC volts, increases from 0.6 ["mV"]_RMS to ~5 ["mV"]_RMS.

    20:51:39 ISI-ETMY ST1 USER watchdog trips because the T240s have tilted off into saturation, killing ST1 isolation loops
             SUS-ETMY sees the large DC shift in alignment from the "loss" of ST1, and 
             SUS-ETMY sees the very large motion, increasing to ~100 ["um"]_RMS (with USER WD threshold set to 150 ["um"]_RMS) -- USER WD never trips. But -- peak motion is oscillating to the 300 ["um"]_peak range (but not close to saturating the ADC.)
             SUS-SWWD reports an RMS voltage increase to 500 ["mV"]_RMS (with the SWWD threshold set to 110 ["mV"]_RMS) -- starts the alarm count-down of 600 [sec] = 10 [min].

    20:51:40 ISI-ETMY ST2 USER watchdog trips ~0.5 sec later as the GS13s go into saturation, and actuators try hard to keep up with the "missing" ST1 isolation
             SUS-ETMY really starts to shake here. 

    20:52:36 The peak Love/Rayleigh waves hit the site, with the corner station Z motion peakmon reporting 140 [um/s], and the 30 - 100 mHz BLRMS reporting 225 [um/s].
             At this point it's clear from the OSEMs that the mechanical system (either the ISI or the QUAD) is clanking against earthquake stops, as the OSEMs show saw-tooth-like waveforms.

    20:55:39 SWWD trips for suspension, shutting off suspension DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon.
             The SUS is still ringing, naturally recovering from the still-large EQ and the uncontrolled ISI.
    
    20:59:39 SWWD trips for seismic, shutting off all DAC output for HEPI and ISI ETMY
             SUS-ETMY OSEMs don't really notice -- it's still naturally ringing down with a LOT of displacement. There is a noticeable small alignment shift as HEPI sloshes to zero.

    21:06    SUS-ETMY SIDE OSEM stops looking like a saw-tooth, the last one to naturally ring-down. After this all SUS looks wobbly, but normal.
             ISI-ETMY ST2 GS-13 stops saturating
 
    21:08    SUS-ETMY LEFT OSEM stops exceeding the SWWD threshold, the last one to do so.

2025-12-07
    00:05    HPI-ETMY and ISI-ETMY User WDs are untripped, though it was a "tripped again ; reset" messy restart for HPI because we didn't realize that the SWWD needed to be untripped.
             The SEI manager state was trying to get back to DAMPED, which includes turning on the ISO loops for HPI.
             Since no HPI or ISI USER WDs know about the SWWD DAC shut-off, they "can begin" to do so, "not realizing" there is no physical DAC output.
             The ISI's local damping is "stable" without DACs because there's just not a lot that these loops do and they're AC coupled.
             HPI's feedback loops, which are DC coupled, will run away.

    00:11    SUS and SEI SWWD is untripped

    00:11:44 HPI USER WD untripped, 

    00:12    RMS of OSEM motion begins to ramp up again, the L / P OSEMs start to show an oscillation at almost exactly 2 Hz.
             The R0 USER WD never tripped, which allowed the H1 SUS ETMY L2 (PUM) to R0 (TOP) DC coupled longitudinal loop to flow out to the DAC.
             with the Seismic system in DAMPED (HEPI running, but ST1 and ST2 of the ISIs only lightly damped), and
             with the M0 USER WD still tripped and the main chain without any damping or control,
             after HEPI turned on, causing a shift in the alignment of the QUAD, changing the distance / spacing of the L2 stage, and
             the L2 "witness" OSEMs started feeding back the undamped main chain L2 to the reaction chain R0 stage, and slowly began oscillating in positive feedback. See the R0 turn ON vs. SWWD annotated screenshot.
             Looking at the recently measured open loop gain of this longitudinal loop -- taken with the SUS in its nominally DAMPED condition and the ISI ISOLATED -- there's a damped mode at 2 Hz.
             It seems very reasonable that this mode is a main chain mode, and when undamped would destroy the gain margin at 2 Hz and go unstable. See the R0Tracking_OpenLoopGain annotated screenshot from LHO:87529.
             And as this loop pushes on the main chain, with an only-damped ISI, it's entirely plausible that the R0 oscillation coupled back into the main chain, causing a positive feedback loop.
             
    
    00:22    The main chain OSEM RMS exceeds the SWWD threshold again, as the positive feedback gets out of control peaking around ~300 ["mV"]_RMS, and the USER WD says ~100 ["um"]_RMS. Worst for the pitch / longitudinal sensors, F1, F2, F3.
             But again, this does NOT trip the R0 USER WD, because the F1, F2, F3 R0 OSEM motion is "only" 80 ["um"]_RMS still below the 150 ["um"]_RMS limit.

    00:27    SWWD trips for suspensions AGAIN as a result, shutting off all DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon.
             THIS kills the L2 to R0 tracking drive, stopping the oscillation.
    
    00:31    SWWD trips for seismic AGAIN, shutting off all DAC output for HEPI and ISI ETMY

    15:59    SWWDs are untripped, and because the SUS USER WD is still tripped, the same L2 to R0 instability happens again.
             This is where the impression that "the watchdogs keep tripping; something is broken" crept in.
             
    16:16    SWWD for sus trips again
    
    16:20    SWWD for SEI trips again 

2025-12-08
    15:34    SUS-ETMY USER WD is untripped, main chain damping starts again, and recovery goes smoothly.
    
    16:49    SUS-ETMY brought back to ALIGNED
    
Images attached to this report
Non-image files attached to this report
H1 SUS
oli.patane@LIGO.ORG - posted 15:54, Monday 08 December 2025 (88428)
Updated SUS watchdog BANDLIM filter files for SUS with updated 0.1:5.3 Hz satamps

Jeff alerted me that we had never updated the SUS watchdog compensation filters for the suspension stages with the upgraded satamps (ECR E2400330). In the SUS watchdog filter banks, in the BANDLIM bank, FM6 contains the compensation filter for the satamps. I used a script to go through and update all of these for the relevant suspensions and stages with their precise compensation filter values (the actual values of each satamp channel's response were measured and live in /ligo/svncommon/SusSVN/sus/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/), then loaded the new filter files in. This filter module was updated for the affected suspensions and stages; a rough sketch of the compensation shape follows.
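
As a rough sketch of the shape of that compensation (hypothetical channel names and values; the measured per-channel responses and the actual script live in the SusSVN path above), the BANDLIM FM6 filter inverts the satamp whitening, so a measured zero:pole of ~0.1:5.3 Hz becomes a digital filter with a zero at ~5.3 Hz and a pole at ~0.1 Hz:

# Hypothetical sketch -- channel names and measured values are placeholders.
measured = {
    # channel        (whitening zero [Hz], whitening pole [Hz])
    "ETMY_M0_F1": (0.102, 5.28),
    "ETMY_M0_F2": (0.099, 5.31),
}

for chan, (z_hz, p_hz) in measured.items():
    # foton-style zpk design string: zeros, poles, gain, "n" (frequencies in Hz);
    # the whitening pole becomes the compensation zero and vice versa.
    design = f'zpk([{p_hz}],[{z_hz}],1,"n")'
    print(f"{chan} BANDLIM FM6 -> {design}")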

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 14:53, Monday 08 December 2025 (88426)
Started PLC and IOC for EX, CS power monitoring
The Visual Studio 2017 Community Edition installed on the end X machine (10.105.0.31) said my license had expired (it should be free), but I was unable to log into my account to renew it. I ended up reinstalling TwinCAT and selecting the option to install the Visual Studio shell as well, since that comes with TwinCAT and does not require an account. I was able to open the solution in that and run it.

The machine at the corner station (10.105.0.27) only had the shell installed, and I had no trouble with it.
H1 SEI
ryan.short@LIGO.ORG - posted 14:26, Monday 08 December 2025 (88425)
SEI_DIFF Now Ignoring HAMs 7 & 8

Jenne and I noticed that the SEI_DIFF node was reporting "chambers not nominal" and was stuck in its 'DOWN' state as a result. We realized this was because the HAM7 ISI is tripped, likely due to vent prep and impending door removal, so we removed HAMs 7 and 8 from the list of chambers that are checked in SEI_DIFF's "isi_guardstate_okay()" function. After loading the node, SEI_DIFF successfully went to 'BSC2_FULL_DIFF_CPS'.
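
A minimal sketch of the kind of check involved (the function name is from SEI_DIFF, but the body and state names here are illustrative, not the actual guardian code):

EXCLUDED_CHAMBERS = ['HAM7', 'HAM8']   # vented / door removal, ISI expected to be tripped

def isi_guardstate_okay(chambers, isi_state):
    """Return True if every non-excluded chamber's ISI guardian is in a nominal state."""
    for chamber in chambers:
        if chamber in EXCLUDED_CHAMBERS:
            continue                      # skip the chambers we've agreed to ignore
        if isi_state(chamber) not in ('ISOLATED', 'FULLY_ISOLATED'):
            return False
    return True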

H1 DAQ
daniel.sigg@LIGO.ORG - posted 13:50, Monday 08 December 2025 (88423)
TwinCAT Oddities

The picomotor controllers were not working. The software side looked ok, but there was no physical drive signal. The TwinCAT system showed an error message about "nonsensical priority order of the PLC tasks". In the past, we ignored these messages without any problems. After fixing this issue and re-activating the system, it started working again. Not sure if it just needed a restart, or if the priority order has now become important. More investigation needed.

H1 DAQ
daniel.sigg@LIGO.ORG - posted 13:45, Monday 08 December 2025 (88422)
Atomic clock reset

The atomic clock has been resynchronized with GPS. The tolerance has been reduced to <1000 ns again.

Images attached to this report
H1 SQZ
filiberto.clara@LIGO.ORG - posted 12:42, Monday 08 December 2025 (88421)
HAM7 High Voltage Powered Off

M1300464 - Preparing the aLIGO Interferometer for Pumpdown or Vent

The following high voltage power supplies were powered off in preparation for the HAM 7 vent.

1. MER Mezzanine - HAM7 PSAMS HV
2. MER Mezzanine - HAM7 Piezo HV

The PSAMs were ramped down - Sheila (alog 88414)
HAM 7 Pico High Controller disabled - Gerardo
SQZ OPO TEC Servo disabled - Gerardo

H1 ISC (OpsInfo)
ryan.short@LIGO.ORG - posted 12:26, Monday 08 December 2025 - last comment - 15:18, Monday 08 December 2025(88420)
"Ignore SQZ" Flag Enabled

Since the SQZ laser is now off in preparation for the HAM7 vent, and we still want to keep trying to lock the IFO in the meantime, I've switched the "ignore_sqz" flag in lscparams.py from False to True. ISC_LOCK has been loaded.

Comments related to this report
ryan.short@LIGO.ORG - 13:58, Monday 08 December 2025 (88424)

This sent ISC_LOCK into error in a few places, so I've flipped the flag back and will revisit the logic at a later time.

ryan.short@LIGO.ORG - 15:18, Monday 08 December 2025 (88427)

I've fixed the logic in ISC_LOCK, so ignoring SQZ_MANAGER with the flag raised now works as intended. I'm leaving the "ignore_sqz" flag True and all changes have been loaded.
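
A minimal sketch of how the flag can gate the SQZ handling (aside from the ignore_sqz flag itself, which lives in lscparams.py, the names below are illustrative, not the actual ISC_LOCK code):

ignore_sqz = True   # stand-in for lscparams.ignore_sqz

def handle_sqz(nodes):
    if ignore_sqz:
        # SQZ laser is off for the HAM7 vent: skip SQZ_MANAGER requests and
        # checks entirely so the rest of lock acquisition can proceed.
        return
    nodes['SQZ_MANAGER'] = 'FREQ_DEP_SQZ'   # normal squeezing request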

H1 CDS
jonathan.hanks@LIGO.ORG - posted 10:56, Monday 08 December 2025 (88418)
Replaced a failed disk on cdsfs0

I replaced a failed disk on cdsfs0.  zpool status told us:

	  raidz3-1               DEGRADED     0     0     0
	    sdk                  ONLINE       0     0     0
	    3081545339777090432  OFFLINE      0     0     0  was /dev/sdq1
	    sdq                  ONLINE       0     0     0
	    sdn                  ONLINE       0     0     0

This hinted that the failed disk was /dev/sdq. When identifying the physical disk behind /dev/sdq (dd if=/dev/sdq of=/dev/null, which does a continuous read of the disk to make it light up), it pointed to a disk in the caddy marked 1:17. I then told zfs to fail /dev/sdq1, and then reads started showing up on the disk (as identified by the LEDs).

To be safe I took the list of drives shown by zpool status, and the list of drives listed by the os (looking in /dev/disk/by-id).  I then identified every disk on the system by doing a long read from it (to force the LED).  There was a jump in caddies from 1:15-1:17.  After accounting for all disks, I figured the bad disk was the 1:16 slot.  I then pulled that disk.  zpool status showed no other issues.

After replacing the disk I had to create a GPT partition on it using parted.

Then I replaced the disk in the pool:

zpool replace fs0pool 3081545339777090432 /dev/sdz

Now it is resilvering.

	  raidz3-1                 DEGRADED     0     0     0
	    sdk                    ONLINE       0     0     0
	    replacing-1            OFFLINE      0     0     0
	      3081545339777090432  OFFLINE      0     0     0  was /dev/sdq1
	      sdz                  ONLINE       0     0     0  (resilvering)
	    sdq                    ONLINE       0     0     0
	    sdn                    ONLINE       0     0     0
	    sdj                    ONLINE       0     0     0
	    sdm                    ONLINE       0     0     0

We need to retire this array. There are hints of problems on other disks.

H1 PSL
ryan.short@LIGO.ORG - posted 10:39, Monday 08 December 2025 (88417)
PSL 10-Day Trends

FAMIS 31115

This week's check serves as a comparison of how things in the PSL came back after the long power outage last Thursday. Overall, things look good, with the exception that alignment into the PMC and RefCav is certainly needed (not surprising after the laser goes down), but we're waiting on full picomotor functionality to be restored before doing that. As is, alignment is fine enough for now.

One can see that after the system was recovered Thursday afternoon, for about a day, the environmental controls for the enclosure were in a weird state (see Jason's alog) which caused oscillations in amplifier pump diode currents and thus output power from AMP2. This has been fixed and behavior appears to be back to normal.

Additionally, the differential pressure sensor between the anteroom and laser room seems to have been fixed by the outage. Hooray.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:21, Monday 08 December 2025 (88416)
Mon CP1 Fill

Mon Dec 08 10:12:15 2025 Fill completed in 12min 12secs
 

Images attached to this report
H1 AOS
sheila.dwyer@LIGO.ORG - posted 10:14, Monday 08 December 2025 (88414)
psams ramped off for HAM7 vent

Sheila, Filiberto

In prep for the HAM7 vent, we ramped off the PSAMS servo and the requested voltage.  This procedure is different since the addition of the PSAMS servo; the screenshot shows how to get to the screens to ramp that off.

We first turned off the servo input, then ramped its gain from 1 to 0 with a 30 second ramp.  Then, with a 100 second ramp, we turned off the offset for the requested voltage.  This needs to be done for ZM4, ZM5, and ZM2.
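
A minimal sketch of that sequence using standard filter-module EPICS records (GAIN/TRAMP/OFFSET); the channel names below are placeholders, not the actual PSAMS paths:

# Hypothetical sketch -- filter bank names are placeholders.
import time
from ezca import Ezca

ezca = Ezca()
for zm in ('ZM4', 'ZM5', 'ZM2'):
    servo = f'SQZ-{zm}_PSAMS_SERVO'      # placeholder servo filter bank
    request = f'SQZ-{zm}_PSAMS_REQUEST'  # placeholder requested-voltage bank

    ezca.switch(servo, 'INPUT', 'OFF')   # 1. turn off the servo input
    ezca[servo + '_TRAMP'] = 30
    ezca[servo + '_GAIN'] = 0            # 2. ramp servo gain 1 -> 0 over 30 s
    time.sleep(30)

    ezca[request + '_TRAMP'] = 100
    ezca.switch(request, 'OFFSET', 'OFF')  # 3. ramp off the requested-voltage offset over 100 s
    time.sleep(100)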

We also set the guardians to down in prep for losing high voltage to the OPO, SHG, and PMC.

We didn't do the pico controllers yet since Marc and Daniel are debugging them. 

Images attached to this report
H1 TCS
matthewrichard.todd@LIGO.ORG - posted 09:53, Monday 08 December 2025 (88413)
Weekend HWS transients

M. Todd, S. Dwyer, J. Driggers


Summary

Measurement | Value [uD/W] | Notes
Ring Heater Coupling to Substrate Lens | -21.0 +/- 0.3 | Relative to modeled coupling, this is 79 +/- 1 % efficiency, compared to the predicted 75-80% efficiency from arm cavity measurements. Modeled couplings assuming 100% efficiency report around -26.5 uD/W.
SR3 Heater Coupling to Substrate Lens | ITMX HWS: 4.7 +/- 0.2; ITMY HWS: 4.6 +/- 0.1 | The ITMX HWS seems to be noisier than ITMY, but both give very similar mean estimates. The estimate from Gouy phase measurements is around 5.0 uD/W.

We turned on inverse ring heater filters to speed up the heating for those (using nominal values for the settings). Because of the weekend mayhem with the earthquakes we did not get a SUPER long HWS transient measuring the full response, but we could get a pretty good estimate of the ring heater effect on the substrate thermal lens without any other heating in the measurement. This is good to compare to modeled values that we have.

I also turned on the SR3 heater on Sunday to get estimates of the coupling of SR3 heating to the defocus of SR3. To do this, Jenne helped me untrip a lot of the SUS watchdogs for the optics relevant to the HWS. About 3 hours after the SR3 heater was turned on, the watchdogs must have tripped again and misaligned the optics. But fortunately we got the cooldown data for this as well and it's all pretty consistent. These measurements suggest a 4.7 uD/W coupling for SR3 heating, which is very similar to the modeled coupling from Gouy phase measurements at different SR3 heater powers.
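
As a minimal sketch of how a uD/W coupling like this falls out of a step test (entirely made-up numbers, just to show the fit of settled defocus vs. applied heater power):

import numpy as np

heater_power_W = np.array([0.0, 0.5, 1.0])       # hypothetical SR3 heater power steps
settled_defocus_uD = np.array([0.1, 2.5, 4.8])   # hypothetical settled HWS defocus values

coupling, offset = np.polyfit(heater_power_W, settled_defocus_uD, 1)
print(f"coupling ~ {coupling:.1f} uD/W")          # ~4.7 uD/W in this made-up example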

Overall, while these measurements provide more pieces to the puzzle, they make previous analyses a bit more confusing, requiring some more thought (as usual).

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 07:34, Monday 08 December 2025 - last comment - 08:38, Monday 08 December 2025(88409)
OPS Monday Day shift start

TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering  // Earthquake
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: LARGE_EQ

QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 07:52, Monday 08 December 2025 (88410)CDS

When I try to run an ndscope I get the following error:

qt.qpa.xcb: could not connect to display 
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.

Aborted

*I think I was trying to launch ndscope in a terminal where I was sshed into a different computer/env*

jonathan.hanks@LIGO.ORG - 08:38, Monday 08 December 2025 (88412)

As a follow-up note: ndscope is working for Ryan. We are not entirely sure what the issue was; maybe the console was in a strange conda environment. When we looked at it, ndscope started fine.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 18:50, Tuesday 04 November 2025 - last comment - 17:46, Monday 08 December 2025(87966)
Kobelco Compressor and Dry Air Skid Functionality Test

I ran the dry air system through its quarterly test (FAMIS task).  The system was started around 8:20 am local time, and turned off by 11:15 am.  The system achieved a dew point of -50 °F; see the attached photo taken towards the end of the test.  We noted that we may be running low on oil at the Kobelco compressor, and are checking with the vendor on this.  The picture of the oil level is while the system is off.

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 17:46, Monday 08 December 2025 (88431)VE

(Jordan, Gerardo)

We added some oil to the Kobelco reservoir with the compressor off.  We added about 1/2 gallon to get the level up to the half mark; see the attached photo of the level, taken after the compressor had been running for 24+ hours.  The level is now nominal.

Images attached to this comment