Reports until 08:08, Tuesday 08 April 2025
LHO General
tyler.guidry@LIGO.ORG - posted 08:08, Tuesday 08 April 2025 - last comment - 11:54, Tuesday 08 April 2025(83803)
Water Leak Repair
C&E Trenching arrived yesterday morning and began work on uncovering/locating the potable water leak. Ultimately, the leak was in very close proximity to where I first noticed the persistently wet pavement. It was directly beneath the secondary high-voltage lines that feed the chiller yard, among other things. The cause of the leak ended up being a small crack with a rock embedded in it, near a smashed/deformed section of pipe. There is no clear explanation as to how the pipe was damaged in the first place; the trenching team suggested that it may have been damaged just before or during installation. Interestingly enough, this section of pipe was found to have two prior repairs made to it, as seen in the photo with the two pipe couplers right next to each other. Based on the date printed on the pipe, these repairs were made during construction sometime in '96.

Important note: with the potable supply valved out at the water room, I replenished the holding tank to a level of 57.8" as read at FMCS. Once the pipe repair had been made, I reintroduced the potable supply and the tank level fell to 52.5". In conjunction with this, the Magmeter gauge read xx6171 gallons prior to the line repair and xx6460 after it. However, I don't have much confidence in the Magmeter readout in this scenario, as the line turbulence causes some egregious (beyond -250 gpm) reverse-flow readings while the line recharges.
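
For reference, a quick back-of-envelope comparison of the two readouts quoted above (my own illustration; the tank cross-section isn't given here, so the level drop can't be converted to gallons for a direct comparison):

    # Numbers quoted above; totalizer digits "xx6171"/"xx6460" treated as gallons.
    magmeter_before = 6171          # gallons, prior to line repair
    magmeter_after  = 6460          # gallons, post repair
    print(magmeter_after - magmeter_before)       # 289 gallons registered by the meter

    tank_before = 57.8              # holding-tank level at FMCS [inches]
    tank_after  = 52.5              # level after reintroducing the potable supply [inches]
    print(round(tank_before - tank_after, 1))     # 5.3 inch drop; needs tank geometry to convert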

After the repair, and keeping staff water usage suspended, I held the line pressure at roughly 80 psi for 30 minutes or so and observed no leaks. There were also no drops in system pressure, nor was there any flow readout at the Magmeter gauge - both important and reassuring improvements.

R. McCarthy, T. Guidry
Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 11:54, Tuesday 08 April 2025 (83813)EPO

Tagging EPO for the photos!

H1 SEI
oli.patane@LIGO.ORG - posted 07:57, Tuesday 08 April 2025 (83802)
ISI CPS Noise Spectra Check Weekly FAMIS

Closes FAMIS#26038, last checked 83718

Just like last week, everything is elevated due to the vent.

Non-image files attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:40, Tuesday 08 April 2025 (83801)
Ops Day Shift Start

TITLE: 04/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 18mph Gusts, 15mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

Plans for today:

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:33, Tuesday 08 April 2025 (83800)
Workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

LHO VE
jordan.vanosky@LIGO.ORG - posted 17:14, Monday 07 April 2025 (83799)
Installation of 1000AMU RGA Assembly

Travis, Melina, Jordan

Today we were able to install the assembly for the new 1000 AMU RGA (D2300253). It is located on the Y-beam manifold near BSC8.

The assembly had been previously built in the staging building, and the final 8" CF connection to the gate valve on the YBM was made. A temporary support stand is currently attached; a more permanent support and bump-protection cage will be installed later.

The assembly was helium leak checked at all joints, with a He dwell of ~5 seconds per joint. No He signal was seen above the HLD background of ~1.7E-10 Torr-l/s.

The assembly will continue to pump down overnight on a turbo pump, then will transition to the assembly's 10 l/s diode ion pump.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:35, Monday 07 April 2025 (83796)
Ops Day Shift End

TITLE: 04/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:

There was some good vent progress made today, but a good chunk of the day was spent recovering all systems from last night's power outage.

Vent progress made today:
VAC - 1000 AMU RGA WORK - YBT
HAM1 - Contam Control checks
COREY - HAM1 Pictures
ISC - MC Flashing Peek to confirm MC and PSL PZT aligned
FAC - Clean BSC8 door and area
HAM1 - COAX Cable length measurement (disconnect at component)

Overview of biggest issues caused by the outage:
- Power supply for the h1susb123 rack SUS-C5 tripped off (83787)
    - ITM and BS OSEM counts were all sitting very low, around 3000. The power supply was flipped back on without turning the coil drivers off, causing it to trip again. OSEM counts returned to normal for ITMX and BS, but the ITMY coil driver filter banks for M0, L1, and L2 were flashing ROCKER SWITCH DEATH, which means that all OSEM values for that stage read less than 5 counts (see the sketch after this list).
    - Initially remedied by turning coil drivers and power supply off, then power supply back on and then turning coil drivers back on one at a time.
    - 17:46 ROCKER SWITCH DEATH started flashing again for ITMY. ITMs and BS all taken to SAFE, power supply for SUS C5 swapped. Checked before swapping - power supply was still on and had not tripped.
    - Tried power cycling the satamp - didn't work
    - SatAmp replacement worked
- ISI ITMY corner1 interface chassis power regulator was blown (83781)
    - Swapped with a spare
- Timing error for Corner A fanouts, digital port 13, analog port 14  (83777)
    - Power cycled timing comparator and it came back fine
- Timing error for Master, digital port 8, analog port 9 -
- PSL down due to Beckhoff chassis being off (83785)
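
As an aside, here is a minimal sketch of the condition the ROCKER SWITCH DEATH indicator encodes (all of a stage's OSEM readbacks below ~5 counts), as referenced above. This is my own illustration using pyepics; the OSEM channel names follow the usual OSEMINF INMON pattern but are assumptions, and the real logic lives in the MEDM screens rather than in a script like this.

    # Sketch only: flag SUS stages whose OSEM readbacks are all near zero,
    # i.e. the "ROCKER SWITCH DEATH" condition described above.
    from epics import caget   # pyepics

    STAGE_OSEMS = {
        "M0": ["F1", "F2", "F3", "LF", "RT", "SD"],   # assumed top-mass OSEM names
        "L1": ["UL", "UR", "LL", "LR"],
        "L2": ["UL", "UR", "LL", "LR"],
    }

    def dead_stages(optic="ITMY", threshold=5.0):
        """Return stages of `optic` whose OSEM INMONs all read below `threshold` counts."""
        dead = []
        for stage, osems in STAGE_OSEMS.items():
            counts = [caget(f"H1:SUS-{optic}_{stage}_OSEMINF_{o}_INMON") for o in osems]
            if all(c is not None and c < threshold for c in counts):
                dead.append(stage)
        return dead

    print(dead_stages("ITMY"))   # e.g. ['M0', 'L1', 'L2'] while the rack is unpowered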

LOG:

Start Time System Name Location Laser_Haz Task Time End
14:30 FAC Kim, Nelly EY n Tech clean 15:23
15:26 EE Jeff CER n Checking power supply 15:46
15:41   Camilla CER n Checking on Jeff 15:46
15:44 VAC Travis, Jordan LVEA n BSC8 annulus vent 16:26
15:47   Jeff, Jim CER n Checking racks again 15:58
15:50 FAC Kim, Nelly LVEA n Tech clean 19:51
16:06 PSL RyanS LVEA n Power cycling PSL chassis 16:51
16:09 EE Richard CER n Turning power supply back on 16:09
16:11   Jeff, Richard CER n Cycling coil drivers 16:23
16:20 FAC C+U water tower n Excavating to find leak 18:20
16:29 EE Marc CER n ISI chassis work 16:44
16:46 EE Richard Power supply room n   16:50
16:47 PSL RyanS, Jason LVEA n PSL ion pump power cycling 17:23
16:52 ISC Camilla LVEA n Cleaning IOT2 17:54
16:58 VAC Travis, Jordan LVEA n Prep for RGA install,  includes craning 19:04
16:59 AOS Corey LVEA n Photos + contamination control 18:33
17:07 EE Jeff, Marc CER n Power cycling timing comparator 17:21
17:17 EPO Betsy LVEA N Helping Corey 18:17
16:40 FAC Tony, Randy LVEA n Breaking BSC8 bolts 17:23
17:31 FAC Chris EY, EX n FAMIS tasks 18:38
17:45 EE Jeff, Marc CER n Replacing SUS C5 power supply 18:40
17:47 ISI Jim Ends n Restoring HEPI and checking BRS lasers 18:29
18:00 CDS Tony, Erik EX, EY n Dolphin issues, wifi issues 19:04
18:37 SEI Jim HAM8, EY, EX n EY HEPI, EX ISI coil drivers 21:01
18:37 ISC Keita LVEA n HAM1 RF work 19:30
18:40 EE Jeff, Marc CER, LVEA n Power cycling satamp 19:21
18:42 ISC Camilla LVEA n HAM1 RF cables 20:30
18:48   Betsy LVEA n Checking on HAM1 work 20:30
19:01 CDS Jonathan CER n Moving a switch 21:07
19:05 CDS Patrick EX n Beckhoff for BRS 19:54
19:44 OPO Corey LVEA N Grab camera equipment by ham1 garb room 19:48
20:13 ISC Keita LVEA N HAM1 RF work cont 20:31
20:29 VAC Jordan, Melina LVEA n RGA work 23:30
20:43 EE Marc & Daniel LVEA N Measure cables HAM1 22:04
21:23 CDS Erik MY, EY, EX n Weather sensor work at MY/EY, HWS work at EY/EX 22:54
21:25 SEI Jim LVEA YES HAM1 HEPI work 23:05
21:26 PSL Keita LVEA n Work on PSL FSS issue 22:10
21:41 PSL Jason, RyanS LVEA n Staring at PSL racks 21:54
21:47   Camilla LVEA YES Transitioning to LASER HAZARD 22:02
21:54 PSL Jason LVEA YES Looking at lockout box 22:02
22:13 (⌐■_■) LVEA IS LASER HAZARD LVEA YES LVEA IS LASER HAZARD 23:24
22:24 ISC Camilla, Betsy, Keita, Elenna LVEA YES Opening lightpipe & opening HAM2 viewport 23:11
22:25 IAS Jason LVEA YES Setting up FARO for tomorrow 22:33
22:50   Richard LVEA YES Checking that people are working safely 23:05
23:06 CDS Erik EY n Fixing dust monitor 23:32
23:13 ISC Camilla LVEA YES Going back to laser safe 23:22
H1 IOO (PSL)
elenna.capote@LIGO.ORG - posted 16:21, Monday 07 April 2025 - last comment - 11:09, Monday 14 April 2025(83794)
Found PSL beam through IMC trans and refl viewports on HAM2

[Betsy, Keita, Camilla, Richard M., Elenna]

After the power outage, we wanted to get some confirmation of PZT pointing out of the PSL. Betsy, Keita and Camilla went in the cleanroom with the IR viewer and indicator cards and took the covers off the IMC refl and trans viewports on HAM2. Keita found the beam out of the IMC REFL viewport, and I adjusted the IMC PZT in pitch and yaw under direction from Keita. I accidentally moved the pitch slider a bit at the beginning due to user error. Then, we took large steps of 1000 or 2000 counts to move the offsets in pitch and yaw.

Start values: pitch: 22721, yaw: 5488

End values: pitch: 21192, yaw: 6488

Then, Betsy found the beam out of the IMC TRANS viewport and Richard marked the beam spot approximately on the outside of the cleanroom curtain with a red sharpie. This is super rough but gives us a general location of the beam. We think this is a good enough pointing recovery to proceed with vent activities.

 

Comments related to this report
keita.kawabe@LIGO.ORG - 16:43, Monday 07 April 2025 (83797)

During this work, the PSL waveplate rotator was set to 200 mW and then de-energized. I haven't re-energized it, as we won't need higher power for a long time.

PSL laser pipe was temporarily opened for this task and was closed after.

Attached video shows the flashing of the beam in the MC TRANS path. The alignment from the PSL through the IMC is not crazy. Any further refinement should be done after we start pumping down the corner, before we install the HAM1 optics.

Non-image files attached to this comment
ryan.short@LIGO.ORG - 11:09, Monday 14 April 2025 (83899)

Accepting PZT offsets in h1ascimc SAFE.snap table.

Images attached to this comment
H1 CAL (CAL)
joseph.betzwieser@LIGO.ORG - posted 15:39, Monday 07 April 2025 (83792)
Pydarm code updated at LHO (and LLO)
[Joe B, Jamie R]
On Friday, April 4th, Jamie updated the default pydarm install to version 20250404 (the prior version was 20250227). This includes a bug fix that allows proper regeneration of reports utilizing the high-frequency roaming line data we've been collecting all run, to help with uncertainty estimates above 1 kHz.

This is the code which lives in /ligo/groups/cal/conda/, and is the default one you get when running pydarm at the command line.

The update instructions Jamie followed can be found here: https://git.ligo.org/Calibration/pydarm/-/blob/master/DEPLOY.md?ref_type=heads
H1 TCS
camilla.compton@LIGO.ORG - posted 15:14, Monday 07 April 2025 (83788)
TCS Chillers turned back on after Power Outage

Elenna, Camilla. TCS Chillers turned back on after Power Outage

H1 SUS (CDS, SYS)
jeffrey.kissel@LIGO.ORG - posted 15:03, Monday 07 April 2025 - last comment - 13:20, Tuesday 08 April 2025(83787)
Recovery from 2025-04-06 Power Outage: +18V DC Power Supply to SUS-C5 ITMY/ITMX/BS Rack Trips, ITMY PUM OSEM SatAmp Fails; Replaced Both +/-18 V Power Supplies and Replaced ITMY PUM OSEM SatAmp
J. Kissel, R. McCarthy, M. Pirello, O. Patane, D. Barker, B. Weaver
2025-04-06 Power outage: LHO:83753

Among the things that did not recover nicely from the 2025-04-06 power outage was the +18V DC power supply to the SUS ITMY / ITMX / BS rack, SUS-C5. The power supply lives in VDC-C1 U23-U21 (left-hand side if staring at the rack from the front); see D2300167. More details to come, but we replaced both +/-18V power supplies, and the SUS ITMY PUM OSEM sat amp did not survive the power-up, so we replaced that too.

Took out 
    +18V Power Supply S1300278
    -18V Power Supply S1300295
    ITMY PUM SatAmp S1100122

Replaced with
    +18V Power Supply S1201919
    -18V Power Supply S1201915
    ITMY PUM SatAmp S1000227
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:20, Tuesday 08 April 2025 (83810)CDS, SUS
And now... the rest of the story.

Upon recovery of the suspensions yesterday, we noticed that all the top-mass OSEM sensor values for ITMX, ITMY, and BS were low, *all* scattered from +2000 to +6000 [cts]. They should typically be sitting at ~half the ADC range, or ~15000 [cts]; see the ~5 day trend of the top-mass (main chain, M0) OSEMs for H1SUSBS M1, H1SUSITMX M0, and H1SUSITMY M0. The trends are labeled with all that has happened in the past 5 days. The corner was vented on Apr 4 / Friday, which changes the physical position of the suspensions, and the OSEMs see it. At the power outage on Apr 6, you can see a much different, much more drastic change.

Investigations are rapid fire during these power outages, with ideas and guesses about what's wrong flying everywhere. The one that ended up bearing fruit was Dave's comment that it looked like "they've lost a [+/-18V differential voltage] rail or something" -- he was thinking about the old 2011 problem LLO:1857 where
   - There's a SCSI cable that connects the SCSI ports of a given AA chassis to the SCSI port of the corresponding ADC adapter card on the back of any IO chassis
   - The ADC adapter card's port has very small male pins that can be easily bent if one's not careful during the connection of the cable.
   - Sometimes, these male pins get bent in such a way that the (rather sharp) pin stabs into the plastic of the connector, rather than into the conductive socket of the cable. Thus, (typically) one leg of one differential channel is floating, and this manifests digitally as an *exact* -4300 ct (negative 4300 ct) offset that is stable and not noisy. 
   - (As a side note, this issue was insidious: once one male pin on the ADC adapter card was bent and mashed into the SCSI cable, that *SCSI* cable was now molded to the *bent* pin, and plugging it into *other* adapter cards would bend previously unbent pins, *propagating* the problem.) 

Obviously this wasn't happening to *all* the OSEMs on three suspensions without anyone touching any cables, but it gave us enough clue to go out to the racks.
Another major clue -- the signal processing electronics for ITMX, ITMY and BS are all in the same rack -- SUS-C5 in the CER.
Upon visiting the racks, we found, indeed, that all the chassis in SUS-C5 -- the coil drivers, TOP (D1001782), UIM (D0902668) and PUM (D0902668) -- had their "-15 V" power supply indicator light OFF; see FRONT and BACK pictures of SUS-C5.

Remember several quirks of the system that helped us realize what happened (looking at the last page of the ITM/BS wiring diagram, D1100022, as your visual aid):
(1) For aLIGO "UK" suspensions -- the OSEM *sensors'* PD satellite amplifiers (sat amps, located out in the LVEA within the biergarten) that live out in the LVEA field racks are powered by the coil drivers to which their OSEM *coil actuators* are connected.
So, when the SUS-C5 coil drivers lost a differential power rail, both the coils and the sensors of the OSEMs behaved strangely (as typical with LIGO differential electronics: not "completely off", just "what the heck is that?"). 
(2) Just as an extra fun gotcha, all of the UK coil drivers' back panels are *labeled incorrectly*, so that the +15V supply voltage indicator LED is labeled "-15" and the -15V supply is labeled "+15".
So, this is why the obviously positive 18V coming from the rack's power rail is off, but the "+15" indicator light is on and happy.  #facepalm
(3) The AA chassis and binary IO for these SUS live in the adjacent SUS-C6 rack; its +/-18V DC power supplies (separate and different from the supplies for the SUS-C5 rack) came up fine without any over-current trip. Similarly, the IO chassis, which *do* live in SUS-C5, are powered by a separate single-leg +24V from another DC power supply, which also came up fine without an over-current trip.
So, we had a totally normal digital readback of the odd electronics behavior.
(4) Also note that, at this point, we had not yet untripped the Independent Software Watchdog, and the QUADs' Hardware Watchdog had completely tripped. 
So, if you "turn on the damping loops" it looks like nothing's wrong; at first glance, it might *look* like there's drive going out to the suspensions because you see live and moving MASTER_OUT channels and USER MODEL DAC output, missing that there's no IOP MODEL DAC output. and it might *look* like the suspensions are moving as a result because there are some non-zero signals coming into on OSEMINF banks and they're moving around, so that means the damping loops are doing what they do and blindly taking this sensor signal, filtering it per normal, and sending a control signal out.

Oi.

So, anyways, back to the racks -- while *I* got distracted inventorying *all* the racks to see what else had failed, and mapping all the blinking lights in *all* the DC power supplies (which, I learned, are a red herring), Richard flipped on the +18V power supply in VDC-C1 U23, identifying quickly that it had over-current tripped when the site regained power.
See the "before" picture of VDC-C1 U23 what it looks like tripped -- the "left" (in this "front of the rack" view) power supply's power switch on the lower left is in the OFF position, and voltage and current read zero.

Turning the +18V power supply on *briefly* restored *all* OSEM readbacks, for a few minutes.
And then the same supply, VDC-C1 U23, over-current tripped again. 
So Richard and I turned off all the coil drivers in SUS-R5 via their rocker switches, turned on the VDC-C1 U23 left +18V power supply again, then one-by-one powered on the coil drivers in SUS-C5 with Richard watching the current draw on the VDC-C1 U23 power supply.

Interesting for later: when we turned on the ITMY PUM driver, he shouted down "whup! Saw that one!"
With this slow turn on, the power supply did not trip and power to the SUS-R5 held, so we left it ...for a while.
Richard and I identified that this rack's +18V and -18V power supplies had *not* yet had their fans upgraded per IIET:33728.
Given that it was functioning again, and having other fish to fry, we elected not to replace the power supplies *yet*.

Then, ~10-15 minutes later, the same supply, VDC-C1 U23, over-current tripped yet again. 
So, Marc and I went forward with replacing the power supplies.
Before replacement, with the power to all the SUS-C5 rack's coil drivers off again, we measured the output voltage of both supplies via DVM: +19.35 and -18.7 [V_DC].
Then we turned off both former power supplies and swapped in the replacements (see serial numbers quoted in the main aLOG); see "after" picture.

Not knowing better we set the supplies to output to a symmetric +/-18.71 [V_DC] as measured by DVM. 
Upon initial power turn on with no SUS-R5 coil drivers on, we measured the voltage from an unused 3W3 power spigot of the SUS-R5 +/-18 V power rail, and measured a balanced +/-18.6 [V_DC].
Similar to Richard and I earlier, I individually turned on each coil driver at SUS-C5 while Marc watched the current draw at the VDC-C1 rack.
Again, once we got to the ITMY PUM driver we saw a large jump in current draw (this is one of the "important for later" items).
I remeasured the SUS-R5 power rail, and the voltage on positive leg had dropped to +18.06 [V_DC].
So, we slowly increased the requested voltage from the power supply to achieve +18.5 [V_DC] again at the SUS-R5 power rail. 
This required 19.34 [V_DC] at the power supply.
Welp -- I guess whoever had set the +18V power supply to +19.35 [V_DC] some time in the past had come across this issue before.
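
For the record, the implied drop between the supply and the rail under load, from the numbers above (my own arithmetic; the current draw isn't logged here, so no cable-resistance estimate is attempted):

    v_supply = 19.34    # [V] requested at the power supply
    v_rail   = 18.50    # [V] measured at the SUS-R5 power rail under load
    print(round(v_supply - v_rail, 2))   # ~0.84 V drop across the distribution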

Finishing up at the supplies, we restored power to all the remaining coil drivers and watched it for another bit. 
No more over-current trips. 
GOOD! 

... but we're not done!

... upon returning to the ITMY MEDM overview screen on a CDS laptop still standing by the rack, we saw the "ROCKER SWITCH DEATH" or "COIL DRIVER DEATH" warning lights randomly and quickly flashing around *both* the L1 UIM and the L2 PUM COILOUTFs. Oli reported the same thing from the control room. However, both of those coil drivers' power rail lights looked fine and the rocker switches had not tripped. Remember that these indicator lights are actually watching the OSEM sensor readbacks: if the sensors read within some small threshold around zero, then the warning light flashes. This is a crude remote indicator of whether the coil driver itself has over-current tripped, because, again, the sensors are powered by the coil driver, so if the sensors read zero then there's a good chance the coil driver is off.
But in this case we're staring at the coil driver and it reports good health and no rocker switch over-current trip.
However, we could see the L2 PUM OSEMs rapidly glitching between a "normal signal" of ~15000 [cts] and a "noisy zero" around 0 [ct] -- hence the red, erratic (and red herring) warning lights.

Richard's instinct was "maybe the sat amp has gone into oscillations," a la the 2015 problem solved by an ECR (see IIET:4628), and he suggested power cycling the sat amp. 
Of course, these UK sat amps are another design without a power switch, so a "power cycle" means disconnecting and reconnecting the cabling to/from the coil driver that powers it, at the sat amp. 
So, Marc and I headed out to SUS-R5 in the biergarten, and found that only ITMY PUM satamp had all 4 channels' fault lights on and red. See FAULT picture.
Powering off / powering on (unplugging, replugging) the sat amp did not resolve the fault lights nor the signal glitching.
We replaced the sat amp with an in-hand spare; the fault lights did NOT light up and the signals looked excellent. No noise, and the DC values were restored to their pre-power-outage values. See OK picture.

So, we're not sure *really* what the failure mode was for this sat amp, but (a) we suspect it was a victim of the current surges and unequal power rails over the course of re-powering the SUS-C5 rack, which contains the ITMY PUM coil driver that drew a lot of current upon power-up and which powers this sat amp (this is the other of the "important for later" items); and (b) we had a spare and it works, so we've moved on, with a post-mortem to come later. 

So -- for all that -- the short answer summary is as the main aLOG says:
- The VDC-C1 U23 "left" +18V DC power supply for the SUS-R5 rack (and for specifically the ITMX, ITMY, and BS coil drivers) over-current tripped several times over the course of power restoration, leading us to
- Replace both +18V and -18V power supplies that were already stressed and planned to be swapped in the fullness of time, and 
- We swapped a sat-amp that did not survive the current surges and unequal power rail turn-ons of the power outage recovery and subsequent investigations.

Oi!
Images attached to this comment
H1 CDS
patrick.thomas@LIGO.ORG - posted 14:26, Monday 07 April 2025 - last comment - 15:30, Monday 07 April 2025(83785)
Beckhoff recovery from power outage
There was a PSL Beckhoff chassis that needed to be powered on. There is an alog saying that I configured the PSL PLC and IOC to start automatically, so maybe this is what kept it from doing so?
I physically power cycled the BRS Beckhoff machine at end X. It was unreachable from remote desktop and in a bad frozen state when I connected to it from the KVM switch.
I started the end X NCAL PLC and IOC, the end X mains power monitoring PLC and IOC, and the corner station mains power monitoring PLC and IOC.
Comments related to this report
patrick.thomas@LIGO.ORG - 15:30, Monday 07 April 2025 (83791)
On the end X mains power monitoring Beckhoff machine, I had to enable the tcioc firewall profile for the private network as well.
H1 SUS
ryan.crouch@LIGO.ORG - posted 14:19, Monday 07 April 2025 - last comment - 09:47, Tuesday 08 April 2025(83780)
Power outage OSEM recovery / Offset check

Oli, Ibrahim, RyanC

We took a look at the OSEMs' current positions for the suspensions post power outage to make sure the offsets are still correct. The previously referenced "golden time" was GPS 1427541769 (the last DRMI lock before the vent). While we did compare against this time, we mainly set them back to where they were before the power outage.

Input:

IM1_P: 368.5 -> 396.1, IM1_Y: -382.7 -> -385.4

IM2_P: 558.0 -> 792.0, IM2_Y: -174.7 -> -175.7

IM3_P: -216.3 -> -207.7, IM3_Y: 334.0 -> 346.0

IM4_P: -52.4 -> -92.4, IM4_Y: 379.5 -> 122.5

SR2_P: -114.3 -> -117.6, SR2_Y: 255.2 -> 243.6

SRM_P: 2540.3 -> 2478.3, SRM_Y: -3809.1 -> -3825.1

SR3_P: 439.8 -> 442.4, SR3_Y: -137.7 -> -143.9

Output:

ZM6_P: 1408.7 -> 811.7, ZM6_Y: -260.1 -> -206.1

OM1_P: -70.9 -> -90.8, OM1_Y: 707.2 -> 704.5

OM2_P: -1475.8 -> -1445.0, OM2_Y: -141.2 -> -290.8

OM3_P: Didn't adjust, OM3_Y: Didn't adjust

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 15:19, Monday 07 April 2025 (83789)

Input:

PRM_P: -1620.8 -> -1672, (Changed by -51.2) PRM_Y: 579.6 -> -75.6 (Changed by -655.2)

PR2_P: 1555 -> 1409 (Changed by -146), PR2_Y: 2800.7 -> -280.8 (Changed by -3081.5)

PR3_P: -122.2 -> -151 (Changed by -28.8), PR3_Y: -100 -> -232.4 (Changed by -132.4)

MC1_P: 833.3 -> 833.3, MC1_Y: -2230.6 -> -2230.6 (No change)

MC2_P: 591.5 -> 591.5, MC2_Y: -580.4 -> -580.4 (No change)

MC3_P: -20.3 -> -20.3, MC3_Y: -2431.1 -> -2431.1 (No change)

Attached are plots showing the offsets (and their relevant M1 OSEMs) before and after the suspension was reliably aligned.

Images attached to this comment
oli.patane@LIGO.ORG - 15:45, Monday 07 April 2025 (83793)

Comparing QUADs, BS, and TMSs pointing before and after the outage. All had come back up from the power outage with a slightly different OPTICALIGN OFFSET for P and Y: when the systems came back up, the OPTICALIGN OFFSETS were read from the SUS SDF files, and those channels aren't monitored by SDF, so they had older offset values. I set the offset values back to what they were before the power outage, but still had to adjust them to get the top masses pointed back to where they were before the outage.

'Before' refers to the OPTICALIGN OFFSET values before the outage, and 'After' is what I changed those values to to get the driftmon channels to match where they were before the outage.

SUS    DOF   Before     After
ITMX   P     -114.5     -109.7
ITMX   Y      110.1      110.1 (no change)
ITMY   P        1.6        2.0
ITMY   Y      -17.9      -22.4
ETMX   P      -36.6      -45.7
ETMX   Y     -146.2     -153.7
ETMY   P      164.6      160.6
ETMY   Y      166.7      166.7 (no change)
BS     P       96.7       96.7 (no change)
BS     Y     -393.7     -393.7 (no change)
TMSX   P      -88.9      -89.9
TMSX   Y      -94.3      -92.3
TMSY   P       79.2       82.2
TMSY   Y     -261.9     -261.9 (no change)
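
For illustration, restoring one of these offsets amounts to writing the OPTICALIGN offset channel back to its pre-outage value before fine-tuning against the DRIFTMON readbacks. A minimal sketch with pyepics follows; the channel-name pattern is my assumption of the usual convention, and in practice these values are managed through guardian/SDF rather than raw caputs.

    # Sketch only: write top-mass OPTICALIGN offsets back to chosen values.
    from epics import caput

    restore = {                       # 'Before' column from the table above
        ("ITMX", "P"): -114.5,
        ("ETMX", "P"): -36.6,
    }

    for (optic, dof), value in restore.items():
        caput(f"H1:SUS-{optic}_M0_OPTICALIGN_{dof}_OFFSET", value)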

 

camilla.compton@LIGO.ORG - 08:37, Tuesday 08 April 2025 (83805)

HAM7/8 suspensions was brought back, details in 83774.

camilla.compton@LIGO.ORG - 09:35, Tuesday 08 April 2025 (83807)ISC

Betsy, Oli, Camilla

The complete story:
Ryan S had found a "good DRMI time" were all optics were in thier aligned state beofre HEPI lockign or venting: 1427541769 gps.
We had planned to go back to this time, but there was then some confusion about whether we wanted a time with the HEPIs locked or not and the team decided to go back to the time directly before the power outage, so that there is a typo is Ryan's original alog.
Everything actually got put back to a time before the power outage, when all suspensions were in the ALIGNED guardian state, (e.g. Sunday 6th April ~16:00UTC or times around then). However some of the ISI's were tripped at that time: HAM2, HAM7, BSC1, BSC2, BSC3, OPs overview attached. 
As we went and checked the IMC alignment yesterday 83794 (and saw flashes on IM4_TRANS suggesting MCs and IM1,2,3 are good), we do not want to move the alignment again so are staying with everything at this "before power outage time"...for better or worse.

We rechecked that everything was put back to this time, looking at the M1 DAMP channels for each optic, e.g. H1:SUS-OM1_M1_DAMP_{P,Y}_INMON:

  • OM1,2,3, ZM6
  • SRM,2,3
  • IM1,2,3,4
  • Quads, BS, TMS
    • Found that ETMX was 30urad off, same with ITMY Yaw, so Oli readjusted. All others were left as is.
  • Oli checked all other optics which were fine, within a few urad.

For each ndscope, 1st t-cursor on originally "good DRMI time" we had planned to go back to, second t-cursor on time we went back to.
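
As an aside, a minimal sketch of this kind of before/after channel check, fetching a few seconds of an M1 DAMP INMON channel around two reference times and comparing the means (my own illustration; the NDS server address is an assumption, and the pre-outage GPS is not quoted here, while the golden DRMI GPS is the one from the alog):

    # Sketch only: compare a suspension witness channel at two reference times.
    import nds2
    import numpy as np

    GOLDEN_DRMI_GPS = 1427541769      # "good DRMI time" quoted above

    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)   # assumed LHO NDS2 server

    def mean_at(channel, gps, span=8):
        buf = conn.fetch(gps, gps + span, [channel])[0]
        return np.mean(buf.data)

    chan = 'H1:SUS-OM1_M1_DAMP_P_INMON'
    print(chan, mean_at(chan, GOLDEN_DRMI_GPS))
    # Repeat with the pre-outage GPS (not recorded in this alog) for the "after" value.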

Images attached to this comment
oli.patane@LIGO.ORG - 09:47, Tuesday 08 April 2025 (83808)

I checked the rest of the optics and verified that they all got put back to pointing where they were before the power outage. I've also used one of the horizontal cursors to mark where the optic was at the "good DRMI time", and the other cursor marks where the optic is currently.

MCs

PRs

ZMs

FCs

Images attached to this comment
H1 SYS
betsy.weaver@LIGO.ORG - posted 14:04, Monday 07 April 2025 - last comment - 16:43, Monday 07 April 2025(83783)
VENT ACTIVITY STAT mid boot fests
Slow start on vent tasks today: between the lack of water onsite this morning and the particularly nasty power outage, which is making things slow to come back up, progress was limited, but we popped into HAM2 to keep moving on a few of the next steps.  Corey has captured detailed pictures of components and layouts, and Camilla and I have logged all of the LSC and ASC PD serial numbers and cable numbers.  We removed all connections at these PD boxes, and Daniel is out making the RF cable length measurements.  OPS are realigning all suspensions to a golden DRMI time they chose as a reference for any fall-back times.  Jason and Ryan are troubleshooting other PSL items that are misbehaving.

We are gearing up to take a side road on today's plan to look for flashes out of HAM2 to convince ourselves that the PSL PZT and alignment restoration of some suspensions are somewhat correct.
Comments related to this report
camilla.compton@LIGO.ORG - 16:43, Monday 07 April 2025 (83798)

Betsy and I also removed the septum plate VP cover to allow the PSL beam into HAM2 for alignment check work 83794, it was placed on HAM1. 

H1 ISC
keita.kawabe@LIGO.ORG - posted 13:58, Monday 07 April 2025 - last comment - 14:49, Monday 07 April 2025(83782)
WFS and LSC RF sensors were turned off

In preparation for disconnecting cables in HAM1, I turned off the following DC interface chassis:

LSC RF PD DC interface chassis in ISC R4 (REFL-A, POP-A among others), LSC RF PD DC interface chassis in ISC R1 (REFL-B among others),  and ASC WFS DC interface chassis in ISC R4 (REFL_A, REFL_B among others).

Daniel will perform TDR to measure RF in-vac cable length from outside.

Comments related to this report
keita.kawabe@LIGO.ORG - 14:49, Monday 07 April 2025 (83786)

Turning off the DC interface for LSC REFL_B somehow interfered with FSS locking. It turns out that the DC interface provides power (and maybe the fast readback of the DC output) for the FSS RFPD.

Since the point of powering down was to safely disconnect the DC in-vac cable from LSC REFL_B, and since the cable was safely disconnected, I restored the power and the FSS relocked right away.

H1 General
oli.patane@LIGO.ORG - posted 09:19, Monday 07 April 2025 - last comment - 14:16, Monday 07 April 2025(83769)
Status of power outage recovery

Came in to find all IFO systems down. Working through recovery now.

The SUSB123 power supply seems to have tripped off. ITM and BS OSEM counts were all sitting very low, around 3000. Once the power supply was flipped back on, OSEM counts returned to normal for ITMX and BS, but now the ITMY coil driver filter banks are flashing ROCKER SWITCH DEATH. Jeff and Richard are back in the CER cycling the coil drivers to hopefully fix that.

Also, power for the ITMY ISI is down and is being worked on.

Most vent work is currently on hold while we focus on getting systems back online.

Comments related to this report
oli.patane@LIGO.ORG - 09:50, Monday 07 April 2025 (83772)

Cycling the coil drivers worked to fix that issue with the ITMY coil drivers. They needed to turn the power back off, turn the connected chassis off, then turn the power back on and then each chassis back on one by one.

The ITMY ISI GS13 that failed was replaced, and work is still going on to bring the ITMY ISI back.

There are some timing errors that need to be corrected and a problem with DEV4 at EX.

camilla.compton@LIGO.ORG - 14:16, Monday 07 April 2025 (83774)SQZ

Once the ISI was back, Elenna and I brought all optics in HAM7/8 back to ALIGNED. Elenna put ZM4, FC1 back to before the power outage as they had changed ~200urad from SQZ/FC ASC signals being zeroed. Everything else (ZM1,2,3,5,OPO,FC2) was a <5-10 urad change so leaving as is.

H1 ISC
daniel.sigg@LIGO.ORG - posted 11:20, Tuesday 04 March 2025 - last comment - 16:15, Monday 07 April 2025(83155)
Delay Measurements of REFL/POP RF Detectors

Following T2500040-v1 the phase delays were measured for LSC REFL_A, LSC POP_A, ASC REFL_A, ASC REFL_B, and ASC POP_X. Results in the attached pdf.

Non-image files attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 16:15, Monday 07 April 2025 (83795)

TDR Measurements attached (Marc, Daniel)

Non-image files attached to this comment