H1 SUS
oli.patane@LIGO.ORG - posted 14:23, Tuesday 08 April 2025 - last comment - 15:32, Tuesday 08 April 2025(83818)
SR3 M1 SUS comparison between all DOFs

Jeff asked me to plot a comparison for SR3 M1 between all degrees of freedom, in vacuum versus in air. I've plotted the last two measurements taken for SR3 from last August at the end of the OFI vent: one measurement was taken in air, and the other in vacuum. The pressure for the in-vacuum measurement wasn't all the way down to our nominal, but as Jeff said in his alog at the time when we were running these measurements: "most of the molecules are out of the chamber that would contribute to buoyancy, so the SUS are at the position they will be in observing-level vacuum" (79513).
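For reference, this is roughly how such an air-vs-vacuum overlay can be made. The file names, column layout, and DOF pair below are illustrative assumptions (the real measurements are template exports), so treat this as a sketch rather than the actual script used:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical ASCII exports of one SR3 M1 transfer function (drive L -> response P):
    # column 0 = frequency [Hz], column 1 = magnitude.
    f_air, mag_air = np.loadtxt("sr3_m1_L2P_in_air.txt", unpack=True)
    f_vac, mag_vac = np.loadtxt("sr3_m1_L2P_in_vacuum.txt", unpack=True)

    fig, ax = plt.subplots()
    ax.loglog(f_air, mag_air, label="in air (Aug 2024)")
    ax.loglog(f_vac, mag_vac, label="in vacuum (Aug 2024)")
    ax.set_xlabel("Frequency [Hz]")
    ax.set_ylabel("|TF| magnitude, drive L -> response P")
    ax.set_title("SR3 M1 cross coupling: air vs. vacuum")
    ax.legend()
    fig.savefig("sr3_m1_L2P_air_vs_vac.png")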

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:32, Tuesday 08 April 2025 (83819)CSWG, SEI, SYS
Calling out the "interesting" off-diagonal elements:
                    D R I V E   D O F
            L     T     V     R     P     Y

 R      L   --    nc    nc    meh   eand  YI
 E      T   nc    --    YI    eand  nc    meh
 S      V   meh   YI    --    meh   nc    YI
 P      R   VI    esVI  VI    --    YI    VI
        P   esVI  VI    YI    meh   --    YI
 D O F  Y   YI    nc    nc    nc    nc    --

Here's the legend to the matrix, in order of "interesting":
  VI = Very Interesting (and unmodeled); very different between vac and air.
esVI = Modeled, but Still Very Interesting; very different between vac and air.
  YI = Yes, Interesting. DC response magnitude is a bit different between vac and air, but not by much, and all the resonances show up at roughly the same magnitude.
 meh = The resonant structure is different in magnitude, but probably just a difference in measurement coherence.
eand = The cross coupling is expected, and not different between air and vac.
  nc = Not Different (and unmodeled). The cross-coupling is there, but it doesn't change from air to vac.
I've bolded everything above "meh" to help guide the eye.

Recapping in a different way, because the plots are merged in a really funky order,
  VI = L to R (pg 14), 
       T to P (pg 22),
       Y to R (pg 33)

esVI = T to R (pg 16)
       L to P (pg 20)

  YI = L to Y (pg 28), Y to L (pg 27),
       T to V (pg 12), V to T (pg 11),  
       V to P (pg 24),
       P to R (pg 25), 
       Y to V (pg 31),
       Y to P (pg 35)


What a mess! 
The matrix of interesting changes is NOT symmetric across the diagonal
The matrix has unmodeled cross-coupling that *changes* between air and vac
For the elements that are supposed to be there (like L to P / P to L and T to R / R to T), the cross coupling is different between air and vacuum.
For some elements, the cross-coupling is *dramatically worse* at *vac* than it is at air.

Why is there yaw to roll coupling, and why is it changing between air and vacuum??

There's clearly more going on here than just OSEM sensor miscalibration that the Stanford team found with PR3 data in LHO:83605. These measurements are a mere 8 days apart!

The plan *was* to use SR3 as a proxy during the vent to test out the OSEM estimator algorithm they were using to improve yaw, but ... with this much difference between air and vac, I'm not so sure the in-air SR3 measurements can inform an estimator to be used at vacuum.
H1 General (EPO)
corey.gray@LIGO.ORG - posted 13:04, Tuesday 08 April 2025 - last comment - 13:21, Tuesday 08 April 2025(83790)
HAM1 BEFORE SEI De-install / ISI Install Photos + Contamination Control Tasks

HAM1 Before Photos:  (HAM1 chamber open just under 90min for this activity)

This morning, before the deinstall activities began, I took the opportunity to photo document the HAM1 optical layout.  Keita requested I take photos to record the layout wrt the iLIGO bolt pattern, because rough alignment of optical components on the new SEI ISI for HAM1 will be done utilizing the bolt patterns of the Optics Tables; so I took a few more photos than normal (top view and angled, with a focus on the REFL path).  I took large photos with the Canon 60D DSLR camera as well as my camera phone.

The photos are being populated in this Google Drive folder:  https://drive.google.com/drive/folders/1yDKp7aByA_TYJ12c8j8BnZM_pd1Q2DBZ?usp=sharing

Each photo is named with reference to an updated layout Camilla Compton made which labels all the beam dumps, but I also had to use an older layout to preserve naming, since the layout on HAM1 currently looks like D1000313v16 (which is also referenced for naming the photos).

The above folder has the Canon photos, and I'll be adding the camera phone images next.

Contamination Control Notes:

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:21, Tuesday 08 April 2025 (83816)ISC, SEI, SUS, SYS
Tagging ISC, SUS, SYS, and SEI. Rest in power HAM1 Stack!
LHO FMCS
eric.otterman@LIGO.ORG - posted 12:47, Tuesday 08 April 2025 (83815)
Mitsubishi A/C low ambient hood enabling
The Mitsubishi condensing units at the H2 and MSR have had their low ambient hoods enabled for the season. Due to the large ambient temperature swings that occur in Spring, this may cause less uniform temperature trending in the MSR until the overnight temperatures moderate. 
LHO FMCS
eric.otterman@LIGO.ORG - posted 12:42, Tuesday 08 April 2025 (83814)
FCES annual air handler maintenance
The annual air handler maintenance was done at the FCES this morning. No issues have been observed since. 
H1 CDS
david.barker@LIGO.ORG - posted 11:41, Tuesday 08 April 2025 (83812)
CDS Recovery Status

The DAQ's EDC is green again. Erik and Camilla started the HWS EX and EY IOCs which were the last needed to complete the set.

Daniel and Patrick confirmed that the two Beckhoff terminals lost at EX are the AOS baffle PDs, which are not immediately needed but will be needed to return to O4. For now I have "greened up" the CDS overview by configuring DEV4 to expect 125 terminals of 127 total.

To remind us that DEV4 is in a degraded state, and that DEV1 was degraded in Dec 2024 when it lost its illuminators, these are shown with a darker green block.

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 11:13, Tuesday 08 April 2025 (83811)
Remaining Beckhoff errors
Attached are screenshots showing the terminals that are currently in error.

The h1ecatc1 computer froze up part way through looking at these with remote desktop. I put a terminal on it and physically power cycled it.
Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:30, Tuesday 08 April 2025 (83809)
Tue CP1 Fill

Tue Apr 08 10:10:05 2025 INFO: Fill completed in 10min 1secs

 

Images attached to this report
LHO VE
jordan.vanosky@LIGO.ORG - posted 08:49, Tuesday 08 April 2025 (83806)
Morning Purge Air Checks

Morning inspection of Kobelco, water pump and drying towers completed, all nominal.

Purge air dewpoint prior to in chamber work, measured at HAM1 port of entry was -42 C. Measurement will be repeated at YBM prior to BSC8 -X door removal.

Images attached to this report
LHO General
tyler.guidry@LIGO.ORG - posted 08:08, Tuesday 08 April 2025 - last comment - 11:54, Tuesday 08 April 2025(83803)
Water Leak Repair
C&E Trenching arrived yesterday morning and began work on uncovering/locating the potable water leak. Ultimately, the leak was in very close proximity to where I first noticed the persistently wet pavement. It was directly beneath secondary high voltage lines that feed the chiller yard, among other things. The cause of the leak ended up being a small crack impregnated with a rock, near a smashed/deformed section of pipe. There is no clear explanation as to how the pipe was damaged in the first place. The trenching team suggested that the pipe may have been damaged just before/during installation. Interestingly enough, this section of pipe was found to have two prior repairs made to it, as seen in the photo with the two pipe couplers right next to each other. Based on the date printed on the pipe, these repairs were made during construction sometime in '96.

Important note: with the potable supply valved out at the water room, I replenished the holding tank to a level of 57.8" read at FMCS. Once the pipe repair had been made, I reintroduced the potable supply and the tank level fell to 52.5". In conjunction with this, the Magmeter gauge prior to line repair read xx6171 gallons. Post repair the gauge read xx6460. However, I don't have much confidence in the Magmeter gauge readout in this scenario, as the line turbulence causes some egregious (-250 gpm+) reverse flow readings while the line recharges.

After repair, and keeping staff water usage suspended, I held the line pressure at some 80psi for 30 minutes or so and observed no leaks. There were also no drops in system pressure nor was there any flow readout at the magmeter gauge - both important and reassuring improvements.

R. McCarthy T. Guidry
Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 11:54, Tuesday 08 April 2025 (83813)EPO

Tagging EPO for the photos!

H1 SEI
oli.patane@LIGO.ORG - posted 07:57, Tuesday 08 April 2025 (83802)
ISI CPS Noise Spectra Check Weekly FAMIS

Closes FAMIS#26038, last checked 83718

Just like last week, everything is elevated due to the vent.

Non-image files attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:40, Tuesday 08 April 2025 (83801)
Ops Day Shift Start

TITLE: 04/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 18mph Gusts, 15mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

Plans for today:

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:33, Tuesday 08 April 2025 (83800)
Workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

LHO VE
jordan.vanosky@LIGO.ORG - posted 17:14, Monday 07 April 2025 (83799)
Installation of 1000AMU RGA Assembly

Travis, Melina, Jordan

Today we were able to install the assembly for the new 1000 AMU RGA (D2300253). It is located on the Y-beam manifold near BSC8.

The assembly had been previously built in the staging building, and the final 8" CF connection to the gate valve on the YBM was made. There is a temporary support stand currently attached; a more permanent support and bump-protection cage will be installed later.

The assembly was helium leak checked at all joints, He dwell of ~5 seconds per joint. No He signal was seen above the HLD background ~1.7E-10 Torr-l/s.

The assembly will continue to pump down overnight on a turbo, then will transition to the assembly's 10 l/s diode ion pump.

Images attached to this report
H1 SUS (CDS, SYS)
jeffrey.kissel@LIGO.ORG - posted 15:03, Monday 07 April 2025 - last comment - 13:20, Tuesday 08 April 2025(83787)
Recovery from 2025-04-06 Power Outage: +18V DC Power Supply to SUS-C5 ITMY/ITMX/BS Rack Trips, ITMY PUM OSEM SatAmp Fails; Replaced Both +/-18 V Power Supplies and Replaced ITMY PUM OSEM SatAmp
J. Kissel, R. McCarthy, M. Pirello, O. Patane, D. Barker, B. Weaver
2025-04-06 Power outage: LHO:83753

Among the things that did not recover nicely from the 2025-04-06 power outage was the +18V DC power supply to the SUS ITMY / ITMX / BS rack, SUS-C5. The power supply lives in VDC-C1 U23-U21 (Left-Hand Side if staring at the rack from the front); see D2300167. More details to come, but we replaced both +/-18V power supplies, and the SUS ITMY PUM OSEM sat amp did not survive the power-up, so we replaced that too.

Took out 
    +18V Power Supply S1300278
    -18V Power Supply S1300295
    ITMY PUM SatAmp S1100122

Replaced with
    +18V Power Supply S1201919
    -18V Power Supply S1201915
    ITMY PUM SatAmp S1000227
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:20, Tuesday 08 April 2025 (83810)CDS, SUS
And now... the rest of the story.

Upon recovery of the suspensions yesterday, we noticed that all the top-mass OSEM sensor values for ITMX, ITMY, and BS were low, *all* scattered from +2000 to +6000 [cts]. They should typically be sitting at ~half the ADC range, or ~15000 [cts]; see the ~5 day trend of the top mass (main chain) OSEMs for H1SUSBS M1, H1SUSITMX M0, and H1SUSITMY M0. The trends are labeled with all that has happened in the past 5 days. The corner was vented on Apr 4 / Friday, so that changes the physical position of the suspensions, and the OSEMs see it. At the power outage on Apr 6, you can see a much different, much more drastic change.

Investigations are rapid fire during these power outages, with ideas and guesses for what's wrong flying everywhere. The one that ended up bearing fruit was Dave's remark that it looked like "they've lost a [+/-18V differential voltage] rail or something" -- he was thinking about the old 2011 problem LLO:1857, where
   - There's a SCSI cable that connects the SCSI ports of a given AA chassis to the SCSI port of the corresponding ADC adapter card on the back of any IO chassis.
   - The ADC adapter card's port has very small male pins that can be easily bent if one's not careful during the connection of the cable.
   - Sometimes, these male pins get bent in such a way that the (rather sharp) pin stabs into the plastic of the connector, rather than into the conductive socket of the cable. Thus, (typically) one leg of one differential channel is floating, and this manifests digitally as an *exact* -4300 ct (negative 4300 ct) offset that is stable and not noisy.
   - (As a side note, this issue was insidious: once one male pin on the ADC adapter card was bent and mashed into the SCSI cable, that *SCSI* cable was now molded to the *bent* pin, and plugging it in to *other* adapter cards would bend previously unbent pins, *propagating* the problem.)

Obviously this wasn't happening to *all* the OSEMs on three suspensions without anyone touching any cables, but it gave us enough of a clue to go out to the racks.
Another major clue -- the signal processing electronics for ITMX, ITMY and BS are all in the same rack -- SUS-C5 in the CER.
Upon visiting the racks, we found, indeed, that all the chassis in SUS-C5 -- the coil drivers, TOP (D1001782), UIM (D0902668) and PUM (D0902668) -- had their "-15 V" power supply indicator light OFF; see FRONT and BACK pictures of SUS-C5.

Remember several quirks of the system that help us realize what happened (looking at the last page of the ITM/BS wiring diagram, D1100022, as your visual aid):
(1) For aLIGO "UK" suspensions, the OSEM *sensors'* PD satellite amplifiers (sat amps, which live out in the LVEA field racks in the biergarten) are powered by the coil drivers to which their OSEM *coil actuators* are connected.
So, when the SUS-C5 coil drivers lost a differential power rail, that made both the coils and the sensors of the OSEMs behave strangely (as typical with LIGO differential electronics: not "completely off," just "what the heck is that?").
(2) Just as an extra fun gotcha, all of the UK coil drivers' back panels are *labeled incorrectly*, so the +15V supply voltage indicator LED is labeled "-15" and the -15V supply is labeled "+15".
So, this is why, even with the obviously positive 18V from the rack's power rail off, the "+15" indicator light is on and happy.  #facepalm
(3) The AA chassis and binary IO for these SUS live in the adjacent SUS-C6 rack; its + and - 18V DC power supplies (separate and different from the supplies for the SUS-C5 rack) came up fine without any over-current trip. Similarly, the IO chassis, which *do* live in SUS-C5, are powered by a separate single-leg +24V from another DC power supply, which also came up fine without an over-current trip.
So, we had a totally normal digital readback of the odd electronics behavior.
(4) Also note, at this point we had not yet untripped the Independent Software Watchdog, and the QUAD's Hardware Watchdog had completely tripped.
So, if you "turn on the damping loops" it looks like nothing's wrong. At first glance, it might *look* like there's drive going out to the suspensions, because you see live and moving MASTER_OUT channels and USER MODEL DAC output, missing that there's no IOP MODEL DAC output. And it might *look* like the suspensions are moving as a result, because there are some non-zero signals coming in on the OSEMINF banks and they're moving around; the damping loops then do what they do and blindly take this sensor signal, filter it per normal, and send a control signal out.

Oi.

So, anyways, back to the racks -- while *I* got distracted inventorying *all* the racks to see what else failed, and mapping all the blinking lights in *all* the DC power supplies (which, I learned, are a red herring) -- Richard flipped on the +18V power supply in VDC-C1 U23, identifying quickly that it had over-current-tripped when the site regained power.
See the "before" picture of VDC-C1 U23 what it looks like tripped -- the "left" (in this "front of the rack" view) power supply's power switch on the lower left is in the OFF position, and voltage and current read zero.

Turning the +18V power supply on *briefly* restored *all* OSEM readbacks, for a few minutes.
And then the same supply, VDC-C1 U23, over-current tripped again. 
So Richard and I turned off all the coil drivers in SUS-C5 via their rocker switches, turned on the VDC-C1 U23 left +18V power supply again, then one-by-one powered on the coil drivers in SUS-C5 with Richard watching the current draw on the VDC-C1 U23 power supply.

Interesting for later: when we turned on the ITMY PUM driver, he shouted down "whup! Saw that one!"
With this slow turn on, the power supply did not trip and power to the SUS-R5 held, so we left it ...for a while.
Richard and I identified that this rack's +18V and -18V power supplies had *not* yet had their fans upgraded per IIET:33728.
Given that it was functioning again, and having other fish to fry, we elected not to replace the power supplies *yet*.

Then ~10-15 minutes later, the same supply, VDC-C1 U23, over-current tripped again, again.
So, Marc and I went forward with replacing the power supplies.
Before replacement, with the power to all the SUS-C5 rack's coil drivers off again, we measured the output voltage of both supplies via DVM: +19.35 and -18.7 [V_DC].
Then we turned off both former power supplies and swapped in the replacements (see serial numbers quoted in the main aLOG); see "after" picture.

Not knowing better, we set the supplies to output a symmetric +/-18.71 [V_DC] as measured by DVM.
Upon initial power turn on with no SUS-R5 coil drivers on, we measured the voltage from an unused 3W3 power spigot of the SUS-R5 +/-18 V power rail, and measured a balanced +/-18.6 [V_DC].
Similar to Richard and I earlier, I individually turned on each coil driver at SUS-C5 while Marc watched the current draw at the VDC-C1 rack.
Again, once we got to the ITMY PUM driver we saw a large jump in current draw (this is one of the "important for later" items).
I remeasured the SUS-R5 power rail, and the voltage on positive leg had dropped to +18.06 [V_DC].
So, we slowly increased the requested voltage from the power supply to achieve +18.5 [V_DC] again at the SUS-R5 power rail. 
This required 19.34 [V_DC] at the power supply.
Welp -- I guess whoever had set the +18V power supply to +19.35 [V_DC] some time in the past had come across this issue before.
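In other words, the run between the supply and the rack drops the better part of a volt under load. A quick back-of-the-envelope with the numbers measured above, just to make the reasoning explicit (a sketch, nothing that was actually run):

    # Back-of-the-envelope for the observed +18V rail droop under load.
    v_supply_set  = 18.71   # [V] supply setpoint at the time
    v_rail_loaded = 18.06   # [V] measured at the rail with the coil drivers powered

    drop_under_load = v_supply_set - v_rail_loaded   # ~0.65 V of cable/connector drop
    target_rail     = 18.5                           # [V] desired at the rack
    naive_setpoint  = target_rail + drop_under_load  # ~19.15 V
    print(f"drop ~{drop_under_load:.2f} V, naive setpoint ~{naive_setpoint:.2f} V")

The naive estimate (~19.15 V) undershoots the 19.34 V that was actually required, so the effective drop was a bit larger than this simple subtraction suggests.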

Finishing up at the supplies, we restored power to / turned on all the remaining coil drivers and watched it for another bit.
No more over-current trips. 
GOOD! 

... but we're not done!

... upon returning to the ITMY MEDM overview screen on a CDS laptop still standing by the rack, we saw the "ROCKER SWITCH DEATH" or "COIL DRIVER DEATH" warning lights randomly and quickly flashing around *both* the L1 UIM and the L2 PUM COILOUTFs. Oli reported the same thing from the control room. However, both those coil drivers power rail lights looked fine and the rocker switches had not tripped. Reminding myself that these indicator lights are actually watching the OSEM sensor readbacks; if the sensors are some small threshold around zero, then the warning light flashes. This was a crude remote indicator of whether the coil driver itself had over-current tripped because again, the sensors are powered by the coil driver, so if the sensors are zero then there's a good chance the coil driver if off.
But in this case we're staring at the coil driver and it reports good health and no rocker switch over-current trip.
However we see the L2 PUM OSEMs were rapidly glitching between "normal signal" of ~15000 [cts] and a "noisy zero" around 0 [ct] -- hence the red, erratic (and red herring) warning lights.
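As a rough illustration of that indicator logic (the threshold value and function below are made up for the sketch; the real check lives in the front-end model / MEDM):

    # Toy sketch of the "COIL DRIVER DEATH" indicator described above: the light is
    # driven off the OSEM sensor readback, not off the coil driver itself.
    DEAD_THRESHOLD_CTS = 200  # assumed threshold around zero; the real value may differ

    def coil_driver_looks_dead(osem_readback_cts):
        """Flag a channel when its OSEM readback sits near zero counts."""
        return abs(osem_readback_cts) < DEAD_THRESHOLD_CTS

    # An OSEM glitching between ~15000 [cts] and a noisy zero therefore makes the
    # warning light flash even though the coil driver itself is healthy.
    for sample in [15012, 14987, 3, -7, 15020]:
        print(sample, "DEAD" if coil_driver_looks_dead(sample) else "ok")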

Richard's instinct was "maybe the sat amp has gone into oscillations," a la 2015's problem solved by an ECR (see IIET:4628), and he suggested power cycling the sat amp.
Of course, these UK sat amps are another design without a power switch, so a "power cycle" means disconnecting and reconnecting, at the sat amp, the cabling to/from the coil driver that powers it.
So, Marc and I headed out to SUS-R5 in the biergarten, and found that only the ITMY PUM sat amp had all 4 channels' fault lights on and red. See the FAULT picture.
Powering off / powering on (unplugging, replugging) the sat amp did not resolve the fault lights or the signal glitching.
We replaced the sat amp with an in-hand spare; the fault lights did NOT light up and the signals looked excellent. No noise, and the DC values were restored to their pre-power-outage values. See the OK picture.

So, we're not sure *really* what the failure mode was for this sat amp, but (a) we suspect it was a victim of the current surges and unequal power rails over the course of re-powering the SUS-C5 rack, which contains the ITMY PUM coil driver that drew a lot of current upon power-up and which powers this sat amp (this is the other "important for later" item); and (b) we had a spare and it works, so we've moved on, with a post-mortem to come later.

So -- for all that -- the short answer summary is as the main aLOG says:
- The VDC-C1 U23 "left" +18V DC power supply for the SUS-C5 rack (and specifically for the ITMX, ITMY, and BS coil drivers) over-current tripped several times over the course of power restoration, leading us to
- Replace both +18V and -18V power supplies that were already stressed and planned to be swapped in the fullness of time, and 
- Swap a sat amp that did not survive the current surges and unequal power-rail turn-ons of the power outage recovery and subsequent investigations.

Oi!
Images attached to this comment
H1 SUS
ryan.crouch@LIGO.ORG - posted 14:19, Monday 07 April 2025 - last comment - 09:47, Tuesday 08 April 2025(83780)
Post power outage OSEM recovery / Offset check

Oli, Ibrahim, RyanC

We took a look at the OSEMs' current positions for the suspensions post power outage to make sure the offsets are still correct; the previously referenced "golden time" was GPS 1427541769 (the last DRMI lock before the vent). While we did compare against this time, we mainly set them to where they were before the power outage.
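A check like this can also be scripted; here is a minimal sketch assuming NDS access via gwpy (the optic list, averaging window, pre-outage time, and exact channel suffix are illustrative assumptions, not the procedure we actually followed):

    from gwpy.time import to_gps
    from gwpy.timeseries import TimeSeries

    GOLDEN_TIME = 1427541769                    # last DRMI lock before the vent (GPS)
    PRE_OUTAGE = to_gps("2025-04-06 15:00:00")  # a time shortly before the outage (illustrative)

    def mean_osem(optic, dof, gps, span=60):
        """Average a top-mass damping-input OSEM signal over `span` seconds from `gps`."""
        chan = f"H1:SUS-{optic}_M1_DAMP_{dof}_INMON"
        return TimeSeries.get(chan, gps, gps + span).mean().value

    for optic in ["IM1", "IM2", "IM3", "IM4", "SR2", "SRM", "SR3"]:
        for dof in ["P", "Y"]:
            golden = mean_osem(optic, dof, GOLDEN_TIME)
            before = mean_osem(optic, dof, PRE_OUTAGE)
            print(f"{optic} {dof}: golden = {golden:8.1f}   pre-outage = {before:8.1f}")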

Input:

IM1_P: 368.5 -> 396.1, IM1_Y: -382.7 -> -385.4

IM2_P: 558.0 -> 792.0, IM2_Y: -174.7 -> -175.7

IM3_P: -216.3 -> -207.7, IM3_Y: 334.0 -> 346.0

IM4_P: -52.4 -> -92.4, IM4_Y: 379.5 -> 122.5

SR2_P: -114.3 -> -117.6, SR2_Y: 255.2 -> 243.6

SRM_P: 2540.3 -> 2478.3, SRM_Y: -3809.1 -> -3825.1

SR3_P: 439.8 -> 442.4, SR3_Y: -137.7 -> -143.9

Output:

ZM6_P: 1408.7 -> 811.7, ZM6_Y: -260.1 -> -206.1

OM1_P: -70.9 -> -90.8, OM1_Y: 707.2 -> 704.5

OM2_P: -1475.8 -> -1445.0, OM2_Y: -141.2 -> -290.8

OM3_P: Didn't adjust, OM3_Y: Didn't adjust

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 15:19, Monday 07 April 2025 (83789)

Input:

PRM_P: -1620.8 -> -1672, (Changed by -51.2) PRM_Y: 579.6 -> -75.6 (Changed by -655.2)

PR2_P: 1555 -> 1409 (Changed by -146), PR2_Y: 2800.7 -> -280.8 (Changed by -3081.5)

PR3_P: -122.2 -> -151 (Changed by -28.8), PR3_Y: -100 -> -232.4 (Changed by -132.4)

MC1_P: 833.3 -> 833.3, MC1_Y: -2230.6 -> -2230.6 (No change)

MC2_P: 591.5 -> 591.5, MC2_Y: -580.4 -> -580.4 (No change)

MC3_P: -20.3 -> -20.3, MC3_Y: -2431.1 -> -2431.1 (No change)

Attached are plots showing the offsets (and their relevant M1 OSEMs) before and after the suspension was reliably aligned.

Images attached to this comment
oli.patane@LIGO.ORG - 15:45, Monday 07 April 2025 (83793)

Comparing the QUADs', BS's, and TMSs' pointing before and after the outage. All had come back up from the power outage with a slightly different OPTICALIGN OFFSET for P and Y: the power outage took everything down, and when the systems came back up, the OPTICALIGN OFFSETS were read from the SUS SDF files; those channels aren't monitored by SDF and so had older offset values. I set the offset values back to what they were before the power outage, but still had to adjust them to get the top masses pointed back to where they were before the outage.

'Before' refers to the OPTICALIGN OFFSET values before the outage, and 'After' is what I changed those values to in order to get the driftmon channels to match where they were before the outage.
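A hedged sketch of that restore step using pyepics; the exact channel names (the M0 stage for the quads, the DRIFTMON suffix) are assumptions that should be checked against the SUS MEDM screens, and the values are just the 'After' numbers from the table below:

    import epics  # pyepics

    # 'After' OPTICALIGN offsets from the table below (quads use the M0 top-mass stage).
    restored = {
        ("ITMX", "P"): -109.7, ("ITMX", "Y"): 110.1,
        ("ETMX", "P"): -45.7,  ("ETMX", "Y"): -153.7,
    }

    for (optic, dof), value in restored.items():
        chan = f"H1:SUS-{optic}_M0_OPTICALIGN_{dof}_OFFSET"
        epics.caput(chan, value)
        # Then compare the top-mass drift monitor against its pre-outage value.
        drift = epics.caget(f"H1:SUS-{optic}_M0_DRIFTMON_{dof}_INMON")
        print(f"{chan} -> {value}   (DRIFTMON now {drift})")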

SUS    DOF   Before    After
ITMX    P    -114.5    -109.7
        Y     110.1     110.1   (no change)
ITMY    P       1.6       2.0
        Y     -17.9     -22.4
ETMX    P     -36.6     -45.7
        Y    -146.2    -153.7
ETMY    P     164.6     160.6
        Y     166.7     166.7   (no change)
BS      P      96.7      96.7   (no change)
        Y    -393.7    -393.7   (no change)
TMSX    P     -88.9     -89.9
        Y     -94.3     -92.3
TMSY    P      79.2      82.2
        Y    -261.9    -261.9   (no change)

 

camilla.compton@LIGO.ORG - 08:37, Tuesday 08 April 2025 (83805)

HAM7/8 suspensions was brought back, details in 83774.

camilla.compton@LIGO.ORG - 09:35, Tuesday 08 April 2025 (83807)ISC

Betsy, Oli, Camilla

The complete story:
Ryan S had found a "good DRMI time" where all optics were in their aligned state before HEPI locking or venting: GPS 1427541769.
We had planned to go back to this time, but there was then some confusion about whether we wanted a time with the HEPIs locked or not, and the team decided instead to go back to the time directly before the power outage (so there is a typo in Ryan's original alog).
Everything actually got put back to a time before the power outage, when all suspensions were in the ALIGNED guardian state (e.g. Sunday 6th April ~16:00 UTC or times around then). However, some of the ISIs were tripped at that time: HAM2, HAM7, BSC1, BSC2, BSC3; Ops overview attached.
As we checked the IMC alignment yesterday 83794 (and saw flashes on IM4_TRANS suggesting the MCs and IM1,2,3 are good), we do not want to move the alignment again, so we are staying with everything at this "before power outage" time...for better or worse.

We rechecked that everything was put back to this time, looking at the M1 DAMP channels for each optic, e.g. H1:SUS-OM1_M1_DAMP_{P,Y}_INMON:

  • OM1,2,3, ZM6
  • SRM,2,3
  • IM1,2,3,4
  • Quads, BS, TMS
    • Found that ETMX was 30urad off, same with ITMY Yaw, so Oli readjusted. All others were left as is.
  • Oli checked all other optics which were fine, within a few urad.

For each ndscope, the 1st t-cursor is on the original "good DRMI time" we had planned to go back to, and the 2nd t-cursor is on the time we actually went back to.

Images attached to this comment
oli.patane@LIGO.ORG - 09:47, Tuesday 08 April 2025 (83808)

I checked the rest of the optics and verified that they all got put back to pointing where they were before the power outage. I've also used one of the cursors to mark where the optic was at the "good DRMI time", and the other cursor marks where the optic is currently.

MCs

PRs

ZMs

FCs

Images attached to this comment
H1 PEM
ryan.crouch@LIGO.ORG - posted 08:42, Monday 07 April 2025 - last comment - 08:13, Tuesday 08 April 2025(83766)
HAM1 dust monitor weekend trend

The counts were pretty low over the weekend, peaking at ~30 counts in the 0.3 µm bin and 10 counts in the 0.5 µm bin.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 09:21, Monday 07 April 2025 (83770)

The EY dust monitor died with the power outage and has not come back with a process restart; it's having connection issues.

ryan.crouch@LIGO.ORG - 08:13, Tuesday 08 April 2025 (83804)

The EY dust monitor came back overnight.

Displaying reports 1601-1620 of 83002.Go to page Start 77 78 79 80 81 82 83 84 85 End