H1 ISC
elenna.capote@LIGO.ORG - posted 13:43, Friday 05 December 2025 - last comment - 14:10, Friday 05 December 2025(88387)
PR3 yaw move to improve comm beatnote, ALS X alignment

While TJ was running initial alignment for the green arms, I noticed that the ALS X beam on the camera appeared to be too far to the right side of the screen. The COMM beatnote was at -20 dBm, when it is normally between -5 and -10 dBm. I checked both the PR3 top mass osems and the PR3 oplevs. The top mass osems did not indicate any significant change in position, but the oplev seems to indicate a significant change in the yaw position. PR3 yaw was around -11.8 but then changed to around -11.2 after the power outage. It also appears that the ALS X beam is closer to its usual position on the camera.

I decided to try moving PR3 yaw. I stepped it by 2 urad, which brought the oplev back to -11.8 in yaw and the COMM beatnote up to -5 dBm. Previous slider value: -230.1, new slider value: -232.1.
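For reference, the move can be scripted with something like the pyepics snippet below. This is only a sketch: the channel name follows the usual top-mass alignment-offset naming convention and is an assumption, not something recorded in this entry.

    # Hedged sketch of the PR3 yaw slider step described above.
    # The channel name is assumed (standard OPTICALIGN naming), not verified here.
    import epics

    PR3_YAW = 'H1:SUS-PR3_M1_OPTICALIGN_Y_OFFSET'  # assumed slider channel
    STEP = -2.0                                    # urad; -230.1 -> -232.1

    current = epics.caget(PR3_YAW)          # read the present slider value
    epics.caput(PR3_YAW, current + STEP)    # apply the step
    print(f'{PR3_YAW}: {current} -> {current + STEP}')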

The DIFF beatnote may not be great yet, but we should wait for beamsplitter alignment before making any other changes.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:10, Friday 05 December 2025 (88388)

Actually, this may not have been the right thing to do. I trended the oplevs and top mass OSEMs of ITMX and ETMX and compared their values during today's initial alignment (before moving PR3) to the last initial alignment we did before the power outage. They are mostly very similar except for ETMX yaw.

             P then   P now   Y then   Y now
ITMX oplev    -7.7     -7.8     5.8      5.6
ETMX oplev     2.7      3.1   -11.5     -3.1

I put the PR3 yaw slider back to its previous value of -230.1 until we can figure out why ETMX yaw has moved so much.
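For completeness, a minimal sketch of this kind of before/after trend comparison using gwpy is below. The channel names and time windows are illustrative placeholders, not the exact ones used for the table above.

    # Sketch: compare oplev yaw averages before the outage vs. today.
    # Channel names and time windows below are placeholders, not the real ones.
    from gwpy.timeseries import TimeSeries

    CHANNELS = ['H1:SUS-ITMX_L3_OPLEV_YAW_OUT16',    # assumed oplev channel names
                'H1:SUS-ETMX_L3_OPLEV_YAW_OUT16']
    THEN = ('2025-12-04 18:00', '2025-12-04 18:10')  # example window before the outage
    NOW  = ('2025-12-05 20:00', '2025-12-05 20:10')  # example window during today's alignment

    for chan in CHANNELS:
        then = TimeSeries.get(chan, *THEN).mean()
        now = TimeSeries.get(chan, *NOW).mean()
        print(f'{chan}: then {then:.1f}, now {now:.1f}')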

H1 SEI
ryan.crouch@LIGO.ORG - posted 13:02, Friday 05 December 2025 (88386)
BRS Drift Trends -- Monthly

Closes FAMIS38811, last checked in alog87797.

We can see yesterday's outage on every plot as expected, as well as the BRS issues from October 21st. There doesn't look to be any trend in the driftmon, and ETMY still looks to have been slowly increasing in temperature before the outage. The aux plot looks about the same as during the last check, except that ETMY DAMP CTL looks to have come back at a different, lower spot; the EY SEI configuration is not quite fully recovered yet, so that may be why.

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 12:47, Friday 05 December 2025 (88385)
All systems recovered enough for relocking

All systems have been recovered enough to start relocking at this point. We will not be relocking the IFO fully at this time though since the ring heaters have been off and their time to thermalize would take too long. We will instead be running other measurements that will not need the ring heaters on.

There is a good chance that other issues will pop up, but we are starting initial alignment now.

H1 IOO
jenne.driggers@LIGO.ORG - posted 12:43, Friday 05 December 2025 (88384)
IMC locked

[Oli, Jenne]

Oli brought all of the IMC optics back to where they had been yesterday before the power outage. We squiggled MC2 and the IMC PZT around until we could lock on the TEM00 mode, and let the WFS (with 10x the usual gain) finish the alignment. We offloaded the IMC WFS using the IMC guardian. We then took the IMC to OFFLINE and moved MC1 and the PZT such that the DC position on the IMC WFS matched a time from yesterday when the IMC was offline before the power outage. We relocked, let the alignment run again, and again offloaded using the guardian. Now the IMC is fine, and ready for initial alignment soon.

H1 PEM
ryan.crouch@LIGO.ORG - posted 12:38, Friday 05 December 2025 (88383)
Dust monitor IOC restarted

I restarted all the dust monitor IOCs and they all came back nicely. I then reset the alarm levels using the 'check_dust_monitors_are_working' script.

H1 DAQ
daniel.sigg@LIGO.ORG - posted 12:34, Friday 05 December 2025 (88382)
Slow controls

Marc Daniel

We finished the software changes for EtherCAT Corner Station Chassis 4.

We found 2 issues related to the power outage:

Baffle PD chassis in EX has a bench supply that needs to be turned on by hand.

The system is back up and running.

H1 CDS
david.barker@LIGO.ORG - posted 11:35, Friday 05 December 2025 (88381)
CDS Status

Jonathan, EJ, Richard, TJ, Dave:

Slow /opt/rtcds file access has been fixed.

All Guardian nodes running again.

The long-range-dolphin issue was due to the end station Adnaco chassis being down; Richard powered these up, and all IPCs are working correctly now.

Cameras are now working again.

Currently working on:

 . GDS broadcaster and DAQ min trend archived data

 . Alarms and Alerts

H1 PSL
oli.patane@LIGO.ORG - posted 10:20, Friday 05 December 2025 (88380)
PSL Status Report Weekly FAMIS

Closes FAMIS#27621, last checked 88194

Everything is looking okay except for PMC REFL being high. It does look like it's been higher since the PSL was recovered after yesterday's power outage, but we are still recovering and more will need to be done for the PSL anyway, so maybe this is expected and will be adjusted.


Laser Status:
    NPRO output power is 1.83W
    AMP1 output power is 70.67W
    AMP2 output power is 139.9W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 0 days, 17 hr 9 minutes
    Reflected power = 25.84W
    Transmitted power = 105.6W
    PowerSum = 131.5W

FSS:
    It has been locked for 0 days 16 hr and 19 min
    TPD[V] = 0.5027V

ISS:
    The diffracted power is around 4.0%
    Last saturation event was 0 days 0 hours and 0 minutes ago


Possible Issues:
    PMC reflected power is high

Images attached to this report
LHO General
tyler.guidry@LIGO.ORG - posted 09:13, Friday 05 December 2025 (88379)
FAC Recovery Post Power Outage
Following yesterday's power outage, primary FAC systems (HVAC, domestic water supply, fire alarm/suppression) all seem to be operating normally. Fire panels required acknowledgement of power supply failures, and battery backups did their job. There is an issue plaguing the supply fan drive at the FCES which revisited us this morning, but it is highly unlikely that it arose from the outage. Temperatures are within normal ranges in all areas. Eric continues to physically walk systems down to ensure normal operation of points in systems not visible via FMCS.

C. Soike E. Otterman R. McCarthy T. Guidry
LHO General
thomas.shaffer@LIGO.ORG - posted 07:52, Friday 05 December 2025 (88377)
Ops Day Shift Start

TITLE: 12/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None

QUICK SUMMARY: Still in the midst of recovery from the power outage yesterday. Looks like all suspensions stayed damped overnight except the BS, which tripped at 0422 UTC. SEI is still in the non-nominal state Jim left it in yesterday (alog88373). Many MEDM screens, scripts, and other processes are running *very* slowly.

Recovery will continue, one forward step at a time.

H1 CDS
david.barker@LIGO.ORG - posted 22:36, Thursday 04 December 2025 (88376)
CDS slow filesystem and not all services are back online

Recovery of the front end systems was hampered by a slow /opt/rtcds file system, which still persists. There are no logs on h1fs0 indicating the problem, and the ZFS file system itself appears to be fully functional. Hourly backups by h1fs1 continue normally. At this point we cannot rule out a network issue.

The alarms system is running but does not seem to be operational; again, no errors are being logged and channel access appears to be working.

Jim reported a bad model initialization with its safe.snap, most probably caused by a slow /opt/rtcds. I had to restart many models several times to get through a "burt restore" timeout for similar reasons.

We may need to restart h1fs0 in the morning and remount /opt/rtcds on all the NFS clients

Images attached to this report
H1 CDS (SUS)
filiberto.clara@LIGO.ORG - posted 22:01, Thursday 04 December 2025 (88375)
Power Supply Failure for EY SUS-C1/C2 Racks

The positive 18V power supply which provides power to Rack SUS-C1 and SUS-C2 failed after the power outage. Fan seized and power supply tripped off. Power supply was replaced.

New Power Supply  installed - S1300291

F. Clara, M. Pirello

H1 DAQ
filiberto.clara@LIGO.ORG - posted 21:48, Thursday 04 December 2025 (88374)
EtherCAT Corner Station Chassis 4

WP 12915
Corner Station Controls Chassis 4 Wiring Diagram - E1101222
EtherCAT Corner Station Chassis 4 - D1101266
Modifications to the Beckhoff chassis - E2000499

The EtherCAT Corner Station Chassis 4 was modified per E2000499. New Beckhoff terminals were installed to support JAC and BHD. A new rear panel was installed to accommodate the new rear adapter boards and shutter control connectors. The DIN rail power terminal blocks were relocated, making the existing power cables too short, so new power cables were installed to the EtherCAT couplers and power terminals. All field cabling was reconnected, except for ISC_313. The Beckhoff software will need to be updated.

EtherCAT Corner Station Chassis 4 - Serial Number S1107450

F. Clara, D. Sigg

H1 SEI (OpsInfo)
jim.warner@LIGO.ORG - posted 20:58, Thursday 04 December 2025 (88373)
Status of SEI recovery

Seems like all of the models are running, but many of the guardians are still down. What I was able to get up and running:

HEPI pump stations for all 3 buildings. Patrick had to help some with the end station Beckhoff. I had to adjust the float switch for the overflow tank at EX; we were riding just at the trip level there. The corner station Athena looked dead when I powered on the power supply on the mezzanine, but I think some of the LEDs are just dead. Good thing we plan to replace that soon.

Both BRSs are running. Patrick had to power cycle the EX Beckhoff, and we were then able to restart the software. The damping settings got reverted (highthreshold and lowthreshold), so I had to fix those through the CLI. Heater settings also got lost; both BRSs cooled off a bit over the several hours we were blind and will take a little while to warm back up. I'll probably keep watching for a while to make sure everything is okay, but I think we are good.

Some of the chambers can be isolated through guardian, but many have dead guardian nodes. I manually engaged damping where I couldn't get the ISI isolated via guardian.

None of the blend, sensor correction, or CPS diff is alive enough to recover yet. I have manually disabled sensor correction for the corner; the ends can't be isolated yet. I guess that will wait for tomorrow.

H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 18:27, Thursday 04 December 2025 - last comment - 18:57, Thursday 04 December 2025(88369)
SUS - power outage recovery

Oli, Ryan S, Rahul

Except for ETMX and ETMY (timing error, work currently ongoing), we have recovered all other suspensions by untripping the watchdogs and setting them to SAFE for tonight. The INMONs look fine - I eyeballed them all (BOSEMs and AOSEMs), nothing out of the ordinary.

For ETMX and ETMY, Dave is currently performing a computer restart, following which they will be set to SAFE as well.

 

Comments related to this report
ryan.short@LIGO.ORG - 18:57, Thursday 04 December 2025 (88371)

Once CDS reboots were finished, I took all suspensions to either ALIGNED or MISALIGNED so that they're damped overnight.

LHO General
corey.gray@LIGO.ORG - posted 18:02, Thursday 04 December 2025 - last comment - 19:20, Thursday 04 December 2025(88350)
Thurs DAY Ops Summary

TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:

After CDS & H1-Locking work of yesterday, today we transitioned to starting prep work for the upcoming vent of HAM7.

However, there was more CDS work today which was related to ISC Slow Controls....but in the middle of this...

There was a 90min power outage on site!!
 

LOG:

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 18:36, Thursday 04 December 2025 (88370)

Recovery milestones:
22:02 Power back-up
22:04 CDS / GC seeing what's on/functional, then bring infrastructure back up
22:10 VAC team starts the Kobelco to support the air pressure that's keeping the corner station gate valves from closing
22:13 Phones are back
22:17 LHO GC Internet's back
22:25 GC wifi back up, alog and CDS back up
22:40 RyanS, Jason, Patrick got PSL Beckhoff back up
22:55 VAC in LVEA/FCES back up
22:57 CDS back up (only controls)
23:27 PSL computer back
00:14 Safety interlock is back
00:14 HV and ALSY on at EY
00:35 opslogin0 back up
00:37 CS HV back up
00:53 CS HEPI pump station controller back up
01:05 CO2X and CO2Y back up
01:10 HV and ALSY on at EY
01:22 PSL is alive

Note: EY 24MHz oscillator had to be power cycled to resync

Casualties:
- lost ADC on seiH16
- 18V power supply failure at EY in SUS rack

Log:

Start Time   System   Name   Location   Laser_Haz   Task   End Time
22:32 VAC Gerardo MX, EX, MY, EY n Bringing back VAC computers 23:41
23:32 VAC Jordan LVEA, FCES n Bringing back VAC computers 22:55
22:54 PSL RyanS, Jason LVEA n Turning PSL chiller back on 23:27
23:03 VAC Jordan LVEA n Checking out VAC computer 23:41
23:56 SAF Richard EY n Looking at safety system 00:25
00:02 JAC Jennie JOAT, OpticsLab n Putting components on table 01:24
00:19 SAF TJ EX n Turning on HV and ALSX 00:49
00:38 TCS RyanC LVEA n Turning CO2 lasers back on 01:06
01:03 EE Marc CER n Power cycling io chassis 01:14
01:06 PSL RyanS, Jason LVEA n Checking makeup air for PSL 01:10
01:07 JAC Jennie LVEA n Grabbing parts 01:14
01:08 beckhoff patrick cr - BRSx recovery 02:20
01:08 sei jim EX.EY - HEPI pump station recovery 01:58
01:25 SEI Patrick EX n BRSX troubleshooting 01:54
01:27 EE Marc EX n Looking at RF sources 02:09
02:00 EE Fil EY n Power cycling SUSEY 02:19
02:09 VAC Gerardo MX N Troubleshooting VAC computer 02:27
02:12 EE Marc CER N Checking power supplies 02:16
ryan.short@LIGO.ORG - 19:20, Thursday 04 December 2025 (88372)OpsInfo

Since Oli's alog, I tried to keep a rough outline of the goings-on:

Marc and Fil went down to EY to replace the failed power supply, which brought life back to the EY front ends.

Dave noticed several models across site had timing errors, so he restarted them.

Gerardo continued to troubleshoot VAC computers at the mid-stations.

Once CDS boots were finished, I brought all suspension Guardians to either ALIGNED or MISALIGNED so that they're damped overnight.

I started to recover some of the Guardian nodes that didn't come up initially. When TJ started the Guardian service earlier, it took a very long time, but most of the nodes came up and he put them into good states for the night. The ones that didn't come up (still white on the GRD overview screen) I've been able to revive with a 'guardctrl restart' command, but I can only do a couple at a time or else the process times out. Even this way, the nodes take several minutes to come online. I got through many of the dead nodes, but I did not finish as I am very tired.
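For the record, the batching can be scripted roughly as below. This is a sketch only: the node names are placeholders, the pause length is a guess, and it assumes 'guardctrl restart' accepts several node names in one call.

    # Hedged sketch: revive dead Guardian nodes a couple at a time.
    # Node names are placeholders; 'guardctrl restart' usage taken from the text above.
    import subprocess
    import time

    dead_nodes = ['ISI_HAM2_BLND', 'ISI_HAM3_BLND', 'SEI_BS']  # hypothetical examples

    BATCH = 2           # more than a couple at once tended to time out
    for i in range(0, len(dead_nodes), BATCH):
        batch = dead_nodes[i:i + BATCH]
        subprocess.run(['guardctrl', 'restart'] + batch, check=True)
        time.sleep(300)  # nodes take several minutes to come online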

Main things still to do for recovery:  (off the top of my head)

  • Finish reviving dead Guardian nodes
  • Maybe restart the main file server (Dave suspects this is the cause of general slowness of CDS machines)
  • Ensure seismic is in a good place
  • Finish Beckhoff software updates that were in-progress at the time of the outage
  • Clear up remaining IPC errors across CDS land
  • Turn lasers back on
  • Check suspension positions
  • Lots of other things probably
  • Thank everyone for their tireless efforts
H1 IOO (ISC, PSL)
jennifer.wright@LIGO.ORG - posted 18:17, Friday 03 October 2025 - last comment - 10:19, Friday 05 December 2025(87290)
ISS vertical calibration

Jennie W, Keita,

Since we don't have an easy way of scanning the input beam in the vertical direction, Keita used the pitch of the PZT steering mirror to do the scan and we read out the DC voltages for each PD.

The beam position can be inferred from the pictured setup - see photo. As the pitch actuator on the steering mirror is rotated, the allen key sitting in the hole in the pitch actuator moves up and down relative to the ruler.

height on ruler above table = height of centre of actuator wheel above table + sqrt((allen key thickness / 2)^2 + (allen key length)^2) * sin(ang - delta_theta)

where ang is the angle the actuator wheel is at and delta_theta is the angle from the centre line of the allen key to its corner which is used to point at the gradations on the ruler.
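A direct Python transcription of that formula is below, using the dimensions Keita quotes in his comment further down this thread; delta_theta is taken here as arctan((thickness/2)/length), which is one reading of the description above.

    # Height read off the ruler vs. actuator wheel angle, per the formula above.
    # Dimensions from the later comment: wheel centre 160 mm, key 2 mm x 80.6 mm.
    import numpy as np

    WHEEL_HEIGHT_MM = 160.0   # centre of actuator wheel above the table
    KEY_THICKNESS_MM = 2.0
    KEY_LENGTH_MM = 80.6

    def ruler_height_mm(ang_rad):
        """Height on the ruler for a wheel angle ang (radians)."""
        arm = np.hypot(KEY_THICKNESS_MM / 2, KEY_LENGTH_MM)            # key centre to pointing corner
        delta_theta = np.arctan2(KEY_THICKNESS_MM / 2, KEY_LENGTH_MM)  # centre line to corner angle
        return WHEEL_HEIGHT_MM + arm * np.sin(ang_rad - delta_theta)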

The first measurement, taken at the alignment Keita found yesterday that minimised the vertical dither coupling, is shown. It shows voltage on each PD vs. height on the ruler.

From this, and from the low DC voltages we saw on the QPD and some PDs yesterday, Keita and I realised we had gone too far toward the edge of the QPD and some PDs.

So in the afternoon Keita realigned onto all of the PDs.

Today, as we were doing measurements on it, Keita realised we still had the small aperture piece in place on the array, so we moved that for our second set of measurements.

The plots of voltage vs. ruler position and voltage vs. pitch wheel angle are attached.

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 13:02, Monday 06 October 2025 (87323)

Keita did a few more measurements in the vertical scan after I left on Friday; attached is the updated scan plot.

He also then set the pitch to the middle of the range (165mm on the scale in the graph) and took a horizontal scan of the PD array using the micrometer that the PZT mirror is mounted on. See second graph.

Images attached to this comment
jennifer.wright@LIGO.ORG - 13:04, Monday 06 October 2025 (87324)

From the vertical scan of the PD array it looks like diodes 2 and 6, which are in a vertical line in the array, are not properly aligned. We are not sure if this is an issue with one of the beam paths through the beamsplitters/mirrors that split the light into the four directions for each vertical pair of diodes, or if these diodes are just misaligned.

keita.kawabe@LIGO.ORG - 10:19, Friday 05 December 2025 (88367)

The above plots are not relevant any more, as the PD positions have since been adjusted, but here are additional details for posterity.

  • "height" was always measured by a ruler which had a considerable zero point offset from the table surface. That's OK because the same offset appears on both sides of the equation and cancels out.
  • "height of centre of actuator wheel above table" =160mm.
  • "allen key thickness" = 2mm.
  • "allen key length" = 80.6mm.

The rotation angle of the knob doesn't mean anything by itself; it must be converted to a meaningful number like the displacement of the beam on the PD. This wasn't done for the above plots but was done for the plots with the final PD positions; a sketch of the conversion follows the list below.

  • PZT mirror mount is Thorlabs KC-1-PZ series (not to be confused with KC-1-P series) and the manual actuator knob tilts the mirror by 0.4deg per revolution.
  • We haven't measured the distance from the PZT mirror to anything but took pictures that are good enough to determine the distance to the array PDs.
    • Attached photos were shot during the beam profile measurement. Red and green lines are visual aids for the screw-hole grid ON THE TABLE SURFACE, and the yellow line is the beam path, which is pretty much directly above the red line. From the second picture, we see that the distance from the PZT mirror surface to the surface of the temporary mirror (SM1) is something like 9.5"+-0.5".
    • Before removing SM1 from the table, the distance from SM1 to the ISS array input aperture location (red cross in the third attachment) was measured to be 292mm.
    • The distance from the input aperture to the front surface of the periscope beam splitter on the ISS array is nominally 17.4mm (again see the third attachment).
    • Finally, from the front surface of the periscope beam splitter on the ISS array to one of the PDs in the array on the first floor is 139.2mm. ("PD1" in PD Array Plate problem.docx in https://dcc.ligo.org/LIGO-E1400231. Don't trust numbers for other PDs in the document as the effect of refraction is calculated incorrectly.)
    • In total, the distance from the PZT mirror to one array PD is ~690+-13mm or 690*(1+-0.02)mm. The error bar is negligible for this purpose.
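Putting those numbers together, a rough conversion from knob turns to beam displacement on an array PD looks like the sketch below. The factor of 2 for reflection (the beam steers by twice the mirror tilt) is added here and is not stated above.

    # Rough conversion: knob revolutions -> mirror tilt -> beam shift on PD1.
    # Segment lengths and 0.4 deg/rev are taken from the comment above; the
    # factor of 2 for reflection off the mirror is an added assumption.
    import numpy as np

    DEG_PER_REV = 0.4                                # mirror tilt per knob revolution
    lever_arm_mm = 9.5 * 25.4 + 292 + 17.4 + 139.2   # ~690 mm, PZT mirror to PD1

    def beam_shift_mm(knob_revolutions):
        """Approximate beam displacement on PD1 for a given number of knob turns."""
        tilt_rad = np.deg2rad(DEG_PER_REV * knob_revolutions)
        return 2 * tilt_rad * lever_arm_mm           # reflected beam moves by twice the tilt

    print(f'{beam_shift_mm(0.25):.2f} mm')           # a quarter turn -> ~2.4 mm on PD1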
Images attached to this comment