LHO General
thomas.shaffer@LIGO.ORG - posted 07:33, Tuesday 28 October 2025 (87783)
Ops Day Shift Start

TITLE: 10/28 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 8mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.21 μm/s 
QUICK SUMMARY: Locked for almost 2 hours, maintenance day. Calm environment. Magnetic injections and sus charge running now.

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:46, Tuesday 28 October 2025 (87782)
Workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

LHO General
corey.gray@LIGO.ORG - posted 21:59, Monday 27 October 2025 (87776)
Mon EVE Ops Summary

TITLE: 10/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:

H1's been locked 9.25hrs.  Environmentally all is quiet.
LOG:   0327 GRB Short

X1 SUS
oli.patane@LIGO.ORG - posted 19:16, Monday 27 October 2025 (87781)
Verifying BBSS primary prism location is not near a point of instability

Jeff, Oli

Along with verifying the current location of the BBSS's primary prism on LLO's and LHO's dummy masses (87767), we also wanted to check whether we were near a point of instability for the prism relative to the center of mass of the dummy test mass. I plotted the model with the original value for d4, 2.67mm, alongside models with the original d4 +/- 1mm, 2mm, and 3mm. They all look like stable places to put the prism, so it looks like we don't have to worry. The plots can be found at /ligo/svncommon/SusSVN/sus/trunk/BBSS/Common/Results/comparetripleparams/2025-10-28_BBSS_d4nom_plusminus3mm/triplemodelcomp_2025-10-28_BBSS_d4nom_plusminus3mm_M1toM1.pdf (svn r12759).
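
The comparison itself is just a parameter sweep over d4; a minimal sketch of that kind of sweep is below. The model builder here is a hypothetical stand-in for the actual SUS SVN MATLAB tools, so the curves it produces are toy values, not BBSS results.

import numpy as np
import matplotlib.pyplot as plt

D4_NOMINAL = 2.67e-3                                  # m, original d4 value
OFFSETS = np.array([-3, -2, -1, 0, 1, 2, 3]) * 1e-3   # m, the +/- 1, 2, 3 mm sweep

def build_bbss_triple_model(d4):
    """Placeholder for the real SUS SVN triple-model builder (the MATLAB tools
    under sus/trunk/BBSS/Common/).  Returns (freq, tf) for an M1-to-M1 transfer
    function; here just a toy resonance whose frequency shifts weakly with d4
    so the sweep and overlay below actually run."""
    freq = np.logspace(-1, 1, 1000)
    f0 = 0.45 * (1.0 + 50.0 * (d4 - D4_NOMINAL))      # toy dependence on d4
    tf = f0**2 / (f0**2 - freq**2 + 1j * freq * f0 / 30.0)
    return freq, tf

fig, ax = plt.subplots()
for off in OFFSETS:
    freq, tf = build_bbss_triple_model(D4_NOMINAL + off)
    ax.loglog(freq, np.abs(tf), label=f"d4 = nominal {off*1e3:+.0f} mm")
ax.set_xlabel("Frequency [Hz]")
ax.set_ylabel("|M1 to M1|")
ax.legend()
fig.savefig("triplemodelcomp_d4_sweep.pdf")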

Non-image files attached to this report
H1 SUS
oli.patane@LIGO.ORG - posted 19:01, Monday 27 October 2025 - last comment - 12:46, Tuesday 28 October 2025(87780)
ETMY L2 saturations last week NOT caused by new satamp

Jeff, Ryan S, Oli

During last week's set of issues, something we saw a few times during our lock reacquisition attempts was a burst of EY saturations while going through LOWNOISE_COIL_DRIVERS/TRANSITION_FROM_ETMX. The saturations would stop once the L2 to R0 damping was turned off (87713), so it seemed like the issue was with the ETMY L2 satamp, and we swapped it out for a different one (87722). We didn't see any of these repeating saturations after that, but we were also changing a lot of things at the time while trying to figure out the problem, and we hadn't been seeing these saturations during every single relock attempt.

The DAC channels showing saturations were DAC1 channels 1 and 2. Checking the model, those channels line up with R0 F2 and F3, which are the channels that control length on R0. We plotted R0's MASTER OUTs during times when we saw lots of the EY saturations, and at times when the number of saturations heard on verbals was normal, including a time before we swapped the ETMY L2 satamp on October 14th.
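
For the offline look-back, a minimal sketch of pulling and stacking those channels with gwpy is below; the GPS times are placeholders and the channel names are assumed from the standard SUS naming convention, not copied from the plots.

from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

# GPS times are placeholders; we looked at several different relock attempts.
start, end = 1419000000, 1419000300

# R0 top-stage drive outputs plus the L2 length witness that showed the glitch.
# Channel names are assumptions following the usual SUS convention.
channels = [
    "H1:SUS-ETMY_R0_MASTER_OUT_F2_DQ",
    "H1:SUS-ETMY_R0_MASTER_OUT_F3_DQ",
    "H1:SUS-ETMY_L2_WIT_L_DQ",
]

data = TimeSeriesDict.get(channels, start, end)
plot = Plot(*(data[c] for c in channels), separate=True, sharex=True)
plot.savefig("etmy_r0_masterout_check.png")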

Here's a breakdown of the different examples we looked at:

Normal amount of saturations during LOWNOISE_COIL_DRIVERS / TRANSITION_FROM_ETMX

    Date     SatAmp      Verbals         ndscope
    Oct 1    unmodified  oct1_verbals    oct1_ndscope
    Oct 20   modified    oct20_verbals   oct20_ndscope
    Oct 22   modified    oct22_verbals   oct22_ndscope

Lots of EY saturations during LOWNOISE_COIL_DRIVERS / TRANSITION_FROM_ETMX

    Date     SatAmp      Verbals         ndscope
    Oct 23   modified    oct23_verbals   oct23_ndscope (also oct23_ndscope_zoomout)
    Oct 24   modified    oct24_verbals   oct24_ndscope

We saw that for the times when the number of saturations was what we consider 'normal', the LOWNOISE_COIL_DRIVERS/TRANSITION_FROM_ETMX states behave similarly in the R0 MASTER OUT channels, including after swapping the satamp. The OSEMs see some movement, but it's not too far outside of where they usually sit. However, for the two times we checked where we had the excessive EY saturations, we saw that right before they started there was a high-frequency glitch in the ETMY L2 length witness channel. This glitch only moved L2 a small amount, about 0.5 um, but it caused R0 to move a lot in length, saturating or nearly saturating for a long time.

Plotting the impulse response of the SUS-ETMY_L2_R0DAMP_L filter bank, we see that these filters have an impulse response time of ~16 seconds, and breaking the impulse response down by each filter's contribution, we see that the FM8 (module 7) filter, invPsmoo, has a wild impulse response. Because of this filter module, the impulse response of the entire L2_R0DAMP_L filter bank is extremely long, and the signal is very large. The frequency response plot for module 7 shows that it approaches a gain of 10^15 at higher frequencies. Additionally, these new satamps have about double the gain at high frequencies compared with the old satamps, which would also exacerbate any issues at higher frequencies.
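
A minimal sketch of how that kind of impulse-response check can be reproduced offline with scipy is below. The filter here is a toy lightly damped low-frequency resonance standing in for the installed invPsmoo coefficients (which live in the site foton files), so the numbers are illustrative only.

import numpy as np
from scipy import signal

fs = 16384  # Hz, SUS front-end model rate

# Toy stand-in filter: a lightly damped 0.4 Hz resonance, which rings for
# tens of seconds.  The real check used the installed L2_R0DAMP_L (FM8,
# invPsmoo) coefficients, not this stand-in.
f0, Q = 0.4, 5.0
w0 = 2 * np.pi * f0
poles = [-w0 / (2 * Q) + 1j * w0, -w0 / (2 * Q) - 1j * w0]
z, p, k = signal.bilinear_zpk([], poles, w0**2, fs)
sos = signal.zpk2sos(z, p, k)

# Drive the filter with a unit impulse and measure how long it takes to decay.
x = np.zeros(int(60 * fs))
x[0] = 1.0
y = signal.sosfilt(sos, x)

t = np.arange(y.size) / fs
decay_time = t[np.abs(y) > 1e-3 * np.abs(y).max()].max()
print(f"impulse response decays to 0.1% of its peak after ~{decay_time:.1f} s")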

With all that said, the conclusion seems to be that the EY saturation issues from last week were not caused by a faulty satamp, but instead by something else that caused L2 to glitch, with the long impulse response and high gain of the damping filter then causing R0 to take a very long time to calm down.

A temporary solution would be to keep the L2 to R0 damping off during locking until after LOWNOISE_LENGTH_CONTROL has finished, to make sure it is not on during the sudden motions that could upset R0.
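
A minimal sketch of what that reordering could look like in the locking guardian is below; the state name, channel names, and gain value are assumptions rather than the installed code, and ezca is provided by the guardian environment.

from guardian import GuardState

# Hypothetical state: engage the ETMY L2 -> R0 length tracking only after
# LOWNOISE_LENGTH_CONTROL has finished, with a slow ramp.
class ENGAGE_ETMY_R0_TRACKING(GuardState):
    request = False

    def main(self):
        # assumed filter-module channel names for the ETMY L2_R0DAMP_L bank
        ezca['SUS-ETMY_L2_R0DAMP_L_TRAMP'] = 15   # s, slow ramp
        ezca['SUS-ETMY_L2_R0DAMP_L_GAIN'] = 1.0   # nominal gain (assumed)
        self.timer['ramp'] = 16

    def run(self):
        # hold here until the gain ramp has completed
        return self.timer['ramp']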

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 12:46, Tuesday 28 October 2025 (87798)

I am confused about the conclusions of this alog. Attached is a screenshot of the last time we had these saturations before the satamp was replaced. I did a test where I turned off the R0 tracking loop (ETMY L2->R0 length) by ramping the gain of the loop to zero on a 15 second ramp. The saturations stopped. I then ramped back on and saw the saturations return. I waited seven minutes and tried ramping on again and got the same saturation warnings. This test was done while we sat in state 560, which is lownoise_length_control. The state had completed and we held there in order to track down the saturations.

I can see the glitch that Jeff and Oli found in this alog, but I don't see any other glitches that caused the subsequent saturations when I was turning the gain on and off. The ramp time should be long enough to avoid any sort of issues with the impulse response, and the on/off test happened many minutes after the noted glitch, so I don't think they can be explained by this impulse response issue.

I don't necessarily think this indicates the satamp is the problem, except that we haven't had these saturations since the replacement, and this loop has been running for a long time without issue (my understanding is since O3b, but I don't know for certain).

I agree that a good way to avoid this issue is to engage the R0 tracking later on in the guardian.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:48, Monday 27 October 2025 (87777)
Ops Day Shift Summary

TITLE: 10/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Two lock acquisitions today, both of which involved pauses for some commissioning measurements, but otherwise they were automatic. Rode through an earthquake this afternoon using the high-gain ASC configuration, which very likely saved time with relocking afterwards. H1 has now been locked for 4 hours.

LOG:

Start  System  Name             Location  Laser Haz.  Task                  End
15:29  FAC     Nellie           MY        N           Technical cleaning    16:19
15:31  FAC     Kim              MX        N           Technical cleaning    16:19
15:31  FAC     Tyler            X-arm     N           Tumbleweed inventory  15:37
15:37  FAC     Randy            MX        N           Caulking BTE          19:31
17:58  IAS     Jason            Opt Lab   N           Checking parts        18:16
20:07  ISC     Keita            Opt Lab   Local       ISS array work        21:30
20:14  ISC     Rahul            Opt Lab   Local       ISS array work        21:30
21:20  VAC     Gerardo, Jordan  MX        N           Collecting equipment  21:56
LHO General
corey.gray@LIGO.ORG - posted 16:16, Monday 27 October 2025 - last comment - 16:45, Monday 27 October 2025(87775)
Mon Eve Ops Transition

TITLE: 10/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 3mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.26 μm/s 
QUICK SUMMARY:

Got the rundown from RyanS on H1's status (which has been good even with the EQs); it has been locked 3.5hrs.  Microseism has leveled off (between the 50th & 90th percentile lines, after its fast downturn yesterday), and winds are calm.

Comments related to this report
corey.gray@LIGO.ORG - 16:45, Monday 27 October 2025 (87778)PEM

Operator Checksheet NOTES:

  • Ran the "check_dust_monitors_are_working" script and got (1) a newish notification for the DR1 dust monitor not having its counts change, and (2) the usual notifications for the LVEA5 & LAB2 dust monitors.
H1 ISC
jennifer.wright@LIGO.ORG - posted 15:12, Monday 27 October 2025 (87773)
Discarded one uncommitted change in labutils

Jennie W, TJ Shaffer

 

I used git restore to undo the following change in the labutils/PIMON/plot_lockloss.py script.

git diff plot_lockloss.py
diff --git a/PIMON/plot_lockloss.py b/PIMON/plot_lockloss.py
index 34cc35a..9a5575e 100644
--- a/PIMON/plot_lockloss.py
+++ b/PIMON/plot_lockloss.py
@@ -5,7 +5,7 @@ import matplotlib.pyplot as plt
 from pathlib import Path
 from matplotlib.backends.backend_pdf import PdfPages
 
-file = sys.argv[1]
+file = '/ligo/data/pimon/locklosses/1445269860_lockloss_pi_data.npz'

 
 data = np.load(file, allow_pickle=True)

 RMS = lambda y: np.cumsum(y[::-1])[::-1]

Just so we can push the labutils repository.
 

H1 DetChar (DetChar)
vicente.sierra@LIGO.ORG - posted 13:53, Monday 27 October 2025 (87772)
Data Quality Shift Report: LHO 2025-10-20 to 2025-10-26

Link to report here.

Summary:

H1 General
ryan.short@LIGO.ORG - posted 13:17, Monday 27 October 2025 (87771)
Ops Day Mid-Shift Report

H1 returned to observing at 19:54 UTC after a fairly straightforward relocking process. DRMI took a while to catch, but once it did, we paused to take a few OLG measurements (alog 87768).

The 1 Hz ASC ring-up started right around reaching low noise. Elenna tried a couple of things to mitigate it, but we eventually just transitioned to the high-gain ASC to subdue it.

I had one SDF to accept (see screenshot) which appears to be from Tony's SQZ troubleshooting overnight (alog87760).

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 12:35, Monday 27 October 2025 (87770)
PSL 10-Day Trends

FAMIS 31109

PMC TRANS is increasing a bit again, but otherwise there were no major events this week. I adjusted the ISS RefSignal this morning to bring the diffracted power back to our target of 4%, something I've been meaning to do for a while.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 12:25, Monday 27 October 2025 - last comment - 14:51, Tuesday 28 October 2025(87768)
DRMI Inventory log

Some DRMI locking info

MICH, PRCL, SRCL filter banks during the "acquire DRMI 1f" state before the lock is grabbed.

OLGs for MICH, PRCL, SRCL after 1F acquisition, DRMI ASC engaged.

 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:51, Tuesday 28 October 2025 (87807)

MICH, PRCL, SRCL filter banks when DRMI 1F is locked, settings for the measurement time above.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:59, Monday 27 October 2025 (87769)
Mon CP1 Fill

Mon Oct 27 10:10:35 2025 INFO: Fill completed in 10min 32secs

 

Images attached to this report
H1 ISC
jennifer.wright@LIGO.ORG - posted 10:58, Monday 27 October 2025 (87766)
Change of reflected power from OMC during DARM offset step

I made plots of the anti-symmetric port power P_AS vs. the power at the reflected port of the OMC, P_OMC_REFL, during the DARM offset step measurement on September 4th (see LHO alogs #86785 and #87629).

I had to use the Beckhoff-reported power for this (H1:OMC-REFL_A_DC_POWER) as the front-end channel is calibrated incorrectly (see LHO alog #87648).

I also plotted P_OMC_REFL vs. P_DCPD, i.e. reflected vs. transmitted power for the OMC.

The plots for our two measurements, taken at different times during IFO thermalisation, are below; both were taken when OM2 was hot.
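
A minimal sketch of how the reflected-vs-transmitted comparison can be pulled together with gwpy is below; the GPS times are placeholders and the DCPD sum channel name is an assumption.

from gwpy.timeseries import TimeSeriesDict
import matplotlib.pyplot as plt

# GPS span of one of the DARM offset steps (placeholder values).
start, end = 1420000000, 1420001800

channels = [
    "H1:OMC-REFL_A_DC_POWER",     # Beckhoff OMC reflected power (named above)
    "H1:OMC-DCPD_SUM_OUT_DQ",     # assumed channel name for the OMC DCPD sum
]
data = TimeSeriesDict.get(channels, start, end)

# The Beckhoff channel is slow, so bring the DCPD sum down to the same rate
# before plotting reflected vs. transmitted power.
refl = data["H1:OMC-REFL_A_DC_POWER"]
trans = data["H1:OMC-DCPD_SUM_OUT_DQ"].resample(refl.sample_rate)
n = min(len(refl), len(trans))

fig, ax = plt.subplots()
ax.plot(trans.value[:n], refl.value[:n], ".", markersize=2)
ax.set_xlabel("OMC transmitted power (DCPD sum) [arb.]")
ax.set_ylabel("OMC reflected power (Beckhoff) [arb.]")
fig.savefig("omc_refl_vs_trans.png")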

Non-image files attached to this report
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 10:50, Monday 27 October 2025 (87767)
LHO Dummy Prism Position Measurement (glass test build vs. metal)

Ibrahim, TJ O'Hanlon

There was a question about where the prism position is with respect to the nominal position (known to be 2.6775mm from the center line, as figure 1 shows).

As it turns out, measuring from the scribe line at the center of the dummy test mass yields a "dummy mass upside down" answer that has confused us (LLO measurement). However, using the scribe on the glued tooling that we used to determine the secondary prism location gives the correct value, ~2.73mm. While all of these are rough measurements, I wanted to check whether LLO's upside-down metal-prism version is equivalent to LHO's glued-on scribe-on-prism tooling version. For the LLO comparison, I used the metal prism build dimensions from D1900580. According to a scale comparison (using line-length ratios), they are at least comparable (Figure 2).

TJ agrees about the scribe line I'm using, which is on D2400027 (also screenshotted below), and says "our prisms are in the same place but you are measuring off of the scribe line from D2400027 while I am looking at the center round mass. The center round mass scribe line would match with D2400027 if it wasn't flipped".

This means the LHO and LLO prisms are in the same spots, but the dummy mass scribe line is not to be trusted. That spot also matches the 2.67mm nominal value (to within at least +-1 mm error until measured more accurately).

I've attached some pictures of the measurement.

Images attached to this report
LHO General (Lockloss)
ryan.short@LIGO.ORG - posted 07:42, Monday 27 October 2025 - last comment - 10:16, Monday 27 October 2025(87761)
Ops Day Shift Start

TITLE: 10/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: EARTHQUAKE
    Wind: 10mph Gusts, 7mph 3min avg
    Primary useism: 0.19 μm/s
    Secondary useism: 0.26 μm/s 
QUICK SUMMARY: H1 lost lock at 12:48 UTC, after spending almost 16 hours locked, due to a M6.5 EQ out of the Caribbean. Still in EQ mode, so we will start relocking once ground motion calms down.

Comments related to this report
ryan.short@LIGO.ORG - 10:16, Monday 27 October 2025 (87764)Lockloss

H1 back to NLN at 16:57 UTC.

Ran an initial alignment, then set to relocking automatically. Paused in a couple of places on the way up for Elenna to make some ASC measurements; now starting some commissioning activities, which are slated to wrap up by 18:30 UTC.

And of course as soon as I'm about to post this, lockloss @ 17:14 UTC from what looks like an ETM glitch.

H1 SQZ (SQZ)
anthony.sanchez@LIGO.ORG - posted 04:03, Monday 27 October 2025 - last comment - 10:30, Monday 27 October 2025(87760)
Incoherent Ramblings from the AM

At [?] in the morning H1 was found locked but SQZ_Man was upset.
SQZ_Man was stuck in a loop from Beam_Div_Open_FRS -> FC_WAIT_FS.

H1:SQZ-SHG_TEC_SETTEMP was adjusted down to maximize H1:SQZ-SHG_GR_DC_POWERMON.

Still stuck in the loop.

Ran the noconda python switch_nom_sqz_states.py without script
....UH.... Broke all the SQZr Guardians.... Oops.
Ran the noconda python switch_nom_sqz_states.py with script
Took SQZ_Man to NO_SQUEEZING.
Ran the noconda python switch_nom_sqz_states.py without script.... again
Nothing broke!!! But still could not get to observing.
Had to manually take SQZ_SHG to DOWN.
Back to Observing. Range only at 133 Mpc.

 

Comments related to this report
sheila.dwyer@LIGO.ORG - 10:30, Monday 27 October 2025 (87765)

The problem was that the OPO pump ISS was running out of range, as the OPO reflected power has been slowly increasing since our last crystal move.  I've adjusted the wave plate on SQZT0 to allow more green power to be launched; this now gives a control mon of about 5 when the OPO transmission is 80 uW.

 

H1 PSL (ISC, PSL)
keita.kawabe@LIGO.ORG - posted 18:48, Friday 24 October 2025 - last comment - 16:47, Monday 27 October 2025(87729)
ISS array another design issue (Rahul, Keita)

Summary:

We aligned everything such that none of the 8 PDs was excellent but all were OK (we could also have set things up so that 4 PDs were excellent but a few were terrible, and decided not to take that). We were preparing to put the array in storage until the installation, only to find that something is wrong with the design of the asymmetric QPD clamp D1300963-V2. It's unusable as is.

The QPD clamp doesn't constrain the position of the QPD laterally, and there's a gross mismatch between the position of a properly aligned QPD and that of the center hole of the QPD clamp. Because of that, when the QPD is properly positioned, one of the QPD pins will touch the QPD clamp and be grounded unless the QPD connector is fixed in such a way as to pull the QPD pins sideways. Fortunately but sadly, the old non-tilt QPD clamp D1300963-V1 works better, so we'll use that.

Another minor issue is that there seems to be confusion about the direction of the QPD tilt in terms of the words "pitch" and "yaw". The way the QPD is tilted in D1101059-v5 (which is how things are set up in the lab as of now) doesn't seem to follow the design intent of ECR E1400231, though it follows the letter of it. After confirming with systems that this is the case, we'll change the QPD tilt direction (or not). This means we're not ready to put everything in storage quite yet.

None of this affects the PD array alignment we've done; it's just a problem with the QPD.

Pin grounding issue due to the QPD clamp design.

I loosened the screws for the QPD connector clamps (circled in blue in the first attachment) and the output of the QPD preamp went crazy with very large 60Hz noise and a large DC SUM even though there was no laser light.

I disconnected the QPD connector, removed the connector clamps too, and found that one pin of the QPD was short-circuited to ground via the QPD clamp (not to be confused with the QPD connector clamps, see 2nd attachment).

It turns out the offending pin was isolated during our adjustments the whole time because the QPD connector clamps were applying enough lateral as well as downward pressure that the pins were slightly bent away from the offending side. I was able to reattach the connector, push it laterally while tightening the clamp screws, and confirm that the QPD functioned fine. But this is not really where we want to be.

I rotated the QPD clamp 180 degrees (which turns out to make more sense judging from the drawings in the first attachment), which moved the QPD. Since the beam radius is about 0.2mm, if the QPD moves by 0.2mm it's not useful as a reference for the in-lab beam position. I turned the laser on and repositioned the QPD back to where it should be, but then the pin on the opposite side started touching. (Sorry, no picture.)

I put on the old non-tilt version of the clamp and it was much, much better (attachment 3). It's annoying because the screw holes don't have an angled recess: the screw head is tilted relative to the mating surface on the clamp, contacting at a single point, and tightening/loosening the screws tends to move the QPD. But it's possible to carefully tighten one screw a bit, then the other a bit, and repeat that a dozen times or so until nothing moves even when pushed firmly with a finger. After that, you can still move the QPD by tiny amounts by tapping the QPD assembly with a bigger Allen key, then tighten again.

What's going on here?

In the 4th attachment, you can see that the "center" hole of the QPD clamp is offset by 0.055" (1.4mm) in the direction orthogonal to A-A, and by about 0.07" (even though this number is not specified anywhere in the drawing) or 1.8mm in the A-A direction. So the total lateral offset is sqrt(1.4^2+1.8^2) ~ 2.3mm. OTOH, the QPD assy is only 0.5" thick, so the lateral shift arising from the 1.41deg tilt at the back of the QPD assy is just 1.41deg * (pi/180) * 0.5" = 0.0123" or 0.3mm.
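
A quick check of that arithmetic (a sketch; the values are the drawing dimensions quoted above):

import math

# Lateral offset of the clamp's "center" hole from the two in-plane offsets.
hole_offset_mm = math.hypot(1.4, 1.8)            # ~2.3 mm

# Lateral shift expected from tilting the 0.5"-thick QPD assembly by 1.41 deg.
tilt_shift_in = math.radians(1.41) * 0.5         # ~0.0123 in
tilt_shift_mm = tilt_shift_in * 25.4             # ~0.31 mm

print(f"hole offset ~{hole_offset_mm:.2f} mm, tilt-induced shift ~{tilt_shift_mm:.2f} mm")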

Given that the beam position relative to the array structure is determined by the array itself and not by how the QPD is mounted, a 2.3mm lateral shift is impossibly large; something must be wrong in the design. The 5th attachment is a visual aid.

Anyway, we'll use the old clamp; it's not worth designing and manufacturing new ones at this point.

QPD tilt direction.

If you go back to the first attachment, the QPD in the lab is tilted in the direction indicated by the red "tilt" arrow, as we just followed the drawing.

ECR E1400231 says "We have to tilt the QPD 1 deg in tip (pitch) and 1 deg in tilt (yaw)", and it sounds as if it corroborates the drawing.

However, I suspect that "pitch" and "yaw" in the above sentence might have been misused. In the right figure of the 6th attachment (an unedited screenshot of the ECR), it seems that the QPD reflection hits the elevator (the red 45 degree thing in the figure) at around the 6 o'clock position of the elliptic exit hole, which means that the QPD is tilted in its optical PIT. If it were really tilted 1 degree in optical PIT and 1 degree in optical YAW, the reflection would hit something like the 7:30 position instead of 6:00.

That makes sense, as the design intent of the ECR is to make sure that the QPD reflection will not go back into the exit hole. The 7th attachment is a side view I made: red lines represent the IR beams, yellow lines the internal hole(s) in the elevator, and green lines the aperture of the two elliptical exit holes. Nothing is to scale, but hopefully you agree that, in order to steer the QPD reflection outside of the exit hole aperture, PIT UP requires the largest tilt and PIT DOWN requires the least. We have a fixed QPD tilt, so it's best to PIT DOWN; that's what I read from the ECR. If you don't know which angle is bigger or smaller, see attachment 8.

Anyway, I'll ask Callum if my interpretation is correct, and will act accordingly.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 16:09, Monday 27 October 2025 (87774)

A followup summary:

Callum and Betsy say that I'm in the best position to judge, so I decided to tilt the QPD in its optical PIT.

It turns out that the QPD was already tilted in its optical PIT, so everything is fine(-ish). We'll put the unit in storage tomorrow.

It seems we were tricked by the part drawing of the tilt washer D1400146, not the assembly drawing D1101059.

Details:

Before rotating anything, I wanted to see if the reflection from the QPD could be seen on the aluminum part using the IR viewer, and indeed we could see something. The first attachment shows that some kind of diffraction pattern is hitting the barrel of the 1" 99:1 sampler in PIT. The second attachment shows that the bright spots are gone when Rahul blocked the beam going to the QPD, so it's clearly due to the QPD reflection. The pattern might come from the gaps at the QPD center. It wasn't clear whether the reflection was directly hitting the barrel through the AR side, or whether it hit the 99% coating and was reflected towards the barrel.

(There was also some IR visible in the input aperture, but the beam is much smaller than this aperture; I believe we're seeing scattered light coming back to this aperture from inside the array structure.)

We pulled the spare tilt washer D1400146-V1 (drawing with my red lines in the 3rd attachment) and measured the depth of the recess at the 12 o'clock position (A in the drawing), 3:00 (B), 6:00 (C), and 9:00 (D) using a caliper. It's a rough measurement, but we repeated it twice and got the following:

    Position       A      B (registration mark)   C      D
    Meas 1 [mm]    1.45   1.21                    1.45   1.70
    Meas 2 [mm]    1.41   1.21                    1.49   1.71
    Average [mm]   1.43   1.21                    1.47   1.705

Clearly B, at the registration mark, is the shallowest position and the opposite position D is the deepest. The recess diameter was measured to be 23.0mm (specified as between .906" and .911", or 23.01 to 23.14mm), so the tilt of the recess as measured is (1.705-1.21)/23 ~ 21.5mrad or 1.2 deg. This reasonably agrees with the 1.41deg specification and, more importantly, these measurements cannot be explained if the part had been manufactured as specified in the drawing.
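
The same tilt estimate as a quick calculation (numbers taken from the table above):

import math

shallow_mm = 1.21    # average depth at B (registration mark)
deep_mm = 1.705      # average depth at D (opposite side)
diameter_mm = 23.0   # measured recess diameter

tilt_rad = (deep_mm - shallow_mm) / diameter_mm
print(f"measured recess tilt ~{tilt_rad*1e3:.1f} mrad ({math.degrees(tilt_rad):.2f} deg)")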

It seems that the drawing of the tilt washer D1400146 is incorrect, or at least doesn't agree with reality, and that the assembly drawing D1101059 is correct, in that following it gives us the QPD tilt along optical PIT.

Seeing how the QPD reflection hits the barrel of the 99:1 sampler, the ghost-beam dumping doesn't look well thought out, but that's what it is.

The 4th picture shows the registration mark of the tilt ring as it was set in the lab, for future reference.

We've done the last QPD scan (it turns out that I happened to set the PIT/YAW angle really well). Data will be posted later. Now we're ready to pack things up.

Images attached to this comment
keita.kawabe@LIGO.ORG - 16:47, Monday 27 October 2025 (87779)

We "measured" the dimension of the new (non-functional) QPD clamp D1300963-V2 by taking a picture with a ruler.

The offset of the center bore along the line connecting the two screw holes was measured to be about 1.9mm, which agrees pretty well with the above alog where I wrote "(even though this number is not specified anywhere in the drawing) or 1.8mm".

Images attached to this comment