H1 General
anthony.sanchez@LIGO.ORG - posted 02:02, Saturday 06 July 2024 - last comment - 02:03, Saturday 06 July 2024(78893)
Friday Eve shift

TITLE: 07/06 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

Lockloss 1:06 UTC
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1404263217
Lockloss Screenshots attached

Relocking:
After the lockloss I had pretty small flashes on the X arm.
I allowed Increase Flashes to run and it didn't get me better than 0.3.
I then touched it up by hand and could not get it better than 3.1; trending back, I think the goal is above 1.
I then tried to get a better alignment by rolling back to an alignment from the beginning of the last lock.

I tried the alignment from after the last initial alignment.

I'm going to try to move just PR2 now.
Reverted all movements back to where they were right after we lost lock.

Sheila did a small pico motor move in HAM3.
Pico motor: ALS/POP steering labeled HAM1 (actually in HAM3).
H1:ALS-C_TRX_A_LF_GAIN was increased temporarily to make the X arm WFS run.

And Sheila did another move of ALS/Pop steering once the WFS were running.

Note the H1:ALS-X_FIBR_A_DEMOD_RFMON beat note dropped down to -38 and the threshold was lowered to -43.

Once this was done we could do an initial alignment, BUT we did not have anything on AS AIR.
Moved IM4 & PRM to get light on AS AIR and the REFL PRM camera.
Sheila used the IM4, PRM & PR2 OSEMs, matching prior OSEM values (a manual "WFS relieve past"), to move PR2, which gave us increased IR flashes.

Touched up PRM in yaw to lock PRX.

Finished Initial Alignment at 5:19 UTC

Locking was difficult; H1 would lose lock at FIND IR and Locking ALS.
I tried another initial alignment after it failed a number of times, since we had touched up the alignment by hand.
Even after that IA it was still losing lock at FIND IR and Locking ALS. Paused in Locking ALS to allow the WFS to calm down.
ALS WFS DOF 2 is pulling it away for some reason, but even when allowing the WFS to mellow out, it still catches a lockloss.
Finally got past DRMI !!! YAY!!!

LOCKLOSS!? From MAX POWER!?
7:30 UTC Random HEPI HAM1 Watchdog trip.
IOP SUS56, 34, & 23 all had an IOP DACKILL trip at the same time.
Seems like h1sush2a had a DAC error; calling in the CDS team.

The CDS team is resetting all the SUS front-end models because everything from HAM1 to HAM6 tripped in this timing glitch.

LOG:
Sheila remotely helped get me a good alignment and got me through a rough IA.
Dave B & Erik helped restart all the front ends.
Jim was also called due to the HEPI trip, as he was next on the call list.

Everyone has been cycled to the bottom of the list.

 

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 02:03, Saturday 06 July 2024 (78895)

See Dave's alog about the CDS timing error: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78892

H1 CDS
david.barker@LIGO.ORG - posted 01:12, Saturday 06 July 2024 - last comment - 09:26, Saturday 06 July 2024(78892)
Timing Error, restart of SUS frontends needed

Tony, Jim, Erik, Dave:

We had a timing error which caused DACKILLs on h1susb123, h1sush34, h1sush56 and DAC_FIFO errors on h1sush2a.

There was no obvious cause of the timing error which caused the Dolphin glitch; we noted that h1calcs was the only model with a DAQ CRC error (see attached).

After diag-reset and crc-reset only the SUS dackill and fifo error persisted. 

We restarted all the models on h1susb123, h1sush2a, h1sush34, h1sush56 after bypassing the SWWD SEI systems for BSC1,2,3 and HAM2,3,4,5,6.

SUS models came back OK, we removed the SEI SWWD bypasses and handed the system over to Tony.

Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 01:46, Saturday 06 July 2024 (78894)

H1SUSH2A DACs went into error 300 ms before the CRC SUM increased on h1calcs. 

 

The DAQ (I believe) reports CRC errors 120 ms after a dropped packet, leaving 180 ms unaccounted for.
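The 180 ms gap follows from subtracting the two quoted delays; a quick sketch of the timeline arithmetic (both input values are the figures from the observations above):

```python
# Timeline arithmetic for the glitch, in milliseconds relative to the
# h1calcs CRC SUM increase. Both inputs are the figures quoted above.
dac_error_lead_ms = 300        # h1sush2a DAC errors preceded the CRC SUM increase
daq_crc_report_delay_ms = 120  # DAQ reports a CRC error this long after a dropped packet

# If the dropped packet coincided with the DAC errors, the CRC report should
# have arrived 120 ms later, not 300 ms; the difference is the unexplained gap.
unaccounted_ms = dac_error_lead_ms - daq_crc_report_delay_ms
print(f"Unaccounted interval: {unaccounted_ms} ms")  # 180 ms
```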

Images attached to this comment
david.barker@LIGO.ORG - 09:17, Saturday 06 July 2024 (78901)

FRS31532 created for this issue. It has been closed as resolved-by-restarting.

david.barker@LIGO.ORG - 09:26, Saturday 06 July 2024 (78902)

Model restart logs from this morning:

Sat06Jul2024
LOC TIME HOSTNAME     MODEL/REBOOT
01:12:22 h1susb123    h1iopsusb123
01:12:33 h1sush2a     h1iopsush2a 
01:12:39 h1sush34     h1iopsush34 
01:12:43 h1susb123    h1susitmy   
01:12:47 h1sush2a     h1susmc1    
01:12:56 h1sush56     h1iopsush56 
01:12:57 h1susb123    h1susbs     
01:12:59 h1sush34     h1susmc2    
01:13:01 h1sush2a     h1susmc3    
01:13:10 h1sush56     h1sussrm    
01:13:11 h1susb123    h1susitmx   
01:13:13 h1sush34     h1suspr2    
01:13:15 h1sush2a     h1susprm    
01:13:24 h1sush56     h1sussr3    
01:13:25 h1susb123    h1susitmpi  
01:13:27 h1sush34     h1sussr2    
01:13:29 h1sush2a     h1suspr3    
01:13:38 h1sush56     h1susifoout 
01:13:52 h1sush56     h1sussqzout 
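The restart pattern above (each host's IOP model first, then its user models) is easy to check mechanically. A minimal sketch that parses lines in the `LOC TIME HOSTNAME MODEL` format; the grouping logic is hypothetical, not an actual CDS tool:

```python
from collections import defaultdict

# A few lines from the restart log above, in "time host model" form.
log = """\
01:12:22 h1susb123 h1iopsusb123
01:12:33 h1sush2a h1iopsush2a
01:12:43 h1susb123 h1susitmy
01:12:47 h1sush2a h1susmc1"""

restarts = defaultdict(list)
for line in log.splitlines():
    time, host, model = line.split()
    restarts[host].append(model)

# On each front end the IOP model (h1iop*) restarts before the user models.
for host, models in restarts.items():
    assert models[0].startswith("h1iop"), host
    print(host, "->", models)
```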
 

H1 General
anthony.sanchez@LIGO.ORG - posted 16:27, Friday 05 July 2024 (78890)
Friday Ops Eve Shift Start

TITLE: 07/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 9mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:
H1 has been locked and Observing for over 2 hours. Everything seems to be functioning well.


 

 

LHO General
corey.gray@LIGO.ORG - posted 16:24, Friday 05 July 2024 (78877)
Fri Ops DAY Shift Summary

TITLE: 07/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Most of today was supposed to be a straightforward 3 hrs of commissioning, but an earthquake lockloss and another random lockloss forced a majority of the shift to be locking/commissioning time.  For commissioning there were some PR2 spot moves, and Sheila made a note about possible/probable PR2 & IM4 (pitch) adjustments needed for the INPUT ALIGN part of initial alignment.
LOG:

H1 PEM
sheila.dwyer@LIGO.ORG - posted 14:23, Friday 05 July 2024 (78878)
PR2 spot moved, reduced power in beam and peak

Robert, Sheila, Camilla, Corey

Today we moved the spot on PR2, which reduced the power measured at the HAM3 viewport and the appearance of the 48 Hz peak in DARM when the black glass is removed from that viewport.

In our first lock of the day, Robert removed the black glass at 16:08 UTC; we took ~20 minutes of quiet time with the black glass removed before losing lock due to an earthquake.

We decided to move 42 urad based on the first screenshot, which shows the May move described in 77949; the arm powers improved in the first 42 urad of the PR3 yaw move.  After relocking we moved PR2 by 42 urad while the IFO was thermalizing (screenshot).  Ran the A2L script and got a few minutes of quiet time there, 20:20-20:31 UTC.  We then decided to take another step to see if we could reduce the power in the HAM2 beam further; we moved an additional 10 urad on PR3 yaw and saw a small decrease in the beam power.  We re-ran A2L here (added values to lscparams and loaded the guardian), and took some time with the illuminator on the viewport to check the height of the peak.  Camilla noticed that the squeezing didn't look optimal, so she's running SQZ alignment and angle scans.  There is some LSC coherence, mostly with PRCL.  We plan to add PRCL FF next week and re-tune MICH and SRCL.

Robert measured power in the beam exiting the HAM3 viewport at a few steps: original position: 47 mW; -20 urad yaw: 28 mW; -42 urad: 19 mW; -52 urad: 17 mW.
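For reference, those measurements correspond to the following fractional reductions relative to the original position (a quick calculation, not part of the original log):

```python
# Power exiting the HAM3 viewport vs. yaw offset, from the measurements
# quoted above (offset in urad, power in mW).
powers_mw = {0: 47, -20: 28, -42: 19, -52: 17}

baseline_mw = powers_mw[0]
for offset_urad, p_mw in sorted(powers_mw.items(), reverse=True):
    reduction_pct = 100 * (1 - p_mw / baseline_mw)
    print(f"{offset_urad:+4d} urad: {p_mw:2d} mW ({reduction_pct:.0f}% reduction)")
```

The final step thus cut the viewport power by roughly two thirds.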

Old alogs about this move:

Operator note:  We moved these beams using a script that moves sliders on several optics, so that ideally we should be able to relock without doing any significant alignment.  For yaw, the script was recently adjusted so that it really should work well, for pitch there may be some manual alignment of PR2 + IM4 needed to get initial alignment to lock the X arm in IR. 

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 12:45, Friday 05 July 2024 (78889)
Fri CP1 Fill, took two tries

First run at 10:00 did not complete; its thermocouples reached the trip temp of -130C before we got a flow of LN2.

Fri Jul 05 10:17:33 2024 INFO: Fill completed in 17min 29secs

We re-ran the fill at 12:00 with a lowered trip temp of -140C; this run was successful.

Fri Jul 05 12:04:36 2024 INFO: Fill completed in 4min 33secs

Trending the trip temps since this system was installed two years ago, we have never had to run with trip temps as low as -140C (summer settings have been -130C).

Tomorrow we will run the fill at the earlier time of 08:00 in case outside temps were a factor in today's issue. I have set the trip temp back to -130C.
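As I understand the fill logic described here, the fill terminates once a thermocouple reading cools past the configured trip temperature; a hedged sketch of that condition (the helper name and readings are hypothetical, not the actual fill-control code):

```python
def fill_tripped(tc_readings_c, trip_temp_c):
    """True once any thermocouple reading has fallen to the trip temperature.

    Hypothetical illustration of the trip condition described above: the fill
    ends when a TC cools below the trip setpoint.
    """
    return any(t <= trip_temp_c for t in tc_readings_c)

# Friday's first attempt: the TCs cooled past -130C before LN2 flow was
# established, so the -130C setting ended the fill early; -140C would not have.
readings_c = [-120, -128, -131, -135]
print(fill_tripped(readings_c, trip_temp_c=-130))  # True
print(fill_tripped(readings_c, trip_temp_c=-140))  # False
```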

Images attached to this report
H1 AOS
corey.gray@LIGO.ORG - posted 12:06, Friday 05 July 2024 (78888)
Fri Mid-Shift Status

Commissioning has had a rough time due to a big Earthquake (and then a mystery lockloss).  H1 is currently recovering from the 2nd lockloss of this morning.  The hope is to return to Commissioning if timing works out (with the main task being PR2 spot moves).  
As for locking, it's been fairly straightforward.

LHO VE
camilla.compton@LIGO.ORG - posted 11:51, Friday 05 July 2024 (78886)
OM2 heating pressure change normal/expected

Richard, Keita, Camilla

Richard questioned a 4e-9 pressure rise when we heated up OM2 in June 78573 (plot). The first time we heated OM2 after the O3/O4 vent, in November 2022 (plot), the spike was larger, 9e-9. On later OM2 heat-ups in O4a (July and September) the rise was smaller.

Keita says it is expected that the first time you heat something up after a vent the pressure can rise as particles from venting are burnt off; on future OM2 heat-ups we'd expect smaller pressure rises.

Images attached to this report
H1 ISC
corey.gray@LIGO.ORG - posted 11:23, Friday 05 July 2024 (78885)
LSC XARM T Ramp Reverted

H1 has recovered from the recent BC aftershock.  One SDF diff which came up was an LSC X-arm TRAMP; reverted it back to the setpoint of 0.0 (from 1.0).  Not really sure why this changed; at the time, H1 was in Check Mich Fringing.

Images attached to this report
H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 11:07, Friday 05 July 2024 (78884)
SEI ground seismometer mass position check - Monthly (#26491)

Monthly FAMIS Check (#26491)

T240 Centering Script Output:

Averaging Mass Centering channels for 10 [sec] ...
2024-07-05 10:59:44.259990

There are 15 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.56 [V]
ETMX T240 2 DOF Y/V = -0.401 [V]
ETMX T240 2 DOF Z/W = -0.481 [V]
ITMX T240 1 DOF X/U = -1.362 [V]
ITMX T240 1 DOF Y/V = 0.317 [V]
ITMX T240 1 DOF Z/W = 0.415 [V]
ITMX T240 3 DOF X/U = -1.42 [V]
ITMY T240 3 DOF X/U = -0.713 [V]
ITMY T240 3 DOF Z/W = -1.737 [V]
BS T240 1 DOF Y/V = -0.386 [V]
BS T240 3 DOF Y/V = -0.343 [V]
BS T240 3 DOF Z/W = -0.485 [V]
HAM8 1 DOF X/U = -0.312 [V]
HAM8 1 DOF Y/V = -0.481 [V]
HAM8 1 DOF Z/W = -0.774 [V]

All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = -0.132 [V]
ETMX T240 1 DOF Y/V = -0.101 [V]
ETMX T240 1 DOF Z/W = -0.157 [V]
ETMX T240 3 DOF X/U = -0.098 [V]
ETMX T240 3 DOF Y/V = -0.226 [V]
ETMX T240 3 DOF Z/W = -0.091 [V]
ETMY T240 1 DOF X/U = 0.04 [V]
ETMY T240 1 DOF Y/V = 0.091 [V]
ETMY T240 1 DOF Z/W = 0.153 [V]
ETMY T240 2 DOF X/U = -0.094 [V]
ETMY T240 2 DOF Y/V = 0.158 [V]
ETMY T240 2 DOF Z/W = 0.064 [V]
ETMY T240 3 DOF X/U = 0.165 [V]
ETMY T240 3 DOF Y/V = 0.062 [V]
ETMY T240 3 DOF Z/W = 0.095 [V]
ITMX T240 2 DOF X/U = 0.122 [V]
ITMX T240 2 DOF Y/V = 0.214 [V]
ITMX T240 2 DOF Z/W = 0.203 [V]
ITMX T240 3 DOF Y/V = 0.109 [V]
ITMX T240 3 DOF Z/W = 0.118 [V]
ITMY T240 1 DOF X/U = 0.037 [V]
ITMY T240 1 DOF Y/V = 0.044 [V]
ITMY T240 1 DOF Z/W = -0.066 [V]
ITMY T240 2 DOF X/U = 0.049 [V]
ITMY T240 2 DOF Y/V = 0.193 [V]
ITMY T240 2 DOF Z/W = 0.03 [V]
ITMY T240 3 DOF Y/V = 0.023 [V]
BS T240 1 DOF X/U = -0.204 [V]
BS T240 1 DOF Z/W = 0.089 [V]
BS T240 2 DOF X/U = -0.09 [V]
BS T240 2 DOF Y/V = 0.009 [V]
BS T240 2 DOF Z/W = -0.162 [V]
BS T240 3 DOF X/U = -0.202 [V]

Assessment complete.

STS Centering Script Output:

Averaging Mass Centering channels for 10 [sec] ...

2024-07-05 11:02:32.261843
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -4.008 [V]
STS EY DOF Z/W = 2.765 [V]

All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.507 [V]
STS A DOF Y/V = -0.728 [V]
STS A DOF Z/W = -0.647 [V]
STS B DOF X/U = 0.377 [V]
STS B DOF Y/V = 0.94 [V]
STS B DOF Z/W = -0.492 [V]
STS C DOF X/U = -0.655 [V]
STS C DOF Y/V = 0.894 [V]
STS C DOF Z/W = 0.344 [V]
STS EX DOF X/U = -0.06 [V]
STS EX DOF Y/V = 0.017 [V]
STS EX DOF Z/W = 0.087 [V]
STS EY DOF Y/V = 0.025 [V]
STS FC DOF X/U = 0.239 [V]
STS FC DOF Y/V = -1.056 [V]
STS FC DOF Z/W = 0.644 [V]

Assessment complete.
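Both outputs above apply the same absolute-value test with different thresholds (0.3 V for T240 proof masses, 2.0 V for STS). A minimal sketch of that check; the function and readings dict are hypothetical, not the actual FAMIS script:

```python
def split_by_range(readings_v, threshold_v):
    """Partition proof-mass voltage readings by |V| against a threshold."""
    out = {name: v for name, v in readings_v.items() if abs(v) > threshold_v}
    ok = {name: v for name, v in readings_v.items() if abs(v) <= threshold_v}
    return out, ok

# A few STS readings from the output above (volts); the STS threshold is 2.0 V.
sts = {"STS EY X/U": -4.008, "STS EY Z/W": 2.765, "STS EX X/U": -0.06}
out, ok = split_by_range(sts, threshold_v=2.0)
print(f"There are {len(out)} STS proof masses out of range ( > 2.0 [V] )!")
```

The T240 list uses the identical test with `threshold_v=0.3`.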

H1 PSL (PSL)
corey.gray@LIGO.ORG - posted 10:57, Friday 05 July 2024 (78883)
PSL Status Report (FAMIS #26263)

For FAMIS 26263:
Laser Status:
    NPRO output power is 1.818W (nominal ~2W)
    AMP1 output power is 66.81W (nominal ~70W)
    AMP2 output power is 136.9W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 2 days, 1 hr 53 minutes
    Reflected power = 20.62W
    Transmitted power = 104.8W
    PowerSum = 125.4W

FSS:
    It has been locked for 0 days 0 hr and 44 min
    TPD[V] = 0.6824V

ISS:
    The diffracted power is around 2.5%
    Last saturation event was 0 days 0 hours and 44 minutes ago


Possible Issues:
    PMC reflected power is high
    FSS TPD is low

H1 ISC
camilla.compton@LIGO.ORG - posted 10:27, Friday 05 July 2024 - last comment - 13:53, Tuesday 09 July 2024(78879)
Bruco ran for 2024/07/05 13:50UTC

Bruco ran for last night's 159MPc range (instructions from Elenna), using command below. Results here.

python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1404222629 --length=1000 --outfs=4096 --fres=0.1 --dir=/home/camilla.compton/public_html/brucos/GDS_CLEAN_1404222629 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_DARM_excluded.txt

Can see:

Comments related to this report
sheila.dwyer@LIGO.ORG - 13:53, Tuesday 09 July 2024 (78976)

There are several interesting things around 30 Hz (and around 40 Hz) in this BRUCO, which might all be related to some ground motion or acoustic noise witness.

LVEAFLOOR accelerometer

Several channels related to HAM2 motion, like MASTER_H2_DRIVE.  Around 38-40 Hz the BRUCO picks out lots of HAM2 channels and seems to prefer HAM2 over any other chamber.  It might be worth doing some HAM2 injections.

This time that Camilla chose was after the PSL alignment shift, but before we moved the beam on PR2 last Friday.

H1 General
corey.gray@LIGO.ORG - posted 09:43, Friday 05 July 2024 (78880)
M5.0 EQ Off BC Coast Knocks H1 Out Of Lock (During COMMISSIONING/PR2 Move)

Commissioning started at 1600 UTC / 9am local, but at 1637 UTC a magnitude 5.0 earthquake near British Columbia knocked H1 out of lock (this is the area where we have been having earthquakes the last day or so).

LHO General
corey.gray@LIGO.ORG - posted 07:38, Friday 05 July 2024 (78876)
Fri Ops Day Transition

TITLE: 07/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

H1's been locked 2.25 hrs, with 2-3 hr lock stretches starting about 15 hrs ago (with issues before that, as noted in previous logs, mainly post-Tuesday-maintenance this week).  Seeing small EQ spikes in recent hours after the big EQ about 22 hrs ago.  Low winds and microseism.  On the drive in, there was a thin layer of smoke on the horizon in about all directions post-4th of July, with no active/nearby plumes observed.

H1 SQZ
ryan.short@LIGO.ORG - posted 02:40, Friday 05 July 2024 - last comment - 10:16, Friday 05 July 2024(78875)
SQZ TTFSS Input Power Too High - Raised Threshold

H1 called for assistance at 08:45 UTC because it was able to lock up to NLN, but could not inject squeezing due to an error with the SQZ TTFSS. The specific error it reported was "Fiber trans PD error," then on the fiber trans screen it showed a "Power limit exceeded" message. The input power to the TTFSS (SQZ-FIBR_TRANS_DC_POWER) was indeed too high at 0.42mW where the high power limit was set at 0.40mW. Trending this back a few days, it seems that the power jumped up in the morning on July 3rd (I suspect when the fiber pickoff in the PSL was aligned) and it has been floating around that high power limit ever since. I'm not exactly sure why this time it was an issue, as we've had several hours of observing time since then.

I raised the high power limit from 0.40mW to 0.45mW, the TTFSS was able to lock without issue, SQZ_MANAGER brought all nodes up, and squeezing was injected as usual. I then accepted the new high power limit in SDF (attached) for H1 to start observing at 09:20 UTC.

Since this feels like a Band-Aid solution just to get H1 observing tonight, I encourage someone with more knowledge of the SQZ TTFSS to look into it as time allows.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:16, Friday 05 July 2024 (78881)

Vicky checked that we can have a max of 1mW as the sum of both fibers ( H1:SQZ-FIBR_PD_DC_POWER from PD User_Manual p14)  to stay in the linear operating range. To be safe for staying in observing, we've further increased the "high" threshold to  1mW.
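The resulting check, as I read Vicky's note, is on the summed power of both fibers rather than a single PD; a hedged sketch (the function is hypothetical, and the limit is the 1 mW linear-range figure quoted above):

```python
def fibr_power_within_limit(trans_dc_mw, second_fiber_mw, limit_mw=1.0):
    """True if the combined fiber power stays in the PD's linear range.

    Per the note above, H1:SQZ-FIBR_PD_DC_POWER (the sum of both fibers)
    should stay at or below ~1 mW.
    """
    return (trans_dc_mw + second_fiber_mw) <= limit_mw

# The 0.42 mW reading that tripped the old 0.40 mW per-PD threshold is well
# within the 1 mW combined limit (assuming a comparable second-fiber power).
print(fibr_power_within_limit(0.42, 0.40))  # True
```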

Images attached to this comment
H1 SQZ (SQZ)
ryan.crouch@LIGO.ORG - posted 01:00, Friday 05 July 2024 - last comment - 12:37, Friday 05 July 2024(78867)
OPS Thursday eve shift summary

TITLE: 07/05 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: An earthquake lockloss then a PI lockloss. Currently at MAX_POWER

Lock1:
Lock2:

Lock3:

 

To recap for SQZ, I have unmonitored 3 SQZ channels on syscssqz (H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON) that keep dropping us out of observing, until their root issue can be fixed (Fiber trans PD error; too much power on FIBR_TRANS?). I noticed that each time the gains change it also drops our cleaned range.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:37, Friday 05 July 2024 (78887)

It seems that, as you found, the issue was the max power threshold. Once Ryan raised the threshold in 78881, we didn't see this happen again (plot attached). I've re-monitored these 3 SQZ channels: sdfs attached (H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON), with TEMPERATURECONTROLS_ON accepted.

It's expected that the CLEAN range would drop, as that range only reports when the GRD-IFO_READY flag is true (which isn't the case when there are SDF diffs).

Images attached to this comment
H1 ISC
daniel.sigg@LIGO.ORG - posted 15:24, Thursday 04 July 2024 - last comment - 10:22, Friday 05 July 2024(78864)
ISCTEX Beatnote alignment improved

Keita Daniel

We found that the transimpedance gain of the ALS-X_FIBR_A_DC PD was wrong (changed it from 20k to 2k). In turn, this meant that 20mW of light was on this PD.

After looking at the beatnote amplitude directly at the PD and finding it to be way too small, we decided to swap the PD with a spare (new PD S/N S1200248, old PD S/N S1200251). However, this did not improve the beatnote amplitude. (The removed PD was put back into the spares cabinet.)

We then looked for clipping and found that the beam on the first beam sampler after the fiber port was close to the side. We moved the sampler so the beam is closer to the center of the optics. We also found the beam on the polarizing cube in the fiber path to be low. We moved the cube downwards to center the beam. After aligning the beam back to the broadband PD, the beatnote amplitude improved drastically. This alignment seems very sensitive.

We had to turn the power from the laser in the beat note path down from 20mW to about 6mW on the broadband PD.

This required a recalibration of the ALS-X_LASER_IR_PD photodiode. The laser output power in IR is about 60mW.

The beatnote strength as read by the medm screens is now 4-7dBm. Still seems to vary.

Comments related to this report
keita.kawabe@LIGO.ORG - 15:48, Thursday 04 July 2024 (78865)

To recap, the fundamental problem was the alignment (probably it was close to clipping before, and started clipping over time due to temperature drift or whatever). Also, the PBS mount, or maybe the mount post holder for the fiber beam, is not really great; a gentle push with a finger will flex something and change the alignment enough to change the beat note. We'll have to see for a while whether the beat note stays high enough.

The wrong transimpedance value in MEDM was not preventing the PLL from locking, but it was annoying. H1:ALS-X_FIBR_A_DC_TRANSIMPEDANCE was 20000 even though the interface box gain was 1.  This kind of thing confuses us and slows down troubleshooting. Whenever you change the gain of the BBPD interface box, please don't forget to change the transimpedance value at the same time (gain 1 = transimpedance 2k, gain 10 = 20k).
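The gain/transimpedance pairing described here can be captured as a simple lookup; a hypothetical consistency check, not an actual MEDM tool:

```python
# BBPD interface-box gain -> MEDM transimpedance value (ohms), per the note
# above: gain 1 = 2k, gain 10 = 20k.
TRANSIMPEDANCE_OHM = {1: 2_000, 10: 20_000}

def medm_consistent(box_gain, medm_transimpedance_ohm):
    """True if the MEDM transimpedance entry matches the interface-box gain."""
    return TRANSIMPEDANCE_OHM.get(box_gain) == medm_transimpedance_ohm

print(medm_consistent(1, 20_000))  # False: the mismatch found here
print(medm_consistent(1, 2_000))   # True after the fix
```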

We took a small pair of pliers from the EE shop and forgot to bring it back from EX (sorry).

Everything else should be back to where it was. The Thorlabs power meter box was put on Camilla's desk.

Images attached to this comment
keita.kawabe@LIGO.ORG - 10:22, Friday 05 July 2024 (78882)

It's still good, right now it's +5 to +6 dBm.

Too early to tell, but we might be diurnally going back and forth between +3-ish and +7-ish dBm.  A 4 dB power variation is big (a factor of ~2.5).
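The factor-of-~2.5 figure is just the 4 dB swing converted to a linear power ratio:

```python
# Convert the observed beatnote swing from dB to a linear power ratio
# (a power ratio, since dBm is a power unit: ratio = 10^(dB/10)).
db_swing = 4.0
power_ratio = 10 ** (db_swing / 10)
print(f"{db_swing} dB -> x{power_ratio:.2f} in power")  # x2.51
```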

If this is diurnal, it's probably explained by alignment drift, i.e. we're not yet sitting close to the global maximum. It's not worth touching up the alignment unless this becomes a problem, but if we do decide to make it better some time in the future, remember that we will have to touch both the PBS and the fiber launcher (or lens).

Images attached to this comment