Displaying reports 6761-6780 of 83355.
Reports until 11:07, Friday 05 July 2024
H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 11:07, Friday 05 July 2024 (78884)
SEI ground seismometer mass position check - Monthly (#26491)

Monthly FAMIS Check (#26491)

T240 Centering Script Output:

Averaging Mass Centering channels for 10 [sec] ...
2024-07-05 10:59:44.259990

There are 15 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.56 [V]
ETMX T240 2 DOF Y/V = -0.401 [V]
ETMX T240 2 DOF Z/W = -0.481 [V]
ITMX T240 1 DOF X/U = -1.362 [V]
ITMX T240 1 DOF Y/V = 0.317 [V]
ITMX T240 1 DOF Z/W = 0.415 [V]
ITMX T240 3 DOF X/U = -1.42 [V]
ITMY T240 3 DOF X/U = -0.713 [V]
ITMY T240 3 DOF Z/W = -1.737 [V]
BS T240 1 DOF Y/V = -0.386 [V]
BS T240 3 DOF Y/V = -0.343 [V]
BS T240 3 DOF Z/W = -0.485 [V]
HAM8 1 DOF X/U = -0.312 [V]
HAM8 1 DOF Y/V = -0.481 [V]
HAM8 1 DOF Z/W = -0.774 [V]

All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = -0.132 [V]
ETMX T240 1 DOF Y/V = -0.101 [V]
ETMX T240 1 DOF Z/W = -0.157 [V]
ETMX T240 3 DOF X/U = -0.098 [V]
ETMX T240 3 DOF Y/V = -0.226 [V]
ETMX T240 3 DOF Z/W = -0.091 [V]
ETMY T240 1 DOF X/U = 0.04 [V]
ETMY T240 1 DOF Y/V = 0.091 [V]
ETMY T240 1 DOF Z/W = 0.153 [V]
ETMY T240 2 DOF X/U = -0.094 [V]
ETMY T240 2 DOF Y/V = 0.158 [V]
ETMY T240 2 DOF Z/W = 0.064 [V]
ETMY T240 3 DOF X/U = 0.165 [V]
ETMY T240 3 DOF Y/V = 0.062 [V]
ETMY T240 3 DOF Z/W = 0.095 [V]
ITMX T240 2 DOF X/U = 0.122 [V]
ITMX T240 2 DOF Y/V = 0.214 [V]
ITMX T240 2 DOF Z/W = 0.203 [V]
ITMX T240 3 DOF Y/V = 0.109 [V]
ITMX T240 3 DOF Z/W = 0.118 [V]
ITMY T240 1 DOF X/U = 0.037 [V]
ITMY T240 1 DOF Y/V = 0.044 [V]
ITMY T240 1 DOF Z/W = -0.066 [V]
ITMY T240 2 DOF X/U = 0.049 [V]
ITMY T240 2 DOF Y/V = 0.193 [V]
ITMY T240 2 DOF Z/W = 0.03 [V]
ITMY T240 3 DOF Y/V = 0.023 [V]
BS T240 1 DOF X/U = -0.204 [V]
BS T240 1 DOF Z/W = 0.089 [V]
BS T240 2 DOF X/U = -0.09 [V]
BS T240 2 DOF Y/V = 0.009 [V]
BS T240 2 DOF Z/W = -0.162 [V]
BS T240 3 DOF X/U = -0.202 [V]

Assessment complete.

STS Centering Script Output:

Averaging Mass Centering channels for 10 [sec] ...

2024-07-05 11:02:32.261843
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -4.008 [V]
STS EY DOF Z/W = 2.765 [V]

All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.507 [V]
STS A DOF Y/V = -0.728 [V]
STS A DOF Z/W = -0.647 [V]
STS B DOF X/U = 0.377 [V]
STS B DOF Y/V = 0.94 [V]
STS B DOF Z/W = -0.492 [V]
STS C DOF X/U = -0.655 [V]
STS C DOF Y/V = 0.894 [V]
STS C DOF Z/W = 0.344 [V]
STS EX DOF X/U = -0.06 [V]
STS EX DOF Y/V = 0.017 [V]
STS EX DOF Z/W = 0.087 [V]
STS EY DOF Y/V = 0.025 [V]
STS FC DOF X/U = 0.239 [V]
STS FC DOF Y/V = -1.056 [V]
STS FC DOF Z/W = 0.644 [V]

Assessment complete.
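Both checks above follow the same pattern: average each mass-position channel for 10 seconds, then flag any proof mass whose mean reading exceeds the sensor's threshold (0.3 V for the T240s, 2.0 V for the STSs). A minimal sketch of that flagging logic, using a few of the values reported above as hard-coded stand-ins for real channel reads:

```python
# Sketch of the mass-centering check: flag proof-mass readings whose
# magnitude exceeds the per-sensor threshold. Channel names and values
# here are copied from the output above, not live EPICS reads.
T240_LIMIT = 0.3  # [V]
STS_LIMIT = 2.0   # [V]

def flag_out_of_range(readings, limit):
    """Split {channel: mean voltage} into (out_of_range, in_range) dicts."""
    out = {ch: v for ch, v in readings.items() if abs(v) > limit}
    ok = {ch: v for ch, v in readings.items() if abs(v) <= limit}
    return out, ok

# Example with a few of the T240 values reported above
t240 = {
    "ETMX T240 2 DOF X/U": -0.56,
    "ETMX T240 1 DOF X/U": -0.132,
    "ITMX T240 1 DOF X/U": -1.362,
}
out, ok = flag_out_of_range(t240, T240_LIMIT)
for ch, v in sorted(out.items()):
    print(f"{ch} = {v} [V]  (out of range)")
```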

H1 PSL (PSL)
corey.gray@LIGO.ORG - posted 10:57, Friday 05 July 2024 (78883)
PSL Status Report (FAMIS #26263)

For FAMIS 26263:
Laser Status:
    NPRO output power is 1.818W (nominal ~2W)
    AMP1 output power is 66.81W (nominal ~70W)
    AMP2 output power is 136.9W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 2 days, 1 hr 53 minutes
    Reflected power = 20.62W
    Transmitted power = 104.8W
    PowerSum = 125.4W

FSS:
    It has been locked for 0 days 0 hr and 44 min
    TPD[V] = 0.6824V

ISS:
    The diffracted power is around 2.5%
    Last saturation event was 0 days 0 hours and 44 minutes ago


Possible Issues:
    PMC reflected power is high
    FSS TPD is low

H1 ISC
camilla.compton@LIGO.ORG - posted 10:27, Friday 05 July 2024 - last comment - 13:53, Tuesday 09 July 2024(78879)
Bruco ran for 2024/07/05 13:50 UTC

Bruco ran for last night's 159 Mpc range (instructions from Elenna), using the command below. Results here.

python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1404222629 --length=1000 --outfs=4096 --fres=0.1 --dir=/home/camilla.compton/public_html/brucos/GDS_CLEAN_1404222629 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_DARM_excluded.txt

Can see:

Comments related to this report
sheila.dwyer@LIGO.ORG - 13:53, Tuesday 09 July 2024 (78976)

There are several interesting things at around 30 Hz (and around 40 Hz) in this BRUCO, which might all be related to some ground motion or acoustic noise witness.

LVEAFLOOR accelerometer

several channels related to HAM2 motion, like MASTER_H2_DRIVE. Around 38-40 Hz, BRUCO picks out lots of HAM2 channels, and seems to prefer HAM2 over any other chamber. It might be worth doing some HAM2 injections.

This time that Camilla chose was after the PSL alignment shift, but before we moved the beam on PR2 last Friday.

H1 General
corey.gray@LIGO.ORG - posted 09:43, Friday 05 July 2024 (78880)
M5.0 EQ Off BC Coast Knocks H1 Out Of Lock (During COMMISSIONING/PR2 Move)

Commissioning started at 1600 UTC (9am local), but at 1637 UTC a magnitude 5.0 earthquake near British Columbia knocked H1 out of lock (this is the region where we have been having earthquakes the last day or so).

LHO General
corey.gray@LIGO.ORG - posted 07:38, Friday 05 July 2024 (78876)
Fri Ops Day Transition

TITLE: 07/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

H1's been locked 2.25 hrs, with 2-3 hr lock stretches starting about 15 hrs ago (with issues before that, as noted in previous logs, mainly post-Tuesday-maintenance this week). Seeing small EQ spikes in recent hours after the big EQ about 22 hrs ago. Low winds and microseism. On the drive in, there was a thin layer of smoke on the horizon in about all directions post-4th of July, with no active/nearby plumes observed.

H1 SQZ
ryan.short@LIGO.ORG - posted 02:40, Friday 05 July 2024 - last comment - 10:16, Friday 05 July 2024(78875)
SQZ TTFSS Input Power Too High - Raised Threshold

H1 called for assistance at 08:45 UTC because it was able to lock up to NLN, but could not inject squeezing due to an error with the SQZ TTFSS. The specific error it reported was "Fiber trans PD error," then on the fiber trans screen it showed a "Power limit exceeded" message. The input power to the TTFSS (SQZ-FIBR_TRANS_DC_POWER) was indeed too high at 0.42mW where the high power limit was set at 0.40mW. Trending this back a few days, it seems that the power jumped up in the morning on July 3rd (I suspect when the fiber pickoff in the PSL was aligned) and it has been floating around that high power limit ever since. I'm not exactly sure why this time it was an issue, as we've had several hours of observing time since then.

I raised the high power limit from 0.40mW to 0.45mW, the TTFSS was able to lock without issue, SQZ_MANAGER brought all nodes up, and squeezing was injected as usual. I then accepted the new high power limit in SDF (attached) for H1 to start observing at 09:20 UTC.

Since this feels like a Band-Aid solution just to get H1 observing tonight, I encourage someone with more knowledge of the SQZ TTFSS to look into it as time allows.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:16, Friday 05 July 2024 (78881)

Vicky checked that we can have a max of 1mW as the sum of both fibers (H1:SQZ-FIBR_PD_DC_POWER, from the PD User_Manual p14) to stay in the linear operating range. To be safe for staying in observing, we've further increased the "high" threshold to 1mW.

Images attached to this comment
H1 SQZ (SQZ)
ryan.crouch@LIGO.ORG - posted 01:00, Friday 05 July 2024 - last comment - 12:37, Friday 05 July 2024(78867)
OPS Thursday eve shift summary

TITLE: 07/05 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: An earthquake lockloss, then a PI lockloss. Currently at MAX_POWER.

Lock1:
Lock2:
Lock3:
To recap for SQZ: I have unmonitored 3 SQZ channels on syscssqz (H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON) that keep dropping us out of observing, until their root issue can be fixed (Fiber trans PD error; too much power on FIBR_TRANS?). I noticed that each time the GAINs change, it also drops our cleaned range.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:37, Friday 05 July 2024 (78887)

It seems that, as you found, the issue was the max power threshold. Once Ryan raised the threshold in 78881, we didn't see this happen again; plot attached. I've re-monitored these 3 SQZ channels (H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON), with TEMPERATURECONTROLS_ON accepted; SDFs attached.

It's expected that the CLEAN range would drop, as that range only reports when the GRD-IFO_READY flag is true (which isn't the case when there are SDF diffs).

Images attached to this comment
H1 General (Lockloss, SUS)
ryan.crouch@LIGO.ORG - posted 23:38, Thursday 04 July 2024 (78874)
06:37 UTC

PI ring-up lockloss? The guardian was unable to damp down PI modes 28 and 29, and we eventually lost lock.

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1404196646

H1 SEI
ryan.crouch@LIGO.ORG - posted 21:46, Thursday 04 July 2024 (78873)
H1 ISI CPS Noise Spectra Check - Weekly

Closes FAMIS25997, last checked in alog78550

ITMX_ST2_CPSINF_H1 has gotten noisier at high frequency

Everything else looks the same as previously.

Non-image files attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 19:15, Thursday 04 July 2024 - last comment - 20:56, Thursday 04 July 2024(78870)
02:13 UTC lockloss

Lost lock from an earthquake

Comments related to this report
ryan.crouch@LIGO.ORG - 19:35, Thursday 04 July 2024 (78871)

XARM kept giving "fiber polarization error" and was in the CHANGE_POL state; neither I nor the guardian could get H1:ALS-X_FIBR_LOCK_FIBER_POLARIZATIONPERCENT below 14 using the polarization controller. I called Sheila and she suggested turning on H1:ALS-X_FIBR_LOCK_LOGIC_FORCE, which fixed it!

ryan.crouch@LIGO.ORG - 20:56, Thursday 04 July 2024 (78872)

03:56 UTC Observing

H1 SEI (SEI)
neil.doerksen@LIGO.ORG - posted 18:35, Thursday 04 July 2024 - last comment - 09:14, Friday 12 July 2024(78869)
Earthquake Analysis: Similar on-site wave velocities may or may not cause lockloss; why?

It seems earthquakes causing similar magnitudes of movement on-site may or may not cause lockloss. Why is this happening? We would expect similar events to always or never cause lockloss. One suspicion is that common versus differential motion might lend itself better to keeping or breaking lock.

- Lockloss is defined as H1:GRD-ISC_LOCK_STATE_N going to 0 (or near 0).
- I correlated H1:GRD-ISC_LOCK_STATE_N with H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON peaks between 500 and 2500 μm/s.
- I manually scrolled through the data from present to 2 May 2024 to find events.
    - Manual, because 1) I wanted to start with a small sample size and quickly see if there was a pattern, and 2) I needed to find events that caused lockloss, then go find similarly sized events where we kept lock.
- Channels I looked at include:
    - IMC-REFL_SERVO_SPLITMON
    - GRD-ISC_LOCK_STATE_N
    - ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON ("CS_PEAK")
    - SEI-CARM_GNDBLRMS_30M_100M
    - SEI-DARM_GNDBLRMS_30M_100M
    - SEI-XARM_GNDBLRMS_30M_100M
    - SEI-YARM_GNDBLRMS_30M_100M
    - SEI-CARM_GNDBLRMS_100M_300M
    - SEI-DARM_GNDBLRMS_100M_300M
    - SEI-XARM_GNDBLRMS_100M_300M
    - SEI-YARM_GNDBLRMS_100M_300M
    - ISI-GND_STS_ITMY_X_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Y_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Z_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_X_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Y_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Z_BLRMS_100M_300M
    - SUS-SRM_M3_COILOUTF_LL_INMON
    - SUS-SRM_M3_COILOUTF_LR_INMON
    - SUS-SRM_M3_COILOUTF_UL_INMON
    - SUS-SRM_M3_COILOUTF_UR_INMON
    - SUS-PRM_M3_COILOUTF_LL_INMON
    - SUS-PRM_M3_COILOUTF_LR_INMON
    - SUS-PRM_M3_COILOUTF_UL_INMON
    - SUS-PRM_M3_COILOUTF_UR_INMON

        - ndscope template saved as neil_eq_temp2.yaml

- 26 events; 14 lockloss, 12 locked (3 or 4 lockloss events may have non-seismic causes)

- After using CS_PEAK to find the events, I have so far used the ISI channels to analyze them.
    - The SEI channels were created last week (only 2 events captured in these channels so far).

- Conclusions:
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *lost* lock:
        - In SEI 30M-100M
            - 4 have z-axis-dominant motion, with either no motion or strong z-motion in SEI 100M-300M
            - 2 have y-axis-dominated motion, with a lot of activity in SEI 100M-300M and y-motion dominating some of the time.
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *kept* lock:
        - In SEI 30M-100M
            - 5 have z-axis-dominant motion, with only general noise in SEI 100M-300M
            - 1 has z-axis-dominant noise near the peak in CS_PEAK and strong y-axis-dominated motion starting 4 min prior to the CS_PEAK peak; it too has only general noise in SEI 100M-300M. This x- or y-motion starting about 4 min before the CS_PEAK peak has been observed in 5 events -- Love waves precede Rayleigh waves, so could these be Love waves?
    - All events below 1,000 μm/s in which we lost lock seem to have dominant y-motion in either or both of SEI 30M-100M and 100M-300M. However, the sample size is not large enough to convince me that shear motion is what causes lockloss, but it is large enough to convince me to find more events and verify. (Some plots attached.)
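The event-selection step described above (peaks in CS_PEAK between 500 and 2500 μm/s, cross-referenced against the guardian lock state) can be sketched roughly as follows. The arrays here are synthetic stand-ins, and the function name and the "near 0" threshold of 10 are assumptions for illustration; a real analysis would read the channels listed above via NDS rather than scroll manually:

```python
# Hedged sketch of the manual event search: find ground-velocity peaks in a
# chosen band and label each with whether lock survived. Data is synthetic.
def find_eq_events(peak_velocity, lock_state, lo=500.0, hi=2500.0):
    """Return [(index, velocity, lost_lock)] for local maxima in [lo, hi] um/s.

    peak_velocity: samples of ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON [um/s]
    lock_state:    samples of GRD-ISC_LOCK_STATE_N (near 0 => lock lost)
    """
    events = []
    for i in range(1, len(peak_velocity) - 1):
        v = peak_velocity[i]
        is_peak = peak_velocity[i - 1] < v >= peak_velocity[i + 1]
        if is_peak and lo <= v <= hi:
            # Call it a lockloss if the guardian state drops near 0
            # in a short window after the peak
            window = lock_state[i:min(i + 3, len(lock_state))]
            events.append((i, v, min(window) < 10))
    return events

# Tiny worked example: one peak at 1200 um/s, after which lock is lost
vel = [100, 300, 1200, 800, 200]
lock = [600, 600, 600, 0, 0]
print(find_eq_events(vel, lock))  # [(2, 1200, True)]
```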

Images attached to this report
Comments related to this report
beverly.berger@LIGO.ORG - 09:08, Sunday 07 July 2024 (78921)DCS, SEI

In a study with student Alexis Vazquez (see the poster at https://dcc.ligo.org/LIGO-G2302420), we found that there was an intermediate range of peak ground velocities in EQs where lock could be either lost or maintained. We also found some evidence that lockloss in this case might be correlated with high microseism (either ambient or caused by the EQ). See the figures in the linked poster under Findings and Validation.

neil.doerksen@LIGO.ORG - 09:14, Friday 12 July 2024 (79070)SEI

One of the plots (2nd row, 2nd column) has the incorrect x-channel on some of the images (all posted images are correct, by chance). The patterns reported may not be correct; I will reanalyze.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:34, Thursday 04 July 2024 (78868)
Thursday OPS Day Shift End

TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

Today H1 was down for ALS maintenance and replacement of ALS X PD as described in Daniel's alog (78864)
Once they returned, I started an Initial_Alignment and then started locking.
Observing was reached at 23:28 UTC.
There have been a number of earthquakes right off the coast of Victoria Island B.C. today.

LOG:

Start Time | System | Name          | Location   | Laser Haz | Task                                | Time End
16:08      | SAF    | LVEA          | LVEA       | YES       | LVEA IS LASER HAZARD                | 10:08
17:31      | PEM    | Robert        | EX         | N         | Going to EX, not inside the VEA     | 17:44
18:11      | ALS    | Daniel, Keita | EX         | Yes       | Troubleshooting ALS beatnote issues | 21:11
23:26      | FAC    | Tony          | Water tank | N         | Closing water-diverting valves      | 23:26
H1 General
ryan.crouch@LIGO.ORG - posted 16:05, Thursday 04 July 2024 (78866)
OPS Thursday eve shift start

TITLE: 07/04 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s 
QUICK SUMMARY:

H1 ISC
daniel.sigg@LIGO.ORG - posted 15:24, Thursday 04 July 2024 - last comment - 10:22, Friday 05 July 2024(78864)
ISCTEX Beatnote alignment improved

Keita Daniel

We found that the transimpedance gain of the ALS-X_FIBR_A_DC PD was wrong (we changed it from 20k to 2k). In turn, this meant that 20mW of light was incident on this PD.

After looking at the beatnote amplitude directly at the PD and finding it to be far too small, we decided to swap the PD with a spare (new PD S/N S1200248, old PD S/N S1200251). However, this did not improve the beatnote amplitude. (The removed PD was put back into the spares cabinet.)

We then looked for clipping and found that the beam on the first beam sampler after the fiber port was close to the side. We moved the sampler so the beam is closer to the center of the optics. We also found the beam on the polarizing cube in the fiber path to be low. We moved the cube downwards to center the beam. After aligning the beam back to the broadband PD, the beatnote amplitude improved drastically. This alignment seems very sensitive.

We had to turn the power from the laser in the beat note path down from 20mW to about 6mW on the broadband PD.

This required a recalibration of the ALS-X_LASER_IR_PD photodiode. The laser output power in IR is about 60mW.

The beatnote strength as read by the MEDM screens is now 4-7dBm. It still seems to vary.

Comments related to this report
keita.kawabe@LIGO.ORG - 15:48, Thursday 04 July 2024 (78865)

To recap, the fundamental problem was the alignment (it was probably close to clipping before, and started clipping over time due to temperature shift or whatever). Also, the PBS mount, or maybe the post holder for the fiber-beam mount, is not really great: a gentle push with a finger will flex something and change the alignment enough to change the beatnote. We'll have to watch for a while to see if the beatnote stays high enough.

The wrong transimpedance value in MEDM was not preventing the PLL from locking, but it was annoying: H1:ALS-X_FIBR_A_DC_TRANSIMPEDANCE was 20000 even though the interface box gain was 1. This kind of thing confuses us and slows down troubleshooting. Whenever you change the gain of the BBPD interface box, please don't forget to change the transimpedance value at the same time (gain 1 = transimpedance 2k, gain 10 = 20k).
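The gain/transimpedance pairing matters because the optical power inferred from the PD's DC voltage scales inversely with the transimpedance: a 10x-too-high value makes the readback report 10x less light than is actually there. A sketch of the conversion, where the 0.7 A/W responsivity and the 8.4 V example reading are illustrative assumptions, not measured properties of the actual ALS BBPD:

```python
# Sketch: infer optical power on a photodiode from its DC output voltage.
# P = V / (responsivity * transimpedance). Responsivity of 0.7 A/W is an
# assumed illustrative value for a ~1 um photodiode.
def optical_power_mw(v_dc, transimpedance_ohm, responsivity_a_per_w=0.7):
    """Return optical power in mW for DC voltage v_dc [V]."""
    return v_dc / (responsivity_a_per_w * transimpedance_ohm) * 1e3

# With the correct 2k transimpedance, an 8.4 V reading implies 6 mW...
p_correct = optical_power_mw(8.4, 2_000)
# ...but a stale 20k value would report ten times less light.
p_wrong = optical_power_mw(8.4, 20_000)
print(round(p_correct, 3), round(p_wrong, 3))  # 6.0 0.6
```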

We took a small plier from the EE shop and forgot to bring it back from the EX (sorry).

Everything else should be back to where it was. The Thorlabs power meter box was put on Camilla's desk.

Images attached to this comment
keita.kawabe@LIGO.ORG - 10:22, Friday 05 July 2024 (78882)

It's still good; right now it's +5 to +6 dBm.

Too early to tell, but we might be diurnally going back and forth between +3-ish and +7-ish dBm. A 4dB power variation is big (a factor of ~2.5).
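The factor-of-~2.5 figure follows from the usual decibel relation for power, ratio = 10^(dB/10). A one-line check:

```python
# Convert a power difference in dB to a linear power ratio: 4 dB ~ x2.5
def db_to_ratio(db):
    return 10 ** (db / 10)

print(round(db_to_ratio(4.0), 2))  # 2.51
```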

If this is diurnal, it's probably explained by alignment drift, i.e. we're not yet sitting close to the global maximum. It's not worth touching up the alignment unless this becomes a problem, but if we do decide to improve it some time in the future, remember that we will have to touch both the PBS and the fiber launcher (or lens).

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 15:14, Thursday 04 July 2024 (78863)
ALSY SDF screenshots of work done by Daniel and Keita today

6 channels were accepted in the SDF diffs after the ALS adjustments done today.

Images attached to this report