H1 SQZ (SQZ)
ryan.crouch@LIGO.ORG - posted 01:00, Friday 05 July 2024 - last comment - 12:37, Friday 05 July 2024(78867)
OPS Thursday eve shift summary

TITLE: 07/05 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: An earthquake lockloss, then a PI lockloss. Currently at MAX_POWER.

Lock1:
Lock2:

Lock3:

 

To recap for SQZ: I have unmonitored 3 SQZ channels on syscssqz (H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON) that keep dropping us out of observing, until their root issue can be fixed (fiber trans PD error; too much power on FIBR_TRANS?). I noticed that each time the GAINs change, it also drops our cleaned range.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:37, Friday 05 July 2024 (78887)

It seems that, as you found, the issue was the max power threshold. Once Ryan raised the threshold in 78881, we didn't see this happen again (plot attached). I've re-monitored these 3 SQZ channels (H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON); SDFs attached, with TEMPERATURECONTROLS_ON accepted.

It's expected that the CLEAN range would drop, as that range only reports when the GRD-IFO_READY flag is true (which isn't the case when there are SDF diffs).

Images attached to this comment
H1 General (Lockloss, SUS)
ryan.crouch@LIGO.ORG - posted 23:38, Thursday 04 July 2024 (78874)
06:37 UTC

PI ring-up lockloss? PI28 and PI29 could not be damped down by the guardian, and we eventually lost lock.

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1404196646

H1 SEI
ryan.crouch@LIGO.ORG - posted 21:46, Thursday 04 July 2024 (78873)
H1 ISI CPS Noise Spectra Check - Weekly

Closes FAMIS25997, last checked in alog78550

ITMX_ST2_CPSINF_H1 has gotten noisier at high frequency

Everything else looks the same as previously.

Non-image files attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 19:15, Thursday 04 July 2024 - last comment - 20:56, Thursday 04 July 2024(78870)
02:13 UTC lockloss

Lost lock from an earthquake

Comments related to this report
ryan.crouch@LIGO.ORG - 19:35, Thursday 04 July 2024 (78871)

XARM kept giving "fiber polarization error" and was in the CHANGE_POL state; neither I nor the guardian could get H1:ALS-X_FIBR_LOCK_FIBER_POLARIZATIONPERCENT below 14 using the polarization controller. I called Sheila and she suggested turning on H1:ALS-X_FIBR_LOCK_LOGIC_FORCE, which fixed it!

ryan.crouch@LIGO.ORG - 20:56, Thursday 04 July 2024 (78872)

03:56 UTC Observing

H1 SEI (SEI)
neil.doerksen@LIGO.ORG - posted 18:35, Thursday 04 July 2024 - last comment - 09:14, Friday 12 July 2024(78869)
Earthquake Analysis: Similar on-site wave velocities may or may not cause lockloss; why?

It seems earthquakes causing similar magnitudes of on-site ground motion may or may not cause lockloss. Why is this happening? We would expect similar events to either always or never cause lockloss. One suspicion is that common or differential motion might lend itself better to keeping or breaking lock.

- Lockloss is defined as H1:GRD-ISC_LOCK_STATE_N going to 0 (or near 0).
- I correlated H1:GRD-ISC_LOCK_STATE_N with H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON peaks between 500 and 2500 μm/s.
- I manually scrolled through the data from present to 2 May 2024 to find events.
    - Manual, because 1) I wanted to start with a small sample size and quickly see if there was a pattern, and 2) I need to find events that caused lockloss and then find similarly sized events during which we kept lock.
- Channels I looked at include:
    - IMC-REFL_SERVO_SPLITMON
    - GRD-ISC_LOCK_STATE_N
    - ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON ("CS_PEAK")
    - SEI-CARM_GNDBLRMS_30M_100M
    - SEI-DARM_GNDBLRMS_30M_100M
    - SEI-XARM_GNDBLRMS_30M_100M
    - SEI-YARM_GNDBLRMS_30M_100M
    - SEI-CARM_GNDBLRMS_100M_300M
    - SEI-DARM_GNDBLRMS_100M_300M
    - SEI-XARM_GNDBLRMS_100M_300M
    - SEI-YARM_GNDBLRMS_100M_300M
    - ISI-GND_STS_ITMY_X_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Y_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Z_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_X_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Y_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Z_BLRMS_100M_300M
    - SUS-SRM_M3_COILOUTF_LL_INMON
    - SUS-SRM_M3_COILOUTF_LR_INMON
    - SUS-SRM_M3_COILOUTF_UL_INMON
    - SUS-SRM_M3_COILOUTF_UR_INMON
    - SUS-PRM_M3_COILOUTF_LL_INMON
    - SUS-PRM_M3_COILOUTF_LR_INMON
    - SUS-PRM_M3_COILOUTF_UL_INMON
    - SUS-PRM_M3_COILOUTF_UR_INMON

        - ndscope template saved as neil_eq_temp2.yaml

- 26 events; 14 lockloss, 12 stayed locked (3 or 4 of the lockloss events may have non-seismic causes)

- After using CS_PEAK to find the events, I have so far used the ISI channels to analyse them (a rough sketch of how this event selection could be automated is included after the conclusions below).
    - The SEI channels were created last week (only 2 events captured in these channels so far).

- Conclusions:
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *lost* lock:
        - In SEI 30M-100M,
            - 4 have z-axis-dominant motion, with either no motion or strong z-motion in SEI 100M-300M.
            - 2 have y-axis-dominated motion, with a lot of activity in SEI 100M-300M and y-motion dominating some of the time.
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *kept* lock:
        - In SEI 30M-100M,
            - 5 have z-axis-dominant motion, with only general noise in SEI 100M-300M.
            - 1 has z-axis-dominant noise near the peak in CS_PEAK and strong y-axis-dominated motion starting 4 min prior to the CS_PEAK peak; it too has only general noise in SEI 100M-300M. This x- or y-motion starting about 4 min before the peak in CS_PEAK has been observed in 5 events -- Love waves precede Rayleigh waves, so could these be Love waves?
    - All events below 1,000 μm/s in which we lost lock seem to have dominant y-motion in SEI 30M-100M and/or 100M-300M. However, the sample size is not large enough to convince me that shear motion is what is causing lockloss. But it is large enough to convince me to find more events and verify. (Some plots attached.)
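
For reference, a rough sketch of how this event selection and lock-state check could be automated (assuming gwpy data access; the thresholds and channels mirror those listed above, but the GPS span, the 30-minute lockloss window, and the script itself are illustrative assumptions, not the manual procedure actually used):

import numpy as np
from gwpy.timeseries import TimeSeries

# Approximate GPS span covering 2 May - 4 July 2024 (assumption); trend data would
# be more practical than full-rate data for a span this long.
start, end = 1398753018, 1404196818

peak = TimeSeries.get('H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON', start, end)  # um/s
lock = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', start, end)

# Candidate events: rising edges where the peak ground velocity crosses 500 um/s
above = peak.value > 500
edges = np.flatnonzero(np.diff(above.astype(int)) == 1)

for i in edges:
    t = peak.times.value[i]
    pgv = peak.crop(t, t + 3600).value.max()
    if pgv > 2500:
        continue  # keep only the 500-2500 um/s band considered above
    # Did the lock guardian state drop to ~0 within 30 minutes of the crossing?
    lost = bool(np.any(lock.crop(t, t + 1800).value < 1))
    print(int(t), round(pgv), 'lockloss' if lost else 'kept lock')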

Images attached to this report
Comments related to this report
beverly.berger@LIGO.ORG - 09:08, Sunday 07 July 2024 (78921)DCS, SEI

In a study with student Alexis Vazquez (see the poster at https://dcc.ligo.org/LIGO-G2302420), we found that there was an intermediate range of peak ground velocities in EQs where lock could be lost or maintained. We also found some evidence that lock loss in this case might be correlated with high microseism (either ambient or caused by the EQ). See the figures in the linked poster under Findings and Validation.

neil.doerksen@LIGO.ORG - 09:14, Friday 12 July 2024 (79070)SEI

One of the plots (2nd row, 2nd column) has the incorrect x-channel in some of the images (all posted images are correct, by chance). The patterns reported may not be correct; I will reanalyze.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:34, Thursday 04 July 2024 (78868)
Thursday OPS Day Shift End

TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

Today H1 was down for ALS maintenance and replacement of the ALS X PD, as described in Daniel's alog (78864).
Once they returned, I started an initial alignment and then started locking.
Observing was reached at 23:28 UTC.
There have been a number of earthquakes right off the coast of Victoria Island B.C. today.

LOG:                                                                                                                                                                                                                                                                                                 

Start Time | System | Name          | Location   | Laser Haz | Task                                 | End Time
16:08      | SAF    | LVEA          | LVEA       | YES       | LVEA IS LASER HAZARD                 | 10:08
17:31      | PEM    | Robert        | EX         | N         | Going to EX, not inside the VEA      | 17:44
18:11      | ALS    | Daniel, Keita | EX         | Yes       | Troubleshooting ALS beatnote issues  | 21:11
23:26      | FAC    | Tony          | Water tank | N         | Closing water diverting valves       | 23:26
H1 General
ryan.crouch@LIGO.ORG - posted 16:05, Thursday 04 July 2024 (78866)
OPS Thursday eve shift start

TITLE: 07/04 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s 
QUICK SUMMARY:

H1 ISC
daniel.sigg@LIGO.ORG - posted 15:24, Thursday 04 July 2024 - last comment - 10:22, Friday 05 July 2024(78864)
ISCTEX Beatnote alignment improved

Keita Daniel

We found that the transimpedance gain setting of the ALS-X_FIBR_A_DC PD was wrong (changed it from 20k to 2k). With the corrected calibration, this meant that about 20mW of light was actually on this PD.

After looking at the beatnote amplitude directly at the PD and finding it to be way too small, we decided to swap the PD with a spare (new PD S/N S1200248, old PD S/N S1200251). However, this did not improve the beatnote amplitude. (The removed PD was put back into the spares cabinet.)

We then looked for clipping and found that the beam on the first beam sampler after the fiber port was close to the side. We moved the sampler so the beam is closer to the center of the optics. We also found the beam on the polarizing cube in the fiber path to be low. We moved the cube downwards to center the beam. After aligning the beam back to the broadband PD, the beatnote amplitude improved drastically. This alignment seems very sensitive.

We had to turn down the laser power in the beat note path, from 20mW to about 6mW on the broadband PD.

This required a recalibration of the ALS-X_LASER_IR_PD photodiode. The laser output power in IR is about 60mW.

The beatnote strength as read by the MEDM screens is now 4-7dBm. It still seems to vary.

Comments related to this report
keita.kawabe@LIGO.ORG - 15:48, Thursday 04 July 2024 (78865)

To recap, the fundamental problem was the alignment (it was probably close to clipping before, and started clipping over time due to temperature drift or whatever). Also, the PBS mount or maybe the mount post holder for the fiber beam is not really great; a gentle push with a finger will flex something and change the alignment enough to change the beat note. We'll have to see for a while whether the beat note stays high enough.

The wrong transimpedance value in MEDM was not preventing the PLL from locking, but it was annoying. H1:ALS-X_FIBR_A_DC_TRANSIMPEDANCE was 20000 even though the interface box gain was 1. This kind of thing confuses us and slows down the troubleshooting. Whenever you change the gain of the BBPD interface box, please don't forget to change the transimpedance value at the same time (gain 1 = transimpedance 2k, gain 10 = 20k).
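
A minimal sketch of why the stale value matters: the power inferred from the PD DC voltage scales inversely with the transimpedance used in the calibration, so leaving 20k in MEDM while the box is at gain 1 (2k) under-reports the optical power by a factor of 10. The function and the 0.8 A/W responsivity below are assumptions for illustration only:

def inferred_power(pd_voltage, transimpedance, responsivity=0.8):
    # Optical power implied by a PD DC voltage for a given transimpedance value;
    # the responsivity (A/W) is an assumed typical number, not a measured one.
    photocurrent = pd_voltage / transimpedance
    return photocurrent / responsivity

v = 1.0  # arbitrary PD DC voltage
print(inferred_power(v, 20e3) / inferred_power(v, 2e3))  # 0.1: the 20k setting reads 10x low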

We took a small pair of pliers from the EE shop and forgot to bring it back from EX (sorry).

Everything else should be back to where it was. The Thorlabs power meter box was put on Camilla's desk.

Images attached to this comment
keita.kawabe@LIGO.ORG - 10:22, Friday 05 July 2024 (78882)

It's still good, right now it's +5 to +6 dBm.

Too early to tell, but we might be going diurnally back and forth between +3-ish and +7-ish dBm. A 4dB power variation is big (a factor of ~2.5).
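
(For reference on the conversion: a change of N dB corresponds to a power ratio of 10^(N/10), so 4 dB is 10^0.4 ≈ 2.5.)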

If this is diurnal, it's probably explained by alignment drift, i.e. we're not yet sitting close to the global maximum. It's not worth touching up the alignment unless this becomes a problem, but if we do decide to improve it some time in the future, remember that we will have to touch both the PBS and the fiber launcher (or lens).

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 15:14, Thursday 04 July 2024 (78863)
ALSY SDF screenshots of work done by Daniel and Keita today

6 channels were accepted in the SDF diffs after the ALS adjustments done today.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 13:14, Thursday 04 July 2024 (78862)
Strange drops from Observing

TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Keita and Daniel have come to site and gone to EX to adjust H1:ALS-X_FIBR_A_DEMOD_RFMON, which should allow a better beat note. Work Permit: 11962

I was looking at Verbals and noticed that the Intention Bit was flipped back and forth a large number of times from 12:02 UTC until 12:57 UTC.
I checked the SQZ manager during that time and I didn't see much motion, so I'm still not sure what caused this.

Keita & Daniel have come back from End X for a snack, and let me know that the photodiode is likely busted and they are going to go back out there and swap it.
I have accepted some ALS SDF diffs.

Hopefully they can swap it out today and we can get locked sometime today.
 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:29, Thursday 04 July 2024 (78861)
CDS HW Status IOC: added check for IOP timing card status

The CDS HW STAT IOC was restarted this morning; the new code checks the status of the LIGO Timing Card in the IO Chassis.

LHO VE
david.barker@LIGO.ORG - posted 10:27, Thursday 04 July 2024 (78860)
Thu CP1 Fill

Thu Jul 04 10:17:39 2024 INFO: Fill completed in 17min 35secs

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 08:01, Thursday 04 July 2024 (78859)
Thursday OPS Day Shift Start

TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:
We had been locked for 7+ hours until an earthquake unlocked us.
When I arrived, the IFO was unlocked and ALS_XARM was stuck on CHECK_CRYSTAL_FREQ.
I'm still trying to figure out how to make the H1:ALS-X_FIBR_LOCK_BEAT_FREQUENCY better.

Images attached to this report
H1 General (SQZ)
ryan.crouch@LIGO.ORG - posted 01:00, Thursday 04 July 2024 (78850)
OPS Wednesday EVE shift summary

TITLE: 07/04 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

Lock1: ISS 2nd loop issues, possible EndX issues, and the classic input align issues

 

The SQZ-FIBR_TRANS_DC is just barely over the high power limit (Tagging SQZ)

I unmonitored the CSSQZ channel H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON, as it would periodically turn off and on at high frequency without seemingly doing anything to the BEAT_FREQ; this way it won't keep dropping us out of observing tonight.

Images attached to this report
H1 ISC (PSL)
sheila.dwyer@LIGO.ORG - posted 21:56, Wednesday 03 July 2024 - last comment - 22:01, Wednesday 03 July 2024(78856)
reverted IM3 sliders, brings back beam on ISS second loop

Ryan C, Sheila D

Summary: The input beam moved after the PSL incursion, and aligning the IMC doesn't completely restore the input beam alignment to the IFO. Using IM4 QPD as a reference to move IM3 resulted in no beam on the ISS second loop, so we have reverted that for now.

Before Tuesday's PSL incursion, when the IMC was locked at 2W, IM4 trans QPD was fairly well centered, we had 1.5 or 1.6 counts on the ISS second loop inner and outer loop sums, 310 counts on MC2 trans QPD, and 95 counts on the ISS second loop QPD. After Jenne relocked the IMC yesterday, we had 2W on IM4 trans, but the QPD was off center with ~0.3 in pitch and 1 in yaw, the second loop inner and outer sums were 1.46 and 1.6, MC2 trans was 305 counts, and the ISS second loop QPD read 94.

This morning Camilla and company moved IM3 to re-center the beam on IM4 trans (78831). After our struggles with input alignment after the vent earlier this year, we learned that LLO uses IM4 QPD as a reference to adjust IM3 when needed, so we had centered IM4 hoping to use it as a reference in this way. However, doing this made the beam completely fall off the ISS second loop, with no power on its QPD or the two PD sum channels. This may mean that we were close to clipping in the second loop before, so we should come back to this another day to think about how we can restore the input alignment to what it previously was, and make sure we don't clip on the ISS.

For now I reverted IM3 sliders to this morning, and Ryan is working his way through the initial alignment. 

Along the way he also showed me that the X arm green transmission is extremely noisy, with high frequency fuzz. This might mean that we still have some electronics problems at EX after today's work. Since Ryan has been able to get past ALS locking tonight, we won't try to address this issue tonight.

 

Comments related to this report
sheila.dwyer@LIGO.ORG - 22:01, Wednesday 03 July 2024 (78857)

Also, a note that Ryan had trouble with SRY, but lowering the threshold for triggering allowed it to lock fine and run WFS. We might consider using lower thresholds here.

H1 SQZ (SQZ)
karmeng.kwan@LIGO.ORG - posted 15:20, Wednesday 26 June 2024 - last comment - 05:23, Thursday 04 July 2024(78686)
SHG2 Sinc Twin Sisters Rock Curve

Built a second SHG with the PPKTP crystal from MIT, measured the conversion efficiency, and fitted for the single-pass nonlinear conversion efficiency Enl = 0.0055 W^-1. The single-pass measurement of the PPKTP crystal gives a nonlinearity deff = 5.41 pm/V. Generally deff of PPKTP is quoted around 9.3 pm/V (Table 5.1 of Georgia's thesis).
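
(For scale, if Enl is defined through the usual single-pass relation P_SHG = Enl * P_pump^2 -- an assumption about the convention used here -- then 60 mW of pump would give roughly 0.0055 * 0.06^2 ≈ 20 μW of green in a single pass.)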

Measurement of the phase matching curve with an input power of 60 mW shows a dip around 35 Celsius. The temperature controller reads in kOhm, which is converted to Celsius via the Steinhart-Hart equation:

import numpy as np

# Steinhart-Hart coefficients for the thermistor
A1 = 0.003354016
B1 = 0.0002569850
C1 = 2.620131e-6
D1 = 6.383091e-8
R25 = 10000  # thermistor resistance at 25 C, in Ohm

def ohms_to_celsius(R_meas):
    # Convert the measured thermistor resistance (Ohm) to temperature (Celsius)
    R = R_meas / R25
    lnR = np.log(R)
    return 1 / (A1 + B1 * lnR + C1 * lnR**2 + D1 * lnR**3) - 273.15

The phase matching curve is plotted using Equations 3.14 and 3.20 and Table 3.2 from Sheila's thesis. I have fitted the sinc curve with T0 = 34.9 and i_max = 11.79, 18.00, and 32.00.
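
For reference, a minimal sketch of this kind of sinc^2 phase-matching fit (this is not the exact Eq. 3.14/3.20 model from the thesis; the linear detuning-vs-temperature assumption and the parameter names a and eta_max are illustrative only):

import numpy as np
from scipy.optimize import curve_fit

def phase_matching(T, T0, a, eta_max):
    # Assumed model: phase mismatch linear in (T - T0), efficiency ~ eta_max * sinc^2
    x = a * (T - T0)
    return eta_max * np.sinc(x / np.pi)**2  # np.sinc(y) = sin(pi*y)/(pi*y)

# T_meas (Celsius) and eta_meas (measured efficiency) would come from the data, e.g.:
# popt, pcov = curve_fit(phase_matching, T_meas, eta_meas, p0=[34.9, 1.0, max(eta_meas)])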

Images attached to this report
Non-image files attached to this report
Comments related to this report
naoki.aritomi@LIGO.ORG - 15:45, Wednesday 26 June 2024 (78690)

Mount Saint Helens for our currently used SHG: 76239

nutsinee.kijbunchoo@LIGO.ORG - 05:23, Thursday 04 July 2024 (78858)SQZ

I found a single/double pass SHG study that was done for the Virgo squeezer by Leonardi et al. here https://iopscience.iop.org/article/10.1088/1555-6611/aad84d

Daniel pointed out that Eq. 2 from this paper shows an "additional phase from the red/green dispersion in the rear mirror turn-around path" in the double-pass scheme. I've attached a couple of plots as a function of an arbitrary x (this variable is related to delta T) at various phase mismatches delta phi. I think the phase mismatch between the red and the green in the double pass alone might explain most of the mountains we are seeing here(?). This might not be the answer to all the problems, but it's a good place to start (Figure 7 also looks very interesting).

 

Maybe the mountains can be patched up if we have the capability of translating one of the mirrors.

Images attached to this comment