H1 SQZ (SQZ)
corey.gray@LIGO.ORG - posted 05:25, Friday 14 June 2024 - last comment - 10:27, Monday 17 June 2024(78428)
SQZ OPO ISS Hit Its Limit and Took H1 Out Of Observing

Woke up to see that the SQZ_OPO_LR Guardian had the message:

"disabled pump iss after 10 locklosses. Reset SQZ-OPO_ISS_LIMITCOUNT to clear message"

Followed 73053, but did NOT need to touch up the OPO temp (it was already at its max value); then took SQZ Manager back to FRE_DEP_SQZ, and H1 went back to OBSERVING.

Comments related to this report
corey.gray@LIGO.ORG - 05:38, Friday 14 June 2024 (78429)

Received the wake-up call at 4:40am PDT (1140 UTC). Took a few minutes to wake up, then logged into NoMachine. Spent some time figuring out the issue, ultimately doing an alog search to find steps to restore SQZ (found an alog by Oli which pointed to 73053). Once SQZ relocked, H1 was automatically taken back to OBSERVING at 5:17am (1217 UTC).

camilla.compton@LIGO.ORG - 11:05, Friday 14 June 2024 (78435)

Sheila, Naoki, Camilla. We've adjusted this so it should automatically relock the ISS.

The IFO went out of observing from the OPO without the OPO Guardian going down: the OPO stayed locked and just turned its ISS off. We're not sure what the issue with the ISS was; the SHG power was fine, and the controlmon was 3.5, which is near the middle of its range. Plot attached. It didn't reset until Corey intervened.

Sheila and I changed the logic in SQZ_OPO_LR's LOCKED_CLF_DUAL state so that now if the ISS lockloss counter* reaches 10, it will go to LOCKED_CLF_DUAL_NO_ISS, where it turns off the ISS before trying to relock the ISS to get back to LOCKED_CLF_DUAL. This will drop us from observing but should resolve itself in a few minutes. Naoki tested this by changing the power to make ISS unlock.
The message "disabled pump iss after 10 locklosses. Reset SQZ-OPO_ISS_LIMITCOUNT to clear message." has been removed, and the wiki updated. It shouldn't get caught in a loop since in ENGAGE_PUMP_ISS, if its lockloss counter reaches 20, it will take the OPO to DOWN.

* this isn't really a lockloss counter, more of a count of how many seconds the ISS is saturating.
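
For reference, a minimal sketch of the new decision logic described above (this is not the actual SQZ_OPO_LR Guardian code; the state names and thresholds follow this entry, everything else is illustrative):

# Sketch only, not the real Guardian code; thresholds and state names follow this entry.
ISS_SATURATION_LIMIT = 10   # limit in LOCKED_CLF_DUAL (really seconds of ISS saturation, see * above)
OPO_DOWN_LIMIT = 20         # in ENGAGE_PUMP_ISS, give up and take the OPO to DOWN

def next_state(current_state, iss_saturation_count):
    """Return the next Guardian state given the ISS saturation counter."""
    if current_state == 'LOCKED_CLF_DUAL' and iss_saturation_count >= ISS_SATURATION_LIMIT:
        # drops us out of observing, turns the ISS off, then tries to re-engage it
        return 'LOCKED_CLF_DUAL_NO_ISS'
    if current_state == 'ENGAGE_PUMP_ISS' and iss_saturation_count >= OPO_DOWN_LIMIT:
        # avoids getting caught in a relock loop
        return 'DOWN'
    return current_state

# e.g. next_state('LOCKED_CLF_DUAL', 10) -> 'LOCKED_CLF_DUAL_NO_ISS'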

Images attached to this comment
camilla.compton@LIGO.ORG - 15:23, Friday 14 June 2024 (78445)

Worryingly, the squeezing got BETTER while the ISS was unlocked; plot attached of DARM, SQZ BLRMs, and range BLRMs.

In the current lock, the SQZ BLRMs are back to the good values (plot). Why was the ISS injecting noise last night? Has this been a common occurrence? What is a good way of monitoring this? Perhaps coherence between DARM and the ISS.
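
A minimal sketch of such a coherence check, e.g. with gwpy (the ISS channel name and GPS times below are placeholders, not a vetted choice):

# Sketch: coherence between calibrated strain and an ISS monitor channel.
from gwpy.timeseries import TimeSeriesDict

start, end = 1402381218, 1402381818   # placeholder GPS span (~10 min)
chans = ['H1:GDS-CALIB_STRAIN', 'H1:SQZ-OPO_ISS_CONTROLMON']   # ISS channel is a guess

data = TimeSeriesDict.get(chans, start, end)
coh = data[chans[0]].coherence(data[chans[1]], fftlength=8, overlap=4)
print('peak coherence', coh.max(), 'at', coh.frequencies[coh.argmax()])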

Images attached to this comment
camilla.compton@LIGO.ORG - 10:27, Monday 17 June 2024 (78488)

A check on this is in 78486. We think that the SQZ OPO temperature or angle wasn't well tuned for the green OPO power at this time: when the OPO ISS was off, the SHG launch power dropped from 28.8mW to 24.5mW (plot). It was just chance that SQZ was happier here.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 01:02, Friday 14 June 2024 (78427)
Ops Eve Shift Summary

TITLE: 06/14 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Quiet shift with H1 locked and observing for the duration. There was one errant Picket Fence trigger (see mid-shift log) and we rode out an M5.9 EQ from the southern Atlantic, but otherwise uneventful. H1 has now been locked for 11 hours.
LOG: No log for this shift.

H1 General (SEI)
ryan.short@LIGO.ORG - posted 20:49, Thursday 13 June 2024 - last comment - 16:54, Monday 08 July 2024(78426)
Ops Eve Mid Shift Report

State of H1: Observing at 157Mpc, locked for 6.5 hours.

Quiet shift so far except for another errant Picket Fence trigger to EQ mode just like ones seen last night (alog78404) at 02:42 UTC (tagging SEI).

Images attached to this report
Comments related to this report
edgard.bonilla@LIGO.ORG - 13:46, Friday 14 June 2024 (78440)SEI

That's about two triggers in a short time. If the false triggers are an issue, we should consider triggering on picket fence only if there's a Seismon alert.

jim.warner@LIGO.ORG - 10:13, Monday 24 June 2024 (78620)

The picket fence-only transition was commented out last weekend, on the 15th, by Oli. We will now only transition on picket fence signals if there is a live seismon notification.

edgard.bonilla@LIGO.ORG - 16:54, Monday 08 July 2024 (78946)SEI

Thanks Jim,

I'm back from my vacation and will resume work on the picket fence to see if we can fix these errant triggers this summer.

H1 CAL (CAL, ISC)
francisco.llamas@LIGO.ORG - posted 18:06, Thursday 13 June 2024 (78425)
Drivealign L2L gain changes using kappa_tst script - an attempt

SheilaD, FranciscoL, ThomasS [Remote: LouisD]

Today, we tried to evaluate the effect on H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT from changing H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN. In summary, we were not able to collect enough data and we want to do this again with a quiet IFO.

We used the script KappaToDrivealign.py to change the value of H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN. The script worked as intended but has to round off the value that it writes to the drivealign gain to the digits that EPICS can take. H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN changed from 184.65 to 186.53576 (1.012%). We are changing the gain to reduce the difference between the model of the actuation function and the measurement of \DeltaL_{res}. We did not change H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN because that would have a net-zero change between measurement and model. We were not able to assess the effect of our change by the end of the commissioning period, so we reverted our changes.
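
For orientation, a hedged sketch of what a KappaToDrivealign-style adjustment could look like (this is not the actual KappaToDrivealign.py; the direction of the scaling and the rounding are assumptions):

# Sketch only. Whether the correct factor is kappa_TST or 1/kappa_TST depends on the
# calibration convention; consult the real script before applying anything.
from epics import caget, caput

KAPPA = 'H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT'
GAIN  = 'H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN'

kappa_tst = caget(KAPPA)
old_gain  = caget(GAIN)

new_gain = round(old_gain / kappa_tst, 5)   # rounded to the digits EPICS will keep

print(f'{GAIN}: {old_gain} -> {new_gain} ({100 * (new_gain / old_gain - 1):+.3f}%)')
# caput(GAIN, new_gain)   # uncomment to actually write the new gain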

kappa_tst_crossbar.png shows a trend of the channels we are concerned with. The crossbar on KAPPA_TST shows that the value drops shortly after the gain of DRIVEALIGN_L2L was reverted. However, the uncertainty from H1:CAL-CS_TDEP_PCAL_LINE3_UNCERTAINTY increased during these values, which makes them less reliable. The data between the vertical dashed lines on KAPPA_TST in kappa_tst_deltat.png shows that there was a period of time during which KAPPA_TST was closer to 1 along with a lower uncertainty of PCAL_LINE3.

There are two ways we could repeat this measurement: with 25 minutes of quiet interferometer time, we can monitor the changes of KAPPA_TST after changing the DRIVEALIGN_L2L gain, *or* we can measure simulines before and after changing DRIVEALIGN_L2L. The latter method would better illustrate the stability of the UGF, but it would take around 40 minutes.

To recap, the data we collected today was not reliable enough because it had a high uncertainty, and there was not enough time to clearly see the effect of our changes. We will try again by either monitoring KAPPA_TST or running simulines measurements.

Images attached to this report
H1 General (Lockloss)
camilla.compton@LIGO.ORG - posted 17:12, Thursday 13 June 2024 (76052)
Comparing unknown locklosses in O4 to O3.
Using a lot of assumptions, I estimated that our lockloss rate in O4a is similar to O3a but much better than O3b.
O4b rate is worse but we are only 2 months in so it's not fair to compare.

I've also looked at the number of hours we've lost from unknown locklosses, which are the majority of our locklosses at around 65%, and how many theoretical GWs we could have missed because of these.

O4b so far: 110 LHO locklosses from observing in O4b [1] x 65% unknown [2] x 2 hours down per lockloss = 143 hours downtime from unknown locklosses. 925 observing hours in O4b so far (60% of O4b [3], from April 10th).

O4a: 350 LHO locklosses from observing in O4a [1] x 65% unknown [2] x 2 hours down per lockloss = 450 hours downtime from unknown locklosses. 5700 observing hours in O4a (67% of O4a [3]) with 81 candidates in O4a [4] = 1 candidate / 70 hours, so 450 hours / 70 observing hours per candidate ~ 6.5 GW candidates lost due to unknown locklosses. The Ops team notes that we had considerably more locklosses during the first ~month of O4a while using 75W input power.

O3b: 247 LHO locklosses from observing in O3b [5] x 65% unknown [using same as O4a] x 2 hours down per lockloss = 320 hours downtime from unknown locklosses. 2790 observing hours in O3b (79% of O3b [3]) with 35 events [6] = 1 event / 80 hours, so 320 hours / 80 observing hours per event ~ 4 events lost due to unknown locklosses.

O3a: 182 LHO locklosses from observing in O3a [5] x 65% unknown [using same as O4a] x 2 hours down per lockloss = 235 hours downtime from unknown locklosses. 3120 observing hours in O3a (71% of O3a [3]) with 44 events [6] = 1 event / 70 hours, so 235 hours / 70 observing hours per event ~ 3.3 events lost due to unknown locklosses.

We currently don't track downtime per lockloss, but we could think about tracking it; 2 hours is a guess. It may be as low as 1 hour.
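
A minimal sketch of the arithmetic used above (all inputs are the assumptions stated in this entry):

# Back-of-envelope estimate; the 65% unknown fraction and 2 hours/lockloss are the guesses above.
def lost_candidates(locklosses, unknown_fraction=0.65, hours_per_lockloss=2, hours_per_candidate=70):
    """Return (downtime in hours, estimated candidates missed) from unknown locklosses."""
    downtime = locklosses * unknown_fraction * hours_per_lockloss
    return downtime, downtime / hours_per_candidate

print(lost_candidates(350))                          # O4a: ~455 h, ~6.5 candidates
print(lost_candidates(247, hours_per_candidate=80))  # O3b: ~321 h, ~4 events
print(lost_candidates(182, hours_per_candidate=70))  # O3a: ~237 h, ~3.4 events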

[1] https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/ using date and Observing filters.
[2] O4 lockloss Googlesheet
[3] https://ldas-jobs.ligo.caltech.edu/~DetChar/summary/O4a/ or O4b or O3b or O3a
[4] https://gracedb.ligo.org/superevents/public/O4a/
[5] G2201762 O3a_O3b_summary.pdf
[6] https://dcc.ligo.org/LIGO-G2102395
LHO General (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 16:23, Thursday 13 June 2024 (78418)
Ops Day Shift End

TITLE: 06/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Commissioning and calibration this morning, followed by a lock loss. Relocking was not fully automated for two reasons: I touched ETMX Y by 0.3urad and it caught, then moved PRM to avoid an initial alignment and get PRMI to lock. Locked for 2 hours now.
LOG:

For operators: I put back in the MICH Fringes and PRMI flags that Sheila and company set to False (no auto MICH or PRMI) last weekend (alog78319), so it should try MICH and PRMI on its own again.

LHO General
ryan.short@LIGO.ORG - posted 16:06, Thursday 13 June 2024 (78423)
Ops Eve Shift Start

TITLE: 06/13 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 has been locked and observing for 2 hours; things are running smoothly after the commissioning window earlier today.

H1 SQZ
naoki.aritomi@LIGO.ORG - posted 15:59, Thursday 13 June 2024 (78422)
Unsuccessful FC ringdown measurement after vacuum pressure incident

Naoki, Andrei, Camilla, Vicky

To investigate the effect of the vacuum pressure incident in HAM6 reported in 78346, Peter suggested measuring the FC loss, since the FC should be the most sensitive cavity. We tried the FC ringdown measurement, but the result does not make any sense. We will take the data again.

Previous ringdown measurement: 66403, 67273

For the ringdown measurement, we need the cabling as written in 67409 to lock the FC green with the SQZ laser VCO, since the SQZ laser frequency noise is too much for a ringdown measurement with only the TTFSS. We did only the first and second cabling in 67409; the third and fourth cabling are necessary only if you want to do a homodyne measurement.

After the cabling, we requested the OPO guardian to LOCKED_SEED_DITHER, but the dither lock did not work well. The seed stayed around 25% of the maximum seed trans. We confirmed that the OPO CMB EXC is connected to some DAC output, which should be the dither signal. Although the seed was not on resonance of the OPO, the seed power seemed stable, so we moved on.

Then we manually locked the FC green. We also engaged one boost of FC green CMB. We adjusted green VCO tune to get the seed on resonance of FC. 

We increased the seed power to 2.7 mW. More than 2.7 mW seed saturated the FC trans IR PD.

We looked at H1:SQZ-FC_TRANS_D_LF_OUT and H1:SQZ-OPO_IR_PD_LF_OUT for FC TRANS and REFL since they are 16k channel. We maximized the seed FC trans by adjusting green VCO tune and requested OPO guardian to DOWN.

The attached figure shows the ringdown measurement. The decay times of FC TRANS and REFL are quite different, and both are much larger than the expected 4 ms decay time. Also, the decay start times are different for TRANS and REFL. The decay of FC TRANS is not going to 0, which means there would be residual seed. Requesting the OPO guardian to DOWN might not block the seed enough, and this might be related to the dither locking, which is not working well.

EDIT: There is an elliptic low pass filter LP100 in FC trans IR PD, which would explain the different behavior of TRANS and REFL. We should turn off LP100 for ringdown measurement.
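
For the reanalysis, a minimal sketch of how the decay time could be fit (the GPS time below is a placeholder for when the seed is blocked; gwpy for data access is an assumption):

# Sketch of fitting the ringdown decay time from the 16k FC TRANS channel.
import numpy as np
from gwpy.timeseries import TimeSeries
from scipy.optimize import curve_fit

t0 = 1402345678            # placeholder GPS time the OPO guardian is taken to DOWN
chan = 'H1:SQZ-FC_TRANS_D_LF_OUT'

data = TimeSeries.get(chan, t0 - 0.01, t0 + 0.05)
t, y = data.times.value - t0, data.value

def ringdown(t, A, tau, offset):
    # the offset term absorbs any residual seed that keeps TRANS from going to 0
    return A * np.exp(-t / tau) + offset

popt, _ = curve_fit(ringdown, t[t > 0], y[t > 0], p0=[y.max(), 4e-3, 0.0])
print(f'decay time = {1e3 * popt[1]:.2f} ms (expected ~4 ms)')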

Images attached to this report
LHO General
keita.kawabe@LIGO.ORG - posted 14:58, Thursday 13 June 2024 (78421)
Bee season again

Bees are again nesting in a wire spool on the outdoor shelf by the corner station.

Non-image files attached to this report
H1 ISC
thomas.shaffer@LIGO.ORG - posted 14:03, Thursday 13 June 2024 - last comment - 11:44, Friday 14 June 2024(78419)
Converted A2L script to run all optics/dofs simultaneously if desired

I took the script that we have been using to run our A2L and converted it to run the measurements for all quads and degrees of freedom at the same time, or any subset, as desired. The new script is (userapps)/isc/h1/scripts/a2l/a2l_min_multi.py. Today Sheila and I tested it for all quads with just Y, with the results below. These values were accepted in SDF, updated in lscparams.py, and ISC_LOCK reloaded. More details about the script are at the bottom of this log.

Results for ETMX Y
Initial:  4.99
Final:    4.94
Diff:     -0.04999999999999982

Results for ETMY Y
Initial:  0.86
Final:    0.94
Diff:     0.07999999999999996

Results for ITMX Y
Initial:  2.93
Final:    2.89
Diff:     -0.040000000000000036

Results for ITMY Y
Initial:  -2.59
Final:    -2.51
Diff:     0.08000000000000007

 


 

The script we used to use was (userapps)/isc/common/scripts/decoup/a2l_min_generic_LHO.py, which was, I think, originally written by Vlad B. and then modified by Jenne to work for us at LHO. I took this and changed a few things around to call the optimiseDOF function for each desired quad and dof under a ThreadPool class from multiprocess, to run all of the measurements simultaneously. We had to move or change filters in the H1:ASC-ADS_{PIT,YAW}{bank#}_DEMOD_{SIG, I, Q} banks so that each optic and dof is associated with a particular frequency, and used ADS banks 6-9. These frequencies needed to be spaced far enough apart but still within our area of interest. We also had to engage notches for all of these potential lines in the H1:SUS-{QUAD}_L3_ISCINF_{P,Y} banks (FM6&7). We also accepted the ADS output matrix values in SDF for these new banks with a gain of 1.

This hasn't been tested for all quads and both P&Y, so far only Y.

optic_dict = {'ETMX': {'P': {'freq': 31.0, 'ads_bank': 6},
                       'Y': {'freq': 31.5, 'ads_bank': 6}},
              'ETMY': {'P': {'freq': 28.0, 'ads_bank': 7},
                       'Y': {'freq': 28.5, 'ads_bank': 7}},
              'ITMX': {'P': {'freq': 26.0, 'ads_bank': 8},
                       'Y': {'freq': 26.5, 'ads_bank': 8}},
              'ITMY': {'P': {'freq': 23.0, 'ads_bank': 9},
                       'Y': {'freq': 23.5, 'ads_bank': 9}},
              }
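
A minimal sketch of the parallelization pattern described above, using the optic_dict (shown here with the standard library's ThreadPool; optimiseDOF is a stand-in for the real measurement function in a2l_min_multi.py):

# Sketch: fan the per-optic/per-DOF measurements out over a thread pool.
from multiprocessing.pool import ThreadPool

def optimiseDOF(optic, dof, freq, ads_bank):
    # placeholder: drive the A2L line at `freq` on ADS bank `ads_bank`, minimize the
    # coupling, and return the final A2L gain for (optic, dof)
    return optic, dof, 0.0

jobs = [(optic, dof, cfg['freq'], cfg['ads_bank'])
        for optic, dofs in optic_dict.items()
        for dof, cfg in dofs.items()]

with ThreadPool(len(jobs)) as pool:
    results = pool.starmap(optimiseDOF, jobs)

for optic, dof, final_gain in results:
    print(optic, dof, final_gain)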
Comments related to this report
sheila.dwyer@LIGO.ORG - 11:44, Friday 14 June 2024 (78437)

Here's a screenshot of the ASC coherence after TJ ran this script yesterday; there is still high coherence between YAW ASC and DARM.

 

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 13:20, Thursday 13 June 2024 - last comment - 14:08, Thursday 13 June 2024(78417)
Lock loss 1948 UTC

ETMX sees an odd move ~100ms before the lock loss just like yesterday. The lock loss tool also has a DCPD tag.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 14:08, Thursday 13 June 2024 (78420)

Back to Observing at 2107UTC

H1 General
thomas.shaffer@LIGO.ORG - posted 12:33, Thursday 13 June 2024 (78416)
Back to Observing 1932 UTC

Observing at 1932UTC. A bit of a delay wrapping up commissioning today, but we're back to observing and locked for 20 hours.

H1 ISC
jennifer.wright@LIGO.ORG - posted 12:27, Thursday 13 June 2024 (78415)
Retuned SRM position and SRC1 offsets to increase coupled cavity pole

Jennie W, Keita, Sheila

Sheila suggested we move SRM to increase the coupled cavity pole as this got worse in the previous week.

Keita and I opened the SRC1_P and SRC1_Y loops (and turned off the offsets) then walked SRM in yaw and looked at the coupled cavity pole.

See image of the loop medm screen.

It's hard to see, but moving down in SRM yaw on the sliders made the f_cc higher. We stopped due to the end of the commissioning period, so maybe we could do more optimisation of this.

See the attached trend.

I set the offsets in the two loops to cancel out the INMON values at this new alignment, before closing them again.

These offsets have been changed in lscparams.py on lines 551 and 552 and accepted in OBSERVE.snap by TJ.

Images attached to this report
H1 PEM
sheila.dwyer@LIGO.ORG - posted 12:18, Thursday 13 June 2024 (78414)
turned ITMY ESD bias to 0V

Robert mentioned while he was here that there was coherence between the ground current clamp in the vertex and DARM, and suggested that we try having both ITM biases set to 0V. I did that today, and accepted the bias gain of 0 in the OBSERVE and SAFE snaps.

Robert also found that we need to change the bias at EX, which we haven't done yet.

H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 11:27, Thursday 13 June 2024 - last comment - 16:33, Friday 19 July 2024(78413)
DARM offset step

I ran the DARM offset step code starting at:

2024 Jun 13 16:13:20 UTC (GPS 1402330418)

Before recording this time stamp it records the PCAL current line settings and makes sure notches for 2 PCAL frequencies are set in the DARM2 filter bank.

It then puts all the PCAL power into these lines at 410.3 and 255Hz (giving them both a height of 4000 counts), and measures the current DARM offset value.

It then steps the DARM offset and waits for 120s each time.

The script stopped at 2024 Jun 13 16:27:48 UTC (GPS 1402331286).

In the analysis the PCAL lines can be used to calculate how the optical gain changes at each offset.

See the attached traces, where you can see that H1:OMC-READOUT_X0_OFFSET is stepped and the OMC-DCPD_SUM and ASC-AS_C respond to this change.

Watch this space for analysed data.

The script sets all the PCAL settings back to nominal after the test, from the record it took at the start.
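
For orientation, a hedged sketch of the stepping loop (the real implementation is the auto_darm_offset_step.py script below; the step values and readback channel suffixes here are guesses):

# Sketch only; offsets and readback channel names are illustrative.
import time
from epics import caget, caput

OFFSET = 'H1:OMC-READOUT_X0_OFFSET'
nominal = caget(OFFSET)

for scale in (1.0, 0.8, 0.6, 1.2, 1.4):          # example set of offset steps
    caput(OFFSET, nominal * scale)
    time.sleep(120)                              # wait 120 s at each step
    print(scale, caget('H1:OMC-DCPD_SUM_OUTPUT'), caget('H1:ASC-AS_C_SUM_OUTPUT'))

caput(OFFSET, nominal)                           # restore the nominal offset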

The script lives here:

/ligo/gitcommon/labutils/darm_offset_step/auto_darm_offset_step.py

The data lives here:

/ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Jun_13_16_13_20_UTC.txt

 

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 11:10, Friday 14 June 2024 (78436)

See the results in the attached pdf also found at

/ligo/gitcommon/labutils/darm_offset_step/figures/plot_darm_optical_gain_vs_dcpd_sum/all_plots_plot_darm_optical_gain_vs_dcpd_sum_1402330422_380kW__Post_OFI_burn_and_pressure_spikes.pdf

The contrast defect is 0.889 ± 0.019 mW and the true DARM offset 0 is 0.30 counts.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 16:11, Monday 15 July 2024 (79144)

I plotted the power at the antisymmetric port as in this entry to find out the loss term between the input to HAM6 and the DCPDs, which in this case is  (1/1.652) =  0.605 with 580.3 mW of light at the AS port insensitive to DARM length changes.

Non-image files attached to this comment
victoriaa.xu@LIGO.ORG - 16:33, Friday 19 July 2024 (79251)ISC, SQZ

From Jennie's measurement of 0.88 mW contrast defect, and dcpd_sum of 40mA/resp = 46.6mW, this implies an upper bound on the homodyne readout angle of 8 degrees.

This readout angle can be useful for the noise budget (ifo.Optics.Quadrature.dc=(-8+90)*np.pi/180) and analyzing sqz datasets e.g. May 2024, lho:77710.

 

Table of readout angles "recently":

   
 run   date           total_dcpd_light (dcpd_sum = 40mA)   contrast_defect   homodyne_angle   alog
 O4a   Aug 2023       46.6 mW                              1.63 mW           10.7 deg         lho71913
 ER16  9 March 2024   46.6 mW                              2.1 mW            12.2 deg         lho76231
 ER16  16 March 2024  46.6 mW                              1.15 mW           9.0 deg          lho77176
 O4b   June 2024      46.6 mW                              0.88 mW           8.0 deg          lho78413
 O4b   July 2024      46.6 mW                              1.0 mW            8.4 deg          lho79045

 

##### quick python terminal script to calculate #########

# craig lho:65000
contrast_defect   = 0.88    # mW  # measured on 2024 June 14, lho78413, 0.88 ± 0.019 mW
total_dcpd_light  = 46.6    # mW  # from dcpd_sum = 40mA/(0.8582 A/W) = 46.6 mW
import numpy as np
darm_offset_power = total_dcpd_light - contrast_defect
homodyne_angle_rad = np.arctan2(np.sqrt(contrast_defect), np.sqrt(darm_offset_power))
homodyne_angle_deg = homodyne_angle_rad*180/np.pi # degrees
print(f"homodyne_angle = {homodyne_angle_deg:0.5f} deg\n")


##### To convert between dcpd amps and watts if needed #########

# using the photodetector responsivity (like R = 0.8582 A/W for 1064nm)
from scipy import constants as scc
responsivity = scc.e * (1064e-9) / (scc.c * scc.h)
total_dcpd_light = 40/responsivity  # so dcpd_sum 40mA is 46.6mW
H1 CAL
thomas.shaffer@LIGO.ORG - posted 09:08, Thursday 13 June 2024 - last comment - 12:10, Thursday 13 June 2024(78409)
Calibration Sweep 1538 UTC

Ran the usual broadband and simulines sweep starting at 1538UTC.

Simulines start:

PDT: 2024-06-13 08:46:11.048813 PDT
UTC: 2024-06-13 15:46:11.048813 UTC
GPS: 1402328789.048813

Simulines end:
PDT: 2024-06-13 09:07:41.142373 PDT
UTC: 2024-06-13 16:07:41.142373 UTC
GPS: 1402330079.142373
 

Files:

2024-06-13 16:07:41,052 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240613T154612Z.hdf5
2024-06-13 16:07:41,060 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240613T154612Z.hdf5
2024-06-13 16:07:41,066 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240613T154612Z.hdf5
2024-06-13 16:07:41,071 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240613T154612Z.hdf5
2024-06-13 16:07:41,076 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240613T154612Z.hdf5

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 12:10, Thursday 13 June 2024 (78411)

A second simulines was run after Francisco made some changes. Below is the info for that simulines run.

Start:

PDT: 2024-06-13 09:55:53.982484 PDT
UTC: 2024-06-13 16:55:53.982484 UTC
GPS: 1402332971.982484

End:


PDT: 2024-06-13 10:17:24.832414 PDT
UTC: 2024-06-13 17:17:24.832414 UTC
GPS: 1402334262.832414
 

Files:

2024-06-13 17:17:24,764 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240613T165554Z.hdf5
2024-06-13 17:17:24,771 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240613T165554Z.hdf5
2024-06-13 17:17:24,776 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240613T165554Z.hdf5
2024-06-13 17:17:24,780 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240613T165554Z.hdf5
2024-06-13 17:17:24,785 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240613T165554Z.hdf5

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 15:24, Monday 10 June 2024 - last comment - 16:45, Thursday 13 June 2024(78346)
Report of the Observed Vacuum Pressure Anomalities (06/06/2024 local)

On Friday 06/07/2024 Dave Barker sent an email to the vacuum group noting 3 spikes in the pressure of the main vacuum envelope. I took a closer look at the 3 different events and noticed that they correlated with the IFO losing lock. I contacted Dave, and together we contacted the operator, Corey, who made others aware of our findings.

The pressure "spikes" were noted by different components integral to the vacuum envelope. Gauges noted the sudden rise in pressure, and almost at the same time the ion pumps reacted to it. The outgassing was noted at all stations: very noticeable at the mid stations, with less effect at both end stations, and in both cases with a delay.

The largest spike for all 3 events is noted at the HAM6 gauge; we do not have a gauge at HAM5 or HAM4. The gauge nearest HAM6 is the one on the relay tube that joins HAM5/7 (PT152), with the restriction of the relay tube; the next gauge is at BSC2 (PT120), however the spike there is not as "high" as the one noted on the HAM6 gauge.

A list of aLOGs made by others related to the pressure anomalies and their findings:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78308
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78320
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78323
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78343

Note: the oscillation visible on the plot of the outer stations (Mids and Ends) is the diurnal cycle, nominal behavior.

Images attached to this report
Comments related to this report
michael.zucker@LIGO.ORG - 13:13, Tuesday 11 June 2024 (78371)

Of the live working gauges, PT110 appears closest to the source based on time signature. This is on the HAM5-7 relay tube and only indirectly samples HAM6*. It registered a peak of 8e-6 Torr (corrected from 2e-6) with a decay time of order 17 s (corrected from 30 s). Taking a HAM as the sample volume (optimistic), this indicates at least 0.08 torr-liters (corrected from 0.02) of "something" must have been released at once. The strong visible signal at the mid and end stations suggests it was not entirely water vapor, as this should have been trapped in the CPs.

For reference, a mirror in a 2e-6 Torr environment intercepts about 1 molecular monolayer per second. Depending on sticking fraction, each of these gas pulses could deposit of order 100 monolayers of contaminant on everything. 
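
A quick numerical check of these order-of-magnitude figures (assuming an N2-like gas at room temperature, ~10 m^3 for a HAM-sized volume, and ~1e15 adsorption sites per cm^2 per monolayer):

# Order-of-magnitude check of the gas-release and monolayer-per-second figures above.
import numpy as np
from scipy import constants as scc

P_peak_torr = 8e-6
ham_volume_liters = 1e4                          # ~10 m^3, rough HAM-sized volume
print('gas released ~', P_peak_torr * ham_volume_liters, 'torr-liters')   # ~0.08

P = 2e-6 * 133.322                               # 2e-6 Torr in Pa
T = 293                                          # K
m = 28 * scc.atomic_mass                         # N2-like molecule, kg

flux = P / np.sqrt(2 * np.pi * m * scc.k * T)    # impingement rate, molecules / m^2 / s
monolayer = 1e19                                 # ~1e15 sites / cm^2, in m^-2
print('monolayers per second ~', round(flux / monolayer, 1))              # ~0.8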

The observation that the IFO still works is comforting; maybe we should feel lucky. However it seems critical to understand how (for example) the lock loss energy transient could possibly hit something thermally unstable, and to at least guess what material that might be.  Recall we have previously noted evidence of melted glass on an OMC shroud.

Based on the above order-of-magnitude limits, similar gas pulses far too small to see on pressure gauges could be damaging the optics. 

It would be instructive to compare before/after measures of arm, MC, OMC, etc. losses, to at least bound any acquired absorption.

 

*corrected, thanks Gerardo

jordan.vanosky@LIGO.ORG - 14:03, Tuesday 11 June 2024 (78374)

Corner RGA scans were collected today during maintenance, using the RGA on the output tube. The RGA volume has been open to the main volume since the last pumpdown ~March 2024, but the electronics head/filament was turned off because the small fan on the electronics head was not spinning during observing. We were unable to connect to the HAM6 RGA, through either the RGA computer in the control room or locally at the unit with a laptop. Only the output tube RGA is available at this time.

A small aux cart and turbo were connected to the RGA volume on the output tube, then the RGA volume was isolated from the main volume and the filament turned on. The filament had warmed for ~2 hours prior to the RGA scans being collected.

RGA Model: Pfeiffer PrismaPlus

AMU Range: 0-100

Chamber Pressure: 1.24E-8 torr on PT131 (BSC 3), and 9.54E-8 torr on PT110 (HAM6). NOTE: Cold cathode gauge interlocks tripped during filming activities in the LVEA today, so BSC2 pressure was not recorded.

Pumping Conditions: 4x 2500 l/s Ion Pumps and 2x 10^5 l/s cryopumps, HAM6 IP and HAM7/Relay tube

SEM voltage: 1200V

Dwell time: 500ms

Pts/AMU: 10

RGA volume scans were collected with the main volume valve closed, pumping only with the 80 l/s turbo aux cart.

Corner scans collected with main volume valve open, and aux cart valve closed

Comparison to March 2024 scan provided as well.

RGA is still powered on and connected to main volume with a continuous 0-100AMU scan sweep at 5ms dwell time

Images attached to this comment
richard.mccarthy@LIGO.ORG - 07:34, Wednesday 12 June 2024 (78385)

Richard posting from Robert S.

I had a work permit to remove viewports, so I opened the two viewports on the -Y side of HAM6. I used one of the bright LED arrays at one viewport and looked through the other viewport so everything was well lit. I looked for any evidence of burned spots, most specifically on the fast shutter or in the area where the fast shutter directs the beam to the cylindrical dump. I did not see a damaged spot, but there are a lot of blocking components, so that's not surprising. I also looked at OM1, which is right in front of the viewports. I looked for burned spots on the cables etc. but didn't see any. I tried to see if there were any spots on the OMC shroud, or around OM2 and OM3, the portions that I could see. I didn't see anything, but I think it's pretty unlikely that I could have seen something.

jordan.vanosky@LIGO.ORG - 11:48, Wednesday 12 June 2024 (78390)

Repeated 0-100AMU scans of the corner today, after filament had full 24 hours to warm up. Same scan parameters as above June 11th scans. Corner pressure 9.36e-9 Torr, PT 120.

Dwell time 500 ms

Attached is comparison to yesterday's scan, and compared to the March 4th 2024 scan after corner pumpdown.

There is a significant decrease in AMU 41, 43 and 64 compared to yesterday's scan.

Images attached to this comment
jordan.vanosky@LIGO.ORG - 16:45, Thursday 13 June 2024 (78424)

Raw RGA text files stored on DCC at T2400198

H1 ISC
jennifer.wright@LIGO.ORG - posted 11:52, Monday 10 June 2024 - last comment - 16:19, Thursday 13 June 2024(78074)
Optical Gain Changes before and during O4b

Jennie W, Sheila

 

Sheila wanted me to look at how our optical gain is doing since the burn on the OFI, which (we think) roughly happened on the 22nd April.

Before this happened we made a measurement of the OMC alignment using dithers on the OMC ASC degrees of freedom. We got a set of new alignment offsets for the OMC QPDs that would have increased our optical gain but did not implement these at the time.

After the OFI burn we remeasured these alignment dithers and found a similar set of offsets that would improve our optical gain. Thus I have assumed that we would have achieved this optical gain increase before the OFI burn if we had implemented the offset changes then.

Below is the sequence of events then a table stating our actual or inferred optical gain and the date on which it was measured.

 

Optical gain before vent as tracked by kappa C: 2024/01/10 22:36:26 UTC is 1.0117 +/- 0.0039

Optical gain after vent: 2024/04/14 03:54:38 UTC is 1.0158 +/- 0.0028, optical gain if we had improved OMC alignment = 1.0158 + 0.0189 = 1.0347

SR3 yaw position and SR2 yaw and pitch positions were changed on the 24th April ( starting 17:18:15 UTC time) to gain some of our throughput back.

The OMC QPD offsets were changed on 1st May (18:26:26 UTC time) to improve our optical gain - this improved kappa c by 0.0189.

Optical gain after spot on OFI moved due to OFI damage: 2024-05-23 06:35:25 UTC 1.0051 +/- 0.0035

SR3 pitch and yaw positions and the SR2 pitch and yaw positions were changed on 28th May (starting at 19:14:34 UTC time).

SR3 and SR2 positions moved back to pre-28th values on 30th May (20:51:03 UTC time).

So we still should be able to gain around 0.0296 ~ 3% optical gain increase, provided we stay at the spot on the OFI we had post 24th April:

SR2 Yaw slider = 2068 uradians
SR2 Pitch slider = -3 uradians

SR3 Yaw slider = 120 uradians

SR3 Pitch slider = 438 uradians
 

Date         | kappa_C | Optical Gain [mA/pm] | Comments
10th January | 1.0117  | 8.46                 | O4a
14th April   | 1.0347  | 8.66                 | Post vent
23rd May     | 1.0051  | 8.41                 | Post OFI 'burn'

 

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 12:05, Monday 10 June 2024 (78350)

I'm not really sure why, but our optical gain is particularly good right now.  And, it's still increasing even though we've been locked for 12+ hours. 

The other times in this plot where the optical gain is this high are around April 5th (well before the OFI incident) and May 30th.

Images attached to this comment
jenne.driggers@LIGO.ORG - 12:23, Monday 10 June 2024 (78353)

Actually, this *might* be related to the AS72 offsets we've got in the guardian now.  Next time we're commissioning, we should re-measure the SR2 and SRM spot positions.

Images attached to this comment
jennifer.wright@LIGO.ORG - 16:19, Thursday 13 June 2024 (78376)

Jennie W, Sheila, Louis

 

I recalculated the optical gain for pre-vent, as I had mixed up the time in PDT with the UTC time for this measurement; it was actually from the 11th January 2024.

Also the value I was using for OMC-DCPD_SUM_OUT/LSC-DARM_IN1 in mA/counts changes over time, and the optical gain reference value in counts/m also changes between before the vent, April, and now.

Louis wrote a script that grabs the correct front-end calibration (when this is updated the kappa C reference is updated) and the measured OMC-DCPD_SUM_OUTPUT to the DARM loop error point.

Instructions for running the code can be found here.

The code calculates current optical gain = kappa C  * reference optical gain * 1e-12 / (DARM_IN1 divided by OMC-DCPD_SUM)

                                                       [mA/pm] = [counts/m] * [m/pm]  / [counts/mA]
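
In code form, a sketch of the same calculation (the inputs would come from the front-end calibration and the measured channel ratio; nothing is filled in here):

# Sketch of the formula above; arguments are placeholders to be filled from Louis's script.
def optical_gain_mA_per_pm(kappa_c, ref_optical_gain, darm_in1_per_dcpd_sum):
    """
    kappa_c                : dimensionless
    ref_optical_gain       : reference optical gain [counts/m]
    darm_in1_per_dcpd_sum  : LSC-DARM_IN1 / OMC-DCPD_SUM [counts/mA]
    returns                : optical gain [mA/pm]
    """
    return kappa_c * ref_optical_gain * 1e-12 / darm_in1_per_dcpd_sum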

All the kappa Cs in the table below and all the optical gains were calculated by Louis's script except for the 14th April.

I calculated the optical gain on the 14th April assuming as in the entry above that we would have got a (~0.0189) increase in kappa C if we had previously (before the 14th April) implemented the OMC alignment offsets we in fact implemented post OFI burn, on 1st May.

I went to these reference times and measured coupled cavity pole, f_cc and P_circ the arm cavity power.

I also checked the OMC Offset and OMC-DCPD_SUM for these times (which shouldn't change).

Date                      | kappa_C | f_c [Hz]  | Optical Gain [mA/pm] | P_Circ X/Y                            | OMC Offset          | OMC-DCPD_SUM        | Comments
11th January 06:36:26 UTC | 1.0122  | 441 +/- 7 | 8.33                 | 368 kW +/- 0.6 kW                     | 10.9405 +/- 0.0002  | 40 mA +/- 0.005 mA  | O4a, time actually 11/01/2024 06:36:26 UTC
14th April 03:54:54 UTC   | 1.0257  | 391 +/- 7 | 8.70                 | 375 kW +/- 0.8 kW                     | 10.9402 +/- 0.0002  | 40 mA +/- 0.003 mA  | Post vent
23rd May 06:35:41 UTC     | 1.0044  | 440 +/- 6 | 8.52                 | 382 kW +/- 0.4 kW / 384 kW +/- 0.4 kW | 10.9378 +/- 0.00002 | 40 mA +/- 0.006 mA  | Post OFI 'burn'
12th June 09:43:04 UTC    | 1.0171  | 436 +/- 7 | 8.62                 | 379 kW +/- 0.6 kW / 380 kW +/- 0.5 kW | 10.9378 +/- 0.00002 | 40 mA +/- 0.004 mA  | Post HAM6 pressure spike

In summary we have (8.62/8.70) *100 = 99.1% of the optical gain that we could have achieved before the OFI burn, and our current optical gain  is (8.62/8.33)*100 = 103.5 % of that before the vent.

We do not appear to be doing worse in optical gain since the vacuum spikes last week.

 
 
Images attached to this comment