Reports until 16:43, Friday 06 September 2024
H1 General
oli.patane@LIGO.ORG - posted 16:43, Friday 06 September 2024 (79958)
Ops Day Shift End

TITLE: 09/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Lots of locklosses but pretty easy relocks
LOG:

14:30 Observing and Locked for 30 mins
15:53 Lockloss after almost 2 hours locked https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79946
16:06 Lockloss from TRANSITION_DRMI_TO_3F, starting initial alignment
16:29 Initial alignment done, relocking
17:20 NOMINAL_LOW_NOISE    
17:23 Observing

18:51 Lockloss after 1.5 hours https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79949
    - Had to restore ETMY and TMSY to the values from the previous lock's LOCKING_ARMS_GREEN, and then still had to touch them up to get ALSY to lock
20:14 Observing

22:27 Lockloss after 2.25 hours https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79954
22:28 Started an initial alignment
    - Couldn't get to SRC_ALIGN, so I went to ACQUIRE_SRY and then moved SRM to minimize the error signals in SRC1. Then SRC locked and offloaded fine
22:56 Initial alignment done, relocking

23:42 NOMINAL_LOW_NOISE

Start Time System Name Location Laser_Haz Task End Time
23:58 SAF H1 LHO YES LVEA is laser HAZARD 18:24
14:49 FAC Kim OpticsLab n Tech clean 15:05
15:38 FAC Mitchell EX, EY n Dust monitor checks 16:31
16:11 FAC Kim OpticsLab n Tech clean - vacuuming up sand 16:56
16:19 PCAL Francisco PCAL Lab y(local) Checking for sand 16:31
16:20 OPT Sheila OptLab n Checking for sand 16:31
18:37 FAC Kim OptLab n Checking for more sand 18:47
21:46 FIT Vicky YARM n Running fast 22:18
H1 TCS
oli.patane@LIGO.ORG - posted 16:31, Friday 06 September 2024 (79957)
TCS Chiller Water Level Top-Off FAMIS

Closes FAMIS#27797, last checked 79723

Filled both to close to the max. No leak in the water cup.

TCSX:

Before: 29.2

After: 30.5

Added 440mL of water

TCSY:

Before: 9.8

After: 10.6

Added 275mL of water

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:06, Friday 06 September 2024 (79956)
OPS Eve Shift Start

TITLE: 09/06 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.14 μm/s
QUICK SUMMARY:

IFO is LOCKING and in PRMI_ASC

H1 SEI
oli.patane@LIGO.ORG - posted 15:31, Friday 06 September 2024 (79955)
Seismometer Mass Check Monthly FAMIS

Closes FAMIS#26493, last checked 79438

T240 (channels averaged between 2024-09-06 22:13:59 - 22:14:09UTC)

There are 14 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.705 [V]
ETMX T240 2 DOF Y/V = -0.571 [V]
ETMX T240 2 DOF Z/W = -0.543 [V]
ITMX T240 1 DOF X/U = -1.471 [V]
ITMX T240 1 DOF Y/V = 0.317 [V]
ITMX T240 1 DOF Z/W = 0.42 [V]
ITMX T240 3 DOF X/U = -1.535 [V]
ITMY T240 3 DOF X/U = -0.753 [V]
ITMY T240 3 DOF Z/W = -1.878 [V]
BS T240 1 DOF Y/V = -0.404 [V]
BS T240 3 DOF Y/V = -0.321 [V]
BS T240 3 DOF Z/W = -0.504 [V]
HAM8 1 DOF Y/V = -0.512 [V]
HAM8 1 DOF Z/W = -0.824 [V]

All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = -0.108 [V]
ETMX T240 1 DOF Y/V = -0.111 [V]
ETMX T240 1 DOF Z/W = -0.191 [V]
ETMX T240 3 DOF X/U = -0.133 [V]
ETMX T240 3 DOF Y/V = -0.201 [V]
ETMX T240 3 DOF Z/W = -0.104 [V]
ETMY T240 1 DOF X/U = 0.013 [V]
ETMY T240 1 DOF Y/V = 0.097 [V]
ETMY T240 1 DOF Z/W = 0.162 [V]
ETMY T240 2 DOF X/U = -0.094 [V]
ETMY T240 2 DOF Y/V = 0.167 [V]
ETMY T240 2 DOF Z/W = 0.039 [V]
ETMY T240 3 DOF X/U = 0.175 [V]
ETMY T240 3 DOF Y/V = 0.036 [V]
ETMY T240 3 DOF Z/W = 0.096 [V]
ITMX T240 2 DOF X/U = 0.121 [V]
ITMX T240 2 DOF Y/V = 0.24 [V]
ITMX T240 2 DOF Z/W = 0.204 [V]
ITMX T240 3 DOF Y/V = 0.106 [V]
ITMX T240 3 DOF Z/W = 0.108 [V]
ITMY T240 1 DOF X/U = 0.05 [V]
ITMY T240 1 DOF Y/V = 0.057 [V]
ITMY T240 1 DOF Z/W = -0.059 [V]
ITMY T240 2 DOF X/U = 0.044 [V]
ITMY T240 2 DOF Y/V = 0.193 [V]
ITMY T240 2 DOF Z/W = 0.055 [V]
ITMY T240 3 DOF Y/V = 0.037 [V]
BS T240 1 DOF X/U = -0.179 [V]
BS T240 1 DOF Z/W = 0.082 [V]
BS T240 2 DOF X/U = -0.104 [V]
BS T240 2 DOF Y/V = 0.004 [V]
BS T240 2 DOF Z/W = -0.133 [V]
BS T240 3 DOF X/U = -0.211 [V]
HAM8 1 DOF X/U = -0.279 [V]
 

STS (channels averaged between 2024-09-06 22:21:52 - 22:22:02UTC)

There is 1 STS proof mass out of range ( > 2.0 [V] )!
STS EY DOF X/U = -2.402 [V]

All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.521 [V]
STS A DOF Y/V = -0.829 [V]
STS A DOF Z/W = -0.536 [V]
STS B DOF X/U = 0.335 [V]
STS B DOF Y/V = 0.945 [V]
STS B DOF Z/W = -0.448 [V]
STS C DOF X/U = -0.764 [V]
STS C DOF Y/V = 0.766 [V]
STS C DOF Z/W = 0.563 [V]
STS EX DOF X/U = -0.17 [V]
STS EX DOF Y/V = -0.038 [V]
STS EX DOF Z/W = 0.075 [V]
STS EY DOF Y/V = 0.005 [V]
STS EY DOF Z/W = 1.225 [V]
STS FC DOF X/U = 0.24 [V]
STS FC DOF Y/V = -1.056 [V]
STS FC DOF Z/W = 0.622 [V]
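
For reference, a minimal sketch (Python) of the kind of averaging/threshold check this FAMIS task performs, assuming gwpy for data access; the channel name below is an illustrative placeholder, not the real mass-monitor channel:

# Average a mass-position channel over the 10 s window and flag it if outside the limit.
from gwpy.timeseries import TimeSeries

T240_LIMIT = 0.3   # [V], threshold quoted above
STS_LIMIT = 2.0    # [V]
SPAN = ("2024-09-06 22:13:59", "2024-09-06 22:14:09")  # 10 s averaging window

def check_mass(channel, limit):
    data = TimeSeries.get(channel, *SPAN)       # fetch via NDS
    mean = float(data.mean().value)
    return mean, abs(mean) > limit

# Placeholder channel name for illustration only:
mean, out_of_range = check_mass("H1:ISI-ITMX_T240_1_DOF_X_MON", T240_LIMIT)
print(("OUT OF RANGE" if out_of_range else "ok") + f": {mean:.3f} [V]")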

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 15:30, Friday 06 September 2024 - last comment - 16:45, Friday 06 September 2024(79954)
Lockloss

Lockloss @ 09/06 22:27UTC after 2:13 hours locked. Not sure why these locks have been so short.

Comments related to this report
oli.patane@LIGO.ORG - 16:45, Friday 06 September 2024 (79959)

23:44 Observing

H1 PSL
oli.patane@LIGO.ORG - posted 15:08, Friday 06 September 2024 (79952)
PSL Status Report Weekly FAMIS

Closes FAMIS#26293, last checked 79718

Everything is looking normal. ISS diff power is a bit low but has been jumping up to ~2.5% every once in a while, and AMP1 output power is also a little low.


Laser Status:
    NPRO output power is 1.827W (nominal ~2W)
    AMP1 output power is 64.37W (nominal ~70W)
    AMP2 output power is 137.4W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 10 days, 4 hr 35 minutes
    Reflected power = 21.82W
    Transmitted power = 105.0W
    PowerSum = 126.8W

FSS:
    It has been locked for 0 days 3 hr and 11 min
    TPD[V] = 0.8249V

ISS:
    The diffracted power is around 1.9%
    Last saturation event was 0 days 3 hours and 11 minutes ago


Possible Issues:
    AMP1 power is low
    PMC reflected power is high
    ISS diffracted power is low
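
For context, a rough sketch of how these flags could be generated by comparing readbacks to the nominals above; pyepics-style channel access is assumed, and the PV names and limits are illustrative placeholders, not the actual FAMIS script:

from epics import caget  # pyepics

checks = [
    # (placeholder PV,               low,   high,  message)
    ("H1:PSL-AMP1_OUTPUT_POWER",     68.0,  None,  "AMP1 power is low"),
    ("H1:PSL-PMC_REFL_POWER",        None,  20.0,  "PMC reflected power is high"),
    ("H1:PSL-ISS_DIFFRACTED_POWER",  2.0,   None,  "ISS diffracted power is low"),
]

for pv, low, high, message in checks:
    value = caget(pv)
    if (low is not None and value < low) or (high is not None and value > high):
        print(f"{message} ({pv} = {value})")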

H1 General
oli.patane@LIGO.ORG - posted 11:52, Friday 06 September 2024 - last comment - 13:18, Friday 06 September 2024(79949)
Lockloss

Lockloss @ 09/06 18:51UTC

Lost lock after 1.5 hours, similar in length to the previous lock (1:47)

Comments related to this report
oli.patane@LIGO.ORG - 13:18, Friday 06 September 2024 (79950)

20:14 Observing

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 08:55, Friday 06 September 2024 - last comment - 10:23, Friday 06 September 2024(79946)
Lockloss

Lockloss @ 09/06 15:53UTC

Comments related to this report
oli.patane@LIGO.ORG - 10:23, Friday 06 September 2024 (79947)

17:23 Observing

LHO VE
david.barker@LIGO.ORG - posted 08:17, Friday 06 September 2024 (79945)
Fri CP1 Fill

Fri Sep 06 08:07:47 2024 INFO: Fill completed in 7min 43secs

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:39, Friday 06 September 2024 (79944)
Ops Day Shift Start

TITLE: 09/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 4mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:

Observing and have been Locked for 30 mins. Everything normal

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Thursday 05 September 2024 (79943)
OPS Eve Shift Summary

TITLE: 09/06 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 2:46 UTC

Very quiet shift:

LOG:

None

Images attached to this report
X1 SUS (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 18:57, Thursday 05 September 2024 - last comment - 16:19, Saturday 07 September 2024(79941)
BBSS M1 Pitch Instability F1 BOSEM Drift: The Saga Continues

Ibrahim, Oli, Jeff, Betsy, Joe, Others

Summary:

Relevant Alogs:

alog 79079: Recent Post-TF Diagnostic Check-up - one of the early discoveries of the drift and pitch instability.

alog 79181: Recent M1 TF Comparisons. More recent TFs have been taken (found at: /ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Data on the X1 network). We are waiting on updated confirmation of model parameters in order to know what we should correctly be comparing our measurements to. We just confirmed d4 a few days ago following the bottom wire loop change and now seek to confirm d1 and what that means with respect to our referential calibration block.

alog 79042: First investigation into the BOSEM drift - still operating erroneously under the temperature assumption.

alog 79032: First discovery of the drift issue, originally erroneously thought to be part of the diurnal temperature-driven suspension sag (where I thought that some blades sagging more than others contributed to the drift in pitch).

Hypothesis:

We think that this issue is related to the height of the blades for these reasons:

  1. The issue was fixed when we lowered all blades from the calibration block's "nominal" or zero by -1.5mm with all 4 blades roughly close to this number (avg -1.5mm)
  2. The issue came back when we attempted to fix the S-shaped M1 blade tip by correcting the extra swivel it needed to have in order to stay at the same height (Joe's recommendation to Betsy).
  3. Oli and Jeff have a d1 investigation in alog 76071 that overlays different P-to-P model TFs for blade heights above/below their physical D (called FD in the attached plots).
    1. Interestingly, there is a new mode at roughly 1.9Hz when d differs from the model's physical D by ±4mm. This mode is confirmed to not be cross coupling. Our recent TFs don't show it, but earlier TFs taken during the drift do - I think this is a red herring.
    2. More clearly, the attached file shows overlays from different d1 sizes (Pitch).
  4. While the F1 blades are at an avg height of -1.5mm below the nominal calibration block height, the spread between the individual blades is larger than before, with the problematic "soft/S" blade measuring at only -1mm and another blade at -1.8mm. The spread of the individual blade heights is the only difference between our current drifty -1.5mm avg and the earlier non-drifty -1.5mm avg. At this point, I'm interested in seeing how the spread of the individual blades affects the drift, in addition to just an average d1 drop - could it be a combination of these effects? We can investigate the latter by playing with the model and the former by empirically measuring the drift itself.

Our Units:

Sensor Calibration Block Nominal: 0mm = 25.5mm using shims, drifty - the reference that the measurements below are based on
Config 1: -1.5mm avg = 24mm using shims, no drift
Config 2: -1.5mm avg = 24mm using shims, drifty. The only difference is that the spread of the individual blade tip heights is greater. Indiv blade heights: -1.6mm, -1.5mm, -1.0mm, -1.8mm.

We need to know how the calibration block converts to model parameters in d1 and whether that's effective or physical d1 in the model. Then we can stop using referential units.

To further investigate, we have questions:

  1. What is the "sensor calibration block" calibrated to? Physical D (Center of Mass to blade tip) or Effective D? What are these values? We just want a model-based way to test parameters rather than the cal block or shim methods, since right now we're going off potentially old information.
  2. Could differences between the 4 individual blades be causing a drift this stark? (i.e. it's not a net d1 height issue but a blade to blade height issue or a combo). I'm thinking this may be the case since we have two equal net heights (-1.5mm avg) with the only difference being the spread of the indiv. heights.

Some Early Observations (attempting to constrain our model to our measurements):

  1. TFs before and after the F1 drift manifested (now vs 7 days ago) barely change the actual peak locations, but that's expected given the nature of TFs (I think).
  2. The difference between -1mm and -3mm drastically changes the 1.05Hz peak's position. In general, mm-scale changes produce noticeable sub-Hz frequency shifts.
  3. The shape of the model curve is different for -3mm and -5mm, having positive inflection. Anything higher has our straight/negative inflection shape.

Attachments:

F1Drift09052024: BOSEM drift over the last 7 days. Notice that the F1 OSEM is the only one showing a drift. LF and RT show a diurnal, temperature-based change due to suspension sagging, but this is unrelated.
F1DriftEuler09052024: BOSEM Euler Basis Drift over the last 7 days. Notice that only Pitch is showing the drift
F1DriftM2CountsEuler09052024: BOSEM counts drift in the M2 (PUM) stage, in both the Euler and direct bases. Notice that there is no perceivable drifting or pitching here. Disclaimer: The M2 sat-amp box is old and has a transimpedance issue. I just got a spare and will switch it out when not on shift.
triplemodelcomp_2024-08-30_2300_BBSS_M1toM1: Oli's TF model-to-measurement comparison with different physical d1 +- mm distances. Pitch here is the most important. We want to empirically fit the model to the measurement, but we do not yet know the absolute height of the calibration block in model terms.
allbbss_2024-jan05vJuly12Aug30_X1SUSBS_M1_ALL_ZOOMED_TFs: Oli's Drift v. No Drift v. Model Comparison. Oli is planning on posting an alog both with this information and the d1 distance comparisons once we ascertain calibration block absolute units.
Images attached to this report
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 16:19, Saturday 07 September 2024 (79969)

Update to the triplemodelcomp_2024-08-30_2300_BBSS_M1toM1 file Ibrahim attached - there is an update to the legend. In that version I had the description for the July 12th measurement as 'New wire loop, d1=-1.5mm, no F1 drift', but there was actually F1 drift during that measurement - it had just started over a week before, so the OSEM values weren't declining as fast as they had been earlier that week. I also want to be more specific about what d1 means in that context, so in this updated version I changed July's d1 to d1_indiv, to hopefully better show that that value of -1.5mm is the same for each blade, whereas for the August measurements (now posted) we have d1_net, because the blade heights differ by several tenths of a mm but still average out to the same -1.5mm.

Non-image files attached to this comment
H1 SQZ
victoriaa.xu@LIGO.ORG - posted 18:00, Thursday 05 September 2024 (79942)
SHG locking issue b/c demod RF Max threshold exceeded when SHG unlocked

Naoki, Vicky -

Had an SHG relocking issue just now, when the squeezer briefly dropped lock at 2024 Sept 5 22:46:16 UTC, for the PMC to relock (its PZT bottomed out, routine issue).

SHG guardian went into a weird locking loophole which we had not seen before, summarized in the screenshot. The SHG IR trans PD locking beatnote strength goes high when the SHG is unlocked - see the H1:SQZ-SHG_TRANS_RF24_DEMOD_RFMON signal. Its threshold is nominally H1:SQZ-SHG_TRANS_RF24_DEMOD_RFMAX = 0. But SHG_GRD has a hard-fault state that brings the guardian down if this threshold is exceeded, so GRD would try to LOCK, see the error message for RF power level overload, then go DOWN, and it's stuck. See the SHG guardian logs.

#FIXME: resolve this issue. Ideas - either remove this race condition from the SHG guardian (send to IDLE if in fault?), change the RF threshold, etc.

To fix it this time, I manually changed the threshold to H1:SQZ-SHG_TRANS_RF24_DEMOD_RFMAX = 5 (threshold was at 0, the demod beatnote was at 3.3), then brought SHG_GRD to LOCKED (worked fine), then reset the threshold back to H1:SQZ-SHG_TRANS_RF24_DEMOD_RFMAX = 0.
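
For the record, the manual workaround amounts to a few channel writes; a rough sketch assuming pyepics, with the guardian request PV name being an assumption:

import time
from epics import caget, caput

RFMAX = "H1:SQZ-SHG_TRANS_RF24_DEMOD_RFMAX"

nominal = caget(RFMAX)                      # nominally 0
caput(RFMAX, 5)                             # raise threshold above the unlocked beatnote (~3.3)
caput("H1:GRD-SQZ_SHG_REQUEST", "LOCKED")   # assumed request PV for the SHG guardian node
time.sleep(30)                              # give the SHG time to relock
caput(RFMAX, nominal)                       # restore the nominal threshold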

#TODO: decide whether this is a problem that needs fixing or just a weird one-off issue. Trending back, the RF demod power basically always goes high when unlocked, which often triggers this race condition. I'm not sure 1) why it was a problem this time, or 2) why it is not a problem every time.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 16:47, Thursday 05 September 2024 - last comment - 16:33, Tuesday 10 September 2024(79939)
DARM comparison, BRUCO results post-commissioning

We made some improvements today in the sensitivity, going from about 151 Mpc on GDS CLEAN to about 158 Mpc. However, our best range from April 11th (DARM FOM reference pre-OFI disaster) is around 165 Mpc. I made a comparison of that time and now, with today's commissioning improvements, to see where we are still missing range. I have attached the four plots produced by darm_integral_compare (see alog 76935 for directions).

The range integrand plot makes it much easier to see that we are still missing sensitivity around the mid-frequency band. However, the sensitivity difference shows that we lose 5 Mpc of range by 40 Hz as well. Much of this range loss seems to come from a variety of peaks that have appeared since the OFI vent, such as the 20 Hz peak. We lose another ~3 Mpc between 40-200 Hz.
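
As an aside, a simplified way to see the same thing without the darm_integral_compare script: to leading order the inspiral SNR^2 integrand scales as f^(-7/3)/S(f), so comparing that quantity between two PSDs shows where range is won or lost. A sketch with gwpy, using placeholder times:

import numpy as np
from gwpy.timeseries import TimeSeries

def range_integrand(start, end, channel="H1:GDS-CALIB_STRAIN_CLEAN"):
    psd = TimeSeries.get(channel, start, end).psd(fftlength=8, overlap=4)
    f = psd.frequencies.value
    integrand = np.zeros_like(f)
    band = f > 10                                   # ignore the sub-10 Hz seismic wall
    integrand[band] = f[band] ** (-7.0 / 3.0) / psd.value[band]
    return f, integrand

f, now = range_integrand("2024-09-05 23:00", "2024-09-06 00:00")    # placeholder stretch
_, april = range_integrand("2024-04-11 08:00", "2024-04-11 09:00")  # placeholder stretch
frac_lost = 1 - np.trapz(now, f) / np.trapz(april, f)  # crude fractional SNR^2 loss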

I ran a bruco with GDS CALIB STRAIN CLEAN on high range time after commissioning today: post-commissioning bruco

It looks like many of these new low frequency peaks (like the large 20 Hz peak) are well witnessed by things like PSL accelerometers, indicating that they could be from jitter: PEM-CS_ACC_PSL_TABLE1_Y_DQ

Generally, there is a lot of jitter coherence, and given that this is the CLEAN channel, that's probably a sign that the jitter cleaning could be improved, maybe making use of other witness channels if the current witnesses are insufficient to subtract the noise.
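
A quick way to spot-check any single bruco entry is a direct gwpy coherence between the CLEAN strain and the witness channel, e.g. the PSL accelerometer above (placeholder times):

from gwpy.timeseries import TimeSeriesDict

start, end = "2024-09-06 00:00", "2024-09-06 01:00"  # placeholder lock stretch
chans = ["H1:GDS-CALIB_STRAIN_CLEAN", "H1:PEM-CS_ACC_PSL_TABLE1_Y_DQ"]
data = TimeSeriesDict.get(chans, start, end)
coh = data[chans[0]].coherence(data[chans[1]], fftlength=8, overlap=4)
print(coh.value_at(20))  # coherence near the 20 Hz peak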

A peak at 30 Hz has some coherence with MAG sensor channels, here is one: PEM-CS_MAG_LVEA_VERTEX_X_DQ

Right around 35.4 Hz, there is a lot of coherence with various ISI HAM6 sensors and OMC ASC sensors. For example: ISI-HAM6_GS13INF_V1_IN1_DQ

There is also still a large amount of LSC REFL RIN coherence up to 1 kHz: LSC-REFL_RIN_DQ

I think we should test the PRCL offset again, especially because this will help reduce the CHARD Y noise coupling (ASC-CHARD_Y_OUT_DQ) and will also possibly help this HF noise (frequency noise? intensity noise?)

SRCL is better than before, but maybe has more room for improvement between 10-25 Hz: LSC-SRCL_OUT_DQ

DHARD Y coherence is low, but still present, so we should be careful with the WFS offset: ASC-DHARD_Y_OUT_DQ

There is still PRCL coherence: LSC-PRCL_OUT_DQ which is likely coupling through a combination of CHARD Y, SRCL, and LSC REFL RIN. Again a PRCL offset will help. Other strategies are to check POP phasing, POP sensing, etc. Reminder: PRCL feedforward failed, so we need to consider other avenues for noise reduction.

To summarize some strategies to get back to April sensitivity:

Editing because I went back to check the previous PRCL offset work and found this comment: 76818. In short, we can fix the REFL RIN coherence, but it has no effect on the sensitivity. However, it can improve CHARD Y noise, although at the time I don't think we were limited by CHARD Y enough to see the low-frequency benefit.

Images attached to this report
Comments related to this report
derek.davis@LIGO.ORG - 11:29, Friday 06 September 2024 (79948)DetChar, DetChar-Request

Regarding the 20 Hz line, this line disappeared from DARM yesterday (Sept 5) from roughly 12:45 - 14:15 UTC. Matching Elenna's note about coherence with PSL environmental channels, the same line disappears from the PSL microphones and accelerometers at the same time. Furthermore, there are short time windows where this line disappears from PSL channels. This behavior happens roughly (not the exact same gap each time) at 2-hour intervals.

These clues may be helpful for any investigation into the source of this line.  

Images attached to this comment
elenna.capote@LIGO.ORG - 15:18, Friday 06 September 2024 (79953)

Another note about PRCL Offsets and CHARD Y:

I have attached a screenshot plot comparing the PRCL offset on/off times with the noise in CHARD Y (I used the on/off times from this April alog: 76814). The PRCL offset did reduce the noise in CHARD Y a small amount, and also reduced the CHARD Y coherence with DARM. I don't think at the time of this test we were limited by CHARD Y, so we didn't actually see a change in sensitivity from this test. Therefore, it's worth trying the offset again since we seem to have more CHARD Y noise coupling right now.

Images attached to this comment
elenna.capote@LIGO.ORG - 16:33, Tuesday 10 September 2024 (80029)

Here is a comparison of a longer-span time from April and from last night's lock. Using 2-hour blocks of glitch-free time, I created these DARM comparison plots.

There were further small improvements in the sensitivity from when these plots were last made, so they are not completely comparable to the plots in the original alog.

These plots indicate that we have actually gained some low frequency sensitivity since April, although we are definitely seeing more peaks around low frequency than before the emergency vent. We are still missing some range around 100 Hz.

Images attached to this comment
H1 SQZ
victoriaa.xu@LIGO.ORG - posted 12:26, Wednesday 04 September 2024 - last comment - 16:39, Friday 06 September 2024(79903)
FIS at different SRCL offsets, going through 0

Naoki, Sheila, Vicky - FIS Measurements at different SRCL offsets

Setup steps:

Measurement steps:

  1. Note the SRCL offset counts: caget H1:LSC-SRCL1_OFFSET
  2. Run SCAN_SQZANG
  3. Take measurement at that SRCL offset. Note gpstime.
  4. Check ADC counts to see if we can keep going. Right now, it looks like each 75-count step on the SRCL offset is about 3.5k ADC counts. (A minimal sketch of this stepping loop follows the list.)
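
A minimal sketch of this stepping loop, assuming pyepics for the EPICS writes; running SCAN_SQZANG and the measurement bookkeeping are left as placeholders:

import time
from epics import caput
from gwpy.time import tconvert

SRCL_OFFSET = "H1:LSC-SRCL1_OFFSET"

for offset in (-175, -100, -250, -325, -400, -475):
    caput(SRCL_OFFSET, offset)   # step the SRCL offset (counts)
    # ... run SCAN_SQZANG here and wait for the sqz angle to settle ...
    time.sleep(60)
    print(f"SRCL offset {offset} cts, gpstime {int(tconvert('now'))}")
    # ... check ADC counts before deciding whether to keep going ...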

All spans 60s

FIS original SRCL offset @ -175 1409509282  (reference, original FDS settings). Span 60s, brown

FIS SRCL offset @ -100  1409509687  (worse SRCL detuning), pink

FIS SRCL offset @ -250  1409510096  (better SRCL detuning, closer to 0), green

FIS SRCL offset @ -325  1409510392  (even better SRCL detuning), yellow

FIS SRCL offset @ -400  1409510673  (still better SRCL detuning), blue

FIS SRCL offset @ -475  1409511323  (very interesting, flipped around / crossed zero with SRCL detuning), black

No SQZ beam diverter closed 1409511600 - 1409511717

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 16:39, Friday 06 September 2024 (79951)

Posting the analysis for this FIS + SRCL offset data, where we can compare FIS + SRCL data to QN models, to try inferring the physical SRCL detuning (in degrees) for each offset value (in counts). 

Comparing data + models for FIS at different SRCL detunings, with kHz sqz optimized - Attachment 1. This seems like a reasonable way to estimate SRCL detuning.

  • Previously at -175 cts, it was likely around +0.3 deg. Now at -290 cts, it is likely around -0.08 deg. Note these measurements were taken over ~40 minutes, but it only uses ~2 min of data at each setting, so this could be faster.
  • These plots show the measured squeezing dBs after subtracting non-quantum noise based on an unsqz noise model. Data = circles. GWINC QN models = solid lines. In models, the SRCL detuning is just set to qualitatively match the sqz data (by-eye). Red dashed trace shows a reference at 0-degree detuning.
  • For calculating the qn models in gwinc: at each srcl detuning, the sqz angle is fit to minimize kHz sqz. Just like how, in real life, we minimized kHz sqz at each SRCL detuning for the measurements. That is, kHz sqz is minimized in both measured data and in qn models, so the data+models can be readily compared.
    • It seems like you could get a good signal on this if optimizing bucket squeezing, because kHz sqz would be a good lever arm. But based on this week, that seems potentially harder: 1) it is harder to optimize the sqz angle in real life (noisy/slow, since it requires more averaging to optimize 350 Hz noise, leaving more room for measurement error), and 2) it is potentially more complicated to model, because in the models the sqz angle needs to match the one used in the measurement at each SRCL detuning.

Comparing FIS models at various SRCL detunings + fit how SRCL offset scales from counts to degrees - Attachment 2

  • Left panel shows that around 0-degree srcl detuning, the differences in squeezing are very marginal and hard to see (compare the red / yellow / green traces).
    • From this plot, we can see why we thought blue = -400 cts trace was a reasonable point to try yesterday in lho79929 -- that setting seems to minimize 100 Hz noise if we only had FIS. But we have FDS, where the anti-squeezed quantum noise below 100 Hz should be "taken care of" by the FC. So, seems reasonable to aim for zero detuning.
  • Right panel shows a fit of the SRCL offset from counts to degrees; the fit suggests -274 counts would minimize the detuning. We set it at -290 this week in lho79929, so very close to 0! Note that the different signs of SRCL detuning seem to impact the squeezing quite differently.

So far this code is living here.

Here also reproducing the list of times I used for the analysis, all span = 120 seconds. Also including the SRCL offset in counts of the filter bank, and the corresponding estimated SRCL detuning based on FIS quantum noise models, with khz squeezing angle minimized.

nosqz: 1409511600
FIS -100: 1409509687, pink.        ~~> +0.9 deg
FIS -175: 1409509282, brown.    ~~> +0.3 deg
FIS -250: 1409510096, green.     ~~> +0.1 deg
FIS -325: 1409510392, orange.  ~~> -0.1 deg
FIS -400: 1409510673, blue.       ~~> -0.5 deg
FIS -475: 1409511323, black.     ~~> -1.1 deg
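
For completeness, the counts-to-degrees fit is just a straight line through the values above; a numpy-only sketch (detunings are the approximate model-matched values listed here):

import numpy as np

counts  = np.array([-100, -175, -250, -325, -400, -475])
degrees = np.array([ 0.9,  0.3,  0.1, -0.1, -0.5, -1.1])  # approx. detunings from the list above

slope, intercept = np.polyfit(counts, degrees, 1)
print(f"zero detuning at ~{-intercept/slope:.0f} counts")  # ~ -274 counts, as quoted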

Images attached to this comment