Reports until 10:32, Wednesday 10 April 2024
H1 SEI
oli.patane@LIGO.ORG - posted 10:32, Wednesday 10 April 2024 (77088)
H1 ISI CPS Noise Spectra Check FAMIS

Closes FAMIS#25986, last checked in 76761

They all look very similar to at least the last few weeks of checks.

Non-image files attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:29, Wednesday 10 April 2024 (77087)
Wed CP1 Fill

Wed Apr 10 10:12:00 2024 INFO: Fill completed in 11min 56secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:42, Wednesday 10 April 2024 (77085)
CDS Maintenance Summary: Tuesday 9th April 2024

Added new PSL WD channel to PSLOPC SDF

Jason, Patrick, Dave:

H1:PSL-LASER_PDWD was added to the h1pslopcsdf slow controls sdf.

New 3IFO Storage Container #2 humidity sensor

Fil, Bubba, Dave:

A new dewpoint/humidity sensor was installed for CON2 (H0:VAC-3IFO_MOD_CON2_DP3, H0:VAC-3IFO_MOD_CON2_H2O_3). When the 3ifo dewpoint IOC was restarted 12mar2024, these channels were significantly different from the rest (dewpoint ~ 0C, PPM > 4000), which we interpreted as a sensor issue. In the 4 weeks since, these values have slowly dropped (see attachment). The new sensor continues where the last one left off, suggesting that this container's readings were in fact correct and there is nothing wrong with the original sensor.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:58, Wednesday 10 April 2024 - last comment - 08:10, Wednesday 10 April 2024(77083)
Ops Day Shift Start

TITLE: 04/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY:

Detector relocking and at LOWNOISE_ASC.

Comments related to this report
oli.patane@LIGO.ORG - 08:10, Wednesday 10 April 2024 (77084)

15:09UTC Observing

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 07:33, Wednesday 10 April 2024 (77082)
Lockloss at 13:11UTC

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi/event/1396789891

The LL webpage hasn't been loading for me today, so I haven't looked very closely at these 2 locklosses from this morning.

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 07:26, Wednesday 10 April 2024 (77081)
OPS OWL shift report

H1 called for assistance at 14:15 UTC in INITIAL_ALIGNMENT as SRY align was stuck again, so I did the same as before to get out of IA. Back to regular locking now.

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 02:46, Wednesday 10 April 2024 (77080)
Lockloss at 07:21 UTC

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1396768899

H1 General
ryan.crouch@LIGO.ORG - posted 02:28, Wednesday 10 April 2024 (77079)
OPS OWL shift report

H1 called for assistance at 08:20 UTC in INITIAL_ALIGNMENT as SRY align was stuck, with a "Find By Hand" message in the log for ALIGN_IFO. After touching it up by hand with SRM to get AS_AIR looking the best, I left IA and went back to locking. My NoMachine session on OPSlogin0 was being very laggy and kept freezing for a few seconds at a time.

Relock #1:

Couldn't get DRMI, so went to PRMI, then was able to get DRMI.

Reacquired NLN at 09:24 UTC and back into Observing at 09:27 UTC.

LHO General
ryan.short@LIGO.ORG - posted 00:00, Wednesday 10 April 2024 (77077)
Ops Eve Shift Summary

TITLE: 04/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Some time lost observing due to locklosses at higher locking states, otherwise an uneventful evening.

H1 has now been locked at NLN for 3 hours.

LOG:

No log for this shift.

H1 General
ryan.short@LIGO.ORG - posted 20:19, Tuesday 09 April 2024 - last comment - 21:04, Tuesday 09 April 2024(77075)
Ops Eve Mid Shift Report

Following a lockloss at 00:40 UTC with an unknown cause, H1 has now had two unsuccessful lock attempts, both losing lock during or soon after TRANSITION_FROM_ETMX with ETMY and ITMX saturations. Higher winds may be contributing to the issues; gusts have been up to 30mph. I'll continue to monitor this lock attempt.

Comments related to this report
ryan.short@LIGO.ORG - 21:04, Tuesday 09 April 2024 (77076)

Third time was the charm moving past TRANSITION_FROM_ETMX; only one ETMY saturation during that state.

H1 is back to observing as of 04:03 UTC.

H1 SQZ
eric.oelker@LIGO.ORG - posted 17:26, Tuesday 09 April 2024 (77074)
Alternate PSAMs settings lead to flatter FDS and FDAS spectra

Naoki, Camilla, Eric, Vicky

While looking through some old FDS/FDAS data, we noticed that the spectra taken on March 17 2024 seemed to have the flattest antisqueezing spectra seen during ER16 (to be discussed in an upcoming log post).  The PSAMS were adjusted between March 17 and March 20th, and since then we have seen sizeable frequency dependence in our antisqueezing spectra.  Several other changes were made around this time as well, so the story isn't completely clear, but the PSAMS seem a likely suspect. 

The PSAM strain gauge values from March 17 (7.5,0.5) were not covered during our recent scans.  Shortly before the observing run this afternoon, we moved our PSAMS to (7.5,0.5), realigned, and took quick FDS and FDAS datasets.  Indeed we see that the spectra appear to have less frequency dependence.  More thorough analysis with classical noise subtraction and careful normalization/calibration will be performed in the coming days to quantify the change.  For now, we have reverted back to the old PSAM values (8.8,-0.7) for the observing period. 

It seems that our overall level of antisqueezing is a bit lower than in previous days, but there are other likely causes for this besides the PSAM settings.  We will come back to this PSAM setting later on and try to optimize further to see if we can match or exceed our best ever spectra. 

Datasets:

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 16:30, Tuesday 09 April 2024 (77063)
running A2L decoupling script

Jennie W, Sheila D

We decided to try Gabriele's angle-to-length decoupling script, which was in regular use before we started to use ADS (44532).  This is needed again because we found that the camera servos were not keeping the A2L couplings small, and because we are changing the camera offsets to be different from the setpoints found by ADS. 

To get the script working we:

The script is at userapps/isc/h1/scripts.

There is a bash script, run_all_a2l.sh, that runs this and sets the A2L gains for all 4 test masses; it calls the my_a2l.py script for each optic, for pitch and yaw.
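The per-optic loop described above could be sketched as follows. Only the optic/DOF iteration comes from the log; the exact command-line arguments of my_a2l.py are an assumption for illustration, so this is shown as a dry run that prints the commands rather than executing them.

```python
# Hypothetical sketch of what run_all_a2l.sh does: call my_a2l.py once per test mass
# and per angular DOF (pitch, yaw). Argument format is assumed, not taken from the script.
optics = ["ITMX", "ITMY", "ETMX", "ETMY"]
dofs = ["P", "Y"]

commands = [f"python my_a2l.py {optic} {dof}" for optic in optics for dof in dofs]
for cmd in commands:
    print(cmd)  # dry run; the real bash script would execute each measurement
```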

The screenshot here shows that last night the yaw noise coupling to DARM was bad, as expected, because Jennie moved the camera servo setpoint to improve build-ups 77033.  Running this script now (early in the lock, while waiting for violin modes in OMC_whitening) seems to have fixed the low-frequency noise for now.  We probably want to rerun this script with a more thermalized IFO. 

After running this script (several times), we've set the A2L gains for 'FINAL' in lscparams:

'FINAL':{
            'P2L':{'ITMX':-0.9709, #+1.0,
                   'ITMY':-0.3830, #+1.0,
                   'ETMX':4.1183,
                   'ETMY':4.6013}, #+1.0},
            'Y2L':{'ITMX':2.7837, #+1.0,
                   'ITMY':-2.3962, #+1.0,
                   'ETMX':4.9315,
                   'ETMY':3.0610 },#+1.0},# Centered on the optic #removed offsets from EY IX IY 220705 gm jd ec
                    }#spots that are the center of the optic
            }#close a2l gains dict

And accepted these values in OBSERVE.snap.

The second attached screenshot here shows HARD loop coherence with DARM after running this script, there is still CHARD Y coherence below 20Hz, but this is much better than last night.

We spent a few minutes in NLN_CAL_MEAS with nominal SQZ settings, from 22:44:31 to 22:55 UTC (1396737889).

Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 16:18, Tuesday 09 April 2024 (77071)
CBC hardware injections for NonSENS cleaning review

Just before we went into Observing, I (finally) ran the cbc hardware injections that are necessary for the final NonSENS review sign-off.  LLO ran these injections during O4a (LLO alog 69132), but I ran out of time before the commissioning break.  While the IFO is improved since the end of O4a, the cleaning parameters that were in place during this test are all the same as were present for the last several months of O4a. In particular, both jitter cleaning and laser noise cleaning were turned on during this test (even though I later turned off laser noise cleaning for O4b, see alog 77069).

injection start GPS: 1396738718.28

Below is the output of the terminal during this test, which ran for about 93 seconds.


jenne.driggers@cdsws31:~$ hwinj --run cbc clean-test-short
ifo: H1
waveform root: /ligo/groups/cal/H1/hwinj
config file: /ligo/groups/cal/H1/hwinj/hwinj.yaml
GraceDB url: https://gracedb.ligo.org/api/
GraceDB group: Detchar
GraceDB pipeline: HardwareInjection
excitation channel: H1:CAL-INJ_TRANSIENT_EXC
injection group: cbc
injection name: clean-test-short
reading waveform file...
injection waveform file: /ligo/groups/cal/H1/hwinj/cbc/O4_CLEANING_MULTI_CBCHWINJ_20s_H1.txt
injection waveform sample rate: 16384
injection waveform length: 93.0 seconds
injection start GPS: 1396738718.28
H1:CAL-INJ_TINJ_TYPE => 1
H1:CAL-INJ_TINJ_TYPE => 1
H1:CAL-INJ_TRANSIENT_SW2 => 1024
H1:CAL-INJ_TRANSIENT_SW2 => 1024
H1::CAL-INJ_TRANSIENT => ON: OUTPUT
H1::CAL-INJ_TRANSIENT => ON: OUTPUT
=== EXECUTING AWG INJECTION ===
this will wait until the injection is nearly complete...
ndshosts: h1daqnds1 and h1daqnds0
getting host by name: h1daqnds1
found host
testpoint_client 1.0.0
found version 4 or newer test point interface
H1:CAL-INJ_TRANSIENT_SW2 => 1024
H1:CAL-INJ_TRANSIENT_SW2 => 1024
H1::CAL-INJ_TRANSIENT => OFF: OUTPUT
H1::CAL-INJ_TRANSIENT => OFF: OUTPUT
H1:CAL-INJ_TINJ_TYPE => 0
H1:CAL-INJ_TINJ_TYPE => 0
=== INJECTION COMPLETE ===
jenne.driggers@cdsws31:~$ 

LHO General
ryan.short@LIGO.ORG - posted 16:01, Tuesday 09 April 2024 - last comment - 16:16, Tuesday 09 April 2024(77067)
Ops Eve Shift Start

TITLE: 04/09 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 18mph Gusts, 12mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.21 μm/s
QUICK SUMMARY: H1 has been locked for almost 30 minutes, will be entering observing shortly.

Comments related to this report
ryan.short@LIGO.ORG - 16:16, Tuesday 09 April 2024 (77070)

Started observing at 23:08 UTC

H1 CDS (ISC, PSL)
filiberto.clara@LIGO.ORG - posted 14:08, Tuesday 09 April 2024 - last comment - 12:35, Wednesday 17 April 2024(77062)
SPI Pick-off Fiber Length

WP 11805
ECR E2400083

Lengths for possible SPI Pick-off fiber. Part of ECR E2400083.

PSL Enclosure to PSL-R2 - 50ft
PSL-R2 to SUS-R2 - 100ft
SUS-R2 to Top of HAM3 (flange D7/D8) - 25ft
SUS-R2 to HAM3 (flange D5) - 20ft

Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:28, Tuesday 09 April 2024 (77072)
J. Kissel [for J. Oberling]

Jason also took the opportunity during his dust monitoring PSL incursion today to measure the distance between where the new fiber collimator would go on the PSL table to the place where it would exit at the point Fil calls the PSL enclosure.

He says 
SPI Fiber Collimator to PSL Enclosure = 9ft.
jeffrey.kissel@LIGO.ORG - 13:53, Thursday 11 April 2024 (77118)
J. Kissel [for F. Clara, J. Oberling]

After talking with Fil I got some clarifications on how he defines/measures his numbers:
   - They *do* include any vertical traversing that the cable might need to go through,
   - Especially for rack-to-rack distances, always assumes that the cable will go to the bottom of the rack (typically 10 ft height from cable tray to rack bottom), 
   - He adds two feet (on either end) such that we can neatly strain relieve and dress the cable.
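As a sketch, this convention could be written out as follows (the 76 ft tray run is a made-up example value, not a measured number):

```python
# Sketch of the cable-length convention described above: horizontal tray run, plus a
# ~10 ft vertical drop to the rack bottom at each end, plus 2 ft of strain-relief
# slack at each end. All defaults come from the log; the tray run is hypothetical.
def cable_length_ft(tray_run_ft, drop_ft=10, slack_ft=2):
    return tray_run_ft + 2 * drop_ft + 2 * slack_ft

print(cable_length_ft(76))  # e.g. a hypothetical 76 ft tray run would be quoted as 100 ft
```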

So -- the message -- Fil has already built in some contingency into the numbers above. 
(More to the point: we should NOT consider them "uncertain" and in doing so add an additional "couple of feet here" "couple of feet there" "just in case.")

Thanks Fil!

P.S. We also note that, at H1, the optical fibers exit the PSL at ground level on the +X wall of the enclosure, between the enclosure and HAM1, underneath the light pipes. They then immediately shoot up to the cable trays, wrap around the enclosure, and land in the ISC racks at PSL-R2. Hence the oddly long 50 ft number for that journey.

Jason also reports that he rounded up to the nearest foot for his measurement of the 9ft run from where the future fiber collimator will go to the PSL enclosure "feed through."
jeffrey.kissel@LIGO.ORG - 12:35, Wednesday 17 April 2024 (77249)SEI, SYS
Upon discussion with the SPI team, we want to minimize the number of "patch panel" "fiber feedthrough" connections in order to minimize loss and polarization distortion.

As such, we prefer to go directly from the "SPI pick-off in the PSL" fiber collimator directly to the Laser Prep Chassis in SUS-R2.
That being said, we'll purchase all of the above fiber lengths, so that we can re-create a full "fiber feedthrough patch panel" system as a contingency plan.

So, for the baseline plan, we'll take the "original, now contingency plan" PSL-R2 to SUS-R2, 100 ft fiber run and use that to directly connect the "SPI pick-off in the PSL" fiber collimator directly to the Laser Prep Chassis in SUS-R2.

I spoke with Fil and confirmed that 100 ft is plenty enough to make that run (from SPI pick-off in PSL to SUS-R2).
H1 CDS
david.barker@LIGO.ORG - posted 06:12, Tuesday 09 April 2024 - last comment - 10:48, Wednesday 10 April 2024(77040)
h1omc0 channel hop caused dolphin glitch of SUS and LSC

Oli, Erik, Dave:

Around 04:30 this morning an ADC channel hop on h1iopomc0 caused all of the omc0 models to stop running, and also caused a corner station Dolphin glitch which DACKILLED h1susb123, h1sush34, h1sush56 and h1lsc0.

Comments related to this report
david.barker@LIGO.ORG - 06:14, Tuesday 09 April 2024 (77041)

h1omc0 dmesg:

[Tue Apr  9 04:34:05 2024] rts_cpu_isolator: LIGO code is done, calling regular shutdown code
[Tue Apr  9 04:34:05 2024] h1iopomc0: ERROR - A channel hop error has been detected, waiting for an exit signal.
[Tue Apr  9 04:34:05 2024] h1omcpi: ERROR - An ADC timeout error has been detected, waiting for an exit signal.
[Tue Apr  9 04:34:05 2024] h1omc: ERROR - An ADC timeout error has been detected, waiting for an exit signal.
 

david.barker@LIGO.ORG - 06:22, Tuesday 09 April 2024 (77042)

Initially I thought this was an IO Chassis issue, so we power cycled h1omc0 rather than restarting all of its models (my confusion was because this front end only has one Adnaco and is running the low-noise ADC). This brought h1omc0 back up and running.

We restarted the models on h1lsc0, which cleared the DACKILL.

Oli put the SUS and SEI for BSC1,2,3 and HAM3,4,5,6 into safe, I bypassed the SEI IOP SWWDs, and we restarted the models on h1susb123, h1sush34, h1sush56. All came back with no problems.

I cleared the SWWDs, did a DIAG_RESET and cleared the DAQ CRCs.

Handing over to Oli for IFO recovery.

david.barker@LIGO.ORG - 06:35, Tuesday 09 April 2024 (77043)

CDS Overview after DIAG_RESET run on all front ends:

Images attached to this comment
david.barker@LIGO.ORG - 06:39, Tuesday 09 April 2024 (77045)

Time of OMC crash:

04:32:45 PDT

11:32:45 UTC

1396697583 GPS

david.barker@LIGO.ORG - 10:48, Wednesday 10 April 2024 (77089)
H1 SQZ
naoki.aritomi@LIGO.ORG - posted 15:59, Thursday 04 April 2024 - last comment - 18:56, Tuesday 09 April 2024(76949)
PSAMS coarse scan trial (2)

Naoki, Eric, Camilla

We continued the PSAMS coarse scan in 76925. Yesterday the IFO was not thermalized, but today the IFO has been thermalized for at least 7 hours. It seems our nominal 140/90 (strain voltage 7.22/-0.71) is close to optimal. Detailed analysis will follow.

This time, after we moved PSAMS, we compensated for the alignment change caused by the PSAMS change by looking at the OSEMs. This works well for ZM4, but not for ZM5; we need to touch ZM5 in addition to the compensation. This might be related to the beam miscentering on ZM5 as reported in 75770.

no sqz (10 min)

PDT: 2024-04-04 08:06:30 PDT
UTC: 2024-04-04 15:06:30 UTC
GPS: 1396278408

asqz 200/115 (strain voltage 9.59/-0.702) (5 min)

PDT: 2024-04-04 08:41:34 PDT
UTC: 2024-04-04 15:41:34 UTC
GPS: 1396280512

sqz 200/115 (5 min)

PDT: 2024-04-04 08:49:14 PDT
UTC: 2024-04-04 15:49:14 UTC
GPS: 1396280972

asqz 200/200 (strain voltage: 9.59/2.67) (5 min)

PDT: 2024-04-04 09:28:12 PDT
UTC: 2024-04-04 16:28:12 UTC
GPS: 1396283310

sqz 200/200 (5 min)

PDT: 2024-04-04 09:36:04 PDT
UTC: 2024-04-04 16:36:04 UTC
GPS: 1396283782

asqz 100/200 (strain voltage 6.83/2.72) (5 min)

PDT: 2024-04-04 09:55:09 PDT
UTC: 2024-04-04 16:55:09 UTC
GPS: 1396284927

sqz 100/200 (5 min)

PDT: 2024-04-04 10:02:23 PDT
UTC: 2024-04-04 17:02:23 UTC
GPS: 1396285361

asqz 0/200 (strain voltage 2.2/2.72) (5 min)

PDT: 2024-04-04 10:27:25 PDT
UTC: 2024-04-04 17:27:25 UTC
GPS: 1396286863

sqz 0/200 (5 min)

PDT: 2024-04-04 10:35:27 PDT
UTC: 2024-04-04 17:35:27 UTC
GPS: 1396287345

asqz 140/90 (strain voltage 7.22/-0.71) (5 min)

PDT: 2024-04-04 11:04:22 PDT
UTC: 2024-04-04 18:04:22 UTC
GPS: 1396289080

sqz 140/90 (5 min)

PDT: 2024-04-04 11:12:41 PDT
UTC: 2024-04-04 18:12:41 UTC
GPS: 1396289579

asqz 170/90 (strain voltage 8.80/-0.70) (5 min)

PDT: 2024-04-04 11:55:59 PDT
UTC: 2024-04-04 18:55:59 UTC
GPS: 1396292177

sqz 170/90 (5 min)

PDT: 2024-04-04 12:03:26 PDT
UTC: 2024-04-04 19:03:26 UTC
GPS: 1396292624

asqz 75/90 (strain voltage 5.77/-0.71) (5 min)

PDT: 2024-04-04 12:24:44 PDT
UTC: 2024-04-04 19:24:44 UTC
GPS: 1396293902

sqz 75/90 (5 min)

PDT: 2024-04-04 12:31:39 PDT
UTC: 2024-04-04 19:31:39 UTC
GPS: 1396294317

asqz 130/125 (strain voltage 7.21/0.26) (5 min)

PDT: 2024-04-04 14:58:24 PDT
UTC: 2024-04-04 21:58:24 UTC
GPS: 1396303122

sqz 130/125 (5 min)

PDT: 2024-04-04 15:06:03 PDT
UTC: 2024-04-04 22:06:03 UTC
GPS: 1396303581

asqz 130/83 (strain voltage 7.22/-1.2) (5 min)

PDT: 2024-04-04 15:39:41 PDT
UTC: 2024-04-04 22:39:41 UTC
GPS: 1396305599

sqz 130/83 (5 min)

PDT: 2024-04-04 15:46:36 PDT
UTC: 2024-04-04 22:46:36 UTC
GPS: 1396306014
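The UTC/GPS pairs above can be cross-checked with a short conversion, assuming the 18-second GPS-UTC leap-second offset in effect since 2017 (the helper name is ours):

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset as of 2024; assumes no new leap seconds since 2017

def utc_to_gps(utc_str):
    """Convert a 'YYYY-MM-DD HH:MM:SS' UTC string to GPS seconds."""
    dt = datetime.strptime(utc_str, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return int((dt - GPS_EPOCH).total_seconds()) + LEAP_SECONDS

print(utc_to_gps("2024-04-04 15:06:30"))  # -> 1396278408, matching the 'no sqz' entry above
```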

Comments related to this report
eric.oelker@LIGO.ORG - 16:31, Thursday 04 April 2024 (76960)
Attached are the averaged squeezing and anti-squeezing DARM spectra for each PSAMS value we've measured so far.  Based on the coarse scan, we see that our initial PSAM values (strain voltages of 7.22/-0.71 for ZM4/ZM5) appear to be roughly optimal, giving roughly -5.2 dB of squeezing and 15.6 dB of antisqueezing at 2 kHz.  So far we've noticed that any significant movement of the PSAM setting for ZM5 seems to de-optimize things, and we are relatively insensitive to changes in ZM4 when ZM5 is held fixed at its initial value.    

Values from yesterday afternoon are also included.
Images attached to this comment
naoki.aritomi@LIGO.ORG - 15:52, Tuesday 09 April 2024 (77065)

On April 9th, we took one more PSAMS dataset. This PSAMS setting might be better than the nominal 170/95 (strain voltage 8.8/-0.66), as reported in 77074. We may come back to this setting later.

asqz 125/136 (strain voltage 7.5/0.5) (5 min)

PDT: 2024-04-09 15:36:49 PDT
UTC: 2024-04-09 22:36:49 UTC
GPS: 1396737427

sqz 125/136 (5 min)

PDT: 2024-04-09 15:43:59 PDT
UTC: 2024-04-09 22:43:59 UTC
GPS: 1396737857

victoriaa.xu@LIGO.ORG - 18:56, Tuesday 09 April 2024 (77073)

Subtracted SQZ dB's for the various PSAMS settings last week: coarse trial #1: 76925, and coarse trial #2: 76949, and the previous optimizations in LHO:76507, which used SQZ ASC to hold alignments when moving PSAMS before the ZM alignment scripts 76757.

I sorted the PSAMS tests by positive and negative ZM5 strain gauge voltages.

Some takeaways: it's hard to interpret what's happening. Could be interesting to try ZM5 strains around 0 - 0.5 V and scan with e.g. ~0.1 V steps. I wonder if reviving the PSAMS scanning scripts and doing fine optimizations would be productive at this point. I'm not sure if there's tension between good squeezing at 1 kHz vs. 100 Hz. But some settings that give the most kHz squeezing (negative ZM5 strain) don't necessarily show the best 100 Hz squeezing (positive ZM5 strain), and vice-versa. It may just be that the optimal point is narrow and we're taking big steps?

  • For best SQZ at 1 kHz - we seem to more reliably see this with ZM5 strains between (-0.7 , 0.5) V across a large range of ZM4 settings, even when settings are varied on the same day/lock.
  • For best SQZ at 200 Hz - a bit hard to say, but possibly ZM5 > 0 (positive) gives better 200 Hz SQZ.  
    • Several negative ZM5 traces suggest freq-dep losses below the DARM pole.
    • For positive ZM5 traces, very hard to tell. SQZ looks lossier for ZM5 > 2V.  

Attachment 1 - positive ZM5 strain - potentially more SQZ at 100 Hz, and less sqz at 1 kHz?

  • Some of these traces have flatter SQZ, but this is hard to disentangle from just loss, since many positive ZM5 settings have both less SQZ and less ASQZ.
    • As an example, if SQZ-OMC mode-matching were very bad for those settings, squeezing would just look lossier across the whole band, which might cause it to look flatter too.
    • But I think the blue (7.2, 0.3) trace is an interesting counter-example: we could infer that there's less generated squeezing (b/c anti-sqz is lower), but the decrease in anti-sqz is not due to loss b/c squeezing is the same.
    • Comparing the blue 4/4 to the pink 3/17 is also interesting - PSAMS settings are very similar, and shape of SQZ is very similar (blue / pink nearly parallel). But, anti-sqz (= NLG + loss) is less on 4/4 than on 3/17, while sqz (~ loss) is basically the same. This could suggest less generated squeezing on 4/4 than 3/17.
  • Higher ZM5 settings far above 1V are not obviously good. In this range, both asqz+sqz look worse.
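The loss-vs-generated-squeezing reasoning above can be illustrated with a toy single-mode squeezing model: with total efficiency eta fixed, measured squeezing saturates near 10*log10(1 - eta), so reducing the generated squeezing lowers anti-squeezing a lot while barely moving measured squeezing. The eta and injected-dB values below are illustrative choices, not fits to H1 data.

```python
import math

def measured_db(injected_db, eta):
    """Measured noise level (dB re. shot noise) after frequency-independent loss 1 - eta."""
    return 10 * math.log10(eta * 10 ** (injected_db / 10) + (1 - eta))

eta = 0.7  # illustrative total efficiency (30% loss)
for inj in (12, 10):  # generated sqz/asqz magnitude in dB (illustrative)
    asqz = measured_db(+inj, eta)
    sqz = measured_db(-inj, eta)
    # dropping generated squeezing 12 -> 10 dB moves asqz by ~2 dB but sqz by only ~0.3 dB
    print(f"generated {inj} dB: measured asqz {asqz:.2f} dB, sqz {sqz:.2f} dB")
```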

Attachment 2 - negative ZM5 strain - more kHz SQZ, but kinda looks lossier below the DARM pole at e.g. 100 Hz.

  • Mostly, ZM5 strain is at -0.7V, and ZM4 is varying. 
  • This looks consistent with the FD-SQZ data set on 20 March LHO:76540, which used PSAMS at ZM4/ZM5 = 150 / 90 (8.46V, -1.26 V).
    • Fitting a common sqz model to that 3/20 data suggested there were freq-dep losses below the DARM pole at those settings (plot). There's also evidence for the freq-dep losses at 100 Hz in this 4/4 data.

 

Linking LHO:75749 with the in-chamber beam profiles at various PSAMS settings, as we continue working to reconcile the models and measurements.

Images attached to this comment
H1 TCS
camilla.compton@LIGO.ORG - posted 12:17, Wednesday 28 February 2024 - last comment - 09:01, Wednesday 10 April 2024(76030)
EX HWS fiber swapped from 200um to 50um

TJ, Camilla. WP 11730 Table: D1800270

TJ and I swapped the EX HWS fiber from a M92L02 200um 0.22NA multi-mode fiber to a M14L01 50um 0.22NA fiber, the same that LLO successfully uses. This gave us a focus ~150mm after the 125mm lens, whereas D1800125 suggests our focus should be 1.25m from the launcher (or ~100mm from L2, 62398). TJ found we need to change the spacers in the D1800125 launcher from the design 12mm down to 11mm to get the beam focus at 1.25m. We got a beam of a much more sensible size by only securing the launcher on one side, and could see a return beam off ETMX; an image will be added in a comment. We plan to buy/find more spacers before continuing this work.

LLO has recently been swapping 1" optics to 2" to reduce clipping 69891. We did this in 62995 and 73878, so we have M1A, M1B and M1C on EX as 2" optics, but currently no picomotors in the HWS path.

Comments related to this report
thomas.shaffer@LIGO.ORG - 13:44, Wednesday 28 February 2024 (76033)

Attached image with the plate off. It looks much better than before in size and uniformity, but it would need more alignment and focusing if we decide to stay near this launcher-to-lens length.

Images attached to this comment
camilla.compton@LIGO.ORG - 09:01, Wednesday 10 April 2024 (77086)

From March 19th. TJ, Gabriele, Camilla 

On March 12th, TJ and I tried to change the length of spacers in D1800125 by 1mm increments. This still didn't give us the required beam size.

On March 19th, TJ, Gabriele and I measured the beam straight out of the fiber and SM05SMA adapter, before the spacers and the f = 20.0 mm bi-convex collimating lens. We used a ruler for horizontal, put white laminated paper on a stand to see the beam size, and measured the diameter with calipers, as the beam is too large for the beamscanner. Results attached. We plan to use this to make a mode matching solution. 
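As a rough sanity check on why the bare-fiber beam is too large for the beamscanner, a geometric estimate from the 50 um core and 0.22 NA quoted above (assuming the full NA is illuminated; the 50 mm example distance is ours, not a measured value):

```python
import math

NA = 0.22          # fiber numerical aperture, from the log above
CORE_D_MM = 0.050  # 50 um core diameter

def beam_diameter_mm(z_mm):
    """Approximate full beam diameter a distance z_mm from the fiber tip,
    assuming the output fills the fiber NA (half-angle = arcsin(NA))."""
    half_angle = math.asin(NA)
    return CORE_D_MM + 2 * z_mm * math.tan(half_angle)

print(round(beam_diameter_mm(50), 1))  # ~22.6 mm already at 50 mm from the tip
```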

Non-image files attached to this comment