LHO VE
david.barker@LIGO.ORG - posted 10:32, Wednesday 18 October 2023 (73551)
Wed CP1 Fill

Wed Oct 18 10:10:29 2023 INFO: Fill completed in 10min 25secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 10:05, Wednesday 18 October 2023 (73549)
SQZ OPO Temp and SQZ angle adjusted to improve squeezing

TJ popped us out of observing so Vicky and I could touch up the OPO temperature by 0.017degC (step 3 of 70050) and adjust the SQZ angle by 2 degrees. The temperature change made the biggest difference in the SQZ BLRMS; plot attached.

We knew this needed to happen as our range trended downwards along with the SQZ BLRMS (sitemap > SQZ > Overview > SQZ Scopes > SQZ BLRMS); plot of the 14-hour lock attached.

Images attached to this report
H1 General (SEI)
thomas.shaffer@LIGO.ORG - posted 10:00, Wednesday 18 October 2023 (73550)
Slightly higher than normal 3-10Hz ground motion today

Camilla noticed an awful looking glitch in DARM today, and it seems to line up with extra 3-10 Hz noise seen in our seismometer BLRMS. The magnitude of this noise today is about that of Tuesday maintenance levels, except that there aren't any trucks or other large equipment on site that we are aware of. It seems to be strongest near the corner station. We will continue looking around for a possible source.

Images attached to this report
H1 DetChar
gabriele.vajente@LIGO.ORG - posted 08:50, Wednesday 18 October 2023 - last comment - 18:35, Friday 27 October 2023(73546)
Low Frequency Noise (<50 Hz)

Using two periods of quiet time during the last couple of days (1381575618 + 3600s, 1381550418 + 3600s) I computed the usual coherences:

https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381550418/
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381575618/

The most interesting observation is that, for the first time as far as I can remember, there is no coherence above threshold with any channels for wide bands in the low frequency range, notably between 20 and 30 Hz, and also for many bands above 50 Hz. I'll assume for now that most of the noise above ~50 Hz is explained by thermal noise and quantum noise, and focus on the low frequency range (<50 Hz).

Looking at the PSDs for the two hour-long times, the noise below 50 Hz seems to be quite repeatable, and closely follows a 1/f^4 slope. Looking at a spectrogram (especially when whitened with the median), one can see that there is still some non-stationary noise, although not very large. So it seems to me that the noise below ~50 Hz is made up of some stationary 1/f^4 unknown noise (not coherent with any of the 4000+ auxiliary channels we record) and some non-stationary noise. This is not hard evidence, but an interesting observation.

Concerning the non-stationary noise, I think there is evidence that it's correlated with the DARM low frequency RMS. I computed the GDS-CALIB RMS between 20 and 50 Hz (whitened to the median to weight equally the frequency bins even though the PSD has a steep slope), and the LSC_DARM_IN1 RMS between 2.5 and 3.5 Hz (I tried a few different bands and this is the best). There is a clear correlation between the two RMS, as shown in a scatter plot, where every dot is the RMS computed over 5 seconds of data, using a spectrogram.
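As a rough guide to reproducing this kind of trend, here is a minimal sketch (assuming gwpy access to H1 data; the channel names and bands are taken from the description above, but this is not necessarily the exact code used for the attached plots):

    import numpy as np
    from gwpy.timeseries import TimeSeries

    start = 1381575618
    darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + 3600)
    lsc  = TimeSeries.get('H1:LSC-DARM_IN1_DQ', start, start + 3600)

    def band_rms(ts, f_lo, f_hi, stride=5, whiten=True):
        # band-limited RMS for every `stride`-second spectrogram segment
        spec  = ts.spectrogram(stride)            # PSD estimated every `stride` seconds
        psd   = spec.value                        # shape (n_times, n_freqs)
        freqs = spec.frequencies.value
        if whiten:
            psd = psd / np.median(psd, axis=0)    # whiten to the median PSD
        sel = (freqs >= f_lo) & (freqs < f_hi)
        return np.sqrt(psd[:, sel].sum(axis=1) * (freqs[1] - freqs[0]))

    rms_darm = band_rms(darm, 20, 50)             # whitened GDS-CALIB RMS, 20-50 Hz
    rms_lsc  = band_rms(lsc, 2.5, 3.5)            # DARM_IN1 RMS, 2.5-3.5 Hz
    # scatter-plotting rms_lsc against rms_darm (one dot per 5 s) shows the correlation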

 

 

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 11:01, Wednesday 18 October 2023 (73554)

DARM low frequency (< 4 Hz) is highly coherent with the ETMX M0 and R0 L damping signals. This might just be recoil from the LSC drive, but it might be worth trying to reduce the L damping gain and see if the DARM RMS improves.

 

Images attached to this comment
gabriele.vajente@LIGO.ORG - 13:04, Wednesday 18 October 2023 (73560)

Bicoherence is also showing that the noise between 15 and 30 Hz is modulated according to the main peaks visible in DARM at low frequency.
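For reference, the quantity behind this statement is the bicoherence, in one common normalization (the exact normalization used by the analysis tool may differ):

    b(f1, f2) = |<X(f1) X(f2) X*(f1+f2)>| / sqrt( <|X(f1) X(f2)|^2> <|X(f1+f2)|^2> )

where X(f) are FFTs of successive data segments and the angle brackets denote an average over segments; a value near 1 at (f1, f2) means the noise near f1+f2 (here 15-30 Hz) is phase-coherently modulated by the low-frequency peak at f2.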

Images attached to this comment
elenna.capote@LIGO.ORG - 20:53, Wednesday 18 October 2023 (73579)

We might be circling back to the point where we need to reconsider/remeasure our DAC noise. Linking two different (and disagreeing) projections from the last time we thought about this; the DAC noise has the correct slope. However, Craig's projection and the noisemon measurement did not agree, something we never resolved.

Projection from Craig: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=68489

Measurement from noisemons: https://alog.ligo-wa.caltech.edu/aLOG/uploads/68382_20230403203223_lho_pum_dac_noisebudget.pdf

christopher.wipf@LIGO.ORG - 11:15, Friday 20 October 2023 (73620)

I updated the noisemon projections for PUM DAC noise, and fixed an error in their calibration for the noise budget. They now agree reasonably well with the estimates Craig made by switching coil driver states. From this we can conclude that PUM DAC noise is not close to being a limiting noise in DARM at present.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 09:51, Tuesday 24 October 2023 (73691)CDS, CSWG, ISC, OpsInfo, SUS
To Chris' point above -- we note that the PUMs are using 20-bit DACs, and we are NOT using any "DAC Dither" (see aLOGs motivating why we do *not* use them at LHO, LHO:68428 and LHO:65807; namely that [in the little testing that we've done] we've seen no improvement, so we decided they weren't worth the extra complexity and maintenance).
christopher.wipf@LIGO.ORG - 15:25, Tuesday 24 October 2023 (73710)

If at some point there’s a need to test DAC dithers again, please look at either (1) noisemon coherence with the DAC request signal, or (2) noisemon spectra with a bandstop in the DAC request to reveal the DAC noise floor.  Without one of those measures, the noisemons are usually not informative, because the DAC noise is buried under the DAC request.
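As a concrete illustration of check (1), something along these lines would work (a sketch only; the channel names are placeholders, not necessarily the exact PUM noisemon/request channels):

    from gwpy.timeseries import TimeSeriesDict
    from scipy.signal import coherence

    start, end = 1381575618, 1381575618 + 600       # any quiet locked stretch
    chans = ['H1:SUS-ETMX_L2_NOISEMON_UL_OUT_DQ',   # placeholder: a PUM noisemon channel
             'H1:SUS-ETMX_L2_MASTER_OUT_UL_DQ']     # placeholder: corresponding DAC request
    data = TimeSeriesDict.get(chans, start, end)
    fs = data[chans[0]].sample_rate.value
    f, coh = coherence(data[chans[0]].value, data[chans[1]].value,
                       fs=fs, nperseg=int(16 * fs))
    # Where coh ~ 1 the noisemon is just reproducing the DAC request; DAC noise is only
    # visible where the coherence drops, or after notching the request with a bandstop (check 2).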

christopher.wipf@LIGO.ORG - 18:35, Friday 27 October 2023 (73784)

Attached is a revised PUM DAC noisemon projection, with one more calibration fix that increases the noise estimate below 20 Hz (although it remains below DARM).

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 08:03, Wednesday 18 October 2023 (73547)
Ops Day Shift Start

TITLE: 10/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 5min avg
    Primary useism: 0.11 μm/s
    Secondary useism: 0.39 μm/s
QUICK SUMMARY: Locked for almost 12 hours. There is an acknowledged Vacuum X BT manifold AI pump alarm with a sticky note saying that it will be fixed next Tuesday.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 01:29, Wednesday 18 October 2023 - last comment - 10:26, Friday 20 October 2023(73545)
GV8 Annulus Ion Pump Failure

Today at 4:45 UTC the annulus ion pump for GV8 went down to zero-ish, which is not a common failure; perhaps the ion pump controller failed. We (vacuum group) will take a look at the system as soon as permissible.

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 09:50, Friday 20 October 2023 (73615)VE

Late entry.

On Wednesday I had the opportunity to investigate the failure of the annulus ion pump while the IFO was out of lock.  The controller for the annulus ion pump did not show signs of power, the front display lights were off, see photo.  Replaced the controller, and the system started working again.  However, the signal for the current was a bit unusual; on a plot of the past 10 years the signal has never reached such a low level, see attached plots.

Images attached to this comment
gerardo.moreno@LIGO.ORG - 10:26, Friday 20 October 2023 (73616)VE

And today the AIP railed high.  Pump to be changed next Tuesday.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 00:03, Wednesday 18 October 2023 (73544)
Ops EVE Shift End

TITLE: 10/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We've now been Locked for almost 4 hours and everything is looking good. When we lost lock at 10/18 01:53UTC, I had some trouble getting ALSX to lock, but no other issues besides that. The SQZ-OPO_ISS_CONTROLMON value is currently around 7.72 and has been trending slightly upward for the past hour and before that was moving downward with a shallow slope, so I don't think we will have any issues with it for the next several hours.
LOG:

23:00UTC Detector Observing and has been Locked for 1.5hours

01:53 Lockloss (73539)
03:09 NOMINAL_LOW_NOISE
03:25 Observing

04:45 X Beamtube Manifold AI Pump Major Alarm - Contacted Janos, will be fixed next Tuesday

H1 AOS (DetChar)
dishari.malakar@LIGO.ORG - posted 21:00, Tuesday 17 October 2023 (73543)
DQ Shift report: 9th October 2023 00:00 UTC to 15 Oct 2023 23:59 UTC

The link to the full report: DQ_shift_report

 

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 18:55, Tuesday 17 October 2023 - last comment - 11:03, Wednesday 18 October 2023(73539)
Lockloss

Lockloss @ 10/18 01:53UTC

Comments related to this report
oli.patane@LIGO.ORG - 20:17, Tuesday 17 October 2023 (73541)

DARM_IN1 (attachment1) saw some sort of glitch (left data tip) ~170ms before DARM registered the lockloss starting (right data tip).

ETMX L2 and L3 also saw this glitch (attachment2). Seems like the glitch hit DARM_IN1, then ETMX L3, then ETMX L2?

I noticed something similar with the 10/15 08:53UTC lockloss (attachment3), but in that case it looked like the movement was first seen in ETMX L3, then DARM_IN1 and ETMX L2. In that instance the glitch also happened a full 0.5s before the lockloss.

Images attached to this comment
oli.patane@LIGO.ORG - 20:26, Tuesday 17 October 2023 (73542)

03:25 Back Observing

thomas.shaffer@LIGO.ORG - 11:03, Wednesday 18 October 2023 (73552)Lockloss

Very interesting find Oli! I've attached some more plots to hopefully help us narrow down where this is coming from. The L2 ETMX fastimon and noisemon channels see this same glitch, but the L2 OSEM sensor inputs on all of the quads don't. This makes me think that it could possibly be coming from the OMC DCPD signals, the error signals for LSC-DARM. Tough to say based on data rates, but the second attachment also maybe shows the DCPD signal moving first, but take that with a grain of salt.

Images attached to this comment
H1 SUS
oli.patane@LIGO.ORG - posted 18:31, Tuesday 17 October 2023 (73538)
Weekly In-Lock SUS Charge Measurement

Closes FAMIS#26062, last checked 73432

ETMX did not run this week, so there is no new data point for ETMX.

Like Ryan S had said last week, matplotlib is still acting up. When I was running the coefficientplots.py script, mpl refused to let the plots show at the end, so I ended up needing to add an extra plt.show() right after plotting, go into debug mode, and get the script to pause after generating each plot so I could save them.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:06, Tuesday 17 October 2023 (73533)
Ops EVE Shift Start

TITLE: 10/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 5min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.39 μm/s
QUICK SUMMARY:

Locked for 1.5 hours now. Wind is low and trending downward.

LHO General
thomas.shaffer@LIGO.ORG - posted 15:58, Tuesday 17 October 2023 (73511)
Ops Day Shift Summary

TITLE: 10/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Maintenance day, with work all around the site. Recovery was straightforward and autonomous, but required an initial alignment.
LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
14:56 FAC Karen EY n Tech clean 16:10
14:58 FAC Cindi FCES n Tech clean 17:23
14:59 FAC Kim EX n Tech clean 16:05
15:21 FAC Tyler EY n Check on chiller 1 17:21
15:24 PSL Jason PSL local Ref cav alignment 16:55
15:25 VAC Janos FCTE n 5 way cross install 15:49
15:30 SQZ Sheila, Camilla, Vicky, Naoki LVEA - SQZ LOCAL SQZ measurements 19:45
15:45 VAC Gerardo & Jordan LVEA, HAM8 N Vacuum Work 18:45
15:49 VAC Janos, Travis EX, MX, EY n Purge air sampling 18:08
15:51 SUS/CDS Fil, Rahul EY n SUS coil driver troubleshoot 19:19
15:59 FAC Christina MX n Drop off, look up 17:45
16:57 FAC Karen LVEA n Tech clean 18:23
17:10 FAC Kim LVEA n Tech clean 18:24
17:16 CDS Jonathan, Patrick MSR n Camera network switching 18:08
17:25 ISC Daniel LVEA n Plug in cable 18:08
17:52 SEI Jim LVEA local Investigating HAM7 trips 18:26
18:08 ISC Keita, Daniel CR n OMC meas. 19:29
18:16 FAC Chris EY n Glycol fill 18:46
19:05 CDS/SEI Dave remote n Restart picket fence code 19:19
19:15 VAC Janos LVEA n Particle counts 19:23
20:20 FAC Tyler EY n Check on glycol levels 20:43
20:37 VAC Travis LVEA n Looking at crane numbers 20:43
21:52 CDS Fil MY n Looking for parts 22:52
H1 SQZ
camilla.compton@LIGO.ORG - posted 14:51, Tuesday 17 October 2023 - last comment - 18:30, Tuesday 24 October 2023(73524)
SQZ OPO Crystal Moved

Naoki, Vicky, Sheila, Camilla. WP 11470.

OPO Crystal Move

Following last time we moved the OPO crystal 65684, we used the wincam dell laptop and gray E-870 PI shift driver box (stored in the LVEA SQZ cabinet) and the OPO crystal cable Daniel attached to the +X HAM7 flange. Adjusted the OPO temperature from 31.684 degC to 31.804 degC for co-resonance while scanning rather than stationary.

To scan the OPO PZT fast enough, we used an external function generator and the Thorlabs driver at the racks for the OPO PZT to scan over more than 1 FSR (around a 1-9V scan).

We were able to move the OPO crystal over its full range and found 6 spots with red/green co-resonance; the attached pdf shows the places we moved. LLO found 10-13 positions, which is surprisingly more than us, 73502.

Through O4 we've been using the 2nd position from the right, but left the crystal on the 2nd spot from the left.

Photos attached of the PZT scan (yellow trace), OPO IR trans light measured on HD PDA (pink trace), and green trans from the CLF path (green trace). The green alignment looked bad at all spots. This could be from misalignment of the OPO path or HOMs present in the fiber. When we next go into HAM7 we can look at this.

NLG Measurement

Turned down the green pump power into the OPO to 4mW with the SQZT0 wave plate to stop the OPO lasing. We measured an NLG of 14 (OPO_IR_PD_LF_OUTPUT amplified / unamplified = 0.0136/0.00092). This reduced to an NLG of 12 over a few minutes.
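(Quick arithmetic check of the quoted gain: 0.0136 / 0.00092 ≈ 14.8, i.e. the ~14 quoted above; the into-IFO values quoted below give 0.0196 / 0.00149 ≈ 13.2.)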

Homodyne Measurement 

We balanced the HD by reducing the seed power and realigned to maximize visibility. At the start of the measurement the NLG was 14 and at the end it was 12. Vicky, Sheila and Naoki took homodyne measurements, which were hard with the NLG drifting. Measured 4.5dB, which is less than the 6dB measured last week 73376, but we think we could have offloaded the ASC alignments incorrectly, so that we don't see good SQZ on the HD.

SQZ in IFO

Ended with NLG of 13.5 into IFO. Amplified power of 0.0196, unamplified seed of 0.00149. Though this may be changing quickly. 

Took some no-sqz data while TJ tested updates to the Observing with No SQZ wiki. Accepted SDFs to go back into observing. Vicky adjusted the OPO temperature and had to adjust the SQZ angle from 142 to 163; squeezing looks better, 4dB+, but we expect the angle and OPO temperature may need to be adjusted over the next hours...

Images attached to this report
Non-image files attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 16:22, Tuesday 17 October 2023 (73534)

On the bad alignment of the green to the OPO shown in Camilla's photos of the scope:

We normally apply an offset to PZT1, which we were not doing this morning.  This offset impacts the cavity alignment, as you can see in Vicky's screenshots from April 2022.  62856

Here are some past alogs showing an OPO cavity scan with the green transmission.  

Dec 2022: 66527  (we should do a repeat of a scan like this now, with lowered green power and a slow scan, to see if things have really degraded). 

April 2022: 62691

Sept 22 we swapped the CLF collimator.

victoriaa.xu@LIGO.ORG - 20:37, Tuesday 17 October 2023 (73535)

With this new crystal position, we have more squeezing in DARM, at first briefly measuring ~4.8 dB SQZ at 2kHz! It has since settled to ~4.2dB in Observe. As Camilla said, this is with the 2nd spot from the left edge (5th from the right). The optimal OPO temperature will be changing as it settles into this new spot, so we may need to tune the co-resonance temperature over the next few days to re-optimize.

Just before injecting SQZ to the interferometer, with OPO trans = 80uW, NLG was measured as 13.15. We took some sqz/asqz/meansqz loss measurements, and will do subtraction using the following times. Measurements using dtt cursors:

  • SQZ, ~4.8 dB
    • 1381612908 - 1381612998
  • No SQZ
    • 1381613053 - 1381614325
  • Anti-sqz, 14.0 dB
    • 1381616445 - 1381616576
  • Mean-sqz, 11.0 dB
    • 1381616609 - 1381616740
  • After 22:33:40 UTC (gpstime 1381617238), we went back to Observe with SQZ at this spot.

Loss analysis and subtraction to follow.
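For later reference, a rough way to read these levels back out of the data (a sketch assuming gwpy access; the dB values above came from dtt cursors, and the full analysis also subtracts non-quantum noise, so this raw ratio is only approximate):

    import numpy as np
    from gwpy.timeseries import TimeSeries

    def mean_asd(start, end, f_lo=1800, f_hi=2200):
        # mean DARM amplitude spectral density in a band around 2 kHz
        ts = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
        return ts.asd(fftlength=8, overlap=4).crop(f_lo, f_hi).value.mean()

    ref  = mean_asd(1381613053, 1381614325)               # No SQZ reference
    sqz  = mean_asd(1381612908, 1381612998)               # SQZ
    asqz = mean_asd(1381616445, 1381616576)               # Anti-sqz
    print('SQZ  %+.1f dB' % (20 * np.log10(sqz / ref)))   # expect roughly -4.8 dB
    print('ASQZ %+.1f dB' % (20 * np.log10(asqz / ref)))  # expect roughly +14.0 dB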

NLG variation over the first ~6 min of operating at this new spot is interesting. I think we see the crystal's green losses increasing rapidly. In the screenshot, the OPO was locked in green (~80uW green transmitted power, but no intensity stabilization, so the green power varied).

  • We see the crystal's non-linear gain decrease by ~11.7% over 6 minutes for fixed input green power (NLG = max_amplified/unamplified); the top red trend shows the decay of the maximum amplified seed power. The maximum amplified seed power can be found by optimizing the OPO crystal temperature, but this decay of NLG was not recoverable by optimizing the crystal temperature (bottom trend = attempts to optimize the OPO crystal temp).
  • The NLG decay does look consistent with the decay of transmitted green power over ~6 minutes. The decay of green transmission for fixed input green power could result from fast crystal losses/absorption/etc. in green. This lowered green pump power results in a lower non-linear gain for red/sqz. That is, I think this NLG decay can be explained by fast green losses in the OPO crystal, and doesn't necessarily require fast variations in red (i.e. these fast green losses don't have to also be fast red sqz losses); see the parametric-gain relation sketched just below.
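For context on why a drop in green power maps onto an NLG drop of this size, the usual single-pass parametric-gain relation (a standard model, not a new measurement here) is NLG = 1/(1-x)^2 with x = sqrt(P_green/P_threshold). At NLG ~ 13 the operating point is x ~ 0.72, and d(NLG)/NLG ~ [x/(1-x)] dP/P ~ 2.6 dP/P, so the observed ~12% NLG decay corresponds to only a ~4-5% drop in effective green pump power, consistent with it being driven by the decay of green transmission rather than by changes on the red/sqz side.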

edited to include: NLG measured ~7 hours later in the lockloss. After ~4 hours with stabilized green trans = 80uW, the opo co-resonance changed from 31.638 degC to 31.617 degC, and we recovered the same NLG after tuning the temperature, so the interaction strength (probably also the red losses) did not degrade much in this time.

edited to include darm sqz 15 min after relocking: SQZ reached 4.8dB again after relocking; the darker bottom SQZ trace is taken ~15 minutes into this next lock. This is operating at the new spot for ~8 hours.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 18:30, Tuesday 24 October 2023 (73722)

For these squeezed DARMs at the new crystal spot, here is the subtracted sqz data added in for these times of anti-squeezing, squeezing, and mean-squeezing (i.e. LO loop unlocked: the sqz phase is unlocked from the IFO beam and spins through all sqz angles).

Images attached to this comment
H1 ISC (AWC, DetChar-Request, ISC)
keita.kawabe@LIGO.ORG - posted 15:09, Tuesday 10 October 2023 - last comment - 15:27, Thursday 19 October 2023(73367)
OM2/beckhoff coupling no light test (Daniel, Keita)

To see if the OM2/Beckhoff coupling is a direct electronics coupling or not, we've done an A-B-A test while the fast shutter was closed (no meaningful light on the DCPDs).

State A (should be quiet): 2023 Oct/10 15:18:30 UTC - 16:48:00 UTC. The same as the last observing mode. No electrical connection from any pin of the Beckhoff cable to the OM2 heater driver chassis. Heater drive voltage is supplied by the portable voltage reference.

State B (might be noisy): 16:50:00 UTC - 18:21:00 UTC. The cable is directly connected to the OM2 heater driver chassis.

State A (should be quiet): 18:23:00- 19:19:30 UTC or so.

DetChar, please directly look at H1:OMC-DCPD_SUM_OUT_DQ to find combs.

It seems that even if the shutter is closed, once in a while a very small amount of light reaches the DCPDs (green and red arrows in the first attachment). One of them (red arrow) lasted a long time and we don't know what was going on there. One of the short glitches was caused by the BS being momentarily kicked (cyan arrow), with scattered light in HAM6 somehow reaching the DCPDs, but I couldn't find other glitches that exactly coincided with optics motion or the IMC locking/unlocking.

To give you a sense of how bad (or not) these glitches are, 2nd attachment shows the DCPD spectrum of a quiet time in the first State A period (green), strange glitchy period indicated by the red arrow in the first attachment (blue), a quiet time in State B (red) and during the observing time (black, not corrected for the loop).

FYI, right now we're back to State A (should be quiet). Next Tuesday I'll inject something into the thermistors in chamber. BTW the 785 was moved in front of the HAM6 rack, though it's powered off and not connected to anything.

Images attached to this report
Comments related to this report
ansel.neunzert@LIGO.ORG - 10:25, Monday 16 October 2023 (73498)

I checked H1:OMC-DCPD_SUM_OUT_DQ and don't see the comb in any of the three listed intervals (neither state A nor B). Tested with a couple of SFT lengths (900s and 1800s) in each case.

keita.kawabe@LIGO.ORG - 17:19, Tuesday 17 October 2023 (73527)DetChar-Request

Since it seems that the coupling is NOT a direct electronics coupling from Beckhoff -> OM2 -> DCPD, we fully connected the Beckhoff cable to the OM2 heater driver chassis and locked the OMC to the shoulder with an X single bounce beam (~20mA DCPD_SUM, not 40mA like in the usual nominal low noise state). That way, if the Beckhoff is somehow coupling to the OMC PZT, it might cause visible combs in the DCPD.

We didn't see the comb in this configuration. See the 1st attachment: red is the shoulder lock and green is when the 1.66Hz comb was visible with the full IFO (the same time reported by Ansel in alog 73000), showing just the two largest peaks of the 1.66Hz harmonics visible in the green trace. (It seems that the 277.41Hz and 279.07Hz peaks are the 167th and 168th harmonics of 1.66Hz.) Anyway, because of the higher noise floor, even if the combs were there we couldn't have seen these peaks. We've had a different comb spacing since then (alog 73028), but anyway I don't see anything at around 280Hz. FYI I used 2048 FFTs for both; red is a single FFT and the green is an average of 6. This is w/o any normalization (like RIN).
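(For what it's worth, the harmonic identification is consistent with the 1.6611 Hz spacing discussed below: 167 x 1.6611 Hz = 277.40 Hz and 168 x 1.6611 Hz = 279.06 Hz, within a bin of the 277.41 Hz and 279.07 Hz peaks.)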

In the top panel of the 2nd attachment, red is the RIN of OMC-DCPD_SUM_OUT_DQ during the shoulder lock, and blue and dark green are the RIN of the ISS 2nd loop in- and out-of-loop sensor arrays. Magenta, cyan and blue-green are the same set of signals when H1 was in observing last night. The bottom panel shows the coherence between DCPD_SUM during the shoulder lock and the ISS sensors as well as IMC_F, which just means that there's no coherence except at high kHz.

If you look at Georgia's length noise spectrum from 2019 (alog 47286), you'll see that it's not totally dissimilar to the top panel of our 2nd plot, even though Georgia's measurement used dither lock data. Daniel points out that a low-Q peak at around 1000Hz is a mechanical resonance of the OMC structure causing real length noise.

Configurations: H1:IMC-PWR_IN ~25.2W. ISS 2nd loop is on. Single bounce X beam. DCPD_SUM peaked at about 38mA when the length offset was scanned, and the lock point was set to the middle (i.e. 19mA). DC pointing loops using AS WFS DC (DC3 and DC4) were on. OMC QPD loops were not on (they were enabled at first but were disabled by the guardian at some point before we started the measurement). We were in this state from Oct/17/2023 18:12:00 - 19:17:20 UTC.

Images attached to this comment
Non-image files attached to this comment
keita.kawabe@LIGO.ORG - 17:25, Tuesday 17 October 2023 (73536)DetChar-Request

BTW Beckhoff cable is still fully connected to the OM2 heater driver chassis. This is the first observation data with such configuration after Fil worked on the grounding of Beckhoff chassis (alog 73233).

Detchar, please find the comb in the obs mode data starting Oct/17/2023 22:33:40 UTC.

ansel.neunzert@LIGO.ORG - 11:31, Wednesday 18 October 2023 (73555)

The comb indeed re-appeared after 22:33 UTC on 10/17. I've attached one of the Fscan daily spectrograms (1st figure); you can see it appear in the upper right corner, around 280 Hz as usual at the start of the lock stretch.

Two other notes:

  • The comb is now back to its original spacing of 1.6611 Hz.
  • There are new strong lines visible at +/- 1.235 Hz from the comb teeth. If this structure has appeared before, I'm not aware of it. I've attached an image showing the lines (2nd figure, average of DMT-ANALYSIS_READY data between 22:00 10/17 and 02:00 10/18) and a comparison with older data from Sept 20th (3rd figure).
Images attached to this comment
keita.kawabe@LIGO.ORG - 13:29, Wednesday 18 October 2023 (73563)DetChar-Request

Just to see if anything changes, I used the switchable breakout board at the back of the OM2 heater driver chassis to break the thermistor connections but kept the heater driver input coming from the Beckhoff. The only two pins that are conducting are pins 6 and 19.

That happened at around Oct/18/2023 20:18:00 to 20:19-something UTC when others were doing the commissioning measurements.

Detchar, please look at the data once the commissioning activities are over for today.

ansel.neunzert@LIGO.ORG - 14:04, Thursday 19 October 2023 (73595)

Because there was an elevated noise floor in the data from Oct/17/2023 18:12:00 mentioned in Keita's previous comment, there was some doubt as to whether the comb would have been visible even if it were present. To check this, we did a direct comparison with a slightly later time when the comb was definitely present & visible. The first figure shows an hour of OMC-DCPD_SUM_OUT_DQ data starting at UTC 00:00 on 10/18 (comparison time with visible comb). Blue and yellow points indicate the comb and its +/-1.235 Hz sidebands. The second figure shows the time period of interest starting 18:12 on 10/17, with identical averaging/plotting parameters (1800s SFTs with 50% overlap, no normalization applied so that amplitudes can be compared) and identical frequencies marked. If it were present with equivalent strength, it looks like the comb ought to have been visible in the time period of interest despite the elevated noise floor. So this supports the conclusion that the comb was *not* present in the 10/17 18:12 data.

Images attached to this comment
ansel.neunzert@LIGO.ORG - 15:27, Thursday 19 October 2023 (73600)

Following up, here's about 4 hours of DELTAL_EXTERNAL after Oct 18 22:00. So this is after Keita left only the heater driver input connected to the Beckhoff on Oct/18/2023 20:18:00. The comb is gone in this configuration.

Images attached to this comment
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 14:03, Tuesday 10 October 2023 - last comment - 18:31, Tuesday 17 October 2023(73365)
homodyne measurements today

Vicky, Sheila, Camilla, Dorotea, Naoki

We went to SQZT7 this morning with the new homodyne (73241).  In the end we were able to see decently flat shot noise and decent visibility.  On the way, we ran into some difficulties that caused some confusion:

In the end we have flat shot noise, a visibility of 98.5% measured on PDA (3.07% loss), and a visibility of 97.8% (4.44% loss) measured on PDB. A nonlinear gain of 11 was measured with seed max / no pump. A comment to this alog will contain the measured sqz/asqz/mean sqz.

Comments related to this report
victoriaa.xu@LIGO.ORG - 13:47, Wednesday 11 October 2023 (73376)

Screenshot summarizing homodyne measurements today. With measured carrier NLG=11 (for generated squeezing ~14.7-14.8 dB), we observe

  • squeezing           = - 6 dB
  • anti-squeezing    = +13.5 dB
  • mean-squeezing = +10.5 dB

Comparing sqz/anti-sqz to generated sqz: ~7% unexplained homodyne losses. This is consistent with our last estimate of excess HD losses (8/29/2023, LHO:72802, ~7% mystery loss). Since then, we swapped the HD detector and improved readout losses (visibility). We now measure more homodyne squeezing, at 6 dB, consistent with the expected loss reductions. That is, compared to 8/29 (LHO:72802), we have less total loss, less budgeted HD loss and more squeezing, but the same unexplained HD losses as before.

Comparing mean-squeezing to generated sqz: could be consistent with sqz/asqz losses. I think there is a mis-estimate of the generated squeezing level from the non-linear gain. If we ignore our NLG=11 measurement, instead choose the generated squeezing level to match the observed 13.5 dB of anti-squeezing, and then let losses determine the measured 6 dB squeezing level, we would have NLG=10 (not 11) for a generated squeezing level of 14.5 dB. This would suggest 7% unexplained losses, the same as the sqz/asqz measurements.

For ~7% mystery losses, this is compared to total HD losses of 21%, of which we budget 15% losses. From the sqz wiki, the budgeted losses are:

  • opo escape 98.5% 
  • in-chamber ham7 95.3%
  • beam diverter 99%
  • SQZT7 on-table optics losses 2% (72604)
  • HD PD QE 97.7%  (73241)
  • visibility 95.6%  (from 97.8% fringe visibility on PD B) 

If we include phase + dark noise that degrades squeezing but is not loss, then 21% total loss can explain the 6dB measured squeezing; see e.g. the gsheet calculator (edited to include ranges for NLG=10 and NLG=11):

                         SQZ                 ASQZ
NLG                            10 - 11
x                    (0.68, 0.70)       (0.68, 0.70)
gen sqz (dB)       (-14.5, -15.01)      (14.5, 15.01)
with throughput eta = 0.79:
meas sqz (dB)       (-6.24, -6.29)      (13.54, 14.03)
with phase noise = 20.00 mrad:
meas sqz (dB)       (-6.08, -6.11)      (13.54, 14.03)
with dB(Vtech/Vshot) = -22.00:
var(v_tech/v_shot)       0.0063             0.0063
meas sqz (dB)       (-5.97, -6.00)      (13.54, 14.03)
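A compact version of that calculation (the same single-pass loss + phase-noise + technical-noise model that the gsheet implements, using the 0.79 throughput, 20 mrad phase noise and -22 dB technical noise quoted above; a sketch, not the gsheet itself):

    import numpy as np

    def measured_db(nlg, eta=0.79, theta=20e-3, tech_db=-22.0, antisqz=False):
        # generated squeezing/anti-squeezing from the non-linear gain: NLG = 1/(1-x)^2
        x = 1 - 1 / np.sqrt(nlg)
        s_sqz, s_asqz = ((1 - x) / (1 + x))**2, ((1 + x) / (1 - x))**2
        main, leak = (s_asqz, s_sqz) if antisqz else (s_sqz, s_asqz)
        # throughput eta, RMS phase noise theta [rad], technical noise added on top
        quantum = eta * (main * np.cos(theta)**2 + leak * np.sin(theta)**2) + (1 - eta)
        return 10 * np.log10(quantum + 10**(tech_db / 10))

    for nlg in (10, 11):
        print(nlg, round(measured_db(nlg), 2), round(measured_db(nlg, antisqz=True), 2))
    # -> NLG 10: -5.97 / +13.54 dB,  NLG 11: -6.00 / +14.03 dB (the last row of the table)

With the same function one can also float eta to fit a measured sqz/asqz pair, which is essentially how the unexplained-loss numbers above are inferred.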

DTT homodyne template saved at $userapps/sqz/h1/Templates/dtt/HD_SQZ/HD_SQZ_101023.xml .

Edited to include some history of homodyne measurements:

  • 6 dB SQZ -  10/10/23 LHO:73365 = 21% total loss, 15% known loss, 7% mystery (this time)
  • 5.2dB SQZ - 8/31/23, LHO:72802 = 27% total loss, 21% known loss, 7% mystery (bad PD-B QE, lower visibility?)
  • 5.5dB SQZ -   2/3/23, LHO:67219 = 25% total loss, 15% known losses, < 10% mystery (tech noise -10dB), NLG was varied.

It could still be interesting to vary NLG to see if we can observe any more squeezing, or if an additional technical noise floor (aside from dark noise) is needed to explain the NLG sweeps.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 17:02, Wednesday 11 October 2023 (73395)

We revised the sqz loss wiki table again today, and are including it to explain what we think our current understanding of losses is. 

It seems likely that the 7% extra losses we see on homodyne measurements are in HAM7, so we've nominally added that to the loss budget. 

In addition to this, there would be an additional 8% loss on the sqz beam if we didn't correct its linear polarization with a half wave plate (72604).  At the time of the chamber close out (65110) we measured a throughput from HAM7 to HAM5 that implies that two passes through the OFI were giving us 97.6% transmission, so this is not compatible with the polarization being wrong by this much.  We haven't included this as a loss in the loss budget because it seems incompatible with our measurement in chamber.

The wiki currently lists the OMC transmission as 92% and the PD QE as 98%.  The PD QE may be worse than this (see 61568), but measurements of the product of QE and OMC transmission for the 00 mode seem to indicate that it is in the range 90-92%, so this is close.

With the inferred losses from the measured sqz/anti-sqz in the IFO, the plausible range of losses is 30-35%; we are using 32%.  With only known losses (including the values for OMC trans and PD QE), we have 14% unexplained loss. If we include the 7% apparent HAM7 losses, we have 9% unexplained losses in the IFO.  This does seem similar to the 8% polarization problem, but it would also include SQZ-OMC mode matching.

Possible future scenarios:  We may be able to reduce the 7% HAM7 losses, and we may be able to swap the OMC to reduce those losses from 92% to 97%.  

(Resulting sqz is measured without subtraction; the first sqz column assumes technical noise -12dB below shot and 20mrad phase noise, the second assumes technical noise 20dB below unsqueezed shot noise.)

Scenario                                                                          Total efficiency   Resulting sqz   If tech noise -20dB
fix HAM7 losses                                                                         0.73             4.4 dB           5 dB
swap OMC (92% -> 97%)                                                                   0.71             4.14 dB          4.8 dB
swap OMC and fix HAM7 losses                                                            0.77             4.85 dB          5.6 dB
swap OMC, fix HAM7 losses, and fix 8% from polarization issue (if that is real)         0.84             5.83 dB          6.8 dB

These numbers come from the Aoki equations that Vicky added to the google sheet here: gsheet

 

Images attached to this comment
victoriaa.xu@LIGO.ORG - 18:31, Tuesday 17 October 2023 (73537)

Don G. and Sheila have very likely resolved the homodyne polarization issue as being due to the SQZT7 periscope. So, the mis-polarization is likely not an issue for squeezing in the interferometer.

The sqz beam leaves HAM7 via reflection off the sqz beam diverter. From the latest CAD layout from Don, the outgoing reflected beam (blue) is ~75.58 degrees from global +X. The periscope re-directs the beam to travel along SQZT7, approximately along +Y. The CAD layout thus suggests that the SQZT7 periscope re-directs the beam in yaw (counter-clockwise) by an estimated 90 - 75.58 = 14.4 degrees.

From recent homodyne measurements (LHO:72604) of the sqz light leaving HAM7 and arriving on SQZT7, ~8% of the power was in the wrong polarization, which calculates to a ~16.5 degree polarization misrotation. Compared to this 16.5 degree misrotation we were searching for, the 14.4 degree polarization rotation induced by the periscope image rotation can plausibly explain it.
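(Worked out explicitly, assuming the wrong-polarization power fraction goes as sin^2 of the rotation angle: theta = arcsin(sqrt(0.08)) ≈ 16.4 degrees, close to the ~16.5 degrees quoted above.)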

Images attached to this comment