H1 General
thomas.shaffer@LIGO.ORG - posted 11:27, Thursday 19 October 2023 - last comment - 13:31, Thursday 19 October 2023(73592)
Lock loss 1813 UTC

1381774439

We had that same feature in LSC-DARM that we've been seeing recently ~100ms before lockloss.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 12:43, Thursday 19 October 2023 (73593)

1941 Back to Observing

There was a small issue with locking the squeezer when we got to low noise. Vicky worked on it and it seems to be a side effect of changing the crystal temperature.

victoriaa.xu@LIGO.ORG - 13:31, Thursday 19 October 2023 (73594)

Specifically in SQZ_MANAGER, in the OPO_PZT_OK() checker, I changed the maximum "OK" PZT voltage from 105 to 110 (last number on Line 59). We had been stuck re-locking because the OPO resonance was at 106V, and we were failing the OPO_PZT_OK() checker, which then tries to re-lock to a lower FSR. Alternatively, we could also have changed the voltage on the other OPO PZT, using the CLF_SERVO_SLOW_OUTPUT DC offset, to lower the voltage on PZT1.
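For illustration only, here is a minimal Python sketch (not the production SQZ_MANAGER code) of the kind of window check OPO_PZT_OK() performs; the lower bound and the call signature here are made-up placeholders:

    # Hedged sketch of an OPO_PZT_OK()-style checker: the PZT must sit inside an
    # allowed voltage window, otherwise the manager re-locks the OPO on a lower FSR.
    PZT_OK_MIN = 5.0     # hypothetical lower bound (V), not taken from the real code
    PZT_OK_MAX = 110.0   # raised from 105 V so a resonance near 106 V still passes

    def opo_pzt_ok(pzt_volts: float) -> bool:
        """Return True if the OPO PZT voltage is inside the allowed window."""
        return PZT_OK_MIN < pzt_volts < PZT_OK_MAX

    print(opo_pzt_ok(106.0))  # True with the new 110 V maximum; False with the old 105 V limit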

Images attached to this comment
H1 SQZ
camilla.compton@LIGO.ORG - posted 11:17, Thursday 19 October 2023 (73590)
OPO Temperature adjusted to improve squeezing

TJ popped us out of observing at 17:51 UTC so we could touch up the OPO temperature (SDF attached) as SQZ and range had been degrading. We had to do this yesterday too (73549), but we expect we'll need to do this less over time as the OPO crystal settles after Tuesday's spot change (73524).

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:20, Thursday 19 October 2023 (73586)
Thu CP1 Fill

Thu Oct 19 10:10:24 2023 INFO: Fill completed in 10min 20secs

This is the first fill using the new zotvac0 workstation (newer hardware, Deb11 compared with Deb10).

Richard verified a good fill via camera.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:16, Thursday 19 October 2023 - last comment - 09:40, Thursday 05 September 2024(73584)
Rare IPC receive errors on h1oaf and h1calcs from h1susetmx

Last night at GPS=1381715243 (Wed Oct 18 18:47:05 2023 PDT) the h1oaf and h1calcs long-range Dolphin receivers from h1susetmx reported a single receive error (1 error in the 16384 packets received in that second). This is the first error of this type this year. There was no issue with h1susetmx at the time; its CPU max is 20 uS. I'll clear the errors on the next TOO.

With an error rate of about 1 per year (1 in 5.2e+11 packets), we will just continue to monitor this for now.
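As a quick sanity check of the quoted odds (assuming the 16384 packets/s IPC rate is constant over a year):

    packets_per_second = 16384
    seconds_per_year = 365.25 * 24 * 3600
    print(f"1 in {packets_per_second * seconds_per_year:.1e}")   # ~1 in 5.2e+11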

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 09:40, Thursday 05 September 2024 (79919)

Happened again 02:08:03 Thu 05sep2024. FRS32032

LHO General
thomas.shaffer@LIGO.ORG - posted 08:01, Thursday 19 October 2023 (73583)
Ops Day Shift Start

TITLE: 10/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.51 μm/s
QUICK SUMMARY: IFO has been locked for 16 hours. While the primary useism is starting to trend down, the secondary is coming up. Peakmon seems to be much calmer now, so I'll bring the SEI_ENV threshold back when (if) we go out of Observing today (alog73569).

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 00:00, Thursday 19 October 2023 (73582)
Ops EVE Shift End

TITLE: 10/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We are Observing and have now been Locked for 8.5 hours. Everything has been going smoothly and we even reached 162Mpc according to H1:CDS-SENSMON_CLEAN_SNSC_EFFECTIVE_RANGE_MPC (the lower one on sensemon)!!
LOG:

23:00UTC Detector in Commissioning and has been Locked for 21 mins
23:08 Went into Observing

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 19:59, Wednesday 18 October 2023 (73577)
Ops EVE Midshift Update

Observing at 160Mpc and have been Locked for 4.5 hours now. Primary and secondary microseism are still slowly rising.

H1 SQZ
victoriaa.xu@LIGO.ORG - posted 19:03, Wednesday 18 October 2023 - last comment - 15:29, Monday 20 November 2023(73562)
NLG HD sweep, HD mystery losses fully resolved, ~8dB HD SQZ with new crystal spot

Naoki, Sheila, Camilla, Vicky

Summary: After yesterday's crystal move LHO:73535, we re-aligned SQZT7, and now see 8 dB SQZ on the homodyne, up to a measured NLG of 114 without a phase noise turnaround! This fully resolves the homodyne loss budget: there is 0 mystery loss remaining on the homodyne, from which we can infer 0 mystery losses in HAM7. Back on the IFO afterwards, after 1 day at this new crystal spot, squeezing in DARM is about 4.5dB - 4.8dB, reaching almost 5dB at the start of lock.

We first re-aligned the homodyne to the IFO SQZ alignment, which reached 4.8dB SQZ in DARM yesterday, so we are more confident the alignment back through the VOPO is not clipping. In yesterday's measurements, we had a sign error in the FC-ASC offloading script, which brought us to a bad alignment with limited homodyne squeezing, despite high 98% fringe visibilities. Attached is a screenshot of homodyne FC/ZM slider values with FC+SQZ ASC's fully offloaded (correctly), to which the on-table SQZT7 homodyne is now well-aligned. After Sheila re-aligned the homodyne to the screenshotted FC/ZM values, fringe visibilities are PD1 = 98.5% (loss 3.1%), PD2 = 97.8% (loss 4.2%).

We then did an NLG sweep on the homodyne, from NLG=2.4 (opo trans 20uW) to NLG=114 (opo trans 120uW). Measurements below and attached as .txt, DTT is attached, plots to follow.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

unamplified_ir = 0.0014 (H1:SQZ-OPO_IR_PD_LF_OUT_DQ with pump shuttered)

NLG = amplified / unamplified_ir (opo green pump un-shuttered)

@80uW pump trans, amplified = 0.0198 (at start, 0.0196 at end) --> NLG 0.0198/0.0014 ~ 14

@100uW pump trans, amplified = 0.046 (at start, 0.0458 at end) --> NLG 0.046/0.0014 = 33

@120uW pump trans, amplified = 0.16 --> NLG 0.16/0.0014 = 114

@60uW pump trans, amplified = 0.011 (at start, 0.0107 at end) --> NLG = 7.86

@40uW pump trans, amplified = 0.0059 (at start, 0.0059 at end) --> NLG = 4.2

@20uW pump trans, amplified = 0.0034 (at start, --- at end) --> NLG = 2.4
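For reference, a small Python sketch of the NLG bookkeeping above (values copied from the start-of-measurement numbers listed; channel handling omitted):

    unamplified_ir = 0.0014   # H1:SQZ-OPO_IR_PD_LF_OUT_DQ with the pump shuttered

    # OPO green trans (uW) -> amplified seed level
    amplified = {20: 0.0034, 40: 0.0059, 60: 0.011, 80: 0.0198, 100: 0.046, 120: 0.16}

    for trans_uw, amp in sorted(amplified.items()):
        print(f"{trans_uw:3d} uW trans: NLG = {amp / unamplified_ir:.1f}")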

trace                               | reference | OPO green trans (uW) | NLG | SQZ (dB) | CLF RF6 demod angle (+)
------------------------------------|-----------|----------------------|-----|----------|------------------------
LO shot noise @ 1.106 mA, -136.3 dB | 10        | 80                   | 14  |          |
Mean SQZ                            | 11        |                      |     | +13      |
SQZ                                 | 12        |                      |     | -8.0     | 162.0
ASQZ                                | 13        |                      |     | +16      | 245.44
NLG = 33                            |           | 100                  | 33  |          |
Mean SQZ                            | 14        |                      |     | +16.7    |
SQZ                                 | 15        |                      |     | -8.0     | 170.5
ASQZ                                | 16        |                      |     | +19.9    | 237.85
NLG = 114                           |           | 120                  | 114 |          |
Mean SQZ                            | 17        |                      |     | +22.5    |
SQZ                                 | 19        |                      |     | -8.0     | 177.98
ASQZ                                | 18        |                      |     | +25.6    | 230.13
NLG = 7.9                           |           | 60                   | 7.9 |          |
Mean SQZ                            | 20        |                      |     | +9.7     |
SQZ                                 | 21        |                      |     | -7.7     | 154.28
ASQZ                                | 22        |                      |     | +12.4    | 253.83
NLG = 4.2                           |           | 40                   | 4.2 |          |
LO SN check                         | 4         |                      |     |          | ~0.1dB lower?
Mean SQZ                            | 23        |                      |     | +6.8     |
SQZ                                 | 24        |                      |     | -6.3     | 140.6
ASQZ                                | 25        |                      |     | +9.6     | 262.64
NLG = 2.4                           |           | 20                   | 2.4 |          |
Mean SQZ                            | 26        |                      |     | +3.8     |
SQZ                                 | 27        |                      |     | -4.8     | 135.45
ASQZ                                | 28        |                      |     | +6.3     | -100.5
LO shot noise @ 1.06 mA             | 29        |                      |     |          |

All measurements had PUMP_ISS engaged throughout; we manually tuned the ISS setpoint for different NLGs. For low NLG (20uW trans) we manually engaged ISS. LO power (shot noise) drifted ~5% over the measurement, see trends.

NLG Sweep Procedure:

DTT saved in $(userapps)/sqz/h1/Templates/dtt/HD_SQZ/HD_SQZ_8dB_101823_NLGsweep.xml

Images attached to this report
Non-image files attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 11:06, Thursday 19 October 2023 (73581)

Using Dhruva's nice plotting code for NLG sweeps from LHO:67242, here are some plots of squeezing vs. NLG, along with a calibration of the OPO lasing threshold and the various green powers at this new crystal spot. Data & updated plotting code attached.

NLG sweep data summary here:

SHG launched (mW) | OPO green refl (mW) | OPO green trans (uW) | NLG | Mean SQZ (dB) | SQZ (dB) | Anti-SQZ (dB)
------------------|---------------------|----------------------|-----|---------------|----------|--------------
10.8              | 1                   | 80                   | 14  | 13            | -8.0     | 16
13.3              | 1.3                 | 100                  | 33  | 16.7          | -8.0     | 19.9
16                | 1.5                 | 120                  | 114 | 22.5          | -8.0     | 25.6
8.4               | 0.8                 | 60                   | 7.9 | 9.7           | -7.7     | 12.4
5.9               | 0.6                 | 40                   | 4.2 | 6.8           | -6.3     | 9.6
3.5               | 0.4                 | 20                   | 2.4 | 3.8           | -4.8     | 6.3
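As an illustration (this is not Dhruva's plotting code from LHO:67242), the OPO lasing threshold can be fit from the table above assuming the standard below-threshold parametric gain model NLG = 1/(1 - sqrt(P/P_thr))^2, with P taken here as the launched SHG power:

    import numpy as np
    from scipy.optimize import curve_fit

    shg_launched_mw = np.array([3.5, 5.9, 8.4, 10.8, 13.3, 16.0])
    nlg_measured    = np.array([2.4, 4.2, 7.9, 14.0, 33.0, 114.0])

    def nlg_model(p_mw, p_thr_mw):
        # standard OPO parametric gain below threshold
        return 1.0 / (1.0 - np.sqrt(p_mw / p_thr_mw))**2

    (p_thr,), _ = curve_fit(nlg_model, shg_launched_mw, nlg_measured, p0=[20.0])
    print(f"fitted OPO lasing threshold ~ {p_thr:.1f} mW launched SHG")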

To-do: Look into the fits of loss & technical noise.

  • Technical noise here fits to -8.8dB below LO shot noise @ 1.1mA (-136dB, dark noise @ -159dB) -- check this is correct/real. For reference, with the old homodyne PD in Feb 2023 (LHO:67223, with 5.5dB sqz), technical noise fit to -9.6dB below LO shot noise @ 1.08mA (calibration suspect; HD_DIFF_DC_OUT was around -134 to -135dB, dark noise @ -157dB). It would be interesting to know whether this roughly -10dB of technical noise below shot noise on the homodyne is real technical noise injection from the squeezer, and/or whether it can be improved with the upcoming squeezer laser PMC installation.
  • Loss estimate from NLG sweep - our homodyne loss budget accounts for 13% losses (after scaling back all over-estimations of losses), but this sweep fits to 7% losses, so we should resolve this over-estimated loss discrepancy.
Non-image files attached to this comment
michael.zucker@LIGO.ORG - 14:58, Thursday 19 October 2023 (73599)

Outstanding work, well done!

victoriaa.xu@LIGO.ORG - 15:29, Monday 20 November 2023 (74319)

Attached here is a re-fitting of this homodyne NLG sweep, which fits [loss, phase noise, technical noise] to measured SQZ+ASQZ, given measured NLG. It also shows the calculated loss from measured mean-sqz and NLG (which relies on accurate calibration of NLG --> generated SQZ dB). The same fitting was done for NLG sweeps on DARM the following week LHO:73747.

The previous analysis fit [phase noise, technical noise] using the loss calculated from mean-squeezing. Compared to the earlier analysis, I think these fits here are closer.

We budgeted 13% HD loss for this homodyne measurement:  1 - [0.985(opo) * 0.96(ham7) * 0.98(sqzt7 optics) * 0.977(HD PD QE) * 0.96((visibility~98%)**2)] = 13%.

This fit to the NLG HD sweep suggests ~11% homodyne loss, 7 mrad rms phase noise, with technical noise about -10 dB below 1.1 mW LO shot noise. Note HD dark noise is -22 dB below shot noise, suggesting mystery technical noise on the homodyne.
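For bookkeeping, the budgeted loss quoted above is just the product of the individual efficiencies (a quick check, labels paraphrased from the entry above):

    from math import prod

    efficiencies = {
        "OPO":          0.985,
        "HAM7":         0.96,
        "SQZT7 optics": 0.98,
        "HD PD QE":     0.977,
        "visibility^2": 0.98**2,   # ~98% fringe visibility enters squared
    }
    throughput = prod(efficiencies.values())
    print(f"throughput = {throughput:.3f}, loss = {1 - throughput:.1%}")   # ~13%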

Images attached to this comment
H1 OpsInfo (VE)
anthony.sanchez@LIGO.ORG - posted 18:28, Wednesday 18 October 2023 - last comment - 10:01, Thursday 19 October 2023(73573)
zotvac0 Debian11 upgrade update

The CDS vacuum workstation zotvac0 got its upgrade today.
The workstation hardware was changed from an older model ZOTAC to the newer model ZOTAC ZBOX-EN1060k which is capable of running Debian 11.
The hardware install and Debian11 imaging went fine, and the workstation operates as a CDS workstation.
Dave ran a test of the CP1 fill and he is confident that the new vacuum computer can handle this task tomorrow, though it may not be able to make the plots it usually does.

Current issues:
H1EDC has 17 channels that are currently disconnected as a result of this swap; Dave and I will be working on this first thing in the morning.
Users such as controls, myself, and most others are able to log in, except vacuum, which had a local account on the old machine (the old zotvac0 is currently in the computer users room, turned off).
But the vacuum user can log in via tty and ssh in to make changes to the VAC system. Unfortunately, that is not generally how the vacuum users use their account. I have been tinkering for the last hour and noticed that when I switched from xfwm4 to GNOME I was able to log in as vacuum using the GUI. I then rebooted to see if it would consistently allow the vacuum user to log in via GNOME, and it does not.
So tomorrow Dave and I will be working on getting this running correctly again.

To preserve information about how the vacuum workstation was configured, I'll post my notes with IP addresses and MACs partially redacted.

ZotVac Replacement

DHCP:
**********************************************
Current Workstations.pool entry:

host zotvac0 {
  hardware ethernet [redacted]:0c:de;
  fixed-address [cds subnet].72;
  option host-name "zotvac0";
}

This must be removed:
host zotws23 {
  hardware ethernet [redacted]:5b:fa;
  fixed-address [cds subnet ].156;
  option host-name "zotws23";
}
-------------------------------------------------
After:
 
host zotvac0 {
  hardware ethernet [redacted]:5b:fa;
  fixed-address [cds subnet].72;
  option host-name "zotvac0";
}

**********************************************

For DNS I just remove zotws23

**********************************************

Old ZOTVAC0 was changed over to the following for troubleshooting purposes.

host zotvac1 {
  hardware ethernet [redacted]:0c:de;
  fixed-address [cds subnet].156;
  option host-name "zotvac1";
}

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 10:01, Thursday 19 October 2023 (73585)

Update to zotvac0 upgrade:
The vacuum user can now log in, after updating the user ID to match the one found on the file server for the vacuum user, and then editing the name service cache daemon configuration on the local machine to ensure the daemon is using the correct service.
The H1:DAQ-H1EDC_CHAN_NOCON count is now back down to 0; unfortunately, neither Dave nor I is sure what caused this to start working again, aside from a reboot.
The zotvac0 system did seem to hibernate last night about an hour or so after it was last used.
I am concerned that the vacuum computer will go to sleep/hibernate again if it's not used for some amount of time. I have looked at the power settings, and everything is set to never suspend and never sleep so it can actively monitor the vacuum. So I'm leaving the workstation alone and monitoring whether it goes to sleep/hibernates throughout the day.

H1 CDS (OpsInfo)
david.barker@LIGO.ORG - posted 18:24, Wednesday 18 October 2023 - last comment - 21:41, Wednesday 18 October 2023(73574)
EDC is not connecting to 17 CP1_OVERFILL channels

Tony, Dave:

The EDC is consistently not connecting to 17 of the 27 channels served by the CP1 overfill IOC. This started soon after the upgraded zotvac0 was installed. We ran the original IOC to see if the new machine was the issue, but the EDC still only connected to 10 of the 27 channels in H1EPICS_CP1.ini. It is the same block of 10 which connect; there is no obvious reason why these 10 in particular (we checked position in the file and type of PV). So we are running the new zotvac0 overnight.

OPS: we expect the EDC disconnect channel count to remain at 17 or hopefully eventually drop to zero.

Comments related to this report
david.barker@LIGO.ORG - 21:41, Wednesday 18 October 2023 (73580)

Looks like zotvac0 has gone offline and the EDC is now disconnected from all 27 channels. We'll leave it like this overnight and work on it first thing tomorrow.

H1 General
thomas.shaffer@LIGO.ORG - posted 14:00, Wednesday 18 October 2023 - last comment - 21:57, Wednesday 18 October 2023(73565)
Lock loss 2043 UTC

1381696994

No obvious cause, but there were TCS CO2 steps happening at the time, and our funny LSC-DARM wiggle appeared ~100ms before lockloss.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 21:57, Wednesday 18 October 2023 (73578)

I looked at the quad L2 & L3 MASTER_OUT channels as well as the ETMX L2 NOISEMON and FASTIMON channels that you looked at for the 10/18 01:53UTC lockloss in 73552, and noticed a couple of things:

Comparing this to the two recent locklosses that had pre-lockloss glitches, 10/15 08:53 UTC and 10/18 01:53 UTC (73552): neither of those had been caught by other quads or ASC-AS_A. As for which channel saw the glitch first, for the 10/15 LL I could not tell (I had said in 73552 that it hit ETMX L3 first, but I think that was wrong), and for the 10/18 01:53 LL we previously found that either DARM or the DCPDs saw it first.

LL                      | Time before LL (s) | Seen first by                | Also seen by
------------------------|--------------------|------------------------------|-------------------------------------------------------
10/15 08:53 UTC         | ~0.5               | either DARM, DCPDs, ETMX L3  | NOISEMON, FASTIMON, ETMX_L2
10/18 01:53 UTC (73552) | ~0.17              | DARM or DCPDs                | NOISEMON, FASTIMON, ETMX_L2
10/18 20:42 UTC         | ~0.1               | ETMX L3                      | NOISEMON, FASTIMON, ETMX_L2, ITMs_L2, ETMY_L2, ASC-AS_A
Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 12:48, Wednesday 18 October 2023 - last comment - 18:45, Wednesday 18 October 2023(73559)
Picket fence FOM plot went unstable, pnsndata server is intermittent

Starting around 11am PDT the picket fence trend plot on nuc5 started restarting itself. This was because the pnsndata service (providing stations OTR on the Washington peninsula and LAIR at Eugene, Oregon) became intermittent. To test this I took these stations out and re-ran picket fence with no issues. Later it looked like pnsndata was more stable, so I added it back in. This did not last, and at 12:40 I removed it again from our client.

Note that with the west coast stations removed, the map has centered itself on Idaho.

I'll continue to monitor pnsndata and add it back when it becomes stable.

Images attached to this report
Comments related to this report
edgard.bonilla@LIGO.ORG - 18:45, Wednesday 18 October 2023 (73576)SEI

We checked with Renate Hartog (the PNSN network manager) through a series of calls and emails. She's gotten everything going now, thanks Renate!

Turns out that they had a system reboot at about that time and people forgot to restart a (virtual) network interface. As of 17:13 Pacific, the issue has been resolved and we can now connect to the PNSN.

Hopefully things will be stable and running again soon. Thanks for the notification about this Dave.

Edgard

H1 DetChar
gabriele.vajente@LIGO.ORG - posted 08:50, Wednesday 18 October 2023 - last comment - 18:35, Friday 27 October 2023(73546)
Low Frequency Noise (<50 Hz)

Using two periods of quiet time during the last couple of days (1381575618 + 3600s, 1381550418 + 3600s) I computed the usual coherences:

https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381550418/
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381575618/

The most interesting observation is that, for the first time as far as I can remember, there is no coherence above threshold with any channels for wide bands in the low frequency range, notably between 20 and 30 Hz, and also for many bands above 50 Hz. I'll assume for now that most of the noise above ~50 Hz is explained by thermal noise and quantum noise, and focus on the low frequency range (<50 Hz).

Looking at the PSDs for the two hour-long times, the noise below 50 Hz seems to be quite repeatable, and closely follows a 1/f^4 slope. Looking at a spectrogram (especially when whitened with the median), one can see that there is still some non-stationary noise, although not very large. So it seems to me that the noise below ~50 Hz is made up of some stationary 1/f^4 unknown noise (not coherent with any of the 4000+ auxiliary channels we record) and some non-stationary noise. This is not hard evidence, but an interesting observation.

Concerning the non-stationary noise, I think there is evidence that it's correlated with the DARM low-frequency RMS. I computed the GDS-CALIB RMS between 20 and 50 Hz (whitened to the median to weight the frequency bins equally even though the PSD has a steep slope), and the LSC_DARM_IN1 RMS between 2.5 and 3.5 Hz (I tried a few different bands and this is the best). There is a clear correlation between the two RMSs, as shown in a scatter plot, where every dot is the RMS computed over 5 seconds of data, using a spectrogram.
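For reference, here is a rough gwpy-based sketch (an assumption of how such a comparison could be done offline, not Gabriele's actual code or normalization) of the band-limited RMS scatter described above:

    import numpy as np
    import matplotlib.pyplot as plt
    from gwpy.timeseries import TimeSeries

    start, end = 1381575618, 1381575618 + 3600   # one of the quiet hours above

    strain = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, end)
    darm   = TimeSeries.get("H1:LSC-DARM_IN1_DQ", start, end)

    def whitened_blrms(ts, flow, fhigh, stride=5):
        spec  = ts.spectrogram(stride) ** 0.5          # ASD every 5 seconds
        white = spec.ratio('median')                   # whiten each bin by its median
        band  = white.crop_frequencies(flow, fhigh)
        return np.sqrt((band.value ** 2).sum(axis=1))  # per-slice RMS over the band

    plt.scatter(whitened_blrms(darm, 2.5, 3.5), whitened_blrms(strain, 20, 50), s=4)
    plt.xlabel("LSC-DARM_IN1 RMS, 2.5-3.5 Hz (median-whitened)")
    plt.ylabel("GDS-CALIB_STRAIN RMS, 20-50 Hz (median-whitened)")
    plt.show()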

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 11:01, Wednesday 18 October 2023 (73554)

DARM at low frequency (< 4 Hz) is highly coherent with the ETMX M0 and R0 L damping signals. This might just be recoil from the LSC drive, but it might be worth trying to reduce the L damping gain and see if the DARM RMS improves.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 13:04, Wednesday 18 October 2023 (73560)

Bicoherence is also showing that the noise between 15 and 30 Hz is modulated according to the main peaks visible in DARM at low frequency.

Images attached to this comment
elenna.capote@LIGO.ORG - 20:53, Wednesday 18 October 2023 (73579)

We might be circling back to the point where we need to reconsider/remeasure our DAC noise. Linking two different (and disagreeing) projections from the last time we thought about this; the projected DAC noise has the correct slope. However, Craig's projection and the noisemon measurement did not agree, something we never resolved.

Projection from Craig: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=68489

Measurement from noisemons: https://alog.ligo-wa.caltech.edu/aLOG/uploads/68382_20230403203223_lho_pum_dac_noisebudget.pdf

christopher.wipf@LIGO.ORG - 11:15, Friday 20 October 2023 (73620)

I updated the noisemon projections for PUM DAC noise, and fixed an error in their calibration for the noise budget. They now agree reasonably well with the estimates Craig made by switching coil driver states. From this we can conclude that PUM DAC noise is not close to being a limiting noise in DARM at present.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 09:51, Tuesday 24 October 2023 (73691)CDS, CSWG, ISC, OpsInfo, SUS
To Chris' point above -- we note that the PUMs are using 20-bit DACs, and we are NOT using any "DAC Dither" (see the aLOGs motivating why we do *not* use them, LHO:68428 and LHO:65807; namely that [in the little testing that we've done] we've seen no improvement, so we decided they weren't worth the extra complexity and maintenance).
christopher.wipf@LIGO.ORG - 15:25, Tuesday 24 October 2023 (73710)

If at some point there’s a need to test DAC dithers again, please look at either (1) noisemon coherence with the DAC request signal, or (2) noisemon spectra with a bandstop in the DAC request to reveal the DAC noise floor.  Without one of those measures, the noisemons are usually not informative, because the DAC noise is buried under the DAC request.

christopher.wipf@LIGO.ORG - 18:35, Friday 27 October 2023 (73784)

Attached is a revised PUM DAC noisemon projection, with one more calibration fix that increases the noise estimate below 20 Hz (although it remains below DARM).

Images attached to this comment
H1 SUS (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 15:48, Friday 13 October 2023 - last comment - 11:06, Thursday 19 October 2023(73452)
Started making a DARM BLRMS monitor for violin modes

In order to more easily track and notify on rising violin modes, I've started making a DARM BLRMS monitor around 490-520Hz. This is currently in position #8, in place of Camilla's 6600-6800Hz band that didn't give expected results (alog73272). It looks like it's working, but not the best it could be. Jenne gave me some pointers that I'll try at another time to get this to better show us the violin mode total magnitude. Once it's looking acceptable I'll make others for the harmonics as well.
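As an aside, the same quantity can be approximated offline with gwpy (a sketch only, not the actual front-end BLRMS block; the time span and notch frequencies here are placeholders for whatever lines sit in the band):

    from gwpy.timeseries import TimeSeries

    start, end = "2023-10-13 18:00", "2023-10-13 19:00"   # any locked stretch; placeholder times
    darm = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, end)

    band = darm.bandpass(490, 520)
    for line in (500.0, 505.0):        # hypothetical lines to exclude from the BLRMS
        band = band.notch(line)

    violin_blrms = band.rms(60)        # one RMS point per 60 s
    plot = violin_blrms.plot()
    plot.gca().set_ylabel("DARM BLRMS, 490-520 Hz")
    plot.show()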

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 11:06, Thursday 19 October 2023 (73591)

Added some more notches for the other harmonics and upped the attenuation on the bandpass. I also took over slot #9 for the 1k violin modes. I think they are working somewhat well, but I'd really like to watch this next time we have a violin mode ring up to see if it's accurate.

Images attached to this comment
H1 ISC (AWC, DetChar-Request, ISC)
keita.kawabe@LIGO.ORG - posted 15:09, Tuesday 10 October 2023 - last comment - 15:27, Thursday 19 October 2023(73367)
OM2/beckhoff coupling no light test (Daniel, Keita)

To see if the OM2/Beckhoff coupling is a direct electronics coupling or not, we've done an A-B-A test while the fast shutter was closed (no meaningful light on the DCPDs).

State A (should be quiet): 2023 Oct/10 15:18:30 UTC - 16:48:00 UTC. The same as the last observing mode. No electrical connection from any pin of the Beckhoff cable to the OM2 heater driver chassis. Heater drive voltage is supplied by the portable voltage reference.

State B (might be noisy): 16:50:00 UTC - 18:21:00 UTC. The cable is directly connected to the OM2 heater driver chassis.

State A (should be quiet): 18:23:00- 19:19:30 UTC or so.

DetChar, please directly look at H1:OMC-DCPD_SUM_OUT_DQ to find combs.

It seems that even if the shutter is closed, once in a while a very small amount of light reaches the DCPDs (green and red arrows in the first attachment). One of them (red arrow) lasted long and we don't know what was going on there. One of the short glitches was caused by the BS being momentarily kicked (cyan arrow) and scattered light in HAM6 somehow reaching the DCPDs, but I couldn't find other glitches that exactly coincided with optics motion or the IMC locking/unlocking.

To give you a sense of how bad (or not) these glitches are, the 2nd attachment shows the DCPD spectrum of a quiet time in the first State A period (green), the strange glitchy period indicated by the red arrow in the first attachment (blue), a quiet time in State B (red), and observing time (black, not corrected for the loop).

FYI, right now we're back to State A (should be quiet). Next Tuesday I'll inject something to thermistors in chamber. BTW 785 was moved in front of the HAM6 rack though it's powered off and not connected to anything.

Images attached to this report
Comments related to this report
ansel.neunzert@LIGO.ORG - 10:25, Monday 16 October 2023 (73498)

I checked H1:OMC-DCPD_SUM_OUT_DQ and don't see the comb in any of the three listed intervals (neither state A nor B). Tested with a couple of SFT lengths (900s and 1800s) in each case.

keita.kawabe@LIGO.ORG - 17:19, Tuesday 17 October 2023 (73527)DetChar-Request

Since it seems that the coupling is NOT a direct electronics coupling from Beckhoff -> OM2 -> DCPD, we fully connected the Beckhoff cable to the OM2 heater driver chassis and locked the OMC to the shoulder with an X single-bounce beam (~20mA DCPD_SUM, not 40mA like in the usual nominal low noise state). That way, if the Beckhoff is somehow coupling to the OMC PZT, it might cause visible combs in the DCPD.

We didn't see the comb in this configuration. See the 1st attachment: red is the shoulder lock and green is when the 1.66Hz comb was visible with the full IFO (the same time reported by Ansel in alog 73000), showing just the two largest peaks of the 1.66Hz harmonics visible in the green trace. (It seems that the 277.41Hz and 279.07Hz peaks are the 167th and 168th harmonics of 1.66Hz.) Anyway, because of the higher noise floor, even if the combs were there we couldn't have seen these peaks. We've had a different comb spacing since then (alog 73028), but anyway I don't see anything at around 280Hz. FYI I used 2048 FFTs for both; red is a single FFT and green is an average of 6. This is w/o any normalization (like RIN).

In the top panel of the 2nd attachment, red is the RIN of OMC-DCPD_SUM_OUT_DQ during the shoulder lock; blue and dark green are the RIN of the 2nd loop in- and out-of-loop sensor arrays. Magenta, cyan and blue-green are the same set of signals when H1 was in observing last night. The bottom panel shows the coherence between DCPD_SUM during the shoulder lock and the ISS sensors as well as IMC_F, which just means that there's no coherence except at high kHz.

If you look at Georgia's length noise spectrum from 2019 (alog 47286), you'll see that it's not totally dissimilar to the top panel of our 2nd plot, even though Georgia's measurement used dither lock data. Daniel points out that a low-Q peak at around 1000Hz is a mechanical resonance of the OMC structure causing real length noise.

Configurations: H1:IMC-PWR_IN ~ 25.2W. ISS 2nd loop is on. Single-bounce X beam. DCPD_SUM peaked at about 38mA when the length offset was scanned, and the lock point was set to the middle (i.e. 19mA). DC pointing loops using AS WFS DC (DC3 and DC4) were on. OMC QPD loops were not on (they were enabled at first but were disabled by the guardian at some point before we started the measurement). We were in this state from Oct/17/2023 18:12:00 - 19:17:20 UTC.

Images attached to this comment
Non-image files attached to this comment
keita.kawabe@LIGO.ORG - 17:25, Tuesday 17 October 2023 (73536)DetChar-Request

BTW the Beckhoff cable is still fully connected to the OM2 heater driver chassis. This is the first observation data in this configuration since Fil worked on the grounding of the Beckhoff chassis (alog 73233).

Detchar, please find the comb in the obs mode data starting Oct/17/2023 22:33:40 UTC.

ansel.neunzert@LIGO.ORG - 11:31, Wednesday 18 October 2023 (73555)

The comb indeed re-appeared after 22:33 UTC on 10/17. I've attached one of the Fscan daily spectrograms (1st figure); you can see it appear in the upper right corner, around 280 Hz as usual at the start of the lock stretch.

Two other notes:

  • The comb is now back to its original spacing of 1.6611 Hz.
  • There are new strong lines visible at +/- 1.235 Hz from the comb teeth. If this structure has appeared before, I'm not aware of it. I've attached an image showing the lines (2nd figure, average of DMT-ANALYSIS_READY data between 22:00 10/17 and 02:00 10/18) and a comparison with older data from Sept 20th (3rd figure).
Images attached to this comment
keita.kawabe@LIGO.ORG - 13:29, Wednesday 18 October 2023 (73563)DetChar-Request

Just to see if anything changes, I used the switchable breakout board at the back of the OM2 heater driver chassis to break the thermistor connections but kept the heater driver input coming from the Beckhoff. The only two pins that are conducting are pins 6 and 19.

That happened at around Oct/18/2023 20:18:00 to 20:19-something UTC when others were doing the commissioning measurements.

Detchar, please look at the data once the commissioning activities are over for today.

ansel.neunzert@LIGO.ORG - 14:04, Thursday 19 October 2023 (73595)

Because there was an elevated noise floor in the data from Oct/17/2023 18:12:00 mentioned in Keita's previous comment, there was some doubt as to whether the comb would have been visible even if it were present. To check this, we did a direct comparison with a slightly later time when the comb was definitely present & visible. The first figure shows an hour of OMC-DCPD_SUM_OUT_DQ data starting at UTC 00:00 on 10/18 (comparison time with visible comb). Blue and yellow points indicate the comb and its +/-1.235 Hz sidebands. The second figure shows the time period of interest starting 18:12 on 10/17, with identical averaging/plotting parameters (1800s SFTs with 50% overlap, no normalization applied so that amplitudes can be compared) and identical frequencies marked. If it were present with equivalent strength, it looks like the comb ought to have been visible in the time period of interest despite the elevated noise floor. So this supports the conclusion that the comb was *not* present in the 10/17 18:12 data.
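For completeness, a sketch of how such an amplitude comparison might look with gwpy (median-averaged ASDs rather than the SFT pipeline actually used, with no extra normalization so amplitudes are directly comparable):

    from gwpy.timeseries import TimeSeries

    chan = "H1:OMC-DCPD_SUM_OUT_DQ"
    ref  = TimeSeries.get(chan, "2023-10-18 00:00", "2023-10-18 01:00")  # comb visible
    test = TimeSeries.get(chan, "2023-10-17 18:12", "2023-10-17 19:12")  # period of interest

    kwargs = dict(fftlength=1800, overlap=900, method="median")
    asd_ref, asd_test = ref.asd(**kwargs), test.asd(**kwargs)

    plot = asd_ref.plot(label="comb visible (10/18 00:00 UTC)")
    ax = plot.gca()
    ax.plot(asd_test, label="period of interest (10/17 18:12 UTC)")
    ax.set_xlim(270, 290)   # around the ~1.66 Hz comb teeth near 280 Hz
    ax.legend()
    plot.show()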

Images attached to this comment
ansel.neunzert@LIGO.ORG - 15:27, Thursday 19 October 2023 (73600)

Following up, here's about 4 hours of DELTAL_EXTERNAL after Oct 18 22:00. So this is after Keita left only the heater driver input connected to the Beckhoff on Oct/18/2023 20:18:00. The comb is gone in this configuration.

Images attached to this comment