Reports until 14:37, Wednesday 26 July 2023
H1 DetChar (DetChar, ISC)
zach.yarbrough@LIGO.ORG - posted 14:37, Wednesday 26 July 2023 (71737)
Followup on possible noise from ESD PI damping.
Derek, Iara, Zach

Following up on Vicky's request to determine if the continuous turning on and off of the ESD drive in response to PI ringup is causing noise.

We examined 28 June 2023 at 07:25 UTC and 19 July 2023 at 16:19 UTC, times when the ESD drive was repeatedly turned on and off. We compared strain and Omicron segments for the same times and found no significant increase in glitches or in average strain.

We find no additional noise from the engagement of the ESD drive. Additionally, we notice no significant difference between the time periods when the ESD drive is on vs. when it is off.
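A minimal sketch of this kind of on/off comparison (this is not the actual DetChar tooling; the trigger times and segment boundaries below are invented stand-ins for Omicron triggers and ESD-drive segments):

```python
import numpy as np

def glitch_rate(trigger_times, segment):
    """Rate (Hz) of triggers whose times fall inside a (start, stop) segment."""
    start, stop = segment
    t = np.asarray(trigger_times, dtype=float)
    n = np.count_nonzero((t >= start) & (t < stop))
    return n / (stop - start)

# Invented Omicron-style trigger times and ESD-on / ESD-off segments (seconds)
rng = np.random.default_rng(0)
triggers = np.sort(rng.uniform(0, 1200, size=24))
rate_on = glitch_rate(triggers, (0, 600))      # ESD drive engaged
rate_off = glitch_rate(triggers, (600, 1200))  # ESD drive off
print(f"ESD on: {rate_on * 3600:.0f}/hr, ESD off: {rate_off * 3600:.0f}/hr")
```

With real Omicron triggers one would compare SNR distributions as well as raw counts.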

Attached are images of the PI ringups and resulting ESD damping and the corresponding times in strain and omicron.
Images attached to this report
H1 DetChar (DetChar, ISC)
shania.nichols@LIGO.ORG - posted 14:11, Wednesday 26 July 2023 (71735)
TSAMS Glitches - June 27 2023

Andy, Marissa, Gaby, Brennan, Tabata, Shania

We investigated glitches that occurred in Calib strain on June 27 2023 (see red box in glitch-gram and attached omegascan) which are related to the OM2 TSAMS used to improve mode matching and increase BNS range (see aLog 70886). The glitches in Calib strain appear while the temperature is changing (see attached TSAMS plots) and occur around 300 Hz. These glitches are also witnessed by OMC-ASC_QPD_{A/B}_YAW (see attached omegascan). Once the temperature is stable the glitches are no longer present. These glitches also occur in Livingston (March 15 2023) when the TSAMS are used (see aLog 63983 for TSAMS plot and attached L1 glitch-gram). 


 

Images attached to this report
H1 ISC (CAL)
jenne.driggers@LIGO.ORG - posted 14:04, Wednesday 26 July 2023 - last comment - 18:45, Wednesday 26 July 2023(71732)
Cal lines on/off test - not seeing much extra noise with quick DTT look at the data

Now that we're back to our lower noise situation with OM2 hot, we're redoing the test that Oli ran in 71304 to look at how much different our noise is with and without the calibration lines on.  We expect that this will primarily show that the noise right around the line frequencies is reduced, as suggested by Gabriele in 71614. I don't think we expect any major changes other than right around the lines, but if we do, that would be very interesting.

Since the low frequency calibration lines are back on this week (alog 71706), it took more 'doing' than normal to turn off the calibration lines.  ISC_LOCK's NLN_CAL_MEAS does not currently turn off the CAL_AWG_LINES guardian-controlled lines, so I selected LINES_OFF in that guardian.  However, that didn't actually stop all of the lines, so I also did an awg clear 8 * to stop the lines going to the DARM1_EXC.  TJ is looking into ensuring that NLN_CAL_MEAS takes care of the awg lines.

I'm a little surprised, but I'm not seeing much of a difference in the spectra when the lines are on vs. off.  All 4 panels are the same 4 traces (noted above in the bullet points), but zoomed differently.  These traces are all of the GDS-CALIB_STRAIN_NOLINES, so for times that the calibration lines are on, they've been subtracted out of the data here (note that the 'new' CAL_AWG_LINES are not yet subtracted, so those are still present in this channel).  I expected this channel to show some noise around the calibration lines for times when the Cal lines were on.  But, I'm not really seeing anything. 

I'm not sure if the PCALY_DARM lines are coming back on at the same height each time or not.  It's possible that they are, and it's just that the attached ndscope can only look at 16 Hz channels, and so the beating between lines / aliasing is causing it to look like they are not.  But, just to flag that we should check to ensure that the lines are coming on with awg at the amplitude requested.
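One way to check whether a line comes back at the requested amplitude is to demodulate the channel at the line frequency. A hedged sketch on synthetic data (the sample rate and line frequency are made-up stand-ins, not the actual PCALY_DARM line parameters):

```python
import numpy as np

def line_amplitude(x, fs, f_line):
    """Estimate the amplitude of a sinusoidal line by complex demodulation:
    the mean of x * exp(-2j*pi*f*t) approaches A/2 for a line of amplitude A."""
    t = np.arange(len(x)) / fs
    return 2 * np.abs(np.mean(x * np.exp(-2j * np.pi * f_line * t)))

# Synthetic stand-in: a 1e-3 amplitude line at an invented frequency, plus noise
fs, f_line = 16384.0, 331.9
t = np.arange(int(64 * fs)) / fs
x = 1e-3 * np.sin(2 * np.pi * f_line * t)
x += 1e-4 * np.random.default_rng(1).normal(size=t.size)
print(line_amplitude(x, fs, f_line))  # recovers ~1e-3
```

Tracking this estimate across lock stretches would show whether the line amplitude is reproducible, independent of the 16 Hz trend-channel aliasing.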

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 18:45, Wednesday 26 July 2023 (71749)

Looking closely, one can see that the noise between 18 and 22 Hz is slightly improved. It's small, but it's there.

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 13:08, Wednesday 26 July 2023 (71733)
Lock loss 2005 UTC

Lock loss 1374437133

Caused by commissioning activities.

LHO VE
david.barker@LIGO.ORG - posted 11:56, Wednesday 26 July 2023 (71731)
Wed CP1 Fill

Wed Jul 26 10:07:00 2023 INFO: Fill completed in 6min 56secs

Travis confirmed a good fill curbside

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:36, Wednesday 26 July 2023 (71729)
Updated CDS Host Stats MEDM

Yesterday we added h1digivideo3 to the CDS hosts stats system, and I noticed that the MEDM was missing some of the recently added network ports, specifically the totals and the loopbacks.

I have updated my script which generates the H1CDS_HOST_STATS MEDM window, the new version is shown in the attachment.

This MEDM can be accessed either from the CDS Overview, as the "COMPUTERS" button lower-right, or from the SITEMAP from CDS->"CDS Machine Stats"

Images attached to this report
H1 CAL
anthony.sanchez@LIGO.ORG - posted 08:33, Wednesday 26 July 2023 - last comment - 14:30, Friday 28 July 2023(71725)
PCAL X Noise found by Shivaraj


Shivaraj sent Rick and I a message about some noise found on H1:PCALX_TX_PD and H1:PCALX_RX_PD.
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230703/cal/pcal_x/

This particular noise took place on the 3rd of July, when LHO was empty of people except an operator. Clear examples of this type of noise on these channels can also be found on the 4th, 5th, and 17th of July.


Checking the same channels on EY:
There is some form of noise on H1:PCALY_TX_PD on the 11th, 12th, and 13th of this month that looks similar, though it occurs less often and is less intense than the noise found on PCALX, and it's not always on both the PCALY TX and RX PDs at the same time, as is seen at EX.

I have reached out to Shivaraj to try to learn more about this and to see if it's a problem for DARM; it doesn't seem to be, according to what he saw in Bruco.

This noise could point to a problem with our PCAL lasers, since it's in both the TX and RX PDs at EX. But it could also be the AOM, or the OFS being saturated or otherwise interacting with changes in temperature or humidity.

This could also be a DAQ issue, like a chassis or board, because it's showing up on both channels at the same time at EX. Shivaraj mentioned that there might be "cross talk between different channels in a board, and if the glitches are in the light and seen by both PDs they would also show up in other channels, which we could likely use to our advantage."

 

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 13:03, Friday 28 July 2023 (71793)

I took this issue to the Noise Sprint on Wednesday, and Adrian Helming-Cornell, Jane Glanzer, and Vishal Yalla took up the project.
Dave Barker and Erik had apparently also been looking into this, and by Wednesday lunchtime there was some sharing of information.

The Noise Sprint group started a google doc where we put all the information that we were gathering:
https://docs.google.com/document/d/127y-9zX6So-zWHxpziH0cU9SAjMjV1lUiJrwKRHdB4A/edit

That may not be a clickable link so here is the content:

alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=71725 

 



 

PCAL Background: 

-> PCAL = Photon Calibrator

Used to calibrate the interferometer by applying a physical (photon radiation pressure) force to the test masses at the end stations

 

PCAL chassis layout: https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?.submit=Identifier&docid=S1400489&version=5 

 

Potential Causes: 

  • Laser 
  • AOM
  • AOM Power Supply 

 

Tasks:

  • See how often this occurs via summary pages
  • Try and compare the noise to other channels (list on the way)/look for excitations
  • Trend PCAL lines and systematic error in lines, do they affect transfer functions?
  • (High Priority) Is H1:GRD-ISC_LOCK_STATE_N in NLN_CAL_MEAS, i.e. state 700? 
    • Are the PCAL noise bursts happening in nominal low noise (state 600) or during calibration measurements (state 700)?
    • Can find on ldvw (LigoDV-web)
    • Find the alog for when the power supply was replaced and see if the noise was present before that date.
      • We do see this noise before May 2nd in both PCALX and PCALY (April 10th-14th). This rules out the power supply change.
  • Some noise in the BSC9 (at EX) X/Y/Z channels on July 3rd 2023, from about 14:00-20:00 UTC, in the 20-40 Hz band.
    • BLRMS these channels, any temporal correlations between these guys and TX/RX noise bursts?
    • What about the endstation ground motion SEI system BLRMS?
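The BLRMS task above could be sketched generically as a bandpass followed by a stride-wise RMS (this is not the site BLRMS infrastructure; the data below is synthetic, and real channel data would be fetched separately):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def blrms(x, fs, band, stride):
    """Band-limited RMS: bandpass to `band` (Hz), then RMS over `stride`-s chunks."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    n = int(stride * fs)
    nseg = len(y) // n
    return np.sqrt(np.mean(y[: nseg * n].reshape(nseg, n) ** 2, axis=1))

# Synthetic example: a 30 Hz burst in the middle shows up in the 20-40 Hz BLRMS
fs = 256.0
t = np.arange(int(120 * fs)) / fs
x = np.random.default_rng(2).normal(scale=0.1, size=t.size)
mid = t.size // 2
x[mid : mid + int(10 * fs)] += np.sin(2 * np.pi * 30 * t[: int(10 * fs)])
print(blrms(x, fs, (20, 40), stride=10).round(3))  # middle strides stand out
```

Cross-correlating BLRMS time series like this against the TX/RX burst times would test the ground-motion hypothesis.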


 

Channel Names: 

(Might have correlation between calibration channels; however chassis channels are not resolving without calibration channels) 

 

  • H1:CAL-PCALX_RX_PD_WATTS_OUT
    H1:CAL-PCALX_TX_PD_WATTS_OUT
  • H1:CAL-PCALX_SHUTTERPOWERENABLE
  • H1:CAL-PCALX_RECEIVERMODULETEMPERATURE   (Jitter on channel) 
  • H1:CAL-PCALX_TRANSMITTERMODULETEMPERATURE ( Jitter on channel) 
  • H1:CAL-PCALX_OPTICALFOLLOWERSERVOOSCILLATION
  • H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUTMON 
    • H1:CAL-PCALX_OFS_AOM_DRIVE_OUTMON (Plotting)
  • H1:CAL-PCALX_WS_PD_OUTMON
  • Effectively any channel that starts with H1:CAL-PCALX_[variable]:
    AOM, OFSPD, OFS, TX, RX, OPTICALFOLLOWER

 

Calibration channels:  

Link to GSTAL. If you find a time when the kappas have some weird signals, then check out the GSTAL data for those times as well.

 

  • H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT 
  • H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT
  • H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT
  • H1:GRD-ISC_LOCK_STATE_N Look for state 700


 

Chassis documentation: https://dcc.ligo.org/LIGO-D1400153

 

Other instances:

June 9

June 10 - few lines

June 11 - few lines

June 19

June 24

July 3

July 4

July 5

July 11

July 17

July 18

July 19

July 20

July 21 - few lines

July 25 - few lines

 

  • DAC update:
    • The channel we use to drive the test mass is sampled at a different rate (16 Hz) than the PCAL photodiode channels (16 kHz).
    • This means we may be inserting this noise into the photodiodes because of that difference, if the driving channels are only 16 Hz. (Later confirmed that this is not the case.) 
    • H1:CAL-PCALX_TX_PD_WATTS_OUT_DQ and the corresponding RX channels at both the X and Y ends are not available on LIGO-DV web, though they are available on the CDS network and in the control room. 
    • Going to get 16 kHz DAC readback channels to try to confirm whether this is the issue. Working with Dave and Erik, we have confirmed that the driving channels are 16 kHz channels, but the readback channels are still only 16 Hz, which prevents us from getting analysis results above 8 Hz using LIGO DV. 
    • Using ndscope and diaggui we have confirmed that there is an issue when the Roaming PCAL X line changes. This is a PCAL line whose frequency is changed by a Guardian node after every 24 hours of NLN lock. The issue is that the Roaming X line frequency is changed without a ramp down/up time. This may be the source of the noise we are seeing, but it has not been confirmed yet. The fix would be to add a second channel: ramp down the line at the initial frequency while ramping up the next frequency on the second channel.
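A toy illustration of the proposed two-channel crossfade (this is not awg code; the frequencies, switch time, and ramp shape are invented):

```python
import numpy as np

fs = 16384.0
t = np.arange(int(4 * fs)) / fs
f1, f2 = 1001.3, 1487.6          # invented line frequencies
t_switch, t_ramp = 2.0, 0.5      # switch time and ramp duration (s)

# Abrupt switch: the drive jumps discontinuously at t_switch
abrupt = np.where(t < t_switch,
                  np.sin(2 * np.pi * f1 * t),
                  np.sin(2 * np.pi * f2 * t))

# Two-channel crossfade: ramp the old frequency down while the new one ramps up
down = np.clip((t_switch + t_ramp - t) / t_ramp, 0, 1)
up = 1 - down
ramped = down * np.sin(2 * np.pi * f1 * t) + up * np.sin(2 * np.pi * f2 * t)

# The abrupt version has a much larger sample-to-sample jump at the switch
print(np.max(np.abs(np.diff(abrupt))), np.max(np.abs(np.diff(ramped))))
```

The discontinuity in the abrupt version is what spreads broadband power into the PD channels; the crossfaded drive stays continuous through the switch.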

I have narrowed down when this happens to between two GPS times:

Between 1372456038 and 1372456158


Calibration channels:  

July 4th, 2023: 16:00:00 - 18:00:00 UTC (1372521618 GPS) 

 

H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.

H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.

H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.


July 3rd, 2023: 14:00:00 - 16:00:00 UTC (1372428018 GPS) 

 

H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.

H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.

H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.

 

 

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230703/cal/pcal_x/ 

 

anthony.sanchez@LIGO.ORG - 14:30, Friday 28 July 2023 (71795)

More searching is needed to ensure that this is resolved.

LHO General
thomas.shaffer@LIGO.ORG - posted 08:18, Wednesday 26 July 2023 (71724)
Ops Day Shift Start

TITLE: 07/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
    SEI_ENV state: EARTHQUAKE
    Wind: 8mph Gusts, 6mph 5min avg
    Primary useism: 0.23 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY: Austin handed me an IFO that was just about up to max power when we had another aftershock roll through and break the lock. Starting again, still fully auto relocking.

LHO General (ISC)
austin.jennings@LIGO.ORG - posted 08:02, Wednesday 26 July 2023 (71720)
Wednesday Owl Shift Summary

TITLE: 07/26 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
SHIFT SUMMARY:

- 9:16 - intention bit went to COMMISSIONING, CDS_CA_COPY/CAMERA_SERVO froze, but I was able to move it back to OBSERVING @ 9:19 - Tagging ISC

- Lockloss @ 12:51

- Leaving H1 to TJ with the IFO currently relocking at POWER_10W

LOG:

No log for this shift.

H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 05:54, Wednesday 26 July 2023 (71723)
Lockloss @ 12:51

Lockloss @ 12:51, due to a 6.4 EQ in Vanuatu.

LHO General
austin.jennings@LIGO.ORG - posted 04:00, Wednesday 26 July 2023 (71722)
Mid Shift Owl Report

H1 has been observing for 10 hours. All subsystems appear to be stable.

LHO General (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 00:40, Wednesday 26 July 2023 - last comment - 14:38, Wednesday 26 July 2023(71721)
Weekly In-Lock SUS Charge Measurement

Plots of this week's (07/26/2023) SUS charge measurement attached.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:38, Wednesday 26 July 2023 (71736)SUS

You can see from the in-lock charge plots that V_eff changed the direction of its trend at certain times, roughly listed below. In the attached plot, the bias voltage applied to the DC ESD changed at certain times (68123, 67698). I wanted to see if these times agreed (shown in G1600699). More investigation is needed, as the current plots only go back as far as March, but this could explain the trend in ITMY V_eff. Not shown in the plot: the EX bias was off from 10 March to 05 April 2023 (68446).

Optic | V_eff changed direction (plots above, estimated dates) | DC ESD bias voltage changed (plot attached)
ITMX  | Feb/March 2023                                         | No change
ITMY  | April 2023                                             | 23rd April 2023
ETMX  | No change                                              | 28th Feb 2023
ETMY  | end of April 2023                                      | 28th Feb 2023
Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:35, Wednesday 26 July 2023 (71718)
OPS Eve Shift Summary

TITLE: 07/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 1:14 UTC. Nothing of note to report.

See the lockloss alog for details on the 23:54 UTC lockloss (nothing remarkable). See the midshift alog for details on lock acquisition (near fully automatic).

Weekly SUS Charge (see alog 71721)

LOG:

LHO General
austin.jennings@LIGO.ORG - posted 00:03, Wednesday 26 July 2023 (71719)
Ops Owl Shift Start

TITLE: 07/26 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 15mph Gusts, 12mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

- H1 locked and in observing for just over 6 hours

- SEI/CDS/DMs ok

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 20:05, Tuesday 25 July 2023 - last comment - 09:16, Wednesday 26 July 2023(71717)
OPS Eve Midshift Update

IFO is in OBSERVING as of 01:14 UTC

We lost lock at 23:54 UTC (alog) with no apparent cause

Lock Acquisition:

1. IR was not found on ALS_DIFF and so I moved the Diff Offset slider from 397 to 417 and IR caught immediately thereafter. No further touching was needed all the way to NLN. There was no wait between OMC_Whitening and NLN.

2. There were ASC SDF diffs (screenshot below), but they showed a difference of 0 (though they were initially showing up as red). I think hitting the "observe" intention too early on my part stopped Guardian from clearing these, because as soon as I toggled it back, all Guardian nodes were ready for observing.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:16, Wednesday 26 July 2023 (71727)

The SDF diff screen in the screenshot isn't showing the real diffs: it is set to "FULL TABLE" and "SORT ON SUBSTRING: DHARD_Y", which is how I left it yesterday. To see the actual diffs, change the dropdown boxes to "SETTING DIFFS" and "SHOW ALL". I expect the diffs were from waiting for the ADS cameras to converge.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 17:02, Tuesday 25 July 2023 - last comment - 13:36, Wednesday 26 July 2023(71714)
Lockloss from NLN

Lockloss from NLN (and Observing) at 23:54 UTC

No known reason for the lockloss. Re-locking now.

Comments related to this report
genevieve.connolly@LIGO.ORG - 13:36, Wednesday 26 July 2023 (71734)
Brina, Genevieve, Camilla

We took a look at this lockloss to see if there were any anomalies in the optics. We saw some motion in the ETMX channels starting at 2023/07/25 23:54:54 UTC (~40ms before the lockloss).
Images attached to this comment
H1 ISC (SUS)
camilla.compton@LIGO.ORG - posted 11:31, Tuesday 25 July 2023 - last comment - 09:59, Wednesday 26 July 2023(71681)
DHARD_Y Transient Causing Violin ring up, edited DHARD filter turn on order.

Jenne, Camilla

In 71559 Ryan C and Daniel found that there was a DHARD_Y transient during the DHARD_WFS state that had been larger since June 29th, coinciding with the time the violins started to ring up 71404.

Yesterday TJ and Oli increased the DHARD_P and _Y TRAMPs from 5s to 15s in this state 71672, and in the two relocks since then we've only had to stay in OMC_WHITENING damping violins for <15 minutes 71676. The transient is much smaller after this TRAMP increase, but it is still there and a factor of 2 larger than pre-June 29th.

The step response of the only filter on at the time (FM6 "newcntrl") is very big when it's turned on, but it is over in 1 second (attached). Currently FM6 is turned on, and then 1 second later the input and gain are turned on (plot attached). Since the step response starts when the input is turned on, Jenne suggests we should turn the input on with FM6 and then, after 1 second, ramp the gain up. I've added this to ISC_LOCK and loaded it. Hopefully this will remove the transient, and then we can reduce the TRAMP back to 5s.
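As a generic illustration of why ramping the gain after the input is on should help (this is not the actual DHARD filter or front-end code; the filter, signal, and timings are stand-ins): snapping a gain on instantaneously puts a discontinuity on the output, while a ramp scales the signal in smoothly.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 256.0
t = np.arange(int(8 * fs)) / fs
x = np.sin(2 * np.pi * 3 * t)        # stand-in for the error signal

# An illustrative filter (not the real "newcntrl" design)
b, a = butter(2, 10, fs=fs)
y = lfilter(b, a, x)                  # filter running with its input on

step_gain = (t >= 1.0).astype(float)            # gain snaps on at t = 1 s
ramp_gain = np.clip((t - 1.0) / 1.0, 0, 1)      # gain ramps up over 1 s

out_step = step_gain * y
out_ramp = ramp_gain * y

# The stepped gain injects a discontinuity; the ramped gain does not
print(np.max(np.abs(np.diff(out_step))), np.max(np.abs(np.diff(out_ramp))))
```

In the real system the loop closes around this output, so the size of that injected step sets the size of the kick delivered to the optics.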

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:00, Tuesday 25 July 2023 (71693)

Daniel pointed out that FM4, 7, and 9 are turned off when FM6 is turned on. These filters all have 5-second ramp times, so we are probably getting the transient from these filters still ramping down as the gain is ramped up. I've accepted these as OFF in the safe.snap SDF (see attached), so they should be reverted on an SDF revert.

We saw a similar transient in DHARD_P but haven't looked at it yet so we should repeat this process with other filters.

Images attached to this comment
daniel.sigg@LIGO.ORG - 13:58, Tuesday 25 July 2023 (71694)

Most of the individual filters in this module have a 2-5 s ramp time. For whatever reason, FM4/5/7/9 are all on just before DHARD_Y gets engaged. They then get turned off while FM6 is turned on, a short time before the input to the module is turned on. However, there is not enough time for these filters to ramp off, so they are still on during the initial gain ramping. 

camilla.compton@LIGO.ORG - 09:59, Wednesday 26 July 2023 (71730)

This transient is not visible in the optics anymore, see attached. I'll put the TRAMP back to 5s, but ISC_LOCK will need to be reloaded when we are out of observe. Opened and closed FRS 28647.

Brina, Genevieve and Lance are going to look for transients from other filters and at other times, Elenna suggested checking DHARD_P. 

Images attached to this comment
H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 11:15, Tuesday 18 July 2023 - last comment - 11:24, Friday 04 August 2023(71457)
ITMY single bounce, cold OM2, OMC scan with ITM central heating OFF/ON (Sheila, Daniel, Keita)

Summary:

This is a continuation of single bounce beam analysis. In the past we've done OM2 hot/cold measurements for ITMX in alog 70502 and 71100, this time we've done a different thing (OM2 cold, ITM CO2 off/on for ITMY beam).

When ITM CO2 was off, the OMC scan looked like the first attached (for Jennie: about 16:46:35 - 16:47:58 UTC). 20 peak is ~1.0 while 00 peak is ~16 (off the scale in the plot).

With CO2 heating of 1W (started ~16:51:13) , the 20 peak started decreasing but it was much, much slower than we expected.

At around 17:25:00 UTC we had to stop due to other maintenance tasks. The last usable scan before this (for Jennie: 17:23:08-17:24:38 UTC) is shown in the second attachment. 20 peak was still slowly decreasing, but anyway at that moment 20 peak was down to 0.6.

Given this slow time constant, Daniel points out that maybe we should have waited longer after the IFO unlocked before starting the single bounce scan (both for today and for the past measurements). FYI IFO was unlocked at about 15:07 UTC.

I'll do my mode matching simulation as soon as Jennie gets the 20/(20+00) numbers.

What was done:

10W into IMC, ITMY single bounce. ASC-AS_A and AS_B DC centering (DC3 and DC4) were on. RF sidebands were turned off.

Manually locked OMC (OMC guardian auto, asked for prep-omc-scan, then go manual, scan the OMC-PZT2_EXC to find 00 peak, stop scan and adjust the PZT2_OFFSET so we're on the 00 resonance, ask for OMC_LSC_ON, then OMC_Locked and go AUTO, that's what I kind of remember).

Manually refined the alignment using OM3 and OMCS. Disabled the OMC LSC, OMC guardian DOWN, and started scanning. We ended up using 0.01Hz Ramp signal with 110V amplitude (PZT2_OFFSET zero) to make sure to use the full range of the PZT.

OM2 was cold throughout the scan (H1:AWC-OM2_TSAMS_THERMISTOR_1_TEMPERATURE=21.748 to 21.749, H1:AWC-OM2_TSAMS_THERMISTOR_2_TEMPERATURE= 22.149 to 22.147)

TCS was off at first. The first scan (16:45:35-16:47:58) was about 1h 40min after the lock loss.

TCS central heating of 1W was turned on at about 16:51:13.

Daniel restored the RF SBs and brought all settings back.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 11:22, Tuesday 18 July 2023 (71461)

How to turn off RFSBs.

Disconnect the cable for 118MHz on the patch panel at the bottom of the PSL rack (1st picture).

On top of the patch panel there's a 24MHz amplifier, don't turn it off.

On top of the 24MHz thing, there are amplifiers for 9MHz and 45MHz. You will turn off the output of both (2nd picture showing the 45MHz unit with the RF output switch in OFF position).

Images attached to this comment
keita.kawabe@LIGO.ORG - 16:11, Tuesday 18 July 2023 (71480)

If we just believe the TCS front-end simulation, H1:TCS-SIM_ITMY_SUB_DEFOCUS_FULL_SINGLE_PASS_OUTPUT was ~17.05 uD during the last OMC scan before we gave up.

We might be able to use this to distinguish between the two patches in the MM parameter space (update in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=71477) but I'll wait for the OMC scan fitting results.

Images attached to this comment
jennifer.wright@LIGO.ORG - 18:12, Tuesday 25 July 2023 (71716)SQZ

Executive Summary: The mode mis-match with no central heating on the ITM is 8.2%, the mode mis-match with central heating on the ITM is 3.6%.

For the first scan:

T0 = 1373734011

delta T = 87s

OMC scan is shown in the first png image.

Fitted C20/02 peak is shown in the first pdf.

We expect the HOM spacing to be 0.588 MHz as per this entry and DCC T1500060 Table 25.

The mode spacing is 149.388 - 148.796 = 0.592 MHz.

The ratio of second order to zeroth order carrier is (0.575 + 0.853)/(0.575 + 0.853 + 15.90) = 0.082 = 8.2 % mode mis-match

To run the code, check out the /dev git branch of labutils and run the measurement:

python OMCscan_nosidebands3.py 1373734011 87 "Sidebands off, 10W input, cold ITM + OM2" "single bounce" --verbose -m -o 2

and for the split peak fitting:

python fit_two_peaks_no_sidebands3.py

 

For the second scan:

T0 = 1373736206

delta T = 90s

OMC scan is shown in second png image.

Fitted C20/02 peak is shown in the second pdf.

The mode spacing is 149.338 - 148.741 = 0.597 MHz.

The ratio of second order to zeroth order carrier is (0.201 + 0.428)/(0.201 + 0.428 + 17.02) = 0.036 = 3.6 % mode mis-match

Run the following on the same git branch.

python OMCscan_nosidebands4.py 1373736206 90 "Sidebands off, 10W input, hot ITM + cold OM2" "single bounce" --verbose -m -o 2 -p 0.01

and for the split peak fitting:

python fit_two_peaks_no_sidebands4.py

data is in labutils/omc_scan/data/2023-07-18

files in labutils/omc_scan

figures in labutils/omc_scan/figures/2023-07-18
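As a cross-check, the two mis-match numbers follow directly from the fitted peak heights quoted above:

```python
def mismatch(p20, p02, p00):
    """Fraction of carrier power in the second-order modes."""
    return (p20 + p02) / (p20 + p02 + p00)

cold = mismatch(0.575, 0.853, 15.90)   # first scan: no central heating
hot = mismatch(0.201, 0.428, 17.02)    # second scan: 1 W central heating
print(f"{cold:.3f}  {hot:.3f}")        # 0.082 and 0.036
```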

Images attached to this comment
Non-image files attached to this comment
keita.kawabe@LIGO.ORG - 11:24, Friday 04 August 2023 (71951)

Summary:

Incorporated the fit results and updated the plot. Original analysis is in alog 71145.

In the attached, there are two pairs of patches, each pair comprising yellow and lighter blue, that represent the previous measurement (alog 71145, where the ITMX single bounce was used with no TCS, OM2 hot/cold), and two pairs, each comprising greenish blue and darker blue, that represent the measurement done this time (ITMY single bounce, cold OM2, TCS ON/OFF).

Since it's impossible that the beam parameters of ITMX single bounce beam on the OMC are totally different from those of ITMY single bounce, you can just look at the distance between pairs and judge which ones represent the reality. In this case, the patches in the left half plane are the clear winners.

Details and caveats:

Calculation done for the ITMY single bounce is exactly the same as ITMX except that the measured losses are different and the mode actuator is ITMY central TCS instead of OM2.

As for the TCS optical power I  used H1:TCS-SIM_ITMY_SUB_DEFOCUS_FULL_SINGLE_PASS_OUTPUT~17uD for the central heating (zero for no heating). I simply doubled the number for double-pass effect. If this is grossly off the result might look different.

Since the 1st order HOM power was not negligible in the ITMY single bounce scan, as a first order approximation I used P2/(P0+P1+P2) as the measured mode matching loss, where P0, P1 and P2 are the powers of the 00, 1st order and 2nd order modes (for the 2nd order mode, 20 and 02 were resolved by the fit code). I've done this to the ITMX single bounce scan too, just for consistency.

If the model is perfect and has everything, the difference between yellow "X, OM2" and greenish-blue "Y, OM2 Cold/TCS OFF" should be explained by the difference in the ITM ROC, substrate lensing/heating including the TCS (and IFO heating prior to lock loss, since we haven't waited for hours and hours after the IFO was unlocked). It would be interesting to see if ITM difference will make the plot look any different.

However, the model doesn't have ITMX and ITMY, it's just a single ITM at the average location. Though it's easy to implement that feature in principle, I have a suspicion that the numbers used for ITM substrate lens effect in the past could be off, and I've contacted GariLynn. Wait for the conclusion of that discussion.

A big caveat is that you cannot quickly draw conclusions about the full IFO mode matching from this. At the very least, you have to take into account that the arm mode is primarily determined by the HR surfaces, and that the carrier coming to the OMC from inside the arms only experiences the ITM lensing once (-ish).

Another big caveat is that the ADC was railing for the 00 mode peak; look at the bottom of the 2nd attachment, where H1:OMC-DCPD_B_STAT_MIN=-(2^19). It's not as bad as the finesse measurement (alog 71888) since the scan was slower, but if we want better data we need to redo it with lower power or without the x10 gain.

Last attachment shows what happens when you change OM2 (left, 1 step in the plot = maximum range of the T-SAMS) or ITM heating (right, 1 step = 10uD single pass).

Images attached to this comment
H1 DetChar (DetChar)
ansel.neunzert@LIGO.ORG - posted 10:45, Thursday 06 July 2023 - last comment - 09:12, Wednesday 26 July 2023(71108)
1.6611 Hz comb re-appeared June 27

Perhaps unsurprisingly given its previous history, the strong 1.6611 Hz comb that disappeared (alog 69791) in late May has resurfaced. It shows up clearly in Fscans; I did some additional digging and it looks like the first traces appear on June 27th in the 12:00-14:00 UTC range. This corresponds in time with some of the work described in alog 70849, but OM2 heater changes don't account for the previous disappearance of the comb; Sheila confirms that heater wasn't on earlier in May. So it's still not clear what's going on.

Comments related to this report
ansel.neunzert@LIGO.ORG - 14:31, Friday 07 July 2023 (71144)

Update: it's coherent with H1_PEM-EX_VMON_ETMX_ESDPOWER48_DQ and H1_PEM-EX_VMON_ETMX_ESDPOWER18_DQ, and *not* with CS or EY VMON channels.

(Last time we tried to hunt this comb down, I think we didn't have high resolution coherence plots generated to high enough frequencies for these channels.)

Plots attached. The gray dots are harmonics of a separate 99.9989 Hz comb.
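The coherence check used here can be sketched with scipy on synthetic data (the comb amplitudes, sample rate, and noise levels are invented; the real analysis runs on fetched channel data at much higher rates):

```python
import numpy as np
from scipy.signal import coherence

fs = 512.0
t = np.arange(int(256 * fs)) / fs
rng = np.random.default_rng(3)

# A 1.6611 Hz comb present in a witness channel and, weakly, in "strain"
comb = sum(np.sin(2 * np.pi * n * 1.6611 * t + n) for n in range(1, 6))
witness = comb + 0.5 * rng.normal(size=t.size)   # e.g. an ESD VMON channel
strain = 0.1 * comb + rng.normal(size=t.size)

f, coh = coherence(witness, strain, fs=fs, nperseg=int(64 * fs))
for n in range(1, 6):
    i = np.argmin(np.abs(f - n * 1.6611))
    print(f"{f[i]:.4f} Hz  coherence {coh[i]:.2f}")  # high at each harmonic
```

Long averages (nperseg of a minute or more) are what make narrow comb teeth stand out above the broadband coherence floor.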

Images attached to this comment
evan.goetz@LIGO.ORG - 12:34, Wednesday 19 July 2023 (71507)DetChar
It looks like the behaviour of this comb changed again on July 13, shifting slightly in frequency, before then disappearing again on July 14. It is as yet unclear what caused the changes.

The attached weekly average Fscan from July 12-19 shows these changes, especially around 280 Hz.
Images attached to this comment
evan.goetz@LIGO.ORG - 09:12, Wednesday 26 July 2023 (71726)
This comb seems to reappear between 7:30 and 9:00 UTC on July 19, 2023. Hopefully this time range can point to something that specifically changes. See attached daily Fscan image
Images attached to this comment