H1 SUS
gabriele.vajente@LIGO.ORG - posted 15:18, Wednesday 09 August 2023 - last comment - 08:19, Thursday 10 August 2023(72106)
Reducing some SR2 and SRM damping loop gains

Reducing the SR2 M1 DAMP P gain from -0.5 to -0.2 and the SRM M1 DAMP L gain from -0.5 to -0.25 did reduce the noise in SRCL and DARM between 3 and 8 Hz.

The main improvement was obtained by reducing the SR2 damping gain. When reducing the SRM L gain, the 1.3 Hz line appears to get larger. It's worth taking a look at the SRM damping loop and its interaction with the ASC and LSC loops to try to track down the 1.3 Hz peak.

So SRCL and DARM are limited by SR2 P damping noise between 3 and 8 Hz. This change reduced DARM RMS by 20%.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:23, Wednesday 09 August 2023 (72107)OpsInfo

I SDFed the SR2 P gain to -0.2 in observe.snap. I tried to update this in the safe.snap file by changing to the "SDF to safe" guardian state, however, it did not show an SDF diff. This is something to keep an eye out for just in case it gets reverted in safe. Tagging OpsInfo so operators are aware.

ryan.crouch@LIGO.ORG - 16:50, Wednesday 09 August 2023 (72115)

Confirmed during this relock that the SR2 M1 DAMP P gain  was set to -0.2 in both safe and observe, and SRM M1 DAMP L gain was set to -0.5 in both safe and observe.

ryan.short@LIGO.ORG - 19:05, Wednesday 09 August 2023 (72119)OpsInfo

The SR2 SAFE and OBSERVE tables are the same, as seen in this table of SDF_REVERT models; alog 69140. As far as I know, no changes have been made since this table was created, except for Daniel's comment.

Tagging OpsInfo to remind people of its existence.

gabriele.vajente@LIGO.ORG - 08:19, Thursday 10 August 2023 (72126)

After the SR2_P gain reduction, it looks like the SRCL RMS between 2 and 8 Hz is now equally limited by SR2_M1_DAMP_P and SR3_M1_DAMP_L. Below 2 Hz, and in particular at the 1.3 Hz peak, SRCL is coherent with SRM_M1_DAMP_L; however, that could be recoil from the SRCL lock to M1.

H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 14:41, Wednesday 09 August 2023 - last comment - 17:29, Monday 14 August 2023(72108)
102 Hz Feature is NOT a Calibration Line Issue; Regardless, Calibration Systematic Error Monitor Line MOVED from 102.13 to 104.23 Hz
J. Kissel, A. Neunzert, E. Goetz, V. Bossilkov

As we continue the investigation into why the noise in the region around 102.13 Hz gets SUPER loud at the beginning of nominal low noise segments, and why the calibration line seems to be reporting a huge amount of systematic error (see investigations in LHO:72064), Ansel has found that some new electronics noise appeared in the end station as of Saturday Aug 5 2023 around 05:30a PT at a frequency extremely, and unluckily, close to the 102.13000 Hz calibration line -- something at 102.12833 Hz; see LHO:72105.

While we haven't yet ID'd the cause, and thus don't have a solution -- we can still change the calibration line frequency to move it away from this feature, in hopes that the two are not beating together terribly like they are now.

I've changed the calibration line frequency to 104.23 Hz as of 21:13 UTC on Aug 09 2023.

This avoids 
    (a) LLO's similar frequency at 101.63 Hz, and 
    (b) because the former frequency, 102.13 Hz, was near the upper edge of the 9.33 Hz wide [92.88, 102.21) Hz pulsar spin-down "non-vetoed" band, this new frequency, 104.23 Hz, skips up to the next 18.55 Hz wide "non-vetoed" band, [104.22, 122.77) Hz, according to LHO:68139.
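For reference, a minimal sketch of that selection check (not the actual tool used to pick the frequency): it tests whether a candidate line frequency stays clear of the LLO line and lands inside one of the non-vetoed bands quoted above. The 0.5 Hz minimum separation is an illustrative assumption, not a documented requirement.

# Sketch only: the band edges and LLO line frequency are taken from the text above.
LLO_LINE_HZ = 101.63
NON_VETOED_BANDS_HZ = [(92.88, 102.21), (104.22, 122.77)]   # [lower, upper) in Hz

def frequency_ok(candidate_hz, min_separation_hz=0.5):
    """True if the candidate avoids the LLO line and sits in a non-vetoed band."""
    if abs(candidate_hz - LLO_LINE_HZ) < min_separation_hz:
        return False
    return any(lo <= candidate_hz < hi for lo, hi in NON_VETOED_BANDS_HZ)

print(frequency_ok(102.13))   # True, but right at the upper edge of the first band
print(frequency_ok(104.23))   # True -- the new choice, inside the [104.22, 122.77) Hz band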

Stay tuned 
   - to see if this band-aid fix actually helps, or just spreads out the spacing of the comb, and
   - as we continue to investigate where this thing came from.

Other things of note: 

Since
   - this feature is *not* related to the calibration line itself, 
   - this calibration line is NOT used to generate any time-dependent correction factors, and thus neither the calibration pipeline itself nor the data it produces is affected, 
   - this calibration line is used only to *monitor* the calibration systematic error, 
   - this feature is clearly identified in an auxiliary PEM channel -- and that same channel *doesn't* see the calibration line
we conclude that there *isn't* some large systematic error occurring; it's just the calculation that's getting spoiled and misreporting large systematic error.
Thus, we make NO plan to do anything further with the calibration or systematic error estimate side of things.

We anticipate that this now falls squarely on the noise subtraction pipeline's shoulders. Given that this 102.12833 Hz noise has a clear witness channel, and that the noise creates non-linear nastiness, I expect this will be an excellent candidate for offline non-linear / NONSENS cleaning.


Here's the latest list of calibration lines:
Freq (Hz)   Actuator                   Purpose                      Channel that defines Freq             Changes Since Last Update (LHO:69736)     
15.6        ETMX UIM (L1) SUS          \kappa_UIM excitation        H1:SUS-ETMY_L1_CAL_LINE_FREQ          No change
16.4        ETMX PUM (L2) SUS          \kappa_PUM excitation        H1:SUS-ETMY_L2_CAL_LINE_FREQ          No change
17.1        PCALY                      actuator kappa reference     H1:CAL-PCALY_PCALOSC1_OSC_FREQ        No change
17.6        ETMX TST (L3) SUS          \kappa_TST excitation        H1:SUS-ETMY_L3_CAL_LINE_FREQ          No change
33.43       PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC4_OSC_FREQ        No change
53.67         |                            |                        H1:CAL-PCALX_PCALOSC5_OSC_FREQ        No change
77.73         |                            |                        H1:CAL-PCALX_PCALOSC6_OSC_FREQ        No change
104.23        |                            |                        H1:CAL-PCALX_PCALOSC7_OSC_FREQ        FREQUENCY CHANGE; THIS ALOG
283.91        V                            V                        H1:CAL-PCALX_PCALOSC8_OSC_FREQ        No change
284.01      PCALY                      PCALXY comparison            H1:CAL-PCALY_PCALOSC4_OSC_FREQ        No change
410.3       PCALY                      f_cc and kappa_C             H1:CAL-PCALY_PCALOSC2_OSC_FREQ        No change
1083.7      PCALY                      f_cc and kappa_C monitor     H1:CAL-PCALY_PCALOSC3_OSC_FREQ        No change
n*500+1.3   PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC1_OSC_FREQ        No change (n=[2,3,4,5,6,7,8])
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:29, Monday 14 August 2023 (72207)
Just a post-facto proof that this calibration line frequency change from 102.13 to 104.23 Hz drastically improved the symptom that the response function systematic error, as computed by this ~100 Hz line, was huge for hours while the actual 102.128333 Hz line was loud.

The attached screenshot shows two days before and two days after the change (again, on 2023-08-09 at 21:13 UTC). The green trace shows that there is no longer erroneously reported large error, as computed by the 102.13 and then 104.23 Hz lines, at the beginning of nominal low noise segments.
Images attached to this comment
H1 ISC (AWC, ISC)
keita.kawabe@LIGO.ORG - posted 14:38, Wednesday 09 August 2023 - last comment - 14:41, Wednesday 09 August 2023(72061)
OM2 heater voltage: ~1.66Hz comb observed between Beckhoff DAC outputs and the driver board ground.

This is a quick investigation of the 1.6611Hz comb in h(t) which might be related to the OM2 heater (alogs 71108 and 71801).

During Tuesday maintenance, I measured the OM2 heater drive at the driver chassis (https://dcc.ligo.org/LIGO-D2000211) using a breakout board, while both the Beckhoff-side cable and the vacuum-side cable were attached to the chassis and the driver was heating OM2 (H1:AWC-OM2_TSAMS_DRV_VSET=21.448W). How the chassis, Beckhoff module and in-vac OM2 are connected is shown in https://dcc.ligo.org/LIGO-E2100049.

Voltages I looked at are:

  1. Across the positive and negative pin of the Beckhoff drive output that comes into the driver (pin 6-19 on the back DB25).
  2. Beckhoff positive drive and the driver ground (pin 6-13 on the back DB25).
  3. Beckhoff negative drive and the driver ground (pin 19-13 on the back DB25).
  4. Across the driver positive and negative output (pin 1-14 on the front DB25).
  5. Driver positive output and the driver ground (pin 14-13 on the front DB25).
  6. Driver negative output and the driver ground (pin 1-13 on the front DB25).

I used SR785 (single BNC input, floating mode, AC coupled) for all of the above, and in addition used a scope to look at 2. and 3.

1., 4., 5. and 6. didn't look suspicious at all at low frequency, but 2. and 3. showed a forest of peaks comprising ~1.66Hz and its odd-order harmonics. See the first picture attached; 1.66Hz was derived from 18.25/11, not from the fundamental peak, as the FFT resolution was not fine enough for that. (FWIW, the peak frequencies read off the SR785 are 1.625, 5.0, 8.3125, 11.625, 14.9375 and 18.25Hz.) Note that the amplitude is not huge (see the y-axis scale).
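As a cross-check of that 18.25/11 estimate, here is a short sketch assuming the peaks are odd harmonics (n = 1, 3, 5, 7, 9, 11) of a single fundamental; it also does a simple least-squares fit through the origin using all six readings.

import numpy as np

peaks_hz = np.array([1.625, 5.0, 8.3125, 11.625, 14.9375, 18.25])  # read off the SR785
orders   = np.array([1, 3, 5, 7, 9, 11])                           # assumed odd harmonics

print(peaks_hz[-1] / orders[-1])                          # ~1.659 Hz, the 18.25/11 estimate
print(np.dot(orders, peaks_hz) / np.dot(orders, orders))  # ~1.660 Hz, least-squares estimate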

For 2. and 3., if you look at a wider frequency range, it was apparent that the forest extended to much higher frequency (at least ~100Hz before it starts to get smaller), and that something was glitching at O(1Hz) (attached video). No, the glitch is not the SR785 auto-adjusting the frontend gain; it's in the signal.

Each glitch was pretty fast (2nd picture, see t ~ -24ms from the center, where there was a glitch of ~2ms or so width), and it's not clear if it's related to the comb. I could not see 1.66Hz periodicity using the scope, but, as I already wrote, the comb is not huge in amplitude.

Anyway, these all suggest that the comb is between the Beckhoff ground and the driver ground (i.e. it's common mode for the Beckhoff positive and negative outputs). The differential receiver inside the driver seems to have rejected it, so we're not pounding the heater element of OM2 with this comb (3rd picture; this corresponds to measurement 4.).

I cannot say at this point if this is the cause of the comb in h(t) but it certainly looks as if Beckhoff is somehow related.

I haven't repeated this with zero drive voltage from Beckhoff, though probably I should have. If the comb is not present in 2. and 3. with zero drive voltage from Beckhoff (I doubt that, but it's still possible), we could use an adjustable power supply as an alternative.

The heater driver chassis is mounted on the same rack as the DCPD interface. Maybe we could relocate the driver just in case Beckhoff ground is somehow coupling into the rest of the chassis on the same rack.

It would also be useful to know if something happened to the Beckhoff world at the same time the comb appeared in h(t) apart from that we started heating OM2.

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 14:41, Wednesday 09 August 2023 (72109)

If you're wondering about the DC voltage between Beckhoff and the driver: pin 6 and pin 19 on the back DB25 (Beckhoff P and N outputs) relative to pin 13 (the driver ground) were about +3.2V and -3.61V, respectively. If there were no ground difference these should have been +3.4V and -3.4V.
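For what it's worth, the implied offset follows directly from those two numbers; a trivial sketch of the arithmetic:

v_pos, v_neg = 3.2, -3.61      # pins 6 and 19 relative to pin 13 (driver ground) [V]

print(v_pos - v_neg)           # 6.81 V total differential drive
print((v_pos - v_neg) / 2.0)   # ~3.4 V expected on each leg with no ground difference
print((v_pos + v_neg) / 2.0)   # ~-0.2 V apparent Beckhoff-to-driver ground offset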

H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 14:26, Wednesday 09 August 2023 (72096)
Low-frequency Thermalization lines are OFF as of 2023-08-09 16:40 UTC
J. Kissel

As a result of recent findings that the collection of lines used to probe the low frequency sensing function at 
    PCALY_EXC  8.925, 11.575, 15.275, 24.5 [Hz]
    DARM_EXC   8.825, 11.475, 15.175, 24.4 [Hz]

is causing excess non-linearities when looking over long time scales --- see LHO:71964 --- I've turned OFF these calibration lines as of 2023-08-09 16:40 UTC. We were already out of observing for PEM injections anyway, and Robert relinquished a moment for me to quickly turn these off.

I've accepted the changes in both the PCALY (h1caley) and OMC (h1omc) front-end model's SDFs against both the safe and OBSERVE.snaps such that this sticks.



Here's the latest list of calibration lines:
Freq (Hz)   Actuator                   Purpose                      Channel that defines Freq             Changes Since Last Update (LHO:69736)     
8.825       DARM (via ETMX L1,L2,L3)   Live DARM OLGTFs             H1:LSC-DARMOSC1_OSC_FREQ              Turned OFF; THIS aLOG
8.925       PCALY                      Live Sensing Function        H1:CAL-PCALY_PCALOSC5_OSC_FREQ        Turned OFF; THIS aLOG
11.475      DARM (via ETMX L1,L2,L3)   Live DARM OLGTFs             H1:LSC-DARMOSC2_OSC_FREQ              Turned OFF; THIS aLOG
11.575      PCALY                      Live Sensing Function        H1:CAL-PCALY_PCALOSC6_OSC_FREQ        Turned OFF; THIS aLOG
15.175      DARM (via ETMX L1,L2,L3)   Live DARM OLGTFs             H1:LSC-DARMOSC3_OSC_FREQ              Turned OFF; THIS aLOG
15.275      PCALY                      Live Sensing Function        H1:CAL-PCALY_PCALOSC7_OSC_FREQ        Turned OFF; THIS aLOG
24.400      DARM (via ETMX L1,L2,L3)   Live DARM OLGTFs             H1:LSC-DARMOSC4_OSC_FREQ              Turned OFF; THIS aLOG
24.500      PCALY                      Live Sensing Function        H1:CAL-PCALY_PCALOSC8_OSC_FREQ        Turned OFF; THIS aLOG
15.6        ETMX UIM (L1) SUS          \kappa_UIM excitation        H1:SUS-ETMY_L1_CAL_LINE_FREQ          No change
16.4        ETMX PUM (L2) SUS          \kappa_PUM excitation        H1:SUS-ETMY_L2_CAL_LINE_FREQ          No change
17.1        PCALY                      actuator kappa reference     H1:CAL-PCALY_PCALOSC1_OSC_FREQ        No change
17.6        ETMX TST (L3) SUS          \kappa_TST excitation        H1:SUS-ETMY_L3_CAL_LINE_FREQ          No change
33.43       PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC4_OSC_FREQ        No change
53.67         |                            |                        H1:CAL-PCALX_PCALOSC5_OSC_FREQ        No change
77.73         |                            |                        H1:CAL-PCALX_PCALOSC6_OSC_FREQ        No change
102.13        |                            |                        H1:CAL-PCALX_PCALOSC7_OSC_FREQ        No change
283.91        V                            V                        H1:CAL-PCALX_PCALOSC8_OSC_FREQ        No change
284.01      PCALY                      PCALXY comparison            H1:CAL-PCALY_PCALOSC4_OSC_FREQ        Off briefly between 2023-08-01 22:02 - 22:11 UTC, back on as of 22:16 UTC
410.3       PCALY                      f_cc and kappa_C             H1:CAL-PCALY_PCALOSC2_OSC_FREQ        No Change
1083.7      PCALY                      f_cc and kappa_C monitor     H1:CAL-PCALY_PCALOSC3_OSC_FREQ        No Change
n*500+1.3   PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC1_OSC_FREQ        No Change (n=[2,3,4,5,6,7,8])
Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:46, Wednesday 09 August 2023 (72101)
Wed CP1 Fill

Wed Aug 09 10:21:40 2023 INFO: Fill completed in 21min 35secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 10:00, Wednesday 09 August 2023 - last comment - 10:15, Wednesday 09 August 2023(72097)
H1 Update

Fire Alarm

Comments related to this report
anthony.sanchez@LIGO.ORG - 10:15, Wednesday 09 August 2023 (72099)Lockloss

Fire alarm was a Fire Drill.
While the fire alarm was going off, H1 lost lock.
H1 started to relock itself during the drill.

Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi/event/1375635627

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 08:56, Wednesday 09 August 2023 (72093)
Early Departure from OBSERVING for Commissioning

Since L1 is not OBSERVING due to logging activity,
H1 has intentionally dropped out of OBSERVING early for PEM injections and other commissioning.
 

H1 General (SQZ)
oli.patane@LIGO.ORG - posted 17:07, Tuesday 08 August 2023 - last comment - 10:36, Wednesday 09 August 2023(72083)
Out of Observing due to SQZ FC-IR Unlocked

23:42:12UTC Pushed out of Observing by SQZ_MANAGER

We got multiple SDF Diffs appearing (attachment1 and 2), but they kept changing so the ones in the attached screenshot were the earliest ones we could get. Attachment3 shows the SQZ_MANAGER log from that time and Attachment4 shows the SQZ_FC log from that time.

Naoki and Vicky were able to relock the squeezer by manually moving the beam spot, but are running some tests before we go back into Observing.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 17:16, Tuesday 08 August 2023 (72084)OpsInfo, SQZ

Naoki, Vicky

Re-aligning the FC2 P/Y sliders manually was able to bring the squeezer back. Steps taken just now, which seemed to work, are below (tagging OpsInfo). Screenshot of some relevant MEDMs attached.

  • Paused SQZ_FC Guardian manually
  • sitemap > SQZ > SQZ Overview > FC SERVO (medms)
    --> manually locked FC VCO GREEN by engaging (closing) the servo (red switch) on the common mode board. Can also increase common gain slider from e.g. 5 -->15/20 if alignment has trouble holding. (GRD will reset everything, so adjust gains however to hold a lock)
    --> could see a weak and flashing TEM00 spot on FC GREEN TRANS camera, so adjusted FC2 P/Y sliders to maximize FC GREEN TRANS spot on the camera. This also maximized FC GREEN transmission flashes, like on H1:SQZ-FC_TRANS_C_LF_OUTPUT.
  • Trends screenshot shows trends when squeezer dropped. FC2 was aligned, and you can see flashes on the bright green FC transmission PD (H1:SQZ-FC_TRANS_C_LF_OUTPUT) increase. For alignment, you can use green transmission flashes on the PD, or watch the spot on the camera, which got brighter to what you see in the screenshot (note also the location of the beam spot on the camera; this is where the cavity should be). 
  • Un-paused SQZ-FC guardian. Then from SQZ_MANAGER, re-selected FREQ_DEP_SQZ. The squeezer went back fine after this.
  • Accepted FC2 SUS SDF diffs after squeezer was back to FDS.

Will investigate why the alignment had to be manually walked (we haven't done this in 1-2 months; ASC typically takes care of FC alignment fine). Likely a similar issue to the one Corey had last night, LHO:72050; if this happens again, we can try aligning FC2 in this way.

Images attached to this comment
oli.patane@LIGO.ORG - 17:08, Tuesday 08 August 2023 (72085)

00:05:12 Returned to Observing

naoki.aritomi@LIGO.ORG - 10:36, Wednesday 09 August 2023 (72086)

We found that SQZ ASC has been OFF since 6 days ago (first attached figure). We took SQZ data 6 days ago in alog 71902. During this measurement, we turned off SQZ ASC for alignment of anti-squeezing. We changed the SQZ ASC flag in sqzparams to False and set it back to True after the measurement, but we may have forgotten to reload the SQZ_MANAGER guardian. We asked Oli to reload the SQZ_MANAGER guardian when a lockloss happens. We went to commissioning, manually turned on the SQZ ASC switch, and went back to observing within a minute.

We also found that the SQZ ASC switch (H1:SQZ-ASC_WFS_SWITCH) and the FC ASC switch (H1:SQZ-FC_ASC_SWITCH) are not monitored. We monitored them as shown in the second and third attached figures.

Images attached to this comment
H1 DetChar (CAL, DetChar)
derek.davis@LIGO.ORG - posted 15:01, Tuesday 08 August 2023 - last comment - 21:49, Tuesday 29 August 2023(72064)
Excess noise near 102.13 Hz calibration line

Benoit, Ansel, Derek

Benoit noticed that for recent locks, the 102.13 Hz calibration line is much louder than typical for the first few hours of the lock. An example of this behavior is shown in the attached spectrogram of H1 strain data on August 5 - this is the first day this behavior appeared. Ansel noted that this feature includes a comb-like structure around the line that is only present in the H1:GDS-CALIB_STRAIN_NOLINES channel and not H1:GDS-CALIB_STRAIN (see spectra for CALIB_STRAIN and CALIB_STRAIN_NOLINES on Aug 5). This issue is also visible in the PCAL trends for the 102.13 Hz line. 

We are not sure if the excess noise near 102.13 Hz is from the calibration line itself or another noise source that is near the line. However, the behavior has been present for every lock since 12:30 UTC on August 5 2023. 

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:21, Wednesday 09 August 2023 (72094)CAL
FYI, 
$ gpstime Aug 05 2023 12:30 UTC
    PDT: 2023-08-05 05:30:00.000000 PDT
    UTC: 2023-08-05 12:30:00.000000 UTC
    GPS: 1375273818.000000
so... this behavior seems to have started at 5:30a local time on a Saturday. It is therefore *very* unlikely that the start of this issue is intentional / driven by a human change.

The investigation continues....
making sure to tag CAL.
jeffrey.kissel@LIGO.ORG - 09:42, Wednesday 09 August 2023 (72095)
Other facts and recent events:
- Attached are 2 screenshots that show the actual *digital* excitation is not changing with time in any way.
    :: 2023-08-08_H1PCALEX_OSC7_102p13Hz_Line_3mo_trend.png shows the EPICS channel version of the output of the specific oscillator --- PCALX's OSC7, which drives the 102.13 Hz line. The minute trend shows the max, min, and mean of the output, and there's no change in amplitude.
    :: 2023-08-08_H1PCALEX_EXC_SUM_3mo_trend.png shows a trend of the total excitation sum from PCAL X. This also shows *no* change in time in amplitude.

Both trends show the Aug 02 2023 change-in-amplitude kerfuffle I caused, which Corey found and a bit later rectified -- see LHO:71894 and subsequent comments -- but that was done, over with, and solved, definitely by Aug 03 2023 UTC, and is unrelated to the start of this problem.

It's also well after I installed new oscillators and rebooted the PCALX, PCALY, and OMC models on Aug 01 2023 (see LHO:71881).
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:44, Wednesday 09 August 2023 (72100)
The front-end version of the calibration's systematic error at 102.13 Hz also shows the long, time-dependent issue -- this will allow us to trend the issue against other channels

Folks in the calibration group have found that the online monitoring system for the
    -  overall DARM response function systematic error
    - (absolute reference) / (Calibrated Data Product) [m/m] 
    - ( \eta_R ) ^ (-1) 
    - (C / 1+G)_pcal / (C / 1+G)_strain
    - CAL-DELTAL_REF_PCAL_DQ / GDS-CALIB_STRAIN
(all different ways of saying the same thing; see T1900169) in the calibration at each PCAL calibration line frequency -- the "grafana" pages -- is showing *huge* amounts of systematic error during these times when the amplitude of the line is super loud.

Though this metric is super useful because it's dreadfully obvious that things are going wrong -- this metric is not in any normal frame structure, so you can't compare it against other channels to find out what's causing the systematic error.

However -- remember -- we commissioned a front-end version of this monitoring during ER15 -- see LHO:69285.

That means the channels 
    H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ    << the frequency of the monitor
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM       << the magnitude of the systematic error
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG       << the phase of the systematic error 

tell you (what's supposed to be***) equivalent information.

*** One might say that "what's supposed to be" is the same as "roughly equivalent" for the following reasons: 
    (1) because we're human, one system is displaying the systematic error \eta_R, and the other is displaying the inverse ( \eta_R ) ^ (-1); 
    (2) because this is early days for the front-end system, it uses the "less complete" calibrated channel CAL-DELTAL_EXTERNAL_DQ rather than the "fully correct" channel GDS-CALIB_STRAIN.

But because the problem is so dreadfully obvious in these metrics, even though they're only *roughly* equivalent, you can see the same thing.
In the attached screenshot, I show both metrics for the most recent observation stretch, between 10:15 and 14:00 UTC on 2023-Aug-09.

Let's use this front-end metric to narrow down the problem via trending.
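As one hedged example of such trending (data access via gwpy/NDS is assumed; the times and the witness channel below are placeholders, not a prescription), something like the following can pull the front-end systematic error channels alongside a suspect witness channel for comparison:

from gwpy.timeseries import TimeSeriesDict

chans = [
    "H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ",
    "H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM",
    "H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG",
    "H1:PEM-EX_MAG_EBAY_SEIRACK_Y_DQ",   # example witness channel to trend against
]
data = TimeSeriesDict.get(chans, "2023-08-09 10:15", "2023-08-09 14:00")
plot = data["H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM"].plot()
plot.savefig("line8_syserror_mag.png")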
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:58, Wednesday 09 August 2023 (72102)CAL, DetChar
There appears to be no change in the PCALX analog excitation monitors either.

Attached is a trend of some key channels in the optical follower servo -- the analog feedback system that serves as intensity stabilization and excitation power linearization for the PCAL's laser light that gets transmitted to the test mass -- the actuator of which is an acousto-optic modulator (an AOM). There seems to be no major differences in the max, min, and mean of these signals before vs. after these problems started on Aug 05 2023.

H1:CAL-PCALX_OFS_PD_OUT_DQ
H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUT_DQ
Images attached to this comment
madeline.wade@LIGO.ORG - 11:34, Wednesday 09 August 2023 (72104)

I believe this is caused by the presence of another line very close to the 102.13 Hz PCAL line.  This second line is present at the start of a lock stretch but seems to go away as the lock stretch continues.  I have attached a plot showing a zoom-in on an ASD around 102.1-102.2 Hz right after the start of a lock stretch (orange), where the second peak is evident, and well into a lock stretch (blue), where the PCAL line is still present but the second peak right below it in frequency is gone.  This ASD is computed using an hour of data for each curve, so we can get the resolution needed to separate these two peaks.

I don't know the origin of this second line.  However, a quick fix to the issue could be moving the PCAL line over by about a Hz.  The second attached plot shows that the spectrum looks pretty clean from 101-102 Hz, so somewhere in there would probably be okay for a new location of the PCAL line.
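To make the resolution requirement explicit: the two peaks are only ~1.7 mHz apart, so the FFT segments must be several hundred seconds long, and an hour-long segment gives ~0.3 mHz bins. A synthetic-data sketch (standing in for the real strain channel, with an assumed sample rate):

import numpy as np
from scipy.signal import welch

fs, dur = 512.0, 3600.0                               # assumed rate and span, not the real channel
t = np.arange(int(fs * dur)) / fs
x = np.sin(2*np.pi*102.13000*t) + np.sin(2*np.pi*102.12833*t)

f, psd = welch(x, fs=fs, nperseg=len(x))              # one 3600 s segment -> ~0.28 mHz bins
band = (f > 102.125) & (f < 102.135)
print(np.sort(f[band][np.argsort(psd[band])[-2:]]))   # both peaks are resolved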

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:47, Wednesday 09 August 2023 (72105)

Since it looks like the additional noise is at 102.12833 Hz, I did a quick check in Fscan data from Aug 5 for channels where there is high coherence with DELTAL_EXTERNAL at 102.12833 but *not* at 102.13000 Hz. This narrows down to just a few channels:

  • H1:PEM-EX_MAG_EBAY_SEIRACK_{Z,Y}_DQ .
  • H1:PEM-EX_ADC_0_09_OUT_DQ
  • H1:ASC-OMC_A_YAW_OUT_DQ. Note that other ASC-OMC channels (Fscan tracks A,B and PIT,YAW) see high coherence at both frequencies.

(lines git issue opened as we work on this.)
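For anyone wanting to reproduce the gist of this check outside the Fscan pipeline, a rough sketch (gwpy/NDS data access is assumed; the times, resample rate, and segment length are placeholders): compute a long-FFT coherence between DELTAL_EXTERNAL and a candidate witness channel, then read it off at the two frequencies.

import numpy as np
from scipy.signal import coherence
from gwpy.timeseries import TimeSeriesDict

data = TimeSeriesDict.get(
    ["H1:CAL-DELTAL_EXTERNAL_DQ", "H1:PEM-EX_MAG_EBAY_SEIRACK_Y_DQ"],
    "2023-08-05 13:00", "2023-08-05 19:00",
)
darm = data["H1:CAL-DELTAL_EXTERNAL_DQ"].resample(2048)
wit = data["H1:PEM-EX_MAG_EBAY_SEIRACK_Y_DQ"].resample(2048)

# 1200 s segments give ~0.8 mHz bins, enough to separate 102.12833 Hz from 102.13000 Hz
f, coh = coherence(darm.value, wit.value, fs=2048, nperseg=2048 * 1200)
for target in (102.12833, 102.13000):
    print(target, coh[np.argmin(np.abs(f - target))])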

jeffrey.kissel@LIGO.ORG - 14:45, Wednesday 09 August 2023 (72110)
As a result of Ansel's discovery, and conversation on the CAL call today -- I've moved the calibration line frequency from 102.13 to 104.23 Hz. See LHO:72108.
derek.davis@LIGO.ORG - 13:28, Friday 11 August 2023 (72157)

This line may have appeared in the previous lock the day before (Aug 4). The daily spectrogram for Aug 4 shows a line near 100 Hz starting at 21:00 UTC. 

Images attached to this comment
elenna.capote@LIGO.ORG - 16:49, Friday 11 August 2023 (72163)

Looking at alogs leading up to the time Derek notes above, I noticed that Gabriele retuned and tested new LSC FF. This change may be related to this new peak. Remembering some issues we had recently where DHARD filter impulses were ringing up violin modes, I checked the new LSC FF filters and how they are engaged in the guardian. Some of them have no ramp time, and the filter bank is turned on immediately along with the filters in the guardian. I have no idea why that would cause a peak at 102 Hz, but I updated those filters to have a 3 second ramp.

oli.patane@LIGO.ORG - 16:51, Friday 11 August 2023 (72164)

Reloaded the H1LSC model to load in Elenna's filter changes

ansel.neunzert@LIGO.ORG - 13:36, Monday 14 August 2023 (72197)

Now that the calibration line has been moved, the comb-like structure at the calibration line frequency is no longer present (checked in the CLEAN channel).

We can also see the shape of the 102.12833 Hz line much more clearly without the overlapping calibration line. I have attached a plot for reference on the width and shape.

Images attached to this comment
camilla.compton@LIGO.ORG - 16:33, Monday 14 August 2023 (72203)ISC

As discussed in today's commissioning meeting, I checked TMSX and ETMX movement for a kick during locking and couldn't see anything suspicious. I did find some increased motion/noise every 8Hz in TMSX 1s into ENGAGE_SOFT_LOOPS, when ISC_LOCK isn't explicitly doing anything; plot attached. However, this noise was present prior to Aug 4th (July 30th attached).

TMS is suspicious, as Betsy found that the TMSs have violin modes at ~103-104Hz.

Jeff draws attention to 38295, showing modes of the quad blade springs above 110Hz, and 24917, showing quad top wire modes above 300Hz.

Elenna notes that with calibration lines off (as we are experimenting with for the current lock) we can still see this 102Hz peak at ISC_LOCK state ENGAGE_ASC_FOR_FULL_IFO. We were mistaken.

Images attached to this comment
elenna.capote@LIGO.ORG - 21:49, Tuesday 29 August 2023 (72544)

To preserve documentation, this problem has now been solved, with more details in 72537, 72319, and 72262.

The cause of this peak was a spurious, narrow, 102 Hz feature in the SRCL feedforward that we didn't catch when the filter was made. This has been fixed, and the cause of the mistake has been documented in the first alog listed above so we hopefully don't repeat this error.

H1 CDS
david.barker@LIGO.ORG - posted 13:35, Tuesday 08 August 2023 - last comment - 08:23, Wednesday 09 August 2023(72067)
CDS Maintenance Summary: Tuesday 8th August 2023

WP11359 Consolidated power control IOC software

Erik:

Erik installed new IOC code for the control of Pulizzi and Tripplite network controlled power switches. No DAQ restart was needed.

WP11358 Add new end station Tripplite EPICS channels to the DAQ

Dave:

I modified H1EPICS_CDSACPWR.ini to add the new end station Tripplite control boxes. I found that this file was very much out of date, and only had the MSR Pulizzi channels. I added the missing seven units to the file. DAQ + EDC restart was needed.

WP11357 Add missing ZM4,5 SUSAUX channels to DAQ

Jeff, Rahul, Fil, Marc, Dave:

Jeff discovered that the ZM4,5 M1 VOLTMON channels were missing from the SUSAUX model. In this case these are read by h1susauxh56. It was found that the existing ZM6 channels were actually reading ZM5's channels. h1susauxh56.mdl was modified to:

Change the ADC channels being read by ZM6 from 20-23 to 24-27

Add ZM4 reading ADC channels 16-19

Add ZM5 reading ADC channels 20-23

DAQ Restart was needed.

DAQ Restart

Dave:

Another messy DAQ restart. The sequence was:

  1. Restart the 1-leg
  2. Restart the EDC to include the power control channels
  3. gds1 needed a second restart
  4. When fw1 was back up and had written its first full frame:
  5. Restart the 0-leg
  6. gds0 needed a second restart
  7. After fw0 had written its first full frame, fw1 spontaneously crashed!

This was an interesting data point: last week I restarted the DAQ in the opposite order, 0-leg then 1-leg, and as fw1 was coming back, fw0 spontaneously crashed, which in that case resulted in some full frame files not being written by either fw.

Could it be that starting the second fw impacts the first fw's disk access speed (perhaps LDAS gap checker switching between file systems)?

As we have found to always be the case, once the errant fw has crashed once, it does not crash again.

 

Comments related to this report
david.barker@LIGO.ORG - 13:45, Tuesday 08 August 2023 (72068)

DAQ missing full frames GPS times (no overlap between fw0 and fw1 lists) (missing because of crash highlighted)

Scanning directory: 13755...
FW0 Missing Frames [1375554944, 1375555008]
FW1 Missing Frames [1375554752, 1375554816, 1375555072, 1375555136, 1375555200]
 

david.barker@LIGO.ORG - 15:01, Tuesday 08 August 2023 (72078)

This morning new filtermodules were added to h1susauxh56 to read out the ZM4,5 quadrants. The RCG starts new filtermodules in an inactive state, namely with INPUT=OFF, OUTPUT=OFF, GAIN=0.0. It can be a bit time consuming to activate the MEDM switches by hand.

I wrote a script to activate new filtermodules, called activate_new_filtermodules. It takes the filtermodule name as its argument.

Here is an example using a h1pemmx filtermodule:

david.barker@opslogin0: activate_new_filtermodule PEM-MX_CHAN_12
H1:PEM-MX_CHAN_12_SW1 => 4
H1:PEM-MX_CHAN_12 => ON: INPUT
H1:PEM-MX_CHAN_12_SW2 => 1024
H1:PEM-MX_CHAN_12 => ON: OUTPUT
H1:PEM-MX_CHAN_12_GAIN => 1.0
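
For reference, here is a sketch of what the script does, reconstructed from the output above (this is not the script itself; it assumes pyepics is available for channel access):

import sys
from epics import caput

def activate_new_filtermodule(fm):
    """fm is the filter module name without the 'H1:' prefix, e.g. PEM-MX_CHAN_12."""
    caput(f"H1:{fm}_SW1", 4)      # 4 -> INPUT switch ON, per the output above
    caput(f"H1:{fm}_SW2", 1024)   # 1024 -> OUTPUT switch ON
    caput(f"H1:{fm}_GAIN", 1.0)   # unity gain

if __name__ == "__main__":
    activate_new_filtermodule(sys.argv[1])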

 

david.barker@LIGO.ORG - 15:12, Tuesday 08 August 2023 (72079)

------------------- DAQ CHANGES: ------------------------

REMOVED:

No channels removed from the DAQ frame

ADDED:

+8 fast channels added (all at 256Hz)

< H1:SUS-ZM4_M1_VOLTMON_LL_OUT_DQ 4 256
< H1:SUS-ZM4_M1_VOLTMON_LR_OUT_DQ 4 256
< H1:SUS-ZM4_M1_VOLTMON_UL_OUT_DQ 4 256
< H1:SUS-ZM4_M1_VOLTMON_UR_OUT_DQ 4 256
< H1:SUS-ZM5_M1_VOLTMON_LL_OUT_DQ 4 256
< H1:SUS-ZM5_M1_VOLTMON_LR_OUT_DQ 4 256
< H1:SUS-ZM5_M1_VOLTMON_UL_OUT_DQ 4 256
< H1:SUS-ZM5_M1_VOLTMON_UR_OUT_DQ 4 256
 

+112 slow channels added

david.barker@LIGO.ORG - 08:23, Wednesday 09 August 2023 (72092)

Tue08Aug2023
LOC TIME HOSTNAME     MODEL/REBOOT
11:31:41 h1susauxh56  h1susauxh56 <<< Correct ZM6, add ZM4 and ZM5


11:32:59 h1daqdc1     [DAQ] <<< 1-leg restart
11:33:10 h1daqfw1     [DAQ]
11:33:11 h1daqnds1    [DAQ]
11:33:11 h1daqtw1     [DAQ]
11:33:19 h1daqgds1    [DAQ]


11:33:53 h1susauxb123 h1edc[DAQ] <<< EDC restart for CDSACPWR


11:34:25 h1daqgds1    [DAQ] <<< 2nd gds1 restart needed


11:36:06 h1daqdc0     [DAQ] <<< 0-leg restart
11:36:17 h1daqfw0     [DAQ]
11:36:17 h1daqtw0     [DAQ]
11:36:18 h1daqnds0    [DAQ]
11:36:25 h1daqgds0    [DAQ]
11:37:14 h1daqgds0    [DAQ] <<< 2nd gds0 restart needed


11:39:10 h1daqfw1     [DAQ] <<< FW1 crash!!
 

H1 DetChar (CAL, DetChar)
ansel.neunzert@LIGO.ORG - posted 13:23, Friday 04 August 2023 - last comment - 10:10, Wednesday 09 August 2023(71964)
CAL_AWG_LINES extra calibration lines create many narrow artifacts below 100 Hz

A set of extra calibration lines were turned on for a 1-week test starting 2023-07-25 (alog 71706). Part of the purpose of this test was to assess any impacts on CW data quality. Unfortunately, it looks like these lines do pollute low frequencies in a way that would be impactful to CW searches if left on.

Figure 1 shows a normalized H1:CAL-DELTAL_EXTERNAL spectrum for the week starting July 26. The gray dots indicate lines which correspond to intermodulation products of the calibration lines. For this plot, I computed a bunch of intermodulation products with up to 3 lines, and integer multiplicative factors for each line as large as +/- 3 as long as the total order was under 5. The plot intersects this list with the results of the automated linefinder.
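A minimal sketch of that enumeration is below (the frequency list is just the four PCALY thermalization lines as an illustrative subset; the actual computation used the full set of extra lines, and sign/coefficient conventions may differ):

from itertools import combinations, product

line_freqs = [8.925, 11.575, 15.275, 24.5]   # example subset of the extra lines [Hz]

def intermod_products(freqs, n_lines=3, max_coeff=3, max_order=5):
    """Frequencies |sum(c_i * f_i)| over 3-line subsets, |c_i| <= 3, total order < 5."""
    out = set()
    for subset in combinations(freqs, n_lines):
        for coeffs in product(range(-max_coeff, max_coeff + 1), repeat=n_lines):
            order = sum(abs(c) for c in coeffs)
            if 0 < order < max_order:
                f = abs(sum(c * x for c, x in zip(coeffs, subset)))
                if f > 0:
                    out.add(round(f, 4))
    return sorted(out)

print(len(intermod_products(line_freqs)))    # number of candidate artifact frequencies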

These lines are new; their appearance corresponds to the extra lines being turned on. Spot checks of daily Fscan spectra confirm this. For comparison, I have attached an equivalent plot for the previous week (figure 2). Interactive versions of the plots can be found here: fig 1 interactive, fig 2 interactive. The product for each marked line can be seen in the hover text in those versions.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:10, Wednesday 09 August 2023 (72098)CAL
Thanks for identifying this! 
These lines have now been turned OFF as of 2023-08-09 16:40 UTC as a result of this study.
H1 CAL
caden.swain@LIGO.ORG - posted 11:47, Tuesday 11 July 2023 - last comment - 14:54, Wednesday 09 August 2023(71117)
Characterization of the Spare OMC DCPD Whitening Chassis - S2300004

C.Swain, L.Dartez, J.Kissel

Finished characterizing the spare whitening chassis, OMC DCPD S2300004.

The last two files are text documents containing updated notes on gathering the transfer function measurement data and noise measurement data of the whitening chassis, respectively. 

Non-image files attached to this report
Comments related to this report
caden.swain@LIGO.ORG - 14:54, Wednesday 09 August 2023 (71763)

Adding more details to OMC DCPD S2300004 Characterization for better accessibility: 


Whitening ON Fit:

Channel Fit Zeros Fit Poles   Fit Gain
DCPDA [0.997] Hz [9.904e+00, 4.401e+04] Hz 435752.409
DCPDB [1.006] Hz [9.994e+00, 4.377e+04] Hz 433177.144

These results align very well with the goal of a [1:10] whitening chassis.

The fit zeros differ from the model by [0.3, 0.6] % for DCPDA and DCPDB, respectively; the fit poles differ by [0.96, 0.06] %.

As all the differences are below 1%, the precision of the results is satisfactory. 
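The quoted percentages follow directly from the fit table; a quick sketch of the comparison against the nominal 1 Hz zero and 10 Hz pole:

fits = {"DCPDA": {"zero": 0.997, "pole": 9.904},
        "DCPDB": {"zero": 1.006, "pole": 9.994}}
nominal = {"zero": 1.0, "pole": 10.0}

for pd, fit in fits.items():
    for key in ("zero", "pole"):
        diff_pct = 100.0 * abs(fit[key] - nominal[key]) / nominal[key]
        print(f"{pd} {key}: {diff_pct:.2f}%")   # 0.30/0.96 % for DCPDA, 0.60/0.06 % for DCPDB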


Noise Measurements: 

Average ASD for Whitening ON: 300 nV/rtHz

Average ASD for Whitening OFF: 30 nV/rtHz

Average ASD SR785 Noise Floor Level: 20 nV/rtHz

Whitening ON and Whitening OFF noise is consistent with OMC DCPD S2300003 noise measurements. Noise Floor is slightly higher than the S2300003 Noise Floor measurements (~20nV/rtHz compared to ~6nV/rtHz). This difference in noise floor measurements could be caused by a difference in measurement setup when gathering S2300003 data compared to S2300004 data.


Overall: 

The measurements of the OMC DCPD S2300004 whitening chassis align well with what is expected from a z:p = [1 : (10, 44e3)] Hz whitening filter, with an average ASD noise floor of 300nV/rtHz in the Whitening ON state.

The SR785 noise floor level is slightly higher than previously recorded for the OMC DCPD S2300003 whitening chassis, but it does not contribute to either the Whitening OFF or Whitening ON measurements, so this difference may effectively be ignored. 

Displaying reports 16681-16700 of 86674.Go to page Start 831 832 833 834 835 836 837 838 839 End