Reports until 09:52, Friday 12 July 2024
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 09:52, Friday 12 July 2024 - last comment - 09:05, Monday 15 July 2024(79073)
Lockloss at 16:44 UTC

16:44 UTC lockloss. PRCL was oscillating at ~3.6 Hz.

Comments related to this report
ryan.crouch@LIGO.ORG - 12:55, Friday 12 July 2024 (79078)

We've lost lock at PREP_DC_READOUT twice in a row, at different points of the OMC locking process. The lockloss tool tags ADS_EXCURSION.

camilla.compton@LIGO.ORG - 09:05, Monday 15 July 2024 (79111)

We turned on a PRCL FF the day before: 79035. But this 3.6 Hz PRCL wobble is normal; it was constant throughout the lock (plot) and present in locks before the feedforward was installed (example).

This lockloss looked very normal: AS_A then the IMC losing lock, as usual (plot).

Images attached to this comment
H1 ISC
ryan.crouch@LIGO.ORG - posted 09:43, Friday 12 July 2024 - last comment - 10:35, Friday 12 July 2024(79069)
Range checks

Ryan C, Sheila D

15:32 to 16:01 UTC we dropped Observing to do some range checks/investigations.

Sheila and I did some range checks following the wiki. Running the coherence check showed high coherence with CHARD, and the SQZer BLRMS did not look as good as in previous locks, particularly in the 10-20, 20-34, and 60-100 Hz bands. Since LLO was down, we decided to drop out of observing to run the SQZ alignment and angle scans, and I concurrently ran the A2L_min script (TJ's alog 78552). After these finished we also took 5 minutes of NO_SQZ time starting at 15:49 UTC, which showed that the extra noise is not coming from the SQZer. We gained a few Mpc from the SQZ scan and A2L_min script.

Coherence comparison before and after SQZ and A2L checks
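
For reference, below is a minimal sketch of the kind of DARM/CHARD coherence check referred to above, assuming gwpy and NDS data access; the channel names are just typical strain/ASC channels and the GPS span is only illustrative, not the exact wiki procedure.

# Sketch of a DARM vs. CHARD coherence check; channel names and GPS span
# are illustrative, not the exact ones used by the low-range wiki procedure.
from gwpy.timeseries import TimeSeriesDict

start, end = 1404833538, 1404835278   # roughly the 15:32-16:01 UTC window
chans = [
    "H1:GDS-CALIB_STRAIN",
    "H1:ASC-CHARD_P_OUT_DQ",
    "H1:ASC-CHARD_Y_OUT_DQ",
]
data = TimeSeriesDict.get(chans, start, end)

fs = 256                               # common sample rate for the comparison
darm = data["H1:GDS-CALIB_STRAIN"].resample(fs)
for name in chans[1:]:
    aux = data[name].resample(fs)
    coh = darm.coherence(aux, fftlength=8, overlap=4)   # ~0.125 Hz resolution
    band = coh.crop(10, 100)           # region where CHARD coherence showed up
    print(f"{name}: peak coherence 10-100 Hz = {band.max().value:.2f}")
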

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 10:19, Friday 12 July 2024 (79074)

Quickly checked temperature.

The LVEA diurnal temperature swing has been larger than usual over the past 4 days or so (e.g. zone 1B 0.33 °C peak-to-peak instead of 0.15), but I don't see a correlation with the range drop.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 10:35, Friday 12 July 2024 (79072)

Attached is a spectrum comparison between a 159 Mpc time from last night and a 149 Mpc time this morning.  The issue is between 30 and 70 Hz, where Ryan's coherence plot doesn't show much.

The second attachment shows that today's poor range time is similar to what happened after Tuesday maintenance (78954). 

We tried the no-squeezing time because we've seen in the past that the squeezer was adding noise around this frequency range (78033, 77969, 77888). Those past times coincided with moves of the spot position on PR2, and we also had the intermittent squeezing problems that we seem not to have this week.  We reverted the PR2 spot moves because of this suspicion (77895, 78012) and the issue seemed to go away, but we thought it might simply be that the intermittent squeezer issue happened to get better.

The third attached screenshot shows similar trends for the times when we moved PR3 in May to move the spot on PR2 (77949); that was a ~3 times larger move than the one we did last week (78878).

Images attached to this comment
H1 ISC
ryan.crouch@LIGO.ORG - posted 08:47, Friday 12 July 2024 - last comment - 09:03, Friday 12 July 2024(79066)
Ran A2L P & Y

Ran the (userapps)/isc/h1/scripts/a2l/a2l_min_multi.py script for all four quads in both P and Y to try to help our range. Minimal improvements; this was done in tandem with the SQZ scans, which together gained us about 2 Mpc.

          Initial    Final     Diff
ETMX P       3.12     3.09    -0.03
ETMX Y       4.79     4.81     0.02
ETMY P       4.48     4.49     0.01
ETMY Y       1.13     1.26     0.13
ITMX P      -1.07    -1.02     0.05
ITMX Y       2.72     2.79     0.07
ITMY P      -0.47    -0.43     0.04
ITMY Y      -2.30    -2.36    -0.06

Comments related to this report
ryan.crouch@LIGO.ORG - 09:03, Friday 12 July 2024 (79068)

SDFed

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 08:11, Friday 12 July 2024 (79063)
Fri CP1 Fill

Fri Jul 12 08:08:03 2024 INFO: Fill completed in 8min 0secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 07:37, Friday 12 July 2024 - last comment - 08:22, Friday 12 July 2024(79060)
OPS Friday day shift start

TITLE: 07/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 08:22, Friday 12 July 2024 (79064)

Running the low-range coherence check, CHARD_P, CHARD_Y, and MICH seem to have high coherence.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 01:09, Friday 12 July 2024 (79058)
Ops Eve Shift End

TITLE: 07/12 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Mostly uneventful evening. After the lockloss, while trying to relock (without having done an IA), DRMI unlocked twice during ENGAGE_DRMI_ASC, with beamsplitter saturations both times and a BS ISI saturation the second time (1st time ndscope, 2nd time ndscope). Not sure what that was about, since after I ran an IA the next locking attempt was fine. After being in Observing for a little bit I decided to adjust SQZ because it was really bad. I had some issues after running it the first time (clicked things out of order, even though that shouldn't matter??), but eventually I was able to get a lot more squeezing and a much better sensitivity at the higher frequencies.
LOG:

23:00 Relocking and at PARK_ALS_VCO
23:35 NOMINAL_LOW_NOISE
23:42 Started running simulines measurement to check if simulines is working
00:05 Simulines done
00:13 Observing

00:44 Earthquake mode activated due to EQ in El Salvador
01:04 Seismic to CALM

04:32 Lockloss
Relocking
    - 17 seconds into ENGAGE_DRMI_ASC, BS saturated and then LL
    - BS saturation twice in ENGAGE_DRMI_ASC, then ISI BS saturation, then LL
05:08 Started an initial alignment
05:27 IA done, relocking
06:21 NOMINAL_LOW_NOISE
06:23 Observing

06:38 Out of Observing to try and make sqz better because it's really bad
07:24 Observing                                                                                                            

Start Time  System  Name               Location  Laser_Haz  Task         Time End
23:10       PCAL    Rick, Shango, Dan  PCAL Lab  y(local)   In PCAL Lab  23:54
Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:33, Thursday 11 July 2024 - last comment - 23:24, Thursday 11 July 2024(79056)
Lockloss

Lockloss @ 07/12 04:32 UTC

Comments related to this report
oli.patane@LIGO.ORG - 23:24, Thursday 11 July 2024 (79057)

06:23 UTC Observing

LHO VE
david.barker@LIGO.ORG - posted 20:23, Thursday 11 July 2024 (79054)
Thu CP1 Fill

Thu Jul 11 08:11:57 2024 INFO: Fill completed in 11min 53secs

late entry from this morning

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 20:06, Thursday 11 July 2024 (79053)
Ops Eve Midshift Status

Observing at 157Mpc and have been locked for 3.5  hours. We rode out a 5.2 earthquake from El Salvador earlier. Wind is low and going down.

X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 13:38, Thursday 11 July 2024 - last comment - 21:56, Thursday 11 July 2024(79032)
BBSS Transfer Functions and First Look Observations

Ibrahim, Oli

Attached are the most recent (07-10-2024) BBSS transfer functions, taken after the latest RAL visit and rebuild. The Diaggui screenshots show the first round of measurements from 01-05-2024 as a reference. The PDF shows these results with respect to expectations from the dynamical model. Here is what we think so far:

Thoughts:

The nicest-sounding conclusion here is that something is wrong with the F3 OSEM, because it is the only OSEM and/or flag involved in L, P, and Y (the less coherent measurements) but not in the others. F3 fluctuates and reacts much more erratically than the others, and in Y the F3 OSEM carries a greater share of the actuation than in P and a higher magnitude than in L, so if something were wrong with F3 we would see it loudest in Y. This is exactly where we see the loudest ring-up. I will take spectra and upload them in another alog. This would account for all issues except the F1, LF, and RT OSEM drift, which I will plot and share in a separate alog.
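
As a placeholder for those spectra, here is a minimal sketch of the comparison we have in mind, assuming gwpy access to the test-stand frames; the X1 channel names and GPS span below are hypothetical placeholders rather than verified channels.

# Sketch of comparing top-mass OSEM spectra to see whether F3 stands out.
# The X1 test-stand channel names and times are hypothetical placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1404432018, 1404432618    # example 10-minute span
osems = ["F1", "F2", "F3", "LF", "RT", "SD"]

asds = {}
for osem in osems:
    chan = f"X1:SUS-BS_M1_DAMP_{osem}_IN1_DQ"   # placeholder channel name
    ts = TimeSeries.get(chan, start, end)
    asds[osem] = ts.asd(fftlength=64, overlap=32)

# A misbehaving F3 should show excess noise relative to the other five
for osem, asd in asds.items():
    band = asd.crop(0.1, 10)
    print(f"{osem}: median ASD 0.1-10 Hz = {np.median(band.value):.3g}")
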

Images attached to this report
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 21:56, Thursday 11 July 2024 (79055)

We have also now made a transfer function comparison between the dynamical model, the first build (2024/01/05), and following the recent rebuild (2024/07/10). These plots were generated by running $(sussvn)/trunk/BBSS/Common/MatlabTools/plotallbbss_tfs_M1.m for cases 1 and 3 in the table. I've attached the results as a pdf, but the .fig files can also be found in the results directory, $(sussvn)/trunk/BBSS/Common/Results/allbbss_2024-Jan05vJuly10_X1SUSBS_M1/. These results have been committed to svn.

Non-image files attached to this comment
H1 CAL
louis.dartez@LIGO.ORG - posted 07:10, Thursday 11 July 2024 - last comment - 22:48, Friday 12 July 2024(79019)
testing patched simulines version during next calibration measurement
We're running a patched version of simuLines during the next calibration measurement run. The patch (attached) was provided by Erik to try to get around what we think are awg issues introduced (or exacerbated) by the recent awg server updates (mentioned in LHO:78757).

Operators: there is nothing special to do; just follow the normal routine, as I applied the patch changes in place. Depending on the results of this test, I will either roll them back or work with Vlad to make them permanent (at least for LHO).
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:16, Thursday 11 July 2024 (79048)

Simulines was run right after getting back to NOMINAL_LOW_NOISE. The script ran all the way until after 'Commencing data processing', where it then gave:

Traceback (most recent call last):
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
    run(args.inputFile, args.outPath, args.record)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
    digestedObj[scan] = digestData(results[scan], data)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
    coh = np.float64( cohArray[index] )
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
    out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0

erik.vonreis@LIGO.ORG - 17:18, Thursday 11 July 2024 (79049)

All five excitations looked good on ndscope during the run.

erik.vonreis@LIGO.ORG - 17:22, Thursday 11 July 2024 (79050)

Also applied the following patch to simuLines.py before the run.  The purpose is to extend the sine definition so that discontinuities don't happen if a stop command is executed late.  If stop commands are all executed on time (the expected behavior), then this change will have no effect.

 

diff --git a/simuLines.py b/simuLines.py
index 6925cb5..cd2ccc3 100755
--- a/simuLines.py
+++ b/simuLines.py
@@ -468,7 +468,7 @@ def SignalInjection(resultobj, freqAmp):
     
     #TODO: does this command take time to send, that is needed to add to timeWindowStart and fullDuration?
     #Testing: Yes. Some fraction of a second. adding 0.1 seconds to assure smooth rampDown
-    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 1)
+    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 10)
     
     def signal_handler(signal, frame):
         '''

 

vladimir.bossilkov@LIGO.ORG - 07:33, Friday 12 July 2024 (79059)

Here's what I did:

  • Cloned simulines in my home directory
  • Copied the currently used ini file to that directory, overwriting default file [cp /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini /ligo/home/vladimir.bossilkov/gitProjects/simulines/simulines/settings_h1.ini]
  • reran simulines on the log file [./simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/20240711T234232Z.log]

No special environment was used. Output:
2024-07-12 14:28:43,692 | WARNING | It is assumed you are parising a log file. Reconstruction of hdf5 files will use current INI file.
2024-07-12 14:28:43,692 | WARNING | If you used a different INI file for the injection you are reconstructing, you need to replace the default INI file.
2024-07-12 14:28:43,692 | WARNING | Fetching data more than a couple of months old might try to fetch from tape. Please use the NDS2_CLIENT_ALLOW_DATA_ON_TAPE=1 environment variable.
2024-07-12 14:28:43,692 | INFO | If you alter the scan parameters (ramp times, cycles run, min seconds per scan, averages), rerun the INI settings generator. DO NOT hand modify the ini file.
2024-07-12 14:28:43,693 | INFO | Parsing Log file for injection start and end timestamps
2024-07-12 14:28:43,701 | INFO | Commencing data processing.
2024-07-12 14:28:55,745 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240711T234232Z.hdf5
2024-07-12 14:29:11,685 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240711T234232Z.hdf5
2024-07-12 14:29:20,343 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240711T234232Z.hdf5
2024-07-12 14:29:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240711T234232Z.hdf5
2024-07-12 14:29:38,634 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240711T234232Z.hdf5


Seems good to me. Were you guys accidentally using some conda environment when running simulines yesterday? When running this I was in "cds-testing" (which is the default?!). I have had this error in the past due to borked environments [in particular scipy, which is the underlying code responsible for the coherence], which is why I implemented the log-parsing function.
The fact that the crash was on the coherence and not on the preceding transfer function calculation rings the alarm bell that scipy is the issue. We experienced this once at LLO with a single bad conda environment that was later corrected, though I stubbornly ran with a very old environment for a long time to make sure that error didn't come up.
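
A quick way to test the borked-environment hypothesis is to dump the versions of the packages simuLines leans on from each environment and compare them; a minimal sketch (any Python 3.8+ interpreter):

# Print the versions of the packages simuLines depends on; run once in each
# conda environment (e.g. 'cds' and 'cds-testing') and compare the output.
import sys
import importlib.metadata as md

print(sys.executable)
for pkg in ("scipy", "numpy", "gwpy", "astropy"):
    try:
        print(f"{pkg:8s} {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg:8s} not installed")
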

I ran this remotely, so I can't look at the PDF if I run 'pydarm report'.
I'll be in touch over TeamSpeak to get that resolved.

ryan.crouch@LIGO.ORG - 08:00, Friday 12 July 2024 (79061)

Attaching the calibration report

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 08:06, Friday 12 July 2024 (79062)

There's a number of WAY out there data points in this report.

Did you guys also forget to turn off the calibration lines when you ran it?

Not marking this report as valid.

louis.dartez@LIGO.ORG - 08:34, Friday 12 July 2024 (79065)
Right, there was no expectation of this dataset being valid; the IFO was not thermalized and the cal lines remained on.

The goal of this exercise was to demonstrate that the patched simulines version at LHO can successfully drive calibration measurements. And to that end the exercise was successful. LHO has recovered simulines functionality and we can lay to rest the scary notion of regressing back to our 3hr-long measurement scheme for now.
erik.vonreis@LIGO.ORG - 22:48, Friday 12 July 2024 (79089)

The run was probably done in the 'cds' environment.  At LHO, 'cds' and 'cds-testing' are currently identical.  I don't know the situation at LLO, but LLO typically runs with an older environment than LHO.

Since it's hard to stay with fixed versions on conda-forge, it's likely several packages are newer at LHO vs. LLO cds environments.

H1 ISC (CAL, CDS)
jeffrey.kissel@LIGO.ORG - posted 10:34, Tuesday 09 July 2024 - last comment - 22:43, Friday 12 July 2024(78958)
Testing CPU Turn Around Time for OMC DCPD 524 kHz IOP model :: Unused High Frequency Notches Removed to Save Computation Time
J. Kissel, E. von Reis, D. Sigg

Circa March 2023, Daniel had installed some never-used first attempts at further filtering the high-frequency noise present in the 524 kHz OMC DCPD channels (never aLOGged because they were never used, but I call them out here after finding the work in LHO:68098).

Now, because I'd like to characterize the existing, in-use digital AA filtering, am running into some unknown noise (LHO:78516), and hope to install 2 to 4 more parallel filter banks that would also be quite full of filters (LHO:78956), there is worry that there won't be enough computation time in the 524 kHz system.

Remember that "the 524 kHz system" is actually a modified "standard" 65 kHz system, which reads out 8 samples from the 524 kHz ADC each 65 kHz clock cycle and computes everything at 65 kHz. Thus, in principle, the max turnaround time is
    1 / (2^16 Hz) = (1 / 65536) [sec] = 1.5258789e-5 [sec] = 15.3 [usec]

However, we *think* the practical limit is somewhat less than this. I don't understand those limitations well enough to say definitively and/or to quote a limit quantitatively, but I think they're related to the copy of the OMC DCPD channels that is demodulated at high frequency to create PI channels shipped to the end stations -- in other words, the IPC sending demands a bit of computational time, and if there isn't enough turnaround time left in the IOP to write to the IPC network, then the end-station SUS PI models throw an IPC timing error.

Anyways -- this morning, I looked at the 524 kHz system's computation time as is before doing anything (via the channel H1:FEC-179_CPU_METER), and it's sitting at 9 [usec] (out of the ideal 15 [usec]), occasionally popping up to 10 [usec].

But -- this led me to remember that -- regardless of whether the filter is turned ON -- the front-end computes the output of the filter -- sucking up computation time.
So, I've removed these unused prototype notch filters from the DCPD A0 and B0 filter banks.
In addition, I've also removed the old "V2A" filter from a previous version of the digital compensation for the OMC DCPD transimpedance amplifier response.

Removing the notch filters drops the computation time from the "9 [usec], occasionally bopping up to 10 [usec]" quoted above; see the attached time series of the CPU meter.

These filters are, of course, available in the filter archive, under the most recent archived file from before today's work:
    /opt/rtcds/lho/h1/chans/filter_archive/h1iopomc0/
        H1IOPOMC0_1401558760.txt
but for ease of use, I copy them here.

FM3 :: Notches1
    ellip("BandStop",3,0.5,30,12800,13200)
    notch(10216,50,30)
    ellip("BandStop",3,0.5,30,10380,10465)
    ellip("BandStop",3,0.5,30,12900,13100)

FM5 :: Notches2
    ellip("BandStop",3,0.5,30,8100,8200)
    notch(9101,50,30)
    notch(9337,200,30)
    notch(9463,50,20)
    ellip("BandStop",3,0.5,30,9750,9950)

FM8 :: Notches3
    ellip("BandStop",5,0.5,40,14384,18384)
    ellip("BandStop",5,0.5,40,30768,34768)

FM6 :: V2A
    zpk([5.699+i*22.223;5.699-i*22.223;32.73],[2.549;2.117;6.555],0.00501187,"n")gain(0.971763)
I've also posted a plot of the magnitude of these notch filters -- mostly just to demonstrate how many second-order sections these filters had -- sucking up computation time.
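
To put a rough number on that cost, the sketch below builds the first of the removed bandstops with scipy and counts its biquad sections; it assumes foton's ellip("BandStop", order, ripple, attenuation, f1, f2) maps approximately onto scipy's elliptic bandstop with the same parameters, which may not be exact.

# Count the second-order sections in one of the removed notch filters.
# Assumes foton's ellip("BandStop",3,0.5,30,12800,13200) corresponds roughly
# to a scipy elliptic bandstop with the same order/ripple/attenuation; the
# mapping may not be exact -- this is only to illustrate the per-sample cost.
import scipy.signal as sig

fs = 2**16   # 65536 Hz, the rate at which the A0/B0 banks are computed

sos = sig.ellip(3, 0.5, 30, [12800, 13200], btype="bandstop",
                output="sos", fs=fs)
print(f"{sos.shape[0]} second-order sections")   # each costs a handful of multiply-adds per sample

# Summing the sections across FM3, FM5, and FM8 gives a feel for how much
# turnaround time the unused notches were consuming every 15.3 usec cycle.
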
Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 09:42, Friday 12 July 2024 (79071)

Keita, Sheila

We were looking for explanations for the drops in range we've seen since Tuesday.  Attached is a plot of the CPU meter; it seems that this jumped up shortly after Jeff's plot was made.  It is still below 13 usec and doesn't look correlated with our range problems.

Images attached to this comment
erik.vonreis@LIGO.ORG - 22:43, Friday 12 July 2024 (79088)

Variation of CPU time in that range shouldn't by itself have any effect on the control loops running on that model until they get to a sustained time above 15 us or an individual cycle time somewhat more than 15 us depending on the model. 

The effects of a model that runs too long are DAC buffer starvation, i.e. the IOP didn't keep up with the DAC clocks, or IPC communication between models arriving too late.

Both of these errors would appear immediately on the CDS overview MEDM.

H1 SEI (SEI)
neil.doerksen@LIGO.ORG - posted 18:35, Thursday 04 July 2024 - last comment - 09:14, Friday 12 July 2024(78869)
Earthquake Analysis: similar on-site wave velocities may or may not cause lockloss; why?

It seems that earthquakes causing similar magnitudes of on-site motion may or may not cause lockloss. Why is this happening? We should expect similar events to either always or never cause lockloss. One suspicion is that common versus differential motion might lend itself better to keeping or breaking lock.

- Lockloss is defined as H1:GRD-ISC_LOCK_STATE_N going to 0 (or near 0).
- I correlated H1:GRD-ISC_LOCK_STATE_N with H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON peaks between 500 and 2500 μm/s (a minimal sketch of this check appears after the conclusions below).
- I manually scrolled through the data from present to 2 May 2024 to find events.
    - Manual, because 1) I wanted to start with a small sample size and quickly see if there was a pattern, and 2) I needed to find events that caused lockloss and then go find similarly sized events where we kept lock.
- Channels I looked at include:
    - IMC-REFL_SERVO_SPLITMON
    - GRD-ISC_LOCK_STATE_N
    - ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON ("CS_PEAK")
    - SEI-CARM_GNDBLRMS_30M_100M
    - SEI-DARM_GNDBLRMS_30M_100M
    - SEI-XARM_GNDBLRMS_30M_100M
    - SEI-YARM_GNDBLRMS_30M_100M
    - SEI-CARM_GNDBLRMS_100M_300M
    - SEI-DARM_GNDBLRMS_100M_300M
    - SEI-XARM_GNDBLRMS_100M_300M
    - SEI-YARM_GNDBLRMS_100M_300M
    - ISI-GND_STS_ITMY_X_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Y_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Z_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_X_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Y_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Z_BLRMS_100M_300M
    - SUS-SRM_M3_COILOUTF_LL_INMON
    - SUS-SRM_M3_COILOUTF_LR_INMON
    - SUS-SRM_M3_COILOUTF_UL_INMON
    - SUS-SRM_M3_COILOUTF_UR_INMON
    - SUS-PRM_M3_COILOUTF_LL_INMON
    - SUS-PRM_M3_COILOUTF_LR_INMON
    - SUS-PRM_M3_COILOUTF_UL_INMON
    - SUS-PRM_M3_COILOUTF_UR_INMON

        - ndscope template saved as neil_eq_temp2.yaml

- 26 events; 14 lockloss, 12 locked (3 or 4 lockloss events may have non-seismic causes)

- After using CS_PEAK to find the events, I have so far used the ISI channels to analyze them.
    - The SEI channels were created last week (only 2 events captured in these channels, so far).

- Conclusions:
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *lost* lock:
        - In SEI 30M-100M
            - 4 have z-axis-dominant motion, with either strong z-motion or no motion in SEI 100M-300M
            - 2 have y-axis dominated motion with a lot of activity in SEI 100M-300M and y-motion dominating some of the time.
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *kept* lock:
        - In SEI 30M-100M
            - 5 have z-axis dominant motion with only general noise in SEI 100M-300M
            - 1 has z-axis-dominant noise near the peak in CS_PEAK and strong y-axis-dominated motion starting 4 min prior to the CS_PEAK peak; it too has only general noise in SEI 100M-300M. This x- or y-motion starting about 4 min before the peak in CS_PEAK has been observed in 5 events -- Love waves precede Rayleigh waves, so could these be Love waves?
    - All events below 1000 μm/s which lose lock seem to have a dominant y-motion in either/both SEI 30M-100M / 100M-300M. However, the sample size is not large enough to convince me that shear motion is what is causing lockloss. But it is large enough to convince me to find more events and verify. (Some plots attached.)
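
For reference, here is a minimal sketch of the CS_PEAK vs. lock-state comparison described above, assuming gwpy/NDS access; the GPS span, the assumption that the channel is calibrated in μm/s, and the crude peak-picking are illustrative rather than the exact method used.

# Sketch of the CS_PEAK vs. lock-state check: find ground-velocity excursions
# in the 500-2500 um/s band and see whether ISC_LOCK dropped to ~0 (lockloss)
# within the following hour.  Span and thresholds are illustrative; a real run
# would fetch the data in chunks rather than as one long stretch.
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1398643218, 1404777618   # roughly 2 May - 12 Jul 2024
peak = TimeSeries.get("H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON", start, end)
lock = TimeSeries.get("H1:GRD-ISC_LOCK_STATE_N", start, end)

above = peak.value > 500              # assumes the channel is in um/s
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # crude peak picking

events = []
for i in onsets:
    t0 = peak.times.value[i]
    vmax = peak.crop(t0, t0 + 1800).max().value
    if vmax > 2500:
        continue                      # outside the 500-2500 um/s study band
    window = lock.crop(t0, t0 + 3600)
    lost = window.value.min() < 10    # state dropping to ~0, per the definition above
    events.append((t0, vmax, lost))

print(f"{len(events)} events, {sum(e[2] for e in events)} followed by lockloss")
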

Images attached to this report
Comments related to this report
beverly.berger@LIGO.ORG - 09:08, Sunday 07 July 2024 (78921)DCS, SEI

In a study with student Alexis Vazquez (see the poster at https://dcc.ligo.org/LIGO-G2302420), we found that there was an intermediate range of peak ground velocities in EQs where lock could be lost or maintained. We also found some evidence that lockloss in this case might be correlated with high microseism (either ambient or caused by the EQ). See the figures in the linked poster under Findings and Validation.

neil.doerksen@LIGO.ORG - 09:14, Friday 12 July 2024 (79070)SEI

One of the plots (2nd row, 2nd column) has the incorrect x-channel on some of the images (all posted images are correct, by chance). The patterns reported may not be correct; I will reanalyze.

H1 CDS (CAL, CDS, SUS)
erik.vonreis@LIGO.ORG - posted 10:21, Tuesday 25 June 2024 - last comment - 18:37, Thursday 11 July 2024(78644)
SUSH2A

[Dave, Erik]

Dave found that DACs in h1sush2a had been in a FIFO HIQTR state since 2024-04-09 11:33 UTC.

 

FIFO HIQTR means that DAC buffers had more data than expected.  DAC latency would be proportionally higher than expected.

 

The models were restarted, which fixed the issue.

Comments related to this report
erik.vonreis@LIGO.ORG - 18:37, Thursday 11 July 2024 (79052)

The upper bound on sush2a latency for the first three months of O4B is 39 IOP cycles. At 2^16 cycles per second, that's a maximum of 595 microseconds.

At 1 kHz, that's 214 degrees of phase shift.

Normal latency is 3 IOP cycles, 46 microseconds, 16 degrees of phase shift @ 1 kHz.

The minimum latency while sush2a was in error was 4 cycles, 61 microseconds, 23 deg @ 1 kHz.
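
For reference, the cycles-to-microseconds-to-phase conversion used above, as a minimal sketch:

# Convert an IOP latency expressed in 2^16 Hz cycles into microseconds and
# into degrees of phase shift at a 1 kHz reference, reproducing the numbers above.
FS = 2**16        # IOP cycle rate [Hz]
F_REF = 1000.0    # reference frequency for the phase figure [Hz]

def latency(cycles):
    seconds = cycles / FS
    return seconds * 1e6, 360.0 * F_REF * seconds   # (usec, degrees @ 1 kHz)

for n in (3, 4, 39):                  # normal, minimum-in-error, upper bound
    us, deg = latency(n)
    print(f"{n:2d} cycles -> {us:6.1f} usec, {deg:5.1f} deg @ 1 kHz")
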
 
