H1 CAL
jeffrey.kissel@LIGO.ORG - posted 15:27, Friday 12 July 2024 - last comment - 10:33, Wednesday 17 July 2024(79081)
PCALX :: Long-term High-Frequency Sensing Function TF vs. PCAL Spot Position Systematics Study
J. Kissel, L. Dartez, F. Llamas

As Louis and I were racking our brains for all the changes to the IFO that might have impacted its calibration in 2024, in order to update / maintain T2300297, we realized that Francisco's work -- changing the PCAL "inner beam" spot position of PCALX to explore and validate models of systematic errors in the PCAL system -- would potentially impact the measurement that is constantly running on PCALX: the so-called "roaming lines," a very long duration, repeating sweep of the sensing function between 1 and 5 kHz. Because this long-duration sweep relies on nominal low noise data, the speed at which the sweep frequency advances depends on the IFO's duty cycle and is thus irregular; sometimes the sweep completes in 6 days, other times it takes 10 days or more.

To facilitate this research, I plot the trend over time of the PCALX oscillator frequency that defines the sweep frequency, and overlay Francisco et al.'s PCALX spot moves.

And just so you can join us in our worry: while Francisco's work (see latest in LHO:78964) shows 0.03%-level changes in the comparison between PCALX and PCALY at 283.91 and 284.01 Hz respectively (via plots like this), the roaming lines measurement uses only PCALX, and the sweep spans 1-5 kHz, where it has long been known that PCAL spot position changes of ~5 [mm] at RX, or ~2.5 [mm] at the test mass, can cause 1-to-10%-level errors in reported displacement values at 3, 4, and 5 kHz (largest at the highest frequencies) -- see, for example, figures 2.30 through 2.35 in T2300381.

Stay tuned!

This aLOG does not claim we've done the legwork to quantify that we've seen a problem; it is the start of such a study.
Images attached to this report
Comments related to this report
richard.savage@LIGO.ORG - 10:33, Wednesday 17 July 2024 (79191)CAL

Jeff wrote:

"...while Francisco's work (see latest in LHO:78964) is showing 0.03%-level changes..."

I think he meant to write 0.3%-level rather than 0.03%.

X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 13:17, Friday 12 July 2024 (79079)
BBSS Post-TF Diagnostic Check-up

Ibrahim

Today, I went into the staging building to see if there were any visible or otherwise apparent reasons for our L, P, Y M1 Transfer Function incoherence.

Here's what I did:

Here's what I found:

Here's what I'm going to do:

I'm going to set an offset in order to "fake"-center the BOSEMs and then take data, but I expect that with a pitch instability the results won't be too coherent either. We'll see. We're at least now assured that there won't be any rubbing or contact.

What I'm going to do later:

We will fix the perceivable roll issue on Monday when there are more hands on deck. I'll also consider re-centering the OSEMs at noon and measuring the temperature at that time so we have some sort of informed reference.

Images attached to this report
H1 PSL
ryan.crouch@LIGO.ORG - posted 10:58, Friday 12 July 2024 (79075)
PSL Status Report - Weekly


Laser Status:
    NPRO output power is 1.821W (nominal ~2W)
    AMP1 output power is 65.27W (nominal ~70W)
    AMP2 output power is 138.5W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 2 days, 23 hr 37 minutes
    Reflected power = 17.94W
    Transmitted power = 107.8W
    PowerSum = 125.8W

FSS:
    It has been locked for 0 days 0 hr and 49 min
    TPD[V] = 0.7949V

ISS:
    The diffracted power is around 3.2%
    Last saturation event was 0 days 0 hours and 49 minutes ago


Possible Issues:
    ISS diffracted power is high; it seems like it's been high since the early morning of this past maintenance day.

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 09:52, Friday 12 July 2024 - last comment - 09:05, Monday 15 July 2024(79073)
Lockloss at 16:44 UTC

16:44 UTC lockloss. PRCL was oscillating at ~3.6 Hz.

Comments related to this report
ryan.crouch@LIGO.ORG - 12:55, Friday 12 July 2024 (79078)

We've lost lock at PREP_DC_READOUT twice in a row, at different points of the OMC locking process. The lockloss tool tags ADS_EXCURSION.

camilla.compton@LIGO.ORG - 09:05, Monday 15 July 2024 (79111)

We turned on a PRCL FF the day before: 79035. But this 3.6 Hz PRCL wobble is normal; it was constant throughout the lock (plot) and present in locks before the feedforward was installed (example).

This lockloss looked very normal: AS_A then the IMC losing lock, as usual (plot).

Images attached to this comment
H1 ISC
ryan.crouch@LIGO.ORG - posted 09:43, Friday 12 July 2024 - last comment - 10:35, Friday 12 July 2024(79069)
Range checks

Ryan C, Sheila D

15:32 to 16:01 UTC we dropped Observing to do some range checks/investigations.

Sheila and I did some range checks following the wiki. Running the coherence check showed high coherence with CHARD, and the SQZer BLRMS did not look as good as in previous locks, particularly in the 10-20, 20-34, and 60-100 Hz bands. So, since LLO was down, we decided to drop out of observing to run the SQZ alignment and angle scans, and I concurrently ran the A2L_min script (TJ's alog 78552). After these finished we also took 5 minutes of NO_SQZ time starting at 15:49 UTC, which showed that the extra noise is not coming from the SQZer. We gained a few Mpc from the SQZ scan and the A2L_min script.
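
For reference, here is a minimal sketch of how a coherence check like this could be reproduced offline with gwpy. The GPS span, fftlength, and exact channel names below are placeholders/assumptions, not taken from the wiki procedure:

from gwpy.timeseries import TimeSeries

# Placeholder GPS span; the real check uses the low-range stretch of interest.
start, end = 1404800000, 1404801800
darm = TimeSeries.fetch("H1:GDS-CALIB_STRAIN", start, end).resample(256)
chard = TimeSeries.fetch("H1:ASC-CHARD_P_OUT_DQ", start, end).resample(256)

# Coherence between DARM and CHARD_P; large values below ~100 Hz would support
# the observation that angular control noise is eating into the range.
coh = darm.coherence(chard, fftlength=10, overlap=5)
band = coh.crop(10, 100)
print("peak coherence 10-100 Hz: %.2f at %.1f Hz"
      % (band.value.max(), band.frequencies.value[band.argmax()]))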

Coherence comparison before and after SQZ and A2L checks

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 10:19, Friday 12 July 2024 (79074)

Quickly checked temperature.

The LVEA diurnal temperature swing is larger than usual in the past 4 days or so (e.g. zone 1B 0.33 °C peak-to-peak instead of 0.15), but I don't see a correlation with the range drop.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 10:35, Friday 12 July 2024 (79072)

Attached is a spectrum comparison between a 159 Mpc time from last night and a 149 Mpc time this morning. The issue is between 30-70 Hz, where Ryan's coherence plot doesn't show much.

The second attachment shows that today's poor range time is similar to what happened after Tuesday maintenance (78954). 

We tried the no-squeezing time because we've seen in the past that the squeezer was adding noise around this frequency range (78033, 77969, 77888). Those past times were times when we had moved the spot position on PR2, and we also had the intermittent squeezing problems that we seem not to have this week. We reverted the PR2 spot moves because of this suspicion (77895, 78012) and the issue seemed to go away, but we thought it might simply be that the intermittent squeezer issue happened to get better.

The third attached screenshot shows similar trends for the times when we moved PR3 in May to move the spot on PR2 (77949); that was a ~3 times larger move than the one we did last week (78878).

Images attached to this comment
H1 ISC
ryan.crouch@LIGO.ORG - posted 08:47, Friday 12 July 2024 - last comment - 09:03, Friday 12 July 2024(79066)
Ran A2L P & Y

Ran the (userapps)/isc/h1/scripts/a2l/a2l_min_multi.py script for all four quads in both P and Y to try to help our range. Minimal improvements; this was done in tandem with the SQZ scans, which together gained us about 2 Mpc.

          ETMX P   ETMX Y   ETMY P   ETMY Y   ITMX P   ITMX Y   ITMY P   ITMY Y
Initial:    3.12     4.79     4.48     1.13    -1.07     2.72    -0.47    -2.30
Final:      3.09     4.81     4.49     1.26    -1.02     2.79    -0.43    -2.36
Diff:      -0.03     0.02     0.01     0.13     0.05     0.07     0.04    -0.06

Comments related to this report
ryan.crouch@LIGO.ORG - 09:03, Friday 12 July 2024 (79068)

SDFed

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 08:11, Friday 12 July 2024 (79063)
Fri CP1 Fill

Fri Jul 12 08:08:03 2024 INFO: Fill completed in 8min 0secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 07:37, Friday 12 July 2024 - last comment - 08:22, Friday 12 July 2024(79060)
OPS Friday day shift start

TITLE: 07/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 08:22, Friday 12 July 2024 (79064)

Running the low-range coherence check, CHARD_P, CHARD_Y, and MICH seem to have high coherence.

Images attached to this comment
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 15:55, Thursday 11 July 2024 - last comment - 15:26, Friday 19 July 2024(79045)
DARM Offset step with hot OM2

We were only about 2 and a half hours into lock when I did this test, due to our earthquake lockloss this morning.

I ran the

python auto_darm_offset_step.py

in /ligo/gitcommon/labutils/darm_offset_step

Starting at GPS 1404768828

See attached image.

Analysis to follow.

Returned DARM offset H1:OMC-READOUT_X0_OFFSET to 10.941038 (nominal) at 2024 Jul 11 21:47:58 UTC (GPS 1404769696)

DARM offset moves recorded to 
data/darm_offset_steps_2024_Jul_11_21_33_30_UTC.txt

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:25, Friday 12 July 2024 (79080)

Here is the calculated Optical gain vs dcpd power and DARM offset vs optical gain as calculated by ligo/gitcommon/labutils/darm_offset_step/plot_darm_optical_gain_vs_dcpd_sum.py

The contrast defect is calculated from the height of the 410 Hz PCAL line at each offset step in the output DCPD, and is 1.014 +/- 0.033 mW.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 15:58, Monday 15 July 2024 (79130)

I added an additional plotting step to the code, and it now makes this plot, which shows how the power at AS_C changes with the DARM offset power at the DCPDs. The slope of this graph tells us what fraction of the power is lost between the input to HAM6 (AS_C) and the DCPDs.

P_AS = 1.770*P_DCPD + 606.5mW

where the second term is light that will be rejected by the OMC plus light that gets through the OMC but is insensitive to DARM length changes.

The corresponding throughput from the anti-symmetric port to the DCPDs is 1/1.77 = 0.565.
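
As a concrete illustration of how that slope and throughput are extracted, here is a minimal sketch using made-up (P_DCPD, P_AS) sample points scattered around the quoted fit; the real values come from the offset-step data and the plotting script above:

import numpy as np

# Hypothetical (P_DCPD, P_AS) pairs in mW, standing in for the measured offset steps.
p_dcpd = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
p_as = 1.770 * p_dcpd + 606.5 + np.random.default_rng(0).normal(0.0, 1.0, p_dcpd.size)

# Straight-line fit P_AS = m * P_DCPD + c; 1/m is the DARM-sensitive throughput.
m, c = np.polyfit(p_dcpd, p_as, 1)
print("slope = %.3f, intercept = %.1f mW, throughput = %.3f" % (m, c, 1.0 / m))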

Non-image files attached to this comment
H1 CAL
louis.dartez@LIGO.ORG - posted 07:10, Thursday 11 July 2024 - last comment - 22:48, Friday 12 July 2024(79019)
testing patched simulines version during next calibration measurement
We're running a patched version of simuLines during the next calibration measurement run. The patch (attached) was provided by Erik to try to get around what we think are awg issues introduced (or exacerbated) by the recent awg server updates (mentioned in LHO:78757).

Operators: there is nothing special to do; just follow the normal routine, as I applied the patch changes in place. Depending on the results of this test, I will either roll them back or work with Vlad to make them permanent (at least for LHO).
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:16, Thursday 11 July 2024 (79048)

Simulines was run right after getting back to NOMINAL_LOW_NOISE. The script ran all the way until after "Commencing data processing", where it then gave:

Traceback (most recent call last):
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
    run(args.inputFile, args.outPath, args.record)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
    digestedObj[scan] = digestData(results[scan], data)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
    coh = np.float64( cohArray[index] )
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
    out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0

erik.vonreis@LIGO.ORG - 17:18, Thursday 11 July 2024 (79049)

All five excitations looked good on ndscope during the run.

erik.vonreis@LIGO.ORG - 17:22, Thursday 11 July 2024 (79050)

Also applied the following patch to simuLines.py before the run. The purpose is to extend the sine definition so that discontinuities don't happen if a stop command is executed late. If stop commands are all executed on time (the expected behavior), then this change will have no effect.

 

diff --git a/simuLines.py b/simuLines.py
index 6925cb5..cd2ccc3 100755
--- a/simuLines.py
+++ b/simuLines.py
@@ -468,7 +468,7 @@ def SignalInjection(resultobj, freqAmp):
     
     #TODO: does this command take time to send, that is needed to add to timeWindowStart and fullDuration?
     #Testing: Yes. Some fraction of a second. adding 0.1 seconds to assure smooth rampDown
-    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 1)
+    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 10)
     
     def signal_handler(signal, frame):
         '''

 

vladimir.bossilkov@LIGO.ORG - 07:33, Friday 12 July 2024 (79059)

Here's what I did:

  • Cloned simulines in my home directory
  • Copied the currently used ini file to that directory, overwriting default file [cp /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini /ligo/home/vladimir.bossilkov/gitProjects/simulines/simulines/settings_h1.ini]
  • reran simulines on the log file [./simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/20240711T234232Z.log]

No special environment was used. Output:
2024-07-12 14:28:43,692 | WARNING | It is assumed you are parising a log file. Reconstruction of hdf5 files will use current INI file.
2024-07-12 14:28:43,692 | WARNING | If you used a different INI file for the injection you are reconstructing, you need to replace the default INI file.
2024-07-12 14:28:43,692 | WARNING | Fetching data more than a couple of months old might try to fetch from tape. Please use the NDS2_CLIENT_ALLOW_DATA_ON_TAPE=1 environment variable.
2024-07-12 14:28:43,692 | INFO | If you alter the scan parameters (ramp times, cycles run, min seconds per scan, averages), rerun the INI settings generator. DO NOT hand modify the ini file.
2024-07-12 14:28:43,693 | INFO | Parsing Log file for injection start and end timestamps
2024-07-12 14:28:43,701 | INFO | Commencing data processing.
2024-07-12 14:28:55,745 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240711T234232Z.hdf5
2024-07-12 14:29:11,685 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240711T234232Z.hdf5
2024-07-12 14:29:20,343 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240711T234232Z.hdf5
2024-07-12 14:29:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240711T234232Z.hdf5
2024-07-12 14:29:38,634 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240711T234232Z.hdf5


Seems good to me. Were you guys accidentally using some conda environment when running simulines yesterday? When running this I was in "cds-testing" (which is the default?!). I have had this error in the past due to borked environments [in particular scipy, which is the underlying code responsible for the coherence calculation], which is why I implemented the log parsing function.
The fact that the crash was on the coherence and not the preceding transfer function calculation rings the alarm bell that scipy is the issue. We experienced this once at LLO with a single bad conda environment that was corrected, though I stubbornly kept running with a very old environment for a long time to make sure that error didn't come up.

I ran this remotely so I can't look at the PDF if I run 'pydarm report'.
I'll be in touch over TeamSpeak to get that resolved.

ryan.crouch@LIGO.ORG - 08:00, Friday 12 July 2024 (79061)

Attaching the calibration report

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 08:06, Friday 12 July 2024 (79062)

There are a number of WAY-out-there data points in this report.

Did you guys also forget to turn off the calibration lines when you ran it?

Not marking this report as valid.

louis.dartez@LIGO.ORG - 08:34, Friday 12 July 2024 (79065)
Right, there was no expectation of this dataset being valid; the IFO was not thermalized and the cal lines remained on.

The goal of this exercise was to demonstrate that the patched simulines version at LHO can successfully drive calibration measurements. And to that end the exercise was successful. LHO has recovered simulines functionality and we can lay to rest the scary notion of regressing back to our 3hr-long measurement scheme for now.
erik.vonreis@LIGO.ORG - 22:48, Friday 12 July 2024 (79089)

The run was probably done in the 'cds' environment. At LHO, 'cds' and 'cds-testing' are currently identical. I don't know the situation at LLO, but LLO typically runs with an older environment than LHO.

Since it's hard to stay with fixed versions on conda-forge, it's likely that several packages are newer in the LHO cds environment than in LLO's.

H1 AOS (ISC, VE)
keita.kawabe@LIGO.ORG - posted 13:05, Tuesday 09 July 2024 - last comment - 11:49, Friday 12 July 2024(78966)
We cannot assess the energy deposited in HAM6 during pressure spike incidents (yet)

We cannot make a reasonable assessment of energy deposited in HAM6 when we had the pressure spikes (the spikes themselves are reported in alogs 78346, 78310 and 78323, Sheila's analysis is in alog 78432), or even during regular lock losses.

This is because all of the relevant sensors saturate badly, and ASC-AS_C is the worst in this respect because of its heavy whitening. This happens each and every time the lock is lost; it is a limitation of our configuration. I made a temporary change to partly mitigate this in the hope that we might obtain useful knowledge for regular lock losses (but I'm not entirely hopeful), which will be explained later.

Anyway, look at the 1st attachment, which is the trend around the pressure spike incident at 10 W (the other spikes were at 60 W, so this is the mildest of all). You cannot see the pressure spike itself because it takes some time for the puffs of gas molecules to reach the Pirani gauge.

Important points to take:

This is understandable. Look at the second attachment for a very rough power budget and electronics description of all of these sensors. QPDs (AS_C and the OMC QPDs) have 1 kOhm raw transimpedance and non-switchable 0.4:40 whitening, on top of two switchable 1:10 stages. WFSs (AS_A and AS_B) have 0.5 kOhm transimpedance with a switchable factor-of-10 gain, and they don't have whitening.

This happens with regular lock losses, and even with 2 W RF lock losses (third attachment), so it's hard to make a good assessment of the power deposited for anything. At the moment, we have to accept that we don't know.

We can use AS_B or AS_A data, even though they're railed, to set a lower bound on the power and thus the energy. That's what I'll do later.


(Added later)

After TJ locked the IFO, we saw a strange noise bump from ~20 to ~80 Hz or so. Since nobody had any idea what it was, and since my ASC SUM connection to the PEM rack is an analog connection from the ISC rack that also houses the DCPD interface chassis, I ran to the LVEA and disconnected it.

Seems like that wasn't it (it didn't get any better right after the disconnection), but I'm leaving it disconnected for now. I'll connect it back when I can.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 13:24, Tuesday 09 July 2024 (78968)

In the hope of making a better assessment of regular lock losses, I made the following changes.

  • With Richard's help, I T-ed the ASC-AS_C analog SUM output on the back of the QPD interface chassis in ISC R5 rack (1st picture) and connected it to H1:PEM-CS_ADC_5_19_2k_OUT_DQ.
    • The SUM output doesn't have any whitening nor any DC amplification, it is just the analog average (SEG1+2+3+4)/4 where each SEG has 1kOhm transimpedance gain, and AS_C only receives ~400ppm of the power coming into HAM6. This will be the signal that rails/saturates later than other sensors.
    • The other end of the T goes to fast shutter logic chassis input in the same rack. The "out" signal of that chassis is T-ed and goes to the shutter driver as well as shutter interface in the same rack.
    • Physical connection goes from the QPD interface in the ISC rack on the floor to the channel B03 of the PEM DQ patch panel on the floor, then to CH20 of the PEM patch panel in the CER.
  • I flipped the x10 gain switch for AS_B to "low", which means there's no DC amplification for AS_B. So we have that much headroom.
    • I set the dark offset for all quadrants.
    • There was no "+20dB" in the AS_B DC filters, so I made that and loaded the filter (2nd attachment).
    • TJ took care of SDF for me.

My gut feeling is that these things will still rail, but we'll see. I'll probably revert these next Tuesday.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 13:50, Tuesday 09 July 2024 (78974)

SDF screenshot of accepted values.

Images attached to this comment
keita.kawabe@LIGO.ORG - 15:17, Tuesday 09 July 2024 (78977)

Low voltage operation of the fast shutter: It still bounces.

Before we started locking the IFO, I used the available light coming from the IMC and closed/opened the fast shutter using the "Close" and "Open" buttons on the MEDM screen. Since this doesn't involve the trigger voltage crossing the threshold, it only seems to drive the low voltage output of the shutter driver, which is used to hold the shutter in the closed position for a prolonged time.

In the attached, the first marker shows the time the shutter started moving, witnessed by GS-13.

About 19 ms after the shutter started moving, it was fully shut. About 25 ms after it closed, it started opening, was open or half-open for about 10 ms, and then closed for good.

Nothing was even close to railing. I repeated the same thing three times and it was like this every time.

Apparently the mirror is bouncing back down or maybe moving sideways. During the last vent we didn't take a picture of the beam on the fast shutter mirror, but it's hard to imagine that it's close to the end of the mirror's travel.

I thought it's not supposed to do that. See the second movie in G1902365; even though the movie captures the HV action, not the LV, the shutter is supposed to stay in the closed position.

Images attached to this comment
keita.kawabe@LIGO.ORG - 11:37, Thursday 11 July 2024 (79029)

ASC-AS_C analog sum signal at the back of the QPD interface chassis was put back on at around 18:30 UTC on Jul/11.

keita.kawabe@LIGO.ORG - 11:49, Friday 12 July 2024 (79077)

Unfortunately, I forgot that the input range of some of these PEM ADCs is +-2 V, so the signal still railed even when the analog output of the ASC-AS_C SUM didn't (2 V happens to be the trigger threshold of the fast shutter); this was still not good enough.

I installed a 1/11 resistive divider (nominally 909 Ohm - 9.1 kOhm) on the ASC-AS_C analog SUM output of the chassis (not on the input of the PEM patch panel) at around 18:30 UTC on Jul/12 2024 while the IFO was out of lock.
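
For a rough sense of scale, here is a back-of-the-envelope sketch of what the divided SUM signal corresponds to in HAM6 input power. It uses the numbers quoted earlier in this thread (1 kOhm transimpedance per segment, SUM = average of the four segments, ~400 ppm of the HAM6 power on AS_C, 909 Ohm / 9.1 kOhm divider, +-2 V PEM ADC) plus an assumed photodiode responsivity of ~0.8 A/W at 1064 nm, and it ignores the chassis' own analog output rail:

# Back-of-the-envelope scaling; responsivity is an assumption, other numbers are from this thread.
responsivity = 0.8                   # A/W at 1064 nm (assumed)
transimpedance = 1e3                 # Ohm per QPD segment
qpd_fraction = 400e-6                # fraction of HAM6 input power reaching AS_C
divider = 909.0 / (909.0 + 9.1e3)    # ~1/11 resistive divider
adc_rail = 2.0                       # V, PEM ADC input range

def ham6_power_for_sum_volts(v_sum):
    """HAM6 input power [W] that produces a given chassis SUM output voltage,
    using V_sum = responsivity * transimpedance * (P_on_QPD / 4)."""
    p_qpd = 4.0 * v_sum / (responsivity * transimpedance)
    return p_qpd / qpd_fraction

print("2 V shutter-trigger threshold  -> ~%.0f W into HAM6" % ham6_power_for_sum_volts(2.0))
print("PEM ADC rail with 1/11 divider -> ~%.0f W into HAM6" % ham6_power_for_sum_volts(adc_rail / divider))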

H1 ISC (CAL, CDS)
jeffrey.kissel@LIGO.ORG - posted 10:34, Tuesday 09 July 2024 - last comment - 22:43, Friday 12 July 2024(78958)
Testing CPU Turn Around Time for OMC DCPD 524 kHz IOP model :: Unused High Frequency Notches Removed to Save Computation Time
J. Kissel, E. von Reis, D. Sigg

Circa March 2023, Daniel installed some never-used first attempts at further filtering the high frequency noise present in the 524 kHz OMC DCPD channels (never aLOGged because they were never used, but I call them out here after having found the work in LHO:68098).

Now, because I'd like to characterize the existing, in-use digital AA filtering (having run into some unknown noise, LHO:78516), and because I hope to install 2 to 4 more parallel filter banks that would also be quite full of filters (LHO:78956), there is worry that there won't be enough computation time in the 524 kHz system.

Remember that "the 524 kHz system" is actually a modified "standard" 65 kHz system, which reads out 8 samples from the 524 kHz ADC each 65 kHz clock cycle and computes everything at 65 kHz. Thus, in principle, the max turnaround time is
    1 / (2^16 Hz) = (1 / 65536) [sec] = 1.5258789e-5 [sec] = 15.3 [usec]

However, we *think* the practical limit is somewhat less than this. I don't understand those limitations well enough to say definitively and/or to quote a limit quantitatively, but I think they're related to the copy of the OMC DCPD channels which are demodulated at high frequency to create PI channels that are shipped to the end station -- in other words, the IPC sending demands a bit of computational time, and if there isn't enough turnaround time left in the IOP to write to the IPC network, then the end-station SUS PI models throw an IPC timing error.

Anyways -- this morning, I looked at the 524 kHz system's computation time as is before doing anything (via the channel H1:FEC-179_CPU_METER), and it's sitting at 9 [usec] (out of the ideal 15 [usec]), occasionally popping up to 10 [usec].

But -- this led me to remember that -- regardless of whether the filter is turned ON -- the front-end computes the output of the filter -- sucking up computation time.
So, I've removed these unused prototype notch filters from the DCPD A0 and B0 filter banks.
In addition, I've also removed the old "V2A" filter from a previous version of the digital compensation for the OMC DCPD transimpedance amplifier response.

Removing the notch filters drops the computation time down from the "9 [usec], occasionally popping up to 10 [usec]" quoted above.
See the attached time series of the CPU meter.

These filters are, of course, available in the filter_archive, under the latest previous archived file before today's work:
    /opt/rtcds/lho/h1/chans/filter_archive/h1iopomc0/
        H1IOPOMC0_1401558760.txt
but for ease of use, I copy them here.

FM3 :: Notches1
    ellip("BandStop",3,0.5,30,12800,13200)
    notch(10216,50,30)
    ellip("BandStop",3,0.5,30,10380,10465)
    ellip("BandStop",3,0.5,30,12900,13100)

FM5 :: Notches2
    ellip("BandStop",3,0.5,30,8100,8200)
    notch(9101,50,30)
    notch(9337,200,30)
    notch(9463,50,20)
    ellip("BandStop",3,0.5,30,9750,9950)

FM8 :: Notches3
    ellip("BandStop",5,0.5,40,14384,18384)
    ellip("BandStop",5,0.5,40,30768,34768)

FM6 :: V2A
    zpk([5.699+i*22.223;5.699-i*22.223;32.73],[2.549;2.117;6.555],0.00501187,"n")gain(0.971763)
I've also posted a plot of the magnitude of these notch filters -- mostly just to demonstrate how many second order sections these filters had, sucking up computation time.
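
As a rough cross-check of the second-order-section count (and hence the per-cycle cost), a foton design string like ellip("BandStop",3,0.5,30,12800,13200) can be approximated in scipy, assuming the order/ripple/attenuation arguments map the same way (an assumption; foton's designer may differ in detail):

import scipy.signal as sig

fs = 2**16  # 65536 Hz, the rate at which the "524 kHz" IOP actually computes its filters

# Approximation of ellip("BandStop",3,0.5,30,12800,13200):
# order 3, 0.5 dB passband ripple, 30 dB stopband attenuation, 12.8-13.2 kHz stop band.
sos = sig.ellip(3, 0.5, 30, [12800, 13200], btype="bandstop", output="sos", fs=fs)
print("second-order sections:", len(sos))  # every SOS is another biquad computed each cycle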
Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 09:42, Friday 12 July 2024 (79071)

Keita, Sheila

We were looking for explanations for the drops in range we've seen since Tuesday. Attached is a plot of the CPU meter; it seems that this jumped up shortly after Jeff's plot was made. It is still below 13 usec, and doesn't look correlated with our range problems.

Images attached to this comment
erik.vonreis@LIGO.ORG - 22:43, Friday 12 July 2024 (79088)

Variation of CPU time in that range shouldn't by itself have any effect on the control loops running on that model, until it gets to a sustained time above 15 us or an individual cycle time somewhat more than 15 us, depending on the model.

The effects of a model that has run too long are DAC buffer starvation (i.e. the IOP didn't keep up with the DAC clock) or IPC communication between models arriving too late.

Both of these errors would appear immediately on the CDS overview MEDM.

H1 SEI (SEI)
neil.doerksen@LIGO.ORG - posted 18:35, Thursday 04 July 2024 - last comment - 09:14, Friday 12 July 2024(78869)
Earthquake Analysis : Similar onsite wave velocities may or may not cause lockloss, why?

It seems earthquakes causing similar magnitudes of movement on-site may or may not cause lockloss. Why is this happening? We should expect similar events to either always or never cause lockloss. One suspicion is that common or differential motion might lend itself better to keeping or breaking lock.

- Lockloss is defined as H1:GRD-ISC_LOCK_STATE_N going to 0 (or near 0).
- I correlated H1:GRD-ISC_LOCK_STATE_N with H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON peaks between 500 and 2500 μm/s (a minimal fetch-and-threshold sketch is given after this list).
- I manually scrolled through the data from present to 2 May 2024 to find events.
    - Manual, because 1) I wanted to start with a small sample size and quickly see if there was a pattern, and 2) I need to find events that caused lockloss and then find similarly sized events where we kept lock.
- Channels I looked at include:
    - IMC-REFL_SERVO_SPLITMON
    - GRD-ISC_LOCK_STATE_N
    - ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON ("CS_PEAK")
    - SEI-CARM_GNDBLRMS_30M_100M
    - SEI-DARM_GNDBLRMS_30M_100M
    - SEI-XARM_GNDBLRMS_30M_100M
    - SEI-YARM_GNDBLRMS_30M_100M
    - SEI-CARM_GNDBLRMS_100M_300M
    - SEI-DARM_GNDBLRMS_100M_300M
    - SEI-XARM_GNDBLRMS_100M_300M
    - SEI-YARM_GNDBLRMS_100M_300M
    - ISI-GND_STS_ITMY_X_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Y_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Z_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_X_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Y_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Z_BLRMS_100M_300M
    - SUS-SRM_M3_COILOUTF_LL_INMON
    - SUS-SRM_M3_COILOUTF_LR_INMON
    - SUS-SRM_M3_COILOUTF_UL_INMON
    - SUS-SRM_M3_COILOUTF_UR_INMON
    - SUS-PRM_M3_COILOUTF_LL_INMON
    - SUS-PRM_M3_COILOUTF_LR_INMON
    - SUS-PRM_M3_COILOUTF_UL_INMON
    - SUS-PRM_M3_COILOUTF_UR_INMON

        - ndscope template saved as neil_eq_temp2.yaml

- 26 events; 14 lockloss, 12 stayed locked (3 or 4 lockloss events may have non-seismic causes)

- After using CS_PEAK to find the events, I have so far used the ISI channels to analyze them.
    - The SEI channels were created last week (only 2 events are captured in these channels so far).

- Conclusions:
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *lost* lock;
        - In SEI 30M-100M
            - 4 have z-axis dominant motion, with either strong z-motion or no motion in SEI 100M-300M
            - 2 have y-axis dominated motion, with a lot of activity in SEI 100M-300M and y-motion dominating some of the time.
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *kept* lock;
        - In SEI 30M-100M
            - 5 have z-axis dominant motion with only general noise in SEI 100M-300M
            - 1 has z-axis dominant motion near the peak in CS_PEAK and strong y-axis dominated motion starting 4 min prior to the CS_PEAK peak; it too has only general noise in SEI 100M-300M. This x- or y-motion starting about 4 min before the peak in CS_PEAK has been observed in 5 events -- Love waves precede Rayleigh waves, so could these be Love waves?
    - All events below 1,000 μm/s which lost lock seem to have dominant y-motion in either or both of SEI 30M-100M and 100M-300M. However, the sample size is not large enough to convince me that shear motion is what is causing lockloss, but it is large enough to convince me to find more events and verify. (Some plots attached.)
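
As referenced above, here is a minimal fetch-and-threshold sketch in gwpy for flagging candidate events and checking whether lock was subsequently lost. The GPS span is an illustrative placeholder, and the 30-minute follow-up window and the <10 state threshold are assumptions standing in for "0 (or near 0)":

import numpy as np
from gwpy.timeseries import TimeSeriesDict

# Illustrative one-day span (placeholder); the study above covers 2 May 2024 to present.
start, end = 1399000000, 1399086400
chans = ["H1:GRD-ISC_LOCK_STATE_N", "H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON"]
data = TimeSeriesDict.get(chans, start, end)
lock, peak = data[chans[0]], data[chans[1]]

# Candidate EQ events: peak ground-velocity monitor within the 500-2500 um/s band.
mask = (peak.value > 500) & (peak.value < 2500)
times = peak.times.value[mask]

# For illustration, take the largest candidate and ask whether the Guardian
# lock state dropped toward 0 within the following 30 minutes (assumed window).
if times.size:
    t = times[np.argmax(peak.value[mask])]
    window = lock.crop(t, min(t + 1800, end))
    lost = bool(window.value.min() < 10)  # threshold approximating "0 (or near 0)"
    print("GPS %.0f: lock lost within 30 min: %s" % (t, lost))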

Images attached to this report
Comments related to this report
beverly.berger@LIGO.ORG - 09:08, Sunday 07 July 2024 (78921)DCS, SEI

In a study with student Alexis Vazquez (see the poster at https://dcc.ligo.org/LIGO-G2302420), we found that there was an intermediate range of peak ground velocities in EQs where lock could be either lost or maintained. We also found some evidence that lock loss in this case might be correlated with high microseism (either ambient or caused by the EQ). See the figures in the linked poster under Findings and Validation.

neil.doerksen@LIGO.ORG - 09:14, Friday 12 July 2024 (79070)SEI

One of the plots (2nd row, 2nd column) has the incorrect x-channel on some of the images (all posted images are correct, by chance). The patterns reported may not be correct; I will reanalyze.
