H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 18:49, Tuesday 17 June 2025 - last comment - 20:44, Tuesday 17 June 2025(85145)
Lockloss

Lockloss @ 06/18 01:30 UTC

Comments related to this report
oli.patane@LIGO.ORG - 20:44, Tuesday 17 June 2025 (85146)

03:08 Observing

H1 CDS
david.barker@LIGO.ORG - posted 17:20, Tuesday 17 June 2025 - last comment - 17:28, Tuesday 17 June 2025(85141)
DAQ restart of PSLDBB and EDC restart for Vacuum LX PT100 and digivideo CAM37 additions (and some removals)

The DAQ was restarted for a second time today at 12:41 (0-leg) and 12:50 (1-leg) for three reasons:

1) h1psldbb had a bad DAQ status following a model change at 09:15 to remove an IPC sender

2) H0EPICS_VACLX.ini has new PT100 HAM1 gauge channels

3) H1EPICS_DIGVIDEO.ini has new CAM37 ISCT1-REFL channels (camera was added recently)

I regenerated H1EPICS_DIGVIDEO.ini using generate_camera_daq_ini.py, which correctly added the CAM37 channels as a new server machine but reverted CAMs[21, 23, 24, 26] back to their "old" server settings. Recall that these cameras were moved to a new server, but a lack of masking forced them back to the old server (digivideo2). At that point the EDC was expecting a new set of channels, so as a workaround I built a dummy IOC to simulate these channels and keep the EDC happy. To make a long story short, these cameras are back with their original PV set and digivideo_dummy_ioc is no longer needed. To test this I shut down the IOC with no loss of EDC connections, and verified that the Guardian CDS_COPY "from" list was still functioning.
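
For context, a dummy IOC like the one retired here can be a handful of lines of Python. A minimal sketch, assuming the pcaspy CA-server library and a made-up PV name (the actual digivideo_dummy_ioc implementation may differ):

from pcaspy import SimpleServer, Driver

# Made-up prefix and PV, standing in for the expected camera channels
prefix = 'H1:VID-'
pvdb = {
    'CAM21_DUMMY': {'prec': 2, 'value': 0.0},
}

class DummyDriver(Driver):
    """Static driver: PVs just hold their initial values."""
    def __init__(self):
        super().__init__()

server = SimpleServer()
server.createPV(prefix, pvdb)
driver = DummyDriver()

while True:
    server.process(0.1)  # run the CA server loop so the EDC sees the PVs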


Comments related to this report
david.barker@LIGO.ORG - 17:28, Tuesday 17 June 2025 (85142)

Detailed list of DAQ channel changes (9097 additions, 91 removals)

Non-image files attached to this comment
H1 OpsInfo (ISC)
jennifer.wright@LIGO.ORG - posted 16:57, Tuesday 17 June 2025 (85136)
Instructions for running DARM offset step to do output loss measurement

To run the test that calculates how much loss we have between ASC-AS_C (the anti-symmetric port) and the DCPDs at the output of the OMC (where strain is derived from), follow these steps:

  1. This test is set up to run in NLN.
  2. Turn off the OMC ASC: go to sitemap -> OMC -> OMC Control, note down the MASTER GAIN value (circled in the attached photo), then change this value to 0.
  3. Go to /ligo/gitcommon/labutils/darm_offset_step and run:

conda activate labutils

python auto_darm_offset_step.py

The script turns on one notch in the DARM LSC loop, then changes the PCAL line heights so all the power is in just two frequencies.

At the end it reverses both these changes.

It can be stopped using Ctrl-C, but this will not restore the PCAL line heights to their default values, so you may have to use the time machine.

After setting up the PCAL, the script steps the DARM offset level in 120 s steps; I think it takes about 21 minutes to run.

After the script finishes, please put the OMC ASC MASTER GAIN back to its original value.
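
For reference, the heart of such a script is a timed sequence of EPICS writes. A minimal sketch, assuming pyepics and a hypothetical DARM offset channel and step list (the real channels and values live in auto_darm_offset_step.py):

import time
from epics import caput  # pyepics channel access

# Hypothetical channel name and step values, for illustration only
DARM_OFFSET = 'H1:OMC-READOUT_X0_OFFSET'
for offset in [1.0, 1.5, 2.0, 3.0]:
    caput(DARM_OFFSET, offset, wait=True)  # step the DARM offset
    time.sleep(120)                        # dwell 120 s at each step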

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:39, Tuesday 17 June 2025 (85140)
Ops Eve Shift Start

TITLE: 06/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 4mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:

We are Observing and have been Locked for almost 1.5 hours. SQZ doesn't look good, so I might pop out of Observing in another hour to run a SQZ alignment scan. Also, if we lose lock I will be restarting a couple of guardian nodes that use AWG, since that was restarted earlier today and we've already had one awg error pop up in ALS_YARM (85130).

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:38, Tuesday 17 June 2025 (85137)
OPS Day Shift Summary

TITLE: 06/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 22:26 UTC

TUES Maintenance Activities:

We had a very productive maintenance day during which the following was completed:

LVEA was swept - alog 85128

*Only listing completed WP’s - close your permits if work is done.

Lock Reacquisition:

Lock recovery was quite rough due to a few issues:

  1. ITMY Oscillation - Whilst locking ALSY, we noticed that it was oscillating at ~3 Hz. No alarms had sounded other than the ISI/SUS trips expected from the morning's CDS/maintenance work, which had since been successfully reverted. Ryan C discovered that the L1 BOSEMs were also oscillating independently of the ALS WFS (which was the initial hypothesis). Then we found a worrying increase in the IY reaction chain gain (an apparent increase of a few tens of thousands!). This appears to have started when the CDS upgrade was done, even though the right values were saved in the SDF SAFE files. CDS advised that any signals from this time are not to be trusted, but clearly something must have moved, since the 3 Hz signal was still ringing (albeit ringing down over the last 3 hrs). We found the appropriate filter banks, IY_R0_L2DAMP_R_GAIN and IY_R0_L2DAMP_P_GAIN, identified them as the culprits, and turned them off. After the involvement of Ryan C, Tony, Sheila, Elenna, Keita, Jonathan, Dave, EJ, Oli and myself, we decided to test turning the gains back on (nominally 1.5 and -2.5, not 10k) with a ramp. We did this and there were no issues. Weird. So essentially CDS is saying that there was no gain, and power cycling the gain fixed the IY motion, which was seen in IY L2, the IY L1 OSEMs, and ALSY during ALS. Elenna still thinks something is weird with the IY reaction chain and is investigating. Picture of Tony's IY power spectrum below. (alog 85135)
  2. ALSY Fault during Scan Alignment - ALSY faulted during scan-align, causing a guardian node error. I had to manual out of it to continue locking. This is now a known issue whereby ALS scan alignment (not specific to X or Y) fails due to an AWG issue. The SUSCHARGE and ALS guardians will be restarted by Oli after our next lockloss. (alog 85130)
  3. IMC Locking Issues - The CDS work today threw the IMC out of alignment, so we had to move the MC1, MC2, MC3 and PZT sliders back to values from a time when we were successfully auto-locking.
  4. SEI, PSL and SUS states were all transitioned to SAFE (Safe State Transition Wiki) and then transitioned back out (Safe State De-transition Wiki).
  5. SDF Screenshots Linked

LOG:

Start Time System Name Location Laser_Haz Task End Time
15:22 SAF LASER SAFE LVEA SAFE LVEA is LASER SAFE ദ്ദി( •_•) 15:37
15:07 FAC Kim, Nellie LVEA N Technical Cleaning 16:12
15:40 VAC Jordan, Gerardo LVEA N Anchor Drill work 17:31
15:42 OPS Camilla LVEA N Laser SAFE transition, ISCT1 check 16:14
15:42 3:00 Christina LVEA N Walkabout 17:08
16:05 FAC Randy LVEA N Inventory 20:49
16:12 FAC Kim EX N Technical Cleaning 17:58
16:13 FAC Nellie EX N Technical Cleaning 17:15
16:19 PSL Jason LVEA N PSL Inventory + Cable pull 17:08
16:23 3IFO Betsy, Mitchell, LIGO India Delegation LVEA N Tour, 3IFO Walkabout + Rick 18:20
16:28 VAC Richard LVEA N Check on vac 18:48
16:30 AOS Jennie Optics Lab Local ISS Array 17:21
16:42 ISS Keita Optics lab Local Joining Jennie W in the optics lab 19:15
16:43 CDS Erik Remote N Restarting EndX IOC 16:53
16:57 EE Fil LVEA & Both Ends N Looking for scopes & talking to Richard 18:55
17:08 FAC Christina End stations N Getting Pictures of hardware 18:25
17:13 ISC Sheila, Matt, Camilla LVEA N ISCT1 Laser Panels 17:41
17:27 Bee Tyler LVEA N Being a Busy Bee keeper and intercepting Betsy 17:35
17:27 ISS Rahul Optics Lab Local Helping Keita with ISS work 19:15
17:28 ISS Jennie W Optics Lab Local Working on ISS with Keita 19:15
17:42 ISC Camilla ISCT table N ISCT table work 17:58
17:44 PEM Ryan C LVEA n Checking Dust mons 18:18
17:45 VAC Gerardo Mid & EY N Gauge replacements 18:56
17:58 FAC Nellie, Kim FCES N Technical Cleaning 18:32
18:20 3IFO Mitchell, Jim, 3IFO team member LVEA N 3IFO inventory check 18:23
18:25 3IFO Brijesh, Jim LVEA N 3IFO Inventory 18:43
18:34 TCS Camilla LVEA N TCS Part Search 18:45
18:57 VAC Gerardo LVEA N Vac Gauge Reconnection 19:06
19:58 OPS Tony LVEA N Sweep 20:49
20:49 ISS Keita, Rahul Optics lab LOCAL ISS array work 22:55
20:58 FAC Chris EX, MX N Air handler checks 22:12
21:11 PSL Rick, 3IFO Delegation PSL Anteroom N PSL Anteroom Tour 22:06
21:21 ISS Jennie W Optics lab Local Working on ISS system 22:51
Images attached to this report
H1 SUS (CDS)
erik.vonreis@LIGO.ORG - posted 16:12, Tuesday 17 June 2025 - last comment - 17:50, Tuesday 17 June 2025(85135)
Excessive movement in ITMY may have been caused by bad IPC table in OMC0

A trend of H1:IOP-SUS_ITMY_WD_OSEM1_RMSOUT shows increased motion, including two peaks, during the 10 minutes post-RCG upgrade that OMC0 was clobbering IPCs (see alog 85120).

The attached screenshot has cursors at the approximate start and end of OMC0 clobbering IPCs.  RMS remained high until guardian was started 30 minutes later, after which ITMY continued to ring until guardian was again restarted.

We will attempt to trace the clobbered IPCs to see if they plausibly could have driven ITMY.

Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 17:37, Tuesday 17 June 2025 (85143)

The attached list shows the mapping from OMC0 IPCs to the IPCs that were clobbered during the ten minutes OMC0 was running on the wrong IPC table.


Non-image files attached to this comment
erik.vonreis@LIGO.ORG - 17:50, Tuesday 17 June 2025 (85144)

ITMX, which received the same clobbered channel as ITMY, also showed a spike in movement during the same period, but was properly stilled by guardian.


Images attached to this comment
H1 CDS (CDS, VE)
patrick.thomas@LIGO.ORG - posted 15:59, Tuesday 17 June 2025 (85134)
Updates to h0vaclx and addition of the HAM1 PT100 BCG552 EtherCAT gauge
WPs: 12577 and 12608
Previous work: alog 84871

This portion of the first work permit has been completed: "Migrate h0vaclx from the svn repo to the git repo. Also migrate it from using the C# code to the PowerShell code for generating the TwinCAT 3 Visual Studio solution. This would make it match h0vacmr and h0vacly in both of these regards. Update the PLC code on h0vaclx to add PT100 as an Inficon BCG552 gauge."

The installation of TwinCAT 3 on h0vaclx has been updated to version 3.1.4024.35. The PowerShell script to generate the TwinCAT 3 solution has been changed to use the TwinCAT XAE Shell instead of Visual Studio 2010 because I could not get it working with the latter. The new Inficon BCG552 EtherCAT gauge on HAM1 is connected and being read into EPICS. The code being used for the scripts is at commit d27c3ebfb424572f3aba003744e97e947e5a4873 in the git repo at https://git.ligo.org/cds/ifo/beckhoff/lho-vacuum. The shortcut in the TwinCAT autostart folder to start the EPICS IOC has been updated to point to the location of the checkout of this repo. The shortcut on the Desktop has similarly been updated.
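
A quick sanity check that the new gauge is being served into EPICS could look like the following, assuming pyepics and a hypothetical PV name for the PT100 pressure readback (the real channel names are listed in H0EPICS_VACLX.ini):

from epics import caget

# Hypothetical PV name for the new HAM1 BCG552 (PT100) gauge
print(caget('H0:VAC-LX_X0_PT100_PRESS_TORR'))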

Timeline of work:

9:56 Stopped the EPICS IOC. Set the TwinCAT runtime to Config. Started the installer for TwinCAT 3.1.4024.67.
10:02 A Windows Security dialog message appeared three times: "Windows can't verify the publisher of this driver software". Clicked "Install this driver software anyway" each time.
The installer took a very long time on "Installing Microsoft .NET Framework 5.6.1 Full".
10:22 The computer spontaneously logged me out during the installation.
10:24 Logged back in.
10:26 I started the installation of TwinCAT 3.1.4024.67 again and then soon canceled it.
10:35 I started the installer for TwinCAT 3.1.4024.35.
10:50 I restarted the computer to complete the install as prompted.
10:52 Logged back in. The installation appeared to be successful. I tried to generate the TwinCAT 3 solution from the scripts. I could not find a way around an error saying that the project template could not be found, despite it being at the path shown.
11:19 I ran 'shutdown /r' to restart the computer and try again but still got the same error about the template after the restart. I changed the script to use the TwinCAT XAE Shell instead of Visual Studio 2010. The script froze. I logged out and back in. The script succeeded in generating the solution. I scanned for terminals and did not see the BCG552 gauge on HAM1. Gerardo told me it was not connected and went to connect it. The gauge showed up and everything appears to be working. I updated the paths in the scripts for the IOC and the shortcuts to start the IOC. I checked in all of the changes to git.
Images attached to this report
H1 SUS
elenna.capote@LIGO.ORG - posted 15:27, Tuesday 17 June 2025 (85132)
Roll mode gain ramp time set in guardian, accepted in SDF

We keep having an SDF observe diff for the roll mode damping gain ramp time. I added a line to the DAMP_BOUNCE_ROLL guardian state to set the gain ramp time to 10 seconds when the gain is set. Apparently we've been engaging the damping with no gain ramp. Hopefully this observe diff will stop popping up now.
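
The added line is presumably something like this sketch, using guardian's ezca channel access; the channel names and gain value here are hypothetical stand-ins for the actual roll mode damping bank:

# Inside a guardian state, where `ezca` is provided by the framework.
# Set the ramp time before engaging damping so the TRAMP diff goes away.
ezca['SUS-ITMY_M0_DAMP_R_TRAMP'] = 10    # hypothetical channel; 10 s ramp
ezca['SUS-ITMY_M0_DAMP_R_GAIN'] = -0.1   # gain now ramps on over 10 s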

Images attached to this report
H1 GRD (GRD)
anthony.sanchez@LIGO.ORG - posted 14:31, Tuesday 17 June 2025 - last comment - 14:32, Tuesday 17 June 2025(85130)
ALS-YARM Guardian issue in Scan Alignment.

The operator was trying to lock in LOCKING_ARMS_GREEN[12] when ALS_YARM faulted while it was trying to do SCAN_ALIGNMENT[57].

We manual'd ALS_YARM back to UNLOCKED[-28], then reloaded ALS_YARM.
The operator then took ISC_LOCK to GREEN_ARM_MANUAL[13].


Error Log:

 ALS_YARM W: RELOADING @ SCAN_ALIGNMENT.run
2025-06-17_21:15:15.740544Z ALS_YARM [SCAN_ALIGNMENT.run] Generating waveforms
2025-06-17_21:15:16.229352Z ALS_YARM [SCAN_ALIGNMENT.run] Start time is 1434230144
2025-06-17_21:15:16.229580Z ALS_YARM [SCAN_ALIGNMENT.run] Opening awg streams
2025-06-17_21:15:16.232982Z awgSetChannel: failed awgnewchannel_1(chntype = 1, arg1 = 0, arg2 = 0, awg_clnt[99][0] = 31370384) H1:SUS-TMSY_M1_TEST_P_EXC
2025-06-17_21:15:16.232982Z Error code from awgSetChannel: -5
2025-06-17_21:15:16.233280Z ALS_YARM W: Traceback (most recent call last):
2025-06-17_21:15:16.233280Z   File "/usr/lib/python3/dist-packages/guardian/worker.py", line 494, in run
2025-06-17_21:15:16.233280Z     retval = statefunc()
2025-06-17_21:15:16.233280Z   File "/opt/rtcds/userapps/release/als/common/guardian/ALS_ARM.py", line 1525, in run
2025-06-17_21:15:16.233280Z     ex.open()
2025-06-17_21:15:16.233280Z   File "/usr/lib/python3/dist-packages/awg.py", line 595, in open
2025-06-17_21:15:16.233280Z     raise AWGStreamError("can't open stream to " + self.chan \
2025-06-17_21:15:16.233280Z awg.AWGStreamError: can't open stream to H1:SUS-TMSY_M1_TEST_P_EXC: Error setting up an awg slot for the channel (-120)
2025-06-17_21:15:16.256804Z ALS_YARM ERROR in state SCAN_ALIGNMENT: see log for more info (LOAD to reset)
2025-06-17_21:15:35.011679Z ALS_YARM MODE: MANUAL
2025-06-17_21:15:35.017804Z ALS_YARM MANAGER: removed
2025-06-17_21:15:41.635373Z ALS_YARM MODE: AUTO
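
Until the node restarts happen, this failure mode could be caught in the state code rather than erroring the node; a hedged sketch around the ex.open() call that ALS_ARM.py makes (the FAULT jump target is hypothetical):

try:
    ex.open()  # ex: the awg excitation stream built earlier in the state
except awg.AWGStreamError:
    # awg slots can go stale after an awg restart like today's; jump to
    # a recoverable state instead of crashing the whole node
    return 'FAULT'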


Comments related to this report
anthony.sanchez@LIGO.ORG - 14:32, Tuesday 17 June 2025 (85131)

Seems similar to this alog from Oli.

H1 General
anthony.sanchez@LIGO.ORG - posted 13:06, Tuesday 17 June 2025 (85128)
LVEA has been swept

The LVEA has been swept.
The Vac team still has a pump on HAM1 that can be heard throughout the VEA.

H1 ISC
elenna.capote@LIGO.ORG - posted 12:03, Tuesday 17 June 2025 (85126)
Quick Info about the AWGLines guardian

Since we plan to do some TCS tests on the IFO in the near future, I recommend we make use of the AWGLines guardian, which injects a variety of lines that can be used to monitor things like noise couplings and optical gain changes.

Taking a quick survey of the guardian, I don't see any immediate issues. The guardian injects these lines: intensity noise (47.3 Hz, 222 Hz, 4222 Hz), frequency noise (25.2 Hz, 203 Hz, 4500 Hz), jitter (111.3 Hz, 167.1 Hz, 387.4 Hz), and MICH (40.7 Hz), PRCL (35.3 Hz) and SRCL (38.1 Hz). There is an additional state that engages the 8.125 Hz notches in the ASC and then injects one ASC line. The guardian is currently set up to inject in CHARD Y (8.125 Hz), but that can be adjusted to any ASC loop. The amplitude for the CHARD Y injection looks good, but it will need to be adjusted for other ASC loops. I have no idea if the laser noise and LSC noise line amplitudes are "good", but we ran this guardian regularly before O4 started, so they are probably ok. The lines are summarized in the block below.
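
An illustrative summary of the injected line frequencies described above (not the guardian's actual data structure):

AWG_LINES_HZ = {
    'intensity':   [47.3, 222.0, 4222.0],
    'frequency':   [25.2, 203.0, 4500.0],
    'jitter':      [111.3, 167.1, 387.4],
    'MICH':        [40.7],
    'PRCL':        [35.3],
    'SRCL':        [38.1],
    'ASC_CHARD_Y': [8.125],  # one line per configured ASC loop
}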

It does not appear that any notches or bandstops are engaged for the laser noise or LSC injections. Not sure if we want to change that. The cable for frequency noise will need to be plugged in to ensure the frequency noise gets injected.

The way the states are currently written, you will need to manual to "SET UP ASC INJECTIONS" to ensure the ASC line gets injected before going to "INJECTING". The "STOP INJECTING" state will stop all the injections, including the ASC injections and revert all changes such as switches and filters.

H1 ISC
camilla.compton@LIGO.ORG - posted 11:29, Tuesday 17 June 2025 (85125)
Laser Curtains around ISCT1/IOT2L replaced

Sheila, Matt, Camilla

We replaced the laser curtains around ISCT1/IOT2L that were removed for the HAM1 vent, so now this area can be taken to local laser hazard.

LHO VE (VE)
travis.sadecki@LIGO.ORG - posted 10:11, Tuesday 17 June 2025 - last comment - 16:37, Tuesday 17 June 2025(85121)
new Kobelco installation complete

Late entry from last week. Roger's Machinery completed the new Kobelco installation and startup last Tuesday.  Our last task before acceptance testing of the system was to replace the filter elements in the filter tree.  When we opened the housings, we discovered that one of the elements was mangled and had evidence of rough tool use on it.  When we removed the element, we could see that there was some evidence of galling in the threads where the element screws into the filter housing (see pics).  We will order a new housing and replace it before continuing with testing the system.  

Images attached to this report
Comments related to this report
janos.csizmazia@LIGO.ORG - 16:37, Tuesday 17 June 2025 (85138)
And here is the filter element, in an absolutely ridiculous condition, as Travis described it.
Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 10:01, Tuesday 17 June 2025 - last comment - 16:37, Tuesday 17 June 2025(85120)
RCG5.5.0 completed 09:51

We have completed the upgrade of H1 frontends to RCG5.5.0 at 09:51.

A detailed alog will be written, but here are the surprises found/re-remembered:

EX Dolphin frontends need to enable an unused port on the EY switch because the EX switch has no port control (damaged in the April power outage)

The PSL DBB model had an obsolete Dolphin IPC sender left over from when it had a Dolphin card. The new RCG doesn't allow senders on machines with no cards. Removed the sender from the model; a DAQ restart is pending for this model

We had upgraded h1omc0 to 5.5.0 some time ago, but the H1.ipc file had changed since then, so it needed a restart. Prior to the restart it was clobbering the SWWD IPCs between sush34 and seih23/seih45.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 11:20, Tuesday 17 June 2025 (85124)
Images attached to this comment
david.barker@LIGO.ORG - 16:37, Tuesday 17 June 2025 (85139)

Here is a rough sequence of today's upgrade, all times local PDT:

07:22 rebuild H1EDC.ini to convince ourselves the first DAQ restart will be model changes only

07:43 h1cdsrfm powered down, this breaks the linkage between Dolphin locations

07:43 h1ecatmon0 upgrade (reboot)

07:45 Dolphin network manager started on h1vmboot5-5, causing the standard set of end station systems to crash (susey, seiey, iscey and susex). We PAUSE'd the remaining EX (iscex, seiex)

07:51 Reboot h1susaux[ex, ey, b123, h2, h34, h56] pem[mx, my] to upgrade them. susaux[h34, ey] got stuck and were power cycled via IPMI.

08:03 DAQ 0-leg restart for new INI files across the board.

08:18 DAQ 1-leg restart. At this point omc0, susaux's and pemmid have good DAQ data, everyone else has BAD DAQ data.

08:27 Power down EX machines, power up EY machines. SWWD IPCs working, Dolphin IPC checks out.

08:32 Power up EX machines, all at the same time because of the Dolphin switch issue. They do not start. After some head scratching we remembered that the startup sequence needs to activate a Dolphin switch port, which cannot happen at EX because the switch is damaged. The workaround is for all three EX front ends to switch an unused port on the EY switch. Once this was put in place, the EX machines started without anyone having to drive to the end station.

08:55 Reboot h1psl0 to upgrade PSL models (no dolphin cards, more about this later...)

08:56 Power down all non-upgraded corner station machines (SUS+SEI+ASC+LSC+OAF) but not h1omc0 (more about this later..)

09:00 h1psldbb is not running. It has an obsolete Dolphin IPC sender part in the model but no Dolphin card. RCG5.5.0 does not allow this. Rebuild model sans IPC part, starts running. Note PSLDBB DAQ data is BAD from this point till the second DAQ restart.

09:10 First power up h1sush2b, h1seih16 for HAM1 SEI work. SWWD IPC between the two working well (well, for these two, more later...)

09:20 Power up all remaining corner station computers

09:30 Discover weird SWWD IPC receive values for HAM3 and HAM4 (val should be 1.0, but is -0.002 or 0.000).

09:34 try restarting h1iopsush34, IPC values still bad. But h1omc0 had not been restarted, so it was using the old IPC configuration and could have been writing out-of-bounds

09:35 restart h1omc0 models, SWWD IPC errors are resolved

09:44 power up h1cdsrfm. First EX is PAUSE'd, EY and CS are fenced. Long-range Dolphin starts with no issues. A new MEDM is generated from the new H1.ipc file.

09:51 Complete in 2 hours.


H1 AOS (DetChar, SUS)
kiet.pham@LIGO.ORG - posted 16:28, Friday 13 June 2025 - last comment - 12:45, Tuesday 17 June 2025(85026)
Violin mode contamination near 500 Hz: possible mixing between the 2nd harmonic (near 1500 Hz) and the 1st harmonic (near 1000 Hz)

Kiet and Sheila,

Following up on the investigation posted in aLOG 84136, we examined the impact of higher-order violin mode harmonics on the contamination region.

We found that subtracting the violin peaks near 1000 Hz (1st harmonic) from those near 1500 Hz (2nd harmonic) results in frequency differences that align with many of the narrow lines observed in the contamination region around 500 Hz.

Violin peaks that we used (from the O4a+b run-average spectra):

F_n1 = {1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083} Hz
F_n2 = {1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333, 1486.36153, 1485.76708} Hz

Out of the 35 possible difference pairs (one from each set), 27 matched known lines in the contamination region to within 1/1800 Hz (~0.56 mHz), most within 0.1 mHz. Considering that each region actually contains >30 peaks, the number of matching pairs likely increases significantly, helping explain the dense forest of lines in the contamination region.
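
The pair counting is easy to reproduce from the peak lists above; a short check (the known-line list for the 500 Hz region is not reproduced here):

F_n1 = [1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083]  # Hz
F_n2 = [1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333,
        1486.36153, 1485.76708]  # Hz

tol = 1.0 / 1800  # Fscan bin width, ~0.56 mHz
diffs = sorted(f2 - f1 for f1 in F_n1 for f2 in F_n2)
print(len(diffs))  # 35 pairs, spanning roughly 456-481 Hz
# Matching against the known lines would then be:
# matches = [d for d in diffs if any(abs(d - line) < tol for line in known_lines)]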

Next steps:

The Fscan run average data are available here (interactive plots): 

Fundamental region (500 Hz): 

1st harmonic region (1000 Hz): 

2nd harmonic region (1500 Hz): 

Comments related to this report
kiet.pham@LIGO.ORG - 12:45, Tuesday 17 June 2025 (85127)DetChar, SUS

Adding a plot comparing the PSDs before and after removing the peaks that can be identified by this method.

Images attached to this comment