H1 CDS
david.barker@LIGO.ORG - posted 08:43, Wednesday 20 November 2024 (81380)
More EY CNS-II 1PPS jumps to -0.8uS

For about 50 minutes starting at 01:45 PST this morning we had another jump of the EY CNS-II reference GPS 1PPS compared with the LIGO timing system. The previous jump was Mon 18 Nov 2024 at 05:55; prior to that, there had been no jumps for the 3 weeks since the power supply was replaced.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:05, Wednesday 20 November 2024 (81376)
Power glitch 07:00:26 PST

We had a slight mains power glitch at 07:00:26 PST this morning; the GC UPS reported being on battery power for 4 seconds.

Attached plot shows all three phases in the corner station CER.

Images attached to this report
H1 PSL
ryan.crouch@LIGO.ORG - posted 07:36, Wednesday 20 November 2024 (81374)
FSS high voltage restored

Richard turned the high voltage back on and plugged back in the DB37 cable that had been unplugged yesterday afternoon.

H1 General
ryan.crouch@LIGO.ORG - posted 07:30, Wednesday 20 November 2024 - last comment - 08:53, Wednesday 20 November 2024(81373)
OPS Wednesday DAY shift start

TITLE: 11/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM_EARTHQUAKE
    Wind: 17mph Gusts, 11mph 3min avg
    Primary useism: 0.40 μm/s
    Secondary useism: 1.56 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 08:53, Wednesday 20 November 2024 (81377)PSL

The minor mains power glitch earlier this morning (07:00:26) tripped off the NPRO: the NPRO tripped 50 ms after the glitch started.

Images attached to this comment
H1 PSL (ISC)
ryan.short@LIGO.ORG - posted 22:13, Tuesday 19 November 2024 - last comment - 18:39, Wednesday 20 November 2024(81371)
PMC Glitch Testing

Since the secondary microseism has been very high this evening and preventing H1 from locking, we decided to leave just the PMC locked (no FSS, ISS, or IMC) for an extended time and watch for any glitches. At around 23:45 UTC, we unlocked the FSS, Richard turned off the high voltage supply for the FSS, and Jason and I unplugged the DB37 cable from the FSS fieldbox in the PSL-R1 rack in order to ensure no feedback from the FSS made it to the NPRO. Pictures of the DB37 cable's location are attached.

The first attachment shows the changes seen when the FSS was unlocked. Since then, I've seen several instances of groups of glitches come through, such as those shown in the second and third attachments. These glitches in the PMC_MIXER channel are smaller than ones seen previously that have unlocked the IMC (like in alog 81228). There have also been times where the PMC_MIXER channel gets "fuzzier" for a bit and then calms down, shown in the fourth attachment; it's possible this is because the NPRO frequency is not being controlled, so the PMC sees some frequency changes. Finally, I only recall one instance of the NPRO jumping in power like in the final attachment; the PMC doesn't seem to care much about this, only having one very small glitch at this time.

I'll leave the PSL in this configuration to collect more data overnight as the secondary microseism is still much too high for H1 to successfully lock.

Images attached to this report
Comments related to this report
peter.fritschel@LIGO.ORG - 07:46, Wednesday 20 November 2024 (81375)

A zoom-in on some of the glitches from the third figure above.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 08:55, Wednesday 20 November 2024 (81379)

After Ryan's shift ended last night, there were some larger glitches, with amplitudes in the PMC mixer channel similar to the ones that we saw unlock the reference cavity (81356) and the IMC (81228).

The first plot shows one of these times with larger glitches; the second zooms in on 60 ms when the glitches were frequent, which looks fairly similar to Peter's plot above.

The period of large glitches started around 2 am (7:37 UTC on Nov 20th) and ended when a power glitch turned off the laser at 7 am (15:00 UTC) 81376. Some of the small glitches in that time frame seem to occur at the same time that the reference cavity was resonating (with low transmission), but many of the large glitches do not line up with times when the reference cavity was resonating.

I've zoomed in on most of the times when the PMC mixer glitches reached 0.1, and I see that there are usually small jumps in NPRO power at the time of these glitches, although the times don't always line up well, and the small power glitches happen very often, so this might be a coincidence.
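To zoom in on these systematically, one can scan the mixer channel for threshold crossings. A minimal sketch in Python, assuming NDS data access via gwpy; the exact _DQ channel name here is an assumption:

    from gwpy.timeseries import TimeSeries
    import numpy as np

    # Fetch the glitchy stretch (assumed channel name for the PMC mixer).
    data = TimeSeries.get('H1:PSL-PMC_MIXER_OUT_DQ',
                          'Nov 20 2024 07:37', 'Nov 20 2024 15:00')

    # Indices where the mixer signal reaches +/-0.1, and their GPS times.
    hits = np.flatnonzero(np.abs(data.value) >= 0.1)
    glitch_times = data.times.value[hits]
    print(f'{len(hits)} samples at or above threshold')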

Images attached to this comment
victoriaa.xu@LIGO.ORG - 09:47, Wednesday 20 November 2024 (81381)

Sheila, Jason, Vicky - Compared the PSL + PMC mixer glitches between last night (Nov 19-20, 2024; no FSS, no ISS) and the emergency vent (Aug 2024, PSL+PMC+FSS+ISS), as in 81354.

  • Previously 80999, the IMC could not stay locked for long periods of time: "IMC loses lock when it's locked by itself, with PMC, FSS, and ISS."
  • We fixed a bunch of things last week, but then still had many fast "IMC"-tagged locklosses over the weekend. Hard to tell the cause.
  • Started removing stuff again this week to isolate the problem upstream of the IMC.
  • Laser + PMC + FSS (no ISS) = bad, still glitchy 81356. Rules out the ISS, and likely the IMC, as the cause of the glitches.
  • Laser + PMC (no FSS, no ISS) = bad, still glitchy 81371. FSS depowered and HV off to really turn the FSS "off". Rules out the FSS as the cause of the glitches.

As a reference, "before" during the emergency vent in August 2024, the Laser + PMC + FSS + ISS were all locked with no PMC mixer glitches for >1 month.


Updating our matrix of tests to isolate the problem, and thinking things through:

Before (vent Aug 2024):
  • laser + PMC + FSS + ISS: good = no glitches
  • laser + PMC: ??? (presumably good)
Now (Nov 2024):
  • laser + PMC + FSS: bad = glitches 81356
  • laser + PMC: bad = same PMC mixer glitches 81371

1) Are these +/-0.1 V PMC mixer glitches the problem? Yes, probably.

2) Are these big PMC mixer glitches caused or worsened by the FSS? No. The PMC mixer glitches are basically the same with the FSS on (81356) and off (81371).

3) Are the laser + PMC mixer glitches new? Yes, probably. If these PMC glitches were always there, could it be that we were previously able to ride them out, and now cannot? But this would imply that, in addition to the new glitches, the FSS secondarily degraded. That seems very unlikely: several problems (bad amp, new EOM needing a new notch, etc.) have already been fixed in the FSS, and the FSS OLTFs and in-loop spectra look good. FSS on/off does not change the PMC mixer glitches, so the problem is most likely the laser or the PMC.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 18:39, Wednesday 20 November 2024 (81391)ISC, PSL

Sheila, Daniel, Jason, Ryan S, many others

We think the problem is not the PMC, and is likely the laser.

Daniel looked at PMC mixer glitches on the remote scope: 81390. If the PMC mixer glitches are indicative of the problem, we can try to track down their origin.

  • What can cause PMC mixer glitches?     
    1. NPRO Laser frequency glitches .... seems likely
    2. PMC cavity length glitches (e.g. servo electronics or PZT) ....  most likely no  81390.
    3. Likely the PDH readout is optically sensing real glitches; we have already switched the 35.5 MHz LO to the Marconi 81277 and that made no difference ...... rules out the 35.5 MHz LO.
       
  • Unlikely PMC, because the PMC mixer glitches on the scope seem too fast for the PMC locking electronics.
    1. Daniel found that the "PMC servo drives the PZT with a 3.3K series resistor forming a ~1kHz low pass with the PZT capacitance of ~45nF. This yields a characteristic time constant of ~150us. The glitches as seen by the PMC mixer are at least a few times faster than this." 81390 (see the worked numbers after this list)
      • Daniel's scope photo here shows "a train of PMC mixer glitches, going as high as ~70mVpk (or ~70kHz)", similar to the zoomed-in glitches Peter showed above.
    2. So, this means that glitches originating in the PMC locking servo or PZT electronics would change the PMC cavity length too slowly to create the fast PMC mixer glitches we observe. Seems unlikely that PMC servo locking electronics are the issue.
    3. Unlikely mechanical / PZT issues, given the very nice PMC cavity scans - optically, the cavity scans don't show glitches or issues as a function of the PZT drive (remote scope traces + Sheila's PMC PZT scan photo).
       
  • The biggest change to the glitches came from swapping the original O4 laser out for the O3 laser. The glitches changed when the lasers changed: see Camilla's lho:81386. This seems like a clue that the glitches are related to the specific lasers themselves.
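To make the PZT low-pass argument above concrete, here are the worked numbers from the quoted 3.3 kΩ series resistor and ~45 nF PZT capacitance (a consistency check, not a new measurement):

    import numpy as np

    R = 3.3e3   # series resistor [ohm]
    C = 45e-9   # PZT capacitance [F]

    tau = R * C                     # characteristic time constant
    f_c = 1 / (2 * np.pi * R * C)   # low-pass corner frequency

    print(f'tau = {tau * 1e6:.0f} us')   # ~150 us
    print(f'f_c = {f_c:.0f} Hz')         # ~1.1 kHz
    # Mixer glitches a few times faster than ~150 us cannot be produced
    # by the PZT drive acting through this low pass.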


Talking with Jason and RyanS, a summary of the NPROs available:

  • (S/N 1661) The O3 NPRO was swapped in recently. Potentially we are now seeing with 1661 the same problems that were seen at the end of O3b.
    • Camilla's 81386 shows that the FSS oscillations with this laser in O4 look a lot like its issues at the end of O3b.
    • In 2020, these FSS issues originally prompted Camilla to create the FSS_OSCILLATION tags. These tags started being flagged regularly again in mid-Sept 2024, but with the original O4 laser the FSS oscillations looked totally different: see her log 81386.
       
  • (S/N 7974) The original O4 NPRO was in use Sept 2017 - June 2018, when it was uninstalled (42652), seemingly due to similar fast FSS glitches leading to a broken noise eater. Jason looked at alogs from the time this NPRO was removed in 2018, and the 2018 glitches look like the original Sept 2024 glitches. 7974 was then sent back to Coherent for repair and refurbishment; it is the newest NPRO, bought in 2015, and the only one that Coherent will service. 7974 was then installed for O4.
     
  • (S/N 1639F) The "third" NPRO is from O1 and was in use for many years, 2011-2017. It was nearing the end of its lifetime and running out of power, so it was removed and refurbished. Since then it has only been used a few times in the optics lab. We will try this third NPRO.
Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 22:10, Tuesday 19 November 2024 (81372)
Ops Eve Shift Summary

TITLE: 11/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: None
SHIFT SUMMARY: H1 remained unlocked the whole evening due to very high secondary microseismic motion, and there is no plan to attempt locking overnight. In the meantime, we took the opportunity to leave just the PMC locked to look for glitches (more details in alog81371). Very quiet evening otherwise with nothing else to report.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 21:28, Tuesday 19 November 2024 (81370)
BSC3 vacuum gauge glitch, vacstat reset

At 17:52:10 PST Tue 19 Nov 2024 we had another sharp positive glitch in the BSC3 vacuum gauge signal (PT132_MOD2). VACSTAT did not alarm because no other gauge tripped around this time. The glitch was a 2-second-wide square wave, and as before it was the delta-P check which tripped.

VACSTAT was reset at 18:39 to restore it to its monitoring state.
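For reference, the delta-P trip amounts to flagging large sample-to-sample pressure changes, so a 2-second square wave trips on both edges. A minimal sketch; the threshold and pressures are hypothetical, not VACSTAT's actual settings:

    def delta_p_trip(pressures, dp_threshold=1e-8):
        """Return indices where |P[i] - P[i-1]| exceeds dp_threshold [Torr]."""
        return [i for i in range(1, len(pressures))
                if abs(pressures[i] - pressures[i - 1]) > dp_threshold]

    # A square-wave glitch trips on both the rising and falling edges.
    glitch = [1e-9, 1e-9, 5e-8, 5e-8, 1e-9]
    print(delta_p_trip(glitch))   # -> [2, 4]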

Images attached to this report
H1 DAQ (CAL, CDS, GRD)
anthony.sanchez@LIGO.ORG - posted 17:18, Tuesday 19 November 2024 (81368)
PCALX_STAT Guardian Node

I started testing the PCALX_STAT Guardian node today.

/opt/rtcds/userapps/release/cal/h1/guardian/PCALX_STAT.py

It created a new INI file, but the DAQ was not restarted after the new INI file was created.
As it currently stands, this is a draft of the final product that will be tested for a week and further refined.
This Guardian node does not make any changes to the IFO; its only job is to determine whether the PCALX arm is broken. TJ has already added it to the Guardian ignore list.
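For context, a monitoring-only Guardian node like this can be quite small. A minimal sketch assuming the standard Guardian API (GuardState, plus the injected ezca and notify); the channel name and threshold are hypothetical stand-ins, not the actual PCALX_STAT.py logic:

    from guardian import GuardState

    request = 'MONITORING'
    nominal = 'MONITORING'

    class INIT(GuardState):
        def main(self):
            return 'MONITORING'

    class MONITORING(GuardState):
        def run(self):
            # Read a PCAL X readback (hypothetical channel and threshold)
            # and only notify; never write to the IFO.
            if abs(ezca['CAL-PCALX_TX_PD_OUT16']) < 0.1:
                notify('PCALX arm may be broken')
            # Never return True, so the check keeps looping.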

H1 General (Laser Transition)
ryan.crouch@LIGO.ORG - posted 16:35, Tuesday 19 November 2024 (81349)
OPS Tuesday day shift summary

TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Secondary microseism has been consistently high (above the 90th percentile) for the whole day, has risen even more in the last few hours of the shift, and has reached the top of the plot. It will likely get even worse tomorrow as the storm off the Pacific coast closes in. Wind has been pretty low today. ITMY5/6 is still kind of high.

As of 23:30 UTC we're back to sitting in idle; we unlocked the FSS, ISS, and IMC, leaving just the PMC locked.
LOG:

Start Time System Name Location Laser_Haz Task End Time
21:41 SAF Laser LVEA YES LVEA is laser HAZARD 17:55
16:08 FAC Eric EndX N Glycol piping famis task, upright fallen portapotty by FCES 16:50
16:14 TCS TJ, Camilla LVEA Y CO2Y table cable jiggling 17:01
16:17 FAC Kim & Karen FCES N Tech clean 16:31
16:19 FAC Tyler LVEA N Verify strobes in H2/PSL area 16:28
16:20 FAC Tyler + Cascade Fire OSB, Ends N Fire alarm testing 18:54
16:27 FAC Tyler, Tristan EndX N Fire alarm testing 18:47
16:30 FAC Kim EndX N Tech clean 17:19
16:31 FAC Karen EndY N Tech clean 17:35
17:00 ISC Sheila LVEA Y Unplug ISS Feed forward 17:08
16:34 VAC Jordan LVEA Y->N Prep new scroll pumps, gauge swap 20:04
16:41 FAC Chris + Fire LVEA Y Check fire ext. 17:10
16:59 VAC Gerardo LVEA Y->N Join Jordan, Scroll pumps 20:04
17:03 OPS Camilla LVEA Y -> N LASER HAZARD transition 17:20
17:08 FAC Chris + Cascade fire EndY N Fire checks, End then mid 18:54
17:10 VAC Janos LVEA Y Join VAC team 17:53
17:20 CDS Marc, Fil EndX N DAC swap 18:12
16:45 EPO Corey+1 EndY N Pictures 17:15
17:30 FAC Kim FCES N Tech clean 18:00
17:32 VAC Janos LVEA N Pump/Gauge work 18:35
17:36 FAC Karen FCES N Tech clean 18:00
17:42 OPS Oli LVEA N Grab power meter 17:47
17:47 SEI Jim CR N BS and ITM sei tests 18:22
17:00 ISC Daniel LVEA Y Investigations, scope installation 17:52
17:56 SAF LASER LVEA N LVEA is LASER SAFE 20:46
17:57 ALS Camilla, Oli EndY Y Beam profiling 19:54
18:01 FAC Richard LVEA N Walk around 18:21
18:02 ISC Keita LVEA N Cable checks 18:29
18:16 ISC Sheila CER N Plug and unplug FF cable 18:40
18:22 VAC Janos EndX then EndY N Scroll pumps 19:25
18:22 SEI Jim LVEA Biergarten / CR N working on Seismic sub systems 19:55
17:30 EPO Corey+1 LVEA N B-Roll 18:25
18:37 FAC Kim & Karen LVEA N Tech clean, Kim out 19:37 19:53
18:54 FAC Tyler+Cascade Mids N Fire checks 20:20
19:25 VAC Janos LVEA N Join VAC team 20:04
19:56 SEI Jim LVEA N Take pictures 20:06
20:03 OPS Oli LVEA N Return power meter 20:11
20:11 CAL Tony PCAL lab Y Make it laser safe 21:07
20:40 CAL Francisco PCAL lab Y PCAL work 22:28
20:43 OPS/TCS TJ LVEA N-> Y -> N HAZARD TRANSITION then CO2Y table adjustment, back to safe 21:13
20:48 TCS Camilla LVEA Y Join TJ, CO2Y table 21:13
21:17 SAF LVEA LASER SAFE LVEA N LVEA IS LASER SAFE 01:17
21:31 ISC Keita LVEA N AS_B SEG3 testing 21:49
21:47 OPS Oli LVEA N Sweep 22:13
23:37 PSL RyanS, Jason CER, PSL racks N Pull cable  

IFO/Locking:

Some of the maintenance work completed includes:

We did not do a DAQ restart

Images attached to this report
H1 ISC (TCS)
camilla.compton@LIGO.ORG - posted 16:04, Tuesday 19 November 2024 - last comment - 15:09, Tuesday 03 December 2024(81358)
Beam Profile Measurements of ALS-Y path

Oli, Camilla. WP12203. Repeat of some of the work done in 2019: EX: 52608, EY: 52636; older: part 1, part 2, part 3.

We misaligned ITMY and turned off the ALS-Y QPD servo with H1:ALS-Y_PZT_SWITCH and placed the Ophir Si scanning slit beam profiler to measure both the 532nm ALSY outgoing beam and the ALSY return beam in the HWS path.

The outgoing beam was a little oblong in the measurements but looked pretty clean and round by eye; the return beam did not! Photos of the outgoing and return beams are attached. The outgoing beam was 30mW, the return beam 0.75mW.

Attached are the 13.5% and D4sigma measurements; I also have photos of the 50% measurements if needed. Distances are measured from the optic where the HWS and ALS beams combine, ALS-M11 in D1400241.

We had previously removed HWS-M1B and HWS-M1C and translated HWS-M1A from what's shown in D1400241-v8 to remove clipping.
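For completeness, the usual way to turn such scans into a waist location is a Gaussian-beam fit of radius vs. distance. A minimal sketch with placeholder numbers (the real values are in the attached data):

    import numpy as np
    from scipy.optimize import curve_fit

    lam = 532e-9   # ALS wavelength [m]

    def w(z, w0, z0):
        zR = np.pi * w0**2 / lam                    # Rayleigh range
        return w0 * np.sqrt(1 + ((z - z0) / zR)**2)

    z_m = np.array([0.10, 0.30, 0.50, 0.80])          # distance from ALS-M11 [m]
    w_m = np.array([420e-6, 460e-6, 540e-6, 700e-6])  # radii = D4sigma/2 [m]

    (w0, z0), _ = curve_fit(w, z_m, w_m, p0=[400e-6, 0.0])
    print(f'waist w0 = {w0 * 1e6:.0f} um at z0 = {z0:.2f} m')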

Images attached to this report
Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 11:07, Tuesday 26 November 2024 (81491)

TJ, Camilla

We expanded on these measurements today: we measured the positions of the lenses and mirrors in both the ALS and HWS beampaths, and took beamscan data further from the periscope, where the beam is changing size more. Data is attached for today, and all the data together calculated from the VP. A photo of the beamscanner in the HWS return ALS beam path is also attached.

Images attached to this comment
Non-image files attached to this comment
camilla.compton@LIGO.ORG - 15:09, Tuesday 03 December 2024 (81599)

Oli, Camilla

Today we took some beam measurements between ALS-L6 and ALS-M9. These are in the attached documents with today's data and all the data. The horizontal A1 measurements seemed strange before L6; we're unsure why, as further downstream, when the beam is larger and easier to see by eye, it looks round.

Images attached to this comment
Non-image files attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:00, Tuesday 19 November 2024 (81366)
Ops Eve Shift Start

TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 10mph Gusts, 8mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 1.47 μm/s
QUICK SUMMARY: Due to very rapidly rising microseism, H1 will not be locking this evening or overnight, which allows us to run a long test for PSL glitch hunting by leaving just the PMC locked (no FSS, ISS, IMC). The DB37 cable was unplugged from the FSS fieldbox at 23:45 UTC.

H1 TCS
thomas.shaffer@LIGO.ORG - posted 14:29, Tuesday 19 November 2024 - last comment - 09:24, Wednesday 04 December 2024(81362)
Recent TCS CO2Y lock loss and table work today

Camilla C., TJ

Recently, the CO2Y laser that we replaced on Oct 22 has been struggling to stay locked for long periods of time (alog 81271 and trend from today). We've found loose or bad cables in the past that have caused us issues, so we went out on the table today to double-check they are all OK.

The RF cables that lead into the side of the laser can slightly impact the output power when wiggled, in particular the ones with a BNC connector, but not to the point that we think it would be causing issues. The only cable that we found loose was for the PZT that goes to the head of the laser. The male portion of the SMA connector that comes out of the laser head was loose and cannot be tightened from outside of the laser. We verified that the connection from this to the cable was good, but wiggling it did still introduce glitches in the PZT channel. I don't think we've convinced ourselves that this is the problem, though, because the PZT doesn't seem to glitch when the laser loses lock; instead it runs away.

An unfortunate consequence of the cable wiggling was that one of the Beckhoff plugs at the feedthrough must have been unseated slightly, which caused our mask flipper readbacks to read incorrectly. The screws for this plug were not working, so we just pushed the plug back in to fully seat it and all seemed to work again.

We still are not sure why we've been having these lock losses lately; the 2nd and 3rd attachments show a few of them from the last day or so. They remind me of back in 2019 when we saw this: example1, example2. The fix was ultimately a chiller swap (alog 54980), but the flow and water temperature seem more stable this time around. Not completely ruling it out yet, though.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 09:24, Wednesday 04 December 2024 (81617)

We've only had two relocks in the last two weeks since we readjusted the cables. This is within its normal behavior. I'll close FRS32709 unless this suddenly becomes unstable again. Though there might be a larger problem of laser stability, I think closing this FRS makes sense since it references a specific instance of instability.

Both X and Y tend to have long stretches where they don't relock and periods where they have issues staying locked (attachment 2). Unless there are obvious issues with chiller supply temperature, loose cables, wrong settings, etc., I don't think we have a great grasp of why it loses lock sometimes.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 14:17, Tuesday 19 November 2024 (81363)
LVEA has been Swept

LVEA has been swept. Just had to unplug an (already off) PEM function generator and turn off the lights for the HAM5/6 giant cleanroom.

LHO VE
jordan.vanosky@LIGO.ORG - posted 13:51, Tuesday 19 November 2024 (81361)
Replacement of Faulty Gauge (PT 154) on FC-A Cross A2

Today we swapped the PT154 gauge on the FC-A section that was reported faulty in alog 81078 with a brand new gauge, same make/model.

FCV1 & 2 were closed to isolate the FC-A section, and the angle valve on the A2 cross was closed to isolate the gauge. The volume was vented with dry nitrogen and the gauges were swapped. The CF connection was helium leak checked; no He signal above the HLD background of ~2e-10 Torr-l/s.

Closing WP 12200

Images attached to this report
H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 11:54, Tuesday 19 November 2024 - last comment - 14:46, Tuesday 19 November 2024(81355)
ASC-AS_B_DC interface analog gain was put back to HIGH (i.e. +20dB)

Related alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81320

Some time ago (July 9, 2024) the analog DC gain of H1:ASC-AS_B_DC was set from "High" to "Low" when I was looking to extract more information from the WFSs about the power coming past the Fast Shutter when the shutter bounces down after high-power locklosses. Since this is no longer necessary (see plots in alog 81130), and since keeping it means we have to either adjust the dark offset once in a while and/or adjust the "shutter closed" threshold for the Fast Shutter test (alog 81320), I set the gain switch back to "HIGH" and disabled the +20dB digital gain in H1:ASC-AS_B_DC_SEG[1234] FM4.

Interestingly, when I flipped the gain switch, the SEG3 output didn't change (1st attachment). It could be that this specific DC channel is stuck at HIGH or LOW, but it could also be that the analog offset happens to be really small for that channel. I cannot test it right now as we're leaving the IMC unlocked. As soon as the light comes back I will go to the floor and test.

The second attachment is the relevant MEDM screen (showing the state it should be in now), and the third is a picture of the interface chassis (the switch I flipped is circled in red; it's supposed to be HIGH now).

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 13:50, Tuesday 19 November 2024 (81360)

After the IMC was relocked I went to the floor, opened the fast shutter, switched the analog gain switch back and forth several times and confirmed that all segments responded correctly.

ryan.crouch@LIGO.ORG - 14:46, Tuesday 19 November 2024 (81364)

I accepted the change in SDF, in both SAFE and OBSERVE, to keep FM4 off.

Images attached to this comment
H1 PSL
victoriaa.xu@LIGO.ORG - posted 11:42, Tuesday 19 November 2024 - last comment - 17:53, Tuesday 19 November 2024(81354)
PSL PMC + FSS was continuously locked over emergency vent

It looks like the PSL PMC + FSS can usually stay locked for long periods of time, for example >1 month over the emergency vent in August. See the attached trends of the number of locked days and hours over the last few months (ndscope saved as /ligo/home/victoriaa.xu/ndscope/PSL/pmc_fss_can_stay_locked_over_emergency_o4b_vent.yaml).

The last time the PMC was locked for almost a month ended on Sept 23, 2024. Since then the PSL PMC has not stayed locked for over 5 days, but this is most likely due to commissioning tests and debugging which started around then.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 13:20, Tuesday 19 November 2024 (81357)

Some trends showing several PSL signals over O4b, and the end of O4a.

Key points- PMC + FSS stayed locked continuously during both the O4a/b break (Feb 2024), and the emergency vent (Aug 2024), with minimal glitching in PMC TRANS and Ref Cav Trans PD (FSS-TPD).

Images attached to this comment
victoriaa.xu@LIGO.ORG - 17:53, Tuesday 19 November 2024 (81369)

Trying to compare PSL-related spectra between the original O4 NPRO and the current NPRO using DTT is kinda confusing.

Comparing a time before, with the O4 NPRO, from 21 Oct 2024 04:55 UTC (blue, 169 Mpc), against now, with the O3 NPRO, at 16 Nov 2024 08:20 UTC (red, 169.9 Mpc).

For the fast mon spectra in the top left plot, FSS FAST MON OUT looks 2-5x noisier now than before. H1:PSL-FSS_FAST_MON_OUT_DQ was calibrated into Hz/rtHz using zpk = ([], [10], [1.3e6]) (see 81251; this makes the H1 and L1 spectra into comparable Hz/rtHz units, 81210). A sketch of applying this calibration is included below.

But for FSS-TPD (ref cav trans), it looks like there's some extra 1-10 Hz noise, though otherwise the trans spectra might be quieter. Similarly confusing, the ISS AOM control signal looks quieter. There is no clear takeaway from just these spectra on how to compare the O3/O4 NPROs.
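A sketch of how the zpk calibration mentioned above can be applied, assuming the pole is specified in Hz (as in foton/DTT conventions) and the gain is a plain multiplier; the exact foton gain normalization may differ:

    import numpy as np
    from scipy import signal

    zeros_hz, poles_hz, gain = [], [10.0], 1.3e6

    # scipy's analog-filter tools want zeros/poles in rad/s.
    z = [-2 * np.pi * f for f in zeros_hz]
    p = [-2 * np.pi * f for f in poles_hz]

    freqs = np.logspace(-1, 3, 500)   # 0.1 Hz to 1 kHz
    _, h = signal.freqs_zpk(z, p, gain, worN=2 * np.pi * freqs)

    asd_counts = np.ones_like(freqs)   # placeholder ASD [counts/rtHz]
    asd_hz = asd_counts * np.abs(h)    # calibrated ASD [Hz/rtHz]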

Images attached to this comment
H1 AOS
neil.doerksen@LIGO.ORG - posted 15:37, Wednesday 13 November 2024 - last comment - 08:12, Friday 22 November 2024(81257)
NN Seismic Array HS-1 Install

2024 Nov 12

Neil, Fil, and Jim installed an HS-1 geophone in the biergarten (image attached). The HS-1 is threaded to a plate and the plate is double-sided taped to the floor. The signal was non-existent; we must install a pre-amplifier to boost it.

2024 Nov 13

Neil and Jim installed an amplifier (SR560) to boost the HS-1 signal (images attached). The circuitry was checked to ensure the signal makes it to the racks. However, when left alone there is no signal coming through (image attached, see blue line labelled ADC_5_29). We suspect the HS-1 is dead. The HS-1 and amplifier are now out of the LVEA; the HS-1's baseplate is still installed. We can check one or two more things, or wait for more HS-1s to compare.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 16:45, Tuesday 19 November 2024 (81367)SEI

Fil and I tried again today; we couldn't get this sensor to work. We started from the PEM rack in the CER, plugging the HS1 through the SR560 into the L4C interface chassis and confirming the HS1 would see something when we tapped it. We then moved out to the PEM bulkhead by HAM4 and again confirmed the HS1/SR560 combo still showed signal when tapping the HS1. Then we moved to the biergarten and plugged in the HS1/SR560 right next to the other seismometers. While watching the DAQ readout of the HS1 and of one of the Guralps I have connected to the PEM AA, we could see that both sensors saw when I slapped the ground near the seismometers, but the signal was barely above what looks like electronics noise on the HS1, while the Guralp showed lots of signal that looked like ground motion. We tried gains from 50-200 on the SR560; none of them really seemed to improve the SNR of the HS1. The HS1 is still plugged in overnight, but I don't think this particular sensor is going to measure much ground motion.

brian.lantz@LIGO.ORG - 08:12, Friday 22 November 2024 (81413)

One check for broken sensors: be sure you can feel the mass moving when the HS-1 is in the correct orientation. A gentle shake in the vertical, inverted, and horizontal orientations will quickly reveal which orientation is correct.

H1 PSL (ISC, Lockloss, OpsInfo)
camilla.compton@LIGO.ORG - posted 11:12, Monday 11 November 2024 - last comment - 15:06, Tuesday 19 November 2024(81193)
Overview of PSL Story so far

After discussions with control room team: Jason, Ryan S, Sheila, Tony, Vicky, Elenna, Camilla 

Conclusions: The NPRO glitches aren't new. Something changed that makes us less able to survive them in lock. The NPRO was swapped, so it isn't the issue. Looking at the timing of these "IMC" locklosses, they are caused by something in the IMC or upstream 81155.

Tagging OpsInfo: Premade templates for looking at locklosses are in /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/PSL_lockloss_search_fast_channels.yaml and will come up with the commands 'lockloss select' or 'lockloss show 1415370858'.

Comments related to this report
camilla.compton@LIGO.ORG - 15:45, Monday 18 November 2024 (81340)ISC
  • November 12th:
    • Took TFs 81247
    • Tightened loose ISS AOM RF cable 81247
    • Tested and characterized in-service TTFSS, SN LHO01: tuned notches and replaced a bad PA85 op amp 81247
  • November 13th:
    • IMC stayed locked without IFO 81262
    • FSS OLG and gain change 81254
    • IMC OLG and gain change 81259
  • November 14th:
    • PSL Beckhoff power cycle 81281
    • Replace 35MHz with Marconi: 81277
    • Tightened loose ISS AOM cable: 81280
  • November 18th:
    • LSC POP un-clipped 81329
    • Issues found with IMC locked checker in ISC_LOCK's READY 81336 (maybe keeping us in DOWN longer than needed)

Updated the list of things that have been checked above, and attached a plot where I've split the IMC-only tagged locklosses (orange) from those tagged IMC and FSS_OSCILLATION (yellow). The non-IMC ones (blue) are the normal locklosses and are (mostly) the only kind we saw before September.

Images attached to this comment
camilla.compton@LIGO.ORG - 15:06, Tuesday 19 November 2024 (81365)ISC
  • November 14th:
    • PMC OLG measured 81283
  • November 19th:
    • Scopes setup at PSL racks 81359
    • PMC+FSS only test shows glitches 81356
    • Reverted EX 28bit LIGO-DAC change back to 20-bit 81350