Reports until 14:44, Monday 21 July 2025
H1 ISC (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 14:44, Monday 21 July 2025 - last comment - 11:07, Wednesday 23 July 2025(85895)
pimon minor changes

Elenna noticed that pimon, which we run on nuc25, hadn't been consistently saving data; there were large periods of time with nothing recorded. I first verified that it would still write log and npz files, then redirected them to /ligo/data/pimon/locklosses/, where it will now put all of the npz files. Elenna also asked that the script save the time series and not the PSDs, so I made that change as well. I'm not sure of the file size difference (the PSDs were about 3.6 MB each), so we should keep an eye on that and get rid of old files.
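For context, a minimal sketch of the kind of per-lockloss save this amounts to (assuming numpy; the file naming, fields, and directory layout here are illustrative, not the actual pimon code):

# Illustrative only: not the actual pimon code.
import os
import numpy as np

OUTDIR = "/ligo/data/pimon/locklosses"

def save_lockloss_record(gps_time, freqs, psds, fs=None, timeseries=None):
    """Write PSDs (and optionally the raw time series) for one lockloss to an npz file."""
    fname = os.path.join(OUTDIR, "lockloss_%d.npz" % int(gps_time))
    data = {"gps": gps_time, "freqs": freqs, "psds": psds}
    if timeseries is not None:
        # Raw time series is much larger than the ~3.6 MB PSD files, so old files
        # should be cleaned up periodically if this option is used.
        data["fs"] = fs
        data["timeseries"] = timeseries
    np.savez_compressed(fname, **data)
    return fname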

Operators, please verify that pimon is running on nuc25, and restart it if necessary. The normal launch script will start it up.

Comments related to this report
thomas.shaffer@LIGO.ORG - 11:07, Wednesday 23 July 2025 (85938)

Apparently, it struggles to save the time series data and will freeze up. I've changed it back to only save the PSDs on lockloss and we'll see if it survives better.

H1 ISC
elenna.capote@LIGO.ORG - posted 13:07, Monday 21 July 2025 (85894)
Calibration lines in DHARD

The calibration lines still appear in DHARD P and Y to varying degrees. Here is a higher resolution spectrum of DHARD P and Y, with OMC DCPD sum for reference. DHARD P shows all four calibration lines (L1 15.6 Hz, L2 16.4 Hz, L3 17.6 Hz and PCAL 17.1 Hz) between 15-18 Hz, while DHARD Y only seems to show the 16.4 Hz line in L2.

Today, Sheila and I also looked at the phasing of the AS A/B 45 WFS. Our thinking was that a DARM line could be phased to be mainly in Q for each segment. The segments all show the calibration lines prominently, so they can be used as a phasing reference. It was immediately apparent that, for all segments of both WFS, the DARM line is at least a factor of 2 higher in I than in Q. We decided not to make any phasing changes at this time, since rephasing these lines into Q would change the sign dramatically and may impact locking when we put DARM on RF.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 12:09, Monday 21 July 2025 - last comment - 15:19, Thursday 28 August 2025(85893)
New MICH ASC lowpasses engaged

Instead of redesigning the MICH ASC loops, I just moved the lowpasses from 15 Hz to 12 Hz, which should reduce the gain by 15 dB between 10 and 20 Hz for both pitch and yaw. I tested them today with no issues, so I adjusted the MICH ASC engagement in the ISC_DRMI guardian and updated SDF (I accidentally overwrote the SDF screenshot with my screenshot of the filter).

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:19, Thursday 28 August 2025 (86635)

Based on the results from Sheila's noise budget, I adjusted this filter to 9 Hz, which has further reduced the MICH ASC coherence with DARM. The new filters are in FM8, and lownoise ASC engages them. SDFed and guardian code tested.

References are the coherence from the last lock; live traces are after the filter is engaged.

Images attached to this comment
H1 SQZ
camilla.compton@LIGO.ORG - posted 11:56, Monday 21 July 2025 (85892)
FC Detuning Checked, already in best spot

Elenna and I checked the FC detuning by taking it slowly from the nominal -26 to -50, where we could clearly see that low-frequency DARM was worse, and then stepping up to -15 (any lower unlocked the FC, see 85886) using ezcastep H1:IOP-LSC0_RLF_FREQ_OFS -s 30 '+1,35'; see the attached plot. The best detuning is close to where we already are, so we left it at -27.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:25, Monday 21 July 2025 (85889)
Mon CP1 Fill

Mon Jul 21 10:08:47 2025 INFO: Fill completed in 8min 44secs

 

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 09:02, Monday 21 July 2025 (85886)
Lockloss while setting up for FC detuning commissioning

First I tried taking H1:IOP-LSC0_RLF_FREQ_OFS down towards zero but we lost FC lock at  H1:IOP-LSC0_RLF_FREQ_OFS = -8. FC would not re-lock until I brought it back to -15.

I wanted to scan the FC detuning from -15 to -60, but the command didn't seem to work with negative steps.

Then, 5 s after moving the FC detuning from -15 to -60, the IFO lost lock (1437146099). The lockloss appears unrelated to the FC detuning change, as the FC only lost lock after ETMX_L3 started to become unstable (plot). There was slightly increased noise at low frequency, but not enough scatter to cause a lockloss.

Ideally I would have run ezcastep H1:IOP-LSC0_RLF_FREQ_OFS -s 30 '+1,45' to get 30-second steps from -60 to -15. We've been at -44 before with no issues (83526), but we wanted to make the low-frequency noise worse so we could fit for the best operating point.
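For reference, a hedged Python equivalent of that ezcastep call (assuming the standard ezca bindings and that the H1: prefix comes from the environment), stepping the offset by +1 every 30 seconds, 45 times:

# Illustrative only: the same scan done from python instead of ezcastep.
import time
from ezca import Ezca

ezca = Ezca()  # picks up the H1: prefix from the environment
CHAN = 'IOP-LSC0_RLF_FREQ_OFS'
for _ in range(45):      # 45 steps of +1 takes the detuning from -60 to -15
    ezca[CHAN] += 1
    time.sleep(30)       # 30 s per step, matching 'ezcastep -s 30'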

Images attached to this report
H1 AOS
joseph.betzwieser@LIGO.ORG - posted 08:46, Monday 21 July 2025 (85885)
Updating pydarm release to tag 20250721.0
I followed the pydarm deployment instructions here to update the LHO pydarm install.

This is the 20250721.0 tag for pydarm, which includes a bug fix for high frequency roam line handling, letting us indicate the correct reference model in the pydarm_H1.ini file.

This is not the default cds conda environment, but it is the one you get when typing pydarm at a command line, or when invoking it explicitly with "conda activate /ligo/groups/cal/conda/pydarm".
LHO General
corey.gray@LIGO.ORG - posted 07:40, Monday 21 July 2025 - last comment - 07:51, Monday 21 July 2025(85881)
Mon DAY Ops Transition

TITLE: 07/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 113Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 9mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Arrived to a Dust Alarm in the PSL over the last hour (there have been winds around the corner station for the last 2 hours, but only about 15 mph).

For H1 over the last 12 hours we have had 4 locks (the current one is 90 min); 2 of the 3 locklosses were ETMx glitches, and it looks like all reacquisitions overnight were automatic.

H1 is scheduled for Commissioning from 15:00-19:00 UTC (8am-noon PDT).

Comments related to this report
corey.gray@LIGO.ORG - 07:51, Monday 21 July 2025 (85883)

Notes From Ops Shift Check Sheet

1) Dust Monitor Check Notifications for LVEA5 & LAB2

Ran the "check_dust_monitors_are_working" script the last two mornings and received notifications for the following:

  • H1:PEM-CS_DUST_LVEA5  WARNING: dust counts did not change, please investigate
  • H1:PEM-CS_DUST_LAB2  Error: data set contains 'not-a-number' (NaN) entries

2)  Access System "Flashing Doors"

  • EY has (2) flashing doors, but I'm pretty sure this has been ongoing for years.
  • VPW has 3 door issues (Exterior Roll-up, LDAS entry, Dirty Shop entry), and the Wood Shop has a roll-up door issue.

3) LHO Control Room Screenshots & FOMs

  • H1 Glitches (nuc27) not posting (this has been the case for a while)
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:01, Sunday 20 July 2025 (85880)
Ops Eve shift & Lockloss from NLN

TITLE: 07/21 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Started the Shift unlocked due to earthquake.

Got back to observing for 1 minute before a lockloss.
Then relocked again for an hour and 17 minutes before another Unknown Lockloss from NLN.

Relocking now, currently at LOW_NOISE_COIL_DRIVERS; we should be locked soon.

 

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 18:44, Sunday 20 July 2025 (85879)
Sunday Eve Shift report.

TITLE: 07/21 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 9mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:

Went up to the Roof to check for fires or smoke. Took some pictures in all directions.

Inherited an unlocked H1 after a large earthquake from Alaska.
Did an Initial Alignment, which had an SRM WD trip towards the end, so I reran it without any issues.
We reached NOMINAL_LOW_NOISE at 01:25 UTC.
Observing at 01:27 UTC.

And lost lock to an unknown lockloss at 01:29 UTC.



 

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 16:29, Sunday 20 July 2025 (85872)
Sun DAY Ops Summary

TITLE: 07/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Flurries of earthquakes from Japan.  Had another HAM6 High Voltage power supply failure.  Able to finally finish the change for removing the SR3 Dither Pitch Offset.  Shift ended with a M6.2 Alaska earthquake which took H1 down.
LOG:

H1 ISC (FRS, ISC, OpsInfo)
corey.gray@LIGO.ORG - posted 14:17, Sunday 20 July 2025 - last comment - 14:21, Sunday 20 July 2025(85876)
More OMC High Voltage Power Supply Weirdness

(Corey, Elenna [remote], Tony)

Recent Backstory Alogs:  

Today, while locking H1, we noticed that we were stuck in PREP_DC_READOUT_TRANSITION. DIAG_MAIN was mentioning an OMC issue. The OMC_LOCK guardian was in a loop of trying to sweep the PZT to FIND_CARRIER, but Elenna noticed that the PZT was not sweeping. Then she remembered the previous issues!

(I was on the wrong trail, because I had assumed that the SR3 change I made was somehow the cause of the issue; it was NOT.)

We still had the OMC_LOCK in a non-ISC_LOCK-managed AUTO state and we were not totally sure of how to have ISC_LOCK remanage OMC_LOCK.  Tony found a 2015 alog from Patrick which we used to get OMC_LOCK managed again.  H1 continued on and has been in OBSERVING.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 14:21, Sunday 20 July 2025 (85877)FRS, OpsInfo

Added a new comment for today's HAM6 Power Supply Issue to the already-opened FRS Ticket 34433.

OPS NOTE:  If you are having trouble locking the OMC (no light on the HAM6 OMC Trans camera, stuck at PREP_DC_READOUT_TRANSITION, and no evidence of the OMC PZT2 working), it might be due to this power supply!

H1 SUS (SUS)
corey.gray@LIGO.ORG - posted 12:13, Sunday 20 July 2025 - last comment - 14:24, Sunday 20 July 2025(85875)
SR3 Dither Pitch Offset Zeroed + H1SUSSR3 safe.snap Updated

Finally had the opportunity (due to an H1 lockloss during my shift) to address the SR3 Dither Pitch Offset that Oli noted last week (alog 85830) and that I first looked at during a lockloss at the end of my shift Friday evening (alog 85855), but my changes to the Dither Pitch Offset did not hold because the H1SUSSR3 SDF needed its safe.snap changed/accepted.

It's been ages since I've updated a safe.snap, so pardon the less elegant steps I took to update the SR3 SDF here. Basically I:

  1. Turned the OFFSET button OFF (then Accepted the SDF diff for the safe.snap) and then
  2. Took the OFFSET from 32.3 to 0 (then Accepted the SDF diff, once again, for the safe.snap). 

Both updates are screenshotted separately! Ha.  The new SR3 OPTICALIGN_P_OFFSET has been at its new offset (457.9, which used to be 445.8) since Friday (and aligned).  So now that the SR3 pointing changed with the Dither Offset going to zero, I ran a new alignment.

Currently, H1 has locked DRMI, and I'm sure the final step here will be updating the observe.snap with the changes noted above.

ADDENDUM:  Currently stuck at PREP_DC_READOUT_TRANSITION, where the OMC can't lock.  I'm wondering if this is due to the SR3 change: the SR3 Top Mass had been at one spot since Friday evening, and now with the Dither Offset zeroed, the SR3 Top Mass is back to the pointing we had for the last few weeks up until Friday night (see the attached ndscope trend over the weekend).

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 14:24, Sunday 20 July 2025 (85878)

After the OMC Power Supply issues noted above, H1 made it to NLN, but it did in fact have SDF diffs in OBSERVE.snap for H1SUSSR3.  Those new settings were ACCEPTED in SDF (see attached), and this task is now complete.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:16, Sunday 20 July 2025 (85874)
Sun CP1 Fill

Sun Jul 20 10:08:56 2025 INFO: Fill completed in 8min 52secs

 

Images attached to this report
H1 DetChar (DetChar, PEM)
derek.davis@LIGO.ORG - posted 17:22, Friday 18 July 2025 - last comment - 11:20, Thursday 24 July 2025(85856)
20.2 Hz line appeared Jun 9, turns on and off

Prompted by noticing on-off behavior at around 20.2 Hz in today's daily strain spectrogram, I've done some additional investigation into the source and behavior of this line:

The 20.2 Hz line, which is currently prominent in DARM, first appeared in accelerometer and microphone data from the corner station on June 9. The first appearance of this line that I found was in the PSL mics, as shown in this spectrogram. This line then appeared in DARM in the first post-vent locks a few days later. The summary of work from June 9 does not show anything obvious to me that would be the source of this new noise.

This feature also turns off and on multiple times during the day. An example from today can be seen in this spectrogram. Most corner station microphones and accelerometers exhibit this feature, but it is most pronounced visually in the PSL microphone spectrograms. I was unable to identify any other non-PEM channels that showed the same on-off behavior, but this does reveal many change points that should aid in tracking down the source. Almost every day, this line exhibits abrupt on-off features at different times of the day and for varying durations. Based on my initial review, these change points appear to be more likely during the local daytime (although not at any specific time).  When the line first appeared, it was usually in the "off" state and then turned on for short periods. However, this has slowly changed, so that now the line is generally in the "on" state and turns off for brief periods. 

  

Images attached to this report
Comments related to this report
derek.davis@LIGO.ORG - 09:24, Monday 21 July 2025 (85887)

Looking into past alogs, I noticed that I reported this same issue last summer in alog 79948. Additional discussion about this line can be found in the detchar-requests repository (requires authentication). In this case, the line appeared in late spring and disappeared in early autumn of 2024. No source was identified before the line disappeared. 

Going back further, I also see the same feature appearing in late spring and disappearing in early autumn of 2023. The presence of the line is hence correlated with the outside temperature, likely related to some aspect of the air conditioning system that is only needed when it is (roughly) hotter outside than inside. This also means that we can expect this line to remain present in the data until autumn unless mitigation measures are taken.

timothy.ohanlon@LIGO.ORG - 11:20, Thursday 24 July 2025 (85959)

I looked briefly into the 20 Hz noise without much success. Comparing the floor accelerometers, the noise is louder in the EBAY than the LVEA (although the EBAY accelerometer signal hasn't looked good since the vent). The next closest is HAM1, followed by BS. So the noise is around the -X-Y corner of the LVEA, likely in the EBAY, Transition Area, or Optics Lab, because HAM6 sees less motion than HAM1 and the EBAY sees the most.

Images attached to this comment
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 11:14, Thursday 17 July 2025 - last comment - 08:39, Monday 21 July 2025(85820)
SQZ angle ADF servo back on in guardian

Sheila, Camilla

We ran a couple of squeezing angle scans to check the settings of the ADF servo. 

One thing we realized is that the ADF Q demod signal is divided by H1:SQZ-ADF_OMC_TRANS_Q_NORM rather than multiplied, which is what we had thought.  We changed the coefficient from 0.18 to 5.8. The first png attachment shows that this transforms the blue ellipse into the orange one.  It would be a bit better if we first adjusted the demod phase to maximize the Q signal, so that the ellipse would be aligned along the axis and the rescaled version would be more like a circle.  However, you can see in the right-side plot that this gives us a reasonably linear readback of sqz angle as we change the RF6 demod angle (which is actually cabled up to RF3 phase) around 150 degrees, where our best squeezing is.

Camilla turned the servo back on in sqzparams. 

For future reference, a slightly better way to do this would be to move the demod phase to maximize Q, do a scan, and set H1:SQZ-ADF_OMC_TRANS_Q_NORM to the ratio (max of Q)/(max of I).  Then you can do a smaller scan around the point with the best squeezing, and in sqzparams set sqz_ang_adjust_ang to the readback angle that you think is best.
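A hedged numpy sketch of that recipe (the exact front-end signal processing is not reproduced here; the arctangent form of the readback is an assumption for illustration):

# Illustrative only: compute the Q normalization from a sqz-angle scan and form
# an example angle readback. The real ADF processing may differ in detail.
import numpy as np

def q_norm_from_scan(I, Q):
    """H1:SQZ-ADF_OMC_TRANS_Q_NORM set to (max of Q)/(max of I) from the scan."""
    return np.max(np.abs(Q)) / np.max(np.abs(I))

def readback_angle_deg(I, Q, q_norm):
    """Q is divided (not multiplied) by Q_NORM before the angle is formed."""
    return np.degrees(np.arctan2(np.asarray(Q) / q_norm, I))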

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:27, Thursday 17 July 2025 (85827)OpsInfo

This didn't work at the start of today's lock, as the ADF frequency had been left near 10 kHz. Once I put the ADF back to 322 Hz it seemed to work fine.

For operators, this means that if the squeezing looks bad, running SCAN_SQZANG_FDS alone won't change the SQZ angle. You would need to:

  • Request SQZ_MANAGER to SCAN_SQZANG_FDS
  • Once it's done, if sqz has improved, adjust H1:SQZ-ADF_OMC_TRANS_PHASE to put H1:SQZ-ADF_OMC_TRANS_SQZ_ANG around zero.
    • See the attached screenshot showing the channels to change and the ndscope; this is from sitemap > sqz > sqz manager > ADF.
  • Request SQZ_MANAGER to FREQ_DEP_SQZ

If the servo is running away, try the above instructions; if that doesn't work, the servo can be turned off by setting use_sqz_angle_adjust = False in sqz/h1/guardian/sqzparams.py. Please alog and tag SQZ.
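As a rough illustration of the manual phase adjustment above (an assumption-laden sketch, not a vetted procedure: the sign and size of the phase step would need to be found by trial):

# Illustrative only: nudge the ADF demod phase until the readback angle is near zero.
import time
from ezca import Ezca

ezca = Ezca()

def zero_adf_sqz_ang(step=1.0, tol=1.0, max_iter=50):
    for _ in range(max_iter):
        ang = ezca['SQZ-ADF_OMC_TRANS_SQZ_ANG']
        if abs(ang) < tol:
            break
        # Assumed sign convention; flip 'step' if the angle moves the wrong way.
        ezca['SQZ-ADF_OMC_TRANS_PHASE'] += step if ang > 0 else -step
        time.sleep(2)
    return ezca['SQZ-ADF_OMC_TRANS_SQZ_ANG']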

Images attached to this comment
camilla.compton@LIGO.ORG - 08:39, Monday 21 July 2025 (85884)

Since we've had this servo running, the range has been higher and sqz more stable, see attached.

Images attached to this comment
H1 AOS
elenna.capote@LIGO.ORG - posted 09:21, Monday 14 July 2025 - last comment - 09:30, Monday 21 July 2025(85738)
LOWNOISE ASC Locklosses

I previously noted a glitch about 30 seconds before a lockloss in LOWNOISE ASC, 85685. However, we had two more locklosses from this state last night and I do not see such a glitch, so that was likely a random coincidence. One of those locklosses appears to have been caused by an earthquake. However, since 6/11 we have had 9 locklosses in this state that occurred exactly 47 seconds into the state, which seems suspicious; one of those occurred last night, and the lockloss with the glitch fit the same pattern.

This seems to be coincident with the engagement of a few DHARD P filters:

2025-07-14_14:26:54.641531Z ISC_LOCK executing state: LOWNOISE_ASC (522)
2025-07-14_14:26:54.642230Z ISC_LOCK [LOWNOISE_ASC.enter]
2025-07-14_14:26:54.655894Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_PIT3_OSC_CLKGAIN => 300
2025-07-14_14:26:54.656325Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_PIT4_OSC_CLKGAIN => 300
2025-07-14_14:26:54.656732Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_PIT5_OSC_CLKGAIN => 300
2025-07-14_14:26:54.657043Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_YAW3_OSC_CLKGAIN => 300
2025-07-14_14:26:54.657438Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_YAW4_OSC_CLKGAIN => 300
2025-07-14_14:26:54.657892Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_YAW5_OSC_CLKGAIN => 300
2025-07-14_14:26:54.658134Z ISC_LOCK [LOWNOISE_ASC.main] timer['LoopShapeRamp'] = 5
2025-07-14_14:26:54.658367Z ISC_LOCK [LOWNOISE_ASC.main] timer['pwr'] = 0.125
2025-07-14_14:26:54.783581Z ISC_LOCK [LOWNOISE_ASC.run] timer['pwr'] done
2025-07-14_14:26:59.658298Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:26:59.719537Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_Y_GAIN => 200
2025-07-14_14:26:59.720456Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_Y_SW1 => 256
2025-07-14_14:26:59.846249Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_Y_SW2 => 20
2025-07-14_14:26:59.971686Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_Y => ON: FM3, FM8, FM9
2025-07-14_14:26:59.972384Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_Y_SW1 => 5392
2025-07-14_14:27:00.098073Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_Y_SW2 => 4
2025-07-14_14:27:00.223528Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_Y => ON: FM1, FM3, FM4, FM5, FM8
2025-07-14_14:27:00.224135Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CSOFT_P_SMOOTH_ENABLE => 0
2025-07-14_14:27:00.224497Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CSOFT_Y_SMOOTH_ENABLE => 0
2025-07-14_14:27:00.224868Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_P_SMOOTH_ENABLE => 0
2025-07-14_14:27:00.225188Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_Y_SMOOTH_ENABLE => 0
2025-07-14_14:27:00.225433Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 10
2025-07-14_14:27:10.225728Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:10.281803Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_TRAMP => 5
2025-07-14_14:27:10.408120Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_SW2 => 16
2025-07-14_14:27:10.533563Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P => OFF: FM9
2025-07-14_14:27:10.534285Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_SW1 => 256
2025-07-14_14:27:10.660088Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_SW2 => 4
2025-07-14_14:27:10.785453Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P => ON: FM3, FM8
2025-07-14_14:27:10.786315Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_GAIN => 208
2025-07-14_14:27:10.786535Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 5
2025-07-14_14:27:15.786858Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:15.847152Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_Y_TRAMP => 5
2025-07-14_14:27:15.847580Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_Y_GAIN => 5
2025-07-14_14:27:15.848666Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_P_GAIN => 5
2025-07-14_14:27:15.848917Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 5
2025-07-14_14:27:20.849050Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:20.906577Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ITMX_M0_DAMP_Y_TRAMP => 10
2025-07-14_14:27:20.907206Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ETMX_M0_DAMP_Y_TRAMP => 10
2025-07-14_14:27:20.907700Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ITMY_M0_DAMP_Y_TRAMP => 10
2025-07-14_14:27:20.908422Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ITMX_M0_DAMP_Y_GAIN => -0.5
2025-07-14_14:27:20.908830Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ETMX_M0_DAMP_Y_GAIN => -0.5
2025-07-14_14:27:20.909148Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ITMY_M0_DAMP_Y_GAIN => -0.5
2025-07-14_14:27:20.909562Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ETMY_M0_DAMP_Y_GAIN => -0.5
2025-07-14_14:27:20.909789Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 10
2025-07-14_14:27:30.910055Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:30.968166Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_P_GAIN => -0.2
2025-07-14_14:27:30.968527Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_Y_GAIN => -0.2
2025-07-14_14:27:30.968806Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_L_GAIN => -0.2
2025-07-14_14:27:30.969073Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_R_GAIN => -0.2
2025-07-14_14:27:30.969343Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_T_GAIN => -0.2
2025-07-14_14:27:30.969606Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_V_GAIN => -0.2
2025-07-14_14:27:30.969838Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 10
2025-07-14_14:27:40.970003Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:40.972085Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_P_SW1 => 1024
2025-07-14_14:27:41.097962Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_P_SW2 => 4
2025-07-14_14:27:41.223313Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_P => ON: FM4, FM8
2025-07-14_14:27:41.223637Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 10
2025-07-14_14:27:41.593743Z ISC_LOCK [LOWNOISE_ASC.run] Unstalling IMC_LOCK
2025-07-14_14:27:41.765955Z ISC_LOCK JUMP target: LOCKLOSS

I will take a look and see if there is anything unstable about these filters. Whatever is occurring seems to be too fast to be seen in the ASC signals themselves, and at first glance I don't see anything strange in the suspension channels either.

DHARD FM4 is engaged with a 10 second ramp; this is a change I made on 6/11 (84973) because we had lost lock twice in the same spot that day. Two of the locklosses at 47 seconds occurred before that change. Then, later that day on 6/11, I reengaged a boost in DHARD P, which only has a 5 second ramp (84980). Engaging that boost shouldn't be unstable, but maybe something bad occurs when they ramp at different times. I'm lengthening the ramp to 10 seconds.
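For reference, a guardian-style sketch of the pattern being tuned here (not the actual ISC_LOCK code; the per-FM ramp time lives in the foton filter file, so the guardian just has to wait at least that long before touching anything else). The state name is hypothetical, and ezca is the object the guardian environment provides:

# Illustrative only: engage a filter module and hold a wait timer matched to its ramp.
from guardian import GuardState

FM4_RAMP = 10  # seconds; must match the ramp set on the DHARD_P FM4 filter in foton

class ENGAGE_DHARD_P_LOWNOISE(GuardState):
    def main(self):
        # the FM4 ramp itself is defined in foton, not set here
        ezca.get_LIGOFilter('ASC-DHARD_P').switch_on('FM4')
        self.timer['LoopShapeRamp'] = FM4_RAMP

    def run(self):
        if not self.timer['LoopShapeRamp']:
            return  # still ramping; check again next cycle
        # ramp finished; the FM8 boost would be engaged next, with its own matched wait
        return True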

Comments related to this report
elenna.capote@LIGO.ORG - 11:44, Tuesday 15 July 2025 (85766)

We had another lockloss from this state at the 00:47 mark last night (1436615757), so I'm not sure this fixed the problem.

However, the lockloss was preceded by a glitch about 30 seconds before, like another lockloss I noticed in this state. This could be coincidence again, but it's looking a little suspicious!

elenna.capote@LIGO.ORG - 11:58, Tuesday 15 July 2025 (85768)

The glitch appears to be occurring due to the CHARD P change. We ramp a boost off over 2 seconds, a new shaping filter and lowpass on over 2 seconds, and then change the gain over 5 seconds. Looking at the step response of the shaping and lowpass filter, this ramp should probably be 10 seconds, with the gain ramp also 10 seconds to match. I will keep the boost ramping off over 2 seconds though. I increased the wait timer to 10 seconds to match this ramping. Model and guardian changes saved and loaded.

I am still not sure what is going on with DHARD P, but as a test I've now separated the lowpass and loop shape from the engagement of the boost, since we know those are individually stable to engage. We now engage FM4 with a 10 second ramp and wait time, then engage FM8 with a 5 second ramp and wait time. I edited the ramps and guardian code to do so, saved and loaded. This is kind of annoying, but it might help me debug what's going wrong here.

elenna.capote@LIGO.ORG - 13:49, Tuesday 15 July 2025 (85776)

I watched the signals during lownoise ASC, and this time I saw no glitch in CHARD during its lownoise transition. However, I saw a glitch when the DHARD P FM4 filter was engaged, and no glitch when FM8 was engaged. Maybe the ramp of FM4 should be even longer than 10 seconds. I increased the filter ramp to 15 seconds and increased the guardian wait timer to match. Both changes saved and loaded.

elenna.capote@LIGO.ORG - 09:30, Monday 21 July 2025 (85888)

We haven't had a lockloss in this state since this fix (but we've had plenty of locks), so I am going to declare this problem fixed!
