H1 ISC (SQZ)
anthony.sanchez@LIGO.ORG - posted 11:04, Wednesday 11 June 2025 (84971)
DARM Appraisal Post HAM1 Vent

Comparing SQZ times from this morning, Jun 11 2025, GPS time 1433689084, with a 15 min span from March 26th 2025, GPS time 1427073110.
Command run: python3 range_compare.py 1433689084 1427073110 --span 900

Today's time is well calibrated and well thermalized.
We can see new peaks in the ASD, and some of our old peaks are higher, especially from 10-15 Hz, 25-30 Hz, 500-600 Hz, and around 2 kHz.
We also seem to have a broadband decrease in both sensitivity and range: the sensitivity loss is small at most frequencies, but it spans a wide frequency range.
Unfortunately our range has dropped off from 80 Hz to 2 kHz by up to ~15 Mpc. See first page.
The good news is that there are some very slight gains in range in the 50-80 Hz frequency range. See third page.
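
For reference, here is a minimal sketch of the kind of comparison range_compare.py makes, assuming gwpy with NDS2/frame access (the strain channel and spectral settings are illustrative, not taken from the actual script):

    # Compare the ASD and BNS inspiral range over 900 s spans at the two GPS times above
    from gwpy.timeseries import TimeSeries
    from gwpy.astro import inspiral_range

    CHANNEL = 'H1:GDS-CALIB_STRAIN'           # illustrative strain channel
    for gps in (1433689084, 1427073110):      # this morning vs. 26 March 2025
        strain = TimeSeries.get(CHANNEL, gps, gps + 900)
        asd = strain.asd(fftlength=8, overlap=4)
        print(gps, inspiral_range(asd ** 2, fmin=10))   # BNS range in Mpc from the PSD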



 

Non-image files attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:33, Wednesday 11 June 2025 (84969)
Wed CP1 Fill

Wed Jun 11 10:10:49 2025 INFO: Fill completed in 10min 46secs

Good fill verified by Gerardo curbside.

Images attached to this report
H1 General (CDS, SEI)
anthony.sanchez@LIGO.ORG - posted 08:28, Wednesday 11 June 2025 - last comment - 08:52, Wednesday 11 June 2025(84964)
HPIHAM1 Channels not found.

The SDF Overview looks great except for this HPIHAM1 channel that was not found.

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 08:52, Wednesday 11 June 2025 (84967)

Channels not found list attached as a picture.

Images attached to this comment
H1 General (CAL, SEI)
anthony.sanchez@LIGO.ORG - posted 08:17, Wednesday 11 June 2025 - last comment - 09:56, Wednesday 11 June 2025(84963)
Wednesday Day Ops Morning Shift & Observing!

TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:

H1 is still locked 8 Hours and 30 minutes later!
All systems seem to be functioning.
 

There was talk of doing a calibration measurement, which I started right after making sure there wasn't anyone still working inside the LVEA.

I ran a PCAL BroadBand with this command:  
pydarm measure --run-headless bb
2025-06-11 07:44:58,555 config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
2025-06-11 07:44:58,571 available measurements:
  pcal: PCal response, swept-sine (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_.xml)
  bb  : PCal response, broad-band (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml)

The broadband finished, but I did not run the Simulines. The calibration gurus believed we don't need them before Observing because our calibration
"monitoring lines show a pretty good uncertainty for LHO this morning: https://gstlal.ligo.caltech.edu/grafana/d/StZk6BPVz/calibration-monitoring?orgId=1&var-DashDatasource=lho_calibration_monitoring_v3&var-coh_threshold=%22coh_threshold%22%20%3D%20%27cohok%27%20AND&var-detector_state=&from=1749629890225&to=1749652518797 Roughly +/-2% wiggle "
~Joe B

Clicked the button for Observing, and we went right into observing without any SDF issues!
Went into observing at 14:57 UTC.

There are messages, though, mostly from the SEI system, all of which are setpoint changes; see the SPM DIFFS for differences for HAMs 2, 3, 4, and 5.
But these have not stopped us from getting into Observing.

 

 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 09:56, Wednesday 11 June 2025 (84968)

I have attached a screenshot of the broadband measurement from this morning. It shows that the calibration uncertainty is within ±2%, which means that our new calibration is excellent!

For those who want to plot the latest PCAL broadband, you can use a template that I have saved in /opt/rtcds/userapps/release/cal/h1/dtt_templates/PCAL_BB_template.xml (aka [userapps] cal/h1/dtt_templates/)

In order to use this template, you must find the GPS time of the start of the broadband measurement, which I found today by converting the timestamp in Tony's post above into GPS time. This template pulls data from NDS2 because it uses GDS channels, so you will also need to go to the "Input" tab and put your current GPS time in the "Epoch stop" entry within the "NDS2 selection" box. The current time will hopefully be after the start time of the broadband measurement, which ensures the full span of data you need is requested from NDS2. If you don't do this, the template will give you an error when you try to run it.
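
As an aside, here is a minimal sketch of one way to do the UTC-to-GPS conversion, assuming gwpy is available (the exact tool used is not stated above, and the timestamp below is a placeholder):

    # Convert a UTC timestamp to GPS, and grab the current GPS time for the "Epoch stop" entry
    from gwpy.time import to_gps, tconvert
    gps_start = to_gps('2025-06-11 14:44:58')   # placeholder UTC start time of the measurement
    gps_now = int(tconvert('now'))              # current GPS time
    print(gps_start, gps_now)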

Images attached to this comment
H1 General (ISC, SQZ)
oli.patane@LIGO.ORG - posted 00:05, Wednesday 11 June 2025 - last comment - 11:22, Wednesday 11 June 2025(84958)
Ops Eve Shift End

TITLE: 06/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 135 Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY:

We are Observing! We've been locked for 1 hour. We got to NOMINAL_LOW_NOISE an hour ago, after a couple of locklosses, with Elenna's help (see below), and then took a couple of unthermalized broadband calibration measurements (84959, 84960). I also just adjusted the SQZ angle and was able to get better squeezing for the 1.7 kHz band, but the 350 Hz band squeezing is very bad. I am selecting DOWN so that if we unlock, we don't relock.

Early in the relocking process, we were having issues with DRMI and PRMI not catching, even though we had really good DRMI flashes. I finally gave up and went to run an initial alignment, but we had a bit of a detour when an error in SDFing caused Big Noise (TM) to be sent into PM1, tripping the software WD and then the HAM1 ISI and HAM1 HEPI. Once we got that figured out, we went through a full initial alignment with no issues.

Relocking, we had two locklosses from LOWNOISE_ASC from the same spot. Here are their logs (first, second). There were no ASC oscillations before the locklosses, so it doesn't seem to be due to the 1 Hz issues from earlier (849463). Looking at the logs, they both happened right after turning on FM4 for DHARD P, DcntrlLP. Elenna took a look at that filter and noticed that the ramp-on time might be too short; she changed it from 5 s to 10 s and updated the wait time in the guardian to match. She loaded that all in, and it worked!!

As a strange aside, after the second LOWNOISE_ASC lockloss, I went into manual IA to align PRX, but there was no light on ASC-AS_A. Left Manual IA, went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. Looked at the ASC screen and noticed that the fast shutter was closed. Selected OPEN for the fast shutter, and it opened fine. This was a weird issue??

LOG:

23:30UTC Locked and getting data for the new calibration
23:43 Lockloss
    - Started an initial alignment, trying to do it automatically after PRC align was bypassed in the state graph (84950)
    - Tried relocking, couldn't get DRMI or PRMI to catch, even with really good DRMI flashes
    - Went to manual initial alignment to just do PRX by hand, but saw the HAM1 ISI IOP DACKILL had tripped
        - Then HAM1 HEPI tripped, and I had to put PM1 in SAFE because huge numbers were coming in through the LOCK filter
        - It was due to an SDF error and was corrected
    - Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
    - Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
    - Tried going to manual IA to align PRX, but there was no light on ASC-AS_A. Left Manual IA, went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. Looked at the ASC screen and noticed that the fast shutter was closed. Selected OPEN for the fast shutter, and it opened fine.
06:03 NOMINAL_LOW_NOISE
06:07 Started BB calibration measurement
06:12 Calibration measurement done
06:36 BB calibration measurement started
06:41 Calibration measurement done
07:02 Back into Observing

Start Time | System | Name | Location | Laser Haz | Task | End Time
00:50 | VAC | Gerardo | LVEA | YES | Climbing around on HAM1 | 00:58
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 00:13, Wednesday 11 June 2025 (84962)

Unfortunately, since we don't want the IFO trying to relock all night if we lose lock, I have to select DOWN, but that means that the ISC_LOCK request is not in the right spot for us to stay in Observing. So we won't be Observing overnight, but we will stay locked (at least until we lose lock, at which point we will be in DOWN).

elenna.capote@LIGO.ORG - 11:22, Wednesday 11 June 2025 (84973)

Here is some more information about some of the problems Oli faced last night and how they were fixed.

PM1 saturations:

Unfortunately, this problem was an error on my part. Yesterday, Sheila and I were making changes to the DC6 centering loop, which feeds back to PM1. As a part of updating the loop design, I SDFed the new filter settings, but inadvertently also SDFed the input of DC6 to be ON in safe. We don't want this; SDF is supposed to revert all the DC centering loop inputs to OFF when we lose lock. Since I made this mistake, a large junk signal came in through the input of DC6 and then was sent to the suspension, which railed PM1 and then tripped the HAM1 ISI. Once I realized what was happening, I logged in and had Oli re-SDF the inputs of DC6 P and Y to be OFF.

You can see this mistake in my attached screenshot of the DC6 SDF; I carelessly missed the "IN" among the list of differences.

DHARD P filter engagement:

In order to avoid some control instabilities, Sheila and I have been reordering some guardian states. Specifically, we moved the LOWNOISE ASC state to run after LOWNOISE LENGTH CONTROL. This should not have caused any problems, except Oli noticed that we lost lock twice at the exact same point in the locking process, right at the end of LOWNOISE ASC when the DHARD P low noise controller (FM4) is engaged. I attached the two guardian logs Oli sent me demonstrating this.

I took a look at the FM4 step response in foton, and noticed that the step response is actually quite long, and the ramp time of the filter was set to 5 seconds. I also looked at the DARM signal right before lockloss, and noticed that the DARM IN1 signal had a large motion away from zero just before lockloss, like it was being kicked. My hypothesis is that the impulse of the new DHARD P filter was kicking DARM during engagement. This guardian state used to be run BEFORE we switched the coil drivers to low bandwidth, so maybe the low bandwidth coil drivers can't handle that kind of impulse.

I changed the ramp time of the filter to 10 seconds, and we proceeded through the state on the next attempt just fine.
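
For context, here is a minimal sketch of how a Guardian state might engage FM4 with a wait matched to the new ramp, assuming the standard ezca interface used in Guardian code (the filter-bank name and wait handling are illustrative, not copied from ISC_LOCK):

    # Fragment intended for a Guardian state, where `ezca` and `self.timer` are provided
    ezca.switch('ASC-DHARD_P', 'FM4', 'ON')   # engage the low-noise DHARD P controller
    self.timer['dhard_ramp'] = 10             # matched to the filter's new 10 s ramp time
    # ...the state should then wait for self.timer['dhard_ramp'] before moving on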

Images attached to this comment
Non-image files attached to this comment
H1 CAL
oli.patane@LIGO.ORG - posted 23:45, Tuesday 10 June 2025 - last comment - 23:50, Tuesday 10 June 2025(84960)
(Another) broadband calibration confirmation measurement (unthermalized)

We took another broadband measurement after having been at max power for 40 minutes in our quest to confirm the newest calibration. Of course, since we have only been at max power for 40 minutes, we are still unthermalized.

Start: 2025-06-11 06:36:03 UTC (1433658981)

End: 2025-06-11 06:41:12 UTC (1433659290)

Output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250611T063603Z.xml
 

Comments related to this report
elenna.capote@LIGO.ORG - 23:50, Tuesday 10 June 2025 (84961)

We should still take another measurement when we are more thermalized, but just after 40 minutes of NLN the broadband results look good. I also checked the calibration line grafana page, and all the uncertainties are within 5%. Most are within 2-3% except the 33 Hz line which is at 4%.

Images attached to this comment
H1 CAL
oli.patane@LIGO.ORG - posted 23:27, Tuesday 10 June 2025 (84959)
Broadband calibration confirmation measurement (unthermalized)

As soon as we got to NLN, we took a broadband calibration measurement to check out the new calibration (84953). We had just gotten to max power 12 minutes before starting this measurement, so of course we are very unthermalized. We are hoping to take another measurement once we're thermalized.

Start: 2025-06-11 06:07:20 UTC (1433657258)

End: 2025-06-11 06:12:30 UTC (1433657568)

Output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250611T060720Z.xml

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 20:26, Tuesday 10 June 2025 (84957)
Ops EVE Midshift Update

Currently trying to lock. We were almost there but lost lock during LOWNOISE_ASC for an unknown reason; there were no ringups leading up to the lockloss.

H1 GRD
oli.patane@LIGO.ORG - posted 17:28, Tuesday 10 June 2025 (84954)
Changed lscparams 'manual_control' back to False

Self-explanatory from title. Set 'manual_control' to False so that when relocking, we automatically go into LOCKING_ARMS_GREEN instead of GREEN_ARMS_MANUAL, and so we go into PRMI and MICH after timing out of DRMI, instead of staying in DRMI.

Reloaded ISC_LOCK and ISC_DRMI

H1 CAL
elenna.capote@LIGO.ORG - posted 17:21, Tuesday 10 June 2025 - last comment - 11:06, Wednesday 11 June 2025(84953)
Pushed New Calibration from report 20250610T224009Z -- lockloss interrupted the final check

Francisco, Elenna, help online from Joe B

We used the thermalized calibration measurement that Tony took in alog 84949, and ran the calibration report, generating report 20250610T224009Z. We had previously done this process for a slightly earlier calibration measurement with guidance from Joe. Upon inspection of the report, Joe recommended that we change the parameter is_pro_spring from False to True, which significantly improved the fit of the calibration. The report that Tony uploaded in his alog includes that fit change. Since we were happy with this fit, Francisco reran the pydarm report, this time requesting the generation of the GDS filters. After this completed, we inspected the comparison of the FIR filters with the DARM model, and saw very good agreement between 10 and 1000 Hz.

Two things we want to point out: the nonsens filter fits included a lot of ripple at low frequency, but it still looks small enough that we think it is "ok"; we also saw some large line features at high frequency in the TST filters, which Joe had previously assured us were ok.

While online with Joe, we had also confirmed that the DARM actuation parameters, such as gains and filters, matched in three locations: in the suspension model itself, in the CAL CS model, and in the pydarm ini file.

Since we confirmed this was all looking good, Francisco and I proceeded with the next steps, which we followed from Jeff's alog here, 83088. We ran these commands in this order:

pydarm commit 20250610T224009Z --valid

pydarm export --push 20250610T224009Z

pydarm upload 20250610T224009Z

pydarm gds restart

At this point, Jeff notes that he had to wait about 12 minutes, checking with "pydarm gds status", before running the broadband measurement to confirm the calibration is good. Francisco and I also knew we needed to check the status of the calibration lines on grafana. However, a few minutes after we started the clock on this wait time, the IFO lost lock.
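
For example, a minimal sketch of that waiting step, assuming the pydarm CLI is on the PATH (the polling interval is arbitrary; the ~12 minutes comes from Jeff's alog referenced above):

    # Poll the GDS pipeline status once a minute for ~12 minutes after "pydarm gds restart"
    import subprocess, time
    deadline = time.time() + 12 * 60
    while time.time() < deadline:
        subprocess.run(['pydarm', 'gds', 'status'])
        time.sleep(60)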

We think the calibration is good, but we have not actually been able to confirm this, which means we cannot go into observing tomorrow (Wednesday) before making this confirmation.

Doing so requires some locked time with calibration lines on and a broadband injection for a final verification of this new calibration. The hope is that we can achieve this tonight, but if not, we must do so tomorrow before going into observing. (Note: because of the different rules of "engineering" data versus "observing" data, we could go into observing mode tonight without this confirmation).

Comments related to this report
oli.patane@LIGO.ORG - 19:15, Tuesday 10 June 2025 (84956)
Images attached to this comment
elenna.capote@LIGO.ORG - 11:06, Wednesday 11 June 2025 (84972)

We confirmed this new calibration is good in this alog: 84963.

I am going to add a few more details and thoughts about this calibration here:

Currently, we are operating with a digital offset in SRCL, which is counteracting about 1.4 degrees of SRCL detuning. Based on the calibration measurement, operating with this offset seems to have compensated most of the anti-spring that was previously evident in the sensing function. However, our measurements still show non-flat behavior at low frequency, which was actually best fit with a spring (aka "pro-spring"). That said, the full behavior of this feature appears more like some L2A2L coupling. It may be worthwhile to test out this coupling by trying different ASC gains and running sensing function measurements.

Joe pointed out to me this morning in the cal lines grafana, and we also saw in the very early broadband measurement last night (84959), that the calibration looks very bad just at the start of lock, with uncertainties nearing 10%. This seems to level off within about 30 minutes of the start of the lock. Since that is pretty bad, we might want to consider what to do on the IFO side to compensate. Maybe our SRCL offset is too large for the first 30 minutes of lock, or there is something else we can do to mitigate this response.

Just watching the grafana for this recent lock acquisition, it took about 1 hour for the uncertainty of the 33 Hz line to drop from 8% to 2%.

H1 ISC
jennifer.wright@LIGO.ORG - posted 16:36, Tuesday 10 June 2025 - last comment - 17:46, Tuesday 10 June 2025(84947)
Optical gain and PRG vs. pre-vent

Jennie W, Sheila D

I compared our optical gain and power-recycling gain between this afternoon once we were thermalised  at 22:42:59 UTC and a thermalised time just before the vent on April 1st at 07:34:01 UTC.

Our optical gain looks like it has decreased by around 1%, and our PRG has dropped from 52 to 50 W/W.

This might make it worth tweaking our OMC alignment to improve optical gain, but the PRG hasn't changed much, so it's maybe not worth trying to improve this before the run starts by tweaking camera servo offsets.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 17:46, Tuesday 10 June 2025 (84955)

We should be careful with the PRG comparison - I am not sure the change in the PRG is "real", because before the vent we had not updated the PRG calibration to account for the reduced power on IM4 trans that occurred after the O4a/O4b break. However, I did update the PRG calibration last week to account for it. It could still be correct; one reason I didn't update the PRG calibration before was that it seemed "good enough", but I'm not sure if it's good enough to a few percent to make this kind of comparison.

H1 SQZ
camilla.compton@LIGO.ORG - posted 15:08, Tuesday 10 June 2025 - last comment - 10:33, Wednesday 11 June 2025(84941)
No SQZ Time

No SQZ time taken today, 21:47:00 UTC to 21:59:00 UTC.

Attached plot shows today (in black) compared to before the vent (orange) and last year (red).
Our no-SQZ noise now looks very similar to before the vent, apart from the increased jitter lines at 300-700 Hz (hopefully from the HAM1 VAC pumps). Both 2025 plots are worse at high frequency and at 40-70 Hz compared to last year's; maybe this could be a calibration artifact?
Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 10:33, Wednesday 11 June 2025 (84970)

2 seconds of data in this span are missing.
Our tools can't pull the entire time range for this no-SQZ time.
Johnathan has identified the 2 seconds that are missing from the 12-minute data stretch.

"I don't have good news for you.  There is a 2s gap in there at 1433627912-1433627913 on H-H1_llhoft.  The H-H1_HOFT_C00 is worse, I don't see frames in the 1433627.... range at all." ~ Johnathan H.

We were able to salvage 674 seconds from this time that can be useful. 
Useful GPS time: 1433627238 - 1433627912

 

H1 ISC
jenne.driggers@LIGO.ORG - posted 13:13, Tuesday 10 June 2025 - last comment - 08:23, Wednesday 11 June 2025(84930)
Some Observe.snap SDF clearing

I'm working on going through some Observe SDFs, so that we're ready for observing soon.

Jim is currently working on going through many of the SEI SDFs.  The rest of the diffs I need to check with other commissioners to be sure about before we clear them, but I think we're getting close to having our SDFs cleared!

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:05, Tuesday 10 June 2025 (84935)

h1tcshws sdfs attached.

Reverted the Baffle PDs to what they were 3 months ago (attached); unsure why they would have changed.

SQZ ADF frequency SDFs accepted; we do not know why these would have been accepted at the value of -600 that they've been at for some of the past 2 weeks.

Images attached to this comment
elenna.capote@LIGO.ORG - 13:35, Tuesday 10 June 2025 (84936)

ASC SDFs were from changes to DC6, cleared.

elenna.capote@LIGO.ORG - 13:39, Tuesday 10 June 2025 (84937)

Cleared these SDFs for the phase changes for LSC REFL A and B.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 16:14, Tuesday 10 June 2025 (84945)

I trended these, and see that FM2 was on in all three of these last time we were in observing, so these must have been erroneously accepted in the observing snap.

Images attached to this comment
camilla.compton@LIGO.ORG - 08:23, Wednesday 11 June 2025 (84965)

I also accepted the HAM7_DK_BYPASS time from 1200 to 999999 after checking with Dave, as attached.

Images attached to this comment
H1 ISC
camilla.compton@LIGO.ORG - posted 10:34, Tuesday 10 June 2025 - last comment - 16:59, Wednesday 25 June 2025(84922)
Noticed BS PIT moved while locking and then drifts in NLN: not new, happened at the end of O3b but not 1 year ago.

Sheila, Elenna, Camilla

Sheila was questioning whether something is drifting, given that we need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot both while powering up / moving spots and while in NLN. It is unclear from the BS alignment inputs plot what's causing this.

This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments before the break too. One year ago this was not happening, plot.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:44, Tuesday 10 June 2025 (84929)

These large BS PIT changes began 5th-6th July 2024 (plot). 78877 is the day shift log from when the first lock like this happened, 5th July 2024 19:26 UTC (12:26 PT); at the time we were doing PR2 spot moves. There was also a SUS computer restart (78892), but that appeared to be a day after this started happening.

Images attached to this comment
camilla.compton@LIGO.ORG - 09:45, Wednesday 11 June 2025 (84966)ISC, SUS

Sheila, Camilla

This reminded Sheila of times in the past when we were heating a SUS, causing the bottom mass to pitch and the ASC to move the top mass to counteract this. Then, after lockloss, the bottom mass would slowly go back to its nominal position.

We do see this on the BS since the PR2 move; see attached (top 2 left plots). In the green bottom mass oplev trace, when the ASC is turned off on lockloss, the BS moves quickly and then slowly moves again over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot before the PR2 move. Below is a list of other PR2 positions we tried; all the other positions have also produced this BS drift. The total PR2 move since the good place is ~3500 urad in yaw.

  • Different time May 21st to 24th 2024:
    • BS Oplev Drift
    • Plot shows 30urad M1 drift
    • PR2 Alignment Sliders P: 1435, Y: 1130
  • Pre July 5th 2024:
    • No BS Oplev Drift
    • Plot shows 5urad M1 drift
    • PR2 Alignment Sliders P: 1565, Y: 3210
  • July 5th 2024 to 6th Feb 2025:
    • BS Oplev Drift
    • Plot shows 50urad M1 drift
    • PR2 Alignment Sliders P: 1535, Y: 2785
  • 6th Feb 2025 to 10th Feb 2025:
    • BS Oplev Drift
    • Plot shows 30urad M1 drift
    • PR2 Alignment Sliders P: 1480, Y: 1195
  • 10th Feb 2025 to now:
    • BS Oplev Drift
    • Plot shows 30-40urad M1 drift
    • PR2 Alignment Sliders P: 1430, Y: -245

To avoid this heating and BS drift, we should move back towards a PR2 yaw closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.

Images attached to this comment
jenne.driggers@LIGO.ORG - 14:38, Thursday 12 June 2025 (85002)

I did a bit of alog archaeology to re-remember what we'd done in the past.

  • In August of 2015, we found that we were struggling with PR3 pitch alignment jumping, then cooling down upon lockloss.  Alog 20268 talks about the implementation of the lock loss compensation, which first appeared in ISC_DRMI guardian in rev 11228.
  • At some point (I didn't dig to find out when precisely), we also implemented the same filters for BS pitch.
  • By Jan 2020, both BS and PRM had the soft ASC turn-off.
  • In Jan 2020, in ISC_DRMI rev 20905, we removed this soft ASC turn-off for both PR3 and BS.  The referenced alog 54709 notes that we shouldn't need those anymore, since we had installed wire heating baffles to prevent the wires from being illuminated and heating up.
  • We haven't had the soft turn-off filters in use since 2020, about 3 months before the end of O3b.  This may be why Camilla saw that we were seeing BS drift at the end of O3b.
  • Perhaps our alignment during O4, until we moved the PR spots in May 2024, was such that we weren't susceptible to this wire heating.
  • I don't think PR3 is seeing the same kind of trouble that it did back in 2015 upon lockloss, so I think its wire heating baffles are working as designed, so no need to make any changes to the PR3 controls.
  • Sheila made the point that because we unclipped some of the +Y side of the beam (without moving the spot on the BS), maybe there is a bit more light that is illuminating the barrel of the BS or getting to the wires.  Or, something?  Without having looked at the actual drawings, I could imagine that the wire heating baffles are working better on PR3 than they are on the BS, because we hit PR3 much closer to normal incidence, whereas with the BS the light could be sneaking around the baffles.  Robert thinks that light could get inside the cage baffle and reflect around and be hitting and heating the wires.
  • All of this seems to say that we should re-implement the soft ASC turn-off for the BS. I had a quick look at the 1/e time for the BS to move after lockloss (it's about 241 seconds), and at the 1/e time for the filters (about 240 seconds, despite my quoting in alog 54706 that they were 25 min filters; 2*pis are hard!).
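
As a quick check of the 2*pi bookkeeping (my assumed interpretation of the discrepancy noted above):

    # A 240 s 1/e time corresponds to the previously quoted "25 minute" filter if that
    # figure refers to 2*pi*tau rather than tau itself
    import math
    tau = 240                      # seconds, approximate 1/e time of the let-go filters
    print(2 * math.pi * tau / 60)  # ~25.1 minutes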

To put back the soft turn-off of the BS ASC, I think we need to:

  • Disable the BS M1 ASC lockloss trigger.  Jeff reminded me that this would foil my plans, since it turns off the ASC signals to the EUL2OSEM matrix.  This will mean that neither the Pit nor the Yaw BS M1 signals will be shut off by the lockloss trigger.  To disable, we'll need to set H1:SUS-BS_M1_TRIG_ASC_ENABLE to zero (which means that the ASC signals will always be passed to the EUL2OSEM matrix).  I don't think this is in guardian anywhere, so we should only need to change it and then accept in safe and observe snap files.
  • Change ISC_DRMI around line 66 such that BS pit gain is not set to zero.  Also, have it turn off FM1 in addition to turning off the input.
  • Change ISC_DRMI around line 141 to not hit the BS pit RSET button.

Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight.  Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
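
Here is a minimal sketch of what the ISC_DRMI-side changes listed above might look like, assuming the standard Guardian/ezca interface (the channel and filter-bank names are illustrative, and this is not the committed code):

    # Fragment intended for Guardian code, where `ezca` is provided by the environment.
    # Keep the ASC path to the EUL2OSEM matrix alive through lockloss:
    ezca['SUS-BS_M1_TRIG_ASC_ENABLE'] = 0
    # On lockloss, instead of zeroing the BS pit gain and clearing history (RSET),
    # leave the gain alone and turn off the input and the soft let-go filter FM1,
    # so the control signal rings down slowly:
    ezca.switch('SUS-BS_M1_LOCK_P', 'INPUT', 'FM1', 'OFF')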

jenne.driggers@LIGO.ORG - 09:57, Monday 16 June 2025 (85075)

I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian, and loaded.  We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (eg, from a DRMI lockloss).

jenne.driggers@LIGO.ORG - 17:16, Monday 16 June 2025 (85106)

This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different. 

In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread).  Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much, and it is mostly flattened out after a shorter amount of time.  I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time).  However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.

We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments.  If needed, we can try to tweak the time constant of the 'soft let-go' to make the optical lever signal stay flatter overall.

The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert.  The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.

Images attached to this comment
jenne.driggers@LIGO.ORG - 16:59, Wednesday 25 June 2025 (85344)

RyanS, Jenne

We've looked at a couple of times when the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and where it wants to be for the next acquisition of DRMI.  Attached is one such example.

Alternatively, a day or so ago Tony had to do an initial alignment.  On that day, it seemed like the BS took much longer to get to its quiescent spot.  I'm not yet sure why the behavior is different sometimes.

Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.

Images attached to this comment