Reports until 11:08, Tuesday 17 June 2025
LHO VE
david.barker@LIGO.ORG - posted 11:08, Tuesday 17 June 2025 (85123)
Tue CP1 Fill

Tue Jun 17 10:10:11 2025 INFO: Fill completed in 10min 7secs

 

Images attached to this report
LHO VE (VE)
travis.sadecki@LIGO.ORG - posted 10:11, Tuesday 17 June 2025 - last comment - 16:37, Tuesday 17 June 2025(85121)
New Kobelco installation complete

Late entry from last week. Roger's Machinery completed the new Kobelco installation and startup last Tuesday.  Our last task before acceptance testing of the system was to replace the filter elements in the filter tree.  When we opened the housings, we discovered that one of the elements was mangled and had evidence of rough tool use on it.  When we removed the element, we could see that there was some evidence of galling in the threads where the element screws into the filter housing (see pics).  We will order a new housing and replace it before continuing with testing the system.  

Images attached to this report
Comments related to this report
janos.csizmazia@LIGO.ORG - 16:37, Tuesday 17 June 2025 (85138)
And here is the filter element, in an absolutely ridiculous condition, as Travis described it.
Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 10:01, Tuesday 17 June 2025 - last comment - 16:37, Tuesday 17 June 2025(85120)
RCG5.5.0 completed 09:51

We have completed the upgrade of H1 frontends to RCG5.5.0 at 09:51.

A detailed alog will be written; in the meantime, surprises found/re-remembered:

EX dolphin frontends need to enable an unused port on the EY switch because the EX switch has no port control (damaged in Apr power outage)

The PSL DBB model had an obsolete Dolphin IPC sender left over from when it had a Dolphin card. The new RCG doesn't allow senders with no card. We removed the sender from the model; a DAQ restart is pending for this model.

We had upgraded h1omc0 to 5.5.0 some time ago, but the H1.ipc file had changed so it needed a restart. Prior to restart it was clobbering the SWWD IPCs between sush34 and seih23, seih45.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 11:20, Tuesday 17 June 2025 (85124)
Images attached to this comment
david.barker@LIGO.ORG - 16:37, Tuesday 17 June 2025 (85139)

Here is a rough sequence of today's upgrade; all times are local PDT.

07:22 rebuild H1EDC.ini to convince ourselves the first DAQ restart will be model changes only

07:43 h1cdsrfm powered down; this breaks the linkage between Dolphin locations

07:43 h1ecatmon0 upgrade (reboot)

07:45 Dolphin network manager started on h1vmboot5-5, causing the standard set of end station systems to crash (susey, seiey, iscey and susex). We PAUSE'd the remaining EX (iscex, seiex)

07:51 Reboot h1susaux[ex, ey, b123, h2, h34, h56] pem[mx, my] to upgrade them. susaux[h34, ey] got stuck and were power cycled via IPMI.

08:03 DAQ 0-leg restart for new INI files across the board.

08:18 DAQ 1-leg restart. At this point omc0, the susaux machines, and pemmid have good DAQ data; everyone else has BAD DAQ data.

08:27 Power down EX machines, power up EY machines. SWWD IPCs working, Dolphin IPC checks out.

08:32 Power up EX machines, all at the same time because of the Dolphin switch issue. They do not start. After some head scratching we remembered that the startup sequence needs to activate a Dolphin switch port, which cannot happen at EX because the switch is damaged. The workaround is for all three EX front ends to switch an unused port on the EY switch. Once this was put into place the EX machines started without anyone having to drive to the end station.

08:55 Reboot h1psl0 to upgrade PSL models (no dolphin cards, more about this later...)

08:56 Power down all non-upgraded corner station machines (SUS+SEI+ASC+LSC+OAF) but not h1omc0 (more about this later..)

09:00 h1psldbb is not running. It has an obsolete Dolphin IPC sender part in the model but no Dolphin card. RCG5.5.0 does not allow this. Rebuilt the model without the IPC part; it starts running. Note PSLDBB DAQ data is BAD from this point until the second DAQ restart.

09:10 First power up h1sush2b, h1seih16 for HAM1 SEI work. SWWD IPC between the two working well (well, for these two, more later...)

09:20 Power up all remaining corner station computers

09:30 Discovered weird SWWD IPC receive values for HAM3 and HAM4 (value should be 1.0, but is -0.002 or 0.000).

09:34 Tried restarting h1iopsush34; IPC values still bad. But h1omc0 has not been restarted, so it's using the old IPC configuration and could be writing out-of-bounds.

09:35 restart h1omc0 models, SWWD IPC errors are resolved

09:44 Power up h1cdsrfm. First, EX is PAUSE'd and EY and CS are fenced. Long-range Dolphin starts with no issues. A new MEDM is generated from the new H1.ipc file.

09:51 Complete in 2 hours.

 

 

H1 SQZ
camilla.compton@LIGO.ORG - posted 09:12, Tuesday 17 June 2025 (85116)
Poked SQZ HD A PD to make it work, now will need realigning

As Kevin and Vicky found a couple of weeks ago, the SQZ HD PD A seems to be loose: it stopped working, Vicky re-seated it, and it worked again. It had stopped working again, so I poked it (it had started out pitched down) to make it work; it will now need realigning before the HD is functional again.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:43, Tuesday 17 June 2025 (85115)
OPS Day Shift Start

TITLE: 06/17 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 4mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:

IFO is in DOWN for PLANNED MAINTENANCE

H1 had just gotten to NLN as I arrived. I then put it in DOWN for today's maintenance. Activities slated for today:

H1 CDS
david.barker@LIGO.ORG - posted 07:19, Tuesday 17 June 2025 - last comment - 07:25, Tuesday 17 June 2025(85113)
RCG 5.5.0 Upgrade

WP12536 RCG5.5.0 Upgrade

WP12624 SWWD variable bypass time h1iopsush2a

WP12625 Fix broken ADC selector h1hpiham4

WP12626 Fix partial ADC MEDM by adding named parts

Jonathan, Erik, EJ, Dave:

Yesterday, Mon 16jun2025, we did the final round of builds and installs of all the models against RCG5.5.0

h1build was rebooted against the new boot server (h1vmboot5-5) and was used to do the builds/installs.

After the builds, I ran each model through check_model_changes and then wrote a parser for its output files:

check_rcg550_daq_changes.py

All the INI changes were expected; most had the additional RCG channels expected from going from 5.1.4 to 5.5.0.

Exceptions were:

h1iopsusex had one fewer because it was upgrading from 5.3.0, so it already had the timing card temperature channel

The following had additional ADC naming parts:

isibs, isiham3, isiham4, isietmx, isietmy, alsex, alsey, ascimc

The build took about 2 hours. The install took 2 hrs 13 mins; backing up targets to target_archive took 45 GB of disk space, and /opt/rtcds is at 89%.
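
For illustration, here is a minimal sketch of what such an output-file parser could look like; the file layout and the expected-channel count are assumptions based on the description above, not the actual check_rcg550_daq_changes.py:

    #!/usr/bin/env python3
    # Hypothetical sketch of a check_model_changes output parser.
    # Assumes one text file per model, with added channels on lines starting
    # with "+" and removed channels on lines starting with "-".
    import glob
    import os

    EXPECTED_NEW = 1  # assumed number of channels the 5.1.4 -> 5.5.0 upgrade adds per model

    def summarize(path):
        added, removed = [], []
        with open(path) as f:
            for line in f:
                if line.startswith("+"):
                    added.append(line[1:].strip())
                elif line.startswith("-"):
                    removed.append(line[1:].strip())
        return added, removed

    for path in sorted(glob.glob("check_model_changes_output/*.txt")):
        model = os.path.splitext(os.path.basename(path))[0]
        added, removed = summarize(path)
        if removed or len(added) != EXPECTED_NEW:
            print(f"{model}: CHECK  +{len(added)} -{len(removed)}")
        else:
            print(f"{model}: ok (+{len(added)})")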

Comments related to this report
david.barker@LIGO.ORG - 07:25, Tuesday 17 June 2025 (85114)

Post-install note: h1omc0 was already upgraded to 5.5.0 previously.

Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 22:00, Monday 16 June 2025 (85111)
OPS Monday eve shift summary

TITLE: 06/17 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We stayed locked the entire shift; high-frequency SQZing hasn't been the best.
LOG: No Log.

00:01 UTC Observing

H1 PSL (ISC)
jennifer.wright@LIGO.ORG - posted 17:28, Monday 16 June 2025 - last comment - 09:08, Tuesday 17 June 2025(85110)
Upgrade of ISS array unit S1202965

Jennie W, Keita, Rahul

Today the three of us went into the optics lab to upgrade unit S1202965 with the c'bore washer and QPD clamp plate that give a ~2 degree tilt between the QPD plane and the mount plate. See detail D in D1300720.

Looking at the assembly SolidWorks file, LIGO-D1101059: if the back face of the photodiode array is facing to the back, the longer clamp leg points towards the front, and the notch on the tilt washer should be at approximately the 4 o'clock position.

 

We first checked the current alignment into the array photodiodes and realised the beam was off from the entrance aperture by a large amount in yaw.

 

Keita had to change the mounts for the PZT mirror and lens as these were slightly tilted on the translation stage, and it seemed like we needed a more robust alignment setup.

We then tried aligning with the upstream steering mirror and PZT mirror but could see multiple beams on each array PD. To check that the beam is not too large at the input aperture, we want to re-profile the beam size on the way into the ISS assembly.

We left the setup with the M2MS beam profiler set up at the corner of the table and the beam roughly aligned into it; more fine adjustment needs to be done.

Comments related to this report
keita.kawabe@LIGO.ORG - 09:08, Tuesday 17 June 2025 (85117)

The reason why the alignment was totally off is unknown. It was still off after turning on the PZT driver with an offset (2.5V) so it cannot be the PZT mirror. Something might have been bumped in the past month or two.

H1 General
ryan.crouch@LIGO.ORG - posted 16:55, Monday 16 June 2025 - last comment - 17:05, Monday 16 June 2025(85102)
OPS Monday EVE shift start

TITLE: 06/16 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 20mph Gusts, 7mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 17:02, Monday 16 June 2025 (85105)

We had to accept some SDFs to get into Observing.

00:01 UTC Observing

Images attached to this comment
elenna.capote@LIGO.ORG - 17:05, Monday 16 June 2025 (85107)

Putting these here as well.

I think these diffs result from errors in our safe SDFing from earlier.

Images attached to this comment
H1 ISC (GRD)
elenna.capote@LIGO.ORG - posted 16:15, Monday 16 June 2025 - last comment - 14:14, Monday 23 June 2025(85098)
Update to MOVE_SPOTS to speed things up

Ryan S., Elenna

The MOVE_SPOTS state is taking 13 minutes (!) to complete, because the YAW3 ADS DOF is very far off and takes a significant time to converge. Both Jenne and I have found that slowly bumping up the YAW3 gain (PRM yaw) helps the loops converge much faster.

Ryan kindly helped me update the state code to slowly increase the gain if the convergence is taking too long. We added a new timer, 'ADS', that waits for one minute after the new A2L gains are ramped (so an additional minute after the 2-minute ramp time of the A2L gains). If, after that first minute, there is still no convergence, the YAW3 gain is doubled. After that, the 'ADS' timer waits 2 minutes and again doubles the gain. This process can happen up to three times, which should increase the YAW3 gain to a maximum value of 8. Jenne and I have found that the gain can go as high as 10 in this state. The two-minute waits give the other ASC loops, like SRC1 and INP1 Y, time to converge as the ADS pulls the PRM in faster. Once the convergence checker returns true, the YAW3 gain is set back to 1.

We will monitor how this proceeds on this locking attempt. I updated the guardian notify statements so they state when the gain is increased.
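
For reference, here is a standalone sketch of the gain-ramping logic described above (the names, wait times, and convergence check are placeholders mirroring the description, not the actual ISC_LOCK guardian code):

    import time

    def ramp_yaw3_gain(converged, get_gain, set_gain,
                       first_wait=60, step_wait=120, max_gain=8):
        """Double the YAW3 ADS gain while convergence is slow, then restore it to 1."""
        time.sleep(first_wait)                # extra minute after the A2L gain ramp
        while not converged():
            gain = get_gain()
            if gain < max_gain:
                set_gain(gain * 2)            # 1 -> 2 -> 4 -> 8
                print(f"YAW3 slow to converge, gain doubled to {gain * 2}")
            time.sleep(step_wait)             # give SRC1 and INP1 Y time to keep up
        set_gain(1)                           # back to nominal once converged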

Comments related to this report
elenna.capote@LIGO.ORG - 16:42, Monday 16 June 2025 (85101)

This was a success: this run-through took only 7 minutes. I am shortening the 2-minute wait before increasing the gain to 90 seconds. If that still works, maybe we can go to 60 seconds.

elenna.capote@LIGO.ORG - 09:58, Tuesday 17 June 2025 (85119)

To be more specific, the first attempt as described above meant the state took 6 minutes, 50 seconds. I loaded the change to reduce the wait time from 120 to 90 seconds, which only shortened the state length to 6 minutes, 30 seconds. The gain was only ramped to 8 for a very short period of time. I still think we can make this shorter, which we can do by making that wait time 60 seconds, and maybe taking bigger steps in the gain each time. However, we are still in the RCG upgrade, so I will hold off on changes to the guardian for now.

YAW3 is still limiting the length of the state. In this morning's relock, YAW3 convergence took nearly a minute longer than the other loops. Once we have caught YAW3 up to everything else, we could make the state even shorter by raising the gain of other ADS loops. Two minutes of the state are taken up in the ramp of the A2L gain, so it is taking an additional 4 minutes, 30 seconds to wait for loop convergence.

elenna.capote@LIGO.ORG - 14:14, Monday 23 June 2025 (85256)

Now it seems that PIT3 is taking too much time to converge, so I updated the guardian to also increase the PIT3 gain in the same way.

H1 General (ISC, OpsInfo, SEI, SUS)
elenna.capote@LIGO.ORG - posted 15:40, Monday 16 June 2025 - last comment - 17:03, Monday 16 June 2025(85092)
Safe SDF reconciliation

In preparation for the RCG upgrade, we are using the relocking time to reconcile SDF differences in the SAFE file.

Here are some of mine:

I have also determined that the unmonitored channel diffs in the LSC, ASC, and OMC models are guardian controlled values and do not need to be saved.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:29, Monday 16 June 2025 (85094)

Not accepting or reverting the h1sqz, h1ascsqzfc, or slow controls cs_sqz SDFs (attached), as these have the same observe and safe files.

Images attached to this comment
camilla.compton@LIGO.ORG - 15:35, Monday 16 June 2025 (85095)

Accepted H1:TCS-ETMX_RH_SET{LOWER,UPPER}DRIVECURRENT as ndscope-ing shows they are normally at this value.

Images attached to this comment
elenna.capote@LIGO.ORG - 17:00, Monday 16 June 2025 (85103)

Some of these SDFs may have then led to diffs in the OBSERVE state. I have reverted the roll mode tRamp, and accepted the OSC gains in the CAL CS model.

Images attached to this comment
oli.patane@LIGO.ORG - 17:03, Monday 16 June 2025 (85104)

I updated the OPTICALIGN OFFSETs for each suspension that we use those sliders on. I tried using my update_sus_safesnap.py script at first, but even though it's worked one other time in the past, it was not working any time I tried using it on more than one suspension at a time (it seems like it was only doing one out of each suspension group). I ended up being able to get them all updated eventually anyway. I'm attaching all their SDFs and will be working on fixing the script. Note that a couple of the ETM/TMS values might not match the setpoint exactly because the screenshots were taken during relocking, after they had moved a bit with the WFS.

Images attached to this comment
H1 PEM
ryan.crouch@LIGO.ORG - posted 15:27, Monday 16 June 2025 - last comment - 10:20, Tuesday 17 June 2025(85093)
DustMon Monthly Trends

All looks well, aside from the known issue with LAB2; also, LVEA5 seems frozen, and I'll investigate that tomorrow during maintenance.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 10:20, Tuesday 17 June 2025 (85122)

LVEA5 being off is expected; it's a pumped dust monitor, so we turned it off for observing.

H1 GRD (ISC, SQZ)
oli.patane@LIGO.ORG - posted 12:49, Monday 16 June 2025 - last comment - 09:37, Tuesday 17 June 2025(85083)
THERMALIZATION guardian edited and put into use for SRCL1 offset stepping

Oli, Camilla, Sheila, RyanS

It was pointed out (84972) that our new SRCL offset is too big at the beginning of the lock, affecting the calibration and how well we are squeezing. Camilla had the idea of taking the unused-but-already-set-up THERMALIZATION guardian and repurposing the main state so it steps LSC-SRCL1_OFFSET from the LSC-SRCL1_OFFSET value at the end of MAX_POWER to the official offset value given in lscparams (offset['SRCL_RETUNE']). This stepping starts at the end of MAX_POWER and goes for 90 minutes. Here is a screenshot of the code.

To go with this new stepping, we've commented out the line (~5641) in ISC_LOCK's LOWNOISE_LENGTH_CONTROL (ezca['LSC-SRCL1_OFFSET'] = lscparams.offset['SRCL_RETUNE']) so that the offset doesn't get set to that constant and instead keeps stepping.

To get this to run properly while observing, we did have to unmonitor the LSC-SRCL1_OFFSET value in the Observe SDF (sdf).
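
For reference, here is a standalone sketch of the stepping logic described above (the step cadence and channel access are placeholders; the actual THERMALIZATION guardian code is in the attached screenshot):

    import time

    def step_srcl_offset(get_offset, set_offset, target=-455.0,
                         duration_s=90 * 60, n_steps=90):
        """Linearly step the SRCL1 offset from its current value to `target`."""
        start = get_offset()                    # value at the end of MAX_POWER
        for i in range(1, n_steps + 1):
            set_offset(start + (target - start) * i / n_steps)
            time.sleep(duration_s / n_steps)    # spread the steps over the full ramp

    # Usage sketch (guardian-style channel access, illustrative only):
    # step_srcl_offset(lambda: ezca['LSC-SRCL1_OFFSET'],
    #                  lambda v: ezca.write('LSC-SRCL1_OFFSET', v))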

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:21, Monday 16 June 2025 (85089)CAL

Attached is a screenshot of the grafana page, highlighting the 33 Hz calibration line, which seems to be the most sensitive to thermalization. Before, when the SRCL offset was set static, it appears that the 33 Hz line uncertainty starts at about 1.09 and then decays down to about 1.02 over the first hour. With the thermalization adjustment of the SRCL offset from 0 to -455 over one hour, the 33 Hz uncertainty starts around 0.945 and then increases to 1.02 over the first hour. Seems like we overshot in the other direction, so we could start closer to -200 perhaps and move to -455.

Images attached to this comment
oli.patane@LIGO.ORG - 17:25, Monday 16 June 2025 (85108)

We decided to change the guardian so that it starts at -200 before then stepping its way up to -455 over the course of 75 minutes instead of 90 minutes.

elenna.capote@LIGO.ORG - 17:26, Monday 16 June 2025 (85109)

With the update to the guardian to start at -200, each calibration line uncertainty has actually stayed very flat for these first 30 minutes of lock (except for the usual very large jump in the uncertainty for the first few minutes of the lock).

Images attached to this comment
elenna.capote@LIGO.ORG - 09:37, Tuesday 17 June 2025 (85118)

This shows the entire lock using the thermalization guardian with the SRCL offset ramping from -200 to -455. The line uncertainty holds steady the entire time, within 2-3%!

Images attached to this comment
H1 ISC
camilla.compton@LIGO.ORG - posted 10:34, Tuesday 10 June 2025 - last comment - 16:59, Wednesday 25 June 2025(84922)
Noticed BS PIT moves while locking and then drifts in NLN: not new, happened at the end of O3b but not 1 year ago.

Sheila, Elenna, Camilla

Sheila was questioning whether something is drifting that causes us to need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot both while powering up / moving spots and while in NLN. It is unclear from the BS alignment inputs plot what's causing this.

This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments before the break too. One year ago this was not happening, plot.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:44, Tuesday 10 June 2025 (84929)

These large BS PIT changes began 5th-6th July 2024 (plot). 78877 is the day shift log from the time that the first lock like this happened, 5th July 2024 19:26 UTC (12:26 PT); at the time we were doing PR2 spot moves. There was also a SUS computer restart (78892), but that appeared to be a day after this started happening.

Images attached to this comment
camilla.compton@LIGO.ORG - 09:45, Wednesday 11 June 2025 (84966)ISC, SUS

Sheila, Camilla

This reminded Sheila of when we were heating a SUS in the past, causing the bottom mass to pitch and the ASC to move the top mass to counteract this. Then, after lockloss, the bottom mass would slowly go back to its nominal position.

We do see this on the BS since the PR2 move; see attached (top 2 left plots). In the green bottom-mass oplev trace, when the ASC is turned off on lockloss, the BS moves quickly and then slowly moves again over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot from before the PR2 move. Below is a list of other PR2 positions we tried; all the other positions have also produced this BS drift. The total PR2 move since the good place is ~3500 urad in yaw.

  • Different time May 21st to 24th 2024:
    • BS Oplev Drift
    • Plot shows 30urad M1 drift
    • PR2 Alignment Sliders P: 1435, Y: 1130
  • Pre July 5th 2024:
    • No BS Oplev Drift
    • Plot shows 5urad M1 drift
    • PR2 Alignment Sliders P: 1565, Y: 3210
  • July 5th 2024 to 6th Feb 2025:
    • BS Oplev Drift
    • Plot shows 50urad M1 drift
    • PR2 Alignment Sliders P: 1535, Y: 2785
  • 6th Feb 2025 to 10th Feb 2025:
    • BS Oplev Drift
    • Plot shows 30urad M1 drift
    • PR2 Alignment Sliders P: 1480, Y: 1195
  • 10th Feb 2025 to now:
    • BS Oplev Drift
    • Plot shows 30-40urad M1 drift
    • PR2 Alignment Sliders P: 1430, Y: -245

To avoid this heating and BS drift, we should move back towards a PR2 yaw closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.

Images attached to this comment
jenne.driggers@LIGO.ORG - 14:38, Thursday 12 June 2025 (85002)

I did a bit of alog archaeology to re-remember what we'd done in the past.

  • In August of 2015, we found that we were struggling with PR3 pitch alignment jumping, then cooling down upon lockloss.  Alog 20268 talks about the implementation of the lock loss compensation, which first appeared in ISC_DRMI guardian in rev 11228.
  • At some point (I didn't dig to find out when precisely), we also implemented the same filters for BS pitch.
  • By Jan 2020, both BS and PRM had the soft ASC turn-off.
  • In Jan 2020, ISC_DRMI rev 20905 we removed this soft ASC turn-off for both PR3 and BS.  The referenced alog 54709 notes that we shouldn't need those anymore, since we had installed wire heating baffles, to prevent the wires from being illuminated and heating up.
  • We haven't had the soft turn-off filters in use since 2020, about 3 months before the end of O3b.  This may be why Camilla saw that we were seeing BS drift at the end of O3b.
  • Perhaps our alignment during O4, until we moved the PR spots in May 2024, was such that we weren't susceptible to this wire heating.
  • I don't think PR3 is seeing the same kind of trouble that it did back in 2015 upon lockloss, so I think its wire heating baffles are working as designed, so no need to make any changes to the PR3 controls.
  • Sheila made the point that because we unclipped some of the +Y side of the beam (without moving the spot on the BS), maybe there is a bit more light that is illuminating the barrel of the BS or getting to the wires.  Or, something?  Without having looked at the actual drawings, I could imagine that the wire heating baffles are working better on PR3 than they are on the BS, because we hit PR3 much closer to normal incidence, whereas with the BS the light could be sneaking around the baffles.  Robert thinks that light could get inside the cage baffle and reflect around and be hitting and heating the wires.
  • All of this seems to say that we should re-implement the soft ASC turn-off for the BS. I had a quick look at the 1/e time for the BS to move after lockloss (it's about 241 seconds) and the 1/e time for the filters (about 240 seconds, despite my quoting in alog 54706 that they were 25 min filters; 2*pi's are hard! See the quick check below this list).
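
A quick check of those numbers, assuming the "25 min" refers to the filter's corner period, so the exponential 1/e time is that period divided by 2*pi:

    import math

    T = 25 * 60              # "25 minute" filter, taken as a corner period in seconds
    tau = T / (2 * math.pi)  # exponential 1/e time constant
    print(f"tau = {tau:.0f} s")   # ~239 s, consistent with the ~240 s quoted above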

To put back the soft turn-off of the BS ASC, I think we need to (a sketch of these changes follows the list):

  • Disable the BS M1 ASC lockloss trigger.  Jeff reminded me that the trigger would foil my plans, since it turns off the ASC signals to the EUL2OSEM matrix. Disabling it will mean that neither the Pit nor the Yaw BS M1 signals will be shut off by the lockloss trigger.  To disable it, we'll need to set H1:SUS-BS_M1_TRIG_ASC_ENABLE to zero (which means that the ASC signals will always be passed to the EUL2OSEM matrix).  I don't think this is set in guardian anywhere, so we should only need to change it and then accept it in the safe and observe snap files.
  • Change ISC_DRMI around line 66 such that BS pit gain is not set to zero.  Also, have it turn off FM1 in addition to turning off the input.
  • Change ISC_DRMI around line 141 to not hit the BS pit RSET button.
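
Here is a rough sketch of what those changes could look like; the filter-bank name, button names, and the one-time trigger setting are best guesses from the text above, not the actual guardian code:

    # Illustrative sketch only; check names against the real ISC_DRMI guardian.
    def soft_let_go_of_bs_pit(ezca):
        """On lockloss, let the BS pit ASC signal decay away instead of cutting it."""
        # Around line 66: turn off the input and FM1 of the BS M1 pit lock filter,
        # but leave the gain alone (previously the gain was set to zero here).
        ezca.switch('SUS-BS_M1_LOCK_P', 'INPUT', 'FM1', 'OFF')
        # Around line 141: do NOT clear the filter history (skip the RSET), so the
        # held output can ramp down slowly through the let-go filter.

    # One-time setting, then accept in the safe and observe snap files, so the
    # lockloss trigger no longer cuts the ASC path to the EUL2OSEM matrix:
    # ezca['SUS-BS_M1_TRIG_ASC_ENABLE'] = 0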

Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight.  Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.

jenne.driggers@LIGO.ORG - 09:57, Monday 16 June 2025 (85075)

I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian and loaded it.  We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (e.g., from a DRMI lockloss).

jenne.driggers@LIGO.ORG - 17:16, Monday 16 June 2025 (85106)

This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different. 

In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread).  Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much and is mostly flattened out after a shorter amount of time.  I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time).  However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.

We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments.  If needed, we can try to tweak the time constant of the 'soft let-go' to make the optical lever signal stay flatter overall.

The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert.  The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.

Images attached to this comment
jenne.driggers@LIGO.ORG - 16:59, Wednesday 25 June 2025 (85344)

RyanS, Jenne

We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and at where it wants to be for the next acquisition of DRMI.  Attached is one such example.

Alternatively, a day or so ago Tony had to do an initial alignment.  On that day, it seemed like the BS took much longer to get to its quiescent spot.  I'm not yet sure why the behavior is different sometimes.

Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.

Images attached to this comment