H1 CDS (CDS, SEI)
patrick.thomas@LIGO.ORG - posted 12:13, Tuesday 10 June 2025 - last comment - 09:41, Thursday 12 June 2025(84928)
odd cabling on h1brsey
I went down to end Y to retrieve the USB stick that I had remotely copied the c:\slowcontrols directory on h1brsey to, and also to try to connect h1brsey to the KVM switch in the rack. I eventually realized that what I thought was a VGA port on the back of h1brsey probably was not one. Instead I found some odd-seeming wiring running from what I am guessing is an HDMI or DVI port on the back of h1brsey, to some kind of converter device, and then to a USB port on a network switch. I'm not sure what this is about, so I am attaching pictures.
Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 09:41, Thursday 12 June 2025 (84994)
A work permit has been filed to remove this cabling and put h1brsey on the kvm switch in the rack.
LHO VE
david.barker@LIGO.ORG - posted 11:49, Tuesday 10 June 2025 (84927)
Tue CP1 Fill

Tue Jun 10 10:12:01 2025 INFO: Fill completed in 11min 57secs

 

Images attached to this report
LHO FMCS
tyler.guidry@LIGO.ORG - posted 11:45, Tuesday 10 June 2025 (84926)
LVEA Zone 2 Temperatures
Around 11:15 local I observed an outlier on the VEA temperature trend. Zone 2 (Y output) appeared to be running beyond its norm. Because Eric was troubleshooting a heater coil in this particular zone (per WP 12589) this morning, this was not terribly surprising, but I decided to investigate anyway. According to FMCS, heating stage 1 was manually forced on. It appeared to hold at least a 40% heating command in this condition. I don't recall a reason for this being manually enabled, nor did Eric. Since disabling it, the heating command has dropped from 40% to 0% and supply temperatures have fallen from 73F to 58F. This might cause a sharper-than-usual course correction, but I would expect zone 2 to fall in line with the rest of the VEA by day's end.

E. Otterman T. Guidry
H1 ISC
camilla.compton@LIGO.ORG - posted 10:34, Tuesday 10 June 2025 - last comment - 16:59, Wednesday 25 June 2025(84922)
Noticed BS PIT Moved while locking and then drifts in NLN: not new, happened end of O3b but not 1 year ago.

Sheila, Elenna, Camilla

Sheila was questioning whether something is drifting, such that we need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot, both while powering up / moving spots and while in NLN. It's unclear from the BS alignment inputs plot what's causing this.

This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments before the break too. One year ago this was not happening, plot.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:44, Tuesday 10 June 2025 (84929)

These large BS PIT changes began on the 5th to 6th of July 2024 (plot). This is the day shift alog from the time the first lock like this happened, 5th July 2024 19:26 UTC (12:26 PT): 78877; at the time we were doing PR2 spot moves. There was also a SUS computer restart 78892, but that appeared to be a day after this started happening.

Images attached to this comment
camilla.compton@LIGO.ORG - 09:45, Wednesday 11 June 2025 (84966)ISC, SUS

Sheila, Camilla

This reminded Sheila of when we were heating a SUS in the past, causing the bottom mass to pitch and the ASC to move the top mass to counteract it. Then, after lockloss, the bottom mass would slowly go back to its nominal position.

We do see this on the BS since the PR2 move, see attached (top two left plots). In the green bottom-mass oplev trace, when the ASC is turned off at lockloss the BS moves quickly and then keeps moving slowly over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot from before the PR2 move. Below is a list of the other PR2 positions we tried; all of the other positions have also produced this BS drift. The total PR2 move since the good place is ~3500 urad in yaw (a quick check of this number follows the list below).

  • Different time May 21st to 24th 2024:
    • BS Oplev Drift
    • Plot shows 30urad M1 drift
    • PR2 Alignment Sliders P: 1435, Y: 1130
  • Pre July 5th 2024:
    • No BS Oplev Drift
    • Plot shows 5urad M1 drift
    • PR2 Alignment Sliders P: 1565, Y: 3210
  • July 5th 2024 to 6th Feb 2025:
    • BS Oplev Drift
    • Plot shows 50urad M1 drift
    • PR2 Alignment Sliders P: 1535, Y: 2785
  • 6th Feb 2025 to 10th Feb 2025:
    • BS Oplev Drift
    • Plot shows 30urad M1 drift
    • PR2 Alignment Sliders P: 1480, Y: 1195
  • 10th Feb 2025 to now:
    • BS Oplev Drift
    • Plot shows 30-40urad M1 drift
    • PR2 Alignment Sliders P: 1430, Y: -245
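
As a quick arithmetic check of the "~3500 urad" yaw figure, using the slider values from the list above (pre-July-5th "good place" versus the current position):

# Quick check of the quoted total PR2 yaw move, using the slider values
# listed above (pre-5th-July-2024 "good place" vs. now).
y_good, y_now = 3210, -245   # PR2 yaw slider values (urad)
print(y_good - y_now)        # 3455, i.e. ~3500 urad of yaw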

To avoid this heating and BS drift, we should move back towards a PR2 yaw closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.

Images attached to this comment
jenne.driggers@LIGO.ORG - 14:38, Thursday 12 June 2025 (85002)

I did a bit of alog archaeology to re-remember what we'd done in the past.

  • In August of 2015, we found that we were struggling with PR3 pitch alignment jumping, then cooling down upon lockloss.  Alog 20268 talks about the implementation of the lock loss compensation, which first appeared in ISC_DRMI guardian in rev 11228.
  • At some point (I didn't dig to find out when precisely), we also implemented the same filters for BS pitch.
  • By Jan 2020, both BS and PRM had the soft ASC turn-off.
  • In Jan 2020, ISC_DRMI rev 20905 we removed this soft ASC turn-off for both PR3 and BS.  The referenced alog 54709 notes that we shouldn't need those anymore, since we had installed wire heating baffles, to prevent the wires from being illuminated and heating up.
  • We haven't had the soft turn-off filters in use since 2020, about 3 months before the end of O3b.  This may be why Camilla saw that we were seeing BS drift at the end of O3b.
  • Perhaps our alignment during O4, until we moved the PR spots in May 2024, was such that we weren't susceptible to this wire heating.
  • I don't think PR3 is seeing the same kind of trouble that it did back in 2015 upon lockloss, so I think its wire heating baffles are working as designed, so no need to make any changes to the PR3 controls.
  • Sheila made the point that because we unclipped some of the +Y side of the beam (without moving the spot on the BS), maybe there is a bit more light that is illuminating the barrel of the BS or getting to the wires.  Or, something?  Without having looked at the actual drawings, I could imagine that the wire heating baffles are working better on PR3 than they are on the BS, because we hit PR3 much closer to normal incidence, whereas with the BS the light could be sneaking around the baffles.  Robert thinks that light could get inside the cage baffle and reflect around and be hitting and heating the wires.
  • All of this seems to say that we should re-implement the soft ASC turn-off for the BS. I had a quick look at the 1/e time for the BS to move after lockloss (it's about 241 seconds) and the 1/e time for the filters (about 240 seconds, despite my quoting in alog 54706 that they were 25 min filters; 2*pi's are hard!). See the quick check just below.
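
A quick back-of-the-envelope check of that factor (a sketch, not site code): if the "25 minute" figure quoted in alog 54706 was really 1/f_pole, i.e. the 2*pi got dropped, then the true 1/e time comes out right at ~240 s:

import math

# If "25 minutes" was actually 1/f_pole for a single real pole, the 1/e
# (exponential) time constant is tau = 1/(2*pi*f_pole) = (25 min)/(2*pi).
quoted = 25 * 60             # seconds
tau = quoted / (2 * math.pi)
print(round(tau))            # ~239 s, matching the ~240 s filter 1/e time
                             # and the ~241 s BS relaxation after lockloss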

To put back the soft turn-off of the BS ASC, I think we need to:

  • Disable the BS M1 ASC lockloss trigger. Jeff reminded me that leaving it enabled would foil my plans, since the trigger turns off the ASC signals to the EUL2OSEM matrix. Disabling it will mean that neither the Pit nor the Yaw BS M1 signals will be shut off by the lockloss trigger. To disable it, we'll need to set H1:SUS-BS_M1_TRIG_ASC_ENABLE to zero (which means that the ASC signals will always be passed to the EUL2OSEM matrix). I don't think this is set anywhere in guardian, so we should only need to change it and then accept it in the safe and observe snap files.
  • Change ISC_DRMI around line 66 such that the BS pit gain is not set to zero. Also, have it turn off FM1 in addition to turning off the input (a rough sketch of what this could look like follows this list).
  • Change ISC_DRMI around line 141 to not hit the BS pit RSET button.
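
A minimal sketch of what this "soft let-go" could look like in Guardian-style Python, assuming the usual ezca interface and the standard BS M1 LOCK_P filter-bank naming; the channel and filter names here are illustrative, not copied from ISC_DRMI:

# Hypothetical sketch only -- not the actual ISC_DRMI code.
# In Guardian, the `ezca` object is provided by the environment.

# One-time setup (outside Guardian): keep the ASC path alive through the
# lockloss trigger, then accept this in the safe and observe snap files.
ezca['SUS-BS_M1_TRIG_ASC_ENABLE'] = 0

def soft_let_go_bs_pit():
    """On lockloss, stop feeding new ASC signal to BS M1 pitch, but let
    the stored output bleed away slowly instead of zeroing it."""
    # Turn off the input and FM1 together, per the plan above.
    ezca.switch('SUS-BS_M1_LOCK_P', 'INPUT', 'FM1', 'OFF')
    # Deliberately do NOT set the gain to zero and do NOT hit RSET, so
    # the output decays away with the filter's ~240 s time constant.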

Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight.  Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.

jenne.driggers@LIGO.ORG - 09:57, Monday 16 June 2025 (85075)

I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian and loaded it.  We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (e.g., from a DRMI lockloss).

jenne.driggers@LIGO.ORG - 17:16, Monday 16 June 2025 (85106)

This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different. 

In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread).  Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much and is mostly flattened out after a shorter amount of time.  I also note that this time (-2 hours ago) we didn't need to do an initial alignment (one was done at the -8 hours ago time).  However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.

We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments.  If needed, we can try to tweak the time constant of the 'soft let-go' to make the optical lever signal stay flatter overall.

The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert.  The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.

Images attached to this comment
jenne.driggers@LIGO.ORG - 16:59, Wednesday 25 June 2025 (85344)

RyanS, Jenne

We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and at where it wants to be for the next acquisition of DRMI.  Attached is one such example.

In contrast, a day or so ago Tony had to do an initial alignment.  On that day, it seemed like the BS took much longer to get to its quiescent spot.  I'm not yet sure why the behavior is different sometimes.

Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.

Images attached to this comment
H1 CAL (ISC)
elenna.capote@LIGO.ORG - posted 10:30, Tuesday 10 June 2025 (84921)
Results from Simulines run last night

Last night Corey ran a simulines measurement shortly after the start of the lock, 84908. This measurement was mainly done as a test to confirm that simulines wasn't breaking the lock, so we were not well thermalized. We can first report that simulines did not break the lock, so the previous lockloss that occurred during a simulines measurement was likely unrelated to simulines itself.

GPS start of measurement: 1433563079

I did a time machine on the calibration monitor screen for the GPS start of the measurement.

I was able to generate a report by running pydarm report --skip-gds

I have attached the generated PDF from this report, and I took a screenshot of the first page since there is an interesting result. The sensing function shows a large spring, which is probably due to the fact that we are operating with a significant SRCL offset, which is designed to compensate for 1.4 degrees of SRCL detuning, 84794.

However, it is important to remember that this calibration measurement was made while the IFO was unthermalized, but the SRCL offset measurement I linked here was performed with the IFO thermalized.

The results from this calibration report are saved in /ligo/groups/cal/H1/reports/20250610T035741Z/
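
For anyone reproducing this, the report generation is just the one command quoted above; a rough sketch of wrapping it from Python (assumes the pydarm CLI is available in the cal environment; the report tag is the one quoted in this entry):

import subprocess
from pathlib import Path

# Run the same command quoted above.
subprocess.run(['pydarm', 'report', '--skip-gds'], check=True)

# The report output lands under the reports directory noted above.
report_dir = Path('/ligo/groups/cal/H1/reports/20250610T035741Z')
print(sorted(p.name for p in report_dir.iterdir()))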

Images attached to this report
Non-image files attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 10:07, Tuesday 10 June 2025 (84920)
LVEA Swept

Camilla and I swept the LVEA this morning between locks. The VAC team still has HAM1 pumps to turn off and valve in, as well as a setup with a laptop on the Y arm near the manifold and a turbo on the output arm; they will get to these later in the day. Other notable things found on our walk-through:

 

 

Images attached to this report
LHO FMCS
eric.otterman@LIGO.ORG - posted 09:58, Tuesday 10 June 2025 (84919)
LVEA Zone 2A temp
The heating coil in zone 2A of the LVEA was repaired this morning. There will be some variation in the trending while the PI loop adjusts. 
H1 CDS
david.barker@LIGO.ORG - posted 09:01, Tuesday 10 June 2025 (84917)
Removed CDS SDF diff by powering down EY Tripplite outlet-2

The CDS SDF has had one diff for the past 4 days because the second outlet of the EY Tripplite power strip was turned on around 9am Friday 06 Jun 2025. Strangely, EY's lights were not turned on at all on Friday, so either this was a mistake or something had been plugged into the outlet ahead of time.

Asking around the control room, no one knows why this was turned on. Because driving to EY and entering the VEA would be disruptive to locking, we elected to turn it off for now.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:19, Tuesday 10 June 2025 (84915)
Cryopump low level alarm value increased from 80% to 85%

Following the drop in LN2 level over the last few days (see plot), we decided to bump the alarm level up from 80% to 85% on all CPs to give the vacuum team an earlier warning that the PID control system is not able to maintain a nominal level.

The alarms system was restarted at 08:16 with the following change:

<Channel name="H0:VAC-LX_CP2_LT150_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP2 Pump LN2 Level">
<Channel name="H0:VAC-MY_CP3_LT200_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP3 Pump LN2 Level">
<Channel name="H0:VAC-MX_CP5_LT300_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP5 Pump LN2 Level">
<Channel name="H0:VAC-MX_CP6_LT350_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP6 Pump LN2 Level">
<Channel name="H0:VAC-EY_CP7_LT400_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP7 Pump LN2 Level">
<Channel name="H0:VAC-EX_CP8_LT500_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP8 Pump LN2 Level">
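
For reference, the check these entries imply is just a range comparison on the level channel; a minimal illustration of that logic with pyepics for one of the channels above (this is not the actual CDS alarms code):

from epics import caget  # pyepics

LOW, HIGH = 85.0, 99.0   # new low threshold (was 80.0) and existing high

level = caget('H0:VAC-MY_CP3_LT200_PUMP_LEVEL_PCT')
if level < LOW:
    print(f"CP3 LN2 level {level:.1f}% below {LOW}% -- early warning")
elif level > HIGH:
    print(f"CP3 LN2 level {level:.1f}% above {HIGH}% -- overfill warning")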
 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 07:42, Tuesday 10 June 2025 - last comment - 12:36, Wednesday 11 June 2025(84914)
Tuesday Ops Day Shift - A Light Maintenance Day.


TITLE: 06/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1 was in IDLE when I arrived.
I will start trying to lock now.
 

Comments related to this report
ryan.crouch@LIGO.ORG - 12:41, Tuesday 10 June 2025 (84931)SUS

I've changed the sign of the damping gain for ITMX mode 13 in lscparams from +0.2 to -0.2 after seeing it damp correctly in 2 lock stretches. The VIOLIN_DAMPING GRD could use a reload to pick up this change.

rahul.kumar@LIGO.ORG - 12:36, Wednesday 11 June 2025 (84979)

I have loaded the violin damping guardian, since the setting RyanC found still works.

LHO General
corey.gray@LIGO.ORG - posted 22:30, Monday 09 June 2025 (84908)
Mon EVE Ops Summary

TITLE: 06/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: n/a tonight
SHIFT SUMMARY:

LOG:

H1 SUS
corey.gray@LIGO.ORG - posted 22:26, Monday 09 June 2025 (84913)
ITMy Mode 05 Changed Due To Mode Ringing Up During First Post-HAM1-Vent Lock

Elenna C., Corey G., Oli P.

With this being the first lock at NLN after the HAM1 vent, and since we had a rung-up fundamental violin mode, we decided to address it for overnight operation.  It looked like it was our infamous ITMy MODE5/6.

Before Oli left, they mentioned that if IY M5/6 rings up, it might be worth trying the 2W settings.

Sure enough, it was slowly ringing up, and with Elenna also assisting, we decided to change the settings.  Here they are:

NEW:

ITMy MODE5 :  FM6 + FM7 + FM10, gain = 0.01

OLD:

ITMy MODE5 :  FM6 + FM8 + FM10, gain = 0.01

The 3rd image shows when the change was made (marked with the cursor) and how the mode begins turning around about 10 min later.  Saved this new change in lscparams and hit LOAD on the VIOLIN_DAMPING and ISC_LOCK nodes.
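
For illustration, applying the NEW MODE5 settings by hand would look something like the following, assuming the usual ezca interface and L2 DAMP filter-bank naming (the channel names are an assumption, not copied from lscparams):

# Illustrative only -- channel names assumed, not taken from lscparams.
# The Guardian `ezca` object is assumed to be in scope.
bank = 'SUS-ITMY_L2_DAMP_MODE5'

ezca.switch(bank, 'FM8', 'OFF')                 # old setting used FM8
ezca.switch(bank, 'FM6', 'FM7', 'FM10', 'ON')   # new: FM6 + FM7 + FM10
ezca[bank + '_GAIN'] = 0.01                     # gain unchanged at 0.01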

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 21:54, Monday 09 June 2025 (84912)
Some Observe SDF reconciliation

I did some quick reconciliation of some of the observe SDF diffs. An easy one was all the dark offsets. I also accepted many of the ASC SDFs, since I am responsible for many of them; this includes things like PD phasing, new intrix values, gain changes, etc. I also accepted several LSC SDFs, which also include PD phasing, matrix changes, feedforward changes, and the MICH filter change. The only one I am not familiar with in the attached screenshot is the MCL trigger threshold.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 18:49, Monday 09 June 2025 - last comment - 08:40, Tuesday 10 June 2025(84911)
(Almost) at NLN, Fast locklosses preventing calibration

Headline summary: We are very nearly back to NLN, prevented from returning only by the violin modes, which are too high to engage OMC whitening. We have not yet been able to calibrate because of a very fast lockloss of unknown origin.

The alog was down for most of the afternoon and evening, so I will do my best here to copy messages from the Mattermost chat, which served as the temporary alog.

Minor struggles in returning to full lock:

Once we achieved full power, we proceeded to try to solve some of the final instabilities left over from last week. Changes made to avoid locking stability problems:

The plan was to try testing simulines again; however, we have had multiple very fast locklosses with no known origin. Two have happened shortly after arriving at OMC whitening. These are not coming from ASC, and I don't see any ring-up in DARM or the LSC loops.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 08:40, Tuesday 10 June 2025 (84916)Lockloss

Had a look at the fast locklosses from OMC_WHITENING; the lockloss tool tags windy for both (~20 mph, so not bad), and we are waiting for the violins to damp before engaging OMC_WHITENING.

Note that although these aren't slow ring-ups, they are our typical type of lockloss and are not IMC fast locklosses.

  • 1433550277
    • ETMX SUS L3 sees a glitch/noisy period first: plot
  • 1433554608
    • ETMX SUS starts becoming noisy 87ms before LL: plot
  • As always, it's hard to differentiate whether ETMX or DARM is the issue.
Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 16:31, Monday 09 June 2025 (84909)
Monday Ops Day Shift report.

TITLE: 06/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 18mph Gusts, 9mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.17 μm/s
SHIFT SUMMARY:

Covering for Ryan C.
OFI Returned to aligned... No output. Apparently this is normal.

SRM already aligned by Elenna.
Attempting to start relocking 19:00 UTC
Potential Lockloss from FIND_IR when I requested SEI_ENV to cycle between Maintenance & CALM.

Lots of saturations when we ran through LOWNOISE_COIL_DRIVERS

Unknown lockloss from Low_Noise ASC (we manualed over a handful of states and ended up manualing back to this state).

Adjusted the ALS polarization of both X and Y arms.
Ran a Manual_Initial_Alignment Finished @ 22:26 UTC

No internet due to a GC switch restart at 22:40 UTC. Internet back up a few moments later.


Lots of saturations when we ran through LOWNOISE_COIL_DRIVERS again; we did not lose lock either time.

Lockloss due to DHARD P ring up while in LOW_NOISE_LENGTH_CONTROL.

LOG:

Start Time System Name Location Laser_Haz Task Time End
14:47 OPS TJ, Camilla LVEA N -> Y Put in bellows 15:28
14:56 FAC Kim, Nellie LVEA N -> Y Technical clean 15:23
15:03 SAF Richard LVEA N Safety checks 15:20
15:07 OPS TJ LVEA N -> Y LASER hazard transition 15:28
15:20 VAC Jordan & Gerardo LVEA Y Checks & Balances on HAM1 16:00
15:22 FAC LVEA is LASER HAZARD LVEA YES LVEA is LASER HAZARD ദ്ദി(⎚_⎚) 22:54
15:24 FAC Kim, Nellie FCES N Tech clean 16:47
15:26 FAC Randy LVEA Y Move forklift to receiving 15:56
15:34 VAC Gerardo LVEA Y Open GV2 then GV1 16:00
15:53 ISC Camilla LVEA Y Quick table check 15:55
16:00 FAC Tyler, Chris High bay, MidY N Forklift to mids 18:09
16:03 VAC Gerardo, Jordan EndY N VAC checks 16:40
16:16 ISC Camilla, TJ LVEA Y ISCT1 alignment still 19:55
16:24 ALS Keita Ends N Take pictures of electronics racks 17:07
16:24 ISC Elenna LVEA Y Join table alignment crew 17:56
16:24 ISC Elenna LVEA Y Alignment on ISCT1 16:24
16:47 FAC Kim MidX N Tech clean 18:04
16:48 FAC Nellie MidY N Tech clean 17:38
16:52 VAC Gerardo, Jordan EndY N CP7 checks 17:10
16:57 FAC Richard LVEA N Check out network stuff 17:02
18:01 ISC Elenna LVEA Y Table work 18:59
18:09 ISC Keita MidY N Grab part out of storage 18:24
19:01 CDS Patrick Ey & EX N Getting data from BRS computers 20:23
19:42 FAC Randy LVEA y Looking for tools in LVEA 19:57
19:51 VAC Travis & Tyler MidX N Looking for Cryopump storage space. 21:13
20:32 PEM Marc, Kiet Mid X N Looking at Fiber box connections to vertex Vault 20:50
21:14 VAC Gerardo End Y N Twiddling Vac Valves 21:53
22:44 PCAL Francisco PCAL Lab yes Shutting apertures & laser apertures 23:04

 

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 22:23, Thursday 05 June 2025 - last comment - 10:45, Tuesday 10 June 2025(84846)
CP7 Liquid Level Control Valve (LLCV) is Railing (Known Issue)

I noticed that the LLCV is railing at its top value, 100% open; it can't open any further.  This is a known issue, but it appears that the valve is reaching 100% sooner than expected, i.e. when the tank is almost half full.  First, I'm going to try re-zeroing the LLCV actuator and await the results.  The first attachment is a 2-day plot of the LLCV railing today and yesterday.  The second plot is a 3-year history of the tank level and the LLCV; it rails at 100% a few times.

Images attached to this report
Comments related to this report
jon.feicht@LIGO.ORG - 09:59, Friday 06 June 2025 (84856)
BURT restore? PID tuning ok? CP2 @ LLO PID parameters attached for comparison.





Images attached to this comment
gerardo.moreno@LIGO.ORG - 16:19, Friday 06 June 2025 (84873)VE

Thanks Jon.  However, this system has a known issue: it turns out that the liquid level control valve is not suitable for the job, which is why it reaches 100% sooner rather than later.  But it appears as if something slipped, and it now reaches 100% at a higher tank level; this is why I want to re-zero the actuator.

Attached is the fill control for CP7.  The issue was first mentioned in aLOG 4761, but I never found out who discovered it; it is only briefly mentioned by Kyle. Another entry is at aLOG 59841.

Images attached to this comment
gerardo.moreno@LIGO.ORG - 10:45, Tuesday 10 June 2025 (84923)VE

A dirty solution to the LLCV getting railed at 100% open: we used the bypass valve, opening it up by 1/8 of a turn, and that did the job.  It wasn't a single shot, but eventually we settled on that turn number.  The PID took over and managed to settle the LLCV at around 92% open.  Today we received a load for the CP7 tank.  We are still going to calibrate the actuator.
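
For context, the level control here is a standard PID loop driving the LLCV percent-open command toward a level setpoint; a generic sketch of that structure (placeholder gains and setpoint, not the real CP7 tuning or the actual vacuum controls code):

class PID:
    """Textbook PID with output clamped to the valve's 0-100% range."""
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Clamp to the valve range; a valve stuck at the 100% rail (as
        # CP7's LLCV was) is saturated and the loop can no longer regulate.
        return min(max(out, self.out_min), self.out_max)

# Placeholder numbers: drive the LLCV toward a 92% LN2 level setpoint.
pid = PID(kp=2.0, ki=0.05, kd=0.0)
valve_pct = pid.update(setpoint=92.0, measured=88.0, dt=60.0)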

Images attached to this comment
H1 CAL
ibrahim.abouelfettouh@LIGO.ORG - posted 16:27, Thursday 05 June 2025 - last comment - 11:22, Tuesday 10 June 2025(84834)
(Incomplete) Calibration June 5, 2025

(This is Oli)

Once we had been at 60W for two hours, I started a calibration measurement. I started with Simulines since I had gotten a broadband measurement done yesterday (84808). A couple of minutes into the measurement, we lost lock. The cause is unknown, but I've attached the output from the simulines measurement (txt).

Images attached to this report
Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 16:56, Thursday 05 June 2025 (84836)

Lockloss happened as Calibration signals were ramping on, see attached. First glitch in L3, see attached.

Images attached to this comment
francisco.llamas@LIGO.ORG - 11:22, Tuesday 10 June 2025 (84925)

Attaching a trend of the OMC DCPD sum during the lockloss. The plot suggests the DCPDs were not the cause of the lockloss.

Images attached to this comment