LHO General
corey.gray@LIGO.ORG - posted 21:59, Wednesday 17 September 2025 (86996)
Wed EVE Ops Summary

TITLE: 09/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:  Nice quiet shift with H1 now locked for about 26hrs.

LOG:   2340 Betsy & Jason out of the optics lab

LHO General
corey.gray@LIGO.ORG - posted 16:42, Wednesday 17 September 2025 - last comment - 20:32, Wednesday 17 September 2025(86995)
Wed EVE Ops Transition

TITLE: 09/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:

H1's been locked since last night (20.5 hrs now)! H1 was in need of an ISC_LOCK node LOAD, but the H1 SQZ briefly dropped us out of Observing, so Tony was able to take this LOAD off the to-do list.

Seismically, there were a couple of EQs which required EQ mode, but H1 rode right through them (last night and during Tony's shift); no "ASC Hi Gn" transition was needed.

From last night, it looks like TJ & Jim took care of BRSy, and the violins were fixed by Ryan C (and thanks for the note about its finicky gain!)  :)

Comments related to this report
corey.gray@LIGO.ORG - 20:32, Wednesday 17 September 2025 (86997)

Smooth sailing for H1 with it being locked for 24.5hrs & observing.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:30, Wednesday 17 September 2025 (86994)
Fairly Quiet Ops Day Shift.

TITLE: 09/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
H1 has mostly been locked and Observing all day.
We dropped from Observing at 15:18 UTC to apply violin damping and were back to Observing at 15:22 UTC.

We dropped to commissioning again when the SQZr lost lock at 23:07 UTC and were back to Observing at 23:11:12 UTC.

LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
16:08 | SPI | Jeff | Optics Lab | N | Dropping off parts | 18:26
16:39 | FAC | MacMiller | VPW | N | Contractors quietly working at VPW | 00:39
17:10 | EOM | Rick | Optics Lab | N | Looking for parts | 18:10
17:37 | PSL/IO | Jason & Betsy | Optics Lab | N | Crystal photography | 19:47
18:19 | SPI | Camilla | Optics Lab | N | Helping Jeff & Jason | 19:00
21:58 | EOM | Jason & Betsy | Optics Lab | N | Crystal photography | 23:58
22:48 | SPI | Jeff | Optics Lab | N | Dropping off parts | 23:00
H1 ISC
elenna.capote@LIGO.ORG - posted 16:23, Wednesday 17 September 2025 (86993)
CARM offset reduction data and model

Since we have been struggling with the CARM offset reduction sequence, here is a quick look at the data from our most recent lock compared with a model I borrowed from Sheila; see alog 62110.

I plotted the TRX norm data versus the REFL LF data, normalizing REFL LF to its value when only DRMI is locked (3.7 mW). Here are the TR CARM offsets (uncalibrated) as well, for reference:

TRX norm | REFL A LF (mW) | TR CARM offset
4.68 | 3.65 | -3
33.3 | 3.58 | -8
42.2 | 3.55 | -9
808.63 | 1.96 | -40
1207.53 | 1.24 | -49
1359.8 | 0.885 | -52
1579.92 | 0.502 | -56

I found Sheila's code in the alog above and added these data points to it to plot the transmission versus the REFL DC power for different PRGs. This suggests, as we expect, that our PRG is quite high, with the data points showing a PRG between 48 and 54. However, the very early points indicate a much lower PRG.
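
For reference, a minimal sketch of this comparison using just the measured points from the table above (this is not Sheila's model from alog 62110; the normalization follows the description above):

import numpy as np
import matplotlib.pyplot as plt

# Measured points from the table above
trx_norm   = np.array([4.68, 33.3, 42.2, 808.63, 1207.53, 1359.8, 1579.92])
refl_lf_mw = np.array([3.65, 3.58, 3.55, 1.96, 1.24, 0.885, 0.502])
tr_carm    = np.array([-3, -8, -9, -40, -49, -52, -56])  # uncalibrated TR CARM offsets

refl_norm = refl_lf_mw / 3.7  # normalize to the DRMI-only REFL LF value

fig, ax = plt.subplots()
ax.plot(refl_norm, trx_norm, 'o')
for x, y, d in zip(refl_norm, trx_norm, tr_carm):
    ax.annotate(str(d), (x, y))  # label each point with its TR CARM offset
ax.set_xlabel('REFL A LF / 3.7 mW (DRMI-only value)')
ax.set_ylabel('TRX (normalized arm transmission)')
ax.set_title('CARM offset reduction: arm buildup vs REFL power')
plt.show()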

I also added Sheila's plot of the arm transmission versus CARM offset in picometers. Based on our transmission, it seems that at the "CARM 150 picometers" state, our CARM offset is actually pretty close to 150 pm. Similarly so for CARM 5 pm, assuming our PRG is close to 54. These points are marked with stars.

Anyway, this doesn't really help us understand what's going wrong with the offset reduction.

Images attached to this report
H1 SUS (SUS)
anthony.sanchez@LIGO.ORG - posted 16:15, Wednesday 17 September 2025 (86992)
Bounce and Roll modes seem a bit higher.

Jeff and I noticed that today's lock may have some elevated bounce and roll modes:
9.7 Hz bounce modes.
13.7-13.9 Hz roll modes.
Some fundamental 500 Hz violin modes for fun.
Another plot from 6-600 Hz from earlier in this lock.


I also took a look at a different lock ~10 days ago, pre power outage:
9 Hz bounce mode.
13 Hz roll mode.
Some exquisite-looking violins.

Tagging SUS.
 

Images attached to this report
H1 IOO (ISC)
elenna.capote@LIGO.ORG - posted 12:18, Wednesday 17 September 2025 (86989)
IMC powers

We have now been locked for over 16 hours.

IMC REFL DC power is steady at 18.5 mW

IMC WFS A is at 0.95 mW and IMC WFS B is at 0.75 mW

The IMC power in is 62 W and the power at IM4 trans is 56.7 W

MC2 trans is about 9670 [mystery units]

This is reasonable power for IMC refl, but the WFS power is very low. These are the jitter witnesses, and jitter subtraction is not performing as well as it was before the power outage. I can think of several possible reasons for this, but I'm sure that having less than a mW of power isn't helping.

We may want to consider either a) increasing the power on the IMC refl path, b) changing the splitter between IMC refl and IMC WFS to a 50/50 instead of a 90/10, or c) some combination of the first two options that gets us reasonable power on both IMC refl and IMC WFS.
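
As a back-of-envelope illustration of option (b), here is a minimal sketch assuming the current split really is 90/10 and ignoring any other losses in the path (both are assumptions, not measurements):

p_refl = 18.5          # mW, measured IMC REFL DC
p_wfs  = 0.95 + 0.75   # mW, measured WFS A + WFS B
p_total = p_refl + p_wfs

for refl_frac in (0.9, 0.5):  # fraction of the light sent to the REFL path
    refl = refl_frac * p_total
    wfs_each = (1 - refl_frac) * p_total / 2  # assume WFS A/B split the rest equally
    print(f"{refl_frac:.0%} to REFL: REFL ~ {refl:.1f} mW, each WFS ~ {wfs_each:.2f} mW")

Under these assumptions, a 50/50 would put roughly 10 mW on IMC refl and ~5 mW on each WFS.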

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:11, Wednesday 17 September 2025 (86987)
Wed CP1 Fill

Wed Sep 17 10:08:25 2025 INFO: Fill completed in 8min 22secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 SEI
thomas.shaffer@LIGO.ORG - posted 09:28, Wednesday 17 September 2025 (86985)
Corrupted frame on BRSY caused high velocity overnight

Yesterday it initially looked like the BRSY was run up by activity at EY, but by the end of the day there was still no damping going on, and DIAG_MAIN as well as SEI_CONF had notifications about BRSY. I spoke with Jim and he walked me through the same procedure that he and I did back in July (alog 86074) when this happened last. The difference between last time and this time is that the checks I put in worked and we caught this much sooner. I also now have the correct way to remote into the BRS machine.

After logging in and recapturing frames two times, the BRS is damping nicely.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 08:04, Wednesday 17 September 2025 - last comment - 08:56, Wednesday 17 September 2025(86981)
Wednesday Ops Day shift report.

TITLE: 09/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.17 μm/s
QUICK SUMMARY:

H1 is locked and has been Observing for 12 hours.
The violins are very upset; it seems like ETMY mode 1 is angry. The DCPDs are very diverged.
The main DARM screen is being restarted.


A new DIAG_MAIN notification:
SEI_STATE: SEI_CONF might be stuck

Comments related to this report
ryan.crouch@LIGO.ORG - 08:56, Wednesday 17 September 2025 (86984)SUS

15:30 UTC I put the nominal gain of -0.1 into ETMY mode1 and it started to damp down.

It looked like it was damping at the beginning of the lock with a negative gain, but at 03:44 UTC the gain was changed from -0.2 to +0.2 without pausing at zero, which is the safe thing to do. The mode pretty much immediately started to ring up until Guardian turned it off at 04:38 UTC. It was damping with the nominal gain at the start of the lock, but the gain was increased in a few steps from -0.1 to -0.4; ETMY mode1 is a finicky mode, and increasing the gain does not always increase the damping rate.
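
For the record, here is a hedged sketch of the "pause at zero" procedure described above; the bank name follows the usual SUS violin-damping channel pattern but is an assumption to verify on site, and this is not the guardian's actual code:

import time

def safe_damp_gain_change(ezca, bank, target, ramp=10.0):
    """Move a violin-mode damping gain to `target`, pausing at zero
    whenever the sign flips (e.g. -0.2 -> 0.0 -> +0.2)."""
    # bank is e.g. 'SUS-ETMY_L2_DAMP_MODE1' (hypothetical; check the name)
    ezca[bank + '_TRAMP'] = ramp
    current = ezca[bank + '_GAIN']
    if current * target < 0:
        ezca[bank + '_GAIN'] = 0.0
        time.sleep(ramp + 1)  # let the ramp to zero finish before flipping sign
    ezca[bank + '_GAIN'] = target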

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 22:40, Tuesday 16 September 2025 - last comment - 08:37, Wednesday 17 September 2025(86976)
Tues EVE Ops Summary

TITLE: 09/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: TJ
SHIFT SUMMARY:

H1 back to OBSERVING again & leaving it so overnight (Elenna messaged TJ [owl] about status).

Day 6 post-power-outage (from noon PDT last Wednesday).

Hand-off from TJ at beginning of shift:

End Of Shift Notes:

  1. Violins were mostly damped down at the beginning of the lock, but ETMy Mode1 started ringing up. I tried several gains in that first hour of the lock, but toward the last hour of the shift, the gain I left in rung up the mode to almost 6.0, and it was taken to 0.0 by Guardian. I contacted Rahul, and for the night we will leave this gain at 0.0. The mode is slowly coming down, but it is now off-screen on DARM.
  2. BRSy continues to be "Not In Use"
  3. NUC30's DARM DTT timed out, and clicking the Start button didn't help. I restarted nuc30 a couple of times, but the startup script would not restore this computer to its nominal state. Leaving it for the morning.

LOG:

Locking Notes

Comments related to this report
david.barker@LIGO.ORG - 08:37, Wednesday 17 September 2025 (86982)

Jonathan and Tony fixed the nuc30 missing window manager, FOMs are back working again.

H1 General (ISC, SQZ)
corey.gray@LIGO.ORG - posted 20:15, Tuesday 16 September 2025 (86980)
H1 Back To Observing, Only A Couple SDFs

H1 made it to NLN, and there were a couple SDFs:

  1. LSC: TR_CARM_GAIN had a setpoint of 1.0 but was at 2.1; Elenna said this should be ACCEPTed.
  2. SQZ: OPO_SERVO_COMEXCEN had a setpoint of "Off" but was at "On"; Elenna said to REVERT this.

The violin fundamental was just above 1e-15, but the violins are now being damped (a handful have some extra gain to help expedite things).

Elenna messaged TJ (owl shift) to give him a heads-up that he may need to do Guardian code work.

Images attached to this report
H1 ISC (GRD, OpsInfo)
elenna.capote@LIGO.ORG - posted 20:15, Tuesday 16 September 2025 - last comment - 12:07, Wednesday 17 September 2025(86979)
Changes to CARM offset reduction sequence

Tagging OpsInfo and guardian since this log includes some big changes.

Tonight, after several tries and failures, I have found a workaround for the CARM offset reduction sequence that should get us locked. I have adjusted the code and proofread it multiple times, but it is currently untested in the sense that I have not run it; I have only replicated in code the steps that I took to get us locked.

The major problem I faced tonight is that I could not engage DHARD at DHARD WFS; the CARM offset that is set in CARM_150_PM is just too far off the fringe to make the DHARD signal any good. I found that as soon as the CARM OFFSET REDUCTION state ran, the DHARD signal was actually usable for control. The first thing CARM OFFSET REDUCTION does is set H1:LSC-TR_CARM_OFFSET to -7, whereas CARM 150 PM ends with this offset at -3. I found that getting this offset to -7 or -8 is sometimes close enough to make the DHARD signal "real". I tried just setting the value in CARM 150 PM, but I had one lockloss with that strategy; I'm not sure if it was the cause. I finally decided that I think the correct order here (for now) should be CARM 150 > DARM TO RF > PARK ALS VCO > SHUTTER ALS > CARM OFFSET REDUCTION > DHARD WFS. DHARD WFS usually comes right after DARM TO RF, so this involves moving the DHARD engagement up a bit (see the sketch below). This is a little risky, as anyone who watches the AS camera during CARM offset reduction knows, because the arm alignment starts to get really shaky as the arms get closer to resonance. However, I've been doing a version of this for a bit now as part of debugging this sequence, and I think it generally works.
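
To make the reordering concrete, here is a hedged sketch of the new path expressed as guardian-style edges; the state names are the ones used in this log, but the real ISC_LOCK.py ladder is more involved than this:

# Old order: ... > DARM_TO_RF > DHARD_WFS > PARK_ALS_VCO > SHUTTER_ALS > CARM_OFFSET_REDUCTION
# New order, engaging DHARD after CARM OFFSET REDUCTION has pushed the offset to -7/-8:
edges = [
    ('CARM_150_PICOMETERS',   'DARM_TO_RF'),
    ('DARM_TO_RF',            'PARK_ALS_VCO'),
    ('PARK_ALS_VCO',          'SHUTTER_ALS'),
    ('SHUTTER_ALS',           'CARM_OFFSET_REDUCTION'),
    ('CARM_OFFSET_REDUCTION', 'DHARD_WFS'),
]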

Besides that problem, we had several locklosses tonight around CARM 5 PM, which is where Sheila made several changes to follow our "new recipe" for getting CARM to REFL. A major change we are making here is that we are further reducing the CARM offset and further bumping up the PRCL gain while it is on REFLAIR 27I. PRCL seems to be losing too much gain as we reduce the offset, and it causes locklosses when we reach resonance. However, once PRCL is on POP, the gain is fine, so Sheila and I chose to edit the LSC input matrix value. I had to correct some errors to ensure the correct intrix value is changed and that it is changed to the correct value. Then, I realized that in our "recipe" steps, we actually ran the entire usual CARM 5 PM state first, and then we ran our new steps, which hard-code a TR CARM offset and such. So I edited the code to run the usual CARM 5 PM steps, and then, if this "after_power_outage" value is set to True, also run the additional recipe steps (a sketch follows below).
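
A hedged sketch of the resulting control flow; names not shown in this log are placeholders, the input matrix line is the corrected form given in the comments below, and the hard-coded TR CARM offset value is not spelled out here, so it is elided:

after_power_outage = True  # set False once the IFO no longer needs the recipe

def carm_5_pm_run(ezca, ISC_library, lscparams):
    """Sketch: the usual CARM 5 PM steps, then the extra recipe if flagged."""
    # ... usual CARM_5_PICOMETERS state logic runs first ...
    if after_power_outage:
        # bump the PRCL gain while it is still on REFLAIR 27I
        ISC_library.intrix['PRCL', 'REFLAIR_B27I'] = \
            1.6 * lscparams.gain['DRMI_PRCL']['3F_27']
        # ... additional hard-coded TR CARM offset steps go here ...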

That is a lot of words, so below I am going to write down the steps of what should happen, if all goes correctly:

Here are some other details:

Since DHARD WFS was behaving bizarrely, I did leave the green shutters open and checked the green alignment once the ASC converged, using the normal technique of running the green QPD offsets while the IFO ASC converges so the green alignment follows the IFO alignment. Once that completed, I checked and all the offsets appeared to be the same, the largest difference being less than 0.1. Therefore, I concluded that it is unlikely that we need to reset the green alignment, and that whatever is causing the DHARD issue is not due to a bad alignment.

When I moved DHARD WFS around in the ladder, I realized that this messes up the IDLE_ALS option. I'm hoping that this DHARD change is temporary, so I just commented out the ladder options that include IDLE_ALS.

We had a mystery lockloss during MOVE SPOTS that I don't understand.

Based on our calculations of the PSL-to-IMC throughput, I determined that the PSL power should be set to 62 W requested, which should give us 56 W at IM4 trans (we get about 91% now versus 93% before the power outage). I confirmed that is the case; IM4 trans is 56.7 W now, before we went to laser noise suppression (where the ISS second loop is engaged). I edited the NLN power value in lscparams and reloaded both the ISC_LOCK and LASER_PWR guardians.
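
A quick sanity check of the numbers quoted above:

throughput_now = 56.7 / 62.0  # ~0.915, i.e. about 91%, vs ~93% before the outage
psl_request = 56.0 / throughput_now
print(f"throughput now: {throughput_now:.1%}")
print(f"PSL request for 56 W at IM4 trans: {psl_request:.1f} W")  # ~61 W, rounded up to 62 W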

Comments related to this report
david.barker@LIGO.ORG - 08:39, Wednesday 17 September 2025 (86983)

Here are the main changes to isc/h1/guardian/ISC_LOCK.py as displayed by meld, the new code is on the right.

I have not committed ISC_LOCK.py to the subversion repository, so 'svn di ISC_LOCK.py' still shows the recent changes.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 10:08, Wednesday 17 September 2025 (86986)

Tony, Sheila

I've reverted the run state of CARM_5_PICOMETERS to the way I left it in 86974; I believe the way I wrote it was correctly doing the TR_CARM offset reduction and gain changes after the usual previous steps. This is committed in the svn as r33120, but it hasn't been loaded, as we are in observing.

I left the change to the DHARD engagement order in. We used to do the DHARD engagement later in the CARM offset reduction like this, but we've found the process to be much more tolerant to variations in the initial alignment since we moved this step lower on the fringe. Perhaps we need to check the phasing of the AS45 WFS to see if something is wrong with our error signal so that we can move this step earlier.

elenna.capote@LIGO.ORG - 12:07, Wednesday 17 September 2025 (86988)

Sheila reverted the code to her original method, which was fine except for a few errors:

  • A typo on line 2697, ezca['ASC_DHARD_P_TRAMP'], caused a channel connection error last night.
    • I edited this to correctly say ezca['ASC-DHARD_P_TRAMP'].
  • The guardian set the incorrect input matrix value on line 2690.
    • The correct code should be: ISC_library.intrix['PRCL', 'REFLAIR_B27I'] = 1.6*lscparams.gain['DRMI_PRCL']['3F_27']
    • Sheila's original version changed REFLAIR 9 instead of REFLAIR 27 and multiplied the incorrect lscparams gain dictionary value.

I saved these changes in ISC_LOCK but did not load it. Tony made a note to load the guardian at the next lockloss.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:34, Tuesday 16 September 2025 - last comment - 12:44, Wednesday 17 September 2025(86969)
Ops Day Shift Start

TITLE: 09/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Lots of work today to understand our troubled IFO, as well as some maintenance items. We have perhaps narrowed down the change from the power outage and have changed the input power into the PMC to bring us back to a similar place. We ran an initial alignment, which probably could have run automated, and we are now going through PRMI/DRMI for a second time. I accepted some SDFs in safe before we started locking; see screenshots attached.

BRSY was rung up during some work at EY today and doesn't seem to be damping. I've contacted Jim, but we will keep an eye on it.


LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
14:36 | SYS | Randy, Mitchell, Chris | EY | n | Craning spiral staircase out | 16:36
14:58 | FAC | Contractor (C&E) | Vertex | n | Fire hydrant repair in vertex area near FCBTE | 20:19
15:11 | CDS | Ken | LVEA | n | HAM5/6 cable tray install | 19:04
15:17 | FAC | Nelly | LVEA | n | Tech clean | 16:40
15:33 | SPI | Jeff | Opt Lab | n | Parts | 15:38
15:34 | - | Jennie, parents | LVEA | n | Tour | 16:11
15:35 | CDS | Fil | LVEA | LOCAL | IOT2 table enclosure lights | 15:59
15:38 | VAC | Janos, Travis | MY | n | Pump work | 19:22
15:45 | ISC | Camilla | Opt Lab | n | Grab equipment | 15:50
15:46 | ISC | Camilla, Sheila | LVEA | LOCAL | IOT2 table checks | 17:09
15:46 | PSL | Jason, Ryan S | PSL enc | YES | PSL FSS adjustment | 18:54
15:56 | PEM | Ryan C | LVEA | n | Dropping off dust monitor testing equipment for the PSL team | 15:58
16:00 | SUS | Ryan C | EX | n | SUS charge meas. | 17:59
16:15 | SEI | Jim | LVEA | n | Replace CPS card for HAM3 | 17:09
16:41 | FAC | Nelly | PSL enc | YES | Tech clean | 16:46
16:41 | SYS | Betsy | LVEA | n | Mega cleanroom sock meas. and check on status of work | 16:55
16:52 | ISC | Elenna | LVEA | n | Unplugged SR785 and other test equipment | 16:52
17:24 | FAC | Nelly | EX | n | Tech clean | 17:58
17:32 | VAC | Gerardo, Jordan | LVEA | n | AIP check at HAM6 | 17:38
17:33 | FAC | Tyler | Mids | n | 3IFO checks | 18:33
17:59 | FAC | Nelly | HAM Shack | n | Tech clean | 18:34
19:59 | CDS | Marc | MY | n | Grabbing a chassis | 21:36
20:20 | PSL | Jason, Ryan S | PSL enc | YES | Table test | 21:55
20:27 | VAC | Gerardo, Camilla | LVEA | n | Looking for viewport covers | 20:46
20:46 | VAC | Gerardo | LVEA | n | HAM6 AIP | 20:48
20:54 | PCAL | Francisco | PCAL lab | LOCAL | PCAL lab work | 21:21
22:28 | - | Oli | LVEA | n | Sweep | 22:49
Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 12:44, Wednesday 17 September 2025 (86990)

Interestingly, we saw a slight rise in the IOP duotones for all four EY front ends, which coincided with the spiral-staircase craning.

The plot shows all four IOP DUOTONE channels, AC2 power strip current usage, and building lights. The sequence is:

07:51 lights on
08:04 duotone rise
09:13 duotone starts dropping, AC2 less noisy
09:32 lights out

Images attached to this comment
H1 SUS (SEI)
brian.lantz@LIGO.ORG - posted 17:22, Thursday 21 August 2025 - last comment - 13:09, Wednesday 17 September 2025(86510)
Version 2 of a blend for the SR3 pitch estimator

I created an updated blend for the SR3 pitch estimator. These are in the SUS SVN next to the first version (see LHO log 84452).

The design script is blend_SR3_pitchv2.m

the Foton update script is make_SR3_Pitch_blend_v2.m

these are both in {SUS_SVN}/HLTS/Common/FilterDesign/Estimator/  - revision 12608

The update script will install the new blends into FM2 of SR3_M1_EST_P_FUSION_{MEAS/MODL}_BP with the name pit_v2. pit_v1 should still be in FM1. Turn on FM2, turn FM1 off. 
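
A hedged one-liner version of that switch using the ezca filter-switch interface (illustrative, not a tested script; verify the bank names before running):

def install_v2_blends(ezca):
    """Sketch: swap the SR3 pitch-estimator blends from pit_v1 (FM1) to pit_v2 (FM2)."""
    for path in ('MEAS', 'MODL'):
        bank = f'SUS-SR3_M1_EST_P_FUSION_{path}_BP'
        ezca.switch(bank, 'FM2', 'ON')   # new pit_v2 blend
        ezca.switch(bank, 'FM1', 'OFF')  # old pit_v1 blend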

Jeff and Oli tried the first one and saw that the first two modes (about 0.65 and 0.75 Hz) see more motion with the estimator damping than with the normal damping. To correct this, _v2 adds OSEM signal to the estimator for those modes. See the plots below: the first two are the _v2 blend and a zoom of the _v2 blend. Figure 3 shows the measured pitch plant vs. the OSEM path; the modes line up pretty well. There is a bit of shift because the peaks are close together, but I hope this will not matter. Figure 4 shows the plant vs. the model path. Now all four modes are driven by the measured OSEM signal instead of the model.

It is interesting to see that the model was not doing a good job of predicting the motion at the first two peaks. This is (I guess) because either (a) the model and the plant are different, or (b) there are unmodeled drives pushing the plant (the suspension) that the model doesn't know about.

I'm guessing the answer is (b - unmodeled drives), likely from DAC noise. I think this because:
1 - The plant fit is smooth and really good.
2 - In the yaw analysis that Edgard and Ivey are doing (not yet posted), the first mode of the yaw plant can be seen with the OSEM, but the ISI motion is much too small to excite that level of motion. Yet the OSEM can see the motion, so something is exciting it.
3 - DAC noise is the only thing I can think of.

A quick chat with Jeff indicates that the DAC noise models at those frequencies are not well trusted. We'll try something anyway and see if it is close. I don't see how to use the estimator to deal with that noise; we'd need an accurate realtime measurement.

Images attached to this report
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 10:30, Friday 22 August 2025 (86516)

Updated make_SR3_Pitch_blend_v2.m to r12610 after fixing the filter name and the sub-block it writes to. We will load these the next time we lose lock.

brian.lantz@LIGO.ORG - 11:29, Friday 22 August 2025 (86518)

Thanks Oli!

Also - as a note to myself - I've attached 1 sec of drive signal from the SR3 outputs at the time when both estimators were on (2025-08-21 16:40 UTC, see LHO log 86491). The pitch drive is about 30 to 50 counts pk-pk, and not particularly high frequency compared to the 16384 Hz model rate. This suggests that the low-frequency DAC noise is worth following up, and also that it could theoretically be improved with whitening filters. However, since the DC levels are ~10k counts to hold the alignments, a simple gain probably won't work. Please note: I am NOT suggesting any changes here, just logging some observations for follow-up.
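
A rough version of that headroom argument; the DAC bit depth is an assumption here:

dac_full_scale = 2**17  # counts; assumes an 18-bit DAC (+/- 131072 counts)
dc_hold  = 10_000       # counts of DC drive holding the alignment
drive_pp = 50           # counts pk-pk of damping drive
max_flat_gain = dac_full_scale / (dc_hold + drive_pp / 2)
print(f"max flat gain before DC saturation: ~{max_flat_gain:.0f}x")
# Only ~13x of flat gain is available, which is why whitening (gain only where
# the drive lives, not at DC) looks more promising than a simple gain.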

Images attached to this comment
brian.lantz@LIGO.ORG - 13:09, Wednesday 17 September 2025 (86991)

Overdue comment: the real problem seems to be that the L to P path was not installed, but it is now; see aLOG 86567.

We do need to look into the DAC noise, however. The extra motion has been fixed by adding the L2P path AND changing the blends. 
