H1 ISC
elenna.capote@LIGO.ORG - posted 11:46, Thursday 20 February 2025 (82931)
PRCL feedforward gain increased

I ran a PRCL feedforward injection since the PRCL to DARM coherence has increased. Indeed, the injection showed that the PRCL feedforward was doing worse around 30 Hz. I found that increasing the feedforward gain from 0.6 to 0.9 improved the subtraction again.

You will notice that below about 15 Hz the PRCL coupling is still not great. Improving that would probably require a more "invasive" adjustment of the feedforward fit, but it may not be worth it since it doesn't seem to be adding much noise.

In the first attachment, the blue trace shows an old coupling measurement with no feedforward, and green is the "as found" measurement of the previous feedforward fit. Brown is today's measurement before I started adjusting the gain, and red is the result after increasing the feedforward gain. The second attachment shows the SDF accept. I also updated the gain in lscparams.
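
For reference, a minimal sketch of the kind of EPICS write involved in a gain change like this (not a record of what was actually done; the channel names below are placeholders for whichever LSC filter bank hosts the PRCL feedforward gain):

    # Illustrative only: ramp a feedforward filter-bank gain via standard EPICS writes.
    from epics import caput

    caput('H1:LSC-PRCLFF_TRAMP', 5.0)   # placeholder: gain ramp time in seconds
    caput('H1:LSC-PRCLFF_GAIN', 0.9)    # placeholder: new feedforward gain (was 0.6)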

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:13, Thursday 20 February 2025 (82929)
Thu CP1 Fill

Thu Feb 20 10:07:47 2025 INFO: Fill completed in 7min 43secs

Gerardo confirmed a good fill curbside. TCmins [-51C, -50C] OAT (0C, 32F) DeltaTempTime 10:07:47

Images attached to this report
H1 SEI
oli.patane@LIGO.ORG - posted 09:45, Thursday 20 February 2025 (82928)
HEPI Pump Trends Monthly FAMIS

Closes FAMIS#, last checked 82369
 
HEPI pump trends are looking as expected. The lines 23 days ago in all of the plots are from a DAQ restart done during Tuesday maintenance on January 28th (82498).

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 09:40, Thursday 20 February 2025 (82927)
POP LF calibration to Watts moved

I've moved Elenna's calibration (82656) of the LSC POP diode DC power to the POP_A_LP filter. I tried to engage it while we were relocking after the EQ and it unlocked the IFO, because this filter is in the IFO trigger matrix.

I've now moved it and turned it on, accepted it in SDF safe, and will accept it in OBSERVE when we are in low noise.

H1 ISC
matthewrichard.todd@LIGO.ORG - posted 09:40, Thursday 20 February 2025 - last comment - 12:13, Thursday 20 February 2025(82918)
More PR2 spot moves, getting full model of scraper baffle

Matt Jennie Mayank TJ Sheila

We wanted to estimate whether we are clipping on the PR2 scraper baffle, so we moved the PR3 yaw sliders quite a bit during single bounce to get an idea of when we were clipping, using AS_C_NSUM to gauge whether we were starting to clip.


Most of the steps are the same as in Sheila's previous alog, except that we are in single bounce, so instead of watching the ALS beatnotes we are watching the AS_C_NSUM value. You want to take ISC_Lock to PR2_SPOT_MOVE so that the guardian adjusts the IM4 and PR2 yaw sliders while you adjust the PR3 sliders, keeping most everything aligned. You will have to adjust the pitch every once in a while due to the cross couplings in IM4 and PR2.

Steps taken in this measurement:

  1. Take ISC_Lock to PR2_SPOT_MOVE.
  2. Move PR3 yaw using the slider (steps of 10 urad seemed to work for me) or using
     cdsutils step -s .66 H1:SUS-PR3_Y_OPTICALIGN_OFFSET -- -1,10

     We think the yaw offset value that puts the spot on the center of PR2 is around -230 urad (we started here around 09:00 PST).

  3. By going in steps to -900 urad in yaw offset, while pitching PR3 every so often to keep everything centered, we were able to move across PR2 to the left edge of the baffle, clipping about 10% of the power at the ASC-AS_C QPD (finished 09:45 PST).
  4. Then we reset the sliders to their starting values and followed the above steps going to the right (we stopped at a +440 urad yaw offset). Here we recorded around a 10% loss on the right edge of the baffle (finished 10:44 PST). A rough sketch for monitoring the power loss during these steps is given after this list.
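
Below is a rough sketch (an illustration, not part of the procedure above) of how one could check the fractional power loss while stepping PR3, assuming the cdsutils Python API; the reference value and exact NSUM channel name are placeholders.

    # Average AS_C_NSUM over a few seconds and compare to an unclipped reference
    # value to estimate the fractional power loss at the current PR3 yaw offset.
    # REFERENCE_NSUM is a made-up placeholder; record your own before moving the
    # spot, and adjust the channel name if needed.
    import cdsutils

    REFERENCE_NSUM = 0.0175
    nsum = cdsutils.avg(5, 'H1:ASC-AS_C_NSUM_OUT16')   # 5 s average
    loss = 1 - nsum / REFERENCE_NSUM
    print(f'AS_C_NSUM = {nsum:.4g}, fractional loss = {100 * loss:.1f}%')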

Mayank has some plots analyzing the results of this; he will add them as a comment.


Useful ndscopes

ndscope-dev /ligo/home/matthewrichard.todd/ndscope/pr2_spot_move.yaml
Comments related to this report
mayank.chaturvedi@LIGO.ORG - 12:13, Thursday 20 February 2025 (82932)

We modeled the PR2Baffle beam clipping

Matt Jennie Mayank TJ Sheila Keita

We extracted the beam path and geometry for the PRM-PR2Baffle-PR2-PR2Baffle-PR3 leg of the input light from the following documents: D1200573, D1102451 (cookie cutter), D0901098, D020023, etc.

The angle of the PRM-PR2Baffle beam with respect to the X axis was varied from about -0.1 rad to +0.1 rad (over and above the existing value of 0.339 degrees) such that the PR2 beam spot moves from approximately -25 mm to +25 mm. The PR2 yaw was adjusted so that the PR2-PR3 beam always hits the same spot on PR3.
This ensures that the beam spots on PRM and PR3 remain unchanged while the beam spot on PR2 changes.

 

Attachment 1: Shows the overall modelled geometry

Attachment 2: Zoom view around PR2Baffle along with the beam paths.

Attachment 3: Python script.

Attachment 4: Experiment_Data_ 19Feb2025.

Attachment 5: Experiment_Data_ 5July2025.

 

Plot 1 shows the PR2 beam spot motion with respect to the angle.   

Plot 2 shows the following distances vs. the PR2 beam spot location:
a) between the PRM-PR2 beam and the lower edge of the PR2Baffle (distance D1)
b) between the PR2-PR3 beam and the upper edge of the PR2Baffle (distance D2).

Plot 3 shows the transmission of the beam with respect to the PR2 beam spot position (as the beam comes closer to the baffle edge, some of the beam power is blocked by the edge and hence the transmission decreases).

Plot 4 shows the net transmission for the beam (the product of the upper-edge and lower-edge transmissions).
It also overlays the experimentally measured data from above.
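
For reference, the kind of calculation behind Plot 3 is the standard knife-edge result for a Gaussian beam: the power passing a straight edge a distance d from the beam center, with 1/e^2 intensity radius w, is T = 0.5 * (1 + erf(sqrt(2) * d / w)). A minimal sketch follows (the beam radius and edge positions are placeholders; this is not the attached script):

    # Knife-edge clipping of a Gaussian beam: fraction of power transmitted when a
    # hard edge sits a distance d from the beam center (d > 0 means most of the
    # beam passes).  Placeholder numbers, not the real PR2 baffle geometry.
    import numpy as np
    from scipy.special import erf

    def edge_transmission(d, w):
        """Power fraction passing a straight edge at distance d from the beam center."""
        return 0.5 * (1.0 + erf(np.sqrt(2.0) * d / w))

    w = 2.0e-3                        # placeholder 1/e^2 beam radius at the baffle [m]
    d = np.linspace(-3e-3, 3e-3, 7)   # edge-to-beam-center distances [m]
    print(edge_transmission(d, w))    # ~0 when the edge covers the beam, ~1 when clear

The net transmission in Plot 4 would then be the product of two such terms, one per baffle edge.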

The two curves do not match exactly; they differ in position by around 2 mm. It seems the PR2Baffle is approximately in the right place, so it may not be necessary to move it.

 

 

Images attached to this comment
Non-image files attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 07:46, Thursday 20 February 2025 - last comment - 09:29, Thursday 20 February 2025(82924)
Ops Day Shift Start

TITLE: 02/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Relocking
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.42 μm/s
QUICK SUMMARY: Just lost lock from a nearby earthquake. We barely saw it on the picket fence before we lost lock, but our control signals were moving. For the 8-hour lock that just ended, there was a step down in the range about 4 hours ago, followed by a less stable range. DARM looks to have more noise in the 80-200 Hz area; my screenshot doesn't show it completely. Violin mode 6 was slowly ringing up overnight.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:29, Thursday 20 February 2025 (82925)

TJ, Camilla. No extra noise in the channels that usually show our low-frequency non-stationary noise (82728), see plot. Comparing DARM before and after, there are only very subtle changes below 100 Hz. TJ found that the summary pages show more glitches at 60 Hz and ~48 Hz.

Additionally, we see the line at 46.09 Hz or 46.1 Hz grow, see plot. Georgia noted this line in 2019 (47447), and Evan pointed us to the O4a H1 lines list, where this appears to be the PR3 roll mode.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 09:19, Thursday 20 February 2025 (82926)

I ran the range comparison scripts for a few different times and spans around the range step. There looks to be slightly more noise everywhere below 100 Hz, and the 60 Hz line is very slightly higher.

The range step happened almost exactly at 3:30 AM, and since the 60 Hz line got worse, I'm wondering if something turned on or updated right then.

Non-image files attached to this comment
LHO General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Wednesday 19 February 2025 (82923)
OPS Eve Shift Summary

TITLE: 02/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is LOCKING at LOCKING_ALS (Once again, we lose lock in the last few mins of shift...)

IFO is in NLN and OBSERVING as of 03:07 UTC.

Overall very calm shift in which we seem to have improved squeeze locking, improving range to what it was before last weekend.

We had one lockloss seemingly caused by oscillations in PRCL in the seconds before losing lock (alog 82916). According to the first PRCL OLG measurement from alog 82920, Elenna found that the PRCL2 gain was low by about 30-40%. Following this, she raised the gain from 1 to 1.4 (alog 82917). Accepted SDF attached.

While 1.4 was a bit too high and caused a PRCL ring-up and a LL at LOWNOISE_LENGTH_CONTROL (where the gain switches on), the next setting of 1.2 worked! We were able to fully automatically re-lock and get to NLN and OBSERVING. Before I went into OBSERVING, I took another OLG PRCL measurement, which is the second measurement in alog 82920.

Other than this, the infamous IY mode 5/6 violin has been ringing up, visible in the top-right screen of the attached screenshot, which shows mode 6 slowly increasing since lock. New settings may be needed for this.

Just as I was about to submit, we had a LL, though without the characteristic PRCL ring-up from the last few LLs. It also doesn't look environmentally caused: wind is low, there are no EQs, and while microseism is high it is mostly unchanged from the beginning of the day. Currently experiencing known ALS locking issues.

LOG:

None

Images attached to this report
H1 ISC
ibrahim.abouelfettouh@LIGO.ORG - posted 19:14, Wednesday 19 February 2025 - last comment - 20:19, Wednesday 19 February 2025(82920)
PRCL Open Loop Gain Measurements

TJ, Ibrahim, Sheila, Elenna

Measured PRCL OLG at 2 different times during NLN - both attached.

First (done by TJ) at 3 hours into NLN - Screenshot 1.

Second at 15 minutes into a separate NLN. This one was done after a PRCL-related lockloss (alog 82916), at which point Elenna changed the PRCL2 gain from 1 to 1.2 (alog 82917) - Screenshot 2.

 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 20:19, Wednesday 19 February 2025 (82922)

Just a note that we are trying for a UGF of about 30 Hz here. Right after lock, this is clearly a bit too high, but hopefully with the 20% gain boost after thermalization it will settle closer to 30 Hz.

H1 ISC
elenna.capote@LIGO.ORG - posted 17:05, Wednesday 19 February 2025 - last comment - 18:02, Wednesday 19 February 2025(82917)
PRCL2 gain increased

I increased the PRCL2 gain that is set in lownoise length control from 1.0 to 1.4 to increase the overall PRCL loop gain by 40%. We have been seeing locklosses with 11 Hz oscillations that are probably due to marginal stability in PRCL. I changed line 5577 of the ISC_LOCK guardian, saved, and loaded. Ibrahim will post an alog with more info and open loop gain plots.
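
For context, a gain like this is typically set by the guardian with a single EPICS write to the relevant filter bank; a minimal sketch of what that looks like is below (the channel suffixes follow standard filter-bank conventions, but this is not a copy of the actual ISC_LOCK code):

    # Hedged sketch of the kind of write ISC_LOCK makes (not the actual guardian code).
    from epics import caput

    caput('H1:LSC-PRCL2_TRAMP', 5.0)   # assumed gain ramp time in seconds
    caput('H1:LSC-PRCL2_GAIN', 1.2)    # was 1.0; 1.4 rang PRCL up (see comment below)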

Comments related to this report
elenna.capote@LIGO.ORG - 18:02, Wednesday 19 February 2025 (82919)

This was too high and caused a 70 Hz ring up in PRCL. I put in a gain of 1.2 now.

H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:45, Wednesday 19 February 2025 - last comment - 19:14, Wednesday 19 February 2025(82916)
Lockloss 00:41 UTC

Lockloss that matches the ones from over the weekend, where PRCL becomes unstable and oscillates at 11 Hz in the seconds before the lockloss.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 19:14, Wednesday 19 February 2025 (82921)

H1 Back to OBSERVING 03:07 UTC

LHO General
thomas.shaffer@LIGO.ORG - posted 16:31, Wednesday 19 February 2025 (82914)
Ops Day Shift End

TITLE: 02/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: The shift started out with a lockloss just before commissioning time. We took this opportunity to move the PR2 spot while out of lock to find the baffle edges. We then relocked, and Sheila and Oli fixed an issue we had causing locklosses at the transition from ETMX (alog 82912). We went back to observing for a few hours before stepping out to tune the SQZ angle and run a quick PRCL OLG. Results on the latter will be posted by Ibrahim later, after he takes another to compare.
LOG:

Start Time System Name Location Lazer_Haz Task Time End
19:34 SAF Laser Haz LVEA YES LVEA is laser HAZARD!!! 06:13
19:12 FAC Les Schwab Xarm n Truck tire repair 21:07
20:39 FAC Eric EX mech n Serial numbers 20:45
21:24 - Betsy Opt Lab n Parts and more parts 21:52
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:12, Wednesday 19 February 2025 (82915)
OPS Eve Shift Start

TITLE: 02/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 6mph Gusts, 4mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.60 μm/s
QUICK SUMMARY:

IFO is in NLN and COMMISSIONING

The plan is to optimize SQZ via SQZ angle adjustment, followed by a PRCL open loop gain measurement, since there is some evidence that PRCL is experiencing noise at certain problem frequencies. Then we go back to OBSERVING.

 

H1 General
thomas.shaffer@LIGO.ORG - posted 14:17, Wednesday 19 February 2025 (82913)
Back to Observing 2133 UTC

Back to observing after a lockloss, some commissioning time, and work on fixing the transition-from-ETMX locklosses.

useism is still high and our range is a bit low at 145 Mpc. If the range doesn't improve with more thermalization, we will take it out for some tuning.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 13:45, Wednesday 19 February 2025 (82912)
transition to ETMX low noise DARM control causing too low a light level on DCPDs with microseism high

Oli, TJ, Sheila

There have been several locklosses over the last day from the LOWNOISE_ESD_ETMX state, which happened while the gain was ramping down ITMX DARM control and ramping up ETMX control. This is similar to what Elenna was trying to avoid by adjusting ramp times in 81260 and 81195, which was also at a time when the microseism was high.

Oli and I found that the problem with some of our transitions today was that the power on the DCPDs was dropping too low during the initial transition: we lost lock when it dropped below 1 mA, and in one of the successful transitions it got as low as 4 mA. We edited the guardian to not turn off the DARM boost (DARM1 FM1) before making the transition; instead we now turn it off directly after transitioning control back to ETMX, before the other filter changes that happen in this state.

This is the boost that we thought was causing locklosses when ramping off (81638), which motivated Erik's quadratic ramping change (82263) that was then reverted (82284, 82277). Today Oli and I increased the ramp time on this filter from 10 to 30 seconds. We make the guardian wait the full 30 seconds for this ramp, so this lengthens the time spent in this state; a sketch of the new ordering is below.
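
A minimal sketch of the reordering described above, assuming the usual ezca interface (this is illustrative, not the actual ISC_LOCK code; the 30 s decay itself comes from the filter's ramp time set in foton):

    # Hedged sketch: switch the DARM boost off only after control is back on ETMX,
    # then wait out its (foton-defined) 30 s ramp before further filter changes.
    import time
    import ezca

    ez = ezca.Ezca(prefix='H1')

    # ... DARM control is handed back from ITMX to ETMX here, with FM1 still on ...

    ez.switch('LSC-DARM1', 'FM1', 'OFF')   # turn the boost off after the handoff
    time.sleep(30)                         # wait the full 30 s ramp before moving on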

The attached screenshot shows the transition with the boost on on the left and off on the right; the wiggle in the DCPD sum is about 1 mA rather than 15 mA.

Oli is thinking about adding a check for DCPD sum dropping low to the lockloss tool.

Images attached to this report
H1 CAL (CAL)
vladimir.bossilkov@LIGO.ORG - posted 10:03, Tuesday 18 February 2025 - last comment - 12:03, Thursday 20 February 2025(82878)
Calibration sweeps losing lock.

I reviewed the weekend lockloss where lock was lost during the calibration sweep on Saturday.

I've compared the calibration injections and what DARM_IN1 is seeing [ndscopes], relative to the last successful injection [ndscopes].
It looks pretty much the same, but DARM_IN1 is even a bit lower because I've excluded the last frequency point in the DARM injection, which sees the least loop suppression.

It looks like this time the lockloss was a coincidence. BUT. We desperately need to get a successful sweep to update the calibration.
I'll be reverting the cal sweep INI file, in the wiki, to what was used for the last successful injection (even though it includes that last point, which I suspected caused the last 2 locklosses), out of an abundance of caution, hoping the cause of the locklosses is something more subtle that I'm not yet catching.

Images attached to this report
Comments related to this report
vladimir.bossilkov@LIGO.ORG - 09:08, Wednesday 19 February 2025 (82904)

Despite the lockloss, I was able to use the log file saved in /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/ (the log file that is used as input to simulines.py) to regenerate the measurement files.

As you can imagine, the points where the data is incomplete are missing, but 95% of the sweep is present and the fitting all looks great.
So it is somewhat reassuring that if we lose lock during a measurement, the data can be salvaged and processed just fine.

Report attached.

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 12:03, Thursday 20 February 2025 (82933)CAL

How to salvage data from any failed simulines injection attempt (a scripted version of these steps is sketched after the list):

  • simulines silently dumps log files into this directory: /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/{IFO}/ for IFO=L1,H1
  • navigating there, you will be greeted by the logged output of simulines from every single time it has ever been run. The one you are interested in can be identified by its time, since the file name format is the same as the measurement and report directory time-name format.
  • running the following will automagically populate .hdf5 files in the calibration measurement directories that the 'pydarm report' command searches for new measurements:
    • './simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/{time-name}.log'
    • for a time-name resembling 20250215T193653Z
    • where './simuLines.py' is the simulines executable and can have some full path like the calibration wiki does: './ligo/groups/cal/src/simulines/simulines/simuLines.py'
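
A minimal scripted sketch of the salvage step, assuming the paths above and that the newest log file is the one you want (otherwise pick it by its time-name):

    # Hedged helper: find the most recent simulines log for H1 and re-run
    # simuLines.py on it to regenerate the measurement .hdf5 files.
    import glob
    import os
    import subprocess

    LOG_DIR = '/opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/'
    SIMULINES = './simuLines.py'   # or the full path from the calibration wiki

    logs = sorted(glob.glob(os.path.join(LOG_DIR, '*.log')), key=os.path.getmtime)
    latest = logs[-1]              # newest log; swap in a specific time-name if needed
    print('Salvaging from', latest)
    subprocess.run([SIMULINES, '-i', latest], check=True)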