Reports until 12:31, Thursday 20 February 2025
H1 CAL (CAL)
joseph.betzwieser@LIGO.ORG - posted 12:31, Thursday 20 February 2025 - last comment - 10:15, Thursday 27 February 2025(82935)
Calibration debugging
[Vlad, Louis, Jeff, Joe B]
While exercising the latest pydarm code with an eye toward correcting the issues noted in LHO alog 82804, we ran into a few problems which we are still trying to resolve.

First, Vlad was able to recover all but the last data point from the simulines run on Feb 15th, which lost lock at the very end of the sweeps.  See his LHO alog 82904 on that process.

I updated the pydarm_H1.ini file to account for the current drive align gains and to point at the current H1SUSETMX foton file (saved in the calibration svn as /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1susetmx/H1SUSETMX_1421694933.txt).  However, I also had to merge some changes for it submitted from offsite, specifically https://git.ligo.org/Calibration/ifo/H1/-/commit/05c153b1f8dc7234ec4d710bd23eed425cfe4d95, which is associated with MR 10 and was intended to add some improvements to the FIR filter generation.

Next, Louis updated the pydarm install at LHO from tag 20240821.0 to tag 202502220.0.

We then generated the report and associated GDS FIR filters.  This is /ligo/groups/cal/H1/reports/20250215T193653Z.  The report and fits to the sweeps looked reasonable; however, the FIR generation did not look good.  The combination of the newly updated pydarm and the ini changes was producing some nonsensical filter fits.  This is the first attachment.

We reverted the .ini file changes, which helped to recover more expected GDS filters. However, there is still a small ~1% change visible around 10 Hz (instead of a flat 1.0 ratio) in the filter response with the new pydarm tag versus the old tag, which we don't yet understand and would like to before updating the calibration. I'm hoping we can do this in the next day or two after going through the code changes between the versions.
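For reference, a minimal way to quantify this kind of old-vs-new filter discrepancy offline, assuming the GDS FIR coefficients from the two reports have been exported to text files (the file names and sample rate below are placeholders, not the actual report contents):

import numpy as np
from scipy import signal

# FIR coefficients exported from the two reports; file names are hypothetical
fir_old = np.loadtxt("gds_fir_old_tag.txt")
fir_new = np.loadtxt("gds_fir_new_tag.txt")

fs = 16384                     # assumed sample rate of the GDS FIR filters [Hz]
f = np.logspace(0, 3, 500)     # 1 Hz to 1 kHz

_, h_old = signal.freqz(fir_old, worN=f, fs=fs)
_, h_new = signal.freqz(fir_new, worN=f, fs=fs)

ratio = np.abs(h_new / h_old)  # a flat 1.0 means the two pydarm tags agree
print("max |ratio - 1| below 20 Hz: %.3f" % np.max(np.abs(ratio[f < 20] - 1.0)))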

Given the measured ~6 Hz SRC detuning spring frequency (as seen in the reports), we will need to include that effect in the calcs front end to eliminate a non-trivial error when we do get around to updating the calibration.  I created a quick plot based on the 20250215T193653Z measured parameters, comparing the full model to a model without the SRC detuning included.  This is the attached New_over_nosrc.png image.
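As a rough illustration of why a ~6 Hz spring matters at the low end, here is a toy calculation of the detuned-spring term in the sensing function; the spring frequency and Q below are placeholders rather than the report's MCMC values, and the error in the calibrated output also depends on the loop gain, which is what the attached ratio plot captures from the full model.

import numpy as np

f = np.logspace(0, 2, 400)   # 1-100 Hz
f_s, Q_s = 6.0, 20.0         # illustrative spring frequency [Hz] and quality factor

# Detuned-SRC optical spring term that multiplies the sensing function:
#   s(f) = f^2 / (f^2 + f_s^2 - 1j * f * f_s / Q_s)
# A model with no detuning corresponds to f_s -> 0, i.e. s = 1.
s = f**2 / (f**2 + f_s**2 - 1j * f * f_s / Q_s)

# Fractional error in the sensing function if the detuning is left out entirely
err = np.abs(1.0 / s - 1.0)
print("sensing error at 10 Hz: %.0f%%" % (100 * np.interp(10.0, f, err)))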
Images attached to this report
Non-image files attached to this report
Comments related to this report
joseph.betzwieser@LIGO.ORG - 10:15, Thursday 27 February 2025 (83083)
The slight difference we were seeing in the old report GDS filters vs the new report GDS filters was actually due to an MCMC fitting change.  We had changed the pydarm_cmd_H1.yaml to fit down to 10 or 15 Hz instead of 40 Hz, which means it is in fact properly fitting the SRC detuning, which in turn means the model the FIR filter generation is correcting has changed significantly at low frequencies.

We have decided to use the FIR filter fitting configuration settings we've been using for the entire run for the planned export today.

Louis has pushed to LHO a pydarm code which we expect will properly install the SRC detuning into the h1calcs model.

I attach a text file of the diff of the FIR filter fitting configuration settings for the pydarm_H1.ini file between Aaron's new proposal (which seems to work better for DCS offline filters, based on looking at only ~6 reports) and the ones we've been using this run so far to fit the GDS online filters.

The report we are proposing to push today is: 20250222T193656Z





Non-image files attached to this comment
H1 General
camilla.compton@LIGO.ORG - posted 12:20, Thursday 20 February 2025 (82930)
Small improvements to DARM BLRMS

Attached is the DARM BLRMS before and after some filters were edited and added to remove some lines. All changes are attached apart from another adjustment to the RBP_2 notch at 33.375 Hz.

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 12:06, Thursday 20 February 2025 (82936)
Observing 2004

Back to observing at 2004 after a lock loss and commissioning period. We are not thermalized yet, so we will have to run a calibration measurement later.

H1 ISC
elenna.capote@LIGO.ORG - posted 11:46, Thursday 20 February 2025 (82931)
PRCL feedforward gain increased

I ran a PRCL feedforward injection since the PRCL to DARM coherence has increased. Indeed, the injection showed the PRCL feedforward was doing worse around 30 Hz. I found that increasing the feedforward gain from 0.6 to 0.9 improved the subtraction again.
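For what it's worth, the gain rescaling could also be estimated offline from the two coupling measurements rather than by trial and error. A minimal sketch, assuming the transfer functions have been exported to text files (the file names, format, and the 20-60 Hz band are placeholders; this is not the procedure actually used above):

import numpy as np

# Exported PRCL -> DARM transfer functions (frequency, real, imag columns);
# the file names and format are hypothetical
noff = np.loadtxt("prcl_coupling_ff_off.txt")   # feedforward off
non = np.loadtxt("prcl_coupling_ff_on.txt")     # feedforward on at gain g0
freqs = noff[:, 0]
C = noff[:, 1] + 1j * noff[:, 2]                # bare coupling
R0 = non[:, 1] + 1j * non[:, 2]                 # residual coupling with old gain
g0 = 0.6

F = (C - R0) / g0                    # effective subtraction path at unity gain
band = (freqs > 20) & (freqs < 60)   # band where the feedforward matters most

# Least-squares gain that minimizes the residual |C - g*F|^2 in that band
g_opt = np.real(np.vdot(F[band], C[band])) / np.sum(np.abs(F[band]) ** 2)
print("least-squares feedforward gain estimate: %.2f" % g_opt)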

You will notice that below about 15 Hz, PRCL coupling is still not great. That would probably require a more "invasive" adjustment of the feedforward fit, but maybe it's not worth it since it doesn't seem that it's causing much noise.

In the first attachment, the blue trace shows an old coupling measurement with no feedforward, green was the "as found" previous feedforward fit measurement, brown was the measurement today before I started adjusting the gain, and red is what I got after increasing the feedforward gain. The second attachment shows the SDF accept. I also updated the gain in lscparams.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:13, Thursday 20 February 2025 (82929)
Thu CP1 Fill

Thu Feb 20 10:07:47 2025 INFO: Fill completed in 7min 43secs

Gerardo confirmed a good fill curbside. TCmins [-51C, -50C] OAT (0C, 32F) DeltaTempTime 10:07:47

Images attached to this report
H1 SEI
oli.patane@LIGO.ORG - posted 09:45, Thursday 20 February 2025 (82928)
HEPI Pump Trends Monthly FAMIS

Closes FAMIS#, last checked 82369
 
HEPI pump trends looking as expected. The lines 23 days ago in all the plots are from a DAQ restart done during Tuesday Maintenance on January 28th (82498).

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 09:40, Thursday 20 February 2025 (82927)
POP LF calibration to Watts moved

I've moved Elenna's calibration (82656) of the LSC POP diode DC power to the POP_A_LP filter.  I tried to engage it while we were relocking after the EQ and it unlocked the IFO, because this filter is in the IFO trigger matrix.

I've moved it and turned it on now, accepted in SDF safe and will accept in OBSERVE when we are in low noise.

H1 ISC
matthewrichard.todd@LIGO.ORG - posted 09:40, Thursday 20 February 2025 - last comment - 12:13, Thursday 20 February 2025(82918)
More PR2 spot moves, getting full model of scraper baffle

Matt Jennie Mayank TJ Sheila

We wanted to estimate whether we were clipping on the scraper baffle of PR2, so we moved the PR3 yaw sliders during single bounce quite a bit to get an idea of when we were clipping, using AS_C_NSUM to gauge whether we were starting to clip.


Most of the steps are the same as in Sheila's previous alog, except we are in single bounce and are therefore not looking for the ALS beatnotes; instead we are looking at the AS_C_NSUM value. You want to take ISC_Lock to PR2_SPOT_MOVE so that the guardian will adjust the IM4 and PR2 yaw sliders while you adjust the PR3 sliders, keeping most everything aligned. You will have to adjust the pitch every once in a while due to the cross couplings in IM4 and PR2. A rough scripted sketch of this step-and-check loop is included after the steps below.

Steps taken in this measurement:

  1. Take ISC_Lock to PR2_SPOT_MOVE.
  2. Move PR3 yaw using the slider (steps of 10 urad seemed to work for me) or using
cdsutils step -s .66 H1:SUS-PR3_Y_OPTICALIGN_OFFSET -- -1,10

     We think the yaw offset value that puts the spot on the center of PR2 is around -230 urad (we started here around 09:00 PST).

  3. By going in steps to -900 in yaw offset, while pitching PR3 every so often to keep everything centered, we were able to move across PR2 to the left edge of the baffle, clipping about 10% of the power at the ASC-AS_C QPD (finished 09:45 PST).
  4. Then we reset the sliders to their starting values and followed the above steps going to the right (we stopped at +440 yaw offset). Here we recorded around a 10% loss on the right edge of the baffle (finished 10:44 PST).
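A rough scripted version of this loop (a sketch only: the AS_C QPD channel name and the exact 10% stopping criterion are assumptions, and in practice we stepped by hand while watching ndscope):

import time
from epics import caget, caput   # pyepics; ezca would work equally well in the control room

YAW = "H1:SUS-PR3_Y_OPTICALIGN_OFFSET"
QPD = "H1:ASC-AS_C_NSUM_OUT16"   # assumed channel name for the AS_C QPD sum

p0 = caget(QPD)                  # reference power with the spot centered on PR2
offset = caget(YAW)

# Step PR3 yaw in 10 urad increments until ~10% of the power is clipped
while caget(QPD) > 0.9 * p0:
    offset -= 10.0               # step toward the left edge of the baffle
    caput(YAW, offset)
    time.sleep(30)               # give the PR2_SPOT_MOVE guardian time to follow with IM4/PR2
print("clipping edge reached at PR3 yaw offset = %.0f urad" % offset)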

Mayank has some plots analyzing the results of this, which he will post as a comment.


Useful ndscopes

ndscope-dev /ligo/home/matthewrichard.todd/ndscope/pr2_spot_move.yaml
Comments related to this report
mayank.chaturvedi@LIGO.ORG - 12:13, Thursday 20 February 2025 (82932)

We modeled the PR2Baffle beam clipping

Matt Jennie Mayank TJ Sheila Keita

We extracted the beam path and geometry for the PRM-PR2Baffle-PR2-PR2Baffle-PR3 leg of the input light from the following documents, D1200573, D1102451 (cookie cutter), D0901098, D020023 etc.

The angle of the PRM-PR2Baffle beam with respect to the X axis was changed from about -0.1 radian to 0.1 radian (over and above the existing value of 0.339 degrees) such that the PR2 beam spot moves from approximately -25 mm to 25 mm.  The PR2 yaw was adjusted such that the PR2-PR3 beam always hits the same spot on PR3.
This ensures that the beam spots on PRM and PR3 remain unchanged while the beam spot on PR2 changes.

 

Attachment 1: Shows the overall modelled geometry

Attachment 2: Zoom view around PR2Baffle along with the beam paths.

Attachment 3: Python script.

Attachment 4: Experiment_Data_ 19Feb2025.

Attachment 5: Experiment_Data_ 5July2025.

 

Plot 1 shows the PR2 beam spot motion with respect to the angle.   

Plot 2 shows the following distances vs. the PR2 beam spot location:
a) the distance between the PRM-PR2 beam and the lower edge of the PR2Baffle (D1)
b) the distance between the PR2-PR3 beam and the upper edge of the PR2Baffle (D2).

Plot 3 shows the transmission of the beam with respect to the PR2 beam spot (as the beam comes closer to the baffle edge, some of the beam power is blocked by the edge and hence the transmission decreases).

Plot 4 shows the net transmission for the beam (the product of the upper-edge and lower-edge transmissions).
It also overlays the experimentally measured data from above.

The two curves do not match exactly; they differ in position by around 2 mm. It seems the PR2Baffle is approximately in the right place, and it may not be necessary to move it.
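For reference, the per-edge transmission curves in Plot 3 follow from clipping a Gaussian beam at a hard edge. A minimal sketch of that calculation (the beam radius below is a placeholder, not the value used in the model):

import numpy as np
from scipy.special import erf

def edge_transmission(d, w):
    """Power transmission of a TEM00 beam whose center is a distance d from a
    hard baffle edge (d > 0 means the center is on the open side); w is the
    1/e^2 intensity radius."""
    return 0.5 * (1.0 + erf(np.sqrt(2.0) * d / w))

w = 2.1e-3                          # placeholder beam radius on the baffle [m]
d = np.linspace(-5e-3, 5e-3, 201)   # beam-center distance from the edge [m]
T = edge_transmission(d, w)
# The net transmission through the baffle aperture (Plot 4) is the product of the
# upper-edge and lower-edge transmissions, each evaluated at its own D1/D2 distance.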

 

 

Images attached to this comment
Non-image files attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 07:46, Thursday 20 February 2025 - last comment - 09:29, Thursday 20 February 2025(82924)
Ops Day Shift Start

TITLE: 02/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Relocking
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.42 μm/s
QUICK SUMMARY: Just lost lock from a nearby earthquake. Barely saw it on the picket fence before we lost lock, but our control signals were moving. During the 8-hour lock that just ended, there was a step down in the range about 4 hours ago, followed by a less stable range. DARM looks to have more noise in the 80-200 Hz area, though my screenshot doesn't show it completely. Violin mode 6 was slowly ringing up overnight.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:29, Thursday 20 February 2025 (82925)

TJ, Camilla. No extra noise in the channels that usually show our low frequency non-stationary noise 82728, see plot. Comparing DARM before and after, there are very subtle changes <100 Hz. TJ found the summary pages show more glitches at 60 Hz and ~48 Hz.

Additionally, we see the line at 46.09 Hz or 46.1 Hz grow, see plot. Georgia noted this line in 2019 47447, and Evan pointed us to the O4aH1lines list where this appears to be the PR3 roll mode.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 09:19, Thursday 20 February 2025 (82926)

Running the range comparison scripts for a few different times and spans around the range step. There looks to be a slight bit more noise all the way below 100Hz, and the 60Hz is very slightly higher.

The range step happened at 3:30am almost exactly, and since the 60 Hz line got worse, I'm wondering if something turned on or updated right then.

Non-image files attached to this comment
LHO General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Wednesday 19 February 2025 (82923)
OPS Eve Shift Summary

TITLE: 02/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is LOCKING at LOCKING_ALS (Once again, we lose lock in the last few mins of shift...)

IFO is in NLN and OBSERVING as of 03:07 UTC.

Overall very calm shift in which we seem to have improved squeeze locking, improving range to what it was before last weekend.

We had one lockloss seemingly caused by oscillations in PRCL in the seconds before losing lock - alog 82916. According to the first of the PRCL OLG measurements from alog 82920, Elenna found that the PRCL2 gain was low by about 30-40%. Following this, she made a change to up the gain from 1 to 1.4 (alog 82917). Accepted SDF attached.

While 1.4 was a bit too high and caused a PRCL ring-up and a LL at LOWNOISE_LENGTH_CONTROL (where the gain switches on), the next setting of 1.2 worked! We were able to fully automatically re-lock and get to NLN and OBSERVING. Before I went into OBSERVING, I took another OLG PRCL measurement, which is the second measurement in alog 82920.

Other than this, the infamous IY Mode 5_6 Violin has been ringing up, visible in the top right screen of the attached screenshot, which shows mode 6 as slowly increasing since Lock. New settings may be needed for this.

Just as I was about to submit, we had a LL, though there wasn't the characteristic PRCL ring-up from the last few LLs. It also doesn't look environmentally caused since the wind is low, there are no EQs, and microseism is high but mostly unchanged from the beginning of the day. Currently experiencing known ALS lock issues.

LOG:

None

Images attached to this report
H1 ISC
ibrahim.abouelfettouh@LIGO.ORG - posted 19:14, Wednesday 19 February 2025 - last comment - 20:19, Wednesday 19 February 2025(82920)
PRCL Open Loop Gain Measurements

TJ, Ibrahim, Sheila, Elenna

Measured PRCL OLG at 2 different times during NLN - both attached.

First (done by TJ) at 3hrs into NLN -  Screenshot 1.

Second at 15 minutes into a separate NLN. This one was done after a PRCL-related lockloss (alog 82916), at which point Elenna changed the PRCL2 gain from 1 to 1.2 (alog 82917) - Screenshot 2.

 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 20:19, Wednesday 19 February 2025 (82922)

Just a note that we are trying for a UGF of about 30 Hz here. Right after lock, this is clearly a bit too high, but hopefully with the 20% gain boost after thermalization it will settle closer to 30 Hz.
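A quick way to read the UGF off an exported OLG measurement (a sketch only; the file name and column layout are assumptions about how the DTT data gets exported):

import numpy as np

data = np.loadtxt("prcl_olg.txt")   # assumed columns: frequency [Hz], magnitude, phase [deg]
f, mag = data[:, 0], data[:, 1]

# UGF = frequency where the open-loop gain magnitude crosses unity
ugf = f[np.argmin(np.abs(mag - 1.0))]
print("UGF ~ %.1f Hz" % ugf)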

H1 ISC
elenna.capote@LIGO.ORG - posted 17:05, Wednesday 19 February 2025 - last comment - 18:02, Wednesday 19 February 2025(82917)
PRCL2 Gain increased

I increased the PRCL2 gain that is set in lownoise length control from 1.0 to 1.4 to increase the overall PRCL loop gain by 40%. We have been seeing locklosses with 11 Hz oscillations that are probably due to marginal stability in PRCL. I changed line 5577 of the ISC_LOCK guardian, saved, and loaded. Ibrahim will post an alog with more info and open loop gain plots.

Comments related to this report
elenna.capote@LIGO.ORG - 18:02, Wednesday 19 February 2025 (82919)

This was too high and caused a 70 Hz ring up in PRCL. I put in a gain of 1.2 now.

H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:45, Wednesday 19 February 2025 - last comment - 19:14, Wednesday 19 February 2025(82916)
Lockloss 00:41 UTC

Lockloss that matches the ones from over the weekend, where PRCL becomes unstable and oscillates at 11Hz in the seconds before the Lockloss.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 19:14, Wednesday 19 February 2025 (82921)

H1 Back to OBSERVING 03:07 UTC

H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 16:32, Tuesday 18 February 2025 - last comment - 12:20, Monday 24 February 2025(82891)
SQZ_ANG_SERVO set to False

Due to so much SQZ strangeness over the weekend, Sheila set the sqzparams.py use_sqz_ang_servo to False and I changed the  SQZ_ANG_ADJUST nominal state to DOWN and reloaded SQZ_MANAGER and SQZ_ANG_ADJUST.

We set the H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG to a normal good value of 190. If the operator thinks the SQZ is bad and wants to change this to maximize the range AFTER we've been locked 2+ hours, they can. Tagging OpsInfo.

Comments related to this report
camilla.compton@LIGO.ORG - 12:35, Thursday 20 February 2025 (82937)

Daniel, Sheila, Camilla

This morning we set the SQZ angle to 90deg and scanned to 290deg using 'ezcastep H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG -s '3' '+2,100''. Plot attached.

You can see that the place with the best SQZ isn't in a good linear range of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG, which is why the SQZ angle servo has been going unstable. We are leaving the SQZ angle servo off.

Daniel noted that we expect the ADF I and Q channels to rotate around zero, which they aren't doing, so we should check that the math calculating these is what we expect. We struggled to find the SQZ-ADF_VCXO model block; it's in the h1oaf model (so that the model runs faster).
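As a toy illustration of why the center matters (purely synthetic numbers, not the actual ADF signals): if the I/Q pair rotates around a displaced center, the angle recovered from atan2 is no longer a faithful copy of the true rotation, which would confuse a servo keyed on it.

import numpy as np

phi = np.linspace(0, 2 * np.pi, 361)   # true squeeze-angle rotation
r = 1.0                                # ADF signal amplitude (arbitrary units)
I0, Q0 = 0.3, -0.2                     # hypothetical offset of the rotation center

# Ideal case: I/Q rotate around zero, so atan2 recovers the angle exactly
I_ideal, Q_ideal = r * np.cos(phi), r * np.sin(phi)
ang_ideal = np.unwrap(np.arctan2(Q_ideal, I_ideal))

# Offset case: the same rotation displaced from the origin
I_off, Q_off = I_ideal + I0, Q_ideal + Q0
ang_off = np.unwrap(np.arctan2(Q_off, I_off))

# The recovered angle is no longer linear in the true angle
print("max angle error: %.1f deg" % np.degrees(np.max(np.abs(ang_off - ang_ideal))))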

Images attached to this comment
camilla.compton@LIGO.ORG - 12:20, Monday 24 February 2025 (83010)

Today Mayank and I scanned the ADF phase via 'ezcastep H1:SQZ-ADF_VCXO_PLL_PHASE -s '2'  '+2,180''. You can see in the attached plot that the I and Q phases show sine/cosine functions as expected. We think we may be able to adjust this phase to improve the linearity of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG around the best SQZ so that we can again use the SQZ ANG servo. We started testing this (plot attached), but found that the SQZ was very frequency dependent and needed the alignment changed (83009), so we ran out of time.

Images attached to this comment
H1 CAL (CAL)
vladimir.bossilkov@LIGO.ORG - posted 10:03, Tuesday 18 February 2025 - last comment - 12:03, Thursday 20 February 2025(82878)
Calibration sweeps losing lock.

I reviewed the weekend lockloss where lock was lost during the calibration sweep on Saturday.

I've compared the calibration injections and what DARM_IN1 is seeing [ndscopes], relative to the last successful injection [ndscopes].
Looks pretty much the same but DARM_IN1 is even a bit lower because I've excluded the last frequency point in the DARM injection which sees the least loop suppression.

It looks like this time the lockloss was a coincidence. BUT. We desperately need to get a successful sweep to update the calibration.
I'll be reverting the cal sweep INI file, in the wiki, to what was used for the last successful injection (even though it includes that last point which I suspected caused the last 2 locklosses), out of an abundance of caution and hoping the cause of the locklosses is something more subtle that I'm not yet catching.

Images attached to this report
Comments related to this report
vladimir.bossilkov@LIGO.ORG - 09:08, Wednesday 19 February 2025 (82904)

Despite the lockloss, I was able to utilise the log file saved in /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/ (log file used as input into simulines.py), to regenerate the measurement files.

As you can imagine the points where the data is incomplete are missing but 95% of the sweep is present and fitting all looks great.
So it is in some way reassuring that in case we lose lock during a measurement, data gets salvaged and processed just fine.

Report attached.

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 12:03, Thursday 20 February 2025 (82933)CAL

How to salvage data from any failed simulines injection attempt:

  • simulines silently dumps log files into this directory: /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/{IFO}/ for IFO=L1,H1
  • navigating there, you will be greeted by logs of the outputs of simulines from every single time it has ever been run. The one you are interested in can be identified by the time, as the file name format is the same as the measurement and report directory time-name format.
  • running the following will automagically populate .hdf5 files in the calibration measurement directories that the 'pydarm report' command searches in for new measurements:
    • './simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/{time-name}.log'
    • for time-name resembling 20250215T193653Z
    • where './simuLines.py' is the simulines executable and can have some full path, like the calibration wiki uses: '/ligo/groups/cal/src/simulines/simulines/simuLines.py'