H1 CDS
david.barker@LIGO.ORG - posted 08:37, Friday 15 March 2024 (76415)
restarted picket fence on nuc5

Picket Fence on nuc5 stopped updating at 18:44 PDT on Thursday 14 March 2024. I killed the frozen window and restarted the code by hand on nuc5 via a VNC connection; it is updating again now.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 08:08, Friday 15 March 2024 (76412)
OPS Day Shift Start

TITLE: 03/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.28 μm/s
QUICK SUMMARY:

IFO is locked and in NEW_DARM.

Other:

H1 ISC
gabriele.vajente@LIGO.ORG - posted 07:53, Friday 15 March 2024 (76411)
OMC alignment lines

Lines started at 7:52am LT on CDSWS35

Images attached to this report
H1 AOS (ISC)
louis.dartez@LIGO.ORG - posted 21:59, Thursday 14 March 2024 (76410)
NEW DARM guardian state bug fix
[Gabriele, Louis]

We added a counter increment at the end of the run method in the NEW_DARM state (ISC_LOCK). The missing self.counter += 1 was causing the state to continuously set LSC-MICHFF_GAIN and LSC-SRCLFF1_GAIN to 1. This incidentally caused at least one lockloss while Gabriele was trying to run a MICH feedforward measurement tonight.


This was on line 6207.
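
For reference, a minimal sketch of the pattern (illustrative only, not the actual ISC_LOCK code; Guardian calls a state's run() method repeatedly while the state is active, and ezca is the channel-access object Guardian provides):

from guardian import GuardState

class NEW_DARM(GuardState):
    def main(self):
        self.counter = 0

    def run(self):
        if self.counter == 0:
            # One-shot settings: without the increment below, run() re-enters
            # this branch on every cycle and keeps re-setting the feedforward
            # gains, which disturbs any measurement that changes them.
            ezca['LSC-MICHFF_GAIN'] = 1
            ezca['LSC-SRCLFF1_GAIN'] = 1
            self.counter += 1  # the missing line
        return True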
H1 ISC
gabriele.vajente@LIGO.ORG - posted 21:54, Thursday 14 March 2024 - last comment - 18:17, Friday 15 March 2024(76409)
LSC FF injections

[Louis, Gabriele]

We ran the noise injections again for retuning the MICH FF; analysis and fit will follow tomorrow.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 08:20, Friday 15 March 2024 (76413)

New fit done, loaded into FM2, not tested yet.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 11:09, Friday 15 March 2024 (76421)

We tested the new FF, and it didn't perform as well as expected. However:

  • the old FF filter now performs worse than right after it was tuned
  • the new FF filter performs better than the old one now

We therefore used the measurement to fit another MICH FF; we'll test it soon.

Images attached to this comment
elenna.capote@LIGO.ORG - 18:17, Friday 15 March 2024 (76445)

Gabriele created two new filters in FM1 and FM2 to try out in DARM. Currently, we are using FM3. I ran an injection using the current filter, and then one for each of Gabriele's new filters. Overall, if we want the most suppression from 10-20 Hz, we should go with the current FM3. If we want to do better from 20-40 Hz, we can choose one of the new filters. FM2 is worse from 40-70 Hz, but I'm not sure how much that matters. FM1 will inject a few dB more noise than the others from 7-10 Hz; again, not sure how much that matters. I am staying with FM3 pending some evaluation.

Images attached to this comment
H1 ISC
gabriele.vajente@LIGO.ORG - posted 21:17, Thursday 14 March 2024 - last comment - 16:15, Monday 18 March 2024(76407)
AS_A WFS centering affects the DHARD_Y to DARM coupling

[Louis, Gabriele]

The DHARD_Y to DARM coupling has always shown two regimes: a steep coupling below 20-30 Hz, and a flatter coupling above 20-30 Hz. We've been able to change the flatter coupling above 20-30 Hz by changing the ITMY Y2L coefficient.

Today we confirmed a suspicion: the steep low frequency coupling is due to length to angle coupling at the AS WFS. We changed the beam position on the WFS by adding an offset to the WFS centering (H1:ASC-AS_A_DC_YAW_OFFSET) and saw a change in the DHARD_Y to DARM coupling.

A value close to -0.14 gives the minimum coupling below 20 Hz. We now have two independent knobs to minimize the DHARD_Y to DARM coupling at all frequencies.

Incidentally, the higher frequency coupling is now lower than yesterday, with the same ITMY Y2L coefficient of -1.65.

We did a scan of the AS_A_WFS Y centering from -0.2 to -0.1 in steps of 0.01; an analysis will follow tomorrow:

-0.200: 1394510627 - 1394510727
-0.190: 1394510777 - 1394510877
-0.180: 1394510927 - 1394511027
-0.170: 1394511077 - 1394511177
-0.160: 1394511227 - 1394511327
-0.150: 1394511377 - 1394511477
-0.140: 1394511527 - 1394511627
-0.130: 1394511677 - 1394511777
-0.120: 1394511827 - 1394511928
-0.110: 1394511978 - 1394512078
-0.100: 1394512128 - 1394512228
 0.000: 1394512278 - 1394512378
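
For the follow-up analysis, something along these lines should work (a sketch assuming gwpy and scipy are available; the channel names for the DHARD_Y drive and the DARM error signal are my guesses, not from this entry):

from gwpy.timeseries import TimeSeriesDict
from scipy.signal import csd, welch

# GPS segments from the list above (offset value -> start, stop)
segments = {-0.200: (1394510627, 1394510727),
            -0.140: (1394511527, 1394511627)}  # ... and the rest

channels = ['H1:ASC-DHARD_Y_OUT_DQ', 'H1:LSC-DARM_IN1_DQ']  # assumed names

for offset, (t0, t1) in segments.items():
    data = TimeSeriesDict.get(channels, t0, t1)
    drive = data[channels[0]]
    darm = data[channels[1]].resample(drive.sample_rate)
    fs = drive.sample_rate.value
    nper = int(16 * fs)  # 1/16 Hz frequency resolution
    f, pxy = csd(drive.value, darm.value, fs=fs, nperseg=nper)
    _, pxx = welch(drive.value, fs=fs, nperseg=nper)
    tf = pxy / pxx  # transfer function estimate: cross-spectrum / drive PSD
    print('offset %+.3f: |TF(20 Hz)| = %.3g' % (offset, abs(tf[int(20 * 16)])))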

We are leaving a value of -0.14 in the WFS offset.

Images attached to this report
Comments related to this report
louis.dartez@LIGO.ORG - 21:57, Thursday 14 March 2024 (76408)
Attached is a comparison of the DARM sensing function with no AS A centering offset vs an offset of -0.14. 

With an AS A centering offset of -0.14, which we found to be the value that results in the minimum amount of coupling to DARM below 20Hz, the sensing function clearly shows optical spring-like characteristics. This brings to mind a few thoughts: 

1. This supports the idea that coupling from the DHARD loop into DARM has a noticeable effect on the structure seen in the sensing function at low frequencies. We've been wondering about this for some time, so it's nice to finally have a direction to point in.
2. We tend to adjust the SRC detuning by repeatedly measuring the sensing function and trying to find an SRC offset that results in a flat sensing function at low frequencies. The fact that DHARD also couples to DARM in a way that can affect the shape of the sensing function at low frequencies raises the question: could we in fact be further detuning the SRC while intending to do the opposite, due to confusion caused by the DHARD coupling effects?
3. I recall being told that sometimes squeezing gets better with some level of detuning. If our only measure of SRC detuning is from measuring and inspecting the sensing function, then this measurement hasn't been clean due to the DHARD coupling.


Lots to think about...
Images attached to this comment
gabriele.vajente@LIGO.ORG - 10:46, Friday 15 March 2024 (76419)

Here's a more detailed analysis of the AS WFS centering steps.

The first plot shows the steps in ASC-AS_A_DC_YAW_OFFSET compared with a DARM spectrogram, during a DHARD_Y injection. The spectrogram shows that there is a minimum in the coupling of DHARD_Y to DARM around -0.15 / -0.16.

The second plot shows the transfer function from DHARD_Y to DARM for all values of the offset. A value of -0.15 gives the lowest coherence and the lowest coupling, so that seems to be the optimal value. One can notice how the transfer function phase flips sign as expected when one goes through the minimum coupling.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 11:50, Friday 15 March 2024 (76422)

Changing the AS_A centering offset also moved SR2, SRM and BS.

elenna.capote@LIGO.ORG - 21:38, Friday 15 March 2024 (76451)

I tried stepping the REFL WFS A and B DC offsets in yaw similarly, to see if the CHARD Y coupling to DARM would change. In summary, I stepped between -0.2 and 0.2 for both WFS and saw no change.

Method: I set a 30 second ramp on the offsets because the DC centering loops are slow. I stepped first in steps of 0.01, and then 0.02. I injected a broadband CHARD Y injection and measured the transfer function to DARM between 10-30 Hz. I saw no change in the coupling while I made these steps.
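
A rough sketch of that stepping procedure (the REFL offset channel names are my guesses by analogy with ASC-AS_A_DC_YAW_OFFSET, and the settle handling is simplified):

import time
from ezca import Ezca

ezca = Ezca(prefix='H1:')  # or let ezca pick up the IFO from the environment

# 41 steps from -0.2 to +0.2 in 0.01 increments
for value in [round(-0.2 + 0.01 * i, 2) for i in range(41)]:
    for wfs in ('A', 'B'):
        # hypothetical channels, by analogy with ASC-AS_A_DC_YAW_OFFSET
        ezca['ASC-REFL_%s_DC_YAW_OFFSET' % wfs] = value
    time.sleep(30)  # let the 30 s ramp and the slow DC centering loops settle
    # ... measure the broadband CHARD_Y -> DARM transfer function here ...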

minhyo.kim@LIGO.ORG - 14:52, Monday 18 March 2024 (76489)

Before checking on the calibration change in DARM and DHARD, I checked on the thermalisation effect in the coupling.
I chose a long lock stretch (Mar. 16, 05:30:00 UTC ~ 15:30:00 UTC) without the centering offset, and selected start, middle (10:30:00 UTC) and end times within that window.

The three plots are: 1) DARM, 2) DHARD PIT, 3) DHARD YAW.
In addition, I included a screenshot of ndscope to confirm the time window.

Since the 'end' time data in all plots show a different trend compared to the other times, it seems that thermalisation affects DARM and DHARD.

Images attached to this comment
minhyo.kim@LIGO.ORG - 16:15, Monday 18 March 2024 (76490)

Checked on the calibration lines in DARM and DHARD with the centering offset on and off.
To minimize the thermalisation effect, the times for the comparison were chosen within a short time window.

The figures are: 1) comparison altogether, 2) DARM comparison, 3) DHARD PIT, 4) DHARD YAW, 5) screenshot of the ndscope around the comparison time.

The peaks of the calibration lines were the same in DARM with and without the centering offset. However, for DHARD, only YAW showed calibration lines, and with a different peak magnitude (lower without the offset).

Images attached to this comment
H1 CAL
louis.dartez@LIGO.ORG - posted 21:09, Thursday 14 March 2024 (76406)
updated front end calibration
I updated the front end calibration with 20240315T012231Z.

I also tried to update the GDS pipeline but ran into errors (attached as gds_error.txt ).


I accepted the attached SDFs on observe.snap and checked that safe.snap for CAL-CS was clear (screenshot attached).
Images attached to this report
Non-image files attached to this report
H1 SQZ
naoki.aritomi@LIGO.ORG - posted 20:39, Thursday 14 March 2024 - last comment - 15:56, Saturday 16 March 2024(76405)
Less CLF power gives better squeezing

Naoki, Dhruva, Nutsinee

Yesterday we had only 3dB squeezing at the IFO, so we checked the squeezing at the homodyne. Although the visibility is good (98.5%), the squeezing was only 4.5dB at the homodyne with 6dBm CLF6. We reduced the CLF power and recovered 8dB squeezing, as shown in the attached figure.

The CLF6 was reduced from 6dBm to -42dBm, and 8dB squeezing was obtained with -38dBm CLF6. CLF6 between -38 and -20 dBm gave similar squeezing, so we set it to -24dBm, which is similar to the O4a value and corresponds to 8uW of CLF_REFL_LF_OUTPUT. The squeezing at the IFO recovered to 4.5dB with -24dBm CLF6.

Note that LLO also saw better squeezing at the IFO with less CLF power, in LLO 70072.

We had 6.5dB squeezing at the homodyne in 76040 and 4.5dB squeezing at the IFO in 76226 with 6dBm CLF6 before. The question is why we lost squeezing at the homodyne and the IFO this week with the same CLF power. The commissioning list in 76369 might give us a clue.

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 12:49, Friday 15 March 2024 (76429)

At high CLF power we have about 12 uW of light for CLF and RLF after the OPO. Each homodyne PD has 0.5 mW of power, or 1 mW total. This yields a CLF/RLF to LO ratio of 0.012. Using the relation P_SB/P_LO = Gamma^2/2 for the modulation index Gamma, we obtain an estimate of 150 mrad for the maximum phase modulation. In reality it will be somewhat smaller, since some of the power will be in amplitude modulation. This will limit the maximum amount of achievable squeezing on the homodyne. But this has no bearing on the DCPDs, since the CLF/RLF to LO ratio there is approximately 5000 times smaller.

nutsinee.kijbunchoo@LIGO.ORG - 15:56, Saturday 16 March 2024 (76452)SQZ

A bit more details on this.

The power ratio between the SB and CR of each sideband is approximately (gamma/2)^2, where gamma is the modulation depth in radians. Adding the two sidebands together gives (gamma^2)/2.

The total power transmitted through the OPO in the high CLF case was 12uW. The total LO power hitting the HD was 1mW. So the phase noise contribution to the HD sqz was

(gamma^2)/2 = 12uW/1mW

High CLF phase noise (gamma) at the homodyne = sqrt(2*12uW/1mW) = 154mrad (max)

The total power transmitted through the OPO in the low CLF case was 0.6uW, with the same 1mW of LO power hitting the HD, so

Low CLF phase noise (gamma) at the homodyne = sqrt(2*0.6uW/1mW) = 35mrad (max)

These numbers include amplitude modulation, so they are the worst case that could possibly happen. Using 45 mrad of phase noise and *16dB of asqz fits the high CLF squeezing of 6.5 dB in the homodyne; 15 mrad and 16 dB of squeezing fits the low CLF squeezing of 8 dB in the HD.

For the IFO case we've only injected sqz using low CLF so far. All the sideband power gets attenuated by the OMC (a factor of 5000). The LO (IFO carrier) is ~40mW. The phase noise contribution from the CLF/RLF sidebands is 0.08 mrad, which is negligible. Even for high CLF power (RF6 = 6 dBm) this phase noise would be 0.3 mrad. That number is still negligible, so there's no reason why we shouldn't be able to see good squeezing with high CLF. Using **15dB of aSQZ and a loss of ***30%, we have 5 dB of squeezing as observed on Friday.

*ASQZ trace overlapped with 16 dB aSQZ reference in HD https://alog.ligo-wa.caltech.edu/aLOG/uploads/73562_20231018171710_8dB_hd_sqz_2023Oct18.png

**15 dB of aSQZ in DARM https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=76426

*** If I'm reading this noise budget correctly, the inferred loss in the IFO is 30%.

Images attached to this comment
H1 CAL (AOS, ISC)
louis.dartez@LIGO.ORG - posted 19:03, Thursday 14 March 2024 - last comment - 19:44, Thursday 14 March 2024(76399)
First successful calibration suite in the new darm offloading configuration
Gabriele, Louis

We've successfully run a full set of calibration swept-sine measurements in the new DARM offloading (LHO:76315). In December, I tried running simulines in the new DARM state without success: I reduced all injection amplitudes by 50% but kept knocking the IFO out of lock (LHO:74883). After those repeated failures, I realized that the right thing to do was to scale the swept-sine amplitudes by the changes that we made to the filters in the actuation path. I prepared four sets of simulines injections last year that we finally got to try this evening. The simulines configurations that I prepared live at /ligo/groups/cal/src/simulines/simulines/newDARM_20231221. In that directory are 1) simulines injections scaled by the exact changes we made to the locking filters, and 2)-4) reductions by factors of 10, 100, and 1000 of the rescaled injections, which I made out of an abundance of caution.

The measurements we took this evening are: 

2024-03-15 01:44:02,574 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,582 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,591 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,599 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,605 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240315T012231Z.hdf5


We did not get to take a broadband PCALY2DARM measurement as we usually do as part of the normal measurement suite. Next steps are to update the pyDARM parameter file to reflect the current state of the IFO, process these new measurements, then use them to update the GDS pipeline and confirm that it is working well. More on that progress in a comment.


Relevant Logs:
- success in transitioning to the new DARM offloading scheme in March 2024: LHO:76315
- unable to transition into the new offloading in January 2024, (we still don't have a good explanation for this): LHO:75308
- cal-cs updated for the new darm state: LHO:76392
- weird noise in cal-cs last time we tried updating the front end calibration for this state (still no explanation): LHO:75432
- previous problems calibrating this state in December: LHO:74977
- simulines lockloss in new darm state in December: LHO:74887
Comments related to this report
louis.dartez@LIGO.ORG - 19:08, Thursday 14 March 2024 (76401)
The script I used to rescale the simulines injections is at /ligo/groups/cal/common/scripts/adjust_amp_simulines.py. It's the same script (slightly modified) that I used in LHO:74883.
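
For the record, the operation is roughly of this kind (a hypothetical sketch; the actual adjust_amp_simulines.py scales each line by the frequency-dependent filter changes, and the real ini layout may differ):

import configparser

def rescale_amplitudes(ini_in, ini_out, scale):
    """Scale every swept-sine injection amplitude by a common factor."""
    cfg = configparser.ConfigParser()
    cfg.read(ini_in)
    for section in cfg.sections():
        if 'amplitude' in cfg[section]:  # field name assumed
            amp = float(cfg[section]['amplitude'])
            cfg[section]['amplitude'] = str(amp * scale)
    with open(ini_out, 'w') as f:
        cfg.write(f)

# e.g. the 50% reduction tried in December (LHO:74883):
rescale_amplitudes('settings_h1.ini', 'settings_h1_newDARMconfig.ini', 0.5)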
louis.dartez@LIGO.ORG - 19:41, Thursday 14 March 2024 (76403)
On Updating the pyDARM parameter file for the new DARM state:


- copied H1OMC_1394062193.txt to /ligo/groups/cal/H1/arx/fotonfilters/ (see the Nov 28, 2023 discussion section in LIGO-T2200107 regarding cal directory structure changes for O4b). Since the pyDARM logic isn't fully transitioned yet, I also copied the same file to the 'old' location: /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1omc/.
- I also copied H1SUSETMX_139441589.txt to both (corresponding) locations.
- pyDARM parameter file swstat values were updated according to what was active at 1394500426 (SUSETMX and DARM1,2)

the git commit encapsulating the changes to the parameter file can be found here: https://git.ligo.org/Calibration/ifo/H1/-/commit/119768de95a66658039036aca358364c1d39abe4
louis.dartez@LIGO.ORG - 19:44, Thursday 14 March 2024 (76404)
here is the pyDARM report for this measurement: https://ldas-jobs.ligo-wa.caltech.edu/~cal/?report=20240315T012231Z
H1 ISC
georgia.mansell@LIGO.ORG - posted 18:15, Thursday 14 March 2024 - last comment - 14:09, Friday 15 March 2024(76398)
Waits that maybe can be removed from ISC_LOCK and other guardian nodes

Last week, when we locked the new OMC by hand, I copy-pasted some guardian code into a shell and found that there was a gain set and wait that were totally unnecessary. This inspired me to start reading through ISC_LOCK to look for other redundant waits. Here are my notes; I only got up to the start of LOWNOISE_ASC before I went totally cross-eyed.

Here are the notes I took; the ones in bold we can for sure remove.

Preparing for lock

Line 301-305 [ISC_LOCK, DOWN] Prcl UGF servo turn off (do we still do this?) no wait times but maybe unnecessary
Line 327 [PREP_FOR_LOCKING] Thermalization guardian (are we using this?)
Line 350-354 [PREP_FOR_LOCKING] Turn off CHARD blending, no wait times
Line 423 [PREP_FOR_LOCKING] turn off PR3 DRIVEALIGN P2P offset for PR3 wire heating
Line 719 [PREP_FOR_LOCKING] toggling ETM ESD HV if the output is low, seems redundant with line 445

Initial alignment


INITIAL_ALIGNMENT for the green arms only offloads a minute or 2 after it's visually converged. Initially I thought the convergence checker thresholds should be raised, but it's a 30 second average. Might make sense to reduce the averaging time?
(2 screenshots attached for this one)

Arm Length Stabilization

ALS_DIFF [LOCK] Ramps DARM gain to 40, waits 5 seconds, ramps DARM gain to 400, waits 10 seconds. Seems long.
ALS_DIFF Line 179, waits 2* the DARM ramp time, but why?
ALS_DIFF [LOCK] Engages boosts with a 3 second wait, engages boosts with another 10 second wait
ALS DIFF Line 191 wait timer 10 seconds seems unnecessary.

ALS_COMM [PREP_FOR_HANDOFF] line 90 5 second wait - maybe we could shorten this?
ALS_COMM [HANDOFF_PART_3] lines  170 and 179 - 2 and 5 second timers but I'm not sure I get why they are there
ALS_COMM's Find IR takes 5 seconds of dark data, has two hard coded VCO offsets in FAST_SEARCH, if it sees a flash it waits 5 seconds to make sure it's real, and then moves to FINE_TUNE_IR, taking 50 count VCO steps until the transmitted power is >0.65
Suggest updating the hard coded offsets (ALS_DIFF line 247) from [78893614, 78931180] to [78893816, 78931184] (second spot seems good, first spot needs a few steps)

ALS_DIFF's find IR has 3 locations saved in alsDiffParams.dat which it searches around. This list gets updated each time it finds a new spot; HOWEVER, the search starts 150 counts away from the starting location and steps in increments of 30. Seems like it would be more efficient to start 30 below the saved offset? (A sketch of the search logic follows below.)
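
To make that concrete, here is a sketch of the search logic (channel names and details are hypothetical, not taken from ALS_DIFF):

import time

def find_ir(ezca, saved_offset, step=30, span=150, threshold=0.65):
    """Step the VCO tune outward from a saved offset until IR flashes.

    Starting at saved_offset - span mimics the current behavior; starting
    near saved_offset - step would be quicker when the saved spot is good.
    """
    offset = saved_offset - span
    while offset <= saved_offset + span:
        ezca['ALS-C_DIFF_VCO_TUNEOFS'] = offset  # hypothetical channel
        time.sleep(1)
        if ezca['ALS-C_TRX_A_LF_OUTPUT'] > threshold:  # hypothetical channel
            return offset
        offset += step
    return None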

ISC_LOCK [CHECK_IR] line 1206 has a 5 second wait after everything is done which could probably be reduced?

Power- and dual- recycled michelson locking

PRMI locking - a bunch of 1 second waits idk if they are needed?
ISC_DRMI line 627/640 [PRMI_LOCKED] self.timer['wait']=1 seems maybe unnecessary?
ISC_DRMI line 746, 748 [ENGAGE_PRMI_ASC] - MICH P and Y ASC ramps on with a 20 second timer, but wait = false, but this seems long anyway?
ISC_DRMI line 762/765/768 [ENGAGE_PRMI_ASC] self.timer['wait'] = 4... totally could remove this?
ISC_DRMI [PRMI_TO_DRMI] line 835 - wait timer of 4 seconds (but I don't think it actually waited 4 seconds, see third attached screenshot, so maybe I don't know what self.timer['wait'] really means!!!) See the timer sketch below.
When doing the PRMI to DRMI transition, it first offloads the PRMI ASC, does the PRMI_TO_DRMI_TRANSITION state, then runs through the whole DOWN state of ISC_DRMI, which takes ~25 seconds? Maybe there's a quicker way to do this.
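
On the self.timer confusion above: my understanding (worth double-checking) is that assigning self.timer['wait'] = 4 only starts a countdown and returns immediately; it never blocks. It only causes a wait if the code later checks the timer, e.g.:

from guardian import GuardState

class EXAMPLE(GuardState):
    def main(self):
        self.timer['wait'] = 4  # starts a 4 s countdown, returns immediately

    def run(self):
        # Guardian polls run() repeatedly; the timer only matters because
        # we check it here. A timer that is set but never checked imposes
        # no wait at all, which would explain the screenshot above.
        if not self.timer['wait']:
            return False  # not elapsed yet: stay in this state
        return True       # 4 s elapsed: state completes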


In ISC_DRMI there's a self.caution flag which is set to True if AS_C is low; it adds 10 second waits after ASC engagements and a *90 second* wait before turning on the SRC ASC. Might be worthwhile to replace this with a convergence checker (see the sketch after this list), since we might not need to wait a minute and a half if we are already well aligned.

Line 1845 ISC_LOCK [CHECK_AS_SHUTTERS] 10 second wait for...? This happens after the MICH RSET but before the FAST SHUTTER is requested to READY
Lines 1837/8 and 1865 redundant?
Line 1870 wait timer 2 seconds after AS centering + MICH turned back on, but why?
Line 1887 - straight up 10 second wait after increasing MICH gain by 20dB
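
A convergence checker of the kind suggested above could look something like this (a sketch with made-up error-signal channels and threshold, not actual ISC_DRMI code):

from guardian import GuardState

class ENGAGE_SRC_ASC(GuardState):
    def main(self):
        self.signals = ['ASC-SRC1_P_OUTMON', 'ASC-SRC1_Y_OUTMON']
        self.threshold = 0.7

    def run(self):
        # Proceed as soon as the loops have actually converged, instead
        # of waiting a fixed 90 seconds regardless of alignment.
        if any(abs(ezca[ch]) > self.threshold for ch in self.signals):
            return False  # not converged yet: stay here
        return True       # converged: safe to engage the SRC ASC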

CARM offset

Line 2119 [CARM_TO_TR] time.sleep(3) at the end of this state not clear what we're waiting for
Line 2222 [DARM_TO_RF] self.timer['wait'] = 2 that used to be 1
Line 2235 [DARM_TO_RF] another 2 second timer?
Line 2314 [DHARD_WFS] 20 second timer to turn DHARD on, but maybe we should just go straight to convergence checking once the gains are ramped on?
Line 2360 [PARK_ALS_VCO] 5 second wait after resetting the COMM and DIFF PLLs
Line 2406 [SHUTTER_ALS] 5 second wait followed by a 1 second sleep after the X arm, Y arm, and COMM are taken to shuttered
Line 2744 [CARM_TO_ANALOG] 2 second wait when REFLBIAS boost turned off but before summing node gain (A IN2) increased?
Line 2753 [CARM_TO_ANALOG] 5 second wait after summing node gain increased
Line 2760 [CARM_TO_ANALOG] 2 second wait after enabling digital CARM antiboost?
Line 2766 [CARM_TO_ANALOG] 2 second wait after turning on analog CARM boost
Line 2772 [CARM_TO_ANALOG] 2 second wait after ramping the REFL_DC_BIAS gain to 0; actually, maybe this one makes sense.

Full IFO


There are a ton of waits during the ASC engagement but I think usually the convergence checkers are the limit to time spent in the state.
Line 3706 [ENGAGE_SOFT_LOOPS] 5 second wait after everything has converged?
Line 3765 [PREP_DC_READOUT_TRANSITION] 3 second wait after turning on DARM boost but shouldn't it be 1 second?
Line 3816 [DARM_TO_DC_READOUT] 10 second wait after switching DARM intrix from AS45 to DC readout, might be excessive
Line 3826/7 [DARM_TO_DC_READOUT] - DARM gain is set to 400 (but it's already 400!) and then there is a 4 second wait, these two lines can for sure be removed!
Line 3834 [DARM_TO_DC_READOUT] - 5 second wait after ramping some offsets to 0, BUT the offsets ramp much more quickly than that!

Power up


line 4033 [POWER_10_W] 30 second wait after turning on some differential arm ASC filters but actually, never mind I don't think it actually does this wait
Line 4299 [REDUCE_RF45_MODULATION_DEPTH] we have a 30 second ramp time to ramp the modulation depths, maybe this could be shorter?
Line 4614 [MAX_POWER] 20 second thermal wait could be decreased?
Line 4641 [MAX_POWER] 30 second thermal wait could be decreased??
Line 4645 [MAX_POWER] 30 second thermal wait for the final small step could be decreased?

Lownoise stuff


line 4463/4482 [LOWNOISE_ASC] 5 second wait after we turn off RPC gains that were already off
line 4490 [LOWNOISE_ASC] 10 second wait after CHARD_Y gain lowered, but it looks to have stabilized after 5 seconds so I think we can drop this to 5.
honestly a lot of waits in lownoise_asc so I ran out of time to check them all for necessity

Images attached to this report
Comments related to this report
georgia.mansell@LIGO.ORG - 12:23, Friday 15 March 2024 (76425)

More waits in the guardian:

line 4530 [LOWNOISE_ASC] 5 second wait after turning up (more negative) MICH gain, next steps are not MICH related so maybe we can shorten it?
line 4563 [LOWNOISE_ASC] 10 second ramp after changing top mass damping loop yaw gains, then another 10 second ramp after lowering SR2 and SR3 everything damping loop yaw gains? probably can lump these together and then also maybe reduce the wait?
too scared to think about the timers in transition_from_etmx, but the whole state takes about 3 minutes, which I guess makes sense since we ramp the ESDs down and up again; also, this has been newly edited today
line 5503 [LOWNOISE_LENGTH_CONTROL] 10 second wait after setting up filters for LSC feedforward, and some MICH modifications but not sure why?
line 5536 [LOWNOISE_LENGTH_CONTROL] 10 second wait after changing filters and gains in MICH1/PRCL1/SRCL1 but all their ramp times are 2 or 5 seconds
line 5549 [LOWNOISE_LENGTH_CONTROL] 1 second wait after turning on LSCFF, maybe not needed?
line 5773 [LASER_NOISE_SUPPRESSION] 1 second waits after each LSC_REFL_SERVO gain step - could this be quicker?
line 5632 [OMC_WHITENING] 5 second wait after confirming OMC is locked could probably be skipped?

I'm attaching ISC_LOCK as I was reading it since it's always in a state of flux!

Non-image files attached to this comment
elenna.capote@LIGO.ORG - 13:01, Friday 15 March 2024 (76430)

Georgia and I looked through lownoise ASC together and tried to improve the steps of the state. Overall, it should be shorter, except we now need to add the new AS_A DC offset change. I have it set for a 30 second ramp, and I want it to go independently of other changes in lownoise ASC, since the DC centering loops are slow. However, Gabriele says he has successfully engaged this offset with 10 seconds, so maybe it can be shorter. I would like to try the state once like this, and if it's ok, go to a shorter ramp on the offset. This is line 4551 in my version: self.timer['LoopShapeRamp'] = 30.

Generally, we combined various steps of the same ramp length that had previously been separate, such as ASC loop changes, MICH gain changes, smooth limiter changes, etc. I completely removed the RPC step because it does nothing now that we do not use RPC.

elenna.capote@LIGO.ORG - 13:38, Friday 15 March 2024 (76431)

Ok, this was a bad change: we lost lock in this state. It was not during the WFS offset change; it was likely during some other filter change. We probably combined too many steps into each section. I reverted to the old version of the state, but I did take out the RPC gain ramp to zero, since that is unnecessary.

georgia.mansell@LIGO.ORG - 14:09, Friday 15 March 2024 (76433)

Quickly looking at the guardlog (first screenshot), the buildups (second screenshot), and the ASC error signals (third screenshot) during this LOWNOISE_ASC lockloss:

It seems like consolidating the test mass yaw damping loop gain change and the SR2/SR3 damping loop gain change was not a good choice. It was a slow lockloss.

Probably the changes earlier in the state were safe though!

Images attached to this comment
H1 ISC (ISC)
craig.cahillane@LIGO.ORG - posted 18:14, Thursday 14 March 2024 (76397)
Set up SR785 at the PSL racks, and plugged into analog DARM A+ chassis
Graeme, Matt, Craig 

In preparation for some analog analysis of CARM and DARM, we set up and got some PSDs of REFL A 9 I (which is actually REFL A 9 Q at the racks due to our arbitrary delay).

Daniel helped us use the patch panel in the CER to route analog DARM to the PSL racks.

There hasn't been an obvious effect on DARM from our setup so far, so we will leave it like this for tonight.

Pictures are of the SR785 setup at the PSL racks,
the CER patch panel, where we used a BNC to connect U2 patch 5 (which goes to the PSL racks) to U5 patch 6 (which goes to the HAM6 racks),
our connection to the OMC DCPD Whitening Chassis (OMC DCPD A+ slot),
and our connection to the HAM6 patch panel. 
Images attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 17:55, Thursday 14 March 2024 - last comment - 12:47, Friday 15 March 2024(76396)
DARM PSD with updated CAL-CS filters activated
The CAL-CS filters installed by Jeff (LHO:76392) do a better job of correcting CAL-DELTAL_EXTERNAL. See darm_fom.png. Here's a screenshot of the filter banks: new_darm_calcs_filters.png.

Also, Evan's 5Hz high pass filter (LHO:76365) pretty much killed the strong kick we've been seeing each time we switch into this new DARM state from ETMY. 
Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 12:47, Friday 15 March 2024 (76428)

The rms drive to the ESD is now about 5000 DAC counts on each quadrant, dominated by motion around 3 Hz.

Images attached to this comment
Non-image files attached to this comment
H1 CAL (ISC)
jeffrey.kissel@LIGO.ORG - posted 16:14, Thursday 14 March 2024 (76392)
H1 CAL CS ETMX Actuator Model of Digital Filters Updated to Allow for New DARM Distribution Filters
J. Kissel, L. Dartez

In prep for calibrating the detector under the "new DARM" control scheme (see e.g. some of the conversation in LHO aLOGs 76315 and 75308), I've copied the new filters that are needed from the H1SUSETMX.txt filter file over to the H1CALCS.txt filter file. The new filters are only in the replica of L2 LOCK, L1 LOCK, L3 DRIVEALIGN, and L2 DRIVEALIGN, and I only needed to copy over two filters.

I've committed the H1CALCS.txt to the userapps repo.
    /opt/rtcds/userapps/release/cal/h1/filterfiles/
        H1CALCS.txt

Attached is a screenshot highlighting the new filters copied over.
Images attached to this report
H1 CAL (AOS)
louis.dartez@LIGO.ORG - posted 00:26, Tuesday 19 December 2023 - last comment - 19:06, Thursday 14 March 2024(74883)
reduced simulines config for new DARM config state
I've retuned a Simulines configuration for testing on Tuesday morning. The frequency vector is the same as we nominally use, but I reduced all injection amplitudes by 50% across the board. If we're able to run Simulines in the new DARM state without losing lock in the morning, I'll need another round with Simulines at some point to determine the best injection strengths moving forward.


The new injection configs that I tuned were placed in 
/ligo/groups/cal/src/simulines/simulines/FreqAmp_H1_newDARMconfig_20231218/.

I then sourced the cal pydarm environment with 
source /ligo/groups/cal/local/bin/activate and ran the vector optimization script at /ligo/groups/cal/src/simulines/simulines/amplitudeVectorOptimiser.py after adjusting the input and output directories for H1 on lines 33 & 34 (variables changed are inDir and outDir) to:


inDir = 'FreqAmp_H1_newDARMconfig_20231218'
outDir = 'FreqAmp_H1_simuLines_newDARMconfig_20231218'


This placed new "optimized" frequency vector files in /ligo/groups/cal/src/simulines/simulines/FreqAmp_H1_simuLines_newDARMconfig_20231218/. 

Lastly, to actually generate the config file that Simulines processes when it's run, while still in the cal virtual environment, I ran 
python simuLines_configparser.py --ifo H1,

after changing the H1 output filename in simuLines_configparser.py to
outputFilename = outDir+'settings_h1_newDARMconfig_20231218.ini'.

This returned the following output:

(local) louis.dartez@cdsws22: python simuLines_configparser.py --IFO H1
Total time = 1252.0
Average time taken to sweep individual frequency:
26.083333333333332
Separations in TIME for each sweep:
[233.0, 260.0, 253.0, 253.0]
Starting Frequencies for each swept sine:
[5.6, 7.97, 14.44, 64.78, 339.4]
Starting points in relative time for each swept sine:
[0, 233.0, 493.0, 746.0, 999.0]


I then commented out my temporary variable name changes in the simulines scripts.

At the end of the exercise, the simulines file to test in the new DARM loop configuration is /ligo/groups/cal/src/simulines/simulines/settings_h1_newDARMconfig_20231218.ini.

The command to execute simulines using this configuration is 

gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/src/simulines/simulines/settings_h1_newDARMconfig_20231218.ini;gpstime. This is the same as the instructions in the Operator's wiki, with the modification for the new ini file.
Comments related to this report
louis.dartez@LIGO.ORG - 00:28, Tuesday 19 December 2023 (74884)
The script I used to adjust the injection amplitudes can be found at /ligo/groups/cal/src/common/scripts/adjust_amp_simulines.py.
louis.dartez@LIGO.ORG - 19:06, Thursday 14 March 2024 (76400)
The script mentioned above now lives at /ligo/groups/cal/common/scripts/adjust_amp_simulines.py
H1 TCS
camilla.compton@LIGO.ORG - posted 11:37, Tuesday 17 January 2023 - last comment - 15:08, Tuesday 07 October 2025(66832)
TCS HWS SLED Stock.
Last recorded in alog 58758. Both were replaced in Dec 2022 (66179), so we expect they will be fine until the end of 2023.
We currently have two new 840nm spares, and one used 790nm spare that could be used in a pinch. We will order more spares.

* Added to ICS DEFECT-TCS-7753; will give to Christina for dispositioning once new stock has arrived.

Comments related to this report
camilla.compton@LIGO.ORG - 12:47, Monday 30 January 2023 (67079)

New stock arrived and has been added to ICS. Will be stored in the totes in the TCS LVEA cabinet. 

  • ITMX: Superluminescent Diode QSDM-790-5 
    • S/N 11.21.380
    • S/N 11.21.382
    • S/N 05.21.346 (note the data sheet is labeled 04.21.346 but QPhotonics noted this is a typo) 
  • ITMY: Superluminescent Diode QSDM-840-5
    • S/N 11.21.303
camilla.compton@LIGO.ORG - 15:33, Thursday 10 August 2023 (72139)
  • ITMX: Superluminescent Diode QSDM-790-5 
    • S/N 06.18.002 - used spare
    • S/N 11.21.380
    • S/N 11.21.382
    • S/N 05.21.346 - Installed July 2023 71476
    • S/N 07.14.255 Old sled, removed 71476
  • ITMY: Superluminescent Diode QSDM-840-5
    • S/N 03.20.479 
    • S/N 06.16.005 
    • S/N 11.21.303 - Installed July 2023 71476
    • S/N 11.17.127 Old sled, removed 71476

ICS has been updated. As of August 2023, we have 2 spare SLEDs for each ITM HWS.

camilla.compton@LIGO.ORG - 14:41, Tuesday 10 October 2023 (73373)
  • ITMX Superluminescent Diode QSDM-790-5 
    • S/N 06.18.002 - used spare
    • S/N 11.21.380 - Installed Oct 2023 73371
    • S/N 11.21.382
    • S/N 05.21.346 - Old sled, removed  73371
    • S/N 07.14.255 Old sled, removed 71476
  • ITMY: Superluminescent Diode QSDM-840-5
    • S/N 03.20.479 - Installed Oct 2023 73371
    • S/N 06.16.005 
    • S/N 11.21.303 - Old sled, removed  73371
    • S/N 11.17.127 Old sled, removed 71476

ICS has been updated. As of October 2023, we have 1 spare SLED for each ITM HWS, with more ordered.

camilla.compton@LIGO.ORG - 15:31, Wednesday 06 December 2023 (74645)

Spare 840nm SLEDs QSDM-840-5 09.23.313 and QSDM-840-5 09.23.314 arrived and will be placed in the TCS cabinets on Tuesday. We are expecting qty 2 790nm SLEDs too.

camilla.compton@LIGO.ORG - 16:26, Thursday 14 March 2024 (76393)

Spare 790nm SLEDs QSDM-790-5--00-01.24.077 and QSDM-790-5--00-01.24.079 arrived and will be placed in the TCS cabinets on Tuesday. 

camilla.compton@LIGO.ORG - 16:12, Monday 09 June 2025 (84906)

In 84417, we swapped:

The removed SLEDs have been dispositioned, DEFECT-TCS-7839.

camilla.compton@LIGO.ORG - 15:08, Tuesday 07 October 2025 (87355)

In 87353 we removed 11.21.382 and installed 01.24.077.
