Reports until 16:29, Wednesday 17 April 2024
H1 TCS
camilla.compton@LIGO.ORG - posted 16:29, Wednesday 17 April 2024 (77255)
CO2Y and CO2X Beam Scan

TJ, Camilla. WP 11811

Yesterday, we repeated the 76715 CO2 beam scan using a razor blade on a 25 mm translation stage while measuring the power transmitted to our PD (channel H1:TCS-ITMX/Y_CO2_LSRPWR_MTR_OUTPUT).

We realized that in our first set of measurements some of the blocked beam was transmitted through a hole in the razor blade (changing the radius measurement by ~1 mm), so the CO2X 76715 measurements are not as trustworthy.

All data taken for CO2X and CO2Y attached and summarized below. Positions are relative to the annular mask: downstream = (+), upstream = (-).

CO2Y:
Position [mm]   Beam Radius [mm]   Prev. measured with hole in razor [mm]
-238            11.39              12.34
   0            N/A                11.27 (think hole was covered)
+143            10.77              11.17
+194            10.61              11.39

CO2X:
Position [mm]   Beam Radius [mm]
 +32             9.85
+133            10.01
+184            10.49
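The CO2X radii above can be cross-checked against the Gaussian-beam expansion w(z)^2 = w0^2 * (1 + ((z - z0)/zR)^2), which is quadratic in z, so a quadratic fit of w^2 versus position recovers the waist. A minimal sketch, using the CO2X values from the table (the fit itself is illustrative, not part of the measurement procedure):

```python
import numpy as np

# CO2X measurements from the table: position downstream of mask [mm], beam radius [mm]
z = np.array([32.0, 133.0, 184.0])
w = np.array([9.85, 10.01, 10.49])

# For a Gaussian beam, w(z)^2 = w0^2 * (1 + ((z - z0)/zR)^2) = a + b*z + c*z^2,
# so fit w^2 with a quadratic (exact through three points).
c, b, a = np.polyfit(z, w**2, 2)

# Recover waist position and waist radius from the quadratic coefficients.
z0 = -b / (2 * c)                 # waist location [mm]
w0 = np.sqrt(a - b**2 / (4 * c))  # waist radius [mm]

print(f"waist at z0 = {z0:.0f} mm, w0 = {w0:.2f} mm")
```

With only three points the fit is exactly determined; more scan positions would give an uncertainty estimate as well.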
Non-image files attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:00, Wednesday 17 April 2024 (77256)
OPS Day Shift Summary

TITLE: 04/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 126Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is in NLN and OBSERVING

Commissioning today went well. We survived a few high magnitude earthquakes.

LOG:

Start Time  System  Name            Location               Laser Haz  Task                    End Time
15:14       FAC     Karen           Optics, Vac Prep Labs  N          Technical cleaning      15:43
15:43       FAC     Ken             FC Enclosure           N          Electrical maintenance  16:43
15:44       FAC     Karen           MY                     N          Technical cleaning      16:44
17:17       FAC     Fil             CS Receiving Door      N          Roll up receiving door  18:17
21:16       SQZ     Andrei, Sheila  Optics Lab             N          Tour                    22:16
H1 General
oli.patane@LIGO.ORG - posted 16:00, Wednesday 17 April 2024 (77257)
Ops EVE Shift Start

TITLE: 04/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 16mph Gusts, 12mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:

We're Observing and have been Locked for over 15 hours.

H1 ISC (SEI)
jim.warner@LIGO.ORG - posted 14:34, Wednesday 17 April 2024 - last comment - 12:27, Thursday 18 April 2024(77254)
New HAM1 FF

Jennie, Jim

We tried Gabriele's newest version of the HAM1 ASC feedforward. New filters were installed last week, but we ran out of time during the commissioning window. It seems like we can still only get good subtraction from the pitch degrees of freedom; we got a lot of 10-ish Hz noise injection when we engaged the yaw degrees of freedom.

To start, we turned off the HAM1 FF by setting H1:HPI-HAM1_TTL4C_FF_INF_{RX,RY,Z,X}_GAIN to zero, then shutting off the H1:HPI-HAM1_TTL4C_FF_OUTSW. While collecting that data we switched to the new filters and zeroed the gains for all of the individual H1:HPI-HAM1_TTL4C_FF_CART2ASC{P,Y}_{DOF}_{DOF}_GAIN filter banks. We then tried turning on all of the pitch feedforward first, by ramping the gains in a couple of steps from 0 to 1. It seemed like it worked well, but I don't think we actually got to the full gain of 1. We then tried turning on the yaw FF, but that pretty quickly started injecting noise around 10 Hz. In the first attached spectra, red traces are with the new FF, blue are with the FF off.

The second image shows trends where Jennie tries to reconstruct the timeline, which is how we found the first pitch test wasn't complete. Sheila ran an A2L measurement, then we tried the pitch FF again; spectra in the third plot. All of the red traces are the new P FF (1397415884), blue is the old (1397407980), green is with the FF off (1397405754). This worked well: we got some slight improvements around 15 Hz, and the new CHARD P gets rid of some noise injection around 2 Hz. We didn't see the pitch FF affect the yaw DOFs.

We left the new pitch filters running and accepted them in the Observe SDF. The yaw filters were left off.
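The "ramping the gains in a couple of steps from 0 to 1" procedure above can be sketched generically. This is a stand-in helper, not the actual Guardian/ezca code; the `set_gain` callback, step count, and dwell time are all assumptions:

```python
import time

def ramp_gain(set_gain, start=0.0, stop=1.0, steps=4, dwell=5.0):
    """Step a filter-bank gain from start to stop in equal increments,
    pausing `dwell` seconds between steps so any noise injection can be
    spotted (and the ramp aborted) before reaching full gain."""
    for i in range(1, steps + 1):
        value = start + (stop - start) * i / steps
        set_gain(value)   # hypothetical setter, e.g. an EPICS gain write
        time.sleep(dwell)
    return value

# Example with a stand-in setter that just records the values:
history = []
ramp_gain(history.append, steps=4, dwell=0.0)
print(history)  # [0.25, 0.5, 0.75, 1.0]
```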

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 12:27, Thursday 18 April 2024 (77271)

I looked at the attempt at engaging the HAM1 yaw FF:
1) The filters that were loaded were correct, meaning they were what I expected.
2) Retraining with a time when the HAM1 pitch FF was on yields yaw filters that are similar to what was tried, so it doesn't look like it's a pitch/yaw interaction (Jennie also pointed to some evidence of this in the alog).

I suspect there might be cross coupling between the various yaw DOFs. I would suggest that we upload the newly trained filters (attached) and try to engage the yaw FF one by one, starting with CHARD, which is the one we care about most.

Images attached to this comment
Non-image files attached to this comment
H1 TCS (TCS)
ibrahim.abouelfettouh@LIGO.ORG - posted 13:09, Wednesday 17 April 2024 (77253)
TCS Chiller Water Level Top-Off (Bi-Weekly, FAMIS #27787)


Closes FAMIS 27787

H1 ISC
sheila.dwyer@LIGO.ORG - posted 13:06, Wednesday 17 April 2024 (77252)
PRCL excitation

We wanted to check the PRCL offset during our commissioning time this morning; we ended up not having much time for that, but did re-run Elenna's template from 76814. It looks like the PRCL to REFL RIN transfer function is much the same as at that time. It seems as though there was a change in the PRCL to DARM transfer function, but that measurement has low coherence at most frequencies.

Images attached to this report
H1 INJ
thomas.shaffer@LIGO.ORG - posted 12:59, Wednesday 17 April 2024 (77248)
Stochastic injection ran with LLO

Joe B from LLO and I ran the short and long stochastic injections with the commands below, starting at 19:08 UTC. Both ran successfully; back to Observing at 19:55 UTC.

hwinj stochastic short --time 1397416098  --run

ifo: H1
waveform root: /ligo/groups/cal/H1/hwinj
config file: /ligo/groups/cal/H1/hwinj/hwinj.yaml
GraceDB url: https://gracedb.ligo.org/api/
GraceDB group: Detchar
GraceDB pipeline: HardwareInjection
excitation channel: H1:CAL-INJ_TRANSIENT_EXC
injection group: stochastic
injection name: short
reading waveform file...
injection waveform file: /ligo/groups/cal/H1/hwinj/stochastic/SB_ER15HLShort_H1.txt
injection waveform sample rate: 16384
injection waveform length: 780.0 seconds

injection start GPS: 1397416098.0

hwinj stochastic long --time 1397417118 --run


ifo: H1
waveform root: /ligo/groups/cal/H1/hwinj
config file: /ligo/groups/cal/H1/hwinj/hwinj.yaml
GraceDB url: https://gracedb.ligo.org/api/
GraceDB group: Detchar
GraceDB pipeline: HardwareInjection
excitation channel: H1:CAL-INJ_TRANSIENT_EXC
injection group: stochastic
injection name: long
reading waveform file...
injection waveform file: /ligo/groups/cal/H1/hwinj/stochastic/SB_ER15HLLong_H1.txt
injection waveform sample rate: 16384
injection waveform length: 1800.0 seconds
injection start GPS: 1397417118.0

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 12:59, Wednesday 17 April 2024 (77251)
OPS Day Midshift Update

H1 is now finished with commissioning and the IFO is now OBSERVING

H1 ISC
sheila.dwyer@LIGO.ORG - posted 12:15, Wednesday 17 April 2024 (77243)
camera offsets set

Following up on 77139, we set the camera offsets to those found, and added these to lscparams.py:

cam_offsets = {'PIT':{'1':-233,
            '2':-173,
            '3':-230},
        'YAW':{
             '1':-236,
             '2':-422,
             '3':-349.5},}

This increased the power in the X arm by 5 kW and the Y arm by 6 kW, as the screenshot shows. After this we re-ran the A2L script, starting with much lower amplitudes for ETMX yaw. It seems that with the A2L as badly tuned as this, we need to adjust the amplitude of the excitations for each test mass and each DOF separately.

Optic, DOF   Amplitude   Starting A2L gain   Final A2L gain
ETMX, Y      0.3          4.9315              5.3477
ETMY, Y      3            3.061               0.8592
ITMX, P      3           -0.9709             -1.0603
ITMY, P      10          -0.3830             -0.3660
ETMX, P      1            4.1183              3.10708
ETMY, P      0.3          4.6013              4.46241
ITMX, Y      1            2.7837              2.82803
ITMY, Y      1           -2.3962             -2.38278

I edited the my_a2l.py script in userapps/isc/h1/scripts to round the A2L gains that it sets, to make it easier to deal with these values in SDF. 
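The rounding edit to my_a2l.py can be illustrated like this; the helper name and the number of digits are assumptions, not the actual script contents:

```python
def round_a2l_gain(gain, ndigits=4):
    """Round an A2L gain before writing it, so the value stored in SDF
    doesn't carry a long float tail that makes diffs hard to compare."""
    return round(gain, ndigits)

# Example with a few final gains from the table above:
gains = {"ETMX_Y": 5.3477, "ETMY_P": 4.46241, "ITMY_Y": -2.38278}
rounded = {k: round_a2l_gain(v) for k, v in gains.items()}
print(rounded)  # {'ETMX_Y': 5.3477, 'ETMY_P': 4.4624, 'ITMY_Y': -2.3828}
```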

Editing to add: we found that there was still angular coherence, so we re-ran the script for all DOFs with an amplitude of 1. This changed the gains for the ITMX, ITMY, and ETMX Y2L and the ETMX and ETMY P2L, but not the ETMY Y2L or the ITMX and ITMY P2L. The end result is fairly high CHARD P coherence, which is limiting our range right now (see attached screenshot).

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:32, Wednesday 17 April 2024 (77245)
Wed CP1 Fill

Wed Apr 17 10:10:07 2024 INFO: Fill completed in 10min 3secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 ISC (CDS, ISC, Lockloss, SEI, SYS)
jeffrey.kissel@LIGO.ORG - posted 10:31, Wednesday 17 April 2024 - last comment - 12:19, Wednesday 17 April 2024(77242)
Update on Fast Shutter Timing
J. Kissel 

During today's Systems Call, it came up that there was scant documentation on the timing performance of the existing fast shutter system. A quick look through the aLOG showed some results from the 2016 era including examples like LHO aLOGs 29062, 28958, and 26263. Koji's LHO:29062 is probably the most conclusive, quoting (repeated here for convenience): "The fast mechanical shutter start blocking the beam at 1.7ms, reaches half blocking at 1.85ms, complete block at 2.0ms."

Here, I take some modern data using the digital signals stored in the frames:
    [65 kHz]  H1:OMC-PI_DCPD_64KHZ_AHF_DQ                OMC DCPDs (down stream of the OMC Cavity)
    [4096 Hz] H1:ISI-HAM6_BLND_GS13Z_DQ                  Vertical GS13s on the HAM6 ISI (nominally calibrated into 1 nm/s, but I've scaled it to arbitrarily show up demonstratively on the OMC DCPD plot)    
    [2048 Hz] H1:ASC-OMC_A_NSUM_OUT_DQ                   OMC QPD A SUM (up-stream of the OMC Cavity, but still down-stream from the fast shutter)
    [2048 Hz] H1:ASC-OMC_B_NSUM_OUT_DQ                   OMC QPD B SUM (""")
    [2048 Hz] H1:ASC-AS_C_NSUM_OUT_DQ                    AS_C QPD SUM (up-stream of the fast shutter)
    [16 Hz]   H1:SYS-MOTION_C_FASTSHUTTER_A_STATE        Slow-channel record of the logical state of the shutter
    [16 Hz]   H1:SYS-MOTION_C_SHUTTER_G_TRIGGER_VOLTS    Slow-channel record of the trigger PD voltage (which is AS_C)

With these channels, I show three recent lock-losses. 

It seems like all of the photo-diode signals show a very long "turn off time" (of order ~1 [sec]) and are pretty unreliable for precision estimates of shutter timing. This is likely a combination of:
   - For the OMC DCPDs at least, an *additional* hardware "shutter," where the cavity's length actuator -- the OMC PZT -- is quickly railed to unlock the cavity after a lock-loss trigger,
   - For the OMC DCPDs at least, a non-negligible cavity ring-down time, and
   - For the OMC DCPDs and all the QPDs, the impulse response of the photo-diode's readout electronics and anti-aliasing filters.

However, the "clunk" of the shutter on the HAM6 ISI is quite obvious and precise, so I use the start of the impulse (a few samples after the transition from "no motion" to the large, fast, negative start of the clunk's ring-down) as an upper-bound estimate of when the shutter is completely closed. But with the imprecision of the [16 Hz] channels -- 62.5 ms -- it's tough to get anywhere.

In short -- what's stored in the frames is a pretty bad metric for precision timing of the fast shutter system.

But anyways -- the upper limit of the time between the change of the logical state of the trigger and the HAM6 ISI GS13s registering a large kick is 
    - 08:23:20 UTC :: 0.3125 (+/-0.0625) and 0.341064 (+/-0.000244) = 0.028564 [sec] = 28.6 [ms].
    - 13:01:26 UTC :: 0.125 and 0.1604 = 0.0354 [sec] = 35.4 [msec]
    - 23:39:28 UTC :: 0.3125 and 0.328857 = 0.016357 [sec] = 16.4 [msec]

I'll keep looking for better ways to increase the precision of this statement, but for now we should still roll with Koji's results from LHO:29062.
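The upper-bound numbers above are just differences between a 16 Hz trigger-state sample and the GS13 kick time, with the quantization of the slow channel dominating the error. A sketch of that bookkeeping, using the fractional-second times quoted for the three lock losses:

```python
# (t_trigger [s], t_trigger_err, t_kick [s], t_kick_err) within each second;
# the trigger error is half the 16 Hz sample spacing, the kick error half a
# 4096 Hz sample spacing (values as quoted in the alog).
events = {
    "08:23:20": (0.3125, 0.0625, 0.341064, 0.000244),
    "13:01:26": (0.1250, 0.0625, 0.160400, 0.000244),
    "23:39:28": (0.3125, 0.0625, 0.328857, 0.000244),
}

for utc, (t_trig, e_trig, t_kick, e_kick) in events.items():
    dt = t_kick - t_trig
    err = e_trig + e_kick  # worst case; the 16 Hz quantization dominates
    print(f"{utc} UTC :: dt = {dt * 1e3:.1f} +/- {err * 1e3:.1f} ms")
```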
Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:51, Wednesday 17 April 2024 (77246)

In G2201762 > O3a_O3b_summary.pdf, Niko and I showed that in O3b there was a subset of ~25 locklosses where the light fell off H1:ASC-AS_C_NSUM_OUT_DQ on the order of 1 ms, where this usually takes ~30 ms. There were none of these locklosses in O3a, and although the analysis is not currently online, I haven't noticed any of these 1 ms fast locklosses in O4.

These O3b "fast" locklosses often had the light fall off POPAIR (shutter now closed in NLN) and IMC_TRANS in the same 1 ms period, sometimes with the FSS signals oscillating beforehand, suggesting these locklosses may have been correlated to issues with the PSL system that were resolved after O3.

jeffrey.kissel@LIGO.ORG - 12:19, Wednesday 17 April 2024 (77247)
Ah HA! Thanks Camilla! 

Camilla's comment is only tangentially related, but is fruitful anyways!
The O3B study she points to is researching lock losses that are dubbed "Fast lock losses" by the collaboration because they happen in under 1 [msec], but as the plots on page two of her linked study show, light "disappears" from the interferometer because the IMC is unlocking, NOT because the fast shutter has triggered. Camilla thought they were related to my aLOG about the fast shutter because of conversations with Rana suggesting that "if these fast locklosses release all of their energy to the dark port prior to the fast shutter firing, then we need to know, and make the fast shutter faster!"

HOWEVER, this pointer reminds me of two more channels,
    [2048 Hz] H1:ASC_A_DC_NSUM_OUT_DQ         The "DC" or "QPD" component of WFS A (down-stream of the fast shutter, but upstream of the OMC)
    [2048 Hz] H1:ASC_B_DC_NSUM_OUT_DQ         The "DC" or "QPD" component of WFS B (down-stream of the fast shutter, but upstream of the OMC)

Apparently, whatever's going on with the OMC QPDs and the OMC DCPDs (also downstream of the fast shutter) which seem to take a lot longer for light to dissipate, *doesn't* happen with the AS WFSs.

So, these channels much more clearly show the cut-off of light. That being said, they corroborate well with the HAM6 ISI GS13s.

The uncertainty still remains, though, on when the shutter gets the firing signal -- i.e. when does the shutter's trigger PD get enough voltage above threshold to trigger the shutter.
For this we still only have the 16 Hz "SYS-MOTION" channels.
And thus my conclusions about timing remain.

Anyone know of a faster channel of the trigger PD?
Images attached to this comment
LHO VE
jordan.vanosky@LIGO.ORG - posted 09:53, Wednesday 17 April 2024 - last comment - 15:06, Tuesday 23 April 2024(77241)
LN2 Dewar Inspection 4/16/24 and Vacuum Jacket Pressures

During yesterday's (4/16) maintenance period, a Norco tech came to the site to inspect the 8 LN2 dewars that feed the cryopumps. Inspection report will be posted to Q2000008 once received.

The vacuum jacket pressures were also measured during inspection:

Dewar                 Vacuum Jacket Pressure [micron/mTorr]
CP1                   68 (gauge fluctuated, may need replacing; lowest value recorded)
CP2                   26
CP3                   68
CP4 (not in service)  110
CP5                   9
CP6                   33
CP7                   48
CP8                   77 (gauge fluctuated, may need replacing; lowest value recorded)

Comments related to this report
janos.csizmazia@LIGO.ORG - 15:06, Tuesday 23 April 2024 (77361)
Comparison between the last pressure check for all the dewar jackets:

- CP1: Jun 26th, 2023: 7 mTorr; difference: 68-7=61 mTorr; speed of pressure growth: 61 mTorr / 302 days = 0.202 mTorr/day
- CP2: Jun 26th, 2023: 5 mTorr; difference: 26-5=21 mTorr; speed of pressure growth: 21 mTorr / 302 days = 0.070 mTorr/day
- CP3: Jun 16th, 2023: 4 mTorr; difference: 68-4=64 mTorr; speed of pressure growth: 64 mTorr / 312 days = 0.205 mTorr/day
- CP4: no data, not in service
- CP5: Jun 16th, 2023: 4 mTorr; difference: 9-4=5 mTorr; speed of pressure growth: 5 mTorr / 312 days = 0.016 mTorr/day
- CP6: Jun 2nd, 2023: 5 mTorr; difference: 33-5=28 mTorr; speed of pressure growth: 28 mTorr / 326 days = 0.086 mTorr/day
- CP7: Jun 30th, 2023: 4 mTorr; difference: 48-4=44 mTorr; speed of pressure growth: 44 mTorr / 298 days = 0.148 mTorr/day
- CP8: Jul 8th, 2023: 5 mTorr; difference: 77-5=72 mTorr; speed of pressure growth: 72 mTorr / 290 days = 0.248 mTorr/day
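The growth rates above are simple differences divided by elapsed days; a sketch that reproduces them from the two pressure checks (dates collapsed to day counts, and CP8 using its 72 mTorr difference):

```python
# dewar: (previous pressure [mTorr], current pressure [mTorr], days between checks)
checks = {
    "CP1": (7, 68, 302),
    "CP2": (5, 26, 302),
    "CP3": (4, 68, 312),
    "CP5": (4, 9, 312),
    "CP6": (5, 33, 326),
    "CP7": (4, 48, 298),
    "CP8": (5, 77, 290),
}

for dewar, (p_old, p_new, days) in checks.items():
    rate = (p_new - p_old) / days  # mTorr/day
    print(f"{dewar}: {p_new - p_old} mTorr over {days} days = {rate:.3f} mTorr/day")
```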
H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 09:20, Wednesday 17 April 2024 - last comment - 09:13, Thursday 18 April 2024(77238)
SCAN_SQZANG added to nominal SQZ path

Sheila, Naoki, Vicky, Camilla

As we continue to see the nominal SQZ angle change lock-to-lock (attached BLRMS), we edited SQZ_MANAGER to scan the SQZ angle (takes 120 s) every time the SQZ locks, i.e. every time we go into NLN.

In  SQZ_MANAGER we have:

We tested this by taking SQZ_MANAGER down and back up; it went through SCAN_SQZANG as expected and improved the SQZ by 1 dB, and the range improved by ~10 Mpc.

We'll want to monitor this change, as SCAN_SQZANG may not give us the best angle at the start of the lock when the SQZ is very variable. We expect this won't delay us going into observing, as it only adds 120 s and ISC_LOCK is often still waiting for the ADS to converge during this time.
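The scan-and-pick logic of a state like SCAN_SQZANG can be sketched generically: step the angle across a range, record a sensitivity metric at each step, and settle on the best one. Everything below (the function name, the parabolic stand-in metric) is illustrative, not the Guardian code:

```python
def scan_sqz_angle(measure, angles):
    """Step through candidate squeezer angles, record the metric returned by
    `measure` (e.g. a high-frequency BLRMS, where lower is better squeezing),
    and return the best angle."""
    results = {a: measure(a) for a in angles}
    return min(results, key=results.get)

# Stand-in metric: a quadratic bowl with its minimum at 170 deg.
best = scan_sqz_angle(lambda a: (a - 170) ** 2, range(130, 211, 10))
print(best)  # 170
```

As the follow-up comment notes, picking the best angle at the start of a lock may not stay optimal once the interferometer thermalizes, which is the weakness of any single early scan.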

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:13, Thursday 18 April 2024 (77266)

I have removed this change of path by removing the weighting, as we can see that last night SCAN_SQZANG at the start of the lock took us to bad squeezing once thermalized.

We weren't seeing this changing angular dependence as much before 77133 with the older PSAMS settings (8.8 V, -0.7 V) or (7.2 V, -0.72 V); current is (7.5 V, 0.5 V). We think we should revert to the older setting.

Images attached to this comment
H1 CAL
ibrahim.abouelfettouh@LIGO.ORG - posted 09:07, Wednesday 17 April 2024 (77239)
Calibration Sweep

Ran a broadband and simulines calibration sweep at 15:30 UTC and successfully finished at 16:01 UTC.

Now going into planned Wednesday commissioning.

Start:

PDT: 2024-04-17 08:39:27.631946 PDT

UTC: 2024-04-17 15:39:27.631946 UTC

GPS: 1397403585.631946

End:

PDT: 2024-04-17 09:00:59.127599 PDT

UTC: 2024-04-17 16:00:59.127599 UTC

GPS: 1397404877.127599

2024-04-17 16:00:59,060 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240417T153928Z.hdf5

2024-04-17 16:00:59,066 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240417T153928Z.hdf5

2024-04-17 16:00:59,071 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240417T153928Z.hdf5

2024-04-17 16:00:59,075 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240417T153928Z.hdf5

2024-04-17 16:00:59,079 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240417T153928Z.hdf5

ICE default IO error handler doing an exit(), pid = 685800, errno = 32

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 08:13, Wednesday 17 April 2024 (77237)
OPS Day Shift Start

TITLE: 04/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: EARTHQUAKE
    Wind: 11mph Gusts, 7mph 5min avg
    Primary useism: 0.30 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING

H1 CDS
david.barker@LIGO.ORG - posted 10:39, Tuesday 16 April 2024 - last comment - 12:56, Wednesday 17 April 2024(77207)
h1calcs, cal_hash_ioc and DAQ restarts

WP11812

Jamie, Jonathan, Erik, Dave:

The two CAL HASH channels (H1:CAL-CALIB_REPORT_HASH_INT and H1:CAL-CALIB_REPORT_ID_INT) were moved from the h1calcs model, where they were being acquired as single-precision floats, to a new custom EPICS IOC called cal_hash_ioc, where they are acquired as signed 32bit integers.

The h1calcs.mdl model was modified to remove the EPICS-INPUT parts for these channels. The model was then restarted.

The IOC, which was being tested with H3 channels, was modified and restarted to serve the H1 channels as integers.

The EDC was modified to add H1EPICS_CALHASH.ini to its master list. This added 5 channels to the DAQ: the two hash channels and three IOC status channels.

The EDC was restarted, followed in short order by a DAQ restart.

Comments related to this report
david.barker@LIGO.ORG - 13:31, Tuesday 16 April 2024 (77212)

Tue16Apr2024         
LOC TIME HOSTNAME     MODEL/REBOOT 
10:04:52 h1oaf0       h1calcs <<< Model change to remove channels
10:07:51 h1susauxb123 h1edc[DAQ]  <<< EDC has the new channels
10:08:43 h1daqdc0     [DAQ] <<< 0-leg restart, no issues
10:08:55 h1daqfw0     [DAQ]
10:08:55 h1daqnds0    [DAQ]
10:08:55 h1daqtw0     [DAQ] 
10:09:03 h1daqgds0    [DAQ]
10:11:55 h1daqdc1     [DAQ] <<< 1-leg restart, no issues
10:12:06 h1daqfw1     [DAQ]
10:12:07 h1daqtw1     [DAQ]
10:12:08 h1daqnds1    [DAQ] 
10:12:16 h1daqgds1    [DAQ] 

oli.patane@LIGO.ORG - 16:16, Tuesday 16 April 2024 (77222)
Images attached to this comment
louis.dartez@LIGO.ORG - 12:56, Wednesday 17 April 2024 (77250)
N.B. that changes to the channels H1:CAL-CALIB_REPORT_HASH_INT and H1:CAL-CALIB_REPORT_ID_INT will appear on the CDS SDF screen, *not the H1CALCS* screen. They get automatically re-populated each time pydarm export is used to update the calibration in the control room (which should be considered to happen atomically with GDS restarts). So, when updating the calibration both the H1CALCS and the H1CDS SDF screens need to be checked.
H1 ISC
gabriele.vajente@LIGO.ORG - posted 08:25, Tuesday 16 April 2024 - last comment - 09:40, Wednesday 17 April 2024(77201)
Why is it so hard to fit a good SRCL FF?

Lately it's been very difficult to fit an efficient SRCL feedforward filter, as reported many times by Camilla et al. (76993). Here I'm trying to figure out why. Spoiler alert: I don't have an answer yet.

The main problem with the SRCL FF filter (see first plot) is that the transfer function to fit has a large phase rotation that looks basically like a phase advance (the opposite of a delay) of about 2 ms. This is very large, and being an advance, it can't be realized in a precise and simple way digitally.

First observation: we can fit a pretty good SRCL FF if we allow for unstable poles, i.e. poles with positive real parts (see second plot). Of course this is not something that can be implemented in the real-time system. The fit ends up having unstable complex poles at about 420 Hz and about 5 Hz. I have no interpretation for the origin of those poles, and they might very well be only a way to reproduce a phase advance (think of the Padé approximation for phase delays).
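The connection between a pure time advance and unstable poles can be made concrete: the (1,1) Padé approximant of e^{+s*tau} is (1 + s*tau/2)/(1 - s*tau/2), which has a real pole at s = +2/tau, i.e. in the right half plane (for tau = 2 ms that sits near 159 Hz). A quick numerical sanity check that the approximant reproduces the advance's phase at low frequency:

```python
import numpy as np

tau = 2e-3    # 2 ms phase advance, as in the SRCL FF fit
f = 10.0      # Hz, well below the unstable Pade pole at 1/(pi*tau) ~ 159 Hz
s = 2j * np.pi * f

phase_true = np.angle(np.exp(s * tau))                    # exact: +2*pi*f*tau
phase_pade = np.angle((1 + s * tau / 2) / (1 - s * tau / 2))

print(phase_true, phase_pade)
assert abs(phase_true - phase_pade) < 1e-2  # they agree at low frequency
```

This doesn't say the fit's ~420 Hz unstable pole is literally a Padé term, only that unstable poles are a natural way for a rational fit to mimic a phase advance.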

So the question is: what is the origin of this large phase rotation? It's not seen at LLO, for example see 70548

Second observation: this phase advance appeared after we switched the LSC FF from ETMX (full chain) to ETMY PUM. The third plot compares the MICH and SRCL feedforward to be fit in two cases: an old measurement when the FF was going to ETMX, and a more recent measurement with the FF going to ETMY PUM only. For both MICH and SRCL, the orange traces (FF to ETMY) show a phase advance with respect to the blue traces (FF to ETMX). For some reason I don't fully understand, this rotation is more problematic for SRCL than for MICH, although fitting MICH has also been more difficult; then again, the MICH FF is relevant at lower frequencies than SRCL, so maybe the phase advance isn't that problematic there.

Looking at the measurements of MICHFF to DARM and SRCLFF to DARM, one can see that there seems to be an additional phase delay in the FF path through the ETMY PUM with respect to the FF path through the ETMX full chain. Since this transfer function is in the denominator when computing the ratio SRCLtoDARM/SRCLFFtoDARM that gives us the LSC FF to fit, this seems to explain the additional phase advance we observe.

The ETMY PUM L2 lock filter bank contains a "QPrime" filter module that partially compensates the additional 1/f^2 due to actuating from L2 instead of L3. This filter, however, doesn't seem able to explain this additional phase delay.

I'm now suspicious that there might be something wrong or mistuned in the ETMY L2 drive, maybe a whitening filter missing or not functioning properly or not properly compensated?

It would be worth doing a quick test in the next commissioning time with the full IFO locked: inject some white noise on ETMX L2 L and ETMY L2 L and compare the two transfer functions to DARM. In theory they should be equal, except for a sign difference. If they're not, then there must be something wrong with ETMY, since we're using ETMX L2 to lock without issues.
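The proposed check amounts to estimating H = P_xy / P_xx for each excitation-to-DARM path and comparing the two estimates. A toy version with scipy, where the two "paths" differ only by a sign, which is the expected result for ETMX vs ETMY if nothing is wrong (the flat gain of 2 and all signal details are stand-ins, not real actuation models):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1024
x = rng.standard_normal(16 * fs)   # white-noise excitation

# Two toy actuation paths that differ only by a sign:
y_x = 2.0 * x     # stand-in for the ETMX L2 L -> DARM path
y_y = -2.0 * x    # stand-in for the ETMY L2 L -> DARM path

f, Pxx = signal.welch(x, fs=fs, nperseg=1024)
_, Pxy_x = signal.csd(x, y_x, fs=fs, nperseg=1024)
_, Pxy_y = signal.csd(x, y_y, fs=fs, nperseg=1024)

H_x = Pxy_x / Pxx   # estimated TF of the first path
H_y = Pxy_y / Pxx   # estimated TF of the second path

# The ratio should be -1 at all frequencies if the paths match up to sign;
# any extra filtering in one path (e.g. an L2L3LP-like filter) would show up
# as frequency-dependent magnitude or phase in this ratio.
print(np.allclose(H_y / H_x, -1))  # True
```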

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 14:35, Tuesday 16 April 2024 (77215)

This is a comparison of the LHO LSC FF and the LLO LSC FF. The difference in the absolute scale might be due to normalizations, but the SRCL FF does not show the large phase advance visible at LHO.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 07:28, Wednesday 17 April 2024 (77236)

Maybe mystery solved...

The ETMY L2 DRIVEALIGN L filter bank has an "L2L3LP" filter engaged, while the corresponding ETMX L2 DRIVEALIGN L filter bank does not. This filter seems to be the origin of the phase rotation; at least, it explains part of it.

Anybody know why this filter is engaged for ETMY? Should we turn it off and retune the LSC FF?

We should turn this filter off when we engage the FF (assuming it's needed at some point during lock acquisition, to be checked) and retune the LSC FF. When we do that, we should reduce the excitation amplitude and reshape it, taking into account the filter we turned off.

Images attached to this comment
camilla.compton@LIGO.ORG - 09:40, Wednesday 17 April 2024 (77240)

Gabriele, Camilla. This ETMY_L2_DRIVEALIGN_L2L filter is used while locking in TRANSITION_FROM_ETMX (when we control DARM on EY) and then again when the LSC FF is turned on in LOW_NOISE_LENGTH_CONTROL, plot attached. To avoid changing the sensitive TRANSITION_FROM_ETMX state, we should have ISC_LOCK turn the filter off before turning the LSC FFs on. We will need to retune the LSC FFs (from scratch, starting with both the MICH and SRCL FF off, and adjusting the excitation to take this L2L3LP filter into account).

Images attached to this comment
H1 CDS (ISC, PSL)
filiberto.clara@LIGO.ORG - posted 14:08, Tuesday 09 April 2024 - last comment - 12:35, Wednesday 17 April 2024(77062)
SPI Pick-off Fiber Length

WP 11805
ECR E2400083

Lengths for possible SPI Pick-off fiber. Part of ECR E2400083.

PSL Enclosure to PSL-R2 - 50ft
PSL-R2 to SUS-R2 - 100ft
SUS-R2 to Top of HAM3 (flange D7/D8) - 25ft
SUS-R2 to HAM3 (flange D5) - 20ft

Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:28, Tuesday 09 April 2024 (77072)
J. Kissel [for J. Oberling]

Jason also took the opportunity during his dust monitoring PSL incursion today to measure the distance between where the new fiber collimator would go on the PSL table to the place where it would exit at the point Fil calls the PSL enclosure.

He says 
SPI Fiber Collimator to PSL Enclosure = 9ft.
jeffrey.kissel@LIGO.ORG - 13:53, Thursday 11 April 2024 (77118)
J. Kissel [for F. Clara, J. Oberling]

After talking with Fil I got some clarifications on how he defines/measures his numbers:
   - They *do* include any vertical traversing that the cable might need to go through,
   - Especially for rack-to-rack distances, always assumes that the cable will go to the bottom of the rack (typically 10 ft height from cable tray to rack bottom), 
   - He adds two feet (on either end) such that we can neatly strain relieve and dress the cable.

So -- the message -- Fil has already built in some contingency into the numbers above. 
(More to the point: we should NOT consider them "uncertain" and in doing so add an additional "couple of feet here," "couple of feet there," "just in case.")

Thanks Fil!

P.S. We also note that, at H1, the optical fibers exit the PSL at ground level on the +X wall of the enclosure, between the enclosure and HAM1, underneath the light pipes. They then immediately shoot up to the cable trays, wrap around the enclosure, and land in the ISC racks at PSL-R2. Hence the oddly long 50 ft number for that journey.

Jason also reports that he rounded up to the nearest foot for his measurement of the 9ft run from where the future fiber collimator will go to the PSL enclosure "feed through."
jeffrey.kissel@LIGO.ORG - 12:35, Wednesday 17 April 2024 (77249)SEI, SYS
Upon discussion with the SPI team, we want to minimize the number of "patch panel" "fiber feedthrough" connections in order to minimize loss and polarization distortion.

As such, we prefer to go directly from the "SPI pick-off in the PSL" fiber collimator directly to the Laser Prep Chassis in SUS-R2.
That being said, we'll purchase all of the above fiber lengths, such that we can re-create the full "fiber feedthrough patch panel" system as a contingency plan.

So, for the baseline plan, we'll take the "original, now contingency plan" PSL-R2 to SUS-R2, 100 ft fiber run and use that to directly connect the "SPI pick-off in the PSL" fiber collimator directly to the Laser Prep Chassis in SUS-R2.

I spoke with Fil and confirmed that 100 ft is plenty enough to make that run (from SPI pick-off in PSL to SUS-R2).