LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:19, Tuesday 15 October 2024 (80693)
OPS Eve Shift Start

TITLE: 10/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 9mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.35 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 23:14 UTC

Got into observing just now following a rough lock acquisition involving an EQ lockloss and an IM/PR misalignment (in turn related to SDF reversions and Dolphin) that took some time to correct.

X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 15:41, Tuesday 15 October 2024 - last comment - 10:33, Thursday 17 October 2024(80689)
BBSS Transfer Function and Drift Update

Ibrahim, Oli

Below are updates and promising findings concerning the most recent BBSS LHO Top Mass -4mm BP Adjustments to tackle the F1 Drift issue summarized in alog 80577.

On Friday 10/11, Oli P and I moved the blade positions (mm BP Units) to:

Front Left:  Side: 0.02,  Top: -4.03
Back Left:   Side: -0.03, Top: -3.90
Front Right: Side: 0.02,  Top: -3.91
Back Right:  Side: -0.05, Top: -3.95
Average:     Side: -0.01, Height: -3.95

Findings:

The good news is that with these 4.5 days of data, we see no sign of F1 drift.

We will post another alog with TF to model comparisons as we find which model d1 parameter fits these -4mm BP positions best.

Images attached to this report
Non-image files attached to this report
Comments related to this report
timothy.ohanlon@LIGO.ORG - 07:16, Wednesday 16 October 2024 (80703)

That's good news! What is the configuration for the addable masses on the top mass for both the bottom and top plate? 

ibrahim.abouelfettouh@LIGO.ORG - 12:15, Wednesday 16 October 2024 (80712)
The addable masses on the top plate were redistributed to the front (top): 150 g and 150 g, as shown in the attached image.

No change to the bottom plate. We did not change the total mass.
Images attached to this comment
ibrahim.abouelfettouh@LIGO.ORG - 10:33, Thursday 17 October 2024 (80726)

Adding this pitch plot (which still shows no drift!) to give a measure of how much the pitch moves diurnally, day-to-day. The attached plot shows a maximum trough-to-peak swing of 31.8 cts.

This is so that future drift measurements (e.g. at LLO) can try new configurations quickly and measure the prevailing drift without getting caught up in the daily pitch breathing.
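As a rough illustration of that kind of quick check, here is a minimal sketch (not the script we actually used) of pulling a top-mass pitch witness and computing the daily trough-to-peak swing. The channel name and the NDS access for the X1 test stand are placeholders/assumptions.

import numpy as np
from gwpy.timeseries import TimeSeries

CHANNEL = "X1:SUS-BS_M1_DAMP_P_INMON"   # hypothetical channel name, for illustration only
START, END = "2024-10-11 00:00", "2024-10-15 12:00"

data = TimeSeries.get(CHANNEL, START, END)   # raw counts from NDS
daily = data.resample(1.0 / 60.0)            # 1-minute samples are plenty for a drift check

# Trough-to-peak swing for each ~day of data: a new blade configuration can be
# judged against this ~30 ct "breathing" band instead of waiting weeks for a drift.
samples_per_day = int(86400 * daily.sample_rate.value)
n_days = max(1, len(daily) // samples_per_day)
for i, day in enumerate(np.array_split(daily.value, n_days)):
    print(f"day {i}: trough-to-peak = {day.max() - day.min():.1f} cts")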
Images attached to this comment
LHO General
tyler.guidry@LIGO.ORG - posted 14:58, Tuesday 15 October 2024 (80691)
Well Pump Enabled
In prep for heightened demand during the storage building construction, I ran the well pump today for 4 hours beginning at roughly 8:15am PST. 
H1 PSL
ryan.short@LIGO.ORG - posted 14:45, Tuesday 15 October 2024 - last comment - 10:25, Friday 18 October 2024(80687)
PSL NPRO Controller Swap

R. Short, J. Oberling

After the NPRO control box was swapped last week and glitches persisted (see alog80566), we decided it would be best to switch back to the original so that signal readbacks for the NPRO would be accurate. I started by bringing down the PSL in a controlled manner (ISS, FSS, PMC, AMPs, NPRO), then went out to the LVEA PSL racks and swapped control box S/N S2200008 for S2200009. Since this is the control box that had previously been in service with this NPRO laser head, no adjustments were needed to set the temperature and current to be correct. I returned to the control room and brought the whole system back up without issue.

Once it was time to lock the FSS, similar to what was needed with the last control box swap, I manually moved the NPRO temperature down about 0.4K (from 0.19 to -0.23) and locked the RefCav so that the SQZ laser would be happy. I then updated the FSS search parameters and accepted them in SDF (see screenshot).

As I noted in the 10-day trends yesterday (see alog80665), PMC REFL has risen over the past week, so I attempted a remote alignment tweak using the picomotors before the PMC. In the end, I wasn't able to make much of any improvement to the PMC alignment. I also attempted a remote alignment tweak for the RefCav, and here I was able to get a slight improvement from 0.80V to 0.81V on the TPD.

Looking for more things to try to impact the laser glitching and hopefully bring down the amount of PMC reflected power, Jason and I returned to the PSL racks to try adjusting the NPRO pump current. We ultimately raised the current from 2.12A to 2.19A, increasing the NPRO output power from 1.82W to 1.91W according to the PD in front of the laser, but not having much of an impact on the PMC. We also tried slightly altering pump diode currents in the amplifiers, but we didn't see any improvement, so these remain as they were.

I concluded our PSL activities today with a rotation stage calibration following the steps in alog79596. The measurement file, new calibration fit, and screenshot of accepting SDFs are attached.

              W (Max power in)   D       B (Min power angle)   C (Min power in)
Old Values    94.642             1.990   -24.794                0.000
New Values    91.236             1.990   -24.797                0.000
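For context, a hedged sketch of what a fit to this kind of calibration data could look like. The sin^2 functional form below is my assumption (consistent with D coming out close to 2 for a half-wave-plate power-control stage), not necessarily the form the site calibration script actually uses, and the data here are synthetic stand-ins for the real measurement file.

import numpy as np
from scipy.optimize import curve_fit

def power_model(theta_deg, W, D, B, C):
    # Assumed form: minimum power C at angle B, maximum power W, angular factor D (~2).
    return (W - C) * np.sin(np.radians(D * (theta_deg - B)))**2 + C

# Synthetic stand-in for the real measurement file (angle sweep + power readings).
angles = np.linspace(-30.0, 70.0, 60)
measured = power_model(angles, 91.236, 1.990, -24.797, 0.0) + np.random.normal(0, 0.1, angles.size)

popt, _ = curve_fit(power_model, angles, measured, p0=[95.0, 2.0, -25.0, 0.0])
print("W={:.3f}  D={:.3f}  B={:.3f}  C={:.3f}".format(*popt))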
Images attached to this report
Non-image files attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 10:25, Friday 18 October 2024 (80745)

Original NPRO injection current was 2.133 A, and the new injection current is 2.193 A.

H1 CDS
david.barker@LIGO.ORG - posted 14:06, Tuesday 15 October 2024 - last comment - 15:34, Tuesday 15 October 2024(80690)
CDS Maintenance Summary: Tuesday 15th October 2024

WP12131 h1tcscs model, add FMs and Voltage slow channels

Daniel, Vicky, Dave:

A new h1tcscs model was installed when h1oaf0 was rebooted this morning. A DAQ restart was required for the addition of 51 slow channels.

WP12132 New Dolphin adapter card Firmware.

Keith, Jonathan, Erik, EJ, Tony, Dave:

Downgraded firmware was loaded onto the Dolphin IX-611 adapter cards in h1sush7, h1cdsh8 and h1oaf0. This is to prevent spontaneous PCI level switching, which is possibly the cause of some O4 corner station Dolphin glitches.

Procedure was:

WP12134 DAQ Restart Script, Restart rts-nds.service

Jonathan, Erik, Dave:

To prevent /run/nds/jobs from filling, the script that restarts the DAQ was modified to also restart the rts-nds.service on the NDS machines, in addition to their rts-daqd.service.

This allows systemd to empty the /run/nds/jobs directory, which was over 50% full on h1daqnds1. We tested this as part of this morning's DAQ restart, and it worked well.

WP12087 Add VACSTAT gauge PVs to DAQ

Dave:

My first attempt to add the VACSTAT PVs relating to vacuum gauge monitoring failed because the channel names exceeded the DAQ length limit of 54 characters.

I modified the VACSTAT code to drop the _PRESS_TORR suffix from the PT channel names, which allowed these channels to be added to the DAQ.
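A small illustrative sketch of that renaming step (the 54-character limit and the dropped _PRESS_TORR suffix are from above; the example channel name is made up):

DAQ_NAME_LIMIT = 54

def daq_safe_name(channel: str) -> str:
    """Return a DAQ-compatible channel name, trimming the pressure suffix if needed."""
    if len(channel) > DAQ_NAME_LIMIT and channel.endswith("_PRESS_TORR"):
        channel = channel[: -len("_PRESS_TORR")]
    if len(channel) > DAQ_NAME_LIMIT:
        raise ValueError(f"{channel} still exceeds {DAQ_NAME_LIMIT} characters")
    return channel

# Made-up example name, just to show the trimming:
print(daq_safe_name("H1:CDS-VACSTAT_LY_Y1_PT124B_SINGLE_GLITCH_MODE_PRESS_TORR"))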

A new H1EPICS_VACSTAT.ini file was created, which added 368 channels to the EDC. A DAQ and EDC restart was needed.

The new VACSTAT IOC was restarted, and the new MEDMs installed into production.

WP12143 Replace failed SDD in h1guardian1

Dave, Jonathan, Erik:

cds_report reported a failed disk in h1guardian1's md3 RAID at noon today. Jonathan and Erik found a failed 230GB SSD which is part of the root file system on h1guardian1. It was replaced with a 1TB SSD (the smallest we have); resyncing the new disk only took about an hour.

DAQ Restart

Jonathan, Erik, Dave:

We restarted the DAQ for the addition of h1tcscs slow channels and the new VACSTAT EDC channels. EDC was restarted, its channel count increased from 57061 to 57429.

This was the first test of the new restart script, which after doing the round of rts-daqd restarts then restarts rts-nds on the nds machines.

The only problem was a spontaneous restart of framewriter 1 after it had written about 3 full frames. It came back by itself and has been stable since.

Comments related to this report
david.barker@LIGO.ORG - 15:34, Tuesday 15 October 2024 (80692)

Tue15Oct2024
LOC TIME HOSTNAME     MODEL/REBOOT
08:20:39 h1sush7      ***REBOOT***  <<< Dolphin firmware install
08:22:27 h1sush7      h1iopsush7  
08:22:40 h1sush7      h1susfc1    
08:22:53 h1sush7      h1sussqzin  
08:23:06 h1sush7      h1susauxh7  
08:32:25 h1cdsh8      ***REBOOT*** <<< Dolphin firmware install
08:34:17 h1cdsh8      h1iopcdsh8  
08:34:30 h1cdsh8      h1isiham8   
08:34:43 h1cdsh8      h1susfc2    
08:34:56 h1cdsh8      h1sqzfces   
08:35:09 h1cdsh8      h1susauxh8  
08:35:22 h1cdsh8      h1pemh8     
08:38:04 h1oaf0       ***REBOOT*** <<< Dolphin firmware install
08:39:45 h1oaf0       h1iopoaf0   
08:39:58 h1oaf0       h1pemcs     
08:40:11 h1oaf0       h1tcscs     <<< New h1tcscs model install
08:40:24 h1oaf0       h1susprocpi 
08:40:37 h1oaf0       h1seiproc   
08:40:50 h1oaf0       h1oaf       
08:41:03 h1oaf0       h1calcs     
08:41:16 h1oaf0       h1susproc   
08:41:29 h1oaf0       h1calinj    
08:41:42 h1oaf0       h1bos       


09:04:44 h1daqdc0     [DAQ]   <<< DAQ 0-leg restart
09:04:57 h1daqfw0     [DAQ]
09:04:57 h1daqtw0     [DAQ]
09:04:58 h1daqnds0    [DAQ]
09:05:06 h1daqgds0    [DAQ]


09:05:26 h1susauxb123 h1edc[DAQ] <<< EDC restart for VACSTAT channels


09:11:01 h1daqdc1     [DAQ] <<< DAQ 1-leg restart
09:11:13 h1daqfw1     [DAQ]
09:11:14 h1daqnds1    [DAQ]
09:11:14 h1daqtw1     [DAQ]
09:11:22 h1daqgds1    [DAQ]
09:15:52 h1daqfw1     [DAQ] <<< Spontaneous restart of FW1 DAQD
 

H1 SEI (CSWG, ISC, SEI, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 13:04, Tuesday 15 October 2024 - last comment - 16:07, Monday 28 October 2024(80683)
Measurements of HAM2ISI to/from SUS PR3 Sus Point and M1 Stages Successful, But Incomplete
J. Kissel
WP 12140

I've completed 6 SUS + 4 ISI = 10 of 12 total DOF excitations that I wanted to drive before I ran out of time this morning. Each drive was "successful" in that I was able to get plenty of coherence between the 4 DOFs of ISI drive and SUS response, and some coherence between 6 SUS drive DOFs and ISI response. As expected, the bulk of the time was spent tuning the ISI excitations. I might have time to "finish" the data set and get the last two missing DOFs, but I was at least able to get both directions of LPY to LPY transfer functions, which are definitely juicy enough to get the analysis team started.

Measurement environmental/configuration differences of the HAM2 ISI from how they are nominally in observing:
    - PR3 M1 DAMP local damping loop gains are at -0.2, where they are nominally at -1.0. (The point of the test.)
    - CPS DIFF is OFF. (needed to do so for maintenance day)
    - Coil Driver z:p = 1:10 Hz analog low-pass (and digital compensation for it) is OFF. (need to do so to get good SNR on SUS M1 drive without saturating the SUS DACs)

Interesting things to call out that are the same as observing:
    - The PR3 alignment sliders were ON. P = -122 [urad]; Y = 100 [urad]. (Don't *expect* dynamics to change with ON vs. OFF, but we have seen the diagonal response change if an optic is close to an EQ stop. Haven't ever looked, but I wouldn't be surprised if off-diagonal responses change. Also, DAC range gets consumed by the DC alignment request, which is important when driving transfer functions.)
    - Corner station sensor correction, informed by the Bier Garten "ITMY" T240 on the ground. (the h1oaf0 computer got booted this morning, so we had to re-request the SEI_CS configuration guardian to be in WINDY. The SEI_ENV guardian had been set to LIGHT_MAINTENANCE.)
    - PR3 is NOT under any type of ISC global control; neither L, P, or Y. (global ISC feedback for the PRC's LPY DOFs goes to PRM and PR2.)

There are too many interesting transfer functions to attach, or even to export in the limited amount of time I have. 
So -- I leave it to the LSC team that inspired this test to look at the data, and use as needed.

The data have been committed to the SVN here:
    /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/SAGM1/Data/
        2024-10-15_1627_H1SUSPR3_M1_WhiteNoise_L_0p02to50Hz.xml
        2024-10-15_1627_H1SUSPR3_M1_WhiteNoise_T_0p02to50Hz.xml
        2024-10-15_1627_H1SUSPR3_M1_WhiteNoise_V_0p02to50Hz.xml
        2024-10-15_1627_H1SUSPR3_M1_WhiteNoise_R_0p02to50Hz.xml
        2024-10-15_1627_H1SUSPR3_M1_WhiteNoise_P_0p02to50Hz.xml
        2024-10-15_1627_H1SUSPR3_M1_WhiteNoise_Y_0p02to50Hz.xml

    /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/Common/Data
        2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_L_0p02to50Hz.xml
        2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_T_0p02to50Hz.xml
            [ran out of time for V]
            [ran out of time for R]
        2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_P_0p02to50Hz.xml
        2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_Y_0p02to50Hz.xml


For the SUS drive templates, I gathered:
     Typical:
     - The top mass, M1, OSEM sensors, in the LTVRPY Euler Basis, calibrated into microns or microradians, [um] or [urad].
         H1:SUS-PR3_M1_DAMP_?_IN1_DQ             [Filtered with the 64x filter, then downsampled to fs = 256 Hz]
     - The top mass, M1, OSEM sensors, in the T1T2T3LFRTSD OSEM Sensor/Coil Basis, calibrated into microns, [um].
         H1:SUS-PR3_M1_OSEMINF_??_OUT_DQ         [Filtered with the 64x filter, then downsampled to fs = 256 Hz]
     - The top mass, M1, OSEM coils' requested drive, in the T1T2T3LFRTSD OSEM Sensor/Coil Basis, in raw (18-bit) DAC counts, [ct_M1SUS18bitDAC].
         H1:SUS-PR3_M1_MASTER_OUT_??_DQ          [Filtered with the 32x filter, then downsampled to fs = 512 Hz]

     For this set of templates:
     - The bottom mass, i.e. optic, M3, OSEM sensors, in the LPY Euler Basis, calibrated into microns or microradians, [um] or [urad].
         H1:SUS-PR3_M3_WIT_?_DQ                  [Filtered with the 64x filter, then downsampled to fs = 256 Hz]
     - The bottom mass, i.e. optic, M3, optical lever, in the PIT YAW Euler Basis, calibrated into microradians, [urad].
         H1:SUS-PR3_M3_OPLEV_???_OUT_DQ          [Filtered with the 64x filter, then downsampled to fs = 256 Hz]
     - The ISI's Stage 1 GS13 inertial sensors, projected to the PR3 suspension point LTVRPY Euler basis, calibrated into nanometers or nanoradians, [nm] or [nrad]
         H1:ISI-HAM2_SUSPOINT_PR3_EUL_?_DQ       [Filtered with the 4x filter, then downsampled to fs = 1024 Hz]
     - The ISI's Stage 1 super sensors, in the ISI's Cartesian XYZRXRYRZ basis, calibrated into nanometers or nanoradians, [nm] or [nrad]
         H1:ISI-HAM2_ISO_*_IN1_DQ                [Filtered with the 2x filter, then downsampled to fs = 2048 Hz]

Note: The six M1 OSEM sensors in the Euler Basis are set to be the "A" channels, such that you can reconstruct the transfer function between the M1 Euler Basis to all the other response channels in the physical units stated above. As usual the excitation channel for the given drive DOF (in each template, that's H1:SUS-MC3_M1_TEST_?_EXC) is automatically stored, but these channels are in goofy "Euler Basis (18-bit) DAC counts," so tough to turn into physical units.

For the brand new ISI drive templates, I gathered:
     - The ISI's Stage 1 super sensors, in the ISI's Cartesian XYZRXRYRZ basis, calibrated into nanometers or nanoradians, [nm] or [nrad]
         H1:ISI-HAM2_ISO_*_IN1_DQ                [Filtered with the 2x filter, then downsampled to fs = 2048 Hz]
     - The ISI's Stage 1 GS13 inertial sensors, projected to the PR3 suspension point LTVRPY Euler basis, calibrated into nanometers or nanoradians, [nm] or [nrad]
         H1:ISI-HAM2_SUSPOINT_PR3_EUL_?_DQ       [Filtered with the 4x filter, then downsampled to fs = 1024 Hz]

     - The top mass, M1, OSEM sensors, in the LTVRPY Euler Basis, calibrated into microns or microradians, [um] or [urad].
         H1:SUS-PR3_M1_DAMP_?_IN1_DQ             [Filtered with the 64x filter, then downsampled to fs = 256 Hz]
     - The bottom mass, i.e. optic, M3, OSEM sensors, in the LPY Euler Basis, calibrated into microns or microradians, [um] or [urad].
         H1:SUS-PR3_M3_WIT_?_DQ                  [Filtered with the 64x filter, then downsampled to fs = 256 Hz]
     - The bottom mass, i.e. optic, M3, optical lever, in the PIT YAW Euler Basis, calibrated into microradians, [urad].
         H1:SUS-PR3_M3_OPLEV_???_OUT_DQ          [Filtered with the 64x filter, then downsampled to fs = 256 Hz]
     - The ISI's Stage 1 actuators' requested drive, in the H1H2H3V1V2V3 ISI actuator basis, in raw (16-bit) DAC counts, [ct_ISIST116bitDAC].
         H1:ISI-HAM2_OUTF_??_OUT                 [Didn't realize in time that there are DQ channels H1:ISI-HAM2_MASTER_??_DRIVE_DQ stored at fs = 2048 Hz, or I would have used those.]

Note: Here, I set the number of "A" channels to twelve, such that both the ISI's Cartesian basis and the PR3 Suspoint basis versions of the GS13s can be used as the transfer function reference channel. 
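For anyone who wants to look at the data outside of DTT, here is a minimal offline sketch (not the templates above) of estimating one of these transfer functions from the DQ channels with gwpy and scipy. The time span is an approximate placeholder, and the nm/um scalings just mirror the calibrations listed above.

import numpy as np
from gwpy.timeseries import TimeSeriesDict
from scipy.signal import csd, welch

# Placeholder span during the excitation (the templates record the exact times).
START, END = "2024-10-15 16:30", "2024-10-15 16:45"
drive = "H1:ISI-HAM2_SUSPOINT_PR3_EUL_L_DQ"   # reference ("A") channel, [nm]
resp  = "H1:SUS-PR3_M1_DAMP_L_IN1_DQ"         # response channel, [um]

data = TimeSeriesDict.get([drive, resp], START, END)
fs = 256.0
a = data[drive].resample(fs).value * 1e-9     # nm -> m
b = data[resp].resample(fs).value * 1e-6      # um -> m

f, Sab = csd(a, b, fs=fs, nperseg=int(50 * fs))    # cross spectral density
_, Saa = welch(a, fs=fs, nperseg=int(50 * fs))     # drive power spectral density
tf = Sab / Saa                                     # H1 estimator, [m/m]
print(abs(tf[np.argmin(abs(f - 0.6))]))            # e.g. magnitude near 0.6 Hz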
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:50, Tuesday 15 October 2024 (80688)
OK ok ok. I couldn't resist and it didn't take that long. 

I attach the unit-full transfer functions between the ISI Sus. Point Drive DOFs (L, P, Y, and T) and the Top Mass SUS M1 OSEMs response in L, P, Y.

It's.... a complicated collection of TFs; and this isn't all of them that are relevant!

Just to make the point that Dan DeBra taught Brian Lantz, who taught me, and we're passing down to Edgard Bonilla: *every* DOF matters; the one you ignore is the one that will bite you.
The transverse, T, DOF drive data set demonstrates this point. None of these transverse to LPY couplings nominally exist if we just consider first principles equations of rigid-body motion of an ideal suspension. But alas, the on-resonance coupling from T to L, P, Y ranges from 0.1 ... to 50 [m/m] or [rad/m]. 

I may need to drive the ISI with an entirely different color of excitation to resolve these transfer functions above 5 Hz, where it's perhaps most interesting for DARM, but this is a good start.

The ISI drive templates have been re-committed to the repo with the calibrations of each channel in place. (It was really easy: just multiplying each channel by the appropriate 1e-9 [m/nm] or 1e-6 [m/um] in translation, and similarly 1e-9 [rad/nrad] or 1e-6 [rad/urad] in rotation.)
Images attached to this comment
brian.lantz@LIGO.ORG - 09:50, Wednesday 16 October 2024 (80709)

Thanks Jeff!

You were right - this looks much more interesting than I had hoped. We'll run the scripts for the SUS to SUS TFs and put them up here, too.

Transverse to Pitch at 50 rad/m on resonance. Maybe "only" 10 when you turn up the damping to nominal? Ug.

brian.lantz@LIGO.ORG - 16:07, Monday 28 October 2024 (80907)

I've also taken a look at how much the ISI moves when Jeff drives the BOSEMs on the top stage of PR3. The answer is "not very much". I've attached two plots, one for the top mass yaw drive and the other for the top mass length drive. Note: the ISI responses need to be divided by 1000 - they are showing nm or nrad per drive, while the SUS is showing microns or microradians per drive.

So - the back-reaction of the OSEM drives can be safely ignored for PR3, and probably all the triples, as expected. (Maybe not for the TMs, not that it matters right now.)

It raises two questions:

1. How do I divide a line by 1000 in a DTT plot? (I feel so old.) See the rescaling sketch after the file list below.
2. Why does the green line (SUSPoint) look so much noisier than the cart-basis blend signals? I would expect these to look nearly identical above about 1/2 Hz, because the blend signal is mostly GS-13. The calibrations look right, so why does the TF to the GS-13 signal look so much worse than the TF to the blend output?

These plots are at {SUS_SVN}/HLTS/H1/PR3/SAGM1/Results/

2024-10-15_length_to_length_plot.pdf
2024-10-15_yaw_to_yaw_plot.pdf
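Re question 1 above: I don't know of a way to rescale a single trace inside the DTT plot itself, but exporting the trace to text and replotting is quick. A sketch, assuming a simple two-column (frequency, magnitude) export with a hypothetical file name:

import numpy as np
import matplotlib.pyplot as plt

f, mag = np.loadtxt("isi_response_export.txt", unpack=True)[:2]   # hypothetical export file
mag_rescaled = mag / 1000.0     # nm (or nrad) per drive -> um (or urad) per drive

plt.loglog(f, mag_rescaled, label="ISI response / 1000")
plt.xlabel("Frequency [Hz]")
plt.ylabel("Response [um/drive or urad/drive]")
plt.legend()
plt.show()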

Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 10:17, Wednesday 23 October 2024 (80835)
I grabbed the remaining ISI drive degrees of freedom this morning, V and R. The color and strength of the excitation were the same as on Oct 15th; I used the L drive excitation params for V, and the P drive excitation params for R.

PR3 damping loops gains were at -0.2 again,
Sensor correction is ON,
CPS DIFF is OFF.
PR3 alignment offsets are ON.

For these two data sets, the PR3 top mass coil driver low pass was still ON (unlike the Oct 15th data), but with the damping loop gains at -0.2, there's no danger of saturation at all, and the low pass filter's response is well compensated, so it has no impact on any of the ISI excitation transfer functions to SUS-PR3_M1_DAMP_?_IN1_DQ response channels. It's only really important to have the LP filter OFF when driving the SUS.

There were remnants of an earthquake happening, but the excitations were loud enough that we still got coherence above at least 0.05 Hz.

Just for consistency's sake of having a complete data set, I saved the files with virtually the same file name:
     /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/Common/Data/
         2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_L_0p02to50Hz.xml
         2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_T_0p02to50Hz.xml
         2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_V_0p02to50Hz.xml   # New as of Oct 23
         2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_R_0p02to50Hz.xml   # New as of Oct 23
         2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_P_0p02to50Hz.xml
         2024-10-15_1627_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_Y_0p02to50Hz.xml
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 12:56, Wednesday 23 October 2024 (80847)
Today I also gathered another round of all six DOFs of ISI excitation, but this time changing the color of the excitation to get more coherence between 1 to 20 Hz -- since this is where the OSEM noise matters the most for the IFO. In the end, the future fitter may have to end up combining the two data sets to get the best estimate of the plant.

In the same folder, you'll find 
    /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/Common/Data
        2024-10-23_1739_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_L_0p02to50Hz.xml
        2024-10-23_1739_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_P_0p02to50Hz.xml
        2024-10-23_1739_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_R_0p02to50Hz.xml
        2024-10-23_1739_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_T_0p02to50Hz.xml
        2024-10-23_1739_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_V_0p02to50Hz.xml
        2024-10-23_1739_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_Y_0p02to50Hz.xml

Happy fitting!
brian.lantz@LIGO.ORG - 16:00, Thursday 24 October 2024 (80863)

Here is the set of plots generated by {SUSsvn}/Common/MatlabTools/plotHLTS_dtttfs_M1 for the data Jeff collected on Oct 15.
(See above; the data set is in 6 text files with names like 2024-10-15_1627_H1SUSPR3_M1_WhiteNoise_L_0p02to50Hz_tf.txt, for L, P, Y, etc.)

These are funny looking because the damping loops are only running at 1/5 of the normal gain. This gives higher-Q peaks and less OSEM noise coupling. This is done as part of an exercise to run the detector with a combination of real OSEM signals (ie the ones here) PLUS model-based OSEM estimators. I've set the script to show all the cross terms, and these are clearly present. It remains to be seen how much the various cross terms will matter. This is the data we will use to help answer that question.

I've also attached a slimmed-down version of the cross-coupling plots which just shows the coupling to yaw. These are the same plots as above with some of the lines removed so that I can see what is happening to yaw more easily. In each plot the red is the measured cross-coupling from dof-drive to Yaw-response. For reference, these also include the light-blue yaw-to-yaw and the grey dof-to-dof measurements.

These plots and the .mat file are in the SUS SVN at {SUS_SVN}/HLTS/H1/PR3/SAGM1/Results/

2024-10-15_1627_H1SUSPR3_M1.mat
2024-10-15_1627_H1SUSPR3_TFs_lightdamping_yawonly.pdf
2024-10-15_1627_H1SUSPR4_M1_ALL_TFs_lightdamping.pdf

Non-image files attached to this comment
brian.lantz@LIGO.ORG - 20:40, Thursday 24 October 2024 (80870)

On a side note, the ISI-to-ISI TFs are not unity between 0.1 and 1 Hz. I think they should be. This is a drive from the blended input of the control loop (well, several, because it's in the EUL basis) to the signal seen on the GS-13, in the same EUL basis, converted to displacement (so it will roll off below 30 mHz, because the realtime calibration of the GS-13s into displacement rolls off, and it has a bump at 30 Hz because this is really the complementary sensitivity, which has a bump because of the servo bump).

But it should be really close to 1 from 0.1 to 3 Hz. The rotational DOFs (right side, red line) look pretty good, but the translations (L, V, T) all show a similar non-unity response. Jim and Brian should discuss. They look similar to each other, so maybe it's a blend which isn't quite complementary?

Non-image files attached to this comment
brian.lantz@LIGO.ORG - 16:56, Saturday 26 October 2024 (80890)

I've plotted the TFs from the SUSpoint drive to the M1 EUL basis response. Note that in the plots, I've adjusted the on-diagonal model plots to be -1 + model. The model is the INERTIAL motion of the top stage, while the measured TFs all show the RELATIVE motion between the ISI and the top stage. So you want to model Top/ISI - ISI/ISI, or -1 + model. This is only true for the on-diagonal TFs.

The code to do this lives in {SUSsvn}/HLTS/Common/MatlabTools/plotHLTS_ISI_dtttfs_M1.m

I've attached a big set of pdfs. The cross-couplings look not-so-great. See the last 5 plots for the cross-couplings of dof->Yaw. In particular, L->Y is about the same as Y->Y (pg 22).

The pdfs and the .mat file have been committed to the SVN at

{SUSsvn}/HLTS/H1/PR3/SAGM1/Results/
  2024-10-15_1627_H1SUSPR3_M1_SUSpointDrive.mat
  2024-10-15_1627_H1SUSPR3_M1_ALL_TFs_lightdamping_SUSpointDrive.pdf

(Also, see in the previous comments, there was a file which I named ...PR4...  this is now corrected to ...PR3... )

Non-image files attached to this comment
LHO VE
janos.csizmazia@LIGO.ORG - posted 12:56, Tuesday 15 October 2024 - last comment - 12:29, Wednesday 16 October 2024(80686)
MY GV10 Annulus Ion Pump swap
Gerardo, Janos

Reacting to the issue in aLog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80619, the GV10 AIP was swapped, as the pump broke down.

The AIP was swapped with a noble diode pump, as a lesson learnt from the latest AIP swap: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80411.

The only difficulty was that, after swapping the pump and then pumping on the annulus line, the old Varian controller did not work (the reason for its malfunction is now supposedly diagnosed), so the ion pump now runs with an Agilent IPC Mini controller. The downside of this is that the wiring of its data cable is not compatible with the current CDS setup, and therefore on the MEDM screen the AIP still appears as faulty - see the attached picture. The controller will be replaced shortly, after the Varian controller is fixed. Until then, the Agilent controller will be checked periodically.

Nevertheless, the Noble-diode pump proved to be an excellent choice again, as it pumped the Annulus volume down pretty quickly (in about 7 minutes) below 1E-6 Torr.
Images attached to this report
Comments related to this report
janos.csizmazia@LIGO.ORG - 12:29, Wednesday 16 October 2024 (80713)
Gerardo swapped the IPC Mini controller back to a Varian controller yesterday, so now the MEDM screen is also clear.
H1 AWC
daniel.sigg@LIGO.ORG - posted 12:41, Tuesday 15 October 2024 - last comment - 08:42, Wednesday 16 October 2024(80685)
Strain gauge servo

(Dave, Vicky, Daniel)

The necessary filter modules for the ZM2/4/5 strain gauge servos were added to the h1tcscs model installed this morning, and the MEDM screens were updated. We added some channels to the copy guardian, since the strain gauge channels are acquired in the TwinCAT system. The new filter modules were loaded with a -1/f filter and then engaged with a gain of 1. The output limiters were set to 50V.

We played around with the new servos by stepping the target strain gauge settings. The response time seems to be about 7-8 sec. All working fine.

Comments related to this report
victoriaa.xu@LIGO.ORG - 08:42, Wednesday 16 October 2024 (80695)SQZ

So when running the PSAMS servo, to change PSAMS ROC, set H1:AWC-ZM{2,4,5}_M2_STRAIN_VOLTAGE_TARGET instead of adjusting the applied voltage offset slider H1:AWC-ZM{2,4,5}_M2_SAMS_OFFSET.
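A hedged sketch of stepping the servo setpoint from the command line and watching the strain gauge readback settle (channel names are from this entry; the step size, tolerance, and polling cadence are arbitrary choices):

import time
from epics import caget, caput

ZM = 4
target_pv   = f"H1:AWC-ZM{ZM}_M2_STRAIN_VOLTAGE_TARGET"
readback_pv = f"H1:AWC-ZM{ZM}_PSAMS_STRAIN_VOLTAGE"

new_target = caget(target_pv) + 0.05      # small step, in strain gauge volts
caput(target_pv, new_target)

for _ in range(30):                       # poll for up to ~30 s (response time is ~7-8 s)
    err = caget(readback_pv) - new_target
    print(f"strain gauge error: {err:+.4f} V")
    if abs(err) < 0.002:                  # arbitrary settling tolerance
        break
    time.sleep(1)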

Attaching some trends of the PSAMS servo working overnight -

Screenshot 1 - For ZM4, servo filter bank settings and ndscope trends of a step in the servo setpoint. The PSAMS servo output adjusts the applied voltage (see PZT DC voltage readback, eg H1:AWC-ZM4_PSAMS_VOLTAGE_DC) to bring the strain gauge voltage readback (H1:AWC-ZM4_PSAMS_STRAIN_VOLTAGE) to the target setpoint voltage (H1:AWC-ZM4_M2_STRAIN_VOLTAGE_TARGET).

Screenshot 2 - ZM2,4,5 PSAMS strain gauge voltage trends before/after the servo. Before the PSAMS servo (last few days), the applied PSAMS PZT voltage is static and the PSAMS strain gauge voltage readbacks drift (a small amount). After turning on the PSAMS servos, the applied PSAMS PZT voltages are servo'd to stabilize the strain gauge readbacks at the "TARGET" values.

Screenshot 3 - ZM5 suspension MEDM screen, showing where the PSAMS servo filter bank and target setpoints are. This ZM5 suspension medm screen is in $(userapps)/sus/common/medm/hxds/SUS_CUST_HPDS_OVERVIEW.adl (Sheila helped us svn-commit this screen from LLO, then Daniel pulled it for use at LHO).

Daniel updated the PSAMS servo parts in the AWC block of h1tcscs.mdl, found in userapps/trunk/tcs/h1/models. The SDFs appeared in the h1tcscs table (in the "OAF" box on the SDF screen). Daniel has SDF-monitored the PSAMS filter banks and left the ZM PSAMS PZT offsets SDF-monitored. Note these PZT offset values are basically meaningless when the servo is running. The ZM2/4/5 offsets are now all SDF'd around 100V, so they would come back to ~nominal values after any resets.

This follows up on Adam's llo73464 "Adding SAMS servo to HPDS common part" (from LLO: WP 11839, IIET ticket 32287, and ECR E2400361) to pull the PSAMS servo in for LHO.

Images attached to this comment
H1 General
camilla.compton@LIGO.ORG - posted 12:39, Tuesday 15 October 2024 (80684)
LVEA swept following T1500386

Paging system and lights (including the mega-cleanroom) turned off. Tony turned the WAP off.

The SQZ bay crane is ~6 ft from its nominal position, there's a cable hanging from the aLIGO PSL injection lock servo LO board (photo), and the clean-room in the West bay is plugged in (but off).

Images attached to this report
H1 General (GRD)
anthony.sanchez@LIGO.ORG - posted 12:29, Tuesday 15 October 2024 (80682)
CDS_CA_COPY Guardian issue

CDS_CA_COPY Guardian broke.
TJ just had me do this:

anthony.sanchez@cdsws29: guardutil print CDS_CA_COPY
ifo: H1
name: CDS_CA_COPY
module:
  /opt/rtcds/userapps/release/sys/h1/guardian/CDS_CA_COPY.py
CA prefix:
nominal state: COPY
initial request: COPY
states (*=requestable):
  10 * COPY
   0   INIT
anthony.sanchez@cdsws29: guardctrl restart CDS_CA_COPY
INFO: systemd: restarting nodes: CDS_CA_COPY
anthony.sanchez@cdsws29:
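For reference, this is roughly the general shape of a single-state "copy"-style guardian node. This is a heavily hedged sketch, NOT the contents of the real CDS_CA_COPY.py (which lives at the path printed above); the channel pair is a placeholder.

from guardian import GuardState

request = 'COPY'
nominal = 'COPY'

# Hypothetical (source, destination) channel pair to mirror; guardian injects
# the global `ezca` object into the module at runtime.
COPY_PAIRS = [
    ('AWC-ZM4_PSAMS_STRAIN_VOLTAGE', 'TCS-ZM4_STRAIN_COPY'),
]

class INIT(GuardState):
    def main(self):
        return True

class COPY(GuardState):
    def run(self):
        # copy each source value to its destination on every guardian cycle;
        # never return True, so the node keeps running this state
        for src, dst in COPY_PAIRS:
            ezca[dst] = ezca[src]

edges = [('INIT', 'COPY')]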

 

LOG Output.....
------------------------------------------------------

                                            #3  0x00007fed0c85003b _ZN13tcpSendThread3runEv (libca.so.3.15.5 + 0x4203b)
                                             #4  0x00007fed0c7d40bb epicsThreadCallEntryPoint (libCom.so.3.15.5 + 0x3f0bb)
                                             #5  0x00007fed0c7d9f7b start_routine (libCom.so.3.15.5 + 0x44f7b)
                                             #6  0x00007fed144d4ea7 start_thread (libpthread.so.0 + 0x7ea7)
                                             #7  0x00007fed14257a6f __clone (libc.so.6 + 0xfba6f)
2024-10-15_18:45:39.055649Z guardian@CDS_CA_COPY.service: Failed with result 'watchdog'.
2024-10-15_18:45:39.056668Z guardian@CDS_CA_COPY.service: Consumed 1w 18.768s CPU time.
2024-10-15_19:06:45.533022Z Starting Advanced LIGO Guardian service: CDS_CA_COPY...
2024-10-15_19:06:45.994090Z ifo: H1
2024-10-15_19:06:45.994090Z name: CDS_CA_COPY
2024-10-15_19:06:45.994090Z module:
2024-10-15_19:06:45.994090Z   /opt/rtcds/userapps/release/sys/h1/guardian/CDS_CA_COPY.py
2024-10-15_19:06:46.019449Z CA prefix:
2024-10-15_19:06:46.019449Z nominal state: COPY
2024-10-15_19:06:46.019449Z initial request: COPY
2024-10-15_19:06:46.019449Z states (*=requestable):
2024-10-15_19:06:46.019905Z   10 * COPY
2024-10-15_19:06:46.019905Z    0   INIT
2024-10-15_19:06:46.794201Z CDS_CA_COPY Guardian v1.5.2
2024-10-15_19:06:46.800567Z cas warning: Configured TCP port was unavailable.
2024-10-15_19:06:46.800567Z cas warning: Using dynamically assigned TCP port 45771,
2024-10-15_19:06:46.800567Z cas warning: but now two or more servers share the same UDP port.
2024-10-15_19:06:46.800567Z cas warning: Depending on your IP kernel this server may not be
2024-10-15_19:06:46.800567Z cas warning: reachable with UDP unicast (a host's IP in EPICS_CA_ADDR_LIST)
2024-10-15_19:06:46.803537Z CDS_CA_COPY EPICS control prefix: H1:GRD-CDS_CA_COPY_
2024-10-15_19:06:46.803687Z CDS_CA_COPY system archive: /srv/guardian/archive/CDS_CA_COPY
2024-10-15_19:06:47.114773Z CDS_CA_COPY system archive: id: 2b2d8bae8f5cee7885337837576f9e5d47f5891a (45275322)
2024-10-15_19:06:47.114773Z CDS_CA_COPY system name: CDS_CA_COPY
2024-10-15_19:06:47.115042Z CDS_CA_COPY system CA prefix: None
2024-10-15_19:06:47.115042Z CDS_CA_COPY module path: /opt/rtcds/userapps/release/sys/h1/guardian/CDS_CA_COPY.py
2024-10-15_19:06:47.115165Z CDS_CA_COPY initial state: INIT
2024-10-15_19:06:47.115165Z CDS_CA_COPY initial request: COPY
2024-10-15_19:06:47.115165Z CDS_CA_COPY nominal state: COPY
2024-10-15_19:06:47.115165Z CDS_CA_COPY CA setpoint monitor: False
2024-10-15_19:06:47.115355Z CDS_CA_COPY CA setpoint monitor notify: True
2024-10-15_19:06:47.115355Z CDS_CA_COPY daemon initialized
2024-10-15_19:06:47.115355Z CDS_CA_COPY ============= daemon start =============
2024-10-15_19:06:47.132272Z CDS_CA_COPY W: initialized
2024-10-15_19:06:47.165033Z CDS_CA_COPY W: EZCA v1.4.0
2024-10-15_19:06:47.165517Z CDS_CA_COPY W: EZCA CA prefix: H1:
2024-10-15_19:06:47.165517Z CDS_CA_COPY W: ready
2024-10-15_19:06:47.165615Z CDS_CA_COPY worker ready
2024-10-15_19:06:47.165615Z CDS_CA_COPY ========== executing run loop ==========
2024-10-15_19:06:47.165779Z Started Advanced LIGO Guardian service: CDS_CA_COPY.
2024-10-15_19:06:48.010127Z CDS_CA_COPY OP: EXEC
2024-10-15_19:06:48.010884Z CDS_CA_COPY MODE: AUTO
2024-10-15_19:06:48.012375Z CDS_CA_COPY calculating path: INIT->COPY
2024-10-15_19:06:48.012922Z CDS_CA_COPY new target: COPY
2024-10-15_19:06:48.046196Z CDS_CA_COPY executing state: INIT (0)
2024-10-15_19:06:48.046407Z CDS_CA_COPY [INIT.enter]
2024-10-15_19:06:48.071850Z CDS_CA_COPY JUMP target: COPY
2024-10-15_19:06:48.072301Z CDS_CA_COPY [INIT.exit]
2024-10-15_19:06:48.139271Z CDS_CA_COPY JUMP: INIT->COPY
2024-10-15_19:06:48.139587Z CDS_CA_COPY calculating path: COPY->COPY
2024-10-15_19:06:48.140156Z CDS_CA_COPY executing state: COPY (10)
2024-10-15_19:06:48.269602Z Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
                                         Stack trace of thread 1702297:
                                             #0  0x00007fed1
                                             
                                             
                                            

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 12:19, Tuesday 15 October 2024 (80681)
Dolphin firmware was updated on three front ends

[EJ, Dave, Erik, Jonathan]

Dolphin firmware was updated on h1sush7, h1cdsh8, h1oaf0 from version 08 to 97.  All other frontends had the correct version.  This version will help avoid crashes such as described here:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80574

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 12:15, Tuesday 15 October 2024 (80680)
Workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

H1 TCS
camilla.compton@LIGO.ORG - posted 11:15, Tuesday 15 October 2024 (80679)
Checked CO2Y beam centering prior to laser swap

TJ, Camilla WP# 12121

Current CO2Y laser: 20510-20816D, which was re-gassed in 2016 and installed in October 2022 (alog 65264).

To prepare for the CO2Y laser swap next week:

  •  TJ and I moved the spare CO2 laser that just got re-gassed (S/N 20706/alt link) out to the LVEA.
  • Turned on the FLIR camera and checked beam position with no mask, central and annular masks. 
  • Checked the beam centering on irises (used PWM for iris before power control) and used new Thorlabs VCR6S detector cards which were great!
    • The beam was to the upper left on first and second iris (didn't move irises). 
    • The beam was centered on the last two irises. 
  • We didn't scribe the current CO2 laser position as it was already pushed up against multiple beam dumps that record its position. 
  • All water tubes were correctly labeled (photo of laser pipe order). The RF driver-to-laser cables were labeled on both sides where they connect to the laser; see the photo for how to plug them into the RF driver.
Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:23, Tuesday 15 October 2024 (80677)
Tue CP1 Fill

Tue Oct 15 10:11:38 2024 INFO: Fill completed in 11min 34secs

TCs looking good again.

Images attached to this report
H1 General (Laser Transition)
anthony.sanchez@LIGO.ORG - posted 09:39, Tuesday 15 October 2024 (80676)
LVEA is Now LASER HAZARD
Camilla has transitioned the LVEA to LASER HAZARD for WP 12121.
The LVEA is now LASER HAZARD.
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 08:57, Tuesday 15 October 2024 (80675)
13:01 UTC lockloss

13:01 UTC lockloss from NLN (12:40 lock)

~1000 Hz oscillation in DARM, in the violin 1st-harmonic area. There was a small DARM wiggle 200 ms, then a much larger DARM wiggle 100 ms before the lockloss; the IMC lost it the usual 1/4 second after ASC-AS_A.
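A hedged sketch of one way to look at the ~1 kHz ring-up (not how the lockloss tool does it); the channel choice and time span are assumptions:

from gwpy.timeseries import TimeSeries

start, end = "2024-10-15 13:00:30", "2024-10-15 13:01:05"    # bracket the 13:01 UTC lockloss
darm = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, end)     # assumed channel choice

qspec = darm.q_transform(frange=(500, 1500))                 # zoom on the ~1 kHz band
plot = qspec.plot()
plot.savefig("lockloss_1kHz_ringup.png")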

Images attached to this report
LHO General
tyler.guidry@LIGO.ORG - posted 14:25, Monday 14 October 2024 - last comment - 08:48, Tuesday 15 October 2024(80666)
Well Pump Enabled - Well Tank Cleaned
The well pump has been cycled twice during our tank cleaning work. The pump is currently operating for a manual cycle time of 20 minutes, at which point we will identify its level and record it for future reference. For those interested, the well pump takes ~40 seconds to begin filling after being cycled on.

C. Soike E. Otterman T. Guidry
Comments related to this report
tyler.guidry@LIGO.ORG - 08:48, Tuesday 15 October 2024 (80674)
25 minutes of pump runtime adds roughly 48" of water to the well tank.
H1 DetChar (DetChar, Lockloss)
bricemichael.williams@LIGO.ORG - posted 11:33, Thursday 12 September 2024 - last comment - 16:04, Wednesday 30 October 2024(80001)
Lockloss Channel Comparisons

-Brice, Sheila, Camilla

We are looking to see if there are any aux channels that are affected by certain types of locklosses. Understanding whether a threshold is reached in the last few seconds prior to a lockloss can help determine the type of lockloss, as well as which channels are affected more than others.

We have gathered a list of lockloss times (using https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi) with:

  1. only Observe and Refined tags (plots, histogram)
  2. only Observe, Refined, and Windy tags (plots, histogram)
  3. only Observe, Refined, and Earthquake tags (plots, histogram)
  4. Observe, Refined, and Microseism tags (note: all of these also have an EQ tag, and all but the last 2 have an anthropogenic tag) (plots, histogram)

(issue: the plots for the first 3 lockloss types wouldn't upload to this aLog. Created a dcc for them: G2401806)

We wrote a Python script to pull the data of various auxiliary channels for the 15 seconds before a lockloss. A graph is created for each channel, a trace for each lockloss time is stacked on each graph, and the graphs are saved to a PNG file. All the graphs have been shifted so that the time of lockloss is at t=0.

Histograms are created for each channel comparing the maximum displacement from zero for each lockloss time. There is also a stacked histogram based on 12 quiet-microseism times (all taken from between 4.12.24 0900-0930 UTC). The histograms are created using only the last second of data before lockloss, are normalized by dividing by the number of lockloss times, and are saved to a separate PNG file from the plots.

These channels are provided via a list inside the python file and can be easily adjusted to fit a user's needs. We used the following channels:

channels = ['H1:ASC-AS_A_DC_NSUM_OUT_DQ', 'H1:ASC-DHARD_P_IN1_DQ', 'H1:ASC-DHARD_Y_IN1_DQ',
            'H1:ASC-MICH_P_IN1_DQ', 'H1:ASC-MICH_Y_IN1_DQ', 'H1:ASC-SRC1_P_IN1_DQ',
            'H1:ASC-SRC1_Y_IN1_DQ', 'H1:ASC-SRC2_P_IN1_DQ', 'H1:ASC-SRC2_Y_IN1_DQ',
            'H1:ASC-PRC2_P_IN1_DQ', 'H1:ASC-PRC2_Y_IN1_DQ', 'H1:ASC-INP1_P_IN1_DQ',
            'H1:ASC-INP1_Y_IN1_DQ', 'H1:ASC-DC1_P_IN1_DQ', 'H1:ASC-DC1_Y_IN1_DQ',
            'H1:ASC-DC2_P_IN1_DQ', 'H1:ASC-DC2_Y_IN1_DQ', 'H1:ASC-DC3_P_IN1_DQ',
            'H1:ASC-DC3_Y_IN1_DQ', 'H1:ASC-DC4_P_IN1_DQ', 'H1:ASC-DC4_Y_IN1_DQ']
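A condensed sketch of the pull-and-stack step described above (not the actual script in the repo); the lockloss GPS times are placeholders:

import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeries

lockloss_gps = [1412967678, 1413054321]      # placeholder times from the lockloss tool
channel = 'H1:ASC-DHARD_P_IN1_DQ'
span = 15                                    # seconds of data before each lockloss

fig, ax = plt.subplots()
for t0 in lockloss_gps:
    data = TimeSeries.get(channel, t0 - span, t0)
    ax.plot(data.times.value - t0, data.value, label=f"lockloss {t0}")
ax.set_xlabel("Time relative to lockloss [s]")
ax.set_ylabel(channel)
ax.legend()
fig.savefig("DHARD_P_before_lockloss.png")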
Images attached to this report
Comments related to this report
bricemichael.williams@LIGO.ORG - 17:03, Wednesday 25 September 2024 (80294)DetChar, Lockloss

After talking with Camilla and Sheila, I adjusted the histogram plots. I excluded the last 0.1 s before lockloss from the analysis. This is because, in the original post's plots, the H1:ASC-AS_A_NSUM_OUT_DQ channel has most of its last-second (blue) histogram at a value of 1.3x10^5, indicating that the last second of data is capturing the lockloss itself causing a runaway in the channels. I also combined the ground-motion locklosses (EQ, Windy, and Microseism) into one set of plots (45 locklosses) and left the Observe (and Refined) tagged locklosses as another set of plots (15 locklosses). Both groups of plots have 2 stacked histograms for each channel:

  1. Blue:
    • The data span from one second before until 0.1 second before lockloss, for each lockloss
    • The histogram is of the maximum displacement from zero over that span, one entry per lockloss
    • The counts are weighted as 1/(number of locklosses in this data set) (i.e. the total number of counts in the histogram)
  2. Red:
    • I took all the data points from eight seconds before until 2 seconds before lockloss, for each lockloss
    • I then down-sampled from 256 Hz to 16 Hz by taking every 16th data point
    • The histogram is of the displacement from zero of these down-sampled points
    • The counts are weighted as 1/(number of down-sampled data points for each lockloss) (i.e. the total number of counts in the histogram)

Take notice of the histogram for the H1:ASC-DC2_P_IN1_DQ channel for the ground-motion locklosses. In the last second before lockloss (blue), we can see a bimodal distribution with the right grouping centered around 0.10. The numbers above the blue bars are the percentage of the counts in each bin: about 33.33% is in the grouping around 0.10. This is in contrast to the distribution for the Observe/Refined locklosses, where the entire (blue) distribution is under 0.02. This could indicate that a threshold could be placed on this channel for lockloss tagging. More analysis will be required before that (next I am going to look at times without locklosses for comparison).
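For clarity, here is a small sketch of the blue/red weighting scheme described above, with random arrays standing in for the real channel data:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fs = 256
segments = [rng.normal(0, 0.02, 15 * fs) for _ in range(45)]    # fake 15 s, 256 Hz segments

# Blue: max |displacement| between 1.0 s and 0.1 s before lockloss,
# one entry per lockloss, weighted by 1/N_locklosses.
blue = [np.abs(seg[-fs:-int(0.1 * fs)]).max() for seg in segments]
blue_w = np.full(len(blue), 1.0 / len(blue))

# Red: every 16th sample (256 Hz -> 16 Hz) between 8 s and 2 s before lockloss,
# weighted by 1/N_points for each lockloss.
red, red_w = [], []
for seg in segments:
    pts = np.abs(seg[-8 * fs:-2 * fs:16])
    red.extend(pts)
    red_w.extend(np.full(pts.size, 1.0 / pts.size))

plt.hist([blue, red], bins=30, weights=[blue_w, red_w], stacked=True,
         label=["last 1 s (max)", "8-2 s before (downsampled)"])
plt.legend()
plt.savefig("histogram_sketch.png")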
Images attached to this comment
bricemichael.williams@LIGO.ORG - 14:17, Wednesday 09 October 2024 (80568)DetChar, Lockloss

I started looking at the DC2 channel and the REFL_B channel to see if there is a threshold in REFL_B that could be used for a new lockloss tag. I plotted the last eight seconds before lockloss for the various lockloss times. This time I split the times onto different graphs based on whether the DC2 max displacement from zero in the last second before lockloss was above 0.06 (based on the histogram in the previous comment): Greater = the max displacement is greater than 0.06, Less = the max displacement is less than 0.06. However, I discovered that some of the locklosses that are above 0.06 for the DC2 channel are failing the logic test in the code: they get treated as having a max displacement less than 0.06 and get plotted on the lower plots. I wonder if this is also happening in the histograms, but that would only mean we are underestimating the number of locklosses above the threshold. This could also be suppressing possible bimodal distributions in other histograms. (Looking into debugging this.)

I split the locklosses into 5 groups of 8 and 1 group of 5 to make it easier to distinguish between the lines in the plots.

Based on the plots, I think a threshold for H1:ASC-REFL_B_DC_PIT_OUT_DQ would be 0.06 in the last 3 seconds prior to lockloss

Images attached to this comment
bricemichael.williams@LIGO.ORG - 11:30, Tuesday 15 October 2024 (80678)DetChar, Lockloss

Fixed the logic issue for splitting the plots into pass/fail of the 0.06 threshold, as seen in the plot.

The histograms were unaffected by the issue.

Images attached to this comment
bricemichael.williams@LIGO.ORG - 16:04, Wednesday 30 October 2024 (80949)

Added code to the GitLab.
