It seems that earthquakes causing similar magnitudes of ground motion on site sometimes cause lockloss and sometimes do not. Why is this happening? For similar events we should expect to either always or never lose lock. One suspicion is that common versus differential motion might lend itself better to keeping or breaking lock.
- Lockloss is defined as H1:GRD-ISC_LOCK_STATE_N going to 0 (or near 0).
- I correlated H1:GRD-ISC_LOCK_STATE_N with H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON peaks between 500 and 2500 μm/s (a rough sketch of this search is given after this list).
- I manually scrolled through the data from the present back to 2 May 2024 to find events.
- Manual, because 1) I wanted to start with a small sample size and quickly see if there was a pattern, and 2) I need to find events that caused lockloss and then find similarly sized events for which we kept lock.
- Channels I looked at include:
- IMC-REFL_SERVO_SPLITMON
- GRD-ISC_LOCK_STATE_N
- ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON ("CS_PEAK")
- SEI-CARM_GNDBLRMS_30M_100M
- SEI-DARM_GNDBLRMS_30M_100M
- SEI-XARM_GNDBLRMS_30M_100M
- SEI-YARM_GNDBLRMS_30M_100M
- SEI-CARM_GNDBLRMS_100M_300M
- SEI-DARM_GNDBLRMS_100M_300M
- SEI-XARM_GNDBLRMS_100M_300M
- SEI-YARM_GNDBLRMS_100M_300M
- ISI-GND_STS_ITMY_X_BLRMS_30M_100M
- ISI-GND_STS_ITMY_Y_BLRMS_30M_100M
- ISI-GND_STS_ITMY_Z_BLRMS_30M_100M
- ISI-GND_STS_ITMY_X_BLRMS_100M_300M
- ISI-GND_STS_ITMY_Y_BLRMS_100M_300M
- ISI-GND_STS_ITMY_Z_BLRMS_100M_300M
- SUS-SRM_M3_COILOUTF_LL_INMON
- SUS-SRM_M3_COILOUTF_LR_INMON
- SUS-SRM_M3_COILOUTF_UL_INMON
- SUS-SRM_M3_COILOUTF_UR_INMON
- SUS-PRM_M3_COILOUTF_LL_INMON
- SUS-PRM_M3_COILOUTF_LR_INMON
- SUS-PRM_M3_COILOUTF_UL_INMON
- SUS-PRM_M3_COILOUTF_UR_INMON
- ndscope template saved as neil_eq_temp2.yaml
- 26 events; 14 lockloss, 12 locked (3 or 4 of the lockloss events may have non-seismic causes)
- After using CS_PEAK to find the events, I have so far used the ISI channels to analyse them.
- The SEI channels were created last week (only 2 events captured in these channels, so far).
- Conclusions:
- There are 6 CS_PEAK events above 1,000 μm/s in which we *lost* lock:
- In SEI 30M-100M:
- 4 have z-axis-dominant motion, with either no motion or strong z-motion in SEI 100M-300M.
- 2 have y-axis-dominated motion, with a lot of activity in SEI 100M-300M and y-motion dominating some of the time.
- There are 6 CS_PEAK events above 1,000 μm/s in which we *kept* lock:
- In SEI 30M-100M:
- 5 have z-axis-dominant motion, with only general noise in SEI 100M-300M.
- 1 has z-axis-dominant noise near the peak in CS_PEAK and strong y-axis-dominated motion starting 4 min before the CS_PEAK peak; it too has only general noise in SEI 100M-300M. This x- or y-motion starting about 4 min before the peak in CS_PEAK has been observed in 5 events; since Love waves precede Rayleigh waves, could these be Love waves?
- All events below 1,000 μm/s that lost lock seem to have dominant y-motion in SEI 30M-100M, 100M-300M, or both. However, the sample size is not large enough to convince me that shear motion is what is causing lockloss, but it is large enough to convince me to find more events and verify. (Some plots attached.)
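For illustration only, here is a rough Python sketch of this kind of search. The gwpy data access, the 10-minute peak separation, the 30-minute lockloss window, and the "state below 10" lockloss test are all assumptions for the example, not the procedure actually used (the real search was done by scrolling through ndscope):

    from gwpy.timeseries import TimeSeries
    from scipy.signal import find_peaks

    # Example span only; the actual survey ran from the present back to 2 May 2024.
    start, end = '2024-05-02', '2024-06-01'

    # CS_PEAK (ground-motion EQ peak monitor) and the ISC_LOCK guardian state.
    cs_peak = TimeSeries.get('H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON', start, end)
    lock = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', start, end)

    # EQ candidates: peaks between 500 and 2500 um/s, at least 10 minutes apart (assumed).
    idx, _ = find_peaks(cs_peak.value, height=(500, 2500),
                        distance=int(600 * cs_peak.sample_rate.value))

    for i in idx:
        t_peak = cs_peak.times[i].value
        # Call it a lockloss if the guardian state drops near 0 within 30 min of the peak.
        after = lock.crop(t_peak, t_peak + 1800)
        lost = bool((after.value < 10).any())
        print(f'{t_peak:.0f}  peak = {cs_peak.value[i]:.0f} um/s  lockloss = {lost}')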
In a study with student Alexis Vazquez (see the poster at https://dcc.ligo.org/LIGO-G2302420), we found that there is an intermediate range of peak ground velocities in EQs for which lock may be either lost or maintained. We also found some evidence that lockloss in this range might be correlated with high microseism (either ambient or caused by the EQ). See the figures in the linked poster under Findings and Validation.
One of the plots (2nd row, 2nd column) had the incorrect x-channel on some of the images (all posted images happen to be correct). The patterns reported may not be correct; I will reanalyze.
J. Kissel, R. Savage

After Rick and I conversed about the systems-level compromises in play (see LHO:69175) when placing the newer 410.2 Hz PCALX line near the 410.3 Hz PCALY line for comparisons like that in LHO:69290, we agree to push forward with the plans discussed in LHO:69175:
(1) Move the PCALX 410.2 Hz PCALXY comparison line further away from the pre-existing PCALY 410.3 Hz TDCF line. Joe, for entirely different reasons, recommends 0.5 Hz separation instead of 0.1 Hz. THIS ALOG. The new frequency is H1:CAL-PCALX_PCALOSC2_OSC_FREQ = 409.8 Hz.
(2) Update the "DARM Model transfer function values at calibration line frequencies" EPICS records for the PCALX 410.2 Hz line. NOT YET DONE, SEE BELOW.
(3) Revert all the DEMOD band-passes to have a pass band that's +/- 0.1 Hz wide (what we had in O3). DONE ALREADY, and now for 410.3 Hz: see LHO:69265 and THIS ALOG.
(4) Revert all DEMOD I & Q low passes to a 10 second time constant, or 0.1 Hz corner frequency. DONE ALREADY, and now for 410.3 Hz: see LHO:69265 and THIS ALOG.
(5) Change COH_STRIDE back to 10 seconds to match the low pass, and change BUFFER_SIZE back to 13.0 in order to preserve the rolling average of 2 minutes. DONE ALREADY, and now for 410.3 Hz: see LHO:69265 and THIS ALOG.

I've also modified the three band-pass filters in the SIG banks of the special segregated PCALX DEMOD for the PCALXY comparison (i.e. H1:CAL-CS_TDEP_PCAL_X_COMPARE_PCAL_DEMOD_SIG, _EXT_DEMOD_SIG, _ERR_DEMOD_SIG). These had a pass band width of 0.01 Hz, so I've created a new band pass for 409.8 Hz, butter("BandPass",6,409.79,409.81). It lives in FM1, and I've copied the old 410.2 Hz band pass over to FM2.

In order to have *all* of our ducks in a row with the line move, we still need to do:
(2) Update the "DARM Model transfer function values at calibration line frequencies" EPICS records for the PCALX 410.2 Hz line, but that also means we need to do a step not listed above:
(0) Update the pydarm_H1.ini file to reflect that the PCALX comparison line is now at 409.8 Hz, and
(6) Let the GDS team know that there's a calibration line frequency change and they need to update the GDS line subtraction pipeline.

This line frequency change is in play as of 2023-05-04 00:20 UTC. Stay tuned!

Here's the latest list of calibration lines:

Freq (Hz)   Actuator        Purpose                     Channel that defines Freq        Since O3
15.6        ETMX UIM (L1)   SUS \kappa_UIM excitation   H1:SUS-ETMY_L1_CAL_LINE_FREQ     Amplitude Change on Apr 2023 (LHO:68289)
16.4        ETMX PUM (L2)   SUS \kappa_PUM excitation   H1:SUS-ETMY_L2_CAL_LINE_FREQ     Amplitude Change on Apr 2023 (LHO:68289)
17.1        PCALY           actuator kappa reference    H1:CAL-PCALY_PCALOSC1_OSC_FREQ   Amplitude Change on Apr 2023 (LHO:68289)
17.6        ETMX TST (L3)   SUS \kappa_TST excitation   H1:SUS-ETMY_L3_CAL_LINE_FREQ     Amplitude Change on Apr 2023 (LHO:68289)
33.43       PCALX           Systematic error lines      H1:CAL-PCALX_PCALOSC4_OSC_FREQ   New since Jul 2022 (LHO:64214, LHO:66268)
53.67       |               |                           H1:CAL-PCALX_PCALOSC5_OSC_FREQ   Frequency Change on Apr 2023 (LHO:68289)
77.73       |               |                           H1:CAL-PCALX_PCALOSC6_OSC_FREQ   New since Jul 2022 (LHO:64214, LHO:66268)
102.13      |               |                           H1:CAL-PCALX_PCALOSC7_OSC_FREQ   |
283.91      V               V                           H1:CAL-PCALX_PCALOSC8_OSC_FREQ   V
409.8       PCALX           PCALXY comparison           H1:CAL-PCALX_PCALOSC2_OSC_FREQ   New since Jan 2023, Frequency Change THIS ALOG
410.3       PCALY           f_cc and kappa_C            H1:CAL-PCALY_PCALOSC2_OSC_FREQ   No Change
1083.7      PCALY           f_cc and kappa_C monitor    H1:CAL-PCALY_PCALOSC3_OSC_FREQ   No Change
n*500+1.3   PCALX           Systematic error lines      H1:CAL-PCALX_PCALOSC1_OSC_FREQ   No Change (n=[2,3,4,5,6,7,8])
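For reference, a rough scipy analogue of the new 409.8 Hz band pass described above. The 16384 Hz sample rate and the mapping of foton's order 6 to a 6-pole (scipy order 3) Butterworth are my assumptions, so treat this as illustrative only, not the actual foton design:

    from scipy import signal

    fs = 16384.0  # assumed front-end model rate
    # 6-pole Butterworth band pass with band edges at 409.79 and 409.81 Hz.
    sos = signal.butter(3, [409.79, 409.81], btype='bandpass', fs=fs, output='sos')

    # Inspect the magnitude response near the line to confirm the narrow pass band.
    w, h = signal.sosfreqz(sos, worN=2**20, fs=fs)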
As of git hash ac191a90, I've changed the pydarm_H1.ini pydarm parameter file in order to make this change:
- LINE 512 cal_line_cmp_pcalx_frequency = 410.2
+ LINE 512 cal_line_cmp_pcalx_frequency = 409.8
This takes care of item (0) above.
Dripta B. & Tony S. were able to complete the full measurement of the X End PCAL this afternoon.
The beam spot positions at the RX sensor looked good; see photo below.
Things to note about this measurement:
None of the phones at the End X station were working, so I had to call the operator via my cell phone from outside the building.
We used PS4 for this measurement this afternoon.
Neither of us noticed that the OFS was railed at -7.4V for our first 3 measurements, after which it suddenly went up to -3.88V. We looked back at some historical data to find that -3.88V is indeed the desired value for the OFS PD.
This forced us to start the measurements over again. We are not sure why the OFS was railed at -7.4V, nor are we sure why it stopped when I swapped the RX Sphere for PS4.
Data analysis is still ongoing.
This measurement actually happened on the 15th, but the analysis was done on the 22nd.
This is the analysis report for the LHO End X measurement using PS4 as the working standard, i.e. the sphere that we take down to the end station instead of PS3.
We also happened to change the Gold Standard, so there are a lot of changes happening at once. As more data comes in, we will remove the old data from these plots.
The mean value for the TxPD force coefficient is 7.921285e-13 N/ct,
and the mean value for the RxPD force coefficient is 6.230834e-13 N/ct.
Maybe this isn't the best place to share this, but life can be frustrating as a deuteranope. Matplotlib has an easy-to-use, colorblind-friendly style built in. It can be invoked by adding the line plt.style.use('tableau-colorblind10') before starting to set up your plot (assuming you've used the usual import matplotlib.pyplot as plt). Maybe this is common knowledge, but I just found it while working on a script.
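For anyone who wants to try it, here is a minimal, self-contained example of invoking the style; the sine-wave data and the particular linestyle/width choices are just placeholders for illustration:

    import numpy as np
    import matplotlib.pyplot as plt

    # Built-in colorblind-friendly style (Tableau's 10-color colorblind palette).
    plt.style.use('tableau-colorblind10')

    t = np.linspace(0, 10, 500)
    fig, ax = plt.subplots()
    # Varying linestyle and width as well as color keeps traces distinguishable for more readers.
    for i, ls in enumerate(['-', '--', '-.', ':']):
        ax.plot(t, np.sin(t + i), linestyle=ls, linewidth=1 + 0.5 * i, label=f'trace {i}')
    ax.legend()
    plt.show()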
The first plot is one of the plots produced for the weekly CPS noise monitoring FAMIS task for ITMX. The second plot is the same one, but with the colorblind style added to the script. For me, on the first image the H3 and V1 sensors on both subplots are almost the same color. It's especially difficult to go back and forth between the legend and the traces on the first plot, and reading these plots on a backlit monitor makes it even harder. With more than 4-6 traces, I have a really hard time telling which line is which.
On the second figure, it's much easier. Add in some different linestyles and thicknesses, and everybody's pretty plots can be appreciated by more people.
Nice one! Tagging all the groups so some folks get emails about this deuteranope-approved color palette. Now we just need the equivalent for MATLAB!
On the DMT production computers, h1dmt0, h1dmt1, and h1dmt2 in the MSR, I have updated the kernel, patched, and upgraded the software to GDS Suite 20210507, as per the approved SCCB ticket https://git.ligo.org/sccb/requests/-/issues/685. This completes WP 9749.
I have remotely updated the DMT production computers to gstlal-calibration-1.3.1, as per SCCB issue: https://git.ligo.org/sccb/requests/-/issues/651. The generation of DMT hoft is working again. This completes Work Permit 9588.
The DMT production computers at LHO have been patched and rebooted to bring in the latest security patches. After several restarts, low latency DMT H1 hoft is flowing to CIT again and going to disk at LHO. And everything else in the DMT is working again, as well. This completes WP 8539.
The DMT production computers have been patched and rebooted to bring in the latest security patches and production updates from lscsoft. The GDS and calibration software on the DMT were not changed. Everything on the DMT is working and this completes WP #8497.
The primary and redundant calibration pipelines producing DMT hoft were restarted around 11 am PST during maintenance, after an email exchange with the calibration group, LDAS, and Keita; the operator was informed. The restarts were to fix a problem with the DMT redundant hoft pipeline (it was writing 1 s of data to disk every 3 seconds) and to reduce the latency of the DMT primary hoft pipeline (which had a latency of 17 s as measured at Caltech, and is now 9 s). The DMT calibration pipelines are both working again.
The DMT production computers at LHO have been patched and rebooted, bringing them to Scientific Linux 7.7, and gstlal-calibration-1.2.11 has been installed. All the DMT monitors should be working again. This completes WP #8435.
I have completed patching and rebooting the DMT production computers while H1 was down for commissioning this afternoon.
(This work was started yesterday during maintenance, but was stopped when it was realized that DMT generation of the redundant hoft stream, once restarted, would not write .gwf files until it could get iDQ info from LDAS, and LDAS was down for maintenance yesterday too. The DMT primary hoft stream was not touched yesterday, and there was no interruption of low-latency hoft going to CIT, except briefly during the reboot of the DMT during this commissioning time today.)
Everything on the DMT is working now, and this completes WP 8345.
Dan, Dave:
Dan reports that h1fw0's main raid (E18-0 controlled by h1ldasgw0) is not accessible from LDAS. The unit is in a hardware fault, audible alarm is sounding, both STAT LEDs on front are RED. On the rear controller 0 (top unit) is RED and controller 1 is GREEN.
h1fw0 continues to write to this file system, so it looks like a partial failure?
with Madeline Wade.
The DMT computers have been patched and rebooted. The GDS calibration has been restarted with the time-varying calibration factors turned on. This completes WP 8214.
All (non-SRC) time-dependent correction factors were turned on in the GDS calibration pipeline at approximately GPS time 1242497475. The new configuration file for this update is
aligocalibration/trunk/Runs/O3/GDSFilters/H1GDS_1242497475.ini
We have been running with the new calibration lines for several weeks, and the stability of the time-dependent correction factors during this time warranted this configuration change.
With M Wade.
The DMT computers have been patched and updated to gds-2.18.17; the GDS calibration was restarted to remove cal lines in GDS CLEANED STRAIN. This completes WP 8191.
Just to give some more details on the updates to the GDS calibration pipeline: We restarted the pipelines with a new configuration file
aligocalibration/trunk/Runs/O3/GDSFilters/H1GDS_1240326116.ini
The new configuration file includes the correct list of line frequencies to be subtracted from the GDS-CALIB_STRAIN_CLEAN channel. The restart occurred around GPS time 1240691683.
- with Dan Moraru and Dave Barker:
* The DMT computers have been patched.
* The GDS fileserver is now running Solaris 11.3 and the failed drive on the OS filesystem has been replaced and mirrored.
* The QFS filesystem is now at version 6.1.50
* The GDS calibration software has been updated to gstlal-calibration-1.2.9
This completes WPs 8128 and 8130.
The OS of the DMT login box h1dmtlogin was upgraded today to Debian 9. This was done under WP 8116.
The DMT computers have been patched and rebooted, bringing in gds2.18.16-1.el7 and " new features needed to connect idq to the low latency frames," as per WP 8106. John Zweizig will be reconfiguring the new features.
I have reconfigured the dmt dq monitor to copy the iDQ channels into the h(t) frames. The monitor includes a 10 s deadline on the arrival time; if the iDQ data miss that deadline, the idq channels will be written as zeros, to prevent delays in the h(t) frame delivery (a schematic sketch of this logic follows the channel list). The current idq channel list includes:
H1:IDQ-OK_OVL_16_4096 0
H1:IDQ-RANK_OVL_16_4096 0
H1:IDQ-FAP_OVL_16_4096 0
H1:IDQ-EFF_OVL_16_4096 0
H1:IDQ-LOGLIKE_OVL_16_4096 0
H1:IDQ-PGLITCH_OVL_16_4096 0
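For illustration only, here is a minimal Python sketch of the deadline-then-zeros behaviour described above. It is not the actual DMT dq monitor code; the fetch callable, the 16 Hz rate, and the overall structure are assumptions made for the example:

    import time
    import numpy as np

    IDQ_CHANNELS = [
        "H1:IDQ-OK_OVL_16_4096",
        "H1:IDQ-RANK_OVL_16_4096",
        "H1:IDQ-FAP_OVL_16_4096",
        "H1:IDQ-EFF_OVL_16_4096",
        "H1:IDQ-LOGLIKE_OVL_16_4096",
        "H1:IDQ-PGLITCH_OVL_16_4096",
    ]

    def collect_idq(fetch, gps_start, duration, deadline=10.0, rate=16):
        """Gather iDQ data for one h(t) frame; fall back to zeros if the deadline is missed.

        fetch(channel, gps_start, duration) -> numpy array is an assumed stand-in for
        whatever the real monitor uses to read the low-latency iDQ data.
        """
        zeros = {c: np.zeros(int(duration * rate)) for c in IDQ_CHANNELS}
        t0 = time.monotonic()
        data = {}
        for chan in IDQ_CHANNELS:
            if time.monotonic() - t0 > deadline:
                # Deadline missed: write zeros so h(t) frame delivery is not delayed.
                return zeros
            try:
                data[chan] = fetch(chan, gps_start, duration)
            except Exception:
                return zeros
        return data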
As per WP 8104 and SCCB issue 82, I have updated the DMT production computers to gstlal-calibration-1.2.8. For example, this allows pcal correction factors that are applied to GDS strain to be updated from the front end for ER14. See: https://git.ligo.org/sccb/requests/issues/82.
I have not restarted the DMT calibration code. No changes have taken place yet.
When H1 is down, I have asked Aaron Viet to restart the DMT calibration code and put in an alog. This should not affect FOMs in the control room.
(More DMT/GDS updates are scheduled for next Tuesday maintenance, on March 5, as per WP 8106.)