Ran the (userapps)/isc/h1/scripts/a2l/a2l_min_multi.py script for all four quads in both P and Y to try to improve our range. Improvements were minimal; this was done in tandem with the SQZ scans, which gained us about 2 Mpc.
Optic | DOF | Initial | Final | Diff |
---|---|---|---|---|
ETMX | P | 3.12 | 3.09 | -0.03 |
ETMX | Y | 4.79 | 4.81 | +0.02 |
ETMY | P | 4.48 | 4.49 | +0.01 |
ETMY | Y | 1.13 | 1.26 | +0.13 |
ITMX | P | -1.07 | -1.02 | +0.05 |
ITMX | Y | 2.72 | 2.79 | +0.07 |
ITMY | P | -0.47 | -0.43 | +0.04 |
ITMY | Y | -2.30 | -2.36 | -0.06 |
Fri Jul 12 08:08:03 2024 INFO: Fill completed in 8min 0secs
Jordan confirmed a good fill curbside.
TITLE: 07/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
TITLE: 07/12 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Mostly uneventful evening. After the lockloss, while trying to relock (we hadn't done an initial alignment), DRMI unlocked twice while we were in ENGAGE_DRMI_ASC, with beamsplitter saturations both times and a BS ISI saturation the second time as well (1st time ndscope, 2nd time ndscope). Not sure what that was about, as I then ran an IA and locking went fine when I tried again. After being in Observing for a little bit I decided to adjust SQZ because it was really bad. I had some issues after running it the first time (I clicked things out of order, even though that shouldn't matter?), but eventually I was able to get a lot more squeezing and much better sensitivity at the higher frequencies.
LOG:
23:00 Relocking and at PARK_ALS_VCO
23:35 NOMINAL_LOW_NOISE
23:42 Started running simulines measurement to check if simulines is working
00:05 Simulines done
00:13 Observing
00:44 Earthquake mode activated due to EQ in El Salvador
01:04 Seismic to CALM
04:32 Lockloss
Relocking
- 17 seconds into ENGAGE_DRMI_ASC, BS saturated and then LL
- BS saturation twice in ENGAGE_DRMI_ASC, then ISI BS saturation, then LL
05:08 Started an initial alignment
05:27 IA done, relocking
06:21 NOMINAL_LOW_NOISE
06:23 Observing
06:38 Out of Observing to try and make sqz better because it's really bad
07:24 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:10 | PCAL | Rick, Shango, Dan | PCAL Lab | y(local) | In PCAL Lab | 23:54 |
06:23 UTC Observing
Thu Jul 11 08:11:57 2024 INFO: Fill completed in 11min 53secs
late entry from this morning
Observing at 157Mpc and have been locked for 3.5 hours. We rode out a 5.2 earthquake from El Salvador earlier. Wind is low and going down.
Lock loss 1404770064
Lost lock during commissioning time, but we were between measurements, so it was caused by something else. Looking at the lockloss tool ndscopes, ETMX shows that movement we've been seeing a lot of just before the lockloss.
Ibrahim, Oli
Attached are the latest (07-10-2024) BBSS transfer functions, taken following the most recent RAL visit and rebuild. The Diaggui screenshots show the first round of measurements, from 01-05-2024, as a reference. The PDF shows these results with respect to expectations from the dynamical model. Here is what we think so far:
Thoughts:
The nicest-sounding conclusion here is that something is wrong with the F3 OSEM, because it is the only OSEM and/or flag involved in L, P, and Y (the less coherent measurements) but not in the others. F3 fluctuates and reacts much more erratically than the others, and in Y the F3 OSEM carries a greater proportion of the actuation than in P and a higher magnitude than in L, so if something were wrong with F3, we'd see it loudest in Y. This is exactly where we see the loudest ring-up. I will take spectra and upload them in another alog. This would account for all issues except the F1, LF, and RT OSEM drift, which I will plot and share in a separate alog.
We have also now made a transfer function comparison between the dynamical model, the first build (2024/01/05), and following the recent rebuild (2024/07/10). These plots were generated by running $(sussvn)/trunk/BBSS/Common/MatlabTools/plotallbbss_tfs_M1.m for cases 1 and 3 in the table. I've attached the results as a pdf, but the .fig files can also be found in the results directory, $(sussvn)/trunk/BBSS/Common/Results/allbbss_2024-Jan05vJuly10_X1SUSBS_M1/. These results have been committed to svn.
We're running a patched version of simuLines during the next calibration measurement run. The patch (attached) was provided by Erik to try to get around what we think are awg issues introduced (or exacerbated) by the recent awg server updates (mentioned in LHO:78757). Operators: there is nothing special to do; just follow the normal routine, as I applied the patch changes in place. Depending on the results of this test, I will either roll them back or work with Vlad to make them permanent (at least for LHO).
Simulines was run right after getting back to NOMINAL_LOW_NOISE. The script ran all the way until after "Commencing data processing", where it then gave:
Traceback (most recent call last):
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in <module>
    run(args.inputFile, args.outPath, args.record)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
    digestedObj[scan] = digestData(results[scan], data)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
    coh = np.float64( cohArray[index] )
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
    out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0
All five excitations looked good on ndscope during the run.
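The crash itself is a plain out-of-bounds index into an empty coherence array. A defensive check along these lines (a hypothetical sketch, not the actual simuLines code) would turn the crash into a readable error pointing at the empty array:

```python
import numpy as np

def coherence_at(coh_array, index):
    """Return the coherence at a given bin, failing loudly if the array
    is empty (e.g. because the underlying coherence calculation returned
    no data, as in the IndexError above)."""
    coh = np.asarray(coh_array)
    if coh.size == 0:
        raise RuntimeError(
            "coherence array is empty; check the data fetch and the "
            "scipy/gwpy environment before indexing"
        )
    return np.float64(coh[index])

# With a populated array this behaves like the original line:
print(coherence_at([0.1, 0.9, 0.99], 1))  # -> 0.9
```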
Also applied the following patch to simuLines.py before the run. Its purpose is to extend the sine definition so that discontinuities don't occur if a stop command is executed late. If all stop commands execute on time (the expected behavior), this change has no effect.
diff --git a/simuLines.py b/simuLines.py
index 6925cb5..cd2ccc3 100755
--- a/simuLines.py
+++ b/simuLines.py
@@ -468,7 +468,7 @@ def SignalInjection(resultobj, freqAmp):
#TODO: does this command take time to send, that is needed to add to timeWindowStart and fullDuration?
#Testing: Yes. Some fraction of a second. adding 0.1 seconds to assure smooth rampDown
- drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 1)
+ drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 10)
def signal_handler(signal, frame):
'''
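To illustrate the intent of the patch with a toy sketch (not actual awg code; the rates and frequencies here are made up): a stop command that lands late can fall past the end of a tightly defined waveform, which snaps the drive to zero mid-cycle, whereas the 10 s margin keeps a late stop inside a continuously defined sine.

```python
import numpy as np

fs = 16384            # hypothetical sample rate, Hz
freq = 33.0           # hypothetical excitation frequency, Hz
nominal = 10.0        # seconds of drive before the stop should land

def defined_drive(margin):
    """Sine defined for the nominal duration plus a safety margin,
    mirroring the duration argument in the patched awg.Sine call."""
    t = np.arange(0, int((nominal + margin) * fs)) / fs
    return t, np.sin(2 * np.pi * freq * t)

# A stop command that executes 2 s late:
late_stop = nominal + 2.0

t_short, _ = defined_drive(margin=1.0)   # the old "+ 1" margin
t_long, _ = defined_drive(margin=10.0)   # the patched "+ 10" margin

assert late_stop > t_short[-1]  # late stop falls past the short definition
assert late_stop < t_long[-1]   # but lands well inside the extended one
```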
Here's what I did:
cp /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini /ligo/home/vladimir.bossilkov/gitProjects/simulines/simulines/settings_h1.ini
./simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/20240711T234232Z.log
No special environment was used. Output:
2024-07-12 14:28:43,692 | WARNING | It is assumed you are parising a log file. Reconstruction of hdf5 files will use current INI file.
2024-07-12 14:28:43,692 | WARNING | If you used a different INI file for the injection you are reconstructing, you need to replace the default INI file.
2024-07-12 14:28:43,692 | WARNING | Fetching data more than a couple of months old might try to fetch from tape. Please use the NDS2_CLIENT_ALLOW_DATA_ON_TAPE=1 environment variable.
2024-07-12 14:28:43,692 | INFO | If you alter the scan parameters (ramp times, cycles run, min seconds per scan, averages), rerun the INI settings generator. DO NOT hand modify the ini file.
2024-07-12 14:28:43,693 | INFO | Parsing Log file for injection start and end timestamps
2024-07-12 14:28:43,701 | INFO | Commencing data processing.
2024-07-12 14:28:55,745 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240711T234232Z.hdf5
2024-07-12 14:29:11,685 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240711T234232Z.hdf5
2024-07-12 14:29:20,343 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240711T234232Z.hdf5
2024-07-12 14:29:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240711T234232Z.hdf5
2024-07-12 14:29:38,634 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240711T234232Z.hdf5
Seems good to me. Were you guys accidentally using some conda environment when running simulines yesterday? When running this I was in "cds-testing" (which is the default?!). I have had this error in the past due to borked environments [in particular scipy, which is the underlying code responsible for coherence], which is why I implemented the log-parsing function.
The fact that the crash was on coherence, and not the preceding transfer function calculation, rings the alarm bell that scipy is the issue. We experienced this once at LLO with a single bad conda environment that was corrected, though I stubbornly ran with a very old environment for a long time afterwards to make sure that error didn't come up.
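As a quick sanity check of scipy's coherence path (a hypothetical standalone snippet, not the simuLines call), a healthy environment should return a non-empty array with coherence near 1 at a shared line frequency:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1024.0
t = np.arange(0, 16, 1 / fs)
shared = np.sin(2 * np.pi * 50 * t)              # common 50 Hz line
x = shared + 0.1 * rng.standard_normal(t.size)
y = shared + 0.1 * rng.standard_normal(t.size)

# Magnitude-squared coherence; a borked scipy stack is the kind of thing
# that could hand back an empty array like the one in the traceback.
f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
assert Cxy.size > 0
assert Cxy[np.argmin(np.abs(f - 50))] > 0.9      # strong coherence at 50 Hz
```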
I ran this remotely, so I can't look at the PDF if I run 'pydarm report'.
I'll be in touch over teamspeak to get that resolved.
Attaching the calibration report
There are a number of WAY-out-there data points in this report.
Did you guys also forget to turn off the calibration lines when you ran it?
Not marking this report as valid.
Right, there was no expectation of this dataset being valid: the IFO was not thermalized and the cal lines remained on. The goal of this exercise was to demonstrate that the patched simulines version at LHO can successfully drive calibration measurements, and to that end the exercise was successful. LHO has recovered simulines functionality, and we can lay to rest the scary notion of regressing to our 3-hour-long measurement scheme for now.
The run was probably done in the 'cds' environment. At LHO, 'cds' and 'cds-testing' are currently identical. I don't know the situation at LLO, but LLO typically runs with an older environment than LHO.
Since it's hard to stay with fixed versions on conda-forge, it's likely several packages are newer at LHO vs. LLO cds environments.
It seems earthquakes causing similar magnitudes of on-site ground motion may or may not cause lockloss. Why is this happening? For similar events we would expect lock to be either always lost or always kept. One suspicion is that common versus differential motion might lend itself better to keeping or breaking lock.
- Lockloss is defined as H1:GRD-ISC_LOCK_STATE_N going to 0 (or near 0).
- I correlated H1:GRD-ISC_LOCK_STATE_N with H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON peaks between 500 and 2500 μm/s.
- I manually scrolled through the data from present to 2 May 2024 to find events.
- Manual, because 1) I wanted to start with a small sample size and quickly see if there was a pattern, and 2) I need to find events that caused lockloss and then go find similarly sized events where we kept lock.
- Channels I looked at include:
- IMC-REFL_SERVO_SPLITMON
- GRD-ISC_LOCK_STATE_N
- ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON ("CS_PEAK")
- SEI-CARM_GNDBLRMS_30M_100M
- SEI-DARM_GNDBLRMS_30M_100M
- SEI-XARM_GNDBLRMS_30M_100M
- SEI-YARM_GNDBLRMS_30M_100M
- SEI-CARM_GNDBLRMS_100M_300M
- SEI-DARM_GNDBLRMS_100M_300M
- SEI-XARM_GNDBLRMS_100M_300M
- SEI-YARM_GNDBLRMS_100M_300M
- ISI-GND_STS_ITMY_X_BLRMS_30M_100M
- ISI-GND_STS_ITMY_Y_BLRMS_30M_100M
- ISI-GND_STS_ITMY_Z_BLRMS_30M_100M
- ISI-GND_STS_ITMY_X_BLRMS_100M_300M
- ISI-GND_STS_ITMY_Y_BLRMS_100M_300M
- ISI-GND_STS_ITMY_Z_BLRMS_100M_300M
- SUS-SRM_M3_COILOUTF_LL_INMON
- SUS-SRM_M3_COILOUTF_LR_INMON
- SUS-SRM_M3_COILOUTF_UL_INMON
- SUS-SRM_M3_COILOUTF_UR_INMON
- SUS-PRM_M3_COILOUTF_LL_INMON
- SUS-PRM_M3_COILOUTF_LR_INMON
- SUS-PRM_M3_COILOUTF_UL_INMON
- SUS-PRM_M3_COILOUTF_UR_INMON
- ndscope template saved as neil_eq_temp2.yaml
- 26 events; 14 lockloss, 12 stayed locked (3 or 4 lockloss events may have non-seismic causes)
- After using CS_PEAK to find the events, I have so far used the ISI channels to analyse them.
- The SEI channels were created last week (only 2 events are captured in these channels so far).
- Conclusions:
- There are 6 CS_PEAK events above 1,000 μm/s in which we *lost* lock;
- In SEI 30M-100M
- 4 have z-axis-dominant motion, with either strong z-motion or no motion in SEI 100M-300M
- 2 have y-axis-dominated motion, with a lot of activity in SEI 100M-300M and y-motion dominating some of the time.
- There are 6 CS_PEAK events above 1,000 μm/s in which we *kept* lock;
- In SEI 30M-100M
- 5 have z-axis-dominant motion, with only general noise in SEI 100M-300M
- 1 has z-axis-dominant noise near the peak in CS_PEAK and strong y-axis-dominated motion starting 4 min prior to the CS_PEAK peak; it too has only general noise in SEI 100M-300M. This x- or y-motion starting about 4 min before the peak in CS_PEAK has been observed in 5 events; since Love waves precede Rayleigh waves, could these be Love waves?
- All events below 1000 μm/s that lose lock seem to have dominant y-motion in either or both of SEI 30M-100M and 100M-300M. However, the sample size is not large enough to convince me that shear motion is what is causing lockloss, but it is large enough to convince me to find more events and verify. (Some plots attached.)
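For bookkeeping, the above/below-1000 μm/s split can be tallied automatically once the events are collected into a list; a minimal sketch (the event list and field layout here are hypothetical, not the real dataset):

```python
def tally(events, threshold=1000.0):
    """Count lockloss vs. kept-lock events above and below a peak
    ground-velocity threshold in um/s (CS_PEAK)."""
    counts = {"above_lost": 0, "above_kept": 0, "below_lost": 0, "below_kept": 0}
    for peak, kept in events:
        band = "above" if peak >= threshold else "below"
        counts[f"{band}_{'kept' if kept else 'lost'}"] += 1
    return counts

# Toy event list of (peak um/s, lock kept?) tuples:
sample = [(1500, False), (1200, True), (800, False), (600, False), (400, True)]
print(tally(sample))  # -> {'above_lost': 1, 'above_kept': 1, 'below_lost': 2, 'below_kept': 1}
```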
In a study with student Alexis Vazquez (see the poster at https://dcc.ligo.org/LIGO-G2302420), we found that there was an intermediate range of peak ground velocities in EQs where lock could be lost or maintained. We also found some evidence that lockloss in this case might be correlated with high microseism (either ambient or caused by the EQ). See the figures in the linked poster under Findings and Validation.
One of the plots (2nd row, 2nd column) has the incorrect x-channel on some of the images (all posted images are correct, by chance). The patterns reported may not be correct; I will reanalyze.
[Dave, Erik]
Dave found that DACs in h1sush2a were in a FIFO HIQTR state since 2024-04-09 11:33 UTC.
FIFO HIQTR means that DAC buffers had more data than expected. DAC latency would be proportionally higher than expected.
The models were restarted, which fixed the issue.
The upper bound on sush2a latency for the first three months of O4B is 39 IOP cycles. At 2^16 cycles per second, that's a maximum of 595 microseconds. At 1 kHz that's 214 degrees of phase shift.
Normal latency is 3 IOP cycles: 46 microseconds, or 16 degrees of phase shift at 1 kHz.
The minimum latency while sush2a was in error was 4 cycles: 61 microseconds, or 22 degrees at 1 kHz.
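The latency and phase numbers follow directly from the 2^16 Hz IOP rate; a quick sketch of the arithmetic:

```python
IOP_RATE = 2 ** 16  # IOP cycles per second (65536 Hz)

def latency_us(cycles):
    """Convert IOP cycles of delay to microseconds."""
    return cycles / IOP_RATE * 1e6

def phase_deg(cycles, f_hz=1000.0):
    """Phase shift in degrees at f_hz for a delay of `cycles` IOP cycles."""
    return cycles / IOP_RATE * f_hz * 360.0

print(round(latency_us(39)))  # -> 595 (worst case while in error)
print(round(phase_deg(39)))   # -> 214 (degrees at 1 kHz)
print(round(latency_us(3)))   # -> 46  (normal latency)
print(round(phase_deg(4)))    # -> 22  (minimum while in error, at 1 kHz)
```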
SDFed