Closes FAMIS#26474, last checked 81923
The only thing of note is a small pressure spike at EX last week, visible in both the pressure and ctrl channels.
FAMIS Link: 26027
The only CPS channels that look elevated at high frequencies (see attached) are the following, which have been like this for a while:
TITLE: 01/21 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.79 μm/s
QUICK SUMMARY: Locked for 20 min; it looks like there were 2 automatic relocks overnight. The useism is peaking above 1 μm/s, so we'll see if we can relock after maintenance.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 01/21 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Just got back into Observing after popping out to adjust the OPO temperature. Our range had been steadily dropping over the past few locks and had gotten below 150 Mpc, and our squeezing was getting worse along with it, so I decided to try adjusting the OPO temperature before I left. That took us from an average of -3 dB down to -4.5 dB squeezing in the 1.7 kHz band. The evening has been quiet, and relocking earlier went well with nothing needing to be touched.
LOG:
00:08 Lockloss
01:12 NOMINAL_LOW_NOISE
01:15 Observing
05:54 Out of Observing to adjust OPO temp
06:00 Back into Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End
---|---|---|---|---|---|---
23:16 | | Corey | Optics lab | n | Looking for stuff | 00:13
Since we've been seeing ETMY mode 1 (and mode 6 to a lesser extent) increasing during lock stretches for the last 5 or 6 days, I've gone ahead and edited lscparams to use the new filter/gain configuration that TJ noted in 82337. He does say there that this configuration has previously been noted to cause mode 1 to start ringing back up, but we had a 20-hour stretch a couple of days ago with these settings and didn't see any ringing up, so I think we might be okay.
I've reloaded ISC_LOCK and VIOLIN_DAMPING.
TITLE: 01/21 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.30 μm/s
QUICK SUMMARY:
Currently relocking and at DRMI_LOCKED_CHECK_ASC. Last lockloss was due to an ETMX glitch, and we seem to be relocking with no issues so far.
01:15 Observing
TITLE: 01/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Other than an earthquake lockloss, it was a pretty quiet shift... but then in the last 20 min we had one of our ETMX glitches.
LOG:
Mon Jan 20 10:08:53 2025 INFO: Fill completed in 8min 50secs
TCmins [-75C, -73C] OAT (-1C, 30F). deltaTemp trip time 10:08:55.
TITLE: 01/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.26 μm/s
QUICK SUMMARY:
H1 has been locked for about 2hrs on this chilly (~22degF) morning with low winds and microseism between the 50th & 95th percentile.
12:19 UTC lockloss. IFO_NOTIFY says it was in NLN with SDF diffs, but we lost lock moments before I logged in.
13:58 UTC Observing. I adjusted the OPO temperature before going into Observing.
Something weird happened with the alert system last night: there were multiple times where IFO_NOTIFY went into ALERT_ACTIVE, but it didn't call Ryan every time that happened.
The squeezer was having trouble locking and staying locked in FDS, so once we had been out of Observing for 8 minutes, IFO_NOTIFY went into ALERT_ACTIVE, during which it should have called Ryan; maybe it didn't get the chance because the squeezer got back to FDS only 20 seconds later.
However, it dropped back out a few seconds later, and once those 8 minutes had gone by, IFO_NOTIFY went into ALERT_ACTIVE again, and this time it did call Ryan at 07:21 UTC.
When the SQZer got back to FDS for a few seconds two minutes later, that again took us out of ALERT_ACTIVE; IFO_NOTIFY then waited the 8 minutes again before going back into ALERT_ACTIVE and sitting there for 23 minutes while the SQZer tried to relock. During those 23 minutes, Ryan did not get called.
Then the same thing happened again: the SQZer got to FDS for a few seconds, taking us out of ALERT_ACTIVE and resetting the 8-minute timer. This time the node stayed in ALERT_ACTIVE for over 4 hours, and Ryan was not called until the end of that stretch.
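For clarity, here is a toy sketch of the timer behavior as I read it from the log (illustrative only, not the actual IFO_NOTIFY Guardian code): a momentary return to Observing resets the 8-minute countdown and the call flag, and no repeat call is made while the node sits in ALERT_ACTIVE. This reproduces the reset-and-single-call pattern seen overnight, though it doesn't explain the stretches where no call went out at all.

```python
ALERT_DELAY = 8 * 60  # seconds out of Observing before ALERT_ACTIVE (assumed)

def step(now, observing, state):
    """Advance the toy notifier one tick; `state` holds the timer and call flag."""
    if observing:
        state['left_at'] = None    # even a few seconds back in Observing...
        state['called'] = False    # ...resets the timer and the call flag
        return 'NOMINAL'
    if state['left_at'] is None:
        state['left_at'] = now     # start the out-of-Observing clock
    if now - state['left_at'] < ALERT_DELAY:
        return 'WAITING'
    if not state['called']:
        print('calling operator')  # one call only; no hourly repeats
        state['called'] = True
    return 'ALERT_ACTIVE'
```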
I've attached an ndscope and the IFO_NOTIFY log from last night, complete with my commentary.
Besides the times when the call did not go out as it was supposed to, it seems there may also need to be repeat calls during stretches where we sit out of Observing for longer periods of time, maybe once per hour or once per 30 minutes.
Tagging SQZ. Starting at 2025/01/20 07:02:12 UTC, the SQZ FC struggled to lock for 5 hours. The OPO, CLF, and PMC stayed locked during this time (plot).
It initially unlocked with the message "IR unlocked?" and then repeatedly locked back up to IR_FOUND, where it lost lock with the message "GR lost lock??". The checker is return ezca['SQZ-FC_TRANS_C_LF_OUTPUT'] > sqzparams.fcgs_trans_lock_threshold (threshold = 60), which makes sense: we sit at ~100 when locked, so we are nowhere near this threshold while locked. Unsure of the cause of the FC green unlocking; maybe something in the FC servo dragged the lock away, unless we are just seeing the unlock itself, plot and zoom.
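For context, a toy rendering of that checker in Guardian style (illustrative only: the real code lives in the SQZ Guardian node, and the threshold comes from sqzparams; here it is written inline):

```python
# Illustrative sketch, not the actual Guardian code. In Guardian, `ezca`
# is the EPICS channel-access object provided to node code.
FCGS_TRANS_LOCK_THRESHOLD = 60  # from sqzparams.fcgs_trans_lock_threshold

def fc_green_locked():
    # ~100 counts when locked, so the 60-count threshold has good margin;
    # a False return here is what triggers the "GR lost lock??" message
    return ezca['SQZ-FC_TRANS_C_LF_OUTPUT'] > FCGS_TRANS_LOCK_THRESHOLD
```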
TITLE: 01/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Relocking after the lockloss and currently at PREP_DC_READOUT_TRANSITION. I didn't officially change the damping for ETMY mode1, but it seems like the change will need to be made permanent. Besides the lockloss everything was quiet; I ran an initial alignment, helped ALSY a bit, and we have had no issues relocking.
LOG:
21:20 Changed damping for ETMY mode1 to be only FM1 and FM10 (nominal is FM1 FM8 FM10), and gain to -0.2 (same as TJ did yesterday 82340)
04:37 Lockloss
04:40 Initial alignment
TITLE: 01/18 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
Observing and have been Locked for over 10 hours. Range dipped a bit earlier for unknown reasons, but looks to be slowly coming back up.
I had forgotten to post my summary for Friday DAY due to some parts-searching, so here is what I had:
TITLE: 01/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
H1 locked entire shift! There was a short 1-min break for a calibration change.
LOG:
E. Goetz, J. Kissel, L. Dartez

Summary: There is evidence that some of the lines found in the gravitational-wave band are actually aliased lines from higher frequencies. So far it is unclear exactly how many of the lines in the run-averaged list are due to this problem, and whether the lines are stationary or whether violin ring-ups may induce more lines in band through aliasing artifacts. Further investigation is needed, but this investigation suggests that the current level of anti-aliasing is insufficient to suppress high-frequency artifacts aliasing into the gravitational-wave band.

Details: We used the live, test-point acquisition of the 524 kHz sampled DCPD data in DTT, channel H1:OMC-DCPD_B1_OUT (equivalent to the nominal H1:OMC-DCPD_B0_OUT channel used in loop). This channel had the same 524k-65k and 65k-16k decimation digital AA filtering applied. The time series was exported from DTT and processed by stitching the time segments together into a single series. One can then compute ASDs with scipy.signal.welch() of: 1) the full 524 kHz sampled data; 2) the 524 kHz data decimated by a factor of 32 (no additional AA filtering) to get 16 kHz sampled data; and 3) the 524 kHz data decimated with additional AA filtering using scipy.signal.decimate(), which has a built-in anti-aliasing filter. We also plotted in DTT the individual channels against the 16k H1:OMC-DCPD_B_OUT_DQ channel, showing that some of the lines are visible in the in-loop 16 kHz DCPD channel but not in the 524 kHz test-point channels.

Figure 1: ASD of raw 524 kHz data (blue), decimated data without any extra anti-aliasing filter applied (orange), and decimated data with additional anti-aliasing filtering (green). Orange peaks are visible above the blue and green traces above ~750 Hz.
Figure 2: ASD ratio of the decimated data without anti-aliasing filtering to the raw data, showing the noise artifacts.
Figure 3: Zoom of figure 2 near 758 Hz.
Figure 4: ASD computed from DTT showing DCPD B channels 1, 5, 9, and 13, and the H1:OMC-DCPD_B_OUT_DQ channel at the same time as channels 9 and 13 were acquired (a limitation of the front-end handling of 524 kHz test points in DTT).
Figure 5: Zoom of figure 4 near 758 Hz.

We were also interested in the time-variability of these artifacts. Watching the behaviour of H1:OMC-DCPD_B_OUT_DQ, we saw amplitude variations on the order of a factor of a few and frequency shifts on the order of 0.1 Hz, at least for the artifacts observed near 758 Hz. Figures 4 and 5 indicate that there are more artifacts not necessarily directly caused by aliasing; perhaps these are non-linearity artifacts? This needs further study. A count of the 0.125 Hz frequency bins from 0 to 8192 Hz in the ratio of the ASD downsampled without additional anti-aliasing filtering to the raw 524 kHz ASD indicates that ~4900 bins are above a threshold of 1.1 (though most of those bins are above 2 kHz, as indicated by figure 2).
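For illustration, here is a minimal sketch of that comparison using stand-in data (not the actual analysis code; the DTT export, segment stitching, and calibration are omitted, and all variable names are illustrative):

```python
import numpy as np
from scipy import signal

fs_full = 2**19   # 524288 Hz sample rate of the test-point data
decim = 32        # 524 kHz -> 16 kHz

# Stand-in for the stitched H1:OMC-DCPD_B1_OUT time series exported from DTT
x = np.random.randn(fs_full * 64)

def asd(data, fs, df=0.125):
    """One-sided ASD with df-Hz frequency resolution via Welch averaging."""
    f, pxx = signal.welch(data, fs=fs, nperseg=int(fs / df))
    return f, np.sqrt(pxx)

# 1) ASD of the full-rate data
f_full, a_full = asd(x, fs_full)

# 2) Naive decimation: keep every 32nd sample with no extra AA filtering,
#    so content above 8192 Hz folds down into the 0-8192 Hz band
f_16k, a_naive = asd(x[::decim], fs_full // decim)

# 3) Decimation with scipy's built-in anti-aliasing filter
x_aa = signal.decimate(x, decim, ftype='fir')
_, a_aa = asd(x_aa, fs_full // decim)

# Ratio of the naive-decimated ASD to the full-rate ASD flags aliased bins;
# counting bins above 1.1 mimics the tally quoted above
ratio = a_naive / a_full[: len(a_naive)]
print(np.sum(ratio > 1.1), "of", len(ratio), "0.125 Hz bins above 1.1")
```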
E. Goetz, L. Dartez

We temporarily added extra digital AA filtering in the DCPD A1/2 and B1/2 TEST banks (we are planning to revert to https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82313 on Tuesday) to see if we can suppress the aliased artifacts. Repeating the same procedure as before (computing the ratio of the ASD of the decimated data to the ASD of the full 524 kHz band data, both with and without the extra digital AA filtering), we see a significant improvement in the low-frequency artifacts.

The temporary filtering is just copies of the standard 524k-65k and 65k-16k filters, but it yields a significant reduction in low-frequency artifacts (see especially Figure 2). This suggests that improvements to the sensing-path anti-aliasing filtering would benefit detector sensitivity by reducing the impact of high-frequency artifacts aliasing in band.
The temporary TEST filter modifications that duplicated the decimation filters into additional filter banks have been reverted, back to the configuration of LHO aLOG 82313.
For easier comparison, I've attached ratio plots at the same size and scale as in the other aLOGs.
We haven't aligned the OMC ASC for a while, so today I used Gabriele's method: we inject lines into the four OMC ASC loops at different frequencies, then demodulate the 410 Hz PCAL line at those frequencies and at 410 Hz itself, to find which combination of offsets improves our optical gain the most.
The plot of BLRMS of the DCPD_SUM_OUT at 410 Hz vs. QPD offset is shown below.
The start and end times used were 17:22:40 UTC and 17:42:40 UTC.
The detector was in NLN, but squeezing was turned off.
The code is at /ligo/home/jennifer.wright/git/2025/OMC_Alignment/OMC_Alignment_2025_01_16.ipynb.
Usually the start and end times the plots use contain some time without the OMC ASC lines, but that was not possible here as we went to NLN_CAL_MEAS before I had turned off the lines.
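For reference, a minimal sketch of the BLRMS step (the DCPD sum channel name, the date, and the band edges are assumptions for illustration; the real analysis is in the notebook above):

```python
from gwpy.timeseries import TimeSeries

# Times from above; the date is assumed from the notebook filename
start, end = 'Jan 16 2025 17:22:40', 'Jan 16 2025 17:42:40'
data = TimeSeries.get('H1:OMC-DCPD_SUM_OUT_DQ', start, end)

# Band-limited RMS around the 410 Hz PCAL line: its height tracks the
# optical gain, which is what the QPD offsets are being tuned to maximize
blrms = data.bandpass(405, 415).rms(1.0)  # 1 s RMS stride

# Stepping each QPD offset and plotting this BLRMS against the offset
# value at the same times gives the curves described above
blrms.plot()
```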
Looking at the plot, I think we need to change
H1:ASC-OMC_A_PIT_OFFSET by -0.1
H1:ASC-OMC_A_YAW_OFFSET by 0.075
H1:ASC-OMC_A_PIT_OFFSET by -0.075
H1:ASC-OMC_A_YAW_OFFSET by 0.08
or something close to these.
The third and fourth channels in the above list should be B_PIT and B_YAW respectively, and I got the fourth offset value wrong. The corrected entries are:
OMC-ASC_QPD_B_PIT_OUTPUT by -0.075
OMC-ASC_QPD_B_YAW_OUTPUT by -0.08
D. Davis, E. Capote, O. Patane
There was a discussion recently in the Detchar tools channel about how to interpret the cumulative range plots generated on the summary pages, such as today's cumulative range plot. Specifically, it seems incorrect that we could accumulate 30% of our range below 30 Hz.
Derek has pointed out that this plot is misleading because the cumulative range calculated in this manner is actually computed somewhat incorrectly. In short, range can be thought of as analogous to SNR, a quantity that must be added in quadrature, so the order matters when calculating a cumulative range (i.e., the range acquired from 10-20 Hz, then 10-30 Hz, 10-40 Hz, etc.). The total cumulative range number, the one we think about all the time (160 Mpc, for example), is correct, but determining the range over a subset of the band (such as 10-30 Hz) needs to be done more carefully so it is not misleading.
Once we started discussing this, I pointed out that the way we compare ranges is also misleading: when we run our DARM integral comparison scripts, we subtract the cumulative ranges of two different DARM PSDs, but we subtract them in amplitude (Mpc) and not in quadrature (Mpc^2).
Derek has created an improved way to calculate cumulative range, which they have coined the "cumulative normalized range". To get right to the point: it is better to normalize the cumulative range squared by the total range. This example plot shows how the two methods differ. For a given DARM PSD, the cumulative normalized range better estimates the sensitivity gained over a particular band of frequency. The low-frequency portion is still very important (this results from the f^(-7/3) dependence in the range calculation), but we indeed gain very little sensitivity between 10-20 Hz, for example. You can also see that, with the normalized method, the curve where you integrate up in frequency and the curve where you integrate down in frequency intersect at about 50% of the range, which is what you would expect.
In equation form, this image attachment defines the total cumulative range, and this image attachment shows our definition of the normalized cumulative range.
In order to more sensibly compare two sensitivities by frequency, we have also derived a way to calculate the cumulative normalized range difference. The derivation is slightly more complicated, but the result is that you subtract the two cumulative range-squared quantities and then normalize by the sum of the two total ranges.
This image attachment shows the equation form of this.
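Since the definitions live in the image attachments, here is a sketch of what they amount to, reconstructed from the prose above (the notation and the constant C, which collects the inspiral-range prefactors, are my own):

```latex
% Range with the f^(-7/3) weighting mentioned above, and its cumulative form:
\[
  \mathcal{R}^2 = C \int_{f_{\min}}^{f_{\max}} \frac{df}{f^{7/3}\, S_n(f)},
  \qquad
  \mathcal{R}^2_{\mathrm{cum}}(f) = C \int_{f_{\min}}^{f} \frac{df'}{f'^{7/3}\, S_n(f')} .
\]
% Normalized cumulative range: cumulative range squared over total range
\[
  \hat{\mathcal{R}}(f) = \frac{\mathcal{R}^2_{\mathrm{cum}}(f)}{\mathcal{R}} ,
  \qquad \hat{\mathcal{R}}(f_{\max}) = \mathcal{R} .
\]
% Normalized cumulative range difference between two PSDs 1 and 2:
\[
  \Delta\hat{\mathcal{R}}(f)
    = \frac{\mathcal{R}^2_{\mathrm{cum},2}(f) - \mathcal{R}^2_{\mathrm{cum},1}(f)}
           {\mathcal{R}_1 + \mathcal{R}_2} ,
  \qquad \Delta\hat{\mathcal{R}}(f_{\max}) = \mathcal{R}_2 - \mathcal{R}_1 .
\]
```

Note that the difference curve ends exactly at the total range difference, which is the property exploited in the example below.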
To make sense of why this method is better than the method we use now, you can imagine that we have two PSDs, one with 100 Mpc of range, and one that is exactly the same, except that between 10-20 Hz there is an additional gain of 20 Mpc, such that the total range is now 120 Mpc. If you compare these two bizarre PSDs, you would expect that the cumulative range difference between the two from 10-20 Hz is 20 Mpc, and then zero thereafter. This is an example plot showing how the cumulative range difference would appear, using the method where you subtract the two cumulative ranges, and then the method where you apply this normalized range method. The normalized range calculation behaves as expected, while the method that straightforwardly subtracts the two cumulative ranges overshoots the range gain from 10-20 Hz, and then misleadingly indicates the range is decreasing above 20 Hz to make up for it.
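As a quick check against the formulas sketched above, the toy example works out:

```latex
\[
  \Delta\hat{\mathcal{R}}(f_{\max})
    = \frac{120^2 - 100^2}{100 + 120}\ \mathrm{Mpc}
    = \frac{4400}{220}\ \mathrm{Mpc}
    = 20\ \mathrm{Mpc},
\]
```

and since the two PSDs are identical above 20 Hz, the cumulative normalized difference is flat there, as expected.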
There is a lot of information to grasp here, and Derek and I will be posting a document to the DCC soon with a fuller explanation and full derivations. Oli has taken the time to implement these new methods in our DARM comparison scripts, and they will follow up here with more information about that.
As a start, I've only corrected these things in the range_compare script I previously made, based on the Hanford NoiseBudget darm_integral_compare script (81015). This script is a simplified version of the one used for creating NoiseBudget plots, so I thought it would be a good starting point for these changes. There are also plans to correct the calculations in other places (the summary pages and the official NoiseBudget scripts, for example).
All changes have been committed to git and are up to date in gitcommon/ops_tools/rangeComparison/. In addition to the changes needed to correct the cumulative range plots, I also swapped out the way we were grabbing data so it now uses GWPy, and I added a plot that shows the cumulative sum of the range over frequency. Here's a comparison of the old vs new cumulative range.
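For reference, a minimal sketch of the corrected calculation (not the rangeComparison code itself; the toy PSDs and the constant C absorbing the inspiral-range prefactors are placeholders):

```python
import numpy as np

def cumulative_range_squared(f, psd, f_min=10.0, C=1.0):
    """Cumulative range squared (Mpc^2) from a one-sided DARM PSD.
    C is a placeholder for the inspiral-range prefactors."""
    keep = f >= f_min
    f, psd = f[keep], psd[keep]
    integrand = C / (f ** (7.0 / 3.0) * psd)   # f^(-7/3) range weighting
    return f, np.cumsum(integrand * np.gradient(f))

# Toy spectra standing in for two DARM PSDs
freqs = np.linspace(10, 8192, 2**16)
psd_a = 1e-40 * (1 + (30 / freqs) ** 4)        # made-up noise shape
psd_b = psd_a * 0.8                            # uniformly quieter copy

f, r2_a = cumulative_range_squared(freqs, psd_a)
_, r2_b = cumulative_range_squared(freqs, psd_b)

# Normalized cumulative range: range squared over total range, so the
# curve ends at the familiar total-Mpc number without overweighting low f
r_norm_a = r2_a / np.sqrt(r2_a[-1])

# Normalized cumulative range difference: subtract the range-squared
# curves and divide by the sum of the two total ranges (ends at R_b - R_a)
diff = (r2_b - r2_a) / (np.sqrt(r2_a[-1]) + np.sqrt(r2_b[-1]))
```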
Derek and I have just updated a document to the DCC with a full workup of this change and some fun examples, see P2500021.