E. Goetz, J. Kissel, L. Dartez

Summary: There is evidence that some of the lines found in the gravitational wave band are actually aliased lines from higher frequencies. So far it is unclear exactly how many of the lines in the run-averaged list are due to this problem, and whether the lines are stationary or whether violin ring-ups may induce more lines in band via aliasing artifacts. Further investigation is needed, but this investigation suggests that the current level of anti-aliasing is insufficient to suppress high-frequency artifacts that alias into the gravitational wave band.

Details: We used the live, test-point acquisition of the 524 kHz sampled DCPD data in DTT, channel H1:OMC-DCPD_B1_OUT (equivalent to the nominal H1:OMC-DCPD_B0_OUT channel used in loop). This channel had the same 524k-65k and 65k-16k decimation digital AA filtering applied. The time series was exported from DTT and processed by stitching the time segments together into a single time series. One can then compute ASDs with the scipy.signal.welch() function of 1) the full 524 kHz sampled data, 2) the 524 kHz sampled data decimated (no additional AA filtering) by a factor of 32 to get 16 kHz sampled data, and 3) the 524 kHz sampled data decimated with additional AA filtering using the scipy.signal.decimate() function, which has a built-in anti-aliasing filter. We also plotted in DTT the individual channels against the 16k H1:OMC-DCPD_B_OUT_DQ channel, showing that some of the lines are visible in the in-loop DCPD 16 kHz channel but not in the test-point 524 kHz channels.

Figure 1: ASD of raw 524 kHz data (blue), decimated data without any extra anti-aliasing filter applied (orange), and decimated data with additional anti-aliasing filtering (green). Orange peaks are visible above the blue and green traces above ~750 Hz.
Figure 2: ASD ratio of the decimated data without anti-aliasing filtering to the raw data, showing the noise artifacts.
Figure 3: Zoom of Figure 2 near 758 Hz.
Figure 4: ASD computed from DTT showing DCPD B channels 1, 5, 9, 13 and the H1:OMC-DCPD_B_OUT_DQ channel at the same time as channels 9 and 13 were acquired (a limitation of the front-end handling of 524 kHz test points to DTT).
Figure 5: Zoom of Figure 4 near 758 Hz.

We were also interested in the time variability of these artifacts and watched the behaviour of H1:OMC-DCPD_B_OUT_DQ; we saw amplitude variations on the order of factors of a few and frequency shifts on the order of 0.1 Hz, at least for the artifacts observed near 758 Hz. Figures 4 and 5 indicate that there are additional artifacts not necessarily caused directly by aliasing; perhaps these are non-linearity artifacts? This needs further study.

A count of the 0.125 Hz frequency bins from 0 to 8192 Hz in the ratio between the downsampled-without-additional-anti-aliasing ASD and the raw 524 kHz ASD indicates that ~4900 bins are above a threshold of 1.1 (though most of those bins are above 2 kHz, as indicated by Figure 2).
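A minimal sketch of the processing described above (not the exact analysis code), assuming the stitched 524 kHz time series has already been loaded into a NumPy array x; the welch settings and the FIR choice in scipy.signal.decimate() are assumptions, while the factor-of-32 decimation and the 0.125 Hz binning come from the text:

import numpy as np
from scipy import signal

# x = ...  # stitched 524 kHz DCPD time series exported from DTT (placeholder)
fs_full = 2**19                    # 524288 Hz native rate
decim = 32                         # 524 kHz -> 16 kHz
fs_low = fs_full // decim
nperseg = int(fs_full / 0.125)     # 0.125 Hz frequency resolution

# 1) ASD of the full-rate data
f_full, p_full = signal.welch(x, fs=fs_full, nperseg=nperseg)
asd_full = np.sqrt(p_full)

# 2) Naive decimation: keep every 32nd sample, no extra anti-aliasing
f_lo, p_naive = signal.welch(x[::decim], fs=fs_low, nperseg=nperseg // decim)
asd_naive = np.sqrt(p_naive)

# 3) Decimation with scipy's built-in anti-aliasing filter
x_aa = signal.decimate(x, decim, ftype='fir', zero_phase=True)
f_aa, p_aa = signal.welch(x_aa, fs=fs_low, nperseg=nperseg // decim)
asd_aa = np.sqrt(p_aa)

# Ratio of the naively decimated ASD to the full-rate ASD (interpolated onto
# the 16 kHz frequency grid), and the count of 0.125 Hz bins above threshold
ratio = asd_naive / np.interp(f_lo, f_full, asd_full)
print(np.count_nonzero(ratio > 1.1), "bins of", ratio.size, "exceed 1.1")

The same ratio formed with asd_aa instead of asd_naive shows how much of the excess is removed when proper anti-aliasing is applied before downsampling.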
E. Goetz, L. Dartez

We temporarily added extra digital AA filtering in the DCPD A1/2 and B1/2 TEST banks (we are planning to revert to https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82313 on Tuesday) to see if we can suppress the aliased artifacts. Repeating the same procedure as before (computing the ratio of the ASD of the decimated data to the ASD of the full 524 kHz band data, both with and without the extra digital AA filtering), we see a significant improvement in the low-frequency artifacts. The temporary filtering is just copies of the standard 524k-65k and 65k-16k filters, but it shows a significant reduction in low-frequency artifacts (see especially Figure 2). This suggests that improvements to the sensing-path anti-aliasing filtering would benefit detector sensitivity by reducing the impact of high-frequency artifacts that are aliased into band.
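A hedged stand-in illustrating what the temporary filtering does: the anti-aliasing low-pass is effectively applied a second time before downsampling. scipy's firwin/filtfilt are used here in place of the actual 524k-65k and 65k-16k filter designs, and the cutoff is an assumption:

import numpy as np
from scipy import signal

def decimate_with_extra_aa(x, q=32):
    """Stand-in for the temporary TEST-bank configuration: apply an
    anti-aliasing low-pass twice before keeping every q-th sample, versus
    once for the nominal path. scipy's FIR design replaces the real
    front-end 524k-65k / 65k-16k filters."""
    cutoff = 0.8 / q                       # assumed cutoff, below the 16 kHz Nyquist
    taps = signal.firwin(20 * q + 1, cutoff)
    y = signal.filtfilt(taps, [1.0], x)    # first AA pass (nominal filtering)
    y = signal.filtfilt(taps, [1.0], y)    # second AA pass (the temporary extra filtering)
    return y[::q]

With this in hand, the same ASD-ratio comparison as above can be repeated with and without the extra stage of filtering.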
The temporary TEST filter modifications that duplicated the decimation filters into additional filter banks have been reverted, back to the configuration of LHO aLOG 82313.
For easier comparison, I've attached ratio plots at the same size and scale as in the other aLOGs.
Fri Jan 17 10:09:34 2025 INFO: Fill completed in 9min 30secs
Laser Status:
NPRO output power is 1.842W
AMP1 output power is 70.11W
AMP2 output power is 137.6W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 30 days, 21 hr 54 minutes
Reflected power = 25.75W
Transmitted power = 102.5W
PowerSum = 128.2W
FSS:
It has been locked for 0 days 2 hr and 54 min
TPD[V] = 0.773V
ISS:
The diffracted power is around 4.0%
Last saturation event was 0 days 2 hours and 54 minutes ago
Possible Issues:
PMC reflected power is high
TITLE: 01/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY:
H1 had 2 locklosses overnight which came back automatically. There was a drop from 0944-0945 due to PI24. Microseism is squarely between the 50th and 95th percentile and winds are low. Nuc31's USGS website needed a refresh; nuc35's MC2 & PR2 cameras are offline/blue.
TITLE: 01/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 22:34 UTC (7hr 30 min lock)
Smooth shift; locked the whole time.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LAZER HAZ (⌐■_■) | LVEA | !!!YES!!! | LVEA = LASER HAZARD! | 16:51 |
00:28 | JOG | Camilla | Y arm | n | Improve or maintain health | 01:00 |
TITLE: 01/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Tony
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 22:34 UTC
LOG:
None
The increase in space temperature in zones 2A and 5 in the LVEA this morning appears to be from sun exposure on the building. Both zones experienced rapid increases at the same time, at the same rate of increase. Both zones have the most sun exposure during the morning hours. The space sensors are located in the exterior walls, which makes them susceptible to radiant heating during periods of direct sun exposure. These sensor locations also caused issues at the mid and end stations, where the southern exposures routinely read colder when the winds blow. I have made slight adjustments to the set points in zones 2A and 5 to accommodate the sun exposure. If it continues to be a problem, sensor relocation will likely be needed.
We haven't aligned the OMC ASC for a while, so today I did Gabriele's method: we inject lines in the four OMC ASC loops at different frequencies, then demodulate the 410 Hz PCAL line, first at 410 Hz and then at these injection frequencies, to look at which combination of offsets improves our optical gain the most.
The plot of BLRMS of the DCPD_SUM_OUT at 410 Hz vs. QPD offset is shown below.
The start and end times used were 17:22:40 UTC and 17:42:40 UTC.
The detector was in NLN, but squeezing was turned off.
The code is at /ligo/home/jennifer.wright/git/2025/OMC_Alignment/OMC_Alignment_2025_01_16.ipynb.
Usually the start and end times the plots use contain some time without the OMC ASC lines, but that was not possible here as we went to NLN_CAL_MEAS before I had turned off the lines.
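For illustration, a rough sketch of the demodulation idea (not the notebook code at the path above): extract the 410 Hz PCAL line amplitude from DCPD_SUM, then demodulate that slow envelope at each OMC ASC dither frequency to see which way each offset moves the optical gain. The filter choices, bandwidths, and function names here are placeholders:

import numpy as np
from scipy import signal

def line_envelope(x, fs, f_line=410.0, bw=0.5):
    """Amplitude of a single line via I/Q demodulation plus a low-pass."""
    t = np.arange(len(x)) / fs
    i = x * np.cos(2 * np.pi * f_line * t)
    q = x * np.sin(2 * np.pi * f_line * t)
    sos = signal.butter(4, bw, fs=fs, output='sos')
    return 2 * np.hypot(signal.sosfiltfilt(sos, i), signal.sosfiltfilt(sos, q))

def dither_coupling(envelope, fs, f_dither):
    """Demodulate the line-amplitude envelope at one ASC dither frequency;
    the phase of the result indicates which sign of offset increases the
    optical gain, and the magnitude how strongly."""
    t = np.arange(len(envelope)) / fs
    return np.mean(envelope * np.exp(-2j * np.pi * f_dither * t))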
Looking at the plot, I think we need to change
H1:ASC-OMC_A_PIT_OFFSET by -0.1
H1:ASC-OMC_A_YAW_OFFSET by 0.075
H1:ASC-OMC_A_PIT_OFFSET by -0.075
H1:ASC-OMC_A_YAW_OFFSET by 0.08
or something close to these.
The third and fourth channels in the above list should be B_PIT and B_YAW respectively, and I got the fourth offset value wrong:
OMC-ASC_QPD_B_PIT_OUTPUT by -0.075
OMC-ASC_QPD_B_YAW_OUTPUT by -0.08
As a follow-up to my other post about ETMX glitches, I looked at using a strategy from DRMI locking to try to help the IFO ride through the high-frequency glitches that have been causing locklosses. On the BS suspension, the ISC path includes filters that whiten the high-frequency ISC signal, a limit is applied to that whitened signal, and then it is dewhitened. I think the filters used for this are a zpk(1, 200, 1) and its inverse, with a limit of 50000 in the ISC input to the BS.
I attempted to look at the impact of doing that on a couple of glitches leading up to a lockloss. I got the ESD drive data from one of the locklosses and used lsim in MATLAB to model the change in the ESD time series. The attached image shows the time series for each step of whiten, limit, and dewhiten compared to the original glitch. It's not a proper model of the DARM loop (Sheila and I might talk about doing that); I just wanted to see if the simplest estimate would blow up before digging deeper into it.
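A rough Python stand-in for that lsim check (the original was done in MATLAB): whiten with a zpk(1, 200, 1)-style filter (zero at 1 Hz, pole at 200 Hz, here normalized to unity DC gain, which is an assumption), clip at the 50000 count limit, then dewhiten with the exact inverse. The 16384 Hz sample rate is also an assumption:

import numpy as np
from scipy import signal

fs = 16384.0        # assumed ESD drive sample rate
limit = 50000       # limit used in the BS ISC path, per the text

# s-plane zpk with frequencies converted from Hz; gains chosen so the pair
# is exactly inverse and each has unity DC gain
whiten = signal.ZerosPolesGain([-2*np.pi*1.0], [-2*np.pi*200.0], 200.0)
dewhiten = signal.ZerosPolesGain([-2*np.pi*200.0], [-2*np.pi*1.0], 1.0/200.0)

def whiten_limit_dewhiten(drive):
    t = np.arange(len(drive)) / fs
    _, w, _ = signal.lsim(whiten, drive, t)     # whiten
    w = np.clip(w, -limit, limit)               # apply the limit
    _, d, _ = signal.lsim(dewhiten, w, t)       # dewhiten
    return d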
Thick blue is the original ESD drive from one of the quadrants, red is the whitened signal, yellow is the whitened signal limited to below the saturation level for the ESD, and the thin purple line is the dewhitened, final time series. The thin line doesn't show crazy behavior and stays below the saturation threshold (2^19*275, which comes from adjustments made to accommodate the new DAC on ETMX).
I expect if this worked, it would inject some higher frequency junk into this segment, but the glitches already do that. The hope is this would reduce the drive from these high frequency saturations and let the IFO ride them out.
J. Kissel

More changes along the lines of LHO:82261. As we continue to explore the configurations of the OMC DCPD channels, probing anti-aliasing and ADC noise, we find it helpful to have all of the frequency-dependent, actual physical channel filter differences available in all four test banks. As such, I've
- changed the name of the gain(0.25) filter from "sum2avg" to "sum4avg." Pun intended.
- Copied the DCPD A "NewV2A" and "NewAW" filters from A0, now called "A_V2A" and "A_AW", into FM2 and FM3,
- Copied the DCPD B "NewV2A" and "NewAW" filters from B0, now called "B_V2A" and "B_AW", into FM4 and FM5, and
- Moved all of the channel-independent gains to the bottom row:
    FM6: "18b_cts2V" gain of 40 / 2^18 [V/ct]
    FM7: "sum4avg" gain of 0.25
    FM8: "A2mA"
- keeping only the FM1 1 Hz 5th-order elliptic highpass and the digital AA decimation filters, Dec65K and Dec16K, in FM9 and FM10 in place.

This filter file change has been loaded (because we're not in observing at the moment) and committed to the userapps repo, /opt/rtcds/userapps/release/cds/h1/filterfiles/H1IOPOMC0.txt rev 30421.
TITLE: 01/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
Commissioning Activities:
Calibration sweep: completed
OMC ASC alignment test: completed
PR2 beam spot measurement: ongoing
PEM CPS slider ghost beam investigations: ongoing
SRC injection sweeps: completed
Squeeze measurements: ongoing
Lockloss from NLN_CAL_MEAS @ 18:56 UTC
Alarm Handler:
Dust PSL 101 again.
Camilla, Sheila, Erik
Erik points out that we've lost lock 54 times since November in the guardian states 557 and 558 (transition from ETMX or low noise ESD ETMX).
We thought that part of the problem with this state was a glitch caused when the boost filter in DARM1 FM1 is turned off, which motivated Erik's change to the filter ramping on Tuesday (82263); that change was later reverted after two locklosses that happened 12 seconds after the filter ramp (82284).
Today we added 5 seconds to the pause after the filter is ramped off (previously the filter ramp time and the pause were both 10 seconds long, now the filter ramp time is still 10 seconds but the pause is 15 seconds). We hope this will allow us to better tell if the filter ramp is the problem or something that happens immediately after.
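Schematically, the change looks something like the following Guardian-style snippet (not the actual ISC_LOCK code; the state and timer names are made up, and ezca is assumed to be provided by the Guardian environment):

from guardian import GuardState

RAMP_TIME = 10    # seconds; FM1 still ramps off over its configured 10 s
PAUSE_TIME = 15   # seconds; previously 10, now 15

class TURN_OFF_DARM1_FM1(GuardState):   # hypothetical state name
    def main(self):
        ezca.switch('LSC-DARM1', 'FM1', 'OFF')   # starts the 10 s filter ramp
        self.timer['fm1_pause'] = PAUSE_TIME     # wait longer than the ramp

    def run(self):
        # Hold here until the pause elapses, so a lockloss ~12 s after the
        # ramp can be separated from whatever the next step does
        return self.timer['fm1_pause']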
In the last two lock acquisitions, we've had the fast DARM and ETMY L2 glitch 10 seconds after DARM1 FM1 was turned off. Plots attached from Jan 26th and zoom, and Jan 27th and zoom. We expect this means the fast glitch is from FM1 turning off, but we've seen this glitch come and go in the past, e.g. 81638, where we thought we fixed the glitch by never turning on DARM_FM1, but we were still turning FM1 on, just later in the lock sequence.
In the locklosses we saw on Jan 14th (82277) after the OMC change (plot), I don't see the fast glitch, but there is a larger, slower glitch that causes the lockloss. One difference between that date and recently is that the SUS counts are double the size. We always have the large slow glitch, but when the ground is moving more we struggle to survive it? Did the 82263 h1omc change fix the fast glitch from FM1 turning off (which seems to come and go), and were we just unlucky with the slower glitch and high ground motion the day of the change?
Can see from the attached microseism plot that it was much worse around Jan 14th than now.
Around 2025-01-21 22:29:23 UTC (gps 1421533781) there was a lock-loss in the ISC_LOCK state 557 that happened before FM1 was turned off.
It appears to have happened about 15 seconds after executing the code block where self.counter == 2. This is about halfway through the 31 second wait period before executing the self.counter == 3,4 blocks.
See attached graph.
Thu Jan 16 10:06:15 2025 INFO: Fill completed in 6min 12secs
Jordan confirmed a good fill curbside. TCs started high around +30C so trip temp was raised to -30C for today's fill. TCmins [-55C, -54C] OAT (2C, 36F).
Robert and I just went into the LVEA for Commissioning activities and the lights were already on. Expect they had been left on since Tuesday.
Opened FRS33087 to potentially install a Gneiss environment monitor in the LVEA to read light levels via EPICS.
Latest Calibration:
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini;gpstime
notification: end of measurement
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250116T163130Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250116T163130Z.xml saved
diag> quit
EXIT KERNEL
2025-01-16 08:36:40,405 bb measurement complete.
2025-01-16 08:36:40,405 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250116T163130Z.xml
2025-01-16 08:36:40,405 all measurements complete.
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini;gpstime
PST: 2025-01-16 08:40:33.517638 PST
UTC: 2025-01-16 16:40:33.517638 UTC
GPS: 1421080851.517638
2025-01-16 17:03:33,281 | INFO | 0 still running.
2025-01-16 17:03:33,281 | INFO | gathering data for a few more seconds
2025-01-16 17:03:39,283 | INFO | Finished gathering data. Data ends at 1421082236.0
2025-01-16 17:03:39,501 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2025-01-16 17:03:39,501 | INFO | Commencing data processing.
2025-01-16 17:03:39,501 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2025-01-16 17:04:16,833 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250116T164034Z.hdf5
2025-01-16 17:04:16,840 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250116T164034Z.hdf5
2025-01-16 17:04:16,845 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250116T164034Z.hdf5
2025-01-16 17:04:16,850 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250116T164034Z.hdf5
2025-01-16 17:04:16,854 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250116T164034Z.hdf5
ICE default IO error handler doing an exit(), pid = 1289567, errno = 32
PST: 2025-01-16 09:04:16.931501 PST
UTC: 2025-01-16 17:04:16.931501 UTC
GPS: 1421082274.931501
At 17:48:23 Wed 15jan2025 PST all end station receivers of long-range-dolphin IPC channels originating from h1lsc saw a single IPC receive error.
The models h1susetm[x,y], h1sustms[x,y] and h1susetm[x,y]pi all receive a single channel from h1lsc and recorded a single receive error at the same time. No other end station models receive from h1lsc.
On first investigation there doesn't appear to be anything going on with h1lsc at this time to explain this.
FRS33085 is an umbrella ticket covering any IPC errors seen during O4.
Yesterday's IPC receive error was the fourth occurrence during O4; we are averaging roughly one every six months.
I have cleared the end station SUS errors with a DIAG_REST when H1 was out of observe.
Repeated 82151, with H1:IOP-OAF_L0_MADC{2,3}_TP_CH{10-13} 65kHz channels on CO2Y. WP# 12261.
Plots attached of the DC and AC out channels. These signals are straight from the PD in counts, before the filtering to undo the D1201111 pre-amp listed in 81868. PWM is at 5kHz, as can be seen in the spectrum.
I misread the graph, for CW 100%
(Niko, Corey, Hugh, Keita, Georgia, Craig, Richard, Fil, Ed)
Today was the big day to squeeze in a variety of HAM1 Tasks. Once the doors were removed this morning, a suite of activities ensued:
Hugh will post specifics for L4C installation and for remaining tasks (and I will post photos).
Keita will post specifics for REFL_B PD installation.
Since then, there has been a recent search for the D1300278 cables, so just for documentation, I want to update this aLOG to note that the cable for this new REFL PD (LSC REFL B) was D1300278-V2-S1301459 (the shorter 106" cable), entered into ICS for this installation in Nov 2018.