The past few weeks have seen rocky performance out of the calibration pipeline and its IFO-tracking capabilities. Much, but not all, of this is due to [my] user error. Tuesday's bad calibration state is a result of my mishandling of the recent drivealign L2L gain changes for the ETMX TST stage (LHO:78403, LHO:78425, LHO:78555, LHO:79841).
The current practice adopted by LHO with respect to these gain changes is the following:
1. Identify that KAPPA_TST has drifted from 1 by some appreciable amount (1.5-3%), presumably due to ESD charging effects.
2. Calculate the DRIVEALIGN gain adjustment needed to cancel out the change in ESD actuation strength. This is done in the DRIVEALIGN bank because it is far enough downstream to affect only the control signal being sent to the ESD; it is also downstream of the calibration TST excitation point.
3. Adjust the DRIVEALIGN gain by the calculated amount (if KAPPA_TST has drifted +1%, this corresponds to a -1% change in the DRIVEALIGN gain).
3a. Do not propagate the new drivealign gain to CAL-CS.
3b. Do not propagate the new drivealign gain to the pyDARM ini model.
After step 3 it should be as if the IFO is back in the state it was in when the last calibration update took place, i.e. as if no ESD charging has taken place (since it is being canceled out by the DRIVEALIGN gain adjustments). It is also worth noting that after these adjustments the SUS-ETMX drivealign gain and the CAL-CS ETMX drivealign gain will no longer be copies of each other (see image below).
The reasoning behind 3a and 3b is that, by using these adjustments to counteract IFO changes (in this case ESD drift) relative to when the IFO was last calibrated, operators and commissioners in the control room can comfortably perform these changes without having to invoke the entire calibration pipeline. The other approach, adopted by LLO, is to propagate the gain changes to both CAL-CS and pyDARM each time and follow up with a fresh calibration push. That approach leaves less to 'be remembered', since CAL-CS, SUS, and pyDARM are always in sync, but it comes at the cost of having to turn a larger crank each time there is a change.
Somewhere along the way I updated the TST drivealign gain parameter in the pyDARM model even though I shouldn't have. At this point I don't recall whether I was confused because the two sites operate differently, or whether I was just running a test, left the parameter changed in the model template file by accident, and subsequently forgot about it. In any case, the drivealign gain parameter change made its way through along with the actuation delay adjustments I made to compensate for both the new ETMX DACs and for residual phase delays that haven't been properly compensated for recently (LHO:80270). This happened in commit 0e8fad of the H1 ifo repo. I should have caught this when inspecting the diff before pushing the commit, but I didn't. I have since reverted this change (H1 ifo commit 41c516).
During the maintenance period on Tuesday, I took advantage of the fact that the IFO was down to update the calibration pipeline to account for all of the residual delays in the actuation path we hadn't been properly compensating for (LHO:80270). This is something I've done several times before; the combination of how well the calibration pipeline has been working in O4 and the minor nature of the phase delay changes I was instituting led me to expect that we would come back online to a better calibrated instrument.
This was wrong. What I'd actually done was install a calibration configuration in which the CAL-CS drivealign gain and the pyDARM model's drivealign gain parameter were different. This is bad because pyDARM generates the FIR filters used by the downstream GDS pipeline, and those filters are embedded with knowledge of what's in CAL-CS by way of the parameters in the model file. In short, CAL-CS was doing one thing and GDS was correcting for another (a toy illustration of the resulting scaling error is sketched at the end of this entry).
Where do we stand? At the next available opportunity, we will take another calibration measurement suite and use it to reset the calibration one more time, now that we know what went wrong and how to fix it. I've uploaded a comparison of a few broadband PCAL measurements (image link). The blue curve is the current state of the calibration error. The red curve is the calibration state during the high-profile event earlier this week. The brown curve is from last Thursday's calibration measurement suite, taken as part of the regularly scheduled measurements.
Moving forward, I and others in the Cal group will need to adhere more strictly to the procedures we already have in place:
1. double check that any changes include only what we intend at each step
2. commit all changes to any report in place immediately and include a useful log message (we also need to fix our internal tools to handle the report git repos properly)
3. only update the calibration while there is a thermalized IFO that can be used to confirm that things will come back properly, or, if done while the IFO is down, require Cal group sign-off before going to observing
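As a toy illustration of the mismatch (a minimal sketch with illustrative numbers, not pyDARM code): if CAL-CS applies one drivealign gain while the GDS FIR filters were generated from a model that assumed a different gain, the actuation part of the reconstructed strain is scaled by the ratio of the two gains at frequencies where the actuation path dominates.

# Minimal sketch, illustrative values only (not pyDARM code).
gain_calcs = 184.65   # drivealign L2L gain actually applied in CAL-CS (example)
gain_model = 188.92   # drivealign gain baked into the pyDARM model / GDS filters (example)

ratio = gain_calcs / gain_model
print(f"actuation-path scaling: {ratio:.4f} "
      f"({100 * (ratio - 1):+.2f}% error where actuation dominates DARM)")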
Daniel has the Beckhoff slow controls system offline for investigation.
I've bypassed the following alarms:
Bypass will expire:
Thu Sep 26 10:54:08 PM PDT 2024
For channel(s):
H1:PEM-C_CER_RACK1_TEMPERATURE
H1:PEM-C_MSR_RACK1_TEMPERATURE
H1:PEM-C_MSR_RACK2_TEMPERATURE
H1:PEM-C_SUP_RACK1_TEMPERATURE
all alarms are active again.
I loaded the new H1LSC.txt filter file into h1lsc. This has added Elenna's "new0926" filter to PRCLFF. This filter is currently turned off and has not been switched on recently.
Thu Sep 26 08:14:57 2024 INFO: Fill completed in 14min 52secs
Jordan confirmed a good fill curbside.
TITLE: 09/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
When I arrived, the IFO was trying to lock itself after a lockloss from NLN.
Unknown Lockloss
While I was getting the screenshots and typing this up, the IFO went to PRMI twice, so I decided to run an initial alignment after that big, gusty wind storm yesterday.
A follow up to alog78711 with all of the 11/12 completed assemblies.
I made 3 comparison plots: all suspensions (the 1st page contains the legend mapping measurement date to the corresponding sus s/n), both of the suspended versions, and all 9 of the freestanding versions (there is 1 left to be finished; we're waiting on a part rework/repair).
We are still working on fine tuning the results of two HRTS suspensions, measured on 08_30_2100 and 09_05_1800 (dark green and purple lines in the plot shown here), especially for the V and R DOFs. The magnitude (pages 03 and 04 here) is lower than the rest of the batch, and there is also some cross coupling from the R DOF.
Continuing the investigation of the PRCL coupling and the attempts at applying a PRCL feedforward...
Since we have performed several injections into PRCL over the last few weeks, we can track the PRCL coupling. I made this plot comparing the DARM/PRCL transfer function from three different times: an injection performed before testing a new feedforward on Sept 16, an injection from the noise budget, and an injection done while updating the sensing matrix for SRCL and PRCL after rephasing POP9, Sept 23. The PRCL coupling is relatively stable from all of these tests.
However, the feedforward we have tried has not been working, which has been very confusing. I think I have discovered the reason why: in addition to the measure of the DARM/PRCL transfer function, there is a measure of the "preshaping", which includes the high pass filter we apply to the feedforward. Looking at my code, I found that I had been using the SRCL preshaping to calculate the PRCL coupling, which uses a different high pass filter (comparison of the three high pass filters). I should have been using the MICH preshaping measurement instead, since MICH and PRCL use the same high pass, and everything downstream from that point is the same. Sorry everyone for that mistake.
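For concreteness, here is a minimal sketch of how the preshaping enters the coupling estimate (the arrays are placeholders for the measured transfer functions; the real fitting code is more involved):

import numpy as np

# Placeholder arrays on a common frequency vector, standing in for the
# measured DARM/PRCL injection TF and the preshaping TF (which contains the
# feedforward high-pass). Using the SRCL preshaping here instead of the
# MICH/PRCL one is exactly the mistake described above.
freq = np.logspace(1, 3, 500)                        # 10 Hz - 1 kHz
darm_over_prcl = np.ones_like(freq, dtype=complex)   # measured coupling TF
preshaping_mich = np.ones_like(freq, dtype=complex)  # MICH/PRCL high-pass preshaping

# The quantity to fit for the feedforward is the measured TF with the correct
# preshaping divided out, so the fitted filter sits downstream of the same
# high-pass that will actually be used in the PRCL FF path.
coupling_to_fit = darm_over_prcl / preshaping_mich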
So, when using the proper preshaping, a PRCL feedforward fit should work. Since we already have this data taken, I ran a quick fit of the feedforward, and compared it to previous fits. Camilla's fit from July 11 was successful, and I can see that the low frequency gain and phase is very similar to the new fit. I believe one reason her fit no longer works is the coupling shape from up to about 70 Hz has changed. Meanwhile, the incorrect fits I tried recently are very different in magnitude and phase (shown in green as "old incorrect fit"), explaining why they failed.
This new fit requires about a factor of 5 more gain than Camilla's fit above 200 Hz, which I think should be ok. The PRCL injections see a large bandstop around 250 Hz, so we don't measure the coupling there. If we can turn off these bandstops we can probably get a much better measurement up to 1000 Hz, which will help this shaping.
Overall, the fit should subtract up to 10 times the noise from 10-30 Hz and on average 3 times the noise up to 100 Hz. Based on our recent noise budget projections, which show that PRCL is directly limiting DARM noise up to 30 Hz, this will have a positive effect on the low frequency sensitivity.
The new filter is placed in the PRCL FF bank in FM6, labeled as "new0926". Since we are in observing, I didn't load the model, just saved the filter. To test, load the filter bank, and engage with a gain of 1, along with the FM10 highpass.
I recommend that this filter is tested along with the commissioning work for Thursday. It would be useful to have an injection before the filter is applied and after the filter is applied, each with enough averages to provide a good fit. If this feedforward filter doesn't work, we can refit. If it does, we can look at fitting iteratively for further subtraction. Please try to get about 30 averages for both measurements, and if possible, boost the injection strength above 50 Hz to get a bit better coherence.
If the feedforward doesn't work, please also additionally run a PRCL preshaping measurement. This means running with the PRCL FF input set OFF, with the high pass filter ON and a gain of 1. I think we have a template for this in the LSC feedforward folder. If not, the preshaping template of the MICH preshaping ("MICHFF") can be repurposed with the appropriate channels. This will help avoid the confusing mistake I previously made.
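A minimal sketch of these settings using ezca (the filter-bank name LSC-PRCLFF and the gain channel are assumed here; check the MEDM screen before using anything like this):

from ezca import Ezca

ezca = Ezca(ifo='H1')
FF = 'LSC-PRCLFF'   # assumed name of the PRCL feedforward filter bank

# Testing the new fit: input on, FM6 ("new0926") and the FM10 high-pass
# engaged, gain of 1.
ezca.switch(FF, 'INPUT', 'FM6', 'FM10', 'ON')
ezca[FF + '_GAIN'] = 1.0

# Preshaping measurement (only if the feedforward does not work): input OFF,
# high-pass ON, gain of 1, then run the usual excitation template.
ezca.switch(FF, 'FM6', 'OFF')
ezca.switch(FF, 'INPUT', 'OFF')
ezca.switch(FF, 'FM10', 'ON')
ezca[FF + '_GAIN'] = 1.0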
TITLE: 09/26 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
IFO is in NLN and OBSERVING since 03:09 UTC (2hr 10 min lock)
Wind was actively inhibiting locking for the first few hours of the shift. Gusts were ranging from 35-48mph. There were a few times when it dipped below 35mph, at which point I began trying to lock, to no avail, usually losing lock at CHECK_IR or DRMI.
There was a lull with <20mph winds ~1:40 UTC. We got all the way to MICH_FRINGES, but the IFO was not able to lock PRMI or DRMI, so I began INITIAL_ALIGNMENT. From this point on, locking was fully automatic and we reached NLN approximately 1.5hrs after the initial alignment started.
Other:
LOG:
None
Since we've gotten back from the OFI vent, we've had chunks of time where we struggle with PIs ringing up (attachment 1). During that time, we've made changes to filter bank settings and adjusted the max power in our efforts to try and tame them.
Timeline of PI ringups and associated changes (all times in UTC)
August 22-23rd (alog79673)
- Multiple locklosses from PI24 (lockloss1,lockloss2,lockloss3)
- Changes made to PI damping (alog79665)
August 24
- nominal max power setting changed to 61W so as to bypass it for the weekend
August 27 (alog79753)
- PI31 ringup (no lockloss)
August 31 (alog79836)
- PI28 & PI29 ring up and cause lockloss
September 02 (alog79860)
- PI28/29 ring up and caused lockloss
- nominal max power changed back to 60W
- PI28/29 ringup causing lockloss
- PI28/29 ringup causing lockloss
- PI28/29 ringup causing lockloss
- These last three locklosses were with 60W
- These four locklosses all happened one after the other.
September 17
- PI24 ringup and lockloss
September 21
- PI24 ringup and lockloss
September 23
- PI24 ringup and lockloss
September 25
- PI24 ringup and lockloss
- PI24 ringup and lockloss
- PI24 ringup and lockloss
Something weird that TJ first noticed during the 17:52 UTC ringup today was that PI24 looked a lot noisier than usual. I looked back at the previous PI24 ringups (attachment 2) and noticed that only the ringups from today (September 25th) are this noisy compared to the other PI24 locklosses we've seen since the vent.
After this latest lockloss, the nominal max power has been increased back up to 61W (alog80295). We are discussing changes to ring heater power since it looks like the max power changes are enough to stop one mode from ringing up, but might be causing the others to start ringing up.
TITLE: 09/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
The day was full of short locks that all ended with PI ring ups.
Lockloss from a PI 24 ring up
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1411331841
Screenshot. Apparently the Lockloss page has no locklosses attributed to PI ring ups.
Relocking:
I did an Initial Alignment.
Wind is gusting above 40 mph
Holding ISC_LOCK in IDLE until the tumbleweeds land.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | 18:24 |
15:19 | PCAL | Karen & Francisco | PCAL lab | yes | Technical cleaning and storing parts. | 15:37 |
16:45 | FAC | Eric | EX & EY | N | Turning down the bugs & bunnies alarm volume at the chiller yards. | 17:26
17:28 | PEM | Robert | SR10 & Alabama | N | Checking along SR10 & Alabama for road noise. | 21:28
21:02 | VAC | Travis | LVEA HAM6 | Yes | checking a feed through | 21:22 |
TITLE: 09/25 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 39mph Gusts, 25mph 3min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.32 μm/s
QUICK SUMMARY:
IFO is in DOWN due to ENVIRONMENT
High winds with around 40mph gusts. We'll re-attempt lock acquisition when wind speeds are lower.
Sheila, Louis, Francisco
SUMMARY: On Thursday, September 19 2024, the low frequency error of the sensing function magnitude decreased (see figure 1) from turning off H1:ASC-AS_A_DC_YAW_OFFSET.
We turned off H1:ASC-AS_A_DC_YAW_OFFSET prior to making a calibration measurement (LHO:80180), given our observations from LHO:80063. Figure 3 shows a change in magnitude of ~5%, from turning off the AS_A_Y, in comparison to measurement 20240914T183802Z ("Saturday cal. meas.", the most recent measurement prior to our change), and figure 2 confirms an uncertainty of less than 3% for the frequencies (see table) of interest. The calibration measurements used in this log were done with a thermalized interferometer -- see LHO:79691, LHO:80057, LHO:80061, LHO:80093, LHO:80159, LHO:80180.
I'm adding figures 4, 5, and 6 to summarize the coupling between DARM and DHARD_Y reported in LHO:80063. In figure 4, the trace where AS_A_DC_YAW_OFFSET = 0 (red trace, bottom plot) shows minimal coupling with DARM (top plot). The frequencies at which the DHARD_Y PSD magnitude was minimized match the injections in the following table,
Frequency (Hz) | Channel name |
---|---|
15.6 | H1:SUS-ETMY_L1_CAL_LINE_FREQ |
16.4 | H1:SUS-ETMY_L2_CAL_LINE_FREQ |
17.1 | H1:CAL-PCALY_PCALOSC1_OSC_FREQ |
17.6 | H1:SUS-ETMY_L3_CAL_LINE_FREQ |
which indicates a coupling from DARM to DHARD_Y.
To contextualize, we are interested in understanding (and suppressing) the parasitic cross-coupling between the ASC and DARM loops (similar to LHO:50498 and, more recently, LHO:78606). The cross-coupling can come from DARM and affect ASC, or come from ASC and affect DARM. We are using the sensing function, instead of the actuation function, to rule out other coupling mechanisms. A technical note describing the parasitic cross-coupling is in development.
DESCRIPTION OF THE FIGURES
Fig 1 - out_10-30Hz: Sensing TF ranging from 10 to 30 Hz for different calibration measurement reports.
Fig 2 - unc_10-30Hz: Relative uncertainty of each TF from figure 1.
Fig 3 - ratio_10-30Hz: Change in magnitude of each TF from figure 1, using report from 20240919T153719Z as reference.
Fig 4 - darm_and_dhardy_psd: PSDs of (top) LSC-DARM_IN1_DQ and (bottom) DHARD_Y_OUT_DQ for each value of ASC-AS_A_DC_YAW_OFFSET (AS_A_Y for reference).
Fig 5 - dhardy_psd_and_coh: PSD of DHARD_Y and coherence of DHARD_Y_OUT_DQ/LSC-DARM_IN1_DQ for each value of AS_A_Y. Note the coherence when AS_A_Y = 0 (red trace).
Fig 6 - dhardy_tf: TF of DHARD_Y_OUT_DQ/LSC-DARM_IN1_DQ. Even though the phase changes substantially at AS_A_Y = 0, the coherence in figure 5 indicates that the coupling between DHARD_Y and DARM is low.
Figs 7, 8, and 9 are, respectively, full-range versions of figures 1, 2, and 3.
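For reference, a minimal gwpy sketch of the kind of PSD/coherence comparison shown in figures 4-6 (the GPS times are placeholders, and the DHARD_Y channel name is assumed):

from gwpy.timeseries import TimeSeriesDict

# Placeholder GPS span; substitute a thermalized stretch taken with the
# desired AS_A_DC_YAW_OFFSET setting.
start, end = 1410000000, 1410000600

data = TimeSeriesDict.get(
    ['H1:LSC-DARM_IN1_DQ', 'H1:ASC-DHARD_Y_OUT_DQ'], start, end)
darm = data['H1:LSC-DARM_IN1_DQ']
dhard_y = data['H1:ASC-DHARD_Y_OUT_DQ']

# PSDs (figures 4 and 5) and the DHARD_Y/DARM coherence (figure 5).
psd_darm = darm.psd(fftlength=10, overlap=5)
psd_dhard = dhard_y.psd(fftlength=10, overlap=5)
coh = dhard_y.coherence(darm, fftlength=10, overlap=5)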
Oli is preparing an alog about the history of our PIs since the OFI vent.
Since we have lost lock to PI 24 more and more frequently over the last week and today all locks have been short, I've changed the input power to 61W, which did help with this PI when we did it in early September.
TITLE: 09/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 18mph Gusts, 11mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.31 μm/s
QUICK SUMMARY:
Robert is out at SR10 & Alabama along the easement on the shoulder. We should make sure he's still OK...
After this Lockloss we started relocking again and the DRMI signals looked terrible and were pulling away.
Sheila had us stop in OFFLOAD DRMI ASC. I touched up SRM yaw and turned off the following:
H1:ASC-SRC1_P_SW1
H1:ASC-SRC1_Y_SW1
H1:ASC-SRC2_P_SW1
H1:ASC-SRC2_Y_SW1
NLN Reached at 19:06 UTC
OBSERVING Reached at 19:08:35 UTC
Lockloss @ 09/25 17:52UTC from PI24 ringup. Tony valiantly tried to damp it but was unsuccessful.
Wed Sep 25 08:14:14 2024 INFO: Fill completed in 14min 10secs
Jordan confirmed a good fill curbside.
-Brice, Sheila, Camilla
We are looking to see if there are any aux channels that are affected by certain types of locklosses. Understanding whether a threshold is reached in the last few seconds prior to a lockloss can help determine the type of lockloss and which channels are affected more than others.
We have gathered a list of lockloss times (using https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi) with:
(issue: the plots for the first 3 lockloss types wouldn't upload to this aLog. Created a dcc for them: G2401806)
We wrote a Python script to pull the data of various auxiliary channels for the 15 seconds before a lockloss. A graph is created for each channel, with a trace for each lockloss time stacked on each graph, and the graphs are saved to a png file. All the graphs have been shifted so that the time of lockloss is at t=0.
Histograms for each channel are created that compare the maximum displacement from zero for each lockloss time. There is also a stacked histogram based on 12 quiet-microseism times (all taken between 4.12.24 0900-0930 UTC). The histograms are created using only the last second of data before lockloss, are normalized by dividing by the number of lockloss times, and are saved to a separate png file from the plots.
These channels are provided via a list inside the python file and can be easily adjusted to fit a user's needs. We used the following channels:
After talking with Camilla and Sheila, I adjusted the histogram plots. I excluded the last 0.1 sec before lockloss from the analysis, because in the original post's plots the H1:ASC-AS_A_NSUM_OUT_DQ channel has most of its last-second (blue) histogram at a value of 1.3x10^5, indicating that the last second of data is capturing the lockloss itself causing a runaway in the channels. I also combined the ground motion locklosses (EQ, windy, and microseism) into one set of plots (45 locklosses) and left the Observe (and Refined) tagged locklosses as another set of plots (15 locklosses). Both groups of plots have 2 stacked histograms for each channel:
Take notice of the histogram for the H1:ASC-DC2_P_IN1_DQ channel for the ground motion locklosses. In the last second before lockloss (blue), we can see a bimodal distribution with the right grouping centered around 0.10. The numbers above the blue bars are the percentage of the counts in each bin: about 33.33% is in the grouping around 0.10. This is in contrast to the distribution for the Observe/Refined locklosses, where the entire (blue) distribution is under 0.02. This suggests a threshold could be placed on this channel for lockloss tagging, though more analysis will be required first (I am going to look next at times without locklosses for comparison).
I started looking at the DC2 channel and the REFL_B channel, to see if there is a threshold on REFL_B that could be used for a new lockloss tag. I plotted the last eight seconds before lockloss for the various lockloss times, this time splitting the times onto different graphs based on whether the DC2 max displacement from zero in the last second before lockloss was above 0.06 (based on the histogram in the previous comment): Greater = the max displacement is greater than 0.06, Less = the max displacement is less than 0.06. However, I discovered that some of the locklosses that are above 0.06 in the DC2 channel are failing the logic test in the code: they get treated as having a max displacement less than 0.06 and get plotted on the lower plots. I wonder if this is also happening in the histograms, but that would only mean we are underestimating the number of locklosses above the threshold. It could also be suppressing possible bimodal distributions in other histograms. (Looking into debugging this.)
I split the locklosses into 5 groups of 8 and 1 group of 5 to make it easier to distinguish between the lines in the plots.
Based on the plots, I think a threshold for H1:ASC-REFL_B_DC_PIT_OUT_DQ would be 0.06 in the last 3 seconds prior to lockloss
Fixed the logic issue for splitting the plots into pass/fail of the 0.06 threshold, as seen in the plot.
The histograms were unaffected by the issue.
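A minimal sketch of the corrected threshold check (channel, threshold, and GPS time are placeholders; the actual analysis code is in the gitLab repo mentioned below):

import numpy as np
from gwpy.timeseries import TimeSeries

def exceeds_threshold(channel, lockloss_gps, threshold=0.06,
                      window=3.0, exclude_last=0.1):
    """Return True if |channel| exceeds `threshold` within `window` seconds
    before lockloss, excluding the final `exclude_last` seconds (which are
    dominated by the lockloss runaway itself)."""
    data = TimeSeries.get(channel,
                          lockloss_gps - window,
                          lockloss_gps - exclude_last)
    return bool(np.abs(data.value).max() > threshold)

# Example (placeholder GPS time):
# exceeds_threshold('H1:ASC-REFL_B_DC_PIT_OUT_DQ', 1411331841)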
Added code to the gitLab
anthony.sanchez@cdsws29: python3 /ligo/home/francisco.llamas/COMMISSIONING/commissioning/k2d/KappaToDrivealign.py
Fetching from 1409164474 to 1409177074
Opening new connection to h1daqnds1... connected
[h1daqnds1] set ALLOW_DATA_ON_TAPE='False'
Checking channels list against NDS2 database... done
Downloading data: |█████████████████████████████████████████████████████████████████████████████████████| 12601.0/12601.0 (100%) ETA 00:00
Warning: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN changed.
Average H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT is -2.3121% from 1.
Accept changes of
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN from 187.379211 to 191.711514 and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN from 184.649994 to 188.919195
Proceed? [yes/no]
yes
Changing
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 191.7115136134197
anthony.sanchez@cdsws29:
I'm not sure if the value set by this script is correct. KAPPA_TST was 0.976879 (-2.3121%) at the time this script looked at it. The L2L drivealign gain in H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN was 184.65 at the time of our last calibration update, which is when KAPPA_TST was set to 1. So, to offset the drift in the TST actuation strength, we should change the drivealign gain to 184.65 * 1.023121 = 188.919. This script chose to update the gain to 191.711514 instead; this is 187.379211 * 1.023121, with 187.379211 being the gain value at the time the script was run. At that time, the drivealign gain was already accounting for a 1.47% drift in the actuation strength (this has so far not been properly compensated for in pyDARM and may be contributing to the error we're currently seeing; more on that later this weekend in another post). So I think this script should base its corrections on percentages applied with respect to the drivealign gain value at the time the kappas were last set (i.e. just after the last front-end calibration update), *not* at the current time. Also, the output from the script claims that it also updated H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN, but I trended it and it hadn't been changed. Those print statements should be cleaned up.
To close out this discussion: it turns out the drivealign adjustment script is doing the correct thing. Each time the drivealign gain is adjusted to counteract the effect of ESD charging, the percent change reported by KAPPA_TST should be applied to the drivealign gain at that time, rather than to the gain as it was when the kappa calculations were last updated.
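A short worked version of that arithmetic, with the numbers from the script output above:

kappa_tst = 0.976879            # KAPPA_TST when the script ran (-2.3121% from 1)
drift = 1.0 - kappa_tst         # fractional loss of ESD actuation strength
gain_now = 187.379211           # SUS drivealign gain at that time
gain_at_last_cal = 184.649994   # gain when the kappas were last reset to 1

# What the script does (and, per the comment above, the correct thing): apply
# the reported percent change to the gain at the time of the adjustment, since
# that gain already absorbs every previous correction.
gain_script = gain_now * (1 + drift)               # ~191.71

# The alternative considered earlier: reference the gain at the last cal update.
gain_alternative = gain_at_last_cal * (1 + drift)  # ~188.92

print(round(gain_script, 3), round(gain_alternative, 3))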
Ansel Neunzert, Evan Goetz, Owen (Zhiyu) Zhang
Summary
Following the PSL control box 1 move to a separate power supply (see LHO aLOG 79593), we searched the recent Fscan spectra for any evidence of the 9.5 Hz comb triplet artifacts. The configuration change seems promising: there is strong evidence that it has had a positive effect. However, there are a few important caveats to keep in mind.
Q: Does the comb improve in DARM?
A: Yes. However, it has changed/improved before (and later reversed the change), so this is not conclusive by itself.
Figures 1-4 show the behavior of the comb in DARM over O4 so far. Figures 1 and 2 are annotated with key interpretations, and Figure 2 is a zoom of Figure 1. Note that the data points are actually the maximum values within a narrow spectral region (+/- 0.11 Hz, 20 spectral bins) around the expected comb peak positions. This is necessary because the exact frequency of the comb shifts unpredictably, and for high-frequency peaks this shift has a larger effect.
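A minimal numpy sketch of that max-in-window extraction (the function and parameter names are placeholders for the Fscan spectra handling):

import numpy as np

def comb_peak_heights(freqs, asd, spacing=9.5, offset=0.0,
                      n_teeth=50, half_width=0.11):
    """For each expected comb tooth, return the maximum spectrum value within
    +/- half_width Hz of the nominal frequency, allowing for the small,
    unpredictable frequency shifts of the comb."""
    heights = []
    for k in range(1, n_teeth + 1):
        f0 = offset + k * spacing
        mask = np.abs(freqs - f0) <= half_width
        if mask.any():
            heights.append(asd[mask].max())
    return np.array(heights)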
Based on these figures, there was a period in O4b when the comb’s behavior changed considerably, and it was essentially not visible at high frequencies in daily spectra. However, it was stronger at low frequencies (below 100 Hz) during this time. This is not understood, and in fact has not been noted before. Maybe the coupling changed? In any case, it came back to a more typical form in late July. So, we should be aware that an apparent improvement is not conclusive evidence that it won’t change again.
However, the recent change seems qualitatively different. We do not see evidence of low or high frequency peaks in recent days. This is good news.
Q: Does the comb improve in known witness channels?
A: Yes, and the improvement is more obvious here, including in channels where the comb has previously been steady throughout O4. This is cause for optimism, again with some caveats.
To clarify the situation, I made similar history plots (Figures 5-8) for a selection of channels that were previously identified as good witnesses for the comb. (These witness channels were initially identified using coherence data, but I’m plotting normalized average power here for the history tracks. We’re limited here to using channels that are already being tracked by Fscans.)
The improvement is more obvious here, because these channels don’t show the kind of previous long-term variation that we see in the strain data. I looked at two CS magnetometer channels, IMC-F_OUT_DQ, and LSC-MICH_IN1. In all cases, there’s a much more consistent behavior before the power supply isolation, which makes the improvement that much more convincing.
Q: Is it completely gone in all known witness channels?
A: No, there are some hints of it remaining.
Despite the dramatic improvements, there is subtle evidence of the comb remaining in some places. In particular, as shown in Figure 9, you can still see it at certain high frequencies in the IMC-F_OUT channel. It’s much improved from where it was before, but not entirely gone.
Just an update that this fix seems to be holding. Tracking the comb height in weekly-averaged spectra shows clear improvement (plot attached). The combfinder has not picked up these combs in DARM recently, and when I spot-check the daily and weekly spectra I see no sign of them by eye, either.