During the May 30th violin mode ring-up, the narrow-line contamination in the frequency bands surrounding the fundamental and harmonic frequencies displays an asymmetry that is so far unexplained. The contamination is most visible in the region surrounding the 1500 Hz harmonic, shifted 30 Hz up in frequency, as shown in figures 1 and 2. The shift increases with frequency: the contamination around the 500 Hz fundamental is actually shifted 3 Hz down in frequency (figure 3), and around 1000 Hz the shift is only 17 Hz (figure 4).
Similar behavior is seen during the June 30th ring-up, as shown in figures 5-8. For this ring-up the shift is in the opposite direction and decreases with frequency, going from a 17 Hz shift down at the 500 Hz fundamental to a 3 Hz shift down at the 1500 Hz harmonic. Again, this behavior is unexplained.
These shifts are calculated as the difference between the median frequency of the lines in the 200 Hz band surrounding the violin modes and the median frequency of the violin mode lines themselves. Lines are counted as violin mode lines if their amplitude is at least 70% of the maximum violin mode amplitude in the band.
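A minimal numpy sketch of that calculation (the function name, band half-width, and threshold parameterization are mine, for illustration only):

```python
import numpy as np

def violin_shift(line_freqs, line_amps, mode_freq, band_hw=100.0, amp_frac=0.70):
    """Median-frequency shift of contamination lines relative to the
    violin-mode lines in a band of +/- band_hw Hz around mode_freq.

    line_freqs, line_amps: frequencies (Hz) and amplitudes of narrow
    lines identified in the spectrum. Lines with amplitude at least
    amp_frac times the maximum amplitude in the band are treated as
    violin-mode lines; the rest are contamination.
    """
    f = np.asarray(line_freqs, dtype=float)
    a = np.asarray(line_amps, dtype=float)
    in_band = np.abs(f - mode_freq) <= band_hw
    f, a = f[in_band], a[in_band]
    is_mode = a >= amp_frac * a.max()
    # Positive shift = contamination sits above the violin-mode lines.
    return np.median(f[~is_mode]) - np.median(f[is_mode])
```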
Kappa_UIM has seen a huge increase in noise starting ~8 days ago (screenshot attached). It's not clear to me what is causing this yet. This is also shown on the summary pages: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230728/cal/time_varying_factors/ The UIM Kappa line value started to show noisy activity at GPS 1373905226.2881477.
There's a feature just above the corresponding 15.6 Hz calibration line which appears at the identified time. Fig 1 shows this feature in high resolution. It peaks around 15.605 Hz and has been present consistently (in Fscan daily data) since the date of the noise change.
I also computed some shorter-duration spectra with gwpy to double check that its appearance corresponds to the gps time Louis posted. I needed about 500s fft length to resolve the relevant features, and I wanted a couple of averages so I ended up looking at 1000s time periods. Apologies for the messy overlay of figures with not-precisely-matching y-axes! I think the shape difference is clear regardless of the scale. Fig 2 shows some samples right before the change. The change occurs near the end of an observing segment but low-noise data remains available right after, so I looked at both the immediate time after the change (fig 3) and the next observing segment (fig 4). Indeed, it looks like a good match.
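As a sketch of the resolution argument (the real spectra were made with gwpy; the sample rate and line amplitudes below are invented, synthetic data only):

```python
import numpy as np
from scipy.signal import welch

fs = 64.0            # synthetic sample rate (real h(t) is much faster)
dur = 1000.0         # a 1000 s stretch, as in the text
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
# Synthetic "DARM-like" data: broadband noise plus a line at 15.605 Hz
x = rng.normal(scale=1e-3, size=t.size) + 1e-2 * np.sin(2 * np.pi * 15.605 * t)

# 500 s FFT segments give 2 mHz resolution -- enough to separate
# 15.605 Hz from the 15.6 Hz calibration line -- and a 1000 s stretch
# still yields a few (50%-overlapped) averages.
f, pxx = welch(x, fs=fs, nperseg=int(500 * fs))
peak = f[np.argmax(pxx * ((f > 15.5) & (f < 15.7)))]
```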
This is caused by a mistake in the MICHFF filter. It turns out that the filter I retuned on July 20 has a sharp feature at 15.6 Hz that I did not notice before. This is injecting MICH noise into DARM at 15.6 Hz. This can be fixed by either tweaking the current filter or retuning the MICHFF again (being more careful with narrow features in the fit!).
I modified the current filter to remove the 15.6 Hz feature and saved it. We should reload the MICHFF filters at the first opportunity.
This filter was reloaded since we lost lock. We should see an improvement in our next lock.
Lock loss caused by commissioning activity.
Fri Jul 28 10:08:08 2023 INFO: Fill completed in 8min 4secs
Travis confirmed a good fill curbside.
First observed as a persistent mis-calibration in the systematic error monitoring Pcal lines, which measure PCAL / GDS-CALIB_STRAIN, affecting both LLO and LHO [LLO Link] [LHO Link], and characterised by these measurements consistently disagreeing with the uncertainty envelope.
It is presently understood that this arises from bugs in the code producing the GDS FIR filters, which leave a sizeable discrepancy. Joseph Betzwieser is spearheading a thorough investigation to correct this.
I make a direct measurement of this systematic error by dividing CAL-DARM_ERR_DBL_DQ by GDS-CALIB_STRAIN, where the numerator is further corrected for kappa values of the sensing, cavity pole, and the 3 actuation stages (GDS applies the same corrections internally). This gives a transfer function of the error induced by the GDS filters.
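A sketch of that kind of ratio measurement using scipy on synthetic data (the real channels and kappa corrections are not reproduced here; function and variable names are mine):

```python
import numpy as np
from scipy.signal import csd, welch

def transfer_function(x, y, fs, fftlen):
    """Estimate the transfer function y/x between two time series as an
    averaged cross-spectral density over power spectral density ratio
    (the scipy analogue of dividing one calibrated strain channel by
    another on averaged spectra)."""
    nperseg = int(fftlen * fs)
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    _, pxy = csd(x, y, fs=fs, nperseg=nperseg)
    return f, pxy / pxx

# Synthetic check: y is x scaled by 0.98, so the recovered transfer
# function magnitude should sit at 0.98 across all frequencies.
rng = np.random.default_rng(1)
fs = 256.0
x = rng.normal(size=int(fs * 128))
y = 0.98 * x
f, tf = transfer_function(x, y, fs, fftlen=4.0)
```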
Attached in this aLog, and its sibling aLog in LLO, is this measurement in blue, the PCAL / GDS-CALIB_STRAIN measurement in orange, and the smoothed uncertainty correction vector in red. Attached also is a text file of this uncertainty correction for application in pyDARM to produce the final uncertainty, in the format of [Frequency, Real, Imaginary].

After applying this error TF, the uncertainty budget seems to agree with monitoring results (attached).
After running the command documented in alog 70666, I've plotted the monitoring results on top of the manually corrected uncertainty estimate (see attached). They agree quite well.
The command is:
python ~cal/src/CalMonitor/bin/calunc_consistency_monitor --scald-config ~cal/src/CalMonitor/config/scald_config.yml --cal-consistency-config ~cal/src/CalMonitor/config/calunc_consistency_configs_H1.ini --start-time 1374612632 --end-time 1374616232 --uncertainty-file /home/ling.sun/public_html/calibration_uncertainty_H1_1374612632.txt --output-dir /home/ling.sun/public_html/
The uncertainty is estimated at 1374612632 (span 2 min around this time). The monitoring data are collected from 1374612632 to 1374616232 (span an hour).
J. Kissel, J. Betzwieser
FYI: The time Vlad used to gather TDCFs to update the *modeled* response function at the reference time (R, in the numerator of the plots) is
2023-07-27 05:03:20 UTC
2023-07-26 22:03:20 PDT
GPS 1374469418
This is a time when the IFO was well thermalized.
The values used for the TDCFs at this time were
\kappa_C = 0.97764456
f_CC = 444.32712 Hz
\kappa_U = 1.0043616
\kappa_P = 0.9995768
\kappa_T = 1.0401824
The *measured* response function (GDS/DARM_ERR, the denominator in the plots) is from data with the same start time, 2023-07-27 05:03:20 UTC, over a duration of 384 seconds (8 averages of 48 second FFTs).
Note these TDCF values listed above are the CAL-CS computed TDCFs, not the GDS computed TDCFs. They're the values exactly at 2023-07-27 05:03:20 UTC, with no attempt to average further over the duration of the *measurement*. See the attached .pdf, which shows the previous 5 minutes and the next 20 minutes. From this you can see that GDS was computing essentially the same thing as CALCS -- except for \kappa_U, which we know
- is bad during that time (LHO:72812), and
- unimpactful w.r.t. the overall calibration.
So the fact that
:: the GDS calculation is frozen and
:: the CALCS calculation is noisy but coincidentally quite close to the frozen GDS value, even though
:: the ~25 minute mean of the CALCS is actually around ~0.98 rather than the instantaneous value of 1.019
is inconsequential to Vlad's conclusions.
I'm adding the modeled correction due to the missing 3.2 kHz pole here as a text file. I plotted a comparison showing Vlad's fit (green), the modeled correction evaluated on the same frequency vector as Vlad (orange), and the modeled correction evaluated using a dense frequency spacing (blue), see eta_3p2khz_correction.png. The denser frequency spacing recovers error of about 2% between 400 Hz and 600 Hz. Otherwise, the coarsely evaluated modeled correction seems to do quite well.
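For illustration, assuming the missing pole is a simple real pole at 3.2 kHz, the correction factor and its magnitude error between 400 Hz and 600 Hz can be sketched as follows (the attached text file remains the authoritative correction):

```python
import numpy as np

def eta_single_pole(f, f_pole=3200.0):
    """Frequency response of a single real pole at f_pole (Hz).
    If the response model omitted this pole, multiplying the modeled
    response by this factor restores it. Sketch only."""
    return 1.0 / (1.0 + 1j * f / f_pole)

# Magnitude error introduced by neglecting the pole, over 400-600 Hz:
# roughly 1-2%, consistent with the scale of the recovered error.
f = np.linspace(400.0, 600.0, 201)
mag_err = np.abs(1.0 - np.abs(eta_single_pole(f)))
```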
The above error was fixed in the model at GPS time 1375488918 (Tue Aug 08 00:15:00 UTC 2023) (see LHO:72135)
TITLE: 07/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: Locked for 2+ hours; a calm day so far, with no activities planned until the calibration and commissioning work in the morning.
TITLE: 07/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
SHIFT SUMMARY:
Observing and Locked for 2 hours.
- While relocking after the lockloss (71783), we got stuck at FIND_IR - the X arm locked, but only at ~0.1 (see attachment 1), and it couldn't be fine-tuned; H1_MANAGER finally decided to run an initial alignment and all was good.
7:00 Detector Observing and Locked for 7hrs 18mins
8:44 Entered Earthquake mode
8:54 Out of Earthquake mode
9:29 Entered Earthquake mode
9:50 Out of Earthquake mode
11:12 Lockloss from sudden local seismic event (71783)
13:06 Reached NOMINAL_LOW_NOISE
13:17 H1_MANAGER brought us back into Observing
LOG:
No log
Lockloss at 11:12 due to some sort of local seismic event
Just got back into Observing. Didn't have to touch anything ourselves, though an initial alignment was run.
Detector is in Observing and has been locked for 11hrs 22mins. There were a few earthquakes that rolled through so we did go into Earthquake mode a couple of times (8:44-8:54 and 9:29-9:50), but we rode them out.
The MY temp alarm hasn't been triggered since it went off during Ryan S's shift (71778).
TITLE: 07/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 8mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Taking over from Ryan S. We're Observing and have been Locked for 7hrs 29mins.
I'll keep watch on the MY station temps.
TITLE: 07/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
SHIFT SUMMARY: Quiet shift tonight, relocked easily and H1 has been observing for 7 hours.
Handing off to Oli for the rest of the night.
LOG:
No log for this shift.
State of H1: Observing at 150Mpc
H1 has been locked and observing for 3 hours. Locking at the start of the shift went smoothly (except for a small manual adjustment of the DIFF offset). Some dust alarms for the optics lab, these have since stopped.
Yesterday during commissioning time I did a couple of experiments with CHARD_Y (71738)
How much margin do we have for CHARD_Y noise?
With the 10-100 Hz noise injection, I could estimate a CHARD_Y noise projection to DARM using the excess power method (ratio of PSDs). Using the measured transfer function between CHARD_Y and DARM gives the same result. The first plot shows the effect of the noise injection in CHARD_Y. The second plot shows the noise projection, and that we have a safety factor of about 30-100 above 15 Hz. We can use this information to design a new CHARD_Y filter.
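A minimal sketch of the excess power method on made-up PSD values (in practice, bins without measurable excess during the injection should be masked):

```python
import numpy as np

def excess_power_projection(p_darm_inj, p_darm_ref, p_wit_inj, p_wit_ref):
    """Project witness (e.g. CHARD_Y) noise into DARM via the
    excess-power method: infer the power coupling from the PSD increase
    during the injection, then apply it to the quiet-time witness PSD.
    All inputs are PSDs on a common frequency vector; returns the
    projected ambient amplitude spectral density in DARM."""
    coupling_sq = (p_darm_inj - p_darm_ref) / (p_wit_inj - p_wit_ref)
    return np.sqrt(np.clip(coupling_sq, 0.0, None) * p_wit_ref)

# Toy numbers: the injection raises the witness PSD from 1 to 101 and
# the DARM PSD from 2.01 to 3.01, implying a power coupling of 0.01 and
# an ambient DARM contribution of sqrt(0.01 * 1) = 0.1 in amplitude.
proj = excess_power_projection(np.array([3.01]), np.array([2.01]),
                               np.array([101.0]), np.array([1.0]))
```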
Increasing the CHARD_Y gain by 3
In the second experiment I increased the CHARD_Y gain by a factor of 3, since the model predicted that the loop would be stable. This would give me more suppression at low frequency and a bit of suppression of the 2.6 Hz peak. This is pretty much what we observed. The change in the DARM or CHARD_Y residual RMS isn't large, as expected. So there is no effect on the sensitivity. We should try to design a better filter that gives us suppression at 1 Hz and 2.6 Hz to reduce the CHARD_Y RMS.
Note that the 1 Hz peak in CHARD_Y is coherent with PR2 and PR3 damping loops, so maybe we can gain something by also looking at those damping loops.
Here's a proposed new CHARD_Y controller, based on the 3x gain, adding more suppression at 1 and 2.6 Hz, and with increased noise injection above 10 Hz that should be ok given the measured coupling to DARM.
The last plot shows the predicted performance of this new loop: residual motion below 3 Hz should be largely suppressed. Only the 3.4 Hz peak is increased, by less than a factor of 2.
Engaging this new controller with a gain of 180 caused a lock loss with an oscillation at 3.4 Hz, which is the expected higher UGF.
Probably the plant measurement is not accurate enough at such high frequency.
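As a toy illustration (assuming, purely for illustration, a 1/f^2 open-loop gain with a ~2 Hz UGF at the nominal gain of 60), tripling the gain moves the unity-gain frequency up by sqrt(3), to about 3.4 Hz:

```python
import numpy as np

def ugf(gain, f, olg_mag_unit):
    """Frequency where the scaled open-loop gain magnitude crosses unity."""
    mag = gain * olg_mag_unit
    return f[np.argmin(np.abs(np.log(mag)))]

# Toy open-loop gain falling as 1/f^2, normalized so that gain=60 gives
# unity-gain at 2 Hz (assumed numbers, not a measured plant):
f = np.logspace(-1, 1, 2001)
olg_unit = (2.0 ** 2 / f ** 2) / 60.0

f_ugf_60 = ugf(60.0, f, olg_unit)    # ~2 Hz at the nominal gain
f_ugf_180 = ugf(180.0, f, olg_unit)  # ~3.5 Hz at the working gain,
                                     # near the 3.4 Hz oscillation seen
```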
Tried a slightly modified controller with more phase margin at 3-4 Hz. Now uploaded to FM9. This can be engaged with the nominal gain of 60, and it is supposed to be stable all the way to the working gain of 180.
However, increasing the gain to 120 already generates a large peak at 3.4 Hz. This is consistent with the previous lock loss.
The low frequency performance with this new controller at a gain of 120 is good as expected, but the new peak at 3.4 Hz actually increases the DARM RMS. I believe this increase is responsible for the higher noise in DARM at >10 Hz, since there isn't much coherence between CHARD_Y and DARM.
I wanted to measure the CHARD_Y plant again, since the previous measurement was not very good at >2 Hz, and I suspect the real plant gives less phase margin than the fit model we have now. Unfortunately I increased the noise amplitude too much and we lost lock. To be repeated.
I also tried to reduce the coupling of CHARD_Y to DARM by fine tuning the ITMY A2L, but I couldn't get any improvement. I injected a 21.5 Hz line in CHARD_Y, but it showed up in DARM with a lot of sidebands and appeared quite non-stationary. More care will be needed to retune the A2L to reduce the CHARD_Y coupling to DARM; this might be necessary if the new controller injects too much noise at frequencies above 10 Hz.
Shivaraj sent Rick and me a message about some noise found on H1:PCALX_TX_PD and H1:PCALX_RX_PD.
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230703/cal/pcal_x/
This particular noise took place on the 3rd of July, when LHO was empty except for an operator. Clear examples of this type of noise on these channels can also be found on the 4th, 5th, and 17th of July.
Checking the same channels on EY:
There is some form of noise on H1:PCALY_TX_PD that shows up on the 11th, 12th, and 13th of this month and looks similar, though not as often and not as intense as the noise found on PCALX, and it's not always on both the PCALY_TX and PCALY_RX PDs at the same time like it is at EX.
I have reached out to Shivaraj to try to learn more about this and see if it's a problem for DARM, which it doesn't seem to be according to what he saw in Bruco.
This noise could point to a problem with our PCAL lasers, since it's in both the TX and RX PDs at EX. But it could also be the AOM, or the OFS being saturated, or otherwise interacting with changes in temperature or humidity.
This could also be a DAQ issue, like a chassis or board, because it's showing up on both channels at the same time at EX. Shivaraj mentioned that there might be "cross talk between different channels in a board, and if the glitches are in the light and seen by both PD's they would also show up in other channels, which we could likely use to our advantage."
I took this issue to the Noise Sprint on Wednesday, and Adrian Helming-Cornell, Jane Glanzer, and Vishal Yalla took up the project.
Dave Barker and Erik also apparently looked into this, and by Wednesday lunchtime there was some sharing of information.
The Noise Sprint group started a google doc where we put all the information that we were gathering:
https://docs.google.com/document/d/127y-9zX6So-zWHxpziH0cU9SAjMjV1lUiJrwKRHdB4A/edit
That may not be a clickable link so here is the content:
alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=71725
PCAL X Noise found by Shivaraj
PCAL Background:
-> PCAL = Photon Calibrator
Used to calibrate the interferometer by applying a physical (radiation-pressure) force to the test masses at the end stations
PCAL chassis layout: https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?.submit=Identifier&docid=S1400489&version=5
Potential Causes:
Tasks:
Channel Names:
(Might have correlation between calibration channels; however chassis channels are not resolving without calibration channels)
Calibration channels:
Link to GSTAL. If you find a time when the kappas have some weird signals, then check out the GSTAL data for those times as well.
Chassis documentation: https://dcc.ligo.org/LIGO-D1400153
Other instances:
June 9
June 10 - few lines
June 11 - few lines
June 19
June 24
July 3
July 4
July 5
July 11
July 17
July 18
July 19
July 20
July 21 - few lines
July 25 - few lines
I have narrowed this down to happening between two GPS times: 1372456038 and 1372456158.
Calibration channels:
July 4th, 2023: 16:00:00 - 18:00:00 UTC (1372521618 GPS)
H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.
H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.
H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.
July 3rd, 2023: 14:00:00 - 16:00:00 UTC (1372428018 GPS)
H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.
H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.
H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230703/cal/pcal_x/
More searching is needed to ensure that this is resolved.
WP 11184 This morning, Tony and I turned both the ETMX and ETMY HWS lasers and their cameras back on. They had been off for one week (alog 69431) to check for any noise caused by them. Tagging DetChar.
Evan Goetz, Debasmita Nandi, Taylor Starkman, Ansel Neunzert
We compared the weekly average spectrum starting May 10 with the week starting May 17. We saw a few noticeable changes in the weekly spectra, but careful follow-up shows that several of them are unassociated with the HWS changes. In particular, daily spectra show that changes in the 29.96 Hz and 1.66 Hz combs do not happen at the same time as the HWS changes.
There is a small 14.9009 Hz comb that seems to turn on during the week of the test and off afterward. (That is, the comb seems to be associated with the HWS being in the off state, which is fairly counterintuitive). Only a few peaks are visible, and it has not been noticed since.
Mostly noting these things here for future reference and searchability. We do not see large-scale spectral changes associated with this test.