Summary of the report:
Full report: link.
Fri Jan 24 10:14:33 2025 INFO: Fill completed in 14min 29secs
Jordan confirmed a good fill curbside. TCmins [-91C, -90C] OAT (4C, 39F), deltaTempTime 10:14:22
[Evan Louis Matthew] This morning, after Evan and Louis fixed the anti-aliasing filters, we used our 'lockloss-less' recipe to re-engage the filters while smoothly changing the demod phase. This required a slight adjustment to the arguments of the command in LHO:82430 (listed below): bring the filters back on and step the phase rot back to the modified angle in the same ramp time as the filters' ramps. The new filters seem to be working as expected and are not causing yesterday's calibration error. This transition was done without lockloss. This is a placeholder alog to state that the calibration and the ifo have been reverted to their nominal state from this morning. I'll follow up and edit this entry with additional details later tonight / tomorrow morning.
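For reference, the adjusted command is presumably close to the following: the filter switches set to ON and the phase stepped back toward the modified angle. The step sign and size here are an assumption based on the LHO:82430 command rather than the exact arguments we ran, and cdsutils may need the negative step quoted or preceded by '--':
cdsutils switch H1:OMC-DCPD_A0 FM10 ON; cdsutils switch H1:OMC-DCPD_B0 FM10 ON; cdsutils step -s 0.065 H1:OMC-LSC_PHASEROT -- -1,77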
TITLE: 01/24 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY: H1 lost lock half an hour ago at 15:09 from a non-obvious cause (link to lockloss tool) and is relocking; it just reached DRMI.
H1 back to observing at 16:45 UTC. Had to help PRM during DRMI locking, but otherwise this was an automatic relock.
I updated OMC-LSC_PHASEROT from -21 to 56, as TJ pointed out in his alog from last night, and accepted it in both SAFE and OBSERVE SDF tables (screenshots attached). Since the OMC was already locked by the time I did this, I just used the command from alog 82430, which worked and did not cause a lockloss. This is possibly why the calibration overnight looked strange.
During relocking, H1 couldn't get DRMI or PRMI, so it went to the CHECK_MICH_FRINGES state, but we lost lock a few seconds into it. The LASER_PWR node was still moving up to the 10W we use for Check_Mich when the 2W request came in, and that request was ignored while the node was moving; I'm not entirely sure why. So, as with many lock losses, our IMC lost lock, and since we were at 10W it couldn't relock. The IMC eventually relocked 2.5 hours later, long enough to give me a call. By the time I logged in, it had already started an initial alignment at 10W. I requested 2W for the PRC alignment step and then it finished off initial alignment on its own.
All of the states in LASER_PWR that do the adjusting are "protected" guardian states, meaning that they have to return True before the node is allowed to move on. I can't remember exactly why, but I think this was because it would confuse the rotation stage if you made a power request while another one was in progress. I would have expected that once this state was done, the node would have then moved to the 2W adjusting state, but it looks like it ignored that request entirely. I'll add this to my todo list to fix.
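For anyone unfamiliar, a minimal sketch of what such a "protected" adjusting state roughly looks like (this is illustrative only, not the actual LASER_PWR code; the channel names are made up):
from guardian import GuardState

class ADJUST_POWER(GuardState):
    # not directly requestable; only reached through the power-change path
    request = False

    def main(self):
        # start the rotation stage moving toward the requested power (illustrative channel)
        ezca['PSL-POWER_REQUEST'] = 10

    def run(self):
        # returning False holds the node in this state; only once the rotation stage
        # reports it is done does the node move on, so a new power request arriving
        # mid-move is not acted on here
        return ezca['PSL-ROTATIONSTAGE_COMPLETE'] == 1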
TITLE: 01/24 Eve Shift: 21:00-0600 UTC (1300-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
After we got to Observing and everyone went home, nothing really happened after my last alog. It's been quiet ever since everyone left.
Since the lock clock was interrupted, as mentioned in my last alog, I should remind everyone:
This lock started at 19:41:24 UTC
Thus H1 has been locked for 10+ Hours.
oh also:
CALCS has some pending configuration changes according to the CDS Overview screen.
TITLE: 01/24 Eve Shift: 21:00-0600 UTC (1300-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
H1 has been locked for 7 hours as of 02:41:24 UTC
H1 is currently Observing.
End of Commissioning and Ctrl-Z:
Calibration Team has been working all day on the OMC phase and gains.
Camilla touched up the SQZr settings and temp.
Robert covered the Viewports & we are almost ready to get back to Observing.
The calibration team now has to revert all of their changes.
GDS has been restarted a few times.
1:42 UTC DCPD AA filters turned off and phase changed. No lockloss!!! YAY!!
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82433
After some final tweaks and some SDF accepts from Louis's changes, we got back to OBSERVING without a lockloss at 2:09:32 UTC!
This lock started at 19:41:24 UTC.
At 00:33 UTC I noticed that the Lock Clock FOM had crashed, and thus I relaunched the lockclock.
When it returned, it only had 30 minutes on it, even though we had not lost lock and had been locked for many hours.
All of the lock clocks (on the Calibration_Monitor and CDS_Overview screens) read the same 30 minutes.
According to this FOM Screenshot of Nuc28 we had been locked for 4 hours and 19 minutes at 00:01 UTC:
https://lhocds.ligo-wa.caltech.edu/cr_screens/archive/png/2025/01/23/16/nuc28-1.png
The Lockclock crashed again; it may have coincided with a restart of GDS? Sorry, Louis.
After talking with Dave, this turned out to be a hand-edited puppet file issue: when Dave started working on this https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82429, puppet started to overwrite the file that he had changed.
J. Driggers, T. Sanchez, O. Patane, M. Todd, L. Dartez
This is a placeholder alog to state that the calibration and the ifo have been reverted to their nominal state from this morning. I'll follow up and edit this entry with additional details later tonight / tomorrow morning. In short:
- The OMC demod phase changes from LHO:82413 have been reverted using the steps in LHO:82430, successfully and without breaking lock.
- The additional 16kHz AA filters in the DCPD A and B paths have been turned off.
- Camilla reverted her adjustments to the squeezer (LHO:82432).
- The calibration changes have been fully reverted.
There are many details to the several attempts we made to get things working in the calibration pipeline to keep the AA filtering in place. I'll share more with an update ASAP.
[Jenne Louis Matt]
The change in the filters used by calibration created a 10% calibration error. Louis is trying to fix this, but in an effort to have a way to revert to the previous filters without losing lock, Jenne came up with a cdsutils way to revert. Essentially it toggles off the new filters (H1:OMC-DCPD_A0 FM10 and H1:OMC-DCPD_B0 FM10)
and steps the demod phase (H1:OMC-LSC_PHASEROT)
in steps of 1 degree, 77 times, with a 0.065 sec delay (5 seconds / 77 steps) between each step. The filter toggles have a 5 second ramp time, which sets the step delay in the cdsutils step, and the 77 degrees is the difference between the old demod phase and the new one. Hopefully this avoids a lockloss in the event we have to revert, but it may not. *fingers crossed*
Here is the command:
cdsutils switch H1:OMC-DCPD_A0 FM10 OFF; cdsutils switch H1:OMC-DCPD_B0 FM10 OFF; cdsutils step H1:OMC-LSC_PHASEROT 1,77 -s 0.065
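For reference, a rough Python equivalent of just the phase-stepping part (the FM10 toggles would still be done with cdsutils switch as above). This is an illustration using pyepics, not the script that was run:
import time
from epics import caget, caput

PHASE_CHAN = 'H1:OMC-LSC_PHASEROT'
N_STEPS, STEP_DEG = 77, 1.0
DELAY = 5.0 / N_STEPS          # ~0.065 s per step, matching the 5 s filter ramp

phase = caget(PHASE_CHAN)
for _ in range(N_STEPS):
    phase += STEP_DEG
    caput(PHASE_CHAN, phase)   # small steps so the OMC servo can follow
    time.sleep(DELAY)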
UPDATE:
It worked :) anti-alias filters off and OMC-LSC phaserot returned to nominal
Joe, Francisco, and I got confused about the reverting and changing of the OMC phase rot around this time.
I opened up an ndscope to see what happened.
Locklossalert had a bug whereby repeat GRD_NOTIFY cell phone calls/texts were not being sent if H1 remained in the same locked/unlocked state throughout. This was seen on at least two occasions, Monday 20th January at 4am and Sunday 12th January at 6am.
New code with the fix was restarted on cdslogin at 16:03 23jan2025. All LLA settings were restored, but the reset cleared the H1 lock-clock (from 4+ hrs).
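Purely as an illustration of the fixed behavior (this is not the Locklossalert code, and the timeout value is assumed), the repeat-notification logic amounts to re-sending after a timeout even when the state has not changed:
import time

REPEAT_AFTER = 3600   # seconds before repeating a notification for an unchanged state (assumed)

def notify_loop(get_lock_state, send_notification):
    last_state, last_sent = None, 0.0
    while True:
        state = get_lock_state()
        now = time.time()
        # notify on a state change, or repeat if the same state has persisted past the timeout
        if state != last_state or (now - last_sent) > REPEAT_AFTER:
            send_notification(state)
            last_state, last_sent = state, now
        time.sleep(60)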
Summary
Q: What is the relationship between the strength of violin mode ring-ups and the number of narrow spectral artifacts around the violin modes? Is there a clear cut-off at which the contamination begins?
A: The answer depends on the time period analyzed. There was an unusual time period spanning from mid-June 2023 through (very approximately) August 2023. During this time period, the number of lines during ring-ups was much greater than in the rest of O4, and the contamination may have begun to appear at lower violin mode amplitudes.
What to keep in mind when looking at the plots.
1. These plots use the Fscan line count in a 200-Hz band around each violin mode region, which is a pretty rough metric and not good for picking up small variations in the line count. It's the best we've got at the moment, and it can show big-picture changes. But on some days contamination is present only in the form of ~10 narrow lines symmetrically arranged around a high violin mode peak (example in the last figure, fig 7). This small jump in the line count may not show up above the usual fluctuations. However, in aggregate (over all of O4) this phenomenon does become an issue for CW data quality. These "slight contamination" cases are also particularly important for answering the question "at what violin mode amplitude does the contamination just start to emerge?" In short, we shouldn't put too much faith in this method for locating a cut-off at a problematic violin mode height.
2. The violin modes may not be the only factor in play, so we shouldn't necessarily expect a very clear trend. For example, consider alog 79825. That alog showed that at least some of the contamination lines are violin mode + calibration line intermodulations. Some of them (the weaker ones) disappeared below the rest of the noise when the violin mode amplitude decreased; others (the stronger ones) remained visible at reduced amplitude. Both clusters vanished when the temporary calibration lines were off. If we asked the question "How high do the violin modes need to be...?" using just these two clusters, we'd get different apparent answers depending on (a) which cluster we chose to track (weak or strong) and (b) which time period we selected (calibration lines on or off). This is because at least some of the contamination depends on the presence and strength of a second line, not just a violin mode.
Looking at the data
First, let's take a look at a simple scatter plot of the violin mode height vs the number of lines identified. This is figure 1. It's essentially an updated version of the scatter plots in alog 71501. It looks like there's a change around 1e-39 on the horizontal axis (which corresponds to peak violin mode height).
However, when we add color-coding by date (figure 2), new features can be seen. There's a shift at the left side of the plot, and an unusual group of high-line-count points in early O4.
The shift at the left side of the plot is likely due to an unrelated data quality issue: combs in the band of interest. In particular, the 9.5 Hz comb, which was identified and removed mid O4, contributes to the line count. Once we subtract out the number of lines which were identified as being part of a comb, this shift disappears (figure 3).
With the distracting factor of comb counts removed, we still need to understand the high-line-count time period. This is more interesting. I've broken the data down into three epochs: start of O4 - June 21, 2023 (figure 4); June 21, 2023 - Sept 1 2023 (figure 5); and Sept 1 2023 - present (figure 6). As shown in the plots, the middle epoch seems notably different from the others.
These dates are highly approximate. The violin mode ring-ups are intermittent, so it's not possible to pinpoint the changes sharply. The Sept 1 date is just the month boundary that seemed to best differentiate between the unusual time period and the rest of O4. The June 21 date is somewhat less arbitrary; it's the date on which the input power was brought back to 60W (alog 70648), which seems a bit suspicious. Note that, with this data set, I can't actually differentiate between a change on June 21 and a change (say) on June 15th, so please don't be misled by the specificity of the selected boundary.
Kiet, Sheila
We recently started looking into whether nonlinearity of the ADC can contribute to this, by looking at the ADC range that we were using in O4a.
The values are shown in the H1:OMC-DCPD_A_WINDOW_{MAX,MIN} channels, which sum the 4 DC photodiodes (DCPDs). These are 18-bit ADCs, so the summed channel should saturate at 4 x 2^17 ~ 524,000 counts.
There are instances that agree with Ansel's report: when there are violin mode ring-ups, we can see a shift in the count baseline.
Jun 29 - Jun 30, 2023: the baseline seems to shift up and stay there for >1 month. The DetChar summary pages show significantly higher violin mode ring-ups in the usual 500-520 Hz region as well as the nearby 480-500 Hz region.
Oct 9, 2023 is when the temporary calibration lines were turned off (72096); the downward shift happened right after the lines were off (after 16:40 UTC).
During this period, we were using ~5% of the ADC range (difference between the max and min channels divided by the total range, roughly -500,000 to +500,000 counts), and it went down to ~2.5% once the shift happened on Oct 9, 2023. We want to do something similar with Livingston, using the L1:IOP-LSC0_SAT_CHECK_DCPD_{A,B}_{MAX,MIN} channels to see the ADC range and the typical count values of those channels.
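A sketch of how that range fraction can be checked (assuming NDS access via gwpy; the time interval here is illustrative):
from gwpy.timeseries import TimeSeries

start, end = '2023-10-08 12:00', '2023-10-08 13:00'   # illustrative interval
full_range = 2 * 4 * 2**17   # summed 4 x 18-bit ADCs: roughly -524288 to +524288 counts
wmax = TimeSeries.get('H1:OMC-DCPD_A_WINDOW_MAX', start, end)
wmin = TimeSeries.get('H1:OMC-DCPD_A_WINDOW_MIN', start, end)
frac = (wmax.value - wmin.value) / full_range
print('mean fraction of ADC range used: %.1f%%' % (100 * frac.mean()))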
Another thing to maybe take a closer look at is the baseline count value increase around May 03 2023. There was a change to the DCPD total photocurrent during that time (69358). It may be worth checking if there is violin mode contamination during the period before that.
Kiet, Sheila
More updates related to the ADC range investigation:
Further points + investigations:
Kiet, Sheila
Following up on the investigation into potential intermixing between higher-order violin modes down to the ~500 Hz region:
The Fscan team compiled a detailed summary of the daily maximum peak height (log10 of peak height above noise in the first violin mode region) for the violin modes near 500 Hz (v1) and 1000 Hz (v2). They also tracked line counts in the corresponding frequency bands: 400–600 Hz for v1 and 900–1000 Hz for v2. This data is available in the Google spreadsheet (LIGO credentials required).
n1_height and n2_height are the max peak heights of v1 and v2, and n1_count and n2_count are the corresponding line counts. There appears to be a threshold in violin mode amplitude beyond which line counts increase (based on the {n1_height, n2_height} vs. {n1_count, n2_count} trends).
Next: We plan to further investigate the lines that appear when both modes are high; the goal is to identify possible intermodulation products using the recorded peak frequencies of the violin modes.
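A minimal sketch of the kind of threshold check described above, assuming the spreadsheet has been exported to a CSV with columns n1_height, n1_count, n2_height, n2_count (the filename is hypothetical):
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('violin_mode_daily_summary.csv')   # hypothetical export of the Google sheet
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, (h, c, label) in zip(axes, [('n1_height', 'n1_count', 'v1 (~500 Hz)'),
                                    ('n2_height', 'n2_count', 'v2 (~1000 Hz)')]):
    ax.scatter(df[h], df[c], s=8)
    ax.set_xlabel(h + ' (log10 peak height above noise)')
    ax.set_ylabel(c + ' (lines in band)')
    ax.set_title(label)
fig.tight_layout()
plt.show()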
J. Kissel, L. Dartez, E. Goetz
In the process of updating the calibration after installing the extra 65k-to-16k digital AA filter we turned on this morning (see 82404, 82412 and 82413), we've updated the "template" pyDARM DARM model parameter set that is the basis for the copy made for every report (in which the model is compared against measurement), and from which the calibration pipeline's model is derived. The changes are relatively simple:
-omc_filter_noncompensating_modules = 9,10 : 9,10
+omc_filter_noncompensating_modules = 8,9,10 : 8,9,10
-omc_filter_file = Common/H1CalFilterArchive/h1iopomc0/H1IOPOMC0_1364929770.txt
+omc_filter_file = Common/H1CalFilterArchive/h1iopomc0/H1IOPOMC0_1421610658.txt
where the "-" lines are the "before" and the "+" lines are the "after." Here's the location of the file, and the corresponding "before" vs. "after" git commit hashes:
/ligo/groups/cal/H1/ifo/pydarm_H1.ini
Previous version f480b0a1
Now new version 17649002
Also (somehow) remembered that another minor problem with the TST stage actuation path was the current CALCS replica of the L2L_DRIVEALIGN_GAIN: the value in pydarm_H1.ini needs to be the same as what's in the front end, H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN. Changed the value from 191.712 to 184.65:
- tst_drive_align_gain = 191.712
+ tst_drive_align_gain = 184.65
This is all changing the same file in the same location, but here's the next iteration's change:
/ligo/groups/cal/H1/ifo/pydarm_H1.ini
ccc02365
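A quick sanity check of that consistency can be done with a few lines of Python (a sketch, assuming EPICS access via pyepics; the ini value is hard-coded here for illustration):
from epics import caget

fe_value = caget('H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN')
ini_value = 184.65   # tst_drive_align_gain from pydarm_H1.ini
print('front end: %.3f  pydarm ini: %.3f  match: %s'
      % (fe_value, ini_value, abs(fe_value - ini_value) < 1e-3))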
In addition to changing the parameter file for the extra OMC DCPD digital AA filtering, we decided to *also* push a somewhat long-standing issue with apparent delays in the actuation functions. We're applying 23.0e-6 [sec] worth of delay to the model of the UIM stage and, more consequentially, 20.2e-6 [sec] worth of delay to the TST stage. Both numbers are informed by the fit to the actuation measurements we just took; see 20250123T211118Z. The updated parameter set has been pushed to git with the following local location and mothership git hash:
/ligo/groups/cal/H1/ifo/pydarm_H1.ini
4d9eb345
The calibration we installed / pushed / exported today will have all three of these changes in play.
After the CAL team changed the OMC digital phase in 82413, the SQZ at the start of the lock went through an excursion much larger than usual, all the way to 5dB of ASQZ before settling around -3.5dB of SQZ (plot). We often have a SQZ excursion at the start of the lock, but it rarely gets above 0dB. We are unsure if it makes sense that the OMC change would affect the SQZ like this, or if we were just unlucky.
I paused SQZ_MANAGER and took SQZ_ANG_ADF to DOWN. I then tuned H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG for the best SQZ and adjusted H1:SQZ-ADF_OMC_TRANS_PHASE to bring H1:SQZ-ADF_OMC_TRANS_SQZ_ANG to zero, effectively setting the setpoint of the servo. Adjusted from -138 to -128 (sdf). We may need to change this again or revert before going back to observing if the SQZ isn't good with a more thermalized IFO.
Also used this time to reduce the HAM7 rejected SHG power. Enabling (not moving) the picos unlocked SQZ, which is surprising.
The supposition of *how* the change in digital anti-aliasing might have impacted the ADF / SQZ loop is as follows:
- The actuation / excitation of the ADF / SQZ tuning system is a single line driven into the ADF's VCXO at 322.0 Hz.
- That ADF field beats against the IFO field, which is influenced by the DARM control loop.
- The new digital AA filter causes -4.5 deg of phase loss (and an increase in amplitude of 0.1%) at 322.0 Hz, and changes the DARM open loop gain in that same way.
- Thus the 322 Hz DEMOD of the IFO vs. SQZ beatnote field that's picked off at 3.125 MHz needs a phase adjustment.
If that model of what's happening is right (I don't really know what I'm talking about), I would have expected the ADF SQZ DEMOD phase to need exactly 4.5 [deg] of phase change, much like the OMC LSC SERVO's DEMOD phase needed changing at its modulation frequency (see LHO:82413). Camilla's main log here suggests a (-128) - (-138) = 10 [deg] phase shift. I very much welcome other mechanisms/models of what has happened, if this change in SQZ behavior is a result of the OMC DCPD digital filter change.
Once we'd been locked for 4 hours, I again tweaked the SQZ angle by finding the angle on either side of good SQZ and going to the middle, then adjusting the ADF phase (diff attached), so that the total phase change since the OMC changes is the 4deg that Jeff predicted.
SQZ at 1kHz+ still isn't as good as I remember, so we touched the OPO temp while in observing as in 80461; no change was needed.
Reverted back to -128 deg, ready for the CAL team to revert their changes.
E. Goetz, J. Kissel, L. Dartez
In previous aLOGs (see, e.g., LHO aLOG 82405), we had to decimate the 524 kHz data offline in order to evaluate the improvement of TEST channels with changes to anti-alias filtering. We expected to see a reduction in artifacts with the addition of 1 extra 65k-to-16k decimation filter. Attached is a figure showing the before and after ratio of ASDs calculated in DTT from the DCPD A0 channel (not the TEST A channels). This figure shows some improvements (though perhaps hard to see visually) and introduces more questions, especially comparing to a plot like in LHO aLOG 82405, attached here as well. Statistics:
Bins above 1% before = 33416
Bins above 1% after = 30273
Bins above 1% before, f < 2000 Hz = 9011
Bins above 1% after, f < 2000 Hz = 8362
Mean before = 1.1087
Mean after = 1.0651
Mean before, f < 2000 Hz = 1.0756
Mean after, f < 2000 Hz = 1.0495
So we do see that the number of bins above 1% has gone down as expected (good), but the raw number of bins above 1% is much different from our expectations. Figure 1 simply seems far noisier than Figure 2. What is the cause of this? It would seem to imply that the IOP downsampling is not simply grabbing 1 value for every 32 samples in a consistent manner, or perhaps it's in the way DTT grabs and exports the PSD data. I'll have to keep digging, but this seems strange.
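For reference, the bins-above-1% statistic amounts to counting frequency bins whose before/after ASD ratio exceeds unity by more than 1%. A sketch of how it could be reproduced from a two-column (frequency, ratio) export of the DTT trace (the filename is hypothetical, and the 1% reading of the statistic is my assumption):
import numpy as np

freq, ratio = np.loadtxt('dcpd_a0_asd_ratio.txt', unpack=True)   # hypothetical DTT export
for fmax in (None, 2000.0):
    sel = np.ones_like(freq, dtype=bool) if fmax is None else (freq < fmax)
    print('f <', fmax, ': bins above 1% =', np.count_nonzero(ratio[sel] > 1.01),
          ', mean =', ratio[sel].mean())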
This may be improved by using the double-precision version of diaggui, 'diaggui_test', which creates much less noisy ASDs, especially at higher frequencies.
In the attached image, the single-precision ASD is on the left and the double-precision one is on the right.
There may be a DTT export precision issue at play here with the ASD, as Erik suggests. I wanted to carry out a time series analysis offline, so I exported all of the data before and after for the 16k (H1:OMC-DCPD_A_OUT_DQ) and 524k (H1:OMC-DCPD_A0_OUT) channels. Then I computed the PSD of the 524k channel and the 16k channel, plus downsampled the 524k channel and computed its PSD. Then I plotted the ratio of the 16k PSD over the 524k PSD (cut off at the 16k Nyquist) to inspect the data for excess noise before and after the addition of the extra 65k-to-16k downsampling filter. I don't understand the red curve, but the blue curve seems reasonable, as do the black and grey curves. The blue curve shows excess noise that is then suppressed by the additional filter, seen in the absence of large ratio values in the black and gray curves. This result shows that the extra filtering is helpful, but until we can push a new calibration, we'll have to hold off adding it in.
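A sketch of that offline check (assuming the two channels have been exported to .npy files; the filenames and segment lengths are illustrative, not the actual analysis script):
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs_fast, fs_slow = 2**19, 2**14          # 524288 Hz and 16384 Hz sample rates
x_fast = np.load('dcpd_a0_524k.npy')     # H1:OMC-DCPD_A0_OUT export (hypothetical file)
x_slow = np.load('dcpd_a_16k.npy')       # H1:OMC-DCPD_A_OUT_DQ export (hypothetical file)

# 1 Hz resolution PSDs, so both channels share the same frequency grid up to 8192 Hz
f_fast, p_fast = signal.welch(x_fast, fs=fs_fast, nperseg=fs_fast)
f_slow, p_slow = signal.welch(x_slow, fs=fs_slow, nperseg=fs_slow)

# ratio below the 16k Nyquist; excess above ~1 indicates aliased power in the 16k channel
keep = f_fast <= fs_slow / 2
plt.loglog(f_slow[1:], p_slow[1:] / p_fast[keep][1:])
plt.xlabel('Frequency [Hz]')
plt.ylabel('16k PSD / 524k PSD')
plt.show()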