TITLE: 08/31 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
I came in and H1 was locked, but we lost lock shortly after from the dreaded double PI ring-up.
Relocked without an initial alignment (IA).
Another unknown lockloss.
I requested an IA, which went easy peasy.
The locking process did have a DRMI lockloss but didn't go all the way back to DOWN.
Got back up and running in NLN around 18:32 UTC, just in time to postpone the calibration until 22:00 UTC.
Ran Francisco's ETMX drivealign script to try to get KAPPA back to 1.
Ran a calibration sweep.
At 23:00 UTC I gave NUC33 a hard shutdown, and found out that the spare NUC I'd like to replace it with is mounted to the wall in such a way that it is not easily removed without dismounting the monitor bracket.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
17:12 | Vac | Gerardo | VPW | N | Checking on parts | 17:53 |
Lockloss @ 23:14 UTC - link to lockloss tool
No obvious cause; maybe some small motion by ETMX immediately before lockloss like we've seen before, but it's much smaller than usual.
H1 back to observing at 00:14 UTC.
To go to observing, I reverted the SDF diff on the susetmx model for the new ETMX drivealign L2L gain provided by Francisco's script (screenshot attached; see alog79841). This had not been updated in Guardian-space, so it was set to the previous setpoint during TRANSITION_FROM_ETMX. I've updated the gain to 191.711514 in lscparams.py, saved it, and loaded ISC_LOCK.
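For anyone repeating this check, here is a minimal sanity-check sketch (an illustration only; it assumes pyepics is available on a workstation with EPICS access and uses the gain value quoted above) that compares the live drivealign gain against the value written into lscparams.py:

# Minimal sanity-check sketch (assumes pyepics; not part of the actual workflow).
# Compares the live front-end gain against the value written into lscparams.py,
# to confirm Guardian and the front end agree before going back to observing.
from epics import caget

EXPECTED_GAIN = 191.711514  # value written into lscparams.py in this entry

live_gain = caget('H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN')
if live_gain is None:
    print('Could not read the channel (no EPICS access from this machine?)')
elif abs(live_gain - EXPECTED_GAIN) > 1e-3:
    print(f'Mismatch: live gain {live_gain:.6f} vs lscparams value {EXPECTED_GAIN:.6f}')
else:
    print('Live gain matches the value loaded into ISC_LOCK.')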
The following gains were set to zero:
caput H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN 0
caput H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN 0
Then I took ISC_LOCK to NLN_CAL_MEAS.
At 22:13 UTC I ran the calibration with the following command:
pydarm measure --run-headless bb
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml saved
diag> quit
EXIT KERNEL
2024-08-31 15:18:13,008 bb measurement complete.
2024-08-31 15:18:13,008 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml
2024-08-31 15:18:13,008 all measurements complete.
anthony.sanchez@cdsws29:
anthony.sanchez@cdsws29: gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini;gpstime
PDT: 2024-08-31 15:18:49.210990 PDT
UTC: 2024-08-31 22:18:49.210990 UTC
GPS: 1409177947.210990
2024-08-31 22:41:50,395 | INFO | Commencing data processing.
Traceback (most recent call last):
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
run(args.inputFile, args.outPath, args.record)
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
digestedObj[scan] = digestData(results[scan], data)
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
coh = np.float64( cohArray[index] )
File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
new = super().__getitem__(item)
File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
new = super().__getitem__(item)
File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0
ICE default IO error handler doing an exit(), pid = 2858202, errno = 32
PDT: 2024-08-31 15:41:53.067044 PDT
UTC: 2024-08-31 22:41:53.067044 UTC
GPS: 1409179331.067044
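For context on the failure: the IndexError above means the coherence array fetched for one of the scans came back empty (size 0), so indexing bin 3074 fails. A hedged sketch of the kind of guard that would make this fail with a clearer message (illustrative only, not the actual simuLines.py code; the channel argument is a hypothetical addition):

# Illustrative guard only, not the actual simuLines.py code. The traceback above shows
# cohArray has size 0 (the NDS fetch returned no data), so indexing it raises IndexError.
import numpy as np

def safe_coherence_at(cohArray, index, channel='<channel>'):
    """Return the coherence value at `index`, with explicit checks for empty or short data."""
    if len(cohArray) == 0:
        raise RuntimeError(
            f'No coherence data returned for {channel}; '
            'was data available from NDS for the requested span?')
    if index >= len(cohArray):
        raise RuntimeError(
            f'Requested bin {index} but only {len(cohArray)} bins returned for {channel}.')
    return np.float64(cohArray[index])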
The following channels were then restored to their previous values:
H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN
H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN
I then took ISC_LOCK back to NOMINAL_LOW_NOISE.
TITLE: 08/31 Eve Shift: 2300-0500 UTC (1600-2200 PDT), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY: H1 has been locked for 4.5 hours. Tony and I are wrapping up some calibration time to take the regular sweeps while Louis helped troubleshoot (see their alogs for details). Will resume observing soon.
anthony.sanchez@cdsws29: python3 /ligo/home/francisco.llamas/COMMISSIONING/commissioning/k2d/KappaToDrivealign.py
Fetching from 1409164474 to 1409177074
Opening new connection to h1daqnds1... connected
[h1daqnds1] set ALLOW_DATA_ON_TAPE='False'
Checking channels list against NDS2 database... done
Downloading data: |█████████████████████████████████████████████████████████████████████████████████████| 12601.0/12601.0 (100%) ETA 00:00
Warning: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN changed.
Average H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT is -2.3121% from 1.
Accept changes of
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN from 187.379211 to 191.711514 and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN from 184.649994 to 188.919195
Proceed? [yes/no]
yes
Changing
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 191.7115136134197
anthony.sanchez@cdsws29:
I'm not sure if the value set by this script is correct. KAPPA_TST was 0.976879 (-2.3121%) at the time this script looked at it. The L2L drivealign gain in H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN was 184.65 at the time of our last calibration update, which is when KAPPA_TST was set to 1. So, to offset the drift in the TST actuation strength, we should change the drivealign gain to 184.65 * 1.023121 = 188.919. This script chose to update the gain to 191.711514 instead; this is 187.379211 * 1.023121, with 187.379211 being the gain value at the time the script was run. At that time, the drivealign gain was already accounting for a 1.47% drift in the actuation strength (this has so far not been properly compensated for in pyDARM and may be contributing to the error we're currently seeing; more on that later this weekend in another post). So I think this script should be basing corrections as percentages applied with respect to the drivealign gain value at the time when the kappas were last set (i.e. just after the last front-end calibration update), *not* at the current time. Also, the output from that script claims that it also updated H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN, but I trended it and it hadn't been changed. Those print statements should be cleaned up.
To close out this discussion: it turns out that the drivealign adjustment script is doing the correct thing. Each time the drivealign gain is adjusted to counteract the effect of ESD charging, the percent change reported by KAPPA_TST should be applied to the drivealign gain at that time, rather than to what the gain was when the kappa calculations were last updated.
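For reference, the arithmetic behind the two candidate values as a plain-Python check (the -2.3121% figure is the rounded value quoted above, so the products agree with the script's output only to rounding):

# Worked check of the two candidate corrections discussed above (plain arithmetic only).
kappa_tst = 0.976879            # KAPPA_TST when the script ran (-2.3121% from 1)
factor = 1 + (1 - kappa_tst)    # 1.023121: the percent deviation applied as a correction
gain_at_last_cal = 184.649994   # drivealign gain when the kappas were last set
gain_at_runtime = 187.379211    # drivealign gain when the script ran

print(gain_at_last_cal * factor)  # ~188.92: correcting relative to the last calibration update
print(gain_at_runtime * factor)   # ~191.71: correcting relative to the current gain (what the script does)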
If the IFO is locked and thermalized when the normal calibration measurement time rolls around today, please do the following before moving to ISC_LOCK state 700 (NLN_CAL_MEAS):
caput H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN 0
caput H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN 0
Then follow the normal calibration procedure at https://cdswiki.ligo-wa.caltech.edu/wiki/TakingCalibrationMeasurements and go back to NLN. After going back to NLN (state 600):
caput H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN 115
caput H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN 5000
caput H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN 5000
caput H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN 1430
caput H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN 3619
caput H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN 30000
caput H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN 30000
caput H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN 40
caput H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN 30
caput H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN 30
caput H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN 60
caput H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN 2000
This is to facilitate a test for the gstlal pipeline. I will get something similar into Guardian before the next calibration measurements.
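A minimal sketch of how these steps could be scripted (an illustration of the intent only, not the Guardian implementation; it assumes pyepics and write access from a control-room workstation, and the nominal values are the ones listed above):

# Zero the listed Pcal oscillator gains before NLN_CAL_MEAS, and restore them after NLN.
from epics import caput

NOMINAL_GAINS = {
    'H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN': 115,
    'H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN': 5000,
    'H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN': 5000,
    'H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN': 1430,
    'H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN': 3619,
    'H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN': 30000,
    'H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN': 30000,
    'H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN': 40,
    'H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN': 30,
    'H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN': 30,
    'H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN': 60,
    'H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN': 2000,
}

def zero_pcal_lines():
    """Zero the listed Pcal oscillator gains before moving to NLN_CAL_MEAS (state 700)."""
    for pv in NOMINAL_GAINS:
        caput(pv, 0)

def restore_pcal_lines():
    """Restore the nominal gains after returning to NLN (state 600)."""
    for pv, gain in NOMINAL_GAINS.items():
        caput(pv, gain)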
Unknown Lockloss
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1409159062
No PI ring-ups.
No wind gusts.
Sat Aug 31 08:11:26 2024 INFO: Fill completed in 11min 22secs
Verbals alarmed about a PI 28 ring-up at 14:59:33 UTC.
Almost immediately, PI 29 started to ring up as well at 14:59:40 UTC.
I watched the damper settings for PI 29 move, trying to find the right configuration, but nothing happened with PI 28, which had a larger ring-up than PI 29.
Verbals announced DCPD saturations at 15:01:14 UTC.
Saturations started at 15:01:08 UTC.
We were unlocked by 15:01:20 UTC.
This was the fastest PI-related lockloss I've encountered.
Lockloss
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1409151698
TITLE: 08/31 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 137Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
IFO has been locked for 1 hour and 33 minutes.
Looks like Oli was up with the IFO last night.
NUC33 is having its issue again. I'm going to reboot NUC33, and when it comes up I'll remove the camera feeds from it and see if it works better without them.
NUC33's clock read 00:00 when I hard-rebooted it, suggesting that it had not been working for close to 8 hours.
I have exited the camera feeds that were up on NUC33.
Current time is 14:56 UTC (07:56 PDT).
Yesterday it failed around 2 PM, so perhaps today it will last a bit longer than that.
Had to help H1 Manager out because we were in NOMINAL_LOW_NOISE but the OPO was having trouble locking with the ISS. OPO trans was struggling to get to 70 µW, so it definitely couldn't reach its setpoint of 80 µW. I changed the setpoint to 69 since it was maxing out around 69.5, and I changed the OPO temperature and accepted it in SDF.
TITLE: 08/31 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Observing at 144Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Very quiet evening. H1 relocked and started observing at 00:12 UTC; current lock stretch is almost at 5 hours.
LOG: No log for this shift.
TITLE: 08/30 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
15:40 UTC Fixed NUC33 issue alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79810
16:30 UTC Forklift started driving from the VPW to the Water tank.
16:40 UTC Forklift operation stopped
21:51 UTC Superevent Candidate S240830gn
After 16 hours and 2 minutes we finally got a lockloss at 22:32 UTC.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1409092378
Ryan cleared the LSC calibration filters.
LVEA WAP turned on for PEM noise hunting.
CER fans shut off for noise hunting.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
14:39 | FAC | Karen | Optics Lab | No | Technical Cleaning | 15:04 |
16:24 | PEM | Robert, Sam, Carlos, Genivieve | Y-arm | N | Out Door Testing of CE PEM Equipment, Robert & Carlos back | 19:27 |
22:35 | PEM | Robert, Sam | CER | N | Noise tracking | 23:27 |
23:09 | VAC | Gerardo | HAM Shaq | N | Checking Vacuum system | 23:20 |
H1 back to observing at 00:12 UTC
Ansel Neunzert, Evan Goetz, Alan Knee, Tyra Collier, Autumn Marceau
Background
(1) We have seen some calibration line + violin mode mixing in previous observing runs. (T2100200)
(2) During the construction of the O4a lines list, it was identified (by eye) that a handful of lines correspond to intermodulation products of violin modes with permanent calibration lines. (T2400204) It was possible to identify this because the lines appeared in noticeable quadruplets with spacings identical to those of the low-frequency permanent calibration lines.
(3) In July/August 2023, six temporary calibration lines were turned on for a two-week test. We found that this created many additional low-frequency lines, which were intermodulation products of the temporary lines with each other and with permanent calibration lines. As a result, the temporary lines were turned off. (71964)
(4) It's been previously observed that rung-up violin modes correlate with widespread line contamination near the violin modes, to an extent that has not been seen in previous observing runs. The causes are not understood. (71501, 79579)
(5) We’ve been developing code which groups lines by similarities in their time evolution. (code location) This allows us to more quickly find groups of lines that may be related, even when they do not form evenly-spaced combs.
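As a rough illustration of item (5), here is a minimal sketch of the general idea only (not the code referenced above); it assumes each line's history is available as a time series of daily heights:

# Group spectral lines whose daily height histories evolve similarly, by hierarchical
# clustering on correlation distance. Sketch only; the real analysis differs in detail.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_line_histories(histories, threshold=0.3):
    """histories: (n_lines, n_days) array of line heights; returns a cluster ID per line."""
    # Normalize each history so the clustering compares shapes, not overall strengths.
    normed = histories - histories.mean(axis=1, keepdims=True)
    normed /= histories.std(axis=1, keepdims=True) + 1e-12
    dists = pdist(normed, metric='correlation')  # 1 - Pearson correlation between line pairs
    tree = linkage(dists, method='average')
    return fcluster(tree, t=threshold, criterion='distance')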
Starting point: clustering observations
All lines on the O4a lists (including unvetted lines, for which we have no current explanations) were clustered according to their O4a histories. The full results can be found here. This discussion focuses on two exceptional clusters in the results; the clusters are referred to here by their IDs (433 and 415, arbitrary tags).
Cluster ID 415 was the most striking. It’s very clear from figure 1 that it corresponds to the time periods when the temporary calibration lines were on, and indeed it contains many of the known intermodulation products. However, it also contains many lines that were not previously associated with the calibration line test.
Cluster ID 433 has the same start date as cluster 415, but its end date is much less defined, and apparently earlier.
Given the background described above, we were immediately suspicious that the lines in the clusters could be explained as intermodulation products of the temporary calibration lines with rung-up violin modes. We suspected that the "sharper" first cluster was composed of stronger lines, and the second cluster of weaker lines. If the violin mode(s) responsible decreased in strength over the time period when the temporary calibration lines were present, the intermodulation products would also decrease. Those which started out weak would disappear into the noise before the end of the test, explaining the second cluster's "soft" end.
Looking more closely - can we be sure this is violin modes + calibration lines?
We wanted to know which violin mode frequencies could explain the observed line frequencies. To do this, we had to work backward from the observed frequencies to locate the violin mode peaks that would best explain the lines. It's a bit of a pain; here's some example code. (Note: the violin modes don't show up in the clustering results directly, unfortunately. Their positions on the lines list aren't precise enough; they're treated as broad features, while here we need to know precise peak locations. Also, the line height tracking draws from lalsuite's spec_avg_long, which uses the number of standard deviations above the running mean and thus tends to hide broader features.)
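A hedged sketch of that backward search (illustrative only, not the linked example code; the calibration line frequencies and band edges below are placeholders to be replaced with the real values):

# For an observed unexplained line, each first-order intermodulation hypothesis
# f_obs = f_violin +/- f_cal implies a candidate violin-mode frequency; candidates that
# land inside a violin band are worth checking against the spectrum for a real peak.
CAL_LINE_FREQS = [10.0, 20.0, 30.0]               # Hz; placeholders, not the actual H1 line set
VIOLIN_BANDS = [(495.0, 520.0), (980.0, 1035.0)]  # Hz; approximate 1st/2nd harmonic regions

def candidate_violin_freqs(f_obs, cal_freqs=CAL_LINE_FREQS, bands=VIOLIN_BANDS):
    """Return (violin frequency, cal line) pairs that would explain f_obs to first order."""
    candidates = []
    for f_cal in cal_freqs:
        for f_v in (f_obs - f_cal, f_obs + f_cal):
            if any(lo <= f_v <= hi for lo, hi in bands):
                candidates.append((f_v, f_cal))
    return candidates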
As an example, I’ll focus here on the second violin mode harmonic region (around 1000 Hz), where the most likely related violin mode was identified by this method as 1008.693333 Hz.
Figure 2 shows a more detailed plot of the time evolution of lines in the "sharper" cluster, along with the associated violin mode peak. Figure 4 shows a more detailed plot of the time evolution of lines in the "softer" cluster, along with the same violin mode peak. These plots support the hypothesis that (a) the clustered peaks do evolve with the violin mode peak in question, and (b) they were split into two clusters because the "softer" cluster contains weaker lines, which become indistinguishable from the rest of the noise before the end of the test.
Figures 5 and 6 show a representative daily spectrum during the contaminated time, and highlight the lines in question. Figure 5 shows the first-order products of the associated violin mode and the temporary calibration lines. Figure 6 overlays that data with a combination of all the lines in both clusters. Many of these other cluster lines are identifiable (using the linked code) as likely higher order intermodulation products.
Take away points
Calibration lines and violin modes can intermix to create a lot of line contamination, which is especially a problem when violin modes are at high amplitude. The intermodulation products can be difficult to spot without detailed analysis, even when they're strong, because it's hard to associate groups of lines and calculate the required products. Reiterating a point from 79579, this should inspire caution when considering adding calibration lines.
However, we still don’t know exactly how much of the violin mode region line contamination observed in O4 can be explained specifically using violin modes + calibration lines. This study lends weight to the idea that it could be a significant fraction. But preliminary analysis of other contaminated time periods by the same methods doesn’t produce such clear results; this is a special case where we can see the calibration line effects clearly. This will require more work to understand.
Ansel Neunzert, Evan Goetz, Owen (Zhiyu) Zhang
Summary
Following the move of PSL control box 1 to a separate power supply (see LHO aLOG 79593), we searched the recent Fscan spectra for any evidence of the 9.5 Hz comb triplet artifacts. The configuration change looks promising: there is strong evidence that it has had a positive effect. However, there are a few important caveats to keep in mind.
Q: Does the comb improve in DARM?
A: Yes. However, it has changed/improved before (and later reversed the change), so this is not conclusive by itself.
Figures 1-4 show the behavior of the comb in DARM over O4 so far. Figures 1 and 2 are annotated with key interpretations, and Figure 2 is a zoom of Figure 1. Note that the data points are actually the maximum values within a narrow spectral region (+/- 0.11 Hz, 20 spectral bins) around the expected comb peak positions. This is necessary because the exact frequency of the comb shifts unpredictably, and for high-frequency peaks this shift has a larger effect.
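A minimal sketch of that peak-tracking step (illustrative only, not the actual Fscan analysis code):

# For each expected comb tooth, take the maximum of the spectrum within +/- 0.11 Hz so that
# small unpredictable shifts in the comb frequency don't cause the peak to be missed.
import numpy as np

def track_comb_peaks(freqs, power, expected_peaks, half_width=0.11):
    """freqs, power: 1-D arrays for a daily spectrum; expected_peaks: comb tooth frequencies (Hz)."""
    heights = []
    for f0 in expected_peaks:
        mask = np.abs(freqs - f0) <= half_width
        heights.append(power[mask].max() if mask.any() else np.nan)
    return np.array(heights)

# Example: teeth of a 9.5 Hz comb up to 2 kHz (the triplet offsets are omitted here).
expected = np.arange(9.5, 2000.0, 9.5)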
Based on these figures, there was a period in O4b when the comb’s behavior changed considerably, and it was essentially not visible at high frequencies in daily spectra. However, it was stronger at low frequencies (below 100 Hz) during this time. This is not understood, and in fact has not been noted before. Maybe the coupling changed? In any case, it came back to a more typical form in late July. So, we should be aware that an apparent improvement is not conclusive evidence that it won’t change again.
However, the recent change seems qualitatively different. We do not see evidence of low or high frequency peaks in recent days. This is good news.
Q: Does the comb improve in known witness channels?
A: Yes, and the improvement is more obvious here, including in channels where the comb has previously been steady throughout O4. This is cause for optimism, again with some caveats.
To clarify the situation, I made similar history plots (Figures 5-8) for a selection of channels that were previously identified as good witnesses for the comb. (These witness channels were initially identified using coherence data, but I’m plotting normalized average power here for the history tracks. We’re limited here to using channels that are already being tracked by Fscans.)
The improvement is more obvious here, because these channels don’t show the kind of previous long-term variation that we see in the strain data. I looked at two CS magnetometer channels, IMC-F_OUT_DQ, and LSC-MICH_IN1. In all cases, there’s a much more consistent behavior before the power supply isolation, which makes the improvement that much more convincing.
Q: Is it completely gone in all known witness channels?
A: No, there are some hints of it remaining.
Despite the dramatic improvements, there is subtle evidence of the comb remaining in some places. In particular, as shown in Figure 9, you can still see it at certain high frequencies in the IMC-F_OUT channel. It’s much improved from where it was before, but not entirely gone.
Just an update that this fix seems to be holding. Tracking the comb height in weekly-averaged spectra shows clear improvement (plot attached). The combfinder has not picked up these combs in DARM recently, and when I spot-check the daily and weekly spectra I see no sign of them by eye, either.
TITLE: 08/30 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 9mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1 unlocked about a half hour ago after a 16 hour lock stretch. Currently running an initial alignment and will start locking immediately after.