C. Swain, L. Dartez, J. Kissel
Finished characterizing the spare whitening chassis, OMC DCPD S2300004.
The last two files are text documents containing updated notes on gathering the transfer function measurement data and noise measurement data of the whitening chassis, respectively.
Adding more details to OMC DCPD S2300004 Characterization for better accessibility:
Whitening ON Fit:
| Channel | Fit Zeros | Fit Poles | Fit Gain |
|---|---|---|---|
| DCPDA | [0.997] Hz | [9.904e+00, 4.401e+04] Hz | 435752.409 |
| DCPDB | [1.006] Hz | [9.994e+00, 4.377e+04] Hz | 433177.144 |
These results align very well with the goal of a [1:10] whitening chassis.
The fit zeros differ from the model by 0.3% (DCPDA) and 0.6% (DCPDB), and the ~10 Hz fit poles differ by 0.96% (DCPDA) and 0.06% (DCPDB). As all the differences are below 1%, the agreement with the model is satisfactory.
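As a quick cross-check, a minimal Python sketch of the fractional differences quoted above (fit values copied from the table):

# Percent difference of the fitted zeros/poles from the nominal 1:(10, 44k) Hz model.
nominal = {"zero": 1.0, "low pole": 10.0, "high pole": 44e3}   # Hz
fits = {
    "DCPDA": {"zero": 0.997, "low pole": 9.904, "high pole": 4.401e4},
    "DCPDB": {"zero": 1.006, "low pole": 9.994, "high pole": 4.377e4},
}
for chan, fit in fits.items():
    for name, nom in nominal.items():
        print(f"{chan} {name}: {100 * abs(fit[name] - nom) / nom:.2f} % from model")

This reproduces the 0.3 / 0.6 % zero and 0.96 / 0.06 % (~10 Hz pole) differences; the ~44 kHz poles also agree to within about half a percent.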
Noise Measurements:
Average ASD for Whitening ON: 300 nV/rtHz
Average ASD for Whitening OFF: 30 nV/rtHz
Average ASD SR785 Noise Floor Level: 20 nV/rtHz
Whitening ON and Whitening OFF noise is consistent with OMC DCPD S2300003 noise measurements. Noise Floor is slightly higher than the S2300003 Noise Floor measurements (~20nV/rtHz compared to ~6nV/rtHz). This difference in noise floor measurements could be caused by a difference in measurement setup when gathering S2300003 data compared to S2300004 data.
Overall:
The measurements of the OMC DCPD S2300004 whitening chassis align well with what is to be expected from a z:p = [1 : (10, 44e3)] Hz whitening filter, with an average ASD noise floor of 300 nV/rtHz in the Whitening ON state.
SR785 noise floor level is slightly higher than previously recorded from the OMC DCPD S2300003 whitening chassis but does not contribute to either the Whitening OFF or Whitening ON states, so this difference may effectively be ignored.
[Daniel, Keita, Sheila, Jenne]
After checking when we do / don't offload the OM suspensions: we only offload them in the MICH and SRY initial alignment states, after those states have completed successfully. We do not offload them in DRMI ASC.
Also, from Ibrahim's alog 71183, we haven't succeeded with MICH or SRY initial alignment since *before* we turned on the OM2 heater. So we're probably just in a situation where OM2 needs to be moved by hand, to compensate for some different alignment due to its being warmer and to get the beam more centered on the AS; then things will be fine. MICH will lock and align during the initial alignment sequence, and then OM1 and OM2 will be offloaded. We will probably also need to help OM2 by hand again after we turn off the heater.
If this is not sufficient to ameliorate most of our problems, then we should engage and offload some AS WFS centering before attempting to lock MICH. There are two ways we could do this, either add AS WFS centering and offloading to the Input Align (or PRX) states, or we could insert the AS_Single_bounce alignment and offloading states after PRX and before PREP_for_MICH. Since these options are before the BS is aligned (which happens after MICH is locked), we'd be offloading OM1 and OM2 under the assumption that the BS alignment is 'close' (and OM1 and OM2 will get re-offloaded again after the BS is fully aligned, so it doesn't matter at these states if they're not perfect). Also though, the BS has to be quite close for the Yarm green initial alignment to have succeeded, so the assumption that the BS is 'close' is probably fine.
For now, I'm not going to make any changes to the guardian, and we'll see how initial alignment goes after hand-aligning OM2.
Okay, even worse.
We *do* use the AS WFS for the Xarm input (which I hadn't checked earlier), and the DC alignment of the AS WFS works, and pushes the OMs around. However, the offloading state (which is what I had checked) for the Input_Align was only offloading the RMs (which weren't in use, so were probably "offloading" zeros). I have added OM1 and OM2 to the Input align offload states (for both Xarm and Yarm). We're giving it a try now. The Input align certainly moved OM2, but since it wasn't offloaded, it went back to where it came from.
Okay - the Input align offload now correctly offloads OM1 and OM2. This should be much better now.
This seems to have worked nicely. MICH alignment worked automatically without any further intervention.
Daniel also notes that the demod phase seems much more reasonable now, and most of the signal is correctly in the Q phase. So, the apparent mis-phasing was likely due to the very poor alignment.
Attached shows that the demod phases are OK.
Tue Jul 11 10:06:01 2023 INFO: Fill completed in 6min 0secs
Around 9am I turned the corner station dust monitor pump back on. I had powered it down last Friday with concerns of overheating. This pump runs hotter than the end station pumps and is within operating temps.
J. Kissel

Executive Summary: The current, 2023-03-21, OMC DCPD electronics' compensation filters "NewV2A" and "NewAW" are still very accurately compensating for the frequency dependence of the analog electronics' response, so this is NOT a contributing factor to the complexity of the low frequency sensing function, AND the response continues to be stable in time.

As a part of the continued investigation as to why the low-frequency sensing function is so complicated (see page 1 of H1_calibration_report_20230628T015112Z.pdf from the LHO:70908 report of the measurement from LHO:70902), I wanted to rule out any possibility that the OMC DCPD electronics' analog response had changed (like we'd seen during the initial 2022 "burn in" period of the electronics -- see the 2022 data within 20230310_H1_DCPDTransimpedanceAmp_OMCA_timeratio.pdf from LHO:68167).

See the first and second attachments. These show a comparison of the remote excitations of the OMC DCPD electronics chain between the second set of "while in nominal low noise" measurements from 2023-04-03 (i.e. from LHO:68377) in BLUE and the ones taken this morning with no light on the DCPDs (the IMC was OFFLINE) in RED. This transfer function, driving through the electronics *and* the compensation for those electronics' response, continues to behave as "flat below 500 Hz." Thus the compensation is still doing an excellent job at compensating for the low-frequency response of the electronics. In fact, with no light on the DCPDs, the data is cleaner below 100 Hz.

This puts 3 data points "on the board" where the response has not changed within the precision / accuracy that matters:
- 2023-03-10
- 2023-04-03
- 2023-07-11

If we agree that three makes a pattern, then we shouldn't need to worry about this again until the electronics change. Since we're experimentalists, I'll probably measure again in a month or two just to confirm.

For future Jeff:
(1) You can do this in ~20 minutes on a Tuesday morning.
(2) Take IMC to OFFLINE, or if folks need the beam, make sure the fast shutter is closed.
(3) Open templates from
/ligo/svncommon/CalSVN/aligocalibration/trunk/Common/Electronics/H1/SensingFunction/OMCA/Data/
20230711_H1_TIAxWC_OMCA_S2100832_S2300003_DCPDA_RemoteTestChainEXC.xml
20230711_H1_TIAxWC_OMCA_S2100832_S2300003_DCPDB_RemoteTestChainEXC.xml
and update the references.
(4) One DCPD at a time, turn ON the analog relay to allow the H1:OMC-TEST_DCPD_EXC to head out through the DAC, the whitening chassis, and into the TEST input of the TIA. These relays, H1:OMC-DCPD_A_RELAYSET and H1:OMC-DCPD_B_RELAYSET, can be found from the sitemap > OMC control > "sum" "norm" "null" sub screen > Relays screen.
(5) Hit go on the measurement.
(6) Restore the relays to OFF. Done!
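If one ever wants to script step (4) instead of clicking through the MEDM screens, something like the following sketch should work (assuming python ezca is available; the binary 1/0 relay value convention is an assumption):

from ezca import Ezca  # CDS EPICS channel-access wrapper

ez = Ezca(ifo='H1')
ez['OMC-DCPD_A_RELAYSET'] = 1   # assumed ON: route OMC-TEST_DCPD_EXC to the TIA TEST input
# ... run the DTT template measurement here ...
ez['OMC-DCPD_A_RELAYSET'] = 0   # assumed OFF: restore the relay when done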
TITLE: 07/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 11mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY: 13:48 hour lock just ended so we can begin maintenance work. SUS in-lock charge measurements successfully ran without killing the lock.
TITLE: 07/11 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Commissioning
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 12mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
14:20 UTC Magnetic injections started
14:21 UTC Dropped out of OBSERVING and into Commissioning.
Been pretty quiet.
SEC_ESD_INJECTIONS are still running.
Current IFO Status: NOMINAL_LOW_NOISE, not Observing, and preparing for 4 hours of down time for Maintenance Day
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:44 | FAC | Karen | VAC Prep | N | Technical cleaning | 16:44 |
Current IFO Status: NOMINAL_LOW_NOISE & OBSERVING
Range: 140.9 Mpc
The wind has died down and the Violins look good.
TITLE: 07/11 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 138Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 23mph Gusts, 18mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
Winds have been up and down tonight with some gusts above 30mph.
Current IFO Status: NOMINAL_LOW_NOISE & OBSERVING for 6 Hours
TITLE: 07/11 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 1:17 UTC. Violins have been coming down nicely since the earthquake and the lock acquisition.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:31 | PCAL | Rick/Dripta | PCAL Lab | Y (LOCAL) | Measurements | 00:31 |
There has been excess DARM noise, mostly in the 20 to 40 Hz region, at LHO recently. Gabriele and others have been investigating the potential role of the 2.6 Hz peak in this noise; for more info on their work please see alogs 71005, 71092, 71205. This noise can be seen as excess glitches in the 20-40 Hz band, as seen in the first plot for July 2.
This noise got somewhat worse from June 22 onwards, which can be seen in the glitchgram of the day as an increase in glitches a few hours before the end of the day. Marissa pointed out during a DetChar call that GravitySpy has been classifying a lot of glitches as Fast Scattering. The time-frequency spectrograms (second file) of these glitches do not look very similar to the fast scatter at LLO, but as long as they are high frequency (which these are) and could be some form of scatter, this classification seems apt.
The third plot below shows all the omicron triggers between Jun 30 and Jul 3, and the fourth plot shows those classified as Fast Scattering by GravitySpy above a confidence of 0.9. Furthermore, the fifth plot shows an increase in the number of glitches classified as Fast Scattering after June 22.
From the Q scan (sixth figure), we can see that this is ~5 Hz Fast Scatter, which means the scattering surface is moving at ~2.5 Hz. When microseism is low, anthropogenic motion at f Hz shows up as fast scatter at 2f Hz. More details on this noise modeling can be found in G2300482. The sixth figure is the actual noise; the seventh figure is a model based on 2.5 Hz motion.
Assuming the peak frequency to be 30 Hz, the scattering surface is moving at 15 µm/s.
Substituting V_scatter = 15 µm/s and f_scatter = 2.5 Hz into V_scatter = A_scatter * ω_scatter = 2π f_scatter * A_scatter, we get A_scatter ~ 0.96 µm. Somewhere there is a scattering surface moving about 2 microns peak to peak at 2.5 (or 2.6) Hz (it's very difficult to distinguish between 2.5 Hz and 2.6 Hz in the Q scan), causing this noise. Not sure if we have had injections around this frequency in the ACBs.
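The same arithmetic as a minimal Python sketch (values from the text above):

import numpy as np

v_scatter = 15e-6   # m/s, surface velocity inferred from the 30 Hz peak
f_scatter = 2.5     # Hz, surface motion frequency
A_scatter = v_scatter / (2 * np.pi * f_scatter)   # from V = A * omega
print(f"A_scatter = {A_scatter * 1e6:.2f} um, ~{2 * A_scatter * 1e6:.0f} um peak to peak")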
Today we had the SQZ ASC issue again. The first attached figure shows the related signals when the SQZ ASC ran away. It seems that ANG P/Y ran away for some reason, SQZ-ASC_TRIGGER_INMON dropped below the threshold, and the SQZ ASC was turned off.
To avoid this, I reduced the gain of ANG P/Y by 20 dB, putting -20 dB in FM5 of ANG P/Y. The second attached figure shows the same signals with the reduced ANG P/Y gain. ANG P/Y still ran away for the first 10 minutes, but the drift is much smaller and ANG P/Y came back to a reasonable value after that.
I still don't understand the reason for the drift (related to beam spot control?), but at least the SQZ ASC can now be engaged without running away.
IFO is in NLN and OBSERVING for 1:41 hrs (since 1:17 UTC). Nothing else of note.
TITLE: 07/09 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
SHIFT SUMMARY:
IFO is LOCKED and OBSERVING - now for 48 hours and 35 minutes and counting, O4 record!
Other:
The squeezer unlocked without unlocking the full IFO, which knocked us out of observing briefly, but it recovered quickly and automatically within three minutes, putting us back in observing. This resulted in a data gap in GWISTAT during this time (4:34:38 UTC to 4:37:12 UTC), which was communicated on the RRT O4 mattermost channel and on teamspeak once clarified.
LOG:
Tagging TCS: Out of observing 07:47 - 07:48 UTC as the CO2X laser unlocked, see attached. This happened on 06/28 too (70910). Unsure why; the PZT doesn't seem to be drifting out of range, it just quickly drops, see attached.
Tagging SQZ, this is an old alog of SQZ_MANAGER taking us out of observing 2023/07/09 04:34 UTC for 1m51s. This looks to be because the SQZ_SHG PZT got to 0V and ran out of range, see attached plot.
We do have a checker in LOCK_SHG and SQZ_READY_IFO to check that H1:SQZ-SHG_PZT_VOLTS is between 15V and 85V (70076), but this event was 46 hours into the lock, and the SHG_PZT started at 40V, so editing the checker wouldn't help here. We just need to stop the slow drift; the LVEA temp is steady during this time. (A sketch of this kind of range check follows the log excerpt below.)
>> guardctrl log -n 500 -a "2023/07/09 4:34:00 UTC" -b "2023/07/09 4:37UTC" SQZ_SHG
2023-07-09_04:34:37.763756Z SQZ_SHG [LOCKED.run] USERMSG 0: PZT voltage limits exceeded.
2023-07-09_04:34:37.825851Z SQZ_SHG JUMP target: SCANNING
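For reference, a minimal sketch of that kind of PZT range check (guardian-style ezca access; the function name is hypothetical, the limits are from the existing checker):

PZT_MIN, PZT_MAX = 15.0, 85.0   # V, limits used in LOCK_SHG / SQZ_READY_IFO

def shg_pzt_has_headroom(ezca):
    # True while the SHG PZT is within its usable range
    return PZT_MIN < ezca['SQZ-SHG_PZT_VOLTS'] < PZT_MAX

As noted, though, a tighter checker wouldn't have prevented this event; the real fix is stopping the slow drift.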
Every lockloss after Tuesday maintenance last week has run up the violins.
Even though the fundamental mode was very low before this last lockloss (though harmonics were still elevated), the IFO has relocked with very high violins. Attached are the violins screen and DARM at the LOWNOISE ISC_STATES before we get to NLN.
They look much better now (after a few hours of damping) - see the attached screenshot (red trace is the current value; blue and green are old data). Not sure what is causing them to ring up after every lockloss. I will keep tracking them, especially after locklosses and new lock acquisitions.
I just had a thought, although I'm not sure how much credibility it has. Our higher order violin modes have also been quite high the last week or so. I wonder if we had one as-yet-undetermined something that kicked up all the modes, and then now between locks there's energy transfer from those higher order modes (that we don't damp in lock, so they haven't really been getting smaller) to the fundamental modes. Then each lock we damp the fundamentals (and some of the 1kHz modes), but then the next lockloss puts a little more energy into the modes, and again there's energy transfer between the modes. Would we help this situation if we damped more of the higher order modes? Even if it doesn't directly help, getting those modes damped is probably a good idea (although generally a low priority, since they usually aren't a problem and we have many higher priority items).
It seems that the OMC single bounce mode matching is better with hot OM2 than it was in the similar measurement 70409.
Daniel turned off the sidebands and manually aligned the locked OMC.
I used an adapted version of the OMCscan class to fit the spectrum up to the 20/02 carrier modes. The scan went through two free spectral ranges, so I just used the first 60 s to make the analysis easier, assuming that within this 60 s of data the third smallest clear peak was the 20 mode and the fourth one was the 10 mode.
The fitted spectrum is attached.
Then I used an adapted version of fit_two_peaks.py to fit a sum of two Lorentzians to the 20 and 02 carrier modes; the fit is shown in the second graph.
We expect the HOM spacing to be 0.588 MHz as per this entry and DCC T1500060 Table 25.
The spacing for the modes measured is 0.549 MHz.
From the heights of the two peaks, this suggests the mode mismatch of the OMC to be (C02+C20)/(C00+C02+C20) = (0.457mA+0.629mA)/(16.39mA+0.457mA+0.629mA) = 6.2%.
From the locked/unlocked powers on the OMC REFL PD, the visibility on resonance is 1-(1.84mW/22.6mW) = 92%.
If the total loss is 8%, this implies that the other non-mode-matching losses are roughly 1.4%.
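The arithmetic above, as a minimal Python sketch (numbers copied from the text):

C00, C02, C20 = 16.39, 0.457, 0.629          # mA, fitted carrier peak heights
mismatch = (C02 + C20) / (C00 + C02 + C20)   # ~6.2 %
P_locked, P_unlocked = 1.84, 22.6            # mW, OMC REFL PD
visibility = 1 - P_locked / P_unlocked       # ~92 %
print(f"mode mismatch: {100 * mismatch:.1f} %, visibility: {100 * visibility:.0f} %")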
To run the OMC scan code go to
/ligo/gitcommon/labutils/omc_scan/ and run
python OMCscan_nosidebands2.py 1371921531 60 "Sidebands off, 10W input, hot OM2" "single bounce" --verbose --make_plot -o 2
in the labutils conda environment and on git branch dev.
To do the double peak fitting run:
python fit_two_peaks_no_sidebands2.py
in the labutils conda environment and on git branch dev.
This was a single bounce scan off ITMX, with 0.44W on the ring heater upper and lower segments, and no CO2.
Using Jennie's mode mismatch of 6.2%, we can use the ratio of locked vs unlocked reflected power to estimate the OMC losses, finesse and transmission for a perfectly mode matched beam.
I've used a time when the fast shutter was blocked, from 70409, to subtract the dark offset from the refl diodes; this gives reflected power off resonance of 22.61mW and reflected power on resonance of 1.85mW.
The power in the mode mismatch is P_mm = reflected_power_off_resonance * 6.2% = 1.4mW
Visibility for the 00 mode is (refl_on_resonance - P_mm)/(refl_off_resonance - P_mm) = 2.1%
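In numbers (a quick Python check using the values above):

P_off, P_on = 22.61, 1.85                 # mW, reflected power off / on resonance
P_mm = 0.062 * P_off                      # ~1.40 mW carried by mismatched modes
vis_00 = (P_on - P_mm) / (P_off - P_mm)   # ~2.1 %
print(f"P_mm = {P_mm:.2f} mW, 00-mode visibility number = {100 * vis_00:.1f} %")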
The attached script uses this visibility to find the loss using (r1 and t1 definitions from the parameters below added here, so the snippet runs stand-alone):

import numpy as np

r1 = np.sqrt(1 - 7690e-6)   # input/output mirror amplitude reflectivity (T1500060 p.143)
t1 = np.sqrt(7690e-6)       # amplitude transmissivity, assuming a lossless mirror

def Refl_fraction(r_loss):
    # r_loss = sqrt(1 - round-trip loss); returns P_refl(on res) / P_refl(off res)
    on_res = (r1 - (t1**2 * r1 * r_loss) / (1 - r1**2 * r_loss))**2
    off_res = (r1 + (t1**2 * r1 * r_loss) / (1 + r1**2 * r_loss))**2
    return on_res / off_res
with r1 = sqrt(reflectivity of the input/output mirrors) = sqrt(1-7690e-6) from T1500060 page 143, and r_loss = sqrt(1 - round-trip losses). With a visibility of 2.1%, this gives us a round-trip loss of 2616ppm. If true, this level of loss would imply a finesse of 351, well below previous measurements: 69707. This would imply that the transmission of the OMC cavity for a 00 mode is 73%.
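A minimal sketch (assuming scipy, and the Refl_fraction / r1 definitions above) of how that inversion can be done:

import numpy as np
from scipy.optimize import brentq

# Invert Refl_fraction for the round-trip loss that gives a 2.1% on/off ratio.
loss = brentq(lambda L: Refl_fraction(np.sqrt(1 - L)) - 0.021, 1e-6, 0.05)
g = r1**2 * np.sqrt(1 - loss)             # round-trip amplitude factor
finesse = np.pi * np.sqrt(g) / (1 - g)    # standard two-mirror estimate
print(f"round-trip loss: {loss * 1e6:.0f} ppm, finesse: {finesse:.0f}")

(The finesse convention here is the usual pi*sqrt(g)/(1-g); the exact number depends slightly on convention, hence ~350 here vs the 351 quoted.)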
Koji pointed out that inferring losses from the visibility as I did above is very sensitive to the HOM content, and that including the first order modes above would have resulted in a different value of the OMC losses.
As an alternative approach, I adapted a mathematica notebook that Koji shared to use the transmitted power along with the visibility, and infer higher order mode content and cavity transmission by making an assumption about the DCPD QE.
One confusing point about using these reflected power measurements is that we have to correctly take into account that the beam which arrives at the OMC REFL path has reflected twice off the QPD pick-off beamsplitter. (So, the incident power on the OMC breadboard = power on the REFL diode / R_pick_off^2.)
The results we get for the cavity losses (and higher order mode content) depend on what we assume for the DCPD QE with this method. Below are the results of the attached script run with a QE of 1 and a QE of 96%; this only makes a small change in the higher order mode content we infer, and that small change in HOM also causes a small change in what we infer for the total efficiency of the OMC breadboard in the two cases.
Power on refl diode when cavity is off resonance: 22.612 mW
Incident power on OMC breadboard (before QPD pickoff): 23.052 mW
Power on refl diode on resonance: 1.848 mW
Measured efficiency (DCPD current/responsivity if QE=1) / incident power on OMC breadboard: 81.6 %
assumed QE: 96.0 %
power in transmission (for this QE): 19.598 mW
HOM content inferred: 8.069 %
Cavity transmission inferred: 93.376 %
predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 81.616 %
omc efficiency for 00 mode (including pick off BS, cavity transmission, and QE): 88.780 %
round trip loss: 540 ppm
Finesse: 396.346
assumed QE: 100 %
power in transmission (for this QE): 18.814 mW
HOM content inferred: 7.903 %
Cavity transmission inferred: 89.479 %
predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 81.616 %
omc efficiency for 00 mode (including pick off BS, cavity transmission, and QE): 88.620 %
round trip loss: 886 ppm
Finesse: 388.021
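To make the QE bookkeeping above concrete, a minimal Python sketch (values copied from the output; the HOM / cavity-transmission inversion itself follows Koji's notebook and is not reproduced here):

P_refl_off = 22.612    # mW, on the REFL diode, cavity off resonance
P_incident = 23.052    # mW, incident on the OMC breadboard (before QPD pickoff)
R_pickoff_sq = P_refl_off / P_incident   # two bounces off the pickoff BS, ~0.98
P_equiv_QE1 = 18.814   # mW, DCPD current / responsivity, i.e. the power if QE = 1

for QE in (0.96, 1.00):
    P_trans = P_equiv_QE1 / QE        # actual power in transmission for this QE
    eff = P_equiv_QE1 / P_incident    # measured efficiency, independent of QE
    print(f"QE={QE:.2f}: P_trans={P_trans:.3f} mW, measured efficiency={100 * eff:.1f} %")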
The good: we now have access to longer segments of observing-quality data and can get a clearer look at narrow spectral artifacts. It looks like the line and comb situation is fairly stable and consistent with previous observations in smaller data sets. We're generally not seeing new problems arising, but are better understanding the existing problems and their scope.
The bad: Combs are more pervasive than previously known, especially at intermediate and high frequencies. In particular, a large number of spectral bins are contaminated by a set of ~9.47 Hz combs over a wide spectral range, which is problematic for CW searches.
The details:
The 4.98423 Hz comb is still present and clearly visible at low frequencies. The ~9.47 Hz combs (specifically 9.47431, 9.475383, and 9.480526 Hz) are weaker, but the larger data set reveals that they are more pervasive. They contaminate a spectral region spanning from about 200 Hz to 930 Hz (98th harmonic). There are 3 distinct combs involved in this structure; the triple peak can only be seen at high spectral resolution.
There is also a 29.969515 Hz comb which is visible up to its 60th harmonic at about 1800 Hz, and a 99.99864 Hz comb which is visible up to at least 2 kHz.
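For overlaying on spectra, a small Python sketch that generates the expected line frequencies (fundamentals from above; the harmonic ranges are the observed extents, except where noted):

combs_Hz = {
    4.98423: (1, 40),     # low-frequency comb (upper harmonic here is illustrative)
    9.47431: (21, 98),    # one of the ~9.47 Hz triple combs, spanning ~200-930 Hz
    29.969515: (1, 60),   # visible up to ~1800 Hz
    99.99864: (1, 20),    # visible up to at least 2 kHz
}
for f0, (n_lo, n_hi) in combs_Hz.items():
    lines = [n * f0 for n in range(n_lo, n_hi + 1)]
    print(f"{f0} Hz comb: {lines[0]:.1f} .. {lines[-1]:.1f} Hz ({len(lines)} lines)")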
Coherence information:
We also have averaged coherence data over the same time period. As a reminder, Fscan tracks a limited set of high-priority channels and not the full DetChar list.
Prior related alogs: 68261, 66925
Attached figures: (1) 200-930 Hz plot demonstrating the range of the 9.47 Hz combs, as well as some of the other noted combs. (2) Zoom on the triple peak of these combs. (3) Low frequency spectrum demonstrating the 4.98 Hz comb.
Carolina Li, Ansel Neunzert
Summary
It looks like the pervasive ~9.47 Hz combs are coherent with:
Related to jitter?
(The WFS channels see the 29.97 Hz comb too.)
Background
For reasons of computational cost, Fscan only tracks a limited channel list -- not including these channels. However, daily bruco scans are generated through the STAMP-PEM monitor (thanks Kiet!), which are lower resolution but cover many more channels. Carolina has been working to cross-reference the STAMP-PEM data with Fscan data to extract hints about promising channels for noise-hunting, working around the frequency resolution by leveraging the fact that combs show up in multiple spectral bins and should have the same coherences in all of them. Carolina generated a heat map counting the number of times that various channels were coherent with h(t) in the STAMP-PEM frequency bins (low resolution) corresponding to Fscan auto-generated combs (high resolution); a sketch of this counting idea is included below. Fig 1 shows her initial test case, which clearly highlights the listed channels. This prompted me to follow up with higher-resolution Fscan spectra and confirm.
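A minimal sketch of the bin-counting idea (hypothetical data structures and thresholds; the real STAMP-PEM and Fscan outputs have their own formats):

from collections import Counter

def comb_coherence_counts(comb_freqs, coherence, threshold=0.5, binwidth=0.1):
    # comb_freqs: high-resolution Fscan comb line frequencies (Hz)
    # coherence: channel -> {STAMP-PEM bin center (Hz): coherence with h(t)}
    counts = Counter()
    for chan, bins in coherence.items():
        for center, coh in bins.items():
            if coh > threshold and any(abs(f - center) <= binwidth / 2 for f in comb_freqs):
                counts[chan] += 1
    return counts   # channels coherent in many comb bins float to the top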
In parallel, Elenna had recommended taking a look at a wider set of channels related to jitter noise -- which, it just so happens, these channels are part of...
Attached figures
1: heat map generated using STAMP-PEM coherence data and Fscan comb lists
2-5: high resolution Fscan coherence spectra for the channels listed, overlaid with Fscan comb lists
Attached data is for July 6.