Wed May 29 10:15:30 2024 INFO: Fill completed in 15min 25secs
Gerardo confirmed a good fill curbside.
FAMIS 28354, last checked in alog77843
Late alog for the in-lock charge measurement taken last Tuesday before maintenance (May 21st). There was no measurement taken this week (IFO was unlocked before maintenance this Tuesday).
J. Kissel
I remeasured the same remote, DAC-driven measurement of the whole OMC DCPD sensing chain -- the transimpedance amplifier, the whitening chassis, and the front-end compensation for the frequency response of those -- as yesterday, now 24-ish hours after the DCPD transimpedance amplifiers were powered on. Finally -- some good news! As suspected, after the giant inductors thermalized, the low-frequency response has returned to the same post-2024-vent ~0.3% magnitude drop below 25 Hz. Yesterday, I'd only waited 20 minutes after turning them on when I took the data in LHO:78095. So -- clearly the thermal time constant for the inductors is longer than 20 minutes!
As I logged in this morning, the SQZ ASC and FC ASC offload and graceful-clear-history scripts that I ran and left open yesterday (whoops) reran at 15:16 UTC. Before I could save the SDF diffs, we went through the DOWN SDF revert. There was nothing in SQZ ASC, but FC ASC got cleared. As squeezing hasn't been in a good alignment over the last 12 hours, this doesn't really matter, but if we were in observing it would have knocked us out.
Reminder to close scripts once used. CDS: is there a way we can prevent this from happening?
Currently these scripts are called from MEDM via a command like 'xterm -g 80x15 -hold -e python3 clear_FC_ASC.py &' or similar.
After speaking with Dave and Jonathan, there are two things to be done to avoid this in the future:
- Remove the -hold option from the xterm command, so the window doesn't persist after the script exits.
- Have the script end cleanly, e.g.:
    print("done")
    time.sleep(60)
    sys.exit(0)
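As a possible pattern (a minimal sketch, not the actual clear_FC_ASC.py), the script could end like this once the -hold flag is dropped from the MEDM xterm command, so the window shows its output for a minute and then closes on its own:

    import sys
    import time

    def main():
        # ... existing offload / clear-history actions go here ...
        print("done")
        time.sleep(60)   # leave the window up briefly so the operator can read the output
        sys.exit(0)      # exit cleanly so the (no longer -hold) xterm closes itself

    if __name__ == "__main__":
        main()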
TITLE: 05/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
IFO just lost lock at 14:47 UTC and is now relocking.
Dust monitors also have a few warnings:
*H1:PEM-CS_DUST_LVEA6 WARNING: dust counts did not change, please investigate
*H1:PEM-CS_DUST_LVEA10 WARNING: dust counts did not change, please investigate
*H1:PEM-CS_DUST_PSL101 WARNING: dust counts did not change, please investigate
*H1:PEM-CS_DUST_PSL102 WARNING: dust counts did not change, please investigate
BLRMS from our locks last night are attached. There are some low-frequency noise changes; they don't seem related to the wind.
TITLE: 05/29 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
A bit of a wind storm at the beginning of the shift but luckily strongest winds only lasted about an hour and allowed for locking shortly thereafter. Had about 4-5hrs of observing during this shift (with another lockloss minutes before the end of the shift).
LOG:
LATE Squeezer Entry For Yesterday!
Yesterday, for the lock at the beginning of the shift (NLN around 0100 UTC after the winds), the Squeezer did not lock on its own and needed the scan_alignment_fds + sqz_angle states to be run; Naoki also tweaked ZM4 a little before H1 eventually went to Observing. Below are screenshots of the SDF diffs which were accepted:
After the early evening winds dropped below 30mph, returned to locking and H1 made it back to NLN. Had to turn on Squeezing and then ran the requested SQZ states (Scan_Alignment_FDS & Scan_SQZAng). Naoki also looked at the squeezer alignment to address squeezer noise.
Finally returned to Observing with H1 and its different/new SR2/SR3 pointing (from today's earlier commissioning work). Range is hovering around 145Mpc.
FranciscoL, [Remote: RickS]
One week after moving the inner beam to the center of the Rx (alog 77967), on May 28 we moved the beam down. This completes a cycle in which we characterize the X/Y comparison value for both directions of Pcal beam movement - one move to the side (right), one move down.
Attachment 'BothBeamsBefore' shows the beams before making any changes. Attachment 'AlignedTarget' shows the alignment with the beam height gauge. The pdf 'EndStationLog.pdf' lists the voltage values after each significant step of procedure T2400163. The steps represent writing down a voltage value after a particular change of the system. Some steps were recorded multiple times after minor changes.
The voltage varied by ±0.01 V during the measurement, mostly noticeable when the target is placed on the integrating sphere. We might need to look closer at the seismic isolation configuration to reduce the noise during this - and potentially also the end station - measurement procedure.
The 'Initial' measurement is *equal* to the last voltage measurement from the previous movement, done on May 21 (alog 77967). The initial and final voltages measured during today's procedure did not change.
With the movement made today, we expect that the X/Y comparison will change by +21.15 HOPs.
Today Dripta and I went to EY and did what we would have previously called a standard ES measurement with PS4.
We also employed the new End Station procedure.
Details and analysis coming in a comment to this alog.
A PCAL ENDY station measurement was done in May: the PCAL team (Dripta B. & Tony S.) went to ENDY with the Working Standard Hanford, aka WSH (PS4), and took two end station measurements to verify that the results were consistent with each other - one with our previous version of the procedure, T1500062-V16, and another with T1500062-V17.
The ENDY station measurements were carried out mostly according to the procedures outlined in LIGO-T1500062-v16 & -v17, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log.
Measurement Log
First thing we did was take a picture of the beam spot before anything was touched!
Martel:
Martel Voltage source applies voltage into the PCAL Chassis's Input 1 channel. We record the GPStimes that a -4.000V, -2.000V and a 0.000V voltage was applied to the Channel. This can be seen in Martel_Voltage_Test.png . We also did a measurement of the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included on the above document.
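For illustration only (the ADC readings below are made up, not the values in Martel_Voltage_Test.png), the conversion factor is just the slope of recorded ADC counts versus applied Martel voltage between the -4.000 V and -2.000 V steps, with the 0.000 V step as an offset check:

    # Hypothetical averaged ADC readings at each applied Martel voltage
    applied_volts = [-4.000, -2.000, 0.000]
    adc_counts = [-13107, -6554, 0]

    # slope between the two non-zero steps gives counts per volt
    adc_per_volt = (adc_counts[0] - adc_counts[1]) / (applied_volts[0] - applied_volts[1])
    print(f"ADC conversion factor ~ {adc_per_volt:.1f} counts/V")  # -> 3276.5 with these made-up numbers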
Plots with the Working Standard (PS4) in the Transmitter Module while the inner beam was blocked, then the outer beam, followed by the background measurement: WS_at_TX.png.
The inner, outer, and background measurements with the WS in the Receiver Module: WS_at_RX.png.
The inner, outer, and background measurements with the RX sphere in the RX enclosure, which is our nominal setup without the WS in the beam path at all: TX_RX.png.
-----------------------------------------------------------------------------------------------------------------------------------------------
The new document specifies a different order for the measurements, and some of them are taken in a different way.
We placed the Working Standard (PS4) in the path of the INNER Beam at the TX module.
Then the Working Standard (PS4) in the path of the OUTER Beam at the TX module.
A background measurement.
Then we take the Working Standard and put it in the RX module to get the INNER Beam.
Then the OUTER Beam in the RX Module.
And a Background.
This is where things get different....
We remove the beam block and give the Working Standard Both Inner and Outer Beams at the SAME TIME while it's at the RX module.
We also put the RX sphere back in the RX module and put both beams on it at the same time, like nominal operation when the PCAL lines are turned off.
Then we take a background.
This was repeated ~10 mins later because we wanted to see if there are any time-dependent variations.
The last picture is of the Beam spots after we had finished the measurement.
Old procedure measurement result: rhoR_prime = 10565.2
New procedure measurement: rhoR_prime = 10571.1. This ~5 hop difference is well within our uncertainty.
The second new-procedure measurement gave rhoR_prime = 10574.3, off by less than 3 hops, well within uncertainty.
Preliminary analysis suggests that the discrepancy in rhoR_prime calculated via the two methods is within the uncertainty.
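As a quick arithmetic check of the numbers above (assuming 1 hop = one hundredth of a percent), the fractional differences come out roughly as quoted:

    old, new1, new2 = 10565.2, 10571.1, 10574.3
    # hops = hundredths of a percent = parts in 1e4
    print(f"old vs first new-procedure value: {(new1 - old) / old * 1e4:.1f} hops")    # ~5.6
    print(f"first vs second new-procedure:    {(new2 - new1) / new1 * 1e4:.1f} hops")  # ~3.0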
All of this data and analysis has been committed to the SVN:
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_ENDY/
Obligatory BackFront PS4/PS5 Responsivity Ratio:
PCAL Lab Responsivity Ratio Measurement:
A WSH/GS (PS4/PS5) BF responsivity ratio measurement was run, analyzed, and pushed to the SVN.
PS4PS5_alphatrends.pdf is attached to show that the recent changes to the lab have not impacted the lab measurements.
This adventure has been brought to you by Dripta B. & Tony S.
TITLE: 05/28 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 24mph Gusts, 16mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
The site is currently being hammered by high winds; gusts have recently started approaching 40mph. H1 just had a lockloss during these high winds, and the green arms have been having trouble, so I am taking H1 to IDLE and Observatory Mode to WINDY on an hour-by-hour basis, hoping the winds cooperate after sundown.
When H1 is able to return to NLN, Camilla requested for SQZ that scan_alignment_fds & scan_sqzang be run.
Terry, Kar Meng
The second SHG seems to have the appropriate finesse (~60) at 1064 nm. Next steps: remove the high reflector from its temporary mount and place it in the SHG housing, then recover the modes, then look for green light.
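For context, a finesse like the ~60 quoted above is typically estimated from a cavity scan as the free spectral range divided by the resonance linewidth; the numbers below are placeholders, not the actual SHG scan values:

    # finesse = FSR / FWHM of the resonance (placeholder numbers, not the SHG scan values)
    fsr_mhz = 3000.0   # hypothetical free spectral range
    fwhm_mhz = 50.0    # hypothetical full width at half maximum
    print(f"finesse ~ {fsr_mhz / fwhm_mhz:.0f}")  # -> 60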
TITLE: 05/28 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
IFO is in LOCKING at LOCKING_ARMS_GREEN.
We acquired NLN earlier but lost lock while some last min SQZ maintenance was being done. We lost lock from LOWNOISE_COIL_DRIVERS at 23:23 UTC and the wind is really picking up now (40mph gusts).
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:45 | FAC | Ken | LVEA | N | Conduit work | 19:01 |
15:00 | FAC | Kim | EX | N | Tech cleaning | 16:08 |
15:03 | VAC | Jordan | MX, EX | N | Turbopump Tests | 17:16 |
15:10 | PCAL | Tony, Dripta | EY | Y | Transitioning to Laser Hazard, PCAL Calibration | 18:08 |
15:11 | SYS | Jeff | LVEA | Y | OMC Electronics Characterization | 17:30 |
15:12 | SAF | LVEA | LVEA | YES | LVEA IS IN LASER HAZARD | 18:04 |
15:19 | IAS | Tyler | LVEA | Y | Faro Work | 19:10 |
15:24 | TCS | TJ | LVEA | Y | Realigning HWS Corner System | 16:36 |
15:26 | IAS | Ryan C | LVEA | Y | Faro Work | 19:12 |
15:31 | VAC | Travis | EY | Y | Install decoupled cooling lines on EY Turbo Station | 19:00 |
15:38 | PSL | Jason, Ryan S | PSL Enclosure | Y | Investigating PSL slow increase in PMC reflected power | 18:04 |
15:41 | TCS | Camilla | LVEA | Y | Realigning HWS Corner System | 16:35 |
15:46 | ISC | Richard, Fil | EX, EY (not VEAs) | N | Temp unplug of IRIG-B Signal from ADC | 16:18 |
15:54 | FAC | Chris | LVEA | Y | Chat with Tyler | 16:11 |
15:54 | FAC | Karen | EY | Y | Technical Cleaning | 15:54 |
15:58 | ARCH | Archeologist | MY Chiller Yard | N | APE Check (Area of Potential Effect) | 17:15 |
15:58 | VAC | Gerardo | LVEA | Y | Pulling Cable | 19:07 |
16:06 | PCAL | Francisco | EY | Y | Grabbing Laser Goggles | 16:33 |
16:07 | PEM | Robert | LVEA | Y | Commissioning test setup | 18:40 |
16:12 | FAC | Chris, Eric | Mechanical Rm | N | Replacing Fan Bearing | 19:12 |
16:23 | EE | Fil | LVEA, CER | N | IO Chassis Inspection, Helping VAC | 19:15 |
16:33 | SEI | Jim | LVEA | Y | Electronics work by HAM7 | 19:05 |
16:34 | PCAL | Francisco | EX | Y | PCAL Spot Move | 18:26 |
16:42 | FAC | Karen, Kim | FCES | N | Tech Cleaning | 17:11 |
16:50 | | Terry | Optics Lab | N | Optics Lab Work | 18:57 |
17:16 | ARCH | Archeologist, Richard | EY Beam Tube | N | APE | 17:57 |
17:17 | VAC | Jordan | LVEA | Y | Helping Gerardo pull cables | 19:07 |
17:31 | SYS | Jeff, Preet | LVEA | N | OMC Electronics Characterization Pt 2 | 18:59 |
17:41 | VAC | Janos, Isaiah | LVEA | N | Pulling Cables | 19:07 |
18:04 | IAS | Jason | LVEA | N | Faro Work | 19:12 |
18:21 | PCAL | Tony | PCAL Lab | Local | Responsivity ratio measurement | 18:49 |
18:36 | EE | Marc | LVEA | N | Unplugging (after talking to Fil in LVEA) | 18:52 |
19:13 | OPS | Ryan C | LVEA | N | LVEA Sweep | 19:24 |
20:04 | VAC | Janos, Isaiah | MY | N | Tour | 21:04 |
20:23 | PCAL | Tony | PCAL Lab | N | Measurements | 22:43 |
20:52 | CAL | Terry | Optics lab | Y | SQZ work | 23:15 |
I made two minor changes that I tested out during relocking today.
The LOCKING_ARMS_GREEN state will not run increase flashes if the SEI state is not nominal in the respective arm. This is to stop increase flashes from running while an ISI is tripped and aligning the arm to a bad spot. It will notify "Arm SEI not nominal, no I.F. until resolved", but I couldn't get the message to stay up for long before other notifications got in the way. Regardless, it won't run I.F. and will work for unattended times.
In ALS_DIFF, the find-IR step used to move the offset and then do an instantaneous check, against a low threshold, to see if there was any resonance. We've gotten unlucky a few times where the signal dipped just below the threshold during the check. To remove luck from this, I now have it grab data at every step and look at the maximum. The downside is that if there is an NDS issue, it will fail. I also increased the fine-tune IR step size from 3 to 5, and it seemed to work well.
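A minimal sketch of the "grab data at every step and take the max" idea (not the actual ALS_DIFF guardian code; the channel names, offsets, and threshold below are illustrative, and it assumes the standard cdsutils.getdata interface):

    import numpy as np
    import cdsutils

    def find_ir_max(ezca, offsets, threshold=0.5, dwell=2):
        """Step the DIFF offset through `offsets`; at each step fetch `dwell`
        seconds of IR transmission data and keep the maximum, instead of a single
        instantaneous check that can get unlucky."""
        trans_channel = 'H1:LSC-TR_X_NORM_INMON'       # illustrative readback channel
        offset_channel = 'ALS-C_DIFF_PLL_CTRL_OFFSET'  # illustrative offset channel (ezca adds the IFO prefix)
        best_offset, best_peak = None, -np.inf
        for off in offsets:
            ezca[offset_channel] = off
            buf = cdsutils.getdata(trans_channel, dwell)  # fails if NDS is down -- the downside noted above
            peak = np.max(buf.data)
            if peak > best_peak:
                best_offset, best_peak = off, peak
        return best_offset, best_peak > threshold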
Sheila, Camilla, Jenne, Naoki
While Ibrahim was doing initial alignment during maintenance today, I moved SR3 (using the sliders) and SR2 (using the OSEMs) to the positions in the second column of 77719 (-P move of SR3, May 7 10:39 Pacific).
Alena has been working with the Zemax model of what happened when our OFI was damaged, and she believes that this location is more centered in the OFI; this data may be useful for her modeling. We haven't recovered the range or all of the optical gain that we thought was possible before the OFI was damaged (our OMC alignment wasn't optimized when this happened), so we are hoping that a different spot may help.
When we relocked after this, the range was low (~145 Mpc); the attached screenshot shows that we have more LSC coherence now. We should be able to adjust the LSC feedforward during tomorrow's commissioning time to address that.
J. Kissel
EDIT :: False Alarm -- see comment below!
Since I'm at the "what does it all mean?" stage of madness with the OMC DCPD Transimpedance Amplifier (TIA), I decided to make it worse -- I remeasured the original transfer function that triggered my worrying about the TIA response after the OMC swap (see LHO:75986). I did this to try to corroborate whether there is a change in the DCPDA response above 1 kHz, as mentioned in LHO:78090.
This transfer function is the remote, DAC-driven measurement of the whole DCPD sensing chain -- the transimpedance amplifier, the whitening chassis, and the front-end compensation for the frequency response of those -- described most recently in LHO:71225.
I *don't* see a difference above 1 kHz, but I *do* see more difference in the response below 25 Hz than I had on 2024-Feb-26. See the attached transfer functions of the DCPDA chain and the DCPDB chain. Black is pre-OMC swap, 2023-Jul-11. Brown is post-OMC swap, 2024-Feb-26. Red is today, also post-OMC swap, 2024-May-28. All measurements use the S2300003 whitening chassis and the 2023-Mar-era compensation we've been using throughout O4.
The only "oh, well it could just be" that I can think of immediately is that the DCPDs had been powered down from 10:10a to 11:30a PDT today to characterize the measurement setup in analog (see LHO:78090), and these were measured "only" 20 minutes after being powered back on at 11:50a. So -- we'll have to take the measurement again at the next opportunity -- such that the TIA electronics have what we think is the appropriate amount of time to thermalize -- a few hours. Just maddening.
Data lives in /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/Electronics/H1/SensingFunction/OMCA/Data/
20240528_H1_TIAxWC_OMCA_S2100832_S2300003_DCPDA_RemoteTestChainEXC.xml
20240528_H1_TIAxWC_OMCA_S2100832_S2300003_DCPDB_RemoteTestChainEXC.xml
False alarm! It appears indeed that I had inadvertently captured a thermal transient of the transimpedance amplifiers -- since it had only been 20 minutes since I'd turned on the power to the preamps. See LHO:78112, where I took this same measurement again, and the response restored to the same post-2024-OMC-swap-vent response.
Pictures of the OMC IO Chassis.
DCC-D1301004 has been updated
LLO noted a 10 Hz comb that was attributed to the IRIG B monitor channel at the end station connected to the PEM chassis. (https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=71217)
LHO has agreed to remove this signal for a week to see how it impacts our DARM signal.
EX and EY cables have been disconnected.
Disconnection times
EX | GPS 1400946722 | 15:51:44 Tue 28 May 2024 UTC |
EY | GPS 1400947702 | 16:08:04 Tue 28 May 2024 UTC |
I checked H1:CAL-PCALX_IRIGB_DQ at gps=1400946722 and H1:CAL-PCALY_IRIGB_DQ at gps=1400947702. From 10 seconds prior to the cable disconnection to 1 sec before the cable disconnection, IRIG-B code in these channels agreed with the time stamp after taking into account the leap second offset (18 sec currently).
Note that the offset is there because the IRIG-B output from the CNS-II witness GPS clock ignores leap seconds.
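To make the 18 sec offset concrete, the conversion is simple arithmetic (plain datetime math, not the analysis code used for the check): UTC is the GPS epoch plus the GPS seconds minus the current leap-second count, which a source that ignores leap seconds would not subtract.

    from datetime import datetime, timedelta, timezone

    GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
    GPS_MINUS_UTC = 18  # leap-second offset as of 2024

    def gps_to_utc(gps_seconds):
        return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC)

    print(gps_to_utc(1400946722))  # -> 2024-05-28 15:51:44+00:00, the EX disconnection time
    # A clock that ignores leap seconds reads 18 sec ahead of UTC, hence the constant offset above.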
I fixed the scripts in https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Common/Scripts/Timing/ so that they run in the control room with the modern gpstime package and so the offset is not hard-coded. I committed the changes.
Please reconnect the cables soon so we have independent witness signals of the time stamp. There could be a better implementation, but we need the current ones until a proper fix is proposed, approved, and implemented.
I have checked the weekly Fscans to look for similar 1 Hz and 10 Hz combs in the H1 data (which we haven't seen in the H1 O4 data thus far), or any obvious changes in the H1 spectral artifacts due to the configuration change on May 28. I do not see any changes due to this configuration change. This may be because the coupling from the timing IRIG-B signal is lower at LHO than it is at LLO. I do notice that around the beginning of May 2024 the number of line artifacts seems to increase; this should be investigated further. Attached are two figures showing the trends of the 1 Hz and 10 Hz combs, where the black points are the average comb power and the colored points are the individual comb frequency power values; the color of the individual points indicates the frequency. Note that there is no change in the last black data point (the only full-1-week Fscan so far).
The IRIGB cables at EX and EY have been reconnected (PEM AA Chassis CH31).
On May 13th (as part of my DQ shift), I noticed a line of 4kHz glitches in the glitchgram, and they have appeared every day since then. Checking the summary pages revealed they’ve been intermittently present since Apr 8, 2024 but not previously (I checked from Dec 1, 2023 onwards). Fig. 1 shows an example glitchgram from May 13th. Figs. 2-4 show Omegascans for some of these glitches. I also created a spreadsheet to characterize these glitches, and did not find any pattern or regularity to their appearance.
I checked the aLogs for possible sources and found the SQZ angle was being tuned around the time the glitches started (aLog). I then compared the timing of the glitches to the SQZ FC ASC channels and found a correlation between the deviation in the output and the strength of the glitches. See Figs 5-7 for examples, as well as the spreadsheet. I will be meeting with the LSC fellows on Tuesday to discuss these.
Since May 24, the 4 kHz glitches have looked somewhat mitigated.
The correlation between the deviations of the SQZ channels and the loud glitches at 4 kHz has hardly been seen since May 24.
(The attachments are the Omicron trigger plot and the SQZ FC ASC channel on May 26.)
There was SQZ commissioning work on May 24, and there are two alogs (alog 77980, alog 78033) related to SQZ alignment and the low-frequency noise.
The 4 kHz glitches are back on May 29, and the correlation between the glitches and the deviations of the SQZ FC ASC channels looks apparent again.
Naoki, Karmeng, Andrei, Sheila
We did an NLG sweep on DARM at NLGs of 16.9 (nominal) and 42. The IFO was locked for more than 20 hours and well thermalized. We also tried an NLG of 65.9, but the squeezing level was not stable, so we gave up on this NLG. We will take the third NLG data point tomorrow if the IFO is thermalized.
Previous NLG sweep on DARM: 73747
How to measure NLG: 76542
How to change NLG: 73801
Note:
Measurement | UTC | demod phase | DTT ref | NLG | SQZ at 2kHz (dB) |
FDS | 16:29:00-16:34:00 | 160.91 | 3 | 16.9 | -4.2 |
No SQZ | 16:36:20-16:41:20 | 0 | | | |
ASQZ | 16:56:20-17:01:20 | -100.37 | 6 | 16.9 | 15.4 |
mean SQZ | 17:11:00-17:16:00 | | 7 | 16.9 | 12.6 |
FDS | 18:02:10-18:07:10 | 162.13 | 4 | 42 | -4.4 |
ASQZ | 18:19:30-18:24:00 | -87.6 | 8 | 42 | 20 |
mean SQZ | 18:29:00-18:34:00 | | 9 | 42 | 17 |
OPO trans (uW) | OPO temp | Seed amplified | Seed unamplified | NLG |
80 | 31.470 | 0.215 | 0.0127 | 16.9 |
100 | 31.454 | 0.534 | 0.0127 | 42 |
110 | 31.448 | 0.0725 | 0.0011 | 65.9 |
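As a quick check of the table above (the NLG measurement method is described in 76542; as I read it, the NLG is simply the ratio of the amplified to unamplified seed power), the ratios reproduce the quoted NLG column:

    # (seed amplified, seed unamplified) pairs from the table above
    for amplified, unamplified in [(0.215, 0.0127), (0.534, 0.0127), (0.0725, 0.0011)]:
        print(f"NLG = {amplified / unamplified:.1f}")
    # -> 16.9, 42.0, 65.9, matching the quoted NLGs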
After the PR2 spot move yesterday (78012), we did the NLG sweep again. This time we took three NLGs: 7.9, 16.3 (nominal), and 55.9.
Measurement | UTC | demod phase | DTT ref | NLG | SQZ at 2kHz (dB) |
No SQZ | 16:41:00-16:46:00 | 0 | | | |
FDS | 16:54:08-16:59:08 | 161.59 | 10 | 7.9 | -4.4 |
ASQZ | 17:04:00-17:09:00 | -95.63 | 11 | 7.9 | 11.1 |
mean SQZ | 17:10:52-17:15:52 | | 12 | 7.9 | 9.1 |
FDS | 17:31:40-17:36:40 | 192.47 | 13 | 55.9 | -4.1 |
ASQZ | 17:43:54-17:48:54 | -86.42 | 14 | 55.9 | 21.2 |
mean SQZ | 17:50:53-17:55:53 | | | 55.9 | 18.5 |
FDS | 18:12:31-18:17:00 | 190.31 | 15 | 16.3 | -4.7 |
ASQZ | 18:21:54-18:26:54 | -97.79 | 16 | 16.3 | 15.3 |
mean SQZ | 18:28:12-18:33:12 | | 17 | 16.3 | 12.5 |
OPO trans (uW) | OPO temp | Seed amplified | Seed unamplified | NLG |
60 | 31.484 | 0.0086 | 0.00109 | 7.9 |
80 | 31.468 | 0.0174 | 0.00107 | 16.3 |
105 | 31.446 | 0.0609 | 0.00109 | 55.9 |
Vicky, Karmeng
This NLG scan is compatible with ~30% SQZ losses, ~20 mrad phase noise.
Attachments 1, 2 - Calculated a loss of ~30% from mean SQZ and generated SQZ, then fit ASQZ/SQZ to estimate a phase noise of ~20 mrad plus technical noise. If fitting SQZ+ASQZ together, the fitted loss is ~32-33%. This uses the standard linear OPO equations to estimate the generated squeezing level from the NLG.
Attachment 3 - Calculated a loss of ~27% from mean SQZ and generated SQZ, then fit ASQZ/SQZ to estimate a phase noise of ~20 mrad plus technical noise. This uses the bowtie OPO equations to estimate the generated squeezing level from the NLG (a few % lower generated SQZ than the estimate above). See Eq. 13 of Dhruva's ADF paper, P2200041.
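For a rough numerical cross-check (not the actual fit, which also includes the technical-noise term), plugging the NLG 16.9 row of the first table into the standard linear-OPO expressions with ~30% loss and ~20 mrad phase noise lands close to the measured levels:

    import numpy as np

    nlg = 16.9
    eta = 0.70      # detection efficiency for ~30% total loss
    theta = 0.020   # ~20 mrad rms phase noise

    x = 1.0 - 1.0 / np.sqrt(nlg)               # from NLG = 1/(1-x)^2
    gen_asqz = 1.0 + 4.0 * x / (1.0 - x) ** 2  # generated anti-squeezing (linear power, rel. shot noise)
    gen_sqz = 1.0 - 4.0 * x / (1.0 + x) ** 2   # generated squeezing

    def measured_db(v_main, v_other):
        # losses mix in vacuum; phase noise mixes in the orthogonal quadrature
        v = eta * (v_main * np.cos(theta) ** 2 + v_other * np.sin(theta) ** 2) + (1.0 - eta)
        return 10.0 * np.log10(v)

    print(f"SQZ  ~ {measured_db(gen_sqz, gen_asqz):.1f} dB")   # ~ -4.8 dB (measured -4.2)
    print(f"ASQZ ~ {measured_db(gen_asqz, gen_sqz):.1f} dB")   # ~ +15.7 dB (measured +15.4)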
NLG calibration - Estimates the OPO green pump trans threshold at ~142 uW. This seems close to the previous threshold estimate of ~149 uW made just after moving to this crystal spot (LHO:73562, Oct 2023 crystal move).
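A minimal sketch of that threshold estimate (illustrative only; the real fitting code lives in the nlgscans repo linked below), using the same standard relation NLG = 1/(1 - sqrt(P/P_thr))^2 with the (OPO trans, NLG) pairs from the first table:

    import numpy as np
    from scipy.optimize import curve_fit

    def nlg_model(p_uw, p_thr_uw):
        # standard linear-OPO nonlinear gain vs. green pump power
        return 1.0 / (1.0 - np.sqrt(p_uw / p_thr_uw)) ** 2

    p = np.array([80.0, 100.0, 110.0])    # OPO green trans power, uW
    nlg = np.array([16.9, 42.0, 65.9])    # measured NLG

    (p_thr,), _ = curve_fit(nlg_model, p, nlg, p0=[150.0], bounds=(111.0, 500.0))
    print(f"fitted OPO threshold ~ {p_thr:.0f} uW")   # lands in the ~140 uW ballpark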
A comparison of the O4a (HD, IFO) and O4b (IFO) NLG scans - maybe most interesting is comparing homodyne vs. interferometer squeezing in O4a (~32% loss, LHO:78000). O4a and O4b IFO losses look similar, but I think that is largely an issue with this O4b measurement (see the note below).
A note about this NLG scan - I think total optical losses should be < 30% based on seeing >5dB SQZ previously. For example, the -5.4 dB SQZ observed in LHO:76553 is too much squeezing, and incompatible with losses >30%. So I think this measurement has higher losses than "normal" in O4b, maybe related to the alignment / mode-matching / (something that drifts) not being optimal here. Would be interesting to get back to the -5dB spot (of course), and see how losses look then.
Code with instructions is here: https://git.ligo.org/victoriaa.xu/nlgscans