While we're commissioning this afternoon, I've got NonSENS cleaning engaged. I've also temporarily accepted the SDF diff that turns on the cleaning, so that we can see the range of the _CLEAN channel in real time (this is why our range is high on our control room screenshots right now).
The NonSENS cleaning was turned on a few tens of seconds before 20:00:00 UTC today, and I intend to leave it as-is for at least an hour. Robert is doing work at this time, but it's quite minimally invasive, so this time should still be useful for the PE verification software injections that are part of the review of the cleaning.
Wed Oct 11 10:11:20 2023 Fill completed in 11min 16secs
Jordan confirmed a good fill curbside.
Tyler "has recently changed the chiller pumps to not be on at all times and come on when needed. He gave me some data for when the EX chiller has been turning on and off over the last 2.5 weeks" (TJ, 2023). I took this data and plotted it against our range as well as the average internal temperature at EX to see if we would lose range when the chiller at EX turned on(attachment1).
Over this date range, the longest stretch that the chiller was on continuously was just over two hours. With both these longer segments and the more regular shorter segments of 'on' time (sometimes just 10 minutes on), there is no noticeable change in the range when the chiller turns on or off. There are multiple times where the range trends downward after the chiller starts turning on for longer stretches of time (red rectangle, attachment 2), but it is not consistent - the yellow (attachment 3) and green (attachment 4) rectangles show the range rising during chiller 'on' times. Therefore I am inclined to attribute the lowering of the range during these times to changes in indoor/outdoor temperatures and wind, and not to the chiller.
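For anyone who wants to reproduce this kind of overlay, here is a minimal sketch using gwpy; the range channel name, GPS span, and chiller on/off segments below are placeholders standing in for the real data Tyler provided.

```python
# Minimal sketch: shade chiller 'on' stretches on top of the range trend.
# The range channel name, GPS times, and segment list are placeholders.
from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

RANGE_CHANNEL = "H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC"  # placeholder name
start, end = 1379721618, 1381190418  # example ~2.5 week GPS span

# Chiller 'on' intervals as (gps_start, gps_stop) pairs from Tyler's data (examples).
chiller_on = [(1379800000, 1379807200), (1380100000, 1380100600)]

rng = TimeSeries.get(RANGE_CHANNEL, start, end)

fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(rng.times.value, rng.value, label="BNS range [Mpc]")
for s, e in chiller_on:
    ax.axvspan(s, e, color="red", alpha=0.2)  # shade each chiller-on stretch
ax.set_xlabel("GPS time [s]")
ax.set_ylabel("Range [Mpc]")
ax.legend()
fig.savefig("ex_chiller_vs_range.png")
```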
Tagging Facilities as well.
Thank you for getting these out, Oli, this is great! I agree with you that it doesn't seem like this chiller pump has any direct impact on our range, though perhaps we might want to consider looking further into a possible correlation between VEA temps and our range.
Clarification: It is not the chiller "pump" that was reconfigured to run only when demanded. It was the chillers themselves. It's also not clear that this was the status quo for all chillers across the site; it was only recently discovered on one chiller in particular. The chiller "pump" has always run and will continue to run regardless of the state of the chiller itself, as glycol needs to be continually cycled through the AHU cooling coils at all times.
TITLE: 10/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 13mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY:
Locked for 5 hours when I arrived.
Everything seemed to look great.
15:02 UTC a seemingly random lockloss occurred.
Richard informed me that the dust monitors in the PSL were alarming again, likely due to the elevated wind speeds.
TITLE: 10/11 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
LOG:
No log for this shift.
During maintenance today I adjusted weights on the PSL periscope to try to reduce the DARM peak in the 120 Hz region. I reduced the highest part of the peak, where the HVAC-produced seismic peak overlaps with a periscope peak, by about 3/5. Also, with the consent of Tyler, I shut off SF2, which is mainly responsible for the new HVAC peak, reducing the total LVEA flow from 31k CFM to about 27k CFM. Before the pandemic we ran with two fans pushing about 10k CFM each for a total of about 20k CFM, so I don't think it will be too much of an issue. I have been monitoring LVEA temperature and it looks like the excursions are getting smaller.
The improvement at 120 Hz is looking pretty good, more details later.
Lockloss @ 01:43 UTC from several earthquakes in Afghanistan (M6.3 followed by aftershocks of M5.0 and M4.1). Earthquake mode first activated at 00:59 UTC when the initial S-wave hit.
Holding H1 in DOWN until ground motion calms down.
H1 is back to observing as of 04:14 UTC
TITLE: 10/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 12mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.31 μm/s
QUICK SUMMARY: H1 has been locked and observing for 2 hours.
TITLE: 10/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Tuesday maintenance day recovery went well.
The H1:PSL-ENV_LASERRM_ASC_TEMP_DEGF channel is still getting back to nominal temps in the PSL.
The PSL dust monitor has been going off since the wind started to pick up.
H1 has been locked for 2 hours and 12 minutes.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:05 | TCS | Camilla | CtrlRm | n | Blasting CO2 EX | 15:50 |
| 15:08 | FAC | Randy | EX,EY | N | Taking Trailer to EY & EX | 17:08 |
| 15:09 | VAC | Janos | FTCE | N | Counting Particles | 15:21 |
| 15:10 | FAC | Cindy | HAM Shaq | N | Technical Cleaning and recycling Cardboard | 17:10 |
| 15:11 | FAC | Kim | EX | N | Technical Cleaning | 16:30 |
| 15:12 | FAC | Karen | EY | N | Technical Cleaning | 16:30 |
| 15:22 | VAC | Janos Travis | Mid X and EX | N | Vacuum Work | 17:22 |
| 15:23 | EE | Fil | CER | N | ITMX Coil driver work, needs SUS and HEPI SAFE | 15:42 |
| 15:30 | CDS | Dave B. | Remote | N | Picket Fence work | 17:30 |
| 15:31 | VAC | Jordan Gerardo | FTCE | N | Ion Pump install | 18:31 |
| 15:32 | Tour | Matt H & Co | FTCE & LVEA | N | RAL Tour | 16:52 |
| 15:35 | PSL | Jason | CtrlRm | N | PSL Refcav tuning | 15:39 |
| 15:36 | PEM | Robert | PSL | YES | Tuning Periscope Resonant Freq | 17:36 |
| 15:45 | EE | Fil & Marc | LVEA | N | PEM Inventory | 17:17 |
| 15:52 | FAC | Tyler & Mc Miller | EY | N | Chiller Maintenance. | 18:02 |
| 16:01 | SQZ | Sheila & Vicky | SQZt0&7 | LOCAL | Homodyne Alignment | 19:17 |
| 16:22 | SQZ | Jeff, Regina & Dorotea | LVEA | LOCAL | Crashing the RAL Party and Joining SQZrs | 16:49 |
| 16:41 | PEM | Fil | MIDX | N | Looking for part numbers | 17:01 |
| 16:47 | OMC | Keita | HAM6 | N | Plugging in Beckhoff cable to OM2 | 16:50 |
| 16:59 | RAL | Matt H &Co | Roof | N | RAL Tour to Roof | 17:17 |
| 17:01 | SQZ | Camilla & Dorotea | LVEA SQZt0 | LOCAL | Helping Sheila & Vicky with SQZ work | 19:26 |
| 17:04 | FAC | Kim & Karen | LVEA | N | Technical Cleaning | 18:48 |
| 17:48 | Hartmann | Camilla | LVEA | YES | LASER HAZARD TRANSITION | 19:33 |
| 18:03 | VAC | Travis & Janos | LVEA | Yes | VAC Work with Laser Haz Goggles | 18:04 |
| 18:24 | EE | Marc & Fil | ISCT1 | Yes | Documenting Cabling at ISCT1 | 18:47 |
| 18:26 | OM2 | Keita | HAM6 | Yes | Undoing the Beckhoff and OM2 Changes this morning | 18:27 |
| 18:56 | VAC | Jordan | BSC3 | YES | Opening Filter Cav Gate valve | 19:11 |
| 19:53 | VAC | Travis | Mid Y | N | Vacuum work | 21:23 |
| 19:56 | PEM | Robert | PSL Anteroom | N | Checking PSL AC for proper Functionality | 20:12 |
| 19:57 | SQZ | Sheila & Camilla | SQZt7 | Local | Touching up SQZ settings | 20:27 |
| 20:29 | FAC | Mitchel | Mechanical Room | N | Checking HEPI Pumps and Dust Monitors | 20:59 |
Closes WP11453. Camilla, Dorotea
Took the LVEA to laser hazard and swapped both the ITMX and ITMY HWS SLEDs following the procedure in T1500193. We had planned to only swap ITMX, whose SLED has been decaying quickly (72993). We found that ITMY had a sudden drop in power last week with no clear cause, becoming noisy 10/04 11:22 UTC and sharply dropping 10/07 05:46 UTC (plot attached), so we also swapped ITMY. This is much quicker than we usually need to swap them; they were last swapped in 71476. Updated ICS records (linked below) and the SLED stock alog 66832. Should take new references and check camera sync frequencies when the IFO is next unlocked.
To see if the OM2/Beckhoff coupling is a direct electronics coupling or not, we did an A-B-A test while the fast shutter was closed (no meaningful light on the DCPDs).
State A (should be quiet): 2023 Oct/10 15:18:30 UTC - 16:48:00 UTC. The same as the last observing mode. No electrical connection from any pin of the Beckhoff cable to the OM2 heater driver chassis. Heater drive voltage is supplied by the portable voltage reference.
State B (might be noisy): 16:50:00 UTC - 18:21:00 UTC. The cable is directly connected to the OM2 heater driver chassis.
State A (should be quiet): 18:23:00- 19:19:30 UTC or so.
DetChar, please directly look at H1:OMC-DCPD_SUM_OUT_DQ to find combs.
It seems that even if the shutter is closed, once in a while a very small amount of light reaches the DCPDs (green and red arrows in the first attachment). One of them (red arrow) lasted a long time and we don't know what was going on there. One of the short glitches was caused by the BS being momentarily kicked (cyan arrow) and scattered light in HAM6 somehow reaching the DCPDs, but I couldn't find other glitches that exactly coincided with optics motion or IMC locking/unlocking.
To give you a sense of how bad (or not) these glitches are, the 2nd attachment shows the DCPD spectrum of a quiet time in the first State A period (green), the strange glitchy period indicated by the red arrow in the first attachment (blue), a quiet time in State B (red), and the observing time (black, not corrected for the loop).
FYI, right now we're back to State A (should be quiet). Next Tuesday I'll inject something to thermistors in chamber. BTW 785 was moved in front of the HAM6 rack though it's powered off and not connected to anything.
I checked H1:OMC-DCPD_SUM_OUT_DQ and don't see the comb in any of the three listed intervals (neither state A nor B). Tested with a couple of SFT lengths (900s and 1800s) in each case.
Since it seems that the coupling is NOT a direct electronics coupling from Beckhoff -> OM2 -> DCPD, we fully connected the Beckhoff cable to the OM2 heater driver chassis and locked the OMC to the shoulder with an X single bounce beam (~20mA DCPD_SUM, not 40mA like in the usual nominal low noise state). That way, if the Beckhoff is somehow coupling to the OMC PZT, it might cause visible combs in the DCPD.
We didn't see the comb in this configuration. See the 1st attachment: red is the shoulder lock and green is when the 1.66 Hz comb was visible with the full IFO (the same time reported by Ansel in alog 73000), showing just the two largest peaks of the 1.66 Hz harmonics visible in the green trace. (It seems that the 277.41 Hz and 279.07 Hz peaks are the 167th and 168th harmonics of 1.66 Hz.) Anyway, because of the higher noise floor, even if the combs were there we couldn't have seen these peaks. We've had a different comb spacing since then (alog 73028), but anyway I don't see anything at around 280 Hz. FYI I used 2048 FFTs for both; red is a single FFT and the green is an average of 6. This is w/o any normalization (like RIN).
In the top panel of 2nd attachment, red is the RIN of OMC-DCPD_SUM_OUT_DQ of the shoulder lock, blue and dark green are RIN of 2nd loop in- and out-of-loop sensor array. Magenta, cyan and blue green are the same set of signals when H1 was in observing last night. Bottom panel shows coherence between DCPD_SUM during the shoulder lock and ISS sensors as well as IMC_F, which just means that there's no coherence except for high kHz.
If you look at Georgia's length noise spectrum from 2019 (alog 47286), you'll see that it's not totally dissimilar to our 2nd plot top panel even though Georgia's measurement used dither lock data. Daniel points out that a low-Q peak at around 1000Hz is a mechanical resonance of OMC structure causing the real length noise.
Configurations: H1:IMC-PWR_IN ~25.2W. ISS 2nd loop is on. Single bounce X beam. DCPD_SUM peaked at about 38mA when the length offset was scanned, and the lock point was set to the middle (i.e. 19mA). DC pointing loops using AS WFS DC (DC3 and DC4) were on. OMC QPD loops were not on (they were enabled at first but were disabled by the guardian at some point before we started the measurement). We were in this state from Oct/17/2023 18:12:00 - 19:17:20 UTC.
BTW Beckhoff cable is still fully connected to the OM2 heater driver chassis. This is the first observation data with such configuration after Fil worked on the grounding of Beckhoff chassis (alog 73233).
Detchar, please find the comb in the obs mode data starting Oct/17/2023 22:33:40 UTC.
The comb indeed re-appeared after 22:33 UTC on 10/17. I've attached one of the Fscan daily spectrograms (1st figure); you can see it appear in the upper right corner, around 280 Hz as usual at the start of the lock stretch.
Two other notes:
Just to see if anything changes, I used the switchable breakout board at the back of the OM2 heater driver chassis to break the thermistor connections but kept the heater driver input coming from the Beckhoff. The only two pins that are conducting are pins 6 and 19.
That happened at around Oct/18/2023 20:18:00 to 20:19-something UTC when others were doing the commissioning measurements.
Detchar, please look at the data once the commissioning activities are over for today.
Because there was an elevated noise floor in the data from Oct/17/2023 18:12:00 mentioned in Keita's previous comment, there was some doubt as to whether the comb would have been visible even if it were present. To check this, we did a direct comparison with a slightly later time when the comb was definitely present & visible. The first figure shows an hour of OMC-DCPD_SUM_OUT_DQ data starting at UTC 00:00 on 10/18 (comparison time with visible comb). Blue and yellow points indicate the comb and its +/-1.235 Hz sidebands. The second figure shows the time period of interest starting 18:12 on 10/17, with identical averaging/plotting parameters (1800s SFTs with 50% overlap, no normalization applied so that amplitudes can be compared) and identical frequencies marked. If it were present with equivalent strength, it looks like the comb ought to have been visible in the time period of interest despite the elevated noise floor. So this supports the conclusion that the comb was *not* present in the 10/17 18:12 data.
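As a rough cross-check outside the SFT/Fscan machinery, an unnormalized averaged-spectrum comparison like the one described above can be sketched with gwpy; the FFT length, overlap, channel, and comb frequencies below come from this thread, while the exact stretch durations are assumptions.

```python
# Sketch: compare averaged spectra of H1:OMC-DCPD_SUM_OUT_DQ between a time when
# the ~1.66 Hz comb was visible and the 10/17 18:12 UTC period of interest.
# This approximates the 1800 s SFT / 50% overlap averaging described above;
# the real check used SFTs, so treat this as illustrative only.
from gwpy.timeseries import TimeSeries

CHAN = "H1:OMC-DCPD_SUM_OUT_DQ"

def averaged_asd(start, end):
    data = TimeSeries.get(CHAN, start, end)
    # 1800 s FFTs with 50% overlap, median-averaged, no extra normalization
    return data.asd(fftlength=1800, overlap=900, method="median")

asd_comb = averaged_asd("2023-10-18 00:00:00", "2023-10-18 01:00:00")   # comb visible
asd_quiet = averaged_asd("2023-10-17 18:12:00", "2023-10-17 19:12:00")  # period of interest

plot = asd_comb.plot(label="comb visible (10/18 00:00 UTC)")
ax = plot.gca()
ax.plot(asd_quiet, label="period of interest (10/17 18:12 UTC)")
# Mark the two comb teeth quoted earlier (167th and 168th harmonics of ~1.66 Hz)
for f in (277.41, 279.07):
    ax.axvline(f, color="k", linestyle=":", alpha=0.6)
ax.set_xlim(270, 290)
ax.legend()
plot.savefig("dcpd_comb_comparison.png")
```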
Following up, here's about 4 hours of DELTAL_EXTERNAL after Oct 18 22:00. So this is after Keita left only the heater driver input connected to the Beckhoff on Oct/18/2023 20:18:00. The comb is gone in this configuration.
Jordan, Gerardo
Today, we were able to install one of the new 150 l/s ion pumps on the Filter Cavity Tube. The Ion Pump/Tee/Angle valve assembly was pre-built in the staging building and then pumped down and leak checked. This assembly was stored under vacuum and then brought to the Filter Cavity Enclosure.
We closed FCV-3 (BSC3), FCV-7, FCV-8, and FCV-9 (HAM8) to isolate the C8 cross. Then closed the -Z axis GV on the C8 cross to isolate the ion pump port. We then vented the ion pump assembly with N2 and removed the angle valve/6" zero-length reducer on the cross, and installed the ion pump on the 6" CF port. A genie lift was used to lift/hold the ion pump while the connections were made.
Once installed, we used the leak detector/small turbo to pump down the assembly, and then helium leak tested the one CF connection that was made. There was no detectable signal above the helium background of 2.3E-10 Torr-l/s.
The ion pump was powered on locally, and quickly dropped to ~2mA. The -Z gate valve remains closed, and the rest of the FCT gate valves were reopened once the ion pump was leak checked and powered on. We will continue with the installation of the rest of the pumps in the following weeks.
Maintenance Day Update!!
SQZ Homodyne Alignment LOCAL LASER HAZ! --Complete
Hartmann work --Complete
TCS CO2 ISS work First thing --Complete
PSL RefCav work PSL team --Complete
PSL Periscope Tests using IM4 PEM team --Complete
RAL Tour. --Complete
EY Chiller maintenance --Complete
Mac Miller Chillers with Tyler --Complete
ITMX Coil driver needs SEI&SUS->SAFE State. --Complete
Dust mon 6 & 10 alarming all throughout Maintenance time.
Had to take ISC_LOCK to Green Arms Manual to get through Initial Alignment.
H1 got an Initial Alignment and Relocking is going smoothly.
NOMINAL_LOW_NOISE Reached at 20:47 UTC
Observing Reached at 20:58 UTC
Supply Fan 2 in AHU 1 was manually disabled today after Robert found it to be a sizeable noise source. I went local to the fan during its shutdown to see if there was any observable noise/vibration during its winding down; nothing discernible was seen/felt.

I suspect that one of two things may be the cause of this additional noise. 1: This fan's vane position is the most closed off that we have on site, so opening the pitch on SF2 and closing SF1 a similar amount may be a solution. 2: There may be a clocking/timing issue between the two stages on SF2 causing unnecessary turbulence between the intake and exhaust side. I will check the timing asap.

Somewhat peripheral to the issue of excessive noise in AHU1 is condensate, which is still a "mild" issue. I still believe this is due to low airflow across the cooling coil. I have concern that disabling fan 2 will exacerbate the issue but will keep a close eye on it this week. AHU 1 services the West Bay, the Mechanical Room and BSC3-North Bay. Special attention should be given to temperatures in these zones. R. Schofield T. Guidry
Vicky, Sheila, Camilla, Dorotea, Naoki
We went to SQZT7 this morning with the new homodyne (73241). In the end we were able to see decently flat shot noise and decent visibility. On the way, we ran into some difficulties that caused some confusion:
In the end we have flat shot noise, a visibility of 98.5% measured on PDA (3.07% loss), and a visibility of 97.8% (4.44% loss) measured on PDB. The nonlinear gain was 11, measured with seed max / no pump. A comment to this alog will contain the measured sqz/asqz/mean sqz.
Screenshot summarizing homodyne measurements today. With measured carrier NLG=11 (for generated squeezing ~14.7-14.8 dB), we observe
Comparing sqz/anti-sqz to generated sqz: ~7% unexplained homodyne losses. This is consistent with our last estimate of excess HD losses (8/29/2023, LHO:72802, ~7% mystery loss). Since then, we swapped the HD detector and improved readout losses (visibility). We now measure more homodyne squeezing at 6 dB, consistent with the expected loss reductions. That is, compared to 8/29 (LHO:72802), we have less total loss, less budgeted HD loss, and more squeezing, but the same unexplained HD losses as before.
Comparing mean-squeezing to generated sqz: could be consistent with sqz/asqz losses. I think there is a mis-estimate of the generated squeezing level from the nonlinear gain. If we ignore our NLG=11 measurement and instead choose the generated squeezing level to match the observed 13.5 dB of anti-squeezing, letting losses determine the measured 6 dB squeezing level, we would have NLG=10 (not 11) for a generated squeezing level of 14.5 dB. This would suggest 7% unexplained losses, the same as the sqz/asqz measurements.
For ~7% mystery losses, this is compared to total HD losses of 21%, of which we budget 15% losses. From the sqz wiki, the budgeted losses are:
If we include phase+dark noise that degrades squeezing but is not loss, then 21% total loss can explain the 6dB measured squeezing, see e.g. from the gsheet calculator (edited to include ranges for NLG=10 and NLG=11):
| | SQZ | ASQZ |
|---|---|---|
| NLG | 10 - 11 | |
| x | (0.68, 0.70) | (0.68, 0.70) |
| gen sqz (dB) | (-14.5, -15.01) | (14.5, 15.01) |
| with throughput eta = | 0.79 | |
| meas sqz (dB) | (-6.24, -6.29) | (13.54, 14.03) |
| with phase noise (mrad) = | 20.00 | |
| meas sqz (dB) | (-6.08, -6.11) | (13.54, 14.03) |
| with dB(Vtech/Vshot) = | -22.00 | |
| var(v_tech/v_shot) | 0.0063 | 0.0063 |
| meas sqz (dB) | (-5.97, -6.00) | (13.54, 14.03) |
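To make the table above easier to reproduce, here is a minimal sketch of the loss model behind it (the same Aoki-style relations referenced for the gsheet calculator), with the throughput, phase-noise, and technical-noise numbers from the table as inputs; treat it as an illustration of the calculation rather than the calculator itself.

```python
# Sketch of the squeezing/anti-squeezing loss model used in the table above:
# generated squeezing from NLG, then throughput, phase noise, and a
# technical-noise term. Numbers reproduce the table rows.
import numpy as np

def measured_sqz_db(nlg, eta, phase_noise_rad=0.0, tech_var=0.0):
    """Return (sqz_dB, antisqz_dB) measured after losses.

    nlg             : nonlinear gain (e.g. 10-11)
    eta             : total throughput efficiency (e.g. 0.79)
    phase_noise_rad : RMS phase noise in radians (e.g. 0.020)
    tech_var        : technical-noise variance relative to shot noise
                      (e.g. 10**(-22/10) ~ 0.0063 for -22 dB)
    """
    x = 1 - 1 / np.sqrt(nlg)                # normalized pump parameter
    s_minus = 1 - 4 * x / (1 + x) ** 2      # generated squeezed quadrature (rel. shot)
    s_plus = 1 + 4 * x / (1 - x) ** 2       # generated anti-squeezed quadrature
    # apply throughput: vacuum fills in the lost fraction
    m_minus = eta * s_minus + (1 - eta)
    m_plus = eta * s_plus + (1 - eta)
    # phase noise mixes the two quadratures
    c2, s2 = np.cos(phase_noise_rad) ** 2, np.sin(phase_noise_rad) ** 2
    sq = m_minus * c2 + m_plus * s2 + tech_var
    asq = m_plus * c2 + m_minus * s2 + tech_var
    return 10 * np.log10(sq), 10 * np.log10(asq)

for nlg in (10, 11):
    print(nlg, measured_sqz_db(nlg, eta=0.79, phase_noise_rad=0.020,
                               tech_var=10 ** (-22 / 10)))
# -> roughly -6.0 dB squeezing and +13.5 to +14.0 dB anti-squeezing,
#    matching the last row of the table.
```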
DTT homodyne template saved at $userapps/sqz/h1/Templates/dtt/HD_SQZ/HD_SQZ_101023.xml .
Edited to include some history of homodyne measurements:
It could still be interesting to vary NLG to see if we can observe any more squeezing, or if an additional technical noise floor (aside from dark noise) is needed to explain the NLG sweeps.
We revised the sqz loss wiki table again today, and are including it to explain what we think our current understanding of losses is.
It seems likely that the 7% extra losses we see on homodyne measurements are in HAM7, so we've nominally added that to the loss budget.
In addition to this, there would be an additional 8% loss on the sqz beam if we didn't correct its linear polarization with a half wave plate (72604). At the time of the chamber close out (65110), we measured throughput from HAM7 to HAM5 which implies that two passes through the OFI were giving us 97.6% transmission, so this is not compatible with the polarization being wrong by this much. We haven't included this as a loss in the loss budget because it seems incompatible with our measurement in chamber.
The wiki currently lists the OMC transmission as 92%, and the PD QE as 98%. The PD QE may be worse than this (see 61568), but measurements of the product of QE and OMC transmission for the 00 mode seem to indicate that it is in the range 90-92%, so this is close.
With the losses inferred from the measured sqz/anti-sqz in the IFO, the plausible range of losses is 30-35%; we are using 32%. With only known losses (including the values for OMC trans and PD QE), we have 14% unexplained loss. If we include the 7% apparent HAM7 losses, we have 9% unexplained losses in the IFO. This does seem similar to the 8% polarization problem, but it would also include SQZ-OMC mode matching.
Possible future scenarios: We may be able to reduce the 7% HAM7 losses, and we may be able to swap the OMC to reduce those losses from 92% to 97%.
| | total efficiency | resulting sqz measured without subtraction (technical noise -12dB below shot, 20mrad phase noise) | if technical noise is 20dB below unsqueezed shot noise |
|---|---|---|---|
| fix HAM7 losses | 0.73 | 4.4dB | 5dB |
| swap OMC (92%-> 97%) | 0.71 | 4.14dB | 4.8dB |
| swap OMC and fix HAM7 losses | 0.77 | 4.85dB | 5.6dB |
| swap OMC, fix HAM7 losses, and fix 8% from polarization issue (if that is real) | 0.84 | 5.83dB | 6.8dB |
These numbers come from the Aoki equations that Vicky added to the google sheet here: gsheet
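Using the same relations as in the sketch earlier in this thread, the scenario rows above can be roughly reproduced; for example, here is a quick self-contained check of the "fix HAM7 losses" row, assuming NLG=10 and 20 mrad phase noise.

```python
# Quick check of the "fix HAM7 losses" row above (eta = 0.73, 20 mrad phase
# noise, technical noise -12 dB and -20 dB below shot). Same Aoki-style
# relations as the earlier sketch; NLG = 10 assumed.
import numpy as np

nlg, eta, theta = 10, 0.73, 0.020
x = 1 - 1 / np.sqrt(nlg)
s_minus, s_plus = 1 - 4 * x / (1 + x) ** 2, 1 + 4 * x / (1 - x) ** 2
m_minus, m_plus = eta * s_minus + (1 - eta), eta * s_plus + (1 - eta)
with_phase = m_minus * np.cos(theta) ** 2 + m_plus * np.sin(theta) ** 2
for tech_db in (-12, -20):
    sq = with_phase + 10 ** (tech_db / 10)
    print(tech_db, round(10 * np.log10(sq), 2))
# -> about -4.4 dB and -5.0 dB, matching the first row of the table.
```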
Don G. and Sheila have very likely resolved the homodyne polarization issue as being due to the SQZT7 periscope. So, the mis-polarization is likely not an issue for squeezing in the interferometer.
The sqz beam leaves HAM7 via reflection off the sqz beam diverter. From the latest CAD layout from Don, the outgoing reflected beam (blue) is ~75.58 degrees from global +X. The periscope re-directs the beam to travel along SQZT7, approximately along +Y. The CAD layout thus suggests that the SQZT7 periscope re-directs the beam in yaw (counter-clockwise) by an estimated 90 - 75.58 = 14.4 degrees.
From recent homodyne measurements (LHO:72604) of the sqz light leaving HAM7 and arriving on SQZT7, ~8% of the power was in the wrong polarization, which corresponds to a ~16.5 degree polarization misrotation. Compared to this 16.5 degree misrotation we were searching for, the 14.4 degree polarization rotation induced by the periscope image rotation can plausibly explain the misrotation.
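As a quick arithmetic check of the two numbers above (treating the 8% wrong-polarization power fraction as sin² of the rotation angle):

```python
# Quick check: 8% power in the wrong polarization <-> rotation angle,
# and the periscope yaw from the CAD layout angles quoted above.
import math

wrong_pol_fraction = 0.08
rotation_deg = math.degrees(math.asin(math.sqrt(wrong_pol_fraction)))
print(round(rotation_deg, 1))   # ~16.4 deg, i.e. the ~16.5 deg misrotation

periscope_yaw = 90 - 75.58
print(round(periscope_yaw, 2))  # 14.42 deg redirect, close to the above
```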
Due to regular updates, cdsssh needed to be rebooted. I filed WP 11466 because a small number of users were on the system. I was going to wait until 2pm (an arbitrary time when we would be out of maintenance); however, I was able to verify earlier that no one was doing remote work, so I restarted the server earlier than planned. It was down for a minute or two and is back up and accepting remote connections.
* Added to ICS DEFECT-TCS-7753, will give to Christina for dispositioning once new stock has arrived.
New stock arrived and has been added to ICS. Will be stored in the totes in the TCS LVEA cabinet.
ICS has been updated. As of August 2023, we have 2 spare SLEDs for each ITM HWS.
ICS has been updated. As of October 2023, we have 1 spare SLED for each ITM HWS, with more ordered.
Spare 840nm SLEDs QSDM-840-5 09.23.313 and QSDM-840-5 09.23.314 arrived and will be placed in the TCS cabinets on Tuesday. We are expecting qty 2 790nm SLEDs too.
Spare 790nm SLEDs QSDM-790-5--00-01.24.077 and QSDM-790-5--00-01.24.079 arrived and will be placed in the TCS cabinets on Tuesday.
In 84417, we swapped:
The removed SLEDs have been dispositioned, DEFECT-TCS-7839.
Here is a plot showing the effect of the cleaning that is currently ongoing. Since Robert was able to significantly mitigate the 120 Hz peak yesterday, there is not much difference between the strain channel and the cleaned channel there anymore. But our LSC FF needs tuning (measurements to be taken later this commissioning period), so the cleaning has a large effect at low frequencies.
Both the jitter and LSC noises were retrained on data from our most recent lock. The high frequency laser noise I haven't retrained in several weeks, and it's still doing quite nicely.
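If anyone wants to reproduce the strain-vs-cleaned comparison offline, a minimal gwpy sketch along these lines should work; the channel names are assumed to be the standard GDS calibrated outputs, and the times are placeholders to be replaced with the stretch discussed above.

```python
# Minimal sketch: overlay ASDs of the calibrated strain channel and the
# NonSENS-cleaned channel for the same stretch of this lock.
# Channel names are assumed; the times below are placeholders.
from gwpy.timeseries import TimeSeriesDict

channels = ["H1:GDS-CALIB_STRAIN", "H1:GDS-CALIB_STRAIN_CLEAN"]
start, end = "2023-10-12 19:00:00", "2023-10-12 20:00:00"  # example times only

data = TimeSeriesDict.get(channels, start, end)

plot = None
for name, ts in data.items():
    asd = ts.asd(fftlength=16, overlap=8, method="median")
    if plot is None:
        plot = asd.plot(label=name)
    else:
        plot.gca().plot(asd, label=name)
ax = plot.gca()
ax.set_xlim(10, 1000)
ax.set_xlabel("Frequency [Hz]")
ax.set_ylabel("ASD [1/sqrt(Hz)]")
ax.legend()
plot.savefig("strain_vs_clean_asd.png")
```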
This quiet period ended at 20:22 UTC (for calibration measurements).
Attached is the range plot from the control room wall, where we can see the improvement due to the cleaning being engaged.
Since we have lost lock, I re-accepted in the h1oaf Observe.snap file the correct value of 0 gain for H1:OAF-NOISE_WHITENING_GAIN, so NonSENS will be off (without any SDF diffs) when we get relocked.