Naoki, Vicky
At 25W, holding at REDUCE_RF45_MODULATION_DEPTH before MAX_POWER, we did a quick anti-sqz/sqz measurement to see if we could loosely infer sqz losses with the IFO relatively cold. We did not clearly see more squeezing here at lower power / colder IFO, as we have in the past, e.g. 66877. We weren't able to easily engage ASC here (it was before move_spots), but slider values are similar to when the IFO was last locked and a little bit of walking the alignment didn't make a big difference, so we left ASC off.
Looking at DARM (based on GDS), and the SQZ BLRMS, we can read the sqz levels:
which are surprisingly consistent with what we have at full power, e.g. the recent sqz dataset in 71902 with the IFO at 60W and thermalized in full lock.
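As a rough sketch of how such a read-off can be done offline (not the exact BLRMS pipeline; the channel name is real, but the GPS segments and frequency band below are placeholders), one can compare band-averaged DARM ASDs with and without squeezing:

    # Sketch: estimate the sqz level (dB) from the ratio of DARM ASDs with
    # FDS injected vs. no-sqz, averaged over a quiet high-frequency band.
    # GPS segments below are placeholders, not the actual measurement times.
    import numpy as np
    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:GDS-CALIB_STRAIN'
    seg_nosqz = (1377000000, 1377000300)   # placeholder no-sqz stretch
    seg_sqz   = (1377000600, 1377000900)   # placeholder FDS stretch

    def band_asd(start, end, f1=1200, f2=1800):
        """Median DARM ASD in [f1, f2] Hz for the given GPS segment."""
        data = TimeSeries.get(CHAN, start, end)
        asd = data.asd(fftlength=8, overlap=4, method='median')
        return np.median(asd.crop(f1, f2).value)

    sqz_db = 20 * np.log10(band_asd(*seg_sqz) / band_asd(*seg_nosqz))
    print('SQZ level relative to no-sqz: %+.2f dB' % sqz_db)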
Thoughts:
-- I'm not sure what it means that now we have similar amounts of squeezing with the IFO at low and high power.
-- For this test, we left PSAMS at 200V/200V and didn't try walking PSAMS, though in 66877 we saw better squeezing with a cold IFO and lower PSAMS.
-- The slope of DARM at high ~kHz frequencies seems to be slightly different between 25W and 60W? From the dtt, compare red dashed = "60W no sqz" & cyan line = "25W FDS". Here, the lock sequence was at REDUCE_RF45_MODULATION_DEPTH, so before lownoise length control and laser noise suppression.
We then measured NLG ~ 11.05. In the sqz dataset 71902 we did not separately measure NLG, but the NLG has remained consistent with the Feb 2023 NLG calibration, so we can re-affirm that trusting the calibration is probably okay, assuming the OPO temperature is well-tuned. For comparison, the NLG calibration suggests NLG=11 (gen sqz ~ 14.75dB) with opo_trans=80uW.
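For cross-checking the NLG-to-generated-squeezing conversion, here is a minimal sketch using the standard lossless single-mode OPO relations; the on-site Feb 2023 calibration may use a slightly different convention, so treat the numbers as approximate:

    # Convert a measured NLG into generated squeezing / anti-squeezing (dB)
    # at low frequency, assuming NLG = 1/(1-x)^2 with x = sqrt(P/P_threshold).
    import numpy as np

    def gen_sqz_from_nlg(nlg):
        x = 1 - 1 / np.sqrt(nlg)
        v_sqz  = 1 - 4 * x / (1 + x) ** 2   # squeezed quadrature variance
        v_asqz = 1 + 4 * x / (1 - x) ** 2   # anti-squeezed quadrature variance
        return 10 * np.log10(v_sqz), 10 * np.log10(v_asqz)

    print(gen_sqz_from_nlg(11.05))   # ~(-15 dB, +15 dB), close to the quoted ~14.75 dB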
J. Kissel, J. Warner

Given the lovely "natural experiment" of the Aug 25th 2023 ~3.2 [deg F] / 1.8 [deg C] temperature increase "impulse" over 4 hours at EY (see LHO:72444 and LHO:72428), I wanted to understand both
(a) how the IFO handles it / what caused the lock loss, and
(b) document the levels and time-scales of alignment change that resulted in bad alignment for *several* lock stretches -- indeed *days* after the impulse.

We often wave our hands saying things like "well, the vacuum system acts like a low pass filter, with a time constant of [[insert hand-waver's favorite time-scale on the order of hours]]." I wanted to see if we could quantify that, and if not, add a bit more clarity to how complicated the situation is.

To do so, I looked at the Y end station's signals in Z, RZ, PIT, and YAW that are either (1) out of loop, or (2) when in-loop -- the loop's feedback control output, using the "classic trick" of approximating G/(1+G) ~ 1 where G >> 1, such that (plant) * CTRL = out-of-loop signal as though the loop wasn't there. Those sensors include
- HPI ST1 ISO OUT (which are the IPS, under DC-coupled feedback control) -- calibrated into nano- meters or radians
- ISI ST1 ISO OUT (which are the CPS, under DC-coupled feedback control) -- calibrated into nano-
- ISI ST2 ISO OUT (which are the CPS, under DC-coupled feedback control) -- calibrated into nano-
- SUS ETMY M0 LOCK OUT (i.e. the WFS, under DC-coupled global ASC control) -- calibrated into micro- meters or radians
- SUS ETMY M0 DAMP IN (which are "out-of-loop" because the local damping loops are AC-coupled) -- calibrated into micro-
- SUS TMSY M1 DAMP IN (equally "out-of-loop") -- calibrated into micro-
- ETMY L3 Optical Levers -- calibrated into micro-
(I'm pleasantly surprised at how well they all agree, to the ~0.25 micro- kind of level that I have for these trends.)

I conclude:
(I) The IFO Yaw is most impacted by the SEI system's RZ motion, due to the Z to RZ cross-coupling of the radially symmetric system of triangular ISI blade springs, as the blades sag from temperature increase:
    (i) the total SEI system's yaw swung ~2 [urad] during the excursion, dominated by ISI ST1,
    (ii) the SUS ETMY and SUS TMSY follow this input in common, and
    (iii) ISI ST1 takes the longest time to recover alignment -- trending over *days* slowly back to pre-impulse equilibrium -- and is still not yet there as of Aug 28.
(II) The IFO Pitch is most impacted by the ETMY and TMSY SUS systems' expected pitch and vertical sag from temperature increase:
    (i) the IFO's global alignment drives the pitch of ETMY, which drifted *down* in pitch over ~10 [urad] before losing lock, presumably from running out of range,
    (ii) the ASC signals seem to slowly drive the ETMY off into the weeds trying to recover the original, pre-impulse alignment, causing *eventual* subsequent lock losses as it pushes the optic *past* the pre-impulse alignment position,
    (iii) the TMS, which does not have global control, pitches a similar amount, ~14 [urad], in the *opposite direction, up*, and also takes *days* to get back to the original value (somewhat alleviated), and
    (iv) the fact that the Sus-point blades are the *same* for the QUAD and TMS, that they're the biggest blades in either SUS, and that the order of the pitching is about the same, implies to me that the pitching is dominated by the upper, Sus-point blades.

I attached the trends that drive me towards these conclusions. Give yourself time -- I've stared at these all afternoon to come to these conclusions.
And honestly, I *still* don't think I've looked at enough plots (e.g. I don't show the ETMY alignment sliders that are the operators and/or initial alignment drives trying to make up for the SEI blades yaw and SUS blades pitch). I also attach a .txt file that goes into more detail about how I calibrated the various CTRL signals, including where I got the transfer function values that are scaling the trends that you see.
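For reference, the loop-suppression identity behind the "classic trick" above, for a standard single loop where the disturbance d enters at the sensor and G is the open-loop gain (plant times controller):

    y = \frac{d}{1+G}, \qquad \mathrm{plant}\times\mathrm{CTRL} = -\frac{G}{1+G}\,d \;\approx\; -d \quad \text{for } |G| \gg 1,

so in the band where the loop has high gain, the plant-filtered control output reproduces (up to a sign) what the sensor would have reported with the loop open.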
Very Interesting!
I've added a few more plots which show the in-loop motion as measured by the CPS sensors on ETMY. These show that the platform didn't move down or twist during the temperature excursion. This is what you would expect, given Jeff's plots from above - the springs sag, and the servos compensate. That's all just peachy, so long as there is no yaw seen at the optic - but the oplev does see yaw.
Either - there is some yaw in the ISI which is not seen by the CPS sensor (e.g. the sensor itself is temperature sensitive, but this should be pretty small), or
- the yaw is just from SUS, or
- oplev is affected by temperature, or
- the yaw is coming from somewhere else (HEPI, piers, SUS, the devil, etc)
I've not thought about this very hard yet - but I attach a 20 hour time stretch from the temp sensor (calibration is crazy, but the shape matches. not sure what's up) and 4 CPS cart-basis sensor signals (calibration should be nanometers or nanoradians). The in-loop change on the CPS is less than 1 nanorad.
Lockloss @ 23:20, DCPD saturation right before, cause unknown.
TITLE: 08/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
Quiet shift until the noisy earth rolled a Mag7.1 Earthquake to H1. Made it back to Low Noise (but then it just lost lock).
(Jim discovered LVEA lights were on...probably had been for several days!)
LOG:
Looks like I forgot to mention some items from my log (due to not saving my aLog draft, getting logged out, and losing my morning logged items). Apologies for not having the time, but just wanted to note:
TITLE: 08/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 14mph Gusts, 10mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
- H1 is holding off at REDUCE RF45 for SQZ work
- SEI looks to have mostly recovered from the EQ earlier
- CDS/DMs ok
After H1's longest lock of O4 thus far (60hr 29min lock from 8/23 (Wed) 10:20am PT -- 8/25 (Fri) 11pm PT), H1 has been finicky with locking and has also had a degrading inspiral range.
The primary issue prior to this lockloss was the EY temperature increase of 3+degF due to FAILURE of Chiller Pump #1 at EY. From about 10am-5pm the temperatures were NON-nominal (but H1 remained locked this whole time(!) as ASC maintained the pointing for EY during this HUGE temperature swing).
With that said about the temperature drift, and at TJ's request, I also wanted to list the commissioning changes which occurred during the 2.5 days of the lock.
(Times in PT)
Aug23 (Wed)
Aug24 (Thurs): Nothing in alog
Aug25 (Fri)
Aug26 (Sat)
Last Tuesday (Aug 21, 2023) Rick and I went down to the EX station while the IFO was still locked.
We asked the operator to turn off BRS sensor correction (Sitemap -> ISI_CONFIG -> SEI_CONF -> WINDY_NO_BRSX). This was to allow us to gently walk to the other side of the beam tube to access the PCAL Transmitter module.
Once there, Rick and I opened the Tx enclosure and blocked the Outer (Lower) PCAL beam at GPStime: 1376752360.
We saw the following motion of the PCAL channel: H1:CAL-CS_TDEP_PCAL_X_OVER_Y_REL_MAG.
We did expect this to go up and then settle down. We were surprised by how long it took to settle down.
At GPStime 1376753100 the inner (Upper) beam was blocked.
That seems to have lasted uninterrupted until GPStime: 1376754170.
At 1376754420 ISC_LOCK went from NOMINAL_LOW_NOISE to DOWN, signaling the start of the end station measurements made that day.
More analysis coming soon.
The nominal value of the squeezer laser diode current was changed to 1.863 from 1.95. The tolerance is unchanged at 0.1. Looking at trends, we sometimes read a value just below 1.85, leading to a failed-laser condition which in turn triggers a relocking of the squeezer laser. However, since we are already locked, all we see is the fast and common gains ramping down and up.
Looking at this diode current trend over the past 500 days, we see it fairly stable but trending down very slowly. It may have lost 10 mA over the past year. Resetting the nominal value should keep us in the good band for a while if this trend continues.
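As a toy illustration of the headroom this buys (not the actual monitoring code; the 1.849 A reading and the drift rate are rough numbers pulled from the trends above):

    # Headroom of the diode-current tolerance band, old vs. new nominal,
    # assuming the ~10 mA/yr downward drift seen in the 500-day trend.
    nominal_old, nominal_new, tol = 1.95, 1.863, 0.1   # amps
    reading = 1.849          # typical low excursion seen in the trends (approximate)
    drift = 0.010            # amps per year, rough estimate

    for nominal in (nominal_old, nominal_new):
        low_limit = nominal - tol
        margin = reading - low_limit
        print('nominal %.3f A: low limit %.3f A, margin %+.0f mA (~%.1f yr at current drift)'
              % (nominal, low_limit, margin * 1e3, margin / drift))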
So far this seems to have fixed the TTFSS gain changing issue! Haven't seen gain changes while locked in the past couple days, since Daniel changed the laser diode nominal current (bottom purple trend).
In the past week there wasn't a single TTFSS gain ramping incident during lock. The fast and common gains are again monitored in SDF.
2016 UTC H1 lost lock due to a 7.1 South Pacific earthquake.
We rode through the P & S waves....the R wave is still about 30+min away and it will be "spicy" ~Jim W.
I have taken H1 to IDLE....while we wait for the R wave to pass & for the planet to calm down after that.
Attached are some screenshots Tony snapped.
Closes FAMIS 21128
The cup in the back corner of the mezzanine was empty.
Mon Aug 28 10:10:25 2023 INFO: Fill completed in 10min 20secs
Travis confirmed a good fill curbside.
FAMIS 19991
Jason was in the anteroom last Tuesday for inventory, which is seen by the environmental trends. Since then, the differential pressure between the anteroom and the laser room has been elevated, but not alarmingly so.
Well pump is running to replenish the fire water tank. The pump will run for 4 hours and automatically shut down.
At 9:45am local time Chiller 1 at End Y went into alarm for "Evaporator Water Flow Lost". When I arrived at the EY chiller yard I observed that neither chiller was running but chilled water pump 1 was continuing to run. I noted the alarm and headed for the mezzanine above the AHUs to assess what the supply and return pressures were nearest the evaporator coil. Immediately I read 0 (or what I thought was 0) at the return line. This would generally indicate that there has been enough glycol loss within the system that makeup is necessary via the local tank (though I've never seen it get to 0). That is, until I read that both supply lines were at an alarming 140 psi (normal operating pressures for all 4 supply and return float around 30).

I immediately phoned Richard to have him command chilled water pump 1 off to stop the oversupply of chilled water. For reasons not clear to me, the disable command via FMCS Compass was not taken at the pump. I went back to the chiller yard and observed that (1) the pump had not been disabled and (2) pressures at the pump were at around 100 psi (normal operating for the current frequency is about 50). Following that, I manually threw the pump off at the VFD to prevent further runaway of the system. Between the time of noting 140 psi and manually throwing the pump off, the system pressure increased to 160 psi.

After a thorough walk-down of the system, I elected not to utilize our designed redundancy in chiller 2 and chilled water pump 2, as I was still unaware what was causing the massive overpressure at all supply and return lines. It was also found that the return line was not actually at 0, but instead had made a full rotation and was pegged on the backside of the needle (all of these need replacement now). Macdonald-Miller was called on site to help assess what the issue might be. Given that there were recent incursions to flow via R. Schofield, the strainer was the primary point of concern. We flushed the strainer briefly at the valve and noted a large amount of debris; after a second flush, much less/next to none was noted. This alleviated the system pressure substantially.

The exact cause of the fault and huge increase of pressure is still not clear. There are a number of flow switches at the chiller; Bryan with Mac-Miller suspects part of the issue may live there, and we are going to pursue this further during our next maintenance window. Work was also performed at the strainer within the chiller, where rubber/latex-esque debris was found. Work on Chiller 1 is to continue, but for now the system and end station are happy on chiller 2/CHWP-2. Looking at the FMCS screen shows temps have normalized as of the writing of this log.

T. Guidry, B. Haithcox, R. Thompson, C. Soike, R. McCarthy
J. Kissel, for T. Guidry, R. McCarthy

Just wanted to get a clear, separate aLOG in regarding what Corey mentioned in passing in his mid-shift status LHO:72423: The EY HVAC air handler's chilled water pump 1 of 2 failed this morning, 2023-08-25 at 9:45a PDT, and thus the EY HVAC system has been shut down for repair at 17:35 UTC (10:35 PDT). The YVEA temperature is therefore rising as it equilibrates with the outdoor temperature; thus far from 64 deg F to 67 deg F. Tyler, Richard, and an HVAC contractor are on it, actively repairing the system, and I'm sure we'll get a full debrief later. Note -- we did not stop our OBSERVATION INTENT until 2h 40m later, at 2023-08-25 20:18 UTC (13:18 PDT), when we went out to do some commissioning.
The work that they've been doing so far today to diagnose this issue has been in the 'mechanical room'. Their work should not add any significant noise over what already occurs in that room at all times, so I do not expect that there should be any data quality issues as a result of this work. But, we shall see (as Jeff points out) if there are any issues from the temperature itself changing.
They are done for the weekend and temperatures are returning to normal values.
Chiller #2 (with Chilled Water Pump #2) is what we are now running.
Chiller Pump #1 will need to be looked at some more (Tyler mentioned the contractor will return on Tues).
Attached is a look at the last 4+yrs and both EY chillers (1 = ON & 0 = OFF).
See Tyler's LHO:72444 for a more accurate and precise description of what happened to the HVAC system.
TITLE: 08/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 11mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked for 46 hours.
Just a note: I was the Day Operator, but arrived late due to Route 10 being CLOSED. :(
Naoki noticed the pump fiber rejected PD (in HAM7; it rejects pump fiber light that comes out in the wrong polarization) was saturated, so today I re-aligned the pump fiber polarization using the SQZT0 picos (described recently also in 71761).
I'm not sure why the pump fiber needs to have its input polarization readjusted so often; I checked the CLF fiber and FCGS fiber, and they both seemed relatively well-aligned despite not having been adjusted in a while. Especially this time, it seems the fiber polarization got misaligned more quickly than before.
Austin had to reset the sqz pump ISS again on Sunday (72474), lowering the generated sqz level by lowering OPO trans from 80uW (recent nominal) to 65uW. Sqz level correspondingly went down. Naoki and I have re-aligned the pump fiber polarization, and brought the squeezer back to the nominal 80uW generated sqz level.
It's strange that we had to re-align the pump fiber polarization, again. The pump fiber polarization seems to be misaligning more quickly recently, see trends. This time, the fiber polarization misaligned to saturation in 1-2 days (last time was several days, before that we never saw it misaligned to saturation). It also needed both the L/2 and L/4 waveplates to re-align it. We should definitely monitor this situation and see if we can understand/fix why it's happening.
Genevieve, Lance, Robert
To further understand the roughly 10Mpc lost to the HVAC (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308), we made several focused shutdowns today. These manipulations were made during observing (with times recorded) because such HVAC changes happen automatically during observing, and also because we were reducing noise rather than increasing it. The times of these manipulations are given below.
One early outcome is that the peak at 52 Hz in DARM is produced by the chilled water pump at EX (see figure). We went out and looked to see if the vibration isolation was shorted; it was not, though there are design flaws (the water pipes aren't isolated). We switched from CHWP-2 to CHWP-1 to see if the particular pump was extra noisy. CHWP-1 produced a similar peak in DARM at its own frequency. The peak in the accelerometers is also similar in amplitude to the one from the water pump at EY. One possibility is that the coupling at EX is greater because of the undamped cryobaffle at EX.
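As a sketch of one way to corroborate this kind of attribution offline, the coherence between an EX accelerometer and DARM around 52 Hz can be checked; the PEM channel name and GPS segment below are placeholders, not the ones used for the figure:

    # Coherence between a (placeholder) EX accelerometer and DARM near 52 Hz.
    from gwpy.timeseries import TimeSeries

    start, end = 1376320000, 1376320600                      # placeholder GPS segment
    darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
    acc = TimeSeries.get('H1:PEM-EX_ACC_EXAMPLE_Z_DQ', start, end)  # placeholder channel

    if acc.sample_rate != darm.sample_rate:
        acc = acc.resample(darm.sample_rate)                 # match rates for coherence
    coh = darm.coherence(acc, fftlength=16, overlap=8)
    band = coh.crop(45, 60)
    print('peak coherence in 45-60 Hz: %.2f at %.2f Hz'
          % (band.value.max(), band.frequencies[band.value.argmax()].value))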
Friday HVAC shutdowns; all times Aug. 18 UTC
15:26 CS SF1, 2, 3, 4 off
15:30:30 CS SF5 and 6 off
15:36 CS SF5 and 6 on
15:40 CS SF1, 2, 3, 4 back on
16:02 EY AH2 (only fan on) shut down
16:10 EY AH2 on
16:20 EY AH2 off
16:28 EY AH2 on
16:45 EY AH2 and chiller off
16:56:30 EY AH2 and chiller on
17:19:30 EX chiller only off, pump stays on
17:27 EX water pump CHWP-2 goes off
17:32 EX CHWP-2 back on, chiller back on right after
19:34:38 EX chiller off, CHWP-2 pump stays on for a while
19:45 EX chiller back on
20:20 EX started switch from chiller 2 to chiller 1 - slow going
21:00 EX Finally switched
21:03 EX Switched back to original, chiller 1 to chiller 2
Turning Robert's reference to LHO:72308 into a hyperlink for ease of navigation. Check out LHO:72297 for a bigger-picture representation of how the 52 Hz peak sits in the broader DARM sensitivity. From the time stamps in Elenna's plots, they were taken at 15:27 UTC, just after the corner station (CS) supply fans "SF 1, 2, 3, 4" were turned off. SF stands for "Supply Fan", i.e. the air handler unit (AHU) fans that push the cool air into the LVEA. Recall, there are two fans per air handler unit, for the two air handler units (AHU1 and AHU2) that feed the LVEA in the corner station. The channels that you can use to track the corner station's LVEA HVAC system are outlined more in LHO:70284, but in short, you can check the status of the supply fans via the channels
H0:FMC-CS_LVA_AH_AIRFLOW_1 Supply Fan (SF) 1
H0:FMC-CS_LVA_AH_AIRFLOW_2 Supply Fan (SF) 2
H0:FMC-CS_LVA_AH_AIRFLOW_3 Supply Fan (SF) 3
H0:FMC-CS_LVA_AH_AIRFLOW_4 Supply Fan (SF) 4
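And a quick sketch of how one might trend those airflow channels to pin down the shutdown times (the time window and the "off" threshold below are illustrative, not calibrated values):

    # Trend the CS LVEA supply-fan airflow channels around the test window.
    from gwpy.timeseries import TimeSeriesDict

    chans = ['H0:FMC-CS_LVA_AH_AIRFLOW_%d' % n for n in (1, 2, 3, 4)]
    data = TimeSeriesDict.get(chans, 'Aug 17 2023 15:00 UTC', 'Aug 17 2023 16:00 UTC')
    for name, ts in data.items():
        # assume a fan that is off reads near-zero airflow; threshold is illustrative
        low = ts.times[ts.value < 0.1 * ts.value.max()]
        print(name, 'first low-airflow sample:', low[0] if len(low) else 'none in window')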
My Bad -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, and the times Robert logged above are for 2023-Aug-18. Elenna's demonstration is during Robert's site-wide HVAC shutdown on 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner station, as I'd errantly claimed above).
For these 2023-Aug-18 times mentioned in this LHO aLOG 72331, check out the subsequent analysis of impact in LHO:72778.
Robert did an HVAC off test. Here is a comparison of GDS CALIB STRAIN NOLINES from earlier on in this lock and during the test. I picked both times off the range plot from a time with no glitches.
Improvement from removal of 120 Hz jitter peak, apparent reduction of 52 Hz peak, and broadband noise reduction at low frequency (scatter noise?).
I have attached a second plot showing the low frequency (1-10 Hz) spectrum of OMC DCPD SUM, showing no appreciable change in the low frequency portion of DARM from this test.
Reminders from the summary pages as to why we got so much BNS range improvement from removing the 52 Hz and 120 Hz features shown in Elenna's ASD comparison. Pulled from https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230817/lock/range/. The range integrand shows ~15 and ~5 Mpc/rtHz reduction at the 52 and 120 Hz features. The BNS range time series shows a brief ~15 Mpc improvement at 15:30 UTC during Robert's HVAC-off tests.
Here is a spectrum of the MICH, PRCL, and SRCL error signals at the time of this test. The most visible change is the reduction of the 120 Hz jitter peak also seen in DARM. There might be some reduction in noisy peaks around 10-40 Hz in the signals, but the effect is small enough it would be useful to repeat this test to see if we can trust that improvement.
Note: the spectra have strange shapes, I think related to some whitening or calibration effect that I haven't bothered to account for in making these plots. I know we have properly calibrated versions of the LSC spectra somewhere, but I am not sure where. For now these serve as a relative comparison.
According to Robert's follow-up / debrief aLOG (LHO:72331) and the time stamps in the bottom left corner of Elenna's DTT plots, she is using the time 2023-08-17 15:27 UTC, which corresponds to the time when Robert had turned off all four of the supply fans (SF1, SF2, SF3, and SF4) in the corner station (CS) air handler units (AHU) 1 and 2 that supply the LVEA, around 2023-08-17 15:26 UTC.