TITLE: 08/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
Workstations were updated and rebooted. The update was only to OS packages. Conda packages were not updated.
TITLE: 08/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
SHIFT SUMMARY: H1 has been locked and observing for 13 hours.
Noted in my mid-shift report that the MY AHU alarm started around 23:40; the AHU temperature has started to come down and has now dropped below 85 degrees so the alarm has stopped. Very quiet shift otherwise.
Incoming owl shift operator: TJ
LOG:
No log for this shift.
State of H1: Observing at 147Mpc
H1 has been locked for 9 hours. The MY air handler alarm started around two hours ago; the MEDM screen shows the cooling temp is around 86 degF and has been climbing over the past two days. Since the IFO doesn't much care about the mid-station temperatures and a contractor will be out to check it tomorrow, this is not an urgent cause for concern (but I've notified Bubba and am tagging facilities anyway).
FAMIS 19987
No major events of note.
FAMIS 19964
pH of PSL chiller water was measured to be between 10.0 and 10.5 according to the color of the test strip.
FAMIS 21126
I added no water to either chiller, they both were at or very near the values from the previous check on the 21st. Filters looked good and I saw no water in the leak-detecting Dixie cup.
TITLE: 07/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 13mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: Taking over from Ryan C. H1 has been locked and observing for 5 hours.
TITLE: 07/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
SHIFT SUMMARY: The squeezer lost lock a few times and kicked us out of observing, plus 1 lockloss with an automated relock. Handing off to Ryan S.
Lock#1:
Squeezer ISS dropped us out of observing (into commissioning) from 15:59UTC to 16:09UTC
Lockloss @ 16:31UTC
Lock#2:
Couldn't get DRMI or PRMI to lock; went to CHECK_MICH, then back to PRMI, then lockloss @ 16:54 UTC
Lock#3:
Couldn't get DRMI or PRMI (2nd try), so H1MANAGER took us to initial alignment as expected, which went smoothly. We were then able to lock DRMI within 30 seconds on the following relock.
Back into NLN at 18:05UTC and Observing at 18:26UTC. **Fully automated relock with an automated initial alignment**
We briefly (~50 seconds) lost connection to the CDS_WAP_{EX,EY,LVEA}_STATUS channels at 21:55UTC; these WAPs were (and still are) all off
Superevent S230731an
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 17:34 | FAC | Karen | Optics lab, VAC prep | N | Tech clean | 17:53 |
| 19:01 | SEI | Jim | Mezz | N | Parts/tools search | 19:06 |
| 19:27 | CAL | Gabriele | Remote | N | CHARD_Y measurement | 20:48 |
| 20:28 | VAC | Travis & Jordan | Optics lab | N | Disassembly of a setup | 21:56 |
| 21:09 | FAC | Ken | MY | N | Light fixtures back | 22:10 |
FAMIS 19669
Script reports ITMX_ST2_CPSINF H3 & V1 high-freq noise is high! I agree that H3 looks higher, but V1 looks OK.
Today between 19:25 UTC and 20:45 UTC I measured the CHARD_Y open-loop gain with two broadband noise injections (different shapes). The result is a pretty good measurement that will enable a better design of the CHARD_Y controller.
Right now: the controller has little gain at 2.6 Hz, and a very large (10 dB) gain peaking at 1 Hz.
The template is available in /ligo/home/gabriele.vajente/ASC/CHARD_Y_shaped_exc_2023_07_31b.xml
Here's a fit of the CHARD_Y plant (open-loop gain divided by the controller, including the gain of 60). Zeros and poles in the s-domain are below (rad/s):
z = [-4.27268475+16.14989614j, -4.27268475-16.14989614j,
-0.84677811+11.7910939j , -0.84677811-11.7910939j ,
-0.15001959 +3.1577398j , -0.15001959 -3.1577398j ,
-0.14073526 +2.60978661j, -0.14073526 -2.60978661j,
-0.89432965 +1.06440904j, -0.89432965 -1.06440904j]
p = [-1.26172189+18.71400182j, -1.26172189-18.71400182j,
-0.42125296+16.04217932j, -0.42125296-16.04217932j,
-1.88894857+14.52668064j, -1.88894857-14.52668064j,
-0.17624191 +6.42823571j, -0.17624191 -6.42823571j,
-0.08126954 +3.12803462j, -0.08126954 -3.12803462j,
-0.33177813 +2.74361234j, -0.33177813 -2.74361234j,
-1.01199783 +0.j , -0.20377166 +0.j ]
k = -2608.840762444897
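As a cross-check, the quoted zpk fit can be dropped into scipy and evaluated as an analog frequency response. A minimal sketch (the frequency band and variable names are my own choices, not part of the original template):

```python
import numpy as np
from scipy import signal

# Fitted CHARD_Y plant (open-loop gain divided by controller, gain 60 included);
# zeros/poles in the s-domain (rad/s), copied from the first fit above.
z = [-4.27268475+16.14989614j, -4.27268475-16.14989614j,
     -0.84677811+11.7910939j,  -0.84677811-11.7910939j,
     -0.15001959+3.1577398j,   -0.15001959-3.1577398j,
     -0.14073526+2.60978661j,  -0.14073526-2.60978661j,
     -0.89432965+1.06440904j,  -0.89432965-1.06440904j]
p = [-1.26172189+18.71400182j, -1.26172189-18.71400182j,
     -0.42125296+16.04217932j, -0.42125296-16.04217932j,
     -1.88894857+14.52668064j, -1.88894857-14.52668064j,
     -0.17624191+6.42823571j,  -0.17624191-6.42823571j,
     -0.08126954+3.12803462j,  -0.08126954-3.12803462j,
     -0.33177813+2.74361234j,  -0.33177813-2.74361234j,
     -1.01199783+0.j,          -0.20377166+0.j]
k = -2608.840762444897

# Evaluate the analog frequency response over 0.1-10 Hz.
f = np.logspace(-1, 1, 500)                      # Hz
w, h = signal.freqs_zpk(z, p, k, worN=2*np.pi*f) # worN in rad/s
mag_db = 20*np.log10(np.abs(h))
```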
Using this model to predict the closed-loop gain slightly underestimates the gain peaking observed at about 1 Hz, so I refined the fit to better model the small gain and phase wiggles at 1.4 Hz:
z = [-4.27268475+16.14989614j, -4.27268475-16.14989614j,
-0.84677811+11.7910939j , -0.84677811-11.7910939j ,
-0.15001959 +3.1577398j , -0.15001959 -3.1577398j ,
-0.14073526 +2.60978661j, -0.14073526 -2.60978661j,
-0.89432965 +1.06440904j, -0.89432965 -1.06440904j,
-0.55930335 +8.32753387j, -0.55930335 -8.32753387j,
-0.13681604 +6.53617672j, -0.13681604 -6.53617672j,
-0.75698522 +6.05299388j, -0.75698522 -6.05299388j]
p = [-1.26172189+18.71400182j, -1.26172189-18.71400182j,
-0.42125296+16.04217932j, -0.42125296-16.04217932j,
-1.88894857+14.52668064j, -1.88894857-14.52668064j,
-0.17624191 +6.42823571j, -0.17624191 -6.42823571j,
-0.08126954 +3.12803462j, -0.08126954 -3.12803462j,
-0.33177813 +2.74361234j, -0.33177813 -2.74361234j,
-1.01199783 +0.j , -0.20377166 +0.j ,
-0.53443931 +8.46054777j, -0.53443931 -8.46054777j,
-0.12110921 +6.58734716j, -0.12110921 -6.58734716j,
-0.59423727 +5.96287439j, -0.59423727 -5.96287439j]
k = -2557.795730924652
This reproduces the closed-loop transfer function pretty well, so we can be confident that it will model the performance of a new controller design well. We'll include the refined fit in the model.
Here's a comparison of the fits from the previous measurement and from today's new measurement. They are quite different, and that explains why the previous controller design failed.
Based on the measured CHARD_Y plant, I designed a new controller that would give us more suppression at 2.6 Hz and avoid gain peaking at 1 Hz. This comes at the price of about 10 times larger noise injection at 10 Hz and above. This might be too much at 10-15 Hz, but is OK above 15-20 Hz. Hopefully, once the new controller is engaged and we have a more stable CHARD_Y, we will be able to fine-tune the A2L to win back some of the noise coupling at frequencies below 20 Hz.
Comparison of the open-loop gain in the same three configurations
Comparison of the loop suppression in the same three configurations
The new controller can be engaged with the nominal gain of 60, and it is stable up to an increased gain of 180. Even with a gain of 60, it completely removes the gain peaking at 1 Hz and provides 3 dB of suppression at 2.6 Hz.
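The suppression and peaking numbers follow from the usual loop algebra: with open-loop gain G, residual motion is scaled by the sensitivity 1/(1+G). A minimal sketch of computing peaking and suppression from a loop model, using a hypothetical toy G rather than the actual CHARD_Y open-loop gain:

```python
import numpy as np

# Hypothetical toy open-loop gain G(s) = k / (s * (s/w1 + 1)),
# standing in for the real CHARD_Y loop (fitted plant times controller).
f = np.logspace(-1, 1, 1000)                  # Hz
w = 2*np.pi*f
G = 50.0 / (1j*w * (1j*w/(2*np.pi*5) + 1))

# Closed-loop sensitivity: disturbances are suppressed by S = 1/(1+G).
S = 1.0 / (1.0 + G)
sup_db = 20*np.log10(np.abs(S))

# Gain peaking = maximum of |S| (in dB); suppression read off at 2.6 Hz.
peaking_db = sup_db.max()
i26 = np.argmin(np.abs(f - 2.6))
print(f"peaking {peaking_db:.1f} dB, suppression at 2.6 Hz {sup_db[i26]:.1f} dB")
```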
Mon Jul 31 10:11:03 2023 INFO: Fill completed in 10min 59secs
Gerardo confirmed a good fill curbside
We've recovered NLN and Observing at 18:05 and 18:26 respectively, fully automated, including an initial alignment!
The SQZer lost lock at 19:07 and kicked us out of observing, it relocked at 19:10 and we went back to observing.
Lockloss @ 05:50 UTC, caused by Dolphin glitch.
First event I noticed was HEPI HAM1 watchdog tripping, then seeing IOP DACKILL tripped for iopsusb123, iopsush2a, iopsush34, and iopsush56 and a very red CDS overview (attached). Called Dave and we're now working on recovery.
EDIT: Attached CDS overview screenshot at time of glitch.
At first glance Dave believes the glitch originated from the OMC model, causing everything else to trip. He's isolating it from the Dolphin network and restarting it now.
Ryan S, Dave:
After a diag reset, which cleared a bunch of cached IPC errors, we were left with:
1. Every receiver of IPCs originating from h1omc0 was continuously bad, including those at the end stations.
2. The IOP DACKILLs were permanently asserted on h1sus[b123, h2a, h34, h56].
So it looks like the IX Dolphin card on h1omc0 has gone offline and caused a glitch which took down the corner station SUS listed above.
Recovery:
Log into h1omc0, check that it can see its IO Chassis [it can], and see if the dmesg logs show anything from this time [they don't].
Fence h1omc0 from Dolphin and reboot.
When h1omc0 came back and its models restarted, all the outstanding IPC receive errors cleared.
On to the SUS front ends. For each, we safed the SUS, bypassed the SEI SWWD receivers, stopped all the models, then started all the models. When the IOP was running again, I unbypassed the corresponding SEI SWWDs.
This worked well for h1susb123, h1sush2a, and h1sush34. It did not work for h1sush56: the IOP model failed to restart.
I stopped the partially started h1sush56 models, checked the IO Chassis was visible [it was], fenced it from Dolphin and rebooted.
As h1sush56 came back from reboot, we saw a lot of IPC flashes on various systems. Previously I had seen one or two flashes, but h1sush56 flashed the IPCs for many seconds until its IOP model got going. From that point onwards the Dolphin network was good, with no new IPC errors.
Logging for Brina, who collected a bunch of relevant times to compare sqz vs. no-sqz at different OM2 settings. We aim to do some quantum noise budgeting with these times, to see whether/how much quantum noise contributes between 50-150 Hz, and to use the no-sqz data to look at IFO output losses for different OM2 settings. Looking at Craig's recent correlated noise budget, LHO:71333, hopefully we can understand what's going on with quantum noise at ~30-200 Hz.
| Date | OM2 temp | SQZ | gps_start | gps_stop | UTC | Notes / alog |
|---|---|---|---|---|---|---|
| 2023/06/27 | ~56 C | no-sqz | 1371910278 | 1371910578 | 14:11 to 14:16 | 1st hot OM2, LHO:70849 |
| | | sqz | 1371910698 | 1371910698 | 14:18 to 14:22 | |
| 2023/06/28 | | no-sqz | 1372017274 | 1372021147 | 19:54:16 - 20:58:49 | LHO:70930, 1-hour no-sqz, hot OM2, xcorr |
| | | sqz | 1372042818 | 1372046418 | 3:00 - 4:00 (6/29) | |
| 2023/07/13 | ~22 C | no-sqz | 1373320175 | 1373320818 | 21:49:17 - 22:00:00 | cold OM2, LHO:71302, ifo alignment tests |
| | | sqz | 1373322138 | 1373322738 | 22:22 - 22:32 | sqz on, AS 36 Q yaw 50,000 |
| 2023/07/19 | ~47 C | no-sqz | 1373812338 | 1373812938 | 14:32 - 14:42 | warm OM2 (during pump ISS failure, 71497) |
| | | sqz | 1373813118 | 1373813118 | 14:45 - 15:00 | (should be more sqz times available) |
| 2023/07/19 | ~57 C | no-sqz | 1373839430 | 1373840398 | 22:03:32 - 22:19:40 | 2nd hot OM2, LHO:71518 |
| | | sqz | 1373843778 | 1373844498 | 23:16 - 23:28 | |
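For anyone reusing these times: GPS seconds convert to UTC by offsetting from the GPS epoch (1980-01-06) and subtracting the GPS-UTC leap-second count, which is 18 s for all of the 2023 times above. A minimal sketch without lal/astropy:

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset, valid since 2017; check before reusing

def gps_to_utc(gps_seconds: int) -> datetime:
    """Convert GPS seconds to UTC, assuming a fixed 18 s leap-second offset."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)

# First row of the table above:
print(gps_to_utc(1371910278))  # → 2023-06-27 14:11:00+00:00
```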
So far I've just started comparing darm spectra (using GDS-CALIB_STRAIN_NOLINES) between the following times, some from the above table:
-- 'cold OM2' uses no-sqz = 1373320175 - 1373320818, fds = 1373322138 - 1373322738
-- 'warm OM2' uses no-sqz = 1373812338 - 1373812938, fds = 1373813118 - 1373814018
-- 'hot OM2 3' uses the recent hot OM2 no-sqz = 1374004600 - 1374005215 (LHO:71591), and fds = 1373984483 - 1373985483 from when range was ~150 this morning. Note FC detuning is -25 Hz for this trace, and many LSC FFs have been retuned here, compared to previous traces.
To start, I compared both the change in darm with squeezing for different OM2 settings (sqz compared to OM2), and un-squeezed vs. squeezed darm for different OM2 settings (OM2_compared_to_sqz).
From this second pdf, OM2_compared_to_sqz, a few quick things I notice:
- Hot OM2, >1 kHz (in the shot-noise-limited region at high frequencies), darm looks a bit worse; this is consistent with the decrease in optical gain at hot OM2.
- Hot OM2, ~100 Hz: squeezing seems to give some noticeable improvement around and just above 100 Hz (??). This improvement is not that clear without squeezing. Unclear to me if this is related to FC detuning or OM2, but I wonder if this ~100 Hz noise is in part related to quantum noise.
- Hot OM2, < 100 Hz, low-frequency noise looks much better; the LSC FF tuning seems very effective at improving noise in this configuration.
- At warm OM2, maybe low-frequencies < 100 Hz see some scatter shelves (?), but I might just be seeing things, not totally clear.
Next I'll try to compare this with the quantum noise budget without squeezing, starting by comparing the no-sqz traces to estimate IFO output losses from shot noise, for different OM2s (as Sheila suggested). Doing this will require independent knowledge of some IFO parameters like the readout angle and the SRCL detuning. Once it makes sense with no-sqz darm, I'll continue working on making sense of both the full and the semi-classical quantum noise budgets with squeezing injected, and budgeting out the quantum noise contributions to get a sense of whether low-frequency quantum noise (e.g. from sqz misrotations) is plausibly showing up in darm, and what knobs we could turn if so.
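The comparisons above amount to computing an ASD over each GPS stretch and overplotting. A minimal offline sketch with synthetic stand-in data (a real version would fetch H1:GDS-CALIB_STRAIN_NOLINES over each GPS interval, e.g. with gwpy; the sample rate, durations, and noise levels here are placeholders):

```python
import numpy as np
from scipy.signal import welch

fs = 4096  # Hz, stand-in sample rate
rng = np.random.default_rng(0)

# Two synthetic 600 s "darm" stretches standing in for no-sqz / sqz data;
# the sqz stretch pretends squeezing reduces broadband noise by 20%.
nosqz = rng.normal(scale=1.0, size=600*fs)
sqz   = rng.normal(scale=0.8, size=600*fs)

def asd(x):
    # Welch PSD -> ASD; a median average would be more glitch-robust,
    # but a mean is fine for a sketch.
    f, pxx = welch(x, fs=fs, nperseg=16*fs)
    return f, np.sqrt(pxx)

f, a_nosqz = asd(nosqz)
_, a_sqz   = asd(sqz)

# dB difference of sqz relative to no-sqz, as in the OM2_compared_to_sqz plots.
diff_db = 20*np.log10(a_sqz/a_nosqz)
```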
Edit: indeed the cold OM2 time had a glitch. Updated plots with the following cold OM2 no-sqz time (different day, early in the lock, but it looks more reasonable).
Updated plots: effect of sqz at different OM2s, and sqz vs. no-sqz at different OM2s.
Times used:
cold OM2, no sqz, gps start = 1373371323 (2023-07-14, 12:01:45 UTC)
cold OM2, no sqz, gps stop = 1373372668 (2023-07-14, 12:24:10 UTC)
Injections finished at 14:39UTC
Inlock charge measurements start at 14:45UTC
The in-lock charge measurements are hitting an ezca connection error: can't connect to H1:SUS-ITMX_L3_DRIVEALIGN_L2L. I've tried OP STOP and EXEC as suggested by the log, but it gets the same error every time. The in-lock charge measurements could not be completed.