Tony, Dave:
The EDC is consistently not connecting to 17 of the 27 channels served by the CP1 overfill IOC. This started soon after the upgraded zotvac0 was installed. We ran the original IOC to see if the new machine was the issue, but the EDC still only connected to 10 of the 27 channels in H1EPICS_CP1.ini. It is the same block of 10 channels which connect, and there is no obvious reason why these 10 in particular (we checked their position in the file and the PV types). So we are running the new zotvac0 overnight.
OPS: we expect the EDC disconnect channel count to remain at 17 or hopefully eventually drop to zero.
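For reference, here is a minimal sketch of how one could check which channels from the ini file actually connect. It assumes pyepics is available on a CDS workstation, that channel names appear as [SECTION] headers in H1EPICS_CP1.ini (with a [default] block to skip), and the path below is only a guess:

```python
# Sketch only: count connected/disconnected EDC channels for the CP1 overfill IOC.
import re
import epics

ini = '/opt/rtcds/lho/h1/chans/daq/H1EPICS_CP1.ini'   # assumed location
names = []
with open(ini) as f:
    for line in f:
        m = re.match(r'\[(?!default\])(.+)\]', line.strip())
        if m:
            names.append(m.group(1))

pvs = [epics.PV(name) for name in names]   # start CA connections in parallel
epics.ca.pend_event(2.0)                   # give Channel Access a moment to connect
dead = [pv.pvname for pv in pvs if not pv.connected]
print(f'{len(dead)} of {len(names)} channels not connected:')
print('\n'.join(dead))
```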
Vicky and I noticed that there seems to be an almost shelf-like structure with an upper frequency of about 55 Hz, that was visible when we had just acquired lock. I had a quick look at the coherence between DARM (Cal-deltal_external) and various channels (right side of attachment), and it's clear that there is a significant amount of coherence with LSC channels (top left frame of the right side of the attachment).
Mostly to see if I could, I ran a quick training of the LSC NonSENS subtraction, and turned it on for a minute or so just before we went to Observing. The effect of the nonsens cleaning (with freshly trained LSC MICH and SRCL subtraction) at the beginning of the lock is on the left side of the attachment, and it looks roughly like the subtraction is flattening out that shelf-like structure.
An hour or so into the lock, all of that coherence is completely gone (as Gabriele pointed out earlier today). My takeaway is that our LSC FF is not well-tuned for the first part of the thermalization of the lock. That may be something that we just live with, but I thought it was interesting to note that indeed part of the reason our range increases over the first hour or so of a lock is that we're thermalizing into the coupling function that the FF is tuned for.
TITLE: 10/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: One lock loss during the shift, which happened during commissioning time. The IFO seemed to be very misaligned after this lock loss and I had to run an initial alignment. After that it was straight up to low noise.
The SEI_ENV node has been going to earthquake mode due to the high microseism, so I had to increase the threshold (alog73569).
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:15 | FAC | Karen | MY | n | Tech clean | 17:04 |
| 16:47 | FAC | Chris | Garb area | n | Peeking at top gun equipment | 17:01 |
| 16:52 | FAC | Kim | Garb area | n | Grab garb | 17:01 |
| 17:44 | FAC | Kim | MX | n | Tech clean | 18:45 |
| 19:29 | SQZ | Vicky, Naoki | LVEA-SQZ | Local | Homodyne meas. | 19:57 |
| 19:35 | - | Richard | LVEA | n | Checking on equipment | 19:39 |
| 19:36 | TCS | Camilla | CR | N | CO2 Tests | 22:51 |
| 20:08 | ISC | Keita | LVEA | n | Swap OM2 heater cable | 20:21 |
| 20:39 | ISC | Camilla,Jenne | CR | N | LSC FF off tests from 20:07UTC | 22:39 |
| 20:52 | VAC | Gerardo | LVEA | n | Checking on AI pump 8 and the mega cleanroom | 21:35 |
This afternoon we stepped CO2X up from 1.7W to 2.5W input, with steps of +0.2W requested every 20 minutes. We had no squeezing during this time and no LSC feedforward during the last steps. Attached are scopes, DARM (the increased noise at 20-50Hz is a glitch, not caused by the CO2 change), and DARM at high frequency.
Circulating power in the X-arm increased (plot) and high-frequency noise decreased (plot) with increased CO2X, but it's hard to see any other effects.
Are CO2 Beams still well aligned to IFO? Unsure
Looking at when the CO2 beam is first turned on at POWER_25W (plot) compared to the IFO beam when it has just got to 60W input (plot):
Last adjusted the CO2 alignment in 68391. Would like to check alignment in lock after CO2 beams have been on for a while as we have previously seen the CO2 alignment drift as the mirrors warm up.
TITLE: 10/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 5min avg
Primary useism: 0.19 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY:
We've been Locked for 21 minutes and are currently Commissioning. Wind is low and useism is a bit high.
With high primary and secondary useism the SEI_ENV node has been transitioning to earthquake mode based on higher peakmon (H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON) signals, but not due to an actual earthquake. I've raised this threshold from 600 to 850 for now. It looks like the waves should calm down in about a day so I'll plan to revert this tomorrow.
TJ reverted this threshold earlier today. Around the same time (11:22pst) I made a change to the filter used for the peakmon calibration, that should reduce the confusion between microseism and earthquakes.
First picture shows 3 ground spectra: red is a time of higher microseism at more normal frequencies on the 16th, blue is a spectrum taken when both the primary and secondary microseism were elevated by a large mass of wave activity right off the coast, and pink is a largish EQ from a couple of days ago. On the blue spectrum the 0.1-0.3Hz peak has spread down to 80 or 90mHz and was bleeding into the band peakmon uses.
Second image compares the bandpass peakmon was using to the one I installed today. Blue is the new filter, a cheby1 lowpass with a zpk([0],[.015],1) high pass; green is the old total filter. The problem this week with the old filter was the higher primary microseism peak at 50-60mHz and the broader peak around 0.1-0.3Hz extending down to 80mHz. The blue filter should greatly reduce the chance of that secondary peak confusing the SEI guardians. Rise time for this filter is important, and the blue filter has some lower-frequency poles than the green, but the foton step responses for the filters aren't dramatically different. The new filter takes a couple of seconds longer to respond, but that should be easy to tweak.
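For anyone who wants to play with this offline, here is a rough scipy sketch of the new band filter and its step response. Only the zpk([0],[0.015],1) high pass is quoted from the foton design; the cheby1 order, ripple, and corner (and the sample rate) are placeholders:

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 256                           # assumed sample rate of the ground STS peakmon path
t = np.arange(0, 120, 1 / fs)
step = np.ones_like(t)

# foton zpk([0],[0.015],1): zero at 0 Hz, pole at 0.015 Hz (first-order high pass)
z, p, k = [0.0], [-2 * np.pi * 0.015], 1.0
hp_sos = signal.zpk2sos(*signal.bilinear_zpk(z, p, k, fs))

# placeholder low pass: 4th-order Chebyshev type 1, 1 dB ripple, 0.3 Hz corner
lp_sos = signal.cheby1(4, 1, 0.3, 'lowpass', fs=fs, output='sos')

band_sos = np.vstack([hp_sos, lp_sos])
plt.plot(t, signal.sosfilt(band_sos, step))
plt.xlabel('time (s)'); plt.ylabel('step response')
plt.savefig('peakmon_band_step.png')
```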
We've already had one successful earthquake transition about an hour after I changed the filter. Wave forecasts for next week suggest more high microseism starting this next Tues-Wed.
The wet well at the LSB was found to be nearly full last Thursday, 10/12. ACE was called out for an emergency pumping in which 1800 gallons were removed from the tank. The following Monday Campbell and Company arrived to diagnose both pumps. All floats were found to be in good working order. Following that, pump 2 was found to be non-operational, likely from failed windings. Given that pump 1 on its own was unable to keep the tank level low and a crew was scheduled to go into the tank anyway, I elected to have both pumps replaced with new units from stock we had on the shelf. The runtime and cycles of each pump at the time of replacement are attached in the photos below. T. Guidry, C. Soike
Despite re-routing the newly replaced cable higher (where possible), rodents found the low point(s) and damaged it again, taking Chiller 1 down. Additional rodent deterrent items have been ordered, and plans are in place to resplice and "armor" the cable. T. Guidry
After Gabriele's 73546, Jenne and I looked at LSC-DARM_IN1 before and after the feedforward was added. You can see the new FF (in particular SRCL) has more noise from 5 to 9Hz; plot attached of only old vs new FF. The larger peaks at 0.05 and 0.1Hz are caused by increased microseism; they are not present on Oct 14th. No other noise down to 1e-4Hz.
We turned off the MICH FF only from 20:07UTC to 20:21UTC, and then the SRCL FF only from 20:22UTC to 20:35UTC. Both FF were off from 20:35UTC until 20:42UTC, when we had a lockloss. Plot attached of LSC-DARM_IN1 in these configurations.
As part of thinking about why a little bit of excess motion around a few Hz can cause nonstationarity in the GW band, I had a cursory look at the filters in place for the ETM R0 tracking.
I have not yet re-looked to see why we made different choices for the angular vs length loops, but the top frame of the attached plot shows the magnitude of the control filters for ETMY's R0 tracking for pitch and length. The bottom frame shows the rough magnitude of the open loop gain of the length loop, using a filter that was in the filter bank called 'plant', and including the gain of 20 that is in the filter bank.
So, it looks like the angular loops (I'm assuming the yaw filters, which are named the same, are actually the same as the pitch filters) emphasize this region where we've got a bit of excess motion in DARM due to the LSC FF, but the length loop does not. We should give this some thought, but consider adding some emphasis to the length tracking filter to have the length tracking work up to ~5 Hz or so.
A parallel path forward that we should also consider is changing the highpass in the SRCL FF filter bank to be more severe (thus sacrificing some phase and efficacy around 10 Hz, but hopefully an overall win above 20 Hz); however, this implies doing the full remeasurement and refitting of the SRCL FF. We can work on this later this week, or whenever we next have some commissioning time.
Here's a design for a more aggressive high pass filter, at the price of 20 degrees of phase rotation at 10 Hz. I think we might need a retuning of the SRCLFF after implementing it. The new high pass is saved into FM10 of SRCLFF1 (not yet loaded).
The noise reinjection between 2 and 4 Hz should be about 4 times lower if we manage to retune the SRCLFF filter without increasing the low frequency gain.
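For context, a small scipy sketch of the kind of trade-off check involved (this is not the actual FM10 design, which lives in foton; the orders and corners below are placeholders): read off the phase cost at 10 Hz and the attenuation in the 2-4 Hz band for an old-style versus a more aggressive high pass.

```python
import numpy as np
from scipy import signal

fs = 16384   # assumed sample rate of the LSC feedforward path

old_hp = signal.butter(2, 5, 'highpass', fs=fs, output='zpk')        # stand-in "old"
new_hp = signal.ellip(4, 1, 40, 8, 'highpass', fs=fs, output='zpk')  # stand-in "new"

for name, zpk in [('old', old_hp), ('new', new_hp)]:
    # evaluate the response at 3 Hz (reinjection band) and 10 Hz (phase cost)
    w, h = signal.freqz_zpk(*zpk, worN=[3, 10], fs=fs)
    print(f'{name}: |H(3 Hz)| = {abs(h[0]):.3f}, '
          f'phase(10 Hz) = {np.degrees(np.angle(h[1])):.1f} deg')
```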
No obvious cause, but there were TCS CO2 steps happening at the time, and our funny LSC-DARM wiggle appeared ~100ms before the lockloss.
I looked at the quad L2 & L3 MASTER_OUT channels as well as the ETMX L2 NOISEMON and FASTIMON channels that you looked at for the 10/18 01:53UTC lockloss in 73552, and noticed a couple of things:
Comparing this to the two recent locklosses that had pre-lockloss glitches, 10/15 08:53UTC and 10/18 01:53UTC (73552): neither had been caught by the other quads or ASC-AS_A. As for which channel saw the glitch first, for the 10/15 LL I could not tell (I had said in 73552 that it hit ETMX L3 first, but I think that was wrong), and for the 10/18 01:53 LL we previously found that either DARM or the DCPDs saw it first.
| LL | Time before LL (s) | Seen First By | Also Seen By |
|---|---|---|---|
| 10/15 08:53UTC | ~0.5 | either DARM, DCPDs, ETMX L3 | NOISEMON, FASTIMON, ETMX_L2 |
| 10/18 01:53UTC (73552) | ~0.17 | DARM or DCPDs | NOISEMON, FASTIMON, ETMX_L2 |
| 10/18 20:42UTC | ~0.1 | ETMX L3 | NOISEMON, FASTIMON, ETMX_L2, ITMs_L2, ETMY_L2, ASC-AS_A |
Peter, Sidd
Counts_to_volts = 40/2^16
Transimpedance = 20,000
Responsivity = 0.35
Oplev_from_ETMX = 5.6 m
Diode_area = 10x10 mm^2
Angle_from_test_mass = 7.6 degrees
Power_scatter = Counts x Counts_to_volts x (1/Responsivity) x (1/Transimpedance)
Solid_angle = diode_area/(Oplev_from_ETMX)^2
BRDF = Power_scatter/(Arms_Power x solid_angle x cos(angle_from_test_mass) )
I compared the ETMX Oplev BRDF between the 75 W and 60 W configuration. The details are below, the first three days are when LHO operated with 75 W and the last three days are with 60 W input power. I do not find any significant change in the ETMX Oplev BRDF.
| Date | Power (kW) | Counts | Power_Scatter (µW) | BRDF |
|---|---|---|---|---|
| 2023-06-05 | 430.7 | 4960 | 432.4 | 3.18e-4 |
| 2023-06-07 | 439.2 | 4964 | 432.8 | 3.12e-4 |
| 2023-06-14 | 434.24 | 5016 | 437.3 | 3.19e-4 |
| 2023-07-28 | 370.1 | 4243 | 370 | 3.16e-4 |
| 2023-08-10 | 364 | 4177 | 364.2 | 3.17e-4 |
| 2023-08-13 | 368.4 | 4206 | 366 | 3.14e-4 |
The calculation is in the notebook
Just to clarify, the Counts term in the Power_scatter is the difference in the Oplev counts between locked and unlocked ifo.
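To make the arithmetic explicit, here is a short check of the first table row (2023-06-05) using the formulas and constants above:

```python
import numpy as np

counts_to_volts = 40 / 2**16            # V per ADC count
transimpedance = 20_000                 # V/A
responsivity = 0.35                     # A/W
oplev_from_etmx = 5.6                   # m
diode_area = 10e-3 * 10e-3              # m^2
angle_from_test_mass = np.radians(7.6)

counts = 4960                           # locked minus unlocked oplev counts
arm_power = 430.7e3                     # W circulating in the X arm

power_scatter = counts * counts_to_volts / responsivity / transimpedance
solid_angle = diode_area / oplev_from_etmx**2
brdf = power_scatter / (arm_power * solid_angle * np.cos(angle_from_test_mass))

print(f'Power_scatter = {power_scatter * 1e6:.1f} uW')   # ~432 uW, matching the table
print(f'BRDF = {brdf:.2e}')                              # ~3.2e-4, matching the table
```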
Starting around 11am PDT the picket fence trend plot on nuc5 started restarting itself. This was because the pnsndata service (providing stations OTR on the Washington peninsula and LAIR at Eugene, Oregon) became intermittent. To test this I took these stations out and re-ran picket fence with no issues. Later it looked like pnsndata was more stable, so I added it back in. This did not last, and at 12:40 I removed it again from our client.
Note that with the west coast stations removed, the map has centered itself on Idaho.
I'll continue to monitor pnsndata and add it back when it becomes stable.
We checked with Renate Hartog (the PNSN network manager) through a series of calls and emails. She's gotten everything going now, thanks Renate!
Turns out that they had a system re-boot at about that time and people forgot to restart a (virtual) network interface. As of 17:13 pacific, the issue has been resolved and we can now connect to the PNSN.
Hopefully things will be stable and running again soon. Thanks for the notification about this Dave.
Edgard
Followed the usual instructions in the wiki, ran a broadband first, then a simulines.
Start:
PDT: 2023-10-18 12:07:24.634154 PDT
UTC: 2023-10-18 19:07:24.634154 UTC
GPS: 1381691262.634154
End:
PDT: 2023-10-18 12:30:22.282758 PDT
UTC: 2023-10-18 19:30:22.282758 UTC
GPS: 1381692640.282758
Files:
2023-10-18 19:30:21,981 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231018T190729Z.hdf5
2023-10-18 19:30:22,019 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20231018T190729Z.hdf5
2023-10-18 19:30:22,043 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20231018T190729Z.hdf5
2023-10-18 19:30:22,068 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20231018T190729Z.hdf5
2023-10-18 19:30:22,094 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20231018T190729Z.hdf5
Current plans include:
Using two periods of quiet time during the last couple of days (1381575618 + 3600s, 1381550418 + 3600s) I computed the usual coherences:
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381550418/
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381575618/
The most interesting observation is that, for the first time as far as I can remember, there is no coherence above threshold with any channels for wide bands in the low frequency range, notably between 20 and 30 Hz, and also for many bands above 50 Hz. I'll assume for now that most of the noise above ~50 Hz is explained by thermal noise and quantum noise, and focus on the low frequency range (<50 Hz).
Looking at the PSDs for the two hour-long times, the noise below 50 Hz seems to be quite repeatable, and follows closely a 1/f^4 slope. Looking at a spectrogram (especially when whitened with the median), one can see that there is still some non-stationary noise, although not very large. So it seems to me that the noise below ~50 Hz is made up of some stationary 1/f^4 unknown noise (not coherent with any of the 4000+ auxiliary channels we record) and some non-stationary noise. This is not hard evidence, but an interesting observation.
Concerning the non-stationary noise, I think there is evidence that it's correlated with the DARM low frequency RMS. I computed the GDS-CALIB RMS between 20 and 50 Hz (whitened to the median to weight equally the frequency bins even though the PSD has a steep slope), and the LSC_DARM_IN1 RMS between 2.5 and 3.5 Hz (I tried a few different bands and this is the best). There is a clear correlation between the two RMS, as shown in a scatter plot, where every dot is the RMS computed over 5 seconds of data, using a spectrogram.
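A sketch of how this band-limited-RMS comparison can be made with gwpy, assuming the usual channel names; the 5 s stride, the two bands, and the median whitening follow the description above:

```python
import numpy as np
import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeries

start = 1381550418          # one of the quiet hours above
strain = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + 3600)
darm = TimeSeries.get('H1:LSC-DARM_IN1_DQ', start, start + 3600)

def band_rms(ts, f_lo, f_hi, stride=5, whiten=False):
    """Band-limited RMS for each `stride`-second spectrogram segment."""
    spec = ts.spectrogram(stride)           # PSD spectrogram, one row per stride
    if whiten:
        spec = spec.ratio('median')         # whiten each bin to its median
    spec = spec.crop_frequencies(f_lo, f_hi)
    return np.sqrt(spec.value.sum(axis=1) * spec.df.value)

rms_hf = band_rms(strain, 20, 50, whiten=True)   # GW-band RMS, median-whitened
rms_lf = band_rms(darm, 2.5, 3.5)                # low-frequency DARM RMS

plt.scatter(rms_lf, rms_hf, s=4)                 # one dot per 5 s of data
plt.xlabel('LSC-DARM_IN1 RMS, 2.5-3.5 Hz')
plt.ylabel('GDS-CALIB_STRAIN RMS, 20-50 Hz (whitened)')
plt.savefig('rms_scatter.png')
```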
DARM low frequency (< 4 Hz) is highly coherent with ETMX M0 and R0 L damping signals. This might just be recoil from the LSC drive, but it might be worth trying to reduce the L damping gain and see if DARM RMS improves
Bicoherence is also showing that the noise between 15 and 30 Hz is modulated according to the main peaks visible in DARM at low frequency.
We might be circling back to the point where we need to reconsider/remeasure our DAC noise, since it has the correct slope. Linking two different (and disagreeing) projections from the last time we thought about this. However, Craig's projection and the noisemon measurement did not agree, something we never resolved.
Projection from Craig: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=68489
Measurement from noisemons: https://alog.ligo-wa.caltech.edu/aLOG/uploads/68382_20230403203223_lho_pum_dac_noisebudget.pdf
I updated the noisemon projections for PUM DAC noise, and fixed an error in their calibration for the noise budget. They now agree reasonably well with the estimates Craig made by switching coil driver states. From this we can conclude that PUM DAC noise is not close to being a limiting noise in DARM at present.
To Chris' point above -- we note that the PUMs are using 20-bit DACs, and we are NOT using any "DAC Dither" (see aLOGs motivating why we do *not* use them at LHO: LHO:68428 and LHO:65807; namely that [in the little testing that we've done] we've seen no improvement, so we decided they weren't worth the extra complexity and maintenance).
If at some point there’s a need to test DAC dithers again, please look at either (1) noisemon coherence with the DAC request signal, or (2) noisemon spectra with a bandstop in the DAC request to reveal the DAC noise floor. Without one of those measures, the noisemons are usually not informative, because the DAC noise is buried under the DAC request.
Attached is a revised PUM DAC noisemon projection, with one more calibration fix that increases the noise estimate below 20 Hz (although it remains below DARM).
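If someone does revisit this, option (1) might look something like the sketch below; the noisemon and DAC-request channel names are placeholders and should be checked against the SUS model before use:

```python
from gwpy.timeseries import TimeSeriesDict

start = 1381622418                     # placeholder: pick any quiet locked stretch
stop = start + 600

dac_request = 'H1:SUS-ETMX_L2_MASTER_OUT_UL_DQ'    # placeholder channel name
noisemon = 'H1:SUS-ETMX_L2_NOISEMON_UL_OUT_DQ'     # placeholder channel name

data = TimeSeriesDict.get([dac_request, noisemon], start, stop)
coh = data[noisemon].coherence(data[dac_request], fftlength=32, overlap=16)

# Where the coherence is ~1, the noisemon is just re-measuring the request and
# tells us nothing about the DAC noise floor underneath it.
print(coh.crop(10, 100).max())
```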
Lockloss @ 10/18 01:53UTC
DARM_IN1 (attachment1) saw some sort of glitch (left data tip) ~170ms before DARM registered the lockloss starting (right data tip).
ETMX L2 and L3 also saw this glitch (attachment2). Seems like the glitch hit DARM_IN1, then ETMX L3, then ETMX L2?
I noticed something similar with the 10/15 08:53UTC LL (attachment3), but in that case it looked like the movement was first seen in ETMX L3, then DARM_IN1 and ETMX L2. In that instance the glitch also happened a full 0.5s before the lockloss.
03:25 Back Observing
Very interesting find Oli! I've attached some more plots to hopefully help us narrow down where this is coming from. The L2 ETMX fastimon and noisemon channels see this same glitch, but the L2 OSEM sensor inputs on all of the quads don't. This makes me think that it could possibly be coming from the OMC DCPD signals, the error signals for LSC-DARM. Tough to say based on data rates, but the second attachment also maybe shows the DCPD signal moving first, but take that with a grain of salt.
To see if the range changes seen when we turned up the ETM RH from 1.0W/segment to 1.2W/segment in 73445 were real, this afternoon we are reversing this change.
At 20:40UTC, Ryan took us out of Observing to reduce both H1:TCS-ETM{X,Y}_RH_SET{UPPER,LOWER}POWER to 1.1W, we'll plan to reduce all the way to 1.0W in 2 hours.
I made the second step down from 1.1 to 1.0 at 22:41UTC for H1:TCS-ETM{X,Y}_RH_SET{UPPER,LOWER}POWER.
Unsure if this made any difference to the range. Range looks improved after the step down but the wind also dropped during this change, ndscopes attached. No large change seen in DARM.
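For the record, the stepping itself is just a set of EPICS writes to the setpoints listed above; a minimal pyepics sketch, assuming write access from a control-room workstation:

```python
# Step both ETM ring heaters using the channel pattern quoted above.
import itertools
import epics

channels = [
    f'H1:TCS-ETM{arm}_RH_SET{seg}POWER'
    for arm, seg in itertools.product('XY', ('UPPER', 'LOWER'))
]

def set_rh(power_w):
    """Write the same per-segment power (in W) to both ETM ring heaters."""
    for ch in channels:
        epics.caput(ch, power_w, wait=True)

set_rh(1.1)   # first step down, 20:40 UTC
# ... wait ~2 hours for the thermal transient to settle ...
set_rh(1.0)   # second step down, 22:41 UTC
```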
To see if the OM2/Beckhoff coupling is a direct electronics coupling or not, we've done an A-B-A test while the fast shutter was closed (no meaningful light on the DCPDs).
State A (should be quiet): 2023 Oct/10 15:18:30 UTC - 16:48:00 UTC. The same as the last observing mode. No electrical connection from any pin of the Beckhoff cable to the OM2 heater driver chassis. Heater drive voltage is supplied by the portable voltage reference.
State B (might be noisy): 16:50:00 UTC - 18:21:00 UTC. The cable is directly connected to the OM2 heater driver chassis.
State A (should be quiet): 18:23:00- 19:19:30 UTC or so.
DetChar, please directly look at H1:OMC-DCPD_SUM_OUT_DQ to find combs.
It seems that even if the shutter is closed, once in a while a very small amount of light reaches the DCPDs (green and red arrows in the first attachment). One of them (red arrow) lasted a long time and we don't know what was going on there. One of the short glitches was caused by the BS being momentarily kicked (cyan arrow) and scattered light in HAM6 somehow reaching the DCPDs, but I couldn't find other glitches that exactly coincided with optics motion or the IMC locking/unlocking.
To give you a sense of how bad (or not) these glitches are, 2nd attachment shows the DCPD spectrum of a quiet time in the first State A period (green), strange glitchy period indicated by the red arrow in the first attachment (blue), a quiet time in State B (red) and during the observing time (black, not corrected for the loop).
FYI, right now we're back to State A (should be quiet). Next Tuesday I'll inject something to thermistors in chamber. BTW 785 was moved in front of the HAM6 rack though it's powered off and not connected to anything.
I checked H1:OMC-DCPD_SUM_OUT_DQ and don't see the comb in any of the three listed intervals (neither state A nor B). Tested with a couple of SFT lengths (900s and 1800s) in each case.
Since it seems that the coupling is NOT a direct electronics coupling from Beckhoff -> OM2 -> DCPD, we fully connected the Beckhoff cable to the OM2 heater driver chassis and locked the OMC to the shoulder with an X single bounce beam (~20mA DCPD_SUM, not 40mA like in the usual nominal low noise state). That way, if the Beckhoff is somehow coupling to the OMC PZT, it might cause visible combs in the DCPD.
We didn't see the comb in this configuration. See the 1st attachment: red is the shoulder lock and green is when the 1.66Hz comb was visible with the full IFO (the same time reported by Ansel in alog 73000), showing just the two largest peaks of the 1.66Hz harmonics visible in the green trace. (It seems that the 277.41Hz and 279.07Hz peaks are the 167th and 168th harmonics of 1.66Hz.) Anyway, because of the higher noise floor, even if the combs were there we couldn't have seen these peaks. We've had a different comb spacing since then (alog 73028), but anyway I don't see anything at around 280Hz. FYI I used 2048 FFTs for both; red is a single FFT and the green is an average of 6. This is w/o any normalization (like RIN).
In the top panel of 2nd attachment, red is the RIN of OMC-DCPD_SUM_OUT_DQ of the shoulder lock, blue and dark green are RIN of 2nd loop in- and out-of-loop sensor array. Magenta, cyan and blue green are the same set of signals when H1 was in observing last night. Bottom panel shows coherence between DCPD_SUM during the shoulder lock and ISS sensors as well as IMC_F, which just means that there's no coherence except for high kHz.
If you look at Georgia's length noise spectrum from 2019 (alog 47286), you'll see that it's not totally dissimilar to our 2nd plot top panel even though Georgia's measurement used dither lock data. Daniel points out that a low-Q peak at around 1000Hz is a mechanical resonance of OMC structure causing the real length noise.
Configurations: H1:IMC-PWR_IN ~25.2W. ISS 2nd loop is on. Single bounce X beam. DCPD_SUM peaked at about 38mA when the length offset was scanned, and the lock point was set to the middle (i.e. 19mA). DC pointing loops using AS WFS DC (DC3 and DC4) were on. OMC QPD loops were not on (they were enabled at first but were disabled by the guardian at some point before we started the measurement). We were in this state from Oct/17/2023 18:12:00 - 19:17:20 UTC.
BTW Beckhoff cable is still fully connected to the OM2 heater driver chassis. This is the first observation data with such configuration after Fil worked on the grounding of Beckhoff chassis (alog 73233).
Detchar, please find the comb in the obs mode data starting Oct/17/2023 22:33:40 UTC.
The comb indeed re-appeared after 22:33 UTC on 10/17. I've attached one of the Fscan daily spectrograms (1st figure); you can see it appear in the upper right corner, around 280 Hz as usual at the start of the lock stretch.
Two other notes:
Just to see if anything changes, I used the switchable breakout board at the back of the OM2 heater driver chassis to break the thermistor connections but kept the heater driver input coming from the Beckhoff. The only two pins that are conducting are pins 6 and 19.
That happened at around Oct/18/2023 20:18:00 to 20:19-something UTC when others were doing the commissioning measurements.
Detchar, please look at the data once the commissioning activities are over for today.
Because there was an elevated noise floor in the data from Oct/17/2023 18:12:00 mentioned in Keita's previous comment, there was some doubt as to whether the comb would have been visible even if it were present. To check this, we did a direct comparison with a slightly later time when the comb was definitely present & visible. The first figure shows an hour of OMC-DCPD_SUM_OUT_DQ data starting at UTC 00:00 on 10/18 (comparison time with visible comb). Blue and yellow points indicate the comb and its +/-1.235 Hz sidebands. The second figure shows the time period of interest starting 18:12 on 10/17, with identical averaging/plotting parameters (1800s SFTs with 50% overlap, no normalization applied so that amplitudes can be compared) and identical frequencies marked. If it were present with equivalent strength, it looks like the comb ought to have been visible in the time period of interest despite the elevated noise floor. So this supports the conclusion that the comb was *not* present in the 10/17 18:12 data.
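For anyone repeating this comparison, a gwpy sketch along these lines (channel name, comb spacing, and sideband offset as quoted in this thread; the harmonic indices printed are just the ones around 280 Hz):

```python
import numpy as np
from gwpy.timeseries import TimeSeries

start = 1381622418            # 2023-10-18 00:00 UTC, the hour with the comb visible
data = TimeSeries.get('H1:OMC-DCPD_SUM_OUT_DQ', start, start + 3600)

# 1800 s FFTs with 50% overlap, no normalization, as in the figures above
asd = data.asd(fftlength=1800, overlap=900)

# nominal spacing as quoted above; refine against the Fscan peak positions if needed
spacing, sideband = 1.66, 1.235            # comb spacing and sideband offset (Hz)
for n in range(165, 171):                  # harmonics in the ~274-282 Hz region
    for f in (n * spacing - sideband, n * spacing, n * spacing + sideband):
        idx = int(round(f / asd.df.value))
        print(f'n={n:3d}  {f:8.3f} Hz : {asd.value[idx]:.3e}')
```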
Following up, here's about 4 hours of DELTAL_EXTERNAL after Oct 18 22:00. So this is after Keita left only the heater driver input connected to the Beckhoff on Oct/18/2023 20:18:00. The comb is gone in this configuration.
Looks like zotvac0 has gone offline and the EDC is now disconnected from all 27 channels. We'll leave it like this overnight and work on it first thing tomorrow.