This will be a running log of activities as they occur for bringing CDS back after the planned outage over the weekend. Refer to Richard's alog for what happened over the weekend and the prep work.
We started the shift with lights, front gate operation, wifi, etc.
Filiberto, Ed, Rich

Installation is complete for the ITM ESD Driver for ITMX and ITMY. A short was found on the bias connection to ITMX (see the attached sketch to clear up the pin numbering of the legacy connector). The shield was cut and insulated on the 2 ft section of cable near the vacuum feedthrough for this connection. All nominally 24V connections were HIPOT tested to 500V, and the high-voltage bias connections were tested to 700V. An ADC (ADC07) was added to SUS-C5-14 to permit the ITM ESD readback channels to be implemented per the latest wiring diagram (D1500464). At the time of writing, no aspects of the installed system have been verified in situ; this is the next item on the checkout.

Some useful system constants (total for both drivers):
+/- 18VDC Nominal Quiescent Current -> +460mA, -420mA
+/- 24VDC Nominal Quiescent Current -> +40mA, -40mA
+/- 430VDC Nominal Quiescent Current -> +/- 5.6mA

Serial Numbers:
ITMY ESD Driver (D1600092) (installed in SUS-R6) -> S1600266
ITMX ESD Driver (D1600092) (installed in SUS-R5) -> S1600267
ITM PI AI Chassis (D1600077) (installed in SUS-C6 U23) -> S1600245
As noted by Dave B., we began the orderly shutdown of the site at 1500 local time on Friday. The power actually went out at 1715 and we were almost ready. Getting the vacuum system onto the generators did not go as smoothly as it could have:
1. Generator output was too low for the UPS to operate on. The UPS was looking for 115V and we were at 112V. We bypassed the UPS and ran the 24V DC supplies directly from the generator.
2. The GFI outlet on the EY generator would not function, so it was replaced.
By 1930 we were comfortable leaving the site.

Sat. 0800 Site was dark; all buildings, generators, and the vacuum system in good shape.
     1800 New switch at substation installed.
Sun. 0800 Site was dark; all buildings, generators, and the vacuum system in good shape.
     1200 Testing of switch complete (hiccups included).
     1300 Site power restored. Facility cooling began. Converted the vacuum system to building power; generators turned off.
     1500 Left for the day.

Other vacuum details by the Vacuum group.
During the power-off procedure h1ecatc1 reported that it had unsaved changes. As the changes were unknown at the time, we ignored them and continued with the normal shutdown.
State of H1: systems are off
Weekend Prep activities:
* indicates there was some small glitch in our procedure
This alignment snapshot is the final lock before the weekend.
Ready for the power outage. Safe.snap last updated June 1st.
HWS:
CO2:
We are starting the shutdown procedure for LHO CDS.
J. Kissel, D. Barker

After the modifications to the HAM triple models to make coil driver filtering individually switchable (see LHO aLOG 27223), I had renamed the library parts in /opt/rtcds/userapps/release/sus/common/models/STATE_BIO_MASTER.mdl in order to clean up the confusion between the digital control of a modified triple-acquisition driver vs. an unmodified triple-acquisition driver. However, renaming a part destroys the library link's reference in any other model that uses that specific block. This was identified by Dave, who was trying to compile the BSFM model in prep for the power outage (the BSFM is the only *other* suspension type that uses the TACQ driver). As such, I copied in the newly renamed part from the library, which restored the link. The model now compiles nicely and has been committed to the userapps repository.
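For future model prep, one way to catch stale library links of this kind before compiling is to scan the userapps checkout for any model that still references the old block name. Below is a minimal sketch in Python, assuming the models are text-format .mdl files; the old block name used here is a placeholder, not the actual renamed part.

import os

USERAPPS = "/opt/rtcds/userapps/release"        # userapps checkout to scan
OLD_BLOCK = "STATE_BIO_MASTER/OLD_BLOCK_NAME"   # hypothetical pre-rename library block

# Walk the checkout and flag any .mdl file that still references the old name.
for dirpath, dirnames, filenames in os.walk(USERAPPS):
    for name in filenames:
        if not name.endswith(".mdl"):
            continue
        path = os.path.join(dirpath, name)
        try:
            with open(path, errors="ignore") as f:
                text = f.read()
        except OSError:
            continue
        if OLD_BLOCK in text:
            print("stale library reference:", path)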
State of H1: at 21:33UTC locked in ENGAGE_REFL_POP_WFS
Activities at 21:33UTC:
Activities during the day:
H1 locked:
H1 unlocked:
Travis Sadeki and Calum Torrie
I have copied Evan's new actuation function to the h1hwinj1 directory currently used for CW injections: ~hinj/Details/pulsar/O2test/. I used the one that corrects for the actuation delay: H1PCALXactuationfunction_withDelay.txt. For reference, the uncorrected version (no "_withDelay") sits in the same directory, along with the one we first tried last week: H1PCALXactuationfunction.txt.25may2016. The perl script that generates the command files in.N (N=0-14) has been updated to use "_withDelay" and the command files regenerated. The CW injections have been killed and automatically restarted by monit. Attached are second trends before and after the gap showing that things look about the same, as expected, but there is a small increase in injection amplitude (~5%).
Evan wondered if the ~5% increase in total injection amplitude was dominated by the highest frequency injection or one at lower frequencies. I took a look for this time interval and found that the total amplitude is dominated by the injection at ~1220.5 Hz. Simply comparing spectral line strengths before and after the changeover turned out not to be a robust way to estimate the frequency-dependent ratio of the new to the old inverse actuation function, because some pulsar injections (especially the highest frequency one) are going through rapid antenna pattern modulations during this period. But comparing the new to the old spectral line strengths at the same sidereal time several days later (after power outage recovery) gives robust measures for a sampling of low-, medium- and high-frequency injections:
Freq (Hz) | "Old" amplitude (before switchover) | New amplitude (4 sidereal days later) | Ratio (new/old)
   190.90 |  0.32292                            |  0.32211                              | 1.00
   849.00 | 60.502                              | 62.344                                | 1.03
   1220.5 | 299.37                              | 318.70                                | 1.06
   1393.1 | 207.50                              | 224.37                                | 1.08
   1991.2 | 28.565                              | 32.788                                | 1.15
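As a cross-check of these ratios against the files themselves, here is a minimal sketch (not the analysis used above). It assumes the actuation-function files are plain-text tables of (frequency, real part, imaginary part), that the pre-switchover injections used the 25may2016 file, and that the injection amplitudes scale with the magnitude of these functions; all three are assumptions.

import numpy as np

def load_actuation(path):
    # assumed format: three columns of frequency [Hz], real part, imaginary part
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1] + 1j * data[:, 2]

f_old, tf_old = load_actuation("H1PCALXactuationfunction.txt.25may2016")
f_new, tf_new = load_actuation("H1PCALXactuationfunction_withDelay.txt")

# interpolate the new magnitude onto the old frequency grid and take the ratio
ratio = np.interp(f_old, f_new, np.abs(tf_new)) / np.abs(tf_old)

# sample the ratio near the injection frequencies quoted in the table above
for f0 in (190.90, 849.00, 1220.5, 1393.1, 1991.2):
    i = np.argmin(np.abs(f_old - f0))
    print("%8.2f Hz  new/old ~ %.3f" % (f0, ratio[i]))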
Activities:
Currently:
Work still planned for today:
Tours:
Monday timeline:
To clarify the HEPI task I will perform Monday morning, see WP 5910.
This is the Capacitive Accumulator Pressure check, which requires the HEPI pumps to be off. It is done only every 3 to 6 months.
Tonight we are again having random, fast locklosses in different configurations. We are also seeing some large glitches that don't knock us out of lock. Again they seem to correspond to times when there is something noisy in the SR3 channels. While it's not clear that the SR3 channels are seeing real optic motion, it is probably worth swapping some electronics as a test, because these frequent locklosses are making commissioning very difficult.
See 27437 and Andy Lundgren's comments
The first attached plot shows that something about this channel changed on May 10th, and that there have been noisy periods since then. The next two are two more examples of sudden unexplained locklosses where something shows up in SR3.
Kiwamu and I unplugged the cables from the Sat amp to the chamber for both M2 and M3, and the locklosses and glitches still happened. The good news is that Kiwamu seems to have found a good clue about the real culprit.
Our current theory is that the locklosses are due to the ISS, which shuts itself off for some reason at random times, roughly once every 10 minutes. This causes a glitch in the laser intensity. Before a lockloss there was a fast glitch (~milliseconds) in the PRCL, SRCL, and CARM error signals. That made us think that the laser field may be glitching. Indeed, we then found that the ISS had gone off automatically at the same time as the glitch and seemingly had caused the subsequent lockloss. We then tested the stability of the ISS in a simpler configuration where only the IMC is locked. We saw glitches of the same type in this configuration too.
In order to localize the issue, we are leaving the ISS loop open overnight to see whether the anomaly is still present without the ISS loop.
Conclusion: it was the ISS, whose diffracted power was too low.
In the overnight test last night I did not find any glitchy behavior in the laser intensity (I looked at IMC-MC2_TRANS_SUM), which means the ISS first loop is the culprit. Looking at a trend of the recent diffracted power, it kept decreasing over the past few days from roughly 12% to almost 10% (see the attached). As Keita studied before (alog 27277), a diffracted power of 10% is about the value where the loop can go unstable (or hit a diffraction value low enough to shut off the auto-locked loop). I increased the diffracted power to about 12% so that its variation looks small to my eye.
Note that there are two reasons that the diffracted power changes, i.e. intentional change of the set point (left top) and the HPO power drift (right bottom). When the latter goes down, ISS doesn't have to diffract as much power, so the diffraction goes lower.
In the attached, at the red vertical line somebody lowered the diffraction for whatever reason, and immediately the ISS got somewhat unhappy (you can see this in the number of ISS "saturations" in the middle right panel).
Later at the blue vertical line (that's the same date when PSL air conditioning was left on), the diffraction was reduced again, but the HPO power went up, and for a while it was OK-ish.
After the PSL was shut down and came back, however, the power slowly degraded, the diffraction went lower and lower, and the number of saturation events sky-rocketed.
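For reference, the same checks can be reproduced from a workstation by trending the diffracted power and counting fast dips in the IMC transmitted power. Below is a minimal sketch using gwpy; the channel names are illustrative and may not match the exact DAQ names, and an NDS2 connection is assumed.

import numpy as np
from gwpy.timeseries import TimeSeries

# long-term drift of the ISS diffracted power (expect ~12% falling toward ~10%)
diff = TimeSeries.get("H1:PSL-ISS_DIFFRACTION_AVG", "2016-05-30", "2016-06-06")
print("diffracted power: min %.1f%%, max %.1f%%" % (diff.value.min(), diff.value.max()))

# fast (~ms) intensity glitches: count sudden drops in the IMC transmitted power
trans = TimeSeries.get("H1:IMC-MC2_TRANS_SUM_OUT_DQ",
                       "2016-06-02 04:00", "2016-06-02 12:00")
median = np.median(trans.value)
n_dips = int((trans.value < 0.9 * median).sum())
print("samples more than 10% below the median transmitted power:", n_dips)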
Step 6.3: LVEA Network Switches
This box appeared to be running (green lights & no errors).
Step 6.4 Work Station Computers
Work Stations powered in Control Room -----> But BEFORE these were powered up, the NFS File Server in the MSR should have been checked. This needs to be added to the procedure (a check sketch follows after this step).
We have found several issues due to computers being started before the NFS File Server was addressed. These items had to be brought up again:
-----> And the items above allow us to run Dataviewer now and also bring up the Vacuum Overview
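As a concrete version of that procedural check, here is a minimal sketch of a pre-flight test that could be run before powering up the workstations. The mount points listed are assumptions for illustration, not the site's actual exports.

# hypothetical NFS mount points served by the MSR file server
EXPECTED_MOUNTS = ["/opt/rtcds", "/ligo"]

def nfs_mounts():
    """Return the set of currently mounted NFS mount points (Linux)."""
    mounts = set()
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if fstype.startswith("nfs"):
                mounts.add(mountpoint)
    return mounts

missing = [m for m in EXPECTED_MOUNTS if m not in nfs_mounts()]
if missing:
    print("NFS file server not ready; missing mounts:", ", ".join(missing))
else:
    print("All expected NFS mounts present; OK to bring up workstations.")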
Step 6.5 Wiki & alog & CDS Overview MEDM
They are back. Overview on the wall (thanks, Kissel).
Update: The wiki was actually NOT back since it was started before the NFS File Server was started. So the wiki was restarted.
Sec 8: EX (Richard & TJ)
They have run through the document & are moving on to other outbuildings. On the overview for EX, we see Beckhoff & SUSAUX are back.
Sec 10: MX (Richard & TJ)
Haven't heard from them, but we see that PEMMX is back on the Overview.
Sec 9: EY (Richard & TJ)
This is back online. So now we can start front ends in the Corner Station!
They are now heading to MY.....
Sec 11: MY (Richard & TJ)
...but it looks like MY is already back according to the CDS Overview.
STATUS at 10AM (after an hour of going through procedure):
Most of the CDS Overview is GREEN, -except- the LSC. Dave said there were issues with bringing the LSC front end back up and he will need to investigate in the CER.
End Stations: (TJ, Richard)
7.5 Front Ends (Updated from Kissel)
Everything is looking good on the CDS Overview
Sec 7.8: Guardian (updated from Kissel)
Working on bringing this back. Some of these nodes need data from LDAS (namely, ISC), so they may take a while.
BUT basic nodes such as SUS & SEI may be ready sooner.
7.1 CORNER STATION DC POWER SUPPLIES
These have all been powered ON (Richard).
(Still holding off on the End Stations until the VAC team lets us know it's OK.)
EY (Richard)
High Power voltage & ESD are back (this step is not in the recovery document).
EX (Richard)
High Power voltage & ESD are back (this step is not in the recovery document).
There appear to be some order issues here. (Jeff, Nutsinee, and others are working on fixing the order in the document.)
1) We held off on addressing DC high power because we wanted to wait for the Vacuum System Team at the LVEA (for the vacuum gauge) and at the End Stations (for the vacuum gauge & the ESD).
2) We held off on some Corner Station items because they are on the Dolphin Network. To address the End Stations FIRST, Richard & TJ were assigned to head out, start the End Station sections of the document, and get their Dolphin Network items online. Once they were done, Dave started on the Corner Station Front Ends on the Dolphin network.
Extraneous items:
Sec 7.6 LVEA HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
Sec 8.5 EX HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
Sec 9.5 EY HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
UPDATE:
At this point (11:42am), we appear to be mostly restored with regard to the CDS side. Most of the operations subsystems are back (we are mostly green on the Ops Overview). The VAC group's Annulus Ion Pumps are back to using site power.
Lingering CDS/Operations items: