This will be a running log of activities, as they occur, for bringing CDS back after the planned outage over the weekend. For the prep and what happened over the weekend, refer to Richard's alog.
We started the shift with lights, front gate operation, wifi, etc.
Filiberto, Ed, Rich

Installation is complete for the ITM ESD Driver for ITMX and ITMY. A short was found on the bias connection to ITMX (see attached sketch to clear up the pin numbering of the legacy connector). The shield was cut and insulated on the 2 ft section of cable up near the vacuum feedthrough for this connection. All nominally 24 V connections were HIPOT tested to 500 V, and the high-voltage bias connections were tested to 700 V. An ADC (ADC07) was added to SUS-C5-14 to permit the ITM ESD readback channels to be implemented per the latest wiring diagram (D1500464). At the time of writing, no aspects of the installed system have been verified in situ; this is the next item on the checkout.

Some useful system constants (total for both drivers):
+/- 18 VDC nominal quiescent current -> +460 mA, -420 mA
+/- 24 VDC nominal quiescent current -> +40 mA, -40 mA
+/- 430 VDC nominal quiescent current -> +/- 5.6 mA

Serial numbers:
ITMY ESD Driver (D1600092) (installed in SUS-R6) -> S1600266
ITMX ESD Driver (D1600092) (installed in SUS-R5) -> S1600267
ITM PI AI Chassis (D1600077) (installed in SUS-C6 U23) -> S1600245
As noted by Dave B., we began the orderly shutdown of the site at 1500 local time on Friday. The power actually went out at 1715, and we were almost ready. Getting the vacuum system onto the generators did not go as smoothly as it could have:
1. The generator output was too low for the UPS to operate on. The UPS was looking for 115 V and we were at 112 V. We bypassed the UPS and ran the 24 V DC supplies directly from the generator.
2. The GFI outlet on the EY generator would not function, so it was replaced.
By 1930 we were comfortable leaving the site.

Sat.
0800 Site was dark; all buildings, generators, and the vacuum system in good shape.
1800 New switch at the substation installed.
Sun.
0800 Site was dark; all buildings, generators, and the vacuum system in good shape.
1200 Testing of the switch complete (hiccups included).
1300 Site power restored. Facility cooling began. Converted the vacuum system to building power; generators turned off.
1500 Left for the day.

Other vacuum details to come from the vacuum group.
During the power-off procedure, h1ecatc1 reported it had unsaved changes. As the changes were unknown at the moment, we ignored them and continued with the normal shutdown.
State of H1: systems are off
Weekend Prep activities:
* indicates there was some small glitch in our procedure
This alignment snapshot is the final lock before the weekend.
Ready for the power outage. Safe.snap last updated June 1st.
HWS:
CO2:
We are starting the shutdown procedure for LHO CDS.
J. Kissel, D. Barker

After the modifications to the HAM triple models to make coil driver filtering individually switchable (see LHO aLOG 27223), I had renamed the library parts in /opt/rtcds/userapps/release/sus/common/models/STATE_BIO_MASTER.mdl in order to clean up the confusion between the digital control of a modified triple-acquisition driver vs. an unmodified triple-acquisition driver. However, the renaming destroys the library link's reference in other models that use the specific block. This was identified by Dave, who was trying to compile the BSFM model in prep for the power outage (the BSFM is the only *other* suspension type that uses the TACQ driver). As such, I copied in the newly renamed part from the library, which restored the link. The model now compiles nicely and has been committed to the userapps repository.
State of H1: at 21:33UTC locked in ENGAGE_REFL_POP_WFS
Activities at 21:33UTC:
Activities during the day:
H1 locked:
H1 unlocked:
Travis Sadecki and Calum Torrie
I have copied Evan's new actuation function to the h1hwinj1 directory currently used for CW injections: ~hinj/Details/pulsar/O2test/. I used the one that corrects for the actuation delay: H1PCALXactuationfunction_withDelay.txt. For reference, the uncorrected version (no "_withDelay") sits in the same directory, along with the one we first tried last week: H1PCALXactuationfunction.txt.25may2016. The perl script that generates the command files in.N (N=0-14) has been updated to use "_withDelay" and the command files regenerated. The CW injections have been killed and automatically restarted by monit. Attached are second trends before and after the gap showing that things look about the same, as expected, but there is a small increase in injection amplitude (~5%).
Evan wondered if the ~5% increase in total injection amplitude was dominated by the highest-frequency injection or by one at lower frequencies. I took a look for this time interval and found that the total amplitude is dominated by the injection at ~1220.5 Hz. Simply comparing spectral line strengths before and after the changeover turned out not to be a robust way to estimate the frequency-dependent ratio of the new to the old inverse actuation function, because some pulsar injections (especially the highest-frequency one) are going through rapid antenna pattern modulations during this period. But comparing the new to the old spectral line strengths at the same sidereal time several days later (after power outage recovery) gives robust measures for a sampling of low-, medium-, and high-frequency injections (a small arithmetic sketch of this comparison follows the table):
| Freq (Hz) | "Old" amplitude (before switchover) | New amplitude (4 sidereal days later) | Ratio (new/old)
| 190.90 | 0.32292 | 0.32211 | 1.00
| 849.00 | 60.502 | 62.344 | 1.03
| 1220.5 | 299.37 | 318.70 | 1.06
| 1393.1 | 207.50 | 224.37 | 1.08
| 1991.2 | 28.565 | 32.788 | 1.15
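As a rough illustration of the ratio comparison above, here is a minimal sketch (my own, not the script used for the injections); the (frequency, old, new) values are copied from the table, and nothing else is taken from the actual injection code:

# Hypothetical sketch: ratio of new to old line strengths from the table above.
lines = [
    (190.90, 0.32292, 0.32211),
    (849.00, 60.502, 62.344),
    (1220.5, 299.37, 318.70),
    (1393.1, 207.50, 224.37),
    (1991.2, 28.565, 32.788),
]

for freq, old, new in lines:
    print(f"{freq:8.2f} Hz   new/old = {new / old:.2f}")

# The ~1220.5 Hz injection has by far the largest amplitude of the sampled
# lines, so it dominates any estimate of the total injection amplitude.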
Just did a quick check on the bump I first reported in 27479, and yes, it is still there and still wandering. Of note, it can move quickly: it went from ~60 Hz to ~80 Hz in less than 30 minutes (2 June, 10:55 to 11:25 PDT).
An additional ADC was added to the h1susauxb123 I/O chassis. The new card was added to the expansion board at bus 1-2, making it ADC1 in relation to the existing ADC cards. All ribbon cables between the ADC and interface cards were rearranged with the exception of ADC0 so that none of the cables to the AA chassis needed to be swapped and the h1susauxb123 model doesn't need to be modified.
Which ECR is this done under (I assume it is the ITM ESD install)?
I'm assuming it is covered by ECR E1600064, though it is not clear whether that ECR shows the additional ADC channels for the sus-aux system needed to support the PI ESD install on the ITMs.
I should have made it apparent that there are now 8 ADC cards installed in the h1susauxb123 I/O chassis. My original post omitted this important detail. The newly installed card is ADC7.
Activities:
Currently:
Work still planned for today:
Tours:
Monday timeline:
To be more clear regarding the HEPI task I will perform Monday morning, see WP 5910.
This is the capacitive accumulator pressure check, which requires the HEPI pumps to be off. It is done only every 3 to 6 months.
Tonight we are again having random, fast locklosses in different configurations. We are also seeing some large glitches that don't knock us out of lock. Again they seem to correspond to times when there is something noisy in the SR3 channels. While it's not clear that the SR3 channels are seeing real optic motion, it is probably worth swapping some electronics as a test, because these frequent locklosses are making commissioning very difficult.
See 27437 and Andy Lundgren's comments
The first attached plot shows that something about this channel changed on May 10th, and that there have been noisy periods since then. The next two are two more examples of sudden unexplained locklosses where something shows up in SR3.
Kiwamu and I unplugged the cables from the sat amp to the chamber for both M2 and M3, and the locklosses and glitches still happened. The good news is that Kiwamu seems to have found a good clue about the real culprit.
Our current theory is that the locklosses are due to the ISS, which shuts itself off for some reason at random times, at a rate of roughly once every 10 minutes. This causes a glitch in the laser intensity. Before a lockloss, there was a fast glitch (~milliseconds) in the PRCL, SRCL, and CARM error signals. That made us think that the laser field may be glitching. Indeed, we then found that the ISS had gone off automatically at the same time as the glitch and seemingly had caused the subsequent lockloss. We then tested the stability of the ISS in a simpler configuration where only the IMC is locked. We saw glitches of the same type in this configuration too.
In order to localize the issue, we are leaving the ISS open overnight to see if some anomaly is there without the ISS loop.
Conclusion: it was the ISS, whose diffracted power was too low.
Based on last night's overnight test, I did not find any glitchy behavior in the laser intensity (I looked at IMC-MC2_TRANS_SUM). This means that the ISS first loop is the culprit. Looking at the trend of the recent diffracted power, it kept decreasing over the past few days, from 12-ish down to almost 10% (see the attached). As Keita studied before (alog 27277), a diffracted power of 10% is about the value where the loop can go unstable (or hit too low a diffraction value, which shuts off the auto-locked loop). I increased the diffracted power to about 12%, so that the variation in the diffracted power looks small to my eye.
Note that there are two reasons that the diffracted power changes: intentional changes of the set point (left top) and HPO power drift (right bottom). When the latter goes down, the ISS doesn't have to diffract as much power, so the diffraction goes lower.
In the attached, at the red vertical line somebody lowered the diffraction for whatever reason, and immediately the ISS got somewhat unhappy (you can see this in the number of ISS "saturations" in the right middle panel).
Later, at the blue vertical line (the same date when the PSL air conditioning was left on), the diffraction was reduced again, but the HPO power went up, and for a while it was OK-ish.
After the PSL was shut down and came back, however, the power slowly degraded, the diffraction went lower and lower, and the number of saturation events skyrocketed.
Rana, Evan
We measured the SRM to SRCL TF today to find the frequency and Q of the internal mode. Our hypothesis is that the thermal noise from the PEEK screws used to clamp the mirror into the mirror holder might be a significant contribution to DARM.
The attached Bode plot shows the TF. The resonance frequency is ~3340 Hz and the Q is ~150. Our paper-and-pencil estimate is that this may be within an order of magnitude of DARM, depending upon the shape of the thermal noise spectrum. If it's steeper than structural damping, it could be very close.
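For context, a minimal single-mode sketch of this kind of estimate (my own back-of-envelope form, not necessarily the exact calculation used above): treating the ~3340 Hz internal mode as an oscillator with structural loss \phi \approx 1/Q \approx 1/150 and effective mass m, the fluctuation-dissipation theorem gives a one-sided displacement spectrum

S_x(f) = \frac{4 k_B T \, \phi \, \omega_0^2}{m \, \omega \left[ (\omega_0^2 - \omega^2)^2 + \phi^2 \omega_0^4 \right]}, \qquad \omega = 2\pi f, \quad \omega_0 = 2\pi \times 3340\ \mathrm{Hz}.

Well below the resonance this falls as S_x \propto 1/f, i.e. an amplitude spectral density going as f^{-1/2}; the effective mass (how strongly the lossy PEEK contacts couple to the sensed optic motion) is the main unknown in turning this into a DARM projection.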
"But isn't this ruled out by the DARM offset / noise test ?", you might be thinking. No! Since the SRCL->DARM coupling is a superposition of radiation pressure (1/f^2) and the 'HOM' flat coupling, there is a broad notch in the SRCL->DARM TF at ~80 Hz. So, we need to redo this test at ~50 Hz to see if the changing SRCL coupling shows up there.
Also recall that the SRCLFF is not doing the right thing for SRM displacement noise; it is designed to subtract SRC sensing noise. Stay tuned for an updated noise budget with SRM thermal noise added.
** See https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27455 for pictures of the SRM composite mass.
The peak is also visible in the DARM spectrum. In this plot the peak is at 3335 Hz instead of 3340 Hz. Why is there a ~0.15% frequency shift?
Here are projected SRM thermal noise curves for structural and viscous damping.
Given a typical SRC coupling into DARM of 1×10^-4 m/m at 40 Hz, 20 W of PSL power, and 13 pm of DARM offset (25019), this would imply a noise in DARM of 1×10^-20 m/Hz^1/2 at 40 Hz if the damping is structural.
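To make the propagation explicit (my own arithmetic from the numbers quoted above): a DARM noise of 1×10^-20 m/Hz^1/2 through a coupling of 1×10^-4 m/m corresponds to an SRM displacement noise of roughly (1×10^-20) / (1×10^-4) = 1×10^-16 m/Hz^1/2 at 40 Hz for the structural case.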
When I modelled the optics in https://dcc.ligo.org/LIGO-T1500376, and in particular the surrogate SRM, I had assumed the optic was bonded. After looking again earlier with Rana and Betsy, I realised it is held with 2 set screws (PEEK?) on the barrel at 12 o'clock and two line contacts at 4 and 8 o'clock. See https://dcc.ligo.org/LIGO-D1200886.
The previous bonded model for the SRM surrogate (I believe) had a first mode predicted around 8 kHz. However, from a quick model I ran today (with the set screws etc.), the first mode appears to be around 3400 Hz. The mode is associated with the optic held by the PEEK screws. (I was running the model over a remote desktop connection, so I will need to check it again when I get a better connection; more to follow on this. I will also post the updated model once I get back to Caltech.)
The ~3340 Hz peak is also clearly visible in the PDA/PDB cross-correlation spectrum. See alog 26345.
A couple of comments on this topic:
Danny, Matt (Peter F remotely)
Due to the issues currently seen at LHO, we were asked how the LLO SRM surrogate was put together, and whether we could add it to the alog as a record of the process. The easiest way is to do it via photos (which we have of the assembly process).
IMG_1462....There are only two setscrews that hold the optic in place. They can be seen being put in place below, in the "cup" that (eventually) holds the optic. I'm not sure of the material, but Peter F's speculation is that "I think those set screws must be the carbon-loaded PEEK type. The only other option I can think of for a black set screw would be carbon steel, and it surely isn't that."
IMG_1455...Here you see the three main parts: the optic, the “cup” that the optic goes into, and the main mass that the cup goes into. Note in the “cup” you can see the two raised parts at around 4 and 8 o’clock that the setscrews ‘push’ the optic onto. So it's not 'really' a three-point contact; it's 2 points (set screws) and 2 lines (in the holder).
IMG_1466...Here is the optic going into the cup, making sure the fiducial on the optic lines up with the arrow on the cup.
IMG_1470.....Optic now in the cup; doing up the setscrews that hold it in place. I can't remember how much we torqued them (we only did it by hand), but as Peter F again speculated, perhaps we just did the setscrews up tighter than LHO.
IMG_1475....Flipping the cup (with the optic in it) over and placing it in the main mass.
IMG_1478....Cup now sitting in the main mass (without the screws holding the cup into the main mass).
IMG_5172......The SRM surrogate installed in the suspension.
It looks like there might be a mode in the L1 SRM at 2400 Hz. See the attached plot of SRCL error signal from January, along with DARM and the coherence. There is also a broad peak (hump) around 3500 Hz in SRCL, with very low coherence (0.04 or so) with DARM. The SRCL data has been scaled by 5e-5 here so that it lines up with DARM at 2400 Hz.
Here are two noise budgets showing the expected DARM noise assuming (1) structural (1/f^1/2) SRM damping and (2) hyperstructural (1/f^3/4) SRM damping. This hyperstructural damping could explain the DARM noise around 30 to 40 Hz, but not the noise at 50 Hz and above.
I also attach an updated plot of the SRCL/DARM coupling during O1, showing the effect of the feedforward on both the control noise and the cavity displacement noise (e.g., thermal noise). Above 20 Hz, the feedforward is not really making the displacement noise coupling any worse (compared to having the feedforward off).
Note that the PEEK thermal noise spectrum along with the SRCL/DARM coupling is able to explain quite well the appearance of the peak in DARM.
I am attaching noise budget data for the structural case in 27625.
Evan and I spent most of the day trying to investigate the sudden locklosses we've had over the last 3 days.
1) We can stay locked for ~20 minutes with ALS and DRMI if we don't turn on the REFL WFS loops. If we turn these loops on, we lose lock within a minute or so. Even with these loops off we are still not stable, though, and we saw last night that we can't make it through the lock acquisition sequence.
2) In almost every lockloss, you can see a glitch in the SR3 M2 UR and LL noisemons just before the lockloss, which lines up well in time with glitches in POP18. Since the UR noisemon has a lot of 60 Hz noise, the glitches can only be seen there in the OUT16 channel, but the UR glitches are much larger. (We do not actuate on this stage at all.) However, there are two reasons to be skeptical that this is the real problem:
It could be that the RF problem that started in the last few days somehow makes us more sensitive to losing lock because of tiny SR3 glitches, or that the noisemons are just showing some spurious signal which is related to the lockloss/RF problems. Some lockloss plots are attached.
It seems like the thing to do would be trying to fix the RF problem, but we don't have many ideas for what to do.
We also tried running Hang's automatic lockloss tool, but it is a little difficult to interpret the results. There are some AS 45 WFS channels that show up in the third plot that appears, which could be related to either a glitchy SR3 or an RF problem.
One more thing: nds1 crashed today and Dave helped us restart it over the phone.
For the three locklosses that Sheila plotted, there actually is something visible on the M3 OSEM in length. It looks like about two seconds of noise from 15 to 25 Hz; see the first plot. There's also a huge ongoing burst of noise in the M2 UR NOISEMON that starts when POP18 starts to drop. The second through fourth attachments show these three channels plotted together, with causal whitening applied to the noisemon and OSEM. Maybe the OSEM is just witnessing the same electrical problem that is affecting the noisemon, because it does seem a bit high in frequency to be real, but I'm not sure. It seems like whatever these two channels are seeing has to be related to the lockloss, even if it's not the cause. It's possible that the other M2 coils are glitching as well. None of the other noisemons look as healthy as UR, so they might not be as sensitive to what's going on.
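For anyone who wants to reproduce this kind of causal whitening, here is a minimal sketch (my own illustration, not the actual script used above): fit an AR prediction-error filter to a quiet stretch of data and apply it causally, so that glitches are not smeared backwards in time. The data stand-in and the AR order are placeholders.

import numpy as np
from scipy.signal import lfilter

def ar_coefficients(x, order):
    # Yule-Walker estimate of AR prediction coefficients from the autocorrelation.
    x = x - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    R = np.array([[acf[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, acf[1 : order + 1])

def causal_whiten(x, quiet_segment, order=32):
    # Prediction-error filter e[n] = x[n] - sum_k a[k] x[n-1-k]; uses past samples only.
    a = ar_coefficients(quiet_segment, order)
    b = np.concatenate(([1.0], -a))
    return lfilter(b, [1.0], x - np.mean(x))

# Example with synthetic data standing in for a noisemon/OSEM time series:
fs = 256.0
data = np.random.randn(int(60 * fs))      # placeholder for the real channel data
whitened = causal_whiten(data, data[: int(10 * fs)])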
RF "problem" is probably not a real RF problem.
The bad RFAM excess was observed only in the out-of-loop RFAM sensor, not in the RFAM stabilization control signal. In the attached, the top panel is the out-of-loop signal, the middle is the control signal, and the bottom is the error signal.
Anyway, whatever this low-frequency excess is, it should come in after the RF splitter that feeds the in-loop and out-of-loop boards. Since it is observed in both the 9 MHz and 45 MHz RFAM chassis, it should lie in the difference in how the in-loop and out-of-loop boards are configured. See D0900761. I cannot pinpoint what that is, but my guess is that this is some DC effect coming into the out-of-loop board (e.g. the auto bias adjustment feedback, which only exists on the out-of-loop board).
Note that even if it's a real RFAM, 1 ppm RIN at 0.5 Hz is nothing, assuming that the calibration of that channel is correct.
Correction: the glitches are visible on both the M2 and M3 OSEMs in length, and also weakly in pitch on M3. The central frequency looks to be 20 Hz. The height of the peaks in length looks suspiciously similar between M2 and M3.
Just to be complete, I've made a PDF with several plots. Every time the noise in the noisemons comes on, POP18 drops and it looks like lock is lost. There are some times when the lock comes back with the noise still there, and the buildup of POP18 is depressed. When the noise ends, the buildup goes back up to its normal value. The burst of noise in the OSEMs seems to happen each time the noise in the noisemons pops up. The noise is in a few of the noisemons, on M2 and M3.
Step 6.3: LVEA Network Switches
This box appeared to be running (green lights & no errors).
Step 6.4 Work Station Computers
Workstations powered up in the Control Room -----> But BEFORE these were powered up, the NFS File Server should have been checked in the MSR. This needs to be added to the procedure.
We found several issues due to computers being started before the NFS File Server was addressed. These items had to be brought up again:
-----> And the items above now allow us to run Dataviewer and also bring up the Vacuum Overview.
Step 6.5 Wiki & alog & CDS Overview MEDM
They are back. Overview on the wall (thanks, Kissel).
Update: The wiki was actually NOT back since it was started before the NFS File Server was started. So the wiki was restarted.
Sec 8: EX (Richard & TJ)
They have run through the document & are moving on to other outbuildings. On the overview for EX, we see Beckhoff & SUSAUX are back.
Sec 10: MX (Richard & TJ)
Haven't heard from them, but we see that PEMMX is back on the Overview.
Sec 9: EY (Richard & TJ)
This is back online. So now we can start front ends in the Corner Station!
They are now heading to MY...
Sec 11: MY (Richard & TJ)
...but it looks like MY is already back according to the CDS Overview.
STATUS at 10AM (after an hour of going through procedure):
Most of the CDS Overview is GREEN, except for the LSC. Dave said there were issues with bringing the LSC front end back up and he will need to investigate in the CER.
End Stations: (TJ, Richard)
7.5 Front Ends (Updated from Kissel)
Everything is looking good on the CDS Overview
Sec 7.8: Guardian (updated from Kissel)
Working on bringing this back. Some of these nodes need data from LDAS (namely, ISC); so some of these may take a while.
BUT, basic nodes such as SUS & SEI may be ready fairly soon.
7.1 CORNER STATION DC POWER SUPPLIES
These have all been powered ON (Richard).
(Still holding off on End Stations for VAC team to let us know it's ok.)
EY (Richard)
High Power voltage & ESD are back (this step is not in recovery document).
EX (Richard)
High Power voltage & ESD are back (this step is not in recovery document).
There appear to be some ordering issues here. (Jeff, Nutsinee, and others are working on fixing the order in the document.)
1) We held off on addressing DC high power because we wanted to wait for the Vacuum System Team at the LVEA (for the vacuum gauge) and at the End Stations (for the vacuum gauge & the ESD).
2) We held off on some Corner Station items because they are on the Dolphin Network. To address the End Stations FIRST, we assigned Richard & TJ to head out and start the End Station sections of the document & get their Dolphin Network items online. Once they were done, Dave started on the corner station Front Ends on the Dolphin network.
Extraneous items:
Sec 7.6 LVEA HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
Sec 8.5 EX HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
Sec 9.5 EY HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
UPDATE:
At this point (11:42am), we appear to be mostly restored with regard to the CDS side. Most of the operations subsystems are back (we are mostly green on the Ops Overview). The VAC group's Annulus Ion Pumps are back to using site power.
Lingering CDS/Operations items: