Activity Log: All Times in UTC (PT)

15:58 (07:58) Chris – Beam tube sealing ~150 yards from End-X
17:40 (09:40) Peter – Transition LVEA to laser safe
17:51 (09:51) Bubba – In the LVEA working on crane rail shimming
17:52 (09:52) Peter – Reset PSL External Shutter flow sensor
17:55 (09:55) Keita – Testing on ETM-X
18:31 (10:31) Kyle – Going to Y-Arm Compressor room
18:43 (10:43) Reset tripped BS Stage 1 & Stage 2 WDs
18:45 (10:45) Mitch – Working in the Optics Lab
19:16 (11:16) Betsy – Going into the LVEA to look for cables
19:21 (11:21) Mitch – Out of Optics Lab
19:22 (11:22) Betsy – Out of LVEA
19:49 (11:49) Reset tripped ETMX Stage 1 & Stage 2 WDs
19:50 (11:50) Kyle – Back from Y-Arm
21:38 (13:38) Kyle – Going back to Y-Arm Compressor room
21:54 (13:54) Mitch – Going into the Optics Lab
21:55 (13:55) Christina – Forklifting a package from the LSB to the VPW
22:08 (14:08) Christina – Finished; forklift is parked back at the LSB
22:15 (14:15) Kiwamu – Opened main shutter (with Peter K's approval) – LVEA is still laser safe
22:27 (14:27) Alastair & Nutsinee – Going into the LVEA
22:34 (14:34) Mitch – Out of Optics Lab
22:53 (14:53) Bubba & John – In the LVEA working on the main crane shimming
22:59 (14:59) Reset tripped OMC SUS WD
23:40 (15:40) Alastair – Going into the LVEA to turn on the CO2 laser
00:00 (16:00) Turn over to Travis

End of Shift Summary:
Title: 02/04/2016, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: Kiwamu, Hugh
Incoming Operator: Travis
Shift Detail Summary: The IFO was not locked during the day due to elevated seismic and microseism levels. Various groups took advantage of the lack of locking to do crane maintenance in the LVEA and to make several fixes/upgrades. Commissioning work and testing continued during the shift.
FRS 4296, 4333; Bugzilla 969

The GDS tools are now version gds-2.17.1.3-1, which should fix manual channel-name input problems in DTT. When the "Start" button is pressed to run an analysis, DTT checks the active channel names against the channel list. If any do not exist, a popup window lists the channel names that could not be found, and you then have the option to cancel the test or continue with the channels which are valid. This release is meant to allow users to copy/paste or manually enter channel names for analysis. Since a manual entry does not provide any channel rate data, it is still possible to get the wrong channel via NDS2 when there is more than one channel rate to choose from.
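As a rough illustration of the same check (a minimal sketch only; it assumes the nds2-client Python bindings and an LHO NDS server, and the channel names are made up for the example):

import nds2

def check_channels(names, host='nds.ligo-wa.caltech.edu', port=31200):
    """Report channel names NDS2 cannot find, and names that match more
    than one sample rate (the ambiguity described above)."""
    conn = nds2.connection(host, port)
    missing, ambiguous = [], {}
    for name in names:
        matches = conn.find_channels(name)
        if not matches:
            missing.append(name)
        else:
            rates = sorted({c.sample_rate for c in matches})
            if len(rates) > 1:
                ambiguous[name] = rates
    return missing, ambiguous

missing, ambiguous = check_channels(['H1:LSC-DARM_IN1_DQ', 'H1:LSC-NOT_A_CHANNEL'])
print('not found:', missing)
print('multiple rates:', ambiguous)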
Kiwamu, Jim, Dave:
New h1lsc and h1omc models were created and installed this morning. The DAQ was then restarted.
The change was to expand a single filter module to two in series (providing the potential for 20 filters) for the DARM, MICH, PRCL and SRCL paths.
The models were changed at 11:11 PST and the DAQ was restarted at 11:13 PST.
Changes attached.
Updated userapps: isi/common/guardian/isiguardianlib/watchdog:
hugh.radkins@opsws1:watchdog 0$ svn up
U states.py
Updated to revision 12546.
guardctrl restart ISI_ETMX_ST2: this caused the guardian to de-isolate Stage 2, which is not what I experienced Tuesday when I did something similar. When I restarted the Stage 1 guardian, it did not touch the platform, as I expected.
Tripped the platform, and now, as expected, the damping loops are turned off when a state 4 trip occurs. So the update fixed the issue.
At ETMY, I restarted Stage 1 first, then Stage 2, and the Guardian did nothing to the platforms.
Restarted HAM3 with no problem. Tested it (tripping PR2 in the process) and the DAMP path was turned off. So good.
When restarting ISI_HAM6, the ISI guardian switched its request state to NONE but did not do anything, and the chamber manager was throwing messages. Toggling the ISI to HIGH_ISOLATED made everyone happy.
The TCS crew is working on the SRC, so I'll not touch those.
Guardian restart & test ETMX, HAM3.
Guardian restart ETMY, HAM2, HAM6.
I'll complete the remainder of the restarts and maybe test a couple more when TCS is clear.
Restarted the guardian for the BS ISI stages.
Completed the Guardian restarts but I did not do any further trips to test the function.
Restarted HAMs 4 & 5 and the ITMs. Interestingly, about half of these toggled the ISI to NONE and some became unmanaged, but switching them back to HIGH_ISOLATED and INITing the chamber manager set things right without troubling the platforms.
WP closed.
It turns out that the HWS at the end stations were temporarily turned on. When an HWS is on and not on a separate power supply, it injects 57 Hz noise into the ESD, which is picked up by the monitor. Another mystery solved.
On the attached top left, red is with the EX HWS off and blue is with it on, IFO unlocked, ETMs uncontrolled. The EX ESD UL digital output was turned off. The 57 Hz line and its harmonics are gone when the HWS is off.
The bottom left shows the same thing but with the EY HWS off/on. There is no 57 Hz in either trace, because the EY HWS is on a separate temporary power unit (alog 18355). Why we also saw 57 Hz in EY LVESDAMON in lock (alog 25310) is not clear; perhaps the EX ESD was not turned off (though all outputs were zeroed) and shook ETMX, and the line might have propagated to EY through the DARM loop (we can see the line in DARM).
I don't know whether the X-Y difference (bottom right), apart from the calibration lines in Y, is real or just sensing noise.
WP#5722
While we have high useism, I wanted to revisit how this node is doing, since I haven't touched it since before ER8. I created and started the node, and then had it switch blends with no problem. I then tried to mess it up to see how it would react, which is where I left off last time.
Some of my tests:
Overall it went well, and I have a few things to work on. When I get another chance to test this, I will try it on ETMX with the BRS sensor correction as well (the BLENSCOR node, as I have been calling it).
I flushed the chiller water for both the X and Y arms today, with advice from Jeff on what to do. The water in the LHO chillers has not been a problem so far, and shows no evidence of particulates or discoloration. However, the chiller manual suggests that replacing the water every 6 months would be a reasonable maintenance schedule.
Given that there was no problem with the water, we could have gotten away with just draining and refilling the chillers a couple of times to replace the water. However, at LLO we will want to actively flush the system, so for that reason I also flushed the chillers here.
First I turned off the lasers, then their power supplies and the power supply to the AOM driver. The laser, laser driver, AOM head and AOM driver are all water cooled. For each chiller in turn I performed the flush as follows:
* Tools required: flat-blade screwdriver for the hose clamp
1) Turn off the chiller, disconnect it using the quick connects, then drain it using the drain plug at the bottom of the back. ***Make sure there are no power cables on the ground at this point, because water may get on the ground.***
2) Close drain plug, refill chiller with distilled water ("laboratory grade" = reverse osmosis filtered then distilled). Reconnect cables and the "Process Outlet" pipe to the top of back of chiller.
3) Leave the process inlet pipe disconnected and remove the quick disconnect attachment from the pipe. Run this pipe into a water collection basin (I used a mop bucket which has wheels and is a good size for this)
4) When filling the water collection basin, make sure that it doesn't get so full that you can't move it back down the stairs; 5 gallons is about as much as you would want to carry.
5) Repeat the following steps until all water is replaced
a. Turn on chiller while holding pipe into basin
b. Wait for approximately 10 seconds of water to come out of the chiller, then turn the chiller off using the switch on the back (not the one on the front, which takes too long to turn the chiller off)
c. Top chiller back up to full with distilled water
6) Once the water is replaced, reconnect the process inlet pipe and hose clamp
7) Turn on the chiller and run for several minutes. Check water level and top up as necessary.
I have measured the HARD arm loops at several different input powers, so that we can see how the peaks move around with radiation pressure. Some of the measurements could be a bit better (e.g. CHARD YAW at 10 W, extended to lower frequency), but I think it's enough to get the idea of what's going on and to confirm with the ASC model.
For each plot, red is 20 W, orange is 10 W and blue is 2 W. Shaded regions indicate the ~1-sigma error bars derived from the coherence of the measurement. The gains and control filters are different for each of these measurements, but I know what the state was, and the model script takes those differences into account. I will also divide out the rest of the loop contributions, so that we can see how the suspension response changes by itself.
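For reference, the shaded bands presumably follow the standard coherence-based estimate of transfer-function uncertainty; assuming that is what the script does, the 1-sigma fractional error is (in LaTeX form)

\frac{\sigma_{|H|}}{|H|} \approx \frac{\sqrt{1-\gamma^{2}}}{|\gamma|\,\sqrt{2\,N_{\mathrm{avg}}}}

where \gamma^{2} is the measured coherence and N_{\mathrm{avg}} is the number of averages in the DTT measurement.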
For all of the loops except for DHARD Pitch, we have basically no gain in the frequency region that I had the patience to measure. We should look forward to using the confirmed ASC models to design sensible loops that have some actual gain.
The DHARD Yaw comparison with the model was reported in alog 25054, but I have not yet made the comparison plots for the other degrees of freedom. The oplev damping is still being added to the quad suspension model, so it may be another little while before the pitch measurements get a model to compare with.
Attached as a tarball is a folder including the DTT templates for these measurements, the transfer functions and coherence text files exported from DTT, the matlab script to create the figures, and all of the figures saved in several different formats. Note that the coherences that are exported into the text files are just the IN2/IN1 coherence, although others are available in the DTT templates.
In bringing the diagnostic breadboard back online, which required powering up the electronics, the flow sensor tripped, causing the shutter to close. It was reset from the Control Room. Back to normal.
The reduction in gain on the chiller loop looks to have gotten rid of the oscillation we were seeing. Attached is the overnight data, which shows the same small output power range and PZT range, with a smoother chiller response. The Y-arm laser wasn't on last night.
Transition Summary: 02/04/2016, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT) State of H1: IFO unlocked. The wind is a light breeze (< 7mph). Seismic and microseism are still as elevated as they were last night. Locking may be somewhat difficult at this time.
It has been a fight to get DRMI to stay locked for more than a few minutes due to high ground motion. Commissioners suggest calling it a night as microseism continues to trend upward.
The ground motion is just too high, so Evan, Travis and I agree that we should call it a night. We can't hold DRMI for more than 2 minutes or so. Earlier today, I flipped the phase of the ASAIR 90 PD, so that we get roughly the same number of counts when DRMI is locked as we used to. We only use ASAIR90 for thresholding engagement of the DRMI ASC, so it's not a super critical PD phasing-wise. I'll come back to it tomorrow.
This is another instance of getting the '75% notification', followed shortly by a WD trip on HAM6. All I did was to reset the WD.
Vern, Jim, Dave:
Several times in the past few weeks we have seen an invocation of tconvert produce the output:
tconvert NOTICE: Leap-second info in /ligo/apps/linux-x86_64/ligotools/config/public/tcleaps.txt is no longer certain to be valid; We got valid information from the web, but were unable to update the local cache file: No permission to write in directory /ligo/apps/linux-x86_64/ligotools/config/public
This afternoon we spent some time tracking this down. tconvert uses a web server in France to access the IERS Bulletin C information page, which alerts us to upcoming leap seconds. It appears that this server was pure HTTP until January 11th this year; it is now an HTTP server which redirects the request to an HTTPS page. The TCL code inside tconvert interprets the redirection return message as an error, and then sets the expiration of the leap-second information to 24 hours in the future.
The procedure we used to maintain future leap-second notification was to run tconvert as user controls twice a year, which updated a leap-seconds database file (owned by controls) good for the next six months. With the recent HTTP issue, controls now produces a file which is good for only a day.
This afternoon we ran a modified tconvert which accesses a different leap-seconds HTTP page, and the LHO CDS database file is now good through 8 August. In the meantime we will raise an FRS and work with Peter S on a long-term solution.
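For what it's worth, here is a minimal sketch of the redirect issue in Python terms (the actual fix is in TCL inside tconvert; the URL below is illustrative, not necessarily the page tconvert uses). urlopen follows the 30x redirect to HTTPS transparently, which is exactly what the old TCL code failed to do:

from urllib.request import urlopen

# Assumed location of the current IERS Bulletin C; confirm before relying on it.
URL = 'https://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat'

with urlopen(URL, timeout=30) as resp:
    bulletin = resp.read().decode('latin-1')
print(bulletin.splitlines()[0])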
There seem to be two new RF oddities that appeared after maintenance today:
Nothing immediately obvious from either the PR or SR bottom-stage OSEMs during this time. Ditto the BS and ITM oplevs.
Nothing immediately obvious from distribution amp monitors or LO monitors.
A bit more methodically now: I checked all the OSEM readbacks for the DRMI optics, including the IMC mirrors and the input mirrors. No obvious correlation with the POP90 fluctuations.
I am tagging detchar in this post. Betsy and I spent some more time looking at sus electronics channels, but nothing jumped out as problematic. (Although I attach the fast current monitors for the beamsplitter penultimate stage: UR looks like it has many fast glitches. I have not looked systematically at other current or voltage monitors on the suspensions.)
Most likely, noise hunting cannot continue until this problem is fixed.
We would greatly appreciate some help from detchar in identifying which sus electronics channels (if any) are suspect.
In this case, data during any of the ISC_LOCK guardian states 101 through 104 is good to look at (these correspond to DRMI locking with arms off resonance). Higher-numbered guardian states will also show this POP90 problem. This problem only started after Tuesday afternoon local time.
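For anyone pulling data, a minimal sketch of how to flag those states with gwpy (the guardian state channel name follows the usual convention but should be checked, and the GPS times are placeholders):

from gwpy.timeseries import TimeSeries

start, end = 1138600000, 1138603600                            # example GPS span only
state = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', start, end)  # assumed channel name

mask = (state.value >= 101) & (state.value <= 104)             # DRMI locking, arms off resonance
print('seconds in states 101-104:', mask.sum() / state.sample_rate.value)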
I said above that nothing can be seen in the OSEMs, but that is based only on second-trends of the time series. Perhaps something will be revealed in spectrograms, as when we went through this exercise several months ago.
Comparing MASTER and NOISEMON spectra from a nominal low noise time on Feb 3 with Jan 10, the most suspicious change is SR2 M3 UL. Previously, this noisemon looked similar to the other quadrants, but with an extra forest of lines above 100 Hz. Now the noisemon looks dead. Attached are spectra of the UR quadrant, showing that it hasn't changed, and spectra of SR2 M3 UL, showing that something has failed - either the noisemon or the driver. Blue traces are from Feb 3 during a nominal low noise time, and red are a reference from science time on Jan 10. I'm also attaching two PDFs: the first is spectra of master and noisemon channels, and their coherence, from the reference time; the second is the same from the current bad time. Ignore the empty plots; they happen if the drive is zero. It also seems like the BS M2 noisemon channels have gone missing since the end of the run, so I had to take them out of the configuration. I took out the ITMs as well, but I should probably check those too.
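To make the comparison easy to reproduce, here is a rough sketch of the ASD comparison with gwpy (the channel name follows the usual SUS naming convention but should be checked, and the GPS times are placeholders, not the actual times used above):

from gwpy.timeseries import TimeSeries
from gwpy.plot import Plot

CHAN = 'H1:SUS-SR2_M3_NOISEMON_UL_DQ'                  # assumed channel name
ref = TimeSeries.get(CHAN, 1136500000, 1136500600)     # Jan 10 reference (placeholder GPS)
now = TimeSeries.get(CHAN, 1138600000, 1138600600)     # Feb 3 low noise (placeholder GPS)

plot = Plot(ref.asd(fftlength=8), now.asd(fftlength=8), xscale='log', yscale='log')
plot.savefig('sr2_m3_ul_noisemon_asd.png')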
Attached are spectra comparing the SUS ETM L3 LV ESD channels from lock segments before (Jan 24, 2016 03:35:00 UTC) and after (Feb 1, 2016 01:50:00 UTC) the ESD driver update last Tuesday (alog 25175) and the subsequent ESD fixes on Wednesday (alog 25204). Before the install, the channels looked like ADC noise, whereas they look live now. The ETMx plot is included, but of course the ETMx ESD is not ON during either lock stretch, so all that plot really says is that something happened to the channels. Whether or not the ETMy channels look like they actually should, according to the various ECRs, is to be determined.
Keita helped me make more useful plots to evaluate the new LV chassis mods. See attached.
The alog that motivated all of these is alog 22199.
In Betsy's new plots, reference traces are with the old hardware and current traces are with the new, both in full low-noise lock. In summary, this is good.
The spectrum/coherence plot shows that the new whitening is useful: the monitor is actually monitoring what we send rather than just ADC noise. (It used to be totally useless for f > 7 Hz or so, as you can see from the reference coherence.) You can also see that there's a small coherence dip at around 30 Hz, and deeper dips around the various band-stop filters, but otherwise it's actually very good now.
In the second plot, you see the Y channel spectrum together with X. Since we don't send anything to X during a low-noise lock, X is just the sensing noise. Comparing the signal (red) and the sensing noise (light green), we can see that the signal is larger than the noise across the entire frequency range except in the stopbands.
At around 30 Hz (where we saw a tiny coherence dip in the first plot) the noise is only a factor of 3 or 4 below the signal. We expect the higher-frequency drive above f = 100 Hz to drop as we increase the power, so the signal-to-noise ratio there might drop in the future. There's still large headroom before we rail the ADC (RMS is 600 counts), so if necessary I'm sure we can make some improvement, but this is already very good for now.
The only question is, what are these lines doing there even when we don't send anything (X)?
It seems as if the 57 Hz line and its harmonics, whatever they are, are at least as large in the non-driven channel as in the driven channel.
57Hz was the HWS at the end stations.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25383
This change is the one we requested through ECR E1600030-v1. As reported by Dave above, we increased the number of available filter modules in DARM, PRCL, SRCL and MICH by inserting a second filter module in series for each of them.
The attached screenshots show how the Simulink models now look.
The changes and new modules are highlighted by circles and arrows in the screenshots. As shown, the DARM filter module is split into DARM1 and DARM2. DARM1 still has the triggered filters while DARM2 does not. Additionally, in order to preserve the important channels DARM_IN1 and DARM_OUT, we added two test points with those names. The same change was applied to PRCL, SRCL and MICH. This change will not impact the online calibration, calibration lines or hardware injections.
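As a quick sanity check on the new chain, here is a hedged sketch using pyepics; the readback names below follow the standard filter-module convention but should be confirmed against the new MEDM screen:

from epics import caget

# With the loop quiescent, DARM1's output should feed straight into DARM2,
# so DARM1_OUTMON and DARM2_INMON should track each other.
for pv in ('H1:LSC-DARM1_INMON', 'H1:LSC-DARM1_OUTMON',
           'H1:LSC-DARM2_INMON', 'H1:LSC-DARM2_OUTMON',
           'H1:LSC-DARM1_GAIN',  'H1:LSC-DARM2_GAIN'):
    print(pv, caget(pv))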
The last attachment is a screenshot of the new medm screen.
The models and screens are checked into SVN.
P.S. I have updated all the ISC-related guardian code so that it can handle this new filter arrangement. I did not get a chance to test it, so please watch out for bugs in the guardians.