Again, Masayuki Nakano reported with Stefan's account.
Kiwamu, Masayuki
We measured the spectra of the OMC DCPD signals with a single-bounce beam. This should help with the noise budget of the DARM signal.
1. Increase the IMC power
The IMC power was increased to 21 W, and H1:PSL-POWER_SCALE_OFFSET was changed to 21 accordingly.
2. Turn off the ISC_LOCK guardian
Requested 'DOWN' of the ISC_LOCK guardian so that it does nothing during the measurement.
3. Misalign the mirrors
To obtain the single-bounce beam, all mirrors except ITMX were misaligned by requesting 'MISALIGN' of each mirror's guardian.
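These guardian requests can also be scripted; below is a minimal sketch using pyepics, where the suspension node name is my assumption for illustration.

    from epics import caput  # pyepics

    # Step 2: ask the ISC_LOCK guardian to do nothing during the measurement.
    caput('H1:GRD-ISC_LOCK_REQUEST', 'DOWN')

    # Step 3: misalign an optic via its suspension guardian (node name illustrative).
    caput('H1:GRD-SUS_ETMY_REQUEST', 'MISALIGN')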
4. Align the OM mirrors
When we first got the single-bounce beam from the IFO, there was no signal on the ASC-AS A, B, or C QPDs. We aligned the OM1, OM2, OM3, and OMC suspensions using playback data of the OSEM signals.
5. Lock the OMC
The servo gain, 'H1:OMC-LSC_SERVO_GAIN', was set to 10, and the master gain of the OMC ASC was set to 0.1.
The DCPD output was 34 mA.
6. Measurement (without the ISS second loop)
The power spectra of the channels listed below were measured, from 1 Hz to 7 kHz with a BW of 0.1 Hz.
H1:OMC-DCPD_SUM_OUT
H1:OMC-DCPD_NULL_OUT
H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT
H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT was used as the out-of-loop sensor of the ISS.
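As a rough offline equivalent of this DTT measurement, one could compute amplitude spectral densities with Welch's method. A minimal sketch, assuming the channel data have already been fetched (e.g. via NDS2) into a NumPy array:

    import numpy as np
    from scipy.signal import welch

    def asd(data, fs, bw=0.1):
        """Amplitude spectral density with ~bw Hz frequency resolution."""
        nperseg = int(fs / bw)  # segment length sets the bin width
        f, psd = welch(data, fs=fs, nperseg=nperseg, window='hann')
        return f, np.sqrt(psd)

    # e.g. for H1:OMC-DCPD_SUM_OUT sampled at 16384 Hz:
    # f, a = asd(dcpd_sum, fs=16384)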
7. Close the ISS second loop
The ISS second loop was closed. The sensors used to derive the error signal were PD1-4.
8. Measurement (with the ISS second loop)
The same measurement as in step 6. In addition, the coherence between DCPD-SUM and SECONDLOOP_SUM was measured.
I scaled the out-of-loop sensor signal of the ISS (i.e., the residual intensity noise after the ISS second loop) to the same units as the OMC DCPD signals. The scaling factor was estimated by dividing the H1:OMC-DCPD_SUM_OUT spectrum (without ISS) by the H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT spectrum (also without ISS) at 100 Hz.
I scaled both spectra (hereafter 'both' means with and without the ISS closed) by the same scaling factor.
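A minimal sketch of that scaling, assuming the asd() helper above and arrays f, dcpd, and iss holding the no-ISS spectra (all variable names hypothetical):

    import numpy as np

    # The ratio of the two no-ISS spectra at 100 Hz calibrates the ISS
    # out-of-loop signal into DCPD units.
    i100 = np.argmin(np.abs(f - 100.0))  # frequency bin closest to 100 Hz
    scale = dcpd[i100] / iss[i100]

    # Apply the same factor to the ISS spectra with and without the loop closed.
    iss_scaled = iss * scale
    # iss_closed_scaled = iss_closed * scale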
The DCPD-SUM spectrum, the DCPD-NULL spectrum, and the scaled second-loop ISS out-of-loop sensor signals are shown in the attached plots.
Both NULL signals agree with the shot noise of a PD carrying 34 mA (cyan curve) above 30 Hz; below that, they appear to be limited by ADC noise.
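For reference, the shot-noise ASD for a DC photocurrent I is sqrt(2eI); a quick check of the 34 mA level (my arithmetic, not from the original entry):

    import math

    e = 1.602176634e-19          # elementary charge [C]
    I = 34e-3                    # DCPD sum photocurrent [A]
    print(math.sqrt(2 * e * I))  # ~1.04e-10 A/rtHz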
As for the SUM signals, they seem consistent with the scaled intensity noise above 300 Hz, and there is some coherence between the intensity noise and the OMC PD signal above 300 Hz (see the other plot). On the other hand, there seems to be some unknown noise below 300 Hz when the ISS second loop was closed.
This unknown noise might come from length motion of the OMC. I attached another plot, showing the same channel (upper) and the OMC error signal with a different servo gain for the OMC LSC loop. The error signal and the DCPD-SUM signal seem to have a similar structure around 100 Hz. I have not done any analysis yet, because these plots were measured after the whitening filter developed some trouble; we are planning to repeat the measurement with the whitening filter working.
As Masayuki reported above, we see unexplained coherent noise on the DCPDs in the 10-200 Hz band. However, according to an offline spectrogram analysis, it appears to be somewhat non-stationary. This suggests the existence of uncontrolled (and undesired) interferometry somewhere.
We should repeat the measurement with a different misalignment configuration.
Later, we became concerned about noise artifacts that could be introduced in this measurement by not-quite-misaligned mirrors producing a scattering shelf or the like. To test this theory, we looked back at the data in spectrograms and searched for non-stationary behavior. It seems we had two different non-stationary components: one below roughly 10 Hz and the other between 10 and 200 Hz. Attached are spectrograms produced by LIGODV web for 20 seconds during which we had 20 W from the PSL, the OMC locked with a gain of 10, and the ISS closed using PDs 1 through 4 as in-loop sensors.
In DCPD-SUM, it is clear that the component below 10 Hz was suddenly excited at t = 13 sec. Also, the shelf between 100 and 200 Hz appears to move up and down as a function of time.
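This kind of check can be reproduced offline; a minimal sketch with scipy/matplotlib, assuming the 20 s of H1:OMC-DCPD_SUM_OUT data are in an array `data` with sample rate `fs`:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.signal import spectrogram

    # 1-second FFTs with 50% overlap give 1 Hz resolution over the 20 s span.
    f, t, Sxx = spectrogram(data, fs=fs, nperseg=int(fs), noverlap=int(fs) // 2)

    plt.pcolormesh(t, f, np.sqrt(Sxx), shading='auto')  # plot as an ASD
    plt.yscale('log')
    plt.ylim(1, 500)
    plt.xlabel('Time [s]')
    plt.ylabel('Frequency [Hz]')
    plt.show()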
Also, here are two relevant ISS signals, which did not show any obvious correlation with the observed non-stationary behavior.
Masayuki and I noticed that the ADC noise in the OMC was too high. It turned out that the whitening filters had turned off for no obvious reason at around 17:00 local time, according to the trend.
Looking at the control screen, we noticed that some channels were not accessible; see the attached. We now seem to be able to toggle the analog whitening and digital anti-whitening independently. The rest of the PDs seem OK -- they don't have inaccessible channels.
Dave, JimB, Kiwamu,
We investigated a bit more today. Interestingly, we discovered that these blank channels had never existed before. The only remaining oddity is that we were unable to synchronously switch the analog and digital whitening by clicking the buttons on the OMC_DCPD screen.
Everything has been plugged into the right port. The readout seems reasonable. The MEDM screen now has the right channel for the HWS Plate Temp.
H1:TCS-ITMX_HWS_TEMPERATURESENSORPLATE has been reading a nonsense value of -200 since 2019-08-06 (alog 51087). When we work on or move the table in the future, we should check that this temperature sensor is working.
De-energized the Y2-8 beam tube ion pump and cut the HV cable to length now that the controller is in its permanent location. The first attempt at installing the new controller HV connector didn't work (I'm blaming dim lighting in the compressor room). I'll try again tomorrow after setting up some temporary lighting. Note: BT pressures are a factor of ~2 higher at the ends while the BT ion pumps are off.
~1525 - 1540 hrs. local. Next overfill to be Saturday, Feb. 6th, before 4:00 pm.
Activity Log: All Times in UTC (PT)
15:58 (07:58) Chris – Beam tube sealing ~150 yards from End-X
17:40 (09:40) Peter – Transition LVEA to laser safe
17:51 (09:51) Bubba – In the LVEA working on crane rail shimming
17:52 (09:52) Peter – Reset PSL External Shutter flow sensor
17:55 (09:55) Keita – Testing on ETM-X
18:31 (10:31) Kyle – Going to Y-Arm compressor room
18:43 (10:43) Reset tripped BS Stage 1 & Stage 2 WDs
18:45 (10:45) Mitch – Working in the Optics Lab
19:16 (11:16) Betsy – Going into LVEA to look for cables
19:21 (11:21) Mitch – Out of Optics Lab
19:22 (11:22) Betsy – Out of LVEA
19:49 (11:49) Reset tripped ETMX Stage 1 & Stage 2 WDs
19:50 (11:50) Kyle – Back from Y-Arm
21:38 (13:38) Kyle – Going back to Y-Arm compressor room
21:54 (13:54) Mitch – Going into the Optics Lab
21:55 (13:55) Christina – Forklift package from LSB to VPW
22:08 (14:08) Christina – Finished; forklift is parked back at the LSB
22:15 (14:15) Kiwamu – Opened main shutter (with Peter K's approval) – LVEA is still laser safe
22:27 (14:27) Alastair & Nutsinee – Going into the LVEA
22:34 (14:34) Mitch – Out of Optics Lab
22:53 (14:53) Bubba & John – In the LVEA working on the main crane shimming
22:59 (14:59) Reset tripped OMC SUS WD
23:40 (15:40) Alastair – Going into LVEA to turn on CO2 laser
00:00 (16:00) Turn over to Travis
End of Shift Summary:
Title: 02/04/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: Kiwamu, Hugh; Incoming Operator: Travis
Shift Detail Summary: IFO was not locked during the day due to elevated seismic and microseism levels. Various groups took advantage of the lack of locking to do crane maintenance in the LVEA and make several fixes/upgrades. Ongoing commissioning work and testing during the shift.
FRS 4296, 4333; Bugzilla 969. The GDS tools are now version gds-2.17.1.3-1, which should fix the manual channel-name input problems in DTT. When the "Start" button is pressed to run an analysis, DTT checks the active channel names to see whether they exist in the channel list. If any do not, a popup window appears listing the channel names that could not be found; you then have the option to cancel the test or continue with the channels that are valid. This release is meant to allow users to copy/paste or manually enter channel names for analysis. Since a manual entry does not provide any channel-rate data, it is still possible to get the wrong channel with NDS2 when there is more than one channel rate to choose from.
Kiwamu, Jim, Dave:
New h1lsc and h1omc models were created and installed this morning. The DAQ was then restarted.
The change was to expand a single filter module to two in series (providing the potential for 20 filters) for the DARM, MICH, PRCL and SRCL paths.
The models were changed at 11:11 PST and the DAQ restarted at 11:13 PST.
This change is the one we requested through ECR E1600030-v1. As Dave reported above, we increased the number of available filter modules in DARM, PRCL, SRCL and MICH by inserting a second filter module in series for each of them.
The attached screenshots show how the Simulink models now look.
The changes and new modules are highlighted by circles and arrows in the screenshots. As shown, the DARM filter module is split into DARM1 and DARM2; DARM1 still has the triggered filters while DARM2 does not. Additionally, in order to preserve the important channels DARM_IN1 and DARM_OUT, we added two test points with those names. The same change was applied to PRCL, SRCL and MICH. This change will not impact the online calibration, calibration lines, or hardware injections.
The last attachment is a screenshot of the new medm screen.
The models and screens are checked into SVN.
P.S. I have updated all the ISC-related guardian code so that it can handle this new filter arrangement. I did not get a chance to test it, so please watch out for bugs in the guardians.
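For illustration only (not the actual diff): guardian code that used to address the single DARM bank now needs to handle DARM1 and DARM2 separately. A hedged sketch using the ezca filter interface, with the FM numbers being illustrative assumptions:

    # Sketch only: `ezca` is provided by the guardian environment.
    # Triggered filters stay in DARM1; DARM2 holds the new series stage.
    darm1 = ezca.get_LIGOFilter('LSC-DARM1')
    darm2 = ezca.get_LIGOFilter('LSC-DARM2')

    darm1.switch_on('FM1', 'FM2')  # e.g. the triggered filters
    darm2.switch_on('FM1')         # e.g. a filter in the new second bank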
Changes attached.
Updated userapps: isi/common/guardian/isiguardianlib/watchdog:
hugh.radkins@opsws1:watchdog 0$ svn up
U states.py
Updated to revision 12546.
guardctrl restart ISI_ETMX_ST2: this caused the guardian to de-isolate Stage 2. Not what I experienced Tuesday when I did something similar. When I restarted the Stage 1 guardian, it did not touch the platform, as I expected.
Tripped the platform, and now, as expected, the damping loops were turned off when a state 4 trip occurred. Okay, so the update fixed the issue.
At ETMY, I restarted Stage 1 first, then Stage 2, and the Guardian did nothing to the platforms.
Restarted HAM3, no problem. Tested it (tripping PR2 in the process) and the DAMPing path was turned off. So good.
When restarting ISI_HAM6, the ISI guardian switched its request state to NONE but did not do anything, and the chamber manager was throwing messages. Toggling the ISI to HIGH_ISOLATED made everyone happy again.
TCS crew is working on SRC so I'll not touch them.
Guardian restart & test ETMX, HAM3.
Guardian restart ETMY, HAM2, HAM6.
I'll complete the remainder of the restarts and maybe test a couple more when TCS is clear.
Restarted the guardian for the BS ISI stages.
Completed the Guardian restarts but I did not do any further trips to test the function.
Restarted, HAMs 4 & 5 and the ITMs. Interesting about half of these, toggled the ISI to NONE and some became unmanaged, but, switching them back to HIGH_ISOLATED and INITing the CHamber manager set things back right without troubling the platforms.
WP closed.
It turns out that the HWS at the end stations were temporarily turned on. When an HWS is on and not on a separate power supply, it injects 57 Hz noise into the ESD, which is picked up by the monitor. Another mystery solved.
On the attached top left, red is with the EX HWS off and blue with it on, IFO unlocked, ETMs uncontrolled. The EX ESD UL digital output was turned off. The 57 Hz line and its harmonics are gone with the HWS off.
On the bottom left is the same thing but with the EY HWS off/on. There is no 57 Hz in either of the two traces; that's because the EY HWS is on a separate temporary power unit (alog 18355). The reason we also saw 57 Hz in EY LVESDAMON in lock (alog 25310) is not clear. Maybe the EX ESD was not turned off (though all outputs were zeroed) and shook ETMX, and the line might have propagated to EY through the DARM loop (we can see the line in DARM).
I don't know whether the X-Y difference (bottom right), apart from the calibration lines in Y, is real or just sensing noise.
WP#5722
While we have high microseism, I wanted to revisit how this node is doing, since I haven't touched it since before ER8. I created and started the node, then had it switch blends with no problem. I then tried to mess it up to see how it would react; this is where I left off last time.
Some of my tests:
Overall it went well, and I have a few things to work on. When I get another chance to test this, I will try it on ETMX with the BRS SC as well - the BLENSCOR node, as I have been calling it.
I flushed the chiller water for both the X and Y arms today, with advice from Jeff on what to do. The water in the LHO chillers has not been a problem so far and shows no evidence of particulate or discoloration. However, the chiller manual suggests that replacing the water every 6 months would be a reasonable maintenance schedule.
Given that there was no problem with the water, we could have gotten away with just draining and refilling the chillers a couple of times to replace the water. However, at LLO we will want to actively flush the system, so for that reason I also flushed the chillers here.
First I turned off the lasers, then their power supplies and the power supply to the AOM driver. The laser, laser driver, AOM head, and AOM driver are all water cooled. For each chiller in turn, I performed the flush as follows:
*Tools required: flat-blade screwdriver for the hose clamp
1) Turn off the chiller, then disconnect it using the quick connects, then drain it using the drain plug at the bottom of the back. ***Suggest making sure there are no power cables on the ground at this point, because water may get on the floor.***
2) Close the drain plug and refill the chiller with distilled water ("laboratory grade" = reverse-osmosis filtered, then distilled). Reconnect the cables and the "Process Outlet" pipe to the top of the back of the chiller.
3) Leave the process inlet pipe disconnected and remove the quick-disconnect attachment from the pipe. Run this pipe into a water collection basin (I used a mop bucket, which has wheels and is a good size for this).
4) When filling the water collection basin, make sure it doesn't get so full that you can't move it back down the stairs; 5 gallons is about as much as you would want to carry.
5) Repeat the following steps until all the water is replaced:
a. Turn on the chiller while holding the pipe in the basin.
b. Wait for approximately 10 seconds of water to come out of the chiller, then turn the chiller off using the switch on the back (not the one on the front, which takes too long to turn the chiller off).
c. Top the chiller back up to full with distilled water.
6) Once the water is replaced, reconnect the process inlet pipe and hose clamp.
7) Turn on the chiller and run it for several minutes. Check the water level and top up as necessary.
Transition Summary: 02/04/2016, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO unlocked. The wind is a light breeze (< 7 mph). Seismic and microseism are still as elevated as they were last night. Locking may be somewhat difficult at this time.
There seem to be two new RF oddities that appeared after maintenance today:
Nothing immediately obvious from either the PR or SR bottom-stage OSEMs during this time. Ditto the BS and ITM oplevs.
Nothing immediately obvious from distribution amp monitors or LO monitors.
A bit more methodically now: I checked all the OSEM readbacks for the DRMI optics, including the IMC mirrors and the input mirrors. No obvious correlation with the POP90 fluctuations.
I am tagging detchar in this post. Betsy and I spent some more time looking at sus electronics channels, but nothing jumped out as problematic. (I attach the fast current monitors for the beamsplitter penultimate stage: UR looks like it has many fast glitches. I have not looked systematically at other current or voltage monitors on the suspensions.)
Most likely, noise hunting cannot continue until this problem is fixed.
We would greatly appreciate some help from detchar in identifying which sus electronics channels (if any) are suspect.
In this case, data during any of the ISC_LOCK guardian states 101 through 104 is good to look at (these correspond to DRMI locking with arms off resonance). Higher-numbered guardian states will also show this POP90 problem. This problem only started after Tuesday afternoon local time.
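For anyone scripting this up, one way to find candidate times is to threshold the guardian state channel; a minimal sketch assuming gwpy is available, that the state is recorded in H1:GRD-ISC_LOCK_STATE_N, and with `start`/`end` being hypothetical GPS times:

    from gwpy.timeseries import TimeSeries

    state = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', start, end)

    # Samples where ISC_LOCK was in states 101-104 (DRMI, arms off resonance).
    in_drmi = (state.value >= 101) & (state.value <= 104)
    gps_times = state.times.value[in_drmi]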
I said above that nothing can be seen in the OSEMs, but that is based only on second-trends of the time series. Perhaps something will be revealed in spectrograms, as when we went through this exercise several months ago.
Comparing MASTER and NOISEMON spectra from a nominal low-noise time on Feb 3 with Jan 10, the most suspicious change is SR2 M3 UL. Previously, this noisemon looked similar to the other quadrants, but with an extra forest of lines above 100 Hz; now it looks dead. Attached are spectra of the UR quadrant, showing that it hasn't changed, and spectra of SR2 M3 UL, showing that something has failed - either the noisemon or the driver. Blue traces are from Feb 3 during a nominal low-noise time, and red traces are a reference from science time on Jan 10. I'm also attaching two PDFs: the first contains spectra of the master and noisemon channels, and their coherence, from the reference time; the second is the same from the current bad time. Ignore the empty plots - they happen when the drive is zero. Also, the BS M2 noisemon channels seem to have gone missing since the end of the run, so I had to take them out of the configuration. I also took out the ITMs, but I should probably check those too.
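The MASTER/noisemon coherence check is easy to reproduce offline; a minimal sketch with scipy, assuming the two time series have been fetched at a common sample rate (variable names hypothetical):

    from scipy.signal import coherence

    # master:   the drive signal, e.g. the SR2 M3 UL MASTER output
    # noisemon: the corresponding NOISEMON readback
    f, coh = coherence(master, noisemon, fs=fs, nperseg=int(10 * fs))

    # A healthy noisemon is coherent with the drive wherever the drive is
    # above the monitor's sensing noise; a dead one shows ~zero coherence.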
Attached are spectra comparing the SUS ETM L3 LV ESD channels from lock segments before (Jan 24, 2016 03:35:00 UTC) and after (Feb 1, 2016 01:50:00 UTC) the ESD driver update last Tuesday (alog 25175) and the subsequent ESD fixes on Wednesday (alog 25204). Before the install, the channels looked like ADC noise, while they look live now. The ETMX plot is included, but of course the ETMX ESD is not on during either lock stretch, so all that plot really says is that something happened to the channels. Whether or not the ETMY channels look like they actually should, according to the various ECRs, is to be determined.
Keita helped me make more useful plots to evaluate the new LV chassis mods. See attached.
The alog that motivated all of these is alog 22199.
In Betsy's new plots, the reference traces are with the old hardware and the current traces are with the new, both in full low-noise lock. In summary, this is good.
The spectrum/coherence plot shows that the new whitening is useful: the monitor is actually monitoring what we send rather than just ADC noise. (It used to be totally useless for f > 7 Hz or so, as you can see from the reference coherence.) You can also see that there is a small coherence dip at around 30 Hz and deeper dips at the various band-stop filters, but otherwise it's actually very good now.
In the second plot, you see the Y channel spectrum together with X. Since we don't send anything to X during low-noise lock, X is just the sensing noise. Comparing the signal (red) and the sensing noise (light green), we can see that the signal is larger than the noise across the entire frequency range except in the stopbands.
At around 30 Hz (where we saw a tiny coherence dip in the first plot) the noise is only a factor of 3 or 4 below the signal. We expect the higher-frequency drive above f = 100 Hz to drop as we increase the power, so the signal-to-noise ratio there might drop in the future. There is still a large headroom before we rail the ADC (the RMS is 600 counts), so if necessary I'm sure we can make some improvement, but this is already very good for now.
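For scale (my arithmetic, assuming the usual ±32768-count range of a 16-bit ADC): a 600-count RMS leaves a factor of roughly 50 in amplitude, i.e. about 35 dB, before railing.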
The only thing is: what are these lines doing there even when we don't send anything (X)?
It seems as if the 57 Hz line and its harmonics, whatever they are, are at least as large in the non-driven channel as in the driven one.
The 57 Hz was the HWS at the end stations:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25383