(Jenne, Corey)
Following Evan, Richard, & Dave's work of adding the ability to listen to LSC error signals in the Control Room (via the big speaker in the front/right of the Control Room), I built a rudimentary medm screen to put all of the relevant interfaces needed for this on one screen (basically following the snapshot Evan posted in his alog).
This was also an medm learning experience for me--thanks for the tutorial, Jenne! I went through and made sure a signal from each of the LSC error signals indeed makes it through a matrix and out to the AUDIO output filter bank.
The medm (H1OAF_AUDIO_OVERVIEW.adl) has been saved & committed to the svn. Snapshot of the medm screen is attached.
Let me know if anything should be added (I wanted to have a boombox image as the background image, but medm didn't seem to like that idea).
Looking forward to hearing this in the Control Room soon!
(Deleted this task from the Operator Commissioning Task wiki.)
[Alastair, Nutsinee]
We did a power budget of the Y-arm table, which Nutsinee is tidying up and we'll post soon. The gist of this was that we are losing a significant fraction of power due to clipping on one of the 2" mirrors, partly due to alignment and partly because the mount is so deep that the mirror sits very far back resulting in a smaller clear aperture than we had intended.
The beam is also off-center on the lens that sits in front of this mirror, suggesting that at some point this has moved and caused this clipping. We still have the beam aligned to the irises on the final part of the table, so it can be realigned through this section without redoing the alignment to the CP (although this will be needed after the table is moved for TMDS anyway). I'm not changing the alignment right now, but this probably needs to be done during my next visit.
The faulty in-loop ISS photodiode has been removed and replaced temporarily with a beam dump. I'll take this back to Caltech for Ben to look at. It is serial number 003, with S-number 1400286.
The SUS medm-screen-modifying team (Kissel, Weaver, Sadecki?) recently made a change to the ETM ESDs in "medm land" by taking an older stand-alone medm screen and splitting it between the SUS OVERVIEW & the SUS AUX Channel Monitor screen (reached via a button in the upper right of the SUS OVERVIEW). (Perhaps this comes from this alog?)
Since we no longer need the ESD stand-alone medms, I removed both from the SUS/ETM ESD pull-downs on the sitemap. The sitemap was svn-committed afterward.
This was an item on the Operator Commissioning Task wiki, and this task is now complete and will be removed.
The actual (now-moot) ESD stand-alone medm screens were not touched and are still located at: $(USERAPPS)/sus/common/medm/quad/SUS_CUST_QUAD_MONITOR_E[x/y]_OVERVIEW.adl
Transition Summary: 02/05/2016, Day Shift 16:00 – 00:00 UTC (08:00 – 16:00 PT). All times in UTC (PT).
State of H1: IFO is unlocked. The wind is a light breeze (< 7 mph). Seismic and microseism are still elevated, as they were last night, which may make locking a bit difficult.
This is the first diagnostic breadboard scan conducted since the end of O1; the last scan was conducted back on August 10th, 2015.
Power noise: Looks about the same as before. There is a slight decrease in power on the photodiode, which is not surprising given that the alignment may need some tweaking.
Frequency noise: The control signal is now lower from 1-10 Hz. The noise is lower between 100-1000 Hz and slightly higher elsewhere, though still lower than the reference measurement. I would say that the frequency noise is slightly higher now that the NPRO is in its dotage.
Pointing noise: Something is wrong with the measurement. This might be an alignment problem, although nothing wrong was reported by the MEDM screens. The control signal is off a little; this might be due to the PZT mirror mounts not quite coming back to where they were after electrical power was restored.
Mode scan: The number of higher-order modes has decreased, which is surprising (to me at least), but the higher-order-mode power has increased by about 0.5%, which doesn't seem too bad to me.
ISS power noise: The noise on one of the photodetectors (PDA) is slightly higher than before. The horizontal beam pointing noise has increased by just over a factor of 2; at low frequencies (below 5 Hz), it is worse by a factor of ~5. This might be due to the re-alignment of the beam onto the quadrant photodiode inside the ISS photodiode box.
It is good to know that the diagnostic breadboard seems to work, modulo some tweaking, after its O1 hibernation.
The LVEA has transitioned back to LASER HAZARD.
Summary: Another rough night for locking. Commissioners were doing unlocked commissioning tasks for the first half of my shift. For the latter half, we have been struggling to get much past DRMI locked, only making it to ENGAGE_ASC once. Microseism is still elevated, but should be borderline passable for locking. All else seems normal. On a bright note, I did manage to witness the elusive PRMI_2_DRMI transition work!
Activity log in UTC:
0:34 John and Bubba done with crane work in LVEA
0:51 Kyle back from EY
Again, Masayuki Nakano reporting from Stefan's account.
Kiwamu, Masayuki
We measured the spectra of the OMC DCPD signals with a single-bounce beam. This should help with the noise budget of the DARM signal.
1. Increase the IMC power
The IMC power was increased to 21 W. H1:PSL-POWER_SCALE_OFFSET was also changed to 21.
2. Turn off the ISC_LOCK guardian
We requested 'DOWN' from the ISC_LOCK guardian so that it would not do anything during the measurement.
3. Misalign the mirrors
To obtain the single-bounce beam, all mirrors except ITMX were misaligned by requesting 'MISALIGN' from each mirror's guardian.
4. Align the OM mirrors
When we got the single-bounce beam from the IFO, there was initially no signal on the ASC-AS A, B, and C QPDs. We aligned the OM1, OM2, OM3, and OMC suspensions using playback data of the OSEM signals.
5. Lock the OMC
The servo gain, 'H1:OMC-LSC_SERVO_GAIN', was set to 10 and master gain of the OMC-ASC was set to 0.1.
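For reference, a minimal sketch of applying these settings from a script with pyepics, assuming EPICS write access from a control-room workstation (the steps above were actually done through guardian/medm, and the ASC master gain channel name is an assumption):

# Minimal sketch, assuming pyepics and EPICS write access; not the actual
# procedure, which was done through guardian/medm.
from epics import caput

caput("H1:PSL-POWER_SCALE_OFFSET", 21)  # step 1: match the 21 W input power
caput("H1:OMC-LSC_SERVO_GAIN", 10)      # step 5: OMC length servo gain
caput("H1:OMC-ASC_MASTERGAIN", 0.1)     # step 5: ASC master gain (channel name assumed)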
The DCPD output was 34 mA.
6. Measurement (without the ISS second loop)
The power spectra of the channels below were measured, spanning 1 Hz - 7 kHz with a BW of 0.1 Hz (see the sketch after the channel list).
H1:OMC-DCPD_SUM_OUT
H1:OMC-DCPD_NULL_OUT
H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT
H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT was used as the out-of-loop sensor of the ISS.
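A hedged offline sketch of such a spectrum measurement with gwpy, assuming NDS access to the channels above; the GPS times are placeholders and this is not the tool actually used:

# Hedged offline sketch, not the actual measurement script; assumes gwpy
# with NDS access to the channels, and placeholder GPS times.
from gwpy.timeseries import TimeSeries

channels = [
    "H1:OMC-DCPD_SUM_OUT",
    "H1:OMC-DCPD_NULL_OUT",
    "H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT",
]
start, end = 1138600000, 1138600600  # placeholder GPS span

asds = {}
for name in channels:
    data = TimeSeries.get(name, start, end)
    # fftlength = 10 s gives the 0.1 Hz bin width quoted above
    asds[name] = data.asd(fftlength=10, overlap=5)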
7. Close the ISS second loop
The ISS second loop was closed. The sensors used to derive the error signal were PD1-4.
8. Measurement (with the ISS second loop)
The same measurement as in step 6. In addition, the coherence function between DCPD-SUM and SECONDLOOP_SUM was measured.
I scaled the ISS out-of-loop sensor signal, i.e. the residual intensity noise after the ISS second loop, into the same units as the OMC-DCPD signals. The scaling factor was estimated by dividing the H1:OMC-DCPD_SUM_OUT spectrum (without ISS) by the H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT spectrum (also without ISS) at 100 Hz.
I scaled both spectra (hereafter 'both' means with and without the ISS closed) by the same factor.
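Continuing the hedged sketch above, the scaling step would look like this (assuming the spectra from the earlier snippet, taken with the loop open):

# Hedged continuation of the earlier sketch: calibrate the ISS out-of-loop
# sensor into OMC-DCPD units using the ratio of the two spectra at 100 Hz.
import numpy as np

dcpd = asds["H1:OMC-DCPD_SUM_OUT"]
iss = asds["H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT"]

idx = np.argmin(np.abs(dcpd.frequencies.value - 100.0))
scale = dcpd.value[idx] / iss.value[idx]

# The same factor is then applied to both the ISS-open and ISS-closed spectra.
iss_scaled = iss * scale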
You can see the DCPD-SUM spectrum, the DCPD-NULL spectrum, and the scaled second-loop ISS out-of-loop sensor signals in the attached plots.
Both NULL signals agree with the shot noise of a PD with 34 mA of photocurrent (cyan curve) above 30 Hz; below that, they appear to be limited by ADC noise.
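For reference, the shot-noise level for a 34 mA DC photocurrent (presumably what the cyan curve shows) is:

sqrt(2 e I) = sqrt(2 x 1.6e-19 C x 34 mA) ~ 1.0e-10 A/rtHz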
As for the SUM signals, they seem consistent with the scaled intensity noise above 300 Hz, and there is some coherence between the intensity noise and the OMC PD signal above 300 Hz (see the other plot). On the other hand, there seems to be some unknown noise below 300 Hz when the ISS second loop was closed.
Possibly this unknown noise comes from the length motion of the OMC. I attached another plot: it shows the same channel (upper) and the OMC error signal with a different servo gain for the OMC LSC loop. The error signal and the DCPD-SUM signal seem to have a similar structure around 100 Hz. I haven't done any analysis yet because these plots were measured after the whitening filter had some trouble, and we are planning to repeat the measurement with the whitening filter.
As Masayuki reported above, we see unexplained coherent noise on the DCPDs in the 10-200 Hz frequency band. However, according to an offline spectrogram analysis, it appears to be somewhat non-stationary. This indicates the existence of uncontrolled (and undesired) interferometry somewhere.
We should repeat the measurement with a different misalignment configuration.
Later, we became concerned about noise artifacts that could be introduced into this measurement by not-quite-misaligned mirrors producing a scattering shelf or some such. To test this theory, we looked back at the data in spectrograms and searched for non-stationary behavior. It seems that we had two different non-stationary components: one below 10-ish Hz and the other between 10 and 200 Hz. Attached are spectrograms produced by LIGODV-web for 20 sec during which we had 20 W PSL power, the OMC locked with a gain of 10, and the ISS closed using PDs 1 through 4 as in-loop sensors.
In DCPD-SUM, it is clear that the component below 10 Hz was suddenly excited at t = 13 sec. Also, the shelf between 100 and 200 Hz appears to move up and down as a function of time.
Also, here are two relevant ISS signals which did not show obvious correlation with the observed non-stationary behavior.
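A minimal gwpy sketch of the spectrogram check (we actually used LIGODV-web; the GPS time is a placeholder):

# Hedged sketch of the offline spectrogram check; the alog used LIGODV-web.
from gwpy.timeseries import TimeSeries

start = 1138600000  # placeholder GPS time of the 20 s stretch
data = TimeSeries.get("H1:OMC-DCPD_SUM_OUT", start, start + 20)

# 1 s strides with 0.5 s FFTs are enough to see the 100-200 Hz shelf
# moving up and down in time.
specgram = data.spectrogram(stride=1, fftlength=0.5, overlap=0.25) ** (1 / 2.)
plot = specgram.plot(norm="log")
plot.show()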
Masayuki and I noticed that the ADC noise in the OMC was too high. It turned out that the whitening filters had turned off for no obvious reason at around 17:00 local time, according to the trends.
Looking at the control screen, we noticed that some channels were not accessible. See the attached. We seem to be able to toggle the analog whitenings and digital anti-whitenings independently now. The rest of the PDs seem OK -- they don't have inaccessible channels.
Dave, JimB, Kiwamu,
We investigated a bit more today. Interestingly, we discovered that these blank channels never existed before. The only remaining oddity is the fact that we were unable to synchronously switch the analog and digital whitenings by clicking the buttons in the OMC_DCPD screen.
Everything has been plugged into the right port. The readout seems reasonable. The medm screen now has the right channel for HWS Plate Temp.
H1:TCS-ITMX_HWS_TEMPERATURESENSORPLATE had been reading a nonsense value of -200 since 2019-08-06 (alog 51087). When we work on or move the table in the future, we should check that this temperature sensor is working.
De-energized Y2-8 beam tube ion pump and cut HV cable to length now that controller is in its permanent location. 1st attempt at installing new controller HV connector didn't work (I'm blaming dim lighting in compressor room). I'll try again tomorrow after setting up some temporary lighting. Note: BT pressures are a factor of ~2 higher at the ends while BT ion pumps are off.
~1525 - 1540 hrs. local. Next overfill to be Saturday, Feb. 6th before 4:00 pm.
Kiwamu, Jim, Dave:
New h1lsc and h1omc models were created and installed this morning. The DAQ was then restarted.
The change was to expand a single filter module to two in series (providing the potential for 20 filters) for the DARM, MICH, PRCL and SRCL paths.
The models were changed at 11:11 PST and the DAQ restarted at 11:13 PST.
This change is the one we requested through the ECR E1600030-v1. As reported by Dave above, we increased the number of available filter modules in DARM, PRCL, SRCL and MICH by inserting a filter in series for each of them.
The attached screenshots show how the simulink models now look.
The changes and new modules are highlighted by circles and arrows in the screenshots. As shown, the DARM filter module is split into DARM1 and DARM2. DARM1 still has the triggered filters while DARM2 does not. Additionally, in order to preserve the important channels DARM_IN1 and DARM_OUT, we added two test points with those names. The same change was applied to PRCL, SRCL and MICH. This change will not impact the online calibration, calibration lines, or hardware injections.
The last attachment is a screenshot of the new medm screen.
The models and screens are checked into SVN.
P.S. I have updated all the ISC-related guardian code so that it can handle this new filter situation. I did not get a chance to test it, so please watch out for bugs in the guardians.
Updated userapps: isi/common/guardian/isiguardianlib/watchdog:
hugh.radkins@opsws1:watchdog 0$ svn up
U states.py
Updated to revision 12546.
guardctrl restart ISI_ETMX_ST2: this caused the guardian to de-isolate stage 2, which is not what I experienced Tuesday when I did something similar. When I restarted the stage 1 guardian, it did not touch the platform, as I expected.
Tripped the platform, and now, as expected, the damping loops were turned off when the state 4 trip occurred. Okay, so the update fixed the issue.
At ETMY, restarted Stage1 first, then Stage2, and Guardian did nothing to the platforms.
Restarted HAM3, no problem. Tested (tripping PR2 in the process) and the DAMPing path was turned off. So good.
When restarting ISI_HAM6, the ISI guardian switched its request state to NONE but did not do anything, and the chamber manager was throwing messages. Toggling the ISI to HIGH_ISOLATED made everyone happy.
TCS crew is working on SRC so I'll not touch them.
Guardian restart & test ETMX, HAM3.
Guardian restart ETMY, HAM2, HAM6.
I'll complete the remainder of the restarts and maybe test a couple more when TCS is clear.
restarted the guardian for the BS ISI stages.
Completed the Guardian restarts but I did not do any further trips to test the function.
Restarted HAMs 4 & 5 and the ITMs. Interestingly, about half of these toggled the ISI to NONE on restart, and some became unmanaged; but switching them back to HIGH_ISOLATED and INITing the chamber manager set things back right without troubling the platforms.
WP closed.
Vern, Jim, Dave:
Several times in the past few weeks we have seen an invocation of tconvert produce the output:
tconvert NOTICE: Leap-second info in /ligo/apps/linux-x86_64/ligotools/config/public/tcleaps.txt is no longer certain to be valid; We got valid information from the web, but were unable to update the local cache file: No permission to write in directory /ligo/apps/linux-x86_64/ligotools/config/public
This afternoon we spent some time tracking this down. tconvert uses a web server in France to access the IERS Bulletin-C information page, which alerts us to upcoming leap seconds. It appears that this server was pure HTTP until January 11th this year; it is now an HTTP server which redirects the request to an HTTPS page. The TCL code inside tconvert interprets the redirection return message as an error, and then marks the leap-second information as expiring 24 hours in the future.
The procedure we used to maintain future leap-second notification was to run tconvert twice a year as user controls, which updated a leap-seconds database file (owned by controls) that was good for the next six months. With the recent HTTP issue, controls now produces a file which is good for only a day.
This afternoon we ran a modified tconvert which accesses a different leap-seconds HTTP page, and the LHO CDS database file is now good through 8th August. In the meantime we will raise an FRS and work with Peter S on a long-term solution.
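For illustration only (tconvert itself is TCL): the fix amounts to following the HTTP-to-HTTPS redirect instead of treating it as an error. In Python, urllib does this automatically; the host below is the IERS server in question, but the exact path is an assumption:

# Illustrative sketch only; tconvert is TCL, and the exact bulletin URL
# is an assumption. urllib follows the 3xx redirect that tripped up the
# TCL code.
from urllib.request import urlopen

URL = "http://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat"  # path assumed
with urlopen(URL) as resp:
    bulletin = resp.read().decode("latin-1", errors="replace")
print(bulletin[:200])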
There seem to be two new RF oddities that appeared after maintenance today:
Nothing immediately obvious from either the PR or SR bottom-stage OSEMs during this time. Ditto the BS and ITM oplevs.
Nothing immediately obvious from distribution amp monitors or LO monitors.
A bit more methodically now: all the OSEM readbacks for DRMI optics, including the IMC mirrors and the input mirrors. No obvious correlation with POP90 fluctuations.
I am tagging detchar in this post. Betsy and I spent some more time looking at sus electronics channels, but nothing jumped out as problematic. (Although I attach the fast current monitors for the beamsplitter penultimate stage: UR looks like it has many fast glitches. I have not looked systematically at other current or voltage monitors on the suspensions.)
Most likely, noise hunting cannot continue until this problem is fixed.
We would greatly appreciate some help from detchar in identifying which sus electronics channels (if any) are suspect.
In this case, data during any of the ISC_LOCK guardian states 101 through 104 are good to look at (these correspond to DRMI locking with the arms off resonance); a data-finding sketch follows below. Higher-numbered guardian states will also show this POP90 problem. The problem only started after Tuesday afternoon local time.
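A hedged sketch of how detchar could pick out those stretches (the guardian state readback channel follows the usual GRD naming convention, assumed here; the GPS span is a placeholder):

# Hedged sketch: find times when ISC_LOCK was in states 101-104 (DRMI
# locking, arms off resonance). Channel name assumed from the usual
# guardian convention.
from gwpy.timeseries import TimeSeries

start, end = 1138500000, 1138586400  # placeholder GPS span
state = TimeSeries.get("H1:GRD-ISC_LOCK_STATE_N", start, end)
in_drmi = (state.value >= 101) & (state.value <= 104)
segments = state.times.value[in_drmi]  # GPS seconds worth checking against POP90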
I said above that nothing can be seen in the OSEMs, but that is based only on second-trends of the time series. Perhaps something will be revealed in spectrograms, as when we went through this exercise several months ago.
Comparing MASTER and NOISEMON spectra from a nominal low-noise time on Feb 3 with Jan 10, the most suspicious change is SR2 M3 UL. Previously, this noisemon looked similar to the other quadrants, but with an extra forest of lines above 100 Hz. Now, the noisemon looks dead.
Attached are spectra of the UR quadrant, showing that it hasn't changed, and spectra of SR2 M3 UL, showing that something has failed - either the noisemon or the driver. Blue traces are from Feb 3 during a nominal low-noise time, and red are a reference from science time on Jan 10.
I'm also attaching two PDFs: the first is spectra of the master and noisemon channels, and their coherence, from the reference time; the second is the same from the current bad time. Ignore the empty plots; they happen if the drive is zero.
Also, it seems like the BS M2 noisemon channels have gone missing since the end of the run, so I had to take them out of the configuration. I also took out the ITMs, but I should probably check those too.
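For anyone repeating this, a hedged sketch of the drive/noisemon coherence comparison (channel names follow the usual SUS convention but should be checked against the DAQ channel list; the GPS time is a placeholder):

# Hedged sketch of the MASTER/NOISEMON coherence check; channel names and
# GPS time are assumptions to be checked against the DAQ channel list.
from gwpy.timeseries import TimeSeries

t0 = 1138500000  # placeholder GPS time during nominal low noise
drive = TimeSeries.get("H1:SUS-SR2_M3_MASTER_OUT_UL_DQ", t0, t0 + 600)
mon = TimeSeries.get("H1:SUS-SR2_M3_NOISEMON_UL_OUT_DQ", t0, t0 + 600)

# Match sample rates before computing coherence, in case the DQ channels
# are stored at different rates.
if mon.sample_rate != drive.sample_rate:
    mon = mon.resample(drive.sample_rate)

# A live noisemon shows high coherence where the drive is above the monitor
# noise floor; a dead one (as suspected for SR2 M3 UL) shows none.
coh = drive.coherence(mon, fftlength=8, overlap=4)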