I started doing a power budget for the table, but got sidetracked investigating the in-loop diode, which doesn't have any response. I tried realigning onto it but got a strange response that included negative voltages. As a result the lens isn't in place in front of the diode now, so I've blocked this path using a 10W power meter head (this path gets ~200mW, so this head can easily cope). The laser is turned off at the controller just now.
There is already a fairly large power reduction through the AOM, from 63W down to 56.6W, then down to 56.1W after the polarizer. The PDs are getting 450mW, so even after these stages there is more than 50W going into the rest of the table. It is pretty tricky getting power meters in because there isn't a lot of room, so we may not get a complete picture of the power losses. I'll work more on this tomorrow.
Hugh asked why there were 'dot nfs' files in his directory and why he could not remove them; later, when he exited MATLAB, they were removed. Here is part of his directory listing when the files were there:
-rw-rw-r-- 1 hugh.radkins controls 73076 Feb 18 2015 .nfs000000000d840bb80000002f
-rw-rw-r-- 1 hugh.radkins controls 76376 Feb 18 2015 .nfs000000000d840bb70000002e
-rw-rw-r-- 1 hugh.radkins controls 5974 Feb 18 2015 .nfs000000000d840bb60000003e
-rw-rw-r-- 1 hugh.radkins controls 5974 Feb 18 2015 .nfs000000000d840b9a0000002d
-rw-rw-r-- 1 hugh.radkins controls 73076 Feb 18 2015 .nfs000000000d840b2c0000002b
-rw-rw-r-- 1 hugh.radkins controls 73076 Feb 18 2015 .nfs000000000d840b210000002a
-rw-rw-r-- 1 hugh.radkins controls 73076 Feb 18 2015 .nfs000000000d840b1200000029
-rw-rw-r-- 1 hugh.radkins controls 73076 Feb 18 2015 .nfs000000000d84096200000028
-rw-rw-r-- 1 hugh.radkins controls 73076 Feb 18 2015 .nfs000000000d84095700000027
-rw-rw-r-- 1 hugh.radkins controls 73076 Feb 18 2015 .nfs000000000d84094c00000026
-rw-rw-r-- 1 hugh.radkins controls 76376 Feb 18 2015 .nfs000000000d84093400000025
-rw-rw-r-- 1 hugh.radkins controls 89335 Feb 3 16:15 .nfs000000000d840bc600000046
-rw-rw-r-- 1 hugh.radkins controls 89335 Feb 3 16:15 .nfs000000000d840bbe0000003c
-rw-rw-r-- 1 hugh.radkins controls 77540 Feb 3 16:15 .nfs000000000d840b4c00000035
-rw-rw-r-- 1 hugh.radkins controls 74240 Feb 3 16:15 .nfs000000000d840b9900000036
-rw-rw-r-- 1 hugh.radkins controls 74240 Feb 3 16:15 .nfs000000000d840b9c00000037
-rw-rw-r-- 1 hugh.radkins controls 74240 Feb 3 16:15 .nfs000000000d840ba200000038
-rw-rw-r-- 1 hugh.radkins controls 74240 Feb 3 16:15 .nfs000000000d840ba300000039
-rw-rw-r-- 1 hugh.radkins controls 74240 Feb 3 16:15 .nfs000000000d840ba40000003a
-rw-rw-r-- 1 hugh.radkins controls 74240 Feb 3 16:15 .nfs000000000d840ba50000003b
-rw-rw-r-- 1 hugh.radkins controls 77540 Feb 3 16:19 BSC9_H1_Valve_Check.xml
...
In Linux it is permitted for a program to open a file and then unlink it. On a local file system this creates a file with no name (it won't show up in 'ls', for example), but the inode still exists and the program can still read and write to the phantom file. Only when the program stops running does the OS free the inode for reuse. This is great for temporary files which should disappear when the program stops, even if it crashes and does not cleanly close the files.
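A minimal sketch (not from the original report, just to illustrate the open-then-unlink pattern described above):

import os

# Create a working file, then remove its directory entry while keeping it open.
f = open("scratch.tmp", "w+")
os.unlink("scratch.tmp")        # name disappears from 'ls'; the inode stays allocated

f.write("still usable after unlink\n")
f.seek(0)
print(f.read())                 # data remains readable through the open descriptor

f.close()                       # only now can the OS free the inode for reuse

On an NFS mount, the same unlink instead triggers the client-side renaming described below, which is where the .nfsXXXX files come from.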
The situation Hugh encountered is when the file is on an NFS-mounted file system. In this case, when the file is unlinked, the NFS client renames it to a dot nfs file with a random-looking name. If a user on the same NFS client machine tries to delete the file, they get a "cannot remove, Device or resource busy" error and the file is not deleted.
Interestingly, we found that on a different NFS client it is possible to delete the file, since the controlling process is not on that machine.
So we potentially have two problems. One is that dot nfs files get stuck around (like the ones from last Feb 18th in Hugh's listing; we presume the NFS client was abruptly shut down). The second is that someone deletes a file on one NFS client which is actually being used by a program on another NFS client.
Activity Log: All Times in UTC (PT)
15:43 (07:43) Reset tripped ISI WD on HAM6
15:53 (07:53) Peter – Going into the H2 enclosure to look for electronics equipment
16:00 (08:00) Peter – Added 350ml water to the Diode chiller
16:16 (08:16) Peter – Out of the H2 enclosure
16:45 (08:45) Richard & Filiberto – At HAM6 for Triplexer installation
16:53 (08:53) Joe – Going to X-Arm for beam tube sealing
17:15 (09:15) Carpet installers on site to finish OSB installation
17:35 (09:35) Completed initial alignment
17:36 (09:36) IFO in DOWN for Kiwamu, who is working on the ISS Second Loop
18:05 (10:05) Richard & Filiberto – Out of the LVEA
18:50 (10:50) Christina & Karen – Forklifting boxes from LSB to VPW and High Bay
18:53 (10:53) Bubba – In LVEA near HAM4/5 working on snorkel lift
18:53 (10:53) Chris – Beam tube sealing on the X-Arm
19:30 (11:30) Bubba – Out of the LVEA
19:56 (11:56) Joe & Chris – Back from X-Arm
20:24 (12:24) Completed operator assigned FAMIS ticket tasks
21:15 (13:15) Joe – Beam tube sealing on X-Arm
21:27 (13:27) Kyle – Going to Y-End compressor room
21:50 (13:50) Jenne & Cao – Going to check the Triplexer installed at HAM6
22:16 (14:16) Aidan & Nutsinee – Going to X-Arm HWS table
22:57 (14:57) Jenne & Cao – Out of the LVEA
23:15 (15:15) Joe – Back from X-Arm
23:45 (15:45) Kyle – Back from End-Y
00:00 (16:00) Turn over to Travis

End of Shift Summary:
Title: 02/03/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: Jenne, Kiwamu
Incoming Operator: Travis
Shift Detail Summary: Ongoing commissioning work during the
[Jenne, Cao]
What was once the 90 MHz local oscillator for just the ASAIR 90 WFS now goes to a distribution amplifier box. Three of the outputs of that box now go to the local oscillator inputs for each of ASAIR_90, AS_A_90 and AS_B_90. The local oscillator inputs want 10dBm each, so we measured the outputs of the distribution box; each output was 15dBm. So, we put 5dB RF attenuators on each spigot. Next up, we'll lock DRMI and look at phasing.
Yesterday I put both X and Y lasers into the DOWN guardian state and left them to run unlocked for a while. I then set both guardian states to LASER_UP and left them to lock themselves and run overnight. Attached is the chart (guardian_locking_X_and_Y_lasers.png) showing the output power, which is +/-0.015W for the Y arm laser and +/-0.008W for the X arm laser.
The PZT movement is +/-5V on both lasers, with the full range available being +/-35V, so this seems good. The chiller movement is approximately 0.05C on both chillers, which again is very small compared to the chiller range.
There is a glitch seen periodically in the Y-arm laser power output with a 15-minute interval. We believe that we have tracked this down to the interval at which the FLIR camera re-calibrates (this involves some internal mechanical motion). Looking at the attached chart (Y_Laser_FLIR_GLITCH.png), it seems that we have got rid of it by remotely turning off the FLIR camera from the MEDM screen.
The chiller setpoint for both lasers looks to have a small amount of oscillation. The period is 31 minutes and it is seen at about the same level in both lasers. Looking at the LVEA temperature data (temperature_fluctuations.png), there is no correlation with actual lab temperature changes. Also we can see that the oscillation appears in the PZT output channel, showing that the faster PZT actuator is subtracting out the length change caused by the oscillation in the chiller. The chiller uses this voltage as the setpoint for its actuation, and it looks like the chiller lags the PZT by 180 degrees. In order to reduce the unity-gain frequency of the servo, I have reduced the gain by a factor of 3 on both the X and Y chiller servo loops. I then put the lasers into the DOWN guardian state and then back into LASER_UP. I will leave them to run and we can look to see if this fixes the problem.
Evan told me that there seems to be an analog pole at 40Hz or so that is unaccounted for when he takes the transfer function from H1:SUS-ETMY_L3_LOCK_L_OUT to the LVESD monitor whitening outputs (L3_LVESDAMON_LL_OUT and such) after removing all known analog and digital poles/zeros.
I measured the transfer function for EX, not from LOCK_L but from ESDOUTF_LL_IN2 to LVESDAMON_LL_OUT, and confirmed that the problem is also present in the EX driver.
In the attached plot, blue is the measured transfer function divided by the whitening transfer function that is supposed to be there, (z, p) = ([2;2;19.3k;19.3k], [40;40;965;965]) Hz (https://dcc.ligo.org/LIGO-D1500389).
I also multiplied the blue trace by a 42Hz zero and got the green one; it seems like there really is a 42Hz-ish analog pole that is not accounted for. (There are also high-frequency features that I'm not worried about at the moment.)
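For reference, a rough sketch of this kind of check (assuming the measured transfer function has already been loaded into tf_meas on the frequency vector f; channel access and plotting omitted):

import numpy as np
from scipy import signal

f = np.logspace(0, 3, 500)                     # Hz
w = 2 * np.pi * f

# Nominal whitening per D1500389: zeros at 2, 2, 19.3k, 19.3k Hz; poles at 40, 40, 965, 965 Hz
z = -2 * np.pi * np.array([2, 2, 19.3e3, 19.3e3])
p = -2 * np.pi * np.array([40, 40, 965, 965])
k = np.prod(np.abs(p)) / np.prod(np.abs(z))    # normalize the nominal response to unity at DC
_, tf_nominal = signal.freqs_zpk(z, p, k, worN=w)

# residual = tf_meas / tf_nominal               # "blue" trace: measurement over nominal whitening
# compensated = residual * (1 + 1j * f / 42.0)  # "green" trace: residual multiplied by a 42 Hz zero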
Eventually I remembered that, in the old version of the LVESD driver that we are using, there is an LPF on the monitor output which has a 42Hz pole, though it was supposed to be 1kHz (the LPF in question is on page 11 of D1500016). Rich Abbott fixed all the spares, but not the ones we're using. One mystery solved.
I talked with Richard and Filiberto, and since they have a PI-related task scheduled for next Tuesday, I hope this can be fixed at the same time, either by modifying the unit or by swapping it.
As part of my larger investigation into IM alignment jumps, I drove IM1, IM2, and IM3 in pitch and yaw by -300 "slider urad" to measure the change in the OSEMs.
The first chart shows, for each -300 "slider urad" change in alignment drive: the change in "OSEM urad", the change on the IM4 Trans QPD, and the alignment slider drive expected to change the OSEM reading by 5 urad.
In pitch, the alignment slider changes are about 40 times the OSEM value changes, which is due to the magnetic damping in pitch.
In yaw, the alignment slider changes are about 10 times the OSEM value changes, so closer, but not all that close.
This alignment slider calibration may be something that should be updated.
The first plot also shows what alignment slider change is necessary to move the optic by 5 urad. I used these numbers to produce the second table.
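As a rough illustration of the scaling only (ratios rounded from the measurements above, not exact calibrations), the slider drive needed for a 5 urad OSEM change works out to something like:

ratio_pitch = 40      # slider urad per OSEM urad in pitch (approximate)
ratio_yaw = 10        # slider urad per OSEM urad in yaw (approximate)
target_osem = 5       # desired OSEM change [urad]

print("pitch slider drive ~", ratio_pitch * target_osem, "slider urad")   # ~200
print("yaw slider drive   ~", ratio_yaw * target_osem, "slider urad")     # ~50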
The second table shows that the drive change calculated in the first table did produce a 5 urad OSEM change. It also shows that in the axis that wasn't being driven there was up to a 1 urad shift in alignment; this happened in both pitch and yaw.
Diagonalization of the IMs may be something that should be updated as well.
I was trying to get the EY St2 guardian to turn on the RX/RY isolation filters and was not having much luck. I would make the necessary changes in the ISI_ETMY_ST2.py file, save, hit reload on all of the chamber's SEI nodes, then try cycling the ISI down to damped and back up. When I couldn't get the guardian to turn on the loops I wanted, I tried going the other way and leaving loops off. When I left Z out of the DOF and ALL_DOF lists, the ISI would still turn ON the ST2 Z loop and then not turn it off. When I checked the logs, guardian was still turning ON the Z loop, then turning OFF the (still not engaged) RX and RY loops. When I talked to TJ about this, he said that he had seen that it is sometimes necessary to restart Seismic nodes to get them to fully digest changes. When I did guardctrl restart ISI_ETMY_ST2 and then tried another cycle of ST2 to damped and back to isolated, this time the node turned on the requested loops.
Odd.
I've updated the record of calibration references for the TCS channels.
It is attached here:
Masayuki, Kiwamu,
With the two functions that we implemented yesterday (alog 25316), today we tested the automation to see how they improve the engagement of the second loop.
It locked without a single failure in more than 20 trials, within several seconds every time. Very good.
On the other hand, I noticed that I was missing some signal conditioning filters needed to handle multiple error signals in a user-friendly way. So we need another round of modification of the front end model, which should finalize this implementation.
I reverted the settings back to nominal. So the guardian still handles the engagement as usual.
[Advantage over guardian in this case]
The front end code allows for a fast and more complicated servo filter. This is the biggest advantage and therefore the key to the success of today's test.
The reference signal, or offset, of the second loop has been adjusted by the IMC guardian, which employed a simple PID for servoing the reference signal so as to minimize the "acquisition kick" during the engagement. Looking at the behavior of the suppressed and unsuppressed signals, I found the UGF of this loop to be lower than 0.1 Hz. At one point in the past, this servo became insufficient due to excess intensity fluctuation imposed by the IMC. The IMC seemingly adds extra intensity noise below 1 Hz, presumably via misalignment. As a consequence, the guardian servo became unable to keep up with such a large and fast fluctuation. We started assisting the guardian with manual engagement (alog 22449), where the operator activates the second loop when the second loop error signal momentarily crosses zero.
The new implementation tested today was able to achieve a UGF of 0.8-ish Hz, which sufficiently suppressed the fluctuation and therefore allowed a smooth engagement of the 2nd loop with help from the trigger. In the digital filter, I had to install 3 pairs of poles and zeros (i.e. zpk([0.03;0.03;0.05], [2;3;3], 1)) in addition to an integrator, in order to compensate for a 0.1 Hz second-order analog low-pass. Currently the UGF is limited by the DAC range. The rms fluctuation monitored by SECONDLOOP_SIGNAL was reduced by roughly a factor of 5. The dominant component was from around 0.1 Hz and can now be well suppressed with the fast servo.
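For illustration only, a small sketch of the loop shaping quoted above (frequencies in Hz as stated; the overall gain here is arbitrary, since in reality it is set by the hardware and the DAC range):

import numpy as np
from scipy import signal

f = np.logspace(-3, 1, 400)
w = 2 * np.pi * f

def zpk_response(zeros_hz, poles_hz, gain, w):
    z = -2 * np.pi * np.asarray(zeros_hz)
    p = -2 * np.pi * np.asarray(poles_hz)
    return signal.freqs_zpk(z, p, gain, worN=w)[1]

# digital filter zpk([0.03;0.03;0.05], [2;3;3], 1) plus an integrator ...
servo = zpk_response([0.03, 0.03, 0.05], [2, 3, 3], 1.0, w) / (1j * w)
# ... working against a 0.1 Hz second-order analog low-pass
analog_lpf = zpk_response([], [0.1, 0.1], (2 * np.pi * 0.1) ** 2, w)
open_loop_shape = servo * analog_lpf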
This is related to Jenne's elog entry about spot centering on the test masses and my retuning of the P2L gains.
I computed the coherence between DARM (CAL-DELTAL_EXTERNAL) and DHARD pitch (ASC-DHARD_P_OUT) for half-hour chunks of data, through all of O1, selecting all observing-mode segments longer than 0.5 hours. I also computed the transfer function between DHARD and DARM.
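A rough sketch of the per-chunk computation (data retrieval from the channels above is omitted; darm and dhard_p stand in for half-hour time series sampled at fs, and the segment length is my own choice for illustration):

import numpy as np
from scipy import signal

def coherence_and_tf(dhard_p, darm, fs, nperseg=None):
    if nperseg is None:
        nperseg = int(60 * fs)                  # e.g. 60 s FFT segments
    f, coh = signal.coherence(dhard_p, darm, fs=fs, nperseg=nperseg)
    _, pxy = signal.csd(dhard_p, darm, fs=fs, nperseg=nperseg)
    _, pxx = signal.welch(dhard_p, fs=fs, nperseg=nperseg)
    tf = pxy / pxx                              # DHARD_P -> DARM transfer function estimate
    return f, coh, tf

# Each half-hour chunk gives one column of the coherence-gram / transfer-function-gram below.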
The first plot shows a 'coherence-gram' of the results. Each vertical section of the plot corresponds to one lock segment at a given time (see the x axis for the number of days from the beginning of O1). The color code shows the coherence. It's clear that the coherence in the low frequency region (10-20 Hz) changed quite a lot during the run.
In a similar way, the second plot shows a 'transfer-function-gram' of the results. Each vertical section of the plot corresponds to one lock segment at a given time (see the x axis for the number of days from the beginning of O1). The TF is shown only for points with coherence above 0.05. Again, the variability over long periods is quite large. To see this better, the third plot shows the TF averaged between 15 and 20 Hz (the TF is flattish in this region). This plot shows even more clearly that the DHARD pitch to DARM coupling changed by large factors during the run, on a time scale of days.
[Aidan, Alastair]
Per the discussion on the position dependent coupling of CO2 noise to DARM, we injected a 23.8Hz line into CO2Y yesterday (using a function generator) yielding a 1.5E-2 /sqrtHz line in RIN for that laser.
We turned the laser on to inject 100mW onto ITMY. A line appeared in DARM at 23.8Hz.
We used PICO_G Motor 3 to steer the TCSY beam around on ITMY and observed the magnitude of the line in DARM. By eyeballing the live spectra, I could roughly maximize the line around PICO motor counts [-1E4, 3E4]. Then we swept the beam in the vertical direction on the mirror until the line disappeared in DARM. We reversed the direction of the sweep, moving the beam through maximum coupling through to minimum coupling again. We returned the beam to the rough position of maximum coupling and repeated the procedure in the horizontal direction.
The results were a bit noisy. We used the following channels to track the position of the PICO_G mirror and the line magnitude vs time.
We left the mirror at position [-1E4, 3E4] last night.
Following the scanning, I analyzed the data this morning and found that, despite the data being noisy, we could estimate the maximum coupling point as [-5E3, 3.1E4] with an uncertainty of approximately +/- 3000 counts. I moved the mirror to this position this morning.
----
A word on the uncertainty:
We can estimate the maximum single-pass thermal tilt we should see because of the uncertainty in the alignment:
If we operate the TCS in the range [0, 500mW], then we should anticipate up to 400nrad of thermal tilt induced in the IFO beam.
Added 200ml water to Crystal Chiller. Peter K. added 350ml water to the Diode chiller (see aLOG #25343)
Ran HEPI Pump Trends per FAMIS #1935. End-Y showed a cavitation after the power outage, which was corrected.
T240 Centering script results: There are 18 T240 proof masses out of range ( > 0.3 [V] )!
ITMX T240 1 DOF X/U = -3.297 [V]
ITMX T240 1 DOF Z/W = 0.439 [V]
ITMX T240 2 DOF Y/V = 0.505 [V]
ITMX T240 3 DOF X/U = -3.301 [V]
ITMX T240 3 DOF Z/W = -0.341 [V]
ITMY T240 1 DOF X/U = -0.433 [V]
ITMY T240 1 DOF Y/V = 0.359 [V]
ITMY T240 1 DOF Z/W = 0.34 [V]
ITMY T240 2 DOF Y/V = 0.43 [V]
ITMY T240 2 DOF Z/W = -0.345 [V]
ITMY T240 3 DOF X/U = -0.941 [V]
ITMY T240 3 DOF Y/V = -0.319 [V]
ITMY T240 3 DOF Z/W = -3.275 [V]
BS T240 1 DOF Y/V = 0.904 [V]
BS T240 1 DOF Z/W = 0.384 [V]
BS T240 2 DOF X/U = 0.96 [V]
BS T240 2 DOF Z/W = 0.481 [V]
BS T240 3 DOF Z/W = 0.927 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.158 [V]
ETMX T240 1 DOF Y/V = 0.143 [V]
ETMX T240 1 DOF Z/W = 0.19 [V]
ETMX T240 2 DOF X/U = 0.061 [V]
ETMX T240 2 DOF Y/V = -0.052 [V]
ETMX T240 2 DOF Z/W = 0.168 [V]
ETMX T240 3 DOF X/U = 0.153 [V]
ETMX T240 3 DOF Y/V = 0.008 [V]
ETMX T240 3 DOF Z/W = 0.113 [V]
ETMY T240 1 DOF X/U = 0.066 [V]
ETMY T240 1 DOF Y/V = 0.006 [V]
ETMY T240 1 DOF Z/W = 0.046 [V]
ETMY T240 2 DOF X/U = -0.126 [V]
ETMY T240 2 DOF Y/V = 0.059 [V]
ETMY T240 2 DOF Z/W = 0.148 [V]
ETMY T240 3 DOF X/U = 0.045 [V]
ETMY T240 3 DOF Y/V = 0.038 [V]
ETMY T240 3 DOF Z/W = 0.152 [V]
ITMX T240 1 DOF Y/V = -0.16 [V]
ITMX T240 2 DOF X/U = -0.227 [V]
ITMX T240 2 DOF Z/W = 0.219 [V]
ITMX T240 3 DOF Y/V = 0.18 [V]
ITMY T240 2 DOF X/U = 0.219 [V]
BS T240 1 DOF X/U = 0.207 [V]
BS T240 2 DOF Y/V = 0.182 [V]
BS T240 3 DOF X/U = 0.072 [V]
BS T240 3 DOF Y/V = 0.241 [V]
ITMX & ITMY T240 Masses Centered WP 5719
JeffB's post of the Mass Positions prompted us to center the ITM ISI T240s. This was successful. The BS T240 masses are a bit above action level but not by much; we'll wait a few before we mess with it.
There seem to be two new rf oddities that appeared after maintenance today:
Nothing immediately obvious from either the PR or SR bottom-stage OSEMs during this time. Ditto the BS and ITM oplevs.
Nothing immediately obvious from distribution amp monitors or LO monitors.
A bit more methodically now: I checked all the OSEM readbacks for the DRMI optics, including the IMC mirrors and the input mirrors. No obvious correlation with POP90 fluctuations.
I am tagging detchar in this post. Betsy and I spent some more time looking at sus electronics channels, but nothing jumped out as problematic. (Although I attach the fast current monitors for the beamsplitter penultimate stage: UR looks like it has many fast glitches. I have not looked systematically at other current or voltage monitors on the suspensions.)
Most likely, noise hunting cannot continue until this problem is fixed.
We would greatly appreciate some help from detchar in identifying which sus electronics channels (if any) are suspect.
In this case, data during any of the ISC_LOCK guardian states 101 through 104 is good to look at (these correspond to DRMI locking with arms off resonance). Higher-numbered guardian states will also show this POP90 problem. This problem only started after Tuesday afternoon local time.
I said above that nothing can be seen in the OSEMs, but that is based only on second-trends of the time series. Perhaps something will be revealed in spectrograms, as when we went through this exercise several months ago.
Comparing MASTER and NOISEMON spectra from a nominal low noise time on Feb 3 with Jan 10, the most suspicious change is SR2 M3 UL. Previously, this noisemon looked similar to the other quadrants, but with an extra forest of lines above 100 Hz. Now, the noisemon looks dead. Attached are spectra of the UR quadrant, showing that it hasn't changed, and spectra of SR2 M3 UL, showing that something has failed - either the noisemon or the driver. Blue traces are from Feb 3 during a nominal low noise time, and red are a reference from science time on Jan 10.

I'm also attaching two PDFs - the first is spectra of master and noisemon channels, and their coherence, from the reference time. The second is the same from the current bad time. Ignore the empty plots, they happen if the drive is zero.

Also, it seems like the BS M2 noisemon channels have gone missing since the end of the run, so I had to take them out of the configuration. Also, I took out the ITMs, but I should probably check those too.
Jeff Kissel graciously made 4 StripTool templates for OPS use when having issues with WFS running away during green locking, as happened today. These templates are:
ALSX_GreenWFS_Pitch_Metrics.stp
ALSX_GreenWFS_Yaw_Metrics.stp
ALSY_GreenWFS_Pitch_Metrics.stp
ALSY_GreenWFS_Yaw_Metrics.stp
and can be found in the usual place for OPS templates: /ligo/home/ops/Templates/StripTool.
These will likely be used with the assistance of a commissioner, since I didn't catch enough of the intricacies of manipulating the WFS to explain them here, but they are there to save some time when needed.
J. Kissel, J. Driggers

A little more information / motivation here. There are 4 degrees of freedom when it comes to keeping the green arms aligned: X and Y arms, in both Pitch and Yaw. There are three optics involved in each of these systems -- each arm's ITM, ETM, and TMS. As such, we've constructed a sensor array that has three sensors for each arm: a green WFS A, a green WFS B, and that arm's ITM camera. We've arranged it such that WFS A (in the I phase) is "DOF 1", which is controlled by the ETM; WFS B (again in the I phase) is "DOF 2", controlled by the TMS; and the ITM camera is "DOF 3", which is controlled by a little bit of all the optics -- ETM, ITM, and TMS -- but mostly the ITM. As such, each of these templates contains the error signals for each of these loops for the 4 degrees of green alignment.

The goal during their use: minimize (bring the absolute value toward zero) each of the sensor, or "error," signals, while maximizing the last signal in each template: the (normalized) power stored in the arms.

When you'll typically need these templates:
- Only when troubleshooting the green WFS alignment step of initial alignment, where the "trouble" is that every time the green WFS engage, it steers the arm (X or Y) off into the weeds (in Pitch or Yaw) and breaks the green arm lock.
- We've found that this will typically only happen after a particularly rough event on one (or several) of the chamber's isolation systems, for example when an ISI or HEPI platform has tripped. (Or a maintenance day when the SEI platforms are brought down for a model change, or if there's been a site-wide power outage, etc.)
- In such a case where you have to invoke this technique, it's likely that only one of the error signals is too large for the alignment system to handle.

Strategy:
- Be sure to turn OFF the automatic alignment before getting started. It's the auto-alignment that's the problem, so you must be sure it's not fighting you while you're manually pushing the optics to the right place. Do this by heading to the ALS controls overview screen of whichever arm and angular DOF you're fighting (X or Y, Pitch or Yaw), open each of the WFS "DOF" filters, and turn ON a limit of 0.0 for each. (If you turn OFF the input or output, guardian will fight you and automatically turn these things back ON. No good.)
- Keep the alignment adjustments small (0.1 microradians on each optic's slider), and make sure to write down where you started on each so you yourself don't get lost in the weeds.
- Be mindful of the optic indicated by the DOF which has a particularly large error signal. If the WFS A error signal is large, it's likely that the ETM should be adjusted.
- The error signals should only be used when the cavity is locked on a zero-zero mode. That means you've got to get the alignment most of the way there *by hand* before using this system. Further, you have to make sure that the *automatic* green alignment system is OFF so that it's not fighting your manual alignment.
- Most typically, it's the camera error signal that's large. Unfortunately that error signal corresponds to a degree of freedom that is controlled by all three of the optics, so use your best judgement as to which optic to try first (based on your knowledge of what happened to the chambers and optics prior to you starting the alignment process). However, if no major alignment-changing events have occurred, start with the ITMs.
- As usual, if you change the alignment of an optic in either direction and it only seems to make all metrics worse, or breaks the lock, then that's not the problem optic!
- Once you have the error signals relatively small (under +/- ~1000-2000 counts for the WFS, under +/- ~0.1 to 0.2 counts for the camera), re-engage the auto-alignment by releasing the limits of 0.0 in each of the "DOF" filter banks.
- Rinse and repeat until the arm cavity stays locked on the 00 mode with the auto-alignment engaged.
In Jeff's first paragraph, he forgot to include the other 2 degrees of freedom: the input beam pointing, which is defined by the TMS for the green lasers. So, there are 6 degrees of freedom for each arm.
Svn up'd LHO's common guardian masterswitch and watchdog folders:
hugh.radkins@opsws1:masterswitch 0$ svn up
U states.py
Updated to revision 12509.
hugh.radkins@opsws1:masterswitch 127$ pwd
/opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/masterswitch
hugh.radkins@opsws1:masterswitch 0$
hugh.radkins@opsws1:masterswitch 0$ cd ../watchdog/
hugh.radkins@opsws1:watchdog 0$ svn st
hugh.radkins@opsws1:watchdog 0$ svn up
U states.py
Updated to revision 12509.
I restarted HAM6 and tested by disabling an output leg. The trip appeared to execute the functions as expected; I did not notice any problem. Restarted HAM5 and tested; it too turned off the FF and reduced the GS13 gains as expected, no problem noticed. Restarted HAM4 but did not test (by tripping the platform).
Restarted HAM3 after enabling the GS13 Switching feature as I wanted to test the problem of the guardian being unable to turn the GS13 gain up without tripping the platform. This is where I noticed the problem.
When guardian turned up the GS13 gain, HAM3 tripped, and it successfully turned off the FF and lowered the GS13 gains, but it left the DAMPing loops engaged. I thought the restart of the guardian may have been responsible for this behaviour. I cleared the trip but did not turn off the DAMPing path first. The ISI did not trip until the GS13s were again toggled to high gain, but this time the DAMPing path was turned off as I would expect. Okay, maybe a first-time-around problem. I cleared the trip, and again the platform tripped when the GS13 gain changed, and again the DAMPing path was left on. I repeated this and again the DAMPing path was left on. I disabled the GS13 Gain Switching feature and we made it to isolated, and I set the GS13 gains to high with the Command Script.
I've repeated the test on HAM5 and there too, DAMPing path remained on.
Meanwhile, ITMX tripped due to LVEA activity; the DAMPing path was not turned off, and this guardian had not been restarted with the new update. I repeated this on ITMY and it too left the DAMPing path enabled. Okay, it looks like this DAMPing problem is not related to the current code upgrade.
I will continue to restart the guardians with this current upgrade, though, as turning off the GS13s when tripped is a good thing, and generally the platform can deal with untripping the watchdog with stuff coming out of the DAMPing path, as long as the GS13s are in low gain. And since HAM2 & 3 can't handle guardian gain switching, they must have the gains toggled manually.
I've restarted all the LHO ISI Guardians. I tested the functions/features and problems, and they are all present on the ITMX ISI too.
I modified watchdog/states.py to accommodate this additional request to the T150549 update. The update was committed to the SVN:
[Alastair, Aidan]
Attached are some plots of RIN for the X and Y arm lasers. There are now channels in the front end for the RIN of the in-loop and out-of-loop photodiodes on each table. Here we show the RIN and dark noise for both lasers. The in-loop photodiode for the Y-arm table is giving zero response, so either there is a problem with it or the beam is misaligned. I've only attached the data for the other 3 diodes.
The dark noise is consistent between all three diodes. The RIN of the X-arm laser is approx 5x10^-7 at 20Hz. The Y-arm laser is higher, at 2x10^-6. The Y-arm result is similar to the best we measured at Caltech (also attached). It may be that the X arm is lower than we have previously measured due to the low-noise environment. The spectra show no excess at low frequency such as would be attributed to air motion on the table, which is good news. Also, the in-loop and out-of-loop diodes on the X-table show very similar spectra, which is good for any future intensity noise cancellation.
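For reference, a minimal sketch of how a RIN spectrum like these can be formed (channel handling is omitted; pd stands in for a photodiode time series sampled at fs, and the segment length is illustrative):

import numpy as np
from scipy import signal

def rin_asd(pd, fs, nperseg=4096):
    f, psd = signal.welch(pd, fs=fs, nperseg=nperseg)   # PSD of the photodiode signal
    asd = np.sqrt(psd) / np.mean(pd)                    # normalize by the DC level -> RIN [1/sqrtHz]
    return f, asd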
H1 has 30 sender RFM channels on each arm, of which only 26 have corresponding receiver(s). So 4 are being sent and no model is using the data.
L1 has 29 senders on the X-ARM (of which there are 26 receivers), and 30 senders on the Y-ARM (of which there are 26 receivers).
So the two sites are very close in number of sending channels.
Analysis details: The base number of potential senders was derived from the main IPC file, looking for RFM0 and RFM1 ipc types. This resulted in 30 for H1-X and H1-Y, and 42 for L1-X and L1-Y. Because the ipc file is only appended to during compilation, if it has not been cleanly regenerated recently it may overcount the number of sending channels.
For each channel, I searched the models' RCG-generated IPC_STATUS.adl MEDM files (e.g. H1LSC_IPC_STATUS.adl) for the channel name. Assuming that no two IPC channels share the same name, finding the channel name in the adl file means it is a running sender with a receiver. For the remaining possible senders without receivers (H1-X=4, H1-Y=4, L1-X=16, L1-Y=16), I looked for the channels in the top-level simulink source files (e.g. /opt/rtcds/userapps/release/*/l1/models/l1*.mdl). This showed that all four channels on H1-X and H1-Y do have sending models; for L1-X, 3 of the 16 had sending models, and for L1-Y, 4 of the 16 had sending models.
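A hedged sketch of that kind of search (the adl path below is illustrative, not necessarily the real layout; the mdl glob follows the example above):

import glob

ADL_GLOB = "/opt/rtcds/lho/h1/medm/*/*IPC_STATUS.adl"            # illustrative path
MDL_GLOB = "/opt/rtcds/userapps/release/*/h1/models/h1*.mdl"     # as in the example above

def classify_channel(chan):
    def found_in(pattern):
        return any(chan in open(path, errors="ignore").read()
                   for path in glob.glob(pattern))
    if found_in(ADL_GLOB):
        return "running sender with a receiver"
    if found_in(MDL_GLOB):
        return "sender built into a model, but with no receiver"
    return "listed in the IPC file only (possibly stale)"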
If we can possibly remove some of the RFM channels which are not being received, additional RFM channels can be added to the loop with no risk.
For H1 the sending channels with no receivers are:
Tagging people interested in adding new (or rather, trading for) RFM channels.
SEI: IFO Basis SEI channels
CAL: Sending PCAL excitations to the corner.
Dave, thanks for the count. For the SUSpoint motion in the IFO basis (see ECR E1600028, or Integration Issue 1193, or Tech Doc T1500610), we need 2 RFM channels: 1 per arm, for the ETMX SUS-WIT and ETMY SUS-WIT, each to OAF in the corner.

For completeness, I note that we also need some PCIe channels from the top level of the other 12 suspensions (3 IMCs, 3 SRMs, 3 PRMs, BS, ITMX, ITMY). These can replace the GS-13 X/Y signals now being used by OAF. Evidently the RFM senders for these are living in the PEM model at LLO. I do not know why it is done this way, but it may be related to the configuration of the FE machine for the ISI at LLO.

ALSO (1): for more complete monitoring, it would be useful to also send the STS-2 X/Y signal from the end to OAF.
ALSO (2): for Earthquake common-mode control (still hypothetical) we would need to send the End X or Y STS-2 to the corner, and ALSO send the corner X/Y out to the ends.

Summary of RFM:
1 per arm from SUS to OAF (high priority)
1 per arm from ISI-GND to OAF (med priority)
1 per arm from GND-ITMY X to End X and ITMY-Y to End Y (med priority)

NOTE - these signals don't need to be 16k. We want accurate data at 1 Hz and below, so 512 samples/sec would be fine. Thus, it is not crazy to think about ways to de-stress the RFM system (e.g. interleave several slow channels on one fast RFM connection, or something like this).
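The interleaving idea at the end is only a suggestion in the text; as a purely conceptual sketch (nothing to do with the actual RFM/RCG code, all names and rates illustrative), several 512 sample/s channels could share one 16k channel by time-division multiplexing:

import numpy as np

FAST = 16384             # samples/s of the fast RFM channel (illustrative)
SLOW = 512               # samples/s needed for the low-frequency signals
N_SLOTS = FAST // SLOW   # 32 slow channels could share one fast channel

def interleave(slow_channels):
    """Pack up to N_SLOTS slow streams (each length L at 512 S/s) into one fast stream."""
    slow = np.asarray(slow_channels)          # shape (n_channels, L)
    padded = np.zeros((N_SLOTS, slow.shape[1]))
    padded[:slow.shape[0]] = slow
    return padded.T.reshape(-1)               # round-robin: one sample per channel per slot

def deinterleave(fast_stream, n_channels):
    return fast_stream.reshape(-1, N_SLOTS).T[:n_channels]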
Looks like the RF amp gets overdriven at the input. The outputs should be around 13dBm.