Krishna
The attached PDF file shows the comparison between BRS-Y, the ground seismometer (an STS-2), and the platform seismometer (a T240), called a PEM signal in this case (since we are using spare PEM channels for it). Also shown are the online tilt-subtracted super-sensor output (online residual) and a second tilt-subtracted signal between the T240 and BRS-Y (PEM residual). Page 2 shows the coherences between various signals, and Page 3 shows the wind speeds. As I showed before, the coherence between BRS-Y and the platform seismometer (T240) is better than that between BRS-Y and the ground seismometer (STS-2).
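(For reference, the Page 2 coherences are ordinary magnitude-squared coherences between channel pairs. Below is a minimal sketch of that computation, not the actual analysis code; the sample rate, segment length, and placeholder data are assumptions, and the real time series are of course calibrated channels, not noise.)

    # Sketch of the Page 2 style comparison: magnitude-squared coherence of BRS-Y
    # against each seismometer. Sample rate and FFT segment length are assumptions.
    import numpy as np
    from scipy.signal import coherence

    fs = 8.0           # Hz, assumed sample rate of the (decimated) time series
    nperseg = 4096     # ~512 s segments, enough to resolve the 10-100 mHz band

    # Placeholder data standing in for calibrated, time-aligned channels:
    rng = np.random.default_rng(0)
    tilt = rng.standard_normal(2 ** 16)
    brs = tilt + 0.1 * rng.standard_normal(tilt.size)    # BRS-Y sees the tilt
    t240 = tilt + 0.2 * rng.standard_normal(tilt.size)   # platform T240, same tilt
    sts = tilt + 0.6 * rng.standard_normal(tilt.size)    # ground STS-2, more extra noise

    f, coh_sts = coherence(brs, sts, fs=fs, nperseg=nperseg)    # BRS-Y vs STS-2
    f, coh_t240 = coherence(brs, t240, fs=fs, nperseg=nperseg)  # BRS-Y vs T240 (higher)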
Note that in this plot I have used the normal calibration factor for BRS-Y and STS-2. For the T240, I matched its output to the STS-2 at the secondary microseism. You'll notice that while the STS-2 and BRS-Y are coherent below 0.1 Hz, their outputs differ by 20 percent. Initially some had suspected we got our calibration wrong, but this is now known to be due to differential floor tilt. For the online tilt-subtraction we use a factor of 0.78 for BRS-Y to correct for this. However, the platform seismometer agrees very well with the raw BRS-Y calibration (we did get our calibration right!). This confirms that the two instruments are now seeing the same tilt.
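(Schematically, the two residuals amount to the subtractions below. This is not the online super-sensor code, which applies frequency-dependent blend filters and unit conversions that are not shown here; only the role of the 0.78 gain is illustrated.)

    # Schematic of the two residuals, assuming all channels are already calibrated
    # into the same units over the band of interest (the real online subtraction
    # uses frequency-dependent filters, not a single broadband gain).
    def online_residual(sts, brs, brs_gain=0.78):
        # ground STS-2 minus BRS-Y scaled by 0.78 to account for differential floor tilt
        return sts - brs_gain * brs

    def pem_residual(t240, brs):
        # platform T240 minus BRS-Y at its raw calibration (no extra gain needed)
        return t240 - brs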
The tilt-subtracted T240 signal (PEM residual) is now marginally better than the tilt-subtracted STS-2 signal in the 15-50 mHz range. It is possible that this is still limited by temperature noise on the T240, since the tilt-subtracted noise looks similar at lower wind speeds too (not shown). Hugh will attempt to improve the thermal insulation when possible.
The most important test is now to see if the two residual curves diverge even more when wind speeds pick up above 20 mph. If so, this would confirm the differential floor tilt hypothesis. If not, it would suggest that some other noise or nonlinearity limits the tilt subtraction. Unfortunately, it looks like we'll have to wait until next week for higher wind speeds.
Adding one other case with slightly higher wind speeds.
Not sure how relevant this is right now.
Laser Status:
SysStat is good
Front End Power is 35.66W (should be around 30 W)
HPO Output Power is 154.0W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0 days, 0 hr 27 minutes (should be days/weeks)
Reflected power = 26.11Watts
Transmitted power = 50.55Watts
PowerSum = 76.66Watts.
FSS:
It has been locked for 0 days 0 hr and 27 min (should be days/weeks)
TPD[V] = 2.491V (min 0.9V)
ISS:
The diffracted power is around 0.71% (should be 3-5%)
Last saturation event was 0 days 0 hours and 26 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
ISS diffracted power is Low
All the STSs are okay. Corner station ISIs are in air. ETMY is somewhat out of spec, though.
2017-11-08 10:52:36.354536
All STS proof masses are within the healthy range (< 2.0 [V]). Great!
Here's a list of how they're doing just in case you care:
STS A DOF X/U = -0.718 [V]
STS A DOF Y/V = 0.064 [V]
STS A DOF Z/W = -0.195 [V]
STS B DOF X/U = 0.388 [V]
STS B DOF Y/V = 0.944 [V]
STS B DOF Z/W = -0.609 [V]
STS C DOF X/U = 0.368 [V]
STS C DOF Y/V = -0.065 [V]
STS C DOF Z/W = 0.938 [V]
STS EX DOF X/U = -0.164 [V]
STS EX DOF Y/V = 0.47 [V]
STS EX DOF Z/W = 0.151 [V]
STS EY DOF X/U = 0.144 [V]
STS EY DOF Y/V = 0.169 [V]
STS EY DOF Z/W = 0.51 [V]
Assessment complete.
Averaging Mass Centering channels for 10 [sec] ...
2017-11-08 10:52:39.355680
There are 17 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.327 [V]
ETMX T240 2 DOF Y/V = -0.574 [V]
ETMY T240 3 DOF Z/W = 0.347 [V]
ITMX T240 1 DOF X/U = -0.418 [V]
ITMX T240 1 DOF Z/W = -0.337 [V]
ITMX T240 2 DOF Y/V = -0.37 [V]
ITMX T240 3 DOF X/U = -0.652 [V]
ITMY T240 1 DOF X/U = -0.386 [V]
ITMY T240 1 DOF Z/W = -0.549 [V]
ITMY T240 2 DOF Z/W = -0.654 [V]
ITMY T240 3 DOF X/U = -1.001 [V]
ITMY T240 3 DOF Y/V = -0.389 [V]
ITMY T240 3 DOF Z/W = -0.604 [V]
BS T240 1 DOF Z/W = -0.564 [V]
BS T240 2 DOF Y/V = -0.491 [V]
BS T240 3 DOF X/U = -0.683 [V]
BS T240 3 DOF Z/W = -0.469 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.296 [V]
ETMX T240 1 DOF Y/V = 0.19 [V]
ETMX T240 1 DOF Z/W = 0.222 [V]
ETMX T240 2 DOF Z/W = 0.125 [V]
ETMX T240 3 DOF X/U = 0.171 [V]
ETMX T240 3 DOF Y/V = 0.284 [V]
ETMX T240 3 DOF Z/W = 0.223 [V]
ETMY T240 1 DOF X/U = -0.003 [V]
ETMY T240 1 DOF Y/V = 0.109 [V]
ETMY T240 1 DOF Z/W = -0.123 [V]
ETMY T240 2 DOF X/U = 0.201 [V]
ETMY T240 2 DOF Y/V = -0.161 [V]
ETMY T240 2 DOF Z/W = 0.0 [V]
ETMY T240 3 DOF X/U = -0.132 [V]
ETMY T240 3 DOF Y/V = -0.004 [V]
ITMX T240 1 DOF Y/V = -0.046 [V]
ITMX T240 2 DOF X/U = -0.106 [V]
ITMX T240 2 DOF Z/W = -0.093 [V]
ITMX T240 3 DOF Y/V = -0.047 [V]
ITMX T240 3 DOF Z/W = 0.016 [V]
ITMY T240 1 DOF Y/V = -0.178 [V]
ITMY T240 2 DOF X/U = 0.263 [V]
ITMY T240 2 DOF Y/V = -0.239 [V]
BS T240 1 DOF X/U = 0.131 [V]
BS T240 1 DOF Y/V = -0.262 [V]
BS T240 2 DOF X/U = -0.041 [V]
BS T240 2 DOF Z/W = 0.032 [V]
BS T240 3 DOF Y/V = -0.024 [V]
Assessment complete.
FAMIS Task #6923 Attached are the plots for the BSC and HAM ISI CPS checks.
FAMIS Task #8302 Both Crystal and Diode chiller filters look good. No debris or discoloration noted. Added 100ml water to Crystal Chiller. Added 200ml water to Diode Chiller. No problems or concerns noted with either chiller.
Activities in IO Land:
- Cheryl, JeffK, Ed
Kyle R., Rakesh K. Today we coupled the ion pump assembly to the 1.5" Vent/Purge valve at CP2. Upon testing, the ion pump was found to be shorted (50 ohms). Ughh! This is a newish pump that had passed testing when it was briefly used as an emergency backup a few years ago. Oh well, it had to be replaced nonetheless. The new joints were again leak tested and the turbo pump removed. Now both CP1 and CP2 have permanent, dedicated 55 L/s ion pumps installed. The purpose of these is mostly to pump hydrogen and nitrogen during those times when the 80K pump has to be isolated from adjacent volumes for prolonged periods. Nominally, these ion pumps are running but valved out from the 80K pumps until needed. All that remains now is for the final power and signal cabling to be routed. For now we have no readback and are using extension cords to power them. This completes WP #7204.
Sheila, Jonathan, Dave:
Sheila had a question about the relative timing of an analog event which was recorded by both a front end computer and a Beckhoff slow controls computer. When looking at the two channels from the DAQ, there was a time difference of about a tenth of a second.
As a test, I have some slow signals coming from front end models and being acquired by the DAQ through two paths: one direct alongside the fast data, and an equivalent channel coming in via the EDCU. To show the delay, I looked at the h1ioplsc0 model's state word and manually changed it by running an excitation to set the EXC bit. In the attached plot, the signal being sent directly to the DAQ data concentrator is the upper red trace; the signal being acquired over the network via the EDCU is the lower purple trace. As can be seen, the front end data is packetized into a 1/16 s block before being sent to the data concentrator, which in turn packetizes the data into a 1/16 s block and adds the Beckhoff data point. This results in an apparent 2-sample delay (0.125 s) in the red trace. In actuality, the red trace is timed correctly, and the Beckhoff event is incorrectly time stamped as preceding the event by 0.1 s.
We think moving the EDCU to a front end would resolve this.
The use of a front-end based EDCU (instead of the DAQ Data Concentrator) is already implemented in the CDS code base. It has already been tested extensively on a test stand, and briefly on L1. There are installation instructions in the DCC. This change is required to ensure EPICS data agreement when multiple Data Concentrators are used to increase redundancy.
Subsequent inquiries concerning my alog have made me realize my entry could be clearer. The issue is that slow data coming directly from the front end computer to the DAQ data concentrator has a pipeline delay, whereas data being acquired by the EDCU does not. When looking at the two signals in real-time mode, the EDCU signal changes first, followed 3 cycles later (0.18 s) by the FEC channel. The data concentrator takes the pipeline delay into account by timing the signal from the FEC's time stamp (not the EPICS IOC), meaning that when this event is later replayed by opening frame files, the FEC signal has the correct time, and the EDCU signal erroneously appears to have happened before the event time.
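(As a quick sanity check, the offsets quoted above are just multiples of the 1/16 s slow-data block period; the 16 Hz block rate is the only assumption here.)

    # Back-of-the-envelope check of the offsets quoted above, assuming a 16 Hz
    # slow-data block rate; the pipeline depths (2 and 3 blocks) come from the text.
    block = 1.0 / 16    # one slow-data block, in seconds
    print(2 * block)    # 0.125 s:  apparent delay of the direct FEC path in the plot
    print(3 * block)    # 0.1875 s: the ~0.18 s real-time lead of the EDCU channel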
J. Kissel, C. Vorvick
While Cheryl and I were aligning the IMC this afternoon, we noticed that we were getting much faster fringing than we saw yesterday. However, comparing spectra of all suspensions in HAM2 / HAM3, in L, T, and V, from yesterday evening after work (2017-11-07 02:20 UTC) vs. this evening after work (2017-11-08 00:20 UTC) shows that there is a large, high-Q feature at ~11.3 Hz. Comparing against the spectra from over the weekend, in which HAM2 was unlocked, it becomes obvious that the problematic feature dominating the RMS is the HEPI cross-beam resonance, transmitted directly through the locked platform. Our best supposition is that yesterday the lower-frequency motion (0.5 Hz < f < 5 Hz) was much *larger* than it currently is, dominating the RMS velocity of the IMC and resulting in slower fringing. So we'll think more about how we can slow the suspensions down enough to get alignment done tomorrow.
S Dwyer, S Cooper, J Kissel, TJ Shaffer
We've been thinking about how to better protect our suspensions and optics during earthquakes, and one thing that would help keep them from banging around would be to keep top mass damping on whenever we can. Brian Lantz is working on changes to the ISI watchdogs (see 39241), and Jeff K has proposed some changes to the sus watchdogs for this and other reasons (38948). What I am suggesting here is a relatively simple change to the suspension guardian to allow the suspensions to stay damped if a lower stage trips but the top masses do not.
Currently, the suspension guardian reacts to any watchdog trip on the suspension by turning off all damping, so if any of the lower stages trips, the top mass damping will be turned off. Also, if M0 trips, the reaction chain will become undamped, and vice versa. I am proposing a simple change to the guardian, shown in the attached graph image.
Currently, if any watchdog trips, the suspension guardian is redirected to the TRIPPED state, where a function called reset_safe turns off all the outputs to the suspension, including alignment offsets and damping, and then a different function turns off the master switch. This was probably done out of convenience, so that when the watchdog is reset by an operator, the guardian can follow the same set of steps to bring it back to its nominal state no matter which watchdog had tripped. It is unnecessary in most cases to turn everything off like this: we could have the guardian respond to each watchdog trip differently, and turn the controls back on differently in each case.
What I would like to do instead is have the TRIPPED state do nothing (thanks TJ for catching the master switch), and rely on the watchdogs to do their job by turning off any stages which need to be turned off. In the old graph, the TRIPPED state was a protected state (redirect=False, red outline), which means the guardian will not move on to another state until it returns True (which it will not do unless the watchdogs are reset). I want the operator (or the manager) to be able to request that the suspension turn everything off, which they would be able to do in the new graph by requesting RESET or SAFE. To make sure that the guardian will not move through its entire graph with a watchdog tripped, I've made RESET a protected state which will not exit until the watchdog is untripped.
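(For illustration, here is a minimal sketch of the proposed TRIPPED/RESET logic. This is not the production SUS guardian code; the watchdog-check helper and the exact turn-off steps are hypothetical stand-ins.)

    # Sketch of the proposed state logic, assuming the standard guardian GuardState
    # interface; all_watchdogs_untripped() is a hypothetical helper standing in for
    # reading the per-stage watchdog state channels via ezca.
    from guardian import GuardState

    def all_watchdogs_untripped():
        # hypothetical: would poll the per-stage watchdog state channels
        raise NotImplementedError

    class TRIPPED(GuardState):
        """Jump target on any watchdog trip. Deliberately does nothing, so stages
        that did not trip (e.g. top-mass damping) keep running."""
        request = False
        def main(self):
            return True

    class RESET(GuardState):
        """Requestable by the operator (or manager): turn everything off, then wait
        here until the watchdogs are untripped before the graph can proceed."""
        redirect = False   # protected state
        def main(self):
            pass           # turn off all outputs: damping, alignment offsets, master switch
        def run(self):
            return all_watchdogs_untripped()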
With the new graph, if an earthquake hits and trips some suspension watchdogs, the top mass damping will be left on unless the top mass trips. When an operator resets the watchdogs, the suspension will turn off all outputs, and then re-damp, realign, etc. This means that we will be better off if an EQ hits when no one is here. If only lower stages trip, the operators can do nothing until the EQ passes; if the top masses trip, the operators should try to un-trip them as soon as possible. (Jeff's changes to the SUS watchdogs will hopefully make it easier to untrip the suspensions.)
Carlos and Jonathan. We did some updates to the EPICS CA Gateway cdsegw0 under WP 7200. The goal was to replace it with a newer machine running a current operating system. The new system would run Debian 9, EPICS 3.15.3, and the EPICS CA Gateway, and move the configuration into puppet control. The current system ran 6 EPICS gateway processes, with 4 of the processes sharing the same server IP/port. The new system had problems running in the same configuration: we could read the vacuum network from all of the interfaces, but could not read the FE, slow controls, and aux gateways. Further investigation is required to state exactly why. At this time we are not sure if it was a configuration setting that did not get transferred over, or just a difference in EPICS base versions as we moved from 3.14.x to 3.15.3.
After discussion with Dave we questioned the need for so many gateway processes. In previous revisions of the system, the gateways were needed to rate limit access to VxWorks boxes. Those boxes have all been replaced, and the gateways are really only needed to provide a read-only view of the vacuum channels. The plan was adapted to run the gateway server with only the vacuum network gateway processes, turning off the server -> FE, server -> slow controls, and server -> aux gateways. This was done this morning, using the old system with a smaller number of EPICS CA gateway processes running. If this works out we will likely open a new work permit and replace the old gateway box with the new box, but without attempting to multiplex 4 gateways onto one interface (which is the problem we ran into).
Jonathan, Dave:
Following this morning's removal of three EPICS gateways on the internal gateway machine (the three gateways with SIP=10.20.0.5 and clients on h1fe, h1aux and h1slow), my cell phone texter crashed and would not reconnect to any subnet other than h0ve. We found that the CA environment variables were not set and the code was dependent upon a gateway on the cds vlan (10.20.0/24). Defining the env vars in the startup script fixed this issue.
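(For the record, a minimal sketch of the env-var fix, assuming the texter is a Python CA client such as pyepics; the address list below is a placeholder, and the actual startup script may export these in the shell instead.)

    # Point Channel Access searches at explicit servers/gateways instead of relying
    # on broadcast discovery on the cds vlan. The address list is a placeholder.
    import os

    os.environ["EPICS_CA_ADDR_LIST"] = "<gateway-or-server-addresses>"
    os.environ["EPICS_CA_AUTO_ADDR_LIST"] = "NO"

    # import epics  # any CA client import must happen after the environment is set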
WP7208 FRS9390
While we are vented in the corner station, the Y-Arm vacuum pressure gauge is running slightly above nominal (around 6.0e-09 Torr). The standard alarm level for cell phone text alarms is 5.0e-09 Torr for these gauges, resulting in a false alarm being sent every time I restart my alarm system.
I've opened a WP and FRS to temporarily increase the alarm level for H0:VAC-LY_Y4_PT124B_PRESS_TORR to 7.0e-09 until the corner station vent is complete. The code was restarted with this new configuration.
In addition, I modified the code to send a second daily keep-alive text (in addition to the 13:01/13:02 summary texts) to sysadmin only. This is configured to text at 20:00 to confirm the code is running.
D. Barker, S. Dwyer, J. Kissel, E. Merilh, C. Vorvick
While resuming alignment of the IMC this morning, we were confused and surprised to see the IMC mirrors and PZTs being driven around without our doing anything. The problem was that the MC WFS servos had triggered at around 19:10 UTC after we opened the light pipe and began shooting light down to MC2 TRANS, the SUM of which serves as the trigger PD**. Because the P and Y path from these servos' output -- all the way to each of the MC1, MC2, and MC3 mirrors -- is, by default, always on, this nonsense drive was pushing around the suspensions and PZTs. Because we don't want anything but humans driving the suspensions for the foreseeable future (until chamber doors start to go back on), I've taken the following disabling measures:
- Turned off the input to the MC1, MC2, and MC3 top mass (M1 LOCK) P and Y filter banks.
- Accepted this change in each suspension's safe.snap SDF file.
- Turned up the trigger on/off thresholds for the MC WFS (H1:IMC-IMC_TRIGGER_THRESH_ON / H1:IMC-IMC_TRIGGER_THRESH_OFF) from 40 / 20 to 5000 / 500.
- Accepted this into the safe.snap file of h1ascimc.
- Un-managed MC2 from the IMC_LOCK guardian.
- Moved the IMC_LOCK guardian to PAUSED so that it's no longer running (just a precaution, really; the MC WFS are all triggered via front-end logic).
- Cleared the history on all MC WFS servos.
- Restored all alignment offsets (PSL PZTs, MC1, MC2, and MC3) to their values reported at the close of business yesterday (see LHO aLOG 39299).
An as-of-yet unsolved mystery is why we're getting so many more sum counts on MC2 TRANS (during yesterday's flashing we hit 2 or 3 [ct] max; now we've hit 25 - 30 [ct]).
**The trigger is not just MC TRANS SUM, but H1:IMC-MC2_TRANS_SUM_OUT / H1:IMC-PWR_IN_NORM_MON. The denominator, H1:IMC-PWR_IN_NORM_MON, is set to 0.1. So with a sum of 25-30 [ct], the triggered value becomes 250-300, which surpasses the (former) threshold of 40 [ct].
Sheila has opened FRS Ticket 9387 to have the IMC / IMC WFS MEDM screens better reflect the trigger logic.
For the record: the above unsolved mystery of strangely high MC2 TRANS SUM was a result of an LED work lamp being left on, pointed at the +Y side (global IFO coordinates) of HAM3, which Jim had turned on to rotate and align the new PR2 scraper baffle (aLOG pending). Once we arrived at HAM3 for our afternoon session, we turned off the lamp, and the SUM returned to its expected value of 2-3 counts (supposedly [uW]) with the parachute cover OFF (while looking in the chamber at MC2's iris) and 0.5-0.9 [uW] when the parachute cover is on.
Dan, Dave:
Prior to this morning's DAQ restart I reconfigured h1nds1 (the default NDS) to serve the archived raw minute trend data which Dan had recovered over the weekend onto the new SATABOY RAID (a ZFS file system with compression enabled). I tested this to verify that minute trend data is now available back to the aLIGO DAQ epoch, which is 12:25 PST Wed 16 Jan 2013. The attached plot shows a PEM seismic channel for the whole year of 2013.
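(A minimal sketch of the kind of minute-trend request used for this check, assuming the nds2 Python client; the port, channel name, and GPS times below are placeholders, not the exact test.)

    # Sketch of a minute-trend fetch from h1nds1 using the nds2 Python client.
    import nds2

    conn = nds2.connection('h1nds1', 8088)         # assumed NDS1 port
    start, stop = 1041033616, 1072569616           # roughly calendar year 2013 (GPS)
    chan = 'H1:PEM-EY_SEIS_VEA_FLOOR_X_DQ.mean,m-trend'   # placeholder PEM channel
    bufs = conn.fetch(start, stop, [chan])
    print(len(bufs[0].data), 'minutes of data')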
The raw minute trends are stored on a compressed ZFS file system, built on a re-purposed SATABOY RAID. Compression is done in software on the Oracle/Sun Microsystems 4270 computer. Retrieving and plotting 500 days of minute trend data took h1nds1 only 5 seconds.
The compressed ZFS file system is 10.8TB in size. The current four years of raw minute trend data is consuming only 2.6TB of this (24%). We should be good for raw minute trend storage for many years to come.
Dan is now restoring all the archived minute trend data onto h1ldasgw1's SATABOY. ETA is Friday.