FMCS EPICS channels added
WP5631. John, Bubba, Patrick, Dave
The FMCS-to-EPICS system was modified to add the corner station control average temperature channel. Both the raw channel (in degF) and the degC channel were added to the IOC. The new channel names are:
H0:FMC-LVEA_CONTROL_AVTEMP_DEGF
H0:FMC-LVEA_CONTROL_AVTEMP_DEGC
These channels were added to the DAQ for trending by modifying H0EDCU_FMCS.ini
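For reference, the new entries in H0EDCU_FMCS.ini would follow the usual DAQ ini pattern of a bracketed channel name plus acquisition parameters; the field values below are illustrative from memory, not copied from the actual file:
[H0:FMC-LVEA_CONTROL_AVTEMP_DEGF]
acquire=3
datarate=16
datatype=4
[H0:FMC-LVEA_CONTROL_AVTEMP_DEGC]
acquire=3
datarate=16
datatype=4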
Beckhoff Vacuum Gauge channels added to EX and EY.
Patrick, Dave:
Please see Patrick's alog for full details. After the Beckhoff changes were made, the ini files were generated and copied into the DAQ chans area.
DAQ restart
Jim, Dave:
To apply the Beckhoff and FMCS changes, the DAQ was restarted at 12:32 PST. This was a very messy restart and it took about 25 minutes to get the DAQ back. The main issue was that the monit system on h1dc0 was trying too rapidly to start the data concentrator, resulting in duplicate ini and par files in the running configuration directory. In the end we started the data concentrator manually to get the system running. We will investigate the monit settings offline.
DIAG_MAIN guardian has code change, found error and fixed it
Jeff, Dave
After the long h1nds0 downtime, Jeff was getting the guardian nodes which use NDS back by reloading them. The DIAG_MAIN node would not run, giving an error saying float and int datatypes cannot be bitwise-ANDed. I found the error in DIAG_MAIN.py and fixed it by casting the ezca channel value to an int before the bitwise AND. We tested the new code by stopping and restarting the CW hw-inj. The operator saw the message on the Guardian MEDM and the verbal system announced the error every second.
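For illustration, here is a minimal, self-contained sketch of the failure mode and the fix; the bit mask and value are made up for the example and this is not the actual DIAG_MAIN.py code:

STOPPED_BIT = 0x2       # illustrative bit mask
epics_value = 3.0       # stands in for an ezca channel read, which returns a float

try:
    epics_value & STOPPED_BIT                 # what the original code tripped over
except TypeError as err:
    print("old code fails:", err)             # float and int cannot be bitwise-ANDed

print("fixed:", bool(int(epics_value) & STOPPED_BIT))   # cast to int before the AND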
h1lsc stuck testpoint cleared
Dave:
We had a stuck testpoint on h1lsc for some time; I cleared it.
Test change of filter module on h1pemmx
Dave:
To test the robot updates of filter files to daqsvn, I made a blank-line change to H1PEMMX.txt and loaded it on the front end.
An unnecessary trip of the ISI occurs every time the complete platform is de-isolated and then re-isolated.
The model code keeps the T240 saturations out of the watchdog bank for tripping the ISI whenever all the isolation gains are zero. But if the T240s are riled up, the saturations still accumulate. As soon as the T240 monitor has alerted the Guardian that the T240 has settled and the Guardian then starts isolating, the watchdog trips because the T240 saturation count is too high. This only trips the ISI, so the T240 does not get upset again, and after the operator has untripped the watchdog (clearing the saturations), the ISI isolates fine.
It seems we missed this loophole: if HEPI does not trip, the T240s often don't get too upset, so it isn't a problem. Otherwise, usually something is happening (EQ, platform restart, etc.) and the operator (Jim, and me too) just untrips it and chalks it up to whatever.
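As a toy illustration of the loophole described above (this is just a sketch of the logic, not the actual watchdog code; the threshold is made up):

SAT_THRESHOLD = 100     # illustrative trip threshold

sat_count = 0

def watchdog_step(t240_saturated, isolation_gain):
    """One simplified cycle: the count accumulates regardless of the isolation
    gains, but only feeds the trip decision once the gains are nonzero."""
    global sat_count
    if t240_saturated:
        sat_count += 1
    return isolation_gain != 0 and sat_count > SAT_THRESHOLD

# De-isolated: the T240 is riled up, the count grows, nothing trips.
for _ in range(150):
    assert not watchdog_step(t240_saturated=True, isolation_gain=0)

# First isolating cycle after the T240 has settled: the stale count trips the WD.
print(watchdog_step(t240_saturated=False, isolation_gain=1))   # True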
This should be fixed, and I'm sure Jamie/Hugo will have some ideas, but I suggest something like adding the lines:
reset (push) H1:ISI-{platform}_SATCLEAR
wait 60+ seconds
after line 51 in .../guardian/isiguardianlib/isolation/states.py
Issues: 1) The reset will clear all saturations, not just the T240s'. 2) The wait is required because the saturation bleed-off code still has the bug of needing a bleed cycle to execute, so any reset can take up to 60 seconds -- wasted time not locking/observing.
Integration Issue #1163 filed
JeffK HughR
Looking closer and studying, it looks like the model has logic to send a reset into the T240 WD when isolation starts, but it may have been fouled by the WD saturation bleed-off upgrade done a couple of months ago. Continuing.
I just checked and it looks like you have the latest models svn up'ed on your machines. We need to look into the models/code. My notes are attached.
Something that might be the issue: your version of /opt/rtcds/userapps/release/isi/common/src/WD_SATCOUNT.c is out of date (see below). It looks like there was a bug fix to the saturation counter code that you did not receive. Updating is pretty invasive (recompile/restart all the ISI models). We need to make sure that this will solve all the issues you pointed out first.
controls@opsws2:src 0$ pwd
/opt/rtcds/userapps/release/isi/common/src
On the SVN:
controls@opsws2:src 0$ svn log -l 5 ^/trunk/isi/common/src/WD_SATCOUNT.c
------------------------------------------------------------------------
r11267 | brian.lantz@LIGO.ORG | 2015-08-11 16:36:13 -0700 (Tue, 11 Aug 2015) | 1 line
fixed the CLEAR SATURATIONS bug - cleanup of comments
------------------------------------------------------------------------
r11266 | brian.lantz@LIGO.ORG | 2015-08-11 16:32:19 -0700 (Tue, 11 Aug 2015) | 1 line
fixed the CLEAR SATURATIONS bug
------------------------------------------------------------------------
r11131 | hugo.paris@LIGO.ORG | 2015-07-30 18:37:24 -0700 (Thu, 30 Jul 2015) | 1 line
ISI update detailed in T1500206 part 2/2
------------------------------------------------------------------------
On the computers at LHO:
controls@opsws2:src 0$ svn log -l 5 WD_SATCOUNT.c
------------------------------------------------------------------------
r11131 | hugo.paris@LIGO.ORG | 2015-07-30 18:37:24 -0700 (Thu, 30 Jul 2015) | 1 line
ISI update detailed in T1500206 part 2/2
------------------------------------------------------------------------
controls@opsws2:src 0$
J. Kissel, S. Karki, B. Weaver, R. McCarthy, G. Merano, M. Landry

After gathering the weekly charge measurements, I've compared H1 SUS ETMY ESD's relative pitch/yaw actuation strength change (as measured by the optical levers) against the longitudinal actuation strength (as measured by PCAL / ESD calibration lines). As has been shown previously (see LHO aLOG 22903), the pitch/yaw strength's slope trends very nicely along with the longitudinal strength change -- if you take a quick glance. Upon closer investigation, here are things that one begins to question:

(1) We still don't understand why the optical lever actuation strength assessments are offset from the longitudinal strength assessment after the ESD bias sign flip.

(2) One *could* argue that, although prior to the flip the eye-ball average of oplev measurements tracks the longitudinal strength, after the flip there are periods where two quadrants (magenta, in pitch, which is LR, from Oct 25 to Nov 8; black, in yaw, which is UR, from ~Nov 11 to Dec 06) track the longitudinal strength. As such, one *could* argue that the longitudinal actuation strength trend is dominated by a single quadrant's charge, instead of the average. Maybe.

(3) If you squint, you *could* say that the longitudinal actuation strength increase rate is slowly tapering off, whereas the optical lever strength increase *may* be remaining constant. One could probably also say that the rate of strength increase is different between oplevs and cal lines (oplev P/Y strength is increasing faster than cal line L strength).

All this being said, we are still unsure whether we want to flip the ETMY ESD bias sign again before the observation run is out. Landry suggests we either do it mid-December (say the week of Dec 14), or not at all. So we'll continue to track via optical lever, and compare against the longitudinal estimate from cal lines.

Results continue to look encouraging for ETMX -- ever since we've had great duty cycle, and turned off the ETMX ESD bias when we're in low noise and/or when the IFO is down, the charging rate has decreased. Even though the actuation strength of ETMX doesn't matter at the few % level like it does for ETMY (because ETMX is not used as the DARM actuator in nominal low noise, so it doesn't affect the IFO calibration), it's still good to know that we can get an appreciable effect by simply reducing the bias voltage and/or turning it off for extended periods of time. This again argues for going the LLO route of decreasing the ETMY bias by a factor of 2, which we should certainly consider doing after O1.

---------------

As usual, I've followed the instructions from the aWiki to take the measurements. I had much less trouble today than I had last week gathering data from NDS, which is encouraging. One thing I'd done differently was to wait a little longer before requesting the gathering and analysis (I waited until the *next* measurement had gone through the -9.0 and -4.0 [V] bias voltage points and started the 0.0 [V] point, roughly 5 minutes after the measurement I wanted to analyze ended). As such, I was able to get 6 and 4 oplev data points to compose the average for ETMX and ETMY, respectively (as opposed to the 3 and 1 I got last week; see LHO aLOG 23717).
Once all data was analyzed, I created the usual optical-lever-only assessment using
/ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts/Long_Trend.m
and saved the data here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-12-01_H1SUSETMY_ChargeMeasResults.mat

However, I'd asked Sudarshan to gather the latest calibration line estimates of the ESD longitudinal actuation strength (aka kappa_TST), which he gathered from his MATLAB tool that gathers the output of the GDS function "Standard Line Monitor." (He's promised me an updated procedure and an aLOG so that anyone can do it.) This is notably *not* the output of the GDS pipeline, but the answers should be equivalent. His data lives here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-12-01_Sep-Oct-Nov_ALLKappas.mat

Finally, I've made the comparison between oplev and cal line strength estimates using
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/CAL_PARAM/compare_chargevskappaTST_20151201.m
J. Kissel, G. Merano, J. Worden

In order to facilitate figuring out what's left on the chambers that might be charging the test masses (and also to compare against LLO, which has a few bonkers quadrants that had suddenly gained charge), I attach a drawing (apologies for my out-of-date SolidWorks version) of what gauges remain around the end-station chambers. The "Inficon wide-range gauge" is the BPG402-Sx ATM-to-UHV gauge, and the "Gauge Pair" are separate units merged together by LIGO.

Also, PS -- we're valving in the ol' ion pumps today (in their new 250 [m]-from-the-test-masses locations). Kyle and Gerardo are valving in the X-arm today (stay tuned for details from them).
Not sure what Jeff meant by "ol' ion pumps". Kyle and Gerardo valved in a "bran' new ion pump" at the 250 m location. The ol' ion pump remains mounted in the end station but valved out from the chamber. Only the X-arm pump has been valved in at the 250 m location. The Y-arm pump has yet to be baked prior to opening to the tube.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=23916
Finishing up with maintenance window and getting ready to relock. Nutsinee and Jeff did a sweep of the LVEA.
Attached are plots of the IOO IMs, IM1-4, and their alignments in pitch and yaw. Also plots of IM4 Trans pitch and yaw over 90 days.
Plot 1: IM4 Trans pitch and yaw
Plot 2: IM4 Trans and IM1-4 pitch
Plot 3: IM4 Trans and IM1-4 yaw
In all plots, changes can be attributed to both alignment shifts after shaking (HAM2 ISI trips / earthquakes) and intentional alignment adjustments.
Current alignment: DAMP_P and DAMP_Y signals
Robert and Nutsinee tested the noise from an additional cleanroom in the Optics Lab in alog 23169. With Michael's OK, I've turned this cleanroom on and expect it to be on for the duration of O1, as we use it to inventory outstanding hardware.
LLO and LHO use somewhat different RZ blends on St1 of our BSCs. Because the T240s have an RZ coupling to Z drives, we use a high CPS blend to avoid injecting this coupling into the control loops. The blends are shown in my first plot. The low-pass CPS (red) is common to both, but LLO uses a CPS/L4C blend (their L4C high pass is dashed green), while LHO is using a blend that has both the T240 (light blue) and L4C (brown). RichM's seismic log 647 shows that the CPS/L4C blend ends up injecting a bunch of L4C noise at low frequency. Arnaud and Rich have worked on testing a CPS/L4C blend that rolls the L4C off more at low frequency, see LLO alog 21941. The blend that LHO uses rolls off the L4C more, but replaces that signal with possibly spurious T240 signal. I tested that this morning and it looks like this is not a problem for LHO. The second plot is the CPS and oplev for ITMX, the third plot is L4C and T240; references are with the L4C/CPS RZ blend, the live traces are with the T240/L4C/CPS blend. The CPS shows that the L4C-only blend does indeed move a lot more at low frequency, though the oplev doesn't see any difference. The T240 and L4C don't really see a difference, but the T240 is suspect and the L4C is a poor low-frequency sensor.
There is one reason why LHO may be okay with T240s in our RZ blend while LLO may not: we are using HEPI sensor correction to offload low-frequency Z isolation (and thus, drive) to HEPI. Arnaud has also successfully tried lower blends on St1 with L4C/CPS RZ blends, something I want to test, which may not work as well at LHO because we are using T240s in RZ. I would also like to try Rich's new 750 mHz blend, with and without T240s, but all of that will probably wait until after O1.
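For anyone unfamiliar with what a blend is, here is a minimal illustrative sketch: the displacement sensor (CPS) goes through a low pass and the inertial sensor (L4C/T240) through the complementary high pass, so the two paths sum to unity. The 2nd-order shape and the 45 mHz blend frequency are just examples, not the actual ISI filters:

import numpy as np

f = np.logspace(-2, 1, 500)                       # Hz
s = 2j * np.pi * f
w = 2 * np.pi * 0.045                             # example 45 mHz blend frequency

cps_lp = w**2 / (s**2 + 2 * 0.7 * w * s + w**2)   # toy 2nd-order CPS low pass
inertial_hp = 1 - cps_lp                          # complementary inertial-sensor high pass
print(np.allclose(cps_lp + inertial_hp, 1.0))     # complementary by construction: True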
J. Oberling, P. King
I reset the 35W FE watchdog this morning at 18:24 UTC (10:24 PST). I also noticed that the crystal chiller was low so I added 125 mL of water to top it off. While doing that Peter noticed the warning light on the diode chiller that coincides with low water was lit, so we added water until the light went off.
After yesterday's report that the CER temperature was high (alog 23826), Bubba made some adjustment of the diverter (???) for the CER yesterday, which made the temperature go down by 0.5 C, and the RF output level got somewhat higher.
The second attachment shows that the temperature jump in the CER on 27/Nov (red vertical lines) didn't coincide with the time the adjustment was made to the air handler (blue vertical lines). Though we don't understand why the temperature went up at that time, since no catastrophe was observed and the temperature seems to be well regulated, I guess we need to operate like this.
Note that the PEM temperature screen is reporting many out-of-tolerance errors, including CER and Sup. It doesn't prevent us from going to observation, but once we've determined that the current temperature is good, I think we'd better update either the tolerance or the nominal temperature.
The beam alignment into the reference cavity was tweaked. Most of the alignment was done by adjusting the mirror mounts on the periscope. The other mounts did not yield as much gain as the ones on the periscope. With the pitch adjustments the transmission signal went from ~0.75-0.78 to ~1.16. Relatively minor tweaks were made to the AOM alignment without any big improvements, which suggests that the adjustments for the AOM are in the mid-range. Went back to the periscope adjustments and adjusted yaw, which improved things to ~1.2. Walking the beam in yaw, followed by small pitch adjustments, improved the signal to ~1.5 with the HEPA fans on and the ISS unlocked.

The power into the reference cavity was measured to be 30.1 mW (measured with an Ophir stick calorimeter). Both mounts on the periscope were locked as best as possible. The transmission signal was 1.52. The signal on the RFPD was:
unlocked: -245 mV
locked: -40 mV
offset: -2 mV
With the ISS on, the reference cavity transmission was measured to be 1.54.

Attached, for reference, is the camera image of the various spots from the cavities. As an aside, the recent drift in reference cavity transmission seems to coincide with some jumps in the pre-modecleaner temperature.

Jason, Ed, Peter
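As a footnote to the RFPD numbers above, they give the cavity visibility. A quick arithmetic check, assuming the usual definition (fractional dip of the offset-subtracted reflected signal when locked):

unlocked, locked, offset = -245e-3, -40e-3, -2e-3         # volts, from the readings above
visibility = 1 - (locked - offset) / (unlocked - offset)  # fractional dip on lock
print(round(visibility, 3))                               # ~0.844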
This morning before maintenance really took off, we tried switching blends on all of the BSCs while the IFO was still locked. Contrary to my findings a couple of weeks ago (alog 23610), it now seems like it's possible to switch blends while the IFO is locked without breaking the lock. We started out carefully, switching only the ETMs, one at a time, then the corner station ISIs. We then tried a little faster, doing the corner station all at once, then both ETMs. Finally, we switched all chambers at once. For each of these tests, we switched from our nominal 90 mHz blends to the 45 mHz blends and then back. The lock survived each switch, although the ASC loops would ring up some, especially when we switched the ETMs. The corner station ISIs didn't seem to affect the ASC as much.
The only IFO difference I know of between the last time I looked at this and now is that Hugh went down and recentered ETMY's T240s. The environment is also different today, with pretty quiet winds (<10 mph) and only moderate microseism (rms < 0.5 microns).
The attached trends are for the ASC DC Hard, DC Soft, ETM oplevs, and corner station oplevs. Similar to what I found a while ago, ETMY seemed to have the biggest effect on the IFO (based on what we saw on the ASC FOMs in the control room), although the ITMY oplev actually moved more. Still, the oplevs didn't see more than about 1.5 microradians of motion at any point.
You can also tell which blends were running from the eye-ball rms of the ASC signals: the 90 mHz blends don't filter out the microseism, which is moderate today, so the ASC pitch signals get noisier. This is also visible on the oplevs.
This test makes us ~80% confident that we can switch blends while locked, with the caveat that the current environment is not very extreme. If it's windy or the microseism is high, the answer could change.
Title: 12/01/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: 16:00 (08:00), The IFO locked at NOMINAL_LOW_NOISE, 22.3 W, 79 Mpc. In Commissioning mode.
Outgoing Operator: TJ
Quick Summary: Start of maintenance window.
Doubling these limits should virtually eliminate locklosses of the nature reported here.
The SDF has been updated, both OBSERVE and safe files, and they are committed to the svn.
Title: 12/1 OWL Shift: 08:00-16:00UTC (00:00-8:00PDT), all times posted in UTC
State of H1: Maintenance Tuesday
Shift Summary: Still cruising on a 28-hour lock. The DARM spectrum was getting a tiny bit noisy at frequencies < 50 Hz for the last hour or two (that I noticed, at least). Since LLO was down and the impending maintenance day was upon us, I didn't look too far into it.
Incoming Operator: Jeff B
Activity Log:
Just checking on the reference cavity transmission, which continues to fall. The ambient temperature didn't change much. Doesn't appear to be due to variation in the pre-modecleaner transmission, nor temperature.
Starting some maintenance tasks since LLO has been out. We are currently still locked, but that won't last for long.
I'm testing a new commissioning reservation system I wrote this afternoon using Python. This is a file-based system replacing the old EPICS system. The main reason for the change is that the old EPICS system developed problems related to updates and required a lot of maintenance. It was also awkward to configure and restrictive in what it could do.
The reservation file is /opt/rtcds/lho/h1/cds/reservations.txt
There are three python scripts:
make_reservation.py is available to all users; it allows you to create your reservation.
display_reservations.py loops every second and shows the currently open reservations.
decrement_reservations.py is run as a cron job every minute; it decrements the time-to-live of each reservation and deletes reservations when they expire (a sketch of this step is below).
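As a rough illustration of how the cron step might work, here is a minimal sketch in Python. The one-reservation-per-line, pipe-separated file format and the function name are assumptions for the example, not the actual implementation:

import os

RES_FILE = '/opt/rtcds/lho/h1/cds/reservations.txt'   # reservation file named above

def decrement(path=RES_FILE):
    """Knock one minute off each reservation's time-to-live; drop expired ones."""
    if not os.path.exists(path):
        return
    kept = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            fields = line.rstrip('\n').split('|')     # assumed format: system|name|task|contact|minutes_left
            minutes_left = int(fields[-1]) - 1        # one cron tick = one minute
            if minutes_left > 0:                      # expired reservations are deleted
                kept.append('|'.join(fields[:-1] + [str(minutes_left)]))
    with open(path, 'w') as f:
        for entry in kept:
            f.write(entry + '\n')

if __name__ == '__main__':
    decrement()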
I have a display running on the left control room TV closest to the projector screen.
Here is the usage document for the make_reservation.py script
controls@opsws6:~ 0$ make_reservation.py -h
usage: make_reservation.py [-h] system name task contact length
create a reservation for a system
positional arguments:
system the system you are reserving, e.g. model/daq
name your name
task description of your task
contact your contact info (location, phone num)
length length of time (d:hh:mm)
optional arguments:
-h, --help show this help message and exit
If you need to use spaces, surround the string with quotes. There are no limitations on string contents. Here is an example reservation:
make_reservation.py "PEM and DAQ" david.barker "update PEM models, restart DAQ" "phone 255" 0:2:0
Here's an example, reserving Nutsinee for 1 hour on TCSX / BSC3: opsws8:~$ make_reservation.py TCSX nutsinee 'BSC Temp Sensor Replacement' LVEA 00:01:00
Due to the extended DAQ downtime, there are no H1 DAQ frames between the GPS times:
1133036992 (Dec 01 2015 20:29:35 UTC)
1133038720 (Dec 01 2015 20:58:23 UTC)
These times are those of the last frame file before the outage and the first file after recovery.
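As a quick cross-check of the gap, using gwpy (if it is available on the workstations) for the GPS-to-UTC conversion; the duration arithmetic is exact:

from gwpy.time import from_gps
print(from_gps(1133036992), "to", from_gps(1133038720))
print(1133038720 - 1133036992, "seconds of missing frames")   # 1728 s, about 29 minutes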
I have modified the FMCS corner station overview MEDM screen to show the new Control Average temperature (both degF and degC). The new channels are located in the top right corner of the screen.