We lost the changes to the ASC model that we made before to add POP90 to ADS. I added it again, and did a make but not a make install. Hopefully we can do this tomorrow, if needed after the timing system work, so that we can make a dither loop for SRM. This is svn version 13780.
I restored all the HEPI safe.snap files for SDF review. HAM1 sensor correction was the only setting that needed accepting.
For the BSC ISIs, I accepted all the blend filter changes--these arose from Jim's filter bank cleanup. The resolution of all the other diffs between the OBSERVE state and the safe.snap file is captured in the attached snapshots of the SDF diff screen, taken before they were confirmed.
I'll do the HAM ISI in the morning.
17:14 John back
18:00 back to locking after SDF party
19:43 Tested Fast Shutter function. Fil witnessed the proper operation.
22:55 Hand off to Jeff
Over-filled CP3 with the exhaust bypass valve fully open and the LLCV bypass valve 1/2 turn open.
Flow was noted after 141 seconds; the LLCV valve was closed, and 3 minutes later the exhaust bypass valve was closed.
At 21:11 the ITMX bias was ramped to 9.5 V DAC output over ten seconds, and at 21:12 the ITMY bias was ramped on over ten seconds. The interferometer was in DC_READOUT_TRANSITION with 2 W. There was no apparent disturbance.
These settings have been incorporated into the SUS_PI guardian such that when it enters QPD_PI_DAMPING the bias is ramped on. This currently happens in the ISC_LOCK guardian's DC_READOUT state. The bias is ramped to zero when entering the IFO_DOWN state.
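For illustration only, the ramp logic can be pictured as a small guardian-style state like the sketch below. This is not the actual SUS_PI code; the L3 bias channel names and the GuardState/ezca boilerplate are assumptions made for the example.
=============================
from guardian import GuardState

# Sketch only -- not the real SUS_PI guardian. Channel names below
# (SUS-<OPTIC>_L3_LOCK_BIAS_OFFSET and its _TRAMP partner) are assumed
# for illustration. ezca is provided by the guardian environment.
BIAS_RAMP_TIME = 10   # seconds, matching the ten-second ramps quoted above
BIAS_VOLTS = 9.5      # DAC output requested for the ITMs

class QPD_PI_DAMPING(GuardState):
    def main(self):
        for optic in ['ITMX', 'ITMY']:
            ezca['SUS-%s_L3_LOCK_BIAS_TRAMP' % optic] = BIAS_RAMP_TIME
            ezca['SUS-%s_L3_LOCK_BIAS_OFFSET' % optic] = BIAS_VOLTS
        self.timer['biasRamp'] = BIAS_RAMP_TIME

    def run(self):
        # done once the ramp time has elapsed; the DOWN path would ramp
        # the same offsets back to 0.0
        if self.timer['biasRamp']:
            return True
=============================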
J. Kissel, J. Driggers, B. Weaver, E. Merilh, E. Hall, H. Radkins, D. Sigg
In prep for the timing system software update, which will need all front-ends to be turned off and restarted tomorrow, we've reconciled the SDF system in the ISC_LOCK guardian's "DOWN" state (note that in this state some front ends are using the "safe", some the "down", and some the "OBSERVE" snap files). Notable things that we accepted:
- Bounce and roll monitoring FMs (H1:OAF-BOUNCE_[ETMX, ETMY, ITMY, ITMX]) were found to have FM3 (a broad band-pass) in place instead of FM4 (a narrow band-pass). We reverted to have FM4 ON and FM3 OFF.
- The PCAL Y 1083.7 Hz line has been off since June 28. We couldn't find an aLOG, but I recall we turned the 1 kHz line off to preserve range on the PCAL. We've accepted that it's off.
- The Beam Splitter M2 stage LOCK_P and LOCK_Y have bounce and roll mode notching filters for both the BS's bounce and roll modes at 17.7 & 25.7 Hz (FM5 "BounceRoll"), as well as for the QUADs at 9.63 and 13.4 Hz (FM7 "EvanBR"). We found that FM7 had been OFF since Jun 28th, and we think we want it ON, so that MICH isn't affected by the ITM bounce and roll modes when actuated in pitch and yaw.
- We found the IMC Master WFS gain at 0.09 instead of 0.1, and it has been so since July 13th. Seems silly, so we reverted to the gain of 0.1.
- The newish Daniel-style integrators for the POP X WFS DC centering servo were found with their bleed-off engaged (via H1:ASC-POP_X_PZT_PIT/YAW_BLEEDEN), as desired since Jun 28. We accepted the bleed-off as engaged.
- The INP1 yaw offset was previously -800, and is now 0.0. We accepted the 0.0, because this offset was used for PRC gain spelunking, which is now no longer in favor.
After we went through one round of clearing out, accepting, and reverting, we got mid-way through the lock acquisition sequence and lost lock back to the DOWN state again. For any further channels and changes that occurred (especially on the suspensions), I UN-monitored those channels.
Also, we added a limiter to the ISS third loop, but couldn't find this channel in SDF, even in the full list.
Also, note that even though people reconciled SDF with the mask applied, there are still diffs if you look at all channels; I'm not sure all of these channels are set by guardian to what they should be.
I looked at the HEPI tidal offloading signals from one of Friday's locks to try to extract the power circulating in the arms.
As the interferometer powered up from 2 W to 40 W, the offloading signals compensated for 4.3(3) µm of radiation-induced displacement. This amounts to 130(14) kW of power circulating in each arm for 40 W of PSL power. The uncertainty comes from (1) the difference between the X and Y offloading signals, (2) nonlinear drift in the earth tides (only the linear component has been subtracted here), and (3) an assumed 10% uncertainty in the displacement calibration.
On the other hand, if we believe that there was 95(10) kW of power in each arm given 22.5 W of PSL power, we would expect 169(18) kW of power in each arm at 40 W, assuming no power-dependent arm loss. The discrepancy between the expectation and the observation amounts to 30(20)% missing power. This is consistent with previous estimates of the recycling gain loss during power-up.
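For reference, here is a naive version of the scaling arithmetic above, using only the numbers quoted in this entry; the quoted 30(20)% presumably folds in systematics beyond this simple quadrature combination.
=============================
import math

# Numbers quoted above
P_meas, dP_meas = 130e3, 14e3   # measured circulating power at 40 W PSL input [W]
P_ref, dP_ref = 95e3, 10e3      # reference measurement at 22.5 W PSL input [W]
P_in_ref, P_in = 22.5, 40.0     # PSL input powers [W]

# Linear scaling, assuming no power-dependent arm loss
P_exp = P_ref * P_in / P_in_ref
dP_exp = dP_ref * P_in / P_in_ref
print('expected: %.0f(%.0f) kW' % (P_exp / 1e3, dP_exp / 1e3))   # ~169(18) kW

# Fractional missing power relative to expectation, uncertainties in quadrature
missing = 1 - P_meas / P_exp
dmissing = (P_meas / P_exp) * math.hypot(dP_meas / P_meas, dP_exp / P_exp)
print('missing: %.0f(%.0f)%%' % (100 * missing, 100 * dmissing))  # ~23(12)% with this naive treatment
=============================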
This constraint could be improved by (1) making the PSL power increase faster, so that the earth tide behavior can be better constrained, and (2) turning off the HEPI offloading and instead watching the UIM offloading.
While debugging one of my MEDM user screens, I noticed some odd behaviour with the FSS. It wasn't going out of lock or oscillating, but the PZT voltage exhibited some small discrete jumps. I then noticed that this was perhaps due to the ISS acquiring and breaking lock. Attached is a plot of the last hour's data. There's a gap in the data stream starting at about 06:40 and lasting about 13 minutes. The AOM drive should hover somewhere between 0.500 and 0.550. This is a better indicator of how things are (at the moment) than the % diffracted power, as this is close to where the response is linear; below this value the response becomes somewhat parabolic.
Richard called saying the laser was off when he entered the Control Room this morning. According to the laser status screen, the laser tripped due to a crystal chiller flow rate error, somewhere in the power meter circuit. Attached are two plots of the previous day's trend data. The signal from head 2 is somewhat noisier than the signals from the other 3 heads. Looking back even further, it seems to have been that way for at least the last 6 days. Unfortunately we do not have enough diagnostics to pinpoint the location of the potential blockage, and at this point it is a bit of a waiting game unless time is scheduled for invasive maintenance. After restarting the laser at ~06:30, things seem to have settled down by 06:40.
Attached is a plot of the high power oscillator's PZT voltage during the warm up time. During the warm up, the injection locking broke lock once. Also attached is a trend plot of the PZT voltage. As another side note, I happened to notice that the FSS locked quite happily even though the noise eater was off. This might be due to the reduced FAST gain. Both shutters were open when the laser shut down.
I set up a magnetometer at EY near the ESD chassis in the VEA for a blip glitch study (see upcoming log from Paul). The plot shows the channel names (12 and 13 are the Y and Z axes) and very high coherence between the mag and DARM at odd harmonics of 1 Hz. This is one of the new combs that Ansel identified and so this channel may also be useful in seeing if it goes away during the reboots on Tuesday.
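For anyone reproducing this kind of check, below is a minimal sketch of computing coherence and reading off the odd 1 Hz harmonics. The sample rate, segment length, and placeholder time series are assumptions; in practice the arrays would be the EY magnetometer and DARM data fetched from the frames.
=============================
import numpy as np
from scipy import signal

fs = 256.0                     # common sample rate [Hz], assumed
t = np.arange(0, 600, 1 / fs)  # 10 minutes of placeholder data

# Placeholder series standing in for the magnetometer and DARM channels;
# a weak odd-harmonic 1 Hz comb is injected into both.
comb = sum(np.sin(2 * np.pi * h * t) for h in range(1, 30, 2))
mag = comb + np.random.randn(t.size)
darm = 0.1 * comb + np.random.randn(t.size)

# ~64 s segments give enough frequency resolution to separate 1 Hz lines
f, coh = signal.coherence(mag, darm, fs=fs, nperseg=int(64 * fs))

# Report the coherence in the bins nearest the odd harmonics of 1 Hz
for h in np.arange(1, 30, 2):
    i = np.argmin(np.abs(f - h))
    print('%5.1f Hz  coherence %.2f' % (h, coh[i]))
=============================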
Carl, Evans, Stefan,
We had h1ecatx1 crash. A reboot didn't bring back the connection - we had to log in and manually start start.bat.
Then we ran into strange guardian behaviour, which was tracked down to epics channels having different values on h1guardian0 and operator machines.
In particular, on h1guardian0:
In [8]: ezca['ALS-Y_LOCK_ERROR_FLAG']
Out[8]: 1
while on operator 0 I get:
In [7]: ezca['ALS-Y_LOCK_ERROR_FLAG']
Out[7]: 0
The problem was with an EPICS gateway, which was stuck with the incorrect value. I restarted the gateway between the slowcontrols-lan and the fe-lan; guardian now connects directly to the h1ecaty1 Beckhoff IOC and is seeing the correct value.
Here are the diagnostics:
On the workstation nucws20, I did a 'caget -d 5' to return an integer value rather than the enumerated string
david.barker@nucws20: caget -d 5 H1:ALS-Y_LOCK_ERROR_FLAG
H1:ALS-Y_LOCK_ERROR_FLAG
Value: 0
Same command on h1guardian0
controls@h1guardian0:~ 0$ caget -d 5 H1:ALS-Y_LOCK_ERROR_FLAG
H1:ALS-Y_LOCK_ERROR_FLAG
Value: 1
More information can be obtained with the cainfo command
controls@h1guardian0:~ 0$ cainfo H1:ALS-Y_LOCK_ERROR_FLAG
H1:ALS-Y_LOCK_ERROR_FLAG
State: connected
Host: h1egw0.cds.ligo-wa.caltech.edu:42076
Access: read, write
Native data type: DBF_ENUM
Request type: DBR_ENUM
Element count: 1
CA.Client.Exception...............................................
Warning: "Identical process variable names on multiple servers"
Context: "Channel: "H1:ALS-Y_LOCK_ERROR_FLAG", Connecting to: h1egw0.cds.ligo-wa.caltech.edu:42076, Ignored: h1ecaty1.cds.ligo-wa.caltech.edu:5064"
Source File: ../cac.cpp line 1297
Current Time: Sat Jul 16 2016 19:56:42.065413401
..................................................................
After the errant gateway was restarted:
controls@h1guardian0:~ 0$ cainfo H1:ALS-Y_LOCK_ERROR_FLAG
H1:ALS-Y_LOCK_ERROR_FLAG
State: connected
Host: h1ecaty1.cds.ligo-wa.caltech.edu:5064
Access: read, write
Native data type: DBF_ENUM
Request type: DBR_ENUM
Element count: 1
controls@h1guardian0:~ 0$ caget -d 5 H1:ALS-Y_LOCK_ERROR_FLAG
H1:ALS-Y_LOCK_ERROR_FLAG
Value: 0
I've opened an FRS ticket for this; we should either remove the option of having two paths for guardian connections to remote IOCs, or ensure that only one connection is reliably used.
https://services.ligo-la.caltech.edu/FRS/show_bug.cgi?id=5892
Matt, Stefan
The switching of coil drivers to low noise takes quite a long time with the current guardian code (20+ seconds per coil, with 20 coils, so over 400 seconds, or about 7 minutes). This guardian state was recently updated to make it more responsive (code moved into run(), and timers used rather than sleeps; see 28353), but that didn't make it any faster. I have now changed the code to switch all of the optics together rather than serially (see below), which should reduce the total time to ~90 seconds. The old function is still in ISC_LOCK.py as COIL_DRIVERS_SLOW.
We have not locked yet, so this code is untested.
=============================
def run(self):
    eul2osemTramp = 8
    analogExtraSleep = 7
    path = '/opt/rtcds/userapps/release/isc/h1/scripts/sus/'
    optics = ['PRM', 'PR2', 'SRM', 'SR2', 'BS']     # what optic?
    stage = ['M3', 'M3', 'M3', 'M3', 'M2']          # what stage are we switching?
    newState = [3, 3, 3, 3, 3]                      # what is our final coil state?
    opt_stage_state = zip(optics, stage, newState)  # tuple of these
    coils = ['UL', 'UR', 'LL', 'LR']
    coil = coils[self.coil_num]
    log(str(self.reset_counter))
    if self.reset_counter == 0:
        # first step, set matrix values
        for opt, stg, state in opt_stage_state:
            log('-- Switching ' + opt + ' ' + coil)
            ezca.burtwb(path + opt.lower() + '_' + stg.lower() + '_out_' + coil.lower() + '.snap')
            time.sleep(0.1)
            ezca['SUS-' + opt + '_' + stg + '_EUL2OSEM_LOAD_MATRIX'] = 1
        self.timer['mtrxRamp'] = eul2osemTramp
        self.reset_counter = 1
    elif self.reset_counter == 1 and self.timer['mtrxRamp']:
        # second step, clear filters
        for opt, stg, state in opt_stage_state:
            ezca['SUS-' + opt + '_' + stg + '_COILOUTF_' + coil + '_RSET'] = 2
        self.timer['extraSleep'] = analogExtraSleep
        self.reset_counter = 2
    elif self.reset_counter == 2 and self.timer['extraSleep']:
        # third step, switch coil drivers
        for opt, stg, state in opt_stage_state:
            ezca['SUS-' + opt + '_BIO_' + stg + '_' + coil + '_STATEREQ'] = 1  # go to intermediate state
            time.sleep(0.1)
            ezca['SUS-' + opt + '_BIO_' + stg + '_' + coil + '_STATEREQ'] = state
            time.sleep(0.1)
        self.reset_counter = 3
    < ... more run code ... >
=============================
The wind direction part of the anemometers still seems to not be reading correctly. There was some discussion on the SEI call today that maybe it had started working, but a look at the DetChar page at https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20160714/pem/wind/ shows good info for wind speed, but nothing sensible for wind direction. I put some data into the SEI log about wind direction as seen at the Tri-cities airport. https://alog.ligo-la.caltech.edu/SEI/index.php?callRep=1035 The FAA and Jim agree that strong winds come from the southwest.
Speaking as an ex hang glider pilot, I don't think we will get reliable wind measurements unless the sensors are placed well above the buildings. Here is a quote from the World Meteorological Organization:
https://www.wmo.int/pages/prog/www/IMOP/publications/CIMO-Guide/Prelim-2014Ed/Prelim2014Ed_P-I_Ch-5.pdf
John W. is right that the wind direction at our roof weather stations is not what it would be on a weather mast far above land topography. The wind affecting the buildings is funneled by the buildings themselves as well as by the berms and other topography around them. Since we care more about the direction of the wind that is flowing over and around the buildings than we do about the direction at altitude, I have not pushed to build weather masts as was done at LLO, but, of course, this means that the sensors do not read what masts read. Mast wind directions are available from the Hanford weather services (http://www.hanford.gov/page.cfm/HMS/RealTimeMetData); station 9 is closest to the corner station and station 1 is closest to EY, but I think you have to contact them to get historical data.
To reflect that our roof weather station direction sensors are very local sensors and do not report what meteorologists think of as wind direction, for aLIGO we started using our own X, Y coordinates (see the sketch after this list):
Wind travelling in +X direction (from corner station towards X end): 0 (degrees)
Wind travelling in the +Y direction (from the corner station toward EY): 270
Wind travelling in the -Y direction, EY to CS (approx. direction of typical storm): 90
Wind travelling in the -X direction, EX to CS (the other most common storm direction): 180
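As a minimal sketch, the convention above can be encoded as a simple conversion from a reading in degrees to a direction of travel in site (X, Y) coordinates. The function name is just for illustration, and intermediate angles assume the reading increases clockwise from +X, consistent with the four cardinal values listed above.
=============================
import math

def site_wind_vector(deg):
    """Convert a roof-station wind direction reading (degrees, aLIGO site
    convention above) into a unit vector along the direction of travel in
    site (X, Y) coordinates: 0 -> +X, 90 -> -Y, 180 -> -X, 270 -> +Y."""
    theta = math.radians(deg)
    return (math.cos(theta), -math.sin(theta))

# The four cardinal readings listed above
for deg in (0, 90, 180, 270):
    vx, vy = site_wind_vector(deg)
    print('%3d deg -> travel along (X, Y) = (%+d, %+d)'
          % (deg, int(round(vx)), int(round(vy))))
=============================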
That being said, the wind direction system has not yet been installed. This is because all sensors were broken by the beginning of aLIGO, long past their life span. Paul Schale and I installed a new direction sensor at EX in the summer of 2014 for BRS studies. I looked back at the data, and for some reason it starts in April of 2015, but from then on the data is good. The channel is: H1:PEM-EX_WIND_WEATHER_DEG. I just now went up on the roof and made sure that it was still functioning, with directions as given above. However, even this new sensor has problems typical of the Davis system: it sometimes produces huge values, I believe when the brushes lose contact around 0 degrees.
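Below is a small sketch of the kind of sanity cut one could apply before using H1:PEM-EX_WIND_WEATHER_DEG, given the occasional huge values mentioned above; the 0-360 bound and NaN masking are my choices for illustration, not an existing tool.
=============================
import numpy as np

def clean_wind_direction(deg):
    """Mask readings outside 0-360 degrees (the occasional huge values
    noted above), returning the data with bad samples set to NaN."""
    deg = np.asarray(deg, dtype=float)
    out = deg.copy()
    out[(deg < 0) | (deg > 360)] = np.nan
    return out

print(clean_wind_direction([10.0, 355.0, 1.2e6, 180.0]))  # -> [ 10. 355.  nan 180.]
=============================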
Let me say that for most studies, I am inclined to use proxies that are closer to what we care about than wind direction at one particular location on the roof. Thus for studies of how the BRS performs under different wind tilt conditions, such as dominant tilt direction, I suggest using the 0.03 to 0.08 Hz seismic band of uncorrected seismometers. This gives both tilt axes so that the performance can be compared when most of the tilt is in the Y direction or in the X direction (we only have real tilt sensors (BRS) at EX and EY for the beam axis direction, hence the need for a proxy). Of course earthquake spikes must be filtered out. This is how Dipongkar did his year long study of tilt behavior (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=17574, https://wiki.ligo.org/viewauth/DetChar/WindInducedTilt). Included in this study are plots showing how well this seismometer band correlates with the tilt as measured by the BRS.
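For concreteness, here is a minimal sketch of the band-limited proxy described above, assuming one already has an uncorrected ground-motion time series as a numpy array. The 0.03 to 0.08 Hz band is from the text; the sample rate, filter order, and averaging length are assumed placeholders, and earthquake spikes would need to be removed first, as noted.
=============================
import numpy as np
from scipy import signal

fs = 8.0                          # seismometer sample rate [Hz], assumed
t = np.arange(0, 3600, 1 / fs)    # one hour of data
ground = np.random.randn(t.size)  # placeholder for an uncorrected seismometer channel

# Band-pass to the 0.03-0.08 Hz band used as the wind-tilt proxy
sos = signal.butter(4, [0.03, 0.08], btype='bandpass', fs=fs, output='sos')
band = signal.sosfiltfilt(sos, ground)

# Band-limited RMS in 10-minute stretches, one number per stretch per axis
nseg = int(600 * fs)
rms = [np.sqrt(np.mean(band[i:i + nseg] ** 2))
       for i in range(0, band.size - nseg + 1, nseg)]
print(['%.3g' % r for r in rms])
=============================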
For the person installing the new anemometer/wind direction sensors (Hugh is considering taking this big job on): in order to reproduce the direction convention above, align the bar in the Y direction with the anemometer towards Rattlesnake Mountain. Use a CDS laptop displaying the weather screens for fine adjustment.
Robert
We are hoping to have the units replaced in time for O2. This will be the 3rd exchange on 2 of the buildings and the 4th on the rest.