= Ops Summary =
7:30 Richard and Ken to Mid X, End X, End Y to do heater work
7:45 Let in truck for LDAS work
7:50 Jeff in LVEA PSL area
8:03 Jeff and Peter K. to PSL
8:17 Jason and Ed to PSL
8:28 Turned off EY dust monitor. The alarm goes off constantly even when nobody's in the VEA.
8:31 Doug to LVEA (OPLEV work)
8:45 Kyle to bolt HAM1 west door
Filiberto running cable from HAM7
Betsy to LVEA
9:08 Andreas to Betsy in the LVEA
Sudarshan and his guest to LVEA for a tour
9:15 OMC down
Travis to LVEA (3IFO work)
9:35 Sudarshan to End Y
9:49 Doug back
9:52 Hugh at EY - HEPI pump down
Greg to LVEA (3IFO work)
Model reboot going on all morning
10:07 Betsy back from LVEA
10:31 Jeff B., et al. out of PSL
10:41 Picomotor HAM1, 3 disabled
10:50 Jeff B., et al. out of LVEA
11:00 Sudarshan back
11:26 Sudarshan to LVEA - put PEM patch panel on IOT2R and ISCT6 (WP5075)
12:46 Karen to Mid Y
12:51 Jeff B. to End X (Not VEA area)
13:15 Travis back to LVEA
13:37 Dave rebooted Guardian
13:48 Karen leaving Mid Y
14:03 Jeff B. to clean area
14:04 Ken the Electrician to End X
14:17 Jeff B. back
14:21 Ken out of End X
15:48 Travis and Betsy to LVEA
16:02 Kyle opening gate valves ~ 15 mins
16:06 Mitchell to LVEA
16:23 Mitchell back
Hard close on GV-5 and soft close on GV-7 all day. No green beam on the monitor.
Several systems have been brought down/rebooted
Caltech server was dead around 3:30 - 4 pm. Can't get data from ligo-wa; nds.ligo left me hanging.
Happy Maintenance Day.
(Kiwamu, Dave, Jim, Jeff, Daniel)
The LSC and OMC models have been re-partitioned, so that DARM and CARM (to the ifo) are processed in the OMC.
The current processing times are:
model | mean (µs) | max (µs)
---|---|---
lsc | 30 | 37
omc | 13 | 14
alsex | 32 | 35
alsey | 32 | 34
iscex | 5 | 7
iscey | 5 | 7
susetmx | 48 | 50
susetmy | 50 | 56
Only the OMC sends data to ETMX/Y, and only ISCEX/Y send data back to the LSC. With no more than 13 µs of processing time, one might expect there to be plenty of time to send the DARM signal to the ETMs without any IPC errors. But no! We still get one every couple of seconds. This is down from a ~10 Hz error rate. Strangely, the ALS, which picks up the common tidal signal and is located after the SUS in the fiber loop, does not see any errors. We now suspect a problem at the receiving end. More investigations ongoing.
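For scale, a back-of-envelope margin calculation (a sketch; it assumes these models run at the standard 16384 Hz front-end rate, which is not stated in the table):

MODEL_RATE_HZ = 16384
cycle_us = 1e6 / MODEL_RATE_HZ      # ~61 us available per cycle

omc_max_us = 14                     # OMC max processing time from the table
margin_us = cycle_us - omc_max_us   # headroom to ship DARM over IPC

print("cycle %.1f us, OMC max %d us, margin %.1f us"
      % (cycle_us, omc_max_us, margin_us))   # ~47 us of headroom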
GregG, KenM, MyronM, TJS, JimW
The last 3IFO BSC-ISI went in the can this afternoon. The whole operation went off relatively smoothly. The can is now sitting by the door, waiting for transport to its final resting place in the LVEA.
Installed the Variac system for the VEA heaters at both end stations. Had quite the temperature excursion (69 °F) due to the feedback to the controller not being properly set up; for some reason in this state the Variacs go to full on. We corrected this and the system is now functioning. We still have problems at EX: half of the heater elements are bad, so we are only running on a single stage at most. I have the Y end set to 7 mA and X to 8 mA. Will probably turn Y down as the temperature settles, to require less cooling for control. At some point we will convert over to having the heater in the control loop, but that will wait.
[Stuart A, Jeff K]

As was recently carried out at LLO (see LLO aLOG entry 16663), during this morning's maintenance period I was able to roll out independent switching of the BSFM M2 stage coil driver BIO filter states, as outlined in ECR E1500045. Hopefully, this should provide commissioners with the capability to smoothly (without breaking lock) transition from the acquisition to the low-noise coil driver state, just for the Beamsplitter M2 stage, which was problematic at LLO. This was implemented as follows:

(1) Svn'd up the common library parts:
/opt/rtcds/userapps/trunk/sus/common/models/
U STATE_BIO_MASTER.mdl
U BSFM_MASTER.mdl

(2) Svn'd up the common MEDM screens:
/opt/rtcds/userapps/trunk/sus/common/medm/bsfm/
U SUS_CUST_BSFM_OVERVIEW.adl
U SUS_CUST_BSFM_BIO.adl
U SUS_CUST_BSFM_M2_EUL2OSEM.adl

(3) Rebuilt, installed and restarted the h1susbs model without incident.

(4) Noticed some white boxes on the EUL2OSEM Ramping Matrix MEDM screen, which were site-specific to L1, so I fixed these with a $(IFO) substitution and re-committed SUS_CUST_BSFM_M2_EUL2OSEM.adl to the svn.

(5) 4 new EPICS channels are present for the M2 BIO states, as well as an additional 4 for the test coil enable, all of which need to be configured:
- All coil enables were set to 1, and the BSFM M2 BIO state was set to the 'default' of 1, which I was informed by Jeff K was being used by commissioners (n.b. the LLO 'default' acquisition is BIO state 2, and low-noise is BIO state 3; see LLO aLOG entry 16565).

Screenshots below show an example use case for the EUL2OSEM Ramping Matrix, with green indicating a nominal state (i.e. set and current values are equal), red when a new value has been set, and yellow after activating the LOAD MATRIX, ramping in over the required duration.

Finally, I took a new safe SDF snapshot using the same front-end technique I use at LLO:
- Transition the suspension to a SAFE state via Guardian.
- On the SDF_RESTORE MEDM screen (available via the suspension GDS_TP screen), select FILE TYPE "EPICS DB AS SDF" and FILE OPTIONS "OVERWRITE", then click the "SDF SAVE FILE" button to push an SDF snapshot to the target area (which is soft-linked to userapps).
- This safe SDF snapshot was then checked into the userapps svn:
/opt/rtcds/userapps/release/sus/h1/burtfiles/
M h1susbs_safe.snap

This should close ECR E1500045 and Integration Issue #1003 for LHO.
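As an aside, here is a minimal sketch of driving the LOAD MATRIX cycle described above from Python with ezca. The _SETTING_row_col, _TRAMP, and _LOAD_MATRIX channel suffixes are my reading of the RCG ramping-matrix convention and should be verified against the MEDM screen:

import ezca

# Prefix handling may need adjusting for your environment.
ez = ezca.Ezca(prefix='SUS-')

# Set a new matrix element (screen element goes red: set != current).
ez.write('BS_M2_EUL2OSEM_SETTING_1_1', 0.0)

# Pick a ramp duration (illustrative value), then activate LOAD MATRIX;
# the element goes yellow while ramping, and back to green once the set
# and current values agree.
ez.write('BS_M2_EUL2OSEM_TRAMP', 3.0)
ez.write('BS_M2_EUL2OSEM_LOAD_MATRIX', 1)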
LLO scripts for transitioning the BSFM M2 coil driver from acquisition (BIO state 2) to low-noise (BIO state 3) once IFO RF LOCK is achieved have been checked into the repo and svn'd up here at LHO:

/opt/rtcds/userapps/release/lsc/l1/scripts/transition/als/
A bs_m2_switch.sh
A bs_m2_out_normal.snap
A bs_m2_out_ll.snap
A bs_m2_out_lr.snap
A bs_m2_out_ul.snap
A bs_m2_out_ur.snap

The above script applies a BURT snapshot to the BSFM M2 EUL2OSEM Ramping Matrix, zeroing one quadrant/channel at a time and using the other 3 to actuate while its state is transitioned. Once transitioned, the script moves on to the next quadrant until all 4 are complete. This functionality should be incorporated into a Guardian transition state.
I copied these scripts and the BURT files into userapps/release/isc/h1/scripts/sus, replaced L1 with H1 throughout, and committed them to the SVN.
The script seems to work fine in terms of writing the right values to the right channels. We haven’t tried it yet with the interferometer locked.
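Toward that Guardian incorporation, a minimal sketch of the quadrant-at-a-time logic in Python/ezca follows. The per-quadrant BIO state-request channel name is an assumption, and the BIO states follow the LLO convention quoted above (2 = acquisition, 3 = low-noise):

import time
import ezca

ez = ezca.Ezca(prefix='SUS-')  # prefix handling may need adjusting
RAMP_S = 5.0                   # illustrative ramp time

for quad in ['ll', 'lr', 'ul', 'ur']:
    # Zero this quadrant's drive and let the other three hold the optic.
    # (The real script does this by applying bs_m2_out_<quad>.snap to the
    # EUL2OSEM ramping matrix.)
    ez.write('BS_M2_EUL2OSEM_TRAMP', RAMP_S)
    # ... write the bs_m2_out_<quad>.snap matrix values here ...
    ez.write('BS_M2_EUL2OSEM_LOAD_MATRIX', 1)
    time.sleep(RAMP_S)

    # Switch this quadrant's coil driver to low-noise (assumed channel name).
    ez.write('BS_BIO_M2_%s_STATEREQ' % quad.upper(), 3)

    # Restore the nominal matrix (bs_m2_out_normal.snap) before moving on:
    # write nominal values, LOAD_MATRIX again, and wait out the ramp.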
ISS AOM diffracted power was adjusted from 15% to 8.5% by taking the refsignal from -2.07 to -2.14.
[Jeff K, Stuart A, Kiwamu I]

To accommodate QUAD top stage bounce and roll mode damping using AS WFS as the sensor, the h1asc model has been modified as follows:

(0) Observed that the h1asc model uses the library ASC part (l1asc does not), and that h1asc is already checked in to the svn. Snapshots of the l1asc and h1asc top-level models are below.

(1) Removed 2 RFM receivers from h1asc which were producing IPC errors (see IPC snapshot below). After discussing with Daniel, it was established that these channels are no longer used (i.e. H1:ALS-X_REFL_B_SLOW_RFM & H1:ALS-Y_REFL_B_SLOW_RFM), so the inputs were terminated at the ASC block (IN_GR_PDHX & IN_GR_PDHY), as is also the case at LLO.

(2) Had to disable the link to the ASC library part, after discussing with Kiwamu, so that the AS_B_RF45_Q_PIT signal could be routed through to the top level (see h1asc ASC snapshot below).

(3) Added 2 RFM and 1 IPC senders from h1asc to the QUADs, i.e. channels H1:ASC-ETMX_AS_B_RF45_Q_P (card 0), H1:ASC-ETMY_AS_B_RF45_Q_P (card 1), and H1:ASC-ITM_AS_B_RF45_Q_P (see snapshot of the new h1asc top-level model below).

(4) Renamed OAF channel names to CAL.

This model has been built, installed, and restarted during this morning's maintenance, with no issues. I've checked the h1asc model into the svn:

/opt/rtcds/userapps/trunk/asc/h1/models/
M h1asc.mdl
P. King, J. Oberling, J. Bartlett, E. Merilh
We realigned the FSS RefCav this morning. It was down around 0.7 V on the FSS TPD. We aligned both mirrors of the input periscope in both pitch and yaw. Yaw alignment did not increase the RefCav TPD signal by much; most of the adjustment was to pitch on both mirrors. We left the FSS TPD reading 1.56 V and locked the adjustment screws on both periscope mirrors. We also found that the input periscope mirror mounts (they both mount to a ~1.5 inch diameter post that in turn is mounted to the PSL table itself) were a little loose. Peter was able to tighten both mounts by ~1/8 to ~1/4 of a turn on the big screws that mount the mirrors to the post. We will continue to monitor the FSS TPD to see if it continues to drift down.
We measured the laser power in 2 places after the alignment:
We also measured the voltage on the FSS REFL PD when the RefCav was locked and unlocked:
Luckily it seems that the reference cavity transmission signal in volts is one-tenth that of the transmitted power in milliwatts. As I recall ALS needs something like 10 mW in its beam path. So if the reference cavity transmission drops below 1 V, we should think about tweaking the alignment.
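A trivial sketch encoding that rule of thumb (the ~10 mW ALS requirement is as recalled above):

def refcav_ok(tpd_volts, min_power_mw=10.0):
    """Estimate RefCav transmitted power (mW ~= 10 x TPD volts) vs the ALS need."""
    return 10.0 * tpd_volts >= min_power_mw

print(refcav_ok(1.56))  # True: today's post-alignment reading
print(refcav_ok(0.7))   # False: this morning's reading -- time to realign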
model restarts logged for Mon 23/Feb/2015
2015_02_23 21:27 h1fw1
One unexpected restart. The Conlog frequently-changing-channels report is attached.
Evan, Alexa, Dan, Sheila
Tonight we made some more measurements of the plants for the DHARD WFS, this time with no watchdogs tripped. The attached screenshots show both the open-loop gains and the plant measurements for pitch and yaw. We plan to fit these and make a better loop for on-resonance operation during maintenance day today.
All of these measurements were made with the oplev damping on the ETMs off. The plant inversion used for the measurements at 50 pm was a little different than what we used on resonance.
In full lock, we had the following loops closed:
After engaging ASB36Q, we ramped down the yaw oplev damping on the BS. After a few seconds, ASB36Q began ringing at 1 Hz or so and the lock broke. Unclear whether this was a WFS loop instability or unsuppressed motion of the BS.
I altered the plant inversion for the diff ETM loops so that we are not applying so much drive above 10 Hz. Instead of five poles at 100 Hz and an ELP at 30 Hz, there are now three poles at 20 Hz, two poles at 100 Hz, and an ELP at 10 Hz. We lose some phase though, so we might not be able to get 1 Hz bandwidth out of these loops.
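To illustrate the change, a quick numerical comparison of the two real-pole chains (the ELPs are omitted since their orders aren't given here):

import numpy as np

def pole_chain_mag(f, pole_freqs_hz):
    """|H(f)| of a cascade of real poles, normalized to unity at DC."""
    mag = np.ones_like(f)
    for fp in pole_freqs_hz:
        mag /= np.sqrt(1.0 + (f / fp) ** 2)
    return mag

f = np.array([10.0, 30.0, 100.0])
old = pole_chain_mag(f, [100.0] * 5)               # five poles at 100 Hz
new = pole_chain_mag(f, [20.0] * 3 + [100.0] * 2)  # three at 20 Hz + two at 100 Hz

for fi, o, n in zip(f, old, new):
    print("%5.0f Hz: old %.3f, new %.3f (%.1fx less drive)" % (fi, o, n, o / n))

At 10 Hz the new chain already attenuates about 1.4x more, growing rapidly above that, at the cost of low-frequency phase, consistent with the bandwidth concern above.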
Today I followed up on last night's commissioning of the ETMY violin mode damping. I tested the ETMY L2 DARM damping filter that was set up for the 508.1 Hz violin mode (MODE2), feeding back to pitch. Last night I found that a positive gain of 500k had an effect on the mode, but this was while the L2 current watchdog was tripped. Tonight a gain of -30k worked well, so the current watchdog apparently reduces the drive AND flips the sign at high frequencies.
The attached plot shows progress of the mode with different gain settings. As of this writing the mode is as low as it's ever been. (To get this mode lower, we'll need to compete with the mode at 508.2Hz. Tricky.)
SUS_PRM guardian became unresponsive, but has now been restored.
The control room reported that the SUS_PRM guardian had become completely unresponsive. The log was showing the following epicsMutex error:
2015-02-24T03:45:51.412Z SUS_PRM [ALIGNED.run] USERMSG: Alignment not enabled or offsets have changed from those saved.
epicsMutex pthread_mutex_unlock failed: error Invalid argument
epicsMutexOsdUnlockThread _main_ (0x2999f20) can't proceed, suspending.
The first line is the last legit guardian log message before the hang. The node was not responding to guardctrl stop or restart commands. I then tried killing the node using the guardctrl interface to the underlying runit supervision system:
controls@h1guardian0:~ 0$ guardctrl sv kill SUS_PRM
controls@h1guardian0:~ 0$
This did kill the main process, but it unfortunately left the worker subprocess orphaned, which I then had to kill manually:
controls@h1guardian0:~ 0$ ps -eFH | grep SUS_PRM
...
controls 18783 1 4 130390 37256 7 Feb19 ? 05:13:18 guardian SUS_PRM (worker)
controls@h1guardian0:~ 0$ kill 18783
controls@h1guardian0:~ 0$
After everything was cleared out, I was able to restart the node normally:
controls@h1guardian0:~ 0$ guardctrl restart SUS_PRM
stopping node SUS_PRM...
ok: down: SUS_PRM: 49s, normally up
starting node SUS_PRM...
controls@h1guardian0:~ 0$
At this point SUS_PRM appears to be back to functioning normally.
However, I have no idea why this happened or what it means. This is the first time I've seen this issue. The setpoint monitoring in the new guardian version installed last week means that nodes with monitoring enabled (such as SUS_PRM) are doing many more EPICS reads per cycle than they were previously. As the channels being monitored aren't changing very much, these additional reads shouldn't incur much of a performance hit. I will investigate and continue monitoring.
Sheila, Alexa, Elli, Evan, Gabriele
Because the DHARD WFS plant for pitch changes significantly with the CARM offset reduction, which we think could be caused by miscentering on the optics combined with radiation pressure, we attempted to center the green beams on the ETMs this morning. We used the PCAL cameras and the code that Elli had been using, but did not use the fitting function, instead just looking by eye at the crosshairs that mark the center of the optic relative to the position of our beams.
For the Y arm we moved the green QPD A yaw offset from 0.5 (photo 100) to -0.5 (photo 105). For the X arm we ended up not moving anything; there are no QPD offsets on the X arm.
We found that we were not well aligned after this work, which might have been unrelated to what we did. We have now reverted the offsets, but it would be worth trying to put them back in at some point.
We had some suspicious alignment issues today. To align the two arms, we ran the green WFS, which feeds back to the ITMs, ETMs, and TMSs, as we have been doing for several weeks now. Then we followed with our input pointing, PRM alignment, MICH alignment, and SRM alignment. We could then lock IR with ALS COMM and produce a good build-up in the X arm; however, when we locked ALS DIFF, we were about 5% below the nominal IR build-up in the Y arm. We kept seeing an oscillation in AS 45Q before transitioning to RF DARM, and we consistently lost lock. DRMI was also taking longer to lock (about 12 min). We got frustrated and decided to start aligning from scratch. We cleared all the WFS history, used the nominal QPD offsets as described above, and then used the baffle PDs to align the TMSs. We then adjusted the ITMs and ETMs by hand to get the green build-up high in both arms. This restored the IR build-up in the Y arm when locked on ALS DIFF, and allowed us to reach full lock, with DRMI locking on the order of minutes. As a sanity check, I looked at the PCAL image in green of the Y arm and it looked the same as above with the corresponding QPD offset (image LHOX 106). I am not sure why the green WFS took us to a "bad" alignment today; these have been reliable in the past.
[Stuart A, Jeff K]

To accommodate QUAD top stage bounce and roll mode damping using AS WFS as sensors (see LHO aLOG entry 16868), it has been necessary to prepare the h1sus QUAD models ready for tomorrow's planned roll-out. All h1sus QUAD model links to the common library (QUAD_MASTER.mdl) were previously broken to provide DARM ERR damping at M0 (see LHO aLOG entry 16655). Therefore, we proceeded to work with the locally modified QUAD models, until such a time as the 'best' damping approach can be incorporated into the common library part for use at both sites. The following changes have been made:

(1) ETMs: added RFM receivers for H1:ASC-ETMX_AS_B_RF45_Q_P & H1:ASC-ETMY_AS_B_RF45_Q_P at the top level (e.g. see h1susetmx_top-level_new.png below).

(2) ITMs: added an IPC receiver for H1:ASC-ITM_AS_B_RF45_Q_P.

(3) All QUADs: added an input matrix within the QUAD/M0/DARM_DAMP block for DARM_ERR + AS_B_RF45 (e.g. see h1susetmx_DARM_DAMP.png below).

(4) All QUADs: routed the AS WFS signal from the top level through to DARM_DAMP and into the summation block (e.g. see h1susetmx_M0.png below).

These models were test-built, which produced the following (IPC-related) errors:
- H1:ASC-ETMX_AS_A_RF45_Q_P & H1:ASC-ETMY_AS_A_RF45_Q_P
- H1:LSC-ETMX_L_SUSETMX & H1:LSC-ETMY_L_SUSETMY
- H1:LSC-OAF_DARM_ERR

The first is expected, due to changes needing to be made to the h1asc model. For the remaining issues we need to coordinate with Kiwamu, who is updating/splitting the h1lsc model. It is planned to build, install and restart these models tomorrow.
The cdsutils installation has been updated to r440, which includes a new CDSMatrix object that, once initialized, can be used to reference CDS front end EPICS matrix elements by name.
NOTE: this object uses the standard (row, column) ordering, for consistency with most matrix element reference conventions.
Example usage:
jameson.rollins@operator1:~ 0$ cdsutils --version
cdsutil 440
jameson.rollins@operator1:~ 0$ guardian -i
--------------------
aLIGO Guardian Shell
--------------------
ezca prefix: H1:

In [1]: from cdsutils import CDSMatrix

In [2]: m = CDSMatrix('LSC-PD_DOF_MTRX', rows={'DARM': 1, 'MICH': 2,}, cols={'OMC': 1, 'POP_A_RF45_Q': 5})

In [3]: m
Out[3]:
In [4]: m('MICH', 'POP_A_RF45_Q')
Out[4]: 'LSC-PD_DOF_MTRX_2_5'

In [6]: m['MICH', 'POP_A_RF45_Q']
Out[6]: 0.0

In [7]: m['DARM', 'POP_A_RF45_Q'] = 0
Jeff, Dan, others...
Sheila and I noticed last night that the ETMY L2 stage RMS current watchdog had tripped - the circled indicator lights in the first figure were red. Today with Jeff we tested whether this has any effect on the L2 actuation and reset the watchdog.
Turns out that with the watchdog tripped you can still actuate on the L2 stage, but with about 3x less force at DC than with the watchdog untripped. So, maybe this explains our crummy alignment stability last night.
The second plot shows the effect of applying a large offset in pitch to ETMY L2 before and after the watchdog was reset, as observed by the optical lever.
I've added the SUS-ETM{X,Y}_BIO_L2_MON channels as conditions to the ETM warning lights on the OPS overview screen.