Evan, Stefan
- Loops worked - simply lowered the DHARD gain by 6 dB.
- Had some low-frequency transients right after - not clear whether they are related to the matrix - they seemed to diminish later on.
This is the spectrum at the end of today's work. We'll spend tomorrow looking at the calibration, but tentatively this spectrum was >65 Mpc after engaging the outer ISS loop (it was about 60 Mpc beforehand).
There is also a stretch of fake 80 Mpc range from when we were examining the settings in the online calibration. Our uncertainty about the calibration has increased as a result of this investigation.
Rana, Stefan
Using Rana's filter fitting tool (available in
/ligo/home/rana.adhikari/Templates/DTT/LSC/FF
or
/ligo/home/controls/sballmer/today/FF
) we remeasured and fitted the MICH FF transfer function.
The new fitted filter (FF3 in FM4) is
sos(-0.000543835088, [ -1.93662264300273; 0.95485272408082; 0.30546714694311; -0.69453285305689;
-1.99510327713713; 0.99517904131748; -1.99243149865104; 0.99250763632503;
-1.99954772753581; 0.99957826550970; -1.99936615655289; 0.99939939939844;
-1.99965218293192; 0.99975661861859; -1.99993380253468; 1.00003473727562;
-1.99969845455122; 0.99979417820973; -1.99992371170330; 1.00002274176099;
-1.99981093117311; 0.99985759151419; -1.99995617290421; 1.00000284176776;
-1.99985737603421; 0.99996014647150; -1.99984813627844; 0.99995086001340;
-1.99992036795975; 0.99996576562300; -1.99993708678199; 0.99998253575133;
-1.99944212960262; 0.99996998756589; -1.99944272164552; 0.99997065091132;
-1.99975193818562; 0.99999930668374; -1.99975187384463; 0.99999929435542; ],"o")
Plot 1 shows the fit quality,
Plot 2 shows the achieved subtraction, with an artificial MICH drive.
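For sanity-checking fits like this offline, here is a minimal sketch using scipy. It is an illustration only, not part of the fitting tool: it assumes foton's "o" ordering lists each second-order section as {a1, a2, b1, b2} with unity b0/a0 and a single overall gain (verify that ordering against foton before trusting it), and the 16384 Hz model rate is an assumption.

import numpy as np
from scipy import signal

fs = 16384.0  # ASSUMED model rate in Hz

def foton_sos_to_scipy(gain, coeffs):
    """Convert a flat foton-style [a1, a2, b1, b2, ...] coefficient list
    into scipy's [b0, b1, b2, a0, a1, a2] section format."""
    sections = []
    for a1, a2, b1, b2 in zip(*[iter(coeffs)] * 4):
        sections.append([1.0, b1, b2, 1.0, a1, a2])
    sos = np.array(sections)
    sos[0, :3] *= gain  # fold the overall gain into the first section
    return sos

# First two sections of the filter above, as an example:
sos = foton_sos_to_scipy(-0.000543835088,
                         [-1.93662264300273, 0.95485272408082,
                          0.30546714694311, -0.69453285305689,
                          -1.99510327713713, 0.99517904131748,
                          -1.99243149865104, 0.99250763632503])
f, h = signal.sosfreqz(sos, worN=2**14, fs=fs)
# Compare abs(h) and angle(h) against the measured transfer function.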
Rana, Matt, Hang
We did some further investigation of HAM5's coherence with DARM, as suggested by Gabriele (aLog 19756). A list of what we did:
05:35:10 UTC: Tapped the lower middle flange south of HAM5.
05:35:32 UTC: Tapped the same flange harder.
05:35:54 UTC: Tapped the gull wing. The interferometer lost lock.
Before the unlock, the IFO was operating at 24 W with the LSC FF on. A plot of the corresponding time series is attached (TSaccHam5vsDarm.png). We can see ringing corresponding to the flange taps in the HAM5 channel, yet it did not seem to have a significant effect on DARM. The frequency of the ringing was 207 Hz with a decay time of 1.6 s. No 90 Hz feature was seen. The gull wing tapping did not appear in the ACC_HAM5_SR1 channel that Gabriele noted.
Summary: no clear connection between this ACC and DARM.
evan, rana
we did further slapping and shouting around HAM5/6 (from 10:25 - 10:39 UTC) and saw a few interesting things:
afterwards, we ran Hang's A2L script. It ran well, but takes a while. We ought to run these in parallel.
Some times and events for analysis:
10:25:45 knocking on HAM5-south
10:26:01 knock on south door, upper west side
10:26:40 wiggle HAM5's curtains
10:27:40 wiggle HAM6's curtains
10:27:53 acoustic injection near HAM6 east side
10:28:12 ISCT6 acoustic noise
10:28:46 HAM5 north door
10:30:22 HAM5-HAM4 manifold
10:32:32 septum plate (north end)
10:34:39 HAM4-5 manifold whack
10:35:45 tube between bellows near HAM5
10:37:34 more of same
10:38:14 more of same
As previously noted, there is a new fast front end running at EY. I made a few (bad) screens to help access the filters and RMS outputs. There is no link from the sitemap yet, so access is via
/opt/rtcds/userapps/release/isc/h1/medm$ medm -x ISC_PI_START.adl
Thus far we have had only a few multi-hour high-power locks, but no parametric instabilities have yet been seen. (The fast output was, however, useful to confirm that the 6kHz line seen last night was not aliased from a higher frequency.)
I updated the IMC_LOCK guardian to make the ISS_ON state more reliably accessible when the IFO is locked. Much of this was just moving some code from CLOSE_ISS.main to CLOSE_ISS.run, and using cdsutils.avg to acquire data. I also created several helper functions to abstract some of the ISS switching.
The current guardian ISS enabling code has been tested a few times at high power, and it has worked without trouble. One new feature is that the IMC guardian keeps the second loop output value small, averaged over long times. This is a sort of "digital AC coupling" which may prevent problems with ISS actuator saturation over long locks.
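As a rough illustration of the "digital AC coupling" idea (this is NOT the actual IMC_LOCK guardian code; the channel names below are placeholders): the run method periodically averages the second-loop output with cdsutils.avg and bleeds that average off through a slow offset, so the actuator stays centered over long locks.

import cdsutils

BLEED_GAIN = 0.1   # fraction of the averaged output removed per cycle
AVG_TIME = 30      # seconds to average over (slow compared to the loop)

def bleed_off_second_loop(ezca):
    """One iteration of the slow offload; call this from a guardian
    run() method so it repeats for the duration of the lock.
    'PSL-ISS_SECONDLOOP_OUTPUT' and 'PSL-ISS_SECONDLOOP_REF_OFFSET'
    are placeholder channel names, not the real ones."""
    avg = cdsutils.avg(AVG_TIME, 'H1:PSL-ISS_SECONDLOOP_OUTPUT')
    offset = ezca['PSL-ISS_SECONDLOOP_REF_OFFSET']
    # Move the reference offset to absorb the DC part of the output,
    # keeping the second-loop actuator near the middle of its range.
    ezca['PSL-ISS_SECONDLOOP_REF_OFFSET'] = offset + BLEED_GAIN * avg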
With the ISS on, the DARM noise is significantly improved between 100 and 400Hz. It doesn't seem that more gain is necessary at this point.
J. Kissel, B. Weaver, J. Driggers, R. McCarthy, D. Barker, D. Sigg, J. Batch

Here is the list of tomorrow's maintenance day tasks, organized chronologically as we intend to execute them, and prioritized such that the tasks with the most global impact on the IFO are done first (so that we have the most time to recover from them). As with last Tuesday (LHO aLOG 19600), all tasks, associated estimated times for completion, and responsible persons (or "task managers") will be added to the reservation system when they are *actually happening*, and removed after the task manager has checked in with the operator and confirmed completion. PLEASE PAY ATTENTION TO THE RESERVATION SYSTEM (to help, we're going to put it on the big projector during maintenance). As always, please keep the operators informed of your activities as accurately as possible / reasonable throughout the maintenance day so the reservation list can be adjusted accordingly and remain accurate. We appreciate your cooperation!

Group 0 -- prep for maintenance (to be done either the night before, or just before the start of maintenance):
- Clear out all SDF system differences
- Ensure an alignment offset backup snap has been captured / define a reference time to which we will restore them
- Bring ISC_LOCK guardian to DOWN, bring IMC_LOCK guardian to OFFLINE
- Bring all SEI manager guardians to OFFLINE
- Bring all SUS guardians to SAFE

Group 1 -- (tasks that can be performed simultaneously) to begin as soon as tasks dependent on group 0 are complete, otherwise at 08:00a PT:
- Timing master's GPS reference swapped for external reference -- 30 min to 2 hours (R. McCarthy)
  - We expect that this timing system swap will not glitch the timing system and therefore crash all front ends sitewide. However, we are preparing for the worst, and bringing all systems to their respective DOWN / OFFLINE / SAFE states. If the front ends do crash, the recovery time will be of order 2 hours to get all front ends back up and running. If not, we expect little-to-no recovery time other than bringing guardians back to their nominal states.
  - IF AND ONLY IF the front ends do crash and we have to restart them, we will recompile any front ends that have crashed against RCG 2.9.6, in order to gather in the bug fixes that come with RCGs 2.9.1 through 2.9.6.
- HEPI Pump Station Repair -- 30 min (H. Radkins)
  - Only one of the four pump stations appears noisy. The corner station can be run on only three pump stations, so Hugh will merely ramp the errant pump out of the system and convert to running the corner station on three pumps. This ramp-out should only cause a brief, minor HEPI actuator pressure glitch. The 30 minutes is a conservative over-estimate of how long it will take.

Group 1.5 -- can begin immediately after the effects of group 1 are known:
- Potential recompilation and install of the front-end models of all front-end machines that have crashed (see above) -- 2 hours (J. Batch, D. Barker, J. Kissel)
- Cable routing / pulling for PEM Cosmic Ray Detector -- 2 hours (F. Clara, V. Roma, J. Palamos)

Recovery of corner station SEI / SUS, and relocking the IMC, can begin upon assessment of the effects of switching the timing master's GPS reference.

Group 2 -- can begin while or after the corner station is being or has been recovered:
- PEM sensor calibrations -- 1 hour (V. Roma, J. Palamos)
- Replace / repair Timing Fanout at EY -- 30 minutes (J. Batch, D. Barker)
- Upgrade BIOS on new EY SUS fast front end -- 30 minutes (J. Batch, D. Barker)
- EY SUS and EY Parametric Instability front-end models recompiled against RCG 2.9.6 and installed -- 30 minutes (J. Batch, D. Barker)
- EX Low-Voltage, Low-Noise driver installation and cabling -- 1 hour (R. McCarthy)

Recovery of all models at EY, restoration of settings, bring up ETMY SEI / SUS, and measure charge on the ETMY SUS ESD to confirm ESD health.

Group 2.5 -- can begin once work at EY is complete and/or while EY is being recovered:
- Upgrade BIOS on new EX SUS fast front end -- 30 minutes (J. Batch, D. Barker)
- EX SUS and EX Parametric Instability front-end models recompiled against RCG 2.9.6 and installed -- 30 minutes (J. Batch, D. Barker)

Recovery of all models at EX, restoration of settings, and bring up ETMX SEI / SUS. Confirm / commission the functionality of the new ETMX LVLN ESD driver. Measure charge on the ETMX SUS ESD to confirm ESD health.

Group 3 -- can begin once work and restoration at EX is finished:
- Power cycle corner station front ends' network switch -- 10 minutes (J. Batch, D. Barker)
  - Workstations will briefly lose their connection to the h1boot server, so workstations will be down briefly.
- Preventative maintenance reboots of the following computers:
  - Conlog
  - EPICS gateway
  - Guardian machine

Restoration of all alignment settings; recovery of the FULL IFO can begin.

Group 3.5 -- can begin once workstations are back and preventative maintenance is complete:
- Rename and include Mid Station / Beam Tube PEM accelerometers in the PEM MX and MY front-end models -- 10 minutes (J. Batch, D. Barker)
- Parametric Instability monitor model install at EX -- 10 minutes (J. Batch, D. Barker)
- SUS AUX model upgrade -- 10 minutes (J. Kissel, B. Weaver)
- Fix LDAS communication fiber hardware -- 1 hour (J. Batch, D. Barker)

Complete IFO recovery and commissioning of new bits and pieces.

As seen last Tuesday, and many prior Tuesdays, the above plan will not happen exactly as described, as reality strikes. But we will try our darnedest! Wish us luck!
Hannah, Evan, Stefan
Since redoing the AS_A_RF36 re-phasing (alog 19572), we had never redone the SRC coupling test with a moving SRC1_YAW offset (see alog 18436), so this was on the menu today, before redoing the SRCL decoupling.
- First we lowered the AS_A_RF36 whitening gain from 21dB to 18dB because some quadrants had too much signal.
- Again we found that for the matrix (see alog 19572)
H1:ASC-AS_A_RF36_I_MTRX_2_1 0
H1:ASC-AS_A_RF36_I_MTRX_2_2 0
H1:ASC-AS_A_RF36_I_MTRX_2_3 -2
H1:ASC-AS_A_RF36_I_MTRX_2_4 2
an offset of -2500 counts in H1:ASC-SRC1_Y_OFFSET gives lower POP90, higher AS90, and lower high-frequency SRCL coupling (see plot).
- Since I don't like running with offsets in WFS loops, I tried the following sensing matrix, which puts us at the same operating point:
H1:ASC-AS_A_RF36_I_MTRX_2_1 0
H1:ASC-AS_A_RF36_I_MTRX_2_2 -1
H1:ASC-AS_A_RF36_I_MTRX_2_3 0
H1:ASC-AS_A_RF36_I_MTRX_2_4 3
This admittedly looks odd - it should also have some pitch content - but in alog 19572 we saw that the pitch signal is in a different phase anyway.
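To see why a matrix change can replace the offset, a back-of-the-envelope check (assuming the loop servos the matrix output plus the digital offset to zero, and writing Q1..Q4 for the quadrant inputs):

old: -2*Q3 + 2*Q4 - 2500 = 0, i.e. the loop holds -2*Q3 + 2*Q4 = 2500 counts
new: -1*Q2 + 3*Q4 = 0

The new quadrant weighting is chosen so that its zero crossing falls at the alignment where the old combination read 2500 counts, so the loop locks to the same point with no digital offset.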
- With that new lock point we observed:
- The same SRCL coupling at low frequencies - this one seems steady
- A lower average SRCL coupling at high frequencies - as a result the notch moved up in frequency from ~75Hz to ~110Hz
- The high frequency part is also the more variable part - before and after the offset shift. Thus - even though the coupling now seems worse around the old notch frequency - that disadvantage should easily be compensated by the SRCLFF path.
- We also updated the FM8 cut-off filter in SRCL - it is now a less aggressive low-pass filter starting at 80 Hz. This still kills the variable part of the coupling, but also reduces gain peaking in the SRCL loop - which previously made the coupling worse.
Matt, Hang
We ran the a2l decoupling optimization code this evening for all test masses and for both pitch and yaw. It successfully reduced the low-frequency noise. Please see the attachment (darm_spectrum.png). The changes were:

H1:SUS-ETMX_L2_DRIVEALIGN_P2L_GAIN: 0.93 -> 1.21
H1:SUS-ETMX_L2_DRIVEALIGN_Y2L_GAIN: 0.93 -> 1.32
H1:SUS-ETMY_L2_DRIVEALIGN_P2L_GAIN: 0.00 -> -0.02
H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_GAIN: -0.70 -> -0.59
H1:SUS-ITMX_L2_DRIVEALIGN_P2L_GAIN: 1.95 -> 2.04
H1:SUS-ITMX_L2_DRIVEALIGN_Y2L_GAIN: 0.63 -> 0.74
H1:SUS-ITMY_L2_DRIVEALIGN_P2L_GAIN: 1.05 -> 1.06
H1:SUS-ITMY_L2_DRIVEALIGN_Y2L_GAIN: -2.05 -> -1.48

More details of the measurements can be found under /opt/rtcds/userapps/trunk/isc/h1/scripts/a2l/rec. That directory contains both the raw measured data (I, Q, and total) and plots of our linear fits as well as the rotation. The optimal a2l gains correspond to the zeros of the rotated I's. Please note again that since our data are likely correlated, the error bars shown should be treated only as rough estimates.

We also wrapped the python code in a bash shell script that can easily be called in the future. It can be found under /opt/rtcds/userapps/trunk/isc/h1/scripts/a2l. To rerun the optimization, simply enter ./run_a2l.sh on the command line, and the code will do the optimization for all test masses and all angular dofs. If you want to optimize only specific optics and, say, only their pitch-to-length coupling, just edit the 'a2l_input.in' file. In case the interferometer loses lock, please press "ctrl + c" to terminate the code. On this keyboard interrupt, it will automatically set the not-yet-optimized drive align gains back to their original values and disable the dither input. For more instructions, please refer to 'readme.txt' in the same directory.
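As a sketch of the ctrl+c clean-up behavior described above (illustrative only; the real code lives in the a2l scripts directory, and run_dither_and_set_gain / disable_dither_input are placeholders, not real functions there):

import ezca as ezca_lib

ezca = ezca_lib.Ezca(prefix='H1:')

def run_dither_and_set_gain(channel):
    """Placeholder for the real dither measurement and gain update."""
    pass

def disable_dither_input():
    """Placeholder for switching off the dither excitation."""
    pass

def optimize_all(gain_channels):
    """Run the optimization, restoring not-yet-optimized gains on ctrl+c."""
    originals = {ch: ezca[ch] for ch in gain_channels}
    done = set()
    try:
        for ch in gain_channels:
            run_dither_and_set_gain(ch)
            done.add(ch)
    except KeyboardInterrupt:
        # Put back only the gains we had not finished optimizing, and
        # make sure the dither excitation is off before exiting.
        for ch in gain_channels:
            if ch not in done:
                ezca[ch] = originals[ch]
        disable_dither_input()
        raise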
I have updated the RCG_VERSIONS MEDM screen to show the currently running versions before tomorrow's upgrade.
I have put together a few python functions which allow for briefly spawning multiple threads to switch many filters at (roughly) the same time. The idea here is NOT to provide synchronous filter switching, but rather to speed up Guardian transitions which change the state of many filter modules (or, more generally, write many channels).
The new code is in:
userapps/release/isc/h1/guardian/
fast_ezca.py - new functions for writing, switching, and generally doing things quickly
test_write_many.py - test functions for multi-thread writing
test_switch_many.py - test functions for multi-thread switching
test_do_many.py - test functions for multi-thread compound actions
and it is being used in the ISC_library function slow_offload_fast. There is a single-thread version of this function in ISC_library in case of trouble: slow_offload_many. The only caller is gen_OFFLOAD_ALIGNMENT_MANY in ISC_GEN_STATES, so go there if you need to switch this out.
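The basic pattern is roughly the following (a sketch only, not the actual contents of fast_ezca.py):

import threading

def write_many(ezca, pairs):
    """Write many (channel, value) pairs at roughly the same time by
    spawning one short-lived thread per write and joining them all.
    Note that an exception in a worker thread does not propagate to
    the caller, which is the hazard noted in the comment below."""
    threads = [threading.Thread(target=ezca.write, args=(chan, val))
               for chan, val in pairs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()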
This allows the guardian to move on without a setting having been set, which can cause problems: settings can be wrong and the user gets no clue.
I want to delete this completely.
We continue the charge measurements on the ETMs. Results for ETMX are consistent with a negative trend; the charge is now between 10 and 20 [V] Effective Bias Voltage for all quadrants. Results for ETMY do not show a significant trend (the data are probably beginning to be consistent with a positive trend). The charge is below 10 [V] Effective Bias Voltage for all quadrants. Note: we had positive bias on ETMX and negative bias on ETMY after the discharging procedure, so it seems possible that the charging is caused by the bias voltage.
Has the bias on ETMX and ETMY remained positive and negative respectively for the duration of this observation?
The bias was the same for this and the next charge measurements. It was changed on July 22: alog 19821. Today we have the first measurements after changing the bias sign: alog 19848.
The next RCG release will fix the TrueRMS issues. For the record, here is a list of the H1 user models which use this part:
h1susetmypi, h1susetmy, h1susetmx, h1susitmx, h1susitmy, h1omc, h1susomc, h1oaf
The SUS IOP models also use the part for the Software Watchdog:
h1iopsusb123, h1iopsusex, h1iopsusey, h1iopsush2a, h1iopsush34, h1iopsush56
Times PST
9:58 Richard to EX to reconnect HEPI pump ground
10:15 Richard back
10:25 HFD on site
11:20 Leo taking charge measurements
13:44 Joe D to both Mid stations
14:11 Joe D back
14:41 Richard to roof checking out GPS antenna work for tomorrow
Laser Status:
SysStat is good
Front End power is 32.64W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED
PMC:
It has been locked 6.0 days, 0.0 hr 31.0 minutes (should be days/weeks)
Reflected power is 2.464 Watts and PowerSum = 25.33 Watts.
FSS:
It has been locked for 0.0 days 0.0 h and 1.0 min (should be days/weeks)
TPD[V] = 1.643V (min 0.9V)
ISS:
The diffracted power is around 7.629% (should be 5-9%)
Last saturation event was 0.0 days 0.0 hours and 1.0 minutes ago (should be days/weeks)
J. Oberling, E. Merilh
On June 23rd we swapped the laser on the ETMx oplev, see alog 19290. We spent the next couple of weeks tweaking the operating power of the laser to get it into a stable zone; this has to be done since the thermal environment is different between the end station VEA and the LSB lab where the laser was originally stabilized. Since I got back from vacation last week, I have been looking at quiet times (no apparent optic movement; I looked at the pitch & yaw oplev signals and picked times where the optics were quiet) to see if the laser is stable and glitch-free. I've attached 3 spectrograms of quiet times during the last week.
There is no obvious glitching in the ETMx oplev laser as shown in these spectrograms. I think it is safe to say this oplev is healthy. I have also attached a spectrum of the ETMx oplev pitch & yaw signals for the same 4-hour stretch on 2015-7-19 as the attached spectrogram.
As usual if anyone notices anything not right with any of the optical levers, let me know.
Glitch Performance evaluation of diode lasers:
It is sometimes difficult to see whether a laser is glitch-free because of how the tools we employ display the data. It is good to compare the performance of a laser under test with one that is known to be glitch-free and another known to be glitchy. This way the display tool is validated (since it shows the glitches in the glitchy laser), and at the same time we would know whether the laser under test has achieved the Reference Laser's performance level.
Please see my post on laser evaluation. It would be preferable if the same kind of plotting tools (ligodv) were used as in the past, in order to make the comparisons easier.
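For what it's worth, a minimal sketch of the three-way comparison described above, using identical display settings for all three lasers (this is only an illustration; ligodv/dtt are the usual tools, and the sample rate is an assumption):

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

fs = 256.0  # ASSUMED sample rate of the laser monitor channels

def compare(data_by_label):
    """data_by_label maps 'under test' / 'reference' / 'glitchy' to
    time series sampled at fs; each panel uses identical settings,
    so a glitch visible in the glitchy laser validates the display."""
    fig, axes = plt.subplots(len(data_by_label), 1, sharex=True)
    for ax, (label, x) in zip(axes, data_by_label.items()):
        f, t, Sxx = spectrogram(x, fs=fs, nperseg=1024)
        ax.pcolormesh(t, f, 10 * np.log10(Sxx), shading='auto')
        ax.set_ylabel('Hz')
        ax.set_title(label)
    axes[-1].set_xlabel('s')
    plt.show()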
This morning around 16:45 UTC, the ETMY ISI Stage 1 and 2 WatchDogs tripped, reporting a "payload trip". Jim cleared these trips and then watched Guardian bring the ISI back up in an odd way. Guardian brought the ST1 node to HIGH_ISOLATED, but not all of the preferred isolation loops on Stage 1 were turned on, along with the input, output, and decimation. (Snippet of the log attached.) Stage 2 proceeded to try to isolate, but since Stage 1 was not entirely isolated, the WatchDogs tripped again. This time, after clearing the WatchDogs, Guardian brought everything back up properly, and everything seems to have been working well since.
Jamie is unsure of the reason behind this, but suspects some EPICS connection issues.
Another odd bit to add is that the payload was never tripped...
At 17:40 UTC the ISC_DRMI Guardian had the following error: "EzcaError: Could not get value from channel: <PV 'H1:LSC-PD_DOF_MTRX_SETTING_5_18', count=1, type=double, access=read/write>, None"
A caget of the channel yielded a value so it seems as though it was just the Guardian that was not seeing it. Reloading the code did not fix the error, and we were already on the phone with Jamie with another Guardian oddity (alog to follow). Jamie suggested that we STOP the node and then set it back to EXEC. This worked!
This has been seen before (example: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19673); it seems not to be the same channel each time, but always one from H1:LSC-PD_DOF_MTRX. Jamie and Dave both have their thinking caps on and are trying to solve the problem.
08:00-10:30 I got a little extra time for locking this morning.
Locking this morning was a little tricky, as there was some FSS oscillation and ISS instability that was giving the IMC a bit of trouble. Common gain manipulation got that settled down, and a bit of AOM diffracted power adjustment rectified the remainder.
Here is a summary of the brute force coherence report already posted in a previous comment to an elog entry describing the good sensitivity lock of last Friday.
Basically, there is no large coherence anywhere, except for the well-known periscope peaks, which are coherent with the ISS signals, the IMC angular signals, and the PSL periscope (figures 1-3).
At low frequency, there is coherence with the SUS-ETMY_L3_ESDAMON_?? signals. This was not there in the past, so I guess this coherence is just due to a change in the control strategy. If I'm not mistaken, this signal is just a monitor of the correction sent to the ESD, so coherence with DARM is expected. Please correct me if I'm wrong (figure 4).
In the 10-30 Hz band there is coherence with ASC-MICH_P (figure 5).
In the 10-70 Hz region, one visible source of noise is longitudinal control, since there is coherence with MICH and SRCL (figures 6-7). This noise is not dominant, being still a factor of a few below the measured sensitivity.
In the higher frequency region (above 100 Hz), there is coherence with the ISS and the PSL periscope as already pointed out, but there is also some coherence with AS signals: ASC-AS_A/B_DC_SUM, ASC-AS_A_RF36_I_PIT/YAW, etc. Together with the main jitter peaks, there is a broadband noise floor at about 1e-10 m/rtHz from 100 to 1000 Hz. This might be intensity noise, or noise in higher-order modes that is not completely filtered by the OMC (figure 8).
Finally, a 90 Hz bump seems to be coherent with HAM5 signals (figure 9).
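For reference, the per-channel computation behind such a brute-force scan is essentially the following (a sketch, not the actual bruco code):

from scipy.signal import coherence

def coherent_bands(darm, aux, fs, threshold=0.1, nperseg=4096):
    """Return the frequencies at which the magnitude-squared coherence
    between DARM and an auxiliary channel exceeds the threshold."""
    f, Cxy = coherence(darm, aux, fs=fs, nperseg=nperseg)
    return f[Cxy > threshold]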
The SUS-ETMY_L3_ESDAMON_?? signals are the recently well-connected and well-digitized analog monitors of the ESD (i.e. TST or L3 stage) actuators. Since we're using ETMY L3 for most of our DARM actuation, it's no surprise that there is coherence with DARM below the DARM UGF. What's strange is that you report coherence from before they were correctly connected in the SUS-AUX model from whence they come (see LHO aLOG 19780)...