Bottom Line--When using Guardian, engaging the RZ loop twists the MICH too far. Guardian may be doing the CPS offset zeroing in an odd way. When the loops are engaged with the old Command Scripts, the RZ input does not get driven off and no bad RZ twist occurs.
Further, with MICH locked, the X & Y ST2 ISO loops are not stable--they ring up and trip. Without MICH locked, the loops are stable.
Details: Let's start with the first attachment; it is 25 minutes of second trends. Guardian turns on the Z loop after first running LOAD_CART_BIAS_FOR_ISOLATION, so this is the first thing to do. Manually, one can do this from the CART_BIAS screen (Reset CPS offsets button). In outward appearance there is a difference in how this button and Guardian perform the operation; however, Jamie says there isn't. Maybe it is the slow ramping of the new setpoint. In any case, my observation is that the residuals for this platform are always small. After the CPS offsets are reset, Guardian turns on the Z dof. The gain steps and the ramp times are the same for the Command Scripts and Guardian.
Using the Command Scripts, the Z dof ISO is engaged at point 1. Notice how at point 2 the RZ INMON does nothing awful while the Z loop is turned on, and the RZ loop turns on (point 3) nicely and well behaved. However, when Guardian does this...
When Guardian turns things on at point 4, look at what happens to the RZ INMON. The Z_OUT16 is much larger when Guardian turns it on and the RZ_INMON is driven way off. And, even though the RZ_INMON is much lower by the time the RZ loop comes on, turning on the RZ loop at point 5 produces too large an output--the RZ_OUT16 looks just like the OpLev Yaw.
You can see that the manual turn-on of the Z then RZ loop was repeated a few times with the same result; likewise the Guardian turn-on was similarly consistent. Of course, Guardian also turns on the X & Y DOFs when it engages the RZ loop, and as I allude to in the summary above, these loops may be unstable. However, for now I return to how the Z_OUT is so different for Guardian and how that impacts the RZ_INMON, all before the X, Y & RZ loops are engaged.
I left ST2 fully isolated, and later Ellie locked MICH. After a minute or so it did not look stable, and MICH was turned back off. Shortly after that the ISI tripped. So then we tested things: Ellie locked MICH and then I engaged the ST2 ISO loops. Things were fine with the Command Script turn-on of Z and RZ, but the ISI slowly rang up and tripped on either the X or Y dof, even without the boost on.
Solution--Identify why the Command Script and Guardian do something different at the CPS Reset stage. Identify the loop instability.
This might be the MICH controllers beating on the ISI. I've attached a 4-page pdf; t=0 is at GPS=1109443997. I think what is happening is:
1) The ISI ST2 X and Y ISO loops are off.
2) MICH control gets turned on, and starts injecting lots of noise onto the ISI table.
3) The ISI isolation loop for X (or Y) gets turned on, with much higher bandwidth than the damping loop.
4) The ISI ISO loop tries to suppress the ISI motion, but the high-frequency motion is too big, and the actuators start saturating.
5) After a few seconds, the WD trips.
Evidence:
pg 1) The H1 actuator drive signal starts ramping up at ~T=65 seconds. The drive is getting bigger, but not in a classic-oscillation sort of way. It grows to about T=72, then sort of levels off, until it trips at about T=82.
pg 2) The WD state, at about T=82.
pg 3) The GS-13 for X is pretty noisy, and doesn't change as the loop comes on or when it turns off. This implies that the main driver for the GS-13 X is not the X isolation loop.
pg 4) Detail of the drive signal; not like a classic oscillation.
model restarts logged for Sun 01/Mar/2015
no restarts reported
model restarts logged for Mon 02/Mar/2015
2015_03_02 05:42 h1fw0
2015_03_02 12:01 h1broadcast0
One unexpected restart. The DAQ broadcaster was restarted to add 3 channels for DMT.
The DAQ broadcaster was configured to add 3 channels on Monday (h1broadcast0 was restarted) and 1 channel today (pending a full DAQ restart later).
Monday's channels:
Tuesday channel:
P. King, J. Oberling, E. Merilh
Summary
We went in to the H1 PSL this morning to touch up the FSS RefCav alignment; it had started to drift again. We checked 4 mirrors (M24 - M27) to see if their pitch adjustment screws were locked; we found M26 and M24 to be loose. Adjusting the pitch of M26 had no effect on the RefCav TPD voltage (and therefore we don't believe this mirror is the problem), but adjusting M24 pitch we were able to recover the RefCav TPD voltage to ~1.6V. Both adjustment screws were locked. We measured the power after WP4 and WP6, and measured the DC voltage of the RefCav Refl PD (RFPD) with the FSS locked and unlocked. We are going to continue to monitor the FSS RefCav TPD to see if the loose adjustment screw on M24 was the cause of our drift.
Details
At around 11:00pm PST on 2/27/2015 (last Friday) the RefCav TPD began to drift down again. By yesterday afternoon it was down to 1.1V (this roughly corresponds to ~11mW in the ALS beam path, see Peter's comment to alog 16887). This time we did not adjust the RefCav input periscope mirrors, since we adjusted and tightened them last time. We decided to look for unlocked pitch adjustment screws for mirrors in the FSS path, since the alignment adjustments we have been doing previously have been almost entirely in pitch. We started at M27 and worked back to M24 (see the PSL table layout in D0902114 for mirror locations).
We are going to continue to monitor the RefCav TPD voltage to see if this loose adjuster screw on M24 was the cause of our drift.
After this adjustment we measured the laser power after WP4 (before the FSS AOM) and after WP6 (before the RefCav input periscope):
We also measured the DC voltage of the RefCav RFPD with the FSS locked and unlocked.
J. Kissel

While poking around the CAL-CS model after the recent changes (see LHO aLOG 17034), I noticed two things:

(1) All of the EPICS settings had been lost, even though I'd made sure to capture a safe.snap of the model before restarting the front-end model. Not sure what happened there. In the meantime, I've burtrestored the model to 02:10a PST (i.e. before I got started, while DARM was locked, happy, and producing megaparsecs), which hopefully has captured / restored most of Kiwamu's hard work (LHO aLOGs 16698, 16733, 16780, 16798, and 16843).

(2) Because of a version-control mix-up in the CAL_CS_MASTER.mdl library part, the names for the whitened versions of the closed-loop and open-loop DARM displacement signals, i.e. (what are now) H1:CAL-CS_DARM_RESIDUAL_WHITEN and H1:CAL-CS_DARM_DELTAL_EXTERNAL_WHITEN, have changed since we last updated the h1calcs model. The bank names are now typo-free -- but that means I've had to re-install the 1^5 : 100^5 (5 zeros at 1 [Hz], 5 poles at 100 [Hz]) whitening filter used to get above the double-precision noise in frame storage (see LLO aLOG 16789).

Maybe I'll ask my fellow detector engineers to help me get an SDF system up and running for this model, so we can better track the changes.
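For illustration only, here is a minimal stand-alone sketch of that 1^5 : 100^5 whitening shape in Python/scipy (the installed filter lives in Foton; the unity-gain-at-DC normalization below is my assumption, not a statement about the in-model gain):

    import numpy as np
    from scipy import signal

    f_zero, f_pole, order = 1.0, 100.0, 5
    zeros = -2*np.pi*f_zero * np.ones(order)   # 5 real zeros at 1 Hz
    poles = -2*np.pi*f_pole * np.ones(order)   # 5 real poles at 100 Hz
    k = (f_pole / f_zero)**order               # assumed: unity gain at DC, 100^5 = 1e10 above 100 Hz

    f = np.logspace(-1, 3, 400)                # 0.1 Hz to 1 kHz
    w, h = signal.freqs_zpk(zeros, poles, k, worN=2*np.pi*f)
    mag_db = 20*np.log10(np.abs(h))            # rises from 0 dB below 1 Hz to ~200 dB above 100 Hz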
J. Kissel

I've figured out what happened: when capturing the h1calcs safe.snap, I used the canned script
/opt/rtcds/userapps/release/cds/common/scripts/makeSafeBackup
which writes the output .snap file to the sanctioned location in the userapps repository. HOWEVER, because h1calcs.mdl is relatively new, we (read as: ME, when I installed it) didn't remember to replace the automatically generated safe.snap file, here
/opt/rtcds/lho/h1/target/h1calcs/h1calcsepics/burt/safe.snap
with a soft link to the sanctioned userapps location,
/opt/rtcds/userapps/release/cal/h1/burtfiles/h1calcs_safe.snap
Perhaps I had gotten complacent, because I had thought that makeSafeBackup *makes* the softlink if it doesn't exist. But, looking at the code line-by-line, it may try to make the link, but if your account doesn't have permission to *remove* a file created by controls (because we're forced to compile and install on the front-ends as controls), then it doesn't overwrite the file and create the softlink. Whichever -- excuses, excuses -- I missed it.

I've now
(a) created the softlink by hand,
(b) captured a NEW safe.snap to gather all of the new channels I added with the IMC/CARM upgrade as well as the typo fix to the DARM channels,
(c) run sdf_set_monitor over the entire file,
(d) pushed the "monitor all channels" safe.snap to the front end via the SDF system -- i.e. hitting the "load table only" button, because "safe" is the file chosen by default to define the table,
(e) confirmed there were NO channels unmonitored, and that the only channels showing an error are some hardware injection channels that show up because of a known bug in the parsing of string channels, and
(f) committed the newly SDF-tagged safe.snap file to the userapps repo.

As I (or someone else) continue(s) to fill out the infrastructure for IMC and CARM (and eventually MICH, SRCL, and PRCL), we'll update the database accordingly, but I don't expect DARM to change much for the foreseeable future.
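For the record, a minimal sketch -- not the site script -- of the relink step, using the two paths quoted above; the permission check is my assumption about why makeSafeBackup quietly left the plain file in place:

    import os

    target   = "/opt/rtcds/lho/h1/target/h1calcs/h1calcsepics/burt/safe.snap"
    userapps = "/opt/rtcds/userapps/release/cal/h1/burtfiles/h1calcs_safe.snap"

    if os.path.islink(target) and os.readlink(target) == userapps:
        print("safe.snap already points at the userapps copy")
    elif os.path.lexists(target) and not os.access(os.path.dirname(target), os.W_OK):
        # Removing a file needs write permission on its directory; if the existing
        # safe.snap area belongs to controls, a normal account can't replace it,
        # so the softlink never gets made.
        print("cannot replace existing safe.snap; relink by hand as controls")
    else:
        if os.path.lexists(target):
            os.remove(target)
        os.symlink(userapps, target)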
Reminder: Morning meetings will be on Monday, Tuesday, and Thursday at 8:30am until further notice. Tuesday and Thursday are reserved for extended maintenance hours (till 12) as well as 3IFO work (till 4).
SEI
SUS
3IFO
VE
CDS
J. Kissel

I'll file an aLOG later about the philosophy of design as I create the MEDM screens and fill out the infrastructure, but for now I'll note that I've completed the model / DAQ restarts necessary to install the infrastructure updates to the IMC / CARM calibration paths in the h1lsc, h1omc, h1susmc2, and h1calcs front-end models. A detailed log of what I've done is below. I restarted the DAQ at ~07:31 UTC.

- brought ISC_LOCK and IMC_LOCK guardians to DOWN
- saved SUS MC2 alignment offsets
- brought SUS MC2 to the safe state
- captured and committed safe.snaps for /opt/rtcds/userapps/release/
    sus/h1/burtfiles/h1susmc2_safe.snap  <<-- used the SDF system, following the instructions in LHO aLOG 16949
    lsc/h1/burtfiles/h1lsc_safe.snap     <<-- used "makeSafeBackup lsc h1lsc"
    omc/h1/burtfiles/h1omc_safe.snap     <<-- (as above)
    cal/h1/burtfiles/h1calcs_safe.snap   <<-- (as above)
- compiled and installed the relevant front-end models on h1build (see the sketch at the end of this entry):
    make h1lsc; confirmed success; make install-h1lsc; confirmed success;
    make h1omc; confirmed success; make install-h1omc; confirmed success;
    make h1susmc2; confirmed success; make install-h1susmc2; confirmed success;
    make h1calcs; confirmed success; make install-h1calcs; confirmed success;
- h1lsc had an IPC error before getting started, but it cleared and stayed clear after a diag reset (no other models had IPC errors reported)
- brought HAM3 SEI to OFFLINE (with a spurious GS13 watchdog trip along the way)
- restarted the relevant front-end code:
    on h1lsc0 -- cdsCode; starth1lsc. This lit up the CDS overview with IPC errors as expected, on h1omc, oaf, lscaux, asc, ascimc, and the globally controlled SUS (the MCs 1-3, PRs M & 2, SRs M & 2, BS, OMs, RMs, and TMs). Decided to complete the model restarts before clearing errors.
    on h1omc -- starth1omc
    on h1calcs -- starth1calcs; realized the model still needs the new IPC senders from MC2 before its IPC errors go away
    on h1sush34 -- starth1susmc2
    on h1oaf0 -- starth1calcs
- hand-cleared all IPC error messages with a diag reset on all affected models' GDS_TP screens; confirmed no lingering errors were present
- cleared the SUS MC2 and ISI HAM3 watchdogs from the SUS model restart
- brought HAM3 SEI up to ISOLATED (accompanied by another spurious GS13 watchdog trip)
- brought SUS MC2 to ALIGNED
- brought the IMC_LOCK guardian back to LOCKED; confirmed successful relock of the IMC, with light on the AS port in the same camera position it was in when I got started
- shut down the DAQ / FB / h1dc0 at 07:31 UTC to clear the remaining DAQ errors; confirmed the only remaining error is from the h1oaf model, which is now missing some CARM / IMC senders
- committed all model changes to the userapps repo:
    ${userapps}/lsc/h1/models$   Sending h1lsc.mdl
    ${userapps}/omc/h1/models$   Sending h1omc.mdl
    ${userapps}/sus/h1/models$   Sending h1susmc2.mdl
    ${userapps}/cal/common/models$   Sending CAL_CS_MASTER.mdl
    ${userapps}/cal/h1/models$   Sending h1calcs.mdl
    Committed revision 9944.

DONE!
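For reference, a minimal sketch of the compile-and-install pattern above (assuming it is run from the RCG build directory on the build machine; it simply stops on the first failure instead of "confirming success" by hand):

    import subprocess

    models = ["h1lsc", "h1omc", "h1susmc2", "h1calcs"]
    for model in models:
        # equivalent of "make <model>; make install-<model>", checking each step
        subprocess.run(["make", model], check=True)
        subprocess.run(["make", "install-" + model], check=True)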
J. Kissel, K. Izumi
What have I changed in the models?
LSC Model, lsc/h1/models/h1lsc.mdl (and omc/common/models/lscimc.mdl)
-- Inside the IMC (library) block,
- removed obsolete GUARD block (which removes many now-unused EPICS channels from the lsc model)
- removed obsolete FRINGE block (originally thought to be used to count the number of fringes passed in a given computation cycle,
never used in practice)
- removed redundant shipping of the IMC L control signal from around the IMC-MCL path; it now need only be picked off of MC2
- removed now unnecessary flags between IMC-L / IMC-TRANS and IMC-MCL paths
- installed new spigot for shipping IMC common mode board's error signal (which starts as the "IMC-I") to CAL-CS front end model
-- Top-level
- removed all instances of IMC L control signal IPCs to cal CS model
- installed new IPC for IMC common mode board's error signal, called H1:LSC-CAL_IMC_ERR
- renamed IPC for digitized fast control signal (typically called some form of IMC-F) to H1:LSC-CAL_IMC_F_CTRL
OMC Model, omc/h1/models/h1omc.mdl (and omc/common/models/omclsc.mdl)
-- Inside the LSC (library) block,
- removed all spigots for CARM ERR and CARM CTRL signals surrounding the CARM bank (in the OMC model, this bank is really only meant for the eventual, if necessary, small corrections for CARM sent to the ETMs; those signals will now, again, be gathered elsewhere to simplify the calibration scheme)
-- Top-level
- installed new IPC sender for the IFO / REFL9 common mode board's error signal, called H1:OMC-CAL_CARM_ERR
- removed all former CARM ERR and CARM CTRL IPC senders
SUS MC2 model, sus/h1/models/h1susmc2.mdl
-- Top-level
- installed 3 new IPC senders for the "CTRL" output of the LOCK filters for each stage, to be fed to the CAL-CS model, called
H1:SUS-MC2_M1_LOCK_L_CTRL, H1:SUS-MC2_M2_LOCK_L_CTRL, and H1:SUS-MC2_M3_LOCK_L_CTRL.
CAL-CS model, cal/h1/models/h1calcs.mdl (and cal/common/models/CAL_CS_MASTER.mdl)
-- Top-level
- removed all former IPC receivers of various versions / pick-offs of the IMC control signals
- installed all new IPC receivers mentioned above,
Error Signals:
H1:LSC-CAL_IMC_ERR -- from h1lsc.mdl -- error signal for the IMC when CARM is not locked -- digitized IMC Common Mode Board signal (digitized after the sum of the two input signals)
H1:OMC-CAL_CARM_ERR -- from h1omc.mdl -- error signal for CARM when CARM is locked -- digitized IFO / REFL9 Common Mode Board signal (digitized after the sum of the two input signals)
Control Signals:
H1:LSC-CAL_IMC_F_CTRL -- from h1lsc.mdl -- "fast" control signal for the IMC going to the PSL -- digitized IMC Common Mode Board signal that goes to the PSL AOM, after all analog filtering on the CM board
H1:SUS-MC2_M1_LOCK_L_CTRL -- from h1susmc2.mdl -- "slow", hierarchical control signal for the M1 stage of MC2
H1:SUS-MC2_M2_LOCK_L_CTRL -- from h1susmc2.mdl -- "slow", hierarchical control signal for the M2 stage of MC2
H1:SUS-MC2_M3_LOCK_L_CTRL -- from h1susmc2.mdl -- "slow", hierarchical control signal for the M3 stage of MC2
- reconnected all of the IPCs to the newly reshuffled inputs to the CS block
-- Inside the CS (library) block
- Pulled out the CTRL signal calculation for the IMC, since the sum of the FAST / F and SLOW / L paths is needed for both the IMC and CARM calibration signals, depending on the configuration of the IFO. Put it in a new block called SUM_IMC_CTRL (a sketch of this bookkeeping follows the channel list below).
- Pushed the output of SUM_IMC_CTRL to the now-single CTRL inputs of SUM_IMC and SUM_CARM, where it is added to the respective IMC and CARM error signals.
- Created new channels intended to be the "final answer" (though they're not yet stored in the frames, so they don't yet have the "_DQ" extension):
H1:CAL-CS_IMC_DELTAF (_DQ) --- open loop frequency noise for the IMC when CARM is not controlled
H1:CAL-CS_IMC_CTRL (_DQ) --- total frequency control signal sent to the IMC and PSL, whether the error signal in use is IMC or CARM
H1:CAL-CS_CARM_DELTAF (_DQ) --- open loop frequency noise for CARM when CARM is controlled
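To make the bookkeeping in the CS block concrete, a minimal sketch with made-up function names (the real sums live in the front-end model, and the calibration filters each input passes through before the sums are omitted here):

    def sum_imc_ctrl(imc_f_ctrl, mc2_m1_l, mc2_m2_l, mc2_m3_l):
        """SUM_IMC_CTRL: total frequency control = fast (F) path + slow (L) hierarchy on MC2."""
        return imc_f_ctrl + mc2_m1_l + mc2_m2_l + mc2_m3_l

    def imc_deltaf(imc_err, imc_ctrl):
        """SUM_IMC: open-loop IMC frequency noise, used when CARM is not controlled."""
        return imc_err + imc_ctrl

    def carm_deltaf(carm_err, imc_ctrl):
        """SUM_CARM: open-loop CARM frequency noise, used when CARM is controlled."""
        return carm_err + imc_ctrl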
Dan Hoak asked me about upconversion around the OMC dither lines at 575.1 Hz, 600.1 Hz, 625.1 Hz and 650.1 Hz, since the upconversion seen in L1 has been quite large at those frequencies (amplitude and band size). We don't yet have the long stretches of DC readout data used to look at L1 lines, but a quick look at the Feb 26 H1 lock confirms that upconversion around dither lines in H1 is large too. Plots made with ldvw are attached. The fine structures around 575.1 and 600.1 Hz are nearly identical to each other. The same goes for 625.1 and 650.1 Hz, but their structures differ significantly from those of the two lower-frequency bands. To repeat a comment made in the LLO alog originally, CW searches in such severely contaminated bands are pointless. We CWers hope that the bands affected by the dithers can be shrunk before O1 begins. (Of course, there are many more pressing issues to address before that...)
J. Kissel The IFO is down (and has been for ~4 hours, most likely because of the 6.4 [mag] EQ in Indonesia) and I'm beginning updates to the LSC, OMC, SUSMC2, and CALCS models to fix up the IMC/CARM calibration paths, so I've turned OFF the observation intent bit around 14:50 UTC (6:50a PT). I've also switched the ISC LOCK and IMC LOCK guardians to the DOWN state. I found that MC2 had not had its alignments saved, so I've saved them.
Gabriele, Sheila, Alexa, Evan
We have engaged the DHARD WFS Y (and P) at 3 Hz on resonance with a reduced oplev damping gain in the ETMs. Again, to start off we closed the DHARD Y WFS with 3 Hz BW at 50pm CARM offset. Since this loop is also stable at low BW, we will leave it in the low BW configuration at this point, so that we are at a 3 Hz BW on resonance.
We had tried engaging the new DHARD Y loops as described in LHO#17006. However, we quickly found that this configuration was unstable. So, we removed the partial plant inversion FM6 and took a plant TF. We found that the plant that Gabriele had measured with the oplevs was slightly different than the @50pm plant (see Gabriele's comment). We adjusted FM6 accordingly to compensate for the peaks seen between 1 and 5 Hz. FM6 is now zpk([-0.3303+i*15.1459;-0.3303-i*15.1459;-1.9027+i*16.6711;-1.9027-i*16.6711; -0.2672+i*19.227;-0.2672-i*19.227],[-0.6659+i*18.7;-0.6659-i*18.7;-0.509+i*11.519; -0.509-i*11.519;-1.0404+i*15.4844; -1.0404-i*15.4844], 1)gain(0.469248).
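For convenience, a minimal sketch of that FM6 design, taking the listed roots as s-plane values in rad/s and folding the trailing gain(0.469248) into k (both assumptions about the design-string convention), and evaluating the response around the 1-5 Hz band it is meant to compensate:

    import numpy as np
    from scipy import signal

    zeros = [-0.3303 + 1j*15.1459, -0.3303 - 1j*15.1459,
             -1.9027 + 1j*16.6711, -1.9027 - 1j*16.6711,
             -0.2672 + 1j*19.2270, -0.2672 - 1j*19.2270]
    poles = [-0.6659 + 1j*18.7000, -0.6659 - 1j*18.7000,
             -0.5090 + 1j*11.5190, -0.5090 - 1j*11.5190,
             -1.0404 + 1j*15.4844, -1.0404 - 1j*15.4844]
    k = 0.469248

    f = np.logspace(-1, 1.5, 500)                       # 0.1 Hz to ~30 Hz
    w, h = signal.freqs_zpk(zeros, poles, k, worN=2*np.pi*f)
    mag_db, phase_deg = 20*np.log10(np.abs(h)), np.degrees(np.angle(h))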
To close the loop at low BW at 50pm CARM offset, we engage FM2, FM3, FM4, FM6, FM9 with a gain of 30. FM6 is described above, and the remaining filters are the same as in LHO#17006. With a gain of 360, this gives a UGF of 3 Hz and a phase margin of 36 deg.
On resonance with a gain of 30, we measured that the UGF is 3.5 Hz with a phase margin of 36 deg.
This is in the guardian now.
In the first attached plot the blue circles show the measured DHARD plant transfer function, at 50 pm CARM offset. The red trace is a fit, which matches the measurement quite well. To be able to run the loop with a 3 Hz bandwidth and a simple controller like the one we used for pitch, we had to compensate for the two higher pole/zero pairs.
The second plot compares the DHARD plant measured today at 50pm using the ASC signals with the one I measured on Saturday using only ETMY and its optical lever. They are clearly quite different. It's unclear to me why this happens. It could be that ETMX and ETMY are significantly different, and when driving DHARD we are using the sum of the two.
Sheila, Gabriele, Evan
We are on dc readout with the following loops locked (pitch and yaw):
dETM is high bandwidth (~3 Hz), as is BS. cETM is lower bandwidth (probably by a factor of 10 or so) because we found it was injecting noise into the DARM spectrum up to ~50 Hz. PRM is very low bandwidth (more than 30 s time constant; this is probably too long). IM4 and PR2 are something like 100 mHz or less.
The CHARD P, Y WFS have the same filters engaged as the DHARD P, Y WFS, respectively. The gains for CHARD (P, Y) are (-20, -40). If we want a 3 Hz BW, the open loop measurement we took last night indicated we were about 10 dB too low.
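Quick arithmetic for that last statement, assuming the 10 dB refers to open-loop gain magnitude:

    factor = 10 ** (10.0 / 20.0)                  # ~3.16x gain increase for +10 dB
    new_gains = [g * factor for g in (-20, -40)]  # roughly (-63, -126) for CHARD (P, Y)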
Here is an estimate of DAC noise propagated forward to the ETM ESDs. I've used Peter's recent DAC noise model, an ETM ESD force coefficient of 2×10⁻¹⁰ N/V², a bias of 380 V on each ETM, and some hints from Jeff about the DAC → ESD signal chain.
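A minimal sketch of the linearization behind an estimate like this; the DAC noise level, the driver gain, and the free-mass approximation are placeholders I've assumed, not numbers from Peter's model or Jeff's signal chain:

    import numpy as np

    alpha  = 2e-10      # ESD force coefficient [N/V^2] (quoted above)
    v_bias = 380.0      # ESD bias [V] (quoted above)
    mass   = 40.0       # test-mass mass [kg]
    n_dac  = 1e-6       # assumed flat DAC voltage noise [V/rtHz] -- placeholder
    g_drv  = 40.0       # assumed DAC-to-ESD driver gain [V/V] -- placeholder

    f = np.logspace(1, 3, 200)                     # 10 Hz to 1 kHz
    # F = alpha*(v_bias + v)^2 ~= alpha*v_bias^2 + 2*alpha*v_bias*v for small v,
    # so the voltage-to-force noise coupling is 2*alpha*v_bias.
    force_asd = 2.0 * alpha * v_bias * g_drv * n_dac          # [N/rtHz]
    # Treat the test mass as free above the suspension resonances: x = F/(m*(2*pi*f)^2)
    disp_asd  = force_asd / (mass * (2.0*np.pi*f)**2)         # [m/rtHz]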
Evidently this is somehow an overestimate, but the shape and magnitude are roughly in agreement with the spectrum between 50 and 100 Hz.
As a quick test of whether DAC noise is really a limiting source here, we could try ramping down the ETMY bias during full lock (since we're not using the ETMY ESD).
Also, Nic and Jamie have inquired about the uptick in the ASD above a few kilohertz. The noise there seems to be largely uncorrelated between the two DCPDs (see attachment), which seems to suggest that it's still shot noise. (Based on measurements that Dan and I took of the DCPD dark noise, I believe this feature is too big to be explained by excess noise in the DCPDs or their signal chain.)
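A minimal sketch of the sum/difference check behind that statement, with placeholder time series and an assumed sample rate; for shot noise the sum and difference spectra agree, while correlated (classical) noise appears only in the sum:

    import numpy as np
    from scipy import signal

    fs = 16384.0                        # assumed DCPD sample rate
    a = np.random.randn(int(60 * fs))   # placeholder for DCPD A time series
    b = np.random.randn(int(60 * fs))   # placeholder for DCPD B time series

    f, p_sum  = signal.welch(a + b, fs, nperseg=int(fs))
    f, p_diff = signal.welch(a - b, fs, nperseg=int(fs))
    # p_sum ~ p_diff  -> uncorrelated between the PDs (shot-noise-like)
    # p_sum >> p_diff -> correlated technical noise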
Evan, Alexa, Gabriele, Sheila
We have closed 8 DRMI ASC loops with the arms off resonance. Ordered fast to slow they are:
These loops are now all turned on by the guardian: MICH comes on first, there is a pause, and then the rest come on. We can leave these on for the first steps of the CARM offset reduction. We have been manually turning off all of them except for MICH at this point. We tried leaving the PRM loop on once; this caused bad drift without the 2 REFL loops closed.
WARNING: the guardian turns on 6 loops during the DRMI ASC step that need to be turned off manually before the CARM offset is reduced too much
Washing is complete from the corner station to double door X-1-4. Lights and equipment have been relocated to the single door between doors X-1-4 and X-1-5. Cleaning will begin tomorrow, moving south.
730 Karen, Cris - LVEA
849 Corey - EY
902 Kyle - LVEA Looking for parts for tomorrow
906 Bubba - LVEA Check on the cleanroom test stand
916 Bubba - Out
918 Kyle - Out
918 Corey - Out
1537 Dave B. - CER
Have posted two sets of plots; one with no zoom and the other zoomed in on ITMX and the BS. The four peaks on ITMX have some time correlation with the peaks on ITMY and ETMY. The time of the second peak on the BS correlates with the third peak on ITMX. With all 7 OpLevs plotted, there is no clear pattern in the peaks. Will discuss with the OpLev folks. These plots are made with minute trends, not second trends, due to an error with plotting the second trend data. I am looking into the cause of this error.
Laser Status:
SysStat is good
Output power is 32.3 W (should be around 30 W)
FRONTEND WATCH is Green
HPO WATCH is Red
PMC:
It has been locked for 0 days, 15 hr, 14 min (should be days/weeks)
Reflected power is 2.0 W and PowerSum = 24.9 W (reflected power should be <= 10% of PowerSum)
FSS:
It has been locked for 0 hr and 17 min (should be days/weeks)
Threshold on transmitted photo-detector PD = 1.18 V (should be 0.9 V)
ISS:
The diffracted power is around 9.3% (should be 5-15%)
Last saturation event was 15 hr and 17 min ago (should be days/weeks)
The wind gusts are at around 30 mph. We could see from the ALS control signals that the arms are moving more than usual, so I changed the end stations to the high blends, and we are using BRS sensor correction at end X (configuration described in alog 16583).
We are back to the 45 mHz blends, since the wind has died down, but the BRS sensor correction is still on.
S. Dwyer, J. Kissel

Speaking with Sheila this morning, the improvement in the ALS performance was "not as clear" as last time, when the winds were 40 [mph] at EY (i.e. LHO aLOG 16526). This could be because the wind only got to roughly 30 [mph] during the above configuration switch. Recall that in LHO aLOG 16526, the X-end was *not* changed, and the wind amplitude was large only at the Y-end.
I was checking ISI configurations this morning and found that X & Y sensor correction at EX was actually OFF on the ISI, but it was turned off at a different point in the path than the one I usually try to steer commissioners and operators toward using. This would have made it look like sensor correction was on when no STS signal was actually going to the ISI. I hope this explains some of why "the improvement was not as good as before". I've been meaning to make some edits to Hugo's new SensCor MEDM to make this clearer, but haven't gotten around to it. I also found a few other configuration errors, but I didn't bother writing them down. Time to get more serious about SDFs, I guess.