PI mode 19 started to ring up a bit, which caused the SUS_PI node to change some of the PLL filters and produced an SDF diff. Mode 19 is not one of the modes that we regularly damp at 30 W, but it may have been excited by the wind. I went to clear the SDF diff, but just clicking the unmonitor button would not work; there was an exclamation point next to "MON", and I'm not sure whether that was supposed to pop up a screen for me that didn't work remotely, or whether it was something else. I managed to select "unmonitor all", with only that channel having an FM3 diff, and got it to accept it that way. I'm worried that this may have unmonitored the entire filter bank, and that there was another screen I couldn't see that would have let me choose which parts of that filter bank to monitor.
We are back to Observing and there is a screenshot below of the SDF diff.
I brought this back to how it was before I unmonitored the entire bank, but I left the FMs unmonitored since the PI Guardian can change them. I was correct that I was supposed to get another screen for the filter bank when clicking the !MON button, but I could not get this remotely.
Smooth sailing so far. The wind just jumped from about 5 mph to 30 mph. It is forecast to get very windy today, so we will see; this may be the beginning.
TITLE: 04/07 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
OUTGOING OPERATOR: Jim
SUMMARY: OPERATOR IS WORKING REMOTELY FOR THE DURATION OF THIS SHIFT. I still have not recovered from my pinkeye and we could not find an emergency replacement, so I am logged in and will be monitoring from home. I will do as much as I can from home, and Corey offered to drive on site in the early morning if need be. I have contacted LLO, and I am on the Control Room's TeamSpeak channel if anyone needs to get a hold of me.
TITLE: 04/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Commissioning changes made staying locked difficult, but it's sorted now. TJ is attempting to do his shift remotely (otherwise he's "out sick").
LOG:
3:30 UTC Noticed range was falling off & ASC was ringing up
4:00 UTC Lockloss. Started trying to untangle the commissioning changes from earlier; got locked again, but the range deteriorated again and ASC rang up again. Sheila helped me figure out that her CSOFT changes hadn't been added to the ISC_LOCK guardian, so CSOFT currently still has to be turned up by hand. We should sort this out tomorrow.
Just to clarify what happened:
The ITMY oplev has been going bad, and it got bad enough to start causing locklosses during the commissioning window yesterday, so we spent some of our commissioning time doing corrective maintenance to get rid of oplev damping. Since my change to the CSOFT gain got overwritten in the guardian (not sure why), it didn't come on with the same gain for the next lock. This meant that we had the radiation pressure instability that the oplev damping was originally intended to suppress, which is different from the 0.44 Hz oscillations we've been having for the past few weeks, which are actually caused by the oplev.
So the original cause of the trouble was not commissioning, but the oplev going bad.
Summary:
Full DQ shift here
Brett, Jeff and I have been looking a little at Brett's suspension model, trying to assess how our damping design (both top mass and oplev) performs at high power, based on some conversations at the LLO commissioning meeting. Today I turned off oplev damping on all test masses.
Top mass damping:
The first attached plot shows the magnitudes of the top-to-top transfer functions for L, P, and Y for 3 different states (no damping and no power in the arms, top mass only damping and no power in the arms, and top mass only damping and 30 Watts of input power). You can see that while it would be possible to design better damping loops to deal with the radiation pressure changes in the plant, there isn't anything terrible happening at 30 Watts. I also looked at 50 Watts input power, and there aren't any large undamped features. I'm not plotting T, V, and R because the radiation pressure doesn't have any impact on them according to the model.
Oplev damping:
The next three plots show the transfer functions from PUM to test mass pitch, in the optic basis and in the hard and soft bases. One thing that is clear is that we should not try to damp the modes above 1 Hz in the optic basis, so if we really want to use oplev damping we should do it in the radiation pressure basis and probably have a power-dependent damping filter. The next two plots show the impact on the hard and soft modes. You can see that the impact of the oplev damping on the hard modes (and therefore the hard ASC loops) is minimal, but there is an impact on the soft loops around half a Hz. DSOFT pitch has a low bandwidth, but we use a bandwidth of something like 0.7 Hz for CSOFT P to suppress the radiation pressure instability.
Krishna had noted that the ITMY oplev seems to have been causing several problems recently (alog 34351), and this afternoon we think that it caused us a lockloss. I tried just turning the oplev damping off, which was fine, except that we saw signs of the radiation pressure instability returning after a few minutes at 30 Watts. I turned the CSOFT gain up by a factor of 2 (from 0.2 to 0.4), and this instability went away. The last attached plot is a measurement of the CSOFT pitch loop with 2 Watts, no oplev damping, before I increased the gain to 0.4.
The guardian is now set to not turn on the oplev damping, and this is accepted in SDF. Hopefully this saves us some problems, but it is probably still a good idea to fix the ITMY oplev on Tuesday.
I don't know if we can run like this. Five hours in, all of the ASC pitch loops are angry and the range is suffering. Pretty sure the lock won't last in this configuration. The attached plot of ASC DHARD pitch shows that all of the increased signal is in the 0.44 Hz quad mode. Red is from 4 hours ago, blue is current.
We just lost lock. I think we should re-engage the oplevs.
Recall that top-to-top transfer functions do not show the damping of the test mass. They can mislead you into thinking there is high damping when there isn't really at the test mass. So all transfer functions in this case should show the test mass as an output. The input probably isn't too important, but from a consistency point of view we may as well make all transfer functions between the same stages.
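To illustrate that caveat, here is a toy two-stage sketch (purely illustrative numbers, not the real quad model) comparing the top-to-top and top-to-test-mass responses when viscous damping acts only at the top mass:

```python
# Toy illustration (not the real quad model): a two-stage chain with viscous
# damping applied only at the top mass. Compare the top-drive-to-top-motion
# transfer function with the top-drive-to-test-mass transfer function; the
# former can overstate how well the bottom stage is damped.
# All masses, stiffnesses, and the damping coefficient are made-up numbers.
import numpy as np

m1, m2 = 10.0, 10.0        # top and test masses [kg] (illustrative)
k1, k2 = 4000.0, 1000.0    # suspension stiffnesses [N/m] (illustrative)
c1 = 50.0                  # viscous damping at the top mass [N s/m]

f = np.linspace(0.1, 10.0, 2000)
w = 2 * np.pi * f

tf_top, tf_test = [], []
for wi in w:
    # Frequency-domain equations of motion: K(w) x = [F, 0]
    K = np.array([[-wi**2 * m1 + 1j * wi * c1 + k1 + k2, -k2],
                  [-k2, -wi**2 * m2 + k2]])
    x = np.linalg.solve(K, np.array([1.0, 0.0]))  # unit force on the top mass
    tf_top.append(abs(x[0]))   # top-to-top response
    tf_test.append(abs(x[1]))  # top-to-test-mass response

# The residual peak heights in tf_test show how much resonance survives at the
# bottom stage even when tf_top looks comfortably damped.
```

With the test mass as the output, the surviving resonance is visible directly, which is the point of the comment above.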
I agree that doing the oplev damping in hard and soft coordinates may be better than local coordinates. In local coordinates, all test masses need high bandwidth damping to accommodate the hard modes. In hard and soft coordinates, only the hard loops need high bandwidth damping. So that might give us a win of up to sqrt(2) in noise for the same amount of damping.
After getting relocked, the range started dropping again and the ASC FOMs showed the 0.44 Hz mode was still ringing up even with oplev damping. After talking to Sheila, we realized the CSOFT gain wasn't in the guardian yet (it was still at 0.2; it should be 0.4 with oplev damping off), so I set 30-second tramps on CSOFT P and the oplev damping, and with some quick clicking ramped the CSOFT gain up and the oplev gains back to zero. The lock survived this, the ASC FOMs settled down, and the range started recovering. I've accepted the changes in SDF. Hopefully the lock will survive the night, but if it doesn't, the CSOFT gain will show up in SDF. It should be set to 0.4.
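For reference, the quick clicking amounts to setting the ramp times first and then writing the new gains. A minimal sketch with pyepics is below; the channel names follow the usual H1 naming convention but are assumptions (as is showing only one oplev damping bank), so verify them before use:

```python
# Hedged sketch of the gain-ramp procedure described above, using pyepics.
# Channel names are assumed (standard H1 filter-module naming), not copied
# from the front end; only one oplev damping bank is shown as an example.
from epics import caput
import time

RAMP = 30  # seconds, as in the log entry above

# Set ramp times first so the subsequent gain writes ramp smoothly.
caput('H1:ASC-CSOFT_P_TRAMP', RAMP)
caput('H1:SUS-ITMY_L3_OLDAMP_P_TRAMP', RAMP)   # assumed oplev damping bank name

# Ramp CSOFT P up and the oplev damping gain down together.
caput('H1:ASC-CSOFT_P_GAIN', 0.4)
caput('H1:SUS-ITMY_L3_OLDAMP_P_GAIN', 0.0)     # assumed oplev damping gain channel

time.sleep(RAMP)  # wait for both ramps to complete
```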
TITLE: 04/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Sheila updated the Guardian yesterday for the violin MODE10 filters used to damp ETMX's 4.735 kHz mode. Unfortunately, upon locking today, this mode was rung up, so I disabled FM6 (leaving only FM4, FM9, & FM10 enabled). Thank you Sheila for updating the Guardian.
H1 is now back to OBSERVING after the COMMISSIONING window today.
LOG:
Added 175ml to the crystal chiller.
Evan, Miriam
With H1 in commissioning mode (and L1 intermittently going in and out of lock), we performed a last set of blip-like injections in the H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_EXC channel, as in aLogs 35228 and 35116. Because somehow I couldn't get ldvw to generate the omega scans, I ran wdq-batch, and full omega scans can be found here. To be safe, we started quiet and slowly increased the SNR. Injections that can be seen in DARM are marked with *.
500 Hz single pulse sine Gaussian:
1175550531
1175550554
1175550579
1175550607
1175550633 *
1175550656 *
1175550687 *
700 Hz single pulse sine Gaussian:
1175550728
1175550757
1175550779
1175550805 *
1175550827 *
1175550851 *
1175550881 *
500 Hz step-function like sine Gaussian:
1175550937
1175550959
1175550998
1175551047 *
1175551077 *
1175551108 *
700 Hz step-function like sine Gaussian (we used the same scaling as for the 500 Hz injection, but they are much quieter):
1175551139
1175551159
1175551181
1175551206
1175551229
1175551253 *
Filtered step function (same as last time):
1175551292.5 *
1175551318.5 *
1175551345.5 *
None of these injections reproduce the raindrop blip glitches that Robert Schofield wanted to see.
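In case it is useful for reproducing these, a minimal sketch of how a single-pulse sine-Gaussian like the ones above can be generated is below; the amplitude, Q convention, and sample rate are illustrative assumptions, not the parameters we actually used.

```python
# Hedged sketch: generate a 500 Hz sine-Gaussian burst similar in spirit to
# the injections listed above. Amplitude, Q, and sample rate are assumptions.
import numpy as np

def sine_gaussian(f0, q, amp, t, t0=0.0):
    """Sine-Gaussian: amp * sin(2*pi*f0*(t-t0)) * exp(-(t-t0)^2 / (2*tau^2)),
    with tau = q / (2*pi*f0) (one common convention for the envelope width)."""
    tau = q / (2.0 * np.pi * f0)
    return amp * np.sin(2 * np.pi * f0 * (t - t0)) * np.exp(-(t - t0)**2 / (2 * tau**2))

fs = 16384.0                       # sample rate [Hz], typical for DARM-band injections
t = np.arange(-0.5, 0.5, 1.0 / fs)
waveform = sine_gaussian(f0=500.0, q=9.0, amp=1e-2, t=t)

# 'waveform' could then be handed to the excitation channel named above
# (H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_EXC) via the usual injection tools.
```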
J. Kissel, J. Driggers, S. Dwyer
WP 6557
After a good bit of debugging, we installed and confirmed the functionality of the new ISC_LOCK guardian state used to prep a nominal-low-noise IFO for calibration measurements (see original design in LHO aLOG 35295). The debugged version has been committed to the userapps svn repo. The user functionality from the "ladder" screen is a little weird on the return from NLN_CAL_MEAS to NOMINAL_LOW_NOISE, but this state will really only be used every two weeks or so, and most likely by me, so I'm not too worried. Just remember to be patient -- the transition takes ~220 [sec] because it's waiting for the 10 sec and 128 sec low-pass filter histories to settle after they've been successively cleared. I'll talk with TJ / Jamie to see if there's a better way to write the state that makes the user interface behave more normally. This closes the above-mentioned work permit.
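For anyone curious what such a state looks like structurally, here is a heavily simplified sketch of a Guardian state along these lines. It is not the installed ISC_LOCK code; the placeholder channel and timer name are assumptions, with only the ~220 sec settle time taken from the description above.

```python
# Hedged sketch of a Guardian state of the kind described above -- NOT the
# installed ISC_LOCK code. The 'ezca' EPICS interface is provided by the
# guardian environment, so it is only referenced here in a comment.
from guardian import GuardState


class NLN_CAL_MEAS(GuardState):
    """Prepare a nominal-low-noise IFO for calibration measurements."""
    request = True

    def main(self):
        # Whatever filter/gain changes the real state makes would go here, e.g.
        #   ezca['CAL-EXAMPLE_SETTING'] = 1   (placeholder channel name)
        # Then wait for the 10 sec and 128 sec low-pass filter histories to
        # settle after they've been cleared -- the ~220 sec transition noted above.
        self.timer['lowpass_settle'] = 220

    def run(self):
        # Hold in the transition until the settle timer expires.
        if not self.timer['lowpass_settle']:
            return False
        return True
```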
J. Kissel

The message -- new recommendations:
- If and when we decide to vent the corner, let's run the ETMY ESD with a constant high requested bias of the opposite sign, so we recover back to zero, then resume regular bias-flipping operation. (And let's make sure that the ETMX requested bias is OFF.)
OR
- If it looks like The Schmutz has been successfully removed from ITMX after the vent, there will likely be less SRC detuning, so we'll need to create a new calibration reference time and model. We'll use that opportunity to measure and reset the reference against which the ESD strength is measured as well.

How I came to this conclusion:
I've taken this week's charge measurements with the usual method -- drive each quadrant of the ESD, and measure the angular response in that test mass' optical lever Pitch and Yaw as a function of requested bias voltage. Where the angular actuation strength crosses zero is the effective bias voltage, which we suspect is due to accumulated charge in / on / around the ESD system. In the past, we've used this as a proxy for the change in longitudinal ESD actuation strength, which affects the calibration of the DARM loop. We also have a direct measure of the longitudinal actuation strength relative to a given reference time, as measured by the 35.9 Hz vs. 36.7 Hz ETMY SUS vs. PCALY calibration lines.

Traditionally (i.e. in O1), when we were not correcting for longitudinal actuation strength change, we wished to keep the effective bias voltage (as measured by angular actuation strength) less than +/- 10-20 [V], because -- if interpreted as longitudinal actuation strength -- any more would result in greater than a 10-20 [V] / 400 [V] = 2.5-5% strength change, which meant the low-frequency DARM loop calibration was off by 2.5-5%.

Several things have happened since then (i.e. in O2):
- We regularly flip the bias when each ETM's ESD is not in use, so charge accumulates more slowly (assuming a 50-60% IFO duty cycle; that has worked less well in times of 80-90% duty cycle).
- We compensate for longitudinal actuation strength change.
- We regularly create reference times that "reset" the model to which the longitudinal strength is relative.

All of this is to set up the conclusion and the attached plots. While we see that each quadrant of H1 SUS ETMY's effective bias voltage is at -40 [V] and trending more negative, which if mapped to longitudinal actuation strength relative to zero effective bias voltage is pushing 10%, we're not yet to the point where we need to consider doing anything, because the longitudinal actuation strength relative to the 2017-01-04 reference time is only 3-4%. The last 7 plots show how the relative longitudinal actuation strength has slowly grown over the past 3 months (with a snapshot from the summary pages taken every 2 Saturdays, including today).

So -- new recommendation:
- If and when we decide to vent the corner, let's run the ETMY ESD with a constant high requested bias of the opposite sign, so we recover back to zero, then resume normal operation.
OR
- If it looks like the schmutz has been successfully removed from ITMX after the vent, there will likely be less detuning, so we should create a new reference model and reset the reference against which the ESD strength is measured.
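Writing out the scaling used above explicitly (effective bias voltage as a proxy for the fractional longitudinal strength change, against the nominal 400 [V] bias):

\[
\frac{\Delta\alpha}{\alpha} \;\approx\; \frac{V_{\mathrm{eff}}}{V_{\mathrm{bias}}},
\qquad
\frac{10\,\mathrm{V}\ \text{to}\ 20\,\mathrm{V}}{400\,\mathrm{V}} = 2.5\%\ \text{to}\ 5\%,
\qquad
\frac{40\,\mathrm{V}}{400\,\mathrm{V}} = 10\%.
\]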
Activities brought up for next Maintenance Day:
Vern also discussed the status of a possible upcoming vent; we will find out the decision on Tuesday.
WP 6562; Nutsinee, Kiwamu
As a follow-up to Aidan's analysis (35336), we did a simple measurement this morning to determine the HWS coordinate system.
- Preliminary result (currently being double checked with Aidan):
[Measurement]
[Verification measurement]
I've independently checked my analysis and disagree with the above aLOG. I get the same orientation that I initially calculated in aLOG 35336.
After discussing the matter with Kiwamu, it turned out there was some confusion over the orientation of the CCD. The following analysis should clear this up.
1. ABCD matrix for ITMX to HWSX (T1000179):

   [ -0.0572      -0.000647 ]
   [  0.0035809   -17.4852  ]
So, nominally the X & Y coordinates are inverted by this matrix. However, the X coordinate is also inverted on each horizontal reflection off a mirror. Fortunately, there is an even number of horizontal reflections (plus the periscope, whose upper and lower mirrors cancel each other).
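To make the sign bookkeeping concrete, here is a quick numerical check applying the T1000179 ABCD matrix above to a test ray (the ray values are arbitrary; only the signs matter):

```python
# Apply the ITMX-to-HWSX ABCD matrix quoted above to a test ray to confirm
# that both the displacement and angle coordinates come out sign-flipped
# (the off-diagonal terms are small). Test-ray values are arbitrary.
import numpy as np

abcd = np.array([[-0.0572,    -0.000647],
                 [ 0.0035809, -17.4852 ]])

ray_at_itmx = np.array([1e-3, 1e-6])   # [displacement (m), angle (rad)], arbitrary
ray_at_hws = abcd @ ray_at_itmx

print(ray_at_hws)   # both components come out negative: coordinates are inverted
```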
Therefore, we can illustrate the optical system of the HWS as below:
As viewed from above, the return beam propagates from ITMX back toward the HWSX (from right to left in this image). A positive rotation of ITMX in YAW is a counter-clockwise rotation of ITMX when viewed from above. So the return beam rotates down in the image as illustrated. The conjugate plane of the HWS Hartmann plate (plane A) is at the ITMX HR surface (plane A'). The conjugate plane of the HWS CCD (plane B) is approximately 3m from the ITMX HR surface (going into the PRC - plane B').
The even mirror reflections cancel each other out. The only thing left is the inversion from the ABCD matrix. Hence, the ray that rotates counter-clockwise at ITMX rotates clockwise at the HWS - as illustrated here. In this case, towards the right of the HWS CCD.
Lastly, the HWS CCD coordinate system is defined as shown here (with the origin in the lower-left). I verified this in the lab this morning.
Therefore: the orientation in aLOG 35336 is correct.
CP3 log file DOES NOT exist! CP4 log file DOES NOT exist!
This is a test of the new vacuum controls cryopump autofill system. The striptools are now running on the virtual machine cdsscript1, so we are not tying up a workstation to display these plots. Because an autofill did not happen today, the warnings that the data files do not exist are expected.
The robo alog now has two attached png files, one per cryopump. In the old system it was a single file because the entire desktop of the workstation was captured.
J. Oberling, E. Merilh
This morning we swapped the oplev laser for the ETMy oplev, which has been having issues with glitching. The swap went smoothly with zero issues. The old laser SN is 130-1; the new laser SN is 194-1. This laser operates at a higher power than the previous laser, so the SUM counts are now ~70k (used to be ~50k); the individual QPD segments are sitting between 16k and 19k counts. This laser will need a few hours to come to thermal equilibrium, so I will assess this afternoon whether or not the glitching has improved; I will keep the work permit open until this has been done.
For those investigating the possibility of these lasers causing a comb in DARM, the laser was off and the power unplugged for ~11 minutes. The laser was shut off and unplugged at 16:14 UTC (9:14 PDT); we plugged it back in and turned it on at 16:25 UTC (9:25 PDT).
Attached are spectrograms (15:00-18:00 UTC vs. 20-22 Hz) of the EY optical lever power sum over a 3-hour period today containing the laser swap, and of a witness magnetometer channel that appeared to indicate on March 14 that a change in laser power strengthened the 0.25-Hz-offset 1-Hz comb at EY. Today's spectrograms, however, don't appear to support that correlation. During the 11-minute period when the optical lever laser is off, the magnetometer spectrogram shows steady lines at 20.25 and 21.25 Hz.

For reference, corresponding 3-hour spectrograms are attached from March 14 that do appear to show the 20.25-Hz and 21.25-Hz teeth appear right after a power change in the laser at about 17:11 UTC. Similarly, 3-hour spectrograms are attached from March 14 that show the same lines turning on at EX at about 16:07 UTC. Additional EX power sum and magnetometer spectrograms are also attached, to show that those two lines persist during a number of power level changes over an additional 8 hours. In my earlier correlation check, I noted the gross changes in magnetometer spectra, but did not appreciate that the 0.25-Hz lines were relatively steady.

In summary, those lines strengthened at distinct times on March 14 (roughly 16:07 UTC at EX and 17:11 UTC at EY) that coincide (at least roughly) with power level changes in the optical lever lasers, but the connection is more obscure than I had appreciated and could be a chance coincidence with other maintenance work going on that day. Sigh. Can anyone recall some part of the operation of increasing the optical lever laser powers that day that could have increased coupling of combs into DARM, e.g., tidying up a rack by connecting previously unconnected cables? A shot in the dark, admittedly, but it's quite a coincidence that these lines started up at separate times at EX and EY right after those lasers were turned off (or blocked from shining on the power sum photodiodes) and back on again.

Spectrograms of optical lever power sum and magnetometer channels:
Fig 1: EY power - April 4 - 15:00-18:00 UTC
Fig 2: EY witness magnetometer - Ditto
Fig 3: EY power - March 14 - 15:00-18:00 UTC
Fig 4: EY magnetometer - Ditto
Fig 5: EX power - March 14 - 14:00-17:00 UTC
Fig 6: EX witness magnetometer - Ditto
Fig 7: EX power - March 14 - 17:00-22:00 UTC
Fig 8: EX witness magnetometer - Ditto
Fig 9: EX power - March 15 - 00:00-04:00 UTC
Fig 10: EX witness magnetometer - Ditto
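For anyone who wants to reproduce these narrow-band spectrograms, a minimal gwpy sketch is below. The channel name is an assumption (standard oplev SUM naming), and NDS2 / data-access credentials are needed:

```python
# Hedged sketch of how the 20-22 Hz spectrograms above can be reproduced with
# gwpy. The channel name is assumed; the times match the April 4 EY panel.
from gwpy.timeseries import TimeSeries

channel = 'H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ'   # assumed channel name
data = TimeSeries.get(channel, 'Apr 4 2017 15:00', 'Apr 4 2017 18:00')

# fftlength=8 gives 0.125 Hz resolution, enough to separate the 20.25 and
# 21.25 Hz comb teeth from nearby integer-Hz lines.
spec = data.spectrogram2(fftlength=8, overlap=4) ** (1 / 2.)
spec = spec.crop_frequencies(20, 22)

plot = spec.plot(norm='log')
plot.savefig('ey_oplev_sum_20-22Hz.png')
```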
The laser continued to glitch after the swap; see the attachment from the 4/5/2017 ETMy oplev summary page. My suspicion is that the VEA temperature was just different enough from the Pcal lab (where we stabilize the lasers before install) that the operating point of the laser, once installed, was just outside the stable range set in the lab. So during today's commissioning window I went to End Y and slightly increased the laser power to hopefully return the operating point to within the stable range. Using the Current Mon port on the laser to monitor the power increase:
Preliminary results look promising, so I will let it run overnight and evaluate in the morning whether or not further tweaks to the laser power are necessary.