TITLE: 04/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Yesterday Sheila updated the Guardian for the violin MODE10 filters used to damp ETMx's 4.735 kHz mode. Unfortunately, upon today's lock this mode was rung up, so FM6 was disabled (leaving only FM4, FM9, & FM10 enabled). Thank you Sheila for updating the Guardian.
H1 is now back to OBSERVING after the COMMISSIONING window today.
LOG:
Added 175ml to the crystal chiller.
Evan, Miriam
With H1 in commissioning mode (and L1 intermittently going in and out of lock), we performed a last set of blip-like injections in the H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_EXC channel, as in aLOGs 35228 and 35116. Because I couldn't get ldvw to generate the omega scans, I ran wdq-batch; the full omega scans can be found here. To be safe, we started quiet and slowly increased the SNR. Injections that can be seen in DARM are marked with *.
500 Hz single pulse sine Gaussian:
1175550531
1175550554
1175550579
1175550607
1175550633 *
1175550656 *
1175550687 *
700 Hz single pulse sine Gaussian:
1175550728
1175550757
1175550779
1175550805 *
1175550827 *
1175550851 *
1175550881 *
500 Hz step-function like sine Gaussian:
1175550937
1175550959
1175550998
1175551047 *
1175551077 *
1175551108 *
700 Hz step-function like sine Gaussian (we used the same scaling as for the 500 Hz injection, but they are much quieter):
1175551139
1175551159
1175551181
1175551206
1175551229
1175551253 *
Filtered step function (same as last time):
1175551292.5 *
1175551318.5 *
1175551345.5 *
None of these injections reproduce the raindrop blip glitches that Robert Schofield wanted to see.
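For reference, a minimal sketch of a single-pulse, blip-like sine-Gaussian excitation at 500 Hz in Python. This is illustrative only; the amplitude and envelope width are assumptions, not the parameters of the actual injection files used above.

import numpy as np

# Illustrative single-pulse sine-Gaussian, roughly one cycle at f0.
# Amplitude and envelope width are placeholders, not the values used
# in the actual H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_EXC injections.
fs = 16384          # sample rate [Hz]
f0 = 500.0          # center frequency [Hz]
tau = 1.0 / f0      # Gaussian envelope width [s] (~single cycle)
amp = 1.0           # arbitrary amplitude [counts]

t = np.arange(-8 * tau, 8 * tau, 1.0 / fs)
waveform = amp * np.sin(2 * np.pi * f0 * t) * np.exp(-t**2 / (2 * tau**2))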
J. Kissel, J. Driggers, S. Dwyer
WP 6557
After a good bit of debugging, we installed and confirmed the functionality of the new ISC_LOCK guardian state used to prep a nominal-low-noise IFO for calibration measurements (see original design in LHO aLOG 35295). The debugged version has been committed to the userapps svn repo. The user functionality from the "ladder" screen is a little weird on the return from NLN_CAL_MEAS to NOMINAL_LOW_NOISE, but this state will really only be used every two weeks or so, and most likely by me, so I'm not too worried. Just remember to be patient -- the transition takes ~220 [sec] because it's waiting for the 10 sec and 128 sec low-pass filter histories to settle after they've been successively cleared. I'll talk with TJ / Jamie to see if there's a better way to write the state that makes the user interface act more normally. This closes the above-mentioned work permit.
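For context, a minimal sketch of how a Guardian state can wait out the ~220 [sec] filter-history settling time, assuming Guardian's standard GuardState / timer interface. The state body here is illustrative only, not the code actually committed to the userapps svn.

from guardian import GuardState

class NLN_CAL_MEAS(GuardState):
    """Illustrative only: wait for cleared low-pass filter histories to settle."""
    request = True

    def main(self):
        # (clear the 10 sec and 128 sec low-pass filter histories here)
        self.timer['filters_settle'] = 220  # seconds to wait

    def run(self):
        # Guardian re-runs this method until it returns True.
        if not self.timer['filters_settle']:
            return False
        return True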
J. Kissel
The message -- new recommendations:
- If and when we decide to vent the corner, let's run the ETMY ESD with a constant high requested bias of the opposite sign, so we recover back to zero, then resume normal bias-flipping operation. (And let's make sure the ETMX requested bias is OFF.)
OR
- If it looks like The Schmutz has been successfully removed from ITMX after the vent, there will likely be less SRC detuning, so we'll need to create a new calibration reference time and model. We'll use that opportunity to measure and reset the strength to which the ESD actuation is referenced as well.
---------------------
How I came to this conclusion:
I've taken this week's charge measurements with the usual method -- drive each quadrant of the ESD, and measure the angular response in that test mass's optical lever Pitch and Yaw as a function of requested bias voltage. The requested bias voltage at which the angular actuation strength crosses zero is the effective bias voltage, which we suspect is due to accumulated charge in / on / around the ESD system. In the past, we've used this as a proxy for the change in longitudinal ESD actuation strength, which affects the calibration of the DARM loop. We also have a direct measure of the longitudinal actuation strength relative to a given reference time, as measured by the 35.9 Hz vs. 36.7 Hz ETMY SUS vs. PCALY calibration lines.

Traditionally (i.e. in O1), when we were not correcting for longitudinal actuation strength changes, we wished to keep the effective bias voltage (as measured by the angular actuation strength) less than +/- 10-20 [V], because -- if interpreted as a longitudinal actuation strength change -- anything more would mean a 10-20 [V] / 400 [V] = 2.5-5% strength change, and hence a low-frequency DARM loop calibration error of 2.5-5%. Several things have happened since then (i.e. in O2):
- We regularly flip the bias when each ETM's ESD is not in use, so charge accumulates more slowly (this assumes a 50-60% IFO duty cycle, and has worked less well in times of 80-90% duty cycle).
- We compensate for longitudinal actuation strength changes.
- We regularly create reference times that "reset" the model to which the longitudinal strength is relative.

All of this is to set up the conclusion and the attached plots. While H1 SUS ETMY's effective bias voltage in each quadrant is at -40 [V] and trending more negative -- which, if mapped to longitudinal actuation strength relative to zero effective bias voltage, is pushing 10% -- we're not yet at the point where we need to consider doing anything, because the longitudinal actuation strength relative to the 2017-01-04 reference time is only 3-4%. The last 7 plots show how the relative longitudinal actuation strength has slowly grown over the past 3 months (with a snapshot from the summary pages taken every 2 Saturdays, including today).

So -- new recommendations:
- If and when we decide to vent the corner, let's run the ETMY ESD with a constant high requested bias of the opposite sign, so we recover back to zero, then resume normal operation.
OR
- If it looks like the schmutz has been successfully removed from ITMX after the vent, there will likely be less detuning, so we should create a new reference model and reset the strength to which the ESD is relative.
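As a worked illustration of the arithmetic above (with made-up angular-response numbers, not this week's measurement), the effective bias voltage is the zero crossing of a linear fit of angular response vs. requested bias, and dividing it by the 400 [V] full bias gives the fractional strength-change proxy:

import numpy as np

# Hypothetical angular response [urad/ct] vs. requested bias voltage [V]
# for one ESD quadrant; the real measurements use the optical lever P/Y signals.
bias = np.array([-400.0, -200.0, 0.0, 200.0, 400.0])
angular_response = np.array([-0.90, -0.40, 0.10, 0.60, 1.10])

# Linear fit: response = slope * bias + offset
slope, offset = np.polyfit(bias, angular_response, 1)

v_eff = -offset / slope            # effective bias voltage (zero crossing) [V]
strength_change = v_eff / 400.0    # fractional longitudinal strength-change proxy

print(f"effective bias ~ {v_eff:.1f} V -> ~{100 * abs(strength_change):.1f}% strength change")
# e.g. an effective bias of -40 V maps to 40 / 400 = 10%, as quoted above.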
Activities brought up for next Maintenance Day:
Vern also discussed the status of a possible upcoming vent -- we will find out the decision on Tuesday.
WP 6562; Nutsinee, Kiwamu,
As a follow-up to Aidan's analysis (35336), we did a simple measurement this morning to determine the HWS coordinate system.
- Preliminary result (currently being double checked with Aidan):
[Measurement]
[Verification measurement]
I've independently checked my analysis and disagree with the above aLOG. I get the same orientation that I initially calculated in aLOG 35336.
After discussing the matter with Kiwamu, it turned out there was some confusion over the orientation of the CCD. The following analysis should clear this up.
1. ABCD matrix for ITMX to HWSX (T1000179):
| -0.0572 | -0.000647 |
| 0.0035809 | -17.4852 |
So, nominally the X & Y coordinates are inverted by this matrix. However, the X coordinate is also inverted on each horizontal reflection off a mirror. Fortunately, there are an even number of horizontal reflections (plus the periscope, but its upper and lower mirrors cancel each other).
Therefore, we can illustrate the optical system of the HWS as below:

As viewed from above, the return beam propagates from ITMX back toward the HWSX (from right to left in this image). A positive rotation of ITMX in YAW is a counter-clockwise rotation of ITMX when viewed from above. So the return beam rotates down in the image as illustrated. The conjugate plane of the HWS Hartmann plate (plane A) is at the ITMX HR surface (plane A'). The conjugate plane of the HWS CCD (plane B) is approximately 3m from the ITMX HR surface (going into the PRC - plane B').
The even mirror reflections cancel each other out. The only thing left is the inversion from the ABCD matrix. Hence, the ray that rotates counter-clockwise at ITMX rotates clockwise at the HWS - as illustrated here. In this case, towards the right of the HWS CCD.
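To make the sign bookkeeping concrete, here is a minimal numerical check (my own sketch, simply applying the T1000179 ABCD matrix quoted above) showing that a positive angular kick at ITMX comes out with the opposite sign at the HWS, i.e. the clockwise rotation described above:

import numpy as np

# ITMX -> HWSX ABCD matrix from T1000179 (quoted above).
abcd = np.array([[-0.0572,    -0.000647],
                 [ 0.0035809, -17.4852 ]])

# Ray at ITMX: [transverse displacement x (m), angle theta (rad)].
# A positive ITMX yaw tilts the return beam by a small positive angle.
ray_itmx = np.array([0.0, 1e-6])   # 1 urad, purely angular, for illustration

ray_hws = abcd @ ray_itmx
print(ray_hws)   # both components come out negative: the rotation flips sign at the HWS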
Lastly, the HWS CCD coordinate system is defined as shown here (with the origin in the lower-left). I verified this in the lab this morning.

Therefore: the orientation in aLOG 35336 is correct.
CP3 log file DOES NOT exist! CP4 log file DOES NOT exist!
This is a test of the new vacuum controls cryopump autofill system. The striptools are now running on the virtual machine cdsscript1, so we are not tying up a workstation to display these plots. Because an autofill did not happen today, the warnings that the data files do not exist are expected.
The robo alog now has two attached png files, one per cryopump. In the old system it was a single file because the entire desktop of the workstation was captured.
J. Kissel
Gathered regular bi-weekly calibration / sensing function measurements. Preliminary results (screenshots) attached; analysis to come.
The data have been saved and committed to:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs
2017-03-21_H1DARM_OLGTF_4to1200Hz_25min.xml
2017-03-21_H1_PCAL2DARMTF_4to1200Hz_8min.xml
2017-03-06_H1_PCAL2DARMTF_BB_5to1000Hz_0p25BW_250avgs_5min.xml
J. Kissel
After processing the above measurement, the fit optical plant parameters are as follows:
DARM_IN1/OMC_DCPD_SUM [ct/mA] 2.925e-7
Optical Gain [ct/m] 1.110e6 (+/- 1.6e3)
Optical Gain [mA/pm] 3.795 (+/- 0.0053)
Coupled Cavity Pole Freq [Hz] 355.1 (+/- 2.6)
Residual Sensing Delay [us] 1.189 (+/- 1.7)
SRC Detuning Spring Freq [Hz] 6.49 (+/- 0.06)
SRC Detuning Quality Factor [ ] 25.9336 (+/- 6.39)
Attached are plots of the fit, and of how these parameters fit within the context of all measurements from O2.
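As a quick consistency check on the two optical gain numbers (my own arithmetic, assuming the [ct/m] figure is just the [mA/pm] gain converted to [mA/m] and multiplied by the DARM_IN1/OMC_DCPD_SUM gain in [ct/mA]):

# Unit check: optical gain [ct/m] = gain [mA/pm] * 1e12 [pm/m] * ADC gain [ct/mA]
adc_gain_ct_per_mA = 2.925e-7
optical_gain_mA_per_pm = 3.795

optical_gain_ct_per_m = optical_gain_mA_per_pm * 1e12 * adc_gain_ct_per_mA
print(f"{optical_gain_ct_per_m:.3e} ct/m")  # ~1.110e+06, matching the fit above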
In addition, given that the spread of the detuning spring frequency over the course of O2 is between, say, 6.5 Hz and 9 Hz, I show the magnitude ratio of two toy transfer functions whose only difference is the spring frequency. One can see that, if not compensated for, this means a systematic magnitude error of 5%, 10%, and 27% at 30, 20, and 10 Hz, respectively.
Bad news for black holes! We definitely need to track this time dependence, as was prototyped in LHO aLOG 35041.
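A minimal sketch that roughly reproduces the quoted systematic errors, assuming a simple detuned-spring sensing factor C(f) proportional to f^2 / (f^2 + fs^2 - i f fs / Q) and comparing fs = 6.5 Hz vs. 9 Hz. This is my own toy calculation, not the code used for the attached plots.

import numpy as np

def spring_response(f, fs, q=25.9):
    """Toy detuned-SRC optical-spring factor (other sensing terms cancel in the ratio)."""
    return f**2 / (f**2 + fs**2 - 1j * f * fs / q)

freqs = np.array([30.0, 20.0, 10.0])
ratio = np.abs(spring_response(freqs, 6.5) / spring_response(freqs, 9.0))
for f, r in zip(freqs, ratio):
    print(f"{f:5.1f} Hz: {100 * (r - 1):4.1f}% magnitude error")
# ~4%, ~9%, ~27% at 30, 20, 10 Hz -- consistent with the 5%, 10%, 27% quoted above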
Attached are plots comparing the sensing and response functions with and without the detuning frequency. Compared to LLO (aLOG 32930), at LHO the detuning frequency of ~7 Hz has a significant effect on the calibration around 20 Hz (see the response function plot). The code used to make this plot has been added to the SVN:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/SRCDetuning/springFreqEffect.m
Attached are plots showing differences in sensing functions and response functions for spring frequencies of 6 Hz and 9 Hz. Coincidentally they are very similar to the plots in the previous comment which show differences when the spring frequencies are 0 Hz and 6.91 Hz.
This morning Dick noticed that the Garb Room/LVEA Card Reader was OFF (not sure how long it had been OFF). We like to keep these ON, so I turned it back on.
Noticed the VEA Sweep checklists do NOT mention checking these readers (it's an action for me to update these documents).
Activity On The Docket:
TITLE: 04/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
H1 locked for over 31hrs. Still plan to have Commissioning Break from 9am - 1pm (due to LLO Commissioning break) with Calibration sweep & Blip glitch activities on the docket.
TITLE: 04/06 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
LOG: 12:32 UTC - GRB, with a clean hour of stand-down time after
TITLE: 04/06 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 70Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 15mph Gusts, 12mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.43 μm/s
QUICK SUMMARY: locked 24 hours as of 08:10UTC
TITLE: 04/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 72Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG: Not much happening. Environment is quiet, lock is almost 23 hrs long. Our range for the last ~4 hours has been pretty good; someone should figure out what we've been doing right.
TITLE: 04/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 71.5Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
H1 locked over 14hrs with a fairly quiet shift. Took H1 out of OBSERVING to address a rung up violin mode.
LOG:
This morning I noticed that the GWI.stat tool is frozen with regard to the Detector states (the time stamp at the top is updating, though). It currently lists H1 & L1 in NOT OK states.
I am looking into this. It looks like it's a problem with the gstlal_gw_stat process which extracts the status information from frames -- some side effect of Tuesday maintenance since it has been that way since yesterday at 10:10 PDT. I will ask Chad Hanna for help.
Chad modified the way the gstlal_gw_stat process checks checksums, and now GWIstat is working again.
This morning there is a different problem -- condor is not running properly on ldas-grid.ligo.caltech.edu . I've emailed the Caltech LDAS admins.
J. Oberling, E. Merilh
This morning we swapped the oplev laser for the ETMy oplev, which has been having issues with glitching. The swap went smoothly with zero issues. The old laser SN is 130-1, the new laser SN is 194-1. This laser operates at a higher power than the previous one, so the SUM counts are now ~70k (they used to be ~50k); the individual QPD segments are sitting between 16k and 19k counts. This laser will need a few hours to come to thermal equilibrium, so I will assess this afternoon whether or not the glitching has improved; I will keep the work permit open until this has been done.
For those investigating the possibility of these lasers causing a comb in DARM, the laser was off and the power unplugged for ~11 minutes. The laser was shut off and unplugged at 16:14 UTC (9:14 PDT); we plugged it back in and turned it on at 16:25 UTC (9:25 PDT).
Attached are spectrograms (15:00-18:00 UTC vs 20-22 Hz) of the EY optical lever power sum over a 3-hour period today containing the laser swap, and of a witness magnetometer channel that appeared to indicate on March 14 that a change in laser power strengthened the 0.25-Hz-offset 1-Hz comb at EY. Today's spectrograms, however, don't appear to support that correlation: during the 11-minute period when the optical lever laser is off, the magnetometer spectrogram shows steady lines at 20.25 and 21.25 Hz.

For reference, corresponding 3-hour spectrograms are attached from March 14 that do appear to show the 20.25-Hz and 21.25-Hz teeth appearing right after a power change in the laser at about 17:11 UTC. Similarly, 3-hour spectrograms are attached from March 14 that show the same lines turning on at EX at about 16:07 UTC. Additional EX power sum and magnetometer spectrograms are also attached, to show that those two lines persist during a number of power level changes over an additional 8 hours. In my earlier correlation check, I noted the gross changes in the magnetometer spectra, but did not appreciate that the 0.25-Hz lines were relatively steady.

In summary, those lines strengthened at distinct times on March 14 (roughly 16:07 UTC at EX and 17:11 UTC at EY) that coincide (at least roughly) with power level changes in the optical lever lasers, but the connection is more obscure than I had appreciated and could be a chance coincidence with other maintenance work going on that day. Sigh. Can anyone recall some part of the operation of increasing the optical lever laser powers that day that could have increased the coupling of combs into DARM, e.g., tidying up a rack by connecting previously unconnected cables? A shot in the dark, admittedly, but it's quite a coincidence that these lines started up at separate times at EX and EY right after those lasers were turned off (or blocked from shining on the power sum photodiodes) and back on again.

Spectrograms of optical lever power sum and magnetometer channels:
Fig 1: EY power - April 4 - 15:00-18:00 UTC
Fig 2: EY witness magnetometer - Ditto
Fig 3: EY power - March 14 - 15:00-18:00 UTC
Fig 4: EY magnetometer - Ditto
Fig 5: EX power - March 14 - 14:00-17:00 UTC
Fig 6: EX witness magnetometer - Ditto
Fig 7: EX power - March 14 - 17:00-22:00 UTC
Fig 8: EX witness magnetometer - Ditto
Fig 9: EX power - March 15 - 00:00-04:00 UTC
Fig 10: EX witness magnetometer - Ditto
The laser continued to glitch after the swap; see the attachment from the 4/5/2017 ETMy oplev summary page. My suspicion is that the VEA temperature was just different enough from the Pcal lab (where we stabilize the lasers before install) that the operating point of the laser, once installed, was just outside the stable range set in the lab. So during today's commissioning window I went to End Y and slightly increased the laser power to hopefully return the operating point to within the stable range, using the Current Mon port on the laser to monitor the power increase:
Preliminary results look promising, so I will let it run overnight and evaluate in the morning whether or not further tweaks to the laser power are necessary.