Using the pictures recently taken with the Pcal camera system, I have determined the Pcal beam spot positions. The numbers quoted below are the Pcal beam offsets (in mm) from the nominal positions of [0, 111.6] for the upper beam and [0, -111.6] for the lower beam.
LHOX: Upper Beam [1.4+/-0.2, 1.3+/-0.2]
Lower Beam [-0.4+/-0.3, 1.3+/-0.3]
LHOY: Upper Beam [-0.6+/-0.2, 1.1+/-0.1]
Lower Beam [-3.1+/-0.1, 1.3+/-0.3]
The pictures for X-end were taken by Travis on 2017/03/28 and the pictures along with the analyzed figures can be found at:
The pictures for Y-end were taken by me on 2017/03/21 and the pictures along with the analyzed figures can be found at:
The last beam spot analysis for X-end can be found in LHO alog 29873, and for Y-end in LHO alog 30105.
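For context, the spot positions above come from centroiding the camera images and converting pixels to mm; a minimal sketch of that kind of analysis is below. This is not the actual Pcal camera analysis code, and the pixel scale and image handling are placeholders.

import numpy as np
import matplotlib.pyplot as plt

MM_PER_PIXEL = 0.1   # placeholder camera scale, not the real Pcal calibration

def spot_centroid_mm(image_path):
    """Intensity-weighted centroid of a beam-spot image, in mm from the image centre.

    Mapping the result into optic coordinates (to compare against the nominal
    [0, +/-111.6] mm Pcal positions) requires the camera-to-optic calibration,
    which is omitted here.
    """
    img = plt.imread(image_path).astype(float)
    if img.ndim == 3:                  # collapse RGB to grayscale if needed
        img = img.mean(axis=-1)
    img -= np.median(img)              # crude background subtraction
    img[img < 0] = 0
    y, x = np.indices(img.shape)
    total = img.sum()
    cx = (x * img).sum() / total       # centroid in pixels
    cy = (y * img).sum() / total
    x_mm = (cx - img.shape[1] / 2.0) * MM_PER_PIXEL
    y_mm = (cy - img.shape[0] / 2.0) * MM_PER_PIXEL
    return x_mm, y_mm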
Krishna, Jeff
There are several recent cases of sudden range drops coincident with increased quad motion, oplev noise, CAL line upconversion, OMC dither heights and angular motion, etc. See Jeff's log for some details, and also alog 34999. It happened again last night on 2017-04-05, from ~05:30 UTC to 06:40 UTC, before the earthquake knocked us out. DetChar has suspected the ITMY oplev.
As seen from the previous alogs, it is difficult to follow the causal chain, but I suspect the OMC is unlikely to be the problem, since it can't influence the QUAD motion, which clearly goes up each time. All the QUADs show similar PITCH motion, so it is not clear which was the cause. So far, the smoking gun for the problem seems to be the ITMY oplev YAW 0.3-1 Hz blrms: crossing a threshold of ~0.04 (microradians?) seems to trigger a range drop. Remember that oplevs are used only to damp PITCH motion of the quads, but they are independent witnesses in YAW.
I have attached four cases where the ITMY oplev YAW sees an increase in the 0.3-1 Hz blrms correlated with a range drop: April 5, April 2, March 29, and March 22. A quick look at the summary pages shows that this increase is only associated with ITMY oplev YAW. There are more cases, but I think this is sufficient.
It is interesting that even though other oplevs (such as ETMY) glitch more, it looks like only the ITMY oplev shows a broad low-frequency increase in apparent angular motion.
Edit: For clarification, I think the ITMY oplev sees an apparent increase in PITCH and YAW during these times. Since we use it for damping in PITCH, all QUADs start pitching more, affecting DARM. The ITMY oplev may need to be tuned/fixed. The other, less likely, possibility is that ITMY is rubbing occasionally.
It looks like there is a line in the oplev laser at ~0.44 Hz, which gets larger during the periods corresponding to the range drops. In the attached pdf, the dashed traces are from April 5 starting at 05:30 UTC, while the solid traces are from 03:30 UTC.
Edit: The reason we think this problem is in the Oplev and not the QUAD chain is that the 0.3-1 Hz blrms for the ITMY YAW shows a clear difference in behavior before and after the laser swap on March 7th.
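For reference, a band-limited RMS like the 0.3-1 Hz figure of merit discussed above can be reproduced roughly as follows. This is an illustrative gwpy sketch, not the summary-page/DMT implementation, and the channel name is my best guess at the ITMY oplev YAW readback.

from gwpy.timeseries import TimeSeries

# Assumed channel name for the ITMY oplev YAW readback; adjust if needed.
CHANNEL = 'H1:SUS-ITMY_L3_OPLEV_YAW_OUT_DQ'
START, END = '2017-04-05 05:00', '2017-04-05 07:00'

# Fetch the oplev yaw signal and band-limit it to 0.3-1 Hz.
yaw = TimeSeries.get(CHANNEL, START, END)
band = yaw.bandpass(0.3, 1.0)

# Band-limited RMS with a 60 s stride, as a stand-in for the summary-page trend.
blrms = band.rms(60)

# Compare against the ~0.04 (urad?) threshold noted above.
print(blrms.max())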
BSC CPS Spectra Notes: The only item worth noting is that the ITMX V1 spectrum is higher overall than the other ITMX spectra.
HAM CPS Spectra Notes: Nothing To Note
I've attempted to orient the coordinate system of the ITMX Hartmann sensor relative to the ITM surface. The following analysis suggests that the coordinate system of the Hartmann sensor is flipped horizontally and vertically relative to the ITM, like so:
Finally, the point absorber appears in the upper right of the HWS coordinate system, which corresponds to the lower left of the ITM (as viewed from the BS).
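If the orientation really is flipped both horizontally and vertically, the mapping between the two frames is just a sign change on both axes; a trivial sketch (coordinates relative to the optic centre, purely illustrative):

def hws_to_itm(x_hws, y_hws):
    """Map HWS-frame coordinates to ITM coordinates as viewed from the BS,
    assuming the HWS frame is flipped both horizontally and vertically."""
    return -x_hws, -y_hws

# A feature in the upper right of the HWS frame (+x, +y) lands in the
# lower left of the ITM as viewed from the BS (-x, -y).
print(hws_to_itm(1.0, 1.0))   # -> (-1.0, -1.0)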
This morning I noticed that the GWIstat tool is frozen with regard to the detector states (the time stamp at the top is updating, though). It currently lists H1 & L1 in NOT OK states.
I am looking into this. It looks like it's a problem with the gstlal_gw_stat process which extracts the status information from frames -- some side effect of Tuesday maintenance since it has been that way since yesterday at 10:10 PDT. I will ask Chad Hanna for help.
Chad modified the way the gstlal_gw_stat process checks checksums, and now GWIstat is working again.
This morning there is a different problem -- condor is not running properly on ldas-grid.ligo.caltech.edu . I've emailed the Caltech LDAS admins.
TITLE: 04/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 71Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 14mph Gusts, 10mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.42 μm/s
Winds are inching up to 10mph over the last couple of hours.
QUICK SUMMARY:
Nicely-running H1 handed off (going on 7+hrs of lock). L1 was down, but quickly rejoined us. Nuc5 was rebooted. Going through Ops Checksheet.
Krishna is here and investigating a range drop from last night before the EQ.
TITLE: 04/05 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
LOG:
STS CENTERING: 2017-04-05 02:03:39.354195
All STS proof masses are within the healthy range (< 2.0 [V]). Great!
Here's a list of how they're doing just in case you care:
STS A DOF X/U = -0.501 [V]
STS A DOF Y/V = 0.243 [V]
STS A DOF Z/W = -0.639 [V]
STS B DOF X/U = 0.524 [V]
STS B DOF Y/V = 0.316 [V]
STS B DOF Z/W = -0.297 [V]
STS C DOF X/U = 0.361 [V]
STS C DOF Y/V = 0.847 [V]
STS C DOF Z/W = -0.281 [V]
STS EX DOF X/U = 0.081 [V]
STS EX DOF Y/V = 0.613 [V]
STS EX DOF Z/W = 0.066 [V]
STS EY DOF X/U = 0.097 [V]
STS EY DOF Y/V = 0.103 [V]
STS EY DOF Z/W = 0.463 [V]
T240 CENTERING: 2017-04-05 01:56:53.355462
There are 6 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.615 [V]
ETMX T240 2 DOF Y/V = -0.708 [V]
ETMX T240 2 DOF Z/W = -0.341 [V]
ETMY T240 3 DOF Z/W = 0.312 [V]
ITMX T240 1 DOF X/U = -0.423 [V]
ITMX T240 3 DOF X/U = -0.385 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.057 [V]
ETMX T240 1 DOF Y/V = 0.069 [V]
ETMX T240 1 DOF Z/W = 0.143 [V]
ETMX T240 3 DOF X/U = 0.079 [V]
ETMX T240 3 DOF Y/V = -0.011 [V]
ETMX T240 3 DOF Z/W = 0.052 [V]
ETMY T240 1 DOF X/U = 0.012 [V]
ETMY T240 1 DOF Y/V = -0.02 [V]
ETMY T240 1 DOF Z/W = -0.189 [V]
ETMY T240 2 DOF X/U = 0.18 [V]
ETMY T240 2 DOF Y/V = -0.205 [V]
ETMY T240 2 DOF Z/W = 0.037 [V]
ETMY T240 3 DOF X/U = -0.206 [V]
ETMY T240 3 DOF Y/V = 0.005 [V]
ITMX T240 1 DOF Y/V = -0.167 [V]
ITMX T240 1 DOF Z/W = -0.103 [V]
ITMX T240 2 DOF X/U = -0.11 [V]
ITMX T240 2 DOF Y/V = -0.12 [V]
ITMX T240 2 DOF Z/W = -0.141 [V]
ITMX T240 3 DOF Y/V = -0.101 [V]
ITMX T240 3 DOF Z/W = -0.042 [V]
ITMY T240 1 DOF X/U = 0.024 [V]
ITMY T240 1 DOF Y/V = -0.003 [V]
ITMY T240 1 DOF Z/W = 0.007 [V]
ITMY T240 2 DOF X/U = 0.148 [V]
ITMY T240 2 DOF Y/V = 0.168 [V]
ITMY T240 2 DOF Z/W = 0.063 [V]
ITMY T240 3 DOF X/U = -0.067 [V]
ITMY T240 3 DOF Y/V = 0.091 [V]
ITMY T240 3 DOF Z/W = -0.012 [V]
BS T240 1 DOF X/U = -0.13 [V]
BS T240 1 DOF Y/V = 0.007 [V]
BS T240 1 DOF Z/W = 0.264 [V]
BS T240 2 DOF X/U = 0.109 [V]
BS T240 2 DOF Y/V = 0.241 [V]
BS T240 2 DOF Z/W = 0.031 [V]
BS T240 3 DOF X/U = 0.081 [V]
BS T240 3 DOF Y/V = -0.055 [V]
BS T240 3 DOF Z/W = -0.062 [V]
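The centering report above boils down to reading each proof-mass monitor and comparing it against a per-sensor-type threshold (2.0 V for the STS-2s, 0.3 V for the T240s). A minimal sketch of that check is below; the EPICS channel names are illustrative guesses, not necessarily the PVs the SEI script actually reads.

import epics  # pyepics

STS_LIMIT = 2.0    # [V] healthy range for STS-2 proof masses
T240_LIMIT = 0.3   # [V] healthy range for T240 proof masses

# Illustrative subset; the real script walks every chamber and all three DOFs.
# The channel names here are guesses and may not match the actual PVs.
T240_CHANNELS = {
    'ETMX T240 2 DOF X/U': 'H1:ISI-ETMX_ST1_T240_2_MASS_X_MON',
    'ETMX T240 2 DOF Y/V': 'H1:ISI-ETMX_ST1_T240_2_MASS_Y_MON',
}

for label, pv in T240_CHANNELS.items():
    value = epics.caget(pv)
    status = 'OUT OF RANGE' if abs(value) > T240_LIMIT else 'ok'
    print('%s = %.3f [V]  %s' % (label, value, status))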
TITLE: 04/05 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 67Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY: 08:30 UTC - in Observe
TITLE: 04/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG:
Recovery from maintenance was painless, for once. We got locked pretty quickly after PCal wrapped up. The IFO was locked until just a bit before my shift ended, when an earthquake from Iran arrived. Cheryl is trying to relock now.
I've been looking to see if LHO needs to pursue better L2A de-coupling in the corner station suspensions to improve our wind and earthquake robustness. The good news is that I had to look for a while to find a candidate, but I now know better what to look for, so I'll see what else I can find. Looking at a couple of recent earthquakes, I noticed that we seemed to lose lock when the IM4 TRANS QPD pitch hit a threshold of -0.6. After talking to Jenne about it, we looked at other QPDs close by, and it was immediately obvious that MC2 trans QPD pitch was being driven by the MC2 M1 length drive. The attached plot shows the story.
Both plots are time series for an earthquake on March 27 of this year, where we lost lock at around GPS 1174648460. The top plot shows MC2_TRANS_PIT_INMON, MC2_M1_DRIVEALIGN_L_OUTMON and MC2_TRANS_SUM_OUT16. The bottom plot is the ITMY STS in the Y direction. The first 600 seconds are before the earthquake arrives and are quiet. The spike in the STS at about 700 seconds is the arrival of the P waves. This causes the MC2 sus to move more, but the MC2 trans sum isn't affected much. At about 900 seconds the R waves arrive and MC2 starts moving more and more, moving the spot on the QPD more and driving down the QPD sum. I've looked at the other PDs used for ASC, and only IM4 trans and MC2 trans seem to move this much during an earthquake.
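A quick way to reproduce this kind of comparison is to pull the MC2 channels and the ground STS over a window around the lockloss and stack them on a common time axis; a gwpy sketch is below. The full H1 channel prefixes are my guesses and may need correcting.

import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeriesDict

# Channel prefixes are guesses at the full H1 names; adjust as needed.
channels = [
    'H1:IMC-MC2_TRANS_PIT_INMON',
    'H1:IMC-MC2_TRANS_SUM_OUT16',
    'H1:SUS-MC2_M1_DRIVEALIGN_L_OUTMON',
    'H1:ISI-GND_STS_ITMY_Y_DQ',
]

# Roughly 20 minutes leading up to the March 27 lockloss at GPS 1174648460.
start, end = 1174648460 - 1100, 1174648460 + 100
data = TimeSeriesDict.get(channels, start, end)

fig, axes = plt.subplots(len(channels), 1, sharex=True, figsize=(8, 10))
for ax, name in zip(axes, channels):
    ts = data[name]
    ax.plot(ts.times.value - start, ts.value)
    ax.set_ylabel(name.split(':')[-1], fontsize=7)
axes[-1].set_xlabel('Time [s] from GPS %d' % start)
fig.savefig('mc2_eq_timeseries.png')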
[Vaishali, JimW, Jenne]
We started looking at transfer functions yesterday to do the length-to-angle decoupling, but I misread Jim's plot and focused on the lowest (M3) stage rather than the low-frequency top stage.
Anyhow, hopefully we can take some passive TFs over the next few days (especially now, with the >90%ile useism and >98%ile wind), and have a decoupling filter ready for the next commissioning window.
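Once we have the TFs, the length-to-pitch coupling we want to cancel can be estimated from drive and witness data in the usual cross-spectral-density way; a rough scipy sketch is below. The channel names and time span are placeholders, a driven measurement would be preferred over a passive one, and the two channels are assumed to share a sample rate.

from gwpy.timeseries import TimeSeries
from scipy.signal import csd, welch

# Placeholder channels and span; substitute the actual drive/witness pair.
start, end = 1174640000, 1174644000
drive = TimeSeries.get('H1:SUS-MC2_M1_DRIVEALIGN_L_OUTMON', start, end)
pitch = TimeSeries.get('H1:IMC-MC2_TRANS_PIT_INMON', start, end)

fs = drive.sample_rate.value
nperseg = int(64 * fs)   # 64 s segments for decent low-frequency resolution

# Cross-spectral estimate of the length-drive-to-pitch transfer function.
f, Pxy = csd(drive.value, pitch.value, fs=fs, nperseg=nperseg)
f, Pxx = welch(drive.value, fs=fs, nperseg=nperseg)
length_to_pitch = Pxy / Pxx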
The interferometer had been locked and we had been observing for about 40 minutes when the TCS_ITMY_CO2 guardian knocked us out of observing. It created 3 diffs in TCSCS, and the ITMY_CO2 guardian complained it was not nominal. We couldn't get back to observing until the guardian had finished FIND_LOCK_POINT and returned to LASER_UP. Verbal has also complained several times that TCSY chill-air is low. I'm assuming for now that this is all related to the TCS work earlier.
Dave came in to talk about this. It sounds similar to this post from last month: alog 34861, which was followed by an email discussion between Keita & Alastair.
Guardian had reported that the ITMY CO2 laser became unlocked at 18:09:32 PDT last night:
2017-04-05T01:09:32.92258 TCS_ITMY_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point
2017-04-05T01:09:32.98424 TCS_ITMY_CO2 JUMP target: FIND_LOCK_POINT
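For reference, the guardian logic behind these log lines is roughly of the following form: the LASER_UP state's run method polls a lock readback and jumps to FIND_LOCK_POINT when it drops out. This is an illustrative sketch only, not the actual TCS_ITMY_CO2 node code; the channel name and threshold are made up.

from guardian import GuardState

# Note: under guardian, `ezca` and `log` are provided in the module namespace.

class LASER_UP(GuardState):
    """Nominal state: CO2 laser locked at its operating point."""
    def run(self):
        # Hypothetical lock readback channel and threshold.
        if ezca['TCS-ITMY_CO2_LOCK_OK'] < 0.5:
            log('laser unlocked. jumping to find new locking point')
            return 'FIND_LOCK_POINT'   # jump, as seen in the log excerpt above
        return True                    # remain in LASER_UP

class FIND_LOCK_POINT(GuardState):
    def main(self):
        # Re-acquire the CO2 laser lock point here, then let the state
        # graph carry the node back to LASER_UP.
        return True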
So the SDF differences that were raised, and the fact that they terminated observation mode, appear to be correct behavior.
Jeff K cleared up the confusion of what should and shouldn't be monitored in this case.
The filter modules in question should not be monitored, since they are being changed by guardian during observation. The TSTEP channel records the GPS time at which a step is made, and should never be monitored.
Taking TSTEP as an example, I checked the observe.snap files for TCS CO2 ITMX and ITMY in the SVN repository and found that in Oct 2016 neither was monitored. On 3rd March this year both were monitored. On 22 March ITMX was not monitored but ITMY was. We suspect that too many changes are being applied to the snap files by accident; for example, perhaps monitor-all was applied.
TravisS, KarlT, PeterK, RickS
We captured several series of images, with the exposure of each successive image about a factor of three higher than that of the previous image.
Attached are four multi-page .pdf files containing the photos with:
- Resonant Green only (ITM OptLev on)
- Resonant IR and Green (ITM OptLev on)
- Resonant IR at 2W incident (ITM OptLev off)
- Resonant IR at 20W incident (ITM OptLev off)
The camera settings for the images are in the fifth attached .pdf file.
Is there any chance that these images are flipped left-right? If not, the bright spots seem to be in a position that is inconsistent with the position of the heat absorption, as shown in Aidan's alog 35336. According to Aidan's alog, the absorber is on the bottom right when viewing the ITMX HR side; however, the bright spots here seem to be on the bottom left when viewing ITMX's HR side.
J. Oberling, E. Merilh
This morning we swapped the oplev laser for the ETMY oplev, which has been having issues with glitching. The swap went smoothly with zero issues. The old laser SN is 130-1; the new laser SN is 194-1. This laser operates at a higher power than the previous laser, so the SUM counts are now ~70k (used to be ~50k); the individual QPD segments are sitting between 16k and 19k counts. This laser will need a few hours to come to thermal equilibrium, so I will assess this afternoon whether or not the glitching has improved; I will keep the work permit open until this has been done.
For those investigating the possibility of these lasers causing a comb in DARM, the laser was off and the power unplugged for ~11 minutes. The laser was shut off and unplugged at 16:14 UTC (9:14 PDT); we plugged it back in and turned it on at 16:25 UTC (9:25 PDT).
Attached are spectrograms (15:00-18:00 UTC vs 20-22 Hz) of the EY optical lever power sum over a 3-hour period today containing the laser swap, and of a witness magnetometer channel that appeared to indicate on March 14 that a change in laser power strengthened the 0.25-Hz-offset 1-Hz comb at EY. Today's spectrograms, however, don't appear to support that correlation. During the 11-minute period when the optical lever laser is off, the magnetometer spectrogram shows steady lines at 20.25 and 21.25 Hz. For reference, corresponding 3-hour spectrograms are attached from March 14 that do appear to show the 20.25-Hz and 21.25-Hz teeth appearing right after a power change in the laser at about 17:11 UTC. Similarly, 3-hour spectrograms are attached from March 14 that show the same lines turning on at EX at about 16:07 UTC. Additional EX power sum and magnetometer spectrograms are also attached, to show that those two lines persist during a number of power level changes over an additional 8 hours. In my earlier correlation check, I noted the gross changes in magnetometer spectra, but did not appreciate that the 0.25-Hz lines were relatively steady. In summary, those lines strengthened at distinct times on March 14 (roughly 16:07 UTC at EX and 17:11 UTC at EY) that coincide (at least roughly) with power level changes in the optical lever lasers, but the connection is more obscure than I had appreciated and could be a chance coincidence with other maintenance work going on that day. Sigh. Can anyone recall some part of the operation of increasing the optical lever laser powers that day that could have increased coupling of combs into DARM, e.g., tidying up a rack by connecting previously unconnected cables? A shot in the dark, admittedly, but it's quite a coincidence that these lines started up at separate times at EX and EY right after those lasers were turned off (or blocked from shining on the power sum photodiodes) and back on again.
Spectrograms of optical lever power sum and magnetometer channels:
Fig 1: EY power - April 4 - 15:00-18:00 UTC
Fig 2: EY witness magnetometer - Ditto
Fig 3: EY power - March 14 - 15:00-18:00 UTC
Fig 4: EY magnetometer - Ditto
Fig 5: EX power - March 14 - 14:00-17:00 UTC
Fig 6: EX witness magnetometer - Ditto
Fig 7: EX power - March 14 - 17:00-22:00 UTC
Fig 8: EX witness magnetometer - Ditto
Fig 9: EX power - March 15 - 00:00-04:00 UTC
Fig 10: EX witness magnetometer - Ditto
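For anyone who wants to regenerate plots like these, the spectrograms are straightforward with gwpy; the sketch below is illustrative only, and the power-sum and magnetometer channel names are my guesses.

from gwpy.timeseries import TimeSeries

# Assumed channel names; substitute the actual EY oplev power sum and magnetometer.
POWER = 'H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ'
MAG = 'H1:PEM-EY_MAG_VEA_FLOOR_Y_DQ'

start, end = '2017-04-04 15:00', '2017-04-04 18:00'

for name in (POWER, MAG):
    data = TimeSeries.get(name, start, end)
    # 10 s FFTs give 0.1 Hz resolution, enough to separate the 0.25-Hz-offset teeth.
    spec = data.spectrogram(stride=60, fftlength=10, overlap=5) ** (1 / 2.)
    plot = spec.plot(norm='log')
    plot.gca().set_ylim(20, 22)
    plot.savefig(name.replace(':', '_') + '_spectrogram.png')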
The laser continued to glitch after the swap; see the attachment from the 4/5/2017 ETMY oplev summary page. My suspicion is that the VEA temperature was just different enough from the Pcal lab (where we stabilize the lasers before install) that the operating point of the laser, once installed, was just outside the stable range set in the lab. So during today's commissioning window I went to End Y and slightly increased the laser power, to hopefully return the operating point to within the stable range. I used the Current Mon port on the laser to monitor the power increase:
Preliminary results look promising, so I will let it run overnight and evaluate in the morning whether or not further tweaks to the laser power are necessary.