TITLE: 04/05 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
LOG:
STS CENTERING: 2017-04-05 02:03:39.354195
All STS proof masses are within the healthy range (< 2.0 [V]). Great!
Here's a list of how they're doing just in case you care:
STS A DOF X/U = -0.501 [V]
STS A DOF Y/V = 0.243 [V]
STS A DOF Z/W = -0.639 [V]
STS B DOF X/U = 0.524 [V]
STS B DOF Y/V = 0.316 [V]
STS B DOF Z/W = -0.297 [V]
STS C DOF X/U = 0.361 [V]
STS C DOF Y/V = 0.847 [V]
STS C DOF Z/W = -0.281 [V]
STS EX DOF X/U = 0.081 [V]
STS EX DOF Y/V = 0.613 [V]
STS EX DOF Z/W = 0.066 [V]
STS EY DOF X/U = 0.097 [V]
STS EY DOF Y/V = 0.103 [V]
STS EY DOF Z/W = 0.463 [V]
T240 CENTERING: 2017-04-05 01:56:53.355462
There are 6 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.615 [V]
ETMX T240 2 DOF Y/V = -0.708 [V]
ETMX T240 2 DOF Z/W = -0.341 [V]
ETMY T240 3 DOF Z/W = 0.312 [V]
ITMX T240 1 DOF X/U = -0.423 [V]
ITMX T240 3 DOF X/U = -0.385 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.057 [V]
ETMX T240 1 DOF Y/V = 0.069 [V]
ETMX T240 1 DOF Z/W = 0.143 [V]
ETMX T240 3 DOF X/U = 0.079 [V]
ETMX T240 3 DOF Y/V = -0.011 [V]
ETMX T240 3 DOF Z/W = 0.052 [V]
ETMY T240 1 DOF X/U = 0.012 [V]
ETMY T240 1 DOF Y/V = -0.02 [V]
ETMY T240 1 DOF Z/W = -0.189 [V]
ETMY T240 2 DOF X/U = 0.18 [V]
ETMY T240 2 DOF Y/V = -0.205 [V]
ETMY T240 2 DOF Z/W = 0.037 [V]
ETMY T240 3 DOF X/U = -0.206 [V]
ETMY T240 3 DOF Y/V = 0.005 [V]
ITMX T240 1 DOF Y/V = -0.167 [V]
ITMX T240 1 DOF Z/W = -0.103 [V]
ITMX T240 2 DOF X/U = -0.11 [V]
ITMX T240 2 DOF Y/V = -0.12 [V]
ITMX T240 2 DOF Z/W = -0.141 [V]
ITMX T240 3 DOF Y/V = -0.101 [V]
ITMX T240 3 DOF Z/W = -0.042 [V]
ITMY T240 1 DOF X/U = 0.024 [V]
ITMY T240 1 DOF Y/V = -0.003 [V]
ITMY T240 1 DOF Z/W = 0.007 [V]
ITMY T240 2 DOF X/U = 0.148 [V]
ITMY T240 2 DOF Y/V = 0.168 [V]
ITMY T240 2 DOF Z/W = 0.063 [V]
ITMY T240 3 DOF X/U = -0.067 [V]
ITMY T240 3 DOF Y/V = 0.091 [V]
ITMY T240 3 DOF Z/W = -0.012 [V]
BS T240 1 DOF X/U = -0.13 [V]
BS T240 1 DOF Y/V = 0.007 [V]
BS T240 1 DOF Z/W = 0.264 [V]
BS T240 2 DOF X/U = 0.109 [V]
BS T240 2 DOF Y/V = 0.241 [V]
BS T240 2 DOF Z/W = 0.031 [V]
BS T240 3 DOF X/U = 0.081 [V]
BS T240 3 DOF Y/V = -0.055 [V]
BS T240 3 DOF Z/W = -0.062 [V]
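For reference, here is a minimal sketch of the kind of threshold check these centering reports boil down to (using the healthy ranges quoted above); the function and the example dictionary are illustrative, not the actual centering script, and the readings are just a few of the values listed above:

STS_LIMIT = 2.0   # [V] healthy STS proof-mass range is |V| < 2.0
T240_LIMIT = 0.3  # [V] healthy T240 proof-mass range is |V| < 0.3

def check_proof_masses(readings, limit):
    """Split proof-mass readings into out-of-range and in-range lists."""
    out = [(name, v) for name, v in readings.items() if abs(v) > limit]
    ok = [(name, v) for name, v in readings.items() if abs(v) <= limit]
    return out, ok

# a few of the T240 readings reported above
t240_readings = {
    'ETMX T240 2 DOF X/U': -0.615,
    'ETMX T240 2 DOF Y/V': -0.708,
    'ITMY T240 1 DOF X/U': 0.024,
}
bad, good = check_proof_masses(t240_readings, T240_LIMIT)
for name, v in bad:
    print('%s = %s [V] is out of range ( > %s [V] )' % (name, v, T240_LIMIT))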
TITLE: 04/05 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY: 08:30UTC - in Observe
TITLE: 04/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG:
Recovery from maintenance was painless, for once. We got locked pretty quickly after PCal wrapped up. The IFO was locked until just a bit before my shift ended, when an earthquake from Iran arrived. Cheryl is trying to relock now.
I've been looking to see if LHO needs to pursue better L2A de-coupling in the corner station suspensions to improve our wind and earthquake robustness. The good news is I had to look for a while to find a candidate, but I know better what to look for now, so I'll see what else I can find. Looking at a couple of recent earthquakes, I noticed that we seemed to lose lock when the IM4 TRANS QPD pitch hit a threshold of -0.6. After talking to Jenne about it, we looked at other QPDs close by and it was immediately obvious that MC2 trans QPD pitch was being driven by the MC2 M1 length drive. The attached plot tells the story.
Both plots are time series for an earthquake on March 27 of this year, where we lost lock at around GPS time 1174648460. The top plot shows MC2_TRANS_PIT_INMON, MC2_M1_DRIVEALIGN_L_OUTMON and MC2_TRANS_SUM_OUT16. The bottom plot is the ITMY STS in the Y direction. The first 600 seconds are before the earthquake arrives and are quiet. The spike in the STS at about 700 seconds is the arrival of the P waves. This makes the MC2 suspension move more, but the MC2 trans sum isn't affected much. At about 900 seconds the R waves arrive and MC2 starts moving more and more, moving the spot on the QPD and driving down the QPD sum. I've looked at the other PDs used for ASC and only IM4 trans and MC2 trans seem to move this much during an earthquake.
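In case anyone wants to pull up the same traces, here is a rough sketch (assuming gwpy/NDS2 access); note the full H1 channel prefixes below are my guesses and should be checked against the channel list:

from gwpy.timeseries import TimeSeriesDict

GPS_LOCKLOSS = 1174648460                 # approximate lockloss time
start, end = GPS_LOCKLOSS - 900, GPS_LOCKLOSS + 100

channels = [
    'H1:IMC-MC2_TRANS_PIT_INMON',         # MC2 trans QPD pitch (prefix assumed)
    'H1:SUS-MC2_M1_DRIVEALIGN_L_OUTMON',  # MC2 M1 length drive (prefix assumed)
    'H1:IMC-MC2_TRANS_SUM_OUT16',         # MC2 trans QPD sum (prefix assumed)
]

data = TimeSeriesDict.fetch(channels, start, end)
plot = data.plot()                        # overlay the three time series
plot.savefig('mc2_eq_lockloss.png')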
[Vaishali, JimW, Jenne]
We started looking at transfer functions yesterday to do the length-to-angle decoupling, but I mis-read Jim's plot, and focused on the lowest M3 stage, rather than the low frequency top stage.
Anyhow, hopefully we can take some passive TFs over the next few days (especially now, with the >90%ile useism and >98%ile wind), and have a decoupling filter ready for the next commissioning window.
The interferometer had been locked and we had been observing for about 40 minutes when the TCS_ITMY_CO2 guardian knocked us out of observing. It created 3 diffs in TCSCS, and the ITMY_CO2 guardian complained it was not nominal. We couldn't get back to observing until the guardian had finished FIND_LOCK_POINT and returned to LASER_UP. Verbal has also complained several times that TCSY chill-air is low. I'm assuming for now that this is all related to the TCS work earlier.
Dave came in to talk about this. This sounds similar to this post from last month: alog#34861. This was followed by email discussion between Keita & Alastair.
Guardian had reported that the ITMY CO2 laser became unlocked at 18:09:32 PDT last night:
2017-04-05T01:09:32.92258 TCS_ITMY_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point
2017-04-05T01:09:32.98424 TCS_ITMY_CO2 JUMP target: FIND_LOCK_POINT
So the SDF differences being raised, and the fact that they took us out of observation mode, appear to be correct behavior.
Jeff K cleared up the confusion of what should and shouldn't be monitored in this case.
The filter modules in question should not be monitored, since they are changed by guardian during observation. The TSTEP channel records the GPS time at which a step is made, and should never be monitored.
Taking TSTEP as an example, I checked through the SVN repository at the observe.snap files for TCS CO2 ITMX and ITMY and found that in Oct 2016 neither was monitored. On 3 March this year both were monitored. On 22 March ITMX was not monitored but ITMY was. We suspect that by accident too many changes are being applied to the snap files, for example perhaps monitor-all was applied.
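A rough sketch of how one could scan the SVN history for this (not a tool we actually ran): it assumes each snap-file line carries the channel name first and a trailing 0/1 monitor flag, which should be verified against the real SDF snap format, and the repository path and revision numbers are placeholders.

import subprocess

SNAP_URL = 'observe.snap'   # placeholder: real SVN URL/path of the snap file

def monitored_tstep_channels(rev):
    """Return TSTEP channels whose trailing monitor flag is '1' at an SVN revision."""
    text = subprocess.run(['svn', 'cat', '-r', str(rev), SNAP_URL],
                          capture_output=True, text=True, check=True).stdout
    hits = []
    for line in text.splitlines():
        fields = line.split()
        if fields and 'TSTEP' in fields[0] and fields[-1] == '1':
            hits.append(fields[0])
    return hits

for rev in (100, 150, 200):  # placeholder revisions spanning Oct 2016 to March 2017
    print(rev, monitored_tstep_channels(rev))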
TravisS, KarlT, PeterK, RickS
We captured several series of images, with the exposure of each successive image about a factor of three higher than that of the previous image.
Attached are four multi-page .pdf files containing the photos with:
- Resonant Green only (ITM OptLev on)
- Resonant IR and Green (ITM OptLev on)
- Resonant IR at 2W incident (ITM OptLev off)
- Resonant IR at 20W incident (ITM OptLev off)
The camera settings for the images are in the fifth attached .pdf file.
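For reference, the factor-of-three exposure ladder described above is just a geometric series; a tiny sketch follows (the starting exposure, image count, and grab_frame() call are purely illustrative, not the actual camera interface):

base_exposure_us = 100     # starting exposure in microseconds (illustrative)
n_images = 6               # images per series (illustrative)
ratio = 3                  # each image ~3x longer exposure than the last

exposures = [base_exposure_us * ratio**i for i in range(n_images)]
print(exposures)           # [100, 300, 900, 2700, 8100, 24300]

# for exp in exposures:
#     frame = grab_frame(exposure_us=exp)            # hypothetical camera call
#     save_image(frame, 'itmx_%dus.tiff' % exp)      # hypothetical save helper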
Is there any chance that these images are flipped left-right? If not, the bright spots seem to be in a position that is inconsistent with the position of the heat absorption shown in Aidan's alog 35336. According to Aidan's alog, the absorber is on the bottom right when viewing the ITMX HR side; however, the bright spots here seem to be on the bottom left when viewing ITMX's HR side.
G. Moreno, T. Sadecki, R. Savage, K. Toland
With the newly modified telescope mount parts in hand, we finally successfully mounted the ITM camera (some refer to it as the ITM PCal camera, a name I hope will cease since in reality it has nothing to do with PCal whatsoever) on the X arm A-1 spool adapter, port VP2. We took photos locally, as we didn't have time to embark on setting up remote Control Room accessibility, although the power and ethernet cables were available. We will work on setting up remote access at a later time. Gate valves were soft closed and all lasers were shuttered during installation.
TITLE: 04/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Jim
SHIFT SUMMARY: X arm is locked on IR. ALS is shuttered. ITMX optical lever damping is off. Travis said they would turn ITMX optical lever off. Input power is at 19.6 W. PCAL team is taking images of ITMX with this configuration. We are riding through the tail end of a 5.5 magnitude earthquake in Alaska.
LOG:
Apollo is at mid Y
15:00 UTC Took IFO down. Peter to LVEA to transition.
15:00 UTC Jeff to LVEA, measurements and dust monitors.
15:00 UTC Travis to LVEA.
15:00 UTC Krishna to end Y. Set to SC_OFF_NOBRSXY.
15:07 UTC Balers on X arm. Apollo and Chris back from end Y.
15:14 UTC Peter back.
15:27 UTC Filiberto looking at PEM chassis in CER.
15:30 UTC Rotorooter and LN2 through gate.
15:35 UTC Karen and Christina to end stations.
15:53 UTC Jason and Ed to end Y to swap optical lever laser.
15:58 UTC Pest control on site.
15:58 UTC Cintos through gate.
16:00 UTC Chandra back, gate valves closed.
16:01 UTC GRB. Ignored.
16:13 UTC Chris taking pest control to LVEA.
16:21 UTC Nutsinee to LVEA to work on HWS X camera install.
16:24 UTC Chris taking pest control down Y arm.
16:28 UTC Apollo to end X mechanical room. Bubba to LVEA.
16:30 UTC Jeff B. done in LVEA. Heading to both end station VEAs for dust monitor maintenance.
16:37 UTC Dick G. to CER with signal analyzer to chase RF noise.
16:43 UTC Bubba back from LVEA.
16:44 UTC Chris and pest control moving from Y arm to X arm.
16:51 UTC Krishna done.
16:52 UTC Jason and Ed done. Restarted all nuc computers in CR per Carlo's request.
17:36 UTC Jeff B. done.
17:40 UTC Filiberto done in CER.
17:44 UTC Karen leaving end Y.
18:01 UTC Karen to LVEA.
18:22 UTC Jeff B. to LVEA to take pictures.
18:27 UTC Tumbleweed baling in front of OSB.
18:30 UTC Set ISI config to windy.
18:34 UTC Pest control done on site.
18:42 UTC TCS Y chiller flow is low verbal alarm.
19:47 UTC Rick, Travis and crew out for lunch.
19:48 UTC Nutsinee out. Not completing remaining work (no laser hazard).
19:48 UTC Starting attempt to lock X arm on green.
19:54 UTC Peter: PSL unshuttered. TCS back on.
20:12 UTC Dick G. done.
20:40 UTC Rick, Travis, Carl to LVEA to take pictures with new ITMX camera.
20:44 UTC Peter to join PCAL team at ITMX.
21:32 UTC Dave WP 6547.
21:35 UTC Nutsinee to LVEA to take picture.
21:56 UTC Nutsinee back.
22:11 UTC X arm locked on IR.
22:13 UTC PCAL crew to LVEA to take next set of pictures. Closed ALS shutters. Turned off optical lever damping on ITMX. Increased power to 19.5 W.
Found INJ_TRANS guardian set to INJECT_KILL upon start of shift. Just set to INJECT_SUCCESS.
5.5 Adak, Alaska
Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No
Magnitude (according to Terramon, USGS, SEISMON): 5.5, 5.5, NA
Location: 69km SSE of Adak, Alaska; 51.269°N 176.440°W
Starting time of event (i.e. when BLRMS started to increase on DMT on the wall): ~22:16 UTC
Lock status? L1 remained locked. H1 out of lock for maintenance.
EQ reported by Terramon BEFORE it actually arrived? Not sure
Fil, Richard, Nutsinee
Quick conclusion: The camera and the pick-off beam splitter are in place, but not aligned.
Details: First I swapped the camera fiber cable back so h1hwsmsr could stream images from the HWSX camera while I installed the BS.
While watching the streamed images, I positioned the BS in such a way that it doesn't cause a significant change to them (I didn't take the HW plate off).
Then I installed the camera (screwed onto the table). Because the gate valves were closed for the PCal camera installation, I didn't have the green light to do the alignment.
Richard got the network streaming to work. Now we can look at what the GigE sees through CDS >> Digital Video Cameras. There's nothing there.
The alignment will have to wait until the next opportunity now that the green and the IR are back (the LVEA is still laser safe).
The camera is left powered on, connected to the Ethernet cable, with the CCD cap off.
I re-ran the python script and retook the reference centroids. From GPS time 1175379652, the data written to /data/H1/ITMX_HWS comes from the HWSX camera.
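For what it's worth, a minimal sketch of the centroid-retake step (not the production HWS code): threshold a frame, label the Hartmann spots, and take their centres of mass. The frame path and threshold value are illustrative.

import numpy as np
from scipy import ndimage

frame = np.load('hwsx_reference_frame.npy')     # placeholder: one averaged HWSX frame
mask = frame > 0.2 * frame.max()                # crude spot threshold (illustrative)
labels, nspots = ndimage.label(mask)            # label each Hartmann spot
centroids = ndimage.center_of_mass(frame, labels, range(1, nspots + 1))

np.savetxt('hwsx_reference_centroids.txt', centroids)
print('%d reference spots found' % nspots)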
Specification
Camera: BASLER scA1400-17gc
Beam Splitter: Thorlabs BSN17, 2" diameter 90:10 UVFS BS plate, 700-1100 nm, t = 8 mm
WP6547, ECR-E1700111
John Z, Dave:
The DAQ broadcaster was restarted after 7 additional slow channels were added (H1:OMC-BLRMS_32_BAND{1,7}_RMS_OUTMON)
Once again we noticed that after the broadcaster restart, a portion of the seismic fom data went missing (see attached). This was also observed last Tuesday.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35155
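As a sanity check after this kind of restart, a quick sketch (assuming gwpy/NDS2 access) for confirming the newly added slow channels are actually coming through; I'm expanding the BAND{1,7} shorthand to BAND1..BAND7, and the GPS span is illustrative:

from gwpy.timeseries import TimeSeriesDict

# expand the BAND{1,7} shorthand to the 7 individual channel names (assumed)
channels = ['H1:OMC-BLRMS_32_BAND%d_RMS_OUTMON' % i for i in range(1, 8)]

data = TimeSeriesDict.fetch(channels, 'Apr 5 2017 00:00', 'Apr 5 2017 00:10')
for name, ts in data.items():
    print(name, ts.sample_rate, float(ts.value.min()), float(ts.value.max()))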
Jenne D., Patrick T. We suspect that the ITMX green camera got moved during the PCAL camera installation today. Initial alignment for green arm was not working right. We closed the initial alignment loops for green WFS and aligned ITMX by hand to maximize the transmitted power. We then set the camera nominal position such that the output was zero at that place. Attached is a screen shot of the nominal reference position before our change.
Inspection of the inline filters in the PSL Chiller Room showed no contamination, discoloration, or debris. See attached photos. The inline filters in the PSL enclosure were inspected two weeks ago; no problems were noted. Closing FAMIS #8295
Did a zero count and flow rate test on all pump-less dust monitors in the CS and at both end stations. The PSL monitors were checked a couple of weeks ago. All the monitors are running well with no problems or issues to report. Closing FAMIS task #7314.
ISI config is back to WINDY. Gate valves are back open. PSL is unshuttered. TCS lasers are back on. IMC is locked. X and Y arms are locked on green. PCAL team is working on taking pictures.
WP 6559
PEM group reported possible issues with some of the PEM "Test" channels in one of the AA chassis in the CER (PEM/OAF Rack slot U7 & U6). Channels 23-32 were all verified to be working.
F. Clara, R. McCarthy
Soft closed GV5 and GV7 at 15:25 UTC and re-opened them at 19:00 UTC during the viewport PCal camera installation. We let the accumulated gas load in the gate annuli be burped in.
Took the opportunity to train a couple of operators, Jeff Bartlett and Nutsinee, on stroking pneumatic valves.
Thanks to all for transitioning to laser safe for the installation.
J. Oberling, E. Merilh
This morning we swapped the oplev laser for the ETMy oplev, which has been having issues with glitching. The swap went smoothly with zero issues. The old laser SN is 130-1; the new laser SN is 194-1. This laser operates at a higher power than the previous one, so the SUM counts are now ~70k (they used to be ~50k); the individual QPD segments are sitting between 16k and 19k counts. This laser will need a few hours to come to thermal equilibrium, so I will assess this afternoon whether or not the glitching has improved; I will keep the work permit open until this has been done.
For those investigating the possibility of these lasers causing a comb in DARM, the laser was off and the power unplugged for ~11 minutes. The laser was shut off and unplugged at 16:14 UTC (9:14 PDT); we plugged it back in and turned it on at 16:25 UTC (9:25 PDT).
Attached are spectrograms (15:00-18:00 UTC vs 20-22 Hz) of the EY optical lever power sum over a 3-hour period today containing the laser swap, and of a witness magnetometer channel that appeared to indicate on March 14 that a change in laser power strengthened the 0.25-Hz-offset 1-Hz comb at EY. Today's spectrograms, however, don't appear to support that correlation. During the 11-minute period when the optical lever laser is off, the magnetometer spectrogram shows steady lines at 20.25 and 21.25 Hz. For reference, corresponding 3-hour spectrograms are attached from March 14 that do appear to show the 20.25-Hz and 21.25-Hz teeth appearing right after a power change in the laser at about 17:11 UTC. Similarly, 3-hour spectrograms are attached from March 14 that show the same lines turning on at EX at about 16:07 UTC. Additional EX power sum and magnetometer spectrograms are also attached, to show that those two lines persist during a number of power level changes over an additional 8 hours. In my earlier correlation check, I noted the gross changes in magnetometer spectra, but did not appreciate that the 0.25-Hz lines were relatively steady. In summary, those lines strengthened at distinct times on March 14 (roughly 16:07 UTC at EX and 17:11 UTC at EY) that coincide (at least roughly) with power level changes in the optical lever lasers, but the connection is more obscure than I had appreciated and could be a chance coincidence with other maintenance work going on that day. Sigh. Can anyone recall some part of the operation of increasing the optical lever laser powers that day that could have increased coupling of combs into DARM, e.g., tidying up a rack by connecting previously unconnected cables? A shot in the dark, admittedly, but it's quite a coincidence that these lines started up at separate times at EX and EY right after those lasers were turned off (or blocked from shining on the power sum photodiodes) and back on again.
Spectrograms of optical lever power sum and magnetometer channels:
Fig 1: EY power - April 4 - 15:00-18:00 UTC
Fig 2: EY witness magnetometer - Ditto
Fig 3: EY power - March 14 - 15:00-18:00 UTC
Fig 4: EY magnetometer - Ditto
Fig 5: EX power - March 14 - 14:00-17:00 UTC
Fig 6: EX witness magnetometer - Ditto
Fig 7: EX power - March 14 - 17:00-22:00 UTC
Fig 8: EX witness magnetometer - Ditto
Fig 9: EX power - March 15 - 00:00-04:00 UTC
Fig 10: EX witness magnetometer - Ditto
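If anyone wants to regenerate one of these, here is a hedged sketch (assuming gwpy; the EY oplev power-sum channel name is my guess and should be checked) of a 20-22 Hz spectrogram over the same 3-hour window:

from gwpy.timeseries import TimeSeries

chan = 'H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ'    # assumed EY oplev power-sum channel
data = TimeSeries.fetch(chan, 'Apr 4 2017 15:00', 'Apr 4 2017 18:00')

spec = data.spectrogram(stride=60, fftlength=30, overlap=15) ** (1/2.)
plot = spec.crop_frequencies(20, 22).plot(norm='log')
plot.savefig('ey_oplev_sum_20-22Hz.png')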
Laser continued to glitch after the swap; see attachment from 4/5/2017 ETMy oplev summary page. My suspicion is that the VEA temp was just different enough from the Pcal lab (where we stabilize the lasers before install) that the operating point of the laser once installed was just outside the stable range set in the lab. So during today's commissioning window I went to End Y and slightly increased the laser power to hopefully return the operating point to within the stable range. Using the Current Mon port on the laser to monitor the power increase:
Preliminary results look promising, so I will let it run overnight and evaluate in the morning whether or not further tweaks to the laser power are necessary.