We remotely powered down the CDS WAPs in both mid and end stations by disabling the POE switch ports around 10:45 PST. When the opportunity arises, we'll go to these locations, disconnect the Ethernet cables, and restart the switch ports.
I have looked at all the A2L data that we have since the last time the alignment was significantly changed, which was Monday afternoon after the PSL PZT work (alog 31951). This is the first attached plot.
The first data point is a bit different than the rest, although I'm not totally sure why. Other than that, we're mostly holding our spot positions quite constant. The 3rd-to-last point, taken in the middle of the overnight lock stretch (alog 32004) shows a bit of a spot difference on ETMX, particularly in yaw, but other than that we're pretty solid.
For the next ~week, I'd like operators to run the test mass a2l script (a2l_min_lho.py) about once per day, so that we can track the spot positions a bit. After that, we'll move to our observing-run standard of running a2l once a week as part of Tuesday maintenance.
The second attached plot is just the last 2 points from the current lock. The first point was taken immediately upon lock; the second was taken about 30 min into the lock. The maximum spot movement in the figure appears to be about 0.2 mm, but I think that is within the error of the A2L measurement. I can't find it right now, but once upon a time I ran A2L 5 or 7 times in a row to see how consistent the answer is, and I think I remember the stdev was about 0.3 mm.
The point of the second plot is that at 30W, it doesn't seem to make a big difference if we run a2l immediately or a little later, so we can run it for our once-a-days as soon as we lock, or when we're otherwise out of Observe, and don't have to hold off on going to Observe just for A2L.
In case you don't have it memorized, here's the location of the A2L script:
A2L: How to know if it's good or bad at the moment.
Here is a dtt template to passively measure a2l quality: /opt/rtcds/userapps/release/isc/common/scripts/decoup/DARM_a2l_passive.xml
It measures the coherence between DARM and ASC drive to all test masses using 404 seconds worth of data.
All references started 25 seconds or so after the last a2l was finished and 9 or 10 seconds before the intent bit was set (GPS 116467290).
"Now" is actually about 15:00 UTC, 7 AM PT, and you can see that the coherence at around 20 Hz (where the ASC feedback to the TMs starts to be dominated by the sensing noise) is significantly worse, and DARM itself was also worse, so you can say that the a2l was worse AT THIS PARTICULAR POINT IN TIME.
Thing is, this might slowly drift around and get better or worse. You can run this template for many points in time (for example each hour), and if the coherence seems to be consistently worse than right after a2l, you know that we need a2l. (A better approach is to write a script to plot the coherence as a time series, which is a good project for fellows.)
If it is repeatedly observed over multiple lock stretches (without running a2l) that the coherence starts small at the beginning of lock and becomes larger an hour or two into the lock, that's the sign that we need to run a2l an hour or two after the lock.
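The time-series script suggested above could start as something like the sketch below: estimate DARM/ASC coherence over successive chunks and track the band-averaged value around 20 Hz. This is only a skeleton with synthetic data — the channel names, sample rate, and data-fetching step (e.g. via gwpy) are assumptions, not the DTT template's actual configuration.

```python
# Sketch of an a2l-quality monitor: band-averaged DARM/ASC coherence per chunk.
# Synthetic data stands in for H1 channels; fetch real data with gwpy or NDS.
import numpy as np
from scipy.signal import coherence

fs = 256          # Hz, sample rate of the synthetic streams (assumed)
chunk = 404       # seconds per coherence estimate, matching the DTT template
rng = np.random.default_rng(0)

def band_coherence(darm, asc, f_lo=15.0, f_hi=25.0):
    """Mean coherence between two streams in the [f_lo, f_hi] Hz band."""
    f, cxy = coherence(darm, asc, fs=fs, nperseg=fs * 16)
    band = (f >= f_lo) & (f <= f_hi)
    return cxy[band].mean()

# Fake "ASC drive" and a DARM stream that partially contains it,
# mimicking imperfect angle-to-length decoupling.
asc = rng.standard_normal(fs * chunk)
darm = 0.5 * asc + rng.standard_normal(fs * chunk)

print(band_coherence(darm, asc))  # large values suggest a2l is needed
```

Running this once per hour of lock and plotting the returned values against time would give exactly the trend described above: coherence rising an hour or two into the lock means a2l is due.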
J. Kissel, J. Driggers
Jenne identified that the IM overview screens had an incorrect order of channels in their DAC output: the IOP model outputs had mistakenly come before the USER model outputs in left-to-right fashion. While trying to commit the fix to the macro files, I found that Stuart had already found, fixed, and committed changes back in March of this year -- see LHO aLOG 25320. So, I've reverted my changes and svn up'd to the repo version, and all is well. Thanks Stuart!
So we ended up going in & out of OBSERVING a few times for the items above. Now we are back in OBSERVING & should have most loose ends tied up.
18:02 (10:02am PST): Chatted with William Parker at LLO. He mentioned they are battling seismic noise. They have had high useism due to a storm in the Gulf, and they also have winds of 10-15 mph, which make locking problematic.
Morning Meeting Minutes
I must admit to having had only one ear on our 8:30am meeting (busy busy with prepping for OBSERVING), but it seemed short. Basically we announced that we are in a new operational state for LHO with O2.
Kiwamu, Nutsinee
We had another camera glitch this morning, and restarting the computer didn't solve the problem. Kiwamu tried turning off all the camera and frame grabber switches while running the image-streaming code, but it seems to be streaming something glitchy even without any real inputs (this is also true with the SLED off). We also tried streaming images from one camera at a time: X appeared to run fine but started to glitch as soon as we streamed images from the Y camera. This is also true with the HWS code. No matter which order we talk to the cameras in, HWSX will always be the one that glitches if we are talking to the Y camera at the same time. We used to be able to stream images from both cameras at the same time, and clearly we were able to run both the HWSX and HWSY scripts simultaneously without any issues.
We will keep HWSX code running for now. HWSY code is not running.
J. Kissel
Continuing the schedule for this roaming line with a move from 1001.3 Hz to 1501.3 Hz. We (as in operators and I, instead of just I) will make an effort to pay closer attention to this, so we can be done with the schedule sooner and turn off this line for the duration of the run.

Current schedule:

| Frequency (Hz) | Planned Amplitude (ct) | Planned Duration (hh:mm) | Actual Amplitude (ct) | Start Time (UTC) | Stop Time (UTC) | Achieved Duration |
| 1001.3 | 35k | 02:00 | 39322.0 | Nov 28 2016 17:20:44 | Nov 30 2016 17:16:00 | days @ 30 W |
| 1501.3 | 35k | 02:00 | 39322.0 | Nov 30 2016 17:27:00 | | |
| 2001.3 | 35k | 02:00 | 39322.0 | | | |
| 2501.3 | 35k | 05:00 | 39322.0 | | | |
| 3001.3 | 35k | 05:00 | 39322.0 | | | |
| 3501.3 | 35k | 05:00 | 39322.0 | | | |
| 4001.3 | 40k | 10:00 | 39322.0 | | | |
| 4301.3 | 40k | 10:00 | 39322.0 | | | |
| 4501.3 | 40k | 10:00 | 39322.0 | | | |
| 4801.3 | 40k | 10:00 | 39222.0 | | | |
| 5001.3 | 40k | 10:00 | 39222.0 | | | |

Previously completed moves:

| Frequency (Hz) | Planned Amplitude (ct) | Planned Duration (hh:mm) | Actual Amplitude (ct) | Start Time (UTC) | Stop Time (UTC) | Achieved Duration |
| 1001.3 | 35k | 02:00 | 39322.0 | Nov 11 2016 21:37:50 | Nov 12 2016 03:28:21 | ~several hours @ 25 W |
| 1501.3 | 35k | 02:00 | 39322.0 | Oct 24 2016 15:26:57 | Oct 31 2016 15:44:29 | ~week @ 25 W |
| 2001.3 | 35k | 02:00 | 39322.0 | Oct 17 2016 21:22:03 | Oct 24 2016 15:26:57 | several days (at both 50 W and 25 W) |
| 2501.3 | 35k | 05:00 | 39322.0 | Oct 12 2016 03:20:41 | Oct 17 2016 21:22:03 | days @ 50 W |
| 3001.3 | 35k | 05:00 | 39322.0 | Oct 06 2016 18:39:26 | Oct 12 2016 03:20:41 | days @ 50 W |
| 3501.3 | 35k | 05:00 | 39322.0 | Jul 06 2016 18:56:13 | Oct 06 2016 18:39:26 | months @ 50 W |
| 4001.3 | 40k | 10:00 | 39322.0 | Nov 12 2016 03:28:21 | Nov 16 2016 22:17:29 | days @ 30 W (see LHO aLOG 31546 for caveats) |
| 4301.3 | 40k | 10:00 | 39322.0 | Nov 16 2016 22:17:29 | Nov 18 2016 17:08:49 | days @ 30 W |
| 4501.3 | 40k | 10:00 | 39322.0 | Nov 18 2016 17:08:49 | Nov 20 2016 16:54:32 | days @ 30 W (see LHO aLOG 31610 for caveats) |
| 4801.3 | 40k | 10:00 | 39222.0 | Nov 20 2016 16:54:32 | Nov 22 2016 23:56:06 | days @ 30 W |
| 5001.3 | 40k | 10:00 | 39222.0 | Nov 22 2016 23:56:06 | Nov 28 2016 17:20:44 | days @ 30 W (line was OFF and ON for Hardware INJ) |
DCS (LDAS) successfully switched from using the ER10 locations to archive data to the O2 locations starting from:
1164554240 == Nov 30 2016 07:17:03 PST == Nov 30 2016 09:17:03 CST == Nov 30 2016 15:17:03 UTC.
This change should be transparent to users requesting data.
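The GPS/UTC correspondence quoted above is easy to sanity-check. A minimal sketch, assuming the 17 s GPS-UTC leap-second offset that was in effect in Nov 2016 (this offset changes whenever a leap second is added, so it is hard-coded here only for this date):

```python
# Convert a GPS timestamp to UTC. GPS time counts seconds since
# 1980-01-06 00:00:00 UTC and does not insert leap seconds, so GPS
# was ahead of UTC by 17 s as of Nov 2016 (assumed fixed offset here).
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC = 17  # leap-second offset valid for Nov 2016 only

def gps_to_utc(gps_seconds: int) -> datetime:
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC)

print(gps_to_utc(1164554240))  # 2016-11-30 15:17:03+00:00
```

For production use, a library that tracks the leap-second table (e.g. LAL or astropy) is the safer choice.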
Jeff K., Evan G.
At 17:16:30 Nov 30 2016 UTC, we turned off the 1001.3 Hz Pcal X line as a test for the DetChar group. At 17:21:30 Nov 30 2016 UTC, we shuttered the Pcal X laser. At 17:26:30 Nov 30 2016 UTC, we un-shuttered the Pcal X laser. The line frequency will be moved to 1501.3 Hz shortly.
Trends cover the last 20 days, since trends were not taken last week.
WeeklyXtal - Nothing unusual. Amp powers following humidity. Normal power degrading in Osc diode power. Blown readback on Osc DB3 current.
WeeklyLaser - normal. Incursions Monday.
WeeklyEnv - normal
WeeklyChiller - normal except for a trip on Tuesday morning.
Around 8:30 AM local. Cameras glitched again.
8:40 AM restarted h1hwsmsr computer again. This time with cameras and frame grabbers turned off.
~9:00 AM We restarted h1hwsmsr again.
State of H1: relocking, at ENGAGE_SOFT_LOOPS
Activities:
Could someone on site check the coherence of DARM around 1080 Hz with the usual jitter witnesses? We're not able to do it offsite because the best witness channels are stored with a Nyquist of 1024 Hz. What we need is the coherence from 1000 to 1200 Hz with things like the IMC WFS (especially the sum, I think). The DBB would be nice if available, but I think it's usually shuttered. There's indirect evidence from hVeto that this is jitter, so if there is a good witness channel we'll want to increase the sampling rate in case we get an SN or BNS that has power in this band.
@Andy I'll have a look at IOP channels.
Evan G., Keita K. Upon request, I'm attaching several coherence plots for the 1000-1200 Hz band between H1:CAL-DELTAL_EXTERNAL_DQ and many IMC WFS IOP channels (IOP-ASC0_MADC0_TP_CH[0-12]), ISS intensity noise witness channels (PSL-ISS_PD[A,B]_REL_OUT_DQ), PSL QPD channels (PSL-ISS_QPD_D[X,Y]_OUT_DQ), ILS and PMC HV mon channels, and ISS second loop QPD channels. Unfortunately, there is low coherence between all of these channels and DELTAL_EXTERNAL, so we don't have any good leads here.
I've scheduled a CBC injection to begin at 9:20 UTC (1:20 PT).
Here is the change to the schedule file:
1164532817 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt
I'll be scheduling more shortly.
I've scheduled another two injections. The next one is a NSBH inspiral scheduled at 10:30 UTC (2:30 PT) and the following one is another BBH scheduled for 11:40 UTC (3:40 PT).
Here is the update to the schedule file:
1164537017 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/nsbh_hwinj_snr24_1163501314_{ifo}_filtered.txt
1164541217 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt
The xml files can be found in the injection svn in the Inspiral directory.
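For reference, the schedule lines quoted above can be pulled apart mechanically. The field meanings below are inferred from the two examples (GPS start time, participating IFOs, injection state, an observing-only flag, a scale factor, and a waveform path template); treat this layout as an assumption rather than the authoritative schedule-file format:

```python
# Hypothetical parser for hardware-injection schedule lines; field names
# are guesses based on the examples in this entry, not a documented spec.
from dataclasses import dataclass

@dataclass
class ScheduledInjection:
    gps_start: int
    ifos: str
    state: str
    observing_only: int   # assumed meaning of the "1" field
    scale: float          # assumed meaning of the "1.0" field
    waveform: str         # path template with an {ifo} placeholder

def parse_schedule_line(line: str) -> ScheduledInjection:
    gps, ifos, state, obs, scale, wave = line.split()
    return ScheduledInjection(int(gps), ifos, state, int(obs), float(scale), wave)

entry = parse_schedule_line(
    "1164537017 H1L1 INJECT_CBC_ACTIVE 1 1.0 "
    "Inspiral/{ifo}/nsbh_hwinj_snr24_1163501314_{ifo}_filtered.txt"
)
print(entry.gps_start, entry.state)
```

Note that the `{ifo}` placeholder in the waveform path is substituted per detector (H1 or L1) when the injection is performed.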
All three of these scheduled injections were successfully injected at LHO. The first two were coincident with LLO, the third wasn't injected at LLO as the L1 IFO was down at the time. The relevant section of the INJ_TRANS guardian log is attached.
Betsy, Keita, Daniel
As part of the LVEA sweep, prior to the start of O2, this morning, we spent over an hour doing a cleanup of misc cables and test equipment in the LVEA and electronics room. There were quite a few cables dangling from various racks, here's the full list of what we cleaned up and where:
| Location | Rack | Slot | Description |
| Electronics Room | ISC C2 | | Found unused servo controller/cables/mixer on top of rack. Only power was connected, but lots of dangling cables. Removed entire unit and cables. |
| Electronics Room | ISC C3 | 19 | D1000124 - Port #7 had dangling cable - removed and terminated. |
| Electronics Room | ISC C4 | Top | Found dangling cable from "ALS COM VCO" Port 2 of 6. Removed and terminated. |
| Electronics Room | Rack next to PSL rack | | Dangling fiber cable. Left it... |
| LVEA near PSL | ISC R4 | 18 | ADC Card Port stickered "AO IN 2" - Dangling BNC removed. |
| LVEA near PSL | ISC R4 | 18 to PSL P1 | BNC-Lemo with resistor blue box connecting "AO2" R4 to "TF IN" on P1 PMC Locking Servo Card - removed. |
| LVEA near PSL | ISC R4 | 20 | T'd dangling BNC on back of chassis - removed T and unused BNC. |
| LVEA near PSL | | | Disconnected unused O-scope, Analyzer, and extension cords near these racks. |
| LVEA | Under HAM1 south | | Disconnected extension cord running to powered-off Beckhoff Rotation stage termination box thingy. Richard said unit is to be removed someday altogether. |
| LVEA | Under HAM4 NE cable tray | | Turned off via power cord the TV monitor that was on. |
| LVEA | HAM6 NE corner | | Kiwamu powered off and removed power cables from OSA equipment near HAM6 ISCT table. |
| LVEA | | | Unplugged/removed other various unused power strips and extension cords. |
I also threw the main breaker to the OFF position on both of the free standing unused transformer units in the LVEA - one I completely unplugged because I thought I could still hear it humming.
No monitors or computers appear to be on except the 2 VE Beckhoff ones that must remain on (in their stand-alone racks on the floor).
We'll ask the early morning crew to sweep for Phones, Access readers, lights, and WIFI first thing in the morning.
Final walk thru of LVEA was done this morning. The following items were unplugged or powered off:
Phones
1. Next to PSL Rack
2. Next to HAM 6
3. In CER
Card Readers
1. High Bay entry
2. Main entry
Wifi
1. Unplugged network cable from patch panel in FAC Rack
Added this to Ops Sticky Notes page.