Kiwamu, Nutsinee
We had another camera glitch this morning, and restarting the computer didn't solve the problem. Kiwamu tried turning off all the camera and frame-grabber switches while running the image-streaming code, but it still streamed something glitchy even with no real inputs (this is also true with the SLED off). We also tried streaming images from one camera at a time: X appeared to run fine, but started to glitch as soon as we streamed images from the Y camera. The same is true with the HWS code. No matter which order we talk to the cameras in, HWSX is always the one that glitches if we are talking to the Y camera at the same time. We used to be able to stream images from both cameras at the same time, and we were clearly able to run both the HWSX and HWSY scripts simultaneously without any issues.
We will keep HWSX code running for now. HWSY code is not running.
J. Kissel

Continuing the schedule for this roaming line with a move from 1001.3 to 1501.3 Hz. We (as in operators and I, instead of just I) will make an effort to pay closer attention to this, so we can be done with the schedule sooner and turn off this line for the duration of the run.

Current schedule:

Frequency  Planned Amplitude  Planned Duration  Actual Amplitude  Start Time                Stop Time                 Achieved Duration
(Hz)       (ct)               (hh:mm)           (ct)              (UTC)                     (UTC)                     (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------
1001.3     35k                02:00             39322.0           Nov 28 2016 17:20:44 UTC  Nov 30 2016 17:16:00 UTC  days @ 30 W
1501.3     35k                02:00             39322.0           Nov 30 2016 17:27:00 UTC
2001.3     35k                02:00             39322.0
2501.3     35k                05:00             39322.0
3001.3     35k                05:00             39322.0
3501.3     35k                05:00             39322.0
4001.3     40k                10:00             39322.0
4301.3     40k                10:00             39322.0
4501.3     40k                10:00             39322.0
4801.3     40k                10:00             39222.0
5001.3     40k                10:00             39222.0

History:

Frequency  Planned Amplitude  Planned Duration  Actual Amplitude  Start Time                Stop Time                 Achieved Duration
(Hz)       (ct)               (hh:mm)           (ct)              (UTC)                     (UTC)                     (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------
1001.3     35k                02:00             39322.0           Nov 11 2016 21:37:50 UTC  Nov 12 2016 03:28:21 UTC  ~several hours @ 25 W
1501.3     35k                02:00             39322.0           Oct 24 2016 15:26:57 UTC  Oct 31 2016 15:44:29 UTC  ~week @ 25 W
2001.3     35k                02:00             39322.0           Oct 17 2016 21:22:03 UTC  Oct 24 2016 15:26:57 UTC  several days (at both 50 W and 25 W)
2501.3     35k                05:00             39322.0           Oct 12 2016 03:20:41 UTC  Oct 17 2016 21:22:03 UTC  days @ 50 W
3001.3     35k                05:00             39322.0           Oct 06 2016 18:39:26 UTC  Oct 12 2016 03:20:41 UTC  days @ 50 W
3501.3     35k                05:00             39322.0           Jul 06 2016 18:56:13 UTC  Oct 06 2016 18:39:26 UTC  months @ 50 W
4001.3     40k                10:00             39322.0           Nov 12 2016 03:28:21 UTC  Nov 16 2016 22:17:29 UTC  days @ 30 W (see LHO aLOG 31546 for caveats)
4301.3     40k                10:00             39322.0           Nov 16 2016 22:17:29 UTC  Nov 18 2016 17:08:49 UTC  days @ 30 W
4501.3     40k                10:00             39322.0           Nov 18 2016 17:08:49 UTC  Nov 20 2016 16:54:32 UTC  days @ 30 W (see LHO aLOG 31610 for caveats)
4801.3     40k                10:00             39222.0           Nov 20 2016 16:54:32 UTC  Nov 22 2016 23:56:06 UTC  days @ 30 W
5001.3     40k                10:00             39222.0           Nov 22 2016 23:56:06 UTC  Nov 28 2016 17:20:44 UTC  days @ 30 W (line was OFF and ON for Hardware INJ)
DCS (LDAS) successfully switched from using the ER10 locations to archive data to the O2 locations starting from:
1164554240 == Nov 30 2016 07:17:03 PST == Nov 30 2016 09:17:03 CST == Nov 30 2016 15:17:03 UTC.
This change should be transparent to users requesting data.
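For reference, the GPS-to-UTC conversion quoted above can be reproduced with a short script. This is a minimal sketch: it hardcodes the 17 s GPS-UTC leap-second offset that was valid in late 2016 rather than consulting a leap-second table, so it is only correct for timestamps in that leap-second era.

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)   # GPS time zero, expressed in UTC
LEAP_OFFSET = 17                   # GPS-UTC offset (s) valid in late 2016

def gps_to_utc(gps_seconds):
    """Convert a GPS timestamp to a naive UTC datetime (late-2016 era only)."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_OFFSET)

print(gps_to_utc(1164554240))  # 2016-11-30 15:17:03
```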
Jeff K., Evan G. At 17:16:30 Nov 30 2016 UTC, we turned off the 1001.3 Hz Pcal X line as a test for the DetChar group. At 17:21:30 UTC, we shuttered the Pcal X laser. At 17:26:30 UTC, we un-shuttered the Pcal X laser. The line frequency will be moved to 1501.3 Hz shortly.
Trends are the last 20 days due to trends not being taken last week.
WeeklyXtal - Nothing unusual. Amp powers following humidity. Normal power degradation in Osc diode power. Blown readback on Osc DB3 current.
WeeklyLaser - normal. Incursions Monday.
WeeklyEnv - normal
WeeklyChiller - normal except for a trip on Tuesday morning.
Around 8:30 AM local. Cameras glitched again.
8:40 AM restarted h1hwsmsr computer again. This time with cameras and frame grabbers turned off.
~9:00 AM We restarted h1hwsmsr again.
State of H1: relocking, at ENGAGE_SOFT_LOOPS
Activities:
Could someone on site check the coherence of DARM around 1080 Hz with the usual jitter witnesses? We're not able to do it offsite because the best witness channels are stored with a Nyquist of 1024 Hz. What we need is the coherence from 1000 to 1200 Hz with things like the IMC WFS (especially the sum, I think). The DBB would be nice if available, but I think it's usually shuttered. There's indirect evidence from hVeto that this is jitter, so if there is a good witness channel we'll want to increase its sampling rate in case we get an SN or BNS that has power in this band.
@Andy I'll have a look at IOP channels.
Evan G., Keita K. Upon request, I'm attaching several coherence plots for the 1000-1200 Hz band between H1:CAL-DELTAL_EXTERNAL_DQ and many IMC WFS IOP channels (IOP-ASC0_MADC0_TP_CH[0-12]), ISS intensity noise witness channels (PSL-ISS_PD[A,B]_REL_OUT_DQ), PSL QPD channels (PSL-ISS_QPD_D[X,Y]_OUT_DQ), ILS and PMC HV mon channels, and ISS second loop QPD channels. Unfortunately, there is low coherence between all of these channels and DELTAL_EXTERNAL, so we don't have any good leads here.
A2L: How to know if it's good or bad at the moment.
Here is a dtt template to passively measure a2l quality: /opt/rtcds/userapps/release/isc/common/scripts/decoup/DARM_a2l_passive.xml
It measures the coherence between DARM and ASC drive to all test masses using 404 seconds worth of data.
All references started 25 seconds or so after the last a2l was finished and 9 or 10 seconds before the intent bit was set (GPS 116467290).
"Now" is actually about 15:00 UTC (7 AM PT), and you can see that the coherence at around 20 Hz (where the ASC feedback to the test masses starts to be dominated by sensing noise) is significantly worse, and DARM itself was also worse, so you can say that the a2l was worse AT THIS PARTICULAR POINT IN TIME.
Thing is, this might slowly drift around and get better or worse. You can run this template for many points in time (for example, each hour), and if the coherence is consistently worse than right after a2l, you know that we need a2l. (A better approach is to write a script to plot the coherence as a time series, which is a good project for fellows.)
If it is repeatedly observed over multiple lock stretches (without running a2l) that the coherence starts small at the beginning of lock and becomes larger an hour or two into the lock, that's the sign that we need to run a2l an hour or two after the lock.
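As a starting point for that fellows' project, here is a minimal sketch of the coherence computation itself. It uses synthetic data in place of the real DARM and ASC drive channels (fetching the channels via NDS is left out), and the sample rate and segment length are assumptions for illustration; the 404 s duration follows the dtt template above.

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0                         # assumed sample rate for this sketch
t = np.arange(0, 404, 1 / fs)      # 404 s of data, as in the dtt template
rng = np.random.default_rng(0)

# Synthetic stand-ins: a common component plus independent sensing noise
common = rng.standard_normal(t.size)
darm = common + 0.1 * rng.standard_normal(t.size)
asc_drive = common + 0.1 * rng.standard_normal(t.size)

# Averaged coherence; with real data, repeat this each hour and plot vs. time
f, coh = coherence(darm, asc_drive, fs=fs, nperseg=int(4 * fs))
band = (f > 15) & (f < 25)         # the ~20 Hz region discussed above
print(coh[band].mean())            # high coherence, since the noise is small
```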
[EDIT] Sorry wrong alog.
Thanks for catching this Cheryl! Yes, please leave this channel unmonitored.
I made a few measurements tonight, and we did a little bit more work to be able to go to observe.
Measurements:
First, I tried to look at why our yaw ASC loops move at 1.88 Hz. I tried to modify the MICH Y loop a few times, which broke the lock, but Jim relocked right away.
Then I repeated the noise injections for jitter with the new PZT mount, and repeated the MICH/PRCL/SRCL/ASC injections. Since MICH Y was about 10 times larger in DARM than pit (it was at about the level of CHARD in DARM), I adjusted MICH Y2L by hand using a 21 Hz line. By changing the gain from 2.54 to 1, the coupling of the line to DARM was reduced by a bit more than a factor of 10, and the MICH yaw noise is now a factor of 10 below DARM at 20 Hz.
Lastly, I quickly checked if I could change the noise by adjusting the bias on ETMX. A few weeks ago I had changed the bias to -400V, which reduced the 60Hz line by a factor of 2, but the line has gotten larger over the last few weeks. However, it is still true that the best bias is -400V. We still see no difference in the broad level of noise when changing this bias.
Going to observe:
I've added round(,3) to the SOFT input matrix elements that needed it, and to MCL_GAIN in ANALOG_CARM
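A one-line illustration of why the rounding matters (hypothetical values): computed matrix elements carry floating-point residue, which SDF would flag as a difference against the accepted setpoint unless the value is rounded before being written.

```python
# Hypothetical computed matrix element vs. the value accepted in SDF
computed = 0.707 + 1e-12   # floating-point residue from a calculation
setpoint = 0.707           # value stored in the SDF table

print(computed == setpoint)             # False: SDF would flag a difference
print(round(computed, 3) == setpoint)   # True: rounding before writing avoids it
```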
DIAG main complained about IM2 Y being out of the nominal range; this is because of the move we made after the IMC PZT work (31951). I changed the nominal value for DAMP Y IN1 from -209 to -325.
A few minutes after Cheryl went to observe, we were kicked out of observing again because of fiber polarization: both an SDF difference because of the PLL autolocker, and a warning in DIAG main. This shouldn't kick us out of observation mode, because it doesn't matter at all. We should change DIAG_MAIN to only make this test when we are acquiring lock, and perhaps not monitor some of these channels in the SDF observe files. We decided the easiest solution for tonight was to fix the fiber polarization, so Cheryl did that.
Lastly, Cheryl suggested that we organize the guardian states for ISC_LOCK so that states which are not normally used are above NOMINAL_LOW_NOISE. I've renumbered the states but not yet loaded the guardian, because I think that would knock us out of observation mode and we want to let the hardware injections happen.
REDUCE_RF9 modulation depth guardian problem:
It seems like the REDUCE_RF9 modulation depth state somehow skips resetting some gains (the screenshot shows the problem; noted before in aLOG 31558). This could be serious, and could be why we have occasionally lost lock in this state. I've attached the log. This is disconcerting because the guardian log reports that it set the gains, but that seems not to have happened. For the two PDs whose gains did not get set, it also looks like the rounding step was skipped.
We accepted the wrong values in SDF (neither of these PDs is in use in lock) so that Adam could make a hardware injection, but these are the wrong values and will be different next time we lock. The next time the IFO locks, the operator should accept the correct values.
Responded to bug report: https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=1062
Similar thing happened for ASC-REFL_B_RF45_Q_PIT during the last acquisition. I have added some notes to the bug so that Jamie can follow up.
We think that Jamie's comment that we're writing to the same channel too fast is probably the problem. Sheila is currently circulating the work permit to fix the bug.
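If channel writes really are outpacing the IOC, one defensive pattern is to read each channel back after writing and retry until the value sticks. This is a sketch only, written against a generic dict-like interface rather than the real ezca object (the actual fix belongs in guardian itself), and the channel name in the usage example is just illustrative.

```python
import time

def set_and_confirm(channels, name, value, tries=3, tol=1e-6, delay=0.1):
    """Write value to channels[name], read it back, and retry if it didn't stick.

    `channels` stands in for an EPICS channel-access interface (e.g. guardian's
    ezca object); here it can be any dict-like object.
    """
    for _ in range(tries):
        channels[name] = value
        time.sleep(delay)               # give the write time to propagate
        if abs(channels[name] - value) <= tol:
            return True
    return False

# Usage with a plain dict standing in for the channel interface
# (channel name below is illustrative only):
chans = {}
ok = set_and_confirm(chans, "LSC-POPAIR_B_RF90_I_GAIN", 1.0, delay=0.0)
print(ok)  # True
```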
I've scheduled a CBC injection to begin at 9:20 UTC (1:20 PT).
Here is the change to the schedule file:
1164532817 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt
I'll be scheduling more shortly.
I've scheduled another two injections. The next one is a NSBH inspiral scheduled at 10:30 UTC (2:30 PT) and the following one is another BBH scheduled for 11:40 UTC (3:40 PT).
Here is the update to the schedule file:
1164537017 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/nsbh_hwinj_snr24_1163501314_{ifo}_filtered.txt
1164541217 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt
The xml files can be found in the injection svn in the Inspiral directory.
All three of these scheduled injections were successfully injected at LHO. The first two were coincident with LLO, the third wasn't injected at LLO as the L1 IFO was down at the time. The relevant section of the INJ_TRANS guardian log is attached.
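For reference, the schedule-file entries quoted above can be unpacked with a short parser. This is a sketch; the field meanings (GPS start time, participating IFOs, injection state, an enable flag, a scale factor, and the waveform path) are inferred from the entries themselves, not from the injection system's documentation.

```python
def parse_schedule_line(line):
    """Split one hardware-injection schedule entry into its fields."""
    gps, ifos, state, flag, scale, waveform = line.split()
    return {
        "gps": int(gps),        # GPS start time of the injection
        "ifos": ifos,           # participating detectors, e.g. H1L1
        "state": state,         # injection guardian state requested
        "enabled": int(flag),   # enable flag (inferred meaning)
        "scale": float(scale),  # amplitude scale factor (inferred meaning)
        "waveform": waveform,   # waveform file, with {ifo} substituted per site
    }

entry = parse_schedule_line(
    "1164532817 H1L1 INJECT_CBC_ACTIVE 1 1.0 "
    "Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt"
)
print(entry["gps"], entry["state"])  # 1164532817 INJECT_CBC_ACTIVE
```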
TITLE: 11/30 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 72.6285Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 8mph Gusts, 5mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.50 μm/s
SUMMARY:
Betsy, Keita, Daniel
As part of the LVEA sweep, prior to the start of O2, this morning, we spent over an hour doing a cleanup of misc cables and test equipment in the LVEA and electronics room. There were quite a few cables dangling from various racks, here's the full list of what we cleaned up and where:
| Location | Rack | Slot | Description |
| Electronics Room | ISC C2 | | Found unused servo controller/cables/mixer on top of rack. Only power was connected, but lots of dangling cables. Removed entire unit and cables. |
| Electronics Room | ISC C3 | 19 | D1000124 - Port #7 had dangling cable - removed and terminated. |
| Electronics Room | ISC C4 | Top | Found dangling cable from "ALS COM VCO" Port 2 of 6. Removed and terminated. |
| Electronics Room | Rack next to PSL rack | | Dangling fiber cable. Left it... |
| LVEA near PSL | ISC R4 | 18 | ADC card port stickered "AO IN 2" - dangling BNC removed. |
| LVEA near PSL | ISC R4 | 18 to PSL P1 | BNC-Lemo with resistor blue box connecting "AO2" on R4 to "TF IN" on the P1 PMC locking servo card - removed. |
| LVEA near PSL | ISC R4 | 20 | T'd dangling BNC on back of chassis - removed T and unused BNC. |
| LVEA near PSL | | | Disconnected unused o-scope, analyzer, and extension cords near these racks. |
| LVEA | Under HAM1 south | | Disconnected extension cord running to powered-off Beckhoff rotation stage termination box. Richard said the unit is to be removed altogether someday. |
| LVEA | Under HAM4 NE cable tray | | Turned off (via power cord) the TV monitor that was on. |
| LVEA | HAM6 NE corner | | Kiwamu powered off and removed power cables from OSA equipment near the HAM6 ISCT table. |
| LVEA | | | Unplugged/removed other various unused power strips and extension cords. |
I also threw the main breaker to the OFF position on both of the free standing unused transformer units in the LVEA - one I completely unplugged because I thought I could still hear it humming.
No monitors or computers appear to be on except the two VE Beckhoff ones that must remain on (in their stand-alone racks on the floor).
We'll ask the early morning crew to sweep for Phones, Access readers, lights, and WIFI first thing in the morning.
Final walk thru of LVEA was done this morning. The following items were unplugged or powered off:
Phones
1. Next to PSL Rack
2. Next to HAM 6
3. In CER
Card Readers
1. High Bay entry
2. Main entry
Wifi
1. Unplugged network cable from patch panel in FAC Rack
Added this to Ops Sticky Notes page.