3:45pm local
Took 26 seconds to overfill CP3 from control room. Increased LLCV to 50% from 17%. Put it back to 17%.
Kiwamu Daniel
The attached spectrum shows the IMC-WFS_[AB]_[IQ][1234] signals. Electronics noise dominates above 2 kHz for WFS B and above 1 kHz for WFS A. IMC-WFS_A_Q4 is either exactly in the Q phase or broken. The DC signals seem to be even closer to the ADC noise, since they have no whitening filters.
Our utility provider needed to do some switching of our power feed yesterday and returned it to normal today. At approximately 1545 PST 29Nov2016, a DOE contractor changed our feed from one bus to another at the substation for work that needed to be done on other circuits there. At approximately 1350 PST 30Nov2016 they restored our circuit to the normal feed. Fortunately, with the new switch in the system, we saw no effect from this work. The IFO remained locked and everything seemed fine. A cursory look at line monitors and magnetometers did not reveal any anomalies. If any of our PEM monitors did see an event at this time, please let us know, as this is useful feedback.
Continuing the schedule for this roaming line with a move from 2001.3 to 2501.3 Hz.

Frequency  Planned    Planned   Actual     Start Time                Stop Time                 Achieved Duration
(Hz)       Amplitude  Duration  Amplitude  (UTC)                     (UTC)                     (hh:mm)
           (ct)       (hh:mm)   (ct)
----------------------------------------------------------------------------------------------------------------
1001.3     35k        02:00     39322.0    Nov 28 2016 17:20:44 UTC  Nov 30 2016 17:16:00 UTC  days @ 30 W
1501.3     35k        02:00     39322.0    Nov 30 2016 17:27:00 UTC  Nov 30 2016 19:36:00 UTC  02:09 @ 30 W
2001.3     35k        02:00     39322.0    Nov 30 2016 19:36:00 UTC  Nov 30 2016 22:07:00 UTC  02:31 @ 30 W
2501.3     35k        05:00     39322.0    Nov 30 2016 22:08:00 UTC
3001.3     35k        05:00     39322.0
3501.3     35k        05:00     39322.0
4001.3     40k        10:00     39322.0
4301.3     40k        10:00     39322.0
4501.3     40k        10:00     39322.0
4801.3     40k        10:00     39222.0
5001.3     40k        10:00     39222.0

Frequency  Planned    Planned   Actual     Start Time                Stop Time                 Achieved Duration
(Hz)       Amplitude  Duration  Amplitude  (UTC)                     (UTC)                     (hh:mm)
           (ct)       (hh:mm)   (ct)
----------------------------------------------------------------------------------------------------------------
1001.3     35k        02:00     39322.0    Nov 11 2016 21:37:50 UTC  Nov 12 2016 03:28:21 UTC  ~several hours @ 25 W
1501.3     35k        02:00     39322.0    Oct 24 2016 15:26:57 UTC  Oct 31 2016 15:44:29 UTC  ~week @ 25 W
2001.3     35k        02:00     39322.0    Oct 17 2016 21:22:03 UTC  Oct 24 2016 15:26:57 UTC  several days (at both 50 W and 25 W)
2501.3     35k        05:00     39322.0    Oct 12 2016 03:20:41 UTC  Oct 17 2016 21:22:03 UTC  days @ 50 W
3001.3     35k        05:00     39322.0    Oct 06 2016 18:39:26 UTC  Oct 12 2016 03:20:41 UTC  days @ 50 W
3501.3     35k        05:00     39322.0    Jul 06 2016 18:56:13 UTC  Oct 06 2016 18:39:26 UTC  months @ 50 W
4001.3     40k        10:00     39322.0    Nov 12 2016 03:28:21 UTC  Nov 16 2016 22:17:29 UTC  days @ 30 W (see LHO aLOG 31546 for caveats)
4301.3     40k        10:00     39322.0    Nov 16 2016 22:17:29 UTC  Nov 18 2016 17:08:49 UTC  days @ 30 W
4501.3     40k        10:00     39322.0    Nov 18 2016 17:08:49 UTC  Nov 20 2016 16:54:32 UTC  days @ 30 W (see LHO aLOG 31610 for caveats)
4801.3     40k        10:00     39222.0    Nov 20 2016 16:54:32 UTC  Nov 22 2016 23:56:06 UTC  days @ 30 W
5001.3     40k        10:00     39222.0    Nov 22 2016 23:56:06 UTC  Nov 28 2016 17:20:44 UTC  days @ 30 W (line was OFF and ON for Hardware INJ)
We are having issues accessing data from the last few days via the LHO nds2 server. I am looking at it now. The data is available on the cluster at LHO; nds2 is just not seeing it. I'm updating the frame source lists on the server and will restart it when that is done, which will hopefully clear up the problems. ETA less than an hour.
This will take a little longer. After some digging, I found the NDS2 server was looking at an old frame cache file (the frame cache is used to map from frame type and time to frame file locations in LDAS). I'm working with Dan Moraru now to make sure we are using a good source.
Recent data can once again be fetched from nds2 @ LHO:

nds2_channel_source -a -n nds.ligo-wa.caltech.edu H1:DAQ-DC0_GPS
{... H-H1_R:1163880064-1164408128 H-H1_R:1164408320-1164578304}

So data up to GPS 1164578304 is available in LDAS right now via nds2 at LHO.
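For completeness, here is a minimal sketch (not part of the original entry) of pulling a short stretch of recent data through the LHO NDS2 server with the nds2 Python client. The GPS span below is a placeholder chosen to sit inside the range reported above.

import nds2

# Placeholder GPS span inside the newly available range (up to 1164578304).
start, stop = 1164578000, 1164578060

# Connect to the LHO NDS2 server on the standard port and fetch the channel
# used in the check above.
conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
buffers = conn.fetch(start, stop, ['H1:DAQ-DC0_GPS'])
print(buffers[0].channel.name, len(buffers[0].data), 'samples fetched')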
A couple of people asked today whether Pcal calibration lines could be contributing to any of the noise humps. We have tested this previously (see recent LHO aLOGs 31101 and 31035). I looked at the time last night when we shuttered the Pcal Y laser (LHO aLOG 31991). Attached are power spectra from a few minutes before shuttering and from just after shuttering for the four Pcal Y lines. Blue is the reference time just before shuttering; red is the time just after shuttering. Observe that the Pcal line frequencies (7.9 Hz, 36.7 Hz, 331.9 Hz, and 1083.7 Hz), present in blue, disappear in red, but the noise humps around 330 Hz and 1080 Hz remain. In addition, as noted in LHO aLOG 31991, the glitch rate remained the same during this test. Conclusions: Pcal does not contribute to the glitching observed in DELTAL_EXTERNAL, and Pcal does not contribute to the noise humps at ~330 Hz and ~1080 Hz (laser on versus off comparisons again confirm these).
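As an illustration of the comparison method, here is a minimal sketch using gwpy; it is not the script used for the attachments, and the GPS times are placeholders standing in for the actual before/after shutter times in LHO aLOG 31991.

from gwpy.timeseries import TimeSeries

chan = 'H1:CAL-DELTAL_EXTERNAL_DQ'   # the DELTAL_EXTERNAL channel discussed above
t_on  = 1164500000                   # placeholder: Pcal Y laser still on
t_off = 1164501000                   # placeholder: just after shuttering

# Amplitude spectral densities from a few minutes of data at each time.
asd_on  = TimeSeries.get(chan, t_on,  t_on  + 180).asd(fftlength=8)
asd_off = TimeSeries.get(chan, t_off, t_off + 180).asd(fftlength=8)

plot = asd_on.plot(color='blue', label='Pcal Y on (reference)')
ax = plot.gca()
ax.plot(asd_off, color='red', label='Pcal Y shuttered')
ax.set_xlim(300, 1200)   # covers the 331.9 Hz and 1083.7 Hz Pcal lines
ax.legend()
plot.show()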
h1cam17 was reported down this morning by the WhatsUp monitoring system. We will leave the camera down, since we are in Observation, until it is needed; if someone has an imperative need to use it, let cdsadmin know and we will reboot the camera remotely.

From: cds-alerts@LIGO.ORG
Subject: Device is Down (h1cam17.cds.ligo-wa.caltech.edu).
Date: November 30, 2016 at 9:33:38 AM PST
To: carlos.perez@ligo.org

Ping is Down on Device: h1cam17.cds.ligo-wa.caltech.edu (10.106.0.37).
Details:
Monitors that are down include: Ping
Monitors that are up include:
Notes on this device (from device property page): This device was scanned by discovery on 10/6/2016 10:12:15 AM.

This mail was sent on November 30, 2016 at 09:33:32 AM
Ipswitch WhatsUp Gold
Carlos, thank you for the report. The camera was removed this past Tuesday (31962). Richard will re-install it in a different location for monitoring the SRM cage at some point.
Day Shift: 16:00-00:00 UTC (08:00-16:00 PST)
With pomp & circumstance, O2 started with H1 down (& quickly on its way up) & a livestream of the start of O2 going on in the Control Room. Cheryl took H1 to NLN & after a bit of shuffling, we are now in OBSERVING.
1) Hit LOAD on IMC_LOCK
2) Notify Carlos, so he can run to EX & EY to turn OFF the wifi routers.
20:25 Re-booted Video 1 FOM
20:30 Yellow PSL Dust Alarm
20:34 Dave B informed me that "Elli's" digital camera went down approx 9:30. It doesn't seem to be missed at the moment and I'm not sure what it was for.
20:42 Red PSL dust alarm
21:01 Intent bit accidentally set to Commissioning by Sheila, but left that way in order to do some violin mode damping. 2nd-order harmonics were showing prominent noise in the DMT Omega glitch plot.
22:00 Richard M asked us to go hands-off for a few minutes while power to the facility was being switched between sub-stations.
22:55 Lockloss - probably due to trying to get the 4735Hz mode damped
23:15 re-locking - re-aligned X/Y ALS. Fiber Polarization 17% wrong on Y. Going to correct.
23:25 It seemed that simply turning the unit on to adjust the fiber polarization caused the Y channel to jump to 27%. None of the fibers were disturbed by opening the door or activating the switch. Y is currently at 0% and X is at 4%.
23:29 reset ALS Y VCO
23:53 Nominal Low Noise - Jeff continuing to chase HF violin mode (4.735 kHz)
23:59 Handing off to Jim
I did reload the IMC Guardian node and I called Carlos about the WiFi at the end stations. Both of these happened as the weekly meeting was starting. I don't know if he got to address these; I didn't hear from him.
No H2O added. The level is a little below the max line depending on the bubbles. Peter K. has an entry on the notepad that he added water yesterday. Are we duplicating work? Both filter canisters look clean. There were no active alarms.
J. Kissel

Continuing the schedule for this roaming line with a move from 1501.3 to 2001.3 Hz. We (as in operators and I, instead of just I) are making an effort to pay closer attention to this, so we can be done with the schedule sooner and turn off this line for the duration of the run.

Frequency  Planned    Planned   Actual     Start Time                Stop Time                 Achieved Duration
(Hz)       Amplitude  Duration  Amplitude  (UTC)                     (UTC)                     (hh:mm)
           (ct)       (hh:mm)   (ct)
----------------------------------------------------------------------------------------------------------------
1001.3     35k        02:00     39322.0    Nov 28 2016 17:20:44 UTC  Nov 30 2016 17:16:00 UTC  days @ 30 W
1501.3     35k        02:00     39322.0    Nov 30 2016 17:27:00 UTC  Nov 30 2016 19:36:00 UTC  02:09 @ 30 W
2001.3     35k        02:00     39322.0    Nov 30 2016 19:36:00 UTC
2501.3     35k        05:00     39322.0
3001.3     35k        05:00     39322.0
3501.3     35k        05:00     39322.0
4001.3     40k        10:00     39322.0
4301.3     40k        10:00     39322.0
4501.3     40k        10:00     39322.0
4801.3     40k        10:00     39222.0
5001.3     40k        10:00     39222.0

Frequency  Planned    Planned   Actual     Start Time                Stop Time                 Achieved Duration
(Hz)       Amplitude  Duration  Amplitude  (UTC)                     (UTC)                     (hh:mm)
           (ct)       (hh:mm)   (ct)
----------------------------------------------------------------------------------------------------------------
1001.3     35k        02:00     39322.0    Nov 11 2016 21:37:50 UTC  Nov 12 2016 03:28:21 UTC  ~several hours @ 25 W
1501.3     35k        02:00     39322.0    Oct 24 2016 15:26:57 UTC  Oct 31 2016 15:44:29 UTC  ~week @ 25 W
2001.3     35k        02:00     39322.0    Oct 17 2016 21:22:03 UTC  Oct 24 2016 15:26:57 UTC  several days (at both 50 W and 25 W)
2501.3     35k        05:00     39322.0    Oct 12 2016 03:20:41 UTC  Oct 17 2016 21:22:03 UTC  days @ 50 W
3001.3     35k        05:00     39322.0    Oct 06 2016 18:39:26 UTC  Oct 12 2016 03:20:41 UTC  days @ 50 W
3501.3     35k        05:00     39322.0    Jul 06 2016 18:56:13 UTC  Oct 06 2016 18:39:26 UTC  months @ 50 W
4001.3     40k        10:00     39322.0    Nov 12 2016 03:28:21 UTC  Nov 16 2016 22:17:29 UTC  days @ 30 W (see LHO aLOG 31546 for caveats)
4301.3     40k        10:00     39322.0    Nov 16 2016 22:17:29 UTC  Nov 18 2016 17:08:49 UTC  days @ 30 W
4501.3     40k        10:00     39322.0    Nov 18 2016 17:08:49 UTC  Nov 20 2016 16:54:32 UTC  days @ 30 W (see LHO aLOG 31610 for caveats)
4801.3     40k        10:00     39222.0    Nov 20 2016 16:54:32 UTC  Nov 22 2016 23:56:06 UTC  days @ 30 W
5001.3     40k        10:00     39222.0    Nov 22 2016 23:56:06 UTC  Nov 28 2016 17:20:44 UTC  days @ 30 W (line was OFF and ON for Hardware INJ)
The current glitch rate is elevated compared to previous locks over the last few days.

Figure 1: current glitch rates (Nov 30)
Figure 2: Nov 28 glitch rates

Note that the current glitch rates are all elevated. It's easier if you open these in windows you can swap back and forth between, or just stare at them side by side.

Looking at the SNR distribution, there is a large population of SNR < 30 glitches (note the large hump instead of a linear decay on the log-log plot).

Figure 3: current SNR distribution (Nov 30)
Figure 4: Nov 28 SNR distribution

Looking at the histogram of glitch SNR versus frequency, there are clearly more numerous higher-SNR glitches at low frequencies, but the higher glitch rate for low-SNR glitches seems to be coming mostly from the (new?) 2 kHz and 3 kHz glitches.

Figure 5: current SNR versus frequency (Nov 30)
Figure 6: Nov 28 SNR versus frequency
Our violin mode second harmonics are rung up, which is most likely the problem here. We had a rough lockloss late Monday night in which things got rung up. For now, Ed, Jeff B, and Jeff K are working on damping some of the rung-up first harmonics; that is why we are not in observing mode right now.
The guardian automatically damps the first-harmonic violin modes, so they are normally small after we have had some long lock stretches, but the second harmonics only get damped if operators actively work on them. It would be a good idea for operators to watch these and damp them as well as we can. Allowing operators to damp these and change settings while we are in observing mode would facilitate getting these modes damped.
We have been having ISI trips on locklosses recently, which is probably how these are getting rung up. We are hoping that the tidal triggering change described in aLOG 31980 will prevent the trips, so that the harmonics will not get rung up as often.
I made a few measurements tonight, and we did a little bit more work to be able to go to observe.
Measurements:
First, I tried to look at why our yaw ASC loops move at 1.88 Hz. I tried to modify the MICH Y loop a few times, which broke the lock, but Jim relocked right away.
Then I repeated the jitter noise injections with the new PZT mount, and repeated the MICH/PRCL/SRCL/ASC injections. Since MICH Y was about 10 times larger in DARM than pitch (it was at about the level of CHARD in DARM), I adjusted MICH Y2L by hand using a 21 Hz line. By changing the gain from 2.54 to 1, the coupling of the line to DARM was reduced by a bit more than a factor of 10, and the MICH yaw noise is now a factor of 10 below DARM at 20 Hz.
Lastly, I quickly checked if I could change the noise by adjusting the bias on ETMX. A few weeks ago I had changed the bias to -400V, which reduced the 60Hz line by a factor of 2, but the line has gotten larger over the last few weeks. However, it is still true that the best bias is -400V. We still see no difference in the broad level of noise when changing this bias.
Going to observe:
I've added rounding to 3 decimal places (round(,3)) to the SOFT input matrix elements that needed it, and to MCL_GAIN in ANALOG_CARM.
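To illustrate what the round(,3) change does, here is a minimal sketch; it is not the actual ISC_LOCK code, and the helper name and channel names in the commented usage are made up for illustration.

def set_rounded(ezca, channel, value, ndigits=3):
    # Round a computed setting before writing it through ezca, so the value
    # stored in EPICS matches the SDF-accepted value to 3 decimal places.
    ezca[channel] = round(value, ndigits)

# Hypothetical usage inside a guardian state:
#   set_rounded(ezca, 'LSC-MCL_GAIN', computed_gain)
#   for chan, element in soft_input_matrix_elements.items():
#       set_rounded(ezca, chan, element)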
DIAG_MAIN complained about IM2 Y being out of the nominal range; this is because of the move we made after the IMC PZT work (31951). I changed the nominal value for DAMP Y IN1 from -209 to -325.
A few minutes after Cheryl went to observe, we were kicked out of observing again because of fiber polarization: both an SDF difference because of the PLL autolocker, and a warning in DIAG_MAIN. This shouldn't kick us out of observation mode, because it doesn't matter at all. We should change DIAG_MAIN to only make this test while we are acquiring lock, and perhaps not monitor some of these channels in the SDF Observe table. We decided the easiest solution for tonight was to fix the fiber polarization, so Cheryl did that.
Lastly, Cheryl suggested that we organize the guardian states for ISC_LOCK so that states which are not normally used are above NOMINAL_LOW_NOISE. I've renumbered the states but have not yet loaded the guardian, because I think that would knock us out of observation mode and we want to let the hardware injections happen.
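For context, a minimal sketch of the renumbering idea (not the real ISC_LOCK code), assuming the standard guardian GuardState 'index' attribute; the state names and index values below are illustrative only.

from guardian import GuardState

class NOMINAL_LOW_NOISE(GuardState):
    index = 600          # hypothetical index for the nominal state
    def run(self):
        return True      # stay here while everything is nominal

class RARELY_USED_DIAGNOSTIC(GuardState):
    index = 700          # placed above NOMINAL_LOW_NOISE after renumbering
    def run(self):
        return True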
REDUCE_RF9 modulation depth guardian problem:
It seems like the REDUCE_RF9 modulation depth state somehow skips resetting some gains (screenshot shows the problem; noted before in aLOG 31558). This could be serious, and could be why we have occasionally lost lock in this state. I've attached the log; this is disconcerting because the guardian log reports that it set the gains, but it seems not to have happened. For the two PDs which did not get set, it also looks like the rounding step is skipped.
We accepted the wrong values in SDF (neither of these PDs is in use in lock) so that Adam could make a hardware injection, but these values should be different the next time we lock. The next time the IFO locks, the operator should accept the correct values.
Responded to bug report: https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=1062
Similar thing happened for ASC-REFL_B_RF45_Q_PIT during the last acquisition. I have added some notes to the bug so that Jamie can follow up.
We think the problem is probably what Jamie suggested: we are writing to the same channel too fast. Sheila is currently circulating the work permit to fix the bug.
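A minimal sketch (illustrative only, not the actual fix) of a write-and-verify pattern that would catch this failure mode, assuming the usual ezca read/write interface; the channel name in the commented usage is made up.

import time

def write_and_verify(ezca, channel, value, tries=3, settle=0.1, tol=1e-6):
    # Write the value, wait briefly, then read it back; retry if it did not stick.
    for _ in range(tries):
        ezca[channel] = value
        time.sleep(settle)
        if abs(ezca[channel] - value) < tol:
            return True
    return False

# Hypothetical usage in the REDUCE_RF9 modulation depth state:
#   if not write_and_verify(ezca, 'LSC-SOME_PD_GAIN', target_gain):
#       notify('gain write did not take effect')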
[Jenne, JimW, JeffK, Sheila, EvanG, Jamie]
We were ready to try setting the Intent bit, since SDF looked clear, but it kept failing: we were auto-popped out of Observation. With Jamie on the phone, we realized that the ODCMASTER SDF file was monitoring the Observatory intent bit itself. When the Observe.snap file was captured, the intent bit was not set, so when we set the intent bit, SDF saw a difference and popped us out of Observe. Eeek!
We have not-monitored the observatory intent bit. After doing this, we were able to actually set the bit, and stick in Observe.
Talking with Jamie, it's perhaps not clear that the ODCMASTER model should be under SDF control, but at least we have something that works for now.
I think unmonitoring the intent bit channel is the best thing to do. I can see why we would like to monitor the other settings in the ODC models. So I think this is the "right" solution, and no further action is required.
I lowered the LLCV to 16% open. I think 17% was slightly too much flow, based on the exhaust pressure and periodic dips in the TC temperature (and colder temperatures outside). I drove out to CP3 to inspect. Less than half of the horizontal exhaust pipe is frosted. Nothing unusual.