O2 days 1,2:
model restarts logged for Thu 01/Dec/2016: No restarts reported
h1guardian was restarted at 08:50 PDT; its RAM was increased from 12GB to 48GB.
model restarts logged for Wed 30/Nov/2016: No restarts reported
ER10:
model restarts logged for Tue 29/Nov/2016: No restarts reported
Lock# 8
Times are in: GPS (UTC)
Start time: 1164672438.0 (Dec 2 00:07:01 UTC)
Itbit Engage: [0, 1164672969.0, 1164673578.0, 1164674812.0, 1164700372.0, 1164745711.0]
Itbit Disengage: [0, 1164673508.0, 1164674325.0, 1164700288.0, 1164745010.0, 1164747085.0]
End time: 1164747085.0 (Dec 2 20:51:02 UTC)
Total length: 74647.0 , 20hr 44min 7sec
Total Science: 72774.0 , 20hr 12min 54sec
Lock# 9
Times are in: GPS (UTC)
Start time: 1164750245.0 (Dec 2 21:43:48 UTC)
Itbit Engage: [0, 1164750308.0, 1164756280.0, 1164756461.0, 1164757097.0]
Itbit Disengage: [0, 1164756100.0, 1164756401.0, 1164756773.0, 1164758206.0]
End time: 1164758206.0 (Dec 2 23:56:26 UTC)
Total length: 7961.0 , 2hr 12min 41sec
Total Science: 7334.0 , 2hr 2min 14sec
End of Day Summary
Current Status: Unlocked
Total Day Locked: 22hr 56min 48sec [95.6%] (82608/86400)
Total Day Science: 22hr 15min 8sec [92.7%] (80108/86400)
This is the first trial of a daily post by the day shift operator summarizing the locks for the UTC day. The information is generated by (userapps)/sys/h1/scripts/VerbalAlarms/Lock_Logging.py, which runs in the background of the VerbalAlarms program. The Lock# counter was started at the beginning of O2 and is only used as a reference within the control room. The idea behind these daily posts is to give the control room something to refer back to. The lock clock displayed in the control room will soon show the current Lock# to match these posts.
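For reference, here is a minimal sketch (not the actual Lock_Logging.py) of how the per-lock science totals above can be recomputed from the Itbit engage/disengage GPS lists; the leading 0 entries are placeholders and are skipped:

def science_seconds(engage, disengage):
    """Total seconds with the intent bit engaged, pairing engage[i] with disengage[i]."""
    pairs = zip([t for t in engage if t], [t for t in disengage if t])
    return sum(off - on for on, off in pairs)

# Lock# 8 values from the summary above:
engage    = [0, 1164672969.0, 1164673578.0, 1164674812.0, 1164700372.0, 1164745711.0]
disengage = [0, 1164673508.0, 1164674325.0, 1164700288.0, 1164745010.0, 1164747085.0]
print(science_seconds(engage, disengage))  # 72774.0 s = 20hr 12min 54sec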
Any questions, please let me know. Hopefully it's helpful!
I forgot to tag OpsInfo
Corey alerted us to a low TCSY chiller flow, as alarmed on the side ops station. Jason inspected the chiller and found the usual nominal ~4 GPM, so all is good there. A trend shows that the flow sensor (which triggers the alarm and sits out on the floor on a pipe under ~BSC2) has dropped, similar to what it has done before (alog 31900). This time, however, it seems to be taking longer to recover. The value dropped from ~3.1 to 2.6 and is now bouncing around more than it had at the 3.1 value. In the last 50 minutes it seems to be on a slight rise, but it is not out of the woods yet. We spoke with Alastair just now to find out how low this can go before something else happens, such as the laser being tripped off. He is going to check whether there are any hard-coded trip set points somewhere. However, in the 90-day trend attached, Jason and I note (purple ellipses) that there have been quite a few times when this flow sensor dipped briefly below 2, and even below 1, and the laser was unaffected. So we're hoping the flow sensor rides this out long enough for us to schedule a Tuesday maintenance period to swap in the spare flow sensor.
Ops - if the TCSY laser does anything funny this weekend AND the flow drops well below 3 and is staying there, call us.
Jenne, Jim W, Dave:
In order to run the a2l_min_LHO.py script from an MEDM shell launcher, I had to explicitly define full paths for all files used. Jenne created a testWrite.py which I used to test this out.
The MEDM command block in the A2L_screen.adl file (callable from SITEMAP via the SUS pulldown) is
command[0] {
label="A2L Script"
name="xterm"
args="-hold -e 'cd /opt/rtcds/userapps/trunk/isc/common/scripts/decoup; /usr/bin/python /opt/rtcds/userapps/trunk/isc/common/scripts/decoup/a2l_min_LHO.py'"
}
This is used on the A2L screen that Jim made, accessible from the SUS tab on the sitemap. You can now run the A2L script by just clicking the button on the screen.
The first 3 times that this was used / tested, the A2L ran successfully and wrote the appropriate EPICS values, but the text files that summarize the results (so we don't have to trend for the data) didn't get written when the script was run from MEDM. Dave's work was to fix this.
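To illustrate the general idea (a hedged sketch only, not Dave's actual fix or Jenne's testWrite.py): when a script is launched from MEDM, the working directory is generally not the script's own directory, so output files need to be written with absolute paths, e.g. built from the script location itself:

import os

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

def write_summary(lines, filename='a2l_summary.txt'):  # hypothetical filename
    # Build an absolute path so the file lands next to the script,
    # regardless of the working directory MEDM launched us from.
    outpath = os.path.join(SCRIPT_DIR, filename)
    with open(outpath, 'w') as f:
        f.write('\n'.join(lines) + '\n')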
I plan to trend the data and hand-create the summary files for the three times they didn't get written, so that it's easier to run the beam position calculation script without losing these data points:
As requested by Jeff and the calibration review committee, I've done a number of checks related to tracking the behavior of PCAL lines in the online-calibrated strain. (Most of these checks accord with the "official" strain curve plots contained in https://dcc.ligo.org/DocDB/0121/G1501223/003/2015-10-01_H1_O1_Sensitivity.pdf) I report on these review checks below.
I started by choosing a recent lock stretch at LHO that includes segments in which the H1:DMT-CALIBRATED flag is both active and inactive (so that we can visualize the effect of both gated and ungated kappas on strain, with the expected behavior that gstlal_compute_strain defaults each kappa factor to its last computed median if ${IFO}:DMT-CALIBRATED is inactive). There is a 4-hour period from 8:00 to 12:00 UTC on 30 November 2016 that fits the bill (see https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161130/). I re-calibrated this stretch of data in --partial-calibration mode without kappas applied, and stored the output to
LHO: /home/aurban/O2/calibration/data/H1/
All data were computed with 32 second FFT length and 120 second stride. The following plots are attached:
The script used to generate these plots, and a LAL-formatted cache pointing to re-calibrated data from the same time period but without any kappa factors applied, is checked into the calibration SVN at https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Runs/PreER10/H1/Scripts/TDkappas/. A similar analysis on a stretch of Livingston data is forthcoming.
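For anyone repeating this, a hedged sketch (not the script in the SVN) of pulling the H1:DMT-CALIBRATED segments for that 4-hour stretch with gwpy, so the gated vs. ungated kappa behavior can be lined up against the flag; the ':1' version suffix is an assumption:

from gwpy.time import to_gps
from gwpy.segments import DataQualityFlag

start = to_gps('2016-11-30 08:00:00')
end   = to_gps('2016-11-30 12:00:00')

# Segments during which the online calibration was flagged good
flag = DataQualityFlag.query('H1:DMT-CALIBRATED:1', start, end)
print(flag.active)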
I have re-run the same analysis over 24 hours of Hanford data spanning the full UTC day on December 4th (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161204/), during which time LHO was continuously locked. This time the lowest-frequency PCAL line has a PCAL-to-DARM ratio that improves when kappas are applied, which is the expected behavior. This suggests that whatever was going on in the November 30 data, where the 36.5 Hz line briefly strayed to having worse agreement with kappas applied, was transient -- but the issue may still be worth looking into.
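A hedged sketch of the kind of line check being described, i.e. the ratio of ASDs at the 36.5 Hz PCAL line; the channel names and FFT settings here are assumptions rather than what the SVN scripts actually use, and the interesting quantity is how the ratio changes with and without kappas applied, not its absolute value:

from gwpy.timeseries import TimeSeries

start, end = 'Dec 4 2016 00:00:00', 'Dec 4 2016 01:00:00'
pcal = TimeSeries.get('H1:CAL-PCALY_RX_PD_OUT_DQ', start, end)  # assumed PCAL channel
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)        # assumed strain channel

f_line = 36.5  # Hz, lowest-frequency PCAL line
ratio = pcal.asd(32, 16).value_at(f_line) / darm.asd(32, 16).value_at(f_line)
print(ratio)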
Now that we have a few days of O2 under H1's belt, I wanted to give a shout out to the absolute latest Operator Sticky Notes we have so far. As a reminder, these Sticky Notes are kept on a wiki page here: https://lhocds.ligo-wa.caltech.edu/wiki/OperatorStickyNotes. Some of the older ones have been moved to an "old sticky notes" section at the bottom of the page. Anyone should feel free to update this list to keep it pertinent and useful for operations.
Operators please note these latest Sticky Notes:
12/2-12/9: High freq Calib Line Roaming changes (WP#6368)
12/1: TCS Power Stabilization Guardian knocking out of Observing (Kiwamu alog#32090)
12/1: Surviving rung up resonant modes via "RemoveStageWhitening" (Jenne alog#32089)
12/1: ISI_CONFIG during earthquakes (Sheila's comment for alog#32086)
12/1: Resonant Mode Scratch Pad! (Jeff's alog #32077)
11/30: LVEA SWEPT on 11/29&11/30 by Betsy, Dave, & Fil (alogs 31975, 32024).
11/30: Run A2L once a day until ~12/8; always run A2L on Tues Maintenance Day (Jenne alog#32022)
Curious... After reading the aLog about the Operator Sticky Note regarding running a2l once a day until 12/8, running Kiwamu's DTT coherence templates to determine whether the script was even necessary at this point, and asking Patrick if he had run the a2l script today (he had not), Keita called at 03:15 UTC to check on the IFO status. I told him that I had run the DTT measurement and didn't see any loss of coherence in the 20 Hz area, and asked if I still needed to follow the once-a-day prescription mentioned above. He told me that that plan had changed. If I understood him correctly, the script only needs to be run during the first two hours of a lock, and only if the coherence shows it is needed. If this is the case (I'm still not 100% certain), then the Sticky Note needs to be updated and the new plan disseminated amongst the operators.
Dave, TJ:
A recent plot of free memory showed that the rate of decrease increased around noon on Tuesday, Nov 15. TJ tracked this to a DIAG_MAIN code change in which a slow channel is averaged over 120 seconds every 2 seconds. Doing the math, this equates to 0.33 GB per day, which matches the increased memory consumption rate seen since Nov 15.
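A back-of-the-envelope check of that number, assuming the leaked buffer is a 16 Hz channel stored as 4-byte floats and that none of each 120-second fetch (made every 2 seconds) is ever freed; both are assumptions, not measurements:

bytes_per_fetch = 120 * 16 * 4                  # 120 s of 16 Hz data, 4 bytes/sample
bytes_per_day   = bytes_per_fetch / 2 * 86400   # one fetch every 2 s
print(bytes_per_day / 1e9)                      # ~0.33 GB/day, matching the observed rate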
To test this, during the lunchtime lock loss today we killed and restarted the DIAG_MAIN process. Attached is a plot of free memory from 9:30am PST Thursday (after the memory size of h1guardian was increased to 48GB) through 2:30pm PST today. The last data points show the memory recovered by the restart of DIAG_MAIN, and it agrees with 330 MB per day.
With the increased memory size we anticipate no memory problems for 3 months at the current rate of consumption. However, we will schedule periodic restarts of the machine or the DIAG_MAIN node during maintenance.
BTW: free memory is obtained from the 'free -m' command, taking the free value from the buffers/cache row, so recoverable buffers/cache memory is not counted as used.
This may point to a memory leak in the nds2-client. We should figure out exactly what is leaking the memory and try to plug it, rather than just relying on node restarts. The DIAG_MAIN node is not the only one that makes cdsutils.avg calls.
Vern pointed out that you can see the scattered light moving around by looking at the video cameras.
Attached are 2 videos captured from our digital cameras. They start within about 1 second of each other, but they're not at exactly the same times. On the PR3 camera, the motion is very obvious. On the PRM camera, you can kind of see some of the scatter in the center of the image changing with a period similar to that of PR3.
I also tried to take a 1 min video with my phone of the analog SRC camera that is on the front small TV display, where you can kind of see some scatter moving, particularly in the central vertical "bar" of the screen. The quality isn't so good, though, and it's hard to see in a video-of-a-video. But it seems like there is some motion of the scatter pattern there too.
Calibrated (in m/rtHz) ground displacements and St1 & St2 displacements for the ITMX ISI, comparing now and 12 hours ago. The first plot shows the X and Z ground displacements; solid lines are from 12 hours ago, dashed are from the last half hour. The peak has moved down in frequency, but gone up about an order of magnitude in amplitude. The second plot shows the St1 and St2 displacements; solid red and blue are from 12 hours ago, pink and light blue are from the last half hour. It looks like the sensor correction is not doing quite as well as I had hoped when we worked on this during the summer, so there is room for improvement here.
If we can't lock due to the ground motion, operators can try some of the USEISM configurations in SEI_CONF, probably USEISM_MOD_WIND given the 20 mph winds.
I'm opening a work permit to test a new configuration, using some narrow band sensor correction on ST2, keeping the WINDY configuration on ST1. I'll leave some instructions with the operators, but they can call me if they have questions or need some guidance.
Additional CBC injections are scheduled:
1164747030 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbhspin_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164751230 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbh_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164755430 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt
1164759630 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt
Since there was a GRB alert, Karan wanted to reschedule the injection that would be skipped and to reschedule one that didn't happen because both detectors weren't locked.
1164747030 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbhspin_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164751230 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbh_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164755430 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt
1164761100 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt
1164765300 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbhspin_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164769500 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbh_hwinj_snr24_1163501502_{ifo}_filtered.txt
Continuing the schedule for this roaming line with a move from 2501.3 to 3001.3 Hz.

Frequency  Planned Amplitude  Planned Duration  Actual Amplitude  Start Time                Stop Time                 Achieved Duration
(Hz)       (ct)               (hh:mm)           (ct)              (UTC)                     (UTC)                     (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------
1001.3     35k                02:00             39322.0           Nov 28 2016 17:20:44 UTC  Nov 30 2016 17:16:00 UTC  days @ 30 W
1501.3     35k                02:00             39322.0           Nov 30 2016 17:27:00 UTC  Nov 30 2016 19:36:00 UTC  02:09 @ 30 W
2001.3     35k                02:00             39322.0           Nov 30 2016 19:36:00 UTC  Nov 30 2016 22:07:00 UTC  02:31 @ 30 W
2501.3     35k                05:00             39322.0           Nov 30 2016 22:08:00 UTC  Dec 02 2016 20:16:00 UTC  days @ 30 W
3001.3     35k                05:00             39322.0           Dec 02 2016 20:17:00 UTC
3501.3     35k                05:00             39322.0
4001.3     40k                10:00             39322.0
4301.3     40k                10:00             39322.0
4501.3     40k                10:00             39322.0
4801.3     40k                10:00             39222.0
5001.3     40k                10:00             39222.0

Frequency  Planned Amplitude  Planned Duration  Actual Amplitude  Start Time                Stop Time                 Achieved Duration
(Hz)       (ct)               (hh:mm)           (ct)              (UTC)                     (UTC)                     (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------
1001.3     35k                02:00             39322.0           Nov 11 2016 21:37:50 UTC  Nov 12 2016 03:28:21 UTC  ~several hours @ 25 W
1501.3     35k                02:00             39322.0           Oct 24 2016 15:26:57 UTC  Oct 31 2016 15:44:29 UTC  ~week @ 25 W
2001.3     35k                02:00             39322.0           Oct 17 2016 21:22:03 UTC  Oct 24 2016 15:26:57 UTC  several days (at both 50 W and 25 W)
2501.3     35k                05:00             39322.0           Oct 12 2016 03:20:41 UTC  Oct 17 2016 21:22:03 UTC  days @ 50 W
3001.3     35k                05:00             39322.0           Oct 06 2016 18:39:26 UTC  Oct 12 2016 03:20:41 UTC  days @ 50 W
3501.3     35k                05:00             39322.0           Jul 06 2016 18:56:13 UTC  Oct 06 2016 18:39:26 UTC  months @ 50 W
4001.3     40k                10:00             39322.0           Nov 12 2016 03:28:21 UTC  Nov 16 2016 22:17:29 UTC  days @ 30 W (see LHO aLOG 31546 for caveats)
4301.3     40k                10:00             39322.0           Nov 16 2016 22:17:29 UTC  Nov 18 2016 17:08:49 UTC  days @ 30 W
4501.3     40k                10:00             39322.0           Nov 18 2016 17:08:49 UTC  Nov 20 2016 16:54:32 UTC  days @ 30 W (see LHO aLOG 31610 for caveats)
4801.3     40k                10:00             39222.0           Nov 20 2016 16:54:32 UTC  Nov 22 2016 23:56:06 UTC  days @ 30 W
5001.3     40k                10:00             39222.0           Nov 22 2016 23:56:06 UTC  Nov 28 2016 17:20:44 UTC  days @ 30 W (line was OFF and ON for Hardware INJ)
TITLE: 12/02 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 73.5385Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Mostly a quiet shift, but there were a couple SDF issues
LOG:
Corey had just relocked after an earthquake when I arrived. Shortly after going to OBSERVE, the TCS guardians knocked us out, as Kiwamu logged. Then it was quiet until just a couple of minutes ago, when SDF_DIAG kicked us out of OBSERVE. Looking at the log I find:
2016-12-02T07:51:08.18971 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 1
2016-12-02T07:51:10.70676 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 3
2016-12-02T07:51:17.18839 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 1
I can't find an (the? I didn't know we had one) NGN guardian, but I know where the CS_BRS guardian lives. When I looked at that guardian, it had just started a recenter cycle at the same time:
2016-12-02T07:51:07.99681 CS_BRS JUMP target: TURN_OFF_DAMPING
2016-12-02T07:51:07.99694 CS_BRS [RUN.exit]
2016-12-02T07:51:08.05901 CS_BRS JUMP: RUN->TURN_OFF_DAMPING
2016-12-02T07:51:08.05920 CS_BRS calculating path: TURN_OFF_DAMPING->RUN
2016-12-02T07:51:08.05959 CS_BRS new target: TURN_OFF_PZT_CTRL
...
Did the CS_BRS guardian throw an SDF difference in NGN that dropped us out of OBSERVE?
That's exactly what happened. I went and UNmonitored all of the CBRS channels in SDF so this can't happen again.
The rest of the NGN channels are still being monitored, but I'm not sure if they should be, since they are not tied into the IFO at all. I'll talk to the right people and find out.
Oh, yeah, I'm glad that you not-mon'ed the cBRS channels. Anything in the NGN Newtonian noise model is totally independent of the IFO, and shouldn't be stuff that'll knock us out of observing.
Probably the cBRS and its need for occasional damping is the only thing that will change some settings and knock us out of Observe, so maybe we can leave things as-is for now. The rest of the NGN channels are just seismometers, whose output doesn't go anywhere in the front ends (we collect the data offline and look at it). Since all of those calibrations are in and should be fine, I don't anticipate needing to change any other settings in the NGN EPICS channels.
I've been working on a prototype EPICS interface to the seismon system. It currently has several moving parts.
EPICS IOC
Runs on h1fescript0 as user controls (in a screen environment). Code is
/ligo/home/controls/seismon/epics_ioc/seismon_ioc.py
It is a simple EPICS database with no processing of signals.
EVENT Parser, EPICS writer
A Python script parses the data file produced by seismon_info and sends the data to EPICS. It also handles the countdown timer for future seismic events.
This runs on h1hwinj2 as user controls. Code is
/ligo/home/controls/seismon/bin/seismon_channel_access_client
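A hedged sketch of this parse-and-write step using pyepics; the PV names and the parsed-event fields below are hypothetical placeholders, not the actual seismon channel list:

from epics import caput

def push_event(event):
    """event: dict of fields parsed from the seismon_info data file."""
    caput('H1:SEI-SEISMON_EQ_MAGNITUDE', event['magnitude'])  # hypothetical PV name
    caput('H1:SEI-SEISMON_EQ_P_ARRIVAL', event['p_arrival'])  # hypothetical PV name
    caput('H1:SEI-SEISMON_EQ_S_ARRIVAL', event['s_arrival'])  # hypothetical PV name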
MEDM
A new MEDM screen called H1SEISMON_CUST.adl is linked to the SITEMAP via the SEI pulldown (labeled SEISMON). Snapshot attached.
The countdowns for the P, S, and R waves are color coded according to the arrival time of the seismic wave:
ORANGE more than 2 mins away
YELLOW between 1 and 2 minutes away
RED less than 1 minute away
GREY in the past
If the system freezes and the GPS time becomes older than 1 minute, a RED rectangle will appear to report the error.
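A minimal sketch of the countdown color logic described above (the thresholds are from the list; the function and return values are illustrative, not taken from the MEDM file):

def arrival_color(seconds_until_arrival):
    if seconds_until_arrival < 0:
        return 'GREY'    # arrival is in the past
    if seconds_until_arrival < 60:
        return 'RED'     # less than 1 minute away
    if seconds_until_arrival <= 120:
        return 'YELLOW'  # between 1 and 2 minutes away
    return 'ORANGE'      # more than 2 minutes away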
Just noticed this post; this is great.
Let us know if you run into any bugs or trouble with the code.
We had been in a GRB stand-down period from 23:13 UTC before H1 lost lock.