As requested by Jeff and the calibration review committee, I've done a number of checks related to tracking the behavior of PCAL lines in the online-calibrated strain. (Most of these checks accord with the "official" strain curve plots contained in https://dcc.ligo.org/DocDB/0121/G1501223/003/2015-10-01_H1_O1_Sensitivity.pdf) I report on these review checks below.
I started by choosing a recent lock stretch at LHO that includes segments in which the H1:DMT-CALIBRATED flag is both active and inactive (so that we can visualize the effect of both gated and ungated kappas on strain, with the expected behavior that gstlal_compute_strain defaults each kappa factor to its last computed median if ${IFO}:DMT-CALIBRATED is inactive). There is a 4-hour period from 8:00 to 12:00 UTC on 30 November 2016 that fits the bill (see https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161130/). I re-calibrated this stretch of data in --partial-calibration mode without kappas applied, and stored the output to
LHO: /home/aurban/O2/calibration/data/H1/
All data were computed with 32 second FFT length and 120 second stride. The following plots are attached:
The script used to generate these plots, and a LAL-formatted cache pointing to re-calibrated data from the same time period but without any kappa factors applied, are checked into the calibration SVN at https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Runs/PreER10/H1/Scripts/TDkappas/. A similar analysis on a stretch of Livingston data is forthcoming.
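For anyone wanting to spot-check the re-calibrated output against the lowest-frequency PCAL line, a minimal gwpy sketch along these lines should work (the cache file name and strain channel name below are assumptions and may differ for the --partial-calibration, no-kappa output):

# Minimal sketch: load re-calibrated strain from the LAL-format cache and look at
# the ASD around the lowest-frequency PCAL line (~36.5 Hz). The cache file name
# and channel name are placeholders; the real ones live with the data/script above.
from gwpy.timeseries import TimeSeries

cache_file = 'H1_no_kappas.lcf'                       # hypothetical cache name
# LAL cache format: the last column of each line is the frame file URL
frames = [line.split()[-1].replace('file://localhost', '') for line in open(cache_file)]

strain = TimeSeries.read(frames, 'H1:GDS-CALIB_STRAIN',     # channel name assumed
                         start=1164528017, end=1164542417)  # 2016-11-30 08:00-12:00 UTC
asd = strain.asd(fftlength=32, overlap=16)                  # 32 s FFTs, as in the analysis
print(asd.crop(36, 37))                                     # region of the ~36.5 Hz PCAL line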
I have re-run the same analysis over 24 hours of Hanford data spanning the full UTC day on December 4th (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161204/), during which time LHO was continuously locked. This time the lowest-frequency PCAL line has a PCAL-to-DARM ratio that improves when kappas are applied, which is the expected behavior. This suggests that whatever was going on in the November 30 data, where the 36.5 Hz line briefly strayed to having worse agreement with kappas applied, was transient -- but the issue may still be worth looking into.
Now that we have a few days of O2 under H1's belt, I wanted to give a shout-out to the absolute latest Operator Sticky Notes we have so far. As a reminder, these Sticky Notes live on a wiki page here: https://lhocds.ligo-wa.caltech.edu/wiki/OperatorStickyNotes. Some of the older ones have been moved to an "old sticky notes" section at the bottom of the page. Anyone should feel free to update this list to make it pertinent and useful for operations.
Operators please note these latest Sticky Notes:
12/2-12/9: High freq Calib Line Roaming changes (WP#6368)
12/1: TCS Power Stabilization Guardian knocking out of Observing (Kiwamu alog#32090)
12/1: Surviving rung up resonant modes via "RemoveStageWhitening" (Jenne alog#32089)
12/1: ISI_CONFIG during earthquakes (Sheila's comment for alog#32086)
12/1: Resonant Mode Scratch Pad! (Jeff's alog #32077)
11/30: LVEA SWEPT on 11/29&11/30 by Betsy, Dave, & Fil (alogs 31975, 32024).
11/30: Run A2L once a day until ~12/8; always run A2L on Tues Maintenance Day (Jenne alog#32022)
Curious... After reading the aLog about the Operator Sticky Note on running a2l once a day until 12/8, running Kiwamu's DTT coherence templates to determine whether the script was even necessary at this point, and asking Patrick if he had run the a2l script today (he had not), Keita called at 03:15 UTC to check on the IFO status. I told him that I had run the DTT measurement and didn't see any loss of coherence in the 20 Hz area, and asked if I still needed to follow the once-a-day prescription mentioned above. He told me that plan had changed: if I understood him correctly, the script only needs to be run during the first two hours of a lock, and only if the coherence shows that it is needed. If this is the case (I'm still not 100% certain), then the Sticky Note needs to be updated and the new plan disseminated among the operators.
Dave, TJ:
A recent plot of free memory showed that the rate of decrease increased around noon on Tuesday, Nov 15. TJ tracked this to a DIAG_MAIN code change in which a slow channel is averaged over 120 seconds every 2 seconds. Doing the math, this equates to about 0.33 GB per day, which matches the increased memory consumption rate seen since Nov 15.
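For reference, here is the back-of-the-envelope arithmetic behind that number (a sketch only; the per-call figure is inferred from the totals, not measured):

# Rough arithmetic behind the ~0.33 GB/day estimate (illustrative only).
seconds_per_day = 86400
call_interval = 2                                  # DIAG_MAIN averages the channel every 2 s
calls_per_day = seconds_per_day / call_interval    # 43200 calls per day

observed_leak_per_day = 0.33e9                     # bytes/day, from the free-memory trend
leak_per_call = observed_leak_per_day / calls_per_day
print('%.1f kB apparently retained per 120 s average' % (leak_per_call / 1e3))
# ~7.6 kB per call, comparable to 120 s of a 16 Hz slow channel if stored as
# single-precision floats (an assumption, not a measurement).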
To test this, during the lunchtime lock loss today we killed and restarted the DIAG_MAIN process. Attached is a plot of free memory between 9:30am PST Thursday (after the memory size of h1guardian was increased to 48 GB) and 2:30pm PST today. The last data points show the memory recovered by the restart of DIAG_MAIN, consistent with a consumption rate of about 330 MB per day.
With the increased memory size we anticipate no memory problems for 3 months at the current rate of consumption. However, we will schedule periodic restarts of the machine or the DIAG_MAIN node during maintenance.
BTW: free memory is obtained from the 'free -m' command, taking the free value from the buffers/cache row. In other words, the recoverable buffers/cache memory is not counted as used.
This may point to a memory leak in the nds2-client. We should figure out exactly what is leaking the memory and try to plug it, rather than just relying on node restarts: the DIAG_MAIN node is not the only one that makes cdsutils.avg calls.
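The usage pattern in question is essentially a polling loop like the sketch below (the channel name is made up and the real DIAG_MAIN test logic is omitted); anything the underlying nds2-client fails to free is then accumulated tens of thousands of times per day.

# Illustrative polling loop of the kind DIAG_MAIN runs (channel name is a placeholder,
# and the exact call signature used by DIAG_MAIN may differ).
# Each cdsutils.avg() call fetches 120 s of data through the nds2-client; if those
# buffers are never released, memory grows with every call.
import time
import cdsutils

while True:
    value = cdsutils.avg(-120, 'H1:EXAMPLE-SLOW_CHANNEL')  # 120 s average of a slow channel
    # ... DIAG_MAIN would compare 'value' against a threshold and raise a notification ...
    time.sleep(2)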
Vern pointed out that you can see the scattered light moving around, by looking at the video cameras.
Attached are 2 videos captured from our digital cameras. They start within about 1 sec of each other, but they don't cover exactly the same times. On the PR3 camera, the motion is very obvious. On the PRM camera, you can kind of see some of the scatter in the center of the image changing with a similar period to that of PR3.
I also tried to take a 1 min video with my phone of the analog SRC camera on the front small TV display, where you can kind of see some scatter moving, particularly in the central vertical "bar" of the screen. The quality isn't great and it's hard to see in a video-of-a-video, but it seems like there is some motion of the scatter pattern there too.
Calibrated (in m/rtHz) ground displacements and St1 & St2 displacements for the ITMX ISI, comparing now and 12 hours ago. The first plot shows the X and Z ground displacements; solid lines are from 12 hours ago, dashed are from the last half hour. The peak has moved down in frequency, but gone up about an order of magnitude in amplitude. The second plot shows the St1 and St2 displacements; solid red and blue are from 12 hours ago, pink and light blue are from the last half hour. It looks like the sensor correction is not doing quite as well as I had hoped when we worked on this during the summer; there is room for improvement here.
If we can't lock due to the ground motion, operators can try some of the USEISM configurations in SEI_CONF, probably USEISM_MOD_WIND given the 20 mph winds.
I'm opening a work permit to test a new configuration, using some narrow band sensor correction on ST2, keeping the WINDY configuration on ST1. I'll leave some instructions with the operators, but they can call me if they have questions or need some guidance.
Additional CBC injections are scheduled:
1164747030 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbhspin_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164751230 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbh_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164755430 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt
1164759630 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt
Since there was a GRB alert, Karan wanted to reschedule the injection that would be skipped and to reschedule one that didn't happen because both detectors weren't locked.
1164747030 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbhspin_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164751230 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbh_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164755430 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt
1164761100 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt
1164765300 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbhspin_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164769500 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbh_hwinj_snr24_1163501502_{ifo}_filtered.txt
Continuing the schedule for this roaming line with a move from 2501.3 to 3001.3 Hz.

Current schedule:
Frequency (Hz) | Planned Amplitude (ct) | Planned Duration (hh:mm) | Actual Amplitude (ct) | Start Time (UTC) | Stop Time (UTC) | Achieved Duration (hh:mm)
1001.3 | 35k | 02:00 | 39322.0 | Nov 28 2016 17:20:44 | Nov 30 2016 17:16:00 | days @ 30 W
1501.3 | 35k | 02:00 | 39322.0 | Nov 30 2016 17:27:00 | Nov 30 2016 19:36:00 | 02:09 @ 30 W
2001.3 | 35k | 02:00 | 39322.0 | Nov 30 2016 19:36:00 | Nov 30 2016 22:07:00 | 02:31 @ 30 W
2501.3 | 35k | 05:00 | 39322.0 | Nov 30 2016 22:08:00 | Dec 02 2016 20:16:00 | days @ 30 W
3001.3 | 35k | 05:00 | 39322.0 | Dec 02 2016 20:17:00 | (in progress) |
3501.3 | 35k | 05:00 | 39322.0 | | |
4001.3 | 40k | 10:00 | 39322.0 | | |
4301.3 | 40k | 10:00 | 39322.0 | | |
4501.3 | 40k | 10:00 | 39322.0 | | |
4801.3 | 40k | 10:00 | 39222.0 | | |
5001.3 | 40k | 10:00 | 39222.0 | | |

For reference, the earlier set of measurements at these frequencies:
Frequency (Hz) | Planned Amplitude (ct) | Planned Duration (hh:mm) | Actual Amplitude (ct) | Start Time (UTC) | Stop Time (UTC) | Achieved Duration (hh:mm)
1001.3 | 35k | 02:00 | 39322.0 | Nov 11 2016 21:37:50 | Nov 12 2016 03:28:21 | ~several hours @ 25 W
1501.3 | 35k | 02:00 | 39322.0 | Oct 24 2016 15:26:57 | Oct 31 2016 15:44:29 | ~week @ 25 W
2001.3 | 35k | 02:00 | 39322.0 | Oct 17 2016 21:22:03 | Oct 24 2016 15:26:57 | several days (at both 50 W and 25 W)
2501.3 | 35k | 05:00 | 39322.0 | Oct 12 2016 03:20:41 | Oct 17 2016 21:22:03 | days @ 50 W
3001.3 | 35k | 05:00 | 39322.0 | Oct 06 2016 18:39:26 | Oct 12 2016 03:20:41 | days @ 50 W
3501.3 | 35k | 05:00 | 39322.0 | Jul 06 2016 18:56:13 | Oct 06 2016 18:39:26 | months @ 50 W
4001.3 | 40k | 10:00 | 39322.0 | Nov 12 2016 03:28:21 | Nov 16 2016 22:17:29 | days @ 30 W (see LHO aLOG 31546 for caveats)
4301.3 | 40k | 10:00 | 39322.0 | Nov 16 2016 22:17:29 | Nov 18 2016 17:08:49 | days @ 30 W
4501.3 | 40k | 10:00 | 39322.0 | Nov 18 2016 17:08:49 | Nov 20 2016 16:54:32 | days @ 30 W (see LHO aLOG 31610 for caveats)
4801.3 | 40k | 10:00 | 39222.0 | Nov 20 2016 16:54:32 | Nov 22 2016 23:56:06 | days @ 30 W
5001.3 | 40k | 10:00 | 39222.0 | Nov 22 2016 23:56:06 | Nov 28 2016 17:20:44 | days @ 30 W (line was OFF and ON for Hardware INJ)
Ops has been seeing high dust count alarms in the PSL. I trended the PSL and LVEA dust monitors over 15 and 90 days. The trends show constant major-alarm-level dust counts in the PSL enclosure (monitor PSL101) for 0.3 µm particles, and several 0.5 µm alarm-level events. This holds for the PSL anti-room (PSL102) as well, although with fewer alarm-raising events. The PSL enclosure and anti-room have not been cleaned for quite some time. We will arrange for a cleaning of the PSL during the next maintenance window. If this does not lower the PSL dust counts, we will need to start looking into the air filtration system feeding the PSL enclosure.
This was a quick EQ during the OWL shift at 6:16am PST (14:16 UTC). H1 rode through it.
TITLE: 12/02 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 73.5385Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Mostly a quiet shift, but there were a couple SDF issues
LOG:
Corey had just relocked after an earthquake when I arrived. Shortly after going to OBSERVE the TCS guardians knocked us out, as Kiwamu logged. Then it was quiet until just a couple of minutes ago, when DIAG_SDF kicked us out of OBSERVE. Looking at the log I find:
2016-12-02T07:51:08.18971 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 1
2016-12-02T07:51:10.70676 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 3
2016-12-02T07:51:17.18839 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 1
I can't find an (the? I didn't know we had one) NGN guardian, but I know where the CS_BRS guardian lives. When I looked at that guardian, it had just started a recenter cycle at the same time:
2016-12-02T07:51:07.99681 CS_BRS JUMP target: TURN_OFF_DAMPING
2016-12-02T07:51:07.99694 CS_BRS [RUN.exit]
2016-12-02T07:51:08.05901 CS_BRS JUMP: RUN->TURN_OFF_DAMPING
2016-12-02T07:51:08.05920 CS_BRS calculating path: TURN_OFF_DAMPING->RUN
2016-12-02T07:51:08.05959 CS_BRS new target: TURN_OFF_PZT_CTRL
...
Did the CS_BRS guardian throw an SDF difference in NGN that dropped us out of OBSERVE?
That's exactly what happened. I went and un-monitored all of the CBRS channels in SDF so this can't happen again.
The rest of the NGN channels are being monitored, but I'm not sure if they should be since they are not tied into the IFO at all. I'll talk to the right people and find out.
Oh, yeah, I'm glad that you not-mon'ed the cBRS channels. Anything in the NGN Newtonian noise model is totally independent of the IFO, and shouldn't be stuff that'll knock us out of observing.
Probably the cBRS and its need for occasional damping is the only thing that will change some settings and knock us out of Observe, so maybe we can leave things as-is for now. The rest of the NGN channels are just seismometers, whose output doesn't go anywhere in the front ends (we collect the data offline and look at it). Since all of those calibrations are in and should be fine, I don't anticipate needing to change any other settings in the NGN EPICS channels.
Corey, Sheila, Jim W
TerraMon and LLO warned Corey that this EQ was coming, with a predicted R-wave velocity of 4.8 um/second (it showed up in our EQ-band BLRMS, peaking at about 1 um/second RMS at about the time predicted). Our useism BLRMS is around 0.3-0.4 um/second right now.
Since Corey had a warning he consulted with Jim W who suggested trying BLEND_QUIET_250_SC_EQ for both end station ISIs (one at a time). The attached screenshot shows the transition from BLEND_QUIET_250_SC_EQ back to our normal windy configuration BLEND_QUIET_250_SC_BRS, which is much quieter at 50-70 mHz.
Jim explains that this sensor correction has a notch at around 50 mHz (he will attach a plot). It reduces the amount of isolation we get at the microseism, which was fine when he first tested it during the summer, when the microseism was very low.
If an EQ moves the whole site in common, we can lock all the chambers to the ground at EQ frequencies to reduce the common motion. Our problem this time was probably that we switched only the end stations without changing the corner.
For now, the recommended operator action during earthquakes is:
If the IFO is locked, don't do anything. We want to collect some data about what size EQ we can ride out with our normal WINDY configuration.
If the IFO unlocks, and the earthquake is going to be large enough to trip ISIs (several um/sec), switch the ISI configuration node to LARGE_EQ_NOBRSXY. This just prevents the ISIs from tripping.
Once the BLRMS are back to around 1 um/sec, you can set SEI_CONF back to WINDY and ask ISC_LOCK to try LOCKING_ARMS_GREEN. If the arms stay locked for a minute or so, you can try relocking the IFO.
I took a quick look at Seismon performance on the MIT test setup. The internal notice was written a few hundred seconds after the earthquake.

Internal:
File: /Seismon/Seismon/eventfiles/private/pt16336050-1164667247.xml
EQ GPS: 1164667247.0
Written GPS: 1164667525.0
H1 (P): 1164667949.1
L1 (P): 1164667773.9

We beat the P-wave arrival by about 200 s at LLO and 400 s at LHO. Arrivals below:

-bash-4.2$ /usr/bin/python seismon_info -p /Seismon/Seismon/seismon/input/seismon_params_earthquakesInfo.txt -s 1164667243 -e 1164670843 --eventfilesType private --doEarthquakes --doEPICs /Seismon/Seismon/all/earthquakes_info/1164667243-1164670843

1164667246.0 6.3 1164667949.1 1164667963.2 1164671462.6 1164669655.5 1164668932.7 4.52228e-06 1164667900 1164671500 -15.3 -70.5 8.433279e+06 H1
1164667246.0 6.3 1164667773.9 1164667787.6 1164670002.2 1164668821.0 1164668348.5 1.12682e-05 1164667700 1164670100 -15.3 -70.5 5.512348e+06 L1
1164667246.0 6.3 1164668050.7 1164668064.9 1164672594.3 1164670302.2 1164669385.3 6.73904e-06 1164668000 1164672600 -15.3 -70.5 1.069658e+07 G1
1164667246.0 6.3 1164668041.4 1164668055.5 1164672479.8 1164670236.7 1164669339.5 3.22116e-06 1164668000 1164672500 -15.3 -70.5 1.046759e+07 V1
1164667246.0 6.3 1164667831.5 1164667845.3 1164670438.5 1164669070.3 1164668523.0 6.99946e-06 1164667800 1164670500 -15.3 -70.5 6.385045e+06 MIT
1164667243.2 6.3 1164667948.9 1164667953.5 1164671451.9 1164669648.2 1164668926.7 4.74116e-06 1164667900 1164671500 -15.3 -70.8 8.417411e+06 H1
1164667243.2 6.3 1164667773.6 1164667778.0 1164669993.8 1164668815.0 1164668343.5 1.15920e-05 1164667700 1164670000 -15.3 -70.8 5.501199e+06 L1
1164667243.2 6.3 1164668052.2 1164668056.8 1164672601.7 1164670305.2 1164669386.6 7.10833e-06 1164668000 1164672700 -15.3 -70.8 1.071690e+07 G1
1164667243.2 6.3 1164668043.0 1164668047.6 1164672488.9 1164670240.7 1164669341.5 3.35518e-06 1164668000 1164672500 -15.3 -70.8 1.049125e+07 V1
1164667243.2 6.3 1164667832.1 1164667836.6 1164670436.3 1164669067.8 1164668520.5 7.31460e-06 1164667800 1164670500 -15.3 -70.8 6.386137e+06 MIT
1164667247.0 6.2 1164667941.5 1164667978.2 1164671455.2 1164669651.7 1164668930.3 2.75907e-06 1164667900 1164671500 -15.4 -71.0 8.416356e+06 H1
1164667247.0 6.2 1164667767.1 1164667802.5 1164669998.4 1164668819.2 1164668347.6 7.79549e-06 1164667700 1164670000 -15.4 -71.0 5.502860e+06 L1
1164667247.0 6.2 1164668045.1 1164668082.4 1164672612.9 1164670313.2 1164669393.4 3.86408e-06 1164668000 1164672700 -15.4 -71.0 1.073178e+07 G1
1164667247.0 6.2 1164668035.9 1164668073.2 1164672500.5 1164670249.0 1164669348.4 1.94756e-06 1164668000 1164672600 -15.4 -71.0 1.050694e+07 V1
1164667247.0 6.2 1164667825.7 1164667861.5 1164670443.9 1164669073.8 1164668525.8 4.24775e-06 1164667800 1164670500 -15.4 -71.0 6.393872e+06 MIT
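The quoted lead times follow directly from the timestamps above:

# Warning margin = predicted P-wave arrival minus the time the notice was written
written_gps = 1164667525.0
print(1164667949.1 - written_gps)   # H1 P-wave: ~424 s of warning
print(1164667773.9 - written_gps)   # L1 P-wave: ~249 s of warning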
I have looked at all the A2L data that we have since the last time the alignment was significantly changed, which was Monday afternoon after the PSL PZT work (alog 31951). This is the first attached plot.
The first data point is a bit different than the rest, although I'm not totally sure why. Other than that, we're mostly holding our spot positions quite constant. The 3rd-to-last point, taken in the middle of the overnight lock stretch (alog 32004) shows a bit of a spot difference on ETMX, particularly in yaw, but other than that we're pretty solid.
For the next ~week, I'd like operators to run the test mass a2l script (a2l_min_lho.py) about once per day, so that we can track the spot positions a bit. After that, we'll move to our observing run standard of running a2l once a week as part of Tuesday maintenance.
The second attached plot is just the last 2 points from the current lock. The first point was taken immediately upon lock; the second was taken about 30 min into the lock. The maximum spot movement in the figure appears to be about 0.2 mm, but I think that is within the error of the A2L measurement. I can't find it right now, but once upon a time I ran A2L 5 or 7 times in a row to see how consistent the answer is, and I think I remember the stdev was about 0.3 mm.
The point of the second plot is that at 30W, it doesn't seem to make a big difference if we run a2l immediately or a little later, so we can run it for our once-a-days as soon as we lock, or when we're otherwise out of Observe, and don't have to hold off on going to Observe just for A2L.
In case you don't have it memorized, here's the location of the A2L script:
A2L: How to know if it's good or bad at the moment.
Here is a dtt template to passively measure a2l quality: /opt/rtcds/userapps/release/isc/common/scripts/decoup/DARM_a2l_passive.xml
It measures the coherence between DARM and ASC drive to all test masses using 404 seconds worth of data.
All references started 25 seconds or so after the last a2l was finished and 9 or 10 seconds before the intent bit was set (GPS 116467290).
"Now" is actually about 15:00 UTC, 7AM PT, and you can see that the coherence at around 20Hz (where the ASC feedback to TM starts to be dominated by the sensing noise) significantly worse, and DARM itself was also worse, so you can say that the a2l was worse AT THIS PARTICULAR POINT IN TIME.
Thing is, this might slowly drift around and go better or worse. You can run this template for many points in time (for example each hour), and if the coherence seems to be consistently worse than right after a2l, you know that we need a2l. (A better approach is to write a script to plot the coherence as a time series, which is a good project for fellows.)
If it is repeatedly observed over multiple lock stretches (without running a2l) that the coherence starts small at the beginning of lock and becomes larger an hour or two into the lock, that's the sign that we need to run a2l an hour or two after the lock.
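A minimal sketch of that suggested script, using gwpy; the ASC drive channel below is a placeholder, and the authoritative channel list is the one in the DARM_a2l_passive.xml template:

# Sketch of the suggested coherence-trend script (channel names are examples only).
from gwpy.timeseries import TimeSeries

DARM = 'H1:CAL-DELTAL_EXTERNAL_DQ'
DRIVE = 'H1:SUS-ETMX_L2_LOCK_P_OUT_DQ'    # placeholder for one ASC drive to a test mass

def band_coherence(start, end, fmin=15, fmax=25):
    """Mean DARM/drive coherence in [fmin, fmax] Hz over one stretch of data."""
    darm = TimeSeries.get(DARM, start, end)
    drive = TimeSeries.get(DRIVE, start, end)
    if darm.sample_rate != drive.sample_rate:
        darm = darm.resample(drive.sample_rate)    # match rates before coherence
    coh = darm.coherence(drive, fftlength=8, overlap=4)
    return coh.crop(fmin, fmax).mean()

# e.g. evaluate once per hour through a lock stretch (404 s per point, as above)
# and plot the resulting values against time to see whether a2l is drifting.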
Could someone on site check the coherence of DARM around 1080 Hz with the usual jitter witnesses? We're not able to do it offsite because the best witness channels are stored with a Nyquist of 1024 Hz. What we need is the coherence from 1000 to 1200 Hz with things like the IMC WFS (especially the sum, I think). The DBB would be nice if available, but I think it's usually shuttered. There's indirect evidence from hVeto that this is jitter, so if there is a good witness channel we'll want to increase its sampling rate in case we get an SN or BNS that has power in this band.
@Andy I'll have a look at IOP channels.
Evan G., Keita K. Upon request, I'm attaching several coherence plots for the 1000-1200 Hz band between H1:CAL-DELTAL_EXTERNAL_DQ and many IMC WFS IOP channels (IOP-ASC0_MADC0_TP_CH[0-12]), ISS intensity noise witness channels (PSL-ISS_PD[A,B]_REL_OUT_DQ), PSL QPD channels (PSL-ISS_QPD_D[X,Y]_OUT_DQ), ILS and PMC HV mon channels, and ISS second loop QPD channels. Unfortunately, there is low coherence between all of these channels and DELTAL_EXTERNAL, so we don't have any good leads here.
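For reference, a band-limited coherence of this kind can be computed on site with a few lines of gwpy (the IOP channel name below is a guess at the full frame channel name, and the times are placeholders; the attached plots were not made with this exact script):

# Sketch of the 1000-1200 Hz coherence check (channel name and times are placeholders).
from gwpy.timeseries import TimeSeries

start, end = 1164844817, 1164845417            # illustrative 10-minute stretch
darm = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)
witness = TimeSeries.get('H1:IOP-ASC0_MADC0_TP_CH0', start, end)   # IMC WFS IOP channel (assumed name)

witness = witness.resample(darm.sample_rate)   # IOP channels run faster than DELTAL
coh = darm.coherence(witness, fftlength=4, overlap=2)
plot = coh.crop(1000, 1200).plot()
plot.savefig('darm_vs_imcwfs_coherence_1000_1200Hz.png')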
Betsy, Keita, Daniel
As part of the LVEA sweep prior to the start of O2, this morning we spent over an hour cleaning up miscellaneous cables and test equipment in the LVEA and electronics room. There were quite a few cables dangling from various racks; here's the full list of what we cleaned up and where:
Location | Rack | Slot | Description |
Electronics Room | ISC C2 | | Found unused servo controller/cables/mixer on top of rack. Only power was connected, but lots of dangling cables. Removed entire unit and cables. |
Electronics Room | ISC C3 | 19 | D1000124 - Port #7 had dangling cable - removed and terminated. |
Electronics Room | ISC C4 | Top | Found dangling cable from "ALS COM VCO" Port 2 of 6. Removed and terminated. |
Electronics Room | Rack next to PSL rack | | Dangling fiber cable. Left it... |
LVEA near PSL | ISC R4 | 18 | ADC Card Port stickered "AO IN 2" - Dangling BNC removed. |
LVEA near PSL | ISC R4 | 18 to PSL P1 | BNC-Lemo with resistor blue box connecting "AO2" on R4 to "TF IN" on the P1 PMC Locking Servo Card - removed. |
LVEA near PSL | ISC R4 | 20 | T'd dangling BNC on back of chassis - removed T and unused BNC. |
LVEA near PSL | | | Disconnected unused O-scope, analyzer, and extension cords near these racks. |
LVEA | Under HAM1 south | | Disconnected extension cord running to powered-off Beckhoff rotation stage termination box. Richard said the unit is to be removed someday altogether. |
LVEA | Under HAM4 NE cable tray | | Turned off (via power cord) the TV monitor that was on. |
LVEA | HAM6 NE corner | | Kiwamu powered off and removed power cables from OSA equipment near the HAM6 ISCT table. |
LVEA | | | Unplugged/removed other various unused power strips and extension cords. |
I also threw the main breaker to the OFF position on both of the free standing unused transformer units in the LVEA - one I completely unplugged because I thought I could still hear it humming.
No monitors or computers appear to be on except the 2 VE Beckhoff ones that must remain on (in their stand-alone racks on the floor).
We'll ask the early morning crew to sweep for Phones, Access readers, lights, and WIFI first thing in the morning.
Final walk thru of LVEA was done this morning. The following items were unplugged or powered off:
Phones
1. Next to PSL Rack
2. Next to HAM 6
3. In CER
Card Readers
1. High Bay entry
2. Main entry
Wifi
1. Unplugged network cable from patch panel in FAC Rack
Added this to Ops Sticky Notes page.
Kyle R., Gerardo M., Richard M. (completes WPs #6332 and #6360)

Initial pressure indication shortly after being energized was 3 x 10^-3 Torr (PT180 is a "wide-range" gauge). If real, this would be higher than expected for the ~17 hrs. of accumulation -> "burped" the accumulated gas into the connected pump setup while Gerardo monitored its Pirani gauge -> it gave no indication of a change and remained steady at 1.9 x 10^-3 Torr, which is as expected. Neither gauge is calibrated in this pressure region of interest. Noted PT180 responded as expected to being combined with the local turbo -> isolated the temporary local pump setup, valved-in (exposed/combined) PT180 to the site vacuum volume, vented the locally mounted turbo and removed it from the PT180 hardware -> installed a 1 1/2" O-ring valve and a 2.75" CF to NW40 adapter in place of the turbo and pumped to rough vacuum the space between the two 1 1/2" pump port valves.
Jenne, Sheila, Keita
We had another instance of a jump in POP90, in which both the I and Q phases increased. We think this is a problem with the readback, similar to what is described in 31181.
We were acquiring lock, and had no ASC running on the SRC. We looked at witness sensors for all the interferometer optics, and it looks like none of them moved at the time. We also don't see changes in other RF sensors, like AS36, AS90, or POP18. We looked at both quadratures of POP90 before rotation and it seems to have both a phase shift and a 3.5 dB increase in the sum of I+Q. The RFmon and LO mon on the demod don't have any jumps nearly that large, so if it is a problem in the demod it is probably downstream of the directional coupler for the RFmon.
This seems not to be the same as the jumps in SRC alignment that started after last Tuesday's maintenance (31804, 31865, and other alogs), but since the symptom is very similar, it would make debugging the other problem easier if this issue could be fixed. Since we use POP90 for a dither lock of the SRM angle during lock acquisition, this can cause a lockloss if it happens while we are trying to lock.
I tested the chassis that was pulled out (S1000977). During the testing I did not see any level changes or glitches in either the I or Q channel outputs, except when a pair of cables attached to the front panel via a BNC tee were strongly wiggled. Removal of the tee and wiggling the cables directly didn't induce any changes.

Attached is an oscilloscope trace of the I&Q monitor output for the POP90 channel. It is fuzzy because of an RF amplitude modulation I was applying; however, the distortion discontinuities are present with the modulation off. Daniel pointed out to me that the distortion is due to my not looking at the signal differentially on the oscilloscope. Sure enough, it looks a lot cleaner when processed differentially. I did however notice that if the RF input is more than -11 dBm, the monitor signals on the rear panel are saturated/distorted. The only other output level change observed was when the chassis was turned off in the evening and back on again the following morning.

The chassis (strictly) failed the following tests:
- front panel monitor coupling factors (all channels at 100 MHz, failed by less than 1 dB)
- IF beat note amplitude versus frequency (all channels, I & Q, at 100 kHz, failed by as little as 50 mV and as much as 360 mV)
- IF output noise level (channel 3, I & Q, failed by as little as 3 dB and as much as 4 dB). Channel 3 is labelled as REFLAIR_27.

By any chance, when the chassis was in the rack, was there more than one cable attached to the (one) front panel BNC connector?
I've been working on a prototype epics interface to the seismon system. It currently has several moving parts.
EPICS IOC
Runs on h1fescript0 as user controls (in a screen environment). Code is
/ligo/home/controls/seismon/epics_ioc/seismon_ioc.py
It is a simple epics database with no processing of signals.
EVENT Parser, EPICS writer
A python script parses the data file produced by seismon_info and sends the data to EPICS. It also handles the count-down timer for future seismic events.
This runs on h1hwinj2 as user controls. Code is
/ligo/home/controls/seismon/bin/seismon_channel_access_client
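In outline, the parser/writer is a poll-parse-and-caput loop. The sketch below (using pyepics) shows the shape of it only; the file layout, field positions, and EPICS channel names are placeholders, not the real seismon conventions:

# Schematic event parser / EPICS writer (paths, field positions and PV names are placeholders).
import time
from epics import caput

EVENT_FILE = '/Seismon/Seismon/all/earthquakes_info/latest.txt'   # hypothetical output file
PREFIX = 'H1:CDS-SEISMON_'                                        # hypothetical PV prefix

def push_latest_event():
    with open(EVENT_FILE) as f:
        fields = f.readlines()[-1].split()            # most recent event record
    caput(PREFIX + 'EQ_GPS', float(fields[0]))        # event GPS time
    caput(PREFIX + 'EQ_MAG', float(fields[1]))        # magnitude
    caput(PREFIX + 'P_ARRIVAL_GPS', float(fields[2])) # predicted P-wave arrival

while True:
    push_latest_event()       # the real client also maintains the countdown timers
    time.sleep(10)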
MEDM
A new MEDM screen called H1SEISMON_CUST.adl is linked to the SITEMAP via the SEI pulldown (called SEISMON). Snapshot attached.
The countdowns for the P, S, and R waves are color coded according to the arrival time of the seismic wave:
ORANGE more than 2 mins away
YELLOW between 1 and 2 minutes away
RED less than 1 minute away
GREY in the past
If the system freezes and the GPS time becomes older than 1 minute, a RED rectangle will show to report the error.
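The color rule amounts to a simple threshold on (arrival time minus now); a sketch of that logic:

# Sketch of the countdown color coding described above (not the actual MEDM/EPICS code).
def countdown_color(arrival_gps, now_gps):
    dt = arrival_gps - now_gps        # seconds until the seismic wave arrives
    if dt < 0:
        return 'GREY'                 # arrival is in the past
    elif dt < 60:
        return 'RED'                  # less than 1 minute away
    elif dt < 120:
        return 'YELLOW'               # between 1 and 2 minutes away
    else:
        return 'ORANGE'               # more than 2 minutes away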
Just noticed this post, this is great.
Let us know if you run into any bugs or trouble with the code.