SEI - All good.
SUS - All good.
ISC - Sheila and Nutsinee were moving PR3 last night to increase optical gain. POP beam diverters are still open due to REFL WFS trouble.
CDS - HW - Running, anemometers are all up except at EY. Heading to the vault today.
- SW - Stable. Working on installing Debian 8 on various workstations. Seems successful so far. Impressed enough that they may want to replace everything with it; please give them feedback.
PSL - Still running. TCS as well.
Vac - Today Chandra would like to tune the filaments on the RGAs; this can be done from the CR. Looking for partial pressure changes in locklosses.
Fac - If you find something broken, please just call the appropriate person rather than emailing because they may not get to the email in a timely manner.
Cal - Got the last set of measurements last night, about ready to update the calibration.
Reviewed Fault reports
Robert and Annamaria will be coming to do PEM injections next week.
If you see CDS MacBooks 3 & 7, please bring them back to Carlos or Jim B.
TITLE: 11/18 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 66.2053Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Locked for 22 hours and counting.
LOG:
07:50 Started ITMX HWS script (left it off to test some other script on Wednesday and forgot to run it)
08:01 Increased violin mode damping gain for ETMY 508.585
08:08 Changed the CO2Y power upper limit so the ODC bit is green. Accepted the change in SDF.
12:43 LLO lost lock. Took IFO out of Observing so I could do some PRM tests for Sheila. Here's what happened:
13:36 LLO back to Observing. Slowly reverted the PRM gain and took some DARM spectra in the process.
13:51 Back to Observing
~15:20 Joe moved a ladder from Mid of sec1 Y beamtube to Y12.
15:10-15:15 Weird noise in AS90/POP90 (see attachment). I was looking at Sheila's Build_ups striptool and only POPAIR_B_RF90 seems to see this.
Aidan, Terra
A continuation of alog 30522
Previously we'd found that frequency drift over a lock stretch is mode dependent - the differential drumhead mode drifts more than that same optic's drumhead mode - but surmised that the ratio of (change of frequency / frequency) between two modes would be constant. However, this assumed a spatially uniform change in temperature --- fm = g(T) --- when a more realistic assumption is some radial dependence of T, so:
fm = g(T(r)), where r is the radial vector
Since the energy distribution of a given mode is also dependent on r, we can expect some modally unique self heating response. In other words, the more overlap between a given mechanical mode and the hot spots of the optic, the larger the frequency drift we'd expect that mode to have.
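As a schematic (my notation, not a formula from this alog): writing the normalized strain-energy density of mode m as rho_m(r), the fractional shift becomes an overlap integral between the mode shape and the temperature change,

```latex
\frac{\delta f_m}{f_m} \;\approx\; -\,\frac{1}{2}\,\beta \int_V \rho_m(\vec r)\,\delta T(\vec r)\,dV ,
\qquad \int_V \rho_m(\vec r)\,dV = 1 ,
```

where beta lumps together the (assumed spatially uniform) temperature coefficient of the elastic modulus. Two modes then drift in the same ratio only if their rho_m sample delta T(r) identically, which is exactly what breaks once delta T has radial structure.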
I've looked at the ETMY 15009 Hz differential drumhead mode shift compared with the ETMY 8158 Hz drumhead shift during twelve hours of a recent lock. The 15009 Hz relative frequency change is larger than the drumhead relative change, and the ratio begins to level off toward a constant after about six hours. We expect (and model) the opposite: the drumhead mode should have more thermal overlap and thus shift more. This assumes a mostly centered beam spot (or equivalently, assumes r = 0 at the center of the optic). Looking at Jenne's recent beam spot investigation, the ETMY spot position was fairly centered during this time, off in yaw by a few mm. The 15009 Hz mode is differential in pitch. (I still suspect some dependence on torsional vs. longitudinal mode movement, as suggested in alog 30522.)
We've started having operators run a2l more regularly and before and after powerup at times to get assurance of beam spot position.
Quick look at the frequency drift for ETMY 15009 Hz and Drumhead during the first 12 hours of the current lock. Ignore the giant vertical line of glitch terribleness. Also I tried fitting the Drumhead df/f with a sum of weighted exponentials (though note that tau == b is the same for each series element) as a very preliminary comparison with thermal lens response to power step.
TITLE: 11/18 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: OBSERVING
OUTGOING OPERATOR: Corey
QUICK SUMMARY: H1 has been locked 16 hours and counting. Sheila sent me an instruction and asked me to play with PRM. I'm waiting for LLO to be out of Observing before doing that. If that doesn't happen I will do it towards the end of shift.
After getting wacky values for PI mode ring-ups (many orders of magnitude off from expected) earlier in the week, I've refit ring-ups and taken new ones and gotten much more reasonable values (I wasn't looking at long enough time stretches to get accurate ring-up data before). Note that we haven't had any instability in Mode 3 in many days, so I haven't been able to remeasure it.
Mode # | Freq     | Optic | tau   | Q     |
3      | 15606 Hz | ITMX  | TBD   | TBD   |
26     | 15009 Hz | ETMY  | 316 s | 15 M  |
27     | 47495 Hz | ETMY  | 92 s  | 5 M   |
28     | 47477 Hz | ETMY  | 89 s  | 5.2 M |
Mode 3 hasn't been unstable in many days, and Modes 27 & 28 are only unstable during the initial ~1-2 hours of the thermal transient. Attached are two 30-hour stretches of the damping output of the three modes during recent long locks (Mode 3 had no output, so I left it off), plus the simulated HOOM spacing, to give an idea of when the modes ring up enough to engage the damping loops. Left is from a few days ago; right is the current lock. Note that Mode 26 looks continuously unstable during the 11/16 lock, but it could be that the damping gain is triggering below an actually unstable amplitude; compare to the current lock, where we set the gain for Mode 26 to zero just after 19:30 (so there would be no damping output) but it has remained stable and low since then with no need to damp.
Operators and I are currently turning off gain and measuring ring-ups at different times during lock stretches, to get gains appropriate to different stages of the thermal transient.
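As an illustration of how tau can be pulled out of a ring-up measurement (this is a minimal sketch with synthetic data; the actual fitting code and channels used for these alog values are not shown here), fit a straight line to the log of the mode amplitude:

```python
import math

def ringup_tau(times, amps):
    """Estimate the exponential ring-up time constant tau from amplitude
    samples, assuming a(t) = a0 * exp(t / tau).
    Fits log(a) vs. t by ordinary least squares; the slope is 1/tau."""
    logs = [math.log(a) for a in amps]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) \
            / sum((t - tbar) ** 2 for t in times)
    return 1.0 / slope

# Synthetic ring-up with tau = 316 s (Mode 26's fitted value above)
ts = [60.0 * k for k in range(60)]            # one hour of minute-spaced samples
amps = [1e-3 * math.exp(t / 316.0) for t in ts]
print(round(ringup_tau(ts, amps)))            # recovers 316
```

The earlier bad fits make sense in this picture: with only a short time stretch, noise on the amplitudes dominates the slope estimate, and the recovered tau can be off by orders of magnitude.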
Current damping status of this now > 25 hour 32 W lock: no damping required after the first 2 hours. Attached plots again show damping loop output over the past 26 hours and HOOM spacing for reference.
TITLE: 11/18 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
Did not get to ANY of my To Do List, because H1 has been locked the whole shift (going on 14 hrs)! So I passed these items on to Nutsinee. Nutsinee did notice big violin modes. I talked to the LLO operator to check their status (Robert is still doing PEM injections), so Nutsinee will try to damp the violins (if they get rung up more, it will make locking difficult). LLO will contact us when they go to OBSERVING.
LOG:
To Do:
The only oplev worth noting is HAM2 pit/yaw, which is well over +/-10 urad.
This closes FAMIS #4702.
Jim's new WEEKLIES medm is pretty cool!
TITLE: 11/17 EVE Shift: 0:00-08:00 UTC (16:00-0:00 PST), all times posted in UTC
STATE of H1: OBSERVING
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
Wind: under 5mph
Primary useism: Trending over last 24hrs
Secondary useism: --
QUICK SUMMARY:
Handed an H1 in Observing (9+ hrs). Taken out of Observing for Sheila to improve PRC alignment (related to the Allowing the REFL WFS & Beam Diverter Closing steps in Guardian). H1 has now been transitioned to calibration for the next few hours (Kissel).
I do have a To Do List for when we drop out of lock:
This fan has been on continuously for the past several years. I noticed that it is starting to get noisy, mostly due to an unexplained increase in flow. Even at the minimum speed setting, the flow seems excessive. This fan exhausts the area above VBOA in Room 169, as well as the two air-bake ovens in the adjacent Vacuum Prep Lab. I will add signage to the air-bake ovens to notify would-be users that the exhaust is off and describe the details for turning it back on.
SudarshanK, DarkhanT, TravisS, RickS
Yend calibration measurements were performed on 10/31/16 using the WS1 working standard. The results of those measurements were used to generate the calibrations for the Pcal Rx and Tx power sensors.
On 11/9/16 we repeated the measurements using a new working standard, WS3, because WS1 was accidentally dropped on the concrete floor, changing its responsivity. We decided to retire it and replace it with WS3. The time series for these measurements revealed that the peak-to-peak variations were several times higher than usual. The power sensor calibrations differed from the earlier measurements by as much as 0.5%. The power source to which the WS electronics were connected was suspected to be the cause of the excess noise.
On 11/15/16 (two days ago) we repeated the measurements with WS3 but with the WS electronics plugged into a different power source. The signal variations returned to the expected values so it seems that the power source was the issue.
The new (previous) calibrations are:
Tx: 1.5171e-9 (1.5160e-9) N/V (0.07% difference)
Rx: 1.0497e-9 (1.0475e-9) N/V (0.21% difference)
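The quoted differences are just the relative change of the new value with respect to the previous one; a quick check:

```python
def pct_diff(new, prev):
    """Relative difference of a new calibration vs. the previous one, in percent."""
    return 100.0 * (new - prev) / prev

print(round(pct_diff(1.5171e-9, 1.5160e-9), 2))  # Tx: 0.07
print(round(pct_diff(1.0497e-9, 1.0475e-9), 2))  # Rx: 0.21
```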
We will continue to use the "previous" values for the signal calibrations and make periodic checks during the observing run at about 2-month intervals, as for all the end stations.
With a full lock, H1:LSC-Y_TIDAL_REDTRIG_INMON has a value of 175000. H1:LSC-Y_TIDAL_REDTRIG_THRESH_OFF had a value of 10, and it takes almost 4 seconds to reach that threshold. Meanwhile, ISC was still driving HEPI, and quickly, causing the ISI T240s to be driven to trip.
In the attached 10-second trend, the lock loss is clear, with the ASC TR SUM dropping along with the REDTRIG power. The signal finally reaches 10 almost 4 seconds later, as seen on the TIDAL_CMD trace. Meanwhile, ISC has rapidly driven HEPI >100 um. By then the T240s are too far gone (red trace #1) and will trip the WD.
Sheila & Daniel have increased the ON/OFF thresholds to 8000/5000. This should work for a 2 W lock and reduce our delay to under two seconds; maybe that is enough. But why is this suddenly a problem? Or is it some series of unfortunate events that made it happen this time? Scaling the thresholds with power is the better, albeit more complex, solution.
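To see why raising the OFF threshold helps, suppose the trigger signal decays roughly exponentially from its locked value after a lockloss (an idealization; the decay constant below is chosen only to reproduce the ~4 s observation, it is not a measured value). The crossing time then scales with ln(V0/threshold):

```python
import math

def crossing_time(v0, threshold, tau):
    """Time for v0 * exp(-t/tau) to fall to `threshold`,
    assuming a simple exponential decay (an idealization)."""
    return tau * math.log(v0 / threshold)

V0 = 175000.0   # locked REDTRIG_INMON value from the trend
TAU = 0.41      # hypothetical decay constant chosen so threshold=10 gives ~4 s

old = crossing_time(V0, 10.0, TAU)     # ~4.0 s with the old threshold of 10
new = crossing_time(V0, 5000.0, TAU)   # ~1.5 s with the new OFF threshold
print(round(old, 1), round(new, 1))
```

The logarithmic dependence also shows why a fixed threshold misbehaves as input power changes, and why scaling the thresholds with power would be the more robust fix.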
Not all lock losses are the same. Some lose power fast (through the AS port) while others are much slower (REFL port).
TITLE: 11/17 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 64.0999Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Lockloss in the early part of my shift; it tripped HAM6, the ETMY ISI, and then 7 min later HAM3. After ITMY stopped swinging, it was straightforward to bring back up. Since then it has been locked for 6 hrs, in and out of Observing for me to run a2l and for Terra to sometimes turn PI gains on/off.
LOG:
I made a few small tools that others may want to use. Please let me know if you have any questions, bugs, or complaints.
(userapps)/guardian/grd_grep.bsh - This allows you to grep through the files of a specified Guardian node.
grd_grep.bsh - Grep through the files of a particular Guardian node
Usage: grd_grep <NODE_NAME | -h | --help> <GREP_ARGS>
Where:
NODE_NAME (required) Is the caps name of the Guardian node (Ex: ISC_LOCK)
<-h | --help> Print this usage
GREP_ARGS (required) are all of the normal arguments used with grep.
By default, it is run with -n
(Ex: [grep] -ni "Test String")
Examples:
$ grd_grep.bsh DIAG_MAIN -i "SEISMIC"
or
$ grd_grep.bsh ISC_LOCK "SUS-ETMY_M0_LOCK_L"
I just aliased this as "grdgrep" and it becomes very easy to find the lines of wanted code in one or more files of that node.
(userapps)/cds/h1/scripts/ws_timer.bsh - A simple timer that will ding when complete.
ws_timer: A simple timer for control room use. Ctrl-z to stop.
Usage: ws_timer [seconds] [(-h|-m|-t) arg]
[seconds] Set a timer for that number of seconds
-h|--help Print this help menu
-m|--minute Followed by an int will set a timer for that many minutes
-t|--time Followed by a time will ding at time. Times are local and
in the form of "hh:mm"
This ended up being more useful than I thought. Aliased as just "timer", it is a great tool while in the chair.
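A minimal sketch of the core of the -t mode (the real tool is a shell script; this Python helper and its name are my own illustration, not the script's actual code):

```python
import datetime

def seconds_until(hhmm, now=None):
    """Seconds from `now` until the next local occurrence of "hh:mm".
    Rolls over to tomorrow if that time has already passed today."""
    now = now or datetime.datetime.now()
    h, m = (int(x) for x in hhmm.split(":"))
    target = now.replace(hour=h, minute=m, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

# After sleeping this long, the script would print "\a" to ring the terminal bell.
noon = datetime.datetime(2016, 11, 18, 12, 0, 0)
print(seconds_until("12:30", now=noon))  # 1800.0
```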
Replaced and verified direction of both Mid Station Anemometers.
model restarts logged for Wed 16/Nov/2016 No restarts reported
DAQ: fw2 and fw0 are running framecpp_2.5.2, fw1 still running older version. Test code getting and comparing framed data from nds0 and nds1 showing no differences over past 30 hours. LDAS also not reporting any data differences between fw0 and fw1 frames.
Beverly
Complete results may be found on the DetChar Wiki.
I have determined new frame data rates for H1 and L1. This was last updated before ER9 (see aLOG 26706). I have only run the 'commissioning' rate scripts, as we have no science frames anymore. You can find the scripts (they work at either site) at https://llocds.ligo-la.caltech.edu/data/keith.thorne/DataRates. You can find the L1 data at https://llocds.ligo-la.caltech.edu/data/keith.thorne/DataRates/2016-11-17/, and the H1 data at https://lhocds.ligo-wa.caltech.edu/exports/keith.thorne/DataRates/2016-11-17/. I have attached the summary data sheets in PDF and XLS formats as well as channel lists. These are all raw (uncompressed) rates.
Note: The H1 data rates are inflated (by 2.1 MB/s) by 8 64 kHz channels from the SUS/OMC PI models. These 'artificial' channels (set = 0) are no longer needed after RCG 3.1.1 and have been removed from L1. This should be done on H1 as well.
The raw H1 fast data size has shrunk from 56.19 to 55.20 MB/s (a decrease of 1%). The raw L1 fast data size has shrunk from 41.67 to 40.56 MB/s (a decrease of 3%), mostly from the removal of 6 64 kHz channels. The raw H1 fast data rates are now 7% larger than the L1 fast data rates.
From the DAQ MEDM screens: H1 frames are ~1670 MB / 64 sec -> 26 MB/s; L1 frames are ~1400 MB / 64 sec -> 20 MB/s.
I've added a Weeklies screen to the Sitemap to make it easier for the operators to do FAMIS tasks. It can be found under the OPS tab. Pretty simple, there are a number of buttons that launch different scripts for different FAMIS tasks, so operators don't have to navigate from the command line. Operators and detector engineers should feel free to add tasks to this screen, I added the ones I could think of. Hopefully we can rely on this screen to make sure that operators use 1 version of a script, and give cognizant engineers some control over which version is used.
I also encourage anyone with any artistic inclinations to make it look nicer. I got nothin'.
There are now THREE a2l scripts that we will be running once during EACH lock stretch. Patrick mentioned them in his shift log. I'm tagging Opsinfo with the information again.
cd /opt/rtcds/userapps/release/isc/common/scripts/decoup
./a2l_min_LHO.py
./a2l_min_PR2.py
./a2l_min_PR3.py
that is all :)
A note on the (3) A2L measurements.
As of this week, we want to run the ...LHO.py file at the beginning of locks (i.e. right after we reach NLN). It takes on the order of 10min.
Sometimes you might not want to run it. If the DARM spectra look good around 20 Hz, then you are good. It's a judgment call, but in general this helps with sensitivity. Will put this in the Ops Sticky Note wiki, & it should get into the Ops Checksheet soon.
(Thanks to Jenne & Ed for sharing the alogs about this!)