The front loader used to clear snow at LHO was picked up today by a large flatbed truck.
The timeline for this activity:
With the new modes popping up this weekend, I added them to Verbal. Since modes 22 and 23 have a higher noise floor around them, they have a separate RMSMON threshold (3 normally, 300 for these two). Cheryl has restarted Verbal, and it should alert on these modes starting immediately. As always, please call me if there are any issues.
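As a rough illustration only (not the actual Verbal/RMSMON code, and the names here are placeholders), a per-mode threshold lookup along these lines captures the change:

    # Sketch of a per-mode RMS alert threshold; modes 22 and 23 sit on a
    # higher noise floor, so they get 300 instead of the default 3.
    DEFAULT_THRESHOLD = 3
    HIGH_NOISE_MODES = {22: 300, 23: 300}

    def rms_threshold(mode):
        """Return the RMS alert threshold for a given mode number."""
        return HIGH_NOISE_MODES.get(mode, DEFAULT_THRESHOLD)

    def should_alert(mode, rms_value):
        """Alert when a mode's RMS monitor exceeds its per-mode threshold."""
        return rms_value > rms_threshold(mode)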
I revisited my investigation into the IM alignment jumps which I originally posted in alog 27646.
In that alog I found evidence that, while the HAM2 ISI and the IMs were tripped (undamped and rung up), the IM OSEM oscillations damped at different rates, suggesting mechanical interference.
That test data came from a single ISI/IM trip on 19 May 2016, so I've looked at a number of other ISI/IM trips and, with the exception of IM4, find exactly the same results.
Attachment 1 is the original data and four additional ISI/IM trip events
Attachment 2 is the alignment change of IM4 from May 2016 to Nov. 2016, the time frame in which OSEM LL dropped below 90% of the maximum OSEM value
Attachment 3 is the alignment change of IM4 from Oct. 2014 to Feb. 2017
Attachment 4 shows the angular change needed to move an IM by the total EQ stop gap, and how that compares to the change of IM4 yaw
Attachment 5 shows the total alignment changes for all IMs
This data supports my original finding, and in the case of IM4, confirms that alignment changes matter, and it's my belief that IM4 OSEM LL is now touching an EQ stop.
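For reference, the conversion plotted in Attachment 4 is just the small-angle relation

    \Delta\theta \approx d_{gap} / r

where d_{gap} is the EQ stop gap and r is the lever arm from the yaw axis to the stop; the actual gap and lever-arm values come from the attachment and aren't assumed here. A yaw change of roughly d_{gap}/r is what it would take to bring OSEM LL into contact with the stop.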
Robert was interested in when this change happened.
I found an ISI trip on 11 Oct 2016, which places the change between that date and 13 Nov 2016.
Attached is a plot of IM4 pitch and yaw between those dates, and I've circled the change in IM4 yaw alignment of ~130 urad, which is the only candidate for the transition from OSEM LL not touching to OSEM LL touching.
This change in IM4 yaw was made on 27 Oct 2016, approximately between 1:30 and 4:30 UTC.
Per Carlos' request, rebooted vacuum1 machine in control room (10.20.0.89/24).
Robert, Anamaria
During PEM injections at the beginning of O2, we did HAM1 HEPI injections. It turns out that H1 has much less coupling than L1 (see alog), so it's interesting to take a closer look and compare the sites. A few interesting points:
1) The first three plots show the H1 injection in various relevant signals, as well as the budgeted contribution to DARM. It's very close below some 20 Hz, but does not reproduce DARM perfectly the way L1 does. Also, CHARD P is seemingly less sensitive to HAM1 motion (the estimated ambients aren't identical, whereas they are at L1).
2) HAM1 moves more at H1 than at L1, so the contribution to DARM could be reduced further by a factor of a few if the HAM1 filtering is improved (Arnaud did this at LLO). I checked the filtering for both sites in the suspensions; the CHARD signals get sent to L2 with not much gain, so we can directly compare the controls at higher frequencies. However, at LLO we offload to L1 and at LHO we offload to M0.
3) The overall DARM contribution from L1 HAM1 motion, presumably through CHARD, is a factor of 10 higher but, even worse, the coupling is almost a factor of 100 higher. I'm hoping that by looking at such comparisons we can figure out how to reduce the L1 coupling.
model restarts logged for Sun 12/Feb/2017 - Thu 09/Feb/2017: No restarts reported
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 17 seconds. LLCV set back to 15.0% open.

Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 103 seconds. TC A did not register fill. LLCV set back to 44.0% open.
I lowered CP3's LLCV from 15% open to 14% open.
TITLE: 02/13 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY:
Morning Meeting:
There have been reports of not being able to reset a SUS watchdog while using the new OPS workstation; Keita's request is to report this to CDS if/when it happens again.
The generic watchdog reset script is userapps/sus/common/scripts/wdreset_all.pl
This is an extremely simple script; it just does two caputs with a sleep in between. I propose we migrate it from Perl to Python.
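A minimal Python sketch of what that migration might look like, assuming pyepics is available; the PV names below are placeholders, and the real ones should be taken from the existing Perl script:

    # Hypothetical Python version of wdreset_all.pl: two caputs with a
    # sleep in between. PV names are placeholders, not the real ones.
    import time
    from epics import caput

    def wd_reset(optic):
        """Reset the watchdog for one suspension (e.g. 'ETMY')."""
        caput('H1:SUS-%s_WD_RESET_1' % optic, 1)   # first reset PV (placeholder)
        time.sleep(1)                              # pause between the two writes
        caput('H1:SUS-%s_WD_RESET_2' % optic, 1)   # second reset PV (placeholder)

    if __name__ == '__main__':
        wd_reset('ETMY')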
08:26 Switched ISI CONFIG to WINDY_NO_BRSY
08:30 Re-aligning arms
08:30 TCSY Chiller flow is low
10:04 TCSY Chiller flow is low
12:05 TCSY Chiller flow is low
15:24 TCSY Chiller flow is low
09:16 UTC 75.453 Mpc
TITLE: 02/13 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 70Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY:
Decent shift with H1 up most of the time, until our lock was ended at 15 hr 44 min by an Alaska EQ.
SEI_CONFIG: useism is between the 50th & 90th percentiles. Perhaps we should switch from WINDY_USEISM_NOBRSY[63] --> WINDY_NO_BRSY[60] at the next lockloss? Ed will try this.
Also mentioned to Ed the FM1 filter option for PI MODE26 (which Cheryl handed off to me).
LOG:
After tonight's Alaska earthquake, we had several ISIs trip. For ETMy, the watchdogs were RESET, but it tripped again while trying to ISOLATE, and it also tripped the ETMy SUS. No biggie, methinks...
When I tried to reset ETMy, I got a message that the L1 stage was tripped. Hitting the RESET button could not clear the L1 trip. The L1 output values were well below the RMS threshold of 8000 counts, so I was perplexed that the watchdog would NOT RESET!
I was working on the left ops workstation while Ed was on the right (new) workstation. When he left the room, I made another attempt to RESET, and this time it worked :-/ Nothing was done differently. Ed noticed that this was our first time trying it from a different workstation... we're wondering if the new workstation could have been the issue?? This is just us speculating, because we couldn't see anything else that was done differently.
Anyway....we are back & Ed is on his way to recovering us from this late-night EQ jolt.
H1 going on a 12.5 hr lock with fairly quiet conditions.
MODE26 is a bit fat looking (as Cheryl alogged).
I didn't feel comfortable attributing the recent extreme change in fill time for CP4 to the increase in ambient temperature alone. As such, I decided to do another fill at the 48 hr mark instead of waiting for the nominal auto fill, which wouldn't be until Monday (the 72 hr mark). Attached is the data. Any chance that we are seeing the needle/seat beginning to show signs of an ice ball obstruction? "I'm just saying!" As shown in the plots, I set the LLCV to 70% open for 30 minutes and then increased it to 100% open. The exhaust temperature finally responded 5 minutes after being increased to 100% open. I went out of the building, cracked the manual bypass valve, and personally observed LN2 at the exhaust. I increased the nominal value from 42% to 44%. Note that I also verified that the valve stem pointer agrees with the CDS 4-20 mA output %open values, so it isn't as if the valve isn't opening, etc. Also, the dewar head pressure is ~10 psi, which, combined with the liquid height, should be more than adequate to make it over the elevation "hump" into the 80K pump.
Here is a plot over the year showing Dewar level correlated with LLCV setting. We changed the actuator this summer from pneumatic to electronic, and had to apply loctite to the valve stem threads because it kept trying to decouple from the actuator stem; the calibration was significantly different from the previous actuator. It looks like the LLCV was set to around 44% open with an almost-empty Dewar back in Sept., so I think what we're experiencing here is a mismatch of LLCV setting to ambient conditions. Weather plays a big role, and because it has been polar-vortex cold we didn't have to raise the LLCV as much during the winter; now we're back to "norm". We should double-check the valve stem to make sure the loctite is still functional.
I had to flip the noise eater, run an initial alignment, wait longer than usual for a well-aligned DRMI to lock (~12 min), lose lock at SWITCH_TO_QPDS, and struggle getting the OMC ready for handoff; after that we were good.
Did you get a VERBAL alarm or DIAG_MAIN message about the Noise Eater? Just curious. (Or should operators know to check the Noise Eater when the IMC isn't locking?)
DIAG_MAIN will notify if the Noise Eater needs to be flipped.