Title: 12/1 OWL Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
State of H1: Observing at 78Mpc for the last 20hrs
Outgoing Operator: Travis
Quick Summary: Travis had it easy, still locked from my shift yesterday. Wind is minimal, useism is 0.4 um/s, all lights are off, CW injections are running. There is a timing error on H1SUSETMY and an IPC error on H1ISCEY (I'll clear them when I get the chance).
Title: 11/30 Eve Shift: 00:00-08:00 UTC (16:00-24:00 PST). All times in UTC.
State of H1: Observing
Shift Summary: Very quiet shift. Only 6 ETMy saturations not related to any RF45 glitching. 20 hours of coincident observing with LLO.
Incoming operator: TJ
Activity log: None
Nothing of note. A few ETMy saturations not related to any RF45 glitching. Coincident observing with LLO just over 16 hours.
Laser Status:
SysStat is good
Front End power is 31.28 W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED
PMC:
It has been locked for 6.0 days, 7.0 hours, 24.0 minutes (should be days/weeks)
Reflected power is 1.635 W and PowerSum = 25.14 W.
FSS:
It has been locked for 0.0 days 14.0 h and 16.0 min (should be days/weeks)
TPD[V] = 0.7883 V (min 0.9 V)
ISS:
The diffracted power is around 8.313% (should be 5-9%)
Last saturation event was 0.0 days 14.0 hours and 16.0 minutes ago (should be days/weeks)
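Since the nominal ranges above are the same every shift, a small script can flag out-of-range values automatically. Below is a minimal sketch using pyepics; the EPICS channel names are hypothetical placeholders, not the actual PSL channels.

    # Minimal sketch of an automated PSL status check.
    # NOTE: the channel names below are hypothetical placeholders.
    import epics  # pyepics

    # (channel, low, high) -- nominal ranges taken from the checklist above
    CHECKS = [
        ("H1:PSL-PWR_FRONTEND_OUTPUT", 28.0, 32.0),  # Front End power, ~30 W
        ("H1:PSL-ISS_DIFFRACTED_PWR",   5.0,  9.0),  # ISS diffracted power, 5-9 %
        ("H1:PSL-FSS_TPD_DC",           0.9, None),  # FSS TPD, min 0.9 V
    ]

    for chan, lo, hi in CHECKS:
        val = epics.caget(chan)
        if val is None:
            print("%s: no response" % chan)
        elif (lo is not None and val < lo) or (hi is not None and val > hi):
            print("%s: %.3f OUT OF RANGE" % (chan, val))
        else:
            print("%s: %.3f OK" % (chan, val))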
At 12:58:31 PST the IOP for h1susey took 69 us for a single cycle (well over its nominal ~15 us cycle period), which in turn caused a single IPC receive error on h1iscey. This TIM error has been occurring approximately once a week for EY; in this case it is the IPC error that is unusual. We should clear these accumulated errors the next time we are not in observation mode, or during Tuesday maintenance, whichever comes first.
TITLE: 11/30 DAY Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Locked at 72Mpc for 12+hrs
Incoming Operator: Travis
Support: None needed
Quick Summary: Very quiet shift with H1 hovering between 75-80Mpc. Seismic in the 0.03-0.1 Hz band is noticeably trending down over the last 24hrs. useism is holding just under 0.5um/s (so we still have the 45mHz Blends ON).
Shift Log:
Last Tuesday (24th Nov) Jim and I modified the monit configuration on the h1hwinj1 machine so that when it restarts the psinject process it smoothly ramps the excitation amplitude up over a period of 10 seconds. We manually started the new system on Tuesday, and from then until the last 24 hours there were no crashes of psinject. In the past 24 hours there have been 4 stops (with subsequent automatic restarts); each stop was logged as being due to the error:
SIStrAppend() error adding data to stream: Block time is already past
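For reference, the ramp is just a smooth envelope applied to the excitation amplitude after a restart. A minimal sketch of the idea (illustrative only, not the actual psinject/monit code; the sample rate and waveform are stand-ins):

    # Sketch of a 10-second smooth turn-on applied to an injection waveform.
    import numpy as np

    fs = 16384                                   # assumed injection sample rate
    ramp_time = 10.0                             # seconds, per the monit change
    t = np.arange(0.0, 12.0, 1.0 / fs)
    signal = np.sin(2 * np.pi * 100.0 * t)       # stand-in for the CW waveform

    # Half-cosine envelope: 0 at t=0, 1 for t >= ramp_time
    env = 0.5 * (1.0 - np.cos(np.pi * np.clip(t / ramp_time, 0.0, 1.0)))
    ramped = signal * env                        # full amplitude after 10 s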
Here are the start and crash times (all times PST). Monit automatic restarts are marked with an asterisk.
time of start | time of crash |
Tue 11/24 14:55:47 | Sun 11/29 17:15:56 |
Sun 11/29 17:16:00* | Mon 11/30 00:00:14 |
Mon 11/30 00:01:13* | Mon 11/30 13:09:07 |
Mon 11/30 13:09:36* | Mon 11/30 13:12:43 |
Mon 11/30 13:13:39* | still running |
On Nov 3, the ITMY Coil Driver was powered down in an attempt to clear its brain, which had been giving false bad status indicators. We also changed the code so that it does not drop out during these glitches; see T1500555.
Trending the channels shows no status drop-outs since the 3 Nov power cycle. Before the power cycle, the status had erroneously indicated a problem several times, at least twice dropping the IFO out of observing.
For quick reference, and if it wasn't made clear from the Primary Task, these are the ISI ST1 and ST2 coil drivers (not SUS coil drivers).
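For the record, the trend check is just a matter of pulling the status channel over the period since the power cycle and counting transitions. A minimal sketch assuming the nds2 Python client; the channel name is a hypothetical placeholder for the actual ISI coil-driver status channel, GPS times are approximate, and 0 is assumed to be the "bad" state.

    # Sketch: count status drop-outs since the 3 Nov power cycle.
    import nds2
    import numpy as np

    START = 1130544017   # ~3 Nov 2015 00:00 UTC (approximate)
    STOP  = 1132876817   # ~30 Nov 2015 00:00 UTC (approximate)
    CHAN  = "H1:ISI-ITMY_ST1_CD_STATUS"   # placeholder channel name

    conn = nds2.connection("nds.ligo-wa.caltech.edu", 31200)
    data = conn.fetch(START, STOP, [CHAN])[0].data

    # count transitions into the assumed "bad" state (0)
    dropouts = int(np.sum((data[1:] == 0) & (data[:-1] != 0)))
    print("%s: %d drop-outs between %d and %d" % (CHAN, dropouts, START, STOP))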
The VerbalAlarm script was stopped, but unfortunately when I tried to restart it I got an error (see below). I'm not sure of a back-up alarm system to employ if this goes down. I have the Calibration MEDM (with GraceDB) up on my workstation, but I'm not sure what else we should have up as a backup.
It's back. TJ called back and let me know about a missing parenthesis on line 638.
No notable changes for the first part of the shift. Seismic is pretty much unchanged.
H1 is going on an 8+hr lock, with the range hovering a little over 75Mpc.
Beamtube work has been canceled today due to a non-functioning manlift and freezing conditions.
Attached are 7 day trends for all active H1 oplevs in pitch, yaw, and sum.
O1 days 72,73
model restarts logged for Sun 29/Nov/2015 No restarts reported
model restarts logged for Sat 28/Nov/2015 No restarts reported
As Patrick surmised in alog 23788, the lockloss at ~22:40 PST Saturday came about after the tidal signal ultimately driving HEPI hit its ISC-imposed limit of 250 um about 75 minutes prior.
The attached 1-day trend shows HPI-ETMX_ISCMON hitting the 250 um limit imposed just upstream at the HPI-ETMX_ISCINF filter. HEPI has no problem providing this much relief, and I'm sure I should have opened the limit up already.
This amount of "Tide" is not real, though, so something else must be driving the requirement. I suspect an initial acquisition offset that ends up at HEPI, or temperature changes either at the ends or in the PSL area. Also attached is a 30-day trend of the 'tidal' drives to the End HEPIs. One other lockloss, on 10 November, is attributable to this limit being reached.
TITLE: 11/30 DAY Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: NLN at
Outgoing Operator: TJ
Quick Summary:
TJ mentioned an issue with getting the OMC to lock on the carrier after an overnight lockloss. He made a change to how the slider sweeps for the carrier in Guardian (he will send an email to Kiwamu).
useism is about 0.4um/s in LVEA. Winds are calm (under 10mph). Range is hovering around 75Mpc.
The CER temperature has been about 5 degrees C higher than before since roughly 13:00 UTC on 11-27-2015 (bottom right in the attached). This is about a day later than when they started receiving air handler alarms. I notified Richard.
An interesting side effect of this is that I can now see that the output level of the RF distribution amplifiers has a small temperature dependence. It's about -0.04 dBm/K for the 45 MHz amplifier in the CER (eyeballing the middle right and bottom right panels), so the ~5 K rise corresponds to roughly a 0.2 dBm drop in output. This is picked up by the RF AM stabilization (middle left and bottom middle panels). Not that this is a bad thing in itself.
Also, the diode room air conditioning stopped switching on and off repeatedly (top middle panel) at the same time the CER temperature went up. I notified Peter.
Discussed IFO status:
Subsystem Reports: Nothing for SEI, SUS, CDS, FACILITIES
Maintenance Activities Tomorrow:
Title: 11/30 OWL Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
State of H1: Observing at ~80Mpc for the last 4hrs
Shift Summary: Quiet shift besides one issue after a lockloss from a magnitude 4.5 earthquake in Oklahoma. <a href="https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=23821">alog23821</a> describes the OMC issue and my solution.
Incoming Operator: Corey
Activity Log: