There is some evidence that something strange is happening in the ETMY drive electronics and that it is associated with the range drops. The ETMY L2 coil monitors (the noisemons) get quieter by about a factor of two around the same time as the range drops, but there is also excess noise that is not accounted for by the drive signal. The strange behavior of the noisemon is not perfectly correlated with the drops in range, and the excess noise in the noisemon does not have the same shape as the excess in DARM and is not coherent with it. Still, this points to further problems with the drive at EY. So would it be possible to switch control over to EX, as Sheila suggested before? The range loss due to this problem is dropping the event rate by at least a factor of 2 (event rate scales as range cubed, so a 20% range loss gives 0.8^3 ≈ 0.5), so we should try anything with a chance of solving it. Power-cycling the EY electronics might also help, as might checking any fast readbacks of the driver (the version in the frames only goes up to 1 kHz, so if there is high-frequency junk we can't see it).

Attached are three plots. The first is a comparison of the noisemon during the reference time (Jan 13 01:00 UTC) and the bad time (Jan 13 14:45 UTC). The second is a comparison of the noisemons with the drive signals (L2 MASTER UL) subtracted. The transfer function is measured from the data, since it is clearly changing; the transfer functions at the two times have the same shape and phase but differ by a scale factor of about 0.42. There is excess noise during the bad time that is clearly above the noise floor of the coil monitor. The final plot is the RMS of the monitor channel during Jan 13 (ending at the last lock), showing that it starts to drop around the same time as the range. On Jan 8 the RMS is steady for the whole day, while on Jan 12 it also shows a drop during a time when the DARM range drops.
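For reference, here is a minimal sketch (not the actual analysis code) of the kind of drive-to-noisemon comparison described above. It assumes the noisemon and L2 MASTER UL drive time series have already been fetched as numpy arrays; the 2048 Hz sample rate and the synthetic usage at the end are illustrative assumptions.

    # Sketch: estimate the drive -> noisemon transfer function at a given time
    # and the residual noisemon spectrum after removing the drive-coherent part.
    import numpy as np
    from scipy.signal import welch, csd

    fs = 2048  # assumed sample rate of the stored drive/noisemon channels

    def tf_and_residual(drive, noisemon, fs=fs, nperseg=fs * 8):
        """Return frequencies, complex TF estimate, and residual noisemon PSD."""
        f, p_dd = welch(drive, fs=fs, nperseg=nperseg)
        _, p_nn = welch(noisemon, fs=fs, nperseg=nperseg)
        _, p_dn = csd(drive, noisemon, fs=fs, nperseg=nperseg)
        tf = p_dn / p_dd                       # H1 estimator of drive -> noisemon TF
        coh = np.abs(p_dn) ** 2 / (p_dd * p_nn)
        residual = p_nn * (1.0 - coh)          # noisemon power not explained by the drive
        return f, tf, residual

    # Synthetic stand-in data: a flat TF of 0.42 plus independent noise would
    # reproduce the frequency-independent scale-factor change described above.
    rng = np.random.default_rng(0)
    drive = rng.standard_normal(fs * 64)
    noisemon = 0.42 * drive + 0.1 * rng.standard_normal(fs * 64)
    f, tf, residual = tf_and_residual(drive, noisemon)

Comparing |tf| and the residual PSD between the reference and bad times is the same comparison shown in the first two attached plots.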
Lost lock in POWER_UP, same as Ed reported. Something seems wrong with the ISS (see previous alog). I have stopped at DC_READOUT while I investigate, but I'm not making much progress. I have left a message with Keita and Peter.
It seems similar to alog 30335.
Could it be a problem with the ISS AOM power supply? The summary page shows that the voltage has dropped (plot).
I'm not certain, but my guess is that that signal is actuated on to set the diffracted power.
I think it might be a control loop somewhere, because locking and unlocking it can sometimes make the noise go away.
I watched the diffracted power signal and it got noisy as soon as it got to the DIFF portion of FIND_IR.
Peter is on his way to the site.
Could this be a sign of a problem related to the lock losses on POWER_UP? See attached.
TITLE: 01/14 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY: IFO is down. Going to start an initial alignment.
17:50 lockloss during re-locking. HAM6 ISI Watchdog trip.
08:15 Not holding on to what looks like a good DRMI lock. Touching up cavity alignments.
12:15 lockloss @ Increase Power/PI mode 27 ringing up sharply/instantly
12:41 lockloss @ increase power. Going to try manually increasing power.
13:22 I manually moved to ADJUST_POWER. My first attempt was to move to 10 W: OMC DCPD saturation, then lockloss. I shall try smaller moves on the next lock.
16:00 Finally got to the point where I could try the manual increase. A 1 W increase broke the lock.
Unfortunate turn of events considering the trouble I'm having.
Appears to be a general problem with the authentication system across the Lab/LSC this morning, not just the Ops wiki server.
LIGO-SAML-DS, which finds authentication servers, is unhappy with some very recent metadata changes (example from an affected GC server):
2017-01-14 07:51:56,396 ERROR Metadata signature cannot be verified: xmlsec1 returned non-zero code 1
If you (or CDS admins) need a workaround in the mean time while the central system is worked on, give me a call.
Here are the recent ssl_error logs from cdswiki when I try to open the web service:
[Sat Jan 14 09:59:00.588103 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281] mod_wsgi (pid=23984): Exception occurred processing WSGI script '/usr/lib/cgi-bin/wsgi/ligo-saml-discovery-service/LIGO_SAML_DiscoveryService.wsgi'.
[Sat Jan 14 09:59:00.588145 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281] Traceback (most recent call last):
[Sat Jan 14 09:59:00.588170 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281] File "/usr/lib/cgi-bin/wsgi/ligo-saml-discovery-service/LIGO_SAML_DiscoveryService.wsgi", line 212, in application
[Sat Jan 14 09:59:00.588206 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281] if returnURL.split('?')[0] in spDict[entityIDclaimed]:
[Sat Jan 14 09:59:00.588234 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281] KeyError: 'https://lhocds.ligo-wa.caltech.edu/shibboleth-sp'
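The traceback shows the lookup spDict[entityIDclaimed] failing because that entityID is missing from the metadata-derived dictionary. As an illustration only (not a patch that has been applied to the discovery service), a defensive guard around that lookup would look something like this; the function name and structure are hypothetical:

    def sp_allows_return_url(sp_dict, entity_id, return_url):
        """Return True if return_url (minus its query string) is registered for entity_id.

        Illustrative guard around the spDict[entityIDclaimed] lookup in the
        traceback above: an entityID missing from the (recently changed)
        metadata yields False instead of an unhandled KeyError / 500.
        """
        allowed = sp_dict.get(entity_id)
        if allowed is None:
            return False
        return return_url.split('?')[0] in allowed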
Bubba turned on heater 3A yesterday to help deal with the cold conditions. You can see in the plot that the heater is cycling, probably on a protective temperature switch.
Nutsinee reported some misbehavior in the LVEA 10-30 Hz seismic activity.
I'm not sure there is any correlation between these two events, but someone might look at this. The heater is likely ~25 kW, so there is significant EMI from the contactor opening and closing.
Re-locking has been a daunting task, to say the least. The PRC gave me a really hard time to get locked. Once it was locked and the control signals had converged, the DRMI alignment still looked pretty terrible. The BS alignment was checked repeatedly, to no avail. I then decided to do a complete initial alignment. The IMC became unstable while I was trying to re-align the arms, and I had to clear the WFS and re-align the optics. After letting the loops tune up the alignment during a brief break, I continued with the initial alignment. Subsequent locking attempts have been better, but not completely successful; so far, the farthest I've gotten is DC_READOUT.
4.8M Indonesia
Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No
Magnitude (according to Terramon, USGS, SEISMON): 4.8, 4.8, NA
Location: 56km ENE of Taniwel, Indonesia
Starting time of event (ie. when BLRMS started to increase on DMT on the wall): ~06:30 UTC
Lock status? Caused lockloss.
EQ reported by Terramon BEFORE it actually arrived? Yes
Note: There were multiple earthquakes around the same time, so it is not certain that this one in particular caused the lockloss, but the timing reported by Terramon matches.
TITLE: 01/14 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Ed
SHIFT SUMMARY: The range was awfully low but the IFO stayed locked until an earthquake knocked us out. Otherwise nothing exciting.
LOG:
07:19 Lockloss. Several earthquakes happened right around the same time, but the arrival time of the 4.8M Indonesia earthquake is closest to the peak time observed in the control room. The peak reached 0.4 μm/s on the 0.03-0.1 Hz FOM.
About 2 hours into this lock stretch the range started to drop very fast. Around the same time the 10-30 Hz seismic activity (LVEA vertex) started behaving strangely. I don't believe it's real (no one is out there), and I don't see anything odd in the magnetometers. Still looking for more clues. Electronics related?
The temperature is dipping a couple of degrees in the LVEA again. This morning, Bubba had to turn on some extra heat in the LVEA, which was close to the time of the drop in range. Are some new fans getting turned on somewhere and producing extra seismic noise?
If that were the case, shouldn't we have seen this behavior more often over the past 24 hours?
The weird glitches have stopped, but the range doesn't recover. They seem to have a 15-minute spacing.
It's been set to KILL since Jan 4th. Are we leaving it at KILL on purpose?
This applies to the transient injections (not continuous pulsars). It is likely that we are trying to accumulate a good stretch of data without such injections since we came back up in January to better measure sensitivity backgrounds.