There is still some noise visible in DARM below 70 Hz, but not as bad as before. I am also not seeing the oscillations from before, at least not yet. Something is still not healthy, though...
Range: 71Mpc
H1:LSC-POP_A_LF_OUTPUT started to develop a few-Hz oscillation, and the noise in the DARM spectrum below 100 Hz picked up a bit.
Had to run through an initial alignment, and then lost lock a few times on the way up. The wind is still staying below 20 mph, the useism is still trending down, CW inj are running, lights are off, and LLO is down.
I was having issues keeping the IMC locked, so I looked at the PSL status and the chiller alarm was red. I went in to check it out, and the bottom of the chiller was in error with "Low Level"; the top seemed to be hovering around the Max line. I am not sure if the bottom is separate from the top, but I added 200 mL of water to the bottom. Alarms are all good now.
Not sure of the cause yet; the control signals didn't show any signs of struggle, winds were below 20 mph, useism was 0.8 um/s.
Generic Lockloss tool set of plots, not showing anything useful (at least not to me).
Vern told me to keep an eye on the BS Oplev, so here are some plots with the ASAIR powers and the Oplev SUM output. Not sure which one caused the other, if either.
LLO is down due to high wind, Robert is jumping on this opportunity to do some PEM injections.
Back to Observing for a bit. Robert wants to get another test ready; in the meantime we will observe. LLO is still down and out from wind.
TITLE: "12/12 DAY Shift: 16:00-00:00UTC (08:00-16:00 PDT), all times posted in UTC"
STATE Of H1: Observing at 78Mpc for 12hr
OUTGOING OPERATOR: Travis
QUICK SUMMARY: Wind is on the rise, peaking just above 22 mph for now; the useism is trending down at 0.9 um/s; all else seems calm.
Title: 12/12 Owl Shift 8:00-16:00 UTC (0:00-8:00 PST). All times in UTC.
State of H1: Observing
Shift Summary: Locked for my entire shift, 12+ hours total. A handful of ETMy saturations. Microseism is slowly coming down, currently ~0.6 um/s. Wind is picking up a bit to ~15 mph. There is an H1SUSETMX timing error on the CDS overview screen that can be cleared next time we are out of Observing.
Incoming operator: TJ
Activity log: None
Locked in Observing for ~9 hours. A couple of ETMy saturations. No other issues.
I noticed a couple of errors that showed up in the VerbalAlarms window running on the Alarm Handler computer. These were not announced verbally by VerbalAlarms. See attached screenshot.
There were 2 more of these errors (one 'not responding'/'alive again' pair) around 10:00 UTC.
I see three errors on cdsfs0 from this morning, each of which resulted in the RAID card being reset.
Dec 12 00:17:51 cdsfs0 kernel: [306069.208085] sd 0:0:0:0: WARNING: (0x06:0x002C): Command (0x8a) timed out, resetting card.
Dec 12 01:23:42 cdsfs0 kernel: [310009.835370] sd 0:0:0:0: WARNING: (0x06:0x002C): Command (0x2a) timed out, resetting card.
Dec 12 02:57:34 cdsfs0 kernel: [315627.297842] sd 0:0:0:0: WARNING: (0x06:0x002C): Command (0x8a) timed out, resetting card.
Here is a full set of logs for the 01:23 event:
Dec 12 01:23:42 cdsfs0 kernel: [310009.835370] sd 0:0:0:0: WARNING: (0x06:0x002C): Command (0x2a) timed out, resetting card.
Dec 12 01:23:45 cdsfs0 snmpd[1716]: Connection from UDP: [10.20.0.85]:53320->[10.20.0.11]
Dec 12 01:24:13 cdsfs0 kernel: [310040.313376] 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=0.
Dec 12 01:24:13 cdsfs0 kernel: [310040.433071] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0063): Enclosure added:encl=0.
Dec 12 01:24:45 cdsfs0 snmpd[1716]: Connection from UDP: [10.20.0.85]:54550->[10.20.0.11]
Dec 12 01:25:01 cdsfs0 CRON[13730]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Dec 12 01:25:03 cdsfs0 kernel: [310090.356268] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0029): Verify started:unit=0.
Dec 12 01:25:03 cdsfs0 kernel: [310090.385280] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0029): Verify started:unit=1.
Dec 12 01:25:03 cdsfs0 kernel: [310090.385714] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0029): Verify started:unit=2.
Note that this problem is not related to the recent "/ligo is 100% full" error (that file system has about 220GB free).
It looks like most NFS clients rode through these resets without logging any system errors. Presumably these were short outages, and perhaps only computers that were actively trying to access the file system at the time reported the error.
Please contact me over the weekend if the errors show up again.
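If anyone wants to keep an eye on this over the weekend, a quick way to count these events is to scan the log for the timeout/reset warning. Below is a minimal Python sketch, not an installed tool; the log path is an assumption and may need to be /var/log/kern.log instead.

import re

# Look for 3ware controller timeout/reset warnings like the ones quoted above.
PATTERN = re.compile(r'WARNING: \(0x06:0x002C\): Command \(0x[0-9a-f]+\) timed out, resetting card')

hits = []
with open('/var/log/syslog') as log:   # assumed location of the kernel messages
    for line in log:
        if PATTERN.search(line):
            hits.append(line.rstrip())

for line in hits:
    print(line)
print('%d RAID card reset(s) found' % len(hits))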
Activity Log: All Times in UTC (PT)
00:00 (16:00) Take over from Ed
00:43 (16:43) Vern & Sheila – Into the LVEA to tweak BS OpLev
00:43 (16:43) GRB Alert – Ignored due to IFO down
01:32 (17:32) GRB Alert – Ignored due to IFO down
03:39 (19:39) NOMINAL_LOW_NOISE, 22.0W, 79Mpc
03:45 (19:45) In Observing mode
08:00 (00:00) Turning over to Travis

End of Shift Summary:
Title: 12/11/2015, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)
Support: Sheila, Jenne, Vern, Hugh
Incoming Operator: Travis
Shift Detail Summary: Reestablished lock and observing at 03:45 (19:45). The biggest problem seemed to be a glitching BeamSplitter OpLev laser. Per Jason's instructions, Sheila and Vern made a power adjustment to the OpLev laser, which helped to quiet the glitching. For the past 4 hours the IFO has been locked in observing mode. Power is 21.9W, with a range of 82.5Mpc. The wind has calmed down a bit (7 – 12 mph range), and seismic activity has been holding steady around 0.07um/s. Microseism remains around 1.0 – 0.8um/s, although the amplitude may be decreasing a bit. In general, the second half of the shift has been good for observing. The instrument has been stable and environmental conditions are not causing any issues at this time.
IFO has been locked and in Observing mode for the past 30 minutes. Lock was regained through the valiant efforts of Sheila and Jenne and a well-executed tweak of the BS OpLev. Wind is now a solid moderate breeze (up to 18 mph) and rising. Seismic and microseism continue at the elevated rates of the past 24-plus hours.
Sheila, Jenne, Evan, Ed, Jeff B, Hugh, Vern
Today we have had low winds but microseism above 1 um/s all day. ALS has been very stable all day, unlike yesterday when we also had high winds. Today DRMI was the problem. We revived MICH freeze, and also increased the power of the BS oplev laser, which had been glitching badly but seems better now.
We had a few ideas for improving DRMI locking, which aren't implemented yet and which we won't pursue tonight since we are locked at last. We noticed that PRMI was locking reliably even when DRMI was not.
Over the summer we made some efforts to implement here the MICH CPS feedforward (MICH freeze) that is used at LLO (alogs 19461, 19399). The idea is to slow down the Michelson fringes by predicting the motion using the CPSs and sending that prediction to the suspension. In the summer, when the ground motion was low, there was no benefit for us in slowing down the Michelson fringe, so we haven't been using it.
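For anyone wondering what "predicting the motion using the CPSs" looks like in practice, below is a purely illustrative Python sketch of the feedforward idea. The real implementation lives in a front-end filter bank (the CPSFF module described below); the signal, filter shape, and scaling here are made-up placeholders, not the installed design.

import numpy as np
from scipy import signal

# Toy example of CPS feedforward: filter a "CPS" signal and use it to cancel
# the corresponding Michelson motion. All numbers here are placeholders.
fs = 16.0                                # sample rate for this toy example (Hz)
t = np.arange(0, 600, 1.0 / fs)
cps = np.sin(2 * np.pi * 0.15 * t)       # stand-in for a microseism-dominated CPS signal

# Placeholder feedforward filter: a band-pass around the secondary microseism.
b, a = signal.butter(2, [0.05 / (fs / 2), 0.5 / (fs / 2)], btype='band')
ff_drive = -signal.lfilter(b, a, cps)    # sign and gain would be tuned against real data

mich_motion = cps                        # idealized case: the CPS tracks the MICH motion
residual = mich_motion + ff_drive        # what the MICH loop would be left to handle
print('RMS before: %.3f  after: %.3f' % (mich_motion.std(), residual.std()))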
This morning Evan turned it on again, and it did reduce the RMS of the MICH control signal a little bit (plot attached, with ground STS for reference).
This may be helping us acquire PRMI lock, and it makes DRMI look more promising.
If operators want to try it, which is recommended while the microseism is high, the screen is linked from the LSC overview screen (2nd screenshot). The third filter module, labeled L1LSC-CPSFF, is the relevant one. If all the settings are the same as shown in the 3rd screenshot, just type 1 into the gain to turn on MICH freeze. When either PRMI or DRMI locks, this will be set to zero by guardian, so if lock breaks and you want to use it again, you will have to type a 1 in again.
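For reference, the same "type 1 into the gain" can be done from a Python prompt. The snippet below is just a sketch using pyepics; the channel name is my guess based on the filter module name, so double check it on the MEDM screen before writing to it.

from epics import caput, caget

CHAN = 'H1:LSC-CPSFF_GAIN'   # assumed gain channel for the CPSFF filter bank; verify on the screen

caput(CHAN, 1)               # turn MICH freeze on
print('CPSFF gain is now %s' % caget(CHAN))
# Guardian will zero this gain once PRMI or DRMI locks, so after a lockloss it has
# to be set back to 1 if MICH freeze is wanted again.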
As a potential solution to the BS oplev glitching problem that we were having earlier (before the oplev laser power was increased a bit), we had thought about creating a PRMI ASC state, much like the DRMI ASC state. The idea was that we would try to engage ASC for MICH so that we could stop using the BS oplev for feedback.
However, since the BS oplev seems to no longer be glitching, this is no longer a high priority. So the state still exists, but both the main and run parts have "if False" blocks around all the code in them, so this state is just a placeholder for now. If we decide to go forward with ASC for PRMI, we've already got a healthy start on it.
The message is: There is no practical change to the guardian code, although there is a new state if you open up the "all" screen on the ISC_DRMI guardian.
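For anyone who does open the ISC_DRMI "all" screen and wonders what the placeholder looks like, it is roughly the schematic below (the name and comments here are illustrative, not the committed code): an ordinary guardian state whose main and run methods are fenced off with "if False" so they currently do nothing.

from guardian import GuardState

class PRMI_ASC(GuardState):
    # Placeholder only: everything is behind "if False" until PRMI ASC is commissioned.
    request = False

    def main(self):
        if False:
            pass      # would set up and engage the MICH ASC loop here
        return True

    def run(self):
        if False:
            pass      # would watch loop convergence / hand off from the BS oplev here
        return True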
SRC2 P also had this same oscillation, and PR2 and SRM look like they were affected by it.
Vern told me to keep an eye on the BS Oplev, so here is a quick trend of that with some ASAIR powers. I am not sure if it is a glitch seen in the BS or if it is just from the lockloss itself.
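If this trend needs to be re-made offline later, something like the gwpy sketch below should do it. The channel names are my best guesses and should be checked against the channel list, and the GPS times are placeholders for the window around the lockloss.

from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:SUS-BS_M3_OPLEV_SUM_OUT_DQ',   # guessed BS oplev SUM channel; verify the name
    'H1:LSC-ASAIR_A_LF_OUT_DQ',        # guessed ASAIR power channel; verify the name
]
start, end = 1134000000, 1134000600    # placeholder GPS window around the lockloss

data = TimeSeriesDict.get(channels, start, end)
plot = data.plot()
plot.savefig('bs_oplev_vs_asair.png')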