Title: 12/12 Owl Shift 8:00-16:00 UTC (0:00-8:00 PST). All times in UTC.
State of H1: Observing
Shift Summary: Locked for my entire shift, 12+ hours total. A handful of ETMy saturations. Microseism is slowly coming down, currently ~0.6 um/s. Wind is picking up a bit to ~15 mph. There is an H1SUSETMX timing error on the CDS overview screen that can be cleared the next time we are out of Observing.
Incoming operator: TJ
Activity log: None
Locked in Observing for ~9 hours. A couple of ETMy saturations. No other issues.
I noticed a couple of errors that showed up in the VerbalAlarms window running on the Alarm Handler computer. These were not announced verbally by VerbalAlarms. See attached screenshot.
There were two more of these errors (one set of not responding / alive again) at ~10:00 UTC.
I see three errors on cdsfs0 from this morning, each of which resulted in the RAID card being reset.
Dec 12 00:17:51 cdsfs0 kernel: [306069.208085] sd 0:0:0:0: WARNING: (0x06:0x002C): Command (0x8a) timed out, resetting card.
Dec 12 01:23:42 cdsfs0 kernel: [310009.835370] sd 0:0:0:0: WARNING: (0x06:0x002C): Command (0x2a) timed out, resetting card.
Dec 12 02:57:34 cdsfs0 kernel: [315627.297842] sd 0:0:0:0: WARNING: (0x06:0x002C): Command (0x8a) timed out, resetting card.
Here is a full set of logs for the 01:23 event:
Dec 12 01:23:42 cdsfs0 kernel: [310009.835370] sd 0:0:0:0: WARNING: (0x06:0x002C): Command (0x2a) timed out, resetting card.
Dec 12 01:23:45 cdsfs0 snmpd[1716]: Connection from UDP: [10.20.0.85]:53320->[10.20.0.11]
Dec 12 01:24:13 cdsfs0 kernel: [310040.313376] 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=0.
Dec 12 01:24:13 cdsfs0 kernel: [310040.433071] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0063): Enclosure added:encl=0.
Dec 12 01:24:45 cdsfs0 snmpd[1716]: Connection from UDP: [10.20.0.85]:54550->[10.20.0.11]
Dec 12 01:25:01 cdsfs0 CRON[13730]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Dec 12 01:25:03 cdsfs0 kernel: [310090.356268] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0029): Verify started:unit=0.
Dec 12 01:25:03 cdsfs0 kernel: [310090.385280] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0029): Verify started:unit=1.
Dec 12 01:25:03 cdsfs0 kernel: [310090.385714] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0029): Verify started:unit=2.
Note that this problem is not related to the recent "/ligo is 100% full" error (the file system has about 220 GB of disk free).
It looks like most NFS clients rode through these resets without logging any system errors. Presumably the outages were short, and perhaps only computers that were actively accessing the file system at the time reported an error.
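As a quick check on the clients, here is a minimal sketch (Python) for scanning a machine's syslog for NFS client messages referencing cdsfs0. The log path and message wording are assumptions (typical Debian kernel NFS client messages); adjust them for the machines actually being checked.

#!/usr/bin/env python3
# Sketch: scan a client's syslog for NFS client messages referencing cdsfs0.
# The log path and message patterns are assumptions; adjust as needed.
import re

LOG = "/var/log/syslog"
PATTERN = re.compile(r"nfs: server cdsfs0 (not responding|OK)")

with open(LOG, errors="replace") as f:
    hits = [line.rstrip() for line in f if PATTERN.search(line)]

if hits:
    for line in hits:
        print(line)
else:
    print("no NFS client messages for cdsfs0 found in", LOG)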
Please contact me over the weekend if the errors show up again.
Activity Log: All Times in UTC (PT)
00:00 (16:00) Take over from Ed
00:43 (16:43) Vern & Sheila – Into the LVEA to tweak BS OpLev
00:43 (16:43) GRB Alert – Ignore due to IFO down
01:32 (17:32) GRB Alert – Ignore due to IFO down
03:39 (19:39) NOMINAL_LOW_NOISE 22.0W, 79Mpc
03:45 (19:45) In Observing mode
08:00 (00:00) Turning over to Travis
End of Shift Summary:
Title: 12/11/2015, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)
Support: Sheila, Jenne, Vern, Hugh
Incoming Operator: Travis
Shift Detail Summary: Reestablished lock and observing at 03:45 (19:45). The biggest problem seemed to be a glitching BeamSplitter OpLev laser. Per Jason's instructions, Sheila and Vern made a power adjustment to the OpLev laser, which helped quiet the glitching. For the past 4 hours, the IFO has been locked in observing mode. Power is 21.9W, with a range of 82.5Mpc. The wind has calmed down a bit (7 – 12 mph range); seismic activity has been holding steady around 0.07 um/s. Microseism remains around 1.0 – 0.8 um/s, though the amplitude may be decreasing a bit. In general, the second half of the shift has been good for observing. The instrument has been stable and environmental conditions are not causing any issues at this time.
IFO has been locked and in Observing mode for the past 30 minutes. Lock was regained through the valiant efforts of Sheila and Jenne and a well-executed tweak of the BS OpLev. Wind is now a solid moderate breeze (up to 18 mph) and rising. Seismic and microseism continue at the elevated rates of the past 24-plus hours.
Sheila, Jenne, Evan, Ed, Jeff B, Hugh, Vern
Today we have had low winds but microseism above 1 um/s all day. ALS has been very stable all day, unlike yesterday, when we also had high winds. Today DRMI was the problem. We revived MICH freeze, and also increased the power of the BS oplev laser, which was glitching badly but seems better now.
We had a few ideas for improving DRMI locking, which aren't implemented yet and that we won't pursue tonight since we are locked at last. We noticed that PRMI was locking reliably even when DRMI was not.
Over the summer we made some efforts to implement here the MICH CPS feedforward (MICH freeze) that is used at LLO (19461, 19399). The idea is to slow down the Michelson fringes by predicting the motion using the CPSs and sending this to the suspension. In the summer, when the ground motion was low, there was no benefit for us in slowing down the Michelson fringe, so we haven't been using it.
This morning Evan turned it on again, and it did reduce the RMS MICH control signal a little bit. (plot attached, with ground STS for reference.)
This seems to be helping us acquire PRMI lock, and makes DRMI locking look more promising.
If operators want to try it, which is recommended while the microseism is high, the screen is linked from the LSC overview screen (2nd screenshot). The third filter module, labeled L1LSC-CPSFF, is the relevant one. If all the settings are the same as shown in the 3rd screenshot, just type 1 into the gain to turn on MICH freeze. When either PRMI or DRMI locks, this will be set to zero by guardian; if it breaks lock and you want to use it again, you will have to type a 1 in again.
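For completeness, the same switch can be made from the command line. The sketch below is illustrative only; the gain channel name is an assumption based on the filter-module label in the screenshot (standard CDS filter modules expose their gain as <MODULE>_GAIN), so verify it against the MEDM screen before using it.

#!/usr/bin/env python3
# Sketch: turn MICH freeze on by writing 1 to the CPSFF filter-module gain.
# The channel name below is an assumption inferred from the screenshot label;
# confirm the exact name on the MEDM screen first.
from epics import caget, caput

GAIN_CHANNEL = "H1:LSC-CPSFF_GAIN"   # assumed channel name

print("current gain:", caget(GAIN_CHANNEL))
caput(GAIN_CHANNEL, 1)               # turn on MICH freeze
# Guardian will set this back to 0 when PRMI or DRMI locks; repeat after a
# lockloss if you want it on again.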
As a potential solution to the BS oplev glitching problem that we were having earlier (before the oplev laser power was increased a bit), we had thought about creating a PRMI ASC state, much like the DRMI ASC state. The idea was that we would try to engage ASC for MICH so that we could stop using the BS oplev for feedback.
However, since the BS oplev seems to no longer be glitching, this is no longer a high priority. So the state still exists, but both the main and run parts have "if False" blocks around all the code in them, so that this state is just a placeholder for now. If we decide to go forward with ASC for PRMI, we've already got a healthy start on it.
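Schematically, the placeholder looks something like the sketch below (illustrative only; the state name and comments are placeholders, not the exact code in ISC_DRMI):

# Illustrative sketch, not the actual ISC_DRMI code; it only shows the
# "if False" placeholder structure described above.
from guardian import GuardState

class PRMI_ASC(GuardState):
    def main(self):
        if False:
            # PRMI ASC engagement would go here (e.g. closing MICH ASC loops
            # so the BS oplev is no longer needed for feedback)
            pass
        return True

    def run(self):
        if False:
            # loop-convergence checks would go here
            pass
        return True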
The message is: There is no practical change to the guardian code, although there is a new state if you open up the "all" screen on the ISC_DRMI guardian.
Sheila, Vern, Jenne, Jason, Jeff B
We have seen that BS oplev glitching is making it difficult for us to keep PRMI locked, and is probably adding to our DRMI locking difficulties. Vern called Jason, who suggested changing the power a bit. We went out on the floor, turned the knob a wee bit, and increased the power by about 5%. For now at least, the glitching seems a bit better, which you can see in Jenne's attached plot.
Locking update coming soon...
Title: 12/11/2015, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)
State of H1: 00:00 (16:00), The IFO is unlocked. High microseism and problems with PRMI.
Outgoing Operator: Ed
Quick Summary: IFO is down. Sheila and Jenne are working on problems with PRMI locking and a glitching BS OpLev. Wind is a gentle breeze at 8-12 mph. Seismic activity is around 0.07 um/s; microseism is high, above 1.0 um/s.
On Wednesday TJ and I tripped the ISI several times while executing the HAM3 ISI GS13 gain switch in various ways. I have several examples to look at, but here is a comparison of one trip to a successful switch of the gains using the SEI COMMAND perl script.
The first attachment has 20 seconds of the trip example next to the switch that does not trip. It looks like there isn't anything different going on before the switch in the two cases: there are larger swings in X & Y in both, and the other DOFs all look fairly similar too. There is a larger glitch 5+ seconds before the trip in Z, but that seems to have settled by the switch/trip time; these would be related to Reference Position ISO servoing.
In the second attachment, again side by side, are the two cases zoomed in a bit closer (10 sec). The two-stage switch by the perl script is evident, with the gain switch doing more glitching and the whitening stage doing nothing. There is a delay of the sensors with respect to the switching by the guardian, which seems weird, but I'm not looking at outputs from the filter bank as we have no DQ channels here... These channels are the inputs to the blend bank after transforming to the cartesian basis. Again, nothing stands out as to why there is a problem with the guardian script. We (TJ & I) did adjust the guardian code to separate the FM switching like the perl script does, but it still tripped the ISI.
The third attachment is a close zoom-in on the switch that trips the ISI. It is what it is... Given that Y and RZ are the DOFs that ultimately go big, I'll suspect the horizontals, but I could be fooled.
4 trips of the ISI with Guardian. Three are H3 and one is V3. Problem with corner 3?
I thought I had a trend going, with the excursion hitting the trip level on H3 first (see the first three attachments). Watch the channel colors, as I added a channel, but they are all H3. The fourth plot shows that the V3 sensor goes to the rail and causes the trip. If I saw a bunch more instances and they all happened on corner 3, I'd be very suspicious of BIO switching or something; as it is, I'm just suspicious.
The fifth plot compares the switching sequence on the six sensors between the COMMAND perl (left) and the Guardian (right). Yes, the perl script does manage to hit the switches all at once, whereas the guardian switching is clearly spaced in time. Of course, guardian manages to switch these on all the other platforms (except HAM2) without problem. Hmmmm. I'd like to work HAM2 over too and see what that tells us.
Actually, we (TJ & I) are getting confused as to which of the above is guardian and which is perl, as we mucked with the guardian code.
We think the one on the right is guardian with sleeps between the switches trying to mimic the perl Command script shown on the left.
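For reference, the kind of change we made to the guardian code amounts to something like the sketch below, written in the style of guardian code where ezca is available. The filter-bank and button names are placeholders, not the real channels, and as noted above this approach still tripped the ISI.

# Illustrative sketch only: step through the GS13 gain and whitening switches
# one at a time with short pauses, mimicking the SEI COMMAND perl script.
# Filter-bank and button names are placeholders, not the real channels.
import time

GS13_BANKS = ['ISI-HAM3_GS13INF_H1', 'ISI-HAM3_GS13INF_H2', 'ISI-HAM3_GS13INF_H3',
              'ISI-HAM3_GS13INF_V1', 'ISI-HAM3_GS13INF_V2', 'ISI-HAM3_GS13INF_V3']

def switch_gs13_high_gain(ezca, pause=0.1):
    # first pass: the analog gain switch on each corner
    for bank in GS13_BANKS:
        ezca.switch(bank, 'FM1', 'ON')   # FM1 standing in for the gain stage
        time.sleep(pause)
    # second pass: the matching whitening stage
    for bank in GS13_BANKS:
        ezca.switch(bank, 'FM2', 'ON')   # FM2 standing in for the whitening stage
        time.sleep(pause)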
TITLE: Dec11 DAY Shift 16:00-00:00UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Environmental
SUPPORT: Sheila, Jenne, Hugh, Evan
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
Another day of record high microseism.
Sheila and Jenne doing some workarounds for the glitchy beam splitter oplev
no locking past PRMI
ACTIVITY LOG:
17:24 Kyle and Gerardo are going to EX and EY mechanical rooms.
17:54 Carlos to MY
18:08 Carlos back
18:10 Kyle returning to corner station towing trailer.
18:39 Kyle and Gerardo back to control room
22:18 Reset timing error on H1SUSETMY
22:27 LASER diode chiller indicator was RED on the Status screen. I added 125mL of water.
Kyle, Gerardo
New ion pump had shut down with "excess arcing" error shortly after its initial start yesterday -> restarted this morning and seems normal now - OK
RichM pointed me to look at the RZ drive as a way to confirm that the Z 45mHz blend is not coupling T240 RZ into the platform.
The attached 24-hour trend at ETMX suggests the nice reduction of inertial Z motion on the Stage2 GS13s does not come at the expense of increased drive to RZ on Stage1. Our Z-RZ subtraction is doing well. The blend switch points are shown too.
TITLE: Dec 11 DAY Shift 08:00-16:00UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Environment - µSeism extremely high
OUTGOING OPERATOR: Travis - short shift
QUICK SUMMARY: …..it’s Friday.
Title: 12/11 Owl Shift 8:00-16:00 UTC (0:00-8:00 PST). All times in UTC.
State of H1: Unlocked due to microseism
Shift Summary: After Nutsinee's frustrating shift, I have had similar luck. I made it as far as ENGAGE_ASC_PART3 twice, but while waiting there for the ASC loops to do their deeds, we lost lock. Talked to Mike and he gave the OK to throw in the towel for the night. Microseism is still above 1 um/s and winds are still 5-10 mph.
Incoming operator: Ed
Activity log: Attempting to lock all night.
No luck with locking so far. Made it as far as DRMI_ON_POP once in several attempts. Microseism continues its upward trend above 1 um/s. Winds have calmed from earlier but are still ~10 mph.
TITLE: 12/10 EVE Shift 00:00-08:00UTC (16:00-00:00 PST), all times posted in UTC
STATE Of H1: Lock Acquisition/Environment
SUPPORT: Sheila, Jenne
Incoming Ops: Travis (failed to reach him so he's already on site)
END-OF-SHIFT SUMMARY: I've been trying to lock most of the evening. The shift started off with high wind (20-40 mph), which died down towards the end of the shift (now ~10 mph and below). Useism has been increasing (now at ~1.5 um/s). Locking was unsuccessful even with wind below 10 mph.
Activity:
00:19 Kyle driving back from Y28