Jeff had a lockloss during the coil switching state. That state temporarily changes the suspension output matrix; the lockloss prevented the matrix from being set back, and no DOWN state took care of it.
I added corresponding code into the DRMI down state:
	        # put back in the regular EUL2OSEM output matrix after coil switching
	        coilburtpath = '/opt/rtcds/userapps/release/isc/h1/scripts/sus/'
	        coil_stages = [('PRM', 'M3'), ('PR2', 'M3'), ('SRM', 'M3'), ('SR2', 'M3'), ('BS', 'M2')]
	        # restore the nominal matrix values from the burt snapshots
	        for optic, stage in coil_stages:
	            ezca.burtwb(coilburtpath + optic.lower() + '_' + stage.lower() + '_out_normal.snap')
	        time.sleep(1)  # give burt time to write the settings
	        # load the restored matrices into the front ends
	        for optic, stage in coil_stages:
	            ezca['SUS-' + optic + '_' + stage + '_EUL2OSEM_LOAD_MATRIX'] = 1
	 
In looking at the HAM2 optics, I noticed that all three IMC optics, MC1, MC2, and MC3, have alignment changes from before to after Maintenance yesterday.
| Optic / DOF | Change in alignment | Units |
| MC1 P | +25 | urad |
| MC1 Y | +20 | urad |
| MC2 P | +3 | urad |
| MC2 Y | +3 | urad |
| MC3 P | -44 | urad |
| MC3 Y | +15 | urad |
Sliders did change; however, I can see where the old safe.snap alignment offsets were loaded and then corrected for MC1 and MC3. MC2 is odd because I don't see where the safe.snap alignment was loaded, since its safe.snap alignment offsets are not seen on MC2 in the same time frame. I started my plot on 6/28 at 15:00 UTC (8 AM local), when Maintenance starts.
IM4 Trans pitch and yaw also changed by 0.02 (+ for pitch, - for yaw).
DAC cards were changed for MC1 and MC3 yesterday, so a change in electronic offsets is a plausible explanation for the large change in the MC1 and MC3 OSEM readouts.
MC2's DAC card was not swapped, and its OSEMs show a change of only 3 urad in pitch and yaw; the IMC is stably locked, and IM4 Trans shows only a small change. This suggests that the MC1 and MC3 pitch and yaw alignments changed by much less than the 25-44 urad seen on the OSEM readouts.
IM4 Trans shows an alignment change of 0.02 in pitch and yaw, which is noticeable but much smaller than the 0.1 change that would warrant more investigation.
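For what it's worth, the before/after comparison above was done by eye from trends; a minimal sketch of the same check using the nds2 Python client is below. The GPS times and channel names are placeholders for illustration, not the ones used for the attached plot:

    import nds2

    # Placeholder GPS times bracketing maintenance (not the exact times used above).
    before, after = 1151160000, 1151250000

    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    for optic in ['MC1', 'MC2', 'MC3']:
        for dof in ['P', 'Y']:
            # Illustrative channel name for the OSEM-derived alignment readback.
            chan = 'H1:SUS-%s_M1_DAMP_%s_INMON' % (optic, dof)
            pre = conn.fetch(before, before + 60, [chan])[0].data.mean()
            post = conn.fetch(after, after + 60, [chan])[0].data.mean()
            print('%s %s change: %+0.1f' % (optic, dof, post - pre))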
While trying to initiate "Manual" mode to begin DBB scans, the DBB Interlock tripped. The resetting of one or both of the AA/AI chassis seems to be the "fix" for this. See: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27404
- BSC ISI model updates successful
- charge measurements rescheduled for Thursday morning
- new wire successfully run for PT100
- CP5 still being worked on
- property audit ongoing
Marie, Sheila, Kiwamu, Lisa, Stefan
We tried a few things to improve the recycling gain tonight. We believe that the problem is with something in our PRC. We can see that the green power transmitted through the X arm stays stable as we power up, we think this means the optics are stable. When we move the soft offsets to improve the recycling gain we change the X arm alignment to follow something in the vertex.
As TJ noted, since the computer problems earlier we have been having trouble relocking the PSL after locklosses, with noise eater, PMC, FSS, and ISS problems. It seems like trying to lock the FSS causes the noise eater to oscillate again after it has been reset.
Jeff K, Stefan, Peter, Kiwamu
We kept having the same issues with the PSL this morning, as reported by Sheila in the entry above. We now think it is all fixed. Here is what we did to fix the issues.
Note that all of the settings that Kiwamu describes were not found by the SDF system because I'd mistakenly set the SDF system to look at the OBSERVE.snap record instead of the SAFE.snap record after maintenance yesterday. The OBSERVE.snap was out of date, not having been updated since O1 and/or the HPO turn-on. Neither the SAFE.snap nor the OBSERVE.snap had been kept up to date. Another case of the difficulty of maintaining several .snap files per front-end model...
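If we wanted to script a sanity check of which snap record each model's SDF is pointed at, something like the sketch below could work; the _SDF_LOADED_EDB readback name and the DCUIDs here are guesses/placeholders, not verified channels, so treat this purely as illustration:

    # Sketch only: the readback channel name below is a guess, not verified.
    # Map of model name -> front-end DCUID (illustrative entries only).
    models = {'h1psliss': 0, 'h1pslfss': 0}   # fill in real DCUIDs

    for model, dcuid in models.items():
        loaded = ezca['FEC-%d_SDF_LOADED_EDB' % dcuid]   # placeholder channel name
        if 'OBSERVE' in str(loaded):
            print('%s: SDF monitoring %s (expected SAFE.snap)' % (model, loaded))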
	TITLE: 06/29 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
	STATE of H1: Commissioning
	INCOMING OPERATOR: None
	SHIFT SUMMARY: Commissioners working hard. Had one hiccup (alog28044) with the EY SUS computer crashing, and then the PMC and FSS gave us trouble. After another lockloss the PMC and FSS continued to fight us. The only way we have found to make the PMC lock is to click the autolocker ON and OFF (waiting in between) until it locks on a good mode. Then the FSS may still be oscillating and the PMC HV drifting, but do the same trick with the FSS autolocker and after fifteen minutes or so you'll have it! But keep an eye on the noise eater.
	LOG:
Sheila, Stefan, Lisa, Nutsinee
Had a lockloss that seemed to have been caused by an H1IOPSUSEY computer crash. After we figured out that this was the issue, we started the recovery process by restarting the computer, then clearing all of the watchdogs and bringing the alignment sliders back to where they should be. No issues recovering there.
Then we noticed that the front StripTool looked odd and the FSS, ISS, and PMC seemed to be having issues. We called Dave to have him look at all this before he went to bed.
The PMC Locked light showed that it was locked (green), but the HV was basically zero and the PMC Trans camera image was caught on some terrible mode. We managed to get the PMC locked again by turning the autolocker on and off repeatedly until it eventually locked on a good mode. BUT after we thought we had it locked nicely, we would watch the HV drift away and lose the lock, or lose it immediately as we went to engage the FSS. During all of these locks and unlocks the noise eater would start to oscillate, and we ended up having to reset it 3 times before we managed to get everything stable. Then we had to fight the ISS from oscillating as well. In the end we won the battle, but feel very battered and confused.
We seem to have recovered everything except TrendWriter0, which went down in the chaos sometime after the EY crash.
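For reference, the autolocker toggling described above was done by hand from the MEDM screens. If one wanted to script the same trick, a rough sketch is below; it assumes an ezca instance like the one in the Guardian snippet at the top of this page, and the channel names (PSL-PMC_AUTOLOCK_ON, PSL-PMC_HV_MON) and thresholds are placeholders, not verified H1 channels:

    import time

    # Placeholder channel names -- check the real PSL PMC screens before using.
    AUTOLOCK = 'PSL-PMC_AUTOLOCK_ON'
    HV_MON = 'PSL-PMC_HV_MON'

    def cycle_pmc_autolocker(attempts=10, settle=30, hv_min=10):
        """Toggle the PMC autolocker off and on until the HV looks sane."""
        for n in range(attempts):
            ezca[AUTOLOCK] = 0
            time.sleep(5)           # let the cavity drop out completely
            ezca[AUTOLOCK] = 1
            time.sleep(settle)      # give the autolocker time to acquire
            if abs(ezca[HV_MON]) > hv_min:
                return True         # locked with real HV on the PZT
        return False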
Kiwamu, Alastair (via email/phone), Nutsinee
Quick Conclusion: No more nasty particulates going into the chiller system. Both AOM and AOM driver were drained but the hoses were left undrained.
Since we're done using the AOMs to align the CO2 beam to the carrier light, it's time to remove them from the water system so the nasty particulates don't travel into the water system any further. First we went out on the mezzanine to turn off the RF oscillator, then we headed to the RF distribution amplifier. We couldn't find any on/off switch, so Richard suggested we just disconnect the cables and replace them with 50 Ohm terminators. We disconnected the TCSX and TCSY RF driver cables (#3 and #8 -- see first attachment). Then we headed back to the mezzanine and switched off the AOM power supply (which hasn't been tagged out, so DO NOT TURN THIS UNIT ON). The CO2 laser was turned off from the control room at this point. PSL light pipes were shuttered. TCS chillers were left running.
Starting with the TCSY table, first I removed the quick connects. Then I removed the hoses from the AOM and the AOM driver. Seeing that the water wasn't going to drain itself out, I blew the water out using a compressed air duster. I didn't have time to drain the water out of the hoses, so I wrapped them with cleanroom cloth and left them there (for now, at least until next Tuesday). I also left some dry cloths below the AOM and the driver just in case any more water drips out, and did the same at the quick connects. I repeated the same thing at the TCSX table. See the 2nd and 3rd attachments for images. There was no more water dripping or seeping out when I closed the tables. I turned the oscillator back on, opened the PSL light pipes, and restored both CO2 lasers at ~14:13 PT.
I attached a picture (final attachment) of some nasty particulates I blew off one of the units for your amusement.
Earlier today we had a RAID issue with h1tw0 which cleared on reboot. This evening it failed again and h1tw0 was continuously restarting its daqd. I have shut this down.
+750ml to TCSY chiller, +100mL to TCSX chiller
Cleaned TCSY chiller filter
We lost lock when the SUS EY computer went down; there was a red light by DK. We followed the instructions here to restart it, and now the ISI is re-isolating.
Jeff K and myself
Was: SAFE
Now: INIT
A few weeks back we changed the initial request state for the SUS Guardians from ALIGNED to SAFE (alog27627) with the thought that it was too aggressive of a move immediately after a reboot. Today the Guardian machine was rebooted again and the SUSs all went to SAFE, but this had apparently messed up the PCAL team. We decided to just have the nodes run INIT and go back to where we had them previously. Hopefully it actually goes that way.
All of the SUS nodes have been reloaded with the new code that is also committed to the SVN.
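As an aside, a rough sketch of what requesting INIT on the SUS nodes by hand could look like, assuming the Guardian request records are writable through ezca; the node list is partial and illustrative, and the real fix is in the reloaded SUS Guardian code mentioned above:

    # Illustrative only: a partial list of SUS Guardian nodes.
    sus_nodes = ['SUS_MC1', 'SUS_MC2', 'SUS_MC3', 'SUS_PRM', 'SUS_BS']

    for node in sus_nodes:
        # each Guardian node exposes its request as an EPICS record GRD-<node>_REQUEST
        ezca['GRD-%s_REQUEST' % node] = 'INIT'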
Manually set CP5 LLCV to 10% open to lower LN2 level, then adjusted % full set point to 98% and switched to PID mode to maintain overnight.
Now it's set in manual mode to drive the level back down to 92%.
Terminated and landed a new cable for PT100-A, old cable was 22 AWG, new one is 18 AWG.
As a note, we used a pigtail cable with a HD DB15 connector.
Great! I closed FRS 5634 ticket on this.
Evan G., Darkhan T., Travis S.
Both end station PCals were calibrated today during the maintenance period. Stay tuned for results.
TravisS, EvanG, Darkhan,
Pcal calibration measurements were taken at both end stations (the measurement procedure can be found in T1500063).
Pcal EndX measurements (DCC T1500129-v08) showed that the Pcal optical efficiency (OE) at this end station has decreased since the last measurement (LHO alog 27029); the OE of the inner and outer beams dropped from 84% and 61% (on 2016-05-05) to 78% and 46% (on 2016-06-28).
At the same time, the WS/TX response ratio has not changed by more than 0.5% since Aug 2015. So for now, at EndX the Pcal TxPD output should be used.
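As a rough illustration of the bookkeeping (not the actual analysis in T1500129-v08), the fractional OE changes implied by the numbers above work out as follows:

    # Fractional change in optical efficiency between the two measurement dates.
    oe_may = {'inner': 0.84, 'outer': 0.61}    # 2016-05-05
    oe_june = {'inner': 0.78, 'outer': 0.46}   # 2016-06-28

    for beam in ('inner', 'outer'):
        change = (oe_june[beam] - oe_may[beam]) / oe_may[beam]
        print('%s beam OE changed by %+.0f%%' % (beam, 100 * change))
    # inner beam OE changed by about -7%, outer by about -25%, while the
    # WS/TX response ratio moved by < 0.5%, hence the recommendation to use TxPD.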
Note: a DAQ restart initially happened during the measurements with the WS at the receiver module. Since we repeated the measurements affected by the DAQ restart, the results reported in T1500129-v08 are not affected by this.
Pcal EndY measurements (DCC T1500131-v05) were consistent with previous measurements (LHO alog 27029).
The measurement data is committed to calibration SVN at:
	trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/D20160628
	trunk/Projects/PhotonCalibrator/measurements/LHO_EndY/D20160628
(Richard M, Fil C, Ed M, Daniel S)
ECR E1600192.
Split Whitening Chassis S/N S1101627:
AA Chassis S/N S1102788 & S1202201:
The attached plots show the transfer functions of
Whitening chassis S1101603 was removed from ICS-R5 U18. New chassis S1101627 was installed with the modifications listed above. The new unit is the split variant.
[Kiwamu, Carl]
First indications are that the DCPD HF channels are behaving as expected. With the OMC locked on an RF sideband, DCPD A is compared to the new DCPD HF A. The transfer function between them at low frequency has an 'f' relation which transitions to f^3 at 10 kHz, as expected from the AC coupling change and the removal of 2 poles at 10 kHz. Coherence is lost at about 20 kHz.
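To make the expected shape concrete, here is a minimal numerical sketch, assuming the HF/original ratio picks up one zero from the AC coupling change plus two zeros where the 10 kHz poles were removed; this is only my reading of the description above, not the actual electronics model:

    import numpy as np

    f = np.logspace(2, 5, 500)           # 100 Hz to 100 kHz
    f0 = 10e3                            # frequency of the two removed poles

    # |H| ~ f at low frequency, rolling up to ~ f^3 above 10 kHz
    mag = f * (1.0 + (f / f0) ** 2)

    # Log-log slope: ~1 well below 10 kHz, approaching 3 well above.
    slope = np.gradient(np.log10(mag), np.log10(f))
    print('slope at 1 kHz  ~ %.1f' % slope[np.argmin(abs(f - 1e3))])
    print('slope at 50 kHz ~ %.1f' % slope[np.argmin(abs(f - 5e4))])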
	I have changed the whitening gain of the AHF DCPD path (A and B) to 15 dB. This pushed the noise floor in the 15-29 kHz region to a factor ~2 above what I guess is ADC noise. In the attached plots the original DCPD can be seen to reach a common noise floor with the AHF DCPD path, with no whitening gain, at about 17 kHz. Turning the whitening gain up, we can get some coherence with the shot noise of the DCPD through the original electronics.
	A forest of new peaks is visible, mainly in the 17-29 kHz region. There are 80 peaks in the 25-26 kHz band. I stepped the ETMX ring heater up by 0.5 W at 7:01 and back down again at the end of the lock at 7:22. This may give some mode identification data.
This morning we removed the Twin T notch on the AA board by removing C3, C4, C5, C10, and C11, leaving the 100 Ohm resistors in place.
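For context on what that modification does: a standard passive twin-T network produces a deep notch at f0 = 1/(2*pi*R*C), and removing the capacitors while leaving the resistors in place turns the stage back into an essentially flat path. A sketch of the notch that was removed is below; the R and C values are made up for illustration, not the ones on the AA board:

    import numpy as np
    from scipy import signal

    R, C = 100.0, 100e-9                  # illustrative values only
    w0 = 1.0 / (R * C)                    # notch angular frequency

    # Standard passive twin-T notch: H(s) = (s^2 + w0^2) / (s^2 + 4*w0*s + w0^2)
    notch = signal.TransferFunction([1, 0, w0 ** 2], [1, 4 * w0, w0 ** 2])

    w, mag, _ = signal.bode(notch, np.logspace(3, 6, 500))
    print('deepest point: %.0f dB at %.0f Hz' % (mag.min(), w[np.argmin(mag)] / (2 * np.pi)))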
Nutsinee, Jim, Dave:
The HWS code crashed at 07:50 PDT this morning, Nutsinee tried to restart at 11:46 PDT, but it failed. We found that the 1TB raid is 100% full (has data back to December 2014). We are contacting Aidan to see how to proceed.
BTW: the /data file system on h1hwsmsr is NFS mounted at the end stations, so no HWS camera information is being recorded at the moment.
We deleted December 2014 ITMX and ITMY data to free up 21GB of disk space on /data. The code now runs.
We need a long term plan on how to keep these data if they need permanent archiving.
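Until such a plan exists, a stop-gap sketch along these lines could at least make the disk state visible before anything is deleted; the /data layout assumed here (per-optic folders of dated subfolders) is a guess from the entries above:

    import os
    import shutil

    DATA = '/data'                   # HWS raid mount, as described above
    MIN_FREE_GB = 50                 # illustrative threshold

    usage = shutil.disk_usage(DATA)
    free_gb = usage.free / 1e9
    print('free: %.0f GB' % free_gb)

    if free_gb < MIN_FREE_GB:
        # List the oldest folders under each optic so a human can decide
        # what to archive or delete (no automatic deletion here).
        for optic in ('ITMX', 'ITMY'):
            folders = sorted(os.listdir(os.path.join(DATA, optic)))
            print(optic, 'oldest folders:', folders[:5])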
I have restarted both X and Y HWS codes this evening.
The disk was full again today. I deleted Jan-Feb 2015 data from ITMX folder. Freed up 194GB. HWS code now runs again.