As with TMSX a few weeks ago, some of the SRM and SR3 OSEM signals have been suspiciously noisy for a few days, showing ~1700Hz oscillations. As Richard suspected, power-cycling the AA chassis of those OSEMs fixed the problem. Attached are spectra of one OSEM each for SR3 and SRM before (blue/red) and after (brown/green) rebooting the chassis.
Morning's Activities (covering Justin's Morning 8:30am - 12:45pm)
Today is Maintenance Day, but some CDS maintenance work will be postponed to Thursday due to the NSF meeting.
The ISI ODC bits were reporting ST1/ST2 isolation as off for ETMX/ETMY even though the isolation was on. This was due to the "Correct Gain" setting being wrong for all DOFs in the ISO filter bank. It was corrected for ETMX and ETMY. At the same time, the ETMX and ETMY suspension M0 and R0 ODC damp states were also updated.
A safe.snap should be done on the ETMX/ETMY ISI/SUS before the next model restart.
SUS/ISI snap files were saved and committed to the SVN.
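For illustration, a check for that kind of ODC/gain mismatch could be sketched as below. This is an illustration only: the channel names are guesses, and read() is a stand-in for an EPICS fetch (e.g. ezca.read); it is not the site code.

```python
# Illustration only: channel names are hypothetical, read() stands in
# for an EPICS fetch such as ezca.read().
DOFS = ['X', 'Y', 'Z', 'RX', 'RY', 'RZ']

def find_bad_gains(read, chamber, stage, expected=1.0):
    """Return ISO filter-bank gain channels whose value differs from expected."""
    bad = []
    for dof in DOFS:
        chan = 'H1:ISI-%s_%s_ISO_%s_GAIN' % (chamber, stage, dof)
        if read(chan) != expected:
            bad.append(chan)
    return bad

# Fake snapshot in which only the RZ gain is wrong
snapshot = {'H1:ISI-ETMX_ST1_ISO_%s_GAIN' % d: 1.0 for d in DOFS}
snapshot['H1:ISI-ETMX_ST1_ISO_RZ_GAIN'] = 0.0
print(find_bad_gains(snapshot.get, 'ETMX', 'ST1'))
# → ['H1:ISI-ETMX_ST1_ISO_RZ_GAIN']
```

Scanning all DOFs in one pass makes it easy to confirm that every gain was corrected, not just the one that tripped the ODC bit.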
model restarts logged for Mon 23/Jun/2014
2014_06_23 11:15 h1susetmy
2014_06_23 11:16 h1susetmy
2014_06_23 12:11 h1susetmy
2014_06_23 12:30 h1susetmy
2014_06_23 12:58 h1susetmy
2014_06_23 13:12 h1susetmy
2014_06_23 13:16 h1susetmy
2014_06_23 15:05 h1sustmsy
2014_06_23 15:11 h1broadcast0
2014_06_23 15:11 h1dc0
2014_06_23 15:11 h1fw0
2014_06_23 15:11 h1fw1
2014_06_23 15:11 h1nds0
2014_06_23 15:11 h1nds1
No unexpected restarts. HWWD work at ETMY was followed by a supporting DAQ restart.
model restarts logged for Fri 20/Jun/2014
2014_06_20 13:18 h1broadcast0
2014_06_20 13:18 h1dc0
2014_06_20 13:18 h1fw0
2014_06_20 13:18 h1fw1
2014_06_20 13:18 h1nds0
2014_06_20 13:18 h1nds1
Saturday and Sunday, no restarts reported. No unexpected restarts.
Nowhere near bypassing yet.
PSL power was set to 200mW, and the polarization was set for the low-finesse mode.
The position of the PSL beam reflected off of the HAM1 PSL input viewport was marked on the wall of the PSL enclosure. It was off by about 2mm to the right. The beam motion is smaller than in the previous vent.
Good MC mirror biases were put back.
The MC didn't flash, though there were already multiple bounces.
Installed irises for MC1, MC2 and MC3. There are dog clamps on the ISI, part of the permanent installation, that are meant to work as stops for the bases of these irises. We just picked the irises with the right height, pressed them against the fixed dogs, and fixed them down using temporary dogs.
On MC2 the beam is somewhat too high and maybe about 3 or 4mm to the east. We will fix this by touching the PSL.
On MC3, it seems like the iris position is wrong for whatever reason. Maybe this is intentional, but as is, the iris should move about half an inch to the west. In the attached picture, MC3 and the baffle are positioned correctly, but the iris has a big offset from the center line of the baffle. See also T1300327-V2, Figure 1.
I'd like to move the MC3 iris to where it's supposed to be and realign MC tomorrow. I will not proceed without IO's consent.
Found that the MC2 intermediate mass EQ stops are VERY close (gaps much smaller than 1mm; some gaps are paper thin).
Why? Is it OK to back them off a bit?
Sloppy lock just for the Septum pull; that is, we are not at alignment at the moment. As soon as the Septum work is done we'll unlock again.
Made the moves per IAS/Jason, reattaching Actuators now using DIs for position monitoring. The two West side Verticals are complete; will continue Tuesday.
JimW GregG Hugh
Day Shift Summary (LVEA Laser Safe)
09:00 Gerardo & Joe – Inspect viewports at HAM2 and HAM3
09:00 Hugh – Disconnect BSC3 HEPI
09:20 Mike L – SURF tour through LVEA
09:30 Jason – Alignment work at ITMX
09:35 Jeff – Adding dust monitors at HAM2 and HAM3
09:45 Apollo & Jeff – Removing doors on HAM2 and HAM3
13:10 Hugh – Working on ITMX move
13:15 Jeff – Remove witness optics & 4" wafer from HAM2
13:45 Calum & Kate – Remove witness optic and 4" wafer from HAM3
13:52 KingSoft on site working on RO filter
15:00 Travis – General SUS cleanup in the LVEA
15:05 Dave – Reboot SUS-TMSY for hardware watchdogs
15:00 Dave – Restart the DAQ
Installed new dust monitors in HAM2 and HAM3. Removed the dust monitor from the HAM assembly/staging cleanroom between HAM4 and HAM5. The dust monitor in HAM6 is showing battery/power problems. Filiberto found no problems with the power supply circuit. I am going to put this monitor on a local power supply to see if the problem is in the batteries.
After our success in controlling the EY HWWD with the h1susetmy model, I reverted the h1sustmsy back to its original model and rebuilt it against RCG2.8.3. This required a DAQ restart due to INI changes.
Once HEPI actuators are re-attached, I will take another look at position/yaw to make sure alignment is maintained within spec. After that IAS only needs to align the ITMx in pitch and the CPx in pitch/yaw.
Olli, Justin and Volker
Task: Set power at bottom of the PSL periscope to 200mW and rotate the polarization by 90 deg to resonate in the low finesse mode of the IMC.
The HWP for the main power control was "unlocked/untagged" and energized outside of the PSL enclosure.
First we measured the power after the TFP after the PMC. Initial value 20.5W.
Next we turned to the main power control stage. While temporarily turning the power after the TFP down to 200mW, we ensured that the main power stage HWP was set to a value that dumps most of the power. After that we maximized the power after the TFP after the PMC to 23.1W (HWP turned to 208deg).
Now the power at the bottom of the periscope was set to 205mW by rotating the main power control HWP to 63deg.
Currently there is a HWP at the bottom of the periscope to control the polarization after the power control stage. Using this HWP, the polarization was rotated to "p" to operate the IMC in low-finesse mode. The angle of this HWP is now set to 46 deg.
We verified that the polarization is rotated by 90 degrees using an additional PBS.
Finally, the HWP for the main power control was "locked/tagged" and deenergized outside of the PSL enclosure.
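As a sanity check on the numbers above: an ideal half-wave plate rotates linear polarization by twice the angle between its fast axis and the input polarization, so a fast axis roughly 45 deg from the input (the 46 deg dial setting, up to the mount's zero-offset calibration) produces the 90 deg rotation we measured. A minimal Jones-calculus sketch (illustration only, not site code):

```python
import math

def hwp_rotate(jones, theta):
    """Apply an ideal half-wave plate with fast axis at angle theta (rad)
    to a Jones vector (Ex, Ey)."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    ex, ey = jones
    # HWP Jones matrix: [[cos 2t, sin 2t], [sin 2t, -cos 2t]]
    return (c * ex + s * ey, s * ex - c * ey)

# "s"-polarized input; fast axis 45 deg from the polarization
out = hwp_rotate((1.0, 0.0), math.radians(45))
print(out)  # ~(0.0, 1.0): polarization rotated by 2 * 45 = 90 deg, i.e. "p"
```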
SEI HEPI work at BSC3
Apollo removing the doors on HAM2 and HAM3
PSL power being set to 200mW
SEI locking the HAM2 and HAM3 ISI
SEI working on payload of HAM6
EE working on chassis modifications
Kyle working on HAM2/HAM3 purge air
Jim and Dave. We are now running a HWWD version of h1susetmy compiled against branch2.8. The model was restarted at 11:16PDT. If we determine there is an error with channel 25 of the binary output chassis, we will replace that chassis.
We determined that this was not a hardware problem. I removed an EPICS-OUTPUT part between the HWWD and the delay part in h1susetmy, and we were able to control the HWWD. So no binary chassis change at EY, and we are able to control the HWWD. Unfortunately, when I re-installed the EPICS part the communication continued to work, so we are not able to reproduce the problem.
[Mark Arnaud]
As reported on Thursday, sustools.py has been updated to account for the new IOP WD state channel names.
Running the following on the command line:
/opt/rtcds/userapps/release/sus/common/scripts/./sustools.py -o ETMX wdNames
returns all WD state channel names associated with this suspension, including the new IOP channel name:
['H1:SUS-ETMX_R0_WDMON', 'H1:SUS-ETMX_M0_WDMON', 'H1:SUS-ETMX_L1_WDMON', 'H1:SUS-ETMX_L2_WDMON', 'H1:IOP-SUS_EX_ETMX_DACKILL', 'H1:SUS-ETMX_DACKILL']
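For illustration, the naming convention in that list can be sketched as follows. This is a hypothetical stand-in for how the channel names are assembled (including the new ifo-rooted IOP DACKILL channel), not the actual sustools.py implementation:

```python
# Hypothetical sketch of the WD channel naming convention; not the
# real sustools.py code.
def wd_names(ifo, optic, stages, iop_host):
    """Build watchdog state channel names for one suspension."""
    names = ['%s:SUS-%s_%s_WDMON' % (ifo, optic, s) for s in stages]
    # New ifo-rooted IOP DACKILL channel
    names.append('%s:IOP-SUS_%s_%s_DACKILL' % (ifo, iop_host, optic))
    names.append('%s:SUS-%s_DACKILL' % (ifo, optic))
    return names

print(wd_names('H1', 'ETMX', ['R0', 'M0', 'L1', 'L2'], 'EX'))
# → ['H1:SUS-ETMX_R0_WDMON', 'H1:SUS-ETMX_M0_WDMON', 'H1:SUS-ETMX_L1_WDMON',
#    'H1:SUS-ETMX_L2_WDMON', 'H1:IOP-SUS_EX_ETMX_DACKILL', 'H1:SUS-ETMX_DACKILL']
```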
All SUS guardians will be restarted and tested tomorrow.
With this change (adding "ifo-rooted" IOP channels), the guardian machine ran into a memory issue, similar to what happened a few months ago with the ISI guardians.
Without going into details, we basically reverted the upgrade, meaning that guardian won't look at the IOP WDs until further notice.
All sus guardians were restarted.
The MC1, MC2 and MC3 suspensions are now damped manually with the guardians paused for the HAM2 in-vac work. There was an issue with these guardians, which reported some Ezca errors while they were in the SAFE state. Arnaud and Mark will have a look at this issue.
For the record, here are the error messages in the SUS_MC2 guardian:
20140617_19:55:36.156 SUS_MC2 [SAFE]
20140617_19:57:31.994 SUS_MC2 W: if is_tripped(sustools.Sus(ezca)):
20140617_19:57:31.994 SUS_MC2 W: File "/opt/rtcds/userapps/release/sus/common/guardian/SUS.py", line 27, in is_tripped
20140617_19:57:31.994 SUS_MC2 W: trippedwds = susobj.trippedWds()
20140617_19:57:31.995 SUS_MC2 W: File "/opt/rtcds/userapps/release/sus/common/guardian/sustools.py", line 211, in trippedWds
20140617_19:57:31.995 SUS_MC2 W: result = []
20140617_19:57:31.995 SUS_MC2 W: File "/ligo/apps/linux-x86_64/cdsutils-238/lib/python2.7/site-packages/ezca/ezca.py", line 233, in read
20140617_19:57:31.995 SUS_MC2 W: pv = self._connect(channel)
20140617_19:57:31.995 SUS_MC2 W: File "/ligo/apps/linux-x86_64/cdsutils-238/lib/python2.7/site-packages/ezca/ezca.py", line 223, in _connect
20140617_19:57:31.995 SUS_MC2 W: pv.pvname))
20140617_19:57:31.995 SUS_MC2 W: EzcaError: Could not connect to channel (timeout=2s): H1:SUS-MC2_M1_WDMON_STATE
20140617_20:28:40.013 SUS_MC2 [SAFE]
20140617_20:28:42.076 SUS_MC2 [SAFE.run] ezca: connecting to ifo-rooted channel: H1:IOP-SUS_H34_DACKILL_STATE
20140617_20:28:46.083 SUS_MC2 W: if is_tripped(sustools.Sus(ezca)):
20140617_20:28:46.084 SUS_MC2 W: File "/opt/rtcds/userapps/release/sus/common/guardian/SUS.py", line 27, in is_tripped
20140617_20:28:46.084 SUS_MC2 W: trippedwds = susobj.trippedWds()
20140617_20:28:46.084 SUS_MC2 W: File "/opt/rtcds/userapps/release/sus/common/guardian/sustools.py", line 214, in trippedWds
20140617_20:28:46.084 SUS_MC2 W: trig = self.ezca.read(pv+'_STATE')
20140617_20:28:46.084 SUS_MC2 W: File "/ligo/apps/linux-x86_64/cdsutils-238/lib/python2.7/site-packages/ezca/ezca.py", line 233, in read
20140617_20:28:46.084 SUS_MC2 W: pv = self._connect(channel)
20140617_20:28:46.084 SUS_MC2 W: File "/ligo/apps/linux-x86_64/cdsutils-238/lib/python2.7/site-packages/ezca/ezca.py", line 223, in _connect
20140617_20:28:46.084 SUS_MC2 W: pv.pvname))
20140617_20:28:46.084 SUS_MC2 W: EzcaError: Could not connect to channel (timeout=2s): H1:IOP-SUS_H34_DACKILL_STATE
Arnaud's post from Friday implied that some of the watchdog channel names have changed. This is almost certainly what's going on here.
Fix the names in the guardian code! Presumably sustools.py needs to be updated.