Day's Activities
This afternoon the following items tripped/went down:
After much head-scratching, we ultimately pointed our focus to the PSL (the PSL was completely down, no light coming from it; Gerardo noted an Error on the Diode Chiller). Luckily Rick was here and was already on his way up to the OSB. Justin & I accompanied him to the Chiller & Diode Room.
[(1) Before heading to the Diode Room, we turned the ISS AutoLocker OFF. (2) While heading to the Diode Room, Kissel took the IMC Guardian to DOWN.]
In the Chiller Room, we noticed an Error light and message on the Diode Chiller (the one on top, which has a clear spout on it); the water in the spout wasn't moving much--it's generally more turbulent. We couldn't clear the error at the Chiller, so we garbed up & went into the adjacent Diode Room.
Here we had the Beckhoff computer to work with. On the System Status screen we saw the following Trips (in red):
We hit a RESET here, but the Diode Chiller Flow was still tripped. We then went to the Chiller screen & clicked on the RED Chiller button to bring it back. We also clicked on the red Watchdog button. At this point we were GREEN, and done in the Diode Room. On the Diode Chiller the Error light was OFF (but we still saw the Error message on the LCD screen).
When we walked back into the Control Room everything was good, green, & we had PSL light. The ISS Diffracted Power was a little low, so we clicked it more positive (i.e. made the absolute value of the voltage a couple of hundredths smaller).
[When we had PSL light, Kissel requested "LOCKED" for the IMC Guardian.]
Lisa
At Livingston they have to realign the arms very frequently in order to maintain high power build-up. It seemed like we were not seeing the same problem here, but we didn't really have any numbers for Valera.
So, I left both arms locked on green last night. The plot shows a 6-hour trend of the green transmission power for both arms, and the amplitudes of the beat notes. The guardian was left on, and it was automatically recovering lock.
The Y arm power dropped by ~10-15% over 6 hours. For whatever reason, X was drifting much more than Y: on a 1-hour timescale you can't really see any significant drift, but over 6 hours the X arm drift was ~50%. I don't know whether it has already been observed in the past that X drifts more than Y, or whether this was just an isolated event.
I've upgraded the memory in all of the wall display computers in the control room from the woefully inadequate 2GB, up to 8GB. This should make them run much better and prevent swapping and the general malaise that makes things quit working or otherwise grind to a halt on them after a while.
(These are hosts video0-6 and projector0.)
TCSY:
- continued with CO2 beam alignment... the transmitted beam through the second beam splitter (1% pickoff) is now aligned to the two quad thermopile position sensors.
- temperature sensors run to the water-cooled beam dump, AOM driver and cooling water manifold. Tested and appear to be operational. Once the clamps are received back from LLO these can be fixed in position.
- AOM driver power cable re-routed.
TCSX:
- CO2 laser turned off at 2130 UTC.
- chiller and power supplies for both TCSX and TCSY were turned off at 2135 UTC.
- FLIR camera/screen cover removed. Camera front face to screen distance is 8" (rather than the 13" on the design). Photo attached.
The TCS install checklist was also updated.
After the PSL went down, MC2 IOP watchdogs tripped this afternoon, which cascaded to HAM2/HAM3 ISIs.
It tripped because of a large ISC length control offset signal applied to the left and right OSEMs of the top mass.
Similar to what Jeff did on the SUS WD models (cf. his alog), the trigger on DC OSEM values in the IOP models (upper path of the screenshot) should be removed to avoid this kind of situation.
This could be done during the vent period in two weeks.
The vent period would be an excellent time for me to upgrade the IOP models to the targeted SUS/SEI watchdog system. In that system, only the DC component of the OSEMs is monitored.
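For illustration, here is a minimal sketch of the idea (hypothetical Python, not the actual front-end logic; the sample rate, corner frequency, and threshold are made up): the OSEM signal is low-passed so that only its DC component is compared against the trip threshold, which keeps fast ISC length control offsets from tripping the watchdog.

# Hypothetical sketch of DC-only watchdog monitoring; all numbers are made up.
import numpy as np
from scipy.signal import butter, lfilter

fs = 256.0            # assumed (downsampled) rate for illustration, Hz
f_cutoff = 0.1        # low-pass corner so only the DC component survives
trip_threshold = 1e4  # made-up trip level in OSEM counts

b, a = butter(2, f_cutoff / (fs / 2.0), btype='low')

def dc_watchdog_tripped(osem_counts):
    """Return True if the DC component of the OSEM signal exceeds the threshold."""
    dc = lfilter(b, a, osem_counts)
    return bool(np.any(np.abs(dc) > trip_threshold))

# example: a large, fast length-control offset does not trip a DC-only watchdog
t = np.arange(0, 10, 1.0 / fs)
signal = 5e3 + 2e4 * np.sin(2 * np.pi * 10 * t)   # DC well below threshold, large AC on top
print(dc_watchdog_tripped(signal))                # -> False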
[Justin, Corey, Arnaud]
Even though the ISIs/HEPI were working fine, guardian has been reporting errors on ST1 and ST2 of each BSC chamber (cf. screenshot). After reloading all of them, guardian is no longer reporting errors.
Authentication for ligo.org accounts in the control room started to fail at about 1:40 PM (or at least that was when it was first reported). I followed it as far as the cdsldap0 logs, which stated that slapd couldn't contact the remote server; rock.ligo-wa.caltech.edu was not pingable. So I called Jonathan and he was able to walk me through reconfiguring slapd to use the alternate server, quartz.ligo-wa.caltech.edu, which is still up and reachable. ligo.org logins should now be functional in the control room again; Jonathan will investigate further remotely later in the day.
I've swapped out the workstations located by the test stands, as they were no longer usable as configured. The new workstations, lveaws1 and lveaws2, are iMacs and connect via the wireless AP.
Reading before turning it off: 1.34 x 10^-4 Torr.
Left the CC on for 1.5 hours.
It is off now.
Turned it ON again this afternoon, from 4:03 pm until 4:45 pm; the CC is off now and will remain off for the long weekend.
Pressure reading: 1.47 x 10^-4 Torr.
(Corey, Gerardo, Patrick)
Two days ago (about 4 am Thursday morning) the ISS diffracted power started to drift down from about 6.7% to 2.0%. Once at 2%, it started to go into oscillations (roughly around 3 am this morning); see the attached 2-day trend.
Without instructions on how to address this (and no recent alogs noting this behavior), we scratched our heads a bit, moved some gains, and hit buttons to no effect. We returned everything to the original state. Then Patrick suggested we try switching from PD A to PD B, and this appeared to stop the oscillations. It took us to a value of about 5%. Unfortunately, it looks like the power is still sloping down. (Curious to see if the ISS goes back into oscillations at 9 pm tonight.)
Actually switching PDs was not the way to go here; we generally want to stay with PD-A.
So here, you want to adjust the REFSIGNAL (H1:PSL-ISS_REFSIGNAL) channel in 0.01 steps. With Rick on the phone, I went from -1.63 to -1.55. This stabilized the Diffracted Power and took it from ~2% up to ~10%.
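For reference, a minimal sketch of stepping the reference in 0.01 increments with pyepics (the helper name, step count, and dwell time are illustrative assumptions, not a prescribed procedure):

# Hypothetical helper for stepping the ISS reference in 0.01 increments (illustrative only).
from epics import caget, caput
import time

PV = 'H1:PSL-ISS_REFSIGNAL'

def step_refsignal(n_steps, step=0.01, dwell=1.0):
    """Nudge REFSIGNAL by 'step' volts, n_steps times, pausing between steps."""
    for _ in range(n_steps):
        current = caget(PV)
        caput(PV, current + step)
        time.sleep(dwell)  # let the diffracted power settle before the next step

# e.g. going from -1.63 toward -1.55 would be eight +0.01 steps:
# step_refsignal(8)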
After checking with commissioners to verify no one was performing any measurements on h1lsc, I manually cleared the test point list to avoid overloading the awgtpman process. Test points had apparently been orphaned at some point, corresponding to the following channels:
H1:LSC-REFLAIR_A_RF45_I_ERR_DQ
H1:LSC-REFLAIR_A_RF45_Q_ERR_DQ
H1:LSC-REFLAIR_B_RF135_I_ERR_DQ
H1:LSC-REFLAIR_B_RF135_Q_ERR_DQ
H1:LSC-PRCL_EXC
The long term storage Nitrogen purge system has started testing today. It will be venting through a regulator and hose near HAM12 over the weekend. This purging shouldn't have an impact on any activities inside the LVEA. CP2 levels may need increased attention until we can see the impact that the purge line has on the LN2 dewar.
After the Nitrogen gas has had an opportunity to flow for a couple of hours, the initial signs are looking good. The dewpoint is below -40 °C and the pressure drop from the CP2 dewar to the end-of-line pressure regulator looks to be < 1 psi. I added a flow meter to the exhaust of CP2 and it looks like most of the boil-off is being redirected. Everything is looking good to start a test next week and check the overall cleanliness.
Yesterday I started my new EPICS alarm alerting system with a full configuration. It monitors critical EPICS channels and sends text alerts to engineering staff (cell-phone texts and regular email). This system is monitoring vacuum, FMCS, and DAQ channels at a very low rate (once per minute). I do not anticipate any problems at the IOC end.
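For context, a minimal sketch of the approach (the channel names, limits, addresses, and mail relay below are placeholders, not the real configuration): poll each channel once per minute and send an email/text when a value leaves its allowed range.

# Sketch of a low-rate EPICS alarm monitor; channels, limits and addresses are placeholders.
import time
import smtplib
from email.message import EmailMessage
from epics import caget

CHANNELS = {
    'H1:VAC-EXAMPLE_PRESSURE': (0.0, 1e-6),    # (low limit, high limit)
    'H1:FMC-EXAMPLE_TEMP':     (15.0, 30.0),
}
RECIPIENTS = ['oncall@example.org']            # cell-phone gateway or regular email addresses

def send_alert(channel, value):
    msg = EmailMessage()
    msg['Subject'] = 'EPICS alarm: %s = %g' % (channel, value)
    msg['From'] = 'alarms@example.org'
    msg['To'] = ', '.join(RECIPIENTS)
    msg.set_content('Channel %s is out of range: %g' % (channel, value))
    with smtplib.SMTP('localhost') as smtp:
        smtp.send_message(msg)

while True:
    for channel, (low, high) in CHANNELS.items():
        value = caget(channel)
        if value is not None and not (low <= value <= high):
            send_alert(channel, value)
    time.sleep(60)   # once per minute keeps the load on the IOCs negligible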
SR3 measurements will be running overnight on opsws1.
Guardian states "undamping/undamped" were added to sus.py. They basically set the correct switches for testing purposes (undamped TFs). This state can be requested from MATLAB before running a TF, e.g. for SR3:
system('caput H1:GRD-SUS_SR3_REQUEST UNDAMPED')
Only SR3 guardian was restarted
Measurements still running this morning
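For reference, a minimal sketch of what such states might look like in a guardian module (hypothetical code with placeholder channel names, not a copy of sus.py; 'ezca' is provided by the guardian framework at runtime):

# Hypothetical sketch of UNDAMPING/UNDAMPED guardian states; channel names are placeholders.
from guardian import GuardState

class UNDAMPING(GuardState):
    request = False
    def main(self):
        # switch off the damping loop outputs so undamped TFs can be measured
        for dof in ['L', 'T', 'V', 'R', 'P', 'Y']:
            ezca.switch('SUS-SR3_M1_DAMP_' + dof, 'OUTPUT', 'OFF')
        return True

class UNDAMPED(GuardState):
    def run(self):
        # nothing to do; just hold this configuration while the measurements run
        return True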
Daniel and I restarted green WFS effort.
WFSA and WFSB were rephased using an injection into the PDH CM board. Apparently somebody had tried to compensate for the signal differences across the quadrants when doing a similar injection to mine, and I followed that same path, such that seg1 and seg3 are matched, as well as seg2 and seg4.
Then I dithered the PZTs at 2, 8, 11 and 13.5 Hz to measure the sensing matrix, inverted the matrix, and put it in the input matrix, such that WFS_DOF3 becomes PZT1 and DOF4 becomes PZT2. Note that both WFSs are much more sensitive to PZT2 than to PZT1.
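To illustrate the bookkeeping only (the numbers are made up, not the measured responses), the dither responses form a 2x2 sensing matrix from (PZT1, PZT2) to (DOF3, DOF4), and its inverse is what goes into the input matrix:

# Illustrative only: made-up sensing matrix, showing how the input matrix is formed.
import numpy as np

# rows = WFS signals (DOF3, DOF4), columns = actuators (PZT1, PZT2);
# both rows have a larger PZT2 entry, reflecting the higher sensitivity to PZT2.
sensing = np.array([[0.2, 1.0],
                    [0.5, 1.3]])

# the input matrix is the inverse, so that DOF3 reads back PZT1 and DOF4 reads back PZT2
input_matrix = np.linalg.inv(sensing)
print(input_matrix)

# check: input_matrix @ sensing should be close to the identity
print(np.allclose(input_matrix @ sensing, np.eye(2)))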
I didn't put any filters in DOF3/4 nor in IP PITOFS and IP YAWOFS, but set the gains to some negative numbers.
In the input matrix of QPD alignment servo, I added a straight-through matrix to add the WFS signal.
I had a hard time working with the state definition machinery (and maybe the guardian), so I still don't know if it works or not, but the intent is to do high-BW feedback to the PZT.
This was due to the guardian still containing old WFS code. I removed the offending lines from the python script and then reloaded the guardian. Unfortunately, I missed adding a pass statement in a hanging if clause, which caused the guardian to commit suicide. Worse, the guardian took the restart button with it. No guardian, no restart button, ???
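For clarity, the syntax problem was an if clause whose body had been emptied out when the old code was removed; Python needs at least a pass there to keep the file importable (schematic example, not the actual guardian code):

# Schematic example, not the actual guardian code.
old_wfs_handoff_ready = False      # placeholder condition left over from the removed WFS code
if old_wfs_handoff_ready:
    pass                           # an empty if body needs 'pass' to remain valid Python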
While the Gate Valves were closed and we were Laser Safe, I installed a pair of aLIGO Camera Housings on the Spool where the ITMx cameras will be.
Additional Notes:
Lexan covers were already installed on these viewports.
Checked out unused viewports VP1 & VP4 (see attached map). Line of sight looks good for both. I installed camera housings on both of these viewports and left them in place.
Currently VP5 has the sole ITM camera.
Pictures of this work are located here.