Kiwamu, Nutsinee
------------------------------------------------------------------------------------------------------------------------------
Code added at lines 115-136.
The first test checks whether the number of "peaks" measured by the HWS code makes sense. If it is below 800, there is not enough returned SLED light; if it is above 4096 (the 12-bit limit), something is wrong.
The second test checks whether the HWS code's time stamp agrees with the actual GPS time. If |dt| > 15 seconds, the code must have stopped running.
------------------------------------------------------------------------------------------------------------------------------
import subprocess

import numpy as np

@SYSDIAG.register_test
def TCS_HWS():
    # Make sure the number of peaks is sensible.
    pkcnts_X = ezca['TCS-ITMX_HWS_TOP100_PV']
    pkcnts_Y = ezca['TCS-ITMY_HWS_TOP100_PV']
    # If below 800 -- not enough return SLED light.
    # If above 4096 -- above the theoretical 12-bit limit -- something's wrong.
    if not (800 < pkcnts_X < 4096):
        yield "HWSX Peak Counts BAD!!"
    if not (800 < pkcnts_Y < 4096):
        yield "HWSY Peak Counts BAD!!"
    # Check whether the HWS GPS time agrees with the current GPS time.
    a = subprocess.Popen(['tconvert', 'now'], stdout=subprocess.PIPE)
    gpstime, err = a.communicate()
    gpsfloat = float(gpstime.strip())
    # Fetch the HWS GPS times.
    liveGPS_X = ezca['TCS-ITMX_HWS_LIVE_ACQUISITION_GPSTIME']
    liveGPS_Y = ezca['TCS-ITMY_HWS_LIVE_ACQUISITION_GPSTIME']
    if np.abs(gpsfloat - liveGPS_X) > 15:
        yield "HWSX Code Stopped Running"
    if np.abs(gpsfloat - liveGPS_Y) > 15:
        yield "HWSY Code Stopped Running"
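The range test can be factored into a small standalone helper (a sketch; the function name and keyword defaults are ours, not part of the Guardian code):

```python
def peak_count_ok(counts, lo=800, hi=4096):
    """Return True if an HWS peak count is physically plausible.

    lo: below this there is not enough returned SLED light.
    hi: a 12-bit camera cannot report 4096 or more counts, so
        anything at or above this indicates corrupted data.
    """
    return lo < counts < hi
```

Note that writing the test as `not (x > 800) and (x < 4096)` groups as `(not x > 800) and (x < 4096)` in Python, which never fires on the high side; a chained comparison avoids that pitfall.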
Michael, Hugh, Krishna
A lot of progress was made with the installation of BRS-2 today. In the morning, we cleared out a space on the VEA floor. We had to slightly reposition the SEI ground seismometer (STS2) and the PEM Guralp seismometer. We then cleaned the BRS-2 vacuum can, foam box and other parts and moved them in. Upon opening the vacuum can, we noticed that the epoxy (TorrSeal) on one of the capacitor plates had come off, likely during the drive. We found a suitable alternative (Loctite 1c) and reattached the capacitor plate.
We then proceeded with the assembly. The beam-balance was pulled out, the flexures were carefully attached, and the beam-balance was then reinserted into the can and suspended from the flexures. We did a crude adjustment of the horizontal center of mass. The vacuum can was then partially closed up and the autocollimator attached to it. We then had to realign the optics to get the reflections from the reference mirror and the main mirror aligned well. Tomorrow we will continue with autocollimator adjustments to get good reflection patterns on the CCD.
In the meantime, Jim B and Carlos installed the BRS-2 Beckhoff computer at End Y. Also, Filiberto and Peter laid out the GigE, EtherCAT and fiber-optic (for the autocollimator light source) cables going from the VEA to the computer room.
Plan for tomorrow:
1. Get the C# code to read the CCD data and start measuring the tilt signal. At this point, the instrument is still in air, so will have excess noise and drifts.
2. Hook up the electronics for the capacitive control and the DACs for writing out the signals. Once the cabling from these to the SEI front ends is complete, the tilt data will be accessible to SEI.
3. Close up the rest of the vacuum can.

Added 686 channels. Removed 526 channels. See attached. All channels are now connected.
14:59 UTC Karen and Christina to end Y
15:07 UTC Jeff B. to LVEA to check on dust monitors and cleanrooms
15:13 UTC Corey checking chillers
15:17 UTC Corey done
15:18 UTC Filiberto installing electronics outside of ISCT1
15:33 UTC Not making it past CHECK_IR. Starting initial alignment.
15:38 UTC Y arm intermittently losing lock on green. Going to 45 MHz blends on both DOFs at end X and end Y per high microseism.
15:42 UTC John and Chandra to LVEA to work on vacuum
15:42 UTC Filiberto done
15:48 UTC Hugh, Krishna and Michael starting BRS install at end Y, opening rollup doors (WP 5789)
15:53 UTC Adjusting PR3 to improve COMM beatnote. Y arm still intermittently losing lock on green.
15:55 UTC Jeff B. done
15:59 UTC Can't seem to improve COMM beatnote above 3.8 with PR3. Moving on.
16:24 UTC Done initial alignment.
16:31 UTC Y arm VCO is hitting low limit?
16:40 UTC Jeff B. to HAM6 and beer garden
16:42 UTC Setting end Y ISI blends back to 90 mHz per people walking around in VEA
16:49 UTC Jeff B. done
17:03 UTC Jeff B. to end X and end Y, briefly in VEA at each
17:09 UTC John and Chandra done
17:16 UTC Truck to pick up item from Christina at gate, directed to LSB
17:16 UTC Jason to optics lab to look for part
17:36 UTC Jason back
17:48 UTC Jeff B. back
17:55 UTC Betsy to cleaning area
17:56 UTC Nutsinee to LVEA to look in cabinets
18:11 UTC Nutsinee done
18:13 UTC Terra and Nutsinee to end Y
18:24 UTC Filiberto to end Y to check on cabling requirements
18:41 UTC Filiberto, Hugh, Krishna and Michael back
18:43 UTC Putting IFO_LOCK to down
18:53 UTC Terra and Nutsinee back, did nothing
19:07 UTC Dave, Jim and Carlos to end Y to install BRS computer
19:38 UTC DAQ restart
19:39 UTC Filiberto to end Y to pull cables for BRS
20:14 UTC Filiberto and Peter to end Y
21:58 UTC Kiwamu attempting to lock
21:59 UTC Brynley to electronics bay to look at LEDs
22:05 UTC Brynley back
22:40 UTC Carlos changing vlan for DMT switch in MSR
22:49 UTC Dave shutting down fw0
22:55 UTC Kiwamu giving up on locking. Will lock simple Michelson to measure contrast defect.
Jim B., Carlos, Dave: Installation of BRS computer at end Y. New PI model on LSC frontend, h1omcpi. Updates to susitmpi, susetmxpi, susetmypi. DAQ restart to add h1omcpi model.
I investigated the HWS glitching issue in more detail. I ran the following tests to try to reproduce the glitching seen previously:
take command (number of ring buffers = 4): no glitching
take command (number of ring buffers = 1): no glitching
take command (number of ring buffers = 1): no glitching
take command (number of ring buffers = 4): no glitching
Run_HWS_ITMX and Run_HWS_ITMY at the same time: no glitching
Since we couldn't reproduce the glitching after everything was restarted, Kiwamu and I decided to create some Guardian diagnostics to run continuously on the HWS data. The following channels will be monitored and will issue warnings in the case of aberrant behaviour:
Daniel, Terra, Jim, Dave WP5790
A new model was installed on the final core of the h1lsc0 front end machine. Its name is h1omcpi, dcuid=9, rate = 64kHz.
A PI_MASTER.mdl file change was also made.
The new h1omcpi model was installed and started. The h1susitmpi, h1susetmxpi and h1susetmypi models were rebuilt and restarted. The DAQ was restarted.
Unfortunately this precipitated an instability in h1fw0 which we are investigating.
h1fw0 continued to be unstable. First I power cycled h1fw0, but this did not help. Then I powered down h1fw0 and h1ldasgw0 (in that sequence). When I powered up h1ldasgw0 it went through an extensive set of startup diagnostics. I then had to manually mount the QFS file system and NFS export it. Finally I powered up h1fw0. It's been running for 5 minutes, so it's too early to tell whether we have fixed this.
Darkhan, Kiwamu,
We found that the optical follower servo (aka OFS) for the Pcal Y had stopped running. Pressing various buttons on the MEDM screen, we could not bring it back up. We suspect some kind of hardware issue at the moment. According to trend data, it seems that it stopped running at around 18:30 UTC on Mar 18th for some reason. We will take a look at the hardware next Tuesday during the maintenance period.
J. Kissel, E. Hall, [R. Savage via remote communication at LLO]
This was re-discovered again tonight. Opened FRS ticket #5241 so we don't forget. Email sent to E. Goetz and T. Sadecki suggesting they address it on Monday 4/4.
Further information from our re-discovery: it looks like Kiwamu had left the calibration line amplitude in a low / off state. We restored the line amplitudes to nominal, and it caused excess noise in the DARM spectrum (lots of lines, a little bit of elevated noise floor). We don't see any sine-wave-like excitation in the OFS, TX, or RX PDs with a single calibration line on at reasonable amplitude (which is contradictory to the elevated noise in DARM).
Rick suggests:
- Check the AOM.
- Check the shutter.
- Check that the laser hasn't tripped.
Related alogs: 25932 and its comment
Yesterday, we started the preloading test. The CO2 lasers now have an additional common power of about 250 mW, corresponding to an additional common substrate lensing of roughly 31 uD.
Two major conclusions so far:
We need to decide what optimization we need for this common lensing. For testing purposes, we will keep running with this pre-loaded configuration for lock acquisition and for full lock with a 2 W PSL input.
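As a sanity check on the quoted figures, the implied common actuation coefficient follows from simple division (a back-of-the-envelope sketch from the numbers above, not a calibrated TCS constant):

```python
# Implied CO2 common actuation coefficient from the quoted figures.
added_power_W = 0.250    # ~250 mW additional common CO2 power
added_lensing_uD = 31.0  # ~31 uD additional common substrate lensing

coeff_uD_per_W = added_lensing_uD / added_power_W
print(coeff_uD_per_W)  # ~124 uD of substrate lensing per watt of CO2 power
```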
After the vent test, current pressures at the aux carts: HAM 7/8: 2e-5 Torr; HAM 9: 3e-5 Torr; HAM 11/12: 4e-5 Torr. We continue to monitor the pressures to detect any potential outer O-ring leaks.
John, Chandra Vented each annulus system currently being pumped with aux carts to see which are leaking into the diagonal volume (inner O-ring leak). Attached is a plot showing leaks in the HAM 7/8 and HAM 11/12 systems, each containing seven annuli. HAM 9 did not leak when vented.
Dave B rebooted the H1HWSMSR machine yesterday. Unfortunately, an investigation of the HWSX images shows that they are still glitching for an unknown reason.

Nutsinee, Kiwamu,
As requested by Aidan, we restarted the cameras and the computer yesterday after the LHO ISC meeting. We remotely shut down the cameras by pressing the buttons on the MEDM screens. Then we went to the MSR and pressed the reset button to restart the computer. Aidan reported that the corrupted-image issue had gone away before we shut off the cameras. After rebooting the computer and the cameras, we restarted the HWS codes using the old tmux sessions.
Also, Aidan discovered that when the images were corrupted, the code reported anomalously high mean pixel peak values which in fact exceeded the theoretical limit of the 12-bit data (4096 counts). This motivated us to set up a diagnostic test in the DIAG guardian, as reported in 26243.
Checked the PSL Chillers per FAMIS#4143.
Apologies; I went in to do my weekly reset of the FE watchdog last night, and on my way out I noticed that the Xtal chiller level was at minimum, so I added 250 mL to bring it up to the max line. I neglected to aLog this.
120 mL seems like a good bit of water for only a ~12-hour period. It's conceivable that Ed would have had to add 250 mL Wednesday night, since we fixed a water leak on Tuesday and had to drain the laser head circuit in order to do so. We had to add more water to the chiller to get it operational, but only added enough to do this (it was not topped off). Corey having to add 120 mL the next morning is somewhat concerning.
I have attached 3- and 7-day trends of the relative humidity of the HPO box and the PSL enclosure laser room. Our activity on Tuesday is obvious, and once we put the lid back on the HPO box the RH slowly increased (over the course of ~17 hours) until it reached its previous level. I see nothing that obviously indicates a bad water leak, although the slow RH increase in the HPO box could indicate we still have a slow leak. Will continue to monitor.
[Sheila, Jenne]
We measured the PRC1 ASC loops, and adjusted the suspension compensation such that we could increase the UGF to about 0.3 Hz for each. This included sending the ASC control signals to M3 of the PRM, in addition to M1. There's room to increase the gain, but we want to maintain our hierarchy with the PRC2 loops' 2 Hz UGF from last night. See screenshots for the new loop TFs.
We tried to modify the way the MICH ASC loops come on, so that we're less rough with the turn-on, but were unsuccessful. The idea was to have FM2 (0.03:0) off but FM3 (:0.03) on so that we don't have a pure integrator. We planned to turn the input on with zero gain, then ramp up the gain. Once the loop is on, we were going to engage FM2 to cancel FM3's pole and give us a pure integrator. This kept kicking us out of lock though, so we've backed out these changes, and just put a sleep in after the MICH loops come on, before any other loops come on. We did however make the MICH yaw loop use the -20dB FM1 initially and then ramp it off, in the same way that the pitch loop does. We also increased FM1's ramp time from 2 sec to 4 sec. This seemed to help enough that we're leaving it for now, and may come back to it in the future.
The "msboost"s in the CHARD loops are causing loop oscillations now, so we've commented them out of the guardian. We'll have to revisit why this is happening, and fix it (refl wfs phasing due to preloading?). This may need to be addressed before we can power up.
The environment is starting to get fussy (high winds and moderate useism), so we're going to leave things as-is for now. Locking should all be the same as usual through DC readout. We make no warranty for power-up, since we haven't had a chance to try it.
Plan: Get to Part3 or DC readout, measure phasing of refl wfs by shaking mcl and checking that signals all in I phase (hopefully). Should be okay with the new preloading, since it's all fine with regular hot IFO. Fix if needed. Then try powering up!
Krishna, Michael,
We left Seattle at 10 AM and reached LHO at about 2 PM. Hugh helped us unload the BRS-2 vacuum can and other parts into the 'loading bay' at the Y end station. Tomorrow morning we will clean them, move them in, and begin assembly, which is expected to take 2-3 days; commissioning/debugging may take a week. We will try to complete activities in the VEA by mid-day to minimize the impact on commissioning.
ITMY ring heater was left on for an overnight measurement. The upper and lower segments of the ring heater were each set to 1 W (i.e. 2 W in total). The interferometer is aligned but in the down state. I started the HWS codes on h1hwsmsr because they were not running. I have not updated the reference images for the HWS codes this time; therefore they use whatever reference images are in ~/temp/. The ring heater will be automatically switched off at around 6 am local time by a script running on opsws4.
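The shutoff script itself lives on opsws4 and is not shown here; a minimal sketch of the wait-until-6-am logic such a script would need (function name and structure are our assumption) is:

```python
from datetime import datetime, timedelta

def seconds_until(target_hour, now=None):
    """Seconds from `now` until the next local occurrence of target_hour:00."""
    if now is None:
        now = datetime.now()
    target = now.replace(hour=target_hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # already past today's target; use tomorrow's
    return (target - now).total_seconds()

# The script would sleep for seconds_until(6) and then zero the ring heater
# drive via ezca; the actual TCS ring-heater channel names are not quoted in
# this entry, so check them before using anything like this.
```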
The HWS code should have been running in a single tmux session containing two windows. Clearly this is too easy to circumvent. I'll talk to Jamie about seeing if we can get this set up as a daemon process instead with the new version of the code.
JimB and I are working on this issue at the moment. We are trying to get rid of the tmux sessions by implementing monit. As of yesterday, we succeeded in running the Hartmann codes under the management of monit. The implementation is still underway and is about 80% done at the moment.
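For reference, a monit stanza for this kind of process supervision typically looks like the following (a hypothetical sketch: the process name, match pattern and script paths are placeholders, not the actual h1hwsmsr configuration):

```
# /etc/monit/conf.d/hws_itmx  (hypothetical names and paths)
check process hws_itmx matching "run_hws_itmx"
    start program = "/opt/hws/bin/start_hws_itmx.sh"
    stop program  = "/opt/hws/bin/stop_hws_itmx.sh"
```

With a `start program` defined, monit restarts the process automatically whenever it finds it not running, which replaces the manual restart of a dead tmux session.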