BRS installation continues at end Y. Possible TCS and ASC work over the weekend. Landscapers will be spraying for weeds over the weekend. There will be a preliminary planning meeting at 10 am for the upcoming vent.
Preliminary conclusion: the DARM cavity pole seems to be a strong function of the differential lensing. I was able to change it from 357 Hz to 220 Hz (!!!)
I will post more details tomorrow.
The cavity pole measurement is not valid before t = 80 min, nor in the time band approximately between 230 and 250 min. The interferometer was locked on DC readout with ASC fully engaged. The PSL power stayed at 2 W throughout the measurement.
Having learned this behavior, I would like to do the following in the next test:
By the way, the second attachment shows trends of various channels during the test.
Actually, Hang pointed out that SRM and SR2 showed much more visible reactions in their alignment. See the attached.
In particular, SR2 pitch seems to trace the lensing curve.
Also, looking at PRM and PR2, we did not see drift or anything interesting.
A simulation with the substrate lensings reported in the elog did not show a large variation of the cavity pole: about 1% or so. My suspicion is that the change in differential lensing causes the IFO working point to change: alignment or longitudinal offsets? In my simulation the longitudinal working point is derived from simulated error signals, so I don't see any offset in the locking error signals.
The differential lens change is about 18 microdiopters. For what it's worth, there is ~2.3% of power scattered from the TEM00 mode on a double-pass through such a lens. Whether such a purely differential lens in the SRC would manifest solely as a 2.3% round-trip loss in the differential TEM00 mode of the arms is questionable. I still need to run the numbers for the effect on the DARM cavity pole if we simply added this loss to the SRM mirror.
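As a sanity check on that 2.3% figure: for a weak thin lens acting on a fundamental Gaussian mode, second-order perturbation theory gives a scattered power fraction of (pi * delta * w^2 / (2*lambda))^2, where delta is the lens power and w the beam radius. A minimal sketch, assuming a ~53 mm beam radius and lambda = 1064 nm (both assumptions on my part, not numbers from the entry above):

```python
import numpy as np

def tem00_scatter_loss(delta_diopters, w_m, lam_m=1.064e-6):
    """Fractional power scattered out of TEM00 by a weak thin lens.

    Expanding the quadratic phase phi(r) = -k*delta*r^2/2 to second
    order and averaging over the Gaussian intensity profile gives
    loss = <phi^2> - <phi>^2 = (pi * delta * w^2 / (2*lambda))^2.
    """
    return (np.pi * delta_diopters * w_m**2 / (2.0 * lam_m))**2

# Double pass through an 18 uD differential lens -> 36 uD total
loss = tem00_scatter_loss(36e-6, 53e-3)
print("Scattered power fraction: {:.1%}".format(loss))
```

With these assumed numbers the result comes out close to the ~2.3% quoted above.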
Here are some more small points to note.
[Two cavity pole measurements]
At the beginning of the run, before I started changing the CO2 power, I ran a Pcal swept-sine measurement in order to get the cavity pole frequency. The DARM open loop was also measured within 10 minutes or so, so that we can take out the loop suppression. In addition, I ran another pair of Pcal and DARM open-loop measurements to double-check the measurement. The attachment below shows the transfer functions with fits. The fitting was done with a weighted least-squares algorithm using LISO.
As shown in the plot, the shift in cavity pole is obvious. Also the optical gain is different between the two measurements.
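For reference, a single complex pole can also be fit without LISO: writing H(f) = g / (1 + i f/f_p), the inverse response 1/H = 1/g + i f/(g f_p) is linear in two real parameters, so ordinary least squares recovers g and f_p directly. A minimal sketch on synthetic data (the 357 Hz value is from the measurement above; everything else is illustrative):

```python
import numpy as np

def fit_single_pole(freqs, H):
    """Fit H(f) = g / (1 + 1j*f/fp) via linear least squares on 1/H.

    Re(1/H) = 1/g and Im(1/H) = f/(g*fp), so both parameters are
    obtained from simple projections of the inverse response.
    """
    inv = 1.0 / H
    a = np.mean(inv.real)                            # estimate of 1/g
    b = np.sum(freqs * inv.imag) / np.sum(freqs**2)  # estimate of 1/(g*fp)
    return 1.0 / a, a / b                            # (g, fp)

# Synthetic noiseless swept-sine data with a 357 Hz pole
f = np.linspace(10, 1000, 50)
H = 2.5 / (1 + 1j * f / 357.0)
g_fit, fp_fit = fit_single_pole(f, H)
print(g_fit, fp_fit)
```

On noiseless data this recovers the injected g and f_p exactly; with real data one would weight the projections by the measurement coherence.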
[Evolution of the sensing function throughout the test]
The optical gain and cavity pole are negatively correlated. The trend of the optical gain looks very similar to that of the power recycling gain, but the variation in the optical gain is much larger: it increased by up to 20% relative to the beginning. As pointed out by Valera on the ISC call today, a fraction of the variation in the optical gain could be due to the OMC mode matching.
[Alignment drift]
As Gabriele pointed out in the comment, it may be possible that the CO2 lasers affected the alignment of the interferometer and changed the amount of loss in some parts of the interferometer, or had some other impact on the cavity pole. Hang and I have looked at trends of the optical levers during that time.
There are two optics that seemingly reacted to the differential lensing: BS yaw and ETMX pitch. They showed a kink at the time the CO2 power changed. In addition, ETMY pitch slowly drifted by 2 urad and ETMY yaw moved by roughly 1 urad. Other large optics also moved, but stayed within 1 urad. From a naive point of view, the alignment does not seem to explain the cavity pole going down and then back up during the measurement, because none of the optics clearly showed that kind of down-and-up behavior. However, it is possible that the true misalignment was masked by drift of the oplevs themselves.
[Online cavity pole measurement]
The cavity pole was measured by injecting a line at 331.9 Hz at the DARM output. The DARM loop is notched out at the same frequency. The measurement method is described in 18436.
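In essence (my paraphrase of the idea, not the exact implementation in 18436): for a single-pole sensing function, the phase of the DARM response at the line frequency is -arctan(f_line/f_p), so continuously tracking the demodulated phase of the 331.9 Hz line tracks the pole frequency. A sketch under that single-pole assumption:

```python
import numpy as np

def pole_from_line_phase(f_line, phase_rad):
    """Infer the cavity pole frequency from the measured phase of an
    injected line, assuming a single-pole response H = g/(1 + 1j*f/fp):
    phase = -arctan(f/fp)  =>  fp = f / tan(-phase).
    """
    return f_line / np.tan(-phase_rad)

# Example: the phase a 357 Hz pole would produce at the 331.9 Hz line
phase = np.angle(1.0 / (1 + 1j * 331.9 / 357.0))
fp = pole_from_line_phase(331.9, phase)
print(fp)
```

The overall optical gain drops out of the phase, which is what makes a single line sufficient for tracking the pole.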
Some simulations I did months ago for the MIT commissioning meeting (https://dcc.ligo.org/LIGO-G1500593) showed that the cavity pole is very sensitive to SRC matching. I therefore expect the cavity pole to be also very sensitive to SRC alignment, as seems to be suggested by the SR* mirror drifts.
Kiwamu, Nutsinee
------------------------------------------------------------------------------------------------------------------------------
Code added to line 115 - line 136
The first test checks whether the number of "peaks" measured by the HWS code makes sense. If below 800, there is not enough return SLED light. If above 4096 (12 bits), something's wrong.
The second test checks whether the HWS code time stamp agrees with the actual GPS time. If |dt| > 15 seconds, the code must have stopped running.
------------------------------------------------------------------------------------------------------------------------------
# Note: SYSDIAG and ezca are provided by the Guardian environment;
# this code also needs "import subprocess" and "import numpy as np".
@SYSDIAG.register_test
def TCS_HWS():
    # Make sure the peak counts are sensible.
    # Below 800: not enough return SLED light.
    # Above 4096: exceeds the 12-bit theoretical maximum -- something's wrong.
    pkcnts_X = ezca['TCS-ITMX_HWS_TOP100_PV']
    pkcnts_Y = ezca['TCS-ITMY_HWS_TOP100_PV']
    if not (800 < pkcnts_X < 4096):
        yield "HWSX Peak Counts BAD!!"
    if not (800 < pkcnts_Y < 4096):
        yield "HWSY Peak Counts BAD!!"
    # Check that the HWS GPS time agrees with the current GPS time
    a = subprocess.Popen(['tconvert', 'now'], stdout=subprocess.PIPE)
    gpstime, err = a.communicate()
    gpsfloat = float(gpstime.strip())
    # Fetch the HWS GPS times
    liveGPS_X = ezca['TCS-ITMX_HWS_LIVE_ACQUISITION_GPSTIME']
    liveGPS_Y = ezca['TCS-ITMY_HWS_LIVE_ACQUISITION_GPSTIME']
    if np.abs(gpsfloat - liveGPS_X) > 15:
        yield "HWSX Code Stopped Running"
    if np.abs(gpsfloat - liveGPS_Y) > 15:
        yield "HWSY Code Stopped Running"
Michael, Hugh, Krishna
A lot of progress was made with the installation of BRS-2 today. In the morning, we cleared out a space on the VEA floor. We had to slightly reposition the SEI ground seismometer (STS2) and the PEM Guralp seismometer. We then cleaned the BRS-2 vacuum can, foam box and other parts and moved them in. Upon opening the vacuum can, we noticed that the epoxy (TorrSeal) on one of the capacitor plates had come off, likely during the drive. We found a suitable alternative (Loctite 1c) and reattached the capacitor plate.
We then proceeded with the assembly. The beam-balance was pulled out, the flexures were carefully attached, and the beam-balance was then reinserted into the can and suspended from the flexures. We did a crude adjustment of the horizontal center of mass. The vacuum can was then partially closed up and the autocollimator attached to it. We then had to realign the optics to get the reflections from the reference mirror and the main mirror aligned well. Tomorrow we will continue with autocollimator adjustments to get good reflection patterns on the CCD.
In the meantime, Jim B and Carlos installed the BRS-2 Beckhoff computer at end Y. Also, Filiberto and Peter laid out the GigE, EtherCAT and fiber-optic (for the autocollimator light source) cables going from the VEA to the computer room.
Plan for tomorrow:
1. Get the C# code to read the CCD data and start measuring the tilt signal. At this point, the instrument is still in air, so it will have excess noise and drifts.
2. Hook up the electronics for the capacitive control and the DACs for writing out the signals. Once the cabling from these to the SEI front ends is complete, the tilt data will be accessible to SEI.
3. Close up the rest of the vacuum can.
I attach a collection of pictures I took during this day's installation. I was primarily focused on pictures of the electronics readout system that's new for BRS 2.0, but there are also some pictures of the balance and vacuum can. Let me know if you need any of the originals. For my record, the originals live on my laptop, in the folder /Users/kissel/Desktop/scratch/2016-03-24/
Added 686 channels. Removed 526 channels. See attached. All channels are now connected.
14:59 UTC Karen and Christina to end Y
15:07 UTC Jeff B. to LVEA to check on dust monitors and cleanrooms
15:13 UTC Corey checking chillers
15:17 UTC Corey done
15:18 UTC Filiberto installing electronics outside of ISCT1
15:33 UTC Not making it past CHECK_IR. Starting initial alignment.
15:38 UTC Y arm intermittently losing lock on green. Going to 45 MHz blends on both DOFs at end X and end Y per high microseism.
15:42 UTC John and Chandra to LVEA to work on vacuum
15:42 UTC Filiberto done
15:48 UTC Hugh, Krishna and Michael starting BRS install at end Y, opening rollup doors (WP 5789)
15:53 UTC Adjusting PR3 to improve COMM beatnote. Y arm still intermittently losing lock on green.
15:55 UTC Jeff B. done
15:59 UTC Can't seem to improve COMM beatnote above 3.8 with PR3. Moving on.
16:24 UTC Done initial alignment.
16:31 UTC Y arm VCO is hitting low limit?
16:40 UTC Jeff B. to HAM6 and beer garden
16:42 UTC Setting end Y ISI blends back to 90 mHz per people walking around in VEA
16:49 UTC Jeff B. done
17:03 UTC Jeff B. to end X and end Y, briefly in VEA at each
17:09 UTC John and Chandra done
17:16 UTC Truck to pick up item from Christina at gate, directed to LSB
17:16 UTC Jason to optics lab to look for part
17:36 UTC Jason back
17:48 UTC Jeff B. back
17:55 UTC Betsy to cleaning area
17:56 UTC Nutsinee to LVEA to look in cabinets
18:11 UTC Nutsinee done
18:13 UTC Terra and Nutsinee to end Y
18:24 UTC Filiberto to end Y to check on cabling requirements
18:41 UTC Filiberto, Hugh, Krishna and Michael back
18:43 UTC Putting IFO_LOCK to down
18:53 UTC Terra and Nutsinee back, did nothing
19:07 UTC Dave, Jim and Carlos to end Y to install BRS computer
19:38 UTC DAQ restart
19:39 UTC Filiberto to end Y to pull cables for BRS
20:14 UTC Filiberto and Peter to end Y
21:58 UTC Kiwamu attempting to lock
21:59 UTC Brynley to electronics bay to look at LEDs
22:05 UTC Brynley back
22:40 UTC Carlos changing vlan for DMT switch in MSR
22:49 UTC Dave shutting down fw0
22:55 UTC Kiwamu giving up on locking. Will lock simple Michelson to measure contrast defect.
Jim B., Carlos, Dave: Installation of BRS computer at end Y. New PI model on LSC frontend, h1omcpi. Updates to susitmpi, susetmxpi, susetmypi. DAQ restart to add h1omcpi model.
I investigated the HWS glitching issue in more detail. I ran the following tests to try to reproduce the glitching seen previously:
take command (number of ring buffers = 4): no glitching
take command (number of ring buffers = 1): no glitching
take command (number of ring buffers = 1): no glitching
take command (number of ring buffers = 4): no glitching
Run_HWS_ITMX and Run_HWS_ITMY at the same time: no glitching
Since we couldn't reproduce the glitching after everything restarted, Kiwamu and I decided to create some Guardian diagnostics to run continuously on the HWS data. The following channels will be monitored, and a warning will be issued in the case of aberrant behaviour:
Daniel, Terra, Jim, Dave WP5790
A new model was installed on the final core of the h1lsc0 front end machine. Its name is h1omcpi, dcuid=9, rate = 64kHz.
A PI_MASTER.mdl file change was also made.
The new h1omcpi model was installed and started. The h1susitmpi, h1susetmxpi and h1susetmypi models were rebuilt and restarted. The DAQ was restarted.
Unfortunately this precipitated an instability in h1fw0 which we are investigating.
h1fw0 continued to be unstable. First I power cycled h1fw0, but this did not help. Then I powered down h1fw0 and h1ldasgw0 (in that sequence). When I powered up h1ldasgw0 it went through an extensive set of startup diagnostics. I then had to manually mount the QFS file system and NFS export it. Finally I powered up h1fw0. It's been running for 5 minutes, so it's too early to tell if we have fixed this.
Darkhan, Kiwamu,
We found that the optical follower servo (aka OFS) for Pcal Y had stopped running. Pressing various buttons on the MEDM screen, we could not bring it back up. We suspect some kind of hardware issue at the moment. According to trend data, it stopped running at around 18:30 UTC on Mar 18 for some reason. We will take a look at the hardware next Tuesday during the maintenance period.
J. Kissel, E. Hall, [R. Savage via remote communication at LLO]
This was re-discovered tonight. Opened FRS ticket #5241 so we don't forget. Email sent to E. Goetz and T. Sadecki suggesting they address it on Monday 4/4.
Further information from our re-discovery: it looks like Kiwamu had left the calibration line amplitude in a low/off state. We restored the line amplitudes to nominal, and it caused excess noise in the DARM spectrum (lots of lines, a slightly elevated noise floor). We don't see any sine-wave-like excitation in the OFS, TX, or RX PDs with a single calibration line on at reasonable amplitude (which is contradictory to the elevated noise in DARM).
Rick suggests:
- Check the AOM.
- Check the shutter.
- Check that the laser hasn't tripped.
Related alogs: 25932 and its comment
Yesterday, we started the preloading test. The CO2 lasers now have an additional common power of about 250 mW, corresponding to an additional common substrate lensing of roughly 31 uD.
Two major conclusions so far:
We need to decide what optimization we need to do for this common lensing. For testing purposes, we will keep running with this pre-loaded configuration for lock acquisition and for full lock with 2 W PSL power.
After vent test, current pressures at aux carts: HAM 7/8: 2e-5 Torr HAM 9: 3e-5 Torr HAM 11/12: 4e-5 Torr Continue to monitor pressures to detect any potential outer oring leaks.
John, Chandra Vented each annuli system currently being pumped with aux carts to see which are leaking into diagonal volume (inner oring leak). Attached is plot showing leaks in HAM 7/8 and HAM 11/12 systems, each containing seven annuli. HAM 9 did not leak when vented.
Dave B rebooted the H1HWSMSR machine yesterday. Unfortunately, an investigation of the HWSX images shows that they are still glitching for an unknown reason.

Nutsinee, Kiwamu,
As requested by Aidan, we restarted the cameras and the computer yesterday after the LHO ISC meeting. We remotely shut down the cameras by pressing the buttons on the MEDM screens. Then we went to the MSR and pressed the reset button to restart the computer. Aidan reported that the corrupted image issue had gone away before we shut off the cameras. After rebooting the computer and cameras, we restarted the HWS codes using the old tmux sessions.
Also, Aidan discovered that when the images were corrupted, anomalously high peak pixel values were reported, in fact exceeding the theoretical limit of the 12-bit data (4096 counts). This motivated us to set up a diagnostic test in the DIAG guardian, as reported in 26243.
Checked the PSL Chillers per FAMIS#4143.
Apologies; I went in to do my weekly reset of the FE Watchdog last night, and on my way out I noticed that the Xtal chiller level was at minimum, so I added 250 mL to bring it up to the max line. I neglected to aLog this.
120mL seems like a good bit of water for only an ~12 hour period. It's conceivable that Ed would have had to add 250mL Wednesday night since we did fix a water leak on Tuesday and had to drain the laser head circuit in order to do so. We had to add more water to the chiller to get it operational, but only added enough to do this (was not topped off). Corey having to add 120mL the next morning is somewhat concerning.
I have attached 3 and 7 day trends of the relative humidity of the HPO box and the PSL enclosure laser room. Our activity on Tuesday is obvious, and once we put the lid back on the HPO box the RH slowly increased (over the course of ~17 hours) in the box until it reached its previous level. I see nothing that to me obviously indicates that we have a bad water leak, although the slow RH increase in the HPO box could indicate we still have a slow water leak. Will continue to monitor.
ITMY ring heater was left on for an overnight measurement. The upper and lower segments of the ring heater were set to 1 W each (i.e. 2 W in total). The interferometer is aligned but in the down state. I started the HWS codes on h1hwsmsr because they were not running. I have not updated the reference images for the HWS codes this time, so they use whatever reference images are in ~/temp/. The ring heater will be automatically switched off at around 6 am local time by a script running on opsws4.
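The switch-off script amounts to a timed write to the ring heater power setpoints. A minimal sketch of that idea (the channel names and the ezca writes are placeholders, not the actual script on opsws4; only the wait-time calculation is shown in full):

```python
import datetime

def seconds_until(hour, now=None):
    """Seconds from `now` until the next local occurrence of `hour`:00."""
    now = now or datetime.datetime.now()
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

# Demo: from 5:00 am, 6:00 am is one hour away
print(seconds_until(6, datetime.datetime(2016, 3, 25, 5, 0, 0)))  # 3600.0

# Usage in the actual script would be along these lines
# (channel names below are hypothetical):
# time.sleep(seconds_until(6))
# ezca['TCS-ITMY_RH_UPPER_POWER'] = 0
# ezca['TCS-ITMY_RH_LOWER_POWER'] = 0
```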
The HWS code should have been running in a single tmux session containing two windows. Clearly this is too easy to circumvent. I'll talk to Jamie about seeing if we can get this set up as a daemon process instead with the new version of the code.
Jim B and I are working on this issue at the moment. We are trying to get rid of the tmux sessions by implementing monit. As of yesterday, we succeeded in running the Hartmann codes under the management of monit. The implementation is still underway, about 80% done at the moment.
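For the record, the monit setup amounts to a stanza per HWS process along the following lines (process name, match pattern and paths here are illustrative placeholders, not our actual configuration):

```
# Placeholder monit stanza -- names and paths are illustrative only
check process hws_itmx matching "run_hws.*ITMX"
    start program = "/usr/bin/python /path/to/run_hws.py ITMX"
    stop program  = "/usr/bin/pkill -f run_hws.*ITMX"
```

With a stanza like this, monit restarts the process automatically if it dies, which is the behavior the tmux sessions could not give us.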