...as per WP6316. There is an ≈ -1.6 µrad offset in YAW due to a 'kick' when the positioner is turned on/off; multiple attempts to offset it out made no difference.
Currently h1oaf0 has been stable for 22 hours following the one-stop cable-transceiver replacement (as suggested by Daniel).
When the oaf stopped driving the DAC, the h1iopoaf0 model's proc status file showed a very large value for adcHoldTimeEverMax (in the 90s) while most systems showed this value around 17 µs.
If we take this value as an indicator of a failing PCI-bus extender transceiver, I have written a script to scan all the front end computers and report it (a sketch follows the table below). This was run at 10:10 PST and the results are tabulated below.
Note that they are all in the 16-20 µs range except for the h1suse[x,y] systems, which are in the 70s. The end station SUS machines are the newer type, and this is a known issue not related to possible OneStop fibers.
model | adcHoldTimeEverMax (µs) |
h1iopsush2a | 17 |
h1iopsush2b | 18 |
h1iopsush34 | 19 |
h1iopsush56 | 20 |
h1iopsusauxh34 | 18 |
h1iopsusauxh56 | 18 |
h1iopsusauxh2 | 18 |
h1iopsusauxb123 | 19 |
h1ioppsl0 | 17 |
h1iopsusex | 74 |
h1iopseiex | 21 |
h1iopiscex | 18 |
h1iopsusauxex | 20 |
h1iopsusey | 71 |
h1iopseiey | 20 |
h1iopiscey | 18 |
h1iopsusauxey | 19 |
h1iopoaf0 | 17 |
h1iopsusb123 | 17 |
h1iopseib1 | 18 |
h1iopseib2 | 18 |
h1iopseib3 | 21 |
h1ioplsc0 | 17 |
h1iopseih16 | 19 |
h1iopseih23 | 16 |
h1iopseih45 | 17 |
h1iopasc0 | 17 |
h1ioppemmx | 18 |
h1ioppemmy | 19 |
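For reference, a minimal sketch of the kind of scan loop the script performs, assuming passwordless ssh to each front end and that each IOP model publishes its status under /proc/<model>/status (the host list and the proc-file line format here are assumptions for illustration, not the actual script):

#!/usr/bin/env python
import subprocess

# Hypothetical host -> IOP model map; the real script covers all front ends.
FRONTENDS = {
    'h1oaf0': 'h1iopoaf0',
    'h1susex': 'h1iopsusex',
    'h1susey': 'h1iopsusey',
}

for host, model in sorted(FRONTENDS.items()):
    try:
        out = subprocess.check_output(
            ['ssh', host, 'cat', '/proc/%s/status' % model],
            universal_newlines=True)
    except subprocess.CalledProcessError:
        print('%-16s | unreachable |' % model)
        continue
    for line in out.splitlines():
        # Assumes an "adcHoldTimeEverMax=<value>" style line in the proc file.
        if 'adcHoldTimeEverMax' in line:
            print('%-16s | %s |' % (model, line.split('=')[-1].strip()))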
As the CP3 Dewar is being filled right now with the LLCV set to 21% in manual mode, the exhaust pressure rose to 0.5 psi and the TCs are reading lower-than-normal temperatures. So I lowered the LLCV to 16% open, which was the setting we used after the last Dewar fill.
Per WP 6320, yesterday I opened the exhaust bypass valves on the cryopumps along the X-arm and on CP1. CP3 and CP4 at MY are already open; the only one left to open is CP7 at EY. These valves will remain open during normal operations as an added layer of protection against over-pressurization. LLO has been operating in this mode for some time.
WP 6319 Updated nds2_client software to version 0.13.1 for Ubuntu 12, Ubuntu 14, and Debian 8.
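For anyone picking up the new client, a minimal fetch with the nds2-client Python bindings looks roughly like this (server, GPS times, and channel below are placeholders):

import nds2

# Open a connection to an NDS2 server and fetch 16 s of one channel.
conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
bufs = conn.fetch(1163000000, 1163000016, ['H1:GDS-CALIB_STRAIN'])
print(bufs[0].channel.name, len(bufs[0].data))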
TITLE: 11/15 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
LOG:
10:32 Set SRC1_P and SRC1_Y gain to 0 per Sheila's recommendation. Reopened POP beam diverters to monitor POP90 signal. Successfully made it to NLN. LLO lost lock just as we were getting to NLN, so I am going to wait 30-45 minutes before making Kissel's measurements and going to Observe to see if things seem stable.
11:04 Running a2l_min_LHO.
11:09 PI mode 27 ringing up. Changed phase from 130 to 180 and gain from 3000 to 4000.
11:10 Running a2l_min_PR2.
11:15 Running a2l_min_PR3.
11:26 Closed POP beam diverters. Starting Kissel's PCAL2DARMTF measurement.
11:37 Finished PCAL2DARMTF. Saved as /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/Measurements/PCAL/2016_11_15_H1_PCAL2DARMTF_4to1200Hz_fasttemplate.xml.
11:38 Started Kissel's DARMOLGTF measurement.
11:51 Saved DARMOLGTF measurement as /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/Measurements/DARMOLGTF/2016_11_15_H1_DARM_OLGTF_4to1200Hz_fasttemplate.xml.
11:55 Restarted PCAL lines.
11:56 Set to Observing.
12:15 Out of Observing to damp PI mode 28. Changed phase from 60 to -60, no gain change.
12:23 Observing
12:32 Lockloss. From the error signal striptools, it appears that MICH_P, DHARD_P, and SRC1_P rang up over the course of 10 minutes prior to lockloss. Recall that I had set the SRC1 gains to 0 at the beginning of this lock stretch. Perhaps it needed to be turned back on at some point during the lock, but wasn't an issue for 2 hours or so. HAM6 ISI WD tripped at lockloss.
14:29 NLN. Took SRC1 gains to 0 again since it seemed to work last time.
14:35 Observing.
14:52 PI mode 28 ringing up. Changed phase from -60 to 60. Forgot to go out of Observing to do so.
After a bit of a struggle to get to NLN, with SRC1 loop turned off, we are back to Observing. Unfortunately, coincident with LHO coming to full lock, LLO lost lock.
TITLE: 11/15 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Travis
SHIFT SUMMARY:
A bit of a rough shift, with H1 making it to NLN but only staying locked on the order of ~25 min. Observed ASC signals growing (over a period of 4-5 min) just before it breaks out of lock.
LOG:
Notes:
At 30 (or 32) W we're actively damping PI modes 3, 26, 27, 28. I spent some time over the last two days with damping turned off one mode at a time to get the natural ring-ups of each. Rough values are below (I got to see one ring-up each for modes 3, 27, and 28, and two for mode 26); a sketch of the tau extraction follows the table.
Mode # | Freq | Optic | tau |
3 | 15606 Hz | ITMX | 0.068 s |
26 | 15009 Hz | ETMY | 0.092 s |
27 | 47495 Hz | ETMY | 0.024 s |
28 | 47477 Hz | ETMY | 0.022 s |
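The tau values above come from watching each mode's free ring-up; a minimal sketch of how such a time constant can be extracted from an amplitude envelope (the input array and sample rate are placeholders for however the mode amplitudes were actually recorded):

import numpy as np

def fit_tau(amp, fs):
    # Fit amplitude(t) = A0 * exp(t / tau): log(amp) is linear in t
    # with slope 1/tau, so a first-order polyfit recovers tau.
    t = np.arange(len(amp)) / float(fs)
    slope, _ = np.polyfit(t, np.log(amp), 1)
    return 1.0 / slope

# e.g. fit_tau(envelope_samples, 16.0) -> tau in seconds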
I also ran Dr. Evan's circulating-power-from-HEPI script on a recent power-up and am getting an estimated circulating arm power of 133.67 kW for 30 W PSL power, though the HEPI displacement should be recalibrated. The usual simple calculation gives 30 W (input power) * 0.88 (IMC and Faraday) * 0.5 (50/50 BS) * 40 W/W (PRG) * 283 W/W (arm build-up) = 149.4 kW.
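Spelling out that simple estimate (numbers taken from the entry above):

# Circulating arm power from the quoted factors.
P_in = 30.0     # W, PSL input power
T_io = 0.88     # IMC and Faraday throughput
bs   = 0.5      # 50/50 beamsplitter split to one arm
prg  = 40.0     # power recycling gain (W/W)
arm  = 283.0    # arm cavity build-up (W/W)
print('%.1f kW' % (P_in * T_io * bs * prg * arm / 1e3))   # -> 149.4 kW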
Summary: Back to locking, but locks do not last long so far (15-25min).
After Sheila, Patrick, & Jeff restored H1 to a point where we could lock it, we have been going to Nominal Low Noise. Unfortunately, it only stays locked for a little while. During the last NLN lock, we could see:
Have turned off some Calibration Lines because I want to run some measurements for Jeff K. (will need to remember to turn them back on).
Chatted with Sheila about the recent locklosses (noted above) & Sheila suggested a few things to try:
Run Lockloss Tool
Zoomed in on SRC1_P_OUT & DHARD_P_OUT to see what frequency they were ringing up at; it looks like just under 0.5 Hz (see attached).
Take SRC1 Pit to 0.0, Open Beam Diverter, & Tweak SRM
Haven't tried this yet. Want to either skip the CLOSE BEAM DIVERTER step or close the diverters after the fact. Then want to take SRC1_P's gain to zero, and then tweak the alignment of SRM such that you:
I added a path in ISC_LOCK to skip the Reduce_RF9_modulation_depth state. It seems like the last 2 locklosses were at that state, and Sheila and I aren't sure why. It's also not really clear that we need to reduce the 9 MHz modulation depth, so we're going to run for the night without the reduction.
Perhaps tomorrow we can look at some BruCos to see if this really changes anything other than what we see in the OMC trans camera when we turn the exposure up super high.
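For the record, skipping a state in a guardian graph amounts to adding an alternate edge around it; schematically (a sketch with illustrative state names, not the actual ISC_LOCK code):

# Guardian edges are (from_state, to_state) tuples; adding a direct edge
# lets the requested path route around the RF9 reduction state.
edges = [
    ('DC_READOUT_TRANSITION', 'REDUCE_RF9_MODULATION_DEPTH'),  # existing path
    ('REDUCE_RF9_MODULATION_DEPTH', 'LOW_NOISE'),
    ('DC_READOUT_TRANSITION', 'LOW_NOISE'),                    # new bypass edge
]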
Here is an overview of how the h1oaf0 problem is presenting and what has been tried so far:
Problems started when we added a 7th ADC for PEM expansion.
The initial ADC card was a PMC card on a PMC-to-PCIe adapter.
An old style adapter was used; the computer would not attempt to boot (no BIOS screen) if the chassis was powered.
Eventually a PCIe ADC was installed as the 7th card.
h1iopoaf0 was recording ADC/DAC errors at random times.
Investigating the proc files, each event is an ADC/DAC timing error:
The ADC records a very high adcHoldTimeEverMax (>90 µs), which is recoverable.
The 16-bit DAC records a fifo_status of 0 (not first quarter), which is not recoverable.
----------------------------------------------------------------------------------------------
Thu 11/10:
Removed 7th ADC from chassis, reverted h1iopoaf0.mdl model back to earlier version.
Reseated and screwed down all cards.
----------------------------------------------------------------------------------------------
Fri 11/11:
09:55 PST
Replaced DC power supply in chassis.
Reseated ADC, DAC, BIO and interface cards.
Replaced IO chassis with the one from x1psl0; OneStop/BIO/ADC/DAC/interface/ribbon-cables transferred over.
Timing and OneStop cards came with the new chassis.
Replaced 1st ADC set (ADC+ribbon+i/f) from x1psl0.
----------------------------------------------------------------------------------------------
Mon 11/14:
Went back to the original chassis.
Installed new OneStop card in chassis (original card had problems with its heat-sink).
Swapped first and second ADC card sets (ADC+ribbon+i/f).
Replaced 18-bit DAC with modified card.
Replaced SFP on fanout and returned it to the second slot. Replaced SFP on timing slave.
Pulled power cords out of h1oaf0 to get it to boot.
Replaced long-run OneStop fiber between computer and chassis.
We are currently testing whether the fiber change has helped; it has been running for 4 hours so far.
Attached plot shows the last 14 days of the h1iopoaf0 STATE_WORD trend, including the ADC+DAC+DK+OVF events.
TITLE: 11/15 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Started the shift with the IFO locked at DC_READOUT. Shortly after the morning meeting the h1oaf frontend was powered down to try another hardware fix. When it was powered back up it would not boot, and it glitched the Dolphin network. Upon recovery, Jeff, Sheila and Cheryl restored the positions of the optics. I ran through an initial alignment and once again had trouble locking the SRC. In this case it appeared to be a misalignment of SR3 (see Jenne's alog). The next difficulty was with locking ALS DIFF. At first we thought that this was due to a high ETMY bounce mode, however the problem persisted after it was damped. Sheila and Jeff then found wrong settings in two LSC DARM filter banks. When these were fixed we were able to get through locking ALS DIFF. I then handed the IFO over to Corey. Sheila, Jenne and Jeff are helping with further recovery.
LOG:
16:24 UTC Restarted video2. The uptime of the Beckhoff computers was pausing on the CDS overview MEDM. The h1oaf timing bit is flashing.
17:11 UTC Jim B. took h1oaf down. Richard to CER to work on the h1oaf IO chassis. Verbal alarm log is constantly repeating 'TypeError while running test: TCS'.
17:29 UTC Kyle to LVEA to assess requirements for bakeout of vacuum gauge valves
17:38 UTC Kyle back
17:46 UTC Stopped verbal alarms to see if restarting would clear the repeating error. It now crashes when I try to start it with: 'NameError: global name 'to_terminal' is not defined'.
18:00 UTC Richard to LVEA to take pictures by the PSL rack while Jim B. restarts software.
18:04 UTC h1oaf frontend computer will not boot. Dolphin network has glitched.
18:45 UTC Frontends are being brought back. A complete power down of h1oaf with the power cord removed was necessary for it to boot on powerup. I have successfully restarted verbal alarms.
18:51 UTC Richard done in CER. Jeff, Sheila, Cheryl bringing optics back. Jason bringing PSL back.
19:21 UTC Starting initial alignment
19:50 UTC Cheryl had to move IM4 to lock the X arm on IR in initial alignment
20:21 UTC h1oaf crashed. Used script to restart.
20:33 UTC h1oaf crashed
20:36 UTC Richard to CER
20:49 UTC Richard back
20:50 UTC Filiberto to LVEA to restart TCS lasers
20:57 UTC Initial alignment done
21:02 UTC Filiberto back
21:19 UTC Losing lock at ALS DIFF. Jim W. adjusted the fiber polarization in the MSR.
21:22 UTC Fire department through gate
21:25 UTC Still losing lock at ALS DIFF
22:00 UTC Fire department done at LSB and leaving site
23:03 UTC Chandra to CP3 to fill with valve at 100% open
Just an update on where we are--
Since the computer problems this morning, we struggled with locking ALS. Patrick, Jeff K., and I finally tracked it down, using conlog, time machine, etc., to a wrong setting in SDF for the DARM filter bank. We edited the ALS guardian so that these filter banks now get set with "only on" before locking ALS DIFF; a sketch of what that looks like is below.
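Schematically, that kind of guardian change uses the ezca LIGOFilter interface to force a bank to a known switch configuration (a minimal sketch; the bank name and switch list here are illustrative, not the actual edit):

# Runs inside a guardian state, where 'ezca' is the EPICS interface
# guardian provides. only_on() engages exactly the listed switches and
# turns everything else in the bank off, so a stale setting can't persist.
darm_fm = ezca.get_LIGOFilter('LSC-DARM1')   # illustrative bank name
darm_fm.only_on('INPUT', 'FM1', 'FM2', 'OUTPUT', 'DECIMATION')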
We also spent a long time damping ETMY bounce, which was unusually rung up; now Jeff and Corey are working on ETMY roll which is also quite rung up.
3:15 pm local: Filled CP3, this time by opening the LLCV to 100% (while physically at the exhaust port to monitor behavior). Took 80 sec. to overfill. Exhaust pressure peaked at 0.8 psi. The thermocouple profile looks good: shorter droop shoulder and lower absolute temperatures measured. However, there seemed to be more vapor coming out of both exhaust pipes at a faster rate, which means unnecessary waste of LN2. Tomorrow is a CP3 Dewar fill, so I left the LLCV at the 21% nominal setting; it will likely need to be lowered to 16% based on the last fill.
Attached is a 180 day minute trend that shows the decay of the output power of the 4 HPO pump diode boxes. Everything looks as expected except for one thing: the decay for diode box #1 (H1:PSL-OSC_DB1_PWR) seems to have accelerated since Peter's adjustment of the HPO diode currents on October 6 (alogged here). Doing a quick by-eye comparison, the diode box lost ~0.9% over 55 days (from 05-25-2016 to 07-18-2016); more recently it lost the same ~0.9% over only 35 days (from 10-12-2016 to 11-14-2016), i.e., the decay rate increased from roughly 0.016%/day to 0.026%/day. This is likely simply due to the age of the diode box; I can find no entry in the LHO alog where any of the HPO diode boxes were swapped (and we have not performed any HPO diode box swaps since I joined the PSL team), so it is likely these are still the original diode boxes installed with the PSL in 2011. Will keep an eye on this.
Looking at the overall trends I think we should be good for the duration of ER10/O2a; the HPO diode box currents will have to be adjusted before the start of O2b.
These are the original diode boxes from the H2 installation (October 2011). The oscillator was running for quite some time in the H2 PSL Enclosure before being moved to the H1 PSL Enclosure (which was a consequence of the 3rd IFO decision).
J. Kissel, P. Thomas, J. Warner
We're having trouble getting ALS locked after the corner station crashed this morning, so we took a look at the percentage of wrong polarization in the ALS fiber transmissions. Y was at an acceptable 2%, while X was a borderline 12%. Just to make ourselves feel better, Jim and Patrick adjusted it and reduced the percentage to under 5% for both arms. Out of curiosity, to see how often this needs doing, I took a 1 year trend. I don't have much to say about the results; hopefully someone who knows more about this system can draw conclusions. Also, there is interest in moving the fiber polarization adjustment out of the CER.
A. Staley (posted by J. Kissel on her behalf)
Alexa -- still looking out for us -- had sent me an email about the above trend. Thanks Alexa (and Zsuzsa for prompting her to look)! I quote it below:
The fiber polarization controller drifted quite a bit -- Sheila, Evan, and I would post alogs about it to keep track of the drift. There are a couple of things worth noting:
We discovered that the fiber polarization controller creates a peak in the green locking noise at 27 kHz (LHO aLOG 10275), so we decided to keep the polarization controller off. We didn't spend time characterizing how much the wave plates would drift with the controller off. We ensured the drift/hysteresis of the polarization wasn't drastic over a period of a day or so, but were still expecting some drift over time.
We also know that the fiber polarization drifts with temperature in the MSR (LHO aLOG 7023, LHO aLOG 11509), and the temperature is not very well controlled in that room.
A bit unrelated, but I had also seen funny behavior with the motorized polarization controller (MPC) upon turning it off and on (LHO aLOG 11505).
I am not sure if anyone went into more detail than that to characterize this drift or mitigate it -- it was always pretty trivial to adjust. The DCC has the manual (T1200496), which states the expected rotational drift of the wave plates over time -- this is very small. Between that variation, temperature drifts, and drifts in the laser output SOP, I would expect some drift in the polarization. Again, I don't think this has been characterized or quantified and compared with the trends we've seen.
I also centered the BS oplev, using a different picomotor controller from the EE shop (see LHO alog 31333 for details on BS oplev centering issues). This closes WP 6316.
I noticed that YAW had changed to ≈ 2.8 µrad after I returned to the corner. PCal work began immediately after I left the alignment; I noticed the change in position and re-centered after the PCal work was completed. During the PCal work, the PCal shutter was opened and closed so that I could observe any action on the alignment. There didn't seem to be any effect with the shutter in either of its positions. After a brief period of time it seems that PIT has drifted to ≈ -1.8 µrad. This seems to be an inherent issue throughout most of the OpLev system.