Jenne, Sheila, Patrick, Keita
We are still not able to lock ALS for very long. We tried locking only ALS COMM, and were able to stay locked for more than 10 minutes. We have been sitting here watching signals from the green arms, and we see the same ALS glitches (in the Y arm) that are described in many old alogs; 31053 lists some of them.
Keita, Patrick and I followed Jenne's instructions in alog 30519 to lock the green laser to the arm directly, bypassing the PLL. In this configuration we did not see the glitches, but now that we have swapped the cables back to the original configuration the glitches are back. So it seems likely that the PLL is somehow involved in these glitches. I am not sure if these glitches are the reason we cannot lock, but in the past they have prevented locking.
Edited the attachment with annotations to make it clearer (ignore the grey smudge). We had the green arm locked bypassing the PLL from about 6-2-2017 1:38 UTC to 1:44 UTC.
The second screenshot shows a repeat of this test using a gain of 6 dB instead of 10 dB. I think the loop was oscillating with the 10 dB setting, so the lower gain setting is better.
Not sure if this is helpful, but attached are screenshots of coherences of various ALS channels with H1:ALS-Y_REFL_CTRL_OUT_DQ for a time around a glitch and a time without a glitch.
Here are 5 examples of 25 minutes of second trends of the Y arm transmission over the last 5 days. You can see that the situation has been getting worse, although we have had glitches the entire time. (4 examples are in the first attachment; the second attachment is the last 25 minutes, and the problem is still pretty bad.)
The pneumatic regulator connected to the supply line for the cylinder on GV8 has failed. The cylinder was still getting enough pressure to hold the gate open. The ball valve used to isolate the pneumatic regulator is upstream of both GV7 and GV8, so I mechanically locked out both valves. I talked with Chandra and Kyle before I locked out the valves. I will fill out an FRS to replace the regulator this morning and confer with Chandra before any pressure is applied. This also has ties to FRS 8259.
During this exercise, both the PT-134 and PT-114 cold cathode pressure gauges went out, which seemed to correlate with inserting the stop pins, which normally engage a limit switch when open. Do we have a grounding issue?
This sounds strange to me also.
Sheila is working on ALS Y, in close proximity to the BRS.
TITLE: 06/02 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR:
CURRENT ENVIRONMENT:
Wind: 2mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
H1 remains in a state of disrepair. Sheila is down at EY investigating ALS.
TITLE: 06/02 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: Jenne, Sheila, Jeff K., Kiwamu and Keita investigating ALS, including glitches on the Y arm. Received a corner station instrument air alarm (see separate entry).
LOG:
23:14 UTC Kiwamu to end X to adjust the ALS laser current to make it more stable.
23:28 UTC Dave and I restarted the ops TeamSpeak computer. It was not connecting to the wireless network, and we did not have administrative access to make it connect, but it did connect upon reboot. It is now connected to TeamSpeak again.
23:58 UTC Kyle to mid Y
00:08 UTC Kiwamu back. Jenne aborting measurement. Kiwamu attempting to lock the X arm on green.
00:17 UTC Kyle back.
Trouble with ALS Y glitches for the remainder of the night. Leaving the ISC_LOCK Guardian node in DOWN.
I received a corner station instrument air alarm. I called Bubba and per his instructions went out to look at the compressor in the chiller yard. One compressor had a green running light lit and the other did not. The belts on both motors were spinning. He said that this is indicative of a bypass valve failure on one of the pumps. He said as long as the other pump continued to run it should be fine and he will replace the valve in the morning. I called Chandra and left a message with her, because I believe some of the gate valves in the LVEA are controlled by this instrument air. I also called Kyle. He indicated that there is no danger to the vacuum and will make a separate alog entry. 3 hour trend attached.
The loss of one of two instrument air compressors is a non-issue for the Vacuum System these days, as the control of the LN2 level in the 80K pumps no longer relies on instrument air. For the case of the Corner Station, GV5, GV6, GV7 and GV8 do rely on instrument air pressure to hold their gates open. Should the air pressure fall below ~40 psi tonight, the IFO beam might get blocked by one or more of these valves' gates "sagging" due to insufficient air pressure. But no damage to the Vacuum System would result - only IFO interruption.
Daniel (on phone), Peter, Jason, Keita, Sheila, Kiwamu
Today we set the diode current of the X end laser to the factory setting of 1.95 A. It had been for some reason set to 1.8 A.
This was motivated by the power fluctuation issue that Jenne reported the other day (36550). As a result, the green laser power became more stable, although it doesn't look like this has fixed the recent ALS issue of having difficulty getting through the ALS states.
(Symptom)
The end X laser intermittently ran into a situation where the power fluctuated by a few tens of percent for unknown reasons, as reported in 36550. Today we ran into the same situation and therefore decided to investigate this issue. As we studied the behavior, it turned out that the green laser power had a dependency on the laser temperature. This may be an indication that the laser was close to a bad mode-hop condition of some sort. The behavior was clearly repeatable as we went back to the same frequency (or crystal temperature).
(Fix)
Earlier, Peter noticed that the laser current had been set to a lower value. However, since we weren't sure whether it had been purposely set or not, we initially decided to change the crystal temperature to escape from the mode-hop region. In the end this didn't really work out, because when we shifted all three lasers (the PSL and both end lasers) by +800 MHz, the end X laser went into a good region where the laser power didn't change as a function of the laser frequency, while the end Y laser went into a bad region.
After consulting with Keita, Sheila and Daniel, we decided to try the diode current setting as another knob to escape from the bad region. This actually worked pretty well. All three lasers are now set back to their nominal frequencies.
After this work, I reset the normalization on the Xarm green transmission photodiode such that full lock gives a normalized value of 1.
As a reminder to myself, these filter banks are accessible from the ALS corner overview screen: ALS-C_TR[X,Y]_A_LF
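For reference, a minimal command-line sketch of this normalization reset is below. It assumes the standard CDS filter-module _OUTPUT and _GAIN fields on the ALS-C_TRX_A_LF bank and the EPICS caget/caput tools; the channel names here are my guesses from the filter bank name and should be checked against the ALS overview screen before use.
# Rescale the X-arm green transmission normalization so full lock reads 1.0.
# Channel names assumed from the ALS-C_TRX_A_LF filter bank; verify first.
locked=$(caget -t H1:ALS-C_TRX_A_LF_OUTPUT)   # transmission value while in full lock
oldgain=$(caget -t H1:ALS-C_TRX_A_LF_GAIN)    # current normalization gain
newgain=$(echo "$oldgain / $locked" | bc -l)  # rescale so the output becomes 1.0
caput H1:ALS-C_TRX_A_LF_GAIN "$newgain"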
J. Kissel

As we're finding trouble with the new ALS X laser either producing multiple modes or mode hopping, I'm finding more reasons to pay attention to the VEA temperature, and more reasons to wish for the stability of the older HVAC system. Bubba's recent modification to the error signal (i.e. removing an errant sensor from the average; see LHO aLOGs 36527 and 36536) did change the sign of the diurnal oscillations, but they're only slightly reduced in amplitude from what they were, at ~0.2 [deg C] or 0.5 [deg F]. Apparently, the former system kept the VEAs stable to within 0.05-0.1 [deg C] (0.1-0.2 [deg F]) -- notably much better than the canonical iLIGO requirement of 1 [deg C]. Attached are 10 day trends of the past for each end station, before and after the upgrade. The after-upgrade trend takes us up to the present.

For the record, I think at this point we should stick with whatever set-point temperature we have, since we've been able to regain IFO alignment and sensitivity and adjust references to this new point, and we're now tuning the ALS laser performance to this temperature. Let's focus on improving (decreasing) the amplitude of the diurnal oscillations.

P.S. Remember that I continue to show the PCAL sensors because they're the most calibrated and the receiver module's temperature was most similar to what the former FMCS sensors reported. All other temperature sensors in the VEA *besides* the current FMCS sensors show this oscillation, so it's not just a function of the location of this particular sensor.
TITLE: 06/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 24mph Gusts, 17mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY: Kiwamu and Jason working on ALS.
TITLE: 06/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
LOG:
15:15 Jim running TFs on HAMs
15:19 Fil to EX for continued cable terminating - SC = off
16:00 Fil back
16:13 Corey going to check the chiller
16:15 Betsy in and out of the optics lab all day
16:26 Chandra headed to MY
16:32 Peter into optics lab - testing Prometheus.
16:50 Bubba back from Mids
17:00 Locking attempts begin
19:30 O2 weekly meeting
20:51 Jason, Kiwamu and Peter to EX to investigate ALS laser mode-hopping.
22:50 EX group back.
FRS8182, WP7001.
I have completed the copy/verify/delete procedure to move old raw_minute_trend data from h1tw1's SSD raid (which was at 91% full) over to h1fw1's HDD raid. The process took 3 days, spread over 6 days due to the holiday. The SSD raid is now at 11% full.
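For anyone repeating this sort of migration, the copy/verify/delete idea can be sketched in shell as below. The paths and destination host are placeholders, not the actual h1tw1/h1fw1 mount points, and this is not the exact procedure that was run.
# Sketch of a copy/verify/delete migration (placeholder paths).
SRC=/trend/minute_raw                  # old raw_minute_trend files on the SSD raid
DST=h1fw1:/archive/trend/minute_raw    # destination on the HDD raid
rsync -a "$SRC"/ "$DST"/               # first pass: copy everything
rsync -a --checksum "$SRC"/ "$DST"/    # second pass: re-send anything whose checksum differs
# only after spot-checking the destination, remove the originals
find "$SRC" -type f -delete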
Using the new WP system, I have electronically closed WP7001 and requested FRS8182 be closed.
This required restarts of the daqd processes on h1nds0 and h1nds1 at 11:58 PDT today.
Jim, Jonathan, Dave:
Jim was having autoquack errors of the kind:
libstdc++.so.6: version 'GLIBCXX_3.4.20' not found
The solution is to LD_PRELOAD the path to the OS version of this file at the time MATLAB is started.
The Debian 8 path for this file is /usr/lib/x86_64-linux-gnu/libstdc++.so.6, and running the strings command on it verifies that it contains GLIBCXX_3.4.20:
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6|grep GLIBCXX_3.4.20
GLIBCXX_3.4.20
The command to start matlab and have autoquack work is:
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6 matlab
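To avoid typing the preload each time, a small wrapper script along these lines could be used (a sketch; the path is the Debian 8 one quoted above, and the wrapper name is just a suggestion):
#!/bin/bash
# matlab-quack: start MATLAB with the OS libstdc++ preloaded so autoquack's
# compiled code can find GLIBCXX_3.4.20
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6
exec matlab "$@"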
Thanks to Jamie Rollins, I created a MATLAB utility (linux(), in /trunk/cds/utilities) that you can use to replace all your system() and unix() calls. This function strips out the paths to MATLAB's own versions of the Linux libraries, makes your system call, and then puts everything back. Add a soft-link to it in /ligo/cds/usermatlab. The root problem is that (any) MATLAB normally adds its own versions of critical Linux libraries to LD_LIBRARY_PATH within MATLAB, so unless you are using a MATLAB version specifically built for the specific OS revision you are running (mostly never the case), you will always get screwed up.
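At the shell level, the same idea looks roughly like the lines below: run the child command with MATLAB's library directories stripped out of LD_LIBRARY_PATH so it picks up the OS libraries instead. This is only a simplified illustration of what linux() does inside MATLAB, not its actual code, and "some_tool" is a placeholder command.
# Simplified illustration only: remove MATLAB directories from LD_LIBRARY_PATH
# before running a child command ("some_tool" is a placeholder).
clean_path=$(echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -vi matlab | paste -sd:)
LD_LIBRARY_PATH="$clean_path" some_tool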
Vern, Dave, Dwayne:
We disabled the old workpermit system and rolled out the new system during the 1pm Run Meeting today.
Links to the new system are available on the LHO CDS home page
The old work permit form is now a web page with links to the new form (see attached)
If you would like to subscribe to the mailing list wp-change-notices and get emails whenever a permit's status is changed, please email barker@ligo-wa.caltech.edu