This morning I changed the temperature set points at both Mid Stations to 72 °F.
Looking at data from last night to evaluate any improvements from adding the Z tilt decoupling, I found that the changes made things worse above 100 mHz. I speculate that this may be due to the actual tilt coupling having some frequency dependence (like we've seen with the suspensions), while the matrix we use for these corrections only allows a DC gain. The attached plot shows the coherence for ITMX: the first plot shows that the red Z/RY coherence (which got decoupled) is "worse" than the blue, and the second plot shows that the Z/RX coherence (which didn't get decoupled) is unchanged. Because we used the Z T240 as a witness for the decoupling measurements, I think that pushing the blends lower should reduce the coherence.
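For reference, a rough sketch of how a Z/RY coherence like the one in the attached plot can be computed offline (the channel names, GPS stretch, and NDS server below are placeholders/assumptions, not necessarily what was used for the plot):

import nds2
from scipy.signal import coherence

# Placeholder/assumed names for the ITMX ST1 T240 Z and RY witness channels,
# and an arbitrary 10-minute GPS stretch; swap in the real ones as needed.
chans = ['H1:ISI-ITMX_ST1_BLND_Z_T240_CUR_IN1_DQ',
         'H1:ISI-ITMX_ST1_BLND_RY_T240_CUR_IN1_DQ']
start, stop = 1180000000, 1180000600

conn = nds2.connection('h1nds1', 8088)
z_buf, ry_buf = conn.fetch(start, stop, chans)
fs = z_buf.channel.sample_rate

# Long (200 s) segments are needed to resolve the 50-100 mHz band.
freq, coh = coherence(z_buf.data, ry_buf.data, fs=fs, nperseg=int(200 * fs))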
We have run in this configuration briefly before (45 mHz blends in Z, broadband sensor correction in Z on HEPI), but I switched away from it when I saw it made RZ move more during an earthquake (via the BSC Z-to-RZ coupling, even when using our Z drive subtraction). However, the ground displacements at the time were several microns (well above the 1-1.5 micron rms that seems to be our upper limit). Since then I have done a couple of measurements with HEPI putting in displacements similar to what we have seen during the biggest earthquakes we've survived, and it doesn't look like 45 mHz Z blends make RZ any worse than the 90 mHz blends when I "simulate" an earthquake with HEPI.
Jenne, Sheila, Ed
We think that the problems we are having in the later steps of the CARM offset reduction are caused by a bad alignment and low recycling gain, which cause the CARM offset to be incorrect as the guardian goes through the reduction sequence. (Similar to the situation described in alog 29082.)
We have a few reasons to suspect that the beam has shifted in the PSL (36412, 36408), and after lunch we plan to enter the PSL to align onto the reference irises and to bring the IM4 spot back to its pre-vent position (P=-0.38, Y=-0.27).
The IM4 trans QPD spot position was -0.38 pitch, and +0.27 yaw. The -0.27 was a mis-type in the previous alog.
S. Dwyer, J. Kissel, J. Oberling

Given our problems yesterday with the ALS X laser diode current surpassing an old threshold (see LHO aLOG 36381), I was reviewing whether we should accept the new, higher threshold as the new normal. Here, I attach 20 day and 700 day (~2 year) trends of all four ALS laser diodes' currents (2 at each end). Also, here is the state of current values (a simple 10 sec average) vs. nominal and thresholds:

Channel                                       | Current [A]   | Diff from Nominal [A]
H1:ALS-X_LASER_HEAD_LASERDIODEPOWERNOMINAL    | 1.842         |
H1:ALS-X_LASER_HEAD_LASERDIODEPOWERTOLERANCE  | 0.5 (was 0.2) |
H1:ALS-X_LASER_HEAD_LASERDIODE1POWERMONITOR   | 1.863         | 0.021
H1:ALS-X_LASER_HEAD_LASERDIODE2POWERMONITOR   | 2.041         | 0.199
H1:ALS-Y_LASER_HEAD_LASERDIODEPOWERNOMINAL    | 1.520         |
H1:ALS-Y_LASER_HEAD_LASERDIODEPOWERTOLERANCE  | 0.2           |
H1:ALS-Y_LASER_HEAD_LASERDIODE1POWERMONITOR   | 1.430         | 0.090
H1:ALS-Y_LASER_HEAD_LASERDIODE2POWERMONITOR   | 1.588         | 0.068

As Sheila suggests, "It's not crazy that the diode current follows temperature, it's that the temperature has gone crazy over the past 20 days." However, one can see that the ALS X diode 2 current has been slowly increasing over the past 2 years, so we would have hit this threshold in a few months anyway. For now, we'll keep the new ALS X threshold at 0.5 [A] (and leave the ALS Y threshold at 0.2 [A]).

Question: Is this OK? The User Manual doesn't explicitly mention a limit on the diode current, but there are several mentions of a temperature controller in section 3.4 "Recommended Operation," and perhaps the most relevant is this statement in the trouble-shooting section:

"Diagnosis: The temperature controller is not able to stabilize the diode laser temperature at the given value. Reaction: Try to increase the set temperature for the diode laser slightly using the trimmer at the front panel of the control electronics unit, especially if it is set below room temperature. Otherwise contact InnoLight GmbH."

Note the manual also suggests that these diodes are only under warranty for 6 months ;-). Perhaps we should check the temperature settings on this diode? The other three diodes on site have been pretty insensitive to temperature over the past year, through and including the recent HVAC upgrade. I'll open an FRS ticket.
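For future reference, a small sketch of how the diff-from-nominal numbers above can be regenerated (assuming pyepics and channel access to these PVs from a control-room workstation):

# Compute each diode current's deviation from nominal and its margin to the
# tolerance. Channel names are the real ones quoted above; the pyepics
# environment is an assumption.
from epics import caget

for arm in ('X', 'Y'):
    nominal = caget('H1:ALS-%s_LASER_HEAD_LASERDIODEPOWERNOMINAL' % arm)
    tolerance = caget('H1:ALS-%s_LASER_HEAD_LASERDIODEPOWERTOLERANCE' % arm)
    for diode in (1, 2):
        current = caget('H1:ALS-%s_LASER_HEAD_LASERDIODE%dPOWERMONITOR' % (arm, diode))
        diff = abs(current - nominal)
        print('ALS %s diode %d: %.3f A, diff %.3f A, margin to threshold %.3f A'
              % (arm, diode, current, diff, tolerance - diff))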
Opened FRS Ticket 8216.
Temperature and current are used to make sure we are at the correct laser frequency and far away from a mode hop region. We shouldn't change this unless there is a problem with the frequency locking. It seems to me that the nominal was always somewhat low; maybe 1.95 would be better.
The current limit is normally listed on the datasheet that comes with the laser; in this case it does not appear to be. Failing that, the diode current is limited inside the power supply, so turning the knob beyond a certain point won't have any effect.
There seem to be a couple of problems with our new lockloss script (alog 34878); I have filed bug 1093.
For now, if people are having trouble with it being extremely slow (more than 5 minutes to generate a list of locklosses), are getting errors when trying to use lockloss select, or want the guardian log displayed, you can use lockloss2 in exactly the same way you would normally use lockloss.
Dave and Jonathan had a temporary fix for this problem, which has been working most of the weekend. However, I ran into a new problem when trying to use the saturation feature. Here is the error message:
0: 2017-05-28_00:25:54Z ISC_LOCK LOCKING_ARMS_GREEN -> LOCKLOSS
select event by index [0]: 1
selected event:
2017-05-28_00:25:37Z ISC_LOCK FIND_IR -> LOCKLOSS
/ligo/apps/linux-x86_64/guardian-1.0.3/bin/guardlog
DB gc.xlaunch cmd = --after
DB gc.xlaunch cmd = 2017-05-28T00:23:37+00:00
DB gc.xlaunch cmd = --before
DB gc.xlaunch cmd = 2017-05-28T00:25:38+00:00
DB gc.xlaunch cmd = ISC_LOCK
The channel file: channels_to_look_at_ALS.txt
The number of given channels: 24
Saturation value is 131072.
lockloss gps time = 1179966355.0
saturation checking time window: [-10,30]
fetching data from h1nds1:8088...
Traceback (most recent call last):
File "/opt/rtcds/userapps/release/sys/common/scripts/lockloss3.py", line 699, in <module>
args.func(args)
File "/opt/rtcds/userapps/release/sys/common/scripts/lockloss3.py", line 368, in cmd_select
saturations = check_saturated(args,schannels,shortener,ll_time=time.gps())
File "/opt/rtcds/userapps/trunk/sys/common/scripts/lockloss_utils.py", line 1126, in check_saturated
nds_result = conn.fetch(start_time, end_time, channels[idx:idx+nds_chunk])
File "/ligo/home/jonathan.hanks/nds2/bin/nds2_scratch/lib/python2.7/site-packages/nds2.py", line 2815, in fetch
return _nds2.connection_fetch(self, *args)
RuntimeError: Low level daq error occured [13]: Requested data were not found.
There is a gap in H1:SUS-ETMX_M0_MASTER_OUT_F1_DQ
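For what it's worth, here is a rough sketch (not the actual lockloss3.py code) of a per-channel fetch that would skip over a gap like this instead of dying, using the saturation threshold and time window quoted above:

# Sketch: fetch each channel individually so a data gap in one channel
# (the RuntimeError above) doesn't kill the whole saturation check.
import nds2

SATURATION = 131072          # 2**17, as reported above
WINDOW = (-10, 30)           # seconds around the lockloss time, as above
lockloss_gps = 1179966355    # from the output above

# The channel with the gap, plus whatever else is in channels_to_look_at_ALS.txt.
channels = ['H1:SUS-ETMX_M0_MASTER_OUT_F1_DQ']

conn = nds2.connection('h1nds1', 8088)
start = int(lockloss_gps + WINDOW[0])
end = int(lockloss_gps + WINDOW[1])

saturated, gaps = [], []
for chan in channels:
    try:
        buf = conn.fetch(start, end, [chan])[0]
    except RuntimeError:      # "Requested data were not found" -> treat as a gap
        gaps.append(chan)
        continue
    if abs(buf.data).max() >= SATURATION:
        saturated.append(chan)

print('saturated:', saturated)
print('gaps (skipped):', gaps)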
Rebuilt the vacuum pump from End-Y which failed a couple of weeks ago. The pump failed because one (or more) of the carbon vanes broke and the debris shattered the remaining vanes. With no air flow through the air chamber for cooling, the pump overheated and its thermal cutoff tripped; this is a normal safety feature of these pumps. In the photo you can see the broken vanes in the bottom of the air chamber. After cleaning and repair the pump is ready for service. At the same time, I also rehabbed a second one of these "failed" pumps. There are now two functional backup vacuum pumps for the dust monitor system.
J. Oberling, P. King
After Peter's check of the PSL this morning, we noticed that the power out of the HPO was down to ~154W; it was at ~165W before yesterday's PSL shut-down for the water leak fix. We began adjusting the currents and temperatures of various pump diodes to see if we could find where the power loss came from. All work was done with the ISS OFF.
The operating temperature for pump diode 1 in HPO Diode Box (DB) 1 was adjusted from 24.0 °C to 20.0 °C. This increased the relative output of the diode box from 89.0% to 89.5%, but only increased the total power out of the HPO by ~1.0W. Recall that diode 1 in HPO Diode Box 1 has been on a steady downward trend. I then noticed that the 35W FE was reading 33.9W, so I proceeded to adjust the FE pump diode operating currents. The change is summarized in the table below.
            | Operating Current (A)
            | Old  | New
Diodes 1/2  | 46.0 | 46.3
Diodes 3/4  | 49.5 | 49.7
This only increased the power out of the FE to 34.1W; adjusting the pump diode operating temps did not change the output power. Our theory here is that the NPRO output power is low enough that the MOPA is unable to increase the power any more than this.
We then began adjusting the HPO pump diode currents. In the interest of preserving DB1, we began by increasing the current to only DBs 2, 3, and 4. Interestingly, increasing these pump diode currents lowered the output power of the HPO. We returned the currents to their original value and increased the current for DB1 only; same result, lower HPO output power. We then decreased the currents for all DBs and, lo-and-behold, the HPO pump power increased. The currents were decreased by 0.4 A in total, and the HPO is now outputting ~157.4 W. The theory here is that the decay of DB1, specifically pump diode 1 in DB1, has caused the thermal lens in Head 1 to change, thereby changing the location of the stability range of the HPO. The stability range moved lower, which resulted in the HPO output power being on the opposite side of the range (hence why increasing pump diode currents lowered the HPO output power). This then required a decrease in pump diode currents to increase HPO output power. A summary of the new and old HPO pump diode currents is shown in the table below.
      | Operating Current (A)
      | Old  | New
DB1   | 52.7 | 52.3
DB2   | 52.2 | 51.8
DB3   | 52.2 | 51.8
DB4   | 52.2 | 51.8
I have attached a screenshot showing the new values of both the HPO DBs and the FE DB for future reference. The above behavior is indicative of a DB that is near the end of its useful life. I have spoken with LLO (the holder of all spare HPO DBs) and they are shipping 2 HPO DBs our way, hopefully out the door this week.
This closes FAMIS 8423.
Yesterday a file owned by user A needed to have its file permissions changed by a user other than A (in this case A no longer works at LHO). Normally this is not permitted other than by user A or a sysadmin; however, the file in question resided in the shared userapps area, and so it was indirectly possible. Here is the how and why:
All directories under the /opt/rtcds/userapps/release area have permissions of 2775 (rwx for user, rws for group [set-group-ID active], r-x for all) and have controls group ownership*. This means all users who belong to the controls group (which is everyone) can delete and create files inside these directories even though they cannot change other users' file permissions. So to change the permissions on a file (e.g. make it executable) the procedure is:
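(A minimal sketch of the copy/delete/rename steps in Python, with a hypothetical file path; the key point is that the new copy is owned by whoever creates it, so that user can then set its permissions. The same steps can of course be done from the shell with cp, rm, mv, and chmod.)

import os
import shutil
import stat

# Hypothetical path: any file in the group-writable userapps area.
path = '/opt/rtcds/userapps/release/sys/common/scripts/some_script.py'
tmp = path + '.tmp'

shutil.copy2(path, tmp)   # the copy is owned by the current user
os.remove(path)           # allowed, because the parent directory is group-writable (2775)
os.rename(tmp, path)      # put the copy back under the original name
# now make it executable (or set whatever permissions are needed)
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)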
the file will now belong to the new user and has the correct permissions.
Note: if the Sticky Bit were to be set on the parent directory, this would be prohibited.
* Every Tuesday I run scripts to ensure this is the case for userapps and svncommon.
model restarts logged for Wed 24/May/2017
2017_05_24 13:04 h1ecatx1
Restart of EX Beckhoff slow controls PLCs in attempt to clear an error (did not clear the error)
model restarts logged for Tue 23/May/2017
2017_05_23 09:57 h1iopseih23
2017_05_23 09:59 h1hpiham2
2017_05_23 09:59 h1hpiham3
2017_05_23 09:59 h1isiham2
2017_05_23 09:59 h1isiham3
Restart of all models on h1seih23 after timing glitch in CER.
After the initial work on installing the air bleed in the cooling circuit, the flow rate in head 3 was lower than it was before. There is no reason why the air bleed should affect things; all the other heads are the same as previously. I lifted the lid off the oscillator and looked at the turbine sensor. Nothing unusual was seen. I tapped the flow sensor to see if it might dislodge anything stuck inside, but the flow rate did not change. The reduced output power of the laser is real. I checked the alignment of the monitoring photodiode that is located near the diagnostic breadboard and it appeared more or less okay. I also checked the iris underneath the IO periscope; the beam appeared to be off there too. Unfortunately there are no other irises earlier in the beam path, so it is not easy to tell where exactly the beam may be off centre.
TITLE: 05/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Environment
OUTGOING OPERATOR: N/A
CURRENT ENVIRONMENT:
Wind: Calm
Primary useism: ~0.01 um/s
Secondary useism: ~0.2 um/s
QUICK SUMMARY: ops lazy script isn't doing transition (-t).
Subbing for TJ
One of the things we ran into today was that the ALS laser had an error. One of the diode currents has drifted outside of the tolerance, which caused an error. (Trends coming tomorrow).
So we should do an update of the Beckhoff code sometime soon. We had not encountered this problem before because we hadn't had any errors from the laser head before.
Corresponds to FRS Ticket 8217. See also LHO aLOG 36381.
Aidan (remotely), Nutsinee, TJ (in spirit), Kiwamu,
The return beam of the Hartmann system for ITMX (HWSX, alog 36332) was successfully found on the HWS table today after people performed the full initial alignment process once.
We then centered the HWSX return beam on the Hartmann camera. We are ready to repeat the measurement of the hot spot (34853).
(some details)
We followed the procedure written in T1700101-v2.
Nutsinee and I went to the HWS table after the initial alignment had been done. First of all, looking at the Hartmann beam with respect to the two irises on the table, we confirmed that the Hartmann beam had stayed in good shape. We then started checking the periscope mirrors to look for the green light coming down from the X arm. We immediately found the green light on the top periscope mirror. Carefully inspecting the spot position on the mirror, we determined that the beam was well centered, to a precision of about 5 mm or so.
We steered the top and bottom periscope mirrors to center the green light on both irises. Finally, we steered the steering mirror in front of the Hartmann camera while watching the live stream image from the camera. We found the Hartmann return beam on the streamed image right away. We then touched up the top periscope mirror and the steering mirror to fine-tune the beam/aperture locations.
Attached are the images after we finished aligning the return beam onto the camera, with and without the Hartmann plate.
I later went back and put the HWSY plate back on. I attached pictures of HWSY with plate on and off.
The codes are running for both HWSX and HWSY and the reference centroids have been reinitialized.
My analysis is that the HWSX alignment is off center by 20 to 40 mm. The EQ stops appear too high in the image compared to previous alignments (for reference 100 pixels corresponds to about 21mm on the optic). I would exercise caution when analyzing the results from this alignment.
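(A quick check of the scale quoted above, 100 pixels ~ 21 mm on the optic: the stated 20 to 40 mm offset corresponds to roughly 95 to 190 pixels in the image.)

mm_per_pixel = 21.0 / 100.0   # scale from above: 100 pixels ~ 21 mm on the optic
for offset_mm in (20.0, 40.0):
    print('%g mm ~ %.0f pixels' % (offset_mm, offset_mm / mm_per_pixel))
# -> 20 mm ~ 95 pixels, 40 mm ~ 190 pixels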
Some previous alignments can be seen here:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=34813
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=28880
Kiwamu, Nutsinee
I attached HWSX screenshots after the adjustment (with the HWS plate off and on). I also fixed the weird-looking HWSY image with the HWS plate on. The RTD temperature sensor snapped and made room for the HWS plate to move slightly out of place. There's also a small dent where the temperature sensor was (forgot to take a picture); no damage to the part with the hole pattern, and I screwed down the metal plates quite tightly to make sure the whole thing lay flat.
J. Driggers, K. Izumi, S. Dwyer, J. Kissel, V. Adya, T. Shaffer

We've been able to find green alignment quite easily. Yes!! However, once we were able to get ALS X well aligned and locked, with the Green WFS and ITM camera alignment systems ON, the ALS X Beckhoff state machine turned into a blinking MEDM light show. The arm would remain locked, with good alignment, but the ALSX guardian would go into fault. The obvious symptoms were several momentary errors on the Beckhoff screens (from the ALS Overview screen) for PDH, Fiber PLL, and VCO, each of which complained about the others.

We started with slow, calculated attempts at disabling various parts of the state machine, e.g.
- using H1:ALS-X_FIBR_LOCK_LOGIC_FORCE to force the fiber PLL lock, or
- hitting reset (H1:ALS-X_VCO_CONTROLS_CLEARINT) on the H1:ALS-X_VCO_TUNEOFS to reset the frequency finding servo.
We then degraded to a bit of button mashing, after which the state machine would just restore everything to what it was before we started.

Finally, Sheila showed us how to dig down an alternate path for finding errors via the sitemap > SYS > EtherCAT overview and follow the error messages from there. However, the screens only show explanatory text when there are errors present, which makes tracing a momentary error frustrating at best. Our path down this rabbit hole was:
sitemap > SYS > EtherCAT overview > ECAT_CUST_SYSTEM.adl
> X-End PLC2 (because it showed the text "ALS-X; ISC-EX") > H1_X1PLC2.adl
> Als (which had no text) > H1ALS_X1_PLC2.adl
> X (had no text) > H1ALS_X1_PLC2_X.adl
  Potential BUG: on this screen "Lock" and "Refl" were showing constant errors, but "Laser" ended up being the problem
> Laser (maybe showed only momentary text) > H1ALS_X1PLC2_X_LASER.adl
> Head > H1ALS_X1_PLC2_X_LASER_HEAD.adl

After careful scrutiny of this screen we found that the ALS X laser diode 2's power monitor, H1:ALS-X_LASER_HEAD_LASERDIODE2POWERMONITOR, was bouncing between 2.038 and 2.039, which is just hovering along the edge of the user-defined tolerance of H1:ALS-X_LASER_HEAD_LASERDIODEPOWERTOLERANCE == 0.2 from the user-defined nominal H1:ALS-X_LASER_HEAD_LASERDIODEPOWERNOMINAL == 1.842. After increasing the threshold on deviations from the nominal from 0.2 to 0.5, the entire state machine became happy and normal.

This is a problem we'd never seen before, but upon further inspection while writing this log (because we found it hard to believe that laser diodes would produce *more* power than before), I took a look at the 15 day trend of this laser power vs. the X VEA temperature (as measured by the PCAL receiver's temperature sensor), and indeed, the laser power follows it nicely. We should be prepared for the HVAC upgrade to be impacting a lot more than just suspension alignments (LHO aLOG 36331).

Lesson Learned: the state machine for the ALS system is really hard to debug when there are momentary errors.
- We should change the Beckhoff error reporting to be latching.
- We should change these automatically generated screens to *always* display text, so that one can navigate around them with comfort.
- There may actually be a bug in the reporting system.
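As a concrete illustration of the "latching" suggestion, here is a minimal sketch (in Python, not the actual TwinCAT/Beckhoff logic) of the |monitor - nominal| > tolerance check with a latched error flag, so that a momentary excursion stays visible until it is explicitly cleared:

class LatchedToleranceCheck:
    """Sketch only: latch any out-of-tolerance excursion until cleared."""
    def __init__(self, nominal, tolerance):
        self.nominal = nominal
        self.tolerance = tolerance
        self.latched_error = False

    def update(self, monitor_value):
        out_of_tolerance = abs(monitor_value - self.nominal) > self.tolerance
        if out_of_tolerance:
            self.latched_error = True   # stays set even after the value recovers
        return out_of_tolerance

    def clear(self):
        self.latched_error = False      # explicit operator acknowledgement

# Values from this entry: nominal 1.842 A, old tolerance 0.2 A.
check = LatchedToleranceCheck(nominal=1.842, tolerance=0.2)
check.update(2.043)   # hypothetical momentary excursion just past 1.842 + 0.2
check.update(2.039)   # back within tolerance...
print(check.latched_error)   # ...but the latch still reports True until clear() is called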
Inspecting the TwinCAT code for the laser head indeed revealed a mistake in the call to the error handler: the list of error messages was never passed down, which in turn prevents the bits from "lighting up."
Opened FRS Ticket 8217.
J. Kissel

Measurements were driven and the data were recorded successfully, but the log files that are written during the measurement (to record what was driven when, for later offline analysis) were corrupted this time around because of some sort of missing SIStrLog / broken function and/or file access issue. I've got emails out to Dave Barker, T. Shaffer, and S. Aston in hopes that this code (written years ago, over months, by transient guest stars B. Sorazu, B. Barr, and L. Prokhorov) can be fixed. Will try again tomorrow after some expert help debugging (or, rather, after Fil finishes pulling cables for this new... vacuum gauge thing? I dunno, didn't see an aLOG. Maybe it comes as a consequence of the new ESD driver power supplies that were replaced while I was out, and coincidentally while I was out).
Opened FRS ticket 8218.