K. Venkateswara
The tilt subtraction is working correctly now. The attached pdf shows the output of the tilt-subtracted super-sensor (in red) along with the GND_STS_X (in blue). The wind at EX is a measly 4-5 mph and in the wrong direction so there isn't much RY tilt. The red barely dips below the blue where the two sensors are coherent. It will be more apparent when the wind picks up a bit :)
Note that there are some odd features in the output of the filter above 7 Hz, which appear to be due to the low-frequency high-pass filters, which foton doesn't like for some reason. I had to increase the high-pass frequencies to reduce this effect.
Also, note that most of what the STS is measuring below ~50 mHz is probably tilt but the tilt-subtraction isn't working well below ~20 mHz because we lose phase due to the high-pass filters and the finite d correction has not been implemented yet.
We will test this with the sensor correction next week when we get a chance.
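The subtraction described above can be sketched in the frequency domain. This is a hedged illustration, not the actual front-end filter implementation (which uses high-pass filters in foton as described): it assumes the rotation sensor output is calibrated in radians and the seismometer output in meters, and a +g·θ/ω² coupling sign convention.

```python
import numpy as np

G = 9.81  # m/s^2, local gravitational acceleration

def tilt_subtract(x_sts, theta, fs):
    """Frequency-domain tilt subtraction (illustrative sketch only).

    x_sts : horizontal seismometer output calibrated to displacement (m)
    theta : rotation sensor output (rad)
    fs    : sample rate (Hz)

    A tilt theta couples into a horizontal inertial sensor as an apparent
    acceleration g*theta, i.e. g*theta/omega^2 in displacement units.
    The sign of the coupling depends on the orientation convention;
    +g*theta/omega^2 is assumed here.
    """
    n = len(x_sts)
    omega = 2 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)
    xf = np.fft.rfft(x_sts)
    tf = np.fft.rfft(theta)
    xf[1:] -= G * tf[1:] / omega[1:] ** 2   # skip the DC bin
    return np.fft.irfft(xf, n)
```

In the real servo the low-frequency behavior is limited by exactly the high-pass phase loss noted above, which this idealized sketch does not model.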
To protect the OMCR QPD sled from pulse damage in case of a lock loss without a working fast shutter (that is, once we're in full lock, see E1400405), the OMC REFL beam was misaligned from the sled in PIT.
The 5th pico mount for HAM6/ISCT6, which corresponds to one of the two pico mounts in front of the OMCR QPD sled, was moved in PIT by four magnum steps (+40000 counts).
Before: (X,Y)=(7900, -3402)
After: (X,Y)=(7900, 36598)
Before the move, the beam was more or less centered on the QPDs.
Note that the picomotor names for HAM6/ISCT6 are messed up, and you cannot tell which is which from the MEDM screen or from H1:SYS-MOTION_C_PICO_C_CURRENT_NAME, according to Dan.
Pico 1 = AS A centering
Pico 2 = AS B centering
Pico 3 = AS C centering
Pico 4 = nothing
Pico 5 = Upstream(?) OMCR QPD sled pico
Pico 6 = Downstream(?) OMCR QPD sled pico
Took a Faraday analog scan of the Y-end at the request of others -> Data is useless (as expected) as the unbaked RGA (background) dominates the entire spectrum. Note that the RGA is pumped only by the Y-end volume (i.e. the local pump cart is valved-out while the RGA is valved-in to the Y-end). A valid RGA scan was not attempted here as the setup time isn't justified (at this time).
Thanks Kyle.
AMU 20 and 22 (neon isotopes) certainly do not appear to be significant but as Kyle notes the RGA is too wet to say much else. After other priorities are dealt with the RGA will be baked. It is unknown when we can get around to this.
If neon were present we should expect to see both 20 and 22 at the ratio of the natural isotopic abundances (roughly 90:9).
The amu 20 that we normally see in our systems is most likely doubly ionized argon(40). We have argon (and other air components) because of the large quantity of viton in our system which acts as an air reservoir. In addition the ion pumps do not pump argon or other noble gases very well.
I do not think there is much neon in the spectrum. The 20 peak looks like doubly ionized argon and there is not much at 22. The pressure of neon is less than 10^-10 torr, so if there is a leak it is less than 10^-7 torr liters/sec. Much too small to account for the reduction in overall pressure measurement in the pods. RW
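The isotope argument above can be turned into a quick bound: natural neon is about 90.5% Ne-20 and 9.25% Ne-22, so a weak amu-22 peak caps the total neon, and the excess at amu 20 gets attributed to doubly ionized Ar-40. A hedged sketch with made-up partial pressures (not the actual scan values):

```python
# Natural terrestrial neon isotopic abundances (approximate)
NE20_FRACTION = 0.905
NE22_FRACTION = 0.0925

def neon_bound(p_amu20, p_amu22):
    """Bound the neon partial pressure from the amu-22 peak, and
    attribute the remainder of the amu-20 peak to doubly ionized Ar-40.
    Inputs and outputs are in the same (arbitrary) pressure units."""
    neon_total = p_amu22 / NE22_FRACTION            # all amu 22 taken as Ne-22
    argon_doubly_ionized = p_amu20 - neon_total * NE20_FRACTION
    return neon_total, argon_doubly_ionized
```

With, say, an amu-20 peak two decades above the amu-22 peak, nearly all of amu 20 comes out as Ar++, which is the conclusion drawn above.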
Lowering the Electron Energy to 50eV should lower the rate of doubly ionized Argon without significantly decreasing the sensitivity to Neon. This is the setting we used in the Pod leak check, thanks to Rai's calculations.
9:16 Aaron and Filiberto to EX for PCal cabling
9:19 Hugh to electronics bay to rezero ground seismometer
9:46 Alsco onsite for Bubba
10:05 Jeff and Andres to LVEA for SUS storage box hunt
10:30 Jeff and Andres to EX and EY on continued storage box hunt
11:20 Kyle to LVEA retrieving vac parts
12:43 Fil and Aaron to EX
13:21 Krishna to EX
14:00 Jeff and Andres to MX for storage box
15:02 Kyle done @ EY
Notes for tomorrow's operator:
IP-01 Pump B is flashing red 'Not OK'. Kyle will investigate when available. Ignore for now.
HAM 6 DACKILL WD was tripped when my shift started this morning. I reset it. It tripped again at ~3:50pm. Reset again.
While the spectrum looks pretty good, the DC level (which typically appears DC-coupled) is offset from zero by >4000 counts. It would appear from trends that this has been the case for months, but since we don't use these sensors for anything we've decided to ignore it. And if the spectrum is good, maybe the sensor is still usable.
Attached are spectra from ~3 am this morning; the IPS weirdness is evident, while the issue with the H1 L4C is not as obvious. Looking at past data (back to Dec 2013), the L4C variance is noted but the IPS looks worse now.
Replaced non-functioning whitening chassis S1101631 with S1101602 on 10/15
Is this X end or something else?
It's X-End, Keita.
The X channel was on the rail and the Z had 20000 or something. So, I rezero'd again this morning. Currently showing 12000, 1000, & -10000. So a little better. I would think these should ultimately be moving around zero unless suffering from large temperature cycling. I'd think it would be pretty steady in the LVEA under the igloo.
Ideally, we'd look at the U V W signals, but those aren't available, as far as I've found, for the ground seismometer. They are accessible on the chassis front panel; maybe I can look at them there.
Attached is a two-day trend showing me disturbing the machine yesterday, the subsequent zeroing, about 12 hours of drift on X & Z, and this morning's zeroing. Interestingly, the Y axis doesn't show the dramatic drift that X & Z exhibit.
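For reference, the U V W signals map to Cartesian via the standard symmetric-triaxial transform; the sketch below assumes the nominal STS-2 factory orientation, so treat the matrix as illustrative rather than this unit's exact calibration.

```python
import numpy as np

# Nominal STS-2 symmetric-triaxial transform (rows are X, Y, Z).
UVW_TO_XYZ = (1.0 / np.sqrt(6.0)) * np.array([
    [2.0,           -1.0,           -1.0],
    [0.0,            np.sqrt(3.0),  -np.sqrt(3.0)],
    [np.sqrt(2.0),   np.sqrt(2.0),   np.sqrt(2.0)],
])

def uvw_to_xyz(u, v, w):
    """Map the three inclined sensor outputs to Cartesian X, Y, Z."""
    return UVW_TO_XYZ @ np.array([u, v, w])
```

A handy sanity check: equal drift on all three inclined axes shows up purely in Z, so common-mode U V W drift would not explain the X & Z behavior seen in the trend.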
Like HAMs 4 & 6 (see my 14373), HAM5 needed to have its ST0 L4C to Cartesian matrices corrected. These signals go to Sensor Correction and Feed Forward and aren't being used now. I again checked all the other matrices; they were still correct, so I reloaded all the matrices. The platform tripped (I should have expected that and taken the platform down first). Untripping the watchdog was not the usual procedure, which is noteworthy.
The watchdog medm showed a red Rogue Exc WD. Usually, a reset all, a Rogue Exc reset, and finally another reset all will green everything. This time however that did not work. The Rogue Exc finally cleared when I hit the reset on the already green SEIHAM5 DACKILLIOP found on the SWWD HAM5 medm.
I'll press Dave as to why this was because it might slow down an operator getting the machine back up.
At the request of Jameson, I had done the output HAMs on Monday and this morning I completed all the others. None of the platforms were disturbed so this was completely non-invasive. In general though, the restarted node switched to EXEC with a request of NONE. Returning the mode to MANAGED and the request to whatever put things back with no fuss. The Chamber manager typically lost communication with the subordinate but I always restarted it last and all finished in good shape. I say "In general...and ...typically..." because the behaviour was not universally consistent.
The SEI guardian systems *should* be completely robust against a "blind" restart of all nodes in a particular chamber, e.g.:
guardctrl restart {SEI,ISI,HPI}_<chamber>*
If the manager comes back up before the subordinates, that's ok because it will recover from the transient NO CONNECT error condition once the subordinate does come back up.
The subordinates also do come up in EXEC, but the manager will handle setting them to MANAGED once they resolved their current state.
If such a fully blind restart of all nodes in a chamber doesn't come up cleanly, please let me know and we can try to fix the problem.
Jameson--Are you saying restart all the guardians for a chamber in a single command? I did each alone.
I'm saying that you can, with the command I included above, and that everything should still come up fine.
I've been meaning to look into a way to make the single-chamber restart a little easier.
no restarts reported. First day after memory increase in QFS writers.
Kiwamu, Alexa, Sheila
Tonight we were having difficulty keeping DRMI locked stably, so we decided to spend some time looking into it rather than continuing to try locking the arms + DRMI.
First we had a look at our ASC loops. We copied the filters from the OMC QPD centering servos to the AS WFS centering servo; this keeps the beam centered on the WFS much better. We also had to use the picomotor to steer it a few times because we ran out of range with OM2. Sometimes our 4 DRMI ASC loops (AS A 45 Q to BS, REFL A 9 I to PRM) worked well; sometimes they did not. We looked again for a signal to use for SRM (they are using REFL B 45 Q at Livingston according to 13513). So far, all of the 45 signals have a large offset. We haven't looked at 36, or at AS B since we don't have AS B centered.
We decided to try some TCS CO2 laser power on ITMX to see if this would alleviate our mode hopping problem. We turned the CO2 laser on at 1 W. We had DRMI locked with our 4 ASC loops working, but the buildup in DRMI degraded. We then measured the contrast by locking MICH on the dark fringe, then on the bright fringe. 32 minutes after we turned the laser on, the contrast was 97.8% (worse than what Kiwamu measured in 13824), and it continued to get worse for the next ten minutes. We then turned off the laser and watched the contrast improve (Kiwamu has the screenshot).
We then looked at G1401119 to guess that we need about 370/572 of the laser power that Livingston uses (14673), so we set the power to 180 mW according to H1:TCS-ITMX_CO2_LSRPWR_MTR_OUTPUT, which is a requested power of 0.12 and an angle of 45 degrees on the rotation stage. (We set this at 8:09 UTC on October 16.)
Here are some lockloss times for DRMI: 2:51:29 and 2:57:07 UTC (arms were locked with ALS).
DIFF: 2:34:40 and 2:37:20 UTC. All of these times are October 16.
Here is the evolution of the dark port while MICH has stayed locked.
This morning the contrast was 99.84% (bright 4594 counts in AS air LF, mich dark 2.6 counts, dark offset -1)
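The quoted contrast figure is consistent with the usual definition (bright minus dark over bright plus dark, after removing the dark offset from both); a minimal sketch:

```python
def contrast(bright_counts, dark_counts, dark_offset=0.0):
    """Michelson contrast from bright- and dark-fringe photodiode counts,
    with the detector dark offset subtracted from both readings."""
    b = bright_counts - dark_offset
    d = dark_counts - dark_offset
    return (b - d) / (b + d)

# With this morning's AS air LF numbers (bright 4594, dark 2.6, offset -1)
# this returns ~0.9984, matching the quoted 99.84%.
```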
Alastair and I walked through the calibration of the H1:TCS-ITMX_CO2_LSRPWR_MTR_OUTPUT readback; last night it was calibrated in volts, but I had assumed it was watts. It turns out that this is close to correct. This means that our power has been about 0.193 W on the compensation plate since 8 UTC this morning.
The calibration from watts at the compensation plate to volts at the ADC is:
(1 V at ADC / 1 mV at power meter head) * (1.878 mV out of power meter head / 1 W onto power meter) * (1 W at power meter head / approximately 2 W at the compensation plate) ≈ 0.94 V at ADC per watt at the compensation plate, i.e. about 1.06 W at the compensation plate per volt at the ADC.
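Putting the chain together numerically (a sketch using the numbers above; the factor of 2 from power meter head to compensation plate is approximate):

```python
HEAD_MV_PER_W = 1.878    # power meter head: mV out per W onto the head
ADC_V_PER_HEAD_MV = 1.0  # readback electronics: V at ADC per mV at the head
CP_W_PER_HEAD_W = 2.0    # ~2 W at the compensation plate per W at the head

def cp_watts(v_adc):
    """Convert the H1:TCS-ITMX_CO2_LSRPWR_MTR_OUTPUT readback (in V)
    to watts at the compensation plate."""
    w_at_head = v_adc / (ADC_V_PER_HEAD_MV * HEAD_MV_PER_W)
    return w_at_head * CP_W_PER_HEAD_W

# A 0.18 V readback then corresponds to ~0.19 W at the CP, consistent
# with the ~0.193 W quoted above.
```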
Greg also sent numbers for the power he measured at the base of the periscope using a power meter: he found the minimum power to be 0.05 W at 37 degrees on the rotation stage, and the max power 5.41 W at 82 degrees. I entered these into the rotation stage calibration screen; based on this we would think we have 0.154 W at the bottom of the periscope.
Rick, Filiberto, Richard, Jim, Dave
During the PCAL install yesterday Rick was having problems driving DAC outputs through the 8-channel 18-bit AI chassis. Today we put this chassis onto the DAQ Test Stand and verified that signals were indeed coming out of the front panel. However, when we reinstalled the chassis at EX, we discovered that channel 1 was being output as channel 5, 2 as 6, etc., and channel 5 was outputting as channel 1 (basically the lower and upper blocks of four channels were reversed). Internally the filter board has two DB9 connectors, so this is where the 8 channels are divided into two sets of four. It would appear that the ribbon cables connecting the filter board to the input board are reversed, judging by the cable lengths. We reversed the cables and this unit is ready for reinstall at EX.
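The observed swap is just the two four-channel blocks trading places; a quick sketch of the mapping we saw (channels numbered 1-8):

```python
def swapped_channel(ch):
    """Observed mapping with the ribbon cables reversed: the lower and
    upper blocks of four channels trade places (channels numbered 1-8)."""
    return ((ch - 1 + 4) % 8) + 1
```

Driving each channel in turn and checking which output moves is a quick way to confirm the fix after reinstalling at EX.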
We will discuss this issue tomorrow with Richard to see how we can proceed on getting the fix to all the 8 channel PEM/PCAL AI chassis. Note these are different from the SUS AI chassis which have 16 output channels and are driven by two DAC cards.
SudarshanK, TravisS, RickS We moved the ETMY camera from the viewport above the Pcal input viewport to the viewport adjacent to the optical lever input port down at the bottom of the A1 adapter. This location gives a more complete view of the ETM (less occluded by the Arm Cavity Baffle). Fitting the camera can in that location required rotating it about 10 deg. due to interference with the OptLev sheet metal cover. The GigE camera was rotated to roughly orient the view such that the flat on the ETM is vertical. An image of the ETM with the new camera view is attached below.
Work continued with the installation of PCAL electronics at EX. All long field cables were pulled in yesterday morning. This afternoon we continued with dressing of cables and started installing DC power supplies.
The ISC Whitening chassis S1101631 mentioned in Kiwamu's log entry (14430) was removed from the X-End LVEA electronics rack. The -15 VDC rail failed. I also removed a small snake from the computer cart, all snuggled up to the warm monitor and keyboard.
I did not notice the snake!
It was the +15 rail, not the -15. My bad.