TITLE: 09/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: None
SHIFT SUMMARY: TMDS at end Y done. GV opened. Charge measurements run but not complete. Ran through initial alignment. Had to skip initial alignment of the Y arm; something kept pushing the lock off. Sheila destroyed a guardian node that was interfering with locking ALS DIFF. Back to NLN at ~56 Mpc.
LOG:
14:40 UTC Peter working in back of optics lab
14:41 UTC Restarted video0
15:04 UTC Set observatory mode from maintenance to commissioning
Jim running excitation on HAM4
15:49 UTC TJ to optics lab to retrieve item
16:53 UTC Kyle opening GV
17:03 UTC Peter done
17:22 UTC Richard to end Y to turn on high voltage for ESD and ring heater
17:44 UTC Richard done
19:45 UTC Starting locking
19:46 UTC Peter to optics lab
20:33 UTC Kyle to LVEA
20:51 UTC Kyle back
21:05 UTC Peter done
21:28 UTC Kyle to end Y to check on pump (adjacent room to VEA)
22:01 UTC Kyle back
23:44 UTC NLN ~56 Mpc
Were not being used and will be replaced with hardware.
The ISI CPS noise spectra plots all look OK. The big rise/dip in the BRS-Y plot was Jim recentering the BRS. Close FAMIS task #6914.
Hang, Tivo, Daniel
We resurrected the 72 MHz WFS chain and locked in DRMI. Since we don't have harmonics generators yet, we are using IFR signal generators. We started with the modulation set at 13 x 9.1 MHz and the demodulation set at 8 x 9.1 MHz + 205 Hz. Using a double demodulation technique at 8 x 9.1 MHz + 205 Hz (RF) and 205 Hz (digital), we should be able to derive WFS signals for the SRM. However, we noticed that before the digital demodulation the line at 205 Hz was highly variable in amplitude, and we also saw stable harmonics at 410 and 615 Hz. These lines stayed after we turned the 13x modulation off! Meaning there is contamination due to higher-order intermodulation products from our main modulation drives. As a consequence, we switched the scheme to use an 8 x 9.1 MHz demodulation and a 13 x 9.1 MHz + 205 Hz modulation. This eliminated the contamination from higher-order intermodulation products and produced a clean line at 205 Hz before the second digital demodulation; no lines were visible at 410 and 615 Hz anymore. As a result we were able to cleanly lock the PLL to the 205 Hz line of the WFS sum and demodulate the individual WFS segments.
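As a cross-check of the scheme, here is a minimal numerical sketch of the double-demodulation idea (an RF demodulation that leaves a residual 205 Hz line, followed by a digital demodulation of that line). The frequencies, amplitude, and phase below are made up and scaled down so the example runs quickly; they are not the real 9.1 MHz harmonics or interferometer parameters.

# Double-demodulation sketch (illustrative numbers only, not the real RF chain)
import numpy as np

fs      = 5e6                        # simulation sample rate [Hz]
t       = np.arange(0, 0.2, 1/fs)    # 0.2 s of data
f_rf    = 500e3                      # stand-in for the 8 x 9.1 MHz beat note
f_audio = 205.0                      # audio-frequency offset line [Hz]
amp, phase = 0.7, 0.3                # WFS segment amplitude/phase to recover

# Pretend WFS segment signal: a beat note offset from the RF LO by 205 Hz
wfs = amp * np.cos(2 * np.pi * (f_rf + f_audio) * t + phase)

# First (RF) demodulation at f_rf, then a crude low-pass by block-averaging
# 1000 samples, which rejects the 2*f_rf term and keeps the 205 Hz line
iq_rf = wfs * np.exp(-2j * np.pi * f_rf * t)
n     = len(iq_rf) // 1000 * 1000
iq_lp = iq_rf[:n].reshape(-1, 1000).mean(axis=1)
fs_lp = fs / 1000
t_lp  = (np.arange(len(iq_lp)) + 0.5) / fs_lp   # block-center timestamps

# Second (digital) demodulation at 205 Hz, then average to get the WFS signal
sig = 2 * (iq_lp * np.exp(-2j * np.pi * f_audio * t_lp)).mean()
print(abs(sig), np.angle(sig))       # recovers ~0.7 and ~0.3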
PT410B is a cold cathode (CC) gauge and PT425 is a nude Bayard-Alpert gauge.
Laser Status:
SysStat is good
Front End Power is 33.77 W (should be around 30 W)
HPO Output Power is 154.9 W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 4 days, 6 hr 44 minutes (should be days/weeks)
Reflected power = 17.36 W
Transmitted power = 57.32 W
PowerSum = 74.68 W.
FSS:
It has been locked for 0 days 0 hr and 7 min (should be days/weeks)
TPD[V] = 0.9954 V (min 0.9 V)
ISS:
The diffracted power is around 2.9% (should be 3-5%)
Last saturation event was 0 days 6 hours and 21 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
TITLE: 09/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ed
SHIFT SUMMARY:
LOG:
15:01 Turned off Sensor Correction (BRS) at both ends
15:01 Verified Pico motors at EY off. This should remain the case for the next two days
15:02 Kyle out to EY to begin discharge procedure. This task will take all day.
15:17 Karen to EY to clean.
15:20 Chris taking the pest control person into the LVEA and then down the arms
15:27 Port-a-potty service on site
15:30 Christina to EX for cleaning
15:48 re-booted Video2 and 4. Updated Observatory mode to Preventative Maintenance
15:49 Jeff B into LVEA to identify network cables for Dust Monitors
15:45 Patrick out to LVEA to execute WP#7137
16:05 Corey out to LVEA
16:14 Cintas on site
16:16 Richard and Ken out to the floor
16:21 LN2 headed to dewar 76 (X CS)
16:25 Richard, Patrick and Ken out
16:28 Soike out to LVEA
16:31 Corey craning internal to squeezer bay.
16:32 Ken back into the LVEA
16:56 Jason out to TCS spares cabinet in LVEA
17:00 Jason out
17:08 Hugh into LVEA
17:10 Kyle called and said he was prepping to close GV at EY.
17:19 Hugh out
17:19 Second EY 300u dust alarm of the day
17:29 Fil out to the floor to document cables for table disconnects
17:31 Jim out to floor to check about moving the BRS
17:31 Richard out to the floor
17:36 Pest Control checking out for the day.
17:46 Patrick and Dave to End stations to try and re-program the GPS WP#7136
17:46 Elizabeth out to the floor to pick up seismometers
18:36 Dave and Patrick as well as Soike are all back. Chris is going into the LVEA.
18:59 Dave and Patrick back at EX
19:40 Brian with Apollo through the gate.
20:27 Fil out to CER and associated Mezzanine
20:29 Patrick and Dave to EY.
21:30 restarting nuc5
21:45 HFD on site responding to an alarm that was triggered by Bubba at MY.
21:52 Dave and Patrick back
22:18 Jason out to TCS table to grab a power meter for Travis.
This afternoon while checking the PSL chiller filters, I added 200ml of water to the crystal chiller. The diode chiller water level was good.
Kyle, Gerardo, Daniel
The first (and only) cycle was started around 13:45. We reached ~10 Torr after about 24 minutes. The flow was around 65 slm, the HV at 200 mV rms, and the electrometer readout with a 12 Vpk square wave was 15-16 Vpp throughout the discharging.
The first ion gun suffered from an HV failure and had to be replaced. We measured around 3-4 kΩ resistance at the HV feedthrough (should be several MΩ).
Kyle, Gerardo
Following the gas/ion admission and before beginning to pump down (Y-end at ~10 Torr), we isolated and decoupled the Surface Discharge Ionizer from BSC10's door, installed a 2.75" CF blank flange in its place, and then "dumped" this small volume of room air into the Y-end vacuum volume. We are leaving the Y-end pumping via the turbo overnight. Gerardo and I will leak test the 2.75" blank and then open GV18 when we get in in the morning.
PSL Diode and Crystal filters are clean. No debris or discoloration observed. Closing FAMIS task #8300.
We are grateful for GariLynn Billingsley's availability to visit from Caltech last week; we successfully bonded the ears to the new ITM (ITM07) destined to become the new H1-ITMx. Ears 104 and 176 were hydroxide-catalysis (silicate) bonded to the flats of the optic. They will now cure for a month prior to the optic being suspended.
We were also able to finish gluing the magnets to the new SRM (SRM06) destined for H1 HAM5. This optic had a prism glued off nominal, so we had to do some jockeying of the magnet ring fixture to properly place the magnets such that they were "centered" nominally when the optic is hung. Again, it was great to have GariLynn here for banter on this.
Attached is a pic of the ITM on the bench with an ear showing, next to the next AERM ready for prism placement. The other picture is the SRM with its new magnets. While in the lab late last week, we moved the SRM into the bespoke optics airbake oven, and we also prepped the AERM for work sometime in the coming week. I will also get the oven going this week to outgas the SRM magnet glue joints.
WP7136 Patrick, Dave:
We have reprogrammed the EX CNS-II GPS receiver 'Position Fix' setting from "3DFix" to "PositionHold". After 30 minutes it is still not consistently tracking satellites (number of tracked satellites oscillates between 0 and 1).
Occasionally we also get a PPS error from the atomic clock, which has drifted towards the edge of its monitor window. I adjusted the nominal from 1 µs to 1.5 µs.
WP7125 TJ, Dave:
We rebooted h1guardian0; it had been running for 49 days.
Nothing to see here. Everything looks congruent with current increases. There may be an EPICS glitch in there as well?
Concur with Ed, all looks normal.
Since the cleanroom is running for the TMDS exercise today, I have lowered the chilled air supply temperature by setting it to manual control with a 60F setpoint. We'll monitor this and restore it to Auto once the cleanroom is off. The supply temperature was 61.1 F at the time I switched to manual.
I've restored the supply temperature setpoint to Auto.
model restarts logged for Mon 04/Sep/2017 No restarts reported
model restarts logged for Sun 03/Sep/2017
h1boot 09:39:08 Sun 03 Sep 2017
restart of h1boot due to freeze-up.
model restarts logged for Sat 02/Sep/2017 No restarts reported
h1boot locked up due to the 208.5-day bug.
I am reminded of a kernel 2.6.34 bug whereby the system is prone to lockup after 208.5 days have elapsed. At the time of its freeze, h1boot had been running for 215 days. This bug is also most probably the reason for h1build's freeze ten days before h1boot's freeze. The dates agree with restart data shown in this alog: Link
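For reference, the usual back-of-the-envelope arithmetic behind the 208.5-day number (assuming the commonly cited explanation: the cycles-to-nanoseconds conversion in sched_clock keeps a 64-bit product with a 10-bit scaling shift, so it wraps after 2^64 / 2^10 nanoseconds):

# 208.5-day figure, assuming the commonly cited 64-bit / 10-bit-shift overflow
ns_to_overflow = 2**64 / 2**10          # nanoseconds until the product wraps
days = ns_to_overflow / 1e9 / 86400     # ns -> s -> days
print(round(days, 1))                   # ~208.5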
This will all be resolved soon when we transition the front ends, boot and build machines to a later kernel.
The longest-running front ends have been up for 123 days, so there is no need to reboot these soon. The Gentoo DAQ machines are running kernel 2.6.35, which has a bug fix for this problem. This is evidenced by h1tw1, which has been running for 239 days, well beyond the 208.5-day onset of the problem.