Was able to get ITMY tfs this morning. The stage 2 resonances are not super clean, but I think that's just because we are in air. Otherwise, looks okay.
The LVEA has been transitioned to Laser HAZARD by Peter King.
Got the ClassA blanks installed and torqued down to 140 in-lbs. I can still see copper, so they are not metal to metal yet. I was getting gun-shy on the bolts, and these blanks will be replaced with the actual fiber feedthrus before a potential leak could be found or would even matter.
TITLE: 12/19 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 2 mph gusts, 1 mph 5-min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.24 μm/s
QUICK SUMMARY: Some work in the LVEA is already starting on this rainy day; Hugh is investigating SEI BSC2 trips, Bubba, Mark, and Tyler are working on the NEG pump, and Karen and Vanessa are making things spotless for us.
Closeout tasks completed in the biergarten today were:
J. Driggers, S. Dwyer, G. Grabeel, P. King, J. Kissel, E. Merilh, J. Oberling, R. Savage

We've finished the in-chamber alignment work, including aligning / checking the beams incident on the IM4 Trans QPD and the ISS 2nd Loop PD Array, and installing then using the temporary viewport simulator for the IMC REFL and Trans beams exiting the HAM2 in-vac table. (The temporary viewport remains attached to the chamber.) Remaining tasks for tomorrow morning are to fine-tune the ISS alignment using picomotors (Peter K) and to move the IM4 cage a little in yaw to relieve the bias, which is currently at 80,000 DAC counts for two OSEMs.

For all of today's alignments, we used the flashing IMC beam, *without* bypassing the input mode cleaner. We elected not to bypass because (a) we're heavily constrained on the amount of experts' time available for aligning these systems, (b) there's a sufficient amount of input power (still ~170 mW) that we could easily see the beam all the way to the input of the ISS array (which is behind the less-than-50 ppm transmission of IM4 (aka PMMT2, ~25 ppm Trans) and two 90R-10T beam splitters, ROM LH1 and ROM RH4; see E1300206), and (c) co-aligning the bypass to the flashing IMC beam is a HUGE time-suck with its current functionality. (See notes in the comments below.)

Technically, although we've confirmed that the flashing IMC beam enters the ISS array box, Peter / Rick will continue to work with the in-vac steering mirror picomotors to center the beam on the ISS array's alignment QPD early tomorrow morning. Also, for IM4 Trans, while we weren't able to physically see the beam on a card, we were able to see it on the QPD readout, which was showing signal synchronous with the IMC flashing according to MC2 Trans. The alignment was 0.65 in P and -0.25 in Y (in "standard" normalized QPD units).

For IMC REFL, we adjusted nothing, and the beam came out as shown in the IMCREFL picture. For IMC TRANS, we moved ROM RH2 (the steering mirror behind / in transmission of IM1) in yaw by three full turns of the allen key. This moved the spot position at the table port by about 45 mm (we recreated a "virtual" table with a tape measure), moving it away from virtually clipping on the edge of the ring / viewport simulator. The beam now comes out of the "viewport" as shown in the IMCTRANS picture.
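As a sanity check on the power argument in (b), a rough back-of-the-envelope estimate is easy to carry out. The sketch below only uses the numbers quoted above; which port of the two 90R-10T splitters the ISS path actually takes is an assumption here and should be checked against E1300206.

```python
# Back-of-the-envelope estimate of the power reaching the ISS PD array input
# while flashing the IMC.  Numbers come from the entry above; which port of
# the 90R-10T splitters (ROM LH1, ROM RH4) the ISS path takes is an
# assumption -- check E1300206 for the actual layout.

P_in  = 170e-3   # W, input power while the IMC is flashing
T_IM4 = 25e-6    # IM4 (PMMT2) power transmission, ~25 ppm (spec < 50 ppm)

P_after_IM4 = P_in * T_IM4               # ~4.3 uW behind IM4

# Case 1: ISS path takes the 10% (transmitted) port of both splitters
P_iss_T = P_after_IM4 * 0.10 * 0.10      # ~43 nW

# Case 2: ISS path takes the 90% (reflected) port of both splitters
P_iss_R = P_after_IM4 * 0.90 * 0.90      # ~3.4 uW

print(f"Behind IM4:              {P_after_IM4 * 1e6:.2f} uW")
print(f"ISS array (2x 10% port): {P_iss_T * 1e9:.0f} nW")
print(f"ISS array (2x 90% port): {P_iss_R * 1e6:.2f} uW")
```

Either way the level is well above what is needed to see the beam on a sensitive card or the array diodes, consistent with the decision not to bypass the IMC.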
Here are some details of the failed bypass we installed today. Drawing and pictures attached. All pictures are of the 2017-12-18 configuration. I attach the 2014 configuration from the last vent (see LHO aLOG 12702) for comparison.
More notes on bypassing the IMC: LLO has followed the procedure outlined in section 2.5 and figure 3 of T1300327, which shows a configuration much like the 2014-07-10 configuration. Examples from LLO: LLO aLOG 35957 and LLO aLOG 25361. What inspired us to bypass in the 2017-12-18 configuration was merely that we found the first bypass mirror (i.e. the one that first receives the MC1 REFL beam, closest to the edge of the table) rigidly fixed / dog-clamped to the table, aligned in such a way that it looked like it would bypass the IMC in the 2017-12-18 configuration, and we found the second bypass mirror off to the side in front of MC3. So -- maybe we invented a new way to bypass, or maybe this is a new way Cheryl invented to bypass (likely! she's smarter than us), and we just re-created it (albeit unsuccessfully).
Trying to get closeout tfs on BSC2, and it appears the Corner 3 actuators are all swapped around. The H3-to-H3 L4C/GS13 tfs for both stages look bad, but the H3-to-V3 L4C/GS13 tfs look okay. I looked at the cables out of the coil drivers and the order of the labeling is the same for all 3 drivers, so I would guess that some work was done on the outside of the chamber and the cables weren't hooked back up correctly. The first attached plot is the St2 tfs (with the cross-over tfs for corner 3); the second shows the St2 corner 3 tfs: H3 to the H3/V3 GS13s and V3 to the H3/V3 GS13s. The better-looking tfs are H3 to V3 and V3 to H3; the crappy-looking ones are H3 to H3 and V3 to V3. The St1 tfs are similarly confused, see the third plot.
Checked the cables at the chamber feedthrus -- all are consistent. The chassis/cable names C2, F2, F1, C1 map to the feedthru labels St1 V, St2 V, St2 H, St1 H, respectively, and the mapping is the same for all three corners. The only odd thing at Corner 3 is that the strain-relief zip tie is gone, but that could have been the case for some time. I pulled each cable from the feedthru, noticed nothing to report, and re-secured them.
Ed pulled the BSC2 corner 3 coil driver this morning and found that some cables inside were swapped left to right. He has fixed that and re-installed the coil driver. Corner 3 tfs look good now. BSC2 is good to go.
Hugh, Dave:
Hugh noticed that the ITMX Hardware Watchdog (HWWD) was in a tripped state and the BSC3 ISI coil drivers were powered down.
The attached plot shows a minute trend from last Wednesday afternoon (13th December). I have created a wiki page describing the bit encoding of the HWWD_STAT_OUT channel.
The SUS signals go into error at 14:32 PST. The SEI trip occurs at 15:02. The SUS error is removed at 15:14. This is a minute trend; data points are shown inside circles.
Prior to resetting the system this afternoon at 16:04 PST, Hugh verified that it was safe to re-energize the ITMX coil drivers.
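For reference, decoding a bit-encoded status word like HWWD_STAT_OUT is straightforward once the bit map is known. The snippet below is only a hedged sketch: the flag names and bit positions are hypothetical placeholders, not the real encoding, which is documented on the wiki page mentioned above.

```python
# Hypothetical sketch of decoding a bit-encoded status word such as
# HWWD_STAT_OUT.  The bit assignments below are placeholders only; the real
# encoding is documented on the wiki page referenced in the entry above.

# name -> bit position (ILLUSTRATIVE ONLY, not the real HWWD map)
BITS = {
    "SUS_OK":    0,
    "SEI_OK":    1,
    "COUNTDOWN": 2,
    "TRIPPED":   3,
}

def decode_status(word: int) -> dict:
    """Return a dict of flag name -> bool for each defined bit."""
    return {name: bool((word >> bit) & 1) for name, bit in BITS.items()}

# Example: decode a value read from the STAT_OUT channel (e.g. via caget)
print(decode_status(0b1010))
```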
The LVEA has been transitioned to LASER SAFE.
TITLE: 12/18 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: The drive to close early this week continues. SEI seems to be having trouble keeping BSC2 stable, possibly electronics issues. The IO team is working hard to finish up. SUS is closing out the BSCs.
LOG: (Long log, see attached)
TJ, Dave:
We have found, by accident, that setting DIAG_MAIN to the INIT state frees the memory locked up by the data-averaging bug in Ubuntu 12. Previously we had been periodically restarting this node; from now on, we will just run the INIT state to clear the memory.
The long-term fix is to upgrade h1guardian0 to Debian 9 before O3 starts.
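For anyone repeating the workaround, here is a minimal sketch of requesting INIT. It assumes the usual Guardian convention of exposing a <NODE>_REQUEST EPICS channel and uses pyepics; the channel name is an assumption and should be checked before relying on it.

```python
# Hedged sketch: request the INIT state of the DIAG_MAIN Guardian node to
# clear the memory held by the data-averaging bug.  Assumes the usual
# Guardian channel naming (H1:GRD-<NODE>_REQUEST); verify before use.
from epics import caget, caput

NODE_REQUEST = "H1:GRD-DIAG_MAIN_REQUEST"        # assumed channel name

previous = caget(NODE_REQUEST, as_string=True)   # remember the current request
caput(NODE_REQUEST, "INIT", wait=True)           # run INIT to free the memory
# ... once INIT has completed, restore the previous request:
caput(NODE_REQUEST, previous, wait=True)
```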
Starting at 7pm Sunday PST, the hourly autoburt backup of h1ecaty1 started to fail. The backup of h1ecaty1plc1 fails more often than it succeeds, while the backups of plc1 and plc3 have failed every time since 7pm.
The problem appears to be with the local EPICS gateway running on the autoburt machine (a virtual machine on cdsproxmox). Restarting the gateway would cause a re-broadcast of autoburt's connections, so that will wait until the code-restart moratorium is lifted. In theory the Beckhoff slow controls settings are stored locally, so the lack of a burt snapshot may not be critical.
I added 150 ml of water to the crystal chiller. I also added about 100 ml to the diode chiller. Note: there was no indication that the diode chiller water level was low. The approximately 100 ml added to the diode chiller was the remainder in the filler cup from topping off the crystal chiller. Adding the few milliliters of water left in the filler cup (after filling the crystal chiller) to the diode chiller has been enough to keep its water level above the low limit.
I"ve processed JeffK's B&K measurements of the PRM, PR3, SRM and SR3 cages. I'm posting plots comparing previous measurements, generally from April 2013 or June 2014 (based on dates in headers in the original measurement files). I've put the exports I used in the folders for the appropriate folders, i.e.:
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/BandK
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/SRM/BandK
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/BandK
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/BandK
I'm not sure where to stick the script to plot them with yet. I also need to move over the .pls files and the full exports, but Jeff and I both have USB sticks with full backups of all of the B&K measurements from the vent.
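Until the plotting script finds a home, here is a hedged sketch of the kind of comparison it produces. It assumes the B&K exports are simple two-column ASCII files (frequency in Hz, magnitude); the file names and format used below are placeholders, not the actual exports in the BandK folders above.

```python
# Hedged sketch: overlay a new B&K export against a previous one for a given
# suspension cage.  Assumes two-column ASCII exports (freq [Hz], magnitude);
# file names and format are placeholders, not the real export files.
import numpy as np
import matplotlib.pyplot as plt

def load_export(path):
    """Load a two-column (frequency, magnitude) ASCII export."""
    data = np.loadtxt(path, comments=("#", "%"))
    return data[:, 0], data[:, 1]

old_f, old_mag = load_export("PRM_BandK_2014-06.txt")   # placeholder name
new_f, new_mag = load_export("PRM_BandK_2017-12.txt")   # placeholder name

plt.figure()
plt.loglog(old_f, old_mag, label="Jun 2014")
plt.loglog(new_f, new_mag, label="Dec 2017")
plt.xlabel("Frequency [Hz]")
plt.ylabel("Magnitude [arb.]")
plt.title("PRM cage B&K comparison")
plt.legend()
plt.grid(True, which="both", alpha=0.3)
plt.savefig("PRM_BandK_comparison.png")
```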
Mid vent trends for HEPI
FAMIS7469
Laser Status:
SysStat is good
Front End Power is 35.66 W (should be around 30 W)
HPO Output Power is 152.9W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 6 days, 3 hr 58 minutes (should be days/weeks)
Reflected power = 25.6 W
Transmitted power = 48.03 W
PowerSum = 73.64 W
FSS:
It has been locked for 5 days 0 hr and 29 min (should be days/weeks)
TPD[V] = 2.725V (min 0.9V)
ISS:
The diffracted power is around 1.8% (should be 3-5%)
Last saturation event was 6 days 2 hours and 14 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
ISS diffracted power is Low
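The "Possible Issues" lines follow from comparing each reading against its nominal range. Below is a minimal sketch of that check using this week's numbers; the values and ranges are transcribed from the checklist above (the Front End Power band and the PMC reflected-power threshold are assumptions for illustration), and nothing here reads live channels.

```python
# Hedged sketch: flag PSL weekly-status readings that sit outside their
# nominal ranges.  Values/ranges are transcribed from the checklist above;
# the Front End Power band and PMC reflected-power limit are assumptions.
checks = {
    # name:                 (value, low, high, units)
    "Front End Power":      (35.66, 28.0, 32.0, "W"),  # "around 30 W" taken as ~28-32 W (assumption)
    "PMC reflected power":  (25.6,  None, 20.0, "W"),  # upper bound is an assumed threshold
    "ISS diffracted power": (1.8,   3.0,  5.0,  "%"),  # should be 3-5 %
    "FSS TPD":              (2.725, 0.9,  None, "V"),  # min 0.9 V
}

for name, (value, low, high, units) in checks.items():
    ok = (low is None or value >= low) and (high is None or value <= high)
    status = "OK" if ok else "CHECK"
    print(f"{name:22s} {value:8.2f} {units:2s} -> {status}")
```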
The upper E18 in the MSR (h1fw0's raid) was sounding an audible alarm this morning. The management web interface is showing that controller0 on this unit has/had an over-temp event, though its current temp of 45C is the same as h1fw1's raid which is not in alarm.
Trending the MSR MAX temp over three days does not show much variation. RACK1 temp shows a slightly elevated temp of 25C (75F) at 8am PST this morning.
I've silenced the audible alarm and I am working on resetting the latched alarm.
raid-msr-e18-0-0 error log confirms the 08:19 PST timing of the event:
0000:C0 01-Dec-2002 at 00:08:19:(E): A failure of controller 1, ID 000402C34787 has been detected
On further investigation, this is most probably not an over-temp alarm. I can find no logs for this event (the one posted above is the only one in the error log, but it is from the 1st of December). There are no red LEDs on the rear of the E18; there are two red STAT LEDs on the front of the unit (one on each side). Since the raid is still operational and no disks have failed, we have decided to hand this over to Dan for further investigation.