Did another initial Y-arm alignment from scratch.
- TMS pointing to hit the center of ITMY, with the hitting-catch-at-all-four-cardinal-directions technique (Pitch: -107.0, Yaw: -25)
- ITMY pointing to the baffle PDs:
  BPD 1: Pitch: 197.0, Yaw: -157.5
  BPD 4: Pitch: 230.7, Yaw: -191.5
  Center: Pitch: 213.85, Yaw: -174.5
- ETMY pointing to center, with the same hitting-catch-at-all-four-cardinal-directions technique (Pitch: -107.7, Yaw: -34.1)
- This brought back good flashes at the center of the reference camera, without any move of the BS or PR3.
As mentioned in alog 11488, I had pulled the ALS EY REFL common mode board. I have modified the second boost stage so that it is the same as the first (pole at 100 Hz and a zero at 1 kHz). The serial number is S1102632. In detail:
The board is now back in the rack and the TFs look good.
This is a tabletop photodetector interface (D1002932-V5, S1103811) in ISCTEY that was misbehaving yesterday. I opened it and found this.
None of the power connectors for the Thorlabs PDs and BBPDs were soldered to the board, and one of the connectors was loose (the right-most one in the video). See how the legs move as I wiggle it.
It's a miracle that it worked at some point. I soldered all of the connectors.
(If avi doesn't play, try mov file, though the time of the mov file seems to be out of whack.)
I opened and checked two randomly selected "Table-Top" boxes (D1002932-v4) from the twelve boxes in Mid-Y storage, serial numbers S1103796 and S1103810, for quality of build, and found them to be in good shape. The front circuit card was soldered onto the panel connectors. All screws and fasteners were tight. The general quality of work looked good. Let's hope the box Keita found (S1103811) was an oversight.
I've rebooted h1guardian0 as a requested test to see if it recovers gracefully after a restart, which it appears to do. Also, I took the opportunity to configure the IPMI management port during the reboot. Note that when I logged in to reboot the machine, there were 20 zombie processes reported. That's probably not great since it was just rebooted yesterday.
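For future checks, a zombie count like the one reported at login can be reproduced by filtering process state codes; a minimal sketch (assuming `ps -eo stat=`-style input, not any actual monitoring tool on h1guardian0):

```python
def count_zombies(stat_lines):
    """Count process state codes that begin with 'Z' (defunct/zombie),
    e.g. the lines produced by `ps -eo stat=`."""
    return sum(1 for s in stat_lines if s.strip().startswith("Z"))
```

For example, `count_zombies(["S", "Z", "R+", "Z+"])` gives 2.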
The Apollo crew was breaking loose the bolts at the septum between HAM5 and HAM6 to prepare for the upcoming HAM5 installation work. I monitored the dust counts before and during the work on the bolts. Before work started, the dust counts in the cleanroom were zero. During the loosening, counts were in the 40 to 60 range while the bolts were actually being turned. Shortly after the wrench work stopped, the counts dropped back to zero. I saw no 0.3 micron counts over 100 during the monitoring; larger-particle counts were generally less than the 0.3 micron counts.
I am about to start upgrading drivers on the DAQ as specified in WP4583. There will be NO DATA recorded during the period when the data concentrator is rebooted - as it is due for a full FSCK at boot, this will be a period of 10-15 minutes. For the remaining system upgrades, there will be intermittent access to the recorded data as those systems (h1nds1, h1fw1, h1broadcast0) are rebooted. Changes to h1nds0 and h1fw0 will happen at a later time. Further updates will be posted to this entry with specific downtime.
DAQ Downtime Report
h1dc0: 16:07:40 - 16:18:00 UTC There is NO data recorded by the DAQ during this period.
h1nds1: 16:26:00 - 16:28:20 UTC
h1fw1: 16:33:55 - 16:55:50 UTC There is NO data available via h1nds1/h1fw1 for this period. Use h1nds0/h1fw0 for frames ending/starting in this timeframe.
h1broadcast0: 17:06:55 - 17:16:00 UTC
Most installs were uneventful. However, on h1fw1 the MTU was not set to 9000 in /etc/conf.d/net as it is on h1fw0, which prevented daqd from running after the restart. I changed /etc/conf.d/net to match and rebooted to fix it; I have no idea how it ever worked before. On h1broadcast0, I disabled the items in local.start that are only useful for a data concentrator; h1broadcast0, being a clone of a data concentrator, had these unnecessary additions.
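For reference, the jumbo-frame setting in question lives in /etc/conf.d/net (Gentoo netifrc syntax); a sketch, assuming the DAQ network interface is eth2 (the actual interface name on h1fw1 may differ):

```
# /etc/conf.d/net (netifrc): enable jumbo frames on the DAQ interface
# NOTE: "eth2" is a placeholder; use the actual broadcast-network interface
mtu_eth2="9000"
```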
Technical Details
(I inadvertently left these out of the original entry)
The change upgrades the Myricom ethernet adapter drivers for the DAQ broadcast network to version 1.5.3.p3, compiling them with the MYRI10GE_ALLOC_ORDER=2 option and using the big_rxring firmware at driver load. This is an attempt to reduce the number of dropped frames seen occasionally, most often on the framewriters, which also trigger 'retransmission request' errors in the daqd log. Additionally, the data concentrator will use the MYRI10GE_THROTTLE option to see whether tuning the packet emission rate has any effect on the receiving systems. The primary method of measuring any change is SNMP monitoring of the DAQ broadcast switch's dropped/paused frames per host port. The same changes on the test stand indicate some improvement.
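The SNMP-based measurement boils down to differencing per-port discard counters between polls; a minimal sketch of that bookkeeping (hypothetical port names and counter values, not the actual switch MIB interface):

```python
def discard_rates(prev, curr, interval_s):
    """Per-port discard rate (frames/s) from two samples of a switch's
    ifOutDiscards-style counters, taken interval_s seconds apart."""
    return {port: (curr[port] - prev[port]) / interval_s for port in curr}
```

For example, a port whose counter advances by 60 over a 60 s poll interval is discarding one frame per second.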
Things were working fine until 9:00 AM PT, when the PSL shut off due to a chiller error. Sheila was notified. When the PSL comes back online, I'll append its status as a comment.
This is Thomas
Sheila turned the laser back on. She thinks the chiller may have caused the trip, but it didn't need re-filling.
- Laser is ON
- Output power = 27.5 W
- Watchdog is RED
- PSL SYSSTAT.adl is all green
- PMC just came back online
- PMC reflected power at 1.4 W and transmission at 10.0 W
- RefCav just came online
- Camera looks OK and trans PD threshold is at 0.75
- ISS is at 6.58% diffracted power; just came back online
Aidan, Matt, Dave H.
The CO2X laser was energized and aligned to the initial polarizer and AOM. The AOM was energized (the external RF input and power appear to work successfully) and set to maximum modulation depth. The AOM was aligned to the Bragg angle to maximize the diffraction efficiency (this has yet to be measured).
The cable connecting the voltage and current monitor channels from the laser power supply to the ADC had to be rewired, as the I_MON and GND pins were swapped.
model restarts logged for Mon 21/Apr/2014
2014_04_21 09:46 h1iopsush2a
2014_04_21 09:48 h1susmc1
2014_04_21 09:48 h1susmc3
2014_04_21 09:49 h1susmc3
2014_04_21 09:51 h1suspr3
2014_04_21 09:51 h1susprm
2014_04_21 09:52 h1suspr3
Reboot of h1sush2a (link). Unsure why some models were logged twice. (Purple = IOP, Green = User)
Find attached the watchdog trips summary from last week and the beginning of this week. We'll discuss it during the SEI portion of today's testing meeting.
Stefan, Sheila
After locking the X arm (we turned on the opLev damping and slowed down the gain ramping in the ALS COMM guardian), we aligned the IR to the X arm using IM4 and PR2. Then we tweaked the BS alignment, and saw flashes of up to 350 counts on ASC_Y_TR_A_SUM.
We went out to the Y end to try to align the beam onto the camera and LSC PD, but we didn't see the beam while the cavity was flashing, and we noticed that we still need power for the analog camera on the table.
We came back to the corner and looked at the separation of the green beams on ISCT1; right before the prism they are separated by 9 mm. They were well placed on the prism as we found them.
We currently have both the X and Y transmitted beams on the ISCT1 camera, and have marked both of their current positions on the monitor in the control room. Tomorrow we can use this as a reference for the BS alignment.
(Keita, Alexa)
We adjusted the phase-shifter delay line to 20 ns. We optimized this by looking at the X-Y plot of the demodulated I_MON error signal versus the PD DC readout (see first picture).
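For context on why the delay-line length matters: the demodulation phase rotates linearly with delay at the modulation frequency. A toy calculation (the modulation frequency below is a placeholder, not the actual ALS EY value):

```python
def demod_phase_shift_deg(f_mod_hz, delay_s):
    """Demodulation phase rotation (degrees) introduced by a delay
    line of length delay_s at modulation frequency f_mod_hz."""
    return (360.0 * f_mod_hz * delay_s) % 360.0
```

For example, a 20 ns delay at a hypothetical 24 MHz modulation frequency rotates the demod phase by about 173 degrees.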
With this phase, and the following common mode board settings, we took an open loop transfer function of the PDH:
The OLTF gave a UGF of about 5.6 kHz with a phase margin of about 20 deg. I have attached a picture and the data (119 --> mag, 120 --> phase). However, this was not exactly reproducible, as the lock kept dropping and the alignment was drifting.
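Extracting those two numbers from the attached transfer-function data amounts to finding the 0 dB crossing and reading off the phase there; a minimal sketch (hypothetical data arrays, not the attached 119/120 traces):

```python
import numpy as np

def ugf_and_phase_margin(freq_hz, mag_db, phase_deg):
    """Unity-gain frequency and phase margin from an open-loop TF.

    Finds the first 0 dB crossing of the magnitude, interpolates the
    crossing frequency on a log axis, and returns (ugf_hz, margin_deg),
    where margin = 180 deg + phase at the UGF.
    """
    freq = np.asarray(freq_hz, dtype=float)
    mag = np.asarray(mag_db, dtype=float)
    ph = np.asarray(phase_deg, dtype=float)
    # index of the first sign change in magnitude (gain crossing 0 dB)
    idx = np.flatnonzero(np.diff(np.sign(mag)))[0]
    # linear interpolation fraction between the two bracketing points
    frac = -mag[idx] / (mag[idx + 1] - mag[idx])
    ugf = freq[idx] * (freq[idx + 1] / freq[idx]) ** frac
    phase_at_ugf = ph[idx] + frac * (ph[idx + 1] - ph[idx])
    return ugf, 180.0 + phase_at_ugf
```

With a measured UGF near 5.6 kHz, a returned margin near 20 deg would match the numbers quoted above.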
We pulled the PDH common mode board SN S1102632 to make the same adjustments to the second boost stage as were made for EX (see alog 9357). We also pulled the generic interface for the ISC tables (D1002932) SN S1103811, since two of the four ±12V DC outputs were not working. Both units are currently in the EE shop.
We also found a bad connection between the phase modulation panel on ISCTEY and the Pockels cell. We reconnected the cable and this seemed to help. We measured the EOM RF drive and found 240 mV RMS.
Attached are spectra of the ITMY and ETMY optical levers with the ISI blends in two different configurations:
BLUE: Tbetter on both stages and for every DOF (except Ry, which stayed on Tcrappy)
PINK: Tcrappy on both stages
The sensor correction on stage 1 was running during the measurement for both ISIs.
There is no major difference in the YAW spectra (bottom two plots), since Ry stays on Tcrappy, but the difference can be clearly seen in Pitch (top two plots). Tcrappy improves the microseism (~0.1 Hz) but gives less attenuation at the suspension resonance frequencies (~0.5 Hz).
The reason we started looking at this is that we were bothered by the large (order of 1 urad peak) fluctuations at around 0.14 Hz. The microseism is a bit higher today than it was when Rich was here designing Tbetter; it has been around 5e-5 decaum/s most of the day (the upper dashed line on the 0.1-0.3 Hz PEM FOM). We are also running Tcrappy + sensor correction on the X arm, with opLev damping as well.
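The trade-off above arises because each stage's displacement and inertial sensor signals are combined with complementary low-pass/high-pass blend pairs, and moving the blend frequency shifts where noise leaks in. A toy first-order pair (the real Tbetter/Tcrappy filters are much higher order) shows the defining constraint that the two filters sum to unity:

```python
import numpy as np

def complementary_pair(f, f_blend):
    """First-order complementary blend filters evaluated at
    frequencies f (Hz), for a blend frequency f_blend (Hz)."""
    s = 2j * np.pi * np.asarray(f, dtype=float)
    a = 2 * np.pi * f_blend
    lp = a / (s + a)   # low-pass: passes the displacement sensor
    hp = s / (s + a)   # high-pass: passes the inertial sensor
    return lp, hp

# By construction lp + hp == 1 at every frequency, so lowering the
# blend frequency trades displacement-sensor noise (e.g. microseism)
# against inertial-sensor noise at the suspension resonances.
```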
J. Kissel, for S. Dwyer, J. Rollins, D. Barker, A. Pele, J. Batch, H. Radkins, D. Sigg, K. Kawabe, S. Ballmer

Several things went down at the same time during my melee with the front ends yesterday (see LHO aLOG 11464), with little indication of the problem, which gave us a proper Monday morning adventure. After much digging, we think we've uncovered the timeline of how things went bad:

(1) 2014-04-20 18:51 UTC (Sunday, 11:51a PDT): Kissel requests the IMC Guardian to enter the DOWN state; the MC REFL camera loses signal. Unclear what this did, but the guardian does not touch any alignment signals #foreshadowing. This is very shortly after I posted the "I'm getting started" LHO aLOG 11463.

(2) 2014-04-20 20:40 UTC (Sunday, 01:40p PDT): After two successful front-end model install/restarts (PR3 and MC1), the install/restart of MC3 causes the h1sush2a front-end computer's IOP to throw a FIFO error. This results in the familiar error where drive appears to come out of the user model but does not get past the IOP to the real world (see LHO aLOGs 7385, 8424, 8964).

(3) 2014-04-20 21:38 UTC (Sunday, 02:38p PDT): The guardian computer crashes because it ran out of memory. This rendered all guardians non-functional.

Fixes (in chronological order):

(A) (for 2) 2014-04-21 16:45-17:00 UTC (Monday, 09:45a-10:00a PDT): all h1sush2a user models (h1susmc1, h1susmc3, h1susprm, h1suspr3) killed, h1iopsush2a restarted, all models started.

(B) (for 3) 2014-04-21 16:50-18:00 UTC (Monday, 09:50a-11:00a PDT): Jim physically reboots the guardian machine; Jamie logs in remotely and fixes things.

(C) (for 1) 2014-04-21 18:25-19:00 UTC (Monday, 01:25p-02:00p PDT): Fix (2) and (3), and *realign the MC WFS path* to regain WFS centering and a good camera shot.
Detailed Commentary:

(A,2) The FIFO error (2) is frustrating, not only because I still haven't put the error indicator on the SUS screens (totally my fault, accepted), but also because the error indication itself is indicative of two different things:
- When a USER DACKILL has said "I'm in a bad state, ignore the DAC outputs from my model."
- When the IOP throws a FIFO error.
There may be other causes I don't know about as well. This is bad because, although hopefully now much more rare, the USER DACKILL trips whenever the USER watchdogs trip, to ensure that no drive signal gets out of the USER model's domain. This connection between USER watchdogs and USER DACKILLs was established in the overly cautious time after the 2012 fiber break, when we weren't sure that stopping the last output (i.e. what the user watchdog does) actually stopped the drive. Perhaps it's time to just remove this layer...

(B,3) The only information I have about the guardian failure is what you see in LHO aLOG 11470. Hopefully Jamie can give a more complete report in the coming days.

(C) We have NO IDEA why the MC REFL path had changed. Note that Stefan recalls a similar incident in February (see LHO aLOG 10335). Once we recovered the MC SUS drive, and guardians re-aligned the SUS and HEPIs, we spent the usual hour or two convincing ourselves that every drivable object was in the same place:
- The TRANS path showed good behavior. The transmitted light from the mode cleaner looks as it did before: H1:IMC-TRANS_OUTPUT is ~3800 [ct], H1:IMC-IM4_TRANS_SUM_OUTPUT is ~25 [ct], and the splotch on the IMC TRANS camera looks centered and the same.
- Moving around the PSL PZT only decreases the TRANS signal, indicating that the input pointing is still good.
- SUS are in the same place. All bottom-stage OSEMs showed the same locations as before, alignment sliders had the correct offsets in place, and MC WFS offload values were roughly the same (and small compared to the offsets).
- HEPI are in the same place.
Aside from trends of the CPS on both HEPIs and ISIs, we slowly translated HEPI in X and Y, which moved BOTH the REFL and TRANS camera signals, indicating the REFL change is not common to both signals.
- Sheila locked up the X arm to get a straight shot to PR3, which bounces off PR3, through PR2, then back to HAM1 and onto ISCT1. Since the beam is large on PR3, it's also quite sensitive to HAM2's alignment. The green remained aligned on the PDs on ISCT1.
Given that there's *no* active steering between MC1 and the REFL WFS / camera, we just don't understand how a reboot of any computer would change this path's alignment. The best theory is that there's a loose optic in the path on HAM2, which gets jostled during a HEPI/ISI trip or reboot. Conscious of shaking more important things, I always ramp down the control and offsets on the ISIs and HEPIs before doing model restarts, but... like I said, it's all we've got. Sarah would be proud.
More great progress was made on the assembly of 3IFO actuators. Scotty and I should be able to finish with the assemblies tomorrow. I was also able to get some ICS time in today.