(Sheila, Terra, Jenne, Matt, Lisa)
Locking Notes:
Had some grief with H1 locking.
Before the issues above, managed to get some high power locks to Terra for PI work.
SEI_CONFIG State Change Due To EQ
Additionally, while attempting to lock we had a 6.0 EQ in Colombia which peaked at 1 um/s. I was a few minutes slow, but took SEI_CONFIG to EARTH_QUAKE_V2 before the ground motion got to 1 um/s.
Facilities Note: Noisy Air Conditioning Unit (the one closest to the entrance) was making a bit of noise. Bubba was contacted, and this should be addressed tomorrow. (He said we could turn it off and open the hallway doors if it was too noisy, but we never got to that point.)
I report a short analysis of the 18038 Hz and 18056 Hz instabilities which have been frequently observed on both the OMC output and the arm transmission QPD. As has been pointed out, these instabilities most probably correspond to ETMY mechanical modes which are aliased from 47498 Hz and 47480 Hz. In order to observe instabilities around f0 = 47.5 kHz, the optomechanical interaction must involve either 2nd or 9th order optical transverse modes (for FSR = 37.5 kHz and transverse mode spacing TMS ~ 5.3 kHz, we get f0 ~ FSR + 2*TMS or f0 ~ 9*TMS, which is the requirement to observe instabilities). Following this argument, optical modes TEM02, TEM10, TEM17, TEM25, TEM33, TEM41 or TEM09 should be considered. I can rule out the 9th order modes since they have large diffraction losses and thus a very small chance of causing instability.

According to my FEA model of the ETM, there are two spider-like modes, mode #533 and mode #535 (see attachment), which have a large overlapping factor with TEM10 (see figure). The computed resonant frequencies are 47454 Hz and 47469 Hz, respectively. Note that these two modes are 16 Hz apart from each other, which makes them very interesting: they should ring up at the same time. The overlapping factor for mode #533 is 0.13 on the ITM and 0.22 on the ETM, whereas for mode #535 it is 0.25 on the ITM and 0.39 on the ETM. TEM02 gives an overlapping parameter of order ~0.001 and can be ignored. The larger beam spot size of TEM10 on the ETMs is responsible for the larger overlapping factor on the ETMs than on the ITMs, so modes #533 and #535 of the ETMs are expected to be more susceptible to instabilities than their ITM counterparts. The estimated Q-factor is 22e+6 for mode #533 and 23e+6 for mode #535, based on losses due to the HR coating, AR coating, silicate bond, and the bulk loss of the substrate.

The last figure (Gain) on the right shows the parametric gain as a function of the ETM RoC in the IFO arm where the instabilities occur. To compute this figure I used nominal aLIGO values and P_arm = 200 kW.
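For reference, a minimal sketch (Python) of the aliasing and mode-spacing arithmetic in this argument. The 65536 Hz sample rate is my assumption (it is not stated above), but it reproduces the observed aliased frequencies exactly:

    # Sketch of the aliasing / transverse-mode-spacing arithmetic above.
    # Assumption: the 18 kHz peaks are aliases of the ~47.5 kHz mechanical modes
    # through a 2**16 = 65536 Hz sampled channel (not stated in the entry).
    fs = 65536.0                     # Hz, assumed sample rate
    mechanical = [47498.0, 47480.0]  # Hz, ETMY mechanical mode frequencies
    aliased = [fs - f for f in mechanical]
    print(aliased)                   # -> [18038.0, 18056.0], matching the observed peaks

    # Candidate optical beat frequencies that could overlap f0 ~ 47.5 kHz
    FSR = 37.5e3   # Hz, arm cavity free spectral range
    TMS = 5.3e3    # Hz, transverse mode spacing
    print(FSR + 2 * TMS)   # ~48.1 kHz (2nd order transverse modes)
    print(9 * TMS)         # ~47.7 kHz (9th order transverse modes)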
The attachment shows several plots of the CARM error point (as measured by the monitor on the common-mode board) along with the monitor of REFL9I. Both have been referred to volts out of the REFL9 demod board. The left-hand plots show the situation during O1, when there was –13 dB of gain between the summing-node input and the CARM error point monitor. The right-hand plots show the situation as of last night, when the gain was instead +3 dB.
My initial guess is that the improvement in the error point noise is simply due to the gain redistribution between O1 and now, though I have not gone through and budgeted the noise.
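As a rough sanity check on the gain-redistribution explanation (a sketch only, not a noise budget; it just converts the quoted dB values):

    # Gain between summing-node input and CARM error point monitor, O1 vs. now
    gain_o1_db, gain_now_db = -13.0, 3.0
    ratio = 10 ** ((gain_now_db - gain_o1_db) / 20.0)
    print(ratio)   # ~6.3: the same summing-node-referred signal reads ~6.3x larger
                   # at the monitor now than during O1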
[Jenne, Daniel, Ed, JeffB]
While recovering the IFO, Jeff and I could not engage the WFS centering for the AS WFS, which needs to happen before we can engage the DRMI ASC. After some confusion about what the problem was, Daniel suggested that the high voltage supply for the fast shutter was probably off. We went to the CER mezzanine, and indeed the HV for both the OMC PZT and the fast shutter was off. This likely happened as a result of some vac/Beckhoff restarts that happened today, which probably triggered the interlock that shuts off the HV when the pressure in HAM6 is too high. As soon as we turned the high voltage back on, everything was good again.
So, on the one hand, this is not a situation we expect to run into very often. On the other hand, we should probably put a test into the DIAG_MAIN guardian that warns when this HV is off, so that we don't spend an hour confused again. There is a test in DIAG_MAIN for the OMC PZT's HV, but it doesn't start looking until we're close to DC readout. We could just use this, since both HVs will be turned off at the same time.
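A minimal sketch of the kind of test this could be, assuming the usual generator-style DIAG_MAIN tests that yield notification strings via the guardian ezca object; the channel name and threshold below are placeholders, not the real ones:

    # Sketch only: HAM6_HV_READBACK and HV_NOMINAL_VOLTS are hypothetical.
    # 'ezca' is provided by the guardian environment.
    HAM6_HV_READBACK = 'SYS-ETC_HAM6_HV_VOLTS_MON'   # hypothetical readback channel
    HV_NOMINAL_VOLTS = 100.0                         # hypothetical "HV is on" threshold

    def HAM6_HV():
        """Warn if the HAM6 high voltage (OMC PZT / fast shutter) appears to be off."""
        if ezca[HAM6_HV_READBACK] < HV_NOMINAL_VOLTS:
            yield 'HAM6 HV (OMC PZT / fast shutter) appears to be OFF'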
The fast shutter test, which checks by looking at the GS-13s that the shutter fired at the time of a lockloss where we had more than 14 kW circulating, gave us one false alarm tonight.
(It correctly reported that the shutter did not fire in a lockloss where the circulating power had already dropped.) Not enough of the power was sent to the AS port for H1:SYS-MOTION_C_SHUTTER_G_TRIGGER_VOLTS to get close to its threshold of 2 Volts.
We probably want to raise this power threshold to avoid false alarms, so people don't get in the habit of ignoring the warning. For now I have increased it to 25 kW and committed the change to the SVN.
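For clarity, a sketch of the threshold logic described above (names and structure are illustrative, not the actual checker code):

    # Illustrative only: mirrors the logic described above.
    POWER_THRESHOLD_KW = 25.0        # raised from 14 kW to avoid false alarms
    TRIGGER_THRESHOLD_VOLTS = 2.0    # H1:SYS-MOTION_C_SHUTTER_G_TRIGGER_VOLTS threshold

    def run_shutter_fired_check(circulating_power_kw):
        """Decide whether to run the 'did the fast shutter fire?' GS-13 check at lockloss.
        If the circulating power was below threshold, not enough light reaches the AS
        port for the trigger to approach 2 V, so skip the check rather than false-alarm."""
        return circulating_power_kw >= POWER_THRESHOLD_KW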
Some features were added in the model (first attachment):
The closed-loop response of the AC coupling was calculated, and a filter that mimics its behavior was put in FM6 of the third loop ("ACcpl") (second attachment, bottom). Together with FM10 ("white"), which mimics the analog whitening of the outer loop error point (basically zp = (78 mHz^2, 3.6 Hz^2)), this will make the outer loop transparent when viewed from the 3rd loop.
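A sketch of the whitening shape referred to above, assuming the usual zpk reading of zp = (78 mHz^2, 3.6 Hz^2) as two zeros at 78 mHz and two poles at 3.6 Hz (my interpretation; the normalization to unity DC gain is also a choice, not stated in the entry):

    import numpy as np
    from scipy import signal

    # Two zeros at 78 mHz, two poles at 3.6 Hz (assumed reading of zp=(78mHz^2, 3.6^2)).
    f_z, f_p = 0.078, 3.6
    z = [-2 * np.pi * f_z] * 2        # zeros, rad/s
    p = [-2 * np.pi * f_p] * 2        # poles, rad/s
    k = (f_p / f_z) ** 2              # normalize to unity gain at DC (a choice)

    f = np.logspace(-3, 2, 500)       # Hz
    w, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)
    mag_db = 20 * np.log10(np.abs(h)) # gain rises between 78 mHz and 3.6 Hz, flat elsewhere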
Kyle R., Joe D.
After years of tolerating "hit or miss" micro-production runs of made-when-ordered, vendor-spec'd wire seals for Vacuum Bake Oven D (VBOD), we finally convinced the vendor to make a custom batch using our own in-house specifications. No leaks now; so far so good. The defining difference is that the new seals are slightly undersized and require a slight "stretching" to snap into place on the female flange. The vendor-spec'd ones are fabricated to the final design length and then annealed to the point that they are so soft they can't be handled without becoming elongated during packaging, so by the time we receive them they are too long and usually so loose as to be unreliably captured between the flanges. These, then, end up leaking half of the time after the bake cycle (a gamble that is very expensive in time and labor).
Title: 09/13/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO is unlocked. Scheduled maintenance day.
Commissioning: Daniel & Keita commissioning the ISS Second Loop
Outgoing Operator: N/A

Activity Log: All Times in UTC (PT)
14:45 (07:45) Chris – Going to both end stations VEAs
15:00 (08:00) Christina & Karen – Cleaning in the LVEA
15:00 (08:00) Jeff B. & Peter – Forklift old chiller to Mechanical Building (WP #6147)
15:07 (08:07) Chris – Back from End Stations
15:20 (08:20) Jeff & Peter – Finished with chiller move
15:25 (08:25) Alfredo & Elizabeth – Going into Biergarten to make measurements
15:38 (08:38) Alfredo & Elizabeth – Out of LVEA
15:41 (08:41) Filiberto – Going into LVEA to swap ISS Second Loop chassis
15:42 (08:42) Cintas on site to swap mats
15:45 (08:45) Ed – Transitioned LVEA to laser safe (WP #6156)
15:52 (08:52) TJ – Guardian BCS work (WP #6152)
15:55 (08:55) Hugh & Jim – LVEA ITMX HEPI work (WP #6140)
15:56 (08:56) Alfredo – Going into LVEA to look for boards
16:00 (09:00) Filiberto & Marc – Going into CER
16:00 (09:00) Mike L. – Safety audit inspection tour (WP #6150)
16:11 (09:11) Alfredo – Out of the LVEA
16:12 (09:12) Jason – Going into the PSL to reset Power Meter & check DBB PD
16:15 (09:15) Travis – Going to both End Stations to reset P-Cal camera
16:23 (09:23) Christina – Out of LVEA – Going to End-X to clean
16:43 (09:43) Kyle – Going to End-X for RGA work (WP #6157)
16:44 (09:44) Karen – Finished in the LVEA – Going to End-Y
16:55 (09:55) Filiberto & Marc – Out of CER
17:10 (10:10) Filiberto – Going to Mid-Y to recover cards
17:13 (10:13) Richard & Ed – Working on BS Binary IO problem
17:15 (10:15) Elizabeth & Alfredo – Out of the LVEA
17:19 (10:19) Jenne – Taking a guest on LVEA tour and to the roof observation platform
17:23 (10:23) Karen – Finished at End-Y
17:30 (10:30) Dave – DAQ restart
17:35 (10:35) Norco – Delivery of Nitrogen to End-X
17:37 (10:37) Christina – Finished at End-X
17:39 (10:39) Mike L. – Safety Audit finished in LVEA – Going to End-Y
17:40 (10:40) Jason – Out of the PSL
17:47 (10:47) Filiberto – Back from Mid-Y
17:48 (10:48) Filiberto – Pulling cable around HAM4
17:50 (10:50) Hugh – Finished in LVEA
17:53 (10:53) Hugh – Going to End Stations to check HEPI pump stations
18:14 (11:14) Karen – Going to Mid-Y
18:16 (11:16) Jenne – Finished with tour – Out of the LVEA
18:22 (11:22) Karen – Finished at Mid-Y
18:22 (11:22) Christina – Forklifting pallet into the OSB Receiving
18:25 (11:25) Travis – Finished with P-Cal cameras
18:31 (11:31) Filiberto – Finished in the LVEA
18:41 (11:41) Patrick – Updating user accounts on h1brsex & h1brsey (WP #6154)
18:57 (11:57) Hugh – Back from End Stations
19:52 (12:52) Dave – DAQ restart
20:00 (13:00) Kyle – Leaving End-X
20:10 (13:10) Dave & Nutsinee – TCS Chiller card upgrade
20:50 (13:50) Nutsinee – Going to restart TCS Chillers – Then to LVEA to check TCS
21:00 (14:00) Nutsinee – Out of the LVEA
22:21 (15:21) Ed – Going into CER to check Fast Shutter HV power supply

Title: 09/12/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: Jenne, Dave
Incoming Operator: Corey
Shift Detail Summary: Working on relocking after maintenance window.
WP 6140 FRS 4650 II 1151
De-isolated ITMY HEPI and cranked on the DSCW springs to put the platform close to its target position. Succeeded generally; a 1/12 turn of a spring gives a shift of ~30 um, which is about as small a turn as we can repeatably do (we must turn multiple springs the same amount to get the desired result). This reduced the 4 horizontal actuator drives from ~5000 counts to the tens and a few hundreds, and reduced 3 of the 4 vertical drives by a good bit. See the 1st attachment.
Different from the BS last week: looking at the local IPS positions, the local positions are pretty much what they were before the drag and drop. All the IPS readings are within 200 or 300 counts, < 0.0005"; see the 2nd attachment. Hmm, now that I revisit the BS numbers, the BS verticals maybe moved 2 or 3x what the ITMY did... So maybe, all in all, not too big a deal given the overcontrolled/constrained nature of the HEPI.
MarcP, Fil, Hugh
This morning, the 'additive' Jameson gave to H1:ISI-BS_BIO_IN_BIO_IN_TEST was still in place and was giving a fault on the Stage1 V3 Status. I removed it, and it has remained good since, except during our testing.
Historically, this has not been glitchy for the past 60 days. The CD BIO IN must be bad for 10 seconds before the ISI WD triggers and that first occurred at 1230 PDT yesterday.
Around 9 am today, monitoring the output from the BIO chassis going to the I/O computer, the indications were that the V3 channel responds the same as the others, with very similar logic voltages whether the input from the CD was present or absent. Not terribly satisfying, but maybe suggesting the BIO chassis is okay and implicating the I/O computer.
Our best test would be to actually have the fault state and repeat this check of the BIO Chassis outputs.
Meanwhile, I hear Richard instructed EdM to power things down and reseat the I/O Computer cards but I haven't heard that directly or seen a log.
FRS 6203
Gerardo M., Kyle R.
Today we transported pumps, variable AC source transformers, a leak detector, etc. to the X-end station Residual Gas Analyzer (RGA), removed the redundant 1 1/2" valve and installed the 60 liter/second turbo-molecular pump in its place. At the next opportunity, we will need to vent the RGA volume and re-flange the RGA, as it still has the "factory" fasteners, which are inadequate to achieve the desired "metal-to-metal" connection (which has proven to be impervious to bake cycles as far as leaking is concerned). We will then helium leak test all of the bolted joints and wrap up the assembly with aluminum foil and heat tapes. At that point we will await a five-day window of opportunity to run the pumps 24/5 and do a 200 C bake of the RGA assembly. In summary, the current WP is partially complete and will remain active until completed (dependent upon interferometer downtime(s)).
WP 6154 I created a controls account on h1brsex and copied the files over from the Administrator account. These were the folders 'C:/Users/Administrator/My Documents/BRS2 C#' and 'C:/Users/Administrator/My Documents/BRSX'. I also copied the Desktop shortcuts from the Administrator account to the controls account and changed them in the controls account to point to the controls directory. I stopped the BRS code running in the Administrator account and started it in the new controls account. During this time I changed the ISI configuration from WINDY to SC_OFF. It is now back to WINDY. PS: It is confusing that in the Windows explorer the folder named 'My Documents' is named 'Documents' in the Command Prompt.
I did not run the Matlab plots; if I get a chance later I'll run and post them, but no guarantees.
I went into the enclosure this morning and power cycled the power meter that measures the power reflected from the PMC. After the power cycle it is now working as expected.
While there I also reset the PSL power watchdogs (FAMIS task #3615).
I analyzed the Sep. 12th lock to see if any of the suspension drive signals were suffering from DAC calibration glitches. No suspension drive signals showed glitches coupling into DARM, PRCL, or SRCL when the MASTER_OUT signal sent to the DAC crossed values of zero or +/-2^16. We'll continue to monitor for DAC calibration glitches; this should be running automatically on a daily basis for O2.
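A minimal sketch of the kind of boundary-crossing detection involved (names are illustrative; this is not the actual analysis code):

    import numpy as np

    def dac_boundary_crossings(master_out, boundaries=(0, 2**16, -2**16)):
        """Return sample indices where a suspension MASTER_OUT time series crosses
        zero or +/-2^16, the DAC values where calibration glitches can occur."""
        idx = set()
        for b in boundaries:
            s = np.sign(master_out - b)
            # sign changes between consecutive samples mark a boundary crossing
            idx.update(np.where(np.diff(s) != 0)[0].tolist())
        return sorted(idx)

    # Usage (illustrative): crossings = dac_boundary_crossings(drive_timeseries)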
Today CP4 reservoir overfilled to 100%+ after a Dewar fill because the lower limit of LLCV in PID settings was left at 37% from CP4 flow meter data collection. This was my mistake. All lower limits are normally set to 20%. I reset it to the default 20%. While I was at it, I lowered CP1 and CP2 lower limits to 10% since these two pumps normally run at a lower % open.
New susprocpi model installed with a DAQ restart. h1seib2 was restarted in an attempt to clear the BIO error.
(Test frame writer fw2 was restarting, not shown).
WP 6153 I updated h0vaclx to add the X1 PT140 BPG402 gauge. For now both this gauge and the pirani/cold cathode pair will run simultaneously. Dave is adding the channels to the DAQ. The restart tripped the ITMX ESD high voltage (as expected) and Filiberto reset it.
Keita, Daniel
We were able to engage the new ISS with the most recent modifications, see alog 29570. The offset adjustment is very finicky: trying to run with high ISS gain at low input power requires offset adjustments below the 0.1 count level as a function of the gain value. We decided to leave the ISS gain fixed at 5 dB and set the offset through the 3rd loop filter module to 25.5 (or near there). This yields a unity gain frequency around 200 Hz. Ramping the power up to 50 W will then move the UGF up to about 4 kHz.
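A quick sanity check on the quoted numbers, assuming the second-loop UGF scales roughly linearly with input power (a sketch, not a measurement):

    # UGF at 2.1 W scaled linearly with power to 50 W
    ugf_low, p_low, p_high = 200.0, 2.1, 50.0   # Hz, W, W
    print(ugf_low * p_high / p_low)             # ~4.8 kHz, consistent with "about 4 kHz"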
The first plot shows the transfer functions at 2.1 W (red), 50 W (blue) and 50 W with the boost engaged (green). The ISS gain was 5 dB for all measurements.
The second plot shows the PSD calibrated in RIN/√Hz. The red trace shows the error point which has not been corrected for the AC coupling, whereas the magenta trace shows the sixth PD (hooked up to channel 2). Both traces were taken at 50 W without boost. The black trace shows the error point at 2 W. The amber and dark blue traces show the RIN without the outer loop engaged. The dark blue trace is limited by ADC noise above 2 kHz. The ISS gain was again fixed at 5 dB.
Today, we locked the ISS at 2.1 W and measured the low end of the transfer function. Everything as expected.
The CO2 heating power was calculated based on Aidan's CO2 power vs. PSL power plot (alog25932). With the new thermal lensing measurement (alog28799) I fine-tuned the equation.
CO2 power = slope * PSL power + offset
ITMX slope was -0.01, now -0.012
ITMY slope was -0.01, now -0.014
ITMX offset was 0.5, now 0.6
ITMY offset was 0.3, now 0.4
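Putting the updated numbers together, a sketch of the linear relation with the coefficients listed above (the function and dictionary names are illustrative, not from the TCS code):

    # Coefficients from this entry: CO2 power = slope * PSL power + offset (Watts)
    CO2_COEFFS = {
        'ITMX': {'slope': -0.012, 'offset': 0.6},
        'ITMY': {'slope': -0.014, 'offset': 0.4},
    }

    def co2_power(optic, psl_power_w):
        """CO2 heating power for a given PSL input power, per the relation above."""
        c = CO2_COEFFS[optic]
        return c['slope'] * psl_power_w + c['offset']

    # e.g. at 2 W PSL input: ITMX -> 0.576 W, ITMY -> 0.372 W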
Corrected actuation plot. Divided lensing by a factor of 2 to make it a single-pass value.
We are having another EQ now (a 6 in the Solomon Islands). We were riding out the first waves OK (we got through the peak of 0.3um/sec on the BLRMs) and then tried EQ mode v2. I think that switching to earthquake mode blew the lock. :(
We switched back to windy mode because ALS looked better that way, and made it to DRMI ASC by the time the R waves hit and blew us out of lock. Switched back to windy mode at 8:22 UTC, when the EQ band BLRMS were back down to 0.1-0.2 um/s. The attached screenshot shows a couple of trials of switching back and forth between EQ and windy mode while trying to lock. It seems that at this level of ground motion, EQ mode v2 adds too much angular motion in the 0.1-0.3 Hz band for ALS to handle (see Jim's alog).
Other than the EQs the ground is quiet: no wind, no useism.