Daniel, Keita, Sheila, Thomas, Hang
1.
Because the phases of the 118.3 MHz and the 9.1 MHz & 45.5 MHz sidebands are not perfectly synchronized, we need to use a double demodulation scheme to sense the AS72 signal; see T1700324. Previously, we offset the first demodulation frequency by 205 Hz (i.e., demodulated at 72.8 MHz + 205 Hz for the first demod). However, with this configuration we could not engage the PLL loop to stabilize the second demodulation frequency.
Daniel pointed out that if we demod at 72.8 MHz + 205 Hz, we would down-convert both <45.5M, 118.3M> and <2x45.5M, 2x9.1M> to 205 Hz, and the two signals could beat against each other since the 118.3 MHz sideband's phase drifts relative to 9.1 MHz. Therefore the PLL loop could not be locked.
To solve it, we instead demodulated at exactly 72.8 MHz for the first demod, but offset the 118.3 MHz modulation by 205 Hz. After this modification we had a stable PLL loop tracking the 205 Hz line.
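As a sanity check on the beating argument, here is a toy numerical sketch (not site code; the sample rate, drift rate, and equal amplitudes are all made-up assumptions) showing why two products landing on the same 205 Hz line with a drifting relative phase defeat the PLL:

```python
import numpy as np

# Toy model: both the <45.5M, 118.3M> and <2x45.5M, 2x9.1M> products land at
# 205 Hz after the first demod, but with a drifting relative phase.
fs = 16384                          # sample rate (made-up for the sketch)
t = np.arange(0, 10, 1 / fs)
f_line = 205.0
drift = 2 * np.pi * 0.1 * t         # assumed slow relative phase drift

wanted = np.cos(2 * np.pi * f_line * t + drift)   # <45.5M, 118.3M> product
spur = np.cos(2 * np.pi * f_line * t)             # <2x45.5M, 2x9.1M> product

# Digital demod of the summed 205 Hz line, averaged in 1 s blocks:
iq = (wanted + spur) * np.exp(-2j * np.pi * f_line * t)
amp = 2 * np.abs(iq.reshape(10, -1).mean(axis=1))
print(amp.round(2))   # amplitude swings between ~2 and ~0 as the phases beat
```

With only one drive offset by 205 Hz (the fix above), a single product lands on the line and its amplitude is steady, so the PLL can track it.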
2.
We performed a preliminary AS72 vs AS36 comparison. In the DRMI configuration (10 W input), the SRM pitch responses in cnt/rtHz were (see the first plot):
     | A_I | A_Q  | B_I | B_Q
AS36 | 253 | 207  | 944 | 360
AS72 | 2.5 | 0.19 | 1.4 | 1.2
In order to see the signal, we had to use a very narrow bandwidth of 0.01 Hz. For reference, the noise level for AS72 is about 0.1 cnt/rtHz.
Then at NLN with the full IFO, we repeated the measurement (second plot):
     | A_I | A_Q  | B_I | B_Q
AS36 | 318 | 77   | 786 | 223
AS72 | 1.6 | 0.39 | 1.1 | 0.7
The noise level for AS72 is still about 0.1 cnt/rtHz.
In addition, we report the sum signals with the full IFO:
     | A_I    | A_Q    | B_I    | B_Q
AS36 | 25 000 | -4 300 | 33 000 | -14 000
AS72 | 169    | -3.3   | 211    | -1.8
We also tried to see the BS response but could not see anything, even though the line we drove dominated the BS RMS motion...
Note that the 72 MHz signals are tiny because the modulation index for 118 MHz is puny. See alog 37042. Since we are adding the new modulation to the EOM for the IMC, it is completely off-resonance. Obviously, we would need a dedicated EOM if this installation is to become permanent.
Patrick, Daniel, Hang, Sheila, TVo
We are back at NLN with a range of about 55 Mpc.
Good news: it seems like the ETMY TMDS slightly lowered the noise floor in the 15-55 Hz band!
Attached are spectra comparing a few different times; the first 3 are just to give a rough reference of what the noise floor was before the discharge measurement but after the Montana EQ.
It seems like the ETMX discharge also helped a bit, but it wasn't obvious unless you take on the order of hundreds of averages. This still doesn't reconcile all of the mystery noise, but there is some progress.
We also looked at the spectra before/after discharging with jitter noise removed. Instead of running Jenne's code with time-domain causal filters, we just did a rough freq domain subtraction. The results are attached.
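For illustration, here is a minimal sketch of the kind of rough frequency-domain subtraction meant here, using fabricated correlated data (not DARM; the channels, sample rate, and 0.5 coupling coefficient are all assumptions): estimate the coherence between the target and the jitter witness, then remove the coherent power from the PSD.

```python
import numpy as np
from scipy.signal import coherence, welch

# Fabricated data standing in for DARM and a jitter witness channel; the
# sample rate, coupling (0.5), and duration are made-up.
fs = 1024
rng = np.random.default_rng(0)
n = 64 * fs
jitter = rng.normal(size=n)                 # witness channel
darm = 0.5 * jitter + rng.normal(size=n)    # target with partial coherence

f, S_darm = welch(darm, fs=fs, nperseg=4 * fs)
_, C = coherence(darm, jitter, fs=fs, nperseg=4 * fs)

# Rough frequency-domain subtraction: remove the power coherent with the
# witness, leaving the residual PSD.
S_clean = S_darm * (1 - C)
```

Unlike a time-domain causal filter, this only predicts the residual spectrum; it cannot produce a cleaned time series.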
We'll try to check the calibration in the morning.
J. Kissel, S. Dwyer, T. Vo
In case anyone mistrusts whether the changes in actuation strength of the ETMY from discharging were covered by the time-dependent correction factors (i.e. kappa_TST), we show a zoom of the ~35 Hz PCAL calibration line. The largest discrepancy between line heights is 5%, which is within the stated limit of uncertainty and systematic error for a 68% confidence interval. So, no need to mistrust the calibration here. We've decided against making a full sweep given that we're so crunched for time on this last night with the O1-O2-like H1 interferometer. May we come back more sensitive than ever!
Results attached. If you made one of these changes, please commit it with a description of the reason the change was made.
Kyle, Gerardo
Today we leak tested the new 2.75" CFF, valved-in the RGA and NEG pump, dumped GV18's unpumped gate annulus volume into the adjacent (pumped) annulus volume, and opened GV18. This completes WP #7139 and #7139. Attached is the pumpdown.
Note: the controller for the Y2-8 ion pump is showing 5600 V, 1.0 x 10^-11 Torr and 0.0 microamps pump current - hmmm. I STOPPED then STARTED the HV but got the same reading. Neither the "Torr" nor the "current" are as expected. This might be related to the controller setup parameters. Otherwise, the pump isn't actually pumping! I'll consult Gerardo tomorrow.
Another ion pump cable issue?
What does X2-8 read?
Yes - the ion pump current could be at the limit of the controller's measurement circuit. Pressures on the two arms look to be the same now. I thought there was a factor of 10 difference yesterday but maybe I had the wrong glasses on. Kyle or Gerardo?
I checked the display of the controller for X2-8 this morning and found it displays the same as the Y2-8 controller. That's great and all, and obviously they are pumping, but I thought that we had observed leakage current in the 300' HV cable that amounted to microamps. Also, as a general rule of thumb, figure 1 microamp of pump current for a 100 l/s ion pump at 1 x 10^-9 Torr. As such, we should be seeing 5-10 microamps of pump current in addition to any leakage current.
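The rule of thumb above can be written down directly (the linear scaling in both pump speed and pressure is the usual assumption):

```python
# Rule of thumb: ~1 uA of ion pump current per 100 l/s of pump speed at
# 1e-9 Torr, scaling linearly with both speed and pressure (assumed).
def expected_current_uA(speed_l_per_s, pressure_torr):
    return 1.0 * (speed_l_per_s / 100.0) * (pressure_torr / 1e-9)

# A 500-1000 l/s pump at 1e-9 Torr gives the 5-10 uA quoted above:
print(expected_current_uA(500, 1e-9), expected_current_uA(1000, 1e-9))
```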
I suggest turning off one or both ion pumps and looking for a response on the vacuum gauge over a period of a day or more. Hopefully this would confirm that they are pumping. As to the leakage current - maybe things have dried out significantly over the summer and therefore the leakage has fallen? We need to get some of these signals into EPICS to enable trending.
TITLE: 09/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: None
SHIFT SUMMARY: TMDS at end Y done. GV opened. Charge measurements run but not complete. Ran through initial alignment. Had to skip initial alignment of the Y arm; something kept pushing the lock off. Sheila destroyed a guardian node that was interfering with locking ALS DIFF. Back to NLN at ~56 Mpc.
LOG:
14:40 UTC Peter working in back of optics lab
14:41 UTC Restarted video0
15:04 UTC Set observatory mode from maintenance to commissioning
Jim running excitation on HAM4
15:49 UTC TJ to optics lab to retrieve item
16:53 UTC Kyle opening GV
17:03 UTC Peter done
17:22 UTC Richard to end Y to turn on high voltage for ESD and ring heater
17:44 UTC Richard done
19:45 UTC Starting locking
19:46 UTC Peter to optics lab
20:33 UTC Kyle to LVEA
20:51 UTC Kyle back
21:05 UTC Peter done
21:28 UTC Kyle to end Y to check on pump (adjacent room to VEA)
22:01 UTC Kyle back
23:44 UTC NLN ~56 Mpc
Were not being used and will be replaced with hardware.
The ISI CPS noise spectra plots all look OK. The big rise/dip in the BRS-Y plot was Jim recentering the BRS. Close FAMIS task #6914.
Hang, TVo, Daniel
We resurrected the 72 MHz WFS chain and locked in DRMI. Since we don't have harmonics generators yet, we are using IFR signal generators. We started with the modulation set at 13 x 9.1 MHz and the demodulation set at 8 x 9.1 MHz + 205 Hz. Using a double demodulation technique at 8 x 9.1 MHz + 205 Hz (RF) and 205 Hz (digital), we should be able to derive WFS signals for the SRM.
However, we noticed that before the digital demodulation the 205 Hz line was highly variable in amplitude, and we saw stable harmonics at 410 and 615 Hz. These lines stayed after we turned the 13x modulation off! Meaning, there is contamination due to higher-order intermodulation products from our main modulation drives.
As a consequence, we switched the scheme to use an 8 x 9.1 MHz demodulation and a 13 x 9.1 MHz + 205 Hz modulation. This eliminated the contamination from higher-order intermodulation products and produced a clean line at 205 Hz before the second, digital demodulation. No lines were visible at 410 and 615 Hz anymore. As a result, we were able to cleanly lock the PLL to the 205 Hz line of the WFS sum and demodulate the individual WFS segments.
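A quick brute-force check (a sketch, not how the problem was diagnosed) shows which low-order integer combinations of the drive frequencies land exactly on 8 x 9.1 MHz, including both the wanted <45.5M, 118.3M> product and the <2x45.5M, 2x9.1M> spur:

```python
from itertools import product

# Drive frequencies: 9.1 MHz and its 5th and 13th harmonics.
f1, f2, f3 = 9.1e6, 45.5e6, 118.3e6
target = 8 * 9.1e6                  # 72.8 MHz demodulation frequency

# Search small integer coefficients a*9.1M + b*45.5M + c*118.3M = 72.8M
hits = [(a, b, c)
        for a, b, c in product(range(-3, 4), repeat=3)
        if abs(a * f1 + b * f2 + c * f3 - target) < 1.0]
print(hits)   # includes (0, -1, 1), the wanted product, and (-2, 2, 0), the spur
```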
PT410B is CC and PT425 is a nude Bayard-Alpert
Laser Status:
SysStat is good
Front End Power is 33.77W (should be around 30 W)
HPO Output Power is 154.9W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 4 days, 6 hr 44 minutes (should be days/weeks)
Reflected power = 17.36Watts
Transmitted power = 57.32Watts
PowerSum = 74.68Watts.
FSS:
It has been locked for 0 days 0 hr and 7 min (should be days/weeks)
TPD[V] = 0.9954V (min 0.9V)
ISS:
The diffracted power is around 2.9% (should be 3-5%)
Last saturation event was 0 days 6 hours and 21 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
TITLE: 09/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ed
SHIFT SUMMARY:
LOG:
15:01 Turned off Sensor Correction (BRS) at both ends
15:01 Verified Pico motors at EY off. This should remain the case for the next two days
15:02 Kyle out to EY to begin discharge procedure. This task will take all day.
15:17 Karen to EY to clean.
15:20 Chris taking the pest control person into the LVEA and then down the arms
15:27 Port-a-potty service on site
15:30 Christina to EX for cleaning
15:48 Re-booted Video2 and 4. Updated Observatory mode to Preventive Maintenance
15:49 Jeff B into LVEA to identify network cables for Dust Monitors
15:45 Patrick out to LVEA to execute WP#7137
16:05 Corey out to LVEA
16:14 Cintas on site
16:16 Richard and Ken out to the floor
16:21 LN2 headed to dewar 76 (X CS)
16:25 Richard, Patrick and Ken out
16:28 Soike out to LVEA
16:31 Corey craning internal to squeezer bay.
16:32 Ken back into the LVEA
16:56 Jason out to TCS spares cabinet in LVEA
17:00 Jason out
17:08 Hugh into LVEA
17:10 Kyle called and said he was prepping to close GV at EY.
17:19 Hugh out
17:19 Second EY 300u dust alarm of the day
17:29 Fil out to the floor to document cables for table disconnects
17:31 Jim out to floor to check about moving the BRS
17:31 Richard out to the floor
17:36 Pest Control checking out for the day.
17:46 Patrick and Dave to End stations to try and re-program the GPS WP#7136
17:46 Elizabeth out to the floor to pick up seismometers
18:36 Dave and Patrick as well as Soike are all back. Chris is going into the LVEA.
18:59 Dave and Patrick back at EX
19:40 Brian with Apollo through the gate.
20:27 Fil out to CER and associated Mezzanine
20:29 Patrick and Dave to EY.
21:30 Restarting nuc5
21:45 HFD on site responding to an alarm that was triggered by Bubba at MY.
21:52 Dave and Patrick back
22:18 Jason out to TCS table to grab a power meter for Travis.
This afternoon while checking the PSL chiller filters, I added 200ml of water to the crystal chiller. The diode chiller water level was good.
Kyle, Gerardo, Daniel
The first (and only) cycle was started around 13:45. We reached ~10 Torr after about 24 minutes. The flow was around 65 slm, the HV at 200 mV rms, and the electrometer readout with a 12 Vpk square wave was 15-16 Vpp throughout the discharging.
The first ion gun suffered an HV failure and had to be replaced. We measured around 3-4 kΩ resistance at the HV feedthrough (should be several MΩ).
Kyle, Gerardo
Following the gas/ion admission and before beginning to pump down (Y-end ~10 Torr), we isolated and decoupled the Surface Discharge Ionizer from BSC10's door, installed a 2.75" CF blank flange in its place and then "dumped" this small volume of room air into the Y-end vacuum volume. We are leaving the Y-end pumping via the Turbo overnight. Gerardo and I will leak test the 2.75" blank, followed by opening GV18, when we get in in the morning.
model restarts logged for Mon 04/Sep/2017 No restarts reported
model restarts logged for Sun 03/Sep/2017
h1boot 09:39:08 Sun 03 Sep 2017
restart of h1boot due to freeze-up.
model restarts logged for Sat 02/Sep/2017 No restarts reported
h1boot locked up due to the 208.5-day bug.
I am reminded of a kernel 2.6.34 bug whereby the system is prone to lockup after 208.5 days have elapsed. At the time of its freeze, h1boot had been running for 215 days. This bug is also most probably the reason for h1build's freeze ten days before h1boot's freeze. The dates agree with restart data shown in this alog: Link
This will all be resolved soon when we transition the front ends, boot and build machines to a later kernel.
The longest-running front ends have been up for 123 days, so there is no need to reboot them soon. The Gentoo DAQ machines are running kernel 2.6.35, which has a fix for this bug. This is evidenced by h1tw1, which has been running for 239 days, well beyond the 208.5-day onset of the problem.
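For reference, the commonly cited origin of the 208.5-day figure (an assumption here, not stated in this log) is a 64-bit multiply overflow in the x86 sched_clock path of those kernels, which caps out near 2^54 ns:

```python
# 2^54 ns expressed in days: the overflow threshold commonly cited for the
# kernel 2.6.3x sched_clock lockup after ~208.5 days of uptime.
days = 2**54 / 1e9 / 86400          # ns -> s -> days
print(round(days, 1))               # ~208.5 days
```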