WP 7271
Currently the pump is controlled by the new Beckhoff system we are prototyping, in a non-standard configuration: the valving is in the local recirculation state and servo controlled on pump station output pressure. It will run in this non-standard state for a couple of weeks while the debris from the filter change is filtered out. Please do not adjust the controller or valves at EndX.
Attached is a 6 hour plot from last night, showing flow and temperature from both the TCS chiller RS232 readbacks, and the TCS flowmeter and temperature sensors that are on/near the TCS tables. TCSY flow glitches in the flowmeter are not seen in the RS232 flow channel. Both signals are highlighted by boxes in the plot.
This morning Kyle valved the pressure build circuit back in on the CP4 Dewar by opening the valve 1/4 turn CCW. The head pressure is now 17 psig and flow is between 50-60 scfh x100. As a result the GN2 temperature dropped to below 150C, so I increased the variac from 56% to 58%.
Increased variac to 60%. GN2 temps 150-160C. Leaving as-is overnight.
Summary:
The IMC mode matching lens, IO_MB_L2, has been rotated multiple times over the last few years, as evidenced by having to move the beam dump for the reflection from its back surface. I have looked for but not found any records of such deliberate lens rotations; without this record, my concern is that the lens may also have moved along the beam path, altering the mode matching into the IMC.
The mode matching from the PSL to the IMC is done in the IO path with the lens pair IO_MB_L1 and IO_MB_L2. Since the initial install of these lenses, IO_MB_L2 has been set at three different angles with respect to the main beam. Changes along the beam path cannot be ruled out, which suggests the mode matching into the IMC should be measured.
Details:
Images:
The DAQ was restarted for:
H1EDCU_ECATC1PLC1.ini (polarizer install)
H1EDCU_ECATC1PLC3.ini (SR3 RH)
H1EDCU_GRD.ini (latest guardian channel list)
The EDCU is now only red due to h1pemmy being down (during CP4 bake-out). List is attached.
Patrick, TiVo, Dave:
Patrick modified h1ecatc1plc1 to add remote control of the MSR fiber polarizer (2017 SURF project). TiVo changed some SR3 TCS ring heater channel names.
I ran my update_beckhoff_daq_target_sdf_configuration script, which created the new monitor.req files for these systems. On h1build I restarted the SDF for C1PLC[1,3]. For the new, uninitialized channels, I ACCEPTED and MONITORED them all. Subsystem leads should un-monitor as required.
C1PLC1 snap changes:
+H1:SYS-FIBER_POL_CORR_COMMAND_REQ 1 0.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_POL1 1 0.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_POL2 1 0.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_RATE1_REQ 1 1.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_RATE2_REQ 1 1.50000000000000000000e+01 1
+H1:SYS-FIBER_POL_CORR_STEP_SIZE 1 5.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_X1_POSITION_REQ_DEG 1 0.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_X2_POSITION_REQ_DEG 1 0.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_Y1_POSITION_REQ_DEG 1 0.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_Y2_POSITION_REQ_DEG 1 0.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_Z1_POSITION_REQ_DEG 1 0.00000000000000000000e+00 1
+H1:SYS-FIBER_POL_CORR_Z2_POSITION_REQ_DEG 1 0.00000000000000000000e+00 1
C1PLC3 snap changes:
-H1:AWC-SR3_HEATER_DRV_ISET_GAIN 1 2.23999999999999985789e+01 1
-H1:AWC-SR3_HEATER_DRV_ISET_OFFSET 1 0.00000000000000000000e+00 1
+H1:AWC-SR3_HEATER_DRV_VSET_GAIN 1 3.30000000000000015543e-01 1
+H1:AWC-SR3_HEATER_DRV_VSET_OFFSET 1 0.00000000000000000000e+00 1
With this set of measurements we should be able to take power measurements from either ISCT6 or SQZT6 and figure out what goes into the OPO and what comes out of it.
At the time, 11mW was measured at the ISCT6 input coupler. The SHG launch DC diode responsivity has been adjusted to agree with the measurement, from 0.192 A/W to 0.22 A/W. The change has been accepted in the SDF. The BS reflectivity is 6.2%, so 6.2% of what SHG_LAUNCH_DC reads should equal what goes into the fiber coupler.
Green power measured at the VOPO fiber was 3.3 mW (a 30% drop from ISCT6). There is a 14% loss by the time the beam comes out to SQZT6 and hits the refl PD. Sheila believes this is due to the thin film polarizer.
The OPO refl diode calibration was fine-tuned just today (responsivity was 0.22 A/W, now 0.21 A/W). It should read 14% of what comes out of the OPO.
The measurement was taken while scanning the cavity at 1Hz.
After reading the BBPD design document more carefully I realized the DC transimpedance and responsivity are quite different for green. The calibration has been changed to 0.3 A/W responsivity and 1400 Ohm transimpedance for the OPO green refl PD. The V/W calibration here is still the same. SDF difference accepted.
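The conversions in this entry are simple enough to sketch in a few lines. The helper `pd_watts` and the variable names below are ours (illustrative only); the numbers are the ones quoted above.

```python
# Power bookkeeping for the OPO green path, using numbers quoted in this entry.

def pd_watts(voltage, responsivity_a_per_w, transimpedance_ohm):
    """Convert a DC photodiode voltage back to incident optical power (W)."""
    return voltage / (responsivity_a_per_w * transimpedance_ohm)

# OPO green refl PD after the correction above: 0.3 A/W * 1400 Ohm = 420 V/W,
# so 1 mW of green on the diode should read about 0.42 V.
v_per_w = 0.3 * 1400

# SHG launch pick-off: with 6.2% BS reflectivity, the power into the fiber
# coupler should be 6.2% of the SHG_LAUNCH_DC reading (value here illustrative).
shg_launch_dc_mw = 11.0
into_fiber_mw = 0.062 * shg_launch_dc_mw
```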
WP7502
Jeff K, Corey, TJ, Jamie, Dave:
All front end computers with up-times approaching or exceeding 208 days were rebooted. The sequence was: stop all models on the computers before rebooting (leaving PSL till last) then reboot all computers. Dolphin'ed machines waited until the last computer was rebooted before starting their models.
I had a repeat of yesterday's h1susex problem: after a reboot it lost communication with its IO Chassis. Today's sequence was:
remotely rebooted h1susex, it did not come back (no ping response)
remotely reset h1susex via IPMI management port, it booted but lost communication with IO Chassis
at the EX end station, powered h1susex down, power cycled the IO Chassis, and powered h1susex back on. This time the models started.
Despite removing h1susex from the Dolphin fabric, h1seiex and h1iscex glitched and had their models restarted. Ditto for h1oaf0 and h1lsc0 in the corner station.
Machines rebooted (as opposed to just having their models restarted) were:
h1psl0, h1seih16, h1seih23, h1seih45, h1seib1, h1seib2, h1seib3, h1sush2b, h1sush34, h1sush56, h1susb123, h1susauxh2, h1susauxh56, h1asc0, h1susauxey, h1seiex, h1iscex
Some guardian nodes stopped running as a result of these restarts. Jamie and TJ are investigating.
At roughly 10:05 AM local time, 45 of the guardian nodes went dead (list at bottom). This time was coincident with all the front end reboots. Technically this was not a crash of the guardian nodes. Instead systemd actually killed the processes because they did not check in within their 3 second watchdog timeout:
...
2018-04-24_17:06:09.417388Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: Current state method will be rerun until the connection error clears.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
2018-04-24_17:06:10.018595Z ISI_HAM5 EZCA CONNECTION ERROR. attempting to reestablish...
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: Current state method will be rerun until the connection error clears.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
2018-04-24_17:06:10.018595Z ISI_HAM5 EZCA CONNECTION ERROR. attempting to reestablish...
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: Current state method will be rerun until the connection error clears.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
2018-04-24_17:06:10.018595Z ISI_HAM5 EZCA CONNECTION ERROR. attempting to reestablish...
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: Current state method will be rerun until the connection error clears.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
2018-04-24_17:06:10.018595Z ISI_HAM5 EZCA CONNECTION ERROR. attempting to reestablish...
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: Current state method will be rerun until the connection error clears.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
2018-04-24_17:06:10.018595Z ISI_HAM5 EZCA CONNECTION ERROR. attempting to reestablish...
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: Current state method will be rerun until the connection error clears.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
2018-04-24_17:06:10.018595Z ISI_HAM5 EZCA CONNECTION ERROR. attempting to reestablish...
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.014956Z guardian@ISI_HAM5.service: Watchdog timeout (limit 3s)!
2018-04-24_17:06:10.015011Z guardian@ISI_HAM5.service: Killing process 1797 (guardian ISI_HA) with signal SIGABRT.
2018-04-24_17:06:10.015060Z guardian@ISI_HAM5.service: Killing process 2839 (guardian-worker) with signal SIGABRT.
2018-04-24_17:06:10.084021Z guardian@ISI_HAM5.service: Main process exited, code=dumped, status=6/ABRT
2018-04-24_17:06:10.084278Z guardian@ISI_HAM5.service: Unit entered failed state.
2018-04-24_17:06:10.084289Z guardian@ISI_HAM5.service: Failed with result 'watchdog'.
This is both good and bad. It's good that systemd has this watchdog facility to catch potentially dead processes, but it's bad that the guardian processes did not check in in time. The guardian loop runs at 16 Hz and checks in with the watchdog once a cycle, so missing three seconds worth of cycles is kind of a big deal. There were even logs from the main daemon process reporting EPICS connection errors right up until the point it was killed; if the daemon was writing those logs it should have also been checking in with the watchdog.
Clearly the issue is EPICS related: the only connection I am aware of between guardian and the front end models is EPICS. But all the EPICS connections from guardian to the front ends are done via ezca in the guardian-worker process, and even if that process got hamstrung it shouldn't affect the execution of the daemon.
Very confusing. I'll continue to look into the issue and see if I can reproduce it.
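For context, the systemd watchdog handshake that failed here follows the sd_notify protocol. Below is a minimal sketch, assuming WatchdogSec=3 in the guardian unit file; this is illustrative, not guardian's actual implementation.

```python
# Minimal sd_notify-style watchdog check-in, as a daemon running under a
# systemd unit with WatchdogSec=3 would do each loop cycle.
import os
import socket

def sd_notify(message: str) -> bool:
    """Send a notification datagram to systemd's NOTIFY_SOCKET, if set."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under systemd; nothing to do
    if addr.startswith("@"):          # abstract-namespace socket address
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.send(message.encode())
    return True

# In a 16 Hz daemon loop, each cycle would call:
#     sd_notify("WATCHDOG=1")
# If no check-in arrives within WatchdogSec, systemd SIGABRTs the unit --
# exactly the "Watchdog timeout (limit 3s)!" lines in the journal above.
```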
Here's the full list of nodes that died (times are service start times, not death times; I'll try to make those be the last service status change instead):
ALIGN_IFO enabled failed 2018-04-20 09:39:57-07:00
DIAG_EXC enabled failed 2018-04-20 09:39:51-07:00
DIAG_MAIN enabled failed 2018-04-20 09:40:03-07:00
DIAG_SDF enabled failed 2018-04-20 09:39:51-07:00
HPI_BS enabled failed 2018-04-20 09:39:52-07:00
HPI_HAM1 enabled failed 2018-04-20 09:39:52-07:00
HPI_HAM2 enabled failed 2018-04-20 09:39:52-07:00
HPI_HAM3 enabled failed 2018-04-20 09:39:52-07:00
HPI_HAM4 enabled failed 2018-04-20 09:39:52-07:00
HPI_HAM5 enabled failed 2018-04-20 09:39:52-07:00
HPI_HAM6 enabled failed 2018-04-20 09:39:52-07:00
HPI_ITMX enabled failed 2018-04-20 09:39:52-07:00
HPI_ITMY enabled failed 2018-04-20 09:39:52-07:00
ISI_BS_ST1 enabled failed 2018-04-20 09:39:53-07:00
ISI_BS_ST1_BLND enabled failed 2018-04-20 09:39:51-07:00
ISI_BS_ST1_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_BS_ST2 enabled failed 2018-04-20 09:39:53-07:00
ISI_BS_ST2_BLND enabled failed 2018-04-20 09:39:51-07:00
ISI_BS_ST2_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_HAM2 enabled failed 2018-04-20 09:39:53-07:00
ISI_HAM2_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_HAM3 enabled failed 2018-04-20 09:39:53-07:00
ISI_HAM3_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_HAM4 enabled failed 2018-04-20 09:39:53-07:00
ISI_HAM4_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_HAM5 enabled failed 2018-04-20 09:39:53-07:00
ISI_HAM5_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_HAM6 enabled failed 2018-04-20 09:39:53-07:00
ISI_HAM6_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_ITMX_ST1 enabled failed 2018-04-20 09:39:53-07:00
ISI_ITMX_ST1_BLND enabled failed 2018-04-20 09:39:51-07:00
ISI_ITMX_ST1_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_ITMX_ST2 enabled failed 2018-04-20 09:39:53-07:00
ISI_ITMX_ST2_BLND enabled failed 2018-04-20 09:39:51-07:00
ISI_ITMX_ST2_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_ITMY_ST1 enabled failed 2018-04-20 09:39:53-07:00
ISI_ITMY_ST1_BLND enabled failed 2018-04-20 09:39:51-07:00
ISI_ITMY_ST1_SC enabled failed 2018-04-20 09:39:51-07:00
ISI_ITMY_ST2 enabled failed 2018-04-20 09:39:53-07:00
ISI_ITMY_ST2_BLND enabled failed 2018-04-20 09:39:51-07:00
ISI_ITMY_ST2_SC enabled failed 2018-04-20 09:39:51-07:00
SEI_HAM2 enabled failed 2018-04-20 09:39:45-07:00
SEI_HAM3 enabled failed 2018-04-20 09:39:46-07:00
SEI_HAM4 enabled failed 2018-04-20 09:39:46-07:00
SUS_ETMX enabled failed 2018-04-20 09:39:46-07:00
SUS_TMSX enabled failed 2018-04-20 09:39:46-07:00
While I was restarting the dead nodes listed above, two had to be restarted a second time: ISI_ITMX_ST1 and ISI_ITMY_ST1. Both had the same "GuardDaemonError: worker exited unexpectedly, exit code: -11". I didn't think too much of it at the time because I had tried to restart large groups of nodes at once and thought that may have been the issue. They came back after another restart without any problems.
But the SUS_MC2 node just crashed 3 seconds after getting a request for ALIGNED, with the same error code as the ISI_ITMs (screenshot attached). Restarted, and it seems to be okay now.
Sheila, Nutsinee
The percentage of green power in the 00 mode had decreased to ~60% from the 74% we previously had in the optics lab (alog40594). This is likely due to the fiber swap in chamber changing the pointing (the 10 mode content became higher). Sheila tweaked the green alignment on Friday (alog41572). The 00 mode fraction calculated from transmitted power is now 70.8%, with the 20 mode being 26% of the 00 mode. The PZT calibration is still ~20 V/FSR. The data have been corrected for dark noise and PZT nonlinearity.
The total dip in reflected power, including the higher order modes, is 49% of the power off resonance. According to the VOPO final design document, the M2 mirror (output coupler for green) reflectivity is >99.9%. With an M1 reflectivity of 98% and a modulation depth of 0.094 rad (see alog41622 for the calculation), the measurement suggests a loss of at least 0.3% inside the cavity itself. The input green power into the cavity at the time of measurement was 3.3 mW. The measured power off resonance was 2.83 mW (the BBPD has a transimpedance of 2000 Ohms and a responsivity of 0.2 A/W). That's another 14% loss between the fiber and the refl PD on the SQZT6 table. The largest dip, without taking higher order modes into account, is 35% of the power off resonance.
From the refl scan we measured the higher order modes to be 14% of the power off resonance; from the trans scan we measured them to be 30% of the total power. These higher order modes are expected to come back as shot noise. The discrepancy between the refl and trans measurements could be due to the high acoustic noise in HAM6. We should have a much better scan once the VOPO is in vacuum.
The transmitted signal was taken with a Thorlabs silicon diode (SM1PD1A), scanned at 1 mHz, and monitored through a PD concentrator (D1700176 with dual PD amplifier D1200543, measured gain of 22.4). This seems to give cleaner data compared to the Thorlabs PD100A diode. The refl power was monitored with the BBPD via another PD concentrator PD monitor (D1201349).
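As a sanity check on the loss inference above, here is a sketch of the standard two-mirror impedance-matching relation for the on-resonance reflection dip. The mirror numbers are from this entry; the 0.3% loss is treated as an input rather than re-derived, and the helper name is ours.

```python
# Reflection dip of a lossy two-mirror cavity, folding the round-trip loss
# into the back-mirror reflectivity. Mirror values from this entry.
import math

def reflection_dip(R1, R2, round_trip_loss):
    """Fractional drop in reflected power on resonance."""
    r1 = math.sqrt(R1)
    r2 = math.sqrt(R2 * (1.0 - round_trip_loss))  # loss folded into M2
    r_on = (r1 - r2) / (1.0 - r1 * r2)            # amplitude reflectivity
    return 1.0 - r_on**2

# With no extra loss, the dip is set only by M1's 2% transmission vs M2:
lossless_dip = reflection_dip(0.98, 0.999, 0.0)
# Adding intra-cavity loss deepens the dip, which is how a measured dip
# depth translates into a lower bound on the loss.
lossy_dip = reflection_dip(0.98, 0.999, 0.003)
```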
When first coding the driver chassis in Beckhoff, I wrongly assumed it was a current driver like the TCS ring heaters; however, after looking at the drawings and chatting with the CIT folks I realized it is a voltage driver. So I modified the logic to reflect this, as well as changing the channel names and MEDM screen. Only the AWC library was changed; PLC3 reloaded the changes when running the script.
J. Bartlett, P. King, J. Kissel, H. Radkins, T. Vo
We've reviewed the SDF system prior to today's scheduled fast front-end reboot. We've only accepted things we can confirm are necessary because of physical changes to the IFO. Otherwise, we reverted to the safe values and/or left things as DIFFs (because the reboot will revert them).
Kissel, Vo (ISC / SUS / IMC)
- Accepted outputs and offsets ON for all SUS by putting them in the ALIGNED state (i.e. when all OUTPUT switches are ON; a few new or neglected SUS needed this, like the RMs, ZMs, OFI, OPO), because the watchdog protects the system upon reboot.
- Accepted a few new misalignment offset values (SRM, ITMs, PRM) because we've physically changed the alignment of these suspensions during the Sep-Dec 2017 corner vent, and the misaligned position has now been confirmed by a few months worth of corner station commissioning
- Set SR3 optical lever input gains to ZERO until we make the front-end changes needed to get it up and running (see LHO aLOG 41547)
- In ISC land, accepted dark offsets on AS A & B and the OMC QPD (remeasured to make stuff work in air; will likely have to be remeasured in the future anyway)
- 180 phase change in ASC OMC DC centering loops because of the tip-tilt actuator sign flip (LHO aLOG 41441)
Radkins (SEI)
- Cleared out by Hugh -- most diffs are because some chambers are locked, and/or the sensor correction configuration is OFF which is "abnormal" for observing
Bartlett, King (PSL)
- FSS and PMC are stable, so J. Bartlett & P. King accepted the DIFFs on this.
- ISS and DBB
We're OK to Go for reboot of front-ends.
TITLE: 04/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 6mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.27 μm/s
QUICK SUMMARY:
Thanks to Sheila for the alert. I suspect something has changed and is pushing on the platform or preventing motion. I'll inspect and then do some range of motion tests to help find the interference and confirm freedom.
FRS 10483
False alarm--WHAM5 HEPI has been locked since the venting on 5 April. Closing the ticket; no issue. HEPI remains locked with relatively small position and alignment offsets: residuals are 20 um in Y & Z and -9 urad in RZ; all other DOF residuals are much less. Guardian: the SEI chamber manager is paused and the ISI guardian is in High_Isolated. The ISI isolates no problem. If tripping is a problem and isolation isn't required, it may be best to un-pause the manager and run at ISI_Damped_HEPI_Offline.
WP 7491: "Update PLC1 and the system manager on h1ecatc1 to provide remote control of the motorized polarization controller in the MSR through its RS232 interface."
Work on h1ecatc1 is complete. Channels still need to be added to the DAQ. I have an MEDM screen but will ask commissioners where they would like it linked from on the sitemap.
Replaced /opt/rtcds/userapps/release/als/common/medm/ALS_CUST_POLCORR.adl with the new screen. This is linked from the ALS overview from the box labeled "POLARIZATION CORRECTION BOX".
Corey, TJ, Thomas Vo, Terry, Dan Brown, Alexei, Sheila
Today we swapped the lenses in the squeezer path for a +250 mm ROC lens (1st lens) and a +350 mm lens (translation stage); both were lenses from the Enhanced LIGO squeezer (E1000077) which Corey cleaned this morning.
When Terry and I swapped the lenses the beam became very misaligned through the squeezer Faraday, so we spent some time re-aligning and then aligning through the apertures we had placed on HAM6. To do this we made mechanical adjustments to ZM1 pitch.
After the PSL crew was done for the day, Thomas, Dan, and I co-aligned the squeezer beam to the interferometer beam very roughly. (NB: HAM5 HPI has been tripping all evening, so we will need to redo this with HAM5 isolated.) We were able to close the centering and OMC QPD loops and get a mode scan; results are still being interpreted.
We have removed the apertures we placed in HAM6 this afternoon.
Here are two of the OMC scans taken using the OPO beam last night. The second scan was taken with the additional apertures removed. If we just take the ratio of the 2nd/0th order peaks, it suggests we have around 3-5% mismatch; scan 2 looks slightly better. The model was predicting around 2%, so this seems promising. However, the high 1st and 3rd order mode content relative to the 2nd and 4th makes us think we could be clipping the beam somewhere. The likely culprit is the OPO Faraday, as with the new lenses its aperture is now only 3x larger than the beam.
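The peak-ratio estimate above can be sketched as follows; the helper and numbers are illustrative only, not the actual analysis code (which also handles dark noise and PZT nonlinearity).

```python
# Rough mode-mismatch estimate from an OMC scan: if p0 and p2 are the
# heights of the 0th- and 2nd-order peaks, the power mismatch fraction
# is approximately p2 / (p0 + p2).

def mismatch_from_peaks(p0, p2):
    """Fractional mode mismatch from 0th- and 2nd-order scan peak heights."""
    return p2 / (p0 + p2)

# Illustrative: a 2nd-order peak ~4% the size of the 00 peak implies
# roughly 4% mismatch, in the 3-5% range quoted above.
estimate = mismatch_from_peaks(0.96, 0.04)
```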

Calibration of the field meter does not need knowledge of the input capacitance. With the calibration plates, the electric field on the sense plate is simply E(cal) = V(cal)/d, where d is the calibration-to-sense-plate separation. If you want to improve the accuracy you will need to account for the thickness of the copper disk on the sense plate and a few percent error due to the fringing field. The current sensitivity curves are pretty close to the ones measured in the prototype. How did you handle the factor of 2 due to the two plates on each coordinate, with the output being the difference?
We were a little confused about how to calibrate the EFM. It's not as easy a problem as it first seems.

Calibration Plate Voltage to Electric Field TF

V_cal refers to the potential difference between the calibration plate and ground. Ground is connected to the body of the EFM. The sensor plate is kept isolated and should be at voltage V_sense = V_cal * d2/(d1 + d2), where d1 is the distance between the cal plate and sensor plate, and d2 is the distance between the sensor plate and the body. If we assume that the electric field E_cal is constant over the entire EFM, then I think we ought to be using the total distance d = d1 + d2 between the calibration plate and body for E_cal = V_cal/d. d1 = 1/2 inch = 1.27 cm and d2 = 5/8 inch = 1.59 cm, so d ~ 2.86 cm and E_cal/V_cal = 1/d ~ 35.0 (V/m)/V using this method. However, we became concerned about the geometry of the EFM affecting this result. There is a copper disk which connects the sensor plate to the sensor pin, and there are a number of large screws between the sensor plate and the body. We decided to compute an "effective distance" using the capacitances we measured between the cal and sense plates (~11 pF) and between the sense plate and the body (~19 pF), via E = Q/(2 A e0), where A is the area of the plates (~0.01 m^2), e0 is the vacuum permittivity, and Q is the charge on the cal plate. Q = C V, so we can recover E/V = C/(2 A e0) = 1/d, giving an effective distance d = (2 A e0)/C, where C is the total capacitance between the cal plate and the body (~7 pF). Using this method, E/V ~ 38.9 (V/m)/V, not much different from our result from 1/d. This is the number we used to calibrate from V_cal to E_cal. I don't know what value was used for the initial prototype.

Differential Amplifier Factor of Two

We did not account for this. We did not understand that the EFM body was grounded, so that the body absorbs the E_cal field by inducing charge on its near face. In the presence of a large external electric field both sense plates will have voltage induced, so we will get twice the response from the EFM differential amplifier circuit. We measured a TF from V_cal to V_out, where V_out is the voltage output of the EFM differential amplifier circuit, and got V_out/V_cal ~ 0.8 from 5 kHz down. This should be multiplied by 2 for the V_out/V_external TF.

Corrected Plots
Plot 1 is the newly calibrated ambient electric field ASDs recorded by the EFM. Plot 2 is the V_out/V_cal TF.
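The two E/V estimates in this entry follow directly from the numbers quoted; a quick sketch (variable names are ours):

```python
# Two estimates of the cal-plate-voltage to electric-field factor, using
# the numbers quoted in this entry.
e0 = 8.854e-12    # vacuum permittivity, F/m
A = 0.01          # plate area, m^2

# Geometric estimate: d = d1 + d2 = 1.27 cm + 1.59 cm = 2.86 cm
E_over_V_geom = 1.0 / (0.0127 + 0.0159)   # ~35.0 (V/m)/V

# "Effective distance" estimate from the total cal-plate-to-body
# capacitance (~7 pF): d_eff = 2*A*e0/C, so E/V = C/(2*A*e0).
E_over_V_cap = 7e-12 / (2 * A * e0)       # ~39.5 (V/m)/V
```

The capacitance estimate lands near 39.5 (V/m)/V with a round 7 pF; the ~38.9 quoted presumably reflects the exact measured capacitance.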
We (the EFM calibration team) never understood that the sensor plates are virtually grounded by the op-amp inside the EFM until we saw Figure 2 of T1700103. This is why we kept insisting that E = Vcal/d should use d = the distance between the calibration plate and the EFM body: we thought that the sensor plate was a floating conductor. I fixed our calibration to account for the grounded sensor plates. If I use E = Vcal/d, where d is the distance between the cal and sensor plates (d ~ 1/2 inch ~ 1.27 cm), I get. If I account for the copper plate and fringing fields by using our measured capacitance between the calibration plate and sensor plate (C ~ 14.7 pF), I get
(area A of the plates is ~0.01 m^2). This is the E/V calibration I used for the plots below. Also included was our cal-volts to EFM-output-volts measured calibration value of 0.8 V/V. This was multiplied by two to account for the differential response of the EFM to external electric fields, and inverted to give
. Unfortunately, with this corrected calibration our prototype EFM spectrum is worse than we originally thought. In fact, it's worse than your final prototype spectrum from T1700569 by about a factor of two. I am not sure why this should be the case. Rich's LT Spice model has an output voltage noise floor of about 200 nV/rtHz from 200 Hz upward. In your Figure 2 of T1700569, you report a Vn of 110 nV/rtHz, so maybe this result is correct.
The calibration is simpler than you make it. With the cube grounded and the calibration plates mounted on the sense plate, the electric field induced on the sense plate is E = V(cal)/d (with small correction for fringing and the copper plug). If you want to make a model for the calibration to predict the sensitivity that is more complicated and requires knowledge of the capacitances and the potentials between the sense plate and the cube.
Craig, you refer to T1700103 figure 2 to understand the virtual ground. This is not the correct schematic for the implementation of the EFM that was recently built. Each EFM input is simply 10^12 ohms to ground (in parallel with the sense plate capacitance). There is no virtual ground provided actively by the operational amplifier.
Final note on the EFM calibration. Conclusions: after a discussion with Rai and Rich we determined the correct calibration is

    E(cal) = (V(cal) - V(sp)) / d(cal-sp)

where V(cal) is the driven voltage on the cal plate, V(sp) is the induced voltage on the sense plate, and d(cal-sp) is the distance between the cal and sense plates. We need to know the voltage induced on the sense plate. To do this I simulated the circuit in the first picture. Again, we measured the capacitance between the cal and sense plates to be 14.7 pF, while the capacitance between the sense plate and body was 19 pF. I found the ratio V(sp)/V(cal) to be constant above 10 mHz. Solving for E(cal) gives the result above. The final plot is the correctly calibrated ambient electric field spectrum.
I am very sorry for having generated all this confusion. The sense plate is not a virtual ground; that was the case in earlier circuits. In this circuit the proper formulation for the electric field on the sense plate from the calibration plate is

             V(cal) - V(sp)            V(cal) C(cal-sp)
    E(cal) = --------------  =  ------------------------------------
               d(cal-sp)        d(cal-sp) (C(cal-sp) + C(sp-allelse))

So the calibration field is smaller than in the case of the sense plate held at ground potential, which makes the field meter more sensitive. Which is what you found. The error is purely mine and not Rich Abbott's or any of the people in the electronics group. It comes from my not thinking about the calibration again after the circuit was changed from one type to another in my lab.
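Plugging the measured values quoted earlier in this thread (14.7 pF cal-to-sense, 19 pF sense-to-everything-else, 1.27 cm gap) into the formula above gives a quick numeric check; this is a sketch, not the official calibration code.

```python
# Evaluate E(cal)/V(cal) per the formula above, with the measured values
# quoted earlier in this thread.
C_cal_sp = 14.7e-12    # cal plate to sense plate, F
C_sp_else = 19e-12     # sense plate to everything else, F
d_cal_sp = 0.0127      # cal-to-sense gap (1/2 inch), m

# Capacitive reduction factor relative to a grounded sense plate (~0.44):
capacitive_factor = C_cal_sp / (C_cal_sp + C_sp_else)

# Resulting field per volt on the cal plate, ~34 (V/m)/V:
E_over_V = capacitive_factor / d_cal_sp
```

The ~0.44 factor is the reduction relative to the grounded-sense-plate case, consistent with the field meter being more sensitive than first assumed.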
Armed with the knowledge about the OM3 sign flip, I was able to close the angular loops on the AS WFS DC centering as well as the OMC ASC loops at the same time with the IFO beam. I had to go pretty far with the alignment sliders on OM1 and OM2 to get the IFO beam back on the QPDs, but this seems to let the control loops converge, and the alignment offsets on OM1 and OM2 are closer to their zero positions. However, I was not able to turn on the integrators; this configuration might still be good enough to do an OMC scan.
The squeezer crew could try to walk the OPO squeezer beam from last night towards this new alignment with the ZMs and try scanning from here, maybe it'll be less noisy with the angular loops closed.
One thing that is a little odd is that there seems to be an oscillation in the power of AS_C_SUM and AS_A/B_SUM; however, none of the optics' suspensions seem to be moving excessively, and all but one ISI were in the isolated state. HAM5 was the only one in "ISI Damped HEPI Offline", but when I tried to go to "Isolated" the HEPI ACT limit watchdog tripped, so I left it alone. This oscillation occurs whether the AS WFS DC centering loops are open or closed, so it might be coming from upstream of HAM6. AS_A_DC_PIT seems to be the noisiest of the WFS signals, but I don't know where the source is.
HAM5 HEPI is/was locked--that is why.