WP7502
Jeff K, Corey, TJ, Jamie, Dave:
All front-end computers with up-times approaching or exceeding 208 days were rebooted. The sequence was: stop all models on the computers (leaving PSL till last), then reboot all computers. Dolphin'ed machines waited until the last computer was rebooted before starting their models; a rough sketch of the sequence is below.
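For reference, this is only a rough sketch of the kind of per-host sequence described above, assuming the standard rtcds wrapper with "stop"/"start" subcommands; the exact commands, options, and per-host model lists used today are not recorded here, and the model names are examples only.

```bash
# Rough sketch (assumed rtcds wrapper usage, example model names only).

# 1. On each front end, stop all of its models (PSL left till last).
rtcds stop h1iopsusex h1susetmx h1sustmsx

# 2. Reboot the computer.
sudo reboot

# 3. After the last Dolphin'ed computer is back up, start the models on each host.
rtcds start h1iopsusex h1susetmx h1sustmsx
```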
I had a repeat of yesterday's h1susex problem: after a reboot it lost communication with its IO Chassis. Today's sequence was:
remotely rebooted h1susex, it did not come back (no ping response)
remotely reset h1susex via IPMI management port, it booted but lost communication with IO Chassis
at the EX end station, powered h1susex down, power cycled the IO Chassis, and powered h1susex back on. This time the models started.
Despite removing h1susex from the Dolphin fabric, h1seiex and h1iscex glitched and had their models restarted. Ditto for h1oaf0 and h1lsc0 in the corner station.
Machines rebooted (as opposed to just having their models restarted) were:
h1psl0, h1seih16, h1seih23, h1seih45, h1seib1, h1seib2, h1seib3, h1sush2b, h1sush34, h1sush56, h1susb123, h1susauxh2, h1susauxh56, h1asc0, h1susauxey, h1seiex, h1iscex
Some guardian nodes stopped running as a result of these restarts. Jamie and TJ are investigating.
Sheila, Nutsinee
The percentage of green power in the 00 mode had decreased to ~60% from the 74% we previously had in the optics lab (alog 40594). This is likely due to the fiber swap in chamber causing the pointing to change (the 10 mode became higher). Sheila tweaked the green alignment on Friday (alog 41572). The 00 mode calculated from transmitted power is now 70.8%, with the 20 mode being 26% of the 00 mode. The PZT calibration is still ~20 V/FSR. The data has been corrected for dark noise and PZT nonlinearity.
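A minimal sketch (not the analysis code behind the numbers above) of how the 00-mode fraction and 20/00 ratio can be tallied from mode-scan peak heights after dark-noise subtraction; all peak heights and the dark level below are hypothetical placeholders, not the measured values.

```python
# Hypothetical peak heights, illustrating the bookkeeping only.
dark = 0.02                                    # dark-noise level, same units as the peaks
peaks = {"00": 1.00, "10": 0.05, "20": 0.28}   # hypothetical transmitted peak heights

corrected = {mode: height - dark for mode, height in peaks.items()}
total = sum(corrected.values())

frac00 = corrected["00"] / total
ratio_20_00 = corrected["20"] / corrected["00"]
print(f"00-mode fraction ~ {100 * frac00:.1f} %, 20/00 ratio ~ {100 * ratio_20_00:.0f} %")
```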
The total dips in reflected power, including the higher order modes, are 49% of the power off resonance. According to the VOPO final design document, the M2 mirror (output coupler for green) reflectivity is >99.9%. With an M1 reflectivity of 98% and a modulation depth of 0.094 rad (see alog 41622 for the calculation), the measurement suggests a loss of at least 0.3% inside the cavity itself. The input green power into the cavity at the time of measurement was 3.3 mW. The measured power off resonance was 2.83 mW (the BBPD has a transimpedance of 2000 Ohms and a responsivity of 0.2 A/W). That's another 14% loss between the fiber and the refl PD on the SQZT6 table. The largest dip, without taking higher order modes into account, is 35% of the power off resonance.
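A minimal sketch (not the original calculation) of the two-mirror impedance-matching relation that underlies the loss estimate above, plus the BBPD voltage-to-optical-power conversion; the loss values scanned over and the example PD voltage are illustrative.

```python
import numpy as np

R1, R2 = 0.98, 0.999            # input and output coupler power reflectivities quoted above

def refl_dip(loss):
    """Fractional dip in the reflected carrier power on resonance, for round-trip loss 'loss'."""
    r1 = np.sqrt(R1)
    r2 = np.sqrt(R2 * (1.0 - loss))
    refl = ((r1 - r2) / (1.0 - r1 * r2)) ** 2
    return 1.0 - refl

for loss in (0.0, 0.001, 0.003, 0.005):
    print(f"loss = {100 * loss:.1f}% -> dip = {100 * refl_dip(loss):.0f}% of off-resonance power")

# BBPD: optical power = voltage / (transimpedance * responsivity)
v_pd = 1.13                      # V, illustrative off-resonance PD voltage
p_opt = v_pd / (2000.0 * 0.2)    # ~2.8 mW
print(f"off-resonance power ~ {1e3 * p_opt:.2f} mW")
```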
From the refl scan we measured the higher order modes to be 14% of the power off resonance; from the trans scan we measured the higher order modes to be 30% of the total power. These higher order modes are expected to come back as shot noise. This discrepancy between the refl and trans measurements could be due to the high acoustic noise in HAM6. We should have a much better scan once the VOPO is in vacuum.
The transmitted signal was taken with a Thorlabs silicon diode (SM1PD1A), scanned at 1 mHz, and monitored through a PD concentrator (D1700176 with dual PD amplifier D1200543, measured gain of 22.4). This seems to give cleaner data compared to the Thorlabs PD100A diode. The refl power was monitored with the BBPD via another PD concentrator PD monitor (D1201349).
When first coding the driver chassis in Beckhoff, I wrongly assumed it was a current driver like the TCS ring heaters; however, after looking at the drawings and chatting with the CIT folks I realized it was a voltage driver. So I modified the logic to reflect this, as well as changing the channel names and MEDM screen. Only the AWC library was changed; PLC3 reloaded the changes when the script was run.
J. Bartlett, P. King, J. Kissel, H. Radkins, T. Vo

We've reviewed the SDF system prior to today's scheduled fast front-end reboot. We've only accepted things we can confirm we know are necessary because of physical changes to the IFO. Otherwise, we reverted to the safe values and/or left things as DIFFs (because the reboot will revert).

Kissel, Vo (ISC / SUS / IMC)
- Accepted outputs and offsets on for all SUS by putting them in the ALIGNED state (i.e. when all OUTPUT switches are ON; a few new or neglected SUS needed this, like RMs, ZMs, OFI OPO), because the watchdog protects the system upon reboot.
- Accepted a few new misalignment offset values (SRM, ITMs, PRM) because we've physically changed the alignment of these suspensions during the Sep-Dec 2017 corner vent, and the misaligned position has now been confirmed by a few months' worth of corner station commissioning.
- SR3 optical lever input gains to ZERO until we make the front-end changes needed to get it up and running (see LHO aLOG 41547).
- In ISC land, accepted dark offsets on AS A & B and the OMC QPD (remeasured to make stuff work in air, will likely have to be remeasured in the future anyways).
- 180 deg phase change in the ASC OMC DC centering loops because of the tip-tilt actuator sign flip (LHO aLOG 41441).

Radkins (SEI)
- Cleared out by Hugh -- most diffs are because some chambers are locked, and/or the sensor correction configuration is OFF, which is "abnormal" for observing.

Bartlett, King (PSL)
- FSS and PMC are stable, so J. Bartlett & P. King accepted the DIFFs on these.
- ISS and DBB

We're OK to go for reboot of front-ends.
TITLE: 04/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 6mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.27 μm/s
QUICK SUMMARY:
Thanks to Sheila for the alert. I suspect something has changed and is pushing on the platform or preventing motion. I'll inspect and then do some range of motion tests to help find the interference and confirm freedom.
FRS 10483
False alarm--WHAM5 HEPI has been locked since the venting on 5 April. Closing ticket, no issue. HEPI remains locked with relatively small position and alignment offsets: residuals are 20 um in Y & Z and -9 urad in RZ; all other DOF residuals are much less. Guardian: the SEI chamber manager is paused and the ISI guardian is in High_Isolated. The ISI isolates with no problem. If tripping is a problem and isolation isn't required, it may be best to un-pause the manager and run at ISI_Damped_HEPI_Offline.
WP 7491: "Update PLC1 and the system manager on h1ecatc1 to provide remote control of the motorized polarization controller in the MSR through its RS232 interface."
Work on h1ecatc1 is complete. Channels still need to be added to the DAQ. I have an MEDM screen but will ask commissioners where they would like it to be linked from on the sitemap.
Replaced /opt/rtcds/userapps/release/als/common/medm/ALS_CUST_POLCORR.adl with the new screen. This is linked from the ALS overview from the box labeled "POLARIZATION CORRECTION BOX".
Quick conclusion: The 80MHz EOM calibration is 0.98 rad/V. We are currently modulating the pump light at 0.094 rad.
--------------------------------------------------
The 20 dB attenuator at the RF power output to the 80 MHz EOM (EO-80M3-NIR) was temporarily taken off to let the sidebands show higher on the OPO cavity scan. 12.3 dBm (0.92 Vrms) was measured going into the patch panel. Using sqrt(sideband peak / carrier peak) = J1(ß)/J0(ß) gives a modulation depth of 0.9 rad for 12.3 dBm input power, or 0.98 rad/V.
The RF power going through the 20 dB attenuator measured -7.84 dBm (0.0906 Vrms). This gives a modulation depth of 0.094 rad, which is how much we're modulating the pump light currently.
The spec I found in the quote claimed ~16 dBm (1.41 Vrms) is required for 1 rad of modulation at 532 nm, which is off by about 3 dBm from what we measure.
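A minimal sketch (not the script used here) of the calculation described above: solve sqrt(sideband peak / carrier peak) = J1(ß)/J0(ß) for the modulation depth, and convert the RF dBm values to Vrms across 50 Ohms to get rad/V. The peak-height ratio below is a hypothetical placeholder; the entry quotes only the result (~0.9 rad at 12.3 dBm).

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def dbm_to_vrms(dbm, z0=50.0):
    """Convert RF power in dBm to rms voltage across z0 ohms."""
    p_watts = 1e-3 * 10 ** (dbm / 10.0)
    return np.sqrt(p_watts * z0)

def mod_depth(peak_ratio_sqrt):
    """Solve J1(b)/J0(b) = sqrt(sideband/carrier) for b."""
    return brentq(lambda b: jv(1, b) / jv(0, b) - peak_ratio_sqrt, 1e-6, 2.0)

ratio = 0.50                              # hypothetical sqrt(sideband/carrier) from the scan peaks
beta = mod_depth(ratio)                   # ~0.9 rad for the unattenuated drive
v_unattenuated = dbm_to_vrms(12.3)        # ~0.92 Vrms
cal = beta / v_unattenuated               # ~0.98 rad/V
v_attenuated = dbm_to_vrms(-7.84)         # ~0.0906 Vrms through the 20 dB attenuator
print(f"beta = {beta:.2f} rad, cal = {cal:.2f} rad/V, "
      f"operating depth ~ {cal * v_attenuated:.3f} rad")
```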
Corey, TJ, Thomas Vo, Terry, Dan Brown, Alexei, Sheila
Today we swapped the lenses in the squeezer path for a ROC +250 mm lens (1st lens) and a +350 mm lens (on the translation stage); both were lenses from the enhanced LIGO squeezer (E1000077), which Corey cleaned this morning.
When Terry and I swapped the lenses the beam became very misaligned through the squeezer Faraday, so we spent some time re-aligning and then aligning through the apertures we had placed on HAM6. To do this we made mechanical adjustments to ZM1 pitch.
After the PSL crew was done for the day, Thomas, Dan, and I co-aligned the squeezer beam to the interferometer beam very roughly. (NB: HAM5 HPI has been tripping all evening, so we will need to redo this with HAM5 isolated.) We were able to close the centering and OMC QPD loops and get a mode scan; results are still being interpreted.
We have removed the apertures we placed in HAM6 this afternoon.
Here are two of the OMC scans taken using the OPO beam last night. The second scan was with the additional apertures removed. If we just take the ratio of the 2nd/0th order peaks, it suggests we have around 3-5% mismatch; scan 2 looks slightly better. The model was predicting around 2%, so this seems promising. However, the high 1st and 3rd order mode content relative to the 2nd and 4th makes us think we could be clipping the beam somewhere. The culprit is likely to be the OPO Faraday, as the new lenses mean the aperture is only 3x larger than the beam now.
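A minimal sketch (assumed bookkeeping, not the analysis behind the numbers above) of the peak-ratio mismatch estimate; the peak heights are hypothetical placeholders read off a scan by hand.

```python
# Hypothetical OMC-scan peak heights, illustrating the ratio only.
p0 = 1.00   # TEM00 peak height (normalized, hypothetical)
p2 = 0.04   # 2nd-order mode peak height (hypothetical)
print(f"2nd/0th peak ratio ~ {100 * p2 / p0:.1f} % mismatch")
```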
The turbo pump on CP4 tripped off this afternoon. As found, the turbo controller was de-energized and the scroll backing pump had isolated itself and remained running. The power switch for the turbo controller doubles as a circuit breaker and was found in the OFF position. This switch only has two states, ON and OFF, as opposed to the more common ON, OFF and "TRIPPED" states ->
None of the power cords appeared to be disturbed, and my conclusion is that the turbo had likely tripped due to over-temperature. This pump is surrounded by hot, baking surfaces and has no fan, but is instead cooled by a local chiller circulating 30C water through its electric motor. For weeks leading up to today, the relevant temperatures have been unchanging and the turbo has been 80C at its inlet flange and 45C - 48C internally (as per the thermocouple read by the controller). I think that the internal "trip" threshold is 50C. Ambient conditions are getting warmer ->
I was able to restart the turbo and reach full RPM while the backing pump remained isolated. Once at full RPM, I valved in the backing pump. Everything seems normal. As a precaution, I added a small fan and folded back some of the aluminum foil to expose the turbo+RGA hardware and allow some minimal convection cooling.
This morning I valved in the "pressure build circuit" on the CP4 Dewar (1/2 turn CCW). The Dewar head pressure is now at 17 psig (just a hair below 16 psig on the other gauge) and GN2 flow is back up to ~50 scfhx100 (fluctuates between 40-55), up from 12 scfhx100 this morning. More flow means decreased regen temperature, and therefore I also increased the variac today in increments: 58%, then 60%. Attached is an 8 hr plot of the GN2 regen temperature, both inlet and exhaust.
Leaving pressure build circuit valved in overnight. Will valve out tomorrow morning before LN2 truck delivery.
I watched the GN2 flow surge up to 80 scfhx100, which is too high, so I closed the pressure build valve and also lowered the variac to 56% so we don't get alarms all night. Kyle's adjustment on the economizer valve may still be stabilizing and increasing head pressure, and tomorrow the Dewar is scheduled for a refill, which will also increase flow.
Note that there is a 15 psig pressure relief valve just downstream of the GN2 heater.
14:46 UTC Mark to end Y to retrieve engine hoist for taking doors off BSC chambers
14:51 UTC Chris S. to beer garden to wrap BSC chambers in prep for vent
14:52 UTC Hugh to end Y to top off HEPI fluid
15:24 UTC Hugh leaving end Y
15:29 UTC APS through gate
Chandra to mid Y and end Y
15:46 UTC Reset tripped Beckhoff laser safety interlock
15:47 UTC Nutsinee to HAM6
16:00 UTC Meeting in CR
16:26 UTC Karen to end Y
16:28 UTC Mark taking HEPI fluid barrel to end X
16:49 UTC Hugh to HAM6
16:49 UTC Filiberto to end X to unlock door
h1boot server was down, restarted
17:11 UTC APS through gate
17:11 UTC Vacuum group restarting mid X cold cathode gauges
17:16 UTC TJ to LVEA to get equipment then to optics lab
17:25 UTC Corey to LVEA to get flashlights
17:51 UTC Chandra WP 7501
17:51 UTC Peter to H2 and then H1 PSL enclosure
17:54 UTC Mark moving cleanroom from mid Y to high bay
17:58 UTC Jason to PSL enclosure
18:01 UTC Thomas and Georgia to squeezer bay to take measurements
18:04 UTC Hugh back
18:13 UTC Karen back
18:13 UTC Nutsinee back
18:17 UTC Hugh to HAM6
18:18 UTC Thomas and Georgia back
18:48 UTC TJ back
19:03 UTC Dave to end X to reboot SUS
19:09 UTC Hugh back
19:28 UTC Dave back
19:44 UTC Dave to end X to reboot SUS IO chassis
19:51 UTC Filiberto to LVEA to look for RF cable
20:00 UTC Corey moving nitrogen tank from cleaning area to optics lab
20:07 UTC Jeff B. to LVEA and optics lab
20:10 UTC APS through gate
20:13 UTC Dave back
20:17 UTC TJ to optics lab
20:19 UTC Karen to mid Y
20:36 UTC Hugh to end Y to reset HV and mechanically adjust the trip level of the HEPI fluid reservoir
20:38 UTC Nutsinee to HAM6 to take measurements
20:48 UTC Filiberto to mid X to check on tripped vacuum gauges
20:49 UTC Karen leaving mid Y
21:06 UTC Peter and Jason out for lunch
21:38 UTC TJ back
21:45 UTC TJ to optics lab and then LVEA
21:51 UTC Tyler and Mark moving engine hoist and cleanroom into LVEA through high bay
21:56 UTC TJ back
22:11 UTC Peter and Jason to H1 PSL enclosure
22:11 UTC Corey done in optics lab for the day
22:18 UTC Sheila to HAM6
22:24 UTC Terry to HAM6
22:26 UTC Jeff B. done
The supply toggles were still 'On', so this was in response to a VE glitch.
The EY vacuum rack was rebooted last Tuesday for a newly installed gauge, which caused the high voltage to trip off because the HV is interlocked with the PT-425 pressure gauge.
Georgia, Robert
This entry gives results from the test discussed by Georgia here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=41559
Sometimes during PEM injections, we sanity check that external electric fields (like those from a lightning strike) do not significantly affect DARM by sneaking into the chamber through a viewport. A repeat of these injections allowed for a comparison of the field measured by the EFM to the test mass motion that has been induced in the past by similar injections. In the past we only injected at a single frequency in the bucket, because the integration time needed to see the signal in DARM was long.
We repeated the injection by placing an insulated plate over the illuminator port of the EX chamber, and driving at 211 Hz with a voltage of about +/- 11 V relative to the chamber. Similar injections have produced an rms DARM signal as high as 1.5e-21 m in the past, though this is variable, and sometimes we can’t integrate long enough to see the injection in DARM.
The figure shows an EFM spectrum for this injection and DARM for a similar injection in the past. The calibration from the log referenced above gives an rms field at 211 Hz of about 1e-5 V/m at the EFM. Assuming similar conditions to those in the past, we get a coupling of about 1.6e-16 meters of DARM motion per V/m measured by the EFM, for this injection configuration. The SNR of the EFM signal for the injection appeared to be nearly ten times greater than the DARM SNR had been during similar injections in the past. Shielding may differ for different injection points, but at least for this port injection, the EFM appears to be more sensitive than DARM.
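A minimal sketch (assumed bookkeeping, not the authors' analysis) of how the coupling figure above follows from the quoted quantities; the DARM rms used is an illustrative value consistent with the ~1.6e-16 coupling (the entry quotes rms signals as high as ~1.5e-21 m).

```python
e_rms = 1e-5          # V/m rms at the EFM during the 211 Hz injection
darm_rms = 1.6e-21    # m rms DARM motion from similar past injections (illustrative)
coupling = darm_rms / e_rms
print(f"coupling ~ {coupling:.1e} m of DARM per (V/m) at the EFM")
```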
I calibrated our EFM spectrum into DARM, this time using Robert and Georgia's electric-field-to-meters transfer function number. I assumed that

x(f) = K * E(f) / f^2

due to the test mass suspensions, where K is some constant. From Robert's measurement (the ~1.6e-16 m per (V/m) coupling at 211 Hz above) I found K. Then, calibrating the EFM voltage output noise V_EFM(f) into displacement noise:

x(f) = K * V_EFM(f) / (R * f^2),

where R is the EFM volts-per-(V/m) calibration from alog 41591. The estimated displacement noise is plotted below. Think of this as an upper bound for the ambient electric field noise, since we are not sure our EFM noise floor below 100 Hz is not sensor noise, and this is literally a single-point measurement of K.
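A minimal sketch (assumed bookkeeping, not the author's script) of the projection above: propagate an EFM voltage spectrum into test-mass displacement using the single-point coupling and the assumed 1/f^2 suspension response. The EFM responsivity R_efm and the example spectrum values are hypothetical placeholders.

```python
import numpy as np

f = np.logspace(1, 3, 200)                 # Hz
v_efm = 1e-6 * np.ones_like(f)             # V/rtHz, hypothetical EFM output noise
R_efm = 0.1                                # V per (V/m), hypothetical EFM calibration (alog 41591)

coupling_211 = 1.6e-16                     # m per (V/m), measured at 211 Hz
K = coupling_211 * 211.0**2                # m*Hz^2 per (V/m), assuming x = K*E/f^2

e_field = v_efm / R_efm                    # (V/m)/rtHz
x = K * e_field / f**2                     # m/rtHz, estimated displacement noise
print(f"displacement at 100 Hz ~ {np.interp(100, f, x):.2e} m/rtHz")
```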
Armed with the knowledge about the OM3 sign flip, I was able to close the angular loops on the AS WFS DC centering as well as the OMC ASC loops at the same time with the IFO beam. I had to go pretty far with the alignment sliders on OM1 and OM2 to get the IFO beam back on the QPDs, but this seems to let the control loops converge, and the alignment offsets are closer to their zero position on OM1 and OM2. However, I was not able to turn on the integrators; still, this configuration might be good enough to do an OMC scan.
The squeezer crew could try to walk the OPO squeezer beam from last night towards this new alignment with the ZMs and try scanning from here, maybe it'll be less noisy with the angular loops closed.
One thing that is a little odd is that there seems to be an oscillation in the power of AS_C_SUM and AS_A/B_SUM; however, none of the optics' suspensions seem to be moving excessively, and all but one ISI were in the isolated state. HAM5 was the only one in "ISI Damped HEPI Offline", but when I tried to go to "Isolated" the HEPI ACT limit watchdog tripped, so I left it alone. This oscillation occurs both when the AS WFS DC centering loops are open and closed, so it might be coming from further upstream of HAM6. In particular, AS_A_DC_PIT seems to be the noisiest of the WFS signals, but I don't know where the source is.
HAM5 HEPI is/was locked--that is why.
guardian processes killed for watchdog timeout when front ends were rebooted
At roughly 10:05 AM local time, 45 of the guardian nodes went dead (list at bottom). This time was coincident with all the front end reboots. Technically this was not a crash of the guardian nodes; instead, systemd actually killed the processes because they did not check in within their 3-second watchdog timeout:
This is both good and bad. It's good that systemd has this watchdog facility to catch potentially dead processes. But it's bad that the guardian processes did not check in in time. The guardian loop runs at 16 Hz, and it checks in with the watchdog once a cycle, so missing three seconds of cycles is kind of a big deal. There were even logs, from the main daemon process, reporting EPICS connection errors right up until the point it was killed. If the daemon was reporting those logs it should have also been checking in with the watchdog.
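For illustration, a minimal sketch (not guardian code) of the systemd watchdog check-in pattern described above: a service started with WatchdogSec=3 must send "WATCHDOG=1" to the socket named in $NOTIFY_SOCKET more often than every 3 seconds, or systemd kills it. The loop rate and unit settings here are illustrative.

```python
import os
import socket
import time

def sd_notify(msg: bytes) -> None:
    """Send a notification datagram to systemd's notify socket, if present."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return
    if addr.startswith("@"):          # abstract namespace socket
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.connect(addr)
        s.send(msg)

sd_notify(b"READY=1")
while True:
    # ... one 16 Hz cycle of real work would go here ...
    sd_notify(b"WATCHDOG=1")          # check in once per cycle
    time.sleep(1.0 / 16)              # if this loop stalls for > 3 s, systemd kills the service
```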
Clearly the issue is EPICS related. The only connection that I am aware of between guardian and the front end models is EPICS. But all the EPICS connections from guardian to the front ends are done via ezca in the guardian-worker process, and even if that process got hamstrung it shouldn't affect the execution of the daemon.
Very confusing. I'll continue to look in to the issue and see if I can reproduce.
Here's the full list of nodes that died (times are service start times, not death times; I'll try to make those be the last service status change instead):
While I was restarting the dead nodes listed above, two had to be restarted a second time: ISI_ITMX_ST1 and ISI_ITMY_ST1. Both had the same "GuardDaemonError: worker exited unexpectedly, exit code: -11" (exit code -11 means the worker was killed by signal 11, SIGSEGV). I didn't think too much of it at the time because I had tried to restart large groups of nodes at once and thought this may have been the issue. They came back after another restart without any problems.
But the SUS_MC2 node just crashed 3 seconds after getting a request for ALIGNED, with the same error code as the ISI_ITMs (screenshot attached). Restarted it and it seems to be okay now.