Reports until 12:26, Tuesday 24 April 2018
H1 CDS
david.barker@LIGO.ORG - posted 12:26, Tuesday 24 April 2018 - last comment - 16:02, Tuesday 24 April 2018(41636)
Frontend with long uptime rebooted

WP7502

Jeff K, Corey, TJ, Jamie, Dave:

All front end computers with up-times approaching or exceeding 208 days were rebooted. The sequence was: stop all models on the computers (leaving the PSL till last), then reboot all computers. Dolphin'ed machines waited until the last computer was rebooted before starting their models.

I had a repeat of yesterday's h1susex problem: after a reboot it lost communications with its IO Chassis. Today's sequence was:

remotely rebooted h1susex, it did not come back (no ping response)

remotely reset h1susex via IPMI management port, it booted but lost communication with IO Chassis

at the EX end station, powered h1susex down, power cycled the IO Chassis, then powered h1susex back up. This time the models started.

Despite removing h1susex from the Dolphin fabric, h1seiex and h1iscex glitched and had their models restarted. Ditto for h1oaf0 and h1lsc0 in the corner station.

Machines rebooted (as opposed to just having their models restarted) were:

h1psl0, h1seih16, h1seih23, h1seih45, h1seib1, h1seib2, h1seib3, h1sush2b, h1sush34, h1sush56, h1susb123, h1susauxh2, h1susauxh56, h1asc0, h1susauxey, h1seiex, h1iscex

Some guardian nodes stopped running as a result of these restarts. Jamie and TJ are investigating.

Comments related to this report
jameson.rollins@LIGO.ORG - 14:07, Tuesday 24 April 2018 (41642)

guardian processes killed for watchdog timeout when front ends were rebooted

At roughly 10:05 AM local time, 45 of the guardian nodes went dead (list at bottom). This time was coincident with all the front end reboots. Technically this was not a crash of the guardian nodes. Instead systemd actually killed the processes because they did not check in within their 3 second watchdog timeout:

...
2018-04-24_17:06:09.417388Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: Current state method will be rerun until the connection error clears.
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
2018-04-24_17:06:10.018595Z ISI_HAM5 EZCA CONNECTION ERROR. attempting to reestablish...
...
2018-04-24_17:06:10.018595Z ISI_HAM5 CERROR: State method raised an EzcaConnectionError exception.
2018-04-24_17:06:10.014956Z guardian@ISI_HAM5.service: Watchdog timeout (limit 3s)!
2018-04-24_17:06:10.015011Z guardian@ISI_HAM5.service: Killing process 1797 (guardian ISI_HA) with signal SIGABRT.
2018-04-24_17:06:10.015060Z guardian@ISI_HAM5.service: Killing process 2839 (guardian-worker) with signal SIGABRT.
2018-04-24_17:06:10.084021Z guardian@ISI_HAM5.service: Main process exited, code=dumped, status=6/ABRT
2018-04-24_17:06:10.084278Z guardian@ISI_HAM5.service: Unit entered failed state.
2018-04-24_17:06:10.084289Z guardian@ISI_HAM5.service: Failed with result 'watchdog'.

This is both good and bad. It's good that systemd has this watchdog facility to catch potentially dead processes. But it's bad that the guardian processes did not check in in time. The guardian loop runs at 16 Hz and checks in with the watchdog once per cycle, so missing three seconds' worth of cycles is a big deal. There were even logs from the main daemon process reporting EPICS connection errors right up until the point it was killed. If the daemon was reporting those logs, it should also have been checking in with the watchdog.
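For reference, the check-in mechanism here is the standard systemd notification protocol: a datagram containing WATCHDOG=1 sent to the socket named by $NOTIFY_SOCKET, once per cycle. A minimal sketch of that protocol (not guardian's actual code; the function and loop names are illustrative):

```python
import os
import socket
import time


def sd_notify(message: bytes):
    """Send a notification datagram to systemd via $NOTIFY_SOCKET."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return None  # not running under systemd
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, addr)
    return True


def run_loop(cycles, rate_hz=16.0):
    """A 16 Hz main loop that pets the watchdog once per cycle.

    With WatchdogSec=3s in the unit file, systemd kills the service
    (SIGABRT) if no WATCHDOG=1 arrives within 3 s, matching the
    "Watchdog timeout (limit 3s)!" lines in the journal above.
    """
    for _ in range(cycles):
        sd_notify(b"WATCHDOG=1")
        time.sleep(1.0 / rate_hz)
```

The service side would pair this with Type=notify and WatchdogSec= in the guardian@.service unit; at 16 Hz there are ~48 opportunities to check in per 3 s window, which is why missing all of them is so surprising.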

Clearly the issue is EPICS related. The only connection I am aware of between guardian and the front end models is EPICS. But all of the EPICS connections from guardian to the front ends are done via ezca in the guardian-worker process, and even if that process got hamstrung it shouldn't affect the execution of the daemon.

Very confusing. I'll continue to look into the issue and see if I can reproduce it.

Here's the full list of nodes that died (times are service start times, not death times; I'll try to make those be the last service status change instead):

ALIGN_IFO                enabled    failed     2018-04-20 09:39:57-07:00
DIAG_EXC                 enabled    failed     2018-04-20 09:39:51-07:00
DIAG_MAIN                enabled    failed     2018-04-20 09:40:03-07:00
DIAG_SDF                 enabled    failed     2018-04-20 09:39:51-07:00
HPI_BS                   enabled    failed     2018-04-20 09:39:52-07:00
HPI_HAM1                 enabled    failed     2018-04-20 09:39:52-07:00
HPI_HAM2                 enabled    failed     2018-04-20 09:39:52-07:00
HPI_HAM3                 enabled    failed     2018-04-20 09:39:52-07:00
HPI_HAM4                 enabled    failed     2018-04-20 09:39:52-07:00
HPI_HAM5                 enabled    failed     2018-04-20 09:39:52-07:00
HPI_HAM6                 enabled    failed     2018-04-20 09:39:52-07:00
HPI_ITMX                 enabled    failed     2018-04-20 09:39:52-07:00
HPI_ITMY                 enabled    failed     2018-04-20 09:39:52-07:00
ISI_BS_ST1               enabled    failed     2018-04-20 09:39:53-07:00
ISI_BS_ST1_BLND          enabled    failed     2018-04-20 09:39:51-07:00
ISI_BS_ST1_SC            enabled    failed     2018-04-20 09:39:51-07:00
ISI_BS_ST2               enabled    failed     2018-04-20 09:39:53-07:00
ISI_BS_ST2_BLND          enabled    failed     2018-04-20 09:39:51-07:00
ISI_BS_ST2_SC            enabled    failed     2018-04-20 09:39:51-07:00
ISI_HAM2                 enabled    failed     2018-04-20 09:39:53-07:00
ISI_HAM2_SC              enabled    failed     2018-04-20 09:39:51-07:00
ISI_HAM3                 enabled    failed     2018-04-20 09:39:53-07:00
ISI_HAM3_SC              enabled    failed     2018-04-20 09:39:51-07:00
ISI_HAM4                 enabled    failed     2018-04-20 09:39:53-07:00
ISI_HAM4_SC              enabled    failed     2018-04-20 09:39:51-07:00
ISI_HAM5                 enabled    failed     2018-04-20 09:39:53-07:00
ISI_HAM5_SC              enabled    failed     2018-04-20 09:39:51-07:00
ISI_HAM6                 enabled    failed     2018-04-20 09:39:53-07:00
ISI_HAM6_SC              enabled    failed     2018-04-20 09:39:51-07:00
ISI_ITMX_ST1             enabled    failed     2018-04-20 09:39:53-07:00
ISI_ITMX_ST1_BLND        enabled    failed     2018-04-20 09:39:51-07:00
ISI_ITMX_ST1_SC          enabled    failed     2018-04-20 09:39:51-07:00
ISI_ITMX_ST2             enabled    failed     2018-04-20 09:39:53-07:00
ISI_ITMX_ST2_BLND        enabled    failed     2018-04-20 09:39:51-07:00
ISI_ITMX_ST2_SC          enabled    failed     2018-04-20 09:39:51-07:00
ISI_ITMY_ST1             enabled    failed     2018-04-20 09:39:53-07:00
ISI_ITMY_ST1_BLND        enabled    failed     2018-04-20 09:39:51-07:00
ISI_ITMY_ST1_SC          enabled    failed     2018-04-20 09:39:51-07:00
ISI_ITMY_ST2             enabled    failed     2018-04-20 09:39:53-07:00
ISI_ITMY_ST2_BLND        enabled    failed     2018-04-20 09:39:51-07:00
ISI_ITMY_ST2_SC          enabled    failed     2018-04-20 09:39:51-07:00
SEI_HAM2                 enabled    failed     2018-04-20 09:39:45-07:00
SEI_HAM3                 enabled    failed     2018-04-20 09:39:46-07:00
SEI_HAM4                 enabled    failed     2018-04-20 09:39:46-07:00
SUS_ETMX                 enabled    failed     2018-04-20 09:39:46-07:00
SUS_TMSX                 enabled    failed     2018-04-20 09:39:46-07:00
thomas.shaffer@LIGO.ORG - 16:02, Tuesday 24 April 2018 (41648)GRD

While I was restarting the dead nodes listed above, two had to be restarted a second time: ISI_ITMX_ST1 and ISI_ITMY_ST1. Both gave the same "GuardDaemonError: worker exited unexpectedly, exit code: -11". I didn't think too much of it at the time, because I had tried to restart large groups of nodes at once and thought that may have been the issue. They came back after another restart without any problems.

But the SUS_MC2 node just crashed 3 seconds after getting a request for ALIGNED, with the same error code as the ISI_ITMs (screenshot attached). Restarted, and it seems to be okay now.

Images attached to this comment
H1 SQZ (SQZ)
nutsinee.kijbunchoo@LIGO.ORG - posted 11:56, Tuesday 24 April 2018 (41581)
VOPO green alignment

Sheila, Nutsinee

The percentage of green power in the 00 mode had decreased to ~60% from the 74% we previously had in the optics lab (alog 40594). This is likely due to the fiber swap in chamber changing the pointing (the 10 mode content became higher). Sheila tweaked the green alignment on Friday (alog 41572). The 00 mode calculated from transmitted power is now 70.8%, with the 20 mode at 26% of the 00 mode. The PZT calibration is still ~20 V/FSR. The data have been corrected for dark noise and PZT nonlinearity.

 

The total dips in reflected power, including the higher order modes, are 49% of the power off resonance. According to the VOPO final design document, the M2 mirror (output coupler for green) reflectivity is >99.9%. With an M1 reflectivity of 98% and a modulation depth of 0.094 rad (see alog 41622 for the calculation), the measurement suggests a loss of at least 0.3% inside the cavity itself. The input green power into the cavity at the time of measurement was 3.3 mW. The measured power off resonance was 2.83 mW (the BBPD has a transimpedance of 2000 Ohms and a responsivity of 0.2 A/W). That's another 14% loss between the fiber and the refl PD on the SQZT6 table. The largest dip, without taking higher order modes into account, is 35% of the power off resonance.
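The loss estimate above follows from the on-resonance reflectivity of a two-mirror cavity. A hedged sketch, assuming the standard impedance-matching formula with the extra round-trip loss lumped in with the output coupler (the function name is mine, not from the entry):

```python
import math


def on_resonance_dip(R1, R2, round_trip_loss):
    """Fractional dip in reflected power on cavity resonance.

    R1: input coupler power reflectivity
    R2: output coupler power reflectivity
    round_trip_loss: extra intracavity power loss per round trip

    Uses the standard two-mirror amplitude reflectivity on resonance,
    r = (r1 - r2') / (1 - r1*r2'), with r2' absorbing the loss.
    """
    r1 = math.sqrt(R1)
    r2 = math.sqrt(R2 * (1.0 - round_trip_loss))
    r_res = (r1 - r2) / (1.0 - r1 * r2)
    return 1.0 - r_res ** 2


# With M1 = 98% and M2 > 99.9% (numbers from this entry), the cavity is
# strongly undercoupled, so the dip depth is quite sensitive to small
# intracavity losses -- which is what lets the 35% dip constrain the loss.
```

This ignores mode mismatch; to compare with a measured dip one would scale by the mode-matched fraction of the input power.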

 

From the refl scan we measured the higher order modes to be 14% of the power off resonance; from the trans scan we measured them to be 30% of the total power. These higher order modes are expected to come back as shot noise. The discrepancy between the refl and trans measurements could be due to the high acoustic noise in HAM6. We should have a much better scan once the VOPO is in vacuum.

 

The transmitted signal was taken with a Thorlabs silicon diode (SM1PD1A) scanned at 1 mHz and was monitored through a PD concentrator (D1700176 with dual PD amplifier D1200543, measured gain of 22.4). This seems to give cleaner data compared to the Thorlabs PD100A diode. The refl power was monitored with the BBPD via another PD concentrator PD monitor (D1201349).
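The refl power figures above convert from PD voltage via the transimpedance and responsivity quoted for the BBPD. A small sketch of that conversion (the helper name is illustrative):

```python
def optical_power_from_pd(v_out, transimpedance_ohm, responsivity_a_per_w):
    """Incident optical power (W) from a photodiode's output voltage (V)."""
    photocurrent = v_out / transimpedance_ohm        # amps
    return photocurrent / responsivity_a_per_w       # watts


# BBPD values from this entry: 2000 ohm transimpedance, 0.2 A/W
# responsivity, so the 2.83 mW measured off resonance corresponds to
# about 1.13 V at the PD output.
```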

Images attached to this report
Non-image files attached to this report
H1 AWC
thomas.vo@LIGO.ORG - posted 10:01, Tuesday 24 April 2018 (41634)
SR3 Code change implemented

When first coding the driver chassis in Beckhoff, I wrongly assumed it was a current driver like the TCS ring heaters; however, after looking at the drawings and chatting with the CIT folks I realized it is a voltage driver. So I modified the logic to reflect this, as well as changing the channel names and the MEDM screen. Only the AWC library was changed; PLC3 reloaded the changes when the script was run.

H1 SYS (AOS, DAQ, IOO, ISC, OpsInfo, SEI, SQZ, SUS, SYS, TCS)
jeffrey.kissel@LIGO.ORG - posted 09:58, Tuesday 24 April 2018 (41633)
SDF Review before Front-End Reboot
J. Bartlett, P. King, J. Kissel, H. Radkins, T. Vo

We've reviewed the SDF system prior to today's scheduled fast front-end reboot. We've only accepted things we can confirm are necessary because of physical changes to the IFO. Otherwise, we reverted to the safe values and/or left things as DIFFs (because the reboot will revert them).

Kissel, Vo (ISC / SUS / IMC)
    - Accepted outputs and offsets on for all SUS by putting them in the ALIGNED state (i.e. with all OUTPUT switches ON; a few new or neglected SUS needed this, like the RMs, ZMs, and OFI OPO), because the watchdog protects the system upon reboot.
    - Accepted a few new misalignment offset values (SRM, ITMs, PRM) because we've physically changed the alignment of these suspensions during the Sep-Dec 2017 corner vent, and the misaligned position has now been confirmed by a few months worth of corner station commissioning
    - SR3 optical lever input gains to ZERO until we make the front-end changes needed to get it up and running (see LHO aLOG 41547)
    - In ISC land, accepted dark offsets on AS A & B and OMC QPD (remeasured to make stuff work in air, will likely have to be remeasured in the future anyways)
    - 180 phase change in ASC OMC DC centering loops because of the tip-tilt actuator sign flip (LHO aLOG 41441)
Radkins (SEI)
    - Cleared out by Hugh -- most diffs are because some chambers are locked, and/or the sensor correction configuration is OFF which is "abnormal" for observing
Bartlett, King (PSL)
    - FSS and PMC are stable, so J. Bartlett & P. King accepted the DIFFs on this.
    - ISS and DBB 

We're OK to Go for reboot of front-ends.
Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 08:28, Tuesday 24 April 2018 (41629)
Maintenance Morning Status

TITLE: 04/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    Wind: 10mph Gusts, 6mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.27 μm/s
QUICK SUMMARY:

H1 SEI
hugh.radkins@LIGO.ORG - posted 08:19, Tuesday 24 April 2018 - last comment - 09:08, Tuesday 24 April 2018(41627)
WHAM5 HEPI repeatably tripping on horizontal drives

Thanks to Sheila for the alert.  I suspect something has changed and is pushing on the platform or preventing motion.  I'll inspect and then do some range of motion tests to help find the interference and confirm freedom.

FRS 10483

Comments related to this report
hugh.radkins@LIGO.ORG - 09:08, Tuesday 24 April 2018 (41632)

False alarm--WHAM5 HEPI has been locked since the venting on 5 April. Closing ticket, no issue.  HEPI remains locked with relatively small position and alignment offsets: residuals are 20um in Y & Z and -9urad in RZ; all other DOF residuals are much less.  Guardian: the SEI chamber manager is paused and the ISI guardian is in High_Isolated.  The ISI isolates no problem.  If tripping is a problem and isolation isn't required, it may be best to un-pause the manager and run at ISI_Damped_HEPI_Offline.

H1 CDS
patrick.thomas@LIGO.ORG - posted 06:35, Tuesday 24 April 2018 - last comment - 11:02, Tuesday 24 April 2018(41625)
Starting work on h1ecatc1
WP 7491: "Update PLC1 and the system manager on h1ecatc1 to provide remote control of the motorized polarization controller in the MSR through its RS232 interface."
Comments related to this report
patrick.thomas@LIGO.ORG - 08:16, Tuesday 24 April 2018 (41626)
Work on h1ecatc1 is complete. Channels still need to be added to the DAQ. I have an MEDM screen but will ask commissioners where they would like it to be linked from on the sitemap.
patrick.thomas@LIGO.ORG - 11:02, Tuesday 24 April 2018 (41635)
Replaced /opt/rtcds/userapps/release/als/common/medm/ALS_CUST_POLCORR.adl with the new screen. This is linked from the ALS overview from the box labeled "POLARIZATION CORRECTION BOX".
Images attached to this comment
H1 SQZ (SQZ)
nutsinee.kijbunchoo@LIGO.ORG - posted 23:48, Monday 23 April 2018 (41622)
Squeezer 80MHz EOM Calibration

Quick conclusion: The 80MHz EOM calibration is 0.98 rad/V. We are currently modulating the pump light at 0.094 rad.

--------------------------------------------------

The 20 dB attenuator at the RF power output to the 80 MHz EOM (EO-80M3-NIR) was temporarily taken off to let the sidebands show higher on the OPO cavity scan. 12.3 dBm (0.92 Vrms) was measured going into the patch panel. Using Sqrt(sideband peak/carrier peak) = J1(β)/J0(β) gives a modulation depth of 0.9 rad for 12.3 dBm input power, or 0.98 rad/V.

The RF power going through the 20 dB attenuator measured -7.84 dBm (0.0906 Vrms). This gives a modulation depth of 0.094 rad, which is how much we're currently modulating the pump light.

The spec I found in the quote claimed ~16 dBm (1.41 Vrms) is required for 1 rad of modulation at 532 nm, about 3 dB off from what we measure.
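The inversion of Sqrt(sideband/carrier) = J1(β)/J0(β) for the modulation depth β has no closed form, but is easy to do numerically. A stdlib-only sketch (the bisection bracket assumes β < 2, which holds for the depths here; function names are mine):

```python
import math


def bessel_j(n, x, steps=2000):
    """Jn(x) for integer n via the integral representation
    (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (trapezoid rule)."""
    h = math.pi / steps
    total = 0.5 * (math.cos(0.0) + math.cos(n * math.pi - x * math.sin(math.pi)))
    for k in range(1, steps):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi


def modulation_depth(sideband_carrier_power_ratio):
    """Solve sqrt(P_sb/P_c) = J1(b)/J0(b) for b by bisection on (0, 2)."""
    target = math.sqrt(sideband_carrier_power_ratio)
    lo, hi = 1e-6, 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if bessel_j(1, mid) / bessel_j(0, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a sideband/carrier power ratio of about 0.25, this returns β ≈ 0.9 rad, consistent with the 12.3 dBm measurement above.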

Images attached to this report
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 21:59, Monday 23 April 2018 - last comment - 08:58, Tuesday 24 April 2018(41621)
squeezer path lenses swapped

Corey, TJ, Thomas Vo, Terry, Dan Brown, Alexei, Sheila

Today we swapped the lenses in the squeezer path for a +250 mm ROC lens (1st lens) and a +350 mm lens (on the translation stage); both were lenses from the Enhanced LIGO squeezer (E1000077) which Corey cleaned this morning.

When Terry and I swapped the lenses the beam became very misaligned through the squeezer Faraday, so we spent some time re-aligning and then aligning through the apertures we had placed on HAM6.  To do this we made mechanical adjustments to ZM1 pitch.

After the PSL crew was done for the day, Thomas, Dan, and I co-aligned the squeezer beam to the interferometer beam very roughly.  (NB: HAM5 HPI has been tripping all evening, so we will need to redo this with HAM5 isolated.)  We were able to close the centering and OMC QPD loops and get a mode scan; results are still being interpreted.

We have removed the apertures we placed in HAM6 this afternoon. 

Comments related to this report
daniel.brown@LIGO.ORG - 08:58, Tuesday 24 April 2018 (41624)

Here are two of the OMC scans taken using the OPO beam last night. The second scan was with the additional apertures removed. If we just take the ratio of the 2nd/0th order peaks, it suggests we have around 3-5% mismatch; scan 2 looks slightly better. The model was predicting around 2%, so this seems promising. However, the high 1st and 3rd order mode content relative to the 2nd and 4th makes us think we could be clipping the beam somewhere. The culprit is likely the OPO Faraday, as with the new lenses its aperture is only 3x larger than the beam.
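One common rough estimate behind the "2nd/0th order peak" figure is to take the 2nd-order peak power as a fraction of the fundamental-plus-2nd-order power. A sketch (this ignores the other mode orders and any scan nonlinearity, so it is only the crude version of what a full scan fit would do):

```python
def mismatch_from_scan(p0, p2):
    """Rough mode mismatch from a cavity scan: power in the 2nd-order
    peak over the fundamental-plus-2nd-order power. p0 and p2 are the
    0th and 2nd order peak heights in any common units."""
    return p2 / (p0 + p2)


# e.g. a 2nd-order peak at ~4% of the 00 peak suggests roughly 4% mismatch
```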

Images attached to this comment
LHO VE
kyle.ryan@LIGO.ORG - posted 17:31, Monday 23 April 2018 (41618)
Turbo pump pumping CP4 tripped off

The turbo pumping CP4 tripped off this afternoon.  As found, the turbo controller was de-energized and the scroll backing pump had isolated itself and remained running.  The power switch for the turbo controller doubles as a circuit breaker and was found in the OFF position.  This switch has only two states, ON and OFF, as opposed to the more common ON, OFF and "TRIPPED" states ->

None of the power cords appeared to be disturbed, and my conclusion is that the turbo had likely tripped due to over temperature.  This pump is surrounded by hot, baking surfaces and has no fan; it is instead cooled by a local chiller circulating 30C water through its electric motor.  For the weeks leading up to today, the relevant temperatures have been unchanged: the turbo has been 80C at its inlet flange and 45C-48C internally (per the thermocouple read by the controller).  I think the internal trip threshold is 50C.  Ambient conditions are getting warmer ->

I was able to restart the turbo and reach full RPM while the backing pump remained isolated.  Once at full RPM, I valved in the backing pump.  Everything seems normal.  As a precaution, I added a small fan and folded back some of the aluminum foil to expose the turbo+RGA hardware and allow some minimal convection cooling.

Non-image files attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 16:50, Monday 23 April 2018 - last comment - 18:01, Monday 23 April 2018(41619)
CP4 GN2 heat

This morning I valved in the "pressure build circuit" on the CP4 Dewar (1/2 turn CCW). The Dewar head pressure is now at 17 psig (just a hair below 16 psig on the other gauge) and GN2 flow is back up to ~50 scfhx100 (fluctuating between 40-55), up from 12 scfhx100 this morning. More flow means decreased regen temperature, so I also increased the variac today in increments: 58%, then 60%. Attached is an 8 hr plot of the GN2 regen temperature, both inlet and exhaust.

Leaving pressure build circuit valved in overnight. Will valve out tomorrow morning before LN2 truck delivery.

Images attached to this report
Comments related to this report
chandra.romel@LIGO.ORG - 18:01, Monday 23 April 2018 (41620)

I watched the GN2 flow surge up to 80 scfhx100, which is too high so I closed the pressure build valve and also lowered the variac to 56% so we don't get alarms all night. Kyle's adjustment on the economizer valve may still be stabilizing and increasing head pressure and tomorrow the Dewar is scheduled for a refill which will also increase flow.

Note that there is a 15 psig pressure relief valve just downstream of the GN2 heater.

LHO General
patrick.thomas@LIGO.ORG - posted 16:24, Monday 23 April 2018 (41617)
Ops Shift Summary
14:46 UTC Mark to end Y to retrieve engine hoist for taking doors off BSC chambers
14:51 UTC Chris S. to beer garden to wrap BSC chambers in prep for vent
14:52 UTC Hugh to end Y to top off HEPI fluid
15:24 UTC Hugh leaving end Y
15:29 UTC APS through gate
Chandra to mid Y and end Y
15:46 UTC Reset tripped Beckhoff laser safety interlock
15:47 UTC Nutsinee to HAM6
16:00 UTC Meeting in CR
16:26 UTC Karen to end Y
16:28 UTC Mark taking HEPI fluid barrel to end X
16:49 UTC Hugh to HAM6
16:49 UTC Filiberto to end X to unlock door
h1boot server was down, restarted
17:11 UTC APS through gate
17:11 UTC Vacuum group restarting mid X cold cathode gauges
17:16 UTC TJ to LVEA to get equipment then to optics lab
17:25 UTC Corey to LVEA to get flashlights
17:51 UTC Chandra WP 7501
17:51 UTC Peter to H2 and then H1 PSL enclosure
17:54 UTC Mark moving cleanroom from mid Y to high bay
17:58 UTC Jason to PSL enclosure
18:01 UTC Thomas and Georgia to squeezer bay to take measurements
18:04 UTC Hugh back
18:13 UTC Karen back
18:13 UTC Nutsinee back
18:17 UTC Hugh to HAM6
18:18 UTC Thomas and Georgia back
18:48 UTC TJ back
19:03 UTC Dave to end X to reboot SUS
19:09 UTC Hugh back
19:28 UTC Dave back
19:44 UTC Dave to end X to reboot SUS IO chassis
19:51 UTC Filiberto to LVEA to look for RF cable
20:00 UTC Corey moving nitrogen tank from cleaning area to optics lab
20:07 UTC Jeff B. to LVEA and optics lab
20:10 UTC APS through gate
20:13 UTC Dave back
20:17 UTC TJ to optics lab
20:19 UTC Karen to mid Y
20:36 UTC Hugh to end Y to reset HV and mechanically adjust the trip level of the HEPI fluid reservoir
20:38 UTC Nutsinee to HAM6 to take measurements
20:48 UTC Filiberto to mid X to check on tripped vacuum gauges
20:49 UTC Karen leaving mid Y
21:06 UTC Peter and Jason out for lunch
21:38 UTC TJ back
21:45 UTC TJ to optics lab and then LVEA
21:51 UTC Tyler and Mark moving engine hoist and cleanroom into LVEA through high bay
21:56 UTC TJ back
22:11 UTC Peter and Jason to H1 PSL enclosure
22:11 UTC Corey done in optics lab for the day
22:18 UTC Sheila to HAM6
22:24 UTC Terry to HAM6
22:26 UTC Jeff B. done
H1 COC (SUS, VE)
hugh.radkins@LIGO.ORG - posted 14:52, Monday 23 April 2018 - last comment - 15:10, Monday 23 April 2018(41610)
EndY High Volts Power Reenabled

The supply toggles were still 'On', so this trip was in response to a VE glitch.

Comments related to this report
chandra.romel@LIGO.ORG - 15:10, Monday 23 April 2018 (41612)

The EY vacuum rack was rebooted last Tuesday for a newly installed gauge. This caused the high volts to trip off, because HV is interlocked with the PT-425 pressure gauge.

H1 AOS
robert.schofield@LIGO.ORG - posted 17:34, Sunday 22 April 2018 - last comment - 17:01, Monday 23 April 2018(41589)
E-field injection suggests that EFM is more sensitive to port fringe fields than DARM is

Georgia, Robert

This entry gives results from the test discussed by Georgia here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=41559

Sometimes during PEM injections, we sanity check that external electric fields (like those from a lightning strike) do not significantly affect DARM by sneaking into the chamber through a viewport. A repeat of these injections allowed for a comparison of the field measured by the EFM to the test mass motion that has been induced in the past by similar injections. In the past, we only did a single frequency in the bucket, because the integration time needed to see the signal in DARM was long.

We repeated the injection by placing an insulated plate over the illuminator port of the EX chamber, and driving at 211 Hz with a voltage of about +/- 11 V relative to the chamber. Similar injections have produced an rms DARM signal as high as 1.5e-21 m in the past, though this is variable, and sometimes we can’t integrate long enough to see the injection in DARM.

The figure shows an EFM spectrum for this injection and DARM for a similar injection in the past. The calibration from the log referenced above gives an rms field at 211 Hz of about 1e-5 V/m at the EFM. Assuming similar conditions to those in the past, we get a coupling of about 1.6e-16 meters of DARM motion per V/m measured by the EFM, for this injection configuration. The SNR of the EFM signal for the injection appeared to be nearly ten times greater than the DARM SNR had been during similar injections in the past. Shielding may differ for different injection points, but at least for this port injection, the EFM appears to be more sensitive than DARM.

Non-image files attached to this report
Comments related to this report
craig.cahillane@LIGO.ORG - 17:01, Monday 23 April 2018 (41616)
I calibrated our EFM spectrum into DARM, this time using Robert and Georgia's electric field to meter TF number 

I assumed that x(f) = C * E(f) / f^2 due to the test mass suspensions, where C is some constant.

From Robert's measurement I found C = 1.6e-16 m/(V/m) * (211 Hz)^2 ≈ 7.1e-12 m Hz^2 / (V/m).

Then, calibrating EFM voltage output noise V(f) into displacement noise: x(f) = C * K * V(f) / f^2,
where K is the EFM volts-to-field calibration from alog 41591, and C is the constant above.

The estimated displacement noise is plotted below.  Think of this as an upper bound for the ambient electric field noise, since we are not sure our EFM noise floor below 100 Hz is not sensor noise, and this is literally a single-point measurement of the coupling.
Images attached to this comment
H1 AOS
thomas.vo@LIGO.ORG - posted 18:31, Saturday 14 April 2018 - last comment - 08:54, Tuesday 24 April 2018(41441)
AS WFS and OMC ASC loops closed

Armed with the knowledge about the OM3 sign flip, I was able to close the angular loops on the AS WFS DC centering as well as the OMC ASC loops at the same time with the IFO beam.  I had to go pretty far with the alignment sliders on OM1 and OM2 to get the IFO beam back on the QPDs, but this seems to let the control loops converge, and the alignment offsets are closer to their zero positions on OM1 and OM2.  However, I was not able to turn on the integrators; still, this configuration might be good enough to do an OMC scan.

The squeezer crew could try to walk the OPO squeezer beam from last night towards this new alignment with the ZMs and try scanning from here, maybe it'll be less noisy with the angular loops closed.

One thing that is a little odd is that there seems to be an oscillation in the power of AS_C_SUM and AS_A/B_SUM; however, none of the optics' suspensions seem to be moving excessively and all but one ISI were in the isolated state.  HAM5 was the only one in "ISI Damped HEPI Offline", but when I tried to go to "Isolated" the HEPI ACT limit watchdog tripped, so I left it alone.  This oscillation occurs both when the AS WFS DC centering loops are open and closed, so it might be coming from upstream of HAM6.  In particular, AS_A_DC_PIT seems to be the noisiest of the WFS signals, but I don't know where the source is.

 

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 08:54, Tuesday 24 April 2018 (41631)

HAM5 HEPI is/was locked--that is why.
