Displaying reports 47301-47320 of 84729.
Reports until 10:59, Tuesday 05 September 2017
LHO FMCS
john.worden@LIGO.ORG - posted 10:59, Tuesday 05 September 2017 - last comment - 15:10, Tuesday 05 September 2017(38515)
END Y HVAC

Since the cleanroom is running for the TMDS exercise today, I have lowered the chilled air supply temperature by setting it to manual control with a 60F setpoint. We'll monitor this and restore it to Auto once the cleanroom is off. The supply temperature was 61.1 F at the time I switched to manual.

Comments related to this report
john.worden@LIGO.ORG - 15:10, Tuesday 05 September 2017 (38526)

I've restored the supply temperature setpoint to Auto.

H1 PSL
jason.oberling@LIGO.ORG - posted 09:53, Tuesday 05 September 2017 (38513)
PSL Weekly FAMIS Tasks (FAMIS 3666 & 8438)

Today I completed the weekly PSL FAMIS tasks.

HPO Pump Diode Current Adjust (FAMIS 8438)

With the ISS turned OFF, I adjusted the operating current of the HPO pump DBs; the changes are summarized in the table below.  I also attached a screenshot of the PSL Beckhoff main screen for future reference.

       Operating Current (A)
       Old    New
DB1    50.0   50.2
DB2    52.7   52.8
DB3    52.7   52.8
DB4    52.7   52.8

I also adjusted the operating temperature of DB1: all diodes were running at 28.5 °C and are now operating at 28.0 °C.  The ISS is now back ON and the HPO is outputting 154.9 W.  This completes FAMIS 8438.

PSL Power Watchdog Reset (FAMIS 3666)

I reset both PSL power watchdogs at 16:42 UTC (9:42 PDT).  This completes FAMIS 3666.

Images attached to this report
H1 ISC (ISC, SUS)
thomas.vo@LIGO.ORG - posted 09:52, Tuesday 05 September 2017 (38514)
Difficulty Locking due to Violin Modes

S Dwyer, T Vo

Last night we tried locking for several hours.  The violin modes were rung up by the 5.3 earthquake in Soda Springs, Idaho, so it took a while to damp them by hand and wait for them to ring down.  Then we had a lock loss when going to DC_READOUT, which rung the modes up even more.  When we have the full IFO back, we will try to lock again.

H1 SEI
patrick.thomas@LIGO.ORG - posted 09:43, Tuesday 05 September 2017 (38512)
SEI seismometer mass check - Monthly
FAMIS 6090

There are 3 T240 masses out of range (see attached screenshot).
All STS masses are in range (see attached screenshot).
Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 09:01, Tuesday 05 September 2017 - last comment - 16:05, Tuesday 05 September 2017(38510)
CDS restart report: Saturday 2nd - Monday 4th September 2017

model restarts logged for Mon 04/Sep/2017 No restarts reported

model restarts logged for Sun 03/Sep/2017

h1boot 09:39:08 Sun 03 Sep 2017

restart of h1boot due to freeze-up.

model restarts logged for Sat 02/Sep/2017 No restarts reported

Comments related to this report
david.barker@LIGO.ORG - 09:16, Tuesday 05 September 2017 (38511)

h1boot locked up due to 208.5 days bug.

I am reminded of a kernel 2.6.34 bug whereby the system is prone to lock up after 208.5 days have elapsed. At the time of its freeze, h1boot had been running for 215 days. This bug is also most probably the reason for h1build's freeze ten days before h1boot's. The dates agree with the restart data shown in this alog: Link

This will all be resolved soon when we transition the front ends, boot and build machines to a later kernel. 

david.barker@LIGO.ORG - 16:05, Tuesday 05 September 2017 (38528)

The longest-running front ends have been up for 123 days, so there is no need to reboot them soon. The Gentoo DAQ machines are running kernel 2.6.35, which has a fix for this bug. This is evidenced by h1tw1, which has been running for 239 days, well beyond the 208.5-day onset of the problem.
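As a back-of-the-envelope check, the uptime bookkeeping above can be sketched in a few lines (the threshold constant and helper function are illustrative, not a CDS tool):

```python
# Quick sanity-check sketch: given a host's uptime in seconds, report how
# close it is to the ~208.5-day kernel 2.6.34 lockup threshold.
# The threshold constant below is taken from the bug description above.

LOCKUP_THRESHOLD_DAYS = 208.5  # onset of the lockup bug on affected kernels

def days_until_lockup(uptime_seconds):
    """Return days remaining before the 208.5-day mark (negative if past it)."""
    uptime_days = uptime_seconds / 86400.0
    return LOCKUP_THRESHOLD_DAYS - uptime_days

# Front ends at 123 days still have ~85 days of margin; h1tw1 at 239 days
# is past the threshold, consistent with its patched 2.6.35 kernel.
print(days_until_lockup(123 * 86400))
print(days_until_lockup(239 * 86400))
```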

H1 AOS
edmond.merilh@LIGO.ORG - posted 08:10, Tuesday 05 September 2017 (38508)
Shift Transition - Day

TITLE: 09/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Maintenance
OUTGOING OPERATOR:N/A
CURRENT ENVIRONMENT:
    Wind: 10mph Gusts, 8mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.13 μm/s
QUICK SUMMARY:

Maintenance day

EXTREMELY SMOKY atmospheric conditions.

 

H1 SUS (CAL, DetChar, ISC, SEI, SUS, SYS, VE)
jeffrey.kissel@LIGO.ORG - posted 16:55, Monday 04 September 2017 (38507)
Another ETMX TMDS Discharging Success: Noise Coupling from BSC ISI ST2 Eradicated
J. Kissel

I've processed ETMX-BSC-ISI-ST2-coupling-to-DARM data that I've been taking over the past few weeks prior to using the Test Mass Discharge System on ETMX, and now including that taken immediately following Friday's IFO recovery to nominal low noise (see LHO aLOG 38494) after the successful discharge of ETMX last week. 

The results show more excellence: coupling from all DOFs of the ETMX BSC-ISI ST2 has been reduced by about a factor of 100, bringing this noise source from "just below" to "several orders of magnitude below" the current sensitivity.
It's exciting that we can pretty definitively say that we've measured and subsequently mitigated real DARM noise that results from electrostatic charge.

See attached plots:
Figures 1-3: Noise budget style projections of the ETMX ISI's coupling during ambient conditions, comparing before vs. after. The "before" data is the most recent data I collected on 2017-08-28.
Figures 4-6: Screen captures of the DTT templates showing the raw measurements. They show the three data points for each degree of freedom collected before the discharge, and the one after the discharge (the three "before" data points for each DOF are separated by roughly one week).

This reduction in noise coupling corroborates the reduction in effective bias voltage seen in LHO aLOG 38469. However, we know that this effective bias voltage is sensitive to all charge coupling mechanisms, so it's difficult to tell from that data alone which mechanism has been neutralized. With this new data showing a drastic reduction of BSC-ISI coupling, it may indicate that some large fraction of the reduced effective bias voltage is from a reduction in the voltage difference between the cage and the ESD system, decreasing the "C" and "D" terms in Eq. 1 of T1500467.

Also of note: We hadn't been watching this BSC-ISI / cage coupling for long enough prior to discharging to notice any statistically significant trend / drift in the coupling like we see with the effective bias voltage measurements and/or the relative longitudinal actuation strength, but there is *some* evidence that it was evolving in this data: the X / L data shows a 10% decrease from 2017-07-25 to 2017-08-22 (but no change from 2017-08-22 to 2017-08-28). Neither the RX / P nor the RZ / Y data shows any change. It is tough to conclude any time dependence from this little data.
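A noise-budget style projection like the one in Figures 1-3 is, at heart, the measured |ISI-to-DARM| coupling transfer function multiplied by the ambient ISI motion ASD. A minimal numpy sketch with synthetic stand-in numbers (not the real template data):

```python
import numpy as np

# Synthetic stand-ins for the measured quantities (illustrative only):
freqs = np.logspace(1, 3, 200)            # 10 Hz - 1 kHz
isi_asd = 1e-12 / freqs                   # ambient ST2 motion ASD [m/rtHz]
tf_before = 1e-3 * np.ones_like(freqs)    # |ISI -> DARM| coupling, pre-discharge
tf_after = tf_before / 100.0              # factor-of-100 reduction after TMDS

# Noise-budget projection: DARM-referred ASD = |coupling TF| * motion ASD
proj_before = tf_before * isi_asd
proj_after = tf_after * isi_asd
print(proj_before[0] / proj_after[0])     # ratio of the two projections
```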

Details:
%%%%%%%%
The data use the same excitation parameters as described in LHO aLOGs 37752 (for X), 38122 (for RY), and 38132 (for RZ) for all times.

The data was gathered using DTT templates,
/ligo/svncommon/SeiSVN/seismic/BSC-ISI/H1/Common/
    2017-09-02_H1ISIETMX_ST2_BB_X_Injections_ETMX.xml
    2017-09-02_H1ISIETMX_ST2_BB_RY_Injections_ETMX.xml
    2017-09-02_H1ISIETMX_ST2_BB_RZ_Injections_ETMX.xml

The noise budget plots were generated with a special offshoot of Sheila's simple noise budget, which lives in the same location:
/ligo/home/sheila.dwyer/Noise/Noise_projections/simple-noise-budget/
    SimpleNB_ISIETMXST2_Only_TMDS_BeforevsAfter.m           << Model
    noise_projections_ISIETMXST2_Only_TMDS_BeforevsAfter.m  << Noise Projection
Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 16:06, Sunday 03 September 2017 (38506)
Locked with ETMY ESD grounded

When LLO had the RTD problem, they saw that their excess noise went away when they grounded all ESD cables at the end station with the bad RTD.  We wanted to check if we could have a similar electronics noise coupling involving the ESD cables, so a few weeks ago we locked with the ITM cables grounded.  Today I locked with the ETMY ESD grounded and saw no change in the noise.

Details:

In order to ground the end station ESDs, we needed to be able to transition DARM to the ITM during the acquisition sequence so that we could swap the one energized ETM ESD to the low noise mode. 

Attached is a script that was (mostly) used instead of the LOWNOISE_ESD_ETMY guardian state.  Although it was run a few lines at a time for debugging this time around, I think it should be ready to be used again if needed.

We were locked in low noise starting around 22:24:30 UTC on Sept 2nd.  The lock only lasted about 20 minutes because we use the ETMY ESD to damp two PIs, which ring up with it grounded.

Non-image files attached to this report
H1 OpsInfo
sheila.dwyer@LIGO.ORG - posted 10:32, Sunday 03 September 2017 (38504)
Initial alignment minor changes

Between O1 and O2, I added automatic checkers to several initial alignment states that give the operator a notification when the WFS have converged and are ready to be offloaded.  I think these are working well enough to be relied on, so operators can simply request the Offloaded states and skip the states where you wait and check that the loops have converged.  To encourage everyone to use these checkers, I have removed the states that have them from the list of requestable states in ALIGN_IFO.

This means there are now three fewer steps in the initial alignment procedure that are done by hand: INPUT_ALIGN, SRC_ALIGN, and PRM_ALIGN can all be skipped, and the operator can simply request INPUT_ALIGN_OFFLOADED, etc.

I have also done two things that I think should make MICH_DARK more reliable: I fixed a bug which meant it did not always wait for the suspensions to stop swinging before trying to lock, and I increased the input power for MICH locking to 10 W, which should make it less sensitive to dark offsets.
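The checker logic those states rely on can be sketched schematically; the threshold and signal values here are hypothetical placeholders, not the actual Guardian code:

```python
# Schematic of a WFS-convergence checker like the ones described above.
# The threshold and the example error-signal values are hypothetical.

CONVERGENCE_THRESHOLD = 0.1  # assumed error-signal magnitude for "converged"

def wfs_converged(error_signals, threshold=CONVERGENCE_THRESHOLD):
    """True when every WFS error signal has settled below the threshold."""
    return all(abs(e) < threshold for e in error_signals)

# A state can notify the operator (or advance to the Offloaded state
# automatically) once this holds, so nobody has to watch the loops by eye:
print(wfs_converged([0.05, -0.02, 0.08]))  # all settled
print(wfs_converged([0.05, 0.5]))          # one loop still swinging
```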

H1 CDS
sheila.dwyer@LIGO.ORG - posted 09:15, Sunday 03 September 2017 - last comment - 14:02, Sunday 03 September 2017(38498)
epics not updating

Epics variables that originate from the RCG don't seem to be updating this morning, although ones that originate in Beckhoff are updating.  I don't see anything wrong on the CDS overview except the timing system errors which have been there since Tuesday.  

On the GDS screens, the GPS times are all stuck at slightly different times around Sep 03 2017 11:42:43 UTC (so far I have seen times within about 10 seconds of each other, with all models on the same IOP stopped at the same time).

We have had what looks like many nearby EQs over the last 16 hours. 

Comments related to this report
david.barker@LIGO.ORG - 09:34, Sunday 03 September 2017 (38499)

h1boot locked up around 04:40 PDT. Sheila is rebooting it.

david.barker@LIGO.ORG - 09:41, Sunday 03 September 2017 (38500)

h1boot is back, front ends look good. Sheila will try some testpoints and excitations.

david.barker@LIGO.ORG - 09:45, Sunday 03 September 2017 (38502)

Here are h1boot's system messages from early this morning; the last message before the freeze-up was an ntpd status change at approximately the time of the freeze. The next message is the reboot at 09:39:08.

Sep  3 01:19:48 h1boot -- MARK --
Sep  3 01:39:48 h1boot -- MARK --
Sep  3 01:59:48 h1boot -- MARK --
Sep  3 02:19:48 h1boot -- MARK --
Sep  3 02:39:48 h1boot -- MARK --
Sep  3 02:59:48 h1boot -- MARK --
Sep  3 03:19:48 h1boot -- MARK --
Sep  3 03:39:48 h1boot -- MARK --
Sep  3 03:59:49 h1boot -- MARK --
Sep  3 04:19:49 h1boot -- MARK --
Sep  3 04:39:49 h1boot -- MARK --
Sep  3 04:41:40 h1boot ntpd[4865]: kernel time sync status change 6001
Sep  3 09:39:08 h1boot syslog-ng[4227]: Syslog connection established; fd='7', server='AF_INET(10.99.0.99:514)', local='AF_INET(0.0.0.0:0)'

 

david.barker@LIGO.ORG - 10:01, Sunday 03 September 2017 (38503)

Impact of h1boot freeze up:

The front end real-time processes were not affected by the freeze, and neither was their data transfer to the DAQ. All EPICS IOCs on the front ends froze, which mainly impacted the Guardian nodes, which received stuck data. MEDMs were also frozen at their 04:41 PDT values, and conlog did not receive any updates. I suspect testpoint and excitation operations would also have been unavailable during the freeze.

keith.thorne@LIGO.ORG - 14:02, Sunday 03 September 2017 (38505)CDS
Given that both sites had an EtherCAT front-end node lock up in the last week, and now the H1 boot server, we are likely hitting the known bug in kernel 2.6.34 where it will fail after > 200 days of uptime (certainly the case for l1ecatc1).   We had restarted everything so as to be OK until the end of O2, but we are now past that point.

Really looking forward to getting the OK to install updated OS on IFO front-ends.
H1 General
sheila.dwyer@LIGO.ORG - posted 17:42, Saturday 02 September 2017 (38497)
some locking today.

Today I ran some charge measurements and worked on the script for analyzing and plotting.  My script for transitioning DARM control to ETMX in low noise had a bug which caused a lockloss; after this I grounded the ETMY ESD cables and tried to relock, but spent ~40 minutes on violin modes.  While I was damping violins we got a 5.3 in Idaho which tripped several ISIs.  Everything is re-isolated now, but I'm leaving the SEI state as Large EQ.  The next person to try locking should be careful about violins.

The ETMY ESD is re-connected for now; I hope to do the grounding test tomorrow.

H1 ISC (CAL, COC, DetChar, ISC, OpsInfo, SUS, SYS, VE)
thomas.vo@LIGO.ORG - posted 19:33, Friday 01 September 2017 (38494)
Return to Low Noise

S Dwyer, J Kissel, T Vo

After running absorption measurements and adjusting the initial alignment once again, we were able to get past the CARM_5PM stage by adjusting the CARM gain manually.  Then Sheila adjusted the alignment references to make re-locking easier.

We've returned to Nominal Low Noise, but there is no appreciable change in the sensitivity from before the discharging at X-End.  Sheila and Jeff took in-lock charge measurements and BSC-ISI-to-DARM coupling measurements, respectively; they will post results in a separate aLOG.

Images attached to this report
Non-image files attached to this report
H1 ISC
thomas.vo@LIGO.ORG - posted 19:25, Friday 01 September 2017 (38493)
Loss Measurement, Locked and Unlocked ARMS

Sheila, Daniel, TVo

Executive Summary

It doesn't look like the TMDS caused any extra absorption on ETMX.

Following my aLOG 38476, where I alluded to the possibility of increased loss due to absorption as a reason we had trouble locking last night, we wanted to measure the loss in each arm and compare the two.  This is done by looking at the reflected power and taking the difference between locked and unlocked.  In the attached time series, the first drop is the X arm locking; the second is the Y arm locking.

For this measurement, we configured the IFO as such:

- The arm cavity of interest was first locked on ALS to get aligned well (it would be mis-aligned during the actual measurement)

- The other arm cavity was misaligned

- Then we locked the single arm with Guardian (ALIGN_IFO)

- We also turned on the DC Centering on the AS WFS in order to maximize their power.

- Then to get a decent dip in the reflected power off the arm cavity, we increased the input power from the PSL from 2 Watts to 25 Watts.
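The arithmetic behind the tables below is the locked/unlocked reflected-power ratio (which is what the "Visibility" column reports); a minimal sketch using the X-arm LSC-REFLAIR_B_LF numbers. Converting visibility to an absorption ppm figure requires cavity parameters (mirror transmissions, mode matching) and is omitted here:

```python
def visibility(locked_power, unlocked_power):
    """Ratio of reflected power with the arm locked vs. unlocked."""
    return locked_power / unlocked_power

# X-arm LSC-REFLAIR_B_LF values from the table below:
v = visibility(0.0114, 0.0124)
print(f"{100 * v:.0f}%")  # matches the 92% in the table
```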

Below is the summary of results:

XARM

Channel            Locked Power (cts)   Unlocked Power (cts)   Visibility   Abs PPM (calc'd)
LSC-REFLAIR_B_LF   0.0114               0.0124                 92%          343
LSC-ASCAIR_B_LF    0.0423               0.0462                 91%          357
ASC-AS_A_DC_NSUM   4241                 4454                   95%          220

YARM

Channel            Locked Power (cts)   Unlocked Power (cts)   Visibility   Abs PPM (calc'd)
LSC-REFLAIR_B_LF   0.0114               0.0124                 92%          479
LSC-ASCAIR_B_LF    0.0417               0.0461                 91%          409
ASC-AS_A_DC_NSUM   4265                 4458                   95%          209

 

Some of the sensors at the AS port didn't give us good results when we locked and unlocked, and it's not fully understood why.  The total loss is a combination of mode matching, alignment, etc., and these were not taken into account.

Images attached to this report
H1 SYS (ISC, OpsInfo, SYS)
jeffrey.kissel@LIGO.ORG - posted 17:21, Friday 01 September 2017 (38492)
TIMING Errors Temporarily Removed from Verbal Alarms
J. Kissel, S. Dwyer

We're getting verbal alarms about timing errors constantly because the end station GPS receivers are in the wrong configuration (see LHO aLOG 38439).

To save our sanity until the GPS receiver configurations are fixed, we've commented out the TIMING error function in the verbal alarms script by excluding it from the all_tests list defined on line 1253 of
     /opt/rtcds/userapps/release/sys/h1/scripts/VerbalAlarms/VerbalAlarms.py

This should be put back in once the GPS receiver configuration is fixed.
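The pattern amounts to removing one entry from the list of checks the alarm loop iterates over; a schematic with placeholder function names (not the actual contents of VerbalAlarms.py):

```python
# Schematic of the edit described above: the alarm loop runs every check in
# all_tests, so dropping one function from the list silences that alarm.
# Function names and return values here are placeholders.

def check_timing():
    return "TIMING error"  # the noisy check we want to silence

def check_seismic():
    return None  # no alarm condition

all_tests = [
    # check_timing,   # commented out until the GPS receivers are reconfigured
    check_seismic,
]

alarms = [msg for test in all_tests if (msg := test()) is not None]
print(alarms)  # no TIMING alarm fires
```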