H1 SEI (CDS, DetChar, GRD, SYS)
jeffrey.kissel@LIGO.ORG - posted 16:01, Sunday 10 May 2015 - last comment - 16:51, Sunday 10 May 2015(18348)
Interesting Conflict between SEI Guardian's Set Point Monitor and Watchdog Front-End Reset, Only ETMX HEPI Trips during EQ
J. Kissel

5.6 Mag Earthquake near Japan at 2015-05-10 21:25:46 UTC.

For quite some time, when an ISI's watchdog trips, it runs the equivalent of a "down" script via front-end code (see E1300685, installed circa Fall 2013). One of the many things this front-end down script does is to force the isolation feedback loop filter banks' gains to zero, 0.0. However, in the era of the Celerier system of SEI guardians, the ISI nodes use some Jedi-Level-9 combination of the python tools in 
/opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/
to run the platform's isolation stages, and somewhere along the journey to "FULLY_ISOLATED," these filter banks are set to 0.1, and then 1.0 (and, I don't think, ever to zero, 0.0). Thus, when a watchdog trips, the new Set Point Monitor -- which thinks it's the only thing in control of these values -- sends a notification up the ladder that these gains have been changed to zero (0.0) by something other than the guardian, i.e. by the front end's down script. 
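
For a loose illustration of the conflict in Python (the values and channel name are invented for illustration; the real SPM logic lives in isiguardianlib):

    # Guardian's Set Point Monitor remembers the last value *it* wrote...
    last_setpoint = 1.0    # e.g. an ISO filter bank gain on the way to FULLY_ISOLATED
    # ...but the front-end down script has since forced the gain to zero.
    current_value = 0.0

    if current_value != last_setpoint:
        # guardian assumes it is the sole writer, so any mismatch gets flagged
        print('NOTIFY: isolation gain changed outside guardian: %g -> %g'
              % (last_setpoint, current_value))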

No big deal here; it just caught my eye while Elli and Nutsinee were asking me "Can I reset the SEI system... now? How about now?" after the earthquake mentioned above, and I recommended bringing the SEI manager to DAMPED (such that they could regain alignment, but be robust while the EQ rings down). I presume we'd never notice, since we rarely if ever request this interim DAMPED state of the SEI systems these days, but in the interest of non-expert clarity, it would be good to sort this one out.

I attach a screenshot of the relevant MEDM screens in this state.

Also note that ETMX HEPI tripped, which caused the ISI and SUS to go down a few seconds later, via the horizontal actuators (see second attachment), and it looks to be a pretty darn quick signal causing the trip. SYS_DIAG was reporting that the EX tidal was near the edge of its range...
Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 16:51, Sunday 10 May 2015 (18350)

This is, as you say, not a big deal, and pretty easy to work around in guardian.  We should modify the ISI guardians to reset the isolation gains to zero after a watchdog trip, thereby duplicating what is done in the front end watchdog reset code.  Watchdog trips will then be followed by a brief SPM notification blip in the ISI guardians, which will go away as soon as the guardian itself sets the gain to zero, thereby achieving parity between the guardian SPM setpoint and the actual front end value.  We should file this with the integration tracker.
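
A minimal sketch of what such a guardian reset could look like, assuming the standard GuardState/ezca interface and invented DOF and channel names (the real change belongs in isiguardianlib, not in a standalone state like this):

    from guardian import GuardState

    ISO_DOFS = ['X', 'Y', 'Z', 'RX', 'RY', 'RZ']  # assumed isolation DOFs

    class RESET_AFTER_TRIP(GuardState):
        """Zero the isolation gains after a watchdog trip, so the guardian
        SPM setpoint agrees with what the front-end down script already did."""
        def main(self):
            for dof in ISO_DOFS:
                # channel form assumed, relative to the node's ISI prefix
                ezca['ST1_ISO_%s_GAIN' % dof] = 0.0
            return True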

This situation will keep happening during these kinds of watchdog trips until we make these changes to the ISI guardians.

General note: SPM notifications don't necessarily mean you need to do anything unusual or out of the ordinary.  In this case the ISI guardians were telling you that something was a little strange, but overall the state was actually fine.  The notification would have gone away as soon as you reset the watchdog and the ISI guardians started isolating the platforms (and changing the isolation gains).  You didn't need to do anything special to deal with this situation.

H1 PSL (PSL)
richard.savage@LIGO.ORG - posted 11:26, Sunday 10 May 2015 (18347)
PSL FSS Reference Cavity path needs an alignment tweak, and maybe PMC too.

The FSS was locking on a 01 mode with the resonance threshold set to 0.55.

I raised it to 0.6 and it now locks on the 00 mode. 

However, the transmission is now only 0.93, down from about 1.4.

Apparently the alignment into the RefCav continues to drift.  The orientation of the 01 mode and the reflected spot indicate that it is off in both yaw and pitch.

Looks like the PMC input alignment could use a touchup too.

We should plan on touching up the alignments on Tuesday, if not before.

H1 PSL (PSL)
richard.savage@LIGO.ORG - posted 11:21, Sunday 10 May 2015 (18346)
PSL Shutdown and Restart

ElliK, NutsineeK, RickS

Looking at the CDS snapshots of PSL screens from home this morning, we realized that the laser had shut down.

As-found control room screens are shown in the snapshot below.

EPICS channel trends indicate that the Beckhoff interlock (IL) transitioned about 6 hours ago, but none of the recorded inputs (including the chiller flow inputs) triggered (see trend plot below).

The External Shutter had not tripped on the PSL_laser.adl screen, and the chillers were still running.

In the Diode Room, the Status screen showed only the "Interlock OK" field red; everything else under interlock was green.

Because we had no indication that the source of the fault was the Diode Chiller Flow, I decided not to swap the Diode and Crystal chiller interlock cables.


We had some issues getting the system back on-line, apparently due to accidentally closing a screen on the Beckhoff computer.  Eventually, we rebooted the computer and restarted the PSL software.

Images attached to this report
Non-image files attached to this report
H1 ISC
evan.hall@LIGO.ORG - posted 02:48, Sunday 10 May 2015 (18345)
Some ASC and OMC measurements

Sheila, Evan

ASC

Sheila finished taking the ASC sensing matrix data, so we now have pitch and yaw data to analyze.

Last week, we were not able to turn up the gains on the AS36I→SRM loops by more than a factor of 3, even with the HSTS compensation engaged. Tonight we were mysteriously able to turn them up by a factor of 7. They have a step response time of 2 s or so.

dETM boosts (FM1 for pitch and yaw) now come on automatically in the ENGAGE_ASC state.

OMC length

We looked briefly for any evidence of DARM contamination from OMC length noise. We drove at the error point of the OMC length servo with broadband noise and watched the response in DARM. We cannot see any linear coupling between 10 Hz and 1 kHz (the coherence is <0.1). The coupling seems to be quadratic in drive strength; as the drive level is stepped up by a factor of 2, the noise in DARM increases by a factor of 4 (see attachment). If one looks at the red, green, and orange curves and extrapolates the quadratic behavior to the unexcited control level, it seems that the contamination in DARM near 80 Hz would be a little above 1×10^-22 m/rtHz.
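
For concreteness, the extrapolation goes like this (a sketch with invented numbers, not the measured data): if the contamination scales as the square of the drive level, then

    # quadratic extrapolation: n0 = n_meas * (d0 / d_meas)**2
    d_meas, n_meas = 4.0, 6.4e-21   # assumed drive level and resulting DARM noise [m/rtHz]
    d0 = 0.5                        # assumed unexcited control level (same units as d_meas)
    n0 = n_meas * (d0 / d_meas)**2  # -> 1.0e-22 m/rtHz, the order quoted above
    print(n0)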

There is, however, high coherence between DARM and OMC length from 2 to 4 Hz, with no excitation.

Thermal (?) drift

We still seem to be battling a drift seen in POP90 (also seen in REFL_LF) which limits our locking time at 10 W to about 2 hours. This seems to be very repeatable: POP90 continually rises until, after about an hour, some of the ASC yaw loops start to show low-frequency instability (particularly the cETM and SRM loops). After another hour, the instability grows so large that the interferometer unlocks. If we want longer locks, we have to fix this.

Images attached to this report
H1 ISC
eleanor.king@LIGO.ORG - posted 17:25, Saturday 09 May 2015 (18343)
SRC length measurement attempt

Nutsinee, Elli

We came in today to lock DRMI and measure the SRC length.  We spent the morning trying to lock DRMI; we were finding initial alignment difficult.  We changed H1:LSC-REFL_A_RF9_WHITEN_GAINSTEP to 7 so that we could lock PRX.  In the afternoon we tried to take a DRMI cavity scan measurement using the auxiliary laser.  We unplugged the BBPRrefl power (power to the 3f PD) on ISCT1 while measuring, and plugged it back in at the end of the day.  The SNR looked poor, so we have added an RF amplifier (ZFL-2500VH+) to the 1611 PD on ISCT1.  (This is now the same setup we were using on ISCT6.)  At this point we handed the interferometer over to Evan.  We had a chat with him about our troubles with DRMI alignment, and will try again soon.
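
For reference, a one-off channel change like the whitening gain above can also be scripted; a minimal sketch using the ezca Python interface (the channel name is from this entry; the prefix handling is an assumption):

    import ezca

    ez = ezca.Ezca(prefix='H1:')                # IFO prefix (form assumed)
    ez['LSC-REFL_A_RF9_WHITEN_GAINSTEP'] = 7    # the same setting we made by hand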

H1 PSL (PSL)
richard.savage@LIGO.ORG - posted 08:01, Saturday 09 May 2015 (18340)
Summary of PSL chiller work over the past few days

EdM, JasonjO, JeffB, PatrickT, PeterK, RickS

As the frequency of PSL shutdowns apparently initiated by the PSL Beckhoff interlock system increased, on Thursday morning we decided to take the system down and make some changes.  Note that all Beckhoff interlock triggers were apparently generated by the Diode Chiller flow interlock input.

We have had some experience with failures of PSL chiller flow sensors that use a rotating rotor.  Replacement sensors that use optical sensing of eddies induced by a vane in the water flow (no moving parts) have been tested on the PSL reference system in Hannover.  We installed these sensors in the spare PSL chillers (there are two: the Crystal Chiller and the Diode Chiller).

Because we don't have spare 480 V 3-phase circuits available in the Chiller Room, we were not able to test the spare chillers, but we did test the installation process.  Thursday morning we shut down the PSL and installed the new flow sensors in the chillers that had been operating.  We elected to do this rather than switch to the spare chillers for several reasons, including minimizing down time.

We found that the chillers would not operate with the new flow sensors.  Further, we found that the chiller controllers would not function normally, apparently unrelated to the flow sensors.  After consultation with the Thermotek (manufacturer) reps over the phone, we decided to switch to the spare chillers.

We found that the spare chillers would not operate with the new flow sensors either, though the sensors had been installed and tested in Hannover and detailed installation instructions had been provided and followed.

We swapped the flow sensors in the spare chillers back to the original-style sensors.  We first found that the Diode Chiller flow sensor appeared to be faulty, so we swapped it with a similar sensor.  We eventually found that the chiller-side wiring of the flow sensor in the Diode Chiller differs from that in the Crystal Chiller: the black and white wires on the sensor need to be swapped.

While we were writing an aLog entry Thursday evening, the chiller tripped again, twice, so we swapped out the Diode Chiller sensor one more time.  We left Thursday night with the chiller apparently functioning properly.  However, within a few hours, Sheila and Evan gave up commissioning efforts after several shutdowns in quick succession.

Trends from Thursday night, when the chillers were operating but the laser had shut down due to a Beckhoff IL trip, seemed to indicate Beckhoff IL events that were not triggered by any of the inputs, including the Diode Chiller flow input.  Having observed Beckhoff IL events from the Diode Chiller only, even with the spare chillers and several different flow sensors (with the Crystal Chillers apparently operating without incident, and no indication that there actually was a flow interruption in the Diode Chillers), we decided to investigate functionality issues with the Beckhoff IL controller.

We attempted to swap in the spare Beckhoff IL module, but found the issues detailed in Patrick Thomas's aLog entry.  We switched back to the original Beckhoff IL controller and eventually Patrick and PeterK were able to get it back on-line late yesterday (Friday) afternoon.

With the chiller apparently operating normally, we decided to leave it running and see if some commissioning work could be accomplished.  We expect that the shutdowns will continue until we remedy the real source of the problems.  However, we removed and re-connected most relevant cables and there is a remote possibility that we could have fixed an intermittent connection.

Next steps:

If we see the shutdowns attributed to Diode Chiller flow errors again, we will swap the Diode Chiller and Crystal Chiller interlock cables to see whether the shutdowns follow the cables (a cable or Beckhoff IL problem) or stay with the chiller (a Diode Chiller problem).

We also ordered three new old-style flow sensors that will be delivered today along with two new controllers for the originally-operating chillers.  Next week we will work to get the spare chillers operating, first with the old-style sensors, then maybe with the new-style sensors, so that we will have a backup ready.

We will also discuss the issues we encountered with the Beckhoff IL module replacement with the people at NeoLase in Hannover.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 03:44, Saturday 09 May 2015 - last comment - 00:46, Monday 11 May 2015(18339)
SRCL feedforward, ASC sensing matrix

Evan, Sheila

The PSL has been working all night tonight.  We got a chance to try SRCL feedforward.  It works, but it doesn't improve the noise.  We were able to reduce the noise coupling by 12 dB at first, but later saw that the coupling was changing, by about 6 dB at most.  The second time we tried it we did not get as good a subtraction.  Evan has new measurements of the SRCL coupling and frequency noise coupling to include in the noise budget over the weekend.  

Other things:

We used the TMSY picomotors to center the beams on the QPDs; this didn't change the combination of QPDs we used for the ITM loops.  We might want to check the normalization for the Y-arm QPDs next time we do initial alignment. 

We can switch SRM coil drivers in full lock; coil driver switching for PRM, PR2, SRM, and SR2 is now in the guardian in the DRMI_ON_POP state. 

We started to measure the ASC sensing matrix with all the loops closed, including the ITM loops.  We got a reasonable measurement for pitch; the data is all on disk, although we had some trouble extracting it.  We were in the middle of tuning the yaw excitations when we got another earthquake.  We were motivated to work on ASC because we have been aligning a little by hand before turning on the ASC all night, and we are hoping to find more diagonal signals so that we don't have to do this each lock. 

Comments related to this report
evan.hall@LIGO.ORG - 00:46, Monday 11 May 2015 (18351)

Here are the new SRCL and frequency noise couplings projected onto DARM.

The good news is that we no longer seem to have a frequency noise coupling shelf around 100 Hz. It also seems that the SRCL feedforward pushes the SRCL noise down below 10^-19 m/rtHz around 80 to 100 Hz. But somehow the noise in DARM in this region still seems to be nonstationary, and (qualitatively) we haven't really seen any noticeable noise reduction here.

I repeated the SRCL injection measurement on Saturday, and got similar results as what is shown here.

The MICH and intensity noise traces are stale and need to be retaken. However, I did not see coherence between MICH control and DARM when looking at the control noises.

Non-image files attached to this comment
H1 PSL
patrick.thomas@LIGO.ORG - posted 20:39, Friday 08 May 2015 - last comment - 12:19, Saturday 09 May 2015(18338)
PSL Beckhoff problems
Jason, Patrick, Peter, Rick

In summary:

1. We could not get the spare interlock chassis to work. The original is back in.
2. We restored the C:/TwinCAT directory to a backup that Peter took in June of 2014.
3. The PSL is running again.

The spare chassis was swapped in.  The status screen no longer made sense: pushing the interlock button made the wrong indicator turn red.  I did a scan of the EtherCAT modules.  It appeared to indicate that they were unchanged (that is, the spare had the same modules in it as the original).  We suspected that maybe the safety PLC inside the interlock chassis had to be started separately?  It seemed to require a username, serial number, and password.  Peter's attempt to enter these and start it did not appear to work.  I tried restarting various things, including the entire computer.  No luck.  I suggested resetting all of the variables, including the persistent ones.  Bad idea.

Starting the PLC now gave divide-by-zero errors, and it would not run.  It appeared that some of the persistent variables, which were now all zero, were used in the denominators of fractions.  They appeared to be related to some calibration settings.  We couldn't get past this.  This confuses me: if the code does not start with them zero, and upon the first run they are by default zero... how were they initially set?

It appeared that the files for saving the persistent variables were located in the C:/TwinCAT/Boot directory.  We tried replacing that directory with a backup that Peter had.  This did not help; same divide-by-zero errors.  We tried to delete and replace the entire TwinCAT directory.  Windows would not let us delete it.

At some point scanning the EtherCAT modules started showing a whole bunch of differences and the light on the front of the interlock chassis no longer came on.

We decided to put the original interlock chassis back and try restoring the computer to a backup that Peter had taken in June of 2014. After the original chassis was put back the light on the front still did not come on. Peter tried restoring just the TwinCAT directory with the restore software. He was able to do so. We opened the link to the visual. It seemed to run but was blank. I closed it, opened the system manager and set it to run. I opened the PLC and logged in. The light on the front of the interlock chassis came on! The PLC was running again.


Remaining questions:

1. How were the persistent variables originally set? Are the values they are set to now (from the backup) the same as they were before?
2. What is different about the spare interlock chassis? Could it be some programming in the safety PLC? Different wiring?
Comments related to this report
daniel.sigg@LIGO.ORG - 12:19, Saturday 09 May 2015 (18342)

This may not help, but

  • Safety PLCs from Beckhoff have their own microprocessor and code, which needs to be loaded into their EEPROM/flash.
  • You simply need to protect the code against division by zero. You might be able to log into the PLC and set the variables by hand before you start the program.
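
To illustrate that second point, the guard pattern sketched in Python (the actual PLC code is TwinCAT structured text; the names here are invented):

    def calibrated(raw, scale):
        # A persistent calibration factor can come back as zero after a
        # variable reset, so guard the division instead of trusting the store.
        if scale == 0.0:
            return 0.0  # or flag an 'uncalibrated' error
        return raw / scale
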
H1 GRD (SEI)
thomas.shaffer@LIGO.ORG - posted 18:25, Friday 08 May 2015 - last comment - 09:14, Saturday 09 May 2015(18337)
SEI configuration Guardian testing

With no PSL and no one else really around, I wanted to test out the new sensor correction/blend SEI configuration nodes.

I got the blend side of it working just fine, switching between desired blends perfectly after only a few syntax errors. I watched the NXT and CUR filter banks closely to make sure that they were doing what they were supposed to, just in case. The sensor correction part didn't go as smoothly, though. There were a few issues that I did not figure out before I had to go.  I wish I had time tonight to finish this, but unfortunately that is not the case.

I stopped the nodes just in case they wanted to mess with anything over the weekend.

On Monday:

Overall, it went pretty well to see this come semi-alive. I put all of the blends back where I found them, as well as the SC filters, so everything should still be set for the weekend.

Comments related to this report
jameson.rollins@LIGO.ORG - 09:14, Saturday 09 May 2015 (18341)

> I need to find a way around the added-on ezca prefix for channel names. I have one test that looks at guardian state channels to check for transitions, and the added prefix I can't seem to get around... for now.

I'm not sure what you're trying to do here, but my guess is that we can find a better way to do it.

H1 SEI
hugh.radkins@LIGO.ORG - posted 16:09, Friday 08 May 2015 (18336)
STS2 Seismos returned to almost usual state

STS2-B in the BierGarten is still the PEM unit destined for the vault.  But STS2-A (HAM2) is back at its home by HAM2, and all cables have been returned to their original locations.  More looks after it settles.

H1 General
travis.sadecki@LIGO.ORG - posted 16:03, Friday 08 May 2015 (18335)
OPS Day Shift Summary

9:39 Hugh going to CER

11:07 Nutsinee to HWS table near HAM 4

11:33 Nutsinee out

11:39 Nutsinee to LVEA

11:54 Nutsinee out

12:42 Richard to EY for network cabling

12:45 Fil to EY

13:26 Elli to LVEA HAM4 HWS work

13:26 Bubba to LVEA for critter control

13:30 Jim B and Ryan to EY for network switch work

13:42 Bubba out

14:35 Jim B and Ryan out

14:48 Jim B, Ryan, and Elli to EY

H1 SEI
hugh.radkins@LIGO.ORG - posted 15:52, Friday 08 May 2015 (18334)
SEI SDFs greened

I greened up the SDFs for H1 by accepting the ETMs & ITMs running the 90 mHz blends, as opposed to the 45 mHz blends, for the beam-line DOF.  Also accepted the matrix changes for HAMs 1, 2 & 3 for the STS ground seismometer input switch to the C unit at HAM5.

H1 CDS
james.batch@LIGO.ORG - posted 15:52, Friday 08 May 2015 - last comment - 17:36, Saturday 09 May 2015(18333)
Remote control power switch installed for HWS camera at EY
A remote controlled power switch has been installed at EY to allow power for the HWS camera to be turned off or on from the control room.  The IP address for the switch is 10.105.0.155, access using telnet, port 23.  Ellie King has the instructions for controlling the power.  The camera is plugged in to outlet #1.
Comments related to this report
eleanor.king@LIGO.ORG - 17:36, Saturday 09 May 2015 (18344)

The HWS is plugged into outlet J1.  To turn on the power, type into a terminal:

telnet 10.105.0.155 23     (open telnet)
@@@@                       (start IPC)
?                          (brings up help screen with list of commands)
A10                        (turns on all power outlets; "A00" turns them all off)
LO                         (logs out)
^]                         (close telnet)
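
If this ever needs scripting, the same sequence can be driven with Python's standard telnetlib (a sketch; the command strings are from the list above, and the fixed delays are an assumption):

    import telnetlib
    import time

    tn = telnetlib.Telnet('10.105.0.155', 23, timeout=10)
    tn.write(b'@@@@\r\n')   # start IPC
    time.sleep(1)
    tn.write(b'A10\r\n')    # all outlets on ('A00' would turn them all off)
    time.sleep(1)
    tn.write(b'LO\r\n')     # log out
    tn.close()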

H1 CDS
james.batch@LIGO.ORG - posted 15:48, Friday 08 May 2015 (18332)
Interruption of CDS network to EY
The CDS switch at EY needed to be power-cycled to allow us to do a password recovery procedure. This interrupted data collection for vacuum channels, the HEPI pump controller, and weather station for the end station.  Vacuum data has a gap from approximately 13:51 PDT to 14:06 PDT.

EY Weather was restored at 3:47

EY dust seems to have not been affected.



LHO VE
kyle.ryan@LIGO.ORG - posted 15:45, Friday 08 May 2015 (18331)
Unexpected vacuum alarms from Y-end - why?
Didn't interpret any of the WPs to result in vacuum alarms
LHO VE
bubba.gateley@LIGO.ORG - posted 14:06, Friday 08 May 2015 (18329)
Beam Tube Washing
Scott L. Ed P. Chris S. Cris M. (1/2day)

5/6/2015
Cleaned 45.7 meters ending at HNW-4-034. Removed lights and began moving equipment to next section north.

5/7/2015
Finished moving equipment and hanging lights. Start vacuuming support tubes and tube cleaning. Cleaned 36.5 meters of tube, ending 16 meters north of HNW-4-035.

5/8/2015
Cleaned 45.7 meters ending at HNW-4-038. Cleaning crew left at noon.
Non-image files attached to this report
H1 COC
betsy.weaver@LIGO.ORG - posted 15:32, Wednesday 11 January 2012 - last comment - 14:51, Friday 08 May 2015(2018)
QUAD Glass Mass measured weights
For the record, following are the measured weights of the QUAD glass penultimate masses (PUMs) and test masses that are currently here at LHO.  Most of this data can also be found on the CIT optics Nebula web page.  Note that the labels of the masses are slightly confusing, as the optics have been coated specifically for the one-arm.

MASS      LABEL          INSTALL LOCATION
39,653g   ETM02 (TM)     BSC8 ITMy
39,626g   ETM04 (PUM)    BSC6 ITMy PUM
39,689g   D050421-001    BSC6 ETMy PUM (was LASTI mass)
39,613g   ETM04 (PUM)
39,641g   ETM05 (PUM)
39,633g   ITM01 (PUM)
39,621g   ETM03 (PUM)

(We didn't measure the ITMy PUM because it was the first mass bonded, and we were not yet wise to the need for weights.)

Comments related to this report
betsy.weaver@LIGO.ORG - 23:30, Friday 13 January 2012 (2046)
We found more weight numbers and I made a typo in my original alog.  The correct table is this:

MASS     LABEL        INSTALL LOCATION
39,653g  ETM02 (TM)   BSC8 ITMy
39,583g  ITM04 (PUM)  BSC8 ITMy PUM
39,626g* ETM04        BSC6 ETMy
39,689g  D050421-001  BSC6 ETMy PUM (was LASTI mass)
39,613g  ETM04 (PUM)
39,641g  ETM05 (PUM)
39,633g  ITM01 (PUM)
39,621g  ETM03 (PUM "holy mass" has extra ground recesses) 
39,650g  ITM08 (PUM)
39,616g  ITM05 (PUM)

* Mass weighed with ears/prisms after bonding/curing.

- Bland, Barton, Moreno
betsy.weaver@LIGO.ORG - 14:51, Friday 08 May 2015 (18330)

I have reason to doubt the weight listed for ETM02 "TM" - I do not know where this number came from.
