Reports until 16:23, Thursday 03 April 2014
LHO VE
kyle.ryan@LIGO.ORG - posted 16:23, Thursday 03 April 2014 (11161)
~1600 hrs. local -> Switched IP5 voltage from 5000V to 7000V
H1 SUS
betsy.weaver@LIGO.ORG - posted 16:18, Thursday 03 April 2014 - last comment - 09:10, Friday 04 April 2014(11160)
ITMy fiber welding completed

Travis, Giles, Gary, Jason, Norna, Betsy

This afternoon, Travis, Gary, and Giles finished welding fibers to the ITMy (ITM011) and its glass PUM.  At some point during the welding process on Wednesday a roll of the Test Mass was introduced.  It was noticed this morning before the final destress/annealing took place.  The team attempted a correction to alleviate ~1/2 of the roll error, which was somewhat successful.  After reevaluating the roll tolerance with Dennis and Calum it was decided to leave the roll error as is.  The Test Mass is now hanging from the fibers.

 

Note, the roll error observed this morning was +/- 1.2mm (~7mRad).  We determined that this roll occurred between welding the 2 sides of the suspension, with the masses locked down the entire time.  The PUM did not show this large roll error.  After correcting and suspending the ITMy, the roll was found to be +/- 0.75mm (~4mRad).  While a roll "tolerance" has never really been set, we try to keep the roll error within +/- 0.2 mm because that is what we have been able to achieve on all of the previous monolithic suspensions.
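
For reference, the mm-to-mrad numbers above are consistent with reading the offset at the barrel of the optic. A quick sketch of that arithmetic, where the 0.17 m lever arm (one nominal test-mass radius) is an assumption for this sketch rather than something stated above:

import math

# Check the mm -> mrad roll conversion quoted above, assuming the offset is
# measured at the barrel of the optic, i.e. a lever arm of one test-mass
# radius (nominal 340 mm diameter -> 0.17 m).  The radius is an assumption,
# not a value stated in the entry.
R = 0.170  # m, assumed lever arm (test-mass radius)

for offset_mm in (1.2, 0.75, 0.2):
    roll_mrad = math.atan(offset_mm * 1e-3 / R) * 1e3
    print("%.2f mm  ->  %.1f mrad" % (offset_mm, roll_mrad))
# 1.20 mm -> 7.1 mrad, 0.75 mm -> 4.4 mrad, 0.20 mm -> 1.2 mrad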

Comments related to this report
betsy.weaver@LIGO.ORG - 16:26, Thursday 03 April 2014 (11162)

Email from Dennis regarding acceptable roll tolerances of an ITM:

Calum,
Using the Zemax model to ray trace, I find that a 7 mrad roll rotation
of the ITM causes only ~0.03 mm radial decentering of the beam at the BS
and a ~0.8 mm radial decentering of the beam at the PRM and at the SRM.
I think that this is acceptable.

Other effects of a DC roll error in the ITM:
- increased coupling of vertical ISI motion to roll motion of the ITM
... but so what?
- slight rotation of the phase maps of the ITM ... but so what?
- slight shift in the violin mode frequencies due to the length change
(I think the force in each is not changed due to the roll bias error),
but this is ~1 part in 1000 ... so likely within the violin mode
repeatability anyway?

-- 
Dennis Coyne
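
As a rough cross-check of the "~1 part in 1000" violin-mode shift quoted in the email, a minimal sketch assuming a nominal ~600 mm fibre length and a fibre length change of order the residual roll offset (both assumptions on our part, not taken from the email):

# With the tension unchanged, f_n ~ (n / 2L) * sqrt(T / mu), so df/f ~ -dL/L.
# Assumptions (not from the email): nominal fibre length ~600 mm, and a fibre
# length change of order the residual 0.75 mm roll offset.
L_fibre = 0.600    # m, assumed nominal fibre length
dL = 0.00075       # m, assumed length change, of order the residual roll offset

print("fractional violin-mode frequency shift ~ %.1e" % (dL / L_fibre))
# ~1e-3, i.e. roughly 1 part in 1000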
giles.hammond@LIGO.ORG - 18:07, Thursday 03 April 2014 (11164)SUS
Attached are some photos of the welds and the final suspension. As mentioned by Betsy, a +/-1.2mm (~7mRad) roll appeared between welding the two sides, with the ITM and PUM locked. As a result, during the destress, the right-side PUM welds developed slight necking. The stock still has a large radius at this point, and we will run some thermal noise models to verify that no significant change in performance is expected.
Images attached to this comment
mark.barton@LIGO.ORG - 08:58, Friday 04 April 2014 (11169)

To see if the residual roll was problematic, I prepared a case mark.barton/20140304TMproductionTMrollpert of the QuadLite2Lateral model with the latest monolithic parameters but with a perturbation to d4 (the height of the fibre attachment point above the optic COM) at each of the fibres, positive on one side and negative on the other. Even with a perturbation of ±1.2 mm (the value reported to me before the extra de-stressing reduced it to 0.75 mm), there was no visible difference in any of the top mass force/torque to optic TFs, and in particular no extra peaks from cross coupling. This is to be expected because the optic is almost perfectly symmetrical in roll and the compliance of the fibres was not changed (the de-stressing did not touch the central section of the fibres, so this was not changed in the model). So the performance of the suspension should not be degraded.

jason.oberling@LIGO.ORG - 09:10, Friday 04 April 2014 (11170)

Final alignment numbers for the monolithic (all directions/rotations reported from the view of the IAS equipment, i.e. looking at the HR face of the ITMy):

  • Pitch measurements
    • PUM: 740 µrad up
    • ITMy: 400 µrad down
    • Differential: 1.14 mrad down
    • Differential Spec: 2.0 mrad
  • Roll
    • PUM: ±0.15 mm CW
    • ITMy: ±0.9 mm CW
    • Differential: ±0.75 mm CW
  • Center of Mass Separation
    • Left: 600.5 mm
    • Right: 602.0 mm
    • Target (for both sides): 602.0 mm
H2 SEI (SEI)
mitchell.robinson@LIGO.ORG - posted 16:13, Thursday 03 April 2014 (11159)
Staging building, 3IFO (unit 3) progress
The stage 2 lower walls have been added. The bolts that go up through the optical table have all been torqued.
LHO VE
kyle.ryan@LIGO.ORG - posted 16:11, Thursday 03 April 2014 (11157)
Connected portable RGA to beam tube port Y2-1 at Y-mid
Will be baking RGA over the next week or two(?) while connected to but isolated from Y2-1
H1 General
andres.ramirez@LIGO.ORG - posted 16:10, Thursday 03 April 2014 (11156)
Ops Shift Summary
8:58-11:09 Heading to EndX/EndY to attach some signs – Justin
9:09-12:00 Working by HAM3 TCS rack – Aaron
9:20-11:15 Using West crane to fly Leak Detector over YBM – Kyle
9:40-      Going to EndY to work on Oplev/Pcal Receiver Pylon – Craig 
9:40-10:00  Heading to EndY to remove excess grout from Op-Lev pier base – Jeff
10:15-12:00 Going into the LVEA West bay area – Betsy
10:48-12:00 Going to EndX TMS Lab – Corey
11:01-      Back to the LVEA for more TCS work - Aidan
11:09-12:00 Cleaning by the Beer garden area (LVEA) - Chris
12:40-14:16 Going to EndY to do cleaning work - Karen
12:40-14:49 Going to EndX to do cleaning work – Chris
13:37-14:20 Giving a tour in the LVEA - Fred
14:15-      Heading to End Y – Jeff
14:22-14:32 Closing PT-246 at Y-mid - Kyle


H1 SEI
sheila.dwyer@LIGO.ORG - posted 15:31, Thursday 03 April 2014 (11155)
HAM3 HEPI L4C trip

I have turned up the watchdog threshold.  The BS also tripped around the same time; I will leave it for whoever untrips it to post the plot.

This is the HEPI L4C WD that, as Fabrice has told us, we can defeat by raising the threshold, because the L4Cs are not in the loop.

Since we are routinely defeating this watchdog, it seems like it would be much better if it didn't exist. 

Red letters are to make Rich happy.

Images attached to this report
H1 TCS (TCS)
aidan.brooks@LIGO.ORG - posted 11:15, Thursday 03 April 2014 (11152)
D1101114 - PSL/IO/TCS chassis is online again

We've installed the rotation stage and this chassis is up and running again.

We've yet to test the rotation stage.

H1 PSL (PSL)
sheila.dwyer@LIGO.ORG - posted 10:23, Thursday 03 April 2014 (11150)
PSL shut down

The PSL shut itself off this morning because of a flow sensor error.

The only hitch was in opening the shutter; I had to call Andres and Corey for help from the control room.  The sequence that eventually opened it was: they reset the flow sensor, then reset the shutter, and then I went back to the diode room and opened the shutter from the Beckhoff computer.

This seems convoluted; is this the intended operation?

H1 TCS (TCS)
aidan.brooks@LIGO.ORG - posted 10:08, Thursday 03 April 2014 (11149)
PSL/IO/TCS rotation stage chassis is down

Aidan, Thomas

We turned off this chassis to hook up the TCSX rotation stage.

H1 General
andres.ramirez@LIGO.ORG - posted 09:32, Thursday 03 April 2014 (11148)
Craning Activity in the LVEA
Kyle will be operating West crane to transport Leak Detector over YBM
LHO VE
john.worden@LIGO.ORG - posted 08:20, Thursday 03 April 2014 (11147)
YEND Pumpdown

For Rai: both the Pirani and cold cathode data.

Images attached to this report
Non-image files attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:02, Thursday 03 April 2014 (11146)
CDS model and DAQ restart report, Wednesday 2nd April 2014

no restarts reported

H1 SEI (CDS, SEI)
sheila.dwyer@LIGO.ORG - posted 21:58, Wednesday 02 April 2014 (11145)
Trips from tonight's earthquake, guardian

We had some trips during the earthquake.  The attached plot of ETMX is from the first trip; the plot of ITMX is from a later trip that happened after Chris untripped the watchdog and the guardian tried to bring it back (the ground motion was still high and we were still using Tcrappy).

Since everything is tripped and those we have untripped are tripping again, I am not going to try to post all of these plots.  Anyone who is interested is free to find the data in the frames.

some things about guardian:

It would be nice if the manager displayed a more useful error message when the ISI trips; the message we got was: node ISIS_BS_ST2: NOTIFICATION

The ITMY manager says WATCHDOG TRIPDAQKILL, a more useful message.

We tried moving ITMX to T750; it tripped again trying to isolate.  Chris was able to get ITMX to DAMPED, but he first had to figure out that it is necessary to go to INIT, which is not obvious.  ETMX did succeed in isolating with the blends on Start.  Since we have had huge earthquakes the last two nights it is not surprising that guardian couldn't bring the ISIs back on Tcrappy.  We will need to wait for more normal trips to test that further.

Also, Rich said that I should write about problems in the alog in red letters, so here are a few, but I'm making them yellow since they are more like annoyances than huge problems:

ISI overview screens bring the control room workstations to a halt.  73% of my CPU time is currently devoted to medm (Xorg), and all I have open are 2 ISI screens and 2 ISI watchdog screens.  I'm not sure if this is a problem with the workstations or the screens, but it is pretty annoying.

It does seem like the control room workstations are crashing much less than they were; thank you for that, CDS!

The plotting tool for WD trips doesn't always work; here is a sample error message:

  File "/opt/rtcds/userapps/release//isi/common/scripts/wd_plots/main.py", line 170, in _BufferDict
    abs_threshold_mask=abs_threshold_mask, host=host, port=port)
  File "/opt/rtcds/userapps/trunk/isi/common/scripts/wd_plots/pydv/bufferdict.py", line 243, in __init__
    self.add_channels(channel_names, wd_state_mask=wd_state_mask, abs_threshold_mask=abs_threshold_mask)
  File "/opt/rtcds/userapps/trunk/isi/common/scripts/wd_plots/pydv/bufferdict.py", line 382, in add_channels
    abs_threshold_mask=abs_threshold_mask)
  File "/opt/rtcds/userapps/trunk/isi/common/scripts/wd_plots/pydv/bufferdict.py", line 344, in __fetch_data
    self.conn.clear_cache()  # clear any saved information from a previous connection
RuntimeError: Input/output error

 

It would be nice to have the brief guardian screen embedded in the ISI screens, so you can tell what is going on.  I would greatly prefer it to be the brief version that still allows you to select the requested state (at least for the manager), because these screens let us use guardians despite the bugs that plague us with the other guardian medm screens.

It would also be nice if the guardian medm screens themselves worked no matter which workstation you use or how you opened medm, and if they worked at all at an end station.  There seems to be no way to open a guardian screen from an end station, not even using guardmedm.  My current approach during the day is to call the operator to ask them to misalign/realign optics; I'm sure they are going to get sick of this soon.

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 21:14, Wednesday 02 April 2014 - last comment - 16:31, Thursday 03 April 2014(11144)
ALS WFS

Chris, Sheila, Alexa, Daniel,

This morning Chris rephased the ALS WFSs and balanced the gains of the different sections.  Results were very sensitive to the cavity alignment, so a few iterations were needed.  Then he measured the sensing matrix again several times; this also depends on the cavity alignment.  These three measurements were made with as good an alignment as we could find by hand (phase in degrees in parentheses for those elements where the phase was inconsistent):

        ETMX PIT             ITMX PIT                        ETMX YAW                            ITMX YAW
WFS A   -944, -994, -1211    858, 807, 613                   -61.8, -61.9, -59.3                 1043(-27), 1043(-107), 1031(40)
WFS B   -580.2, -669, -877   766(15), 935(114), 801(-179)    52.0(-132), 52.1(-123), 51.4(-10)   568, 541, 570

With this information we zeroed out the WFS B to ITMX PIT, WFS B to ETMX YAW, and WFS A to ITMX YAW elements (the ones where the phase wandered) and then inverted the matrix.  We also offloaded the WFS alignment signals to the top stage, to avoid saturating the DAC.
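
For clarity, a minimal numpy sketch of that bookkeeping, using rough averages of the three measurements above.  Splitting into separate 2x2 pitch and yaw matrices, and the channel ordering, are assumptions for this sketch rather than the actual filter-bank configuration:

import numpy as np

# Rows are (WFS A, WFS B); columns are (ETMX, ITMX).  Values are rough
# averages of the three measurements in the table above.
S_pit = np.array([[-1050.,  760.],    # WFS A -> ETMX PIT, ITMX PIT
                  [ -709.,    0.]])   # WFS B -> ITMX PIT zeroed (phase wandered)
S_yaw = np.array([[  -61.,    0.],    # WFS A -> ITMX YAW zeroed (phase wandered)
                  [    0.,  560.]])   # WFS B -> ETMX YAW zeroed (phase wandered)

# Inverting the sensing matrices gives the output matrices that turn the two
# WFS error signals into ETMX/ITMX drive signals.
M_pit = np.linalg.inv(S_pit)
M_yaw = np.linalg.inv(S_yaw)
print(M_pit)
print(M_yaw)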

Chris has also written a script that centers the beam on the WFS using the picomotors.  It is userapps/als/h1/scrpits/als_x_wfs_center.py.
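
Very roughly, the centering idea looks like the sketch below.  All channel names, gains, and tolerances are placeholders (not read from als_x_wfs_center.py), so treat it only as an illustration of the loop:

import time
import epics  # pyepics

# Step the picomotors until the WFS DC centering signals are small.
DC_PIT   = "H1:ALS-X_WFS_A_DC_PIT"      # hypothetical DC centering readbacks
DC_YAW   = "H1:ALS-X_WFS_A_DC_YAW"
PICO_PIT = "H1:ALS-X_PICO_1_PIT_STEP"   # hypothetical picomotor step commands
PICO_YAW = "H1:ALS-X_PICO_1_YAW_STEP"

GAIN = 100.0   # picomotor steps per unit of DC centering error (placeholder)
TOL  = 0.01    # stop when both errors are below this (placeholder)

for _ in range(50):
    err_p = epics.caget(DC_PIT)
    err_y = epics.caget(DC_YAW)
    if abs(err_p) < TOL and abs(err_y) < TOL:
        break
    epics.caput(PICO_PIT, -GAIN * err_p)   # step against the measured offset
    epics.caput(PICO_YAW, -GAIN * err_y)
    time.sleep(1.0)                        # let the spot settle before re-reading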

We have also written a WFS relieve script at userapps/als/h1/scrpits/WFS/alsWFSreleive.sh.  This just relieves the top stage onto the offset of the M0_LOCK filter bank.
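
The essence of the relieve (ignoring ramping and any gain factors, and with placeholder channel names rather than the ones in alsWFSreleive.sh) is just to fold the accumulated loop drive into the bank's offset so the control signal can relax back toward zero:

import epics  # pyepics

# Sketch of the relieve idea only.  Channel names are placeholders, and a
# real relieve would ramp the offset rather than jump it.
for dof in ("P", "Y"):
    drive  = "H1:SUS-ETMX_M0_LOCK_%s_OUTPUT" % dof   # hypothetical
    offset = "H1:SUS-ETMX_M0_LOCK_%s_OFFSET" % dof   # hypothetical
    epics.caput(offset, epics.caget(offset) + epics.caget(drive))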

We were then successful in turning on all 4 DOFs, and we turned up the gain; we estimate that we got a UGF of 1 Hz for both of the pitch loops.  We made the attached measurement.  In the left plot, the COMM noise measurement (no cavity pole removed) from Monday night is the blue reference, and our measurement from tonight is in red.  The RMS down to 0.1 Hz is 10 Hz, and still almost 30 Hz down to 0.02 Hz.  We also have higher noise at 1-3 Hz; we don't know if this is caused by the changes to the WFS or if it is just a difference in the ground motion.  Because of the earthquake we aren't able to repeat the measurement without WFS right now, but we looked at some coherences to try to understand.  The top panel in the middle plot shows that the oplevs have coherence with the WFS control signals up to about 1 Hz.  We looked at some End X seismometer coherences and don't see anything (bottom panel of middle plot).  The right plot shows that the oplevs do have coherence with the COMM noise from 0.4-0.8 Hz.

This data is saved as COMM_NOISE_April_2.xml in my COMM folder (sheila.dwyer/ALS/HIFOX/COMM)

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 16:31, Thursday 03 April 2014 (11163)

While going over these sensing matrix measurements we found some problems with the templates, so we can probably disregard the measurement.  A better one is coming soon, we hope.

H1 ISC
keita.kawabe@LIGO.ORG - posted 17:22, Wednesday 02 April 2014 - last comment - 10:36, Thursday 03 April 2014(11139)
ISCTEY work today (Kiwamu, Keita)

Almost all of the bolts for the mirror holder bases etc. were not tight enough, so we retightened them. In one case a bolt that attaches the post to the base plate  was not tight, and I actually ended up rotating the entire post while tightening the bolt for the mirror holder. That was a steering mirror upstream of the green EOM. I was able to recover the alignment easily.

We tried to see the red beat note but couldn't.  The fiber power is decent (50-60uW), but we're throwing away a lot for the polarization analyzer PD, then we throw away 50% on the BS, and we're only getting 5-8uW at the BBPD.  And that is with almost 19mW being sent from the Prometheus.

Blocking the beam changes the noise floor of the BBPD.  We changed the NPRO temperature up and down and saw nothing.  The BBPD itself seems to be working OK; we connected the AM laser to the network analyzer and the BBPD was responding.

The mode matching looks suspicious, though it could just be a trick of the eye since one beam has much more power than the other.  Tomorrow we'll go back and try to improve the mode matching.

Comments related to this report
jaclyn.sanders@LIGO.ORG - 10:36, Thursday 03 April 2014 (11151)

Ideally the fiber polarization should be adjusted so you're not pitching ~75% of the beam onto the polarization monitor PD. I'd noticed the polarization drift last week and just didn't get around to adjusting the controller.

H1 SEI
sheila.dwyer@LIGO.ORG - posted 17:44, Tuesday 01 April 2014 - last comment - 11:43, Thursday 03 April 2014(11120)
ISIs tripped

There was a magnitude 8 earthquake in Chile, all BSC ISIs tripped. 

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 18:01, Tuesday 01 April 2014 (11122)

How did the new BSC SEI guardians do in recovering everything?  Were there any problems or hiccups?  Did it go smoothly?

brian.lantz@LIGO.ORG - 13:41, Wednesday 02 April 2014 (11130)ISC, SUS, SYS
Thanks for posting the data, Sheila. 
We (by which I mean Hugo) will look at this a bit more.  A quick look makes it seem that all three vertical drives on 3 of 4 platforms were pushed to the max within 1 second of each other.  Many people (W. Hua, Rana, Dan Clark, me, Peter F, et al.) have suggested that using the sensor correction only to isolate against differential motion, rather than trying to get rid of the absolute motion as the whole site heaves up and down, would be a smart thing to do (see the toy sketch below).  I think this will be a good example to look at.
-Brian
hugo.paris@LIGO.ORG - 11:43, Thursday 03 April 2014 (11153)

Time series were collected on both the BS (ground STS, high gain) and ETMX (ground T240, low gain) at LHO.

They can be found on the svn at:
ligo/svncommon/SeiSVN/seismic/Common/Data/2014_04_01__Chile_Earthquake_Data/

For reference, LHO ground sensor time series were also collected for a previous earthquake, recorded on March 3rd of 2014: ligo/svncommon/SeiSVN/seismic/Common/Data/2014_03_10__Earthquake_Data/

Images attached to this comment
H1 SEI (SEI)
greg.grabeel@LIGO.ORG - posted 17:26, Monday 31 March 2014 - last comment - 14:45, Thursday 03 April 2014(11102)
HAM 4 Parker Valve Replaced
Jim Warner, Greg Grabeel

Hugh noted there were some issues with a limited range of motion on HAM 4. Jim and I checked for rubbing but were not able to find anything readily apparent. We replaced the Parker valve on the NW corner horizontal actuator. At ~2:45pm we started running the actuator in bleed mode. At ~4:30pm we changed the valve positions to run mode. There were no leaks on the Parker valve. I did an air purge on the accumulators shortly after switching over to the run state. Jim will be running linearity and transfer function tests to see if this fixes the issues.
Images attached to this report
Comments related to this report
greg.grabeel@LIGO.ORG - 14:45, Thursday 03 April 2014 (11154)
That should read resistor stack, not accumulator.