Reports until 10:49, Thursday 21 January 2016
H1 PSL (PSL)
patrick.thomas@LIGO.ORG - posted 10:49, Thursday 21 January 2016 (25071)
Weekly PSL Chiller Reservoir Top-Off
Added 100 mL of H2O to the H1 PSL crystal chiller.
LHO General
patrick.thomas@LIGO.ORG - posted 10:36, Thursday 21 January 2016 (25070)
Earthquake
After moving RM1 and RM2, I performed an initial alignment and took the IFO to NLN. The lock lasted a couple of minutes before being taken out by a large earthquake.
H1 General
patrick.thomas@LIGO.ORG - posted 09:47, Thursday 21 January 2016 (25069)
What was that?
DRMI 1F locked, then dipped in power, and recovered (see attached screenshot).
Images attached to this report
H1 SUS
patrick.thomas@LIGO.ORG - posted 09:01, Thursday 21 January 2016 (25068)
Moved RM1 and RM2
I trended back the positions of RM1 and RM2 for the last 3 days (see attached). The differences appeared significant, so I used the alignment sliders to bring them back.

RM1:
Pitch: 380 -> 416
Yaw: 632 -> 331

RM2:
Pitch: -1149 -> -856
Yaw: 2664 -> 1306
Non-image files attached to this report
H1 SUS (ISC, SUS)
jenne.driggers@LIGO.ORG - posted 00:34, Thursday 21 January 2016 - last comment - 14:43, Wednesday 27 January 2016(25066)
Aligning and locking recovery attempt

tl;dr: Alignment of IMs and RMs should be checked, then re-run an initial alignment.


Xarm green alignment

Ed was struggling with the Xarm green alignment.  We set the guardian to locked no slow, no wfs.  I then turned on the loops one at a time.  Usually the camera centering loops are the problem, but they were the easy ones this time.  Eventually it was DOF2 that was causing trouble, so I had DOFs 1 and 3 closed and touched TMS by hand to get the error signals for DOF2 closer to zero.  I was able to close all the loops, and let the alignment run like normal after that.


Xarm IR alignment

Not really sure what the problem is here, but it's getting late and I'm getting frustrated, so I'm going to see if I can move on with just hand aligning the input. 

I suspect that the IMs need some more attention, so if Cheryl (or someone else following Cheryl's procedures) could check on those again in the morning, that would be great. 

Also, I'm not sure if the RMs got any attention today, but the DC centering servos are struggling.  I've increased the limits on DC servos 1 and 2, both pitch and yaw (they used to all be 555, now they're all 2000).  I also increased H1:SUS-RM2_M1_LOCK_Y_LIMIT from 1000 to 2000. 

Allowing the INP ASC loops to come on is consistently causing the arm power to decay from 0.85ish to lockloss. I didn't touch the ITM or ETM, but I'm not getting any more power by adjusting PR2 or IM4. 


MICH Dark not locking.  Discovered that BS's M2 EUL2OSEM matrix wasn't loaded, so no signal being sent out.  Hit load, MICH locked, moved on.


Bounce needed hand damping (guardian will prompt you in DARM_WFS state), roll will probably need it too.  This isn't surprising, since it happens whenever the ISI for a quad optic trips.  Recall that you can find the final gains (and the signs) for all of these in the ISC_LOCK guardian.  I like to start with something small-ish, and increase the gain as the mode damps away.  No filters need to be changed, just the gains.  My starting values are in the table below, and I usually go up by factors of 2 or 3 at a time. 

(starting gain values) Bounce Roll
ETMY +0.001 -1
ETMX +0.001 +1
ITMX -0.001 +1
ITMY -0.001 -1
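As a sketch only (the helper below is hypothetical, not actual guardian code), the gain-stepping recipe amounts to generating a geometric sequence from the table's starting value:

```python
# Hypothetical sketch of the hand-damping recipe above: start from the small
# signed gain in the table, then multiply by 2-3x as the mode amplitude decays.
# Only the gain changes; the filters are left alone.  Channel writes via ezca
# are omitted -- this just generates the gain sequence an operator steps through.

START_GAINS = {  # (bounce, roll) starting gains, from the table above
    "ETMY": (+0.001, -1.0),
    "ETMX": (+0.001, +1.0),
    "ITMX": (-0.001, +1.0),
    "ITMY": (-0.001, -1.0),
}

def gain_schedule(start, factor=2.0, steps=4):
    """Gains to step through while watching the mode height come down."""
    return [start * factor ** n for n in range(steps)]

etmy_bounce = gain_schedule(START_GAINS["ETMY"][0])  # 0.001, 0.002, 0.004, 0.008
```

The sign from the table is preserved automatically, since only the magnitude is scaled.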

I lost lock while damping the bounce mode (in DARM_WFS state), and the DRMI alignment is coming back much worse than the first few DRMI locks I had.  

I don't actually have a lot of faith in my input beam alignment, so I probably wouldn't be happy with any ASC loop measurements I take tonight even if I got the IFO locked.  Since we have an 8am meeting, I'm going to call it a night, and ask the morning operator to check my alignment and fix anything that sleepy-Jenne messed up.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:13, Thursday 21 January 2016 (25082)GRD, ISC
I've filed FRS tickets on a few of these items:
Beam Splitter Load Matrix: FRS #4255
HTTS to be included in alignment restoration: FRS #4256
jenne.driggers@LIGO.ORG - 14:43, Wednesday 27 January 2016 (25208)

I put a quick line in the DOWN state that loads the BS EUL2OSEM matrix. 

H1 General
jenne.driggers@LIGO.ORG - posted 23:25, Wednesday 20 January 2016 (25067)
Water on floor

The water cooler is leaking, and there is water all over the floor in the hallway leading to the control room. 

I have dried up much of it, and left a cone in the hall.  I moved the electronics that are stored in that area out of the way.  I also opened an FRS ticket (hopefully not an ECR....).

H1 General
edmond.merilh@LIGO.ORG - posted 20:06, Wednesday 20 January 2016 (25065)
Shift Summary - Evening

TITLE: Jan 20 EVE Shift 20:00-04:00 UTC (12:00-20:00 PST), all times posted in UTC

STATE OF H1: Commissioning

SUPPORT: Jenne, Sheila, Jeff and Evan

DAY OPERATOR: Patrick

SHIFT SUMMARY:

Power outage recovery ongoing. Patrick is still here. After some ESD woes and a few journeys to the end stations, the ESD situation seems to be resolved for now. Another trip to EX was made to power cycle the TMS coil drivers to clear suspect oscillations occurring in the 1 kHz neighborhood. Arm alignment was a daunting task and is still going on, as the Y arm WFS seem to be a destructive force rather than a help at this point. Jenne is working on it. Earlier, a missing beatnote in the PLL for the X-end ALS was repaired.

H1 CDS
patrick.thomas@LIGO.ORG - posted 19:20, Wednesday 20 January 2016 (25063)
Hardware injection is stopping and starting
This is being reported by the verbal alarms script.
H1 SUS
jenne.driggers@LIGO.ORG - posted 18:31, Wednesday 20 January 2016 (25062)
TMSX OSEMs oscillating

JeffK and Ed noticed that the ALS X spot was flashing much faster than normal, and they looked at the spectra of the TMSX OSEMs and saw large oscillations at a few kHz.  This seems to be a little worse than the situation in July (alog 20118).  

Ed and I went to the end station and power cycled both coil driver chassis, but the noise is still there.  Tomorrow (or tonight if we can't lock with this noise) someone should perhaps try Vern's trick of giving the connector an impulse force (a good ol' whack), which fixed the problem back in July.

Attached is a screenshot comparing the TMSX OSEMs with the TMSY OSEMs that do not have this problem.  Unfortunately, we only record these channels at 256 Hz, so we don't know exactly how long these peaks have been around (since this morning's power outage? longer?).

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 17:55, Wednesday 20 January 2016 (25061)
Conlog started
Error logged: Jan 20 09:18:39 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Incorrect string value: 'xEBxE26x1A?' for column 'value' at row 1: Error code: 1366: SQLState: HY000: Exiting.

I believe this may have happened after one of the end station Beckhoff computers was brought back online.
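For illustration only (this is not conlog's code; the helper names are mine): the error above is what MySQL raises when raw bytes that are not valid UTF-8 are written to a utf8 text column, and a client-side sanitize step would avoid the crash:

```python
# Sketch of the failure mode: the logged value contains bytes (0xEB, 0xE2)
# that cannot be decoded as UTF-8, which MySQL rejects with error 1366
# ("Incorrect string value").  One defensive option is to sanitize before insert.

raw = b"\xeb\xe26\x1a?"  # byte pattern mimicking the logged value 'xEBxE26x1A?'

def is_valid_utf8(data: bytes) -> bool:
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

def sanitize(data: bytes) -> str:
    # Replace undecodable bytes with U+FFFD instead of letting the insert fail.
    return data.decode("utf-8", errors="replace")

assert not is_valid_utf8(raw)  # this is what tripped conlog's insert
clean = sanitize(raw)          # readable bytes survive, bad ones become U+FFFD
```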
LHO General
patrick.thomas@LIGO.ORG - posted 17:24, Wednesday 20 January 2016 - last comment - 16:16, Thursday 21 January 2016(25057)
Ops Day Shift Summary
STATE OF H1: Still recovering from power outage.

ACTIVITY LOG (some missing):
15:44 UTC Unexpected power outage
16:38 UTC Fire department through gate to check on RFAR boxes
16:56 UTC Richard, Jeff B. and Jim W. to end stations to turn on high voltage and HEPI pumps
17:14 UTC Richard turning on h1ecaty1
17:14 UTC Jason and Peter to LVEA to look for an optic
17:39 UTC Richard and company done at end X and going to corner station to turn on TCS chillers and HEPI pumps
17:49 UTC Filiberto to end X to reset fire panel
17:56 UTC Vacuum group touring LVEA (6 or 7 people)
17:57 UTC HEPI pump stations and TCS chillers started in corner station
18:18 UTC Filiberto back from end X
18:24 UTC Jeff B. and Jason to LVEA to start TCS X laser
18:49 UTC Hugh bringing up and isolating HAM ISIs
19:16 UTC Richard and Jim W. to end stations to turn on ALS lasers
19:30 UTC Sheila to CER to power cycle all of the Beckhoff chassis
22:55 UTC Jason to LVEA to look at optical levers
23:02 UTC Jason back
23:34 UTC Dave restarting the EPICS gateway between the slow controls network and the frontend network in an attempt to fix an issue with the Beckhoff SDF
23:47 UTC Dave restarting the Beckhoff SDF code

Other notes:
End stations: end Y IRIG B chassis power cycled; end X, end Y high voltage turned on; end X, end Y HEPI pump station computers started; end X, end Y ISI coil drivers chassis reset; end Y Beckhoff computer turned on
Corner station: TCS chillers turned on; HEPI pump controllers turned on (Jeff B. had to push power button on distribution box); Sheila power cycled all Beckhoff chassis in CER; Sheila turned on the AOS baffle PD chassis in LVEA

I started the h0video IOC on h0epics2.

Conlog was running upon arrival; however, it crashed during the recovery. It still needs to be brought back.

Joe D. and Chris worked on beam tube enclosure sealing.

There was some trouble starting the corner station HEPI pump controller computer.

There was some trouble finding the strip tool template for the wind speeds. I'm not sure if it was found or if TJ created a new one.

Things to add to the 'short power outage recovery' document:
- Turn on high voltage supplies
- Push reset on ISI coil drivers
- HEPI pump controller and pumps
- TCS chillers
- TCS lasers
- ALS lasers
- Turn on video server
Comments related to this report
patrick.thomas@LIGO.ORG - 17:29, Wednesday 20 January 2016 (25058)
Jeff K. turned on and left on the WiFi at the end stations, since we are no longer running in science mode.
patrick.thomas@LIGO.ORG - 17:38, Wednesday 20 January 2016 (25059)
From Jeff K.:

- The PSL periscope PZTs had to be aligned using the IM4 and MC2 QPDs. Setting them back to their previous offsets did not work.
- There was trouble with the end Y ESD beyond the fact that the high voltage got tripped off with the Beckhoff vacuum gauges.
patrick.thomas@LIGO.ORG - 17:45, Wednesday 20 January 2016 (25060)
01:43 UTC TMSX guardian set to 'SAFE'. Ed and Jenne reset controller at end X. TMSX guardian set to 'ALIGNED'.
jeffrey.kissel@LIGO.ORG - 15:56, Thursday 21 January 2016 (25080)CDS, DetChar, SUS
J. Kissel, T. Shaffer, E. Merilh, E. Hall

A little more detail on the problems with the EY driver:
- We knew / expected the high voltage driver of the ESD system to need resetting because of the power outage, and because of several end-station Beckhoff computer reboots (which trip the vacuum interlock gauge and kill the high voltage power supplies to the HV driver)
- However, after one trip to the end station to perform the "normal" restart (turn on the high voltage power supplies via rocker switches, set the output voltage and current limit to 430 [V] and 80 [mA], turn on the output, go into the VEA, and hit the red button on the front of the high voltage driver chassis), we found that the *high voltage* driver was railed in the "usual" fashion, in which the high voltage monitors for all 5 channels (DC, UL, LL, UR, LR) show a fixed -16k, regardless of the requested output.
- We'd tried the "usual" cure for an EY railed high voltage driver (see LHO aLOG 19480), where we turn off the driver from the red button, unplug the "preamp input" (see LHO aLOG 19491), turn on the driver from the red button, plug in the cable. That didn't work.
- Desperate, after three trips to EY, I tried various combinations of unplugging all of the cables and turning the driver on and off, both from the remote switch on the MEDM screen and from the front-panel red button. Only when I'd unplugged *every* cable from the front and power cycled the chassis from the front panel did the stuck output clear.

Sheesh!
jeffrey.kissel@LIGO.ORG - 16:16, Thursday 21 January 2016 (25084)
Three FRS tickets filed on this; problems/faults uncovered or re-exposed by the power outage:
EY HV ESD Driver Railed: FRS #4254
PSL Periscope PZT Offset Problems: FRS #4251
IM's Offset Problems: FRS #4252
H1 CDS
patrick.thomas@LIGO.ORG - posted 16:42, Wednesday 20 January 2016 - last comment - 16:57, Wednesday 20 January 2016(25055)
Beckhoff power outage recovery
Upon arrival, on the CDS overview, under the Slow Controls diagnostics, all of the corner station Beckhoff PLCs were green and updating. All of the end station Beckhoff PLCs had white boxes.

Corner Station:
I was informed that the TCSX laser was not working. There were errors on some of the terminals in the system manager, so I asked Sheila to power cycle all of the chassis in the CER. This seemed to fix the errors and the TCSX laser. She also found all of the AOS Baffle PD chassis in the LVEA off and turned them on. I burtrestored all the PLCs for h1ecatc1 to 6:10 AM PST.

End Stations:
Richard turned on or power cycled h1ecaty1 (I do not know if he found it off). I believe the Beckhoff PLCs for end Y turned green and started updating on the CDS overview after this. I was able to log into h1ecatx1. I found that the EPICS IOC had not started cleanly (see attached screenshot). I quit it and started it again. The Beckhoff PLCs for end X then turned green and started updating on the CDS overview.

The EtherCAT vacuum gauges did not come back cleanly. All of them were reporting a flat pressure of 0. We have not been able to bring back the gauges in the beam tube enclosures, and they have been disabled in the system managers. At end Y, on Vacuum Gauge ETM, under Process Data, Daniel hit 'Load info from device'. On Vacuum Gauge NEG, under Process Data, Daniel toggled the checkboxes under PDO Assignment. For some reason, once he did this, activated the configuration, and relinked the variables, the gauges started reporting reasonable values. This did not work for me at end X. If I just hit 'Load info from device' or toggled the checkboxes, and then hit activate configuration, the gauges would start reporting reasonable values. However, as soon as I tried to relink the variables, the data went invalid again. I ended up having to remove and re-add the gauges in the system manager and relink them. After that they seemed to work. None of these changes have been (and probably should not be) committed to subversion. We do not know the root cause of these problems.

I burtrestored all the PLCs for h1ecatx1 to 6:10 AM PST. At some point I did the same for h1ecaty1, but I do not remember if the IOC has been restarted since. Things seem to be working for now, so I am not attempting to burtrestore it again.
Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 16:57, Wednesday 20 January 2016 (25056)
At one point I may have accidentally requested 'DEGAS' for one of the EtherCAT vacuum gauges at end X or end Y. Unfortunately I cannot recall which one, and I do not know if it actually engaged.
H1 ISC
sheila.dwyer@LIGO.ORG - posted 16:19, Wednesday 20 January 2016 (25053)
DCDP upconversion is limiting DARM at 10 Hz

Gabriele, Sheila

The message:

We had a look at the upconversion we would expect from the quadratic response of DC readout.  The message is that the large DARM residual at around 3Hz (due to our LSC feedforward) limits our noise near 10Hz, and upconversion around the calibration lines is about a factor of 4 below DARM at 40 Hz. 

Details:

The DCPD photocurrent is proportional to the power on them:

The photocurrent goes as I = P_DC (1 + x/x_0)^2, where P_DC is the DC offset current (20 mA), G_opt is the optical gain (3.3 mA/pm, i.e. 2 P_DC / x_0), and x_0 is the DARM offset (12 pm).

If we inject a line, we expect to see upconversion at the second harmonic; the amplitude of the second harmonic (seen in the DCPDs) relative to the fundamental should be a/(4 x_0) for a line of amplitude a:

This is the explanation for the upconversion mentioned in alogs 25001 and 21240 .

Since the quadratic term is small, we can approximate the DARM residual and use it to predict the noise in the DCPDs due to upconversion:

We took the data from a time when I was injecting a 6 Hz line into DARM and made this projection.  The upconversion of the 6 Hz peak predicts the 12 Hz peak well. 

The rms of our DARM residual is dominated by bad feedforward around 3 Hz, this is upconverted and limits our sensitivity at around 10 Hz.  There is also upconversion around the calibration lines that is about a factor of 4 below DARM near 40 Hz. 
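As a numerical check (my own sketch, not from the alog), the quadratic mechanism can be driven with the photocurrent model I = P_DC (1 + x/x_0)^2, which is consistent with the quoted optical gain G_opt = 2 P_DC / x_0 ≈ 3.3 mA/pm; P_DC, x_0, and the 6 Hz line frequency come from the numbers above, while the line amplitude is an arbitrary choice:

```python
import numpy as np

# Sketch (not from the alog) of quadratic DC-readout upconversion, assuming the
# photocurrent model I = P_DC * (1 + x/x_0)**2, which reproduces the quoted
# optical gain: dI/dx at x=0 is 2*P_DC/x_0 = 2*(20 mA)/(12 pm) ~ 3.3 mA/pm.

P_DC = 20e-3        # DC offset current [A]
x0 = 12e-12         # DARM offset [m]
a = 0.5e-12         # amplitude of an injected 6 Hz line [m] (arbitrary choice)

fs = 1024.0
t = np.arange(0, 16, 1 / fs)            # 16 s -> integer cycles, no leakage
x = a * np.cos(2 * np.pi * 6 * t)
I = P_DC * (1 + x / x0) ** 2            # quadratic response

spec = np.abs(np.fft.rfft(I)) / len(t)  # amplitude spectrum (peak = A/2)
f = np.fft.rfftfreq(len(t), 1 / fs)
ratio = spec[np.argmin(abs(f - 12))] / spec[np.argmin(abs(f - 6))]

# Expanding the square: fundamental amplitude = 2*P_DC*a/x0 at 6 Hz,
# second harmonic = P_DC*a**2/(2*x0**2) at 12 Hz, so ratio -> a/(4*x0).
```

This reproduces the observed 6 Hz → 12 Hz upconversion of the injected line.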

Images attached to this report
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 12:35, Wednesday 20 January 2016 - last comment - 14:45, Wednesday 20 January 2016(25050)
TCS CO2X recovered

Peter, Jason, Jeff B, Nutsinee

The TCS CO2X laser was down due to the power outage this morning, along with many other systems. Jeff Bartlett reported that the chiller was tripped, and he restarted it. We later restarted the power supply on the mezzanine, which brought the system back up. However, the time series of the flow alarm did not show the alarm tripping when the chiller shut off, and the laser output channel continued to read non-zero for over an hour until the frontend was restarted. Moral of the story: some channels will continue to report stale values after a power outage until the frontend gets restarted. Another moral: maybe we should move the TCS CO2 power supply to the CER.

Images attached to this report
Comments related to this report
alastair.heptonstall@LIGO.ORG - 14:45, Wednesday 20 January 2016 (25052)

I think the TCS power supplies were located in the mechanical room because they are switching supplies and we wanted them far away from other electronics.  By the way, there is a second mechanism that shuts off the laser if the chiller goes off (not just the flow meter) which is that the chiller has a built-in relay that acts on the laser controller to turn off the controller.  It was added so that we weren't reliant on low flow rate turning off the laser.

H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 19:09, Tuesday 19 January 2016 - last comment - 15:59, Wednesday 20 January 2016(25035)
DHARD Yaw loop measurements at different powers

This afternoon I took measurements of the DHARD Yaw loop at different PSL powers.  In addition to general characterization of the O1 IFO, I will use this data to verify the ASC loop model.  Once we're confident in the loop model at powers that we can measure, we will use it to try to design ASC filters that we can use for high power operation in a few months.

In the first attached screenshot and .xml file, the measurement at 2 W is blue, the measurement at 10 W is orange, and the measurement at 20 W is red.

The 2 W measurement was taken at the DC_READOUT state, and only FM6 of the DHARD Yaw filter bank was engaged. This measurement was taken from a lock stretch earlier in the day, using 40 points at 3 avg each. In the xml file, the 2 W data is saved as references 0-4.  In the second screenshot and .xml file, I include some higher resolution measurements of the peaks, with 5 avg for each point.

The 10 W measurement was taken at INCREASE_POWER, and both FM2 and FM6 were engaged.  This measurement used 60 points at 5 avg each. In the xml file, the 10 W data is saved as references 5-9.   I had modified the lscparams.py guardian code to stop the power increase at 10 W, but I have reverted that change, so everything should still be as normal.

The 20 W measurement was taken at INCREASE_POWER, and both FM2 and FM6 were engaged.  This measurement used 60 points at 5 avg each. In the xml file, the 20 W data is the "live" traces.

The 10W and 20W measurements today are broadly consistent with the measurements from 31 July 2015 (alog 20084), which is good.

Images attached to this report
Non-image files attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 15:59, Wednesday 20 January 2016 (25054)

I have plotted yesterday's Dhard Yaw measurements against the ASC model that I have. 

The ASC model seems to be missing some gain related to the laser power, since I need a different fudge factor for each input power to get the upper UGF of the model to match the measurement. This is probably a problem with the Optickle part of the model since that's the only thing that should change very significantly in overall gain as a function of power. The suspension model (which includes radiation pressure) shows the peaks from the lower stages moving to higher frequency with higher input power as expected.  

In the individual plots (eg. 2W_only), I show the measurement (dark blue) with some error bars (light blue) derived from the measured coherence plotted against the model (black trace).  The 10 W and 20 W measurements match the model pretty well (except for the gain fudge required), but the 2 W measurement doesn't match the model very well below a few Hz.  I'm not yet sure why this is. 

In the final plot attached, I show all 3 models (solid traces) and all 3 measurements (dotted traces), but without error bars to avoid clutter. 

Non-image files attached to this comment
H1 CDS (CAL)
sheila.dwyer@LIGO.ORG - posted 18:20, Tuesday 19 January 2016 - last comment - 15:32, Saturday 05 March 2016(25034)
dangerous issue with changing DTT stop frequency when looking at CAL_DELTA_L

Robert, Sheila, Evan, Gabriele

I tried to look at one of Robert's injections from yesterday, and we noticed a dangerous bug, which had previously been reported by Annamaria and Robert in alog 20410.  This is also the subject of https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=804

When we changed the Stop frequency on the template, without changing anything else, the noise in DARM changes.  

This means we can't look at ISI, ASC, PEM, or SUS channels at the same time as DARM channels and get a proper representation of the DARM noise, which is what we need to be doing right now to improve our low frequency noise.  Can we trust coherence measurements between channels that have different sampling rates?

This is not the same problem as reported by Robert and Keita alog 22094

People have looked at the DTT manual and speculate that this could be because of the aggressive whitening on this channel, and the fact that DTT downsamples before taking the spectrum.

If there is no near-term prospect for fixing the problem in DTT, then we would want to have less aggressive whitening for CAL_DELTA_L_EXTERNAL.

Images attached to this report
Non-image files attached to this report
Comments related to this report
christopher.wipf@LIGO.ORG - 19:56, Wednesday 20 January 2016 (25064)

I spent a little time looking into this and added some details to the bug report. As you said, it seems to be an issue of high frequency noise leaking through the downsampling filter in DTT.

Until this gets fixed, any reason you can't use DARM_IN1 instead of DELTAL_EXTERNAL as your DARM channel? It's better whitened, so it doesn't suffer from this problem.
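As an illustration of that mechanism (a toy example of my own, not DTT's actual resampling code): a channel whose spectrum is dominated by large high-frequency content will show a corrupted low-frequency spectrum if it is decimated without an adequate anti-alias filter, which is effectively what changing the analysis stop frequency exposes:

```python
import numpy as np
from scipy import signal

# Toy demonstration (not DTT's code) of HF noise leaking through downsampling:
# a large out-of-band line aliases right on top of a small in-band signal when
# the data are decimated without filtering, but survives a proper FIR decimate.

fs = 16384.0
t = np.arange(0, 8, 1 / fs)
lf = 1e-2 * np.sin(2 * np.pi * 35 * t)          # small in-band signal at 35 Hz
hf = 1.0 * np.sin(2 * np.pi * (2048 + 35) * t)  # large line above the new Nyquist
x = lf + hf

q = 8                                       # downsample 16384 Hz -> 2048 Hz
naive = x[::q]                              # no anti-alias filter: 2083 Hz folds to 35 Hz
clean = signal.decimate(x, q, ftype="fir")  # FIR-filtered before decimation

def asd_at(y, fs_y, f0):
    """Amplitude spectral density of y at frequency f0."""
    freqs, psd = signal.welch(y, fs_y, nperseg=2048)
    return np.sqrt(psd[np.argmin(abs(freqs - f0))])

# asd_at(naive, 2048, 35) is dominated by the aliased line;
# asd_at(clean, 2048, 35) recovers the real 35 Hz signal.
```

An aggressively whitened channel like DELTAL_EXTERNAL has exactly this kind of large high-frequency content relative to its low-frequency band.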

evan.hall@LIGO.ORG - 13:36, Monday 29 February 2016 (25785)

The dynamic range issue in the whitened channel can be improved by switching to five zeros at 0.3 Hz and five poles at 30 Hz.

The current whitening settings (five zeros at 1 Hz, five poles at 100 Hz) produce more than 70 dB of variation from 10 Hz to 8 kHz, and 130 dB of variation from 0.05 Hz to 10 Hz.

The new whitening settings can give less than 30 dB of variation from 10 Hz to 8 kHz, and 90 dB of variation from 0.05 Hz to 10 Hz.

We could also use 6 zeros at 0.3 Hz and 6 poles at 30 Hz, which would give 30 dB of variation from 10 Hz to 8 kHz, and 66 dB of variation from 0.05 Hz to 10 Hz.
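For reference, a filter-only comparison of the two designs can be sketched as below (my own check, not from the alog; the dB figures quoted above are presumably for the whitened channel itself, signal spectrum included, so the filter-only numbers differ, but the ranking of the designs is the same):

```python
import numpy as np

# Compare the magnitude variation (in dB) of the old and proposed whitening
# filters over 10 Hz - 8 kHz.  Filter-only: the underlying signal spectrum,
# which the quoted figures fold in, is ignored here.

def whitening_spread_db(f_zero, f_pole, order, f_lo, f_hi):
    """Max-minus-min magnitude in dB of ((1 + i f/f_zero)/(1 + i f/f_pole))**order
    over [f_lo, f_hi]."""
    f = np.logspace(np.log10(f_lo), np.log10(f_hi), 2000)
    h = ((1 + 1j * f / f_zero) / (1 + 1j * f / f_pole)) ** order
    mag_db = 20 * np.log10(np.abs(h))
    return mag_db.max() - mag_db.min()

old = whitening_spread_db(1.0, 100.0, 5, 10, 8000)  # 5 zeros @ 1 Hz, 5 poles @ 100 Hz
new = whitening_spread_db(0.3, 30.0, 5, 10, 8000)   # 5 zeros @ 0.3 Hz, 5 poles @ 30 Hz
# new < old: the 0.3/30 design has mostly flattened out by 10 Hz, leaving
# less dynamic range for the downstream chain to cover above that frequency.
```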

Images attached to this comment
evan.hall@LIGO.ORG - 15:32, Saturday 05 March 2016 (25892)

The 6x p/z solution was implemented: LHO#25778
