Reports until 16:06, Tuesday 28 July 2015
H1 IOO
keita.kawabe@LIGO.ORG - posted 16:06, Tuesday 28 July 2015 (20002)
60Hz comb filters were moved from IMC WFS inputs to IMC DOF[123] filters

The IMC WFS inputs have aggressive 60 Hz comb filters. These are not desirable when we want to use these WFSs to control the PSL periscope PZT at around 300 Hz to suppress some noise in DARM.

Though it's not clear whether these filters are doing anything good for us, just to be on the safe side I moved all comb60 filters from the IMC WFS inputs to the IMC DOF filters.

Correct settings from now on:

The comb60 filters in FM6 of H1:IMC-WFS_[AB]_[IQ][1234] (e.g. H1:IMC-WFS_A_Q3) are all OFF (they used to be ON).

The comb60 filters in FM6 of H1:IMC-DOF_[123]_[PY] (e.g. H1:IMC-DOF_2_Y) are all ON (there used to be no filter in these).

DOF4 and DOF5 do NOT have comb60.
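
For reference, a minimal sketch of how these switches could be toggled in bulk from a control-room Python session (assuming the ezca EPICS wrapper; illustrative only, not the script that was actually used):

import ezca

ca = ezca.Ezca(prefix='H1:')

# comb60 lives in FM6; turn it OFF in every IMC WFS input bank
for wfs in 'AB':
    for phase in 'IQ':
        for seg in '1234':
            ca.switch('IMC-WFS_%s_%s%s' % (wfs, phase, seg), 'FM6', 'OFF')

# ...and ON in the IMC DOF[123] pitch/yaw banks (DOF4/5 have no comb60)
for dof in '123':
    for py in 'PY':
        ca.switch('IMC-DOF_%s_%s' % (dof, py), 'FM6', 'ON')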

LHO General
corey.gray@LIGO.ORG - posted 16:00, Tuesday 28 July 2015 (19984)
Ops DAY Summary

Maintenance started with prep work before 8am by Jeff, Betsy, & Cheryl, who got everything ready to take H1 down in a controlled way and then took all guardians to their preferred states.

The all-encompassing RCG upgrade ended up taking more time than planned, but it's looking like we can start to see light at the end of the tunnel.  Despite valiant efforts, Maintenance Day extended past noon.  A few remaining measurements are going on now, but H1 looks close to being back at a state where it can be returned to locking.

H1 AOS
jason.oberling@LIGO.ORG - posted 15:44, Tuesday 28 July 2015 (20000)
PR3 Optical Lever laser replaced

J. Oberling, E. Merilh

We replaced the PR3 oplev laser with one that has been stabilized in the lab, and also installed the cooler for laser thermal stabilization and an armored 3 m fiber; this is in accordance with ECR E1500224.  The serial number of the new PR3 laser is 198-1; the old SN is 191.  The new laser will be left to thermally stabilize for the next 4-6 hours; then we will start looking at spectrograms of the SUM signal during quiet times to determine whether we need to adjust the laser power (since the thermal environment of the LVEA differs from that of the LSB optics lab).  Since the old laser was showing no signs of failure, just glitching, it will be taken to the LSB optics lab to be stabilized.

After talking with the control room we determined the PR3 optic is in a good alignment, so we realigned the PR3 optical lever.

H1 CDS (SUS)
james.batch@LIGO.ORG - posted 15:23, Tuesday 28 July 2015 (19999)
Modified BIOS settings for h1susey
Modified BIOS settings for the new, faster h1susey computer to more closely match the original computer's BIOS settings.  This makes the new computer a bit slower, but it is hoped that the IOP CPU usage will be stable with the new settings.  We should know the results in a few days.

This work required a restart of the Dolphin IPC-connected models at EY, which is all except the h1susauxey models.
LHO VE
kyle.ryan@LIGO.ORG - posted 15:19, Tuesday 28 July 2015 - last comment - 22:47, Thursday 30 July 2015(19998)
Y-end NEG pump regeneration (EY_NEG_REGEN1_.....)
Kyle, Gerardo

0900 hrs. local 

Added 1.5" O-ring valve in series with existing 1.5" metal valve at Y-end RGA pump port -> Valved-in pump cart to RGA -> Valved-in Nitrogen calibration bottle into RGA -> Energized RGA filament -> Valved-out and removed pump cart from RGA -> Valved-in RGA to Y-end 

???? hrs. local 

Began Faraday analog continuous scan of Y-end 

1140 hrs. local 

Isolated NEG pump from Y-end -> Began NEG pump regeneration (30 min. ramp up to 250 °C, 90 min. soak, stop heat and let cool to 150 °C) 

1410 hrs. local 

Valved-in NEG pump to Y-end 

1430 hrs. local 

Valved-out Nitrogen cal-gas from Y-end

1440 hrs. local 

Valved-in Nitrogen to Y-end -> Stop scanning
Comments related to this report
gerardo.moreno@LIGO.ORG - 16:52, Tuesday 28 July 2015 (20004)VE

Plot of pressure at Y-End station before, during, and after NEG regeneration.

Non-image files attached to this comment
gerardo.moreno@LIGO.ORG - 16:54, Tuesday 28 July 2015 (20005)

Response of PTs along the Y-arm to NEG pump regeneration.

Non-image files attached to this comment
kyle.ryan@LIGO.ORG - 14:38, Thursday 30 July 2015 (20058)
RGA and pressure data files for NEG regenerations to be centralized in LIGO-T1500408
Non-image files attached to this comment
michael.zucker@LIGO.ORG - 16:35, Thursday 30 July 2015 (20074)
Interesting!  As you predicted, the RGA is not super conclusive because of the background; but there seems to be a clear difference when you isolate the N2 calibration source. So your water and N2 may really be comparable to the hydrogen, say several e-9 torr each (comparing the sum of peaks to the ion gauge).  The NEG will poop out after ingesting ~2 torr-liters of N2, so at 1000 l/s it will choke and need regen after a few weeks.  Which is, I guess, what it did. 
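
To spell out that estimate, a back-of-the-envelope sketch (the N2 partial pressure is an assumed placeholder):

# Rough NEG saturation time, using the numbers quoted above
p_n2 = 3e-9            # [torr] assumed N2 partial pressure (placeholder)
s_neg = 1000.0         # [l/s] nominal NEG pumping speed for N2
capacity = 2.0         # [torr*l] N2 ingestion before regen is needed
t_sat = capacity / (p_n2 * s_neg)       # [s]
print('%.1f days' % (t_sat / 86400.0))  # ~8 days; ~weeks for p ~ 1e-9 torr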

It would be good to clean up the RGA so we can home in on the N2 and water pressures, and especially the HCs (I expect the HC peaks in these plots are from the RGA itself). To get practical use out of the NEG we should pace how much of these non-hydrogen gases it sees. We can expect only about 50 regens after N2 saturation, and small amounts of HC may kill it outright. 

We should be able to estimate the net speed of the NEG before and after from the pressure rise and decay times (we can calculate the beam tube response if we presume it's all H2). 
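
A single-volume sketch of that speed estimate (both numbers are placeholders, not measurements):

# For a volume V pumped at speed S, p(t) ~ p0 * exp(-S*t/V) after the
# cal-gas is valved out, so the decay time constant tau gives S = V/tau.
V = 10000.0      # [l] assumed effective end-station volume (placeholder)
tau = 50.0       # [s] assumed pressure decay time constant (placeholder)
S = V / tau
print('net speed ~ %.0f l/s' % S)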

rainer.weiss@LIGO.ORG - 20:55, Thursday 30 July 2015 (20077)
I have trouble seeing even the hydrogen pumping by the NEG by looking at the different scans.
Suggest you set the RGA up to look at AMU vs. time and do the leak and pump modulation again. Plot AMU 2, 12, 14, and 28.
john.worden@LIGO.ORG - 22:47, Thursday 30 July 2015 (20081)

Rai,

That is on our list of things to do - make a table of the relevant AMUs' partial pressures.

Note that all the ASCII data is at:

(see LIGO-T1500408-v1 for ASCII data)

caution - 15 Mbytes

Kyle can probably fish out the relevant data from the RGA computer, so there's no need to plow through the whole file.

Thanks for the comments, Mike and Rai.

H1 COC (ISC)
evan.hall@LIGO.ORG - posted 15:01, Tuesday 28 July 2015 (19968)
EY butterfly mode ringdown

Rana, Matt, Evan

Back on 2015-07-20, we accidentally rang up the butterfly mode of the EY test mass. We notched this mode out of the DARM actuation path, so that it is not excited by DARM feedback. Additionally, I installed a bandpass filter in mode 9 of the EY L2 BLRMS monitors so that we could monitor the ringdown of the mode.

We looked at 2 hours of data starting at 2015-07-20 11:01:30 Z. Looking at the DARM spectrum from that time, the frequency of the mode is 6053.805(5) Hz. From the BLRMS, we see a nice ringdown in the amplitude. Fitting an exponential to this ringdown, the time constant is 1300 s. This gives the Q of the mode as 2×10^7 or so. Uncertainty TBD.
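
For reference, a minimal sketch of the fit (synthetic data standing in for the BLRMS channel; assumes numpy/scipy):

import numpy as np
from scipy.optimize import curve_fit

f0 = 6053.805                      # mode frequency [Hz]
t = np.linspace(0.0, 7200.0, 721)  # 2 hours of (synthetic) BLRMS samples
blrms = 1.0 * np.exp(-t / 1300.0) + 0.01 * np.random.randn(t.size)

def ringdown(t, a0, tau):
    return a0 * np.exp(-t / tau)

(a0, tau), _ = curve_fit(ringdown, t, blrms, p0=(1.0, 1000.0))
Q = np.pi * f0 * tau   # amplitude decay time tau gives Q = pi * f0 * tau
print('tau = %.0f s, Q = %.2g' % (tau, Q))   # ~1300 s, ~2.5e7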

Non-image files attached to this report
H1 PSL
thomas.shaffer@LIGO.ORG - posted 13:36, Tuesday 28 July 2015 (19997)
ISS Diffracted Power Adjusted from ~14% to 7.2%

The refsignal was brought to -2.10 V.

H1 SUS
richard.mccarthy@LIGO.ORG - posted 12:29, Tuesday 28 July 2015 - last comment - 16:20, Tuesday 28 July 2015(19994)
EX LVLN ESD drive
In an attempt to make both end stations function the same, we installed the Low Voltage Low Noise (LVLN) ESD driver below the existing HV ESD driver.  We finished pulling the cables and hooked everything up.  I verified that I could get low voltage out, but could not proceed to HV testing due to site computer work.  I have left the controls for the system running through the LVLN system, but the high voltage cable is still driven from the ESD driver directly.  Retesting will continue after the software issues are fixed.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:20, Tuesday 28 July 2015 (20003)CDS
J. Kissel, R. McCarthy

In order to make the HV + LVLN driver function as expected, and to obey the wiring convention outlined in D1400177 (which is sadly different from the rest of the suspensions' 4-quadrant stages), we required a model change at the top level: a reordering of the outputs of the ESD control request from 
DAC3_0 = DC
DAC3_1 = UL
DAC3_2 = LL
DAC3_3 = UR
DAC3_4 = LR
to
DAC3_0 = DC
DAC3_1 = UR
DAC3_2 = LR
DAC3_3 = UL
DAC3_4 = LL
Note that this channel re-ordering corresponds to how Richard has reordered the cabling in analog, so there has been no overall change in signal ordering. Also note that this change has already been made to ETMY, but it didn't get much attention in the flurry to get the ETMY LVLN driver installed (LHO aLOG 18622). 
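
As a toy check (channel labels only) that the model permutation composed with the matching cable swap is the identity:

# DAC3_0..DAC3_4 quadrant assignments, before and after the model change
old = ['DC', 'UL', 'LL', 'UR', 'LR']
new = ['DC', 'UR', 'LR', 'UL', 'LL']

# The analog cabling was re-ordered with the same permutation, so each
# quadrant signal still reaches its original electrode: applying the
# cable swap to the new model ordering recovers the old ordering.
perm = [new.index(q) for q in old]
assert [new[i] for i in perm] == old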

Attached are screenshots of the change.

The updated top-level model has been committed to the userapps repo as of rev 11106:
jeffrey.kissel@opsws10:/opt/rtcds/userapps/release/sus/h1/models$ svn log h1susetmx.mdl 
------------------------------------------------------------------------
r11106 | jeffrey.kissel@LIGO.ORG | 2015-07-28 15:49:04 -0700 (Tue, 28 Jul 2015) | 1 line

Changed L3 ESD DAC output order to accommodate new wiring for LVLN driver, installed July 28 2015.


Images attached to this comment
H1 CDS (CAL, CDS, PSL, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 12:17, Tuesday 28 July 2015 (19993)
Maintenance Day -- Bugs Found during RCG Install; Restarting from Scratch
J. Kissel, B. Weaver, C. Vorvick, C. Gray, D. Barker, J. Batch

We (Jim, Betsy, and myself) had attempted to upgrade the RCG and compile, install, and restart all front-end code/models in piecemeal fashion starting at 8:00a PDT this morning, but as we began to recover, we began to see problems. We had intended to do the models in an order that would be most facilitative to IFO recovery, namely PSL, SUS, and SEI first. Also, in order to speed up the process, we began compiling and installing those models independently, and in parallel.

(1) We "found" (this is common CDS knowledge) that our simultaneous installs of PSL, SEI, and SUS front-end code all need to touch the  
/opt/rtcds/lho/h1/target/gds/param/testpoint.par
file, and this *may* have resulted in collisions. After the AWG processes on the SUS models would not start, Betsy flagged down Jim and found that, since we were all installing at the same time, some of the SUS models didn't make it into the testpoint.par file; when AWG_STREAM was started by the front-end start script, it didn't find its list of test points, so it just never started. 
This "bug" had never been exposed because the sitewide front-end code had never had this many "experts" trying to run through the compilation/install process in parallel like this. 

That's what we get for trying to be efficient.

Dave arrived halfway through the restarts (we had finished all corner SEI, SUS, PSL, and ISC upgrades by this point) and informed us that, last night, after he had removed and regenerated the 
/opt/rtcds/lho/h1/chans/ipc/H1.ipc 
file by compiling everything, he had moved it out of the way temporarily, in case there were any model restarts overnight. If a model had been restarted with the new IPC file still in place, that model could potentially have been pointed to the wrong connection and wreaked havoc on the whole system. However, we (Jim, Betsy, and I) didn't know that Dave had done this, so when we began compiling this morning, we were compiling on top of the old IPC file. 
Dave suggested that, as long as there were no IPC-related model changes between last night and when we got started, the IPC file should be fine, so we proceeded onward.

(2) However, as we got further along in the recovery process, we also found that some of the SUS (specifically those on HAM2a and HAM34) were not able to drive out past their IOP model DAC outputs. We had seen this problem before on a few previous maintenance days, so we tried the fix that had worked before -- restarting all the front-end models again. When that *didn't* work, we began poking around the Independent Software IOP Watchdog screens, and found that several of these watchdogs were suffering from constant, large IPC error rates.

Suspecting that the 
/opt/rtcds/lho/h1/chans/ipc/H1.ipc 
file had also been corrupted by the parallel code installations, Dave suggested that we abandon ship.

As such (starting at ~11:00a PDT) we're 
- Deleting the currently referenced IPC file
- Compiling the entire world, sequentially (a ~40 minute process)
- Installing the entire world, sequentially (a ~30 minute process)
- Killing the entire world
- Restarting the entire world.

As of this entry, we're just now killing the entire world.

Stay tuned!
H1 GRD (GRD)
cheryl.vorvick@LIGO.ORG - posted 12:16, Tuesday 28 July 2015 - last comment - 12:37, Tuesday 28 July 2015(19992)
ISC_LOCK is still very DOWN DOWN and NONE NONE and CHECK_IR CHECK_IR'ed

Image attached - two DOWN states, two NONE states, two CHECK_IR states - why?  When did this happen?  Who will fix it?

One clue to the issue is that the "top" DOWN, when selected, will show up in the guardian log, but the "bottom" DOWN does not produce any guardian log entry.

Check out my first entry about this... here.

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 12:37, Tuesday 28 July 2015 (19995)

This is most likely a side effect of my attempts to improve the reload situation.  It's obviously something that needs to be fixed.

In the meantime, you probably have to restart that node to clear up the REQUEST_ENUM:

guardctrl restart ISC_LOCK

You should also then close and reopen the MEDM screens once the node has been restarted.

LHO VE (CDS, VE)
patrick.thomas@LIGO.ORG - posted 11:46, Tuesday 28 July 2015 - last comment - 15:54, Tuesday 28 July 2015(19991)
end Y Inficon NEG hot cathode gauge back on
I tried to turn the gauge back on through the Beckhoff interface by writing '01 02' to the binary field of the FB44:01 parameter but kept getting a response code corresponding to 'Emission ON / OFF failed (unspecified reason)' in FB44:03. Gerardo power cycled it and it came back on.
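
Schematically, the attempt looked like the following (the channel names here are hypothetical stand-ins for however the FB44 fields are mapped into EPICS):

import epics  # pyepics; assumes the Beckhoff FB44 fields are EPICS records

# Hypothetical channel names -- illustrative only
epics.caput('H1:VAC-EY_NEG_GAUGE_FB44_01_BIN', '01 02')  # request emission ON
resp = epics.caget('H1:VAC-EY_NEG_GAUGE_FB44_03')
print(resp)  # kept coming back as 'Emission ON / OFF failed (unspecified reason)'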
Comments related to this report
john.worden@LIGO.ORG - 13:28, Tuesday 28 July 2015 (19996)

Let's call these hot filament ion gauges or Bayard-Alpert gauges rather than "hot cathode" gauges.  thx

patrick.thomas@LIGO.ORG - 15:54, Tuesday 28 July 2015 (20001)
Turned back off per Gerardo's request. Was able to do so through the Beckhoff interface.
H1 GRD (GRD, SEI)
cheryl.vorvick@LIGO.ORG - posted 09:54, Tuesday 28 July 2015 - last comment - 11:07, Tuesday 28 July 2015(19987)
trouble taking ITMY BSC1 HEPI and ISI to OFFLINE this morning

For Maintenance, we took all HEPIs and ISIs to OFFLINE.

I requested OFFLINE for BSC1, but it stalled and did not go there.

Jim helped, and got it to go OFFLINE.

TJ and I looked into why this happened and what happened; here's the list of events:

ITMY BSC1, Request for OFFLINE at 15:13 did not execute:

- SEI_ITMY request for OFFLINE did not take it to OFFLINE

- ISI_ITMY_ST1 had been left managed by USER since 7/23 (found in the log by TJ and Cheryl)

- ISI_ITMY_ST1 being managed by USER prevented SEI_ITMY from taking it OFFLINE

- the fix was that Jim requested INIT on SEI_ITMY

- this changed ISI_ITMY_ST1 from managed by User to managed by SEI_ITMY

- HOWEVER, HEPI was in transition to OFFLINE, and the INIT request interrupted that and sent HEPI back to its previous state, ROBUST_ISOLATED

- This brought HEPI back up when we wanted it OFFLINE

- Jim requested INIT again

- Jim requested OFFLINE again

- these second INIT/OFFLINE requests allowed SEI_ITMY to bring HEPI, ST1, and ST2 to OFFLINE

 

My questions:

- Jim, TJ, Cheryl

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 11:07, Tuesday 28 July 2015 (19990)

My questions:

  • do we always want Guardian to default to the previous state, as HEPI did when Guardian didn't recognize its current state?  This feature is what sent HEPI back up to ROBUST_ISOLATED when we were trying to reach OFFLINE.

Be specific about what you mean by "Guardian didn't recognize its current state".  It sounds like HPI_ITMY was always in full control of HEPI and was reporting its state correctly.  SEI_ITMY was a bit confused since one of its subordinates was stolen, but I think it still understood what states the subordinates were in.

When INIT is requested, the request is reset to what it was previously right after the node jumps to INIT.  Presumably SEI_ITMY's request was something that called for HPI to be ROBUST_ISOLATED when INIT was requested.

  • today I saw that my request for OFFLINE on SEI_ITMY wasn't really able to take the system OFFLINE, so is this an anomaly or something we need to plan for, or is the default procedure to INIT a system, resetting everything to managed by Guardian (SEI_ITMY in this case)?

This should be considered an anomaly.  Someone in the control room had manually intervened with the ITMY SEI system, hence ISI_ITMY_ST1 reporting "USER" as the manager.  The intervening user should have reset the system back to its nominal configuration (ISI_ITMY_ST1 managed by "SEI_ITMY") when they were done, which would have prevented this issue from occurring.

All of the problems here were caused by someone intervening in Guardian and not resetting it properly.  Guardian has specifically been programmed not to second-guess the users.  In that case the users have to be conscientious about resetting things appropriately when they're done.

H1 PSL
corey.gray@LIGO.ORG - posted 08:54, Tuesday 28 July 2015 - last comment - 10:58, Tuesday 28 July 2015(19985)
PSL Weekly Report

I forgot to run the PSL Checklist during my shift yesterday.  Today I ran our PSLweekly script, and here is the output.  Model work was recently (and is currently) being done on the PSL today, which is why many items are ~10 min old.

Items to note:
ISS Diffracted power is HIGH!

Laser Status:
SysStat is good
Front End power is 32.51 W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED

PMC:
It has been locked 0.0 days, 0.0 hr 10.0 minutes (should be days/weeks)
Reflected power is 2.061 W and PowerSum = 24.37 W.

FSS:
It has been locked for 0.0 days 0.0 h and 10.0 min (should be days/weeks)
TPD[V] = 1.521 V (min 0.9 V)

ISS:
The diffracted power is around 14.4% (should be 5-9%)
Last saturation event was 0.0 days 0.0 hours and 10.0 minutes ago (should be days/weeks)

Comments related to this report
edmond.merilh@LIGO.ORG - 10:58, Tuesday 28 July 2015 (19989)

Keep in mind that this was taken on a maintenance day, after the mode cleaner had been taken down and a previous snapshot had been restored from prior to proper adjustment of the AOM diffracted power. Not the most optimal time to be taking vital signs :)

H1 ISC
stefan.ballmer@LIGO.ORG - posted 16:28, Sunday 26 July 2015 - last comment - 11:00, Tuesday 28 July 2015(19950)
Lock-loss after 16h due to PRM saturation
I happened to witness the lock loss after 16 h. We had several PRM saturations spread over ~8 minutes before one of them took down the interferometer.
Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 10:47, Monday 27 July 2015 (19957)

Here are some cavity pole data using a Pcal line (see alog 19852 for some details):

The data is 28 hours-long and contains three lock stretches, the first one lasted for 9-ish hours, the second about 16 hours (as Stefan reported above) and the third one 2 hours. As shown in the plot, the frequency of the cavity pole was stable on a time scale of more than 2 hours. It does not show obvious drift on such a time scale. This is good. However, on the other hand, as the interferometer gets heated up, the frequency of the cavity pole drops by approximately 40 Hz at the beginning of every lock. This is a known behavior (see for example alog 18500 ). I do not see clear coherence of the cavity pole with the oplev signals as oppose to the previous measurement (alog 19907) presumably due to a better interferometer stability.

Darkhan is planning to perform a more accurate and thorough study of the Pcal line for these particular lock stretches.

Images attached to this comment
rana.adhikari@LIGO.ORG - 00:09, Tuesday 28 July 2015 (19977)CAL

As a test, you could inject a few lines in this neighborhood to see if, instead of cavity pole drift (which seems like it would take a big change in the arm loss), it's instead SRC detuning changing the phase. With one line only, these two effects probably cannot be distinguished.

kiwamu.izumi@LIGO.ORG - 03:52, Tuesday 28 July 2015 (19979)

Rana,

It sounds like an interesting idea. I need to think a little bit more about it, but looking at a plot in my old alog (17876), having additional lines at around 100-ish Hz and 500 Hz may suffice to resolve the SRC detuning. It would be very difficult if the detuning turns out to be small, though, because a small detuning would look almost like a moving cavity pole. I will try checking it with high-frequency Pcal lines at around 550 Hz for these lock stretches. /* by the way, I disabled them today -- alog 19973 */
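
As a toy illustration of the degeneracy, here is the phase of a plain single-pole response at a few hypothetical line frequencies (all numbers made up); a moving pole shifts all the line phases in a correlated way, while a detuning would change their relative pattern:

import numpy as np

def pole_phase(f, fp):
    # Phase [deg] of a single-pole response 1/(1 + i*f/fp)
    return -np.degrees(np.arctan(f / fp))

lines = np.array([100.0, 331.9, 540.7])  # hypothetical Pcal line frequencies
print(pole_phase(lines, 350.0))          # pole at 350 Hz (made up)
print(pole_phase(lines, 310.0))          # "moved" pole: compare the patterns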

kiwamu.izumi@LIGO.ORG - 11:00, Tuesday 28 July 2015 (19988)

In addition to the time series that I posted, I made another time series plot with the corner HWSs. This was part of the effort to see the impact of the thermal transient on the DARM cavity pole frequency.

There seems to be a correlation between the spherical power of ITMY and the cavity pole in the first two-ish hours of every lock stretch. However, one thing which makes me suspicious is that the time constant of the spherical power seems a bit shorter than the one for the cavity pole and also the arm powers -- see the plot shown below. I don't have a good explanation for it right now.

 

Unfortunately the data from the ITMX HWS did not look healthy (i.e., the spherical power suspiciously stayed at a high value regardless of the interferometer state), which is why I did not plot it. The ITMY data did not actually look great either, since it showed a suspiciously quiet time starting at around t=3 hours and came back to a very different value at around t=5.5 hours. I am checking with Elli and Nutsinee about the health of the HWSs.

Images attached to this comment