Displaying reports 63901-63920 of 83395.Go to page Start 3192 3193 3194 3195 3196 3197 3198 3199 3200 End
Reports until 20:36, Tuesday 28 July 2015
H1 ISC (CDS, DetChar, GRD, IOO, ISC, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 20:36, Tuesday 28 July 2015 (20011)
7/28 Maintenance Day Relocking Efforts -- So far + Two Consecutive Big Earthquakes
J. Kissel, for the Relocking / Commissioning Teams

A progress update (picking up from when the future looked bleak), so I don't have to write a giant log at the end. (And to be honest, I'm not staying.)

Executive summary thus far: We're still struggling with restoring settings after a major front-end computer outage (like an RCG upgrade). Not because we aren't restoring to a good time, but because the "good time" (which has, thus far, been "the last time we were locked in low noise") is not the right time for some settings, especially those that are part of lock acquisition. These settings will slowly but surely rear their heads, and we just have to code them into the Guardian as we go, because they are almost always NOT monitored by SDF, since they're part of filter banks under guardian control.

Also, big earthquakes are a huge hindrance to recovery from a rough maintenance day. We should schedule those for Friday nights.

Here's what happened in the afternoon / early evening:

13:00 All models have finished their upgrades and are up and running
      Quickly see that HAM SUS IOP output is OK
      Quickly see that IPC errors from SEI are gone
      Richard immediately heads to ETMX to finish cabling up the ETMX ESD LVLN Driver

13:10 Discover IMC WFS front end model has been mistakenly named ASCIMC on the top-level making all MEDM channels go white (because they're now called H1:ASCIMC-... instead of H1:IMC-...)

13:20 All SEI and SUS "recovered" (i.e. brought to ALIGNED and FULLY ISOLATED)

13:30 PR3 oplev commissioning begins
      h1ascimc front end model naming bug fixed, back up and running
      Discover ITMs are getting a HUGE signal blasted in from ASC
      DAQ restart

13:40 Find ASC loops that are blasting the ITMs; loops turned OFF and history cleared.
            << LESSON LEARNED -- We should run the ISC_LOCK DOWN script after a full computer restart
      
14:00 Discovered ETMX ESD wiring had not been updated for the new LVLN ESD wiring (LHO aLOG 20003)
      Resume wiring up ETMX ESD LVLN Driver

14:20 IMC recovered briefly after trouble, once *some* unknown setting had been globally BURT restored

14:22 IMC lost again, because ISS is railing.

14:25 Jim and Dave head to EY to update BIOS settings on fast front-end (For the record, we've chosen NOT to revert the EY fast front-end to slow front-end, we've JUST upgraded the BIOS settings)

14:40 IMC recovered
            << LESSON LEARNED earlier global burt restore restored to a time of full IFO lock, 
               which had the ISS 2nd loop engaged. Without full IFO, it doesn't make sense to 
               have the 2nd loop ON. 2nd loop turned OFF, IMC OK.
      EX LVLN Driver Wiring Complete
      EX Charge measurements launched

15:00 Jim and Dave finish ETMY front-end machine BIOS upgrades
      Found new ISI Wiener filters are ON (as is standard for any new filter bank) and just 
                feeding STS signals straight to the platform, causing the platforms to be very noisy.

15:15 Optical Lever work completes.

15:20 ETMY SUS and SEI fully recovered 
      Begin charge measurements  at EY to confirm SUS health
      Keita begins IMC WFS plant measurement for new DOF5 to reduce 200-300 Hz intensity noise

16:00 Charge on ETMY done -- but discover drive is saturating. (Found out later it was because of the move of the "sumComp" filter from DRIVEALIGN to COILOUTF that was performed yesterday evening [[no aLOG]])
      Begin Initial alignment

17:05 Found SNR for AS 45 Q (during initial alignment of SRY) was really low, even with high-power into the IFO.
            << LESSON LEARNED Discovered there's a -160 dB filter that's ON for full IFO low-noise state, which we *don't* want on during initial alignment. This is *not* included in any Guardian's state request. This *should* be included in both the IFO_ALIGN guardian and the IFO DOWN state.

17:40 Initial alignment complete, beginning full IFO lock acquisition attempt  (Need some hand tweaking of the BS by Evan)

17:45 We had *just* reached some stages of the CARM offset reduction for the first time and then
      Magnitude 5.9 Earthquake - 29km S of Acandi, Colombia
      
17:50 Charge measurements resume (we figure out the saturation issue mentioned above)
      Matt and Hang do some L2P / L2Y measurements on ITMY

18:40 Resume locking

19:10 Make it up to DARM_WFS (not even to RESONANCE where the FULL IFO is at least RF locked on ETMX)
      Found ETMY M0 Bounce mode damping filters were not set correctly, ringing up EY's Bounce Mode terribly
          -- blamed filters disappearing (debunked)
          -- blamed improperly engaged guardian state (debunked)
      It was really that the last global BURT restore was done *before* the EY BIOS upgrade was finished. So we didn't BURT enough!
          << LESSON LEARNED These filters should *also* be forced into the right configuration, 
             since they are NOT monitored by the SDF system (because there are other parts of 
             the filter module that ARE controlled by guardian)

19:45 Just after resolving the Bounce Mode damping problem,
      Magnitude 6.3 Earthquake - 71km SSW of Redoubt Volcano, Alaska


Goodnight everybody. "Things will *definitely* be better in ULTRA LIGO."
H1 ISC
jenne.driggers@LIGO.ORG - posted 20:25, Tuesday 28 July 2015 - last comment - 09:14, Thursday 30 July 2015(20012)
Test Beckhoff for new EOM driver

Daniel gave me the test rig for the AM stabilized EOM drivers that we should be receiving from Caltech this week.  This allowed me to test that the Beckhoff controls and readbacks work as expected.  I also made a summary screen (ISC_CUST_EOMDRIVER.adl) of these readbacks and controls, which is accessible from the LSC dropdown menu on the sitemap. 

The chassis is labeled "Corner 6", and has 2 unconnected connectors labeled "EOM Driver A" and "EOM Driver B". 

The "A" connector controls the 45 MHz channels, and the "B" connector controls the 9 MHz channels. 

The controls and readbacks performed the same for both channels, so I'll only write out the list once.

I need to investigate the situation with the "Excitation Enable" switch, but other than that we should be ready to go when the EOM driver arrives, if we decide to install it.

Comments related to this report
daniel.sigg@LIGO.ORG - 09:14, Thursday 30 July 2015 (20057)

The RF setpoint goes from 4dBm (lowest setting) to 27dBm (highest setting) in steps of 0.2dB. The binary should start at zero for the lowest setting and increase by 1 for each step. This is a PLC problem.
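To make the intended mapping concrete, here is a minimal sketch of the setpoint-to-code conversion described above (4 dBm → code 0, 0.2 dB per step, 27 dBm → the top code). The function name and range checking are illustrative assumptions, not the actual PLC logic.

```python
def rf_setpoint_to_code(setpoint_dbm):
    """Map an RF setpoint in dBm to its binary code.

    Per the description above: 4 dBm (lowest) should map to code 0
    and each 0.2 dB increment should add 1, so 27 dBm maps to 115.
    This is a sketch of the expected behavior, not the PLC code.
    """
    lo, hi, step = 4.0, 27.0, 0.2
    if not (lo <= setpoint_dbm <= hi):
        raise ValueError("setpoint out of range")
    # round() absorbs floating-point error in the division
    return round((setpoint_dbm - lo) / step)
```

With this mapping the full range spans 116 codes (0 through 115), which is a quick sanity check against whatever the PLC is actually emitting.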

H1 ISC
jenne.driggers@LIGO.ORG - posted 19:55, Tuesday 28 July 2015 - last comment - 09:18, Wednesday 29 July 2015(20008)
Global burt restore guardian option

[Jenne, StefanB, Cheryl]

We were having some trouble getting the IMC WFS to converge (WFS came on, looked okay for a while, then started dragging the MC transmitted power down), so we implemented and tested the new global burt restore state, accessible from the ISC_LOCK guardian. 

In the end, the specific problem with the IMC WFS was that the ISC input filters on the M3 stage of the MC mirrors (definitely MC1, maybe others) had gain of zero, so the WFS signals weren't getting through to the optics' outputs. 

The solution we created, which should solve all kinds of problems, is a guardian state that does a global burt restore to a recent set of autoburt snapshots. This state has a date and time hard-coded in the guardian script, although it is easily changeable if we find a preferable time.  Currently, it is restoring to the autoburts of 28 July 2015, 07:10.  To select this state, you need to go to "manual", since there are no edges to get there from any other state.  When it has finished the restores, it will immediately go to and run the Down state. 

The list of snapshots that are restored is:

Perhaps though, we should just restore *every* snapshot that is captured by autoburt?
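For illustration, a minimal sketch of how such a state might assemble its restore commands from the hard-coded timestamp. The model list, the snapshot directory layout, and the `burtwb` invocation form here are all assumptions for the sketch; the real list and paths live in the guardian code.

```python
from datetime import datetime

# Hard-coded restore time, as described above (28 July 2015, 07:10).
RESTORE_TIME = datetime(2015, 7, 28, 7, 10)

# Hypothetical subset of models; the actual restored list is in the guardian.
MODELS = ["h1ascimc", "h1susmc1", "h1susmc2", "h1susmc3"]

def snapshot_path(model, t):
    """Assumed autoburt directory layout: root/year/mm/dd/HH:MM/<model>epics.snap."""
    return ("/ligo/cds/lho/h1/burt/autoburt/snapshots/"
            f"{t.year}/{t.month:02d}/{t.day:02d}/{t.hour:02d}:{t.minute:02d}/"
            f"{model}epics.snap")

def restore_commands(models, t):
    """One burtwb invocation per snapshot file."""
    return [f"burtwb -f {snapshot_path(m, t)}" for m in models]
```

Changing the restore time would then be a one-line edit to `RESTORE_TIME`, which matches the "easily changeable" intent described above.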

Comments related to this report
jeffrey.kissel@LIGO.ORG - 08:28, Wednesday 29 July 2015 (20030)CDS, GRD
J. Kissel, J. Driggers

Use this feature with caution. As we found yesterday evening during the rest of recovery (LHO aLOG 20011), BURT restoring to a single time when the IFO was at low noise isn't necessarily the right choice, especially until we've caught all of the settings that have fallen between the guardian and SDF cracks.
jameson.rollins@LIGO.ORG - 09:18, Wednesday 29 July 2015 (20032)

I really don't like this as a solution to whatever problem you're trying to solve.  Is there something wrong with the SDF system that is causing you to resort to such a sledgehammer approach?

H1 CDS
patrick.thomas@LIGO.ORG - posted 19:44, Tuesday 28 July 2015 (20010)
Updated Conlog channel list
Added 429 channels. Removed 240 channels.
H1 GRD
cheryl.vorvick@LIGO.ORG - posted 18:18, Tuesday 28 July 2015 - last comment - 11:07, Wednesday 29 July 2015(20009)
Breaking the IFO Lock for Maintenance:

For Maintenance this morning I needed to break the IFO lock, so recorded the steps used today.

- Betsy, Jeff, TJ, Cheryl

Comments related to this report
sheila.dwyer@LIGO.ORG - 02:02, Wednesday 29 July 2015 (20017)

The reason that requesting ISC_LOCK to DOWN doesn't work anymore is that I made DOWN not a goto state.  This means that there is no path from NOMINAL LOW NOISE to DOWN.  (We were having problems with unintentionally going to DOWN.)  So when you request DOWN, the guardian doesn't actually go there.  To force it to do that, you can go to manual mode and select DOWN. 

cheryl.vorvick@LIGO.ORG - 11:07, Wednesday 29 July 2015 (20035)

I understand more now, and was not aware that you had changed that on purpose.

No suggestion for a change, just reporting what I was asked to try and the results I got.

H1 SYS
daniel.sigg@LIGO.ORG - posted 17:19, Tuesday 28 July 2015 (20007)
New GPS EPICS readouts for end station CNS II Clocks

Serial cables and new software infrastructure were installed to read the status information of the GPS clocks in the end stations. The CNS II clocks are using the Motorola binary format at 9600 baud. We read the @@Ha message once a second. The GPS receiver is reported as a Synergy SSR-M8T. This seems to be a u-blox LEA-M8T in an M12+ form factor with Motorola protocol emulation. MEDM screens are available from the auto-generated list.

The duplicated PSL_ERROR channels were also eliminated.

H1 SUS (INS, SUS, SYS)
leonid.prokhorov@LIGO.ORG - posted 17:15, Tuesday 28 July 2015 (20006)
OPLEV Charge measurements
Some charge measurements were done today to check the ESDs after today's changes (see 19994). 
All quadrants of the ETMX ESD seem to be working OK. (Note that the charge measurements use the high-voltage driver.)
The ETMY measurements were corrupted due to a change in the bias gain. Saturation was found in the ETMY ESD during the measurements. Jenne found that the bias gain was 1, although it had been reduced last night (see 19978). It seems this gain was somehow changed back to 1 right before the charge measurements. I don't think it was done by the charge measurement script, but it is possible. So we see that the ESD works OK, but we have no charge measurement data for ETMY today.
Updated ETMX charge plot is available in the attachment.

Images attached to this report
H1 IOO
keita.kawabe@LIGO.ORG - posted 16:06, Tuesday 28 July 2015 (20002)
60Hz comb filters were moved from IMC WFS inputs to IMC DOF[123] filters

IMC WFS inputs have aggressive 60 Hz comb filters. These are not desirable when we want to use these WFSs to control the PSL periscope PZT at around 300 Hz to suppress some noise in DARM.

Though it's not clear if these filters are doing anything good for us, just to be on the safe side I moved all comb60 filters from the IMC WFS inputs to the IMC DOF filters.

Correct setting from now on:

comb60 in FM6 in H1:IMC-WFS_[AB]_[IQ][1234] (e.g. H1:IMC-WFS_A_Q3) are all OFF (they used to be ON).

comb60 in FM6 in H1:IMC-DOF_[123]_[PY] (e.g. H1:IMC-DOF_2_Y) are all ON (there used to be no filter in these).

DOF4 and DOF5 do NOT have comb60.
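The bracket patterns above expand to concrete channel names; a quick sketch of the expansion (useful for scripting the filter-switch check), using only the naming given in this entry:

```python
from itertools import product

# H1:IMC-WFS_[AB]_[IQ][1234] -- comb60 in FM6 now OFF (16 channels)
wfs_channels = [f"H1:IMC-WFS_{head}_{quad}{seg}"
                for head, quad, seg in product("AB", "IQ", "1234")]

# H1:IMC-DOF_[123]_[PY] -- comb60 in FM6 now ON (6 channels)
dof_channels = [f"H1:IMC-DOF_{dof}_{axis}"
                for dof, axis in product("123", "PY")]
```

That gives 16 WFS input banks to switch OFF and 6 DOF banks to switch ON, with DOF4 and DOF5 deliberately excluded.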

LHO General
corey.gray@LIGO.ORG - posted 16:00, Tuesday 28 July 2015 (19984)
Ops DAY Summary

Maintenance started with prep work before 8am by Jeff, Betsy, & Cheryl getting everything ready to take H1 down in a controlled way and then taking all guardians to a preferred states.

The all-encompassing RCG upgrade ended up taking more time than planned, but it's looking like we can start seeing light at the end of the tunnel.  Despite valiant efforts, Maintenance Day extended past noon.  A few remaining measurements are going on now, but H1 looks close to being back in a state where it can be returned to locking.

H1 AOS
jason.oberling@LIGO.ORG - posted 15:44, Tuesday 28 July 2015 (20000)
PR3 Optical Lever laser replaced

J. Oberling, E. Merilh

We replaced the PR3 oplev laser with one that has been stabilized in the lab, as well as installing the cooler for laser thermal stabilization and an armored 3m fiber; this is in accordance with ECR E1500224.  The serial number of the new PR3 laser is 198-1; the old SN is 191.  The new laser will be left to thermally stabilize for the next 4-6 hours, then we will start looking at spectrograms of the SUM signal during quiet times to determine if we need to adjust the laser power (since the thermal environment of the LVEA differs from the LSB optics lab).  The old laser, since it was showing no signs of failure, just glitching, will be taken to the LSB optics lab to be stabilized.

After talking with the control room we determined the PR3 optic is in a good alignment, so we realigned the PR3 optical lever.

H1 CDS (SUS)
james.batch@LIGO.ORG - posted 15:23, Tuesday 28 July 2015 (19999)
Modified BIOS settings for h1susey
Modified BIOS settings for new faster h1susey computer to more closely match the original computer BIOS settings.  This makes the new computer a bit slower, but it is hoped that the IOP CPU usage will be stable with the new settings.  We should know the results in a few days.

This work required restart of dolphin IPC connected models at EY, which is all except the h1susauxey models.
LHO VE
kyle.ryan@LIGO.ORG - posted 15:19, Tuesday 28 July 2015 - last comment - 22:47, Thursday 30 July 2015(19998)
Y-end NEG pump regeneration (EY_NEG_REGEN1_.....)
Kyle, Gerardo

0900 hrs. local 

Added 1.5" O-ring valve in series with existing 1.5" metal valve at Y-end RGA pump port -> Valved-in pump cart to  RGA -> Valved-in Nitrogen calibration bottle into RGA -> Energized RGA filament -> Valved-out and removed pump cart from RGA -> Valved-in RGA to Y-end 

???? hrs. local 

Began faraday analog continuous scan of Y-end 

1140 hrs. local 

Isolated NEG pump from Y-end -> Began NEG pump regeneration (30 min. ramp up to 250C, 90 min. soak, stop heat and let cool to 150C) 

1410 hrs. local 

Valved-in NEG pump to Y-end 

1430 hrs. local 

Valved-out Nitrogen cal-gas from Y-end

1440 hrs. local 

Valved-in Nitrogen to Y-end -> Stop scanning
Comments related to this report
gerardo.moreno@LIGO.ORG - 16:52, Tuesday 28 July 2015 (20004)VE

Plot of pressure at Y-End station before, during and after NEG regeneration.

Non-image files attached to this comment
gerardo.moreno@LIGO.ORG - 16:54, Tuesday 28 July 2015 (20005)

Response of PTs along the Y-arm to NEG pump regeneration.

Non-image files attached to this comment
kyle.ryan@LIGO.ORG - 14:38, Thursday 30 July 2015 (20058)
RGA and pressure data files for NEG regenerations to be centralized in LIGO-T1500408
Non-image files attached to this comment
michael.zucker@LIGO.ORG - 16:35, Thursday 30 July 2015 (20074)
Interesting!  As you predicted, the RGA is not super conclusive because of the background; but there seems a clear difference when you isolate the N2 calibration source. So your water and N2 may really be comparable to the hydrogen, say several e-9 torr each (comparing sum of peaks to the ion gauge).  The NEG will poop out after ingesting ~ 2 torr-liters of N2, so at 1000 l/s it will choke and need regen after a few weeks.  Which is I guess what it did. 

It would be good to clean up the RGA so we can home in on the N2 and water pressure, and especially HC's  (I expect the HC peaks in these plots are from the RGA itself). To get practical use out of the NEG we should pace how much of these non-hydrogen gases it sees. We can expect to only get about 50 regens after N2 saturation, and small amounts of HC may kill it outright. 

We should be able to estimate the net speed of the NEG before and after from the pressure rise and decay time (we can calculate the beam tube response if we presume it's all H2). 

rainer.weiss@LIGO.ORG - 20:55, Thursday 30 July 2015 (20077)
I have trouble seeing even the hydrogen pumping by the NEG by looking at the different scans.
Suggest you set the RGA up to look at AMU vs time and do the leak and pump modulation again. Plot amu 2, amu 12, amu 14, amu 28.
john.worden@LIGO.ORG - 22:47, Thursday 30 July 2015 (20081)

Rai,

That is on our list of things to do - make a table of the relevant amus' partial pressures.

Note that all the ascii data is at:

(see LIGO-T1500408-v1 for ascii data)

caution - 15 mbytes

Kyle can probably fish out the relevant data from the RGA computer so no need to plow through the whole file.

thanks for the comments, Mike and Rai.

H1 COC (ISC)
evan.hall@LIGO.ORG - posted 15:01, Tuesday 28 July 2015 (19968)
EY butterfly mode ringdown

Rana, Matt, Evan

Back on 2015-07-20, we accidentally rang up the butterfly mode of the EY test mass. We notched this mode out of the DARM actuation path, so that it is not excited by DARM feedback. Additionally, I installed a bandpass filter in mode 9 of the EY L2 BLRMS monitors so that we could monitor the ringdown of the mode.

We looked at 2 hours of data starting at 2015-07-20 11:01:30 Z. Looking at the DARM spectrum from that time, the frequency of the mode is 6053.805(5) Hz. From the BLRMS, we see a nice ringdown in the amplitude. Fitting an exponential to this ringdown, the time constant is 1300 s. This gives the Q of the mode as 2×10^7 or so. Uncertainty TBD.
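The Q quoted above follows from the standard relation for an exponential amplitude ringdown, Q = π f₀ τ. A quick check with the numbers in this entry:

```python
import math

f0 = 6053.805   # mode frequency [Hz], from the DARM spectrum
tau = 1300.0    # amplitude decay time constant [s], from the BLRMS fit

# For an amplitude ringdown A(t) = A0 * exp(-t / tau), Q = pi * f0 * tau.
Q = math.pi * f0 * tau
```

This gives Q ≈ 2.5×10^7, consistent with the "2×10^7 or so" quoted above.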

Non-image files attached to this report
H1 PSL
thomas.shaffer@LIGO.ORG - posted 13:36, Tuesday 28 July 2015 (19997)
ISS Diffracted Power Adjusted from ~14% to 7.2%

Refsignal was brought to -2.10V

H1 SUS
richard.mccarthy@LIGO.ORG - posted 12:29, Tuesday 28 July 2015 - last comment - 16:20, Tuesday 28 July 2015(19994)
EX LVLN ESD drive
In an attempt to make both end stations function the same, we installed the Low Voltage Low Noise ESD driver below the existing HV ESD driver.  Finished pulling the cables and hooked everything up.  I verified that I could get low voltage out, but could not proceed to HV testing due to site computer work.  I have left the controls for the system running through the LVLN system, but the high voltage cable is still driving from the ESD driver directly.  Retesting will continue after the software issues are fixed.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:20, Tuesday 28 July 2015 (20003)CDS
J. Kissel, R. McCarthy

In order to make the HV + LVLN driver function as expected, and to obey the wiring convention outlined in D1400177 (which is sadly different from the rest of the suspensions' 4-quadrant stages), we required a model change at the top level: a reordering of the outputs of the ESD control request from 
DAC3_0 = DC
DAC3_1 = UL
DAC3_2 = LL
DAC3_3 = UR
DAC3_4 = LR
to
DAC3_0 = DC
DAC3_1 = UR
DAC3_2 = LR
DAC3_3 = UL
DAC3_4 = LL
Note that this channel re-ordering corresponds to how Richard has reordered the cabling in analog, so there has been no overall change in signal ordering. Also note that this change has already been made to ETMY, but it didn't get much attention in the flurry to get the ETMY LVLN driver installed LHO aLOG 18622. 
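The reordering above amounts to a fixed permutation of the four quadrant outputs (DC stays put). A small sketch expressing it as a lookup, using only the orderings listed in this comment:

```python
# Old and new DAC3 output orderings, from the model change above.
OLD_ORDER = ["DC", "UL", "LL", "UR", "LR"]
NEW_ORDER = ["DC", "UR", "LR", "UL", "LL"]

# For each new DAC3 slot, which old slot carried that signal.
permutation = {new_slot: OLD_ORDER.index(quad)
               for new_slot, quad in enumerate(NEW_ORDER)}
```

So, for example, the UR quadrant moved from old DAC3_3 to new DAC3_1, which is the analog cabling swap Richard made.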

Attached are screenshots of the change.

The updated top-level model has been committed to the userapps repo as of rev 11106:
jeffrey.kissel@opsws10:/opt/rtcds/userapps/release/sus/h1/models$ svn log h1susetmx.mdl 
------------------------------------------------------------------------
r11106 | jeffrey.kissel@LIGO.ORG | 2015-07-28 15:49:04 -0700 (Tue, 28 Jul 2015) | 1 line

Changed L3 ESD DAC output order to accommodate new wiring for LVLN driver, installed July 28 2015.


Images attached to this comment
H1 GRD (GRD)
cheryl.vorvick@LIGO.ORG - posted 12:16, Tuesday 28 July 2015 - last comment - 12:37, Tuesday 28 July 2015(19992)
ISC_LOCK is still very DOWN DOWN and NONE NONE and CHECK_IR CHECK_IR'ed

Image attached - two DOWN states, two NONE states, two CHECK_IR states - why?  When did this happen?  Who will fix it?

One clue to the issue is that the "top" DOWN when selected will show up in the guardian log, but the "bottom" DOWN does not produce any guardian log entry.

Check out my first entry about this... here.

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 12:37, Tuesday 28 July 2015 (19995)

This is most likely a side effect of my attempts to improve the reload situation.  It's obviously something that needs to be fixed.

In the meantime, you probably need to restart that node to clear up the REQUEST_ENUM:

guardctrl restart ISC_LOCK

You should also then close and reopen the MEDM screens once the node has been restarted.

LHO VE (CDS, VE)
patrick.thomas@LIGO.ORG - posted 11:46, Tuesday 28 July 2015 - last comment - 15:54, Tuesday 28 July 2015(19991)
end Y Inficon NEG hot cathode gauge back on
I tried to turn the gauge back on through the Beckhoff interface by writing '01 02' to the binary field of the FB44:01 parameter but kept getting a response code corresponding to 'Emission ON / OFF failed (unspecified reason)' in FB44:03. Gerardo power cycled it and it came back on.
Comments related to this report
john.worden@LIGO.ORG - 13:28, Tuesday 28 July 2015 (19996)

Let's call these hot-filament ion gauges or Bayard-Alpert gauges rather than "hot cathode".  thx

patrick.thomas@LIGO.ORG - 15:54, Tuesday 28 July 2015 (20001)
Turned back off per Gerardo's request. Was able to do so through the Beckhoff interface.