Reports until 12:29, Tuesday 28 July 2015
H1 SUS
richard.mccarthy@LIGO.ORG - posted 12:29, Tuesday 28 July 2015 - last comment - 16:20, Tuesday 28 July 2015(19994)
EX LVLN ESD drive
In an attempt to make both end stations function the same, we installed the Low Voltage Low Noise (LVLN) ESD driver below the existing HV ESD driver. We finished pulling the cables and hooked everything up. I verified that I could get low voltage out, but could not proceed to HV testing due to site computer work. I have left the controls for the system running through the LVLN system, but the high-voltage cable is still driving from the ESD driver directly. Re-testing will continue after the software issues are fixed.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:20, Tuesday 28 July 2015 (20003)CDS
J. Kissel, R. McCarthy

In order to make the HV + LVLN driver function as expected, and to obey the wiring convention outlined in D1400177 (which is sadly different from the rest of the suspensions 4-quadrant stages), we required a model change at the top level: a reordering of the outputs of the ESD control request from 
DAC3_0 = DC
DAC3_1 = UL
DAC3_2 = LL
DAC3_3 = UR
DAC3_4 = LR
to
DAC3_0 = DC
DAC3_1 = UR
DAC3_2 = LR
DAC3_3 = UL
DAC3_4 = LL
Note that this channel re-ordering corresponds to how Richard has re-ordered the cabling in analog, so there has been no overall change in signal ordering. Also note that this change has already been made to ETMY, but it didn't get much attention in the flurry to get the ETMY LVLN driver installed (LHO aLOG 18622). 
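The re-ordering above amounts to swapping the UL/LL and UR/LR quadrant pairs on DAC channels 1-4 while leaving the DC channel in place. A minimal sketch of the remap (the dictionaries below are illustrative, not actual EPICS channel definitions):

```python
# Old and new orderings of the L3 ESD DAC outputs (DC channel unchanged).
OLD_ORDER = {0: "DC", 1: "UL", 2: "LL", 3: "UR", 4: "LR"}
NEW_ORDER = {0: "DC", 1: "UR", 2: "LR", 3: "UL", 4: "LL"}

def remap(old_channel: int) -> int:
    """Return the new DAC channel that carries the signal formerly on old_channel."""
    quadrant = OLD_ORDER[old_channel]
    return next(ch for ch, q in NEW_ORDER.items() if q == quadrant)
```

For example, the UL drive moves from DAC3_1 to DAC3_3 (`remap(1)` returns `3`), matching the analog re-cabling, so the end-to-end signal ordering is unchanged.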

Attached are screenshots of the change.

The updated top-level model has been committed to the userapps repo as of rev 11106:
jeffrey.kissel@opsws10:/opt/rtcds/userapps/release/sus/h1/models$ svn log h1susetmx.mdl 
------------------------------------------------------------------------
r11106 | jeffrey.kissel@LIGO.ORG | 2015-07-28 15:49:04 -0700 (Tue, 28 Jul 2015) | 1 line

Changed L3 ESD DAC output order to accommodate new wiring for LVLN driver, installed July 28 2015.


Images attached to this comment
H1 CDS (CAL, CDS, PSL, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 12:17, Tuesday 28 July 2015 (19993)
Maintenance Day -- Bugs found during RCG Install, restarting from Scratch
J. Kissel, B. Weaver, C. Vorvick, C. Gray, D. Barker, J. Batch

While we (Jim, Betsy, and myself) attempted to upgrade the RCG and compile, install, and restart all front-end code/models in a piecemeal fashion starting at 8:00a PDT this morning, we began to see problems as we recovered. We had intended to do the models in an order most conducive to IFO recovery, namely PSL, SUS, and SEI first. Also, to speed up the process, we (Jim, Betsy, and myself) began compiling and installing those models independently, and in parallel.

(1) We "found" (this is common CDS knowledge) that our simultaneous installs of PSL, SEI, and SUS front-end code all need to touch the
/opt/rtcds/lho/h1/target/gds/param/testpoint.par
file, and this *may* have resulted in collisions. After the AWG processes on the SUS models would not start, Betsy flagged down Jim and found that, since we were all installing at the same time, some of the SUS models didn't make it into the testpoint.par file. When AWG_STREAM was started by the front-end start script, it didn't find its list of test points, so it just never started.
This "bug" had never been exposed because the site-wide front-end code had never had this many "experts" running through the compilation/install process in parallel like this.

That's what we get for trying to be efficient.
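One way to avoid this class of collision is to serialize writes to the shared param file with an advisory lock, so concurrent installers cannot interleave or drop each other's entries. A sketch of the idea (this is not the actual RCG install code; the function name and file-update logic are placeholders):

```python
import fcntl

def append_testpoints(par_path, lines):
    """Append entries to a shared param file under an exclusive advisory lock,
    so concurrent installers can't interleave or drop each other's writes."""
    with open(par_path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until no other writer holds the lock
        try:
            for line in lines:
                f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

With every installer funneling its testpoint.par updates through a lock like this, parallel installs would queue up rather than clobber one another.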

Dave arrived halfway through the restarts (we had finished all corner SEI, SUS, PSL, and ISC upgrades by this point), and informed us that last night, after he had removed and regenerated the 
/opt/rtcds/lho/h1/chans/ipc/H1.ipc 
file by compiling everything, he had moved it out of the way temporarily, in case there were any model restarts overnight: if a model restarted with the new IPC file still in place, that model could potentially point to the wrong connection and wreak havoc on the whole system. However, we (Jim, Betsy, and I) didn't know that Dave had done this, so when we began compiling this morning, we were compiling on top of the old IPC file. 
Dave suggested that, as long as there were no IPC-related model changes between last night and when we got started, the IPC file should be fine, so we proceeded onward.

(2) However, as we got further along in the recovery process, we also found that some of the SUS (specifically those on HAM2 and HAM34) were not able to drive out past their IOP model DAC outputs. We had seen this problem on a few previous maintenance days, so we tried the fix that had worked before -- restarting all the front-end models again. When that *didn't* work, we began poking around the Independent Software IOP Watchdog screens, and found that several of these watchdogs were suffering from constant, large IPC error rates.

Suspecting that the 
/opt/rtcds/lho/h1/chans/ipc/H1.ipc 
file had also been corrupted by the parallel code installations, Dave suggested that we abandon ship.

As such (starting at ~11:00a PDT) we're 
- Deleting the currently referenced IPC file
- Compiling the entire world, sequentially (a ~40 minute process)
- Installing the entire world, sequentially (a ~30 minute process)
- Killing the entire world
- Restarting the entire world.
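The sequential rebuild amounts to: run one step at a time and stop on the first failure, which is exactly the discipline the parallel attempt lacked. A sketch of such a runner (the actual RCG build commands are placeholders; real compile and install steps take ~40 and ~30 minutes):

```python
import subprocess

def run_sequentially(steps):
    """Run shell commands one at a time, aborting on the first failure,
    so later steps never see a half-written IPC or testpoint file."""
    for cmd in steps:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            return False  # abort the whole rebuild
    return True

# Hypothetical rebuild sequence (placeholder commands, not real make targets):
REBUILD = [
    "echo 'compile the entire world'",
    "echo 'install the entire world'",
    "echo 'kill and restart the entire world'",
]
```

The point of the sketch is the control flow, not the commands: any failure halts the sequence instead of letting the remaining steps stomp on shared state.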

As of this entry, we're just now killing the entire world.

Stay tuned!
H1 GRD (GRD)
cheryl.vorvick@LIGO.ORG - posted 12:16, Tuesday 28 July 2015 - last comment - 12:37, Tuesday 28 July 2015(19992)
ISC_LOCK is still very DOWN DOWN and NONE NONE and CHECK_IR CHECK_IR'ed

Image attached - two DOWN states, two NONE states, two CHECK_IR states - why?  When did this happen?  Who will fix it?

One clue to the issue is that the "top" DOWN when selected will show up in the guardian log, but the "bottom" DOWN does not produce any guardian log entry.

Check out my first entry about this... here.

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 12:37, Tuesday 28 July 2015 (19995)

This is most likely a side effect of my attempts to improve the reload situation.  It's obviously something that needs to be fixed.

In the meantime, you probably have to restart that node to clear up the REQUEST_ENUM:

guardctrl restart ISC_LOCK

You should also then close and reopen the MEDM screens once the node has been restarted.

LHO VE (CDS, VE)
patrick.thomas@LIGO.ORG - posted 11:46, Tuesday 28 July 2015 - last comment - 15:54, Tuesday 28 July 2015(19991)
end Y Inficon NEG hot cathode gauge back on
I tried to turn the gauge back on through the Beckhoff interface by writing '01 02' to the binary field of the FB44:01 parameter but kept getting a response code corresponding to 'Emission ON / OFF failed (unspecified reason)' in FB44:03. Gerardo power cycled it and it came back on.
Comments related to this report
john.worden@LIGO.ORG - 13:28, Tuesday 28 July 2015 (19996)

Let's call these hot filament ion gauges or Bayard Alpert gauges rather than "hot cathode".  thx

patrick.thomas@LIGO.ORG - 15:54, Tuesday 28 July 2015 (20001)
Turned back off per Gerardo's request. Was able to do so through the Beckhoff interface.
H1 GRD (GRD, SEI)
cheryl.vorvick@LIGO.ORG - posted 09:54, Tuesday 28 July 2015 - last comment - 11:07, Tuesday 28 July 2015(19987)
trouble taking ITMY BSC1 HEPI and ISI to OFFLINE this morning

For Maintenance, we took all HEPI's and ISI's to OFFLINE.

I requested OFFLINE for BSC1 but it stalled and did not go there.

Jim helped, and got it to go OFFLINE.

TJ and I looked into what happened and why; here's the sequence of events:

ITMY BSC1, Request for OFFLINE at 15:13 did not execute:

- SEI_ITMY request for OFFLINE did not take it to OFFLINE

- ISI_ITMY_ST1 had been left managed by USER on 7/23 (found in the log by TJ and Cheryl)

- ISI_ITMY_ST1 being managed by USER prevented SEI_ITMY from taking it OFFLINE

- the fix was that Jim requested INIT on SEI_ITMY

- this changed ISI_ITMY_ST1 from managed by USER to managed by SEI_ITMY

- HOWEVER, HEPI was in transition to OFFLINE, and the INIT request interrupted that and sent HEPI back to its previous state, ROBUST_ISOLATED

- This brought HEPI back up when we wanted it OFFLINE

- Jim requested INIT again

- Jim requested OFFLINE again

- these second INIT/OFFLINE requests allowed SEI_ITMY to bring HEPI and ST1 and ST2 to OFFLINE

 

My questions:

- Jim, TJ, Cheryl

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 11:07, Tuesday 28 July 2015 (19990)

My questions:

  • do we always want Guardian to default to the previous state, as HEPI did when Guardian didn't recognize its current state?  This feature is what sent HEPI back up to ROBUST_ISOLATED when we were trying to reach OFFLINE.

Be specific about what you mean by "Guardian didn't recognize its current state".  It sounds like HPI_ITMY was always in full control of HEPI, and was reporting its state correctly.  SEI_ITMY was a bit confused since one of its subordinates had been stolen, but I think it still understood what states the subordinates were in.

When INIT is requested, the request is reset to its previous value right after the node jumps to INIT.  Presumably SEI_ITMY's previous request was one that called for HPI to be ROBUST_ISOLATED when INIT was requested.

  • today I saw that my request for OFFLINE on SEI_ITMY wasn't really able to take the system OFFLINE, so is this an anomaly or something we need to plan for, or is the default procedure to INIT a system, resetting everything to Managed by Guardian (SEI_ITMY in this case)?

This should be considered an anomaly.  Someone in the control room had manually intervened with the ITMY SEI system, which is why ISI_ITMY_ST1 was reporting "USER" as the manager.  The intervening user should have reset the system back to its nominal configuration (ISI_ITMY_ST1 managed by "SEI_ITMY") when they were done, which would have prevented this issue from occurring.

All of the problems here were caused by someone intervening in guardian and not resetting it properly.  Guardian has specifically been programmed not to second-guess the users.  In that case the users have to be conscientious about resetting things appropriately when they're done.
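The failure mode described above boils down to: a manager node will not drive a subordinate whose manager field is set to USER, and an INIT re-claims all subordinates. A toy illustration of that logic (this is not Guardian code; the class and function names are invented):

```python
class Node:
    """Toy stand-in for a guardian node with a manager field."""
    def __init__(self, name, manager):
        self.name = name
        self.manager = manager          # e.g. "SEI_ITMY" or "USER"
        self.state = "ROBUST_ISOLATED"

def request_offline(manager_name, subordinates):
    """Drive only the subordinates this manager actually owns;
    return the names of any it had to skip."""
    skipped = []
    for node in subordinates:
        if node.manager != manager_name:
            skipped.append(node.name)   # stolen by USER: leave it alone
            continue
        node.state = "OFFLINE"
    return skipped

def init(manager_name, subordinates):
    """INIT re-claims all subordinates for this manager (roughly what
    requesting INIT on SEI_ITMY accomplished)."""
    for node in subordinates:
        node.manager = manager_name
```

In this toy, an OFFLINE request with ISI_ITMY_ST1 still owned by USER skips that node, exactly the stall Cheryl saw; after an INIT, a second OFFLINE request succeeds on everything.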

H1 SEI
corey.gray@LIGO.ORG - posted 09:08, Tuesday 28 July 2015 (19986)
OPS: reset of HEPI L4C Accumulated WD Counters Tuesday 28th July 2015

ITMx & ITMy HEPIs had counters up around ~400-500 counts, and were reset this morning.

H1 PSL
corey.gray@LIGO.ORG - posted 08:54, Tuesday 28 July 2015 - last comment - 10:58, Tuesday 28 July 2015(19985)
PSL Weekly Report

I forgot to run the PSL Checklist during my shift yesterday.  Today I ran our PSLweekly script, and here is the output.  There was model work done on the PSL today, which is why many items are only ~10 min old.

Items to note:
ISS Diffracted power is HIGH!

Laser Status:
SysStat is good
Front End power is 32.51W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED

PMC:
It has been locked 0.0 days, 0.0 hr 10.0 minutes (should be days/weeks)
Reflected power is 2.061 W and PowerSum = 24.37 W.

FSS:
It has been locked for 0.0 days 0.0 h and 10.0 min (should be days/weeks)
TPD[V] = 1.521V (min 0.9V)

ISS:
The diffracted power is around 14.4% (should be 5-9%)
Last saturation event was 0.0 days 0.0 hours and 10.0 minutes ago (should be days/weeks)

Comments related to this report
edmond.merilh@LIGO.ORG - 10:58, Tuesday 28 July 2015 (19989)

Keep in mind that this was taken on a maintenance day after the mode cleaner had been taken down and a previous snapshot had been restored from prior to proper adjustment of the AOM Diffracted power. Not the most optimal time to be taking vital signs :)

H1 ISC
jeffrey.kissel@LIGO.ORG - posted 08:02, Tuesday 28 July 2015 (19982)
7/28 Maintenance Day Prep -- IFO intentionally unlocked
J. Kissel, C. Vorvick, B. Weaver, C. Gray

We attempted to bring down the IFO in a controlled fashion via the Guardian at 7:58a PDT. Cheryl will post details of how this went -- rather unexpectedly -- later.
H1 ISC
jeffrey.kissel@LIGO.ORG - posted 07:57, Tuesday 28 July 2015 - last comment - 08:11, Tuesday 28 July 2015(19980)
7/28 Maintenance Day Prep on the SDF system with the IFO fully locked in low noise at 60+ [Mpc]
J. Kissel, B. Weaver

In getting ready for maintenance day, we've done the following to the SDF while the IFO was fully locked at 60+ [Mpc]: 
Accepted 
- new coil driver states (in COILOUTF, ESDOUTF, and BIO STATE REQUESTs) on ETMX, ETMY, SRM, MC2
- SR3 gain to be zero
- new DRIVEALIGN GAINs for off-diagonal elements on ETMX, ETMY, ITMX, ITMY
- A few new TRAMP times on BS
- New work done in CAL-CS DARM calibration filters (H1:CAL-CS_DARM_FE_ETMY_L2_LOCK_L [turning OFF FM6, turning ON FM7] and CAL-CS_DARM_ERR_GAIN [from 1.32 to 1.22])
- Change in 538.1 calibration line frequency (from 538.1 to 329.9 [Hz]) and associated EXC amplitude (from 2.4 to 1.0 [ct])
- In the LSC model, accepted new TR_X/Y QPD SUM OFFSETs 
- ASC DHARD P TRAMP increase from 10 to 20 [sec]
- The output switches OMC M1 LOCK filters
     We *think* these should be ON, given the model change to support pushing these length and alignment control signals through the DRIVEALIGN matrix LHO aLOG 19714, but we don't think Sheila got to commissioning them.
- OMC-ASC_QPD_A_YAW_OFFSET (was -0.11 and saved as -0.15)
- PSL FSS COMMON_GAIN appears to have been tuned (LHO aLOG 19715) from 20.17544 to 20.7
- PSL-ISS_LOOP_STATE_REQUEST (was 0 -- which we think corresponds to the ISS OUTER LOOP being OFF -- to 32700 -- which we think corresponds to it being ON)
- FEC-93_DACDT_ENABLE (that's for the IOPISCEY model) DAC duotone signal being ON (turned on yesterday July 27).
- A ton of LSCAUX LOCKIN / DEMOD stuff that (we think) has to do with Kiwamu's calibration line cavity pole tracker (LHO aLOG 19852)

Reverted 
- New(?) work changing the OMC DCPD to DARM input matrix element (H1:LSC-ARM_INPUT_MTRX_RAMPING_1_1) from 0 to 13.3050658281 -- this should be controlled by guardian!
       - in fact, we tried to revert it, and some guardian is FORCING it back to the 13.3 number. So we'll leave this as is, but it sounds like we should eventually not monitor it
- MATCH gain on ISIHAM2 from 1.0 to 1.036
- PCAX and PCALY calibration lines had been turned OFF, so they will come back ON (534.7 [Hz] @ 19300 [ct] and 540.7 @ 9900 [ct] )

Things that had tiny differences that we used caput to force to a reasonable precision:
- caput H1:IMC-WFS_GAIN 0.1
- caput H1:PSL-FSS_COMMON_GAIN 20.7


Saved and loaded new EPICS DB (to clear uninitialized channels and/or channels not found) for 
- ASC (NOT INITIALIZED)
- OMC (NOT INITIALIZED)
- ODC MASTER (NOT FOUNDs and NOT INITIALIZED)
- SUS ITMX (NOT FOUNDs and NOT INITIALIZED)
- LSC (NOT FOUNDs)
- ISIETMX (NOT FOUNDs)
- ISIETMY (NOT FOUNDs)
- PEM EX and PEM EY (NOT FOUNDs)

Comments related to this report
betsy.weaver@LIGO.ORG - 08:11, Tuesday 28 July 2015 (19983)

Hit LOAD COEFFICIENTS on H1CALEX and H1CALEY since I saw that they were modified last night.  All LOAD COEFFICIENTs will be done with today's reboots anyway, but since we were already looking at the diffs of these, we went ahead and did them.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 07:55, Tuesday 28 July 2015 (19981)
CDS model and DAQ restart report, Monday 27th July 2015

pre-maintenance restarts. New PI models with associated DAQ restart.

model restarts logged for Mon 27/Jul/2015
2015_07_27 16:05 h1susetmxpi
2015_07_27 16:09 h1susetmxpi
2015_07_27 16:20 h1susetmxpi
2015_07_27 16:23 h1susetmxpi
2015_07_27 16:48 h1susetmxpi
2015_07_27 16:50 h1susetmypi
2015_07_27 16:51 h1susetmypi

2015_07_27 16:56 h1broadcast0
2015_07_27 16:56 h1dc0
2015_07_27 16:56 h1fw0
2015_07_27 16:56 h1fw1
2015_07_27 16:56 h1nds0
2015_07_27 16:56 h1nds1

H1 AOS
sheila.dwyer@LIGO.ORG - posted 01:12, Tuesday 28 July 2015 - last comment - 11:59, Wednesday 09 September 2015(19978)
work tonight
Lisa, Matt, Jenne, Evan, Stefan, Hang, Sheila

None of these things changed the DARM noise (just the calibration).

Comments related to this report
evan.hall@LIGO.ORG - 11:59, Wednesday 09 September 2015 (21328)

As a follow-up to point #4: I redid the bias reduction test, this time by reducing the voltage from 380 V to 190 V.

  • 2015-09-09 02:55:00 to 03:00:00: nominal configuration (380 V bias, digital gain -30 ct/ct in EY ESD drivealign)
  • 2015-09-09 03:04:20 to 03:09:20: reduced bias configuration (190 V bias, digital gain -60 ct/ct in EY ESD drivealign)

As before, there was no obvious change in the DARM noise. [See attachment.]
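The two configurations are designed so the actuation strength stays constant: to first order, the longitudinal ESD force scales as the product of the bias voltage and the control voltage (neglecting the quadratic control term), so halving the bias while doubling the digital gain should cancel out. A quick sanity check of the numbers listed above:

```python
# (bias [V], digital gain [ct/ct]) for the two EY ESD drivealign configurations
nominal = (380, -30)
reduced = (190, -60)

def drive_strength(bias_v, gain):
    # Linearized ESD force on the test mass ~ V_bias * V_control,
    # and V_control scales with the digital gain.
    return bias_v * gain

# Both products equal -11400, so the test isolates bias-dependent noise
# (e.g. charge coupling) rather than changing the DARM actuation.
assert drive_strength(*nominal) == drive_strength(*reduced)
```

This is why "no obvious change in the DARM noise" is the informative outcome: the drive was held fixed while only the bias changed.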

Non-image files attached to this comment
H1 ISC
stefan.ballmer@LIGO.ORG - posted 23:59, Monday 27 July 2015 (19976)
ASC cut-off filtering
Took ASC_CHARD_P FM1 (20dB gain) out from guardian, and instead turned on FM9 in ASC_CHARD_P and ASC_CHARD_Y (LP9 - a low pass at 9Hz).

Also added 25Hz-40Hz band stop filters to FM9 of ASC_DHARD_P and ASC_DHARD_Y (called Ncheck). These can safely be engaged in full lock, but they are not in guardian for now.
H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 23:51, Monday 27 July 2015 (19975)
Broadband coherence noise in different ISS and FSS configurations
Following alog 19856 we took the same coherence measurements (OMC_DCPDs vs AS_C_LF vs AS_A_RF36) in different configurations.

a)  Plot 1: we locked the OMC on a 45MHz SB at 15W of input power (to avoid PD saturation). In this configuration we retook the coherence measurement between AS_C_LF and OMC_DC, as well as AS_A_RF36 and OMC_DC. (Also, the ISS was off in this state)
 - While we see coherence above 2Hz - similar to that in alog 19856 - nothing is visible below that.
 - In the power spectrum of the side band we can identify two features:
   - At 19032Hz we see the effect of the sideband 00 mode going through arm resonance
     (actually, we are seeing the peak in between where the lower and upper audio sidebands go through resonance - resulting in maximum FM-to-AM conversion in between)
   - Similarly, at 9700Hz we see the feature that corresponds to the sideband 02 mode going through resonance.
     This might be a way to fine-tune the PRC-to-ARM mode matching when we do common CO2 heating runs.

b) Plot 2: comparing ISS_SECONDLOOP_GAIN at 25dB (dashed coherence) vs 0dB (solid coherence)
 - The 14kHz gain peaking of the ISS clearly shows up, but below ~8kHz there is no obvious change
 - for 25dB we estimate the UGF to be at 3kHz
 - for 0dB we estimate the UGF to be around a few hundred Hz.
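The two UGF estimates are consistent with each other if the open-loop gain falls roughly as 1/f near the crossover (an assumption, not a measured slope): a 25 dB gain reduction is a factor of 10^(25/20) ≈ 18 in linear gain, which would pull a 3 kHz UGF down to roughly 170 Hz. The arithmetic:

```python
# Check that the 0 dB UGF estimate follows from the 25 dB one, assuming a
# 1/f open-loop slope near the unity-gain crossover.
ugf_25db = 3000.0                        # Hz, estimated UGF at +25 dB gain
gain_reduction_db = 25.0
factor = 10 ** (gain_reduction_db / 20)  # linear gain ratio, ~17.8
ugf_0db = ugf_25db / factor              # ~169 Hz, i.e. "a few hundred Hz"
```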

c) Plot 3: comparing FSS_COMMON_GAIN at 20.7dB vs 14.7dB
 - I do not see any difference in the coherence.
Images attached to this report
LHO VE
kyle.ryan@LIGO.ORG - posted 22:54, Monday 27 July 2015 (19974)
Partial loss of instrument air pressure at X-end
Will monitor hourly -> CP8's LLCV will likely increase from its current value of ~ 50% open to something higher -> won't need to fix until tomorrow as long as CP8's level stays out of alarm -> otherwise will
H1 ISC
stefan.ballmer@LIGO.ORG - posted 16:28, Sunday 26 July 2015 - last comment - 11:00, Tuesday 28 July 2015(19950)
Lock-loss after 16h due to PRM saturation
I happened to witness the lock loss after 16h. We had several PRM saturations spread over ~8 minutes, before one of them took down the interferometer.
Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 10:47, Monday 27 July 2015 (19957)

Here are some cavity pole data using a Pcal line (see alog 19852 for some details):

The data is 28 hours long and contains three lock stretches: the first lasted for 9-ish hours, the second about 16 hours (as Stefan reported above), and the third 2 hours. As shown in the plot, the frequency of the cavity pole was stable on time scales longer than 2 hours; it does not show obvious drift on such a time scale. This is good. On the other hand, as the interferometer heats up, the frequency of the cavity pole drops by approximately 40 Hz at the beginning of every lock. This is a known behavior (see for example alog 18500). I do not see clear coherence of the cavity pole with the oplev signals, as opposed to the previous measurement (alog 19907), presumably due to better interferometer stability.

Darkhan is planning to perform a more accurate and thorough study of the Pcal line for these particular lock stretches.

Images attached to this comment
rana.adhikari@LIGO.ORG - 00:09, Tuesday 28 July 2015 (19977)CAL

As a test, you could inject a few lines in this neighborhood to see if, instead of cavity pole drift (which seems like it would require a big change in the arm loss), it's instead the SRC detuning changing the phase. With one line only, these two effects probably cannot be distinguished.

kiwamu.izumi@LIGO.ORG - 03:52, Tuesday 28 July 2015 (19979)

Rana,

It sounds like an interesting idea. I need to think a little more about it, but looking at a plot in my old alog (17876), having additional lines at around 100-ish Hz and 500 Hz may suffice to resolve the SRC detuning. It would be difficult, though, if the detuning turns out to be small, because the response would then look almost like a moving cavity pole. I will try checking it with high-frequency Pcal lines at around 550 Hz for these lock stretches. /* by the way, I disabled them today -- alog 19973 */
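Rana's suggestion can be illustrated with a toy model: treat each Pcal line's measured phase as a single cavity pole plus an extra phase term standing in for SRC detuning (a gross simplification -- real detuning is frequency dependent). With one line, one measured phase cannot pin down two unknowns; with two lines, both can be solved for. A sketch with invented numbers:

```python
import math

def line_phase(f, f_pole, detune_phase):
    """Toy model: single cavity pole plus a frequency-independent extra
    phase standing in for SRC detuning (a deliberate simplification)."""
    return -math.atan2(f, f_pole) + detune_phase

# Simulated measurements at two Pcal line frequencies (Hz)
f1, f2 = 100.0, 500.0
true_pole, true_detune = 355.0, 0.05     # invented "true" values
p1 = line_phase(f1, true_pole, true_detune)
p2 = line_phase(f2, true_pole, true_detune)

# With two lines we can solve for both unknowns; a brute-force grid search
# is used here for clarity (a real analysis would do a proper fit).
best = min(
    ((fp, d) for fp in range(100, 700)
             for d in (x / 1000 for x in range(-100, 101))),
    key=lambda pd: abs(line_phase(f1, pd[0], pd[1]) - p1)
                 + abs(line_phase(f2, pd[0], pd[1]) - p2),
)
```

The grid search recovers both the pole frequency and the detuning phase from just the two line phases, which a single line cannot do; the same logic motivates adding lines near 100 Hz and 500 Hz.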

kiwamu.izumi@LIGO.ORG - 11:00, Tuesday 28 July 2015 (19988)

In addition to the time series that I posted, I made another time series plot with the corner HWSs. This was a part of the effort to see impacts of the thermal transient on the DARM cavity pole frequency.

There seems to be a correlation between the spherical power of ITMY and the cavity pole in the first two-ish hours of every lock stretch. However, one thing which makes me suspicious is that the time constant of the spherical power seems a bit shorter than that of the cavity pole and also the arm powers -- see the plot below. I don't have a good explanation for this right now.

 

Unfortunately the data from the ITMX HWS did not look healthy (i.e. the spherical power suspiciously stayed at a high value regardless of the interferometer state), which is why I did not plot it. The ITMY data did not look great either: it showed a suspiciously quiet period starting at around t=3 hours and came back to a very different value at around t=5.5 hours. I am checking with Elli and Nutsinee about the health of the HWSs.

Images attached to this comment