Reports until 19:11, Wednesday 16 December 2015
H1 ISC (Lockloss)
sheila.dwyer@LIGO.ORG - posted 19:11, Wednesday 16 December 2015 (24278)
lockloss initing guardian

In loading the new code for the SR3 cage servo, I caused the guardian to go into error (because there is a self.counter that is initialized in main and used in run). This took us out of observing.  I took the CAGE_SERVO guardian to manual and turned it off and back on so that it would initialize the counter.  Then I attempted to INIT ISC_LOCK so that it would manage CAGE_SERVO again; however, the cage servo is actually managed by the DRMI guardian.  I took DRMI to manual and ran INIT, and this broke the lock for a reason I don't understand at all.
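For context, a minimal sketch of the failure mode (hypothetical code, not the installed CAGE_SERVO node): a counter created in a state's main() but used in run(). When new code is loaded while the node is sitting in that state, run() resumes without main() re-running, so the attribute is missing and the node goes to ERROR.

# Hypothetical sketch of the failure mode; not the actual CAGE_SERVO code.
from guardian import GuardState

class CAGE_SERVO_RUNNING(GuardState):
    def main(self):
        self.counter = 0    # only created when the state is (re)entered

    def run(self):
        # After a code reload the node resumes in run() without re-running
        # main(), so self.counter does not exist -> AttributeError -> ERROR.
        self.counter += 1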

Apologies.

H1 AOS
corey.gray@LIGO.ORG - posted 18:14, Wednesday 16 December 2015 (24277)
Transition To EVE Shift Update

TITLE:  12/16 EVE Shift:  00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC

STATE of H1:   NLN just under 80Mpc

Outgoing Operator:  Jim

Quick Summary:

Walked in to find the film crew in the middle of an interview, so I held off entering the Control Room for a while; Jim still had H1 running on the lock I started last night.  Once I was able to squeeze into the Control Room, H1 eventually had a lockloss, and the commissioners noted that this was probably due to an "EPICS freeze".  As far as I know, the symptom of an "EPICS freeze" is that all the StripTool traces flatline, so we appear to lose the ability to control H1 and then have a lockloss.  For this one, the flatline lasted on the order of 80 sec before the lockloss.  NOTE:  people mentioned "EPICS freezes" last night as well, but they were shorter and we stayed locked.

Filming continues!  :)

H1 CDS (CDS, GRD, Lockloss)
sheila.dwyer@LIGO.ORG - posted 18:10, Wednesday 16 December 2015 (24276)
Epics freeze SR3 cage servo lockloss

Dave, Sheila

At 00:47 UTC, we lost lock after a 45 second EPICS freeze.  The attached screenshot shows that the cage servo stopped updating during the freeze, which is the intended behavior since we modified the guardian (22754) to watch the PITMON and not update the servo when it does not update.

Dave looked up the guardian logs, and indeed the guardian did have the intended behavior.  However, when the servo comes back on after being off this long it puts out a large number.  We have now added a counter that gets incremented when the PITMON has not updated and cleared when it has.  If the PITMON has not updated in 16 cycles, the guardian will go to CAGE_SERVO_OFF, then return to CAGE_SERVO_RUNNING and reinitialize the servo.  This will repeat once a second until the EPICS freeze is over.
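A minimal sketch of that counter logic as I understand it (my own paraphrase; the channel name and the servo update are placeholders, not the code that was actually loaded):

# Sketch only: channel name, threshold handling, and the servo update are assumed.
from guardian import GuardState
# 'ezca' is provided by the guardian runtime for EPICS channel access.

class CAGE_SERVO_RUNNING(GuardState):
    def main(self):
        self.stale = 0
        self.last_pitmon = ezca['SUS-SR3_M3_OPLEV_PITMON']   # channel name assumed

    def run(self):
        pitmon = ezca['SUS-SR3_M3_OPLEV_PITMON']
        if pitmon == self.last_pitmon:
            self.stale += 1          # EPICS frozen: hold the servo output
        else:
            self.stale = 0           # fresh reading: clear the counter
            self.last_pitmon = pitmon
            # ... apply the cage servo correction here ...
        if self.stale >= 16:
            return 'CAGE_SERVO_OFF'  # bounce through OFF to reinitialize the servo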

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 17:25, Wednesday 16 December 2015 (24275)
CDS model restart report Monday-Tuesday 14th-15th December 2015

O1 days 88-89 

model restarts logged for Tue 15/Dec/2015
2015_12_15 08:26 h1isietmx
2015_12_15 08:28 h1isietmy
2015_12_15 08:33 h1isiitmx
2015_12_15 08:38 h1isiitmy
2015_12_15 08:42 h1isibs
2015_12_15 08:49 h1isiham6
2015_12_15 08:50 h1isiham5
2015_12_15 08:52 h1isiham4
2015_12_15 08:55 h1isiham3
2015_12_15 09:02 h1isiham2

Maintenance day: new ISI code for both the BSCs and HAMs. No DAQ restart was required.

model restarts logged for Mon 14/Dec/2015 No restarts reported

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 17:18, Wednesday 16 December 2015 - last comment - 17:21, Wednesday 16 December 2015(24273)
ops home directory moved off of /ligo
WP5658. Jim, Jonathan, Carlos, Dave:

The ops account home directory has been moved from /ligo/home/ops to /opshome/ops. The new location is a partition on a new 2TB disk drive on the NFS server cdsfs0. I have found three uses of this home directory:

the ops account on the operator workstation*
the VerbalAlert system
the Lock Logging system

Operator0's local ops account was changed to use the new home directory; Corey is testing this by logging in as ops. The VerbalAlert system was changed earlier, see alog. The lock logger runs on nuc0; this code was modified to use the new path, and the source code is now run out of the new location.

To prevent any unknown systems from writing to the old location, I have made /ligo/home/ops read-only.

* the operator workstation was a "loaner machine" called opsws9. Today Carlos and Jonathan permanently renamed it operator0.
Comments related to this report
david.barker@LIGO.ORG - 17:21, Wednesday 16 December 2015 (24274)
I should add that the reason for the change is to ensure the ops account continues to function if the /ligo file system fills up. This is a temporary fix until we replace cdsfs0 with a new NFS server with more disk capacity. We hope to install the new server soon after the conclusion of O1.
H1 AOS
david.barker@LIGO.ORG - posted 17:11, Wednesday 16 December 2015 (24271)
lock loss due to long epics freeze up
We had a large EPICS freeze-up event which knocked H1 out of lock. The duration of the freeze was 16:45:46 - 16:46:56 PST.
H1 General
jim.warner@LIGO.ORG - posted 16:11, Wednesday 16 December 2015 (24268)
Shift Summary

TITLE:  12/16 Day Shift:  16:00-0:00UTC

STATE of H1: Observing ~80 Mpc

Support:  Normal Control Room crowd

Quick Summary: Quiet day, welcome break after yesterday

Activities:

16:00 JeffB to Mech Room

18:30 RichM to Mech Room

19:00 John, Kyle, MikeZ to Y mid

23:30 Chris Biwer setting up and running hardware injections

H1 INJ (DetChar, INJ)
christopher.biwer@LIGO.ORG - posted 15:38, Wednesday 16 December 2015 - last comment - 21:24, Wednesday 16 December 2015(24265)
preparing for hardware injection tests
I'm preparing to do a set of hardware injection tests. I will update this aLog entry as injections are scheduled.

I first need to svn up the repo to get the injection files and check that the latest version of tinj is installed.
Comments related to this report
christopher.biwer@LIGO.ORG - 15:53, Wednesday 16 December 2015 (24266)
I am beginning by scheduling a single hardware injection. The schedule was updated with:
1134345924 1 1.0 coherentbbh10_1128678894_
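For reference, the columns appear to be GPS start time, injection type, scale factor, and waveform file prefix. A small helper of my own (not part of tinj) for reading lines in that format:

# Hedged helper, not part of tinj: parse schedule lines like
# "1134345924 1 1.0 coherentbbh10_1128678894_".
from collections import namedtuple

Injection = namedtuple("Injection", "gps inj_type scale waveform_prefix")

def parse_schedule(path):
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            gps, inj_type, scale, prefix = line.split()
            entries.append(Injection(int(gps), int(inj_type), float(scale), prefix))
    return entries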
christopher.biwer@LIGO.ORG - 16:05, Wednesday 16 December 2015 (24267)
tinj was enabled at 00:02 UTC at H1.
christopher.biwer@LIGO.ORG - 16:43, Wednesday 16 December 2015 (24269)
I did the following checks:
 * I checked the first injection to make sure it was flagged in the segment database. I queried H1:ODC-INJECTION_CBC:2 and L1:ODC-INJECTION_CBC:2 and found that both returned the segment [1134345929,1134345931). This looks good; it is what I expect (see the sketch after this list).
 * I checked the tinj.log file as the injection was being performed. It was logged as successful.
 * I checked the TRANSIENT_OUT16 channel to check that the signal was injected. See attached plot.
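A sketch of how the segment check could be reproduced offline, assuming gwpy is available (the flag names and GPS times are taken from the first bullet above):

# Hedged sketch using gwpy (assumed available and authorized for segment queries).
from gwpy.segments import DataQualityFlag

for flag in ("H1:ODC-INJECTION_CBC:2", "L1:ODC-INJECTION_CBC:2"):
    seg = DataQualityFlag.query(flag, 1134345900, 1134345960)
    print(flag, list(seg.active))   # expect [1134345929, 1134345931)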

Everything looks good. So I'm going to add the following to the schedule:
1134348928 1 1.0 coherentbbh11_1128678894_
1134350128 1 1.0 coherentbbh12_1128678894_
1134351328 1 1.0 coherentbbh13_1128678894_
1134352528 1 1.0 coherentbbh14_1128678894_
1134353728 1 1.0 coherentbbh15_1128678894_
1134354928 1 1.0 coherentbbh16_1128678894_
1134356128 1 1.0 coherentbbh17_1128678894_
1134357328 1 1.0 coherentbbh18_1128678894_
1134358528 1 1.0 coherentbbh19_1128678894_
Images attached to this comment
christopher.biwer@LIGO.ORG - 17:11, Wednesday 16 December 2015 (24270)
LHO lost lock a couple of minutes before the injection scheduled for 1134348928 1 1.0 coherentbbh11_1128678894_, so this injection only went into L1.

I've removed the lines:
1134350128 1 1.0 coherentbbh12_1128678894_
1134351328 1 1.0 coherentbbh13_1128678894_
1134352528 1 1.0 coherentbbh14_1128678894_

from the schedule since LHO is still trying to relock.
christopher.biwer@LIGO.ORG - 19:27, Wednesday 16 December 2015 (24279)
LHO  lost lock again.

The following line was then removed from the schedule file:
1134358528 1 1.0 coherentbbh19_1128678894_
christopher.biwer@LIGO.ORG - 19:48, Wednesday 16 December 2015 (24280)
Added the following lines to the schedule to re-do the injections that were skipped due to LHO losing lock:
1134359800 1 1.0 coherentbbh11_1128678894_
1134361000 1 1.0 coherentbbh12_1128678894_
1134362200 1 1.0 coherentbbh13_1128678894_
1134363400 1 1.0 coherentbbh14_1128678894_
1134364600 1 1.0 coherentbbh19_1128678894_

Just to recap, tonight's tests have the following injections in the schedule:
1134345924 1 1.0 coherentbbh10_1128678894_
1134348928 1 1.0 coherentbbh11_1128678894_
1134353728 1 1.0 coherentbbh15_1128678894_
1134354928 1 1.0 coherentbbh16_1128678894_
1134356128 1 1.0 coherentbbh17_1128678894_
1134357328 1 1.0 coherentbbh18_1128678894_
1134359800 1 1.0 coherentbbh11_1128678894_
1134361000 1 1.0 coherentbbh12_1128678894_
1134362200 1 1.0 coherentbbh13_1128678894_
1134363400 1 1.0 coherentbbh14_1128678894_
1134364600 1 1.0 coherentbbh19_1128678894_
christopher.biwer@LIGO.ORG - 21:24, Wednesday 16 December 2015 (24281)
These injections are now complete.
H1 SEI
jim.warner@LIGO.ORG - posted 14:47, Wednesday 16 December 2015 - last comment - 08:53, Friday 22 January 2016(24245)
RY motion at ETMX causing ISI ring up

**Short version: Increased RY input motion (maybe HEPI, maybe wind/ground) causes the ISI X loops to ring up when running the 45 mHz blends. The suspension/tidal drive is not the cause. The 90 mHz blends seem to be immune to this. Other than using the 90 mHz blends, I'm not sure how to fix the ISI's configuration short term to prevent it from ringing up. But we should put a StripTool of the end station ISI St1 CPS locationmons somewhere in the control room so operators can see when ground tilt has rung up an ISI. Alternatively, we could add a notification to VerbalAlarms or the DIAG node when an ISI has been moving something like 10 microns peak to peak for several minutes (a rough sketch of such a check is below).
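A rough sketch of the kind of check that notification could make (thresholds, channel choice, and sample rate are all placeholders of mine):

# Placeholder sketch of the proposed alert: flag when a St1 CPS locationmon
# has exceeded ~10 um peak-to-peak for several minutes.  Rates/thresholds assumed.
from collections import deque

WINDOW_S = 300            # "several minutes"
THRESH_UM = 10.0          # peak-to-peak threshold
recent = deque(maxlen=WINDOW_S)   # assumes one sample per second

def isi_rung_up(sample_um):
    """Feed one CPS locationmon sample [um]; True once the window exceeds threshold."""
    recent.append(sample_um)
    if len(recent) < recent.maxlen:
        return False
    return (max(recent) - min(recent)) > THRESH_UM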

This morning while the IFO was down for maintenance, Evan and I looked at ETMX to see if we could figure out what is causing the ISI to ring up. First we tried driving the L1 stage of the quad to see if some tidal or suspension drive was the cause. This had no effect on the ISI, so I tried driving on HEPI. When I drove HEPI X, the ISI rang up a bit, but no more than expected with the gain peaking of the 45 mHz blends. When I drove HEPI in RY, however, the ISI immediately rang up in X, and continued to ring for several minutes after I turned the excitation off. The attached image shows the ISI CPS X (red), RY (blue), HEPI IPS RY (green) and X (magenta). The excitation is visible in the left middle of the green trace, and also in the sudden increase in the red trace. I only ran the excitation for 300 seconds (from about 1134243600 to 1134243900), but the ISI rang for twice that. After the ISI settled down I switched to the 90 mHz blends and drove HEPI RY again. The ISI moved more in X, but it never rang up, even after I increased the drive by a factor of 5. The second plot shows the whole time series, same color key. The large CPS X motion (with a barely noticeable increase in the IPS RY) is the oscillation with the 45 mHz blends; the larger signal on the IPS RY (with a small increase in CPS X) is with the 90 mHz blends. The filter I used for each excitation was zpk([0 0],[.01 .01 .05 .05], 15111).
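For reference, a quick way to see the shape of that excitation filter, assuming the usual convention that the zpk zeros and poles are quoted in Hz (a sketch of mine, not the module that was loaded):

# Hedged sketch: magnitude of zpk([0 0], [.01 .01 .05 .05], 15111),
# taking the zeros/poles to be in Hz (assumed convention).
import numpy as np
from scipy import signal

two_pi = 2 * np.pi
z = two_pi * np.array([0.0, 0.0])                  # double zero at DC
p = -two_pi * np.array([0.01, 0.01, 0.05, 0.05])   # double poles at 10 and 50 mHz
k = 15111.0

f = np.logspace(-3, 0, 400)                        # 1 mHz to 1 Hz
w, h = signal.freqs_zpk(z, p, k, worN=two_pi * f)
print("peak |H| = %.3g near %.3g Hz" % (abs(h).max(), f[np.argmax(abs(h))]))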

Images attached to this report
Comments related to this report
brian.lantz@LIGO.ORG - 23:14, Thursday 21 January 2016 (25090)
Did a bit more analysis of this data - 
Not sure why things are so screwy.
There might be non-linearity in the T240s.

Jim's entry indicates that it is NOT a servo interaction with the tidal loop,
so it is probably something local - still not really sure what. Based on the plots below,
I strongly recommend a low-freq TF of Stage 1 (HEPI servos running, ISI damping loops on, iso loops off),
driven hard enough to push the Stage 1 T240s to +/- 5000 nm/s.

what I see
fig 1 (fig_EX_ringingXnY)- time series of X and Y and drive signal 
- This is the same as Jim's data, but we also see significant motion in Y
- In the TFs we need to look for X and Y cross coupling

fig 2 (fig_X_ringup_time) - this is the time I used for the other analysis -
We can see the CPS-X and T240-X signals here. Note that I have used bandpass_viafft to keep only data between 0.02 and 0.5 Hz.
The T240 and CPS signals are clearly related - BUT - does the T240 = derivative of the CPS?
The signals here are taken at the input to the blend filters.

fig 3 (fig_weirdTFs) - These are some TFs from ST1 X drive to ST1 CPS X and from ST1 X drive to ST1 T240.
If all the drive for X is coming from the actuators, then the CPS TF should be flat and the T240 TF should be Freq^1
The CPS TF looks fine; I cannot explain the T240 TF.
The coherence between the T240 and CPS signals is in the bottom subplot.

fig 4 (fig_coh) Coh for drive -> CPS, drive -> T240 and CPS->T240. All are about 1 from 0.03 to .15 Hz.
So the signals are all related, but not in the way I expect.
NOTE - If the ground were driving a similar amount to the actuators, then these TFs would be related by the loops and blend filters - 
I don't think this is the case. Decent driven TFs would be useful here.

fig 5 - sensor_X_difference : Take the numerical derivative of the CPS and compare it to the T240 as a function of time.
Also - take the drive signal * 6.7 (plant response at low freq from TF in fig 3) and then take the derivative of that.
These 3 signals should match - BUT they do not. The driven plant and the CPS signals are clearly similar, but the T240 is rather different looking, especially in the lower subplot. It is as if the higher frequency motion seen by the CPS is not seen by the T240.
What the heck?

fig 6 - fig_not_gnd - could it be from ground motion? So I add the ground motion to the CPS signal - but this doesn't look any more like the T240 signal than the straight CPS signal. So the signal difference is not from X ground motion.
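For anyone who wants to repeat the fig 2/fig 5 consistency check, here is a minimal sketch of the idea (my own reconstruction with placeholder data, not the script used for these plots): band-limit both signals, differentiate the CPS displacement, and compare with the T240 velocity.

# Sketch with placeholder arrays; substitute the fetched ST1 CPS/T240 data.
import numpy as np
from scipy import signal

fs = 256.0                      # sample rate assumed; use the actual DAQ rate
dt = 1.0 / fs
t = np.arange(0, 600, dt)
cps = np.random.randn(t.size)   # placeholder for ST1 CPS X displacement [nm]
t240 = np.random.randn(t.size)  # placeholder for ST1 T240 X velocity [nm/s]

# 0.02-0.5 Hz bandpass, standing in for bandpass_viafft
b, a = signal.butter(4, [0.02, 0.5], btype='bandpass', fs=fs)
cps_bp = signal.filtfilt(b, a, cps)
t240_bp = signal.filtfilt(b, a, t240)

# numerical derivative of the CPS displacement -> velocity
cps_vel = np.gradient(cps_bp, dt)

# if both sensors see the same motion, this residual should be small
residual = cps_vel - t240_bp
print("RMS(T240) = %.3g   RMS(residual) = %.3g" % (np.std(t240_bp), np.std(residual)))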

Non-image files attached to this comment
richard.mittleman@LIGO.ORG - 08:53, Friday 22 January 2016 (25094)

Has the tilt decoupling on Stage 1 been checked recently? With the 45 mHz blends running we are not far from instability in this parameter (a factor of 2, maybe?).

H1 CDS
david.barker@LIGO.ORG - posted 14:15, Wednesday 16 December 2015 - last comment - 23:58, Wednesday 31 August 2016(24264)
Verbal Alarms code changed to log to the logging directory
The Verbal Alarms code was logging to the ops home directory. Prior to the move of this home directory (WP5658) I have modified the code to log to a new directory:

/ligo/logs/VerbalAlarms

We restarted the program at 14:04 and verified the log files are logging correctly.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:45, Friday 26 August 2016 (29336)DetChar, GRD, ISC, SYS
These verbal log files actually live one level deeper, in
/ligo/logs/VerbalAlarms/Verbal_logs/

For the current month, the log files live in that folder. 
However, at the end of every month, they're moved into the dated subfolders, e.g.
/ligo/logs/VerbalAlarms/Verbal_logs/2016/7/

The text files themselves are named "verbal_m_dd_yyyy.txt".
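A small helper of my own (not the site code) that builds the expected path for a given date, taking the month as unpadded and the day as two digits from the pattern above:

# Hedged helper reflecting the naming/layout described above; not the site code.
import datetime, os

BASE = "/ligo/logs/VerbalAlarms/Verbal_logs"

def verbal_log_path(day):
    # files are named verbal_m_dd_yyyy.txt, in dated subfolders year/month
    fname = "verbal_%d_%02d_%d.txt" % (day.month, day.day, day.year)
    return os.path.join(BASE, str(day.year), str(day.month), fname)

print(verbal_log_path(datetime.date(2016, 7, 4)))
# -> /ligo/logs/VerbalAlarms/Verbal_logs/2016/7/verbal_7_04_2016.txt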

Unfortunately, these are not committed to a repo where the logs might be viewed off site. Maybe we'll work on that.

Happy hunting!
thomas.shaffer@LIGO.ORG - 23:58, Wednesday 31 August 2016 (29428)

The Verbal logs are now copied over to the web-exported directory via a cronjob. Here, they live in /VerbalAlarms_logs/$(year)/$(month)/

The logs in /ligo/logs/VerbalAlarms/Verbal_logs/ will now always be in their month subfolders, even the current ones.

H1 SEI
hugh.radkins@LIGO.ORG - posted 11:07, Wednesday 16 December 2015 (24262)
Looking at YAW ASC for eval of 45mHz Z Blends--no real conclusion

Attached are two side-by-side 10 hour trends during observation mode, with Quiet_90 blends on Z in the left group and 45 mHz Z blends in the right group.  These are about four days apart, 8 vs 12 Dec.  I've scaled each plot to be the same on the two trends.

The first four signals are X & Y ground motion, and the seismic environment is a mix, with some signals better and others worse in the second group.  The ASC signals are very close, with only subtle differences.  We need to look at spectra now, but this says either the ASC doesn't care or this yardstick is too coarse.

Images attached to this report
H1 DetChar (SEI)
nutsinee.kijbunchoo@LIGO.ORG - posted 03:42, Wednesday 16 December 2015 - last comment - 12:23, Wednesday 16 December 2015(24255)
90 mHz blend seems to be causing glitches between 10-30 Hz

Not sure if anyone has already caught this. Switching the ETMX blend to Quiet_90 on Dec 14th caused glitches to appear around the 20 Hz region (Fig 1 - starting at 20:00:00 UTC), while switching both ETMX and ETMY to Quiet_90 everywhere caused glitches to appear around the 10 Hz and 30 Hz regions (Fig 2 - starting at 9:00:00 UTC). Wind speed has been low (<5 mph) and the useism (0.03-0.1 Hz) has been around 0.4 um/s. The BNS range has been glitchy since the blend was switched, but the lock has been relatively more stable. The question is: do we want clean data but a constant risk of losing lock when the tidal rings up, or slightly glitchy data but a relatively more stable interferometer?

Images attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 04:28, Wednesday 16 December 2015 (24257)

Tried switching ETMX X to the 45 mHz blend again. Looking good so far.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 04:11, Wednesday 16 December 2015 (24256)

After talking to Mike on the phone we decided to try switching both ETMs back to the 45 mHz blends. I'm doing this slowly, one DOF at a time. Things got better momentarily when I switched ETMX X to the 45 mHz blend, but soon tidal and CSOFT started running away. I had to leave ETMX X at 90 mHz. Out of Observing from 11:58:07 - 12:11:02 UTC.

nutsinee.kijbunchoo@LIGO.ORG - 04:33, Wednesday 16 December 2015 (24258)

And the tidal is back... I switched ETMX X to 90 mHz. 45 mHz is used everywhere else.

Images attached to this comment
evan.hall@LIGO.ORG - 12:23, Wednesday 16 December 2015 (24263)

Switching to the 90 mHz blends resulted in the DARM residual becoming dominated by the microseism. The attachment shows the residual before and after the blend switch on the 14th; the rms increases from 5×10^-14 m to 8×10^-14 m.

As a first test in eliminating this nonstationarity, we should try engaging a boost to reduce the microseism contribution to DARM.

The other length loops (PRCL, MICH, SRCL) are not microseism dominated.

Similar to DARM, the dHard residuals are microseism-dominated and could also stand to be boosted, although this would require some care to make sure that the loops remain stable.

[Also, the whitening filter for the calibrated DARM residual is misnamed; the actual filter is two zeros at 0.1 Hz and two poles at 100 Hz, but the filter name said 1^2:100^2. I've changed the foton file to fix this, so it should be reloaded on next lock loss.]
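For completeness, a quick sanity check of the stated shape (a hedged snippet of mine, not taken from the foton file): two zeros at 0.1 Hz and two poles at 100 Hz give a response rising as f^2 between the corners.

# Hedged check of the whitening shape quoted above (unnormalized gain).
import numpy as np
from scipy import signal

two_pi = 2 * np.pi
z = -two_pi * np.array([0.1, 0.1])      # two zeros at 0.1 Hz
p = -two_pi * np.array([100.0, 100.0])  # two poles at 100 Hz
k = 1.0

f = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])
_, h = signal.freqs_zpk(z, p, k, worN=two_pi * f)
for fi, hi in zip(f, np.abs(h)):
    print("%8.2f Hz   |H| = %.4g" % (fi, hi))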

Non-image files attached to this comment
H1 SEI
hugh.radkins@LIGO.ORG - posted 11:49, Tuesday 15 December 2015 - last comment - 08:52, Wednesday 16 December 2015(24240)
LHO ISIs updated to latest--smoother blend switching & immediate saturations clearing

Following aLog 24208, all LHO ISI platforms were restarted.  Most platforms de-isolated nicely, but HAM2 decidedly did not.  New safe.snaps were captured for all platforms; this captured the FF paths being off at startup and the GS13s being in low gain.  The guardian was adjusted for HAM2 to disable the GS13 gain switching.

All snaps and isi/h1 guardians were committed to the svn.

Comments related to this report
hugh.radkins@LIGO.ORG - 08:52, Wednesday 16 December 2015 (24261)

The HAM ISIs were restarted to capture a code correction that clears saturation immediately upon request.  The BSCs got this fix as well.

Also, since HAM2 & 3 do not tolerate GS13 gain switching via guardian, that feature, while available, is disabled.  So, upon an FE restart, the GS13s will be in low gain and the safe.snap SDF will be happy.  But under OBSERVE.snap, the GS13s in low gain will show up as red.  These will need to be switched via the SEI Commands scripts.
