Displaying reports 59501-59520 of 83119.
Reports until 03:42, Wednesday 16 December 2015
H1 DetChar (SEI)
nutsinee.kijbunchoo@LIGO.ORG - posted 03:42, Wednesday 16 December 2015 - last comment - 12:23, Wednesday 16 December 2015(24255)
90 mHz blend seems to be causing glitches between 10-30 Hz

Not sure if anyone has already caught this. Switching the ETMX blend to Quiet_90 on Dec 14th caused glitches to appear around the 20 Hz region (Fig 1, starting at 20:00:00 UTC), while switching both ETMX and ETMY to Quiet_90 everywhere caused glitches to appear around the 10 Hz and 30 Hz regions (Fig 2, starting at 9:00:00 UTC). Wind speed has been low (<5 mph) and the useism (0.03-0.1 Hz) has been around 0.4 um/s. The BNS range has been glitchy since the blend was switched, but the lock has been relatively more stable. The question is: do we want clean data but a constant risk of losing lock when the tidal rings up, or slightly glitchy data but a relatively more stable interferometer?

Images attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 04:28, Wednesday 16 December 2015 (24257)

Tried switching ETMX X to 45 mHz again. Looking good so far.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 04:11, Wednesday 16 December 2015 (24256)

After talking to Mike on the phone we decided to try switching both ETMs back to the 45 mHz blend. I'm doing this slowly, one dof at a time. Things got better momentarily when I switched ETMX X to the 45 mHz blend, but soon tidal and CSOFT started running away. I had to leave ETMX X at 90 mHz. Out of Observing from 11:58:07 - 12:11:02 UTC.

nutsinee.kijbunchoo@LIGO.ORG - 04:33, Wednesday 16 December 2015 (24258)

And the tidal is back... I switched ETMX X to 90mHz. 45 mHz is used everywhere else.

Images attached to this comment
evan.hall@LIGO.ORG - 12:23, Wednesday 16 December 2015 (24263)

Switching to the 90 mHz blends resulted in the DARM residual becoming dominated by the microseism. The attachment shows the residual before and after the blend switch on the 14th; the rms increases from 5×10^-14 m to 8×10^-14 m.
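As a rough cross-check (my arithmetic, not from the log): if the excess noise is uncorrelated with the pre-switch residual, the two contributions add in quadrature, putting the microseism contribution after the switch at about 6×10^-14 m:

```python
import math

# rms of the DARM residual before/after the blend switch, in meters
# (values quoted in the entry above)
rms_before = 5e-14
rms_after = 8e-14

# assuming the added microseism noise is uncorrelated with the rest of
# the residual, the contributions add in quadrature
excess = math.sqrt(rms_after**2 - rms_before**2)
print(f"inferred microseism contribution: {excess:.2e} m")
```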

As a first test in eliminating this nonstationarity, we should try engaging a boost to reduce the microseism contribution to DARM.

The other length loops (PRCL, MICH, SRCL) are not microseism dominated.

Similar to DARM, the dHard residuals are microseism-dominated and could also stand to be boosted, although this would require some care to make sure that the loops remain stable.

[Also, the whitening filter for the calibrated DARM residual is misnamed; the actual filter is two zeros at 0.1 Hz and two poles at 100 Hz, but the filter name said 1^2:100^2. I've changed the foton file to fix this, so it should be reloaded on next lock loss.]
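For reference, a quick sketch (mine, not from the foton file) of the magnitude response of a filter with two zeros at 0.1 Hz and two poles at 100 Hz; the unity-DC-gain normalization is an assumption:

```python
import math

def whitening_mag(f, f_zero=0.1, f_pole=100.0, order=2):
    """Magnitude response of a simple zpk whitening filter:
    `order` zeros at f_zero and `order` poles at f_pole,
    normalized to unity gain at DC."""
    s = 2j * math.pi * f
    h = ((1 + s / (2 * math.pi * f_zero)) /
         (1 + s / (2 * math.pi * f_pole))) ** order
    return abs(h)

# between the zeros and poles the gain rises roughly as (f / f_zero)^2
print(whitening_mag(10.0))   # ~1e4 at 10 Hz
```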

Non-image files attached to this comment
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 01:05, Wednesday 16 December 2015 - last comment - 01:15, Wednesday 16 December 2015(24253)
Out of Observing to switch ETMX ISI blend

Tidal was running away. Switched the 45mHz blend to 90mHz and saved the lock. Out of Observing from 09:00:12 - 09:01:43 UTC. ETMY blend remains unchanged.

Images attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 01:15, Wednesday 16 December 2015 (24254)

Out of Observing again to change the ETMY blend to match ETMX (09:10:33 - 09:11:55). I wasn't sure of the consequences of having the two ETM ISIs on different blends. The blend switch went smoothly. Now both ETM ISIs are on the 90 mHz blend.

Images attached to this comment
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 23:55, Tuesday 15 December 2015 (24252)
Owl Shift Transition

TITLE:  12/16 OWL Shift 08:00-16:00UTC (00:00-08:00 PST), all times posted in UTC

Outgoing Ops: Corey

Quick Summary: Useism still near the 50th percentile but slowly increasing. Wind speed ~10-20 mph. Tidal fluctuation comes and goes. Using the Quiet90 blend on ETMX Y and ETMY X; the rest is on 45 mHz.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 23:23, Tuesday 15 December 2015 (24249)
EVE Ops Summary

TITLE:  12/15 EVE Shift:  00:00-08:00UTC (16:00-00:00PDT), all times posted in UTC     

STATE of H1:   NLN 

Incoming Operator:  Nutsinee

Support:  Jenne as Maintenance Re-Locking Commissioner

Quick Summary:

Walked in to Maintenance recovery, and Jenne damped down roll/bounce modes to allow H1 to get to NLN.  Had one lockloss which was a bit of a mystery.
 
Seeing some oscillations on the tidal StripTool of nuc1; these correspond with notable oscillations on the EY & EX ISIs (see attached). If the oscillations get worse, Nutsinee may try transitioning to having both 90mHz Blends on for each axis.
Seismically: useism has a slight upward trend over the last 24hrs & slightly breezy tonight (not over 20mph though).
 
o Rebooted opsws9
o EX lights are on
 
Shift Log:
  • 1:22 RO Water Alarm
  • 2:11 "Hardware injections stopped" Verbal Alarm (acknowledged shortly after)
  • 2:21 Observing Mode
  • 4:45 FMCS CS REHEAT MAJOR/HIGH Alarm
  • 5:20 Lockloss
  • 5:24 Cleared HEPI WD counters (Tues task from Shift Checksheet)
  • 5:54 Observing
Images attached to this report
Non-image files attached to this report
H1 General
corey.gray@LIGO.ORG - posted 21:58, Tuesday 15 December 2015 (24251)
5:20 Mysterious Lockloss, 5:54 Observing

Not sure about what caused the Lockloss. DRMI locked up in about 11 min (just before I was going to try PRMI). Everything went smoothly through the Guardian steps, except when I had to engage the 2nd ISS Loop by hand; I think I was not quite at zero, and this generated an OMC DCPD saturation (which produced huge dips on many signals on the StripTool... but we managed to stay locked!). Waited for the range to get up to around 80 Mpc, and then went to Observing.

LHO General
corey.gray@LIGO.ORG - posted 20:01, Tuesday 15 December 2015 (24250)
Mid Shift Summary

All is going well. useism looks like it has been increasing slightly over the last 24hrs. Thought winds were going to die down, but they jumped up to around 10 mph.

Doug at LLO called to inform me they are out of Observing mode due to A2L measurements and will need to stay out to bring back their TCSy laser (which will most likely break the lock).

Noticed that there is some redness on the ODC Mini Overview window on nuc1 (some of this is related to the BS ISI Fully Isolated change).

Range is at a nice 80Mpc.

H1 General
corey.gray@LIGO.ORG - posted 18:44, Tuesday 15 December 2015 (24248)
H1 Back To ¡OBSERVING! After Maintenance Day

After being down 12.5+ hrs, H1 is back to Observing. For the last 1-2 hrs, Bounce & Roll modes had to be carefully addressed.

There were a few SDFs which needed to be ACCEPTED (namely, the ETMx & ETMy BLENDS...we are basically still running with the 45mHz Blend state [with useism at 0.5um/s]).

Jenne ran an A2L measurement just prior to going to Observing.

 

LHO General
corey.gray@LIGO.ORG - posted 17:25, Tuesday 15 December 2015 (24246)
Transition To EVE Shift Update

TITLE:  12/15 EVE Shift:  00:00-08:00UTC (16:00-00:00PDT), all times posted in UTC     

STATE of H1:   Locking & recovering from Maintenance Day

Outgoing Operator:  Jim

Quick Summary:

As I walked in, they had locked DRMI for the first time. As Guardian progressed, Jenne noticed oscillations on the AS camera, which were ultimately traced to a rung-up ITMX bounce mode that Jenne eventually damped. Currently working on getting H1 to RF_DARM, so we can take a look at rung-up roll modes & address them.
 
There is also a film crew in the Control Room filming interviews & footage.
H1 General
jim.warner@LIGO.ORG - posted 16:02, Tuesday 15 December 2015 (24244)
Shift Summary

TITLE:  12/15 Day Shift 16:00-24:00UTC

STATE Of H1: Re-locking

SUPPORT: Typical CR crowd

END-OF-SHIFT SUMMARY: Busy Tuesday.

Activity:

8:00 PeterK to LVEA to start Laser Safe transition
8:00 Hugh starting ISI restarts
8:00 Richard to EX
8:20 LVEA Transitioned to laser safe
8:30 JeffB heading to EY for dust monitor work
8:30 JeffK starting charge measurements on EX&EY, done 9:30
8:30 Gerardo to LVEA looking for OFI parts
8:45 Sheila offloading SR3 cage servo
8:45 BSC-ISI restarts done
9:00 Jason & Ed working on BS oplev, done 9:30
10:30 Karen/Christina done at EX, moving to EY
10:45 JoeD working on Xarm
11:20 Bubba to LVEA checking LTS containers, done 11:30
13:30 Movie crew in CR, starting initial alignment
H1 CAL
jeffrey.kissel@LIGO.ORG - posted 15:39, Tuesday 15 December 2015 (24241)
Charge Measurement Update; EY UR Quadrant Surpasses 30 [V], but Long. Actuation Still OK ... For Now
J. Kissel

Gathered the weekly charge measurements today, and -- because we're getting close to the (prior) threshold for flipping the ESD bias sign on ETMY, and we weren't sure if we were seeing a convincing pattern / trend of charge accumulation / slow-down (see LHO aLOG 23866) -- gathered the longitudinal actuation strength as well. 

The messages:
- ETMX = still fine. No worries there.
- ETMY = 
    - The ETMY UR quadrant, measured in Yaw, has now surpassed -30 [V] effective bias voltage according to the optical lever measurements. This was the level at which we'd flipped the bias last time, when 3 or 4 out of 8 measurements had shown ~ -30 [V] (see LHO aLOG 22561 and LHO aLOG 22135).
    - All other quadrants are only at roughly -10 [V] effective bias, including the Pitch measurement for the same quadrant. 
    - This quadrant (as measured by Yaw) is definitely charging faster than the other quadrants (again, as measured by Yaw), and the relative actuation strength in Yaw is already back to +7% of the reference time.
    - The longitudinal relative actuation strength has not changed as much; it has only trended to 5%. It is still difficult to say whether there is a significant decrease in the rate of strength change (let's call it the deceleration of actuation strength). By eye, you could easily convince yourself that the rate of change is decreasing, but if you gather data points in the longitudinal strength spaced ~two weeks apart, the rate jumps around inconsistently between 0.7 - 1% per 2 weeks, or 0.4 - 0.5% per week.

There's 1 month left in the run, or 4 weeks. So, if 5-6% of relative actuation strength discrepancy from the reference time is our threshold (like it was for the last flip) then we'll need to do the flip before the end of the run.
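A back-of-the-envelope projection (my arithmetic, using the rates quoted above):

```python
# current longitudinal actuation strength discrepancy and the observed
# drift rates, as quoted in the entry above
current_discrepancy = 5.0          # percent
rate_low, rate_high = 0.4, 0.5     # percent per week
weeks_left = 4

# projected discrepancy at the end of the run if the drift continues
projected = [current_discrepancy + r * weeks_left
             for r in (rate_low, rate_high)]
print(projected)  # roughly 6.6 - 7.0 percent, past the 5-6% threshold
```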

As usual, you can ask "well, what's LLO doing?" Their actuation strength discrepancy of the test mass stage w.r.t. their reference time is also at roughly 5%. They had made noise about fixing it (not by flipping the bias, but by creating a new reference model) two to three weeks ago, when they were first investigating their 10 [Mpc] drop problems (e.g. LLO aLOG 23488), but they've not pulled the trigger since (which I prefer!!). However -- though Stuart hasn't yet made the same L, P, and Y comparison -- my impression is that their actuation strength acceleration / deceleration is lower than ours.

In summary, we're still in a (dark) grey area for making the call on whether it's worth the 8-hour down-time of flipping the bias, especially with respect to getting another round of full-actuation-strength measurements a la LHO aLOG 21280. Of course, we *could* try to gather both at the same time, but that starts to get really ambitious -- and we've already seen that when pressed for time, we make mistakes (e.g. LHO aLOG 21005). Further complicating the issue are the holidays -- such measurements require Kiwamu and me present for the full time, and the holidays make that cross-section lower than normal (this is why I suggested to Mike and Brian that we either do it this week or wait until the end of the run).

----------------------------
Some details on how to gather the test mass longitudinal actuation strength data from the LDAS/DMT tool "SLM" (Standard Line Monitor):
1) Gather the ascii dump of the raw data (which merely calculates the amplitude of each calibration line) from the SLM Tool's webpage. These are available / updated once a day (https://ldas-jobs.ligo-wa.caltech.edu/~gmendell/pcalmon/daily-pcalmonNavigation.html) but for these studies you want the monthly collection, which is available here (https://ldas-jobs.ligo-wa.caltech.edu/~gmendell/pcalmon/monthly-pcalmonNavigation.html). 
These should be saved to 
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/CAL_PARAM/
(simply because the CalSVN is far more accessible in the control room and on laptops than Greg's home folder on the LDAS clusters.)

2) Analyze the raw data using
/ligo/svncommon/CalSVN/aligocalibration/trunk/Projects/PhotonCalibrator/scripts/SLMTool/SLMData_analysis.m

for which you should only need to change the name of the parameter file that is called on Line 6 (right under where a comment says "Load Parameter File"). Of course, you should create a new parameter file (of which there are plenty of examples in that same folder, called parameter_*.m). This will save a new .mat file into 
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/
(with name as specified in your SLMData_analysis parameter file) that covers the duration of data that is new.

3) Concatenate earlier results with the new results you've gathered. This, regrettably, is still a rather clunky, by-hand process. For example, today I had to take the previously generated .mat file, 
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-12-01_Sep-Oct-Nov_ALLKappas.mat
and load it with my newly generated data from thus far in December,
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/2015-12-15_H1_Kappas_Nov30_to_Dec15_2015.mat
concatenate each variable (e.g. kappa_TST = [file1.kappa_TST(:); file2.kappa_TST(:)]), and then save it as a new file,
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/2015-12-15_H1_ALLKappas_Sep10_to_Dec15.mat
Since the results from the SLM tool are only available in at-most month-long chunks, this has to be done every new month. Note that, for the current month, even though the link suggests only data up to the first of the month is available, all data up to 00:00 UTC of the current day of the month is available.
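The concatenate-and-save step could in principle be scripted; here is a minimal Python sketch of the idea (the variable names mirror the MATLAB example above, but the load/save mechanics and file contents are stand-ins, not the actual CalSVN files):

```python
# Minimal sketch of step 3: merge per-month kappa results into one record.
# Each "file" is represented as a dict of variable name -> list of samples,
# standing in for the .mat files named above.
def concatenate_kappas(file1, file2):
    """Concatenate every variable present in both monthly results."""
    merged = {}
    for name in file1:
        if name in file2:
            merged[name] = list(file1[name]) + list(file2[name])
    return merged

# hypothetical kappa_TST samples for Sep-Nov and for December so far
sep_to_nov = {"kappa_TST": [0.98, 0.99, 1.01]}
december = {"kappa_TST": [1.02, 1.03]}
all_kappas = concatenate_kappas(sep_to_nov, december)
print(all_kappas["kappa_TST"])
```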

To compare these new longitudinal results against charge measurements, I ran the usual analysis instructions (see aWiki) to completion, ending with 
/ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts/Long_Trend.m
but then by-hand saved select variables from the work space,
save([dataDir chargeDataFile],'DayDate','WeightedMeanPitch','AppliedBias','VariancePitch','WeightedMeanYaw','VarianceYaw','limit')
where I've defined dataDir and chargeDataFile in the work-space beforehand (also by hand) to point to
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-12-15_H1SUSETMY_ChargeMeasResults.mat

Then I run the (sadly still hard-coded, and copied from date to date whenever I have a new time I want this run for)
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/CAL_PARAM/compare_chargevskappaTST_20151215.m
which produces the last plot attached, as long as all of the input file names are changed correctly to the latest date.
Images attached to this report
Non-image files attached to this report
H1 CDS
james.batch@LIGO.ORG - posted 14:23, Tuesday 15 December 2015 (24242)
Modified Monit configuration for ext_alert
FRS 4077

The GraceDB monitoring process ext_alert failed with the maintenance of GraceDB this morning, and Monit stopped trying to restart it.  I modified the monit configuration to NOT time out when GraceDB is down.  Instead, Monit will continue to attempt to restart ext_alert indefinitely. 
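For illustration, a Monit stanza of the general form below would behave this way (the paths, process name, and the original restart-limit line are my guesses, not the actual LHO configuration); the key change is dropping the timeout action so failed restarts never disable monitoring:

```
check process ext_alert matching "ext_alert"
  start program = "/usr/bin/ext_alert start"
  stop program  = "/usr/bin/ext_alert stop"
  # previously something like:
  #   if 5 restarts within 5 cycles then timeout
  # removed, so Monit keeps retrying the restart indefinitely
```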
H1 SEI
hugh.radkins@LIGO.ORG - posted 11:49, Tuesday 15 December 2015 - last comment - 08:52, Wednesday 16 December 2015(24240)
LHO ISIs updated to latest--smoother blend switching & immediate saturations clearing

Following aLog 24208, all LHO ISI platforms were restarted. Most platforms deisolated nicely, but HAM2 decidedly did not. New safe.snaps were captured for all platforms; these captured the FF paths being off at startup and the GS13s being in low gain. Guardian was adjusted for HAM2 to disable the GS13 gain switching.

All snaps and isi/h1 guardians were committed to the svn.

Comments related to this report
hugh.radkins@LIGO.ORG - 08:52, Wednesday 16 December 2015 (24261)

The HAM ISIs were restarted to capture a code correction that clears saturation immediately upon request.  The BSCs got this fix as well.

Also, since HAM2 & 3 do not tolerate GS13 gain switching via guardian, that feature, while available, is disabled.  So, upon FE restart, the GS13s will be in low gain and the safe.snap SDF will be happy.  But, under OBSERVE.snap, the GS13s in low gain will be red.  These will need to be switched via the SEI Commands scripts.

H1 SEI (GRD, SEI)
sheila.dwyer@LIGO.ORG - posted 11:48, Tuesday 15 December 2015 - last comment - 18:34, Tuesday 15 December 2015(24239)
SEI BS ST2 isolated by ISC_LOCK guardian

Attached is a comparison of the GS13s with stage 2 isolated (blue) vs damped (red).  

We still haven't seen the impact on DARM, but I've added this to the ISC_LOCK guardian similar to the way it is done at LLO. This means I also changed the nominal state for SEI_BS to FULLY_ISOLATED

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 18:34, Tuesday 15 December 2015 (24247)

This makes no apparent difference in the DARM spectrum. 

Images attached to this comment
LHO FMCS (SYS)
richard.mccarthy@LIGO.ORG - posted 11:07, Tuesday 15 December 2015 (24238)
EX lighting controller
This morning we investigated the problem with the Lighting controller at EndX. Water dripping from a pipe had entered the cabinet, causing a fault. We verified no further damage was done, replaced the faulted relay with an unused one in the same cabinet, and restored the system. The cabinet is dry and working.
H1 AOS (ISC, SUS)
jason.oberling@LIGO.ORG - posted 10:08, Tuesday 15 December 2015 - last comment - 15:39, Tuesday 15 December 2015(24236)
BS Optical Lever laser swapped (WP5651)

J. Oberling, E. Merilh

Today we swapped the glitching BS oplev laser with one that has been stabilized. Old laser SN 130-1, new laser SN 129-2. This laser will thermally stabilize over the next 4-6 hours or so. Once this is done, I will evaluate it to see if we need to do any small power tweaks to achieve stable operation (this is because the thermal environment in the LVEA differs from the lab the laser was stabilized in).

We also adjusted the whitening gain at the BS whitening chassis to suit the new laser.  The old SUM counts were ~75k with 15 dB of whitening gain.  After adjustment the new SUM counts are ~34k with a whitening gain of 9 dB; this is safer as we won't cause saturation if all the light happens to fall on one quadrant of the QPD.  A picture of the Output Configuration Switch is attached and these changes are captured here: T1500556-v2.
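As a rough consistency check (my arithmetic, assuming the whitening gain setting is a voltage gain in dB): the 6 dB reduction alone would scale the SUM counts by 10^(-6/20) ≈ 0.5, so ~75k would become ~37.6k; the observed ~34k is consistent with that plus a slightly lower new-laser power on the QPD:

```python
# SUM counts scale linearly with the whitening-chassis voltage gain,
# which is set in dB: gain factor = 10**(dB/20)
old_counts = 75e3
db_change = 9 - 15   # whitening gain lowered by 6 dB

expected_counts = old_counts * 10 ** (db_change / 20)
print(f"{expected_counts:.0f} counts expected from the gain change alone")
```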

This completes WP #5651.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 15:39, Tuesday 15 December 2015 (24243)

Increased the power by ~5%.  New SUM count reading ~33.8k.  Will see if this clears the glitching.  Continuing to monitor...

H1 SEI
hugh.radkins@LIGO.ORG - posted 14:25, Monday 14 December 2015 - last comment - 10:48, Tuesday 15 December 2015(24208)
Updating ISI model

WP--5649, ECR E1500456, II 1167.  See aLog 23853 and SEI log 887 and 890.

In preparation for updating the ISI per above, I'm updating our local model and src areas:

hugh.radkins@opsws1:models 0$ svn st -u
?                    gnd_sts_library_NEW.mdl
        *    12022   isi2stagemaster.mdl
Status against revision:  12272
hugh.radkins@opsws1:models 0$ svn up isi2stagemaster.mdl
U    isi2stagemaster.mdl
Updated to revision 12272.
 

hugh.radkins@opsws1:src 0$ svn st -u
        *    11156   WD_SATCOUNT.c
?                    ISIWD_GPS_veryOLD.c
        *            BLENDMASTER_ST1.c
        *    10437   .
Status against revision:  12272
hugh.radkins@opsws1:src 0$ svn up
U    WD_SATCOUNT.c
A    BLENDMASTER_ST1.c
Updated to revision 12272.

hugh.radkins@opsws1:bscisi 0$ svn up
U    ISI_CUST_CHAMBER_ST1_BLEND_ALL.adl
Updated to revision 12272.
hugh.radkins@opsws1:bscisi 0$
 

I've successfully rebuilt all the ISI platforms. Will wait until tomorrow for install & restart.

Comments related to this report
hugh.radkins@LIGO.ORG - 10:48, Tuesday 15 December 2015 (24237)

FYI: The SATCOUNT.c update was a bug fix which applies to all ISI platforms. This is why the HAMs were also rebuilt and restarted. This fix will have ISI saturations (seen on the watchdog screen) clear immediately. An added benefit for the BSC ISIs is not tripping those platforms from stale T240 saturations. All good.
