H1 General
jim.warner@LIGO.ORG - posted 16:06, Wednesday 14 October 2015 (22522)
Shift Summary

TITLE:  10/14 day Shift:  15:00-23:00UTC

STATE of H1:  Low-noise for 4 hours, periodic commissioning during single IFO time

Support:  Usual control room population

Quick Summary:

Quiet but busy shift with lots of noise investigations; H1 was well behaved

Shift Activities:

16:04 lockloss

18:03 LLO has a bounce mode rung up, so we go to commissioning

18:05 Evan turning on BLRMS filters, Richard to EY, Robert to LVEA

19:00 Gerardo to EX, JeffB to LVEA

19:00 Robert to EX for tamping

21:10 JeffB to LVEA

21:00 Evan trying some BLRMS filters

 

22:30 Robert turning HVAC off, done 22:50

H1 CDS
david.barker@LIGO.ORG - posted 15:40, Wednesday 14 October 2015 (22520)
SVN status of CDS source files

Now that all systems are using OBSERVE.snap for their SDF reference I modified the check_h1_files_svn_status script to scan for OBSERVE.snap file status as well as safe.snap. Here is the current source code SVN status:

david.barker@sysadmin0: check_h1_files_svn_status
 
SVN status of front end code source files...
done (list of files scanned can be found in /tmp/source_files_list.txt)
 
SVN status of filter module files...
M       /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSMC1.txt
M       /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSMC3.txt
M       /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPRM.txt
M       /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPR3.txt
M       /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPR2.txt
M       /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSR2.txt
M       /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSRM.txt
M       /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSR3.txt
M       /opt/rtcds/userapps/release/isc/h1/filterfiles/H1OAF.txt
M       /opt/rtcds/userapps/release/lsc/h1/filterfiles/H1LSC.txt
M       /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASC.txt
done (list of filter module files scanned can be found in /tmp/full_path_filter_file_list.txt)
 
SVN status of safe.snap files...
done (list of safe.snap files scanned can be found in /tmp/safe_snap_files.txt)
 
SVN status of OBSERVE.snap files...
M       /opt/rtcds/userapps/release/psl/h1/burtfiles/iss/h1psliss_OBSERVE.snap
M       /opt/rtcds/userapps/release/isc/h1/burtfiles/h1oaf_OBSERVE.snap
M       /opt/rtcds/userapps/release/lsc/h1/burtfiles/h1lsc_OBSERVE.snap
M       /opt/rtcds/userapps/release/omc/h1/burtfiles/h1omc_OBSERVE.snap
M       /opt/rtcds/userapps/release/asc/h1/burtfiles/h1asc_OBSERVE.snap
M       /opt/rtcds/userapps/release/asc/h1/burtfiles/h1ascimc_OBSERVE.snap
done (list of observe.snap files scanned can be found in /tmp/observe_snap_files.txt)
 
SVN status of guardian files...
M       /opt/rtcds/userapps/release/isc/h1/guardian/ISC_DRMI.py
M       /opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py
M       /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
done
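
For reference, a minimal sketch of the kind of scan the script performs (illustrative only; it assumes the stock svn status command and one of the file lists above, and is not the actual check_h1_files_svn_status source):

# Sketch: report files that SVN marks as modified ('M') in a list of paths.
import subprocess

def report_modified(list_file):
    with open(list_file) as f:
        paths = [line.strip() for line in f if line.strip()]
    for path in paths:
        status = subprocess.check_output(['svn', 'status', path]).decode()
        if status.startswith('M'):
            print(status.rstrip())

report_modified('/tmp/observe_snap_files.txt')   # e.g. the OBSERVE.snap list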
 

 

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 15:34, Wednesday 14 October 2015 - last comment - 16:01, Wednesday 14 October 2015(22518)
CDS maintenance summary Tuesday 13th October 2015

Summary of Tuesday's maintenance work:

h1calex model change for hardware injection

Jeff, Jim, Dave: WP5553, ECR1500386

h1calex model was modified to add CW and TINJ hardware injection filter modules. Also ODC channel names were changed from EX to PINJX. Three new channels were added to the science frame (HARDWARE_OUT_DQ and BLIND_OUT_DQ at 16k, ODC_CHANNEL_OUT_DQ at 256Hz)

The DAQ was restarted. Conlog was rescanned to capture the new ODC channel names.

Guardian DIAG_EXC node was modified to permit both calex and calcs excitations while in observation mode.

MSR SATABOY firmware upgrades

Carlos: WP5544

The two Sataboy RAID arrays used by h1fw1 had their controller cards' firmware upgraded. The one Sataboy used by the DMT system was also upgraded. No file system downtime was incurred.

Beckhoff SDF testing

Jonathan, Dave: WP5539

Tested the Gentoo version of the SDF system on the h1build machine as user controls. For initial testing we are only connecting to h1ecatcaplc1. We discovered that this version of the SDF system set all the PLC1 setpoints each time it was restarted, so ECATC1PLC1 was reset several times between 10am and 1pm PDT Tuesday morning. This system was left running overnight for stability testing. We also discovered that some string-out records cannot be changed: if we change the string for these records (as happened when we accidentally applied the safe.snap strings on restarts), the PLC immediately (100 µs later) resets it to an internally defined string.

Long running CDS Server reboots, workstation updates

Carlos:

As part of our twice-yearly preventative maintenance, Carlos patched and rebooted some non-critical servers. CDS workstations were inventoried and updated.

Complete OBSERVE.snap install

Dave: WP5557

The models which were still running with safe.snap as their SDF reference were updated to use OBSERVE.snap. Models in the following systems were modified: IOP, ODC, SUSAUX, PEM. Their initial OBSERVE.snap files were copied from their safe.snap files.

Several systems had static OBSERVE.snap files in their target areas instead of a symbolic link to the userapps area. I copied these files over to their respective userapps areas and checked them into SVN. During Wednesday morning's non-locking time, I moved each target OBSERVE.snap file into an archive subdirectory and set up the appropriate symbolic link. I relinked within the 5 second monitor period, so no front end SDF reported a "modified file".
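
For illustration, the relink step amounts to something like the sketch below (the paths are hypothetical examples, not the actual model directories):

# Sketch: archive a static target-area OBSERVE.snap and symlink to userapps.
import os, shutil

target   = '/opt/rtcds/lho/h1/target/h1example/h1exampleepics/burt/OBSERVE.snap'  # hypothetical
userapps = '/opt/rtcds/userapps/release/isc/h1/burtfiles/h1example_OBSERVE.snap'  # hypothetical

archive = os.path.join(os.path.dirname(target), 'archive')
os.makedirs(archive, exist_ok=True)                  # archive subdirectory
shutil.move(target, os.path.join(archive, 'OBSERVE.snap'))
os.symlink(userapps, target)                         # target area now points at userapps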

Comments related to this report
david.barker@LIGO.ORG - 15:36, Wednesday 14 October 2015 (22519)

SDF reference MEDM screen attached, showing that all models are referencing their OBSERVE.snap files.

Images attached to this comment
david.barker@LIGO.ORG - 16:01, Wednesday 14 October 2015 (22521)

The two systems with partial filter module loads (SUS ETMX and ETMY) were fully loaded during maintenance

H1 ISC
evan.hall@LIGO.ORG - posted 13:59, Wednesday 14 October 2015 - last comment - 16:42, Monday 19 October 2015(22436)
IM3 jitter coupling to DARM

As part of Wednesday's commissioning exercises, we looked at the coupling of input jitter into DARM.

I injected band-limited white noise into IM3 pitch (and then IM3 yaw) until I saw a rise in the noise floor of DARM.

We can use the IM4 QPD as an estimate of the amount of jitter on the interferometer's S port. On the AS port side, we can use the OMC QPDs as an estimate of the AS port jitter, and DCPD sum indicates the amount of S port jitter coupling into DARM.

One thing of note is that the jitter coupling from IM3 to DARM is mostly linear, and more or less flat from 30 to 200 Hz:

The upper limit on IM3 jitter that one can place using the IM4 QPD seems to be weak. At 40 Hz, projecting the quiescent level of the IM4 yaw signal to the DCPD sum suggests a jitter noise of 2×10^-7 mA/rtHz, but this is obviously not supported by the (essentially zero) coherence between IM4 yaw and DCPD sum during low-noise lock. Of course, this does not rule out a nonlinear coupling.
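
As an aside, this kind of projection can be sketched as follows (placeholder arrays only, not the actual measurement): estimate the witness-to-DCPD transfer function from the injection data, then scale the quiescent witness spectrum by its magnitude.

# Sketch: linear coupling estimate and noise projection from two time series.
# x_inj, y_inj: witness (e.g. IM4 QPD yaw) and DCPD sum during the injection;
# x_quiet: witness during quiet time.  All are numpy arrays sampled at fs (Hz).
import numpy as np
from scipy import signal

def project(x_inj, y_inj, x_quiet, fs, nperseg=4096):
    f, Pxy = signal.csd(x_inj, y_inj, fs=fs, nperseg=nperseg)
    _, Pxx = signal.welch(x_inj, fs=fs, nperseg=nperseg)
    tf = Pxy / Pxx                           # least-squares (csd/psd) transfer function estimate
    _, Pq = signal.welch(x_quiet, fs=fs, nperseg=nperseg)
    return f, np.abs(tf) * np.sqrt(Pq)       # projected amplitude spectral density in y units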

As for AS port jitter, the coupling is seen more strongly in OMC QPD B than OMC QPD A.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 16:42, Monday 19 October 2015 (22641)

The test excitation for yaw was 6 ct/rtHz at 100 Hz.

We can propagate this to suspension angle as follows:

  • Euler to OSEM matrix is (0.25 / L) ct/ct, where L is the lever arm that the coils act over.
  • DAC gain is 20 V / 2^18 ct.
  • Factor of 4×L (four coils providing torque, L is again the lever arm of the coils).
  • Driver transconductance of 1.0×10^-3 A/V.
  • Coil actuation strength of 0.016 N/V.
  • This gives a TF of 1.2×10^-9 (N m)/ct.
  • From the suspension model, the compliance at 100 Hz is 0.010 rad/(N m).

This gives 73 prad/rtHz of yaw excitation at 100 Hz, which implies a DCPD coupling of 550 RIN/rad at 100 Hz.

Repeating the same computation for pitch [where the excitation was about 10 ct/rtHz at 100 Hz, and the compliance at 100 Hz is 0.012 rad/(N m)] gives a pitch excitation of 140 prad/rtHz, which implies a DCPD coupling of 130 RIN/rad at 100 Hz. So the IM3 yaw coupling into DARM is a factor of 4 or so higher than the IM3 pitch coupling.
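
For bookkeeping, the chain above can be written out numerically; this is just a sketch reproducing the quoted numbers, with the compliances taken from the text rather than recomputed from the suspension model.

# Reproduce the count-to-angle propagation quoted above.
dac_gain    = 20.0 / 2**18   # V/ct
driver_gain = 1.0e-3         # A/V (driver transconductance)
coil_gain   = 0.016          # coil actuation strength, as quoted above
# The Euler-to-OSEM factor 0.25/L and the torque factor 4*L cancel the lever arm L:
torque_per_ct = 0.25 * dac_gain * driver_gain * coil_gain * 4
print(torque_per_ct)                                # ~1.2e-9 (N m)/ct

yaw_compliance, pitch_compliance = 0.010, 0.012     # rad/(N m) at 100 Hz
print(6.0  * torque_per_ct * yaw_compliance)        # ~7.3e-11 rad/rtHz (73 prad/rtHz)
print(10.0 * torque_per_ct * pitch_compliance)      # ~1.4e-10 rad/rtHz (140 prad/rtHz)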

These excitations amount to >100 µV/rtHz out of the DAC. Unless the IMs' electronics chains have an outrageous amount of input-referred noise, it seems unlikely that electronics-induced IM jitter is anywhere close to the DARM noise floor. Additionally, the seismically-induced motion of IM3 must be very low: projections of the HAM2 table motion suggest an IM3 suspension point motion of 10 prad/rtHz, and this motion will be filtered by the mechanical response of the suspensions before reaching the optics.

H1 DetChar (DetChar, ISC)
gabriele.vajente@LIGO.ORG - posted 11:46, Wednesday 14 October 2015 - last comment - 17:09, Wednesday 14 October 2015(22514)
Noise trend for the run

The plots attached to this elog show the trend of the LHO detector noise over the O1 time span so far. Each plot shows the band-limited RMS of the CAL-DELTAL_EXTERNAL signal in a selected frequency band. I didn't apply the dewhitening filter. The BLRMS is computed over segments of 60 seconds of data, computing the PSD with 5 s long FFTs (Hann window) and averaging. The orange trace shows a smoothed version, averaged over one-hour segments. Only times in ANALYSIS_READY have been considered.

The scripts used to compute the BLRMS (python, look at trend_o1_fft_lho.py) are attached and uploaded to the NonNA git repository. The data is then plotted using MATLAB (script attached too).
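
The computation itself is straightforward; here is a minimal sketch of the per-segment BLRMS (not the attached trend_o1_fft_lho.py, just the idea, assuming a calibrated time series x sampled at fs and band edges in Hz):

# Sketch: band-limited RMS over 60 s segments, using 5 s Hann-windowed FFTs.
import numpy as np
from scipy import signal

def blrms(x, fs, f_lo, f_hi, seg=60, fft=5):
    """Return one BLRMS value per `seg`-second stretch of x."""
    n = int(seg * fs)
    out = []
    for i in range(0, len(x) - n + 1, n):
        f, psd = signal.welch(x[i:i + n], fs=fs, window='hann',
                              nperseg=int(fft * fs))
        band = (f >= f_lo) & (f < f_hi)
        out.append(np.sqrt(np.trapz(psd[band], f[band])))  # RMS within the band
    return np.array(out)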

I think there's a lot of interesting information in those plots. My plan would be to try to correlate the noise variations with IFO and environmental channels. Any suggestion from on-site commissioners is welcome!

Here are my first comments on the trends:

So what happened on September 23rd around the end of the day, UTC time?

Images attached to this report
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:09, Wednesday 14 October 2015 (22525)CAL, DetChar
S. Dwyer, J. Kissel, G. Vajente

Gabriele notes a drop in BLRMS that is most (only) visible in the 30-40 [Hz] band, and questions what happened around Sept 23rd.

Sheila (re)identified that I had made a change to the calibration that only affects DELTAL_EXTERNAL around that day; see LHO aLOG 21788.

This change, increasing the delay between the calibrated actuation and sensing chains just before they're added, would indeed affect only the region around the DARM UGF (~40 [Hz]), where the calibrated actuation and sensing functions are both valid and roughly equal in contribution to the DELTAL_EXTERNAL signal. Indeed, the (6.52 - 6)/6 ≈ 0.087, i.e. roughly 8%, drop is consistent with the expected 8% amplitude change that Peter describes in his motivation for me to make the change of the relative delay; see LHO aLOG 21746.

Gabriele has independently confirmed that he's using DELTAL_EXTERNAL (and not GDS-CALIB_STRAIN, which would *not* have seen this change), and that the change happened on Sept 22 2015, between 15:00 and 20:00 UTC (*not* Sept 23rd UTC as mentioned above). I've confirmed that the EPICS record was changed right smack in between there, at Sept 22 17:10 UTC.

Good catch, Sheila!
H1 ISC
peter.fritschel@LIGO.ORG - posted 11:23, Wednesday 14 October 2015 - last comment - 10:13, Friday 16 October 2015(22513)
Residual DARM motion: comparison of H1 and L1

The first attached plot (H1L1DARMresidual.pdf) shows the residual DARM spectrum for H1 and L1, from a recent coincident lock stretch (9-10-2015, starting 16:15:00 UTC). I used the CAL-DELTAL_RESIDUAL channels, and undid the digital whitening to get the channels calibrated in meters at all frequencies. The residual  and external DARM rms values are:

       residual DARM      external DARM
H1     6 x 10^-14 m       0.62 micron
L1     1 x 10^-14 m       0.16 micron

The 'external DARM' is the open loop DARM level (or DARM correction signal), integrated down to 0.05 Hz. The second attached plot (H1L1extDARMcomparison.pdf) shows the external DARM spectra; the higher rms for H1 is mainly due to a higher microseism.

Some things to note:

The 3rd attached plot (H1L1DARMcomparison.pdf) shows the two calibrated DARM spectra (external/open loop) in the band from 20-100 Hz. This plot shows that H1 and L1 are very similar in this band where the noise is unexplained. One suspect for the unexplained noise could be some non-linearity or upconversion in the photodetection. However, since the residual rms fluctuations are 6x higher on H1 than L1, and yet their noise spectra are almost identical in the 20-100 Hz band, this seems to be ruled out - or at least not supported by this look at the data. More direct tests could (and should) be done, e.g. by changing the DARM DC offset, or by intentionally increasing the residual DARM to see if there is an effect in the excess noise band.

Non-image files attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 09:43, Thursday 15 October 2015 (22550)

We briefly tried increasing the DCPD rms by decreasing the DARM gain by 6 dB below a few hertz (more specifically, a zero at 2.5 Hz, a pole at 5 Hz, and an AC gain of 1; it's FM5 in LSC-OMC_DC; see the sketch after the time list below). This increased the DCPD rms by slightly less than a factor of 2. There's no clear effect on the excess noise, but it could be that we have to be more aggressive in increasing the rms.

  • Nominal DARM configuration: 2015-10-14 20:40:00 to 20:45:00 Z
  • Reduced low-frequency gain: 2015-10-14 20:34:30 to 20:39:30 Z
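
For reference, a quick sketch of the shape of that gain reduction (not the Foton filter itself): a zero at 2.5 Hz, a pole at 5 Hz and unity AC gain leaves the loop unchanged at high frequency but gives a factor of 0.5 (-6 dB) below a few hertz.

# Sketch: frequency response of zero @ 2.5 Hz, pole @ 5 Hz, unity AC gain.
import numpy as np
from scipy import signal

z = [-2 * np.pi * 2.5]           # zero (rad/s)
p = [-2 * np.pi * 5.0]           # pole (rad/s)
b, a = signal.zpk2tf(z, p, 1.0)  # gain of 1 well above the pole
w = 2 * np.pi * np.logspace(-1, 2, 500)
_, h = signal.freqs(b, a, worN=w)
print(abs(h[0]), abs(h[-1]))     # ~0.5 at 0.1 Hz, ~1.0 at 100 Hz
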
Images attached to this comment
hartmut.grote@LIGO.ORG - 10:13, Friday 16 October 2015 (22585)
Interesting, but do I interpret it right that (in the experiment reported in the comment) you assume that the DARM error point represents the true DARM offset/position? I thought that, at least at L1, when DARM is locked on the heterodyne signal and the OMC is locked onto the carrier (with the usual DC offset in DARM), the power in transmission of the OMC fluctuates by several tens of percent. Assuming that the TEM00 carrier coupling to the OMC would be no different when DARM is locked to the OMC transmitted power, then the 'true' DARM would also fluctuate this much, impressing this fluctuation onto DARM. This fluctuation should then show up in the heterodyne signal. So in this case increasing the DARM gain to reduce the rms would probably not do anything. Or?
H1 DCS (DCS)
gregory.mendell@LIGO.ORG - posted 09:54, Wednesday 14 October 2015 (22511)
Fixed issues affecting the H1 detchar summary pages

Two issues affected the H1 detchar summary pages: the diskcache server was hung, and the detchar home directory needed to be remounted.

Both of these issues are now fixed.

It will take a while for the jobs generating the summary pages to catch up, but they should catch up here sometime later today:

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20151014/

H1 CDS (CDS, GRD, ISC)
sheila.dwyer@LIGO.ORG - posted 09:30, Wednesday 14 October 2015 - last comment - 20:17, Thursday 15 October 2015(22510)
Lockloss related to epics freeze

Just as Jim had almost relocked the IFO, we had an EPICS freeze in the guardian state RESONANCE.  ISC_LOCK had an EPICS connection error.

What is the right thing for the operator to do in this situation?

Are these epics freezes becoming more frequent again?

screenshot attached.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 14:39, Wednesday 14 October 2015 (22516)

EPICS freezes never went away completely, but they are normally only a few seconds in duration. This morning's SUS ETMX event lasted 22 seconds, which exceeded Guardian's timeout period. To get the outage duration, I second-trended H1:IOP-SUS_EX_ADC_DT_OUTMON. Outages are on a per-computer basis, not per-model, so I have put the IOP duotone output EPICS channels into the frame as EDCU channels (accessed via Channel Access over the network). When these channels are unavailable, the DAQ sets them to zero.

For this event the time line is (all times UTC)

16:17:22 DAQ shows EPICS has frozen on SUS EX
16:17:27 Guardian attempts connection
16:17:29 Guardian reports error, is retrying
16:17:43 Guardian times out
16:17:45 DAQ shows channel is active again

The investigation of this problem is ongoing; we could bump up the priority if it becomes a serious IFO operations issue.
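
For anyone wanting to repeat the check, a rough sketch of the dropout measurement using the new EDCU channel (gwpy assumed; the GPS window below only approximately brackets the 16:17 UTC event, and the zero test relies on the DAQ writing zeros while EPICS is unreachable):

# Sketch: measure how long an EDCU EPICS channel read back as zero.
from gwpy.timeseries import TimeSeries

data = TimeSeries.get('H1:IOP-SUS_EX_ADC_DT_OUTMON', 1128874600, 1128874800)
zeros = (data.value == 0)
print(zeros.sum() * data.dt.value, 'seconds of zero (EPICS unavailable) samples')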

jameson.rollins@LIGO.ORG - 14:57, Wednesday 14 October 2015 (22517)

To be clear, it sounds like there was a lockloss during acquisition that was caused by some kind of EPICS drop out.  I see how a lockloss could occur during the NOMINAL lock state just from an EPICS drop out.  guardian nodes might go into error, but that shouldn't actually affect the fast IFO controls at all.

jameson.rollins@LIGO.ORG - 20:17, Thursday 15 October 2015 (22570)

Sorry, I meant that I can not see how a guardian EPICS dropout could cause a lock loss during the nominal lock state.

H1 General (GRD, PSL, SUS)
cheryl.vorvick@LIGO.ORG - posted 08:41, Wednesday 14 October 2015 (22509)
Ops Owl Summary: 07:00-15:00UTC (00:00-08:00PT)

H1 State: locked in Low Noise and in Observe for 3+ hours, GRB arrived at 14:14 UTC

 

Help: Kiwamu

 

Shift Overview:  IFO was down and trying to lock PRMI when I arrived - 3.5 hours to return to Observe.

- DRMI produced a "Split Mode" 

- MICH Dark and SRC alignment corrected DRMI "Split Mode"

- first return to Low Noise had issues with the ISS Second Loop

- IMC_LOCK Guardian had buttons pushed by humans

- only change was ISS Diffracted Power (increased)

 

Timeline:

- 10:12 to 10:40UTC - IFO in Low Noise Lock with ISS Second Loop issues

- 11:39UTC - IFO in Low Noise Lock and Observe - no ISS issues

H1 PEM (PEM, PSL, SUS)
cheryl.vorvick@LIGO.ORG - posted 04:08, Wednesday 14 October 2015 - last comment - 05:14, Wednesday 14 October 2015(22505)
Owl shift update - earthquake, then alignment, and then ISS Second Loop issues since lock loss in previous shift

Title: Owl Mid-Shift Update, 10:48UTC, 3:49PT

H1 State: relocking and making it to Low Noise, but engaging the ISS Second Loop is glitching the ISS and the IFO

Help this shift:  Kiwamu on the phone, Kiwamu's alog about engaging the ISS Second Loop manually, Jim's alog about DRMI Split Mode

Details:

07:00UTC - I arrive and IFO is down due to an earthquake.

07:00-08:00 - PRMI locked 3 times, but can't hold due to the earthquake

08:00-09:00 - PRMI locks and I align, then I transition to DRMI and the "Split Mode" announcements start; after multiple "Split Mode" locks I call Kiwamu, and he says to do a MICH Dark and SRC align

09:20 - MICH Dark and SRC Align alignments are successful, and I take the IFO to Down and Init, and take IFO to full-lock

10:07:41 - Guardian tries to engage the ISS Second Loop; I wait, then start the procedure in Kiwamu's alog, and then he calls back and we try, unsuccessfully, to engage the ISS second loop

10:40:38 - the IFO lost lock due to our work trying to engage the ISS Second Loop - Kiwamu is driving in

11:03 - IFO has not relocked, but I don't know why since I've been writing the alog...

More to come...

Comments related to this report
kiwamu.izumi@LIGO.ORG - 05:14, Wednesday 14 October 2015 (22506)

It seems that the ISS issue was related to too-low diffracted power in the first loop.

Please check the ISS diffracted power and adjust it to 8 +/- 1% if having an issue with the engagement of the ISS 2nd loop.

When the ISS 2nd loop failed to engage, the diffracted power was at about 5% when it should have been 8%. Ideally, the diffracted power should not affect the 2nd loop so much, but in reality, for some reason, the stability of the 2nd loop seems to be quite sensitive to the diffracted power only when engaging it. After a few failures in engaging the ISS, Cheryl and I tried to engage it a couple of times by hand with no success. Eventually I accidentally unlocked the whole interferometer because we had a wrong gain in ISS_SECONDLOOP_SIGNAL. Oops.

In the next lock trial, we let the IMC_LOCK guardian do it with the right diffracted power of about 8%. This went very smoothly, without a failure. We should probably consider making another PID loop to automatically adjust the diffracted power so as to maintain it at around 8% when engaging the 2nd loop.
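
Purely as an illustration of that suggestion (the channel names and gain below are placeholders, not an existing guardian feature), a slow integrator on the diffracted power might look like:

# Illustrative-only slow servo holding the ISS diffracted power near 8%.
# Channel names and gain are placeholders and have not been checked.
SETPOINT = 8.0     # percent diffracted power
GAIN     = 0.01    # integrator gain per step (arbitrary for this sketch)

def adjust_diffracted_power(ezca):
    error = SETPOINT - ezca['PSL-ISS_DIFFRACTION_AVG']   # placeholder readback
    ezca['PSL-ISS_REFSIGNAL'] += GAIN * error            # placeholder actuator

# e.g. call adjust_diffracted_power(ezca) once per second from the guardian
# state that engages the second loop.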

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 00:36, Wednesday 14 October 2015 - last comment - 06:54, Wednesday 14 October 2015(22504)
Update on Y-Mid leak hunting

John, Bubba, Richard, Gerardo

16:44 Opened 10" valve.  Turbo, LD and QDP80 remained on overnight. Background at 1.1 x 10-08 torr*L/sec.
    ***Left system be for background signal to drop.***
We returned to the center longitudinal seam weld (previously referred as butt weld) on the North side of BT.  Divided the section into 3 equal sections, and bagged them.  Using a rate of 1 L/s of He, for 60 seconds dwell, background at 5.2 x 10-09 torr*L/sec, leak hunting started,:
18:00 Section 1 was sprayed with He, and the other two section were purged with instrument air, we waited for 5 minutes to see a response.  Background did not change.
18:06 Section 2 was sprayed with He, and once again the other two sections were purged with instrument air, no change on background.
18:10 John bags the purge/vent valve.
18:12 Section 3 was sprayed with He, while the other two sections were purged with instrument air.
18:15 Purge/vent valve is sprayed with He, and background value starts going up.  Response was to quick??
18:22 section 3 is purged with instrument air.
18:30 observed max value for background 5.2 x 10-08 torr*L/sec.
19:05 Aux cart connected and pumping on closed purge/vent valve, no change noted on BT pressure.
19:12 left for L U N C H, while Turbo pump pumped He out.

===Activity after lunch===

21:28 Background signal at 5.1 x 10^-9 torr*L/sec
21:40 sprayed He at a rate of 1 L/s for 60 seconds on the longitudinal weld seam section 3.
21:43 background signal starts changing, slooooow.
21:50 background signal at 2.1 x 10^-8 torr*L/sec
21:50 started purging section 3 with instrument air.
21:51 background at 3.0 x 10^-8 torr*L/sec
21:52 background at 4.0 x 10^-8 torr*L/sec
21:54 background at 5.0 x 10^-8 torr*L/sec
21:56 background at 7.5 x 10^-8 torr*L/sec
21:58 background at 9.0 x 10^-8 torr*L/sec
22:00 background at 1.0 x 10^-7 torr*L/sec
22:03 background at 1.2 x 10^-7 torr*L/sec
22:10 background at 1.2 x 10^-7 torr*L/sec *
22:28 background at 8.0 x 10^-8 torr*L/sec
22:38 Turned off and removed aux cart from purge/vent valve, valve was restored as found, except now space is vented.
22:47 background signal at 4.5 x 10^-8 torr*L/sec

At this point we divided section 3 into two sides:
    * The East side, away from the stiffener ring, was sprayed with He first, while the West side, close to the stiffener ring, was purged with air; we sprayed as before (1 L/s of He for 60 sec).

22:48 started spraying He on the East side.
23:03 background at 3.1 x 10^-8 torr*L/sec, background continues to drop.
23:05 background at 2.9 x 10^-8 torr*L/sec
23:05 switched hoses on the 2 sides, now we are going to spray He in the West side and purge with instrument air the East side.
23:05 background at 2.9 x 10^-8 torr*L/sec
23:06 background at 2.8 x 10^-8 torr*L/sec
23:07 start of He spraying at West side
23:10 background at 2.6 x 10^-8 torr*L/sec
23:12 background at 2.8 x 10^-8 torr*L/sec  ***
23:14 background at 3.2 x 10^-8 torr*L/sec
23:17 background at 4.1 x 10^-8 torr*L/sec
23:19 background at 4.8 x 10^-8 torr*L/sec
23:20 background at 5.2 x 10^-8 torr*L/sec
23:21 background at 5.4 x 10^-8 torr*L/sec
23:26 background at 5.9 x 10^-8 torr*L/sec
23:27 closed 10" gate valve to isolate turbo pump, LD and QDP80 from vacuum system, but they remain on.

Conclusion:
We think that the leak is somewhere on the small West side of the center longitudinal seam weld; we will continue testing to pinpoint the leak source.

Images attached to this report
Comments related to this report
john.worden@LIGO.ORG - 06:54, Wednesday 14 October 2015 (22507)

Typo in above - The helium flow rate was ~ 1 l/minute not 1 l/sec.

The suspected leak at the purge valve seems to have been a false positive due to the delayed response from the seam weld. After bagging the purge valve and chamber conflat as one, we removed the bag, removed the NW50 blank, and applied helium directly into the valve, into the test port of the valve body, and into the vacuum-side conflat, each for 60 seconds. There was no response to this, so we went back to the longitudinal seam weld.

The photo shows 2 bagged portions of the seam weld - one purged/inflated with air and the other with helium. We sprayed in both directions, i.e. swapped air and helium: no leak detector response was seen in one test, while a strong signal was seen in the other. The prior day we saw a response from this area, and in violation of my own rule I sprayed the triple conflat on the neighboring valve, again with 60 sec bursts of helium; no response.

The next step should be to try to evacuate the bagged portion of the weld seam. This may prove to be a quicker and more definitive test than the helium testing, although the geometry is very tricky, as the weld is hidden by the vertical support structure and the stiffener stitch weld also crosses the seam weld.

LHO General
corey.gray@LIGO.ORG - posted 00:04, Wednesday 14 October 2015 (22492)
EVE Ops Summary

TITLE:  10/13 EVE Shift:  23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC     

STATE of H1:  Relocking after a recent EQ.  Waiting for seismic motion to come down to allow PRMI alignment tweaks.

Incoming Operator:  Cheryl

Support:  Sheila by phone (about new PRMI guardian)

Quick Summary:

Nice shift with H1 in Observing Mode for 7 of 8 hrs, and then a Russian EQ knocked us out.

Shift Activities:

H1 ISC (DetChar, ISC)
sheila.dwyer@LIGO.ORG - posted 18:02, Tuesday 13 October 2015 - last comment - 22:45, Wednesday 14 October 2015(22494)
non stationary noise in DARM that appeared Oct 12th

Jordan, Sheila

In the summary pages, we can see that something non-stationary appeared in DARM from about 80-250 Hz during the long lock that spanned Oct 11th to 12th, and it has stayed around.   Links to the spectra from the 11th and the 12th.

HVETO also came up with a lot of glitches in this frequency span starting on the 12th (here), which were not around before.  These glitches are vetoed by things that seem like they could all be related to corner station ground motion: REFL, IMC and AS WFS, all kinds of corner station seismic sensors, PEM accelerometers, and MC suspensions.

Although this noise seems to have appeared during a time when the microseism was high for us, I think it is not directly related. (The high microseism started approximately on the 9th, 2 days before this noise appeared, and things are quieting down now, but we still have the non-stationary noise, sometimes up to 200 Hz.)

The blend switching could also seem like a culprit, but the blends were not switched at the beginning of the lock in which this noise appeared, and we have been back on the normal 90 mHz blends today but we still have this noise.  We've seen scattering from the OMC with velocities this high before (17264 and 19195).

Nutsinee and Robert have found that some of the glitches we are having today are due to RF45, but this doesn't seem to be the case on the 12th. 

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 18:39, Tuesday 13 October 2015 (22498)DetChar

Robert, Sheila, Nutsinee

 

The first plot attached is a time series of DARM vs the RF45 modulation over the 10/11 - 10/12 lock stretch (~25 hours). The second plot is the same channels during the beginning of the 10/13 lock stretch (30 minutes). You can see RF45 started to act up on 10/13. I've also attached the BLRMS plot of DARM using the bandpass filter Robert used to find the OMC scattering. The non-stationary noise we see is likely caused by two different sources.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 14:49, Wednesday 14 October 2015 (22515)DetChar

RF45 started to glitch Monday afternoon (16:04 PDT, 23:04 UTC). According to TJ's log no one was in the LVEA that day. The glitches stopped around 03:22 UTC (20:22 PDT).

Images attached to this comment
sheila.dwyer@LIGO.ORG - 22:45, Wednesday 14 October 2015 (22523)

Here is one example of one of these glitches related to ground motion in the corner that we had during the high microseism over the weekend (but not the entire time that we had high microseism). This is from Oct 12th at 14:26 UTC.  Even though these have gone away, we are motivated to look into them because, as Jordan and Gabriele have both confirmed recently, the noise in the unexplained part of the spectrum (50-100 Hz) is non-stationary even with the beam diverter closed. If the elevated ground motion over the weekend made this visible in DARM up to 250 Hz, it is possible that with more normal ground motion this is lurking near our sensitivity from 50-100 Hz.

If you believe these are scattering shelves:

  • The upper limit of the shelf is at 200-250 Hz, so the maximum velocity (lambda*f_max/(4pi)) is around 35-42 um/sec. 
  • There are about 9 arches in 4 seconds, so the frequency of the motion should be (9/4)/2  ~1.1 Hz (We see 2 arches in each period of the motion.) 
  • So something should be moving with an amplitude of 17-21 um at around 1 Hz, if the scatter path is double passed (IFO to scatterer and back only once).

See Josh's alogs about similar problems, especially 19195 and, recently, 22405.

One more thing to notice is that, at least in this example, the upconversion is most visible when the derivative of DARM (loop corrected) is large.  This could just be because that is the time when the derivative of the ground motion is large.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 18:18, Wednesday 14 October 2015 (22527)DetChar

Nairwita, Nutsinee

Nairwita pointed out to me that the non-stationary glitches we're looking at were vetoed nicely by HPI HAM2 L4C on October 12th, so I took a closer look. The first plot attached is an hour trend of DARM and HPI HAM2 L4C. But if I zoom into one of the glitches, it seems to me that there's a delay of up to ~10 seconds between HPI and DARM, just from eye-balling it (second plot). I've also attached the spectrogram during that hour from the summary page.
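
One way to do better than eye-balling the delay (a sketch only; darm_blrms and l4c_blrms are assumed to be band-limited RMS time series sampled at a common rate fs):

# Sketch: estimate the offset between two BLRMS time series by cross-correlation.
import numpy as np

def peak_lag_seconds(darm_blrms, l4c_blrms, fs):
    a = darm_blrms - np.mean(darm_blrms)
    b = l4c_blrms - np.mean(l4c_blrms)
    xc = np.correlate(a, b, mode='full')
    lag = np.argmax(xc) - (len(b) - 1)   # offset of the correlation peak from zero lag
    return lag / fs                      # in seconds (sign depends on which series leads)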

Images attached to this comment
H1 SUS (CDS)
evan.hall@LIGO.ORG - posted 20:20, Sunday 30 August 2015 - last comment - 10:42, Wednesday 14 October 2015(21037)
PRM/PR3 top-stage OSEMs need attention

Betsy, Sheila, Travis, Evan

Something about the following coils appears to be unhealthy: PRM M1 RT&SD, PR3 M1 T1&T2. See attached OSEM sensor spectra.

According to Betsy, this high-frequency junk started around 2015-08-29 15:00:00 Z. On the control side, this junk dominates the rms of the PRM M1 LF drive (it is about 10000 ct).

Probably this is related to the problem that Kiwamu saw last night with PRM M1 DAMP L.

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 20:34, Sunday 30 August 2015 (21038)

The first plot below is a trend of PRM DAMP L showing the start of the noise - the noise started in the middle of the lock stretch from Sat morning.  The second plot shows the 4 OSEM Sensors - it's hard to see it in any of the sensor trends, except PRM RT.

Images attached to this comment
betsy.weaver@LIGO.ORG - 20:46, Sunday 30 August 2015 (21039)

For Richard:  yes, all 4 of these noisy signals are on the same cable set and Sat box line.

daniel.hoak@LIGO.ORG - 23:02, Sunday 30 August 2015 (21041)ISC, SUS

It looks like this high-frequency noise is due to the shadow sensors.  We paused in LOCK_DRMI_1F with the PRM aligned, and turned off the top stage damping.  In this state there were no digital signals going to the coil driver (MASTER_OUTs were zero), and the NOISEMON and FAST_IMON readbacks were flat.  But the inputs from the RT and SD OSEMs had the high-frequency noise.  See attached spectra.  So, it doesn't look like it's a bad coil driver (this would have been three in as many weeks)...maybe it's an issue with the satellite box?

With the top stage damping enabled, the noise is large enough that it passes through the damping filters and shakes the M1 stage, but it's so high frequency that I don't think we are shaking the optic.  There's no sign of the noise peaks in the PRCL error signal.  No reason to think we can't run like this until Tuesday maintenance.

Images attached to this comment
carl.adams@LIGO.ORG - 10:42, Wednesday 14 October 2015 (22512)
We have seen this before at LLO back in July 2011:

https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=1262