H1 CAL
david.barker@LIGO.ORG - posted 11:13, Thursday 02 February 2017 (33844)
updated h0vacex autoBurt.req file

Just remembered to install the latest autoBurt.req in the h0vacex target area, stemming from Tuesday's code change.

H1 DetChar (DetChar)
corey.gray@LIGO.ORG - posted 10:25, Thursday 02 February 2017 (33843)
Propane Forklift Engine Seen On DARM

[17:50-17:53 Flagging H1 Data due to Forklift engine near LVEA.]

Right after getting to NLN there was a request to power ON our propane forklift (to charge its battery), which is here at the Corner Station just outside the Receiving Roll Up Door area.  We don't allow driving of forklifts around the VEAs while in Observing (go/no go document), so I thought just turning on the engine would be fine, BUT Evan noticed some noise between 40-50 Hz on DARM.  So, I quickly ran out to ask Joe D to power off the forklift (and yes, it was noisy... even before getting outside I could hear the loud engine).

So, we should move the activity of powering this forklift to Tuesdays during Observation Runs.

H1 CDS
thomas.shaffer@LIGO.ORG - posted 10:02, Thursday 02 February 2017 (33842)
Updated the h1guardian0 autoBurt.req file

Using the updated create_guardian_autoburt.py script in (userapps)/cds/h1/scripts/, I generated a new list of channels with the appropriate read-only (RO) tag so that Patrick can put them in ConLog.

Dave had also asked me to check that the .ini files for Guardian are up to date. The guardctrl interface automatically updates these files whenever it creates or destroys a node, so this should always be taken care of automatically.
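
For reference, a minimal Python sketch of the kind of listing such a script produces (this is not the actual create_guardian_autoburt.py; the "guardctrl list" call and the channel suffixes below are assumptions):

    #!/usr/bin/env python
    # Hypothetical sketch: build an autoBurt.req listing guardian EPICS channels,
    # marking them read-only ("RO") so downstream tools (e.g. ConLog) treat them
    # as monitor-only.
    import subprocess

    IFO = 'H1'
    # Assumed: `guardctrl list` prints one node name per line; adjust if the
    # actual interface differs.
    nodes = subprocess.check_output(['guardctrl', 'list']).decode().split()

    # Illustrative subset of per-node guardian channels; the real script may
    # include more suffixes.
    SUFFIXES = ['STATE', 'REQUEST', 'NOMINAL', 'MODE', 'OP']

    with open('autoBurt.req', 'w') as req:
        for node in sorted(nodes):
            for suffix in SUFFIXES:
                req.write('RO {}:GRD-{}_{}\n'.format(IFO, node, suffix))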

LHO General
corey.gray@LIGO.ORG - posted 09:08, Thursday 02 February 2017 (33838)
Transition To DAY

TITLE: 02/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
    Wind: 9mph Gusts, 6mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.55 μm/s
QUICK SUMMARY:

Nutsinee handed off H1 and gave me a brief overview of what they needed to do to squelch the PI modes last night. (Once again, if there are issues with PI, please check the PI Damping wiki, & Terra says she can be called any time.)

Whoops, just had a lockloss at 17:02!

H1 PSL (PSL)
patrick.thomas@LIGO.ORG - posted 08:48, Thursday 02 February 2017 (33839)
Weekly PSL Chiller Reservoir Top-Off
Added 250 mL H2O to the crystal chiller. No alarms on the diode chiller. The canister filters appear clear.
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 08:15, Thursday 02 February 2017 (33837)
Ops Owl Shift Summary

TITLE: 02/02 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC

STATE of H1: Observing at 68Mpc

INCOMING OPERATOR: Corey

SHIFT SUMMARY: The 4735 Hz mode started off high at the beginning of this lock but damped down very quickly. Locked and in Observing for the rest of the night.

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 00:53, Thursday 02 February 2017 (33834)
Owl Shift Transition

TITLE: 02/02 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC

STATE of H1: Observing at 66Mpc

OUTGOING OPERATOR: Jim

CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.68 μm/s

QUICK SUMMARY: PI issue solved (alog33833). Back to Observe and hopefully stay there for the rest of the night.

H1 General
jim.warner@LIGO.ORG - posted 00:38, Thursday 02 February 2017 - last comment - 07:14, Thursday 02 February 2017(33832)
Shift Summary

TITLE: 02/02 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: PI's bad, finally back to observing
LOG:

Couldn't stay locked because of PI. I made new band passes to deal with the shifted modes. Didn't realize there is a PLL frequency that also needs to be adjusted when you change the band pass filter; it's kind of buried in the middle of another step on the PI wiki. The PLL shouldn't have locked with this frequency too mismatched (how close is close enough?), but I could have sworn I saw the PLL lock light going green. Nutsinee found and adjusted it; seems good now.

Comments related to this report
terra.hardwick@LIGO.ORG - 07:14, Thursday 02 February 2017 (33836)

Thanks Jim, and sorry for the not-obvious PLL thing in the wiki.

As far as setting the PLL frequency, within a Hz should be close enough.

I'm sure you did see the PLL lock light going green before: it could've been locking on some other, smaller peak and then unlocking and relocking, or, once your PI rang up high enough, the PLL could've locked on the right peak, but by then it was too big to damp.

H1 DetChar (CDS, DetChar, Lockloss, SUS)
borja.sorazu@LIGO.ORG - posted 23:08, Wednesday 01 February 2017 (33830)
ADC saturations of H1:ASC-AS_C_SEG# may be causing occasional locklosses

This entry is a follow up of one of the reported issues on my last DQ Shift.

It concerns the single lockloss that took place on 2017-01-28 16:43:45 UTC (GPS 1169657043), as reported by the segment database under flag H1:DMT-DC_READOUT_LOCKED:1.

What caught my attention about this lockloss is that it was preceded by a not-very-common overflow of the channels H1:ASC-AS_C_SEG#. These channels are the ADC values of the 4 segments of a QPD located at the antisymmetric port, used for alignment sensing and control of the signal recycling mirrors SRM and SR2.

A closer look of the lockloss pointed to several issues:

1) The DMT segment DMT-DC_READOUT_LOCKED:1, which flags when the detector is locked, seems to be off. DMT segment generation should round the first sample of lock up to the next GPS second, and round down for the end of a segment; so while the start second is included as locked, the stop second of the segment should not be, i.e. the segment is of the type [startgps, stopgps). Looking at this particular lockloss, this does not seem to be right:

DMT-DC_READOUT_LOCKED:1 --> 1169596818   1169657043

That means the detector was locked up to GPS 1169657043, which corresponds to UTC 2017-01-28 16:43:45.

The way that flag is defined is: H1:DMT_DC_READOUT_LOCKED_s boolean "H1:DMT_DC_READOUT_s & H1:DMT_XARM_LOCK_s & H1:DMT_YARM_LOCK_s"

Where:

X-ARM Lock: H1:DMT_XARM_LOCK_s meanabove "H1:LSC-TR_X_QPD_B_SUM_OUTPUT" threshold=500
Y-ARM Lock: H1:DMT_YARM_LOCK_s meanabove "H1:LSC-TR_Y_QPD_B_SUM_OUTPUT" threshold=500

So next I plot these 2 channels, together with H1:LSC-POP_A_LF_OUTPUT, which is an indication of the power built up in the PR cavity and therefore a good indication of the IFO being locked. In dashed lines, blue is the threshold=500 and red is the stop GPS of the lock segment. Clearly the detector was unlocked before the stop GPS time of the flag:
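
For anyone wanting to reproduce the check, here is a minimal gwpy sketch of the DMT 'meanabove' condition around the flagged stop time (assumes NDS access; this is not the DMT segment generator itself):

    # Cross-check the per-second mean of the two arm-transmission channels
    # against the DMT threshold=500 in the seconds leading up to the flagged
    # segment stop.
    from gwpy.timeseries import TimeSeriesDict

    stop = 1169657043  # flagged stop GPS of DMT-DC_READOUT_LOCKED:1
    chans = ['H1:LSC-TR_X_QPD_B_SUM_OUTPUT', 'H1:LSC-TR_Y_QPD_B_SUM_OUTPUT']
    data = TimeSeriesDict.get(chans, stop - 10, stop + 2)

    for gps in range(stop - 10, stop + 1):
        means = [data[c].crop(gps, gps + 1).mean().value for c in chans]
        locked = all(m > 500 for m in means)   # DMT meanabove threshold=500 per arm
        print(gps, ['%.1f' % m for m in means], 'locked' if locked else 'UNLOCKED')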

2) Now to the main issue of this post: the overflow of the H1:ASC-AS_C_SEG# channels and the lockloss. I plot next the lockloss information as indicated by H1:LSC-POP_A_LF_OUTPUT, together with one of the saturating ADC channels, H1:ASC-AS_C_SEG4 (from the H1ASC user model), and the nominally identical IOP model channel H1:IOP-ASC0_MADC6_EPICS_CH8. The first thing we notice is that they are not identical, as they should be. However, the user model channel runs at 2 kHz while the IOP runs at 65 kHz, so although both channels carry the same signal in principle, to obtain H1:ASC-AS_C_SEG4 we first need to apply the downsampling filter to H1:IOP-ASC0_MADC6_EPICS_CH8 and then downsample it. In addition, these plots are of EPICS channels at 16 Hz. Still, there are considerable discrepancies. The vertical dashed magenta line marks the integer GPS time of the start of the overflow as given by the H1ASC user model (accumulative overflows: 1169657037). Because the ADCs are 16 bits, an overflow should happen at 2^15 = 32768. There is no such value in that second, but note that these are EPICS records at 16 Hz, so it must have been a very short overflow; there are clearer, longer overflows just before the lockloss, but they were not reported on the Summary Pages, maybe because it ignores the 1 second before lockloss?:
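
A quick way to look for rail-proximity around that second, assuming the 16 Hz EPICS record of H1:ASC-AS_C_SEG4 is retrievable via gwpy/NDS (at 16 Hz a millisecond-scale overflow can easily fall between samples, consistent with not seeing 32768 here):

    from gwpy.timeseries import TimeSeries

    RAIL = 2**15            # 16-bit ADC rails at +/-32768 counts
    t0 = 1169657037         # GPS second of the reported accumulated overflow
    ts = TimeSeries.get('H1:ASC-AS_C_SEG4', t0 - 5, t0 + 10)
    near_rail = abs(ts.value) >= 0.95 * RAIL   # flag samples within 5% of the rail
    print('%d of %d samples within 5%% of the rail' % (near_rail.sum(), ts.size))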

Now to the interesting part.

It would be nice to look at the saturating channels at a higher sampling rate, but they are not stored in the frames. What we do have in the frames (at 2 kHz, the rate of the corresponding user model) is a combination of the 4 segment channels as the normalised sum (H1:ASC-AS_C_NSUM_OUT_DQ); this channel is then also used to normalise the combinations used to generate the Pitch and Yaw signals (for Yaw, H1:ASC-AS_C_YAW_OUT_DQ). I have scaled these 2 channels and overlaid them on the previous plot in purple and black respectively:

Notice how these NSUM and YAW channels oscillate after the overflows, as if responding to a step. In fact, looking at the real-time signals (with the help of David Barker) of the IOP channel H1:IOP-ASC0_MADC6_EPICS_CH8 and the equivalent user model channel H1:ASC-AS_C_SEG4, we observed that while the IOP saturated at 2^15 as expected, the user model channel went above 50000 counts. (Observing these saturations in real time is only practical when the detector is unlocked, since that is when they happen often; they are rare when the detector is locked.) We then realised that this is due to the downsampling filters applied to the IOP channel (at 65 kHz) to bring it to the 2 kHz rate at which the user model runs: sharp saturations of the IOP cause big step responses in H1:ASC-AS_C_SEG4. These step responses (as we can see above) perturb the alignment signals that are fed back to the control signals of the last stage of the SR2 and SRM mirrors.

Jeff Kissel helped me find the filter applied in the process and simulate the expected step response of the downsampling filter used to go from 65 kHz to 2 kHz, and then apply the additional 1 Hz low-pass filter used to generate the NSUM channel (no additional filter is applied for the Pitch and Yaw channels, but note these are normalised by NSUM):

The step response time constant of less than 1 second for the product of both filters is not far from the period of the oscillations observed on the NSUM and YAW channels after saturation (periodicity of about 0.3 s).
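
For illustration, a scipy sketch of that kind of simulation; the elliptic anti-alias filter below is only a stand-in (the real 65 kHz -> 2 kHz decimation filter coefficients live in the front-end code), cascaded with a 1 Hz low-pass standing in for the NSUM filter:

    import numpy as np
    from scipy import signal

    fs = 65536.0                                                   # IOP model rate
    # Stand-in anti-alias filter for decimation to 2048 Hz (corner ~920 Hz)
    aa = signal.ellip(8, 1, 80, 0.9 * 1024, fs=fs, output='sos')
    # Stand-in for the 1 Hz low-pass used to form NSUM
    lp1 = signal.butter(2, 1.0, fs=fs, output='sos')

    t = np.arange(0, 3.0, 1 / fs)
    step = np.ones_like(t) * 2**15          # hard saturation step at the ADC rail
    y = signal.sosfilt(np.vstack([aa, lp1]), step)

    # Rough settling timescale: time to reach 90% of the final value
    i90 = np.argmax(y >= 0.9 * y[-1])
    print('time to 90%% of final value: %.2f s' % t[i90])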

Now I add to the plot the control signals sent to the last stage of both mirrors (SRM and SR2); note that for clarity I only plot one of the four quadrant MASTER_OUTPUT signals for each suspension:

Zooming in around the lockloss shows how the step response happens immediately before the lockloss, and soon after, the control signals to the SRM start to misbehave:

For completeness, and in order to see how these H1:ASC-AS_C_SEG channels are used to generate alignment signals for SRM and SR2, I have attached to this aLOG the H1ASC overview screen, the H1ASC_AS_C overview, and the H1ASC Input and Output Matrices.

It is important to note that saturations of these channels are not common when the detector is locked; as an example, here is an ASD plot of the ASC_AS_C_SEG channels and the RMS of their associated IOP channels (they are well below saturation):

Finally, Jeff suggested also looking at the step response of the downsampling from the 65 kHz IOP model to the 16 kHz OMC model, which has a time constant of a fraction of a millisecond. Could this be related to kHz glitches?:

Images attached to this report
H1 ISC
jim.warner@LIGO.ORG - posted 19:58, Wednesday 01 February 2017 - last comment - 12:10, Friday 03 February 2017(33828)
New PI bandpasses added

Probably because of temperature changes in the VEAs and TCS being down earlier, I've had to add two new PI filters. The modes at 18040 Hz (now 18037.5 Hz) and 18056 Hz (now 18055 Hz) have moved down in frequency and were both ringing up and breaking the lock. The first attached plot shows some detail of the bandpasses; the dashed lines are the new filters. The second plot shows the spectra of the new modes: the vertical red lines are the troublemakers, the green trace is from a lock when they were both high, and the red trace is live.

Except that the blue filter doesn't seem to be working... I just lost lock again.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 22:36, Wednesday 01 February 2017 (33831)

I can't seem to do anything about the 18037.5 mode. None of the tweaks (gains, phases) do anything when the mode rings up. I've also tried sitting at DC readout for an extended period to see if that helps quiet the mode any, but I'm not having any luck.

nutsinee.kijbunchoo@LIGO.ORG - 00:40, Thursday 02 February 2017 (33833)

The issue was fixed by setting the PLL set frequencies to match the new band pass filters. Did this for both MODE27 and MODE28. This PLL step was buried in the "If PI seems unresponsive" section of the instructions, so I broke step 3 into two parts.
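
For future reference, a hypothetical Python helper capturing the idea (the channel names below are placeholders, not verified H1 PI channel names, and the mode-to-frequency mapping is illustrative):

    # Keep the PLL set frequency consistent with the band-pass centre whenever
    # a PI filter is retuned.
    from epics import caput

    def retune_pi_mode(mode, new_freq_hz, ifo='H1'):
        """Point both the band-pass centre and the PLL set frequency at new_freq_hz."""
        base = '{}:SUS-PI_PROC_COMPUTE_MODE{}'.format(ifo, mode)   # placeholder prefix
        caput(base + '_BP_CENTER_FREQ', new_freq_hz)               # placeholder suffix
        caput(base + '_PLL_SET_FREQ', new_freq_hz)                 # placeholder suffix

    # e.g. the two retuned frequencies from the parent entry (mode numbering illustrative)
    retune_pi_mode(27, 18037.5)
    retune_pi_mode(28, 18055.0)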

terra.hardwick@LIGO.ORG - 07:11, Thursday 02 February 2017 (33835)

Thanks; these were absolutely the right steps to take when there are large temperature/TCS changes. Apologies that the need to change the PLL frequency wasn't obvious in the wiki. Thanks Nutsinee for editing that!

Also, feel free to still call me at any time (even in the middle of the night), especially if PI causes more than one lockloss. My number is on the main whiteboard.

H1 ISC (GRD, OpsInfo)
jim.warner@LIGO.ORG - posted 17:55, Wednesday 01 February 2017 - last comment - 09:37, Thursday 02 February 2017(33825)
ALS Fiber polarization fault can prevent ISC_LOCK from progressing, other problems

While Jason and Fil were looking at the TCSY laser, Corey had left ISC_LOCK sitting at ENGAGE_SRC_ASC. We noticed that DIAG_MAIN and the ALSX guardian were both complaining about the X-arm fiber polarization. We thought we could ignore this because the arms were no longer "on ALS", but when ISC_LOCK got to the SHUTTER_ALS state, ISC_LOCK couldn't proceed because the ALSX guardian was in fault.

To move forward, I had to take the ALSX guardian to manual and put it in the SHUTTERED state. Buuttt... now ALSX wasn't monitored by ISC_LOCK. When I got to NLN, the TCSCS was in safe (from earlier work?) and it had a bunch of differences in SDF; the TCS_ITMY_CO2_PWR guardian was also complaining (where is this screen? I had to use "guardctrl medm TCS_ITMY_CO2_PWR" to launch it; it recovered after INITing), and ALSX was controlled by USER. This last one I fixed with a caput: "caput H1:GRD-ALS_XARM_MANAGER ISC_LOCK". Normally that would be fixed by INITing the parent node, but for ISC_LOCK that means going through DOWN and breaking lock.

Of course, after fixing all of that and surviving an earthquake, I lost lock due to PIs that seem to have shifted because of the TCS outage.

Comments related to this report
jim.warner@LIGO.ORG - 18:28, Wednesday 01 February 2017 (33826)TCS

There are more SDF diffs in TCSCS. Looks like these should probably be unmonitored.

Images attached to this comment
jim.warner@LIGO.ORG - 20:42, Wednesday 01 February 2017 (33829)

More TCS diffs.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 09:37, Thursday 02 February 2017 (33841)OpsInfo

To clarify a few Guardian operations here: please be careful, because putting a node back to managed by clicking the "AUTO" button will make USER the manager, NOT the normal, correct manager node. The way for a manager to regain control of its subordinates is to go to INIT, as Jim states. It is true that if you select INIT from ISC_LOCK while not in Manual mode, it will go to DOWN after completing the INIT state, but if you keep ISC_LOCK in Manual then you can wait for the INIT state to complete and then click the next state that ISC_LOCK should execute. That last part is the tricky part, though: if you reselect the state that you stopped at before going to INIT, you run the risk of losing lock because it will rerun that state. It may not break lock, but some states will. Jim used the other way to regain control of a node, by caput'ing the manager node name into H1:GRD-{subordinate_node}_MANAGER. This also works, but is kind of the "back door" approach (although it may be a bit clearer depending on circumstances).
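
A minimal Python sketch of that "back door", using the same channel Jim used (pyepics; equivalent to the shell caput in his entry):

    # Hand a subordinate node back to its proper manager by writing the manager
    # node's name into its _MANAGER channel.
    from epics import caget, caput

    node = 'ALS_XARM'
    chan = 'H1:GRD-{}_MANAGER'.format(node)

    print('current manager:', caget(chan, as_string=True))
    caput(chan, 'ISC_LOCK')      # same as: caput H1:GRD-ALS_XARM_MANAGER ISC_LOCK
    print('new manager    :', caget(chan, as_string=True))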

As for the TCS_ITMY_CO2_PWR node, all nodes are on the Guardian Overview MEDM screen; all TCS nodes are under the TCS group in the top right, near the BRS nodes. Perhaps we should make sure that these are also accessible from the TCS screens.

H1 AOS
corey.gray@LIGO.ORG - posted 15:01, Wednesday 01 February 2017 - last comment - 19:26, Wednesday 01 February 2017(33816)
Another TCSy Laser Trip

While heading to NLN, the TCSy laser tripped again.  Fil & Jason are working on this now (& Guardian is paused).

Comments related to this report
jason.oberling@LIGO.ORG - 16:55, Wednesday 01 February 2017 (33821)TCS

J. Oberling, F. Clara

Fil removed the comparator box and ran some quick tests in the EE lab; he found nothing obviously wrong with it.  The box was reinstalled without issue.  Using a multimeter, Fil measured the input signal from the IR sensor.  The interesting thing here is that the input signal changed depending on how the box was positioned (trip point is at ~96mV).  Hanging free in mid-air the input signal measured ~56mV; with Fil touching the box, still hanging free, the signal dropped to ~36mV; holding the box on the side of the TCS enclosure the signal changed yet again to ~25mV; and finally, placing the box on top of the TCS enclosure (its usual home), the signal dropped yet again to ~15mV.  There is definitely something fishy going on with this comparator box; grounding issue or cable/connection problem maybe?

As a final check I opened the side panel of the TCS enclosure to check the viewport and ensure there were no obvious signs of damage.  Using a green flashlight I found no obvious signs of damage on either optic in the TCS viewport; in addition, nothing obviously looked amiss with the viewport itself, so for now this seems to be an issue with either the comparator box or the IR sensor.  Seeing as it appears to be working again (famous last words, I know...) we restarted the TCSy CO2 laser.  Everything came up without issue; we will continue to monitor this.

alastair.heptonstall@LIGO.ORG - 17:00, Wednesday 01 February 2017 (33823)

Wondering if this is related to the glitchy signals from the TCSY laser controller.  They all run through that same controller box (though of course we did try swapping that out).  The change with lifting it up / putting it down sounds like it could be a weird grounding issue.

Give me a call if you need any help on this tonight - I'll email my number to you.

jason.oberling@LIGO.ORG - 19:26, Wednesday 01 February 2017 (33827)TCS

Here are a couple pictures for informational purposes.  The first is the TCS laser controller chassis, and shows which light is lit when the IR sensor is in alarm.  The second shows the comparator box in alarm.  This box sits on top of the TCSy enclosure, on the north-east corner.

Images attached to this comment
H1 DetChar (DetChar, PEM, SEI, SUS)
thomas.dent@LIGO.ORG - posted 11:59, Tuesday 31 January 2017 - last comment - 08:20, Wednesday 08 February 2017(33761)
Severe transient scattering events in DARM caused by loud 20-30Hz disturbances ('thuds') in CS/LVEA

PyCBC analysts, Thomas Dent, Andrew Lundgren

Investigation of some unusual and loud CBC triggers led to identifying a new set of glitches which occur a few times a day, looking like one or two cycles of extremely high-frequency scattering arches in the strain channel.  One very clear example is this omega scan (26th Jan) - see particularly LSC-REFL_A_LF_OUT_DQ and IMC-IM4_TRANS_YAW spectrograms for the scattering structure.  (Hence the possible name SPINOSAURUS, for which try Googling.)

The cause is a really strong transient excitation at around 30Hz (aka 'thud') hitting the central station, seen in many accelerometer, seismometer, HEPI, ISI and SUS channels.  We made some sound files from a selection of these channels :

PEM microphones, interestingly, don't pick up the disturbance in most cases - so probably it is coming through the ground.

Note that the OPLEV accelerometer shows ringing at ~60-something Hz. 

Working hypothesis is that the thud is exciting some resonance/relative motion of the input optics which is causing light to be reflected off places where it shouldn't be ..

The frequency of the arches (~34 per second) would indicate that whatever is causing scattering has a motion frequency of about 17Hz (see eg https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=154054 as well as the omega scan above).
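
Spelled out, assuming each half-cycle of the scatterer's motion produces one arch:

    f_{\mathrm{motion}} \approx \tfrac{1}{2} f_{\mathrm{arch}} \approx \tfrac{1}{2}\,(34\ \mathrm{s^{-1}}) \approx 17\ \mathrm{Hz}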

Maybe someone at the site could recognize what this is from listening to the .wav files?

Comments related to this report
thomas.dent@LIGO.ORG - 12:07, Tuesday 31 January 2017 (33763)

A set of omega scans of similar events on 26th Jan (identified by thresholding on ISI-GND_STS_HAM2_Y) can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/
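
A sketch of that kind of thresholding with gwpy (the exact channel suffix, band and threshold here are assumptions, not the settings used for the linked scans):

    # Find seconds when the HAM2 ground seismometer is anomalously loud in the
    # thud band, as candidate event times for omega scans.
    from gwpy.timeseries import TimeSeries

    start, end = 'Jan 26 2017 00:00', 'Jan 27 2017 00:00'
    ts = TimeSeries.get('H1:ISI-GND_STS_HAM2_Y_DQ', start, end)
    band = ts.bandpass(20, 40)               # the thuds live around 20-30 Hz
    rms = band.rms(1.0)                      # 1 s RMS
    loud = rms.times[rms.value > 10 * rms.value.mean()]   # illustrative threshold
    print('\n'.join(str(t) for t in loud))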

robert.schofield@LIGO.ORG - 13:26, Tuesday 31 January 2017 (33768)

Wow, that is pretty loud; it seems to be seen (though just barely) even on seismometers clear out at EY, with about the right propagation delay for air or ground propagation in this band (about 300 m/s). Like a small quake near the corner station, or something really heavy, like the front loader, going over a big bump or setting its shovel down hard. Are other similar events during working hours, and are they also seen at EY or EX?
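
For scale, assuming the ~4 km corner-to-end-station distance:

    \Delta t \approx \frac{4000\,\mathrm{m}}{300\,\mathrm{m/s}} \approx 13\,\mathrm{s}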

thomas.dent@LIGO.ORG - 12:43, Wednesday 01 February 2017 (33811)

It's hard to spot any pattern in the GPS times.  As far as I have checked, the disturbances are always much stronger in the CS/LVEA than at the end stations (if seen at all in EX/EY ..).

More times can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan23/ https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan24/

Hveto investigations have uncovered a bunch more times - some are definitely not in working hours, eg  https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ac-HPI-HAM2/scans/1169549195.98/ (02:46 local)   https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ab-HPI-HAM2/scans/1168330222.84/  (00:10 local)
 

thomas.dent@LIGO.ORG - 08:20, Thursday 02 February 2017 (33812)

Here's a plot which may be helpful regarding the times of disturbances in the CS, showing that the great majority of occurrences were on the 23rd, 26th-27th and early on the 28th Jan (all times UTC).  This ought to be correlated with local happenings.

The ISI-GND HAM2 channel also has loud triggers at times where there are no strain triggers as the ifo was not observing.  The main times I see are approximately (UTC time)

Jan 22 : hours 13, 18, 21-22

Jan 23 : hours 0-1, 20

Jan 24 : hours 0, 1, 3-6, 10, 18-23

Jan 25 : hours 21-22

Jan 26 : hours 17-19, 21-22

Jan 27 : hours 1-3, 5-6, 10, 15-17, 19, 21, 23

Jan 28 : hours 9-10

Jan 29 : hours 19-20

Jan 30 : hours 17, 19-20 

Hmm.  Maybe this shows a predominance of times around hour 19-20-21 UTC i.e. 11-12-13 PST.  Lunchtime??  And what was special about the 24th and 27th ..

Images attached to this comment
jim.warner@LIGO.ORG - 12:12, Thursday 02 February 2017 (33846)

Is this maybe snow falling off the buildings? The temps started going above the teens on the 18th or so and started staying near freezing by the 24th. Fil reported seeing a chunk he thought could be ~200 lbs fall.

corey.gray@LIGO.ORG - 12:48, Thursday 02 February 2017 (33847)DetChar

Ice Cracking On Roofs?

In addition to the ice/snow falls mentioned by Jim, thought I'd mention audible bumps I heard from the Control Room during some snowy evenings a few weeks ago (alog33199).  Beverly Berger emailed me suggesting this could be ice cracking on the roof.  We currently do not have tons of snow on the roofs, but there are some drifts which might be on the order of 1' tall.

MSR Door Slams?

After hearing the audio files from Thomas' alog, I was sensitive to the noise this morning.  Because of this, thought I'd note some times this morning when I heard a noise similar to Thomas' audio; this noise was the door slamming when people were entering the MSR (Mass Storage Room, adjacent to the Control Room; there was a pile of boxes which the door would hit when opened... I have since slid them out of the way).  Realize this isn't as big of a force as what Robert mentions or the snow falls, but just thought I'd note some times when people were in/out of the room this morning:

  • 19:00:55, 19:05:22, 19:10:16, 19:43:40-19:44:00 Mass Storage Room door slam (not seen on DARM spectra).
thomas.dent@LIGO.ORG - 06:02, Friday 03 February 2017 (33858)

I took a brief look at the times in Corey's previous 'bumps in the night' report; I think I managed to deduce correctly that it refers to UTC times on Jan 13.  Out of these I could only find glitches corresponding to the times 5:32:50 and 6:09:14.  There were also some loud triggers in the ISI-GND HAM2 channel on Jan 13, but only one corresponded in time with Corey's bumps: 1168320724 (05:31:46).

The 6:09 glitch seems to be a false alarm, a very loud blip glitch at 06:09:10 (see https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/H1_1168322968/) with very little visible in aux channels.  The glitch would be visible on the control room glitchgram and/or range plot but is not associated with PEM-CS_SEIS or ISI-GND HAM2 disturbances.

The 5:32:50 glitch was identified as a 'PSL glitch' some time ago - however, it also appears to be a spinosaurus!  So, a loud enough spinosaurus will also appear in the PSL. 
Evidence : Very loud in PEM-CS_SEIS_LVEA_VERTEX channels (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155306) and characteristic sail shape in IMC-IM4 (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155301).

The DetChar SEI/Ground BLRMS Y summary page tab has a good witness channel; see the 'HAM2' trace in this plot for the 13th, i.e. if you want to know 'was it a spinosaurus', check for a spike in HAM2.

thomas.dent@LIGO.ORG - 06:44, Tuesday 07 February 2017 (33962)

Here is another weird-audio-band-disturbance-in-CS event (or series of events!) from Jan 24th ~17:00 UTC :
https://ldas-jobs.ligo-wa.caltech.edu/~tdent/detchar/o2/PEM-CS_ACC_LVEAFLOOR_HAM1_Z-1169312457.wav

Could be someone walking up to a piece of the instrument, dropping or shifting some heavy object then going away .. ??

Omega scan: https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/psl_iss/1169312457.3/

thomas.dent@LIGO.ORG - 08:20, Wednesday 08 February 2017 (33996)

The time mentioned in the last entry turns out to have been during scheduled Tuesday maintenance, when people were indeed in the LVEA doing work (and the ifo was not observing, though locked).

H1 SEI (DetChar, SEI)
krishna.venkateswara@LIGO.ORG - posted 16:33, Wednesday 25 January 2017 - last comment - 14:55, Friday 03 February 2017(33648)
Spare STS test at End-Y

Krishna

I took a quick look at the data from the PEM STS at EndY, which is mounted on the BRSY platform. The channels are mentioned in 33533.

The first plot shows the GND STS (used by SEI) and the PEM STS converted to angle units (by multiplying by w^2/g), in comparison with BRSY rX. The wind speed during this time was less than 2-3 m/s. The GND STS sees less signal than the BRS below ~50 mHz, but the PEM STS sees a lot more. The second plot shows the coherence between some channels, and the third plot shows the X-direction signal. The Z channel is not recorded, so I can't access it through ligodv-web.
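
For reference, a minimal Python sketch of that w^2/g conversion (assuming the STS spectrum is displacement-calibrated in metres):

    # Convert a ground displacement ASD into equivalent tilt so it can be
    # compared with BRSY rX.
    import numpy as np

    g = 9.81                                   # m/s^2

    def displacement_asd_to_tilt(freq_hz, disp_asd_m):
        """Equivalent tilt ASD [rad/rtHz] = (2*pi*f)^2 / g * displacement ASD."""
        w = 2 * np.pi * np.asarray(freq_hz)
        return w**2 / g * np.asarray(disp_asd_m)

    # e.g. 1e-6 m/rtHz of ground motion at 0.1 Hz looks like ~4e-8 rad/rtHz of tilt
    print(displacement_asd_to_tilt(0.1, 1e-6))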

The source of extra noise in the PEM STS could be a) insufficient mass centering, or b) extra temperature noise, either on the STS case or the table it sits on; more insulation on the table and the STS case might help...

Non-image files attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 09:25, Thursday 26 January 2017 (33659)

When allowed (Tuesday?) we could go check the centering and center it up as needed--Hugh

hugh.radkins@LIGO.ORG - 12:04, Thursday 02 February 2017 (33845)

On Tuesday went to EndY to Check on and Center the PEM STS Masses

Upon arrival I checked the mass measurements right away: U, V & W were -3.7, -1.8 & -12 V.  The X axis is generated from U, V & W, but the Y signal comes just from V & W.  See the manual for the axis mapping.

While the W mass is clearly out to lunch, U is also high, >2 volts.  However, as W contributes to both the X & Y signals (but with different weightings), you'd think both X & Y signals would be noisy.  One might even argue that Y (North in the STS manual) would be even worse than X.

This morning, Thursday ~0930 PST, with an IFO lockloss, we went to the VEA again to check and found the U, V & W masses exactly where I recorded them almost two days ago, after nearly two hours of centering attempts: 9.9, 12.9 & -13.5 V.  After recording the voltages this morning, we hit the centering button and left the VEA.

Now see the attached 60 hours of minute trends.  Before the Tuesday mass centering activity, the X & Y time series values suggest fairly well-zeroed signals; I did zoom in, and they are.  During the interim ~2 days before this morning's look, the Y signal was pinned to a rail.  Noting my U, V & W voltages: before Tuesday just W was on its rail, but after Tuesday essentially all 3 masses were at or near a rail.  This further suggests you cannot just look at the X, Y & Z signals to assess the mass centering.

So, clearly, waiting for the masses to come off their rails did not yield results.  In addition, it appears there is a higher-frequency noise on the channels, showing up on X after Tuesday and now on Y after this morning's centering.  We will go measure the mass voltages when allowed.  I did leave some leads hanging off the monitor port for voltage measuring, but they are mostly insulated and I don't think they are causing the noise.  However, we'll remove these next time just in case.

Meanwhile, the positive glitches seen on the time series (the average is still under the min trace but the max [black] is way higher than the average) are not continuous.  I guess these show up as the 1/f noise starting around 40 Hz on the spectra, second attachment.  The reference traces are from 1200 UTC 31 Jan (before the Tuesday activities).  Note the higher noise on the X (pink) trace compared to the Y signal (cyan).  I checked the wind; it did not seem to be an issue at that time.

Images attached to this comment
hugh.radkins@LIGO.ORG - 14:55, Friday 03 February 2017 (33872)

I have to just throw this in and run.

Here is a comparison of the SEI ground STS and the PEM unit on the BRS.  These have some gain difference I don't understand yet, but I made them match by multiplying by their ratio at the useism.  At this BW, the PEM is not happy.

Images attached to this comment