With the ASC IMC model now running at 16384 Hz, we look at the coherence of jitter as measured by the IMC WFS and other channels up to 7.4 kHz. Not sure we can conclude anything except that pointing errors contaminate everything.
We can compare this with an older 900-Hz bandwidth measurement from alog 31631, which was taken before the piezo peak fix (alog 31974).
Note that the 1084 Hz feature doesn't have coherence with the IMC WFS.
Can you check the DC sum channels for the IMC WFS as well? They are the ones that hVeto keeps finding as related to the 1080 Hz noise, and they see a modulation in the noise rather than a steady spectrum.
Done; again nothing for the bump in question, though there are coherence bumps for f > 1100 Hz and f < 800 Hz.
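For anyone wanting to reproduce this kind of coherence check offline, here is a minimal MATLAB sketch. The channel names, GPS time, and the assumption that get_data returns one column per channel at a common rate are placeholders, not the actual setup used above; get_data is the same mDV-style fetch used in the flow-rate script at the end of this page.

% Sketch: coherence of DARM with one IMC WFS channel up to 7.4 kHz.
% Channel names and GPS time are placeholders.
t0 = 1165000000;                   % placeholder GPS start time
fs = 16384;                        % ASC IMC model rate quoted above
chan = {'H1:CAL-DELTAL_EXTERNAL_DQ', 'H1:IMC-WFS_A_I_PIT_OUT_DQ'};
[data, t, info] = get_data(t0, t0 + 600, chan, []);   % 10 min of data
% assumes both channels come back as equal-length columns at rate fs
nfft = 16*fs;                      % ~0.06 Hz frequency resolution
[cxy, f] = mscohere(data(:,1), data(:,2), hann(nfft), nfft/2, nfft, fs);
semilogx(f, cxy); grid on
xlim([10 7400])                    % up to 7.4 kHz as in the text
xlabel('Frequency (Hz)'); ylabel('Coherence')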
Restarted Seismic FOM display computer nuc5; had to close a terminal window that had been left open to correct the display. The restart was also required to apply kernel patches.
A2L coherence looks good this morning.
WeeklyXtal - Powers in D3 and D4 show a decline that is consistent with humidity trends. D2 power appears to be fairly steady. D1 power appears to have increased slightly.
WeeklyLaser - OSC_BOXHUM is on the decline. Generally everything looks ok here.
WeeklyEnv - As noted in the WeeklyXtal comments, humidity has dropped notably at all sensors.
WeeklyChiller - Normal. If there is anything to comment on, there appears to be a very marginal increase in pressures at times, with a corresponding increase in some head flows.
Concur with Ed's analysis, everything looks normal.
Ops Shift Transition: 12/07/2016, Day Shift 16:00 – 00:00 UTC (08:00 – 16:00 PT)
State of H1: IFO locked at NOMINAL_LOW_NOISE. Power is 29.0 W. Range is 70.9 Mpc.
Intent Bit: Observing
Weather: Wind is a light breeze (1–3 mph), 22°F, and clear; no rain/snow forecast.
Primary 0.03 – 0.1 Hz: Currently at 0.03 μm/s
Secondary 0.1 – 0.3 Hz: Currently at 0.9 μm/s
Quick Summary: IFO locked in observing mode for past 15 hours.
Outgoing Operator: TJ
TITLE: 12/07 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 70.6383Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: An extremely quiet shift. Cruising on a 14.5 hr lock at 70 Mpc. I haven't touched PIs, didn't run a2l, and CP4 has been a bit weird, but the VE team will be working on it.
LOG:
Locked and Observing for almost 11hrs. There has been 1 EY saturation per hour, but a few more glitches than that on the range. CP4 level is at 82%
TITLE: 12/07 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 72.5658Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 4 mph gusts, 3 mph 5-min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY: Smooth running at 72 Mpc for 6:45 hrs. It's cold out, but all of the ice is gone.
TITLE: 12/07 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 70.7388Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Pretty smooth shift as far as locking goes. Some excitement regarding CP4 (see aLog 32283). Also, set some arbitrarily high thresholds for most dust monitors to stop alarms. JeffB will reset them tomorrow. Locked for over 6 hours now.
LOG:
3:35 CP4 alarm
5:24 GRB alert, contacted LLO who verified that they received the alert, begin standdown
6:24 GRB standdown end
7:10 PI mode 27 ringing up; changed phase from 20 to 120. Did not go out of Observe. Are we supposed to go out of Observing to make PI mode changes?
As of now, you do not need to drop out of Observing to change either PI phase or gain. We unmonitored these channels, so they won't drop you out anyway. If a PI ever gets so high that it takes more than a phase flip, a gain flip, or a gain increase of a couple thousand, and you're seeing it in DARM and worried about losing lock, then you'd want to drop out.
I notice on the CDS overview screen that there is a timing error on H1IOPASC0. I won't clear it now, but someone could if we lose lock.
As a benchmark against which to compare upcoming O2 data, I have compiled a list of narrow lines seen in the H1 DARM spectrum up to 2000 Hz, using 107 hours of ER10 FScan 30-minute SFTs. There are no big surprises relative to the lines and combs Ansel and I have reported on previously from ER9 and later data, but below are some observations. Attached figures show selected band spectra, and a zipped attachment contains a much larger set of bands. Also attached for reference is a plaintext list of combs, isolated lines, PEM-associated lines, etc. In the attached spectra, the red curve is the ER10 data, and the black curve is the full-O1 data. The label annotations are keyed to the height of the red curve, but in some cases those labels refer to lines in the O1 data that are not (yet) visible in the accumulated current data. For the most part, lines seen in O1 that don't show up in ER10 nonetheless remain for now in the lines list and still have labels on the graphs that end up in the red fuzz. If they fail to emerge in O2 data, they will be deleted from future line lists. Observations:
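(As an aside on method: the line hunt boils down to flagging narrow bins that stand well above the local noise floor. A minimal MATLAB sketch of that idea follows; it is not the actual FScan line finder, and asd and f are assumed inputs built from averaged SFTs.)

% Sketch: flag narrow lines standing above a running-median noise floor.
% asd (amplitude spectral density) and f (Hz) are assumed inputs; this
% illustrates the idea only, it is not the FScan pipeline.
nbin = 1001;                          % odd-length running-median window
floorEst = medfilt1(asd, nbin);       % median tracks the broadband floor
snr = asd ./ floorEst;                % each bin relative to the local floor
isLine = (snr > 5) & (f < 2000);      % flag bins > 5x the floor below 2 kHz
fprintf('%d candidate line bins\n', nnz(isLine));
lineFreqs = f(isLine);                % frequencies to vet against known combs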
Here is a plot of the violin mode harmonics around 1kHz, comparing the amplitudes today to the amplitudes right after the damping efforts of Nov 30th.
We don't actively damp these by default; they only get damped when someone manually engages damping. During the first part of ER10, ISI trips caused by tidal problems were ringing them up, but that problem is fixed now and most modes are ringing down. The ETMX modes (between 1003 and 1006 Hz) have increased in amplitude since the 30th.
The largest peak here is the pair on ETMY that Keith points out. We have settings that work to damp both of these modes using the mode9 filter bank on ETMY, and it would not be difficult to turn this damping on automatically in the guardian.
Our question for Keith and DetChar is: is this (the current spectrum) good enough? Or should we continue to try to add automatic damping for some of these modes?
Automatically damping the violin modes would reduce up-conversion contamination at the starts of lock stretches, making more data usable for CW searches. Even small excess power in narrow bins leads to unnecessary outliers in analysis that waste computing and manpower. Unless there is a downside to such damping, it seems warranted. Thanks, Keith
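For monitoring these amplitudes over a lock stretch, a band-limited RMS around each mode is one simple option. A rough MATLAB sketch (the 1003–1006 Hz band is from the ETMX modes mentioned above; darm is an assumed DARM time series, and the filter design is illustrative only):

% Sketch: track one violin-mode band via band-limited RMS of DARM.
fs = 16384;
[b, a] = butter(4, [1003 1006]/(fs/2), 'bandpass');  % narrow bandpass
x = filtfilt(b, a, darm);                            % zero-phase filtering
seg = fs;                                            % 1-second segments
nseg = floor(length(x)/seg);
blrms = sqrt(mean(reshape(x(1:nseg*seg), seg, nseg).^2, 1));
plot((1:nseg)/60, blrms); grid on
xlabel('Time (min)'); ylabel('BLRMS, 1003-1006 Hz band')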
Jeff B. warned me that we would likely be getting a lot of dust monitor alarms after the PEM system was reset this afternoon. He advised that we can set arbitrary alarm values in the meantime and he will remedy them in the morning. I have been setting the thresholds to 1000 minor and 10000 major as the alarms occur.
CP4 LTY250 pump level went into high alarm 20 mins ago. Cell phones have received alarms. Waiting to talk to Chandra or someone from VE...
The current alarm is not the normal "glitchiness" that has been ongoing for the last few days - 20 mins ago this signal jumped from ~95 to 100 and is riding up there. See attached.
Got a hold of Kyle. He is looking into it from home and will call back to advise if any action is required.
This is mostly a nuisance that we don't yet understand. Chandra reduced the "smoothing factor" (I love it!) from ~99.9 to 0.00 a day or so ago at John's suggestion, but this doesn't seem to have changed the behavior. We are mostly concerned with low pump levels, as opposed to high pump levels, these days. We have opened the exhaust check-valve bypasses on all 80K pumps on site. This eliminates any over-pressure situations that could have been a threat in the past during rapid pump fillings or too-high LN2 pump levels. For tonight, OPERATORS please post 4-hour trends (thanks Betsy) twice per shift. The Vacuum members will monitor from home.
It's on the bounce again...
I instructed Travis to put CP4 in manual mode with the LLCV at 35% open overnight. The software limit of 25% is great when the pump level is too high, but nothing prevents the valve from opening too far if the PID gets a pump level signal that is too low. At least in manual mode the crazy swings will stop. We can deal with it in the morning.
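To make the failure mode concrete, here is a sketch of the loop as described above: a smoothed level signal feeding a PI controller that sets the LLCV opening. The gains, setpoint, and input vector level are all made up; only the logic is the point.

% Sketch of the CP4 level loop: smoothed level -> PI -> LLCV opening.
% level is an assumed vector of raw readings; gains/setpoint are made up.
alpha = 0.0;                  % "smoothing factor", reduced from ~99.9 to 0.00
setpoint = 92;                % assumed target pump level (%)
Kp = 0.5; Ki = 0.01;          % made-up PI gains
smoothed = level(1); integ = 0;
valve = zeros(size(level));
for k = 1:length(level)
    smoothed = alpha*smoothed + (1 - alpha)*level(k);
    err = setpoint - smoothed;        % a spuriously LOW reading -> big error
    integ = integ + err;
    valve(k) = max(Kp*err + Ki*integ, 25);   % 25% software limit is a floor
    % no upper clamp: a too-low level signal can drive the valve wide open,
    % which is why MANUAL mode at a fixed 35% stops the swings.
end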
A couple hours in now on MANUAL at 35% open. Not alarming high anymore.
Seems to still be stable.
Level has been slowly dropping for the last 3 hours. If this continues, I may have to give VE an early wake-up call.
The TCSY chiller flow seems to have been struggling for the last 40 min or so. This doesn't seem to be the normal issue of an air bubble, because it is not recovering as it normally would. I saw a WP for the replacement of this flow meter, but I didn't see an alog stating that it happened.
Attached is an hour trend of the flow rate.
Seems to have stabilized itself now, back at 3.0 gpm.
Eh, this actually is kinda the usual "thing" going on with this sensor of late... It drops low for a while, ~30-60 mins, and then recovers eventually. The work permit we have open is to fix it next Tuesday.
I was wondering if this was coming from the CO2 controller chassis, D1200745. I had a look to see if there were accompanying glitches on the CO2Y laser temperature channel which might indicate a common cause. Whilst there are some glitches that show up simultaneously on both channels, there are many that are not simultaneous. It is far from obvious that there is a common cause for the flow rate variations.
% Check whether the CO2Y laser flow meter is broken or there is a
% problem with the electronics instead. Compare to the CO2Y laser
% temperature channel, which shares the controller chassis (D1200745).
t0 = 1165166030 - 72*3600;        % GPS start: 72 hours before reference time
t1 = t0 + 2*24*3600;              % look at a 48-hour window
chan = {'H1:TCS-ITMY_CO2_LASERTEMPERATURE.mean,m-trend', ...
        'H1:TCS-ITMY_CO2_FLOWRATE.mean,m-trend'};
[data, t, info] = get_data(t0, t1, chan, []);

% minute trends: one sample per minute, so sample index / 60 = hours
subplot(2,1,1)
plot((1:size(data,1))/60, data(:, 1)); grid on
axis([0 48 23.72 23.82])
xlabel('Time (hr)')
ylabel('CO2Y laser temperature (C)')
title('CO2Y laser channel glitches - correlated or not?')

subplot(2,1,2)
plot((1:size(data,1))/60, data(:, 2)); grid on
axis([0 48 2 3.5])
xlabel('Time (hr)')
ylabel('CO2Y flow rate (gpm)')

orient tall
print('-dpdf', 'CO2Y_flow_rate_errors.pdf')
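A possible follow-up (a sketch, reusing data from the fetch above) would be to flag excursions in each minute trend and count how many land in the same minute bin, to put a number on the "simultaneous or not" question. The thresholds here are eyeballed from the plot ranges, i.e. guesses.

% Sketch follow-up: count coincident excursions in the two trends above.
tempGlitch = abs(data(:,1) - median(data(:,1))) > 0.02;  % temperature jumps
flowGlitch = data(:,2) < 2.8;                            % flow dropouts (gpm)
both = tempGlitch & flowGlitch;                          % same minute bin
fprintf('%d temp, %d flow, %d coincident minutes\n', ...
    nnz(tempGlitch), nnz(flowGlitch), nnz(both));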