TITLE: 10/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Lost lock a few times just after getting to NLN or just after power-up; not really sure why (once it was me trying to adjust the ISS diffracted power too late in the game). Been locked for 4 hours, commissioners hard at work.
LOG:
J. Kissel for S. Karki
I've changed the > 1 kHz PCALX calibration line frequency, a.k.a. the "long duration sweep" line, from 2501.3 to 2001.3 Hz. Recall this started moving again on Oct 6th (see LHO aLOG 30269). Progress towards completing the sweep is reported below.

Frequency   Planned Amp.   Planned Dur.   Actual Amp.   Start Time             Stop Time              Achieved Dur.
(Hz)        (ct)           (hh:mm)        (ct)          (UTC)                  (UTC)                  (hh:mm)
--------------------------------------------------------------------------------------------------------------------
1001.3      35k            02:00
1501.3      35k            02:00
2001.3      35k            02:00          39322.0       Oct 17 2016 21:22:03
2501.3      35k            05:00          39322.0       Oct 12 2016 03:20:41   Oct 17 2016 21:22:03   days
3001.3      35k            05:00          39322.0       Oct 06 2016 18:39:26   Oct 12 2016 03:20:41   days
3501.3      35k            05:00          39322.0       Jul 06 2016 18:56:13   Oct 06 2016 18:39:26   months
4001.3      40k            10:00
4301.3      40k            10:00
4501.3      40k            10:00
4801.3      40k            10:00
5001.3      40k            10:00
I had changed the pcal y line heights yesterday around 10 am local for a contrast defect test and have only now reverted them back to their old values.
Saying more words for Evan: He changed the 7.9 Hz and 1083.7 Hz line heights by adjusting the H1:CAL-PCALY_PCALOSC4_OSC_[SIN,COS]GAIN and H1:CAL-PCALY_PCALOSC3_OSC_[SIN,COS]GAIN (respectively) oscillator gains. They were changed starting at Oct 16 2016 17:27 UTC and restored by Oct 17 2016 22:23 UTC. Any undisturbed time processed from this period should therefore be excised from the collection of data for the 2501.3 Hz analysis, for fear of confusion on the optical gain. Thus the new table of times for valid analysis is:

Frequency   Planned Amp.   Planned Dur.   Actual Amp.   Start Time             Stop Time              Achieved Dur.
(Hz)        (ct)           (hh:mm)        (ct)          (UTC)                  (UTC)                  (hh:mm)
--------------------------------------------------------------------------------------------------------------------
1001.3      35k            02:00
1501.3      35k            02:00
2001.3      35k            02:00          39322.0       Oct 17 2016 22:24:00
2501.3      35k            05:00          39322.0       Oct 12 2016 03:20:41   Oct 16 2016 17:27:00   days
3001.3      35k            05:00          39322.0       Oct 06 2016 18:39:26   Oct 12 2016 03:20:41   days
3501.3      35k            05:00          39322.0       Jul 06 2016 18:56:13   Oct 06 2016 18:39:26   months
4001.3      40k            10:00
4301.3      40k            10:00
4501.3      40k            10:00
4801.3      40k            10:00
5001.3      40k            10:00
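To make the excision concrete, below is a minimal sketch of the segment arithmetic using gwpy's segment utilities; the times are copied from the table above, and the use of gwpy/to_gps here is purely illustrative, not part of the original analysis.

# Sketch: subtract the disturbed Pcal Y interval from the 2501.3 Hz analysis span.
# Reproduces the corrected 2501.3 Hz row of the table above; illustrative only.
from gwpy.segments import Segment, SegmentList
from gwpy.time import to_gps

full_span = SegmentList([Segment(to_gps('2016-10-12 03:20:41'),
                                 to_gps('2016-10-17 21:22:03'))])
disturbed = SegmentList([Segment(to_gps('2016-10-16 17:27:00'),
                                 to_gps('2016-10-17 22:23:00'))])

valid = full_span - disturbed  # time safe to use for the 2501.3 Hz optical-gain analysis
print(valid)                   # -> one segment ending at 2016-10-16 17:27:00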
LN2 @ exhaust in 60 seconds after opening LLCV bypass valve 1/2 turn -> closed LLCV bypass valve. Next over-fill to be Wednesday, Oct. 19th.
WP#6221
While locked in Low Noise, Kyle headed to EX to start and stop a pump in two 5-minute intervals. The pump is connected to the north side of BSC5.
Times in UTC (GPS):
Start1 - 20:52:52 (1160772789)
Stop1 - 20:57:52 (1160773089)
Start2 - 21:02:52 (1160773389)
Stop2 - 21:07:52 (1160773689)
The purpose of this test is to determine if these pumps can be run without interfering with commissioners.
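For anyone cross-checking the UTC/GPS pairs listed above, here is a minimal conversion sketch (assuming gwpy is available; illustrative only, not part of the test procedure).

# Convert the pump test start/stop times between UTC and GPS.
from gwpy.time import to_gps, from_gps

times = {'Start1': '2016-10-17 20:52:52',
         'Stop1':  '2016-10-17 20:57:52',
         'Start2': '2016-10-17 21:02:52',
         'Stop2':  '2016-10-17 21:07:52'}
for label, utc in times.items():
    gps = to_gps(utc)
    print(label, utc, int(gps), from_gps(gps))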
"Going once...going twice....SOLD!" Having heard no complaints, I will forge ahead and plan on running these pumps for 5-7 days (starting tomorrow morning) in support of the required bake-out of the X-end RGA.
The Mid station chillers were turned off today and will likely remain off until March or so when the warmer weather returns.
Some recent locks have had steeply downward-trending ranges, which seem to be due to a steady increase in jitter coupling. I hadn't seen anyone say it explicitly, so I decided to check the cause of one of these drops in range. I picked the lock on Oct 15, where the detector was in intent mode around 19:30 and the range had dropped 20 Mpc by 20:30. I ran Bruco at this later time (Bruco results page). Above 200 Hz, all the coherences are with ISS PDB, IMC WFS, and the like, suggesting that jitter coupling is steadily increasing (and I think there's no feedforward because of issues with the DBB). Attached are the h(t) spectrum and the change in coherence with PDB relative to the start of the lock; the spectra of PDB and the WFS themselves don't change.
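For reference, a single-channel version of this coherence check can be reproduced with gwpy; a minimal sketch follows, in which the ISS PDB witness channel name is an assumption and the times are approximate.

# Sketch: coherence between h(t) and an ISS PDB witness during the Oct 15 lock.
from gwpy.timeseries import TimeSeries

start, end = '2016-10-15 20:25', '2016-10-15 20:35'
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
pdb = TimeSeries.get('H1:PSL-ISS_PDB_OUT_DQ', start, end)  # assumed channel name

# Both channels are assumed to share a sample rate; resample first if they do not.
coh = darm.coherence(pdb, fftlength=8, overlap=4)
plot = coh.plot()
plot.savefig('darm_pdb_coherence.png')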
No restarts reported over these 4 days.
Weekly Xtal - graphs show some strange power fluctuations in the amp diode powers starting on or around the 10th. This can also be seen in the OSC_DB4_PWR graph.
Weekly Laser - Osc Box Humidity reached a high point at about the same time (the 10th), but seems to have started an upward trend sometime between the 8th and the 9th. PMC Trans power looks pretty erratic. Included is a zoomed view of the Osc Box Humidity and the Osc Box Temp for correlation purposes.
Weekly Env - nothing notable.
Weekly Chiller - some marginal downward trends in head flow for heads 1-3. Head 4 is either crazy stable and good OR this data is trash. ??
Head 4, power meter circuit, and front end flows are "fake" due to force writing in TwinCAT.
I went to EX this morning to check on the wind fence after Friday's wind storm. The fence is still there, intact, and hasn't accumulated any tumbleweeds, which was one of Robert's concerns about a fence. However, a couple of the posts have been twisted, probably by the wind load and moisture, and all of the poured concrete footings have started creeping in the sand. I don't think there is any danger of the fence collapsing yet, but I'll keep an eye on this.
Attached photos are: a picture of the total coverage from a month or two back (this hasn't changed), a picture showing the worst twisted post (this is new, I didn't notice this the last time I looked) and a picture of the gap in the sand around one of the footings (not new, but it's been getting bigger).
Laser Status:
SysStat is good
Front End Power is 34.67W (should be around 30 W)
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0.0 days, 0.0 hr 48.0 minutes (should be days/weeks)
Reflected power is 33.57 W and PowerSum = 126.8 W.
FSS:
It has been locked for 0.0 days 0.0 hr and 42.0 min (should be days/weeks)
TPD[V] = 3.225V (min 0.9V)
ISS:
The diffracted power is around 6.695% (should be 5-9%)
Last saturation event was 0.0 days 0.0 hours and 51.0 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
ISS diffracted power is High (This seems to pop up after power increases)
SEI - Testing out some new configurations to get ready for O2.
SUS - No report
CDS - Running (go catch it. Ha.)
PSL/TCS - All good
Vac - Kyle wants to do a test with pumps at the end stations with a locked IFO.
Fac - RO system issues, but it is back and running.
Maintenance
CDS - FRS's to address, cable pulling, power up RGAs at ends.
PCal - Travis to do some work.
Jitter (alog 30237): 10^-6/√Hz (level) to n x 10^-4/√Hz (peaks)
IMC suppression (alog 30124): ~1/200
⇒ at IFO: 5 x 10^-9/√Hz to n/2 x 10^-6/√Hz
Fixed misalignment of RF sidebands: Δα < 0.3
DC power in reflection with unlocked IFO at 50 W: REFLDC_unlocked ~ 300 mW
Error offset in REFL = jitter * REFLDC_unlocked * Δα
⇒ 5 x 10^-9/√Hz * 0.3 W * 0.1 ~ 1.5 x 10^-10 W/√Hz (low)
⇒ n/2 x 10^-6/√Hz * 0.3 W * 0.3 ~ n/2 x 10^-7 W/√Hz (high)
Frequency noise coupling into DARM (alog 29893):
⇒ 10^-10 m/W at 1 kHz (approx. f-slope)
at 1 kHz: 10^-20 m to 10^-17 m
at 300 Hz: n x 10^-18 m (high), with periscope peak n ~ 4.
This seems at least a plausible coupling mechanism to explain our excess jitter noise.
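As a sanity check of the arithmetic above, here is a minimal numeric sketch of the propagation (values copied from this entry; variable names are illustrative only).

# Propagate PSL jitter -> IMC suppression -> REFL error offset -> DARM at 1 kHz.
imc_suppression = 1.0 / 200        # passive IMC jitter suppression (alog 30124)
refl_dc_unlocked = 0.3             # W of carrier in reflection, unlocked IFO at 50 W
freq_to_darm = 1e-10               # m/W frequency-noise coupling at 1 kHz (alog 29893)

cases = {'low':  (1e-6, 0.1),      # (jitter level [1/rtHz], RF sideband misalignment)
         'high': (4e-4, 0.3)}      # jitter peak level with n ~ 4
for label, (jitter, dalpha) in cases.items():
    jitter_ifo = jitter * imc_suppression                  # jitter at the IFO input
    refl_offset = jitter_ifo * refl_dc_unlocked * dalpha   # W/rtHz error-point offset
    darm = refl_offset * freq_to_darm                      # m/rtHz projected in DARM
    print(f'{label}: REFL offset ~ {refl_offset:.1e} W/rtHz -> DARM ~ {darm:.1e} m/rtHz')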
Some additional comments:
This calculation estimates the jitter noise at the input to the IFO by forward propagating the measured jitter into the IMC. It then assumes a jitter coupling in reflection that mixes the carrier jitter with an RF sideband TEM10 mode due to misalignment. The corresponding RF signal would be an error point offset in the frequency suppression servo, so it would be added to the frequency noise. Finally, we use the frequency noise to OMC DCPD coupling function to estimate how much would show up in DARM.
If this is the main jitter coupling path, it will show up in POP9I as long as it is above the shot noise. Indeed, alog 30610 shows the POP9I-inferred frequency noise (out-of-loop) more than an order of magnitude above the one inferred from REFL9I (in-loop) at 100 Hz. It isn't large enough to explain the noise visible in DARM; however, it is not far below the expected level for 50 W shot noise.
TITLE: 10/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 45 mph gusts, 27 mph 5-min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY: As I took a seat in the chair, a large gust of wind could be heard from inside the control room, and then lockloss...
Left in down while the earthquake subsided. Had a hard time getting through an initial alignment: the IMC kept losing lock in INPUT_ALIGN and PRM_ALIGN, and I'm not certain what finally allowed it to stay locked long enough to get through these states. ISS and FSS are still losing lock, but recovered on their own each time. IFO is in NLN but the range has been dropping steadily.
Started the shift with a magnitude 6.9 earthquake reported 78 km WNW of Kandrian, Papua New Guinea at 06:14:58 UTC. Jeff B. had put ISC_LOCK in down.
09:57 UTC Earthquake has subsided but wind is increasing
10:00 UTC Started attempting to lock. Changed ISI config from EARTH_QUAKE_V2 to WINDY.
10:04 UTC Paused at LOCKING_ARMS_GREEN. Both arms are less than 1 on WFS. X is somewhat jittery. No apparent glitches on the ALS PDH control signals. The node is still waiting for the arms to settle and won't proceed to locking ALS. Reluctantly moved TMS pitch: started with TMSX 67.8 and TMSY 127.5. This cleared the "waiting for arms to settle" message and allowed moving on.
10:25 UTC Lock loss while waiting for DRMI ASC to converge
10:44 UTC Lock loss right after CARM_5PM. ISS in oscillation; recovered on its own after about 140 s.
10:48 UTC Lock loss during LOCKING_ALS
10:51 UTC Lock loss during LOCKING_ALS
11:03 UTC Check IR signals not stable. Starting to hear wind on the building.
11:07 UTC Put in down
11:12 UTC Trying to lock again
11:27 UTC Lock loss at CARM_5PM
11:28 UTC Put in down
11:45 UTC Started initial alignment. ALS X arm randomly dropping out of lock. MC not stable on INPUT_ALIGN. Put IMC_LOCK to down. IMC_LOCK is in down, but the IMC still seems to be trying to lock?
12:30 UTC Finally got to INPUT_ALIGN. Waiting for ASC convergence. Initial alignment past LOCKING_ARMS_GREEN seems to make the IMC quite unstable.
12:54 UTC Initial alignment done
13:25 UTC NLN
Posted HEPI pump trends (FAMIS #4525). Pressures for the 4 CS pump stations are flat at around 100. The control voltage shows some fluctuations over the period; however, the day 1 and day 45 values are within a few tenths of each other. Both end stations' pressures and voltages are, by comparison, somewhat noisy. For End X, the max/min pressure difference at the end points is 0.2; for End Y this same difference is 0.1. Between the measurement end points there is more noise in both the min and max values, with no apparent pattern to the fluctuations.
Sorry Jeff but these trends are just 10 minutes long.
J. Kissel, D. MacLeod
Duncan had noticed that Omicron triggers for the H1 PCAL Y RX PD (H1:CAL-PCALY_RX_PD_OUT_DQ) had failed on Oct 13 02:51 UTC (Oct 12 18:51 PDT) because it was receiving too many triggers. Worried that this might have been a result of the recent changes in calibration line amplitudes (LHO aLOG 30476) or the restoration of the 1083.7 Hz line (LHO aLOG 30499), I've trended the output of the optical follower servo, checking that it has not saturated and is not constantly glitching. Attached are a 3 day and a 30 day trend. There is indeed a feature in the trend at Oct 13 02:51 UTC, but it is uncorrelated in time with the two changes mentioned above. Indeed, the longer trend shows that the OFS has been glitching semi-regularly for at least 30 days. I'll have DetChar investigate whether any of these correspond with heightened periods of glitching in DARM, but as of yet, I'm not sure we can say that this glitching is a problem.
The number of glitches does seem large, and seeing them in the OFS indicates they are real (and will be seen in DARM). Since the Pcal interaction with DARM (at LHO) is one-way, i.e. DARM is not expected to influence Pcal, the glitching is probably originating in Pcal. At LLO we have seen glitches in Pcal when there were issues with power supplies (LLO aLOG 21430), so it might be good to check those possibilities.
Evan G., Darkhan T., Travis S.
We investigated these glitches in the Y-end PCAL OFS PD more deeply and can fully explain all of the deviations. The excursions are due either to DAQ restarts, line changes by users (including manual oscillator restarts or requests for transfer function measurements), shuttering of the PCAL laser, or maintenance activities. See the attached 35 day trend of the excitation channel, shutter status, and OFS PD output (trends for both the 16 kHz and 16 Hz channels). What sets the limits on Omicron triggers? Should Omicron be set to allow a higher number of triggers for Pcal?
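For reference, a trend check like the one attached can be reproduced from minute trends; a minimal sketch with gwpy follows, where the OFS PD channel name and the exact trend availability are assumptions.

# Sketch: trend ~35 days of the Pcal Y OFS PD output to look for excursions.
from gwpy.timeseries import TimeSeries

start, end = '2016-09-14', '2016-10-19'
for stat in ('min', 'mean', 'max'):
    chan = f'H1:CAL-PCALY_OFS_PD_OUT_DQ.{stat},m-trend'  # assumed channel name
    trend = TimeSeries.get(chan, start, end)
    print(stat, trend.value.min(), trend.value.max())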