After yesterday's abuse, the BRSY was just not ready to contribute to the cause of science. After having the thermal enclosure breached for an hour or so, a big jolt when the remote desktop session was terminated, and at least one (or was it two?) restarts of the code, I thought things had finally settled down, but it was not so.
See the first attached plot. Note the upward trend and noisy steps in DRIFTMON. This channel is the low-frequency beam position and should be smooth and quiet. When the signal steps, notice that the RX (tilt) and VEL signals also go noisy and haywire. When I left the BRS last evening, the DRIFTMON was running smoothly (from about 0100 to 0200 UTC), but then the step behavior started again and the instrument remained unusable until the restart this morning.
The second attachment is two months of the DRIFTMON and an internal temperature signal. Obviously yesterday's incursion was a good cool down, and the downward trend indicates we had better do another beam centering within a week or two at the longest.
The last plot is the most recent 2 hours, and it appears that the periodic step and noise behavior has stopped. The large step down in the REF signal may indicate that Krishna's suspicion was correct: the reference image captured by the camera was bad. The direction of the DRIFTMON suggests the BRS is still warming; I'll keep a close eye on it today and watch for errant tendencies.
TITLE: 03/29 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ed (Cheryl covering)
SHIFT SUMMARY: Not a very good night for science. Two short locks after the earthquake rang down; both broke due to unknown causes. There were SDF diffs that came and went between these two (see mid shift summary). The third lock had a steep drop in range (see mid shift summary) and also broke due to an unknown cause. Another earthquake has just hit. The ISI config has been on WINDY_NO_BRSY all night (BRSY is rung up), except for a short attempt on WINDY. Kiwamu has begun commissioning.
LOG:
07:30 UTC Restarted DTT session on nuc4.
08:42 UTC Restarted RF45 DTT session on nuc5.
08:47 UTC NLN
08:49 UTC Restarted range integrand DMT session on video0.
08:53 UTC Accepted SDF differences (where did these come from?)
08:55 UTC LLO still down. Running a2l.
08:59 UTC a2l done.
09:01 UTC Observing.
09:04 UTC Lock loss.
09:18 UTC GRB
09:39 UTC NLN. SDF differences are back!
09:44 UTC Damped PI mode 28 by changing sign of gain.
09:47 UTC Accepted SDF differences. Observing.
10:05 UTC Lost lock damping PI mode 27, but it was nowhere near saturating...
10:41 UTC NLN. No SDF differences this time?!
10:56 UTC Damped PI mode 28. Noticed large peak around 10 Hz.
11:10 UTC Range dropping. Peak around PCAL line at ~35 Hz has broadened.
12:13 UTC Changed sign of gain to damp PI mode 28.
13:56 UTC Lock loss. PSL noise eater oscillating.
14:02 UTC Peter to LVEA to toggle PSL noise eater.
14:07 UTC Switched ISI config from WINDY_NO_BRSY to WINDY.
14:09 UTC Bubba and Apollo to mid Y
14:10 UTC Powercycled video5
14:12 UTC Set ISI config back to WINDY_NO_BRSY. Bubba reports that the balers started on the X arm at 15:00 UTC, halfway between the mid and end stations, moving towards the mid station.
15:27 UTC Lockloss. Earthquake.
15:35 UTC Karen and Christina to mid stations.
The water leak on the potable water supply line to the VPW has been repaired and water is now available at the VPW. There is still a large hole in the ground where the repair was made. The hole is taped off with caution tape and will be filled in on the next maintenance day.
08:47 UTC Made it to NLN after the earthquake rang down. Found unexpected SDF differences (attached as sdf_1.png) and accepted them to go to observing. Lost lock 3 minutes later.
09:39 UTC Made it back to NLN. SDF differences returned, but with reversed values and setpoints (attached as sdf_2.png). Accepted them and went to observing. Lost lock 8 minutes later.
10:41 UTC Made it back to NLN. No SDF differences this time. Went to observing. Noticed a large peak around 8 Hz rising to about 10^-14 in DARM.
11:10 UTC Range is dropping. The peak around the PCAL line at ~35 Hz seems to be broadening and contracting over time (see darm.png attached). Something is not right.
Range drop appears to occur before traffic noise (plot of range on top of SEI BLRMS usually associated with traffic attached).
TITLE: 03/29 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 17mph Gusts, 12mph 5min avg
Primary useism: 0.11 μm/s
Secondary useism: 0.38 μm/s
QUICK SUMMARY: Riding out the tail end of an earthquake with the possibility of more on the way.
TITLE: 03/29 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Most of the second half of shift was down due to earthquakes. Attempting to lock but not much luck so far. Maybe seismic is still a bit high.
LOG:
22:33 Balers done for the day.
04:18 Lockloss. There were a couple of 5+ magnitude earthquakes around the same time, but the seismic signal rising sharply to the roof implied something close by. The M6.6 earthquake in Russia was probably it.
00:00 Handing off to Patrick.
Terramon predicts the R-wave to be 29.3 micron. If this is correct, we are going to be down for quite a while.
I have set the ISI config to LARGE_EQ_NOBRSXY since BRSY was still damping (and now rung up).
Correction: USGS reported 6.6M, not 6.9M. I've attached another prediction from Terramon.
In addition, there were a couple of 5+ magnitude earthquakes in Indonesia and Panama prior to the Russia earthquake. Not helping.
Summary: Injections suggest that vibration of the input beam tube in the 8-18 Hz band strongly couples to DARM and is the dominant source of noise in the 70-200 Hz band of DARM for transient truck and fire pump signals, and likely also for the continuous signal from the HVAC. Identification of the coupling site is based on the observation that local shaking of the input beam tube produces noise levels in DARM similar to those produced by global corner station vibrations from the fire pump and other sources, for similar RMS at an accelerometer under the Swiss cheese baffle by HAM2. The local shaker injections were insignificant on nearby accelerometers or HEPI L4Cs, and HEPI excitations cannot account for the noise, supporting a local, off-table coupling site in the IMC beam tube. In addition, local vibration injections occasionally produce a 12 Hz broad-band comb, which is also produced by trucks and the fire pump, possibly indicating a 12 Hz baffle resonance. While the Swiss cheese baffle seems the most likely coupling site, we have not yet eliminated the eye baffle by HAM3.
Several recent observations have suggested that we are limited by noise in the 100 Hz region that is produced by vibrations in the 10-30 Hz region. There was the observation that our range increased by a couple of Mpc when the HVAC was shut down, ( https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32886 ), and, additionally, the observations of noise from the fire pump and from trucks.
I noticed that the strongest signals in DARM produced by the fire pump and trucks had peaks that were harmonics of about 12 Hz. I injected manually all around the LVEA and found that the input beam tube was the one place where I could produce a 12 Hz comb in the 100 Hz region (injections were sub-50 Hz). Figure 1 shows that a truck, the fire pump, and my manual injections at the input beam tube produced similar upconverted noise in DARM.
I also used an electromagnetic shaker on the input beam tube. Figure 2 is a spectrogram showing a slow shaker sweep and the strong coupling in the 8-18 Hz band. I wasn’t able to reproduce the broad 12 Hz comb with the shaker, possibly because, as mounted, it didn’t couple well to the 12 Hz mode. But the broad-band noise produced in DARM by the shaker is more typical of trucks and the fire pump: only occasionally does the 12 Hz comb appear. One possibility is that the bounce mode of the baffle is about 12 Hz.
Figure 3 shows that, for equivalent noise in DARM, the RMS displacement from the shaker and the fire pump were about the same at an accelerometer mounted under the Swiss cheese baffle by HAM2. Figure 4 shows that the shaker vibration is local to the beam tube: while the shaker signal is large on the beam tube accelerometer, it is almost lost in the background at the HAM2 and HAM3 accelerometers and the HAM2, 3 L4Cs. Finally, the failure to reproduce the noise with HEPI injections, both during PEM injections at the beginning of the run and in a recent round by Sheila, further supports an off-table source of the noise.
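As an aside, the kind of band-limited RMS comparison used here can be sketched in a few lines; this is an illustrative example only (the 8-18 Hz band comes from the text above, while the scipy-based implementation and the variable names are my own assumptions, not the actual PEM analysis code):

# Illustrative sketch: band-pass an accelerometer time series to 8-18 Hz and
# compute the RMS, so injections and ambient sources can be compared at
# equal drive on the same sensor.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def blrms(timeseries, fs, f_lo=8.0, f_hi=18.0, order=4):
    """Return the RMS of `timeseries` (sampled at `fs` Hz) in [f_lo, f_hi] Hz."""
    sos = butter(order, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    band = sosfiltfilt(sos, timeseries)
    return np.sqrt(np.mean(band**2))

# Example usage (hypothetical arrays):
# rms_shaker = blrms(accel_during_shaker, fs=256)
# rms_pump   = blrms(accel_during_fire_pump, fs=256)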
While everything is consistent with coupling at the Swiss cheese baffle near HAM2, we haven’t eliminated the eye baffle near HAM3. This might be done by comparing a second accelerometer under the eye baffle to the one under the Swiss cheese baffle, but I didn’t have another spare PEM channel.
If it is the Swiss cheese baffle, it might be worth laying it down during a vent. Two concerns are the blocking of any beams that are dumped on the baffle, and the shiny reducing flange at the end of the input beam tube that would be exposed.
An immediate mitigation option is to try moving beams relative to the Swiss cheese baffle while monitoring the noise from an injection. Sheila and I started this but ran out of commissioning time, and LLO was up for most of the weekend so I didn't get back to it. If someone else wants to try this, either turn on the fire pump or, for even more noise in DARM, the shaker by HAM3 (the cable goes across the floor to the driver by the wall; enter 17 Hz on the signal generator and turn on the amp, it should still be set).
Shaker injections have shown the input beam tube to be sensitive for some time ( https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=31016 ). During pre-run PEM injections, an 8 to 100 Hz broad-band shaker injection on the input beam tube showed strong coupling. However, the broad-band injection was smaller in the sensitive 10-18 Hz band than in other sub-bands, so the magnitude of the up-converted coupling from this narrow sub-band was not evident. When we have detected upconversion during PEM injections in the past, we have narrowed down the sensitive frequency band with a shaker sweep, but, for the input beam tube, we didn't get to this until last week.
Figure 5 shows fire pump, trucks and the HVAC on input beam tube accelerometers and DARM.
Sheila, Anamaria, Robert
The current plan of the stray light upgrade team is to completely remove the aluminum panels of the Swiss cheese baffle, but leave the oxidized stainless outer ring to shield the flat shiny reducing flange. This is planned for post-O2.
Seems like the 12.1 Hz harmonics observed yesterday were also due to this?
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35136
Been locked and Observing for 3 hrs 44 mins. Wind is coming down. BRSY is still not in use. Not much else going on.
It appears that some maintenance work on Tuesday March 14 led to degradation with respect to narrow lines in H1 DARM. The attached inverse-noise-weighted spectra compare 1210 hours of data from the start of O2 until early morning of March 14 with 242 hours since that time.
Summary of substantial comb changes:
The comb with 0.9999-Hz spacing nearly aligned with N + 0.25 Hz (N = 15-55) is stronger.
There is a new comb with 0.999984-Hz spacing visible nearly aligned with N + 0.75 Hz (N = 41-71).
There is a new comb with 1.0000-Hz spacing visible at N - 0.0006 Hz (N = 104-125).
Much activity was reported in the alog for March 14, but what jumps out at me are two references to HWS work: here and here. Are there HWS cameras running during observing mode? I had thought those things were verboten in observing mode, given their propensity to make combs.
Fig 1: 20-50 Hz comparison (before and after March 14 maintenance)
Fig 2: 50-100 Hz (before and after March 14 maintenance)
Fig 3: 100-150 Hz (before and after March 14 maintenance)
Attachment has the full set of comparison sub-bands up to 2000 Hz, with both A-B and B-A orderings to make clear which lines are truly new or louder.
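For reference, here is one plausible way to generate the expected line frequencies of these three comb families for overlay on a spectrum. This is my own illustrative sketch, not FScan code, and the exact offsets of the first two combs are assumptions (they would need to be read off the spectra; only the approximate alignment is stated above):

# Illustrative sketch: expected frequencies of the three comb families.
import numpy as np

def comb(spacing, offset, n_range):
    """Lines at n*spacing + offset for n in n_range (inclusive)."""
    n = np.arange(n_range[0], n_range[1] + 1)
    return n * spacing + offset

comb_quarter = comb(0.9999,   0.25,    (15, 55))    # stronger pre-existing comb near N + 0.25 Hz
comb_three_q = comb(0.999984, 0.75,    (41, 71))    # new comb near N + 0.75 Hz
comb_integer = comb(1.0000,  -0.0006,  (104, 125))  # new comb at N - 0.0006 Hz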
We didn't change any configuration of the HWS cameras that day. All HWS cameras have been turned on since the beginning of O2 and have been hooked up to the external power supplies (alog30799) since O1. Since then I haven't heard any complaints about the HWS cameras making noise. HWS cameras ON has been the nominal configuration since the beginning of O2.
I was asked via e-mail if this problem might have started later in March with the replacement of the Harmonic Frequency Generator, but explicit comparisons of daily spectra below show conclusively that the N + 0.25 Hz and N + 0.75 Hz combs are not present before Tuesday maintenance ("March 14" ends at 6:00 a.m. CDT on March 14 in the FScan dating convention), but are present individually on March 15, 16 and 17. The new N Hz comb reported above is too weak to show up well in a single day's measurement with 1800-second SFTs. I was also told via e-mail (and just saw Nutsinee's note above) that HWS systems are run routinely during observing mode and that no configuration changes were made on March 14 (although there are some new cables in place). So perhaps there is a different culprit among the March 14 activities.
Fig 1: Zoom in of line at 28.25 Hz
Fig 2: Zoom in of line at 38.25 Hz
Fig 3: Zoom in of line at 88.75 Hz
Fig 4: Zoom in of line at 98.75 Hz
Looking back at the aLOGs from the March 14th period, some activities that may be related stick out:
1) PSL bullseye detector for jitter studies installed
2) ITMY OpLev power supply moved
3) New cables for GigE and camera installed
4) ETM OpLev laser power increased
5) CPS electronics power cycled and board reseated on WBSC1 ITMY
6) DBB powered up and supposedly powered off; is it still on? Kiwamu says he is 99% sure the DBB is off
It looks like all three combs jump in magnetometers at EX and EY between 3/14 and 3/15, but don't have any notable presence in the CS magnetometers.
More clues: at EX, there was a recent change point in the combs (strength drop) between 2/28 and 3/1. At EY, there was one (also a strength drop) between 3/7 and 3/8. These are FScan dates, covering 24 hours back from the morning when they were run; in any case, it looks like two prior Tuesdays may be involved.
As a side note, the 1 Hz comb with 0.5 Hz offset mirrors the stated behavior in several channels, at least one magnetometer at EX and one at EY.
WP6540 Reduce cronjobs on HEPI Pump Controllers
Dave:
Completed EY config, applied config to EX and L0.
WP6539 Relocate HWS frame grabber card from EX to MSR1
Carlos, Nutsinee, Dave:
Card was relocated, but we appear to have driver issues on the spare HWS machine. Carlos is working with TCS to install this machine from scratch and put the configuration into puppet.
WP6543 Upgrade cdslogin to Debian8
Carlos, Jonathan, Jim, Dave
cdslogin was upgraded from U12 to Deb8.
WP6544 New code h1calcs and h1omc
Daniel, Kiwamu, Jim, Dave:
New code was installed on h1omc and h1calcs. A new dolphin sender was installed on omc, with corresponding receiver on calcs. Two 4k DQ channels were added to the DAQ, along with a few dozen slow channels.
WP6546 New PCAL guardian node
TJ, Rick, Sudarshan
A new temporary guardian node named HIGH_FREQ_LINES was added
WP6519 Put seismon channels into DAQ.
Not done; deferred to a later time.
Carlos, Jonathan, Jim, Dave:
the sshd and alarms machine cdslogin was upgraded from Ubuntu 12.04 LTS to Debian 8 (Jessie) today. This is a 2FA sshd server machine, which also runs EPICS CA client code to send vacuum and FMCS alerts to LHO staff cell phones as text messages. Thanks to Carlos and Jonathan's hard work in putting this configuration into puppet, the entire upgrade only took a couple of hours.
Was logged on checking health after the earlier invasive work. Everything was working fine, so I closed (pushed X) the remote desktop shell. The BRS output went to never-never land and the ISI tripped. This of course did nothing useful for the IFO or Observation mode.
When I logged back onto the BRSY, it was still running but giving some errors, and the output was still very rung up. I am not sure which was causing which, though. Following the BRS2 Manual (T1600103), I restarted the TwinCat code, killed the old instance of the C# code and restarted it, and finally restarted the EPICS. Exited the session the same way and this time it survived. Yikes!
The amplitude is still a bit large, with the camera image swinging into the reference image at the edge. Once it stays off the edge throughout its cycle, the BRS will be usable again.
The BRSY is now damping itself down and is no longer swinging out of range, but it is still getting itself under control. It is coming down quickly but may take some time. Operators should feel free to contact me if they aren't sure whether it can be returned to service.
It is strongly advised not to log in to the BRS machines while they are in use, because the spike in CPU use disrupts the autocollimator fitting routine. This causes ~seconds-long delays in the tilt output, which affects the tilt subtraction and so on.
Sudarshan, Shaffer
Today I created, started, and tested a node that Sudarshan made to change some EX PCal lines every 24 hrs while we are not locked. The code waits for the GPS time to advance a day past its last change, then waits until the IFO is not locked, and finally adjusts the frequency by 500 Hz and jumps back to waiting for a day to go by. Once the frequency has reached 5100 Hz it will hang out in the FINISHED state. From there, Sudarshan can let us know what he wants to do with it. A minimal sketch of the stepping logic is included below.
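For illustration only, here is a minimal plain-Python sketch of that stepping logic (not the actual Guardian node code); the oscillator channel name is taken from the follow-up entry below, while the use of pyepics, the wall-clock timing, and the ifo_is_locked() helper are assumptions for the sketch:

# Once a day, and only while the IFO is unlocked, bump the PCALX oscillator
# frequency by 500 Hz until it reaches 5100 Hz, then stop (FINISHED).
import time
from epics import caget, caput

FREQ_CHANNEL = 'H1:CAL-PCALX_PCALOSC1_OSC_FREQ'  # from the follow-up entry
STEP_HZ = 500.0
FINAL_HZ = 5100.0
DAY_SEC = 24 * 3600


def ifo_is_locked():
    """Placeholder for a real lock-state check (e.g. the ISC_LOCK guardian state)."""
    raise NotImplementedError


def run_stepper():
    last_change = time.time()
    while True:
        freq = caget(FREQ_CHANNEL)
        if freq >= FINAL_HZ:
            print('Reached %.1f Hz; hanging out in FINISHED.' % freq)
            return
        # Wait a full day since the last change...
        if time.time() - last_change < DAY_SEC:
            time.sleep(60)
            continue
        # ...then wait for the IFO to drop out of lock before stepping.
        if ifo_is_locked():
            time.sleep(60)
            continue
        caput(FREQ_CHANNEL, freq + STEP_HZ)
        last_change = time.time()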
Attached are the graph of the node and the Guardian overview.
Just a few points of verification, clarification, and tagging DetChar, letting them (especially the CW group) know this sweeping PCALX calibration line has been turned on. I attach full and zoomed CAL-DELTAL spectra against a PCALX spectrum (among the other usual things from the front wall's sensitivity FOM). Note that Pulsar injections are ongoing, and it doesn't look like these features will interfere: this calibration line regime sticks to the 2000 to 5000 Hz region, and pulsar injections only go up to 1991.2 Hz (for a reminder of the full list of pulsar injection frequencies, see LHO aLOG 27642). The starting frequency is 2001.3 Hz, and the starting excitation amplitude is 30000 [ct], which has been hard-coded to remain the same for all frequencies.
The excitation frequency can be tracked using EPICS via this channel: H1:CAL-PCALX_PCALOSC1_OSC_FREQ, and you can see a fast-channel version of the requested output here: H1:CAL-PCALX_EXC_SUM_DQ. The guardian code for this node lives at /opt/rtcds/userapps/release/cal/h1/guardian/HIGH_FREQ_LINES.py and is attached for ease of reference.
After some initial debugging, the guardian's log suggests that TJ left the frequency at 2001.3 Hz starting at 2017-03-28 17:16:31 UTC, though it looks like the frequency has only been stable at 2001.3 Hz since 19:49:00 UTC.
TJ has the CHECK_IFO_STATUS state check whether the ISC_LOCK guardian's state is lower than 11, which means the excitation is killed if we're in
- INITIAL ALIGNMENT
- DOWN
- IDLE
- LOCKLOSS
- LOCKLOSS_DRMI
- INIT
but maybe we want to rethink this, because it seems to turn off the excitation at unnecessary times (see attached dataviewer trend, where the guardian has changed the frequency twice since TJ left it).
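As a quick illustration of how DetChar could track this (not part of the node itself), the line frequency and the lock-state condition described above can be read with pyepics; the ISC_LOCK state channel name follows the usual Guardian EPICS naming and should be treated as an assumption here:

# Read the swept PCALX line frequency and check the "ISC_LOCK state < 11" rule.
# Assumes pyepics and access to the H1 EPICS network; the STATE_N channel
# name is the conventional Guardian record and is an assumption.
from epics import caget

freq = caget('H1:CAL-PCALX_PCALOSC1_OSC_FREQ')
state_n = caget('H1:GRD-ISC_LOCK_STATE_N')

print('PCALX swept line frequency: %.1f Hz' % freq)
if state_n < 11:
    print('ISC_LOCK state %d < 11: excitation would be killed.' % state_n)
else:
    print('ISC_LOCK state %d >= 11: excitation allowed to run.' % state_n)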
Using MCMC fitting of the sensing function measurements made in ER10 and O2, we can establish an estimate of the variation of the optical response parameters. The table below gives the typical (simple mean), maximum, and minimum of the measured maximum a posteriori values from the MCMC fits.

Parameter             typical    maximum    minimum
---------------------------------------------------
Gain (ct/m)           1.143e6    1.166e6    1.085
Cavity pole (Hz)      347.1      358        341
Time delay (usec)     0.5        2.5        -1.3
Detuned spring (Hz)   7.3        8.8        4.9
Detuned spring 1/Q    0.04       0.08       1e-3

This covers measurement dates from Nov 07 2016 through Mar 06 2017. Attached are plots showing these trends for ER10 and O2. Note that I have added, where possible, the GDS-calculated values for kappa_C and f_c (black crosses). Note that these values do not come with error bars, because the uncertainty would need to be computed from the measurement uncertainty of what goes into the calculation of kappa_C and f_c. It would be useful to have calibration lines running during the measurements to see if there is any trend or drift during the measurements themselves.
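For context, a sketch of the sensing-function form these parameters presumably parameterize, following standard aLIGO calibration conventions (gain H_C, cavity pole f_cc, time delay tau, detuned SRC spring frequency f_s and inverse quality factor 1/Q_s); the exact normalization used in these fits is an assumption on my part:

\[
  C(f) \;=\; H_C \,
  \frac{f^{2}}{f^{2} + f_{s}^{2} - i\, f f_{s} / Q_{s}} \,
  \frac{1}{1 + i f / f_{cc}} \,
  e^{-2\pi i f \tau}
\]

Here the tabulated "Gain" corresponds to H_C, "Cavity pole" to f_cc, "Time delay" to tau, "Detuned spring" to f_s, and "Detuned spring 1/Q" to 1/Q_s.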
Attached are trend plots for all ER10 and O2 measurements. The plots are stored at: ${CALSVN}/aligocalibration/trunk/Runs/O2/H1/Results/SensingFunctionTFs The script to produce plots is: ${CALSVN}/aligocalibration/trunk/Runs/O2/H1/Scripts/SensingFunctionTFs/runSensingAnalysis_H1_O2.m