TITLE: 12/16 Day Shift: 16:00-00:00UTC
STATE of H1: Observing ~80 Mpc
Support: Normal Control Room crowd
Quick Summary: Quiet day, welcome break after yesterday
Activities:
16:00 JeffB to Mech Room
18:30 RichM to Mech Room
19:00 John, Kyle, MikeZ to Y mid
23:30 Chris Biwer setting up and running hardware injections
I'm preparing to do a set of hardware injection tests. I will update this aLog entry as injections are scheduled. I first need to svn up the repo to get the injection files and check that the latest version of tinj is installed.
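For reference, a rough sketch of those two prep steps (the working-copy path is a placeholder, and checking tinj via the PATH is my assumption, not the actual injection workflow):

```python
# Minimal sketch of the pre-injection checks, assuming the injection files
# live in a local SVN working copy and that tinj is on the PATH.
# The repo path below is hypothetical.
import subprocess
import shutil

INJ_REPO = "/path/to/hardware_injection/working_copy"  # hypothetical path

# Update the working copy so the latest injection waveform files are present.
subprocess.run(["svn", "up", INJ_REPO], check=True)

# Confirm that tinj is installed and visible to the shell before scheduling injections.
tinj_path = shutil.which("tinj")
if tinj_path is None:
    raise RuntimeError("tinj not found on PATH; install/update it before injecting")
print("Using tinj at", tinj_path)
```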
Short version: Increased RY input motion (maybe HEPI, maybe wind/ground) causes the ISI X loops to ring up when running the 45 mHz blends. The suspension/tidal drive is not the cause. The 90 mHz blends seem to be immune to this. Other than using the 90 mHz blends, I'm not sure how to fix the ISI's configuration in the short term to prevent it from ringing up. But we should put a StripTool of the end station ISI St1 CPS location monitors somewhere in the control room so operators can see when ground tilt has rung up an ISI. Alternatively, we could add a notification to VerbalAlarms or the DIAG node when an ISI has been moving something like 10 microns peak to peak for several minutes.
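A minimal sketch of what that notification logic could look like, assuming we can poll the St1 CPS location monitor in fixed-length windows (the threshold, window length, and data-access layer here are all placeholders, not an existing VerbalAlarms/DIAG feature):

```python
# Sketch of the proposed "ISI rung up" check: flag when a CPS location signal
# has exceeded ~10 microns peak-to-peak for several minutes in a row.
# Channel access, sample rate, and thresholds are hypothetical.
import numpy as np

PP_THRESHOLD_UM = 10.0      # peak-to-peak threshold, microns
WINDOW_SEC = 60             # length of each evaluation window
CONSECUTIVE_WINDOWS = 3     # "several minutes" = this many windows in a row

def is_rung_up(windows):
    """windows: iterable of 1-D arrays of CPS displacement in microns,
    one array per WINDOW_SEC stretch of data, oldest first."""
    over = [np.ptp(w) > PP_THRESHOLD_UM for w in windows]
    # Alarm only if the last several windows all exceed the threshold.
    return len(over) >= CONSECUTIVE_WINDOWS and all(over[-CONSECUTIVE_WINDOWS:])
```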
This morning while the IFO was down for maintenance, Evan and I looked at ETMX to see if we could figure out what is causing the ISI to ring up. First we tried driving the L1 stage of the quad to see if some tidal or suspension drive was the cause. This had no effect on the ISI, so I tried driving on HEPI. When I drove HEPI X, the ISI rang up a bit, but no more than expected given the gain peaking of the 45 mHz blends. When I drove HEPI in RY, however, the ISI immediately rang up in X, and continued to ring for several minutes after I turned the excitation off. The attached image shows the ISI CPS X (red), RY (blue), HEPI IPS RY (green) and X (magenta). The excitation is visible in the left middle of the green trace, and also in the sudden increase in the red trace. I only ran the excitation for 300 seconds (from about 1134243600 to 1134243900), but the ISI rang for twice that. After the ISI settled down I switched to the 90 mHz blends and drove HEPI RY again. The ISI moved more in X, but it never rang up, even after I increased the drive by a factor of 5. The second plot shows the whole time series, same color key. The large CPS X motion (with a barely noticeable increase in the IPS RY) is the oscillation with the 45 mHz blend; the larger signal on the IPS RY (with a small increase in CPS X) is with the 90 mHz blends. The filter I used for each excitation was zpk([0 0],[ .01 .01 .05 .05], 15111).
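For anyone who wants a quick look at the shape of that excitation filter, here's a sketch (the gain/units convention is an assumption and won't exactly match foton's internal normalization):

```python
# Rough look at the excitation filter quoted above, zpk([0 0], [.01 .01 .05 .05], 15111):
# two zeros at DC and double poles at 0.01 Hz and 0.05 Hz, i.e. a band-limited
# low-frequency drive that rolls off above ~0.05 Hz.
import numpy as np
from scipy import signal

zeros_hz = [0.0, 0.0]
poles_hz = [0.01, 0.01, 0.05, 0.05]
gain = 15111.0

# Convert the Hz-specified zeros/poles to rad/s for scipy (left-half-plane poles).
z = [-2 * np.pi * f for f in zeros_hz]
p = [-2 * np.pi * f for f in poles_hz]
sys = signal.ZerosPolesGain(z, p, gain)

f = np.logspace(-3, 0, 500)                    # 1 mHz to 1 Hz
w, mag, phase = signal.bode(sys, w=2 * np.pi * f)
print("Peak response near %.3f Hz" % f[np.argmax(mag)])
```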
Did a bit more analysis of this data. Not sure why things are so screwy; there might be non-linearity in the T240s. Jim's entry indicates that it is NOT a servo interaction with the tidal loop, so it is probably something local - still not really sure what. Based on the plots below I strongly recommend a low-frequency TF of Stage 1 (HEPI servos running, ISI damping loops on, iso loops off), driven hard enough to push the Stage 1 T240s to +/- 5000 nm/sec.
What I see:
Fig 1 (fig_EX_ringingXnY): time series of X, Y, and the drive signal. This is the same as Jim's data, but we also see significant motion in Y. In the TFs we need to look for X/Y cross coupling.
Fig 2 (fig_X_ringup_time): this is the time I used for the other analysis. We can see the CPS-X and T240-X signals here. Note that I have used bandpass_viafft to keep only data between 0.02 and 0.5 Hz. The T240 and CPS signals are clearly related - BUT does the T240 equal the derivative of the CPS? These signals are at the input to the blend filters.
Fig 3 (fig_weirdTFs): some TFs from ST1 X drive to ST1 CPS X and from ST1 X drive to ST1 T240 X. If all of the X drive is coming from the actuators, then the CPS TF should be flat and the T240 TF should go as freq^1. The CPS TF looks fine; I cannot explain the T240 TF. The coherence between the T240 and CPS signals is in the bottom subplot.
Fig 4 (fig_coh): coherence for drive -> CPS, drive -> T240, and CPS -> T240. All are about 1 from 0.03 to 0.15 Hz. So the signals are all related, but not in the way I expect. NOTE: if the ground were driving a similar amount to the actuators, then these TFs would be related by the loops and blend filters - I don't think this is the case. Decent driven TFs would be useful here.
Fig 5 (sensor_X_difference): take the numerical derivative of the CPS and compare it to the T240 as a function of time. Also take the drive signal * 6.7 (the plant response at low frequency from the TF in fig 3) and take the derivative of that. These three signals should match - BUT they do not. The driven plant and the CPS signals are clearly similar, but the T240 is rather different looking, especially in the lower subplot, as if the higher frequency motion seen by the CPS is not seen by the T240. What the heck?
Fig 6 (fig_not_gnd): could it be from ground motion? I add the ground motion to the CPS signal, but this doesn't look any more like the T240 signal than the straight CPS signal. So the signal difference is not from X ground motion.
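For reference, a minimal sketch of the fig 5 style check (array names, sample rate, and calibration are placeholders; the real analysis used bandpass_viafft on calibrated channels):

```python
# Sketch of the fig 5 comparison: if the T240 is behaving, its velocity output
# should match the time derivative of the CPS displacement, and also the
# derivative of (drive * low-frequency plant gain).  Inputs are assumed to be
# already bandpassed to ~0.02-0.5 Hz; all names here are placeholders.
import numpy as np

def compare_sensors(cps_disp, t240_vel, drive, fs, plant_gain=6.7):
    """cps_disp: CPS displacement, t240_vel: T240 velocity,
    drive: ST1 X drive signal, fs: sample rate in Hz."""
    dt = 1.0 / fs
    cps_vel = np.gradient(cps_disp, dt)              # derivative of the CPS
    drive_vel = np.gradient(plant_gain * drive, dt)  # derivative of the scaled drive
    # Residuals: both should be small if the three signals actually agree.
    return t240_vel - cps_vel, t240_vel - drive_vel
```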
Has the tilt decoupling on stage 1 been checked recently? With the 45 mHz blends running we are not far from instability in this parameter (a factor of 2, maybe?).
The Verbal Alarms code was logging to the ops home directory. Prior to the move of this home directory (WP5658) I have modified the code to log to a new directory: /ligo/logs/VerbalAlarms. We restarted the program at 14:04 and verified the log files are logging correctly.
These verbal log files actually live one level deeper, in /ligo/logs/VerbalAlarms/Verbal_logs/. The log files for the current month live in that folder; however, at the end of every month they're moved into dated subfolders, e.g. /ligo/logs/VerbalAlarms/Verbal_logs/2016/7/. The text files themselves are named "verbal_m_dd_yyyy.txt". Unfortunately, these are not committed to a repo where the logs might be viewed off site. Maybe we'll work on that. Happy hunting!
The Verbal logs are now copied over to the web-exported directory via a cronjob. Here, they live in /VerbalAlarms_logs/$(year)/$(month)/
The logs in /ligo/logs/VerbalAlarms/Verbal_logs/ will now always be in their month's subfolder, even the current ones.
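For reference, a hedged sketch of the monthly sorting this layout implies (this is not the actual cron script; only the directory and filename pattern quoted above are taken from the log):

```python
# Sketch: sort verbal alarm logs named "verbal_m_dd_yyyy.txt" into
# year/month subfolders under /ligo/logs/VerbalAlarms/Verbal_logs/.
import os
import shutil

LOG_DIR = "/ligo/logs/VerbalAlarms/Verbal_logs"

for name in os.listdir(LOG_DIR):
    if not (name.startswith("verbal_") and name.endswith(".txt")):
        continue
    # Filename pattern: verbal_<month>_<day>_<year>.txt
    _, month, day, year_ext = name.split("_", 3)
    year = year_ext.split(".")[0]
    dest = os.path.join(LOG_DIR, year, month)   # e.g. .../Verbal_logs/2016/7/
    os.makedirs(dest, exist_ok=True)
    shutil.move(os.path.join(LOG_DIR, name), os.path.join(dest, name))
```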
Attached are two side-by-side 10 hour trends during observation mode, with the Quiet_90 blends on Z in the left group and the 45 mHz Z blends in the right group. These are about four days apart, 8 vs 12 Dec. I've scaled each plot to be the same on the two trends.
The first four signals are X & Y ground motion; the seismic environment is a mix, with some channels better and others worse in the second group. The ASC signals are very close, with only subtle differences. We need to look at spectra now, but this says either the ASC doesn't care or this yardstick is too coarse.
Yesterday morning, I took some time (with approval, WP5563) after LLO went down for maintenance, before our maintenance period started. Since Nutsinee had just gotten the IFO locked when I arrived, I asked her to pause and went through noise tunings without turning off the EX ESD.
I went through different configurations with the EX ESD and took long spectra in each configuration. The bottom line is that there was no change in the DARM spectrum between these configurations. The attached screenshot shows the (nearly identical) spectra with the EX ESD in its various configurations.
Since we had believed that turning off the ESD helped reduce our noise, one could be skeptical that I actually had the ESD on. The second screenshot shows the current monitors, which do show current when I attempted to actuate on EX; I take this to mean that the driver was on.
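As a rough sketch of the comparison itself (the actual spectra were taken with the control-room templates; the data dictionary, sample rate, and averaging settings below are placeholders):

```python
# Sketch of comparing DARM spectra between ESD configurations: compute an ASD
# for each long stretch of data and overlay them to look for differences.
import numpy as np
from scipy.signal import welch

def asd(x, fs, seglen_sec=64):
    """Welch ASD of a time series x sampled at fs Hz."""
    f, pxx = welch(x, fs=fs, nperseg=int(seglen_sec * fs))
    return f, np.sqrt(pxx)

# darm_segments = {"ESD on": (data1, fs), "ESD off": (data2, fs), ...}  # placeholder
def compare(darm_segments):
    out = {}
    for label, (data, fs) in darm_segments.items():
        out[label] = asd(data, fs)
    return out   # overlay these to check whether any configuration differs
```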
TITLE: 12/16 OWL Shift: 08:00-16:00UTC (00:00-08:00PST), all times posted in UTC
STATE of H1: Observing ~80 Mpc
Incoming Operator: Jim
Support: Mike
Quick Summary:
Tidal rang up three hours after the IFO was first locked during Corey's shift. I switched the ETM ISIs to the 90 mHz blends and saved the lock, but the data became glitchy at ~30 Hz and the BNS range became unstable. After three hours I called Mike to discuss what action to take. We decided to put everything back to 45 mHz. ETMX X ISI was left at 90 mHz because as soon as it was switched to 45 mHz, tidal started to ring up. Andy later suggested that these glitches weren't too bad and that no invasive action needs to be taken to fix them. Environment-wise: useism reaching 90th percentile. Wind speed between 10-20 mph.
Not sure if anyone has already caught this. Switching the ETMX blend to Quiet_90 on Dec 14th caused glitches to appear around the 20 Hz region (Fig 1, starting at 20:00:00 UTC), while switching both ETMX and ETMY to Quiet_90 everywhere caused glitches to appear around the 10 Hz and 30 Hz regions (Fig 2, starting at 9:00:00 UTC). Wind speed has been low (<5 mph) and the useism (0.03-0.1 Hz) has been around 0.4 um/s. The BNS range has been glitchy since the blend was switched, but the lock has been relatively more stable. The question is, do we want clean data but a constant risk of losing lock when the tidal rings up, or slightly glitchy data but a relatively more stable interferometer?
Tried switching ETMX X to 45 mHz again. Looking good so far.
After talking to Mike on the phone we decided to try switching both ETMs back to the 45 mHz blend. I'm doing this slowly, one dof at a time. Things got better momentarily when I switched ETMX X to the 45 mHz blend, but soon tidal and CSOFT started running away. I had to leave ETMX X at 90 mHz. Out of Observing from 11:58:07 - 12:11:02 UTC.
And the tidal is back... I switched ETMX X to 90mHz. 45 mHz is used everywhere else.
Switching to the 90 mHz blends resulted in the DARM residual becoming dominated by the microseism. The attachment shows the residual before and after the blend switch on the 14th; the rms increases from 5×10^-14 m to 8×10^-14 m.
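A minimal sketch of that rms comparison, assuming the calibrated DARM residual is available as a time series (the numbers above come from the attached spectra, not from this code):

```python
# Sketch of a band-limited rms for the DARM residual: integrate the residual
# PSD from high frequency down, so the low-frequency end of the curve shows
# the total rms and the microseism's contribution stands out as a step.
import numpy as np
from scipy.signal import welch

def cumulative_rms(residual_m, fs, seglen_sec=128):
    """Return frequency array and rms (in meters) accumulated from the highest
    frequency downward; rms[0] is then the total rms of the residual."""
    f, pxx = welch(residual_m, fs=fs, nperseg=int(seglen_sec * fs))
    df = f[1] - f[0]
    rms = np.sqrt(np.cumsum(pxx[::-1])[::-1] * df)
    return f, rms
```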
As a first test in eliminating this nonstationarity, we should try engaging a boost to reduce the microseism contribution to DARM.
The other length loops (PRCL, MICH, SRCL) are not microseism dominated.
Similar to DARM, the dHard residuals are microseism-dominated and could also stand to be boosted, although this would require some care to make sure that the loops remain stable.
[Also, the whitening filter for the calibrated DARM residual is misnamed; the actual filter is two zeros at 0.1 Hz and two poles at 100 Hz, but the filter name said 1^2:100^2. I've changed the foton file to fix this, so it should be reloaded on next lock loss.]
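For concreteness, here's a sketch of the corrected whitening filter shape (two zeros at 0.1 Hz, two poles at 100 Hz); the unity high-frequency gain chosen here is an assumption, not the foton design string:

```python
# The corrected whitening filter: double zero at 0.1 Hz, double pole at 100 Hz,
# i.e. gain rising at up to +40 dB/decade between 0.1 Hz and 100 Hz.
# Normalized to unity gain well above 100 Hz (assumption).
import numpy as np
from scipy import signal

z = [-2 * np.pi * 0.1] * 2      # two zeros at 0.1 Hz
p = [-2 * np.pi * 100.0] * 2    # two poles at 100 Hz
k = 1.0                         # unity high-frequency gain (assumption)
whitening = signal.ZerosPolesGain(z, p, k)

f = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])
w, mag, _ = signal.bode(whitening, w=2 * np.pi * f)
for fi, m in zip(f, mag):
    print(f"{fi:7.2f} Hz : {m:6.1f} dB")
```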
Tidal was running away. Switched the 45 mHz blend to 90 mHz and saved the lock. Out of Observing from 09:00:12 - 09:01:43 UTC. ETMY blend remains unchanged.
TITLE: 12/16 OWL Shift 08:00-16:00UTC (00:00-08:00 PST), all times posted in UTC
Outgoing Ops: Corey
Quick Summary: Useism still near 50th percentile but slowly increasing. Wind speed ~10-20 mph. Tidal fluctuation comes and goes. Using the Quiet_90 blend on ETMX Y and ETMY X; the rest is on 45 mHz.
TITLE: 12/15 EVE Shift: 00:00-08:00UTC (16:00-00:00PST), all times posted in UTC
STATE of H1: NLN
Incoming Operator: Nutsinee
Support: Jenne as Maintenance Re-Locking Commissioner
Quick Summary:
Not sure what caused the lockloss. DRMI locked up in about 11 min (just before I was going to try PRMI). Everything went smoothly through the Guardian steps, except that I had to engage the 2nd ISS loop by hand; I think I was not quite at zero, and this generated an OMC DCPD saturation (which produced huge dips on many signals on the StripTool... but we managed to stay locked!). Waited for the range to get up to around 80 Mpc, and then took the IFO to Observing.
Following aLog 24208, all LHO ISI platforms were restarted. Most platforms de-isolated nicely, but HAM2 decidedly did not. New safe.snaps were captured for all platforms; these captured the FF paths being off at startup and the GS13s being in low gain. Guardian was adjusted for HAM2 to disable the GS13 gain switching.
All snaps and isi/h1 guardians were committed to the svn.
The HAM ISIs were restarted to capture a code correction that clears saturation immediately upon request. The BSCs got this fix as well.
Also, since HAM2 & 3 do not tolerate GS13 gain switching via guardian, that feature, while available, is disabled. So, upon FE restart, the GS13s will be in low gain and the safe.snap SDF will be happy. But, under OBSERVE.snap, the GS13s in low gain will be red. These will need to be switched via the SEI Commands scripts.