Received GRB Alert at 18:11UTC. Going through the checklist (L1500117).
Yesterday when taking H1 to Observation Mode, Evan & I noticed a RED SDF on video0 (I think it was for PEM EX or EY), but we did not see it on the SDF screens on our workstations. I reopened the SDF screen and the RED went away. The medm was not frozen, because we were Accepting channels and they would go green before we noticed this errant PEM RED. Just thought it was something interesting.
TITLE: 10/4 DAY Shift: 15:00-23:00UTC (00:00-8:00PDT), all times posted in UTC
STATE of H1: Observation Mode with Avg of 74Mpc
Outgoing Operator: JimW
Support: Vinny
Quick Summary: useism continues a slow trend down (at about 0.15um/sec). Winds hovering around 12mph.
Terramon has just come up with a RED warning for a 5.6 Peruvian earthquake whose Rayleigh wave (0.7um/s) is due here in a minute, at 15:30:38UTC. We'll see what happens.
L1 just went down at 15:30. Terramon said the EQ's Rayleigh waves (of 1.4um/s) would arrive there at 15:17UTC.
So we might not be out of the woods yet...watching 0.03-0.1Hz (all three axes have yet to move up at the same time, and all are still under 0.1um/s). I've seen us drop out when all three go above that velocity, but that was a few weeks ago, before the DHARD filter.
No signs of anything on tidal or ASC control strip tools either.
It's been 15min since the R-wave arrival estimate, so I'm assuming we rode through the EQ (it was barely observable here on the seismic bands, range, and striptools). Time to make breakfast/coffee.
Title: 10/3 OWL Shift 7:00-15:00 UTC
State of H1: Low noise, observing 75 mpc
Shift Summary: Quiet night
Activity log:
Nothing happened. Quiet night, Corey's lock from yesterday made it through the night.
Quiet night at LHO. Wind ~10mph, seismic relatively low, lock from yesterday continues.
TITLE: 10/3 EVE Shift: 23:00-07:00UTC (Oct.4) (16:00-23:59PT), all times posted in UTC
STATE OF H1: Observation at 77Mpc
SUPPORT: Robert, Jordan, Sheila
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
LLO is still having issues with useism.
Robert did one round of injections, and produced many ETMY saturations in 2 minutes - he may need to redo this measurement tomorrow night.
Jordan did one PEM measurement and would like to take another. The first was about 30 minutes and the second should be about this long as well.
Shift Activities:
00:00:42UTC, Oct. 4th - Robert's PEM injections start
01:34:37UTC - Robert's PEM injections end
03:35:37UTC - Jordan's PEM injections start
04:06:47UTC - Jordan's PEM injections end
I started to look at our locking attempts over the last two weeks, especially trying to understand our difficulty yesterday. I will write a more complete alog in the next few days, but I wanted to put this one in early so that operators can see it.
We've known for a long time that LLO always pulls the OMC off resonance during the CARM offset reduction, and they've told us that they can't lock if it is flashing. We know that we can lock here while it is flashing, which might be because our output faraday has better isolation.
In the two weeks of data that I looked at, we locked DRMI 64 times; 33 of these locks resulted in low noise locks and 31 of them failed during the acquisition procedure. Of these 31 failures, about 9 happened as the OMC was flashing. We also had about 12 successful locking attempts where the OMC flashed. OMC flashing probably wasn't our main problem yesterday, but it can't hurt, and it might help, to pull the OMC off resonance during the CARM offset reduction.
Operators: If you see that the OMC is flashing (visible on the OMC trans camera, right under the AS camera on the center video screen), you can pull it off resonance by opening the OMC control screen and moving the PZT offset slider in the upper right quadrant of the screen. Even if you don't see the OMC flashing on the camera, it can't hurt to pull the PZT away from the offset it is left at, which is the offset where it was locked in the last lock. I will try to add this to guardian soon and will let people know when I do (a sketch of what that could look like follows the screenshot).
screenshot with slider circled in a red dashed line
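Since the plan is to automate this in guardian, here is a minimal sketch of what such a step could do, assuming the usual ezca channel access that guardian provides; the channel name and step size are placeholders, not the actual guardian code:

# In guardian state code, `ezca` is provided by the framework.
# 'OMC-PZT2_OFFSET' and the 10-count step are hypothetical values.
def pull_omc_off_resonance():
    # Move the PZT away from the offset it was left at (the offset
    # of the last lock) so the OMC is no longer near resonance.
    ezca['OMC-PZT2_OFFSET'] += 10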
Evan, Sheila
Here is a plot of the 24 locklosses we had from Sept 17th to Oct 2nd during the early stages of the CARM offset reduction. The DCPD sum is shown in red, while the black line shows H1:LSC-POPAIR_B_RF18_I_NORM (before the phase rotation) to help in identifying the lockloss time. You can see that in many of these locklosses the OMC was flashing right before we lost lock. This is probably because the AS port was flashing right before lockloss and the OMC is usually nearly on resonance.
We looked at 64 total locking attempts in which DRMI locked; 24 of these resulted in locklosses in the early stages of the CARM offset reduction (before the DHARD WFS are engaged). In 28 of these 64 attempts the OMC DCPD sum was above 0.3mA at some point before we started locking the OMC, so the OMC flashed in 44% of our attempts. We lost lock 16 out of the 28 times that the OMC was flashing (57% of the time) and 8 out of the 36 times that it was not flashing (22% of the time).
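As a sanity check on the quoted fractions, using only the counts above:

flashing_attempts, flashing_losses = 28, 16
quiet_attempts, quiet_losses = 36, 8
print(flashing_losses / flashing_attempts)  # 0.57 -> 57% lost while flashing
print(quiet_losses / quiet_attempts)        # 0.22 -> 22% lost while not flashing
print(flashing_attempts / 64)               # 0.44 -> OMC flashed in 44% of attempts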
We will make the guardian pull the OMC off resonance before starting the acquisition sequence during tomorrow's maintenance window.
Title: 10/3 Eve Shift: 23:00-07:00UTC (16:00-00:00PT), all times posted in UTC
State of H1: Observation Mode at 55Mpc for 64 minutes at 00:00:42
Outgoing Operator: Corey
Quick Summary: Corey had an outstanding shift with the IFO relocking and remaining locked for 5 hours.
Update: Robert is in the LVEA doing injections. This started at 00:00:42UTC, when I switched the IFO to commissioning.
TITLE: 10/3 DAY Shift: 15:00-23:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: Observation at 75Mpc
SUPPORT: Evan, Daniel, Robert, Jordan
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
Started off shift with H1 in DOWN mode. Established a game plan by talking with Mike & Daniel over the phone. Evan arrived and started investigations while I re-aligned, and after a few hours we had H1 back up. It has been up with a decent range around 75Mpc (with a few of the usual ETMy saturations).
LLO is having issues with useism. Robert is taking advantage of their downtime to run PEM injections.
If you squint your eyes and look sideways, you can see the useism beginning to trend down. Winds are around 12-15mph.
Shift Activities:
Happened to notice a RED box on the Ops Overview medm which said there was a GraceDB Querying Failure. I wanted to figure out when this occurred, but was unable to (trended the channel in DV, used conlog, looked on the CAL_INJ medm [where this RED box also lives]). Maybe this happened between 21-22:00?
So checking alogs, I found a link to the following instructions. It was not clear to me which state I was in: did I need a "code start up" or a "code restart"? I followed the "code start up" instructions.
On the operator2 terminal (which is generally logged in as controls), I did the following:
ssh controls@h1fescript0
cd /opt/rtcds/userapps/release/cal/common/scripts
screen
python ext_alert.py run
This gave a GREEN "GraceDB querying Successful" box on the CAL_INJ medm (and the box entirely disappeared on the Ops Overview).
As I detached from the screen environment, I did not get a process ID # for the screen session; I only had a "[detached]" prompt. So I did NOT store a file with a PID# under the home directory. Maybe I should have followed the restart instructions? Guidance on how to determine which error state one is in would help with the instructions here.
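For what it's worth, here is a hedged sketch (not the documented procedure) of launching the monitor so that a PID file does get written under the home directory; the PID-file name is an assumption:

import os
import subprocess

# Launch ext_alert.py directly and record its process ID to a file.
proc = subprocess.Popen(
    ['python', 'ext_alert.py', 'run'],
    cwd='/opt/rtcds/userapps/release/cal/common/scripts',
)
with open(os.path.expanduser('~/ext_alert.pid'), 'w') as f:  # hypothetical name
    f.write(str(proc.pid))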
I'm also assuming it's OK to restart/start this computer while in Observation Mode (because I did).
The attached plot shows some unexpected variations in the control signals of the two EOM drivers over the past 20 hours. This is also visible in the RF power monitors of all distribution amplifiers in the electronics room. This may be due to temperature fluctuations, but we don't seem to have a temperature readout in the electronics room. The LVEA shows no variation in temperature.
H1 Status:
After 22+hrs of being down, H1 is finally back. The only notable change to the system was Evan's change of an ASC ALS_X Pit Gain. (Another minor point is the moving of PR3 during alignment; my experience is that we generally don't have to move PR3.)
For SDF, ACCEPTED a few RED items: (before / now)
After a few checks, H1 was taken to Observation Mode and has been hovering pretty close to 77Mpc. Robert/Evan noted a low periscope peak around 300Hz.
Robert Injection Request
Whenever L1 is down, Robert is looking to continue PEM injections (per WP#5531) with approval from Landry.
Environmental:
useism channel is hovering around 0.2um/s. Winds are hovering around 15mph.
Corey, Daniel, Evan
For the past 24 hours the green transmission through the X arm has been uncharacteristically unstable, sometimes dipping to less than 60% of its maximum value on timescales of a few seconds.
Looking at the quad oplev signals, it seems that this dipping (perhaps unsurprisingly) is mostly associated with EX pitch. It could be because of wind (there were gusts above 40 mph yesterday), or it could be because of the microseism (the 0.1–0.3 Hz STS bands peaked yesterday around 0.4 µm/s, which is the highest they've been in the past 90 days, excepting earthquakes), or it could be because of something else entirely.
Turning down the EX green WFS pitch gain (H1:ALS-X_WFS_DOF_1_P_GAIN) by a factor of 5 seems to lessen the fluctuations in the transmitted green signal, making it stay within 75% of its maximum. It is a small effect, but it seemed to improve the transmitted IR light in the CHECK_IR step. After this change we were able to make it past SWITCH_TO_QPDS and all the way to nominal low noise. It could just be a coincidence, though.
Taken with ALS WFS feedback going to the ETMs (and not the ITMs).
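For the record, the factor-of-5 reduction described above is just a single EPICS write; a minimal sketch using pyepics (the channel name is from the entry above, but using pyepics rather than the control-room tools is an assumption):

import epics

chan = 'H1:ALS-X_WFS_DOF_1_P_GAIN'
gain = epics.caget(chan)
epics.caput(chan, gain / 5.0)  # turn down the EX green WFS pitch gain by 5x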
10/3 DAY Shift: 15:00-23:00UTC (00:00-8:00PDT), all times posted in UTC
15:56 - 16:39 Ran through an Initial Alignment
First lock attempt had DRMI lock within 2min, but it dropped out during DARM ON TR.
Support: Robert, Evan, & Daniel on site.
Investigations continue.
Just in case you're wondering why LHO sees two noise bumps at 315 and 350Hz (attached, middle blue) but LLO does not: we don't fully understand it either, but here is the summary.
There are three things here: the environmental noise level, the PZT servo, and the jitter coupling to DARM. Even though the former two explain part of the LLO-LHO difference, they cannot explain all of it, and the coupling at LHO seems to be larger.
Reducing the PSL chiller flow will help but that's not a solution for the future.
Reimplementing the PZT servo at LHO will help, and this should be done. Squashing it all will be hard, though, as we are talking about jitter between 300 and 370Hz and there's a resonance at 620Hz.
Reducing the coupling is one area that has not been well explored. Past attempts at LHO were made on top of dubious IMC WFS quadrant gain imbalances.
1. Environmental difference
These bumps are supposed to be from the beam jitter caused by PSL periscope resonances (not from the PZT mirror resonances). In the attached you can see that the bumps in H1 (middle blue) correspond to the bumps in PSL periscope accelerometer (top blue). (Don't worry, we figured out which server we need to use for DTT to give us correct results.)
Because of the PSL chiller flow difference between LLO and LHO (LHO alog; couldn't find an LLO alog, but we have MattH's word), in general the LLO periscope noise level is lower than LHO's. However, the difference in the accelerometer signal is not enough to explain the difference in the IFO.
For example, at 350Hz the LHO PSL periscope is only a factor of 2 noisier than LLO's, and at 330Hz LHO is quieter than LLO by more than a factor of 2. Yet we have a huge hump in DARM at LHO; it becomes larger and smaller but never goes away, while LLO DARM is dead flat.
At LLO they do have a servo to suppress noise at about 300Hz, but it shouldn't be doing much, if anything, at 350Hz (see the next section).
So yes, it seems like environmental difference is one of the reasons why we have larger noise.
But the jitter to DARM coupling itself seems to be larger.
Turning down the chiller flow will help but that's not a solution for the future.
2. Servo difference
At LLO there's a servo to squash beam jitter in PIT at 300Hz. LHO used to have one, but it is now disabled.
At LLO, the IOOWFS_A_I_PIT signal is used to suppress PIT jitter, targeting the 300Hz peak, which was right on some mechanical resonance/notch structure in PZT PIT (which LHO also has); the servo reduced the noise between about 270 and 320Hz (LLO alog 19310).
The same servo was successfully copied to LHO with some modification, also targeting the 300Hz bump (except that YAW was more coherent than PIT, so we used the YAW signal), with somewhat less (but not much less) aggressive gain and bandwidth. At that time the 300Hz bump was problematic together with the 250Hz and 350Hz bumps. Look at the plots from alog 20059 and 20093.
Somehow the 250Hz and 300Hz bumps subsided, and now LHO is suffering from the 315Hz and 350Hz bumps (compare the attached with the above-mentioned alogs). Since we never had time to tune the servo filter to target either of the new bumps, and since turning the servo on without modification would make only a marginal improvement at 300Hz while making 250Hz/350Hz somewhat worse due to gain peaking, it was disabled.
Reimplementing the servo to target the 315 and 350Hz bumps will help. But it's not going to be easy to make this servo wide-band enough to squash everything, because of the 620Hz resonance, which is probably something in the PZT mirror itself (see the above-mentioned alog 20059 for the open loop transfer function of the current servo, for example). In principle we can go even wider band, but we'll need more than a 2kHz sampling rate for that. We could stiffen the mount if 620Hz is indeed the mount.
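To illustrate the gain-peaking trade-off described above, here is a hedged sketch (not the actual servo design) of a pair of resonant boosts centered on the 315Hz and 350Hz bumps; the Q values, boost height, and 2048Hz rate are assumptions for illustration:

import numpy as np
import scipy.signal as sig

fs = 2048.0  # assumed servo sampling rate (cf. the >2kHz remark above)

def peaking(f0, q_pole=30.0, q_zero=3.0):
    # Second-order section boosting gain near f0 by about q_pole/q_zero (10x).
    w0 = 2 * np.pi * f0
    num = [1.0, w0 / q_zero, w0 ** 2]
    den = [1.0, w0 / q_pole, w0 ** 2]
    return sig.bilinear(num, den, fs=fs)

f = np.linspace(100.0, 1000.0, 2000)
resp = np.ones_like(f, dtype=complex)
for f0 in (315.0, 350.0):
    b, a = peaking(f0)
    _, h = sig.freqz(b, a, worN=f, fs=fs)
    resp *= h
mag_db = 20 * np.log10(np.abs(resp))
print('boost at 315Hz: %.1f dB' % mag_db[np.argmin(np.abs(f - 315))])
print('residual gain at 620Hz: %.1f dB' % mag_db[np.argmin(np.abs(f - 620))])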
3. Coupling difference
As I wrote in the environmental difference section, from the accelerometer data and the IFO signal it seems as if the coupling is larger at LHO.
There are many jitter coupling measurements at LHO but the best one to look at is this one. We should be able to make a direct comparison with LLO but I haven't looked.
Anyway, it is known that the coupling depends on IMC alignment and OMC alignment (and probably the IFO alignment).
At LHO, IMC WFS has offsets in PIT and YAW in an attempt to minimize the coupling. This is on top of dubious imbalances in the IMC WFS quadrant gains at LHO (see alog 20065; the minimum quadrant gain is a factor of 16 smaller than the maximum). We should fix that before spending much time on studying the jitter coupling via alignment.
At LLO, there's no such imbalance and there's no such offset.
The coupling of these peaks into DARM appears to pass through a null near the beginning of each full-power lock stretch, perhaps indicating that this coupling can be suppressed through TCS heating.
Already from the summary pages one can see that at the beginning of each lock, these peaks are present in DARM, then they go away for about 20 minutes, and then they come back for the duration of the lock.
I looked at the coherence (both magnitude and phase) between DARM and the IMC WFS error signals at three different times during a lock stretch beginning on 2015-09-29 06:00:00 Z. Blue shows the signals 10 minutes before the sign flip, orange shows the signals near the null, and purple shows the signals 20 minutes after the sign flip.
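The coherence estimate itself is standard; here is a self-contained sketch of the computation, with white-noise arrays standing in for the fetched DARM and IMC WFS time series (channel fetching, e.g. via NDS, is omitted, and the sample rate is an assumption):

import numpy as np
import scipy.signal as sig

fs = 2048.0                                   # assumed common sample rate
rng = np.random.default_rng(0)
darm = rng.standard_normal(int(600 * fs))     # placeholder for DARM
imc_wfs = rng.standard_normal(int(600 * fs))  # placeholder for an IMC WFS signal

f, coh = sig.coherence(darm, imc_wfs, fs=fs, nperseg=int(8 * fs))
band = (f >= 300) & (f <= 370)
print('peak coherence in the 300-370Hz band: %.2f' % coh[band].max())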
One can also see that the peaks in the immediate vicinity of 300 Hz decay monotonically from the beginning of the lock stretch onward; my guess is that these are generated by some interaction with the beamsplitter violin mode and have nothing to do with jitter.
Addendum:
alog 20051 shows the PZT to IMC WFS transfer function (without servo) for PIT and YAW, which makes it easier to see which resonance is on which DOF.
We've both restarted our sessions on TeamSpeak (I rebooted the computer since ours would not allow me to open anything; upon reboot, TeamSpeak opened automatically [thanks, Ryan!]). We are both now re-connected.