Attached: laser room environment since pulling the breaker on the AC the other evening. Also attached are the relative humidity sensor signals. Not sure why the humidity sensor from the oscillator seems to be oscillating. The relative change is not large enough to suggest a water leak - thankfully.
Relocking is going okay. Stopping at Violin_Mode_Damping_2 to squish one of the fundamental modes that rang up high.
A bunch of the ETMX CDS overview indicators went red briefly, in the middle-ish column (either DAC or DAQ). This came with an OMC DC PD saturation warning. The red went away before I could look closer.
Accepted ETM and ITM SDF differences after making sure ETMX bounce mode was not ringing up. The rest are gain settings. Back to Observe.
Also accepted PI differences. Had to change the mode 28 filter and PLL set frequency. Back to Observe again.
Nutsinee is correct: at the same time we got a SUS-ETMX glitch, the OMC PD DC ADC channels saturated. The time was 08:42:49 UTC. The saturations stopped 2 seconds later, and the glitch was cleared at 08:44:04 UTC (cronjob running on the minute). Could this be a weird coincidence?
TITLE: 03/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
LOG:
Shift has been very quiet, until.....
7:18 Lockloss (a little ding in the earthquake band??) - had to clear green WFS and re-align arms. ALS Y VCO error had to be reset.
7:41 Lockloss at finding IR (IMC loses lock) - this has been the status quo. What's the proper action? Clear IMC WFS and let them recenter and simmer for a while before re-attempting? Or go to Locking ALS and let that "cook" for a bit before moving on?
Handing off the re-locking task to Nutsinee
I added 135mL to the Xtal Chiller. The Diode chiller water level display showed 'OK'.
Here's a list of how they're doing just in case you care:
STS A DOF X/U = -0.483 [V]
STS A DOF Y/V = 0.197 [V]
STS A DOF Z/W = -0.661 [V]
STS B DOF X/U = 0.54 [V]
STS B DOF Y/V = 0.305 [V]
STS B DOF Z/W = -0.245 [V]
STS C DOF X/U = 0.391 [V]
STS C DOF Y/V = 0.729 [V]
STS C DOF Z/W = -0.245 [V]
STS EX DOF X/U = 0.079 [V]
STS EX DOF Y/V = 0.576 [V]
STS EX DOF Z/W = 0.087 [V]
STS EY DOF X/U = 0.129 [V]
STS EY DOF Y/V = 0.096 [V]
STS EY DOF Z/W = 0.463 [V]
2017-03-09 20:17:42.356342
There are 6 T240 proof masses out of range ( > 0.3 [V] ):
ETMX T240 2 DOF X/U = -0.47 [V]
ETMX T240 2 DOF Y/V = -0.574 [V]
ETMY T240 3 DOF Z/W = 0.373 [V]
ITMY T240 3 DOF X/U = -0.41 [V]
ITMY T240 3 DOF Z/W = -1.306 [V]
BS T240 1 DOF Z/W = 0.302 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.092 [V]
ETMX T240 1 DOF Y/V = 0.085 [V]
ETMX T240 1 DOF Z/W = 0.184 [V]
ETMX T240 2 DOF Z/W = -0.203 [V]
ETMX T240 3 DOF X/U = 0.116 [V]
ETMX T240 3 DOF Y/V = 0.031 [V]
ETMX T240 3 DOF Z/W = 0.07 [V]
ETMY T240 1 DOF X/U = 0.018 [V]
ETMY T240 1 DOF Y/V = 0.127 [V]
ETMY T240 1 DOF Z/W = -0.166 [V]
ETMY T240 2 DOF X/U = 0.234 [V]
ETMY T240 2 DOF Y/V = -0.191 [V]
ETMY T240 2 DOF Z/W = 0.029 [V]
ETMY T240 3 DOF X/U = -0.18 [V]
ETMY T240 3 DOF Y/V = 0.015 [V]
ITMX T240 1 DOF X/U = -0.277 [V]
ITMX T240 1 DOF Y/V = -0.128 [V]
ITMX T240 1 DOF Z/W = -0.053 [V]
ITMX T240 2 DOF X/U = -0.069 [V]
ITMX T240 2 DOF Y/V = -0.065 [V]
ITMX T240 2 DOF Z/W = -0.074 [V]
ITMX T240 3 DOF X/U = -0.224 [V]
ITMX T240 3 DOF Y/V = -0.042 [V]
ITMX T240 3 DOF Z/W = -0.017 [V]
ITMY T240 1 DOF X/U = 0.081 [V]
ITMY T240 1 DOF Y/V = 0.047 [V]
ITMY T240 1 DOF Z/W = 0.051 [V]
ITMY T240 2 DOF X/U = 0.057 [V]
ITMY T240 2 DOF Y/V = 0.191 [V]
ITMY T240 2 DOF Z/W = 0.13 [V]
ITMY T240 3 DOF Y/V = 0.138 [V]
BS T240 1 DOF X/U = -0.082 [V]
BS T240 1 DOF Y/V = 0.043 [V]
BS T240 2 DOF X/U = 0.144 [V]
BS T240 2 DOF Y/V = 0.276 [V]
BS T240 2 DOF Z/W = 0.086 [V]
BS T240 3 DOF X/U = 0.122 [V]
BS T240 3 DOF Y/V = -0.016 [V]
BS T240 3 DOF Z/W = -0.018 [V]
Assessment complete.
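For reference, the assessment above boils down to a simple threshold comparison against the 0.3 V limit. A minimal sketch, assuming the mass-position readings have already been collected into a dict keyed by sensor name (the values shown are just a few of the ones listed above):

    # Sketch of the proof-mass range check; the 0.3 V threshold is the value
    # quoted above, and the readings dict would be filled from the EPICS channels.
    THRESHOLD = 0.3  # [V]

    readings = {
        "ETMX T240 2 DOF X/U": -0.47,
        "ETMX T240 2 DOF Y/V": -0.574,
        "ETMX T240 1 DOF X/U": 0.092,
        # ... remaining STS/T240 proof masses ...
    }

    out_of_range = {k: v for k, v in readings.items() if abs(v) > THRESHOLD}

    print("There are %d T240 proof masses out of range ( > %.1f [V] ):"
          % (len(out_of_range), THRESHOLD))
    for name, value in sorted(out_of_range.items()):
        print("%s = %s [V]" % (name, value))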
17 hours into the lock today, we noticed that IMC-F on the wall FOM was steadily drifting off, instead of riding around 0 where it usually sits. After a bit of digging, Sheila and Kiwamu found that both the X and Y arm LSC tidal HEPI offloads had reached (or nearly reached) their limits. The limits set in the LSC model were 500 microns, while the HEPI limits are set at 700 microns, so even though the HEPIs were nowhere near out of range, the offload from LSC was being limited. This meant that LSC had to use the IMC to do the tidal control. After some careful gymnastics Sheila was able to relieve the tidal offset from the IMC to the end station HEPIs without breaking the lock. The limits for LSC have been updated, but this took us out of observing for ~80 minutes (23:13-0:36 UTC).
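For what it's worth, this failure mode is easy to spot in software: the offload saturates whenever it gets close to the LSC-side limit, even though HEPI itself still has headroom. A minimal sketch of such a check, using the two limits quoted above; the channel names are placeholders, not the real EPICS records:

    # Hedged sketch of a tidal-offload saturation check (placeholder channel names).
    from epics import caget  # pyepics

    LSC_LIMIT_UM = 500.0   # limit that had been set in the LSC model [um]
    HEPI_LIMIT_UM = 700.0  # limit on the HEPI side [um]

    for arm in ("X", "Y"):
        chan = "H1:LSC-%s_TIDAL_MON" % arm   # placeholder channel name
        offload = caget(chan)
        if offload is None:
            print("%s: no data" % chan)
            continue
        frac = abs(offload) / LSC_LIMIT_UM
        if frac > 0.9:
            print("%s arm offload at %.0f um is %.0f%% of the LSC limit "
                  "(HEPI itself is good to +/- %.0f um)"
                  % (arm, offload, 100 * frac, HEPI_LIMIT_UM))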
The attached plot is the last 4 hours of timeseries for IMC-F and the X & Y tidal mons (the offset sent to the end station HEPIs). The first big dip is IMC-F drifting off because the X tidal has hit its limit; the jaggy bits are Sheila slowly ramping the offset to HEPI and letting the LSC tidal recover.
It was brought up in the 1pm site meeting that our range has dropped during O2. I did some quick trends, and indeed the range has dropped about 4 Mpc since the holiday break, with another few Mpc lost on February 25.
The first attachment shows two drops in the range over the last 110 days. After the holiday break the range started out where it had been, but then began to drop rapidly over the next week. From there, it seemed to be one issue after another: snow plowing, the PSL ISS AA chassis, possible ETMY L2 hardware, and PSL temperature swings, to point out a few.
Once most of these issues were resolved the range began a slow crawl back to almost where it had been, and then on February 25 at ~9 UTC there was a lock loss caused by a PSL trip (alog 34395) [second attachment]. This seems to be the most recent step in the range. I could not find anything in the alogs saying why this may have caused the range to drop.
Also mentioned during the meeting was the lack of evidence for the dip in range seen in the mornings, which most operators will say is due to the Hanford traffic. The reason we say this is that the dip in range is only seen Monday through Thursday at the same time (~13 UTC), when we can see the commute on the building camera, and it is coincident with a rise in the 3-10 Hz seismic BLRMS FOM. I could not find an alog from Robert reporting on this issue, but a quick check of the summary pages makes it pretty clear to me.
While we were in corrective maintenance today (because of the tidal issue that Jim is alogging), Keita and I noticed that the oplev damping loops didn't have the same settings for all 4 test masses. ETMX had a gain of -440 while the other test masses had a gain of -300; it seems to have been this way for about a year.
We want the damping settings to be the same for all the optics so that the plants for the ASC loops are well behaved in the radiation pressure basis. We set ETMX to -300 like the others without any problem and accepted it in SDF.
TITLE: 03/09 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ed
SHIFT SUMMARY: Ran a2l and unset the FM1 BP filter for mode 27 when LLO lost lock (Nutsinee noted two BP filters were on for this mode). Kiwamu investigated the effect of damping PI mode 23 on DARM during the commissioning period. Tidal control of HEPI hit its limit in LSC and started offloading to the IMC. The limit was raised. Currently in preventive maintenance while the offset on the IMC is bled back off to HEPI.
LOG:
17:11 Out of observing to run a2l.
17:31 a2l done. Turned off FM1 BP filter for PI mode 27. Accepted SDF difference (attached).
17:33 Back to observing.
19:06 Joe D. to mid X, inventory.
19:20 Set intent bit to commissioning. Kiwamu starting PI test.
19:59 Joe D. back.
21:00 PI test done. Back to observing.
21:24 Marc to mid Y to retrieve parts from electronics room.
22:00 Marc back.
22:16 Marc to mid Y.
22:41 Marc back.
22:48 Bubba to mid Y, inventory.
23:07 Bubba back.
23:13 Tidal hit limit in LSC control. Sheila, Jim W., Kiwamu mitigating. Out of observing.
23:17 RO alarm.
All files under the /opt/rtcds/userapps/release path are required to be writable by all CDS users to permit file sharing. To accomplish this, our policy is:
Every Tuesday I run a checking script which verifies that files under /opt/rtcds/userapps/release have the correct ownership and permissions. It also checks that special permission bits have not been set on regular files. The script is /opt/rtcds/userapps/release/cds/h1/scripts/check_correct_userapps_file_perms.bsh. It was originally written by Michael Thomas. If I have to correct any problems, I do this as user root on h1fs0 during maintenance.
In a long-form directory listing in the userapps area, you should see that all directories have the following permission and group-ownership:
drwxrwsr-x {owners-name} controls
The "s" in the group execute position (replacing the "x") is the setgid bit; it indicates that files created under this directory will also get controls group-ownership.
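For illustration, here is roughly what the Tuesday check does, written as a small Python sketch rather than the production .bsh script referenced above (same policy: group controls, group-writable, setgid on directories, no special bits on regular files):

    # Illustration only -- the production check is the .bsh script above.
    import grp
    import os
    import stat

    ROOT = "/opt/rtcds/userapps/release"
    POLICY_GROUP = "controls"

    for dirpath, dirnames, filenames in os.walk(ROOT):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue
            if stat.S_ISLNK(st.st_mode):
                continue
            try:
                group = grp.getgrgid(st.st_gid).gr_name
            except KeyError:
                group = str(st.st_gid)
            problems = []
            if group != POLICY_GROUP:
                problems.append("group is %s" % group)
            if not st.st_mode & stat.S_IWGRP:
                problems.append("not group-writable")
            if stat.S_ISDIR(st.st_mode):
                if not st.st_mode & stat.S_ISGID:
                    problems.append("setgid bit not set on directory")
            elif st.st_mode & (stat.S_ISUID | stat.S_ISGID | stat.S_ISVTX):
                problems.append("special permission bits set on a regular file")
            if problems:
                print("%s: %s" % (path, ", ".join(problems)))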
Patrick, Terra (remotely), Kiwamu,
related log: 34660
During the commissioning window today, we did a test in which we actively excited PI mode 23 (ETMY, 32 kHz) to determine whether we could reproduce the same type of DARM noise that had coincided with a high RMS of PI mode 23 yesterday.
Conclusions:
[The test]
The results are shown in the first attachment. It contains the following three different configurations for the damping of mode 23.
When the damping was on, it produced a few narrow lines (at 14.5, 64.0, 78.5 and 142.5 Hz). We are not sure why. As we flipped the control gain, it excited the mode and resulted in higher line heights. Looking at the control signal sent to the ESD, we made sure that the DAC was not saturating. Eyeballing the MEDM screen, the typical amplitude we sent for the nominal damping setting was a few hundred counts at the DAC.
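The same saturation check can also be done on the recorded drive signal rather than by eyeballing MEDM. A rough sketch; the stand-in drive array and the full-scale value here are assumptions, not taken from this test:

    import numpy as np

    # FULL_SCALE is an assumed DAC full scale in counts; use the value for the
    # actual DAC driving the ESD.
    FULL_SCALE = 2**17

    def dac_headroom(drive_counts, full_scale=FULL_SCALE):
        """Return the peak drive in counts and the fraction of full scale used."""
        peak = np.max(np.abs(drive_counts))
        return peak, peak / full_scale

    # stand-in drive of a few hundred counts, as quoted above
    drive = 300.0 * np.sin(2 * np.pi * 1000.0 * np.arange(65536) / 65536.0)
    peak, frac = dac_headroom(drive)
    print("peak drive %.0f counts = %.2f%% of full scale" % (peak, 100 * frac))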
[A hypothesis]
This is pure speculation: as some interferometer variable varies (such as the OMC alignment or something), it changes the coupling of mode 23 to the OMC, producing a high readout value for mode 23. At the same time, this phenomenon causes a glitch in DARM. So in this hypothesis, mode 23 is just a witness of the phenomenon and not the cause.
The second attachment shows the glitches that we had yesterday coinciding with the high RMS of mode 23. The other PI modes which are also derived from the OMC DCPDs (modes 5, 12 and 22) didn't show any obvious events at the same time, so this OMC coupling hypothesis may not be trustworthy.
TITLE: 03/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
TJ, Miriam
Miriam found a population of glitches while following up loud background events from PyCBC that seem to be due to ETMY oplev laser glitches.
It seems like ETMY optical lever laser glitches are coupling into h(t) through L2 via the oplev damping loops, similar to how the ETMX oplev laser glitches were found to be coupling into h(t) in alog 31810 from November 2016. These are thought to be laser glitches since they show up strongly in the OPLEV SUM readout.
The attachment shows data from Feb 25th, but I've seen similar behavior from earlier today.
The first page of the attachment shows the BLRMS of the ETMY L3 OPLEV SUM aligned with Omicron triggers in h(t).
The second page shows the ETMY L3 OPLEV SUM BLRMS aligned with the oplev damping loop error point, which seems to be where the coupling into h(t) is coming from.
The third page shows the ETMY L3 OPLEV SUM BLRMS aligned with the L2 noisemon, which shows the same coincident glitches.
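For reference, the BLRMS traces in the attachment are conceptually just a band-pass followed by a running RMS. A minimal sketch with scipy; the band edges, stride, and sample rate below are arbitrary placeholders, not necessarily what was used for these plots:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def blrms(x, fs, flow, fhigh, stride_sec=1.0):
        """Band-limited RMS: band-pass x between flow and fhigh [Hz], then take
        the RMS in non-overlapping blocks of stride_sec seconds."""
        sos = butter(4, [flow, fhigh], btype="bandpass", fs=fs, output="sos")
        y = sosfiltfilt(sos, x)
        nblock = int(stride_sec * fs)
        nblocks = len(y) // nblock
        y = y[: nblocks * nblock].reshape(nblocks, nblock)
        return np.sqrt(np.mean(y ** 2, axis=1))

    # usage (placeholders): oplev_sum is the ETMY L3 OPLEV SUM timeseries at fs
    # trend = blrms(oplev_sum, fs=256.0, flow=10.0, fhigh=50.0)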
Verbal alarms reported a timing error just before 10pm (local time) Saturday night. This was a transient alarm, which cleared within seconds.
I have just completed the analysis of the error. The alarm was raised by the 1PPS comparator in the MSR; the fourth input signal went OOR (Out-of-Range).
This channel is the independent Symmetricom GPS receiver. Its nominal range is -200 to 0, and at 21:58 PST on 4 Mar 2017 it briefly went to -201.
Trending signal_3 for a day around this time shows that the signal wandered for several hours before settling down. I verified that the other three signals being compared did not make any excursions at this time, indicating the error was with the Symmetricom signal itself (trend attached).
Using the MEDM time machine, I captured the detailed comparator error screen at this time, which verifies the error was "PPS 3 OOR" (image attached).
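The OOR condition itself is just a range check on the trended comparator signal; a trivial sketch, assuming the signal_3 trend has been pulled into arrays:

    import numpy as np

    NOMINAL_LOW, NOMINAL_HIGH = -200, 0   # nominal range quoted above

    def find_oor(times, samples):
        """Return the times at which the comparator signal left its nominal range."""
        samples = np.asarray(samples)
        mask = (samples < NOMINAL_LOW) | (samples > NOMINAL_HIGH)
        return np.asarray(times)[mask]

    # e.g. a trend that briefly touches -201, like the Symmetricom excursion:
    print(find_oor(np.arange(6), [-198, -199, -201, -199, -198, -197]))  # -> [2]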
TITLE: 03/09 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Locked and in Observe most of the night (except for one little accident, see below). PI mode 27 has two BP filters on right now. Patrol visited the site last night. Foton doesn't seem to work when trying to use it from an MEDM screen (the usual right click >> foton >> click on filter).
LOG:
10:00 Hanford Patrol on site. They called the control room from the gate but it didn't work; I only heard echoes of my voice when trying to talk. They came through anyway.
10:10 Patrol left through the exit gate, headed to the LSB. Off site a couple of minutes later.
13:18 Noticed two BP filters were on for PI mode 27. Ed told me about this before he left, but I didn't realize they were both being used. Accidentally went out of Observe trying to revert the configuration. I left the BP filters as they are for now.
14:08 Bubba heading to Y arm to check on tumbleweed.
14:26 Bubba back.
The instafoton problem has been fixed; some diagnostics code had been added which inadvertently created a reliance on a temporary file's ownership.
With a nudge from peterF and mevans, I checked to see how hard it might be to do some time-domain subtraction of the jitter in H1 DARM. This is similar to what Sheila (alog 34223) and Keita (alog 33650) have done, but now it's in the time domain so that we could actually clean up DARM before sending it to our analysis pipelines.
The punchline: It's pretty easy. I got pretty good feedforward subtraction (close to matching what Sheila and Keita got with freq-domain subtraction) without too much effort.
Next steps: See if the filters are good for times other than the training time, or if they must be re-calculated often (tomorrow). Implement in GDS before the data goes to the analysis pipelines (farther future?).
I was finding it difficult to calculate effective Wiener filters with so many lines in the data, since the Wiener filter calculation is just minimizing the RMS of the residual between a desired channel (eg. DARM) and a witness (eg. IMC WFS for jitter). So, I first removed the calibration lines and most of the 60Hz line. See the first attached figure for the difference between the original DARM spectrum and my line-subtracted DARM spectrum. This is "raw" CAL-DELTAL_EXTERNAL, so the y-axis is not in true meters.
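For anyone who wants to reproduce the basic step: with a single witness, the Wiener filter calculation really is just a least-squares fit of delayed copies of the witness to the target. A very reduced sketch in Python (the real work was done in Matlab with multiple witnesses, emphasis filtering, etc.):

    import numpy as np

    def fir_wiener(witness, target, ntaps):
        """Least-squares FIR filter mapping witness -> target, i.e. the tap
        weights that minimize the RMS of (target - prediction)."""
        # rows are [w[i], w[i-1], ..., w[i-ntaps+1]]
        X = np.column_stack([witness[ntaps - 1 - k: len(witness) - k]
                             for k in range(ntaps)])
        y = target[ntaps - 1:]
        taps, *_ = np.linalg.lstsq(X, y, rcond=None)
        return taps

    def subtract(witness, target, ntaps=256):
        """Return target with the witness-predicted part removed."""
        taps = fir_wiener(witness, target, ntaps)
        prediction = np.convolve(witness, taps, mode="full")[: len(target)]
        return target - prediction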
I did not need to use any emphasis filters to reshape DARM or the witnesses for the line removal portion of this work. The lines are so clear in these witnesses that they don't need any help. I calculated the Wiener filters for each of the following channels separately, and calculated their estimated contribution to DARM individually, then subtracted all of them at once. H1:CAL-PCALY_EXC_SUM_DQ has information about the 7Hz line, the middle line in the 36Hz group, the 332Hz line and the 1080Hz line. H1:LSC-CAL_LINE_SUM_DQ has information about the highest frequency line in the 36Hz group. Both of those are saved at 16kHz, so required no extra signal processing. I used H1:SUS-ETMY_L3_CAL_LINE_OUT_DQ for the lowest frequency of the 36Hz group, and H1:PEM-CS_MAINSMON_EBAY_1_DQ for the 60Hz power lines. Both of these channels are saved slower (ETMY cal at 512Hz and MainsMon at 1kHz), but since they are very clean signals, I felt comfortable interpolating them up to 16kHz. So, these channels were interpolated using Matlab's spline function before calculating their Wiener filters. Robert or Anamaria may have thoughts on this, but I only used one power line monitor, and only at the corner station for the 60Hz line witness. I need to re-look at Anamaria's eLIGO 60Hz paper to see what the magical combination of witnesses was back then.
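A note on the interpolation: the equivalent of Matlab's spline for pushing the slow, clean witnesses (512 Hz and 1 kHz) up onto the 16 kHz grid would look something like this in scipy (rates as quoted above; the channel arrays are placeholders):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def upsample(x, fs_in, fs_out):
        """Cubic-spline interpolation of a slow, clean witness onto a faster
        time grid, analogous to Matlab's spline()."""
        t_in = np.arange(len(x)) / fs_in
        t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
        return CubicSpline(t_in, x)(t_out)

    # e.g. the ETMY cal line channel recorded at 512 Hz, brought up to 16384 Hz:
    # etmy_cal_16k = upsample(etmy_cal_512, fs_in=512.0, fs_out=16384.0)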
Once I removed the calibration lines, I roughly whitened the DARM spectrum, and calculated filters for IMC WFS A and B, pit and yaw, as well as all 3 bullseye degrees of freedom. Unfortunately, these are only saved at 2kHz, so I first had to downsample DARM. If we really want to use offline data to do this kind of subtraction, we may need to save these channels at higher data rates. See the second attached figure for the difference between the line-cleaned DARM and the line-and-jitter-cleaned DARM spectrum. You can see that I'm injecting a teeny bit of noise in, below 9Hz. I haven't tried adjusting my emphasis filter (so far just roughly whitening DARM) to minimize this, so it's possible that this can be avoided. It's interesting to note that the IMC WFS get much of the jitter noise removed around these broad peaks, but it requires the inclusion of the bullseye detector channels to really get the whole jitter floor down.
Just because it's even more striking when it's all put together, see the third attachment for the difference between the original DARM spectrum and the line-and-jitter-cleaned DARM spectrum.
It might be worth pushing the cleaned data through the offline PyCBC search and seeing what difference it makes. How hard would it be to make a week of cleaned data? We could repeat e.g. https://sugwg-jobs.phy.syr.edu/~derek.davis/cbc/O2/analysis-6/o2-analysis-6-c00-run5/ using the cleaned h(t) and see what the effect on range and glitches is. The data could be made offline, so as long as you can put h(t) in a frame (which we can help with), there's no need to get it into GDS to do this test.
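In case it helps, a sketch of packing cleaned h(t) into a frame with gwpy (needs a frame I/O backend such as lalframe or frameCPP); the channel name, GPS time, and data here are placeholders:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    t0 = 1173000000                   # placeholder GPS start time
    cleaned = np.zeros(16384 * 64)    # placeholder for 64 s of cleaned DARM at 16384 Hz

    ts = TimeSeries(cleaned, sample_rate=16384, t0=t0,
                    name="H1:DCS-CALIB_STRAIN_CLEANED",     # placeholder channel name
                    channel="H1:DCS-CALIB_STRAIN_CLEANED")
    ts.write("H-H1_CLEANED-%d-%d.gwf" % (t0, int(ts.duration.value)))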
Do you think it would be possible to post the spectra as ASCII files? It would be interesting to get a very rough estimate of the inspiral range difference.
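The rough range comparison is the usual f^(-7/3)-weighted integral of 1/S(f); the absolute calibration drops out if you only want the ratio between two spectra. A sketch, assuming two-column ASCII files of frequency and ASD (file names are placeholders):

    import numpy as np

    def range_integral(freq, asd, fmin=10.0, fmax=2048.0):
        """Integral of f^(-7/3) / S(f) over the band, proportional to the square
        of the inspiral range for a fixed source."""
        mask = (freq >= fmin) & (freq <= fmax)
        f, s = freq[mask], asd[mask]
        return np.trapz(f ** (-7.0 / 3.0) / s ** 2, f)

    # freq0, asd0 = np.loadtxt("darm_original.txt", unpack=True)   # placeholder files
    # freq1, asd1 = np.loadtxt("darm_cleaned.txt", unpack=True)
    # ratio = np.sqrt(range_integral(freq1, asd1) / range_integral(freq0, asd0))
    # print("cleaned / original inspiral range ratio: %.3f" % ratio)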
In fact, I'm working on a visualization of this for a comparison between C00 and C01 calibration versions. See an example summary page here:
https://ldas-jobs.ligo.caltech.edu/~alexander.urban/O2/calibration/C00_vs_C01/L1/day/20161130/range/
I agree with Other Alex and I'd like to add your jitter-free spectrum to these plots. If possible, we should all get together at the LVC meeting next week and discuss.