On Monday, I went to EndX to retrieve data from the anemometers I had around the wind fence, and found the setup kind of trashed. The ultrasonic gust sensor had been blown over, which sounds like a common issue with that setup in the past. The fall, however, broke some of the plastic pieces that anchor the screws holding the head of the sensor together. I think the sensor is still functional, but it will need to be glued back together.
One of the other cup anemometers may have been attacked by a bird, or possibly a vehicle. The cable had been unplugged from the sensor and dragged more than 20 feet, and the sheath and a couple of wires had been cut. I would normally blame a bird for that, but the copper tube I had secured the anemometer to was also bent. It's possible that a gust of wind bent the post, but neither of the other posts was similarly bent. There are tire tracks up around the fence, but it's hard to say how old they are; they are likely left over from the fence install, and none of them come very close to the post. Finally, the batteries in the logger for the 3 cup anemometers had worked free of their connections, so the logger only collected about 3 days' worth of data, even though the sensors were installed for a week and a half. The batteries are held in place by a plastic strap, and this strap probably got soft in the sun, which allowed the batteries to work their way free.
I don't know what we can do to keep birds (or bees, they seem to really like the fence) away from the outside instruments, but here are a couple of other things I'll get for next time:
1. Cones or flags to mark a stay-clear zone around the fence.
2. Weights and guy lines for the ultrasonic sensor, after I get it back together.
3. A foam shim to secure the batteries in the cup anemometer datalogger.
Kyle, Chandra Starting step-down of heating of the Vertex RGA -> Closed calibration gas bottle isolation valves and reduced variacs (except the one supplying the turbo inlet) by 10% -> Will need access to continue this process every few hours throughout the day.
[Sheila, Kiwamu, Terra, Jenne]
Next up: Try dither loop to hold ETMX spot position in yaw to prevent spot position movement on BS. Separate SOFT loops to X and Y, use offset of XSOFT yaw to hold ETMX spot position constant. Alternatively (or simultaneously), dither BS and demodulate with several different signals at the same time, to understand better how the spot is moving.
We ended up moving PR2 (our uncontrolled recycling optic), and walking it with the POP_A offset. This allowed us to get to a PRC gain of 31.6 by the O1 standard, but without the sideband powers tanking. We think that this was promising, and will come back to it tomorrow. Once we decide that we're happy, we should re-do the green initial alignment setpoints yet again (including the beatnote PDs, which weren't done earlier today) to set this as our reference alignment. Attached is a big screenshot of where we were most happy. Also attached is the PR2 offset screen, time-machined back to before we started moving PR2.
TITLE: 09/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Sep
SHIFT SUMMARY: Lots of commissioning, a PSL trip, and a small earthquake.
LOG:
The h1susprocpi model had two entries in the testpoint.par file (dcuids 52 and 71). The correct ID is 71. This was causing a failure to open testpoints on this model. After the testpoint.par file was corrected, I restarted h1susprocpi and testpoints are now available.
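A duplicate entry like this is easy to catch with a quick scan of the file. Here is a sketch of such a check; the real testpoint.par layout may differ, so the assumed format (INI-style "[X-node&lt;dcuid&gt;]" headers, each followed by a "system=&lt;model&gt;" line) is hypothetical.

```python
# Hypothetical duplicate-entry check for a testpoint.par-style file.
# Assumed layout: "[X-node<dcuid>]" section headers followed by a
# "system=<model>" line; the real file format may differ.
import re
from collections import defaultdict

def duplicate_models(par_text):
    """Return {model: [dcuids]} for models listed under more than one dcuid."""
    models = defaultdict(list)
    dcuid = None
    for line in par_text.splitlines():
        line = line.strip()
        header = re.match(r"\[.*node(\d+)\]", line)
        if header:
            dcuid = int(header.group(1))
        elif line.startswith("system=") and dcuid is not None:
            models[line.split("=", 1)[1]].append(dcuid)
    return {name: ids for name, ids in models.items() if len(ids) > 1}
```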
Attached are trends of the various flow sensors in the PSL. Comparison of the timestamps shows that the high power oscillator tripped because of a flow rate problem in the AMP circuit. The problem does not lie within the 4 laser heads, leaving either the front end power amplifier or the 4 crystal chambers.
Unfortunately, checking the 4 crystal chambers requires dismantling the laser. The vortex flow sensor in the AMP circuit is relatively new. The other object in that circuit is the power amplifier module. An inspection mirror might be able to afford a view of the plumbing underneath the housing.
One can see that the flow rate in the AMP circuit drops before the output of the front end laser drops, and that the output power of the front end laser drops before that of the high power oscillator. I think the sequence of events is the following:
1. flow rate problem in the AMP circuit trips the flow watch dog of the front end laser
2. the switching off of the front end laser breaks the injection locking of the high power oscillator
3. the loss of injection locking results in a power drop in the high power oscillator which then trips the power watch dog
4. the power watch dog switches the high power oscillator off
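The four steps above amount to a chain of threshold checks. A sketch of that inferred cascade follows; the thresholds, numbers, and names here are invented purely for illustration, not actual PSL values.

```python
# Sketch of the inferred trip cascade. All thresholds and readings below are
# made up for illustration; they are not real PSL values.

def trip_sequence(amp_flow, frontend_out, hpo_out,
                  flow_min=0.5, fe_min=30.0, hpo_min=100.0):
    """Return the ordered list of events produced by the readings."""
    events = []
    if amp_flow < flow_min:                 # 1. AMP-circuit flow watchdog
        events.append("frontend flow watchdog trips -> front end off")
        frontend_out = 0.0
    if frontend_out < fe_min:               # 2. no seed -> injection lock broken
        events.append("injection locking of HPO breaks")
        hpo_out *= 0.1                      # 3. HPO power drops
    if hpo_out < hpo_min:                   # 4. power watchdog fires
        events.append("HPO power watchdog trips -> HPO off")
    return events
```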
Jeff spotted this this morning in the crystal chiller circuit filter. Wasn't there a couple of days ago. Might be the cause of last night's syncope.
Filed FRS #6132.
WP6121: Jonathan, Carlos, Jim, Dave
Tuesday Aug 30, the Remote Access Control system (RACCESS) was turned on during the times we have operator coverage (Mon-Fri, 8am - 4pm Pacific). This is a 'real life' test of the system, and an opportunity for everyone to get familiar with it before it is on full-time during O2. Any problems with the system should be communicated as an LHO-CDS FRS ticket.
To provide more visibility of who is logged into the border machine (cdsssh) I have expanded the RACCESS portion of the CDS_OVERVIEW medm (center right area).
WP 6131. Jonathan's new daqd code which is running on both h1fw0 and h1fw1 exports more signals via EPICS channels. At 12:42PDT today I restarted the DAQ with a new H1EDCU_DAQ.ini file which includes the full set of EPICS channels. I also added the standard set of channels for the h1fw2 frame writer.
I have extended the DAQ Overview medm to show trend file sizes, and added links to open the detailed screens for fw0 and fw1 which show the full diagnostic suite.
I redid our green initial alignment setpoints, in hopes that CHARD won't have such a large input offset when we try to engage the loops.
Over-filled CP3 with the exhaust bypass valve fully open and the LLCV bypass valve 1/2 turn open.
Flow was noted after 23 minutes 15 seconds; the LLCV valve was closed, and 3 minutes later the exhaust bypass valve was closed.
I raised CP3's LLCV from 19% to 20%, due to the time increase for the last two fills.
TITLE: 08/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: TJ
SHIFT SUMMARY: First half of today was dedicated to Kiwamu's SRC Gouy phase measurements. After lunch, I did an IA, and after some troubleshooting with Sheila and Jenne of ALS Diff, we are back to locking.
LOG:
16:20 Kissel taking TMSx TFs, Kiwamu starting SRC measurements
16:24 Keita to LVEA making ISS 2nd loop electronics measurements
16:39 Kiwamu to LVEA by ISCT6 for SRC Gouy meas.
16:44 Karen done at MY
16:48 Cristina done at MX
17:05 Betsy, John, and Chandra to MY
17:35 Betsy, John, and Chandra to MX
18:03 Kissel done with TMSx
19:31 Kiwamu done
19:34 Keita to LVEA for more 2nd loop meas.
20:29 Chandra to LVEA
20:43 Chandra done
21:21 Gerardo to MY
22:22 Gerardo done
22:56 Jim switching ISIs to Earthquake mode for incoming EQ
Only HAM2 required a WD counter reset. See attached screenshot.
The CO2 heating power was calculated based on Aidan's CO2 power vs. PSL power plot (alog25932). With the new thermal lensing measurement (alog28799) I fine-tuned the equation.
CO2 power = slope * PSL power + offset
ITMX slope was -0.01, now -0.012
ITMY slope was -0.01, now -0.014
ITMX offset was 0.5, now 0.6
ITMY offset was 0.3, now 0.4
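The updated relation above can be written out directly. The coefficient values come from this entry; the function itself is just a sketch.

```python
# The linear CO2 feed-forward relation (CO2 power = slope * PSL power + offset)
# with the updated coefficients from this entry. A sketch, not production code.

COEFFS = {  # optic: (slope [W/W], offset [W])
    "ITMX": (-0.012, 0.6),
    "ITMY": (-0.014, 0.4),
}

def co2_power(optic, psl_power):
    """CO2 heating power [W] requested for a given PSL power [W]."""
    slope, offset = COEFFS[optic]
    return slope * psl_power + offset
```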

Corrected actuation plot. Divided lensing by a factor of 2 to make it a single-pass value.

Elli (remotely), Kiwamu,
This morning we made another measurement of the SRC Gouy phase. As opposed to the single-bounce measurements done yesterday, we proceeded to a round-trip beam measurement.
I wanted to measure two different configurations as instructed by Elli, but I could get only one of them done today. We will spend (at least) another morning measuring the other configuration.
[Some background]
A trick in this whole series of measurements is that one can effectively cancel the effect of the output optical train (i.e., the optical path from SRM all the way to the setup on ISCT6, which is not easy to characterize precisely) by having measurements of both single-bounce and round-trip beams. By round-trip beam we mean a beam that bounces around the signal recycling cavity only once and comes out to the AS port. We finished the single-bounce measurement yesterday, and therefore the next step today was to measure the round-trip beam.
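Schematically, the cancellation works as follows (the notation here is illustrative, not from the entry): let $M_{\mathrm{out}}$ be the unknown response of the output train and $M_{\mathrm{src}}$ that of one SRC round trip. Then

```latex
\begin{align}
  M_{\mathrm{single}} &= M_{\mathrm{out}}, \\
  M_{\mathrm{round}}  &= M_{\mathrm{out}}\, M_{\mathrm{src}}, \\
  \Rightarrow\quad M_{\mathrm{src}}
      &= M_{\mathrm{out}}^{-1}\, M_{\mathrm{round}}
       = M_{\mathrm{single}}^{-1}\, M_{\mathrm{round}},
\end{align}
```

so the poorly characterized output train drops out once both measurements are in hand.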
[The setup]
I added an extra component to ISCT6: a beam-blocking object. Everything else was unchanged on ISCT6. See the picture of the setup shown below.
The blocking object (the black rectangular piece in the middle) blocks the single-bounce beam, which would otherwise cause undesired interference with the round-trip beam. To separate the beam into single-bounce and round-trip beams, I introduced an intentional misalignment of SRM by 700 urad in pitch, as suggested by Elli. In addition, I found that the separation of the two beams became even better with an additional misalignment (~100 urad) in pitch of PR2 as well. This misaligned configuration is one of the two configurations we wanted to test. The other configuration will introduce misalignment in another combination of optics, ITM and IM4, instead of SRM.
Once I misaligned SRM, CAM17 showed a clean beam separation (see the previous log) while ASAIR did not, as expected. I manually steered a mirror and a beam splitter in front of the cameras to center the round-trip beam on both cameras. The blocking object was removed when I finished the measurement this morning.
- - - - some other settings.
PSL power into IMC = 25 W
ITMY ring heater = 0.5 W (0.25 W for upper and lower segments each)
CO2Y = 286 mW
The interferometer configuration = single bounce (with ITMY aligned) + SRM almost aligned (see the second and third attachment for the specific alignment values)
The camera settings = same as the previous measurements (alog 29389)
ASAIR exposure time = 4400 usec
CAM17 exposure time = 7000 usec
[The measurement]
The measurement itself is the same as what we did yesterday -- excite BS or PR2 in yaw at a certain frequency and measure the centroid positions on the two GigE cameras. I ended up doing four sets of measurements, as described below, because I was worried that a high excitation may have introduced enough clipping somewhere to confuse the later analysis. By the way, Jenne later told me that some angular excitation signals had been unintentionally left on throughout the measurements, on BS and all the SR mirrors (in both pitch and yaw) at frequencies around 20 Hz. I don't think this is an issue, because they are small compared to my measurement excitation and the frequencies are different from my excitation frequency.
- measurement #1
18:06:40 - 18:16:40 UTC
BS yaw excitation by 6 urad at 0.2 Hz (ASAIR camera showed a clipping-type behavior)
- measurement #2
18:19:53 - 18:29:53 UTC
BS yaw excitation by 3 urad at 0.2 Hz
- measurement #3
18:34:15 - 18:44:15 UTC
PR2 yaw excitation by 20 urad at 0.2 Hz (ASAIR camera showed a clipping-type behavior)
- measurement #4
18:47:15 - 18:57:15 UTC
PR2 yaw excitation by 10 urad at 0.2 Hz
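One natural way to reduce the centroid data from these stretches is to demodulate each camera-centroid time series at the 0.2 Hz drive. A minimal sketch follows; this is illustrative only, not the actual analysis code (which Elli will run), and the sample rate and response amplitude in the example are invented.

```python
# Minimal single-frequency demodulation of a camera-centroid time series at
# the 0.2 Hz drive. Illustrative sketch only; not the actual analysis code.
import numpy as np

def demod(t, centroid, f_drive=0.2):
    """Complex response of `centroid` at f_drive (|.| = amplitude)."""
    lo = np.exp(-2j * np.pi * f_drive * t)
    return 2.0 * np.mean(centroid * lo)

# Example: a synthetic 10-minute stretch sampled at 16 Hz with a 3 px response
# (both numbers made up for the demo).
rng = np.random.default_rng(0)
t = np.arange(0, 600, 1.0 / 16)
c = 3.0 * np.sin(2 * np.pi * 0.2 * t + 0.4) + 0.05 * rng.standard_normal(t.size)
amp = abs(demod(t, c))   # recovers the ~3 px amplitude
```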
[The data]
A thorough analysis will be remotely performed by Elli. The data are saved in kiwamu.izumi/Public/measurements/20160831_SRCgouy2/data1
J. Kissel
While perusing the list of FRS tickets / Integration Issues that had been opened and related to me, I found an old issue -- now FRS #3246 -- that cites LHO aLOG 19208. The story from that aLOG: after the vent in the summer of 2015, the LF & RT OSEM signal chains had shown a factor of a few less gain than their values prior to the vent. I was tempted to close the issue with the cop out "well, we detected gravitational waves with TMSX like this...", but there happened to be some time this morning when the end stations were free. Also, given the typically-forgotten-about state of the TMTS, this transfer function had not been remeasured since just after the pump-down of the chamber -- not at final vacuum levels.
As such, I've remeasured the standard top-to-top transfer function of H1SUSTMSX, and found an even further drop in transfer function magnitude. The V and P transfer functions show a factor of 4 drop in response from the 2014 to the 2016 measurements. Additionally, and not mentioned in the original fault report (although present in the 2015 transfer functions), R also shows a drop from 2014 to 2016 of a factor of 2. See the first pdf attachment (alltmtss_2016-08-31_Phase3b_H1SUSTMSY_M1_ALL_ZOOMED_TFs.pdf). The drop in plant gain leaves the R, P, and especially V DOFs with little to no damping on resonance.
Digging even further, recall that TMTS top masses (M1) are rotated 90 deg with respect to the QUAD, so the DOF mapping (for the relevant DOFs in question) is
V --> LF and RT
R --> F1, F2, and F3
P --> LF and RT
Since L (= SD), T, and Y (= F2 and F3) look the same, we can rule out problems with F2, F3, and SD. This leaves LF, RT, and F1 as our suspect OSEMs. Unlike suspected before, I'm not sure this is an external electronics chain issue. Why? Because electronics chain problems typically show up clustered in an entire satellite amplifier or coil driver.
The TMTS OSEMs are grouped in the typical six-OSEM-stage fashion: F1, F2, F3, and LF on one cable chain, and RT and SD on another (see pg 3 of D1002741). I've also attached some new figures (standard output of the transfer function scripts, though posting them had fallen out of fashion) that compare the response in the OSEM basis to an Euler-basis drive. This isolates the individual sensor composition of each DOF. See the rest of the .pdf attachments (H1SUSTMSX_M1_*.pdf), which compare the 2014 and 2016 data sets in this manner. They show a consistent story:
- F1 alone (as opposed to F2 and F3) has dropped in sensitivity by a factor of 2.
- LF has dropped in sensitivity by a factor of 6, and RT has dropped by a factor of 3.
I then went on to wonder -- seeing the trend from 2014 to 2016 -- have the OSEM LEDs just slowly decayed in sensitivity over time? This launched the data viewer mining exercise behind all of the attached .pngs, H1SUS${OPTIC}_${TOPSTAGE}_OSEMINF_3yr_Trend.png. These are hourly trends of the mean value of each OSEM over the past three years. I was hoping to see H1SUSTMSX's F1, LF, and RT OSEMs show a slow but substantial downward decay in raw input ADC voltage over time, under the hypothesis that this represented a slow failure of those OSEMs' LEDs or PDs. Sadly, evidence from the other suspensions I checked shows that a random smattering of BOSEMs scattered around all BSC SUS show either flat behavior or a slow decay of at most ~3000 [ct]; the rest are flat in time. This drop in PD current is only about 10% of the full range (~30000 [ct]), so it cannot explain the factors of 2 to 6.
In conclusion, I've recommended that we close this ticket with the LONGTERMFIX resolution and open up a proper Integration Issue about it, marked WHEN VENT, because I have a feeling this will be easier to debug when we have access to the entire signal chain.
In the mean time, we can band-aid the problem by increasing the overall gain of the V, R, and P damping loops by the amount of plant sensitivity that has been lost.
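The band-aid amounts to multiplying each affected loop gain by the measured plant drop. A sketch, using the factors quoted above; the current gain values in the example are hypothetical, and the real change would of course go through the filter-bank gain fields and SDF.

```python
# Band-aid gain compensation sketch: scale each damping-loop gain by the
# measured plant-response drop quoted above. Example gain values are
# hypothetical.

PLANT_DROP = {"V": 4.0, "R": 2.0, "P": 4.0}   # response lost, 2014 -> 2016

def compensated_gain(dof, current_gain):
    """Damping gain rescaled to restore the original open-loop gain."""
    return current_gain * PLANT_DROP.get(dof, 1.0)
```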
Expecting PT120 to be 3 x 10^-9 torr -> Don't see logged activity to explain -> am on the road and not able to trend data etc.
Probably gauge problem since 114 did not follow.
Some more gauges -
The pressure change was due to incorrect wiring of PT-120 during the Beckhoff transition, and was brought to light when Gerardo incorporated signal wiring for AIPs and needed to turn off PT180 this week. The pressure reading is now correct (and the same as it was in Q2 2016). We should check other gauges to be sure they are wired properly and reading correct pressures. Wearing laser safety glasses contributes to issues like this because visibility is compromised.
J. Kissel
In short: the ETM ESD bias signs have now been flipped. After whining about it since ER9, I've figured out why the bias sign flipping had caused the ALS DIFF control to go unstable (and subsequently DARM, once we get onto ETMY): controlling the loop gain sign beyond the ESD linearization just doesn't work. As such, I've restored all settings for both test masses to their successful settings from back in April -- namely those from LHO aLOG 26826. We've thus returned to the aesthetically displeasing but functional method of controlling the DARM loop sign in the DRIVEALIGN matrix. We still have yet to assess the impact on PI damping.
----------------------
Explicit details for the next time this becomes confusing:
ETMX
(1) Changed H1:SUS-ETMX_L3_LOCK_INBIAS from -9.5 [V] to +9.5 [V]
(2) Changed H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN from +1.0 to -1.0
    H1:SUS-ETMX_L3_DRIVEALIGN_L2P_GAIN from +0.021 to -0.021
    H1:SUS-ETMX_L3_DRIVEALIGN_L2Y_GAIN from +0.007 to -0.007
(3) Changed H1:SUS-ETMX_L3_ESDOUTF_LIN_FORCE_COEFF from -124518.4 to +124518.4
(4) Made sure all H1:SUS-ETMX_L3_ESDOUTF_??_GAIN fields are +1 always, all the time, as before.
(No changes needed to the calibration model since we don't use ETMX in our lowest noise state.)
ETMY
(5) Changed H1:SUS-ETMY_L3_LOCK_INBIAS from +9.5 [V] to -9.5 [V]
(6) Changed H1:SUS-ETMY_L3_DRIVEALIGN_L2L_GAIN from +30.0 [V] to -30.0 [V]
(7) Changed H1:SUS-ETMY_L3_ESDOUTF_LIN_FORCE_COEFF from +124518.4 to -124518.4 (even though ETMY doesn't use linearization).
(8) Made sure all H1:SUS-ETMX_L3_ESDOUTF_??_GAIN fields are +1 always, all the time, as before.
(9) Changed H1:CAL-CS_DARM_FE_ETMY_L3_DRIVEALIGN_L2L_GAIN from +30 to -30
(10) Changed H1:CAL-CS_DARM_FE_ETMY_L3_ESDOUTF_UL_GAIN from -1 to +1, where it should remain always, all the time, as before.
(11) Changed H1:CAL-CS_DARM_FE_ETMY_L3_ESDOUTF_LIN_FORCE_COEFF from +124518.4 to -124518.4 (even though ETMY doesn't use linearization).
I've also redone (essentially reverted) the DOWN state in the ISC_LOCK guardian with respect to the ETM ESD settings, such that steps 2-3 and 6-11 are done automatically if a user does steps 1 and 5. Once we figure out the impact on PI damping, we'll code these up in the DOWN state of the ISC_LOCK guardian as well. Finally, I've accepted these changes into the H1SUSETMX down.snap (to which its safe.snap is a soft link), the H1SUSETMY down.snap (to which its safe.snap is a soft link), and the H1CALCS safe.snap and OBSERVE.snap SDF files.
I have flipped PI ETM damping gain signs and confirmed successful damping many many times now. I've added a bias flip check to the SUS_PI guardian under the PI_DAMPING state; this will choose sign of gain based on sign of ETM bias.
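The sign logic is simple enough to sketch. The real SUS_PI guardian reads the bias over EPICS; the convention shown here (positive bias takes positive gain) is an assumption for illustration, and the actual convention lives in the guardian's PI_DAMPING state.

```python
# Sketch of the bias-sign check added to SUS_PI. The positive-bias ->
# positive-gain convention is assumed for illustration only.

def pi_damping_gain(bias_volts, nominal_gain):
    """Pick the PI damping-gain sign to track the ESD bias sign."""
    sign = 1.0 if bias_volts >= 0 else -1.0
    return sign * abs(nominal_gain)
```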
Executive summary:
* Good news - as expected, the 16-Hz comb due to the OMC length dither is gone (at least at this sensitivity level)
* Bad news - low-frequency 1-Hz combs remain, and some new low-frequency combs & lines have appeared
Some details:
I analyzed the 56.8406 Hz comb with the coherence tool, and here are the results. The same structure is found to be significant in 35 channels in ER9, distributed among the ISI, SUS, PEM, and LSC subsystems. Of the 35 channels, 22 do not have a range up to the 11th harmonic, 625.25 Hz.
Keith indicated in his slog entry that a DAQ malfunction is suspected to be the ultimate source of this, and these findings suggest it's in an EX electronics crate.
Here are a few interesting observations:
The 9th harmonic, at 511.56 Hz, is the weakest in most channels, sometimes buried in noise.
In some PEM channels, there are missing lines at low frequencies (< 200 Hz) and high frequencies (> 500 Hz).
In PEM and ISI channels, there seems to be another coexisting comb structure with a frequency slightly larger than 56.8406 Hz. That one is usually most significant at its third harmonic.
Generally, the structure is seen more clearly in LSC, SUS, and ISI channels.
Sample plots from each subsystem:
Figure 1: The 56.8406 Hz comb structure in ISI, with its 9th harmonic the weakest.
Figure 2: PEM channels have more noise and, as in ISI channels, the other comb structure coexists.
Figure 3: SUS channels do not have enough range up to the 11th harmonic, but we can see the first and second harmonics here.
Figure 4: There is only one channel from LSC, but the structure is very clear.
All plots and a list of channels are attached in the zip file.
Just to be clear, here are the channels in which the coherence tool is finding the comb. This is what supports Keith's assumption that the problem could be in an EX electronics crate.
Channel list:
H1:ISI-ETMX_ST2_BLND_RX_GS13_CUR_IN1_DQ_data
H1:ISI-ETMX_ST2_BLND_RY_GS13_CUR_IN1_DQ_data
H1:ISI-ETMX_ST2_BLND_RZ_GS13_CUR_IN1_DQ_data
H1:ISI-ETMX_ST2_BLND_X_GS13_CUR_IN1_DQ_data
H1:ISI-ETMX_ST2_BLND_Y_GS13_CUR_IN1_DQ_data
H1:ISI-ETMX_ST2_BLND_Z_GS13_CUR_IN1_DQ_data
H1:LSC-X_TR_A_LF_OUT_DQ_data
H1:PEM-EX_ACC_BSC9_ETMX_Y_DQ_data
H1:PEM-EX_ACC_BSC9_ETMX_Z_DQ_data
H1:PEM-EX_ACC_ISCTEX_TRANS_X_DQ_data
H1:PEM-EX_ACC_VEA_FLOOR_Z_DQ_data
H1:PEM-EX_MIC_VEA_MINUSX_DQ_data
H1:PEM-EX_MIC_VEA_PLUSX_DQ_data
H1:ISI-ETMX_ST1_BLND_Y_T240_CUR_IN1_DQ_data
H1:ISI-ETMX_ST1_BLND_Z_T240_CUR_IN1_DQ_data
H1:ISI-GND_STS_ETMX_X_DQ_data
H1:ISI-GND_STS_ETMX_Y_DQ_data
H1:PEM-EX_MAINSMON_EBAY_1_DQ_data
H1:PEM-EX_MAINSMON_EBAY_2_DQ_data
H1:PEM-EX_MAINSMON_EBAY_3_DQ_data
H1:PEM-EX_SEIS_VEA_FLOOR_X_DQ_data
H1:PEM-EX_SEIS_VEA_FLOOR_Y_DQ_data
H1:SUS-ETMX_L1_WIT_Y_DQ_data
H1:SUS-ETMX_L2_WIT_L_DQ_data
H1:SUS-ETMX_L2_WIT_P_DQ_data
H1:SUS-ETMX_L2_WIT_Y_DQ_data
H1:SUS-ETMX_M0_DAMP_L_IN1_DQ_data
H1:SUS-ETMX_M0_DAMP_P_IN1_DQ_data
H1:SUS-ETMX_M0_DAMP_T_IN1_DQ_data
H1:SUS-ETMX_M0_DAMP_V_IN1_DQ_data
H1:SUS-ETMX_M0_DAMP_Y_IN1_DQ_data
I chased Comb 23 (type K) in Keith’s post, shown in Keith's original post as
This comb has an offset of 153.3545 Hz and a fundamental frequency of 0.0884 Hz. It starts at 153.3545 Hz and goes up to its 11th harmonic, 154.3272 Hz, as listed in Keith's txt file:
Comb 23 (type K, offset=153.354500):
Frequency (offset + harmonic x fund freq)   Ampl (m/rtHz)   Bar (logarithmic)
K 153.3545 (  0 X 0.0884)   1.844961e-19   ****
K 153.4429 (  1 X 0.0884)   1.949756e-19   ****
K 153.5314 (  2 X 0.0884)   2.165192e-19   *****
K 153.6198 (  3 X 0.0884)   2.181833e-19   *****
K 153.7082 (  4 X 0.0884)   2.457840e-19   *****
K 153.7966 (  5 X 0.0884)   2.243089e-19   *****
K 153.8851 (  6 X 0.0884)   2.709562e-19   *****
K 153.9735 (  7 X 0.0884)   2.499596e-19   *****
K 154.0619 (  8 X 0.0884)   2.562208e-19   *****
K 154.1503 (  9 X 0.0884)   1.945817e-19   ****
K 154.2388 ( 10 X 0.0884)   1.951777e-19   ****
K 154.3272 ( 11 X 0.0884)   1.703353e-19   ****
I found the comb structure in two channels of ISI subsystem.
Figure 1 shows the plot of channel H1:ISI-HAM6_BLND_GS13RZ_IN1_DQ. Descriptions of this channel can be found here:
https://cis.ligo.org/channel/314371
Figure 2 shows the plot of channel H1:ISI-HAM6_BLND_GS13Z_IN1_DQ. Descriptions of this channel can be found here:
https://cis.ligo.org/channel/314374
In the plots of both channels, we can see a comb structure standing out at the positions of the harmonics. We are wondering about the reason for this:
Why these seismic isolation channels?
This post is supplementary to the first post about coherence analysis result for the 56.8406Hz Comb at
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=28619
The first post addresses the 56.8406 Hz comb found in Keith's original post (marked as the D comb):
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=28364
Information about this comb from the txt file in Keith's post:
Comb 35 (type D, offset=0.000000):
Frequency (offset + harmonic x fund freq)   Ampl (m/rtHz)   Bar (logarithmic)
D  56.8406 (  1 X 56.8406)   3.968800e-17   ***********
D 113.6811 (  2 X 56.8406)   1.773964e-17   **********
D 170.5217 (  3 X 56.8406)   7.121580e-18   *********
D 227.3622 (  4 X 56.8406)   3.232935e-18   ********
D 284.2028 (  5 X 56.8406)   1.166094e-18   *******
D 341.0433 (  6 X 56.8406)   1.007273e-18   *******
D 397.8839 (  7 X 56.8406)   5.962059e-19   ******
D 454.7245 (  8 X 56.8406)   3.752194e-19   *****
D 511.5650 (  9 X 56.8406)   2.577108e-19   *****
D 568.4056 ( 10 X 56.8406)   1.964393e-19   ****
D 625.2461 ( 11 X 56.8406)   1.891774e-19   ****
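The harmonic frequencies in the table, and the "does this channel have range up to the 11th harmonic" criterion, can be reproduced with a few lines. A sketch (the coherence tool itself is not shown):

```python
# Generate comb harmonic frequencies (offset + n x fundamental) and apply the
# "range up to the 11th harmonic" criterion against a channel's Nyquist
# frequency. A sketch; the coherence tool itself is not shown.

def comb_freqs(fund, offset=0.0, n_max=11):
    """Harmonic frequencies offset + n*fund for n = 1..n_max."""
    return [offset + n * fund for n in range(1, n_max + 1)]

def resolvable(freqs, sample_rate):
    """Subset of freqs below the channel's Nyquist frequency."""
    return [f for f in freqs if f < sample_rate / 2.0]

d_comb = comb_freqs(56.8406)   # 56.8406 Hz up to ~625.25 Hz
```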
Besides the 35 channels found in the original post, 7 more channels are found to be relevant to the 56.8406 Hz comb. Two new subsystems, ASC and HPI, are involved.
These new channels are:
H1:ASC-X_TR_A_NSUM_OUT_DQ
H1:ASC-X_TR_B_NSUM_OUT_DQ
H1:HPI-ETMX_BLND_L4C_Y_IN1_DQ
H1:HPI-ETMX_BLND_L4C_Z_IN1_DQ
H1:PEM-EX_ACC_BSC9_ETMX_X_DQ
H1:SUS-ETMX_L1_WIT_L_DQ
H1:SUS-ETMX_L1_WIT_P_DQ
So the updated channel list is (42 channels in total):
H1:ASC-X_TR_A_NSUM_OUT_DQ
H1:ASC-X_TR_B_NSUM_OUT_DQ
H1:HPI-ETMX_BLND_L4C_Y_IN1_DQ
H1:HPI-ETMX_BLND_L4C_Z_IN1_DQ
H1:ISI-ETMX_ST1_BLND_RX_T240_CUR_IN1_DQ
H1:ISI-ETMX_ST1_BLND_RY_T240_CUR_IN1_DQ
H1:ISI-ETMX_ST1_BLND_RZ_T240_CUR_IN1_DQ
H1:ISI-ETMX_ST1_BLND_X_T240_CUR_IN1_DQ
H1:ISI-ETMX_ST1_BLND_Y_T240_CUR_IN1_DQ
H1:ISI-ETMX_ST1_BLND_Z_T240_CUR_IN1_DQ
H1:ISI-ETMX_ST2_BLND_RX_GS13_CUR_IN1_DQ
H1:ISI-ETMX_ST2_BLND_RY_GS13_CUR_IN1_DQ
H1:ISI-ETMX_ST2_BLND_RZ_GS13_CUR_IN1_DQ
H1:ISI-ETMX_ST2_BLND_X_GS13_CUR_IN1_DQ
H1:ISI-ETMX_ST2_BLND_Y_GS13_CUR_IN1_DQ
H1:ISI-ETMX_ST2_BLND_Z_GS13_CUR_IN1_DQ
H1:ISI-GND_STS_ETMX_X_DQ
H1:ISI-GND_STS_ETMX_Y_DQ
H1:LSC-X_TR_A_LF_OUT_DQ
H1:PEM-EX_ACC_BSC9_ETMX_X_DQ
H1:PEM-EX_ACC_BSC9_ETMX_Y_DQ
H1:PEM-EX_ACC_BSC9_ETMX_Z_DQ
H1:PEM-EX_ACC_ISCTEX_TRANS_X_DQ
H1:PEM-EX_ACC_VEA_FLOOR_Z_DQ
H1:PEM-EX_MAINSMON_EBAY_1_DQ
H1:PEM-EX_MAINSMON_EBAY_2_DQ
H1:PEM-EX_MAINSMON_EBAY_3_DQ
H1:PEM-EX_MIC_VEA_MINUSX_DQ
H1:PEM-EX_MIC_VEA_PLUSX_DQ
H1:PEM-EX_SEIS_VEA_FLOOR_X_DQ
H1:PEM-EX_SEIS_VEA_FLOOR_Y_DQ
H1:SUS-ETMX_L1_WIT_L_DQ
H1:SUS-ETMX_L1_WIT_P_DQ
H1:SUS-ETMX_L1_WIT_Y_DQ
H1:SUS-ETMX_L2_WIT_L_DQ
H1:SUS-ETMX_L2_WIT_P_DQ
H1:SUS-ETMX_L2_WIT_Y_DQ
H1:SUS-ETMX_M0_DAMP_L_IN1_DQ
H1:SUS-ETMX_M0_DAMP_P_IN1_DQ
H1:SUS-ETMX_M0_DAMP_T_IN1_DQ
H1:SUS-ETMX_M0_DAMP_V_IN1_DQ
H1:SUS-ETMX_M0_DAMP_Y_IN1_DQ
Attached images are sample plots from the ASC and HPI subsystems.
Full results are also attached.
Here are the coherence search results for all the single lines in the ER9 data, which are listed in Keith's post. I found 29 of the 198 lines on the list and posted the results on my homepage here:
https://ldas-jobs.ligo-wa.caltech.edu/~duo.tao/ER9_single_lines/index.html
The Verbal Alarms code was logging to the ops home directory. Prior to the move of this home directory (WP5658), I modified the code to log to a new directory: /ligo/logs/VerbalAlarms. We restarted the program at 14:04 and verified the log files are being written correctly.
These verbal log files actually live one level deeper, in /ligo/logs/VerbalAlarms/Verbal_logs/. For the current month, the log files live in that folder; at the end of every month, they're moved into dated subfolders, e.g. /ligo/logs/VerbalAlarms/Verbal_logs/2016/7/. The text files themselves are named "verbal_m_dd_yyyy.txt". Unfortunately, these are not committed to a repo where the logs might be viewed off site. Maybe we'll work on that. Happy hunting!
The Verbal logs are now copied over to the web-exported directory via a cronjob. Here, they live in /VerbalAlarms_logs/$(year)/$(month)/
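The year/month sorting the cronjob performs can be sketched as a filename-to-destination mapping. The actual job is presumably a shell script; only the paths and the "verbal_m_dd_yyyy.txt" naming come from this entry, and the parsing below is an illustrative assumption.

```python
# Sketch of the export cronjob's year/month sorting: derive the destination
# subfolder from a "verbal_m_dd_yyyy.txt" filename. Illustrative only; the
# real job is presumably a shell script.
import re
from pathlib import PurePosixPath

EXPORT_ROOT = PurePosixPath("/VerbalAlarms_logs")

def export_path(filename):
    """Destination under the web-exported tree for one verbal log file."""
    m = re.fullmatch(r"verbal_(\d{1,2})_(\d{2})_(\d{4})\.txt", filename)
    if m is None:
        raise ValueError("unexpected log name: " + filename)
    month, _day, year = m.groups()
    return EXPORT_ROOT / year / str(int(month)) / filename
```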
The logs in /ligo/logs/VerbalAlarms/Verbal_logs/ will now always be in their month's subfolder, even the current ones.