TITLE: 03/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
IFO has been down for Commissioning all day in search of some un-wantedness on an ITM surface.
LOG:
Run BRS Drift Mon check: results are posted below. BRS-X looks good. BRS-Y has been in a downward trend since 02/15/2017. It crossed the lower warning line around 03/22/2017. With the exception of a couple of spikes up to zero, it has been below the warning line since.
Start of downward trend is 03/15/2017, not 02/15/2017.
Laser Status:
SysStat is good
Front End Power is 34.09W (should be around 30 W)
HPO Output Power is 165.3W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0 days, 6 hr 15 minutes (should be days/weeks)
Reflected power = 16.57Watts
Transmitted power = 60.79Watts
PowerSum = 77.36Watts.
FSS:
It has been locked for 0 days 4 hr and 17 min (should be days/weeks)
TPD[V] = 3.368V (min 0.9V)
ISS:
The diffracted power is around 3.7% (should be 3-5%)
Last saturation event was 0 days 4 hours and 45 minutes ago (should be days/weeks)
Possible Issues:
Xtal Chiller "light" is showing an intermittent alarm on the Status Screen. 300 ml of water was added.
model restarts logged for Tue 28/Mar/2017
2017_03_28 11:26 h1omc
2017_03_28 11:27 h1calcs
2017_03_28 11:30 h1broadcast0
2017_03_28 11:30 h1dc0
2017_03_28 11:30 h1fw0
2017_03_28 11:30 h1fw1
2017_03_28 11:30 h1fw2
2017_03_28 11:30 h1nds0
2017_03_28 11:30 h1nds1
2017_03_28 11:30 h1tw1
2017_03_28 12:47 h1calcs
2017_03_28 12:49 h1broadcast0
2017_03_28 12:49 h1dc0
2017_03_28 12:49 h1fw0
2017_03_28 12:49 h1fw1
2017_03_28 12:49 h1fw2
2017_03_28 12:49 h1nds0
2017_03_28 12:49 h1nds1
2017_03_28 12:49 h1tw1
maintenance day, new calcs and omc code, associated DAQ restarts.
model restarts logged for Mon 27/Mar/2017 - Fri 24/Mar/2017 No restarts reported
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 201 seconds. LLCV set back to 17.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 1137 seconds. LLCV set back to 33.0% open.
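The logged fill sequence above can be sketched as a small routine. This is a minimal illustration, not the site's actual CDS code: `put` is a hypothetical stand-in for an EPICS-style setter, and `fill_done` for whatever sensor indicates the fill is complete.

```python
import time

def overfill_pump(put, cp, open_pct, nominal_pct, fill_done, max_wait_s=1800):
    """Run the manual LLCV overfill sequence described in the log entry.

    put(setting, value) is a hypothetical setter standing in for an EPICS
    caput; fill_done() returns True once the fill is detected complete.
    Returns the elapsed fill time in seconds.
    """
    put(f"{cp}:LLCV_ENABLE", 1)          # LLCV enabled
    put(f"{cp}:LLCV_MODE", "MANUAL")     # LLCV set to manual control
    put(f"{cp}:LLCV_POS", open_pct)      # open wide for the fill
    start = time.monotonic()
    while not fill_done():
        if time.monotonic() - start > max_wait_s:
            raise TimeoutError(f"{cp} fill did not complete")
        time.sleep(0.01)
    elapsed = time.monotonic() - start
    put(f"{cp}:LLCV_POS", nominal_pct)   # LLCV set back to nominal opening
    return elapsed
```

For CP3 above this would be called with `open_pct=50` and `nominal_pct=17.0`; the 201 s and 1137 s fill times correspond to the returned `elapsed`.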
Last night we had some decent winds so here is a comparison of the ITMY STS (STS2-B) and the HAM5 STS (STS2-C) with STS2-C at the Roam1 location--guess I'm going to have to make a drawing.
Spectra starting at 1845 UTC 28 March, winds pretty steadily above 15mph with gusts to 30, from 30 to 50 degrees (hitting the SW corner of the LVEA building.) This wind state was pretty consistent for nearly four hours; more than enough for ten 1mHz averages. See attachment the first.
The second attachment is X, Y, and Z spectra plots comparing STS2 B & C during the windy period last night and a calm reference time of March 24--seen also in aLog 35075. The Reference traces are from the quiet time and the current traces are from last night. Notice the change in the microseisms, especially the primary (around 60 mHz), and how the tilt impacting the X & Y traces pretty well obliterates this primary. Is that the tertiary showing in the references around 0.18 Hz? Sorry about the cryptic title: CurrWind 20 Y-X means the current wind is 20 mph from the +Y -X direction (45 dataviewer degrees, or true South [for LHO]).
Not sure what to say about the differences in the Z signals between 1 & 7mHz during the windy time... But, as for X & Y tilt susceptibility of the ITMY and the Roam1 positions, I'd say they are the same. Time to move to Roam2.
J. Kissel, J. Betzwieser
Patrick had called out that line features in the DARM sensitivity around 35 Hz were "broadening and contracting" in a recent lock stretch starting 2017-03-29 11:10 UTC, which impacted the inspiral range (see LHO aLOG 35173). These features are the 35.9, 36.7, and 37.3 Hz calibration lines actuated by the ETMY SUS, PCAL, and overall DARM_CTRL, respectively. What Patrick has seen is classic alignment-related up-conversion, where low-frequency (between 0.05-10 Hz) angular motion mixes with the intentionally and necessarily loud calibration lines and creates side-bands on the CAL lines that mirror the ASD of the angular motion. We've been anecdotally/casually/qualitatively seeing this several times in the recent past. This time, however, it seems to have gone so far as to pollute the calculation of the time-dependent correction factors, which are applied to the astrophysical output -- namely the relative ESD/TST stage actuation strength change ("kappa_TST") and the relative optical gain change ("kappa_C"). Since the data were so noisy around these lines (the SNR dropped from the typical ~500 to less than 10), this dropped the coherence between excitation and DARM and increased the uncertainty beyond our pre-defined thresholds for good data. As such, I recommend that DetChar flag this data stretch as garbage wherever bits 13 & 15 ("kappa_TST median" and "kappa_PU median") of the H1:GDS-CALIB_STATE_VECTOR are red during this observational stretch.
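The gating logic described here -- a calibration line's SNR collapsing from ~500 to below 10 and tripping the good-data thresholds -- can be sketched as follows. This is an illustrative stand-in, not the actual GDS/TDCF pipeline code; the `snr_min` threshold and bandwidths are assumptions chosen to match the numbers quoted in the log.

```python
import numpy as np

def line_snr(freqs, asd, f_line, bw=0.1):
    """Estimate a calibration line's SNR as the peak ASD in a narrow band
    around f_line divided by the median ASD of the surrounding background."""
    in_line = np.abs(freqs - f_line) < bw / 2
    near = (np.abs(freqs - f_line) < 10 * bw) & ~in_line
    return asd[in_line].max() / np.median(asd[near])

def flag_bad_kappas(freqs, asd, lines=(35.9, 36.7, 37.3), snr_min=10.0):
    """Hypothetical gating: declare the TDCFs untrustworthy if any of the
    35 Hz calibration lines drops below snr_min (the log quotes a typical
    SNR of ~500, falling below 10 during the bad stretch)."""
    return any(line_snr(freqs, asd, f) < snr_min for f in lines)
```

With a flat background ASD and loud peaks at the three line frequencies, `flag_bad_kappas` returns False; with the lines buried in the noise floor it returns True.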
I attach a good deal of plots to help support the interrelated case between:
- The DELTAL_EXT sensitivity of the detector, zoomed in around both the 2150 Hz OMC dither lines and the CAL lines
- OMC Dither Alignment Control (whose error signals are informed by 4 evenly spaced lines between 2125 and 2200 Hz)
- OMC SUS and OM3 alignment
- DHARD Pitch Control
- The noisy kappa_TST and kappa_C
and then other plots that show it's NOT causally related to:
- Earthquakes (the recent EQ at 4:30 UTC was done by then)
- ETMY Optical Lever Excursions (ETMY's optical lever went off into the weeds, but what's shown in the BLRMS is just the ring-down of the low-pass filter)
- 1-30 Hz GND Motion (the increase happens *after* this alignment excursion)
- Large differential motion of the ETMs (ETMX X and ETMY Y show the same ISI ST1 performance at 11:30 UTC and 03:00 UTC, which are bad and good times, respectively)
I don't have any speculation as to which of the inter-related symptoms is the actual cause of the problem, and I don't have a clue as to why the OMC dither line heights are so much louder during the bad time. I'll ask around for speculation from other members of the commissioning team.
The difference in the calibration lines and the OMC dither heights seems to be mostly an increase in sidebands at about 0.4 Hz. The first attachment is a zoomed-in spectrum of the OMC dither at 2225.1 Hz compared to the time just before the bad time. The dither line itself isn't very visible in DARM (since the alignment loop should suppress it); the difference in the feature around that frequency is sidebands spaced by about 0.4 Hz. The second attachment is the 331.whatever Hz calibration line, which has similar sidebands. I think the same is true for the three lines near 36 Hz, but they overlap and it's hard to sort out. At the same time, the test mass oplevs all show an increase in the 0.43 Hz pitch mode. Attachment three is the ETMX spectrogram. There's an increase in angular motion on other optics, like the OMC, especially at this frequency and its double, but I would guess that they're just following the motion of the test masses. I'll point out again that the ITMY oplev signal looks very strange (final attachment). Could something wrong with the oplev damping ring up the 0.43 Hz mode?
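The mechanism being described -- slow angular motion amplitude-modulating a loud line, producing sidebands at the modulation frequency -- is easy to demonstrate synthetically. A minimal sketch, with frequencies chosen to match the 2225.1 Hz dither line and ~0.4 Hz modulation quoted above (the modulation depth is an arbitrary illustrative value):

```python
import numpy as np

# A low-frequency (~0.4 Hz) angular motion amplitude-modulating a loud line
# produces sidebands at f0 +/- 0.4 Hz.  Synthetic demonstration:
fs, T = 8192, 50          # sample rate [Hz], duration [s] -> df = 0.02 Hz
f0, fm, m = 2225.1, 0.4, 0.2   # carrier, modulation freq, modulation depth
t = np.arange(int(fs * T)) / fs
x = (1 + m * np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
carrier = np.argmax(spec)                 # loudest bin is the carrier
# The strongest bin just above the carrier is the upper modulation sideband
upper = carrier + 1 + np.argmax(spec[carrier + 1:carrier + 200])
sideband_offset = freqs[upper] - freqs[carrier]
```

The measured `sideband_offset` comes out at the modulation frequency, 0.4 Hz, mirroring the spacing seen around the dither and calibration lines.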
@Andy, DetChar, et al. -- I think you may be on to something, but I'm not sure I would call the ITMY optical lever the *cause* just yet, per se. Recall that there is radiation-pressure coupling to the ~0.4 Hz pitch mode in the arm cavities of the IFO (G1600864), so the ITM optical lever may just be another witness to the problem (i.e. something goes bad that kicks up the instability, the ITMs see it, and the OpLev Damping / ASC / LSC system tries to follow / control it) -- and the signals may look particularly bad (i.e. the harmonics) because the optical lever spot is clipping, or on the edge of the QPD, or something. I agree that it's getting worse (this problem is happening more frequently). I say this because, continuing to browse around the summary pages, I see two other recent examples of this whole inter-related badness that impacts the 35 Hz calibration lines:
- 2017-03-24 from ~01:30 - 02:45 UTC: BNS Range, DARM Spectrogram, 35 Hz CAL Lines and harmonics, kappa_C TDCF, DHARD Control, OM3 Pointing, Beam Alignment into the OMC Cavity, ITMY Pitch via Optical Lever
- 2017-03-22 from ~04:45 - 05:00 UTC: BNS Range, DARM Spectrogram, 35 Hz CAL Lines and harmonics, kappa_C TDCF, DHARD Control, OM3 Pointing, Beam Alignment into the OMC Cavity, ITMY Pitch via Optical Lever
But I see *tons* of examples of the ITMY bad news over the past several months that don't seem to have similar effects. (PS: it would be *super* awesome if we could create a custom summary page with all of the above graphics / cross-coupled info; it would save us from concatenating plots from 15 pages and lend itself to better pattern recognition.) The other thing is that, at least from what I've found and with what precision I can get on a 24 hr trend from the summary pages (which is not precise at all), the ITMY optical lever problems seem to come and go slowly, whereas the OMC alignment, CAL line problems, and range decay all seem to happen a bit more suddenly. Also -- how fine a resolution spectra are you gathering on this?
Can you resolve whether the harmonics you see are 0.43 vs 0.47 Hz? The spectra / spectrograms on the summary pages don't have this kind of precision, and as far as I know, there's no cursor feature on LIGO DV, so I worry that you're colloquially referring to the wrong mode, which may be confusing the study -- especially since these modes move around with Sigg-Siddles radiation pressure stiffening / softening (see, e.g. G1600509). Did we see any noticeable change after Jason swapped out the ITMY laser on March 7th (see LHO aLOG 34646)? I couldn't find any conclusive correlation between the swap, the above mentioned calibration line shoulder / harmonics, and the badness of ITMY after browsing through the summary pages from Feb 1 to today.
Jeff, as requested, find attached a 2mHz-resolution FFT, with appropriate markings for 0.43 and 0.47 Hz side-bands. It looks like the side-bands are actually 0.454 Hz, if that means anything to anybody.
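As a rule of thumb for the resolution question above: an FFT's bin spacing is the reciprocal of the analyzed duration, so a 2 mHz-resolution spectrum needs at least ~500 s of data, which comfortably separates 0.43, 0.454, and 0.47 Hz. A one-liner for the arithmetic:

```python
def fft_duration_for_resolution(df_hz):
    """Minimum data duration for an FFT with frequency resolution df_hz:
    the bin spacing of an FFT is the reciprocal of the analyzed duration."""
    return 1.0 / df_hz

# fft_duration_for_resolution(0.002) -> 500 s for a 2 mHz-resolution FFT
```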
TITLE: 03/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 9mph 5min avg
Primary useism: 0.21 μm/s
Secondary useism: 0.38 μm/s
QUICK SUMMARY:
Site Activities:
H1 Activities:
Update as of 18:37UTC (11:37PT):
Site Activities:
H1 Activities:
After yesterday's abuse, the BRSY was just not ready to contribute to the cause of science. After having the thermal enclosure breached for an hour or so, a big jolt when the remote desktop session was terminated, and at least one (or was it two?) restarts of the code, I thought things had finally settled down, but it was not so.
See the attached plot the first. Note the upward trend and noisy steps in DRIFTMON. This channel, the low-frequency beam position, should be smooth and quiet. When the signal steps, notice the RX (tilt) and VEL signals also going noisy and haywire. When I left the BRS last evening, the DRIFTMON was running smoothly (from about 0100 to 0200 UTC). But then it started the step behavior again and remained unusable until restarting this morning.
Also attached, the second, is two months of the DRIFTMON and an internal temperature signal. Obviously yesterday's incursion was a good cool-down, and the downward trend indicates we had better do another beam centering within a week or two at the longest.
The last plot is the most recent 2 hours, and it appears that the periodic step-and-noise behavior has stopped. The large step down in the REF signal may indicate that Krishna's suspicion was correct: the reference image captured by the camera was bad. The direction of the DRIFTMON suggests the BRS is still warming; I'll keep a close eye on this today to watch for errant tendencies.
Summary: Injections suggest that vibration of the input beam tube in the 8-18 Hz band strongly couples to DARM and is the dominant source of noise in the 70-200 Hz band of DARM for transient truck and fire pump signals, and likely also for the continuous signal from the HVAC. Identification of the coupling site is based on the observation that local shaking of the input beam tube produces noise levels in DARM similar to those produced by global corner station vibrations from the fire pump and other sources, for similar RMS at an accelerometer under the Swiss cheese baffle by HAM2. The local shaker injections were insignificant on nearby accelerometers or HEPI L4Cs, and HEPI excitations cannot account for the noise, supporting a local, off-table coupling site in the IMC beam tube. In addition, local vibration injections occasionally produce a 12 Hz broad-band comb, which is also produced by trucks and the fire pump, possibly indicating a 12 Hz baffle resonance. While the Swiss cheese baffle seems the most likely coupling site, we have not yet eliminated the eye baffle by HAM3.
Several recent observations have suggested that we are limited by noise in the 100 Hz region that is produced by vibrations in the 10-30 Hz region. There was the observation that our range increased by a couple of Mpc when the HVAC was shut down, ( https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32886 ), and, additionally, the observations of noise from the fire pump and from trucks.
I noticed that the strongest signals in DARM produced by the fire pump and trucks had peaks that were harmonics of about 12 Hz. I injected manually all around the LVEA and found that the input beam tube was the one place where I could produce a 12 Hz comb in the 100 Hz region (injections were sub-50 Hz). Figure 1 shows that a truck, the fire pump, and my manual injections at the input beam tube produced similar upconverted noise in DARM.
I also used an electromagnetic shaker on the input beam tube. Figure 2 is a spectrogram showing a slow shaker sweep and the strong coupling in the 8-18 Hz band. I wasn’t able to reproduce the broad 12 Hz comb with the shaker, possibly because, as mounted, it didn’t couple well to the 12 Hz mode. But the broad-band noise produced in DARM by the shaker is more typical of trucks and the fire pump: only occasionally does the 12 Hz comb appear. One possibility is that the bounce mode of the baffle is about 12 Hz.
Figure 3 shows that, for equivalent noise in DARM, the RMS displacement from the shaker and the fire pump were about the same at an accelerometer mounted under the Swiss cheese baffle by HAM2. Figure 4 shows that the shaker vibration is local to the beam tube. While the shaker signal is large on the beam tube accelerometer, it is almost lost in the background at the HAM2 and 3 accelerometers and the HAM2, 3 L4Cs. Finally, the failure to reproduce the noise with HEPI injections, both during PEM injections at the beginning of the run and a recent round by Sheila, further support the off-table source of the noise.
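The bookkeeping behind this kind of argument -- deriving a coupling factor from the injection-induced excess in DARM and the witness accelerometer, then projecting the ambient accelerometer level through it -- can be sketched as below. This is an illustration of the standard PEM-projection arithmetic, not the site's actual injection-analysis code.

```python
import numpy as np

def coupling_and_projection(darm_inj, darm_bg, acc_inj, acc_bg):
    """Sketch of standard PEM-injection bookkeeping: the frequency-dependent
    coupling factor is the injection-induced excess in DARM divided by the
    excess on the witness accelerometer; projecting the ambient accelerometer
    level through it estimates the ambient DARM contribution.
    All inputs are ASD arrays on a common frequency vector."""
    darm_excess = np.sqrt(np.clip(darm_inj**2 - darm_bg**2, 0.0, None))
    acc_excess = np.sqrt(np.clip(acc_inj**2 - acc_bg**2, 1e-300, None))
    coupling = darm_excess / acc_excess      # [DARM units per accel unit]
    ambient_estimate = coupling * acc_bg     # projected ambient contribution
    return coupling, ambient_estimate
```

The comparison in Figure 3 is the same logic run in reverse: similar accelerometer RMS from the shaker and the fire pump producing similar noise in DARM implies a common coupling site.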
While everything is consistent with coupling at the Swiss cheese baffle near HAM2, we haven’t eliminated the eye baffle near HAM3. This might be done by comparing a second accelerometer under the eye baffle to the one under the Swiss cheese baffle, but I didn’t have another spare PEM channel.
If it is the Swiss cheese baffle, it might be worth laying it down during a vent. Two concerns are the blocking of any beams that are dumped on the baffle, and the shiny reducing flange at the end of the input beam tube that would be exposed.
An immediate mitigation option is to try to move beams relative to the Swiss cheese baffle while monitoring the noise from an injection. Sheila and I started this but ran out of commissioning time, and LLO was up for most of the weekend so I didn't get back to it. If someone else wants to try this, either turn on the fire pump or, for even more noise in DARM, the shaker by HAM3 (the cable goes across the floor to the driver by the wall; enter 17 Hz on the signal generator and turn on the amp, it should still be set).
Shaker injections have shown the input beamtube to be sensitive for some time ( https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=31016 ). During pre-run PEM injections, an 8 to 100 Hz broad-band shaker injection on the input beamtube showed strong coupling. However, the broad-band injection was smaller in the sensitive 10-18 Hz band than in other sub-bands, and so the magnitude of the up-converted coupling from this narrow sub-band was not evident. When we have detected upconversion during PEM injections in the past, we have narrowed down the sensitive frequency band with a shaker sweep, but, for the input beam tube, we didn't get to this until last week.
Figure 5 shows fire pump, trucks and the HVAC on input beam tube accelerometers and DARM.
Sheila, Anamaria, Robert
The current plan of the stray light upgrade team is to completely remove the aluminum panels of the Swiss cheese baffle, but leave the oxidized stainless outer ring to shield the flat shiny reducing flange. This is planned for post-O2.
Seems like the 12.1 Hz harmonics observed yesterday were also due to this?
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35136
It appears that some maintenance work on Tuesday March 14 led to degradation with respect to narrow lines in H1 DARM. The attached inverse-noise-weighted spectra compare 1210 hours of data from the start of O2 until early morning of March 14 with 242 hours since that time. Summary of substantial comb changes:
- The comb with 0.9999-Hz spacing, nearly aligned with N + 0.25 Hz (N = 15-55), is stronger.
- There is a new comb with 0.999984-Hz spacing visible, nearly aligned with N + 0.75 Hz (N = 41-71).
- There is a new comb with 1.0000-Hz spacing visible at N - 0.0006 Hz (N = 104-125).
Much activity was reported in the alog for March 14, but what jumps out at me are two references to HWS work: here and here. Are there HWS cameras running during observing mode? I had thought those things were verboten in observing mode, given their propensity to make combs.
Fig 1: 20-50 Hz comparison (before and after March 14 maintenance)
Fig 2: 50-100 Hz comparison (before and after March 14 maintenance)
Fig 3: 100-150 Hz comparison (before and after March 14 maintenance)
The attachment has the full set of comparison sub-bands up to 2000 Hz, with both A-B and B-A orderings to make clear which lines are truly new or louder.
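The "nearly aligned" wording can be made concrete with a little arithmetic: a comb whose tooth spacing is 0.9999 Hz rather than exactly 1 Hz drifts away from the N + 0.25 Hz grid by 0.1 mHz per tooth. A small sketch (the anchor at the first tooth is an illustrative assumption; the comb's true absolute offset isn't stated above):

```python
import numpy as np

def comb_drift(spacing=0.9999, n_lo=15, n_hi=55, anchor=0.25):
    """Offset of each tooth of a comb with the given (slightly-off-1-Hz)
    spacing from the N + anchor Hz grid, assuming the first tooth sits
    exactly at n_lo + anchor Hz (an illustrative assumption).  Each
    subsequent tooth drifts by (spacing - 1) Hz per step."""
    n = np.arange(n_lo, n_hi + 1)
    freqs = (n_lo + anchor) + (n - n_lo) * spacing
    return n, freqs - (n + anchor)
```

Over N = 15 to 55 the accumulated drift is only 40 x 0.1 mHz = 4 mHz, small enough that every tooth still lands "nearly" on N + 0.25 Hz in a typical spectrum.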
We didn't change any configuration of the HWS cameras that day. All HWS cameras have been turned on since the beginning of O2 and have been hooked up to the external power supplies (alog 30799) since O1. Since then I haven't heard any complaints about the HWS cameras making noise. HWS cameras ON has been the nominal configuration since the beginning of O2.
I was asked via e-mail whether this problem might have started later in March with the replacement of the Harmonic Frequency Generator, but the explicit comparisons of daily spectra below show conclusively that the N + 0.25 Hz and N + 0.75 Hz combs are not present before Tuesday maintenance ("March 14" ends at 6:00 a.m. CDT on March 14 in the FScan dating convention), but are present individually on March 15, 16 and 17. The new N Hz comb reported above is too weak to show up well in a single day's measurement with 1800-second SFTs. I was also told via e-mail (and just saw Nutsinee's note above) that the HWS systems are run routinely during observing mode and that no configuration changes were made on March 14 (although there are some new cables in place). So perhaps there is a different culprit among the March 14 activities.
Fig 1: Zoom-in of line at 28.25 Hz
Fig 2: Zoom-in of line at 38.25 Hz
Fig 3: Zoom-in of line at 88.75 Hz
Fig 4: Zoom-in of line at 98.75 Hz
Looking back at the aLOGs from the March 14th period, some activities that may be related stick out:
1) PSL bullseye detector for jitter studies installed
2) ITMY OpLev power supply moved
3) New cables for GigE and camera installed
4) ETM OpLev laser power increased
5) CPS electronics power cycled and board reseated on WBSC1 ITMY
6) DBB powered up and supposedly powered off; is it still on?? Kiwamu says he is 99% sure the DBB is off
It looks like all three combs jump in magnetometers at EX and EY between 3/14 and 3/15, but don't have any notable presence in the CS magnetometers.
More clues: at EX, there was a recent change point in the combs (strength drop) between 2/28 and 3/1. At EY, there was one (also a strength drop) between 3/7 and 3/8. These are Fscan dates, covering 24 hours back from the morning when they were run-- in any case, it looks like two prior Tuesdays may be involved.
As a side note, the 1 Hz comb with 0.5 Hz offset mirrors the stated behavior in several channels, at least one magnetometer at EX and one at EY.
Read below for more details on the methodology and the results.
At this point I have 5548 times identified as blip glitches, and the value of the suspension channels for each of those. And 5548 more times that are clean data, with the value of the suspension channels for each of those.
Here's an example plot of the result. I picked one of the most interesting channels (SUS-ETMX_M0_DAMP_Y_INMON). The PDF files attached below contain the same kind of histograms for all channels, divided by test mass.
The first row shows results for the SUS-ETMX_M0_DAMP_Y_INMON signal. The first panel compares the histogram of the values of this signal for blip times (red) and clean times (blue). This histogram is normalized such that the value of the curve is the empirical probability distribution of having a particular value of the signal when a blip happens (or doesn't happen). The second panel in the first row is the cumulative probability distribution (the integral of the histogram): the value at an abscissa x gives the probability of having a value of the signal lower than x. It is a standard way to smooth out the histogram, and it is often used as a test for the equality of two empirical distributions (the Kolmogorov-Smirnov test). The third panel is the ratio of the histogram of glitchy times over the histogram of clean times: if the two distributions are equal, it should be one. The shaded region is the 95% confidence interval, computed assuming that the number of counts in each bin of the histogram follows a naive Poisson distribution. This is probably not a good assumption, but it is the best I could come up with.
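The three panels described above can be sketched for one channel as follows. This is a minimal stand-in for the analysis, carrying the same caveated naive-Poisson assumption for the confidence band:

```python
import numpy as np

def compare_distributions(blip_vals, clean_vals, bins=50):
    """Normalized histograms, the Kolmogorov-Smirnov distance between the
    empirical CDFs, and the bin-wise histogram ratio with a naive Poisson
    95% band, for blip-time vs clean-time values of one channel."""
    lo = min(blip_vals.min(), clean_vals.min())
    hi = max(blip_vals.max(), clean_vals.max())
    edges = np.linspace(lo, hi, bins + 1)
    nb, _ = np.histogram(blip_vals, edges)
    nc, _ = np.histogram(clean_vals, edges)
    pb, pc = nb / nb.sum(), nc / nc.sum()
    # KS statistic: maximum distance between the two empirical CDFs
    ks = np.max(np.abs(np.cumsum(pb) - np.cumsum(pc)))
    # Histogram ratio; NaN where the clean histogram is empty
    ratio = np.divide(pb, pc, out=np.full(bins, np.nan), where=pc > 0)
    # Naive Poisson relative error: sigma_N / N ~ 1/sqrt(N) per histogram
    rel = 1.96 * np.sqrt(
        np.divide(1.0, nb, out=np.zeros(bins), where=nb > 0)
        + np.divide(1.0, nc, out=np.zeros(bins), where=nc > 0))
    return ks, ratio, ratio * (1 - rel), ratio * (1 + rel)
```

Identical underlying distributions give a KS distance near zero; a shifted population of blip values (like the ~15.5 cluster discussed below the plots) shows up as a large KS distance and a ratio far from one in the affected bins.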
The second row is the same, but this time I'm considering the derivative of the signal. The third row is for the second derivative. I don't see much of interest in the derivative plots of any signal. Probably it's due to the high-frequency content of those signals; I should try to apply a low-pass filter. On my to-do list.
However, looking at the histogram comparison in the very first panel, it's clear that the two distributions are different: a subset of blip glitches happen when the signal SUS-ETMX_M0_DAMP_Y_INMON has a value of about 15.5. There are almost no counts of clean data for this value. I think this is a quite strong correlation.
You can look at all the signals in the PDF files. Here's a summary of the most relevant findings. I'm listing all the signals that show a significant peak of blip glitches, as above, and the corresponding value:
| Channel | ETMX | ETMY | ITMX | ITMY |
|---|---|---|---|---|
| L1_WIT_LMON | -21.5 | -54.5 | | |
| L1_WIT_PMON | 495, 498 | 609 | 581 | |
| L1_WIT_YMON | 753 | -455 | 643, 647 | 500.5 |
| L2_WIT_LMON | 437, 437.5 | -24 | -34.3 | |
| L2_WIT_PMON | 1495 | -993, -987 | | |
| L2_WIT_YMON | -170, -168, -167.5 | 69 | | |
| L3_OPLEV_PIT_OUT16 | -13, -3 | 7 | | |
| L3_OPLEV_YAW_OUT16 | -4.5 | 7.5 | -10, -2 | 3 |
| M0_DAMP_L_INMON | 36.6 | 14.7 | 26.5, 26.9 | |
| M0_DAMP_P_INMON | -120 | 363 | 965, 975 | |
| M0_DAMP_R_INMON | 32.8 | -4.5 | 190 | -90 |
| M0_DAMP_T_INMON | 18.3 | 10.2 | 20.6 | 27.3 |
| M0_DAMP_V_INMON | -56.5 | 65 | -25 | -60 |
| M0_DAMP_Y_INMON | 15.5 | -71.5 | -19, -17 | 85 |
Some of the peaks listed above are quite narrow, some are wider. It looks like ITMY is the suspension with the most peaks, and probably the most significant correlations. But that's not very conclusive.
There were some days with extremely high rates of glitches in January. It's possible that clumping of glitches in time could be throwing off the results. Maybe you could try considering December, January, and February as separate sets and see if the results only hold at certain times. Also, it would be nice to see how specifically you can predict the glitches in time. Could you take the glitch times, randomly offset each one according to some distribution, and use that as the 'clean' times? That way, they would have roughly the same time distribution as the glitches, so you shouldn't be very affected by changes in IFO alignment. And you can see if the blips can be predicted within a few seconds or within a minute.
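The suggestion in the last paragraph -- build the "clean" reference sample by randomly offsetting each glitch time so the background inherits the glitches' clumpy time distribution -- can be sketched as below. The offset bounds are illustrative choices, not values from the original analysis.

```python
import numpy as np

def shifted_background_times(glitch_times, rng, lo=5.0, hi=60.0):
    """Build 'clean' reference times by offsetting each glitch time by a
    random +/- lo..hi seconds, so the background sample has roughly the
    same time distribution as the glitches.  lo and hi are illustrative:
    small enough to track slow IFO alignment changes, large enough to
    avoid the glitches themselves."""
    glitch_times = np.asarray(glitch_times, dtype=float)
    offsets = rng.uniform(lo, hi, size=len(glitch_times))
    signs = rng.choice([-1.0, 1.0], size=len(glitch_times))
    return glitch_times + signs * offsets
```

Repeating the sampling with different (lo, hi) windows would also answer the last question: whether the blips are predictable within a few seconds or only within a minute.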
In my analysis I assumed that the list of blip glitches was not affected by the vetoes. I was wrong.
Looking at the histogram in my entry, there is a set of glitches that happen when H1:SUS-ETMX_M0_DAMP_Y_INMON > 15.5. So I plotted the value of this channel as a function of time for all blip glitches. In the plot below the blue circles are all the blips, the orange dots all the blips that pass ANALYSIS_READY, and the yellow crosses all the blips that pass the vetoes. Clearly, the period of time when the signal was > 15.5 is completely vetoed.
So that's why I got different distributions: my sampling of clean times did include the vetoes, while the blip list did not. I ran the analysis again including only non-vetoed blips, and that family of blips disappeared. There are still some differences in the histograms that might be interesting to investigate. See the attached PDF files for the new results.
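The fix amounts to applying the same veto-segment filter to both the glitch list and the clean-time sample. A minimal sketch of that filter (the segment format is an assumption; real analyses would use the segment-database tooling):

```python
import numpy as np

def remove_vetoed(times, veto_segments):
    """Drop any event time that falls inside a veto segment.  The mismatch
    described above came from applying this filter to only one of the two
    lists.  veto_segments is an iterable of (start, stop) pairs."""
    times = np.asarray(times, dtype=float)
    keep = np.ones(len(times), dtype=bool)
    for start, stop in veto_segments:
        keep &= ~((times >= start) & (times < stop))
    return times[keep]
```

Running both the blip times and the sampled clean times through the same `remove_vetoed` call guarantees the two samples see the same live time.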
One more thing to check is the distribution over time of the blip glitches and of the clean times. There is an excess of blips between days 39 and 40, while the distribution of clean data is more uniform. This is another possible source of issues in my analysis. To be checked.
@Gabriele
The excess of blips around day 40 corresponds to a known period of high glitchiness; I believe it was around Dec 18-20. I also got that peak when I made a histogram of the lists of blip glitches coming from the online blip glitch hunter (the histogram is at https://ldas-jobs.ligo.caltech.edu/~miriam.cabero/tst.png; it's not as pretty as yours, I just made it last week very quickly to get a preliminary view of variations in the range of blips during O2 so far).
J. Kissel, S. Aston, P. Fritschel
Peter was browsing through the list of frame channels and noticed that there are some differences between H1 and L1 on PR2 (an HSTS), even after we've both gone through and made the effort to revamp our channel lists -- see Integration Issue 6463, ECR E1600316, LHO aLOG 30844, and LLO aLOG 29091. The difference he found is the result of the LHO-only ECR E1400369 to increase the drive strength of the lower stages of *some* of the HSTSs. This requires the two sites to have different front-end model library parts for different types of the same suspension, because the BIO control of each stage is different depending on the number of drivers that have been modified.
At LHO the configuration is:
| Library Part | Driver Configuration | Optics |
|---|---|---|
| HSTS_MASTER.mdl | No modified TACQ drivers | MC1, MC3 |
| MC_MASTER.mdl | M2 modified, M3 not modified | MC2 |
| RC_MASTER.mdl | M2 and M3 modified | PRM, PR2, SRM, SR2 |
At LLO the configuration is:
| Library Part | Driver Configuration | Optics |
|---|---|---|
| HSTS_MASTER.mdl | No modified TACQ drivers | MC1, MC3, PR2 |
| MC_MASTER.mdl | M2 modified, M3 not modified | MC2, PRM, SR2, SRM |
| RC_MASTER.mdl | M2 and M3 modified | none |
The DAQ channel list for the MC and RC masters is the same. The HSTS master's is different, and slower, because these SUS are used for angular control only:
| Channel | HSTS (Hz) | MC or RC (Hz) |
|---|---|---|
| M3_ISCINF_L_IN1 | 2048 | 16384 |
| M3_MASTER_OUT_UL | 2048 | 16384 |
| M3_MASTER_OUT_LL | 2048 | 16384 |
| M3_MASTER_OUT_UR | 2048 | 16384 |
| M3_MASTER_OUT_LR | 2048 | 16384 |
| M3_DRIVEALIGN_L_OUT | 2048 | 4096 |
Since LLO's PR2 does not have any modifications to its TACQ drivers, it uses the HSTS_MASTER model, which means that PR2 alone shows up as a difference in the channel list between the sites -- the thing that seemed odd to Peter was that L1 had 6 more 2048 Hz channels than H1. Sadly, PR2 *is* used for longitudinal control, so LLO suffers the lower stored frame rate.
In order to "fix" this difference, we'd have to create a new library part for LLO's PR2 alone that has the DAQ channel list of an MC or RC master but the BIO control logic of an HSTS master (i.e. to operate M2 and M3 stages with an unmodified TACQ driver). That seems excessive given that we already have 3 different models due to differing site preferences (and maybe range needs), so I propose we leave things as is, unless there's dire need to compare the high-frequency drive signals to the M3 stage of PR2 at LLO. I attach a screenshot that compares the DAQ channel lists for the three library parts, and the two types of control needs as defined by T1600432.
Just to trace out the history of the HSTS TACQ drivers at both sites:
- Prototype of L1200226 increases the MC2 M2 stage at LLO: LLO aLOG 4356 >> L1 MC2 becomes MC_MASTER.
- ECR to implement L1200226 on the MC2, PRM, and SRM M2 stages for both sites: E1200931 >> L1 PRM, SRM become MC_MASTERs >> H1 MC2, PRM, SRM become MC_MASTERs.
- LLO temporarily swaps both PR2 and SR2 M2 drivers for an L1200226 driver: LLO aLOG 16945. And then reverted two days later: LLO aLOG 16985.
- ECR to increase the drive strength of the SR2 M2 stage only at LLO: E1500421 >> L1 SR2 becomes MC_MASTER.
- ECR to increase the drive strength of SR2 and PR2 M2 and PRM, PR2, SRM, SR2 M3 at LHO only: E1400369 >> H1 PRM, PR2, SRM, SR2 become RC_MASTERs.