Jeff, Sheila, Elenna
Executive summary: Our input alignment has changed. We know this because the alignment onto IM4 trans has changed, which corresponds with the IM1-3 OSEMs showing a change, which in turn corresponds with HAM2 being taken offline to change the power supply.
Today, an H1SEI23 power supply was replaced, and preceding this activity HAM2 and HAM3 were taken offline (alog 85475).
When HAM2 came back online, the OSEM readbacks on IMs 1-3 showed a change in both pitch and yaw. The table below summarizes how much each suspension's alignment has changed.
IM | Pitch change (urad) | Yaw change (urad)
---|---|---
1 | 2 | 0
2 | 66.8 | 2.7
3 | 14.8 | 4.3
This shift in alignment is also apparent on IM4 trans, which shows the pitch offset has increased by 0.3 and yaw offset by 0.03 (uncalibrated).
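(For the record, a minimal sketch of how one might pull these before/after shifts out of the trend data with gwpy; the channel names and time windows below are illustrative guesses, not the exact ones used for the table above.)

    import numpy as np
    from gwpy.timeseries import TimeSeriesDict

    # Illustrative guesses at the IM OSEM damping readbacks and at the
    # before/after windows around the HAM2 power supply swap
    chans = [f'H1:SUS-IM{n}_M1_DAMP_{d}_INMON' for n in (1, 2, 3) for d in ('P', 'Y')]
    before = TimeSeriesDict.get(chans, '2025-06-10 14:00', '2025-06-10 14:10')
    after = TimeSeriesDict.get(chans, '2025-06-10 21:00', '2025-06-10 21:10')

    for c in chans:
        # medians are robust against residual angular motion in each window;
        # values are in the OSEM readback units (nominally urad)
        shift = np.median(after[c].value) - np.median(before[c].value)
        print(f'{c}: {shift:+.1f}')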
We first noticed something had changed when we ran input alignment: I had to move PR2 by 5 microradians in both pitch and yaw just to get the lock to catch, and then the alignment offload further changed the PR2 sliders.
This change is further backed up by the fact that the POP A offsets have changed, and the PRG at 2 W seems to have decreased slightly.
We're not sure how bad a problem this is. We have been able to lock and power up with no issues. We had a random, unrelated lockloss, so I cannot yet see whether the thermalized full-power buildups have changed significantly.
The locking process is now much slower again, since the alignment offsets for the PRM have changed, and the convergence of the PRM ADS tends to set the rate at which we can engage ASC in full lock.
The first two attachments are screenshots showing the shift in the IM alignment and the IM4 trans alignment. I included the HAM2 ISI guardian state to show this change corresponds with HAM2 going offline.
The third attachment shows how the POP A offset has changed at full IFO 2 W. We can use this scope to adjust the POP A QPD offset and speed up locking, if we want to keep this input alignment.
The fourth attachment shows the ITMX green camera offset screen in full IFO at 2 W. I was watching this screen to confirm the new ITMX green camera offsets I set earlier today were ok. I think the set value differs from the live value here because of the new input alignment.
We sat at full power for 1 hour and had thermalized to a PRG of 49. Trending back to our last lock before maintenance it looks like we achieve about the same PRG in this amount of time, so there is minimal impact on the buildups. There might be a small reduction in the jitter coupling. We should check again when fully thermalized to be sure.
Since we are going to stay this way for at least tonight, I updated the POP A offsets to help shorten ENGAGE ASC FOR FULL IFO. SDF attached.
Ivey and Edgard,
We just finished a fit of the Yaw-to-Yaw transfer functions for the OSEM estimator using the measurements that Oli took for SR3 last Tuesday [see LHO: 85288].
The fits were added to the Sus SVN and live inside '~/SusSVN/sus/trunk/HLTS/Common/FilterDesign/Estimator/fits_H1SR3_2025-06-30.mat' . They are already calibrated to work on the filter banks for the estimator and can be installed using 'make_SR3_yaw_model.m', which lives in the same folder [for reference, see LHO: 84041, where Oli got the fits running for a test].
Attached below are two pictures of the fits we made for the estimator.
The first attachment shows the Suspoint Y to M1 DAMP Y fit. We made sure to fit the asymptotic behavior as well as we could, which ends up being 0.95x10^{-3} um/nm (5% lower than expected from the OSEM calibration). The zpk for this fit is
'zpk([-0.024+20.407i,-0.024-20.407i,-0.044+11.493i,-0.044-11.493i,0,0],[-0.067+21.278i,-0.067-21.278i,-0.095+14.443i,-0.095-14.443i,-0.07+6.405i,-0.07-6.405i],-0.001)'
The second attachment shows the M1 drive Y to M1 DAMP Y fit. We kept the same poles that we had for the other fit, but manually fit the zeros and gain to make a good match. The zpk for this fit is
'zpk([-0.051+8.326i,-0.051-8.326i,-0.011+19.259i,-0.011-19.259i],[-0.067+21.278i,-0.067-21.278i,-0.095+14.443i,-0.095-14.443i,-0.07+6.405i,-0.07-6.405i],12.096)'
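(For anyone wanting to sanity-check the fits outside of MATLAB, here is a minimal Python sketch that evaluates the first zpk above with scipy, assuming the roots are expressed in rad/s, which puts the suspension modes at the expected ~1-3.4 Hz; the high-frequency magnitude should approach the quoted ~0.95x10^-3 asymptote.)

    import numpy as np
    from scipy import signal

    # Roots transcribed from the Suspoint Y to M1 DAMP Y zpk string above,
    # assumed to be given in rad/s
    z = [-0.024 + 20.407j, -0.024 - 20.407j, -0.044 + 11.493j, -0.044 - 11.493j, 0, 0]
    p = [-0.067 + 21.278j, -0.067 - 21.278j, -0.095 + 14.443j, -0.095 - 14.443j,
         -0.07 + 6.405j, -0.07 - 6.405j]
    k = -0.001

    f = np.logspace(-1, 1, 1000)                    # 0.1 Hz to 10 Hz
    _, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)
    print(abs(h[-1]))  # tends toward |k| = 1e-3, i.e. the quoted ~0.95e-3 um/nm asymptote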
Hopefully Oli and co. will have time to test this soon!
The new filters have been loaded in. Here are the matlab plots for the fits for SUSPOINT_Y_2GAP and for EST_MODL_DRV_Y_2GAP.
Closes FAMIS 26456. Last checked in alog 84705
Trends look normal. SEI BRS Maintenance work in Mid-May was part of vent work and is the only excursion from the threshold lines.
I did a very quick search and didn't find any other posts, so I'm putting a plot here comparing the motion of the pre-vent HAM1 TT L4C passive stack to the current HAM1 ISI motion. ISI motion is generally 1-3 orders of magnitude better, with only a very narrow window at 15 Hz where the passive stack barely reached down to the ISI motion, and only for the X dof. Blue, green, and brown are the ISI; cyan, pink, and black are the passive stack.
Almost done with the St0 feedforward: X, Y, Z, and RY are running. I still have to do tilt decoupling (which will improve low-frequency motion); vac work at the chamber kept tripping the ISI during my measurements this morning.
Lockloss @ 03:03 UTC after almost 6 hours locked - link to lockloss tool
Several quakes rolling through around this time; hard to say which was the real cause but likely a M5.7 in the Caribbean.
During the commissioning window this morning, I worked on the St0-to-St1 feedforward for the HAM1 ISI. This time I borrowed some code from Huyen to try the RIFF frequency-domain fitting package from Nikhil. This required using MATLAB 2023b, which seems to have a lot of computationally heavy features like code suggestions added, so it was kind of clunky to use; I'm also not sure what all of the different fitting options do, so each dof took multiple rounds of fitting to get working. I also had to add an AC-coupling high-pass to the filters after the fact, because they all went to 1 at 0 Hz. Still, the results I got for HAM1 seem to work pretty well. The attached spectra are the on/off data I collected for the X, Y, and Z dofs. Refs are the feedforward-off spectra; live traces are feedforward-on. The top of each image shows the ASDs with feedforward on and off; the bottom is the magnitude of the St0 L4C to St1 GS13 transfer function. The improvement is broad: ~10x less motion from 5 Hz up to ~50 Hz. I'm still looking at the rotational dofs, but there is less coherence there, so not as much to win.
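(To illustrate the AC-coupling fix: appending a zero at s = 0 plus a low-frequency pole rolls a fitted filter off to zero at DC without touching its passband. This is a minimal sketch with a toy unity-DC-gain filter, not the actual RIFF output.)

    import numpy as np
    from scipy import signal

    # Toy stand-in for a fitted feedforward filter with unity gain at DC
    z_fit = [-2 * np.pi * 0.5]
    p_fit = [-2 * np.pi * 5.0, -2 * np.pi * 60.0]
    k_fit = (2 * np.pi * 5.0) * (2 * np.pi * 60.0) / (2 * np.pi * 0.5)

    # First-order AC-coupling high-pass H(s) = s / (s + w_c), corner at 0.1 Hz
    w_c = 2 * np.pi * 0.1
    z_ac, p_ac, k_ac = z_fit + [0.0], p_fit + [-w_c], k_fit

    f = np.logspace(-3, 2, 500)
    _, h_fit = signal.freqs_zpk(z_fit, p_fit, k_fit, worN=2 * np.pi * f)
    _, h_ac = signal.freqs_zpk(z_ac, p_ac, k_ac, worN=2 * np.pi * f)
    print(abs(h_fit[0]), abs(h_ac[0]))  # gain near DC: ~1 before, ~0 after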
Elenna has said this seemed to have improved CHARD ASC; maybe she has some plots to add.
There is about an order of magnitude improvement in the CHARD P error signal between 10-20 Hz as a result of these improvements, comparing the NLN spectra from three days ago versus today. Fewer noisy peaks are also present in INP1 P. I included the CHARD P coherence with GS13s, focusing on the three DoFs with the most coherence: RX, RZ, and Z. The improvements Jim made greatly reduced that coherence. To achieve the CHARD P shot noise floor at 10 Hz and above, there is still some coherence of CHARD P with GS13 Z that is likely contributing noise. However, for the IFO, this is sufficient noise reduction to ensure that CHARD P is not directly limiting DARM above 10 Hz. I also compare the CHARD P coherence with OMC DCPD sum from a few days ago to today, see plot.
In terms of how this compares with our passive stack + L4C feedforward performance, I found some old templates where I compared upgrades to our HAM1 feedforward. I compare our ISI performance now with the passive stack with no L4C feedforward to ASC, and with the passive stack with the best-performing feedforward we achieved: the results. It's actually a pretty impressive difference! (Unrelated to the ISI, there seems to be a change in the shot noise floor; it looks like the power on the REFL WFS may have changed from the vent.)
The coupling of CHARD P to DARM appears to be largely unchanged, so this generally means we are injecting about 10x less noise from CHARD into DARM from 10-30 Hz.
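(For reference, a minimal sketch of the kind of coherence measurement quoted above, assuming gwpy/NDS access; the channel names and times are illustrative guesses at the relevant DQ channels, not verified.)

    from gwpy.timeseries import TimeSeriesDict

    # Guessed channel names for CHARD P and one HAM1 GS13 dof (illustrative)
    chans = ['H1:ASC-CHARD_P_OUT_DQ', 'H1:ISI-HAM1_BLND_GS13Z_IN1_DQ']
    data = TimeSeriesDict.get(chans, '2025-06-14 08:00', '2025-06-14 09:00')

    # Welch-averaged magnitude-squared coherence, 64 s FFTs, 50% overlap
    coh = data[chans[0]].coherence(data[chans[1]], fftlength=64, overlap=32)
    coh.plot().show()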
H1 ISI CPS Noise Spectra Check - FAMIS 26048
NEW and IMPROVED H1 ISI CPS Noise Spectra check -- now includes HAM1!
HAM1 currently has some very loud VAC equipment attached to it, which is running and may be why HAM1 looks so terrible relative to the rest of the HAMs.
In preparation for the RCG upgrade, we are using the relocking time to reconcile SDF differences in the SAFE file.
Here are some of mine:
I have also determined that the unmonitored channel diffs in the LSC, ASC, and OMC models are guardian controlled values and do not need to be saved.
I am not accepting or reverting the h1sqz, h1ascsqzfc, or slow controls cs_sqz SDFs (attached), as these have the same observe and safe files.
Accepted H1:TCS-ETMX_RH_SET{LOWER,UPPER}DRIVECURRENT as ndscope-ing shows they are normally at this value.
Some of these SDFs may have then led to diffs in the OBSERVE state. I have reverted the roll mode tRamp, and accepted the OSC gains in the CAL CS model.
I updated the OPTICALIGN OFFSETs for each suspension where we use those sliders. I tried using my update_sus_safesnap.py script at first, but even though it has worked one other time in the past, it was not working any time I tried using it on more than one suspension at a time (it seems like it was only doing one out of each suspension group). I was eventually able to get them all updated anyway. I'm attaching all their SDFs and will be working on fixing the script. Note that a couple of the ETM/TMS values might not match the setpoint exactly, because the screenshots were taken during relocking, after the optics had moved a bit with the WFS.
Closes FAMIS 37206, last checked in alog 84428.
HEPI pump trends look mostly normal; HPI-PUMP_LO_CONTROL_VOUT has dropped slightly (~30).
J. Kissel

Executive Summary
I've tuned the lens position and measured the projected beam profile of one of three trial fiber collimators to be used for the SPI. The intent is to do this measurement before vs. after a vacuum bake, as done for previous collimators (Section 3.3 of E1500384), to see if the bake will cause any negative impact on the performance of the collimator. It also helped clear out a few "rookie mistake" bugs in my data analysis, though there's still a bit of inconsequential confusion in post-processing the fit of the beam profile.

Full Context
The SPI intends to use supposedly vacuum-compatible Schaefter + Kirchhoff (SuK) fiber collimators (D2500094, 60FC-0-A11-03-Ti) in order to launch its two measurement (MEAS) and reference (REF) beams from their fiber optical patchcords attached to feedthroughs (D2500175) into free space and throughout the ISIK transceiver / interferometer (D2400107). However, unlike their (more expensive) stainless steel counterparts from MicroSense / LightPath used by the SQZ team, SuK doesn't have the facilities to specify a temperature at which they believe the titanium + D-ZK3 glass lens + CuBe & Ti lens retaining ring assembly will fail (see E2300454). As such, we're going to run one SuK collimator through the same baking procedure as the MicroSense / LightPath (see Section 3.3 of E1500384) -- slow 6 [hr] ramp up to 85 [deg C], hold for 24 hours, similar slow ramp down -- and see if it survives. A catastrophic "did it survive" result would be that the lens cracks under the stress of differential heating between the titanium, CuBe, and glass. As we don't expect this type of failure, in order to characterize "does it still function," we want a quantitative "before" vs. "after" metric. Since I have to learn how to "collimate" this type of collimator anyway (where "collimate" = mechanically tune the lens position such that the emanating beam's waist is positioned as close to the lens as possible), I figure we use a 2D beam profile as our quantitative metric. We expect the symptom of a "failed" fiber collimator to be that "before the bake it projects a lovely, symmetric, tunable, Gaussian beam, and after the bake the profile looks asymmetric / astigmatic, or the lens position is no longer consistently/freely adjustable."

The SPI design requires a beam whose waist (1/e^2) radius is w0 = 1.050 mm +/- 0.1 mm. That puts the Rayleigh range at zR = pi * (w0^2) / lambda = 3.25 [m], so in order to get a good fit on where the waist lies, we need at least one or two data points beyond the Rayleigh range. That means projecting the beam over a large distance, say at least ~5-6 [m].

Measurement Details
2025-06-03_SPIFC_OpticalSetup.pdf shows the optical setup. Pretty simple; it just consumes a lot of optical table space (which we thankfully have at the moment). The "back" optical table (in IFO coordinates, the -X table, with the long direction oriented in the Y direction) already has a pre-existing fiber-coupled, 1064 nm laser capable of delivering variable power of at least 100 [mW], so I started with that. Its output has an FC/APC "angled physical contact" connector, whereas the SuK fiber collimators have an FC/PC (flat) "physical contact" connector. I thus used a ThorLabs P5-980PM-FC-2 APC-to-PC fiber patch cord to couple to the SuK fiber collimator, assembled with a (12 mm)-to-(1 inch) optic mount adapter (AD12NT) and mounted in a standard 1" mirror mount, set at 5 inch beam height.
Ensuring I positioned the free-space face of the collimator at the 0 [in] position, I projected the beam down the width of the optical table and placed steering mirrors at 9 [ft] 9 [in] (the maximum +Y grid holes), separated by 6 [in], in order to return the beam down the table. Along this beam, I marked out grid-hole positions to measure the beam profile with a NanoScan head at z = [0.508, 0.991, 1.499, 2.007, 3.251, 4.496, 5.41] [m]. These are roughly the even half-meter points one gets from "finding the 1 inch hole position that gets you close to 0.5 [m] increments," i.e. z = [1'8", 3'3", 4'11", 6'7", 10'8", 14'9", and 17'9"].

Using the SuK proprietary tooling, I then loosened the set screws that had secured the lens in position with the 1.2 mm flat-head (9D-12), and adjusted the position of the lens with their short eccentric key (60EX-4), per their user manual (T2500062 == Adjustment_60FC.pdf), with the NanoScan positioned at the 5.41 [m] position. At that position, we expect the beam radius to be w(z=5.41 m) = w0 * sqrt(1 + (z/zR)^2) = 2.036 [mm], or a beam diameter of d(z=5.41 m) = 4.072 [mm]. Regretfully, I made the rookie mistake of interpreting the NanoScan's beam widths (diameters) as radii on the fly, and "could not get a beam 'radius' lower than 3.0 [mm]" at the z = 5.41 [m] position, as adjustments to the eccentric key / lens position approached a "just breathe near it and it'll get larger" level at that beam size. This will be totally fine for "the real collimation" process, as beam widths (diameters) of 4.0 [mm] required a much less delicate touch and were quite repeatable (I found, while struggling to achieve a 3.0 [mm] width). Regardless, once I said "good enough" at the lens position, I re-secured the set screws holding the lens in place. That produced a d = 3.4 [mm] beam diameter, or 1.7 [mm] radius, at z = 5.41 [m]. I then moved the NanoScan head to the remaining locations (aligning the beam into the head at each position, as needed, with either the steering mirrors or the fiber collimator itself) to measure the rest of the beam profile as a function of distance.

2025-06-03_SPIFC_BeamScans_vs_Position.pdf shows the NanoScan head position and raw data at each z position after I tuned the lens position at z = 5.41 [m]. Having understood during the data analysis that the NanoScan software returns either 1/e^2 or D4sigma beam *diameters*, I followed modern convention, used the mean D4sigma values, and converted to radii with the factor of 2, w(z) = d(z) / 2. One can see from the raw data that the beam profile at each point is quite near "excellently Gaussian," which is what we hope will remain true *after* the bake.

Results
2025-06-03_spifc_S0272502_prebake_beamprofile_fit.pdf shows the output of the attached script spifc_beamprofile_S0272502_prebake_20250603.m, which uses Sheila's copy of a la mode to fit the profile of the beam.

Discussion
The data show a pretty darn symmetric beam in X (parallel to the table) and Y (perpendicular to the table), reflecting what had already been seen in the raw profiles. The fit predicts a waist w0 of (w0x, w0y) = (0.89576, 0.90912) [mm] at position (z0x, z0y) = (1.4017, 1.3469) [m] downstream from the lens. It makes total sense that the z position of the waist is *not* at the lens position, given that I tried to get the beam to as small a *diameter* as possible at the 5.41 [m] position, rather than what I should have done, which is to tune the lens position / beam diameter to be the desired 4.072 [mm].
What doesn't make sense to me is that -- in trying to validate the fit and/or show that the beam behaves as an ideal Gaussian beam would -- I also plot the predicted beam radius at the Rayleigh range, w(zR) [model] = w0 * sqrt(1 + (zR/zR)^2) = w0 * sqrt(2), or (wzRx, wzRy) [model] = (1.2668, 1.2857) [mm], which is much larger than the fit predicts, (wzRx, wzRy) [fit] = (0.9677, 0.9958) [mm], at a Rayleigh range zR = pi * w0^2 / lambda of (zRx, zRy) [from fit w0] = (2.3691, 2.4404) [m]. Similarly confusing: if I plot a line from the waist position (z0x, z0y) to the end of the position vector (6 [m]), whose angle from the x-axis is the divergence angle theta_R = lambda / (pi * w0), i.e. predicting a beam radius at zEnd = 6 [m] of w(zEnd) = (zEnd - z0) * tan(theta_R), this results in a beam radius at zEnd much smaller than the fit. Most demo plots, e.g. from wikipedia:Rayleigh_length or wikipedia:Beam_divergence, show that the slope of the line should start to match the beam profile just after the Rayleigh range. To be fair, these demos never have quantitative axes, but still. At least the slope seems to match, even though the line is quite offset from the measured / fit z > 6 [m] asymptote.

Conclusion
Though I did not adjust the lens position to place the beam waist at the desired position, I now have a good measurement setup and data-processing regime to do so for the "production" pair of fiber collimators. For this collimator, since we're just looking at "before" vs. "after" bake data, this data set is good enough. Let's bake!
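(As a cross-check on the fit numbers above, a minimal Python sketch of the ideal Gaussian-beam expansion evaluated at the measured scan positions, using the fitted X-axis waist from the text; comparing its output against the D4sigma/2 radii is exactly the validation being attempted.)

    import numpy as np

    lam = 1.064e-6                  # wavelength [m]
    w0, z0 = 0.89576e-3, 1.4017     # fitted X waist [m] and waist position [m]

    zR = np.pi * w0**2 / lam        # Rayleigh range; gives ~2.369 m, as quoted
    z = np.array([0.508, 0.991, 1.499, 2.007, 3.251, 4.496, 5.41])  # scan positions [m]
    w = w0 * np.sqrt(1 + ((z - z0) / zR)**2)  # ideal beam radius at each position [m]
    print(f"zR = {zR:.4f} m")
    print(np.round(w * 1e3, 3))     # radii in [mm], to compare with D4sigma/2 data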
(RyanC, CoreyG)
Got a Wake-Up Call at 12:11am PDT, but I was sort of already awake. RyanC was up after his shift and we were both watching H1 remotely. He was battling H1 most of his shift, made headway toward the end of it, and then handed off info on how things were going (the winds started dying down around 9pm-ish). A few things he did for H1:
Now that I'm sort of awake, I'm wondering why the Earth was playing with us with that earthquake which literally caught us by surprise seconds after we made it to Observing last night.
Although there were some Alaska quakes around the time of the lockloss, they were under Mag 2.6.
So I'm assuming it was the Mag 5.0 off the coast of Chile, which was roughly 30 min before the lockloss. I guess that's fine, but why were there no notifications of an earthquake on Verbal? Why didn't SEI_ENV transition from CALM to EARTHQUAKE?
Looking at the seismic BLRMS, the last time SEI_ENV transitioned to EARTHQUAKE was about 12 hrs ago, at 03:52 UTC (see attached screenshot) during RyanC's shift (but he was just getting done dealing with winds at that time, so H1 was down anyway). After that EQ there were a few more earthquakes, which were smaller than the 03:52 one, but not by much, and certainly big enough to knock H1 out at 07:25 from the Chilean coast earthquake. Perhaps it was a unique EQ because it was off the Pacific coast, albeit the South American coast.
It just seems like H1 should have been able to handle the pesky, measly Mag 5.0 EQ that the Earth taunted us with after a rough night---literally seconds after we had hit the OBSERVING button! :-/
TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 24mph Gusts, 18mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.14 μm/s
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Recovered from the earlier lockloss!
An Initial_Alignment was run, and we got back up to Nominal_Low_Noise at 20:46 UTC.
I did have to accept some SDF diffs for the SEI system, and reached out to Jim about it. He mentioned that some of those channels should not be monitored.
When SEI_ENV went back to CALM it dropped us from Observing, so a few moments later we unmonitored those channels.
We got back to Observing at 21:04 UTC
21:10 UTC we fell out of Observing because of a SQZ issue, returning to Observing at 21:24 UTC
SUS ETMY Roll Mode is growing!
H1:SUS-ETMY_M0_DARM_DAMP_R_GAIN was changed to account for an interesting roll mode on ETMX, and accepted in SDF. YES, Ya Read That Correctly, and I didn't make a mistake here: changes to ETMY damped a roll mode on ETMX.
The Ops Eve Shifter should revert this change before handing off the IFO to the Night Owl Op.
H1 has been locked for 2 hours and 40 minutes and is currently OBSERVING.
LOG:
Start Time | System | Name | Location | Laser Haz | Task | Time End
---|---|---|---|---|---|---
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD | 22:54
15:46 | Fac | Mitchel | Mech Mezz | N | Checking the dust pump | 16:56 |
15:49 | FAC | Tyler | VAC prep | N | Staging tools & sand paper | 16:04 |
15:57 | FAC | Tyler | EY | N | Checking on bee box | 16:17 |
17:44 | EE | Fil & Eric | WoodShop | N | Working on cabling chiller yards mon channels. | 19:44 |
17:50 | FAC | Kim | H2 | N | Technical cleaning. | 18:30 |
18:39 | VAC | Gerardo, Jordan | LVEA | - | Bolt tightening and pump evaluation | 19:19 |
18:44 | PSL | RyanS | CR | N | RefCav alignment | 18:51 |
19:06 | PEM | Robert | LVEA | - | Damping vacuum pumps | 19:22 |
20:04 | PEM | Camilla | LVEA | Yes | Turning off an SR785 that was left on. | 20:09 |
Lockloss @ 18:33 UTC after 12.5 hr lock stretch - link to lockloss tool
Looks to be from some very local sudden ground motion. USGS has nothing to report yet, but site seismometers and Picket Fence certainly saw activity.
Now looks to be due to a M4.4 EQ from Fort St. John, Canada.
SDF Overview looks great, except for this HPIHAM1 "channel not found" error.
TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1 is still locked 8 Hours and 30 minutes later!
All systems seem to be functioning.
There was talk of doing a calibration measurement, which I started right after making sure there wasn't anyone still working inside the LVEA.
I ran a PCAL BroadBand with this command:
pydarm measure --run-headless bb
2025-06-11 07:44:58,555 config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
2025-06-11 07:44:58,571 available measurements:
pcal: PCal response, swept-sine (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_.xml)
bb : PCal response, broad-band (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml)
The BroadBand finished, but I did not run the Simulines. It was believed by the Calibration gurus that we don't need them before Observing, because our calibration
"monitoring lines show a pretty good uncertainty for LHO this morning: https://gstlal.ligo.caltech.edu/grafana/d/StZk6BPVz/calibration-monitoring?orgId=1&var-DashDatasource=lho_calibration_monitoring_v3&var-coh_threshold=%22coh_threshold%22%20%3D%20%27cohok%27%20AND&var-detector_state=&from=1749629890225&to=1749652518797 Roughly +/-2% wiggle"
~Joe B
Clicked the button for Observing, and we went right into Observing without any SDF issues!
Went into Observing at 14:57 UTC.
There are messages, though, mostly from the SEI system, all of which are setpoint changes; see SPM DIFFS for HAMs 2, 3, 4, and 5.
But these have not stopped us from getting into Observing.
I have attached a screenshot of the broadband measurement from this morning. It shows that the calibration uncertainty is within +/-2%, which means that our new calibration is excellent!
For those who want to plot the latest PCAL broadband, you can use a template that I have saved in /opt/rtcds/userapps/release/cal/h1/dtt_templates/PCAL_BB_template.xml (aka [userapps] cal/h1/dtt_templates/)
In order to use this template, you must find the GPS time of the start of the broadband measurement, which I found today by converting the timestamp in Tony's post above into GPS time. This template pulls data from NDS2 because it uses GDS channels, so you will also need to go to the "Input" tab and put your current GPS time in the "Epoch stop" entry within the "NDS2 selection" box. The current time will hopefully be after the start time of the broadband measurement, which ensures that the full span of data you need is requested from NDS2. If you don't do this, the template will give you an error when you try to run it.
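(For the timestamp-to-GPS step, here is one way to do the conversion, sketched with gwpy; the command-line tconvert tool works too. Watch out for whether the log timestamp is local time or UTC.)

    from gwpy.time import tconvert

    # Timestamp from the pydarm log above, assumed here to be in UTC
    gps = tconvert('2025-06-11 07:44:58')
    print(int(gps))  # GPS seconds marking the start of the broadband measurement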
I went down to End Y to retrieve the USB stick that I had remotely copied the C:\SlowControls directory on h1brsey to, and also to try to connect h1brsey to the KVM switch in the rack. I eventually realized that what I thought was a VGA port on the back of h1brsey was probably not one; instead I found some odd-seeming wiring connected from what I am guessing is an HDMI or DVI port on the back of h1brsey, to some kind of converter device, then to a USB port on a network switch. I'm not sure what this is about, so I am attaching pictures.
I copied the contents of C:\SlowControls on h1brsex and h1brsey onto a USB stick. I committed the changes local to h1brsex into SVN, then ran svn update on h1brsey. I was expecting just the changes I had committed from h1brsex to show up, but a whole lot more did; I guess no one had run svn update in a long time. It appeared to complete without conflicts and reported that it was now at revision 6189. I then started committing files to SVN on h1brsey that I thought did not conflict between the two machines, but I think I accidentally committed one that does: I committed local changes on h1brsey to /trunk/BRS2 C#/BRSReadout/configs/LHOEY.config, but afterward found that there are also local changes to the same file on h1brsex.
This is a very quick first look at the ASC performance with the HAM1 ISI. Jim is still working on bringing the ISI up to full performance, but the first attached plot compares the INP1 P and CHARD P error spectra from March 31 just before the vent and yesterday, when we locked to full power and low noise with HAM1 in the isolated state.
Before the vent, we were running feedforward of the HAM1 L4Cs to CHARD, INP1 and PRC2. PRC2 is now on the POP sensor, and CHARD and INP1 both use a combination of the REFL WFS 45 MHz signals. The large peak around 20 Hz before the vent appears to have reduced in INP1 P and shifted down in frequency in CHARD P. The second attachment shows that large peak in CHARD P is coherent with the HAM1 GS13s, from about 10 to 20 Hz, especially with RX, RY, RZ, and Z.
There is also a peak at 6 Hz, which we know is a vertical resonance of the RMs (alog 84712).
To investigate the 70 Hz feature in the HAM1 chamber, which Jim reported in his alog (LHO 84638), I started looking into the structural resonances of the periscope installed in HAM1 (see the two pictures TJ sent me, taken by Corey: view01, view02).
Betsy handed me a periscope (similar, but not exactly the same) for investigation purposes, which is now set up in the staging building. I attached an accelerometer to the top of the periscope and connected it to the front end of the B&K setup for hammer impact measurements -- see the picture of the experimental setup here.
At first I used two dog clamps to secure the periscope. The results of the two-dog-clamp B&K measurement are shown in this plot (data from 0.125 to 200 Hz); one can see a 39 Hz feature in the Z hit direction. See the zoomed-in (30-100 Hz) figure here.
Next, I attached a third dog clamp, just like in the HAM1 chamber, and took a second round of measurements (especially for the Z impact direction).
This plot compares the two- vs. three-dog-clamp scenarios on the periscope; one can see that the resonance mode has been pushed up from 39 Hz to 48 Hz.
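(A rough way to read this shift, under the simplifying assumption that the lowest periscope mode behaves like a single mass-spring oscillator: f = (1/2pi) * sqrt(k/m), so with the mass unchanged, the effective mounting stiffness scales as k3/k2 = (f3/f2)^2 = (48/39)^2 ~ 1.5, i.e. the third dog clamp stiffened the base constraint by roughly 50%.)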