Search criteria
Section: H2
Task: ISC
J. Kissel, O. Patane, F. Clara
ECR E2400330

Calling this out explicitly: we have changed the OSEM PD satellite amplifiers on the H1SUSPRM, H1SUSPR3, H1SUSBS, H1SUSSR3, and H1SUSSRM top masses; see LHO:85463. We chose to implement ECR E2400330 by modifying spare chassis ahead of time and installing those modified spares in place of the old chassis (which will become some other suspensions' "new" amps next week).

The ECR itself only changes the whitening stage frequency response. However, because the old and new chassis have a different set of components determining the overall transimpedance gain, the read-back for the OSEM PDs will likely change slightly. Thus, the OSEM signals recast into the EULER basis will also change slightly, *looking* like an alignment change, even though the alignment of the physical suspension will NOT have changed. This won't be in any consistent direction, since the transimpedance gain is set by components that can fall anywhere within their tolerance.

I attach two examples, H1SUSPR3 and H1SUSBS, of the levels we're talking about -- in the OSEM basis it's 1-3 [urad], and the same in the EULER basis. For the beam splitter, a physical change in alignment of that magnitude would be significant, hence my bringing it up explicitly.

So, the new normal for the alignment of the following suspensions starts on 2025-07-01 12:30 UTC:
H1SUSPRM
H1SUSPR3
H1SUSBS
H1SUSSR3
H1SUSSRM

We'll definitely have to re-run initial alignment after today's maintenance day, given that (unrelated)
- we rebooted the entire electronics racks at EY to replace failing power supplies,
- we rebooted seih23, and
- we adjusted the green camera alignment.
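To illustrate why the gain change described above can masquerade as an alignment change: the OSEM sensor signals are combined through a sensing matrix into the EULER basis, so a small per-sensor gain error shows up directly as apparent pitch or yaw. A minimal sketch (illustrative lever arm and numbers only; the real transformation is the OSEM2EUL matrix in the SUS front-end models):

    # Two face OSEMs separated vertically by lever arm 2*r sense pitch as the
    # half-difference of their calibrated displacement signals.
    r = 0.1  # [m], illustrative lever arm only

    def apparent_pitch_urad(s_top, s_bottom):
        """Apparent pitch [urad] from two face-OSEM signals [um]."""
        return (s_top - s_bottom) / (2 * r)  # [um]/[m] = [urad]

    # Suspension perfectly still, both OSEMs reading 10 um:
    print(apparent_pitch_urad(10.0, 10.0))          # 0.0 urad
    # Same physical state, but one new satellite amp reads 0.5% higher:
    print(apparent_pitch_urad(10.0 * 1.005, 10.0))  # ~0.25 urad of *apparent* pitch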
Per WP12650, we investigated and replaced two supplies with locked-up cooling fans at EY. These supplies provide power to field rack R1 and, by extension, the neighboring SUS rack.
We noted the -18V supply in slot U26 on VDC-C1 was running very hot, and when we looked, its fan was visibly seized up. Continuing down the rack, the U22 +24V fan was also seized. We replaced both failed supplies with refurbished supplies with upgraded fans. We also lubricated the opposing supplies' fan shafts (+18V and -24V, respectively) with a drop of lubricant each.
U26L S1203041 +18V was lubricated.
U26R S1201988 -18V was replaced.
U22L S1201929 +24V was replaced.
U22R S1300297 -24V was lubricated.
F. Clara
J. Figueroa
M. Pirello
Elenna, Sheila, Oli
I've added thermalization ramping for the PRCL2 gain so that it ramps from 1.0 to 1.9 over the first 75 minutes at max power. The 'unthermalized' (1.0) and 'thermalized' (1.9) values are taken from lscparams in the new dictionary 'prcl2_gain', so any future changes to those values should be made in that dictionary and the THERMALIZATION guardian reloaded so that it grabs the new values.
This addition is pretty much a copy of the way we ramp the SRCL offset, but for the PRCL gain, of course.
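For reference, the ramping logic looks roughly like this (a sketch only, not the committed THERMALIZATION code; the gain channel name is an assumption, and GuardState/ezca are provided by the Guardian runtime):

    import time
    from guardian import GuardState
    import lscparams  # provides the new 'prcl2_gain' dictionary

    RAMP_DURATION = 75 * 60  # [s]: first 75 minutes at max power

    class RAMP_PRCL2_GAIN(GuardState):
        def main(self):
            self.t0 = time.time()

        def run(self):
            # Linearly interpolate from the unthermalized to the thermalized gain
            frac = min((time.time() - self.t0) / RAMP_DURATION, 1.0)
            lo = lscparams.prcl2_gain['unthermalized']  # 1.0
            hi = lscparams.prcl2_gain['thermalized']    # 1.9
            ezca['LSC-PRCL2_GAIN'] = lo + frac * (hi - lo)  # channel name assumed
            return frac >= 1.0  # state completes once fully ramped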
This change to the THERMALIZATION guardian as well as the dictionary addition to lscparams have been committed to svn as r32157.
Jennie W, Rahul, Keita
This is just a summary of our work over the last two days trying to repeat the alignment coupling measurements for the replacement ISS array unit (D1101059, unit S1202965). We need to repeat these because we have now upgraded the washer and clamp plate in the QPD assembly. See Keita's previous alog for details.
Thursday
First we changed the input alignment to get roughly 4 V on each PD in the array; this is achieved by inserting the larger iris and using the two steering mirrors (M2 closest to the array, M1 further towards the laser) to change the input alignment of the auxiliary laser into the unit.
As we have tilted the QPD by adding the new components, we need to re-align the QPD to centre the beam (which is split off from the main beam entering the unit by the beam splitter on the elevator assembly that sits at one corner of the ISS array unit).
Then we unscrewed the four screws holding the QPD down (see image) and tried to move the QPD to minimise the coupling from yaw motion of the input beam into pitch. We only managed to minimise the pitch coupling and couldn't get the beam centred on the QPD in yaw, as the whole QPD unit moves a lot when not screwed down.
We screwed down the QPD but it was still off in yaw by a lot (see image).
As we were adjusting the input alignment mirror to check the coupling, I managed to lose the input alignment to the array.
Friday
Today Keita brought the input alignment back by using the beam viewer to check the position on the diodes while changing M2. Then we saw about 3.5-4V on each of the PDs in the array. Next we only undid the two lower screws on the QPD (these hold the QPD unit itself clamped to the platform it sits on; the two upper screws hold in the connector to the back of the QPD and were only slightly loosened). Keita moved the unit around until the QPD readout showed we were near centred, and then we screwed the unit down. The alignment moves while the unit is being screwed down, probably because of the angle of the QPD relative to the clamp.
For this alignment we used the QPD amplifier unit that gives a live visual readout of the centering.
We also have the option of using another amplifier that gives the QPD X, Y and SUM channels so we can read them on an oscilloscope, but these had some weird sawtooth noise on them (see image from Thursday). Keita then discovered that we were using the wrong cable (too low a current rating) for this amplifier; we searched for the correct one but could not find it. We will get back to this on Monday.
Summary: We think we now have the QPD in a good place relative to the PD array, as yaw and pitch are fairly decoupled, but the rotation angle of the QPD may still be slightly off, since the P and Y motion of the beam is still slightly off from the QPD quadrants. We need a new cable for the ON-TRAK amplifier.
Lockloss at 2025-06-25 21:45 UTC, probably from the wind. Three seconds before the lockloss, we had an EX saturation. The wind jumped up all of a sudden, peakmon jumped up (from very low to still pretty low, but LSC CPSFF was affected), DARM oscillated a bit, and almost all the ASC channels rang up starting 20-25 seconds before the lockloss.
TITLE: 06/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Observing and have been Locked for almost 2.5 hours. Range is okay at 145Mpc; it looks like it's been drifting down a bit since the lock started.
Looking at the lockloss from last night (2025-06-25 11:02 UTC - note that the 'refined lockloss time' is one whole second earlier than the actual lockloss time), it's immediately clear that we lost lock due to a quick ringup, but it is unclear where that ringup came from or what frequency it was actually at. We can see an 8Hz oscillation in DARM one second before the lockloss, but looking at the LSC channels, SRCL sees a 1.5 Hz oscillation right before the lockloss, and PRCL has a 4.5 Hz oscillation (the PRCL oscillation could be unrelated, although it does look like it grows a bit larger). In the ASC channels, MICH P OUT has a ringup at ~3.8 Hz in that last second, and some of the other ASC channels look like they may also have some sort of excursion right before the lockloss.
Ryan S., Elenna
Ryan and I are still trying to speed up the MOVE_SPOTS state. Today, Ryan implemented new code that checks the convergence of the loops and only ramps up the ADS gains of loops that are not yet converged, to help them converge faster. This appeared to work well, although the state is still slow. We are now taking the spots to the FINAL spots that the camera servos go to, instead of some old spot, so it's possible that the set of loops that start far off has changed.
Ryan also pointed out that the ENGAGE_ASC_FOR_FULL_IFO state is taking a while because it is limited by the convergence of the PIT3 ADS. This is likely because the POP A offset used in DRMI ASC is not quite right, so I adjusted it for pitch so the PRM should be closer to the full lock position. SDFed.
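In spirit, the convergence-gated ramping is something like the sketch below (the thresholds, gains, loop list, and channel names are all made up for illustration; this is not the production code):

    # Boost the ADS gain only for loops whose error signals have not yet
    # converged; already-converged loops stay at their nominal gain.
    ADS_DOFS = ['PIT1', 'PIT2', 'PIT3', 'YAW1', 'YAW2', 'YAW3']  # example list
    CONVERGED_THRESH = 0.3  # arbitrary illustrative units
    NOMINAL_GAIN = 1.0
    BOOSTED_GAIN = 4.0

    def update_ads_gains(ezca):
        for dof in ADS_DOFS:
            err = abs(ezca['ASC-ADS_%s_DOF_OUT16' % dof])  # hypothetical readback
            gain = NOMINAL_GAIN if err < CONVERGED_THRESH else BOOSTED_GAIN
            ezca['ASC-ADS_%s_DOF_GAIN' % dof] = gain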
With regards to ENGAGE_ASC_FOR_FULL_IFO, the three locks we've had since the adjustment made yesterday have taken an average of 4.5 minutes to get through the state. Before this change, it was taking us an average of 8.5 minutes (looking at the four locks before the change), so this is a big improvement for this state!
However, it looks like the main reason this state still takes a pretty long time compared to most other states is that it still needs to wait a long time for the PIT3 and YAW3 ADS to converge (ndscope). Here's the log from the last time we went through ENGAGE_ASC; you can see that most of the time is spent waiting for the ADS. The actual wait timers in there only account for 50 seconds of waiting, so the rest of the wait timers (the one-second timers) are just from the convergence checker.
I updated the POP A yaw offset so that PRC1 in DRMI will bring the PRM closer to the full lock point and hopefully make convergence in this state faster.
We're assembling the first unit that incorporates all upgrades, including the QPD tilt, and here are the minor problems we've stumbled upon. (No ISS array unit with the upgrade to tilt the QPD (E1400231) has been assembled before, as far as I can tell, and nobody seems to have updated all the drawings.)
The first picture is an example of the QPD before the upgrade. The QPD assembly (D1400139) and the cable connector assembly (D1300222) are mounted on the QPD platform by the QPD clamp plate (D1300963-v1, an older version) and a pair of split QPD connector clamps (D1300220). Two kapton insulation sheets protect the QPD assy from getting short-circuited to the platform.
After the upgrade, the QPD assy sits on top of a tilt washer (D1400146, called a beveled C-bore washer) that tilts the QPD by 1.41 deg in a plane that splits the YAW and PIT planes at 45 degrees (2nd picture). The bottom kapton goes between the washer and the QPD platform plate.
Problem 1: Insulation between the QPD clamp and the QPD pins is a bit sketchy.
A tilted QPD means that the bottom of the QPD assy is shifted significantly in YAW and PIT. A new asymmetric QPD clamp plate with tilted seating for the screws (D1300963-v2) has been manufactured to accommodate that. But we have no record of updated kapton insulators, so the center of the clamp bore doesn't agree with the kapton (3rd picture; note that the QPD rotation is incorrect in this picture, which had to be fixed when connecting the cable). Since the tilt washer is not captured by anything (it's just sandwiched between the clamp and the platform plate), it's not impossible to shift the QPD assy such that some of the QPD pins are grounded to the clamp and thus to the QPD platform plate.
You must check that there's no electrical connection between the QPD assy and the platform each time you adjust the QPD position in the lab.
Problem 2: New QPD connector clamp posts are too long, old ones are too short.
Old posts for the QPD connector are 13/16" long, which is too short for the upgrade because of the tilt washer; see the 4th picture, where things are in a strange balance. It seems as if it's working OK, but if you wiggle the post a bit so it slides laterally relative to the clamp and/or the platform, it settles to a different angle and suddenly things become loose. To avoid that, you have to tighten the screws so hard that they start bending (which may already be starting to happen in this picture).
Also, because the clamp positions are 45 degrees away from the direction of tilt, one clamp goes higher than the other.
To address these, somebody procured 1" and 15/16" posts years ago, but they're just too tall, to the point where the clamps are loose. If anything, what we need is probably something like 27/32" and 7/8" (maybe 7/8" works for both).
We ended up using the older 13/16" posts but added washers: two thin washers for the shorter clamp, two thin plus one thick for the taller one (5th picture). This works OK. The shorter screw is the original; the longer screw was too long but it works.
Problem 3: It's easy to set the rotation of the QPD wrong.
When retrofitting the tilt washer and the newer QPD clamp plate, you must do the following.
I screwed up and put the QPD on the connector at the wrong angle. It's easy to catch the error because no quadrant responds to the laser, but it's better not to make the mistake in the first place. It would help if the QPD assy barrel were marked at the cathode-anode1 corner.
It seems that D1300222 and D1101059 must be updated. Systems people please have a look.
D1300222: A tilt washer (D1400146), a new QPD clamp (D1300963-v2) and two sheets of kapton insulation are missing. Spacers are longer than 13/16".
D1101059: Explicitly state that part #28 (D1300963, QPD clamp) must be D1300963-v2.
I installed the beam dumps (which are two plates of filter glass, probably from Schott?) for the array after cleaning them according to E2100057.
There are marks that look like water spots and/or some fog that couldn't be removed by repeated drag wiping with methanol (see picture).
After installation, I found that these plates are very loosely captured between two metal plates (see the video); this seems to be by design. I don't like it, but the same design has been working in chamber for years.
Elenna, Sheila, Kevin, Matt, Camilla
For some thermalization tests, at 17:05 UTC we stepped the CO2 powers down from 1.7W to 0.9W each into the IFO. We expect the majority of the thermalization to take ~1 hour.
Beforehand, Sheila plugged in the freq noise injection cables in the LVEA PSL racks and Elenna turned on the AWG_LINES guardian.
I'm adding a detchar tag here in case anyone is wondering where all the lines are coming from in the data around this time- these are purposefully injected lines. If AWG_LINES is injecting, it will be in state 10. When IDLE (no injections), it is in state 2.
There were a few SDF diffs in ramp times before observing that I think came from running various scripts like the A2L and the DARM offset step. I reverted all of the changes so we (hopefully) don't get another SDF diff next time we lock.
To run the test that calculates how much loss we have between ASC-AS_C (the anti-symmetric port) and the DCPDs at the output of the OMC (where strain is derived from), run:
conda activate labutils
python auto_darm_offset_step.py
The script turns on one notch in the DARM LSC loop, then changes the PCAL line heights so all the power is in just two frequencies.
At the end it reverses both these changes.
It can be stopped using Ctrl-C, but this will not restore the PCAL line heights back to their default values, so you may have to use the time machine.
After setting up the PCAL, the script steps the DARM offset level in 120 s steps; I think it takes about 21 minutes to run.
After the script finishes, please put the OMC ASC MASTER GAIN back to its original value.
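For the curious, the heart of the script is presumably just a dwell loop over offset values, along the lines of this sketch (not the actual auto_darm_offset_step.py; the channel name and sweep values are placeholders):

    import time
    import numpy as np
    from ezca import Ezca  # site EPICS wrapper

    ezca = Ezca(prefix='H1:')
    darm_chan = 'OMC-READOUT_X0_OFFSET'  # hypothetical DARM offset channel
    nominal = ezca[darm_chan]
    # ~11 steps x 120 s dwell is consistent with "about 21 minutes" above
    for offset in np.linspace(0.5 * nominal, 1.5 * nominal, 11):  # placeholder sweep
        ezca[darm_chan] = offset
        time.sleep(120)
    ezca[darm_chan] = nominal  # restore the nominal DARM offset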
Jennie W, Keita, Rahul
Today the three of us went into the optics lab to upgrade unit S1202965 with the c'bore washer and QPD clamp plate that give a ~2 degree tilt between the QPD plane and the mount plate. See detail D in D1300720.
Looking at the assy solidworks file, LIGO-D1101059: if the back face of the photodiode array is facing to the back, the longer clamp leg points towards the front, and the notch on the tilt washer should be at approximately the 4 o'clock position.
We first checked the current alignment into the array photodiodes and realised the beam was off by a large amount in yaw from the entrance aperture.
Keita had to change the mounts for the PZT mirror and lens, as these were slightly tilted on the translation stage and it seemed like we needed a more robust alignment setup.
We then tried aligning with the upstream steering mirror and PZT mirror but could see multiple beams on each array PD. To check that the beam is not too large at the input aperture, we want to re-profile the beam size on the way into the ISS assembly.
We left the setup with the M2MS beam profiler set up at the corner of the table and the beam roughly aligned into it; more fine adjustment needs to be done.
The reason why the alignment was totally off is unknown. It was still off after turning on the PZT driver with an offset (2.5V) so it cannot be the PZT mirror. Something might have been bumped in the past month or two.
In preparation for the RCG upgrade, we are using the relocking time to reconcile SDF differences in the SAFE file.
Here are some of mine:
I have also determined that the unmonitored channel diffs in the LSC, ASC, and OMC models are guardian controlled values and do not need to be saved.
Not accepting or reverting the h1sqz, h1ascsqzfc, or slow controls cs_sqz sdfs (attached), as these have the same observe and safe files.
Accepted H1:TCS-ETMX_RH_SET{LOWER,UPPER}DRIVECURRENT as ndscope-ing shows they are normally at this value.
Some of these SDFs may have then led to diffs in the OBSERVE state. I have reverted the roll mode tRamp, and accepted the OSC gains in the CAL CS model.
I updated the OPTICALIGN OFFSETs for each suspension whose sliders we use. I tried using my update_sus_safesnap.py script at first, but even though it's worked one other time in the past, it was not working anytime I tried using it on more than one suspension at a time (it seems like it was only doing one out of each suspension group). I eventually got them all updated anyway. I'm attaching all their sdfs and will be working on fixing the script. Note that a couple of the ETM/TMS values might not match the setpoint exactly because the screenshots were taken during relocking, after they had moved a bit with the WFS.
Oli, Camilla, Sheila, RyanS
It was pointed out (84972) that our new SRCL offset is too big at the beginning of the lock, affecting the calibration and how well we are squeezing. Camilla had the idea of taking the unused-but-already-set-up THERMALIZATION guardian and repurposing the main state so it steps LSC-SRCL1_OFFSET from the LSC-SRCL1_OFFSET value at the end of MAX_POWER to the official offset value given in lscparams (offset['SRCL_RETUNE']). This stepping starts at the end of MAX_POWER and goes for 90 minutes. Here is a screenshot of the code.
To go with this new stepping, we've commented out the line (~5641) in ISC_LOCK's LOWNOISE_LENGTH_CONTROL (ezca['LSC-SRCL1_OFFSET'] = lscparams.offset['SRCL_RETUNE']) so that the offset doesn't get set to that constant and instead keeps stepping.
To get this to run properly while observing, we did have to unmonitor the LSC_SRCL1_OFFSET value in the Observe sdf (sdf).
Attached is a screenshot of the grafana page, highlighting the 33 Hz calibration line, which seems to be the most sensitive to thermalization. Before, when the SRCL offset was set static, it appears that the 33 Hz line uncertainty starts at about 1.09 and then decays down to about 1.02 over the first hour. With the thermalization adjustment of the SRCL offset from 0 to -455 over one hour, the 33 Hz uncertainty starts around 0.945 and then increases to 1.02 over the first hour. Seems like we overshot in the other direction, so we could start closer to -200 perhaps and move to -455.
We decided to change the guardian so that it starts at -200 before then stepping its way up to -455 over the course of 75 minutes instead of 90 minutes.
With the update to the guardian to start at -200, each calibration line uncertainty has actually stayed very flat for these first 30 minutes of lock (except for the usual very large jump in the uncertainty for the first few minutes of the lock).
This shows the entire lock using the thermalization guardian with the SRCL offset ramping from -200 to -455. The line uncertainty holds steady the entire time, within 2-3%!
J. Kissel

Executive Summary
I've tuned the lens position and measured the projected beam profile of one of three trial fiber collimators to be used for the SPI. The intent is to do this measurement before vs. after a vacuum bake, like that done for previous collimators (Section 3.3 of E1500384), to see if the bake will cause any negative impact on the performance of the collimator. It also helped clear out a few "rookie mistake" bugs in my data analysis, though there's still a bit of inconsequential confusion in post-processing the fit of the beam profile.

Full Context
The SPI intends to use supposedly vacuum compatible Schaefter + Kirchhoff (SuK) fiber collimators (D2500094, 60FC-0-A11-03-Ti) in order to launch its two measurement (MEAS) and reference (REF) beams from their fiber optical patchcords attached to feedthroughs (D2500175) into free space and through the ISIK transceiver / interferometer (D2400107). However, unlike their (more expensive) stainless steel counterparts from MicroSense / LightPath used by the SQZ team, SuK doesn't have the facilities to specify a temperature at which they believe the titanium + D-ZK3 glass lens + CuBe & Ti lens retaining ring assembly will fail (see E2300454). As such, we're going to run one SuK collimator through the same baking procedure as the MicroSense / LightPath (see Section 3.3 of E1500384) -- a slow 6 [hr] ramp up to 85 [deg C], a 24 hour hold, and a similar slow ramp down -- and see if it survives. A catastrophic "if it survived" result would be that the lens cracks under the stress of differential heating between the titanium, CuBe, and glass. As we don't expect this type of failure, in order to characterize "if it still functions," we want a quantitative "before" vs. "after" metric. Since I have to learn how to "collimate" this type of collimator anyway (where "collimate" = mechanically tune the lens position such that the emanating beam's waist is positioned as close to the lens as possible), I figure we use a 2D beam profile as our quantitative metric. We expect the symptom of a "failed" fiber collimator to be that "before bake it projects a lovely, symmetric, tunable, Gaussian beam, and after bake the profile looks asymmetric / astigmatic or the lens position is no longer consistently/freely adjustable."

The SPI design requires a beam whose waist (1/e^2) radius is w0 = 1.050 mm +/- 0.1 mm. That puts the Rayleigh range at zR = pi * (w0^2) / lambda = 3.25 [m], so in order to get a good fit on where the waist lies, we need at least one or two data points beyond the Rayleigh range. That means projecting the beam over a large distance, say at least ~5-6 [m].

Measurement Details
2025-06-03_SPIFC_OpticalSetup.pdf shows the optical setup. Pretty simple; just consuming a lot of optical table space (which we thankfully have at the moment). The "back" optical table (in IFO coordinates, the "-X" table, with the long direction oriented in the Y direction) already has a pre-existing fiber coupled, 1064 nm laser capable of delivering variable power of at least 100 [mW], so I started with that. Its output has an FC/APC "angled physical contact" connector, whereas the SuK fiber collimators have an FC/PC (flat) "physical contact" connector. I thus used a ThorLabs P5-980PM-FC-2 APC to PC fiber patch cord to couple to the SuK fiber collimator, assembled with a (12 mm)-to-(1 inch) optic mount adapter (AD12NT) and mounted in a standard 1" mirror mount, set at 5 inch beam height.
Ensuring I positioned the free-space face of the collimator at the 0 [in] position, I projected the beam down the width of the optical table, and placed steering mirrors at 9 [ft] 9 [in] (the maximum +Y grid holes), separated by 6 [in], in order to return the beam down the table. Along this beam, I marked out grid-hole positions to measure the beam profile with a NanoScan head at
    z = [0.508 0.991 1.499 2.007 3.251 4.496 5.41] [m]
These are roughly the even half-meter points one gets from "finding the 1 inch hole position that gets you close to 0.5 [m] increments", i.e. z = [1'8", 3'3", 4'11", 6'7", 10'8", 14'9", and 17'9"].

Using the SuK proprietary tooling, I then loosened the set-screws that had secured the lens in position with the 1.2 mm flat-head (9D-12), and adjusted the position of the lens with their short eccentric key (60EX-4), per their user manual (T2500062 == Adjustment_60FC.pdf), with the NanoScan positioned at the 5.41 [m] position. At the 5.41 [m] position, we expect the beam radius to be w(z=5.41 m) = w0 * sqrt(1 + (z/zR)^2) = 2.036 [mm], or a beam diameter of d(z=5.41 m) = 4.072 [mm]. Regretfully, I made the rookie mistake of interpreting the NanoScan's beam widths (diameters) as radii on the fly, and "could not get a beam 'radius' lower than 3.0 [mm]" at the z = 5.41 [m] position, as adjustments to the eccentric key / lens position approached a "just breathe near it and it'll get larger" level at that beam size. This will be totally fine for "the real collimation" process, as beam widths (diameters) of 4.0 [mm] required a much less delicate touch and were quite repeatable (I found, while struggling to achieve 3.0 [mm] width). Regardless, once I said "good enough" at the lens position, I re-secured the set screws holding the lens in place. That produced a d = 3.4 [mm] beam diameter, or 1.7 [mm] radius, at z = 5.41 [m]. I then moved the NanoScan head to the remaining locations (aligning the beam into the head at each position, as needed, with either the steering mirrors or the fiber collimator itself) to measure the rest of the beam profile as a function of distance.

2025-06-03_SPIFC_BeamScans_vs_Position.pdf shows the NanoScan head position and raw data at each z position after I tuned the lens position at z = 5.41 [m]. Having understood during the data analysis that the NanoScan software returns either 1/e^2 or D4sigma beam *diameters*, I followed modern convention and used the mean D4sigma values, and converted to radii with the factor of 2, w(z) = d(z) / 2. One can see from the raw data that the beam profile at each point is quite near "excellently Gaussian," which is what we hope will remain true *after* the bake.

Results
2025-06-03_spifc_S0272502_prebake_beamprofile_fit.pdf shows the output of the attached script spifc_beamprofile_S0272502_prebake_20250603.m, which uses Sheila's copy of a la mode to fit the profile of the beam.

Discussion
The data show a pretty darn symmetric beam in X (parallel to the table) and Y (perpendicular to the table), reflecting what had already been seen in the raw profiles. The fit predicts a waist w0 of (w0x, w0y) = (0.89576, 0.90912) [mm] at position (z0x, z0y) = (1.4017, 1.3469) [m] downstream from the lens. It makes total sense that the z position of the waist is *not* at the lens position, given that I tried to get the beam to as small a *diameter* as possible at the 5.41 [m] position, rather than what I should have done, which is to tune the lens position / beam diameter to the desired 4.072 [mm].
What doesn't make sense to me is that -- in trying to validate the fit and/or show that the beam behaves as an ideal Gaussian beam would -- I also plot the predicted beam radius at the Rayleigh range,
    w(zR) [model] = w0 * sqrt(1 + (zR/zR)^2) = w0 * sqrt(2)
or
    (wzRx, wzRy) [model] = (1.2668, 1.2857) [mm]
which is much larger than the fit predicts,
    (wzRx, wzRy) [fit] = (0.9677, 0.9958) [mm]
at a Rayleigh range, zR = pi * w0^2 / lambda, of
    (zRx, zRy) [from fit w0] = (2.3691, 2.4404) [m]
Similarly confusing: if I plot a line from the waist position (z0x, z0y) to the end of the position vector (6 [m]), whose angle from the x-axis is the divergence angle
    theta_R = lambda / (pi * w0)
i.e. predicting a beam radius at zEnd = 6 [m] of
    w(zEnd) = (zEnd - z0) * tan(theta_R)
this results in a beam radius at zEnd much smaller than the fit. Most demo plots, e.g. from wikipedia:Rayleigh_length or wikipedia:Beam_divergence, show that the slope of the line should start to match the beam profile just after the Rayleigh range. To be fair, these demos never have quantitative axes, but still. At least the slope seems to match, even though the line is quite offset from the measured / fit z > 6 [m] asymptote.

Conclusion
Though I did not adjust the lens position to place the beam waist at the desired position, I now have a good measurement setup and data processing regime to do so for the "production" pair of fiber collimators. For this collimator, since we're just looking at "before" vs. "after" bake data, this data set is good enough. Let's bake!
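For anyone repeating this analysis without MATLAB / a la mode, the fit itself is simple to reproduce in Python; here's a minimal sketch (with synthetic stand-in data where the real NanoScan D4sigma-derived radii would go):

    import numpy as np
    from scipy.optimize import curve_fit

    lam = 1064e-9  # [m]

    def w(z, w0, z0):
        """Gaussian beam 1/e^2 radius [m] at position z, for waist w0 at z0."""
        zR = np.pi * w0**2 / lam
        return w0 * np.sqrt(1 + ((z - z0) / zR)**2)

    # Measurement positions from above; radii = (mean D4sigma diameter) / 2.
    z = np.array([0.508, 0.991, 1.499, 2.007, 3.251, 4.496, 5.41])  # [m]
    # Synthetic stand-in data, scattered 1% around the fitted (w0, z0) quoted above:
    rng = np.random.default_rng(0)
    radii = w(z, 0.90e-3, 1.40) * (1 + 0.01 * rng.standard_normal(z.size))

    (w0_fit, z0_fit), _ = curve_fit(w, z, radii, p0=[1e-3, 0.0])
    zR_fit = np.pi * w0_fit**2 / lam
    print('w0 = %.3f mm at z0 = %.2f m; zR = %.2f m'
          % (w0_fit * 1e3, z0_fit, zR_fit))
    # Sanity check against the requirement: w0 = 1.050 mm gives
    # zR = pi * w0^2 / lam = 3.25 m and w(z0 + 5.41 m) = 2.04 mm (d = 4.07 mm).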
Ryan had a hard time locking the OMC, and there were no DCPD_SUM spikes as Ryan moved the PZT offset manually. We saw nothing on the OMC trans camera either.
I found that the OMC PZT2 monitor dropped to zero-ish at 14:23 PDT (21:23 UTC). That coincides with vacuum-related activity for HAM1, not sure if they are related.
In the mezzanine I found that the output of the HV driver was zero.
Pressing VSET and ISET, I saw that the driver was set to 110V, 80mA. Pressing RECALL -> ENTER didn't do anything. I also noticed at this point that the unit was somehow in CC (constant current) mode, which is usually automatically determined by the power supply. It should be in CV mode.
I turned the output off, asked Ryan to turn the PZT offset to zero (which means the middle of 0-100V range, i.e. 50V, so I should have asked -50V offset), power cycled the unit just because, pressed VSET again (it was still 110V), pressed ENTER, turned the output ON, and it started working again.
Ryan moved the PZT offset and the HV monitor responded. Shortly after this the IFO lost lock but I don't think that was related to the HV.
Corey, Craig and I had the exact same issue 2 weeks ago.
This is twice in a few weeks. Either we have a PZT drawing too much current or a power supply failing. We will swap the power supply Tuesday.
Since we've been seeing the ETMY roll mode consistently ringing up over the start of lock stretches, and since it can cause locklosses after long enough, Sheila modified the 'DAMP_BOUNCE' [502] state of ISC_LOCK to now engage damping of this mode with a gain of 40. The state has also been renamed to 'DAMP_BOUNCE_ROLL'. I have accepted the gain of 40 and tramp of 5 sec in the OBSERVE.snap table of h1susetmy and only the tramp in the SAFE.snap table (screenshots attached; we had originally set the gain at 30 but then updated it to 40, which I forgot to take a screenshot of).
We are still unsure as to why this roll mode has been ringing up since the vent, but so far Elenna has ruled out the SRCL feedforward and theorizes it could be from ASC, specifically CHARD_P (see alog84982 and comments).
I think this is causing us locklosses: twice we've lost lock in this state as the damping turned on while I slowly stepped through the states, and twice we've lost it a few seconds into POWER_10Ws when GRD was moving automatically. I reduced the gain to 30 from 40 (SVN committed and reloaded ISC_LOCK; I had to first commit the DAMP_BOUNCE_ROLL state edits) and doubled the tramp to 10 (SDFed in SAFE).
The reduced gain and increased tramp didn't stop it from killing the lock; as soon as it engaged, we lost lock. I've commented it out of ISC_LOCK - line 3937.
I think the BOUNCE_ROLL channel was mistyped in ISC_LOCK: the line is ezca['SUS-ETMY_M0_DAMP_R_GAIN'] = 40 where it should be ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 40? I should have noticed this earlier.
I edited the channel in ISC_LOCK to add "DARM_" but I did not get a chance to reload before we went into Observing.
TITLE: 06/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: Looks like H1 unlocked at 14:00 and just finished up running an initial alignment. Starting lock acquisition now.
The cause of the range drop and eventual lockloss this morning appears to be from the problematic roll mode we've been seeing recently (see alog84982).
If I see that it's still rung up once H1 relocks, I'll apply the damping gain of 30 that seemed to work yesterday evening.
Received phone call at 1:20amPDT.
Saw that H1 was at NLN, and ready to go to Observing, but could not due to SDF Diffs for LSC & SUSETMY (see attached screenshot):
1) LSC
I was not familiar with these channels, so I went through the exercise of trying to find their medm, but for the life of me I could not get there! The closest I got was the LSC Overview / IMC-MCL Filter Bank screen, but they were not on that medm (I probably spent 30 minutes looking everywhere and in between with no luck). I looked at these channels in ndscope, and they were at their nominals for the last lock. I also looked in the alog and only saw SDF entries for them from 2019 & 2020. Ultimately, I just decided to do a REVERT (and luckily, H1 did not lose lock).
2) SUSETMY
Then H1 automatically went back to Observe.
Maybe Guardian, for some reason, took these channels to these settings? At any rate, I'm going to try to go back to sleep since it has been an hour already (hopefully this does not happen for the next lock!).
These MCL trigger thresholds come from the IMC_LOCK Guardian and are set in the 'DOWN' and 'MOVE_TO_OFFLINE' states.
In 'DOWN', the trigger ON and trigger OFF thresholds are set at 1300 and 200, respectively, for the IMC to prepare to lock as seen in the setpoints from Corey's screenshot.
In 'MOVE_TO_OFFLINE', the trigger ON and trigger OFF thresholds are set at 100 and 90, respectively (for <4W input), as seen in the EPICS values from Corey's screenshot.
So, it would seem that after the lower thresholds were set when taking the IMC offline sometime recently, they were incorrectly accepted in SDF. I'll accept them as the correct values in the OBSERVE.snap table once H1 is back up to low noise, as I expect they'll show up as a difference again.
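In Guardian terms, the two states presumably just write the threshold pairs directly, along these lines (a sketch; the exact EPICS channel names here are my guess from the SDF table, and ezca is the Guardian runtime object):

    # In IMC_LOCK 'DOWN' (IMC preparing to lock):
    ezca['IMC-MCL_TRIG_THRESH_ON'] = 1300
    ezca['IMC-MCL_TRIG_THRESH_OFF'] = 200

    # In IMC_LOCK 'MOVE_TO_OFFLINE' (for <4 W input):
    ezca['IMC-MCL_TRIG_THRESH_ON'] = 100
    ezca['IMC-MCL_TRIG_THRESH_OFF'] = 90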