We have now been locked for over 16 hours.
IMC REFL DC power is steady at 18.5 mW
IMC WFS A is at 0.95 mW and IMC WFS B is at 0.75 mW
The IMC power in is 62 W and the power at IM4 trans is 56.7 W
MC2 trans is about 9670 [mystery units]
This is reasonable power for IMC refl, but the WFS power is very low. These are the jitter witnesses, and jitter subtraction is not performing as well as it was before the power outage. I can think of several possible reasons for this, but I'm sure that having less than a mW of power isn't helping.
We may want to consider a) increasing the power on the IMC refl path, b) changing the splitter between IMC refl and IMC WFS from a 90/10 to a 50/50 (with the ~20 mW currently split roughly 18.5/1.7 between the two paths, a 50/50 would put roughly 10 mW on each), or c) some combination of the first two options that gets us reasonable power on both IMC refl and IMC WFS.
The numbers are confirmed to have held through the entire 40 hours of this most recent lock (killed by earthquake).
Wed Sep 17 10:08:25 2025 INFO: Fill completed in 8min 22secs
Gerardo confirmed a good fill curbside.
Yesterday it initially looked like the BRSY was rung up by activity at EY, but by the end of the day there was still no damping going on, and DIAG_MAIN as well as SEI_CONF had notifications about BRSY. I spoke with Jim and he walked me through the same procedure that he and I did back in July (alog86074) when this happened last. The difference between last time and this time is that the checks I put in worked and we caught this much sooner. I also now have the correct way to remote into the BRS machine.
After logging in and recapturing frames two times, the BRS is damping nicely.
TITLE: 09/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
H1 has been locked and Observing for 12 hours.
The Violins are very upset; it seems like ETMY mode 1 is angry. The DCPDs are very diverged.
Main DARM screen is being restarted.
A new Diag_Main:
SEI_STATE: SEI_CONF might be stuck
15:30 UTC I put the nominal gain of -0.1 into ETMY mode1 and it started to damp down.
It looked like it was damping at the beginning of the lock with a negative gain, but at 03:44 UTC the gain was changed from -0.2 to +0.2 without pausing at zero, which would have been the safe thing to do. The mode almost immediately started to ring up until guardian turned it off at 04:38 UTC. It was damping with the nominal gain at the start of the lock, but the gain was increased in a few steps from -0.1 to -0.4. ETMY mode1 is a finicky mode; increasing the gain does not always increase the damping rate.
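As an aside, a minimal sketch of the "pause at zero" practice mentioned above (this would run inside guardian, where the ezca object is provided; the channel name follows the usual SUS violin-mode damping convention and the settle time is illustrative):

    import time
    chan = 'SUS-ETMY_L2_DAMP_MODE1_GAIN'  # ETMY violin mode1 damping gain
    ezca[chan] = 0.0    # take the gain to zero first
    time.sleep(10)      # let the damping filter output settle before changing sign
    ezca[chan] = -0.1   # then apply the nominal (negative) gain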
TITLE: 09/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
H1 Back To OBSERVING Again & leaving it so overnight (Elenna messaged TJ [owl] about status).
Day 6 post-Power Outage (from noon PDT last Wednesday).
Hand-off from TJ at beginning of shift:
End Of Shift Notes:
LOG:
Locking Notes
Jonathan and Tony fixed the nuc30 missing window manager; FOMs are back working again.
H1 made it to NLN, and there were a couple SDFs:
Violin fundamental was just above 1e-15, but now the violins are being damped (a handful have some extra gain to help expedite things).
Elenna messaged TJ (owl shift) to give him a heads up that he may need to do Guardian code work.
Tagging OpsInfo and guardian since this log includes some big changes.
Tonight, after several tries and failures, I found a workaround for the carm offset reduction sequence that should get us locked. I have adjusted the code and proofread it multiple times, but it is currently untested in the sense that I have not run it; I have only replicated in code the steps that I took to get us locked.
The major problem I faced tonight is that I could not engage DHARD at DHARD WFS; the CARM offset that is set in CARM_150_PM is just too far off the fringe to make the DHARD signal any good. I found that as soon as the CARM OFFSET REDUCTION state ran, the DHARD signal was actually usable for control. The first thing CARM OFFSET REDUCTION does is set H1:LSC-TR_CARM_OFFSET to -7, whereas CARM 150 PM ends with this offset at -3. I found that getting this offset to -7 or -8 is sometimes close enough to make the DHARD signal "real". I tried just setting the value in CARM 150 PM, but I had one lockloss with that strategy; I'm not sure if it was the cause. I finally decided that I think the correct order here (for now) should be CARM 150 > DARM TO RF > PARK ALS VCO > SHUTTER ALS > CARM OFFSET REDUCTION > DHARD WFS. DHARD WFS usually comes right after DARM TO RF, so this involves moving the DHARD engagement up a bit (see the sketch after this paragraph). This is a little risky, as anyone who watches the AS camera during carm offset reduction knows, because the arm alignment starts to get really shaky as the arms get closer to resonance. However, I've been doing a version of this for a bit now as part of debugging this sequence, and I think it generally works.
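For concreteness, a minimal sketch of what this reordering looks like in a guardian node, where the allowed path through the states is set by the module-level edges list of (from_state, to_state) tuples (the state name spellings here are illustrative, not the actual ISC_LOCK diff):

    edges = [
        ('CARM_150_PICOMETERS', 'DARM_TO_RF'),
        ('DARM_TO_RF', 'PARK_ALS_VCO'),           # DHARD_WFS used to follow DARM_TO_RF
        ('PARK_ALS_VCO', 'SHUTTER_ALS'),
        ('SHUTTER_ALS', 'CARM_OFFSET_REDUCTION'),
        ('CARM_OFFSET_REDUCTION', 'DHARD_WFS'),   # DHARD engagement moved here
    ]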
Besides that problem, we had several locklosses tonight around CARM 5 PM, which is where Sheila made several changes to follow our "new recipe" for getting CARM to REFL. A major change we are making here is that we further reduce the CARM offset and further bump up the PRCL gain while it is on REFLAIR 27I. PRCL seems to be losing too much gain as we reduce the offset, and that causes locklosses when we reach resonance. However, once PRCL is on POP the gain is fine, so Sheila and I chose to edit the LSC input matrix value. I had to correct some errors to ensure the correct intrix value is changed and that it is changed to the correct value. Then I realized that in our "recipe" steps we actually ran the entire usual CARM 5 PM state first, and then we ran our new steps, which hard-code a TR carm offset and such. So I edited the code to run the usual CARM 5 PM steps, and then, if this "after_power_outage" value is set to True, it will also run the additional recipe steps (see the sketch below).
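A minimal sketch of that control flow (the after_power_outage flag is as described above; the helper functions are hypothetical stand-ins for the real state code):

    from guardian import GuardState
    import lscparams  # site parameter module referenced in this log

    def run_usual_carm_5_pm_steps():
        """Placeholder for the normal CARM_5_PM sequence."""

    def run_recipe_steps():
        """Placeholder for the extra hard-coded recipe steps."""

    class CARM_5_PICOMETERS(GuardState):
        def main(self):
            run_usual_carm_5_pm_steps()      # always run the usual steps first
            if getattr(lscparams, 'after_power_outage', False):
                run_recipe_steps()           # extra steps only while recovering
            return True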
That is a lot of words, so below I am going to write down the steps of what should happen, if all goes correctly:
Here are some other details:
Since DHARD WFS was behaving bizarrely, I did leave the green shutters open and check the green alignment once the ASC converged, using the normal technique of running the green QPD offsets while the IFO ASC converges so the green alignment follows the IFO alignment. Once that completed, I checked and all the offsets appeared to be the same, with the largest difference being less than 0.1. Therefore, I concluded that it is unlikely that we need to reset the green alignment, and that whatever is causing the DHARD issue is not due to a bad alignment.
When I moved DHARD WFS around in the ladder, I realized that this messes up the IDLE_ALS option. I'm hoping that this DHARD change is temporary, so I just commented out the ladder options that include IDLE_ALS.
We had a mystery lockloss during MOVE SPOTS that I don't understand.
Based on our calculations of the PSL>IMC throughput, I determined that the PSL power should be set to 62 W requested, which should give us 56 W on IM4 trans (we get about 91% now versus 93% before the power outage; 0.91 × 62 W ≈ 56.4 W). I confirmed that is the case: IM4 trans is 56.7 W now, measured before we went to laser noise suppression (where the ISS second loop is engaged). I edited the NLN power value in lscparams and reloaded both the ISC_LOCK and LASER_PWR guardians.
Here are the main changes to isc/h1/guardian/ISC_LOCK.py as displayed by meld, the new code is on the right.
I have not committed ISC_LOCK.py to the subversion repository, so 'svn di ISC_LOCK.py' still shows the recent changes.
Tony, Sheila
I've reverted the run state of CARM_5_PICOMETERS to the way I left it in 86974; I believe that the way I wrote it was correctly doing the TR_CARM offset reduction and gain changes after the usual previous steps. This is committed in the svn as r33120, but it hasn't been loaded since we are in observing.
I left the change to the DHARD engagement order in. We used to do the DHARD engagement later in the carm offset reduction like this, but we've found the process to be much more tolerant to variations in the initial alignment since we moved this step lower on the fringe. Perhaps we need to check the phasing of the AS45 WFS to see if something is wrong with our error signal, so that we can move this earlier.
Sheila reverted the code to her original method, which was fine except for a few errors:

The misspelled channel name

    ezca['ASC_DHARD_P_TRAMP']

caused a channel connection error last night; the correct name is

    ezca['ASC-DHARD_P_TRAMP']

The PRCL input matrix line is now

    ISC_library.intrix['PRCL', 'REFLAIR_B27I'] = 1.6*lscparams.gain['DRMI_PRCL']['3F_27']
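As an aside on why the underscore version fails: EPICS channels here are named IFO:SYS-SIGNAL, and ezca prepends the IFO prefix, so the first separator after the subsystem must be a hyphen. A minimal illustration (run inside guardian, where the ezca object is provided):

    tramp = ezca['ASC-DHARD_P_TRAMP']  # resolves to H1:ASC-DHARD_P_TRAMP
    # ezca['ASC_DHARD_P_TRAMP'] would look for H1:ASC_DHARD_P_TRAMP,
    # which doesn't exist, hence the channel connection error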
I saved these changes in ISC_LOCK, but did not load. Tony made a note to load the guardian at the next lockloss.
TITLE: 09/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Lots of work today to understand our troubled IFO as well as some maintenance items. We perhaps have narrowed down the change from the power outage and have changed the input power into the PMC to bring us back to a similar place. We have run an initial alignment, which probably could have run automated, and we are now going through PRMI/DRMI for a second time. I accepted some SDFs in safe before we started locking, see screenshots attached.
BRSY was rung up during some work at EY today and doesn't seem to be damping. I've contacted Jim; in the meantime we will keep an eye on it.
LOG:
Start Time | System | Name | Location | Laser Haz | Task | Time End |
---|---|---|---|---|---|---|
14:36 | SYS | Randy, Mitchell, Chris | EY | n | Craning spiral staircase out | 16:36 |
14:58 | FAC | Contractor (C&E) | Vertex | n | Fire hydrant repair in vertex area near FCBTE | 20:19 |
15:11 | CDS | Ken | LVEA | n | HAM5/6 cable tray install | 19:04 |
15:17 | FAC | Nelly | LVEA | n | Tech clean | 16:40 |
15:33 | SPI | Jeff | Opt Lab | n | Parts | 15:38 |
15:34 | - | Jennie, parents | LVEA | n | Tour | 16:11 |
15:35 | CDS | Fil | LVEA | LOCAL | IOT2 table enclosure lights | 15:59 |
15:38 | VAC | Janos, Travis | MY | n | Pump work | 19:22 |
15:45 | ISC | Camilla | Opt Lab | n | Grab equipment | 15:50 |
15:46 | ISC | Camilla, Sheila | LVEA | LOCAL | IOT2 table checks | 17:09 |
15:46 | PSL | Jason, Ryan S | PSL enc | YES | PSL FSS adjustment | 18:54 |
15:56 | PEM | Ryan C | LVEA | n | Dropping off dust monitor testing equipment for the PSL team | 15:58 |
16:00 | SUS | Ryan C | EX | n | SUS charge meas. | 17:59 |
16:15 | SEI | Jim | LVEA | n | Replace CPS card for HAM3 | 17:09 |
16:41 | FAC | Nelly | PSL enc | YES | Tech clean | 16:46 |
16:41 | SYS | Betsy | LVEA | n | Mega clean room sock meas. and check on status of work | 16:55 |
16:52 | ISC | Elenna | LVEA | n | Unplugged SR785 and other test equipment | 16:52 |
17:24 | FAC | Nelly | EX | n | Tech clean | 17:58 |
17:32 | VAC | Gerardo, Jordan | LVEA | n | AIP check at HAM6 | 17:38 |
17:33 | FAC | Tyler | Mids | n | 3IFO checks | 18:33 |
17:59 | FAC | Nelly | HAM Shack | n | Tech clean | 18:34 |
19:59 | CDS | Marc | MY | n | Grabbing a chassis | 21:36 |
20:20 | PSL | Jason, Ryan S | PSL enc | YES | Table test | 21:55 |
20:27 | VAC | Gerardo, Camilla | LVEA | n | Looking for viewport covers | 20:46
20:46 | VAC | Gerardo | LVEA | n | HAM6 AIP | 20:48 |
20:54 | PCAL | Francisco | PCAL lab | Local | PCAL lab work | 21:21 |
22:28 | - | Oli | LVEA | n | Sweep | 22:49 |
Interestingly, we saw a slight rise in the IOP duotones for all four EY front ends, which coincided with the spiral-staircase craning.
The plot shows all four IOP DUOTONE channels, AC2 power strip current usage, and building lights. The sequence is:
07:51 lights on
08:04 duotone rise
09:13 duotone starts dropping, AC2 less noisy
09:32 lights out
On the night of Sept 10 the IMC REFL power slowly increased all night. To avoid this happening in the future, we decided to have H1_MANAGER check this and kill the lock before calling someone. This is the first instance in guardian, that I'm aware of, where we intentionally kill the lock.
This takes the form of a decorator in the Low_Noise state of H1_MANAGER. The threshold is currently set to 35, and if IMC REFL goes above that, the node will kill the lock and move ISC_LOCK to IDLE before going to ASSISTANCE_REQUIRED itself.
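A minimal sketch of how such a decorator could look (this would run inside a guardian manager node, where ezca and nodes are provided; the channel name and decorator name are illustrative, not the actual H1_MANAGER code):

    IMC_REFL_MAX = 35  # threshold from this entry

    def check_imc_refl(run):
        # wrap a state's run method; jump out if IMC REFL power is too high
        def wrapped(self):
            if ezca['IMC-REFL_DC_OUT16'] > IMC_REFL_MAX:  # channel name illustrative
                nodes['ISC_LOCK'] = 'IDLE'                # kill the lock
                return 'ASSISTANCE_REQUIRED'
            return run(self)
        return wrapped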
We tested it while not in low noise by putting an offset in IMC_REFL, and the decorator moved H1_MANAGER to its Relocking state.
TITLE: 09/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
Site Power Outage recovery continues (with H1 down, though it was up last night/18 hrs ago for almost 3 hrs). Saw H1 locking DRMI as I walked in, but it lost lock... though it looks like this was intentional: Guardian tests are being run so that Owl Shift calls can happen again.
BRSY is NOT in use (it's been noisy since around the beginning of maintenance today). TJ says it might get a Beckhoff reboot in the morning.
We have been doing some things by hand to lock recently; I've put a flag in CARM_5_PM to do these automatically.
This will take the TR_CARM offset to -52 (instead of the calculated offset), wait, then increase the TR_CARM to 2.1, set the offset to -56, increase the DHARD P gain to -30 and the Y gain to -40, and increase the input matrix element for REFLAIR 27 I to PRCL by a factor of 1.6.
Of these, I think the DHARD gain increases are probably a good thing that we would want to keep in the long run. The PRCL gain we plan to check early in the locking sequence (sketched below).
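A condensed sketch of those flag-gated steps (the TR_CARM offset and intrix lines follow channels/code quoted elsewhere in this log; the wait time, the gain channel for the "2.1" step, and placing this inside the CARM_5_PM state are assumptions):

    import time
    if lscparams.after_power_outage:
        ezca['LSC-TR_CARM_OFFSET'] = -52   # instead of the calculated offset
        time.sleep(10)                     # wait (duration illustrative)
        ezca['LSC-TR_CARM_GAIN'] = 2.1     # hypothetical channel for the TR_CARM 2.1 step
        ezca['LSC-TR_CARM_OFFSET'] = -56
        ezca['ASC-DHARD_P_GAIN'] = -30     # DHARD P gain increase
        ezca['ASC-DHARD_Y_GAIN'] = -40     # DHARD Y gain increase
        # bump the REFLAIR 27 I -> PRCL input matrix element by 1.6x
        ISC_library.intrix['PRCL', 'REFLAIR_B27I'] = 1.6*lscparams.gain['DRMI_PRCL']['3F_27']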
R. Short, J. Oberling
Today we tuned up the RefCav alignment for the PSL FSS, since the TPD had been doing its usual wander and also had a drop after last week's power outage. As usual, we began with a power budget:
The AOM single-pass diffraction is lower than usual, so we adjusted the AOM alignment to improve it. While we were doing this we kept seeing small drops in the AOM output that we couldn't recover with alignment, which made us think we were walking the alignment off. It turns out the PMC transmission was slightly dropping. We remeasured the power incident on the AOM and, sure enough, it had dropped a little, to 265.1 mW. The diffraction efficiency looked good, so we stopped moving the AOM and moved on to adjusting M21 to improve the double-pass diffraction (we always have to adjust this mirror if the AOM is moved). The results:
There was no clipping through the EOM, but we adjusted it to better center the beam through the input and output apertures.
In recovering the RefCav, we decided to check the beam alignment on the transmission PD, so I set up a LEMO splitter so we could see the DC voltage on a voltmeter. The RefCav locked without much issue, and Ryan tweaked the beam alignment with the picomotor mirrors. Interestingly, the TPD voltage in MEDM did not match the voltage read by the voltmeter, so a recalibration there was necessary. The TPD readings were not close to each other at all, so we trended back to see the last time the TPD was calibrated. It turns out this was at the original PSL install in 2012; it hasn't been updated since. Well then. Alignment tweaking results:
The negative sign is normal on these PDs from AEI, as they were designed to output a negative voltage; the sign gets flipped in the MEDM filter module. I tweaked the beam alignment on the TPD, which increased the voltmeter voltage to -0.535 V, so it was decently aligned already. We recalibrated the MEDM reading to match the voltmeter, and updated the offset to cancel out the small dark offset observed when the TPD was blocked. Results:
We then aligned the beam onto the RFPD and measured the RefCav visibility:
We were done in the enclosure at this point, so we exited and returned the enclosure to Science Mode. Outside, we set up a network analyzer to measure the FSS TF. With a common gain of 14 the FSS UGF was ~380 kHz. We increased the common gain to 15 and the UGF rose to 445.6 kHz. This is closer to where we generally like it, so we left the common gain at 15 (this can be reverted should there be an issue). Ryan has a picture of the TF that he'll post as a comment to this alog.

With the lower, but now correct, TPD reading in MEDM, a couple of guardian thresholds required updating: the TPD light on the Ops Overview screen now comes on if the TPD is less than 0.4 V, and the "FSS Transmission Low" warning in DIAG_MAIN now comes on when the TPD is less than 0.3 V. We'll monitor this over the coming days/weeks and adjust as needed. This closes WP 12796.
LVEA was swept and everything looks good
Jason Ryan Jenne Daniel
We reduced the PSL power after the amplifier and before the AOM by a factor of 2.3. We then re-adjusted the power into the IMC to get back to 2 W. This reduced the IMC reflected light power by a factor of ~2.7; the fact that the reflected power dropped by more than the power reduction itself seems to strongly indicate that we have a heating issue in the path from the PMC to the polarizer. The following tests were run:
 | Normal PSL setup | PMC inp | EOM inp power reduced | EOM inp power reduced
---|---|---|---|---
EOM power (W) | 115 | 50.8 | 49.6 | 87
IMC input power (W) | 2.01 | 2.05 | 2.02 | 2.04
IMC REFL power (mW) | 1.10 | 0.405 | 0.49 | 0.64
Attaching a trend showing that the PMC refl has been increasing gradually since Feb, and that there was a jump at the time of the power outage.
When the power into the PMC was low, the mode matching was worse; this is probably expected due to thermal lensing in the PMC.
Elenna, Sheila
Before the power outage, IM4 trans was 93% of the IMC input power. Yesterday, we had 90% of IMC input power at IM4 trans. Today after lowering the power through the EOM, we have 91%.