I tried to get a better measurement of the ETMX charge via its OPLEVs this morning. I started with a higher gain on the ESD_OUTPUT filters than last time (0.75 instead of 0.5; a gain of 1 worked until recently). This measurement had pretty low coherence like last week, so I ctrl+c'd it and increased the drive_amp by 20%. That measurement was a little better, but still not as good as I would like or as we have had in the past, so I restarted it with another 20% increase in drive_amp and ran 3 measurements in this configuration. The preliminary processing looked decent, but the full processing still revealed pretty large error bars.
This measurement is better than last week's, but there is still room for improvement. I brought the drive_amp up from 11000 to 20600 with ESD_OUTPUT gains of 0.75 and still wasn't saturating, so I have more room to push to improve the measurements and increase the coherence.
Here is a side-by-side of powers at the ports before and after the power outage, using last night's lock at 2 hours 55 minutes after the end of max power versus a lock from before the power outage at the same 2 hours 55 minutes from max power.
| Quantity | Now | Then | Ratio (now/then) |
| IM4 trans (W on PRM) | 56.5 | 56.0 | 1.008 |
| PRG (W/W) | 49.5 | 49.6 | 0.997 |
| LSC POP A (mW on diode) | 31.76 | 31.58 | 1.006 |
| LSC REFL A (mW on diode) | 7.64 | 7.59 | 1.006 |
| OMC REFL (mW?) | 684.2 | 698.0 | 0.980 |
| X arm circ (kW) | 379.4 | 376.2 | 1.008 |
| Y arm circ (kW) | 379.9 | 375.8 | 1.01 |
| AS_C (W into HAM6) | 0.680 | 0.695 | 0.978 |
| kappa_c | 0.967 | 0.988 | 0.979 |
| f_c (Hz) | 446.5 | 445.5 | 1.002 |
It seems like the input power, POP, LSC REFL, and circulating power numbers hang together. The OMC REFL and AS_C numbers also hang together. Sheila and I discussed that we would expect kappa_c to increase if the arm power increased, but it appears that the arm power increased while kappa_c decreased, following the OMC REFL and AS_C values.
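The now/then ratios in the table can be reproduced with a quick check (a sketch of my own, not a site tool; values copied from the table above):

```python
# Recompute the now/then ratios from the table of powers at the ports.
quantities = {
    # name: (now, then)
    "IM4 trans (W)":   (56.5, 56.0),
    "PRG (W/W)":       (49.5, 49.6),
    "LSC POP A (mW)":  (31.76, 31.58),
    "LSC REFL A (mW)": (7.64, 7.59),
    "OMC REFL (mW)":   (684.2, 698.0),
    "X arm circ (kW)": (379.4, 376.2),
    "Y arm circ (kW)": (379.9, 375.8),
    "AS_C (W)":        (0.680, 0.695),
    "kappa_c":         (0.967, 0.988),
    "f_c (Hz)":        (446.5, 445.5),
}

ratios = {name: now / then for name, (now, then) in quantities.items()}
for name, r in ratios.items():
    print(f"{name:16s} {r:.3f}")
```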
Sheila, Elenna, Camilla. WP #12797. IOT2L layout D1300357.
We took the PSL input power down to ~100mW, locked out the rotation stage, and then used the NanoScan to take some beam profiles in the IMC REFL path on IOT2L. Elenna and Sheila also moved a beam dump to fully block the MC REFL rejected beam that was getting onto the MC REFL camera.
| Location | D4 Sigma A1 Horizontal (um) | D4 Sigma A2 Vertical (um) | D4 Sigma A1 at 45deg (um) | D4 Sigma A2 at 45deg (um) |
| A: Profiler 11 3/4" upstream of IO_MCR_BS1 | 4685 | 4240 | 4345 | 4450 |
| B: Profiler 10 1/16" downstream of IO_MCR_BS1 (7" + 1 1/4" + 1 13/16") | 4678 | 4647 | 4605 | 47921 |
| C: Profiler 14 1/2" downstream of IO_MCR_BS1 (7" + 1 1/4" + 6 1/4") | 4800 | 4807 | 4784 | 4676 |
By eye the beam looked Gaussian, and the numbers show it is mostly symmetric. The last measurements were maybe from when the WFS were being installed in 2013, alog 6439.
For positions B and C, we added a temporary steering mirror between IO_MCR_M7 and IO_MCR_L2. Distance between IO_MCR_BS1 and IO_MCR_M7 = 7"; distance between IO_MCR_M7 and the temporary steering mirror = 1 1/4".
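To quantify "mostly symmetric," the horizontal and vertical D4-sigma widths from the table can be compared directly (a sketch of my own, not part of the measurement procedure):

```python
# Horizontal vs. vertical D4-sigma widths (um) from the table above.
profiles = {
    "A": (4685, 4240),
    "B": (4678, 4647),
    "C": (4800, 4807),
}

def ellipticity(widths):
    """Min/max ratio of the measured widths; 1.0 means perfectly round."""
    return min(widths) / max(widths)

for loc, widths in profiles.items():
    print(loc, round(ellipticity(widths), 3))
```

All three locations come out within ~10% of round, consistent with the by-eye assessment.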
Tue Sep 16 10:07:25 2025 INFO: Fill completed in 7min 21secs
TITLE: 09/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY: No alarms found this morning. Planned lighter maintenance today so we can get back to locking and understanding the power losses.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
Literally everyone (to name who I know/can recall would be unfair)
TITLE: 09/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: IDLE
INCOMING OPERATOR: NONE
SHIFT SUMMARY:
IFO is in IDLE and DOWN for CORRECTIVE MAINTENANCE buuuuut IFO was OBSERVING for ~1 hr.
Lockloss was intentional, in order to avoid potentially harmful locklosses and issues throughout the night. We actually got to NLN and OBSERVING though!
It's not exactly over yet, since we have only been locked for 2 hours; problems may still be there and dormant. Nevertheless, we got here from an outrageous outage recovery and a nasty ISC/lock reacquisition that took 5 days to get back into observing.
The main things:
How we got to NLN:
Alog 86951 summarizes the lock acquisition. We just sat at OMC_WHITENING for a bit as the violins fell (and continued to).
After NLN:
LOG:
None
Just want to add some notes about a few of these SDFs
In this alog I accepted the TCS SIM and OAF jitter SDFs incorrectly. The safe restore had restored old values, and I mixed up the "setpoint" and "epics value" columns (a mistake I have made before and will likely make again). I should have reverted these values last week instead of accepting them.
Luckily, I was able to look back at Matt's TCS SIM changes, and I have the script that set the jitter cleaning coefficients, so I was able to reset the values and SDF them in safe. Now they are correctly SDFed in observe as well.
Start Time: 1442034629
End Time: 1442034940
2025-09-15 22:15:22,374 bb measurement complete.
2025-09-15 22:15:22,375 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250916T051011Z.xml
2025-09-15 22:15:22,375 all measurements complete
Ibrahim, Elenna, Jenne, Ryan S
We made it to LASER_NOISE_SUPPRESSION!
Here's how we got here again and here's what we're monitoring
After much-needed help from Ryan S, Elenna, and Jenne with initial alignment (a manual initial alignment was needed), we managed to get back to CARM_5_PICOMETERS, but encountered the same SRM ringup upon getting to RESONANCE, which caused a lockloss.
Elenna guided me through the following combination of steps tried this morning:
1. Get to CARM_5_PICOMETERS
2. Set TR_CARM_OFFSET to -52
3. Set the REFLAIR_B_RF27 I PRCL matrix element to 1.6× its current value, then press LOAD MATRIX. This is reached via LSC -> LSC_OVERVIEW -> AIR3F (under the input matrices area on the left of the screen; look for REFLAIR_B_RF27 on the left).
4. Continue locking. SRM rang up as before and saturated once, but with the new gain it was not enough to cause a lockloss.
This worked, and now we're at OMC_WHITENING with high violins (so understandable, sorry fibers).
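Steps 2-3 above can be written out schematically. This is NOT the script we ran; the channel names are my assumptions and a plain dict stands in for the EPICS interface (on site this would go through ezca or caput):

```python
# Schematic of steps 2-3: set the TR_CARM offset, then scale the
# REFLAIR_B_RF27 I -> PRCL input-matrix element by 1.6x, whatever
# value it is currently at. Channel names here are hypothetical.
epics = {
    "LSC-TR_CARM_OFFSET": 0.0,
    "LSC-PRCL_INMATRIX_REFLAIR_B_RF27_I": 1.0,  # placeholder current value
}

# Step 2: set the TR_CARM offset.
epics["LSC-TR_CARM_OFFSET"] = -52

# Step 3: multiply the matrix element in place by 1.6.
epics["LSC-PRCL_INMATRIX_REFLAIR_B_RF27_I"] *= 1.6
```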
Now I'm monitoring H1:IMC-REFL_DC_OUT16, watching for an increase in power (which would be bad). We've been at 63W (the new 60) for 18 minutes; here's a plot of that.
I don't know the calibration, but CDS OVERVIEW is reading out 147 Mpc.
[Fil, Jeff, Dave, Erik, Patrick]
As part of the hunt for problems related to the power outage on September 10, Dave noticed that the channel H1:ISC-RF_C_AMP24M1_POWEROK had moved from 1 before the outage to 0 after.
Jeff determined that the output of the amplifier was nominal, so it was likely a problem with Beckhoff and not with the amplifier itself.
Fil inserted a breakout board between the amp and the Beckhoff cable and recorded a voltage drop from 3.3 to 2.2 volts on the POWEROK signal (pin 7).
Some more testing showed the problem is definitely at the Beckhoff end. The associated terminal may need to be replaced.
J. Oberling, K. Kawabe
This afternoon we went into the PSL enclosure to inspect things after last week's power outage. We concentrated on the IOO side of the PSL table, downstream of the PMC. Our results:
We did a visual inspection with both the IR viewer and the IR-sensitive Nikon camera and did not find any obvious signs of damage on any of the optical surfaces we had access to; the only surfaces we couldn't see were the crystal surfaces inside the ISC EOM, but we could see everything else. I also looked at the optics between Amp2 and the PMC, and everything there looked normal with no signs of anything amiss.
While the beam was not perfectly centered on every optic, we saw no clipping anywhere in the beam path. The irises after the bottom periscope mirror were not well centered, but they've been that way for a while so we didn't have a good reference for assessing alignment in that path (these irises were set after the O4 PSL upgrade, but there have been a couple of alignment shifts since then and the irises were not reset).

For reference, the beam is in the -X direction on the HWP in the power control rotation stage and in the -Z direction (but centered horizontally) on the PZT mirror after the power control stage. We do have a good alignment reference on the ALS path (picked off through mirror IO_MB_M2, the mirror just before the ISC EOM), as those were set as part of the HAM1 realignment during this year's vent. By my eye the first iris looked a tiny bit off in yaw (-Y direction) and pitch (+Z direction), while the second iris looked perfectly centered. We found this odd, so Keita used the IR-sensitive camera to get a better angle on both irises and took some pictures. With the better angle the beam looked well centered in yaw and maybe a little off in pitch (+Z direction) on that first iris, so I think my eye was influenced by the angle from which I was viewing the iris. The second iris still looked very well centered.

Edit to add: Since the ALS path alignment looks good, to me this signals that there was not an appreciable alignment shift as a result of the change in PMC temperature. If the PMC was the source of the alignment shift we would see it in both the main IFO and ALS paths. If there is a misalignment in the main IFO path, its source is not the PMC. Upon further reflection, a more accurate statement is: If the PMC is the source of an alignment shift, the shift is too small to be seen on the PSL table (but not necessarily too small to be seen by the IMC).
The other spot of note is the entrance aperture of the ISC EOM. It's really bright there so it's hard to make a definitive determination, but it could be argued there's a very slight misalignment going into the ISC EOM. I couldn't make anything out with the IR viewer, but Keita's picture shows the typical ring around the aperture a little brighter on the left side versus the right. Despite this, there is no clipping in the beam, which we verified by setting up a WinCAM beam profiler.
The WinCAM was set behind IO_AB_L4, which is the lens immediately behind the bottom periscope mirror (this is the path that goes to the IMC_IN PD). The attached picture shows what the beam looks like there. No signs of clipping in the beam, so it's clearing all the apertures in the beam path. I recall doing a similar measurement at this spot several years ago, but a quick alog search yields nothing. I'll do a deeper dive tomorrow and add a comment should I find anything.
So to summarize, we saw no signs of damage to any visible optical surfaces. We saw no clear evidence of a misalignment in the beam; the ALS path looks good, and nothing in the main IFO path looks suspicious outside of the ISC EOM entrance aperture (a lack of good alignment irises makes this a little difficult to assess; once we get the IFO back to a good alignment we should reset those irises). We saw no clipping in the beam.
Keita has many pictures that he will post as a comment to this log.
For PSL table layout, see https://dcc.ligo.org/D1300348/.
TITLE: 09/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: More progress today toward recovering H1 back to low noise! We've been able to make it up to LASER_NOISE_SUPPRESSION once so far and updated alignment offsets. Unfortunately, we did see the slow increase of IMC REFL again upon increasing the input power to 63W during LNS, and shortly after there was an IMC-looking lockloss. Since then, Jason and Keita inspected optics in the PSL and didn't find anything alarming. Interestingly, we could not reproduce the IMC REFL behavior with just the IMC locked at up to 63W, with the ISS second loop both on and off. We then decided to try locking again, so an initial alignment with the new green offsets is ongoing.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:47 | FAC | Nellie | MY | N | Technical cleaning | 16:40 |
| 16:40 | FAC | Kim | MX | N | Technical cleaning | 17:31 |
| 16:41 | FAC | Randy | MY | N | | 18:25 |
| 16:44 | ISC | Elenna | LVEA | N | Plugging in freq injection cable | 16:49 |
| 17:49 | FAC | Kim | H2 | N | Technical cleaning | 18:00 |
| 21:15 | PEM | TJ | EY | N | Checking DM vacuum pump | 22:13 |
| 21:16 | PSL | Jason, Keita | PSL Encl | Local | Optics inspection | 23:11 |
| 21:25 | VAC | Gerardo | LVEA | N | Check HAM6 pump | 21:33 |
| 21:37 | VAC | Gerardo | MX | N | Inspect insulation | 23:12 |
| 21:40 | CDS | Erik, Fil | LVEA/CER | N | Checking RF chassis | 22:13 |
After finding the error code changes for the AMP241M1 sensor, Erik suggested listing channels which have a single value both before and after the outage, but where that value changed.
This table shows a summary of the channel counts; the detailed lists are in the attached text files. Non-zero to zero is called dead, varying to flatline is called broken, and a single value to a different single value is called suspicious.
| System | num chans | num_dead | num_broken | num_suspicious |
| aux-cs | 23392 | 10 | 10 | 221 |
| aux-ex | 1137 | 0 | 0 | 7 |
| aux-ey | 1137 | 0 | 0 | 7 |
| isc-cs | 2618 | 1 | 0 | 14 |
| isc-ex | 917 | 0 | 0 | 10 |
| isc-ey | 917 | 0 | 0 | 9 |
| tcs-cs | 1729 | 1 | 4 | 39 |
| tcs-ex | 353 | 0 | 1 | 7 |
| tcs-ey | 353 | 0 | 3 | 7 |
| sqz-cs | 3035 | 0 | 4 | 33 |
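The classification rules above can be sketched as a small function (my own illustration, not the script that generated the table):

```python
# Classify a channel from its value series before and after the outage:
#   dead       = non-zero flat value before, zero after
#   broken     = varying before, flatlined after
#   suspicious = flat single value before and after, but the value changed
def classify(before, after):
    def is_flat(series):
        return len(set(series)) == 1

    if is_flat(before) and is_flat(after):
        b, a = before[0], after[0]
        if b != 0 and a == 0:
            return "dead"
        if b != a:
            return "suspicious"
        return None  # unchanged, nothing to flag
    if not is_flat(before) and is_flat(after):
        return "broken"
    return None  # still varying, nothing to flag

print(classify([5, 5, 5], [0, 0, 0]))    # dead
print(classify([1, 2, 3], [4, 4, 4]))    # broken
print(classify([3.3, 3.3], [2.2, 2.2]))  # suspicious
```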
Quick summary - I tested my theory that the dust monitor pump ran backwards and spewed contaminant by using a pump at EY, and I was able to get it to run backwards.
A day after the power outage Dave noticed that the PSL dust counts were still very elevated, so I went to check on the corner station dust monitor vacuum pump and found it in an odd state (alog 86857). The pump was running very hot, read 0 inHg on the dial, and had some loose connections. I turned it off, tightened things up, and turned it back on with good pressure. Thinking more about it after I had walked away, my theory is that the pump started to run backwards when the power went out, as the low pressure of the vacuum pulled the pump in reverse. The power then came back on and the motor started and continued in that direction.
Today I wanted to check on the end station pumps, so I took the opportunity to bring a pump that needed to be rebuilt anyway out to the end station and try to recreate my theory above. I found no pump at EX, so I went to EY, where I found the pump completely locked up and the motor (not the pump) very hot. I unplugged it, hooked up the one that needed a rebuild, and plugged it in. It pulled to -12 inHg. I then tried a few times unplugging the power, listening to hear if it started to spin backwards, then plugging it back in. On the third try I got it: it was running while pushing a bit of air out of the bleed valve, and the pressure read 0 inHg.
I didn't test at the dust monitor end of this system whether it was spewing out any contaminant, most likely graphite from the vanes in the pump, but I'd guess that if it was running backwards overnight it would make enough for us to notice. I'm looking into check valves, in-line filters, or some type of relay to avoid this in the future.
TITLE: 09/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
IFO is in IDLE for CORRECTIVE MAINTENANCE
Promise! Today, we were able to get to LASER_NOISE_SUPPRESSION!
Witness to the culprit (In Jenne's words): The IMC_REFL power increases with IMC power increase.
There was a dust monitor that blew back some dust into the PSL, which we think may have been the culprit, but Jason and Keita just came back from the PSL after looking for signs of this contamination and found none.
The plan now is to look at the IMC_REFL power increase again by setting the IMC power to 60W, then to 63W, whilst also trying a new combo:
Did IMC_REFL increase? We shall find out.
After engaging full IFO ASC and soft loops this afternoon, I updated the ITM camera and ALS QPD offsets and accepted them in the appropriate SAFE SDF tables. After Beckhoff reboots and PSL optic inspections, we'll run an initial alignment to solidify these alignment setpoints. They will need to be accepted in the OBSERVE tables once eventually back to NLN.
I don't really think this is related to the poor range, but it seems that one of the CPSs on HAM3 has excess high-frequency noise and has been noisy for a while.
The first image is a 30+ day trend of the 65-100 Hz and 130-200 Hz BLRMS for the HAM3 CPSs. Something happened about 30 days ago that caused the H2 CPS to get noisy at higher frequencies.
The second image shows RZ location trends for all the HAM ISIs for the last day, around the power outage. HAM3 shows more RZ noise after the power outage.
The last image shows ASDs comparing the HAM2 and HAM3 horizontal CPSs. HAM3 H2 shows much more noise above 200 Hz.
Since finding this, I've tried power cycling the CPS on HAM3 and reseating the card, but so far that has not fixed the noise. Since this has been going on for a while, I will wait until maintenance to either fix or replace the card for this CPS.
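A band-limited RMS like the one trended in the first image can be sketched as follows (my own illustration with a synthetic signal, not the site BLRMS pipeline; the band edges match those in the trend):

```python
import numpy as np

def blrms(x, fs, f_lo, f_hi):
    """Band-limited RMS of x: integrate a one-sided PSD over [f_lo, f_hi]."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (np.abs(np.fft.rfft(x)) ** 2) * 2.0 / (fs * n)  # one-sided PSD
    band = (freqs >= f_lo) & (freqs <= f_hi)
    df = freqs[1] - freqs[0]
    return np.sqrt(np.sum(psd[band]) * df)

# Synthetic example: a 150 Hz line shows up in the 130-200 Hz band
# but not in the 65-100 Hz band.
fs = 1024
t = np.arange(fs * 8) / fs
x = np.sin(2 * np.pi * 150 * t)
print(blrms(x, fs, 130, 200))  # ~0.707, the RMS of a unit sine
print(blrms(x, fs, 65, 100))   # ~0
```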
I've replaced the noisy CPS and adjusted the CPS setpoint to maintain the global yaw alignment, meaning I looked at the free-hanging (iso loops off) position before and after the swap and changed the RZ setpoint so that the delta between the isolated and free-hanging positions in RZ was the same with the new sensor. The new sensor shows neither the glitching nor the high-frequency noise that the old sensor had. I also changed the X and Y setpoints, but those only changed by a few microns and should not affect IFO alignment.
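The setpoint adjustment described above amounts to preserving the isolated-minus-free-hanging offset across the sensor swap; a minimal sketch (variable names and numbers are mine, not real channels or readings):

```python
# Keep the isolated-minus-free-hanging RZ delta constant across a CPS swap.
def new_setpoint(old_setpoint, old_freehang, new_freehang):
    delta = old_setpoint - old_freehang   # offset held by the iso loops
    return new_freehang + delta           # same delta with the new sensor

# Example with made-up numbers (sensor counts):
print(new_setpoint(old_setpoint=120.0, old_freehang=100.0, new_freehang=90.0))  # 110.0
```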