Jason, Ryan, Jenne, Daniel
We reduced the PSL power after the amplifier and before the AOM by a factor of 2.3, then re-adjusted the power into the IMC to get back to 2 W. This reduced the IMC reflected light power by a factor of ~2.7, which seems to strongly indicate that we have a heating issue in the path from the PMC to the polarizer. The following tests were run:
| | Normal PSL setup | PMC inp | EOM inp power reduced | EOM inp power reduced |
|---|---|---|---|---|
| EOM power (W) | 115 | 50.8 | 49.6 | 87 |
| IMC input power (W) | 2.01 | 2.05 | 2.02 | 2.04 |
| IMC REFL power (mW) | 1.10 | 0.405 | 0.49 | 0.64 |
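As a quick arithmetic cross-check of the factors quoted above, here is a minimal sketch; the numbers are just the table entries re-typed, and no column assignment beyond the table above is assumed.

```python
# Quick cross-check of the reduction factors quoted above, from the table values.
eom_nominal, imc_refl_nominal = 115.0, 1.10          # W at EOM, mW at IMC REFL (nominal)
reduced = [(50.8, 0.405), (49.6, 0.49)]              # (EOM W, IMC REFL mW), reduced-power columns

for eom, refl in reduced:
    print(f"EOM power down by {eom_nominal / eom:.2f}x, "
          f"IMC REFL down by {imc_refl_nominal / refl:.2f}x")
# compare with the factor of 2.3 and the ~2.7 quoted above
```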
Attaching a trend showing that the PMC refl has been increasing gradually since Feb, and that there was a jump at the time of the power outage.
When the power into the PMC was low, the mode matching was worse; this is probably expected due to thermal lensing in the PMC.
Elenna, Sheila
Before the power outage, IM4 trans was 93% of the IMC input power. Yesterday, we had 90% of IMC input power at IM4 trans. Today after lowering the power through the EOM, we have 91%.
The summary page finally created the omicron glitch plot for last night's 3 hours of lock. On Friday we halved the power on the IMC refl diode. Last night we locked for three hours. Here is the glitch rate from last night, which can be compared to our 10-hour lock just after the power outage, which had a significantly increased glitch rate. As a reference, here is what the glitch rate looked like a few days before the power outage. It appears the glitch rate is back to the normal level.
Camilla
Followed setup instructions from 80010, last done in 86445. Increased to 75 mW injected in the SEED beam; we had 1 mW on OPO_IR_PD_DC. Counts were ~60 and 66 on ASC-AS_A and B and 7e-4 on the ASC-OMC-A and B NSUMs, all similar to the August 19th measurement. To turn on the OMC ASC, set H1:OMC-ASC_MASTERGAIN to 0.02. Took H1:OMC-PZT2_OFFSET down to -50 to start and then ran the template /sqz/h1/Templates/dtt/OMC_SCANS/Sept16_2025_PSAMS_OMC_scan_coldOM2.xml (refs 35, 36).
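For reference, a minimal sketch of setting the two channels named above from Python with pyepics (this assumes EPICS access from a control-room workstation; it is not the procedure script itself):

```python
from epics import caput, caget

# Turn on the OMC ASC by setting the master gain, as described above.
caput('H1:OMC-ASC_MASTERGAIN', 0.02)

# Start the PZT2 offset at -50 before running the DTT scan template.
caput('H1:OMC-PZT2_OFFSET', -50)

print('OMC ASC master gain:', caget('H1:OMC-ASC_MASTERGAIN'))
print('OMC PZT2 offset:', caget('H1:OMC-PZT2_OFFSET'))
```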
PSAMS settings at nominal ZM4/ZM5 of 6.0 V / -0.4 V. Measured a TEM02 mode mismatch of 2.975%. In August (86445) this was only 2.174%; however, in that measurement the OMC ASC had been left off, so there would have been more in the misalignment peaks.
| ZM4/5 PSAMs | TEM00 | TEM02 | Mismatch* (% of TEM02) |
|---|---|---|---|
| 6.0V / -0.4V | 0.5375 | 0.01648 | 2.975% |
*calculated with TEM02 / (TEM00 + TEM02)
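Working the footnote formula through with the table values (a quick check of the quoted 2.975%):

```python
# Mode mismatch per the footnote: TEM02 / (TEM00 + TEM02), values from the table.
tem00, tem02 = 0.5375, 0.01648

mismatch = tem02 / (tem00 + tem02)
print(f"mismatch = {mismatch:.3%}")   # -> 2.975%
```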
I tried to get a better measurement of the ETMX charge via its OPLEVs this morning. I started with a higher gain on the ESD_OUTPUT filters than last time (0.75 instead of 0.5; 1 worked until recently). This measurement had pretty low coherence like last week, so I ctrl+c'd it and increased the drive_amp by 20%. That measurement was a little better but still not as good as I would like, and as we have had in the past, so I restarted it again with another 20% increase in drive_amp and ran 3 measurements with this configuration. The preliminary processing looked decent, but the full processing still revealed pretty large error bars.
This measurement is better than last week's, but there is still room for improvement. I brought the drive_amp up from 11000 to 20600 with ESD_OUTPUT gains of 0.75 and still wasn't saturating, so I have more room to push to improve the measurements and increase the coherence.
Here is a side by side of powers at the ports before and after the power outage. This is using last night's lock at 2 hours 55 minutes after the end of the max power, versus a lock before the power outage at 2 hours 55 minutes from max power.
| Quantity | Now | Then | Ratio (now/then) |
|---|---|---|---|
| IM4 trans (W on PRM) | 56.5 | 56.0 | 1.008 |
| PRG (W/W) | 49.5 | 49.6 | 0.997 |
| LSC POP A (mW on diode) | 31.76 | 31.58 | 1.006 |
| LSC REFL A (mW on diode) | 7.64 | 7.59 | 1.006 |
| OMC REFL (mW?) | 684.2 | 698.0 | 0.980 |
| X arm circ (kW) | 379.4 | 376.2 | 1.008 |
| Y arm circ (kW) | 379.9 | 375.8 | 1.01 |
| AS_C (W into HAM6) | 0.680 | 0.695 | 0.978 |
| kappa_c | 0.967 | 0.988 | 0.979 |
| f_c (Hz) | 446.5 | 445.5 | |
It seems like the input power, POP, LSC REFL, and circulating power numbers hang together. The OMC REFL and AS_C numbers also hang together. Sheila and I discussed that we would expect kappa_c to increase if the arm power increased, but it appears that the arm power increased while kappa_c decreased, following the OMC REFL and AS_C values.
Sheila, Elenna, Camilla. WP#12797. IOT2L layout D1300357
We took the PSL input power down to ~100 mW, locked out the rotation stage, and then used the NanoScan to take some beam profiles in the IMC REFL path on IOT2L. Elenna and Sheila also moved a beam dump to fully block the MC REFL rejected beam that was getting onto the MC REFL camera.
| Location | D4 Sigma A1 Horizontal (um) | D4 Sigma A2 Vertical (um) | D4 Sigma A1 at 45deg (um) | D4 Sigma A2 at 45deg (um) |
|---|---|---|---|---|
| A: Profiler 11 3/4" upstream of IO_MCR_BS1 | 4685 | 4240 | 4345 | 4450 |
| B: Profiler 10 1/16" downstream of IO_MCR_BS1 (7" + 1 1/4" + 1 13/16") | 4678 | 4647 | 4605 | 47921 |
| C: Profiler 14 1/2" downstream of IO_MCR_BS1 (7" + 1 1/4" + 6 1/4") | 4800 | 4807 | 4784 | 4676 |
By eye the beam looked Gaussian, and the numbers show it is mainly symmetric. The last measurements were probably from when the WFS were being installed in 2013 (6439).
For positions B and C, we added a temporary steering mirror between IO_MCR_M7 and IO_MCR_L2. Distance between IO_MCR_BS1 and IO_MCR_M7 = 7"; distance between IO_MCR_M7 and the temporary steering mirror = 1 1/4".
Tue Sep 16 10:07:25 2025 INFO: Fill completed in 7min 21secs
TITLE: 09/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY: No alarms found this morning. Planned lighter maintenance today so we can get back to locking and understanding the power losses.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
Literally everyone (to name who I know/can recall would be unfair)
TITLE: 09/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: IDLE
INCOMING OPERATOR: NONE
SHIFT SUMMARY:
IFO is in IDLE and DOWN for CORRECTIVE MAINTENANCE buuuuut IFO was OBSERVING for ~1 hr.
Lockloss was intentional in order to avoid potentially harmful locklosses and issues throughout the night. We actually got to NLN and OBSERVING though!
It's not exactly over yet, because we have only been locked for 2 hours; problems may be there and dormant. Nevertheless, we got here from an outrageous outage recovery and a nasty ISC/lock reacquisition that took 5 days to get back into observing.
The main things:
How we got to NLN:
Alog 86951 summarizes the lock acquisition. We just sat there for a bit at OMC_WHITENING as violins fell (and continued to).
After NLN:
LOG:
None
Just want to add some notes about a few of these SDFs
In this alog I accepted the TCS SIM and OAF jitter SDFs incorrectly. The safe restore had restored old values and I mixed up the "set point" and "epics value" columns here (a mistake I have made before and will likely make again). I should have reverted these values last week instead of accepting them.
Luckily, I was able to look back at Matt's TCS sim changes and I have the script that set the jitter cleaning coeffs, so I was able to reset the values and sdf them in safe. Now they are correctly SDFed in observe as well.
Start Time: 1442034629
End Time: 1442034940
2025-09-15 22:15:22,374 bb measurement complete.
2025-09-15 22:15:22,375 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250916T051011Z.xml
2025-09-15 22:15:22,375 all measurements complete
Ibrahim, Elenna, Jenne, Ryan S
We made it to LASER_NOISE_SUPPRESSION!
Here's how we got here again and here's what we're monitoring
After much-needed help from Ryan S, Elenna, and Jenne in initial alignment (a manual initial alignment was needed), we managed to get back to CARM_5_PICOMETERS but encountered the same issues with an SRM ringup upon getting to RESONANCE, which caused a lockloss.
Elenna guided me through the following combination of steps tried this morning:
1. Get to CARM_5_PICOMETERS
2. Set TR_CARM_OFFSET to -52
3. Set the REFLAIR_B_RF27 I PRCL input matrix element to 1.6× the value it's at, then press LOAD MATRIX. This is reached by going to LSC->LSC_OVERVIEW->AIR3F (in the input matrices area on the left of the screen; look for REFLAIR_B_RF27 on the left). See the sketch after this list.
4. Continue locking. SRM rang up as before and saturated once, but with the new gain it did not cause a lockloss.
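Here is a minimal sketch of step 3 done from Python with pyepics, in case it's useful next time. The matrix-element channel name below is a placeholder (the real element name should be read off the AIR3F screen), and LOAD MATRIX still has to be pressed on the MEDM screen afterwards.

```python
from epics import caget, caput

# Placeholder name for the REFLAIR_B_RF27 I -> PRCL input matrix element;
# the actual channel should be read off the LSC AIR3F input matrix screen.
element = 'H1:LSC-PLACEHOLDER_INPUT_MATRIX_ELEMENT'

current = caget(element)
caput(element, 1.6 * current)   # step 3: scale the element by 1.6
print(f'{element}: {current} -> {1.6 * current}')
```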
This worked, and now we're at OMC_WHITENING with high violins (so understandable, sorry fibers).
Now I'm monitoring H1:IMC-REFL_DC_OUT16 and watching for an increase in power, which would be bad. We've been at 63 W (the new 60) for 18 minutes; here's a plot of that.
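For anyone wanting to pull the same trend offline afterwards, here is a minimal gwpy sketch (the GPS times below are placeholders, not the actual lock stretch):

```python
from gwpy.timeseries import TimeSeries

# Placeholder GPS times bracketing the 63 W stretch being watched.
start, end = 1442030000, 1442032000

# Fetch the IMC REFL DC channel and plot it to look for a slow rise.
refl = TimeSeries.get('H1:IMC-REFL_DC_OUT16', start, end)
plot = refl.plot()
plot.gca().set_ylabel('IMC REFL DC [arb. units]')
plot.savefig('imc_refl_trend.png')
```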
I don't know the calibration, but CDS OVERVIEW is reading out 147 Mpc.
[Fil, Jeff, Dave, Erik, Patrick]
As part of the hunt for problems related to the power outage on September 10, Dave noticed that the channel H1:ISC-RF_C_AMP24M1_POWEROK had moved from 1 before the outage to 0 after.
Jeff determined that the output of the amplifier was nominal, so the problem is likely with Beckhoff and not with the amplifier itself.
Fil inserted a breakout board between the amp and the Beckhoff cable and recorded a voltage drop from 3.3 to 2.2 volts on the power OK signal (pin 7).
Some more testing shows the problem is definitely at the Beckhoff end. The associated terminal may need to be replaced.
J. Oberling, K. Kawabe
This afternoon we went into the PSL enclosure to inspect things after last week's power outage. We concentrated on the IOO side of the PSL table, downstream of the PMC. Our results:
We did a visual inspection with both the IR viewer and the IR-sensitive Nikon camera and did not find any obvious signs of damage on any of the optical surfaces we had access to; the only ones we couldn't see were the crystal surfaces inside the ISC EOM, and we could see everything else. I looked at the optics between Amp2 and the PMC and everything there looked normal, with no signs of anything amiss.
While the beam was not perfectly centered on every optic, we saw no clipping anywhere in the beam path. The irises after the bottom periscope mirror were not well centered, but they've been that way for a while, so we didn't have a good reference for assessing alignment in that path (these irises were set after the O4 PSL upgrade, but there have been a couple of alignment shifts since then and the irises were not reset). For reference, the beam is in the -X direction on the HWP in the power control rotation stage and in the -Z direction (but centered horizontally) on the PZT mirror after the power control stage.

We do have a good alignment reference on the ALS path (picked off through mirror IO_MB_M2, the mirror just before the ISC EOM), as those were set as part of the HAM1 realignment during this year's vent. By my eye the first iris looked a tiny bit off in yaw (-Y direction) and pitch (+Z direction), while the second iris looked perfectly centered. We found this odd, so Keita used the IR-sensitive camera to get a better angle on both irises and took some pictures. With the better angle the beam looked well centered in yaw and maybe a little off in pitch (+Z direction) on that first iris, so I think my eye was influenced by the angle from which I was viewing the iris. The second iris still looked very well centered.

Edit to add: Since the ALS path alignment looks good, to me this signals that there was not an appreciable alignment shift as a result of the change in PMC temperature. If the PMC was the source of the alignment shift, we would see it in both the main IFO and ALS paths. If there is a misalignment in the main IFO path, its source is not the PMC. Upon further reflection, a more accurate statement is: if the PMC is the source of an alignment shift, the shift is too small to be seen on the PSL table (but not necessarily too small to be seen by the IMC).
The other spot of note is the entrance aperture of the ISC EOM. It's really bright there, so it's hard to make a definitive determination, but it could be argued there's a very slight misalignment going into the ISC EOM. I couldn't make anything out with the IR viewer, but Keita's picture shows the typical ring around the aperture a little brighter on the left side than the right. Despite this, there is no clipping in the beam, which we confirmed by setting up a WinCAM beam profiler.
The WinCAM was set behind IO_AB_L4, which is the lens immediately behind the bottom periscope mirror (this is the path that goes to the IMC_IN PD). The attached picture shows what the beam looks like there. No signs of clipping in the beam, so it's clearing all the apertures in the beam path. I recall doing a similar measurement at this spot several years ago, but a quick alog search yields nothing. I'll do a deeper dive tomorrow and add a comment should I find anything.
So to summarize, we saw no signs of damage to any visible optical surfaces. We saw no clear evidence of a misalignment in the beam; the ALS path looks good, and nothing in the main IFO path looks suspicious outside of the ISC EOM entrance aperture (a lack of good alignment irises makes this a little difficult to assess; once we get the IFO back to a good alignment we should reset those irises). We saw no clipping in the beam.
Keita has many pictures that he will post as a comment to this log.
For PSL table layout, see https://dcc.ligo.org/D1300348/.
TITLE: 09/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: More progress today towards recovering H1 back to low noise! We've been able to make it up to LASER_NOISE_SUPPRESSION once so far and updated alignment offsets. Unfortunately we did see the slow increase of IMC REFL again once increasing the input power to 63W during LNS, and shortly after there was an IMC-looking lockloss. Since then, Jason and Keita inspected optics in the PSL and didn't find anything alarming. We attempted to reproduce the IMC REFL behavior with just the IMC locked up to 63W with the ISS secondloop on and off, but did not see it, interestingly. We then decided to try locking again, so an initial alignment with the new green offsets is ongoing.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:47 | FAC | Nellie | MY | N | Technical cleaning | 16:40 |
| 16:40 | FAC | Kim | MX | N | Technical cleaning | 17:31 |
| 16:41 | FAC | Randy | MY | N | | 18:25 |
| 16:44 | ISC | Elenna | LVEA | N | Plugging in freq injection cable | 16:49 |
| 17:49 | FAC | Kim | H2 | N | Technical cleaning | 18:00 |
| 21:15 | PEM | TJ | EY | N | Checking DM vacuum pump | 22:13 |
| 21:16 | PSL | Jason, Keita | PSL Encl | Local | Optics inspection | 23:11 |
| 21:25 | VAC | Gerardo | LVEA | N | Check HAM6 pump | 21:33 |
| 21:37 | VAC | Gerardo | MX | N | Inspect insulation | 23:12 |
| 21:40 | CDS | Erik, Fil | LVEA/CER | N | Checking RF chassis | 22:13 |
After finding the error code changes for the AMP241M1 sensor, Erik suggested listing channels which have a single value both before and after the outage, but where that value changed.
This table shows a summary of the channel counts; the detailed lists are in the attached text files. Non-zero to zero is called dead, varying to flatline is called broken, and a single value changing to a different single value is called suspicious (a sketch of this classification follows the table).
| System | num chans | num_dead | num_broken | num_suspicious |
|---|---|---|---|---|
| aux-cs | 23392 | 10 | 10 | 221 |
| aux-ex | 1137 | 0 | 0 | 7 |
| aux-ey | 1137 | 0 | 0 | 7 |
| isc-cs | 2618 | 1 | 0 | 14 |
| isc-ex | 917 | 0 | 0 | 10 |
| isc-ey | 917 | 0 | 0 | 9 |
| tcs-cs | 1729 | 1 | 4 | 39 |
| tcs-ex | 353 | 0 | 1 | 7 |
| tcs-ey | 353 | 0 | 3 | 7 |
| sqz-cs | 3035 | 0 | 4 | 33 |
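A minimal sketch of the classification described above, assuming you have arrays of each channel's samples from before and after the outage (the function and the example values are illustrative, not the actual script):

```python
import numpy as np

def classify(before, after):
    """Classify a channel from its samples before/after the outage.

    dead       - non-zero before, zero after
    broken     - varying before, flatlined after
    suspicious - one constant value before, a different constant value after
    """
    before = np.asarray(before, dtype=float)
    after = np.asarray(after, dtype=float)

    if np.any(before != 0) and np.all(after == 0):
        return 'dead'          # e.g. a POWEROK bit going 1 -> 0
    if np.ptp(before) > 0 and np.ptp(after) == 0:
        return 'broken'
    if np.ptp(before) == 0 and np.ptp(after) == 0 and before[0] != after[0]:
        return 'suspicious'
    return 'ok'

print(classify([1, 1, 1], [0, 0, 0]))   # -> 'dead'
print(classify([5, 5, 5], [7, 7, 7]))   # -> 'suspicious'
```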
I don't really think this is related to the poor range, but it seems that one of the CPSs on HAM3 has excess high-frequency noise and has been noisy for a while.
First image is 30+ day trends of the 65-100 Hz and 130-200 Hz BLRMS for the HAM3 CPSs. Something happened about 30 days ago that caused the H2 CPS to get noisy at higher frequencies.
Second image shows RZ location trends for all the HAM ISIs for the last day around the power outage. HAM3 shows more RZ noise after the power outage.
Last image shows ASDs comparing the HAM2 and HAM3 horizontal CPSs. HAM3 H2 shows much more noise above 200 Hz.
Since finding this, I've tried power cycling the CPS on HAM3 and reseating the card, but that so far has not fixed the noise. Since this has been going for a while, I will wait until maintenance to try to either fix or replace the card for this CPS.
I've replaced the noisy CPS and adjusted the CPS setpoint to maintain the global yaw alignment: I looked at the free-hanging (iso loops off) position before and after the swap and changed the RZ setpoint so that the delta between the isolated and free-hanging RZ positions was the same with the new sensor. The new sensor doesn't show either the glitching or the high-frequency noise that the old sensor had. I also changed the X and Y setpoints, but those only changed by a few microns and should not affect IFO alignment.
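A minimal sketch of the setpoint arithmetic described above (the numbers are made up for illustration; only the delta-preserving step is the point):

```python
# Preserve the isolated-minus-free-hanging RZ offset across the CPS swap,
# so the isolated platform ends up at the same global yaw as before.
old_freehang_rz = -120.0   # free-hanging RZ reading, old sensor, iso loops off
old_setpoint_rz = -95.0    # RZ setpoint used while isolated, old sensor
new_freehang_rz = -310.0   # free-hanging RZ reading, new sensor, iso loops off

delta = old_setpoint_rz - old_freehang_rz      # offset to preserve
new_setpoint_rz = new_freehang_rz + delta
print(f'new RZ setpoint: {new_setpoint_rz}')   # -> -285.0
```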