Closes FAMIS26502
2025-06-06 08:41:32.265177
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -4.667 [V]
STS EY DOF Z/W = 2.386 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.521 [V]
STS A DOF Y/V = -0.607 [V]
STS A DOF Z/W = -0.719 [V]
STS B DOF X/U = 0.2 [V]
STS B DOF Y/V = 0.974 [V]
STS B DOF Z/W = -0.422 [V]
STS C DOF X/U = -0.857 [V]
STS C DOF Y/V = 0.855 [V]
STS C DOF Z/W = 0.629 [V]
STS EX DOF X/U = -0.034 [V]
STS EX DOF Y/V = -0.051 [V]
STS EX DOF Z/W = 0.072 [V]
STS EY DOF Y/V = 1.164 [V]
STS FC DOF X/U = 0.195 [V]
STS FC DOF Y/V = -1.134 [V]
STS FC DOF Z/W = 0.628 [V]
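As a rough illustration of the check reported above (flag any proof mass whose readback exceeds 2.0 V in magnitude), here is a minimal sketch. It is not the actual FAMIS script, and the channel-name pattern is a placeholder that would need to be taken from the real procedure.

    # Hypothetical sketch of the STS proof-mass check summarized above.
    # Channel names are placeholders, not the real monitor channels.
    from epics import caget

    THRESHOLD = 2.0  # [V] proof-mass voltage considered out of range

    STS_SENSORS = ['A', 'B', 'C', 'EX', 'EY', 'FC']
    DOFS = ['X/U', 'Y/V', 'Z/W']

    def sts_channel(sensor, dof):
        """Build a placeholder channel name for one STS proof-mass readback."""
        return f'H1:ISI-GND_STS_{sensor}_{dof[0]}_MASSPOS'  # placeholder pattern

    out_of_range = []
    for sensor in STS_SENSORS:
        for dof in DOFS:
            value = caget(sts_channel(sensor, dof))
            flagged = abs(value) > THRESHOLD
            note = '  <-- out of range' if flagged else ''
            print(f'STS {sensor} DOF {dof} = {value:.3f} [V]{note}')
            if flagged:
                out_of_range.append((sensor, dof, value))

    print(f'There are {len(out_of_range)} STS proof masses out of range ( > {THRESHOLD} [V] )')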
Closes FAMIS26423
Laser Status:
NPRO output power is 1.85W
AMP1 output power is 70.39W
AMP2 output power is 140.3W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 18 days, 0 hr 26 minutes
Reflected power = 23.16W
Transmitted power = 105.5W
PowerSum = 128.6W
FSS:
It has been locked for 0 days 1 hr and 12 min
TPD[V] = 0.801V
ISS:
The diffracted power is around 3.8%
Last saturation event was 0 days 13 hours and 21 minutes ago
Possible Issues:
PMC reflected power is high
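For reference, the PMC PowerSum quoted above is consistent with the sum of the reflected and transmitted powers; a trivial sketch of that bookkeeping, with an assumed (illustrative) threshold for the reflected-power warning:

    # PMC numbers from the report above
    pmc_refl = 23.16   # [W] reflected
    pmc_trans = 105.5  # [W] transmitted

    # ~128.7 W, close to the reported PowerSum of 128.6 W
    print(f'reflected + transmitted = {pmc_refl + pmc_trans:.2f} W')

    # the "reflected power is high" flag; the limit here is an assumption for illustration only
    REFL_LIMIT = 20.0  # [W]
    if pmc_refl > REFL_LIMIT:
        print('Possible issue: PMC reflected power is high')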
TITLE: 06/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 11mph Gusts, 5mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
I wrangled some SDFs before Dave's LSC and OMC model restarts; for SQZ and SQZ_ASC I did not accept them.
Ryan and I decided to temporarily accept H1:SQZ-FC_ASC_SWITCH as OFF so that the FC optics aren't pulled away after the restarts. This will need to be re-accepted as ON before we go to Observing.
TITLE: 06/06 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
The detector is in DOWN for the night and there is a test running right now for Matt for the IMC. The HAM1 cleanroom is on and will be on all night to prep for the +Y door coming off tomorrow morning. Craig and I had turned it on earlier, but then I got worried that it was causing locklosses, so I went out and turned it back off; I've just now turned it back on for the night. It looks like the temperature in that zone (zone 5) was really affected by the cleanroom being turned on and off (ndscope1), but I don't think the locklosses we had after it was turned on and back off are related to the temperature of the input arm.
After the lockloss earlier in the day during the calibration simulines measurement, we really wanted to get back up and at least check that simulines was running fine and had not caused the lockloss (it didn't look like it did, 84836, but still). Unfortunately we weren't able to get back up, as the IFO hasn't really liked me much lately and we had constant locklosses from different places:
LOG:
23:00UTC in NLN_CAL_MEAS
23:14 Started calibration measurement
23:16 Lockloss during calibration measurement, cause unknown
- Lockloss from ENGAGE_ASC_FOR_FULL_IFO
- Lockloss from ENGAGE_ASC_FOR_FULL_IFO (from same location)
- Figured out it was due to the DC6 centering signals not being able to keep up with the movements. Elenna upped the gain for the DC6 loops and this fixed the issue (84843)
- Lockloss during MOVE_SPOTS due to Craig and me turning on the HAM1 cleanroom
- Lockloss from MOVE_SPOTS due to a PRC1/2 P ringup at 0.27 Hz
- Lockloss from ENGAGE_ASC_FOR_FULL_IFO due to 84843
- I turned the HAM1 cleanroom back off in case it was affecting our ability to lock
- Lockloss from TRANSITION_FROM_ETMX due to 1 Hz oscillation in DHARD P and PRC2 P from ringup (84844)
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:11 | PEM | Robert | LVEA | n | Listening for noise | 00:24 |
01:16 | | Oli, Craig | LVEA | n | Turning on HAM1 cleanroom | 01:23 |
02:13 | | Oli | LVEA | n | Turning HAM1 cleanroom off | 02:19 |
05:12 | | Oli | LVEA | n | Turning HAM1 cleanroom on | 05:16 |
I noticed that the LLCV is railing at its top value, 100% open; it can't open any further. This is a known issue, but it appears as if the valve is reaching 100% sooner than expected, i.e. when the tank is almost half full. First, I'm going to try re-zeroing the LLCV actuator and await the results. The first attachment is a 2-day plot of the LLCV railing today and yesterday. The second plot is a 3-year history of the tank level and the LLCV; it rails at 100% a few times.
BURT restore? PID tuning ok? CP2 @ LLO PID parameters attached for comparison.
Thanks Jon. However, this system has a known issue: it turns out that the liquid level control valve is not suitable for the job, which is why it reaches 100% sooner rather than later. But it appears as if something has slipped, and now it reaches 100% at a higher tank level; that is why I want to re-zero the actuator.
Attached is the Fill Control for CP7. The issue was first mentioned in aLOG 4761, but I never found out who discovered it; it is only briefly mentioned by Kyle. There is another entry in aLOG 59841.
Dirty solution to the LLCV railing at 100% open: we used the bypass valve, opening it by 1/8 of a turn, and that did the job. It wasn't a single shot, but eventually we settled on that turn number. The PID took over and the LLCV settled at around 92% open. Today we received a load for the CP7 tank. We are still going to calibrate the actuator.
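For readers unfamiliar with the fill control mentioned above, here is an illustrative sketch of a discrete PID update of the kind that drives the LLCV from the tank level, clamped to 0-100% open (the clamp is the railing discussed above). The gains, setpoint, and sample time are hypothetical; the real parameters live in the CP7 fill control.

    # Illustrative PID sketch only -- not the actual CP7 fill-control code.
    class SimplePID:
        def __init__(self, kp, ki, kd, setpoint, dt):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint   # desired liquid level [%] (hypothetical)
            self.dt = dt               # update period [s]
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, level):
            error = self.setpoint - level            # level below setpoint -> open valve more
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = SimplePID(kp=2.0, ki=0.05, kd=0.0, setpoint=92.0, dt=10.0)   # hypothetical values
    level = 88.0                                        # [%] example tank-level reading
    llcv_request = max(0.0, min(100.0, pid.update(level)))   # valve request clamped at 100% open
    print(f'LLCV request = {llcv_request:.1f}% open')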
Oli, Elenna
This is incredibly baffling, but we are still seeing locklosses from lownoise ASC due to a 1 Hz ring up. We have stepped through this state multiple times by hand, and we seem to survive when we do the steps very slowly, with 2-5 minute breaks between each step.
The culprit at times has appeared to be either CHARD P or DHARD P (and maybe also the soft loops?). I have looked at the loop design and measurements of both DHARD P and CHARD P, but they show stable loop designs with no gain peaking at 1 Hz. DHARD P has a very large 1 Hz peak in its error signal spectrum, but I don't know where it comes from.
Tonight, I rewrote the lownoise ASC state to step very slowly through each change and wait 60 seconds after each change, but that did not help. After we left the state and moved into the next, the ring up occurred again and Oli and I tried to save the lock by reverting changes. We're not sure if it was the ring up or our change reversion that broke the lock.
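To illustrate the kind of change described above (this is not the actual LOWNOISE ASC code), here is a Guardian-style sketch that applies one change per pass of run() and holds for 60 seconds between steps. The channel list and values are hypothetical placeholders.

    # Sketch only; channels/values below are placeholders, not the real lownoise ASC steps.
    from guardian import GuardState

    LOWNOISE_ASC_STEPS = [
        ('ASC-CHARD_P_GAIN', 0.5),
        ('ASC-DHARD_P_GAIN', 0.5),
        ('ASC-CSOFT_P_GAIN', 0.2),
    ]

    class LOWNOISE_ASC(GuardState):
        def main(self):
            self.counter = 0
            self.timer['pause'] = 0   # expires immediately, so the first step runs right away

        def run(self):
            if not self.timer['pause']:
                return False                      # still waiting out the 60 s hold
            if self.counter >= len(LOWNOISE_ASC_STEPS):
                return True                       # all steps applied; state complete
            chan, val = LOWNOISE_ASC_STEPS[self.counter]
            ezca[chan] = val                      # ezca is provided by the guardian environment
            self.counter += 1
            self.timer['pause'] = 60              # wait 60 s before the next change
            return False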
I think this may be an ongoing problem that we didn't realize was a "problem" because the ring up likely occurs during the transition from ETMX state, so we might have classified those locklosses as a problem with the transition, and not a problem with an ASC instability.
I was only able to get second trends to load for locklosses from 557 [TRANSITION_FROM_ETMX] or 558 [LOWNOISE_ESD_ETMX], but here they are:
Lockloss Time (UTC) | Lockloss Time (GPS) | State | Caused by ASC DHARD/CHARD ringup? |
2025-06-06 04:17:14.000000 UTC | 1433218652 | 558 | yes (obviously) |
2025-06-06 03:10:27.721680 UTC | 1433214646 | 557 | yes (obviously) |
2025-06-03 03:46:27.758789 UTC | 1432957606 | 558 | yes |
2025-03-28 14:14:50.167969 UTC | 1427206508 | 558 | yes |
2025-03-27 22:21:18.425781 UTC | 1427149296 | 557 | no |
2025-03-27 20:15:52.046387 UTC | 1427141770 | 557 | no |
2025-03-13 18:43:17.802734 UTC | 1425926616 | 558 | no (windy) |
2025-03-13 11:33:50.178223 UTC | 1425900848 | 558 | no (windy) |
I narrowed down the locklosses from those states with the lockloss tool, here is the link to the list I was looking at: link
Bin Wu, Julia Rice
We have been working on writing a new Guardian script that will allow us to switch off the dither loops sooner in the power up process and instead use cameras.
We first looked at the error signals in the ASC cameras to see if we could adjust offsets to better values. We did this for the CAMERA_SERVO guardian, and added new intermediate states, including TURN_ON_CAMERA_25W_OFFSETS and TURN_ON_CAMERA_MOVESPOTS_FIXED_OFFSETS. We've attached screenshots showing the graph before and after our additional states. Below are the values we used for offsets for each. The weight from DITHER_ON to TURN_ON_CAMERA_25W_OFFSETS is set to be 10.
TURN_ON_CAMERA_25W_OFFSETS
PIT1 | -233 |
PIT2 | -168 |
PIT3 | -217 |
YAW1 | -248 |
YAW2 | -397 |
YAW3 | -338 |
TURN_ON_CAMERA_MOVESPOTS_FIXED_OFFSETS
PIT1 | -232 |
PIT2 | -159 |
PIT3 | -226 |
YAW1 | -248 |
YAW2 | -431 |
YAW3 | -351 |
We also adjusted the offsets for TURN_ON_CAMERA_FIXED_OFFSETS:
PIT1 | -230 |
PIT2 | -181 |
PIT3 | -226 |
YAW1 | -245 |
YAW2 | -418 |
YAW3 | -353 |
These are added in lscparams.py. We also added the necessary code into CAMERA_SERVO.py.
We also added a parameter in lscparams.py for new ADS camera CLK_GAIN at 25W ('ads_camera_gains_25W', screenshot included)
Right now the new states are not requestable, so they will need to be run manually to test. We haven't made any change to the ISC_LOCK guardian.
We also added lines in CAMERA_SERVO.py under DITHER_ON to check whether ISC_LOCK is at 25W when proceeding through the self.counter loop; see attached.
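A rough sketch of what these additions could look like (not the committed code): the 25W camera offsets stored as a dict in lscparams.py, and a check in CAMERA_SERVO.py that ISC_LOCK has reached the 25 W state before proceeding. The variable names and the exact Guardian-state check are assumptions.

    # lscparams.py (illustrative; variable names assumed)
    cam_offsets_25W = {
        'PIT1': -233, 'PIT2': -168, 'PIT3': -217,
        'YAW1': -248, 'YAW2': -397, 'YAW3': -338,
    }
    ads_camera_gains_25W = {}   # new ADS camera CLK_GAIN values at 25 W would go here

    # CAMERA_SERVO.py, inside DITHER_ON's counter loop (illustrative)
    def isc_lock_at_25W():
        # POWER_25W is state 506 per the A2L table below; the channel and
        # comparison here are assumptions, and ezca is provided by guardian.
        return ezca['GRD-ISC_LOCK_STATE_N'] >= 506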
Adding A2L gains for reference; the first chart clarifies the dither loop actuators and cameras.
Beam pointing actuator | ADS Dither | Cam Sensor |
PRM PIT | Dither ITMX: ITMX P2L | BS Cam 1 PIT |
PRM YAW | Dither ITMX: ITMX Y2L | BS Cam 1 YAW |
X SOFT PIT | Dither ETMX: ETMX P2L | ETMX Cam 2 PIT |
X SOFT YAW | Dither ETMX: ETMX Y2L | ETMX Cam 2 Yaw |
Y SOFT PIT | Dither ETMY: ETMY P2L | ETMY Cam 3 PIT |
Y SOFT YAW | Dither ETMY: ETMY Y2L | ETMY Cam 3 Yaw |
GRD State | A2L Gain | Cam Offset |
POWER_25W [506] | P2L ITMX: -1.25 | -233 |
| P2L ETMX: 0.85 | -168 |
| P2L ETMY: 0.85 | -217 |
| Y2L ITMX: 0 | -248 |
| Y2L ETMX: 0 | -397 |
| Y2L ETMY: 0 | -338 |
GRD State | A2L Gain | Cam Offset |
MOVE_SPOTS [508] | P2L ITMX: -1.8+1.0 | -232 |
| P2L ETMX: 4.0 | -159 |
| P2L ETMY: 3.6+1.0 | -226 |
| Y2L ITMX: 2.1 | -248 |
| Y2L ETMX: 4.4 | -431 |
| Y2L ETMY: 2.2+1.0 | -351 |
GRD State | A2L Gain | Cam Offset |
MAX_POWER [520] | P2L ITMX: -0.54 | -230 |
| P2L ETMX: 3.31 | -181 |
| P2L ETMY: 5.57 | -226 |
| Y2L ITMX: 3.23 | -245 |
| Y2L ETMX: 4.96 | -418 |
| Y2L ETMY: 1.37 | -353 |
Oli, Elenna
We lost lock twice while engaging full IFO ASC, and it was hard to tell what the issue was, except that the DC6 centering signal gets pretty large just before the lockloss. I bumped the DC6 gain up slightly to try to help with this, and then stepped through the state by hand. There were no problems. I also took this moment to turn off the beamsplitter ASC offsets, which worked, so I uncommented those lines in the guardian.
New gains SDFed, guardian loaded with changes.
M. Todd, C. Cahillane, G. Mansell
Part 1: IMC WFS -- IMC alignment during thermalization
We trended the WFS signals during thermalization after power-up. There are some interesting dynamics early on in the WFS signals... it seems the IMC alignment could be wandering around.
Part 2: DARM comparison throughout the ISS picomotoring process that Craig did.
Craig pico-ed around to see if he could increase the powers on the ISS PDs and lower the peaks we saw in the spectrum.
We also took DARM spectrum measurements at different points during this process, and it seems that things may have gotten slightly better, though that may just be thermalization and settling.
In particular, as Craig pico-ed we saw some reduction in the lines between 520Hz and 600Hz. The reduction of some of the broader peaks (most likely beat notes between lines and violin modes) could be from the reduction in the violin modes.
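As a hedged sketch of how such a before/after DARM comparison could be made with gwpy (the GPS times and channel choice below are placeholders, not the spans actually used):

    # Illustrative gwpy comparison of DARM spectra before/after picoing.
    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:GDS-CALIB_STRAIN'   # placeholder; a DARM error/control channel would also work
    before = TimeSeries.get(CHAN, 1433200000, 1433200300)   # placeholder GPS span before picoing
    after = TimeSeries.get(CHAN, 1433210000, 1433210300)    # placeholder GPS span after picoing

    asd_before = before.asd(fftlength=8, overlap=4)
    asd_after = after.asd(fftlength=8, overlap=4)

    plot = asd_before.plot(label='before picoing')
    ax = plot.gca()
    ax.plot(asd_after, label='after picoing')
    ax.set_xscale('log')
    ax.set_yscale('log')
    ax.set_xlim(100, 1000)   # the 520-600 Hz lines mentioned above fall in this band
    ax.legend()
    plot.show()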
Motivation:
During this locking session, we've seen that beam jitter is limiting DARM worse than in March 2025. We've also seen it's worse upstream, e.g. in IMC REFL DC (alog 84809). We've been skipping closing the ISS SECONDLOOP while locking, because we have evidence that the witnessed beam jitter noise on the ISS array is now much higher than in the past (see PDF 1 from March 2025 versus now: this plot from alog 84805).
What we did:
We picoed on the ISS array to a very new position, following the instructions from alog 63431. Our goal was to maximize the power on the ISS array PDs, and to see if, by aligning onto the PDs better, we could reduce the observed intensity fluctuations on the ISS array PDs as well as in DARM. We did all of this with the SECONDLOOP open.
Results:
We were successful in picoing to get new, higher powers on the PDs. PNG 1 shows how far we moved on the HAM2 + oplev picomotor 8: -150 in X and +320 in Y. PNG 2 shows how much power we gained on the PD array during picoing. We were also successful in getting lower beam-jitter-induced intensity noise on the PD array. PDF 2 shows the PSDs from every ISS array PD and should be compared to alog 84805. The IMC ASC and beam jitter peaks are largely reduced in every PD now. We ended up realigned onto the ISS QPD at PIT = -0.8 and YAW = 0.17; we started at PIT = -0.97 and YAW = -0.63. ISS QPD SEG3 is no longer saturating, but SEG4 is now VERY close to saturation, so we did not really solve that problem.
Discussion:
We did not really finish picoing because we started the Simulines calibration script, which seems to have killed the lock twice now. I think the place we are leaving the alignment should be better for locking and for closing the ISS SECONDLOOP. As usual, we saw the powers on the PDs moving differentially (some going up while others go down). I prioritized maximizing the power and eliminating the noise on the in-loop PDs 1-4. Matt Todd will post a comparison of DARM before and after picoing on the ISS array. We should not have expected an improvement because we know our beam jitter is coming from upstream, but it's possible we improved something at 410 Hz. It could also just be nonstationary noise or thermalization. I recommend altering the IMC alignment to help ameliorate our beam jitter woes. We have evidence from IM4 TRANS and IMC WFS DC that our input alignment is slightly altered, and the IMC REFL camera lately has been looking like a big S. It may be time to walk the IMC alignment, either with DOF4 steps or MC2 QPD offsets. Robert also suggests that the turbo pumps running next to HAM1 could be a source of our elevated beam jitter peaks.
TITLE: 06/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in LOCKING_ALS and PLANNED ENGINEERING
A very productive day, during which we discovered that on Friday LHO needs to briefly vent HAM1 to reseal the +Y door. This began a rush to get a calibrated IFO before that vent. In order, here's what happened.
VAC Morning Maintenance:
Vacuum entered at around 8:15 AM PT to determine where their detected annulus pump leak was. The day prior, they had found that it was in one of the HAM1 doors. This morning, they found that it is the +Y door, which requires moving ISCT1 to access. They then told the rest of the lab at the 10 AM PT commissioning meeting. We have to vent on Fri, Jun 6. Vacuum will attempt to vent and pump down all within the weekend, still staying prepped for a Wed O4 restart. VAC was out at 10:30 AM PT and we began locking right after.
Locking and the Calibration Sweep Rush:
Armed with the knowledge that the IFO would be down for commissioning starting tomorrow, it was determined that we needed to get at least one thermalized NLN calibration sweep, so we began locking:
The plan for tonight is to try and get a calibration sweep ahead of maintenance tomorrow.
Relevant Alogs and Other Investigations:
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:30 | LASER | LASER HAZARD | LVEA | LASER HAZARD | LVEA IS LASER HAZARD (⌐■_■) | 14:55 |
14:43 | OPS | TJ | LVEA | YES | Laser safe transition | 14:55 |
15:00 | VAC | Jordan, Gerardo | LVEA | N | HAM1 Annulus Leak Checks | 17:29 |
15:01 | SUS | Randy | LVEA | N | Walkabout (and maybe a forklift lift) | 15:22 |
15:04 | VAC | Janos | LVEA | N | HAM1 Annulus Leak Checks | 17:29 |
15:17 | FAC | Kim, Nellie | LVEA | N | Technical Cleaning (Nellie out 9:25) | 16:37 |
15:23 | FAC | Tyler | LVEA | N | FAMIS Checks | 15:47 |
16:03 | FAC | Richard | LVEA | N | Walkabout for vacuum | 16:08 |
16:08 | EE | Fil | LVEA | N | HAM6 Racks | 16:40 |
16:43 | ISC | Jennie | LVEA | N | Measuring HAM1/ISCT1 distances | 16:48 |
17:10 | FAC | Nellie, Kim | MX | N | Tech Clean | 18:27 |
17:11 | SUS | Betsy | LVEA | N | Cable Dispatch | 17:30 |
18:28 | VAC | Marc, Janos | LVEA | N | Ion pump cables | 18:35 |
18:32 | VAC | Gerardo, Janos | FCES | N | Wrench Search | 19:32 |
21:10 | PCAL | Rick | PCAL LAB | N | Part search | 23:10 |
22:12 | EPO | Robert, Syracuse Tour | LVEA | N | Roof Tour | 22:47 |
TCS Monthly Trends FAMIS 28461
ITMY SLED output power is decreasing.
I trended these channels back to before the vent and all of these plots seem to have abnormal values due to the vent and various other tinkerings, like swapping out the SLEDs.
TITLE: 06/05 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 8mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
(This is Oli)
We just lost lock while trying to run a calibration measurement, so we're working on relocking.
(This is Oli)
Once we had been at 60W for two hours, I started a calibration measurement. I started with Simulines, since yesterday I had gotten a broadband measurement done (84808). A couple of minutes into the measurement, we lost lock. The cause is unknown, but I've attached the output from the simulines measurement (txt).
Lockloss happened as Calibration signals were ramping on, see attached. First glitch in L3, see attached.
Attaching trend of OMC DCPD sum during LL. Plot suggests the DCPDs were not the cause of the lock-loss.
Georgia, Sheila, Camilla
PRMI ASC feedback on PRM hasn't been successfully used since the vent. The operators have been touching up PRM by hand.
We looked at this data for PIT (today's plot attached) and YAW (yesterday's plot attached). Using the same sensor, REFL_A_RF9_I, is fine; the zero crossings of the signals are good. The YAW sign is wrong. We have changed the gain in ISC_DRMI.py for ASC-PRC1_Y_GAIN from -0.1 to +0.1 and reloaded. It will get set back to zero by the guardian, so it shouldn't affect anything else.
This is untested and left off: to test it, you would need to change line 155 of /isc/h1/guardian/lscparams.py to set PRMI ASC to True.
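For orientation, here is an illustrative sketch of the two pieces involved; the actual edit is just the sign change in ISC_DRMI.py described above, and the lscparams variable name below is an assumption.

    # lscparams.py, line 155 mentioned above (variable name assumed for illustration)
    use_PRMI_ASC = False    # set to True to test PRMI ASC with the new sign

    # ISC_DRMI.py, when engaging PRMI ASC (guardian context; ezca provided there)
    ezca['ASC-PRC1_Y_GAIN'] = +0.1   # was -0.1; sign flipped per the analysis above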
Elenna, Jennie, Camilla
We tested these before turning them on; YAW worked great, PIT did not (see the attached error signal, which does not come to zero as the alignment is improved). Unsure what has changed, but Elenna and Sheila saw similar weirdness in PRX this AM. Can try again next week.
Jordan, Gerardo, Travis, Marc, Fil, Janos - WP 12591
Today, starting at 8:00 am, Gerardo and Jordan leak checked the HAM1 annulus system by connecting a leak checker to the annulus pumping turbo and, of course, valving out / switching off the annulus IPs. At first, the all-metal parts of the system were checked, mostly on top of HAM1; those were found OK. Then the Y- door was checked by sniffing He into the leak checking grooves and, after a few seconds, flushing them with nitrogen; the Y- door was found OK too. Then the Y+ door was checked with the same method, and the bottom half of it showed huge leaks. Therefore, the chamber needs to be re-vented, the O-rings most likely replaced, and the door reinstalled. This will take place tomorrow (06-06).
In the meantime, IP3 was diagnosed and troubleshot. Travis and I swapped the 2 HV cables on the ion pump, and the same half of the pump remained functional, which indicated a cable failure (together with the consequences of aLog 84761). So we pulled in a legacy H2 (former IP7) cable and tested, but without success. Then Marc and Fil jumped in, and after a while we figured out that in the MER we had plugged the wrong cable into the controller, due to wrong cable labeling. In the end, Marc and I went back into the LVEA, made sure that the LVEA end of the currently plugged cable is indeed not connected, then searched for the appropriate cable end in the MER and plugged it into the controller, so now the IP is working again. The only thing remaining is to valve it back in to the main volume.
Filiberto and I went to the mechanical room to look at the different components on the vacuum racks, primarily the location of the ion pump controllers, and to our surprise we found the controller for IPFCC1 powered off (see the trend for the time it was off). We found that the power cable can be easily shaken loose, so we removed and replaced the power cable; the new one is a much better fit. Once power was restored, the controller powered back on and restored high voltage to the pump.
At the end of the day on Friday, Randy fork-lifted ISCT1 back to HAM1. Its location and height were adjusted to match the markings. We left the yellow VP covers on and the bellows off, as HAM1 VAC had started pumping.