[Jennie, Siva, Mayank]
Laser: Axcel, Model Designation: BF-A64-0130-PP
We removed the lens (f = 250 mm) in front of the M² meter and placed a lens with a 1 m focal length roughly 1 m away from the M² meter.
The lens was adjusted so that the beam waist falls roughly at the center of the traveling range of the stage.
We measured the M² of the beam, which gave the following results:
M²x = 1.09 and M²y = 1.10
Beam Waist Diameter X = 702 µm
Beam Waist Diameter Y = 703 µm
Beam Waist Position X = 132 mm
Beam Waist Position Y = 121 mm
Rayleigh Length X = 332 mm
Rayleigh Length Y = 331 mm
The M² values were better than in the measurement done with the 250 mm lens. However, the beam waist positions in X and Y were offset from each other by about 10 mm.
This astigmatism could be due to a mismatch between the beam axis and the optical axis of the lens.
We tried manually adjusting the lens, but this was the minimum we could achieve by hand.
Tomorrow we can try placing the lens on a vertical X-Y stage to get finer adjustment.
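As a quick consistency check on these numbers, the measured Rayleigh lengths can be compared against z_R = π·w0²/(M²·λ). A minimal sketch, assuming a 1064 nm wavelength for this diode (the wavelength is not stated above):

import math

wavelength = 1.064e-6  # m; assumed value, the diode wavelength is not stated above

def rayleigh_length(waist_diameter_m, m_squared, wl=wavelength):
    # z_R = pi * w0^2 / (M^2 * lambda), with w0 the waist radius
    w0 = waist_diameter_m / 2.0
    return math.pi * w0**2 / (m_squared * wl)

print(rayleigh_length(702e-6, 1.09))  # ~0.334 m, close to the measured 332 mm
print(rayleigh_length(703e-6, 1.10))  # ~0.332 m, close to the measured 331 mm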
TITLE: 02/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 20mph Gusts, 14mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY: H1 has been locked for almost 5 hours.
Sheila, Camilla, Matt Todd
Sheila, Ibrahim, Tony and Ryan S had lots of SQZ issues over the weekend: 82588, 82599, 82597, 82585, 82581
Changes made to avoid these:
I'm also updating the SQZ troubleshooting wiki, to hopefully make it clearer and allow the operator team to more easily solve future SQZ issues.
During commissioning this morning, we added the final part of the ESD glitch limiting by adding the actual limit to ISC_LOCK. I added a limit value of 524188 to the ETMX_L3_ESD_UR/UL/LL/LR filter banks, which are the upstream part of the 28-bit DAC configuration for SUS ETMX. These limits are engaged in LOWNOISE_ESD_ETMX, but turned off again in PREP_FOR_LOCKING.
In LOWNOISE_ESD_ETMX I added:
log('turning on esd limits to reduce ETMX glitches')
for limits in ['UL','UR','LL','LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
So if we start losing lock at this step, these lines could be commented out. The limit turn-off in PREP_FOR_LOCKING is probably benign.
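For reference, the corresponding pieces look roughly like the following (a sketch only, using the same Guardian-provided ezca and log objects as above; the actual lines in ISC_LOCK.py may differ):

# one-time write of the limit value to each quadrant's filter bank LIMIT field (sketch)
for limits in ['UL','UR','LL','LR']:
    ezca['SUS-ETMX_L3_ESD_%s_LIMIT'%limits] = 524188

# in PREP_FOR_LOCKING: switch the limits back off so they are only active in low noise
log('turning off esd limits')
for limits in ['UL','UR','LL','LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_off('LIMIT')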
Diffs have been accepted in sdf.
I think the only way to tell if this is working is to wait and see if we have fewer ETMX glitch locklosses, or if we start riding through glitches that have caused locklosses in the past.
Using the lockloss tool, we've had 115 Observe locklosses since Dec 01, 23 of those were also tagged ETM glitch, which is around 20%.
Since Feb 4th, we've had 13 locklosses from Observing, 6 of these tagged ETM_GLITCH: 02/10, 02/09, 02/09, 02/08, 02/08, 02/06.
Jim, Sheila, Oli, TJ
We are thinking about how to evaluate this change. In the meantime we made a comparison similar to Camilla's: in the 7 days since this change, we've had 13 locklosses from observing, with 7 tagged by the lockloss tool as ETM glitch (and more than that identified by operators), compared to the 7 days before the change, when we had 19 observe locklosses, of which 3 had the tag.
We will leave the change in for at least another week to get more data on what its impact is.
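For a rough sense of those numbers (counts taken directly from the comparisons above; nothing new measured here):

# ETM-glitch-tagged fraction of observe locklosses
print('since Dec 01:         %.0f%%' % (100.0 * 23 / 115))  # ~20%
print('7 days before change: %.0f%%' % (100.0 * 3 / 19))    # ~16%
print('7 days after change:  %.0f%%' % (100.0 * 7 / 13))    # ~54%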
I forgot to post this at the time: we took the limit turn-on out of the Guardian on Feb 12, with the last lock ending at 14:30 PST, so locks since that date have had the filters engaged, but since they multiply to 1, they shouldn't have an effect without the limit. We ran this scheme from Feb 3 17:40 UTC until Feb 12 22:15 UTC.
Camilla asked about turning this back on; I think we should do that. All that needs to be done is uncommenting the lines (currently 5462-5464 in ISC_LOCK.py):
#log('turning on esd limits to reduce ETMX glitches')
#for limits in ['UL','UR','LL','LR']:
#    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
The turn-off of the limit is still in one of the very early states in ISC_LOCK, so nothing beyond accepting new SDFs should be needed.
Mon Feb 03 10:07:31 2025 INFO: Fill completed in 7min 28secs
TCmins [-89C, -87C] OAT (1C, 34F) DeltaTempTime 10:07:40
Low range coherence check due to the low range we've been having in our last lock. Attached below.
Shown is worse range from 10 Hz to 55 Hz compared to Dec 15 '24.
Using the range comparison script, it also looks like most of the range drop comes from below 100 Hz.
DARM coherence check after the A2L scripts ran.
Closes FAMIS 26355, Last checked in alog 82451
Laser Status:
NPRO output power is 1.845W
AMP1 output power is 70.07W
AMP2 output power is 135.9W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 5 days, 21 hr 55 minutes
Reflected power = 27.05W
Transmitted power = 101.4W
PowerSum = 128.4W
FSS:
It has been locked for 0 days 0 hr and 49 min
TPD[V] = 0.7664V
ISS:
The diffracted power is around 3.2%
Last saturation event was 0 days 0 hours and 49 minutes ago
Possible Issues:
PMC reflected power is high
TITLE: 02/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY:
IFO is LOCKING at ACQUIRE_PRMI.
IFO was locking when I came in, having lost lock only 30 mins ago (14:57 UTC).
TITLE: 02/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Mostly quiet shift with a drop from observing due to SQZ and a lockloss near the end of the shift. H1 just finished an initial alignment and has started relocking.
Lockloss @ 05:36 UTC - link to lockloss tool
No obvious cause, and I don't really see evidence of an ETM glitch this time.
At 01:02 UTC, H1 dropped from observing due to the SHG dropping out, and it would not come back on its own. I brought SQZ_MANAGER to 'DOWN' and raised the SHG temperature (H1:SQZ-SHG_TEC_SETTEMP) from 34.6 to 35.2 to raise the OPO ISS control signal and SHG green power back up to around 2.8 and 120mW, respectively. After that, I requested SQZ_MANAGER back to 'FREQ_DEP_SQZ' and everything came back without issue. Once I accepted the new SHG temperature in SDF, H1 returned to observing at 01:13 UTC.
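For reference, the temperature setpoint change described above amounts to a single EPICS write; a minimal sketch assuming the standard ezca interface (done with SQZ_MANAGER already in DOWN):

from ezca import Ezca

ezca = Ezca(ifo='H1')                # from a Guardian node, the ezca object already exists
ezca['SQZ-SHG_TEC_SETTEMP'] = 35.2   # raised from 34.6 to recover SHG green power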
TITLE: 02/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 20:31 UTC
Many, many issues today, all SQZ related. In short, our range is terrible and we know it is SQZ related.
Most of the story can be found in alog 82588 (and comments) but to summarize:
Otherwise, Dave did a vacstat restart due to a glitch - alog 82595
LOG:
TITLE: 02/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 18mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY: H1 has been locked for 3.5 hours. Ibrahim is bringing me up to speed on the SQZ issues of the morning.
VACSTAT detected a single BSC3 sensor glitch at 13:34 this afternoon. Last glitch was 20 days ago. I restarted vacstat_ioc.service on cdsioc0 at 13:46 and disabled HAM6's gauge.
Unknown-cause lockloss. Sheila and I were troubleshooting a host of SQZ mysteries when this happened (not actually touching anything, just discussing). It's tempting to say it's SQZ related since all our issues this weekend have been, but the cause is as yet unknown.
Back to NLN as of 20:31 UTC
Sheila and I troubleshot more SQZ stuff, ultimately deciding to turn off SQZ ANG ADJUST since it was oscillating the phase, and thus our range, unnecessarily. I edited sqzparams.py to turn it off and then had to edit the SQZ_ANG guardian to make DOWN the nominal state. Now we're observing but will need to manually change the angle to optimize range, which I just did and will do again once thermalized.
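A sketch of the two edits described above (the sqzparams.py flag name is a placeholder for illustration; only the Guardian nominal-state line follows the standard convention):

# in sqzparams.py -- hypothetical flag name, the real variable may differ
use_sqz_ang_adjust = False   # stop the automatic SQZ angle adjustment

# in SQZ_ANG.py (Guardian node) -- make DOWN the nominal state
nominal = 'DOWN'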
SQZ Morning Troubles
After reading Tony's alog, I suspected that the SHG power was too low, causing lock issues. I trended Sheila's SHG power while adjusting the SHG temp to maximize it. This worked to lock SQZ almost immediately - screenshot below.
However, despite SQZ locking, it looked like in trying to lock for hours the SQZ_ANG_ASC had gone awry and, as the wiki put it, "run away". I requested a reset of the SQZ_ANG guardian but this didn't work. Then I saw another SQZ angle reset option, called "Reset_SQZ_ANG_FDS", which immediately broke the guardian. The node was in error and a bunch of settings in ZM4 and ZM5 got changed. I reloaded the SQZ guardian and successfully got back to FREQ_DEP_SQZ. I saw there were some SDF changes that had occurred, so I undid them, successfully making the guardian complain less and less. Then there were 2 TCS SDF OFFSETs, the ZM4/5 M2 SAMS OFFSETs, which were off by 100V and sitting at their max of 200V. I recalled that when SQZ would drop out, it would notify something along the lines of "is this thing on?". I then assumed that because this slider was at its max, all I'd need to do was revert this change, which had happened over 3 hrs ago. I did that and then we lost lock immediately.
What I think happened in order:
Now I'm relocking, having reverted the M2 SAMS OFFSETs that the guardian changed and having adjusted the SHG temp to maximize power (which, interestingly, had to go down, contrary to yesterday). I've also attached a screenshot of the SDF change related to the AWC-SM4_M2_SAMS_OFFSET because I'm about to revert the change on the assumption it is erroneous.
What I find interesting is that lowering the SHG temp increases the SHG power first linearly, then after some arbitrary value, in one big and final jump (kind of like a temperature resonance, if you can excuse my science fiction). You can see this in the SHG DC POWERMON trend, which shows a huge final jump.
Recovered from Lockloss and got to OBSERVING automatically - 17:58 UTC.
The only SDF change was the one I made re-adjusting the SHG temperature to maximize its power.
Investigating further, I realized that once I fixed the SHG temp issue, the flashing notification was "SQZ ASC AS42 not on??", for which I should have followed alog 71083. Instead, SDF led me to the specific issue with the ASC. I will continue to monitor and investigate. Since the range is fluctuating, I will probably go into commissioning to try and optimize squeezing once thermalized.
Ibrahim summarized this over the phone for me, and here's our summary of 4 things that need a fix or looking into (sometime):
In regards to Sheila's final bullet point above, the change to the PSAMS offsets happened as a result of Tony's misclick (alog 82585), where he brought SQZ_MANAGER to 'SET_ZM_SLIDERS_IFO [52]', as reported by the Guardian log at that time:
I also show this interaction in the attached ndscope.