TITLE: 02/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.26 μm/s
QUICK SUMMARY:
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 02/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Very quiet shift with just one brief drop from observing early on. H1 has now been locked for 11 hours.
FAMIS 31071
Jason's work in the enclosure on the FSS and the increase of chiller flow are clearly seen on many trends (alog82503). PMC Refl continues to rise.
TITLE: 02/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 19:06 UTC
Calm day overall, with many improvements to the weekend's locking issues made during the 8:30 to 11:30 AM commissioning time.
Namely:
SQZ weekend issue fixes: alog 82604
Weekend Low Range Checks (coherence and A2L) changes: alog 82605
SDF scripts accepted after commissioning: alog 82614 (and attached).
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LASER SAFE (•_•) | LVEA | SAFE! | LVEA SAFE!!! | 19:08 |
17:38 | FAC | Chris | MY | N | HVAC Maintenance | 18:38 |
18:47 | PCAL | Tony | PCAL Lab | Local | PS4/5 Measurement | 19:09 |
20:49 | VAC | CEBEX | MY | N | CEBEX Walkthrough | 21:32 |
21:50 | ISC | Jennie, Mayank, Shivananda | Optics Lab | Local | Mode matching for ISS | 22:50 |
23:17 | ISC | Matt, Camilla | Optics Lab | Local | Tool pan sorting | 00:17 |
00:29 | PCAL | Tony Francisco | PCAL Lab | Local | PS5/4 Measurement | 01:29 |
Sheila, Jennie W
Sheila realised the QPD offset changes I made on January 21st had made the kappa C worse, probably because they were the wrong sign. So I reverted them to the pre-January 21st offsets and accepted the values in OBSERVE.snap. We might remeasure these values and update them during the next commissioning period.
The other SDF images show me reverting tramp values after Ibrahim and Sheila measured the test mass A2L values during commissioning.
Jennie W, Mayank, Sheila, Matt
Mayank, Sheila, Matt and I updated the calibration of the OPTIC_ALIGN offsets in the PR2_SPOT_MOVE state that move the IM4, PRM and PR2 mirrors as the commissioner hand tunes the PR3 position. These are set in userapps/isc/h1/guardian/ISC_library.py.
The values were found by Camilla and Matt looking at past spot moves, after Elenna realised we hadn't been keeping up with PR2 moves since the last time we did this in LHO alog 82568.
The new value for pitPR3toIM4 is 11.
The new value for yawPR3toIM4 is 0.755.
The new value for yawPR3toPRM is 1.89.
After updating these in the guardian, we went to the PR2_SPOT_MOVE state and Mayank stepped PR3 up and down a few steps in yaw, then in pitch, to check that the alignment loops were acting to cancel out this change in the locking loops for PR2, PRM and IM4. We found that the PR2 and IM4 M1_LOCK_Y_OUTPUT stayed constant. PRM was moving quite a lot as we thermalised, so it was hard to tell. We also checked that the test mass, SR2 and SRM loops were not affected by this yaw change - see image.
During the pitch change we did see some changes in IM4, PRM and PR2 so we may have to update the calibration factor for these sliders in the PR2_SPOT_MOVE state. PR2 P LOCK OUTPUT also changed when we were making Y changes to PR3 due to cross-coupling. The test mass locking loops lag the PR3 pitch movements also so this is something to watch out for.
Olli committed these changes to ISC_library.py for me.
We changed the PR3 pitch by +1 count and observed the resulting couplings.
We also noticed a PR3 pitch to PR2 yaw cross-coupling.
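A minimal sketch of how these coupling factors could be used, assuming the guardian applies proportional OPTIC_ALIGN offsets when PR3 is stepped. Only the three constants come from this entry; the function and dictionary keys are illustrative, not the actual ISC_library.py code:

```python
# Coupling factors from this entry (set in userapps/isc/h1/guardian/ISC_library.py).
pitPR3toIM4 = 11
yawPR3toIM4 = 0.755
yawPR3toPRM = 1.89

def pr3_move_offsets(d_pit, d_yaw):
    """Illustrative only: OPTIC_ALIGN offset deltas that would compensate
    a PR3 slider move of d_pit (pitch) and d_yaw (yaw) counts."""
    return {
        'IM4_P': pitPR3toIM4 * d_pit,
        'IM4_Y': yawPR3toIM4 * d_yaw,
        'PRM_Y': yawPR3toPRM * d_yaw,
    }
```

For example, a +1 count PR3 yaw step would correspond to IM4 and PRM yaw offsets of 0.755 and 1.89 counts respectively.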
[Jennie, Siva, Mayank]
Laser: Axcel, Model Designation: BF-A64-0130-PP
We removed the lens (f = 250 mm) in front of the M² meter and placed a lens (f = 1 m) roughly 1 m away from the M² meter.
The lens was adjusted so that the beam spot falls roughly at the center of traveling range of the stage.
We measured the M² of the beam, which gave the following results:
M²x = 1.09 and M²y= 1.10.
Beam Waist Diameter X =702 µm
Beam Waist Diameter Y =703 µm
Beam Waist Position X =132 mm
Beam Waist Position Y =121 mm
Rayleigh Length X = 332 mm
Rayleigh Length Y =331 mm
The M² value was better than in the measurement done with the 250 mm lens. However, the beam waist positions in X and Y were off from each other by about 10 mm.
This astigmatism could be because of a mismatch between the beam and the optical axis of the lens.
We tried manually adjusting the lens, but this was the minimum we could get by hand.
We can try tomorrow placing the lens on a vertical X-Y stage to get fine movement.
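As a sanity check, the measured Rayleigh lengths are consistent with z_R = πw₀²/(M²λ). The 1064 nm wavelength below is an assumption, not stated in the entry:

```python
import math

lam = 1.064e-6            # m, assumed laser wavelength
w0x = 702e-6 / 2          # m, waist radius from the measured X diameter
M2x = 1.09

zr_x = math.pi * w0x**2 / (M2x * lam)
# zr_x comes out near 0.334 m, close to the measured 332 mm
```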
TITLE: 02/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 20mph Gusts, 14mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY: H1 has been locked for almost 5 hours.
Sheila, Camilla, Matt Todd
Sheila, Ibrahim, Tony and Ryan S had lots of SQZ issues over the weekend: 82588, 82599, 82597, 82585, 82581
Changes made to avoid these:
I'm also updating the SQZ troubleshooting wiki to hopefully be clearer and allow the operator team to more easily solve future SQZ issues.
During commissioning this morning, we added the final part of the ESD glitch limiting by adding the actual limit part to ISC_LOCK. I added a limit value of 524188 to the ETMX_L3_ESD_UR/UL/LL/LR filter banks, which are the upstream part of the 28-bit DAC configuration for SUS ETMX. These limits are engaged in LOWNOISE_ESD_ETMX, but turned off again in PREP_FOR_LOCKING.
In LOWNOISE_ESD_ETMX I added:
log('turning on esd limits to reduce ETMX glitches')
for limits in ['UL','UR','LL','LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
So if we start losing lock at this step, these lines could be commented out. The limit turn-off in PREP_FOR_LOCKING is probably benign.
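For reference, the corresponding turn-off in PREP_FOR_LOCKING presumably mirrors the turn-on loop with switch_off. This is a guess, since the actual guardian code for that state isn't shown here, and ezca is mocked so the sketch runs standalone:

```python
# Mock of the guardian's ezca LIGOFilter interface, just for this sketch.
class MockFilter:
    def __init__(self, name):
        self.name = name
        self.buttons = {'LIMIT': True}   # assume the limit was left on
    def switch_off(self, button):
        self.buttons[button] = False

class MockEzca:
    def get_LIGOFilter(self, name):
        return MockFilter(name)

ezca = MockEzca()

# Presumed PREP_FOR_LOCKING counterpart of the turn-on in LOWNOISE_ESD_ETMX:
filters = [ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s' % q)
           for q in ['UL', 'UR', 'LL', 'LR']]
for f in filters:
    f.switch_off('LIMIT')
```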
Diffs have been accepted in sdf.
I think the only way to tell if this is working is to wait and see if we have fewer ETMX glitch locklosses, or if we start riding through glitches that have caused locklosses in the past.
Using the lockloss tool, we've had 115 Observe locklosses since Dec 01, 23 of those were also tagged ETM glitch, which is around 20%.
Since Feb 4th, we've had 13 locklosses from Observing, 6 of these tagged ETM_GLITCH: 02/10, 02/09, 02/09, 02/08, 02/08, 02/06.
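The arithmetic behind those fractions (counts copied from the two entries above):

```python
# Since Dec 01: 23 of 115 observe locklosses tagged ETM glitch.
pct_dec = 100 * 23 / 115     # 20.0%
# Since Feb 4th: 6 of 13 observe locklosses tagged ETM_GLITCH.
pct_feb = 100 * 6 / 13       # about 46%
```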
Jim, Sheila, Oli, TJ
We are thinking about how to evaluate this change. In the meantime we made a comparison similar to Camilla's: in the 7 days since this change, we've had 13 locklosses from observing, with 7 tagged by the lockloss tool as ETM glitch (and more than that identified by operators), compared to the 7 days before the change, when we had 19 observe locklosses of which 3 had the tag.
We will leave the change in for at least another week to get more data on its impact.
I forgot to post this at the time: we took the limit turn-on out of the guardian on Feb 12, with the last lock ending at 14:30 PST, so locks since that date have had the filters engaged, but since they multiply to 1, they shouldn't have an effect without the limit. We ran this scheme between Feb 3 17:40 UTC and Feb 12 22:15 UTC.
Camilla asked about turning this back on, and I think we should do that. All that needs to be done is uncommenting the lines (currently 5462-5464 in ISC_LOCK.py):
#log('turning on esd limits to reduce ETMX glitches')
#for limits in ['UL','UR','LL','LR']:
# ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
The turn-off of the limit is still in one of the very early states of ISC_LOCK, so nothing beyond accepting new SDFs should be needed.
Mon Feb 03 10:07:31 2025 INFO: Fill completed in 7min 28secs
TCmins [-89C, -87C] OAT (1C, 34F) DeltaTempTime 10:07:40
Low range coherence check due to the low range we've been having in our last lock. Attached below.
Shown is worse range from 10 Hz to 55 Hz compared to Dec 15 '24.
Using the range comparison script, it also looks like most of the range drop is from below 100 Hz.
DARM coherence check after running the A2L scripts.
Closes FAMIS 26355, Last checked in alog 82451
Laser Status:
NPRO output power is 1.845W
AMP1 output power is 70.07W
AMP2 output power is 135.9W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 5 days, 21 hr 55 minutes
Reflected power = 27.05W
Transmitted power = 101.4W
PowerSum = 128.4W
FSS:
It has been locked for 0 days 0 hr and 49 min
TPD[V] = 0.7664V
ISS:
The diffracted power is around 3.2%
Last saturation event was 0 days 0 hours and 49 minutes ago
Possible Issues:
PMC reflected power is high
TITLE: 02/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY:
IFO is LOCKING at ACQUIRE_PRMI
IFO was locking when I came in, having lost lock only 30 mins ago (14:57 UTC).
Sheila, Mayank
We wanted to double check the calibration of IM4 trans into power on PRM, similar to 63812 and 62213
I have updated the IM4 trans calibration with this additional factor of 0.9566. I engaged FM8 in the IM4 trans NSUM filter bank, which is now labeled "alog82260".
This indicates that at 60.1 W input power from the PSL, we have 56.6 W of power on the PRM.
Screenshots show the new filter in IM4 trans NSUM, SDF in safe, and SDF in observe.
Sheila will add her code shortly.
Code used to check this calibration is attached.
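A rough consistency check of the quoted numbers; this back-calculation is my own inference, not the attached code:

```python
extra_factor = 0.9566       # new gain engaged in FM8 ("alog82260")
power_on_prm_W = 56.6       # calibrated power on PRM at 60.1 W PSL input

# Implied IM4 trans reading before the new factor was engaged:
old_reading_W = power_on_prm_W / extra_factor   # about 59.2 W
```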
Lost lock in the middle of the magnetic injections 15:32 UTC