Oli, Camilla WP12203. Repeat of some of the work done in 2019: EX: 52608, EY: 52636, older: part 1, part 2, part 3.
We misaligned ITMY, turned off the ALS-Y QPD servo with H1:ALS-Y_PZT_SWITCH, and placed the Ophir Si scanning-slit beam profiler to measure both the 532nm ALSY outgoing beam and the ALSY return beam in the HWS path.
The outgoing beam was a little oblong in the measurements but looked fairly clean and round by eye; the return beam did not. Photos of the outgoing and return beams are attached. The outgoing beam was 30mW, the return beam 0.75mW.
Attached are the 13.5% and D4sigma measurements; I also have photos of the 50% measurements if needed. Distances are measured from the optic where the HWS and ALS beams combine, ALS-M11 in D1400241.
We had previously removed HWS-M1B and HWS-M1C and translated HWS-M1A from what's shown in D1400241-v8 to remove clipping.
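For reference, the D4σ width reported by the profiler is four times the square root of the intensity-weighted second moment of the beam profile (the ISO 11146 definition). A minimal sketch of that calculation, using a hypothetical Gaussian test profile rather than our measured data:

```python
import numpy as np

def d4sigma_width(x, intensity):
    """D4-sigma width: 4x the square root of the intensity-weighted
    second moment of the profile about its centroid (ISO 11146)."""
    w = intensity / np.sum(intensity)
    centroid = np.sum(x * w)
    variance = np.sum((x - centroid) ** 2 * w)
    return 4.0 * np.sqrt(variance)

# For a Gaussian beam with 1/e^2 radius w0, D4sigma equals 2*w0.
x = np.linspace(-5.0, 5.0, 2001)           # slit position, mm (hypothetical)
profile = np.exp(-2.0 * x**2 / 1.0**2)     # test beam with w0 = 1 mm
width = d4sigma_width(x, profile)          # expect ~2.0 mm
```

This is only an illustration of the width definition; the Ophir software computes it internally from the scanned slit profile.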
TJ, Camilla
We expanded on these measurements today, measuring the positions of the lenses and mirrors in both the ALS and HWS beampaths and taking beamscan data further from the periscope, where the beam is changing size more quickly. Data attached for today, plus all the data together calculated from the VP. A photo of the beamscanner in the HWS return ALS beam path is also attached.
Oli, Camilla
Today we took some beam measurements between ALS-L6 and ALS-M9; these are in the attached documents with today's data and all the data. The horizontal A1 measurements seemed strange before L6. We're unsure why, since further downstream, when the beam is larger and easier to see by eye, it looks round.
TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 10mph Gusts, 8mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 1.47 μm/s
QUICK SUMMARY: Due to very rapidly rising microseism, H1 will not be locking this evening or overnight, which allows us to run a long test for PSL glitch hunting by leaving just the PMC locked (no FSS, ISS, IMC). The DB37 cable was unplugged from the FSS fieldbox at 23:45 UTC.
Camilla C., TJ
Recently, the CO2Y laser that we replaced on Oct 22 has been struggling to stay locked for long periods of time (alog 81271 and trend from today). We've found loose or bad cables in the past that have caused us issues, so we went out on the table today to double-check they are all OK.
The RF cables that lead into the side of the laser can slightly impact the output power when wiggled, in particular the ones with a BNC connector, but not to the point that we think it would be causing issues. The only cable we found loose was for the PZT that goes to the head of the laser. The male portion of the SMA that comes out of the laser head was loose and cannot be tightened from outside the laser. We verified that the connection from this to the cable was good, but wiggling it did still introduce glitches in the PZT channel. I don't think we've convinced ourselves that this is the problem, though, because the PZT doesn't seem to glitch when the laser loses lock; instead it runs away.
An unfortunate consequence of the cable wiggling was that one of the Beckhoff plugs at the feedthrough must have been unseated slightly, causing our mask flipper readbacks to read incorrectly. The screws for this plug were not working, so we just pushed the plug back in to fully seat it, and all seemed to work again.
We are still not sure why we've been having these locklosses lately; the 2nd and 3rd attachments show a few of them from the last day or so. They remind me of back in 2019 when we saw this - example1, example2. The fix was ultimately a chiller swap (alog 54980), but the flow and water temperature seem more stable this time around. Not completely ruling it out yet, though.
We've only had two relocks in the last two weeks since we readjusted the cables. This is within its normal behavior. I'll close FRS32709 unless this suddenly becomes unstable again. Though there might be a larger problem of laser stability, I think closing this FRS makes sense since it references a specific instance of instability.
Both X & Y tend to have periods of long stretches where they don't relock, and periods where they have issues staying locked (attachment 2). Unless there are obvious issues with chiller supply temperature, loose cables, wrong settings, etc., I don't think we have a great grasp of why it loses lock sometimes.
LVEA has been swept. Just had to unplug a(n already off) PEM function generator and turn off the lights for the HAM5/6 giant cleanroom.
Today we swapped the PT154 gauge on the FC-A section that was reported faulty in alog 81078 with a brand new gauge, same make/model.
FCV1 & 2 were closed to isolate the FCA section, and the angle valve on the A2 cross closed to isolate the gauge. Volume was vented with dry nitrogen and gauges swapped. CF connection was helium leak checked, no He signal above the HLD background ~2e-10 Torr-l/s.
Closing WP 12200
Set up a scope near the PSL rack. The channels are FSS test2, PMC mixer out, ISS PDB, and IMC servo test 1. The trigger has been connected to the IMC REFL shutter, which usually triggers upon a lockloss or when more than ~4W is detected in reflection of the IMC.
On 22 Nov 2024 around 20:30 PT, I disconnected the remote scope and all its input BNCs, and I unplugged the power strip for the PSL remote scope + 785 + Agilent because Ryan was close to relocking.
Sheila, writing for a large crew (Jason, Vicky, Daniel, Keita, Marc, Rick)
This morning we spent about 3 hours with the IMC offline and the ISS autolocker requested to OFF (from 16:30 UTC), then unplugged the IMC feedback to the IMC VCO (from 17:08 UTC to 11:54 UTC).
In the attached screenshot you can see a few large disturbances during this time, which show up with an amplitude of 0.1 on the PMC mixer channel, large oscillations in the FSS fast monitor (30 counts), dips in the reference cavity transmission, and glitches in the PMC high voltage. The second screenshot shows a zoomed-in view of the problem times.
We also noticed that unplugging the feedback from the IMC to the VCO changed the reference cavity transmitted power by about 1%. Daniel suggests this might be OK because the change in laser frequency causes a small change in alignment out of the AOM.
We set the FSS autolocker to off at 19:54 UTC. This doesn't actually disable the FSS, but we can track times when the reference cavity would go through resonance by watching the trans PD. At 21:19 UTC, Ryan Crouch started locking the IFO as the winds are supposed to be calm this afternoon (we did not see any glitches in this short test but need a longer one).
We will plan to continue this PMC only test overnight when the wind is supposed to come back up.
Related alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81320
Some time ago (Jul 09 2024) the analog DC gain of H1:ASC-AS_B_DC was set from "High" to "Low" when I was looking to extract more information from the WFSs about the power coming past the Fast Shutter when the shutter bounces down after high-power locklosses. Since this is not necessary any more (see plots in alog 81130), and since keeping it means we have to either adjust the dark offset once in a while and/or change the "shutter closed" threshold for the Fast Shutter test (alog 81320), I set the gain switch back to "HIGH" and disabled the +20dB digital gain in H1:ASC-AS_B_DC_SEG[1234] FM4.
Interestingly, when I flipped the gain switch, the SEG3 output didn't change (1st attachment). It could be that that specific DC channel is stuck at HIGH or LOW, but it could also be that the analog offset happens to be really small for that channel. I cannot test it right now as we're leaving the IMC unlocked. As soon as the light comes back I will go to the floor and test.
The second attachment is the relevant MEDM screen (showing the state it should be in now), and the third is a picture of the interface chassis (the switch I flipped is circled in red; it's supposed to be HIGH now).
After the IMC was relocked I went to the floor, opened the fast shutter, switched the analog gain switch back and forth several times and confirmed that all segments responded correctly.
I accepted in SDF in both SAFE and OBSERVE to keep FM4 off.
It looks like usually the PSL PMC + FSS can stay locked for long periods of time, for example >1 month over the emergency vent in August. See the attached trends of the # of locked days and hours in the last few months (ndscope saved as /ligo/home/victoriaa.xu/ndscope/PSL/pmc_fss_can_stay_locked_over_emergency_o4b_vent.yaml).
The last time the PMC was locked for almost a month ended on Sept 23, 2024. Since then the PSL PMC has not stayed locked for over 5 days, but this is most likely due to commissioning tests and debugging which started around then.
Some trends showing several PSL signals over O4b, and the end of O4a.
Key points- PMC + FSS stayed locked continuously during both the O4a/b break (Feb 2024), and the emergency vent (Aug 2024), with minimal glitching in PMC TRANS and Ref Cav Trans PD (FSS-TPD).
Trying to compare PSL-related spectra between the original O4 NPRO and the current NPRO using DTT is kinda confusing.
Comparing times with the original O4 NPRO from 21 Oct 2024 04:55 UTC (blue, 169 Mpc) and now with the O3 NPRO at 16 Nov 2024 08:20 UTC (red, 169.9 Mpc).
For the fast-mon spectra in the top-left plot, FSS FAST MON OUT looks 2-5x noisier now than before. H1:PSL-FSS_FAST_MON_OUT_DQ is calibrated into Hz/rtHz using zpk = ([], [10], [1.3e6]) (see 81251; this puts the H1 and L1 spectra in comparable Hz/rtHz units, 81210).
But for FSS-TPD (ref cav trans), there looks to be some extra 1-10Hz noise, though otherwise the trans spectrum might be quieter. Similarly confusing, the ISS AOM control signal looks quieter. There's no clear takeaway from these spectra alone on how the O3/O4 NPROs compare.
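For anyone reproducing the calibration outside DTT, a minimal sketch of applying that zpk as a magnitude correction to a raw ASD. This assumes the pole is specified in Hz and a simple (non-foton-normalized) gain convention; it is an illustration, not the actual DTT filter:

```python
import numpy as np
from scipy import signal

def calib_mag(freqs_hz, pole_hz=10.0, gain=1.3e6):
    """|H(f)| for zpk = ([], [pole_hz], [gain]).
    Pole assumed to be given in Hz (an assumption, not from the alog)."""
    p = [-2.0 * np.pi * pole_hz]              # pole in rad/s
    w = 2.0 * np.pi * np.asarray(freqs_hz)    # evaluation points in rad/s
    _, h = signal.freqs_zpk([], p, gain, worN=w)
    return np.abs(h)

f = np.logspace(0, 3, 200)        # 1 Hz to 1 kHz
raw_asd = np.ones_like(f)         # placeholder counts/rtHz spectrum
cal_asd = raw_asd * calib_mag(f)  # -> Hz/rtHz under the stated assumptions
```

Above the 10 Hz pole the correction falls off as 1/f, so a flat counts/rtHz spectrum maps to a falling Hz/rtHz spectrum, which is the usual frequency-noise shape.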
Bypass will expire:
Tue Nov 19 10:45:37 PM PST 2024
For channel(s):
H0:FMC-CS_FIRE_PUMP_1
H0:FMC-CS_FIRE_PUMP_2
Tue Nov 19 10:12:46 2024 INFO: Fill completed in 12min 42secs
WP12201
Marc, Fil, Dave, Ryan
h1susex is powered down to restore cabling from the new 28bit LIGO-DAC to the original 20bit General Standards DACs.
Procedure:
1. Put the h1iopseiex SWWD into long bypass
2. Safe the h1susetmx, h1sustmsx, and h1susetmxpi models
3. Stop all models on h1susex
4. Fence h1susex from the Dolphin fabric
5. Power down h1susex
D. Barker, F. Clara, M. Pirello
Swap to the original 20-bit DAC is complete. Here are the steps we took to revert from LD32 to GS20:
Images of the front and rear of the rack prior to the changes are included.
Tue19Nov2024
LOC TIME HOSTNAME MODEL/REBOOT
09:59:38 h1susex h1iopsusex
09:59:51 h1susex h1susetmx
10:00:04 h1susex h1sustmsx
10:00:17 h1susex h1susetmxpi
15:22 UTC lockloss - 1416065051
FSS_FAST_MON grew just before the lockloss. The IMC lost lock at the same time as ASC; FSS and ISS also lost lock at the same time.
TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.54 μm/s
QUICK SUMMARY:
TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Shift consisted of 3 locks and 2 IMC locklosses
Locking has been fairly straightforward once the IMC decides to get and stay locked.
Survived an M5.6 Tonga earthquake and a few PI ring-ups.
Tagging SUS because ITMY Mode 5 is ringing up. I have turned off the gain to it, as the current nominal state is making ITMY Mode 5 worse and has been for the last few locks.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:24 | EPO | Corey +1 | Overpass, Roof | N | Tour | 00:24 |
01:06 | PCAL | Rick S | PCAL Lab | Y | Looking for parts | 02:16 |
After discussions with control room team: Jason, Ryan S, Sheila, Tony, Vicky, Elenna, Camilla
Conclusions: The NPRO glitches aren't new. Something changed to make us less able to survive them in lock. The NPRO was swapped, so it isn't the issue. Looking at the timing of these "IMC" locklosses, they are caused by something in the IMC or upstream (81155).
Tagging OpsInfo: Premade templates for looking at locklosses are in /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/PSL_lockloss_search_fast_channels.yaml and will come up with command 'lockloss select' or 'lockloss show 1415370858'.
Updated the list of things that have been checked above and attached a plot where I've split the IMC-only-tagged locklosses (orange) from those tagged both IMC and FSS_OSCILLATION (yellow). The non-IMC ones (blue) are the normal locklosses and (mostly) the only kind we saw before September.