Sheila, Camilla.
New SQZ ASC using AS42 signals, with feedback to ZM4 and ZM6, has been tested and implemented. We still need to watch that this can keep a good SQZ alignment during thermalization. In O4a we used a SQZ ASC with ZM5/6; we have not had a SQZ ASC for the majority of O4b.
Prep to improve SQZ:
Testing ASC from 80373:
In the first 20 minutes of the lock, the SQZ ASC appears to be working well (plot).
Note to operator team: if the squeezing gets really bad, you should be able to use the SQZ Overview > IFO ASC (black linked button) > "!graceful clear history" script to turn off the SQZ ASC. Then set use_ifo_as42_asc to False in /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py, take SQZ_MANAGER through NO_SQUEEZING and then FREQ_DEP_SQUEEZING, and accept the SDFs for not using the SQZ ASC. If SQZ still looks bad, put the ZM4/6 OSEMs (H1:SUS-ZM4/6_M1_DAMP_P/Y_INMON) back to where they were when squeezing was last good, and if needed run the scan SQZ alignment and scan SQZ angle states with SQZ_MANAGER.
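For reference, a minimal sketch of those steps from a control-room Python session, assuming the state names above and the standard guardian request channel naming (the use of pyepics here is my assumption, not the site procedure):

from epics import caput

# 1) In /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py set
#        use_ifo_as42_asc = False
#    and reload the SQZ_MANAGER guardian node so it picks up the change.

# 2) Cycle SQZ_MANAGER through NO_SQUEEZING and back to FREQ_DEP_SQUEEZING.
caput('H1:GRD-SQZ_MANAGER_REQUEST', 'NO_SQUEEZING')
# wait for the node to arrive, then:
caput('H1:GRD-SQZ_MANAGER_REQUEST', 'FREQ_DEP_SQUEEZING')

# 3) Accept the SDF differences associated with the SQZ ASC being off.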
Sheila moved the "0.01:0" integrators from the ASC_POS/ANG_P/Y filters into the ZM4/5/6_M1_LOCK_P/Y filter banks.
This will allow us to more easily adjust the ASC gains and to use the guardian ZM offload states. We turned them on for ZM4/6, edited OFFLOAD_SQZ_ASC to offload ZM4, 5, and 6, and tested it by putting an offset on ZM4. We put ZM4/6 back to the positions they were in during lock using the OSEMs. SDFs for the filters were accepted. I removed the "!offload AS42" button from the SQZ > IFO ASC screen (linked to sqz/h1/scripts/ASC/offload_IFO_AS42_ASC.py) as it caused a lockloss yesterday.
Oli tested the SQZ_MANAGER OFFLOAD_SQZ_ASC guardian state today and it worked well. We still need to make the state requestable.
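A rough sketch of what offloading one of these loops amounts to (this is not the actual OFFLOAD_SQZ_ASC code; the channel names follow the usual H1 SUS convention and the ramping scheme is my assumption):

from epics import caget, caput
import time

def offload_zm(optic='ZM4', dof='P', n_steps=10, step_time=5):
    """Slowly move the M1 LOCK filter output into the OPTICALIGN offset."""
    lock_out = f'H1:SUS-{optic}_M1_LOCK_{dof}_OUTPUT'
    align_offset = f'H1:SUS-{optic}_M1_OPTICALIGN_{dof}_OFFSET'
    amount = caget(lock_out)
    start = caget(align_offset)
    for i in range(1, n_steps + 1):
        caput(align_offset, start + amount * i / n_steps)
        time.sleep(step_time)
    # The real state would also clear the integrator history in the LOCK bank
    # (and the ASC filter banks) so that the total drive on the optic is preserved.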
ASC now turns off before SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS. It will check whether the ASC is on via H1:SQZ-ASC_WFS_SWITCH and turn the ASC off before scanning alignment or angle.
We changed the paths so that, to get from SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS back to squeezing, the guardian will go through SQZ_ASC_FDS/FIS to turn the ASC back on afterwards.
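In Guardian terms, the check described above might look something like this sketch (the state name shown and the switch values are illustrative, not the actual SQZ_MANAGER code):

from guardian import GuardState
# 'ezca' is provided at module level by the guardian infrastructure at runtime.

class SCAN_SQZANG_FDS(GuardState):
    def main(self):
        # If the AS42 ASC is running, turn it off before scanning the squeezing
        # angle, so the scan does not fight the alignment loops.
        if ezca['SQZ-ASC_WFS_SWITCH'] != 0:
            ezca['SQZ-ASC_WFS_SWITCH'] = 0
        # ... start the angle scan here ...
        return True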
Starting at 16:48 UTC, we have the IMC locked with the NPRO temperature at 0.3, compared to -0.18 for the last 1.5 years (the MEDM screen says this is in units of K). This was a suggestion from the PSL team to see if our problem is that the laser is near mode hopping.
Ryan Short noticed that this was still glitching at the higher temperature, so that hasn't solved the issue. The first two screenshots show times of the glitching, the glitches also show clearly in the PWR_NPRO channel, but they are not as clear when looking at minute trends as in the FSS channel. This test ran until 17:53 UTC.
We are now sitting with the IMC and FSS unlocked, to see if we see the glitches like this in the NPRO channel. This would rule out that the problem is coming from the FSS, and point to a laser problem. We will probably need to look at full data for the NPRO channel for this second test. We've been sitting here since 17:57 UTC.
We saw similar glitches in the NPRO power monitor with the FSS off as on, so the glitches don't seem to be coming from the FSS. (1st attachment)
Ryan next closed the shutter after the NPRO, before the first amplifier. We didn't see any glitches for nearly 2 hours, but then we saw a series of similar glitches (second screenshot). So this narrows the problem down to something in the laser or controller.
Continuing this glitch search from yesterday, the PSL has been locked to the reference cavity with an NPRO temperature of -0.7 since 15:35 UTC October 8th. At that temperature, there was a glitch which looked slightly different from the usual glitches. There was also an oscillation in the FSS.
At around 9 am, I went to the diode room and turned off the noise eater. In that configuration I saw some glitches that looked fairly different from the ones seen regularly; they are mostly only visible in the FSS channel but can also be seen as a small step in the NPRO power channel. There were about 4 glitches like this in an hour.
Then we had the lower temperature (-0.7) with the noise eater on for about an hour, the glitches were not bad during this time.
Later, on a suggestion from Daniel, Rick and I went and disconnected the "diagnostic cables" which connect the power supply to the Beckhoff system. To do this, we first noted the set and actual temperatures and diode currents, as well as the A and B buttons. (I will add photos of these later.)
Then we went to the diode room and followed instructions that Ryan Short gave me to turn off the two amplifiers in order, then the shutter, and then we turned the NPRO off. We went to the rack, disconnected the cables, and turned the NPRO on using the button on the controller box. This controller doesn't have a switch on the front panel for the noise eater; it was replaced by a cable which is no longer used. Filiberto looked up some information about this and tells us that the noise eater would be off in this configuration. We quickly saw that there were many glitches visible in this configuration, while we had the laser temperature back to its usual -0.2 K. This test started at 12:42 Pacific.
At 1:30 Pacific we disconnected the "slow" BNC cable (labeled NPRO temp) from the back of the controller; it stayed in this configuration from 1:30 to 2:15. We did see glitches in that time, but not the largest ones.
Now we've set the temperature back to normal, and reconnected the cables, and turned back on the amplifiers and their watchdogs. Oli and Tony are proceeding with initial alignment and Rick and I will reset the watchdogs before leaving.
Mon Oct 07 08:08:32 2024 INFO: Fill completed in 8min 28secs
Jordan confirmed a good fill curbside.
TITLE: 10/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
H1's currently observing w/ a 3.5hr lock and range at 160Mpc. useism has slowly been inching up the last 12hrs; winds are calm.
Two locklosses overnight with NO FSS tags. One was from a Tonga earthquake (which required an automated Initial Alignment).
NOTE: Monday Commissioning is from 8:30-11:30am PDT.
TITLE: 10/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: 1 lockloss; relocking took a little while, but we've been locked for just over 4 hours now.
LOG: No log.
Genevieve and Sam identified the sources of many of the peaks that were vibrationally coherent with DARM in the 20 to 40 Hz region (8012 ). We suspected coupling at HAM1 and I tested this recently using local injections and the beating shaker technique.
With local injections at HAM1 using shakers and HEPI, we can produce much larger signals at the HAM1 table top than at the PSL or HAM2. The coupling site will likely have the same ratio of locally injected peaks to distant equipment peaks in its vibration sensors as the ratio of these peaks in DARM. The figure shows that the ratio of the injected peak at 25 Hz to the air handler peak at 26.35 Hz is consistent with coupling at HAM1 but not HAM2 (or anywhere else; most other potential coupling sites don't even detect the peak injected locally at HAM1).
The figure also shows that the beats from the beating shakers matched DARM at the HAM1 table top, but could also be consistent with coupling at HAM2, so this technique, as implemented, gave a consistency check but did not discriminate well between HAM1 and HAM2.
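To make the ratio test concrete, here is an illustrative sketch (not the analysis code used here) of how one could compare the injected-peak to air-handler-peak ratio in a candidate chamber's vibration sensor against the same ratio in DARM; data access, frequencies and resolution are placeholders:

import numpy as np
from scipy.signal import welch

def peak_ratio(data, fs, f_inj=25.0, f_equip=26.35, half_bw=0.05):
    """Ratio of ASD peak heights at the injected and equipment frequencies."""
    f, pxx = welch(data, fs=fs, nperseg=int(fs * 120))  # ~0.008 Hz resolution
    asd = np.sqrt(pxx)
    peak = lambda f0: asd[(f > f0 - half_bw) & (f < f0 + half_bw)].max()
    return peak(f_inj) / peak(f_equip)

# If the ratio in the HAM1 table-top sensor matches the ratio in DARM, while the
# ratio at HAM2 (or elsewhere) does not, the coupling is consistent with HAM1.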
There are not many options to improve HAM1 table motion until we can install an ISI, but we could reduce the motion of HAM2 using feedforward to HEPI via the 3dl4c path. I've tested this on HAM3 and it works. I added the ADC connections and moved the L4Cs to HAM2 HEPI this morning, but I only have spare vertical sensors right now, so we are limited to the Z dof. Everything is set up to do a test during a future commissioning window; I could use some time to collect a couple of measurements to check the filter fit. The couple of quick tests I did during maintenance show that the filter from HAM3 works, though.
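As a rough illustration of the filter-fit check mentioned above, a feedforward filter can be estimated from the measured transfer function between the L4C and the signal to be cancelled; a minimal sketch, assuming simultaneously sampled L4C and target time series (this is generic Wiener-style fitting, not the actual 3dl4c path implementation):

import numpy as np
from scipy.signal import csd, welch

def feedforward_fit(l4c, target, fs, nperseg=4096):
    """Frequency-domain estimate of the L4C -> target transfer function;
    its negative is the ideal (non-causal) feedforward filter to fit."""
    f, p_xy = csd(l4c, target, fs=fs, nperseg=nperseg)
    _, p_xx = welch(l4c, fs=fs, nperseg=nperseg)
    tf = p_xy / p_xx   # H(f) from L4C to the target signal
    return f, -tf      # feedforward aims to cancel this coupling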
After August’s work, we wanted to check the ITMY compensation plate yaw setting. I did this last week using an input arm shaker injection at 12.6 Hz while sweeping the ITMY compensation plate in yaw, as I had done several times in the past (e.g. 76969, Figure 3).
The figure shows that scattering noise continues to be quite sensitive to the CP yaw on a 20 microradian scale. While re-checks did not previously suggest the need for a change after the initial setting (76969), the figure shows that this time the check suggests a new setting. The original setting, -250, is now in a coupling peak. I suggest changing it to -325 (see figure).
The second page of the figure shows that the coupling peaks are very repeatable.
I've changed ITMY CP yaw slider to -325 as of 18:00 UTC October 7th.
Accepted in SDF
TITLE: 10/06 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 7mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Back to Observing at 00:48 UTC
TITLE: 10/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Observing for almost half of the shift, with one EQ lockloss in the afternoon which we are recovering from now.
LOG:
Smooth sailing this morning with H1 locked most of the time, but it just lost lock after a 4hr lock (it had managed to ride through the event that took down L1 30min ago). Other than that, environmentals are decent. Off to make lunch.
About Last Night: 0737 - 1401 UTC (12:37am - 7:01am PDT)
Only looking at locklosses from ISC_LOCK states 500 and above (i.e. with ASC Full IFO); a lot of the other locklosses were at earlier states (and quite a few have the FSS Oscillation tag).
Sun Oct 06 08:09:41 2024 INFO: Fill completed in 9min 37secs
TITLE: 10/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
H1 is currently at ENGAGE ASC (after a lockloss from Observing that ended a 51min lock). Environmental conditions are about identical to yesterday's, although there looks to have been an EQ about an hour ago (close to the recent lockloss, but also not obvious at 1412258495.71875)... perhaps the M5.2 from the Vanuatu region.
Additional Note: The 3hr downtime had one lockloss which happened 10min after getting to NLN, and H1 Manager had not yet taken H1 to Observing, which is a bit odd. After that, the next 2hrs of locking had lots of Green arm locklosses (9 in total), but it eventually made it back to Observing.
H1 called for assistance at 08:39 UTC after reaching NLN successfully but being unable to start observing due to an issue with SQZ. I found the issue was that the PMC was consistently locking between 11-12V, and because of the lower bound of 15V in the check recently added to unlock and relock the PMC when outside of a comfortable range (see alog80371), SQZ_MANAGER would repeatedly unlock the PMC once it reached FDS_READY_IFO. Since the PMC seemed happy at this 11-12V point, for now I've changed the lower threshold for the PMC PZT check from 15V to 10V (line 73). SQZ_MANAGER injected squeezing just fine after that, and H1 started observing at 08:55 UTC.
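For context, the kind of range check being adjusted is roughly the following sketch (the real check lives in the SQZ guardian code around the line referenced above; the channel name and upper bound here are placeholders, and only the 10V lower bound reflects today's change):

PMC_PZT_MIN_V = 10   # lowered from 15 V so the current 11-12 V lock point passes
PMC_PZT_MAX_V = 95   # placeholder upper bound

def pmc_pzt_in_range(ezca):
    volts = ezca['SQZ-PMC_PZT_VOLTS']   # placeholder channel name
    return PMC_PZT_MIN_V < volts < PMC_PZT_MAX_V

# If this returns False, SQZ_MANAGER unlocks and relocks the PMC before
# proceeding past FDS_READY_IFO.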
TITLE: 10/06 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: 1 lockloss with an automated relock, we've been locked for a little over an hour.
LOG: No log.
01:45 UTC I adjusted the OPO temperature a 2nd time as the high-frequency squeezing looked bad on NUC33.
01:53 UTC Superevent S241006k
02:35 UTC lockloss (12:44 lock)
03:45 UTC Observing