Tue Oct 08 08:07:33 2024 INFO: Fill completed in 7min 29secs
Gerardo confirmed a good fill curbside.
TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 0mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.26 μm/s
QUICK SUMMARY:
Detector has been locked for 3.5 hours now and is running injections.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
H1 called due to an ITMX ISI watchdog trip.
TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We've been locked for just under 2 hours; the range has been just under 160 Mpc.
I've taken one of Sheila's and TJ's scripts and adjusted it to plot the max values of the PSL-FSS_FAST_MON_OUT_DQ and PSL-FSS_PC_MON_OUTPUT channels before and after we started having issues with the FSS. It only looks at data when we were in NLN (600+) and ignores the last minute of each locked stretch before a lockloss (since the IMC unlocks during locklosses and everything in the detector is generally all over the place).
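For reference, a minimal sketch of that approach (not the actual script): the segment list below is a placeholder for segments built from the guardian state channel (H1:GRD-ISC_LOCK_STATE_N >= 600), and gwpy is assumed for data access.

from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

channels = ['H1:PSL-FSS_FAST_MON_OUT_DQ', 'H1:PSL-FSS_PC_MON_OUTPUT']
nln_segments = [(1402000000, 1402010000)]    # placeholder (start, end) GPS pairs for NLN locks

times, maxima = [], {chan: [] for chan in channels}
for seg_start, seg_end in nln_segments:
    seg_end -= 60                            # drop the last minute before the lockloss
    if seg_end <= seg_start:
        continue
    times.append(seg_start)
    for chan in channels:
        data = TimeSeries.get(chan, seg_start, seg_end)
        maxima[chan].append(abs(data.value).max())

fig, axes = plt.subplots(len(channels), 1, sharex=True)
for ax, chan in zip(axes, channels):
    ax.plot(times, maxima[chan], '.')
    ax.set_ylabel(chan)
axes[-1].set_xlabel('segment start [GPS]')
fig.savefig('fss_maxima.png')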
We started seeing FSS-related locklosses on September 17th, so the plot (attachment 1) shows the 'before' in two chunks: June 12 - July 13 (in blue), which was pre-FSS issues and pre-OFI vent, and August 24th - September 17th (in green), which was after the OFI vent but before we started having FSS issues. The 'after'/during period is September 17th - October 4th, shown in red.
In both channels the red tends to reach higher than the blue or green, but the difference isn't drastic, and the glitching doesn't seem to be more frequent during the FSS issues either. By squishing up the plot (attachment 2), I did notice that the level FASTMON reaches does look to have been gradually increasing over the last 4 months, which is interesting.
Lockloss at 00:55 UTC, most likely from earthquake ground motion.
02:02 UTC lockloss at LOWNOISE_ESD_ETMX; ASC_AS_A and IMC-TRANS lost lock within about 100 ms of each other. It doesn't look like there was any glitching until after ASC_AS_A lost light.
One of the recent FIND_IR locklosses looks like it may have been caused by an oscillation?
03:10 UTC back to Observing
TITLE: 10/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
With today's Monday commissioning time, the opportunity was taken for PSL/FSS investigations, which ran longer than the usual 3 hrs. Toward the end of the shift (2210 UTC) we returned to locking. The arms were OK but PRMI was very misaligned, so an initial alignment was run. There were a few instances of the IMC getting stuck in a loop of losing lock; the ISS autolocker was toggled OFF and ON, and the REFSIGNAL was clicked one unit to the right to get the diffracted power closer to the target of 2.5%. Initial alignment completed successfully, and we then returned to locking and have locked DRMI.
RyanS went to check for spare PSL power supplies (just in case).
LOG:
Our range has increased back to around 160 Mpc on the CALIB CLEAN channel over the past few days. I ran the DARM integral compare plots using an observing time on Sept 28 (before the OPO crystal and PRCL FF changes) and Oct 5 (after those changes). It appears the largest improvement occurred at low frequency. Some of that can be attributed to the PRCL feedforward, but not all. Based on the previous noise budget measurements and the change in the coherence of PRCL from Sept 28 to Oct 5, I think the improvement in DARM from 10-30 Hz is likely due to the PRCL improvement. Above 30 Hz, I am not sure what could have caused the improvement. There doesn't appear to be much improvement above 100 Hz, which is where I would expect to see changes from the squeezing if it improved from the OPO changes.
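For context, the idea behind a DARM integral compare is that the BNS inspiral range scales as sqrt( integral of f^(-7/3)/S(f) df ), so a cumulative version of that integral shows which frequency bands account for a range change. A minimal numpy sketch of that idea (illustrative only, not the noise budget code):

import numpy as np

def cumulative_range_integrand(freqs, psd, fmin=10.0):
    """Unnormalized cumulative inspiral-range integral vs frequency."""
    mask = freqs >= fmin
    f, s = freqs[mask], psd[mask]
    integrand = f ** (-7.0 / 3.0) / s
    return f, np.cumsum(integrand * np.gradient(f))

# Comparing sqrt(cumulative integral) for the Sept 28 and Oct 5 PSDs shows,
# band by band, where the range improvement comes from.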
Sheila pointed out two things to me: first, that if we are not using median averaging, these plots might be misleading if there is a glitch, and second, that some of the improvement at low frequency could be squeezing related.
I went through the noise budget code and found that these plots were made without median averaging. However, changing the code to use median averaging is a simple matter of uncommenting one line of code in /ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/common/utils.py that governs how the PSD is calculated for the noise budget.
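To illustrate the effect (this is not the aligoNB code, just scipy's Welch estimator, which offers the same choice via its average keyword):

import numpy as np
from scipy.signal import welch

fs = 16384
data = np.random.randn(fs * 64)           # stand-in for a DARM time series
data[fs * 30 : fs * 30 + 200] += 50.0     # inject a short glitch

f, psd_mean = welch(data, fs=fs, nperseg=fs * 4, average='mean')
f, psd_median = welch(data, fs=fs, nperseg=fs * 4, average='median')
# psd_mean is biased upward by the glitchy segment; psd_median is much less affected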
I reran the darm_integral_compare code using median averaging. The result shows much less difference in the noise at low frequency between these two times. The range is still improved from 10-50 Hz, but there is a small drop in the range between 50-60 Hz. I still think the change from 10-30 Hz is likely due to PRCL.
As a further confirmation of the necessity of median averaging here, I made a spectrogram of the data span on Sept 28, and a few glitches, especially around low frequency, are evident. I didn't see these glitches in the sensitivity channel that I used to choose the data spans (I just trend the sensmon CLEAN range and look for regions without big dips). However, the Oct 5 data span appears fairly stationary.
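For anyone repeating this kind of check, a quick spectrogram sketch (the GPS span is a placeholder for the Sept 28 span, and the channel is my guess for the cleaned strain):

from gwpy.timeseries import TimeSeries

start, end = 1411500000, 1411500600       # placeholder GPS span
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN_CLEAN', start, end)
specgram = darm.spectrogram(stride=10, fftlength=4, overlap=2) ** (1 / 2.)
plot = specgram.plot(norm='log')
ax = plot.gca()
ax.set_yscale('log')
ax.set_ylim(10, 1000)
plot.savefig('span_check_spectrogram.png')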
TITLE: 10/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.27 μm/s
QUICK SUMMARY:
00:00 UTC Observing
We had another transient glitch of the BSC3 PT132-MOD2 gauge which triggered the VACSTAT alarms. Gerardo confirmed that no other gauge had an issue.
The PT132 glitch is a square wave, only 2 seconds in width.
Pressure went from 4.8e-08 to 6.7e-08 Torr at 13:25:42 and returned to its base value 2 seconds later.
VACSTAT MEDM and comparison between BSC2 and BSC3 attached.
I restarted vacstat_ioc.service on cdsioc0 at 13:53 to clear this BSC3 latched event.
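For reference, a rough sketch of how a transient like this could be flagged automatically (illustrative only, not the VACSTAT logic; the thresholds are assumptions):

import numpy as np

def find_square_glitches(pressure, dt, rel_step=1.2, max_width_s=5.0):
    """Return (start_index, width_s) for short excursions above rel_step x the median."""
    baseline = np.median(pressure)
    above = pressure > rel_step * baseline
    rising = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    glitches = []
    for start in rising:
        end = start
        while end < len(above) and above[end]:
            end += 1
        width_s = (end - start) * dt
        if width_s <= max_width_s:
            glitches.append((start, width_s))
    return glitches

# e.g. with 1 Hz data, a 4.8e-08 -> 6.7e-08 Torr step lasting 2 samples gets flagged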
Sheila, Camilla.
New SQZ ASC using AS42 signals with feedback to ZM4 and ZM6 tested and implemented. We still need to watch that this can keep a good SQZ alignment during thermalization. In O4a we used a SQZ ASC with ZM5/6, we have not had a SQZ ASC for the majority of O4b.
Prep to improve SQZ:
Testing ASC from 80373:
In the first 20 minutes of the lock, the SQZ ASC appears to be working well (see plot).
Note to operator team: if the squeezing gets really bad, you should be able to use the SQZ Overview > IFO ASC (black linked button) > "!graceful clear history" script to turn off the SQZ ASC. Then change use_ifo_as42_asc to False in /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py, go through NO_SQUEEZING then FREQ_DEP_SQUEEZING in SQZ_MANAGER, and accept the SDFs for not using SQZ ASC. If SQZ still looks bad, put the ZM4/6 OSEM values (H1:SUS-ZM4/6_M1_DAMP_P/Y_INMON) back to where they were when squeezing was last good, and if needed run scan SQZ alignment and scan SQZ angle with SQZ_MANAGER.
Sheila moved the "0.01:0" integrators from the ASC_POS/ANG_P/Y filters into the ZM4/5/6_M1_LOCK_P/Y filter banks.
This will allow us to more easily adjust the ASC gains and to use the guardian ZM offload states. We turned them on for ZM4/6, edited OFFLOAD_SQZ_ASC to offload ZM4, 5, and 6, and tested by putting an offset on ZM4. We put ZM4/6 back to the positions they were in during lock via the OSEMs. SDFs for the filters were accepted. I removed the "!offload AS42" button from the SQZ > IFO ASC screen (linked to sqz/h1/scripts/ASC/offload_IFO_AS42_ASC.py) as it caused a lockloss yesterday.
Oli tested the SQZ_MANAGER OFFLOAD_SQZ_ASC guardian state today and it worked well. We still need to make the state request-able.
ASC now turns off before SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS. The guardian checks whether the ASC is on via H1:SQZ-ASC_WFS_SWITCH and turns it off before scanning alignment or angle.
We changed the paths so that to get from SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS back to squeezing, the guardian will go through SQZ_ASC_FDS/FIS to turn the ASC back on afterwards.
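As a rough illustration of the new behavior (a sketch only, not the actual SQZ_MANAGER code; guardian injects ezca as a global in system code):

from guardian import GuardState

class SCAN_SQZANG_FDS(GuardState):
    request = False

    def main(self):
        # if the AS42 ASC is engaged, turn it off before scanning the SQZ angle
        if ezca['SQZ-ASC_WFS_SWITCH'] == 1:
            ezca['SQZ-ASC_WFS_SWITCH'] = 0   # placeholder for the real turn-off sequence
        return True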
Starting at 16:48 UTC, we have the IMC locked with the NPRO temperature at 0.3, compared to -0.18 for the last 1.5 years (the MEDM screen says this is in units of K). This was a suggestion from the PSL team to see if our problem is that the laser is operating near a mode hop.
Ryan Short noticed that the laser was still glitching at the higher temperature, so that hasn't solved the issue. The first two screenshots show times of the glitching; the glitches also show clearly in the PWR_NPRO channel, but in minute trends they are not as clear there as in the FSS channel. This test ran until 17:53 UTC.
We are now sitting with the IMC and FSS unlocked, to see if we see the glitches like this in the NPRO channel. This would rule out that the problem is coming from the FSS, and point to a laser problem. We will probably need to look at full data for the NPRO channel for this second test. We've been sitting here since 17:57 UTC.
We saw similar glitches in the NPRO power monitor with the FSS off as on, so the glitches don't seem to be coming from the FSS. (1st attachment)
Ryan next closed the shutter after the NPRO, before the first amplifier. We didn't see any glitches for nearly 2 hours, but then we saw a series of similar glitches (second screenshot). So this narrows the problem down to something in the laser or controller.
Continuing this glitch search from yesterday, the PSL has been locked to the reference cavity with an NPRO temperature of -0.7 since 15:35 UTC October 8th. At that temperature, there was a glitch which looked slightly different from the usual glitches. There was also an oscillation in the FSS.
At around 9 am, I went to the diode room and turned off the noise eater. In that configuration I saw some glitches that looked fairly different from the ones seen regularly; they are mostly only visible in the FSS channel but can also be seen as a small step in the NPRO power channel. There were about 4 glitches like this in an hour.
Then we had the lower temperature (-0.7) with the noise eater on for about an hour, the glitches were not bad during this time.
Later, on a suggestion from Daniel, Rick and I went and disconnected the "diagnostic cables" which connect the power supply to the Beckhoff system. Before doing this, we noted the set and actual temperatures and diode currents, as well as the A and B buttons. (I will add photos of these later.)
Then we went to the diode room and followed instructions that Ryan Short gave me to turn off the two amplifiers in order, then the shutter, and then we turned the NPRO off. We went to the rack, disconnected the cables, and turned the NPRO back on using the button on the controller box. This controller doesn't have a switch on the front panel for the noise eater; it was replaced by a cable which is no longer used. Filiberto looked up some information about this and tells us that the noise eater would be off in this configuration. We quickly saw that there were many glitches visible in this configuration, with the laser temperature back at its usual -0.2 K. This test started at 12:42 Pacific.
At 1:30 Pacific we disconnected the "slow" BNC cable (labeled NPRO temp) from the back of the controller; it was in this configuration from 1:30 to 2:15. We did see glitches in that time, but not the largest ones.
Now we've set the temperature back to normal, and reconnected the cables, and turned back on the amplifiers and their watchdogs. Oli and Tony are proceeding with initial alignment and Rick and I will reset the watchdogs before leaving.
After August’s work, we wanted to check the ITMY compensation plate yaw setting. I did this last week using an input arm shaker injection at 12.6 Hz while sweeping the ITMY compensation plate in yaw, as I had done several times in the past (e.g. 76969, Figure 3).
The figure shows that scattering noise continues to be quite sensitive to the CP yaw on a 20 microradian scale. While previous re-checks did not suggest the need for a change from the initial setting (76969), this time the check suggests a new setting: the original setting, -250, is now in a coupling peak. I suggest changing it to -325 (see figure).
The second page of the figure shows that the coupling peaks are very repeatable.
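For reference, a sketch of how the coupling-vs-yaw curve could be assembled offline (illustrative only; the channel, band, GPS spans, and slider values below are assumptions, not the actual analysis):

from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

yaw_settings = [-150, -200, -250, -300, -325, -350]          # example slider values
segments = [(1412300000 + 120 * i, 1412300000 + 120 * (i + 1))
            for i in range(len(yaw_settings))]               # placeholder GPS spans

blrms = []
for start, end in segments:
    darm = TimeSeries.get('H1:GDS-CALIB_STRAIN_CLEAN', start, end)
    band = darm.bandpass(20, 60)     # band where the 12.6 Hz injection upconverts
    blrms.append(band.rms().value.mean())

plt.plot(yaw_settings, blrms, 'o-')
plt.xlabel('ITMY CP yaw slider value')
plt.ylabel('DARM band-limited RMS, 20-60 Hz')
plt.savefig('cp_yaw_coupling.png')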
I've changed the ITMY CP yaw slider to -325 as of 18:00 UTC October 7th.
Accepted in SDF