Currently Observing at 147Mpc and have been locked for 50 minutes. We did lose lock during maintenance because turning off the BRSs at the end stations left us unable to hold the lock, but with Jim's help we changed ISI_ETMX/Y_ST2_SC from SC_OFF to CONFIG_FIR, which helped us relock; we were back up by 19:05 UTC. The squeezer was also fixed, so we are squeezing again.
As per WP 12069 I have installed a new system in the daq-2 rack slot 39, above the existing router. I have hooked it up to the admin VLAN and have IPMI enabled on it. This is all of the network connectivity that will be enabled today. I have installed a Solarflare 10G card into the system for its eventual connection to the core switch. I had to replace one power supply which had failed; I pulled the replacement power supply from a spare unit on the shelves.

Install notes:
* I am installing the same version of VyOS on this router as is used on the current router.
* The test stand log from the last time I did this: https://alog.ligo-la.caltech.edu/TST/index.php?callRep=15381
* I created a bootable thumb drive from the install ISO and booted into a live image mode.
* After booting to the thumb drive and logging in, I issued the 'install image' command and chose:
* select the local disk (sda)
* automatically partition to a 40GB size
* default settings otherwise
* After install, issue the reboot command.

To transfer the config over, I formatted a USB thumb drive as an ext4 filesystem and mounted it to /mnt. I then entered config mode, issued a 'save config_3_sep_2024', and exited out. I copied the /config/config_3_sep_2024 file to /mnt and unmounted /mnt. After moving the drive to the new router, I became root, mounted /mnt, copied the new config file to /home/vyos/config_3_sep_2024, and changed its ownership to the vyos user. At this time I also updated the config to have the correct hardware MAC addresses for this box. Then as the vyos user I entered config mode, issued a 'load /home/vyos/config_3_sep_2024', then 'commit', then 'save' to make the config persist. I rebooted to make sure the config was properly saved. It took me two tries, as the first time I only committed the change and did not save it.

I have installed an optic on the router and powered it off. I will provide documentation for the operator on how to switch over to this router if there is a failure. The basic procedure is:
* Power off the old router (rack 5, slot 37) using the power button on the front.
* Go to the back of the rack.
* Move the pink cable from the older router to the new one (the port is labeled GB1 on both systems).
* Move the fiber from the older router to the new one (there is presently only one optic in each, so there should be no confusion).
* Go back to the front of the rack and power on the new router (rack 5, slot 39) using the power button on the front.
While Oli was relocking I went to ISCT1 and checked the centering on POPAIR B (motivated by the observation that POP18 is low compared to before the vent, as well as the DC light on this diode, alog 79663). The beam wasn't well centered on the diode, and I've moved it to be more centered while the IFO was locked at 22W on PRM (this didn't make much difference in the powers at 22W). This did improve the powers on POP18 after power-up by about 15-30%, but it doesn't recover us to the power levels we had in the earlier part of O4b. It seems that the degradation started around May 15. We might want to go to the table to check for clipping upstream (today I only touched the mirror in front of POP AIR B), or think about whether we need to touch that picomotor.
FAMIS 21311
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Now that we are trying to stay locked on some maintenance days, I've added a "LIGHT_MAINTENANCE" state to the SEI_ENV guardian. This state turns off the end station stage 1 sensor correction and all of the CPS_DIFF controls. It doesn't include all of the normal environmental tests, but it will do the LARGE_EQ transition that we added a while ago if the peakmon channel goes above 10 micron/s. It won't go to the normal EQ state.
I don't think this will work when the microseism is high, but Oli has been able to do an alignment and work on getting the IFO locked while people were cleaning the end stations and working in the high bay.
Recovery to normal operations is the same as for the normal maintenance state: select AUTOMATIC on SEI_ENV and INIT.
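For reference, here is a rough sketch of the new state's logic, written in the style of a Guardian state. This is an illustration only: the channel names, the switching details for sensor correction and CPS_DIFF, and the LARGE_EQ jump mechanics below are placeholders/assumptions, not copied from the real SEI_ENV code.

    from guardian import GuardState

    PEAKMON_CHANNEL = 'ISI-GND_STS_EQ_PEAK_OUTMON'   # placeholder peakmon channel name
    LARGE_EQ_THRESHOLD = 10.0                        # micron/s, per this entry

    class LIGHT_MAINTENANCE(GuardState):
        request = True

        def main(self):
            # ezca is provided by the Guardian environment.
            # Placeholder writes: turn off end station ST1 sensor correction
            # and the CPS_DIFF controls (the real switching is done differently).
            for end in ['ETMX', 'ETMY']:
                ezca['ISI-%s_ST1_SC_ENABLE' % end] = 0   # hypothetical channel
            ezca['ISI-CPS_DIFF_ENABLE'] = 0              # hypothetical channel
            return True

        def run(self):
            # The only environmental check kept in this state: large earthquakes.
            if ezca[PEAKMON_CHANNEL] > LARGE_EQ_THRESHOLD:
                return 'LARGE_EQ'   # jump to the large-EQ handling state
            return True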
FAMIS 21271
Since the FSS path in the enclosure was tuned up last week (alog 79736), the signal on the RefCav trans TPD has held steady, and the PMC looks like it came back at around the same levels of reflected and transmitted power. The incursion is easily seen on several environmental trends.
No other major events of note.
Sheila, Naoki, Daniel
Overnight, the OPO was scanning for about 5 hours, during which time the 6MHz demod was seeing flashes from the CLF reflected off the OPO. This morning, we still see DC light on the diode, but no RF power on the demod channel. There aren't any errors on the demod medm screen.
We did a manual check of the nonlinear gain using the seed (we can't use the guardian because of the RF6 problem), and it seems that we do have NLG, so the OPO temperature is correct.
Daniel found that the CLF frequency was far off from normal (5MHz) because the boosts were on in the CLF common mode board. Turning these off solved the issue. We've added a check to the OPO guardian's PREP_LOCK_CLF state: if this frequency is more than 50kHz off, the state will not return true and will give a notification to check the common mode board.
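The logic of that check is roughly as sketched below. This is an illustration only: the readback channel name and nominal frequency are placeholders, and only the 50 kHz tolerance comes from this entry.

    from guardian import GuardState

    CLF_FREQ_CHANNEL = 'SQZ-CLF_FREQ_MON'   # hypothetical readback channel
    CLF_FREQ_NOMINAL = 3.1e6                # Hz, placeholder nominal value
    CLF_FREQ_TOLERANCE = 50e3               # Hz, per this entry

    class PREP_LOCK_CLF(GuardState):
        def run(self):
            # ezca and notify() are assumed available from the Guardian environment.
            freq_error = abs(ezca[CLF_FREQ_CHANNEL] - CLF_FREQ_NOMINAL)
            if freq_error > CLF_FREQ_TOLERANCE:
                # Don't proceed; flag the operator to check the CLF common
                # mode board (e.g. boosts left on).
                notify('CLF frequency off by %.1f kHz, check common mode board'
                       % (freq_error / 1e3))
                return False
            return True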
Starting 17:43 Fri 30jul2024, the DTS environment monitoring channels went flatline (no invalid error, just unchanging values).
We caught this early this morning when Jonathan rebooted x1dtslogin and the DTS channels did not go white-invalid. When x1dtslogin came back, we restarted the DTS cdsioc0 systemd services (dts-tunnel, dts-env) and the channels are active again.
Opened FRS31994
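A minimal sketch of how one could catch this failure mode (values frozen but not invalid), assuming pyepics is available; the channel names below are placeholders, not the actual DTS environment channels.

    import time
    from epics import caget

    DTS_CHANNELS = ['X1:DTS-ENV_TEMP', 'X1:DTS-ENV_HUMIDITY']   # placeholder names

    def find_stale_channels(channels, samples=5, interval=60):
        """Return channels whose value never changes over the sampling window."""
        history = {ch: [] for ch in channels}
        for _ in range(samples):
            for ch in channels:
                history[ch].append(caget(ch))
            time.sleep(interval)
        # A channel is "stale" if it always connected but never changed value.
        return [ch for ch, values in history.items()
                if None not in values and len(set(values)) == 1]

    for ch in find_stale_channels(DTS_CHANNELS):
        print('%s appears flatlined' % ch)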
Tue Sep 03 08:11:49 2024 INFO: Fill completed in 11min 45secs
Jordan confirmed a good fill curbside. The low TC temperatures outside of the fill over the weekend were tracked to an ice build-up at the end of the discharge line, which has now been cleared. 1-week trend of TC-A also attached.
Workstations and displays were updated and rebooted. This was an OS package update. Conda packages were not updated.
TITLE: 09/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 12mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Currently in NOMINAL_LOW_NOISE but not Observing due to some SDF diffs from the PEM injections. We are planning on trying to stay locked during today's maintenance.
H1 called for assistance following some trouble relocking; the previous lock only lasted ~8 minutes.
08:50 UTC lockloss
09:46 UTC lockloss
10:40 UTC started an IA which took about 20 minutes
11:30 UTC lost it at LOW_NOISE_LENGTH_CONTROL
I had a lot of trouble getting DRMI to lock, even though the flashes were fairly decent (>100)
12:41 UTC back to NLN; the ISS refused to stay locked, so after many tries I finally put us into observing without squeezing at 13:21 UTC
TITLE: 09/03 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Only one lockloss this shift. Recovery has been taking a while as it's been windy this evening, but H1 is now finally mostly relocked, currently waiting in OMC_WHITENING to damp violins. Otherwise a pretty quiet shift.
LOG:
No log for this shift.
Lockloss @ 03:31 UTC - link to lockloss tool
No obvious cause. Looks like there was some shaking of the ETMs about half a second before the lockloss. Wind speeds have come up to 30mph in the past 45 minutes, so it's possible that could have something to do with the lockloss.
TITLE: 09/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Since the last update, the only thing that happened was a lockloss @ 21:58 UTC from an unknown cause.
Wind wasn't elevated, nor was the primary microseism.
No PI ring-up.
Relocked and back to Observing at 22:51 UTC.
LOG:
No log
TITLE: 09/02 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 12mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY: H1 just began observing as I walked in; sounds like several locklosses have been due to PI ringups, and a couple from EQs.
Lockloss page
No warning or notice of an earthquake was given; it was a sudden small spike in ground motion.
It was observed in Picket Fence.
USGS didn't post this right away, but it was a M4.2, 210 km W of Bandon, Oregon, right off the coast.
I took ISC_LOCK to Initial Alignment after a lockloss at CHECK_MICH_FRINGES.
Relocking now.
Interesting.
Naoki called the control room pretty early and suspected that the locklosses from last night were from some PI ring-ups, as seen in Ryan's alog 79860.
So I told him I'd check it out and document it.
Last night, during the spooky & completely automated OWL shift when no one was around to see it, there were 5 episodes of lock acquisition and lockloss that strangely all happened around the time the lock clock approached the 2-hour mark.
Turns out Naoki's gut instinct was right: they were PI ring-ups!
Not only was he correct that the locklosses were caused by PI ring-ups, but they were the Dreaded Double PI Ring-up! Some say that the Double PI ring-up is just a myth or an old operator's legend. It's supposedly a rare event where 2 different parametric instability modes ring up at the same time!
But here is a list of the Dreaded Double PI ring-up sightings from just last night!
2024-09-02_05:34:01Z ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS Cause: The Dreaded Double PI 28 and 29!!! The SUS-PI guardian did not change the phase of compute mode 28 at all. Lockloss page
2024-09-02_08:06:14Z ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS Cause: Another Dreaded Double PI 28 & 29! Compute mode 28's phase was not moved this time either. Lockloss page
2024-09-02_10:37:06Z ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS Cause: Dreaded Double PI ring-up, but this time the phase for 28 changed; by then it was too late for everyone involved! (No one was involved, because this was completely automated.)
Lockloss page
2024-09-02_12:38:45Z ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS Ok, I'll admit that this one is not a Dreaded Double PI ring-up... but it certainly is a PI 24 ring-up!
Lockloss page
After getting a second set of eyes on this, I am now convinced that this one was a wind gust lockloss.
Maybe the SUS-PI guardian is just having trouble damping PI28 and is instead trying to damp PI29, but that must be ringing up PI mode 29.
If it happens again, Naoki has asked me to simply take the SUS-PI Guardian to IDLE and take the damping gains for 28 & 29 to 0.
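For my own notes, that fallback would look roughly like the sketch below from a Python session, assuming pyepics and that the node is named SUS_PI in the usual GRD channel naming; the PI damping gain channel names here are guesses and should be checked on the MEDM screen first.

    from epics import caput

    # Take the SUS_PI guardian node to IDLE so it stops acting on its own.
    caput('H1:GRD-SUS_PI_REQUEST', 'IDLE')

    # Zero the damping gains for PI modes 28 and 29 (channel names are guesses).
    for mode in (28, 29):
        caput('H1:SUS-PI_PROC_COMPUTE_MODE%d_DAMP_GAIN' % mode, 0)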
Wish me luck.