H1 called for assistance at 13:30 UTC because it was stuck trying to lock ALSY. I gave ETMY two clicks in yaw and that seemed to be all it needed; H1 is relocking smoothly and up to PREP_DC_READOUT.
TITLE: 07/07 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
The first lock acquisition had the same issues that Corey saw during his shift, with ALS unable to stay locked or unable to get past FIND IR. After some time the lock losses became less frequent, and it eventually made it past ALS. I noticed that the COMM beat note was low (-18 dBm, then grew to -16 dBm over 30 min), which might explain part of the trouble. The second lock acquisition was a very similar situation, but the beat note started at -16 dBm, so ALS was more stable sooner. Maybe we found the threshold, and I did NOT touch PR3. I also had to run an initial alignment since DRMI was a bit too far off to catch. We are now on DC readout and moving forward.
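The "threshold" guess above amounts to a simple power check on the COMM beat note before expecting ALS to hold. A minimal sketch of that check (the function name and the -16 dBm threshold are assumptions from this shift's observations, not an established requirement):

```python
def als_beatnote_ok(power_dbm, threshold_dbm=-16.0):
    """Guess whether the COMM beat note is strong enough for stable ALS.

    threshold_dbm is the level observed during this shift (-16 dBm),
    not a calibrated limit.
    """
    return power_dbm >= threshold_dbm

# Readings from the two acquisitions this shift:
assert not als_beatnote_ok(-18.0)  # first attempt: ALS kept dropping lock
assert als_beatnote_ok(-16.0)      # second attempt: stable sooner
```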
ITMX mode 13 was rung up when I arrived, and I was able to damp it fairly quickly with Rahul's settings, then by slowly adding gain.
LOG:
Lockloss 1404369750
Again, not seeing much of anything on any of the plots. Another 4-5 hour lock.
Lockloss 1404346170
Quite sudden; I'm not seeing anything on the lock loss ndscopes, even the many new ones. DIAG_MAIN reports that the ref cav transmission is low, which seems to have been the case for the past 4 days. This seems really low and potentially an issue, but we've had locks within these last 4 days, so perhaps not. I'm running into the same issue that Corey had this morning with ALS: it won't stay locked. I'm still catching up on the week's alogs, so maybe I'll find a hint soon.
Back to observing at 0232 UTC.
ALS wouldn't hold a lock, and I still can't figure out exactly why. Just like on Corey's shift, it eventually decided to hold long enough for DRMI. There was one lock loss here, but a bit later it worked again, we got off of ALS, and then it went all the way up to low noise. ITMX mode 13 is still just as high as before; I'll continue to try Rahul's settings.
Noticing now that the COMM beat note was very low and slowly trended up over time. This perhaps allowed for longer locks, eventually long enough to get off ALS. Looking back at the last few locks, it's definitely lower than it has been, even over the last few days. I'm hesitant to touch PR3, but this doesn't seem to be working.
TITLE: 07/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
H1 was locked at the beginning of the shift, but had a lockloss just after 10am local. Relocking sadly took 3hrs, and it isn't clear why it took so long: other than trying multiple times, eventually running an alignment, and continuing with more locking attempts, H1 finally made it past DRMI (and then locking was mostly automatic from there). But within an hour, one of the 1kHz violins rung up (itmx13). I made some fairly large gain drops/increases which probably didn't help; currently Rahul is closely damping with much smaller gains. The DARM spectrum on nuc30 is glitchy/ugly, presumably due to the noisier violins.
Sadly, no Calibration run today.
Guardian Notes:
LOG:
TITLE: 07/06 Eve Shift: 2330-0100 UTC (1630-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Corey G
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 14mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 3 hours, but the 2nd order violins are quite rung up. They are damping, but slowly. A calibration measurement was not run today due to a short lock and now the high violins. If these damp down, I'll run one.
Received a text message from Corey that ITMX mode 13 was rung up and the nominal settings were making it worse. Corey tried increasing/decreasing the gain without any luck. At this point I applied the following settings to try and damp the mode:
Gain = -1, phase -90 degrees - mode increases
Gain = +1, phase -60 degrees - mode increases
Gain = +0.1, no phase - mode increases
Gain = +0.1, phase +60 degrees - mode increases
None of the above settings worked; the one below seems to be working for now, as shown in the plot:
FM1+FM2+FM4+FM10, gain +0.1 (phase -90 degrees). After some time I will try ramping up the gain to see if I can bring IX13 down at a faster rate.
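The "slowly ramp up the gain" step can be thought of as a schedule of small increments, pausing between each to check that the mode is still coming down. A hypothetical helper to generate such a schedule (this is not the actual damping filter code; the 0.1 step just mirrors the gain used above):

```python
import math

def gain_ramp(start, target, step=0.1):
    """Return intermediate gain settings from start to target, moving by
    at most `step` per adjustment so the mode amplitude can be checked
    between steps before pushing the gain further."""
    n = max(1, math.ceil(abs(target - start) / step))
    return [round(start + (target - start) * i / n, 3) for i in range(1, n + 1)]
```

For example, `gain_ramp(0.1, 0.5)` gives `[0.2, 0.3, 0.4, 0.5]`; one would apply each value in turn and back off if the mode starts growing again.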
H1 had a 4.5hr lock after last night's Timing Error, but then there was a random lockloss this morning just after 10am local.
Sadly, locking has not been trivial, with many locklosses for the green arms (a fully complete & automatic Initial Alignment fixed those). Then there were odd hiccups with DRMI: about 3-4 locklosses around engaging ASC for DRMI. I texted Camilla at this point to give a heads-up for help, but as we chatted on TeamSpeak, H1 made it past DRMI ASC! (so I'm leaving her on top of the call list). Currently H1 is engaging ASC for the full IFO.
Fred will soon enter the control room with a high school class from New Zealand. Robert is also on-site.
H1 just had a lockloss; the green arms looked decent, but it kept hanging up with locklosses (6 so far) in LOCKING ALS.
Will attempt to tweak up SRM (maybe PRM/BS) at ACQUIRE DRMI 1F to help ease ASC for DRMI.
Attempt #2: automatically taken to check MICH fringes, then back to DRMI...
Attempt #3: Went to PRMI fine.
Attempt #4: DRMI ASC completed FINE this time! (I had phoned Camilla before this lock with fears I'd need help! Luckily, we both watched it get past DRMI.)
Because of possible heat issues we are running the CP1 fills earlier in the morning, 8am instead of 10am. For today's fill the outside temp at 8am is 30C (85F).
Sat Jul 06 08:17:25 2024 INFO: Fill completed in 17min 21secs
This run used the standard "summer" trip temps of -130C. Both TCs quickly got to -100C and TCA crept down to -100C before the fill completed. I have lowered the trip to -140C to give a margin of safety.
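The trip logic described above is just a threshold on the thermocouple readings: the fill terminates once a TC reads at or below the trip temperature. A toy sketch of that check (the function and data layout here are hypothetical illustrations, not the actual CDS fill automation):

```python
def fill_tripped(tc_temps_c, trip_temp_c=-130.0):
    """Return True if the fill should terminate: at least one
    thermocouple has reached the trip temperature (e.g. the
    standard "summer" setting of -130C mentioned above)."""
    return any(t <= trip_temp_c for t in tc_temps_c)

# With the trip lowered to -140C, a TC at -135C no longer trips:
assert fill_tripped([-100.0, -135.0], trip_temp_c=-130.0)
assert not fill_tripped([-100.0, -135.0], trip_temp_c=-140.0)
```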
TITLE: 07/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 10mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
H1's been locked/observing for 2hrs after the historic efforts in recovery from the timing/dolphin crash last night. All looks decent at the moment (winds died about 1hr ago, but the X-arm stations (EX + MX) are both oddly above 10mph).
It is Saturday, so that means it's Calibration Day in about 4hrs *knock on wood*.
The front entrance gate won't close all the way, so I have been exiting and entering through it without having to swipe in (I've been told it's due to the heat).
Tony handed off the IFO to me at the end of his shift, after the CDS team had recovered following a timing error and Dolphin glitch; see alog78892 for details on that. Generally alignment was very bad, which is not surprising since multiple seismic platforms tripped as a result of the glitch.
Executive summary: H1 is back to observing as of 12:35 UTC after a lengthy alignment recovery process. See below for my general outline of notes I kept through the process.
FAMIS 26314
There looks to be a slight increase in noise on MR_FAN6 about 5 days ago.
Tony, Jim, Erik, Dave:
We had a timing error which caused DACKILLs on h1susb123, h1sush34, h1sush56 and DAC_FIFO errors on h1sush2a.
There was no obvious cause of the timing error which caused the Dolphin glitch. We noted that h1calcs was the only model with a DAQ CRC error (see attached).
After diag-reset and crc-reset, only the SUS DACKILL and FIFO errors persisted.
We restarted all the models on h1susb123, h1sush2a, h1sush34, h1sush56 after bypassing the SWWD SEI systems for BSC1,2,3 and HAM2,3,4,5,6.
SUS models came back OK, we removed the SEI SWWD bypasses and handed the system over to Tony.
H1SUSH2A DACs went into error 300 ms before the CRC SUM increased on h1calcs.
The DAQ (I believe) reports CRC errors 120 ms after a dropped packet, leaving 180 ms unaccounted for.
FRS31532 created for this issue. It has been closed as resolved-by-restarting.
Model restart logs from this morning:
Sat06Jul2024
LOC TIME HOSTNAME MODEL/REBOOT
01:12:22 h1susb123 h1iopsusb123
01:12:33 h1sush2a h1iopsush2a
01:12:39 h1sush34 h1iopsush34
01:12:43 h1susb123 h1susitmy
01:12:47 h1sush2a h1susmc1
01:12:56 h1sush56 h1iopsush56
01:12:57 h1susb123 h1susbs
01:12:59 h1sush34 h1susmc2
01:13:01 h1sush2a h1susmc3
01:13:10 h1sush56 h1sussrm
01:13:11 h1susb123 h1susitmx
01:13:13 h1sush34 h1suspr2
01:13:15 h1sush2a h1susprm
01:13:24 h1sush56 h1sussr3
01:13:25 h1susb123 h1susitmpi
01:13:27 h1sush34 h1sussr2
01:13:29 h1sush2a h1suspr3
01:13:38 h1sush56 h1susifoout
01:13:52 h1sush56 h1sussqzout
Hey, so I had a quick squiz at whether your locklosses might be PI related. The regularity of your lock lengths is very suspicious.
You had at least one lockloss from a ~80296 PI on July 5th just after 6am UTC.
Since then you have been passing through it, exciting it, but surviving the PI-Fly-By.
On those plots you will notice a broader feature that moves slowly up in frequency and excites modes as it passes (kinda vertical-diagonalish on those plots); that smells like the interacting optical mode to me.
Last couple of days you have lost lock when that feature reaches ~80306, where there's a couple of modes which grow a little in amplitude as the broad feature approaches.
It is hard for me to say which mechanical mode had the super high gain that would make you lose lock on the scale of seconds (because these modes move wildly in frequency at LHO, as I discuss in the link at the end of this line), but it's in that ballpark, and I quote myself: avoid at all costs.
Please investigate if this is PI and whether you need to tweak up your TCS.
EDIT: Longer lock on the 8th makes me think 80kHz PIs are probably fine. Worth double checking carefully in the 20-25 kHz region since your mode spacing looks pretty high. It might be subtle to see in the PI plot on the lock pages since that aliases a lot of noise all over the place.
Thanks for pointing out these PI locklosses, Vlad; Ryan also caught one of them in 78867, 78874.
The lockloss website for the July 5th 637 UTC lockloss 1404196646 tags OMC_DCPD, and elevated noise can be seen on the "PI Monitor" dropdown on the website too, with peaks maybe at 15 and 30 kHz.
Edit2: There are some elevated peaks at many frequencies before a lockloss I chose to look at, but nothing alarming enough. The largest peak change is something at 3619 Hz, which could be a red herring or aliased from a much higher frequency, but I don't know what that peak is.
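On the aliasing point: a spectral line above Nyquist folds back into band, so a peak like the 3619 Hz one could in principle come from a much higher true frequency. A quick way to compute the folded frequency (the 65536 Hz sample rate below is just an assumed example, not necessarily the rate of the channel in question):

```python
def aliased_freq(f_true_hz, fs_hz):
    """Fold a true frequency into the first Nyquist zone [0, fs/2]."""
    f = f_true_hz % fs_hz
    return fs_hz - f if f > fs_hz / 2 else f
```

For example, with fs = 65536 Hz, a true line at 61917 Hz would fold to 65536 - 61917 = 3619 Hz, and a line at 69155 Hz would also land there, so the in-band frequency alone doesn't identify the source.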