Lock loss 1404369750
Again, not seeing much of anything on any of the plots. Another 4-5 hour lock.
Lockloss 1404346170
Quite sudden; I'm not seeing anything on the lockloss ndscopes, even the many new ones. DIAG_MAIN reports that the ref cav transmission is low, and it seems to have been this way for the past 4 days. This seems really low and potentially an issue, but we've had locks within these last 4 days, so perhaps not. I'm running into the same issue that Corey had this morning with ALS: it seems that it won't stay locked. I'm still catching up on the week's alogs, so maybe I'll find a hint soon.
Back to observing at 0232 UTC.
ALS wouldn't hold a lock, and I still can't figure out exactly why. Just like on Corey's shift, it eventually decided to hold long enough for DRMI. There was one lockloss here, but a bit later it worked again: we got off of ALS and then went all the way up to low noise. ITMX mode 13 is still just as high as before; I'll continue to try Rahul's settings.
Noticing now that the COMM beat note was very low and slowly trended up over time. This perhaps allowed for longer locks, and eventually long enough to get off ALS. Looking back at the last few locks, it's definitely lower than it has been even over the last few days. I'm hesitant to touch PR3, but this doesn't seem to be working.
TITLE: 07/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
H1 was locked at the beginning of the shift, but had a lockloss just after 10am local. Relocking sadly took 3hrs, and it isn't clear why it took so long: after multiple attempts, an alignment, and more locking attempts, H1 finally made it past DRMI (locking was mostly automatic from there). But within an hour one of the 1kHz violins rung up (ITMX mode 13). I made some fairly large gain drops/increases which probably didn't help; currently Rahul is closely damping with much smaller gains. The DARM spectrum on nuc30 is glitchy/ugly, presumably due to the noisier violins.
Sadly, no Calibration run today.
Guardian Notes:
LOG:
TITLE: 07/06 Day Shift: 2330-0100 UTC (1630-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Corey G
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 14mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 3 hours but the 2nd order violins are quite rung up. They are damping, but slowly. A calibration measurement was not run today due to a short lock and now the high violins. If these damp down I'll run one.
Received a text message from Corey that ITMX mode 13 was rung up and the nominal settings were making it worse. Corey tried increasing/decreasing the gain settings without any luck. At this point I applied the following settings to try to damp the mode:
Gain = -1, phase -90 degrees - mode increases
Gain = +1, phase -60 degrees - mode increases
Gain = +0.1, no phase - mode increases
Gain = +0.1, phase +60 degrees - mode increases
The above settings did not work, but the one shown below seems to be working for now, as shown in the plot:
FM1+FM2+FM4+FM10 gain +0.1 (phase -90 degrees). After some time I will try ramping up the gain to see if I can bring IX13 down at a faster rate.
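As a sanity check on these trials, note that flipping the sign of the gain is equivalent to a 180 degree phase rotation, so the attempts above actually sampled several distinct points around the phase circle. A small illustrative sketch (plain math, not the actual damping filter code) that computes the net actuation phase of each (gain, phase) trial:

```python
import cmath
import math

def effective_phase_deg(gain, phase_deg):
    """Net actuation phase in degrees: a negative gain acts like
    an extra 180 degree phase rotation on the damping drive."""
    z = gain * cmath.exp(1j * math.radians(phase_deg))
    return round(math.degrees(cmath.phase(z)))

# (gain, phase) trials from the log; all but the last rang the mode up.
trials = [(-1, -90), (+1, -60), (+0.1, 0), (+0.1, +60), (+0.1, -90)]
for gain, phase in trials:
    print(f"gain {gain:+}, phase {phase:+} -> net {effective_phase_deg(gain, phase):+} deg")
```

For example, gain -1 at -90 degrees is a net +90, so together the trials landed roughly every 60-90 degrees around the circle, and the setting that finally worked sits at net -90 with a small gain.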
H1 had a 4.5hr lock after last night's Timing Error, but then there was a random lockloss this morning just after 10am local.
Sadly, locking has not been trivial, with many locklosses for green arms (a fully complete & automatic Initial Alignment fixed those). Then we oddly had hiccups with DRMI: about 3-4 locklosses around engaging ASC for DRMI. I texted Camilla at this point to give a heads up for help, but as we chatted on TeamSpeak, H1 made it past DRMI ASC! (So I'm leaving her on top of the call list.) Currently H1 is engaging ASC for the full IFO.
Fred will soon enter the control room with a high school class from New Zealand. Robert is also on-site.
H1 just had a lockloss. The green arms looked decent, but it kept hanging up with locklosses (6 so far) at LOCKING ALS.
Will attempt to tweak up SRM (maybe PRM/BS) at ACQUIRE DRMI 1F to help ease ASC for DRMI.
Attempt #2: automatically taken to check MICH fringes, then back to DRMI...
Attempt #3: Went to PRMI fine.
Attempt #4: DRMI ASC completed FINE this time! (I had phoned Camilla before this lock with fears I'd need help! Luckily, we both watched it get past DRMI.)
Because of possible heat issues we are running the CP1 fills earlier in the morning, 8am instead of 10am. For today's fill the outside temp at 8am is 30C (85F).
Sat Jul 06 08:17:25 2024 INFO: Fill completed in 17min 21secs
This run used the standard "summer" trip temps of -130C. Both TCs quickly got to -100C and TCA crept down to -100C before the fill completed. I have lowered the trip to -140C to give a margin of safety.
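The trip logic itself amounts to a threshold test on the thermocouple readings: the fill is declared complete once a TC cools to the trip temperature. A hedged sketch (hypothetical helper, not the actual CDS autofill code):

```python
def fill_should_stop(tc_temps_c, trip_temp_c=-130.0):
    """True once any thermocouple has cooled to the trip temperature,
    i.e. liquid nitrogen has reached the sensor and the fill valve
    should close. Default trip is the "summer" setting of -130C."""
    return any(temp <= trip_temp_c for temp in tc_temps_c)

print(fill_should_stop([-100.0, -95.0]))           # still warm: keep filling
print(fill_should_stop([-135.0, -100.0]))          # one TC past the trip: stop
print(fill_should_stop([-135.0, -100.0], -140.0))  # with the new -140C trip: keep filling
```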
TITLE: 07/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 10mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
H1's been locked/observing for 2hrs after the historic recovery efforts from the timing/Dolphin crash last night. All looks decent at the moment (winds died down about 1hr ago, but the X-arm stations (EX + MX) are both oddly still above 10mph).
It is Saturday, so that means it's Calibration Day in about 4hrs *knock on wood*.
The front entrance gate won't close all the way, so I have been exiting and entering without having to swipe in (I've been told it's due to the heat).
Tony handed off the IFO to me at the end of his shift, after the CDS team had recovered following a timing error and Dolphin glitch; see alog78892 for details on that. Generally, alignment was very bad, which is not surprising since multiple seismic platforms tripped as a result of the glitch.
Executive summary: H1 is back to observing as of 12:35 UTC after a lengthy alignment recovery process. See below for my general outline of notes I kept through the process.
FAMIS 26314
There looks to be a slight increase in noise on MR_FAN6 about 5 days ago.
TITLE: 07/06 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Lockloss 1:06 UTC
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1404263217
Lockloss Screenshots attached
Relocking:
After the Lockloss I had pretty small flashes on X arm.
I allowed Increase flashes to run and it didn't get me better than 0.3.
I then touched it up by hand and could not get it better than 3.1. Trending back, I think the goal is above 1.
I then tried to get better alignment by rolling back to an alignment from the beginning of the last lock.
I tried the Alignment after the Last initial alignment.
I'm going to try to move just PR2 now.
Reverted all movements back to right after we lost lock.
Sheila did a small Pico motor move in HAM3.
Pico motor move of the ALS/POP steering labeled HAM1 (actually in HAM3).
H1:ALS-C_TRX_A_LF_GAIN was increased temporarily to make the X arm WFS run.
And Sheila did another move of ALS/Pop steering once the WFS were running.
Note the H1:ALS-X_FIBR_A_DEMOD_RFMON beat note dropped down to -38 and the threshold was lowered to -43.
Once this was done we could do an Initial Alignment, BUT we did not have anything on AS Air.
Moved IM4 & PRM to get light on AS Air and Refl PRM cam.
Sheila used the IM4, PRM & PR2 OSEMs, matching their prior OSEM values (a manual version of the WFS relief), to move PR2, which gave us increased IR flashes.
Touched up PRM in Yaw to lock PRX
Finished Initial Alignment at 5:19 UTC
Locking was being difficult and would lose lock at FIND IR and LOCKING ALS.
I tried another Initial Alignment after it failed a number of times, since we had touched it up by hand.
Even after that IA it was still losing lock at FIND IR and LOCKING ALS. Paused in LOCKING ALS to allow the WFS to calm down.
Yeah, ALS WFS DOF 2 is pulling it away for some reason. But even when allowing the WFS to mellow out, it still ends in a lockloss.
Finally got past DRMI !!! YAY!!!
LOCKLOSS!? From MAX POWER!?
7:30 UTC Random HEPI HAM1 Watchdog trip.
IOP SUS56, 34, & 23 all had an IOP DACkill trip at the same time.
Seems like h1sush2a had a DAC error; calling in the CDS team.
The CDS team is resetting all the SUS front end models, because everything from HAM1 to HAM6 tripped in this timing glitch.
LOG:
Sheila remotely helped get me a good alignment and got me through a rough IA.
Dave B & Erik helped restart all the Front Ends.
Jim was also called due to a HEPI trip and he was next on the call list.
Everyone has been cycled to the bottom of the list.
See Dave's alog about the CDS timing error: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78892
Tony, Jim, Erik, Dave:
We had a timing error which caused DACKILLs on h1susb123, h1sush34, h1sush56 and DAC_FIFO errors on h1sush2a.
There was no obvious cause of the timing error which caused the Dolphin glitch, we noted that h1calcs was the only model with a DAQ CRC error (see attached).
After a diag-reset and crc-reset, only the SUS DACKILL and FIFO errors persisted.
We restarted all the models on h1susb123, h1sush2a, h1sush34, h1sush56 after bypassing the SWWD SEI systems for BSC1,2,3 and HAM2,3,4,5,6.
SUS models came back OK, we removed the SEI SWWD bypasses and handed the system over to Tony.
H1SUSH2A DACs went into error 300 ms before the CRC SUM increased on h1calcs.
The DAQ (I believe) reports CRC errors 120 ms after a dropped packet, leaving 180 ms unaccounted for.
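Spelled out, the 180 ms is just the difference between the observed DAC-error lead time and the assumed DAQ reporting latency:

```python
dac_error_lead_ms = 300   # h1sush2a DACs erred this long before the h1calcs CRC SUM rose
daq_crc_latency_ms = 120  # assumed delay from a dropped packet to the DAQ's CRC report
unaccounted_ms = dac_error_lead_ms - daq_crc_latency_ms
print(unaccounted_ms)  # 180
```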
FRS31532 created for this issue. It has been closed as resolved-by-restarting.
Model restart logs from this morning:
Sat06Jul2024
LOC TIME HOSTNAME MODEL/REBOOT
01:12:22 h1susb123 h1iopsusb123
01:12:33 h1sush2a h1iopsush2a
01:12:39 h1sush34 h1iopsush34
01:12:43 h1susb123 h1susitmy
01:12:47 h1sush2a h1susmc1
01:12:56 h1sush56 h1iopsush56
01:12:57 h1susb123 h1susbs
01:12:59 h1sush34 h1susmc2
01:13:01 h1sush2a h1susmc3
01:13:10 h1sush56 h1sussrm
01:13:11 h1susb123 h1susitmx
01:13:13 h1sush34 h1suspr2
01:13:15 h1sush2a h1susprm
01:13:24 h1sush56 h1sussr3
01:13:25 h1susb123 h1susitmpi
01:13:27 h1sush34 h1sussr2
01:13:29 h1sush2a h1suspr3
01:13:38 h1sush56 h1susifoout
01:13:52 h1sush56 h1sussqzout
Hey, so I had a quick squiz at whether your locklosses might be PI related. The regularity of your lock lengths is very suspicious.
You had at least one lockloss from a ~80296 PI on July 5th just after 6am UTC.
Since then you have been passing through it, exciting it, but surviving the PI-Fly-By.
On those plots you will notice a broader feature that moves slowly up in frequency and excites modes as it passes (kinda vertical-diagonalish on those plots); that smells like the interacting optical mode to me.
Last couple of days you have lost lock when that feature reaches ~80306, where there's a couple of modes which grow a little in amplitude as the broad feature approaches.
It is hard for me to say which mechanical mode had the super high gain that would make you lose lock on the scale of seconds [because these modes would move wildly in frequency at LHO, as I discuss in the link at the end of this line], but it's in that ballpark, and I quote myself: avoid at all costs.
Please investigate if this is PI and whether you need to tweak up your TCS.
EDIT: Longer lock on the 8th makes me think 80kHz PIs are probably fine. Worth double checking carefully in the 20-25 kHz region since your mode spacing looks pretty high. It might be subtle to see in the PI plot on the lock pages since that aliases a lot of noise all over the place.
Thanks for pointing these PI locklosses out, Vlad; Ryan also caught one of them in 78867, 78874.
The lockloss website for the July 5th 6:37 UTC lockloss 1404196646 tags OMC_DCPD, and elevated noise can be seen in the "PI Monitor" dropdown on the website too, with peaks maybe at 15 and 30kHz.
Edit 2: There are some elevated peaks at many frequencies before a lockloss I chose to look at, but nothing alarming enough. The largest peak change is at 3619 Hz, which could be a red herring or aliased from a much higher frequency; I don't know what that peak is.