It looks like it was for a picomotor (from SYS); screenshot attached.
TITLE: 08/28 Eve Shift: 2300-0500 UTC (1600-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: High winds made locking very difficult. They're dying down as the shift ends, and I just finished an initial alignment (IA), so I'm hopeful the IFO will be able to lock itself; we just got DRMI offloaded at 05:00 UTC.
Lock #1
Lock #2
Lock #3:
I was able to fit new SRCL and MICH feedforward filters using the iterative method. The measurements are in alog 79693.
The current MICH FF is performing well except below 30 Hz, so that's where I targeted my fit. SRCL needed some improvement everywhere.
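As background, here is a minimal sketch of one step of the iterative method as I understand it (all arrays below are placeholders, not the real measurements):

```python
import numpy as np

f = np.logspace(0, 3, 500)                        # frequency vector, 1 Hz - 1 kHz
ff_current = np.ones_like(f, dtype=complex)       # current feedforward filter TF
residual = 0.1 * np.ones_like(f, dtype=complex)   # measured residual SRCL->DARM coupling
actuation = np.ones_like(f, dtype=complex)        # feedforward actuation path TF

# One iteration: subtract the actuation-referred residual from the current
# filter; ff_target is then fit to an IIR filter (e.g. for loading in foton),
# so imperfect subtraction shrinks with each iteration
ff_target = ff_current - residual / actuation
```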
The new MICH filter is saved and loaded in FM8 and the new SRCL filter is saved and loaded in FM5. They are both labeled with the date '8-27-24'.
To compare how these filters perform, I would use the templates Oli saved in the above alog, first saving the live trace as a reference to compare against.
I have not yet done the PRCL fit because I want to see how SRCL performs, and maybe redo the PRCL injections before trying the fit.
Here is some more information about the fits.
First, my MICH fit caused a lockloss this morning because I forgot to check the phase on the filter. Sometimes the phase gets flipped during the fitting. Usually I compare the sign of the phase with the previous filter in foton and adjust accordingly, but I forgot to do it this time (rookie mistake). I have double-checked the new SRCL filter phase and fixed the MICH phase; a sketch of the check is below.
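This is the kind of sign check that would have caught it (the zpk values are placeholders, not the real filters):

```python
import numpy as np
from scipy.signal import freqs_zpk

# Hypothetical s-domain zpk for the old and newly fitted filters
z_old, p_old, k_old = [], [-2 * np.pi * 10], 1.0
z_new, p_new, k_new = [], [-2 * np.pi * 10], -1.0   # fit came back sign-flipped

w_ref = [2 * np.pi * 30.0]   # rad/s, in the band where both should agree
_, h_old = freqs_zpk(z_old, p_old, k_old, w_ref)
_, h_new = freqs_zpk(z_new, p_new, k_new, w_ref)

# If the two responses point in opposite directions, negate the new gain
if np.cos(np.angle(h_old[0]) - np.angle(h_new[0])) < 0:
    k_new = -k_new
    print("Phase was flipped; corrected the new filter's sign")
```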
Attached are two screenshots of the fittings. The MICH fitting compares the current filter, labeled "reference 1", with the red trace, which represents the new fit. The bottom right plot compares the fit residuals. You can see from this plot that the most improvement occurs at low frequency, with some small improvement at mid frequency. The SRCL fitting has many more traces, but compare the orange "current fit" trace with the trace labeled "best with less HF gain". It shows improvement almost everywhere. There is an increase in gain at high frequency, but it is less than an order of magnitude, so I think it's ok. The new fit has also reduced the high-Q feature around 300 Hz that was potentially injecting noise. The factor of 5-10 improvement between 10 and 50 Hz will help the most.
These have now been tested: SRCL passes, MICH fails.
The SRCL screenshot compares Oli's SRCL measurement from four days ago with the "current" filter, and the new "trial" filter that I applied today. There is clear improvement everywhere (except for a small worsening between 100-200 Hz), so I think we should use this filter.
The MICH screenshot compares Oli's MICH measurement from four days ago with the "current" filter, along with "no FF" and "trial". The trial did what I promised and reduced the coupling between 10 and 20 Hz, but clearly at the expense of the noise everywhere else. I think we should stay with "current".
I am changing the guardian to select this new SRCL filter (FM5) in "lownoise length control" and the gain back to 1. There will be an SDF observing diff for SRCLFF1 that can be accepted by the operator.
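For reference, a minimal sketch of what that guardian change amounts to (the ezca calls are illustrative, not copied from ISC_LOCK):

```python
# Inside the LOWNOISE_LENGTH_CONTROL state (sketch):
ezca.switch('LSC-SRCLFF1', 'FM5', 'ON')   # engage the new 8-27-24 SRCL filter
ezca['LSC-SRCLFF1_GAIN'] = 1              # set the feedforward gain back to 1
```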
Some thoughts about MICH FF:
In my opinion, the hardest region to fit is between 10 and 20 Hz because of the presence of some high-Q features, such as the one around 17 Hz. It would be worth considering what is causing those features, perhaps some bounce/roll notches. Do we still need those notches, or could they be briefly turned off during a feedforward injection? That might make the low frequency portion easier to fit and therefore make it easier to achieve good subtraction between 10 and 30 Hz.
Looks like these are maybe BS M2 LOCK L FM10. I suspect we can try turning them off. Foton says they have a 2 second ramp, so it should be okay to turn them off just before the measurement (I'm not sure if we need them all the time, but maybe we do); see the sketch below.
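If we try it, the wrapper could be as simple as this (channel name assumed):

```python
import time

ezca.switch('SUS-BS_M2_LOCK_L', 'FM10', 'OFF')   # disable the bounce/roll notches
time.sleep(3)   # wait out the 2 s ramp before starting the injection
# ... run the feedforward injection/measurement here ...
ezca.switch('SUS-BS_M2_LOCK_L', 'FM10', 'ON')    # restore the notches
time.sleep(3)   # let them ramp back in before moving on
```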
Today Sheila took another injection of PRCL for me so I could fit a new feedforward. The fit looked promising; however, once it was engaged it apparently caused oscillations everywhere, and I turned it off fast enough to avoid a lockloss (thanks Corey and Ibrahim!). I checked the phase and gains beforehand, and there were no high-Q features, etc., so I don't know what the issue could be.
In the last lock, PI31 at 10.428 kHz rang up, as shown in the first attachment. Although it was damped by the SUS_PI guardian, PI31 RMSMON reached close to 2000. The frequency of this PI is 10427.67 Hz, as shown in the second attachment. This is within the input bandpass filter (425-431 Hz after downconversion by 10 kHz). I increased the damping gain for PI31 from 1000 to 10000 in the SUS_PI guardian.
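As a quick check of the numbers:

```python
f_pi = 10427.67          # Hz, measured PI31 frequency
f_if = f_pi - 10000.0    # after the 10 kHz downconversion: 427.67 Hz
assert 425.0 <= f_if <= 431.0   # inside the 425-431 Hz bandpass, as expected
```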
EDIT: This gain increase was too much. I reverted the damping gain to 1000 in the SUS_PI guardian.
00:02 UTC lockloss; we were fighting PI31 at the time, although we were winning.
[RyanS, TJ, Jenne]
Some (but not all) of our recent locks have had significant jittering and shaking visible on the OMC Trans camera. Since it's happening again this lock, I took a look at some of the channels that look at the trans camera readbacks.
The y-cursors are set to the max/min values of the m-trend from right now, when the camera is shaking. I then scrolled to the recent lock (the one that ended with maintenance) to show that the variance in all of these channels was much lower in that past lock.
I'm still not sure what is causing this, but at least the size of the fuzz in any of these camera channels can help us see when it is / isn't happening. My prime suspicion is a loop oscillation in the OMC length loop (that is based entirely on non-scientific gut feelings though).
I don't see anything suspicious in the OMC-LSC_SERVO_OUT_DQ channel (the spectrum looks the same as last lock), so likely my unscientific gut feeling is wrong.
The camera shaking is *not* visible in the AS_AIR camera though.
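As a rough way to automate this kind of comparison, something like the following would quantify the fuzz (the channel name and GPS spans are placeholders):

```python
from gwpy.timeseries import TimeSeries

chan = 'H1:CAM-OMC_TRANS_SUM'   # hypothetical trans-camera readback channel
shaky = TimeSeries.get(chan, 1408900000, 1408900600)   # this (shaky) lock
quiet = TimeSeries.get(chan, 1408800000, 1408800600)   # previous quiet lock
print(f"shaky std: {shaky.value.std():.3g}, quiet std: {quiet.value.std():.3g}")
```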
Back to Observing after fighting wind and then stepping through the CARM reduction slowly. There was a SQZ SDF diff that was changed when we lost lock this morning. I accepted it for now.
TITLE: 08/27 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Maintenance day today. We started relocking around 12:00 local time, but wind made recovery a bit more difficult. We were not making it past the CARM reduction, so Sheila and Jenne stepped us through it slowly. This worked, and we just got back to Observing.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
15:21 | FAC | Nellie | FCES | n | Tech clean | 16:39 |
15:21 | FAC | Karen | EY | n | Tech clean | 16:15 |
15:21 | FAC | Kim | EX | n | Tech clean | 16:25 |
15:22 | GRD | TJ | CR | n | h1guardian1 reboot | 15:42 |
15:32 | PSL | Jason, Ryan S | PSL enc | Yes | PSL incursion, FSS | 17:59 |
15:41 | VAC | Gerardo, Jordan | LVEA | Yes | HAM6 RGA scans | 16:45 |
15:55 | FAC | Tyler | CS, MY | n | Forklift equipment around | 17:50 |
16:01 | CDS | Jonathan | MSR | n | Pulling two unused switches | 16:30 |
16:32 | PEM | Robert | LVEA | Yes | Install accel. near HAM6 | 18:40 |
16:44 | PCAL | Tony | PCAL lab | local | Parts for LLO | 16:50 |
16:59 | CDS | Fil | LVEA - HAM6 | Yes | Test network cable | 17:15 |
17:00 | SEI | Jim | FCES | n | HAM8 testing | 17:18 |
17:09 | FAC | Kim, Karen | LVEA | Yes | Tech clean | 18:22 |
17:21 | PEM | Sam, Genevieve | FCES | n | PEM sensors | 18:51 |
18:15 | VAC | Jordan | LVEA | Yes | Turning off RGA by HAM6 | 18:22 |
18:52 | PSL | Ryan S | CR | n | Rotation stage calibration | 19:05 |
22:32 | VAC | Gerardo | LVEA | Yes | Photography | 22:52 |
TITLE: 08/27 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 23mph Gusts, 17mph 5min avg
Primary useism: 0.13 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Ibrahim A, Jeff K
On 8/22, Jeff K and I ran some measurements of our assembled BBSS and determined the Bounce and Roll modes to be:
TJ, Sheila
TJ is having difficulty locking in the high winds this afternoon. We've been running without DRMI ASC other than the BS, so operators sometimes have to manually adjust PRM and SRM. This afternoon we added back in the INP1, PR1, and PRC2 loops but left SRC ASC off. This seems to be helping with POP18, although the operators may still need to adjust SRM by hand.
Maintenance activities completed around 30 min ago. I ran an initial alignment at 1100 PDT, then started locking at 1206 PDT. The wind is really starting to pick up, and we just lost lock at the Resonance state. We will keep trying, but this might be a tough relock.
Since it hadn't been done since before the vent, and the output of the PMC has continued to slowly drop, I calibrated the PSL rotation stage before locking today, following the steps in alog79596.
The measurement file, new calibration fit, and screenshot of accepting SDFs are all attached.
 | Power in (W) | D | B (Minimum power angle) | C (Minimum power) |
---|---|---|---|---|
Old Values | 97.882 | 1.990 | -24.818 | 0.000 |
New Values | 94.642 | 1.990 | -24.794 | 0.000 |
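For context, a hedged sketch of a power-vs-angle model consistent with the parameter labels above (the exact functional form used by the calibration script is my assumption, not taken from alog79596):

```python
import numpy as np

P_in, D, B, C = 94.642, 1.990, -24.794, 0.000   # new values from the table

def power_out(theta_deg):
    # Minimum power C at theta = B; D ~ 2 reflects the half-wave-plate
    # angle doubling (rotating the plate by x rotates polarization by 2x)
    return (P_in - C) * np.sin(np.radians(D * (theta_deg - B)))**2 + C

print(power_out(B))           # ~0.000 W at the minimum power angle
print(power_out(B + 90 / D))  # ~94.642 W at the maximum
```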
Now that CIT's ldas-pcdev2 machine is back up following last week's data center troubles, my CW hardware injection monitoring shows no apparent signals since the return to observing mode on Saturday. Is that by design, or did restarting the injections fall through a crack? Thanks.
I just checked, and the CW hardware injection signal is running with no problems. The h1hwinj1 server had crashed on 25jul2024; I rebooted it on 29jul2024 and verified it started ok. The attachment shows a month trend of the signal; the restart can be seen on the left.
My bad. There was indeed another disruption to the monitoring that was affected by the pcdev2 shutdown, which I hadn't noticed. Thanks for the quick follow up and sorry for the noise.
WP12061 Upgrade RCG h1susex, remove LIGO-DAC delays
EJ, Erik, Jonathan, Dave, Daniel, Marc:
h1susex's RCG was upgraded to a custom 5.3.0 build, specifically for the IOP, to remove a delay in the new LIGO 28AO32 DAC. We compiled all of the user models with this RCG as well.
Our first restart was a simple model restart, but we got Dolphin errors. So for our second restart we fenced h1susex from the Dolphin fabric and power cycled it via IPMI. After this, the models started with no issues.
No DAQ restart was required.
Code path is /opt/rtcds/rtscore/advLigoRTS-5.3.0_dacfix
WP12063 Alert System
Dave:
The locklossalert system code was modified to permit an alert window that spans midnight, needed for the new owl shift hours.
Note that the business-day/every-day filter is applied after the minute-in-day filter, so a window starting Friday evening and extending into Saturday morning will be cut off at midnight if business days only is selected. The wrap-around logic is sketched below.
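For clarity, the window check is essentially the following (names are illustrative, not from the locklossalert code):

```python
def in_alert_window(minute_of_day, start, end):
    """True if minute_of_day lies in [start, end), where the window may
    wrap past midnight (e.g. start=22*60, end=6*60 for an owl shift)."""
    if start <= end:
        return start <= minute_of_day < end
    return minute_of_day >= start or minute_of_day < end
```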
One other change:
For everyone subscribed to Guardian alerts, if for any reason the Guardian EPICS record cannot be read (node down, h1guardian1 down) the alert will now default to SEND.
Guardian Reboot
TJ:
h1guardian1 was rebooted at 08:21 PDT. All nodes except TEST came back automatically. TJ worked on TEST and got it going.
MSR Rack Cleanup
Jonathan:
Jonathan removed two test switches, which are no longer needed, from the MSR racks.
I updated the o4 script, which reports where we are in O4 and reminds us of important dates (break start/end, vent start/end).
Tue27Aug2024
LOC TIME HOSTNAME MODEL/REBOOT
08:50:45 h1susex h1iopsusex <<< 1st try, restart model
08:50:59 h1susex h1susetmx
08:51:13 h1susex h1sustmsx
08:51:27 h1susex h1susetmxpi
08:57:11 h1susex h1iopsusex <<< 2nd try, reboot
08:57:24 h1susex h1susetmx
08:57:37 h1susex h1sustmsx
08:57:50 h1susex h1susetmxpi
Before the vent, we had lowered the DARM offset used at the end of the DARM_OFFSET state for locking the OMC, since we had seen the PRG fall off and cause a lockloss with the nominal offset of 9e-05 (see alog79082 for details). When locking this afternoon, I raised the offset from 6e-05 back to 9e-05 after running through DARM_OFFSET; seeing that the PRG didn't plummet and cause a lockloss, we continued locking. The OMC locked on the first try, something that hasn't been the case recently, so having more carrier light there seems to help. I compared OMC scans from this lock against the last lock, which used the lower DARM offset; attachment 1 shows the scan with the higher offset and attachment 2 the scan with the lower offset. According to the OMC-DCPD_SUM channel, we get ~10 mW more carrier light on the OMC when locking with this higher DARM offset.
I've put this DARM offset of 9e-05 back into ISC_LOCK and loaded it. We can watch over the next couple of lock acquisitions to see if the problem with the PRG dropping off resurfaces.
Tagging OpsInfo
If you see the power recycling gain start falling soon after DARM_OFFSET, you can turn off the LSC-DARM1 filter module offset, lower it, and turn it back on, repeating until the PRG stays steady, then proceed with OMC locking; a sketch is below.
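In ezca terms, the workaround looks roughly like this (illustrative, not the guardian code; pick an offset value that keeps the PRG steady):

```python
ezca.switch('LSC-DARM1', 'OFFSET', 'OFF')   # disengage the offset
ezca['LSC-DARM1_OFFSET'] = 6e-05            # step back toward the pre-vent value
ezca.switch('LSC-DARM1', 'OFFSET', 'ON')    # re-engage and watch the PRG
```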
Now that we've locked successfully several times since yesterday with this higher DARM offset, I've also rearranged the state order in ISC_LOCK so that the DARM offset is applied before any ASC, letting the OMC work on locking while the ASC converges (this is how the order was before the DARM offset issues started).
See attached for the new state progression around this point in locking.