J. Kissel

We have now heard from the search groups that they don't (yet, definitively) mind having more calibration lines -- as long as they're subtracted. Further, we've re-found the need to make continuous measurements of the low-frequency end of the sensing and response function. Finally, we want to use an excitation solution that is more robust than awg (which is what the recently restarted CAL_AWG_LINES guardian uses). As such, I propose the following changes:

(1) In the PCAL models' library part: expand the same solution that's already in use -- front-end, synchronized oscillators whose parameters are controllable via EPICS -- piped into the same "LINE_SUM" channels that already exist and are stored in the frames. Since we always seem to find more uses for PCAL lines, I've added 20 more beyond the 10 that are already there, for a total of 30 at each end station. There are no new fast channels / test points needed, since we only monitor the SUM of all the oscillators, and that sum is already stored in the frames.

(2) In the h1omc model's LSC block***, which contains the DARM control filters: add front-end, synchronized oscillators, 10 in total, and sum them into the error point of the loop, just downstream of DARM_ERR, just like the actual excitation point, DARM1_EXC. We'll only store the sum of the oscillators in the frames, as in the PCAL system, using the pre-existing channel already stored in the frames -- LSC_CAL_LINE_SUM -- so no new data storage burden there. However, because we're using these DARM calibration lines as measures of the open loop gain, loop suppression, and closed loop gain, we need to store the test point immediately *downstream* of the summation point, which, in H1's case, is DARM1_IN1. (We'd already stored DARM1_IN2 as the test point just downstream of awg-style excitations -- which is still needed for swept-sine and broadband excitations.)

(***The LSC block is intentionally *not* a library part, since L1's LSC control needs are different from H1's.)
So -- this proposal's impact on data storage is (20 PCALX + 20 PCALY + 10 DARM) oscillators * (5 EPICS records per OSC) = 250 new EPICS records at 16 Hz, and ONE new test point stored at 16 kHz. I attach screenshots of "before" vs. "after" changes in the comments below. I'll now use this documentation to write the ECR for the install, hopefully next Tuesday (7/31/2023).
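As a trivial cross-check of the bookkeeping above (a sketch only -- the oscillator counts and the 5-records-per-oscillator figure are from the entry; the dictionary structure and names are purely illustrative, not the real EPICS database entries):

```python
# Sanity check of the proposal's data-storage impact. Counts come from the
# entry above; the naming/structure here is illustrative only.
new_oscillators = {"PCALX": 20, "PCALY": 20, "DARM": 10}
records_per_osc = 5  # EPICS records per oscillator, per the entry

total_records = sum(new_oscillators.values()) * records_per_osc
print(total_records)  # 250 new EPICS records at 16 Hz (plus ONE 16 kHz test point)
```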
Following on from 71381, today during commissioning time I injected a 2.6 Hz sine wave into H1:OMC-SUS_M1_TEST_{L, P, Y, V, T, R}_EXC, gradually increasing the gain until the peak-to-peak amplitude reached around 40 urad (or 4 um) on the H1:OMC-SUS_M1_DAMP_{L, P, Y, V, T, R}_INMON channels. I didn't reach this amplitude in R or V, as any higher gain was causing DAC saturations.
A DARM plot from these times is attached. You can see a lot of coupling with P, Y, and L; nothing easily noticeable with R, V, or T.
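The ramp-until-target procedure above can be sketched as follows. This is purely illustrative, not site code: the sample rate, plant coupling, and DAC limit are made-up numbers, and the real injection used awg gain steps by hand.

```python
import numpy as np

# Illustrative sketch of the procedure: gradually raise an excitation gain on a
# 2.6 Hz sine until the monitored peak-to-peak response reaches ~40 urad,
# stopping early if the DAC would saturate. All numbers below are made up.
fs = 16384.0
t = np.arange(0, 2.0, 1 / fs)
sine = np.sin(2 * np.pi * 2.6 * t)

target_pkpk = 40.0      # urad, from the log entry
coupling = 2.0          # urad of INMON response per count of drive (made up)
dac_limit = 131072      # DAC counts (made up)

for gain in np.arange(1.0, 50.0, 1.0):
    drive = gain * sine
    if np.max(np.abs(drive)) >= dac_limit:       # would saturate: back off
        break
    response = coupling * drive
    if np.ptp(response) >= target_pkpk * 0.999:  # tolerance for sampled peaks
        break
print(f"stopped at gain {gain:g}, pk-pk {np.ptp(response):.1f} urad")
```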
TITLE: 07/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 11mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING
Optics Lab dust monitor (300 nm) is going off, so I will investigate.
TITLE: 07/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
SHIFT SUMMARY: Relocked twice today, both fully automatic. Longer period of commissioning today, but we are back in Observing now.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:15 | FAC | Cindi | MX | n | Tech clean | 16:47 |
| 15:52 | FAC | Christina | Opt lab, MX | n | Property search | 16:19 |
| 16:22 | VAC | Jordan, Travis | MY | n | Hepta and turbo pump work | 16:57 |
| 16:56 | FAC | Cindi | Mechanical Room | N | Tech clean | 18:26 |
| 17:12 | PCAL | Tony, 2 Detchar | PCAL lab | local | PCAL lab measurement | 17:30 |
| 18:08 | FAC | Tyler | CS AHU | n | Looking into AH fan noise | 18:08 |
| 18:33 | FAC | Karen | Vacuum Prep | n | Tech Clean | 20:03 |
| 18:37 | - | Fanis, Christopher | arms | n | Drive down arms and film | 20:37 |
| 19:50 | SEI | Jim | cr | n | HAM1 loop test | 20:20 |
| 21:07 | FAC | Chris | MY | n | Grab items | 21:27 |
| 21:23 | ISC | Gabriele | remote | n | SRCLFF meas. | 22:00 |
| 22:00 | DetChar | Sidd | CR | n | HW safety inj | 22:14 |
| 22:15 | ISC | Camilla | CR | n | OMC shaking test | 22:45 |
Jenne, Sidd
Performed the safety injections at 1374444261.56. The injection information has been uploaded to GraceDB playground. The details of the injected parameters can be found here: https://git.ligo.org/siddharth.soni/hardware_injections/-/tree/main/data
Sidd ran into the same error today as I did last time, so I added a comment to the hwinj git issue.
Marissa, Shania, Tabata, Gaby, Brennan, Andy

GravitySpy volunteers noticed a new type of glitch that happened on May 31 and June 1 (GSpy link). Many of these glitches appear as a high-Q line at 590 Hz, but they also appear as a stack of high-Q lines in the 100 to 600 Hz range, and sometimes only below 300 Hz (plot 1). The cause seems to be physical motion of the PSL periscope (plot 2) in short bursts. The coupling to DARM is through the periscope motion causing beam jitter (plot 3). The MCL and REFL_SERVO loops are also witnesses. Some of the glitches in the periscope are broadband, while others have a few well-defined frequencies (plot 4). There was an increase in the motion of the periscope during this period, seen in the BLRMS of the periscope accelerometer (plot 5); the motion was still elevated above the previous level when this period ended. This kind of jitter glitch has occurred again in short bursts on some later days, but not for sustained periods. Scans of selected times are at this link (needs LVK login). Note that this configuration does not include the jitter or periscope accelerometer channels.
We also see, in some of these glitches, a loud noise in the HAM1 HPI and LVEA floor accelerometer channels. See the attached example omega scans of one glitch from June 26 showing up in H1:HPI-HAM1_BLND_L4C_Y_IN1_DQ and H1:PEM-CS_ACC_LVEAFLOOR_HAM1_Z_DQ. You can also see other accelerometers in the full omega scans here, but those two showed up especially strongly in several of the glitch times we examined. These are obvious in the omega scans at the specific glitch times, but this motion can't be clearly seen in the day-long spectrograms on the summary pages.
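The BLRMS trend mentioned above (band-limited RMS of the periscope accelerometer) can be sketched offline with scipy. This is a minimal sketch under stated assumptions: the sample rate, band edges, stride, and the synthetic "accelerometer" data are all illustrative, not the real H1:PEM channel or monitor configuration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def blrms(x, fs, f_lo, f_hi, stride=1.0):
    """RMS of x restricted to the [f_lo, f_hi] band, every `stride` seconds."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)                       # zero-phase band-limit
    n = int(stride * fs)
    nseg = len(y) // n
    return np.sqrt(np.mean(y[: nseg * n].reshape(nseg, n) ** 2, axis=1))

fs = 2048.0
t = np.arange(0, 8.0, 1 / fs)
rng = np.random.default_rng(0)
# synthetic "accelerometer": noise plus a 590 Hz line that turns on halfway in
x = 0.1 * rng.standard_normal(t.size)
x[t >= 4.0] += np.sin(2 * np.pi * 590 * t[t >= 4.0])

trend = blrms(x, fs, 500, 700)
print(trend)  # the second half of the trend sits well above the first half
```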
Brennan Hughey, Amber Stuver
Following alog 71706, I'm recording here that Amber Stuver and I examined online coherent WaveBurst results when the extra calibration lines were turned on in late June (see e.g. alog 70724). We could not find any correlation in background rates with the extra lines being turned on for any version of the burst online searches, so turning these CAL_AWG_LINES back on appears harmless from the burst perspective. We'll look at the new data with the lines turned back on to make sure this is still the case.
A more detailed alog with analysis will follow.
I injected some 10-100 Hz noise in CHARD_Y. It looks like we have a safety factor of 30-100 for noise coupling above 10 Hz.
Also, from previous OLG measurements and models, it looked like we could simply increase the CHARD_Y gain by a factor of 3 and improve the residual motion.
It worked. I tested a gain 3x the nominal for ten minutes.
CHARD_Y noise ampl 20 butter("BandPass",6,10,100)
start PDT: 2023-07-26 14:20:47.869550 PDT
UTC: 2023-07-26 21:20:47.869550 UTC
GPS: 1374441665.869550
stop PDT: 2023-07-26 14:23:11.548956 PDT
UTC: 2023-07-26 21:23:11.548956 UTC
GPS: 1374441809.548956
CHARD_Y gain 180
from PDT: 2023-07-26 14:28:47.965492 PDT
UTC: 2023-07-26 21:28:47.965492 UTC
GPS: 1374442145.965492
to PDT: 2023-07-26 14:39:08.938636 PDT
UTC: 2023-07-26 21:39:08.938636 UTC
GPS: 1374442766.938636
CHARD_Y gain back to nominal 60
from PDT: 2023-07-26 14:40:25.797705 PDT
UTC: 2023-07-26 21:40:25.797705 UTC
GPS: 1374442843.797705
to PDT: 2023-07-26 14:50:34.172825 PDT
UTC: 2023-07-26 21:50:34.172825 UTC
GPS: 1374443452.172825
Derek, Iara, Zach

Following up on Vicky's request to determine whether the continuous turning on and off of the ESD drive in response to PI ring-ups is causing noise: we examined 28 June 2023 at 07:25 UTC and 19 July 2023 at 16:19 UTC, when the ESD drive was repeatedly turned on and off. We referenced strain and Omicron segments for the same times and found no significant increase in glitches or average strain, and no significant difference between the periods when the ESD drive is on vs. when it is off. We conclude there is no additional noise from the engagement of the ESD drive. Attached are images of the PI ring-ups and resulting ESD damping, and the corresponding times in strain and Omicron.
Andy, Marissa, Gaby, Brennan, Tabata, Shania
We investigated glitches that occurred in Calib strain on June 27 2023 (see red box in glitch-gram and attached omegascan) which are related to the OM2 TSAMS used to improve mode matching and increase BNS range (see aLog 70886). The glitches in Calib strain appear while the temperature is changing (see attached TSAMS plots) and occur around 300 Hz. These glitches are also witnessed by OMC-ASC_QPD_{A/B}_YAW (see attached omegascan). Once the temperature is stable the glitches are no longer present. These glitches also occur in Livingston (March 15 2023) when the TSAMS are used (see aLog 63983 for TSAMS plot and attached L1 glitch-gram).
Now that we're back to our lower noise situation with OM2 hot, we're redoing the test that Oli ran in 71304 to look at how much different our noise is with and without the calibration lines on. We expect that this will primarily show that the noise right around the line frequencies is reduced, as suggested by Gabriele in 71614. I don't think we expect any major changes other than right around the lines, but if we do, that would be very interesting.
Since the low frequency calibration lines are back on this week (alog 71706), it took more 'doing' than normal to turn off the calibration lines. ISC_LOCK's NLN_CAL_MEAS does not currently turn off the CAL_AWG_LINES guardian-controlled lines, so I selected LINES_OFF in that guardian. However, that didn't actually stop all of the lines, so I also did an awg clear 8 * to stop the lines going to the DARM1_EXC. TJ is looking into ensuring that NLN_CAL_MEAS takes care of the awg lines.
I'm a little surprised, but I'm not seeing much of a difference in the spectra when the lines are on vs. off. All 4 panels are the same 4 traces (noted above in the bullet points), but zoomed differently. These traces are all of the GDS-CALIB_STRAIN_NOLINES, so for times that the calibration lines are on, they've been subtracted out of the data here (note that the 'new' CAL_AWG_LINES are not yet subtracted, so those are still present in this channel). I expected this channel to show some noise around the calibration lines for times when the Cal lines were on. But, I'm not really seeing anything.
I'm not sure if the PCALY_DARM lines are coming back on at the same height each time or not. It's possible that they are, and it's just that the attached ndscope can only look at 16 Hz channels, and so the beating between lines / aliasing is causing it to look like they are not. But, just to flag that we should check to ensure that the lines are coming on with awg at the amplitude requested.
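The aliasing effect flagged above is easy to demonstrate: a constant-amplitude line at frequency f, watched through a 16 Hz sampled channel, is indistinguishable from a line at |f - k*16| Hz, so slow relative drifts between lines show up as beats in the monitor. A minimal sketch (the 33.1 Hz line frequency is illustrative, not a specific PCALY line):

```python
import numpy as np

# A steady calibration line sampled at 16 Hz aliases to |f_line - k*16| Hz.
fs_mon = 16.0
f_line = 33.1                                # illustrative, not a real line freq
t = np.arange(0, 60.0, 1 / fs_mon)           # one minute of 16 Hz samples
samples = np.sin(2 * np.pi * f_line * t)     # constant-amplitude line, sampled

k = round(f_line / fs_mon)
f_alias = abs(f_line - k * fs_mon)
print(f"line appears at {f_alias:.1f} Hz in the 16 Hz channel")

# the 16 Hz samples are exactly those of a sine at the alias frequency
alias_samples = np.sin(2 * np.pi * f_alias * t)
print(np.allclose(samples, alias_samples))  # True
```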
Lock loss 1374437133
Caused by commissioning activities.
Wed Jul 26 10:07:00 2023 INFO: Fill completed in 6min 56secs
Travis confirmed a good fill curbside
Plots of this week's (07/26/2023) SUS charge measurement attached.
You can see from the in-lock charge plots that V_eff changed the direction of its trend at certain times, roughly listed below. In the attached plot, the bias voltage applied to the DC ESD changed at certain times (68123, 67698). I wanted to see if these times agreed (shown in G1600699). More investigation is needed, as the current plots only go back as far as March, but this could explain the trend in ITMY V_eff. Not shown in the plot: the EX bias was off from 10 March to 05 April 2023 (68446).
| Optic | V_eff changed direction, plots above, estimated dates | Bias voltage applied to the DC ESD changed, plot attached |
|---|---|---|
| ITMX | Feb/March 2023 | No change |
| ITMY | April 2023 | 23rd April 2023 |
| ETMX | No change | 28th Feb 2023 |
| ETMY | end of April 2023 | 28th Feb 2023 |
Lockloss from NLN (and Observing) at 23:54 UTC
No known reason for the lockloss. Re-locking now.
Jenne, Camilla
In 71559, Ryan C and Daniel found that there was a DHARD_Y transient during the DHARD_WFS state that had been larger since June 29th, coinciding with the time the violins started to ring up 71404.
Yesterday TJ and Oli increased the DHARD_P and _Y TRAMPs from 5 s to 15 s in this state 71672, and in the two relocks since then we've only had to stay in OMC_WHITENING damping violins for <15 minutes 71676. The transient is much smaller after this TRAMP increase, but still there and a factor of 2 larger than pre-June 29th.
The step response of the only filter on at the time (FM6 "newcntrl") is very large when it's turned on, but over in 1 s, attached. Currently FM6 is turned on and then, 1 second later, the input and gain are turned on, plot attached. Jenne suggests that, since the step response starts when the input is turned on, we should turn the input on with FM6 and then, after 1 second, ramp the gain up. I've added this to ISC_LOCK and loaded. Hopefully this will remove the transient, and then we can reduce the TRAMP back to 5 s.
Daniel pointed out that FM4, 7, and 9 are turned off when FM6 is turned on. These filters all have 5 second ramp times, so we are probably getting the transient from these filters still ramping down as the gain is ramped up. I've accepted these as OFF in the safe.snap SDF, see attached, so they should be reverted on an SDF revert.
We saw a similar transient in DHARD_P but haven't looked at it yet so we should repeat this process with other filters.
Most of the individual filters in this module have a 2-5 s ramp time. For whatever reason, FM4/5/7/9 are all on just before DHARD_Y gets engaged. They then get turned off while FM6 is turned on, a short time before the input to the module is turned on. However, there is not enough time for these filters to ramp off, so they are still on during the initial gain ramping.
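The overlap described above can be sketched with a toy timeline. All numbers here are illustrative (unit filter gains, linear ramps), not the real filter responses: the old filters start a 5 s ramp off at t=0 while the module gain ramps up over the 15 s TRAMP starting at t=1 s, so residual old-filter content leaks into the output during the first few seconds.

```python
import numpy as np

# Toy model of the engagement sequence: FM4/7/9 ramp OFF over 5 s starting at
# t=0; the module gain ramps 0->1 over t=1..16 s (the 15 s TRAMP). The product
# of the two ramps is the old-filter content that reaches the output.
t = np.arange(0, 20.0, 0.01)

fm_old = np.clip(1 - t / 5.0, 0, 1)        # old filters ramping off over 5 s
gain = np.clip((t - 1.0) / 15.0, 0, 1)     # 15 s gain TRAMP starting at t=1 s

leak = gain * fm_old                        # old-filter leak-through vs. time
print(f"peak leak-through: {leak.max():.3f} at t = {t[np.argmax(leak)]:.2f} s")
```

With these toy ramps the leak-through peaks a couple of seconds in, which is consistent with the transient appearing during the initial gain ramping rather than at the switch time itself.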
This transient is not visible in the optics anymore, see attached. I'll put the tramp back to 5s but ISC_LOCK will need to be reloaded when we are out of observe. Opened and closed FRS 28647.
Brina, Genevieve, and Lance are going to look for transients from other filters and at other times; Elenna suggested checking DHARD_P.
PCAL library part, before vs. after. I took the opportunity to visually re-organize other parts for clarity as well, but there's no functional change other than the 20 new oscillators.

Also, as a minor simulink detail -- the individual OSC blocks *had* been a library-part-within-a-library-part bug since the beginning of time (someone copied and pasted the block, but forgot the annoying feature of library sub-blocks: if you copy a block within a model that's already a library, it stays linked to the original thing you copied. One needs to explicitly "disable link" and "break link" if you don't want that sub-block to reference the other). Instead, I *actually* made a new library part, since I knew I wanted to copy the same thing to the LSC model. Thus, the PCAL_MASTER.mdl library now relies on a new library part, /opt/rtcds/userapps/release/cal/common/models CAL_OSC_MASTER.mdl, which is manifested in the PCAL model as the oscillators "PCALOSC1," "PCALOSC2," ... "PCALOSC30".

2023-07-26_PCAL_MASTER_PCAL_OSCfocus_before.png shows the impacted parts of the PCAL_MASTER.mdl library part before the changes. 2023-07-26_PCAL_MASTER_PCAL_OSCfocus_after.png shows the impacted PCAL_MASTER after the changes. 2023-07-26_CAL_OSC_MASTER.png shows the new CAL_OSC_MASTER library block, and 2023-07-26_CAL_OSC_MASTER_inside.png shows the innards.

And here's the LSC block before and after the changes. Here, again, I took the opportunity to aesthetically clean up the garbled mess that was all the things that have been stapled on and around the DARM bank over the years. But the only functional changes are (a) the new oscillators, (b) the move of the CAL_LINES summation point from into DARM_CTRL to into DARM_ERR, (c) the DARMOSC_SUM test point and EPICS monitor, and (d) the storage of DARM1_IN1 in the frames.

The changes (b) and (d) are "interesting." For old iLIGO reasons that I don't remember, the "DARM" calibration lines were summed in downstream of the DARM bank, i.e. into DARM_CTRL. Even though the infrastructure is there (I personally installed it circa 2012-2013), we haven't used these DARM calibration lines *at all* in the advanced LIGO era. This is because we quickly realized that we need such a "CTRL" excitation at each stage of the QUAD's DARM actuation if we want to track the actuation strength of each stage of the QUAD separately, like we do now.

Now, we want to re-invoke the "DARM" calibration lines to constantly measure the DARM open loop gain, G, or more specifically the loop suppression, 1/(1+G), so we can divide it *out* of an adjacent PCAL line measure of the response function, C/(1+G). But, as always, we need two test points surrounding the excitation: the so-called "IN1" (just upstream of the excitation) and "IN2" (just downstream of the excitation) points. Of course, when measuring live, it doesn't matter where in the loop this trifecta of IN1 + EXC = IN2 channels sits; you'll get the same answer whether it's upstream or downstream of the DARM banks. BUT, while we already store DARM_ERR and DARM_OUT in the frames, which could both equally serve as the "IN1" channel,
- there was no convenient test point to store after DARM_OUT,
- I wanted the calibration lines to mimic the location where the awg input DARM1_EXC is injected in the loop -- i.e. in between DARM1_IN1 and DARM1_IN2, and
- the DARM1_IN1 test point (which comes by default with the DARM1 standard filter module) is already there and has a more natural name.
So, I moved the summation point.

So, when we analyze these calibration lines offline, we'll be taking the transfer functions between the following channels to get the following equivalent loop characterizations:
H1:LSC-DARM_ERR_DQ / H1:LSC-DARM1_IN1_DQ == "IN1/IN2" == G
H1:LSC-DARM1_IN1_DQ / H1:LSC-CAL_LINE_SUM_DQ == "IN2/EXC" = 1/(1 + G)
H1:LSC-DARM_ERR_DQ / H1:LSC-CAL_LINE_SUM_DQ == "IN1/EXC" = G/(1 + G)

To the ECR process!
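The three line ratios above follow from the standard single-loop algebra: with the excitation summed in so that IN2 = EXC + IN1 and the rest of the loop giving IN1 = -G * IN2, the quoted magnitudes fall out directly (the minus signs below are just the negative-feedback sign convention, which the entry's ratios quote up to sign). A quick numeric check at one frequency, with a made-up complex G:

```python
import numpy as np

# Single-loop identity check. G is an illustrative open loop gain at one line
# frequency, not a real DARM OLG value.
G = 12.0 * np.exp(1j * 2.4)

EXC = 1.0                 # the injected line (CAL_LINE_SUM)
IN2 = EXC / (1 + G)       # DARM1_IN1: just downstream of the summation
IN1 = -G * IN2            # DARM_ERR:  just upstream of the summation

print(np.isclose(IN1 / IN2, -G))             # "IN1/IN2" -> open loop gain
print(np.isclose(IN2 / EXC, 1 / (1 + G)))    # "IN2/EXC" -> loop suppression
print(np.isclose(IN1 / EXC, -G / (1 + G)))   # "IN1/EXC" -> closed loop gain
```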