IFO is in NLN and OBSERVING as of 11:15 UTC.
IMC/EX Rail/Non-Lockloss Lockloss Investigation:
Based on the plots, and in this order, here is what I believe happened.
I believe EX saturated, prompting the IMC to fault while guardian was in DRMI_LOCKED_CHECK_ASC (23:17 PT). This put guardian into a strange fault state but kept it in DRMI without unlocking (still a mystery). Ryan C noticed this (23:32 PT) and requested IMC_LOCK to DOWN (23:38 PT), tripping MC2's M2 and M3 stages (M3 first, by a few ms). This prompted Guardian to call me (23:39 PT). What is strange is that even after Ryan C successfully put the IMC in DOWN, guardian did not register that the IMC was down and STAYED in DRMI_LOCKED_CHECK_ASC until Ryan C requested it go to INIT. Only after that did the EX saturations stop. While the EX L3 stage is what saturated before the IMC, I don't know what caused EX to saturate like this. The wind and microseism were not too bad, so this could well be one of the other known glitches happening before all of this and causing EX to rail.
Here’s the timeline (Times in PT)
23:17: EX saturates. 300ms later, the IMC faults as a result.
23:32: Ryan C notices this odd behavior in the IMC lock and MC2 and texts me. He noticed that the IMC lost lock and faulted, but that this didn't prompt an IFO lockloss. Guardian was still at DRMI_LOCKED_CHECK_ASC, not registering that the IMC was unlocked and that EX was still railing.
23:38: In response, Ryan C put IMC_LOCK to DOWN, which tripped MC2's M3 and M2 stages. This called me. I was having technical issues logging in, so I headed to the site (made it on-site at 00:40).
00:00: Ryan C successfully downed the IFO by requesting INIT. Only then did EX stop saturating.
00:40: I start investigating this odd IMC fault. I also untrip MC2 and start an initial alignment (fully auto). We lose lock at LOWNOISE_ESD_ETMX, seemingly due to a large suspension instability, probably from the prior railing since the current wind and microseism aren't absurdly high (EY, IX, HAM6 and EX saturate). The LL tool is showing an ADS excursion tag.
03:15: NLN and OBSERVING achieved. We got to OMC_Whitening at 02:36 but violins were understandably quite high after this weird issue.
Evidence in plots explained:
Img 1: M3 trips 440ms before M2. This doesn't say much on its own, but it looked suspicious before I found out that EX saturated first. It was the first thing I investigated since it was (I think) the reason for the call.
Img 2: M3 and M2 stages of MC2 showing the IMC fault beginning at 23:17 as a result of the EX saturations (later img), continuing all the way until Ryan C downs the IMC at 23:38, which is also when the WD tripped and when I was called.
Img 3: The IMC lock faulting but NOT causing ISC_LOCK to register a lockloss. This plot shows that guardian did not put the IFO in DOWN or cause it to lose lock: the IFO is in DRMI_LOCKED_CHECK_ASC while the IMC is faulting. Even when Ryan C downed the IMC (which is where the crosshair is), ISC_LOCK did not go to DOWN. The end of the time axis is when Ryan C put the IFO in INIT, finally causing a lockloss and ending the railing in EX (00:00).
Img 4: EX railing for 42 minutes straight, from 23:17 to 00:00.
Img 5: EX beginning to rail 300ms before the IMC faults.
Img 6: EX L2 and L3 OSEMs at the time of the saturation. Interestingly, L2 doesn’t saturate but before the saturation, there is erratic behavior. Once this noisy signal stops, L3 saturates.
Img 7: EX L1 and M0 OSEMs at the time of the saturation, zoomed in. There appears to be a short, loud, noisy signal in the M0 stage (possibly a glitch, or due to microseism?) which may have kicked off this whole thing.
Img 8: EX L1 and M0 OSEMs at the whole duration of saturation. We can see the moves that L1 took throughout the 42 minutes of railing, and the two kicks when the railing started and stopped.
Img 9 and 10: An OSEM from each stage of ETMX, including one extra from M0 (since signals were differentially noisy). Img 9 is zoomed in to try to capture what started railing first. Img 10 shows the whole picture with L3’s railing. I don’t know what to make of this.
Further investigation:
See what else may have glitched or lost lock first. Ryan C's induced lockloss, which stopped the constant EX railing, doesn't seem to show up in the LL tool, so this would have to be done in order of likely suspects. I've never seen this behavior before; I'd be curious to hear what this was if anyone else has seen it.
Other:
Ryan’s post EVE update: alog 81061
Ryan’s EVE Summary: alog 81057
A couple of strange things happened before this series of events, which Ibrahim has written out:
TJ suggested checking on the ISC_DRMI node: it seemed fine, was in DRMI_3F_LOCKED from 06:54 until DRMI unlocked at 07:17 UTC, then it went to DOWN.
2024-11-05_06:54:59.650968Z ISC_DRMI [DRMI_3F_LOCKED.run] timer['t_DRMI_3f'] done
2024-11-05_07:17:40.442614Z ISC_DRMI JUMP target: DOWN
2024-11-05_07:17:40.442614Z ISC_DRMI [DRMI_3F_LOCKED.exit]
2024-11-05_07:17:40.442614Z ISC_DRMI STALLED
2024-11-05_07:17:40.521984Z ISC_DRMI JUMP: DRMI_3F_LOCKED->DOWN
2024-11-05_07:17:40.521984Z ISC_DRMI calculating path: DOWN->DRMI_3F_LOCKED
2024-11-05_07:17:40.521984Z ISC_DRMI new target: PREP_DRMI
2024-11-05_07:17:40.521984Z ISC_DRMI executing state: DOWN (10)
2024-11-05_07:17:40.524286Z ISC_DRMI [DOWN.enter]
TJ also asked what H1:GRD-ISC_LOCK_EXECTIME was; this kept getting larger and larger (e.g. after 60s it was at 60), as if ISC_LOCK had hung, see attached (bottom left plot). It started getting larger at 6:54:50 UTC, which was the same time as the last message from ISC_LOCK, and reached a maximum of 3908 seconds (~65 minutes) before Ryan reset it using INIT. Another simpler plot here.
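As an aside, a trivial sketch of the kind of check that could catch this condition (the threshold is an arbitrary assumption; this is not an existing monitor):

import cdsutils as cdu

EXEC_LIMIT = 120   # seconds; assumed threshold for "this state has probably hung"

# H1:GRD-ISC_LOCK_EXECTIME counts how long the current state's code has been executing
exectime = cdu.avg(1, 'H1:GRD-ISC_LOCK_EXECTIME')
if exectime > EXEC_LIMIT:
    print('ISC_LOCK state code has been running for %.0f s -- possibly hung' % exectime)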
TJ worked out that this is due to a call to cdu.avg without a timeout.
The ISC_LOCK DRMI_LOCKED_CHECK_ASC convergence checker must have returned True, so it went ahead to the next lines, which contained a call to NDS via cdu.avg().
We've had previous issues with similar calls getting hung. TJ has already written a fix to avoid this, see 71078.
'from timeout_utils import call_with_timeout' was already imported, as it is used for the PRMI checker. I edited the calls to cdu.avg in ISC_LOCK to use the timeout wrapper:
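Something along these lines (a sketch only: the channel, averaging time, and the call_with_timeout argument convention are assumptions, not a copy of the actual ISC_LOCK diff):

# Illustrative sketch only -- not the actual ISC_LOCK edit.
import cdsutils as cdu
from timeout_utils import call_with_timeout

# Before: a hung NDS connection leaves the state's run() stuck here indefinitely,
# so EXECTIME grows and no lockloss is ever declared.
# value = cdu.avg(5, 'H1:LSC-POP_A_LF_OUT16')

# After: the wrapper bounds how long the blocking cdu.avg call is allowed to take.
value = call_with_timeout(cdu.avg, 5, 'H1:LSC-POP_A_LF_OUT16')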
I was about to go to bed when I glanced at the control room screenshots and saw that something looked wrong: verbal was full of MC2 and EX saturations, and ETMX drivealign was constantly saturating along with the suspension. I saw that ISC_LOCK was also in a weird state, as it thought it was in DRMI_LOCKED_PREP_ASC despite having lost the IMC and both arms.
I texted Ibrahim that something didn't look right, logged in soon after, and requested IMC_LOCK to DOWN to stop it, which may have been a bad move as it tripped the M2 and M3 watchdogs on MC2. After some texts with Ibrahim I tried to bring ISC_LOCK to DOWN; I had to INIT it to get it there. Ibrahim's taking over now.
TITLE: 11/05 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We've been down due to wind for most of the day; winds are finally below 20mph on the 3-minute average as of the end of the shift.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:20 | PEM | Robert | LVEA | - | Moving magnetometer | 01:20 |
02:21 | PEM | Robert | CER | - | Move Magnetometer | 02:26 |
PSL/IMC investigations happened today while we were down for the wind (alog 81056); the secondary microseism is still decreasing, however slowly. It's been a windy shift. I had a roughly 2-hour window of >30mph winds where I was able to hold ALS, but I ran into issues at ENGAGE_ASC, where we would have an IMC lockloss 13 seconds into the state every time.
See zoomed out trends from the 2 hour ISS_ON vs. 2 hour ISS_OFF test attached.
edit: fixed labels that were previously backwards (thanks Camilla). With ISS off, we had not seen glitches in yesterday's 2-hour test.
TITLE: 11/05 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 32mph Gusts, 18mph 3min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.51 μm/s
QUICK SUMMARY:
TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Environment - Wind
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: H1 has been down all shift due to high microseism for the first half of the day and high winds for the second. Meanwhile, we've been able to continue glitch investigations with the PSL/IMC from alog80990. We sat with the IMC locked and ISS off for 2 hours starting at 21:37 and saw no fast glitches (trend in first attachment, white background). After turning the ISS back on, things were calm for a while before the IMC lost lock at 23:54 (second attachment, black background). There were three more IMC locklosses within 10 minutes of this, but things have been steady again since.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | Ongoing |
16:25 | FAC | Karen | MY | n | Technical cleaning | 17:49 |
18:18 | FAC | Kim | MX | n | Technical cleaning | 19:44 |
18:18 | CDS | Fil | CER | - | Power cycling PSL chassis | 19:14 |
18:58 | PEM | Robert, Lance | LVEA | - | Check on Magnetometer in PSL racks | 19:03 |
19:44 | CDS | Erik, Fil, Dave | CER | - | Swapping ADC card | 20:41 |
20:41 | PEM | Robert, Lance | LVEA | - | Installing magnetometer in PSL racks | 20:58 |
21:18 | TCS | Camilla | MER | n | Fixing water container | 21:45 |
21:21 | CDS | Fil | MY | n | Picking up equipment | 22:21 |
This alog is a continuation of previous efforts to correctly calibrate and compare the LSC couplings between Hanford and Livingston. Getting these calibrations right has been a real pain, and there's a chance that there could still be an error in these results.
These couplings are measured by taking the transfer function between DARM and the LSC CAL CS CTRL signal. All couplings were measured without feedforward. The Hanford measurements were taken during the March 2024 commissioning break. The Livingston MICH measurement is from Aug 29 2023 and the PRCL measurement from June 23 2023.
As an additional note, during the Hanford MICH measurement, both MICH and SRCL feedforward were off. However, for the Livingston MICH measurement, SRCL feedforward was on. For the PRCL measurements at both sites, both MICH and SRCL feedforward were engaged.
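Purely for illustration, here is a rough offline sketch of how such a coupling transfer function could be estimated (the GPS times and the CTRL channel name below are placeholders, and the actual measurements described here were taken with DTT injection templates, not with this Welch estimate):

from gwpy.timeseries import TimeSeries
from scipy.signal import csd, welch

t0, dur = 1400000000, 600                    # placeholder GPS start time and duration
darm = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', t0, t0 + dur)
ctrl = TimeSeries.get('H1:CAL-CS_MICH_CTRL', t0, t0 + dur)   # placeholder name for the LSC CAL CS CTRL channel

fs = 2048                                    # resample both to a common rate
x = ctrl.resample(fs).value
y = darm.resample(fs).value

f, Pxy = csd(x, y, fs=fs, nperseg=16 * fs)   # cross-spectrum, CTRL -> DARM
_, Pxx = welch(x, fs=fs, nperseg=16 * fs)    # CTRL power spectrum
tf = Pxy / Pxx                               # transfer function estimate (still uncalibrated)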
The first plot attached shows a calibrated comparison between the MICH, PRCL and SRCL couplings at LHO.
The second plot shows a calibrated comparison between the Hanford and Livingston MICH couplings. I also included a line indicating 1/280, an estimated coupling level for MICH based on an arm cavity finesse of 440. Both sites have flat coupling between about 20-60 Hz. There is a shallow rise in the coupling above 60 Hz; I am not sure if that's real or some artifact of incorrect calibration. The Hanford coupling below 20 Hz has a steeper response, which looks like some cross coupling, perhaps from SRCL (it looks about 1/f^2 to me). Maybe this is present because SRCL feedforward was off.
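As a sanity check on that 1/280 line (a back-of-the-envelope sketch, assuming the estimate simply comes from the arm cavity signal gain): the arm cavities amplify differential arm length signals by roughly $2\mathcal{F}/\pi$ relative to a simple Michelson, so MICH displacement should show up in DARM suppressed by about

\[
\frac{\pi}{2\mathcal{F}} = \frac{\pi}{2 \times 440} \approx 3.6\times10^{-3} \approx \frac{1}{280},
\]

consistent with the plotted line.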
The third plot shows a calibrated comparison between the Hanford and Livingston PRCL couplings. I have no sense of what this coupling should look like. If the calibration here is correct, this indicates that the PRCL coupling at Hanford is about an order of magnitude higher than Livingston. Whatever coupling is present has a different response between both sites, so I don't really know what to make of this.
The Hanford measurements used H1:CAL-DELTAL_EXTERNAL_DQ and the darm calibration from March 2024 (/ligo/groups/cal/H1/reports/20240311T214031Z/deltal_external_calib_dtt.txt)
The Livingston measurement used L1:OAF-CAL_DARM_DQ and a darm calibration that Dana Jones used in her previous work (74787, saved in /ligo/home/dana.jones/Documents/cal_MICH_to_DARM/L1_DARM_calibration_to_meters.txt)
LHO MICH calibration: I updated the CAL CS filters to correctly match the current drive filters. However, I made the measurement on March 11, before catching some errors in the filters: I incorrectly applied a 200:1 filter, and multiplied by sqrt(1/2) when I should have multiplied by sqrt(2) (76261). Therefore, my calibration includes a 1:200 filter and a factor of 2 to appropriately compensate for these mistakes. Additionally, my calibration includes a 1e-6 gain to convert from um to m, and an inverted whitening filter [100, 100:1, 1]. This is all saved in a DTT template: /ligo/home/elenna.capote/LSC_calibration/MICH_DARM_cal.xml
LLO MICH calibration: I started with Dana Jones' template (74787), and copied it over into my directory: /ligo/home/elenna.capote/LSC_calibration/LLO_MICH.xml. I inverted the whitening filter using [100,100,100,100,100:1,1,1,1,1] and applied a gain of 1e-6 to convert um to m.
LHO PRCL calibration: I inverted the whitening using [100,100:1,1] and converted from um to m with 1e-6.
LLO PRCL calibration: I inverted the whitening using [10,10,10:1,1,1] and converted from um to m with 1e-6.
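For reference, a small numerical sketch of what these inverted-whitening responses look like (an illustration only, reading the [zeros : poles] shorthand above as stages with a zero at 100 Hz and a pole at 1 Hz; the actual calibrations were applied as foton filters inside the DTT templates, and LHO MICH's extra 1:200 and factor-of-2 corrections are not included here):

import numpy as np

def anti_whitening(f, n_stages, f_zero=100.0, f_pole=1.0):
    """One stage = zero at f_zero, pole at f_pole, unity DC gain
    (the inverse of a standard whitening stage)."""
    s = 1j * np.asarray(f)
    return ((1.0 + s / f_zero) / (1.0 + s / f_pole)) ** n_stages

f = np.logspace(0, 3, 500)                            # 1 Hz to 1 kHz
lho_mich = 1e-6 * anti_whitening(f, 2)                # two 100:1 stages, plus um -> m
llo_mich = 1e-6 * anti_whitening(f, 5)                # five 100:1 stages, plus um -> m
llo_prcl = 1e-6 * anti_whitening(f, 3, f_zero=10.0)   # three 10:1 stages, plus um -> m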
I exported the calibrated traces to plot myself. Plotting code and plots saved in /ligo/home/elenna.capote/LSC_calibration
Evan Hall has a nice plot of PRCL coupling from O1 in his thesis, Figure 2.16 on page 37. I have attached a screen grab of his plot. It appears as if the PRCL coupling now in O4 is lower than in that measurement (which I am assuming is from O1): eyeballing about 4e-4 m/m at 20 Hz now in O4, compared to about 2e-3 m/m at 20 Hz in Evan's plot.
Mon Nov 04 10:10:29 2024 INFO: Fill completed in 10min 26secs
WP12186
Richard, Fil, Erik, Dave:
We performed a complete power cycle of h1psl0. Note this is not on the Dolphin fabric, so no fencing was needed. The procedure and results are described below.
The system was power cycled at 10:11 PDT. When the iop model started, it reported a timing error. The duotone signal (ADC0_31) was a flat line signal of about 8000 counts with a noise of a few counts.
Erik thought the timing card had not powered up correctly, so we did a second round of power cycles at 10:30 and this time the duotone was correct.
NOTE: the second ADC failed its AUTOCAL on both restarts. This is the PSL FSS ADC.
If we continue to have FSS issues, the next step is to replace the h1pslfss model's ADC and 16bit DAC cards.
[ 45.517590] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 0 : Took 181 ms : ADC AUTOCAL PASS
[ 45.705599] h1ioppsl0: ERROR - GSC_16AI64SSA : devNum 1 : Took 181 ms : ADC AUTOCAL FAIL
[ 45.889643] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 2 : Took 181 ms : ADC AUTOCAL PASS
[ 46.076046] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 3 : Took 181 ms : ADC AUTOCAL PASS
We decided to go ahead and replace the h1pslfss model's ADC and DAC cards: the ADC because of the continuous AUTOCAL failures, and the DAC to replace an aging card which might be glitching.
11:30 Powered the system down, replaced the second ADC and second DAC cards (see IO chassis drawing attached).
When the system was powered up we had good news and bad news. The good news: ADC1 AUTOCAL passed, after the previous card had been failing continually since at least Nov 2023. The bad news: we once again did not have a duotone signal in the ADC0_31 channel; again it was a DC signal, with amplitude 8115+/-5 counts.
11:50 Powered down for a 4th time today, replaced timing card and ADC0's interface card (see drawing)
12:15 powered the system back up, this time everything looks good. ADC1 AUTOCAL passed again. Duotone looks correct.
Note that the new timing card's duotone crossing time is 7.1 us, whereas the old card's was 7.6 us.
Here is a summary of the four power cycles of h1psl0 we did today:
Restart | ADC1 AUTOCAL | Timing Card Duotone |
10:11 | FAIL | BAD |
10:30 | FAIL | GOOD |
11:30 | (new card) PASS | BAD |
12:15 | (new card) PASS | (new cards) GOOD |
Card Serial Numbers
Card | New (installed) | Old (removed) |
64AI64 ADC | 211109-24 | 110203-18 |
16AO16 DAC | 230803-05 (G22209) | 100922-11 |
Timing Card | S2101141 | S2101091 |
ADC Interface | S2101456 | S1102563 |
Detailed timeline:
Mon04Nov2024
LOC TIME HOSTNAME MODEL/REBOOT
10:20:14 h1psl0 ***REBOOT***
10:21:15 h1psl0 h1ioppsl0
10:21:28 h1psl0 h1psliss
10:21:41 h1psl0 h1pslfss
10:21:54 h1psl0 h1pslpmc
10:22:07 h1psl0 h1psldbb
10:33:20 h1psl0 ***REBOOT***
10:34:21 h1psl0 h1ioppsl0
10:34:34 h1psl0 h1psliss
10:34:47 h1psl0 h1pslfss
10:35:00 h1psl0 h1pslpmc
10:35:13 h1psl0 h1psldbb
11:43:20 h1psl0 ***REBOOT***
11:44:21 h1psl0 h1ioppsl0
11:44:34 h1psl0 h1psliss
11:44:47 h1psl0 h1pslfss
11:45:00 h1psl0 h1pslpmc
11:45:13 h1psl0 h1psldbb
12:15:47 h1psl0 ***REBOOT***
12:16:48 h1psl0 h1ioppsl0
12:17:01 h1psl0 h1psliss
12:17:14 h1psl0 h1pslfss
12:17:27 h1psl0 h1pslpmc
12:17:40 h1psl0 h1psldbb
C. Compton, R. Short
I've updated the PSL_FSS Guardian so that it won't jump into the FSS_OSCILLATING state whenever there's just a quick glitch in the FSS_FAST_MON_OUTPUT channel. Whenever the FSS fastmon channel is over its threshold of 0.4, the FSS_OSCILLATION channel jumps to 1, and the PSL_FSS Guardian responds by dropping the common and fast gains to -10 dB then stepping them back up to what they were before. This is meant to catch when there's a servo oscillation in the FSS, but it's also been happening when the FSS glitches. Since moving the gains around makes it difficult for the IMC to lock, the Guardian should now only do that when there's an actual oscillation and should save time relocking the IMC.
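The actual change lives in the PSL_FSS Guardian code; purely as an illustration of the idea (the hold time, channel name, and state structure below are guesses, not the real edit), one way to ignore a one-off glitch is to require the oscillation flag to stay high for a short time before jumping:

# Illustrative sketch only -- not the actual PSL_FSS guardian change.
from guardian import GuardState

OSC_HOLD_SEC = 0.5   # assumed persistence required to count as a real oscillation

class MONITOR(GuardState):
    def main(self):
        self.timer['osc_hold'] = 0
        self.flag_was_high = False

    def run(self):
        # ezca is provided by the guardian runtime; channel name is assumed here
        flag_high = ezca['PSL-FSS_OSCILLATION'] == 1
        if not flag_high:
            self.flag_was_high = False       # a quick glitch resets the check
            return True
        if not self.flag_was_high:
            self.flag_was_high = True
            self.timer['osc_hold'] = OSC_HOLD_SEC
            return True
        if self.timer['osc_hold']:
            return 'FSS_OSCILLATING'         # only a sustained flag triggers the gain drop
        return True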
FAMIS 31058
Trends are all over the place in the last 10 days due to several incursions, but I mostly tried to focus on how things have looked since we brought the PSL back up fully last Tuesday (alog 80929). Generally things have been fairly stable, and at least for now, I don't see the PMC reflected power slowly increasing anymore.
TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 28mph Gusts, 19mph 3min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.56 μm/s
QUICK SUMMARY: H1 has been sitting in PREP_FOR_LOCKING since 08:05 UTC. Since the secondary microseism has been slowly decreasing, I'll try locking H1 and we'll see how it goes.
TITLE: 11/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism, LOCK_ACQUISITION
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
We spent most of the day with the secondary microseism mostly above the 90th percentile; it started to decrease ~8 hours ago but still remains about half above the 90th percentile. The elevated microseism today is from both the storms by Greenland and the Aleutians, as seen by the high phase difference for both arms with the corner station (bottom plot).
We keep losing it at the beginning of LOWNOISE_ESD_ETMX. I see some ASC ringups on the top of NUC29 (CHARD_P, I think?), no FSS glitches before the locklosses, and no tags on the lockloss tool besides WINDY. It's always a few seconds into the run method. I was able to avoid this by slowly stepping through some of the higher states (PREP_ASC, ENGAGE_ASC, MAX_POWER, LOWNOISE_ASC). I'm not sure which of these is the most relevant for not seeing the ringup; probably not the ASC states, as when I paused there but forgot MAX_POWER I still saw the ringup.
LOG: No log.
For the OWL shift, I received the 12:03am PT notification, but I had a phone issue.
Most of the locklosses over this weekend have the IMC tag and those do show the IMC losing lock at the same time as AS_A, but since I don't have any further insight into those, I wanted to point out a few locklosses where the causes are different from what we've been seeing lately.
2024-11-02 18:38:30.844238 UTC
I checked during some normal NLN times and the H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ channel does not normally drop below 0.31 while we are in NLN (plot). In the times Oli noted before the lockloss, it drops to 0.28 when we lose lock. Maybe we can edit the PSL glitch scripts (80902) to check this channel.
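A minimal sketch of the kind of check that could be added (the threshold comes from the trend above; the function name and the window around the lockloss are arbitrary choices, and wiring this into the glitch scripts is left to that code):

from gwpy.timeseries import TimeSeries

THRESHOLD = 0.31   # channel does not normally drop below this while in NLN

def aom_driver_dropped(lockloss_gps, span=5):
    """Return True if the AOM driver monitor dipped below THRESHOLD around the lockloss."""
    data = TimeSeries.get('H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ',
                          lockloss_gps - span, lockloss_gps + span)
    return data.value.min() < THRESHOLD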
I ran the noise budget injections for frequency noise, input jitter (both pitch and yaw) and PRCL. All injections were run with CARM on one sensor (REFL B). The cable for the frequency injection is still plugged in as of this alog, but I reset the gains and switches so we are back on two CARM sensors and the injection switch is set to OFF.
All injections are saved in the usual /ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings folder under Frequency_excitation.xml, IMC_PZT_[P/Y]_inj.xml, and PRCL_excitation.xml.
I realized an intensity noise injection might be interesting, but when I went to run the template for the ISS excitation, I was unable to see an excitation. I think there's a cable that must be plugged in to do this? I am not sure.
*********Edit************
Ryan S. sent me a message with this alog that has notes about how intensity noise injections should be taken. Through this conversation, I realized that I had misread the instructions in the template. I toggled an excitation switch on the ISS second loop screen, when I should have instead set the excitation gain to 1.
I was allowed another chance to run the intensity injections, and I was able to do so, using the low, middle, and high frequency injection templates in the couplings folder.
Also, the input jitter injections have in the past been limited to 900 Hz, because the IMC WFS channels are DQed at 2048 Hz. However, the live IMC channels run at 16 kHz, so I edited the IMC injection templates to run up to 7 kHz and to use the live channels instead of the DQ channels. That allowed the measurements to run above 900 Hz. However, the current injections are band-limited to only 1 or 2 kHz; I think we can widen the injection band to measure jitter up to 7 kHz. I was unable to make those changes because we had to go back to observing, so this is a future to-do item. I also updated the noise budget code to read in the live traces instead of the DQ traces.
Unfortunately, in my rush to run these injections, I forgot to transition over to one CARM sensor, so both the intensity and jitter measurements that are saved are with CARM on REFL A and B.
I ran an updated noise budget using these new measurements, plus whatever previous measurements were taken by Camilla in this alog. Reminder: the whole noise budget is now being run using median averaging.
I used a sqz time from last night where the range was around 165 Mpc, starting at GPS 1412603869. Camilla and Sheila took a no-sqz data set today starting at 1412607778. Both data sets are 600 seconds long. I created a new entry in gps_reference_times.yml called "LHO_O4b_Oct" with these times.
To run the budget:
>conda activate aligoNB
>python /ligo/gitcommon/NoiseBudget/aligoNB/production_code/H1/lho_darm_noisebudget.py
all plots found in /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/lho_darm_noisebudget/
I made one significant edit to the code, which is that I decided to separate the laser and input jitter traces on the main DARM noise budget. That means that the laser trace is now only a sum of frequency noise and intensity noise. Input beam jitter is now a trace that combines the pitch and yaw measurements from the IMC WFS. Now, due to my changes in the jitter injections detailed above, these jitter injections extend above 900 Hz. To reiterate: the injections are still only band-limited around 2 kHz, which means that there could be unmeasured jitter noise above 2 kHz that was not captured by this measurement.
One reason I wanted to separate these traces is partly because it appears there has been a significant change in the frequency noise. Compared to the last frequency noise measurement, the frequency noise above 1 kHz has dropped by a factor of 10. The last time a frequency noise injection was taken was on July 11, right before the OFI vent, alog 79037. After the OFI vent, Camilla noticed that the noise floor around 10 kHz appeared to have reduced, as well as the HOM peak heights, alog 76794. She posted a follow-up comment on that log today noting that the IFO to OMC mode matching could have an effect on those peaks. This could possibly be related to the decrease in frequency noise. Meanwhile, the frequency noise below 100 Hz seems to be about the same as the July measurement. One significant feature in the high frequency portion of the spectrum is a large peak just above 5 kHz. I have a vague memory that this is approximately where a first order mode peak should be, but I am not sure.
There is no significant change in the intensity noise from July, except that there is also a large peak in the intensity noise just above 5 kHz. Gabriele and I talked about this briefly; we think this might be gain peaking in the ISS, but it's hard to tell from alog measurements if that's possible. We think that peak is unlikely to be from the CARM loop. We mentioned the ISS theory to Ryan S. on the off-chance it is related to the current PSL struggles.
The other significant change in the noise budget is in the LSC noise. The LSC noise has come down relative to the last noise budget measurement, alog 80215, which was expected from the PRCL feedforward implementation. Looking directly at the LSC subbudget, PRCL has been reduced by a factor of 10, just as predicted from the FF performance. Now, the overall LSC noise contribution is dominated by noise from MICH. Between 10-20 Hz we might be able to win a little more with a better MICH feedforward; however, that is a very difficult region to fit because of various high-Q features (reminder alog).
Just as in the previous noise budget, there is a large amount of unaccounted-for noise. The noise budget code uses a quantum model that Sheila and Vicky have been working on extensively, but I am not sure of the status, and how much of that noise could be affected by adjustments to the model. Many of the noisy low frequency peaks also appear very broad on the timescale of the noise budget plot. We could try running over a longer period of time to better resolve those peaks.
Between 100-500 Hz there are regions where the sum of known noises is actually larger than the measured noise. I think this is because the input jitter projections are made using CAL DELTA L, but the overall noise budget is run on CALIB CLEAN where we are running a jitter subtraction.
I believe these couplings were pushed to aligoNB repo in commit bcdd729e.
I reran the jitter noise injections, trying to increase the excitation above 2 kHz to better see the high-frequency jitter noise. The results were moderately successful; we could probably push even harder. The results indicate that jitter noise is within a factor of 2-3 of DARM above 1 kHz.
I have attached the updated DARM noise budget and input jitter budget. I'm also attaching the ASC budget (no change expected) just because I forgot to attach it in the previous post.