Jennie W, Sheila
At the start of the commissioning period we did some OMC alignment tests to see if we could improve our optical gain.
First Sheila turned off injection of squeezing.
15:32:50 UTC (roughly): started OM1 and OM3 dithers from the OMC control screen: sitemap -> OMC -> OMC Control, then push the slider shown in this image to on.
15:32:50 UTC started low frequency OMC ASC dither lines via template at /ligo/home/jennifer.wright/Documents/OMC_Alignment/20240419_OMC_Alignment_EXC.xml
Stopped OMC ASC lines at 16:03:07 UTC.
The lines' effect on the OMC ASC degrees of freedom and kappa C (optical gain) can be seen here.
Then we re-injected frequency-dependent squeezing.
OM1 and OM3 dither lines were turned off at 16:03 UTC.
After these tests we proceeded to run A2L optimisation recorded here.
To analyse the data I used a version of Gabriele's method in which we demodulate OMC DCPD SUM at the OMC ASC dither frequencies and then at the 410 Hz PCAL line, to see which combination of QPD alignment offsets gives the highest optical gain.
The notebook is at /ligo/home/jennifer.wright/Documents/OMC_Alignment/OMC_Alignment_2024_06_24.ipynb
and can be run using:
jupyter notebook OMC_Alignment_2024_06_24.ipynb
and then opening the provided link in your browser.
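As a rough illustration of the double demodulation described above (this is not the notebook's actual code; the channel name, line frequencies, and averaging times are placeholders/assumptions), something along these lines extracts an optical-gain proxy from the PCAL line and then its variation at an ASC dither frequency:

import numpy as np
from gwpy.timeseries import TimeSeries

def demodulate(ts, f_line, avg_sec):
    # mix the time series down at f_line and average the complex envelope over avg_sec chunks
    t = ts.times.value - ts.t0.value
    mixed = ts.value * np.exp(-2j * np.pi * f_line * t)
    n = int(ts.sample_rate.value * avg_sec)
    chunks = mixed[:(len(mixed) // n) * n].reshape(-1, n)
    return TimeSeries(chunks.mean(axis=1), dt=avg_sec, t0=ts.t0)

dcpd = TimeSeries.get('H1:OMC-DCPD_SUM_OUT_DQ', '2024-06-24 15:33', '2024-06-24 16:03')

# demodulating at the PCAL calibration line gives an optical-gain proxy versus time
# (410.3 Hz is a placeholder for the actual line frequency)
kappa_c_proxy = np.abs(demodulate(dcpd, 410.3, avg_sec=1))

# demodulating that proxy at one of the low-frequency OMC ASC dither lines shows how
# the optical gain varies with the alignment dither (0.1 Hz is a placeholder)
dither_response = np.abs(demodulate(kappa_c_proxy, 0.1, avg_sec=300))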
The time series of the QPD offsets being changed are in this image. The right-hand plots show the cleaned DARM time series for different BLRMS regions. It can be seen that the first three have large glitches in the time series, but the 410 Hz one on the bottom right does not, so this frequency should be good for analysis.
The same plot but using data from the OMC DCPDs in the same frequency regions on the right is here. Again, there are glitches in the other BLRMS frequencies, but the 410 Hz one looks clear.
The final plot shows the BLRMS at 410 Hz after the double demodulation against the QPD offsets; the red lines mark the proposed change in QPD offset. For three of the QPDs it does not look like changing their offset will improve the optical gain. For the bottom right (H1:OMC-ASC_QPD_B_YAW_OUTPUT) the correct offset change could be 0 or it could be +0.1.
We therefore decided not to change any of the ASC alignment offsets.
A few weeks back Oli put in an alog about IMC and SRM M3 saturations during earthquake lock losses. In April of 2023 Gabriele had redesigned the M1-M3 offload, reducing the gain of the offload somewhat to get rid of ~3 Hz instabilities in the corner cavities. This may have contributed to the SRM saturations that Oli found, so we want to try adding some low-frequency gain back into the SRM offloading.
Sheila wrote out the math for me for the stability of both the SRCL loop and the M1-M3 crossover, so I have been looking at ways to increase the low-frequency gain without affecting the stability above 1 Hz. The open loop gain for SRCL looks like:
SRCL_OLG = SRCL_sens * SRCL_Filter * (SRM_M3_PLANT + SRM_M1_LOCK_FILTER * SRM_M1_PLANT);
The SRM M1-M3 offloading looks like:
SRM_OFFLOAD_OLG = SRM_M1_LOCK_FILTER * SRM_M1_PLANT * SRCL_sens * SRCL_Filter / (1 - SRM_M3_PLANT * SRCL_sens * SRCL_Filter);
Both of these are (or behave like) open loop gains (g), so the suppression/gain peaking can be shown by looking at 1/(1-g) for each.
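A minimal sketch of that bookkeeping, assuming each block above is available as a complex frequency response evaluated on a common frequency vector (the responses themselves are placeholders; only the algebra follows the expressions in this entry):

def srcl_olg(srcl_sens, srcl_filter, srm_m3_plant, srm_m1_lock_filter, srm_m1_plant):
    # SRCL open loop gain as written above
    return srcl_sens * srcl_filter * (srm_m3_plant + srm_m1_lock_filter * srm_m1_plant)

def srm_offload_olg(srcl_sens, srcl_filter, srm_m3_plant, srm_m1_lock_filter, srm_m1_plant):
    # SRM M1-M3 offload open loop gain as written above
    return (srm_m1_lock_filter * srm_m1_plant * srcl_sens * srcl_filter
            / (1 - srm_m3_plant * srcl_sens * srcl_filter))

def suppression(g):
    # suppression / gain peaking of a loop with open loop gain g
    return 1.0 / (1.0 - g)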
I made a 50 mHz boost filter for this and tried it during the commissioning window this morning. Bode plots for the boost (red), boost * M1 lock filters (blue), and the nominal M1 lock filter (green) are shown in the first image. The effect on the M3 drives and SRCL is shown in the second image (ASDs); live traces are with the new boost, refs are without. There is good reduction below 100 mHz, but there is gain peaking from the M1-M3 offloading at 0.2-0.4 Hz, which might bleed into the secondary microseism during the winter. I'm working on a filter with similar gain but less gain peaking, in a region that won't affect the overall rms of the M3 drive as much. I will try installing and testing it during maintenance tomorrow.
These are some of the design plots I have been using. The first image is the M1-M3 crossover: red is the Mar 2023 filter that may have been causing the ~3 Hz instabilities, solid blue is the filter that Gabriele installed at that time, dashed purple is the boost I tried this morning, and dotted yellow is a modified boost that I want to try tomorrow. The second plot is the suppression for each filter. The 0.2-0.3 Hz gain peaking I saw during the test this morning is easy to see in the dashed purple on the second plot; I think the dotted yellow will have less gain peaking and move it closer to 0.7-1 Hz, where it won't affect the rms of the M3 drive as much.
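For illustration, a low-frequency boost of this general shape can be sketched as a single pole/zero pair that adds gain below ~100 mHz and returns to unity well before 1 Hz; the corner frequencies and gain below are made up for illustration and are not the installed filter:

import numpy as np
from scipy import signal

f_zero = 0.05    # Hz, boost corner (placeholder)
f_pole = 0.005   # Hz, sets the extra low-frequency gain, here x10 (placeholder)
boost = signal.ZerosPolesGain([-2 * np.pi * f_zero], [-2 * np.pi * f_pole], 1.0)

f = np.logspace(-3, 1, 500)                  # 1 mHz to 10 Hz
w, h = signal.freqresp(boost, 2 * np.pi * f)
# |h| is ~10 (20 dB) at DC and rolls back to ~1 above the zero, leaving the
# loop shape above 1 Hz essentially untouched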
The new boost is installed on SRM and ready to try when we get a chance. The attached image shows the bode plots for the boost filter (red), boost * nominal M1 filters (blue), and the nominal M1 filters (green). I think we might try to test these on Thursday.
I tested this new boost yesterday and it works well, so I'm adding the engagement of FM5 to the ISC_DRMI guardian. Will post a log with the results in a bit.
Jennie, TJ
I ran TJ's script to measure and set all the A2L gains for the quads.
16:05 UTC ran script in userapps/isc/h1/scripts/a2l
python a2l_min_multi.py -a
*************************************************
RESULTS
*************************************************
Optic  DOF   Initial   Final    Diff
ETMX    P      3.26     3.19   -0.07
ETMX    Y      4.89     4.81   -0.08
ETMY    P      4.35     4.39    0.04
ETMY    Y      1.06     1.13    0.07
ITMX    P     -1.0     -1.02   -0.02
ITMX    Y      2.8      2.73   -0.07
ITMY    P     -0.38    -0.41   -0.03
ITMY    Y     -2.41    -2.28    0.13
16:14 UTC script finished.
TJ added these to lscparams.py and so they were loaded when Corey re-locked after the commissioning period (around 20:50 UTC).
I accepted these in OBSERVE.snap, see attached photos.
Since we've got OM2 warm now, I've updated the jitter cleaning coefficients. It seems to have added one or two Mpc to the new SENSMON2 calculated sensitivity [Notes on new SENSMON2 below].
The first plot shows the SENSMON2 range, as well as an indicator of when the cleaning was changed (bottom panel, when there's a spike up, that's the changeover).
The second plot shows the effect as spectra. The pink circle is at roughly the same frequencies in all 3 panels. The reference channels are data taken before the jitter cleaning was updated (so, coefficients we've been using for many months, trained on cold OM2 data), and the live traces are with the newly trained jitter coefficients from today.
I've saved the previous OBSERVE.snap file in /ligo/gitcommon/NoiseCleaning_O4/Frontend_NonSENS/lho-online-cleaning/Jitter/CoeffFilesToWriteToEPICS/h1oaf_OBSERVE_valuesInPlaceAsOf_24June2024_haveBeenLongTime_TraintedOM2cold.snap , so that is the file we should revert Jitter Coeffs to if we turn off the OM2 heater.
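For context, the static cleaning amounts to a linear fit of jitter witness channels to the target and subtraction. The sketch below is a heavily simplified, single-coefficient-per-witness version of that idea; the channel names, times, and fit model are assumptions, not the actual frontend pipeline or training script:

import numpy as np
from gwpy.timeseries import TimeSeriesDict

target_chan = 'H1:GDS-CALIB_STRAIN'                 # target channel (assumed)
witness_chans = ['H1:IMC-WFS_A_DC_PIT_OUT_DQ',      # jitter witnesses (placeholders)
                 'H1:IMC-WFS_A_DC_YAW_OUT_DQ']
data = TimeSeriesDict.get([target_chan] + witness_chans, '2024-06-24 17:00', '2024-06-24 17:10')

target = data[target_chan]
witnesses = np.vstack([data[c].resample(target.sample_rate.value).value
                       for c in witness_chans]).T

# least-squares coefficients minimizing |target - witnesses @ coeffs|
coeffs, *_ = np.linalg.lstsq(witnesses, target.value, rcond=None)
cleaned = target.value - witnesses @ coeffs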
Notes on SENSMON2, which was installed last Tuesday:
I added a new version of the DARM BLRMS FOM based on Minyo's template (alog 77815). The top two plots are the inverted BLRMS, so positive changes seen on these should correlate with positive changes in the range. The units of the top plots should be close to Mpc, so it is important to look for changes in the top plots since their scale is so much larger.
Using this in conjunction with the low range checks should help us diagnose what is changing our range around lately.
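As an illustration of the inverted-BLRMS idea, a minimal sketch follows; the channel, band edges, stride, and the absence of any Mpc scaling are all simplifications, not the actual SENSMON2 configuration:

from gwpy.timeseries import TimeSeries

strain = TimeSeries.get('H1:GDS-CALIB_STRAIN', '2024-06-24 17:00', '2024-06-24 18:00')
band = strain.bandpass(20, 60)        # one BLRMS band (placeholder edges)
blrms = band.rms(stride=60)           # 60 s band-limited RMS
inverted = 1.0 / blrms                # an upward change means a quieter band, i.e. better range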
Mon Jun 24 10:09:54 2024 INFO: Fill completed in 9min 51secs
Jordan confirmed a good fill curbside.
Derek Davis, Gravity Spy user ZngabitanT
In previous cases where there were OM2 temperature transitions, Gravity Spy users had noted a specific glitch class (example 1) that is present for ~tens of minutes after the transition begins. This issue was also previously noted in alog 71735. Specific times where OM2 transitions were correlated with Gravity Spy glitches are listed in this comment.
With this morning's OM2 heat-up (see alog 78573), these glitches were noted again. This can be seen in these example gravity spy subjects (example 2, example 3). This behavior can also be seen in the glitch gram for the relevant hour near 300 Hz (the OM2 transition began at 12:30 UTC).
More notes from Gravity Spy users about this glitch class (referred to as "bike chains") can be found on this zooniverse talk page. Many thanks to all the volunteers who contributed to these investigations!
The GC UPS detected a power glitch and was on battery power between 20:48:15 and 20:48:20 PDT. The attached plot shows the three phases in the corner station. The CDS UPS in the MSR did not report at this time.
TITLE: 06/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
H1's been locked almost 7 hrs; the range is running a little low after the no-squeezing test earlier this morning. Winds are much calmer after last night's wind storm.
Mon commissioning is planned from 8:30-11:30am PT (1530-1830 UTC).
10 minutes of no squeezing time started at 12:18 UTC June 24th, back to squeezing at ~12:29. This is so we have a comparison without squeezing before OM2 starts heating up 78573.
Unmonitored the OM2 heater channel when the script changed it, now we are back in observing while OM2 heats up.
Thermistor 2 has thermalized completely, but thermistor 1 still shows some thermal transient settling. Unlike what we've seen in the past, 71087, the optical gain did not change with this TSAMS change.
There is coherence with DHARD Y, and SRCL coherence has increased as expected if the DARM offset has changed.
Here are some jitter injections, looking at the coupling change between OM2 cold and hot. The pitch jitter coupling (without cleaning) seems worse with OM2 hot, which is different than our previous OM2 tests.
TITLE: 06/24 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Two locklosses this shift and several hours of downtime due to high winds. H1 has been locked for about 30 minutes.
LOG: No log for this shift.
Lockloss @ 06:38 UTC - link to lockloss tool
H1 was observing for 3 minutes before again losing lock from an unknown cause. There was more significant ETMX motion about a half second before the lockloss in this case, but I'm unsure where it came from.
H1 back to observing at 07:38 UTC. Fully automated relock.
Vicky, Begum, Camilla: Vicky and Begum noted that the CLF ISS and the SQZ laser are glitchy.
Vicky's plot shows CLF ISS glitches started with O4b, attached.
Timeline below shows SQZ laser glitches started May 20th and aren't related to TTFSS swaps. DetChar - Request: Do you see these glitches in DARM since May 20th?
Summary pages screenshots from: before glitches started, first glitch May 20th (see top left plot 22:00UTC), bad glitches since then.
Missed point:
In addition to the previous report, I should note that glitches started on May 9th and occurred several times even before May 25th.
Glitches are usually accompanied by the increased noise in H1:SQZ-FIBR_EOMRMS_OUT_DQ and H1:SQZ-FIBR_MIXER_OUT_DQ channels.
Andrei, Camilla
Camilla swapped the TTFSS fiber box 78641 on June 25th in hopes that this will resolve the glitches issue.
However, it made no difference: see the figure from 20:40 UTC onward, which is when the TTFSS box was swapped.
State of H1: Observing at 157Mpc, locked for 6.5 hours.
Quiet shift so far except for another errant Picket Fence trigger to EQ mode at 02:42 UTC, just like the ones seen last night (alog 78404) (tagging SEI).
That makes two triggers in a short time. If these false triggers are an issue, we should consider triggering on picket fence only if there's a Seismon alert.
The picket fence-only transition was commented out last weekend, on the 15th, by Oli. We will now only transition on picket fence signals if there is a live Seismon notification.
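A minimal sketch of the new condition (not the actual guardian code; names are illustrative):

def go_to_earthquake_mode(picket_fence_triggered, seismon_alert_live):
    # picket fence alone no longer triggers the transition; it must coincide
    # with a live Seismon notification
    return picket_fence_triggered and seismon_alert_live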
Thanks Jim,
I'm back from my vacation and will resume work on the picket fence to see if we can fix these errant triggers this summer.
Sheila, Camilla, Robert
A change in the ETMX measured charge over the O4a-O4b break (e.g. 78114) suggested that there might be a change in electronics ground fluctuation coupling. This is because the hypothesized mechanism for ground fluctuation coupling depends on the test mass being charged: as the potential of electronics near the test mass (such as the ring heater and the ESD itself) fluctuates with the electronics ground potential, there is a fluctuating force on the test mass.
We swept the bias (see figure) and found that the minimum in coupling had changed from an ESD bias of about 150 V in August of 2023 (72118) to 58 V now, with the coupling difference between the two settings being a factor of about ten (in other words, if we stuck with the old setting the coupling would be nearly ten times worse). Between January of 2023 and August of 2023, the minimum coupling changed from about 130 V to about 150 V, with the coupling difference between the two settings being less than a factor of two. The second page of the figure is from this August alog, showing the difference in the coupling between then and now. I checked the differences across the break for ETMY, ITMY, and ITMX, and the coupling differences across the break were not much more than a factor of two, so the change in ETMX, about a factor of ten, seems particularly large, as might be expected for a significant charge change.
I started working out the gain adjustment that we need in order to change the bias. To get to an offset of 70 V while preserving the DARM UGF, we need a gain of 436.41 in L3 drivealign L2L and an offset of 2.0 in the BIAS filter.
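A back-of-the-envelope sketch of that scaling, assuming the linearized ESD longitudinal actuation strength is proportional to the bias voltage, so the L3 drivealign gain has to scale as (old bias / new bias) to keep the DARM UGF fixed; the "old" numbers below are placeholders, not the current settings:

old_bias_V = 100.0      # placeholder for the current ESD bias
old_l2l_gain = 300.0    # placeholder for the current L3 DRIVEALIGN L2L gain
new_bias_V = 70.0       # target bias from this entry

new_l2l_gain = old_l2l_gain * old_bias_V / new_bias_V
print(f"L3 DRIVEALIGN L2L gain for {new_bias_V:.0f} V bias: {new_l2l_gain:.2f}")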