N. Aritomi, J. Kissel, C. Compton, T. Shaffer

As we were poking around for ideas as to why violin modes are now constantly getting rung up, we found that the lock-loss triggers were never installed in any of the parametric instability damping control models. TJ is doing some trending in parallel to my writing, and I hear him verbally convincing himself via trends that this isn't the source of the violin mode ring-ups; I expect he'll write a comment, or something to share his conclusion. For those who haven't heard of these "lock loss triggers" -- they're fast, front-end, 16 kHz-speed triggers that shut off all global control to the suspensions to ensure that -- upon lockloss -- no garbage ISC control signals (including violin mode damping) are sent to the test masses -- which have historically been a source of violin mode ring-up. Regardless of whether TJ is convinced that this isn't our problem *today,* I still think this is an oversight in the design that should be rectified. I've opened a work permit for this to be done on one of the upcoming maintenance days -- WP:11322. Shown are the recommended places where the lock-loss tool should interrupt the signals in
- h1susetmypi.mdl (h1susetmxpi.mdl should be modified in the same way) and
- h1susitmpi.mdl
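Conceptually, the missing piece is just a gate: the front-end logic lives in the Simulink models, but what it should do can be sketched in a few lines (my own illustration, not the actual model logic):

```python
# Conceptual sketch of the lock-loss trigger gating in the PI damping
# models: zero the damping output the moment the trigger fires, so no
# stale control signal reaches the test mass after a lockloss.
def gated_pi_output(damp_output: float, lockloss_triggered: bool) -> float:
    """Return the PI damping signal, or zero if the lock-loss trigger fired."""
    return 0.0 if lockloss_triggered else damp_output
```

In the real models this gate would sit just upstream of the DAC output path, so the interruption happens at the full 16 kHz front-end rate.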
I looked at the PI output monitor channels (H1:SUS-PI_PROC_COMPUTE_MODE24_DAMP_OUTPUT) and their IOP DAC channel (H1:FEC-97_DAC_OUTPUT_4_6), since these signals get sent straight to the DAC. The main times I looked at are the last lockloss we had before the violins rung up on June 29th, and the relocking up to the point we noticed the high violins. I found no difference in the output of these signals during this lockloss compared to previous locklosses, or during the subsequent acquisition.
WP11314 Ramped CP1 fill
Gerardo, Richard, Travis, Jordan, Dave:
My code change was not ready for today's 10:00 fill, so we did one at the old time of 13:15.
Gerardo confirmed this was a good fill curbside.
New code summary:
Opening the LLCV is now ramped over 5 minutes
Closing is still as fast as the valve can move.
The code change details:
New PV added to IOC: H1:VAC-CP1_OVERFILL_LLCV_RAMP_TIME_SECONDS
The YAML configuration file parser now reads a new llcv_ramp_time_seconds line (default is 0)
At time of fill, instead of setting the LLCV to MAX_VAL as part of the pre-fill, it now overwrites the starting value to test that caput is working.
In the fill loop, if the run_time is within the ramp_time, the LLCV is ramped to the next value.
If ramp time=0, the LLCV is set to the MAX_VAL in one caput, identical to pre-ramp code.
If during the ramping an end-of-fill condition is detected (cancel, fill-completed, TC failure, discharge line pressure out-of-bounds) the fill is stopped and LLCV returned to starting position.
Note that opening the LLCV is ramped, but closing it back to the starting value is still done with a single write (prevents excessive LN2 loss when overflowing).
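The ramp logic described above can be sketched roughly as follows (a minimal sketch with hypothetical helper and argument names, not the actual IOC code; `caput` stands in for the EPICS channel write):

```python
import time

def ramped_fill(caput, llcv_pv, start_val, max_val,
                ramp_time=300.0, poll=1.0, end_of_fill=lambda: False):
    """Open the LLCV from start_val to max_val over ramp_time seconds.

    ramp_time == 0 reproduces the pre-ramp behavior (a single caput to
    max_val). Whenever an end-of-fill condition is detected (cancel, fill
    complete, TC failure, discharge pressure out-of-bounds), the valve is
    returned to start_val with one write, i.e. closing stays fast.
    """
    t0 = time.monotonic()
    run_time = 0.0
    # Ramp phase: while run_time is within ramp_time, step toward max_val.
    while run_time < ramp_time and not end_of_fill():
        frac = run_time / ramp_time
        caput(llcv_pv, start_val + frac * (max_val - start_val))
        time.sleep(poll)
        run_time = time.monotonic() - t0
    # Hold phase: sit at max_val until the fill ends.
    if not end_of_fill():
        caput(llcv_pv, max_val)
        while not end_of_fill():
            time.sleep(poll)
    caput(llcv_pv, start_val)  # closing is still a single fast write
```

With ramp_time=0 this degenerates to the old single-caput behavior, matching the llcv_ramp_time_seconds default of 0.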
Attached trends show both the standard logarithmic version, and a linear one zoomed in to show the new ramp
Adding a DetChar tag, to see if this change affects the range drop that had been reported by Marissa (alog 71301).
Carried out some noise injections while violin modes were being damped. Analysis will follow. Times below.
ASC-DC3_Y_EXC
butter("BandPass",8,0.5,5)zpk([],[1.5+i*2.59808;1.5-i*2.59808;1.5+i*2.59808;1.5-i*2.59808;1.5+i*2.59808;1.5-i*2.59808],1,"n")
ampl 10
start PDT: 2023-07-17 12:12:25.248450 PDT
UTC: 2023-07-17 19:12:25.248450 UTC
GPS: 1373656363.248450
stop PDT: 2023-07-17 12:14:28.217030 PDT
UTC: 2023-07-17 19:14:28.217030 UTC
GPS: 1373656486.217030
ampl 30
start PDT: 2023-07-17 12:15:09.156033 PDT
UTC: 2023-07-17 19:15:09.156033 UTC
GPS: 1373656527.156033
stop PDT: 2023-07-17 12:19:34.343574 PDT
UTC: 2023-07-17 19:19:34.343574 UTC
GPS: 1373656792.343574
DHARD_Y_EXC
resgain(0.22, 2,14)resgain(0.48,3,20)resgain(2.5,8,15)cheby1("BandPass", 6,1, 0.05,3)gain(2000)zpk([1],[0.1],1,"n")
ampl 0.6
start PDT: 2023-07-17 12:25:39.173166 PDT
UTC: 2023-07-17 19:25:39.173166 UTC
GPS: 1373657157.173166
stop PDT: 2023-07-17 12:30:06.552000 PDT
UTC: 2023-07-17 19:30:06.552000 UTC
GPS: 1373657424.552000
ampl 1.0
start PDT: 2023-07-17 12:30:58.643632 PDT
UTC: 2023-07-17 19:30:58.643632 UTC
GPS: 1373657476.643632
resgain(1.2,5,10)resgain(0.22,3,20)resgain(0.45,2,22)resgain(2.2, 1.5,23)
cheby1("BandPass",6,1,0.08,3)gain(1000)zpk([1],[0.1],1,"n")
ampl 1.0
start PDT: 2023-07-17 12:37:54.797826 PDT
UTC: 2023-07-17 19:37:54.797826 UTC
GPS: 1373657892.797826
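For cross-checking stamps like the ones above, the UTC-to-GPS conversion can be done with the standard library (assuming the current 18-second GPS−UTC leap-second offset, valid for dates after 2016-12-31):

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC = 18  # leap seconds; valid since 2017-01-01

def utc_to_gps(utc_str):
    """Convert a 'YYYY-mm-dd HH:MM:SS.ffffff' UTC stamp to GPS seconds."""
    t = datetime.strptime(utc_str, "%Y-%m-%d %H:%M:%S.%f")
    t = t.replace(tzinfo=timezone.utc)
    return (t - GPS_EPOCH).total_seconds() + GPS_MINUS_UTC

# e.g. the first injection start above:
# utc_to_gps("2023-07-17 19:12:25.248450") -> 1373656363.248450
```

For anything spanning a leap-second change, a proper conversion tool (e.g. the LIGO gpstime utilities) should be used instead of the fixed offset.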
I had some time to try a DHARD_Y injection, with only a few iterations at improving the noise shape, and not enough averages. Still, the results are promising and I got a first go at the DHARD_Y plant.
The measurement needs to be improved around 2.6-3 Hz to resolve the shape of the peak / bump there.
With the noise injection on ASC-DC3_Y, I could determine that the DC centering loop does not limit the DHARD_Y noise: the coherence we normally see below a few Hz is due to DHARD and DC3/DC seeing the same beam motion.
The first plot shows a comparison of DHARD_Y, DARM and DC3_Y with and without noise injection. The second plot shows the DC3_Y noise projection into DHARD_Y, using the transfer function measured during noise injection.
This also implies that the DC centering loops are not responsible for the 2.6 Hz peak
Since our violin troubles of June 26/27th (71129, 71342, 71063), the higher harmonics have remained very rung up, see attached plot. These higher harmonics do not damp during our locks, although Rahul and the SUS team are starting to look into doing this. Jenne suggested in 71063 that this could be the cause of our fundamentals re-ringing up on each lockloss.
C. Compton, J. Oberling
With the lockloss I took the opportunity to walk Camilla through adjusting the ISS diffracted power %, as it has been trending a little low since last week's PSL recovery. We increased the ISS RefSignal from -2.01 V to -2.00 V, which brought the ISS diffracted power % to ~2.5% (it does move around a little). This change was accepted in the ISS SAFE SDF to prevent a SDF revert from inadvertently reverting the ISS RefSignal (see attached screenshots; it looks like last week's adjustment from -2.02 V to -2.01 V was not captured in SDF).
We did not run the CP1 fill at 10am today. I am making a code change (WP11314) to ramp the LLCV open. We will run it this afternoon, hopefully while H1 is locked, to see if we have reduced the impact on the range.
Lockloss at GPS 1373650934 after 10h13 in NLN. No obvious reason for the LL.
Back to NLN and observing at 19:49UTC, ~1hour in OMC_WHITENING
While doing my usual Monday morning PSL checks I noticed the chiller was throwing an alarm. Upon investigating I found it throwing a Low Level alarm, so I added 150mL of water to top it off. This stopped the alarm. According to the fill log and a quick alog search we have been adding ~150mL every 6 weeks or so; this is in contrast to the old PSL chiller (O1 through O3), where we were adding ~150mL on a weekly basis. Everything with the chiller looked OK.
TITLE: 07/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 135Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 15mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: IFO has been in NLN 7h30
Dust monitors, VAC, SUS, SEI, CDS okay.
TITLE: 07/17 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:
EQ recovery mostly handled by Ryan C at the beginning of the shift. Per a suggestion from Rahul via TeamSpeak, looked at violins after locklosses, but could not find anything out of the ordinary, either during our recent woes or back 2+ weeks ago and earlier, when they were much better.
H1's been locked for about 7.5hrs (double coincidence for most of this time).
Winds are picking up, and a bit cooler (73degF) than yesterday morning.
LOG:
Smooth Sailing Shift thus far with H1 locked for 3.5+hrs, ~136Mpc, low winds, & no noticeable EQs.
TITLE: 07/17 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 18mph Gusts, 14mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
Receiving a nice H1 very close to being OBSERVABLE...babysitting violins and away we will go!
Surprised to see winds tonight, but looks like they are waning *knock on wood* & it's a balmy 81degF. Here's to no earthquakes!
TITLE: 07/16 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:
Lock#1:
Lock#2:
Lock#3:
Lock#4:
Lock#5:
LOG:
No log for this shift
Thought we were gonna make it but then the R wave came through and took us out. Lockloss at 03:29UTC, I'm gonna hold at pre_for_locking for things to calm down a bit.
Just got back to OMC_WHITENING after a few lock attempts and an IA but I'll probably have to be here for an hour to damp violins.
[Jenne, Naoki, Brina, Caden, Robert, Lance, Camilla]
Since alignment changes were so impactful yesterday (alog 71302) and we still have higher noise than when OM2 was hot last week, and also higher noise than our April 60W times with cold OM2 (eg, Elenna's alog 71284 yesterday), we tried some alignment shifts today with some PEM vibration injections going, to see if we could find a better alignment. For now, we've left the IFO with the same settings it had last night since there's nothing that's significantly better. Nothing we've found comes close to matching the broadband (good) noise level of the April 60W time, or the last week hot OM2 times.
The first attachment shows a big-picture of the moves we made today.
Daniel points out that the time with OM2 hot may have been some of our best-ever sensitivity around 70 Hz. It would be helpful to have a plot like alog 71309, but including Elenna's time from April 6th, to see if pre-O4 we ever had as good of sensitivity around 70 Hz.
It's also a little tricky to compare our HWS data using our plotter to what Elenna and Cao posted yesterday. It would be helpful to either have the times from Cao's plot, or have Cao plot where it looks like our ITMX spot was at 21:50 UTC today.
Looks to me like the big difference is in the frequency region from 50 to 100 Hz. The excess noise in this band has been rather stubborn since O3 and its origin is unknown.
Mode matching of the single bounce beam to the OMC is really bad and we don't know why. We don't even know the beam shape of the single bounce beam hitting the OMC. I constrained the beam shape by looking at the OMC scan data.
There are many OMC single bounce scans, but the most recent two w/o RF SBs, one with cold and the other with hot OM2, were carefully analyzed by Jennie to resolve the 02 and 20 modes as separate peaks (alogs 70502 and 71100), so I used them here.
If you just want to see the results, look at the third panel of the first attachment.
X-axis is the normalized waist position difference, Y-axis is the normalized waist radius difference. From the measured cold mode matching loss of 11.5%(!!) and hot loss of 6.2%, and the fact that the loss changed by only changing the ROC of OM2, the beam parameters hitting the OMC were constrained to two patches for each OM2 ROC. Yellow is when OM2 is cold, blue is when OM2 is hot. Arrows show how cold (yellow) patches are transformed to hot (blue) patches when the OM2 ROC is changed by heating.
Note that we're talking about inconceivably huge mismatching parameters. For example, about -0.3 normalized waist position difference (left yellow patch) means that the waist of the beam is ~43cm upstream of the OMC waist. Likewise, about +0.3 normalized waist radius difference means that the beam waist radius is 690um when it should be 490um.
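As a numerical sanity check (a sketch of my own, not the analysis code used here), the power coupling between two co-axial fundamental Gaussian modes follows from their complex q parameters as C = 4·Im(q1)·Im(q2) / |q1 − q2*|², so the mode-matching loss is 1 − C. Plugging in the example numbers above (690 um waist ~43 cm upstream of a 490 um OMC waist, 1064 nm) gives a loss of the same order as the measured values:

```python
import math

LAM = 1064e-9  # m, laser wavelength

def q_param(waist_radius, waist_position, z=0.0):
    """Complex beam parameter at z for a given waist radius/position (m)."""
    z_r = math.pi * waist_radius**2 / LAM  # Rayleigh range
    return complex(z - waist_position, z_r)

def mm_loss(q1, q2):
    """Mode-mismatch power loss between two co-axial fundamental modes:
    coupling C = 4*Im(q1)*Im(q2)/|q1 - conj(q2)|**2, loss = 1 - C."""
    c = 4.0 * q1.imag * q2.imag / abs(q1 - q2.conjugate()) ** 2
    return 1.0 - c

# Perfectly matched beams couple fully (loss ~ 0):
perfect = mm_loss(q_param(490e-6, 0.0), q_param(490e-6, 0.0))

# The example quoted above: a 690 um waist ~43 cm upstream of the 490 um
# OMC waist gives a loss of ~14%, the same order as the measured 11.5%.
example = mm_loss(q_param(490e-6, 0.0), q_param(690e-6, -0.43))
```

The ~14% vs. 11.5% difference just reflects that the quoted (position, size) pair is a representative corner of the patch, not the best-fit point.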
We cannot tell (yet) which patch is closer to reality, but in general we can say that:
There are many caveats. The first one is important. Others will have limited impact on the analysis.
Moving forward:
Here's a brief explanation of what was done.
Top left panel of the 1st attachment is the mode matching loss contour plot. loss=0 when [posDiffNormalized, sizeDiffNormalized]=[0,0]. Contours are not circular because the loss is calculated analytically, not by quadratic approximation.
Top right panel of the 1st attachment only shows the region close to the measured losses. The yellow ring is when OM2 is cold, blue when hot. Each and every point on these rings represents a unique waist size and waist position combination (relative to the OMC waist).
Since we are supposed to know the OMC-OM2 distance and ROC of the cold and hot OM2, you can choose any point on the yellow (cold) ring, back-propagate the beam to the upstream of OM2 (assuming the cold ROC), "heat" the OM2 by changing the ROC to the hot number, propagate it again to the OMC waist position, and see where the beam lands on the plot. If it's on the blue ring, it's consistent with the measured hot loss. If not, it's inconsistent.
Just for plotting, I chose 9 such points on the cold ring and connected them with their hot landing points on the top right panel. If, for example, you look at the point at ~[0, 0.4] on the plot ("beam too big but position is perfect when cold"), after heating OM2 the beam becomes smaller but the beam position doesn't change meaningfully, so the matching becomes better. In this case the improvement is much bigger than measured (i.e. the landing point is inside the blue ring), so we can conclude that this ~[0, 0.4] cold point is inconsistent with the measured hot loss.
By doing this for each and every point on the yellow ring we end up with a patch or two that are consistent with reality.
If you cannot visualize what's going on, see the 2nd attachment. Here I'm plotting the beam propagation of the "beam too big but position is perfect when cold" case in the top panel. The beam between OM2 and the OMC is directly defined by the initial (cold) parameters. The beam upstream of OM2 is the back-propagation of that beam. The bottom panel is the propagation diagram for when OM2 becomes hot. The beam upstream of OM2 is the same as in the cold case; you propagate that beam to the OMC position using the hot ROC. In this case the loss, which was ~12% when cold, improves to 4.3%, which is inconsistent with the measured hot loss of (1+-0.1)*6.2%.
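The back-propagate / re-heat / forward-propagate step is a short ABCD calculation. A sketch (my own illustration, not the actual script; the 0.97 m OM2-to-OMC-waist distance is the T1200410 number, while the OM2 ROC values are placeholders, not the real ones):

```python
def abcd(q, M):
    """Propagate a complex beam parameter q through an ABCD matrix."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def free_space(d):
    return ((1.0, d), (0.0, 1.0))

def mirror(roc):
    return ((1.0, 0.0), (-2.0 / roc, 1.0))

def inv(M):
    # ABCD matrices have unit determinant, so the inverse is ((D,-B),(-C,A)).
    (A, B), (C, D) = M
    return ((D, -B), (-C, A))

D_OM2_OMC = 0.97  # m, OM2 to OMC waist (T1200410)

def heated_q_at_omc(q_cold_at_omc, roc_cold=1.7, roc_hot=2.0):
    """Back-propagate a candidate cold-case q from the OMC waist to OM2,
    swap the cold ROC for the hot one, and propagate forward again.
    ROC defaults are placeholders, not the real OM2 numbers."""
    q_at_om2 = abcd(q_cold_at_omc, inv(free_space(D_OM2_OMC)))
    q_before_om2 = abcd(q_at_om2, inv(mirror(roc_cold)))
    q_hot_at_om2 = abcd(q_before_om2, mirror(roc_hot))
    return abcd(q_hot_at_om2, free_space(D_OM2_OMC))
```

Sweeping this over every point on the cold ring and checking which landing points fall on the hot ring is then just a loop.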
Further summary:
We can probably down-select the patch with ~30uD of single-path thermal lensing in the ITM compensation plate, relative to the thermal lensing we had in the previous scans (alogs 70502 and 71100). Start with a hot OM2. If we see a significant reduction in MM loss after ITM TCS, the actual beam parameters are on the patches in the left half plane.
Details 1:
In the 1st attachment, I took two representative points on the hot patches, indicated by little green circles, which define the beam shape at the OMC waist position. I then back-propagated the beam to upstream of the ITM (i.e., in this model, the optics are correctly placed with correct ROCs and such, but the input beam is bad). The ITM is at the average ITM position. The only lensing in the ITM is the nominal diverging lens due to the ITM's curvature on the HR side.
Then I added the thermal lens, once to the beam impinging on the ITM HR and once to the beam reflected, and looked at what happens to the beam parameter at the OMC waist location. These parameters are represented by tiny crosses. Blue means negative diopters (annular heating) and red means positive (central heating). I changed the thermal lensing in 10uD steps (single-path numbers).
As you can see, if you start from the left half plane patch, central heating will bring you close to ~(-0.04, 0) with 30uD single-path (or 60uD double-path).
OTOH if you start from the right half plane, ITM heating only makes things worse both ways.
FYI, 2nd plot shows, from the left to the right, good mode matching, hot patch in the left half plane and in the right half plane. The beam size on the ITM is ~5.3cm nominally, 5.1cm if in the left half plane (sounds plausible), 6.8cm in the right (sounds implausible). From this alone, right half plane seems almost impossible, but of course the problem might not be the bad input beam.
Details 2:
Next, I start with (almost) perfectly mode-matched beam and change the optics (either change ROC/lens or move) to see what happens. We already expect from the previous plots that ITM negative thermal lensing will bring us from perfect to the hot patch in the left half plane, but what about other optics?
3rd attachment shows twice the Gouy phase separation between ITM and other optics. Double because we're thinking about mode matching, not misalignment. As is expected, there's really no difference between ITM, SR3 and SR2. OM1 is almost the opposite of ITM (172 deg), so this is the best optic to compensate for the ITM heating, but the sign is opposite. OM2 is about -31 deg, SRM ~36 deg. From this, you can expect that SR3 and SR2 are mostly the same as ITM as actuators.
4th attachment shows a bunch of plots, each representing the change of one DOF of one optic. (One caveat: I expected the green circles, which represent the beam perfectly mode-matched to the arm propagated to the OMC waist position, to come very close to (0, 0) with zero MM loss, but in this model it's ~(-0.4, 0.1) with ~1.2% loss. Is this because we need a certain amount of ITM self-heating to perfectly mode match?)
Anyway, as expected, ITM, SR3 and SR2 all look the same. It doesn't matter if you move the position of SR3 and SR2 or change the ROC, the trajectories of the beam parameter points on these plots are quite similar. These optics can all transform the perfectly matched system to the blue patch in the left half plane. What is kind of striking, though not surprising, is that a 0.025% error in SR3 ROC seems to matter, but this also means that that particular error is easily compensated by ITM TCS.
SRM, OM1 and OM2 are different (again as expected). Somewhat interesting is that if you move OM2, the waist size only goes smaller regardless of the direction of the physical motion.
From these plots, one can conclude that if you start from a perfectly matched beam, you cannot change just one optic to reach the hot patch in the right half plane. You have to make HUGE changes to multiple optics at the same time, e.g. SRM ROC and ITM thermal lensing.
Both Details 1 and 2 above suggest that, regardless of what's wrong as of now (input beam or the optics ROC/position), if you apply the central heating on ITM TCS and see an improvement in the MM loss, it's more likely that the reality is more like the patches on the left, not right.
Dan pointed me to their SRC single-path Gouy phase measurement for the completely cold IFO, which was 19.5+-0.4 deg (alog 66211).
In my model, 2*Gouy(ITM-SRM single path) was ~36deg, i.e. the SRC single-path Gouy phase is about 18 degrees. Seems like they're consistent with each other.
ITM central heating plot was updated. See attached left. Now there are four points as the "starting points" without any additional TCS corresponding to both hot and cold patches.
According to this, starting with cold OM2, if the heating diopter (single path) is [0, 10, 20, 30, 40]uD, the loss will be [11.5, 7.1, 3.5, 1.1, 0.1]% if the reality is in the left half plane (attached right, blue), or [11.5, 9.9, 10.5, 13.1, 17.3] % if in the right half plane (attached right, red).
Updated to add cold OM2, ITMY single bounce, central CO2 OFF/ON case in alog 71457.
Jennie Wright, Keita Kawabe, Sheila Dwyer
Above Keita says "I assumed that the distance between OM2 and OMC waist is as designed (~37cm). " 37 cm is a typo here, the code actually uses 97 cm, which is also the value listed for OMC waist to OM2 in T1200410
Measured IFO beam self-heating absorption, using data from Saturday April 1st when Dan had an IFO lock with CO2s off. The CO2s had been off for 6 hours, and the IFO had been locked for 2h30; see attached plot of the thermal state. A better method was used in alog 66098; I'll work on how to use this correctly with Dan/Cao.
See attached plot made using Aidan's absorption plot script (instructions in TCS wiki) in /ligo/gitcommon/labutils/hws_absorption_fit/april2023.
ITMX and ITMY: ~128mW absorbed = 350ppb (128mW / 363kW arm power from H1:ASC-{X/Y}_PWR_CIRC_OUT16). Coatings are expected to have 0.5ppm (500ppb) coating absorption.
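The ppb figure is just absorbed power over circulating power; a trivial check of the arithmetic (my own one-liner, not the analysis script):

```python
def absorption_ppb(absorbed_w, circulating_w):
    """Fractional absorption expressed in parts per billion."""
    return absorbed_w / circulating_w * 1e9

# The numbers quoted above: ~128 mW absorbed at 363 kW circulating,
# i.e. ~353 ppb, quoted as ~350 ppb.
ppb = absorption_ppb(128e-3, 363e3)
```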
We've previously measured significantly higher absorption on ITMX, which has known point absorbers, not sure what's changed apart from significantly higher circulating power - 360kW rather than 230kW. Previously measured:
| | April 2022 (62468, 62782) | Nov 2022 (66036) | April 2023 |
| ITMX | 430ppb | 490ppb | 350ppb |
| ITMY | 370ppb | 385ppb | 350ppb |
Using Dan's /ligo/gitcommon/labutils/hws_absorption_fit/april2023/fit_absprtion_v2.py from alog 66098, which takes into account the center of the IFO beams (should be updated in channels HWS_ORIGIN_{X,Y} but wasn't at that time). This shows that absorption is close to what was previously measured, plot attached:
| | April 2022 (62468, 62782) | Nov 2022 (66036) | April 2023 |
| ITMX | 430ppb | 490ppb | 475ppb |
| ITMY | 370ppb | 385ppb | 375ppb |
* Looking at the last few locks the IFO beam position on ITMY has slightly changed in YAW since April 1st, see attached for IFO beam center difference between start of April and now, with old (491,566) center plotted as red cross.