TITLE: 06/06 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 133Mpc
SHIFT SUMMARY: About to start maintenance day.
LOG:
The H1 glitches FOM on nuc27 isn't updating for Omicron. I posted this in the LHO DetChar chat.ligo group.
I've had to restart the DARM FOM on nuc30 3 times, and Ryan noted he did so ~4 times during the evening. Tagging CDS.
I expected the PEM magnetic injections to start at 7:25am, but they ran from 14:30UTC to 14:48UTC. SUS_CHARGE started at 14:45UTC with lines at 11-14Hz while PEM_MAG_INJ was still at EY_EBAY_HIGH_INJ, so they overlapped by 2-3 minutes. Tagging PEM and SUS. In-lock charge measurements caused a lockloss again at 15:00UTC (70062).
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 13:21 | CDS | Erik | Remote | N | Rebooting work stations | 13:21 |
| 14:17 | COMM | Camilla | CR | N | PRCL OLG Measurement | 14:27 |
| 14:30 | PEM | PEM_MAG_INJ GRD | Auto | N | Automatic magnetic injections | 14:48 |
| 14:45 | SUS | SUS_CHARGE GRD | Auto | N | Automatic in-lock charge measurements | 15:01 |
To fix the overlap between the magnetic injections and in-lock charge measurements, I've moved the start time of the magnetic injections up to 7:20am from 7:25am (tagging PEM).
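For what it's worth, the overlap arithmetic can be checked mechanically; a minimal sketch (the helper functions are mine, times taken from the log table above):

```python
def overlap_minutes(a_start, a_end, b_start, b_end):
    """Return the overlap of two intervals, in the same units (0 if disjoint)."""
    return max(0, min(a_end, b_end) - max(a_start, b_start))

def hhmm(t):
    """Convert an 'HH:MM' UTC string to minutes past midnight."""
    h, m = t.split(":")
    return int(h) * 60 + int(m)

# PEM_MAG_INJ ran 14:30-14:48 UTC; SUS_CHARGE ran 14:45-15:01 UTC
mag = (hhmm("14:30"), hhmm("14:48"))
chg = (hhmm("14:45"), hhmm("15:01"))
print(overlap_minutes(*mag, *chg))  # 3 minutes of overlap
```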
TITLE: 06/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 15mph 5min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY: Maintenance day about to start. In-lock charge measurements are currently running. We plan not to transition the SEI mode today and to keep sensor correction ON.
Workstations were updated and rebooted, except an operator workstation and opslogin 0. OS packages were updated, not conda-installed packages.
The conda puppet configuration was also updated. This brings the underlying configuration code in-line with LLO and frees up a lot of disk space on the workstations. The change does not affect conda-installed packages or conda environments.
Temperatures:
I've updated the scale on the /cds/h1/scripts/fom_startup/nuc32/CER_temperatures.yaml ndscope template so that the MSR trends are not cut off. H1:PEM-C_MSR_RACK{1,2}_TEMPERATURE changed slightly 1 month ago, making this necessary. Plot of the CER/MSR temperatures attached.
Attached is a plot of VEA temperatures over the last 7 days; you can see the 0.3 degF increase in zone 1B 18 hours ago, as noted in 70128.
Dust Counts:
See attached. We've had high dust counts at EY (3000 at 0.3um and 200 at 0.5um) over the last 3 hours without particularly high wind. I'll ask Kim to check on everything when she cleans EY later today.
STATE of H1: Lock Acquisition
Lost lock at 09:37UTC; the cause was an 11Hz PRCL instability, see attached and Brina's 70153. This is with Jenne's updated PRCL2 gain of 1.7 (70160). Note there is an issue with the lockloss tool, but you can use the command-line tool with 'lockloss show 1370079438' to get the ndscope plots.
Successfully did an initial alignment after we couldn't lock PRMI (even with moving PRM) and lost lock at CHECK_MICH_FRINGES. Since then locking has been fine; currently at MAX_POWER.
I'm hopeful (although have not yet looked closely at the lockloss) that this was more due to the earthquake than the PRCL gain. I note that the amplitude of the 11 Hz in this plot is ~10x lower than those in Brina's alog. For now I've set the guardian to put the PRCL2 gain to 1.7, and we'll see if we get another nice long lock like this one was.
Closes FAMIS 23809. Last done in 69998.
All fine, as Ryan notes, FAN5_170_1 and EY_FAN1_470_1 are the highest at ~0.4.
FAN4_17870_1 was turned on 05/30 (7 days ago).
Closes FAMIS 21443. Last done in 69989.
Ryan notes that the ISS Diffracted Power has been drifting more than usual in 70158 and the ISS AA chassis was swapped in 70089. PSL trends in 70154.
Laser Status:
NPRO output power is 1.818W (nominal ~2W)
AMP1 output power is 67.3W (nominal ~70W)
AMP2 output power is 134.8W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 20 days, 12 hr 46 minutes
Reflected power = 15.65W
Transmitted power = 109.2W
PowerSum = 124.9W
FSS:
It has been locked for 0 days 11 hr and 13 min
TPD[V] = 0.9477V
ISS:
The diffracted power is around 3.2%
Last saturation event was 0 days 11 hours and 14 minutes ago
Possible Issues:
ISS diffracted power is high
TITLE: 06/06 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 12mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY: Locked in Observing for 9h30.
Ryan notes in alog 70163 that the SQZ ISS may knock us out of observing, as it is close to saturating.
Ryan notes the DARM DTT on nuc30 has crashed ~4 times tonight (it looks frozen, crashes on restart, and needs to be closed and reopened).
TITLE: 06/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 136Mpc
SHIFT SUMMARY: H1 has been locked and Observing for 9.5 hours. Very quiet evening, wind and seismic activity are low.
LOG:
No log for this shift.
State of H1: Observing at 132Mpc
H1 has been locked and Observing for over 6 hours. Wind has died down and seismic is low.
Vicky has warned me the SQZ ISS is in need of realignment and might saturate tonight. She reminded me of Sheila's instructions on what to do if it does: alog 70050
I've updated NUC34's PI monitor with lockloss references; see the CR screenshot with red highlighting the danger areas. When the upper lump of the live black trace reaches the reference, that is consistently where PI29 has rung up and caused locklosses, in the first 1-3 hours of lock. The bottom right corner monitors PI 29; any peak appearing there is bad, though we have survived brief appearances in the past.
It's handy to look for the 80 kHz PI 29 via an aliased-down version in the slower DQ channel (it pops up at ~14.76 kHz, 68165), b/c we can't play back the 500kHz fast channel. The black *live* trace thermalizes from lower to higher frequencies.
Brina and I are now starting to look into the recent PI29 locklosses, starting with 06/02/23 20:13:27 UTC (noted by Brina in 70153). This is after the change from 76W to 75W input power (5/31, 70042), which followed the ETMX ring heater change to 1.2W (5/24, 69871). To keep damping this PI, we should check the signal bandpass filter for this mode -- the DCPDs see the aliased ring-up 10 Hz lower than where it "usually" is (examples here), which is strange and would explain why its damping didn't work, if the mode was out of band. I don't think it's just our increased damping strength (69800) pushing the mode out of band (we've seen weak damping move the frequency before by a couple Hz, 68165). Watching the lockloss, I don't see the mode frequency move down by 10 Hz; I only see it pop up at that frequency already. Will investigate more this week.
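A sanity check of the aliasing arithmetic (the 65536 Hz, i.e. 2^16, DQ-channel rate and the ~80.3 kHz mode frequency are my assumptions, chosen to be consistent with the ~14.76 kHz figure from 68165):

```python
def alias_freq(f, fs):
    """Frequency (Hz) at which a tone at f appears when sampled at rate fs."""
    f_folded = f % fs
    return f_folded if f_folded <= fs / 2 else fs - f_folded

fs = 65536       # assumed DQ-channel rate (2**16 Hz); the fast channel is ~2**19 Hz
f_pi29 = 80.3e3  # approximate PI 29 mode frequency
print(alias_freq(f_pi29, fs) / 1e3)  # ~14.76 kHz, where the mode pops up
```

Note that in this single-fold regime a real-mode shift of 10 Hz downward maps one-to-one to a 10 Hz downward shift in the aliased peak.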
Since Brina found in alog 70153 that we have still been having locklosses due to too-low PRCL gain, I asked RyanS to take us out of Observing for a few minutes (ended up being ~10 mins) so that I could measure the PRCL open loop gain. It looked like the PRCL gain was a bit low, compared to the latest reference from May 31st. I increased the PRCL2 gain from 1.5 to 1.7, and saved that value in the Observe SDF.
In the first screenshot, I show the SDF.
In the second attached screenshot, you can see the reference from May 31st ("Prev reference"), the somewhat low-ish PRCL gain "As found, 4 hrs locked", and the higher PRCL OLG after I increased the PRCL2 gain "current measurement".
I'm hopeful that this helps us stay locked a little longer. We may need some more spot measurements of the PRCL gain. I have not yet made any changes to guardian, and I don't think I can switch us to the safe.snap SDF file, so I suspect that next lock, this value will come back as 1.5. We should probably measure the PRCL OLG early in a lock, but then modify this gain wherever it was set to 1.5, to now be 1.7. And we should figure out why it needed to be increased!
Ran the template in lsc/h1/templates/PRCL/PRCL/PRCL_OLG_NOISE_FULL_LOCK_NLN.xml and saved results in camilla.compton/Documents (see attached).
This was with H1:LSC-PRCL2_GAIN at 1.7 and after 23 minutes at NLN. The cursor is on 41.5Hz: gain 1, phase 21deg. Unsure what the references are.
Thanks! I think this means that we can (just barely) put into guardian to use 1.7 in the PRCL2 gain, rather than the 1.5 we'd been using. I'll do that when I get out of my morning set of meetings.
I've just done this in the ISC_LOCK guardian, so next lock it'll automatically be set to a gain of 1.7. I'll watch this as we come up from maintenance in a few hours.
Since we had not quite gone into Observe yet, I took a quick PRCL OLG measurement, 15 mins after we got to NomLowNoise. The PRCL UGF is quite high at 50 Hz, shown in red in the attachment (pink is the same old reference as the red trace in Camilla's plot above). I could see some gain peaking in the DRMI DTT on the wall, but that relaxed quite quickly, which is consistent with Camilla's plot 23 mins into a lock looking like a much more sensible loop with less gain peaking. Since the goal UGF is around 30 Hz, a better thing to do will be to modify the thermalization guardian to increase the PRCL gain a little bit when we're first locked, but then much more than it currently does later in the lock, and put the PRCL2 gain back to its nominal value of 1.
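As a rough back-of-the-envelope (my assumption, not the measured loop shape): if the open-loop magnitude falls roughly as 1/f near the UGF, the gain change needed to move the UGF from 50 Hz toward the 30 Hz goal is about the ratio of the two frequencies:

```python
# Rough gain change to move a unity-gain frequency, assuming the open-loop
# magnitude falls as ~1/f near the UGF (a sketch, not the measured loop).
ugf_now, ugf_goal = 50.0, 30.0
gain_scale = ugf_goal / ugf_now
print(f"scale the loop gain by ~{gain_scale:.2f} to move {ugf_now:.0f} Hz to {ugf_goal:.0f} Hz")
```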
I think I have a candidate new 'equation' for the PRCL gain thermalization. In the attached plot, the overall digital PRCL gain (so, PRCL1 gain times PRCL 2 gain) is plotted versus minutes, for the first 6 hours of lock that the thermalization guardian is changing things. After the first 6 hours, it just holds whatever gain was in there.
Blue is what the thermalization guardian was set up to do, with a variable in the thermalization called GAIN_SCALE set to 3.0, and PRCL2 gain set to 1.
Orange is what we've been running with for the last several weeks, with PRCL2 gain set to 1.5.
Green is what we've been running the last few locks, with PRCL2 gain set to 1.7.
The three circle markers are estimates of what the gain ought to be to keep the UGF at 30 Hz, from measurements that Camilla and I have taken.
The pink trace is a candidate thermalization gain equation that would replace the blue one, and we'd use with PRCL2 gain set back to 1. For this, the only change to the thermalization guardian is that the GAIN_SCALE would be set to 5.55 (rather than the current value of 3.0), and the PRCL2 gain would be set back to 1.
I'll make this change next time the IFO is unlocked and I'm around.
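For illustration only, a minimal sketch of the gain-vs-time behavior described above; the exponential settling shape and the tau value here are placeholders I made up, NOT the equation installed in the thermalization guardian. What it does take from the alog: the overall digital gain is PRCL1 gain times PRCL2 gain, the candidate uses GAIN_SCALE = 5.55 with PRCL2 gain back to 1, and after the first 6 hours the guardian just holds whatever gain is in there.

```python
import math

def prcl_gain(minutes, gain_scale, prcl2_gain, tau=90.0):
    """Illustrative overall digital PRCL gain (PRCL1 * PRCL2) vs lock time.

    The exponential settling shape and tau are placeholders, not the
    installed guardian code. After 6 hours the gain is simply held.
    """
    t = min(minutes, 6 * 60)  # hold after the first 6 hours of lock
    return prcl2_gain * (1 + gain_scale * (1 - math.exp(-t / tau)))

# Candidate "pink" settings: GAIN_SCALE = 5.55, PRCL2 gain = 1
for minutes in (0, 60, 360, 600):
    print(minutes, round(prcl_gain(minutes, 5.55, 1.0), 2))
```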
Over recent days, we've been noticing that the ISS diffracted power has been changing more than usual, which has been varying the input IFO power by ~0.5W lock to lock. I adjusted the ISS RefSignal Friday night to mitigate this, but it still seems to be moving around.
I've attached trends of this drift over the past week and month (I include some temperatures because they've been on our minds recently). The ISS first appears to start doing strange things back on May 16th (cursor location in the ndscope screenshot), which lines up with when Jason did an FSS alignment tune-up and the RF distribution went down due to ongoing power supply work. That unlocked the PMC and RefCav while Jason was in the enclosure, but the ISS itself was not touched other than turning the autolocker on and off. Two weeks after that, the diffracted power seems to settle.
J. Kissel

Picking up where I last left off, in LHO:69898, I had divided up the ten 4-hour segments of IFO thermalization times during ER15 into slices of average X & Y arm cavity power, i.e. into power bins 1 kW wide. This aLOG covers the attempts to create a representative model of the collection of transfer functions during each power bin using methodology developed during O3: fitting the data with a Gaussian Process Regression using the same radial basis function kernel we use to fit other unknown frequency dependence in the sensing and actuation functions.

Executive Summary: Gaussian Process Regression's cons continue to outweigh its pros as a transfer function fitter with the kernel we typically use. I'm able to finagle the hyperparameters of the kernel model such that the regression output tracks the median behavior of the transfer function, but it does *not* reflect the large, measured variability in the transfer function (which we traditionally try to cover with the Gaussian uncertainty). This may have to be "good enough," but I'm going to have to farm this out to other specialists if we need better.

Step 1: Reorganize the *real* transfer function data, and add uncertainty.

Attachment 1: sensingFunction_syserror_vs_powerslices_originaldata_vs_binneddata.pdf

As of May 24 2023, and LHO:69898, I had found *a* way to group the data into power slices and sort the 3D data arrays. However, in doing so, I had NOT saved any of the coherence-based uncertainty on the measurement points. So, as of this aLOG, I have now added the measurement, coherence-based uncertainty to each of the data points in all the collections of data. While doing so, I had some more time with 3D python array manipulation, so I reorganized things as well. This collection of plots shows each power bin's collection of transfer functions and their associated powers, now with quantified uncertainty.
On the left is the bode plot of the transfer function organized one way; on the right, the same data organized differently -- a sanity check that my new organization preserves all the right data and uncertainty assignments.

Step 2: Gather some statistical intuition on the data by finding the 68th, 95th, and 99.7th quantiles as one way to quantify the uncertainty of the parent distribution at each frequency point.

Attachment 2: sensingFunction_syserror_vs_powerslices_binneddata_freqhist.pdf

One of the fundamental assumptions of a Gaussian Process regression is that the family of curves you've given it is sampled from a -- you guessed it -- Gaussian parent distribution of curves. So... is this collection of data Gaussian? This plot shows histograms of each frequency point's magnitude and phase within a power bin. Overplotted on top of each histogram are
- the (red) median, or 50th percentile,
- the (darkest pink) 0.1587th, 0.8413th "1-sigma" quantiles,
- the (medium pink) 0.0228th, 0.9772nd "2-sigma" quantiles, and
- the (light pink) 0.0013rd, 0.9986th "3-sigma" quantiles.

Of course, each bin's worth of data is diverse in number of transfer function values: some bins have only 3 traces, others 170. As such, I've let python's histogram function determine the number of bins by setting histogram_bin_edges='auto'. Still, this display gives *me* a good impression of the data, and the first impression that a Gaussian Process regression is just not the right tool:
- the distributions of transfer function values at each frequency are *not* Gaussian, and
- the "uncertainty" of the distribution, quantified by the 1-, 2-, and 3-sigma quantiles, is not symmetric.

Alas, we proceed.

Step 3: Add "fake" data to the transfer function collections, convert from mag/phase to complex, then to real/imaginary parts, then flatten in prep for input into GPR.
Attachment 3: sensingFunction_syserror_vs_powerslices_binneddata_vs_gprinputdata.pdf

As we did in O3, we want a model of these transfer functions to serve as a systematic error (with associated uncertainty) that can be "stapled" in to the overall calibration's modeled response function systematic error (with associated uncertainty). In doing so, we want this model to add systematic error only where we think the response function is changing as a function of thermalization. However, Gaussian Process Regression gives you large uncertainty where there are no measured frequency points to "anchor" the fit. Further, due to the quirks of scikit-learn's python implementation of GPR (or at least the way that Craig Cahillane and then Evan Goetz figured out how to get it to "work"), you must flatten the collection of input data onto a single vector (i.e. the nTFs by mFreq complex 2D array must be reshaped into a (nTFs * mFreq) x 1 vector).

This collection of plots shows the sanity check that the addition of fake data points and the flattening work. The fake data points have a transfer function value of 1.0 + 0.0j, i.e. a magnitude of 1.0 [(ct/m) / (ct/m)] and a phase of 0 -- the sensing function residual [meas/model] if the model perfectly agrees with the measurement. I've assigned these fake data points an uncertainty of 1e-4, or 0.01%. The *number* of fake data points required depends on the radial basis function hyperparameter "length scale."

Step 4: Explore the "standard" O3-era "radial basis function"-inclusive hyperparameter space in order to create a "satisfying" fit.
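The mag/phase-to-complex conversion and flattening described in Step 3 can be sketched with toy data (the array sizes here are mine, purely for illustration):

```python
import numpy as np

# Toy collection: nTFs transfer functions sampled at mFreq frequencies,
# stored as magnitude and phase (degrees), as in the power-bin data.
nTFs, mFreq = 3, 4
rng = np.random.default_rng(0)
mag = rng.uniform(0.9, 1.1, (nTFs, mFreq))
pha = rng.uniform(-5, 5, (nTFs, mFreq))

# mag/phase -> complex, then flatten into the (nTFs * mFreq) x 1 shape
# that the GPR input must be massaged into.
tf = mag * np.exp(1j * np.deg2rad(pha))
flat = tf.reshape(nTFs * mFreq, 1)
print(flat.shape)  # (12, 1)
```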
Attachment 4: sensingFunction_syserror_vs_powerslices_gprfit_lengthscale_0p25_nxtrapnts30.pdf

The GPR transfer function kernel we continue to use to describe unknown frequency dependence in "our" transfer function residuals, since Evan added additional terms after Craig's O1/O2 work (see "Slides for follow up on action item" from G1901479), is as follows:

    F * RBF(\ell) + C

where
- F = a "ConstantKernel" function that serves as what I call an "amount of frequency dependence" coefficient, determining how much of the radial basis function is "used";
- RBF(\ell) = a "Radial Basis" function, a Gaussian function of the dimensionless hyperparameter \ell, which computes the correlation of neighboring data points with respect to the length scale \ell. As we've shoved the transfer function data into this kernel function, it's computing the Euclidean distance between neighboring logarithmic frequency points of the real and imaginary parts of the complex transfer function. Working through a bit of algebra, as in G2101319, the physical meaning of the length scale becomes delta_f = f * (10^\ell - 1), i.e. the RBF treats lower frequency points as having tighter (in frequency) correlation;
- C = another "ConstantKernel," in this case used as an additive constant, representing what an "ideal" residual should look like, i.e. a transfer function of 1.0 + 0.0j at all frequencies.

As I've been reminded through my exploration, if you start probing length scales \ell smaller than the log(f) frequency separation, the GPR will return a large uncertainty on the transfer function in between your data points. Thus, the number of *fake* data points needs to come into play if your length scale gets small. So, the parameter space to explore is the priors on the constants of the two ConstantKernels as well as the prior on the RBF length scale \ell.
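A quick numerical reading of the delta_f = f * (10^\ell - 1) relation, evaluated at the \ell = 0.25 that is eventually chosen:

```python
def rbf_corr_width(f, ell):
    """Physical correlation width (Hz) of a log-frequency RBF length scale ell:
    delta_f = f * (10**ell - 1), as derived in G2101319."""
    return f * (10**ell - 1)

# For ell = 0.25, neighboring points within ~0.78*f of f are treated as correlated
for f in (10.0, 100.0, 1000.0):
    print(f, round(rbf_corr_width(f, 0.25), 1))
```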
In the end, I used
- F = value of 0.5, with uniform probability between (0.4, 1.0). I interpret this as allowing the frequency dependence to be "all the way on" (i.e. 1.0) or a little less than "half on" (0.4).
- \ell = value of 0.25, with uniform probability between (0.125, 0.375). This value I determined entirely empirically, by trying values in [0.125, 0.25, 0.5, 0.75, 1.0, 2.0].
- C = value of 1.0, with uniform probability between (0.9, 1.1).
- nExtraPoints = 30. One can safely run with 15 points at length scales down to 0.5, but below 0.5 the length scale becomes comparable to the frequency separation of these points, so the frequency space *between* the extra points starts to become prohibitively uncertain, which spoils the "frequencies above the thermalization region shall not impact the greater systematic error budget" rule.

This plot shows the Gaussian Process Regression fit against the median and 1-, 2-, and 3-sigma quantiles of the 4 *real* data frequency points, as well as the original data. From here we arrive at our executive summary: I'm able to finagle the hyperparameters of the kernel model such that the regression output tracks the median behavior of the transfer function, but it does *not* reflect the large, measured variability in the transfer function (which we traditionally try to cover with the Gaussian uncertainty) -- i.e. some of the way there, and maybe good enough. For the fits that are the worst, the lowest power bins, I think this is an OK compromise if we recall that the IFO stays in these lowest power bins for the *least* amount of time given the "exponential" nature of the thermalization, and sometimes the IFO is past these arm powers before we even hit NOMINAL_LOW_NOISE.
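In scikit-learn terms, the F * RBF(\ell) + C kernel with the values and bounds quoted above might be assembled like this (the toy input data is mine; the real inputs are the flattened real/imaginary residuals described in Step 3):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# F * RBF(ell) + C with the hyperparameter values and priors quoted above
kernel = (ConstantKernel(0.5, constant_value_bounds=(0.4, 1.0))
          * RBF(length_scale=0.25, length_scale_bounds=(0.125, 0.375))
          + ConstantKernel(1.0, constant_value_bounds=(0.9, 1.1)))

# Toy stand-in for the flattened residual data: log10(frequency) vs value
logf = np.log10(np.logspace(1, 3, 20)).reshape(-1, 1)
resid = 1.0 + 0.02 * np.sin(3 * logf.ravel())

# alpha plays the role of the per-point measurement uncertainty (squared)
gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-4).fit(logf, resid)
mean, std = gpr.predict(logf, return_std=True)
print(mean.shape, std.shape)
```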
In fact, over my years of asking for One Transfer Function Fitter to Rule Them All, I've found that this exact problem is the hardest part about transfer function fitting in LIGO -- finding a fit or model that appropriately describes the *uncertainty*, or in our application here, the *variability*, in the transfer function. I guess it should be no surprise that I wasn't able to figure out an entire field of computational metrology (or loosely, parameter estimation) in 1.5 weeks. So, at this point:
(a) We need to install *something* in the overall systematic error budget, and this is a good start.
(b) There's talk of reducing the power and reverting the SRCL offset, perhaps soon rendering this data set "no longer representative."
(c) I need the help of others to do better in terms of capturing the variability (and maybe that means exploring different kernels, or switching to zpk-based fitting like IIRrational or Ethan's ZPK Fitting).
(d) We could also try this same process on the *response function* rather than the sensing function (which *may* be quicker than 2 months' worth of work, because I already have examples of how to manipulate the data, which was a huge time suck for me).
(e) I need to move on with my life and make progress elsewhere.

So, I hand it off to my fellow calibrators, transfer function fitters, and the world.
The saved results for each GPR of each power level live in /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O4/H1/Results/FullIFOSensingTFs/Thermalization/
    supplemental_sensing_gpr_405_kW.hdf5
    supplemental_sensing_gpr_406_kW.hdf5
    supplemental_sensing_gpr_407_kW.hdf5
    [... hidden for brevity ...]
    supplemental_sensing_gpr_434_kW.hdf5
    supplemental_sensing_gpr_435_kW.hdf5
    supplemental_sensing_gpr_436_kW.hdf5
Note that the power level listed in the filename is the *upper* limit of the power level bin, so
- the first bin's file, supplemental_sensing_gpr_405_kW.hdf5, should be used for all powers from 0 kW to 405 kW, then
- each of the next 31 power bins should be used when the power is within 1 kW below the value in the file name, i.e. supplemental_sensing_gpr_434_kW.hdf5 should be used when the average arm power is between 433 and 434 kW.

Recall, the algorithm to compute the power against which to compare the file is: for the two minutes surrounding your start_gps of choice, take the two 16 Hz channels' worth of X and Y arm powers, average them to create a 16 Hz mean arm power, then take the median of that average across the two minutes. The following python code snippet should get you there.
import numpy as np
from gwpy.timeseries import TimeSeriesDict as tsd

armPwrChList = ['H1:ASC-X_PWR_CIRC_OUT16', 'H1:ASC-Y_PWR_CIRC_OUT16']

# Build the 1 kW power bins; the lowest bin catches everything up to 405 kW
pwrbinlowerlim = 404
pwrbinupperlim = 436
pwrbinstep = 1
pwrbins = [[n, n + pwrbinstep] for n in range(pwrbinlowerlim, pwrbinupperlim, pwrbinstep)]
pwrbins[0] = [0, pwrbinlowerlim + pwrbinstep]

stride = 2 * 60
start = start_gps          # your start_gps of choice
end = start_gps + stride
pwrdata = tsd.get(armPwrChList, start, end, frametype='R', verbose=True)

# Compute the average (np.mean) of the two arm powers at each 16 Hz data point,
# then compute the np.median across the stride
pwr = np.median(np.mean(np.array([pwrdata[armPwrChList[0]].value,
                                  pwrdata[armPwrChList[1]].value]), axis=0))

for thispwrbin in pwrbins:
    if np.logical_and(pwr > thispwrbin[0], pwr <= thispwrbin[1]):
        print('You should use the {} kW file'.format(str(thispwrbin[1])))
On the choice of length scale and nExtraPoints: just so folks get a feel for how I made the choice of length scale and nExtraPoints -- and really, as a better demonstration of the empirical process of choosing the hyperparameters in the kernel -- see the attached files, which show how the fit result changes as a function of the hyperparameter \ell as it ranges over [0.12, 0.25, 0.5, 0.75, 1.0]. Note, as discussed above, the lower values of \ell need more extra frequency points in order to reduce the uncertainty between points. To see this, compare the pdfs from \ell = 0.25 with either 15 extra points or the final answer, 30 points. For \ell larger than 0.25, I've used 15 extra points in each.
The code that produces this data lives in gpr_sensing_darm_comb_pwrslices.py, committed at the version with git hash ba6d60. Start there if you want to rip out the GPR fitting and plot making (everything below Line 306) and start from the point where the data is grouped into bins, around and below Line 260. Be forewarned: it takes ~5 minutes to run one round of GPR fitting with \ell = 0.25 and 30 extra points, and ~2 minutes with \ell = 0.5 or higher and 15 extra data points. Also be warned that there's a lot of plotting code commented out; those collections of plots produced the plots from Steps 1 through 3 above and are not needed for producing the "final answer" plots of the fits. I've committed the raw text files one needs to run the code and process the data to ^/trunk/Runs/O4/H1/Measurements/FullIFOSensingTFs/Thermalization
    sensingFunctionResidual_meas_over_model_wTDCFs_*_freqfreq.txt
    sensingFunctionResidual_meas_over_model_wTDCFs_*_mag.txt
    sensingFunctionResidual_meas_over_model_wTDCFs_*_pha.txt
    sensingFunctionResidual_meas_over_model_wTDCFs_*_pwrpwr.txt
    sensingFunctionResidual_meas_over_model_wTDCFs_*_timetime.txt
    sensingFunctionResidual_meas_over_model_wTDCFs_*_unc.txt
where the * conveys the different GPS start and stop times of the 10 different segments from LHO:69796.
I've set the set point for the OPO trans to 60 uW; this gives us better squeezing and a little higher range. However, the SHG output power sometimes fluctuates for reasons we don't understand, which causes the ISS to saturate and knocks us out of observing. Vicky and operators have fixed this several times; I'm adding instructions here so that we can hopefully leave the setpoint at 60 uW and operators will know how to fix the problem if it arises again.
If the ISS saturates, you will get a message on DIAG_MAIN; operators can then lower the set point to 50 uW.
1) take sqz out of IFO by requesting NO_SQUEEZING from SQZ_MANAGER.
2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 50. Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint.
3) This change in the circulating power means that we need to adjust the OPO temperature to get the best SQZ. Open the OPO temp ndscope from the SQZ scopes drop-down menu on the SQZ overview (pink oval in screenshot). Then adjust the OPO temp setting (green oval) to maximize the CLF-REFL_RF6_ABS channel, the green one on the scope.
4) Go back to observing, by requesting FREQ_DEP_SQZ from SQZ_MANAGER. You will have 2 SDF diffs to accept as shown in the screenshot attached.
Update: in the SDF diffs, you will likely not see H1:SQZ-OPO_ISS_DRIVEPOINT change, just the one diff for OPO_TEC_SETTEMP. The *ISS_DRIVEPOINT channel is used for commissioning; the ISS actually stabilizes the power to the un-monitored value that changes, H1:SQZ-OPO_ISS_SETPOINT.
Also, if the SQZ_OPO_LR guardian is stuck ramping in ENGAGE_PUMP_ISS (you'll see H1:SQZ-OPO_TRANS_LF_OUTPUT ramping), this is b/c the setpoint is too high to be reached, which is a sign to reduce opo_grTrans_setpoint_uW in sqzparams.py.
Update for operators:
2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 60 (updated from the 50 in the original instructions). Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint. Check that the OPO ISS control monitor (H1:SQZ-OPO_ISS_CONTROLMON) is around 3 by opening SQZ OVERVIEW -> SQZT0 -> AOM +80MHz -> Control monitor (attached screenshot). If the control monitor is not around 3, repeat 2) and adjust opo_grTrans_setpoint_uW to bring it around 3.
Vicky has asked me to add a comment that this line in sqzparams.py should now stay at 80, since the SQZ has since been tuned for 80 uW rather than 50 or 60:
Line 12: opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.
Relevant alog:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72791
Latest update on how to deal with this SQZ error message with a bit more clarity:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80413