We had an episode of really large beam motion at about 0.6 Hz on the OMC QPDs as well as the OMCR QPDs, but not on ASC_AS_C or the AS WFSs. It was very slow to damp, and at some point it stopped damping further. It was clear that the OMC or OM3 was moving, but the motion didn't show up on any of the OMC or OM3 OSEMs. The OMC SD and RT OSEMs were, however, super noisy.
On inspecting the OSEM spectra, it was clear that the SD and RT OSEMs of the OMC had large broadband noise, the signature of a satellite amp oscillation. The difference this time is that we couldn't see the oscillation in the kHz range even using the 16 kHz channel (sorry, no screenshot). Maybe the oscillation frequency got higher?
Anyway, apparently it has been like this since the power outage (attached). Though it's not shown in the spectrum, we confirmed that the broadband noise was there every day after the power outage.
We disabled the OMC damping and the beam motion on OMC QPD immediately got quieter.
We went to the floor and power cycled the satellite amp for RT/SD of OMC (they're on the same box, everything else is on another one), and the noise went away.
Turned on the damping again and it worked fine, no excessive beam motion.
Apparently this happens even though all satellite amplifiers are "fixed". It's worth checking all OSEMs for all optics.
I made a PDF with spectra of all of the OSEMs. The green trace is after the power outage, the black is a reference time before; I used the same times as in Jenne's alog.
- Some quadrants of ETMX L2 have a turn-up at high frequency.
- IM3_M1_LL has a big broadband noise increase.
- ITMX L2 has a new peak at 3 Hz in all quadrants (it might be worth checking with faster channels in case this is aliasing).
- ITMY_L2_LR has a turn-up at high frequency.
- MC1_M3_UL has a small broadband noise increase.
- Obviously OM1, OM2, and the OMC look terrible.
- PRM_M2_UR has a broadband noise increase.
- SR2_M1 LF and T1 look like they might have started glitching (bouncy shape in the spectrum).
A bunch of channels also have excesses at low frequency, but I don't know if that's a change in physical motion, or maybe it's a result of other things being unstable.
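For anyone who wants to repeat the comparison, here is a rough sketch of how one of these before/after spectra could be pulled with gwpy. The channel name and GPS times below are placeholders for illustration, not the ones actually used for the attached PDF.
from gwpy.timeseries import TimeSeries
# Placeholder channel and times; substitute the OSEM channel and GPS times of interest.
chan = 'H1:SUS-OMC_M1_OSEMINF_RT_OUT_DQ'   # assumed/illustrative channel name
t_ref, t_after = 1145000000, 1146000000    # placeholder GPS times (before / after the outage)
ref = TimeSeries.get(chan, t_ref, t_ref + 600).asd(fftlength=8)
after = TimeSeries.get(chan, t_after, t_after + 600).asd(fftlength=8)
plot = ref.plot(label='reference (before)', color='black')
ax = plot.gca()
ax.plot(after, label='after power outage', color='green')
ax.legend()
plot.savefig('osem_asd_comparison.png')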
Sheila, Cheryl, Haocun
We re-centered WFS A and B this afternoon to fix the misalignment (alog 27792). Both of them are now centered around (0,0), and we took measurements on the IMC.
The spectrum (measured at test 1) and transfer function (with 20 dB gain) of the IMC were taken both before and after re-centering; there were essentially no differences between them.
For the spectrum, several peaks are:
-55.5 dBm/Hz @ 197.5 kHz
-85 dBm/Hz @ 87.5 kHz
-83.7 dBm/Hz @ 25 kHz
For the transfer function:
UGF at ~150 kHz, and we have a bump close to 0 dB at higher frequency.
IMC loop gain measurements should always go up to 5 MHz. There can be EOM resonances as high as 2.5 MHz which, if unmonitored, can poke up and make another unity gain crossing.
Jenne, Nutsinee, Sheila
We had trouble locking the arm in IR for initial alignment because PSL-POWER_SCALE_OFFSET was different from the input power. We added a statement to the "IDLE" run state of LASER_PWR (the state the node runs when it is not changing the power) that will adjust the normalization if it is wrong by more than 0.5 W:
# If the power normalization disagrees with the measured input power by more than 0.5 W, update it.
if abs(ezca['PSL-POWER_SCALE_OFFSET'] - ezca['IMC-PWR_IN_OUTMON']) > 0.5:
    ezca['PSL-POWER_SCALE_OFFSET'] = ezca['IMC-PWR_IN_OUTMON']
I came to the realization, this fine Monday morning, that this was my fault. I adjusted the laser power down from 25W to 2W without using the PSL Guardian node. Lesson learned. Apologies to those involved in the solution for taking the time to discover and fix MY "oops".
I checked the pickoff mirror, IO_MCR_M14, that I installed on the MCR path on IOT2L (see alog 27725), which picks off the p-pol beam and sends it to a beam dump. I found that with a locked IMC, the edge of the p-pol beam was getting past the pickoff mirror, so I adjusted the mirror position by about 1 mm toward the main beam. Checking with an IR card, there's about 3 mm between the p-pol beam and the main s-pol beam where the pickoff is installed, so there is enough clearance that the pickoff mirror should not clip the main beam.
15:00 Krishna out to EY to re-center the BRS. Also Chris went to EX to check on the wind screen construction; back inside 30 minutes.
16:30 Chandra at MX performing manual overfill of CP5. Warned me about ensuing alarms.
16:45 Dave B called to ask me about the aforementioned alarms.
17:14 Schofield and co. into CER
18:17 Sheila and Haocun out to ISCT1 to center POP wavefront sensor that is railing
18:27 Sheila and Haocun out.
20:19 Sheila out to LVEA to take some IMC measurements
20:21 Cheryl out to IOT2L
20:24 Karen to optics lab for cleaning
20:45 Karen out of the optics lab
21:05 Keita out to IMC area
21:12 Kyle into LVEA to retrieve a meter by HAM4
21:16 Keita back
21:17 Kyle out
21:26 Gerardo to MX
22:29 Cheryl/Haocun/Sheila back from aligning IMC WFS
22:40 Initial Alignment
Now for the excessive detail.
Before the last couple of months of work, we thought that for the end station ISIs we should use the Quite_90 blends and the Mitt_SC broadband sensor correction. A couple of months ago, Krishna pointed out that the Mitt_SC filter was probably insufficiently rolled off at low frequency, where the BRS actually makes the STS/BRS supersensor worse. There have been a number of iterations on sensor correction filters, but I think we now have a good filter. The first attached image shows a few of the filters we have tried. I won't go through them all, but the dark blue is the old Mitt_SC filter and the cyan is what we are using currently. The other two are filters that we've tried and discarded for various reasons; there are many more in the foton files, but I'm tired of plotting them.
Around the same time, I realized that using the Quite_90 blend may not roll off the ISI T240s quickly enough at low frequency to avoid injecting platform tilt, and that we should use a higher blend, at least while the microseism is low. Currently we're using the Quite_250 blends as a high blend, but there may be room for optimization there.
The next plot is the estimated suppression of the "old" BRS configuration (the Mitt_SC filter, Quite_90 blend), the "new" SC/blend configuration (Warn_SC_v3, Quite_250 blend), and a couple of relevant CPS blend filters (which are a good first approximation to the low-frequency performance, without sensor correction). JeffK has an entry (594) in the seismic log that explains this estimation of the expected sensor correction performance, for those interested.
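As a rough illustration of that kind of estimate, here is a minimal sketch assuming ground motion reaches the platform through the low-pass side of the blend, reduced by the sensor correction, i.e. transmission ~ |L_blend x (1 - H_SC)|. The blend and sensor-correction filters below are toy stand-ins for illustration only, not the actual Quite_* blends or the Warn_SC_v3 filter (those live in the foton files).
import numpy as np
f = np.logspace(-2, 0, 500)                       # 10 mHz to 1 Hz
def lowpass_blend(f, f_blend, order=4):
    # toy complementary low-pass stand-in for a real blend filter
    return 1.0 / (1.0 + (1j * f / f_blend) ** order)
def sensor_correction(f, f_hp=0.02, order=2):
    # toy sensor-correction high-pass (rolls the ground sensor off below f_hp)
    s = 1j * f / f_hp
    return (s / (1.0 + s)) ** order
for f_blend, label in [(0.090, 'Quite_90-like'), (0.250, 'Quite_250-like')]:
    transmission = np.abs(lowpass_blend(f, f_blend) * (1.0 - sensor_correction(f)))
    idx = np.argmin(np.abs(f - 0.15))             # secondary microseism, ~150 mHz
    print(label, 'estimated ground-motion transmission at 0.15 Hz:', transmission[idx])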
We know from experience that the 45 mHz blends are hopeless in 10+ mph winds but suppress the microseism well, and the Quite_90 blends are okay in winds up to 20 mph but are no good in high (~>0.5 micron BLRMS) microseism. I've tried using the Quite_250 blends alone with low microseism, but I think the motion around the first test-mass modes was too high (the spots on the AS port were moving around a lot at something like 0.5 Hz). The new configuration has allowed us to lock in 30-40 mph winds (and lower), but the microseism has been very low. Based on the simple model used in the second plot, it should do better than the Quite_90 blends alone in high microseism. It even looks like it should do better than the 45 mHz blends at the secondary microseism (~150 mHz), but the gain peaking is right on top of the primary microseism (~30-80 mHz), which we don't see often.
The gain peaking of the new configuration looks like it would be worse than either the 45 mHz blends or the Quite_90s, but this model doesn't include platform tilt, which is filtered out by using a higher blend on the ISI and a tilt-subtracted STS (at the ETMs) or a low-tilt STS (ITMY) for sensor correction. I think Krishna showed that this "new" configuration is in practice better, in his alog 27735.
LLCV bypass valve 1/2 turn open, and the exhaust bypass valve fully open.
Flow was noted after 52 seconds; the LLCV bypass valve was then closed, and 3 minutes later the exhaust bypass valve was closed.
After Sheila's log about the BS causing locklosses, I wanted to check a few things in the guardian code, and I found what looks like a conflict. In /opt/rtcds/userapps/release/isi/h1/guardian/ISI_BS_ST2.py, line 4
ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([ ] , [ ])
the empty brackets at the end indicate the ISI is not set to restore any Cartesian locations. This is supposed to be the code that sets which DOFs are restored when the ISI re-isolates, and it is particular to the beamsplitter.
However, in /opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/isolation/CONST.py, lines 102 (for BSC ST1) and 122 (for BSC ST2) both read
CART_BIAS_DOF_LISTS = ([], ['RZ']),
CONST.py is common code, and I would interpret this to mean that all BSCs are restoring a stored RZ location. This isn't a problem for the other BSCs, because they never change state, but if we turn on the ST2 loops for the BS, this code could force the ISI to rotate some after the loops come on. The attached trend shows the last ten days of the BS ST2 RZ setpoint and locationmon, and they track each other, so I don't think the BS is returning to some old RZ location, but someone who understands this code should explain it to me.
You could say the CART_BIAS_DOF_LISTS in the common code is the "default" setting, but it can be overwritten by the chamber node's local file (as is done here). The two empty lists in the local file show that no Cartesian bias degrees of freedom are being restored.
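To make the default/override relationship concrete, here is a minimal illustration based only on the lines quoted above; the exact structure of ISOLATION_CONSTANTS inside isiguardianlib is assumed for illustration.
# Common default (isiguardianlib/isolation/CONST.py): restore RZ during the second
# phase of isolation.  Dictionary structure assumed for illustration.
ISOLATION_CONSTANTS = {
    'CART_BIAS_DOF_LISTS': ([], ['RZ']),   # (first-phase DOFs, second-phase DOFs)
}
# A chamber-specific file (e.g. ISI_BS_ST2.py) then overrides the entry, so that
# node restores no Cartesian bias DOFs at all:
ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])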
A quick "grep CART_BIAS ./*.py" in (userapps)/isi/h1/guardian/ yields:
./ISI_BS_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_BS_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ETMX_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ETMX_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([],[])
./ISI_ETMY_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ETMY_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([],[])
./ISI_HAM2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], ['X', 'Y', 'Z', 'RX', 'RY', 'RZ'])
./ISI_HAM4.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], ['X', 'Y', 'Z', 'RX', 'RY', 'RZ'])
./ISI_HAM5.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], ['X', 'Y', 'Z', 'RX', 'RY', 'RZ'])
./ISI_HAM6.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], ['X', 'Y', 'Z', 'RX', 'RY', 'RZ'])
./ISI_ITMX_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ITMX_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([],[])
./ISI_ITMY_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ITMY_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([],[])
So it seems that we only restore the bias on HAMs 2, 4, 5, and 6. Doing the same grep for L1 shows all empty lists, if the version we have is up to date.
After talking to Jim, he suggests that we change the "default" value to an empty list, to avoid any possible future mishaps. Just to reiterate though, we are currently not restoring any biases on the BSCs, only on HAMs 2, 4, 5, and 6.
TJ's analysis is correct: the default is what's defined in the isiguardianlib/isolation/CONST.py, which currently specifies that the RZ cart bias should be restored during the second phase of the isolation.
You can change the default in CONST.py, but any change has to be well coordinated with LLO, since that configuration is common for them as well.
TJ missed HAM3 in the list of platforms restoring biases. The grep missed it most likely because some H1 files are in the common area rather than the H1 directory.
David.M, Jenne.D
This is a follow-up post to https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27721. The issue that I reported on Tuesday turned out to be a minor problem that I resolved by double-checking the electronics in the CER. We are now obtaining data from all six L4Cs currently in the LVEA. This allowed us to compare the three seismometers grouped together on the linoleum surface to the three positioned on the concrete. There is only a very small difference, which shouldn't cause any problems for us; I've attached two plots showing this. When looking at the plots, note that L4C channels 1, 2, and 3 are positioned on the linoleum, whereas channels 4, 5, and 7 are positioned on the concrete.
The first plot (L4Cs_STS.png) shows all six uncalibrated L4C signals overlapped, along with the calibrated STS-2 signal in black (this is only relevant for the shape; the difference in calibration means the magnitude of the STS-2 signal is very different from the L4Cs).
The second plot (coherence_TFs2.png) shows, on top, the coherence between L4C2 (on linoleum) and the STS-2 (in red) as well as between L4C7 (on concrete) and the STS-2 (in blue). Both have very good coherence in our frequency band of interest (up to ~30 Hz). The bottom plot shows two transfer functions between L4Cs on the same surface (red: both on linoleum; black: both on concrete) as well as two transfer functions between L4Cs on opposite surfaces (yellow and blue). The pairings on the same surface are slightly better matched, but all four transfer functions are quite well matched and flat.
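For completeness, here is a hedged sketch of how the coherence and transfer-function comparison above could be computed from the time series; the sample rate, durations, and the random stand-in data are placeholders, not the real channels.
import numpy as np
from scipy import signal
fs = 256.0                                              # Hz, assumed sample rate
l4c2, l4c7, sts = np.random.randn(3, int(600 * fs))     # stand-in 10-minute time series
f, coh = signal.coherence(l4c2, sts, fs=fs, nperseg=int(64 * fs))
f, Pxy = signal.csd(l4c2, l4c7, fs=fs, nperseg=int(64 * fs))
f, Pxx = signal.welch(l4c2, fs=fs, nperseg=int(64 * fs))
tf = Pxy / Pxx                                          # transfer-function estimate, l4c2 -> l4c7
band = (f >= 0.1) & (f <= 30.0)                         # band of interest, up to ~30 Hz
print('median coherence in band:', np.median(coh[band]))
print('median |TF| in band:', np.median(np.abs(tf[band])))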
We should be able to position the L4Cs on top of the linoleum coating without having to cut any holes in it.
EDIT: I fixed the first plot, which I had accidentally made with a linear frequency axis.
Raised CP5 Dewar pressure (3/4 turn at pressure regulator + 1/4 turn yesterday). Will take days to read a pressure response. Current pressure is 17.5 psi in tank and 15 psi at exhaust. Also filled CP5 to 100% with 1/2 turn open on LLCV bypass valve.
CP5 "setting % open" has been shifting upward. The vacuum team is working to try to get it under control. Last week such setting was around 88% now is oscillating around 100%.
Note to operators if "cryopump level % Full" falls below 88% it will be very hard for the current system to bring it up to nominal (92%), if it does reach that point, feel free to call a member of the vacuum team, the cryopump will need manual filling.
Trend data.
I am also monitoring from home - I don't think that there are "operators" working the weekends(?)
It should be noted that the power incident on the reflected photodiode (RPD) read ~8.4 V and the diagnostic breadboard would like a minimum of 9 V, so all scans involving the front end laser will have a "low power" message associated with them. The front end laser relative power noise looks good: better than the reference trace in places, about the same in others. The frequency noise measurement does not look quite right below 20 Hz, or between 300 and 400 Hz. This might be a consequence of not having enough power, but I suspect not. Noise is higher than the reference trace for frequencies greater than 1 kHz. During the beam pointing measurement, the diagnostic breadboard pre-modecleaner dropped out of lock for a split second or two; the measurement does not appear to be affected, though, and looks good. During the mode scan measurement, the error message concerning the Gouy phase appeared, and every now and then the pre-alignment flickers from on to off. The mode scan is higher than the reference traces for all peaks but reports the higher order mode content to be 5.7%. Note that the alignment of the front end laser beam to the diagnostic breadboard is still a work in progress.
Apologies for the after-the-fact posting: Here are DBB scans from 6/16. Note that files marked .001 are LO power and files marked .002 are HI power.
Unbeknownst to me, Ed had already performed diagnostic breadboard scans of the laser. However, since I had already started, I might as well finish. First off, the high power oscillator. No problems were observed with the scans except during the mode scan, when the following error message appeared repeatedly between roughly 45% and 60% of the way through the 10-minute scan:
No significant mode found! The roundtrip gouy phase might be wrong.
This is the first time I've seen such an error message. One could also ask about the first 44% and the last 39% of the scan; not sure what happened there. The relative power noise measurement looks okay but is a factor of ~2 higher than the reference traces across the board. The frequency noise scan shows a problem for frequencies lower than ~60 Hz, some kind of oscillation or settling-time issue; given that, comparison with the reference traces probably does not make sense. The beam pointing measurement looks out of whack to me. Not sure why, but 12 orders of magnitude better than the reference traces is obviously too good to be true. The mode scan reports 6.6% in higher order modes, though there is a note of caution about this value because of the error messages that appeared during the scan. Measurement of the inner loop power stabilisation: the out-of-loop relative power noise is ~10 times higher for frequencies lower than 40 Hz and ~4 times higher at 100 Hz, although at this point I do not recall if the reference traces are for the high power oscillator or the front end laser (probably the latter, methinks).
Since the reference traces are different in the plots, the relevant reference trace is the one for each laser.
Evan, Stefan
We didn't get past the ASC engaging tonight.
The one thing that clearly improved stability was raising the MC2 cross-over by 4 dB; before that, even the slightest disturbance kicked the cross-over into an oscillation.
After that fix, we at least didn't have any more immediate losses.
However, the current ASC engaging logic doesn't work, and it doesn't seem to make much sense either. We have convergence checkers, but they are currently used before all loops are closed. Thus we wait for one loop to drag us off in some direction, only for the next loop to come on and load up the previous loop again.
What changed? The MC crossover only depends on the actuator strength ratio between MCF and MCL.
Concerned about locking issues, I looked into alignment changes, comparing before the power outage to after, and a few things stand out.
IMC MC1-3 alignment changes:
mc1 p | 1.3 | urad |
mc1 y | -24 | urad |
mc2 p | 3.5 | urad |
mc2 y | -8 | urad |
mc3 p | 18 | urad |
mc3 y | -22 | urad |
changes of 8-24 urad
Total angular change of each chamber:
HAM2 | 71.4 | nrad |
HAM3 | 6.8 | nrad |
Linear change at 16.4 m (IMC length):
HAM2 | 1.2 | um |
HAM3 | 0.1 | um |
change due to ISI is in um range
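A quick check of the angle-to-displacement conversion used above (displacement is roughly the chamber tilt times the 16.4 m IMC lever arm):
L_imc = 16.4   # m, IMC lever arm
for chamber, theta_nrad in [('HAM2', 71.4), ('HAM3', 6.8)]:
    dx_um = theta_nrad * 1e-9 * L_imc * 1e6
    print(chamber, round(dx_um, 2), 'um')   # -> HAM2 1.17 um, HAM3 0.11 um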
change in IM4 Trans
im4 t p | 0.02 | normalized QPD |
im4 t y | 0.079 | normalized QPD |
change in IM1 alignment required to account for change in IM4 Trans
| IM1 / IM4 Trans conversion (urad per normalized QPD) | IM4 Trans diff (normalized QPD) | calculated IM1 change (urad)
pitch | 287.5 | 0.02 | 5.75
yaw | 147.4 | 0.079 | 11.64
IM1 would have to move 11.64 urad in yaw to account for the change on IM4 Trans.
IM1 hasn't moved 11.64 urad, so the changes on IM4 Trans are coming from somewhere else.
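For reference, the calculated IM1 changes in the table above are just the conversion times the observed IM4 Trans change:
conversions = {'pitch': 287.5, 'yaw': 147.4}     # urad per normalized QPD count
im4_diff = {'pitch': 0.02, 'yaw': 0.079}         # normalized QPD
for dof in ('pitch', 'yaw'):
    print(dof, round(conversions[dof] * im4_diff[dof], 2), 'urad')
    # -> pitch 5.75 urad, yaw 11.64 urad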
change in WFSA and WFSB yaw (before values are approximate):
| before (approx) | after | diff
WFSA yaw | -0.83 | -0.88 | -0.05
WFSB yaw | -0.83 | -0.88 | -0.05
the diff value is not so much the problem
both WFSA and WFSB are close to or already railed in yaw, at -0.9 on a +/-1.0 scale
It seems like these things have been drifting over a long time; see attached. Note that IMC WFS PIT and YAW signals are physically YAW and PIT on IOT2L, but PIT and YAW in chamber.
In the attached 180-day trend, you can see that the WFS DC pointing was pretty well centered in YAW (PIT on the table) and about 0.25 in PIT (YAW on the table) until about 10 days after O1 concluded. There have been 5 distinct jumps since then, and each step made the YAW (PIT on the table) centering worse.
It could be the loose periscope mirror on MC REFL periscope in HAM2 (alog 15650) but it's hard to say for sure.
Anyway, this means that the beam could be off-center on the MC length diode. If that's the case, this should be fixed on the table, not with the MC WFS picomotors.