LLCV bypass valve 1/2 turn open, and the exhaust bypass valve fully open.
Flow was noted after 52 seconds; the LLCV valve was closed, and 3 minutes later the exhaust bypass valve was closed.
After Sheila's log about the BS causing locklosses, I wanted to check a few things in the guardian code, and I found what looks like a conflict. In /opt/rtcds/userapps/release/isi/h1/guardian/ISI_BS_ST2.py, line 4
ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([ ] , [ ])
the empty brackets at the end indicate the ISI is not set to restore any Cartesian locations. This is supposed to be the code that sets which DOFs are restored when the ISI re-isolates, and it is particular to the beamsplitter.
However, in /opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/isolation/CONST.py, lines 102 (for BSC ST1) and 122 (for BSC ST2) both read
CART_BIAS_DOF_LISTS = ([], ['RZ']),
CONST.py is common code, and I would interpret this to mean that all BSCs are restoring a stored RZ location. This isn't a problem for the other BSCs, because they never change state, but if we turn on the ST2 loops for the BS, this code could force the ISI to rotate some after the loops come on. The attached trend shows the last ten days of the BS ST2 RZ setpoint and locationmon, and they track each other, so I don't think the BS is returning to some old RZ location, but someone who understands this code should explain it to me.
The CART_BIAS_DOF_LISTS in the common code could be called the "default" setting, but it can be overridden by the chamber node's local file (as is done here). The two empty lists in the local file show that no Cartesian bias degrees of freedom are being restored.
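To make the override pattern concrete, here is how I read it, as a minimal Python sketch (illustrative only, not the actual isiguardianlib code; everything except the CART_BIAS_DOF_LISTS values is made up):

# Common defaults, analogous to isiguardianlib/isolation/CONST.py.
# First list = DOFs restored in the first isolation phase,
# second list = DOFs restored in the second phase.
COMMON_ISOLATION_CONSTANTS = {
    'CART_BIAS_DOF_LISTS': ([], ['RZ']),   # default: restore RZ in phase 2
}

# A chamber node file starts from the common defaults...
ISOLATION_CONSTANTS = dict(COMMON_ISOLATION_CONSTANTS)

# ...and can override individual entries, as ISI_BS_ST2.py does:
ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])   # restore nothing

def restored_dofs(constants, phase):
    """Cartesian bias DOFs restored in the given isolation phase (0 or 1)."""
    return constants['CART_BIAS_DOF_LISTS'][phase]

print(restored_dofs(COMMON_ISOLATION_CONSTANTS, 1))   # ['RZ']  (common default)
print(restored_dofs(ISOLATION_CONSTANTS, 1))          # []      (BS ST2 override)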
A quick "grep CART_BIAS ./*.py" in (userapps)/isi/h1/guardian/ yields:
./ISI_BS_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_BS_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ETMX_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ETMX_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([],[])
./ISI_ETMY_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ETMY_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([],[])
./ISI_HAM2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], ['X', 'Y', 'Z', 'RX', 'RY', 'RZ'])
./ISI_HAM4.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], ['X', 'Y', 'Z', 'RX', 'RY', 'RZ'])
./ISI_HAM5.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], ['X', 'Y', 'Z', 'RX', 'RY', 'RZ'])
./ISI_HAM6.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], ['X', 'Y', 'Z', 'RX', 'RY', 'RZ'])
./ISI_ITMX_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ITMX_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([],[])
./ISI_ITMY_ST1.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([], [])
./ISI_ITMY_ST2.py:ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = ([],[])
So it seems that we only restore the bias on HAMs 2, 4, 5, and 6. Doing the same for L1 shows all empty lists, if the version we have is up to date.
After talking to Jim, his suggestion is that we change the "default" value to an empty list, to avoid any possible future mishaps. Just to reiterate though, we are currently not restoring any biases on any of the BSCs, only on HAMs 2, 4, 5, and 6.
TJ's analysis is correct: the default is what's defined in the isiguardianlib/isolation/CONST.py, which currently specifies that the RZ cart bias should be restored during the second phase of the isolation.
You can change the default in CONST.py, but any change has to be well coordinated with LLO, since that configuration is common to them as well.
TJ missed HAM3 in the list of platforms restoring biases. The grep likely missed it because some H1 files are in the common area rather than the H1 directory.
David.M, Jenne.D
This is a follow-up post to https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27721. The issue that I reported on Tuesday turned out to be a minor problem that I resolved by double checking the electronics in the CER. We are now obtaining data from all six L4Cs currently in the LVEA. This allowed us to compare the three seismometers grouped together on the linoleum surface to the three positioned on the concrete. There appears to be only a very small difference, which shouldn't cause any problems for us; I've attached two plots showing this. When looking at the plots it should be noted that L4C channels 1, 2, and 3 are positioned on the linoleum, whereas L4C channels 4, 5, and 7 are positioned on the concrete.
The first plot (L4Cs_STS.png) shows all 6 uncalibrated L4C signals overlapped, along with the calibrated STS-2 signal in black (only the shape is relevant here; the difference in calibration means the magnitude of the STS-2 signal is very different from the L4Cs).
The second plot (coherence_TFs2.png) shows, on top, the coherence between L4C2 (positioned on linoleum) and the STS-2 (in red), as well as between L4C7 (positioned on concrete) and the STS-2 (in blue). Both have very nice coherence in our frequency band of interest (up to ~30 Hz). The bottom plot shows two transfer functions between L4Cs on the same surface (red: both on linoleum; black: both on concrete) as well as two transfer functions between L4Cs on opposite surfaces (yellow and blue). The pairings on the same surface are slightly better matched, but all four transfer functions are quite nicely matched and flat.
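For reference, the coherence and transfer function estimates in that plot can be computed along these lines (a rough sketch with synthetic placeholder data and an assumed sample rate, not the actual H1 channels or our analysis script):

import numpy as np
from scipy import signal

fs = 256.0                      # assumed sample rate [Hz]
t = np.arange(0, 600, 1 / fs)   # 10 minutes of data

# Placeholder "ground motion" plus independent sensor noise for each channel.
rng = np.random.default_rng(0)
ground = rng.standard_normal(t.size)
l4c_lino = ground + 0.05 * rng.standard_normal(t.size)   # L4C on linoleum
l4c_conc = ground + 0.05 * rng.standard_normal(t.size)   # L4C on concrete
sts2 = ground + 0.02 * rng.standard_normal(t.size)       # reference STS-2

nperseg = 4096

# Coherence between one L4C and the STS-2 (top panel of the plot).
f, coh = signal.coherence(l4c_lino, sts2, fs=fs, nperseg=nperseg)

# Transfer function estimate between two L4Cs (bottom panel):
# H(f) = P_xy / P_xx, with x as the reference channel.
f, p_xx = signal.welch(l4c_lino, fs=fs, nperseg=nperseg)
_, p_xy = signal.csd(l4c_lino, l4c_conc, fs=fs, nperseg=nperseg)
tf = p_xy / p_xx

band = (f > 0.1) & (f < 30)   # frequency band of interest, up to ~30 Hz
print("median coherence in band:", np.median(coh[band]))
print("median |TF| in band:", np.median(np.abs(tf[band])))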
We should be able to position the L4Cs on top of the linoleum coating without having to cut any holes in it.
EDIT: I fixed the first plot, which I had accidentally given a linear frequency axis.
Raised CP5 Dewar pressure (3/4 turn at pressure regulator + 1/4 turn yesterday). Will take days to read a pressure response. Current pressure is 17.5 psi in tank and 15 psi at exhaust. Also filled CP5 to 100% with 1/2 turn open on LLCV bypass valve.
CP5 "setting % open" has been shifting upward. The vacuum team is working to try to get it under control. Last week such setting was around 88% now is oscillating around 100%.
Note to operators: if the "cryopump level % Full" falls below 88%, it will be very hard for the current system to bring it back up to nominal (92%). If it does reach that point, feel free to call a member of the vacuum team; the cryopump will need manual filling.
Trend data.
I am also monitoring from home - I don't think that there are "operators" working the weekends(?)
It should be noted that the power incident on the reflected photodiode (RPD) was ~8.4 V, and the diagnostic breadboard would like to have a minimum of 9 V, so all scans involving the front end laser will have a "low power" message associated with them. The front end laser relative power noise looks good: better than the reference trace in places, about the same in others. The frequency noise measurement does not look quite right below 20 Hz or between 300 and 400 Hz. This might be a consequence of not having enough power, but I suspect not. Noise is higher than the reference trace for frequencies greater than 1 kHz. During the beam pointing measurement, the diagnostic breadboard pre-modecleaner dropped out of lock for a split second or two; the measurement does not appear to be affected, though, and looks good. During the mode scan measurement, the error message concerning the Gouy phase appeared. Every now and then the pre-alignment flickers from on to off. The mode scan is higher than the reference traces for all peaks but reports the higher order mode content to be 5.7%. Note that the alignment of the front end laser beam to the diagnostic breadboard is still a work in progress.
Apologies for the after-the-fact posting: Here are DBB scans from 6/16. Note that files marked .001 are LO power and files marked .002 are HI power.
Unbeknownst to me, Ed had already performed diagnostic breadboard scans of the laser. However, since I had already started, I figured I might as well finish. First off, the high power oscillator. No problems were observed with the scans except during the mode scan, when the following error messages appeared:

[======================= ] 45% 5m29s / 10m00s No significant mode found! The roundtrip gouy phase might be wrong.
[======================== ] 48% 5m15s / 10m00s No significant mode found! The roundtrip gouy phase might be wrong.
[======================== ] 48% 5m13s / 10m00s No significant mode found! The roundtrip gouy phase might be wrong.
[======================== ] 48% 5m09s / 10m00s No significant mode found! The roundtrip gouy phase might be wrong.
[========================= ] 50% 5m01s / 10m00s No significant mode found! The roundtrip gouy phase might be wrong.
[========================= ] 50% 4m57s / 10m00s No significant mode found! The roundtrip gouy phase might be wrong.
[============================== ] 60% 4m01s / 10m00s No significant mode found! The roundtrip gouy phase might be wrong.
[============================== ] 60% 3m59s / 10m00s No significant mode found! The roundtrip gouy phase might be wrong.

This is the first time I've seen such an error message. One could also ask what happened during the first 44% of the time and the last 39% of the time; not sure what happened there. The relative power noise measurement looks okay but is a factor of ~2 higher than the reference traces across the board. The frequency noise scan shows a problem for frequencies lower than ~60 Hz, some kind of oscillation or settling time issue; with that, comparison with the reference traces probably does not make sense. The beam pointing measurement looks out of whack to me. Not sure why, but 12 orders of magnitude better than the reference traces is obviously too good to be true. The mode scan says 6.6% in higher order modes, though there is a note of caution about this value because of the error messages that appeared during the scan. Measurement of the inner loop power stabilisation: the out of loop relative power noise is ~10 times higher for frequencies lower than 40 Hz and ~4 times higher at 100 Hz, although at this point I do not recall if the reference traces are for the high power oscillator or the front end laser (probably the latter, methinks).
The reference traces are different in the different plots; the relevant reference trace in each plot is the one for the laser being measured.
SEI - Krishna and co. at EY recentering BRS. Wind screen work: ongoing
SUS - Nothing to report. BS Side OSEM may still be an issue to be mitigated.
VAC - Gerardo and Chandra may be going to EX. Work permits need to be closed.
FAC - see SEI (windscreen)
CDS - Electronics: nothing until Tues. Computers: nothing
Commish: Ongoing ASC woes.
PEM - Robert will be traveling to electronics bays at both ends during the day.
Outreach: (not at meeting) Two groups of 5th graders in for a tour between 12:30 and 13:00 PDT
Evan, Stefan
We didn't get past the ASC engaging tonight.
The one thing that clearly improved stability was moving the MC2 cross-over up by 4 dB - before that, even the slightest disturbance kicked the x-over into an oscillation.
After that fix, we at least didn't have any more immediate losses.
However, the current ASC engaging logic doesn't work, and it doesn't seem to make much sense either. We have convergence checkers, but they are currently used before all loops are closed. Thus we wait for one loop to drag us off in some direction, only for the next loop to come on and load up the previous loop again.
What changed? The MC crossover only depends on the actuator strength ratio between MFC and MCL.
[Sheila, Jenne, Keita]
While investigating the locklosses that keep occurring as we try to engage the PRC2 loops, we noticed that the centering loop for POPX was railed. We turned off the loop, recentered the beam on the diode, and were then able to engage the centering loop successfully. This was clearly necessary, but it didn't solve the lockloss problems.
The violin mode monitor is now monitoring each fundamental mode individually, using the same band pass filters (and notch filters, if there are any) from the SUS damping filters. The RMS value of each mode should be scaled to the order of magnitude of the nominal observing noise floor. I haven't had a chance to check if that's true for every mode, so there will probably be some changes to the gains in the future. At least it's usable and trustworthy enough for damping work.
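For illustration, the per-mode monitor amounts to something like the following (a minimal sketch with an assumed sample rate and a placeholder mode frequency; the real monitor reuses the band pass and notch filters from the SUS damping filter banks and then applies a gain to bring the RMS near the observing noise floor):

import numpy as np
from scipy import signal

fs = 16384.0          # assumed sample rate [Hz]
f0 = 500.0            # placeholder violin-mode fundamental frequency [Hz]
half_width = 0.5      # band-pass half width [Hz]

# Band-pass around the mode, analogous to the filter copied from SUS damping.
sos = signal.butter(4, [f0 - half_width, f0 + half_width],
                    btype='bandpass', fs=fs, output='sos')

def mode_rms(timeseries):
    """Return the RMS of the band-passed time series for this mode."""
    band_limited = signal.sosfiltfilt(sos, timeseries)
    return np.sqrt(np.mean(band_limited**2))

# Example with synthetic data: a noise floor plus a ringing violin mode.
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
data = 1e-20 * rng.standard_normal(t.size) + 1e-19 * np.sin(2 * np.pi * f0 * t)
print("mode RMS:", mode_rms(data))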
Dave (on phone), Nutsinee
Concerned about locking issues, I looked into alignment changes, comparing before the power outage to after it, and a few things stand out.
IMC MC1-3 alignment changes:
mc1 p | 1.3 | urad |
mc1 y | -24 | urad |
mc2 p | 3.5 | urad |
mc2 y | -8 | urad |
mc3 p | 18 | urad |
mc3 y | -22 | urad |
changes of 8-24 urad
Total angular change of each chamber:
HAM2 | 71.4 | nrad |
HAM3 | 6.8 | nrad |
Linear change at 16.4m (IMC length)
HAM2 | 1.2 | um |
HAM3 | 0.1 | um |
change due to ISI is in um range
change in IM4 Trans
im4 t p | 0.02 | normalized QPD |
im4 t y | 0.079 | normalized QPD |
change in IM1 alignment required to account for change in IM4 Trans
| IM1 / IM4 Trans conversion | IM4 Trans diff | calculated IM1 change |
pitch | 287.5 urad / normalized QPD | 0.02 | 5.75 urad |
yaw | 147.3988439 urad / normalized QPD | 0.079 | 11.64 urad |
IM1 would have to move 11.64 urad in yaw to account for the change on IM4 Trans.
IM1 hasn't moved 11.64 urad, so the changes on IM4 Trans are coming from somewhere else.
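For completeness, the arithmetic behind the table above (the conversion factors are the ones quoted there):

# Worked arithmetic: the IM1 angle change that would be needed to explain the
# observed IM4 Trans shift, using the calibration factors quoted in the table.
conversion = {'pitch': 287.5, 'yaw': 147.3988439}   # urad of IM1 per normalized QPD count
im4_trans_diff = {'pitch': 0.02, 'yaw': 0.079}      # change in normalized IM4 Trans QPD

for dof in ('pitch', 'yaw'):
    implied_im1 = conversion[dof] * im4_trans_diff[dof]
    print(f"{dof}: implied IM1 change = {implied_im1:.2f} urad")

# pitch: implied IM1 change = 5.75 urad
# yaw:   implied IM1 change = 11.64 urad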
change in WFSA and WFSB yaw (before values are approximate)
| before | after | diff |
WFSA yaw | -0.83 | -0.88 | -0.05 |
WFSB yaw | -0.83 | -0.88 | -0.05 |
diff value is not so much the problem
both WFSA and WFSB are close to or already railed in yaw, at -0.9 on a +/-1.0 scale
Seems like these things have been drifting over a long, long time; see attached. Note that the IMC WFS PIT and YAW signals are physically YAW and PIT on IOT2L, but PIT and YAW in chamber.
In the attached 180-day trend, you can see that the WFS DC pointing was pretty well centered in YAW (PIT on table) and about 0.25 in PIT (YAW on table) until about 10 days after O1 concluded. There have been 5 distinct jumps since then, and each step made the YAW (PIT on the table) centering worse.
It could be the loose periscope mirror on MC REFL periscope in HAM2 (alog 15650) but it's hard to say for sure.
Anyway, this means that the beam could be off-center on the MC length diode. If that's the case, this should be fixed on the table, not by the MC WFS picomotors.
Peter, Sheila, Evan
We have been having trouble keeping the IMC locked when powering up (without the interferometer).
We found that the UGF of the loop was something like 110 kHz. During O1, we ran with a UGF that was more like 50 kHz. Recall also that the IMC loop at one point had some questionable resonance features above 100 kHz, so it is probably in our interest to keep the UGF on the low side (though we should confirm this with a high-frequency OLTF measurement).
We turned down the loop gain by 4 dB, giving a UGF that is more like 60 kHz. We were able to power up from 2 W to 23 W without lockloss.
Attached is an OLTF measurement after turning down the gain. As usual, the last point is garbage.
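As a sanity check on the numbers, assuming the open loop gain falls roughly as 1/f near the UGF (an assumption on my part, not something we measured), a 4 dB reduction should move the UGF down by a factor of about 0.63, i.e. from ~110 kHz to ~70 kHz, in the right ballpark of the ~60 kHz we see:

# Quick arithmetic check, assuming an approximately 1/f open-loop slope near
# the unity-gain frequency (an assumption, not a measured slope).
gain_change_db = -4.0
gain_factor = 10 ** (gain_change_db / 20)    # ~0.63

ugf_before = 110e3                           # Hz, measured before the change
ugf_predicted = ugf_before * gain_factor     # ~69 kHz for a pure 1/f slope

print(f"gain factor: {gain_factor:.2f}")
print(f"predicted UGF: {ugf_predicted / 1e3:.0f} kHz (measured: ~60 kHz)")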
To keep the IMC length crossover stable in full lock, Stefan and I increased the gain by 7 dB. This brought the crossover from 16 Hz (nearly unstable) to 40 Hz (stable).
In fact, the right way to compensate for the decreased electronic IMC gain is to also decrease the AO gain (not the MCL gain). Now both are decreased by 4 dB.