Keita pointed out that the ISS 3rd loop 560 was overloaded, and was set to low noise mode when it should have been in high dynamic range mode. I went to check it out and indeed, it saturates even without any input or with a terminator.
Fil didn't have a spare working 560, but loaned me an SR650. We are using an AC-coupled low pass at 30 Hz, gain of 0 dB, positive polarity, to imitate the settings Keita used in alog 27895.
We don't know how long the 560 has been overloaded, but this is probably the reason why we have had so much difficulty with the CSOFT instability in the last week.
On the first lock attempt after Sheila's fix, we were able to go to 50 W without any instability. No fancy offsets or anything, so the PRC gain still dropped (to ~24), but we didn't have any trouble acquiring or holding the lock.
Experiment on CP4 today:
- Started at 90% full
- Filled to well over 100% at 45% open on LLCV
- At 12:10pm local, CP reached 100% full (took ~3 hr 40 min)
- I sat at MY for about two hours with nothing happening (at this flow rate it takes a lot longer than 35 min. to overfill until LN2 comes out)
- Finally I began to ramp the LLCV 5%, and then 10%, every 20 min. or so
- At 72% open the signal from the flow meter was very noisy
- After a few min. at 72% the exhaust pressure started to rapidly rise, so I set to PI-mode with min. allowance set to 39% (love the new code! see the sketch below)
- The pressure was still high and unstable, and because I didn't want LN2 to spew onto the flow meter I lowered the PI-code min. to 37%
- The exhaust pressure has slowly ramped back down to nominal
- Leaving in PI-mode (at 37% min) over the weekend and will periodically monitor pump level and pressure
There should be NO alarms for the exhaust pressure. The pump level will alarm for a few hours until it falls below 98% full (it is well over 100% right now).
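For reference, here is a rough Python sketch of what "PI-mode with a minimum allowance" could look like. This is only a guess at the logic; the process variable, gains, setpoint, and limits are all made-up placeholders, not the real CP4 control code or parameters.

```python
# Hypothetical sketch of "PI-mode with a minimum allowance": a PI loop
# driving the LLCV opening, clamped so it can never close below the
# configured minimum. All numbers here are placeholders.
def make_pi_controller(kp, ki, setpoint, min_open, max_open=100.0, dt=1.0):
    integral = 0.0
    def step(measurement):
        nonlocal integral
        error = setpoint - measurement
        integral += error * dt
        valve = kp * error + ki * integral
        # Clamp to the allowed LLCV range; the floor is the "min. allowance"
        return max(min_open, min(max_open, valve))
    return step

# Lowering the minimum allowance from 39% to 37% simply lowers the floor
# the controller is allowed to drive the valve to.
llcv = make_pi_controller(kp=2.0, ki=0.1, setpoint=92.0, min_open=37.0)
```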
It appears that something goes wrong when zooming in, which creates random lines across the plots.
Pyplot does some strange things with the data when you zoom in, sometimes. Maybe this is a result of data gaps being handled poorly by pyplot? I've been able to get these artifacts to go away by resetting the plots and zooming in slightly less, but Patrick and I weren't able to get these particular ones to clean up. I'll see if I can make this a little nicer on Monday.
It looks like using NaN to fill in the gaps in the data was not the right thing to do. Filling with POS_INF (positive infinity) seems to eliminate the glitching in the plots. I've also set some hard-coded Y-axis limits on the pitch and yaw plots and scaled the sum plots to the max in each dataset, so the plots should start out closer to a finished product.
I've also added the HAM2 oplev, which doesn't have any DQ channels, so I've used the OUT16 channel. Shouldn't matter much, since the plots are of m-trends.
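For illustration, here is a minimal sketch of the gap handling described above, assuming the plots are made with numpy/matplotlib. The data, channel, and limits below are made up; only the fill-gaps-with-infinity approach and the fixed Y-limits reflect the change.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minute-trend style data with a gap; times and values are made up.
t = np.arange(0, 60, 1.0)             # minutes
pitch = np.sin(2 * np.pi * t / 30.0)  # placeholder oplev pitch trend

# Fill the gap with +inf rather than NaN, per the fix described above,
# so pyplot does not draw stray segments across the gap when zoomed in.
pitch[20:25] = np.inf

fig, ax = plt.subplots()
ax.plot(t, pitch)
ax.set_ylim(-30, 30)   # hard-coded Y-limits for the pitch/yaw plots
ax.set_xlabel('Time [min]')
ax.set_ylabel('Oplev pitch [urad]')
fig.savefig('oplev_pitch_trend.png')
```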
3pm local: 18 sec to overfill CP3 with 1/2 turn open on the LLCV bypass valve. Lowered CP3 LLCV from 20% to 19%.
Attached are trends of the last month+ of relative humidity and temperature data for the 2 desiccant cabinets in the VPW. One is used for 3IFO storage. The previous trends are posted in alog 28526. The last month of data shows normal trends with some bumps when a cabinet was opened for some time and the RH rose briefly. Cabinet temperature shows the usual daily fluctuation of ~10 deg F.
During this morning's PSL ISS code change, I tested whether restarting the h1pslpmc model, with the FLOW IIR filter replaced with an EPICS part, still closes the shutter, and indeed it did.
Looking at full data DAQ trends at the time of model startup shows what is going on. The FLOW ADC data is downsampled from the 64kHz IOP model to the 32kHz h1psliss model. The downsampling filter starts the data at zero and takes several cycles to settle to the digitized value. As an example, the plot below shows an LSC channel at the time we restarted h1lsc yesterday. In the PSL PMC case, since any instantaneous FLOW value outside of the acceptable range causes a shutter close trigger, this will always close the shutter on model startup.
One solution we can think of is to add code in the FLOW_ERR C-file which triggers FLOW_OK to become False only if the flow values are out of range for several consecutive cycles. For a decision on whether the issue is of enough concern to warrant this code change, we'll defer to the PSL team.
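For concreteness, a sketch of the proposed trigger logic. The real change would live in the FLOW_ERR C-file on the front end; this is just a Python illustration, and the range limits and cycle count are placeholders, not the real PSL values.

```python
# Only declare FLOW_OK False after several consecutive out-of-range
# samples, so the downsampling filter's startup transient cannot trip
# the shutter. Limits and cycle count below are hypothetical.
FLOW_MIN, FLOW_MAX = 0.5, 4.5   # hypothetical acceptable flow range
TRIP_CYCLES = 16                # consecutive bad cycles required to trip

bad_cycles = 0

def flow_ok(flow):
    """Return False only after TRIP_CYCLES consecutive out-of-range samples."""
    global bad_cycles
    if FLOW_MIN <= flow <= FLOW_MAX:
        bad_cycles = 0
        return True
    bad_cycles += 1
    return bad_cycles < TRIP_CYCLES
```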
This morning the RMS watchdogs for all of the test masses have been tripping, usually during locklosses at higher ISC_LOCK states. Resetting these requires opening the tripped Quad overview, and setting H1:SUS-*TM*_BIO_L2_UL_RMSRESET to zero, then back to 1. Since I've had all 4 trip at once, and Verbal_Alarms yells each one at you multiple times, I made an alias in my .bashrc profile to do this. Resetting all of the RMS WDs doesn't seem to affect anything or disrupt lock, so this alias can be used any time one of these WDs trips. Entering rms_wd in a terminal now sets the WD bit to zero, sleeps for a second, then sets it to 1.
Operators can easily add this to their .bashrc file from a terminal by copy/pasting:
echo 'alias rms_wd="caput H1:SUS-ITMX_BIO_L2_UL_RMSRESET 0 && caput H1:SUS-ITMY_BIO_L2_UL_RMSRESET 0 && caput H1:SUS-ETMY_BIO_L2_UL_RMSRESET 0 && caput H1:SUS-ETMX_BIO_L2_UL_RMSRESET 0 && sleep 1 && caput H1:SUS-ITMX_BIO_L2_UL_RMSRESET 1 && caput H1:SUS-ITMY_BIO_L2_UL_RMSRESET 1 && caput H1:SUS-ETMY_BIO_L2_UL_RMSRESET 1 && caput H1:SUS-ETMX_BIO_L2_UL_RMSRESET 1"' >> ~/.bashrc
You then need to open a new terminal (or run "source ~/.bashrc" in the current one) to pick up the new alias.
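For anyone who prefers Python, an equivalent reset can be done with pyepics, assuming that package is available on the workstation. This is just a sketch of the same toggle, not a replacement for the alias.

```python
# Python equivalent of the rms_wd alias, using pyepics.
import time
from epics import caput

RMS_RESET_PVS = ['H1:SUS-%s_BIO_L2_UL_RMSRESET' % optic
                 for optic in ('ITMX', 'ITMY', 'ETMX', 'ETMY')]

def rms_wd_reset():
    """Toggle the L2 RMS watchdog reset bit on all four test masses."""
    for pv in RMS_RESET_PVS:
        caput(pv, 0)
    time.sleep(1)
    for pv in RMS_RESET_PVS:
        caput(pv, 1)

if __name__ == '__main__':
    rms_wd_reset()
```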
Terra, Sheila
After recovery tonight we did a few things:
We have noticed that since we are going to higher powers, things are misaligned after a lockloss and become realigned as they cool off. We might want to account for this someday to save time.
Log was meant to be posted ~3:30 this morning.
WP 6122 h1psliss model change (and subsequent DAQ restart)
Keita, Peter, Patrick, Dave:
We installed Keita's PSL ISS model change on h1psl0. I also restarted the PMC model to see if it closed the shutter (it did), so now I have DAQ data on this problem.
We restarted the DAQ for the new ISS INI file. This also applied the FMCS changes (addition of Relative Humidity channels).
This will generate an alarm on pump level. Filling at 45% open on the LLCV, past 100% full, until I see a change in the flow meter curve or until ~20 minutes go by (it takes around 30 min. for LN2 to spew out the exhaust).
J. Kissel, J. Warner
As Jim brought the instrument up this morning, we were alarmed to see a forest of lines around 20 to 25 Hz. They did not disappear or change height as we went through the lock acquisition sequence or as ASC loops turned on; we noticed them in MICH and PRCL, found a whole bunch of ASC-ADS SDF differences, and I heard Sheila suggest that she might try an LLO scheme for dithering the alignment, so I began to suspect that these lines were intentional. Indeed, after poking around, I found the dither alignment overview screen and found several oscillators pushing out excitations at 19.1, 19.7, 20.1, and 20.8 Hz in pitch and 21.3, 21.9, 22.3, and 23.0 Hz in yaw, being sent to PR2, PR3, and the BS. I confirmed that these lines actually make it to the SUS by gathering an ASD of the input control request at the bottom stage of PR2, PR3, and the BS; indeed I see the same features in PR3 and the BS (why not PR2?). *phew!* We'll leave these on, in case the plan was to just gather data with these lines present. See attached ASD of input request to the SUS, a screen cap of the Dither Overview screen, and the corresponding DARM ASD.
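For anyone repeating this check, a sketch of how the ASD can be gathered with gwpy. The channel name and GPS times below are hypothetical placeholders, not the exact ones used here.

```python
from gwpy.timeseries import TimeSeries

# Hypothetical channel and GPS times -- substitute the actual bottom-stage
# input request channel and a time during the lock stretch of interest.
chan = 'H1:SUS-PR3_M3_ISCINF_P_IN1_DQ'
data = TimeSeries.fetch(chan, 1156349443, 1156349743)

# ASD with enough frequency resolution to separate the dither lines
asd = data.asd(fftlength=64, overlap=32)
plot = asd.plot()
plot.gca().set_xlim(15, 30)
plot.savefig('pr3_dither_lines.png')
```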
For the record and/or future study, this lock stretch -- chilling in DC_READOUT at 2W -- lasted ~1.5 hours, from 2016-08-26 16:10:26 UTC to 2016-08-26 17:47:?? UTC. This would be an excellent lock stretch to, for example, pull out the 3501.3 Hz PCALX to DARM transfer function. From DTT alone, I was able to estimate the TF to be:
mag: 2.531e-6 [m/ct]
pha: -40.885 [deg]
re: 1.852e-6 [m/ct]
im: -1.682e-6 [m/ct]
coh: 0.607
BW: 0.0025
ENBW: 0.00292
nAvgs: 35
unc: sqrt((1-coh)/(2*nAvgs*coh)) = 0.096 = 9.6%
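As a quick check, the quoted statistical uncertainty can be reproduced directly from the coherence and number of averages listed above:

```python
from math import sqrt

coh, n_avgs = 0.607, 35
# Coherence-based estimate of the relative transfer function uncertainty
unc = sqrt((1 - coh) / (2 * n_avgs * coh))
print('relative uncertainty = %.3f (%.1f%%)' % (unc, 100 * unc))
# -> relative uncertainty = 0.096 (9.6%)
```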
We had a couple of locklosses when the ISS saturated, with the third loop both on and off. We just increased the ref signal to -1.52 V to move the diffracted power to around 4%. Was some setting incorrect after a reboot this morning? SDF indicated that everything is OK.
The 1st loop's reference offset voltage (H1:PSL-ISS_REFSIGNAL) was not monitored in the SDF system. During O1, because of alignment drift in the PSL and/or interactions with the 2nd loop, the operators would often adjust the diffracted power with this offset. I've now re-monitored the channel, for now, since we don't have any restriction on SDF differences in order to do science. Keita restarted the PSL ISS model again this morning, and we had begun relocking too quickly (!!) after that reboot, so we had to adjust the offset on the fly, a bit too late. This resulted in a close, but not identical, value of -1.602. I also had to restore the "AOM Offset" H1:PSL-ISS_CTRL_OFFSET, which should be 3.85 (it had come up as 2.5). I've accepted both the -1.602 offset voltage and the 3.85 AOM offset in the SDF safe.snap and down.snap for the PSL ISS front end.
Note that the H1:PSL-ISS_CTRL_OFFSET used to be 2.5, but yesterday Keita and I changed it to 3.85. Thank you Jason & Jeff for pointing out that I forgot to write an alog.
We made this adjustment with the ISS first loop off, and set the offset such that the diffracted power was a little over 3%. Before doing this, the diffracted power was 1.7% with the first loop off, and we kept hitting the bottom rail, which would send the first loop into oscillation and cause it to unlock. We were able to close the 1st loop, and everything looked okay. So, 3.85 is the new best value.
With all that said, we've hit the bottom rail of the ISS a few times over the last 3 days with the IFO locked; this changes the power injected into the vacuum and the IFO oscillates a bit, although it usually retains lock. We may need to consider increasing the first loop offset even more, so that we're a teeny bit farther away from that rail.
The Verbal Alarms code was logging to the ops home directory. Prior to the move of this home directory (WP5658), I modified the code to log to a new directory: /ligo/logs/VerbalAlarms. We restarted the program at 14:04 and verified the log files are being written correctly.
These verbal log files actually live one level deeper, in /ligo/logs/VerbalAlarms/Verbal_logs/. The log files for the current month live directly in that folder; at the end of every month, they're moved into dated subfolders, e.g. /ligo/logs/VerbalAlarms/Verbal_logs/2016/7/. The text files themselves are named "verbal_m_dd_yyyy.txt". Unfortunately, these are not committed to a repo where the logs could be viewed off site. Maybe we'll work on that. Happy hunting!
The Verbal logs are now copied over to the web-exported directory via a cronjob. Here, they live in /VerbalAlarms_logs/$(year)/$(month)/
The logs in /ligo/logs/VerbalAlarms/Verbal_logs/ will now always be in their month folder, even the current ones.