I "destroyed" the TEST guardian node, so that we have only critical nodes active during the run.
Today, I finally committed all of the SUS, LSC, and ASC SAFE.snap files to the svn, after a few weeks of updating from SDF.
[Jamie, Vern, Duncan M]
I have modified the ODC-MASTER EPICS configuration to read guardian state information directly from the IFO top node, rather than the ISC_LOCK node.
This has no impact on the guardian system itself, but means that ODC-MASTER bit 3 is now a readback of H1:GRD-IFO_OK.
The upshot of all of this is that the ODC-MASTER will now not report 'observation ready' (bit 2) until all guardian nodes (monitored by the IFO node) report OK, and there are no test-point excitations on an ODC-monitored front-end.
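As a quick sanity check of the new readback, something like the following can be run from a control room workstation. This is a minimal pyepics sketch; only H1:GRD-IFO_OK appears above, so the ODC summary channel name here is a placeholder and should be replaced with the real record.

# Minimal sketch: compare the guardian IFO top-node OK flag against the
# corresponding ODC-MASTER bits.  'H1:ODC-MASTER_SUMMARY' is a placeholder name.
from epics import caget

ifo_ok = caget('H1:GRD-IFO_OK')                   # 1 when all monitored guardian nodes report OK
odc_word = int(caget('H1:ODC-MASTER_SUMMARY'))    # placeholder for the ODC-MASTER summary bit word

bit3 = (odc_word >> 3) & 1    # guardian IFO OK readback (the change described above)
bit2 = (odc_word >> 2) & 1    # 'observation ready'
print('GRD-IFO_OK = %d, ODC bit 3 = %d, ODC bit 2 = %d' % (ifo_ok, bit3, bit2))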
conlog failed last night at 18:37 PDT with the same error. I have just restarted it.
Aug 18 18:37:16 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Out of range value for column 'value' at row 1: Error code: 1264: SQLState: 22003: Exiting.
Aug 18 18:37:17 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
Betsy, J. Kissel, Nutsinee
Today we restored all the band-limiting and low-pass filters to the violin monitor (alog 20660). The band-limiting filter gains have been restored to 100.
Look for a subsequent alog from the commissioners regarding sign-change troubles in ASC. Because Stefan was lamenting that the alignment of the SRC seemed to be problematic of late, we trended the SR3 PIT pointing during OPLEV damping control epochs and CAGE servo control epochs. We discovered that the "bad pointing" of SR3 corrected by commissioners last Thursday, Aug 13 (20519), was somehow restored on the day the cage servo was implemented, Sunday Aug 16 (20571). So, today we turned the cage servo off, manually restored the SR3 PIT pointing, cleared the offset, and turned it back on. Note - ON, OFF, and CLEAR were all done via the servo's Guardian. Hopefully this will help with the unstable ASC matrix problems, but we'll see...
See attached trend which represents the pointing history of SR3.
Upon further evaluation, Stefan advised that we just hard-code the "good" SR3 pointing into the CAGE SERVO, so we edited the CAGE SERVO Guardian to comment out the line where it servos around the current position and instead added a line to servo around H1:SUS-SR3_M3_WIT_PMON = 922 (the "good" position). Stefan et al. are still chewing on this, as things are currently still subpar in low-noise locking land; we are ringing up a ~41 Hz mode.
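For reference, the edit amounts to replacing a "servo around wherever we are now" setpoint with a fixed one. A minimal guardian-style sketch of the idea (illustrative only, not the actual SR3_CAGE_SERVO code):

# Illustrative sketch only -- not the real SR3_CAGE_SERVO module.
# Old behaviour: capture the current witness value at startup and servo around it.
# setpoint = ezca['SUS-SR3_M3_WIT_PMON']    # commented out: floats with whatever pointing we had

# New behaviour: servo around the known-good pointing.
GOOD_SR3_PIT = 922                          # the "good" H1:SUS-SR3_M3_WIT_PMON value quoted above
setpoint = GOOD_SR3_PIT

def pit_error():
    """Cage servo error signal: deviation of the SR3 M3 pitch witness from the fixed setpoint."""
    return ezca['SUS-SR3_M3_WIT_PMON'] - setpoint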
Continuing on the task of summarizing the SVN status of CDS code, here is the guardian python user code list:
M /opt/rtcds/userapps/release/als/common/guardian/ALS_ARM.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_GEN_STATES.py
M /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
? /opt/rtcds/userapps/release/isc/h1/guardian/TEST_BOUNCE_ROLL_DECORATOR.py
M /opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/ISI_STAGE/edges.py
M /opt/rtcds/userapps/release/omc/h1/guardian/omcparams.py
M /opt/rtcds/userapps/release/sys/common/guardian/ifolib/CameraInterface.py
M /opt/rtcds/userapps/release/sys/h1/guardian/IFO_NODE_LIST.py
M /opt/rtcds/userapps/release/sys/h1/guardian/SYS_DIAG_tests.py
To get the list of python files I did the following: looking at the archive of the guardian logs under /ligo/backups/guardian, I made a list of log files created in the past 19 days (for the month of August). For each log file I grepped for "user code:" to get the source py file. This gave a list of 79 files. For each file I checked its SVN status.
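For anyone who wants to reproduce this, a rough sketch of the procedure in Python is below (the log layout under /ligo/backups/guardian and the exact format of the "user code:" lines are assumptions; adjust as needed):

# Rough sketch of the procedure described above: scan recent guardian log backups
# for "user code:" lines and run 'svn status' on each referenced .py file.
import os, time, subprocess

BACKUP_DIR = '/ligo/backups/guardian'
cutoff = time.time() - 19 * 24 * 3600          # log files created in the past 19 days

pyfiles = set()
for root, dirs, files in os.walk(BACKUP_DIR):
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) < cutoff:
            continue
        with open(path, errors='ignore') as log:
            for line in log:
                if 'user code:' in line:
                    pyfiles.add(line.rsplit(None, 1)[-1])   # last token assumed to be the source path

for path in sorted(pyfiles):
    subprocess.call(['svn', 'status', path])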
Dave, your technique does not produce a complete list, because it misses the main guardian modules for each node.
A better way is to use the guardutil program to get a listing of all source files for each node.
Here's the list I come up with:
jameson.rollins@operator1:~ 0$ guardlog list | xargs -l guardutil files | sort | uniq | xargs -l svn status
M /opt/rtcds/userapps/release/als/common/guardian/ALS_ARM.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_COMM.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_DIFF.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_GEN_STATES.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_XARM.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_YARM.py
M /opt/rtcds/userapps/release/ioo/common/guardian/IMC_LOCK.py
M /opt/rtcds/userapps/release/isc/h1/guardian/ALIGN_IFO.py
M /opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py
M /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
M /opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/ISI_STAGE/edges.py
M /opt/rtcds/userapps/release/omc/h1/guardian/omcparams.py
M /opt/rtcds/userapps/release/sus/h1/guardian/SR3_CAGE_SERVO.py
M /opt/rtcds/userapps/release/sys/common/guardian/ifolib/CameraInterface.py
M /opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py
M /opt/rtcds/userapps/release/sys/h1/guardian/IFO_NODE_LIST.py
M /opt/rtcds/userapps/release/sys/h1/guardian/SYS_DIAG_tests.py
jameson.rollins@operator1:~ 0$
A brute force coherence report can be found here:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1123740797/
I’m using data from Evan’s elog
It took some time to process this, since at first data was not available on ldas-pcdev1.ligo-wa.caltech.edu due to some maintenance, and then my home folder was not available on ldas-pcdev1.ligo.caltech.edu due to some other maintenance. However, all's well that ends well.
Yesterday I completed installing the updated DMT software (gds-2.17.2) and its dependencies on the DMT machines at both observatories. This includes (but is not limited to) new versions of:
* the h(t) calibration pipeline (gstlal-calibration-0.4.0)
* SenseMonitor
* Omega trigger generation from h(t), improved, configured, and run continuously
* improved monitoring of the DMT run status
A few notes: The latest version of SenseMonitor contains the improved anti-aliasing filtering using the decimation function written long ago by Peter for dtt. This fixes the problem noted at the start of ER7, where noise near the Nyquist frequency was folded down to near DC due to inadequate anti-aliasing. The new version of the h(t) calibration pipeline was introduced because it is packaged with the new gds infrastructure. I am not qualified to comment on this version of the pipeline, but from what I understand, it continues to run in a mode using the same calibration algorithm as the package it replaces. Further updates will be needed to start making the additional corrections under development by the calibration group. The new monitoring functionality continues to generate the DMT Spi page showing the status of all monitors running. It now does a better job of checking that all monitor processes are reading data from one of the shared memory partitions. This is especially useful for monitors that were reading the calibrated h(t) data.
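To illustrate the anti-aliasing point (this is only a toy demonstration, not the dtt decimation code): downsampling without a low-pass filter folds a component just above the decimated Nyquist down to near DC, while a proper decimation filters it out first.

# Toy demonstration of the aliasing problem and its fix.
import numpy as np
from scipy import signal

fs = 16384.0                        # original sample rate (Hz)
q = 8                               # decimate to 2048 Hz (new Nyquist = 1024 Hz)
t = np.arange(0, 16, 1.0 / fs)
x = np.sin(2 * np.pi * 2053.0 * t)  # tone just above the decimated Nyquist

naive = x[::q]                                 # plain downsampling: the 2053 Hz tone aliases to ~5 Hz
proper = signal.decimate(x, q, ftype='fir')    # anti-aliasing low-pass first, then downsample

for label, y in [('naive', naive), ('decimate', proper)]:
    f, pxx = signal.welch(y, fs=fs / q, nperseg=1024)
    print(label, 'power near 5 Hz:', pxx[(f > 3) & (f < 7)].max())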
This non-event almost went unnoticed, but yesterday was the day h1ecatx1 would have had its every-fortnight-on-a-Tuesday crash, had Carlos not fixed the issue a week ago. Looks like this problem is now resolved.
Fil, Peter K., Nutsinee
Today I tried to hook up the spare IR sensor and the comparator box to the controller. Again, it didn't work. I tried swapping the sensor, the comparator, and the controller; I even tried swapping the DB9 cable. A sensor and comparator box set has been sent to the EE shop for analysis (one set was left at the controller so CO2X can lase). Peter and Fil found that the IR sensor works but the comparator box didn't seem to behave right. Fil replaced the op-amp and the comparator, but the output was still not what we were expecting (the set point was reduced, but the comparator tripping point didn't get any lower). We are going to replace the switch and hope for the best.
LVEA: Laser Hazard
IFO: Locked
Observation Bit: Commissioning
All Times UTC
07:00 Take over from TJ.
07:00 IFO Locked at DC_READOUT
07:00 Go to INCREASE_POWER at 16 W
07:21 Take IFO to 21 W
07:35 Take IFO to 23 W
07:42 Go to LOWNOISE_ESD_ETMY
07:42 Lockloss – Commissioners looking into
07:56 Locked at DC_READOUT
09:05 Go to INCREASE_POWER at 15.8 W
09:14 Take IFO to 23 W
09:20 Go to COIL_DRIVERS
09:20 Lockloss – UE
09:40 Locked at DC_READOUT
09:45 Go to INCREASE_POWER at 23 W
09:51 Go to NOMINAL_LOW_NOISE
09:53 Lockloss –
10:10 Locked at DC_READOUT
10:15 Go to LOWNOISE_ESD_ETMY
10:17 Lockloss –
10:32 Locked at INCREASE_POWER at 23 W
10:58 Locked at NOMINAL_LOW_NOISE
11:00 Set Undisturbed bit
11:37 Lockloss –
12:01 Locked at NOMINAL_LOW_NOISE
13:30 Set Undisturbed bit
Matt asked if Detchar could check other OSEMs for problems like the ones seen in this alog. We're working on an automated solution, but for now Josh has suggested just plotting all the spectra. There are 270 channels in the attached PDF, with spectra taken in the middle of the most recent observation intent time (Aug 19 11:20 UTC). These are 60 seconds of data, 4-second FFTs, 75% overlap. We can easily make plots at other times, repeat this for L1, etc. I'll clean up the script and make it available.
Here's a quick list of channels with lines or other bad features in them. You can search the PDF to see the plots. The first two groups are the ones to be most concerned about. Detchar should follow these up, especially the 'bouncy' spectra (which may be time-domain glitching).
These channels have a distinct line that may be aliased down, like the 1821 Hz line from the TMSX was:
All OM1_M1 - just below 120 Hz
All OM3_M1 - just above 90 Hz
SR3_M1 T1 - about 65 Hz
SR3_M2 LR - about 65 Hz
SRM_M1 LF, RT, and SD - about 70 Hz, and something weird at high frequencies
SRM_M2 UL - about 65 Hz
SRM_M3 UR - about 65 Hz
The following have 'bouncy' spectra, which usually means repeated glitches that are better seen in the time domain:
MC2_M1 SD
PR2_M1 T2
PR2_M3 LL and UL
SR2_M1 T1
The following all have a big 60 Hz line, and the spectrum above 60 Hz is not smooth; maybe there's a forest of lines or wandering lines:
ITMX_M0 LF and RT
ITMX_R0 LF and RT
ITMY_L1 UR
ITMY_L2 UR
ITMY_R0 RT
PRM_M1 RT and SD
PR3_M1 T1 and T2
All PR3_M3
PR2_M3 UR
MC2_M1 RT
All SR2 M1 and M2
ETMY_M0 F1, F2, and SD
All IM2_M1
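For anyone who wants to regenerate these at another time, the spectrum parameters above (60 s of data, 4 s FFTs, 75% overlap) translate into a few lines of gwpy; the channel name and times in this sketch are placeholders:

# Sketch of how one of the OSEM spectra might be produced (placeholder channel/times).
from gwpy.timeseries import TimeSeries

channel = 'H1:SUS-SRM_M1_OSEMINF_LF_OUT_DQ'              # example OSEM channel
start, end = 'Aug 19 2015 11:19', 'Aug 19 2015 11:20'    # 60 s in the observation stretch

data = TimeSeries.get(channel, start, end)
asd = data.asd(fftlength=4, overlap=3)                   # 4 s FFTs, 75% overlap
plot = asd.plot()
plot.gca().set_xlim(0.5, data.sample_rate.value / 2)
plot.savefig(channel.replace(':', '-') + '.png')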
Josh, Andy, TJ
We had a look into the channels in the "bouncy" spectra category above. These are strong glitches that come in pairs, one pair every two seconds. Even more strangely, the glitches are in sync in all of the mentioned channels, even though they are OSEMs on different suspension stages and different suspensions! Attached is a four-page PDF with normalized spectrograms and timeseries of the glitches on the OSEMs for PR2 (M1 T2, M3 UL), MC2 (M1 SD), and SR2 (M1 T1), showing that they are synchronous.
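A sketch of the kind of comparison we made (gwpy; channel names and times illustrative): normalized spectrograms of two of the affected OSEMs over the same stretch make the synchronous glitch pairs obvious.

# Sketch: normalized spectrograms of two affected OSEM channels over the same stretch.
from gwpy.timeseries import TimeSeries

chans = ['H1:SUS-PR2_M1_OSEMINF_T2_OUT_DQ', 'H1:SUS-MC2_M1_OSEMINF_SD_OUT_DQ']
start, end = 'Aug 19 2015 11:19', 'Aug 19 2015 11:21'

for chan in chans:
    data = TimeSeries.get(chan, start, end)
    spec = data.spectrogram2(fftlength=0.5, overlap=0.25).ratio('median')
    plot = spec.plot(norm='log', vmin=0.1, vmax=10)
    plot.savefig(chan.replace(':', '-') + '-specgram.png')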
Notes:
S. Dwyer, D. Hoak, J. Driggers, J. Bartlett, J. Kissel, C. Cahillane
We had trouble with the transition to ETMY, which turned out to be the same problem we had with the ETMX ESD driver this morning (20652) -- the binary I/O configuration was toggled such that the high-voltage driver was disconnected, but we were still trying to drive through the high-voltage driver. This was the source of several lock losses during this evening's maintenance recovery. We found it by comparing a conlog between now and yesterday's DARM OLGTF measurements by the calibration group. Once we fixed it, we made sure to accept the configuration in the SDF (where we had looked first, but it turns out the wrong configuration had been stored there).
For the record, the attached screenshot shows the correct configuration for the ETMY BIO. The good configuration has HI/LO Voltage OFF, to run with the low-voltage driver, and the HI Voltage disconnected (i.e. the switch is OFF). In the same screenshot, we show what the DARM loop gain looks like (yellow) locked on EX alone, (blue) with a half-transition to EY when EY is in the bad configuration, and (red) with a half-transition to EY when EY is in the good configuration.
The noise tonight: excess stuff between 30 and 200 Hz (first plot). The lower frequency end is very nonstationary and coherent with LSC (second plot) and ASC (third plot) signals. Between 100 and 200 Hz it's quite stable. The ISS second loop is not on.
MICH Feedforward appears to be off or mistuned.
The ISS should not be running far from the hard-coded 8% diffracted power. 13% (reported in 20663) is probably too close to the upper limit... I would not change the second-loop target value, but rather set the inner-loop offset so that the diffracted power runs near 8%.
The amount of excess noise was anomalously bad during this lock stretch, so it's not so surprising that the MICH FF was not working.
The current 17+ hour lock has a more typical DARM noise, at least during the high-range stretches. Here the MICH FF seems to be mostly working, although we could get rid of some more coherence by retuning it.
Also note the coherence with PRCL in the region where we have the PSL PZT peaks...
This entry is meant to survey the sensing noises of the OMC DCPDs before the EOM driver swap. However, other than the 45 MHz RFAM coupling, we have no reason to expect the couplings to change dramatically after the swap.
The DCPD sum and null data (and ISS intensity noise data) were collected from an undisturbed lock stretch on 2015-07-31.
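As a reminder of how the sum and null streams are formed (schematic only; the real streams come calibrated out of the OMC front end):

# Schematic: the null stream cancels the common (DARM) signal and leaves the
# uncorrelated sensing noise of the two DCPDs, which is what is budgeted here.
import numpy as np

def dcpd_sum_and_null(dcpd_a, dcpd_b):
    """Return the (sum, null) combinations of the two OMC DCPD time series."""
    a, b = np.asarray(dcpd_a), np.asarray(dcpd_b)
    total = 0.5 * (a + b)    # carries the DARM signal plus any common noise
    null = 0.5 * (a - b)     # DARM cancels; uncorrelated sensing noise remains
    return total, null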
Noise terms as follows:
The downward slope in the null at high frequencies is almost certainly some imperfect inversion of the AA filter, the uncompensated preamp poles, or the downsampling filter.
* What is the reasoning behind the updated suspension thermal noise plot?
* It's weird that cHard doesn't show up. At LLO, cHard is the dominant noise from 10-15 Hz. Its coupling is 10x less than dHard's, but its sensing noise is a lot worse.
I remade this plot for a more recent spectrum. This includes the new EOM driver, a second stage of whitening, and dc-lowpassing on the ISS outer loop PDs.
This time I also included some displacement noises; namely, the couplings from the PRCL, MICH, and SRCL controls. Somewhat surprising is that the PRCL control noise seems to be close to the total DCPD noise from 10 to 20 Hz. [I vaguely recall that the Wipfian noise budget predicted an unexpectedly high PRCL coupling at one point, but I cannot find an alog entry supporting this.]
Here is the above plot referred to test mass displacement, along with some of our usual anticipated displacement noises. Evidently the budgeting doesn't really add up below 100 Hz, but there are still some more displacement noises that need to be added (ASC, gas, BS DAC, etc.).
Since we weren't actually in the lowest-noise quad PUM state for this measurement, the DAC noise from the PUM is higher than what is shown in the plot above.
If the updated budget (attached) is right, this means that there are actually low-frequency gains to be had from 20 to 70 Hz. There is still evidently some excess from 50 to 200 Hz.
Here is a budget for a more recent lock, with the PUM drivers in the low-noise state. The control noise couplings (PRCL, MICH, SRCL, dHard) were all remeasured for this lock configuration.
As for other ASC loops, there is some contribution from the BS loops around 30 Hz (not included in this budget). I have also looked at cHard, but I have to drive more than 100 times above the quiescent control noise in order to even begin to see anything in the DARM spectrum, so these loops do not seem to contribute in a significant way.
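For completeness, each control-noise projection is the usual recipe: measure the coupling transfer function with a strong injection, then multiply the quiescent control signal spectrum by its magnitude and add the terms in quadrature. A schematic sketch (inputs assumed to share a common frequency vector):

# Schematic control-noise projection into the DARM/DCPD spectrum.
import numpy as np

def project_control_noise(control_asd, coupling_tf):
    """Quiescent control ASD times the measured coupling magnitude (control -> DARM)."""
    return np.abs(np.asarray(coupling_tf)) * np.asarray(control_asd)

def quadrature_sum(*asds):
    """Total budgeted noise from the individual projections."""
    return np.sqrt(np.sum(np.square(np.asarray(asds)), axis=0))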
Also included is a plot of sensing noises (and some displacement noises from LSC) in the OMC DCPDs, along with the sum/null residual. At high frequencies, the residual seems to approach the projected 45 MHz oscillator noise (except for the high-frequency excess, which, as we've seen before, seems to be coherent with REFL9).
Evidently there is a bit of explaining to do in the bucket...
Some corrections/modifications/additions to the above:
Of course, the budgeted noises don't at all add up from 20 Hz to 200 Hz, so we are missing something big. Next we want to look at upconversion and jitter noises, as well as control noise from other ASC loops.
There were eight separate locks during this shift, with typical inspiral ranges of 60 - 70 Mpc. Total observation time was 28.2 hours, with the longest continuous stretch 06:15 - 20:00 UTC on June 11. Lock losses were typically deliberate or due to maintenance activities.
The following features were investigated:
1 – Very loud (SNR > 200) glitches
Omicron picks up roughly 5-10 of these per day, coinciding with drops in range to 10 - 30 Mpc. They were not caught by Hveto and appear to all have a common origin due to their characteristic OmegaScan appearance and PCAT classification. Peak frequencies vary typically between 100 - 300 Hz (some up to 1 kHz), but two lines at 183.5 and 225.34 Hz are particularly strong. These glitches were previously thought to be due to beam tube cleaning, and this is supported by the coincidence of cleaning activities and glitches on June 11 at 16:30 UTC. However, they are also occurring in the middle of the night, when there should be no beam cleaning going on. Tentative conclusion: they all have a common origin that is somehow exacerbated by the cleaning team's activities.
2 – Quasi-periodic 60 Hz glitch every 75 min
Omicron picks up an SNR ~ 20 - 30 glitch at 60Hz which seems to happen periodically every 70 - 80 min. Hveto finds that SUS-ETMY_L2_WIT_L_DQ is an extremely efficient (use percentage 80-100%) veto, and that SUS-ETMY_L2_WIT_P_DQ and PEM-EY-MAG-EBAY-SEIRACK-X_DQ are also correlated. This effect is discussed in an alog post from June 6 (link): "the end-Y magnetometers witness EM glitches once every 75 minutes VERY strongly and that these couple into DARM". Due to their regular appearance, it should be possible to predict a good time to visit EY to search for a cause. Robert Schofield is investigating.
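Since the period is so regular, picking a visit window is just arithmetic; a trivial sketch (the reference glitch time is a placeholder to be taken from the latest Omicron trigger, and the growing uncertainty reflects the 70 - 80 min spread):

# Trivial prediction of upcoming 60 Hz glitch windows from the ~75 min period.
period = 75 * 60                  # seconds
last_glitch_gps = 1118000000      # placeholder GPS time of a recent glitch

for n in range(1, 6):
    print('glitch %d expected near GPS %d (+/- %d s)'
          % (n, last_glitch_gps + n * period, n * 5 * 60))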
3 – Non-stationary noise at 20 - 30Hz
This is visible as a cluster of SNR 10 - 30 glitches at 20 - 30 Hz, which became denser on June 11 and started showing up as short vertical lines in the spectrograms as well. The glitches are not caught by Hveto. Interestingly, they were absent completely from the first lock stretch on June 10, from 00:00 – 05:00 UTC. Daniel Hoak has concluded that this is scattering noise, likely from alignment drives sent to OMC suspension, and plans to reduce the OMC alignment gain by a factor of two to stop this (link to alog).
4 – Broadband spectrogram lines at 310 and 340 Hz
A pair of lines at 310 and 340 Hz are visible in the normalized spectrograms, strongest at the beginning of a lock and decaying over a timescale of ~1 hr as the locked interferometer settles into the nominal alignment state. According to Robert Schofield, these are resonances of the optic support on the PSL periscope. The coupling to DARM changes as the alignment drifts in time (the peaks decay because the alignment was tuned to minimize them when the IFO is settled). Alogs about this: link, link, link.
There are lines of Omicron triggers at these frequencies too, which interestingly are weakest when the spectrogram lines are strongest (probably due to a 'whitening' effect that washes them out when the surrounding noise rises). Robert suspects that the glitches are produced by variations in alignment of the interferometer (changes in coupling to the interferometer making the peaks suddenly bigger or smaller).
5 – Wandering 430 Hz line
Visible in the spectrograms as a thin and noisy line, seen to wander slightly in Fscan. It weakened over the course of the long (14h) lock on June 11. Origin unknown.
6 – h(t) calibration
Especially noisy throughout the shift, with the ASD ratio showing unusually high variance. May be related to odd broadband behavior visible in the spectrogram. Jeff Kissel and the calibration group report that nothing changed in the GDS calibration at this time. Cause unknown.
Attached PDF shows some relevant plots.
More details can be found at the DQ shift wiki page.
I believe the 430 Hz wandering line is the same line Marissa found at 415 Hz (alog 18796), which, as Gabriele observed, turns out to show coherence with SRCL/PRCL.
Ross Kennedy, my Ph.D. student, implemented tracking of this line over 800 seconds using the iWave line tracker. Overlaid with a spectrogram, you can see that there is quite good agreement as the frequency evolves. We're working on automating this tool to avoid hand-tuning parameters of the line tracker. It would also be interesting to track both this line and PSL behaviour at the same time, to check for correlation. In the attached document there are two spectrograms - in each case the black overlay is the frequency estimate from iWave.
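For a quick cross-check without iWave, simply following the per-FFT peak frequency in a narrow band around 430 Hz gives a crude track of the same feature. A sketch with gwpy (this is not the iWave algorithm; the channel and times are placeholders):

# Crude peak-frequency tracker around 430 Hz (not the iWave tracker).
import numpy as np
from gwpy.timeseries import TimeSeries

data = TimeSeries.get('H1:GDS-CALIB_STRAIN', 'Jun 11 2015 08:00', 'Jun 11 2015 08:13')
spec = data.spectrogram2(fftlength=4, overlap=2)         # 0.25 Hz resolution, 2 s steps

band = (spec.frequencies.value > 410) & (spec.frequencies.value < 450)
freqs = spec.frequencies.value[band]
track = freqs[np.argmax(spec.value[:, band], axis=1)]    # peak frequency per time bin

for t, f in zip(spec.times.value, track):
    print('%.1f  %.2f Hz' % (t, f))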