As part of the recovery effort today we stumbled upon a way that seems to fairly reliably allow us to transition from PRMI to DRMI (we were 3 for 3 today - we'll try again tomorrow). The procedure is:
- Let Guardian set up for DRMI locking.
- Misalign SRM.
- Turn off H1:LSC-SRCL OFFSET, INPUT and OUTPUT.
- Let PRMI lock.
- Make sure H1:LSC-SRCL OFFSET, INPUT and OUTPUT are off (Guardian turns the OFFSET back on), and clear the filter history.
- Pause the ISC_LOCK and ISC_DRMI Guardian nodes.
- Align SRM and wait until it damps down. Today the PRMI always stayed locked in this configuration.
- Turn on H1:LSC-SRCL OUTPUT.
- Turn on H1:LSC-SRCL INPUT. DRMI is now locked.
- Turn on H1:LSC-SRCL OFFSET, if desired, for mode-hopping suppression.
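For reference, the SRCL switch toggling above could be scripted from a guardian/ezca shell roughly as follows. This is only a sketch, assuming the SRCL filter bank is addressed as 'LSC-SRCL' through the standard ezca switch interface and that writing 2 to the _RSET channel clears the filter history; the SRM alignment and Guardian pausing steps are still done by hand.

# Sketch (untested): toggle the SRCL filter-module switches with ezca.
# Assumes an 'ezca' instance with the H1: prefix (e.g. in a guardian shell).

def srcl_prepare_for_prmi():
    # Turn off OFFSET, INPUT and OUTPUT so PRMI can lock with SRM misaligned.
    ezca.switch('LSC-SRCL', 'OFFSET', 'INPUT', 'OUTPUT', 'OFF')
    # Clear the filter history (equivalent to the CLEAR HISTORY button).
    ezca['LSC-SRCL_RSET'] = 2

def srcl_engage_for_drmi(with_offset=True):
    # With SRM realigned and damped, bring SRCL back: OUTPUT first, then INPUT.
    ezca.switch('LSC-SRCL', 'OUTPUT', 'ON')
    ezca.switch('LSC-SRCL', 'INPUT', 'ON')
    if with_offset:
        # Optionally re-enable the offset for mode-hopping suppression.
        ezca.switch('LSC-SRCL', 'OFFSET', 'ON')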
Already done at Livingston a long time ago...
https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=11340
Except this scheme seems much faster than waiting for DRMI...
As recorded in r11378, I have fixed a bug in the ext_alert.py GRB monitor that prevented it from running.
Can this monitor be restarted at the earliest available opportunity?
16:00-17:00 Kiwamu confesses he has been tuning OMC without realizing the Intent Bit was set to Observing
17:05 Stefan doing some tuning
17:06 Lockloss
17:45 Fil to EY
18:10 Fil back from EY
Locking summary: IFO was locked when I arrived (a 5-hour lock stretch), although at poor range. The lockloss was due to commissioning efforts. After the tuning was satisfactory, I did an initial alignment. After some Guardian issues were worked out with Kiwamu's help, we were able to bring the IFO back to Low Noise, albeit with short-lived locks. Commissioning continued thereafter.
The lockloss analysis tool at "/opt/rtcds/userapps/trunk/isc/common/scripts/lockloss" has been updated. It will now be triggered only when a lock loss from the nominal low noise state happens; losses during lock acquisition cannot trigger this tool.
In addition, a readme.txt file has been added to help you use this tool.
I "destroyed" the TEST guardian node, so that we have only critical nodes active during the run.
Today, I finally committed all of the SUS, LSC, and ASC SAFE.snap files to the svn, after a few weeks of updates from SDF had accumulated.
[Jamie, Vern, Duncan M]
I have modified the ODC-MASTER EPICS configuration to read guardian state information directly from the IFO top node, rather than the ISC_LOCK node.
This has no impact on the guardian system itself, but means that ODC-MASTER bit 3 is now a readback of H1:GRD-IFO_OK.
The upshot of all of this is that ODC-MASTER will not report 'observation ready' (bit 2) until all guardian nodes (as monitored by the IFO node) report OK and there are no test-point excitations on any ODC-monitored front end.
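As a quick sanity check, the new readback can be verified from the command line by comparing the guardian OK flag against the ODC bit. This is only a sketch: the ODC-MASTER summary channel name and the bit indexing are assumptions; H1:GRD-IFO_OK is the channel named above.

# Sketch: compare H1:GRD-IFO_OK against bit 3 of the ODC-MASTER word.
from epics import caget

grd_ok = int(caget('H1:GRD-IFO_OK'))
odc_word = int(caget('H1:ODC-MASTER_SUMMARY'))   # hypothetical channel name
odc_bit3 = (odc_word >> 3) & 1
print('GRD-IFO_OK = %d, ODC-MASTER bit 3 = %d' % (grd_ok, odc_bit3))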
conlog failed last night at 18:37 PDT, same error. I have just restarted it.
Aug 18 18:37:16 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Out of range value for column 'value' at row 1: Error code: 1264: SQLState: 22003: Exiting.
Aug 18 18:37:17 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
Betsy, J. Kissel, Nutsinee
Today we restored all the band-limiting and low-pass filters to the violin monitor (alog 20660). The band-limiting filter gains have been restored to 100.
Look for a subsequent alog from the commissioners regarding sign-change troubles in ASC. However, because Stefan was lamenting that the alignment of the SRC seemed to be problematic of late, we trended the SR3 PIT pointing during OPLEV damping control epochs and CAGE servo control epochs. We discovered that the "bad pointing" of SR3, corrected by commissioners last Thursday Aug 13 (20519), was somehow restored on the day the cage servo was implemented, Sunday Aug 16 (20571). So, today we turned the cage servo off, manually restored the SR3 PIT pointing, cleared the offset, and turned it back on. Note - ON, OFF, and CLEAR were all done via the servo's Guardian. Hopefully this will help with the unstable ASC matrix problems, but we'll see...
See attached trend which represents the pointing history of SR3.
Upon further evaluation, Stefan advised that we just hard-code the "good" SR3 pointing into the CAGE SERVO, so we edited the CAGE SERVO Guardian to comment out the line where it servos around the current position, and instead added a line to servo around H1:SUS-SR3_M3_WIT_PMON = 922 (the "good" position). Stefan et al. are still chewing on this, as things are currently still subpar in low-noise locking land. Currently we are ringing up some ~41 Hz mode.
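Schematically, the change amounts to replacing the "servo around wherever we are now" setpoint with the fixed, known-good witness value. The snippet below is only a sketch of that idea, not the actual SR3_CAGE_SERVO.py code; the function names are made up.

# Sketch of the setpoint change (hypothetical names, not the real guardian code).
GOOD_SR3_PIT = 922   # "good" pointing, in H1:SUS-SR3_M3_WIT_PMON counts

def cage_servo_setpoint():
    # Old behavior (commented out): servo around whatever the current pointing is.
    # return ezca['SUS-SR3_M3_WIT_PMON']
    # New behavior: servo around the hard-coded good pointing.
    return GOOD_SR3_PIT

def cage_servo_error():
    return ezca['SUS-SR3_M3_WIT_PMON'] - cage_servo_setpoint()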
Continuing on the task of summarizing the SVN status of CDS code, here is the guardian python user code list:
M /opt/rtcds/userapps/release/als/common/guardian/ALS_ARM.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_GEN_STATES.py
M /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
? /opt/rtcds/userapps/release/isc/h1/guardian/TEST_BOUNCE_ROLL_DECORATOR.py
M /opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/ISI_STAGE/edges.py
M /opt/rtcds/userapps/release/omc/h1/guardian/omcparams.py
M /opt/rtcds/userapps/release/sys/common/guardian/ifolib/CameraInterface.py
M /opt/rtcds/userapps/release/sys/h1/guardian/IFO_NODE_LIST.py
M /opt/rtcds/userapps/release/sys/h1/guardian/SYS_DIAG_tests.py
To get the list of python files I did the following: looking at the archive of the guardian logs under /ligo/backups/guardian, I made a list of log files created in the past 19 days (for the month of August). For each log file I grepped for "user code:" to get the source py file. This gave a list of 79 files. For each file I checked its SVN status.
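For reference, the log-scraping procedure described above can be sketched roughly as follows. This is an illustrative reconstruction, not the script actually used; in particular the flat layout of /ligo/backups/guardian and the exact "user code:" line format are assumptions.

# Sketch: find guardian logs from the last 19 days, grep them for "user code:"
# lines, and run 'svn status' on each unique .py file found.
import glob, os, re, subprocess, time

cutoff = time.time() - 19 * 86400
pyfiles = set()
for logfile in glob.glob('/ligo/backups/guardian/*'):
    if not os.path.isfile(logfile) or os.path.getmtime(logfile) < cutoff:
        continue
    with open(logfile) as f:
        for line in f:
            m = re.search(r'user code:\s*(\S+\.py)', line)
            if m:
                pyfiles.add(m.group(1))

for path in sorted(pyfiles):
    subprocess.call(['svn', 'status', path])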
Dave, your technique does not produce a complete list, because it misses the main guardian modules for each node.
A better way is to use the guardutil program to get a listing of all source files for each node.
Here's the list I come up with:
jameson.rollins@operator1:~ 0$ guardlog list | xargs -l guardutil files | sort | uniq | xargs -l svn status
M /opt/rtcds/userapps/release/als/common/guardian/ALS_ARM.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_COMM.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_DIFF.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_GEN_STATES.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_XARM.py
M /opt/rtcds/userapps/release/als/common/guardian/ALS_YARM.py
M /opt/rtcds/userapps/release/ioo/common/guardian/IMC_LOCK.py
M /opt/rtcds/userapps/release/isc/h1/guardian/ALIGN_IFO.py
M /opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py
M /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
M /opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/ISI_STAGE/edges.py
M /opt/rtcds/userapps/release/omc/h1/guardian/omcparams.py
M /opt/rtcds/userapps/release/sus/h1/guardian/SR3_CAGE_SERVO.py
M /opt/rtcds/userapps/release/sys/common/guardian/ifolib/CameraInterface.py
M /opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py
M /opt/rtcds/userapps/release/sys/h1/guardian/IFO_NODE_LIST.py
M /opt/rtcds/userapps/release/sys/h1/guardian/SYS_DIAG_tests.py
jameson.rollins@operator1:~ 0$
A brute force coherence report can be found here:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1123740797/
I’m using data from Evan’s elog
It took some time to process this, since at first the data was not available on ldas-pcdev1.ligo-wa.caltech.edu due to some maintenance, and then my home folder was not available on ldas-pcdev1.ligo.caltech.edu due to some other maintenance. However, all's well that ends well.
Yesterday I completed installing the updated DMT software (gds-2.17.2) and its dependencies on the DMT machines at both observatories. This includes (but is not limited to) new versions of:
* h(t) calibration pipeline (gstlal-calibration-0.4.0)
* SenseMonitor
* Omega trigger generation from h(t), improved, configured and run continuously
* Improved monitoring of the DMT run status
A few notes: The latest version of SenseMonitor contains the improved anti-aliasing filtering using the decimation function written long ago by Peter for dtt. This fixes the problem noted at the start of ER7, where noise near the Nyquist frequency was folded down to near DC due to inadequate anti-aliasing. The new version of the h(t) calibration pipeline was introduced because it is packaged with the new gds infrastructure. I am not qualified to comment on this version of the pipeline, but from what I understand, it continues to run in a mode using the same calibration algorithm as the package it replaces. Further updates will be needed to start making the additional corrections under development by the calibration group. The new monitoring functionality continues to generate the DMT Spi page showing the status of all running monitors. It now does a better job of checking that all monitor processes are reading data from one of the shared memory partitions. This is especially useful for monitors that were reading the calibrated h(t) data.
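To illustrate the aliasing point above (a generic demonstration, not the DMT or dtt code): decimating a time series without an anti-aliasing filter folds power near the original Nyquist frequency down into the new band, whereas a properly filtered decimation suppresses it.

# Generic illustration of decimation with and without anti-aliasing (not DMT code).
import numpy as np
from scipy import signal

fs = 16384.0
t = np.arange(0, 64, 1 / fs)
# White noise plus a strong line just below the original Nyquist frequency.
x = np.random.randn(t.size) + 10 * np.sin(2 * np.pi * 8000 * t)

# Naive decimation by 8 (no filter): the 8000 Hz line aliases to ~192 Hz.
naive = x[::8]
# Proper decimation with an anti-aliasing FIR filter suppresses the folded line.
filtered = signal.decimate(x, 8, ftype='fir')
# Comparing spectra of 'naive' and 'filtered' shows the folded line only in the former.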
This non-event almost went unnoticed, but yesterday was the day h1ecatx1 would have had its every-fortnight-on-a-Tuesday crash, had Carlos not fixed the issue a week ago. It looks like this problem is now resolved.
Fil, Peter K., Nutsinee
Today I tried to hook up the spare IR sensor and the comparator box to the controller. Again it didn't work. I tried swapping the sensor, the comparator, and the controller; I even tried swapping the DB9 cable. One sensor and comparator box set has been sent to the EE shop for analysis (another was left at the controller so that CO2X can lase). Peter and Fil found that the IR sensor works, but the comparator box didn't seem to behave right. Fil has replaced the op-amp and the comparator, but the output was still not what we were expecting (the set point was reduced, but the comparator tripping point didn't get any lower). We are going to replace the switch and hope for the best.
I have taken transfer functions of the coil A, B, C, and D inputs to outputs on the Satellite Box (D0901284). The measurement was taken with an SR785 sourcing the input through a Coil Driver Chassis (to turn the input into a differential signal) and then through the Satellite Box. We expected the Satellite Box to have a flat phase and gain response, and we have confirmed that to a tenth of a percent (0.1%). The Reference TF was taken of the Coil Driver Chassis by itself, and the residual was calculated as our Satellite Box Coil A response divided by the Reference TF. The following was calculated using Satellite_Box_TF_Plotter.m located in /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER8/H1/Measurements/Satellite_Box/2015-08-17/.
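The residual described above is just the complex ratio of the two measured transfer functions. A minimal sketch of that calculation (in Python rather than the actual Satellite_Box_TF_Plotter.m, with made-up file names and an assumed frequency/real/imaginary column format):

# Sketch of the residual calculation (hypothetical file names; the real analysis
# is done by Satellite_Box_TF_Plotter.m in the CalSVN directory above).
import numpy as np

f, re_sat, im_sat = np.loadtxt('satbox_coilA_tf.txt', unpack=True)
_, re_ref, im_ref = np.loadtxt('coildriver_reference_tf.txt', unpack=True)

tf_sat = re_sat + 1j * im_sat
tf_ref = re_ref + 1j * im_ref

# Residual: Satellite Box response with the Coil Driver response divided out.
residual = tf_sat / tf_ref

# Flatness check: magnitude within 0.1% of unity, phase near zero.
print('max |mag - 1|: %.2e' % np.max(np.abs(np.abs(residual) - 1)))
print('max |phase| [deg]: %.2e' % np.max(np.abs(np.degrees(np.angle(residual)))))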
I remade the plots above since they didn't show up nicely.
While Travis was working on initial alignment earlier today, we found that we were once again having trouble turning on the ASC for the X green arm. I think the trouble was that the DC centering loops' error signals (from the green ITMX camera) were a little bit too large. This isn't something that happens too often, but it definitely caused the ASC to pull the arm away from resonance. Engaging the ASC without the DC centering was fine, so it's definitely something with that pair of loops.
By hand, I turned the gain on the DOF 3 loop for both pitch and yaw (which handles the DC centering) down to half of the nominal value. Once the error signals were closer to zero, I put the gain back up to the nominal value for both loops.
Since this is a particularly tricky problem to troubleshoot, I've tried to handle it in the ALS Arm guardians. The modifications were made in the "generator" states, so they are the same for each arm. The actual gain values for each arm are stored in the alsconst.py file, which is already loaded by the generator states.
I have loaded this new code, but it has not been fully tested, since we have not done initial alignment since I loaded it (and that's where it'll get used). I've tried to test the logic in the guardian shell and that all seems to be fine, but there's no test like doing it live. If it is giving errors that seem insurmountable, I did an svn checkin of the as-found code before I started modifying it (as well as another checkin after my edits), so we can go back one svn version.
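Schematically, the guardian logic added to the generator states does something like the following. This is only a sketch; the real code lives in the ALS arm guardian generator states with the gains stored in alsconst.py, and the channel names and threshold here are illustrative, not the actual ones.

# Sketch of the DC-centering gain handling (illustrative names and threshold,
# not the actual ALS_ARM.py / alsconst.py code).
ERROR_THRESHOLD = 100   # assumed limit on the camera DC-centering error signal

def engage_dc_centering(arm, nominal_gain):
    for dof in ['DOF3_P', 'DOF3_Y']:
        gain_chan = 'ALS-%s_%s_GAIN' % (arm, dof)   # hypothetical channel name
        err_chan = 'ALS-%s_%s_INMON' % (arm, dof)   # hypothetical channel name
        if abs(ezca[err_chan]) > ERROR_THRESHOLD:
            # Error signal too large: engage at half gain so the loop pulls in gently.
            ezca[gain_chan] = nominal_gain / 2.0
        else:
            ezca[gain_chan] = nominal_gain

def restore_nominal_gain(arm, nominal_gain):
    # Once the error signals have converged, go back to the nominal gain.
    for dof in ['DOF3_P', 'DOF3_Y']:
        ezca['ALS-%s_%s_GAIN' % (arm, dof)] = nominal_gain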
This is a sign of ghosts past! I seem to remember that we solved this problem months ago with a delayed boost/gain which is engaged by the WFS FM triggers. Is it clear that going back to the original settings doesn't address it?
This entry is meant to survey the sensing noises of the OMC DCPDs before the EOM driver swap. However, other than the 45 MHz RFAM coupling, we have no reason to expect the couplings to change dramatically after the swap.
The DCPD sum and null data (and ISS intensity noise data) were collected from an undisturbed lock stretch on 2015-07-31.
Noise terms as follows:
The downward slope in the null at high frequencies is almost certainly due to some imperfect inversion of the AA filter, the uncompensated preamp poles, or the downsampling filter.
* What is the reasoning behind the updated suspension thermal noise plot?
* It's weird that cHard doesn't show up. At LLO, cHard is the dominant noise from 10-15 Hz. Its coupling is 10× smaller than dHard's, but its sensing noise is a lot worse.
I remade this plot for a more recent spectrum. This includes the new EOM driver, a second stage of whitening, and dc-lowpassing on the ISS outer loop PDs.
This time I also included some displacement noises; namely, the couplings from the PRCL, MICH, and SRCL controls. Somewhat surprising is that the PRCL control noise seems to be close to the total DCPD noise from 10 to 20 Hz. [I vaguely recall that the Wipfian noise budget predicted an unexpectedly high PRCL coupling at one point, but I cannot find an alog entry supporting this.]
Here is the above plot referred to test mass displacement, along with some of our usual anticipated displacement noises. Evidently the budgeting doesn't really add up below 100 Hz, but there are still some more displacement noises that need to be added (ASC, gas, BS DAC, etc.).
Since we weren't actually in the lowest-noise quad PUM state for this measurement, the DAC noise from the PUM is higher than what is shown in the plot above.
If the updated budget (attached) is right, this means that there are actually low-frequency gains to be had from 20 to 70 Hz. There is still evidently some excess from 50 to 200 Hz.
Here is a budget for a more recent lock, with the PUM drivers in the low-noise state. The control noise couplings (PRCL, MICH, SRCL, dHard) were all remeasured for this lock configuration.
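For context, each of these control-noise projections is made the same way: measure the coupling transfer function from the loop's control point into DARM with an injection, then multiply the quiescent control-signal spectrum by that coupling magnitude. A minimal sketch of the projection step (generic, with placeholder inputs rather than the actual budget code):

# Generic sketch of a control-noise projection into DARM (not the budget code).
import numpy as np

def project_to_darm(f, asd_control, f_tf, tf_coupling):
    # f, asd_control: quiescent amplitude spectral density of the control signal
    # f_tf, tf_coupling: measured coupling transfer function into DARM
    coupling_mag = np.interp(f, f_tf, np.abs(tf_coupling))
    return coupling_mag * asd_control

# The projected curves (PRCL, MICH, SRCL, dHard, ...) are then compared against
# the measured DARM spectrum; the quadrature sum of all terms gives the budget total.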
As for other ASC loops, there is some contribution from the BS loops around 30 Hz (not included in this budget). I have also looked at cHard, but I have to drive more than 100 times above the quiescent control noise in order to even begin to see anything in the DARM spectrum, so these loops do not seem to contribute in a significant way.
Also included is a plot of sensing noises (and some displacement noises from LSC) in the OMC DCPDs, along with the sum/null residual. At high frequencies, the residual seems to approach the projected 45 MHz oscillator noise (except for the high-frequency excess, which, as we've seen before, seems to be coherent with REFL9).
Evidently there is a bit of explaining to do in the bucket...
Some corrections/modifications/additions to the above:
Of course, the budgeted noises don't add up at all from 20 Hz to 200 Hz, so we are missing something big. Next we want to look at upconversion and jitter noise, as well as control noise from other ASC loops.