[Matt, Jenne, Evan, Sheila] There is an enormous peak in the DARM spectrum at 4735 Hz. Shown in the DTT printout below is the IOP channel for the OMC DC PD (H1:IOP-LSC0_MADC0_TP_CH12), from 1 kHz to 25 kHz, and this 4.7 kHz peak is dominating by about 2 orders of magnitude. We wonder if this is perhaps an acoustic internal mode of one of the test masses, although we are having trouble finding a listing of such modes. Does anyone know where we can find a listing of test mass acoustic modes? Or, alternatively, does anyone have any thoughts on what this mode might be?
Sort of unsatisfying (because they're not the real deal, or they're incomplete) FEA results for the test mass body modes can be found here: http://www.ligo.caltech.edu/~coyne/AL/COC/AL_COC.htm (only for a right cylinder) and here: T1400738 (only shows the modes which are likely to be parametrically unstable). A quick glance through the above doesn't show anything at or near that frequency (including abs(16384 - FEA result), i.e. aliased frequencies). I've yet to see an FEA analysis of non-test-mass optics, but I've been told that Ed Daw and/or Norna/Calum's summer students are working on it. The best I've seen on that is the ancient 2004 document for the beam splitter, T040232, which is where we colloquially get the frequency of the beam splitter's butterfly mode, read by eyeballing the current beam splitter's parameter location in Figure 2. (But the modeled dimensions are wrong, and the wording is confusing on whether the listed frequencies are from the model with flats or not.)
It appears to be a 10th order violin mode on EY.
It is damped with a 1 Hz wide Butterworth (unity gain in the passband), a +100 dB filter, and a gain of -30. No rotation needed.
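For reference, a 1 Hz wide Butterworth like the one described can be sketched with scipy. This is only a sketch: the bandpass shape, the 16384 Hz model rate, and the exact band edges are assumptions, and the +100 dB and gain stages are omitted.

```python
from scipy import signal

# Sketch only: a 1 Hz wide Butterworth bandpass centered on the 4735 Hz
# peak, at an assumed 16384 Hz SUS model rate.
fs = 16384
sos = signal.butter(2, [4734.5, 4735.5], btype='bandpass', fs=fs,
                    output='sos')

# Unity gain in the passband, strong rejection off-resonance:
freqs, resp = signal.sosfreqz(sos, worN=[4735.0, 4600.0], fs=fs)
```

Evaluating the response at the mode frequency and at 4600 Hz confirms the "unity gain in the passband" property quoted above.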
Jeff
As you noted, there is some data in the links you already included, and we have started to fill in the blanks. Refer to https://dcc.ligo.org/T1500376-v1. When we talk I (we) can fill in the rest.
Calum
For reference, with a combination of Slawek's (T1400738) and Calum's (T1500376) FEA models, and Calum's video of the test mass internal mode shapes (T1500376), we expect to find the drumhead mode around 8029 Hz, the x-polarized butterfly mode around 5821 Hz, and the +-polarized butterfly mode around 5935 Hz (using Slawek's values for the mode frequencies). The next two modes (at 8102 Hz and 8156 Hz) do not involve distortion of the test mass face in the direction of the beamline.
Sheila, Matt, Elli
Today, like yesterday, we used AS A 45Q YAW to damp the roll modes. The ITMY settings were different today compared to yesterday; the sign and the phase have changed. Currently the roll mode damping is working with the following settings:
| Optic | ETMY | ETMX | ITMY | ITMX |
| Roll mode frequency [Hz] | 13.816 | 13.889 | 13.93 | 13.978 |
| Phase [deg] | 0 | -90 | 0 | -60 |
| Filters on | FM 3,4 | FM 2,3,4,7 | FM 3,4 | FM 2,3,4 |
| Gain | -50 | 10 | 100 | 100 |
The filter settings are in the guardian; however, the H1:ASC-OUTMATRIX element H1:ASC-OUTMATRIX_TESTMASS_DAMP_1_3 does not get set from 0 to 1 in the guardian, so these filters do not currently turn on automatically.
Matt, Sheila, Eli
At some point today the bounce mode on EX got excited enough that we could see it in the PUM OSEMs as pitch motion. The RMS of the observed "pitch" was about 3 nrad, and the line in DARM was about 1e-13 m. Assuming that OSEM misalignment is converting roll into the observed pitch motion, and that this misalignment is of order 1 degree, the estimated roll motion was about 3e-7 rad.
This gives an order of magnitude estimate of the Roll to DARM coupling of 3e-7 m / rad.
Assuming a 10 cm lever arm, this gives a dimensionless coupling of 3e-6. Compared to the bounce-to-DARM coupling, which is order 1e-3, the roll coupling is tiny, which means that the roll motion is HUGE (since they both look about the same in DARM).
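The order-of-magnitude arithmetic can be sketched as follows. This reproduces the quoted numbers to within the factor-of-few precision intended here; the 1 degree OSEM misalignment is the assumption stated above.

```python
import math

# Order-of-magnitude check of the numbers quoted above.
pitch_rms = 3e-9                     # rad, apparent "pitch" in the PUM OSEMs
misalignment = math.radians(1)       # rad, assumed OSEM misalignment
roll_rms = pitch_rms / misalignment  # ~2e-7 rad, "about 3e-7" as quoted

darm_line = 1e-13                    # m, line height in DARM
coupling = darm_line / roll_rms      # ~6e-7 m/rad, order 3e-7 as quoted

lever_arm = 0.10                     # m
dimensionless = coupling / lever_arm # ~6e-6, order 3e-6 as quoted
```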
My 24 hours have passed, but the first sentence should read "At some point today the roll mode on EX..."
Suppose that the beam is at (X, Y)=R(cos(theta), sin(theta)) on the mirror where R=0 is the center of roll rotation and theta=0 is the horizontal line crossing the center. Though the COG is somewhat lower than the mirror center due to wedge, R should be more or less equal to the radial distance of the beam from the center of the mirror.
Mirror thickness at this position is
T(R, theta) ~ -R*sin(theta)*w + T0
where T0 is the thickness at the center and w is the wedge in radians that is about 0.08deg=1.4mrad for all ITMs and ETMs.
Roll changes the thickness by adding some small angle d_theta to theta: dT=-R*cos(theta)*w*d_theta=-X*w*d_theta.
When the rolling plane is in the middle of the front and back surfaces, the light sees half of the total thickness change, so the roll-to-length coupling coefficient should be
length/roll ~ |dT/d_theta /2| = X*w/2
= (X/5mm) * 3.5E-6 [m/rad].
For Matt's estimate of 3E-7 m/rad to hold true, the horizontal centering should be 0.5mm or so, which is pretty good but not outrageously so.
What this probably means is that Matt's estimate about the roll angle was reasonable, as in it cannot be off by that much. A factor of something, not orders of magnitude.
[edit on Jul 15] However, if the roll plane is parallel to the local gravity, the above doesn't hold true.
In this case, w/2 is replaced by the angle between the local gravity and LIGO global vertical: 8urad for LHO EX, 639urad for EY, -619urad for IX and 12urad for IY (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=14876):
length/roll(EX)~ X*8urad = (X/5mm) * 4e-8 [m/rad],
length/roll(EY)~ X*639urad = (X/5mm) * 3E-6 [m/rad].
For EX and IY, that's two orders of magnitude smaller than what I showed yesterday, though for EY and IX it didn't change.
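Both roll-plane cases can be reproduced numerically. A sketch; X = 5 mm is just the fiducial offset used in the scaling above, and the vertical-direction angles are the values from alog 14876 quoted in the edit.

```python
# Check of the two roll-plane cases (values from the text / alog 14876).
X = 5e-3     # m, horizontal beam offset used in the "(X/5mm)" scaling
w = 1.4e-3   # rad, ITM/ETM wedge (~0.08 deg)

# Case 1: rolling plane midway between the front and back surfaces.
coupling_wedge = X * w / 2   # 3.5e-6 m/rad

# Case 2: roll plane parallel to local gravity; w/2 is replaced by the angle
# between local gravity and the LIGO global vertical (magnitudes; IX is
# -619 urad in alog 14876).
vertical_angle = {'EX': 8e-6, 'EY': 639e-6, 'IX': 619e-6, 'IY': 12e-6}  # rad
coupling_gravity = {optic: X * a for optic, a in vertical_angle.items()}
# EX ~ 4e-8 m/rad, EY ~ 3.2e-6 m/rad
```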
It seems that we need a suspension model to find out the actual rotation plane.
Today I saw, via my monitor programs, a freeze-up of EPICS data from LSC and ASC between the times 20:27:25 and 20:27:40 UTC, a duration of 15 seconds. My StripTool showed flat-line for this time. I was able to get the load average on a variety of corner station front ends around this time; they were all elevated. Looking at the Guardian ISC_DRMI node logs and checking writes against the DAQ data, I see that systems other than LSC and ASC also appear to have been frozen. For example, ISC_DRMI attempted to change H1:SUS-IM4_M1_LOCK_P_GAIN during this period. The modifications which were attempted during the freeze time did not show up in the DAQ data.
To see if IPC was playing a role in this, I started a third monitor on h1susauxb123 (no IPC). At 17:23, as I was writing this alog, we got another event, which was seen by all three: ASC, LSC and SUSAUXB123.
Investigation continues.
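A flat-line check of this kind amounts to flagging runs of identical consecutive samples longer than some threshold. A minimal sketch (illustrative; not the actual monitor program):

```python
def frozen_spans(samples, dt, min_freeze=10.0):
    """Return (start_index, duration_s) for runs of more than one identical
    consecutive sample lasting at least min_freeze seconds."""
    spans = []
    start = 0
    for i in range(1, len(samples) + 1):
        if i == len(samples) or samples[i] != samples[start]:
            duration = (i - start) * dt
            if i - start > 1 and duration >= min_freeze:
                spans.append((start, duration))
            start = i
    return spans
```

For example, a channel sampled at 1 Hz that holds one value for 15 samples would be reported as a single 15 s frozen span, matching the 20:27:25-20:27:40 event above.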
A Guardian top node has been added to H1: IFO:

(The node should now be visible in the GUARD_OVERVIEW MEDM screen.)
ifo: H1
name: IFO
CA prefix:
module:
/opt/rtcds/userapps/release/sys/common/guardian/IFO.py
usercode:
/opt/rtcds/userapps/release/sys/h1/guardian/IFO_NODE_LIST.py
nominal state: ALL_NODES_OK
initial request: ALL_NODES_OK
states (*=requestable):
50 * ALL_NODES_OK
20 WAITING_FOR_NODES_OK
10 NODE_FAULT
0 INIT
The node is monitor only; it only monitors the status of all other nodes in the system. This is based on the system previously deployed at L1.
The sole purpose of this node is to report on the full status of the guardian system. This is done via the node OK channel, and is defined similarly to L1:
The check is essentially that all nodes in the system are themselves reporting OK == True, e.g.:
self['OK'] = (self['OP'] == 'EXEC'
              and self['MODE'] in ['AUTO', 'MANAGED']
              and self['REQUEST'] == self['NOMINAL']
              and self['STATE'] == self['NOMINAL']
              and self['STATUS'] == 'DONE'
              and not self['ERROR']
              and self['CONNECT'] == 'OK')
In other words, all nodes are running as expected, are not in error, and their STATE and REQUEST are equal to the NOMINAL state.
The list of nodes being monitored is currently stored at:
/opt/rtcds/userapps/release/sys/h1/guardian/IFO_NODE_LIST.py
The list currently includes:
NOTE: the nodes without NOMINAL states defined will prevent the IFO node from ever becoming OK. We need to either define NOMINAL states for these nodes, or temporarily remove them from the list of monitored nodes.
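The OK condition above, plus the caveat about undefined NOMINAL states, can be sketched as follows (a sketch only, with plain dicts standing in for the real guardian node interface):

```python
def node_ok(node):
    """Per-node OK check, following the conditions listed above.

    A node whose NOMINAL state is undefined (None here) can never satisfy
    REQUEST == NOMINAL == STATE, so it keeps the IFO node from being OK.
    """
    return (node['OP'] == 'EXEC'
            and node['MODE'] in ['AUTO', 'MANAGED']
            and node['NOMINAL'] is not None
            and node['REQUEST'] == node['NOMINAL']
            and node['STATE'] == node['NOMINAL']
            and node['STATUS'] == 'DONE'
            and not node['ERROR']
            and node['CONNECT'] == 'OK')

def ifo_ok(nodes):
    """The IFO top node's OK: every monitored node is itself OK."""
    return all(node_ok(n) for n in nodes)
```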
[Stefan, Jenne] Earlier in the day, we were struggling to get through the INCREASE_POWER state in the ISC guardian, which happens after the transition of DARM to DC readout. After much tracking and searching, we discovered that the problem was that the setpoint for the OMC transmitted power was done as an ezcaread after the PSL rotation stage had already started moving. Stefan had seen in the past that this kind of system will often have some lag (you're not reading the current OMC DC PD value infinitely fast, so you're constantly changing your setpoint) which causes the system to run away. We have changed this to a hard-coded value (defined as lscparams.omc_dcpd_sum_target), so that the DARM offset is changed while the power is increased to keep the OMC DC PD at this value (currently 20 mW). This seems to now work exactly as we expect, and we're easily able to get past this state.
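A toy loop shows the mechanism. This is a sketch under stated assumptions, not the guardian code: the linear PD model and all the numbers are made up, with the fixed target playing the role of lscparams.omc_dcpd_sum_target.

```python
# Toy model of the INCREASE_POWER servo: the DARM-offset-like knob is
# adjusted to hold the PD at a target while the input power ramps up.
def increase_power(fixed_target=None, steps=50, gain=0.5):
    power = 2.0           # input power, ramped up below
    offset = 10.0         # DARM-offset-like knob the servo adjusts
    pd = power * offset   # toy OMC DC PD: linear in power and offset
    target = fixed_target if fixed_target is not None else pd
    for _ in range(steps):
        power *= 1.02                  # rotation stage increases power
        pd = power * offset
        if fixed_target is None:
            target = pd                # lagging re-read: setpoint chases PD
        offset += gain * (target - pd) / power
    return pd

pd_fixed = increase_power(fixed_target=20.0)  # stays near the target
pd_lagging = increase_power()                 # runs away with the power ramp
```

With a hard-coded target the loop holds the PD near the setpoint as the power ramps; with the setpoint re-read from the live PD value the error is always ~zero, nothing counteracts the ramp, and the PD value runs away.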
J. Kissel, E. Merilh, J. Warner, B. Weaver
As we begin to figure out what it means to be on the relocking team, we've made our best attempt at organizing / coordinating all planned maintenance day activities such that we understand their impact on the IFO and can recover from them as quickly as possible. See below. Have patience while we figure this "new" "system" out with you!
All tasks have been arranged and coordinated so as to not conflict with one another. All tasks and estimated times for completion will be added to the reservation system when they are scheduled, and after the task manager has checked in with the operator. PLEASE PAY ATTENTION TO THE RESERVATION SYSTEM (to help, we're going to put it on the big projector during maintenance). As always, please keep the operators informed of your activities as accurately as possible / reasonable throughout this maintenance day so the reservation list can be adjusted accurately. We appreciate your cooperation!
Maintenance Day Timeline:
Kissel typically fleshes out the above task lists on the Monday before the Tuesday maintenance period and (with help from the operator) shops around for interferences and conflicts. This week we set the timeline of tomorrow's task list order on the CR whiteboard, such that all parties know roughly when their slated time frame is during the maintenance window. The operator will keep the parties on track tomorrow. The attached picture is of the whiteboard "hand-Gantt" in the event you want to see the above list in a different format.
I ran my brute force coherence script on last night's lock. The results are available here:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1120811417/
I'll post a summary soon.
Since the IFO has been nicely locking, I updated the SUS drift mon to capture the new ETMx and TMSx alignments. The time I used to update was 1120863300.
Betsy, Jeff, Ed, Sheila, Kiwamu, JimW, probably others
Because we wanted to reduce the opportunities for chaos tomorrow, we worked on making sure SDF's were clean at the end stations. This involved a lot of asking questions, head-scratching and saying "yeah, it's probably fine". There are still a number of red tables for the corner station, such as windy blends on the CS BSC's and a bunch of LSC and ASC diffs, but the plan currently only includes restarting end station models.
Rediscovering why we fixed the large ion pump voltages at 7000V in the past -> Changed IP1-6 to 7000V fixed voltage
I have been working on putting together a power budget for the Hanford IFO. I have calculated the power on the beamsplitter using absolute powers on various photodiodes, and put this into a shot noise curve model. I have compared this to the shot noise curve measured using the DCPD null readout. The shot noise curve is taken from the GWinc model. The parameter file I am using is attached. These files are available in /ligo/home/eleanor.king/PowerBudget.
I calculated the power on the beamsplitter using the TR QPDs (TR_X/Y_QPD_A/B) to determine the power in the arms, and using the POP sensors (POP_A_QPD, POP_B_QPD, POP_A_LF). The results are summarized in the table below. There is a matlab script with my actual calculations in /ligo/home/eleanor.king/PowerBudget/PowerOnBS. The recycling gain from the TR QPDs is larger than that measured with the POP PDs. If I calibrate the POP_A PDs to a single shot, I get the same result. I am assuming the TR QPDs are correct. The recycling gain calculated from the absolute powers on these PDs agrees with the recycling gain calculated from the relative power change before and after locking, using both TR and POP photodiodes.
I have taken an average value of all of the TR QPDs, which is 41 (+/-15%). I have also included lines showing +/- 1 standard deviation in the plot of the shot noise curve. The photodetector quantum efficiency is 85%, and the losses in the arms are 120 ppm, as measured in alog 16579. Next I plan to get a better understanding of the mode matching numbers used for generating the shot noise curve (mode matching into the arms and into the SRC, which is currently assumed to be perfect).
| Sensor | Calculated power on BS [W] | Calculated recycling gain (15_06_07 0:00:00 UTC) |
| LSC-POP_A_LF | 751.3 | 33.6 |
| ASC-POP_A_QPD | 723.1 | 32.4 |
| ASC-POP_B_QPD | 662.0 | 29.6 |
| ASC-X_TR_A | 716.8 | 32.0 |
| ASC-X_TR_B | 940.6 | 42.1 |
| ASC-Y_TR_A | 963.9 | 43.2 |
| ASC-Y_TR_B | 1026.2 | 45.9 |
Additional Comments:
Propagation of arm power to recycling gain:
Assume losses in the arms of 120 ppm, per alog 16579.
Recycling gain from power on the beamsplitter:
Pinput = IMC_input_power * 0.88 * Tprm. (It is the power on the beamsplitter that I am calculating from the photodiodes and putting into the GWinc noise model, but I find it easier to convert from this to recycling gain, and to think in terms of recycling gain.)
Some comments on the current photodiode calibrations:
POP LSC photodiodes were calibrated by Kiwamu in alog 13905, based on the transimpedance of this photodiode: [cnts/W] = 0.76 [A/W] x 200 [Ohm] x 2^16/40 [cnts/V]
TR_QPDs and POP_QPD calibrated using Dan's calibration, alog 15432. Note the whitening gain on these changes during full lock, so it is important to keep track of the multiple dewhitening filter banks.
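Putting numbers to the POP LSC calibration above (reading the count conversion as a 16-bit ADC, 2^16 counts over the 40 V span):

```python
# POP LSC PD calibration, per the formula quoted above.
responsivity = 0.76             # A/W
transimpedance = 200.0          # Ohm
counts_per_volt = 2**16 / 40.0  # 16-bit ADC over a 40 V span
counts_per_watt = responsivity * transimpedance * counts_per_volt
# counts_per_watt is about 2.5e5 cnts/W
```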
And the promised parameter file...
The disk drive in control room computer opsws6 has died. The computer was removed and will be taken in for service; a substitute has been installed in its place. The name of the substitute is lveaws2, so don't be surprised when you log in if it has a different name. The opsws6 computer will be reinstalled when it has been repaired.
(Not Dan)
An accidental INDENTING change in the ISC_LOCK guardian affected a "return True" statement. Result:
- CARM_ON_TR returned true without actually ramping H1:LSC-REFL_SERVO_IN2GAIN to -32dB.
- This resulted in random lock losses later on in the lock sequence.
- This resulted in 6h of head scratching of the assembled commissioning team.
There is a long and ongoing discussion about the Python indentation convention. No need to state which side the author of this log is on.
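For illustration, the failure mode looks like this (toy code, not the actual ISC_LOCK guardian):

```python
# Toy illustration of the indentation bug described above.

def carm_on_tr_intended(gain_ramp_done):
    # Intended: only report the state complete once the IN2 gain ramp is done.
    if gain_ramp_done:
        return True
    return False

def carm_on_tr_buggy(gain_ramp_done):
    # One accidental dedent later: the state completes unconditionally,
    # skipping the REFL_SERVO_IN2GAIN ramp entirely.
    if gain_ramp_done:
        pass
    return True
```

The buggy version returns True whether or not the ramp ran, which is exactly how CARM_ON_TR could hand off to the next state with the gain never set.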
In the end we were able to bring the interferometer back into full, low-noise lock at 24 W. The DARM spectrum between 10 and 100 Hz is attached, along with the previous best. Of course, our actuator has changed since the vent, so the calibration must be redone. However, I have tried to make the comparison more fair by (1) making sure the DARM OLTF is the same as at the start of ER7, and (2) rescaling the calibrated control and error signals (in the frontend) to restore the heights of the pcal lines (the required rescaling seems to be about 1.2 for both control and error). I also adjusted the EY L3 drivealign calibration gain from -50 to -30, since that is what we now use in lock.
Dan, can you paste a diff of the offending indent code? Or point to a revision in the USERAPPS SVN? It would be instructive to see the code explicitly.
Nic, Sheila
We put an excitation into the HAM6 ISI ISO Y filter bank (30000 counts at 0.3 Hz) from about 3:17-3:19 UTC July 11. We then did a by-eye fit (on a log-log scale) for a fringe wrapping model. We expected the excitation to result in 30 um motion of the OMC, but we had to use 36 um to get the fringe speed right. We get an amplitude reflectivity of 1.6e-7 for the single pass shelf (compare to 1e-5 measured in 17919). We see no evidence of a second shelf or a shelf in the null stream.
We plan to make measurements in exactly the same way as 17919, if we get a chance again tonight.
There is a typo in this alog, the reflectivity is r=160e-7, as the legend in the plot says, not 1.6e-7 as I wrote.
Stefan, Sheila, Nic, Evan
We remeasured the relative strength of the EX and EY ESDs in full lock. EX was driven with the high-range driver (40 V/V dc), and EY was driven with the low-noise driver (2 V/V dc). All other things being equal, we expect a relative strength of 20 V/V between the two actuation chains.
We found that the relative strength is instead 30 V/V (see attachment, which includes a digital gain of 30 ct/ct in the EY path). When we first did this measurement (back in May, pre-discharging), the relative strength was more like 50 V/V. So we are closer to the nominal value.
We successfully transitioned control of DARM from EX to EY with this new digital gain. We also took a quick DARM OLTF, both before and after the transition. The attachment shows an old, pre-vent OLTF (blue), today's OLTF with EX (green), and today's OLTF with EY (red).
A relatively small point, but the LV ESD Driver DC gain is actually closer to 1.9 V/V at low frequencies. There's a pole just above 150Hz.
Earlier today, I wrote a couple of out-of-loop feedforward filters to the BS ISI foton file using Foton. When I hit the load coefficients button (while the ISI was isolated; the FF paths were off, so it shouldn't have done anything), the ISI tripped, hard. It rang up the T240s pretty badly and I couldn't isolate the ISI for several minutes after. Worried that I had inadvertently written some other filter, I ran a diff between the most recent archived file and the file created yesterday when Jeff restarted the models. This showed a whole bunch of filter coefficient differences which shouldn't have been there (as reported by a diff of the two archive files; I don't know exactly what changed, see attached). Talking to Jim, Dave and Jeff, it sounds like the glitch was probably caused by my having used Quack recently (June 22nd) to load some blend filters. Jeff's model restart (and even a prior model restart on June 30) simply inherited that Quack-written file. Today was the first time the BS's foton file was opened and saved in Foton. Quack can apparently load coefficients with higher precision than Foton will accept, so when you open and save a "too high" precision filter with Foton, it rounds the coefficients off. Sudden change in precision of SOS coefficients in the blend filters = bad for isolation loops = bad trip.
We've seen this Foton vs. Quack Rounding problem before -- see e.g. https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=3553 -- and it's still biting us.
This sounds like a relatively easy thing to control for, I can think of two ways:
- getting Quack to check and do the rounding on its own before writing to the Foton file.
- have the post-build script run a "foton -c" on all filter files before the model gets restarted.
Is there someone in the CDS group who can fix this? Maybe it has been? There are several versions of Quack running around, June was my first attempt with it, maybe I used the wrong one.
I used /ligo/svncommon/SeiSVN/seismic/Common/MatlabTools/autoquack.m
https://svn.ligo.caltech.edu/svn/seismic/Common/MatlabTools/autoquack.m
Last Changed Author: brian.lantz@LIGO.ORG
Last Changed Rev: 7939
Last Changed Date: 2014-02-14 15:38:15 -0800 (Fri, 14 Feb 2014)
Text Last Updated: 2014-02-14 15:48:16 -0800 (Fri, 14 Feb 2014)
We should use the readfoton script to read and plot the installed filters; I can do that.
I suspect that the problem appears because a change (however small) in the filter coefficients causes the filters to reset (clear history, start over), and a reset of the filter history = glitch in the output. It is easy to imagine this glitch being quite large for an ISI loop which is holding a static offset. I am working on an update to autoquack which will have it automatically call foton -c, so that the filter updates happen in a deterministic way and there is a log file telling you which filters have been touched.
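The history-reset mechanism can be seen with a toy scipy filter (illustrative only, not the actual ISI blend): even reloading identical coefficients with cleared history glitches a loop that is holding a static offset.

```python
import numpy as np
from scipy import signal

# Toy filter holding a static offset: unit DC input, settled state.
sos = signal.butter(2, 0.05, output='sos')  # arbitrary low-pass, DC gain 1
x = np.ones(2000)

# Running filter, started from its steady state for this input:
zi = signal.sosfilt_zi(sos)
y_settled, _ = signal.sosfilt(sos, x, zi=zi)

# "Reloaded" filter: identical coefficients, but history cleared to zero.
y_reloaded = signal.sosfilt(sos, x)

# y_settled stays flat at 1; y_reloaded starts near 0 and has to
# re-converge -- that step is the trip-inducing glitch in the loop.
```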
Filed ECR https://services.ligo-wa.caltech.edu/integrationissues/show_bug.cgi?id=1077. For testing of a possible solution, see https://alog.ligo-la.caltech.edu/SEI/index.php?callRep=789. -Brian