This morning there was some confusion over whether or not the EX ESD was completely off. Since the latest technique for getting a few more Mpc is to turn off the EX ESD after the ifo has achieved full lock, it is important for operators to know whether it is all the way off. After talking with Evan, who cleared up my confusion, this is what I've found.
There are three different states that the ESD can be in:
The confusion this morning arose from the "Active" light on the ESD medm screen. The light was red, but the Analog Monitors still showed a few hundred counts. This is because the ESD DC bias was OFF while the driver was on. To avoid this in the future, I added a third light to ESD Active. If the Active light is:
(Legend on the medm so you don't have to memorize that, screenshot attached)
Hopefully this will eliminate any future bewilderment.
Also, we updated the guardian to turn the EX ESD on and off as appropriate.
In LSC_FF, if the EX bias has been ramped to zero, the guardian will write 1 to H1:ISC-EXTRA_X_BO_3. This is then reset to 0 after a few seconds (by Beckhoff?). This deactivates the driver.
In DOWN, if the analog readbacks for the driver are close to zero, the guardian will again write 1 to H1:ISC-EXTRA_X_BO_3. This is again reset to 0 after a few seconds. This activates the driver.
Occasionally the reset mechanism seems to freeze, so that cycling BO_3 does not do anything. In this case, it usually works to do the following:
caput H1:ISC-EXTRA_X_BO_4 0
caput H1:ISC-EXTRA_X_BO_4 1
And then write 1 to H1:ISC-EXTRA_X_BO_3.
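For reference, the recovery sequence can be sketched in a few lines of Python. This uses a stand-in caput (a plain dictionary write) so the order of operations can be read and checked without an EPICS installation; in the control room you would use the caput command line tool or pyepics against the real channels, exactly as above.

```python
# Stand-in for EPICS caput: record the last value written to each channel.
# In practice this would be epics.caput from pyepics (assumption for illustration).
pv_state = {}

def caput(channel, value):
    pv_state[channel] = value

def recover_frozen_esd_toggle():
    """If cycling BO_3 alone does nothing, cycle BO_4 first,
    then command the driver state change via BO_3 (per the entry above)."""
    caput('H1:ISC-EXTRA_X_BO_4', 0)
    caput('H1:ISC-EXTRA_X_BO_4', 1)
    caput('H1:ISC-EXTRA_X_BO_3', 1)

recover_frozen_esd_toggle()
```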
Glitches have been showing up in the IFO range since the lock that started at about 17:00 UTC, and the rate of glitches has been about the same throughout the morning.
The crew took a break at 11:00 local time, and all glitches stopped. I verified that no one in the CR had made a change. Glitches resumed about 40 minutes later, which coincides with the crew starting to clean again.
20:22:00 UTC, beam tube cleaning starts.
The first IFO glitch in this lock happens within one minute.
Bubba has had the crew mark where they were working at the time.
Robert is heading down to investigate.
The attached plot shows the tidal feedback to the ETM HEPIs. The available range is ±250µm. The maximum drive used so far was ~170µm. There seem to be a couple of times where the X-end HEPI walked away between lock stretches.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=16970
Since there's some chance that we're not going to go into EX in the upcoming vent, I checked the trend of the X green QPD SUM over the past 5 months.
They only changed by 1% or so, and there's no reason to believe that it's getting worse. Certainly we will survive O1 without fixing it.
The quad matlab model script, generate_QUAD_Model_Production.m, has 3 new features:
1) reading in live damping filters
2) reading customized parameter files for the suspensions
3) building models with fiber violin modes based on the measured frequencies and Qs
svn up the following directory: ../SusSVN/sus/trunk/QUAD/Common/MatlabTools/QuadModel_Production/
Here is an example of how to build a model with all 3 features:
quadModel = generate_QUAD_Model_Production(frequency_vector,'h1etmy',[],0,true,'liveh1etmy',0,0,true,8);
* frequency_vector is some frequency vector you specify.
* 'h1etmy' is the customized H1 ETMY parameter file (quadopt_fiber is the general quad file). All other quads have parameter files now too, but h1etmy is the only unique one at this point.
* true (the first one) says you want damping
* 'liveh1etmy' says you want live filters from H1 ETMY. The 'live' prefix is what triggers the flag to read live filters. The code imports the filters from 5 minutes in the past, to allow for some latency.
* true (the second one) says you want violin modes
* 8 says you want the first 8 violin modes in the model (2 is the default if not specified)
The h1etmy.m parameter file includes the measured mode frequencies for the first 8 violin modes at the bottom of the file. More can be added as desired. There is also a placeholder for the measured Qs, but I am not aware of any measured values. The violin mode script (called by the generate script) will incorporate any measured values found in the parameter file and will use modeled values for those that are not specified.
I'll leave the testing of the live filter reading to the people at the sites, since this is harder to do off site. Please let me know if something doesn't work properly. I've only gone as far as checking that it compiles (on my laptop) and that the closed loop system is stable.
The generate script has instructions for setting up your computer to run the live filters. This text is copied here:
> Follow noise budget SVN checkout instructions here
https://awiki.ligo-wa.caltech.edu/aLIGO/NoiseBudget
Checkout the following addpath paths:
svnDir = '../NbSVN/aligonoisebudget/trunk/' (the '..' must be the same as for the SusSVN, e.g. /ligo/svncommon/)
addpath(genpath([svnDir 'Externals/SimulinkNb/']))
addpath([svnDir 'Common/Utils/NoiseModel/'])
addpath([svnDir 'Common/Utils/'])
> Follow NDS2 install instructions here
https://wiki.ligo.org/viewauth/RemoteAccess
Matlab should be pointed to this version of liveparts.m:
../NbSVN/aligonoisebudget/trunk/Externals/SimulinkNb/SimulinkNb/liveParts.m
Corey noted the watchdog trip of the BS ISI early Sunday morning. As he said, this was just a State3 trip, where the stage isolation and feedforward are blocked but the damping remains engaged. Again, this is the normal state of operation for the BS ISI, so this trip had no impact on the performance of the platform or the IFO. A State3 watchdog trip is entered if the motion level returns to an acceptable magnitude within 2 seconds.
Attached is the waveform as seen by the GS13 on the BS. It looks like it could be an earthquake, but I can't be positive, as the velocities and arrival times of the different wave types of teleseismic events are not simple.
The candidate earthquake would appear to be the M4.5 event at the far northwest end of the Aleutian Trench. However, the low-latency online Earthquake Monitor does not put the arrival of any of the EQ wave types at the site at the time of this watchdog trip; they are all predicted to arrive tens of minutes before, so either this is not an earthquake response or the monitor's velocities and travel distances are not accurate enough.
Okay--after looking at other platforms (the GS13s on the other BSC ISIs are in loop), the HEPI L4Cs, and the ground STS2, none of which sees the same signal that tripped the BS stage 2, nor anything resembling an EQ in the predicted arrival window, I'm going to say this is not an EQ trip of the BS; it possibly came from the platform itself. More investigation required.
This is all a good thing, as we'd be in tough shape if we were vulnerable to a M4.5 EQ 5000 km away.
So this trip occurred when the IFO was not locked and based on the MICH_CTRL signal, Kiwamu sees this as an attempt to lock MICH before the alignment was close enough.
The first attachment shows MICH_CTRL, the LSC-TR signal (not locked), and the GS13s. The second attachment is zoomed into the start of the event; you can see that the MICH event begins before the GS13 response. The second vertical black bar is the watchdog trip time.
An audible alarm on this would have allowed the operator to untrip the watchdog before being in Science Mode [sic].
Detchar
Conclusion: now with three data points, when the PRM M2 coils' mean value gets close to 2^16 counts (there is some RMS on top of that, not shown), we see low-frequency glitches in DARM. Second conclusion: today's lockloss at 4:45 UTC was likely caused by PRM M2 hitting the DAC limit of 2^17. The tidal offset that was tried before should fix that at least.
Dan Hoak on the call pointed out that there were two more times with PRCL/SRCL glitches seen in hveto. I used these three times to make the attached plot as an update to previous studies. Back then we were not able to make a smoking-gun correlation between 2^16 crossings in PRM and these glitches, which is unsettling. We will look at these new times to see if it's more convincing.
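The check described above can be sketched as a small script: flag the samples where the mean coil drive first exceeds half the DAC range (2^16 counts) on its way toward the 2^17 saturation limit. The thresholds are the ones quoted in this entry; the drive series here is a synthetic ramp, not real channel data.

```python
# Thresholds from the entry above: glitches in DARM appear when the mean
# PRM M2 drive nears 2**16 counts; 2**17 is the hard DAC saturation.
HALF_RANGE = 2**16
DAC_LIMIT = 2**17

def crossing_indices(drive, level):
    """Indices where |drive| first rises above `level` (upward crossings only)."""
    hits = []
    above = False
    for i, x in enumerate(drive):
        now = abs(x) >= level
        if now and not above:
            hits.append(i)
        above = now
    return hits

# Toy drive: a slow ramp that crosses 2**16 but never saturates.
drive = [1000 * i for i in range(100)]  # peaks at 99000 counts < 2**17
print(crossing_indices(drive, HALF_RANGE))  # one crossing, at sample 66
print(crossing_indices(drive, DAC_LIMIT))   # none
```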
Today:
- LLO is recovering from a power outage
- Hugh looking at cause of BS trip over weekend
- Kingsoft will be on site to replace RO filters
- John may change chilled water setpoint if IFO loses lock
- There will be a vent planning meeting at noon

Tuesday:
- Betsy to run charging measurement
- Jim B. to perform filesystem maintenance on framewriters
- Richard to look at cosmic ray detector
- Kyle to work on TMDS piping and run pumps at end X
- Jeff B. to set up dust monitors at end stations
Power Outage: LLO just experienced a power outage around 1350 UTC. https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=18564
Comments related to this report: richard.oram@LIGO.ORG - 10:21, Monday 08 June 2015 (18568): The power outage at LIGO lasted 5 seconds.
The DEMCO control room informed us that the main feed from Watson to the LIGO substation was interrupted unexpectedly (possibly by a tree; they are sending crews out to investigate the cause).
The automatic breakers detected the momentary short and re-established the connection after a five-second pause. DEMCO does not expect any further interruption but will not guarantee it. We have started power outage recovery procedures.
When one makes a "comment" on an entry, there is a "Title" field (filled by default with "Comment to [original entry's title]"), but one can change the title. However, whatever is in the comment's title field does not show up with the comment entry. It would be nice to be able to tag your comment/sub-entry with a title.
I've noticed that as well. I would suggest filing a bug against the "alog" product in the CDS bugzilla:
https://bugzilla.ligo-wa.caltech.edu/bugzilla3/enter_bug.cgi?product=alog
You should also make the "Section" of this log entry be "Logbook Admin".
This is already an open issue in the aLOG product of the bugzilla Jamie mentions: Bugzilla entry 870. That bug was opened by Keith based on an original aLOG that did as Jamie suggested, LHO aLOG 3728.
Attached is the output from dmesg on h1broadcast0. It is reporting a divide error in the daqd process and suggesting a reboot. We should consider scheduling this for tomorrow's maintenance.
The failure mode of the GDS broadcaster (no updates to GPS time, etc.) is consistent with a failure of the 'myri10ge' network driver. That driver controls the 10G adapter taking in data (at 16 Hz) from the data concentrator. If it gets no data, it does not update the GPS time, etc., carried in the data packet. As for solutions, perhaps restarting the machines more often than once every 9 months will help. We are also looking at updating the OS on the DAQ computers, which would include newer drivers; they are built into the kernel in Linux 3.2.
0:02 Came to find IFO unlocked. AS Air shows 01 mode.
0:37 Lock loss at LOCK_DRMI_1F after trying to adjust PR3 and BS (the requested state was not met)
0:41 Not trusting the alignment due to the 01 mode on AS Air I saw earlier, I restarted the initial alignment.
ALS-C_COMM_A_DEMOD_RFMON was less than 5. I adjusted PR3 until the value reached 5.
INPUT_ALIGN gives repeated message "arm not IR locked" and " timer[ 'pause' ] = 3". LSC-TR_X not flashing. No spot on AS Air.
1:35 Got off the phone with Sheila. She suggested SR2 and SR3 alignment might be bad. I realigned them to where they were 10 hours ago (when the IFO was still locked). It didn't solve the problem. Sheila then suggested I realign PR2 and IM4 as well.
At this point the beam is back on AS Air, but LSC-TR_X still not flashing. So I realigned PR3, BS, ETMX, and ITMX back to where they were 10 hours ago. Lost ALS-X in the process.
3:58 Relocked ALS-X. INPUT_ALIGN request state finally met (*phew*). I tweaked PR2 YAW but couldn't get LSC-TR_X to reach 1 (it was ~0.97 -- close enough?).
Adjusted BS during MICH_DARK_LOCK. The requested state was never met. I tweaked BS until ASAIR_B_RF90 went to 0 and moved on to the next state.
5:00 ALS_DIFF told me to find IR by hand, so I tweaked the VCO until IR was found. If you (fellow operators) ever have to do this: the narrow peak you see in LSC-TR_Y is a lie (in the sense that you can't get LSC-TR_Y to rise and become stable from there). Tweak the VCO until you get a broad peak, then decrease the step size and adjust the VCO until LSC-TR_Y is steady around 1.
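The coarse-then-fine tweaking above can be sketched as a simple one-dimensional search. Here `readback` is a stand-in for watching LSC-TR_Y, and the VCO units, peak position, and step sizes are all made-up numbers for illustration; the point is only the structure of the procedure (coarse scan for the broad peak, then refine with a smaller step).

```python
def scan(readback, center, step, n_steps):
    """Return the setting within +/- n_steps*step of center that maximizes readback."""
    candidates = [center + k * step for k in range(-n_steps, n_steps + 1)]
    return max(candidates, key=readback)

def find_ir(readback, vco_guess, coarse=5.0, fine=0.5, n_steps=20):
    """Coarse scan first (to land on the broad peak), then reduce the step
    size and refine, mirroring the manual procedure in the entry above."""
    v = scan(readback, vco_guess, coarse, n_steps)
    return scan(readback, v, fine, n_steps)

# Toy transmission readback: a broad peak centered at 103.2 (arbitrary units).
peak = 103.2
readback = lambda v: 1.0 / (1.0 + (v - peak) ** 2)
print(round(find_ir(readback, 100.0), 1))  # lands within one fine step of 103.2
```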
5:20 Lock loss at BOUNCE_VIOLIN_MODE_DAMPING. No obvious reason. Rt. 10 traffic is picking up though. Lockloss analysis attached.
5:50 IFO at DC_READOUT -- waited until the IFO settled before requesting LSC_FF
5:55 Locking at LSC_FF (nominal). Range ~50 Mpc. Intent bit switched to undisturbed.
6:06 Sheila suggested earlier that I turn the ring heater power down to 0.4 W after the ifo is locked (from 0.5 W). So I changed the RH power to 0.4 W. The intent bit is still undisturbed at this point.
8:00 Still locked and going strong. Handing off the ifo to Patrick (covering for Cheryl).
If we lose lock again during the day, I suggest whoever has the interferometer at that point redo the initial alignment, and maybe realign the optics I haven't touched or mentioned in this alog. The AS Air spot is off to the right and the range isn't that great (I've seen the ifo stable at a better range, though Vern is happy with where it is). The drift monitor shows that OMC pitch and yaw are "Somewhat out of range" and in the "Danger Zone".
With 23W operations, there is a step operators had been performing for the EX ESD Driver (after Stefan and Evan's high-power work last Friday). I can't remember how to get to this medm without it in front of me, but basically there is a smallish medm window with an ESD ACTIVE box (RED or GREEN) on the lower right; above it are HI and LO buttons with a skinny red or green box in between (green when you are in the LOW state, red when you are in the HIGH state). To the right of these are monitors.
At the end of my lock yesterday, ESD ACTIVE was red and the monitors were several hundred counts in the negative (TJ showed me that they should be around zero). So I clicked the HI button, which briefly took ESD ACTIVE to green and then back to red. More importantly, this took the monitors to around zero (it also showed a huge glitch on DARM on projector0), and the range climbed from 56 Mpc up to 66 Mpc.
Are we happy with 23W operation? Is deactivating EX ESD Driver something we want to make part of standard operating procedure? Do we want Guardian to take care of this for us?
Starting right around 21:00 local, the 25.4 Hz peak (well, DTT says something like 25.37) showed back up, creating a nasty-looking comb in the DARM spectra (first image). Unsure of what else to do, I turned the power down to 16 W, and the peak has now mostly subsided (second image). I'll wait to see if the peak settles down any more, then maybe turn the power back up.
Dan called in and helped me look for a PI with a template he had ready. There were two peaks rung up, at 15516 and 15540 Hz; see the attached plot. The main culprit causes the big bump in the RMS; the little peak next to it was also rung up more than Dan's quiet reference. We were troubleshooting when the IFO lost lock. Guardian is bringing everything back up.
Thanks, Jim!
The channel H1:IOP-LSC0_MADC0_TP_CH12 is the 64kHz-sampled IOP input for OMC DCPD A.
Recall that the frequency of this mode matches what Elli measured for our parametric instability.
The other approach would be to change your ring heater power. I guess you are in the same situation as us, given that the previous step up in ring heater power was effective. So you need to increase your ring heater power from 0.5 W per segment, which is what I think it was set to after your first observation of parametric instability. At Livingston, if we increase the ring heater too much we then ring up a 15004 Hz mode. There is a new wiki here for operators, as we had an apparent change in the parametric gain after the last vent and we are still in the process of finding a new operating point for the ring heater.
The two acoustic modes appear to ring up with a similar time constant (see image). There is also a peak in DARM a bit further down (ldvw link); any idea if this is related? It's a bit big for an acoustic mode that is not ringing up.
I would guess that these two modes are ETMY and ETMX ringing up with the vertically oriented mode, as that is what we mostly see at Livingston. You could look at the transmission QPD channels for more information if you are interested.
At Livingston these channels are L1:IOP-ISC_EY_MADC1_TP_CH0-7 for X and Y end, pitch and yaw orientations can be derived.
I followed up the range drop tonight at 2015/06/07 around 8:30 UTC. Here are the symptoms:
- 25.4 Hz and harmonics ringing up and saturating everything.
- Significant increase of two lines: 842.781 Hz and 868.188 Hz (yes, they are 25.407 Hz apart). Those are marked with crosses in the attached plots. Note that some other lines also increased (red) over the reference (black), but they are symmetric 25.4 Hz sidebands of strong lines that did not increase. The two lines at 842.781 Hz and 868.188 Hz are not such modulation sidebands.

Parametric instability? alog 17903 reports an observed parametric instability at 15540.6 Hz, causing a line at 843.4 Hz. That seems close enough to suspect it as the culprit for 842.781 Hz. But what is 868.188 Hz? And why does the 25.4 Hz show up so strongly?

BS roll mode? My first thought on seeing something at 25 Hz was BS roll. But T1200415 reports the BS roll mode at 25.9715 Hz... If we believe that, 25.4 Hz can't be the BS roll. Do we have an actual recent measurement of the BS roll? I did try to look at the BS oplevs for a sign of the roll mode rung up - nothing. I haven't looked at the OSEMs yet.

All attached plots were taken at 8:15 UTC, just before it got really bad. The black reference is from 7:00 UTC. I also left instructions with Cheryl on how to lower power if this happens again; if that fixes it, it would nail the PI. Running off to the airport, but Cheryl will follow up.
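The sideband bookkeeping above can be checked with a few lines of Python. The line frequencies are the ones quoted in this entry; `is_sideband` is a hypothetical helper, and the 331.9 Hz parent line is a made-up example, not a measured feature.

```python
# Spacing between the two new lines quoted above.
MOD = 868.188 - 842.781   # 25.407 Hz, matching the strong low-frequency line

def is_sideband(f, parents, delta=MOD, tol=0.05):
    """True if f sits one modulation spacing above or below any parent line
    (hypothetical helper for sorting sidebands from standalone lines)."""
    return any(abs(abs(f - p) - delta) < tol for p in parents)

print(round(MOD, 3))

# A line 25.407 Hz away from a (made-up) strong parent at 331.9 Hz would be
# dismissed as a modulation sideband; the two new lines have no such parent.
parents = [331.9]
print(is_sideband(331.9 + MOD, parents))   # True  -> just a sideband
print(is_sideband(842.781, parents))       # False -> a genuinely new line
```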
If you want to look at the trans QPDs you have to look at the IOP channels, is that what you were looking at?
[Daniel, Paul]
We have updated our LHO FINESSE file (https://dcc.ligo.org/LIGO-T1300904) to compute an estimate of the DCCP as requested by Jeff.
The DCCP value we calculated was 369 Hz. This is for a well mode matched and aligned setup.
The optics details were taken from the galaxy page for the various mirrors installed at LHO. We also tried to get some more accurate losses in the arms taken from the following documents:
| Optic | Loss [ppm] | LHO aLOG ref. |
| ITMX | 42 | N.A. |
| ETMX | 78 | 16082 |
| XARM | 120 | 16579 |
| ITMY | 30 | N.A. |
| ETMY | 125 | 15919 |
| YARM | 155 | 15937 |
The caveat here is that the DCCP value has been seen to wander depending on alignment, as was reproduced in some of Gabriele's simulations, https://dcc.ligo.org/LIGO-G1500641. There Gabriele found a slightly higher well-aligned DCCP value of ~380 Hz. We know that the DCCP value depends on many variables (alignment, mode mismatch in the SRC, arm losses, etc.), some of which are not yet well constrained by measurements.
Attached is a plot of the DARM TF from the Finesse model, which is well described by a single pole at 369 Hz.
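For reference, a single real pole at f_p = 369 Hz gives the magnitude response below. This is just the standard single-pole formula, not output from the Finesse file itself:

```python
import math

F_POLE = 369.0  # DCCP estimate from the Finesse model above

def single_pole_mag(f, fp=F_POLE):
    """|H(f)| for a single real pole at fp, normalized to 1 at DC."""
    return 1.0 / math.sqrt(1.0 + (f / fp) ** 2)

# At the pole frequency the response is down by 1/sqrt(2), i.e. -3 dB.
print(round(single_pole_mag(F_POLE), 4))   # 0.7071
```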
This is my own reckoning of the loss measurements we've performed after the ETMs were cleaned in December. Note that these visibility measurements do not independently measure losses from the ITMs or the ETMs; they just give the total arm loss from scatter and absorption.
If these measurements are not satisfactory, we could always repeat them.
We could also repeat the ringdown measurements, but we would need to be more careful when collecting the data. Last time, the incident IR power on the arm fluctuated from lock to lock, which made the uncertainties in the inferred losses much too big for the measurements to be usable.
X arm:
| Loss [ppm] | Date | Method | alog | Notes |
| 78(18) | 2015-01-14 | Visibility | 16082 | — |
Y arm:
| Loss [ppm] | Date | Method | alog | Notes |
| 286(33) | 2015-01-05 | Visibility | 15874 | Green WFS not on |
| 125(19) | 2015-01-07 | Visibility | 15919 | — |
| 155(19) | 2015-01-08 | Visibility | 15937 | — |
| 140(16) | 2015-01-09 | Visibility | 15991 | — |