I took a look at the scattering noise we have seen in every lock since last night.
I computed the band-limited RMS between 20 and 120 Hz, which is a good indicator of the scattering noise level. Then I looked at correlations with all suspension motions (using the M0 and M1 signals, as Keita did for the OMC).
So I'm able to reconstruct the noise variation over time using a linear combination of all the suspension signals and their squared values. However, I'm not able to pinpoint one single mirror that is moving more, as shown in the ranking of the most important channels for the BLRMS reconstruction.
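A minimal sketch of this kind of reconstruction (the channel data here is synthetic; the real analysis used the M0/M1 suspension signals):

```python
import numpy as np

def reconstruct_blrms(blrms, sus_signals):
    """Least-squares fit of a BLRMS time series as a linear combination
    of suspension signals and their squared values. Returns the fit and
    the coefficient magnitudes, which can be used to rank channels."""
    # Design matrix: each signal, each squared signal, and a constant term
    cols = list(sus_signals) + [s**2 for s in sus_signals]
    X = np.column_stack(cols + [np.ones_like(blrms)])
    coeffs, *_ = np.linalg.lstsq(X, blrms, rcond=None)
    return X @ coeffs, np.abs(coeffs[:-1])

# Synthetic example: the BLRMS is driven mainly by the square of signal 0
rng = np.random.default_rng(0)
sus = [rng.normal(size=1000) for _ in range(3)]
blrms = 2.0 * sus[0]**2 + 0.1 * rng.normal(size=1000)
fit, ranking = reconstruct_blrms(blrms, sus)
```

In the synthetic case the ranking singles out the dominant term; in the real data it did not point at any one suspension.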
I compared the suspension motion spectra from tonight (GPS 1123422219) and a few days ago (GPS 1123077617). The most relevant difference is that the YAW motion of all the test masses now has a large bump at 3 Hz. ITMY also has large lines at 0.45 and 0.63 Hz. Finally, ETMY pitch shows a large line at 6 Hz and some excess noise above that frequency.
Not sure if all of this is really relevant...
Checked the End-Y building and found no apparent problems with doors, door seals, or penetrations into the building. Given the general air quality (high winds, smoke, dust, and no rain) over the past few weeks, spikes or upward trends in counts would not be surprising.
At End-Y, there were 17 spikes over 1000 0.5-micron particles over the last 60 days. The 60-day trends do not show any upward slope. Over the last 24 hours, there were seven spikes over 1000 0.5-micron particles. There were seven recorded entries into End-Y during Tuesday's maintenance window, at least one during which the roll-up door between receiving and the VEA was open. The 0.5-micron plot shows a sharp spike during the maintenance window, which trends upward and then drops. There is a second sharp spike from last night/this morning. There is some correlation with the wind speeds during this period. The air filtration system takes several hours to clean the air within the VEA.
The alarm levels were set much tighter during the vent activities when the chambers were open. They will be relaxed to the class-10000 levels for O1.
Several things to keep in mind:
1). The 227b monitor at End-Y is archaic and not entirely dependable. It is slated for replacement.
2). The 227b samples for 20 seconds, then multiplies the sample by 30 to correct it to the standard 0.1 CFM sample rate (at 0.1 CFM, 20 seconds draws 1/30 of a cubic foot). Sample times of less than 60 seconds are known to be less reliable.
3). The VEAs are required to maintain a clean-100000 level. The dust monitors are set to alarm at less than a clean-10000 level. The counts recorded are well below these levels.
4). The reference particle size for a class designation is 0.5-microns not 0.3-microns. The 0.3 particle size counts tend to be 10x those of the 0.5 particle.
5). Based upon particle and air temperature trends the internal air filtration system is working properly when the doors are closed.
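The ×30 correction in item 2 is just a volume normalization: at 0.1 CFM, a 20-second sample draws 1/30 of a cubic foot, so the raw count is scaled by 30 to report counts per cubic foot. A quick check of that arithmetic (the raw count here is made up):

```python
# 227b sampling parameters from item 2 above
flow_cfm = 0.1            # pump flow, cubic feet per minute
sample_s = 20             # sample duration, seconds

volume_ft3 = flow_cfm * sample_s / 60.0   # volume drawn = 1/30 ft^3
scale = 1.0 / volume_ft3                  # multiplier -> counts per ft^3

raw_count = 16                            # hypothetical raw particle count
counts_per_ft3 = raw_count * scale
```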
End-Y Measured Particle Counts, 08:09 through 08:23, 08/12/2015
Location | Particle Size | Counts
---|---|---
Outside | 0.3-micron | 2,529,000
Outside | 0.5-micron | 209,600
Outside | 1.0-micron | 38,690
Receiving | 0.3-micron | 124,880
Receiving | 0.5-micron | 9,870
Receiving | 1.0-micron | 2,330
Change Room | 0.3-micron | 2,980
Change Room | 0.5-micron | 1,750
Change Room | 1.0-micron | 1,940
VEA next to 227b | 0.3-micron | 1,080
VEA next to 227b | 0.5-micron | 480
VEA next to 227b | 1.0-micron | 200
DetChar has requested that we generate the front-end model ADC channel lists. This is a new RCG feature in which all the ADC channels a model uses are listed, along with the name and type of the first connected part in the Simulink model. This feature is in the trunk release of the RCG and is not available in RCG-2.9.6. To generate the lists I performed a "make -i World" in the trunk build area (taking care to save the H1.ipc file beforehand and ensuring it was not modified). I then copied the files from the build area into the location these files will eventually occupy (/opt/rtcds/lho/h1/chans/adc/). I then copied these files over to the exports area of the web server for offsite access:
https://lhocds.ligo-wa.caltech.edu/exports/adc/adclist_11aug2015/
Note that the ADC numbering always starts from zero and is not necessarily the physical card number. For example, h1tcscs.mdl has two ADC parts (called ADC0 and ADC1), which are card_num 2 and 3 (the third and fourth ADCs in the chassis). In the h1tcscs_adclist.txt file the cards are referenced as 0 and 1.
- commissioning / testing long locks all shift
- alignment investigation ongoing
--- good test to run today: with IMC locked, change IM(1-3) by some small amount, look for the change on IM4 Trans, alog results
- EY is dusty!
Some random commissioning tasks from tonight:
LSC rephasing
In full lock, the phases of POP9 and POP45 were adjusted to minimize the appearance of PRCL in POP9Q and the appearance of SRCL in POP45Q. Then the input matrix element for POP9I→SRCL was tuned to minimize the appearance of a PRCL excitation in the SRCL error signal. New settings attached.
We should take a sensing matrix measurement sometime soon.
LSC OLTFs
I took OLTFs of PRCL, MICH, and SRCL. The data are attached.
[I also did some noise injections into CARM for frequency budgeting purposes. Those are attached too.]
Front-end LSC triggering
Jamie and I started this a few weeks ago, and now it is complete.
There are occasions when the DOWN state of the ISC_LOCK guardian (which, among other things, turns off feedback to the suspensions) is not run immediately after a lockloss (e.g., because the guardian is paused or in manual mode). Therefore, Jamie and I set up the LSC trigger matrix so that PRCL, MICH, SRCL, DARM, and MCL are turned off if POP DC goes below 100 normalized counts. This is set in the DRMI_ON_POP state.
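Schematically, the trigger condition described above behaves like this (the real implementation is the front-end LSC trigger matrix, not Python; the threshold and loop names are taken from the text):

```python
LOOPS = ("PRCL", "MICH", "SRCL", "DARM", "MCL")
POP_DC_THRESHOLD = 100.0  # normalized counts, as set in DRMI_ON_POP

def loop_outputs_enabled(pop_dc):
    """Return whether each LSC loop's feedback to the suspensions is
    enabled, given the normalized POP DC power."""
    enabled = pop_dc >= POP_DC_THRESHOLD
    return {loop: enabled for loop in LOOPS}
```

This gives the same safety as the DOWN state even when the guardian is paused or in manual mode.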
CARM gain redistribution in the Guardian
The state REFL_IN_VACUO now redistributes the CARM gain slightly in order to improve the noise performance of the CARM loop. This state has not been tested and has been left commented out.
The CARM gain code was uncommented and seems to work fine.
Dan, Cheryl, Evan
Sometime after 6 am local yesterday, we lost the ability to have stable locks with full input power.
It's not clear what about the interferometer changed, but the symptom again seems to be a slow drift of the AS36→SRM yaw loop which causes a sudden lockloss after a few minutes at full power.
In the end we simply retuned the input matrix elements for this loop in order to keep AS90 and POP90 from drifting downward at high power.
Before, the matrix elements were
ASA36I→SRC1: −3
ASB36I→SRC1: +1,
and now they are
ASA36I→SRC1: −3
ASB36I→SRC1: +0.5.
This combination was not chosen particularly carefully, but it is good enough to get to high power. We may want to see if there is a better combination.
The attachment shows the behavior of the four AS36 signals during power up and the subsequent full-power lock.
Although we've recovered high-power operation, there is severe scattering causing shelves reaching up past 100 Hz.
I checked the velocity of OM1, OM2, OM3 and OMC measured by BOSEMs just to make sure that these are not shaken badly, and they are not.
In the first attachment, the thicker lines are from this morning and the thinner ones are from four days ago. All of the plots are already multiplied by 2*pi*f in the DTT calibration, so these are velocities.
Doesn't look like these are moving more than before.
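The 2*pi*f factor mentioned above is just the frequency-domain derivative; as a sketch (illustrative numbers, not the actual BOSEM data):

```python
import numpy as np

def displacement_to_velocity_asd(freqs_hz, disp_asd):
    """Convert a displacement amplitude spectral density to a velocity
    ASD by multiplying by 2*pi*f, i.e. differentiation in frequency."""
    return 2.0 * np.pi * np.asarray(freqs_hz) * np.asarray(disp_asd)

freqs = np.array([0.5, 1.0, 2.0])      # Hz
disp = np.array([1e-6, 5e-7, 1e-7])    # m/rtHz, made-up values
vel = displacement_to_velocity_asd(freqs, disp)
```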
The second attachment shows the non-stationary low-frequency behavior from this morning.
10 day plot of dust at EY - do we know why EY is so dusty?
Annoying dust alarm at EY silenced with alarm level changes:
- old: 2000 (yellow dashed line), 3000 (red dashed line); note that EY routinely goes above these alarm levels
- new (not saved but running tonight): 5000 (yellow solid line), 10000 (red solid line)
Jenne, Sheila, Terra, Cheryl, Corey
Tonight we worked on some ASC (the wind was gusting to 30 mph when we started; the IFO locked just fine, but the buildups were fluctuating at low frequencies). At first we wanted to check on CHARD. We increased the gain for CHARD P by 6 dB, and then were able to add a gentle boost (the same MSboost used in DHARD) to increase the gain at low frequencies. The attached screenshot and template were measured at 3 Watts, with the new configuration (which is in the guardian) and the ITM oplevs on. For an old measurement taken at 23 Watts, see Evan's alog 19934.
Jenne did a similar job for yaw, adding the same gentle boost and including the 20 dB in the guardian. Both the pitch and yaw loops could have UGFs below 1 Hz as the power increases, but they should be stable.
Once we increased the power to 24 Watts, we found we weren't stable for more than 5 minutes. This happened both before and after the CHARD changes. We ran Hang's lockloss script, which picked the MICH ASC loops as a suspicious channel. We locked at 15 Watts, where we seemed to be indefinitely stable, and measured both MICH loops. They look fine; I'm just attaching them here so we have the templates.
For posterity, here is the CHARD Yaw measurement.
Blue is before putting in the MSboost, red is afterward. It's hard to see the effect of the boost in magnitude, since we didn't wait to finish the low-frequency portion of the measurement, but the phase discrepancy between red and blue exactly matches the Foton filter, so we were satisfied.
Roughly since last night (05:00 UTC [10 pm PT]), EY has been steadily increasing in dust levels (and obviously alarming constantly the whole time). The 0.3 um counts are over 10,000. No other VEAs are seeing an increase in dust like this building. I don't see a temperature increase. Could a door be open? An HVAC issue? Something burning?
(Will send an email to Bubba, Jeff B, John)
FRS Report 3447 entered.
V. Sandberg & R. McCarthy
The SUS-TMSX OSEM Satellite Box was driving a ~2kHz oscillation on its DC power line again. (See https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=20428) This time we replaced the satellite box. Verified that H1:SUS-TMSX_M1_OSEMINF_RT_IN1 and H1:SUS-TMSX_M1_OSEMINF_SD_IN1 were behaving themselves. Work was done during the interval 18:00 - 18:45 UTC (11:00am - 11:45am PDT).
Apology: I failed to follow my own rules and neglected to notify the operator on duty before we left and changed out the satellite box at End X. Shame and humiliation duly noted. :-( Fortunately, no damage was done.
After the replacement there was a DC readout lock (Aug 12, 4 UTC). The TMSX QPDs look cleaner during this lock, although not quite perfect. The first plot is the spectrum of the TMSX QPD sum at three times: ER7 (blue), Aug 1 (red), and today (green). The second plot is the same for TMSY. It looks like there might still be some excess noise with the same shape but a much smaller amplitude. We'll check this again when there's a nominal lock.

Contrary to what I reported in my previous alog, it is possible to see the 1821 Hz noise in the OSEMs even though they are recorded at 256 Hz. The third plot shows the OSEM RT signal (the SD signal is the same) at four different times. Blue is the time when Matt and Evan alogged the problem. Red is the lock after the amplifier had been smacked and the audible oscillation was gone. Green is Aug 8 (a week later), and yellow is today. The noise is gone today after the amplifier was replaced.

The confusing thing is that the red trace is from the same time as the plot in my alog. It seems that the high-frequency noise in the amplifier was not there, but there was still just as much excess noise on the TMSX QPD. That might mean that the decrease in the excess noise today is just coincidental. We'll have to keep monitoring this.
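Seeing an 1821 Hz line in a 256 Hz channel is consistent with aliasing, assuming the line is not fully removed by the decimation filters; the aliased line would sit at 1821 − 7×256 = 29 Hz. A quick check:

```python
def aliased_frequency(f_signal_hz, fs_hz):
    """Frequency at which a tone above Nyquist appears after sampling
    at fs_hz, if it survives the decimation (anti-alias) filtering."""
    f = f_signal_hz % fs_hz
    return min(f, fs_hz - f)

alias = aliased_frequency(1821.0, 256.0)
```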
There's a later DC lock today that seems to have a more nominal configuration. The noise in the TMSX QPD is like it was before the amplifier swap, so it hasn't been fixed and must be unrelated to the satellite amplifier problem. The previous lock, where the noise looked smaller, had a factor of ten lower DC level than the current value. The second plot is just a quick check showing that the high frequency line from the satellite amplifier is still gone since the swap.
The S number and S/N for the "old" satellite box for SUS-TMSX_M1_OSEM SD and RT channels are:
S-number bar code S1100176
S/N 3202-022
The identifiers for the new box will be obtained at a future opportunistic entry to the EX electronics racks area.
h1susex re-install original computer WP5430
Dave, Carlos, Jim:
The original h1susex front-end computer was re-installed at EX to fix the glitching seen with the newer, faster computers. The script which was clearing the EX glitch errors has been stopped; any FEC errors will now latch on until cleared by the operator.
h1tcscs model change WP5422
Duncan, Nutsinee, Dave:
The new L1 TCS model was installed on h1tcscs. A new TCS_MASTER.mdl was used. The h1tcscs.mdl file was modified to: add the ez_ca_read parts to get the TCS CO2 and Ring Heater channels from the Beckhoff computers (corner and end stations) into the model; add new inputs to the CO2 parts for the RH input; add ODC output from the CO2 parts; add a new ODC block; and add a SHMEM ODC sender (to be ingested by the h1odcmaster model).
I discovered that there is a naming mismatch in the Beckhoff Ring Heater channels in the corner station slow controls system. There is no mismatch in the end station Beckhoff systems. I modified h1tcscs to use the H1 names.
Conlog channel reconfiguration
Dave:
After the TCS, CAL, and ASC model changes I rescanned the channel lists for Conlog. It still showed a disconnection for the Guardian LSC_CONFIG node, which I tracked down to a very out-of-date autoBurt.req file for the h1guardian0 target. I updated the autoBurt.req file and reconfigured Conlog; it is now happy.
H1LSCAUX filter coeff load
Dave:
The h1lscaux model was reporting a modified filter file. I took a look and could not see any difference between the current file and the archived file. I pushed the coeff-load button to clear this.
DAQ Restart
Dave:
After the TCS, CAL and ASC model changes, the DAQ was restarted. There were no GDS frame broadcaster changes made today (ECR is pending).
CDS overview is nice and green. The only red we expect is the TIM bits on the ETM quad suspension models now that they are running on slower computers (a rate of about four per day is anticipated). We are investigating offloading the violin mode monitors from these models to resolve this.
I have updated the RCG Versions MEDM screen; all models are running RCG SVN version 4054 (2.9.6).
Dave mentioned to me that if you notice a red TIM error on the ETM models (i.e., H1SUSEY [EX]), click the button for the offending ETM (i.e., H1SUSETMX_GDS_TP.adl). There you will notice a red CPU Max value. To clear this, hit the Diag Reset button at the upper right of the window (under the GPS time).
I did this for ETMX at around 6:50 UTC (23:50 PT).
model restarts logged for Tue 11/Aug/2015
2015_08_11 08:37 h1calcs
2015_08_11 08:57 h1iopsusex
2015_08_11 08:57 h1susetmx
2015_08_11 08:57 h1susetmxpi
2015_08_11 08:57 h1sustmsx
2015_08_11 09:00 h1alsex
2015_08_11 09:00 h1calex
2015_08_11 09:00 h1hpietmx
2015_08_11 09:00 h1iopiscex
2015_08_11 09:00 h1iopseiex
2015_08_11 09:00 h1iscex
2015_08_11 09:00 h1isietmx
2015_08_11 09:00 h1pemex
2015_08_11 10:15 h1asc
2015_08_11 12:03 h1tcscs
2015_08_11 12:54 h1broadcast0
2015_08_11 12:54 h1dc0
2015_08_11 12:54 h1fw0
2015_08_11 12:54 h1fw1
2015_08_11 12:54 h1nds0
2015_08_11 12:54 h1nds1
no unexpected restarts. CAL model change, SUS-EX computer swap-out, ASC model change, TCS-CS model change, supporting DAQ restart.
Sudarshan, Gabriele
Today, when the IFO was locked in low noise, we could see the 300-400 Hz peaks in the sensitivity again. Those are known to be due to beam jitter caused by the PSL periscope.
We looked into the performance of the ISS second loop, which was closed at the time. The first strange thing, which we can't understand, is that both the in-loop and out-of-loop signals are dominated by a flat noise background at a level of 1e-8 /rtHz. We would expect the out-of-loop signal to be at this level, but the in-loop signal should be squeezed down by the large loop gain.
We changed the ISS gain and discovered that if we turned the gain down to the minimum (-26-20 dB with respect to nominal) and disengaged both the boost and the integrator, the in-loop and out-of-loop ISS signals got worse, but the sensitivity got better at the 300-400 Hz peaks. So our guess is that beam jitter on the ISS array is actually causing excess sensing noise on the ISS diodes, which is then re-injected into the laser beam as real intensity noise when the ISS second loop is closed with high gain.
We tried briefly to optimize the centering of the beam on the ISS array. We couldn't find any spot better than the initial one in terms of power on the diodes. One possible strategy would be to move the beam into the ISS array with the IFO in full lock, and use the coupling of a PZT jitter line to the sensitivity as a figure of merit.
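The expectation that the in-loop signal be squeezed by the loop gain follows from the usual closed-loop relation: noise sensed inside the loop appears in-loop suppressed by 1 + G. A numeric illustration (the loop gain value here is an assumption, just to show the scale):

```python
def in_loop_residual(sensor_noise, open_loop_gain):
    """In-loop level of a noise entering at the sensing point of a
    simple feedback loop: suppressed by a factor of 1 + G."""
    return sensor_noise / (1.0 + open_loop_gain)

# With, say, G ~ 1e3 at these frequencies, a 1e-8 /rtHz out-of-loop
# floor should appear in-loop near 1e-11 /rtHz, not at 1e-8 as observed.
residual = in_loop_residual(1e-8, 1e3)
```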
I saw the same strange noise floor, at a level of 1e-8 RIN/rtHz, a week ago. See the spectra from this alog:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=20148
In light of this strange noise floor, I looked at some of the RIN spectra I had taken periodically. This change in the in-loop PD noise seems to have happened after we started using the actual DC signal from the PD to obtain the RIN (we had been using a hardcoded offset that was an estimate of the DC signal). So this morning we implemented a low-pass filter with two poles at 10 mHz on the DC signal, and the problem is gone. Attached is a spectrum from before and after the low-pass filter was implemented. The SDF table is appropriately updated.
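A sketch of the fix described above (hypothetical data; a simple cascaded one-pole IIR stands in for the actual front-end two-pole 10 mHz filter): the RIN is formed as AC/DC, and the DC normalization is low-passed so its own fluctuations don't leak into the computed RIN.

```python
import numpy as np

def lowpass(x, fs_hz, f_pole_hz, npoles=2):
    """Cascaded one-pole IIR low pass, a stand-in for the two-pole
    10 mHz filter applied to the DC signal."""
    a = 2.0 * np.pi * f_pole_hz / fs_hz
    a = a / (1.0 + a)               # simple discrete pole coefficient
    y = np.asarray(x, dtype=float)
    for _ in range(npoles):
        out = np.empty_like(y)
        acc = y[0]
        for i, v in enumerate(y):
            acc += a * (v - acc)
            out[i] = acc
        y = out
    return y

def rin(ac, dc, fs_hz):
    """Relative intensity noise: AC normalized by the smoothed DC."""
    return np.asarray(ac) / lowpass(dc, fs_hz, 0.01)

# Constant test signals: the RIN should be exactly ac/dc
r = rin(np.full(64, 0.1), np.full(64, 2.0), fs_hz=16.0)
```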
I looked into the glitches created by Robert's dust injections. In brief: some of the glitches, but not all of them, look very similar to the loud glitches we are hunting down.
Here is the procedure:
In this way I could detect a total of 42 glitches. The last plot shows the time of each glitch (circles) compared with the time of Robert's taps (horizontal lines). They match quite well, so we can confidently conclude that all 42 glitches are due to dust. The times of my detected glitches are reported in the attached text file, together with a rough classification (see below).
I then looked at all the glitches, one by one, to classify them based on the shape. My goal was to see if they are similar to the glitches we've been hunting.
A few of them (4) are not clear (class 0), and some others (14) are somewhat slower than what we are looking for (class 3). Seven of them have a shape very close to the loud glitches we are looking for (class 1), and 16 more are less obvious but could still be of the same kind, just larger (class 2).
See the attached plots for some examples of classes.
It seems the text file of glitch times didn't make it into the attachments, would you mind trying to attach it again?
Oops! Here's the file with the glitch times.
Gabriele, did you check which of Robert's glitches caused any ADC/DAC saturations? The glitch shape will start changing significantly once the amplitude is big enough to start saturations. PS: The ODC master channel has a bit that will toggle to red (0) if any of the subsystems reports a saturation (with 16k resolution) - it might be exactly what you need in this case.
Stefan, I checked for some ADC and DAC overflows during this data segment. The OMC DCPD ADCs overflowed during several of these. There were still some glitches with SNRs of 10,000 that didn't overflow like this. The segments are pasted below. They're a bit conservative because I'm using a 16 Hz channel without great timing. There were no ADC overflows in POP_A, and no DAC overflows in the L2 or L3 stages of the ETMs. I didn't check anything else. This is not quite the same as what the ODC does, which is a little more stringent; I'm just looking for changes in the FEC OVERFLOW_ACC channels.

1123084682.4375 1123084682.5625
1123084957.3750 1123084957.5000
1123086187.3750 1123086187.5000
1123086446.8125 1123086447.3750
1123088113.0625 1123088113.1875
1123088757.4375 1123088757.6250
1123088787.1875 1123088787.3125
1123088832.3125 1123088832.4375
1123089252.6250 1123089252.7500
1123089497.2500 1123089497.3750
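The OVERFLOW_ACC check described above amounts to looking for increments in an accumulator channel. A sketch (the channel data here is a toy array; the real data would come from NDS at 16 Hz):

```python
import numpy as np

def overflow_segments(acc_counts, t0_gps, fs_hz=16.0):
    """Return (start, stop) GPS pairs bracketing each sample interval
    in which an FEC OVERFLOW_ACC accumulator incremented."""
    acc = np.asarray(acc_counts)
    hits = np.flatnonzero(np.diff(acc) > 0)
    return [(t0_gps + i / fs_hz, t0_gps + (i + 1) / fs_hz) for i in hits]

# Toy accumulator: one overflow between samples 2 and 3
segs = overflow_segments([5, 5, 5, 6, 6], t0_gps=1123084682.0)
```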
Evan's script to automatically take frequency and intensity transfer functions is now running. As a reminder, the summing node B path was repurposed for this run; the interferometer won't relock unless we reconnect it. At 4:20 UTC we started changing the TCS X CO2 power from 0.23 W to 0.4 W. The rotation stage took us on some full circles, but by 04:23 we reached 0.4 W. At 4:45 I increased the frequency noise drive by a factor of 5 to gain back coherence. At 6:04 I decreased the power to 0.35 W, trying to find the minimal frequency noise point. (The coupling sign had changed.) At 6:34 I reduced it to 0.3 W.
It wasn't really clear from this run where the minimum in frequency coupling was (maybe because of the 5 W blast at the start), so I went back to 0.23 W of heating and let the frequency coupling reach a steady state (by driving the same line as before, this time with 100 ct amplitude). Around 2015-08-08 10:18:30 I kicked the TCS power to 0.53 W and started the datataking again.
Once the frequency coupling reached a steady state again, I made a guess for what TCS power we need to minimize the coupling. At 12:43:30 I changed the TCS power to 0.43 W.
I have reverted the ALS/CARM cabling to its nominal configuration.
Preliminary analysis from this second run suggests that, of the TCS powers that we probed, our current TCS power is the best in terms of intensity noise coupling.
The attached plot shows the transfer function which takes ISS outer loop PD #1 (in counts) to DCPD A (in milliamps). The coloring of the traces just follows the sequence in which they were taken. Black was taken at 0.23 W of TCS power, and the lightest gray was taken at 0.53 W of TCS power.
The measurements and the plotting script are in evan.hall/Public/Templates/LSC/CARM/FrequencyCouplingAuto/Run2.