Friday: Noise hunting
Sat/Sun: SRC/noise hunting
Monday: Noise hunting
Last Friday, the STS2-A near HAM2 had its igloo installed. The masses still look well centered. Attached is a view of the three LVEA instruments' ASDs and their coherence with each other. The thin current traces are from 0100 UTC, when wind and seismic motion were minimal; the thick reference traces are from a week ago.
The improved thermal state of the HAM2 unit is evident in the Z dof, where all sensors follow each other well and the coherence is much improved, especially below 40 mHz.
However, HAM2's low-frequency performance in Y & X, and especially the Y dof, shows that HAM2 continues to have a problem, even though Quanterra had the machine in their vault for a month and could not find an issue. Since the Z signal is derived from all 3 masses, this suggests the problem might be in the electronics converting U, V, W to X and Y.
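For context, the nominal symmetric-triaxial (Galperin) conversion from the U/V/W mass signals to X/Y/Z looks something like the sketch below (the exact matrix and sign conventions in the actual electronics are an assumption here); the point is that Z is an equal-weight sum of all three masses while X and Y are formed from differences, so a well-behaved Z does not exonerate the U/V/W-to-X/Y conversion.

import numpy as np

# Nominal Galperin-style transformation for an STS-2-type seismometer.
# Signs and axis conventions are assumptions; the instrument's own
# electronics may differ, which is exactly where a U/V/W -> X/Y problem
# could hide.
UVW_TO_XYZ = np.array([
    [2.0,         -1.0,          -1.0],           # X: difference of masses
    [0.0,          np.sqrt(3.0), -np.sqrt(3.0)],  # Y: difference of masses
    [np.sqrt(2.0), np.sqrt(2.0),  np.sqrt(2.0)],  # Z: equal-weight sum
]) / np.sqrt(6.0)

def uvw_to_xyz(u, v, w):
    """Convert the three mass signals to X, Y, Z ground motion."""
    return UVW_TO_XYZ @ np.array([u, v, w])

# Example: a signal seen identically on all three masses shows up only in Z.
print(uvw_to_xyz(1.0, 1.0, 1.0))   # ~[0, 0, 1.73]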
The Z coherence plot suggests the HAM2 and HAM5 Z dofs are doing well but the ITMY Z dof is flaky. The Y coherence indicates the HAM5 and ITMY Y dofs are okay but the HAM2 Y is not. The X coherence suggests HAM2 is better than ITMY, as ITMY has worse coherence with both HAM2 & HAM5.
So summary:
The HAM2 unit, STS2-A does not work well in Y and is marginally better in X; it is good in Z.
The ITMY unit, STS2-B, does not work well in Z or X but is tolerable in Y.
The STS2-C, HAM5 unit works best even though we have to surmise X since both other units are poor in that dof. Jim would agree based on detchar and sensor correction performance.
Attached now is a comparison of the current traces above (low wind) with traces from 1300 UTC, when the wind was up to 20-30 mph.
These results do not counter the conclusions above. The coherence amplitudes tend to be lower, except for HAM5 to ITMY, where they are actually much better. This maybe makes sense given the instruments are 1 m apart and Y is good or tolerable on those sensors. The wind tilt shows up strongly in HAM2 Y & X but is still evident for ITMY & HAM5. So maybe we could still look for a better placement in the LVEA at which to hide from the wind. I'd suggest moving -Y.
Brynley, Vinny, Please see the attached presentation, regarding the work summarized in previous aLOG: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25772
Michael Laxen at LLO has been working on this. He is already tackling the FPGA firmware change test (timing slave).
LIGO has 15 hardware watchdog (HWWD) units. They are distributed five per IFO (one per Quad Suspension plus a spare).
Of the 15, 12 have the new firmware loaded. The remaining three had been in use at LHO and have been replaced with new units and shipped to Caltech for upgrade.
I have tested all 12 of the upgraded units on the LHO DAQ Test Stand (DTS). All 12 units have passed the tests.
My test procedure is:
power cycle the unit 10 times in succession.
run the script hwwd_test.py, which:
for each LED, raises an error and verifies the error is detected and the countdown starts
raises an error on all LEDs and waits for the 20 minute countdown to expire, then verifies the SEI enable voltages go to zero
for each photodiode, simulates a ring-up to greater than 110 mV RMS and verifies the unit detects the error and the countdown starts
rings up the F1 PD and waits for the 20 minute countdown to expire, then verifies the SEI enable voltages go to zero
The test script is under cds_user_apps SVN repository cds/h1/scripts/hwwd_test.py
For these tests the units were connected as they will be when installed on H1 and L1. The test configuration drawing is sheet 2 in D1300475. We also tested Rolf's latest hwwd RCG part in the test model x1susetmywdt.mdl.
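For reference, a minimal sketch of how a test sequence like this might be structured (this is not the actual hwwd_test.py in the SVN; the channel names and the pyepics-style interface are assumptions):

import time
from epics import caget, caput   # pyepics channel access (assumed interface)

UNIT = "X1:SUS-ETMY_HWWD"        # hypothetical channel prefix on the DTS
LEDS = ["F1", "F2", "F3", "LF", "RT", "SD"]   # M0 OSEM inputs (assumed names)
COUNTDOWN_S = 20 * 60            # 20 minute countdown before SEI power-down

def check(cond, msg):
    print(("PASS " if cond else "FAIL ") + msg)

def test_led_errors():
    """Raise an error on each LED input and verify the countdown starts."""
    for led in LEDS:
        caput(UNIT + "_TEST_%s_ERROR" % led, 1)   # hypothetical test point
        time.sleep(2)
        check(caget(UNIT + "_COUNTDOWN_ACTIVE") == 1,
              "countdown started for %s" % led)
        caput(UNIT + "_TEST_%s_ERROR" % led, 0)   # clear the error

def test_sei_trip():
    """Raise errors on all LEDs, wait out the countdown, check SEI voltages."""
    for led in LEDS:
        caput(UNIT + "_TEST_%s_ERROR" % led, 1)
    time.sleep(COUNTDOWN_S + 30)
    check(abs(caget(UNIT + "_SEI_ENABLE_V")) < 0.1,
          "SEI enable voltage went to zero after 20 min")

if __name__ == "__main__":
    test_led_errors()
    test_sei_trip()
    # The photodiode ring-up checks (>110 mV RMS) would follow the same pattern.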
The tested units (which all passed) are:
S1301697, S1301699, S1301701, S1301704, S1301706, S1301707, S1301708, S1301713, S1301714, S1500331, S1500332, S1500333
The units shipped to Caltech for upgrade are: S1301703, S1301709, S1301712
LHO HWWD installed status
We have two units currently installed on H1, on ITMY and ETMY.
ITMY S1301708: this unit is running in monitor-only mode. It is monitoring the ITMY M0 top stage OSEM and cannot disable the ISI coil drivers for BSC1.
ETMY S1301701: this unit is fully functional. It is monitoring the ETMY M0 top stage OSEM and can power down all three ISI coil drivers for BSC10 if an error condition is raised for 20 minutes.
WP5756
h1asc model was restarted 10:17PST. Slow channels were changed, requiring a DAQ restart at 10:35PST.
During the O1 run we have monitored slow variations in the DARM actuation and sensing functions using several calibration lines near 35 Hz and one near 350 Hz at both observatories.
Systematics in the actuation function mostly produce systematic errors at frequencies below the UGF, while systematics in the sensing mostly show up at higher frequencies.
Variation in the DARM sensing is parametrized with an overall sensing gain κC and a cavity pole frequency fC. The most dramatic changes in both of these parameters appear at the beginning of locks, which could be a result of the cavity mode changing due to thermal heating of the test masses, and possibly some other effects.
Variation in the DARM actuation is parametrized with κTST and κPU. κTST is a scalar gain factor of the ESD driver actuation, which drives only the TST stage. We believe that it changes mostly due to charge accumulation on the surface of an ETM. κPU is a scalar gain factor of the actuation functions of the upper stages, PUM and UIM. Coil drivers are used for actuation of these stages. We do not believe that κPU should change over time, but monitoring it helps make sure that we do not miss any slow variations that we did not account for.
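Schematically, these parameters enter the overall DARM response as in the sketch below; this is written from memory with placeholder reference functions, so treat it as an illustration of the bookkeeping rather than the exact model used for the attached plots.

import numpy as np

# Schematic time-dependent DARM response (an illustration, not the exact
# O1 calibration code).  C0, A_TST, A_PU and D stand in for the reference
# sensing, actuation and digital-filter transfer functions; here they are
# placeholder callables.
def darm_response(f, kappa_C, f_C, kappa_TST, kappa_PU,
                  C0, A_TST, A_PU, D):
    """Return R(f) = 1/C(f) + D(f)*A(f) with the time-dependent factors applied."""
    C = kappa_C * C0(f) / (1.0 + 1j * f / f_C)        # sensing with cavity pole
    A = kappa_TST * A_TST(f) + kappa_PU * A_PU(f)     # TST + upper-stage actuation
    return 1.0 / C + D(f) * A

# The systematic error from not correcting the kappas is then the ratio
#   R(f; measured kappas) / R(f; kappas = 1, f_C = reference),
# evaluated over frequency and time to build time-frequency plots like
# the ones attached.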
Time-frequency plots of the known time-dependent systematics in the overall DARM response function, calculated from κTST, κPU, κC and fC over the O1 run, are attached.
Update: replaced figures (portrait -> landscape orientation) for convenience.
Summary
Details
The time-frequency plots of the time-dependent systematic errors in the reconstructed ΔLext and plots of "kappa" values during O1 are attached to this report.
The state vector in C01 seemed to give a noisier set of values, so to filter out "good data points" for these plots we used the state vector from C02 frames, and 128-second median values from C01 frames for the kappas.
The median kappa values extracted from C01 are saved to CalSVN:
Runs/O1/$(IFO)/Measurements/TimeDependence/20160301_C01_kappas_AllOfO1/kappa_C01_$(IFO)_all_wStateVector.txt
From C02 we took a single value every 128 seconds (without taking any average or median); these values are saved to
Runs/O1/$(IFO)/Measurements/TimeDependence/20160301_C02_kappas_AllOfO1/kappa_C02_$(IFO)_all_wStateVector.txt
We have produced a plot of systematic uncertainty boundaries for 50%, 75%, 90%, 99%, ~100% of the cases in O1 when HOFT_OK was 1.
This information, or a similar analysis, can be used to set 1-sigma uncertainty bars on the time-dependent systematics in C01 due to uncorrected kappas (the values were taken only for times when all of the KAPPA*_OK and HOFT_OK flags were 1).
The plots for C02 give an estimation of time-dependent systematic errors caused by not correcting fC.
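A minimal sketch of how such percentile boundaries could be assembled from the saved kappa text files (the column layout and flag names below are assumptions, not the actual file format):

import numpy as np

# Assumed column layout of the kappa text files (check the actual header):
# GPS, kappa_TST, kappa_PU, kappa_C, f_C, HOFT_OK flag.
data = np.loadtxt("kappa_C01_H1_all_wStateVector.txt")
gps, k_tst, k_pu, k_c, f_c, hoft_ok = data.T

good = hoft_ok == 1                    # keep only times with the hoft flag set
deviation = np.abs(k_c[good] - 1.0)    # e.g. uncorrected sensing-gain error

# Boundaries containing 50/75/90/99/~100% of the good O1 times.
for p in (50, 75, 90, 99, 100):
    print(p, np.percentile(deviation, p))

# The full analysis would propagate the kappas through the DARM response
# (see the sketch above) before taking percentiles at each frequency.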
17:20 UTC With Sheila's "ok" I cleared the excitation (tp and awg) on H1ASC as an exercise in vetting the instruction WIKI for "Tracking Down Excitations".
As of 16:47UTC (8:47PT):
16:30UTC (08:30PT) Meeting:
Site Activities:
H1 Plans:
Rob, Lisa, Matt, Evan, Sheila, Patrick
Tonight we were able to get back to locking on RF, but were unable to engage the soft loops (we were stable with all other ASC on). We spent some time trying to understand the problem. It seems possible that 90 MHz centering will help alleviate this problem.
We watched all the top mass control signals as we turned on CSOFT P, with the SRC1 loops open and the AS centering on DC. We saw that all test masses move as expected, but so do the other optics, including PR3, PRM, SR3, and BS. The second attached screenshot shows how the OMs are driven by the centering loops, which see the CSOFT change. We hypothesize that this is how CSOFT couples to the BS and other loops; we also see this signal in the 90 MHz centering signals, which are out of loop here.
We looked a little bit at using 90 MHz centering, but see some problems.
For one, the 90 MHz signals are not normalized by the sum; they are normalized by the input power. It seems like we should be using the signals normalized by the sum, and this would be a fairly simple model change, replacing the input power normalization with the sum. If we normalize this way we can simply use 1s in the input matrix and the centering loop gains should stay the same.
The sum on ASARF90 is negative, so this centering loop might have a sign flip compared to the DC centering.
We measured the MICH ASC loop to compare the gain with 90 MHz centering on and with DC centering on. While doing this we turned off the centering loops, and saw that the MICH loop was not stable this way. This probably means that the MICH ASC loop stability somehow depends on the centering loop, and that we need to make sure we keep the centering loop BW the same when we change sensors.
For the first people to come in the morning to work on locking, some good steps would be:
1) Add normalization of the 90 MHz centering to the ASC models.
2) Measure the MICH ASC loops (templates available /ligo/home/evan.hall/Public/Templates/MichPitchSweep.xml and MichYawSweep.xml ) with the DC centering on.
3) Measure the centering loop OLGs, for DC centering and 90 MHz centering.
4) Measure the MICH ASC OLGs with the 90 MHz centering on. Make sure the gain is the same as for DC centering.
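For item 1, the model change is just which quantity divides the quadrant combination. A minimal sketch (the quadrant ordering and signs are illustrative, not the actual ASC model wiring):

def wfs_centering_error(seg1, seg2, seg3, seg4, input_power=None):
    """Centering error signals from four WFS quadrant signals.

    Normalizing by the quadrant sum (instead of the input power) keeps the
    error signal dimensionless, so the input matrix can just be 1s and the
    centering loop gains stay the same as for DC centering.  Note: if the
    sum is negative (as observed on AS A RF90), the error signal flips sign
    relative to DC centering.  Quadrant ordering/signs here are illustrative.
    """
    total = seg1 + seg2 + seg3 + seg4
    pit = (seg1 + seg2 - seg3 - seg4) / total   # sum normalization
    yaw = (seg1 - seg2 - seg3 + seg4) / total
    if input_power is not None:
        # Current 90 MHz behavior, for comparison: normalize by input power.
        pit_old = (seg1 + seg2 - seg3 - seg4) / input_power
        yaw_old = (seg1 - seg2 - seg3 + seg4) / input_power
        return pit, yaw, pit_old, yaw_old
    return pit, yaw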
Matt, Lisa, Evan
We wanted to remeasure the mode matching of the DARM mode into the OMC. At 2 W, the mode mismatch is 2% to 2.5%.
To do this, we used the eLIGO technique of looking at the DARM signal in the OMC reflection QPDs, with the OMC locked and unlocked. Whatever DARM residual is still present in reflection of the locked OMC indicates the amount of mode mismatch.
The attachment shows some of the test mass violin mode first harmonics, as seen in reflection of the OMC. Only about 2% to 2.5% of the DARM signal remains after the OMC is locked.
We need to get data at 20+ W in order to infer the mode mismatch during low-noise operation.
The OMC was unlocked from 07:16:00 to 07:21:00, and it was locked from 07:24:30 to 07:29:30 (all times 2016-03-03 Z).
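For reference, a minimal sketch of the arithmetic, assuming the OMC reflection QPD time series for the two stretches above have already been fetched (the channel choice, sample rate and violin-mode frequency below are assumptions):

import numpy as np
from scipy.signal import welch

def violin_peak(data, fs, f_line, df=0.5):
    """Amplitude of the spectrum around a violin-mode line."""
    f, pxx = welch(data, fs=fs, nperseg=int(fs * 8))
    band = (f > f_line - df) & (f < f_line + df)
    return np.sqrt(pxx[band].max())

fs = 16384.0        # sample rate of the QPD channel (assumed)
f_violin = 500.0    # approximate first-harmonic violin-mode frequency (assumed)

# qpd_unlocked / qpd_locked: OMC REFL QPD time series for the two stretches
# above (07:16-07:21 unlocked, 07:24:30-07:29:30 locked).
# residual = violin_peak(qpd_locked, fs, f_violin) / \
#            violin_peak(qpd_unlocked, fs, f_violin)
# A residual of ~0.02-0.025 corresponds to the 2% to 2.5% quoted above.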
0:29 UTC Kiwamu to Hartmann table by HAM4 to take pictures
0:43 UTC Kiwamu back
Evan, Sheila, Jenne, Matt and Lisa working on ASC.
Since we are looking at ASC, we reverted the AS36 B phases which were adjusted during the day. The settings are attached (the setpoints were the AS90 team's phasings) just in case people want a record.
Jenne, Hang
We tried more WFS phasing with AS 90 as our centering loop today.
## AS B 36 I:
We first looked at AS B 36 I and tried to use it for SRM sensing. We excited a length signal first and phased each segment such that the signal showed up in the I phase, to fix the relative phase between the segments. We found that the demod phase of segs. 3 & 4 was 90 deg different from that of segs. 1 & 2. Then we drove SRM in pitch (yaw) and rotated the demod phase for all quadrants together to optimize the angular signal in the I phase. However, we could not close the SRC loop even after re-phasing. By applying a static misalignment to SRM, we found that AS 36 B I responded only to one direction but not the other. No clue why this is the case yet.
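For the record, the phasing step itself is just a rotation in the I/Q plane; a minimal sketch of finding the phase that puts a driven line into I (signal names are placeholders):

import numpy as np

def demod_phase_for_I(i_sig, q_sig, exc, fs, f_drive):
    """Rotation angle (deg) that moves a driven line into the I quadrant.

    i_sig, q_sig: demodulated I and Q time series for one WFS segment.
    exc:          the excitation time series (e.g. the SRM length drive).
    The plant phase is common to both I and Q, so it cancels in the ratio;
    there is an inherent 180 deg ambiguity to resolve by checking signs.
    """
    freqs = np.fft.rfftfreq(len(exc), 1.0 / fs)
    k = np.argmin(np.abs(freqs - f_drive))
    x = np.fft.rfft(exc)[k]
    tf_i = np.fft.rfft(i_sig)[k] / x   # drive -> I transfer function
    tf_q = np.fft.rfft(q_sig)[k] / x   # drive -> Q transfer function
    ratio = (tf_q / tf_i).real         # ~ tan(theta); plant phase cancels
    return np.degrees(np.arctan(ratio))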
## AS A 36:
We then decided to use only AS A 36, with the Q phase for BS and the I phase for SRM. The demod phase was set by maximizing the BS angular signal in Q (w/ 90 centering, DRMI locked).
The new (old) demod phases for the 4 quadrants were:
-165 (-140); -145 (-140); -145 (-140); -135 (-140)
At this config, for BS the Q signal was a factor of ~10 larger than the I signal, and for SRM the I signal was a factor of ~3 larger than the Q signal.
## Difficulties w/ locking the IFO:
After that we tried to go further in the locking sequence, but kept losing lock at RF_DARM/DHARD_WFS.
We first noticed that the DARM gain at RF_DARM was too large for the new config. We set it to 400 instead of increasing it to 1000. Also, the DHARD pit & yaw gains were too high. We decreased them by a factor of 10 and saw some oscillation at 15 Hz when the offloading was almost done. Then we reverted all settings and handed the IFO to the noise-hunting team.
## Things to do:
Rephase the 90 centering WFS. The last time we did it we did not open the MICH ASC loop. Then recheck all the other WFS' phasing again. If we still have trouble locking at the DHARD_WFS stage, we may further decrease the DHARD pit/yaw gains by another factor of ~4ish. We may also just comment out the new offloading gains.
I updated the TCS IFO model/simulation page. The major changes and additions are summarized below:
OPTIC LENS/ROC model (left side of screen):
IFO SIMULATOR:
This is a set of calculations of interferometer values that are dependent on ROC and ITM lenses
The MEDM screen is being (slowly) updated to illustrate these parts
SITEMAP > TCS > MASTER > SIMULATION
Plotting the cross-correlated DARM noise (band 5) and LVEA temperature on the same plot doesn't show any obvious relationship. The .fig is included in case someone has a good idea of how to use this data.
This analysis was inspired by the recent investigations of the L1 noise, which show some correlation of DARM variations with LVEA temperature. By superimposing the current best L1 curve and the best H1 curve from O1 (see plot), one can see that the noise in the L1 bucket seems to have more "scattering looking" peaks (which could be modulated by temperature-induced alignment variations), while the H1 noise has fewer. The noise at high frequency is notably lower in L1, mostly due to the higher cavity pole frequency.
I have extended Matt's previous analysis to the entire O1. In addition, I added another interesting channel, the vertical sensor of the top stage of ITMY. Here is the result.
I went through trends of some interesting channels, looking for signals showing variation similar to the band-limited rms of the cross spectra. I came across the ITMs' top stage vertical monitors and found them showing two relatively big bumps (actually dips in the raw signals) which seemingly match the ones in the band-limited rms on Dec 2nd and Dec 29th. However, even though they seem to show good agreement in the last half of the O1 period, the first half does not show an obvious correlation. Does this mean that the modulation mechanism of the noise level changed in the middle of the run, and somehow the noise level became sensitive to vertical displacement of the ITMs or in-chamber temperature?
For completeness, I have looked at other vertical monitors. Here is the result. They all show qualitatively the same behavior more or less. The fig file can be found on a server.
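To go beyond eyeballing the trends, one could correlate the band-limited rms against each of these slow channels; a minimal sketch, assuming the BLRMS values and channel trends have already been resampled onto a common time grid (variable names are placeholders):

import numpy as np
from scipy.stats import pearsonr

def correlate_with_blrms(blrms, trends):
    """Pearson correlation of each slow-channel trend with the BLRMS series.

    blrms:  1-D array of band-limited rms values of the cross spectrum.
    trends: dict of channel name -> 1-D array on the same time grid.
    """
    results = {}
    for name, series in trends.items():
        mask = np.isfinite(blrms) & np.isfinite(series)   # skip dropouts
        r, p = pearsonr(blrms[mask], series[mask])
        results[name] = (r, p)
    return results

# Splitting the run in half (as suggested by the trends above) and comparing
# the two correlation coefficients would quantify whether the coupling
# really changed mid-run.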
Day Shift 16:00-23:59UTC (08:00-16:00PT)
State of H1: DRMI lock for Jenne
Shift Summary:
Site Activities:
- 19:30 Kyle - LVEA to survey for upgrade
- 19:45 Kyle - out of LVEA
- 21:27 Corey - optics lab
- 21:15 Ed - LVEA to reset PSL watchdog
- 21:25 Ed - out of PSL
Locking / Initial Alignment:
- locking X arm in red was very fast, and had good power
- ITMY needed more of an offset during SRC alignment
- PRMI needed to adjust BS for DRMI
- DRMI is well aligned and relocking quickly for the last 2-3 hours of the shift
Control Room:
- Video0 froze, now fixed after a reboot
- Video4 had the wrong striptool displayed, now fixed
ITMY mis-alignment issue tracked down to offset, now fixed.
ITMY drivealign_P2L_gain change from 1.05 to 0.6 was accepted after an OK from Sheila; it had been 0.6 for at least 10 days.
The Front End watchdog reset takes place in the LASER Diode Room, not in the LVEA/PSL enclosure. :)
I've been slowly trying to get stuff figured out for testing a wind fence set up at LHO, and am getting ready to try to set something up. I'll summarize where I think things are here.
Currently, I want to try a small, cheap wind fence at EX, mostly to explore how effective screens are at slowing wind, effects on ground motion and tumbleweed build up. The fence would be a couple of 4x4-ish 12-15 foot posts and some fine polymer netting like that used around tennis courts, gardens and the like. It may be necessary to add guy lines, as well. In addition to the fence, Richard has said he will help me get an STS buried at EX, similar to Robert's set up at EY, and we are ordering 3 anemometers with stand alone data collection so no changes need to be made to CDS for this. I think this set up will allow me to look at a few of the concerns that people have brought up. So far the concerns I've heard are:
1. Increased ground motion. Fences slow wind by applying a force to the airstream; this force is transmitted to the ground and produces increased tilt and other higher-frequency motion. I think the tilt can be addressed by placing the fence some few tens of meters from the building, per Robert's measurements of building tilt. Higher frequency motion can hopefully be addressed by the design of the fence support structure, but we'll have to see how bad the motion is.
2. Similarly, the fence could make airflow more turbulent. I suspect that airflow at the building level is probably turbulent anyway. Hopefully, a well designed fence will push turbulent flows around the building while slowing most of the air that makes it through.
3. Tumbleweed build up. Anything that blocks the wind will gather tumbleweeds around here, which could make a fence a fire hazard and maintenance issue. This could be addressed by leaving a gap at the bottom. The airflow below a few feet probably isn't a significant source of problems for us, but I don't know how big this gap would need to be. I also plan on using a mesh fine enough that tumbleweeds won't stick to the fence very easily. Industrial fences are flame resistant, and won't ignite on their own.
4. Wind damage. We have seen winds above 100 mph during a storm, which would create very high loads on any fence. I haven't been able to figure out how to calculate wind loads on a permeable wall yet, but civil engineers have building codes dealing with this (a back-of-envelope drag estimate is sketched after this list). For my test, I'm trying to get some idea of the loads involved with moderate wind, and just making the fence so that the mesh will tear free in a way that won't damage the EX building if the wind gets too bad. Industrial fences are designed to withstand similar wind loads, and their screens are held in place with replaceable break-away clips to prevent damage.
5. Cost/size. BrianL talked to a company that makes industrial fences a few months ago. The ballpark figure for a 40 x 200 foot fence was about $250,000. That was a first pass at a price, and the company had some suggestions for how to cut down on the cost. This price also needs to be weighed against the 10-15% of down time we have due to wind. Something of that size would also probably have to be approved by the DOE. It's also unclear if we would have to completely surround each end station, or if we could get away with less coverage. Probably, we don't need to "protect" EY along the X-axis, or EX along the Y-axis.
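For item 4, the back-of-envelope estimate is just the drag equation with a porosity factor; the solidity-scaled drag coefficient below is a rough assumption, not a substitute for the building codes:

import math

def fence_wind_load(width_ft, height_ft, wind_mph, solidity=0.5, cd_solid=1.2):
    """Rough drag load on a porous wind fence, in pounds-force (back-of-envelope only).

    solidity: fraction of the fence area blocked by the mesh (assumed).
    cd_solid: drag coefficient of an equivalent solid wall (assumed ~1.2).
    Scaling the drag coefficient by the solidity is a crude approximation;
    real designs should follow the civil-engineering codes.
    """
    rho_air = 1.2                                        # kg/m^3
    area = (width_ft * 0.3048) * (height_ft * 0.3048)    # m^2
    v = wind_mph * 0.44704                               # m/s
    force_n = 0.5 * rho_air * (cd_solid * solidity) * area * v**2
    return force_n / 4.448                               # N -> pounds-force

# Example: the 40 x 200 foot fence from item 5 in a 100 mph storm
print(fence_wind_load(200, 40, 100))   # roughly 1e5 lbf with these assumptions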
Comments, criticism, praise are all welcome.
Comments;
Any break away components will need to be constrained so the EPA doesn't come after us for polluting the desert. I suggest that even a temporary test fence be built to withstand any expected wind/snow/tumbleweed loads.
Be aware that any wind speed and direction measurements are likely influenced by ground effects until you are well above the ground and nearby obstructions - say 25- 50 feet???
Thanks John. The ones I saw advertised had a cable top and bottom which suspended the wind fabric. The top attachments from the fabric to the cable were "permanent" and the attachments to the lower cable were the break-away ones. This should allow it to yield to the wind load while keeping it from blowing away and causing more trouble.