1505 - 1525 hrs. local -> To and from Y-mid. LN2 at exhaust after 2 mins with 1/2 turn open. Next overfill to be Monday before 4:00 pm local.
Rob, Evan
After some difficulties, we were able to get back to low noise locking.
Jenne, Hang, Sheila, Rob,
During the day today we worked on RF centering a bit more. No real breakthroughs, but there are a few things to note:
The lock point of the MICH loop is not necessarily good when we are locked with RF centering. The problem could be that the SRC is uncontrolled when we are using RF centering, and currently the phasing of AS36 is set to maximize BS in Q, not to minimize SRM in Q, which is probably what we want. Or it could be that we need to check the gain matching on the quadrants for the 90 MHz centering.
Rob, Evan
The control room workstations are freezing up every few minutes.
Some of the Macs on the wall are complaining intermittently that they can't find /ligo.
The AS port camera (cam18) is frozen (perhaps unrelated?).
The verbal alarm handler crashed and was restarted. I copied the error into a sticky note on the alarm handler computer. There have been some issues with tidal that Evan H. has looked at. Evan H. and Robert W. have made it to DC readout and are still here.
It is happening pretty consistently during each lock acquisition.
It looks like it started around noon today.
Shuttering ALS now causes violent fluctuations at the AS port, and causes the test masses to saturate.
This is apparently because of a rewrite of the LOCKING_ALS state in ISC_LOCK which was not tested.
Both X and Y tidal are requested to go to "transition" (i.e., IMC-F offloading enabled), but Y does not do so. This causes X to offload IMC-F by itself (unsuccessfully).
Sleeps must be added to the guardian code in order to make both X and Y actually go to the "transition" state.
I have also removed some logging commands in the run method (as these continually output garbage to the logfile while the guardian loops over that method).
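For illustration only, here is a minimal sketch of the kind of change described above, assuming the usual guardian conventions (states subclassing GuardState, with ezca provided by the guardian environment). The state name, channel names, and sleep duration are placeholders, not the actual ISC_LOCK code:

import time
from guardian import GuardState

class LOCKING_ALS_SKETCH(GuardState):
    def main(self):
        # Request the tidal "transition" for X, then pause before requesting Y,
        # so that both arms actually reach the transition state before IMC-F
        # offloading starts. Channel names below are placeholders.
        ezca['ALS-X_TIDAL_STATE_REQUEST'] = 'TRANSITION'
        time.sleep(2)  # settle time; the actual value would need tuning
        ezca['ALS-Y_TIDAL_STATE_REQUEST'] = 'TRANSITION'

    def run(self):
        # Keep log() calls out of run(): this method executes on every guardian
        # cycle, so logging here continually fills the logfile.
        return True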
Quick Summary: Commissioning.
(All time in UTC)
Between 17:00 - 19:00 Richard's student fixing the all-sky camera on the roof
17:54 Bryn and Vinny to EY to grab stuff, then EX CER (magnetometer work)
18:12 Corey to Optics lab
19:20 Kyle unloading batteries near OSB receiving area
19:34 Kyle done
21:04 Kyle going out to solder some high voltage cable at EX (WP5746)
22:08 Richard to either one or both of the end stations
22:26 Vinny back
22:47 Richard + Fil back
23:20 Kyle + Gerardo out
Kyle, Gerardo ~1330 - 1525 hrs. local -> To and from X-end VEA
Listed below are the past 10 day trends. For further in-depth analysis, please refer to Jason O., Pete K. or Rick S.
I just noticed that Cheryl's name is attached to this. Not quite sure how it happened.
Tried the following:
Also applies to a range of other ASC channels, like CHARD_Y_EXC, INP2_P_EXC, CHARD_P_A_EXC, etc.
I tried an excitation in LSC-XARM_EXC and it worked fine.
I tried awg clear 19 * in the diag terminal; no change.
Am I missing something here?
The awgtpman process on h1asc was reporting the incorrect channel number for this channel (H1:ASC-CHARD_P_EXC is chnnum 20298, but awgtpman was reporting it as 20304). Interestingly, diaggui was able to excite CHARD_P_EXC despite the error, but awggui was not. I also saw an error along the lines of "awgtpman cannot start as it is already running", and suspect the issue came from the bad restart of h1asc yesterday, when there was a collision of DAC channels between h1asc and h1ascimc.
I killed the running awgtpman_h1asc and restarted it; all looks good now.
0:59 UTC Kyle back from mid Y
Early in the evening I had to trend and move back the IMC PZT offsets to get the IMC to lock. Commissioners have been working on RF90 centering.
Jenne, Hang, Keita, Sheila, Kiwamu, Matt, Lisa, Rob, Evan, everyone in the control room...
Today we mostly focused on RF centering.
We made two model changes:
We noticed that our normal DC centering loops for the AS port have evolved to a bad state. In the end we had about 5 degrees of phase margin and 40 dB of gain peaking in these loops with the nominal settings we used during O1.
We reduced the gain and switched over to 90 MHz centering. The input matrix that resulted in an upper UGF of around 2 Hz for all four loops is attached. The next four attachments are OLG measurements of the centering loops; the blue curves are for the DC centering with the nominal settings used for O1, and the red ones are with the input matrix shown in the attached screenshot.
After adjusting the gains of the RF loops, we closed the BS pitch loop using AS36AQ (our normal sensor). The closed loop response of the centering loop now clearly shows up as a feature of the MICH loop, as the 6th screenshot shows. When we are on the DC centering, changing the bandwidth of the centering loops didn't change the shape of the MICH loop, but it clearly does with the RF centering, which must be responding to the BS. For the time being I turned off the ELF20Hz filter (a low pass in the centering loops), which gives us more phase and less gain peaking, so our MICH loops still seem stable. This results in the OLG shown in purple in the 6th attachment. The yaw OLG is the 7th attachment. (The MICH digital gains here were 0.8 for pitch and 0.7 for yaw; our nominal settings are 0.7 for both, which is probably fine.)
We left the AS36I to SRM loop open and locked the IFO without any problems, including the DRMI on POP state, which has coil driver switching and caused problems last week. I checked that the MICH YAW gain was the same in full lock as it was in DRMI. In the engage ASC part 2 state a 5 Hz low pass is engaged on the MICH loop; this was not stable and blew the lock, so I've commented it out of the guardian for now. The next two locking attempts failed: AS90 dropped as we came into resonance and never recovered, which could be due to the SRC loop being open or to a problem with the MICH loop. I've saved an SDF snap with the current settings as ASC_progress_March3evening.snap.
To summarize the changes:
The next step is probably to open the MICH ASC loops after they have run in DRMI, and check in full lock that the lock point is still good. It is clear that there is a difference between DC centering and RF centering when we reduce the CARM offset.
Lisa, Matt, Rob, Evan
DC3 P seemed to still have too little phase margin, so we turned down the gain by a factor of 3. (0.17 ct/ct → 0.06 ct/ct).
We tried closing SRC1 around ASA36I, but pitch and yaw seemed cross-coupled.
We weren't able to close cHard yaw, since the error signal was large, and attempting to reduce it lowered the recycling gain significantly.
We searched around for a while for a new SRM error signal. We think we should try to make a sensing matrix measurement of cHard, IM4, PR3, and SR3/SRM into the REFL WFSs (9 and 45 MHz) and try to invert it. This would be a good task for the morning.
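For what it's worth, here is a minimal sketch of the inversion step, assuming we drive cHard, IM4, PR3, and SRM one at a time and record the response in the four REFL WFS signals; all numbers below are made up:

import numpy as np

# Rows: sensors (REFL 9I, 9Q, 45I, 45Q); columns: driven optics
# (cHard, IM4, PR3, SRM). All values are made-up placeholders.
sensing = np.array([
    [1.00,  0.20,  0.10, 0.05],
    [0.10,  0.80, -0.30, 0.10],
    [0.30, -0.10,  0.90, 0.40],
    [0.05,  0.20,  0.20, 0.70],
])

# The pseudo-inverse maps sensor signals back to optic motion; its rows are
# candidate input-matrix elements for the ASC loops.
recon = np.linalg.pinv(sensing)

# The condition number indicates how much sensing noise the inversion amplifies.
print("condition number:", np.linalg.cond(sensing))
print(recon)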
In summary, it seems like the optical plant for the ASC has changed significantly; we have to pick error signals and close loops anew before proceeding with full locking again.
Plotting the cross-correlated DARM noise (band 5) and LEVA temperature on the same plot doesn't show any obvious relationship. The .fig is included in case someone has a good idea of how to use this data.
This analysis was inspired by the recent investigations of the L1 noise, which show some correlation of DARM variations with LVEA temperature. By superimposing the current best L1 curve and the best H1 curve from O1 (see plot), one can see that the noise in the L1 bucket seems to have more "scattering looking" peaks (which can be modulated by temperature-induced alignment variations), while the H1 noise shows this less. The noise at high frequency is notably lower in L1, mostly due to the higher cavity pole frequency.
I have extended Matt's previous analysis to the entire O1. In addition, I added another interesting channel, the vertical sensor of the top stage of ITMY. Here is the result.
I went through trends of some interesting channels, looking for signals showing variation similar to the band-limited rms of the cross spectra. I came across the ITMs' top stage vertical monitors and found them showing two relatively big bumps (actually dips in the raw signals) which seemingly match the ones in the band-limited rms on Dec 2nd and Dec 29th. However, even though they seem to agree well in the second half of the O1 period, the first half does not show an obvious correlation. Does this mean that the modulation mechanism of the noise level changed in the middle of the run, and the noise level somehow became sensitive to vertical displacement of the ITMs or to in-chamber temperature?
For completeness, I have looked at other vertical monitors. Here is the result. They all show qualitatively the same behavior more or less. The fig file can be found on a server.
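As a reference for how these trends are built, here is a rough sketch of a band-limited rms computed from a spectrogram; the sample rate, data, and band edges are placeholders and not the actual band 5 definition:

import numpy as np
from scipy import signal

fs = 256.0                           # sample rate (placeholder)
f_lo, f_hi = 20.0, 30.0              # band edges (placeholders, not "band 5")
x = np.random.randn(int(fs) * 3600)  # stand-in for the cross-correlated DARM data

f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=int(fs) * 60)
band = (f >= f_lo) & (f <= f_hi)

# Integrate the PSD over the band and take the square root to get an rms value
# per time segment; this is the quantity trended against the vertical OSEM signals.
blrms = np.sqrt(np.trapz(Sxx[band, :], f[band], axis=0))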
I've been slowly trying to get stuff figured out for testing a wind fence set up at LHO, and am getting ready to try to set something up. I'll summarize where I think things are here.
Currently, I want to try a small, cheap wind fence at EX, mostly to explore how effective screens are at slowing wind, effects on ground motion and tumbleweed build up. The fence would be a couple of 4x4-ish 12-15 foot posts and some fine polymer netting like that used around tennis courts, gardens and the like. It may be necessary to add guy lines, as well. In addition to the fence, Richard has said he will help me get an STS buried at EX, similar to Robert's set up at EY, and we are ordering 3 anemometers with stand alone data collection so no changes need to be made to CDS for this. I think this set up will allow me to look at a few of the concerns that people have brought up. So far the concerns I've heard are:
1. Increased ground motion. Fences slow wind by applying a force to the airstream; this force is transmitted to the ground and produces increased tilt and other higher frequency motion. I think the tilt can be addressed by placing the fence some few tens of meters from the building, per Robert's measurements of building tilt. Higher frequency motion can hopefully be addressed by design of the fence support structure, but we'll have to see how bad the motion is.
2. Similarly, the fence could make airflow more turbulent. I suspect that airflow at the building level is probably turbulent anyway. Hopefully, a well designed fence will push turbulent flows around the building while slowing most of the air that makes it through.
3. Tumbleweed build up. Anything that blocks the wind will gather tumbleweeds around here, which could make a fence a fire hazard and maintenance issue. This could be addressed by leaving a gap at the bottom. The airflow below a few feet probably isn't a significant source of problems for us, but I don't know how big this gap would need to be. I also plan on using a mesh fine enough that tumbleweeds won't stick to the fence very easily. Industrial fences are flame resistant, and won't ignite on their own.
4. Wind damage. We have seen winds above 100 mph during a storm, which would create very high loads on any fence. I haven't been able to figure out how to calculate wind loads on a permeable wall yet, but civil engineers have building codes dealing with this. For my test, I'm trying to get some idea of the loads involved with moderate wind (a rough estimate is sketched after this list), and to make the fence so that the mesh will tear free in a way that won't damage the EX building if the wind gets too bad. Industrial fences are designed to stand similar wind loads, and their screens are held in place with replaceable break-away clips to prevent damage.
5. Cost/size. BrianL talked to a company that makes industrial fences a few months ago. The ballpark figure for a 40 x 200 foot fence was about $250,000. That was a first pass at a price, and the company had some suggestions for how to cut down on the cost. This price also needs to be weighed against the 10-15% of down time we have due to wind. Something of that size would also probably have to be approved by the DOE. It's also unclear whether we would have to completely surround each end station, or whether we could get away with less coverage. Probably we don't need to "protect" EY along the X-axis, or EX along the Y-axis.
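As mentioned in item 4, here is a rough order-of-magnitude estimate of the load on a single fence panel (not a substitute for a proper building-code calculation); the drag coefficient and porosity factor are guesses:

# Order-of-magnitude wind load estimate for one fence panel; the drag
# coefficient and porosity factor are assumptions, not measured values.
rho = 1.2                            # air density, kg/m^3
v = 100 * 0.447                      # 100 mph in m/s
q = 0.5 * rho * v**2                 # dynamic pressure, ~1.2 kPa at 100 mph

area = (15 * 0.305) * (10 * 0.305)   # a 15 ft x 10 ft panel, in m^2
cd = 1.2                             # drag coefficient for a flat panel (assumed)
porosity = 0.5                       # fraction of load a permeable mesh takes (assumed)

force = cd * porosity * q * area     # net load on the panel, N
print(f"dynamic pressure: {q:.0f} Pa, panel load: {force / 1e3:.1f} kN")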
Comments, criticism, praise are all welcome.
Comments:
Any break away components will need to be constrained so the EPA doesn't come after us for polluting the desert. I suggest that even a temporary test fence be built to withstand any expected wind/snow/tumbleweed loads.
Be aware that any wind speed and direction measurements are likely influenced by ground effects until you are well above the ground and nearby obstructions - say 25-50 feet?
Thanks John. The ones I saw advertised had a cable top and bottom which suspended the wind fabric. The top attachments from the fabric to the cable were "permanent" and the attachments to the lower cable were the break-away. This should allow it to yield to the wind load, but to keep it from blowing away and causing more trouble.
svn up at .../SusSVN/sus/trunk/QUAD/Common/MatlabTools/QuadModel_Production/
1) Added an option for optical lever damping that actuates at the PUM (L2) stage. Like top mass damping, this can be imported from the sites, or added in locally.
2) Added options for violin modes at all stages. Previously this was only available for the fibers. You can choose how many modes you want at each stage; it doesn't have to be the same number.
3) Added an option to load damping from a variable in the matlab workspace. Previously this could only be done from a saved file or imported from the sites.
Detailed instructions for generate_QUAD_Model_Production.m are commented into the header. See G1401132 for a summary of the features, and some basic instructions on running the model.
I am tagging this to the svn now as
quadmodelproduction-rev7995_ssmake4pv2eMB5f_fiber-rev3601_h1etmy-rev7915_released-2016-03-01.mat
...the file is large (386 MB) so it is slow to upload.
The tagged model includes 25 violin modes for the fibers, 20 for the uim-pum wire, 15 for the top-uim wire, and 10 for the top-most wire. For the 25 fiber violin modes, the first 8 are based on measured frequencies from h1etmy, the remainder are modeled frequencies. All metal wire modes are modeled values. The oplev filters are turned off in this model as well (I imported the filters from LHO, and they were turned off at the time).
rev 7359: now reads foton files for main chain and reaction damping
rev 7436: Changed hard coded DAMP gains to get the correct values for LHO ETMX specifically.
rev 7508: Restored damping filter choice for P to level 2.1 filters as opposed to Keita's modification. Cleaned up error checking code on foton filter files, and allowed handling of filter archive files and files with the full path.
rev 7639: renaming lho etmy parameter file
rev 7642: Adding custom parameter file for each quad. Each one is a copy of h1etmy at this point, since that one has the most available data.
rev 7646: added ability to read live filters from sites, and ability to load custom parameter files for each suspension
rev 7652: updated to allow damping filters from sites from a specific gps time (in addition to the live reading option)
planned future revision - seismic noise will propagate through the damping filters as in real life, i.e. the OSEMs are relative sensors and measure the displacement between the cage and the suspension.
rev 7920: big update - added sus point reaction forces, top OSEMs act between cage and sus, replaced append/connect command with simulink files
rev 7995: added oplev damping with actuation at the PUM (L2); added options for violin modes at all stages, rather than just for the fibers; added option to load damping from a variable in the workspace, in addition to the existing options of loading damping from a previously saved file or importing it from the sites.
no recent (at least 4 years) functional changes have been made to this file.
- rev 2731: name of file changed to quadopt_fiber.m, removing the date to avoid confusion with Mark Barton's Mathematica files.
- rev 6374: updated based on H1ETM fit in 10089.
- rev 7392: updated pend.ln to provide as-built CM heights according to T1500046
- rev 7912: the update described in this log, where the solidworks values for the inertias of the test mass and pum were put into the model, and the model was then refit. Same as h1etmy.m.
- rev 7640: created the H1ETMY parameter file based on the fit discussed in 10089.
- rev 7911: the update described in this log, where the solidworks values for the inertias of the test mass and pum were put into the model, and the model was then refit. Same as quadopt_fiber.m.
I added more comments to the header of the model file, generate_QUAD_Model_Production.m, explaining how to run the model with measured violin modes and Qs. I also clarified the comments on including custom damping. I updated the feature summary doc G1401132 with the same information.
We have changed the whitening filters in CAL_DELTAL_EXTERNAL_DQ to the filter described in
We are now using 6 zeros at 0.3 Hz and 6 poles at 30 Hz. Hopefully this will take care of the aliasing problem with DTT, and we can use the calibrated channel when making comparisons with seismic/ sus or PEM channels.
This doesn't impact the GDS pipeline, only CAL_DELTAL_EXTERNAL
Robert, Sheila, Evan, Gabriele
I tried to look at one of Robert's injections from yesterday, and we noticed a dangerous bug, which had previously been reported by Annamaria and Robert in alog 20410. This is also the subject of https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=804
When we changed the Stop frequency on the template, without changing anything else, the noise in DARM changes.
This means we can't look at ISI, ASC, PEM, or SUS channels at the same time as DARM channels and get a proper representation of the DARM noise, which is what we need to be doing right now to improve our low frequency noise. Can we trust coherence measurements between channels that have different sampling rates?
This is not the same problem as reported by Robert and Keita alog 22094
People have looked at the DTT manual and speculate that this could be because of the aggressive whitening on this channel, combined with the fact that DTT downsamples before taking the spectrum.
If there is no near term prospect for fixing the problem in DTT, then we would want to have less aggressive whitening for CAL_DELTAL_EXTERNAL.
I spent a little time looking into this and added some details to the bug report. As you said, it seems to be an issue of high frequency noise leaking through the downsampling filter in DTT.
Until this gets fixed, any reason you can't use DARM_IN1 instead of DELTAL_EXTERNAL as your DARM channel? It's better whitened, so it doesn't suffer from this problem.
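A toy illustration of that mechanism (not DTT itself): if a channel still has significant power above the new Nyquist frequency, decimating without an adequate anti-alias filter folds that power back into the band. The sample rates and line frequency below are arbitrary:

import numpy as np
from scipy import signal

fs = 16384.0
t = np.arange(int(fs) * 16) / fs
# Stand-in for a heavily whitened channel: broadband noise plus a strong 6 kHz line.
x = np.random.randn(t.size) + 10 * np.sin(2 * np.pi * 6000 * t)

q = 8  # decimate to 2048 Hz, as if the measurement stop frequency were lowered

naive = x[::q]                               # no anti-alias filtering
proper = signal.decimate(x, q, ftype='fir')  # filtered before downsampling

for label, y in [("naive", naive), ("filtered", proper)]:
    f, pxx = signal.welch(y, fs=fs / q, nperseg=2048)
    # The 6 kHz line folds down to |6000 - 3*2048| = 144 Hz in the naive case.
    print(label, "PSD near 144 Hz:", pxx[np.argmin(np.abs(f - 144))])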
The dynamic range issue in the whitened channel can be improved by switching to five zeros at 0.3 Hz and five poles at 30 Hz.
The current whitening settings (five zeros at 1 Hz, five poles at 100 Hz) produce more than 70 dB of variation from 10 Hz to 8 kHz, and 130 dB of variation from 0.05 Hz to 10 Hz.
The new whitening settings can give less than 30 dB of variation from 10 Hz to 8 kHz, and 90 dB of variation from 0.05 Hz to 10 Hz.
We could also use 6 zeros at 0.3 Hz and 6 poles at 30 Hz, which would give 30 dB of variation from 10 Hz to 8 kHz, and 66 dB of variation from 0.05 Hz to 10 Hz.
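For reference, the candidate whitening shapes can be compared with something like the sketch below; note that it only computes the filter responses, so reproducing the dB-variation numbers above would additionally require multiplying by a measured DARM spectrum (not included here):

import numpy as np
from scipy import signal

freqs = np.logspace(np.log10(0.05), np.log10(8000), 1000)

def whitening_response(n, f_zero, f_pole):
    # Magnitude response of n real zeros at f_zero and n poles at f_pole,
    # normalized to unity gain at high frequency (the normalization drops out
    # when comparing variation across a band).
    z = -2 * np.pi * f_zero * np.ones(n)
    p = -2 * np.pi * f_pole * np.ones(n)
    _, h = signal.freqs_zpk(z, p, 1.0, worN=2 * np.pi * freqs)
    return np.abs(h)

old = whitening_response(5, 1.0, 100.0)  # previous settings
new = whitening_response(6, 0.3, 30.0)   # settings installed in this change

band = (freqs >= 10) & (freqs <= 8000)
for name, h in [("5z @ 1 Hz / 5p @ 100 Hz", old), ("6z @ 0.3 Hz / 6p @ 30 Hz", new)]:
    span = 20 * np.log10(h[band].max() / h[band].min())
    print(name, f"filter-only span from 10 Hz to 8 kHz: {span:.0f} dB")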
The 6x p/z solution was implemented: LHO#25778