Kyle
Soft-closed GV5 and GV7 -> Resumed rough pumping of HAM1 -> New CDS vacuum signals were added for the HAM1 pressure gauge pair (much thanks to Richard M., Filiberto C. and Dave B.) -> Associated vacuum computer(s) had to be rebooted to facilitate this -> CP1 level marginally impacted -> Switched pumping over to the HAM1 turbo -> Connected leak detector (LD) and helium leak tested all view ports on HAM1 -> Pressure ~1 x 10^-5 torr, LD baseline < 5 x 10^-10 torr*L/sec, sprayed an audible flow of helium for 20 seconds on the air side of each window (through the hole in the VP protector) -> No LD response -> Disconnected LD and resumed pumping with the HAM1 turbo
For tonight I am pumping HAM1 with its turbo backed by the HAM pump cart and am leaving GV5 and GV7 soft-closed.
Richard, Kyle, Filiberto, Patrick, Dave
The h0vely EPICS system was modified to add the records to monitor/control the new Pirani/Cold-Cathode gauge pair connected to HAM1. The HVE-LY:X0.db database file was created and added to the startup.h0vely file. The h0vely VME crate was rebooted and the new database was started. The vacuum MEDM screens were updated to show the new signals.
The H0EDCU_VE.ini file was updated and the DAQ restarted. The Pirani and Cold-Cathode calculated pressures in Torr are being recorded by the DAQ with the channel names HVE-LY:X0_100ATORR and HVE-LY:X0_100BTORR.
The vacuum overview MEDM screen which is being displayed on the CDS Web page was also updated for offsite monitoring.
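For anyone wanting to spot-check that the new channels are making it into the frames, a minimal sketch using the nds2 Python client might look like the following. The server name and GPS span are assumptions; the channel names are taken from above.

import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)   # server name assumed
start, stop = 1108980000, 1108980060                        # placeholder GPS span
bufs = conn.fetch(start, stop, ['HVE-LY:X0_100ATORR', 'HVE-LY:X0_100BTORR'])

for buf in bufs:
    # each buffer carries the channel metadata and the data as a numpy array
    print(buf.channel.name, buf.data.mean(), 'Torr (mean over the span)')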
The realization that I had not completed all of the required steps following Tuesday's pumping of HAM1 has required that I interrupt commissioning. How soon the Corner Station can be opened to the End Stations is under discussion. Until then, I am running rotating pumps on and near HAM1. I am sorry for the inconvenience.
Well, unlike at EndY, the pump station maintenance did not pay off as well. Bottom line: the remaining coherences seen in the Z and HP dofs are not reduced as much as they were at EndY after the maintenance. Possible reason--while I found lots of Accumulators that needed charging (just like at EndY), the explicit grounding of the power supply common legs did not make the Pressure Sensing noise better. In fact something in this process has made the sensor noise worse; but it wasn't the adding of the ground.
Details: First I added the explicit jumper from the common legs on the power supply to the supply ground plug, but this had no observable effect on the striptool I was watching, so I figured it was doing no harm. After Guardian brought ISI/HPI down to OFFLINE, I ramped the pressure down with the servo. Then the motor was greased and the Accumulators were charged: most Accumulators were essentially uncharged and two on the pump station were leaking. I was able to play with the Schrader valve and stop the leaks (may be iffy). After the Accumulators were charged, the system was brought back online and Guardian reisolated with no problems.
See the first attachment for the second trends spanning the time when the system was down and then back on. This is where I first saw that the noise on the pressure sensor channel more than doubled. What happened to cause the noise to change like this? I plugged the ground wire in before bringing the pump down and didn't see a noise increase. Is the power supply flaky? I did have to move the servo box around to access the Accumulators...
It is clear this is bad when you look at the second attachment with the Pump Pressure ASD: it is a few to several times worse than Sunday (reference traces). The remaining attachments are the coherences between the Pump Pressure and the HEPI L4Cs & ISI T240s. The coherences have improved, suggesting the Accumulators serve us well. But the improvements are not as good as those seen at EndY, where the Pump Pressure noise dropped by a factor of 5 to 10.
It is clear even from the thumbnail that RZ had some coherence as well, but it too is now reduced with the Accumulator charging. RZ didn't have nearly this much coherence at EndY.
Hugh, Jim
Short version: Control loops have been installed on HAM1 HEPI. It's a hack, it could get ugly. Or it could totally work.
Since unlocking HAM1 HEPI was discussed, Hugh and I decided to see if we could quickly run through the commissioning scripts. I tried to look at the transfer functions we took in June of 2013, but only got errors. When I looked for the mat files, I found that the data had never been collected. Hugh reported having had troubles, so we figured that our data collection scripts must have broken and Hugh was never able to get help to fix the problem. For now, we've copied the isolation filters from HAM2 HEPI (the HEPI controllers for the HAMs are all generic, so this should be okay). There are other things missing: blend filters (which are 1 for the IPS and 0 for the L4C, anyway), actuator symmetrization filters (which are just gains that should be close to 1), and L4C symmetrization filters (not needed). I've also installed the Z sensor correction filter, like we use on the BSC HEPI, which might allow us to get some inertial isolation between 0.1 and 1 Hz. It would be best to have data to at least compare transfer functions to HAM2, but maybe we'll get lucky.
And if we get it unlocked and get some time, we could collect the TFs and possibly get these close enough to be actually correct. The L4C is still a problem, but as long as we don't try any actual isolation (which none of the HEPIs do now), we will be okay.
LVEA: Laser Hazard
Observation Bit: Undisturbed
07:15 Karen & Cris – Cleaning in the LVEA
08:00 Reset Observation Bit to Commissioning
08:12 Jim – Running testing on HAM4 & HAM5
08:14 Filiberto – Cabling vacuum gauge at HAM1
08:15 Elli – Going to HAM1 area to look for parts
08:15 Hugh – Doing HEPI maintenance at End-Y & End-X
08:20 Elli – Out of the LVEA
08:39 Nutsinee – Going to End-X to recycle PCal camera
08:52 Jim – Finished testing
09:12 Nutsinee – Back from End-X
09:23 Adjusted ISS RefSignal to reduce diffracted power
09:41 Kiwamu – Transition LVEA to Laser Safe
09:55 Sudarshan – Installing accelerometers on IOT2R and ISCT6
10:30 Mitch – Working in the LVEA on 3IFO stuff
10:50 Dave & Jim – Moving fibers at End-Y & restart End-Y PEM/PCal model
10:52 Betsy – Making Safe.Snap of all CS suspensions
11:21 Filiberto – Going to Mid-Y to get parts
11:32 Dave & Jim – Back from End-X
11:33 Dave & Jim – Going to End-Y to move fibers
11:52 TJ – Going to End-Y and End-X to drop off tools
12:10 Kyle – Closing GV5 & GV7
12:14 Dave & Jim – Back from End-Y – Restarting PEM models
12:15 Sheila – Transitioning LVEA back to Laser Hazard
12:16 Contractor on site to see Bubba
12:38 Karen – Cleaning at End-Y
12:44 Richard & Filiberto – Working on electronics at HAM1
12:53 Vendor on site to stock snack machines
13:03 TJ – Back from End stations
13:08 Bubba – Going to End-Y to get equipment
13:44 Karen – Leaving End-X
13:45 Mitch – Out of the LVEA
14:05 Stuart & Jason – Running B&K testing of ITM-Y OpLev piers
14:10 Hugh – Going to End-Y to check HEPI
14:15 Richard – Restarting vacuum system computer
14:18 Dave – Restarting vacuum model
14:19 Dick – Going into LVEA to check RF analyzer
14:35 Betsy, Mitch, & Travis – Going into the LVEA
15:05 Doug & Danny – Looking at OpLev piers to prep for grouting
15:20 Sudarshan – Working on IOT tables
15:25 Doug & Danny – Out of the LVEA
15:25 Doug – Going to End-Y & End-X to check OpLev piers to prep for grouting
15:30 Stuart & Jason – Out of the LVEA
15:52 Sudarshan – Out of the LVEA
This morning, Stuart and I used the SDF front end to take new SAFE.SNAP snapshots of the balance of the suspensions - recall, he had done the BS previously, see alog 16896.
Repeating his alog instructions on how to perform this:
- Transition the Suspension to a SAFE state via Guardian.
- On the SDF_RESTORE MEDM screen, available via the Suspension GDS_TP screen, select FILE TYPE as "EPICS DB AS SDF" & FILE OPTIONS as "OVERWRITE", then click the "SDF SAVE FILE" button to push an SDF snapshot to the target area (which is soft-linked to userapps).
- This safe SDF snapshot was then checked into the userapps svn:
  /opt/rtcds/userapps/release/sus/h1/burtfiles/
  M h1susbs_safe.snap
Note, please don't use the BURT interface to take SAFE.SNAP snapshots anymore, as the SDF monitoring will become disabled. All other snapshots are free to be taken via BURT, however.
Also, we found many alignment values not saved on IFO_ALIGN which, after confirming with the commissioners, we saved.
[Betsy W, Stuart A, Jamie R] After taking safe SDF snapshots for the IM suspensions, we found that Guardian had crashed for IM2 and IM3 when we attempted to transition them from SAFE to ALIGNED states. Oddly, IM1 and IM4 still transitioned fine. Upon checking the Guardian log it was apparent that it was falling over for IM2 & IM3 immediately after reading the alignments from the *.snap files. On initial inspection there was nothing obviously different or wrong with the IM2 & IM3 alignment files. After contacting Jamie, he noticed an extra carriage return at the end of the IM2 & IM3 alignment files, which was causing issues for Guardian. Removing the carriage return and reloading Guardian rectified the problem.
To be clear, the IM2/3 guardian nodes hadn't crashed, they had just gone into ERROR because of the snap file parsing issue.
Also to be clear, there is a bug in the ezca.burtwb() method, which is being used to restore the alignment offsets, such that it doesn't properly ignore blank lines. This will be fixed.
It's unclear why these alignment snap files had these extra blank lines. My guess is that they were hand edited at some point.
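For reference, a minimal sketch of a workaround (not the fix that was actually applied, which was simply deleting the stray carriage returns): strip blank lines from a snap file before the restore step. The file path below is a hypothetical example.

snap_path = '/tmp/h1susim2_aligned.snap'   # hypothetical path, not the real snap file

with open(snap_path) as f:
    lines = [line for line in f if line.strip()]   # drop blank/whitespace-only lines

with open(snap_path, 'w') as f:
    f.writelines(lines)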
Above, when I said "Note, please don't use the BURT interface to take SAFE.SNAP snapshots anymore, as the SDF monitoring will become disabled. All other snapshots are free to be taken via BURT, however."
I was just speaking to SUS safe.snaps. Sorry for the confusion.
I have updated the SUS Driftmon again with the values from the middle of the most recent good lock stretch (1108983616 GPS). As of the update time, all SUSes are in the green, with the exception of the OMs, RMs, and OMC, whose alarm values have yet to be evaluated.
Took 24 hour OpLev trend measurements.
Although the high frequency is kind of screwed up by the wandering line, we can get some interesting information about the lower frequencies.
The full report can be found at the following address:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1108981336/
The most interesting coherence is with SUS-BS_M1_ISIWIT_PIT_DQ, which seems enough to explain most of the noise up to 100 Hz. This is consistent with what Sheila told me, i.e. that we're not fully using BS ISI.
For those interested in the BruCo details, I managed to greatly reduce the time needed to analyze the data, basically with the following modifications: split the coherence computation into the single FFT computations, to reduce redundancy; parallelize the computation, and especially the disk access, using all available processors. This brought the typical execution time to analyze 10 minutes of data down from 8 hours to about 20-30 minutes. The new code is attached.
Here are all the files needed to run BruCo:
bruco.py: main file to execute, see inside for instructions and configurations
functions.py: some auxiliary functions are defined here
markup.py: a library to create HTML pages
bruco_excluded_channels.txt: list of all channels that must be excluded from the coherence computation
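To make the two speed-ups above concrete, here is a minimal sketch (not the attached BruCo code itself): the target channel's segment FFTs are computed once and reused for every coherence, and the per-channel work is spread over a process pool. The sample rate, segment length, and random stand-in data are assumptions; the real code reads the channels from the DAQ.

import numpy as np
from multiprocessing import Pool

FS, NFFT = 16384, 16384          # sample rate [Hz] and FFT length: assumed values

def segment_ffts(x):
    # windowed FFTs of non-overlapping NFFT-long segments
    nseg = len(x) // NFFT
    segs = x[:nseg * NFFT].reshape(nseg, NFFT) * np.hanning(NFFT)
    return np.fft.rfft(segs, axis=1)

def coherence_with_target(aux):
    # coherence of one aux channel against the precomputed target FFTs
    A = segment_ffts(aux)
    p_aa = np.mean(np.abs(A) ** 2, axis=0)
    p_ta = np.mean(np.conj(TARGET_FFTS) * A, axis=0)
    return np.abs(p_ta) ** 2 / (P_TT * p_aa)

if __name__ == '__main__':
    # random stand-ins for 60 s of target and auxiliary channel data
    target = np.random.randn(60 * FS)
    aux_list = [np.random.randn(60 * FS) for _ in range(8)]

    # target spectrum computed once; relies on Linux fork semantics so the
    # worker processes inherit these globals
    TARGET_FFTS = segment_ffts(target)
    P_TT = np.mean(np.abs(TARGET_FFTS) ** 2, axis=0)

    with Pool() as pool:                    # one aux channel per worker
        coherences = pool.map(coherence_with_target, aux_list)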
J. Kissel, K. Izumi,
Regrettably, I have to put this work on hold for the weekend, but it turns out calibration of the IMC in the new CAL-CS infrastructure will be more involved than I thought.
(1) I've managed to install the frequency dependence of the suspensions. Sadly, I've given up (once again) on developing an automated way to generate a foton design string from our matlab dynamical state space models of the suspension. Instead, I used zpkdata on the state space model and cancelled by hand all poles and zeros that were obviously the same (for some reason minreal can't do this for me). The end result is happily what I expect -- see first attachment.
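For illustration, here is a rough Python analogue of that by-hand cancellation (the actual work was done in MATLAB with zpkdata): convert a state-space model to zpk form, then drop pole/zero pairs that agree to within a tolerance. The toy model and tolerance are placeholders, not the suspension model.

import numpy as np
from scipy import signal

def cancel_common_roots(zeros, poles, tol=1e-6):
    # crude "minreal": remove zero/pole pairs that are equal to within tol
    zeros, poles = list(zeros), list(poles)
    for z in list(zeros):
        for p in poles:
            if abs(z - p) < tol:
                zeros.remove(z)
                poles.remove(p)
                break
    return np.array(zeros), np.array(poles)

# toy state-space model with an exact pole/zero overlap at s = -2
A, B, C, D = signal.zpk2ss([-1.0, -2.0], [-2.0, -3.0, -5.0], 1.0)

z, p, k = signal.ss2zpk(A, B, C, D)
z, p = cancel_common_roots(np.atleast_1d(z).ravel(), np.atleast_1d(p).ravel())
print(z, p)   # the cancelling pair at -2 should be gone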
Here're the final filters as implemented in foton, which have been stuck in FM5:
"M1toM3"
zpk([0.018585-i*3.410679;0.018585+i*3.410679;1.606379;0.044818-i*1.036165;0.044818+i*1.036165;
0.118436-i*0.689908;0.118436+i*0.689908],
[0.264197-i*3.085648;0.264197+i*3.085648;0.019394-i*3.411951;0.019394+i*3.411951;1.586567;
0.103907-i*1.688080;0.103907+i*1.688080;0.085448-i*0.524339;0.085448+i*0.524339;
0.043202-i*0.791714;0.043202+i*0.791714;0.043839-i*1.040618;0.043839+i*1.040618], 5.070123e+05,
"n")
gain(1.97234e-06)
"M2toM3"
zpk([0.018965-i*3.411371;0.018965+i*3.411371;0.383092-i*2.775995;0.383092+i*2.775995;
0.109233-i*0.626005;0.109233+i*0.626005;0.045076-i*1.037432;0.045076+i*1.037432;1.595790],
[0.264197-i*3.085648;0.264197+i*3.085648;0.019394-i*3.411951;0.019394+i*3.411951;1.586567;
0.103907-i*1.688080;0.103907+i*1.688080;0.085448-i*0.524339;0.085448+i*0.524339;
0.043202-i*0.791714;0.043202+i*0.791714;0.043839-i*1.040618;0.043839+i*1.040618], 2.394780e+03,
"n")gain(0.000417575)
"M3toM3"
zpk([0.019384-i*3.411943;0.019384+i*3.411943;0.268344-i*3.080800;0.268344+i*3.080800;1.587283;
0.119880-i*1.524350;0.119880+i*1.524350;0.105649-i*0.583444;0.105649+i*0.583444;
0.046372-i*1.036411;0.046372+i*1.036411],
[0.019394-i*3.411951;0.019394+i*3.411951;0.264197-i*3.085648;0.264197+i*3.085648;1.586567;
0.103907-i*1.688080;0.103907+i*1.688080;0.085448-i*0.524339;0.085448+i*0.524339;
0.043202-i*0.791714;0.043202+i*0.791714;0.043839-i*1.040618;0.043839+i*1.040618], 2.462902e+01,
"n")
gain(0.0406025)
Again, the design script can be found here:
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC2/Common/H1SUSMC2_GlobalTFs_to_Foton_20150224.m
(2) The DC gain of the actuation chain -- the M3 (optic) displacement in [m] per (M1, M2, M3) LOCK L drive [ct] -- has not been updated to reflect that we've increased the drive range of the M2 stage. So these should be recalculated.
(3) In the current calibration scheme, we use the error point * 1 / (the sensing function) + the control signal * the actuation function. However, the IMC's sensing function will vary with input power, because it doesn't get normalized at every level of input power. We need to think about how to continually compensate for the optical gain change due to input power changes.
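Purely as an illustrative sketch of the point in (3) (this is not the CAL-CS implementation): the reconstruction and the power-dependent optical gain might look like the following, where every number and array is a placeholder assumption.

import numpy as np

freq  = np.logspace(0, 3, 1000)          # Hz
A_act = 1e-12 / (freq / 10.0) ** 2        # toy actuation function [m/ct]
C_ref = 1e12 * np.ones_like(freq)         # sensing function at the reference input power [ct/m]

P_ref, P_now = 2.0, 10.0                  # reference vs. current input power [W], made up
C_now = C_ref * (P_now / P_ref)           # optical gain scales with input power

err  = np.ones_like(freq)                 # placeholder error-point spectrum [ct]
ctrl = np.ones_like(freq)                 # placeholder control-signal spectrum [ct]

L_true   = err / C_now + ctrl * A_act     # reconstruction with the correct optical gain
L_static = err / C_ref + ctrl * A_act     # what a fixed-C calibration would report instead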
(4) The current calibration scheme, where we use the IMC error point AND control signals to reconstruct the mode cleaner length, isn't properly supported by the current LSC infrastructure. Namely, the error point of the IMC control loop is an analog signal that's *before* the control signal is split into the FAST and SLOW paths. See attached screenshot. The signal is already digitized in the LSC frontend, as "H1:IMC-I_OUT_DQ," but it is not sent over IPC to the CAL-CS model. Further, for the control signal, we need both the FAST and SLOW actuator paths (which *are* already sent to the CAL-CS model). So, if we want to continue with the new CAL-CS calibration scheme, we need to send the output of the IMC_I filter bank to the CAL-CS model -- which requires a modification to the LSC model and to the CALCS model.
So.... more work to do on this than I had initially planned and hoped for. We'll get back to it on Monday. *Maybe* I can convince people to make the front end model change next Tuesday.
I just pushed a patch to cdsutils (now r441) that includes improved CDSMatrix support:
See the built-in help for full documentation:
jameson.rollins@operator1:~ 0$ guardian -i
--------------------
aLIGO Guardian Shell
--------------------
ezca prefix: H1:
In [1]: from cdsutils import CDSMatrix
In [2]: help(CDSMatrix)
Last night we noticed, looking at the real time spectrum of DARM, that there was a wandering line. The attached spectrograms show the peculiar behavior: about every 270 seconds (not regular) this line enters the spectrum from the high frequency range and moves down in a quite repeatable way (the frequency has a nearly perfect exponential evolution in time). Then there is some sort of burst of noise before the line starts again from the high frequency.
This behavior seems different from the wandering line related to IMC-F seen at Livingston.
I was just looking at the same feature. The burst of noise looks like a beatnote whistle, similar to what we saw at Livingston with IMC-F. At first glance, it looks like the whistle is occurring when the drifting signal crosses through the OMC length dither at 3.3 kHz. I'm attaching a few spectrograms zoomed in at various levels to look more closely at the feature. The frequencies look discrete when you zoom in; it doesn't seem to be a continuous signal. Was there some kind of swept sine injection that was unintentionally left on during the lock?
I plotted a spectrum long enough to catch all of the frequencies of the signal as it swept down. The placement of frequencies seems more sparse at higher frequencies and becomes more densely packed as it dips below the kHz range.
The feature is visible in the REFL signals as well, hinting that something is going on in the laser. It's also visible in LSC-IMCL and LSC-REFL_SERVO_ERR.
This feature is showing up in MICH, SRCL, and PRCL. It's more faint in MICH, but is very strong in PRCL and SRCL. It's also showing up in the input to BS M3 LOCK filter for the length DoF, but it looks like MICH was being used to feed back on the BS position. I didn't see any evidence of the signal in MC2 trans, IM4 trans, IMC-F, or the IMC input power.
Problem solved: an SR785 was connected to the excitation input of the common mode board, and the excitation was on. We disabled the excitation input from the common mode board MEDM screen.
Sheila, Alexa, Gabriele, Evan
In addition to the differential ETM loops, we now have closed the common ETM degrees of freedom using REFLA9I + REFLB9I. These loops are slow, with bandwidths of a few tens of millihertz.
Previously (LHO#16883), we had closed loops around IM4 in order to reduce the amount of reflected light into REFL_A_LF. However, tonight we decided instead to close the common ETM DOF, so that the ETMs are nominally controlled in all four angular degrees of freedom. This (hopefully) leaves us free to pursue more loop-closing with the corner optics.
The common ETM loops are implemented in the CHARD filter modules. These modules are stuffed with the same filters as for their DHARD counterparts.
This is a screen shot of the QPDs during a well-aligned lock tonight.
Since about 10:22 UTC Feb 26th, the IFO has been locked on DC readout with 4 ASC loops closed: DHARD PIT and YAW, and CHARD PIT and YAW.
We are leaving this locked with the intent bit undisturbed.
For the record, ~3 h lock.
For this lock stretch the ETM ASC loop settings were a bit different from what I said above: