Jeff, Arnaud, Sheila
This afternoon the arm cavity was staying locked for minutes at a time, not really long enough to make useful progress on WFS.
We decided to work on OpLev damping, in the hopes that we would be able to stay locked longer with OpLevs.
First we noticed that in the locally modified quad models for ETMX and ITMX, the OpLev damping signal was summed into the drive to the PUM after the drive align matrix, meaning that we would not be able to take advantage of all of Keita and Arnaud's work diagonalizing the PUM drive. So we made a change to the front end model to sum the OpLev damping signals in before DRIVEALIGN, but after the lock out switch. For the record, this means that there can be a signal going to the OSEMs even when the lock out switch is off. A screenshot of the new version of the model is attached, and it is committed to the svn. This hasn't yet been done for L1 or ITMX, but we will want to do it if we are going to use OpLev damping permanently.
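To make the ordering explicit, here is a minimal sketch (illustrative Python, not the actual front-end code) of the new L2 signal path, showing why the OpLev damping signal can reach the OSEMs with the lockout switch off:

```python
# Illustrative sketch (not the real front-end model) of the new L2 (PUM) path:
# lockout switch -> OpLev damping sum -> DRIVEALIGN -> coil drive.
# Because the OpLev sum sits downstream of the lockout switch, its signal still
# reaches the OSEMs when the switch is off.
import numpy as np

def pum_drive(isc_drive, oplev_damp, drivealign, lockout_on):
    """Toy L2 drive path in a 3-DOF (L, P, Y) basis."""
    gated = isc_drive if lockout_on else np.zeros_like(isc_drive)
    summed = gated + oplev_damp        # OpLev damping added after the lockout switch
    return drivealign @ summed         # DRIVEALIGN (diagonalization) applied last

drivealign = np.eye(3)                 # placeholder for the real DRIVEALIGN matrix
isc = np.array([1.0, 0.0, 0.0])        # some ISC/global drive
oplev = np.array([0.0, 0.3, 0.1])      # OpLev pitch/yaw damping
print(pum_drive(isc, oplev, drivealign, lockout_on=False))  # non-zero even with lockout off
```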
Based on Keita's measurement in alog 10747 we designed a controller to have a lower UGF of 0.1 Hz and an upper UGF of 2.5 Hz, with a pole at 10 Hz, which is now loaded in FM4 of H1SUS-ETMX_L2_OLDAMP_P. This immediately saturated the DAC, and we had to turn the gain down to -0.1 to prevent constant saturation. The current loop gain is attached, with loop gain above unity from 0.4-0.65 Hz and phase margins around 40 degrees. If we want to use more gain, we will need to drive the UIM, which means we will need to diagonalize it.
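As a sanity check of this kind of design, the open-loop UGFs and phase margins can be computed offline. A hedged sketch only: the plant below is a toy 0.55 Hz resonance standing in for the measured L2-drive-to-OpLev pitch TF in alog 10747, and the controller is an illustrative damping shape (zero at 0 Hz, pole at 10 Hz), not the filter loaded in FM4:

```python
# Hedged sketch: find the unity-gain frequencies and phase margins of an
# open-loop TF. The plant is a toy 0.55 Hz resonance standing in for the
# measured L2-drive-to-OpLev pitch TF (alog 10747); the controller is an
# illustrative damping shape (zero at 0 Hz, pole at 10 Hz), not the FM4 filter.
import numpy as np
from scipy import signal

f = np.logspace(-2, 2, 2000)
w = 2 * np.pi * f

f0, Q = 0.55, 10.0                                     # assumed pendulum mode
w0 = 2 * np.pi * f0
plant = signal.TransferFunction([w0**2], [1, w0 / Q, w0**2])
ctrl = signal.TransferFunction([1, 0], [1 / (2 * np.pi * 10), 1])  # zero at 0, pole at 10 Hz
loop_gain = 0.1                                        # gain magnitude (sign omitted here)

G = loop_gain * signal.freqresp(plant, w)[1] * signal.freqresp(ctrl, w)[1]

mag = np.abs(G)
crossings = np.where(np.diff(np.sign(mag - 1.0)))[0]   # unity-gain crossings
for i in crossings:
    margin = 180.0 - abs(np.angle(G[i], deg=True))     # distance of phase from +/-180 deg
    print(f"UGF ~ {f[i]:.2f} Hz, phase margin ~ {margin:.0f} deg")
```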
Since we are planning to use PUM for WFS as well, we will probably run into the same saturation problem with WFS, another reason to diagonalize the UIM.
We measured the RMS three times: before we started with damping off (green) 85.5 nrad, then with damping on (magenta) 29.5 nrad, and again with damping off (red) 48.2 nrad. Because the ground motion was changing as we made the measurements, it is not totally clear from the spectra that the loop is helping. The ratio between the ISI tilt (RY) and the OpLev spectrum at 0.43 Hz decreases by ~5 dB, as we would expect from the loop gain.
Is it railing due to a high frequency component?
You should look at the ASD and RMS of the coil output with damping on and off. See also Jeff's alog.
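As a starting point for that comparison, here is a hedged sketch of computing an ASD and the cumulative RMS from a time series (the synthetic data below is only a stand-in for the coil-output or OpLev data, which would be fetched separately):

```python
# Hedged sketch: ASD (via Welch) and cumulative RMS of a time series, e.g. the
# coil output or OpLev error signal. The synthetic data below is only a
# stand-in; real data would be fetched from frames or NDS separately.
import numpy as np
from scipy.signal import welch

fs = 256.0                                             # assumed sample rate
t = np.arange(0, 600, 1 / fs)
x = np.sin(2 * np.pi * 0.43 * t) + 0.1 * np.random.randn(t.size)   # toy signal

f, psd = welch(x, fs=fs, nperseg=int(64 * fs))         # PSD in units^2/Hz
asd = np.sqrt(psd)                                     # ASD in units/sqrt(Hz)

df = f[1] - f[0]
cum_rms = np.sqrt(np.cumsum((psd * df)[::-1])[::-1])   # RMS integrated from high f down

print(f"total RMS ~ {cum_rms[0]:.2f} (same units as the input)")
```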
I had a look at yaw, where the situation seems better than in pitch. The plant for yaw is very similar to pitch (see Keita's alog linked above); however, most of the yaw RMS comes from low frequencies, mostly from a peak around 0.13 Hz. So the lower UGF of this loop is lower, at 0.03 Hz with around 75 degrees of phase margin, while the upper UGF is just above 1 Hz with 130 degrees of phase margin. The controller has a zero at zero, a pole at 0.1 Hz, and a notch at 0.6 Hz (see the attached open loop TF).
After looking at the spectrum with the loop on, I decided to add a resonant gain at 0.13 Hz, shown in the open loop design in the attached foton screenshot. The "Plant" filter is an imitation of the yaw-to-yaw transfer function in Keita's alog.
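The yaw shape described above can be sketched offline to sanity-check the loop. This is a minimal scipy sketch, not the foton filter itself; the Qs, notch depth and resonant-gain height are placeholder values:

```python
# Hedged sketch of the yaw damping shape described above: zero at 0 Hz, pole at
# 0.1 Hz, notch at 0.6 Hz, plus a resonant gain at 0.13 Hz. The Qs, notch depth
# and resonant-gain height are placeholders, not the values loaded in foton.
import numpy as np
from scipy import signal

def biquad(f0, qz, qp):
    """(s^2 + w0/qz*s + w0^2) / (s^2 + w0/qp*s + w0^2); gain at f0 is qp/qz."""
    w0 = 2 * np.pi * f0
    return signal.TransferFunction([1, w0 / qz, w0**2], [1, w0 / qp, w0**2])

zero_pole = signal.TransferFunction([1, 0], [1, 2 * np.pi * 0.1])  # zero at 0, pole at 0.1 Hz
notch06 = biquad(0.6, qz=50.0, qp=1.0)     # ~ -34 dB notch at 0.6 Hz (placeholder depth)
resgain13 = biquad(0.13, qz=1.0, qp=10.0)  # ~ +20 dB resonant gain at 0.13 Hz

f = np.logspace(-2, 1, 1000)
w = 2 * np.pi * f
H = np.ones_like(w, dtype=complex)
for tf in (zero_pole, notch06, resgain13):
    H *= signal.freqresp(tf, w)[1]

print(f"controller magnitude peaks near {f[np.argmax(np.abs(H))]:.2f} Hz")
```

Each biquad's qp/qz ratio sets how deep the notch or how tall the resonant gain is, which is the same kind of knob one tunes in the foton design.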
The last attached screenshot shows spectra with and without damping. With the damping and resonant gain on, the ASD is reduced by a factor of 10 at 0.13 Hz, and the RMS is reduced by a factor of 2 compared to no damping.
The yaw damping doesn't saturate the DAC, so I think this is good and we can run with it on. Pitch will need more work, as Keita says.
Have the analog whitening settings changed on the ETMX optical lever? I've compared a high-frequency spectrum with a measurement I took in February, and there's tons more motion reported above 2 [Hz], while the ISI motion is roughly equivalent if not better at these frequencies (see attached). It looks like there's an extra 1:10 analog filter that's not being compensated, or something...
Looking at previous OpLev spectra that were made in February on ETMX, ITMX and ITMY (calibrated in urad/sqrtHz), it looks like the signal amplitude at 10 Hz (on all three of them) was similar to what you have on your red curve (~0.1 nrad/sqrtHz). Maybe the green trace (reference) was taken with a digital compensation filter engaged, but no analog whitening?
Refer to ALOG-10267 for the changes in whitening settings on Feb 21st 2014. This accounts for the two orders of magnitude difference you see above 10 Hz.
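To illustrate the size of such a mismatch, here is a hedged sketch assuming a single 1:10 whitening stage with a zero at 1 Hz and a pole at 10 Hz (the corner frequencies are an assumption): one uncompensated stage inflates the displayed spectrum by up to 10x above ~10 Hz, and two such stages would give the two orders of magnitude noted above.

```python
# Hedged sketch of an uncompensated whitening stage, assuming a zero at 1 Hz and
# a pole at 10 Hz (the corner frequencies are an assumption). One such stage
# inflates the spectrum by up to 10x above ~10 Hz; two stages give the ~two
# orders of magnitude mentioned above.
import numpy as np
from scipy import signal

f = np.logspace(-1, 2, 500)
w = 2 * np.pi * f

whiten = signal.TransferFunction([1 / (2 * np.pi * 1), 1], [1 / (2 * np.pi * 10), 1])
antiwhiten = signal.TransferFunction([1 / (2 * np.pi * 10), 1], [1 / (2 * np.pi * 1), 1])

Hw = signal.freqresp(whiten, w)[1]
Ha = signal.freqresp(antiwhiten, w)[1]

i = np.argmin(np.abs(f - 30.0))
print(f"uncompensated gain at 30 Hz: {abs(Hw[i]):.1f}x")
print(f"with digital anti-whitening engaged: {abs(Hw[i] * Ha[i]):.2f}x")
```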
Aidan, Thomas, Dave H.
We spent most of today hooking up cables in the mechanical room. Both chillers now have DAC control (although the output channels are still to be configured). We installed most of the TCS cables in the CER too. A few of the long cables (cables X&Y: 46 and 47) are in back to front: we need to pull them out and install them flipped around (we're not going to use gender changers as we'd need too many of them).
The current status is in the attached PDF.
1500 hrs -> Hard-closed GV10
1700 hrs -> Adjusted instrument air at GV6's piston from <5 psi indicated to 5 psi -> GV6 is likely not in full contact with its gate O-rings -> may adjust air for full contact tomorrow
The hardware watchdog unit has been installed in the DAQ test stand for testing. All electronics associated with this unit are powered up, and wiring has been restored to its previous configuration for testing.
Right now the SUS and HEPI are untripped but if they don't survive the exit of the Chamber by the ISC crew, could someone untrip the SUS and HEPI? Thanks much--H
160 channels added and 4 removed for a current total of 122,626.
Updated again to remove H1:IMC-VCO_CONTROLS_EXTFREQUENCYOFFSET.
8:49 am, Aaron to Y-End, stage relay for noise eater.
9:04 am, Apollo to LVEA, move in scissor lift into LVEA via the high bay area, to remove pre-filters from HAM5 cleanroom.
9:15 am, Corey, Jax and Keita to Y-End, TMS work.==>out by 11:40 am.
9:52 am, Aaron to LVEA, move rack by HAM4/BSC3 area for TCS.
10:19 am, Jeff and Andres to LVEA, do chamber side work inside cleanroom by HAM4/5.
10:29 am, Filiberto to LVEA, to visit both spool pieces to terminate OL cables.==>done by 11:30 am.
10:35 am, John to X-end, inspection of VEA area.==>done by 10:35 am.
11:01 am, Karen to Y-End, clean.==> done by 11:31 am.
11:02 am, Betsy to LVEA, west bay area, ITM assembly.
11:25 am, Craig C to LVEA, work inside H2 laser enclosure.
12:55 pm, Corey to Y-End, TMS work.
1:00 pm, Sheila to X-End, ==>done by 1:21 pm.
2:16 pm, Apollo, Jeff and Kate to LVEA, door removal from HAM5.==>Jeff and Kate out by 3:29 pm, Apollo out by 3:36 pm.
2:40 pm, Kyle to Y-Mid, hard close GV10.
3:34 pm, Cyrus and Jim to LVEA, West bay area work.==>done by 3:45 pm.
The I/O chassis and front-end computer for the h1susquadtst test stand have been powered on. The h1iopsusquadtst model was modified to remove a Dolphin IPC channel, to allow the model to run on the computer without a Dolphin connection. The computer is currently running h1iopsusquadtst, which should be adequate for testing. The rtsystab file was also modified to include the h1susquadtst computer.
HAM5 doors were removed and laid on pallets near HAM4/BSC3, to be transferred over the beam tube tomorrow morning.
[Sheila, Arnaud]
HAM2, HAM3, ITMX and ETMX HEPI L4C watchdog thresholds have been increased to 99000 to avoid further HEPI trips today (we had one on HAM2 today, following the ones from Friday). It's unclear what kind of ground motion event would cause this kind of trip; that would be something to investigate.
This threshold increase shouldn't affect the safety of the ISIs, since the CPS and actuator watchdogs are still set to their nominal safe values.
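For reference, a change like this could be scripted rather than clicked through on the MEDM screens. This is only a hypothetical sketch: the channel-name pattern below is a guess for illustration and would need to be replaced with the real HEPI L4C watchdog threshold channels, and the old values should be recorded so they can be restored.

```python
# Hypothetical sketch only: the PV naming below is a guess for illustration and
# should be replaced with the actual HEPI L4C watchdog threshold channels from
# the MEDM screens. Old values are read back first so they can be restored.
from epics import caget, caput

CHAMBERS = ["HAM2", "HAM3", "ITMX", "ETMX"]
NEW_THRESHOLD = 99000

for chamber in CHAMBERS:
    pv = f"H1:HPI-{chamber}_WD_L4C_THRESHOLD"   # hypothetical channel name
    old = caget(pv)
    print(f"{pv}: {old} -> {NEW_THRESHOLD}")
    caput(pv, NEW_THRESHOLD)
```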
We seem to have lost H1:ALS-X_LASER_IR_DC during the WFS rework.
Jeff asked for an entry showing the order in which the "make install-modelname" commands were issued yesterday afternoon, to see if it has any link to the relocking time of the IMC. Note there is a bug in the file names: the field where the month should be actually repeats the minute; this has been fixed. (A parsing sketch follows the listing below.)
Mar 18 16:39 install-h1susmc1_2014_39_18_16:39:15
Mar 18 16:40 install-h1susmc1_2014_40_18_16:40:25
Mar 18 17:41 install-h1iopsusb123_2014_41_18_17:41:18
Mar 18 17:41 install-h1iopsusb123_2014_41_18_17:41:34
Mar 18 17:42 install-h1susitmy_2014_42_18_17:42:10
Mar 18 17:42 install-h1susbs_2014_42_18_17:42:45
Mar 18 17:43 install-h1susitmx_2014_43_18_17:43:09
Mar 18 17:45 install-h1susmc3_2014_44_18_17:44:40
Mar 18 17:45 install-h1susprm_2014_45_18_17:45:21
Mar 18 17:45 install-h1suspr3_2014_45_18_17:45:41
Mar 18 17:46 install-h1iopsush2b_2014_46_18_17:46:14
Mar 18 17:46 install-h1susim_2014_46_18_17:46:39
Mar 18 17:47 install-h1susmc2_2014_47_18_17:47:18
Mar 18 17:48 install-h1suspr2_2014_47_18_17:47:52
Mar 18 17:48 install-h1sussr2_2014_48_18_17:48:24
Mar 18 17:49 install-h1sussr3_2014_48_18_17:48:48
Mar 18 17:49 install-h1sussrm_2014_49_18_17:49:10
Mar 18 17:49 install-h1susomc_2014_49_18_17:49:30
Mar 18 17:51 install-h1iopsusauxb123_2014_51_18_17:51:13
Mar 18 17:51 install-h1susauxb123_2014_51_18_17:51:37
Mar 18 17:54 install-h1susauxh2_2014_54_18_17:54:05
Mar 18 17:54 install-h1iopsusauxh34_2014_54_18_17:54:54
Mar 18 17:55 install-h1iopsusauxh56_2014_55_18_17:55:40
Mar 18 17:56 install-h1susauxh56_2014_56_18_17:56:07
Mar 18 17:56 install-h1iopseib1_2014_56_18_17:56:43
Mar 18 17:57 install-h1iopseib2_2014_57_18_17:57:06
Mar 18 17:57 install-h1iopseib3_2014_57_18_17:57:33
Mar 18 17:58 install-h1hpiitmx_2014_58_18_17:58:17
Mar 18 17:59 install-h1iopseih16_2014_58_18_17:58:59
Mar 18 18:00 install-h1hpiham6_2014_00_18_18:00:18
Mar 18 18:00 install-h1isiham6_2014_00_18_18:00:36
Mar 18 18:01 install-h1hpiham3_2014_01_18_18:01:21
Mar 18 18:02 install-h1isiham2_2014_01_18_18:01:47
Mar 18 18:02 install-h1isiham3_2014_02_18_18:02:15
Mar 18 18:02 install-h1iopseih45_2014_02_18_18:02:51
Mar 18 18:03 install-h1hpiham4_2014_03_18_18:03:14
Mar 18 18:03 install-h1hpiham5_2014_03_18_18:03:33
Mar 18 18:04 install-h1isiham5_2014_04_18_18:04:13
Mar 18 18:04 install-h1ioppemmy_2014_04_18_18:04:43
Mar 18 18:05 install-h1pemmy_2014_05_18_18:05:52
Mar 18 18:06 install-h1ioppsl0_2014_06_18_18:06:18
Mar 18 18:07 install-h1pslfss_2014_06_18_18:06:53
Mar 18 18:07 install-h1pslpmc_2014_07_18_18:07:15
Mar 18 18:07 install-h1psldbb_2014_07_18_18:07:37
Mar 18 18:08 install-h1iopoaf0_2014_08_18_18:08:12
Mar 18 18:08 install-h1peml0_2014_08_18_18:08:37
Mar 18 18:09 install-h1tcscs_2014_09_18_18:09:02
Mar 18 18:09 install-h1odcmaster_2014_09_18_18:09:26
Mar 18 18:10 install-h1omc_2014_09_18_18:09:57
Mar 18 18:10 install-h1iopasc0_2014_10_18_18:10:25
Mar 18 18:10 install-h1amcimc_2014_10_18_18:10:59
Mar 18 18:11 install-h1sushtts_2014_11_18_18:11:30
Mar 18 18:15 install-h1ioppemmx_2014_15_18_18:15:36
Mar 18 18:16 install-h1pemmx_2014_15_18_18:15:54
Mar 18 18:16 install-h1iopsusey_2014_16_18_18:16:22
Mar 18 18:16 install-h1iopsusex_2014_16_18_18:16:38
Mar 18 18:17 install-h1susex_2014_17_18_18:17:06
Mar 18 18:17 install-h1susetmx_2014_17_18_18:17:23
Mar 18 18:17 install-h1susetmx_2014_17_18_18:17:32
Mar 18 18:19 install-h1susetmx_2014_19_18_18:19:17
Mar 18 18:21 install-h1iopseiey_2014_21_18_18:21:27
Mar 18 18:22 install-h1hpietmy_2014_22_18_18:22:18
Mar 18 18:23 install-h1isietmy_2014_22_18_18:22:44
Mar 18 18:23 install-h1isietmx_2014_23_18_18:23:13
Mar 18 18:23 install-h1iopiscey_2014_23_18_18:23:46
Mar 18 18:24 install-h1pemey_2014_24_18_18:24:14
Mar 18 18:24 install-h1iscex_2014_24_18_18:24:52
Mar 18 18:25 install-h1odcy_2014_25_18_18:25:11
Mar 18 18:25 install-h1odcx_2014_25_18_18:25:26
Mar 18 18:25 install-h1iopsusauxey_2014_25_18_18:25:55
Mar 18 18:26 install-h1iopsusauxex_2014_26_18_18:26:05
Mar 18 18:27 install-h1susauxex_2014_26_18_18:26:59
Mar 18 18:28 install-h1susetmx_2014_28_18_18:28:17
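For anyone who wants to sort or trend these installs, here is a hedged sketch of parsing the filenames, working around the naming bug mentioned above (the month field repeats the minute, so the month has to come from the "Mar 18" context instead):

```python
# Hedged sketch of parsing the install filenames above, given the naming bug
# (the field where the month should be repeats the minute). The month is
# therefore supplied externally rather than taken from the filename.
import re
from datetime import datetime

LINE_RE = re.compile(
    r"install-(?P<model>\S+)_(?P<year>\d{4})_\d{2}_(?P<day>\d{2})_(?P<time>\d{2}:\d{2}:\d{2})"
)

def parse_install(name, month=3):
    """Return (model, datetime) for one install filename; month supplied externally."""
    m = LINE_RE.search(name)
    ts = datetime.strptime(
        f"{m.group('year')}-{month:02d}-{m.group('day')} {m.group('time')}",
        "%Y-%m-%d %H:%M:%S",
    )
    return m.group("model"), ts

entries = [parse_install("install-h1susmc1_2014_39_18_16:39:15"),
           parse_install("install-h1susetmx_2014_28_18_18:28:17")]
for model, ts in sorted(entries, key=lambda e: e[1]):
    print(ts, model)
```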
I had a look at the IMC ODC summary around the times stated above, and the ODC reports the IMC is green for the following times (in PDT):
16:30:39 - 16:31:40
17:50:31 - 17:50:32
17:50:36 - 17:50:38
17:51:00+
I've also attached a plot showing the IMC ODC over a 3 hour time period which includes the times of interest (the plot starts at 16:30 PDT).
Attached are plots of dust counts for locations 1 and 2 in the End Y VEA. These are minute trends going back 30 days from 08:00 am PDT March 19, 2014 (February 17 - March 19).
H0:PEM-EY_DUST_VEA1_500NM_PCF: Dust counts greater than 0.5 microns at location 1, normalized to particles per cubic foot.
H0:PEM-EY_DUST_VEA1_300NM_PCF: Dust counts greater than 0.3 microns at location 1, normalized to particles per cubic foot.
H0:PEM-EY_DUST_VEA2_500NM_PCF: Dust counts greater than 0.5 microns at location 2, normalized to particles per cubic foot.
H0:PEM-EY_DUST_VEA2_300NM_PCF: Dust counts greater than 0.3 microns at location 2, normalized to particles per cubic foot.
March 5, 2014: Betsy finds contamination on the cartridge, notes that it must have occurred after installation, and comments that dust counts recorded at BSC10 prior to this date are not useful due to the location of the dust monitor there. (alog)
February 28 - March 3, 2014: No dust counts available; Comtrol found mysteriously unplugged. (alog)
February 24, 2014: Cartridge installation (alog); Jeff's plots of dust counts during cartridge installation (alog).
February 21, 2014: ISI wiped down. (alog)
February 20, 2014: TMS bagged (alog); SUS bagged (alog).
December 5, 2013: Dome and north door removed from BSC10. (alog)
(Alexa, Sheila, Kiwamu) -- a post from yesterday that didn't get uploaded due to alog maintenance
I repeated the IMC intensity noise measurement that was done for HIFOY (alog 7364):
Step 1: Measure the power spectrum of MC RIN via H1:PSL-ISS_PDA/B_CALI_DC_OUT with both the ISS off (REF 7) and on (REF 11) (see 20140318_MCRIN_Data.png for these measurements).
Step 2: Calibrate via the following:
IMC_trans_freq_noise = f0/L * PWR*(2+1+1)/c/mMC2/(2*pi*f)^2 * RIN
where
f0 = 2.818e14 Hz (red laser frequency)
L = 16.4736 m (IMC length, one way)
PWR = 1.36e3 W (power in the IMC, = 8290 mW * Finesse/pi, Finesse = 516)
mMC2 = 2.9 kg (mass of the MC2 mirror, same as for MC1 and MC3)
(2+1+1) (effect of MC2 at 0 deg, MC1 at 45 deg, MC3 at 45 deg)
f (audio frequency)
This calibration is set up in the matlab file calib.m (and returns the Hz-calibrated txt file). We only use PDB_DC_CALIB, since this measurement gives the out-of-loop sensor noise (while PDA gives the in-loop noise). See the screenshot for the result.
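For reference, here is a minimal Python re-implementation of the calibration above (the real calculation lives in calib.m; the RIN values below are placeholders standing in for the exported DTT spectrum of PDB_CALI_DC_OUT):

```python
# Hedged re-implementation in Python of the calibration done in calib.m. The RIN
# values below are placeholders standing in for the exported DTT spectrum of
# H1:PSL-ISS_PDB_CALI_DC_OUT.
import numpy as np

c = 299792458.0        # m/s
f0 = 2.818e14          # Hz, laser (red) frequency
L = 16.4736            # m, IMC length (one way)
PWR = 1.36e3           # W, power in the IMC (= 8.290 W * Finesse/pi, Finesse = 516)
m_mc2 = 2.9            # kg, MC2 mirror mass (same as MC1, MC3)
geom = 2 + 1 + 1       # MC2 at 0 deg, MC1 and MC3 at 45 deg

def rin_to_freq_noise(f, rin):
    """Frequency noise [Hz/sqrt(Hz)] from RIN [1/sqrt(Hz)] at audio frequency f [Hz]."""
    return f0 / L * PWR * geom / c / m_mc2 / (2 * np.pi * f)**2 * rin

f = np.logspace(0, 3, 200)             # audio band, Hz
rin = 1e-7 * np.ones_like(f)           # placeholder flat RIN ASD
noise = rin_to_freq_noise(f, rin)
print(f"frequency noise at ~100 Hz: {noise[np.argmin(np.abs(f - 100))]:.2e} Hz/rtHz")
```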
Note: In order to ensure a proper calibration of these channels with the increased power into the MC, I adjusted H1:PSL-ISS_PDA/B_CALI_DC_GAIN such that the tsdavg was 1. PSL TEAM: does this change the gain of the loop?? Feel free to return to the nominal values. The ISS and MC still lock under the new configuration.
(all these files can be found in: /ligo/home/alexan.staley/Public/IMC_IntensityNoise/... I was not able to upload them because the files were too large)
The next step will be to insert this result into the model...TBD
I attached the wrong DTT snapshot: I calibrated PDB (not PDA), but the DTT snapshot I previously posted was of PDA. I have attached a matlab plot of the DTT power spectrum of PDB.
For some reason (maybe related to the CDS boogie man???) the supervised guardian nodes (i.e. the nodes running under the guardian supervision infrastructure on h1guardian0) are unable to talk to any of the h1ecatx1 TwinCAT IOC channels.
Sheila first noticed this when guard:ALS_XARM was not able to connect to the H1:ALS-X_LOCK_ERROR_STATUS channel. The truly weird thing, though, is that all other channel access methods can access the h1ecatx1 channels just fine. We can caget channels from the command line and from ezca in python. We can do this on operator workstations and even from the terminal on h1guardian0. It's only the supervised guardian nodes that can't connect to the channels.
I tried reloading the code and restarting the guardian nodes; nothing helped. The problem was the same regardless of which node was used. Note:
I'm at a loss.
There's clearly something different about the environment inside the guardian supervision infrastructure that makes this kind of failure even possible, although I honestly have no idea what the issue could be.
I'm going to punt on spending any more time diagnosing the problem and just chalk it up to the other weirdness. Hopefully things will fix themselves tomorrow.
The other thing to note is that I did an svn update on that computer right before it crashed; it might be worth looking at what was included in the update to see if it changed the behavior of the IOC somehow.
(As also discussed in person.) This may be due to a difference in the environment setup between the Guardian supervisors, and a login shell. The EPICS gateway processes are still in place on the FE subnet, as we have not changed the data concentrator or FE systems to directly broadcast to other subnets. So, the channel behavior will be dependent on the setting of the EPICS_CA_ADDR_LIST environment variable, specifically whether CA will traverse the gateway or route through the core switch. The problem described sounds a lot like the issue the gateway has with reconnecting to Beckhoff IOCs, if the Guardian processes are connecting to the gateway then this would explain the behavior described. Jaime was going to look at the Guardian environment setup as time permits, to see how it differs from the current cdscfg setup.
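A quick way to test this hypothesis would be to run the same connectivity check from a login shell and from inside the guardian environment, and compare the CA address list each one sees. A hedged sketch using pyepics (the channel name and timeout are just examples):

```python
# Hedged diagnostic sketch: check channel-access connectivity for a Beckhoff
# (h1ecatx1) channel and print the EPICS environment in use. Run the same
# snippet in a login shell and inside the guardian environment and compare.
# pyepics caget returns None when the connection fails.
import os
from epics import caget

pv = "H1:ALS-X_LOCK_ERROR_STATUS"

print("EPICS_CA_ADDR_LIST =", os.environ.get("EPICS_CA_ADDR_LIST"))
print("EPICS_CA_AUTO_ADDR_LIST =", os.environ.get("EPICS_CA_AUTO_ADDR_LIST"))

value = caget(pv, timeout=5.0)
if value is None:
    print(f"{pv}: no connection (possibly routed through a stale gateway entry)")
else:
    print(f"{pv} = {value}")
```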
Some of the digital video cameras used to have names that appeared on the links in the H1VIC_DIGITAL_OVERVIEW.adl screen.
The names disappeared sometime this afternoon, maybe related to the deleted files?
The overview is one of the files that went missing. I had an older copy saved elsewhere which I restored so as to provide access to the cameras, but this does not have the (later) name updates. Once the backup copies are retrieved from tape, I should be able to restore the latest version. It was also an oversight that these were not in SVN, which will also be corrected.
I have restored the latest version of the camera screens; they have also been added to the userapps repo in cds/h1/medm, and symlinked in the /opt/rtcds/lho/h1/medm/cds area.