J. Kissel, R. Abbott, T. Hardwick, C. Blair, D. Barker, and J. Betzwieser

After a good bit of chasing our tails and Simulink model juggling, we've confirmed that the new infrastructure necessary for the new ITM ESD drivers (see E1600064) is ready to go. I'll post more details and screens later, but below is a summary of the things we touched during the day's installation:

- Several top-level models are affected by this change:
  /opt/rtcds/userapps/release/sus/h1/models
      h1susauxb123.mdl <-- for the new voltage monitor channels
      h1susbs.mdl <-- in charge of distributing the BIO signals
      h1susitmx.mdl <-- integration into the control system
      h1susitmy.mdl <-- integration into the control system
- We also, sadly, needed to create a new QUAD library part, because the ITM ESD driver infrastructure is that much different from the ETM ESD driver system; it now lives here:
  /opt/rtcds/userapps/release/sus/common/models/QUAD_ITM_MASTER.mdl
  In addition, we added a new function called "ESD_ITM" to /opt/rtcds/userapps/release/sus/common/src/CD_STATE_MACHINE.c, which is called in the BIO block of the new QUAD library part.
- Also, in cleaning up the models, we re-installed all DAC cards directly from the CDS_PARTS library, which means that a few of the MEDM macro descriptions needed updating -- plus the new macros for the ITM ESD drivers themselves. Those are in
  /opt/rtcds/userapps/release/sus/common/medm
      susitmx_overview_macro.txt
      susitmy_overview_macro.txt
- Both the new BIO block and QUAD library parts needed entirely new associated MEDM screens, which are now separate offshoots of the ETM screens:
  /opt/rtcds/userapps/release/sus/common/medm/quad/
      SUS_CUST_QUAD_ITM_OVERVIEW.adl
      SUS_CUST_QUAD_ITM_BIO.adl
- I updated the sitemap to call these new SUS_CUST_QUAD_ITM_OVERVIEW screens, passing the appropriate macro files for each optic. The sitemap has been committed here:
  /opt/rtcds/userapps/release/cds/h1/medm/SITEMAP.adl
More details about this install can be found in G1601304.
During the power outage the ion pump controller at Mid-Y failed its transition from facility power to generator power, so Kyle borrowed the controller from Y-End to power the ion pump at Y-Mid (IP9). Today I attempted to swap the DUAL-type (borrowed) controller with a Gamma type, but the brand-new Gamma controller failed. By mistake I then installed the original controller that Kyle had removed; that controller decided to work without problems this time, so I left it installed. I returned the borrowed controller to the Y-End station to power IP11, but it turns out that the signal wire connector is falling apart and needs to be fixed, which will be done at the next opportunity.
The new Gamma controller was bench tested and its fuses checked; it continues to fail, outputting only -50 VDC. Not good. It will be sent out for warranty repair.
Pulled cable from the annulus ion pump controller on BSC5 all the way to the rack on the south wall of the X-End VEA. At the next opportunity we'll have the cable landed and terminated.
Michael, Krishna
We are investigating the electrostatic/capacitive actuator on BRS-Y by driving it with a small 50 mV modulation at 20 mHz at different bias voltages. BRS sensor correction is turned off and should not be turned on during the test (winds are low). Avoid going near the instrument, if possible.
Looked into the RMS binary reset for the PUM ETMX chassis.
1. On the PUM Chassis, disconnected the DB37 binary input cable and grounded Pin 18 on the chassis side. This untripped the RMS watchdog.
2. Looked at the output of the binary output interface chassis. Did not see any voltage change when I cycled through the reset in MEDM. This was looking at Pin 18.
3. Spoke to Dave and determined that the output for the RMS reset on the binary output interface chassis is on Pin 2 of the second DB37 connector.
4. Tested our cable going from the binary output interface chassis to the PUM chassis and confirmed it is a one-to-one pin layout.
It looks like we need a special cable with Pin 2 on the male side tied to Pin 18 on the female side. For now I reconnected all cabling and left all original electronics in place.
EY is reported to be operational, so I will compare its cabling with that of EX. As of now, the EX RMS reset is still not operational.
Filiberto Clara
Worked on the DBB 200W beam path alignment this morning. One side effect of the power outage that I hadn't considered was that it reset the DBB alignment PZTs to zero. This is actually a good thing as these PZTs need to be as close to zero as possible, and were rather far off in vertical alignment due to the previous bad DBB alignment. This reset essentially forced me to completely realign the beam to the DBB PMC, but with the added benefit of not having to tailor my alignment to also minimize the PZT voltage; it's already at zero.
For this alignment I scan the DBB PMC by ramping its PZT at 10Hz and look at the output of the DBB PMC TPD on an oscilloscope. As a single scan of the PZT covers several FSRs of the DBB PMC the oscilloscope output shows the different higher order mode peaks that resonate in the PMC as the PZT ramps. Alignment is then done by identifying and reducing the TEM10 and TEM01 peaks. At first I didn't really get anywhere and the beam transmitted by the DBB PMC looked really ugly no matter what I did to the alignment, so I checked the distance between the 2 mode matching lenses. Should be ~168mm (this is where I left it last time I worked on the DBB), was more like 187mm (not sure why this moved, didn't see the change documented in the alog). Moved DBB_MML2 to make the distance 168mm, which then gave me a signal through the DBB PMC that I could use in alignment (had a transmitted beam that was somewhat round). Horizontal and vertical alignment is now much better than previous, alignment peaks have almost vanished.
I then worked on the mode matching by moving one of the mode matching lenses slightly (~0.3mm; can do this with the DBB MEDM screen. Note to self: positive movement on the MEDM screen moves the lenses AWAY from DBB_M1), realigning to the DBB PMC (as moving lenses changes alignment), then assessing whether or not this change improved the mode matching peaks in the PZT scan. Lather, rinse, repeat. All told I moved DBB_MML1 by +1.6mm and DBB_MML2 by +4.0mm. This appeared to improve the mode matching peaks significantly, and the beam transmitted by the DBB PMC is much more round (although still slightly oblong in the vertical direction to my eye). Even with these apparent improvements the DBB PMC would not lock, so there is still alignment/mode matching work to do (or something else going on that I haven't considered/worked out yet). I will continue this work in the mornings as commissioning allows.
It should be noted that all this work was done solely on the 200W beam path. Due to the resetting of the DBB PZTs and the subsequent realignment of the 200W path, the 35W path into the DBB will need significant realignment. At this time the 35W path into the DBB is not usable. I will realign/mode match the 35W beam path once the 200W path is done (as the 35W alignment depends on the 200W path being aligned/mode matched).
Chiller filter replacements, planned power outage, re-calibration of flow sensor pulse calibration, etc.
~1000 - 1430 hrs. local (2001-era pump shorted following power cycling of its power cord.) Work is complete. Pump cart removed and HAM10 in its as-found (nominal) state.
(All time in UTC)
14:57 Jeff K. running charge measurement both ETMs.
15:07 Joe to LVEA checking batteries on forklifts, etc.
15:21 Gerardo to MY. WP#5920
15:35 Fil+Ed to CER
15:43 Chris S. taking pest control to LVEA (~20 min)
15:48 HEPI weekly inspection at End stations (Hugh)
15:50 Ken installing lightbulbs in CER
15:52 Kristina+Karen to LVEA
16:07 Chris+Mich taking pest control to end stations
16:12 Joe out
16:21 Jason to PSL working on diagnostic breadboard
16:33 Nutsinee running CO2 RS script (~20 mins).
16:35 Ken opened the roll-up door between high bay and LVEA to bring a lift through.
16:39 Fil to EX working on PUM chassis
17:09 Portable truck on site
17:29 Jeff B. to EX
17:35 Kyle to HAM10 (WP#5923)
Ed installing Newtonian chassis in CER (TCS rack)
17:43 Joe to LVEA
17:48 Gerardo to EX (WP#5922)
17:57 Kristina+Karen to End stations
Carl+Ross+Terra to EY (PI ESD work)
18:02 Jeff B back
18:17 Chris+Mich out
18:25 Hugh back. HEPI Maintenance done.
Joe back
18:50 Jason out
18:53 Karen leaving EY
18:55 Gerardo out of EX
19:15 Kyle out
19:22 Kristina leaving EX
19:40 Terra+ out
Fil to EX
19:44 Terra+ to EX
19:55 Dave restarting h1susb123 model and iop, ITM PI model, and DAQ
20:01 Ed to MX
20:04 Kyle to HAM10 (ion pump)
20:26 Gerardo to LVEA
20:32 Fil back
20:34 Chris+Mich to LVEA (clean room inspection)
20:40 Gerardo out
20:44 Rich to EX (ESD work)
20:55 Mich to LVEA
21:00 Mich out
21:11 Terra+ out
14:17 Jeff Bartlett is taking over until 4.
Title: 06/18/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO unlocked. Recovering from the power outage & maintenance window
Commissioning: Commissioners working on IFO.
Outgoing Operator: Nutsinee

Activity Log: All Times in UTC (PT)
21:20 (14:20) Take over for Nutsinee
21:25 (14:25) Jeff K. & Rich A. – Going into the CER to work on Binary IO for ITM ESD
21:55 (14:55) Jeff K. & Rich A. – Going into LVEA to Beer Garden area
22:00 (15:00) Start initial alignment
22:00 (15:00) Kyle – Is out of the LVEA
22:01 (15:01) Dave & Jim – Going to End-Y CER
22:01 (15:01) Jeff K. & Rich A. – Out of LVEA and back into CER
22:06 (15:06) Jeff K. & Rich A. – Out of the CER
22:50 (15:50) Jeff K. – Restarting BS model
23:00 (16:00) Turn over to Travis

End of Shift Summary:
Title: 06/07/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: Sheila
Incoming Operator: Travis
Shift Detail Summary: Working on initial alignment with Sheila after end of maintenance window.
Betsy, Jeff K, TJ
Yesterday it was found that bringing the suspension Guardians straight to ALIGNED right after a power outage/reboot/etc. may be a bit too aggressive. We have changed the initial request state to SAFE so that the operator, or whoever else, can check each suspension before bringing it to its appropriate state.
Added the following channels to the exclude list:
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE1_COSGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE1_COSGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE2_COSGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE4_SINGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE4_COSGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE3_COSGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE2_SINGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE4_SINGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE3_SINGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE3_COSGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE4_COSGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE3_SINGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE2_SINGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE1_SINGAIN
Re the wandering 59 Hz bump first seen around 25 May and reported Jun 1: with the DQ channels, I checked up to 125 Hz and it does not show itself. Maybe the power cycle cleaned it out.
Otherwise, attached are the CPS spectra for all the BSCs:
The ETMX St1 V1 looks much better now. The ITMX St2 H3 is a little better, but the ITMX St1 V1 is quite noisy now. I'll check this tomorrow to see if it changes.
Routine maintenance, but since the motor greasing is just an annual event I figured it warranted a log entry. Of note, I found that many of the zerk fittings would not open and pass the grease, so I found one that did and installed it as needed. Will open an FRS ticket and purchase better zerks to replace the non-functioning units.
Sheila, Evan, Haocun
We drove the ETM's in pitch and yaw, and took measurements for the HARD loop.
As Evan posted in aLOG 27518, the usual "IN1/IN2" method for FFT-based OLTF measurements was biased because the IN2/EXC coherences were low, so I exported the CLTF data using IN1/EXC (whose coherences were higher) and converted it into OLTFs.
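For reference, the conversion is algebraically simple under the common convention that the IN1/EXC closed-loop transfer function is CLTF = 1/(1 + G): the open-loop gain follows as G = 1/CLTF - 1. A minimal sketch (the data here are hypothetical placeholders, not the actual DTT export):

```python
# Convert closed-loop TF samples (assuming the convention CLTF = 1/(1 + G))
# into open-loop TF samples: G = 1/CLTF - 1.
# Sketch only; real data would be read from the exported DTT text file.

def cltf_to_oltf(cltf):
    """Given complex CLTF samples, return the implied OLTF samples."""
    return [1.0 / c - 1.0 for c in cltf]

# Example: a purely real CLTF value of 0.5 implies an open-loop gain of 1.
print(cltf_to_oltf([0.5 + 0.0j]))  # [(1+0j)]
```

The coherence-based cut described below (dropping points with coherence under 0.4) would simply be a mask applied to these samples before plotting.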
The measurements for both pitch and yaw of CHARD and DHARD are plotted together with the model curves, as attached:
- The measurements were taken at 2 W of laser power. The blue curves are the modeled DHARD plots at 2 W, and the red or orange dots are measured values.
- Only values with IN2/EXC coherence higher than 0.4 were used, otherwise the data were dropped.
- I tried to calculate the uncertainties of the measurements using the method in aLOG 10506, shown as error bars on the plots.
- Number of measurements taken:
CHARD Pitch: 111
CHARD Yaw: 44
DHARD Pitch: 121
DHARD Yaw: 33
We will take more measurements with the same UGF values in the model and do more comparison.
RCG 3.0.3 upgrade.
Jim, Dave:
All the frontends were rebuilt against RCG TAG 3.0.3 on Friday afternoon before the shutdown. The build started with an empty H1.ipc file to clean out any unused IPC chans. After all the frontends were restarted today, they are now running with 3.0.3
Infrastructure Restart
Richard, Carlos and Jim:
The CDS infrastructure was restarted between 7:00 and 8:30. This included networking, DC power, timing, DNS, DHCP, NTP, NFS, and auth.
Front End Startup
Richard, Jim, Dave:
h1boot was started. Then h1psl0 was started as a non-Dolphin FEC. This permitted the DAQ to be restarted (it needs a running front end). Then the remaining non-Dolphin MSR FECs were started. Then Richard started the EX, EY, MX and MY FECs. Finally we started the MSR Dolphin machines. All started up once the fabric was complete.
h1lsc0 reported timing issues. The timing card in this IO chassis is independently powered; I activated this DC supply and the system started normally.
h1pemmy needed one more IOP restart, as is expected for these non-IRIG-B units.
During the startup of the SWWDs, all MSR Dolphin channels were found to be not working. We tracked this down to the Dolphin switches not being powered up; this was easily resolved.
Eagle-eyed Jim noticed the system time on h1psl0 was way off (by 7 hours). We assume this was because it was the first machine to be started soon after h1boot (which perhaps had not yet NTP-synced). We manually set the time and handed control back to NTPD.
EPICS Gateways
Dave:
All the EPICS gateway processes on cdsegw0 were restarted using the start.all script. This started too many gateways, resulting in duplicate data paths; the redundant gateways were removed and the script was corrected.
h1hwinj1
Jim, Dave:
We started the PSINJECT process on h1hwinj1. Again the SL firewall prevented EPICS CA access, which we re-remembered and fixed. CW injection to PCAL is running.
Slow Controls SDF
I started Jonathan's SDF monitor pseudo-target on h1build.
Vacuum Controls and FMCS cell phone notification
I got the cell phone alarm texter running again on cdslogin.
PI Model changes
Tega, Dave
A new PI_MASTER.mdl file was created at 15:10 Friday, which just missed the 3.0.3 build cut-off time. We recompiled and restarted the four models which use this file (h1omcpi, h1susitmpi, h1susetmxpi and h1susetmypi). A DAQ restart was also required.
NDS processes not giving trend data
Jeff, Jim, Dave
Here was an interesting one: the NDS processes were giving real-time data but no archived trend data. The cronjob that keeps the jobs directory in check by deleting all files older than one day today cleaned out the directory and then deleted the jobs directory itself. Re-creating the directory and stopping the cronjob from erasing it again fixed the problem.
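A plausible shape of this bug (the actual cronjob command wasn't recorded here, so this is a guessed reconstruction): a `find` without `-type f` will eventually match the jobs directory itself once its mtime ages past a day, whereas restricting the match to plain files leaves the directory alone:

```shell
# Hypothetical reconstruction -- the real cronjob command was not recorded.
# The buggy form, without -type f, eventually matches the directory itself:
#   find "$JOBS" -mtime +1 -exec rm -rf {} \;
# The safe form deletes only plain files older than one day:
JOBS=$(mktemp -d)                     # stand-in for the nds jobs directory
touch -d '2 days ago' "$JOBS/stale"   # an "old" job file
touch "$JOBS/fresh"                   # a recent job file
find "$JOBS" -type f -mtime +1 -delete
ls "$JOBS"                            # the directory survives with only "fresh"
rm -rf "$JOBS"
```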
h1tw0 raid issues
Carlos, Ryan
h1tw0 did not like being power cycled, and the new RAID controller card stopped working again. It looks like we are going to have to recreate the RAID again tomorrow.
DAQ instability
Dan, Jim, Dave
Sadly, we leave the system with an unstable h1fw0. Earlier in the day h1fw1 was unstable, then h1fw0 became very unstable. Dan did some QFS diagnostics and I did some NFS diagnostics; we cannot see any reason for the instabilities. All NFS/disk access is well within bandwidth, with no indication of packet/re-transmission errors.
In the evening I started the camera copy software, which sends digital camera centroid data from the corner station to the ALS models at the end stations.
Started up the 'lhoepics' remote-MEDM/remote-EPICS server this morning, noticing it had been overlooked or de-prioritized during the startup Monday.
We saw the noise at the X-End ESD LL again at ~1.28 MHz, 15 mV pk-pk. Rich A. came out to have a look; we believe the noise to be real (a real oscillation in the hardware).
On the LR drive, we also saw the spectrum glitching (bouncing up and down) when driving at 15.5 kHz; it goes away with no drive signal (note we have not yet tried other drive frequencies). Some beat making a low-frequency component?
The source of the 1.2 MHz oscillation was identified by opening the spare chassis and looking for marginally stable op-amp stages.
The marginally stable stage is U6 on page 7 of D1500016. It can be made to oscillate at ~600 kHz or ~1.2 MHz. The stage is configurable via the pole/zero bypass bit.
When, for example, the H1:SUS-ETMX_BIO_L3_UL_STATEREQ control word is set to 2, the stage has a pole at 2.2 Hz. This is the normal low-noise configuration. In this configuration there is no 1.2 MHz oscillation.
When this control word is set to 1, the stage is nominally a unity-gain stage. In this configuration some channels (like UL, UR, LL and LR) have a Q of >5 at 1.2 MHz and can be induced to freely oscillate. This oscillation may be damped with a 30 pF capacitor across R21.
As this oscillation is not a problem in the low-noise configuration, no changes will be made. Testing of PI channels should be performed with the H1:SUS-ETMX_BIO_L3_UL_STATEREQ control word set to 2.
It occurred to me that the hysteresis of the PZT might be something I could overcome by dithering the PZT around the values that were restored when we brought the system up.
I started the dither at +/-5000, then dithered at +/-1000, then +/-100, then +/-10, then +/-1, with 20 second sleeps between each change.
I dithered both pitch and yaw.
When the beam from the PSL was restored, the IMC REFL beam was completely on the camera, and the IMC locked (though at low value, WFS are off).
I've uploaded the executable file that will automatically dither the pzt to the OPS wiki, under the page name "pzt dither to recover alignment," and copied the executable into the userapps/release/isi/h1/scripts directory, file name 20160606_pzt_dither.txt.
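The dither sequence described above (shrinking amplitudes with 20-second pauses, applied to both pitch and yaw) can be sketched as below. The channel-write function and channel names here are placeholders of my own invention; the uploaded script (20160606_pzt_dither.txt) is the authoritative version:

```python
import time

def caput(channel, value):
    """Placeholder for an EPICS channel write (e.g. pyepics caput)."""
    print(f"{channel} -> {value:+.1f}")

def dither(channel, center, amplitudes=(5000, 1000, 100, 10, 1), pause=20):
    """Dither a PZT channel around its restored value with shrinking amplitude."""
    for a in amplitudes:
        caput(channel, center + a)
        time.sleep(pause)
        caput(channel, center - a)
        time.sleep(pause)
    caput(channel, center)  # finish at the restored value

# Applied to both pitch and yaw (channel names are hypothetical):
# dither("H1:PSL-PZT_PIT_OFFSET", restored_pit)
# dither("H1:PSL-PZT_YAW_OFFSET", restored_yaw)
```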
Summary:
After the test on CW injections (LHO alog 27409), I decided to revisit the ETMX Pcal actuation function and inverse actuation filter because I had some concern that the new actuation function was not as accurate as I initially desired. Following a procedure similar to LHO alog 27176, I created new filters and obtained an inverse actuation function that is within 5% in magnitude and 5 degrees in phase up to 2 kHz, and within 5% and ~10 degrees in phase up to 3 kHz. The new inverse actuation filters are loaded into the PINJ_TRANSIENT bank. Waveforms using the inverse actuation filters should assume a time advance of 240 usec. Attached are the actuation function files, one without the 240 usec delay and one with the 240 usec delay.
Details:
In the same way as LHO alog 27176, I created new AI analog and AI digital approximations. The new approximations are actually not very good by themselves, but when used in conjunction with all of the filters in the PINJ_TRANSIENT bank, they accurately reproduce the true Pcal actuation function to within 5% in magnitude and 5 degrees in phase up to 2 kHz.
Changes:
In addition, I used the Matlab quack3 function to produce SOS filters to be copied into Foton. I found that entering the values as zpk in Foton did not reproduce the Matlab results. A transfer function was exported from Foton and loaded into Matlab for detailed comparison; no significant differences were found.
Attached are figure files showing the comparison of the AI analog/digital approximations, comparison of the approximate Pcal actuation function to the real actuation function, comparison of approximate inverse actuation function to real inverse actuation function, and the impulse response of the inverse actuation filters. Also attached is the actuation function with and without a 240 usec delay for use with the CW hardware injections.
Attached is a comparison of the first try inverse actuation function (LHO alog 27368) and this version that fixes some of the issues described above.
Rana, Evan
We measured the SRM-to-SRCL TF today to find the frequency and Q of the internal mode. Our hypothesis is that the thermal noise from the PEEK screws used to clamp the mirror into the mirror holder might be a significant contribution to DARM.
The attached Bode plot shows the TF. The resonance frequency is ~3340 Hz and the Q ~150. Our paper-and-pencil estimate is that this may be within an order of magnitude of DARM, depending upon the shape of the thermal noise spectrum. If it's steeper than structural damping, it could be very close.
"But isn't this ruled out by the DARM offset / noise test ?", you might be thinking. No! Since the SRCL->DARM coupling is a superposition of radiation pressure (1/f^2) and the 'HOM' flat coupling, there is a broad notch in the SRCL->DARM TF at ~80 Hz. So, we need to redo this test at ~50 Hz to see if the changing SRCL coupling shows up there.
Also recall that the SRCLFF is not doing the right thing for SRM displacement noise; it is designed to subtract SRC sensing noise. Stay tuned for an updated noise budget with SRM thermal noise added.
** See https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27455 for pictures of the SRM composite mass.
The peak is also visible in the DARM spectrum. In this plot the peak is at 3335 instead of 3340 Hz. Why is there a 1.5% frequency shift?
Here are projected SRM thermal noise curves for structural and viscous damping.
Given a typical SRC coupling into DARM of 1×10^-4 m/m at 40 Hz, 20 W of PSL power, and 13 pm of DARM offset (25019), this would imply a noise in DARM of 1×10^-20 m/Hz^(1/2) at 40 Hz if the damping is structural.
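As a sanity check on that arithmetic: the quoted numbers imply an SRM thermal displacement of about 1×10^-16 m/Hz^(1/2) at 40 Hz (since 1×10^-20 / 1×10^-4 = 1×10^-16), and under structural damping the displacement spectrum falls as f^(-1/2). A quick sketch (the thermal-noise level is inferred from the numbers above, not an independent measurement):

```python
coupling = 1e-4     # SRC -> DARM coupling at 40 Hz (m/m), from the log
darm_noise = 1e-20  # implied DARM noise at 40 Hz (m/rtHz), from the log
srm_thermal_40 = darm_noise / coupling   # ~1e-16 m/rtHz at 40 Hz

# Structural damping: displacement noise scales as f**-0.5
def srm_thermal(f):
    return srm_thermal_40 * (f / 40.0) ** -0.5
```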
When I modelled the optics in https://dcc.ligo.org/LIGO-T1500376, and in particular the surrogate SRM, I had assumed the optic was bonded. After looking again earlier with Rana and Betsy, I realised it is held with 2 set screws (PEEK?) on the barrel at 12 o'clock and two line contacts at 4 and 8 o'clock. See https://dcc.ligo.org/LIGO-D1200886.
The previous bonded model for the SRM surrogate (I believe) had a first mode predicted around 8 kHz. However, from a quick model I ran today (with the set screws etc.) the first mode appears to be around 3400 Hz. The mode is associated with the optic held by the PEEK screws. (I was doing the model over remote desktop, so I will need to check it again when I get a better connection; more to follow on this. I will also post the updated model once I get back to Caltech.)
The ~3340Hz peak is also clearly visible in the PDA/PDB x-correlation spectrum. See alog 26345.
A couple of comments on this topic:
Danny, Matt (Peter F remotely)
Due to the issues currently seen at LHO, we were asked how the LLO SRM surrogate was put together and if we could add to the alog for a record of the process. The easiest way is to do it via photos (which we have of the assembly process).
IMG_1462....There are only two setscrews that hold the optic in place. They can be seen being put in place below, in the "cup" that (eventually) holds the optic. I'm not sure of the material, but Peter F's speculation is that "I think those set screws must be the carbon-loaded PEEK type. The only other option I can think of for a black set screw would be carbon-steel, and it surely isn't that."
IMG_1455...Here you see the three main parts: the optic, the "cup" that the optic goes into, and then the main mass the cup goes in. Note in the "cup" you see the two raised parts at around 4 and 8 o'clock that the setscrews 'push' the optic onto. So it's not 'really' a three-point contact; it's 2 points (set screws) and 2 lines (in the holder).
IMG_1466...Here is the optic going into the cup making sure the fiducial on the optic lines up with the arrow on the cup
IMG_1470.....Optic now in the cup, doing up the setscrews that hold it in place. I can't remember how much we torqued them (we only did it by hand). But as Peter F again speculated, perhaps we just did the setscrews up tighter than LHO.
IMG_1475....Flipping the cup (with the optic in it) over and placing in main mass
IMG_1478....Cup now sitting in Main mass (without screws holding cup into main mass)
IMG_5172......the SRM surrogate installed into the suspension
It looks like there might be a mode in the L1 SRM at 2400 Hz. See the attached plot of SRCL error signal from January, along with DARM and the coherence. There is also a broad peak (hump) around 3500 Hz in SRCL, with very low coherence (0.04 or so) with DARM. The SRCL data has been scaled by 5e-5 here so that it lines up with DARM at 2400 Hz.
Here are two noise budgets showing the expected DARM noise assuming (1) structural (1/f^(1/2)) SRM damping and (2) hyperstructural (1/f^(3/4)) SRM damping. This hyperstructural damping could explain the DARM noise around 30 to 40 Hz, but not the noise at 50 Hz and above.
I also attach an updated plot of the SRCL/DARM coupling during O1, showing the effect of the feedforward on both the control noise and the cavity displacement noise (e.g., thermal noise). Above 20 Hz, the feedforward is not really making the displacement noise coupling any worse (compared to having the feedforward off).
Note that the PEEK thermal noise spectrum along with the SRCL/DARM coupling is able to explain quite well the appearance of the peak in DARM.
I am attaching noise budget data for the structural case in 27625.