I have analyzed the IM OSEM signals from the May 19th ISI trip, during which both the ISI and the IM optics were tripped, and have injected a step function into TEST PIT and TEST YAW to measure Q values.
Power Spectra:
I looked at the length signals for IM1-4; the IM4 signal shows clean peaks for yaw and its 1st harmonic, and for length and its 1st harmonic. All other IMs have peaks that are lower than the peaks in IM4.
Q values:
I measured the length and yaw Q values for each IM; my original post is alog 27327. In that post I list pitch and yaw; however, the resonant frequency I measured that is labeled as pitch is actually length.
IM4 shows the highest Q values for both length and yaw; the IM1-3 Q values are all between 54% and 95% of the IM4 Qs.
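For reference, below is a minimal Python sketch of one way a Q can be estimated from the ringdown that follows a step injection, by fitting an exponential to the envelope of the decaying oscillation. The channel handling and fit details are illustrative assumptions, not necessarily the exact procedure used for the numbers reported here.

# Minimal sketch: estimate the Q of a lightly damped mode from a ringdown time
# series x(t) ~ A*exp(-pi*f0*t/Q)*cos(2*pi*f0*t + phi), sampled at fs Hz.
import numpy as np
from scipy.signal import hilbert
from scipy.optimize import curve_fit

def estimate_q(x, fs):
    x = np.asarray(x, dtype=float)
    x = x - np.mean(x)
    # Resonant frequency from the dominant FFT peak
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    f0 = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
    # Decay time from an exponential fit to the analytic-signal envelope
    t = np.arange(len(x)) / fs
    envelope = np.abs(hilbert(x))
    decay = lambda t, a, tau: a * np.exp(-t / tau)
    (a, tau), _ = curve_fit(decay, t, envelope, p0=(envelope.max(), t[-1] / 2.0))
    # Amplitude decays as exp(-pi*f0*t/Q), so Q = pi*f0*tau
    return f0, np.pi * f0 * tau

Dividing each IM's Q by the corresponding IM4 Q then gives the percentages quoted above.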
Time Series:
The time series of the individual OSEM signals reveal a difference in oscillation amplitude when comparing the four OSEMs on each optic.
I measured the oscillation period for each OSEM on each IM, but the period was not very enlightening. I also collected the oscillation amplitude of each OSEM near the end of the time the optic was tripped. The oscillation amplitudes proved to be very revealing: the OSEMs on IM4, the IM that has never shown an alignment shift, behave very differently from the OSEMs on IM1-3, which have all had significant alignment shifts.
Details / Summary of power spectra for IM1-4: attachment 1
| IM1 length | 1.21 |
| IM2 length | 1.43 |
| IM3 length | 1.36 |
| IM4 length | 1.50 |
| IM1 yaw | 1424 |
| IM2 yaw | 1087 |
| IM3 yaw | 1270 |
| IM4 yaw | 2009 |
Details / Summary of oscillation amplitude data for IM1-4: attachment 2
IM4: no alignment shifts observed in my IM investigation, highest average percent of max oscillation at 94%
IM2: largest and most consistent alignment shifts observed, has the OSEM with the lowest percent of max OSEM oscillation at 70%
IM3: second largest alignment shifts, has the lowest average percent of max oscillation at 81%
IM1: smallest alignment shifts, one OSEM at 92% of max oscillation, others at 76% and 82%
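As an illustration of the figure of merit quoted above, here is a trivial Python sketch of the percent-of-max calculation across the four OSEMs on one IM; the amplitude values are placeholders, not measured data.

# Minimal sketch of the "percent of max oscillation" comparison across the four
# OSEMs on one IM. The peak-to-peak amplitudes below are placeholders.
osem_pkpk = {"UL": 0.92, "UR": 1.00, "LL": 0.76, "LR": 0.82}  # arbitrary units

max_amp = max(osem_pkpk.values())
percent_of_max = {name: 100.0 * amp / max_amp for name, amp in osem_pkpk.items()}
average_percent = sum(percent_of_max.values()) / len(percent_of_max)

for name, pct in percent_of_max.items():
    print(f"{name}: {pct:.0f}% of max")
print(f"average: {average_percent:.0f}%")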
TITLE: 06/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY: Spent the entire shift getting through Initial Alignment. Major by-hand alignment required at every step of IA. Things are still drifting as a result of the temperature excursion during the power outage. But we made it... through IA, that is.
LOG: 8 hours of aligning
We were able to engage vertex angular loops and hard arm loops without issue. The soft arm loops pulled the recycling gain up but made the sideband powers drop. This may indicate the PRM pointing loop is fighting the soft loops.
Violin modes are still high (about where they were on Friday) and will need attention if we want to engage DCPD whitening.
One of the problems we encountered was that the X arm alignment was so far off when we started that the ITM camera image was clipped on the digital aperture, and the servo appeared to be running but didn't bring the spot to the reference position. Travis moved the ITM by hand to fix this.
DAQ frame writers
The frame writer instability cleared up overnight without us doing anything. Today fw1 was more unstable than fw0 (one fw0 restart, three fw1 restarts). Still a far cry from fw0 restarting every few minutes yesterday afternoon.
DAQ EDCU connection to digital video channels
After the power outage the DAQ EDCU would not connect to any digital video channels. Today I remembered that the EDCU only uses EPICS gateways to access IOCs on other subnets. The gateway between the H1AUX lan and the H1FE lan was not in the start script. I started the gateway and corrected the script. The only channels not connecting to the EDCU are now the end station HWS channels.
HWS code at end stations
Nutsinee, Jim, Dave:
When we tried to start the HWS EY code ('python run_HWS.py') on h1hwsey, the EPICS IOC started with both ETMY and ETMX channels (though only the ETMY channels had valid data). This appears to be a problem only with the new Python-based system at EY; EX only runs ETMX channels. We'll continue this tomorrow.
SUS PI work
Terra, Rich, Jeff, Carl, Betsy, Tega, Ross, Dave
Model changes were made to h1susitmx, h1susitmy, h1susbs, h1susitmpi, and h1omcpi. DAC allocations were changed, SHMEM IPC was added for binary data exchange, and a new quad master and new C code were installed. There were several rounds of model and DAQ changes.
ETMY HWWD
Dave, Jim
We are unsure whether the hardware watchdog unit at ETMY is working after the restoration of power. One sure way to check is to power-cycle the unit and monitor the LEDs for the startup sequence, but that would power down the ISI coil drivers. We can check whether the code is running by disconnecting one of the monitor cables and checking for the LED error. We'll test some more on the test stand before risking ETMY.
DTS
Jim
The DTS was powered up and made fully functional. Interestingly, like H1, some front ends wound up with the incorrect time because the boot server had not synced up. We'll remember to check on that in the future (Tega got compile errors on x1lsc0 because of this).
Attached are the usual long trends of the charge on ETMx and ETMy with the latest data appended. The ETMx charge accumulation is still decaying towards 0, but not there yet. The ETMy charge is also near zero in almost all 4 quadrants. Kissel and I decided we will leave the signs of the ESD bias as they are for another week. It doesn't look like the ETMx data from today was affected too much with the ISI tripped this morning (the SUS was still damped).
J. Kissel, R. Abbott, T. Hardwick, C. Blair, D. Barker, and J. Betzwieser
After a good bit of chasing our tails and simulink model juggling, we've confirmed that the new infrastructure necessary for the new ITM ESD drivers (see E1600064) is ready to go. I'll post more details and screens later, but below is a summary of the things we touched during the day's installation:
- Several top-level models are affected by this change:
  /opt/rtcds/userapps/release/sus/h1/models
    h1susauxb123.mdl <-- for the new voltage monitor channels
    h1susbs.mdl <-- in charge of distributing the BIO signals
    h1susitmx.mdl <-- integration into the control system
    h1susitmy.mdl <-- integration into the control system
- We also, sadly, needed to create a new QUAD library part, because the ITM ESD driver infrastructure is that much different from the ETM ESD driver system; that now lives here:
  /opt/rtcds/userapps/release/sus/common/models/QUAD_ITM_MASTER.mdl
  In addition, we added a new function called "ESD_ITM" to /opt/rtcds/userapps/release/sus/common/src/CD_STATE_MACHINE.c, which is called in the BIO block of the new QUAD library part.
- In the clean-up of models, we also re-installed all DAC cards directly from the CDS_PARTS library, which means that a few of the MEDM macro descriptions needed updating -- plus the new macros for the ITM ESD drivers themselves. Those are in
  /opt/rtcds/userapps/release/sus/common/medm
    susitmx_overview_macro.txt
    susitmy_overview_macro.txt
- Both the new BIO block and QUAD library parts needed entirely new associated MEDM screens, which are now separate offshoots of the ETM screens:
  /opt/rtcds/userapps/release/sus/common/medm/quad/
    SUS_CUST_QUAD_ITM_OVERVIEW.adl
    SUS_CUST_QUAD_ITM_BIO.adl
- I updated the sitemap to call these new SUS_CUST_QUAD_ITM_OVERVIEW screens, passing the appropriate macro files for each optic. The sitemap has been committed here:
  /opt/rtcds/userapps/release/cds/h1/medm/SITEMAP.adl
More details about this install can be found in G1601304.
During the power outage the controller at Mid-Y failed its transition from facility power to generator power, so Kyle borrowed the controller from Y-End to power the ion pump at Y-Mid (IP9). Today I attempted to swap the DUAL-type (borrowed) controller with a Gamma-type controller, but the Gamma controller, though brand new, failed. By mistake I then installed the original controller that Kyle had removed; that controller decided to work now without problems, so I left it installed. I returned the borrowed controller to the Y-End station to power IP11, but it turns out that the signal wire connector is falling apart and needs to be fixed, which will be done at the next opportunity.
The new Gamma controller was bench tested and its fuses checked; it continues to fail, outputting only -50 VDC, which is not good. It will be sent out for warranty repair.
Pulled cable from the annulus ion pump controller on BSC5 all the way to the rack on the south wall of the X-End VEA. At the next opportunity we'll have the cable landed and terminated.
Michael, Krishna
We are investigating the electrostatic/capacitive actuator on BRS-Y by driving it with a small 50 mV modulation at 20 mHz and at different Bias voltages. BRS sensor correction is turned off and should not be turned ON during the test (winds are low). Avoid going near the instrument, if possible.
Looked into the RMS binary reset for the PUM ETMX chassis.
1. On the PUM Chassis, disconnected the DB37 binary input cable and grounded Pin 18 on the chassis side. This untripped the RMS watchdog.
2. Looked at the output of the binary output interface chassis. Did not see any voltage change when I cycled through the reset in MEDM. This was looking at Pin 18.
3. Spoke to Dave and determined the output for the RMS reset on the binary output interface chassis is on the second DB37 connector on Pin 2.
4. Tested our cable going from the binary output interface chassis to the PUM chassis and confirmed it is a one-to-one pin layout.
It looks like we need a special cable with Pin 2 on the male side tied to Pin 18 on the female side. For now I reconnected all cabling and left all original electronics in place.
EY is reported to be operational, so I will compare cabling with that of EX. As of now, EX RMS reset is still not operational.
Filiberto Clara
Worked on the DBB 200W beam path alignment this morning. One side effect of the power outage that I hadn't considered was that it reset the DBB alignment PZTs to zero. This is actually a good thing as these PZTs need to be as close to zero as possible, and were rather far off in vertical alignment due to the previous bad DBB alignment. This reset essentially forced me to completely realign the beam to the DBB PMC, but with the added benefit of not having to tailor my alignment to also minimize the PZT voltage; it's already at zero.
For this alignment I scan the DBB PMC by ramping its PZT at 10 Hz and look at the output of the DBB PMC TPD on an oscilloscope. Since a single scan of the PZT covers several FSRs of the DBB PMC, the oscilloscope output shows the different higher-order mode peaks that resonate in the PMC as the PZT ramps. Alignment is then done by identifying and reducing the TEM10 and TEM01 peaks. At first I didn't really get anywhere and the beam transmitted by the DBB PMC looked really ugly no matter what I did to the alignment, so I checked the distance between the 2 mode matching lenses. It should be ~168mm (this is where I left it the last time I worked on the DBB) but was more like 187mm (not sure why this moved; I didn't see the change documented in the alog). I moved DBB_MML2 to make the distance 168mm, which then gave me a signal through the DBB PMC that I could use for alignment (a transmitted beam that was somewhat round). Horizontal and vertical alignment is now much better than before; the alignment peaks have almost vanished.
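As a side note, the peak identification in such a scan can also be done offline. Below is a minimal Python sketch (using scipy.signal.find_peaks) of ranking the resonance peaks in a recorded TPD trace relative to the tallest (assumed TEM00) peak; the data source, threshold, and peak-spacing values are illustrative assumptions, since the actual work was done by eye on the oscilloscope.

# Minimal sketch: pick resonance peaks out of a recorded DBB PMC PZT-scan trace
# and report each peak's height relative to the tallest (assumed TEM00) peak.
import numpy as np
from scipy.signal import find_peaks

def peak_summary(tpd, min_height_frac=0.01, min_separation=50):
    # tpd: 1-D array of the DBB PMC transmission PD signal during one PZT ramp
    tpd = np.asarray(tpd, dtype=float)
    peaks, props = find_peaks(
        tpd,
        height=min_height_frac * tpd.max(),   # ignore tiny bumps (assumed threshold)
        distance=min_separation,              # minimum peak spacing in samples
    )
    heights = props["peak_heights"]
    order = np.argsort(heights)[::-1]          # tallest peak first
    tem00 = heights[order[0]]
    # Alignment work aims to shrink the TEM01/TEM10 peaks relative to TEM00;
    # mode-matching work targets the TEM02/TEM20 family.
    return [(int(peaks[i]), heights[i] / tem00) for i in order]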
I then worked on the mode matching by moving one of the mode matching lenses slightly (~0.3mm; this can be done with the DBB MEDM screen. Note to self: positive movement on the MEDM screen moves the lenses AWAY from DBB_M1), realigning to the DBB PMC (since moving lenses changes alignment), then assessing whether or not this change improved the mode matching peaks in the PZT scan. Lather, rinse, repeat. All told I moved DBB_MML1 by +1.6mm and DBB_MML2 by +4.0mm. This appeared to improve the mode matching peaks significantly, and the beam transmitted by the DBB PMC is much more round (although still slightly oblong in the vertical direction to my eye). Even with these apparent improvements the DBB PMC would not lock, so there is still alignment/mode matching work to do (or something else going on that I haven't considered/worked out yet). I will continue this work in the mornings as commissioning allows.
It should be noted that all this work was done solely on the 200W beam path. Due to the resetting of the DBB PZTs and the subsequent realignment of the 200W path, the 35W path into the DBB will need significant realignment. At this time the 35W path into the DBB is not usable. I will realign/mode match the 35W beam path once the 200W path is done (as the 35W alignment depends on the 200W path being aligned/mode matched).
Chiller filter replacements, planned power outage, re-calibration of the flow sensor pulse calibration, etc.
~1000 - 1430 hrs. local (the 2001-era pump shorted after power-cycling its power cord). Work is complete. Pump cart removed and HAM10 is in its as-found (nominal) state.
(All time in UTC)
14:57 Jeff K. running charge measurement both ETMs.
15:07 Joe to LVEA checking batteries on fork lifts, etc.
15:21 Gerado to MY. WP#5920
15:35 Fil+Ed to CER
15:43 Chris S. taking pest control to LVEA (~20 min)
15:48 HEPI weekly inspection at End stations (Hugh)
15:50 Ken installing lightbulbs in CER
15:52 Kristina+Karen to LVEA
16:07 Chris+Mich taking pest control to end stations
16:12 Joe out
16:21 Jason to PSL working on diagnostic breadboard
16:33 Nutsinee running CO2 RS script (~20 mins).
16:35 Ken opened up the roll-up door between the high bay and LVEA to bring a lift through.
16:39 Fil to EX working on PUM chassis
17:09 Portable truck on site
17:29 Jeff B. to EX
17:35 Kyle to HAM10 (WP#5923)
Ed installing Newtonian chassis in CER (TCS rack)
17:43 Joe to LVEA
17:48 Gerado to EX (WP#5922)
17:57 Kristina+Karen to End stations
Carl+Ross+Terra to EY (PI ESD work)
18:02 Jeff B back
18:17 Chris+Mich out
18:25 Hugh back. HEPI Maintenance done.
Joe back
18:50 Jason out
18:53 Karen leaving EY
18:55 Gerado out of EX
19:15 Kyle out
19:22 Kristina leaving EX
19:40 Terra+ out
Fil to EX
19:44 Terra+ to EX
19:55 Dave restarting h1susb123 model and iop, ITM PI model, and DAQ
20:01 Ed to MX
20:04 Kyle to HAM10 (ion pump)
20:26 Gerado to LVEA
20:32 Fil back
20:34 Chris+Mich to LVEA (clean room inspection)
20:40 Gerado out
20:44 Rich to EX (ESD work)
20:55 Mich to LVEA
21:00 Mich out
21:11 Terra+ out
14:17 Jeff Bartlett is taking over until 4.
Title: 06/18/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO unlocked. Recovering from the power outage & maintenance window
Commissioning: Commissioners working on IFO.
Outgoing Operator: Nutsinee
Activity Log: All Times in UTC (PT)
21:20 (14:20) Take over for Nutsinee
21:25 (14:25) Jeff K. & Rich A. – Going into the CER to work on Binary IO for ITM ESD
21:55 (14:55) Jeff K & Rich A. – Going into LVEA to Beer Garden area
22:00 (15:00) Start initial alignment
22:00 (15:00) Kyle – Is out of the LVEA
22:01 (15:01) Dave & Jim – Going to End-Y CER
22:01 (15:01) Jeff K & Rich A. – Out of LVEA and back into CER
22:06 (15:06) Jeff K. & Rich A. – Out of the CER
22:50 (15:50) Jeff K. – Restarting BS model
23:00 (16:00) Turn over to Travis
End of Shift Summary:
Title: 06/07/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: Sheila
Incoming Operator: Travis
Shift Detail Summary: Working on initial alignment with Sheila after end of maintenance window.
RCG 3.0.3 upgrade.
Jim, Dave:
All the frontends were rebuilt against RCG TAG 3.0.3 on Friday afternoon before the shutdown. The build started with an empty H1.ipc file to clean out any unused IPC channels. After all the frontends were restarted today, they are now running 3.0.3.
Infrastructure Restart
Richard, Carlos and Jim:
The CDS infrastructure was restarted between 7:00 and 8:30. These include networking, DC power, timing, DNS, DHCP, NTP, NFS, auth.
Front End Startup
Richard, Jim, Dave:
h1boot was started. Then h1psl0 was started as a non-dolphin FEC. This permitted the DAQ to be restarted (it needs a running front end). Then the remaining non-dolphin MSR FECs were started. Then Richard started the EX, EY, MX and MY FECs. Finally we started the MSR dolphin machines. All started up once the fabric was complete.
h1lsc0 reported timing issues. The timing card on this IO chassis is independently powered; I activated this DC supply and the system started normally.
h1pemmy needed one more IOP restart, as is expected for these non-IRIG-B units.
During the startup of the SWWDs, all MSR dolphin channels were found to be not working. We tracked this down to the Dolphin switches not being powered up; this was easily resolved.
Eagle-eyed Jim noticed the system time on h1psl0 was way off (by 7 hours). We assume this was because it was the first machine started soon after h1boot (which perhaps had not yet NTP-synced). We manually set the time and handed control back to NTPD.
EPICS Gateways
Dave:
All the EPICS gateway processes on cdsegw0 were restarted using the start.all script. This started too many gateways, resulting in duplicate data paths; the redundant gateways were removed and the script was corrected.
h1hwinj1
Jim, Dave:
We started the PSINJECT process on h1hwinj1. Again the SL firewall prevented EPICS CA access, which we re-remembered and fixed. CW injection to PCAL is running.
Slow Controls SDF
I started Jonathan's SDF monitor pseudo-target on h1build.
Vacuum Controls and FMCS cell phone notification
I got the cell phone alarm texter running again on cdslogin.
PI Model changes
Tega, Dave
A new PI_MASTER.mdl file was created at 15:10 Friday, which just missed the 3.0.3 build cut-off time. We recompiled and restarted the four models which use this file (h1omcpi, h1susitmpi, h1susetmxpi and h1susetmypi). A DAQ restart was also required.
NDS processes not giving trend data
Jeff, Jim, Dave
Here was an interesting one: the NDS processes were giving real-time data but no archived trend data. The cronjob that keeps the jobs directory in check by deleting all files older than one day today cleaned out the directory and then deleted the jobs directory itself. Re-creating the directory and stopping the cronjob from erasing it again fixed the problem.
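A minimal Python sketch of a safer cleanup, assuming the goal is simply to prune files older than one day while never touching the jobs directory itself (the path below is a placeholder, not the actual NDS jobs directory):

# Remove files older than one day from the jobs directory without ever
# removing the directory itself (the path is a placeholder).
import time
from pathlib import Path

JOBS_DIR = Path("/path/to/nds/jobs")   # placeholder path
MAX_AGE_S = 24 * 3600

cutoff = time.time() - MAX_AGE_S
for entry in JOBS_DIR.iterdir():
    # Only delete regular files; leave the directory (and any subdirectories) alone.
    if entry.is_file() and entry.stat().st_mtime < cutoff:
        entry.unlink()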
h1tw0 raid issues
Carlos, Ryan
h1tw0 did not like being power cycled, and the new RAID controller card stopped working again. It looks like we are going to have to recreate the RAID again tomorrow.
DAQ instability
Dan, Jim, Dave
Sadly, we leave the system with an unstable h1fw0. Earlier in the day h1fw1 was unstable, then h1fw0 became very unstable. Dan did some QFS diagnostics and I did some NFS diagnostics; we cannot see any reason for the instabilities. All NFS/disk access is well within bandwidth with no indication of packet/re-transmission errors.
In the evening I started the camera copy software which sends digital camera centroid data from the corner station to the ALS models at the end stations.
Started up the 'lhoepics' remote-MEDM/remote-EPICS server this morning, noticing it had been overlooked or de-prioritized during the startup Monday.
We saw the noise at the X-End ESD LL again at ~1.28 MHz, 15 mV pk-pk. Rich A. came out to have a look; we believe the noise is real (a real oscillation in the hardware).
On the LR drive, we also saw the spectrum glitching (bouncing up and down) when driving at 15.5 kHz; it goes away with no drive signal (note we have not tried other drive frequencies yet). Perhaps a beat producing a low-frequency component?
The source of the 1.2 MHz oscillation was identified by opening the spare chassis and looking for marginally stable op-amp stages.
The stage that is marginally stable is U6 on page 7 of D1500016. It can be made to oscillate at ~600 kHz or ~1.2 MHz. The stage is configurable with the pole/zero bypass bit.
When, for example, the H1:SUS-ETMX_BIO_L3_UL_STATEREQ control word is set to 2, the stage has a pole at 2.2 Hz. This is the normal low-noise configuration. In this configuration there is no 1.2 MHz oscillation.
When this control word is set to 1, the stage is nominally a unity-gain stage. In this configuration some channels (like UL, UR, LL and LR) have a Q of >5 at 1.2 MHz and can be induced to freely oscillate. This oscillation may be damped with a 30 pF capacitor across R21.
As this oscillation is not a problem in the low-noise configuration, no changes will be made. Testing of PI channels should be performed with the H1:SUS-ETMX_BIO_L3_UL_STATEREQ control word set to 2.
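A minimal Python sketch (assuming pyepics is available) of verifying this before PI testing; only the UL channel is named above, so the other quadrant channel names are assumed by analogy:

# Confirm the ETMX L3 BIO state-request words are in the low-noise
# configuration (value 2) before PI testing. Only the UL channel name is
# given above; the other quadrant names are assumed by analogy.
from epics import caget

for quad in ("UL", "UR", "LL", "LR"):
    pv = f"H1:SUS-ETMX_BIO_L3_{quad}_STATEREQ"
    value = caget(pv)
    status = "OK" if value == 2 else "NOT in low-noise state"
    print(f"{pv} = {value} ({status})")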
It occurred to me that the hysteresis of the PZT might be something I could overcome by dithering the PZT around the values that were restored when we brought the system up.
I started the dither at +/-5000, then dithered at +/-1000, then +/-100, then +/-10, then +/-1, with 20 second sleeps between each change.
I dithered both pitch and yaw.
When the beam from the PSL was restored, the IMC REFL beam was completely on the camera, and the IMC locked (though at low value, WFS are off).
I've uploaded the executable file that will automatically dither the pzt to the OPS wiki, under the page name "pzt dither to recover alignment," and copied the executable into the userapps/release/isi/h1/scripts directory, file name 20160606_pzt_dither.txt.
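For illustration, here is a minimal Python sketch of the dither sequence described above (assuming pyepics; the channel names are placeholders and this is not the actual script referenced above):

# Step the PZT offset around its restored value with decreasing amplitude,
# pausing 20 s between changes, for both pitch and yaw. Channel names are
# placeholders, not the ones the real script uses.
import time
from epics import caget, caput

PIT_PV = "H1:PLACEHOLDER_PZT_PIT_OFFSET"   # placeholder channel name
YAW_PV = "H1:PLACEHOLDER_PZT_YAW_OFFSET"   # placeholder channel name

def dither(pv, amplitudes=(5000, 1000, 100, 10, 1), dwell=20):
    center = caget(pv)                      # value restored at power-up
    for amp in amplitudes:
        for offset in (+amp, -amp):
            caput(pv, center + offset)
            time.sleep(dwell)
    caput(pv, center)                       # finish back at the restored value

for pv in (PIT_PV, YAW_PV):
    dither(pv)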
I have copied Evan's new actuation function to the h1hwinj1 directory currently used for CW injections: ~hinj/Details/pulsar/O2test/. I used the one that corrects for the actuation delay: H1PCALXactuationfunction_withDelay.txt. For reference, the uncorrected version (no "_withDelay") sits in the same directory, along with the one we first tried last week: H1PCALXactuationfunction.txt.25may2016. The perl script that generates the command files in.N (N=0-14) has been updated to use "_withDelay" and the command files regenerated. The CW injections have been killed and automatically restarted by monit. Attached are second trends before and after the gap showing that things look about the same, as expected, but there is a small increase in injection amplitude (~5%).
Evan wondered if the ~5% increase in total injection amplitude was dominated by the highest frequency injection or one at lower frequencies. I took a look for this time interval and found that the total amplitude is dominated by the injection at ~1220.5 Hz. Simply comparing spectral line strengths before and after the changeover turned out not to be a robust way to estimate the frequency-dependent ratio of the new to the old inverse actuation function, because some pulsar injections (especially the highest frequency one) are going through rapid antenna pattern modulations during this period. But comparing the new to the old spectral line strengths at the same sidereal time several days later (after power outage recovery) gives robust measures for a sampling of low-, medium- and high-frequency injections:
| Freq (Hz) | "Old" amplitude (before switchover) | New amplitude (4 sidereal days later) | Ratio (new/old) |
| 190.90 | 0.32292 | 0.32211 | 1.00 |
| 849.00 | 60.502 | 62.344 | 1.03 |
| 1220.5 | 299.37 | 318.70 | 1.06 |
| 1393.1 | 207.50 | 224.37 | 1.08 |
| 1991.2 | 28.565 | 32.788 | 1.15 |
Summary:
After the test on CW injections (LHO alog 27409), I decided to revisit the ETMX Pcal actuation function and inverse actuation filter because I had some concern that the new actuation function is not as accurate as I initially desired. Following a procedure similar to LHO alog 27176, I created new filters and obtained an inverse actuation function that is within 5% in magnitude and 5 degrees of phase up to 2 kHz, and within 5% and ~10 degrees of phase up to 3 kHz. The new inverse actuation filters are loaded into the PINJ_TRANSIENT bank. Waveforms using the inverse actuation filters should assume a time advance of 240 usec. Attached are the actuation function files, one without the 240 usec delay and one with the 240 usec delay.
Details:
In the same way as in LHO alog 27176, I created new AI analog and AI digital approximations. The new approximations are actually not very good by themselves, but when used in conjunction with all of the filters in the PINJ_TRANSIENT bank, they accurately reproduce the true Pcal actuation function to within 5% in magnitude and 5 degrees in phase up to 2 kHz.
Changes:
In addition, I used the Matlab quack3 function to produce SOS filters to be copied into Foton. I found that entering the values as zpk in Foton did not reproduce the Matlab results. A transfer function was exported from Foton and loaded into Matlab for detailed comparison; no significant differences were found.
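For illustration, a similar zpk-versus-SOS cross-check can be sketched in Python with scipy.signal (the poles and zeros below are arbitrary placeholders, not the Pcal inverse actuation filter):

# Convert an analog zpk description to digital second-order sections and
# confirm the two frequency responses agree. Poles/zeros are placeholders.
import numpy as np
from scipy import signal

fs = 16384.0                                   # assumed sample rate, Hz
z = [-2 * np.pi * 10.0]                        # placeholder zero (rad/s)
p = [-2 * np.pi * 100.0, -2 * np.pi * 500.0]   # placeholder poles (rad/s)
k = 1.0

zd, pd, kd = signal.bilinear_zpk(z, p, k, fs)  # discretize
sos = signal.zpk2sos(zd, pd, kd)               # convert to SOS form

freqs = np.logspace(0, np.log10(fs / 2), 1000)
_, h_zpk = signal.freqz_zpk(zd, pd, kd, worN=freqs, fs=fs)
_, h_sos = signal.sosfreqz(sos, worN=freqs, fs=fs)

max_mag_dev = np.max(np.abs(np.abs(h_sos) / np.abs(h_zpk) - 1))
print(f"max fractional magnitude deviation: {max_mag_dev:.2e}")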
Attached are figure files showing the comparison of the AI analog/digital approximations, comparison of the approximate Pcal actuation function to the real actuation function, comparison of approximate inverse actuation function to real inverse actuation function, and the impulse response of the inverse actuation filters. Also attached is the actuation function with and without a 240 usec delay for use with the CW hardware injections.
Attached is a comparison of the first try inverse actuation function (LHO alog 27368) and this version that fixes some of the issues described above.