Added the following channels to the exclude list:
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE1_COSGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE1_COSGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE2_COSGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE4_SINGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE4_COSGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE3_COSGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE2_SINGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE4_SINGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE3_SINGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE3_COSGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE4_COSGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE3_SINGAIN
H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE2_SINGAIN
H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE1_SINGAIN
Re the wandering 59 Hz bump first seen around 25 May and reported Jun 1: using the DQ channels, I checked up to 125 Hz and it does not show itself. Maybe the power cycle cleaned it out.
Otherwise, attached are the CPS spectra for all the BSCs:
The ETMX St1 V1 looks much better now. The ITMX St2 H3 is a little better, but the ITMX St1 V1 is quite noisy now. I had better check this tomorrow to see if it changes.
Routine maintenance, but since the motor greasing is an annual event, it warranted a log entry. Of note, many of the zerk fittings would not open and pass grease, so I found one that did and installed it wherever needed. I will open an FRS ticket and purchase better zerks to replace the non-functioning units.
Sheila, Evan, Haocun
We drove the ETM's in pitch and yaw, and took measurements for the HARD loop.
As Evan posted in aLOG 27518, the usual "IN1/IN2" method for FFT-based OLTF measurements was biased because the IN2/EXC coherences were low, so I exported the CLTF data using IN1/EXC (whose coherences were higher) and converted them into OLTFs.
The measurements for both pitch and yaw of CHARD and DHARD are plotted together with the model curves in the attachments:
- The measurements were taken at 2 W of laser power. The blue curves are the modeled DHARD plots at 2 W, and the red or orange dots are the measured values.
- Only values with IN2/EXC coherence higher than 0.4 were used; data points below that threshold were dropped.
- I tried to calculate the uncertainties of the measurements using the method in aLOG 10506, shown as error bars on the plots.
- Number of measurements taken:
CHARD Pitch: 111
CHARD Yaw: 44
DHARD Pitch: 121
DHARD Yaw: 33
We will take more measurements with the same UGF values as the model and do more comparisons.
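The CLTF-to-OLTF conversion and coherence cut described above can be sketched as follows. This is a minimal illustration, not the actual analysis code: it assumes IN1/EXC measures 1/(1+G), and uses a standard coherence-based estimate for the relative magnitude uncertainty.

```python
import numpy as np

def cltf_to_oltf(cltf, coherence, n_avg, coh_min=0.4):
    """Convert IN1/EXC closed-loop TF points to open-loop TF points.

    Assumes IN1/EXC measures 1/(1 + G), so G = 1/CLTF - 1.
    Points with coherence below coh_min are masked out (set to NaN).
    The relative magnitude uncertainty uses the common coherence-based
    estimate sqrt((1 - gamma^2) / (2 * n_avg * gamma^2)).
    """
    cltf = np.asarray(cltf, dtype=complex)
    coherence = np.asarray(coherence, dtype=float)
    oltf = 1.0 / cltf - 1.0
    rel_unc = np.sqrt(1.0 - coherence) / np.sqrt(2.0 * n_avg * coherence)
    good = coherence >= coh_min
    oltf[~good] = np.nan
    rel_unc[~good] = np.nan
    return oltf, rel_unc
```

For example, a CLTF point of 0.1 with good coherence maps to an open-loop gain of 9, while a point with coherence 0.2 is dropped.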
[Dave, Rich, Terra, Carl]
We have removed DAC card_num=7 from h1susitmypi and re-ordered the outputs on DAC card_num=6 in accord with the update to D1500464.
Specifically H1:IOP-SUSB123_MDAC6_TP_CH0-7 now follow the ordering ITMX UR LR UL LL, ITMY UR LR UL LL.
There is a collision of DAC channels on h1iopsusb123; we will rebuild and restart all the models on this front end when new code is available.
RCG 3.0.3 upgrade.
Jim, Dave:
All the frontends were rebuilt against RCG tag 3.0.3 on Friday afternoon before the shutdown. The build started with an empty H1.ipc file to clean out any unused IPC channels. After all the frontends were restarted today, they are now running 3.0.3.
Infrastructure Restart
Richard, Carlos and Jim:
The CDS infrastructure was restarted between 7:00 and 8:30. This included networking, DC power, timing, DNS, DHCP, NTP, NFS, and auth.
Front End Startup
Richard, Jim, Dave:
h1boot was started first. Then h1psl0 was started as a non-dolphin FEC, which permitted the DAQ to be restarted (it needs a running front end). Then the remaining non-dolphin MSR FECs were started. Richard then started the EX, EY, MX and MY FECs. Finally we started the MSR dolphin machines; all started up once the fabric was complete.
h1lsc0 reported timing issues. The timing card on this IO chassis is independently powered; I activated its DC supply and the system started normally.
h1pemmy needed one more IOP restart, as is expected for these non-IRIG-B units.
During the startup of the SWWDs, all MSR dolphin channels were found to be not working. We tracked this down to the Dolphin switches not being powered up, which was easily resolved.
Eagle-eyed Jim noticed the system time on h1psl0 was way off (by 7 hours). We assume this was because it was the first machine started soon after h1boot (which perhaps had not yet NTP-synced). We manually set the time and handed control back to NTPD.
EPICS Gateways
Dave:
All the EPICS gateway processes on cdsegw0 were restarted using the start.all script. This started too many gateways, resulting in duplicate data paths; the redundant gateways were removed and the script was corrected.
h1hwinj1
Jim, Dave:
We started the PSINJECT process on h1hwinj1. Again the SL firewall prevented EPICS CA access, which we re-remembered and fixed. CW injection to PCAL is running.
Slow Controls SDF
I started Jonathan's SDF monitor pseudo-target on h1build.
Vacuum Controls and FMCS cell phone notification
I got the cell phone alarm texter running again on cdslogin.
PI Model changes
Tega, Dave
A new PI_MASTER.mdl file was created at 15:10 Friday, which just missed the 3.0.3 build cut-off time. We recompiled and restarted the four models which use this file (h1omcpi, h1susitmpi, h1susetmxpi and h1susetmypi). A DAQ restart was also required.
NDS processes not giving trend data
Jeff, Jim, Dave
Here was an interesting one: the NDS processes were giving real-time data but no archived trend data. The cron job that keeps the jobs directory in check by deleting all files older than one day today cleaned out the directory and then deleted the jobs directory itself. Re-creating the directory and stopping the cron job from erasing it again fixed the problem.
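A safer cleanup pattern, one that prunes old files but can never remove the jobs directory itself, might look like this. The demo path is hypothetical; the real jobs directory path is site-specific.

```shell
#!/bin/sh
# Demo of a cron cleanup that prunes stale files from an NDS jobs
# directory without ever deleting the directory itself.
# JOBS_DIR is a hypothetical path used only for illustration.
JOBS_DIR="/tmp/nds_jobs_demo"

mkdir -p "$JOBS_DIR"                          # recreate if missing
touch "$JOBS_DIR/fresh.txt"                   # recent file: kept
touch -d "3 days ago" "$JOBS_DIR/stale.txt"   # old file: pruned

# -mindepth 1 excludes JOBS_DIR itself from the deletion candidates,
# so the directory survives even when every file inside has expired.
find "$JOBS_DIR" -mindepth 1 -type f -mtime +1 -delete
```

The key difference from the broken cron job is `-mindepth 1 -type f`: only files strictly inside the directory are candidates for deletion.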
h1tw0 raid issues
Carlos, Ryan
h1tw0 did not like being power cycled, and the new RAID controller card stopped working again. It looks like we are going to have to recreate the RAID again tomorrow.
DAQ instability
Dan, Jim, Dave
Sadly, we leave the system with an unstable h1fw0. Earlier in the day h1fw1 was unstable; then h1fw0 became very unstable. Dan ran some QFS diagnostics and I ran some NFS diagnostics, but we cannot see any reason for the instabilities; all NFS/disk access is well within bandwidth with no indication of packet/re-transmission errors.
In the evening I started the camera copy software, which sends digital camera centroid data from the corner station to the ALS models at the end stations.
Started up the 'lhoepics' remote-MEDM/remote-EPICS server this morning, noticing it had been overlooked or de-prioritized during the startup Monday.
We saw the noise at the X-end ESD LL again at ~1.28 MHz, 15 mV pk-pk. Rich A. came out to have a look; he believes the noise to be real (an actual oscillation in the hardware).
On the LR drive, we also saw the spectrum glitching (bouncing up and down) when driving at 15.5 kHz; this goes away with no drive signal (note we have not yet tried other drive frequencies). Perhaps some beat is making a low-frequency component?
The source of the 1.2 MHz oscillation was identified by opening the spare chassis and looking for marginally stable op-amp stages.
The marginally stable stage is U6 on page 7 of D1500016. It can be made to oscillate at ~600 kHz or ~1.2 MHz. The stage is configurable via the pole/zero bypass bit.
When, for example, the H1:SUS-ETMX_BIO_L3_UL_STATEREQ control word is set to 2, the stage has a pole at 2.2 Hz. This is the normal low-noise configuration, and in this configuration there is no 1.2 MHz oscillation.
When this control word is set to 1, the stage is nominally a unity-gain stage. In this configuration some channels (like UL, UR, LL and LR) have a Q of >5 at 1.2 MHz and can be induced to oscillate freely. This oscillation may be damped with a 30 pF capacitor across R21.
As this oscillation is not a problem in the low-noise configuration, no changes will be made. Testing of PI channels should be performed with the H1:SUS-ETMX_BIO_L3_UL_STATEREQ control word set to 2.
Kissel started the charge measurements on the SUS ETMs this morning around 8am. However, I just discovered that both ITM ISIs and the ETMX ISI have been tripped since ~4am, so the ETMX data are probably junk. Boo. Not sure why this wasn't addressed this morning. The ISIs are back up now.
Now that the Anthropogenic plot is updating correctly (thanks to Kissel, who poked CDS, and thanks to CDS), there appears to be a large EQ around the ISI trip time: USGS reports an M6.2 in Mexico at 03:51 AM local time.
The new IDLE state, which doesn't do anything (it idles), was connected to both DOWN and READY. When the ISC_LOCK Guardian had DOWN requested, it would jump from DOWN to READY as it normally would, then move to IDLE and back to DOWN, and repeat. I removed the path from READY to IDLE to stop this loop, but this makes getting to IDLE a bit awkward since it now has to pass through DOWN. Whoever made the state may want to reconsider the organization of the path, but this works for now.
Topped off the PSL Crystal Chiller with 125ml of water. This is a normal top off after swapping the filters yesterday. Both chillers are running well.
Added H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE2_COSGAIN and H1:SUS-ETMY_PI_OSC_DAMP_OSC_LINE3_SINGAIN to the exclude list and updated the channel list. Added 138 channels and removed 56 channels (see attached).
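Added/removed counts for a channel-list update like this can be produced with a simple set difference between the old and new lists. A minimal sketch; the channel names in the example are just placeholders:

```python
def diff_channel_lists(old_lines, new_lines):
    """Return (added, removed) channel names between two channel lists.

    Inputs are iterables of channel-name strings, one channel per entry;
    blank entries and surrounding whitespace are ignored.
    """
    old = {ln.strip() for ln in old_lines if ln.strip()}
    new = {ln.strip() for ln in new_lines if ln.strip()}
    return sorted(new - old), sorted(old - new)

added, removed = diff_channel_lists(
    ["H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE1_COSGAIN", "H1:OLD-CHANNEL"],
    ["H1:SUS-ETMX_PI_OSC_DAMP_OSC_LINE1_COSGAIN", "H1:NEW-CHANNEL"],
)
```

Running the diff against the actual before/after lists would reproduce the 138-added / 56-removed counts reported above.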
All accumulators were in good shape, with little to no drop in pressure since they were last checked on 26 Jan 2016.
Of course, you can't do this task without impacting the accumulators. After checking the HAM5 return unit, the valve was leaking upon pulling the gauge and I could not get it to reseat, so the Schrader valve was replaced. Several units lost more than the usual amount when the gauge did not come off the valve cleanly (quickly) and subsequently had to be recharged. Several units that were marginal in January, although still in the okay range, were charged up to the top of the acceptable range.
The values have been recorded in T1500280, and I will update this on the DCC after today's HEPI maintenance. I suspect the levels will be different after the full shutdown; additionally, I'll do a thorough purge of the resistors, which could also affect the reservoir levels. Also, the motor greasing is about a year old.
This was done yesterday during outage recovery.
Talked to John about a CP7 pump level alarm we were getting. He advised me to set the 'Manual Setting' to 72%.
Note: According to Gerardo, who called back after John, he did not get an alarm notification by phone about this as he should have. Someone needs to check the machine responsible for these calls.
I did not get a text or email alarm either.
No CP7 texts for me
J. Driggers, J. Kissel, J. Warner

While beginning recovery of the IFO, Jenne noticed that the Y-arm fiber polarization was high (showing ~24% of input light rejected, where < 5% is where we want to be). We followed instructions similar to what can be found in the Ops Wiki.

Since the percent-of-rejection channels are buried deep in the heart of the ALS MEDM screen jungle, I pulled the channels
H1:ALS-X_FIBR_LOCK_FIBER_POLARIZATIONPERCENT
H1:ALS-Y_FIBR_LOCK_FIBER_POLARIZATIONPERCENT
into a StripTool, which made it easier to hand-tune the value by eyeball via the tiny CDS laptop we were using. Also, there's no rhyme or reason to which of the three knobs to use; we just slowly turned all the knobs in both directions until we saw the rejected light value go below 5%. While there, we also brought the X arm below 5%. The fiber polarization tuning box was turned off once we were done.
There are instructions on how to perform this task in the Ops Wiki, also. It can be found in the Troubleshooting section.
Summary
On Friday, Pcals were turned off for the June 3-5 power outage (LHO alog 27558).
Details
After finding out that the PcalY PDs were not measuring any power, I went to End Y and checked the following:
From this limited information, and under the assumption that nothing is broken in the Pcal system, one possible source of the issue might be that control signals are not reaching the Pcal interface module through the EtherCAT connector. The laser on/off and power level (5.0 V) signals should come through this interface (the circuit for the back-side board of the Pcal interface chassis is given in D1400149).
The issue is currently under investigation.
Filed FRS 5651
Filiberto, Darkhan,
Summary
Details
As reported in the original entry above, there was an issue with turning on the PcalY laser and operating the shutter. The issue was discovered yesterday around 11am. I later came back to get breaker circuits to proceed with the investigation, but found out that this was not needed.
At End Y there were issues with the Beckhoff system, discovered independently of the Pcal work. The Beckhoff system feeds control signals to the Pcal interface module through an EtherCAT cable. In the afternoon Filiberto went to End Y and replaced a couple of interface boards of the Beckhoff system (see LHO aLOG 27583). After he let me know about the replacement work, I went back to End Y to turn on PcalY (I needed to switch the shutter control back to "remote" and double-check the power switches on the Pcal modules). The issue is now resolved and PcalY is operating in its nominal configuration.
I updated the ETMY L3 stage digital compensation filters (FM2, FM6 and FM7 of ETMY_L3_ESDOUTF_{UL,LL,UR,LR}_GAIN) to more accurate ones based on the recent measurements by Evan and Jeff. The coefficients are already loaded, but we have not yet had a chance to use the ETMY ESD.
To minimize possible errors when editing the foton file, I wrote a Python script which automatically populates the filters rather than editing the file by hand. The script is in the SVN at
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Scripts/SUS/setETMYLVVNcompensations.py
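The actual script lives at the SVN path above; the core idea, inverting each measured analog stage by swapping its zeros and poles, can be sketched as follows. This is a simplified illustration using the UL values measured below, not the script itself:

```python
def invert_zpk(zeros_hz, poles_hz):
    """Build the compensating (inverse) filter for a measured analog
    stage: the compensation's zeros sit at the measured poles, and its
    poles at the measured zeros.  All frequencies are in Hz.
    """
    return sorted(poles_hz), sorted(zeros_hz)

# Measured ETMY UL low-pass stages (values from this entry):
# LP1 zero 42.32 Hz / pole 2.16 Hz, LP2 zero 54.20 Hz / pole 2.16 Hz.
comp_zeros, comp_poles = invert_zpk(
    zeros_hz=[42.32, 54.20], poles_hz=[2.16, 2.16])
# The compensation ends up with zeros near 2.16 Hz and poles near
# 42.3 / 54.2 Hz, i.e. the inverse of the measured low-pass stages.
```

This inversion is exactly the relationship visible between the measured-electronics table and the AntiLP compensation values recorded in the follow-up entry.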
This is to note the measured state of the ETMY ESD electronics and the compensation (7 June 2016)
Measured analog electronics:
     Diff Receiver   Summing Node   LP1          LP2          Overall DC Gain
     (z:p) [Hz]      (z:p) [Hz]     (z:p) [Hz]   (z:p) [Hz]   [V/V]
UL   117e3:25.9e3    :3158.93       42.32:2.16   54.20:2.16   1.882
LL   167e3:27.6e3    :3229.11       42.93:2.08   49.73:2.09   1.881
UR   140e3:26.0e3    :3269.85       47.79:2.15   47.87:2.16   1.881
LR   160e3:26.7e3    :3323.42       47.17:2.06   47.61:2.20   1.881
Compensation filters:
     AntiLP (z:p) [Hz]                     AntiAcq (z) [Hz]
UL   [2.1580;2.1590], [42.3160;54.2020]    3158.9340
LL   [2.0810;2.0870], [42.9260;49.7260]    3229.1150
UR   [2.1500;2.1570], [47.7850;47.8680]    3269.8530
LR   [2.0640;2.1970], [47.1660;47.6100]    3323.4210
So the only uncompensated part of the analog electronics is the zero-pole pair above 20 kHz.