Power was interrupted to the Comtrol serial adapter, so the EPICS IOC for the End Y weather station needed to be restarted.
End X has been transitioned to Laser Hazard for Pcal work.
SEI: Getting things back up after AA/AI/IOC work
SUS: EX and EY are back, Jeff will be at EY this morning measuring AI chassis
CDS: All AA/AI boards are swapped
Power supplies on IO all swapped.
Some problems overnight to investigate.
Installing cables at EY.
PSL: Going to EX to turn on Pcal.
Fac: Moving 3IFO pallet
There will NOT be a safety meeting today
J. Kissel
I'm heading down to the X end to begin measurements of the QUAD's AI and Coil Driver Chassis. I've brought the QUAD to SAFE in prep. No signs of the trouble I briefly saw last night (see LHO aLOG 18519).
As part of the investigation into the cause of the laser trips, I looked at the status of a few bits of the TwinSafe code. This is a rough core dump of my notes from the session. The status screen was red on "Interlock OK". The following values were recorded:
bAmpNPRORunning = 0
bAmpShutterOpen = 1
bAmpShutterClosed = 0
bILOK = 0
bILTwinSafeFBError = 0
bILSoftwareEvent = 1
StandardQX1[0-7] = 11111111
StandardQX2[0-7] = 11100000, indicating that bILDChilFlow, bILXChilFlow and bILDiodeRoomSafetyKeyLock are 1
StandardQX3[0-7] = 00000000
Standard1X1[0-7] = 00100000, indicating that bILSoftwareEvent is 1, consistent with the above observation
Standard1X2[0-7] = 00000000
Standard1X3[0-7] = 00000000
The above seemingly suggests that at some point there was a problem with the chiller. There is no indication of a problem on the chiller itself; in fact both chillers were still running. TwinCAT indicates that 1 frame (out of many tens of thousands) was lost. I don't know whether this is enough of a loss to trigger the TwinSafe safety relay.
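To make those raw status bytes a bit easier to read next time, here is a minimal Python sketch (not the actual TwinCAT/TwinSafe code) that maps the bit positions identified above to their names; only the positions mentioned in this entry are mapped, the rest of the layout is unknown here.

# Minimal sketch: decode the TwinSafe status bytes noted above into named bits.
# Only the bit positions identified in this entry are mapped.
BIT_NAMES = {
    'StandardQX2': {0: 'bILDChilFlow', 1: 'bILXChilFlow', 2: 'bILDiodeRoomSafetyKeyLock'},
    'Standard1X1': {2: 'bILSoftwareEvent'},
}

def decode(register, bit_string):
    """Return the names of the set bits we know about, e.g. for '11100000'."""
    names = BIT_NAMES.get(register, {})
    return [names.get(i, '%s[%d]' % (register, i))
            for i, b in enumerate(bit_string) if b == '1']

print(decode('StandardQX2', '11100000'))  # -> the three chiller / key-lock bits
print(decode('Standard1X1', '00100000'))  # -> ['bILSoftwareEvent']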
J. Kissel
In order to facilitate studies of the timing systems inside the "important" (i.e. those involved in the DARM loop) I/O chassis, I've turned ON the DAC DuoTone Enable switches for the h1susex, h1susey, and h1lsc0 front end computers, i.e.
EX SUS Front End (h1susex) = H1:FEC-87_DUOTONE_TIME_DAC
EY SUS Front End (h1susey) = H1:FEC-97_DUOTONE_TIME_DAC
OMC DCPDs' Front End (h1lsc0) = H1:FEC-7_DUOTONE_TIME_DAC
which, because the 32nd and 31st channels on these chassis' ADC0s are otherwise empty, should now pipe DAC0's DuoTone signal back into ADC0.

Recall that the 32nd ADC0 channel on each front end is that ADC's DuoTone signal as received from the timing slave also internal to the I/O chassis, e.g.
H1:IOP-SUS_EX_ADC_DT_OUT_DQ
H1:IOP-SUS_EY_ADC_DT_OUT_DQ
H1:IOP-LSC0_ADC_DT_OUT_DQ
and the 31st ADC0 channel (if the H1:FEC-${DCUID}_DACDT_ENABLE button is ON) is DAC0's DuoTone, e.g.
H1:IOP-SUS_EX_DAC_DT_OUT_DQ
H1:IOP-SUS_EY_DAC_DT_OUT_DQ
H1:IOP-LSC0_DAC_DT_OUT_DQ
so these channels should contain some useful timing diagnostics. Note the inherent assumption in the system: if one ADC and one DAC are well synchronized to the timing slave, then all DACs and ADCs in the I/O chassis are well synchronized.

It's unclear whether we'll ever need to turn these off, so they should probably go into the IOP models' SDF system when someone's more awake than I am right now. Also -- just because I can't think of a better place to put them at the moment, since there are so many open questions -- I attach my notes on further data-mining-type timing system studies that can be done now that Jim has set up the auxiliary timing system check (see LHO aLOG 18384).
This morning I also turned on the DAC DuoTone Enable for the end-station ISC front ends (which host PCAL; in fact EX was already turned on -- dunno for how long it's been so, but that's good!).
ISC EX Computer: H1:FEC-83_DUOTONE_TIME_DAC
ISC EY Computer: H1:FEC-93_DUOTONE_TIME_DAC
ADC0 timing monitor channels for these front ends:
H1:IOP-ISC-EX_ADC_DT_OUT_DQ
H1:IOP-ISC-EY_ADC_DT_OUT_DQ
DAC0 timing monitor channels for these front ends:
H1:IOP-ISC-EX_DAC_DT_OUT_DQ
H1:IOP-ISC-EY_DAC_DT_OUT_DQ
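As an illustration of the kind of check these monitor channels enable, here is a rough sketch (assuming gwpy is available on a workstation, with placeholder GPS times) that pulls the ADC- and DAC-side DuoTone monitors for one front end and estimates their relative delay by cross-correlation. This is not the official timing analysis, just a quick look.

# Quick-look sketch: estimate the ADC-vs-DAC DuoTone offset for h1susex.
# Channel names come from the entry above; GPS times are placeholders.
import numpy as np
from scipy.signal import correlate
from gwpy.timeseries import TimeSeries

start, end = 1115900000, 1115900001   # placeholder 1 s span

adc = TimeSeries.get('H1:IOP-SUS_EX_ADC_DT_OUT_DQ', start, end)
dac = TimeSeries.get('H1:IOP-SUS_EX_DAC_DT_OUT_DQ', start, end)

# Cross-correlate the mean-removed traces; the peak lag is the relative
# delay in samples (coarse, sample-level resolution only).
x = adc.value - adc.value.mean()
y = dac.value - dac.value.mean()
lag = np.argmax(correlate(x, y, mode='full', method='fft')) - (len(y) - 1)
fs = adc.sample_rate.value
print('relative delay: %d samples (%.1f us)' % (lag, 1e6 * lag / fs))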
J. Kissel
I've recovered FULL ISOLATION on the SEI system at the end station, and damped all the SUS. While watching/waiting for the ISI to isolate, I noticed that H1 SUS ETMY's M0 L damping loop output was consistently rather large (+/- ~700 [ct], where it's normally +/- 1 [ct]). I pulled open a dataviewer trace and found the main chain ringing at ~5 [Hz] with consistent amplitude in L. I grabbed a few "quick" spectra in DTT, narrowed down the frequency, and went to demonstrate the problem for this aLOG... and the problem disappeared. I turned on the L, P and Y damping loops after just getting enough data for the "V T R damping loops ON, L P Y damping loops OFF" measurement, and saw no 5 [Hz] oscillation. All DOFs' damping outputs restored to just barely above 1 [ct]. *sigh* So I'll leave the title up to gather enough attention that a little forensics can be done, but at least the SEI / SUS at EY pass these superficial, tired-Jeff tests (isolating the SEI system and turning on the damping loops for the SUS). As mentioned in LHO aLOG 18518, I'm going to come in early again and measure H1 SUS ETMY's AI chassis, but I'll leave the SEI/SUS system in its up-and-running state overnight before I attack.
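For the follow-up forensics, a quick way to check whether the ~5 [Hz] ring comes back is to grab a spectrum of the M0 L damping output. A minimal sketch, assuming gwpy and a guessed DAQ channel name (H1:SUS-ETMY_M0_DAMP_L_OUT_DQ -- adjust to the actual name) and placeholder GPS times:

# Rough sketch, not the DTT template: look for a line near 5 Hz in the
# ETMY main-chain L damping output.
from gwpy.timeseries import TimeSeries

start, end = 1115900000, 1115900060          # placeholder 60 s span
data = TimeSeries.get('H1:SUS-ETMY_M0_DAMP_L_OUT_DQ', start, end)
asd = data.asd(fftlength=8, overlap=4)       # 0.125 Hz resolution
band = asd.crop(4, 6)                        # zoom in around 5 Hz
print('peak in 4-6 Hz band: %.3g at %.2f Hz'
      % (band.value.max(), band.frequencies.value[band.value.argmax()]))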
J. Kissel
I've measured high-precision transfer functions of every relevant channel of H1 SUS ETMX's AI chassis today, from 10 [Hz] to 100 [kHz]. This was mostly a dress rehearsal because EY wasn't available, but eventually we may be using both QUADs to actuate DARM, so we'll need these for calibration accuracy, and this felt like a good time to measure since the rest of the world was down for upgrades. I'll measure EY tomorrow.

The messages:
- All channels are functioning fantastically.
- There is a DC gain of 0.9897 +/- 0.0001 [V/V] on all AI chassis channels I've measured.
- The notch frequency is 65883 +/- 374 [Hz].
- By 2 [kHz] there's already a 2% drop in gain from the DC value (though we're not surprised by this).

We will use these measurements (well, ETMY's are more important currently, but...) to inform the calibration model.

Attached are several things:
2015-05-19_H1SUSETMX_AIChassis.pdf -- Plots of the results, showing the full frequency response from 10 to 1e5 [Hz] and a zoom-in on the gravitational wave band. The last plot shows the data analysis performed to transform the raw data into the final answer: I measured the full signal chain both with the device under test (DUT) and bypassing it as a reference, then took the ratio to find just the DUT's response.
2015-05-19_AIChassis_Measurement.pdf -- Diagrams showing the measurement setup. Thanks to the good Dr. O'Rielly for the verbal list of tips/tricks/"gotchas."
2015-05-19_H1SUSETMX_AIChassis_Meast_Pics.pdf -- Pictures of the setup to aid the diagram.

Details:
All raw measurements and results can be found committed to the CalSVN repo here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/ElectronicsMeasurements/
The analysis script which takes out the reference transfer function and makes plots:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/ElectronicsMeasurements/process_H1SUSETMX_AI_Measurements_20150519.m

Of particular interest for the future is the SR785 measurement readout scheme, in which I used a GPIB box configured against a portable wireless router that can access the CDS network (thanks to Elli and Evan for helping me with this). As such, I could run the GPIB measurements from a nearby workstation, and all data retrieval is automatic, without the need for clumsy floppy disks or USB drives. Note that this process uses the latest checkout of the github repo for this stuff maintained by Eric Quintero at the 40m Lab,
https://github.com/e-q/netgpibdata/
which has now been checked out in an "official" location here:
/ligo/svncommon/NetGPIBDataGIT/netgpibdata/
where the key function (for a transfer function measurement) is
/ligo/svncommon/NetGPIBDataGIT/netgpibdata/SRmeasure
and the parameter file template is
/ligo/svncommon/NetGPIBDataGIT/netgpibdata/SPSR785template.yml

To take such a measurement, you:
- Connect the GPIB box to the SR785 and to the wifi router.
- Set up the SR785's "OUTPUT" button so that Destination is "GPIB," GPIB Control is "SR785," and GPIB Address is "[some number]," where "[some number]" is defined in the first few lines of the parameter file.

For this measurement setup, I tried to make the measurement parameters as identical as possible to what I found on the LLO CDS machines in
/data/brian/gpib/configuration.py
where Brian told me to hunt.
My configuration file now lives in the CalSVN repo in the same place as everything else for these measurements:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/ElectronicsMeasurements/TFSR785_AIChassis_Config.yml
Thus, to run a given measurement, I can run the following command (assuming I'm in the CalSVN ElectronicsMeasurements directory):
]$ /ligo/svncommon/NetGPIBDataGIT/netgpibdata/SRmeasure ./TFSR785_AIChassis_Config.yml
and ~10 minutes later I get a happy little .txt file export of the measurement from the SR785 in that directory. Sweet!
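For completeness, the analysis step described above (dividing the DUT measurement by the bypass reference) boils down to something like the sketch below. The real version is the process_H1SUSETMX_AI_Measurements_20150519.m script in the CalSVN; the file names and the assumed three-column (frequency, magnitude in dB, phase in degrees) export format here are placeholders.

# Sketch only: divide the transfer function measured with the device under
# test (DUT) in the chain by the reference measurement that bypasses it,
# leaving just the DUT's response.
import numpy as np

def load_sr785_tf(fname):
    """Load an SR785 .txt export assumed to contain columns of
    frequency [Hz], magnitude [dB], phase [deg]."""
    f, mag_db, ph_deg = np.loadtxt(fname, comments='#', unpack=True)
    return f, 10**(mag_db / 20.0) * np.exp(1j * np.deg2rad(ph_deg))

freq, tf_dut = load_sr785_tf('dut_measurement.txt')        # placeholder file name
_,    tf_ref = load_sr785_tf('reference_measurement.txt')  # placeholder file name

tf_ai = tf_dut / tf_ref   # response of the AI chassis channel alone
print('gain at %.1f Hz: %.4f V/V' % (freq[0], abs(tf_ai[0])))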
Using the data from the May 17th lock stretch where the spherical power was stable, and the calculation method from alog16558 and alog16579, the estimated absorption of the ITMX coating is 390 +/- 40 parts per billion (this agrees with what was previously calculated in alog16603).
The primary source of uncertainty comes from assuming 10% uncertainty in the arm power.
As Kiwamu suggested, I used 3.1% (the measured value) for the PRM transmissivity instead of 2.97%, the vendor's value.
The arm buildup = 1233 cts, recycling gain = 38, CR power to the PRC = 19 W, power stored in the arm = 102360 W, and the absorbed power = 40 mW. This yields an absorption in the ITMX coating of 390 +/- 40 ppb.
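For reference, the arithmetic of that last step is just the ratio of absorbed power to stored arm power, with the 10% arm-power uncertainty carried through; a quick check of the quoted numbers:

# Back-of-the-envelope check of the numbers quoted above.
P_arm = 102360.0   # [W] power stored in the arm
P_abs = 40e-3      # [W] power absorbed by the ITMX coating

absorption_ppb = P_abs / P_arm * 1e9     # ~390 ppb
uncertainty_ppb = 0.10 * absorption_ppb  # ~40 ppb, dominated by the 10% arm-power uncertainty
print('%.0f +/- %.0f ppb' % (absorption_ppb, uncertainty_ppb))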
Richard, Jim, Dave:
We programmed all the remaining IO Chassis timing slave FPGAs today. In the morning we did MY and EY. This afternoon Jim did all the corner station chassis (SUS, SUSAUX, SEI, PSL, ISC).
We replaced all 18-bit DAC cards with new firmware versions in: h1susey, h1susb123, h1sush2a, h1sush34. Remaining SUS to do: h1sush56, h1sush2b. We have enough modified cards to do h1sush56 tomorrow, using the one card which fails on autocal* for the OMC DAC.
We had to shuffle some DAC cards around to make space for the new DC power supply. Details will be posted tomorrow.
We powered up all frontends. There is a problem with h1seib3: it looks like the 16-channel Contec BIO card is not being seen by the front end (it was in the slot we needed to use for the DC power supply). We will fix this tomorrow.
Some FECs have large positive IRIG-B values which will eventually come back down.
* 18-bit DAC card S/N 101208-05 fails its autocal. I replaced its firmware chipset and re-tested on DTS, it still fails. Jeff and Dan H suggest for now we install on h1sush56 for the OMC drive.
Dan, Greg, Jim, Dave:
Dan found that one of the LDAS Q-Logic fiber channel switches is reporting receive errors. These errors correlate with the errors reported by the SATABOY RAID controller, and in turn with the fw0 restarts.
We first changed which long-haul single-mode fiber pair is used between the buildings. Instead of using the first pair of the A-Block, we moved to the first pair of the B-Block. The change was done at 15:00 PDT this afternoon. This did not fix the problem; h1fw0 restarted 50 minutes later and Dan saw errors on the Q-Logic switch.
Following the philosophy of "one change at a time," we are swapping out the single-mode fiber cable in the LDAS server room which connects the Q-Logic switch to the LDAS patch panel. If that does not improve the situation, we will do the same in the MSR.
Sudarshan, Dave:
After the PSL front end restart following the DC power and FPGA reprogramming, we manually burt restored all four models to 13:10 PDT. The safe.snap files for the PSL most probably should be updated.
Summary:
Now the ISS second loop RPN (relative power noise) signals (H1:PSL-ISS_SECONDLOOP_SUM14_REL_OUT_DQ and H1:PSL-ISS_SECONDLOOP_SUM58_REL_OUT_DQ) are normalized using the DC signals from the photodiodes.
Before:
The original electronics in the photodiode readout for the ISS second loop had whitening, so we were not able to get the DC signals. Temporary arrangements were made to monitor these DC signals using testpoints, but they were not included in the front end model. To calibrate the output signals we were using a constant number (12 V in our case), which was obtained by reading the DC signal from the testpoint when the ISS was operating in its nominal configuration at about 2.8 W. However, this calibration factor changes when the laser power changes.
After:
On our last ISS second loop model update (see alog18381) we included these DC channels in the front end model, so we can now acquire the DC signals from the photodiodes. The ISS output signals are now calibrated (normalized) using these newly acquired DC signals. See the attached filter module for the changes that have been implemented.
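Schematically, the change amounts to dividing by the measured photodiode DC sum instead of the fixed 12 V reference, so the normalization tracks the actual laser power. A minimal sketch, illustrative only; the real change lives in the front end model / filter module:

# Illustrative sketch of the normalization, not the front-end code.
def iss_relative_output(ac_sum, dc_sum=None, fixed_ref=12.0):
    """Return the normalized (relative) ISS second-loop signal.

    Old scheme: divide by a fixed 12 V reference measured at ~2.8 W input.
    New scheme: divide by the photodiode DC sum acquired by the front end,
    which stays correct when the laser power changes.
    """
    reference = dc_sum if dc_sum is not None else fixed_ref
    return ac_sum / reference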
LVEA: Laser Safe
Observation Bit: Commissioning
07:00 Karen & Cris – Cleaning in the LVEA
08:17 Hugh – Going into the LVEA
08:28 Party Rentals on site to setup
08:30 Jodi & Bubba – 3IFO work in LVEA
08:30 S & K Electronics – Electrical work at End-Y and End-X
08:33 Cris – Going to End-X and Mid-X
09:00 Richard – Going to End-Y
09:05 Audio/Visual person on site to setup
09:10 IO Power Supplies & 18-bit DACs (Richard)
09:15 Reprogramming of IO Chassis Timing at EY (Dave & Jim)
09:20 Beckhoff Code Change (Patrick)
09:28 Praxair on site for Nitrogen delivery
09:52 Jim & Dave – At Mid-Y for PEM chassis swap
10:15 Jim – Shutting down Seismic computers for Work Permits 5207, 5205, 5204
12:10 Jim & Dave – Going to End-Y
12:30 Jim & Dave – Back from End-Y
12:56 Jim & Filiberto – Shutting down SUS computers for Work Permits 5207, 5205, 5204
13:49 Jeff K – End-X doing SUS measurements
14:00 – 15:30 Tours in LVEA and Control Room
15:02 Jim & Dave – Shut down PSL & ISC frontends for Work Permits 5207, 5205, 5204
15:03 Catering group on site to clean up after lunch
Burt restored to 07:10 this morning.
Daniel, Dave, Patrick:
Daniel added 100 to the list of acceptable DuoTone firmware code revision numbers. I added 133 to the list of acceptable IRIG-B firmware code revision numbers. Restarted PLC1 on h1ecatc1, h1ecatx1 and h1ecaty1. Burt restored each to 07:10 this morning. WP 5216
WP 5202 Updated nds2-client software to nds2-client-0.11.6 for Ubuntu 12, Ubuntu 14, and Scientific Linux 6 computers.
Josh Smith, TJ Massinger, Andy Lundgren, Laura Nuttall
We've taken a look at the ~8 h lock stretch on the 15th of May where the intent bit was active. We've specifically looked for evidence of DAC glitches (for example 17555) and whistle glitches (17452), which have been present in the past. We find no sign of whistle glitches correlated with IMC-F (there could be other sources, but we haven't seen any yet), and we also find no evidence of DAC glitches.
The glitch rate for this lock is among the lowest we have seen for aLIGO (at either site). Attached is the glitch rate plot for the 15th, which shows the glitch rate slowly decreasing throughout the lock. We'll continue to investigate this lock.
I've taken a look at the data from a number of lock stretches (when the intent bit was not active) from the 10th of April to the 17th of May, i.e. since Daniel/Sheila powered off the fixed frequency source being used for ALS (17825). Since this work was completed I cannot find any evidence of whistle glitches. I've specifically looked for whistle glitches correlated with IMC-F (which was the indication in the past). We will keep an eye on future lock stretches to see if they come back, but for now the problem seems to have been solved!