STATE OF H1: Still recovering from power outage.

ACTIVITY LOG (some missing):
15:44 UTC Unexpected power outage
16:38 UTC Fire department through gate to check on RFAR boxes
16:56 UTC Richard, Jeff B. and Jim W. to end stations to turn on high voltage and HEPI pumps
17:14 UTC Richard turning on h1ecaty1
17:14 UTC Jason and Peter to LVEA to look for an optic
17:39 UTC Richard and company done at end X and going to corner station to turn on TCS chillers and HEPI pumps
17:49 UTC Filiberto to end X to reset fire panel
17:56 UTC Vacuum group touring LVEA (6 or 7 people)
17:57 UTC HEPI pump stations and TCS chillers started in corner station
18:18 UTC Filiberto back from end X
18:24 UTC Jeff B. and Jason to LVEA to start TCS X laser
18:49 UTC Hugh bringing up and isolating HAM ISIs
19:16 UTC Richard and Jim W. to end stations to turn on ALS lasers
19:30 UTC Sheila to CER to power cycle all of the Beckhoff chassis
22:55 UTC Jason to LVEA to look at optical levers
23:02 UTC Jason back
23:34 UTC Dave restarting the EPICS gateway between the slow controls network and the frontend network in an attempt to fix an issue with the Beckhoff SDF
23:47 UTC Dave restarting the Beckhoff SDF code

Other notes:
End stations: end Y IRIG B chassis power cycled; end X, end Y high voltage turned on; end X, end Y HEPI pump station computers started; end X, end Y ISI coil drivers chassis reset; end Y Beckhoff computer turned on
Corner station: TCS chillers turned on; HEPI pump controllers turned on (Jeff B. had to push power button on distribution box); Sheila power cycled all Beckhoff chassis in CER; Sheila turned on the AOS baffle PD chassis in LVEA
I started the h0video IOC on h0epics2.
Conlog was running upon arrival. However, it crashed during the recovery. It still needs to be brought back.
Joe D. and Chris worked on beam tube enclosure sealing.
There was some trouble starting the corner station HEPI pump controller computer.
There was some trouble finding the strip tool template for the wind speeds. I'm not sure if it was found or if TJ created a new one.

Things to add to the 'short power outage recovery' document:
- Turn on high voltage supplies
- Push reset on ISI coil drivers
- HEPI pump controller and pumps
- TCS chillers
- TCS lasers
- ALS lasers
- Turn on video server
Upon arrival, on the CDS overview, under the Slow Controls diagnostics, all of the corner station Beckhoff PLCs were green and updating. All of the end station Beckhoff PLCs had white boxes.

Corner Station: I was informed that the TCSX laser was not working. There were errors on some of the terminals in the system manager, so I asked Sheila to power cycle all of the chassis in the CER. This seemed to fix the errors and the TCSX laser. She also found all of the AOS baffle PD chassis in the LVEA off and turned them on. I burtrestored all the PLCs for h1ecatc1 to 6:10 AM PST.

End Stations: Richard turned on or power cycled h1ecaty1 (I do not know if he found it off). I believe the Beckhoff PLCs for end Y turned green and started updating on the CDS overview after this. I was able to log into h1ecatx1. I found that the EPICS IOC had not started cleanly (see attached screenshot). I quit it and started it again. The Beckhoff PLCs for end X turned green and started updating on the CDS overview after this.

The EtherCAT vacuum gauges did not come back cleanly. All of them were reporting a flat pressure of 0. We have not been able to bring back the gauges in the beam tube enclosures, and they have been disabled in the system managers. At end Y, on Vacuum Gauge ETM, under Process Data, Daniel hit 'Load info from device'. On Vacuum Gauge NEG, under Process Data, Daniel toggled the checkboxes under PDO Assignment. For some reason, once he did this, activated the configuration, and relinked the variables, the gauges started reporting reasonable values. This did not work for me at end X. If I just hit 'Load info from device' or toggled the checkboxes and then hit activate configuration, the gauges would start reporting reasonable values; however, as soon as I tried to relink the variables, the data went invalid again. I ended up having to remove the gauges in the system manager, add them back, and relink them. After that they seemed to work. None of these changes have been (and probably should not be) committed to subversion. We do not know the root cause of these problems.

I burtrestored all the PLCs for h1ecatx1 to 6:10 AM PST. At some point I did the same for h1ecaty1, but I do not remember if the IOC got restarted since then. Things seem to be working for it for now, so I am not attempting to burtrestore it again.
At one point I may have accidentally requested 'DEGAS' for one of the EtherCAT vacuum gauges at end X or end Y. Unfortunately I cannot recall which one, and I do not know if it actually engaged.
Gabriele, Sheila
The message:
We had a look at the upconversion we would expect from the quadratic response of DC readout. The message is that the large DARM residual at around 3 Hz (due to our LSC feedforward) limits our noise near 10 Hz, and upconversion around the calibration lines is about a factor of 4 below DARM at 40 Hz.
Details:
The DCPD photocurrent is proportional to the power on them:
Here P_DC is the DC offset current (20 mA), G_opt is the optical gain (3.3 mA/pm), and x_0 is the DARM offset (12 pm).
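Written out in terms of these quantities (my own reconstruction; the overall normalization is an assumption), the quadratic DC readout response about the DARM offset is, in LaTeX form:

    I_{\mathrm{DCPD}} = P_{\mathrm{DC}} \left(1 + \frac{x}{x_0}\right)^2
                      = P_{\mathrm{DC}} + G_{\mathrm{opt}}\, x + \frac{G_{\mathrm{opt}}}{2 x_0}\, x^2 ,
    \qquad G_{\mathrm{opt}} = \frac{2 P_{\mathrm{DC}}}{x_0}

As a sanity check, 2 x 20 mA / 12 pm is about 3.3 mA/pm, which matches the quoted optical gain.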
If we inject a line we expect to see upconversion at the second harmonic, the amplitude of the second harmonic (seen in the DCPDs) over the fundamental should be:
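For an injected line x = A cos(wt), the quadratic term above gives (again my reconstruction):

    \frac{G_{\mathrm{opt}}}{2 x_0}\, A^2 \cos^2(\omega t)
      = \frac{G_{\mathrm{opt}} A^2}{4 x_0}\left[1 + \cos(2\omega t)\right]
    \quad\Rightarrow\quad
    \frac{\text{2nd harmonic}}{\text{fundamental}} = \frac{A}{4 x_0}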
This is the explanation for the upconversion mentioned in alogs 25001 and 21240 .
Since the quadratic term should be small we can approximate the DARM residual, and use it to predict the noise in the DC PDs due to upconversion:
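As a rough sketch of how such a projection can be made (not necessarily the exact procedure used here; the sample rate and the stand-in data below are placeholders), one can square the calibrated DARM residual, scale by G_opt/(2 x_0), and take its spectrum:

    import numpy as np
    from scipy.signal import welch

    fs    = 16384.0   # assumed sample rate [Hz]
    G_opt = 3.3e-3    # optical gain [A/pm]
    x0    = 12.0      # DARM offset [pm]

    # x should be the calibrated DARM residual [pm]; random stand-in data for illustration only
    x = 1e-3 * np.random.randn(int(600 * fs))

    # Quadratic term of the DC readout response: upconverted photocurrent [A]
    i_up = (G_opt / (2.0 * x0)) * x**2

    # Amplitude spectral densities of the linear term and the upconverted term
    f, psd_lin = welch(G_opt * x, fs=fs, nperseg=int(16 * fs))
    _, psd_up  = welch(i_up,      fs=fs, nperseg=int(16 * fs))
    asd_lin, asd_up = np.sqrt(psd_lin), np.sqrt(psd_up)
    # Comparing asd_up to asd_lin shows where upconversion approaches the linear DARM signal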
We took the data from a time when I was injecting a 6 Hz line into DARM and made this projection. The upconversion of the 6 Hz peak predicts the 12 Hz peak well.
The rms of our DARM residual is dominated by bad feedforward around 3 Hz, this is upconverted and limits our sensitivity at around 10 Hz. There is also upconversion around the calibration lines that is about a factor of 4 below DARM near 40 Hz.
Restarted all computers and services on x1 test stand following power failure.
Peter, Jason, Jeff B, Nutsinee
The TCS CO2X laser was down due to the power outage this morning, along with many other systems. Jeff Bartlett reported that the chiller had tripped and he restarted it. We later restarted the power supply on the mezzanine, which brought the system back up. However, the time series of the flow alarm did not indicate that the alarm was tripped when the chiller shut off, and the laser output continued to read non-zero for over an hour until the frontend was restarted. The moral of the story is that some channels will continue to record incorrect (stale) values after a power outage until the frontend gets restarted. Another moral of the story: maybe we should move the TCS CO2 power supply to the CER.
I think the TCS power supplies were located in the mechanical room because they are switching supplies and we wanted them far away from other electronics. By the way, there is a second mechanism that shuts off the laser if the chiller goes off (not just the flow meter) which is that the chiller has a built-in relay that acts on the laser controller to turn off the controller. It was added so that we weren't reliant on low flow rate turning off the laser.
Here are some recovery actions I have performed:
Restarted weather station IOCs at EY, MY, MX, and EX following the power failure.
Notables:
HEPI CS Pump Controller required a couple power cycles before it would respond to ssh for restart.
HAM2 ISI tripped on CPSs during Isolation--Jim suggests we increase the DC Bias Ramp time.
All other platforms started and isolated without trouble.
All SDF snaps have been set to OBSERVE. Still some reds as Jim is setting up blends and sensor correction.
Kyle, Gerardo
We noticed that the pressure at X-End continued to climb after the power outage, so we drove to X-End, found the controller in "standby mode", and restarted the HV manually.
We then changed the controller settings for Auto HV Start to "Enabled".
We have begun recovery from an unexpected power outage that occurred at 7:44 PST and lasted for 3 minutes (as noted from the UPS).
FRS ticket 4245 opened.
These are the 60 day trends for the PSL crystal and diode chillers.
This afternoon I took measurements of the DHARD Yaw loop at different PSL powers. In addition to general characterization of the O1 IFO, I will use this data to verify the ASC loop model. Once we're confident in the loop model at powers that we can measure, we will use it to try to design ASC filters that we can use for high power operation in a few months.
In the first attached screenshot and .xml file, the measurement at 2 W is blue, the measurement at 10 W is orange, and the measurement at 20 W is red.
The 2 W measurement was taken at the DC_READOUT state, and only FM6 of the DHARD Yaw filter bank was engaged. This measurement was taken from a lock stretch earlier in the day, using 40 points at 3 avg each. In the xml file, the 2 W data is saved as references 0-4. In the second screenshot and .xml file, I include some higher resolution measurements of the peaks, with 5 avg for each point.
The 10 W measurement was taken at INCREASE_POWER, and both FM2 and FM6 were engaged. This measurement used 60 points at 5 avg each. In the xml file, the 10 W data is saved as references 5-9. I had modified the lscparams.py guardian code to stop the power increase at 10 W, but I have reverted that change, so everything should still be as normal.
The 20 W measurement was taken at INCREASE_POWER, and both FM2 and FM6 were engaged. This measurement used 60 points at 5 avg each. In the xml file, the 20 W data is the "live" traces.
The 10 W and 20 W measurements today are broadly consistent with the measurements from 31 July 2015 (alog 20084), which is good.
I have plotted yesterday's DHARD Yaw measurements against the ASC model that I have.
The ASC model seems to be missing some gain related to the laser power, since I need a different fudge factor for each input power to get the upper UGF of the model to match the measurement. This is probably a problem with the Optickle part of the model since that's the only thing that should change very significantly in overall gain as a function of power. The suspension model (which includes radiation pressure) shows the peaks from the lower stages moving to higher frequency with higher input power as expected.
In the individual plots (e.g. 2W_only), I show the measurement (dark blue) with error bars (light blue) derived from the measured coherence, plotted against the model (black trace). The 10 W and 20 W measurements match the model pretty well (except for the gain fudge required), but the 2 W measurement does not match the model very well below a few Hz. I'm not yet sure why this is.
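For reference, the usual coherence-based error estimate for a swept-sine transfer function (a sketch of the standard formula, not necessarily the exact script used for these plots) looks like:

    import numpy as np

    def tf_magnitude_error(tf, coherence, n_avg):
        """One-sigma magnitude uncertainty of a transfer function estimate,
        from the measured coherence and the number of averages per point."""
        coherence = np.clip(coherence, 1e-6, 1.0)
        rel_err = np.sqrt((1.0 - coherence) / (2.0 * n_avg * coherence))
        return np.abs(tf) * rel_err

    # Example: 5 averages per point, as in the 10 W and 20 W measurements
    # err = tf_magnitude_error(measured_tf, measured_coherence, n_avg=5)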
In the final plot attached, I show all 3 models (solid traces) and all 3 measurements (dotted traces), but without error bars to avoid clutter.
Corey, Adam
We're just about to start a detchar safety injection.
We've finished this injection. I'll post a few more details shortly.
More Details: I injected the waveform from 'https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/detchar/detchar_03Oct2015_PCAL.txt'. The injection start time was 1136927627. The log file is checked into the svn - 'https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/detchar/O1/log_H1detcharinj_20160115.txt', although for some reason it only shows the start time.
The time noted in the alog entry is incorrect. The correct time from the log is 1136927267. The injections are visible at the corrected time.
Evan G., Jeff K.
Revisiting measurements Jeff made in the field [1],[2],[3], together with new measurements I took in the EE lab, we compared them with the UIM residuals measurements obtained using the Pcal and ALS DIFF measurements. Attached is a figure showing the electronics chain and comparing it with the residuals obtained. We find that the BOSEM electronics account for some of the residuals found in the UIM measurements, but not all. At this point we have only clues, but no solid evidence, for what remains of the residuals. We have three theories:
I set up a UIM driver, satellite box, and BOSEM in the EE shop to repeat Jeff's measurements and verify that we observe the same effects. Indeed, I observed issues similar to those Jeff had observed in his measurements from the floor. We put these measurements on top of the UIM actuation residuals measurement/model but, unfortunately, find that the residuals are not completely accounted for by the electronics chain.
We started to think about what else could be going wrong with the residuals, but so far have come up with only the three theories above. To understand this effect in more detail, Jeff is currently undertaking exploratory measurements of UIM-->DARM and Pcal-->DARM at frequencies higher than 100 Hz. Hopefully these measurements will shed some light on this effect.
The quad model on the svn does not have UIM-PUM wire violin modes. I just drafted an update that does include these, which I used to generate the attached figures. I'll commit this update if it appears consistent with measurements.
The plot ViolinModes_12Jan2016.jpg compares the model UIM L to Test L transfer function with and without the UIM-PUM modes, but with the fiber modes in both cases. I guessed the UIM-PUM violin modes to have a Q of 100,000, but that could be off by an order of magnitude or two. The second figure plots the ratio of these two transfer functions.
According to this second figure, the UIM-PUM violin modes explain some, but not all of the discrepancy seen between the measurements and the model in the log above. So either the model is not correct, or there is still something in the feedback loops we are missing.
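For what it's worth, a single wire violin mode enters the model as a high-Q second-order resonance; a minimal sketch (the frequency and Q below are placeholders, not the values in the updated model) is:

    import numpy as np

    f0, Q = 420.0, 1e5            # placeholder mode frequency [Hz] and Q
    w0 = 2.0 * np.pi * f0

    f = np.logspace(1, 3, 5000)   # frequency vector [Hz]
    s = 2j * np.pi * f

    # Second-order resonance H(s) = w0^2 / (s^2 + (w0/Q) s + w0^2);
    # multiplying the existing UIM L -> Test L model by factors like this adds the wire modes
    H_mode = w0**2 / (s**2 + (w0 / Q) * s + w0**2)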
For the Bench measurements, the data is stored at:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/Electronics/BenchUIMDriver/2016-01-12
Attached are plots showing the individual components of the coil driver electronics fitted with the vectfit program in Matlab and using LISO. I report the fitted LISO values below with respective uncertainties.
Dummy BOSEM connected, with output impedance network (see figure UIM_out_impedance.pdf):
Best parameter estimates:
zero0:f = 84.1507169277 +- 1.627 (1.93%)
pole0:f = 303.5726548020 +- 5.431 (1.79%)
pole1:f = 127.6915337428k +- 3.535k (2.77%)
factor  = 2.2065872530m +- 17.45u (0.791%)
This fit gives a calibration of 2.2 mA/V, one zero at 84.15 Hz, and two poles at 303.57 Hz and 127.7 kHz.
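To evaluate the fitted response against the bench data, a minimal sketch is below (assuming the LISO convention that each 'f' pole or zero enters as (1 + i f/f0) and that the factor sets the DC gain):

    import numpy as np

    gain_dc  = 2.2065872530e-3            # fitted factor [A/V], i.e. 2.2 mA/V
    zeros_hz = [84.1507169277]
    poles_hz = [303.5726548020, 127.6915337428e3]

    f = np.logspace(0, 5, 1000)
    s = 2j * np.pi * f

    # Build the zero/pole response, normalized so the DC gain equals the fitted factor
    H = gain_dc * np.ones_like(s)
    for fz in zeros_hz:
        H *= 1 + s / (2 * np.pi * fz)
    for fp in poles_hz:
        H /= 1 + s / (2 * np.pi * fp)
    # abs(H) and np.angle(H, deg=True) can be overlaid on the measured bench transfer function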
BOSEM only (output impedance network divided out so only BOSEM transfer function remains, see figure UIM_bosem.pdf):
Best parameter estimates:
zero0:f = 334.8526120460 +- 3.892 (1.16%)
zero1:f = 1.2383234778k +- 43.39 (3.5%)
zero2:f = 8.2602408615k +- 320.8 (3.88%)
pole0:f = 747.0160319882 +- 27.6 (3.69%)
pole1:f = 5.3613221192k +- 210.2 (3.92%)
pole2:f = 25.8483289876k +- 310.1 (1.2%)
pole3:f = 232.8627791989k +- 3.041k (1.31%)
factor  = 11.6075630096m +- 14.92u (0.129%)
For some reason, this transfer function is tricky to fit. These are the fewest zeros and poles I could put into LISO and still get a good fit to the data. LISO does complain that a strong correlation exists between pole1<-->zero2 and pole0<-->zero1. When I removed these pairs, the fit became much worse, so I left them in.
As a comparison with the full chain: digital AntiAcq x analog Acq (output impedance network) x BOSEM (see figure UIM_full.pdf). The model fits the measurement to within 2% in magnitude up to 40 kHz, and within 1 degree in phase up to 50 kHz.
Finally, the previously shown plot in the original post divides out the full BOSEM measurement taken in the field ('field BOSEM'), but the model already takes care of the analog output impedance network, so the original plot was double-counting. I attach here a corrected version of the plot (see UIM_res_with_elec.pdf). This shows that while the BOSEM does indeed account for some of the excess residual, it is not the dominant contributor to the behaviour above ~60 Hz.
J. Kissel, R. Savage, LHO Operators

Tallying up the progress so far on the schedule of PCALX excitations at high frequency (see plan in LHO aLOG 24802):

Frequency  Amplitude  Start Time    Stop Time     Achieved   Planned    Success?
   (Hz)      (ct)     (mm-dd UTC)   (mm-dd UTC)   Duration   Duration   (Yes / No, reason if no)
                                                  (hh:mm)    (hh:mm)
------------------------------------------------------------------------------------------------
 1001.3      35k      01-09 22:45   01-10 00:05    01:20      01:00     Yes
 1501.3      35k      01-09 21:12   01-09 22:42    01:30      01:00     Yes
 2001.3      35k      01-09 18:38   01-09 21:03    02:25      02:00     Yes
 2501.3      40k      01-09 12:13   01-09 18:31    06:18      02:00     Yes
 3001.3      35k      01-10 00:09   01-10 04:38    04:29      04:00     Yes
 3501.3      35k      01-10 04:41   01-10 12:07    05:26      06:00     Good Enough!
 4001.3      40k      01-09 04:11   01-09 12:04    07:55      08:00     Good Enough!
 4501.3      40k      01-10 17:38   01-11 06:02    12:24      12:00     Yes
 5001.3      40k      01-11 06:18   on-going       (as long as we can get)

Thanks to all of the operators who have been diligently caring for these lines while we sleep!

For the record, while these PCALX calibration lines are on, the majority (if not all) of the range is consumed, so we cannot perform PCALX hardware injections.
I used the high-frequency calibration lines injected above to estimate the sensing function at those frequencies. For this analysis, SLM Tool was used to obtain the amplitude and phase of these calibration lines in the relevant channels.
Sensing Function = DARM_ERR[ct] / PCAL_TXPD[m]
The DARM_ERR signal is dewhitened and the PCAL_TXPD is corrected to get metres using the scheme described in G1501518.
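As an illustration of the line-ratio estimate (this is not the SLM Tool code itself; the function and variable names below are hypothetical), each calibration line can be demodulated out of both channels and the ratio taken:

    import numpy as np

    def line_amplitude_phase(x, fs, f_line):
        """Complex amplitude (magnitude and phase) of a single line,
        estimated by digital demodulation over the full stretch."""
        t = np.arange(len(x)) / fs
        return 2.0 * np.mean(x * np.exp(-2j * np.pi * f_line * t))

    # Hypothetical usage with pre-loaded, pre-corrected time series at one line frequency:
    # darm_line = line_amplitude_phase(darm_err_dewhitened, fs, f_line)   # DARM_ERR [ct]
    # pcal_line = line_amplitude_phase(pcal_txpd_metres,    fs, f_line)   # PCAL_TXPD [m]
    # sensing   = darm_line / pcal_line                                   # sensing function [ct/m]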
Furthermore, the ratio of GDS/Pcal is calculated and included in the attached plot.
A data quality flag has been created to capture times when these extra PCAL lines were in the data. It is H1:DCH-EXTRA_PCAL_LINES:1 and a description of this flag can be found on the detchar wiki.