H1 had a lockloss at 4:10 UTC with no obvious culprit (nothing seismic, though I forgot to check the strip tools for anything obvious).
Commissioning Mode for A2L: Once in Nominal Low Noise, ran the A2L script, but unfortunately it had errors and left the SDF with diffs (after consulting with Jenne, I was able to revert the SDF). Spent about another 22 min running and troubleshooting the A2L recovery. Will send Jenne the errors from this A2L script session.
Then went to Observation Mode at 5:10 UTC.
I implemented an automatic restart system using monit for the GRB alert code (see LLO entry 21671). For unknown reasons, this new method of running the script (as a detached process from an init script, not in a terminal window) also made all the GraceDB comm errors go away. Implementation details are posted in the log entry if LHO wishes to follow suit.
Topped off the PSL chiller to the Max level. Required 125 mL (the last fill was 2 days ago and required 250 mL [a full beaker]).
So this takes care of the Shift Check Sheet item of addressing the PSL chillers on Thursday.
TITLE: 10/15 EVE Shift: 23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Taken to Observation Mode at 22:55 by Jim with a range of 75 Mpc. Current lock going on 2.5 hrs.
Outgoing Operator: Jim
Support: Occupied Control Room, Kiwamu is On-Call if needed
Quick Summary:
useism has clearly become quieter over the last 10 hrs. Winds under 15 mph. Smooth observational sailing!
J. Kissel, K. Izumi, D. Tuyenbayev
As expected from the preliminary analysis of the actuation strength change (see LHO aLOG 22558), DARM open loop gain TFs and PCAL to CAL-CS transfer functions reveal minimal change in the calibration of the instrument. For these measurements, all we have changed in the CAL-CS model is switching the two signs in the ESD stage, which in turn cancel each other. As such, the only change should be the change in discrepancy between the canonical model and the actual current strength of the TST / L3 / ESD stage. Again, this change was small (sub ~10%, 5 [deg]) and has been well tracked by the calibration lines, so we will continue forward with focusing our efforts on determining how to apply the time-dependent corrections.
Attached are screenshots of the raw results. Darkhan is working on the more formal results. Stay tuned!
The after-bias-flip results live here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/2015-10-15_H1_DARM_OLGTF_7to1200Hz_AfterBiasFlip.xml
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/2015-10-15_PCALY2DARMTF_7to1200Hz_AfterBiasFlip.xml
Jeffrey K, Kiwamu, Sudarshan, Darkhan
Some conclusions based on the analysis of the DARM OLG TF and the PCAL to DARM TF measurements taken after flipping the ESD bias sign:
1) Although the total L3 stage actuation function sign did not change, the ESD bias sign flip was reflected in two of the SUSETMY L3 stage model EPICS replicas in the CAL-CS model: H1:CAL-CS_DARM_ANALOG_ETMY_L3_GAIN and H1:CAL-CS_DARM_FE_ETMY_L3_DRIVEALIGN_L2L_GAIN.
Below we list the kappas used in the kappa-corrected parameter file for the measurements taken after the ESD bias sign flip on Oct 15. We used 43-minute mean kappas calculated from 60 s FFTs starting at GPS 1128984060 (values prior to the bias sign flip are given in parentheses; previously they were reported in LHO aLOG comment 22552):
κtst = 1.004472 (1.057984)
κpu = 1.028713 (1.021401)
κA = 1.003843 (1.038892)
κC = 0.985051 (0.989844)
fc = 334.512654 (335.073564) [Hz]
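For reference, here is a schematic sketch (my own summary, not the formal calibration documentation; consult that for the exact conventions) of how these parameters enter the DARM model: the sensing function is scaled and its pole moved as
C(f; t) ≈ κC(t) · C0(f) / (1 + i f / fc(t)),
the actuation is split per stage as
A(f; t) ≈ κtst(t) · Atst(f) + κpu(t) · Apu(f),
with κA tracking the overall actuation scale, and the response used to reconstruct the strain is R = (1 + G) / C with open loop gain G = C · D · A (D being the digital DARM filter).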
An updated comparison script and parameter files for the most recent measurements were committed to calibration SVN (r1685):
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/CompareDARMOLGTFs_O1.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128979968.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128979968_kappa_corr.m
Comparison plots were committed to CalSVN (r1684):
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_AfterSignFlip_*.pdf
As a double check, we created a Matlab file with the EPICS values for the kappas using the H1DARMparams_1128979968.m parameter file and made sure that the values agree with those currently written in EPICS. The logs from calculating these EPICS values were committed to CalSVN (r1682):
/trunk/Runs/O1/H1/Scripts/CAL_EPICS/D20151015_H1_CAL_EPICS_VALUES.m
/trunk/Runs/O1/H1/Scripts/CAL_EPICS/20151015_H1_CAL_EPICS_*
For the record, we changed the requested bias voltage from -9.5 [V_DAC] (negative) to +9.5 [V_DAC] (positive).
TITLE: 10/14 EVE Shift: 23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Low noise, undisturbed
Support: Usual CR crowd
Quick Summary: Most of the day spent doing commissioning work, switching bias on ETMY ESD
Shift Activities:
This is actually 10/15 & Jim's DAY shift. :)
We have seen a slow change in optical gain at the beginning of each lock stretch at LHO. Jenne has a nice alog (LHO alog 22271) about this varying optical gain. Here, I am trying to use the calibration parameter kappa_C, which is a measure of the change in optical gain from its nominal value of 1. For this analysis, I take the data where the Guardian state is greater than 600 (Nominal Low Noise) and plot kappa_C from the beginning of each lock stretch to 2 hours into the lock.
The attached plot has a total of nine segments starting from Oct 03, 2015. There were a total of 12 viable segments, but I excluded 3 segments that were flatter at the beginning. I am not sure why these are flatter than the others, but excluding those stretches helps to visualize the time constant of the optical gain better.
The estimated time it takes for the optical gain to reach its nominal value after the IFO is locked is about 30 minutes or so, as seen in the attached plot.
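For anyone who wants to reproduce this, here is a minimal sketch of the kind of query involved (the use of GWpy and the exact channel names are my assumptions, not necessarily what was actually run; the >600 Guardian-state cut and the 2-hour window follow the description above):

# Sketch: pull kappa_C for the first two hours of a lock stretch and plot it.
# Channel names are assumptions -- substitute the kappa_C and Guardian state
# channels actually in use.
from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

LOCK_START = 1127900000              # GPS start of a lock stretch (placeholder)
DURATION = 2 * 3600                  # first 2 hours of the lock

kappa_c = TimeSeries.get('H1:CAL-CS_TDEP_KAPPA_C_OUTPUT', LOCK_START, LOCK_START + DURATION)
state = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', LOCK_START, LOCK_START + DURATION)

# Keep only times when the Guardian state is above 600 (nominal low noise);
# this assumes both channels are recorded at the same rate.
mask = state.value > 600
minutes = (kappa_c.times.value[mask] - LOCK_START) / 60.0

plt.plot(minutes, kappa_c.value[mask], '.')
plt.xlabel('Time since lock start [min]')
plt.ylabel('kappa_C')
plt.savefig('kappaC_lock_start.png')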
J. Kissel, K. Izumi (with relocking team help from J. Warner, J. Driggers, and S. Dwyer)
We've (finally) completed the couple of ETMY per-stage actuator to DARM sweeps plus PCAL to DARM sweeps with the IFO locked on ETMX. This took several attempts because (a) the transition from locking DARM on ETMY to ETMX failed, (b) we accidentally rapidly flipped the ETMY bias while locked on ETMX, which glitched the mass, and (c) in efforts to lock we've rearranged when the ASC gets turned on, and in doing so, during the last attempt we forgot to turn on a few of the loops. However, rubbin's racin'.
More details to come once we've processed the measurements carefully, but the preliminary message is that the ESD actuation strength has decreased by 7-8% -- going from ~5-6% stronger than the model, as expected from the continuous estimate of the actuation strength from calibration lines, to ~1-2% weaker than the model. Both the UIM and PUM have not changed strength at all, also as expected. As such, because the changes have been accurately tracked by the calibration lines, and the change in TST stage actuation has brought the model systematic from +5 to +6% down to -2 to -1%, we will *not* be updating the front-end CAL-CS calibration (and that *includes* the EPICS records used to compute the time-dependent parameters). We should also expect to see this change show up cleanly in the estimation of the actuation strength over the next few hours of this lock stretch.
We're currently retaking DARM OLG TFs and a PCAL to DARM TF to get a similar before vs. after comparison against this morning's result (LHO aLOG 22546), but we expect to be up and running within the hour, with as good a calibration accuracy as we've had before. More details, plots, and quantitative results to come!
The data have been committed to
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/FullIFOActuatorTFs/2015-10-15/
2015-10-15_H1SUSETMY_L1toDARM_FullLock.xml
2015-10-15_H1SUSETMY_L2toDARM_FullLock.xml
2015-10-15_H1SUSETMY_L3toDARM_LVLN_LPON_FullLock_negativebias.xml
2015-10-15_H1SUSETMY_L3toDARM_LVLN_LPON_FullLock_positivebias.xml
2015-10-15_H1SUSETMY_PCALYtoDARM_FullLock.xml
and exported to
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/FullIFOActuatorTFs/2015-10-15/
2015-10-15_H1DARM_ETMY_L1_State1_Drive_coh.txt
2015-10-15_H1DARM_ETMY_L1_State1_Drive_tf.txt
2015-10-15_H1DARM_ETMY_L2_State3_Drive_coh.txt
2015-10-15_H1DARM_ETMY_L2_State3_Drive_tf.txt
2015-10-15_H1SUSETMY_L3toDARM_LVLN_LPON_FullLock_negativebias_coh.txt
2015-10-15_H1SUSETMY_L3toDARM_LVLN_LPON_FullLock_negativebias_tf.txt
2015-10-15_H1SUSETMY_L3toDARM_LVLN_LPON_FullLock_positivebias_coh.txt
2015-10-15_H1SUSETMY_L3toDARM_LVLN_LPON_FullLock_positivebias_tf.txt
2015-10-15_H1SUSETMY_PCALYtoDARM_FullLock_coh.txt
2015-10-15_H1SUSETMY_PCALYtoDARM_FullLock_tf.txt
For the record, we changed the requested bias voltage from -9.5 [V_DAC] (negative) to +9.5 [V_DAC] (positive).
Because we are now using two different BSC blend filters depending on ground motion, I wanted to make the job of checking and switching them easier. With copious coaching from Jenne, I copied and pasted the buttons for the two filters from each ISI onto a single screen, so you should be able to tell at a glance what is running. This now lives under the O-1 tab on the site map as ISI BLEND FILTERS. It's not pretty and it's not perfectly organized, so I will clean it up eventually. I also need to test that each button does what it should, but we are locked and people are taking measurements, so that will have to wait.
Jenne, Jim, Sheila
This morning we are trying to flip the bias sign on ETMY. This has caused a few locklosses, and since we currently have moderately high ground motion (20 mph winds, normal microseism) it is difficult to relock.
This gave us a chance to take another look at the locklosses during the early stages of CARM offset reduction, which are currently the biggest problem we have with locking during modestly high ground motion (see here). A few weeks ago I had added pulling the OMC off resonance to the guardian (as is done at LLO; see the linked alog). I think the OMC was probably not part of the problem; it is just that when the ground is moving more we get larger fluctuations in the power at the AS port, which tended to make the OMC flash when it was on resonance.
In the locklosses during the switch-to-QPD step, the first attachment is fairly typical. We have glitches of about 1/10 of a second that happen as the CARM offset is reduced, and they show up as dips in REFL LF, AS LF, AS_C, and AS45 Q. You could imagine that this is caused by an alignment fluctuation, but I have looked at optical levers and witness sensors for many of these locklosses and don't see anything. It seems more likely that these "glitches" are from the ALS DIFF loop, because as soon as we transition to RF DARM they stop.
One thing that I've seen is that because AS_C sees these glitches, the loop that sends AS_C to SRM and SR2 has a large glitch when they happen. This morning we edited the guardian to turn off both of the SRC loops in the state TR CARM (after the ASC is offloaded, when the REFL WFS loops are turned off), and to turn them back on in ENGAGE_ASC_PART1 (before turning on any other loops). We used to always have these loops off during CARM offset reduction, so we think this should be OK. However, it didn't completely solve the problem.
In this state we run an ezca servo that takes the AS45Q signal and adjusts the ALS DIFF offset. For some, but not most, of these SWITCH_TO_QPD locklosses it looks like this servo isn't able to keep AS45Q at zero because it simply doesn't have enough gain. For this reason we also tried increasing the gain of this servo in the switch-to-QPD step by a factor of 2 (to -82222). There are other locklosses, however, where this isn't the case. Also, in many of the glitches that we survive, the glitch is over before the servo reacts, which makes it seem like we need more proportional gain in the servo (it's just a simple integrator).
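For context, here is a rough sketch of what such a servo loop looks like (illustrative only: the channel names, gain scaling, and update cadence below are assumptions, and the real guardian code uses the standard servo utilities rather than a hand-rolled loop like this):

# Illustrative sketch of an ezca-based integrator servo feeding the AS45Q
# error signal back to the ALS DIFF offset.  Channel names, gain, and loop
# period are placeholders, not the values in the real guardian code.
import time
import ezca

ez = ezca.Ezca(prefix='H1:')

ERROR_CH = 'ASC-AS_B_RF45_Q_SUM_OUTPUT'   # assumed error-signal channel
OFFSET_CH = 'ALS-C_DIFF_PLL_CTRL_OFFSET'  # assumed ALS DIFF offset channel
GAIN = -82222e-9                          # integrator gain (arbitrary scaling here)
PERIOD = 0.1                              # seconds between updates

for _ in range(600):                      # run for ~1 minute
    err = ez.read(ERROR_CH)
    # Pure integrator: accumulate a correction proportional to the error.
    # A proportional term added directly to the output would react faster
    # to the short glitches discussed above.
    ez.write(OFFSET_CH, ez.read(OFFSET_CH) + GAIN * err * PERIOD)
    time.sleep(PERIOD)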
Perhaps the best solution would be to try transitioning to RF DARM at a higher CARM offset. We occasionally have difficulty with the RF DARM transition at the current offset (sqrt TR CARM = -3.3, which should be about 140 pm CARM offset depending on alignment) when the initial alignment is not good. If we move the transition to a lower CARM offset we will become more sensitive to initial alignment. One idea would be to try engaging the SOFT loops during the CARM offset reduction to help compensate for a bad initial alignment.
It appears that the AIP for BSC8 has failed; we will investigate its status next Tuesday.
J. Kissel
We're taking the IFO out of observation intent and beginning measurement prep for flipping the H1 ETMY bias sign. Stay tuned for detailed aLOGs.
J. Kissel
The DARM OLGTF is complete. Results saved and committed to
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKEXC_coh.txt
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKEXC_tf.txt
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKIN1_coh.txt
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKIN1_tf.txt
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip.xml
J. Kissel
PCAL to DARM transfer functions are complete. We're going to quickly run an A2L measurement, then open the beam diverters. After that, we'll try to preserve the lock by transitioning over to ETMX for DARM control. However, this will require turning the ETMX HV ESD driver back ON, and we're only 50% confident that the IFO lock can survive that turn-on transient. Wish us luck!
The PCAL to DARM results have been committed to
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/
2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip_A_PCALRX_B_DARMIN1_coh.txt
2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip_A_PCALRX_B_DARMIN1_tf.txt
2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip.xml
DARMOLGTF model vs. meas. comparison plots from measurements prior to ESD bias sign flip are attached to this report.
Parameter files for these measurements and updated comparison scripts are committed to calibration SVN (r1673):
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/CompareDARMOLGTFs_O1.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1127083151_kappa_corr.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128956805.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128956805_kappa_corr.m
Plots and a .MAT file with the models are committed to calibration SVN (r1673):
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_bfSignFlip_*.pdf
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_bfSignFlip_DARMOLGTF.mat
In the kappa-corrected parameter file for the measurements taken on Oct 15 we used kappas averaged over 1 hour, calculated with the SLM tool, starting at GPS 112894560. The values are listed below:
κtst = 1.057984
κpu = 1.021401
κA = 1.038892
κC = 0.989844
fc = 335.073564 [Hz]
For the kappas used in the parameter files for Sep 10 and Sep 23, see LHO alog comment 22071.
For the record, we changed the requested bias voltage from -9.5 [V_DAC] (negative) to +9.5 [V_DAC] (positive).
Tamper injections showed some upconversion from the tens-of-Hz region into the region above 60 Hz. The HVAC makes noise in this region, so I did the test I had done in iLIGO: I shut down all turbines and chiller pad equipment on the entire site. This increased the range by almost 5 Mpc (see figure; the 3 range peaks are during the shutoff periods listed below).
Checks:
1) make sure all VFDs are running at 45 or less
2) if possible use only 2 turbines for the LVEA
We did not drop out of science mode but here are the times of the changes (Oct. 15 UTC):
2:05:00 shutdown started, 2:08:00 shutdown complete
2:18:00 startup started, 2:21:30 startup complete
2:31:00 shutdown started, 2:37:00 shutdown complete
2:47:00 startup started, 2:51:00 startup completed
3:01:00 shutdown started, 3:03:30 shutdown complete
3:13:30 startup started, 3:17:00 startup complete
Here is a comparison of the calibrated DARM spectrum from times when the HVAC was ON and OFF, in the frequency band that was affected.
I plotted glitchgrams and trigger rates during this time. The HVAC changes don't seem to have made a noticeable difference.
https://ldas-jobs.ligo-wa.caltech.edu/~jordan.palamos/detchar/HVAC/glitchgram_HVAC_1128909617.png
https://ldas-jobs.ligo-wa.caltech.edu/~jordan.palamos/detchar/HVAC/rate_HVAC_1128909617.png
Attached are ASDs of DARM and one of the PEM seismometer channels (corner station, Z axis) for all of the times when the HVAC was turned on and off (not including the transition times). In general, the noise level between 40-100 Hz is lower during the times when the HVAC was off. The peak around 75 Hz was better during the second and third off times, but not in the first segment (1128910297 to 1128910697).
More PEM seismometer channels are here: https://ldas-jobs.ligo-wa.caltech.edu/~marissa.walker/O1/Oct15HVACtest/
(Note: the seismometer calibration from pem.ligo.org is only valid from 0-20 Hz.)
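For anyone who wants to redo this comparison, here is a sketch of the ASD comparison (GWpy usage and the calibrated strain channel name are my assumptions; the off segment is the one quoted above, and the on segment is a placeholder to be filled in from the shutdown/startup times):

# Sketch: compare DARM ASDs with the HVAC on vs. off.  The strain channel
# name and the "on" interval are placeholders; the "off" interval is the
# segment quoted in the comment above.
from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

CHANNEL = 'H1:GDS-CALIB_STRAIN'          # assumed calibrated DARM channel
OFF = (1128910297, 1128910697)           # HVAC-off segment from the text
ON = (OFF[0] - 600, OFF[0] - 200)        # placeholder: pick a nearby HVAC-on span

asd_off = TimeSeries.get(CHANNEL, *OFF).asd(fftlength=8, overlap=4)
asd_on = TimeSeries.get(CHANNEL, *ON).asd(fftlength=8, overlap=4)

plt.loglog(asd_on.frequencies.value, asd_on.value, label='HVAC on')
plt.loglog(asd_off.frequencies.value, asd_off.value, label='HVAC off')
plt.xlim(20, 200)                        # band affected by the HVAC
plt.xlabel('Frequency [Hz]')
plt.ylabel('ASD [1/sqrt(Hz)]')
plt.legend()
plt.savefig('darm_asd_hvac_on_off.png')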
As Jim had almost relocked the IFO, we had an EPICS freeze in the guardian state RESONANCE. ISC_LOCK had an EPICS connection error.
What is the right thing for the operator to do in this situation?
Are these EPICS freezes becoming more frequent again?
screenshot attached.
EPICS freezes never went away completely, and they are normally only a few seconds in duration. This morning's SUS ETMX event lasted for 22 seconds, which exceeded Guardian's timeout period. To get the outage duration, I second-trended H1:IOP-SUS_EX_ADC_DT_OUTMON. Outages are on a per-computer basis, not a per-model basis, so I have put the IOP duotone output EPICS channels into the frame as EDCU channels (accessed via Channel Access over the network). When these channels are unavailable, the DAQ sets them to zero.
For this event the time line is (all times UTC)
16:17:22 | DAQ shows EPICS has frozen on SUS EX |
16:17:27 | Guardian attempts connection |
16:17:29 | Guardian reports error, is retrying |
16:17:43 | Guardian times out |
16:17:45 | DAQ shows channel is active again |
The investigation of this problem is ongoing; we could bump up its priority if it becomes a serious IFO operations issue.
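Here is a sketch of the kind of check described above, fetching second trends of the IOP duotone EDCU channel and looking for the zeroed-out span (the GWpy trend syntax and the GPS span are my assumptions; the channel name is the one quoted above):

# Sketch: estimate an EDCU/EPICS outage duration from second trends of the
# IOP duotone channel; the DAQ writes zeros when the channel is unavailable.
from gwpy.timeseries import TimeSeries

START, END = 1128960000, 1128960120      # placeholder GPS span around the event

dt = TimeSeries.get('H1:IOP-SUS_EX_ADC_DT_OUTMON.mean,s-trend', START, END)

# The duotone monitor is normally nonzero; zeros mark the outage.
dead = dt.value == 0
if dead.any():
    times = dt.times.value
    print('outage from GPS %.0f to %.0f (%d s)' % (times[dead][0], times[dead][-1], dead.sum()))
else:
    print('no outage found in this span')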
To be clear, it sounds like there was a lockloss during acquisition that was caused by some kind of EPICS dropout. I see how a lockloss could occur during the NOMINAL lock state just from an EPICS dropout. Guardian nodes might go into error, but that shouldn't actually affect the fast IFO controls at all.
Sorry, I meant that I cannot see how a Guardian EPICS dropout could cause a lockloss during the nominal lock state.
Related: alog 22199
We measured the output of the Low Voltage ESD monitor (D1500129) at the AA chassis side of the remote rack using an SR785, right after the IFO was locked again following maintenance.
No surprise was found; see attached. 'UR', 'UL', etc. are the analog measurements of the corresponding ESD monitor channels. The noise floor was measured with the inputs of the SR785 short-circuited.
Don't worry about the LR trace; it looks as if it's noisier than the others, but it's not. It's just a spectral leakage artifact (somehow I did not save the small-bandwidth measurement for LR).
'digital LL signal projected' is H1:SUS-ETMY_L3_MASTER_OUT_LL, but with corrections applied of 20/2^17 Volt/count for the DAC conversion, zpk([50;50;3250], [2.2;2.2;152]) for the SUS awhite-type filters, and 2*4.99/(1.2+15+4.99)*zpk([],[40]) for the monitor circuit.
The measurements agree well with the projection up to 1 kHz. Above that, the SR785 noise dominates, but this should be good enough for our purpose.
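As a rough illustration, here is a sketch of how that counts-to-monitor-volts correction could be assembled (scipy is my choice of tool; the zero/pole values and gains are the ones quoted above, taken to be in Hz, and the filters are normalized to unity DC gain here, which may differ from the actual filter normalization):

# Sketch: build the digital-counts -> monitor-voltage correction described
# above and evaluate its magnitude vs. frequency.
import numpy as np
from scipy import signal

def zpk_hz(zeros_hz, poles_hz, dc_gain=1.0):
    # Continuous-time zpk with zeros/poles given in Hz, normalized to the
    # requested DC gain (an assumption; the real filters may be normalized
    # differently).
    z = [-2 * np.pi * f for f in zeros_hz]
    p = [-2 * np.pi * f for f in poles_hz]
    k = dc_gain * np.prod([abs(x) for x in p]) / np.prod([abs(x) for x in z] or [1.0])
    return signal.ZerosPolesGain(z, p, k)

dac_gain = 20.0 / 2**17                  # V/count as quoted (net value; see the follow-up comment below)
awhite = zpk_hz([50, 50, 3250], [2.2, 2.2, 152])
monitor = zpk_hz([], [40], dc_gain=2 * 4.99 / (1.2 + 15 + 4.99))

freqs = np.logspace(0, 4, 500)           # 1 Hz .. 10 kHz
w = 2 * np.pi * freqs
_, h_awhite = signal.freqresp(awhite, w)
_, h_monitor = signal.freqresp(monitor, w)
correction = dac_gain * h_awhite * h_monitor

# Multiply an ASD of H1:SUS-ETMY_L3_MASTER_OUT_LL (counts/rtHz) by
# abs(correction) to get the projected monitor voltage spectrum.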
What was done:
We disconnected the LV ESD monitor cable (SUS ESD 06) from the front panel of the AA chassis and connected a DB9 breakout board to the cable, but not to the AA.
For each quadrant (pins 1-6 for UR, 2-7 for LR, 3-8 for UL and 4-9 for LL), we used two clip-to-BNC cables to connect the positive pin (1, 2, 3 or 4) to the A input of the SR785 and the negative pin to B, and the BNC shell was connected to the shell of the DB9 connector on the AA chassis.
The SR785 was in A-B mode, AC coupled. The plot doesn't account for the AC coupling, but its roll-off is at 0.16 Hz and does not affect the plot. The sensitivity was fixed at -8 dBVpk.
Low-frequency measurements (0-400 Hz, 800 lines) were performed for all quadrants, but the LR one was not saved. As a result, in the plot it looks as if LR is noisier than the other channels, but in reality it's showing a spectral leakage artifact because only the larger-bandwidth measurement is available for f < 400 Hz.
Higher-frequency measurements (0-1.2 kHz and 0-12.8 kHz, 800 lines) were performed and saved for all quadrants.
Only for LL did we measure up to 102 kHz, to make sure that there's no high-frequency bump or anything, but all measurements are dominated by SR785 noise for f > 1 kHz anyway.
Attached is the matlab file containing the voltage data for all quadrants and the SR785 noise.
Um, the count-to-voltage DC conversion gain of the DAC is 20 Vpp differential per 2^18 counts, so the grandparent entry is incorrect by a factor of 2 there. But I was measuring the voltage at the differential output stage of the LV driver, which has a DC gain of 2 (i.e. 10 Vpp single-ended is converted to 10 Vpp differential), which I didn't account for in the script.
So in the end the conclusion doesn't change.
The online GDS calibration filters were updated today to fix several bugs and add a few more corrections that were uncovered by the calibration group over the past week. The updates include the following changes:
1) Compensation for known IIR warping by Foton in the inverse sensing filter installed in the CALCS model. See DCC G1501013.
2) Fixed a bug in how the digital response of the AA/AI filters was called (par.C.antialiasing.digital.response.ss -> par.C.antialiasing.digital.response.ssd and par.A.antiimaging.digital.response.ss -> par.A.antiimaging.digital.response.ssd).
3) Additional compensation for the OMC DCPD (par.C.omcdcpd.c).
The new filters were generated using create_partial_td_filters_O1, which is checked into the calibration SVN under aligocalibration/trunk/Runs/O1/Common/MatlabTools/. The resulting filter file, H1GDS_1128173232.npz, is under aligocalibration/trunk/Runs/O1/GDSFilters/.
Attached are plots comparing the frequency response of the GDS FIR filters to the model frequency response.
There was a request to see how the inverse correction filters for the sensing chain were rolled off at high frequencies in these filters. This is done with a simple smooth roll-off above ~6500 Hz. I've attached plots that zoom in on the 5000-8192 Hz range to show how the rolloff causes the GDS filters to differ from the ideal inverse correction filters for the sensing chain.
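As a generic illustration of that kind of roll-off (this is not the actual create_partial_td_filters_O1 code; the half-Hann taper, corner frequency, and filter length below are assumptions matching the description above):

# Sketch: smoothly roll off the high-frequency response of an FIR filter
# above ~6500 Hz with a half-Hann taper.  The FIR filter here is a random
# stand-in for the inverse sensing filter.
import numpy as np

fs = 16384                              # sample rate of the calibration filters [Hz]
ntaps = 4096                            # placeholder FIR length
fir = np.random.randn(ntaps)            # stand-in for the inverse sensing FIR filter

freqs = np.fft.rfftfreq(ntaps, d=1.0 / fs)
resp = np.fft.rfft(fir)

# Taper: unity below 6500 Hz, falling smoothly to zero at Nyquist.
f_lo = 6500.0
taper = np.ones_like(freqs)
band = freqs >= f_lo
n = band.sum()
taper[band] = 0.5 * (1.0 + np.cos(np.pi * np.arange(n) / max(n - 1, 1)))

resp_rolled = resp * taper
fir_rolled = np.fft.irfft(resp_rolled, n=ntaps)   # rolled-off filter, back in the time domain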
Operators: Do not run the A2L script until Jenne troubleshoots it. (It had "Permission denied" errors, so maybe there's an issue with running the script while logged in as ops?)
Will put this "pause" on running A2L into the new Ops Sticky Note page.
I was able to run the A2L script from my account. The script cleared all the SDF diffs after it was done, as Jenne advertised. All is well.
For lack of a better place to write this, I'm leaving it as a comment to this thread.
The problem was that the results directory wasn't writable by the Ops accounts, although it was by personal accounts. I've chmod-ed the directory, so the A2L script should run no matter who you are signed in as.
Please continue running it (instructions) just before going to Observe, or when dropping out of Observe (i.e. for Maintenance).