O-1 Days 22-28
model restarts logged for Thu 15/Oct/2015
No restarts reported
model restarts logged for Wed 14/Oct/2015
No restarts reported
model restarts logged for Tue 13/Oct/2015
2015_10_13 08:03 h1calex
2015_10_13 08:05 h1broadcast0
2015_10_13 08:05 h1dc0
2015_10_13 08:05 h1nds0
2015_10_13 08:05 h1nds1
2015_10_13 08:05 h1tw0
2015_10_13 08:05 h1tw1
Maintenance: new calex model with associated DAQ restart
model restarts logged for Mon 12/Oct/2015
No restarts reported
model restarts logged for Sun 11/Oct/2015
No restarts reported
model restarts logged for Sat 10/Oct/2015
No restarts reported
model restarts logged for Fri 09/Oct/2015
No restarts reported
TITLE: "10/16 [OWL Shift]: 07:00-15:00UTC (00:00-08:00 PDT), all times posted in UTC"
STATE of H1: Observing at ~80 Mpc for the past 2 hours.
SUPPORT: Kiwamu, Sheila
SHIFT SUMMARY: Difficulty locking for more than half of the shift. First I ran into a Guardian issue where the gains for INP1 and PRC1 were not turned on. Corey had probably turned the right gains back on by accident when he reverted the SDF differences (after the a2l script failed), so we weren't aware this was happening. After turning the gains on by hand a couple of times, and eventually reloading the Guardian code as Kiwamu suggested, I ran into another problem and kept losing lock at ENGAGE_ASC_PART3. During phone calls with Sheila and Kiwamu we thought it was an issue with the ASC. Kiwamu came over and, after noticing a huge drop in RF90 prior to a lockloss, suggested I run an initial alignment. It turned out there was a 2 urad offset in TMSY. I didn't have any trouble locking the arms in green or DRMI, so I didn't suspect a bad alignment (although I kept having trouble finding IR. Maybe that could have been a red flag?).
After the initial alignment we lost lock at Switch to QPDS once but everything went smoothly afterward. We got to NOMINAL_LOW_NOISE, ran the a2l script (had no trouble running it on my account), and finally Observing for the first time in almost 7 hours at 12:56 UTC.
Tonight I learned how 2 urad can make life miserable. My world will never be the same again.....
INCOMING OPERATOR: Jim
ACTIVITY LOG:
See Shift Summary.
Back Observing at 12:56:10 UTC after running the a2l script. It went fine for me (I ran it on my account).
POP_A_LF and AS_AIR RF90 aren't looking good. While I was wondering what was going on, I noticed differences in SDF: the ASC INP and PRC1 pitch and yaw gains were 0 (setpoint = 1). Was this a leftover from the a2l script error? I reverted it, accepted the changes, and hoped for the best. No luck though. The IFO just lost lock at NOMINAL LOW NOISE. CSOFT was running away.
Seeing how the a2l script left us with problems tonight, I WILL NOT RUN IT until I know that it's safe to run.
Lost lock at ENGAGE ASC PART3 twice in a row. I've attached the log and the lockloss plots of both locklosses (and the ASC Striptool from the recent lockloss). Sorry about the terrible screenshots of the log (only showing the DOWN state).
I called Kiwamu to make sure the ASC loop gains were correct. It turned out the INP and PRC1 (pitch and yaw) gains were 0 when they're supposed to be 1. I put in the gains by hand before I realized that PRC1 had no ramp time at all. There was a spike from PRC1 yaw and the ifo lost lock shortly after, so I gave PRC1 a ramp time of 10 s.
Nutsinee, Sheila, Kiwamu,
Nutsinee had a number (5-ish) of lock losses in ENGAGE_ASC_PART3. We blame TMSY for these lock losses.
Nutsinee went through the initial alignment and it corrected TMSY in pitch by 2 urad. See the attached screenshot showing a 2-day trend of the TMS angles. The large step in the middle of the screenshot must be the temporary correction that Corey introduced last night (alog 22535). You can see that the TMSY angle is now back to where it was 2 days ago. Tonight, the symptom was that when we engaged the SOFT loops, something started running away and decreasing the cavity power everywhere. We think that TMSY was pulling the SOFT loops to some kind of bad alignment point due to the different TMSY angle. It had nothing to do with the 20 dB boost in the SOFT loops (for example, alog 21587), which we suspected at the very beginning of the investigation. After the initial alignment, everything seems to be going as smoothly as usual and we made it back to low noise (alog 22579). We are happy.
In addition, there was another issue with INP1 and PRC1 where they were not engaged in ENGAGE_ASC_PART1 and PART2. This was fixed by reloading the ISC_LOCK guardian, since it had been edited the previous evening but not reloaded since then.
TITLE: 10/15 EVE Shift: 23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Currently in Lock Acquisition
Incoming Operator: Nutsinee
Support: Jenne on phone about A2L script
Quick Summary: Two mysterious locklosses toward the end of the shift (see specific entries).
Shift Activities:
H1 dropped out at 6:15 for unknown reasons (seismic quiet & Strip Tools looked fine).
Attempt #1: Guardian stuck at Check IR. So, moved ALS PLL DIFF OFFSET slider until Yarm power started flashing. Then Guardian continued. Engage ASC Part3 seems to be a bit of a time sink here (it took about 6min for the control signals to take over). Had a lockloss during the Reduce Modulation Depth step.
Attempt #2: Nutsinee was in early for her shift so I handed over the ifo to her. She mentioned watching Jenne run the A2L script, so she will give it a try during her shift.
I also asked Nutsinee to take a look at the previous locklosses since she has experience running lockloss tools.
I took a quick look at this particular lockloss and it seems like PRC2 P was growing prior to the lockloss. The zoomed-in version of the same plot doesn't tell me much except that DHARD might be glitchy prior to the lockloss. Looking at the LSC channel list it seems like POPAIR RF90 and ASAIR_A_LF glitched right before the lockloss. The Witness channel list tells me that SR3 is likely responsible for the glitch.
I don't really have a conclusion here. Just reporting what I observed.
H1 had a lockloss at 4:10utc where there was no obvious culprit (nothing seismic, but I forgot to look at strip tools to see anything obvious).
Commissioning Mode for A2L: Once in Nominal Low Noise, I ran the A2L script, but unfortunately it had errors and left the SDF with diffs (after consulting with Jenne I was able to Revert the SDF). So I spent about another 22 min running & troubleshooting A2L recovery. Will send Jenne the errors from the A2L script session.
Then went to Observation Mode at 5:10utc.
Operators: Do not run the A2L script until Jenne troubleshoots it. (We had "Permission denied" errors, so maybe there's an issue with running the script logged in as ops?)
Will put this "pause" in running A2L in the new Ops Sticky Note page.
I was able to run the a2l script on my account. The script cleared all the SDF diffs once it was done, as Jenne advertised. All is well.
For lack of a better place to write this, I'm leaving it as a comment to this thread.
The problem was that the results directory wasn't write-able by the Ops accounts, although it was by personal accounts. I've chmod-ed the directory, so the A2L script should run no matter who you are signed in as.
Please continue running it (instructions) just before going to Observe, or when dropping out of Observe (i.e. for Maintenence).
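As an illustration of the permissions fix described above, a minimal sketch: the actual results directory and group are not named in the log, so a scratch path stands in here.

```shell
# Hypothetical stand-in for the A2L results directory; the real path
# and group ownership are not given in the log.
mkdir -p /tmp/a2l_results_demo
# Make the directory group-writable so any account in the group
# (e.g. an ops login) can write results into it.
chmod g+w /tmp/a2l_results_demo
ls -ld /tmp/a2l_results_demo
```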
I implemented an automatic restart system using monit for the GRB alert code; see LLO entry 21671. For unknown reasons, this new method of running the script (as a detached process from an init script, not in a terminal window) also made all the GraceDB comm errors go away. Implementation details are posted in that log entry should LHO wish to follow suit.
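For reference, a monit stanza for supervising such a detached process might look roughly like the fragment below; the process name, pidfile, and init-script paths are assumptions, not the actual LLO configuration.

```
# Hypothetical monit control-file fragment (names/paths are assumptions)
check process grb_alert with pidfile /var/run/grb_alert.pid
  start program = "/etc/init.d/grb_alert start"
  stop program  = "/etc/init.d/grb_alert stop"
  if 5 restarts within 5 cycles then timeout
```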
Topped off to the Max level. Required 125mL. (last fill was 2 days ago and required 250mL [full beaker]).
So this takes care of the Shift Check Sheet item of addressing the PSL chillers on Thursday.
J. Kissel, K. Izumi, D. Tuyenbayev

As expected from preliminary analysis of the actuation strength change (see LHO aLOG 22558), DARM Open Loop Gain TFs and PCAL to CAL-CS transfer functions reveal minimal change in the calibration of the instrument. For these measurements, all we have changed in the CAL-CS model is switching the two signs in the ESD stage, which in turn cancel each other. As such, the only change should be the change in discrepancy between the canonical model and the actual current strength of the TS / L3 / ESD stage. Again, this change was small (sub ~10%, 5 [deg]), and has been well-tracked by calibration lines, so we will continue forward with focusing our efforts on determining how to apply the time-dependent corrections.

Attached are screenshots of the raw results. Darkhan's working on the more formal results. Stay tuned!

The after-bias-flip results live here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/
    2015-10-15_H1_DARM_OLGTF_7to1200Hz_AfterBiasFlip.xml
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/
    2015-10-15_PCALY2DARMTF_7to1200Hz_AfterBiasFlip.xml
Jeffrey K, Kiwamu, Sudarshan, Darkhan
Some conclusions based on the analysis of the DARM OLG TF and the PCAL to DARM TF measurements taken after flipping the ESD bias sign:
1) Although the total L3 stage actuation function sign did not change, the ESD bias sign flip was reflected in two of the SUSETMY L3 stage model EPICS replicas in the CAL-CS model: H1:CAL-CS_DARM_ANALOG_ETMY_L3_GAIN and H1:CAL-CS_DARM_FE_ETMY_L3_DRIVEALIGN_L2L_GAIN.
Below we list kappas used in the kappa-corrected parameter file for the measurements taken after the ESD bias sign flip on Oct 15. We used 43-minute mean kappas calculated from 60 sec FFTs starting at GPS 1128984060 (values prior to the bias sign flip are given in brackets; previously they were reported in LHO alog comment 22552):
κtst = 1.004472 (1.057984)
κpu = 1.028713 (1.021401)
κA = 1.003843 (1.038892)
κC = 0.985051 (0.989844)
fc = 334.512654 (335.073564) [Hz]
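The averaging described above amounts to a mean over the per-FFT kappa estimates. A minimal sketch with mocked numbers (the real values come from calibration-line transfer functions, not from this toy list):

```python
# Mocked per-FFT kappa_tst estimates (one per 60 s FFT); the real
# pipeline derives each estimate from the calibration lines.
import statistics

kappa_tst_samples = [1.003, 1.005, 1.006, 1.004]

# The reported kappa is simply the mean over the averaging window.
kappa_tst = statistics.mean(kappa_tst_samples)
print(kappa_tst)  # ~1.0045
```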
An updated comparison script and parameter files for the most recent measurements were committed to calibration SVN (r1685):
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/CompareDARMOLGTFs_O1.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128979968.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128979968_kappa_corr.m
Comparison plots were committed to CalSVN (r1684):
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_AfterSignFlip_*.pdf
As a double check we created Matlab file with EPICS values for kappas using H1DARMparams_1128979968.m parameter file and made sure that the values agree with the ones currently written in the EPICS. The logs from calculating these EPICS values was committed to CalSVN (r1682):
/trunk/Runs/O1/H1/Scripts/CAL_EPICS/D20151015_H1_CAL_EPICS_VALUES.m
/trunk/Runs/O1/H1/Scripts/CAL_EPICS/20151015_H1_CAL_EPICS_*
For the record, we changed the requested bias voltage from -9.5 [V_DAC] (negative) to +9.5 [V_DAC] (positive).
J. Kissel

We're taking the IFO out of observation intent and beginning measurement prep for flipping the H1 ETMY bias sign. Stay tuned for detailed aLOGs.
J. Kissel

The DARM OLGTF is complete. Results saved and committed to:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/
    2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKEXC_coh.txt
    2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKEXC_tf.txt
    2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKIN1_coh.txt
    2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKIN1_tf.txt
    2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip.xml
J. Kissel

PCAL to DARM transfer functions are complete. We're going to quickly run an A2L measurement, then open the beam diverters. After that, we'll try to preserve the lock by transitioning over to ETMX for DARM control. However, this will require turning the ETMX HV ESD driver back ON, and we're only 50% confident that the IFO lock can survive that turn-on transient. Wish us luck!

PCAL to DARM results have been committed to:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/
    2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip_A_PCALRX_B_DARMIN1_coh.txt
    2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip_A_PCALRX_B_DARMIN1_tf.txt
    2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip.xml
DARMOLGTF model vs. meas. comparison plots from measurements prior to ESD bias sign flip are attached to this report.
Parameter files for these measurements and updated comparison scripts are committed to calibration SVN (r1673):
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/CompareDARMOLGTFs_O1.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1127083151_kappa_corr.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128956805.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128956805_kappa_corr.m
Plots and a .MAT file with the models are committed to calibration SVN (r1673):
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_bfSignFlip_*.pdf
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_bfSignFlip_DARMOLGTF.mat
In the kappa-corrected parameter file for the measurements taken on Oct 15 we used average kappas over 1 hour, calculated with the SLM tool, starting at GPS 112894560. The values are listed below:
κtst = 1.057984
κpu = 1.021401
κA = 1.038892
κC = 0.989844
fc = 335.073564 [Hz]
For kappas used in parameter files for Sep 10 and Sep 23 see LHO alog comment 22071.
For the record, we changed the requested bias voltage from -9.5 [V_DAC] (negative) to +9.5 [V_DAC] (positive).
Just as Jim had almost relocked the IFO, we had an EPICS freeze in the guardian state RESONANCE. ISC_LOCK had an EPICS connection error.
What is the right thing for the operator to do in this situation?
Are these epics freezes becoming more frequent again?
screenshot attached.
EPICS freezes never fully went away and are normally only a few seconds in duration. This morning's SUS ETMX event lasted 22 seconds, which exceeded Guardian's timeout period. To get the outage duration, I second-trended H1:IOP-SUS_EX_ADC_DT_OUTMON. Outages are on a per-computer basis, not per-model, so I have put the IOP duotone output EPICS channels into the frame as EDCU channels (accessed via Channel Access over the network). When these channels are unavailable, the DAQ sets them to zero.
For this event the time line is (all times UTC)
16:17:22 | DAQ shows EPICS has frozen on SUS EX |
16:17:27 | Guardian attempts connection |
16:17:29 | Guardian reports error, is retrying |
16:17:43 | Guardian times out |
16:17:45 | DAQ shows channel is active again |
The investigation of this problem is ongoing; we could bump up its priority if it becomes a serious IFO operations issue.
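The outage bookkeeping described above (the DAQ writes zeros while a channel is unavailable) can be sketched as a scan for zero runs in the second trend. The data here are mocked, not the actual H1:IOP-SUS_EX_ADC_DT_OUTMON record:

```python
# Sketch: find EPICS outage intervals as runs of zero-valued
# second-trend samples (the DAQ zeros unavailable EDCU channels).
def find_outages(samples, t0=0):
    """Return (start, end) second pairs where samples are zero."""
    outages, start = [], None
    for i, v in enumerate(samples):
        if v == 0 and start is None:
            start = t0 + i
        elif v != 0 and start is not None:
            outages.append((start, t0 + i))
            start = None
    if start is not None:
        outages.append((start, t0 + len(samples)))
    return outages

# Mocked duotone trend: alive 10 s, frozen 22 s, alive again.
trend = [1.0] * 10 + [0.0] * 22 + [1.0] * 5
print(find_outages(trend))  # one 22-second outage: [(10, 32)]
```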
To be clear, it sounds like there was a lockloss during acquisition that was caused by some kind of EPICS drop out. I see how a lockloss could occur during the NOMINAL lock state just from an EPICS drop out. guardian nodes might go into error, but that shouldn't actually affect the fast IFO controls at all.
Sorry, I meant that I can not see how a guardian EPICS dropout could cause a lock loss during the nominal lock state.
The online GDS calibration filters were updated today to fix several bugs and add a few more corrections that were uncovered by the calibration group over the past week. The updates include the following changes:
1) Compensation for known IIR warping by Foton in the inverse sensing filter installed in the CALCS model. See DCC G1501013.
2) Fixed a bug in how the digital response of the AA/AI filters was called (par.C.antialiasing.digital.response.ss -> par.C.antialiasing.digital.response.ssd and par.A.antiimaging.digital.response.ss -> par.A.antiimaging.digital.response.ssd).
3) Additional compensation for OMCDCPD (par.C.omcdcpd.c).
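Regarding item 1 above: Foton designs IIR filters via the bilinear transform, which warps frequencies, so a digital feature placed at f sits at an effective analog frequency of (fs/pi)*tan(pi*f/fs). A small sketch of the size of that effect; the 16384 Hz rate is an assumption for illustration:

```python
import math

fs = 16384.0  # assumed sample rate, for illustration only

def warped_freq(f):
    """Effective analog frequency of a bilinear-transform digital
    filter feature placed at digital frequency f (Hz)."""
    return fs / math.pi * math.tan(math.pi * f / fs)

# The warp is negligible at low frequency and grows toward Nyquist.
for f in (100.0, 1000.0, 7000.0):
    print(f, warped_freq(f))
```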
The new filters were generated using create_partial_td_filters_O1, which is checked into the calibration SVN under aligocalibration/trunk/Runs/O1/Common/MatlabTools/. The new filter file, H1GDS_1128173232.npz, is under aligocalibration/trunk/Runs/O1/GDSFilters/.
Attached are plots comparing the frequency response of the GDS FIR filters to the model frequency response.
There was a request to see how the inverse correction filters for the sensing chain were rolled off at high frequencies in these filters. This is done with a simple smooth roll-off above ~6500 Hz. I've attached plots that zoom in on the 5000-8192 Hz range to show how the rolloff causes the GDS filters to differ from the ideal inverse correction filters for the sensing chain.
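A minimal sketch of the kind of smooth roll-off described above: a half-cosine taper applied to the inverse-filter magnitude between the corner and Nyquist. The ~6500 Hz corner is from the text and the 8192 Hz Nyquist is implied by the plot range; the actual GDS implementation may differ:

```python
import math

f_corner = 6500.0   # roll-off corner quoted in the text (~6500 Hz)
nyq = 8192.0        # Nyquist implied by the 5000-8192 Hz plot range

def rolloff(f):
    """Multiplicative taper: unity below the corner, half-cosine to
    zero at Nyquist (one simple way to realize a smooth roll-off)."""
    if f <= f_corner:
        return 1.0
    return 0.5 * (1.0 + math.cos(math.pi * (f - f_corner) / (nyq - f_corner)))

print(rolloff(5000.0), rolloff(7000.0), rolloff(8192.0))
```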
I apologize for not reloading the guardian with the INP1+PRC1 fix.
The A2L script does not touch the loop gains at all; that is all handled by guardian.