POP_A_LF and AS_AIR RF90 aren't looking good. While I was wondering what was going on, I noticed differences in the SDF: the ASC INP and PRC1 pitch and yaw gains were 0 (setpoint = 1). Was this a leftover from the a2l script error? I reverted it, accepted the changes, and hoped for the best. No luck though: the IFO just lost lock at NOMINAL LOW NOISE. CSOFT was running away.
Seeing how the a2l script left us with problems tonight, I WILL NOT RUN IT until I know that it's safe to run.
Lost lock at ENGAGE ASC PART3 twice in a row. I've attached the log and the lockloss plots of both locklosses (and the ASC Striptool from the recent lockloss). Sorry about the terrible screenshots of the log (only showing the DOWN state).
I called Kiwamu to make sure the ASC loop gains were correct. It turned out the INP and PRC1 (pitch and yaw) gains were 0 when they're supposed to be 1. I put in the gains by hand before I realized that PRC1 had no ramp time at all. There was a spike from PRC1 yaw, and the IFO lost lock shortly after. So I gave PRC1 a ramp time of 10 s.
Nutsinee, Sheila, Kiwamu,
Nutsinee had a number (5-ish) of lock losses in ENGAGE_ASC_PART3. We blame TMSY for these lock losses.
Nutsinee went through the initial alignment, and it corrected TMSY in pitch by 2 urad. See the attached screenshot showing a 2-day trend of the TMS angles. The large step shown in the middle of the screenshot must be the temporary correction that Corey introduced last night (alog 22535). You can see that the TMSY angle is now back to where it was 2 days ago. Tonight, the symptom was that when we engaged the SOFT loops, something started running away and decreasing the cavity power everywhere. We think that TMSY was pulling the SOFT loops to some kind of bad alignment point due to the different TMSY angle. It had nothing to do with the 20 dB boost in the SOFT loops (for example, alog 21587), which we suspected at the very beginning of the investigation. After the initial alignment, everything seems to be going as smoothly as usual, and we made it back to low noise (alog 22579). We are happy.
In addition, there was another issue with INP1 and PRC1 where they were not engaged in ENGAGE_ASC_PART1 and PART2. This was fixed by reloading the ISC_LOCK guardian, since it had been edited the previous evening but had not been reloaded since.
TITLE: 10/15 EVE Shift: 23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Currently in Lock Acquisition
Incoming Operator: Nutsinee
Support: Jenne on phone about A2L script
Quick Summary: Two mysterious locklosses toward the end of the shift (see specific entries).
Shift Activities:
H1 dropped out at 6:15 for unknown reasons (seismically quiet & Strip Tools looked fine).
Attempt #1: Guardian was stuck at Check IR, so I moved the ALS PLL DIFF OFFSET slider until the Y-arm power started flashing; then Guardian continued. Engage ASC Part3 seems to be a bit of a time sink here (it took about 6 min for the control signals to take over). Had a lockloss during the Reduce Modulation Depth step.
Attempt #2: Nutsinee was in early for her shift so I handed over the ifo to her. She mentioned watching Jenne run the A2L script, so she will give it a try during her shift.
I also asked Nutsinee to take a look at the previous locklosses since she has experience running lockloss tools.
I took a quick look at this particular lockloss, and it seems like PRC2 P was growing prior to the lockloss. The zoomed-in version of the same plot doesn't tell me much, except that DHARD might have been glitchy prior to the lockloss. Looking at the LSC channel list, it seems like POPAIR RF90 and ASAIR_A_LF glitched right before the lockloss. The witness channel list tells me that SR3 is likely responsible for the glitch.
I don't really have a conclusion here. Just reporting what I observed.
H1 had a lockloss at 4:10 UTC with no obvious culprit (nothing seismic, but I forgot to look at the strip tools for anything obvious).
Commissioning Mode for A2L: Once in Nominal Low Noise, I ran the A2L script, but unfortunately it had errors and left the SDF with diffs (after consulting with Jenne I was able to revert the SDF). So I spent about another 22 min running & troubleshooting A2L recovery. Will send Jenne the errors from the A2L script session.
Then went to Observation Mode at 5:10utc.
Operators: Do not run the A2L script until Jenne troubleshoots it. (We had "Permission denied" errors, so maybe there's an issue with running the script while logged in as ops?)
Will put this "pause" in running A2L in the new Ops Sticky Note page.
I was able to run the a2l script on my account. The script cleared all the SDF diffs after it was done, as Jenne advertised. All is well.
For lack of a better place to write this, I'm leaving it as a comment to this thread.
The problem was that the results directory wasn't write-able by the Ops accounts, although it was by personal accounts. I've chmod-ed the directory, so the A2L script should run no matter who you are signed in as.
Please continue running it (instructions) just before going to Observe, or when dropping out of Observe (i.e. for Maintenance).
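The permissions fix can be sketched as below; the directory path here is an illustrative stand-in, not the actual A2L results directory:

```shell
# Stand-in path for illustration; the real results directory lives elsewhere.
dir=/tmp/a2l_results_demo
mkdir -p "$dir"
chmod a+w "$dir"                          # writable by any account, ops included
touch "$dir/a2l_out.txt" && echo writable # the kind of write that failed as ops
```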
I implemented an automatic restart system using monit for the GRB alert code (see LLO entry 21671). For unknown reasons, this new method of running the script (as a detached process from an init script, not in a terminal window) also made all the GraceDB comm errors go away. Implementation details are posted in that log entry should LHO wish to follow.
Topped off to the Max level. Required 125mL. (last fill was 2 days ago and required 250mL [full beaker]).
So this takes care of the Shift Check Sheet item of addressing the PSL chillers on Thursday.
TITLE: 10/15 EVE Shift: 23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Taken to Observation Mode at 22:55 by Jim with a range of 75Mpc. Current lock going on 2.5hrs.
Outgoing Operator: Jim
Support: Occupied Control Room, Kiwamu is On-Call if needed
Quick Summary:
Useism has clearly been getting quieter over the last 10 hrs. Winds under 15 mph. Smooth observational sailing!
J. Kissel, K. Izumi, D. Tuyenbayev
As expected from the preliminary analysis of the actuation strength change (see LHO aLOG 22558), DARM open loop gain TFs and PCAL to CAL-CS transfer functions reveal minimal change in the calibration of the instrument. For these measurements, all we have changed in the CAL-CS model is switching the two signs in the ESD stage, which cancel each other. As such, the only change should be the change in discrepancy between the canonical model and the actual current strength of the TS / L3 / ESD stage. Again, this change was small (sub ~10%, 5 [deg]) and has been well tracked by calibration lines, so we will continue forward with focusing our efforts on determining how to apply the time-dependent corrections. Attached are screenshots of the raw results. Darkhan is working on the more formal results. Stay tuned!
The after-bias-flip results live here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/2015-10-15_H1_DARM_OLGTF_7to1200Hz_AfterBiasFlip.xml
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/2015-10-15_PCALY2DARMTF_7to1200Hz_AfterBiasFlip.xml
Jeffrey K, Kiwamu, Sudarshan, Darkhan
Some conclusions based on the analysis of the DARM OLG TF and the PCAL to DARM TF measurements taken after flipping the ESD bias sign:
Although the total L3 stage actuation function sign did not change, the ESD bias sign flip was reflected in two of the SUSETMY L3 stage model EPICS replicas in the CAL-CS model: H1:CAL-CS_DARM_ANALOG_ETMY_L3_GAIN and H1:CAL-CS_DARM_FE_ETMY_L3_DRIVEALIGN_L2L_GAIN.
Below we list the kappas used in the kappa-corrected parameter file for the measurements taken after the ESD bias sign flip on Oct 15. We used 43-minute mean kappas calculated from 60 sec FFTs starting at GPS 1128984060 (values prior to the bias sign flip are given in parentheses; previously they were reported in LHO alog comment 22552):
κtst = 1.004472 (1.057984)
κpu = 1.028713 (1.021401)
κA = 1.003843 (1.038892)
κC = 0.985051 (0.989844)
fc = 334.512654 (335.073564) [Hz]
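As a quick sanity check, the fractional change in each kappa across the bias flip can be computed directly from the values listed above (a sketch; the largest change, in κtst, comes out to about -5%):

```python
# Fractional change in each kappa across the ESD bias sign flip,
# computed from the after (before) values listed above.
after  = {"k_tst": 1.004472, "k_pu": 1.028713, "k_A": 1.003843, "k_C": 0.985051}
before = {"k_tst": 1.057984, "k_pu": 1.021401, "k_A": 1.038892, "k_C": 0.989844}
for name, a in after.items():
    b = before[name]
    print(f"{name}: {(a - b) / b:+.2%}")
```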
An updated comparison script and parameter files for the most recent measurements were committed to calibration SVN (r1685):
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/CompareDARMOLGTFs_O1.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128979968.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128979968_kappa_corr.m
Comparison plots were committed to CalSVN (r1684):
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_AfterSignFlip_*.pdf
As a double check, we created a Matlab file with EPICS values for the kappas using the H1DARMparams_1128979968.m parameter file and made sure that the values agree with the ones currently written in the EPICS. The logs from calculating these EPICS values were committed to CalSVN (r1682):
/trunk/Runs/O1/H1/Scripts/CAL_EPICS/D20151015_H1_CAL_EPICS_VALUES.m
/trunk/Runs/O1/H1/Scripts/CAL_EPICS/20151015_H1_CAL_EPICS_*
For the record, we changed the requested bias voltage from -9.5 [V_DAC] (negative) to +9.5 [V_DAC] (positive).
TITLE: 10/14 EVE Shift: 23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Low noise, undisturbed
Support: Usual CR crowd
Quick Summary: Most of the day spent doing commissioning work, switching bias on ETMY ESD
Shift Activities:
This is actually 10/15 & Jim's DAY shift. :)
We have seen a slow change in optical gain at the beginning of each lock stretch at LHO. Jenne has a nice alog (LHO alog 22271) about this varying optical gain. Here, I am trying to use the calibration parameter kappa_C, which is a measure of the change in optical gain from its nominal value of 1. For this analysis, I take the data with Guardian state vector greater than 600 (Nominal Low Noise) and plot kappa_C from the beginning of each lock stretch to 2 hours into the lock.
The attached plot has a total of nine segments starting from Oct 03, 2015. There were a total of 12 viable segments, but I excluded 3 that were flatter at the beginning. I'm not sure why these are flatter than the others, but excluding those stretches helps visualize the time constant of the optical gain better.
The estimated time for the optical gain to reach its nominal value after the IFO is locked is about 30 minutes, as seen in the attached plot.
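A minimal sketch of how such a settling time could be extracted, assuming a simple exponential approach of kappa_C to 1 (the data below is synthetic, not the real kappa_C channel):

```python
import numpy as np

# Assume kappa_C(t) = 1 - (1 - k0) * exp(-t / tau); then
# log(1 - kappa_C) is linear in t with slope -1/tau.
t = np.linspace(0.5, 60.0, 120)       # minutes into the lock stretch
k = 1 - 0.03 * np.exp(-t / 10.0)      # synthetic stretch with tau = 10 min

slope, _ = np.polyfit(t, np.log(1 - k), 1)
tau = -1.0 / slope
print(f"tau ~ {tau:.1f} min; settled (~3*tau) after about {3*tau:.0f} min")
```

With a tau of ~10 min, the gain is within a few percent of nominal after roughly 3 time constants, consistent with the ~30 minute scale quoted above.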
J. Kissel
We're taking the IFO out of observation intent and beginning measurement prep for flipping the H1 ETMY bias sign. Stay tuned for detailed aLOGs.
J. Kissel
DARM OLGTF is complete. Results saved and committed to /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/:
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKEXC_coh.txt
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKEXC_tf.txt
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKIN1_coh.txt
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip_A_ETMYL3LOCKIN2_B_ETMYL3LOCKIN1_tf.txt
2015-10-15_H1_DARM_OLGTF_7to1200Hz_BeforeBiasFlip.xml
J. Kissel
PCAL to DARM transfer functions complete. We're going to quickly run an A2L measurement, then open the beam diverters. After that, we'll try to preserve the lock by transitioning over to ETMX for DARM control. However, this will require turning the ETMX HV ESD driver back ON, and we're only 50% confident that the IFO lock can survive that turn-on transient. Wish us luck!
PCAL 2 DARM results have been committed to /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/:
2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip_A_PCALRX_B_DARMIN1_coh.txt
2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip_A_PCALRX_B_DARMIN1_tf.txt
2015-10-15_PCALY2DARMTF_7to1200Hz_BeforeBiasFlip.xml
DARMOLGTF model vs. meas. comparison plots from measurements prior to ESD bias sign flip are attached to this report.
Parameter files for these measurements and updated comparison scripts are committed to calibration SVN (r1673):
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/CompareDARMOLGTFs_O1.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1127083151_kappa_corr.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128956805.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128956805_kappa_corr.m
Plots and a .MAT file with the models are committed to calibration SVN (r1673):
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_bfSignFlip_*.pdf
/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_bfSignFlip_DARMOLGTF.mat
In the kappa-corrected parameter file for the measurements taken on Oct 15, we used kappas averaged over 1 hour, calculated with the SLM tool starting at GPS 112894560. The values are listed below:
κtst = 1.057984
κpu = 1.021401
κA = 1.038892
κC = 0.989844
fc = 335.073564 [Hz]
For kappas used in parameter files for Sep 10 and Sep 23 see LHO alog comment 22071.
For the record, we changed the requested bias voltage from -9.5 [V_DAC] (negative) to +9.5 [V_DAC] (positive).
Tamper injections showed some upconversion from the tens-of-Hz region into the region above 60 Hz. The HVAC makes noise in this region, so I did the test I had done in iLIGO: I shut down all turbines and chiller pad equipment on the entire site. This increased the range by almost 5 Mpc (see figure; the 3 range peaks are during the shutoff periods listed below).
Checks:
1) make sure all VFDs are running at 45 or less
2) if possible use only 2 turbines for the LVEA
We did not drop out of science mode but here are the times of the changes (Oct. 15 UTC):
2:05:00 shutdown started, 2:08:00 shutdown complete
2:18:00 startup started, 2:21:30 startup complete
2:31:00 shutdown started, 2:37:00 shutdown complete
2:47:00 startup started, 2:51:00 startup completed
3:01:00 shutdown started, 3:03:30 shutdown complete
3:13:30 startup started, 3:17:00 startup complete
Here is a comparison of the calibrated DARM spectrum from times when the HVAC was ON and OFF, in the frequency band that was affected.
I plotted glitchgrams and trigger rates during this time. The test doesn't seem to have made a noticeable change.
https://ldas-jobs.ligo-wa.caltech.edu/~jordan.palamos/detchar/HVAC/glitchgram_HVAC_1128909617.png
https://ldas-jobs.ligo-wa.caltech.edu/~jordan.palamos/detchar/HVAC/rate_HVAC_1128909617.png
Attached are ASDs of DARM and one of the PEM seismometer channels (corner station Z axis) for all of the times when the HVAC was turned on and off (not including the times of transition). In general, the noise level between 40-100 Hz is lower during the times when the HVAC was off. The peak around 75 Hz was better during the second two off times, but not in the first segment (1128910297 to 1128910697).
More PEM seismometer channels are here: https://ldas-jobs.ligo-wa.caltech.edu/~marissa.walker/O1/Oct15HVACtest/
(Note: the seismometer calibration from pem.ligo.org is only valid from 0-20 Hz.)
As Jim had almost relocked the IFO, we had an EPICS freeze in the Guardian state RESONANCE. ISC_LOCK had an EPICS connection error.
What is the right thing for the operator to do in this situation?
Are these EPICS freezes becoming more frequent again?
screenshot attached.
EPICS freezes never fully went away and are normally only a few seconds in duration. This morning's SUS ETMX event lasted 22 seconds, which exceeded Guardian's timeout period. To get the outage duration, I second-trended H1:IOP-SUS_EX_ADC_DT_OUTMON. Outages happen on a per-computer basis, not a per-model basis, so I have put the IOP duotone output EPICS channels into the frame as EDCU channels (accessed via Channel Access over the network). When these channels are unavailable, the DAQ sets them to zero.
For this event the time line is (all times UTC)
16:17:22 | DAQ shows EPICS has frozen on SUS EX |
16:17:27 | Guardian attempts connection |
16:17:29 | Guardian reports error, is retrying |
16:17:43 | Guardian times out |
16:17:45 | DAQ shows channel is active again |
The investigation of this problem is ongoing, we could bump up the priority if it becomes a serious IFO operations issue.
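For reference, pulling outage durations out of a second trend like H1:IOP-SUS_EX_ADC_DT_OUTMON (where the DAQ writes 0 while EPICS is unreachable) could be sketched as below; the trend data here is a synthetic stand-in, not real DAQ output:

```python
# Sketch: find EPICS outage spans in a 1-sample-per-second trend, where the
# DAQ has written exactly 0 for each second the channel was unreachable.
def outage_spans(samples, t0=0):
    """Return (start, end) second pairs covering each run of zeros."""
    spans, start = [], None
    for i, v in enumerate(samples):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            spans.append((t0 + start, t0 + i))
            start = None
    if start is not None:
        spans.append((t0 + start, t0 + len(samples)))
    return spans

trend = [1] * 5 + [0] * 22 + [1] * 5     # a 22-second dropout, as in this event
print(outage_spans(trend))               # one span, 22 seconds long
```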
To be clear, it sounds like there was a lockloss during acquisition that was caused by some kind of EPICS dropout. I cannot see how a lockloss could occur during the NOMINAL lock state just from an EPICS dropout: Guardian nodes might go into error, but that shouldn't actually affect the fast IFO controls at all.
The online GDS calibration filters were updated today to fix several bugs and add a few more corrections that were uncovered by the calibration group over the past week. The updates include the following changes:
1) Compensation for known IIR warping by Foton in the inverse sensing filter installed in the CALCS model. See DCC G1501013.
2) Fixed a bug in how the digital response of the AA/AI filters was called (par.C.antialiasing.digital.response.ss -> par.C.antialiasing.digital.response.ssd and par.A.antiimaging.digital.response.ss -> par.A.antiimaging.digital.response.ssd).
3) Additional compensation for the OMC DC PD (par.C.omcdcpd.c).
The new filters were generated using create_partial_td_filters_O1, which is checked into the calibration SVN under aligocalibration/trunk/Runs/O1/Common/MatlabTools/. The new filter file, H1GDS_1128173232.npz, is under aligocalibration/trunk/Runs/O1/GDSFilters/.
Attached are plots comparing the frequency response of the GDS FIR filters to the model frequency response.
There was a request to see how the inverse correction filters for the sensing chain were rolled off at high frequencies in these filters. This is done with a simple smooth roll-off above ~6500 Hz. I've attached plots that zoom in on the 5000-8192 Hz range to show how the rolloff causes the GDS filters to differ from the ideal inverse correction filters for the sensing chain.
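One simple way to realize such a smooth roll-off above ~6500 Hz is a half-cosine taper that takes the response to zero at Nyquist; the sketch below is illustrative only (the filter shape and grid are stand-ins, not the actual GDS filter code):

```python
import numpy as np

nyq = 8192.0
f = np.linspace(0.0, nyq, 4097)          # 2 Hz frequency grid up to Nyquist
inverse = 1.0 + (f / 1000.0) ** 2        # stand-in rising inverse-filter response

# Half-cosine taper: unity below 6500 Hz, rolling smoothly to zero at Nyquist.
taper = np.ones_like(f)
band = f >= 6500.0
taper[band] = 0.5 * (1.0 + np.cos(np.pi * (f[band] - 6500.0) / (nyq - 6500.0)))
rolled = inverse * taper                 # matches 'inverse' only below 6500 Hz
```

This is why the GDS filters differ from the ideal inverse correction only in the 6500 Hz-to-Nyquist band shown in the attached zoomed plots.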