TITLE: 04/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
Wind: 28mph Gusts, 24mph 5min avg
Primary useism: 0.12 μm/s
Secondary useism: 0.57 μm/s
QUICK SUMMARY:
Relocking has been a bit daunting: bad fiber polarization, the SRC acting up, and locklosses occurring at CARM_ON_TR. The wind is coming back down to more reasonable levels, so I may try going back to the nominal WINDY state for the ISI configuration. Stay tuned.
[JimW, Jenne]
While the IFO was down due to high microseism and high wind this afternoon, we have measured the 2 transfer functions needed for MC2's M1 stage to create the L2A decoupling feedforward filter.
For both L2P and P2P measurements, the pre-existing flat L2P decoupling gain of -0.007 was disengaged. So, we started from scratch since we weren't sure about that number.
In order to get good coherence for the L2P measurement, I was driving MC2 hard enough that the IMC was having trouble staying locked. So, I unlocked the IMC and just used the MC2 OSEMs as my pitch witness. Before unlocking the mode cleaner, I saw that I was already getting better coherence with the OSEMs than with MC2 Trans, so I don't think I lost out on anything by doing the measurements with the IMC unlocked.
We fit the FF TF using vectfit, and hand-added a pair of poles at a few Hz to make the high frequency shape roll off rather than being flat. The result looks very similar to what Arnaud and Marie got for the PRM at LLO (LLO alog 32503), which is good since they're the same type of suspension.
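For reference, here is a minimal sketch (not our actual fitting script) of the pole-adding step, using scipy; the placeholder zpk fit and the 3 Hz roll-off frequency are assumptions for illustration, not the values installed in FM3:

# Sketch: append a pair of poles at a few Hz to a fitted zpk model so the
# feedforward filter rolls off at high frequency instead of staying flat.
# The zeros/poles/gain below are placeholders, NOT the installed FM3 values.
import numpy as np
from scipy import signal

f_roll = 3.0                      # assumed roll-off frequency [Hz]
p_extra = -2 * np.pi * f_roll     # real pole in rad/s

# Placeholder fit result (stand-in for the vectfit output)
zeros = [-2 * np.pi * 0.5]
poles = [-2 * np.pi * 1.0, -2 * np.pi * 1.5]
gain = -0.01

# Add the pole pair, and rescale the gain by (2*pi*f_roll)^2 so the
# low-frequency response of the filter is unchanged.
poles_rolled = poles + [p_extra, p_extra]
gain_rolled = gain * (2 * np.pi * f_roll) ** 2

f = np.logspace(-2, 2, 500)
w, h_orig = signal.freqs_zpk(zeros, poles, gain, worN=2 * np.pi * f)
w, h_roll = signal.freqs_zpk(zeros, poles_rolled, gain_rolled, worN=2 * np.pi * f)
print(abs(h_orig[0]), abs(h_roll[0]))   # should agree at low frequency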
The new filter is in the H1:SUS-MC2_M1_DRIVEALIGN_L2P FM3. The IFO has started relocking though, so we are leaving the pre-existing flat L2P decoupling gain in place and our new filter off. When we turn on the new filter, we'll need a gain of unity in the filter bank.
Next commissioning window, I think we're ready to do an on/off test with L2P transfer functions, to show that the filter is actually reducing the length-to-pitch coupling.
Attached are the individual L2P and P2P transfer functions (excitations done with awggui), and the measured and fit TFs that we've installed.
TITLE: 04/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ed
SHIFT SUMMARY:
Locked for a few hours at the beginning of the shift & then high winds (as high as 50 mph!) caused us grief for most of the shift.
LOG:
I have been investigating the seismic system watchdogs. We want to know whether the watchdog thresholds (the amount of OSEM motion that trips the watchdog and shuts down the controls) can be relaxed. As a first step, I found the threshold values for each of the suspension watchdogs. The channels and their corresponding thresholds are listed below.
H1:SUS-ITMX_M0_WD_OSEMAC_RMS_MAX  80000.0
H1:SUS-ITMX_R0_WD_OSEMAC_RMS_MAX  10000.0
H1:SUS-ITMX_L2_WD_OSEMAC_RMS_MAX   8000.0
H1:SUS-ITMX_L1_WD_OSEMAC_RMS_MAX  80000.0
H1:SUS-TMSY_M1_WD_OSEMAC_RMS_MAX   8000.0
H1:SUS-SRM_M1_WD_OSEMAC_RMS_MAX   15000.0
H1:SUS-SRM_M3_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-SRM_M2_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-SR3_M1_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-SR3_M3_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-SR3_M2_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-PR2_M1_WD_OSEMAC_RMS_MAX    8000.0
H1:SUS-PR2_M3_WD_OSEMAC_RMS_MAX    8000.0
H1:SUS-PR2_M2_WD_OSEMAC_RMS_MAX    8000.0
H1:SUS-ETMX_M0_WD_OSEMAC_RMS_MAX  25000.0
H1:SUS-ETMX_R0_WD_OSEMAC_RMS_MAX  25000.0
H1:SUS-ETMX_L2_WD_OSEMAC_RMS_MAX  25000.0
H1:SUS-ETMX_L1_WD_OSEMAC_RMS_MAX  25000.0
H1:SUS-OMC_M1_WD_OSEMAC_RMS_MAX   12000.0
H1:SUS-IM2_M1_WD_OSEMAC_RMS_MAX    8000.0
H1:SUS-OM3_M1_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-ITMY_M0_WD_OSEMAC_RMS_MAX  13000.0
H1:SUS-ITMY_R0_WD_OSEMAC_RMS_MAX  13000.0
H1:SUS-ITMY_L2_WD_OSEMAC_RMS_MAX  13000.0
H1:SUS-ITMY_L1_WD_OSEMAC_RMS_MAX  13000.0
H1:SUS-BS_M1_WD_OSEMAC_RMS_MAX     8000.0
H1:SUS-BS_M2_WD_OSEMAC_RMS_MAX     8000.0
H1:SUS-IM1_M1_WD_OSEMAC_RMS_MAX    8000.0
H1:SUS-MC3_M1_WD_OSEMAC_RMS_MAX   25000.0
H1:SUS-MC3_M3_WD_OSEMAC_RMS_MAX   15000.0
H1:SUS-MC3_M2_WD_OSEMAC_RMS_MAX   15000.0
H1:SUS-RM1_M1_WD_OSEMAC_RMS_MAX    8000.0
H1:SUS-IM4_M1_WD_OSEMAC_RMS_MAX    8000.0
H1:SUS-OM1_M1_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-MC1_M1_WD_OSEMAC_RMS_MAX   25000.0
H1:SUS-MC1_M3_WD_OSEMAC_RMS_MAX   15000.0
H1:SUS-MC1_M2_WD_OSEMAC_RMS_MAX   15000.0
H1:SUS-ETMY_M0_WD_OSEMAC_RMS_MAX  16000.0
H1:SUS-ETMY_R0_WD_OSEMAC_RMS_MAX   8000.0
H1:SUS-ETMY_L2_WD_OSEMAC_RMS_MAX  80000.0
H1:SUS-ETMY_L1_WD_OSEMAC_RMS_MAX   8000.0
H1:SUS-TMSX_M1_WD_OSEMAC_RMS_MAX   8000.0
H1:SUS-SR2_M1_WD_OSEMAC_RMS_MAX   11000.0
H1:SUS-SR2_M3_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-SR2_M2_WD_OSEMAC_RMS_MAX    8000.0
H1:SUS-OM2_M1_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-PR3_M1_WD_OSEMAC_RMS_MAX   15000.0
H1:SUS-PR3_M3_WD_OSEMAC_RMS_MAX   15000.0
H1:SUS-PR3_M2_WD_OSEMAC_RMS_MAX   15000.0
H1:SUS-MC2_M1_WD_OSEMAC_RMS_MAX   18000.0
H1:SUS-MC2_M3_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-MC2_M2_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-RM2_M1_WD_OSEMAC_RMS_MAX   25000.0
H1:SUS-PRM_M1_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-PRM_M3_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-PRM_M2_WD_OSEMAC_RMS_MAX   80000.0
H1:SUS-IM3_M1_WD_OSEMAC_RMS_MAX    8000.0
A link to the corresponding report for Livingston: https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=32886
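For anyone wanting to repeat this survey, here is a sketch of how the thresholds can be read over EPICS with pyepics from a control room workstation; the short optic/stage list is just an example of the channel pattern, not the full set above:

# Sketch: read the OSEMAC RMS watchdog thresholds for a few suspensions
# over EPICS. Requires pyepics and live access to the H1 EPICS network;
# the optic/stage list here is abbreviated for illustration.
from epics import caget

optics_and_stages = [
    ("ITMX", ["M0", "R0", "L1", "L2"]),
    ("MC2",  ["M1", "M2", "M3"]),
    ("PRM",  ["M1", "M2", "M3"]),
]

for optic, stages in optics_and_stages:
    for stage in stages:
        pv = "H1:SUS-{}_{}_WD_OSEMAC_RMS_MAX".format(optic, stage)
        value = caget(pv)            # returns None if the PV is unreachable
        print("{:45s} {}".format(pv, value))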
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 1984 seconds. TC B did not register fill. LLCV set back to 18.0% open. Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 3264 seconds. LLCV set back to 35.0% open.
Raised CP3 to 19% open and CP4 to 38% open.
This is the first run of an actual overfill using the new virtual strip tool system. This completes the CP3/CP4 portion of FRS7782. The ticket has been extended to cover the vacuum pressure strip tool.
Reduced CP4 valve setting to 37% open - looked to be overfilling.
Summary: there seems to be a correlation between computer errors (for instance timing or IPC errors) and one or two types of blip glitches in DARM.
Explanation:
With the help of Dave Barker, over the last few days I have been looking at several FEC_{}_STATE_WORD channels to find times of computer errors. First I used the minute trends to go back 90 days, and then I used the second trends to pin down the times of the errors to an accuracy of a second (going back two weeks). I have been looking for error codes up to 128, where 2=TIM, 4=ADC, 8=DAC, 16=DAQ, 32=IPC, 64=AWG, and 128=DK. Finally, I generated omega scans of GDS-CALIB_STRAIN for the times of errors and found that they often show glitches.
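For bookkeeping, a small sketch of how a STATE_WORD sample can be decoded into the error bits listed above; the example value is made up and the trend reading itself is omitted:

# Sketch: decode an FEC STATE_WORD value into the error bits listed above.
# 2=TIM, 4=ADC, 8=DAC, 16=DAQ, 32=IPC, 64=AWG, 128=DK.
ERROR_BITS = {
    2: "TIM", 4: "ADC", 8: "DAC", 16: "DAQ",
    32: "IPC", 64: "AWG", 128: "DK",
}

def decode_state_word(value):
    """Return the names of the error bits set in a STATE_WORD sample."""
    return [name for bit, name in ERROR_BITS.items() if int(value) & bit]

# Example with a made-up value: TIM and IPC errors set simultaneously.
print(decode_state_word(34))   # -> ['TIM', 'IPC']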
Since there were some times that did not glitch, we are trying to track down how the coupling between the errors and the glitches is happening to see if it can be fixed or if the times can be vetoed. It might not be possible to fix the issue until we go into commissioning time at the end of O2, so DetChar might want to consider creating a flag for these times.
Investigations:
With the times obtained from the second trends, I have created tables of omega scans to see how the errors propagate and where the glitches appear. All those tables can be found here as html files, along with the lists of error times for the last ~two weeks as txt files. So far, the model with the highest ratio of glitch to non-glitch times is susetm(x/y)pi.
I have been looking at some of these glitches, and the loudest ones coincide with small range drops. Also, the three glitches that had a strong residual in the subtraction of the MASTER signal from the NOISEMON signal (see aLog 34694) appear in the minute trends as times when there were computing errors in the h1iopsusey model (I haven't tracked those times down to the second, though).
Between March 24 and April 3, this population of glitches corresponds on average to approximately 10% of the population of blip glitches reported by the low-latency blip hunter (see aLog 34257 for more details on the blip hunter). The maximum percentage obtained is 22.2%, and the minimum is 2.8%.
17:44 Lockloss
The range had been degrading, BUT we have been in the middle of a windstorm, currently riding through sustained 30mph winds with gusts up to 50mph.
Have taken Observatory Mode to WIND!
18:12 With winds in the mid-30s to mid-40s (mph) and useism (0.1-0.3Hz) at about 0.7 μm/s, took ISI CONFIG to VERY_WINDY_NOBRSXY (the guide recommends VERY_WINDY, but that is no longer a state).
Even if we can't lock, we'll stay here during these high winds to give Jim W. some data with these conditions and in this state.
VERY_WINDY_NOBRSXY didn't look like it improved the situation (Sheila had a strip tool up while we were LOCKING_ARMS_GREEN & it looked noticeably worse). So we collected ~15min of data for Jim in this state as the winds blow.
18:27-18:57 MORE_WINDY state
18:58 Back to WINDY, but the ETMy BRS velocity trace (middle right on the medm) has a RED box around it, signifying that DAMPING is ON. This is just because we have so much wind at EY.
At any rate, Jim W. is here & working through various ISI states while we still have high winds.
We have full gradient field data from the ITMX HWS archived back to March-2016. In order to see the thermal lens, we need coincident times between low noise HWS operation and a relatively high power lock-acquisition or lock-loss of the IFO.
By mining the archived data, I can see evidence for the point source going back to 12-March-2016 [the gradient field is relatively noisy]. Unfortunately, there is no archived data earlier than this time.
TITLE: 04/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
Wind: 21mph Gusts, 15mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.61 μm/s
Looks like there was a bit of a wind storm starting at about 5am PDT (13:00 UTC), with a few minutes of steady winds of 45-50mph about 30 minutes before the H1 lockloss.
QUICK SUMMARY:
TITLE: 04/07 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SUMMARY: Locked and Observing until the last hour or so, then PI mode 19 SDF diffs and a lockloss. I could not get ALSX to relock and was unable to bring up a camera to see how it was misaligned. The normalized power was ~0.5 and moving ETMX could not improve it. Not being able to see what I was doing made it basically impossible. Corey came in early and Richard went out to see what he could do as well.
A few minutes after I fixed the SDF diff we dropped lock; not sure of the cause yet.
PI mode 19 started to ring up a bit, causing the SUS_PI node to change some of the PLL filters and creating an SDF diff. Mode 19 is not one of the modes that we regularly damp while at 30W, but it may have been excited by the wind. I went to clear the SDF diff, but just clicking the unmonitor button would not work; there was an exclamation point next to "MON", and I'm not quite sure whether that was supposed to pop up a screen that didn't work remotely, or something else. I managed to select unmonitor-all, with only that channel having an FM3 diff, and got it to accept it that way. I'm worried that this may have unmonitored the entire filter bank, and that there was another screen I couldn't see that would have let me choose which parts of the filter bank to monitor.
We are back to Observing and there is a screenshot below of the SDF diff.
I brought this back to how it was before I unmonitored the entire bank, but I unmonitored the FMs since the PI Guardian can change them. I was correct that I was supposed to get another screen for the filter bank when clicking the !MON button, but I could not get it remotely.
TITLE: 04/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Commissioning changes made staying locked difficult, but sorted now. TJ attempting to do his shift remotely (otherwise he's "out sick")
LOG:
3:30 UTC Noticed range was falling off & ASC was ringing up
4:00 UTC Lockloss. Started trying to detangle the commissioning changes from earlier; got locked again, but the range deteriorated again and ASC rang up again. Sheila helped me figure out that her CSOFT changes hadn't been added to the ISC_LOCK guardian, so CSOFT currently still has to be turned up by hand. We should sort this out tomorrow.
Just to clarify what happened:
The ITMY oplev has been going bad, and it got bad enough to start causing locklosses during the commissioning window yesterday, so we spent some of our commissioning time doing corrective maintenance to get rid of the oplev damping. Since my change to the CSOFT gain got overwritten in the guardian (not sure why), it didn't come on with the same gain in the next lock. This meant that we had the radiation pressure instability that the oplev damping was originally intended to suppress, which is different from the 0.44 Hz oscillations we've been having for the past few weeks, which are actually caused by the oplev.
So the original cause of the trouble was not commissioning, but the oplev going bad.
J. Kissel
Gathered regular bi-weekly calibration / sensing function measurements. Preliminary results (screenshots) attached; analysis to come. The data have been saved and committed to:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs
2017-03-21_H1DARM_OLGTF_4to1200Hz_25min.xml
2017-03-21_H1_PCAL2DARMTF_4to1200Hz_8min.xml
2017-03-06_H1_PCAL2DARMTF_BB_5to1000Hz_0p25BW_250avgs_5min.xml
J. Kissel
After processing the above measurement, the fit optical plant parameters are as follows:
DARM_IN1/OMC_DCPD_SUM [ct/mA]     2.925e-7
Optical Gain [ct/m]               1.110e6 (+/- 1.6e3)
Optical Gain [mA/pm]              3.795 (+/- 0.0053)
Coupled Cavity Pole Freq [Hz]     355.1 (+/- 2.6)
Residual Sensing Delay [us]       1.189 (+/- 1.7)
SRC Detuning Spring Freq [Hz]     6.49 (+/- 0.06)
SRC Detuning Quality Factor [ ]   25.9336 (+/- 6.39)
Attached are plots of the fit, and of how these parameters sit within the context of all measurements from O2. In addition, given that the spread of the detuning spring frequency over the course of the run is between, say, 6.5 Hz and 9 Hz, I show the magnitude ratio of two toy transfer functions whose only difference is the spring frequency. One can see that, if not compensated for, this means a systematic magnitude error of 5%, 10%, and 27% at 30, 20, and 10 Hz, respectively. Bad news for black holes! We definitely need to track this time dependence, as was prototyped in LHO aLOG 35041.
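As a rough illustration of the toy comparison described above, here is a sketch that evaluates two simplified sensing functions (a single coupled-cavity pole times a detuning spring term) differing only in the spring frequency; this is an assumed toy model, not the production DARM model, so it will not reproduce the attached plots exactly:

# Sketch: magnitude ratio of two toy sensing functions that differ only in
# the SRC detuning spring frequency, to illustrate the low-frequency
# systematic. Simplified model for illustration only.
import numpy as np

def toy_sensing(f, f_cc=355.1, f_s=6.49, q_s=25.9):
    """Toy sensing function: coupled-cavity pole times detuning spring term."""
    pole = 1.0 / (1.0 + 1j * f / f_cc)
    spring = f**2 / (f**2 + f_s**2 - 1j * f * f_s / q_s)
    return pole * spring

f = np.array([10.0, 20.0, 30.0])
ratio = np.abs(toy_sensing(f, f_s=6.5) / toy_sensing(f, f_s=9.0))
for fi, ri in zip(f, ratio):
    print("{:5.1f} Hz: |C(fs=6.5 Hz) / C(fs=9 Hz)| = {:.3f}".format(fi, ri))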
Attached are plots comparing the sensing and response functions with and without the detuning frequency. Compared to LLO (aLog 32930), at LHO the detuning frequency of ~7 Hz has a significant effect on the calibration around 20 Hz (see the response function plot). The code used to make these plots has been added to the SVN:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/SRCDetuning/springFreqEffect.m
Attached are plots showing the differences in the sensing and response functions for spring frequencies of 6 Hz and 9 Hz. Coincidentally, they are very similar to the plots in the previous comment, which show the differences when the spring frequencies are 0 Hz and 6.91 Hz.
I've been looking to see if LHO needs to pursue better L2A decoupling in the corner station suspensions to improve our wind and earthquake robustness. The good news is that I had to look for a while to find a candidate, but I know better what to look for now, so I'll see what else I can find. Looking at a couple of recent earthquakes, I noticed that we seemed to lose lock when the IM4 TRANS QPD pitch hit a threshold of -0.6. After talking to Jenne about it, we looked at other QPDs close by, and it was immediately obvious that the MC2 trans QPD pitch was being driven by the MC2 M1 length drive. The attached plot shows the story.
Both plots are time series for an earthquake on March 27 of this year, where we lost lock at around GPS 1174648460. The top plot shows MC2_TRANS_PIT_INMON, MC2_M1_DRIVEALIGN_L_OUTMON, and MC2_TRANS_SUM_OUT16. The bottom plot is the ITMY STS in the Y direction. The first 600 seconds are before the earthquake arrives and are quiet. The spike in the STS at about 700 seconds is the arrival of the P waves. This causes the MC2 sus to move more, but the MC2 trans sum isn't affected much. At about 900 seconds the R waves arrive and MC2 starts moving more and more, moving the spot on the QPD and driving down the QPD sum. I've looked at the other PDs used for ASC, and only IM4 trans and MC2 trans seem to move this much during an earthquake.
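For anyone who wants to reproduce this comparison, here is a sketch using gwpy to fetch the same signals around the earthquake; the full channel prefixes and the ground STS channel name are assumptions on my part, and the GPS window is approximate:

# Sketch: fetch the MC2 trans / drive monitors and the ITMY ground STS around
# the March 27 earthquake. Requires gwpy and access to LIGO NDS/frame data;
# the channel prefixes and the STS channel name are assumed, and the monitor
# channels are slow EPICS signals, so trend-rate data may be all that exists.
from gwpy.timeseries import TimeSeriesDict

start = 1174647400        # roughly 1000 s before the quoted lock loss (approx.)
end = 1174648600

channels = [
    "H1:IMC-MC2_TRANS_PIT_INMON",          # assumed prefix
    "H1:SUS-MC2_M1_DRIVEALIGN_L_OUTMON",
    "H1:IMC-MC2_TRANS_SUM_OUT16",          # assumed prefix
    "H1:ISI-GND_STS_ITMY_Y_DQ",            # assumed name for the ITMY ground STS (Y)
]

data = TimeSeriesDict.get(channels, start, end, verbose=True)
for name, ts in data.items():
    print(name, ts.times[0], ts.value.max())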
[Vaishali, JimW, Jenne]
We started looking at transfer functions yesterday to do the length-to-angle decoupling, but I misread Jim's plot and focused on the lowest (M3) stage, rather than the low-frequency top stage.
Anyhow, hopefully we can take some passive TFs over the next few days (especially now, with the >90%ile useism and >98%ile wind), and have a decoupling filter ready for the next commissioning window.
Successful locking at WINDY ISI config. Gonna stick with what works.