TITLE: 10/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 58.0319Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 7mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.46 μm/s
QUICK SUMMARY:
Currently have a lock approaching 2 hrs (started at around 73Mpc, but was slowly trending down, and in the last few minutes took a step down to 55Mpc).
Robert is here and has headed out to the end stations for injection work (the BRS was switched to the VERY_WINDY_NOBRSXY state). Switching Observatory Mode to: UNKNOWN (since I don't see an "injection" option).
Cheryl suggested keeping TCSx at 0.4 and moving TCSy from 0.0 to 0.2 for the next change.
State of H1: Locked in NLN, TCSX at 0.4W, TCSY at 0W
Incoming Operator: Corey
On Site: Robert
Possible issues that might affect locking / locking stability:
The BS optical lever is glitching in sum and yaw, and glitching enough that it might have killed a lock already.
It might have broken the lock 2.5 hours ago, and could be a problem today.
While the BS oplev can certainly cause trouble during lock acquisition, once we get to Engage_DRMI_ASC that oplev's feedback is turned off, so it is unlikely that it caused a lockloss from any state after DRMI is locked.
State of H1: locked in NLN
Activities / Changes
It looks like about 20 minutes after the increase in TCSY from 0 to 0.4W the range started to drop and continued until lock loss. Plot attached.
I've turned the pre-heating off on TCSY; if anything the optic is overheated and may need to cool before relocking is stable.
Set to 2 seconds and saved in SDF.
11:23 UTC: changed TCSY from 0W to 0.4W; TCSX was at 0.2W and remains there. Next change in about 90 minutes.
State of H1: Nominal Low Noise
Activities that will continue:
I did a few more things tonight
Now that we have the DBB plugged in again, we can take another look at the coherences of the QPDs. The screenshot shows coherences for DARM and SRCL with the 200W beam (opening the 35W shutter unlocks the IFO).
We have small coherences with DARM just below 1kHz and around 4.2kHz, but not otherwise. SRCL does have more coherence with one of the QPDs below 100Hz.
I think that the nominal setting of the CAL-PINJX_TRANSIENT filter bank gain is supposed to be zero. When a transient injection is imminent, then the gain is ramped to 1.0, the injection happens, and then the gain is ramped to zero. However, the SDF safe set point is 1.0. Therefore, I am setting the gain to 0 and accepting the change in the SDF safe file.
The existing code in the INJ_TRANS Guardian script does indeed do this.
If guardian is controlling the gain, perhaps SDF shouldn't be monitoring it.
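For reference, a minimal sketch of the ramp-up / inject / ramp-down sequence described above, assuming a guardian-style ezca interface; the channel name, ramp time, and helper names are illustrative only, and this is not the actual INJ_TRANS code:

# Illustrative sketch of the transient-injection gain sequence described above.
# Assumes a guardian-style ezca object; channel name and ramp time are guesses.
import time

GAIN_CHANNEL = 'CAL-PINJX_TRANSIENT_GAIN'   # filter-bank gain, nominally 0
RAMP_SECONDS = 5                            # illustrative ramp/settle time

def run_transient_injection(ezca, do_injection):
    """Ramp the gain 0 -> 1, perform the injection, then ramp back to 0."""
    ezca[GAIN_CHANNEL] = 1.0    # open the path so the injection reaches the actuator
    time.sleep(RAMP_SECONDS)    # let the gain ramp finish
    do_injection()              # the transient waveform is injected here
    ezca[GAIN_CHANNEL] = 0.0    # return to the nominal (SDF safe) value of zero
    time.sleep(RAMP_SECONDS)

Ending at zero matches the safe setting accepted in SDF above.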
Jeff B, Cheryl, Travis and Kiwamu,
In the past two days, Jeff, Cheryl and Travis performed a random walk on the CO2 settings for me (30920 and comments therein). I don't see a significant change so far. In fact, it might have made the jitter peaks slightly worse.
I now ask the operators to perform differential scans instead (e.g. raising only one CO2 at a time).
The motivation was to see if we can exert any kind of effect on the jitter peaks in 200-1000Hz by changing the CO2 settings. Because TCS measurements usually take a long time, I have asked the operators to do some random walk on the TCS settings when possible. So far we have done a common scan (i.e. raising both CO2 powers simultaneously) and I don't see a big change in the jitter peaks in 200-1000 Hz, in particular the ones at 285, 365 and 620 Hz. The attached shows DARM spectra with different CO2 settings.
These curves correspond to the following times.
As you can see, the ambient noise (which most of the time appears to be shot noise) varies from curve to curve because some of them overlapped with Robert's active injection tests, but this is not what I am looking for. Among the 6 noise curves, the best jitter noise was obtained from 27/10/2016 9:40:00 UTC, which is actually before the series of CO2 tests started. So it is possible that the common CO2 scan may have made the jitter peaks slightly worse. We should do a differential scan next.
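As a concrete illustration of the difference between the two scan types (the power values below are placeholders, not a prescription):

# Illustrative only: CO2 (TCSX, TCSY) power settings for a common scan
# (both raised together) versus a differential scan (one side at a time).
start_x, start_y = 0.0, 0.0
steps = [0.2, 0.4]

common_scan = [(start_x + s, start_y + s) for s in steps]
# -> [(0.2, 0.2), (0.4, 0.4)]  both powers move together

differential_scan = ([(start_x + s, start_y) for s in steps] +
                     [(start_x, start_y + s) for s in steps])
# -> [(0.2, 0.0), (0.4, 0.0), (0.0, 0.2), (0.0, 0.4)]  only one side moves per setting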
J. Kissel, B. Weaver
Betsy grabbed charge measurements yesterday. I've processed them. The charge is still right around 0 [V] effective bias -- we're ready for regular bias flipping.
Summary- the sensing sign in the online calibration for SRCL has been wrong.
This has been causing overestimated noise in 10 - 100 Hz in the past years(!). My bad. This is now fixed.
Details- A week or two ago, Daniel and Stefan told me that changing the shape of the digital filter in SRCL affected the calibrated SRCL displacement spectrum. This made me suspect that something was wrong in the online calibration (a.k.a. CALCS). Today I re-measured the SRCL open loop gain in nominal low noise with an input power of 25 W; a plot of the open loop is attached to this entry. The absolute value of the SRCL sensing was found to be the same. However, the measurement indicated that the sensing gain should be a negative number (dP/dL < 0, i.e. smaller counts as the SRC length expands). This contradicted what we have had in CALCS, where the sensing was set to positive. This is very similar to what we had in the online DARM calibration (29860), but this time SRCL has been wrong for years.
Flipping the sensing gain in CALCS (to match the sign of the measurement) decreased the noise level in the online monitor in the 10 - 100 Hz band (by a factor of 2 at 60 Hz). You can see the difference below.
The cyan is before the sign flip in CALCS, and the green is after. To double check the validity, I produced a calibrated spectrum using only SRCL_IN1 (blue), which agreed with the online calibration. There are small discrepancies of a few % between SRCL_IN1 and CAL-CS which, I believe, are due to the fact that we don't take the time delay of the system into account in CAL-CS. The sign flip is implemented by adding a minus sign in FM1 of CAL-CS_SUM_SRCL_ERR (which is now a scalar value of -9.55e-6). I did not change the absolute value.
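For context, a rough sketch of why the sign matters, assuming (not verified here) that the CAL-CS SRCL path, like the DARM one, reconstructs the displacement from an inverse-sensing path acting on the error signal plus an actuation path acting on the control signal, ignoring filtering and time delay:

\Delta L_{\mathrm{SRCL}} \approx \frac{1}{C}\, d_{\mathrm{err}} + A\, d_{\mathrm{ctrl}}

With the wrong sign on the sensing gain C, the two paths combine with the wrong relative sign, and in the band where they are comparable in size (here roughly 10 - 100 Hz) the reconstructed spectrum can be off by factors of order 2, consistent with the difference described above.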
Additionally, I looked at some calibration codes that I made some time ago (18742) and confirmed that I mistakenly canceled the minus sign in the sensing gain of the model. Also, according to the guardian code ISC_LOCK and trend data, the sign of the SRCL servo gain in LSC or the relevant filters in the SUS SRM did not change at least in this past year. I am fairly sure that this calibration has been wrong for quite some time.
The relevant items can be found at the following locations.
Open loop measurement: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Measurements/LscDrmi/SRCL_oltf_25W_20161028.xml
Open loop analysis code: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Scripts/LscDrmi/H1SRCL_OLTFmodel_20161028.m
Plots for open loop: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Results/LscDrmi/2016-10-28_SRCL_openloop.pdf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Results/LscDrmi/2016-10-28_SRCL_openloop.png
cdsutils.avg() in guardian is sometimes giving us very weird values.
We use this function to measure the offset value of the trans QPDs in Prep_TR_CARM. At one point, the result of the average gave the same (wrong) value for both the X and Y QPDs, to within 9 decimal places (right side of screenshot, about halfway down). Obviously this isn't right, but the fact that the values are identical will hopefully help track down what happened.
On the next lock, it correctly got a value for TransX (left side of screenshot, about halfway down), but didn't write a value for the TransY QPD, which indicates that it was trying to write the exact same value that was already there (epics writes aren't logged if they don't change the value).
So, why did 3 different cdsutils averages all return a value of 751.242126465?
This isn't the first time that this has happened. Stefan recalls at least one time from over the weekend, and I know Cheryl and I found this sometime last week.
This is definitely a very strange behavior. I have no idea why that would happen.
As with most things guardian, it's good to try to get independent verification of the effect. If you make the same cdsutils avg calls from the command line do you get similarly strange results? Could the NDS server be getting into a weird state?
On the one hand, it works just fine right now in a guardian shell. On the other hand, it also worked fine for the latest acquisition. So, no conclusion at this time.
This happened again, but this time the numbers were not identical. I have added a check to the Prep_TR_CARM state: if the absolute value of either offset is larger than 5 (normally they're around 0.2 and 0.3, and the bad values have all been above several hundred), notify and don't move on.
Operators: If you see the notification Check Trans QPD offsets! then look at H1:LSC-TR_X_QPD_B_SUM_OFFSET and H1:LSC-TR_Y_QPD_B_SUM_OFFSET. If you do an ezca read on that number and it's giant, you can "cheat" and try +0.3 for X, and +0.2 for Y, then go back to trying to find IR.
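A minimal sketch of the kind of check described above, assuming guardian-style ezca access; the channels and the |offset| > 5 threshold are taken from this entry, but this is not the actual Prep_TR_CARM code:

# Sketch of the trans QPD offset sanity check described above (not the real code).
OFFSET_CHANNELS = ['LSC-TR_X_QPD_B_SUM_OFFSET', 'LSC-TR_Y_QPD_B_SUM_OFFSET']
OFFSET_LIMIT = 5.0   # nominal values are ~0.2-0.3; the bad averages were several hundred

def trans_qpd_offsets_ok(ezca, notify):
    """Return False (and notify) if either trans QPD offset looks bogus."""
    for chan in OFFSET_CHANNELS:
        if abs(ezca[chan]) > OFFSET_LIMIT:
            notify('Check Trans QPD offsets!')
            return False   # don't move on; operator can try ~+0.3 (X) / ~+0.2 (Y) by hand
    return True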
This happened again to Jim and Cheryl today and caused multiple locklosses.
I've commented out the averaging of the offsets in the guardian.
We used to not do this averaging, and just rely on the dark offsets not to change. Maybe we could go back to that.
For operators, until this is fixed you might need to set these by hand:
If you are having trouble with FIND IR, this is something to check. From the LSC overview screen, click on the yellow TRX_A_LF TRY_A_LF button toward the middle of the left part of the screen. Then click on the INput button circled in the attachment, and from there check that both the X and Y arm QPD SUMs have reasonable offsets. (If there is no IR in the arms, the offset should be about -1*INMON.)
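A small sketch of that manual fallback: with no IR in the arms, set each trans QPD SUM offset to roughly -1 times its INMON. The *_INMON channel names below are inferred from standard filter-module naming and should be verified before use:

# Sketch only: set dark offsets to -1 * INMON, per the note above.
# *_INMON names are assumed from standard filter-module conventions; verify first.
def set_dark_offsets(ezca, arms=('X', 'Y')):
    for arm in arms:
        fm = 'LSC-TR_%s_QPD_B_SUM' % arm
        dark = ezca[fm + '_INMON']           # current dark level (no IR in the arms)
        ezca[fm + '_OFFSET'] = -1.0 * dark   # offset ~ -1 * INMON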
Opened as high priority fault in FRS:
Ed, Sheila
Are ezca connection errors becoming more frequent? Ed has had two in the last hour or so, one of which contributed to a lockloss (ISC_DRMI).
The first one was from ISC_LOCK, the screenshot is attached.
Happened again but for a different channel H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON (Sheila's post was for H1:LSC-PD_DOF_MTRX_7_4). I trended and found data for both of those channels at the connection error times, and during the second error I could also caget the channel while ISC_LOCK still could not connect. I'll keep trying to dig and see what I find.
Relevant ISC_LOCK log:
2016-10-25_00:25:57.034950Z ISC_LOCK [COIL_DRIVERS.enter]
2016-10-25_00:26:09.444680Z Traceback (most recent call last):
2016-10-25_00:26:09.444730Z File "_ctypes/callbacks.c", line 314, in 'calling callback function'
2016-10-25_00:26:12.128960Z ISC_LOCK [COIL_DRIVERS.main] USERMSG 0: EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON
2016-10-25_00:26:12.129190Z File "/ligo/apps/linux-x86_64/epics-3.14.12.2_long-ubuntu12/pyext/pyepics/lib/python2.6/site-packages/epics/ca.py", line 465, in _onConnectionEvent
2016-10-25_00:26:12.131850Z if int(ichid) == int(args.chid):
2016-10-25_00:26:12.132700Z TypeError: int() argument must be a string or a number, not 'NoneType'
2016-10-25_00:26:12.162700Z ISC_LOCK EZCA CONNECTION ERROR. attempting to reestablish...
2016-10-25_00:26:12.175240Z ISC_LOCK CERROR: State method raised an EzcaConnectionError exception.
2016-10-25_00:26:12.175450Z ISC_LOCK CERROR: Current state method will be rerun until the connection error clears.
2016-10-25_00:26:12.175630Z ISC_LOCK CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
It happened again just now.
Opened FRS on this, marked a high priority fault.
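For what it's worth, the TypeError in the log above comes from calling int() on args.chid when it is None in pyepics' _onConnectionEvent. A defensive comparison along these lines (purely a sketch, not a vetted pyepics patch) would skip such callbacks instead of raising, though it would not fix the underlying connection problem:

# Sketch only (not a vetted pyepics patch): avoid int(None) as seen in the traceback above.
def chid_matches(ichid, chid):
    """Return True only if both channel IDs are present and equal."""
    if ichid is None or chid is None:
        return False   # connection event without a usable chid; ignore it
    return int(ichid) == int(chid)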
J. Kissel, for the Calibration Team
I've updated the results from LHO aLOG 21825 and G1501223 with an ASD from the current lock stretch, so that I could display the computed time dependent correction factors, which have recently been cleared of systematics (LHO aLOG 22056), sign errors (LHO aLOG 21601), and bugs yesterday (22090). I'm happy to say that not only does the ASD *without* time dependent corrections still fall happily within the required 10%, but if one eye-balls the time-dependent corrections and how they would be applied at each of the respective calibration line frequencies, they make sense. To look at all relevant plots (probably only interesting to calibrators and their reviewers), see the first pdf attachment. The second and third .pdfs are the money plots, and the text files are a raw ascii dump of the respective curves so you can plot them however or wherever you like. All of these files are identical to what is in G1501223. This analysis and these plots were made by /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/produceofficialstrainasds_O1.m which has been committed to the svn.
Apparently, this script has been moved to a slightly different location. The script can be found at
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/DARMASDs/produceofficialstrainasds_O1.m
Holding off on any changes to TCS since Robert is doing work. Current lock approaching ~4hrs. Have been hovering between 50-60Mpc.
10:28: Turning BRSx back ON (Robert came to check in saying he'll head back out to EY, but won't be at EX for a few hours.) Thought of making a change on TCSy, but decided to hold off so we don't possibly cause a lockloss---want to give Robert his time.
He plans to transition EY to do possible black glass measurements at viewports (per WP 6274).