3:30 pm local. Took 75 seconds to overfill CP3 by increasing the LLCV from 20% open to 50% open. Attached is a 1 hr plot (x-axis in seconds) showing the TC temperature fall, and the last several fills.
At Stefan's offline request, I ran BruCo at two times:
Nov. 07, 2016, 19:30:00 (GPS time : 1162582217):
on DARM: https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Nov07/H1-DARM-1162582217-600/
on MICH: https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Nov07/H1-MICH-1162582217-600/
on PRCL: https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Nov07/H1-PRCL-1162582217-600/
on SRCL: https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Nov07/H1-SRCL-1162582217-600/
Nov. 07, 2016, 16:30:00 (GPS time : 1162571417):
on DARM: https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Nov07/H1-DARM-1162571417-600/
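For reference, the core of a BruCo run is a brute-force coherence scan of one target channel (DARM, MICH, etc.) against every auxiliary channel. A minimal sketch of that scan, assuming plain numpy arrays at a common sample rate (the real tool reads frame files and scans thousands of channels):

    import numpy as np
    from scipy.signal import coherence

    def coherence_scan(target, aux_channels, fs, threshold=0.3):
        """Flag auxiliary channels whose coherence with the target
        exceeds the threshold in any frequency bin.

        target       -- 1-D array (e.g. the DARM error signal)
        aux_channels -- dict mapping channel name -> 1-D array
        fs           -- common sample rate in Hz
        """
        flagged = {}
        for name, data in aux_channels.items():
            f, coh = coherence(target, data, fs=fs, nperseg=int(10 * fs))
            if np.any(coh > threshold):
                flagged[name] = (f, coh)
        return flagged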
The PMC length noise no longer shows up in DARM. Even for the 1 kHz and 4 kHz peaks there is a factor-of-a-few safety margin. Increasing the PMC gain slider from 0 dB to 16 dB (in reality a change from 0 dB to 12 dB), we see no significant change in DARM. The 4 kHz peak just starts touching the noise floor, with a coherence of 0.3.
This means that the increased coherence between DARM and jittery peaks after the increase in the modulation depth and associated PMC changes (alogs 31095 and 31203) is a mystery.
We are running about 8 dB more PMC length gain than before, as that's the lowest we can go with the current electronics at the higher modulation depth. It would be interesting to see whether modifying the electronics does anything.
(In the attached plot, the IMC_F and IMC-WFS traces are scaled arbitrarily to make an eyeball comparison easier.)
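For context on statements like "touching the noise floor with a 0.3 coherence": the coherent contribution of a witness to DARM can be bounded by sqrt(coherence) times the DARM spectrum. A one-line sketch (my own illustration, not a site script):

    import numpy as np

    def coherent_projection(darm_asd, coh):
        """Upper bound on a witness's contribution to DARM: for
        coherence C(f), the coherent amplitude fraction is sqrt(C),
        so a coherence of 0.3 corresponds to ~55% of the local DARM
        amplitude."""
        return np.sqrt(coh) * darm_asd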
Readjusted the REFSIGNAL (diffraction power) and AC offset for the ISS second loop. New values in SDF (down).
Sheila has mentioned that operators should take a look at squashing down some of the 2nd harmonics of the violin modes. Using Nutsinee's procedure, I went about taking a first look. Attached is a power spectrum during our current 10+ hr lock. On it, the following lines are above a 10/23/16 reference (frequency & wiki notes):
Decided to look at ETMx's 1005.94 Hz line. On its damping filter MEDM screen, we had a filter bank already set up for this: MODE10.
This squashed the line. It has been entered in the Violin Mode wiki.
Addressed the next biggest line, 1003.78 Hz for ETMx, which is taken care of in MODE9. A positive gain rung it up, so I tried some negative gains, which were fine. Then I tweaked the phase from -60 deg to 0 deg; this was fine. Unfortunately, when I tried a phase of +60 deg, it rung the mode up, and it looks like it also excited the neighboring line at 1003.667 Hz. I've been trying to damp out the latter (1003.667 Hz), but it is going really slowly.
Since I wasn't successful with the 1003.78 Hz line, I will not make a note of what I attempted and won't update the wiki.
Repeating the same idea as we used for frequency noise in aLog 31176, we measured the sensitivity of REFL_A_RF45_I, POP_A_RF9_I and POP_A_RF45_I to PRC length noise, and then used these transfer functions to project the noise into DARM.
(REFL_A_RF9_I is used for the CARM loop, so there is no interesting signal there.)
Around 40 Hz the three projections basically agree, and are a factor of 2 below the noise floor. Also, it seems that REFL_A_RF45_I would actually have a lower noise floor than POP_A_RF45_I; we should consider using it for PRCL.
Finally, we should implement a PRCL feed-forward path.
The templates for the measurement are here:
/ligo/home/controls/sballmer/20161107/PRCLNoiseProjection.xml
/ligo/home/controls/sballmer/20161107/REFLPOPV4.xml
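A minimal sketch of the projection method these templates implement, assuming numpy arrays and Welch-averaged spectra (the actual measurement was done with the DTT templates above):

    import numpy as np
    from scipy.signal import csd, welch

    def sensor_to_darm_tf(witness, darm, fs, nperseg):
        """Estimate the witness -> DARM transfer function during a
        PRCL injection as H = S_xy / S_xx."""
        f, s_xy = csd(witness, darm, fs=fs, nperseg=nperseg)
        _, s_xx = welch(witness, fs=fs, nperseg=nperseg)
        return f, s_xy / s_xx

    def project_into_darm(tf, witness_quiet_asd):
        """Scale the quiet-time witness ASD by |H| to get that
        sensor's projected contribution to the DARM spectrum."""
        return np.abs(tf) * witness_quiet_asd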
J. Kissel, D. Tuyenbayev

Following preliminary results from Darkhan on the individual actuation strengths of the UIM and PUM stages for H1SUSETMY (see, thus far, LHO aLOG 31275), and the current delightfully long lock stretch with them in place, I'm bringing this study to a close. I've turned off the temporary L1 and L2 calibration lines at 33.7 and 34.7 Hz, respectively. We do not intend to turn these lines on again for the duration of the run. These lines were turned OFF at Nov 07 2016 21:21:49 UTC.
Summary
A refined analysis of the L1, L2 and L3 stage actuation strengths was done using data from the last several days, which include several low-noise lock stretches. The actuation strength factors are:
KU = 8.020e-08 +/- 2.983e-10 N/ct ( std(KU) / |KU| = 0.0037 )
KP = 6.482e-10 +/- 2.748e-12 N/ct ( std(KP) / |KP| = 0.0033 )
KT = 4.260e-12 +/- 1.313e-14 N/ct ( std(KT) / |KT| = 0.0031 )
Details
The following 4 lines were used to calculate the factors: the UIM (L1) line at 33.7 Hz, the PUM (L2) line at 34.7 Hz, the TST (L3) line at 35.9 Hz, and the PcalY line at 36.7 Hz. The most recent DARM model parameters were used for this analysis. Also, values past Nov 5 were calculated with the updated DARM filters (see LHO alog 31201); not accounting for this would produce results biased by 1-2%.
Each data point is a quantity calculated from 10 s FFTs. The outliers were removed in two steps:
- took the mean and standard deviation of all data points in intervals when the IFO range was >= 50 Mpc, and removed 3-sigma outliers;
- removed 3-sigma outliers about the mean of the remaining data points.
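A minimal sketch of this two-step clipping, assuming 1-D numpy arrays of per-FFT strength estimates and matching IFO range samples (illustrative only; the actual scripts are in CalSVN, listed below):

    import numpy as np

    def clip_outliers(kappas, range_mpc, n_sigma=3.0):
        """Two-step outlier rejection on the per-FFT estimates."""
        # Step 1: keep points from stretches with range >= 50 Mpc,
        # then drop n-sigma outliers about that subset's mean.
        good = kappas[range_mpc >= 50.0]
        good = good[np.abs(good - good.mean()) <= n_sigma * good.std()]
        # Step 2: re-estimate mean/std on the survivors, clip again.
        good = good[np.abs(good - good.mean()) <= n_sigma * good.std()]
        return good

    # Reported values: mean and std of the clipped sample; the standard
    # error of the mean is std / sqrt(N) (N = 4251 here).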
The mean values and standard deviations noted above were taken from the GPS time interval [1162369920, 1162413500], ~11 hours of low-noise data (blue markers). The standard errors on the mean values, std(Ki) / sqrt(N), are orders of magnitude smaller than the Pcal and DARM loop model uncertainties (the number of data points in the selected interval is N = 4251).
For preliminary results from Nov 4 data and before see related reports: 31183, 31275.
Recall the ER8/O1 values for these coefficients were
Optic/Stage   Weighted Mean [N/ct]   1-sigma Uncertainty [N/ct]   1-sigma Uncertainty [%]
ETMY L1       8.17e-08               3.2e-09                      3.9
ETMY L2       6.82e-10               5.2e-13                      0.076
ETMY L3       4.24e-12               4.1e-15                      0.096
from LHO aLOG 21280.
Comparing against numbers above,
KU = 8.020e-08 +/- 2.983e-10 N/ct ( std(KU) / |KU| = 0.0037 )
KP = 6.482e-10 +/- 2.748e-12 N/ct ( std(KP) / |KP| = 0.0033 )
KT = 4.260e-12 +/- 1.313e-14 N/ct ( std(KT) / |KT| = 0.0031 )
This means a change of
(ER8 - ER10)/ER8 =
ETMY L1 0.0183
ETMY L2 0.0495
ETMY L3 -0.0047
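A quick cross-check of those fractional changes, using the values quoted in the two tables above:

    er8  = {'L1': 8.17e-08,  'L2': 6.82e-10,  'L3': 4.24e-12}   # LHO aLOG 21280
    er10 = {'L1': 8.020e-08, 'L2': 6.482e-10, 'L3': 4.260e-12}  # this entry

    for stage in ('L1', 'L2', 'L3'):
        change = (er8[stage] - er10[stage]) / er8[stage]
        print('ETMY %s  %+.4f' % (stage, change))
    # -> +0.0184, +0.0496, -0.0047 (matching the quoted numbers to rounding)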
We will compare these numbers against those determined by frequency-dependent transfer functions, e.g. the to-be-processed data from LHO aLOG 31303, and update the low-latency calibration accordingly next week. It will also be interesting to recast the L1 and L2 numbers into a combined actuation strength change from O1 to ER10, and compare it against the continuously calculated kappa_PU to check consistency there.
Data points prior to the DARM filter update mentioned in the report were analyzed with the following DARM model parameters:
ifoIndepFilename : ${CalSVN}/Runs/PreER10/Common/params/IFOindepParams.conf (r3519)
ifoDepFilename : ${CalSVN}/Runs/PreER10/H1/params/H1params.conf (r3640)
ifoMeasParams : ${CalSVN}/Runs/PreER10/H1/params/H1params_2016-10-13.conf (r3519)
and after the DARM filters were updated (GPS 1162336667) the following configuration was used:
ifoIndepFilename : ${CalSVN}/Runs/PreER10/Common/params/IFOindepParams.conf (r3519)
ifoDepFilename : ${CalSVN}/Runs/PreER10/H1/params/H1params_since_1162336667.conf (r3640)
ifoMeasParams : ${CalSVN}/Runs/PreER10/H1/params/H1params_2016-10-13.conf (r3519)
Scripts were uploaded to CalSVN at
${CalSVN}/Runs/PreER10/H1/Scripts/Actuation/2016-11-08/
5 days of SLM data (75 MB): ${CalSVN}/Runs/PreER10/H1/Measurements/Actuation/2016-11-08/
Plots: ${CalSVN}/Runs/PreER10/H1/Results/Actuation/2016-11-08_H1_UPT_act_strengths_*
We discovered that in the single-line analysis we had an incorrect sign for TST stage actuation (we incorrectly set the sign of the N/ct coefficient).
The updated results have been posted in LHO alog 31668.
J. Kissel

While opening the PCALX overview screen in hopes of moving around the roaming high-frequency PCALX calibration line, I found the OFS railed. A trend shows that it had been railed since 2016-11-01 at 19:04 UTC, just after maintenance day. Regrettably, due to our problems getting the IFO back up after last Tuesday's maintenance (see LHO aLOG 31119), that means we did not get any good data after changing the roaming line to 1001.3 Hz on Oct 31 2016 15:44:29 UTC (see LHO aLOG 31024).

I've "power-cycled" the OFS loop by turning OFF then ON the H1:CAL-PCALX_OPTICALFOLLOWERSERVOENABLE switch, and that restored the servo's nominal behavior.

We should add a DIAG_MAIN monitor of the OFS PDs for both Pcals to catch OFS servo malfunctions like this (a sketch follows the table below).
Channel to watch: H1:CAL-PCALX_OFS_PD_OUTPUT (the channel comes pre-calibrated into [V], with a range of +/- 10 [V])
Test: if a ~120 sec average of the channel is more than +/- 8 [V], then throw an error. (The mean should typically be around 5 [V].)

While trending and thinking about this aLOG, I'd briefly changed the line frequency to 4001.3 Hz, but after realizing we had no good 1001.3 Hz data, I've changed it back to 1001.3 Hz. We'll stay like this for a few hours and then resume the sweep at the highest frequency points. As such, I erase the last 1001.3 Hz start time with the new start time from today:

Frequency   Planned Amplitude   Planned Duration   Actual Amplitude   Start Time                 Stop Time                  Achieved Duration
(Hz)        (ct)                (hh:mm)            (ct)               (UTC)                      (UTC)                      (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------
1001.3      35k                 02:00              39322.0            Nov 11 2016 21:37:50 UTC
1501.3      35k                 02:00              39322.0            Oct 24 2016 15:26:57 UTC   Oct 31 2016 15:44:29 UTC   ~week @ 25 W
2001.3      35k                 02:00              39322.0            Oct 17 2016 21:22:03 UTC   Oct 24 2016 15:26:57 UTC   several days (at both 50 W and 25 W)
2501.3      35k                 05:00              39322.0            Oct 12 2016 03:20:41 UTC   Oct 17 2016 21:22:03 UTC   days @ 50 W
3001.3      35k                 05:00              39322.0            Oct 06 2016 18:39:26 UTC   Oct 12 2016 03:20:41 UTC   days @ 50 W
3501.3      35k                 05:00              39322.0            Jul 06 2016 18:56:13 UTC   Oct 06 2016 18:39:26 UTC   months @ 50 W
4001.3      40k                 10:00
4301.3      40k                 10:00
4501.3      40k                 10:00
4801.3      40k                 10:00
5001.3      40k                 10:00
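A sketch of the proposed DIAG_MAIN test, assuming the usual generator-style test convention and cdsutils averaging (the PCALY channel name is my guess by analogy, and the thresholds are the ones proposed above):

    import cdsutils

    OFS_PD_CHANNELS = [
        'H1:CAL-PCALX_OFS_PD_OUTPUT',
        'H1:CAL-PCALY_OFS_PD_OUTPUT',  # assumed Y-end analog of the X channel
    ]

    def PCAL_OFS_RAILED():
        """Flag a railed Pcal optical follower servo: the OFS PD output
        is calibrated in [V] with a +/-10 V range and nominally averages
        ~5 V, so a ~120 s mean beyond +/-8 V means the servo has railed."""
        for chan in OFS_PD_CHANNELS:
            mean = cdsutils.avg(-120, chan)  # average the last 120 seconds
            if abs(mean) > 8.0:
                yield '%s railed (120 s mean = %+.2f V)' % (chan, mean)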
It seems that the ETMs and ITMs display approx 3-4 urad swings in PIT at times. Is this normal? BS is <= 2 urad.
YAW seems to show about 2 urad of deviation across the board.
While doing a PRCL noise injection (into PR2), I noticed that H1:LSC-POP_A_RF45_PHASE_R was ~10 deg off. Since changing this phase also affects the SRCL sensing matrix, I updated that too.
Old values:
H1:LSC-POP_A_RF45_PHASE_R 77.3
POP_A_RF9_I to SRCL input matrix (H1:LSC-PD_DOF_MTRX_5_1): -0.025
POP_A_RF45_I to SRCL input matrix (H1:LSC-PD_DOF_MTRX_5_3): 0.08
New values:
H1:LSC-POP_A_RF45_PHASE_R 86.5
POP_A_RF9_I to SRCL input matrix (H1:LSC-PD_DOF_MTRX_5_1): -0.0242
POP_A_RF45_I to SRCL input matrix (H1:LSC-PD_DOF_MTRX_5_3): 0.08 (unchanged)
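For context on why re-phasing forces a sensing-matrix update: changing a ..._PHASE_R setting rotates the demodulated I/Q pair, mixing the quadratures that the input matrix then picks off. A minimal sketch (my own illustration; sign conventions in the real demod chain may differ):

    import numpy as np

    def rotate_iq(i_sig, q_sig, phase_deg):
        """Rotate demodulated I/Q signals by phase_deg."""
        phi = np.deg2rad(phase_deg)
        i_new =  i_sig * np.cos(phi) + q_sig * np.sin(phi)
        q_new = -i_sig * np.sin(phi) + q_sig * np.cos(phi)
        return i_new, q_new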
WeeklyXtal - normal
WeeklyLaser - BOX RH levels have come down a bit since last week.
WeeklyEnv - normal
WeeklyChiller - normal
Lots of stuff going on in the attached ITMY spectra of FASTIMONs at all 3 stages (M0, L1, L2), taken just now with the IFO in an 11+ hour lock. I tried looking for old-vs-new differences that might account for the extra "swinging" misbehavior of ITMY observed just this weekend (alogs Sun 31247, Sat 31230 and 31225, Fri 31226, and Kissel's coil check last Wed, 31136). I checked a few locked stretches from earlier last week against now; indeed there is a noisier peak/shelf from 6.06 Hz to 7.6 Hz on L2 in the current lock (all 4 channels: UL, UR, LL, LR). I can't find any activity in the alog that would explain the change in ITMY, but maybe I missed something subtle that someone did.
Since the IFO is locking well right now, I saved new OBSERVE.snap files for all suses except PIs. I suppose I could do those.
I've since switched the SDF overviews back to the DOWN/SAFE for those that show differences in the event someone finds them useful for the next locking transitions.
This morning H1's Intent Bit was taken out of UNDISTURBED. I quickly asked around for any activities going on in the Control Room and no one admitted to anything. I took it back to UNDISTURBED and about 30 sec later we dropped out again!
For the 2nd drop, I happened to be watching the Guardian Overview & noticed the message: "EXC: pemmx excitation!" on the DIAG_EXC Guardian Node. The Log for this node said we had this excitation at:
Is this an automated excitation?
If this is an acceptable excitation, can we remove this channel from the monitors that knock us out of UNDISTURBED?
I did that, testing a diaggui issue. The PEM-MX_CHAN channels are not connected to an AI chassis, so it has been a convenient place to run tests that can't disturb anything. I'll refrain from doing so in the future.
Corey, any excitations except calibration lines and hardware injections should (and in this case did) kick us out of observation. Don't remove this channel from guardian's watch list.
Laser Status:
SysStat is good
Front End Power is 34.64 W (should be around 30 W)
Front End Watch is GREEN
HPO Watch is GREEN

PMC:
It has been locked 1 day, 11 hr, 50 min (should be days/weeks)
Reflected power is 26.42 W and PowerSum = 125.8 W

FSS:
It has been locked for 0 days, 10 hr, 37 min (should be days/weeks)
TPD[V] = 3.567 V (min 0.9 V)

ISS:
The diffracted power is around 3.384% (should be 5-9%)
Last saturation event was 0 days, 10 hr, 37 min ago (should be days/weeks)

Possible issues: PMC reflected power is high
There are now THREE a2l scripts that we will be running once during EACH lock stretch. Patrick mentioned them in his time log. I'm tagging OpsInfo with the information again.
cd /opt/rtcds/userapps/release/isc/common/scripts/decoup
./a2l_min_LHO.py
./a2l_min_PR2.py
./a2l_min_PR3.py
that is all :)
A note on the (3) A2L measurements.
As of this week, we want to run the a2l_min_LHO.py script at the beginning of locks (i.e. right after we reach NLN). It takes on the order of 10 min.
Sometimes you might not want to run it: if the DARM spectrum looks good around 20 Hz, then you are good. It's a judgement call, but in general this helps with sensitivity. Will put this in the Ops Sticky Note wiki, and it should get into the Ops Checksheet soon.
(Thanks to Jenne & Ed for sharing the alogs about this!)
J. Kissel, D. Tuyenbayev

We're still not getting the IFO duty cycle needed to reach the desired uncertainty on the single-frequency actuation strength scale factor measurements of the UIM and PUM stages, and we're running out of time, so I've increased the SNR of these temporary lines by another factor of ~3 over yesterday's increase (see LHO aLOG 31108).

Oscillator   Freq (Hz)   Old Amp (ct)   New Amp (ct)
ETMY UIM     33.7        180            500
ETMY PUM     34.7        81             300

Hopefully Robert's activities tonight won't impact the ~30 Hz region of the sensitivity, and we can turn things off soon.
Jeff K, Darkhan T,
We calculated actuation strengths of the L1 (UIM), L2 (PUM) and L3 (TST) actuation stages using calibration lines from 6 lock stretches in the last two days.
Preliminary results are given in the attached plots. The standard deviations of the AU and AP estimates are ~1.2%, and for AT ~0.4%.
Data points were taken at GPS times with HOFT_OK_BIT = 1 (segments are listed below).
# seg start stop duration
0 1162123790.0 1162124750.0 960.000
1 1162124840.0 1162125190.0 350.000
2 1162125280.0 1162125480.0 200.000
3 1162127120.0 1162129210.0 2090.000
4 1162134280.0 1162137150.0 2870.000
5 1162162170.0 1162163370.0 1200.000
The increased line amplitudes will hopefully allow us to estimate the AU and AP actuation strengths with subpercent 1-sigma error bounds.
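Back-of-envelope for why the amplitude increase should get us there: the spread of a single-line strength estimate scales inversely with the line SNR, so for a fixed noise background it scales as 1/amplitude. A sketch, using the old/new amplitudes from the entry above and the ~1.2% spread quoted here:

    def expected_spread(current_spread, amp_old, amp_new):
        """Line-strength estimate spread scales as 1/line amplitude,
        assuming the background noise around the line is unchanged."""
        return current_spread * (amp_old / amp_new)

    print(expected_spread(0.012, 180.0, 500.0))  # ETMY UIM: ~0.0043
    print(expected_spread(0.012, 81.0, 300.0))   # ETMY PUM: ~0.0032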
Jeff K, Greg M, Darkhan T,
After the L1 and L2 line amplitude increase, it seems that we got better uncertainties in the estimates of the suspension stage actuation strengths.
This time we filtered the data using the IFO range channel, with 50 Mpc as a threshold. From the remaining data we made histograms of three different time intervals. We have not yet investigated why the noise levels of the lines differ between these intervals. The uncertainties for A{U,P,T} are given for the least noisy interval (blue data points).
Jeff K, Greg M, Darkhan T,
We calculated the [N/ct] actuation force factors from the ~35 Hz independent L1, L2 and L3 lines:
KU = 8.012e-08 +/- 3.873e-10 N/ct ( std(KU) / |KU| = 0.0048 )
KP = 6.482e-10 +/- 2.748e-12 N/ct ( std(KP) / |KP| = 0.0042 )
KT = 4.253e-12 +/- 1.679e-14 N/ct ( std(KT) / |KT| = 0.0039 )
During a Nov. 4 lock stretch we got a factor of 3 improvement in the standard deviations compared to the previous day (blue data points vs. green).
In the most recent 2 days we got more data with longer lock stretches, which can help us better bound the uncertainties. Analysis of these data will require including the updated DARM digital filters; the IFO response changed on Nov. 5 (see LHO alog 31201), which would bias our calculations if not taken into account.
The outliers were removed in the following way:
- took the >= 50 Mpc data and removed data points falling outside 2 sigma of the mean (some large outliers were not filtered by this step);
- calculated the std. and mean of the remaining data points once more and removed 2-sigma outliers (this step removed the large outliers);
- the mean and 2-sigma band of these data points are shown with black solid and dashed lines.
The final reported mean values and standard deviations were taken from the blue data points (GPS [1162252470, 1162271230]); the L1 and L2 data were least noisy during this period. This mean value and its std. are shown with blue solid and dashed lines.
Patrick, Kiwamu,
This morning, Patrick found that CO2Y was not outputting any laser power. In the end we could not figure out why it had shut off. The laser is now back on.
[Some more details]
I thought this was a return of the faulty behavior that we were trying to diagnose in early October (alog 30472). However, the combination of looking at the front panel of the laser controller and trending the warning/alarm states did not show us anything conclusive. So, no conclusion again.
When I went to the floor and checked the front panel, no red LED was lit. The only unusual thing was the GATE LED, which was off. Pressing the red gate button brought the GATE LED back to green as expected. This could be an indication that the IR sensor momentarily went to the fault state and came back normal, leaving the laser shut off; in this scenario the IR sensor does not latch any LEDs, which is why I thought this could be it. However, looking at the trend around the time the laser went off, I did not find any alarm flags raised at all. Even if it were a fast transient in the IR sensor, I would expect to see it in the trend. So these two observations together can't support the IR sensor scenario. Another plausible scenario is somebody accidentally hitting the gate button, resulting in no laser output.
I also went to the chiller and confirmed no error there; the water level was mid-way (which I topped off), and all seemed good.
That certainly sounds like the IR sensor. Unfortunately we don't currently have an analogue readout from that channel, or a good error reporting system. We are already planning to fix this with a new version of the controller that we should be getting ready for the post-O2 install.
Has there been a temperature change in the LVEA recently? Also, the Y-arm laser power is a bit higher than before, though not as high as during your recent testing. I'm just wondering what else could be pushing this sensor close to its trip point.
Alastair, if this was due to the IR sensor, how do you explain the fact that it didn't show up in ITMY_CO2_INTRLK_RTD_OR_IR_ALRM? Is it so fast that the digital system cannot record the transient?
I don't understand that. Even if it doesn't latch the laser off, it should still show up on that channel. Is it possible that the chassis itself got a brief power glitch? If it was turned off/on momentarily, that would also put the laser into this state.
From trends the laser tripped off around 15:52 UTC this morning. This was well before the work on h1oaf took it down.
It's very possible that the Tuesday maintenance activity involving IO chassis hardware work (which may or may not have been involved in the Dolphin network glitch and subsequent Beckhoff issues that lasted most of the day Tuesday) is what caused this particular TCS laser issue. It was compounded by the later h1oaf work that day, which caused other chiller trips. The cause of this full saga is TBD.