Also Integration Issue 1203. Otherwise:
It looks like the Block Properties specifying the parameters for HAMs 4 & 5 are still just duplicates of HAM3. See attached. There may be more to it, but for the moment I'll say I need to correct these parameters and recompile the models to get the correct MEDM screens generated. I'll file an integration issue as I see this as just an oversight correction.
Yesterday Jim and I resurrected the hardware watchdog (HWWD) system on the DTS. It did not take too much work as the system had been left pretty much intact. We hooked up the ADC and DAC lines to the x1susey chassis and reconfigured x1boot to PXE boot the new faster computer, which is running as x1susey. This front end is running the x1iopsuseywdt and x1susetmywdt models. We did a quick test of setting the countdown timer to 2 minutes and then disabling the OSEM LEDs (via the relay unit). After 2 minutes the HWWD zeroed the SEI enable signals. Restoring the LEDs and pressing the RESET button restored the SEI drives.
I went through the wiring and have made a new as-built drawing as sheet 2 of https://dcc.ligo.org/D1300475
Dave, Elli
We are using two cameras on ISCT6, AS_AIR plus CAM_17, set up at different Gouy phases, which we are using to measure the SRC Gouy phase. Today we shifted the positions of these two cameras and took some images with them as we moved PR2 and the BS, tracking the spot locations across the cameras. Attached is a photo of the changes to the table. Analysis to follow.
DIAG_MAIN - Temporarily removed the HW injection test until the CW injections are running again.
TCS_CO2 - Made LASER_UP the nominal state and added the TCS_CO2 nodes to the GUARD_OVERVIEW MEDM.
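For reference, a minimal sketch of how a Guardian node module typically declares its nominal state, here for TCS_CO2. The states and edges below are illustrative placeholders, not the actual H1 TCS_CO2 code.

```python
# Minimal Guardian-node sketch (placeholder states/edges, not the real TCS_CO2 module).
from guardian import GuardState

request = 'LASER_UP'
nominal = 'LASER_UP'   # state reported as nominal on the GUARD_OVERVIEW screen

class DOWN(GuardState):
    def main(self):
        return True

class LASER_UP(GuardState):
    def main(self):
        return True

    def run(self):
        # stay here; node shows green/nominal while the laser is up
        return True

edges = [('DOWN', 'LASER_UP')]
```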
(And as if on cue, H1 had a fabulous 14-hour lock stretch last night with a range near 80 Mpc, and it was very clean with not many glitches. See attached image.)
The local Press Conference event occupied much of the morning and went swimmingly. After the hubbub, here are a few of the day's activities:
Nutsinee has kindly offered to cover the last couple of hours of the DAY shift.
May The Gravitational Force Be With You.
22:13 Evan to LVEA working on 9 MHz stuff.
22:17 Elli and Dave out of the LVEA
23:06 Joe done with the Xarm beam tube filling
23:08 TJ reloading TCS guardian
23:15 Elli to HAM6 adjusting AS camera
Happy Announcement Day! Well done keeping GW150914 a secret until this morning =)
Most CPU overruns associated with the faster SUS computers (which were installed in the end stations this Tuesday) only glitch the SUS-IOP and SUS-ETMX,Y models. Once in a while the glitch extends to the SEI IOP/ISI models and the ISC/ALS models, and also glitches the SUS-TMS and SUS-PI models.
To keep the overview GREEN and help with trending, I have extended my end_station_sus_diag_resets.bsh script to clear these additional models. This script is run roughly once a minute on opsws16.
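For the record, here is a rough Python sketch of the kind of reset the script performs (the actual script is a bash script; the model-to-DCUID mapping below is a placeholder, and it assumes the standard front-end FEC DIAG_RESET records):

```python
# Hedged sketch, not the actual end_station_sus_diag_resets.bsh: clear the
# front-end diagnostics for a list of end-station models by toggling each
# model's FEC DIAG_RESET record, roughly once a minute.
import time
import epics  # pyepics

# Placeholder model -> DCUID mapping; the real script uses the site's
# actual DCU IDs for the SEI/ISC/SUS models it clears.
MODELS = {
    'h1iopseiex':  100,
    'h1isietmx':   101,
    'h1alsex':     102,
    'h1iscex':     103,
    'h1sustmsx':   104,
    'h1susetmxpi': 105,
}

def clear_diags():
    for model, dcuid in MODELS.items():
        epics.caput('H1:FEC-%d_DIAG_RESET' % dcuid, 1)  # clears overrun/diag counters

if __name__ == '__main__':
    while True:
        clear_diags()
        time.sleep(60)  # roughly once a minute, like the real script
```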
WHAM6 ISI is lifting itself almost 40um from its free hanging position during isolation. The drive to the Z actuators is running nearly 2000 cts on average.
What's the issue? Excess drive ==> excess noise? Unnecessary heating? Trip vulnerability?
The attached plot shows 15 minutes when I de-isolated the ISI. Channel 9 shows the free-hang sag position. This was not so low until the HAM6 work back in April/May, when it decidedly dropped. While I balanced and leveled the platform then, maybe we need to bias that work high to deal with thermal sagging. I don't see it obviously sagging lower over time. See Channels 14 through 16 for the vertical drives.
The second attachment is 120 days of the outside temperature and the vertical drives. I'd say there is a nice inverse correlation, with some phase thrown in, related to our typical step-wise heating response to seasonal temperature changes.
My recommendation is to remove some payload to raise the free-hang position. Maybe not too big a deal, but the free-hang positions of the vertical CPSs are on the low side (Channels 4 through 6). This requires a vent of course, so maybe not.
Otherwise, we should accept & show that the OMC position does not care about the 90 um shift and reset.
Evan and I left the interferometer in observing mode at around 20:00 local time last night. The ISS second loop now keeps correcting the diffracted power all the time via a feedback loop and maintains it at 8%.
By the way, we had difficulty engaging the ASC soft loops yesterday. Those loops kept dragging the optics to a point where the recycling gain was low enough to unlock the interferometer.
This issue is not resolved yet although we were able to manually engage the loops through a very careful and slow process.
I suspect some part of the interferometer has become misaligned in a very subtle way.
[The symptoms]
The recycling gain could reach as high as roughly 38, which is more than enough to stably achieve full lock. However, as soon as the guardian came across the ASC_PART3 state, the interferometer unlocked. This happened multiple times yesterday. The error signals for all four soft loops were as big as 0.2 before engaging the loops. Since the error signals are derived from the TRX and TRY QPDs, this roughly means that the beam spots on TRX and TRY were off from the optimal by 20% of the QPD range. With these big offsets, once we closed the loops, the interferometer started drifting slowly away from the optimal alignment on a time scale of probably a few minutes, which was then followed by a rapid decrease in the sideband power in the recycling cavity and the carrier power everywhere on a time scale of about 10 seconds, and then it unlocked.
[I don't think the TMSs are the culprit]
Then I looked for a suspicious drift or jump in the angles of the TMSs in the trends, because this is very similar to the case we had during O1 (alog 22575). I did not find obvious misalignment of the TMSs in the past few days. In addition, the digital offsets in the TR QPDs seem to be unchanged at least in the past week.
For the record, in the past we had trouble when the input optics were not well aligned (alog 23538). Looking at trends of those optics, I did not find obvious misalignment in the past few days either. The IM4 TRANS and POP B QPDs seem to read almost the same values as they should. Performing a full initial alignment sequence did not help this issue either.
[Manual engagement]
Later, we tried engaging the loops manually and succeeded. This is just a test and not a permanent solution. We added an intentional digital offset to each loop so as to cancel the large signals at the error points. Then we engaged all the loops with the nominal gains. Since the error signals were already close to zero thanks to the digital offsets, the servos did not really act on the soft degrees of freedom, as expected. We then gradually reduced the digital offsets to zero very slowly so that the soft loops came back to the intended operating points. Since we did not want to lose the lock, we went very slowly and took about 20 minutes or so to complete the process. Lame.
A good thing is that as we reduced the offsets, the TRX, TRY and POP signals kept increasing. This indicates that the QPD signals from TRX and TRY are still a good reference. I conclude that it is the interferometer that has become misaligned rather than the TMSs. Since the recycling gain can still be as high as 38, the misalignment we are chasing here may be something subtle.
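For reference, here is a rough sketch of the manual engagement procedure described above, written with ezca. The channel names, step size and timing are assumptions for illustration, not the exact channels or values we used.

```python
# Hedged sketch of the manual soft-loop engagement: cancel the error signals
# with digital offsets, engage the loops, then walk the offsets back to zero
# slowly so the loops pull the interferometer to the intended operating point.
import time
from ezca import Ezca

ezca = Ezca(ifo='H1')
SOFT_LOOPS = ['CSOFT_P', 'CSOFT_Y', 'DSOFT_P', 'DSOFT_Y']  # assumed filter names

# 1) with the loops still open, set each offset to cancel the current error signal
offsets = {}
for dof in SOFT_LOOPS:
    err = ezca.read('ASC-%s_INMON' % dof)
    offsets[dof] = -err
    ezca.write('ASC-%s_OFFSET' % dof, -err)
    ezca.switch('ASC-%s' % dof, 'OFFSET', 'ON')

# 2) engage the loops with their nominal gains here (not shown)

# 3) ramp the offsets back to zero over roughly 20 minutes
nsteps = 100
for i in range(1, nsteps + 1):
    for dof in SOFT_LOOPS:
        ezca.write('ASC-%s_OFFSET' % dof, offsets[dof] * (1.0 - float(i) / nsteps))
    time.sleep(12)  # 100 steps x 12 s ~ 20 minutes
```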
This is yet another follow-up of Kiwamu's cage servo follow-up: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25416
The attached left panel shows a 12-day trend of SR3 signals. The left column is the EPICS version of the M3 witness sensors (PMON is used for the cage servo), and the middle column is the DQ version of the same signals.
At first YMON and PMON were tracking happily along with the DQ versions, but around midnight UTC on Feb 02/03, PMON and YMON got huge at the same time the cage servo went crazy, while WIT_Y_DQ didn't feel it. Later, when Kiwamu turned down the cage servo gain, WIT_P_DQ (middle bottom) went back to normal-ish, but PMON and YMON didn't.
It turns out that when you look at the non-DQ versions of these signals (attached middle panel), there are huge lines in all four M3 OSEMs of SR3 at 1.7 kHz and its harmonics, totally dominating the RMS (top). The bottom plot shows a comparison between SR3 (green) and PR3 (red); PR3 doesn't show any of these huge lines.
The DQ versions are decimated by the front end, but it seems like PMON (as well as YMON and LMON) is just a sparsely sampled version of the fast signal: no filter modules, therefore no decimation filtering, so the high-frequency junk aliases directly into low frequency (right panel).
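To illustrate the point, here is a quick numerical check with assumed rates (16384 Hz fast channel, 16 Hz slow/EPICS-style sampling); the actual OSEM and monitor rates may differ:

```python
# Quick demo: a 1.7 kHz line survives naive sparse sampling as a low-frequency
# alias, but is suppressed by a proper decimation (anti-alias filter + downsample).
# Rates are assumptions: 16384 Hz "fast" signal, 16 Hz "slow" sampling.
import numpy as np
from scipy import signal

fs_fast, fs_slow = 16384.0, 16.0
t = np.arange(0, 64, 1.0 / fs_fast)
fast = np.sin(2 * np.pi * 1700.0 * t)     # the 1.7 kHz OSEM-like line

step = int(fs_fast / fs_slow)             # 1024
sparse = fast[::step]                     # sparse sampling: no filtering at all

decim = fast.copy()
for q in (8, 8, 4, 4):                    # decimate in stages down to 16 Hz
    decim = signal.decimate(decim, q, zero_phase=True)

alias = abs(1700.0 - round(1700.0 / fs_slow) * fs_slow)
print('sparse sampling aliases the line to ~%.1f Hz' % alias)     # ~4 Hz
print('RMS of sparse-sampled signal : %.2f' % np.std(sparse))     # ~0.7, the line folds straight in
print('RMS of properly decimated    : %.1e' % np.std(decim))      # tiny, the line is filtered out
```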
In the past, the solution for oscillating satellite amps was to power-cycle them repeatedly until the oscillation stops.
(Keita, Kiwamu)
The satellite amp was power cycled several times; the 1.7 kHz line doesn't go away, a new 1.9 kHz oscillation shows up, and it won't go away either. It seems like the entire group of satellite amps now shows 1.7 kHz and 1.9 kHz lines (e.g. attached bottom).
In the attachment, the top blue trace is before power cycling, red and brown are after several rounds of power cycling, and green is with the amp powered down.
Maybe another round of power-cycling fest, and if that doesn't do it, the satellite amp swap, but not today.
1515 - 1600 hrs. local -> To and from Y-mid. Next overfill to be Friday, Feb. 12th before 4:00 pm.
Today I cut off the excess length of HV cable at the controller end, which included the suspect portion of cable previously noted, but the short did not change. I inspected the length within the raceway in the VEA but nothing looks suspicious. Thus, all of the length which is accessible looks unremarkable. There is still 10'-15' within EMT conduit in the VEA that could be exposed if pulled through the conduit body at the exterior wall penetration, but other than that there isn't much else to do barring pulling another 900' run of "not free" 10,000V rated cable. Recall that the ion pump controller was found tripped off several days ago and could not restart the ion pump due to its power limiting feature; it can only achieve 500 VDC @ 0.5 amps, implying 1000 ohms between the HV conductors. This "shorted" resistance value is duplicated with the "Megger" unit as well as with the low voltage VOM, which measures 1080 ohms.
C. Cahillane

There was some concern voiced by Alan and Peter about the LLO uncertainty budget combined with the systematic error corrections at around 100 Hz at gpstime = 1126259461. This post is merely to mirror the study I did at LLO with a similar one at LHO, explaining why we do not see the LLO 'hump' at LHO. See LLO aLOG 24796.

I have included the LHO C01 vs C03 response function systematic error and statistical uncertainty plot (see PDF 1). I began with the "perfect" C03 version of the calibration response, which includes all systematic corrections. Then, one by one, I removed each systematic correction and compared it to the original C03 response (see PDF 2):

Sensing: C_r (Plot 1)
Actuation: A_tst, A_pum, and A_uim (Plots 2, 3, and 4)
Kappas: kappa_tst, kappa_pu, kappa_C, and f_c (cavity pole) (Plots 5, 6, 7, and 8)

So you don't have to go through and view every plot yourself, I have compiled the systematic error values at 100 Hz for each of the parameters:

C_r = 0.98932
A_tst = 1.0018
A_pum = 0.99381
A_uim = 1
kappa_tst = 1.0251
kappa_pu = 0.99465
kappa_C = 1.002
f_C = 1

Total Syst Error = +0.0068

So the total systematic error at 100 Hz is < 1%, because the large kappa_tst term is countered by the C_r, A_pum, and kappa_pu systematic errors.
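As a sanity check on the quoted total, here is a small sketch that combines the individual factors listed above. It assumes the deviations from 1 simply add (equivalently, the factors multiply, to first order), which may differ slightly from the exact combination used in the calibration pipeline:

```python
# Combine the quoted 100 Hz systematic-error factors.  Assumes the deviations
# from 1 combine linearly (the first-order equivalent of multiplying the
# factors); not necessarily the exact pipeline combination.
import numpy as np

factors = {
    'C_r':       0.98932,
    'A_tst':     1.0018,
    'A_pum':     0.99381,
    'A_uim':     1.0,
    'kappa_tst': 1.0251,
    'kappa_pu':  0.99465,
    'kappa_C':   1.002,
    'f_C':       1.0,
}

linear_total = sum(v - 1.0 for v in factors.values())    # sum of deviations
product_total = np.prod(list(factors.values())) - 1.0    # full product

print('sum of deviations: %+0.4f' % linear_total)    # ~ +0.0067, matching the quoted +0.0068 to rounding
print('product - 1      : %+0.4f' % product_total)   # ~ +0.0063, the same to first order
```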
model restarts logged for Tue 09/Feb/2016
2016_02_09 10:05 h1psliss
2016_02_09 10:23 h1broadcast0
2016_02_09 10:23 h1dc0
2016_02_09 10:23 h1nds0
2016_02_09 10:23 h1nds1
2016_02_09 10:23 h1tw1
2016_02_09 10:42 h1sysecaty1plc3sdf
2016_02_09 10:43 h1sysecaty1plc3sdf
2016_02_09 10:45 h1sysecaty1plc2sdf
2016_02_09 10:49 h1sysecaty1plc1sdf
2016_02_09 10:50 h1sysecatx1plc3sdf
2016_02_09 10:52 h1sysecatx1plc2sdf
2016_02_09 10:53 h1sysecatx1plc1sdf
2016_02_09 10:54 h1sysecatc1plc3sdf
2016_02_09 10:56 h1sysecatc1plc2sdf
2016_02_09 10:58 h1sysecatc1plc2sdf
2016_02_09 10:59 h1sysecatc1plc1sdf
2016_02_09 11:50 h1iopsusey
2016_02_09 11:52 h1susetmy
2016_02_09 12:02 h1iopsusey
2016_02_09 12:02 h1susetmy
2016_02_09 12:03 h1iopsusey
2016_02_09 12:03 h1susetmy
2016_02_09 12:04 h1susetmypi
2016_02_09 12:04 h1sustmsy
2016_02_09 12:28 h1iopsusey
2016_02_09 12:28 h1susetmy
2016_02_09 12:28 h1sustmsy
2016_02_09 12:29 h1iopsusey
2016_02_09 12:29 h1susetmy
2016_02_09 12:29 h1sustmsy
2016_02_09 12:30 h1susetmypi
2016_02_09 12:35 h1hpietmy
2016_02_09 12:35 h1iopseiey
2016_02_09 12:35 h1isietmy
2016_02_09 12:37 h1alsey
2016_02_09 12:37 h1caley
2016_02_09 12:37 h1iopiscey
2016_02_09 12:37 h1iscey
2016_02_09 12:37 h1pemey
2016_02_09 13:27 h1iopsusex
2016_02_09 13:27 h1susetmx
2016_02_09 13:27 h1susetmxpi
2016_02_09 13:27 h1sustmsx
2016_02_09 13:28 h1hpietmx
2016_02_09 13:28 h1iopseiex
2016_02_09 13:28 h1isietmx
2016_02_09 13:42 h1susetmx
2016_02_09 13:46 h1iopiscex
2016_02_09 13:46 h1pemex
2016_02_09 13:48 h1alsex
2016_02_09 13:48 h1calex
2016_02_09 13:48 h1iscex
Maintenance day. New PSLISS code, new Beckhoff C1PLC2 code and associated DAQ restart.
New Beckhoff SDF system code install.
Install of faster SUS computers at both end stations, which required restart of all Dolphin-attached computers for SEI and ISC.
model restarts logged for Mon 08/Feb/2016 No restarts reported
model restarts logged for Sun 07/Feb/2016 No restarts reported
model restarts logged for Sat 06/Feb/2016 No restarts reported
model restarts logged for Fri 05/Feb/2016 No restarts reported
IM2's alignment changes after a shaking event (ISI trip) such that the alignment drive is unchanged, but the pointing of the optic (as measured by the OSEMs) is different.
These jumps in alignment are anywhere from 5 urad to 42 urad in pitch.
IM2 is the most affected, but IM1 and IM3 also show this behavior.
I've looked at six shaking / alignment change events, and recorded the amount of motion (shaking) the optic, ISI, and HEPI experienced.
The results show that the amount of HEPI-ISI shaking can vary among events that result in a change in IM2 alignment.
What is consistent is the ratio of alignment change to the average amount the optic shakes (pitch plus yaw, divided by two): the events show between 2.1 and 6.1 urad of optic alignment change per urad of average optic shaking.
This suggests that optic shaking is the cause of the alignment change; however, looking at a few events where the optic tripped but HEPI and the ISI didn't, I have not seen a significant alignment change, so this part of the IM alignment jump investigation is ongoing.
Based on my current data, the mechanism that creates an alignment jump on IM2 is a combination of optic shaking and HEPI-ISI shaking.
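For clarity, a tiny worked example of the shake-to-jump metric used above, with made-up numbers rather than measured values:

```python
# Hypothetical numbers only, to show how the metric is formed: average optic
# shaking is (pitch + yaw) / 2, and the figure of merit is the alignment jump
# divided by that average.
pitch_shake_urad = 3.0        # assumed peak pitch motion of the optic during the event
yaw_shake_urad = 1.0          # assumed peak yaw motion of the optic during the event
alignment_jump_urad = 8.0     # assumed post-event change in IM2 alignment

avg_shake = 0.5 * (pitch_shake_urad + yaw_shake_urad)   # 2.0 urad
ratio = alignment_jump_urad / avg_shake                 # 4.0
print('alignment change per urad of average shaking: %.1f' % ratio)  # falls in the quoted 2.1-6.1 range
```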
Feb. 10 2016 ~ 21:51 UTC Stopped Conlog process on conlog-test-master. Had connected to channels on Feb. 3 2016 16:38 UTC (alog 25344).
Feb. 11 2016 00:04 UTC Reconnected to and disconnected from ~99,614 channels a couple of times. Left it connected to 99,614 channels.
Before O1, LLO installed some high-UGF loops and blend filters on their HAM1 HEPI, in an attempt to better isolate some of the steering mirrors in HAM1. When I tried here, I found that our HAM1 piers weren't as stiff as LLO's in some DOFs because the piers had never been grouted. The grouting was done during O1, and Hugh got new TFs over a couple of down days. I just finished working through the commissioning scripts and have some higher-UGF loops to install. These higher-UGF loops are needed in order to get sufficient gain at a few hertz. Attached is a PDF of the loops. I'll wait for a convenient maintenance day to try installing them.