Similar to the HEPIs done last week, I took the text file generated by Barker, which is extracted from the Guardian logs, and removed the truncated channels from that list. The MASTERSWITCH was also added to the list. I used this as the do-not-monitor channel list and saved the file with the safe.snap files in /opt/rtcds/userapps/release/isi/h1/burtfiles.
Next, the safe.snap is modified to set the monitor bit on all channels that are not RO and not fields. This is then loaded into the FE with the LOAD TABLE ONLY button.
After confirming a successful load and noting the DIFF list, the safe.snap is then modified with the do-not-include list. When the TABLE is LOADED again, most of the diffs go away, and the remaining differences are the ones we care about with respect to the most recent safe.snap file: either the channel is set wrong or the snap needs updating.
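The list-preparation step above can be sketched roughly like this. This is a minimal sketch, not the actual procedure: the example channel names and the MASTERSWITCH channel string are my own placeholders, not the contents of Barker's file.

```python
# Sketch of building the do-not-monitor list: strip the truncated
# channels from the Guardian-derived list, then append the MASTERSWITCH.
# All channel names below are hypothetical placeholders.

def build_do_not_monitor(channels, truncated, masterswitch="ISI-HAM2_MASTERSWITCH"):
    """Return the channel list with truncated entries removed and the
    MASTERSWITCH channel appended."""
    keep = [ch for ch in channels if ch not in set(truncated)]
    if masterswitch not in keep:
        keep.append(masterswitch)
    return keep

channels = ["ISI-HAM2_BLND_X_GAIN", "ISI-HAM2_ISO_Y_TRAMP", "ISI-HAM2_TRUNC"]
truncated = ["ISI-HAM2_TRUNC"]
print(build_do_not_monitor(channels, truncated))
```

The resulting list is what gets saved alongside the safe.snap files for use as the do-not-monitor list.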
All HAM-ISIs are good except HAM3; some blend diffs remain there as we test configurations for the 0.6 Hz peak issue.
I'll work on the BSC-ISI soon.
This exercise exposed a problem: if you have more than 40 diffs, the display only shows 40 and only lists 40, making you think there are only 40 when there are more. I've filed bug 796.
07:50 Cris to EX
08:00 Karen to EY
08:48 Bubba & Co to EX - no VEA
09:15 Karen leaving EY
09:21 Fil out to HAM 2 area for video work.
09:54 R Savage out to LVEA to look at viewports
10:00 Ron Carlson contractors on site for R McCarthy
10:02 Corey and Rick out of LVEA
10:29 McCarthy out to HAM6 area w/Daniel approval
11:03 McCarthy out of LVEA
11:09 Corey back out to squeezer bay to look for parts w/comm approval
13:40 Gary Traylor into the optics lab.
15:50 Gary out of optics lab
10:00 AM Guideline in effect for Commissioning starting today.
SEI - work on installing blend filters from LLO on HAM3
SUS - working on SDF
CDS - video work around HAM2. Microphone problem is fixed.
3IFO - no work in the LVEA this week; it will be postponed for some time.
Corey moving things in Squeezer bay.
Beam Tube Cleaning to resume today on X arm - Apollo on site.
Sudarshan mentioned a desire to work on P-Cal at EX and EY
Nutsinee reporting to LHO for operator position
Thomas Abbott visiting. Will be helping with Drift monitor/ P-Cal etc
The ITMY OPLEV SUM has been oscillating with a roughly 10-minute period for a while. Apparently YAW picks this up with 3 urad pp amplitude, making it unusable as an angle monitor (first plot).
It's slowly getting worse. On Tuesday the oscillation was there, but it was about 1 urad pp (2nd plot).
Something happened on Wednesday afternoon and the SUM dropped by a factor of 2 (third plot), and the noise jumped up right after that event.
Though this is not blocking the locking activity, I asked Doug if he can swap the laser during maintenance. If not, he might try early morning on Wednesday or Thursday.
While I can't comment on the apparent noise increase, the sawtooth with the 10-minute period looks like the problem we had last September. Check that the copper lines carrying the instrument air are not touching the transmitter pylon; the instrument air line is what causes the 10-minute-period sawtooth. See alog 13932 for when this happened last September, with pictures.
The user environment for controls on video0 - video6 and projector0, projector1 has been modified to eliminate redundant and outdated command aliases. The aliases are now supposed to be set by a common file of aliases at login. If there are any problems running programs that used to work, let me know. This was driven by an outdated "sitemap" alias found on these computers.
Thank You Jim.
Laser Status:
SysStat is good
Output power is 29.3 W (should be around 30 W)
FRONTEND WATCH is GREEN
HPO WATCH is RED
PMC: It has been locked for 6 days, 1 hr 15 min (should be days/weeks)
Reflected power is 2.0 W and PowerSum = 25.6 W (Reflected power should be <= 10% of PowerSum)
FSS: It has been locked for 1 hr 37 min (should be days/weeks)
Threshold on transmitted photo-detector PD = 1.59 V (should be 0.9 V)
ISS: The diffracted power is around 4.7% (should be 5-15%)
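As a quick sanity check, the nominal ranges quoted in parentheses above can be encoded as simple comparisons. This is a sketch using only the numbers quoted in this entry; the function and field names are mine, not official channel names or limits.

```python
# Encode the nominal PSL ranges from the status entry above as booleans.
# Thresholds come from the parenthetical notes in the log, nothing more.

def psl_checks(output_w, pmc_refl_w, pmc_powersum_w, iss_diffracted_pct):
    return {
        "output_power_ok": 27.0 <= output_w <= 33.0,         # should be ~30 W
        "pmc_refl_ok": pmc_refl_w <= 0.10 * pmc_powersum_w,  # <= 10% of PowerSum
        "iss_ok": 5.0 <= iss_diffracted_pct <= 15.0,         # should be 5-15%
    }

print(psl_checks(29.3, 2.0, 25.6, 4.7))
```

With today's values, the output power and PMC reflected power pass, while the ISS diffracted power (4.7%) falls just below the 5-15% nominal range.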
People have asked for my alignment/lock acquisition DTT and StripTool templates. They can be found here:
/ligo/home/alexan.staley/Public/DTTTemplate
/ligo/home/alexan.staley/Public/StripToolTemplates/AlignmentStripTemp
Following Jeff's log last week, we wanted to try a different blend filter on HAM3 to see if we could get any improvement. Arnaud suggested a 500 mHz blend that is installed on the Livingston HAMs, shown as the green traces in the first attached image (the 01_28 blend is shown in blue, the 250 mHz blend in red). I copied the filters over this morning and turned the new filter on in the RY dof. It didn't do anything good, as shown in the second and third images. The current configuration is shown in solid lines (normal 01_28 blends everywhere); dashed lines are with the 500 mHz blend on RY only. The 0.6 Hz peak disappeared, but a new one at 1.14 Hz appeared, there's some more junk just below 2 Hz, and it looks like a bunch of CPS noise is being reinjected above 1 Hz. I've returned the ISI to the nominal configuration.
model restarts logged for Sun 25/Jan/2015
2015_01_25 03:28 h1fw1
2015_01_25 06:40 h1fw0
2015_01_25 09:43 h1fw0
2015_01_25 17:49 h1fw0
2015_01_25 23:05 h1fw0
all unexpected restarts. Conlog frequently changing channels report attached.
Both x and y arm were locked for most of the past 18 hours. The attached plot shows the tidal motion together with the alignment controls. The only clear correlation seems to be ETMY/TMSY yaw.
Currently, iscex reports maximum CPU times of 29µs in an average 1-second period with an absolute maximum of 36µs. iscey reports average maxima of 28µs and an absolute maximum of 33µs. The fiber delay is 21.5µs for the X arm and 21.7µs for the Y arm. The absolute worst-case delay to the corner is then 58µs and 55µs for iscex and iscey, respectively. This is still below the cycle time of 61µs. However, the corner lsc model still reports IPC errors. Something doesn't add up. The attached plot shows the IPC error counts in the corner lsc, their timestamps, and the iscex/y CPU times. The IPC errors are latched when they occur. This has the unfortunate side effect of making the first twelve plots in the attachment essentially useless, except for indicating that "errors are happening".
In contrast, the lsc sees average maxima of 35µs with an absolute maximum of 40µs. Both susetmx and susetmy report around 10 IPC errors per second, each and every second.
This shows the error rate in the end station sus together with lsc CPU maximum times.
There are some additional delays:
1) Propagation delay for every RFM card the message transits. The vendor reports 0.5µs per adapter. LHO has 6 cards on each arm (so up to 3µs). In one direction the message may traverse very few cards, while the other direction can have the maximal number. Removing the unneeded card on the end-station SEI machine may help at LHO.
2) PCIe bus contention. On the current front-end computers, we already see bus contention issues with receiving all the ADC interrupt alerts in a timely manner. Delays of a few µs can happen. This may affect traffic to/from the RFM cards as well. Testing on newer front-end computers shows both reduced real-time loop time (faster processor) and reduced variance (better/faster bus controller). This is also a likely path forward.
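For illustration, folding the vendor's per-adapter RFM figure into the measured numbers gives a rough worst-case budget. This is a back-of-the-envelope sketch assuming the pessimistic case where all 6 cards are traversed; the function name and structure are mine.

```python
# Rough IPC delay budget: measured CPU max + fiber delay + RFM card
# propagation (0.5 us per adapter per the vendor, up to 6 cards per arm).

def worst_case_delay_us(cpu_max, fiber, rfm_cards=6, per_card=0.5):
    return cpu_max + fiber + rfm_cards * per_card

cycle_time = 61.0  # us cycle time quoted above

for name, cpu_max, fiber in [("iscex", 36.0, 21.5), ("iscey", 33.0, 21.7)]:
    total = worst_case_delay_us(cpu_max, fiber)
    print(f"{name}: {total:.1f} us, margin {cycle_time - total:.1f} us")
```

With all 6 cards in the path, iscex's margin to the 61µs cycle shrinks to roughly half a microsecond, which is consistent with the suspicion above that something doesn't add up.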
I followed the Dan Hoak prescription from the December OM / RM damping work and applied it to the IMs today.
The filters are now the same for all the IM DoFs. Also, each of the IMs now has the same set of L, P, and Y gains (-20, -0.02, and -0.04 respectively). The low pass I installed is almost the same as what's at LLO, but slightly less aggressive, so as to allow for more damping later if we want.
I have also turned on limits of 555 counts on all of the damping loops. This gives them a huge margin, but better than what they had, which was no limits.
I reduced the pitch gains from -0.04 to -0.02 since they were overdamped before. From the attached in-loop spectra (with the new settings), you can see that there are still some anomalies, but all in all I think the IMs can now be considered OK and can be left alone until we want to make a jitter noise budget to get past 100 Mpc or so.
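For reference, one way to enumerate the limit channels for all the IM damping loops is sketched below. The H1:SUS-IM*_M1_DAMP_*_LIMIT naming is my guess at the usual filter-bank pattern and has not been checked against the actual model.

```python
# Hypothetical sketch: build the list of damping-loop LIMIT channel names
# for the IMs. Channel naming follows the common H1 SUS filter-bank
# pattern but is an assumption, not taken from the model.

def im_limit_channels(ims=("IM1", "IM2", "IM3", "IM4"), dofs=("L", "P", "Y")):
    return [f"H1:SUS-{im}_M1_DAMP_{dof}_LIMIT" for im in ims for dof in dofs]

chans = im_limit_channels()
print(len(chans), chans[0])
```

One would then set each of these to 555 with an EPICS client (e.g. caput) to apply the limits described above.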
Elli, Sheila, Alexa, Rana, Evan
Since we haven't been able to reliably transition ALS COMM to sqrt(TRX+TRY), we decided we'd back off a bit and try the transition with a single arm only (i.e., no DRMI).
The sequence that we found worked for us is as follows:
For most of this process we had a GPIB-controlled SR785 hooked up to the common mode and summing node boards in order to monitor the relative strengths of the ALS COMM and sqrt(TRX) signals. For the excitation we drove EXC A on the common-mode board; for ALS COMM we monitored TEST1 on SUM A, and for sqrt(TRX) we monitored TEST2 on SUM A. This was incredibly helpful for sorting out how to properly shape sqrt(TRX) and how to engineer a stable crossover during the transition.
And if anyone is looking for the freshest copy of the 40m GPIB scripts, they are maintained by Eric Quintero on github: https://github.com/e-q/netgpibdata
At around 3:30 this morning, the ETMX T240s started to drift to some large values. This morning we were having trouble locking the X arm with green light, and noticed that the REFL CNTRL output was huge, indicating that the cavity length was changing by ~20 um in about 10 seconds. We looked at the ISI screens and saw that the T240 mons were red. After interrupting Jeff's Saturday morning errands, we decided to try zeroing the T240s by turning on FM6 (labeled AutoZ) in the T240 X filter banks; when we did this, the ISI tripped. As it started to isolate, the outputs of the T240s were still large (though not as large as they had been) and the stage 2 watchdog tripped; it then isolated, and the cavity motion is now much, much less.
The first attached screenshot shows the green laser control signal, roughly calibrated in um; you can see the abrupt change when we tripped the ISI and the much smaller fluctuations after. The second screenshot shows a trend of the outputs over the last two days.
I've looked at this a bit more, but I can't find anything obvious. At ~11:40 UTC, something agitated the ISI and it stayed that way until 21:30, when Sheila tripped the ISI (T240 time series trend shown in the first screenshot). The ETM optic was in a similar state and returned to normal after the ISI tripped. The ground sensors don't show anything (second plot; the reference is from 8:00 UTC, the active trace from 12:00), and I didn't see any evidence that the ISI isolation loops were ringing (last screenshot; I only show a few, but I looked at them all).
I have confirmed that the Guardian reports (displays a notification) when the filter module is not in the correct state. This may not be true for other subsystem Guardians. Betsy and I saw this was the case on some ETM filter modules that Dave's list reports Guardian touched. The monitoring that Guardian does for the SEI channels may need to be coded explicitly.
This has nothing to do with the new SDF functionality, nor has guardian behavior changed. The SEI guardian systems were programmed from the beginning to notify on changes to any of the controller filter banks. This is specific to SEI and is not being done by any of the other subsystems.