There are a few problems with the way violin damping is managed in the guardian. We have fixed some, but it would be good to have somebody look at the VIOLIN mode damping state to improve it further (to reduce the amount of time we go without checking for locklosses, and to use some for loops).
The violin mode damping state goes too long between lockloss checks. We had one lockloss earlier today caused by a large violin mode, which unlocked us early in the VIOLIN mode damping state, but the guardian didn't recognize the lockloss for 17 seconds, so things got quite rung up. Vaishali and I edited the state so that the 10 second sleep is replaced by a timer (so that locklosses can be checked continuously while waiting), but it still goes for about 11 seconds without checking for locklosses.
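As an illustration of both suggestions (a for loop over the modes and a non-blocking wait), here is a minimal hedged sketch written in the Guardian style; the mode list, gain value, channel name, and is_locked() helper are hypothetical placeholders and not the actual ISC_LOCK code, and 'ezca' is the channel-access object that the Guardian runtime provides to state code:

    from guardian import GuardState

    MODES = ['MODE3', 'MODE5', 'MODE9']   # hypothetical list of violin modes to damp

    def is_locked():
        """Placeholder for the real lockloss check (e.g. an LSC trigger test)."""
        return True

    class DAMP_VIOLINS(GuardState):
        def main(self):
            for mode in MODES:   # one for loop instead of a copy-pasted block per mode
                ezca['SUS-ITMY_L2_DAMP_%s_GAIN' % mode] = 0.1   # placeholder gain
            self.timer['settle'] = 10   # non-blocking, unlike time.sleep(10)

        def run(self):
            # run() is called repeatedly while the timer counts down, so a
            # lockloss can be caught within a cycle instead of after 10+ seconds.
            if not is_locked():
                return 'DOWN'
            return self.timer['settle']   # True only once the 10 seconds have elapsed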
Kiwamu found that the adjust damper gain function was adjusting the gain to set the filter output of each mode to 1.5e5, so we reduced the target output to 1.5e4.
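For context, this kind of gain adjustment amounts to a proportional rescaling toward the target output. A minimal, purely illustrative sketch (not the actual function), assuming the filter output scales roughly linearly with the gain at a fixed mode amplitude:

    def adjust_damper_gain(current_gain, current_output, target_output=1.5e4):
        """Rescale a damping gain so the filter output settles near target_output.

        Hypothetical illustration only: assumes output is proportional to gain.
        """
        if current_output == 0:
            return current_gain
        return current_gain * target_output / abs(current_output)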
I also saw that the LSC triggering meant that the violin damping loops didn't send huge signals to the suspensions even though the guardian took a long time to recognize the lockloss. However, in the down state we were setting both the TRAMP and the gain to 0; with a zero ramp time the gain change is applied instantaneously, which was sending a large signal to the optics. I have removed setting the TRAMP to 0, so it will keep the value it is given in the VIOLIN mode damping state (10 for most modes).
I opened WP 7005 this evening after consulting Keita in order to keep testing the next release of the nds2-client. During office hours I had run into some test failures that I needed to track down when testing on a CDS workstation. I worked remotely from about 9:30 pm to 12:30 am local time on nucws20. My work did not require accessing the local NDS or EPICS resources, just simulated or recorded data, so there was no impact on the recovery work.
The issues I had turned out to be related to the MATLAB environment setup and test scripts that did not account for MATLAB 2012b still being in use. After I got a cleaner environment and adjusted some of the test scripts for MATLAB 2012b, all the tests passed.
Unless something comes up soon, the code I was testing should be submitted as the next nds2-client release, and I will likely file a work permit to install it in the control room next week.
TITLE: 05/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: Sheila, Kiwamu, Jeff K. and Vaishali working on recovery. Made it to 26 W.
LOG:
15:18 UTC Jim running low frequency measurements on end Y ISIs.
15:44 UTC Jason to LVEA to look for a crate lid by the H2 PSL enclosure.
15:53 UTC Jason done.
16:35 UTC Jim done with measurements and reverting configuration back for locking.
16:39 UTC Kiwamu starting locking attempts.
16:59 UTC Travis driving car to receiving to load material.
16:59 UTC Christina to mid X.
17:24 UTC Aidan working on HWS computer through remote login.
17:42 UTC Dave restarting h1hwsmsr.
18:15 UTC Amber leading school tour through CR.
18:21 UTC Nutsinee to LVEA TCS cabinet.
18:53 UTC Nutsinee out of LVEA.
19:49 UTC Starting initial alignment.
19:54 UTC Dave restarting nds1.
20:02 UTC Dave restarting nds0.
20:21 UTC Initial alignment done.
20:49 UTC Gerardo to end X.
21:08 UTC Kyle to mid Y to retrieve cable.
21:14 UTC Gerardo back.
21:22 UTC Kyle back.
21:51 UTC Travis to LVEA to retrieve tools.
21:59 UTC Travis back.
22:17 UTC Jeff K. taking Nick on tour through LVEA.
22:43 UTC fw0 crashed.
22:46 UTC Jeff K. and Nick back.
23:13 UTC fw0 crashed.
23:24 UTC fw0 crashed.
00:22 UTC Starting initial alignment.
01:02 UTC Initial alignment done.
Sheila D., Patrick T. Reached 26 W at approximately 04:13 UTC. Increased the CSOFT gain by 3 dB; this seems to have improved the stability. The lines just above 20 Hz are from the a2l script. Frequency noise is high above 2 kHz. Jitter noise appears better.
The second screenshot attached is the same as Patrick's plot, with the best spectrum that we got just before losing lock.
Here are the Hartmann sensor measurements from this time. The ITMY thermal lens looks to be around 10cm in diameter, or roughly the same spatial scale as the IFO beam. This, and the lack of any strong localized features, is consistent with only uniform absorption on this optic.
ITMX shows the point absorber again.
Note: we made every effort to center the HWS beams on the optics. As yet, I have not quantified the exact center of each optic in the HWS coordinate system. The plotting code defaults to assuming that the HWS beam is exactly centered but it is feasible that we may be off center by a couple of centimeters.
The contour lines are spaced 8nm apart.
The double-pass thermal lens here is around 28 uD. Since the HWS probe beam traverses the optic twice, the single-pass thermal lens is about half that, roughly 14 uD.
Looking at the Seidel coefficients from HWSX and HWSY, I see that our TCS pre-loading scheme on ITMY is not doing a great job: it leaves a residual spherical power of 15 uD as we remove the pre-loaded CO2 power. See the attached plot. I am not sure why the HWSX data is noisier than the HWSY data.
I need to implement a weighted best fit parameter estimator for the HWS Python code. Chances are that HWSX has some noisy centroids that are skewing the measurement. When we weight the parameter fitting by the inverse of the variance of each centroid, we should get a much cleaner estimation of the spherical power.
I will implement that this week.
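For reference, inverse-variance weighting amounts to dividing each centroid equation by its standard deviation before a standard least-squares solve, so that noisy centroids contribute less. A minimal numpy sketch (the design matrix, centroid displacements, and variances here are placeholders, not the actual HWS code):

    import numpy as np

    def weighted_fit(A, y, var):
        """Inverse-variance weighted least squares.

        A   : (N, M) design matrix (e.g. the aberration basis evaluated at each centroid)
        y   : (N,)   measured centroid displacements
        var : (N,)   variance of each centroid measurement

        Dividing each row by sigma_i is equivalent to minimizing
        sum_i (y_i - (A x)_i)^2 / var_i.
        """
        w = 1.0 / np.sqrt(var)
        coeffs, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        return coeffs

With this weighting, a centroid with twice the noise gets a quarter of the weight, so a handful of bad HWSX centroids should no longer skew the spherical power estimate.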
V. Adya, S. Dwyer, K. Izumi, J. Kissel, P. Thomas. After doing the same kind of alignment reference adjustment to the POP QPDs -- i.e. moving PR2 around -- we were able to get past yesterday's problems and get through the CARM reduction and closing the arms' SOFT loops. As such, while holding just before the transition to DC READOUT, we captured new green alignment offsets (updating the green QPD offsets and green camera offsets). I attach the relevant screenshots showing the difference between the new values and the old values; the Y arm QPD B yaw moved the most. The changes to the green camera offsets were in the same direction, and of roughly the same size, as the adjustment Sheila and I made to the offset manually last night. We lost lock due to a 0.45 Hz dP/dTheta oscillation, and are re-running initial alignment and trying again.
J. Kissel. Having worked with Bubba and the Apollo crew, we've finally settled on a stable VEA temperature that's roughly where we were before (EY dead on, EX about 0.5 [deg C] / 1 [deg F] colder). However, with the upgrade, we now see a diurnal oscillation in temperature at the 0.25 (EY) and 0.5 [deg C] (EX) level (0.5 and 1 [deg F], respectively). While there are spikes of 0.1 [deg C] every ~24 hours that coincide with the peaks of the oscillation, every other temperature sensor around the VEAs reports some sort of temperature oscillation that fluctuates more than this.

I attach 20-day trends for each end station, three types for each:
(1) The FMCS temperature sensor against the most calibrated / precise sensors in and around the VEA (with the PCAL receiver module being most representative). Here we see that the VEAs (FMCS, PCAL) were stable compared to the exterior electronics rooms (EBAYs) prior to the upgrade (16 days ago for EY, 9 days ago for EX). Now it appears that the VEA is in sync (or delayed / anti-correlated) with the EBAYs.
(2) The PCAL receiver module temperature against the vertical position of the three suspensions in the end stations, just in case you needed proof that the changes in temperature are real, and larger than before the upgrade.
(3) All temperature sensors in the VEA. More proof that even the worst, uncalibrated temperature sensors have seen the fluctuations over the past 20 days.

We haven't yet recovered the IFO fully, so it's difficult to say yet how our alignment (and other things) are really impacting the duty cycle of the IFO, but we should continue to keep an eye on this. These temperature oscillations are definitely slower than the bandwidth of the ASC loops, so we should be able to control for such fluctuations -- T1100595 suggests we have about 500 [urad] of range -- but it's a question of whether we'll need to do initial alignment more often between lock stretches. Plus there's always a risk of temperature impacting electronics in some terrible unknown way, but given that the EBAY electronics have been oscillating in temperature since before the upgrade, it may not be any worse than before.
Keita noticed that the PSL power watchdogs were off and alerted me to this. When we went into the PSL yesterday (1st bullet point), we turned off the power watchdogs; this is standard procedure when working in the PSL enclosure (if something we do briefly affects the output power of the laser to the point that it would trigger the watchdog, we don't want it taking the whole system down). Apparently when we left the enclosure we forgot to turn the watchdogs back on, which of course defeats the purpose of having power watchdogs in the first place. Luckily, nothing happened in the last ~26 hours and the laser is still running. We (Team PSL) need to be more vigilant about this in the future.
I turned them back on at approximately 23:14 UTC (16:14 PDT).
Attached is an RGA scan of the full, or open, LHO vacuum volume prior to isolating and venting the Vertex volume on May 8th (see h0rgacs_SEM_analog_05082017a). Also included is a scan taken after the Vertex was pumped down and re-combined with the rest of the vacuum volume (see h0rgacs_SEM_analog_05262017a).
Thanks for combining them. Much more convenient to compare.
After being super reliable for all of O2, h1fw0's daqd stopped running at 15:42 PDT. The error log suggests it was not able to write to its QFS disk system and its internal buffers filled up. Monit restarted daqd, and it has been running for 20 minutes so far, but second trend frame file writing looks too slow.
[Fri May 26 15:41:55 2017] main profiler warning: 1 empty blocks in the buffer
[Fri May 26 15:41:56 2017] main profiler warning: 0 empty blocks in the buffer
[Fri May 26 15:41:57 2017] main profiler warning: 0 empty blocks in the buffer
[Fri May 26 15:41:58 2017] main profiler warning: 0 empty blocks in the buffer
[Fri May 26 15:41:59 2017] main profiler warning: 0 empty blocks in the buffer
....
h1fw0's daqd crashed again at 16:13. This was not so clearly a file issue; it went into a retransmission storm.
Dan says nothing is changing on the LDAS system. In the bad old days when the DAQ was unstable, power cycling the Solaris QFS/NFS machine sometimes helped, so I power cycled h1ldasgw0 and got h1fw0 writing again, recovered at 16:36. It has been running for 900 seconds so far.
For the record, the procedure followed for power cycling h1ldasgw0 was:
(root on h1fw0) stop monit running
(user on workstation) kill daqd on h1fw0 via telnet
(root on h1fw0) umount /ldas-h1-frames
(root on h1ldasgw0) power off
Power h1ldasgw0 back up with front panel power button
(root on h1ldasgw0) manually mount the QFS file system, manually export it via NFS
(root on h1fw0) manually mount /ldas-h1-frames, start monit. At this point monit starts daqd.
When Dan goes online again, he says he will check the QFS and SATABOY logs.
Work Permit | Date | Description | alog/status |
6659 | 5/25/2017 14:01 | Move Pcal periscope pre-alignment cradles (white powder-coated steel frames) from near the NE corner of the H2 Laser Area Enclosure to the outside of the vertex, likely in the Large Item Access Area and eventually to the Staging Building. To be done during Tuesday maintenance next week. | |
6658 | 5/24/2017 12:59 | I will turn h1dns1 server off to survey memory specifications to upgrade it from 48GB to 72GB as recommended on FRS 8198 | |
6657 | 5/24/2017 7:46 | Test air flows on both end station Axivane supply fans. Calibrate if necessary. | |
6656 | 5/23/2017 15:50 | Check HEPI Accumulators' charge: Deisolate platforms and spin down HEPI pumps, check and charge Accums AR. Return to full operation. | duplicate of WP #6655 |
6655 | 5/23/2017 15:34 | Check HEPI Accumulators' charge: Deisolate platforms and spin down HEPI pumps, check and charge Accums AR. Return to full operation. | 36356, 36373 |
6654 | 5/23/2017 13:19 | Replace leaking ball valve assembly on Crystal Chiller return line. Will require shutting down the PSL while the valve is being replaced. | 36343, 36393 |
6653 | 5/23/2017 12:59 | Connect and run a pump cart at HAM11 annulus pump port (South side of HAM11). Replace whatever is broken. May require admitting gas into the annulus volume if the pump body requires replacement. | |
6652 | 5/23/2017 9:07 | Connect BRS vacuum pressure readback signals to Vacuum System at end stations. Vacuum controls chassis S1600286 and S1600287 will need to be modified. Wiring diagram E1500368 and E1500369 will need to be updated. Software model will need to be updated to add new channels. | 36396 |
6651 | 5/23/2017 8:49 | Perform regular PCal calibration measurements at both end stations. This will require the end station to be laser hazard while the work is being performed. | 36431 |
6650 | 5/22/2017 18:00 | Test 118MHz modulation & replace IMC RFPD. | 36354 |
6649 | 5/22/2017 12:42 | Increase upper limit of thermocouple temperature validity check in scripts to autofill CP3 and CP4. | |
6648 | 5/22/2017 11:35 | Measure the transfer function of the RFPD (24 MHz for locking the IMC) in situ with the AM laser at IOT2L in order to determine whether the installed unit has the correct resonance at 24 MHz or not. During the measurement the IMC will not be able to lock. | |
6647 | 5/22/2017 11:13 | Remove EX and EY Instrument air cell phone alarms. | |
6646 | 5/22/2017 10:58 | Replace 24V power to one of the Inficon gauges on BSC6. Power to both Inficon gauges will now come from the Vacuum rack. A new interlock signal cable will need to be pulled from BSC6 chamber to the ESD Safety Relay interlock box. ESD HV power supplies will need to be powered off for part of this work. | |
6645 | 5/22/2017 6:20 | Erect scaffolding in the SW corner of the CER room to access a very strategically placed VAV box for the HVAC controls upgrade. | |
6644 | 5/20/2017 18:50 | This Work Permit replaces WP #6643 Monday, 5/22 -> Turn off High Voltage sources in Vertex vacuum volume. Dump unpumped Vertex RGA volume into combined Vertex+YBM+XBM turbo-pumped volumes. Energize Vertex RGA filament. Energize IP1. Valve-in IP2-IP6. Valve-out Vertex, YBM and XBM turbo pumps. Tuesday, 5/23 -> Take RGA scan of combined Vertex, YBM and XBM vacuum volumes. Shut down Vertex, YBM and XBM turbos. Dump GV5 and GV7 unpumped gate annulus volumes into annulus ion pumped volume or attached pump cart if required. Disconnect pump cart(s) if applicable. Open GV5 and GV7. | 36362, 36380 |
6643 | 5/19/2017 19:01 | Monday morning, May 22nd -> While still pumped by Vertex, YBM and XBM Turbos, slowly "crack" open the 2 1/2" metal isolation valve that separates the Vertex RGA from the Vertex while closely monitoring the Vertex pressure (dump RGA volume into pump cart prior to opening 2 1/2" isolation valve to a significant degree if deemed necessary) * Energize Vertex RGA filament | |
6642 | 5/19/2017 14:43 | NOVA Film Crew will set up on the roof of the OSB, a minimum of 6 feet from any edge to shoot down the arms. Escorted by Vern and myself. |
I've started the process of archiving the raw minute trend files from h1tw1 as its SSD-RAID is 87% full.
The first part of this procedure is to archive the minute_raw directory on h1tw1 as minute_raw_1179862578 and create a new empty minute_raw within the five minute quiet period when h1tw1 is not writing minute trends. This permitted the archival to be done without stopping and restarting the raw minute trender.
Then the daqdrc files on h1nds1 and h1nds0 were temporarily modified to read the past 140 days of minute trends from h1tw1's archive directory. This required a restart of the daqd processes on these machines:
h1nds1 | restart 12:54 PDT |
h1nds0 | restart 13:02 PDT |
I verified that last day's minute trends are now being read by h1nds1 from the online and archived minute_raw directories on h1tw1.
The next step in the process is to copy all the files from h1tw1 /trend/minute_raw_1179862578 over to the MSR SATABOY using h1fw1. This is done using a low priority flag, so as not to interfere with h1fw1's frame writing. It is a slow process, taking about 3 days, which allows the wiper script to keep the SATABOY RAID from filling.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 17 seconds. TC B did not register fill. LLCV set back to 17.0% open. Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 107 seconds. LLCV set back to 38.0% open.
Lowered CP3's manual LLCV's %open value to 15% down from 17%. Lowered CP4's manual LLCV's %open value to 37% down from 38%.
H1 Status:
SITE:
Patrick, Kiwamu and I reversed these IM moves this morning. It looks like after this move the recycling gain dropped by about 2%, the reflected power dropped by about 13% (which is not necessarily a bad thing), and for some reason the MC2 trans sum became noisier. I do not understand why the MC2 trans sum would become noisier, since the mode cleaner alignment didn't change during this time.
Right now we are having trouble locking because of low recycling gain, which is why we have reverted.