Sheila commented out the Guardian elements that reset the WHAM6 watchdogs at specific times; that functionality has been superseded by the ISI model upgrades installed on Tuesday.
I asked Sheila to remove the resetting that the Guardian was doing so that the model's bleed-off feature has a chance to prove itself functional.
See E1500406 for details. In summary, saturations that occur in a given minute will clear 60 minutes later, so if there are no saturations for an hour, the saturation counter should be back to zero. Saturations can be cleared manually with the H1:ISI-HAM6_SATCLEAR button labeled CLEAR SATURATIONS. There may be a delay between pressing this button and the actual clearing of the saturations.
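For illustration, here is a minimal sketch of the bleed-off bookkeeping as I read E1500406. The class and method names are made up; the real logic lives in the ISI front-end model, not in Python.

    from collections import deque

    class SaturationBleedOff:
        """Illustrative bookkeeping for the HAM6 saturation bleed-off.

        Saturations are binned per minute; each minute's bin is dropped
        60 minutes after it was accrued, so an hour with no new saturations
        returns the total to zero.  (Hypothetical names; not the model code.)
        """

        WINDOW_MINUTES = 60

        def __init__(self):
            # one entry per minute: (minute_index, saturation_count)
            self._bins = deque()

        def record(self, minute, count=1):
            """Accrue `count` saturations during minute `minute`."""
            if self._bins and self._bins[-1][0] == minute:
                self._bins[-1] = (minute, self._bins[-1][1] + count)
            else:
                self._bins.append((minute, count))

        def total(self, minute):
            """Current saturation count as of minute `minute`."""
            # drop bins older than the 60-minute window (the "bleed off")
            while self._bins and minute - self._bins[0][0] >= self.WINDOW_MINUTES:
                self._bins.popleft()
            return sum(c for _, c in self._bins)

        def clear(self):
            """Manual clear, analogous to pressing CLEAR SATURATIONS."""
            self._bins.clear()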
Chris S. 90%, Joe D. 70%, 7/24-8/3/15
The crew cleaned the original caulking and installed metal strips on 853 meters of enclosure. We were starting to run low on both caulking and aluminum strip material, so I have ordered more of both. I have also ordered some battery-powered angle grinders with soft bristle brushes for cleaning the existing caulking, for better adhesion of the new caulking. We had been cleaning it by hand, which was a slow process; hopefully this will expedite the cleaning and spare the crew's arms somewhat.
Scott L., Ed P., Rodney H.
8/3/15: The crew moved the lights and cleaned 71 meters of tube, ending 5.3 meters east of HSW-2-034.
8/4/15: We relocated the support vehicles and all related equipment and cleaned 74.3 meters of tube, ending at HSW-2-030. Tested the cleaned sections; results are posted in this entry.
Had the IFO at Engage_ACS while Kiwamu was running some OMC checks. The IFO lost lock, which appears to correspond to a sudden wind gust of ~25 MPH. The posted lockloss plot shows this gust.
We did some further investigation of this event using the lockloss tool (alog 20337). The first 48 channels that showed anomalous behavior before the lockloss were plotted, and a complete list of all misbehaving DQ channels is given in a text file, ordered by time before lockloss. It appears there was a violent ground motion, and many coil drivers were working hard to react to it before they finally failed.
==================================================================================================================================================
There was a mistake when I filtered the data into different frequency bands, so the plots I posted yesterday did not make much sense. The error has been fixed and new plots are attached.
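For reference, the per-band filtering is just a zero-phase band-pass applied to each band. Below is a minimal sketch of that step, not the script actually used; the sample rate, band edges, and stand-in data are placeholders.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def bandpass(data, fs, f_lo, f_hi, order=4):
        """Zero-phase band-pass of a time series between f_lo and f_hi (Hz)."""
        sos = butter(order, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
        return sosfiltfilt(sos, data)

    # Example: split one channel into a few bands (placeholder values).
    fs = 2048.0                            # sample rate of the channel
    data = np.random.randn(int(600 * fs))  # stand-in for 10 min of real data
    bands = [(0.03, 0.1), (0.1, 0.3), (0.3, 1.0), (1.0, 10.0)]
    filtered = {band: bandpass(data, fs, *band) for band in bands}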
We moved the Pcal lines to the first set of oscillators to comply with the CAL-CS model. The new settings are listed below (see the sketch after the list).
X-end:
Oscillator 1: 3001.3 Hz - 39322 cts amplitude
Y-end:
Oscillator 1: 36.7 Hz - 125 cts amplitude
Oscillator 2: 331.9 Hz - 2900 cts amplitude
Oscillator 3: 1083.7 Hz - 15000 cts amplitude
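Collected in one place, and as a sketch of how one might script such a change. The channel prefixes and field names below are hypothetical placeholders, not the real Pcal oscillator channels, and this is not the procedure we actually used.

    from epics import caput  # pyepics

    # Hypothetical channel naming; the real Pcal oscillator channels differ.
    PCAL_LINES = {
        'X': [(1, 3001.3, 39322)],
        'Y': [(1, 36.7, 125), (2, 331.9, 2900), (3, 1083.7, 15000)],
    }

    for end, lines in PCAL_LINES.items():
        for osc, freq_hz, amp_cts in lines:
            prefix = 'H1:CAL-PCAL%s_OSC%d_' % (end, osc)  # placeholder prefix
            caput(prefix + 'FREQ', freq_hz)               # placeholder field names
            caput(prefix + 'AMPLITUDE', amp_cts)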
In order to fix some of the issues with the OFFLINE state seen recently, I had to move some things around on the graph and make a few protected states.
Previously, the OFFLINE state would misalign MC2 and turn off the inputs to the IMC servo board. Over the weekend this was found not to be enough, because drive signals were still being sent to MC2, which caused it to trip.
Now when OFFLINE is requested, it will also run through logic similar to the DOWN state, in addition to what it did previously. This ensures that no drive signals are being sent to MC2 while it is misaligned and keeps it in a safe state.
If a SEI platform is not in its nominal state, the node checker will bring IMC_LOCK into the MOVE_TO_OFFLINE state, which executes the code mentioned above and arrives at OFFLINE; but if OFFLINE is requested, it will go through DOWN, then MISALIGNING, and then arrive at OFFLINE (the "brute force method", as Jamie described it). This roundabout way of doing things was needed to allow the DOWN state to remain a GOTO state.
I tested this by bringing IMC_LOCK from LOCKED to OFFLINE and back to LOCKED; then I took HAM3 to DAMPED, which brought IMC_LOCK to OFFLINE successfully!
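For anyone curious about the shape of the change, here is a rough guardian-style sketch of the graph logic described above. This is not the actual IMC_LOCK code; the state bodies and helper functions are empty placeholders.

    # Guardian-style sketch only -- not the real IMC_LOCK module.
    from guardian import GuardState

    def turn_off_mc2_drive_signals():
        pass  # placeholder for shutting off drive signals to MC2

    def disable_imc_servo_inputs():
        pass  # placeholder for turning off the IMC servo board inputs

    def misalign_mc2():
        pass  # placeholder for misaligning MC2

    class DOWN(GuardState):
        goto = True                      # DOWN remains a GOTO state
        def main(self):
            turn_off_mc2_drive_signals()
            disable_imc_servo_inputs()
            return True

    class MISALIGNING(GuardState):
        request = False                  # protected state
        def main(self):
            misalign_mc2()
            return True

    class MOVE_TO_OFFLINE(GuardState):
        request = False                  # protected: reached via the graph, not requestable
        def main(self):
            # run DOWN-like logic plus the misalignment before landing in OFFLINE
            turn_off_mc2_drive_signals()
            disable_imc_servo_inputs()
            misalign_mc2()
            return True

    class OFFLINE(GuardState):
        def run(self):
            return True                  # sit with MC2 misaligned and drives off

    edges = [
        ('DOWN', 'MISALIGNING'),         # requested path: DOWN -> MISALIGNING -> OFFLINE
        ('MISALIGNING', 'OFFLINE'),
        ('MOVE_TO_OFFLINE', 'OFFLINE'),  # path used when a SEI platform leaves nominal
    ]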
Kyle, Gerardo
0935 hrs. local -> Valved-in RGA to X-end with N2 cal-gas on
0955 hrs. local -> Valved-out N2 cal-gas
1005 hrs. local -> Valved-out NEG from X-end
1015 hrs. local -> Began NEG regeneration (heating)
~1145 hrs. local -> NEG regeneration (heating) ends -> Begin NEG cool down
1240 hrs. local -> Valved-in NEG to X-end
Data attached (see also LIGO-T1500408). Leaving RGA valved-in to X-end, N2 cal-gas valved-out and filament off.
Attached is a plot of the pressure inside of the NEG pump's vessel during regeneration, along with temperature.
Temperature started at 22 °C and eventually reached 250 °C.
Pcal Team,
During maintenance and calibration yesterday, we found that the PCAL AA chassis (S1400574) at End X has problems with channels 5-7. Channel 5 is dead, and channels 6 and 7 are railed at ~15000 cts. These channels are connected to a DB9-to-BNC chassis (D1400423) at the other end. We isolated this unit from the AA chassis to determine where the problem lies and confirmed that it is in the AA chassis.
Fil, Sudarshan
We tried power-cycling the AA chassis to see if that would solve the problem. It did not, so we replaced the broken AA chassis with a spare one (S1102791) and brought the broken one back to the EE shop for troubleshooting. We will swap the original back in once it is fixed.
There has been some speculation that the huge glitches in DARM on weekends and in the middle of the night might be beam tube particulate falling through the beam. The absence of correlated events in auxiliary channels (Link) and the lack of saturations have not helped dissipate this speculation.
I think that we can test the, in my mind unlikely, hypothesis that these huge glitches are particulate glitches by comparing rate variations to what we would expect. If the glitches are produced by a constant ambient rate of particles falling through the beam, then we would not expect large gaps like the one at the beginning of the Aug. 1 run that Gabriele analyzed for the above linked log (see attached inspiral range plot). This is a fairly weak test when applied to this one day: I calculate that the distribution of glitches on Aug. 1 is only 20% likely to be consistent with a constant rate. But perhaps DetChar could strengthen this argument by looking at future variations in rates to test the hypothesis that the rate is constant. I checked that there was no cleaning or wind above 10 MPH for the Aug. 1 period.
If bangs during cleaning on July 30th had freed up some particulate that then fell over the next few days, and this dominated the glitch rate, then the expected rate would not be constant but exponentially declining, starting at the last cleaning. Since the gap was at the beginning of the Aug. 1 run, this would be even more unlikely than 20%. Bubba keeps a record of cleaning, so we could also test for exponential declines in rates.
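As a sketch of the kind of rate-consistency check meant here (not the calculation behind the 20% number): under a constant rate, the glitch times should be uniform over the observation span, which a KS test can check, and a rate decaying exponentially since the last cleaning can be tested the same way after a probability-integral transform. The glitch times in the example are made up.

    import numpy as np
    from scipy import stats

    def constant_rate_pvalue(glitch_times, t_start, t_stop):
        """KS-test p-value for the hypothesis that glitch times come from a
        constant rate, i.e. are uniform over [t_start, t_stop]."""
        t = (np.asarray(glitch_times) - t_start) / (t_stop - t_start)
        return stats.kstest(t, 'uniform').pvalue

    def declining_rate_pvalue(glitch_times, t_clean, tau, t_stop):
        """Same test against a rate decaying as exp(-(t - t_clean)/tau)
        since the last cleaning at t_clean (tau in the same units)."""
        t = np.asarray(glitch_times) - t_clean
        span = t_stop - t_clean
        # probability-integral transform: CDF of the truncated exponential
        cdf = (1 - np.exp(-t / tau)) / (1 - np.exp(-span / tau))
        return stats.kstest(cdf, 'uniform').pvalue

    # Example with made-up times (hours into a lock stretch):
    times = [0.2, 3.1, 3.4, 4.0, 4.2, 5.5, 5.9, 6.3]
    print(constant_rate_pvalue(times, 0.0, 6.5))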
But for starters, maybe DetChar could check for consistency with a constant rate for those glitches that are not associated with saturations, have auxiliary channel signatures similar to known particulate glitches (e.g. Link, and more to come), and happen on days without cleaning (weekends for sure), and with wind under 10 MPH. Since particulate glitches are likely to be an ongoing concern for some, and since glitch rate statistics can be a good discriminant for particulate glitches, I think that it would be worth setting up this infrastructure for rate statistics of unidentified glitches, if it doesn't already exist.
Also good to look for potential variation in rate due to other environmental conditions in addition to wind -- temperature (absolute or derivative) would be good to test.
I can't find the posts now, but several months ago an intermittent issue with ETMX was spotted that was narrowed down to the CPSs, possibly specifically the corner 2 CPSs (?). That problem then somehow "fixed" itself and was quiet for months. As of the night of the 4th, it seems to be back, intermittently (first attached image; the spectrum should be pretty smooth around 1 Hz, but it's decidedly toothy). Looking at the DetChar pages, it shows up about 8 UTC and disappears sometime later. I took spectra from last night (second image) and everything was normal again.
Still don't know what this is. Did anybody turn anything on at EX Monday afternoon that shouldn't be on?
I had turned on the NEG Bayard-Alpert gauge at End X yesterday, but I have verified, at least through Beckhoff, that I turned it back off.
Kyle and Gerardo regenerated the NEG pump at X End yesterday. The attached plot shows 30 days of the BSC chamber pressure. We gained a factor of two from the regen.
Kyle's alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=20221
The alog for the Y End regen is here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19998
RGA data to follow.
I adjusted the AOM diffracted power from 9% to ~7%.
PEM channel renaming
WP5414. Robert, Dave:
h1pemcs was modified to rename the accelerometer channels PR1, SR1, MC to PRM, SRM, IMC. The change was applied to the DAQ when it was restarted this morning.
h1isiham3 phantom ini change
Hugh, Dave
After the 07:55 restart of h1isiham3 and a DAQ restart, the front end was reporting an ini file mismatch. We are not sure this was a true mismatch, since the ini file had not been modified since Monday. We restarted h1isiham3, which cleared the alert.
h1susey computer
WP5413. Carlos, Dave, Jim:
The original h1susey computer was re-installed to remove the IOP glitching. The ETMY model's run time is longer than when this machine was last in service (was 51 µs, now 55 µs). We checked that the new SUSETMYPI model runs well with this hardware.
Reboot digital video servers
Dave
The digital video servers h1digivideo[0,1,2] were power cycled as part of the PM reboot project. These machines had been running for 280, 280 and 251 days respectively. We saw no problems with the reboots.
Digital video snapshot problem. FRS3410.
Jenne, Dave, Jim, Elli, Kiwamu:
Jenne found that certain digital cameras are producing tiff files which cannot be viewed by most graphics tools. The reboot of the video servers did not fix this. We tried power cycling the ITMX spool camera, which also did not help. We tracked this down to the "Frame Type = Mono12" change in the camera configuration. Kiwamu and Elli have methods to read the 32-bit tiff files. This problem is now resolved; FRS3410 will be closed.
EPICS Gateway
Dave
Due to the extended downtime of h1ecatx1, CA clients on the H1FE network did not reconnect to this IOC (EDCU, conlog, Guardian). I restarted the h1slow-h1fe EPICS gateway which prompted the clients to reconnect to h1ecatx1.
DAQ Restarts
Jim, Dave
There were several DAQ restarts. The last restart was quite messy, details in earlier alog.
The FPGA duotone channels were added to the frame broadcaster for DMT timing diagnostics.
restart log for Tuesday 04Aug2015 is attached. No unexpected restarts.
Patrick, Jeff, Evan
We spent a few minutes cooking the IX compensation plate while trying to make the TCS rotation stage behave.
This was after the Beckhoff chassis was power cycled.
If the Beckhoff chassis has been power cycled and you do not first "re-find home", you will see these issues: the rotation stage does not know what angle to go to because it has lost track of where it is. I can't say I have noticed the waveplate acting up after this step is taken (but maybe it has without my noticing).
Also note that "search for home" does not necessarily take the waveplate to the minimum angle; you should then press "go to minimum power". So after a Beckhoff restart, here at LLO we usually search for home first, so that the rotation stage finds its zero point again, then go to minimum power, and then start operating it from there. A known issue all along is that as the waveplate rotates to home it can go through a brief period where it allows maximum power into the IMC or onto the ITM CPs.
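The recovery sequence after a Beckhoff restart therefore boils down to: search for home, wait for it to finish, then go to minimum power. Here is a sketch of that sequence with made-up channel names; the real buttons live on the TCS rotation stage screen, so treat this as illustration only.

    import time
    from epics import caput, caget  # pyepics

    # Hypothetical channel names for illustration only; in practice use the
    # rotation stage MEDM/Beckhoff screen buttons.
    PREFIX = 'H1:TCS-ITMX_CO2_ROTATIONSTAGE_'

    def recover_rotation_stage():
        """After a Beckhoff chassis power cycle: re-find home first so the
        stage recovers its zero point, then go to minimum power."""
        caput(PREFIX + 'SEARCH_HOME', 1)         # step 1: re-find home
        while not caget(PREFIX + 'HOME_FOUND'):  # wait for homing to complete
            time.sleep(1)
        caput(PREFIX + 'GO_TO_MIN_POWER', 1)     # step 2: go to minimum power
        # step 3: operate the waveplate normally from here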
JimW, HughR
We took all the SEIs down with guardian. Ran foton -c on the HEPI foton files and then loaded the modified files. Re-isolated all platforms.
I've looked at a few archived foton files to see if this caused any significant changes in any coefficients. Mostly what I've found are changes in the order of header information, but H1HPIBS.txt shows a bunch of changes, all at the ~10^-6 level, so probably still harmless. Also, these changes are likely the result of a known (and now resolved) issue with quacking foton files with Matlab.
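The check itself amounts to a numerical diff of the coefficients between an archived file and the rewritten one. Something along these lines would do it (the paths and parsing are illustrative, and it assumes both files contain the same filters in the same order):

    import re

    # decimal or exponent-form numbers only, to skip integers in headers/names
    NUMBER = re.compile(r'[-+]?(?:\d+\.\d+|\d+(?=[eE]))(?:[eE][-+]?\d+)?')

    def coefficients(path):
        """All decimal coefficients in a foton .txt file, in file order."""
        with open(path) as f:
            return [float(x) for x in NUMBER.findall(f.read())]

    def max_fractional_change(old_path, new_path):
        """Largest fractional change between corresponding coefficients."""
        old, new = coefficients(old_path), coefficients(new_path)
        return max(abs(n - o) / max(abs(o), 1e-30) for o, n in zip(old, new))

    # e.g. max_fractional_change('archive/H1HPIBS_old.txt', 'H1HPIBS.txt')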
Rana pointed out to me that the PR3 and SR3 suspensions may still have some shift due to wire heating during locks (which we won't see until a lockloss, since we control the angles of mirrors during lock).
Attached are the oplev signals for PR3 and SR3 at the end of a few different lock stretches, labeled by the time of the end of the lock. The lock ending 3 Aug was 14+ hours. The lock ending 31 July was 10+ hours. The lock ending 23 July was 5+ hours. The lock ending 20 July was 6+ hours.
The PR3 shift is more significant than the SR3 shift, but that shouldn't be too surprising: since there is more power in the PRC than in the SRC, there is going to be more scattered light around PR3. Also, PR3 has some ASC feedback to keep the pointing; SR3 does not have ASC feedback, but it does have a DC-coupled optical lever. SR3 usually shifts by a few tenths of a microradian, but PR3 often shifts by a microradian or more. Interestingly, the PR3 shift is larger for medium-length locks (1 or 1.5 urad) than for very long locks (0.3 urad). I'm not at all sure why this is.
This is not the end of the world for us right now, since we won't be increasing the laser power for O1; however, we expect that this drift will increase as we increase the laser power, so we may need to consider adding even more baffling to the recycling cavity folding mirrors during some future vent.
Note - PR3 and SR3 have two different baffles in front of them which do different things. PR3 does have a baffle which specifically shields the wires from the beam. SR3 does not have this particular baffle; however, I believe we have a spare which we could mount at some point if deemed necessary.
Attached is a picture of the PR3 wire-shielding baffle (D1300957), showing how it shields the suspension wires at the PR3 optic stage. In fact, a picture of this baffle taken from the control room is in alog 8941.
The second attachment is a repost of the SR3 baffle picture from alog 16512.
From the pictures, it seems like we could get most of the rest of the baffling we need if the wire going underneath the barrel of PR3 were covered. Perhaps that's what accounts for the residual heating. Also, if it became a problem, perhaps we could get an SR3 baffle with a slightly smaller hole to cover its wires.