In preparation for updating the IMC to LLO-style control, I've brought all the *current* H1 ISC filter files under proper version control, so that we have a version-controlled backup. This meant:
(1) Adding and/or committing any outstanding changes that were present in the local copy in the userapps repo (if it already existed).
(2) Copying the chans directory copy over to the userapps repo, if they differed, assuming the chans directory copy is the latest and greatest.
(3) Committing the latest and greatest to the repo.
(4) Removing the chans copy, and turning it into a soft link to the userapps copy.
(In case you're worried about whether Foton, the RCG, and autoquack can correctly follow this link, SUS has been interacting with the filter file / soft link for many months now without issue. It has survived many re-compiles, RCG upgrades, opening and editing the chans "copy" with Foton, and quacking new filters in.)
NOTE: It should now be common practice to open and edit the filter file in the userapps repo, NOT in the chans directory, even though Foton can handle both.
Now the ISC filter files in the chans directory look like this (and point to -> that):
controls@operator1:chans 0$ pwd
/opt/rtcds/lho/h1/chans
controls@operator1:chans 0$ ls -l H1{I,L,A}SC*.txt
lrwxrwxrwx 1 controls controls 59 Jul 31 12:07 H1ASCIMC.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASCIMC.txt
lrwxrwxrwx 1 controls controls 58 Jul 19 16:14 H1ASCTT.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASCTT.txt
lrwxrwxrwx 1 controls controls 56 Jul 31 12:24 H1ASC.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASC.txt
lrwxrwxrwx 1 controls controls 58 Jul 31 12:11 H1ISCEY.txt -> /opt/rtcds/userapps/release/isc/h1/filterfiles/H1ISCEY.txt
lrwxrwxrwx 1 controls controls 56 Jul 31 12:06 H1LSC.txt -> /opt/rtcds/userapps/release/lsc/h1/filterfiles/H1LSC.txt
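For reference, here is a minimal command-line sketch of steps (1)-(4) for a single filter file, assuming the userapps checkout at /opt/rtcds/userapps/release is an SVN working copy; H1LSC.txt is used purely as an example, so adjust the subsystem directory (lsc/asc/isc) per file:
  CHANS=/opt/rtcds/lho/h1/chans
  REPO=/opt/rtcds/userapps/release/lsc/h1/filterfiles
  FILE=H1LSC.txt
  svn commit -m "commit outstanding local changes" $REPO/$FILE   # (1) (svn add first if the file is new to the repo)
  cp $CHANS/$FILE $REPO/$FILE                                    # (2) chans copy assumed latest and greatest
  svn commit -m "latest filter file from chans" $REPO/$FILE      # (3)
  rm $CHANS/$FILE && ln -s $REPO/$FILE $CHANS/$FILE              # (4) replace the chans copy with a soft link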
Right now there is a mismatch between the MEDM HEPI screens and the model, more precisely in the BLND screen: three ON/OFF switches are still active but not displayed in the MEDM screen.
By default, the switches are OFF and the output of the BLND is 0 (see pictures attached).
This bug should disappear with the latest model version (we got rid of the switches). I'll try to put in a WP and do the update this week.
For now, the quick solution is to use caput commands in the terminal. Example for HEPI-HAM3:
Put the BLND L4C switch ON -> caput H1:HPI-HAM3_BLND_L4C_SWITCH 1
Put the BLND IPS switch ON -> caput H1:HPI-HAM3_BLND_IPS_SWITCH 1
Put the BLND BIAS switch ON -> caput H1:HPI-HAM3_BLND_BIAS_SWITCH 1
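The same three switches can also be set in one go from a terminal (a minimal sketch; substitute the chamber name, e.g. HAM2, as needed):
  for SW in L4C IPS BIAS; do
      caput H1:HPI-HAM3_BLND_${SW}_SWITCH 1
  done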
I took the PSL from "science" to "commissioning" mode for 10 minutes, from 11:20 to 11:32. I only entered the anteroom, where I retrieved an ALS Faraday and swapped a dust monitor for Patrick. Sheila
Just started a measurement on HEPI-HAM2 and ISI-HAM3 (on the computer 'operator3' in the control room). It should be done in the morning, around 10:30.
[J. Kissel, A. Pele] We've launched acceptance transfer function measurements on H1 SUS BS, using a MATLAB session on the operator1 computer. Measurement start: 30-Jul-2013 18:23:34 PDT. We expect it to take 12 (!!) hours, so it should finish around 31-Jul-2013 06:30 PDT.
Measurements successfully completed.
[S. Biscans, K. Izumi, J. Kissel, A. Pele, J. Rollins]
Log of how we brought the IMC back up to locking today after the RCG 2.7 upgrade, in rough chronological order:
- SUS (for each SUS -- MC1, MC2, MC3, PRM, PR2, PR3, BS, ITMY, IM1-IM4)
  - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off, (stale*) alignment offsets OFF **
  - Reset the IOP watchdog
  - Reset the inner-most USER watchdogs
  - Reset the USER DACKILL
  - Turned on the masterswitch (damping loops should already be trying to run)
  - Turned on the stale alignment offsets (in anticipation of having to burt restore ripe offsets that are not that different)
  - Recovered ripe alignment offsets for MC1, MC2, and MC3 (a command-line sketch of this restore step is given after the footnotes below)
    - snapped to 2013-07-29 20:00 (Local)
    - /ligo/cds/lho/h1/burt/2013/07/29/20:00/h1susmc*epics.snap
- HPI-HAM2 & HPI-HAM3
  - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off ***
  - Made sure no offsets were requested, ramped down the master gain
  - Reset the watchdog
  - Turned on the master switch
  - Manually restored offsets in OUTF
    - put in a 10 sec ramp
    - entered and turned on the offsets from LHO aLOG 7180 (turning on the offsets now uses the ramp time as well)
- ISI-HAM2 & ISI-HAM3
  - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off
  - Made sure no offsets were requested, ramped down the master gain
  - Reset the watchdogs
  - Turned on the master switch
  - Used the command window to
    - damp the ISI
    - isolate the ISI to Level 2
- PSL
  - Runs entirely on Beckhoff and was unaffected by the RCG 2.7 upgrade, so no recovery was needed
- IOO
  - Given to us as it comes up from a reboot
  - burt restored h1ascimc.snap (mainly to restore the alignment of the Beckhoff-controlled mirror at the top of the periscope -- all other mirrors are fixed)
    - 2013-07-29 20:00 (Local)
    - /ligo/cds/lho/h1/burt/2013/07/29/20:00/h1susascimc.snap
  - Turned off the IMC WFS loops
    - ramped down the master gain in the bottom left corner
  - Turned on the guardian ****
    - go to the IMC screen and have the guardian screen handy/visible
    - in a terminal:
      ]$ ssh h1guardian0
      ]$ screen -RAad imcguardian
      (opens a "screen" environment, which looks just like a new terminal)
      ]$ cd /opt/rtcds/userapps/release/ioo/h1/scripts/
      ]$ ./runIMCGuardian
      (some lights should go green on the IMC Guardian screen)
      (hit ctrl+a, then ctrl+d, which "detaches" the screen session, taking you back to the original h1guardian0 terminal)
      ]$ exit
      (logs you out of h1guardian0)
    - the guardian starts up not doing anything (state request and current state are blank)
    - hit "manual", "quiet," or "damped" on the IMC Guardian screen (since there are currently no "goto_" scripts for "locked," hitting "locked" doesn't do anything)
  - Manually ramped the IMC WFS master gain back up (the guardian doesn't know to touch it)
  - The IMC locks up like a charm

* Because the safe.snaps for most suspensions were captured a while ago, one can never trust the alignment offsets.

** For MC1, MC2, BS, and ITMY, the safe state came up with the BIO settings frozen and red -- specifically the Test/Coil Enable switches were red, and changing the State Request didn't change the filter settings. Scared that this was a bug in the BIO's interaction with RCG 2.7, I had considered going through a full boot process and/or declaring a failure mode of the upgrade. However, since this did *not* appear on MC3, PRM, PR2, or PR3 (which ruled out that the bug was common to a given computer), my quick test to see whether it was frozen on a given SUS was to open the BIO screen and randomly pick a field to try to change.
When I got to the beam splitter, my randomness drove me to try changing the TEST/COIL Enable bit instead of the State Request. When I requested to turn on the SUS-BS M3 TEST/COIL Enable, it flipped the M1 and M2 TEST/COIL Enables, all BIO functionality returned, and the stages began behaving independently, as expected and as normal. I went back to the other suspensions and saw similar behavior, i.e. the BIO EPICS records were stuck, and the SAFE burt restore didn't sufficiently tickle the EPICS database to get the BIO requests and monitors to report back correct information. Once tickled in the right way, all functionality returned. YUCK!

*** Since Sebastian had never seen spectra of the LHO IPS, we assumed that the HPIs were locked or the IPS were broken. Because of the nonsense described in **, I told him to wait a bit. Then we convinced ourselves that we needed to power-cycle the H1SEIH23 I/O chassis (because we'd heard that Dave had to restart the PSL I/O chassis this morning after it complained it had lost connection with the Dolphin network, or something).

**** We tried to follow Chris' instructions from LHO aLOG 6920, but this failed because the h1script0 machine is still running Ubuntu 10.04 (as opposed to 12.04, which h1guardian0 -- like the rest of the control room machines -- was recently upgraded to), so we don't know how Chris did it. Thankfully Jamie was helping and identified the problem immediately after querying Jim.
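As referenced in the SUS recovery steps above, here is a minimal sketch of the burt restore of the MC suspension snapshots, assuming the standard EPICS command-line write-back tool (burtwb) and using the snapshot directory quoted above (burtgooey, or whatever tool was actually used, accomplishes the same thing):
  # restore the 2013-07-29 20:00 snapshots for the MC suspensions (sketch only)
  for f in /ligo/cds/lho/h1/burt/2013/07/29/20:00/h1susmc*epics.snap; do
      burtwb -f "$f"
  done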
**** Regarding the guardian/h1script0 mystery -- it sounds like LHO's installation of perl has been upgraded recently. At the time of entry 6920, the situation was the opposite: the IMC guardian could not be started on the 12.04 workstations, and only the more outdated machines (such as h1script0) were capable of running it.
WP 4065
This morning we upgraded CDS to tag advLigoRTS-2.7. All daqd systems were compiled from source, all front end models were rebuilt, and all front end computers were rebooted.
At the same time, the EY computers were powered down and will remain off for several weeks while we reconfigure EY to the aLIGO configuration.
The EY systems were taken out of the Dolphin manager, and their channels were marked as acquire=0 in DAQ so that they are not written to the frame. Their channels are still in the DAQ to allow trending of the old data.
h1dc0 required a reboot to load the new GPS drivers. h1susb123 and h1psl0 were power cycled to permit communications with their I/O chassis.
I did not update the PSL models today. This system was burt restored to 8am today.
We are testing the new science frames features in the DAQ. We have found some problems with the channel stats.
Unfortunately our dataValid flags are still incorrect (set to ONE, should be ZERO). Jim rebuilt tag 2.7 on the DTS and its DAQ there is writing correct dataValid values. We are investigating what is different between the two.
Jeff reports strange binary I/O behavior on the sus models (QUAD and HAM). The BIO readbacks are bogus until an output channel has been switched, at which point the data becomes correct. We are investigating.
This behavior was noted at LLO during their RCG 2.7 upgrade by Stuart Aston (see his aLOG entry from last Thursday).
Was able to open GV5 and leak test the gate annulus piping joints on GV6 today -> 1 of 5 CFFs was found to be leaking. Volume = Y1, Y-mid, Y2, YBM, XBM, HAM2-3, BSC1, 2, 3, 7, 8, being pumped by CP1, CP2, the YBM MTP (backed by the leak detector), CP3, CP4 and IP9. Helium background 6.5x10^-8 torr*L/sec -> the 1.33" CFF on the north side of GV6 climbed steadily from background to 1.5x10^-7 torr*L/sec beginning 18 seconds after filling the bag, until purged with air at the 200 second mark. The demonstration was repeated after the signal fell back to near the original starting value.
Started fluid draining this morning and got 8 liters out today. Also, all the HEPI electronics at the Piers were removed and all the Actuators were disconnected from the Crossbeam Feet. Actuator lock-down is ongoing. Additionally, most of the ISI cabling, sans EM Actuators, was disconnected from the chamber and the CPS racks were brought down. Greg & Hugh
Took a new safe snapshot of ITMY using the save_safe_snap('H1','ITMY') MATLAB script. It includes the new gain values for the 2.1 damping filters recently installed.
h1susitmy_safe.snap is located in /opt/rtcds/userapps/release/sus/h1/burtfiles
EY: Cleanroom move, ISCT-EY move to LVEA, powering down lots of electronics, rerouting VAC conduit
CDS work: RCG 2.7 work begins (which takes down pretty much everything), reboot h0pemmx, much of afternoon devoted to restoring systems from RCG work.
Terminals upgraded with software to make MEDM faster (BUT we can no longer view out-building video on the Control Room terminals....because it freezes the computer(!))
Snow Valley working on Chillers, Praxair made a delivery, Sprague on-site
I measured the ETMX OSEM open-light (OL) values:
prettyOSEMgains('H1','ETMX')
OSEM   OL (cts)   Gain    Offset
M0F1 19894 1.508 -9947
M0F2 29453 1.019 -14727
M0F3 30661 0.978 -15330
M0LF 24512 1.224 -12256
M0RT 24724 1.213 -12362
M0SD 21784 1.377 -10892
R0F1 28922 1.037 -14461
R0F2 23013 1.304 -11507
R0F3 25716 1.167 -12858
R0LF 26662 1.125 -13331
R0RT 24388 1.230 -12194
R0SD 21961 1.366 -10980
L1UL 24267 1.236 -12133
L1LL 26538 1.130 -13269
L1UR 24545 1.222 -12273
L1LR 26259 1.142 -13130
L2UL 17935 1.673 -8967
L2LL 18726 1.602 -9363
L2UR 25124 1.194 -12562
L2LR 25518 1.176 -12759
I entered the gains and offsets into the OSEMINF screens, and updated and committed /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmx_safe.snap.
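For the record, the gain and offset columns follow directly from the measured open-light counts: the gain normalizes the OL value to 30000 counts and the offset centers the range (gain = 30000/OL, offset = -OL/2), which is presumably what prettyOSEMgains computes internally. A minimal sketch of that arithmetic for the first row:
  OL=19894   # measured open-light counts for M0F1
  awk -v ol=$OL 'BEGIN { printf "gain = %.3f, offset = %.0f\n", 30000/ol, -ol/2 }'
  # -> gain = 1.508, offset = -9947, matching the table above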
Betsy was concerned about the M0 F1 and SD OL values, which were a bit low (in spec, but rather lower than average). So she swapped out those two OSEMs and I remeasured those two values only. The new master list is as follows:
OSEM   OL (cts)   Gain    Offset
M0F1 24018 1.249 -12009 (new)
M0F2 29453 1.019 -14727
M0F3 30661 0.978 -15330
M0LF 24512 1.224 -12256
M0RT 24724 1.213 -12362
M0SD 28519 1.052 -14259 (new)
R0F1 28922 1.037 -14461
R0F2 23013 1.304 -11507
R0F3 25716 1.167 -12858
R0LF 26662 1.125 -13331
R0RT 24388 1.230 -12194
R0SD 21961 1.366 -10980
L1UL 24267 1.236 -12133
L1LL 26538 1.130 -13269
L1UR 24545 1.222 -12273
L1LR 26259 1.142 -13130
L2UL 17935 1.673 -8967
L2LL 18726 1.602 -9363
L2UR 25124 1.194 -12562
L2LR 25518 1.176 -12759
I entered the new values into the OSEMINF screen and redid the safe.snap.
Now with associated serial numbers which are also in ICS:
M0F1 - 643
M0F2 - 486
M0F3 - 414
M0LF - 580
M0RT - 507
M0SD - 563
R0F1 - 426
R0F2 - 497
R0F3 - 504
R0LF - 468
R0RT - 418
R0SD - 434
When the ETMX came up after the software upgrade, I attempted to measure the OL values on the OSEMs, which had been installed and left retracted by Betsy yesterday.
I noticed that four channels, R0 F1/F2/F3/SD were behaving oddly. Trending revealed that these channels had gone bad at 00:09 Pacific this morning (see attached). Richard replaced the satellite amp and they came good.
Summary:
If we off-center the beam significantly in one direction on the curved mirrors, everything is within tolerance (first plot; this is the best quality data we managed to squeeze out of this setup).
If we center the beam on the curved mirrors, the data looks OK-ish but somewhat out of nominal tolerance (second plot; some green error bars fall outside the two vertical lines representing the nominal tolerance), and the ellipticity and the astigmatism get worse (you can tell from the fact that the X data and the Y data moved in opposite directions).
The third and fourth attachments show how off-centered the beams were on the secondary and on the primary when the good data was obtained.
The fifth and sixth attachments show the centering on the secondary and the primary when the second set of data was taken.
This is repeatable. Every time good-looking data is obtained, the beam position on the primary is offset to the left. Every time we re-center the beam on the primary and the secondary, the scan data is worse (but still OK-ish, not terrible).
This sounds like a problem with the mirror surface figure to me. Maybe we got unlucky with this pair.
This is the best we can achieve, and since it's not terrible when the beam is centered on the curved mirrors, I'll declare this the final tuning of H1 TMSX. Tomorrow we'll mate the ISC table to the telescope.
The dust monitors in the LVEA are NOT currently being recorded. It appears swapping the dust monitor in the H1 PSL enclosure has broken the communications.
Upon startup the IOC communicates correctly with each dust monitor until it gets to location 16 (the one that was swapped yesterday). After this it starts reporting back errors of the form: Error: ../commands.c: 49: Sent command � not echoed Received ?
I power-cycled the Comtrol this morning. It worked past location 16 for a little while, but the error has returned.
Robert says he swapped the dust monitor in the H1 PSL laser enclosure. First one dust monitor was disconnected from the breakout box outside the entire H1 PSL enclosure. If I recall correctly, the dust monitor at location 16 was then still found by the IOC. The communication errors persisted. The first dust monitor was plugged back in and the other one disconnected. The IOC still found the dust monitor at location 16, but the communication errors went away. The dust monitor at location 16 reported calibration errors. It may be that the wrong dust monitor was swapped, leading to two set at the same location number, but this would not explain why the communication errors persisted after the first one was disconnected. As it stands, one of the dust monitors in the H1 PSL enclosure is disconnected. The dust monitor at location 16 is reporting calibration errors. I am not sure where the dust monitor at location 16 is. The dust monitor at location 10 is not found by the IOC. The remainder of the dust monitors in the LVEA are running again.
Sheila swapped the dust monitor in the anteroom with one programmed at location 10. The one she removed from the anteroom is labeled 'H'. It had no charge left in the battery when I got it. There was no change in the status. The dust monitor at location 10 is still unseen, and the dust monitor at location 16 is still giving calibration errors. This leads me to believe that: The dust monitor at location 16 is in the laser room and has calibration errors. The dust monitor at location 10 is in the anteroom and is unplugged at the breakout box outside the enclosure.
Daniel Halbe, Josh Smith, Jess McIver

Summary: strong, semi-periodic transient ground motion is propagating up the SEI platforms and down the suspension stages at ETMY. The cause of the ground motion is not yet determined. The effect on the HIFOY signal is not yet evaluated.

Glitching in the top stage of the ETMY BOSEMs was first identified by Daniel Halbe (see Spectrogram_SUS_Longitundinal_M0_BOSEM_July2.png). These glitches are seen in all DOFs of the suspensions and seismic isolation platforms, have an irregular periodicity of about every 10-20 minutes, a duration of a few minutes, a central frequency of 3-5 Hz, an amplitude in the Stage 1 T240s of the ISI on the order of a thousand nm/s (~ um/s) away from baseline noise, and have been occurring since at least June 12, 2013. They are not seen in the ITMY suspension channels.

For a table that traces these glitches across each DOF and up the stages of seismic isolation to the top stage of the suspension, see: https://wiki.ligo.org/DetChar/HIFOYETMYglitching > Normalized spectrograms (PSD) of the periodic glitches for 1 hour 10 min

Daniel also found them in the lower stage ETMY OSEMs: https://wiki.ligo.org/DetChar/SpectrogramsOfH1AllMassQuadSUSETMY

And Josh Smith traced them to excess ground motion using a representative top stage BOSEM channel (see EMTY_top_stage_BOSEM_pitch_correlation_to_excess_ground_motion.png). These glitches have a strong correlation with local ground motion and a significant correlation with ground motion near the vault. There appears to be a faint correlation with ground motion near MX and the LVEA that merits further investigation. (See the normalized spectrogram Top_stage_BOSEM_ETMY_longitudinal_glitching.png and compare to the normalized spectrograms Ground_motion_PEM_{location}_spectrogram.png of the same time period.)

For additional plots of ground motion at various locations around the ifo during these glitches, see again: https://wiki.ligo.org/DetChar/HIFOYETMYglitching (if you are unable to see some of the plots on this page, please see the instructions under 'Normalized spectrograms (PSD) of the periodic glitches for 1 hour and 10 min').

Note that the reported units of counts are incorrect for all plots (a bug in ligoDV) -- these channels are calibrated to nm/s for inertial sensors or to um for BOSEMs and OSEMs.
According to the summary bit of the ODC, the ETMY ISI was not in a 'good' state during this time.
From the Hanford cluster:
$ ligolw_segment_query -t https://segdb-er.ligo.caltech.
Returned no results.
TJ Massinger, Jess McIver
TJ did a similar study of the H1 BS top stage BOSEMs and found glitching at a lower frequency (2.8 Hz) than we've seen in the ETMY (3-5 Hz).
A comparison of the top stage BOSEMs of the core optics at Hanford is attached. The glitches seen in the beam splitter BOSEMs do not seem coincident in time with the glitches in the ETMY.
ISI states at this time are below (note that if an isolation loop is not indicated to be in a good state, it may be because the 'correct state' value used in the comparison that generates the ODC was wrong/outdated for some chambers until Celine fixed it a few hours ago):