One problem that took a while to deal with as part of IFO recovery was the lack of good .snap values for the OPTICALIGN sliders. I am told that part of this is that the values in the safe.snap files are not updated very often. In particular, they had not been updated since before some of the suspensions were mechanically realigned to center the slider values, so the computer reboots this morning put the optics in very bad places. (We had to hand-trend each slider value and type the values in.)
As a solution, I have created a new .req file that includes all of the OPTICALIGN values from the IFO_Align screen. The .req file (and the corresponding .snap file) lives in /opt/rtcds/userapps/release/sus/h1/burtfiles/OptAlignBurt.req . I have also written scripts to capture a new .snap file and to restore the .snap file. The idea is that the capture script be run just before maintenance begins, and the restore script be run at the end of maintenance.
To run the capture script, in a terminal paste the following:
/opt/rtcds/userapps/release/sus/h1/scripts/CaptureOptAlignBurt.sh
To run the restore script, in a terminal paste the following:
/opt/rtcds/userapps/release/sus/h1/scripts/RestoreOptAlignBurt.sh
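For reference, here is a minimal sketch of what the capture/restore wrappers might boil down to, assuming the standard EPICS BURT command-line tools (burtrb/burtwb); this is an illustration only, and the actual .sh scripts at the paths above are authoritative:

    # Hypothetical sketch only -- the real logic lives in the .sh scripts above.
    import subprocess

    REQ  = '/opt/rtcds/userapps/release/sus/h1/burtfiles/OptAlignBurt.req'
    SNAP = '/opt/rtcds/userapps/release/sus/h1/burtfiles/OptAlignBurt.snap'

    def capture():
        # Read back the OPTICALIGN slider channels listed in the .req file
        # and write them to the snapshot file.
        subprocess.check_call(['burtrb', '-f', REQ, '-o', SNAP])

    def restore():
        # Push the saved slider values back to the front ends.
        subprocess.check_call(['burtwb', '-f', SNAP])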
----------------
As a side note, the slider values for ITMX, ITMY, ETMX and ETMY have been accepted in the SDF system (made to be monitored, accepted, then un-monitored), so computer reboots should keep us closer, even if we forget to run the above scripts. We should do the same with the other major suspended optics.
Sudarshan, Jeff K.
The ESD calibration line at 538.1 Hz is moved up to 540.5 Hz, 0.2 Hz away from the Pcal line at Yend (540.7 Hz), to estimate the ESD actuation strength. This is a temporary arrangement and the ESD line will be moved back to its original position next week after we have some locked data to analyze. The SDF table is updated accordingly.
The ESD line is switched back to its original position at 538.1 Hz. SDF table is updated accordingly.
For SDF tables which have many items and are therefore paged, we found that selecting the "Sort on Substring" option crashes the EPICS process with a segfault (error 4). This was the cause of the restarts of h1susetmy, h1susitmy, h1susitmx, h1isiham6 and h1odcmaster this afternoon. Some of these restarts were to verify the error. Initial testing on the DTS against RCG-trunk is not showing the error. FRS ticket 3330 has been opened.
Plots of the PSL chiller 60-day trends. Most look normal, except the crystal chiller conductivity, which is getting close to the upper limit. We will need to monitor this and replace the DI filter cartridge in the future.
I can only assume that I spaced this out when I pulled the previous STS2-B out to go to EndX last Tuesday and replaced it with the previous STS2-A unit. If anyone else actually uncovered the igloo, please let me know so I don't have to alert my doctor. So, the ITMY STS2-B unit, which is not used for any SEI function, was not ideally thermally insulated from mid-morning 7 July until late morning 14 July.
All fifty (50) Accumulators were checked for charge today. No Accumulator needed charging. Only three accumulators showed a decrease in pressure since the last charge check on 21 April, see T1500280. These were small decreases (a few psi) and likely reflect loss from gauge pulloff (does the uncertainty principle apply?). The acceptable range of 60-93% of operating pressure is quite broad, and the lowest reading today was at 80%.
Given these results, and given that the reservoir fluid level indicates accumulator charge and can be checked with the system pumping, this invasive pressure check, which requires the system to be off, could be done just quarterly. As long as the weekly check of reservoir fluid levels shows no decrease, the accumulators can be assumed to be adequately charged. If a weekly check of the reservoir fluid indicates a volume loss, then the accumulators could be checked.
Good to hear that the accumulators are holding well. I like your plan. -Brian
Added 1962 channels. Removed 192 channels.
The h1susetmy EPICS process died after running for a few hours; here is the dmesg output:
[12094.731546] h1susetmyepics[17317]: segfault at 5a5c8c0 ip 00007f0fb70f2814 sp 00007fff81a06540 error 4 in libc-2.10.1.so[7f0fb7077000+14c000]
[12095.732645] h1susetmyepics used greatest stack depth: 2984 bytes left
Error 4 means the cause was a user-mode read resulting in no page being found (commonly a null pointer dereference).
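For reference, the error number is the kernel page-fault flag word; a quick way to decode it (assuming the standard x86 bit layout) is:

    # Decode the page-fault error code from the segfault line above.
    err = 4
    present = bool(err & 1)  # bit 0: 0 = page not present
    write   = bool(err & 2)  # bit 1: 0 = read access
    user    = bool(err & 4)  # bit 2: 1 = user-mode access
    print(present, write, user)  # -> False False True: a user-mode read of an unmapped page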
It should not be on during data taking.
Also added warnings for when the cameras and the frame grabbers are on.
I've commented out the HWS and frame grabber on warnings because we want to use them during commissioning. We should uncomment this for the science run though.
CO2X laser RTD sensor alarm (H1:TCS-ITMX_CO2_INTRLK_RTD_OR_IR_ALRM) tripped at 14 Jul 15 17:15:00 UTC this morning (10:15am), shutting off the CO2X laser. Folks were pulling cables near HAM4 this morning, which is probably why it tripped. CO2X laser was restarted at 19:30:00 UTC, and is now running normally again.
Just adding some words, parroting what Elli told me: this temperature sensor (RTD) is nominally supposed to be on "the" viewport (HAM4? Some BSC? The Injection port for the laser? Dunno). This sensor is not mounted on the viewport currently, it's mounted on "the" chassis, which (I believe) resides in the TCS remote racks by HAM4. She's seen this in the past: even looking at this sensor wrong (my words, not hers) while you're cabling / electronics-ing near HAM4, this sensor trips. As she says, this was noticed and recovered by her before it became an issue with the IFO because recovery went much slower than anticipated.
If I understand correctly which sensor you're talking about, then yes, this should be on the viewport (the BSC viewport which the laser is injected through). The viewport sensor, though, is an IR sensor, but for some parts of the wiring in the control box (and thus on the MEDM screen) the IR sensor and RTD sensor are wired in together, making it hard to know which one caused the trip. It's supposed to monitor scattered light coming off that viewport. It is very sensitive and can be affected by humans standing near it, light being shone onto it (one of the ways to set the trip level is to hold a lighter up to it), maybe also heat from electronics, etc. So just sitting in the rack I am not at all surprised that it is tripping all the time and causing grief.
My suggestion is to try to get this installed on the viewport if you can, otherwise if you can’t and it really is causing problems all the time, there is a pot inside the control box which you can alter to change the level at which it trips.
After bringing the IMC down this morning, we had trouble getting the IMC back up. The alignment was bad enough after recovering the seismic and sus that the WFS would walk out of alignment. Keita recovered the alignment by ramping the WFS gain down (on the IMC_WFS_MASTER screen (under the IOO dropdown), lower left, it's nominally 0.1) and moving MC1 and MC2 to maximize the power on MC2_TRANS_SUM and minimize IMC_REFL_DC_OUT, while watching them in dataviewer. He then tweaked MC2 to center MC2_TRANS P and Y to near 0 (~0.001) on the PD (on the IMC_CUST_OVERVIEW, top right, on the little PD graphic right of MC2). The gain on the WFS was then ramped up to some small number (I think ~0.004? maybe 0.04) and the IMC watched to make sure it didn't go unstable. When it looked stable, the gain on the WFS was ramped back up to 0.1.
We wasted a bunch of time trying to retrieve earlier alignments, clearing WFS histories, and moving the PZT (a strange land where pitch is yaw and yaw is pitch), before Keita decided to just do the realignment by hand.
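For what it's worth, the gain-ramp step described above could be scripted along these lines. This is a rough sketch only, assuming the guardian ezca Python bindings; the channel name for the WFS master gain is hypothetical (check the IMC_WFS_MASTER screen for the real one):

    import time
    from ezca import Ezca

    ezca = Ezca(ifo='H1')
    CHAN = 'IMC-WFS_GAIN'  # hypothetical name for the master gain on IMC_WFS_MASTER

    def ramp_wfs_gain(target, step=0.002, dwell=5.0):
        # Step the WFS master gain toward `target`, pausing between steps so the
        # loop can be watched (MC2_TRANS_SUM, IMC_REFL_DC_OUT) for instability.
        gain = ezca[CHAN]
        sign = 1.0 if target > gain else -1.0
        while abs(target - gain) > step:
            gain += sign * step
            ezca[CHAN] = gain
            time.sleep(dwell)
        ezca[CHAN] = target

    # e.g. ramp_wfs_gain(0.0) before hand alignment, then ramp_wfs_gain(0.1) afterward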
Note that the manual alignment was done after everything was brought back to old numbers (MC1, MC2, MC3 using witness BOSEMs, PZT offset using the output of the PZT before the maintenance). Even though the PZT change was not huge (as seen in the MC REFL position), I cannot claim that the change was nothing.
Also, at first I was confused by the fact that the MC trans camera looks as if the beam is split (01 mode) even though MC is locked to 00. Kiwamu told me that the only way to make sure is to look at IFO cameras like IFO REFL and AS.
PSL FSS oscillation precluding IMC locking
It appears that FSS oscillation was wreaking havoc with the ISS and this was the cause of the IMC not locking.
Reducing the FSS Common gain to zero then bringing it back up to 26 dB stopped the oscillation, as seen on the "PZT MON (FAST)" trend display on the PSL FSS screen.
If the display shows a black region (rail-to-rail oscillation), then the FSS is oscillating. In the nominal state, it should just show a thin black horizontal line.
The FSS was tuned up this morning and is now operating as designed: ~500 kHz UGF, 60 deg phase margin, no features (peaks) up to 5 MHz. However, it may not be as robust against kicks from the IMC, which uses the FSS as its frequency actuator.
We did not take time this morning to investigate the stability of the FAST/Pockels cell crossover. I was somewhat surprised to see the FAST gain at 5 dB. It may be that the crossover is not stable.
We used to run the FAST gain at 15 dB. Seems that it was turned down to 5 dB on April 15, 2015.
At the next opportunity we will check the crossover by looking at the mixer monitor noise spectrum in the 1-50 kHz frequency range as we adjust the FAST gain, optimizing the tradeoff between FAST gain and noise peaking at the crossover.
JimW is generating a request to TJ to add an alarm for the FSS range to the Guardian.
FSS oscillation was fixed while I was tweaking the MC2 trans position. Before that, IMC locked to the correct mode but the IMC WFS ran away.
So it seems like the alignment thing was a red herring.
The FSS went into oscillation again; after talking with Rick on the phone we turned the common gain down from 26 dB to 23 dB.
Sheila, Jenne, Matt, Stefan, Evan
List of things done today:
We had a spectrum earlier in the night last night that had better low-frequency sensitivity. One difference between this and later locks was the BS coil driver switching; the DARM offset could also have been different, but I am not sure.
[Matt, Jenne, Evan, Sheila] There is an enormous peak in the DARM spectrum at 4735 Hz. Shown in the DTT printout below is the IOP channel for the OMC DC PD (H1:IOP-LSC0_MADC0_TP_CH12), from 1 kHz to 25 kHz, and this 4.7 kHz peak is dominating by about 2 orders of magnitude. We wonder if this is perhaps an acoustic internal mode of one of the test masses, although we are having trouble finding a listing of such modes. Does anyone know where we can find a listing of test mass acoustic modes? Or, alternatively, does anyone have any thoughts on what this mode might be?
Sort of unsatisfying (because they're not the real deal, or they're incomplete) FEA results for the test mass body modes can be found here: http://www.ligo.caltech.edu/~coyne/AL/COC/AL_COC.htm (only for a right cylinder) and here T1400738 (only shows the modes which are likely to be parametrically unstable). A quick glance through the above doesn't show anything at or near that frequency (including abs(16384 - FEA results)). I've yet to see FEA analysis of non-test-mass optics, but I've been told that Ed Daw and/or Norna/Calum's summer students are working on it. The best I've seen on that is the ancient 2004 document for the Beam Splitter, T040232, which is where we colloquially get the frequency of the beam splitter's butterfly mode, done by eyeballing the current beam splitter's parameter location in Figure 2. (But the modeled dimensions are wrong, and the wording is confusing on whether the listed frequencies are from the model with flats or not.)
It appears to be a 10th order violin mode on EY.
It is damped with a 1 Hz wide butterworth (unity gain in the passband), a +100 dB filter, and a gain of -30. No rotation needed.
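For reference, a rough scipy equivalent of the bandpass used for the damping (a sketch only; the real filter lives in the foton filter bank, and the +100 dB and -30 gain stages are not reproduced here). The 16384 Hz sample rate is an assumption:

    from scipy import signal

    fs = 16384.0   # assumed SUS model rate
    f0 = 4735.0    # line frequency seen in DARM
    bw = 1.0       # passband width [Hz]

    # 1 Hz wide Butterworth bandpass, unity gain in the passband
    sos = signal.butter(2, [f0 - bw / 2, f0 + bw / 2], btype='bandpass', fs=fs, output='sos')

    # sanity check of the response around the line
    f, h = signal.sosfreqz(sos, worN=2**18, fs=fs)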
Jeff
As you noted, there is some data in the links you already included, and we have started to fill in the blanks. Refer to https://dcc.ligo.org/T1500376-v1. When we talk I (we) can complete it.
Calum
For reference, with a combination of Slawek's (T1400738) and Calum's (T1500376) FEA models, and Calum's video of the test mass internal mode shapes (T1500376), we expect to find the drumhead mode around 8029 Hz, the x-polarized butterfly mode around 5821 Hz, and the +-polarized butterfly mode around 5935 Hz (using Slawek's values for the mode frequencies). The next two modes (at 8102 Hz and 8156 Hz) do not involve distortion of the test mass face in the direction of the beamline.
Matt, Sheila, Eli
At some point today the bounce mode on EX got excited enough that we could see it in the PUM OSEMs as pitch motion. The RMS of the observed "pitch" was about 3 nrad, and the line in DARM was about 1e-13 m. Assuming that OSEM misalignment is providing the roll to observed pitch motion, and that this misalignment is of order 1 degree, the estimated roll motion was about 3e-7 rad.
This gives an order of magnitude estimate of the Roll to DARM coupling of 3e-7 m / rad.
Assuming a 10 cm lever arm, this gives a dimensionless coupling of 3e-6. Compared to the bounce to DARM coupling, which is of order 1e-3, the roll coupling is tiny, which means that the roll motion is HUGE (since they both look about the same in DARM).
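The arithmetic behind these order-of-magnitude numbers, for reference (the rounding differs from the values quoted above, so the last digits won't match exactly):

    import numpy as np

    pitch_rms     = 3e-9                 # apparent pitch seen in the PUM OSEMs [rad]
    osem_misalign = np.deg2rad(1.0)      # assumed OSEM misalignment, ~1 degree [rad]
    darm_line     = 1e-13                # line height in DARM [m]

    roll_rms      = pitch_rms / osem_misalign   # ~2e-7 rad of actual roll
    coupling      = darm_line / roll_rms        # ~6e-7 m/rad, i.e. of order 3e-7
    dimensionless = coupling / 0.10             # 10 cm lever arm -> of order 3e-6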
My 24 hours have passed, but the first sentence should read "At some point today the roll mode on EX..."
Suppose that the beam is at (X, Y)=R(cos(theta), sin(theta)) on the mirror where R=0 is the center of roll rotation and theta=0 is the horizontal line crossing the center. Though the COG is somewhat lower than the mirror center due to wedge, R should be more or less equal to the radial distance of the beam from the center of the mirror.
Mirror thickness at this position is
T(R, theta) ~ -R*sin(theta)*w + T0
where T0 is the thickness at the center and w is the wedge in radians that is about 0.08deg=1.4mrad for all ITMs and ETMs.
Roll changes the thickness by adding some small angle d_theta to theta: dT=-R*cos(theta)*w*d_theta=-X*w*d_theta.
When the roll plane lies halfway between the front and back surfaces, the light sees half of the total thickness change, so the roll-to-length coupling coefficient should be
length/roll ~ |dT/d_theta /2| = X*w/2
= (X/5mm) * 3.5E-6 [m/rad].
For Matt's estimate of 3E-7 m/rad to hold true, the horizontal centering should be 0.5mm or so, which is pretty good but not outrageously so.
What this probably means is that Matt's estimate about the roll angle was reasonable, as in it cannot be off by that much. A factor of something, not orders of magnitude.
[edit on Jul 15] However, if the roll plane is parallel to the local gravity, the above doesn't hold true.
In this case, w/2 is replaced by the angle between the local gravity and LIGO global vertical: 8urad for LHO EX, 639urad for EY, -619urad for IX and 12urad for IY (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=14876):
length/roll(EX)~ X*8urad = (X/5mm) * 4e-8 [m/rad],
length/roll(EY)~ X*619urad = (X/5mm) * 3E-6 [m/rad].
For EX and IY, that's two orders of magnitude smaller than what I showed yesterday, though for EY and IX it didn't change.
It seems that we need a suspension model to find out the actual rotation plane.
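A quick numerical check of the two cases above (using the wedge and the local-gravity angles quoted earlier; the beam offset X = 5 mm is just the fiducial value used in the expressions):

    import numpy as np

    X = 5e-3                     # horizontal beam offset from the roll axis [m]
    w = np.deg2rad(0.08)         # ITM/ETM wedge, ~1.4 mrad

    # roll plane halfway through the substrate:
    print('mid-plane:', X * w / 2)          # ~3.5e-6 m/rad

    # roll plane parallel to local gravity: replace w/2 with the local-gravity
    # to global-vertical angle for each test mass
    for name, ang in [('EX', 8e-6), ('EY', 639e-6), ('IX', 619e-6), ('IY', 12e-6)]:
        print(name, X * abs(ang))           # EX ~4e-8, EY ~3e-6, IX ~3e-6, IY ~6e-8 m/rad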
Recall that we started writing the IFO Alignment slider values to an hourly burt so that we can easily grab and restore the best alignment values - see alog 18799 from June 2.
The hourly burts are at:
/ligo/cds/lho/h1/burt/2015
under the appropriate date directory, as h1ifoalignepics.snap
'Sorry that no one in the CR recalled this info from last month for you yesterday...