Two channels are alarming high for the temperature of the air supplied to the LVEA by air handlers 1 and 2. H0:FMC-CS_LVEA_REHEAT_1B_DEGF and H0:FMC-CS_LVEA_REHEAT_4_DEGF have both exceeded their alarm level of 100 deg F. Trends for the past 7 days are attached. John has been notified.
FRS ticket 4264 opened.
Sometime between last night and today, foton seems to have become unable to generate certain elliptic filters of high order.
For example: choose an elliptic bandpass, first frequency 5 Hz, second frequency 500 Hz, fifth order, 1 dB ripple. Foton freezes as soon as you click OK. The same happens for a sixth-order elliptic bandpass and a fifth-order elliptic low-pass.
Last night, we were able to generate fifth-order elliptic filters with no problem. What happened between then and now?
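For reference, here is the same design attempted in scipy rather than foton, just as a sanity check that the requested filter itself is reasonable. The 16384 Hz sample rate and 60 dB stopband attenuation are my assumptions, since foton's defaults weren't recorded:

```python
# Sanity check of the filter foton was asked for, designed with scipy instead.
# Assumptions: 16384 Hz sample rate and 60 dB stopband attenuation; foton's
# elliptic designer may use different defaults, so this is only a cross-check.
import numpy as np
from scipy import signal

fs = 16384.0          # assumed sample rate (Hz)
f1, f2 = 5.0, 500.0   # band edges from the report (Hz)
order = 5             # "fifth order"
rp = 1.0              # passband ripple (dB)
rs = 60.0             # assumed stopband attenuation (dB)

# Design as second-order sections; a 5th-order bandpass prototype becomes a
# 10th-order digital filter, which is where numerical trouble often starts.
sos = signal.ellip(order, rp, rs, [f1, f2], btype='bandpass', fs=fs, output='sos')
w, h = signal.sosfreqz(sos, worN=2**14, fs=fs)
print("max passband gain: %.2f dB" % np.max(20 * np.log10(np.abs(h) + 1e-300)))
```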
Filed FRS Ticket 4266.
TITLE: Jan 21 EVE Shift 00:00-07:11 UTC (16:00-23:11 PST), all times posted in UTC
STATE OF H1: Down
SUPPORT: Sheila,Evan, Gabriele
INCOMING OPERATOR: N/A
SHIFT SUMMARY:
IFO locked and re-locked for commissioning ≈ 5 times
Last lock of the night is hopefully going to be a keeper. Sheila and I are going to try to get the SDF cleared for Observing mode. She suggests that the safe.snaps probably need to be updated.
An Ezca connection error at ≈ 07:00 UTC caused a lockloss when Sheila attempted to clear it.
No one home at Livingston. We’re checking out (07:11UTC)
We have a lot of out-of-date safe.snaps, which means that after our power outage we have a lot of red on SDF. Ed and I have spent some time fixing it, but there is still a lot to go.
Updating the safe.snaps would make the next power outage recovery easier.
We are going to leave the IFO in down and hope to sort out SDF in the morning.
[Sheila, Evan, Gabriele]
The goal is to cancel the PRCL length change induced by driving MICH using only the BS. For this reason we implemented a path from the MICH control signal to PR2.
We checked the result with a MICH line at 132 Hz: when the compensation is on, the line amplitude in the PRCL error signal is reduced by at least a factor of 10, as expected from the fit residual (figure 2).
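A rough sketch of one way to quantify that 132 Hz line suppression is below. The arrays are synthetic stand-ins for the PRCL error signal with the feedforward off and on (the 16384 Hz rate and welch settings are my assumptions, not what we actually ran; fetching real data is not shown):

```python
# Minimal sketch: estimate the 132 Hz MICH line amplitude in the PRCL error
# signal with the MICH->PR2 feedforward off vs. on, then take the ratio.
import numpy as np
from scipy.signal import welch

fs = 16384.0      # assumed sample rate
f_line = 132.0    # MICH line frequency from the log

# Synthetic stand-ins for the PRCL error signal; replace with real data.
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
prcl_off = 1e-3 * np.sin(2 * np.pi * f_line * t) + 1e-4 * rng.standard_normal(t.size)
prcl_on  = 1e-4 * np.sin(2 * np.pi * f_line * t) + 1e-4 * rng.standard_normal(t.size)

def line_asd(x, fs, f0, bw=0.1):
    """ASD of x at the bin closest to f0, with ~bw Hz resolution."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs / bw))
    return np.sqrt(pxx[np.argmin(np.abs(f - f0))])

suppression = line_asd(prcl_off, fs, f_line) / line_asd(prcl_on, fs, f_line)
print("132 Hz line suppressed by a factor of %.1f" % suppression)
```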
We're not leaving this in the guardian.
The attachment shows the sensing matrix after this diagonalization. PRM and SRM were not remeasured (they are from the data taken on the 12th).
MID-SHIFT SUMMARY:
IFO locked at NLN, 78 Mpc, at 02:57 UTC
Commissioners working, 54 Mpc
SDF is lit up red. Sheila mentioned that it probably hasn't been cleared since the power outage?
µSei Z-axis is riding above the 90th percentile. EQ bands are ≈ 0.08 µm/s. Winds are ≤ 15 mph.
I started a cleanup of SDF by doing some reverts on TRAMP values (some of which I changed in SUS while trying to re-align yesterday). The affected systems were: the HAM2, HAM4, HAM5, and HAM6 ISIs; and SUS BS, IM, PR2, SR2, and SR3. Also, the Gain/State_Good on the BS ISI was reverted. If 'your' sub-system(s) are showing changes in SDF due to power outage recovery, or from just trying to make things mo' betta, please have a look and accept or revert.
ALL LHO SEI Platform SDFs were green'd Wednesday morning. Any SDF reds Thursday evening were not related to the power outage.
These changes relate to Jim & me making some guardian changes; we should have more quickly updated the SDFs.
STATE OF H1: Relocking
SHIFT SUMMARY: Moved RM1 and RM2. Ran initial alignment. Obtained first lock at NLN since the power outage. Lost lock shortly afterwards to a 6.6 magnitude earthquake in Mexico (see attached). The earthquake took about 5 hours to ring down. Manually damped the ITMX and ITMY roll modes.
SUPPORT: Jenne, Evan, Sheila
INCOMING OPERATOR: Ed
ACTIVITY LOG:
17:02 UTC Bubba and Chris into end X mechanical room to lubricate fans (WP 5685)
17:45 UTC Bubba back from end X
18:47 UTC Bubba and Chris to end Y mechanical room to lubricate fans (WP 5685)
19:18 UTC Rick and Liyuan to LVEA and optics lab to look for optics
20:11 UTC Rick and Liyuan done
20:30 UTC Filiberto to Y2-8 to check on solar powered battery charge
21:05 UTC Kyle to end X air handling room to take measurements for building a stand for the ion pump
22:08 UTC Filiberto done at Y2-8 and going to X2-8
22:19 UTC Kyle back
22:21 UTC Dick G. to LVEA to look at RF racks
22:53 UTC Jeff B. to optics lab to work on dust monitors
22:59 UTC Filiberto done at X2-8
23:04 UTC Nutsinee and Jim B. to end Y VEA to turn on HWS
23:19 UTC The tidal common length integrator limits (H1:LSC-X_COMM_CTRL_LIMIT, H1:LSC-Y_COMM_CTRL_LIMIT) were being reached. Evan had me change them from 10 to 20.
23:27 UTC Nutsinee and Jim B. back
23:30 UTC Dick G. back
23:33 UTC Jeff B. back. He turned off the dust monitors in the optics labs.
00:34 UTC Kyle to mid Y to overfill CP3
1635 - 1650 hrs. local -> Back and forth to Y-mid. Next scheduled overfill Saturday, Jan. 23rd.
Jim B, Nutsinee
Yesterday I managed to restart the ETMX HWS software but wasn't able to connect to the EY HWS camera via telnet (this camera is plugged into an external power supply). Today Jim and I went out, removed the relay box for troubleshooting, and plugged the EY HWS camera directly into the power supply (there is no way of powering it on and off remotely at the moment). I trended the cylindrical power of both the EX and EY HWS to see if the software had been running before the power outage yesterday. Neither of them had been running for the past 10 days. The ETMY cylindrical power trend shows that the software in fact hasn't been running since the last power outage in October (I couldn't get ETMX data). Since nobody noticed this and no one ever told me that it wasn't running, I hope it wasn't needed.
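That kind of "was the code running?" trend check can be done with something like the sketch below. The channel name is only a placeholder (the real HWS cylindrical-power channel should be looked up), and the availability of minute trends for it is an assumption:

```python
# Rough sketch of checking whether a process was running by trending one of
# its output channels with gwpy.  The channel name below is a placeholder.
import numpy as np
from gwpy.timeseries import TimeSeries

chan = 'H1:TCS-ETMY_HWS_CYL_POWER'   # placeholder, not verified
data = TimeSeries.get(chan + '.mean,m-trend', 'Jan 10 2016', 'Jan 20 2016')

# A flat (unchanging) trend over the whole span suggests the HWS software
# was not updating the channel, i.e. it had not been running.
print('peak-to-peak over span:', np.ptp(data.value))
```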
TITLE: Jan 19 EVE Shift 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE OF H1: Locking
DAY OPERATOR: Patrick
BRIEF SUMMARY: Recovering from a very large earthquake off the coast of Western Mexico. Currently the IFO bounce and roll modes are being monitored and manually damped. This appears to be the reason for lock losses around the RF_DARM stage.
Xarm green alignment
Ed was struggling with the Xarm green alignment. We set the guardian to locked no slow, no wfs. I then turned on the loops one at a time. Usually the camera centering loops are the problem, but they were the easy ones this time. Eventually it was DOF2 that was causing trouble, so I had DOFs 1 and 3 closed and touched TMS by hand to get the error signals for DOF2 closer to zero. I was able to close all the loops, and let the alignment run like normal after that.
Xarm IR alignment
Not really sure what the problem is here, but it's getting late and I'm getting frustrated, so I'm going to see if I can move on with just hand aligning the input.
I suspect that the IMs need some more attention, so if Cheryl (or someone else following Cheryl's procedures) could check on those again in the morning, that would be great.
Also, I'm not sure if the RMs got any attention today, but the DC centering servos are struggling. I've increased the limits on DC servos 1 and 2, both pitch and yaw (they used to all be 555, now they're all 2000). I also increased H1:SUS-RM2_M1_LOCK_Y_LIMIT from 1000 to 2000.
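For the record, limit bumps like these can be scripted with ezca along the following lines. The DC-centering limit channel names here are placeholders (only the RM2 one is copied from above), so double-check them before writing to a live system:

```python
# Hedged sketch of the limit changes described above, using ezca as one would
# from a control-room Python session.  Verify channel names before writing.
from ezca import Ezca

ezca = Ezca(ifo='H1')   # prefix handling may differ by ezca version
limit_channels = [
    'ASC-DC1_P_LIMIT', 'ASC-DC1_Y_LIMIT',   # placeholder names for the
    'ASC-DC2_P_LIMIT', 'ASC-DC2_Y_LIMIT',   # DC centering servo limits
    'SUS-RM2_M1_LOCK_Y_LIMIT',              # copied from the log entry
]
for chan in limit_channels:
    old = ezca.read(chan)
    ezca.write(chan, 2000)
    print('%s: %s -> %s' % (chan, old, ezca.read(chan)))
```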
Allowing the INP ASC loops to come on is consistently causing the arm power to decay from 0.85ish to lockloss. I didn't touch the ITM or ETM, but I'm not getting any more power by adjusting PR2 or IM4.
MICH dark was not locking. Discovered that the BS's M2 EUL2OSEM matrix wasn't loaded, so no signal was being sent out. Hit load, MICH locked, moved on.
Bounce needed hand damping (guardian will prompt you in DARM_WFS state), roll will probably need it too. This isn't surprising, since it happens whenever the ISI for a quad optic trips. Recall that you can find the final gains (and the signs) for all of these in the ISC_LOCK guardian. I like to start with something small-ish, and increase the gain as the mode damps away. No filters need to be changed, just the gains. My starting values are in the table below, and I usually go up by factors of 2 or 3 at a time; a rough scripted version of that ramp is sketched after the table.
| (starting gain values) | Bounce | Roll |
| ETMY | +0.001 | -1 |
| ETMX | +0.001 | +1 |
| ITMX | -0.001 | +1 |
| ITMY | -0.001 | -1 |
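Here is the rough scripted version of that ramp. The damping filter-bank channel name is a placeholder (the real names, filters, and signs live in the ISC_LOCK guardian, as noted above); this is a sketch of the procedure, not what we actually run:

```python
# Hypothetical sketch of the manual bounce/roll damping ramp described above.
# Channel name is a placeholder; only the gains are touched, no filters.
import time
from ezca import Ezca

ezca = Ezca(ifo='H1')

start_gains = {   # starting values from the table above: (bounce, roll)
    'ETMY': (+0.001, -1), 'ETMX': (+0.001, +1),
    'ITMX': (-0.001, +1), 'ITMY': (-0.001, -1),
}

def ramp_damping(optic, mode='BOUNCE', steps=4, factor=3, dwell=60):
    """Step the damping gain up by `factor` every `dwell` seconds."""
    chan = 'SUS-%s_M0_DARM_DAMP_%s_GAIN' % (optic, mode)   # placeholder name
    gain = start_gains[optic][0 if mode == 'BOUNCE' else 1]
    for _ in range(steps):
        ezca.write(chan, gain)
        time.sleep(dwell)      # watch the mode peak; back off if it grows
        gain *= factor

ramp_damping('ITMY', 'ROLL')
```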
I lost lock while damping the bounce mode (in DARM_WFS state), and the DRMI alignment is coming back much worse than the first few DRMI locks I had.
I don't actually have a lot of faith in my input beam alignment, so I probably wouldn't be happy with any ASC loop measurements I take tonight even if I got the IFO locked. Since we have an 8am meeting, I'm going to call it a night, and ask the morning operator to check my alignment and fix anything that sleepy-Jenne messed up.
STATE OF H1: Still recovering from power outage.
ACTIVITY LOG (some missing):
15:44 UTC Unexpected power outage
16:38 UTC Fire department through gate to check on RFAR boxes
16:56 UTC Richard, Jeff B. and Jim W. to end stations to turn on high voltage and HEPI pumps
17:14 UTC Richard turning on h1ecaty1
17:14 UTC Jason and Peter to LVEA to look for an optic
17:39 UTC Richard and company done at end X and going to corner station to turn on TCS chillers and HEPI pumps
17:49 UTC Filiberto to end X to reset fire panel
17:56 UTC Vacuum group touring LVEA (6 or 7 people)
17:57 UTC HEPI pump stations and TCS chillers started in corner station
18:18 UTC Filiberto back from end X
18:24 UTC Jeff B. and Jason to LVEA to start TCS X laser
18:49 UTC Hugh bringing up and isolating HAM ISIs
19:16 UTC Richard and Jim W. to end stations to turn on ALS lasers
19:30 UTC Sheila to CER to power cycle all of the Beckhoff chassis
22:55 UTC Jason to LVEA to look at optical levers
23:02 UTC Jason back
23:34 UTC Dave restarting the EPICS gateway between the slow controls network and the frontend network in an attempt to fix an issue with the Beckhoff SDF
23:47 UTC Dave restarting the Beckhoff SDF code
Other notes:
End stations: end Y IRIG B chassis power cycled; end X, end Y high voltage turned on; end X, end Y HEPI pump station computers started; end X, end Y ISI coil driver chassis reset; end Y Beckhoff computer turned on
Corner station: TCS chillers turned on; HEPI pump controllers turned on (Jeff B. had to push power button on distribution box); Sheila power cycled all Beckhoff chassis in CER; Sheila turned on the AOS baffle PD chassis in LVEA
I started the h0video IOC on h0epics2.
Conlog was running upon arrival. However it crashed during the recovery. It still needs to be brought back.
Joe D. and Chris worked on beam tube enclosure sealing.
There was some trouble starting the corner station HEPI pump controller computer.
There was some trouble finding the strip tool template for the wind speeds. I'm not sure if it was found or if TJ created a new one.
Things to add to the 'short power outage recovery' document:
- Turn on high voltage supplies
- Push reset on ISI coil drivers
- HEPI pump controller and pumps
- TCS chillers
- TCS lasers
- ALS lasers
- Turn on video server
Jeff K. turned on, and left on, the WiFi at the end stations since we are no longer running in science mode.
From Jeff K.:
- The PSL periscope PZTs had to be aligned using the IM4 and MC2 QPDs. Setting them back to their previous offsets did not work.
- There was trouble with the end Y ESD beyond the fact that the high voltage got tripped off with the Beckhoff vacuum gauges.
01:43 UTC TMSX guardian set to 'SAFE'. Ed and Jenne reset controller at end X. TMSX guardian set to 'ALIGNED'.
J. Kissel, T. Shaffer, E. Merilh, E. Hall
A little more detail on the problems with the EY driver:
- We knew / expected the high-voltage driver of the ESD system to need resetting because of the power outage, and because of several end-station Beckhoff computer reboots (which trip the vacuum interlock gauge and kill the high-voltage power supplies to the HV driver).
- However, after one trip to the end station to perform the "normal" restart (turn on the high-voltage power supplies via the rocker switches, set the output voltage and current limit to 430 [V] and 80 [mA], turn on the output, go into the VEA and hit the red button on the front of the high-voltage driver chassis), we found that the *high voltage* driver was railed in the "usual" fashion, in which the high-voltage monitors for all 5 channels (DC, UL, LL, UR, LR) show a fixed -16k, regardless of the requested output.
- We tried the "usual" cure for an EY railed high-voltage driver (see LHO aLOG 19480), where we turn off the driver from the red button, unplug the "preamp input" (see LHO aLOG 19491), turn the driver back on from the red button, and plug the cable back in. That didn't work.
- Desperate, after three trips to EY, I tried various combinations of unplugging all of the cables and turning the driver on and off, both from the remote switch on the MEDM screen and from the front-panel red button. Only when I'd unplugged *every* cable from the front and power cycled the chassis from the front panel did the stuck output clear up. Sheesh!
Corey, Adam, We're just about to start a detchar safety injection.
We've finished this injection. I'll post a few more details shortly.
More Details: I injected the waveform from 'https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/detchar/detchar_03Oct2015_PCAL.txt'. The injection start time was 1136927627. The log file is checked into the svn - 'https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/detchar/O1/log_H1detcharinj_20160115.txt', although for some reason it only shows the start time.
The time noted in the alog entry is incorrect. The correct time from the log is 1136927267. The injections are visible at the corrected time.
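For quick reference, both GPS times can be cross-checked in UTC with gwpy (lalapps_tconvert on the command line works too):

```python
# Convert the original and corrected GPS injection times to UTC.
from gwpy.time import tconvert

for gps in (1136927627, 1136927267):
    print(gps, '->', tconvert(gps))   # returns a UTC datetime
```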
Short version: Increased RY input motion (maybe HEPI, maybe wind/ground) causes the ISI X loops to ring up when running the 45 mHz blends. The suspension/tidal drive is not the cause. The 90 mHz blends seem to be immune to this. Other than using the 90 mHz blends, I'm not sure how to fix the ISI's configuration in the short term to prevent it from ringing up. But we should put a StripTool of the end-station ISI St1 CPS locationmons somewhere in the control room so operators can see when ground tilt has rung up an ISI. Alternatively, we could add a notification to VerbalAlarms or the DIAG node when an ISI has been moving something like 10 microns peak to peak for several minutes.
This morning while the IFO was down for maintenance, Evan and I looked at ETMX to see if we could figure out what is causing the ISI to ring up. First we tried driving the L1 stage of the quad to see if some tidal or suspension drive was the cause. This had no effect on the ISI, so I tried driving on HEPI. When I drove HEPI X, the ISI rang up a bit, but no more than expected given the gain peaking of the 45 mHz blends. When I drove HEPI in RY, however, the ISI immediately rang up in X, and continued to ring for several minutes after I turned the excitation off. The attached image shows the ISI CPS X (red), RY (blue), HEPI IPS RY (green) and X (magenta). The excitation is visible in the left middle of the green trace, and also in the sudden increase in the red trace. I only ran the excitation for 300 seconds (from about 1134243600 to 1134243900), but the ISI rang for twice that. After the ISI settled down I switched to the 90 mHz blends and drove HEPI RY again. The ISI moved more in X, but it never rang up, even after I increased the drive by a factor of 5. The second plot shows the whole time series, with the same color key. The large CPS X motion (with a barely noticeable increase in the IPS RY) is the oscillation with the 45 mHz blend; the larger signal on the IPS RY (with a small increase in CPS X) is with the 90 mHz blends. The filter I used for each excitation was zpk([0 0], [.01 .01 .05 .05], 15111).
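For reference, the shape of that excitation filter can be checked outside foton with scipy. Interpreting the zero/pole frequencies as Hz and the 15111 as an overall analog gain is my assumption about the foton convention, so treat this as a sketch:

```python
# Frequency response of the drive-shaping filter zpk([0 0], [.01 .01 .05 .05], 15111),
# interpreted as an analog filter with zero/pole frequencies in Hz.
import numpy as np
from scipy import signal

two_pi = 2 * np.pi
zeros = two_pi * np.array([0.0, 0.0])               # two zeros at DC
poles = -two_pi * np.array([0.01, 0.01, 0.05, 0.05])  # poles at 10 and 50 mHz
k = 15111.0                                          # assumed overall gain

sys = signal.ZerosPolesGain(zeros, poles, k)
f = np.logspace(-3, 0, 200)                          # 1 mHz to 1 Hz
w, mag, phase = signal.bode(sys, w=two_pi * f)
print('gain at 10 mHz: %.1f dB' % mag[np.argmin(np.abs(f - 0.01))])
print('gain at 0.5 Hz: %.1f dB' % mag[np.argmin(np.abs(f - 0.5))])
```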
Did a bit more analysis of this data. Not sure why things are so screwy; there might be non-linearity in the T240s. Jim's entry indicates that it is NOT a servo interaction with the tidal loop, so it is probably something local, though I'm still not really sure what. Based on the plots below, I strongly recommend a low-frequency TF of Stage 1 (HEPI servos running, ISI damping loops on, iso loops off), driving hard enough to push the Stage 1 T240s to +/- 5000 nm/s.
What I see:
fig 1 (fig_EX_ringingXnY): Time series of X and Y and the drive signal. This is the same as Jim's data, but we also see significant motion in Y. In the TFs we need to look for X and Y cross coupling.
fig 2 (fig_X_ringup_time): This is the time I used for the other analysis. We can see the CPS-X and T240-X signals here. Note that I have used bandpass_viafft to keep only data between 0.02 and 0.5 Hz. The T240 and CPS signals are clearly related - BUT - does the T240 equal the derivative of the CPS? These signals are at the input to the blend filters.
fig 3 (fig_weirdTFs): Some TFs from ST1 X drive to ST1 CPS X and from ST1 X drive to ST1 T240. If all the drive for X is coming from the actuators, then the CPS TF should be flat and the T240 TF should go as freq^1. The CPS TF looks fine; I cannot explain the T240 TF. The coherence between the T240 and CPS signals is in the bottom subplot.
fig 4 (fig_coh): Coherence for drive -> CPS, drive -> T240, and CPS -> T240. All are about 1 from 0.03 to 0.15 Hz. So the signals are all related, but not in the way I expect. NOTE - If the ground were driving a similar amount to the actuators, then these TFs would be related by the loops and blend filters; I don't think this is the case. Decent driven TFs would be useful here.
fig 5 (sensor_X_difference): Take the numerical derivative of the CPS and compare it to the T240 as a function of time. Also take the drive signal * 6.7 (the plant response at low frequency from the TF in fig 3) and then take the derivative of that. These 3 signals should match - BUT they do not. The driven plant and the CPS signals are clearly similar, but the T240 looks rather different, especially in the lower subplot. It is as if the higher-frequency motion seen by the CPS is not seen by the T240. What the heck?
fig 6 (fig_not_gnd): Could it be from ground motion? I add the ground motion to the CPS signal, but this doesn't look any more like the T240 signal than the straight CPS signal does. So the signal difference is not from X ground motion.
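To make the fig 5 comparison concrete, here is a minimal sketch of the derivative check with placeholder data standing in for the ST1 CPS and T240 signals. The sample rate, calibration, and Butterworth bandpass are my choices, not what was used for the plots above:

```python
# Sketch of the fig 5 style check: bandpass the ST1 CPS (displacement) and
# T240 (velocity) signals to 0.02-0.5 Hz, numerically differentiate the CPS,
# and compare to the T240.  cps and t240 below are placeholder arrays.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 512.0                   # assumed sample rate (Hz)
band = (0.02, 0.5)           # Hz, same band as the bandpass_viafft analysis
sos = butter(4, band, btype='bandpass', fs=fs, output='sos')

# Placeholder data: a 0.08 Hz oscillation standing in for the ring-up.
t = np.arange(0, 600, 1 / fs)
cps = 1e4 * np.sin(2 * np.pi * 0.08 * t)      # displacement, nm
t240 = np.gradient(cps, 1 / fs)               # velocity, nm/s (consistent by construction)

cps_bp = sosfiltfilt(sos, cps)
t240_bp = sosfiltfilt(sos, t240)
dcps_dt = np.gradient(cps_bp, 1 / fs)

# If the two sensors agree, the residual should be small compared to the rms.
resid = dcps_dt - t240_bp
print('rms(T240) = %.1f nm/s, rms(dCPS/dt - T240) = %.1f nm/s'
      % (np.std(t240_bp), np.std(resid)))
```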
Has the tilt decoupling on stage 1 been checked recently? With the 45 mHz blends running we are not far from instability in this parameter (a factor of 2, maybe?).