ISI ITMX, HAM5 SUS SRM Earthquake band seismic now above 10 um/s. See attached.
O1 day 61
model restarts logged for Tue 17/Nov/2015
2015_11_17 08:02 h1broadcast0
2015_11_17 13:33 h1broadcast0
2015_11_17 13:33 h1dc0
2015_11_17 13:33 h1nds0
2015_11_17 13:35 h1nds1
2015_11_17 13:35 h1tw0
2015_11_17 13:35 h1tw1
2015_11_17 16:41 h1ioplsc0
2015_11_17 16:41 h1lsc
2015_11_17 16:41 h1omc
2015_11_17 16:42 h1ioplsc0
2015_11_17 16:42 h1lscaux
2015_11_17 16:42 h1lsc
2015_11_17 16:42 h1omc
2015_11_17 16:49 h1iopiscey
2015_11_17 16:49 h1iscey
2015_11_17 16:49 h1pemey
2015_11_17 16:50 h1alsey
2015_11_17 16:50 h1caley
2015_11_17 16:50 h1iopiscey
2015_11_17 16:50 h1iscey
2015_11_17 16:50 h1pemey
2015_11_17 16:57 h1dc0
2015_11_17 16:58 h1broadcast0
2015_11_17 16:58 h1nds0
2015_11_17 16:58 h1nds1
2015_11_17 16:58 h1tw0
2015_11_17 16:58 h1tw1
2015_11_17 17:03 h1sysecatc1plc1sdf
2015_11_17 17:03 h1sysecatc1plc2sdf
2015_11_17 17:05 h1sysecatc1plc3sdf
2015_11_17 17:05 h1sysecatx1plc1sdf
2015_11_17 17:05 h1sysecatx1plc2sdf
2015_11_17 17:06 h1sysecatx1plc3sdf
2015_11_17 17:08 h1sysecaty1plc1sdf
2015_11_17 17:08 h1sysecaty1plc2sdf
2015_11_17 17:08 h1sysecaty1plc3sdf
Maintenance day followed by a wind-induced power glitch. DAQ restarts in the morning for maintenance changes. h1lsc0 and h1iscey restarts were due to the power glitch. DAQ restart due to the LSC reconfiguration. Started the Beckhoff SDF systems.
Magnitude 6.8, 119 km SW of Dadali, Solomon Islands, 2015-11-18 18:31:04 UTC, 10.8 km deep. Spike above 1 um/s in both earthquake and microseism bands.
USGS updated it to magnitude 7.0, 122 km SW of Dadali, Solomon Islands, 2015-11-18 18:31:04 UTC, 10.9 km deep.
Had to adjust dark offsets for ASAIR_B_RF90. Now waiting for DRMI to lock. Kiwamu unplugged the cable for the LVEA CDS wifi and turned off the lights.
Baling and cleanup at LHO.
The power glitch yesterday tripped the high voltage for the OMC and PMC. DIAG_MAIN had a test for the OMC, but not the PMC. Until now! Jenne requested this in alog23493 and it has been loaded in and committed to the svn.
Code below:
@SYSDIAG.register_test
def PMC_HV():
    """PMC high voltage is nominally around 4 V; power glitches and other
    various issues can shut it off.
    """
    if -0.5 < ezca['PSL-PMC_HV_MON_AVG'] < 0.5:
        yield 'PMC high voltage is OFF'
Control room displays for Guardian status, Pitch and Yaw control signals, Vacuum Site Overview, Wind Speed, CDS Overview, and Operator Overview have been added to the control room screens web page at https://lhocds.ligo-wa.caltech.edu/cr_screens/
Cleanup is starting. The images are of the view out the kitchen window and out the front door.
TITLE: 11/18 [DAY Shift]: 16:00-24:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Down
OUTGOING OPERATOR: Nutsinee
QUICK SUMMARY: Wind has died down. Richard is diagnosing the ESD driver.
Wind speed decreased below 20 mph. The first problem I ran into was that when I requested SET_SUS_FOR_ALS, the ALS flashes on the camera disappeared. DIAG_MAIN also complains that both ETMs' ESD drivers are off. I toggled the on/off button but nothing changed. I'm not sure at what state the ESD drivers have to be turned on. Hopefully Guardian will take care of this later in lock acquisition.
According to the ALIGN_IFO Guardian node I couldn't move on because of these ESD driver notifications. HV ON/OFF on ETMX is blinking between green and gray. I'm having deja vu (see alog21030). If the problem is the same, this requires driving to the end stations to power-cycle the ESD box and the high voltage supply box (I hardly remember how to do this). I'm calling somebody to make sure this is the case.
Was there any power glitch at the site today? I know there was a power glitch in Richland. The last time this happened to me it was due to a power glitch.
Found Dave's alog about the power glitch. And the problem wasn't dealt with because no one expected any locking tonight (alog23493). Great...
So I called Mike. Because of the work-alone policy I'm not allowed to drive to the end stations. He suggested we call it off. LLO isn't going to be observing anytime soon anyway....
11/18 Owl Shift 08:00-16:00 UTC (00:00-08:00 PST)
Quick Summary: The wind is still fluctuating between 20-30 mph. Since LLO has shut down their site I will be taking my time and waiting until the wind speed decreases a little more before starting an initial alignment. Things are looking positive.
Just saw a wind gust at 40 mph. I spoke too soon. I'll be playing in the EE shop while waiting for the wind to die down. If you call the control room and no one answers, please send me an email.
What was done:
1. RF AM measurement (D0900891) unit installation
To better diagnose the problem, we inserted RF AM measurement unit (D0900891) between the EOM driver and the EOM.
For connection cartoon, see the first attachment.
The unit was placed on top of the DCPD interface box that was next to the EOM driver.
Couplers are connected back to back such that the first one sees the forward-going RF and the second one the reflection from the EOM. The insertion loss of one coupler is rated at 0.3 dB, and this was confirmed by the power measurement: the driver output was 23.34 dBm (measured by an RF meter with a 30 dB attenuator), and after the second coupler it was 22.67 dBm, so the insertion loss of the two couplers is 0.67 dB. We didn't do anything to compensate; it's not a big deal, but this means that the modulation index is smaller by that much.
The EOM reflection was measured by looking at the reverse-direction coupler output on the scope, which was about -11.6 dBm (about 59.1 mVrms with 50 Ohm input). The reflection from the EOM should be something like 20-11.6=9.4 dBm, i.e. the EOM is only consuming 22.67-9.4 ~ 13.3 dBm.
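For reference, here is a quick sanity check (just the standard 50 Ohm conversion, not part of the measurement itself) that the scope reading and the dBm figure quoted above are consistent:

import math

# scope reading converted to dBm, assuming a sine wave into a 50 Ohm input
v_rms = 59.1e-3   # V rms, read off the scope
r_load = 50.0     # Ohm, scope input termination

p_watts = v_rms**2 / r_load
p_dbm = 10.0 * math.log10(p_watts / 1e-3)
print('%.1f dBm' % p_dbm)   # prints about -11.6 dBm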
While we were at it, Kiwamu tightened the SMA connectors on the EOM inductor box. We also wiggled various things but didn't get any new insight, except that wiggling/tapping the power cable/connector on the EOM driver and on the +-24V distribution strip didn't do much.
The forward-going coupled output was connected to the manually adjusted channel. The front panel was adjusted so the MON voltage becomes closest to zero; that was MON = -300 mV at the 2.6 dBm setting.
The reverse-going coupled output was connected to the automatically biased channel.
This unit needs >+-28V supply in addition to +-17. Filiberto made a special cable that has bananas for +-30V and usual 3-pin for +-17, and we put a Tenma supply outside of the PSL room for +-30V-ish.
A long DB9 was routed from CER to the PSL rack, and a short one was routed from the PSL rack to the RF AM measurement unit, for DAQ. This was plugged into the spigot that was used for the spare EOM driver unit before (i.e. "RF9" monitors).
H1:LSC-MOD_RF9_AM_ERR_OUT_DQ and H1:LSC-MOD_RF9_AM_CTRL_OUT_DQ are the channels for the EOM reflection monitor.
H1:LSC-MOD_RF9_AM_AC_OUT_DQ and H1:LSC-MOD_RF9_AM_DC_OUT_DQ are the channels for the forward-going RF monitor. AC corresponds to ERR and DC to CTRL.
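If anybody wants to trend these new monitors against each other, something like the quick-look sketch below should do it (the channel names are the ones above; the time range is just a placeholder, and using gwpy for the data access is my assumption, not something set up as part of this work):

from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:LSC-MOD_RF9_AM_ERR_OUT_DQ',   # EOM reflection, ERR
    'H1:LSC-MOD_RF9_AM_CTRL_OUT_DQ',  # EOM reflection, CTRL
    'H1:LSC-MOD_RF9_AM_AC_OUT_DQ',    # forward-going RF, AC (corresponds to ERR)
    'H1:LSC-MOD_RF9_AM_DC_OUT_DQ',    # forward-going RF, DC (corresponds to CTRL)
]

# placeholder times; pick a stretch around the installation
data = TimeSeriesDict.get(channels, '2015-11-18 00:00', '2015-11-18 01:00')
data.plot().show()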
2. Taping down 45MHz cable
We changed the routing of the RF cable between the driver and the ISC rack. Inside the PSL room, it used to go under the table but over the under-table cable tray, kind of floating in the air from the floor to the cable tray and from the cable tray to the EOM driver, pushed by other cables.
We rerouted the cable so that it never leaves the floor and taped it to the floor with white tape. We also taped down some of the cables that were pressing against the RF cable. See the second attachment.
3. Rephasing
In the lab, the delay of the double couplers alone for 45.5MHz signal was measured to be about 0.8 ns or 13 degrees. Kiwamu made a long cable, we added two N-elbows, and we measured the transfer function from ASAIR_A_RF45_I_ERR to Q_ERR. We ended up having:
Q/I = +4.45 (+-0.14), or 77.3 (+-0.4) degrees.
Before the installation this was 77.5 (+-0.1) deg (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=23254), so this is pretty good.
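For what it's worth, the phase quoted above is just the arctangent of the Q/I ratio with the uncertainty propagated through it; a one-line check (not part of the measurement):

import math

ratio, sigma = 4.45, 0.14                               # measured Q/I and its uncertainty
phase = math.degrees(math.atan(ratio))                  # ~77.3 deg
sigma_phase = math.degrees(sigma / (1.0 + ratio**2))    # ~0.4 deg
print('%.1f +- %.1f deg' % (phase, sigma_phase))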
One day later, two things.
1. The RFAM monitor unit glitches more often than the RF45 stabilization.
The dataviewer plot starts just before we were done with the installation/phasing; there is a huge glitch which was caused by us, and after that the RF45 channels were relatively quiet. Four vertical lines in the dataviewer plot show the times of the different DTT traces. In the DTT plot, the bottom is the forward-going RF monitor and the top is the reflection.
It's hard to believe that this is real.
One thing is that the Tenma +-30V ground was connected to the ground of the AC outlet on the outside wall of the PSL room and the +-17V ground of the ISC rack at the same time.
Tenma mid point of +-30V - Tenma GND on the front panel - AC outlet ground
|
ISC rack GND (via +-17V cable)
We might (or might not) be better off disconnecting the +-30V mid point from the Tenma GND on the front panel, so I did that at around 11-19-2015 1:39 UTC. The current draw of the Tenma supply didn't change.
After the change:
Tenma mid point of +-30V (floating from Tenma GND on the front panel - AC outlet ground)
|
ISC rack GND (via +-17V cable)
I don't know if the +-17V ground on the ISC rack is the same as +-24V ground inside the PSL room, though.
2. H1:LSC-MOD_RF9_AM_CTRL_GAIN was set to zero yesterday for some reason.
You can see this in the dataviewer plot top middle panel. I put it back to 1.
Jonathan, Jim, Patrick, Dave:
Looks like we got away lightly with this glitch (famous last words). We just needed to reboot h1lsc0 and h1iscey. We got permission from Mike to make the h1lsc model change requested by Duncan and DetChar to promote the channel H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ to the science frame (WP5614). We restarted the DAQ to resync the lsc model. Interestingly, while the DAQ was down, my EPICS freeze alerts went crazy until the data concentrator came back. This did not happen during the 13:30 DAQ restart, but it might suggest a link between the DAQNET and FELAN ports on the front ends.
We have restarted all the Beckhoff SDF systems to green up the DAQ EDCU. At the time of writing the CDS overview is green with the exception of an excitation on SUS-BS.
The wind continues to hammer on the wall of the CER, hopefully no more power glitches tonight.
It looks to me that all DCS computers/services running in the LSB and the warehouse survived the power glitch. (I'm not sure if it was site-wide.)
Here is what the overview looked like before we started the recovery.
[Patrick, Kiwamu, Jenne]
After we recovered the PSL, the PMC wasn't locking. Rick reminded us that we might have to enable the output of the high voltage power supplies in the power supply mezzanine for the PZT. We went up there, and enabled the output on the PMC HV supply (labeled PSL or something like that?) as well as on the OMC PZT supply (labeled HAM6 PZTs). For the PMC power supply, the Vset was correct (according to the sticker on the front panel), but Iset needed to be set. On the OMC power supply we set the Vset, and it came back (Iset was fine).
DIAG_MAIN reminded us about the OMC HV supply, but perhaps TJ can add the PMC HV supply to that list?
DIAG_MAIN is also telling us that both ESD drivers are off, but since probably no locking will be happening tonight, we'll deal with that in the morning when the wind decides to die down.
We have known for a while that the front-ends (and DAQ computers) do not set things properly so that EPICS traffic is limited to only one Ethernet port (FE-LAN) instead of all connected networks. This is straightforward to correct for executables started in a shell (where the environment variable can be asserted). A bit harder for ones started from an init script. Certainly something to test on a test stand.
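As a rough sketch of what asserting the environment variable would look like for a process launched from a wrapper (the variables are the standard EPICS address-list ones; the interface address and executable name are made up, and whether the front-end CA servers honor EPICS_CAS_INTF_ADDR_LIST is exactly what needs to be checked on the test stand):

import os
import subprocess

FE_LAN_ADDR = '10.0.0.1'    # placeholder FE-LAN interface address

env = dict(os.environ)
env['EPICS_CA_AUTO_ADDR_LIST'] = 'NO'            # don't search on every connected network
env['EPICS_CA_ADDR_LIST'] = FE_LAN_ADDR          # client traffic restricted to FE-LAN
env['EPICS_CAS_INTF_ADDR_LIST'] = FE_LAN_ADDR    # ask the CA server to bind to FE-LAN only

subprocess.Popen(['some_epics_executable'], env=env)   # made-up executable name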
Today I re-zeroed all of the active H1 optical levers. The table below shows their pitch/yaw values before re-zeroing and the values as they read when I finished each oplev (there could be relaxation in the motors causing a slow, slight drift, especially in the BS and HAM2 oplevs which are zeroed with piezo motors steering a 2" turning mirror). This closes work permit #5606.
| Optical Lever | Old Pitch (µrad) | Old Yaw (µrad) | New Pitch (µrad) | New Yaw (µrad) |
| ETMx | -2.4 | -2.2 | -0.2 | 0.3 |
| ETMy | -1.3 | -6.7 | 0.1 | 0.0 |
| ITMx | -10.2 | -8.2 | -0.2 | 0.0 |
| ITMy | 7.9 | -2.6 | -0.2 | -0.0 |
| PR3 | -1.1 | -2.9 | 0.0 | 0.0 |
| SR3 | 10.1 | -5.7 | 0.0 | 0.2 |
| BS | -13.3 | -9.2 | -0.4 | 0.4 |
| HAM2 | -20.6 | -39.0 | 0.1 | -0.1 |
Tagging SUS, DetChar, and ISC for future reference.
Calibration parameter updates, including data up until Nov 11.
Kappa_tst is slowly trending up after the bias sign flip on Oct 16, and the rest of the parameters show the normal variations that we have seen in the past.
The first plot contains all the parameters that we calculate.
The second plot highlights kappa_tst, kappa_C, and the cavity pole; of these, kappa_tst and kappa_C will be applied to h(t).
This data is filtered for the locked state using the GDS state vector and thus contains some outliers at times when the IFO is locked but not in the analysis-ready state. In the future, we plan to use the GDS state vector, which will give us the flexibility to use only analysis-ready data.
Correction: the data is filtered using the Guardian state vector, not GDS.
The attached plot includes calibration parameters from Nov 12 - Nov 30. This time the data has been filtered using the GDS state vector and includes the data that has the first four bits of the GDS vector set to 1. A detailed definition of the GDS state vector can be found here.
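For reference, a minimal sketch of the bit filtering described above (assuming "first four bits set to 1" means the lowest four bits of the state vector are all on; this is not the actual calibration pipeline code):

import numpy as np

def analysis_ready_mask(state_vector):
    """Boolean mask selecting samples whose lowest four bits are all set."""
    sv = np.asarray(state_vector, dtype=np.int64)
    return (sv & 0b1111) == 0b1111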
The output mat file from this calculation is located at the svn location below: