M6.8, 119 km SW of Dadali, Solomon Islands, 2015-11-18 18:31:04 UTC, 10.8 km deep. Spike above 1 um/s in both earthquake and microseism bands.
Had to adjust dark offsets for ASAIR_B_RF90. Now waiting for DRMI to lock. Kiwamu unplugged the cable for the LVEA CDS wifi and turned off the lights.
Bailing and clean up at LHO.
The power glitch yesterday tripped the high voltage for the OMC and PMC. DIAG_MAIN had a test for the OMC, but not the PMC. Until now! Jenne requested this in alog23493 and it has been loaded in and committed to the svn.
Code below:
@SYSDIAG.register_test
def PMC_HV():
    """PMC high voltage is nominally around 4V, power glitches and other
    various issues can shut it off.
    """
    if -0.5 < ezca['PSL-PMC_HV_MON_AVG'] < 0.5:
        yield 'PMC high voltage is OFF'
Control room displays for Guardian status, Pitch and Yaw control signals, Vacuum Site Overview, Wind Speed, CDS Overview, and Operator Overview have been added to the control room screens web page at https://lhocds.ligo-wa.caltech.edu/cr_screens/
Clean up is starting. Images are from out the kitchen window and the front door.
TITLE: 11/18 [DAY Shift]: 16:00-24:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE Of H1: Down
OUTGOING OPERATOR: Nutsinee
QUICK SUMMARY: Wind has died down. Richard is diagnosing the ESD driver.
Wind speed decreased below 20 mph. The first problem I ran into is that when I requested SET_SUS_FOR_ALS, the ALS flashes on the camera disappeared. DIAG_MAIN also complains that both ETMs' ESD drivers are off. I toggled the on/off button but nothing changes. I'm not sure at what state the ESD drivers have to be turned on. Hopefully Guardian will take care of this later in lock acquisition.
According to the ALIGN_IFO Guardian node I couldn't move on because of these ESD driver notifications. HV ON/OFF on ETMX is blinking between green and gray. I'm feeling déjà vu (see alog21030). If the problem is the same, this requires driving to the end stations to power cycle the ESD box and the high voltage supply box (I hardly remember how to do this). I'm calling somebody to make sure this is the case.
Was there any power glitch at the site today? I know there was a power glitch in Richland. The last time this happened to me it was due to a power glitch.
Found Dave's alog about the power glitch. And the problem wasn't dealt with because no one expected any locking tonight (alog23493). Great...
So I called Mike. Because of the work-alone policy I'm not allowed to drive to the end stations. He suggested we call it off. LLO isn't going to be observing anytime soon anyway....
11/18 Owl Shift 08:00-16:00 UTC (00:00-08:00 PST)
Quick Summary: The wind is still fluctuating between 20-30 mph. Since LLO has shut down the site I will be taking my time and waiting until the wind speed decreases a little more before starting an initial alignment. Things are looking positive.
Just saw a wind gust at 40 mph. I spoke too soon. I'll be playing in the EE shop while waiting for the wind to die down. If you call the control room and no one answers please send me an email.
What was done:
1. RF AM measurement (D0900891) unit installation
To better diagnose the problem, we inserted RF AM measurement unit (D0900891) between the EOM driver and the EOM.
For connection cartoon, see the first attachment.
The unit was placed on top of the DCPD interface box that was next to the EOM driver.
Couplers are connected back to back such that the first one sees the forward-going RF and the second one the reflection from the EOM. The insertion loss of one coupler is rated at 0.3dB and this was confirmed by the power measurement. Driver output was 23.34dBm (measured by an RF meter with a 30dB attenuator); after the second coupler it was 22.67dBm, so the insertion loss of the two couplers is 0.67dB. We didn't do anything to compensate, it's not a big deal. But this means that the modulation index is smaller by that much.
EOM reflection was measured by looking at reverse direction coupler output on the scope, which was about -11.6dBm (about 59.1 mVrms with 50 Ohm input). The reflection from EOM should be something like 20-11.6=9.4dBm, i.e. EOM is only consuming 22.67-9.4 ~ 13.3dBm.
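As a quick numeric sanity check of the figures above, a short python sketch (it only reproduces the conversions quoted in the text; the reflection estimate itself depends on the coupler's coupling factor):
import math

def dbm_to_w(p_dbm):
    """Convert a power in dBm to watts."""
    return 1e-3 * 10 ** (p_dbm / 10.0)

# Insertion loss of the two back-to-back couplers (numbers from the text):
driver_out_dbm = 23.34
after_couplers_dbm = 22.67
print(driver_out_dbm - after_couplers_dbm)   # -> 0.67 dB

# Reverse-coupler output seen on the scope: -11.6 dBm into a 50 Ohm input
v_rms = math.sqrt(dbm_to_w(-11.6) * 50.0)
print(1e3 * v_rms)                           # -> ~59 mVrms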
Just because we could, Kiwamu tightened the SMA connectors on the EOM inductor box. We also wiggled various things but didn't get any new insight, except that wiggling/tapping the power cable/connector on the EOM driver and on the +-24V distribution strip didn't do much.
Forward-going coupled output was connected to the manually adjusted channel. The front panel was adjusted so the MON voltage becomes closest to zero. That was MON=-300mV at the 2.6dBm setting.
Reverse-going coupled output was connected to the automatically biased channel.
This unit needs >+-28V supply in addition to +-17. Filiberto made a special cable that has bananas for +-30V and usual 3-pin for +-17, and we put a Tenma supply outside of the PSL room for +-30V-ish.
A long DB9 was routed from CER to the PSL rack, and a short one was routed from the PSL rack to the RF AM measurement unit, for DAQ. This was plugged into the spigot that was used for the spare EOM driver unit before (i.e. "RF9" monitors).
H1:LSC-MOD_RF9_AM_ERR_OUT_DQ and H1:LSC-MOD_RF9_AM_CTRL_OUT_DQ are the EOM reflection monitor channels.
H1:LSC-MOD_RF9_AM_AC_OUT_DQ and H1:LSC-MOD_RF9_AM_DC_OUT_DQ are the channels for the forward-going RF monitor; AC corresponds to ERR and DC to CTRL.
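For anyone who wants to trend these new monitors offline, a minimal sketch using GWpy (the GPS times are placeholders, not times from this entry):
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:LSC-MOD_RF9_AM_ERR_OUT_DQ',   # EOM reflection monitor
    'H1:LSC-MOD_RF9_AM_CTRL_OUT_DQ',  # EOM reflection monitor
    'H1:LSC-MOD_RF9_AM_AC_OUT_DQ',    # forward-going RF monitor (AC)
    'H1:LSC-MOD_RF9_AM_DC_OUT_DQ',    # forward-going RF monitor (DC)
]
start, end = 1131900000, 1131903600   # placeholder GPS interval
data = TimeSeriesDict.get(channels, start, end)
data.plot().show()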
2. Taping down 45MHz cable
We changed the routing of the RF cable between the driver and the ISC rack. Inside the PSL room, it used to go under the table but over the under-table cable tray, and it was kind of floating in the air from the floor to the cable tray, and from the cable tray to the EOM driver, pushed by other cables.
We rerouted the cable so that it never leaves the floor, and taped it to the floor using white tape. We also taped down some of the cables that were pressing against the RF cable. See the second attachment.
3. Rephasing
In the lab, the delay of the double couplers alone for a 45.5MHz signal was measured to be about 0.8 ns or 13 degrees. Kiwamu made a long cable, we added two N-elbows, and we measured the transfer function from ASAIR_A_RF45_I_ERR to Q_ERR. We ended up with:
Q/I = +4.45 (+-0.14), or 77.3 (+-0.4) degrees.
Before the installation this was 77.5 (+-0.1) deg (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=23254), so this is pretty good.
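The numbers above hang together; a short sketch of the arithmetic (delay to phase at 45.5 MHz, and the Q/I ratio to degrees):
import math

f_mod = 45.5e6               # modulation frequency [Hz]
delay = 0.8e-9               # measured delay of the two couplers [s]
print(360 * f_mod * delay)   # -> ~13 degrees

q_over_i = 4.45              # measured ASAIR_A_RF45 Q/I ratio
print(math.degrees(math.atan(q_over_i)))   # -> ~77.3 degrees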
One day later, two things.
1. The RFAM monitor unit glitches more often than the RF45 stabilization.
The dataviewer plot starts just before we finished the installation/phasing; there is a huge glitch which was caused by us, and after that the RF45 channels were relatively quiet. Four vertical lines in the dataviewer plot show the times of the different DTT traces. In the DTT plot, the bottom is the forward-going RF monitor and the top is the reflection.
It's hard to believe that this is real.
One thing is that the Tenma +-30V ground was connected to the ground of the AC outlet on the outside wall of the PSL room and the +-17V ground of the ISC rack at the same time.
Before the change:
Tenma mid point of +-30V - Tenma GND on the front panel - AC outlet ground
|
ISC rack GND (via +-17V cable)
We might (or might not) be better off disconnecting the +-30V mid point from the Tenma GND on the front panel, so I did that at around 11-19-2015 1:39 UTC. The current draw of the Tenma supply didn't change as a result.
After the change:
Tenma mid point of +-30V (floating from Tenma GND on the front panel - AC outlet ground)
|
ISC rack GND (via +-17V cable)
I don't know if the +-17V ground on the ISC rack is the same as +-24V ground inside the PSL room, though.
2. H1:LSC-MOD_RF9_AM_CTRL_GAIN was set to zero yesterday for some reason.
You can see this in the dataviewer plot top middle panel. I put it back to 1.
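For the record, restoring that setting from a Guardian/ezca session looks something like this (a sketch; the 'H1:' prefix is implied by ezca, as in the DIAG_MAIN snippet above):
# Sketch: check and restore the RF9 AM CTRL gain via ezca.
if ezca['LSC-MOD_RF9_AM_CTRL_GAIN'] == 0:
    ezca['LSC-MOD_RF9_AM_CTRL_GAIN'] = 1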
(Borja, Vinny Roma)
This is a continuation of the work started yesterday here. Today, during maintenance, we worked all morning hunting the 60Hz glitch noise and we can now confirm that the issue was identified and solved.
At 2015-11-17 17:10 (UTC) we arrived at the EndY station. We noticed an aircon unit outside of the building (although a different model to the one reported at Livingston), also used for cooling old clean rooms and no longer in use. We were sure that it was not running at the times that we observed the 60Hz bursts. We also noticed a fridge ON as we came in...more on this later.
We carried portable magnetometers similar to the ones used at the sites but plugged into oscilloscopes for portability. The area where we concentrated most of our noise hunting was the electronics bay (EBAY), as from previous measurements we noticed that the bursts were stronger at the magnetometers located there (MAG_SUSRACK and MAG_SEISRACK) in comparison with MAG_VEA (see attached figure 'Comparison_MAGs_QUAD_SUM.png'). Looking at the spikes in more detail (see 'Zoom-spikes_Mag_VEA_and_EBAYs.png') we observe that while the spikes in MAG_VEA have a frequency of 60Hz, the spikes in MAG_EBAY_SUS and MAG_EBAY_SEI have double that frequency. This seems to be caused by a non-linear response of the transducer to the magnetic field, stronger in MAG_SEI than in MAG_SUS; since both are identical sensors, we assume the magnetic field from the spike is stronger at the SEI magnetometer.
Another clue that pointed to EBAY as the area of interest is the attached coherence plot of MIC_EBAY and MIC_VEA_MINUSY with MAG_EBAY_SEI: we can clearly see correlations at 60Hz and harmonics, always stronger at MIC_EBAY. Notice however that we were never able to hear the bursts, so we assume the microphones pick them up electromagnetically.
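A coherence plot like the attached one can be reproduced with something like the GWpy sketch below (the channel names and GPS times are placeholders for the full H1:PEM-EY_* names and the actual measurement interval):
from gwpy.timeseries import TimeSeries

start, end = 1131800000, 1131803600   # placeholder GPS interval
mic = TimeSeries.get('H1:PEM-EY_MIC_EBAY_PLACEHOLDER_DQ', start, end)
mag = TimeSeries.get('H1:PEM-EY_MAG_EBAY_SEIRACK_PLACEHOLDER_DQ', start, end)
coh = mic.coherence(mag, fftlength=8, overlap=4)   # resolves 60 Hz and harmonics
coh.plot().show()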
In order to confirm that the bursts were actually real signals (instead of rack related issues) we swapped the axes of both magnetometers on EBAY, as we observed they had different signal strength. The change in the observed signal strength after the swap was compatible with the axes changes. Notice that we undid these changes after the morning work, so everything is now back to normal.
Then we moved the portable magnetometer around the EBAY racks and noticed no strong magnetic noise anywhere, with the exception of the 'PEM Endevco Power supply' which powers the accelerometers. The magnetic field around this box was very strong and MAG_EBAY_SEI is not far away from it. We also noticed that this was the only device connected to the wall AC power supply (see attached pictures), and this is also the case anywhere this PEM power supply is used.
We attach a time plot of EY_MAG_EBAY_SEI during the whole morning working period and we can see several things:
1) The time interval between bursts is much shorter and less regular than before (this was also observed previously when work was done at the End station). Compare the attached plots from yesterday night ('Latest-60Hz_Bursts', very regular 85-minute separation between spikes) and today ('Morning_60Hz_timeplot_MAG', totally irregular with separations as short as 3 minutes).
2) The burst structure is different from the one previously associated with the 60Hz glitch noise (see here). For instance see the red circled area. During this time the vacuum cleaner was on near EBAY.
At this point we realized that human activity with electric devices plugged into the wall at the station was involved in the generation of 60 Hz bursts, although with a different signature from the bursts we knew and came to hunt.
Suddenly, for almost an hour (between hours 1.7 and 2.5 in plot 'Morning_60Hz_timeplot_MAG') we saw nothing. Then the bursts became more spaced out, so after a while we tried to reproduce the vacuum cleaner burst signature by switching it on. The vacuum cleaner was in the same room as the fridge and we noticed that the fridge was now turned OFF (we later learned that John and Bubba turned it OFF).
Then everything started to make sense...the fridge compressor only needs to be on when the temperature inside the fridge rises above a threshold, which can happen every 1.5 to 2 hours or longer depending on the ambient temperature and the quality of the fridge insulation. Notice that the interval between bursts was shorter in summer than in the current months. The compressor is usually on for a few tens of minutes until the temperature is within the desired range, and then it turns off. So in order to confirm the fridge as the cause of our 60Hz bursts and glitches we tested turning it ON and we saw a burst (circled green on the previous plot at hour 3.5). And as we turned it OFF the 60Hz bursts disappeared.
It appears that the fridge was ON for the whole of O1; this will no longer happen. But notice that any device drawing current from the mains seems to generate 60Hz bursts, at least as picked up by the magnetometers in EBAY, so we soon thought that maybe this is related to the only device in that room that is plugged into the mains and that has considerable magnetic contamination...the 'PEM Endevco Power supply'.
So after lunch we went back to the EndY station (arriving at UTC 23:07:00) with the intention of checking whether unplugging the PEM Power Supply from the wall would be enough for the EBAY magnetometers to stop seeing the current draw of the fridge as it was turned on and off at 1-minute intervals 3 times. For comparison we did the same test before with the Power supply still plugged in and turned on. Unfortunately we see no difference between these two cases on MAG_EBAY_SEI; as per the attached plot 'Checking_PEM_Power_Supply_Coupling.png', the magenta circle is with the PEM Power Supply ON and the brown one is with the Power supply OFF. Interestingly, however, we can see a small spike at about UTC 23:34:00 when we turned off the Power supply and at 23:55:00 when we turned it back on.
Notice the spikes at the beginning correspond to our arrival at the End station, probably due to switching ON the shoe cleaner at the entrance and the desktop computer at EBAY.
As a follow-up to yesterday's entry, I attach a time plot of MAG_EBAY_SEIRACK at EndY for the last 19 hours after yesterday's fix of the 60Hz bursts. We can see that the regular 60Hz bursts are no longer happening. The only spike in that 19-hour period took place at UTC 2015-11-18 16:41:30, which is 8:41:30 am local time, which agrees perfectly with the time at which several people went into the building to look at some ESD tripping related issues. Therefore, as expected, the spike is related to current draw at the building due to human activity.
A final follow-up on the FIX of the 60Hz glitches.
Now that LHO has been locked for quite some time I decided to compare Omicron trigger spectrograms before and after the fix. The evidence is clear that the regular 60Hz glitches are now gone.
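Not the Omicron comparison itself, but a quick before/after check can also be made directly from the raw magnetometer data, e.g. with a GWpy spectrogram (channel name and GPS times are placeholders):
from gwpy.timeseries import TimeSeries

data = TimeSeries.get('H1:PEM-EY_MAG_EBAY_SEIRACK_PLACEHOLDER_DQ',
                      1131800000, 1131807200)       # placeholder interval
spec = data.spectrogram(60, fftlength=4) ** (1/2.)  # ASD spectrogram, 60 s stride
spec.plot(norm='log').show()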
TITLE: Nov 17 EVE Shift 00:00-08:00UTC (08:00-04:00 PDT), all times posted in UTC
STATE Of H1: Down
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: I’ve been asked to stand down the swing shift due to deplorable wind conditions that are expected to last until 11:00UTC. LHO Wind Speed indicators are reporting winds in excess of 50mph at this time. WOW! I just saw a 75mph gust. By the looks of Terramon the Alaska quake didn’t cause too much of a ruckus. I’m not aware of any way to remotely see µSeism. The Observatory Mode was left in 'Preventative Maintenance' Mode. I will continue to monitor the situation. Nasty situation science fans!
The control room screen shots include the seismic DMT plots:
Jonathan, Jim, Patrick, Dave:
Looks like we got away lightly with this glitch (famous last words). We just needed to reboot h1lsc0 and h1iscey. We got permission from Mike to make the h1lsc model change requested by Duncan and Detchar to promote the channel H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ to the science frame (WP5614). We restarted the DAQ to resync the lsc model. Interestingly while the DAQ was down, my epics freeze alerts went crazy until the data concentrator came back. This did not happen during the 13:30 DAQ restart, but might suggest a link between DAQNET and FELAN ports on the front ends.
We have restarted all the Beckhoff SDF systems to green up the DAQ EDCU. At the time of writing the CDS overview is green with the exception of an excitation on SUS-BS.
The wind continues to hammer on the wall of the CER, hopefully no more power glitches tonight.
It looks to me that all DCS computers/services running in the LSB and the warehouse survived the power glitch. (I'm not sure if it was site-wide.)
Here is what the overview looked like before we started the recovery.
[Patrick, Kiwamu, Jenne]
After we recovered the PSL, the PMC wasn't locking. Rick reminded us that we might have to enable the output of the high voltage power supplies in the power supply mezzanine for the PZT. We went up there, and enabled the output on the PMC HV supply (labeled PSL or something like that?) as well as on the OMC PZT supply (labeled HAM6 PZTs). For the PMC power supply, the Vset was correct (according to the sticker on the front panel), but Iset needed to be set. On the OMC power supply we set the Vset, and it came back (Iset was fine).
DIAG_MAIN reminded us about the OMC HV supply, but perhaps TJ can add the PMC HV supply to that list?
DIAG_MAIN is also telling us that both ESD drivers are off, but since probably no locking will be happening tonight, we'll deal with that in the morning when the wind decides to die down.
We have known for a while that the front-ends (and DAQ computers) do not set things up properly to limit EPICS traffic to only one Ethernet port (FE-LAN); instead it goes out on all connected networks. This is straightforward to correct for executables started in a shell (where the environment variable can be asserted); it is a bit harder for ones started from an init script. Certainly something to test on a test stand.
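A sketch of the kind of per-process fix meant here, for an executable started from a shell or wrapper (the FE-LAN addresses and executable path are placeholders; EPICS_CAS_INTF_ADDR_LIST is the standard variable for binding a CA server to a single interface):
import os
import subprocess

env = dict(os.environ)
env['EPICS_CAS_INTF_ADDR_LIST'] = '10.101.0.1'   # placeholder FE-LAN address
env['EPICS_CA_AUTO_ADDR_LIST'] = 'NO'            # don't search all interfaces
env['EPICS_CA_ADDR_LIST'] = '10.101.0.255'       # placeholder FE-LAN broadcast
subprocess.Popen(['/path/to/epics_process'], env=env)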
Today I re-zeroed all of the active H1 optical levers. The table below shows their pitch/yaw values before re-zeroing and the values as they read when I finished each oplev (there could be relaxation in the motors causing a slow, slight drift, especially in the BS and HAM2 oplevs which are zeroed with piezo motors steering a 2" turning mirror). This closes work permit #5606.
Optical Lever | Old Pitch (µrad) | Old Yaw (µrad) | New Pitch (µrad) | New Yaw (µrad)
ETMx | -2.4 | -2.2 | -0.2 | 0.3
ETMy | -1.3 | -6.7 | 0.1 | 0.0
ITMx | -10.2 | -8.2 | -0.2 | 0.0
ITMy | 7.9 | -2.6 | -0.2 | -0.0
PR3 | -1.1 | -2.9 | 0.0 | 0.0
SR3 | 10.1 | -5.7 | 0.0 | 0.2
BS | -13.3 | -9.2 | -0.4 | 0.4
HAM2 | -20.6 | -39.0 | 0.1 | -0.1
Tagging SUS, DetChar, and ISC for future reference.
Calibration parameter updates that include data until Nov 11.
Kappa_tst is slowly trending up after the bias sign flip on Oct 16 and the rest of the parameters show normal variations that we have seen in the past.
The first plot contains all the parameters that we calculate.
The second plot highlights kappa_tst, kappa_C and cavity pole of which the kappa_tst and kappa_C will be applied to h(t).
This data is filtered for the locked state using the GDS state vector and thus contains some outliers at times when the IFO is locked but not in analysis-ready state. In the future, we plan to use the GDS state vector, which will give us the flexibility to only use analysis-ready data.
Correction: Data is filtered using the Guardian state vector, not GDS.
The attached plot includes calibration parameters from Nov 12 to Nov 30. This time the data has been filtered using the GDS state vector and includes only the data that has the first four bits of the GDS vector set to 1. A detailed definition of the GDS state vector can be found here.
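A minimal sketch of that bit filtering (the arrays are tiny placeholders standing in for the real state-vector and parameter time series):
import numpy as np

state_vector = np.array([15, 15, 3, 15, 7])          # placeholder GDS state-vector samples
kappa_tst = np.array([1.02, 1.01, 0.5, 1.03, 0.7])   # placeholder parameter samples

GOOD_MASK = 0b1111                             # first four bits all set to 1
good = (state_vector & GOOD_MASK) == GOOD_MASK
print(kappa_tst[good])                         # -> [1.02 1.01 1.03]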
The output mat file from this calculation is located at the svn location below: