11/18 Owl Shift 08:00-16:00 UTC (00:00-08:00 PST)
Quick Summary: The wind is still fluctuating between 20-30 mph. Since LLO has shut down the site, I will be taking my time and waiting until the wind speed decreases a little more before starting an initial alignment. Things are looking positive.
Just saw a wind gust at 40 mph. I spoke too soon. I'll be playing in the EE shop while waiting for the wind to die down. If you call the control room and no one answers, please send me an email.
What was done:
1. RF AM measurement (D0900891) unit installation
To better diagnose the problem, we inserted the RF AM measurement unit (D0900891) between the EOM driver and the EOM.
For connection cartoon, see the first attachment.
The unit was placed on top of the DCPD interface box that was next to the EOM driver.
The couplers are connected back to back such that the first one sees the forward-going RF and the second one the reflection from the EOM. The insertion loss of one coupler is rated at 0.3dB, and this was confirmed by a power measurement: the driver output was 23.34dBm (measured by the RF meter with a 30dB attenuator), and after the second coupler it was 22.67dBm, so the insertion loss of the two couplers is 0.67dB. We didn't do anything to compensate; it's not a big deal, but this means that the modulation index is smaller by that much.
The EOM reflection was measured by looking at the reverse-direction coupler output on the scope, which was about -11.6dBm (about 59.1 mVrms with 50 Ohm input). With the 20dB coupling factor, the reflection from the EOM should be something like 20-11.6=8.4dBm, i.e. the EOM is only consuming 22.67-8.4 ~ 14.3dBm.
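As a sanity check on the numbers above, here is a minimal Python sketch of the dBm bookkeeping (the 20dB coupling factor is an assumption based on the calculation quoted above; the other values are the ones reported in this entry):

import math

def dbm_to_mw(p_dbm):
    """Convert power in dBm to milliwatts."""
    return 10 ** (p_dbm / 10.0)

def vrms_to_dbm(v_rms, r_ohm=50.0):
    """Convert an RMS voltage across a load to power in dBm."""
    return 10.0 * math.log10((v_rms ** 2 / r_ohm) / 1e-3)

coupling_db = 20.0                  # assumed coupling factor of the reverse coupler
coupled = vrms_to_dbm(59.1e-3)      # scope reading at the coupled port -> about -11.6 dBm
reflected = coupled + coupling_db   # inferred reflection from the EOM
forward = 22.67                     # driver output after both couplers, dBm

print(f"coupled port   : {coupled:6.1f} dBm")
print(f"EOM reflection : {reflected:6.1f} dBm ({dbm_to_mw(reflected):.1f} mW)")
print(f"forward drive  : {forward:6.1f} dBm ({dbm_to_mw(forward):.0f} mW)")
print(f"return loss    : {forward - reflected:6.1f} dB")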
While we were at it, Kiwamu tightened the SMA connectors on the EOM inductor box. We also wiggled various things but didn't get any new insight, except that wiggling/tapping the power cable/connector on the EOM driver and on the +-24V distribution strip didn't do much.
The forward-going coupler output was connected to the manually adjusted channel. The front panel was adjusted so the MON voltage comes closest to zero; that was MON=-300mV at the 2.6dBm setting.
The reverse-going coupler output was connected to the automatically biased channel.
This unit needs a >+-28V supply in addition to +-17V. Filiberto made a special cable that has bananas for +-30V and the usual 3-pin for +-17V, and we put a Tenma supply outside of the PSL room for the +-30V-ish supply.
A long DB9 was routed from CER to the PSL rack, and a short one was routed from the PSL rack to the RF AM measurement unit, for DAQ. This was plugged into the spigot that was used for the spare EOM driver unit before (i.e. "RF9" monitors).
H1:LSC-MOD_RF9_AM_ERR_OUT_DQ and H1:LSC-MOD_RF9_AM_CTRL_OUT_DQ are for EOM reflection monitor.
H1:LSC-MOD_RF9_AM_AC_OUT_DQ and H1:LSC-MOD_RF9_AM_DC_OUT_DQ are the channels for forward going RF monitor. AC corresponds to ERR and DC to CTRL.
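For trending these new monitors, something like the following should work (a minimal sketch assuming gwpy and NDS access from the control room; the GPS interval is a placeholder):

from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:LSC-MOD_RF9_AM_ERR_OUT_DQ',   # EOM reflection, ERR monitor
    'H1:LSC-MOD_RF9_AM_CTRL_OUT_DQ',  # EOM reflection, CTRL monitor
    'H1:LSC-MOD_RF9_AM_AC_OUT_DQ',    # forward-going RF, AC monitor
    'H1:LSC-MOD_RF9_AM_DC_OUT_DQ',    # forward-going RF, DC monitor
]

start, end = 1131800000, 1131803600   # placeholder GPS interval (about 1 hour)
data = TimeSeriesDict.get(channels, start, end)

plot = data.plot()
plot.savefig('rf9_am_monitors.png')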
2. Taping down 45MHz cable
We changed the routing of the RF cable between the driver and the ISC rack. Inside the PSL room it used to go under the table but over the under-table cable tray, kind of floating in the air from the floor to the cable tray and from the cable tray to the EOM driver, pushed around by other cables.
We rerouted the cable so that it never leaves the floor, and taped it to the floor using white tape. We also taped down some of the cables that were pressing against the RF cable. See the second attachment.
3. Rephasing
In the lab, the delay of the two couplers alone for a 45.5MHz signal was measured to be about 0.8 ns, or 13 degrees. Kiwamu made a long cable, we added two N-elbows, and we measured the transfer function from ASAIR_A_RF45_I_ERR to Q_ERR. We ended up with:
Q/I = +4.45 (+-0.14), or 77.3 (+-0.4) degrees.
Before the installation this was 77.5 (+-0.1) deg (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=23254), so this is pretty good.
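For reference, a minimal Python sketch of the phase arithmetic quoted above (all numbers are the ones reported in this entry):

import math

f_mod = 45.5e6   # modulation frequency, Hz
delay = 0.8e-9   # measured delay of the two couplers, s

phase_from_delay = 360.0 * f_mod * delay
print(f"coupler delay: {phase_from_delay:.1f} deg")       # ~13 deg

q_over_i = 4.45  # measured ASAIR_A_RF45 Q/I ratio
phase_from_ratio = math.degrees(math.atan(q_over_i))
print(f"Q/I = {q_over_i} -> {phase_from_ratio:.1f} deg")  # ~77.3 deg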
One day later, two things.
1. The RFAM monitor unit glitches more often than the RF45 stabilization.
The dataviewer plot starts just before we were done with the installation/phasing; there is a huge glitch which was caused by us, and after that the RF45 channels were relatively quiet. The four vertical lines in the dataviewer plot show the times of the different DTT traces. In the DTT plot, the bottom is the forward-going RF monitor and the top is the reflection.
It's hard to believe that this is real.
One thing is that the Tenma +-30V ground was connected to the ground of the AC outlet on the outside wall of the PSL room and to the +-17V ground of the ISC rack at the same time:
Tenma mid point of +-30V - Tenma GND on the front panel - AC outlet ground
|
ISC rack GND (via +-17V cable)
We might (or might not) be better off disconnecting the +-30V mid point from the Tenma GND on the front panel, so I did that at around 11-19-2015 1:39 UTC. The current draw of the Tenma supply didn't change.
After the change:
Tenma mid point of +-30V (floating from Tenma GND on the front panel - AC outlet ground)
|
ISC rack GND (via +-17V cable)
I don't know if the +-17V ground on the ISC rack is the same as +-24V ground inside the PSL room, though.
2. H1:LSC-MOD_RF9_AM_CTRL_GAIN was set to zero yesterday for some reason.
You can see this in the dataviewer plot top middle panel. I put it back to 1.
(Borja, Vinny Roma)
This is a continuation of the work started yesterday here. Today, during maintenance, we worked all morning on hunting the 60Hz glitch noise, and we can now confirm that the issue was identified and solved.
At 2015-11-17 17:10 (UTC) we arrived at the EndY station. We noticed an aircon unit outside of the building (a different model from the one reported at Livingston), also used for cooling old clean rooms and no longer in use. We confirmed that it was not running at the times we observed the 60Hz bursts. We also noticed a fridge ON as we came in...more on this later.
We carried portable magnetometers similar to the ones used at the sites, but plugged into oscilloscopes for portability. The area where we concentrated most of our noise hunting was the electronics bay (EBAY), as from previous measurements we noticed that the bursts were stronger at the magnetometers located there (MAG_SUSRACK and MAG_SEISRACK) than at MAG_VEA (see attached figure 'Comparison_MAGs_QUAD_SUM.png'). Looking at the spikes in more detail (see 'Zoom-spikes_Mag_VEA_and_EBAYs.png'), we observe that while the spikes in MAG_VEA have a frequency of 60Hz, the spikes in MAG_EBAY_SUS and MAG_EBAY_SEI have double that frequency. This seems to be caused by a non-linear response of the transducer to the magnetic field, stronger in MAG_SEI than in MAG_SUS; since both are identical sensors, we assume the magnetic field from the spike is stronger at the SEI magnetometer.
Another clue that pointed to the EBAY as the area of interest is the attached coherence plot of MIC_EBAY and MIC_VEA_MINUSY with MAG_EBAY_SEI: we can clearly see correlations at 60Hz and its harmonics, always stronger at MIC_EBAY. Notice however that we were never able to hear the bursts, so we assume the microphones pick them up electromagnetically.
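A minimal sketch of this kind of coherence computation (assuming gwpy and NDS access; the channel names are indicative only and the GPS interval is a placeholder):

from gwpy.timeseries import TimeSeries

start, end = 1131750000, 1131753600   # placeholder GPS interval

# Channel names are indicative; take the exact PEM names from the channel list.
mag = TimeSeries.get('H1:PEM-EY_MAG_EBAY_SEIRACK_X_DQ', start, end)
mic = TimeSeries.get('H1:PEM-EY_MIC_EBAY_RACKS_DQ', start, end)

# Coherence between microphone and magnetometer; 60 Hz and its harmonics
# should stand out if the microphone pickup is electromagnetic.
coh = mic.coherence(mag, fftlength=8, overlap=4)

plot = coh.plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_ylabel('Coherence')
plot.savefig('mic_mag_coherence.png')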
In order to confirm that the bursts were actually real signals (instead of rack related issues), we swapped the axes of both magnetometers in the EBAY, as we observed they had different signal strengths. The change in the observed signal strength after the swap was compatible with the axis changes. Notice that we undid these changes after the morning work, so everything is now back to normal.
Then we moved the portable magnetometer around the EBAY racks and noticed no strong magnetic noise anywhere, with the exception of the 'PEM Endevco Power supply' which powers the accelerometers. The magnetic field around this box was very strong, and MAG_EBAY_SEI is not far from it. We also noticed that this was the only device connected to the wall AC power (see attached pictures), and this is also the case anywhere this PEM power supply is used.
We attach a time plot of EY_MAG_EBAY_SEI during the whole morning working period, in which we can see several things:
1) The time interval between bursts is much shorter and less regular than before (this was also observed previously when work was done at the end station). Compare the attached plots from yesterday night ('Latest-60Hz_Bursts', very regular 85-minute separation between spikes) and today ('Morning_60Hz_timeplot_MAG', totally irregular, with separations as short as 3 minutes).
2) The burst structure is different from the one previously related to the 60Hz glitch noise (see here). For instance, see the red circled area; during this time the vacuum cleaner was on near the EBAY.
At this point we realized that human activity with electric devices plugged into the wall at the station was involved in the generation of 60Hz bursts, although with a different signature from the bursts we knew and came to hunt.
Suddenly, for almost an hour (between hours 1.7 and 2.5 in plot 'Morning_60Hz_timeplot_MAG'), we saw nothing. Then the bursts became more spaced out, so after a while we tried to reproduce the vacuum cleaner burst signature by switching it on. The vacuum cleaner was in the same room as the fridge, and we noticed that the fridge was now turned OFF (we later learned that John and Bubba turned it OFF).
Then everything started to make sense...the fridge compressor only needs to be on when the temperature inside the fridge rises above a threshold, which can happen every 1.5 to 2 hours or longer depending on the environment temperature and the quality of the fridge insulation. Notice that the interval between bursts was shorter in summer than in the current months. The compressor then usually stays on for a few tens of minutes until the temperature is within the desired range, and then it turns off. So, in order to confirm the fridge as the cause of our 60Hz bursts and glitches, we tested turning it ON and we saw a burst (circled green on the previous plot at hour 3.5). And when we turned it OFF, the 60Hz bursts disappeared.
It appears that the fridge was ON for the whole of O1; this will no longer happen. But notice that any device drawing current from the mains seems to generate 60Hz bursts, at least as picked up by the magnetometers in the EBAY, so we soon suspected that this might be related to the only device in that room that is plugged into the mains and that has considerable magnetic contamination...the 'PEM Endevco Power supply'.
So after lunch we went back to the EndY station (arriving at UTC 23:07:00) with the intention of checking whether unplugging the PEM Power Supply from the wall would be enough for the EBAY magnetometers to stop seeing the current draw of the fridge, as it was turned on and off at 1-minute intervals three times. For comparison we did the same test beforehand with the Power Supply still plugged in and turned on. Unfortunately we see no difference between these two cases on MAG_EBAY_SEI; as per the attached plot 'Checking_PEM_Power_Supply_Coupling.png', the magenta circle is with the PEM Power Supply ON and the brown one is with the Power Supply OFF. Interestingly, however, we can see a small spike at about UTC 23:34:00 when we turned off the Power Supply, and at 23:55:00 when we turned it back on.
Notice that the spikes at the beginning correspond to our arrival at the end station, probably due to switching ON the shoe cleaner at the entrance and the desktop computer in the EBAY.
As a follow-up to yesterday's entry, I attach a time plot of MAG_EBAY_SEIRACK at EndY for the 19 hours after yesterday's fix of the 60Hz bursts. We can see that the regular 60Hz bursts are no longer happening. The only spike in that 19-hour period took place at UTC 2015-11-18 16:41:30, which is 8:41:30 am local time and agrees perfectly with the time at which several people went into the building to look at some ESD tripping related issues. Therefore, as expected, the spike is related to current draw in the building due to human activity.
A final follow up on the FIX of the 60Hz glitches.
Now that LHO has been locked for quite some time, I decided to compare Omicron trigger spectrograms before and after the fix. The evidence is clear that the regular 60Hz glitches are now gone.
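A minimal sketch of how such a before/after comparison could be reproduced from Omicron trigger lists (the file names and the three-column text format below are hypothetical; the real triggers would need to be exported from the Omicron output first):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical three-column trigger dumps (GPS time, frequency [Hz], SNR)
# exported for the periods before and after the fix.
before = np.loadtxt('omicron_triggers_before.txt')
after = np.loadtxt('omicron_triggers_after.txt')

fig, axes = plt.subplots(1, 2, sharey=True, figsize=(10, 4))
for ax, trig, title in zip(axes, (before, after), ('before fix', 'after fix')):
    sc = ax.scatter(trig[:, 0] - trig[0, 0], trig[:, 1], c=trig[:, 2],
                    s=8, cmap='viridis')
    ax.set_yscale('log')
    ax.set_xlabel('time since start [s]')
    ax.set_title(title)
axes[0].set_ylabel('frequency [Hz]')
fig.colorbar(sc, ax=list(axes), label='SNR')
fig.savefig('omicron_before_after.png')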
TITLE: Nov 17 EVE Shift 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE Of H1: Down
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: I’ve been asked to stand down the swing shift due to deplorable wind conditions that are expected to last until 11:00 UTC. LHO wind speed indicators are reporting winds in excess of 50mph at this time. WOW! I just saw a 75mph gust. By the looks of Terramon, the Alaska quake didn’t cause too much of a ruckus. I’m not aware of any way to remotely see the µSeism. The Observatory Mode was left in 'Preventative Maintenance' mode. I will continue to monitor the situation. Nasty situation, science fans!
The control room screen shots include the seismic DMT plots:
I put a strip chart of the wind speed over the DARM spectrum on nuc3 so that it would be posted to the web.
Jonathan, Jim, Patrick, Dave:
Looks like we got away lightly with this glitch (famous last words). We just needed to reboot h1lsc0 and h1iscey. We got permission from Mike to make the h1lsc model change requested by Duncan and DetChar to promote the channel H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ to the science frame (WP5614). We restarted the DAQ to resync the lsc model. Interestingly, while the DAQ was down my EPICS freeze alerts went crazy until the data concentrator came back. This did not happen during the 13:30 DAQ restart, but might suggest a link between the DAQNET and FE-LAN ports on the front ends.
We have restarted all the Beckhoff SDF systems to green up the DAQ EDCU. At the time of writing the CDS overview is green with the exception of an excitation on SUS-BS.
The wind continues to hammer on the wall of the CER, hopefully no more power glitches tonight.
It looks to me that all DCS computers/services running in the LSB and the warehouse survived the power glitch. (I'm not sure if it was site-wide.)
here is what the overview looked like before we started the recovery.
[Patrick, Kiwamu, Jenne]
After we recovered the PSL, the PMC wasn't locking. Rick reminded us that we might have to enable the output of the high voltage power supplies on the power supply mezzanine for the PZTs. We went up there and enabled the output on the PMC HV supply (labeled PSL, or something like that) as well as on the OMC PZT supply (labeled HAM6 PZTs). For the PMC power supply the Vset was correct (according to the sticker on the front panel), but Iset needed to be set. On the OMC power supply we set the Vset, and it came back (Iset was fine).
DIAG_MAIN reminded us about the OMC HV supply, but perhaps TJ can add the PMC HV supply to that list?
DIAG_MAIN is also telling us that both ESD drivers are off, but since probably no locking will be happening tonight, we'll deal with that in the morning when the wind decides to die down.
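For what it's worth, a rough sketch of what such a DIAG_MAIN-style check might look like (the channel name and threshold below are hypothetical; the real test would live in the DIAG_MAIN guardian node and use its ezca interface):

def check_pmc_hv(read_channel):
    """Yield a warning if the PMC HV supply output looks disabled."""
    voltage = read_channel('PSL-PMC_HV_MON')   # hypothetical readback channel
    if voltage < 10.0:                         # hypothetical "output off" threshold
        yield 'PMC HV supply output appears to be off'

# Quick check with a stubbed-out channel read (always returns 0 V):
for msg in check_pmc_hv(lambda chan: 0.0):
    print(msg)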
We have known for a while that the front-ends (and DAQ computers) do not set things up properly so that EPICS traffic is limited to only one Ethernet port (FE-LAN) instead of all connected networks. This is straightforward to correct for executables started in a shell (where the environment variable can be asserted), but a bit harder for ones started from an init script. Certainly something to test on a test stand.
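For reference, a minimal sketch of the idea for processes launched from a script, using the standard EPICS Channel Access client environment variables (the broadcast address and executable name below are placeholders, not the actual front-end configuration):

import os
import subprocess

env = dict(os.environ)
env['EPICS_CA_AUTO_ADDR_LIST'] = 'NO'        # don't broadcast on every interface
env['EPICS_CA_ADDR_LIST'] = '10.101.0.255'   # FE-LAN broadcast address (placeholder)

subprocess.run(['some_frontend_executable'], env=env)   # placeholder command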
I acknowledged a trouble alert on the fire control panel in the CUR in the OSB which was triggered by the 16:11 PST power glitch.
We would like to know the range of counts we have available for doing hardware injections with PCAL. Hardware injections are filtered with an inverse actuation filter to get from strain to counts; this is the CAL-PINJX_HARDWARE filter bank. Here I've filtered waveforms with the CAL-PINJX_HARDWARE filter bank to see how many counts they require. I've attached plots of the time series in counts. Last Thursday a filter was added to compensate for the upsampling that is done in the analog electronics. This seems to have a significant effect on the counts: I did a test with the coherentbbh0 waveform (used in hardware injection tests before) and saw that it increased its counts by a factor of ~3. I've attached plots for before and after adding this filter. Here I report the max number of counts from the waveforms:
* CW injections: ~1500 counts
* 1.4-1.4 non-spinning SEOBNRv2 waveform (target SNR is ~15): 7e7 counts
* 25-25 non-spinning SEOBNRv2 waveform (target SNR is ~15): ~10000 counts
* coherentbbh0 waveform from hardware injection tests (recovered SNR is ~18-23): ~20000 counts
Note these SNRs are approximate and can change depending on sky location. It was quoted that perhaps the CW injections could change by a factor of 2. There is also a higher-frequency line that gets injected in PCALX, and we need to take this into consideration as well.
I filtered the 1.4-1.4 non-spinning BNS template with and without the inverse anti-imaging filter that was added last week. The first plot is the h(t) time series for the waveform. It is near the time of the merger where the maximum number of counts occurs. The second plot is the filtered time series in counts without the inverse anti-imaging filter. The maximum is 5e6 counts. The third plot is the filtered time series in counts with the inverse anti-imaging filter. The maximum is 7e7 counts.
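A minimal sketch of this kind of counts estimate (the file names below are placeholders, and the filter coefficients would first have to be exported from the actual CAL-PINJX_HARDWARE filter bank, e.g. from the Foton file):

import numpy as np
from scipy.signal import sosfilt

strain = np.loadtxt('bns_1p4_1p4_strain.txt')   # placeholder h(t) time series
sos = np.load('pinjx_hardware_sos.npy')         # placeholder second-order sections

counts = sosfilt(sos, strain)
print(f"peak excursion: {np.max(np.abs(counts)):.3g} counts")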
With the release of HWInjReport v2.2, it was necessary to repeat the prior analysis run spanning the first half of O1, Sep 12 2015 00:00:00 UTC (1126051217) to Oct 23 2015 20:14:43 UTC. In the previous iteration of this run there were a number of anomalies that necessitated reexamining the logic and implementation of HWInjReport, leading to the creation of version 2.2. Some apparent anomalies, which turned out not to be anomalies, were removed; a few true anomalies remain, as expected for properly functioning injection analysis software.
The actual output report, along with the generated log file and the input schedule file containing the scheduled injections, has been attached to provide details of the output results. Below is a summary analysis of the results.
This run was performed with the following parameters:
GPS Start Time = 1126051217 # Beginning of time span, in GPS seconds, to search for injections
GPS End Time = 1129666500 # Ending of time span, in GPS seconds, to search for injections
Check Hanford IFO = True # Check for injections in the Hanford IFO frame files.
Check Livingston IFO = True # Check for injections in the Livingston IFO frame files.
IFO Coinc Time = 0.012 # Time window, in seconds, for coincidence between IFO injection events.
Check ODC_HOFT = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in HOFT frames.
Check ODC_RAW = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in RAW frames.
Check ODC_RDS = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in RDS frames.
Check GDS_HOFT = True # Check GDS-CALIB_STATE_VECTOR channel in HOFT frames.
Check CAL_RAW = True # Check CAL-INJ_ODC_CHANNEL_OUT_DQ channel in RAW frames.
Report Normal = True # Include normal (IFO-coincident, consistent, and scheduled for network injections; consistent and scheduled for IFO injections) injections in report
Report Anomalous = True # Include anomalous (non-IFO-coincident, inconsistent, or unscheduled) injections in report
Use CONDOR optimizations = True # Enable optimizations that assume execution on a CONDOR machine
The schedule file contained 63 injections. Of these, 43 were found to occur in at least one IFO, and 20 were found not to occur in any IFO. Some of the non-occurring scheduled injections were listed as having zero amplitude scaling, so it is reasonable that these injections would not be detected, as they have no amplitude.
This run yielded 28 normal network injections, all of which were CBC injections.
There were a significant number of UNSCHEDULED injections, all of which appear to be single-IFO injections.
There were a number of CAL-INJ resets; however, every such reset had the peculiar property that it only showed as occurring in two of the frame channels or bits, with all other channels or bits showing it as non-occurring. Further, only specific doublet combinations of frame channels/bits occurred in this fashion. These doublet combinations were:
In other words, for all the reported CAL-INJ resets, the frame flags would only show an injection occurring in one of the above combinations, and all other frame flags would show the injection as non-occurring in their associated channel/bit. This phenomenon was verified for several of the CAL-INJ resets, so the report of this phenomenon appears reliable.
There were 3 UNKNOWN injections that occurred only in L1:
These are known hardware injections that were inserted without specifying the type of the injection.
There were several pairs of H1 and L1 single-IFO injections that matched to the same scheduled injection. These are injections that should have been IFO-coincident but had time differences greater than the set IFO coincidence time of 12 ms. These anomalous injections can be seen as occurring together, one after the other, with both being matched to the same scheduled injection. The easiest way to spot this is that a specific Inj UID value will occur twice in immediate succession. In this case, Inj UIDs 29, 30, 31, 38, 40, 48, and 54 occur in this fashion.
It is hypothesized that these anomalies may be due to a known bug that existed at the time in which the amplitude threshold for setting the ODC bits for the occurrence of an injection was not being checked using the absolute value of the output waveform, but instead only checked for positive crossing of the waveform. If waveforms sent to the IFOs are significantly out of phase (such as may occur from one IFO inverting the waveform relative to the other), then, at low frequencies, the ODC bits for one IFO would not register the occurrence of the waveform until significantly after the other IFO. This is because while one IFO outputs a signal crossing into the positive, thus being registered as having an injection at the time of crossing, the other IFO would be outputting a signal crossing into the negative, thus not registering as an injection even though one is occurring. It would not be until the second IFO output a signal crossing into the positive that it would register as having an injection, but this could be significantly outside the 12 ms IFO coincidence window. In fact, one case, Inj UID 31, was found to have a 31 ms (spooky!) difference between start of the injection times for H1 and L1.
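A toy numerical illustration of this mechanism (the waveform, threshold, and sample rate below are arbitrary, chosen only to show that an inverted low-frequency signal crosses a positive-only threshold roughly half a cycle later):

import numpy as np

fs = 16384.0                           # sample rate, Hz (arbitrary)
t = np.arange(0, 1.0, 1 / fs)
waveform = np.sin(2 * np.pi * 20.0 * t) * np.linspace(0, 1, t.size)  # 20 Hz, ramped on

threshold = 0.1                        # positive-only threshold (no absolute value)
h1 = waveform                          # one IFO gets the waveform as-is
l1 = -waveform                         # the other IFO gets it inverted

t_h1 = t[np.argmax(h1 > threshold)]    # first positive crossing at the first IFO
t_l1 = t[np.argmax(l1 > threshold)]    # first positive crossing at the second IFO

print(f"flag time offset between IFOs: {1e3 * abs(t_l1 - t_h1):.1f} ms")

For a 20 Hz signal the offset comes out near 25 ms, well outside a 12 ms coincidence window.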
It is presumed that corrections have been made to the threshold checks for determining injection occurrence, but I have not, yet, verified that this is the case. It is possible there are other issues that can conspire to evince the above phenomenon.
A follow-on run is currently underway that covers the period from Oct 23 2015 00:00:00 UTC to Nov 17 2015. The intent is to catch up on the analysis opportunities missed while the software was being fixed.
We experienced a power glitch at approx 16:11 PST. We are in the process of recovering.
Due to a number of bugs and other issues, analysis reports from HWInjReport had been suspended indefinitely until these issues could be resolved. I am happy to report that these issues have been resolved and have resulted in the release of HWInjReport v2.2. This version contains a number of fixes to the analysis logic to improve accuracy and reliability.
Several features were added to HWInjReport for this version. The first is that the application has been modified to use direct access to the data nodes when running on a CONDOR machine; this significantly improves runtime, by a factor of 3 or better, since it no longer needs to wait on the archive robot to load files, which was the case when running from LDAS-pcdev1, for instance. As part of this, HWInjReport now uses gw_data_find to find frame files instead of the deprecated ligo_data_find. The second feature is support for analyzing the CAL-INJ channel in RAW frame files and checking the TRANSIENT bit within that channel for hardware injections. The third feature is support for recognizing STOCHASTIC injections.
Also with this version, the output format has been compressed somewhat for better readability (the output is not as wide as it originally was), and the threshold for IFO coincidence between Hanford and Livingston has been increased from 10ms to 12ms.
With these changes, version 2.2 is the latest stable release of HWInjReport. It is expected that changes and improvements to the software will continue as necessary; however, hopefully these changes will not necessitate another cessation of analysis using HWInjReport due to validation issues.
TITLE: 11/17 [DAY Shift]: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Down
SHIFT SUMMARY: Too windy to lock. Entire day devoted to maintenance.
INCOMING OPERATOR: Ed (holding off on coming in upon instructions from Mike)
ACTIVITY LOG:
15:00 Ken installing GPS clock in control room
15:07 Bubba and Peter K. to H2 PSL enclosure to check on makeup air fan
15:53 Jeff and Jodi to H2 PSL enclosure for OMC extraction
15:59 Jim B. restarting broadcaster daqd
16:03 Jim B. done restarting broadcaster daqd
16:16 Gerardo to LVEA to join Jeff and Jodi
16:16 Filiberto pulling cable from electronics room to H1 PSL racks over HAM1 and HAM2 for EOM monitoring chassis
16:21 Joe D. to check charge in scissor lift at end Y
16:23 Keita to H1 PSL enclosure to start installation of chassis. Karen going with him to clean.
16:34 Hugh to end stations to check HEPI fluid levels
16:34 Sudarshan to MSR to check PCAL camera
16:37 Jason to H1 PSL diode room to reset PSL watchdog
16:41 Jason done resetting PSL watchdog, going to end X and then end Y to zero optical levers
16:43 Vinnie and Borja to end Y for glitch hunting
16:52 Kyle going back and forth to X28 in preparation for baking ion pump
17:08 Bubba done checking H2 PSL enclosure makeup air fan
17:16 LN2 delivery through gate (two trucks)
17:20 Joe D. back from end Y
17:21 Ryan working on alog
17:24 Sand truck through gate
17:24 Travis at end Y restarting PCAL camera
17:28 Travis done
17:28 Rick and Evan to end stations to put tamper stickers on PCAL hardware
17:37 Hugh and Travis back
17:37 Jason done at end X, going to end Y
17:39 Filiberto done pulling cable in LVEA, going to end Y to pull fiber for vacuum gauge
17:40 LN2 truck through gate
17:42 Bubba and John to end Y to check for noise from trap primers
17:54 Karen and Christina done cleaning in the LVEA, going to end stations. Christina reports a problem with the card reader between the LVEA and high bay.
17:56 Ryan done working on alog
18:17 Jodi and Jeff done. OMC extracted and craned over beamtube.
18:23 Gerardo out of LVEA
18:14 Started trying to lock ALS.
18:31 Arms would not stay locked on green. Gave up and put guardian back to down state.
18:32 Bubba and John back from end Y. Bubba getting meter left in LVEA.
18:37 Jason done zeroing optical levers at end stations, going to zero optical levers in LVEA
18:37 Offloaded SR3 M2 per "SR3 Cage Servo warning: Align SR3 to offload M2 actuators" guardian notice. Jenne put instructions in the ops "Troubleshooting the IFO" wiki.
18:42 Karen and Christina done at end Y, heading to end X
18:47 Betsy to electronics room to take picture of wifi cable for wiki
18:48 Rick and Evan done putting tamper stickers on PCAL hardware
18:56 Betsy to turn on wifi in LVEA for Jason in LVEA
19:01 Keita done installing chassis, starting rephasing
19:04 US Linen delivery, LN2 truck leaving
19:08 Jeff B. working on dust monitor plumbing near HAM4 and HAM5
19:13 Pressed 'DAQ Clear Accumulated CRC' on CDS overview
19:15 Joe D. to end Y to unplug battery charger
19:26 Travis to end X and end Y to install protection covers on PCAL hardware
19:31 Karen and Christina done cleaning, leaving end X
19:38 Restarted CDS and OPS overview on video2. Dave changed the OPS overview medm to increase the gracedb query failure notice timeout time.
19:47 Filiberto done pulling fiber for vacuum gauge at end Y, going to pull fiber for vacuum gauge at end X
19:49 Hugh to LVEA to inventory 3IFO equipment
19:50 Jeff B. done
19:56 Joe D. back
19:58 Jeff B. back to LVEA to make one connection on dust monitor plumbing
20:11 Keita done rephasing
20:11 Betsy starting charge measurements
20:15 Jason done
20:21 Hugh done
20:30 Madeline W. updating calibration filters (WP5605). Notified over teamspeak.
20:31 Travis back
20:33 Madeline W. done. "The DMT/GDS h(t) pipeline is restarted and running again at LHO"
20:55 Vinnie and Borja back
21:20 Charge measurements done
21:21 Filiberto back
21:31 Dave restarting DAQ
21:31 Keita and Kiwamu remeasuring phase
21:50 Kyle and Gerardo to X28 to drill anchor holes
22:23 Keita and Kiwamu done
22:36 Bubba to mid X to check water/fogging on surveillance camera lens
22:47 Keita to PSL enclosure to measure RF level
22:56 Vinnie and Borja back to end Y to recheck glitch hunting results
22:58 Beckhoff SDF crashed, Dave leaving off, EDCU is red
23:07 Bubba back from mid X
23:07 Kyle and Gerardo back
23:20 Keita done
00:11 Power glitch !!!
00:14 Vinnie and Borja back
00:18 Dave restarting h1lsc0
With help from Peter King and Ken from S&K Electric, we determined that the disconnect for the H2 PSL make-up air fan was turned off. I turned the disconnect on and the fan is working properly now. The fan disconnect was not labeled; however, Peter said he would print a label and install it on the disconnect.
Cleared counters in attached screenshot.
I took a few charge measurements this morning; however, a few of each set were garbage due to an attempted MICH lock. The other few look noisy due to the ongoing 30-70mph winds we've felt all morning. I'll try to catch a quieter period for more measurements this week. Plots coming...