While working on the HVAC Controls Upgrade, Apollo was restarting the air handling unit (AHU-4), which supplies air and heat to the office areas of the OSB. This apparently stirred up some dust in the ductwork, activating a smoke detector and setting off the building fire alarm. This brought an abrupt end to the weekly meeting and also served as our annual fire alarm drill. Many parts of our system have not operated correctly for several years, and this alarm is evidence that the upgrade is paying off: we will now be able to control our HVAC system the way it was originally intended to perform, possibly even better given the more advanced technology of the replacement parts. KUDOS to everyone on site for doing exactly what they were supposed to do during a fire alarm.
J. Kissel
I've completed analysis of the new 2017-01-03 reference measurements (LHO aLOG 32942), which were taken after the L2/L3 cross-over upgrade (see LHO aLOG 32933) and include the updates to the AA/AI filtering (see LHO aLOG 32907). I will post more details later. However, I have not yet pushed these results to the DARM loop model parameters or the front-end replica, nor have I generated new EPICS records to go with them. Therefore, the time-dependent correction factors, which use the updated EPICS records, will still be ~15-30% incorrect, as they have been since the L2/L3 crossover change (i.e. the entire dataset since the restart after the break), and will continue to flag the calibration as bad, which means the ANALYSIS_READY bit will still be flagged as bad for the upcoming lock stretches overnight. My hope is to get better information out within 24 hours. Apologies, and thank you for your continued patience.
~1625 - 1630 hrs. local -> Kyle in and out of LVEA. Shut down the rotating-shaft vacuum pumps which had been pumping PT180 (BSC8) for the past 8 days. Also removed the ladder which had been leaning against BSC8. We should now have enough data to compare and contrast PT180's behavior when exposed/pumped by the YBM against its behavior when exposed/pumped by a locally mounted turbo. Recall that the post-detection-era-installed Bayard-Alpert gauges PT170, PT180 and PT140 all exhibit a slow upward drift that the iLIGO-era cold cathode gauges (sampling the same vacuum volume) do not. Understanding/believing our gauges is critical. We will decouple the vacuum pumps from PT180 on the next maintenance day.
State of H1: in Observe; there is potential for some site noise sources today, listed below.
Site Activities:
Bubba raised some concerns about the EY HVAC system and after running it by Mike, drove to EY to investigate.
I have produced filters for offline calibration of Hanford data from the beginning of O2A until the end of 2016. The filters can be found in the calibration SVN at this location:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1DCS_1163173888.npz
For information on the calibration model, see:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=31693
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32329
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32907
For suggested command line options to use when calibrating this data, see:
https://wiki.ligo.org/Calibration/GDSCalibrationConfigurationsO2
The filters were produced using this Matlab script in SVN revision 4050:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1163173888.m
The parameter files used (all in revision 4050) were:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/Common/params/IFOindepParams.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/params/H1params.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/params/2016-11-12/H1params_2016-11-12.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1_TDparams_1163173888.conf
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CAL_EPICS/D20161122_H1_CAL_EPICS_VALUES.m
Several plots are attached. The first four (png files) are spectrum comparisons between CALCS, GDS, and DCS. Kappas were applied in both the GDS and DCS plots with a coherence uncertainty threshold of 0.4%. Time-domain vs. frequency-domain comparison plots of the filters are also attached. Lastly, brief time series of the kappas and coherences are attached for comparison with CALCS.
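For anyone who wants to inspect the filter file outside the pipeline, here is a minimal Python sketch; it assumes only that the .npz can be read with numpy (the array names inside are whatever the filter-generation script wrote):

    import numpy as np

    # Path from the entry above; adjust to your SVN checkout.
    filters = np.load('/ligo/svncommon/CalSVN/aligocalibration/trunk/'
                      'Runs/O2/GDSFilters/H1DCS_1163173888.npz')

    # List the arrays stored in the file (FIR coefficients, EPICS values, etc.).
    for name in filters.files:
        print(name, filters[name].shape)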
More plots from the beginning of O2 (Nov 30) to show that these filters still have the right model and EPICS.
The same set of plots one more time, this time from early ER10 (Nov 16). Note that the kappas were not applied in the GDS pipeline at that time, leading to a notable difference in the spectra.
These filters have been updated to account for corrections made to the DARM loop parameters since the AA/AI filter bug fixes. For information on the model changes, see:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33153
The updated filters were produced using all the same files (updated versions) in SVN revision #4133. The only exception is that the EPICS file and the parameters file used to produce it were:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CAL_EPICS/DCS20161112_H1_CAL_EPICS_VALUES.m
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CAL_EPICS/callineParams_20161118.m
Note from the plots the slight discrepancy between GDS and DCS, presumably due to the corrections to the model. Also note that DCS and CALCS do not agree on the kappas. This is likely not cause for concern, as the model used to compute them was different. The EPICS and Pcal correction factors were produced using the same parameter files as the filters, so they should be correct.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 28 seconds. LLCV set back to 15.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 79 seconds. TC A did not register fill. LLCV set back to 35.0% open.
State of H1: in Observe
Site Activities:
Current Weather Conditions:
model restarts logged for Tue 03/Jan/2017
2017_01_03 12:10 h1iopsusauxh34
2017_01_03 12:10 h1susauxh34
Restarted susauxh34 as part of the test point problem investigation; also restarted h1asc's awgtpman process (which fixed the problem).
Note: the DAQ has now been running for 29.9 days (h1tw0 is down due to a bad RAID; h1fw0's uptime is shorter due to previous VPW power problems).
Frigid temperatures on site today, with negative wind chills at several stations:

Station   Temp (degF)   Wind (mph)   Wind chill (degF)
EX        13.9          13.0         -0.6
MX        13.4          12.0         -0.6
CS        15.3           7.0          5.4
MY        11.7           8.0          0.1
EY        14.0          15.0         -1.5

Site-average wind chill: 0.6 degF
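For reference, these wind chill values are consistent with the standard NWS wind chill formula; a minimal sketch for reproducing them (temperatures in degF, wind in mph):

    def wind_chill(temp_f, wind_mph):
        """NWS wind chill index (valid for temp <= 50 degF, wind > 3 mph)."""
        v = wind_mph ** 0.16
        return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

    # Example: EX station, 13.9 degF with a 13 mph wind -> about -0.6 degF.
    print(round(wind_chill(13.9, 13.0), 1))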
TITLE: 01/04 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 71.8083 Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: Not much happened. The a2l DTT template shows high coherence with ETMX and ETMY in both pitch and yaw. I've been waiting for LLO to go down, so I haven't run the script. I keep getting the EY temperature alarm.
I just increased the heat by 1 mA at EY.
TITLE: 01/04 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 70.6717 Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT: Wind: 21 mph gusts, 15 mph 5-min avg; Primary useism: 0.07 μm/s; Secondary useism: 0.20 μm/s
QUICK SUMMARY: H1 had been locked and Observing when I arrived.
(08:40 - 08:46 UTC) After being paused over the holidays, the HWS code is running again. HWSY is left alone until the glitch issue seen when running both cameras is solved. I also cleared 5% of disk space by removing ETM data from November. The thinning code still has to be rewritten for the new file format (.p).
He was informing me that they were going to go to Observing. I told him we had been there for a few hours already, but he brought to my attention that GWIstat is reporting us as NOT OK. Anyone?
Apologies. We've been at NLN for about that long, but in Observing for only about 1 hour.
It seems that H1:DMT-CALIBRATED is 0 (zero), not 1. Is this related to the calibration task performed today?
Is this why GWIstat thinks that H1 is not OK?
Sent a message to Jeff Kissel, Aaron Viets and Alex Urban.
I tried a few things to see if I could figure out why the calibration flag wasn't set.
1) Restarted the redundant calibration pipeline. This probably caused some of the backup frames to be lost, but the primary and low-latency frames would not be affected. The Science_RSegs_H1 process (https://marble.ligo-wa.caltech.edu/dmt/monitor_reports/Science_RSegs_H1/Segment_List.html) is generating segments from the output of the (restarted) redundant pipeline, but it is getting the same results.
2) Checked for dataValid errors in the channels in the broadcaster frames. A dataValid error would probably cause the pipeline to flush the h(t) data. No such errors were found.
3) Checked for subnormal/NaN data in the broadcaster frames, another potential problem that might cause the pipeline to flush the data. No problems of this type were found either.
4) Checked the pipeline log file - nothing unusual.
5) Checked for frame errors or broadcaster restarts flagged by the broadcast receiver. The last restart was Dec 5!
So, I can see no reason for the h(t) pipeline not to be running smoothly.
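To illustrate the kind of check in item 3, a minimal sketch (this assumes the channel data are already in hand as a numpy array; the actual DMT tools read the broadcaster frames directly):

    import numpy as np

    def find_bad_samples(data):
        """Return indices of NaN, inf, or subnormal samples in a float array."""
        data = np.asarray(data, dtype=np.float64)
        not_finite = ~np.isfinite(data)
        # Subnormals: nonzero values smaller in magnitude than the smallest
        # normal double.
        subnormal = (data != 0) & (np.abs(data) < np.finfo(np.float64).tiny)
        return np.flatnonzero(not_finite | subnormal)

    # Example: the third sample is subnormal, the fourth is NaN -> [2 3].
    print(find_bad_samples([1.0, 0.0, 1e-320, float('nan')]))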
Alex U. on behalf of the GDS h(t) pipeline team
I've looked into why the H1:DMT-CALIBRATED flag is not being set, and TL;DR: it's because of the kappa_TST and kappa_PU factors.
Some detail: the H1:DMT-CALIBRATED flag can only be active if we are OBSERVATION_READY, h(t) is being produced, the filters have settled in, and, since we're tracking time-dependent corrections at LHO, the kappa factors (except f_CC) must each be within range: if any strays more than 10% from its nominal value, the DMT-CALIBRATED flag will fail to be set. (See the documentation for this on our wiki page: https://wiki.ligo.org/viewauth/Calibration/TDCalibReviewO1#CALIB_STATE_VECTOR_definitions_during_ER10_47O2)
I attach below a timeseries plot of the real and imaginary parts of each kappa factor. (What's actually plotted is 1 + the imaginary part, to make them fit on the same axes.) As you can see, around half an hour or so in, the kappa_TST and kappa_PU factors go off the rails, straying 20-30% outside their nominal values. (kappa_C, which is a time-dependent gain on the sensing function, and f_CC both stay within range during this time period.)
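As a minimal sketch of the range test described above (not the DMT monitor's actual implementation; the 10% tolerance and the nominal value of 1+0j follow the wiki definition linked above):

    def kappas_in_range(kappas, tol=0.10):
        """True if every (complex) kappa is within tol of its nominal 1+0j.

        kappas: dict of factor name -> complex value; f_CC is excluded by design.
        """
        return all(abs(k - 1.0) <= tol for k in kappas.values())

    # kappa_TST/kappa_PU straying 20-30% from nominal fails the test:
    print(kappas_in_range({'kappa_TST': 1.25 + 0.02j,
                           'kappa_PU': 0.72 + 0.01j,
                           'kappa_C': 1.02 + 0.00j}))  # -> False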
Earlier today, Jeff reported on some work done with the L2/L3 actuation stages (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32933) which may in principle affect kappa_TST and kappa_PU. It's possible we will need a new set of time domain filters to absorb these changes into the GDS pipeline. (I also tried a test job from the DMT machine, but the problems with kappas were still present, meaning a simple restart won't solve the problem.)
GWIstat (and also the similar display gwsnap) was reporting that H1 was down because of the h(t) production problem; it did not distinguish between that and a true down state. I have now modified GWIstat (and gwsnap) to indicate when there is no good h(t) being produced but the detector is otherwise running.
The attached pdf shows that CALCS and GDS agree on the calculation of kappa_tst. I suspect we may need to calculate new EPICS. Jeff (or perhaps Evan or Darkhan) will need to confirm this based on the recent L2/L3 crossover changes that Alex pointed out.
Here is a comparison between h(t) computed in the C00 frames (with kappas applied) and the "correct"-ish calibration with no kappas applied. The first plot shows the spectra of the two from GPS time 1167559872 to 1167559936; the red line is C00, and the blue line has no kappas applied. The second plot is the ASD ratio (C00 / no-kappas-applied) during the same time period. The cache file that has the no-kappas-applied frames can be found in two locations:
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/H1_hoft_GDS_frames.cache
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/calibration/H1/gstreamer10_test/H1_hoft_GDS_frames.cache
Also, the file
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/H1_hoft_test_1167559680-320.txt
is a text file containing only h(t) from GPS time 1167559680 to 1167560000.
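A sketch of how this kind of comparison could be reproduced with gwpy (the channel name H1:GDS-CALIB_STRAIN for the test frames is an assumption, as are the FFT parameters):

    from gwpy.timeseries import TimeSeries

    start, end = 1167559872, 1167559936

    # C00 h(t) from the standard archive, and the no-kappas test frames
    # read via the cache file listed above.
    c00 = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
    test = TimeSeries.read('H1_hoft_GDS_frames.cache',
                           'H1:GDS-CALIB_STRAIN', start=start, end=end)

    # Amplitude spectral densities and their ratio (C00 / no-kappas).
    asd_c00 = c00.asd(fftlength=16, overlap=8)
    asd_test = test.asd(fftlength=16, overlap=8)
    ratio = asd_c00 / asd_test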
Summary: Calibration measurements utilizing Pcal-to-DARM transfer functions can be impacted by non-unity gain when making corrections for the modeled frequency response of the AA filtering. At LHO, the ER10/O2 model had an analog AA transfer function with non-unity gain based on the LTI measurements from ER8 (at LLO, the LTI model for ER10/O2 was normalized to 1). The analog AA model at LHO has a gain of ~0.99. Below we detail the impact on the sensing function and actuation coefficients. In summary, the sensing function gain is ~1% larger than originally modeled, and the actuation coefficients are ~1% smaller. This would imply that the inspiral range is ~1% higher than currently predicted. This new understanding means that the analysis code needs to be re-run for the optical response parameters and actuation coefficients. The front-end calibration will need to be updated, and, finally, the GDS pipeline needs new filters generated and installed.

Details: The calibration of the Pcal channels (ex: H1:CAL-PCALX_RX_PD_DQ) determines the watts reflecting from the ETM per count of the channel at DC. Whatever the gain of the analog AA, it is already accounted for in this calibration procedure, and it is implicitly accounted for when the value of the calibration is installed in the front-end filter module. A Pcal-to-DARM transfer function was previously understood as follows (note that the PD calib., susnorm, and m/N coeff. are taken care of in the front end, and (1 + G)*(1/f^2) is taken care of in analyzing the measurements):

DARM   (IFO opt. resp.) (OMC DCPD TF) (AA(a) freq. resp.) (AA(a) gain) (AA(d) TF)
---- = ---------------------------------------------------------------------------------------------
PCAL   (PD calib.) (AA(a) freq. resp.) (AA(a) gain) (AA(d) TF) (susnorm) (m/N coeff.) (1 + G) (1/f^2)

where AA(a) is the analog AA, AA(d) is the digital AA, "freq. resp." means the normalized transfer function, and G is the open loop gain. In the above (incorrect) understanding, the AA(a) and AA(d) terms cancel. The reason this is incorrect is that the AA(a) gain has an inverse in the PD calibration factor. So the real (correct) Pcal-to-DARM transfer function is:

DARM   (IFO opt. resp.) (OMC DCPD TF) (AA(a) freq. resp.) (AA(a) gain) (AA(d) TF)
---- = --------------------------------------------------------------------------------
PCAL   (PD calib.) (AA(a) freq. resp.) (AA(d) TF) (susnorm) (m/N coeff.) (1 + G) (1/f^2)

Thus, to isolate the IFO optical response, we need to divide out the modeled AA(a) gain. Since the gain is ~0.99, the gain of the optical response should go up by ~1%. The above equations are laid out in a graphical subway-map schematic in G1501518-v14. The actuation coefficients will also be impacted by this, although the coefficients will be multiplied by the AA(a) gain so that the overall DARM OLG remains unchanged.

I have pushed the changes to the DARM model code and scripts that account for this. Specifically:
computeSensing.m (r4025)
create_partial_td_filters.m (r4026)
create_full_td_filters.m (r4027)
fitDataToC_20161116.m (r4028)
Re-running the analysis of the optical response parameters requires re-running the fitDataToC_20161116.m script first (with printing data to file), then running the fitCTF_mcmc.m script. Re-running the analysis of the actuator coefficients requires re-running actuatorCoefficients_Npct.m (with printing data to file), then re-running fitActCoefs_Npct.m. Hopefully this takes care of everything.
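As a sanity check on the size of the effect, a minimal numerical sketch (the ~0.99 gain is the approximate value quoted above):

    aa_gain = 0.99  # approximate modeled analog AA gain at LHO (see above)

    # Dividing the AA(a) gain out of the sensing path raises the optical gain:
    sensing_correction = 1.0 / aa_gain   # ~1.0101, i.e. ~1% larger

    # The actuation coefficients get multiplied by the gain instead, so the
    # modeled DARM open loop gain (sensing x actuation) is unchanged:
    actuation_correction = aa_gain       # ~1% smaller
    print(sensing_correction * actuation_correction)  # -> 1.0 (to rounding)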
Unfortunately, I cannot verify these changes with Matlab because I have no way of running Matlab offsite (need a network license). :(
For reference, the full paths of these files are:
{CALSVN}/trunk/Runs/O2/DARMmodel/src/computeSensing.m
{CALSVN}/trunk/Runs/O2/TDfilter/create_partial_td_filters.m
{CALSVN}/trunk/Runs/O2/TDfilter/create_full_td_filters.m
{CALSVN}/trunk/Runs/ER10/H1/Scripts/PCAL/fitDataToC_20161116.m
{CALSVN}/trunk/Runs/ER10/H1/Scripts/PCAL/fitCTF_mcmc.m
{CALSVN}/trunk/Runs/ER10/H1/Scripts/FullIFOActuatorTFs/actuatorCoefficients_Npct.m
{CALSVN}/trunk/Runs/ER10/H1/Scripts/FullIFOActuatorTFs/fitActCoefs_Npct.m
Summary: shutting down the main HVAC system increased the range by 2-3 Mpc. The shutdown produced a large change in DARM between 8 and 10 Hz, and a ~3% change between 48 and 130 Hz. The feature in the 8-10 Hz band may be due to turbulence in the Y-end chilled water flow and would probably be reduced by lowering the frequency of the VFD at Y-end. It is not yet known whether this would also reduce the noise in the 48-130 Hz band.
On Thursday, Dec. 22, before we shut down, I cycled the main HVAC multiple times in order to see if it was costing us range. In the blue "off" periods in the figures, the chiller-pad chillers and water pumps were off, the turbines were off at EX, EY, and CS, and the OSB fan was off. Figure 1 shows that the range improved by 2 to 3 Mpc during the "off" periods (we were averaging about 70 Mpc).
Figure 2 shows that the DARM spectrum improved in the 6-18 Hz region: by a large factor between 8 and 10 Hz, and by roughly 3% in the 48-130 Hz region. It is not clear whether the noise produced by the HVAC in the 48-130 Hz band is produced by vibration at lower frequency or by direct coupling. The low coherence with vibration sensors (< 1% in most of the 48-130 Hz band), and the observation that there is little change in peaks in that band that are known to be driven by vibration, suggest that the increased noise in this band is produced by vibration at lower frequencies and not by linear coupling. However, a preliminary look at PEM injections suggests that it is not impossible that the noise in the 48-130 Hz band is produced by direct vibration coupling. We will investigate this with further analysis of data from PEM injections.
The biggest difference between "on" and "off" was in the 8-10 Hz band (Figure 2). Figure 3 shows that Y-motion of ETMY ST2 is quite coherent with DARM, while X-motion at ETMX is not coherent with DARM. The CS is also not as coherent at these frequencies. The chilled water flow at EY has previously been shown to produce vibration in this band through turbulence (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=11466), so reducing the setting of the variable frequency drive below the current 50 Hz level (I recommend 35-45 Hz in the linked log) might reduce the 8-10 Hz region of DARM. Further analysis of PEM injections, plus additional injections at 9 Hz at EY, would help us determine whether the 8-10 Hz drive at Y-end is also responsible for the noise in the 48-130 Hz band.
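For reference, a minimal sketch of the magnitude-squared coherence estimate behind plots like Figure 3, using placeholder data (the actual analysis used DTT on DARM and the GS13 channels):

    import numpy as np
    from scipy.signal import coherence

    fs = 256.0  # Hz; enough to cover the 8-10 Hz band of interest

    # Placeholder arrays standing in for DARM and the ETMY ST2 Y GS13 signal.
    rng = np.random.default_rng(0)
    darm = rng.standard_normal(int(600 * fs))
    gs13 = darm + 2.0 * rng.standard_normal(darm.size)  # partially correlated

    # Magnitude-squared coherence at ~0.25 Hz resolution.
    f, coh = coherence(darm, gs13, fs=fs, nperseg=int(4 * fs))
    band = (f >= 8) & (f <= 10)
    print(f[band], coh[band])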
Shutdown times:
UTC Dec. 22
21:24:00 shutdown starts, 21:26:00 complete
21:34:00 startup starts, 21:36:40 complete
21:44:00 shutdown starts, 21:45:40 complete
21:54:00 startup starts, 21:55:40 complete
22:04:00 shutdown starts, 22:05:40 complete
22:14:00 startup starts, 22:15:40 complete
Given the large coherence between the EY GS13 and DARM in the 8-10 Hz band, could this be used to subtract out the noise in the affected band, perhaps in the front end or in the calibration pipeline? Or would it be easier to eliminate the noise at its source?
I was wondering if the coherence between DARM and the GS13s could somehow be caused by our feedback of DARM to the ETMY suspension, but I think that it is not, and that the coherence is probably due to noise from EY coupling to DARM, as Robert suggests.
I found a 4-hour period starting at 11:51 UTC on Dec 1 when we had the interferometer locked on ETMX. The coherence between DARM and the GS13s is quite similar when we are locked on ETMX. I wasn't able to plot data from NDS2 and NDS1 on the same DTT template, but the noise was higher in the Dec 1 lock, which explains the somewhat lower coherence. In the attached screenshot, both plots are made with 30 averages, despite the DTT display saying the one taken today has only 2 averages.
The point is that the coherence is higher with the ETMY GS13s than with the ETMX ones, no matter which suspension we are feeding back to.
Evan G., Aaron V.
I have checked a new filters file into the calibration SVN for offline (C01) calibration. The filters were made using SVN revision #3987 with the script:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1165799036.m
The file can be found here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1DCS_1165799036.npz
In order to achieve agreement with the GDS calibration, it was necessary to add a delay of 2 16-kHz clock cycles to the actuation filters. Evan G. is reviewing the DARM model code to determine where this timing discrepancy originated. The attached plots include:
1) h(t) spectrum from CALCS and DCS
2) ASD ratio (DCS / CALCS)
3-5) plots of filters and errors
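For scale, a sketch of what a 2-clock-cycle delay amounts to in phase, assuming the usual aLIGO front-end rate of 16384 Hz:

    fs = 16384.0            # front-end sample rate, Hz
    delay = 2.0 / fs        # two clock cycles, ~122 microseconds

    # Phase lag of a pure delay at a few reference frequencies.
    for f in (30.0, 100.0, 1000.0):
        print(f, 360.0 * f * delay)  # degrees: ~1.3, ~4.4, ~43.9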
These filters have been removed from the SVN. They were replaced by those discussed in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32965. For information on why this was necessary, see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32907.
The pressure reading from PT180 is once again rising after being valved into the main volume. We suspect the water load is causing this. The end stations show no such sign, but they also get ~2.5x more pumping speed from their cryopumps due to their smaller volume.
The PT140 pressure reading on the diagonal volume (no cryopump) started dropping this week after a three-month climb. These changes are due to LVEA temperature fluctuations.
Correction: the water pumping speed is more like 1.8x greater at the end stations.
Corner volume = 445,000 liters
End volume = 88,000 liters
Corner cryopumping = (100,000 - 3,000) x 2 L/s
End cryopumping = 100,000 - 30,000 L/s
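A minimal sketch of the arithmetic behind the 1.8x correction (pumping speed per unit volume, using the numbers above):

    corner_volume = 445_000   # liters
    end_volume = 88_000       # liters
    corner_speed = (100_000 - 3_000) * 2   # L/s, both corner cryopumps
    end_speed = 100_000 - 30_000           # L/s

    # Effective pumping speed per liter of volume, end station vs. corner.
    ratio = (end_speed / end_volume) / (corner_speed / corner_volume)
    print(round(ratio, 2))  # -> ~1.82, i.e. the "more like 1.8x" correction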