I noticed that the End Y temperature was not recovering after Robert's HVAC shutdowns. Since it appears the chiller was not working correctly, I have started the other one. Water temps are now falling. For the record, I turned off Chiller 2 and turned on Chiller 1. This is done indirectly by turning the chiller's water pump off or on; we do not control the chiller remotely. There may be some overshoot on the VEA temperature as it corrects, so don't be surprised if it gets cold before it comes back to normal.
Tamping injections showed some upconversion from the tens-of-Hz region into the region above 60 Hz. The HVAC makes noise in this region, so I repeated the test I had done in iLIGO: I shut down all turbines and chiller pad equipment on the entire site. This increased the range by almost 5 Mpc (see figure; the 3 range peaks are during the shutoff periods below).
Checks:
1) make sure all VFDs are running at 45 or less
2) if possible use only 2 turbines for the LVEA
We did not drop out of science mode but here are the times of the changes (Oct. 15 UTC):
2:05:00 shutdown started, 2:08:00 shutdown complete
2:18:00 startup started, 2:21:30 startup complete
2:31:00 shutdown started, 2:37:00 shutdown complete
2:47:00 startup started, 2:51:00 startup completed
3:01:00 shutdown started, 3:03:30 shutdown complete
3:13:30 startup started, 3:17:00 startup complete
Here is a comparison of the calibrated DARM spectrum from times when the HVAC was ON and OFF, in the frequency band that was affected.
I plotted glitchgrams and trigger rates during this time. They don't seem to show a noticeable change.
https://ldas-jobs.ligo-wa.caltech.edu/~jordan.palamos/detchar/HVAC/glitchgram_HVAC_1128909617.png
https://ldas-jobs.ligo-wa.caltech.edu/~jordan.palamos/detchar/HVAC/rate_HVAC_1128909617.png
Attached are ASDs of DARM and one of the PEM seismometer channels (corner station Z axis) for all of the times when the HVAC was turned on and off (not including the times of transition). In general, the noise level between 40-100 Hz is lower during the times when the HVAC was off. The peak around 75 Hz was better during the second two off times, but not in the first off segment (1128910297 to 1128910697).
More PEM seismometer channels are here: https://ldas-jobs.ligo-wa.caltech.edu/~marissa.walker/O1/Oct15HVACtest/
(note: the seismometer calibration from pem.ligo.org is only valid from 0-20 Hz)
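For reference, here is a minimal sketch (not the script actually used) of how one could reproduce this kind of HVAC on/off ASD comparison offline with gwpy. The channel name and FFT settings are assumptions; the GPS span is the first off segment quoted above, and the same call can be repeated for an on stretch from the UTC list and overlaid:

```python
from gwpy.timeseries import TimeSeries

# Sketch only, assuming gwpy and NDS/frame access from an LDAS machine.
# Channel name and FFT length are assumptions, not the settings used above.
chan = 'H1:CAL-DELTAL_EXTERNAL_DQ'
data_off = TimeSeries.get(chan, 1128910297, 1128910697)   # first HVAC-off segment
asd_off = data_off.asd(fftlength=8, overlap=4, window='hann')

plot = asd_off.plot(label='HVAC off')
ax = plot.gca()
ax.set_xlim(20, 200)          # band of interest for the HVAC coupling
ax.set_ylabel('DARM ASD [m/Hz$^{1/2}$]')
plot.show()
```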
TITLE: 10/14 EVE Shift: 23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC
H1 has been locked for 8+ hrs and has been in Observation Mode since 22:58 UTC. useism continues to trend down.
Robert took roughly an hour to perform some PEM non-injection work with the HVAC system (basically shutting off fans and chiller yard equipment). With Landry's approval, Robert said we would stay in Observation Mode for these tests. Here are my rough notes on the times:
Darkhan, Sudarshan, GregM, RickS
The plots in the first attached multi-page .pdf file use SLMtool data (60-sec. long FFTs) taken during the month of Oct. so far.
The first page shows the time-varying calibration factors.
The next eight pages have two plots for each of the four Pcal calibration lines (36.7 Hz, 331.9 Hz, 1083.7 Hz, and 3001.3 Hz).
The first of each set shows the calibrated magnitudes and phases of the strain at each frequency (meters_peak/meter).
The second plot in each set shows ratios (mag and phase) of the three methods (Cal/Pcal, GDS/Pcal, and Cal/GDS). The center panels (GDS/Pcal) are most relevant because we expect discrepancies arising from the CAL_DeltaL_External calculations not including all the necessary corrections.
The plots in the second multi-page .pdf file show the GDS/Pcal ratios at the four Pcal line frequencies over the time period from Oct. 7 to Oct. 11, with histograms and calculated means and standard errors (estimated as standard deviation of the data divided by the square root of the number of points).
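For clarity, the quoted standard error is just the usual estimate, std/sqrt(N). A minimal sketch (the function name and input array are hypothetical):

```python
import numpy as np

# Mean of a set of GDS/Pcal ratio measurements and its standard error,
# std / sqrt(N), as described above. Names here are illustrative only.
def mean_and_stderr(ratios):
    r = np.asarray(ratios, dtype=float)
    return r.mean(), r.std(ddof=1) / np.sqrt(r.size)

# e.g. mean_and_stderr(gds_over_pcal_mag_at_331p9_Hz)
```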
Note that these time-varying factors (kappa_tst and kappa_C) have NOT been applied to the GDS calibrations yet, so we expect the GDS/Pcal comparisons to improve once they are applied.
The difference of ~9% in the 3 kHz line (mean value) probably comes from the foton IIR filtering, which is ~10% at 3 kHz, i.e., the front-end DARM is 10% higher than the actual value. Since online GDS (C00) is derived from the output of the front-end model, it would also show a similar difference. However, the offline GDS (C01 or C02) corrects for this and hence is not expected to show this difference.
Travis and Darkhan's calibration measurements made two days ago indicated that the clipping observed at Xend is getting worse.
We think this reduction is almost completely in one beam and that the reduction in Rx power is taking place on the Rx side of the ETM (more on this later).
We expect to go inside the vacuum envelope and investigate at our next opportunity. In the meantime, we are switching to using the Tx PD rather than the Rx PD for calibration.
We need to make this switch for calibrating the hardware injections too.
In the first attached plot, SLM-tool data (60-sec.-long FFTs) of the Pcal Rx and Tx PD signals are plotted, with the receiver side (Rx) data divided by the transmitter side (Tx) data.
The second plot is a comparison of the GDS calibration to the Pcal calibration at 3 kHz using the Rx PD; the last plot shows the same comparison using the Tx PD.
The mean magnitude ratio using the Rx PD is about 14%. It is 8% using the Tx PD.
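As an illustration (not the SLM tool itself), the Rx/Tx ratio at a single Pcal line could be formed from windowed FFT amplitude estimates along these lines; the function name, channel handling, and sample rate are assumptions:

```python
import numpy as np

def line_magnitude(x, fs, f_line, t_fft=60.0):
    """Amplitude of a single calibration line, estimated from non-overlapping
    t_fft-second Hann-windowed FFTs (a sketch, not the actual SLM tool)."""
    n = int(t_fft * fs)
    win = np.hanning(n)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    k = np.argmin(np.abs(freqs - f_line))          # FFT bin nearest the line
    mags = []
    for i in range(0, len(x) - n + 1, n):
        spec = np.fft.rfft(win * x[i:i + n])
        mags.append(2.0 * np.abs(spec[k]) / win.sum())   # sinusoid amplitude estimate
    return np.array(mags)

# e.g. ratio = line_magnitude(rx_pd, fs, 3001.3) / line_magnitude(tx_pd, fs, 3001.3)
```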
TITLE: 10/14 EVE Shift: 23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Taken to Observation Mode @75Mpc during Operator hand-off. Current lock going on 5hrs.
Outgoing Operator: Jim
Support: Occupied Control Room, Kiwamu is On-Call if needed
Quick Summary:
All looking quiet on the seismic front (useism low, EQ band also low, & winds are under 10mph).
Jim walked me through/reminded me how to address low (under 8%) ISS Diffracted Power by tweaking the REFSIGNAL slider.
Robert also wanted me to keep him posted on any possible FMCS alarms (since he's been making HVAC changes). He mentioned possibly doing more of this work if time allows.
Jenne also walked me through running the A2L scripts every time before going to Observation Mode per her alog (WP22524).
Operators,
As part of WP 5552, we'd like to run the A2L script several times over the next week. I have modified the script such that it will put all settings back to what they were, so the configuration of the IFO won't change (normally when we run A2L, it will change some gains that we must make a decision on whether or not to accept). I've also incorporated the "stop_osc" script that finishes clearing the SDF diffs, so this should be a one-liner no muss, no fuss operation. The script takes less than 2 minutes to run.
So, after reaching NOMINAL_LOW_NOISE but before hitting the Observe intent, please run the following:
cd /opt/rtcds/userapps/release/isc/common/scripts/decoup
./a2l_min.py
We'd like to do this at the beginning of every lock stretch for about the next week. If you're ending a commissioning / maintenance / other period and about to go to Observe but the IFO stayed locked, or are about to go into one of those periods but the IFO is still locked, that would also be a good time to run the code, so that we get even more data after the IFO has had time to settle. But please do not drop out of Observe just to run this measurement.
The code should clear all the SDF diffs, but if anything is left over, please revert everything, so that the state of the IFO isn't changing.
TITLE: 10/14 Day Shift: 15:00-23:00UTC
STATE of H1: Low-noise for 4 hours, periodic commissioning during single IFO time
Support: Usual control room population
Quick Summary:
Quiet but busy shift with lots of noise investigations; H1 was well behaved
Shift Activities:
16:04 lockloss
18:03 LLO has a bounce mode rung up, so we go to commissioning
18:05 Evan turning on BLRMS filters, Richard to EY, Robert to LVEA
19:00 Gerardo to EX, JeffB to LVEA
19:00 Robert to EX for tamping
21:10 JeffB to LVEA
21:00 Evan trying some BLRMS filters
22:30 Robert turning HVAC off, done 22:50
Now that all systems are using OBSERVE.snap for their SDF reference, I have modified the check_h1_files_svn_status script to scan for OBSERVE.snap file status as well as safe.snap. Here is the current source code SVN status:
david.barker@sysadmin0: check_h1_files_svn_status
SVN status of front end code source files...
done (list of files scanned can be found in /tmp/source_files_list.txt)
SVN status of filter module files...
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSMC1.txt
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSMC3.txt
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPRM.txt
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPR3.txt
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPR2.txt
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSR2.txt
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSRM.txt
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSR3.txt
M /opt/rtcds/userapps/release/isc/h1/filterfiles/H1OAF.txt
M /opt/rtcds/userapps/release/lsc/h1/filterfiles/H1LSC.txt
M /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASC.txt
done (list of filter module files scanned can be found in /tmp/full_path_filter_file_list.txt)
SVN status of safe.snap files...
done (list of safe.snap files scanned can be found in /tmp/safe_snap_files.txt)
SVN status of OBSERVE.snap files...
M /opt/rtcds/userapps/release/psl/h1/burtfiles/iss/h1psliss_OBSERVE.snap
M /opt/rtcds/userapps/release/isc/h1/burtfiles/h1oaf_OBSERVE.snap
M /opt/rtcds/userapps/release/lsc/h1/burtfiles/h1lsc_OBSERVE.snap
M /opt/rtcds/userapps/release/omc/h1/burtfiles/h1omc_OBSERVE.snap
M /opt/rtcds/userapps/release/asc/h1/burtfiles/h1asc_OBSERVE.snap
M /opt/rtcds/userapps/release/asc/h1/burtfiles/h1ascimc_OBSERVE.snap
done (list of observe.snap files scanned can be found in /tmp/observe_snap_files.txt)
SVN status of guardian files...
M /opt/rtcds/userapps/release/isc/h1/guardian/ISC_DRMI.py
M /opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py
M /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
done
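For reference, the core of such a scan is just `svn status` over the file lists; a rough sketch of the idea in Python (not the actual check_h1_files_svn_status script):

```python
import subprocess

# Rough sketch: report files with local modifications ('M' in `svn status`)
# from a list of paths, e.g. the /tmp/observe_snap_files.txt list produced above.
def modified_files(paths):
    out = subprocess.run(['svn', 'status'] + list(paths),
                         capture_output=True, text=True).stdout
    return [line.split()[-1] for line in out.splitlines() if line.startswith('M')]

if __name__ == '__main__':
    with open('/tmp/observe_snap_files.txt') as f:
        for path in modified_files(f.read().split()):
            print('M', path)
```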
Summary of Tuesday's maintenance work:
h1calex model change for hardware injection
Jeff, Jim, Dave: WP5553, ECR1500386
The h1calex model was modified to add CW and TINJ hardware injection filter modules. ODC channel names were also changed from EX to PINJX. Three new channels were added to the science frame (HARDWARE_OUT_DQ and BLIND_OUT_DQ at 16 kHz, ODC_CHANNEL_OUT_DQ at 256 Hz).
The DAQ was restarted. Conlog was rescanned to capture the new ODC channel names.
Guardian DIAG_EXC node was modified to permit both calex and calcs excitations while in observation mode.
MSR SATABOY firmware upgrades
Carlos: WP5544
The two Sataboy RAID arrays used by h1fw1 had their controller card firmware upgraded, as was the one Sataboy used by the DMT system. No file system downtime was incurred.
Beckhoff SDF testing
Jonathan, Dave: WP5539
Tested the Gentoo version of the SDF system on the h1build machine as user controls. For initial testing we are only connecting to h1ecatcaplc1. We discovered that this version of the SDF system set all the PLC1 setpoints each time it was restarted, so ECATC1PLC1 was reset several times between 10am and 1pm PDT Tuesday morning. This system was left running overnight for stability testing. We also discovered that some string-out records cannot be changed: if we change the string for these records (as happened when we accidentally applied the safe.snap strings on restarts), the PLC immediately (100 µs later) resets it to an internally defined string.
Long running CDS Server reboots, workstation updates
Carlos:
As part of our twice-yearly preventative maintenance, Carlos patched and rebooted some non-critical servers. CDS workstations were inventoried and updated.
Complete OBSERVE.snap install
Dave: WP5557
The models which were still running with safe.snap as their SDF reference were updated to use OBSERVE.snap. Models in the following systems were modified: IOP, ODC, SUSAUX, PEM. Their initial OBSERVE.snap files were copied from their safe.snap files.
Several systems had static OBSERVE.snap files in their target areas instead of a symbolic link to the userapps area. I copied these files over to their respective userapps areas and checked them into SVN. During Wednesday morning non-locking time, I moved the target OBSERVE.snap files into an archive subdirectory and set up the appropriate symbolic links. I relinked within the 5-second monitor period, so no front end SDF reported a "modified file".
As part of Wednesday's commissioning exercises, we looked at the coupling of input jitter into DARM.
I injected band-limited white noise into IM3 pitch (and then IM3 yaw) until I saw a rise in the noise floor of DARM.
We can use the IM4 QPD as an estimate of the amount of jitter at the interferometer's S (symmetric) port. On the AS port side, we can use the OMC QPDs as an estimate of the AS port jitter, and the DCPD sum indicates the amount of S port jitter coupling into DARM.
One thing of note is that the jitter coupling from IM3 to DARM is mostly linear, and more or less flat from 30 to 200 Hz:
The upper limit on IM3 jitter that one can place using the IM4 QPD seems to be weak. At 40 Hz, projecting the quiescent level of the IM4 yaw signal to the DCPD sum suggests a jitter noise of 2×10⁻⁷ mA/rtHz, but this is obviously not supported by the (essentially zero) coherence between IM4 yaw and DCPD sum during low-noise lock. Of course, this does not rule out a nonlinear coupling.
As for AS port jitter, the coupling is seen more strongly in OMC QPD B than OMC QPD A.
The test excitation for yaw was 6 ct/Hz^1/2 at 100 Hz.
We can propagate this to suspension angle as follows:
This gives 73 prad/Hz^1/2 of yaw excitation at 100 Hz, which implies a DCPD coupling of 550 RIN/rad at 100 Hz.
Repeating the same computation for pitch [where the excitation was about 10 ct/Hz^1/2 at 100 Hz, and the compliance at 100 Hz is 0.012 rad/(N m)] gives a pitch excitation of 140 prad/Hz^1/2, which implies a DCPD coupling of 130 RIN/rad at 100 Hz. So the IM3 yaw coupling into DARM is a factor of 4 or so higher than the IM3 pitch coupling.
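As a quick arithmetic cross-check of these numbers (a sketch only; every value below is simply copied from the text, nothing new is measured):

```python
# Consistency check of the quoted jitter-coupling numbers at 100 Hz.
yaw_exc  = 73e-12    # rad/Hz^1/2 of IM3 yaw excitation (quoted above)
yaw_coup = 550.0     # RIN/rad, quoted yaw-to-DCPD coupling
pit_exc  = 140e-12   # rad/Hz^1/2 of IM3 pitch excitation (quoted above)
pit_coup = 130.0     # RIN/rad, quoted pitch-to-DCPD coupling

print('implied DCPD RIN rise, yaw  : %.1e /Hz^1/2' % (yaw_exc * yaw_coup))
print('implied DCPD RIN rise, pitch: %.1e /Hz^1/2' % (pit_exc * pit_coup))
print('yaw/pitch coupling ratio    : %.1f' % (yaw_coup / pit_coup))  # ~4, as stated
```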
These excitations amount to >100 µV/Hz^1/2 out of the DAC. Unless the IMs' electronics chains have an outrageous amount of input-referred noise, it seems unlikely that electronics-induced IM jitter is anywhere close to the DARM noise floor. Additionally, the seismically-induced motion of IM3 must be very low: projections of the HAM2 table motion suggest an IM3 suspension point motion of 10 prad/Hz^1/2, and this motion will be filtered by the mechanical response of the suspensions before reaching the optics.
The plots attached to this elog show the trend of the LHO detector noise over the O1 time span so far. Each plot shows the band-limited RMS of the CAL-DELTAL_EXTERNAL signal in a selected frequency band. I didn't apply the dewhitening filter. The BLRMS is computed over segments of 60 seconds of data, computing the PSD with 5-s-long FFTs (Hann window) and averaging. The orange trace shows a smoothed version, averaged over one-hour segments. Only times in ANALYSIS_READY have been considered.
The scripts used to compute the BLRMS (python, look at trend_o1_fft_lho.py) are attached and uploaded to the NonNA git repository. The data is then plotted using MATLAB (script attached too).
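For anyone who wants to reproduce this quickly without the repository scripts, here is a minimal sketch of the BLRMS computation described above (assuming scipy; this is not the actual trend_o1_fft_lho.py):

```python
import numpy as np
from scipy.signal import welch

def blrms(x, fs, f_lo, f_hi, seg_len=60.0, fft_len=5.0):
    """Band-limited RMS over consecutive seg_len-second segments, with the PSD
    of each segment estimated from fft_len-second Hann-windowed FFTs (Welch
    average), as described above. Returns one BLRMS value per segment."""
    n_seg = int(seg_len * fs)
    out = []
    for i in range(0, len(x) - n_seg + 1, n_seg):
        f, pxx = welch(x[i:i + n_seg], fs=fs,
                       nperseg=int(fft_len * fs), window='hann')
        band = (f >= f_lo) & (f < f_hi)
        out.append(np.sqrt(np.trapz(pxx[band], f[band])))  # integrate PSD, take sqrt
    return np.array(out)

# e.g. blrms(deltal_external, fs=16384, f_lo=30, f_hi=40) for the 30-40 Hz band
```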
I think there's a lot of interesting information in those plots. My plan would be to try to correlate the noise variations with IFO and environmental channels. Any suggestion from on-site commissioners is welcome!
Here are my first comments on the trends:
So what happened on September 23rd around the end of the day, UTC time?
S. Dwyer, J. Kissel, G. Vajente
Gabriele notes a drop in BLRMS that is most (only) visible in the 30-40 [Hz] band, and asks what happened around Sept 23rd. Sheila (re)identified that I had made a change to the calibration around that day that only affects DELTAL_EXTERNAL, see LHO aLOG 21788. This change, increasing the delay between the calibrated actuation and sensing chains just before they're added, would indeed affect only the region around the DARM UGF (~40 [Hz]), where the calibrated actuation and sensing functions are both valid and roughly equal in their contribution to the DELTAL_EXTERNAL signal. Indeed, the (6.52 - 6)/6 = 0.087 ≈ 8% drop is consistent with the 8% amplitude change that Peter describes in his motivation for me to change the relative delay, see LHO aLOG 21746. Gabriele has independently confirmed that he's using DELTAL_EXTERNAL (and not GDS-CALIB_STRAIN, which would *not* have seen this change) and that the change happened on Sept 22 2015, between 15:00 and 20:00 UTC (*not* Sept 23rd UTC as he mentioned above). I've confirmed that the EPICS record was changed right smack in between there, on Sept 22 at 17:10 UTC. Good catch, Sheila!
The first attached plot (H1L1DARMresidual.pdf) shows the residual DARM spectrum for H1 and L1, from a recent coincident lock stretch (9-10-2015, starting 16:15:00 UTC). I used the CAL-DELTAL_RESIDUAL channels, and undid the digital whitening to get the channels calibrated in meters at all frequencies. The residual and external DARM rms values are:
| | residual DARM | external DARM |
|---|---|---|
| H1 | 6 × 10⁻¹⁴ m | 0.62 micron |
| L1 | 1 × 10⁻¹⁴ m | 0.16 micron |
The 'external DARM' is the open loop DARM level (or DARM correction signal), integrated down to 0.05 Hz. The second attached plot (H1L1extDARMcomparison.pdf) shows the external DARM spectra; the higher rms for H1 is mainly due to a higher microseism.
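For reference, the quoted rms values come from integrating the calibrated spectrum over frequency; a minimal sketch of the "integrated down to 0.05 Hz" number, assuming you already have a one-sided ASD in meters (function name is illustrative):

```python
import numpy as np

# RMS from a one-sided ASD, integrated from f_min upward, as in the
# "integrated down to 0.05 Hz" numbers quoted above.
def rms_above(freqs, asd, f_min=0.05):
    mask = freqs >= f_min
    return np.sqrt(np.trapz(asd[mask] ** 2, freqs[mask]))
```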
Some things to note:
The 3rd attached plot (H1L1DARMcomparison.pdf) shows the two calibrated DARM spectra (external/open loop) in the band from 20-100 Hz. This plot shows that H1 and L1 are very similar in this band where the noise is unexplained. One suspect for the unexplained noise could be some non-linearity or upconversion in the photodetection. However, since the residual rms fluctuations are 6x higher on H1 than L1, and yet their noise spectra are almost identical in the 20-100 Hz band, this seems to be ruled out, or at least not supported, by this look at the data. More direct tests could (and should) be done, e.g. by changing the DARM DC offset, or intentionally increasing the residual DARM to see if there is an effect in the excess noise band.
We briefly tried increasing the DCPD rms by decreasing the DARM gain by 6 dB below a few hertz (more specifically, with a zero at 2.5 Hz, a pole at 5 Hz, and an AC gain of 1; it's FM5 in LSC-OMC_DC). This increased the DCPD rms by slightly less than a factor of 2. There's no clear effect on the excess noise, but it could be that we have to be more aggressive in increasing the rms.
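For concreteness, here is a sketch (an assumed analytic form, not the foton design itself) of the filter shape described above, showing that a zero at 2.5 Hz, a pole at 5 Hz, and unity AC gain give about -6 dB below a few hertz:

```python
import numpy as np
from scipy import signal

# H(s) = (s + w_z) / (s + w_p) tends to 1 at high frequency and to
# w_z / w_p = 2.5/5 = 0.5 (about -6 dB) at DC, matching the description above.
wz, wp = 2 * np.pi * 2.5, 2 * np.pi * 5.0
sys = signal.TransferFunction([1.0, wz], [1.0, wp])

freqs = np.logspace(-1, 2, 500)                 # 0.1 Hz to 100 Hz
_, mag_db, _ = signal.bode(sys, 2 * np.pi * freqs)
print('gain at 0.1 Hz: %.1f dB' % mag_db[0])    # ~ -6 dB
print('gain at 100 Hz: %.2f dB' % mag_db[-1])   # ~ 0 dB
```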
Interesting, but do I interpret it right that you (in the experiment reported in the comment) assume that the DARM error point represents the true DARM offset/position? I thought that, at least at L1, when DARM is locked on the heterodyne signal and the OMC is locked onto the carrier (with the usual DC offset in DARM), the power in transmission of the OMC fluctuates by several tens of percent. Assuming that the TEM00 carrier coupling to the OMC would be no different when DARM is locked on the OMC transmitted power, the 'true' DARM would fluctuate this much, impressing this fluctuation onto DARM. This fluctuation should then show up in the heterodyne signal. So in this case increasing the DARM gain to reduce the rms would probably not do anything. Or?
Just as Jim had almost relocked the IFO, we had an EPICS freeze in the Guardian state RESONANCE. ISC_LOCK had an EPICS connection error.
What is the right thing for the operator to do in this situation?
Are these epics freezes becoming more frequent again?
screenshot attached.
EPICS freezes never fully went away and are normally only a few seconds in duration. This morning's SUS ETMX event lasted 22 seconds, which exceeded Guardian's timeout period. To get the outage duration, I second-trended H1:IOP-SUS_EX_ADC_DT_OUTMON. Outages are on a per-computer basis, not a per-model basis, so I have put the IOP duotone output EPICS channels into the frame as EDCU channels (accessed via Channel Access over the network). When these channels are unavailable, the DAQ sets them to zero.
For this event the timeline is (all times UTC):
| Time | Event |
|---|---|
| 16:17:22 | DAQ shows EPICS has frozen on SUS EX |
| 16:17:27 | Guardian attempts connection |
| 16:17:29 | Guardian reports error, is retrying |
| 16:17:43 | Guardian times out |
| 16:17:45 | DAQ shows channel is active again |
The investigation of this problem is ongoing; we could bump up the priority if it becomes a serious IFO operations issue.
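For reference, one way to reproduce the outage-duration check is to second-trend the duotone monitor and count the samples the DAQ wrote as zero. A rough sketch, assuming the nds2 Python client; the GPS window below is a placeholder, not the actual event time:

```python
import nds2

# Sketch: count zeroed second-trend samples of the IOP duotone monitor around
# the event. GPS window is a placeholder; host/port are the usual LHO NDS2 values.
gps_start, gps_stop = 1128910000, 1128910120   # placeholder window
conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
bufs = conn.fetch(gps_start, gps_stop,
                  ['H1:IOP-SUS_EX_ADC_DT_OUTMON.mean,s-trend'])
trend = bufs[0].data
print('outage duration ~ %d s' % int((trend == 0).sum()))
```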
To be clear, it sounds like there was a lockloss during acquisition that was caused by some kind of EPICS drop out. I see how a lockloss could occur during the NOMINAL lock state just from an EPICS drop out. guardian nodes might go into error, but that shouldn't actually affect the fast IFO controls at all.
Sorry, I meant that I can not see how a guardian EPICS dropout could cause a lock loss during the nominal lock state.
Jordan, Sheila
In the summary pages, we can see that something non-stationary appeared in DARM from about 80-250 Hz during the long lock that spanned Oct 11th to 12th, and it has stayed around. Links to the spectra from the 11th and the 12th.
HVETO also came up with a lot of glitches in this frequency span starting on the 12th (here), which were not around before. These glitches are vetoed by things that all seem like they could be related to corner station ground motion: REFL, IMC and AS WFS, all kinds of corner station seismic sensors, PEM accelerometers, and MC suspensions.
Although this noise seems to have appeared during a time when the microseism was high for us, I think it is not directly related. (The high microseism started approximately on the 9th, 2 days before this noise appeared, and things are quieting down now, but we still have the non-stationary noise, sometimes up to 200 Hz.)
The blend switching could also seem like a culprit, but the blends were not switched at the beginning of the lock in which this noise appeared, and we have been back on the normal (90 mHz) blends today and still have this noise. We've seen scattering from the OMC with velocities this high before (17264 and 19195).
Nutsinee and Robert have found that some of the glitches we are having today are due to RF45, but this doesn't seem to be the case on the 12th.
Robert, Sheila, Nutsinee
The first plot attached is a timeseries of DARM vs. RF45 modulation over the 10/11 - 10/12 lock stretch (~25 hours). The second plot shows the same channels during the beginning of the 10/13 lock stretch (30 minutes). You can see RF45 started to act up on 10/13. I've also attached the BLRMS plot of DARM using the bandpass filter Robert used to find the OMC scattering. The non-stationary noise we see is likely caused by two different sources.
RF45 started to glitch Monday afternoon (16:04 PDT, 23:04 UTC). According to TJ's log no one was in the LVEA that day. The glitches stopped around 03:22 UTC (20:22 PDT)
Here is one example of one of these glitches related to ground motion in the corner station, from the high microseism we had over the weekend (but not during the entire time that we had high microseism). This is from Oct 12th at 14:26 UTC. Even though these have gone away, we are motivated to look into them because, as Jordan and Gabriele have both confirmed recently, the noise in the unexplained part of the spectrum (50-100 Hz) is non-stationary even with the beam diverter closed. If the elevated ground motion over the weekend made this visible in DARM up to 250 Hz, it is possible that with more normal ground motion this is lurking near our sensitivity from 50-100 Hz.
If you believe these are scattering shelves, see Josh's alogs about similar problems, especially 19195 and recently 22405.
One more thing to notice is that, at least in this example, the upconversion is most visible when the derivative of DARM (loop corrected) is large. This could just be because that is also when the derivative of the ground motion is large.
Nairwita, Nutsinee
Nairwita pointed out to me that the non-stationary glitches we're looking at were vetoed nicely by HPI HAM2 L4C on October 12th, so I took a closer look. The first plot attached is an hour-long trend of DARM and HPI HAM2 L4C. But if I zoom in on one of the glitches, it seems to me that there's a delay in response between HPI and DARM of up to ~10 seconds, just from eye-balling it (second plot). I've also attached the spectrogram during that hour from the summary page.
For reference here is a plot showing the temperature excursion which is believed to have caused a lockloss. 7 days shown.
BLUE is the YEND temperature, RED is XEND, and BLACK is the LVEA.
YEND experienced a swing of roughly +/- 1 degree F.