In the attached, the left panel is the trend of 45 MHz signals after the driver swap in the PSL room. Signals named MOD_RF45 are from the PSL room; the MOD_9MHz signals are actually from the old unit now installed in the CER (and it's not 9 MHz, it receives a 45 MHz signal from the 45 MHz distribution amp).
Anyway, the new driver remained very glitchy for 6 or 7 hours, but we don't see any correlation with the CER unit. It then became quiet for the rest of the night except for three easily visible glitches, the largest of which was about 0.06 counts pk-pk in the control signal.
But the old driver had good and bad periods. For example, the day before (middle panel) it was mostly quiet, but three days ago (right panel) it was nasty, and the largest glitch there was about 0.14 counts pk-pk.
Two things we learned so far are:
We made a test with the CER unit by changing its output termination. Nominally it runs into 50 Ohms. In the attached plot, just before 70 s, the terminator was removed briefly and put back again. This resulted in an up-down glitch. This was repeated around 80 s. Between 200 s and 210 s the terminator was removed and replaced by a cable with clips attached to the end. The clips were then shorted repeatedly, resulting in pairs of down-up glitches. Looking at alog 21789 and its second attachment, we can see two up-down glitches, albeit at a much smaller scale.
Keita went out during a lockloss and started tapping at different points in the RF distribution chain of the 45.5MHz RF signal.
No effect:
Effect similar to what we see:
Effect much larger than what we see:
Only the removal of the terminator was seen in both units. The other glitches were seen only by the unit fed by the tapped cable or connector.
The most sensitive point was tapping of the elbow, indicating a possible connector, cable, or adapter problem nearby. We should probably redo the extension cable that was inserted to account for the phase delay.
Title: 09/30/2015, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: At 15:00 (08:00) Locked at NOMINAL_LOW_NOISE, 22.4 W, 71 Mpc
Outgoing Operator: TJ
Quick Summary: Wind is calm; no seismic activity. All appears normal.
Title: 9/30 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
State of H1: Observation Mode at 75Mpc for the last 6hrs
Shift Summary: I had one lockloss, but it came back up with relative ease. The RF Noise wasn't bothering me like in my previous shifts.
Incoming Operator: Jeff B
Activity Log:
Had one lockloss at 7:43 UTC, but brought it back up and into observing at 8:37. I'm still not sure what caused the lockloss.
Aside from that it is a quiet environment and everything seems to be humming along.
There was a GraceDB query failure with the last query at 11:48 UTC. I followed the instructions on this wiki and it started up just fine.
Back to Observing
Lockloss at 07:43 UTC.
No idea what may have caused it yet. There was an ITMX saturation, control loops looked normal, no seismic activity, all the monitors showed normal operation.
Title: 9/29 Eve Shift 23:00-7:00 UTC (16:00-24:00 PST). All times in UTC.
State of H1: Observing
Shift Summary: One lockloss due to an ITMy saturation. One lockloss due to measurements being made while LLO was down; this resulted in a net of 16 minutes of lost coincident observing time. Observing for all but ~1 hour of my shift. RF45 has been stable the entire shift. Wind and seismic quiet.
Incoming operator: TJ
Activity log:
23:25 Lockloss, ITMy saturation
23:26 Kyle and Gerardo back from EY
2:04 Out of observing while LLO is down so Sheila can make measurements
2:32 Lockloss due to measurements
3:00 Observing Mode
When DTT gets data from NDS2, it apparently gets the wrong sample rate if the sample rate has changed. The plot shows the result: notice that the 60 Hz magnetic peak appears at 30 Hz in the NDS2 data displayed with DTT. This is because the sample rate was changed from 4 kHz to 8 kHz last February. Keita pointed out discrepancies between his periscope data and Peter F's. The plot shows that the periscope signal, whose rate was also changed, has the same problem, which may explain the discrepancy if one person was looking at NDS and the other at NDS2. The plot shows data from the CIT NDS2. Anamaria tried this comparison for the LLO data and the LLO NDS2 and found the same type of problem, but the LHO NDS2 just crashes with a "Test timed-out" message.
Robert, Anamaria, Dave, Jonathan
It can be a factor of 8 (or 2 or 4 or 16) using DTT with NDS2 (Robert, Keita)
In the attached, the top panel shows the LLO PEM channel pulled from the CIT NDS2 server, and the bottom panel shows the same channel from the LLO NDS2 server, both from exactly the same time. The LLO server result happens to be correct, but the frequency axis of the CIT result is a factor of 8 too small, while the Y axis of the CIT result is a factor of sqrt(8) too large.
Jonathan explained this to me:
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo.caltech.edu L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 2
Channel Rate chan_type
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 2048 raw real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 raw real_4
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo-la.caltech.edu L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 3
Channel Rate chan_type
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 online real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 2048 raw real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 raw real_4
As you can see, both at CIT and LLO the raw channel sampling rate was changed from 2048 Hz to 16384 Hz, and "raw" is the only channel type available at CIT. At LLO, however, there is also an "online" channel type available at 16 kHz, which is listed before "raw".
Jonathan told me that DTT probably takes the sampling rate from the first entry in the channel list, regardless of the epoch during which each sampling rate was actually used. In this case DTT takes 2048 Hz from CIT but 16384 Hz from LLO, yet obtains the 16 kHz data in both cases. If that's true, it explains the frequency scaling of 1/8 as well as the amplitude scaling of sqrt(8) in the CIT result.
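To illustrate the scaling (a standalone sketch, not DTT internals): treating 16384 Hz samples as if they were 2048 Hz moves a spectral line down by a factor of 8 and inflates the ASD by sqrt(8).

# Minimal illustration (not DTT itself): interpreting 16384 Hz data as if
# it were 2048 Hz data shifts spectral peaks down by a factor of 8 and
# inflates the ASD by sqrt(8).
import numpy as np
from scipy import signal

fs_true, fs_wrong = 16384.0, 2048.0
t = np.arange(0, 64, 1.0 / fs_true)
x = np.sin(2 * np.pi * 60.0 * t)          # a 60 Hz line, like the magnetometer peak

# PSD with the correct sample rate
f_ok, p_ok = signal.welch(x, fs=fs_true, nperseg=16384)
# PSD computed from the same samples, but assuming the wrong rate
f_bad, p_bad = signal.welch(x, fs=fs_wrong, nperseg=16384)

peak_ok = f_ok[np.argmax(p_ok)]            # ~60 Hz
peak_bad = f_bad[np.argmax(p_bad)]         # ~7.5 Hz, i.e. 8x too low
ratio = np.sqrt(p_bad.max() / p_ok.max())  # ~sqrt(8): ASD too large
print(peak_ok, peak_bad, ratio)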
FYI, for the corresponding H1 channel in CIT and LHO NDS2 server, you'll get this:
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo.caltech.edu H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 2
Channel Rate chan_type
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 8192 raw real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 raw real_4
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo-wa.caltech.edu H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 3
Channel Rate chan_type
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 online real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 8192 raw real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 raw real_4
In this case, the data from LHO happens to be good, but the CIT frequency is a factor of 2 too small and the magnitude a factor of sqrt(2) too large.
Part of the problem is that DTT does not handle the case of a channel changing sample rate over time.
DTT retrieves a channel list from NDS2 that includes all the channels with their sample rates; it takes the first entry for each channel name and ignores any following entries in the list with different sample rates. It uses that first sample rate as the sample rate for the channel at all times. So when it retrieves data it may get 8 kHz data, but it treats it as 4 kHz data and interprets it wrong.
I worked up a band-aid that inserts a layer between DTT and NDS2 and essentially makes it ignore specified channel/sample rate combinations. This has let Robert do some work. We are not sure how this scales and are investigating a fix to DTT.
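Conceptually the layer just hides the stale channel-list entries from DTT; a toy sketch of that filtering step (the names and the blacklist here are illustrative, not the actual implementation):

# Toy sketch of the band-aid idea: given the channel list returned by NDS2,
# drop the (name, rate) combinations known to be stale so that the first
# surviving entry for each name carries the correct rate.
# The blacklist below is illustrative, not the actual configuration.
STALE = {
    ("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 2048),
    ("H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 8192),
}

def filter_channel_list(entries):
    """entries: iterable of (name, rate, chan_type) tuples as listed by nds_query."""
    return [e for e in entries if (e[0], e[1]) not in STALE]

# Example with the CIT listing from above:
cit = [
    ("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 2048, "raw"),
    ("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 16384, "raw"),
]
print(filter_channel_list(cit))  # only the 16384 Hz entry survives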
As a follow-up, we have gone through two approaches to fix this:
Back to Observing Mode @ 3:03 UTC.
Since LLO went out of lock, Sheila asked if she could complete some measurements that she didn't finish during maintenance. I gave her the OK and went to commissioning mode, since we weren't losing any coincident data time.
I caused a lockloss by moving TMSX too quickly while doing this test.
I also spent some time earlier in the day (during maintenance recovery) doing some excitations on TMS and the end-station ISIs to investigate the noise that seems to come from TMSX. An alog with results will be coming soon.
I updated the GDS calibration correction filters today to reflect the bug fixes to the actuation and sensing time delays (see aLOG #22056). Attached are plots of the residual and control correction filters, which include the updated time delays. I have also attached plots comparing the h(t) spectra from the CALCS and GDS calibration pipelines and the spectrum residuals. There is now a larger discrepancy between CALCS and GDS because the time delays that were added to CALCS to bring the two closer together are no longer as accurate. Updates to the delays in CALCS may be coming as the differences are investigated further.
The new GDS calibration correction filters were generated using
create_partial_td_filters_O1.m
which is checked into the calibration SVN (r1560) under
aligocalibration/trunk/Runs/O1/Common/MatlabTools.
The filter file, H1GDS_1127593528.npz, is under aligocalibration/trunk/Runs/O1/GDSFilters.
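The file is a standard numpy .npz archive, so its contents can be inspected directly (generic inspection only; the array names inside depend on what the export script wrote):

# Generic inspection of the filter file; the array names it contains are
# whatever create_partial_td_filters_O1.m exported, so just list them.
import numpy as np

filters = np.load("H1GDS_1127593528.npz")
print(filters.files)   # names of the stored correction filter arrays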
Back to Observing @ 23:50 UTC.
Lockloss @ 23:25 UTC. ITMy saturation.
Since LLO had already gone down (we think for maintenance), TJ let me start some maintenance work that needs the full IFO locked. At about 14:32 UTC on Sept 29th we went to commissioning to start running the A2L script as described in WP #5517.
The script finished right before an EQ knocked us out of lock. Attached are the results; we can decide whether to keep these decouplings during the maintenance window.
The three changes made by the script which I would like to keep are ETMX pit, ETMY yaw, and ITMY pit. These three gains are accepted in SDF. Since we aren't going to do the other work described in the WP, this is now finished.
All the results from the script are:
ETMX pit changed from 1.263 to 1.069 (1st attachment, keep)
ETMX yaw reverted (script changed it from 0.749 to 1.1723 based on the fit shown in the second attachment)
ETMY pit reverted (script changed it from 0.26 to 0.14 based on the 3rd attachment)
ETMY yaw changed from -0.42 to -0.509, based on fit shown in 4th attachment
ITMX: no changes were made by the script (5th and 6th attachments)
ITMY pit (from 1.37 to 1.13 based on 7th attachment, keep)
ITMY yaw reverted (script changed it from -2.174 to -1.7, based on the 8th attachment, which does not seem like a good fit)
By the way, the script that I ran to find the decoupling gains is in userapps/isc/common/decoup/run_a2l_vII.sh. Perhaps next time we use this we should try a higher drive amplitude, to get better fits.
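For reference, the general idea behind such a decoupling script (a hedged sketch, not the actual contents of run_a2l_vII.sh): step the A2L gain, measure the residual angle-to-length coupling at the dither frequency for each step, and fit for the gain where the coupling crosses zero. A larger drive amplitude would improve the SNR of each coupling point and hence the fit.

# Hedged sketch of the A2L decoupling fit, not the actual script: at each
# trial gain we would measure the (signed) coupling of the angular dither
# into the length signal, then fit a line and solve for the zero crossing.
import numpy as np

def best_a2l_gain(trial_gains, measured_couplings):
    """Linear fit coupling(gain) = a*gain + b; return the gain where it crosses zero."""
    a, b = np.polyfit(trial_gains, measured_couplings, 1)
    return -b / a

# Illustrative numbers only (ETMX pit ended up near 1.07 in the real run):
gains = np.array([0.8, 1.0, 1.2, 1.4])
couplings = np.array([0.9, 0.25, -0.45, -1.1])   # hypothetical demodulated amplitudes
print(best_a2l_gain(gains, couplings))           # ~1.07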
I ran Hang's script that uses the A2L gains to determine a spot position (alog 19904), here are the values after running the script today.
     | vertical (mm) | horizontal (mm)
ITMX | -9            | 4.7
ITMY | -5.1          | -7.7
ETMX | -4.9          | 5.3
ETMY | -1.2          | -2.3
I also re-ran this script for the old gains:
     | vertical (mm) | horizontal (mm)
ITMX | -9            | 4.7
ITMY | -6.2          | -7.7
ETMX | -5.8          | 5.3
ETMY | -1.2          | -1.9
So the changes amount to +0.4 mm in the horizontal direction on ETMY, -0.9 mm in the vertical direction on ETMX, and -1.1mm in the vertical direction on ITMY.
Please be aware that in my code estimating the beam position, I neglected the L2 angle -> L3 length coupling, which would induce an error of l_ex / theta_L3, where l_ex is the length induced by the L2a -> L3l coupling when we dither L2, and theta_L3 is the angle L3 tilts through the L2a -> L3a coupling.
Sorry about that...
When you compare the "H1 SNSW EFFECTIVE RANGE (MPC) (TSeries)" data in DMT SenseMonitor_CAL_H1 with its copy in EPICS (H1:CDS-SENSEMON_CAL_SNSW_EFFECTIVE_RANGE_MPC), you will find that the EPICS data is "delayed" from the DMT data by about 109 seconds (109.375 s in this example; I don't know if it varies significantly with time).
In the attached, vertical lines are minute markers where GPS second is divisible by 60. Bottom is the DMT trend, top is its EPICS copy. In the second attachment you see that this results in the minute trend of this EPICS range data becoming a mixture of DMT trend from 1 minute and 2 minutes ago.
This is harmless most of the time, but if you want to see whether, for example, a particular glitch caused the inspiral range to drop, you need to do either mental math or real math.
(Of these 109 seconds, 60 should come from the fact that DMT takes 60 seconds of data to calculate one data point and uses the start time of this 1-minute window as the time stamp. Note that this start time is always at the minute boundary where the GPS second is divisible by 60. The remaining 49 seconds should be the sum of various latencies on the DMT end as well as in the copying mechanism.)
The 109 s delay is a little higher than expected, but not too strange. I'm not sure where DMT marks the time, i.e. at the start, middle, or end of the minute it outputs.
Start Time (s) | Max End Time (s) | Stage
0              | 60               | Data being calculated in the DMT.
60             | 90               | The DMT-to-EPICS IOC queries the DMT every 30 s.
90             | 91               | The EDCU should sample it at 16 Hz and send it to the frame writer.
The 30s sample rate of the DMT to EPICS IOC is configurable, but was chosen as a good sample rate for a data source that produces data every 60 seconds.
It should also be noted that, at least at LHO, we do not make an effort to coordinate the sampling times (which seconds within the minute) with the DMT. So the actual delay time may change if the IOC gets restarted.
EDITED TO ADD:
Also, for this channel we record the GPS time that DMT asserts is associated with each sample. That way you should be able to get the offset.
The value is available in H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC_GPS
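A minimal sketch of how one might use that companion channel to put the EPICS copy back on DMT's own time stamps (array names and the data-fetching step are placeholders, not an existing tool):

# Minimal sketch: re-timestamp the EPICS copy of the range using the
# companion _GPS channel, so each value lands on the start of the 60 s DMT
# window it actually belongs to. Fetch the two channels however you
# normally get EPICS/DAQ data; the array names here are placeholders.
import numpy as np

def realign_range(range_mpc, asserted_gps):
    """range_mpc:    samples of the ..._EFFECTIVE_RANGE_MPC channel
    asserted_gps: matching samples of the ..._RANGE_MPC_GPS channel
    Returns one (gps, range) pair per DMT minute, stamped with the time
    DMT itself asserts (the start of the 60 s window)."""
    per_minute = {}
    for t, r in zip(np.asarray(asserted_gps, int), range_mpc):
        per_minute[t] = r        # repeated 16 Hz samples of the same minute collapse
    times = np.array(sorted(per_minute))
    return times, np.array([per_minute[t] for t in times])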
Attached is the dark noise of REFL9Q, along with an estimate of the shot noise and a conversion of these noises into equivalent frequency noise in CARM.
The dark noise appears to be slightly below the shot noise level.
I took the TNC that goes directly into the common-mode board and put it into an SR785. Also attached is the noise with the input of the SR785 terminated.
I also have tried to estimate how this compares to the shot noise on the diode. In full lock at 24 W, we see 3.6 mW of dc light on the PD (according to the calibrated REFL_A_LF channel). Off resonance and at 2.0 W, we have 13.6 mW of dc light. So the CARM visibility is about 98%.
The shot noise ASD (in W/rtHz) and the CARM optical plant (in W/Hz) are both given in Sigg's frequency response document. With a modulation index of 0.22 rad and an incident power of 24 W, the shot noise is 9.4×10^-10 W/rtHz, the CARM optical gain is 11 W/Hz, and the CARM pole is 0.36 Hz. [Edit: I was missing some HAM1 attenuation when first calculating the shot noise level. Out of lock, the amount of power on REFL A should be 24 W × 0.1 × 0.5 × 0.5 × 0.5 = 300 mW. That gives a predicted shot noise level of 7.7×10^-11 W/rtHz, assuming a sideband amplitude reflectivity of 0.44. On the other hand, from the measured in-lock power we can calculate 2(hνP)^1/2 = 5.2×10^-11 W/rtHz for P = 3.6 mW. This includes the factor of sqrt(2) from the frequency folding, but does not include the slight cyclostationary enhancement in the noise from the sidebands (although this latter effect is not enough to explain the discrepancy).] Additionally, I use Kiwamu's measurement of the overall REFL9 response (4.7×10^6 ct/W) in order to get the conversion from optical RF beat-note power into demodulated voltage (2900 V/W). These numbers are enough to convert the demodulated dark noise of REFL9Q (and the shot noise) into an equivalent frequency noise. At 1 kHz, the shot noise is about 10 nHz/rtHz; as a phase noise this is 10 prad/rtHz (which is smaller than Stefan's estimate of 80 prad/rtHz). The dark noise, meanwhile, is about 5 nHz/rtHz.
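As a quick check of the in-lock number (a minimal sketch: this is just 2(hνP)^1/2 from the measured dc power, and does not include the sideband-reflectivity factor used for the out-of-lock estimate):

# Quick check of the in-lock shot noise number from the measured dc power.
# The out-of-lock 7.7e-11 W/rtHz figure additionally folds in the 0.44
# sideband amplitude reflectivity, which is not done here.
import numpy as np

h = 6.626e-34          # Planck constant [J s]
c = 2.998e8            # speed of light [m/s]
lam = 1064e-9          # laser wavelength [m]
nu = c / lam

P_inlock = 3.6e-3      # dc power on REFL A in full lock [W]
shot = 2 * np.sqrt(h * nu * P_inlock)
print(shot)            # ~5.2e-11 W/rtHz, as quoted above

# Out-of-lock power expected on REFL A from the attenuation chain:
P_refl = 24 * 0.1 * 0.5 * 0.5 * 0.5
print(P_refl)          # 0.3 W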
Hang, Evan
We measured the input-referred voltage noise of the summing node and common-mode boards.
According to this estimate, the CARM loop is not shot noise limited; rather, at 1 kHz the noise is about a factor of 3 in ASD above shot noise.
I looked back at the CARM sensing noise data I took (on 12 Aug) using the new gain distribution: 0 dB SNB gain, −13 dB CMB common gain, 0 dB CMB fast gain, and 107 ct/ct digital MCL gain.
[For comparison, the old CARM gain distribution was 0 dB SNB gain, −20 dB CMB common gain, 7 dB CMB fast gain, and 240 ct/ct digital MCL gain.]
☞ For those looking for a message in this alog: something about the current frequency noise budgeting doesn't hang together. The projection based on the CARM sensing noise and the measured CARM-to-DARM coupling TF suggests a CARM-induced DCPD sum noise which is higher than what can be supported by coherence measurements.
☞ Second attachment: As expected, the noise (referred to the input of the SNB) is lower; at 40 Hz, it is about 350 nV/Hz^1/2. However, we are not really shot-noise (or dark-noise) limited anywhere.
☞ Third attachment: I am also including the CARM-to-DARM coupling TF from a few weeks ago. This TF was taken by injecting into the CARM excitation point and measuring the response in OMC DCPD sum, using the old CARM gain distribution. Then I referred this TF to the SNB input by multiplying by the SNB gain (0 dB), the CMB common gain (−20 dB), and the CMB common boost (40 Hz pole, 4 kHz zero, ac gain of 1).
This gives a coupling which is flat at 1.0×10^-2 mA/V, transitioning to 1/f^2 around 250 Hz. Or, to say it in some more meaningful units:
☞ Synthesis of the above: based on the measurements described above, at 40 Hz we expect a coupling into the DCPD sum of 350 nV/Hz^1/2 × 0.4 mA/V = 1.4×10^-7 mA/Hz^1/2, which is a sizable fraction of the overall DCPD sum noise of 3.2×10^-7 mA/Hz^1/2.
But what is wrong with this picture? If 1.4/3.2 = 44% of the DCPD sum noise comes from CARM sensing noise, we should expect a coherence of 0.44^2 ≈ 0.19 between the DCPD sum and the CARM error point.
☞ First attachment: The coherence between the CARM error point and the DCPD sum is <0.01 around 40 Hz. Now, it is almost certainly the case that not all of the CARM error point noise is captured by LSC-REFL_SERVO_ERR, since this channel is picked off in the middle of the CMB rather than at the end. Conservatively, if we suppose that LSC-REFL_SERVO_ERR contains only dark noise and shot noise, this amounts to 180 nV/Hz^1/2 of noise at 40 Hz referred to the SNB error point, or 0.72×10^-7 mA/Hz^1/2 referred to the DCPD sum. This would imply a coherence of 0.05 or so.
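Just to make the arithmetic behind those coherence numbers explicit:

# Expected coherence is the squared fraction of the DCPD sum noise
# attributable to the CARM channel (numbers quoted above, at 40 Hz).
dcpd_total = 3.2e-7        # mA/Hz^1/2, overall DCPD sum noise
carm_full  = 1.4e-7        # mA/Hz^1/2, projected from SNB-referred sensing noise
carm_cons  = 0.72e-7       # mA/Hz^1/2, conservative shot + dark noise only

print((carm_full / dcpd_total) ** 2)   # ~0.19
print((carm_cons / dcpd_total) ** 2)   # ~0.05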
☞ What is going on here?: Four possibilities I can think of are:
☞ A word about noise budgeting: In my noise budget, there was a bug in my interpolating code for the CARM-to-DARM TF, making the projection too low below 100 Hz. With the corrected TF, the projected CARM noise is much higher and begins to explain the mystery noise from 30 to 150 Hz. However, given that the above measurements don't really hang together, this is highly speculative.
According to the CMB schematic and the vertex cable layout, the CARM error point monitor goes through some unity-gain op-amps and then directly into the ADC. So I don't think we have much chance of seeing the 180 nV/Hz^1/2 of shot/dark noise above the 4 µV/Hz^1/2 of the ADC.
According to the CMB schematic and the vertex cable layout, the CARM error point monitor goes through a gain of 200 V/V and then directly into the ADC. So the 180 nV/Hz^1/2 of shot/dark noise appears as 36 µV/Hz^1/2 at the ADC. But as Daniel pointed out, this should be heavily suppressed by the loop. For comparison, the ADC's voltage noise is 4 µV/Hz^1/2.
For the sake of curiosity, I'm attaching the latest noise budget with the corrected CARM-to-DARM coupling TF. However, I note again that this level of frequency noise coupling is not supported by the required amount of coherence in any of our digitally acquired channels. Additionally, this level of frequency noise coupling is not seen at Livingston, although they've done a better job of TCS tuning than we have. I would not be surprised to find out that this coupling is somehow an overestimate.