Pcal Team,
We calibrated the Photon Calibrator readback photodiodes at EndX, and the new displacement calibration factors for TxPD and RxPD, compared against the last calibration, are reported below. The differences in the calibration factors are within our reported uncertainty. These numbers have also been uploaded to DCC document T1500252. A detailed report of the calibration factors, optical efficiency, and other intermediate calculations is attached to this alog.
Photodiode | 2015/08/04 | 2015/05/20 |
---|---|---|
TxPD (m/V) | 8.428E-13/f^2 +/- 0.59% | 8.464E-13/f^2 +/- 0.59% |
RxPD (m/V) | 6.730E-13/f^2 +/- 0.68% | 6.722E-13/f^2 +/- 0.68% |
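The 1/f^2 scaling means the meters-per-volt factor must be evaluated at the frequency of interest. A minimal sketch of applying the new TxPD factor (the line frequency and voltage amplitude below are illustrative, not from this entry):

```python
# Sketch: convert a TxPD readback voltage amplitude at a given line
# frequency into an equivalent test-mass displacement using the
# 2015/08/04 TxPD factor from the table above (m/V, scaling as 1/f^2).
TXPD_M_PER_V = 8.428e-13  # m/V at 1 Hz; divide by f^2 at other frequencies

def pd_volts_to_meters(v_amp, f_hz):
    """Displacement amplitude (m) for PD voltage amplitude v_amp (V) at f_hz (Hz)."""
    return TXPD_M_PER_V * v_amp / f_hz**2

# Illustrative example: a 0.1 V line at 331.9 Hz
x = pd_volts_to_meters(0.1, 331.9)
```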
We had a beam-clipping issue at EndX (alog #19899) and fixed it last week by realigning the beam using the steering mirror on the transmitter module (alog #20054). We checked that this alignment was still good by measuring the power on the transmitter side and the receiver side; the alignment looks good. In addition, we measured the output power of the laser before the AOM. The measured value (1.6 W) was consistent with what we measured last time, on 04/10/2015.
The Pcal AA chassis had 3 channels that were either railing or dead; this issue has been reported in a separate alog (LHO alog #20259) and a fix is underway.
We installed a DB9-to-BNC chassis a few weeks ago to route the working-standard signal (for calibration) and the AOM drive signal (for monitoring) to the AA chassis. These chassis were not tested after installation, so we used a calibrated voltage source to test them before proceeding with the calibration. 1 V from the calibrated source gave us ~1637 cts on our readback channel.
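The ~1637 cts/V figure is close to what one would expect for a 16-bit ADC spanning +/-20 V (an assumption about this chassis's ADC, not stated in the entry):

```python
# Sketch: expected counts-per-volt for a 16-bit ADC over a +/-20 V
# input range (assumed hardware parameters; check the actual chassis).
adc_bits = 16
full_scale_volts = 40.0  # -20 V to +20 V
cts_per_volt = 2**adc_bits / full_scale_volts  # 1638.4 cts/V, near the measured ~1637
```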
As part of the calibration we also measured the transfer function of the Optical Follower Servo (OFS); a plot is attached. The OFS has a unity-gain frequency of 100 kHz with a phase margin of about 62 degrees.
Hannah, Stefan, Daniel
It was determined that the broadband noise we've been seeing has been caused by the Harmonic Generator (SN 1043-03, DCC S1000798). To see whether this was an issue with that unit or is normal operation, we tested the spare. Using an RF frequency of 9 kHz and a lowpass filter of 10.7 kHz, we tested all seven outputs and measured the following noise:
Output | Voltage (V) | Noise (nV/√Hz) | Noise (1/√Hz) |
---|---|---|---|
9 kHz RF signal | 0.380 | 8 | 4.2x10^-8 |
2x Output | 0.205 | 23 | 1.2x10^-7 |
3x Output | 0.177 | 105 | 5.9x10^-7 |
4x Output | 0.404 | 34 | 8.4x10^-8 |
5x Output | 0.243 | 55 | 2.2x10^-7 |
6x Output | 0.268 | 172 | 6.4x10^-7 |
10x Output | 0.337 | 252 | 7.5x10^-7 |
15x Output | 0.291 | 197 | 6.8x10^-7 |
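For the multiplied outputs, the last column is consistent with the measured noise divided by the output amplitude. A quick check for the 5x output we care about:

```python
# Sketch: recompute the relative noise (1/sqrt(Hz)) for the 5x output
# as the voltage noise divided by the output amplitude, matching the table.
noise_v_per_rthz = 55e-9   # 55 nV/sqrt(Hz)
amplitude_v = 0.243        # V
relative_noise = noise_v_per_rthz / amplitude_v  # ~2.3e-7, vs 2.2x10^-7 quoted
```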
The 5x output is the one we're interested in, and its noise seems to be about the same as what we're already seeing. The power supply was also checked to make sure there were no stray signals.
The ETMX ESD got stuck again, with both the bias and all quadrants at a constant negative value. Jeff B and I drove down to the end station to reset the driver by toggling the red button on the driver box. Jeff has a picture of this and will add a note to the operator troubleshooting wiki about this problem.
This has been happening more since we installed the low noise driver (which was probably an unnecessary change for EX where we do not plan to use it).
ALL TIMES POSTED IN UTC
15:00 IFO locked at DC_Readout
15:45 Jeff B. (H1 op) proceeded from DC-Readout. IFO broke lock. Trying to recover.
16:31 Fil and Peter taking an RF multiplier to Mid Y
16:59 Fil and Peter back from Mid Y
17:00 Fil into CER to look for a power cable for the RF multiplier. None at the MY station.
17:20 Peter into Optics Lab
17:40: Ellie headed to HWS table in LVEA
17:50 Ellie out
19:53 Tour Group into control room
20:09 Sudarshan and Fil out to EX to power cycle Pcal AA chassis. No VEA entry.
20:50 Sudarshan and Fil back from EX
21:28 Kyle, John, Robert out to Y-2-8
21:41 Jeff B and Sheila to End X
21:55 Kyle and company back from Y arm
21:55 Sheila and Jeff out of X End
Using the calibrated DRMI channels created and described by Kiwamu in entry 18742, I grabbed data from the lock of August 3, 2015, starting at 04:20:00 UTC. The attached 4 page PDF shows spectra of the open loop and residual displacement noise for SRCL, MICH and PRCL. The 4th page shows the coherence of SRCL with the other 2 degrees of freedom.
Degree of freedom | Residual rms | Shot noise level |
---|---|---|
SRCL | 8 pm | 1.3 x 10^-15 m/rtHz |
MICH | 3 pm | 1.5 x 10^-16 m/rtHz |
PRCL | 0.8 pm | 4 x 10^-17 m/rtHz |
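The residual rms values are band integrals of the amplitude spectral density, rms = sqrt(integral of ASD^2 df). As an illustration of the bookkeeping (the flat ASD and the band below are hypothetical, not the actual SRCL spectrum):

```python
import numpy as np

# Sketch: rms from an amplitude spectral density, illustrated with a
# flat ASD at the SRCL shot-noise level over a hypothetical band.
asd_level = 1.3e-15                   # m/rtHz, from the table above
f = np.linspace(10.0, 1000.0, 2000)   # Hz, illustrative band
psd = np.full_like(f, asd_level**2)   # flat PSD
rms = np.sqrt(np.trapz(psd, f))       # = asd * sqrt(bandwidth) for a flat ASD
```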
The SRCL spectrum has a curious shape: it comes down quickly with frequency up to 10 Hz, is fairly flat from 10 Hz to 50 Hz, then falls by a factor of 5 or so to the (presumed) shot-noise level, which is reached above 100 Hz. What is this noise shelf between 10 Hz and 80 Hz? That is exactly the region where the SRCL noise coupling to DARM is troublesome. Our usual approach is to send a SRCL correction path to DARM to reduce the coupling, but this spectrum shows that there should also be some gain to be had by reducing the noise shelf itself.
The last page of the PDF shows the coherence between SRCL and MICH & PRCL, and it indicates that the SRCL noise shelf could be coupling from PRCL noise -- the coherence is fairly high in this band, though not unity. This suggests that the DRMI signals could use some of the demodulator phase and input matrix tuning that Rana has recently done on L1, reported in LLO log entry 19540.
To complete this log entry, it would be useful if someone at LHO could add the open loop transfer functions for each loop (models), and other pertinent info such as DC photocurrents for these detectors and the input matrix coefficients.
The shelf appears to be gain peaking in SRCL. We have an 80 Hz LPF to get rid of SRCL control noise in the bucket, but it makes the control noise from 30 to 60 Hz a bit worse.
I had the filter off between 2015-08-23 00:36:00 Z and 00:39:00 Z. The attachment shows the error and control signals with filter off (dashed) versus filter on (solid).
Evan, is the shelf in Peter's open-loop spectrum there because the OLTF model lacks the LPF? Otherwise, we still need to investigate.
Jenne noticed in alog 20099 that PR3 jumps on lockloss when the integrators are cleared, but then comes back with a 4 min time constant to more or less the same location. Thus I implemented a split integrator for PR3 PIT:
FM1 z4min:p0 zpk([6.6315e-04],[0],6.6315e-4,"n")
FM2 :p4min zpk([],[6.6315e-4],1507.955,"n")
Instead of clearing the filter module, the down script now simply turns FM1 off and back on, producing a 4 min exponential decay. The guardian is updated to use this (set the right filters, use the new clearing procedure in the DOWN state). Additionally, I took out any clearing in LOCK_DRMI_1f. (PR3 is not used in DRMI ASC, so there should not be anything to clear anyway.)
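The pole/zero frequency 6.6315e-4 Hz is just the 4 min time constant expressed as a corner frequency, and the FM2 gain 1507.955 is 2*pi*240 s; a quick check:

```python
import math

# Sketch: the split-integrator corner frequency is 1/(2*pi*tau) for a
# 4-minute time constant, matching the zpk values quoted above.
tau_s = 4 * 60                        # 4 minutes in seconds
f_corner = 1 / (2 * math.pi * tau_s)  # ~6.6315e-4 Hz
gain = 2 * math.pi * tau_s            # ~1507.96, the FM2 gain
```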
Sheila commented out the elements which reset the WHAM6 watchdogs at specific times. This has been replaced by the model upgrades to the ISI on Tuesday.
I asked Sheila to remove the resetting the guardian was doing to give the model bleed off feature a chance to prove itself functional.
See E1500406 for specifics & details. In summary, saturations that occur in a given minute will clear in 60 minutes. If there are no saturations for an hour, the saturation counter should be back to zero. Saturations can be cleared manually with the H1:ISI-HAM6_SATCLEAR button labeled CLEAR SATURATIONS. There may be a delay on the actual clearing of the saturations after this button is pressed.
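A minimal sketch of the bleed-off behavior described above (the class name and structure are hypothetical; E1500406 documents the actual implementation): saturations recorded in a given minute drop out of the running count 60 minutes later.

```python
from collections import deque

# Hypothetical sketch of the saturation bleed-off logic: each minute's
# saturation count expires from the running total after 60 minutes.
class SaturationCounter:
    def __init__(self, window_minutes=60):
        self.window = window_minutes
        self.events = deque()  # (minute_recorded, count) pairs

    def record(self, minute, count=1):
        self.events.append((minute, count))

    def total(self, now_minute):
        # Drop entries older than the window, then sum what remains.
        while self.events and now_minute - self.events[0][0] >= self.window:
            self.events.popleft()
        return sum(c for _, c in self.events)

    def clear(self):
        # Manual clear, like the CLEAR SATURATIONS button.
        self.events.clear()
```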
Chris S. 90%, Joe D. 70%, 7/24-8/3/15. The crew cleaned the original caulking and installed metal strips on 853 meters of enclosure. We were starting to run low on both caulking and aluminum strip material, so I have ordered more of both. I have also ordered some battery-powered angle grinders with soft bristle brushes for cleaning the existing caulking, for better adhesion of the new caulking. We had been cleaning by hand, which was a slow process; hopefully this will expedite the cleaning and relieve the arms of the guys somewhat.
Scott L., Ed P., Rodney H. 8/3/15: The crew moved the lights and cleaned 71 meters of tube, ending 5.3 meters east of HSW-2-034. 8/4/15: We relocated the support vehicles and all related equipment and cleaned 74.3 meters of tube, ending at HSW-2-030. Tested clean sections; results posted in this entry.
Had the IFO at Engage_ACS while Kiwamu was running some OMC checks. The IFO lost lock, which appears to correspond to a sudden wind gust of ~25 MPH. The lockloss plot posted shows this gust.
We did some further investigation on this event using the lockloss tool (alog #20337). The first 48 channels that misbehaved before the lock loss were plotted, and a complete list of all misbehaving DQ channels is in the attached text file, ordered by time before lockloss. It seems we saw violent ground motion, and lots of coil drivers were trying hard to react to it before they finally failed.
==================================================================================================================================================
There was a mistake while I was filtering the data into different frequency bands, so the plots I posted yesterday did not make much sense. The error has been fixed and new plots are attached.
We moved the Pcal lines to the first set of oscillators to comply with the CAL-CS model.
X-end :
Oscillator 1 : 3001.3 Hz - 39322 cts amplitude
Y-end:
Oscillator 1: 36.7 Hz - 125 cts amplitude
Oscillator 2: 331.9 Hz- 2900 cts amplitude
Oscillator 3: 1083.7 Hz - 15000 cts amplitude
In order to fix some of the issues with the OFFLINE state seen recently, I had to move some things around on the graph and make a few protected states.
Previously, the OFFLINE state would misalign MC2 and turn off the inputs to the IMC servo board. Over the weekend this was found not to be enough, because drive signals were still being sent to MC2 and caused it to trip.
Now when OFFLINE is requested, it will also run through similar logic to the DOWN state along with what it did previously. This will ensure no drive signals are being sent to MC2 while it is misaligned and keep it in a safe state.
If a SEI platform is not in its nominal state, the node checker will bring IMC_LOCK into the MOVE_TO_OFFLINE state, which executes the code mentioned above and arrives at OFFLINE; but if OFFLINE is requested directly, it will go through DOWN, then MISALIGNING, and then arrive at OFFLINE (the "brute force method," as Jamie described it). This roundabout way of doing things was needed to allow the DOWN state to remain a GOTO state.
I tested this by bringing IMC_LOCK from LOCKED to OFFLINE, back to LOCKED, then I took HAM3 to DAMPED which brought IMC_LOCK to OFFLINE successfully!
Kyle, Gerardo
0935 hrs. local -> Valved-in RGA to X-end with N2 cal-gas on
0955 hrs. local -> Valved-out N2 cal-gas
1005 hrs. local -> Valved-out NEG from X-end
1015 hrs. local -> Began NEG regeneration (heating)
~1145 hrs. local -> NEG regeneration (heating) ends -> begin NEG cool down
1240 hrs. local -> Valved-in NEG to X-end
Data attached (see also LIGO-T1500408)
Leaving RGA valved-in to X-end, N2 cal-gas valved-out and filament off
Attached is a plot of the pressure inside of the NEG pump's vessel during regeneration, along with temperature.
Temperature started at 22 ºC and eventually reached 250 ºC.
Pcal Team,
During maintenance and calibration yesterday we found that the Pcal AA chassis (S1400574) at EndX has problems with channels 5-7. Channel 5 is dead, and channels 6 and 7 are railed at ~15000 cts. These channels are connected to a DB9-to-BNC chassis (D1400423) at the other end. We isolated this unit from the AA chassis to localize the problem and confirmed that it is the AA chassis.
Fil, Sudarshan
We tried power-cycling the AA chassis to see if that would solve the problem. It did not, so we replaced the broken AA chassis with a spare (S1102791) and brought the broken one back to the EE shop for troubleshooting. We will swap the original back in once it is fixed.
There has been some speculation that the huge glitches in DARM on weekends and in the middle of the night might be beam tube particulate falling through the beam. The absence of correlated events in auxiliary channels (Link) and the lack of saturations have not helped dissipate this speculation.
I think that we can test the (in my mind unlikely) hypothesis that these huge glitches are particulate glitches by comparing rate variations to what we would expect. If the glitches are produced by a constant ambient rate of particles falling through the beam, then we would not expect large gaps like the one at the beginning of the Aug. 1 run that Gabriele analyzed for the log linked above (see attached inspiral range plot). This is a fairly weak test when applied to this one day: I calculate that the distribution of glitches on Aug. 1 is only 20% likely to be consistent with a constant rate. But perhaps DetChar could strengthen this argument by looking at future variations in rates to test the hypothesis that the rate is constant. I checked that there was no cleaning or wind above 10 MPH for the Aug. 1 period.
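The constant-rate test can be framed as a Poisson-process calculation: under a constant rate r, the probability that a given interval of length t contains no glitches is exp(-r*t). A sketch with illustrative numbers (the actual rate and gap length are not quoted in this entry):

```python
import math

# Sketch: probability that a fixed interval of length t_gap is
# glitch-free under a constant-rate Poisson process.
# Illustrative numbers only, not measured values.
rate_per_hour = 2.0   # hypothetical mean glitch rate
t_gap_hours = 1.0     # hypothetical quiet gap
p_gap_empty = math.exp(-rate_per_hour * t_gap_hours)
```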
If bangs during cleaning on July 30th had freed up some particulate that then fell over the next few days, and this dominated the glitch rate, then the expected rate would not be constant but exponentially declining, starting at the last cleaning. Since the gap was at the beginning of the Aug. 1 run, this would be even less likely than 20%. Bubba keeps a record of cleaning, so we could also test for exponential declines in rates.
But for starters, maybe DetChar could check for consistency with a constant rate for those glitches that are not associated with saturations, have auxiliary channel signatures similar to known particulate glitches (e.g. Link, and more to come), and happen on days without cleaning (weekends for sure), and with wind under 10 MPH. Since particulate glitches are likely to be an ongoing concern for some, and since glitch rate statistics can be a good discriminant for particulate glitches, I think that it would be worth setting up this infrastructure for rate statistics of unidentified glitches, if it doesn't already exist.
Also good to look for potential variation in rate due to other environmental conditions in addition to wind -- temperature (absolute or derivative) would be good to test.
I can't find the posts now, but several months ago an intermittent issue with ETMX was spotted and narrowed down to the CPSs, possibly specifically the corner 2 CPSs (?). The problem then somehow "fixed" itself and was quiet for months. As of the night of the 4th, it seems to be back, intermittently (first attached image; the spectra should be pretty smooth around 1 Hz, but they are decidedly toothy). Looking at the DetChar pages, it shows up around 8 UTC and disappears sometime later. I took spectra from last night (second image) and everything was normal again.
Still don't know what this is. Anybody turn anything on Monday afternoon at EX that shouldn't be?
I had turned on the NEG Bayard Alpert gauge at end X yesterday, but I have verified at least through Beckhoff that I turned it back off.
[Sheila, Jenne]
We have had a violin mode at ~ 508.29 Hz rung up for the last several days.
Part of the problem was that the ETMY Mode7 filter was railing at its limiter. This filter bank has the new "flat phase" filter that is being tried, for damping many modes at once. Evan ramped the gain to zero for this filter bank. After ~30 minutes we didn't see any noticeable change in the height of the peak.
The only other filter bank that was enabled was the Mode5 bank, with a 506-513 band pass. We turned off the "-60deg" FM2. After an hour and a half or so, we see the height of the violin mode has been reduced by almost a factor of 10. We'll check back on it tomorrow.
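For reference, a factor-of-10 amplitude reduction over ~1.5 hours corresponds, under a simple exponential-decay assumption, to an effective time constant of about 40 minutes:

```python
import math

# Sketch: effective decay time constant implied by a factor-of-10
# amplitude drop over 90 minutes, assuming A(t) = A0 * exp(-t / tau).
t_minutes = 90.0
reduction_factor = 10.0
tau_minutes = t_minutes / math.log(reduction_factor)  # ~39 minutes
```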
Also, there is a new violin mode blrms screen, accessible from the quad overview screen.
When looking into whether the violin mode is still going down, it occurred to me to look at the output of the violin filter module (ETMY Mode7) that Evan set to zero last night. It turns out it had been railing for days. My guess is that it was turned on for testing of violin mode damping and then never added to the guardian, so it never gets turned off (violin mode damping should only be on when the IFO is locked). Oops. It's been off since last night, which is good.
Plotted are the Mode 5 output, which gets turned on and off appropriately, as well as the Mode 7 output which is just going rail-to-rail (where the rail is set by the filter bank's limiter here).
Rana pointed out to me that the PR3 and SR3 suspensions may still have some shift due to wire heating during locks (which we won't see until a lockloss, since we control the angles of mirrors during lock).
Attached are the oplev signals for PR3 and SR3 at the end of a few different lock stretches, labeled by the time of the end of the lock. The lock ending 3 Aug was 14+ hours. The lock ending 31 July was 10+ hours. The lock ending 23 July was 5+ hours. The lock ending 20 July was 6+ hours.
The PR3 shift is more significant than the SR3 shift, but that shouldn't be too surprising: there is more power in the PRC than the SRC, so there is going to be more scattered light around PR3. Also, PR3 has some ASC feedback to keep the pointing; SR3 does not have ASC feedback, but it does have a DC-coupled optical lever. SR3 usually shifts by a few tenths of a microradian, but PR3 often shifts by one or more microradians. Interestingly, the PR3 shift is larger for medium-length locks (1 or 1.5 urad) than for very long locks (0.3 urad). I'm not at all sure why this is.
This is not the end of the world for us right now, since we won't be increasing the laser power for O1, however we expect that this drift will increase as we increase the laser power, so we may need to consider adding even more baffling to the recycling cavity folding mirrors during some future vent.
Note - PR3 and SR3 have two different baffles in front of them which do different things. PR3 has a baffle which specifically shields the wires from the beam. SR3 does not have this particular baffle; however, I believe we have a spare which we could mount at some point if deemed necessary.
Attached is a picture of the PR3 "wire shielding baffle" (D1300957), showing how it shields the suspension wires at the PR3 optic stage. In fact, a picture of this baffle was taken from the control room and is in alog 8941.
The second attachment is a repost of the SR3 baffle picture from alog 16512.
From the pictures, it seems like we could get most of the rest of the baffling we need if the wire going underneath the barrel of PR3 were to be covered. Perhaps that's what accounts for the residual heating. Also, if it became a problem, perhaps we could get an SR3 baffle with a slightly smaller hole to cover its wires.