J. Kissel I'm not sure why, but the INJ EPICS "settings" channels that are used to store news, state, and time information have become re-monitored in the SDF system, which means that the CAL-CS model shows differences when an external alert arrives and/or when a standard hardware injection goes through. I had removed the monitoring of these channels before O1 started last year -- see LHO aLOG 21154 -- so I don't know why these have suddenly come back into monitoring. Merp. They're back to being ignored by the SDF system, as they should be.
Kyle, John In and out of X-end VEA. Energized RGA filament and ran RGA just long enough to confirm that the Kr cal-gas isolation valve wasn't leaking and that the 5 x 10-4 torr*L of Krypton that I had dumped into the X-end had dissipated below detectable limits -- OK, all traces of Kr are gone.
Matt, Daniel, Fil (WP 6294)
The MCL and MCF readbacks have a 10/100 Hz Sallen-Key whitening stage which amplifies the high frequency spectrum to get above ADC noise. For some time we have observed a 20-50 mHz/√Hz flat noise level in these spectra when we are locked with the IMC only. Looking with an oscilloscope we estimated about 10 mV of signal between 100 kHz and 1 MHz, before the whitening. This seems like too much for the AA board, so we included additional low pass filters in the readbacks with cut-offs around 15 kHz. A 15/150 kHz pole-zero was added to the Sallen-Key stage, and another 15 kHz pole was added to the output stage.
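As a rough sketch of the added filtering (not the exact circuit; this assumes ideal single poles/zeros, with the pole of the pole-zero pair at 15 kHz and the zero at 150 kHz), the snippet below estimates how much the new stages attenuate the 100 kHz - 1 MHz band that was seen on the scope:

```python
# Sketch of the extra low-pass filtering added to the MCL/MCF readbacks.
# Assumes ideal single poles/zeros at the quoted corner frequencies; the
# real circuit values may differ slightly.
import numpy as np
import scipy.signal as sig

f = np.logspace(3, 6.5, 500)                  # 1 kHz to ~3 MHz
w = 2 * np.pi * f

# 15 kHz pole / 150 kHz zero (added to the Sallen-Key stage) times another
# 15 kHz pole (added to the output stage), in rad/s for scipy's zpk form.
zeros = [-2 * np.pi * 150e3]
poles = [-2 * np.pi * 15e3, -2 * np.pi * 15e3]
k = np.prod(np.abs(poles)) / np.prod(np.abs(zeros))   # unity gain at DC

_, h = sig.freqs_zpk(zeros, poles, k, worN=w)
for f0 in (100e3, 1e6):
    i = np.argmin(np.abs(f - f0))
    print(f"{f0/1e3:7.0f} kHz: {20*np.log10(abs(h[i])):6.1f} dB")
# -> roughly -31 dB at 100 kHz and -56 dB at 1 MHz
```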
In detail (common mode board, IMC, s/n 1102626):
The attached spectra now show a frequency noise level which is compatible with the one observed in full lock. The coherence is also improved. The ADC noise is not too far away in regions with reduced coherence.
Here is a comparison between MCF fully locked and with the 2 W IMC-only configuration (REF traces). The changes are much smaller now, indicating that MCF sees frequency noise from the laser.
The IMC shot noise limit here should be about 1 mHz/√Hz, assuming 0.3 mW of light (mostly carrier) on the PD with the IMC locked, 5 mW of light on the PD with the IMC unlocked, and a modulation depth of 0.01 rad.
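For reference, a back-of-the-envelope check of that number (a sketch only: the IMC round-trip length and finesse below are assumed nominal values rather than measured here, and the standard PDH slope 8*sqrt(Pc*Ps)/FWHM is used):

```python
# Rough shot-noise-limited frequency noise estimate for the IMC PDH readback.
# Assumed: IMC round-trip length ~33 m and finesse ~500 (nominal values, not
# measured here); powers and modulation depth are from the log entry above.
import numpy as np

h_nu = 6.626e-34 * 2.82e14     # photon energy at 1064 nm [J]
fsr  = 3e8 / 33.0              # IMC free spectral range [Hz] (assumed)
fwhm = fsr / 500.0             # cavity linewidth [Hz] (assumed finesse ~500)

P_unlocked = 5e-3              # light on the PD, IMC unlocked [W]
P_locked   = 0.3e-3            # light on the PD, IMC locked (mostly carrier) [W]
gamma      = 0.01              # modulation depth [rad]

P_sb  = (gamma / 2)**2 * P_unlocked             # power per RF sideband [W]
slope = 8 * np.sqrt(P_unlocked * P_sb) / fwhm   # PDH error-signal slope [W/Hz]

# Shot noise of the locked reflected light, with a rough sqrt(2) for demodulation.
S_shot = np.sqrt(2 * h_nu * P_locked) * np.sqrt(2)   # [W/rtHz]

print(f"shot-noise-limited frequency noise ~ {S_shot/slope*1e3:.1f} mHz/rtHz")
# -> ~1.4 mHz/rtHz, consistent with the ~1 mHz/rtHz quoted above
```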
Nutsinee's trouble damping PI mode 27 has continued into my shift. Trouble locking PRMI after the last lock loss. Starting an initial alignment.
I have glued QTY 4 AR black glass 8.5" OD circles into the 9" ID of the pcal red anodized viewport guillotines. All 4 are with Robert now for testing.
I used the Araldite 2012 2-part epoxy which is routinely used to glue the glass fiber stock to the aluminum holder during fiber pulling production. This glue seemed a much better choice than the EP30 which is used for many in-vac applications. I followed the assembly procedure for the Araldite which the fiber lab has had good experience with, namely a ~30min 60 deg C bake on a hot plate.
| Work Permit | Date | Description | alog/status |
| 6293.html | 2016-11-01 12:28 | Activity: 1. Isolate turbo from RGA volume by closing the 1 1/2" turbo isolation valve. 2. Expose RGA to the X-end volume (with cal-gases isolated) by opening the 2 1/2" RGA isolation valve. 3. Vent turbo via turbo vent valve while monitoring X-end pressure. 4. Remove turbo and install 1 1/2" O-ring valve in its place. 5. Pump to high/rough vacuum the volume between the two series connected 1 1/2" valves. 6. Shut down and decouple pumping components | 31080 |
| 6292.html | 2016-11-01 09:05 | Install chassis in MSR and LVEA Chiller closet. We will power it up and monitor the channels for a while so no functions will be impacted. | 31090 |
| 6291.html | 2016-11-01 08:32 | Recenter BRSY. SC will be turned off. | |
| 6290.html | 2016-11-01 08:30 | Perform scheduled maintenance to scroll compressors #1 and #2 @ X-End vent/purge-air supply skid. Maintenance activity will require the compressors to run for brief periods of time. Lock-out/tag-out power to skid. | 31085 |
| 6289.html | 2016-10-31 14:32 | The guardian machine has been showing some awgstream errors. It is due for a reboot (running 27 days). | 31076 |
| 6288.html | 2016-10-31 14:30 | Activity: see alog 30908, ADC chans showed an offset when h1iscex computer was power cycled but not the IO Chassis. Perform a power cycle of both computer and chassis to see if offset is cleared. | 31071 |
| 6287.html | 2016-10-31 14:27 | Add an extra PEM ADC to the h1oaf0. Shuffle the new ADC with the one used by h1ngn. h1iopoaf0 model change. DAQ restart. | 31075 |
| 6286.html | 2016-10-31 10:13 | Remove Pringles from restored DOFs list of HEPI platforms: Edit guardian parameter file for each platform. Test as needed (not really needed.) | not done |
| 6285.html | 2016-10-31 09:59 | Degas PT170 & PT180 Inficon wide range nude ion gauges, one at a time. | 31089 |
| 6284.html | 2016-10-27 13:28 | Power off the power cart/transformer along the output arm. The Op levers for HAM4 and HAM5 will need to be disconnected. The HWS power supply will also be moved. Op levers will be left powered off. | 31081 |
| 6283.html | 2016-10-27 09:10 | Adding an input offset adjustment to the AC coupling loop (FE model change). | 30928 |
| 6282.html | 2016-10-26 16:29 | First Contact the PSL-HAM1 viewport window. This viewport is a double-layered viewport, with the outer surface intended to be cleaned as an in-air window. Will need LASER SAFE in LVEA; light pipe will be removed. | 30924, 30937 |
| 6281.html | 2016-10-26 08:50 | Lift leads of thermocouple wiring at Beckhoff rack at MY station to test TC readouts at CP3. | |
Per FAMIS #6870, ran & checked ISI CPS signals for the BSCs & HAMs. Both measurements are attached.
Comparing to the measurement from 10/20, there are no glaring high-frequency issues. The small differences are the following:
BSC ITMx Stage1 appears to have fewer/weaker lines in the 40-60 Hz band.
BSC ETMx Stage1 appears to have more lines in the 20-30 Hz band.
Weekly site wide meeting at 3 PM local.
SEI: Jim W. recentered the BRS at End Y. Hugh was not able to complete the guardian change for the HEPI pringle mode DOFs (WP 6286); changes reverted, will consult with TJ. Operators should take note of Jim's alog regarding the SEI configuration.
TCS: Nutsinee reported in an alog that the temperature channels for both TCS lasers are glitching.
EE: Working on remote access to the newly installed Beckhoff safety terminals. Bad Beckhoff module replaced in corner chassis 3.
CDS: Tried to install an additional PEM ADC card in the h1oaf IO chassis. Doing so prevented booting even to the BIOS screen, which glitched all of the models on the Dolphin network. The ADC card was removed and the front end models restarted; no changes were made to the models on h1oaf. Testing the ADC card in the test stand reproduces the same situation. LLO range is invalid in DMT; being addressed.
PSL: Added polarization minimization.
VAC: Kyle is looking for an opportunity to go to the end stations.
FAC: Ongoing search for a water leak.
Measured transfer functions after yesterday's circuit board modifications. The MEDM gain slider was increased from 0 in increments of 3.
I noticed this before: the gain slider is miscalibrated by a factor of 2. The 0 dB to 40 dB range is mapped onto +/-10 V, but it looks like only +/-5 V is needed for the full range.
Started yesterday evening. Doesn't seem to be equally spaced.
It could be pickup through the unshielded part of the RTD cable (where it comes out of the laser), but it's strange that it started in X and Y at the same time. Can you check that the FLIR cameras are turned off on the MEDM screen?
Apparently there is a "glitch" in the laser power of both the X and Y TCS systems every time the laser power meter sees a request for a power change (see the 1-day trend with additional channels to correlate with). However, this did not just start yesterday; it has been going on for at least a month (see the second plot, of the last 3 days). I spot checked back a month and the behavior is the same. Note, there are a few extra spikes on these channels, but this doesn't seem to be new behavior.
Update: I looked closer at these glitches.
Characteristic:
A single glitch spans multiple sample points and lasts a little less than half a second. Almost every one of these glitches is preceded by a smaller glitch roughly 1.5 seconds earlier, which is followed by an even smaller glitch and then immediately by the larger glitch.


Timeline:
I looked back several months and it seems like these glitches started to show up much more frequently on May 13th this year (starting at 16:28 UTC, to be precise). I looked back on the alog and didn't find any invasive work on the TCS on May 12th or 13th.

After today's changes to the PMC servo, which included increasing the loop gain, we noticed a familiar bump around 4.2 kHz in DARM. Lowering the PMC gain from 16 dB to 0 dB made the bump smaller in DARM (see images). Given the coherence with REFL9, it seems that this is coupling through frequency noise.
We also made a small exploration around the already good PMC input beam alignment. This didn't have much impact, but it revealed that small changes in the alignment can change the input offset to the PMC servo. That is, the offset which maximizes the PMC throughput can be changed with small alignment tweaks. Using this effect, we brought the optimal input offset close to 0 mV (see second image).
Tagging PSL and ISC.
Notes:
0) A2L gains of order two are getting suspicious, because it essentially means some coils are shut off.
1) ITMY_Y2L was always large (~-2), but since 10/29 7 UTC it has been even bigger.
2) ITMX_Y2L also got big (~2) at 10/29 7 UTC.
Not sure what to make of this...
Tagging SUS. We should check functionality of the coils.

EDIT: Checked the functionality of the coils by taking a quick transfer function between COILOUT_EXC channels (the OSEM-basis excitation channels closest to the DAC output) and the various coil driver monitor channels. Though I've not taken the time to calibrate or understand the signals, I see no differences between the quadrants, and plenty of coherence between all drives and responses. I don't think there's anything wrong with the ITM L2 actuators.

Note to self -- next time include OSEM sensors and optical levers in your template for those who don't trust the functionality of the coil driver monitor channels. The templates (attached) live here: /ligo/home/jeffrey.kissel/2016-11-02/ (apologies for my laziness -- I figured it would be faster to just create new templates than to find the ones Ed used to characterize the monitor signals, which are likely committed to the SusSVN. #slapsownwrist)

Also note that:
- The IFO was DOWN during this measurement.
- I turned off the PIT optical lever damping during the measurement so as to not confuse anything.
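As an illustration of this kind of check (not the actual DTT template used for the measurement above), one could estimate the drive-to-monitor coherence and transfer function roughly as below; the channel names and GPS times are placeholders, not the ones actually used:

```python
# Sketch of a drive-to-monitor coherence check (placeholder channels/times;
# the real measurement was done with DTT templates, not this script).
from gwpy.timeseries import TimeSeriesDict
import scipy.signal as sig

drive_ch = "H1:SUS-ITMY_L2_COILOUTF_UL_EXC"     # hypothetical channel name
mon_ch   = "H1:SUS-ITMY_L2_FASTIMON_UL_OUT_DQ"  # hypothetical channel name
start, end = 1162000000, 1162000060             # placeholder GPS times

data = TimeSeriesDict.get([drive_ch, mon_ch], start, end)
fs = data[drive_ch].sample_rate.value           # assumes both channels share fs

f, coh = sig.coherence(data[drive_ch].value, data[mon_ch].value,
                       fs=fs, nperseg=int(4 * fs))
f, Pxy = sig.csd(data[drive_ch].value, data[mon_ch].value,
                 fs=fs, nperseg=int(4 * fs))
_, Pxx = sig.welch(data[drive_ch].value, fs=fs, nperseg=int(4 * fs))
tf = Pxy / Pxx   # transfer function estimate from drive to monitor

print("coherence above 0.9 in %d of %d bins" % ((coh > 0.9).sum(), coh.size))
```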
For both PMC locking and injection locking, servo board and field box board were modified as per FRS 6500 (FRS id=6500 and alog 31047).
After the modified boards went in, we removed the 20 dB attenuator on the PSL rack RF patch panel for the 35.5 MHz drive to the PSL EOM.
This seems to bump up the ILS and PMC length locking optical gain by about a factor of 10 (for ILS the analog gain was changed to compensate).
We also remeasured the demod phase for ILS and PMC length locking. For ILS we didn't see much change (1 ns), but for the PMC the change was significant; we ended up changing it by 3 ns, i.e. roughly 39 degrees at 35.5 MHz.
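For reference, the delay-to-phase conversion at the demod frequency (a quick arithmetic check using only the numbers quoted above):

```python
# Convert a delay-line change to demod phase at the 35.5 MHz RF frequency.
f_rf  = 35.5e6      # demod frequency [Hz]
dt    = 3e-9        # delay change [s]
phase = 360.0 * f_rf * dt
print(f"{dt*1e9:.0f} ns at {f_rf/1e6:.1f} MHz -> {phase:.1f} degrees")
# -> 38.3 degrees, consistent with the ~39 degrees quoted above
```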
Demod settings are shown in the first two pictures, ILS (first attachment) and PMC (second). The third picture shows that the ILS delay line is at the bottom left of the four delay lines, and the PMC is at the bottom right.
The fourth picture shows the OLTF of the PMC, which is pretty close to the one we have now. Note that the UGF is much higher than before, simply because of the increased optical gain and the demod phase (we might have to change this).
Unfortunately the floppy for the SR785 failed, so we'll remeasure the ILS loop gain tomorrow.
Here are the spectra of the HVMons and Mixer_out, both in counts at the input and calibrated in volts.
J. Kissel, for M. Evans, D. Sigg, K. Kawabe
With these electronics changes come new compensation filters scattered throughout the PMC model, which has resulted in SDF differences. The following filter banks now have new filters that have been accepted into the SDF system as ON:
H1:PSL-PMC_INOFFSET_CALI FM3 "LP1" zpk([],[1],1,"n")
H1:PSL-PMC_MIXER FM2 "aWhite" gain(-0.005)
H1:PSL-ILS_MIXER FM2 "aWhite" gain(-0.005)
H1:PSL-ILS_HV_MON FM10 "aWhite" zpk([100],[1],-1,"n")
(Patrick, Gerardo)
Degassed two nude ion gauges in the corner station, PT170 and PT180. We did the degassing twice: the first time, the degassing was terminated too early for both gauges because we sent a second command before the 3 minutes were up, and it turns out that sending the command again stops the degassing. As usual, the pressure went up to the 10-7 torr range for both gauges while degassing, then trended down afterward.
Trends for Inficon gauges PT170 and PT180 (BSC7 and BSC8) over 60 days, compared to the signal from PT120B (BSC2) using a MKS HPS 903 inverted magnetron cold cathode.
Note that PT120B trend has been shifted up to compare slope of signals, the other trends remain unchanged.
WP#6255 and WP#6293 completed 0930 hrs. local -> Valved-in RGA turbo to RGA volume and energized filament. 1130 hrs. local -> Took scans of the RGA volume with and without cal-gases -> isolated RGA turbo from RGA volume -> combined RGA volume with X-end volume and took scans of the X-end with and without calibration gases (inadvertently dumped ~5 x 10-4 torr*L of Krypton, or 2 hrs accumulation @ 5 x 10-8 torr*L/sec, into the site) -> vented RGA turbo and removed from RGA hardware -> installed 1 1/2" UHV valve in its place -> Pumped volume between the two 1 1/2" valves to the 10-4 torr range before decoupling and de-energizing all pumps, controllers and noise sources, with the exception of the RGA electronics which was left energized and with its fan running 24/7. Leaving RGA exposed to X-end, filament off and cal-gases isolated. Will post scan data as a comment to this entry within the next 24 hrs.
Here are the scans from yesterday. Note the presence of amu 7, obviously "sourced" from the N2 cal-gas bottle. I will need to revisit the noted observation of the appearance of amu 7 when the cal-gas isolation valve used with Vacuum Bake Oven C is closed, and the baffling disappearance of this amu when the isolation valve is opened???
During the windstorm yesterday, the PCal team attempted to complete end station calibrations of both ends. The calibration for EY went off without a hitch (results to come in a separate aLog). However, while setting up for the EX calibration, I dropped the working standard from the top of the PCal pylon onto the floor of the VEA. The working standard assembly ended up in 3 pieces: the integrating sphere, one spacer piece, and the PD with the second spacer piece. Minor damage was noted, mostly to the flanges of the integrating sphere and spacer pieces where the force of the fall had pulled the set screws through the thin mating flanges. I cleaned up and reassembled the working standard assembly and completed the end station calibration. Worried that some internal damage had occurred to the PD or integrating sphere, I immediately did a ratios measurement in the PCal lab. The results of this showed that the calibration of the working standard had changed by ~2% which is at the edge of our acceptable error. As a result of this accident, we are currently working to put together a new working standard assembly from PCal spares. Unfortunately this means that we will lose the calibration history of this working standard and will start fresh with a new standard. We plan to do frequent (~daily) ratios measurements of the new working standard in the PCal lab in order to establish a new calibration trend before the beginning of O2.
Opened FRS Ticket 6576 - PCal working standard damaged.
J. Kissel
We're exploring the functionality of the new features of the front-end calibration that calculate the coherence, and the subsequent uncertainty, of the transfer function between each CAL line source and DARM. As such, I plot three one-hour data stretches from different lock stretches in the past 24 hours.
Data Set A: 2016-10-31 02:30 UTC
Data Set B: 2016-10-31 07:00 UTC
Data Set C: 2016-10-31 10:00 UTC
Note the translation between channel names and the calibration line each one is analyzing:
| Channel (H1:CAL-CS_TDEP_..._[COHERENCE/UNCERTAINTY]) | Frequency (Hz) | Used In Calculating |
| DARM_LINE1 | 37.3 | kappa_TST (ESD Actuation Strength) |
| PCAL_LINE1 | 36.7 | kappa_TST & kappa_PU (ESD and PUM/UIM Act. Strength) |
| PCAL_LINE2 | 331.9 | kappa_C & f_C (Optical Gain and Cavity Pole) |
| SUS_LINE1 | 35.9 | kappa_PU (PUM/UIM Act. Strength) |
For definitions of these parameters, refer to P1600063 and T1500377.
Recall also that our goal is to keep the uncertainty in the time-dependent parameters (which are calculated from combinations of these lines) around ~0.3-5%, such that these uncertainties remain non-dominant (i.e. the lines are strong enough) but non-negligible (i.e. not excessively strong). See the example total response function uncertainty budget in LHO aLOG 26889 for the level at which the time-dependent parameter estimation uncertainty impacts the total uncertainty. That means the uncertainty in each line estimate should be at the 0.1-0.3% level if possible. So, we can use these data sets to tune the amplitude of the CAL lines so as to balance uncertainty needs against sensitivity pollution.
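For reference, the relative uncertainty of a single line's transfer function estimate follows from its coherence and the number of averages; a quick sketch using the standard Bendat & Piersol relation (assumed here to match what the front-end coherence/uncertainty blocks compute -- check the model to be sure):

```python
# Relative uncertainty of a transfer-function estimate from coherence,
# sigma/|H| = sqrt((1 - C) / (2 * N * C)),   C = coherence, N = averages.
# N = 13 comes from the 13 averages of 10 s FFTs discussed below.
import numpy as np

def line_uncertainty(coherence, n_avg=13):
    """Fractional uncertainty of a single calibration-line TF estimate."""
    return np.sqrt((1.0 - coherence) / (2.0 * n_avg * coherence))

for C in (0.99997, 0.999, 0.99):
    print(f"coherence {C}: {100 * line_uncertainty(C):.2f} %")
# With 13 averages, a ~0.1% uncertainty (as seen on the ~36 Hz lines)
# corresponds to a coherence of ~0.99997.
```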
There are several interesting things. It's best to look at the data sets in order B, then A, then C.
In data set B --
- this is what we should expect if we manage to get a stable, O1-style interferometer in the next week or so for ER10 and O2.
- With the current amplitudes, the uncertainty on the ~30 Hz lines hovers around 0.1% -- so we can probably reduce the amplitude of these lines by a factor of a few if the sensitivity stays this high.
- The 331 Hz line amplitude should probably be increased by a factor of a few.
In data set C -- (this is during the ghoulish lock stretch)
- One can see that when the data goes bad, it goes bad in weird, discrete chunks. The width of these chunks is 130 sec (almost exactly), which I suspect is a digital artifact of the 13 averages of 10 sec FFTs. The sensitivity was popping, whistling, and saturating SUS left and right during this stretch, on a much quicker timescale than 100s of seconds.
In data set A --
- This is an OK sensitivity stretch. The good thing is that the coherence/uncertainty appears to be independent of any fast glitching or overall sensitivity, as long as we stay in the 60-75 Mpc range.
- Interestingly, there's either a data dropout or a terrible time period during this stretch (as indicated by the BNS range going to 0) -- but it's only ~120 sec. If it's a data dropout -- good, the calculation is robust against whatever happens in DMT land. If it's a period of glitchy interferometer, it's very peculiar that it doesn't affect the uncertainty calculation, unlike with data set C.
Based on these data sets, I think it'll be safe to set the uncertainty threshold at 1%, and if the uncertainty exceeds that threshold, the associated parameter value gets dumped from the calculation of the average that is applied to h(t).
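A minimal sketch of that veto logic, under the assumption that it simply drops any parameter estimate whose line uncertainty exceeds the threshold before averaging (the actual GDS/front-end implementation may differ):

```python
# Hypothetical uncertainty-threshold veto on time-dependent parameter estimates.
import numpy as np

THRESHOLD = 0.01   # the 1% uncertainty threshold discussed above

def vetoed_mean(kappas, uncertainties, threshold=THRESHOLD):
    """Average only the kappa estimates whose line uncertainty is below threshold."""
    kappas = np.asarray(kappas, dtype=float)
    unc = np.asarray(uncertainties, dtype=float)
    good = unc < threshold
    if not good.any():
        return np.nan          # no trustworthy estimates in this stretch
    return kappas[good].mean()

# Example: the estimate with 3% uncertainty is dropped from the average.
print(vetoed_mean([1.00, 1.02, 0.97], [0.002, 0.001, 0.03]))   # -> 1.01
```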
So, in summary -- it looks like the calculations are working, and their calculated values roughly make sense when the IFO is calm. There are a few suspicious things that we need to iron out for when the IFO isn't feeling so well, but I think we're very much ready to use these coherence calculations as a viable veto for the time-dependent parameter calculations.
Jeff K, Darkhan T,
We further investigated the 130 s drop in coherence in data set C (see LHO aLOG 31040 above).
This drop was possibly caused by one or more bad data points (a "glitch") at the beginning of the drop, i.e. when the first glitchy data point entered the 130 s averaging buffer. A quick look at the kappas calculated in PcalMon from 10 s FFTs during the 600 s around the time of the glitch indicates that outliers in the κTST and κPU values are found in only one of the 10 s intervals, GPS [1161968880, 1161968890) (see attachment 1).
A look at slow channels indicates that the glitch produced impulse responses in DARM_ERR demodulated at 35.9 Hz lasting just under 10 s before the 0.1 Hz low-pass filter and roughly 30 s after the filter (see the upper panes in attachment 2). The start of the glitch is at ~1910 s (GPS 1161968887). In the coherence calculation block of the CAL-CS model (attachments 3 and 4), it can be seen that the glitch lasts 20-30 s in the EPICS records preceding the 130 s averaging blocks (BUFFER_AND_AVERAGE), but results in a reduction of the calculated coherence value for the full 130 s (see attachment 5).
If we use coherence values from the CAL-CS front-end model as a threshold for "bad" kappas, this kind of glitch will result in unnecessarily marking 130 s of kappas as "bad". GDS median kappas should not be sensitive to such short glitches; however, the CAL-CS front-end κTST was affected for ~250 s (the front-end kappas are low-passed with a 1/128 Hz IIR filter) (see attachment 5).
A potential (not yet tested) solution would be to replace the BUFFER_AND_AVERAGE (running average) script with a running median. A similar script could be used for averaging the front-end kappas, which would also reduce discrepancies between the GDS and front-end kappas.
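A minimal sketch of what such a replacement could look like (pure illustration: the buffer length comes from the 13 x 10 s setup described above, and the real implementation would live in the front-end/EPICS infrastructure, not Python):

```python
# Running-median buffer as a drop-in idea for BUFFER_AND_AVERAGE.
# 13 slots of 10 s estimates ~ the 130 s window discussed above.
from collections import deque
import numpy as np

class RunningMedian:
    def __init__(self, n_slots=13):
        self.buf = deque(maxlen=n_slots)

    def update(self, new_value):
        """Push the latest 10 s estimate and return the median of the buffer."""
        self.buf.append(new_value)
        return float(np.median(self.buf))

# Example: a single glitchy estimate barely moves the median, whereas it
# would pull a running average (and hence the coherence/kappa outputs)
# for the full 130 s window.
rm = RunningMedian()
for kappa in [1.00, 1.01, 0.99, 1.00, 5.0, 1.00, 1.01]:   # 5.0 = glitch
    print(f"input {kappa:4.2f} -> median {rm.update(kappa):4.2f}")
```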