Finished adding caps to the negative regulator per the latest dwg. Today we completed the ITMx, ITMy, BS, and all of the HAM5 and HAM6 suspensions. Also verified, and corrected where needed, the HAM2 optics. This should stem any oscillations caused by the long cables from the SAT amp side.
IM2 signals changed behavior on 23 Feb around 19:00UTC.
Pitch and yaw signals went from about 1 count pp to 6 counts pp.
Looking at OSEM signals, UL and LL decreased in pp counts, UR remained about the same, and LR increased its pp counts.
Plots attached: power spectra look very similar to the "good" values; the time series shows the OSEMs and the increased pitch and yaw signals.
Monday: Noise hunting (sort of)
Tuesday: ITM preloading, SRC work, get back to locking
Wednesday: WFS (morning) & noise hunting (later in the afternoon)
Thursday: Noise hunting (all day)
Nairwita and Jeff found yesterday that the foton file for HAM5 did not show the same controller that the seismic Matlab commissioning scripts said was installed. The changes weren't huge (some very different low frequency gains, but only factors of a couple at a few Hz), and are shown in the two attached plots. The first figure is the old filters, the second is the new filters. While everything was down this afternoon I've installed the Matlab filters. I don't expect any difference in the performance at HAM5; the differences are not huge, and this was mostly a housekeeping task to make offline modeling work easier. I still haven't figured out why they differed, but finding out would likely take digging through months of archived foton files. This also makes the controllers at HAM5 more similar to HAMs 4 & 6 over 1-10 Hz.
(Fil Keita Daniel)
OMC PZT dither: change cap for 1000 less analog gain. Issue 1208, ECR TBD. Done
OMC DCPD bias voltage: increase bypass caps on power supply. Issue 1207, ECR E1600061. Done
H1:OMC-LSC_OSC_CLKGAIN was increased by a factor of 10 (from 600 to 6000) as the old resistive divider was providing a factor of 100, not 1000.
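One way to read those numbers (a sketch, assuming the analog divider acts as an attenuation on the digital dither drive, so the factor-of-10 CLKGAIN increase compensates the factor-of-10 difference between the assumed and actual division):

```python
# Sanity check that the overall dither drive is unchanged.
# Old config: digital gain 600 into a divider attenuating by 100 (the actual
# factor, not the assumed 1000).  New config: digital gain 6000 into the
# cap-modified stage attenuating by 1000.
old_overall = 600 / 100
new_overall = 6000 / 1000
print(old_overall, new_overall)  # 6.0 6.0
```

Both configurations deliver the same net drive, which is why only the digital gain needed touching.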
Units modified are listed below:
1. OMC PZT Dither Modification (HV power supply set to 100V, 80mA)
Chassis D1300485 SN S1301298
Board SN S1301288
2. OMC DCPD Bias Voltage Modification
Chassis D1300520 SN S1101603
Board SN S1301540
Cao, Nutsinee, Elli
We replaced the ITMX HWS SLED with a spare.
We remeasured the power coming out of the SLEDs using an Ophir power meter held directly in front of the fiber.
The ITMX SLED power was 1.35mW (running at 98mA current)
The ITMY SLED power was 2.90mW (running at 103mA current).
The SLED power channel H1:TCS-ITMX/Y_HWS_SLEDPOWERMON was recalibrated using new gains and dark offsets
H1:TCS-ITMY_HWS_SLEDPOWERMON=0.48*input-0.0071
H1:TCS-ITMX_HWS_SLEDPOWERMON=1.089*input-0.154
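The recalibration is a simple linear map from the raw monitor reading to power in mW. A minimal sketch of how the two calibrations above apply (the function name is illustrative; the gains and dark offsets are the ones quoted above):

```python
def calibrated_sled_power(raw, gain, dark_offset):
    """Convert a raw SLEDPOWERMON reading to power (mW): gain*raw - dark_offset."""
    return gain * raw - dark_offset

# Calibration constants quoted above for each channel
ITMY = dict(gain=0.48, dark_offset=0.0071)   # H1:TCS-ITMY_HWS_SLEDPOWERMON
ITMX = dict(gain=1.089, dark_offset=0.154)   # H1:TCS-ITMX_HWS_SLEDPOWERMON

# Example: calibrated power for an arbitrary raw reading of 1.0
print(calibrated_sled_power(1.0, **ITMX))
```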
We tested ITMX's HWS by applying high heat to the CP with the CO2 laser. It seems to be functioning properly. Also, the simulator does not seem to be terribly off in its prediction. Once we know the interferometer heating, we can start testing the pre-loading on the X arm.
Posted are the data from the 3IFO desiccant cabinet and the two dry storage boxes for February. Data shows no apparent problems.
Site activities: some started before 16:00UTC (08:00PT)
- 15:14 Richard - SUS work, ITMs, BS, and MMT1
- 15:23 JeffB - both end stations for Dust Monitor install
- 15:30 ChrisS to mid-X to retrieve items, no VEA entry
- 16:11 Fire Department - testing in OSB
- 16:11 Filiberto/TJ - EY then EX RF cable pulling
- 16:27 Christina - EX and mids
- 16:28 Hugh - restarted HAM4 and HAM5 models, all working
- 16:43 Ryan - restarting Remote Access, alog
- 17:00 GRB test alert
- 17:30 ChrisS - resumes beam tube sealing on X-arm roughly 300 yards from corner station
- 18:00 Gerardo - BSC5 recharge canister outside of chamber, 6 hours
- 18:00 Cheryl - H1 PSL Anti Room access for parts
- 18:02 Brynley/Vinny - heading to EY to get their magnetometer equipment
- 18:10-18:43 DaveB - SUS hardware watchdog swap out (CER ITMy & EY ETMy)
- 18:10 portapotty service
- 18:12 ITMx satellite back
- 18:13 Paradise water
- 18:23 Elli, Nutsinee - looking for TCS sled in west bay
- 18:44 Dave - ETMy SUS hardware wd swap
- 18:45 Ken - variacs work for bubba
- 18:45 JeffB - completed dust monitor work at end stations
- 18:57 Ed, JoeD - HAM5/6 re-install satellite amps
- 19:21 DaveB - done at EY with SUS hardware watchdog
- 19:21 JimB going to EX to look at electronics
- 19:50 Kiwamu - CO2 laser heating tests using ITMs and SRC
As of 23:20UTC
- 21:05 Kyle - back from EX - Gerardo still out there
- 21:05 Filiberto and Ed - back from end stations - Filiberto will return after lunch
- 21:14 Joe - back in the LVEA for battery checks, complete
- 21:21 Daniel/Filiberto - at OMC for PZT dithering work
- 21:59 Daniel/Filiberto - returned
- 22:12 Gerardo - done at EX at BSC5
- 23:20 Gerardo - at MY to fill CP3
- 23:20 Hugh - changing out hardware installed earlier today at EY
8:50 - 9:53: Ran charge measurements on ETMX & ETMY, per these instructions.
The only activity at end stations were: Fil & TJ were pulling CPS RF cables at EY (8:25-8:58) & then EX (9:49-ongoing).
Here are the alignment slider values (before/after):
ETMx:
ETMy
The vacuum crew was also at EX while we were there, just in case this is important.
WHAMs 2 & 3 models were built by Hugo/Vincent while Jim and I were busy building things. When Jim & I got to bringing the other HAMs up, we did not realise there was a modification needed to generate a non-custom MEDM screen. This screen is the PAYLOAD button on the ISI OVERVIEW that gives access to the SUS and to the override controls for tripping the ISI when the SUS model is not running.
The attached image shows the model elements needing correction.
I switched the Chamber Guardian to STOP. Interestingly, it then reloaded its code and switched itself to PAUSE, huh? Anyway, it stayed there, and then the ISI Guardian was switched to READY. The utility of doing this is to keep HEPI ISOLATED. Once there, I opened the MASTERSWITCH and updated the safe.snap for FF settings.
JimB then restarted the FE. The medms are now available and the ISI reisolated fine.
Apparently, I failed to file either a work permit or an Integration Issue; otherwise, I'd list those items here.
02:18 UTC Sheila and Jenne at end X checking green beam alignment
02:53 UTC Nutsinee to LVEA to find equipment for Elli
03:22 UTC Nutsinee back
03:40 UTC Sheila and Jenne back
04:15 UTC X arm locked on green
05:29 UTC Initial alignment done
09:31 UTC ISC_LOCK set to down. Started script for ring heater test.
Cheryl, Jenne and Sheila were struggling to get the X arm realigned on green after HEPI and ISI trips earlier in the day. After this was resolved we ran through an initial alignment. We have since been having trouble with engaging the ASC loops. We have left the ISC_LOCK guardian at down and started the script for the ring heater test.
Patrick, Sheila
We have been losing lock while engaging the ASC tonight. It seems as though turning on the REFL WFS loops without the SOFT loops reduces the recycling gain, sometimes by too much for us to survive. With the soft loops engaged at low gain as they are, they sometimes don't improve the recycling gain fast enough. We might be better off engaging all of our ASC loops at the same time.
We noticed that the ASC offloading was a factor of 10 slower for the ETMs than the ITMs, and fixed this so that the ETMs now have the higher offloading gain. It seems like the offloading gain on all of the test masses could be increased by at least 10 dB more, which may save us some locklosses due to saturations in the future.
We lost lock a few times because the roll mode saturated the ASC. This happened when we had damped the roll mode before turning on the ASC, zeroed the gain of the damping loops, and engaged the ASC. As the recycling gain dropped, the roll mode seemed to couple more strongly to the AS WFS and saturate the PUMs. We also noticed that the ETMs have two bounce/roll notches in the suspension filters, while the ITMs only have one. These might need to be moved upstream, and should probably be the same on all test masses.
[Sheila, Jenne, Hang]
For some reason, after the several ETM HEPI and ISI trips earlier this afternoon, we really struggled to get the green Xarm aligned. (Note that I had a similar struggle yesterday, but was eventually able to succeed from the control room)
In the end, we went down to the end station and looked at the pointing on the table. We decided that since the input beam was going through all the irides there was no misalignment on the table for the injected beam.
To get the alignment, we did:
We then went back to the corner station. We needed to help the PLL out a bit, but that's likely because we were out futzing around on the table. After the PLL relocked, we were able to lock and align the green Xarm like normal. We let the WFS converge for a long time, since they really needed it.
We're still not sure what the actual problem was. Perhaps the operator tomorrow can look at how the pointing of ITMX, ETMX and TMSX changed versus the maximum green transmission flashes and see if there's any clear patterns?
We have changed the whitening filters in CAL_DELTAL_EXTERNAL_DQ, to the filter described in
We are now using 6 zeros at 0.3 Hz and 6 poles at 30 Hz. Hopefully this will take care of the aliasing problem with DTT, and we can use the calibrated channel when making comparisons with seismic, SUS, or PEM channels.
This doesn't impact the GDS pipeline, only CAL_DELTAL_EXTERNAL
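The shape of such a whitening stage is easy to sanity-check numerically: with 6 zeros at 0.3 Hz and 6 poles at 30 Hz, the response is flat at low frequency and flat again above 30 Hz, boosted by (30/0.3)^6 = 1e12. A minimal sketch (the unity-DC-gain normalization is an assumption, not necessarily the filter's actual overall gain):

```python
import numpy as np
from scipy import signal

# Whitening filter: 6 zeros at 0.3 Hz, 6 poles at 30 Hz (analog, s-domain),
# normalized here to unity gain at DC for illustration.
f_zero, f_pole, order = 0.3, 30.0, 6
zeros = [-2 * np.pi * f_zero] * order
poles = [-2 * np.pi * f_pole] * order
k = (f_pole / f_zero) ** order   # makes DC gain 1; high-f gain = (30/0.3)^6 = 1e12

f = np.logspace(-2, 3, 500)                       # 0.01 Hz to 1 kHz
_, h = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)

# Flat near 1 below 0.3 Hz, flat near 1e12 well above 30 Hz
print(abs(h[0]), abs(h[-1]))
```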
[Kiwamu, Elli, Aidan, Cao]
In order to test the alignment of the HWS Y-arm beam and the spherical power measured, we need to run a test using the ring heater (RH) of ITMY tonight after commissioning. Here are the steps:
1. Leave all optics aligned. The most important optics that need to be aligned are ITMY, BS, and SR3.
2. Run the python script to turn on the RH at 1 W. This script turns on the RH of ITMY and leaves it on for 2 hours before turning it off.
To run the script, type python ringheaterITMY.py into the terminal, which should be readily available on the desktop.
NOTE:
The script and all terminal windows should be readily seen on desktop opsws6. Attached is the screen of what the desktop should look like:
There are two terminals:
1. The terminal where the script is executed; it is in the folder ~/HWSYAlignment. The command python ringheaterITMY.py should already be typed into the terminal; just press enter to execute the script.
2. Another terminal shows that the HWS is running. DON'T close this terminal, otherwise the HWS will stop running.
There is also a window showing the script ringheaterITMY.py
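For reference, the logic of such a ring heater script is simple: set the RH power, hold for two hours, then turn it off. A minimal sketch under assumptions (the channel name and the channel-writing callback are illustrative, not taken from the actual ringheaterITMY.py):

```python
import time

def run_ring_heater_test(write, power_watts=1.0, hold_seconds=2 * 3600):
    """Turn the ITMY ring heater on at power_watts, hold, then turn it off.

    `write(channel, value)` abstracts the EPICS write; the channel name
    below is illustrative, not necessarily the real one.
    """
    write("H1:TCS-ITMY_RH_SETPOWER", power_watts)  # RH on at 1 W
    time.sleep(hold_seconds)                       # leave on for 2 hours
    write("H1:TCS-ITMY_RH_SETPOWER", 0.0)          # RH off

# Dry run: record the writes instead of touching hardware
log = []
run_ring_heater_test(lambda ch, v: log.append((ch, v)), hold_seconds=0)
print(log)
```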
Following on with measurements made in the field by Jeff (see [1]), we wanted to repeat these measurements for a PUM AOSEM in the lab up to 100 kHz. In order to characterize the AOSEM, I used a spare AOSEM, PUM coil driver, satellite box, and a coil driver test box. These were connected in a similar manner as Jeff has done previously (see alog 24725). For these measurements, I only needed the test "bench" setup, and the full connected chain. The data is stored at
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/Electronics/BenchPUMDriver/2016-02-29
Dividing out the bench setup from the full connected chain, we can recover the AOSEM behavior without the coil driver and test box. LISO is used to fit this resulting transfer function to one zero and two poles (see the attached figure):
Best parameter estimates:
Zero: 760.627 Hz +/- 3.734 Hz (0.491%)
Pole: 22.053 kHz +/- 167.9 Hz (0.761%)
Pole: 661.6 kHz +/- 46.36 kHz (7.01%)
factor = 4.880 mA/V +/- 14.19 uA/V (0.291%)
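The fitted zero/pole model can be evaluated directly for comparison against the measured transfer function. A minimal sketch using the best-fit values above (the normalization convention, with DC gain equal to `factor`, is an assumption about the LISO fit form):

```python
import numpy as np

def aosem_tf(f, factor=4.880e-3, f_zero=760.627, f_poles=(22.053e3, 661.6e3)):
    """One-zero, two-pole fit (A/V) with pole/zero frequencies in Hz.

    H(f) = factor * (1 + j f/f_zero) / prod(1 + j f/f_pole); DC gain = factor.
    """
    h = factor * (1 + 1j * f / f_zero)
    for fp in f_poles:
        h /= (1 + 1j * f / fp)
    return h

f = np.logspace(1, 5, 400)   # 10 Hz to 100 kHz, matching the measurement band
mag = np.abs(aosem_tf(f))
print(mag[0])                # ~4.88e-3 A/V at low frequency
```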
[Kiwamu, Ellie, Aidan, Cao]
We noticed a fast degradation in the output power of the HWS-Y SLED over time. Despite the wrong calibration, calculation of the fractional power output shows that there is almost a 30% decrease:
GPS time | Power (mean, 10-minute trend) (mW, WRONG calibration) |
1138577882 | 83.69 |
1138617346 | 79.66 |
1139790957 | 79.62 |
1140362164 | 70.25 |
1140830276 | 64.70 |
Between 04/02 and 18/02 the SLED was off, so the power output did not change. As soon as the SLED started being used again, the power output quickly decayed. This means that, running continuously, the SLED's power output decayed 30% over 15 days. A plot from dataviewer is attached, which shows this decay clearly.
This fractional 30% decay excludes the effect of the amplification factor due to the wrong calibration. The only effect the wrong calibration may have is some unknown offset that we are not aware of.
Nevertheless, it is worthwhile to investigate the cause of this fast degradation. We suspect that the current driving the SLED may exceed the limit without our knowing, due to some calibration error. The current recorded from the channel SLEDCURRENTMON is around 100 mA, which is below the upper limit of this SLED. However, we will need to test this to confirm the actual current value.
It is surprising that these SLEDs are degrading so fast. Last year Nutsinee and I adjusted the SLED driver boards (alog 18069, D1200600) so they can only output a maximum of 100mA, thinking that this would improve the lifetime of the SLEDs. The data sheet for these SLEDs indicates a maximum current draw of 350mA (see http://www.qphotonics.com/Fiber-coupled-superluminescent-diode-5mW-790nm.html and http://www.qphotonics.com/Fiber-coupled-superluminescent-diode-5mW-840nm.html). Yet it appears that limiting the max SLED current has not prevented the SLED power from dropping rather rapidly.
I have extended the cross spectrum analysis (see 25009 and its comments) to the entire O1 run.
Preliminary conclusions are that:
I will dig into some interesting days or periods and study how different things were.
[Setting up band limited rms]
To analyze the cross spectrum as a function of time, I decided not to do a spectrogram-type analysis, which may contain too much information. Rather, I wanted to study the time evolution of the band-limited rms. This approach is very similar to what Gabriele does (e.g. 22514).
I selected eight interesting bands, avoiding tall peaks in the spectrum because I am not interested in their amplitudes in this study. The attachment below shows the eight selected frequency bands.
The frequency bands are chosen as follows: Band 1 = [20, 25], Band 2 = [30, 35], Band 3 = [50, 55], Band 4 = [81, 95], Band 5 = [101, 110], Band 6 = [121, 126], Band 7 = [130, 140], and Band 8 = [150, 156] Hz.
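The band-limited rms itself is just the PSD integrated over each band: blrms = sqrt(sum(PSD * df)). A minimal sketch using the band edges above (not the actual analysis code; note the df factor is included explicitly here):

```python
import numpy as np

BANDS = [(20, 25), (30, 35), (50, 55), (81, 95),
         (101, 110), (121, 126), (130, 140), (150, 156)]  # Hz, from above

def band_limited_rms(freqs, psd, bands=BANDS):
    """BLRMS per band: sqrt of the PSD integrated over the band."""
    df = freqs[1] - freqs[0]          # frequency resolution (0.1 Hz in the text)
    out = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        out.append(np.sqrt(np.sum(psd[mask]) * df))
    return np.array(out)

# Example: a flat (white) PSD of 4.0 gives BLRMS = sqrt(4 * bandwidth)
freqs = np.arange(0, 200, 0.1)
blrms = band_limited_rms(freqs, np.full_like(freqs, 4.0))
print(blrms[0])   # band 1 is 5 Hz wide -> ~sqrt(4*5)
```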
[Time-varying calibration parameters are corrected]
I have corrected for all kappas, the time-varying calibration parameters, by applying them to the H1 DARM model of the calibration group. Since one cross spectrum is produced from a twelve-minute integration, the kappas are also averaged over twelve minutes. Therefore, the data sets analyzed here should be less sensitive to calibration errors from the time-varying parameters, which can deviate by roughly 10% from the reference values.
[Glitches and transients are removed]
I have removed data segments that were contaminated by glitches or transients because I am not interested in their characteristics. This was done by computing an average Rayleigh statistic over the frequency range from 60 to 100 Hz and rejecting segments whose Rayleigh statistic deviated from unity by more than 10%. So the data set I analyze here has more or less good Gaussianity.
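As a reminder of the statistic: for stationary Gaussian noise the spectral amplitude in each bin is Rayleigh-distributed, so its std/mean ratio is a fixed constant, sqrt(4/pi - 1); normalizing by that constant gives a statistic equal to 1 for Gaussian data. A minimal sketch of the rejection criterion under these assumptions (not the actual analysis code; the normalization convention is inferred from "deviating from unity"):

```python
import numpy as np

def rayleigh_statistic(amplitudes):
    """Normalized Rayleigh statistic of a set of spectral amplitudes.

    For Rayleigh-distributed amplitudes (Gaussian noise), std/mean equals
    sqrt(4/pi - 1); dividing by that makes the Gaussian expectation 1.
    """
    gaussian_ratio = np.sqrt(4 / np.pi - 1)
    return (np.std(amplitudes) / np.mean(amplitudes)) / gaussian_ratio

def segment_is_clean(amplitudes, tolerance=0.10):
    """Accept a segment if its Rayleigh statistic is within 10% of unity."""
    return abs(rayleigh_statistic(amplitudes) - 1.0) < tolerance

# Gaussian-noise amplitudes (magnitude of complex Gaussian) should pass the cut
rng = np.random.default_rng(0)
amps = np.abs(rng.normal(size=20000) + 1j * rng.normal(size=20000))
print(segment_is_clean(amps))  # True
```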
[Result 1: time evolution]
The attached plot below shows the band limited rms of the selected eight frequency bands as a function of time in days starting from Sep-01 00:00:00 UTC.
Typically, the two lowest bands (bands 1 and 2) vary more than the others, presumably because they can be easily degraded by seismic activity, the alignment loops, and LSC feedforward. In contrast, the higher frequency bands (bands 3-8) are more stable but still fluctuate. Notice that the high frequency bands decreased their rms at t = 50 days or so; I believe this is due to Robert's improvement in reducing the PSL jitter coupling (22497). Also, one can find many interesting periods where one band increased in displacement while the rest stayed unchanged, and so on. I will try to check those interesting days later.
The fluctuation of the four highest bands (bands 5-8) is at the 15-20% level (or 30-40% peak to peak), with high-sigma data points excluded.
By the way, I forgot to multiply all the band-limited rms values by sqrt(df), with df being the frequency resolution. The frequency resolution df is currently 0.1 Hz, and therefore all the data shown in the above plot should be lowered by a factor of sqrt(0.1) = 0.3. This does not change the main conclusions.
[Result 2: behavior of the high frequency bands]
Now, the plot below shows a primitive correlation plot.
where the horizontal axis represents the displacement of band 8 and the vertical axis is for the rest of the bands, in order to show correlation with band 8. I have excluded the first 50 days, in which the high frequency noise was higher due to the PSL jitter coupling. By the way, the x-axis is in log scale although it may look linear to some people. As seen in the plot, bands 4-7 show a positive correlation with band 8, and their slopes seem to be identical. This indicates that the noise level above 80 Hz fluctuates in such a way that it keeps the same spectral color. As mentioned above, the size of the fluctuation is about 15-20% for bands 4-8.
All the data in the above plot should be lowered by a factor of 0.3, for the same reason as in the time series plot. This does not change the main conclusions.
A follow up: I have looked into the difference between the data before and after t = 44 days (Oct-13-2015). I still think that it is due to Robert's improvement of reducing PSL periscope jitter coupling.
[DARM and DCPD cross spectra]
Here is what the data says. See the attached below.
Here I plot two different spectra from two different days: a DARM spectrum generated by using the C01 recalibrated frame data and cross spectrum of DCPD A and B calibrated into the displacement for each day.
First of all, the DARM spectrum shows an improvement in the range from 50 Hz to 400 Hz (see cyan and yellow curves); the noise from the later day is featureless in this frequency range. The same improvement is visible in the cross spectra: the PSL periscope jitter peaks at 300-400 Hz have almost gone below some other smooth noise floor, and the structures at 60-300 Hz disappeared, leaving a smooth noise floor. Noise above 400 Hz seems unchanged between the two days.
By the way, DARM of Oct-7-2015 showed a wandering peak (?) in the 166-176 Hz band, which is not visible in the above plot because it is averaged out by the 24-hour integration. Also, the discrepancy below 30 Hz between the two days could be due to seismic and alignment, but I have not paid attention to this frequency region. I attach the actual fig file as well.
[Qualitatively the same improvement seen in detchar summary page]
Somewhat consistent improvement can be seen in the detchar summary page. I attach a gif animation comparing the spectra from the two days; one can see the improvements not only at the 300-400 Hz peak but also in 50-300 Hz.
Note that despite the different GWINC curves and title location, the x and y axes seem to be identical.
Later, it turned out that the optical gain was not compensated in all the above data. See the correction at alog 25918