h1fw1 froze up last night at 22:41 PDT with a kernel panic. It was restarted at 10:16 PDT today (I pressed the front panel reset button).
last part of the console error message was:
VFS: Close: file count is 0
Kernel panic - not syncing: CRED: put_cred_rcu() sees xxxxxxxxxxxxxx with usage -1
where xxxxxx is a large hex number.
We've had much more success tonight with the non-broken Xarm Trans QPD. We once again re-centered the spots on the ETMs, although they didn't need much moving. We are able to sit at 10W and 12W just fine now. Now, we're running into regular ol' loop oscillations, so we've been measuring loops at different powers, and trying to re-tune them.
CHARD Y seemed the most egregious, so we created a new control and boost filter combo, which live in FMs 4 and 5. Unfortunately, these filters are totally unusable at 2W, although they improve our stability at 10W, so for now the guardian still only engages the old loop shape filters. We'll have to re-think the 2W filter situation to make sure we can transition between these filters. For tonight, we were turning off the CHARD Y loop by hand, changing the filters, then re-engaging the loop. Attached is an open loop gain for the new loop.
PRC2 pitch we've decided is kind of okay if we use a factor of 2 less gain.
Now, we're seeing oscillations that also show up in AS 90, so we suspect either the MICH or SRC angular loops. Unfortunately, there's something going on with NDS/the lockloss plotter/something, such that I can't get data from the last ~5 locklosses. The ones before that, I can still get and plot, but it can't find data for the last several even if it's been an hour since that lockloss.
So, next up: Measure the MICH and SRC loops at 10W to see if they're close to unstable. Measure again at 15W, and then think about going from there.
I feel I should know this already, but what is known about the QPD failure (circumstance at failure, failure mode, etc.)?
It's not totally clear to me yet what the exact problem was. R. McCarthy is looking into why installing the PI chassis (apparently) spoiled the signal. Removing the new PI chassis seems to have fixed our problems. See alog 26328 and comments for symptoms and Rich's comment.
The PI AA was off and its OpAmp inputs were probably 'shorted' to ground due to the input protection diodes.
I am designing the input circuitry for the ITM PI Driver using the same input chip as that used on the ETM PI AA, so I will hedge our bets by including some input protection circuitry (current limit and clamp) to avoid this if that turns out to be the case.
In the lab, Fil reproduced the situation at EX by connecting a function generator to a coil driver test box (D1000931), using only the single-ended to differential converter on the board inside (D1000879) to drive the input of the unpowered PI bandpass, which was daisy-chained to the powered AA board.
Things looked OK until the PI input reached about ±1 V differential (that's +500 mV on the positive leg and -500 mV on the negative leg); anything larger than that and the voltage started to be pulled down. It looked like a diode and a small resistor in series to me. As soon as the PI bandpass was powered on, everything went back to normal.
Daniel is correct. The chips used on the input to the PI filters have internal input protection diodes that will (up to the limit of their current handling capacity, which is not much over 10mA or so) clamp the voltage from the QPD amplifier to something around a volt. This is not a problem if the PI BPF is powered, which is the normal state of the system. This event prompted a redesign of the differential input to the ITM ESD Driver to avoid this in the future. Another case of incremental learning.
I have updated the documentation in the DCC here,
https://dcc.ligo.org/LIGO-T1500502
about hoft generation for ER8/O1 to include information about C02 hoft.
In particular, these caveats are included:
* For the exact versions of filter files, code, and command lines used to generate C01 and C02 hoft see: https://wiki.ligo.org/Calibration/GDSCalibrationConfigurationsO1
* Users should check with their working group about which vetoes and which veto-definer files to use with each type of data before producing final results.
Michael, Krishna, Hugh, Jeff
This morning, we took out the piezo stacks from under the BRS-2 platform. We then turned on the ion pump. The current initially went up to 20 mA and slowly dropped down to ~300 microamps in ~1 hr. We then disconnected the pump station. When we checked it later, after about 4 hrs, the pump current was ~200 microamps (P ~ 1e-6 torr) and steadily dropping.
Michael and I then hooked up the rest of the electronics, wrapped the vacuum can in several sheets of foam, and finally set up the thick-foam-box around the instrument.
Hugh had set up the EY_GND medm screen showing the BRS-2 data coming into the ISI front ends (similar to EX_GND). After correcting a cable connection mistake, we are now getting the BRS-2 angle data into CDS. Currently we are getting the following channels into the ISI AA Chassis:
ADC0, channel 27: Raw Tilt Signal : Calibration - to be worked out soon. This is a high-passed value of the raw angle measured by the autocollimator, high pass: 2 pole at ~0.5 mHz.
ADC0, channel 28: Drift Signal : Calibration: 58.33 nrad/ct. This is a scaled down version of the raw angle signal.
ADC0, channel 29: Ref Signal : Calibration: - to be worked out soon. This is the angle of the reference mirror. It is useful as a measure of the autocollimator noise. Unlike BRS-1, this signal has a higher noise floor and is useful only at very low frequencies, below ~ 5 mHz.
ADC0, channel 30: Status Signal : This is currently zero. The plan is to use this as an indication of the health of BRS-2.
The Tilt channel is currently noisy and we will investigate further tomorrow.
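As a small illustration of the calibrations above, here is a minimal sketch (assuming scipy, an illustrative 16 Hz readback rate, and the numbers quoted above) that converts the Drift channel to radians and applies a 2-pole ~0.5 mHz high-pass like the one already applied to the raw Tilt signal; it is a stand-in, not the CDS code.

# Sketch only: channel scaling and filtering as described above.
import numpy as np
from scipy import signal

FS = 16.0                # assumed readback sample rate [Hz]
CAL_DRIFT = 58.33e-9     # Drift signal (ADC0 ch 28) calibration [rad/ct]

def drift_counts_to_rad(counts):
    # Convert the Drift channel from counts to radians.
    return np.asarray(counts, dtype=float) * CAL_DRIFT

def tilt_highpass(counts, f_hp=0.5e-3):
    # 2-pole Butterworth high-pass at ~0.5 mHz, standing in for the
    # filtering already applied to the raw Tilt signal (ADC0 ch 27).
    b, a = signal.butter(2, f_hp, btype="highpass", fs=FS)
    return signal.lfilter(b, a, np.asarray(counts, dtype=float))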
Introducing the new LASER_PWR Guardian node! Use it to change the power by selecting one of the three requestable powers (2W, 10W, 22W) or open the "ALL" screen to select any 'POWER_#W'. I attached a shot of the graph if you're curious. This node is nominally managed by ISC_LOCK and then ALIGN_IFO during initial alignment (though this has created some usermsg issues that I am trying to work around, more on that later).
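For those curious what such a node can look like, below is a minimal sketch of a LASER_PWR-style Guardian module. The power-request and readback channel names and the 5% settle check are illustrative assumptions, not the installed node; ezca is provided in the module namespace by the Guardian runtime.

from guardian import GuardState

request = 'POWER_2W'
nominal = 'POWER_2W'

POWER_REQ = 'PSL-ROTATION_STAGE_POWER_REQUEST'   # hypothetical request channel
POWER_MON = 'IMC-PWR_IN_OUT16'                   # hypothetical readback channel

def power_state(watts):
    # Build a POWER_#W state that requests `watts` and waits to get there.
    class POWER(GuardState):
        def main(self):
            ezca[POWER_REQ] = watts
        def run(self):
            # done once the measured power is within 5% of the request
            return abs(ezca[POWER_MON] - watts) < 0.05 * watts
    POWER.__name__ = 'POWER_%dW' % watts
    return POWER

POWER_2W = power_state(2)
POWER_10W = power_state(10)
POWER_22W = power_state(22)

edges = [('POWER_2W', 'POWER_10W'),
         ('POWER_10W', 'POWER_22W'),
         ('POWER_22W', 'POWER_10W'),
         ('POWER_10W', 'POWER_2W')]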
I have also changed around some of the management for initial alignment. If the user wants to run initial alignment:
1. From READY in ISC_LOCK, go to manual and select INITIAL_ALIGNMENT
2. Wait for ALIGN_IFO to run through its DOWN state. This will temporarily take control of the ALS's, LASER_PWR, and IMC_LOCK nodes.
3. Run the green alignment from ALIGN_IFO instead of each of the ALS nodes. Offload the green WFS at the same time.
4. Continue as usual, during SRC the LASER_PWR node will bring the power up to 10W and then back down after.
5. When done with initial alignment, make sure to go to INIT on ISC_LOCK to take back control of the nodes. Then continue as usual.
I added the LASER_PWR, as well as the SR3_CAGE_SERVO nodes to the Guardian Overview screen (shot attached).
I also slightly changed the ISC_GUARDIANS medm to show who the managers are of each node and placed the new node on the bottom. (shot also attached).
As I mentioned above, there are some user messages that will pop up on ALIGN_IFO and ISC_LOCK about the nodes being stolen by one another. This isn't a big deal for ISC_LOCK to report this while we are in the middle of initial alignment, but it isn't good for ALIGN_IFO to be continuously notifying during normal locking. I have something that I want to try next opportunity I get, hopefully tomorrow. And if that doesn't work, Jamie seemed to have an idea of how to solve these types of situations that will be in the next Guardian release. Until then, the nodes will simply not be managed during initial alignment, just as before.
Quick update on Initial Alignment procedure.
After switching to the INITIAL_ALIGNMENT state in ISC_LOCK, the nodes usually used for IA will no longer be automatically managed. These will stay UNmanaged just like before all of this.
I tried to make a workaround for the management by redefining the list of nodes controlled by the NodeManager object. This was plan B, since I could not create a second object like I had originally hoped. Plan B did not work either, though. While it did manage to get rid of the user notifications, it would regularly get confused and say something like "Node X was in state A, now state B" even though it was the one that had set states A and B. It seems like we will just have to keep it this way until the next grd release, but since this is basically how it was before, it's not a big deal.
This is an update of the DCPD cross correlated spectra. This time I have added the one from Livingston which was integrated over 1266 hours using the O1 data. High noise durations (e.g. LLO 23453) are excluded from the integration.
The fig and mat files are attached.
Similar to previous entries, I put in the coating thermal noise and some oscillator AM to see how things look (plot 1). I also made a similar plot with higher thermal noise (by 1.4) and a 1/f^2 mystery noise to show what the limits are on these types of added noises (plot 2). Plot 3 shows the "uncorrelated noise budget", which required a 350 Hz DARM cavity pole, and leaves something unexplained below 100 Hz. There is something strange happening around 700 Hz, but maybe I have overestimated the oscillator AM.
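For context, a minimal sketch of how such a cross-correlated spectrum is typically formed: average the cross-spectral density of the two OMC DCPDs over many segments so that noise uncorrelated between the photodiodes (shot noise, electronics noise) averages away. The use of scipy and the segment length are assumptions, not the actual analysis code.

import numpy as np
from scipy.signal import csd, welch

def dcpd_cross_spectrum(a, b, fs, seg_sec=10):
    # Return frequency, |CSD(a,b)|, and the individual PSDs.
    nperseg = int(seg_sec * fs)
    f, Pab = csd(a, b, fs=fs, nperseg=nperseg)
    _, Paa = welch(a, fs=fs, nperseg=nperseg)
    _, Pbb = welch(b, fs=fs, nperseg=nperseg)
    # The averaged |CSD| converges toward the correlated (common) noise;
    # its statistical floor drops roughly as 1/sqrt(number of averages).
    return f, np.abs(Pab), Paa, Pbb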
TITLE: 03/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Good day for locking. Had the IFO as far as DC Readout by 16:30. Then some QPD and other work took me down. Since then, we have been back to locking consistently.
LOG:
15:16 Krishna and Michael to EY BRS work
15:41 Bubba driving inside of both arms checking tumbleweeds
16:05 Chandra pump cart work in diagonal
16:26 Fil to EX QPD work
16:27 Karen and Chris done in LVEA
16:40 John and Bubba to LVEA
16:58 John and Bubba out
17:42 Fil and Richard back
17:45 Fil and Ed to LVEA HAM3 cabling
18:08 Fil and Ed out
18:18 Ed to LVEA
18:50 Ed out
19:30 Fil to EX QPD
19:53 Fil done
19:54 Jeff B to both ends
20:11 DAQ restart
20:30 Jeff B back
20:36 Ed to LVEA
20:47 Bubba and Nicole to MY
(Tasked by Aidan)
We were curious how the HWS SLED power decreases over time, so we know when to swap them out. Below I've attached plots of HWSX and HWSY SLED power (corrected for gains) and input current. The start time is when we replaced the SLED (April 26, 2015 for YSLED and March 1, 2016 for XSLED). I only plotted the times when the SLEDs were turned on (displayed in number of days). XSLED shows deterioration of ~2% per day and YSLED shows deterioration of ~1% per day (taking 7 mW to be 100%; I don't think we are actually putting out 7 mW, but the trend looks reasonable). YSLED power decreased slightly faster when the input current was 94 mA.
XSLED data can be found here and YSLED data can be found here. The column goes Time|Power (mW)|Current(mA). The script to fetch and plot data can be found in the same directory.
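For reference, a minimal sketch of the trend fit described above, assuming whitespace-delimited Time/Power/Current columns, the 7 mW = 100% normalization quoted above, and a placeholder file name (this is not the checked-in script):

import numpy as np

def sled_decay_percent_per_day(path, p_full_mw=7.0):
    # Columns: time since turn-on [days], power [mW], input current [mA]
    t, p, i = np.loadtxt(path, unpack=True)
    pct = 100.0 * p / p_full_mw          # normalize to the nominal 7 mW
    slope = np.polyfit(t, pct, 1)[0]     # linear fit; slope in %/day
    return slope

# e.g. sled_decay_percent_per_day('xsled_power.txt')  # expect roughly -2 %/day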
The fast decay in the beginning is a known problem.
Carlos, Dave, Aidan:
h1tcsey was upgraded from U10 to U14. This week Aidan and I were able to install all the packages needed for HWS operation. We have handed the system over to Aidan and the TCS group for commissioning of this system.
Install instructions are in the wiki:
https://lhocds.ligo-wa.caltech.edu/wiki/HWSServerInstall
A busy maintenance day.
ISI-HPI EY BRS model change WP5798
Hugh, Jeff:
The h1isietmy and h1hpietmy models were changed to use the newly installed BRS in EY, same as what was done at EX.
SEI End Station SUSPOINT and GND-STS transmission to OAF WP5799
Hugh, Jeff, Joe B, Dave:
The suspoint and ground STS channels were transmitted from the end station ISI models to the corner OAF model using an additional RFM channel per arm. The two channels were MUXed into a single RFM channel. The path is: both channels are sent from h1isietm[x,y] to h1peme[x,y] via two Dolphin channels. The two channels are filtered and then mux'ed inside h1peme[x,y] and sent out via a single RFM channel. The OAF model demux'es the RFM channel into two channels, assuming a single cycle delay in getting the data.
SEI HAM SUSPOINT addition WP5796
Hugh, Jeff:
The suspoint code was added to all the HAM ISI models.
SUSAUX upgrades WP5804
Jeff, Betsy:
Latest susaux model changes made.
Guardian PSL node added WP5797
Jamie, TJ:
new node added. Still needs to be added to DAQ?
NAT router OS upgrade WP5800
Ryan, Carlos:
Upgraded the lhocds NAT router to the latest version. Discovered some issues, so the old system was reverted. Will investigate offline.
Machine reboots for security patching
Carlos:
machines which required rebooting for OS patching were rebooted.
DAQ frame writer instability, move wiper from writer to solaris server
Dave, Jim:
On Monday we moved the h1fw1 wiper from h1fw1 to h1ldasgw1, running hourly at 10 minutes past the hour. Tuesday we did the same for h1fw0, running on the hour.
Accidental restarts
cast of many:
We had two accidental restarts.
Around 9am the timing slave on the h1lsc0 IO Chassis was powered down. Initially we restarted all the models, but later in the day Evan reported bad ADC channels, so we then did a full computer + IO Chassis power cycle in the afternoon.
Around 4pm the h1psl0 IO Chassis was accidentally glitched. We did a full computer+IOChassis power cycle to recover following our earlier LSC lesson learnt.
EX Beckhoff vacuum gauges moved from slow-controls to vac-controls
Richard, Dave, Patrick:
Patrick found the problem when we tried this last week, so this week the X4,5,6 BPG vacuum gauges were moved from their temporary slow-controls location to the final beckhoff VAC controls at EX.
The channel names were changed in the process. DAQ minute trends were changed to preserve old trend data (older archives still need moving).
DAQ Restart
Jim, Dave:
The DAQ was restarted several times over the day to support the above changes. We had two bad starts:
On one start all front ends resynced except for h1susaush56, which required a start_streamers; that fixed it.
On one start many front ends did not resync and required a start_streamers; during this, h1psl0 went out of sync and repeated start_streamers did not fix it. We eventually did a second DAQ restart and all was good.
HAM 8: 8 mA @ IP and 3.8e-6 Torr at turbo
HAM 9: 10 mA (no more red light) and 2.3e-6 Torr at turbo
HAM 11: 10 mA+ (still with red light) and 4e-6 Torr at turbo (pressure went up)
This is a belated log. Some people suggested studying correlation between the cross correlated spectrum and DARM residual. So I looked into that.
I conclude that there is no clear correlation between the noise floor (or band limited rms) of the cross spectrum around 153 Hz and DARM residual.
[DARM residual and band limited rms]
Here is a plot showing how they evolved as a function of time throughout O1.
As seen in the plot, I do not see a clear correlation between the two data sets. Note that the band limited rms is computed over a frequency range of [150, 156] Hz and is artificially scaled and offset to make the plot more readable.
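For clarity, a minimal sketch of the band-limited rms used here (the segment length and the use of scipy are assumptions, not the actual analysis code):

import numpy as np
from scipy.signal import welch

def blrms(x, fs, f_lo=150.0, f_hi=156.0, seg_sec=4):
    # rms of x in the [f_lo, f_hi] band, from an averaged PSD
    f, pxx = welch(x, fs=fs, nperseg=int(seg_sec * fs))
    band = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(np.trapz(pxx[band], f[band]))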
[DARM residual is a strong function of seismicity and ISI blend configurations]
Looking at the plot shown above, one can notice that there are some periods where the residual is small and steady, and the others are large by a factor of a few. This seems to be related to seismicity and the ISI blend configurations. Here I give one example study about it.
I looked into one particular interesting time where the EX ISI blend configuration was switched from one to another without losing lock. This is Dec. 26 2015 7:00-ish UTC (24483). The zoomed version of the above plot is shown below.
The lock stretch started at around 4:00 UTC of Dec 26 and the blend configuration was switched at around 7:00 UTC without losing lock. When the EX ISI blend was switched, the DARM residual increased by a factor of two or so. This is simply because of an increase of the low frequency motion. I made two spectra of the DARM residual, one for the time before the blend was switched and the other for the time right after the blend was switched.
From the plot, it is clear that the low frequency motion below 0.4 Hz increased after the switch while the rest of the frequency content remained unchanged. So this tells us that the fluctuation of the DARM residual is a strong function of seismicity and ISI blend configurations. Although I have not looked into other days thoroughly, I am guessing that the fluctuation we see in the DARM residual is just a measure of a combination of seismicity and the ISIs' blend configurations. We know that the blend configurations were switched many times during O1 in order to stably lock the interferometer against winds and the microseism.
Jenne, Hang, Sheila, Keita, Evan
Tonight it seems like we made some progress, although we haven't powered up yet.
The first screenshot attached shows the symptom of the X arm soft loop not working: as we power up, the optical levers move, and the loops work to bring the QPDs back to their starting points but drive the optical levers in the wrong direction, causing the green to become misaligned. The second screenshot shows the individual segments of the X arm QPDs during this time. The third screenshot shows a power up (to 20 Watts) towards the end of O1, with no evidence of saturation, despite almost twice the power on each segment. The last screenshot shows the noise of each segment; the bottom panel is before reducing the whitening gain, the top panel is after (the A2L dither lines were on; that is the forest of lines around 20 Hz). Although it seems like we have more sensible signals coming from the QPDs with the lower whitening gain, something still doesn't quite add up when you compare the second and third screenshots.
I re-did the spot centering on the ETMs using the ADS procedure described above, but after we reduced the analog gain of all the transmon QPDs. I was able to sit stably at 12W, and lost lock at 15W from our dear friend the ~0.4Hz oscillation. Note that here, as earlier, we moved around and found the offsets we wanted while in the X and Y arm bases, rather than common and differential. After letting the alignments offload to the top stages of the quads, the loops were turned off again, and reverted to common and differential bases and the individual QPD pitch and yaw offsets were set, before re-engaging the loops and trying to power up.
This time around, CHARD was high gain with boosts, no resonant gain (resG) in the DARM length loop, and no notches in the SOFT loops (all these are nominal settings, undoing some of our tests tonight), but also no oplevs for the ITM pitch damping (not nominal, although we want it to be nominal in the future). Also, the A2L gains in the L2 stages of the test masses are still set to zero for now.
I tried powering up one more time, with the ITM oplevs re-engaged, and again I saw no difference. I was still stable at 12W, and unstable at 15W. I'm leaving the ITM oplevs off.
There's clearly something wrong with TMS X QPD A, segment 1, and maybe segment 2 as well. The second image shows them flat-lining at just under 3000 counts, which is way below the hardware saturation levels when things are working right. I think you're going to have to look into the transimpedance amp and whitening/vga chassis for this unit.
Looks like the PI interface chassis that parallels the signal path was causing problems. We have removed the chassis and plugged the QPDs directly into the ISC chassis again. We will look into the PI interface to see what might be happening.
WP 5799 FR 4694 (PKA II 1193)
Continuing the offload from the SUS models of calculating the SUSPOINT motion from the ISI GS13 Cartesian motion, this change has the ISI doing the calculation. We'll stop the SUS doing the calcs as soon as we confirm that DetChar is happy with the ISI numbers.
All the HAM models were rebuilt and installed and the front ends restarted. The safe and OBSERVE snaps were checked and updated (lots of new channels).
No issues with HAM ISIs reisolating.
Medm editing is ongoing for the SUSPOINT. Each medm is custom for the number of suspensions. HAM6 and HAM2 still need to be built.
Along these lines, the BSC models were adjusted to put the SUSPOINT on IPC, and at END Y the BRS was added, as at ETMX.
SVN COMMITS:
hugh.radkins@opsws1:models 0$ svn commit -m "HAM SUSPOINT calcs moved to ISI and BSC SUSPOINT calcs added to IPC and BRS at ETMY"
Sending models/h1isibs.mdl
Sending models/h1isietmx.mdl
Sending models/h1isietmy.mdl
Sending models/h1isiham2.mdl
Sending models/h1isiham3.mdl
Sending models/h1isiham4.mdl
Sending models/h1isiham5.mdl
Sending models/h1isiham6.mdl
Sending models/h1isiitmx.mdl
Sending models/h1isiitmy.mdl
Transmitting file data ..........
Committed revision 12961.
hugh.radkins@opsws1:models 0$ pwd
/opt/rtcds/userapps/release/isi/h1/models
hugh.radkins@opsws1:hamisi 0$ svn commit -m "Added suspension OPTICs for medm generation"
Sending hamisi/H1_isiham2_overview_macro.txt
Sending hamisi/H1_isiham3_overview_macro.txt
Sending hamisi/H1_isiham4_overview_macro.txt
Sending hamisi/H1_isiham5_overview_macro.txt
Sending hamisi/H1_isiham6_overview_macro.txt
Transmitting file data .....
Committed revision 12962.
hugh.radkins@opsws1:hamisi 0$ pwd
/opt/rtcds/userapps/release/isi/h1/medm/hamisi
hugh.radkins@opsws1:burtfiles 0$ svn commit -m "Update snaps for SUSPOINT & commissioning"
Sending burtfiles/h1isibs_down.snap
Sending burtfiles/h1isibs_safe.snap
Sending burtfiles/h1isietmx_safe.snap
Sending burtfiles/h1isietmy_OBSERVE.snap
Sending burtfiles/h1isietmy_safe.snap
Sending burtfiles/h1isiham2_safe.snap
Sending burtfiles/h1isiham3_safe.snap
Sending burtfiles/h1isiham4_OBSERVE.snap
Sending burtfiles/h1isiham4_safe.snap
Sending burtfiles/h1isiham5_OBSERVE.snap
Sending burtfiles/h1isiham5_safe.snap
Sending burtfiles/h1isiham6_safe.snap
Sending burtfiles/h1isiitmx_safe.snap
Sending burtfiles/h1isiitmy_OBSERVE.snap
Sending burtfiles/h1isiitmy_safe.snap
Transmitting file data ...............
Committed revision 12963.
hugh.radkins@opsws1:burtfiles 0$ pwd
/opt/rtcds/userapps/release/isi/h1/burtfiles
I'll commit the medm adls once I'm finished with them.
A full description of the changes committed above can be found in G1600795.
J. Kissel, B. Weaver, H. Radkins, D. Barker, J. Batch
Here's a list of all of the upgrades we were involved in today. There will be more details of the upgrade, more debugging, associated MEDM screen changes, svn commits, and other clean up tomorrow as we continue to explore what we've installed and debug. Bear with us, and thanks for your patience.
1) Fixed the UIM coil driver path's automatic compensation bug by updating CD_STATE_MACHINE.c. See Int. Issue 1178.
2) Installed the ISI GS13s' projection to the SUSPOINT Euler basis into the SEI models. See ECR E1600028.
3) Sent the Euler-basis longitudinal DOF for each SUS involved in a cavity over various IPC (PCIE and RFM) and models (ISI, end-station PEM, H1OAF) to be collected in OAF. See ECR E1600028.
8) Removed the excess RFM channels from the end-station ALS models (notably *not* the excess channels in the ISC models). This made room for 3). See LHO aLOG 25216.
4) Added various rotational sensor correction paths to the BSC ISIs (GND BRS to ST1, and ST1 to ST2). (Prototyping this stuff, no ECRs just yet.)
5) Removed the L4C sensor correction path. See SEI aLOG 666.
6) Added new infrastructure for the EY BRS. (Mostly copied from EX, covered by ECR E1500246.)
7) Reverted the BSC SUS's coil driver monitors to store the NOISEMON in the frames (and pushed the FASTIMON to the commissioning frames). Also, we put filter modules in front of the NOISEMONs (like was done for the FASTIMONs the last time we touched this) in case we ever wish to calibrate them. See ECR E1600033 and LHO aLOG 26313.
Because we haven't built MEDM screens and actually *used* any of these paths yet, there's still potential for bugs and we haven't explored and/or fixed all of the collateral damage. Stay tuned as we continue to work on all of these updates tomorrow.
J. Kissel
I've documented all of the front-end model changes that were necessary for items 2, 3, 5, 6, and 7 (i.e. all of the SEI model changes). Check out G1600795. As indicated in LHO aLOG 26321, all of the simulink model changes have been committed to the repository.
J. Kissel, H. Radkins
Here are the updated MEDM screens that correspond to the above SEI model updates. I'll work on the cavity-basis OAF MEDM screen tomorrow. The following screens were changed and/or added:
/opt/rtcds/userapps/release/isi/common/medm/bscisi
A         ISI_CUST_CHAMBER_ST1_ROT_SENSCOR_FIR_ALL.adl
A         ISI_CUST_CHAMBER_ST1_ROT_SENSCOR_IIRHP_ALL.adl
A         ISI_CUST_CHAMBER_ST1_ROT_SENSCOR_MATCH_ALL.adl
A         ISI_CUST_CHAMBER_ST2_ROT_SENSCOR_FIR_ALL.adl
A         ISI_CUST_CHAMBER_ST2_ROT_SENSCOR_IIRHP_ALL.adl
A         ISI_CUST_CHAMBER_ST2_ROT_SENSCOR_MATCH_ALL.adl
Sending   ISI_CUST_CHAMBER_ST1_SENSCOR_OVERVIEW.adl
Sending   ISI_CUST_CHAMBER_ST2_SENSCOR_OVERVIEW.adl
Sending   ISI_CUST_CHAMBER_OVERVIEW.adl
18VDC power cables are in place and terminated in the corner station. The BS oplev power cable is not yet installed.
Fil and I ran and terminated the BS OpLev pwr cable today.
We did a little bit of ASC work today.
First, while Kiwamu was running a TCS test, I started a script to automate phasing of the WFS. It uses the lockin: it first runs a servo to set the phase of the lockin demod, then servos to minimize some signal. We have it set up right now to phase the REFL WFS to minimize the PR2 PIT signal in Q for both REFL 9 and 45, and to minimize the SRM PIT signal in AS 36 Q. There is some code for exciting DHARD, but we need to test amplitudes, phases and gains for this. The current version of the script does its job, although it is painfully slow, and is checked into the svn under asc/h1/scripts. The resulting phases are in the attached screenshot.
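For the record, the servo idea in the script is roughly the following; this is a sketch with placeholder channel names and gains, not the checked-in script.

import time
from ezca import Ezca

ezca = Ezca()   # picks up the IFO prefix from the environment

def servo_phase(err_chan, phase_chan, gain=-0.1, tol=1e-3, nmax=200):
    # Integrator servo: step the demod phase until err_chan is nulled.
    for _ in range(nmax):
        err = ezca[err_chan]
        if abs(err) < tol:
            return True
        ezca[phase_chan] = ezca[phase_chan] + gain * err
        time.sleep(1.0)      # let the lockin output settle
    return False

# e.g. null PR2 PIT in REFL9 Q by adjusting the REFL_A RF9 demod phase
# (both channel names here are hypothetical):
# servo_phase('ASC-REFL_A_RF9_Q_PIT_OUT16', 'ASC-REFL_A_RF9_PHASE_R')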
We saw that the instability in CHARD pit was because somehow the LP9 got turned on again; this is now off and CHARD seems fine.
We tried powering up, and were fine at 10 Watts. We had an instability in PRC1 and PRC2 yaw at 13 Watts. I reduced the Q on the complex zeros at 1.1 Hz for PRC2 Y, which gives us slightly better phase and gain near the point where we seem to be unstable. Attached is a screenshot of the OLG measured with white noise at both 2 Watts and 10 Watts; we might need to do a swept sine to get a good measurement around 1 Hz.
After about 10 minutes at 12 Watts, we had the usual fluctuations in the recycling gain. So the high bandwidth PRC2 loops haven't totally solved the problem.
For the record, these are angle settings that give approximately good CO2 powers tonight, and the powers to aim for from Kiwamu's note:
         | X power (W) | X angle | Y power (W) | Y angle |
unlocked | 0.5         | 76      | 0.23        | 82      |
10W      |             | 78      |             | 79      |
20W      | 0.3         |         | 0.1         |         |
We have twice had the rotation stage for CO2 Y go to an angle that was wrong by a lot (sending a few watts to the test mass for a few seconds).
I'm leaving the IFO locked at 10Watts.
Sheila,
Do you know how much power was transmitted at CO2 Y to any precision? Can you say what the upper limit was?
thanks
The first time H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT read back 3.2 Watts for about 20 seconds.
The second time H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT read 3 Watts for about 10 seconds.
This morning I looked at some of the data from friday night when we had our usual CSOFT instability. (16-03-26 7:52:56 UTC)
First, I used the moment of inertia here, and the calibration of the arm circulating power from the transmon QPDs here, to estimate whether it is reasonable that radiation pressure due to the fluctuations in arm circulating power (on the order of 2.5% fluctuations on 35 kW of circulating power) could cause the angular motion that we see (0.1-0.4 urad pp on the test masses). It is not: the miscentering that would be required is far too large.
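For completeness, here is the back-of-envelope version of that estimate. The moment of inertia and pitch resonance below are illustrative stand-ins for the linked numbers (and this is a static approximation), so treat the result as order-of-magnitude only.

# How far off-center would the beam need to be for a 2.5% fluctuation on
# 35 kW of circulating power to tilt the test mass by ~0.2 urad?
c     = 3.0e8          # speed of light [m/s]
dP    = 0.025 * 35e3   # power fluctuation [W]
theta = 0.2e-6         # observed angular motion, mid-range [rad]
I     = 0.42           # assumed pitch moment of inertia [kg m^2]
f0    = 0.5            # assumed pitch resonance [Hz]

kappa = I * (2 * 3.14159 * f0) ** 2        # torsional stiffness [N m / rad]
tau   = kappa * theta                      # torque needed for the observed angle
d     = tau * c / (2 * dP)                 # required beam miscentering [m]
print('required miscentering ~ %.2f m' % d)   # ~0.1 m -- far too large to be real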
I never attached the screenshot of the PRC2 Y OLG to the original alog. Here it is.
This is a quick summary of today's TCS joy. I ran another differential lensing test today. I went to the other side of the differential lensing (CO2X goes higher power).
The highest cavity pole was 352 Hz in this test.
This time, I also took many measurements of the intensity and frequency noise couplings periodically throughout the test using Evan's automated measurement script (20470). I will analyze and post them later. The second attachment is a trend of some relevant channels.
This is a report on the intensity noise coupling measurement to DARM during the same TCS testing period.
The below is an animated plot showing how the intensity noise coupling evolved as a function of time during the test. The transfer function was measured from ISS-SECONDLOOP_SUM14_REL to CAL-DELTAL_EXTERNAL. DELTAL_EXTERNAL is unwhitened.
As shown in the above animated plot, the intensity noise coupling increased at the beginning and then went back down to where it was. The overall spectral shape almost did not change, but the scaling factor changed roughly by a factor of two between the minimum and maximum. The magnitude of the coupling rises in proportion to frequency -- if I plotted it as a coupling to the DCPDs, it would be almost flat because the cavity pole correction would be taken out.
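To illustrate the remark about the cavity pole, here is a minimal sketch: referring the measured DELTAL coupling (which rises roughly as f) back to the DCPDs multiplies it by a single-pole sensing response, leaving it roughly flat above the pole. The ~350 Hz pole is taken from the lensing test above; the overall scale here is arbitrary.

import numpy as np

f         = np.logspace(1, 4, 400)           # 10 Hz - 10 kHz
f_pole    = 350.0                            # DARM cavity pole [Hz]
tf_deltal = f / 2500.0                       # measured coupling shape, ~f (arb. units)
pole      = 1.0 / (1.0 + 1j * f / f_pole)    # single-pole sensing response
tf_dcpd   = np.abs(tf_deltal * pole)         # DCPD-referred coupling, ~flat above the pole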
Here is another plot showing the evolution of coupling as a function of time.
The upper plot shows the transfer coefficient at 2500 Hz (in arbitrary units) as a function of time. The bottom plot shows the CO2 lensing from the same period. The transfer coefficient shows a clear correlation with the defocus of the ITMs. I cannot say for sure whether the differential lensing was the dominant cause of this effect, because I also had a few uD of defocus varying in the same fashion.
Here is the same analysis for the frequency noise coupling to DARM. The variation in the coupling is more drastic than that of intensity noise.
Below is the same type of animated plot. The transfer function was measured from REFLA_RF9_I_ERR to CAL-DELTAL_EXTERNAL. Note that DELTAL_EXTERNAL is properly unwhitened.
It seems that the coupling has two different mechanisms, one for the coupling below 300 Hz and another above. As the CO2 setting changed, the high frequency part increased at the beginning and decreased later while keeping the same spectral shape. On the other hand, the low frequency part varied in the opposite fashion; it decreased as the high frequency part increased. The slope of the high frequency coupling seems to be almost proportional to f. If we convert it into [OMC DCPDs [A] / laser frequency [Hz]], it will be more like 1/f due to the cavity pole and REFL's transfer function against the laser frequency.
Here is another plot showing the evolution of the transfer coefficient at 2500 Hz. The coupling coefficient changed by a factor of 15 at this frequency. This is much more drastic than that of the intensity noise coupling which varied by a factor of two or so.
A preliminary conclusion:
With the 2 W PSL, the DARM cavity pole prefers a high CO2 differential lensing while the laser noise couplings prefer a low differential lensing.
This is a belated analysis on the intensity noise coupling. The punch lines are:
[Noise coupling v.s. differential lensing]
As seen in the plot above, the coupling coefficient shows a linear relation to the differential lensing. This likely indicates that the differential lensing is not optimized to minimize the intensity noise coupling. I should note that this measurement used the badly clipped CO2Y beam (27433), which was later fixed in May 2016; a smaller differential lensing means less power in CO2Y than CO2X.
[Intensity noise coupling]
Here is a plot showing the intensity noise coupling for the various TCS settings. This time the coupling coefficient is converted to OMC power [W] / input RIN. The dashed line in the magnitude plot represents the expected value calculated as
(coupling) = 2 * J1^2 * Pin * Tomc * Tifo [W/RIN] = 5.5e-6 [W/RIN],
where Pin = 2 W is the PSL input power, Tomc = 61.4 ppm is the OMC transmission for the 45 MHz RF sidebands, and Tifo is the transmission of the interferometer for the 45 MHz RF sidebands, which I have assumed to be 1 for this quick calculation. As seen in the plot, the expected noise level (limited by the 45 MHz RF sidebands) is lower than the measurement by roughly a factor of 10. These two plots support the hypothesis that we are far from the optimum point.
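As a numeric check of this expression: the 45 MHz modulation depth is not quoted here, so Gamma_45 ~ 0.3 rad below is an assumed illustrative value, and with it the formula lands near the quoted 5.5e-6 W/RIN.

from scipy.special import j1

gamma_45 = 0.3        # assumed 45 MHz modulation depth [rad]
P_in     = 2.0        # PSL input power [W]
T_omc    = 61.4e-6    # OMC transmission for the 45 MHz sidebands
T_ifo    = 1.0        # assumed IFO transmission for the 45 MHz sidebands

coupling = 2 * j1(gamma_45) ** 2 * P_in * T_omc * T_ifo
print('expected coupling ~ %.2g W/RIN' % coupling)   # ~5e-6 W/RIN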
Here are the beamsplitter angles as a function of differential lensing. (There are some data dropouts in the trends).
This seems to indicate that a differential lens change of a few tens of microdiopters causes the beamsplitter yaw to change by a few hundreds of nanoradians, presumably via changes in the 36 MHz angular plant. In pitch it is less clear whether we are seeing angular control effects or simply drift over time.