Due to PCAL and OPLEV work at End Y this morning, I was only able to take the SUS ETMX charge measurements. Attached are plots of both ETMX and ETMY, since they are kind of like salt and pepper shakers and always travel together; however, only the ETMX plots have new points.
We have not made any sign changes in the last 20 days, yet there is a small turnover in the ETMX long-term trend apparent in the data points from today. Not sure why this is...
Sheila's bias change in full lock (written into Guardian) started on ~Nov 4th (alog 31172) and is the likely cause of the charge turnover this week. This automated bias change takes the nominal +1.000 gain and switches it to -1.000 (see attached trends of the last 20 days).
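For illustration only, an automated flip like this in Guardian amounts to a one-line gain change; below is a minimal sketch with a hypothetical channel and state name (the actual code is in alog 31172):

```python
# Minimal Guardian-style sketch of an automated bias sign flip.
# The channel and state names are hypothetical, for illustration only;
# the real change is documented in alog 31172.
from guardian import GuardState

class FLIP_ETMX_BIAS(GuardState):
    def main(self):
        # ezca is provided by the Guardian runtime environment
        ezca['SUS-ETMX_L3_LOCK_BIAS_GAIN'] = -1.000  # nominal is +1.000
        return True
```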
For the record, we have not added any water to the TCSY chiller for a week, and the indicator only shows a small drop in the level during this week. Looks like we finally have the system topped off.
In total, we have added 15.6 L since Sept 30, 2016. Just prior to the start of these cumulative additions, there was a 6 L fill (alog 30041) at the time of the dry-chiller event. So, we've added a total of 21.6 L to the full circulatory system, noted for next time!
J. Kissel I've found several hardware injection channels AGAIN being monitored in the SDF system. Why do these channels keep getting monitored?? This happened two weeks ago (LHO aLOG 31141), after we originally stopped monitoring them several moons ago (LHO aLOG 21154).
At first glance, the EXTTRIG MEDM screen was apparently not working correctly. The last query time was updating correctly, but the last event was from Nov 6th, and there have been many events since that time.
Upon investigation, it turns out that the system is behaving correctly. Here is the sequence:
The EPICS channels exttrig uses are served by the h1calcs model. This runs on h1oaf0, which has been restarted many times over the past week. Each time the h1calcs model is restarted, two things happen:
For the record, here is the startup sequence of the code on h1fescript0:
The process is controlled by monit; its configuration file is /etc/monit/conf.d/monit_ext_alert. It monitors a process whose PID is stored in the file /var/log/ext_alert/ext_alert.pid.
If the process needs to be started/restarted, monit executes (as root) the file /etc/init.d/ext_alert. This in turn, using start-stop-daemon, runs the script /home/exttrig/run_ext_alert.sh, which runs the script /opt/rtcds/userapps/release/cal/common/scripts/ext_alert.py with the appropriate arguments.
We are investigating why this morning's Tuesday test events were not recorded. These are T-events in GraceDB with a label of H1OPS. The run_ext_alert.sh script was not calling ext_alert.py with the '--test' argument needed to query for test events. We turned on the GraceDB query of test events in /home/exttrig/run_ext_alert.sh and got this morning's test event on startup. Duncan pointed out that there are many non-ops test events per day, so this would generate many false positives.
On Duncan's recommendation we turned off the reporting of test events.
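For illustration, the '--test' gating presumably amounts to something like the following sketch (the argument handling and query string are assumptions, not the actual ext_alert.py code):

```python
# Hypothetical sketch of how ext_alert.py might gate test-event queries;
# the actual script is /opt/rtcds/userapps/release/cal/common/scripts/ext_alert.py.
import argparse

parser = argparse.ArgumentParser(description="Poll GraceDB for external triggers")
parser.add_argument("--test", action="store_true",
                    help="also query GraceDB for test (T-prefixed) events")
args = parser.parse_args()

query = "External"        # hypothetical base query
if args.test:
    query += " OR Test"   # pick up T-events such as the H1OPS-labeled ones
```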
By the way, it looks like today's SNEWS external test event at 09:00 PST did not get posted to GraceDB?
Today I looked at the output voltage from REFL during H1 locks at 22W, 31W, and 50W input power, and calculated how the output voltage increased with input power to H1.
[Table: REFL output voltage at 22W, 31W, and 50W input power; values not recovered.]
In the above chart the incident power at 50W is calculated from the measured 22W power budget numbers.
I used the 22W incident power to get a conversion from mW to counts on REFL_DC_INMON, and used that conversion to calculate the incident power at 31W and 50W (see the sketch after the corrected table below).
At 50W input power REFL sees 82mW incident power.
Above is incorrect (unlocked numbers, not locked).
Numbers below are calculated, based on the unlocked transmission through the HWP and CWP in the IMC MCR path.
| input power (measured, W) | REFL DC (measured, counts) | incident power (calculated, mW) | mW/count (calculated) |
| --- | --- | --- | --- |
| 22 | 520 | 8 | 0.0154 |
| 31 | 1137 | 17 | |
| 50 | 1586 | 24 | |
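As a sanity check, the conversion and the calculated incident powers can be reproduced from the tabulated numbers:

```python
# Reproducing the mW/count conversion from the 22 W row above.
counts_22w = 520        # REFL_DC_INMON at 22 W input
incident_22w_mw = 8.0   # incident power at 22 W from the measured power budget

mw_per_count = incident_22w_mw / counts_22w  # ~0.0154 mW/count

for counts in (1137, 1586):                  # 31 W and 50 W lock stretches
    print(round(counts * mw_per_count, 1))   # ~17.5 and ~24.4 mW
```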
The IMC WFS were not well centered, in part because we recently moved the uncontrolled DOF and in part because they weren't well centered before that.
I picomotored them during maintenance this morning.
J. Kissel, S. Aston WP #6318 FRS #6511 Stuart had pointed out a bug in the channel ordering for the monitor signals of the new ITM LV ESD drivers (see LHO aLOG 30861). In response, I proposed a change to our common library part, because otherwise we'd be creating a two-wrongs-make-a-right situation. He graciously offered to make that fix first, such that I merely needed to svn up and re-arrange the ADC inputs. He has done so -- see LLO aLOG 29498 -- so I've made the update and corrected the ADC inputs. I attach a few screenshots for proof; in this case, I only attach the "after" shots. The fixed models have been compiled, installed, and restarted. This fix did not require a DAQ restart. This closes out the work permit and FRS ticket.
...as per WP6316. ≈ -1.6 µrad offset in YAW due to 'kick' when the positioner is turned on/off. Multiple attempts to offset made no difference.
I also centered the BS oplev, using a different picomotor controller from the EE shop (see LHO alog 31333 for details on BS oplev centering issues). This closes WP 6316.
I noticed that YAW had changed to ≈2.8 µrad after I returned to the corner. PCal work began immediately after I left the alignment. I noticed the change in position and re-centered after the PCal work was completed. During the PCal work, the PCal shutter was opened and closed so that I could observe any action on the alignment; there didn't seem to be any effect with the shutter in either of its positions. After a brief period of time, it seems that PIT has drifted to ≈ -1.8 µrad. This seems to be an inherent issue throughout most of the OpLev system.
h1oaf0 has now been stable for 22 hours following the one-stop cable-transceiver replacement (as suggested by Daniel).
When the oaf stopped driving the DAC, the h1iop model's proc status file showed a very large value for adcHoldTimeEverMax (in the 90s), while most systems showed this value around 17 µs.
Taking this value as a possible indicator of a failing PCI-bus extender transceiver, I have written a script to scan all the front end computers and report this value (a sketch of such a scan follows the table). It was run at 10:10 PST and the results are tabulated below.
Note that they are all in the 16-20 µs range, except for the h1suse[x,y] systems, which are in the 70s. The end station SUS machines are the newer type, and this is a known issue not related to possible one-stop fibers.
IOP model | adcHoldTimeEverMax (µs) |
--- | --- |
h1iopsush2a | 17 |
h1iopsush2b | 18 |
h1iopsush34 | 19 |
h1iopsush56 | 20 |
h1iopsusauxh34 | 18 |
h1iopsusauxh56 | 18 |
h1iopsusauxh2 | 18 |
h1iopsusauxb123 | 19 |
h1ioppsl0 | 17 |
h1iopsusex | 74 |
h1iopseiex | 21 |
h1iopiscex | 18 |
h1iopsusauxex | 20 |
h1iopsusey | 71 |
h1iopseiey | 20 |
h1iopiscey | 18 |
h1iopsusauxey | 19 |
h1iopoaf0 | 17 |
h1iopsusb123 | 17 |
h1iopseib1 | 18 |
h1iopseib2 | 18 |
h1iopseib3 | 21 |
h1ioplsc0 | 17 |
h1iopseih16 | 19 |
h1iopseih23 | 16 |
h1iopseih45 | 17 |
h1iopasc0 | 17 |
h1ioppemmx | 18 |
h1ioppemmy | 19 |
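A minimal sketch of the scan mentioned above, assuming the front ends expose adcHoldTimeEverMax in a /proc status file (the host list is abbreviated and the path is an assumption about the RTS layout, so treat it as illustrative only):

```python
# Illustrative front-end scan for adcHoldTimeEverMax (paths/hosts assumed).
import subprocess

HOSTS = ["h1sush2a", "h1susex", "h1susey", "h1oaf0"]  # abbreviated list

for host in HOSTS:
    # Assumes each model exposes adcHoldTimeEverMax in a /proc status file.
    result = subprocess.run(
        ["ssh", host, "grep adcHoldTimeEverMax /proc/*/status"],
        capture_output=True, text=True)
    print(host)
    print(result.stdout.strip())
```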
While the CP3 Dewar was being filled with the LLCV set to 21% open in manual mode, the exhaust pressure rose to 0.5 psi and the TCs read lower-than-normal temperatures, so I lowered the LLCV to 16% open, the setting we used after the last Dewar fill.
Per WP 6320, yesterday I opened the exhaust bypass valves on the cryopumps along the X-arm and on CP1. CP3 and CP4 at MY are already open; the only one left to open is CP7 at EY. These valves will remain open during normal operations as an added layer of safety against over-pressurization. LLO has been operating in this mode for some time.
WP 6319 Updated nds2_client software to version 0.13.1 for Ubuntu 12, Ubuntu 14, and Debian 8.
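For reference, a minimal fetch with the updated client's Python bindings might look like the sketch below (server, GPS times, and channel are illustrative, not from the original entry):

```python
# Minimal nds2-client usage sketch (server, times, and channel illustrative).
import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
bufs = conn.fetch(1163000000, 1163000016, ['H1:IMC-F_OUT_DQ'])
print(bufs[0].channel.name, bufs[0].data.mean())
```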
TITLE: 11/15 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
LOG:
10:32 Set SRC1_P and SRC1_Y gain to 0 per Sheila's recommendation. Reopened POP beam diverters to monitor POP90 signal. Successfully made it to NLN. LLO lost lock just as we were getting to NLN, so I am going to wait 30-45 minutes before making Kissel's measurements and going to Observe to see if things seem stable.
11:04 Running a2l_min_LHO.
11:09 PI mode 27 ringing up. Changed phase from 130 to 180 and gain from 3000 to 4000.
11:10 Running a2l_min_PR2.
11:15 Running a2l_min_PR3.
11:26 Closed POP beam diverters. Starting Kissel's PCAL2DARMTF measurement.
11:37 Finished PCAL2DARMTF. Saved as /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/Measurements/PCAL/2016_11_15_H1_PCAL2DARMTF_4to1200Hz_fasttemplate.xml.
11:38 Started Kissel's DARMOLGTF measurement.
11:51 Saved DARMOLGTF measurement as /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/Measurements/DARMOLGTF/2016_11_15_H1_DARM_OLGTF_4to1200Hz_fasttemplate.xml.
11:55 Restarted PCAL lines.
11:56 Set to Observing.
12:15 Out of Observing to damp PI mode 28. Changed phase from 60 to -60, no gain change.
12:23 Observing
12:32 Lockloss. From the error signal striptools, it appears that MICH_P, DHARD_P, and SRC1_P rang up over the course of 10 minutes prior to lockloss. Recall that I had set the SRC1 gains to 0 at the beginning of this lock stretch; perhaps the loop needed to be turned back on at some point during the lock, though it wasn't an issue for the first 2 hours or so. The HAM6 ISI WD tripped at lockloss.
14:29 NLN. Took SRC1 gains to 0 again since it seemed to work last time.
14:35 Observing.
14:52 PI mode 28 ringing up. Changed phase from -60 to 60. Forgot to go out of Observing to do so.
After a bit of a struggle to get to NLN, with the SRC1 loop turned off, we are back to Observing. Unfortunately, coincident with LHO coming to full lock, LLO lost lock.
TITLE: 11/15 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Travis
SHIFT SUMMARY:
A bit of a rough shift, with H1 making it to NLN but only staying locked on the order of ~25 min. I observed ASC signals growing (over a period of 4-5 min) just before it broke lock.
LOG:
Notes:
Evan, Sheila, Jenne
The overall message: looking at the REFL control and IMC control signals leads to different conclusions about our frequency noise, but we can slightly improve our DARM noise at high frequencies by engaging an additional boost in the IMC. The sensing noise should be large enough to see on the DBB, so it would be helpful to get the DBB running again.
IMC control signal suggests a lot of excess IMC sensing noise:
Yesterday afternoon, Evan and I looked again at the frequency noise as seen in the IMC control signal. The attached screenshot shows the IMCF spectrum with and without CARM locked.
We think that when the mode cleaner is locked the IMC control signal should be the sum of:
Once CARM is locked, it will be the sum of:
Between 100 Hz and 1 kHz, the noise stays the same with and without CARM locked, so this noise should be either laser noise or ref cav sensor noise imposed by the laser. The noise which goes away when we lock CARM should be IMC sensing noise or VCO noise. Since there isn't any frequency where locking CARM increases the noise in IMCF, this measurement gives us an upper limit on the REFL9 sensing noise.
We think that the IMC shot noise should be about 15 uHz/rtHz, the ref cav shot noise should be about 0.4 mHz/rtHz, and the shot noise on REFL9 should be about 5 uHz/rtHz at 1 kHz, increasing as f. The second attached png shows the measurements for IMCF with and without the mode cleaner locked, the estimated levels of shot noise we would expect to see there, and, in maroon, an estimate of where the IMC loop noise should appear when CARM is locked. If you believe that the light blue and maroon traces are IMC sensor noise and project them to Watts on REFL9, and then to DARM using the measurements posted here, I predict a noise in DARM of around 1-2e-21 m/rtHz at 100 Hz. There are several things that don't quite make sense: the projection doesn't agree with the measured REFL9 spectrum or with Evan's estimate of the REFL9 spectrum using the REFL control signal (and those two don't agree with each other either).
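For convenience, the quoted levels can be written down directly (a sketch; only the numbers above are assumed, with the REFL9 term scaling as f):

```python
# The shot-noise levels quoted above, in Hz/rtHz (REFL9 rises as f).
imc_shot = 15e-6       # IMC shot noise
refcav_shot = 0.4e-3   # reference cavity shot noise

def refl9_shot(f_hz):
    """REFL9 shot noise: 5 uHz/rtHz at 1 kHz, increasing as f."""
    return 5e-6 * (f_hz / 1e3)

print(refl9_shot(100.0))  # 5e-7 Hz/rtHz at 100 Hz
```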
The refl control signal tells a different story:
Tonight, Jenne and I tried engaging boosts in both the IMC and CARM. The third attached screenshot shows that the REFL control signal was reduced when we added a boost to the IMC, both in the peaks around 1 kHz and in the higher frequency noise. This means that at these frequencies the frequency noise is not limited by sensing noise from REFL or the IMC, which contradicts the conclusion above. Not surprisingly, it looks like the peak at 280 Hz is sensing noise from the IMC or REFL. In any case, we saw a small improvement in DARM at high frequencies by using the IMC boost, so we should probably make this a regular part of our locking sequence. Boosting the CARM loop (using the 40Hz/4kHz filter) didn't change anything in DARM.
Edit:
I've done a quick calibration of the REFL control signal by watching the IMC PDH signal at the point where the REFL AO path gets summed in. We have about 2.4 Vpp there (at 2 Watts, with 16 dB of gain). Using the IMC cavity pole of 8812 Hz, Alexa's quick PDH calibration in 7054, the -24 dB AO path gain, and 0.00061 V/count, REFL control has about 0.1413 Hz/count. I've added this to the front end filter for REFL control, so now we can plot the IMC and REFL control signals together. At 1 kHz, we expect a suppression of about 200 without the IMC boost on, so the noise at 1 kHz makes sense as laser frequency noise or ref cav sensing noise.
Daniel and I had another look at the calibration of REFL control, and sorted out some factors of 2.
Just to be clear: I measured the IMC PDH signal at OUT1 with 2 Watts into the IMC and the loop disengaged. We get 2.4 Vpp for the error signal, measured after 15 dB of IN1 gain. The attached matlab script has a simple model of the IMC loop, without taking into account the FSS gain, with a gain of 3.67 kHz/V at the point where the AO signal is added. However, when we transition to analog CARM, we reduce the fast gain by 6 dB and add 6 dB of gain to the IN1 gain slider, so in full lock the calibration is 1.86 kHz/V at the summation point. This will have to be updated if we change the IMC IN1 gain in full lock, other than scaling it for input power changes.
The calibration is now in the refl control filter:
anti-whitening: two zeros at 100 Hz, two poles at 10 Hz, DC gain of -6 dB. This previously had a DC gain of 0 dB, which was not correct because the input is differential.
cnts2Volts: 6.1e-4
-24dB of AO gain (this will have to be updated if we change this)
imcV2Hz: 1836 Hz/V at the point where the IMC error signal is summed with REFL control. (probably too many digits here)
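As a cross-check on these entries, the end-to-end counts-to-Hz factor is just the product of the stages above (assuming a simple multiplicative chain):

```python
# Recomputing the REFL control calibration from the filter entries above.
cnts2volts = 6.1e-4             # ADC counts -> Volts
ao_gain = 10 ** (-24 / 20)      # -24 dB AO path gain
imc_v2hz_full_lock = 1836.0     # Hz/V at the IMC summation point (full lock)
imc_v2hz_acquisition = 3670.0   # Hz/V before the 6 dB fast-gain reduction

print(cnts2volts * ao_gain * imc_v2hz_acquisition)  # ~0.141 Hz/count (value in the edit above)
print(cnts2volts * ao_gain * imc_v2hz_full_lock)    # ~0.071 Hz/count (full lock)
```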