J. Kissel

I've gathered our usual Tuesday charge measurements, but have had some fun with them. Good news all around:
(1) ETMX charge accumulation is leveling off (of course, for unknown reasons, but good all the same),
(2) after flipping the applied bias voltage sign from negative to positive, ETMY's charge is trending back towards zero as expected, and
(3) estimates of the strength change in ETMY using PCAL vs. Optical Levers as a reference show excellent agreement.

I attach four charge plot collections, where I've done a good bit of work to make the plots more informative and relatable to other metrics. The Charge_Trend plots y'all've seen before, but there are now more ticks on the X axis to give a better feel for the days passing. Further, introduced last week but refined here, is the actuation strength change, assuming all of the variation in strength is due to the change in the Effective Bias from Charge relative to our Applied Bias. These plots are appropriately zoomed to the expected +/- 10% and have lots of X ticks, so one can immediately compare them to plots of the test mass stage actuation strength change over time as measured by calibration lines, because we know from Eq. 12 in T1500467 that

F_{lin} \propto 2 \alpha V_{S} V_{B} ( 1 - V_{EFF} / V_{B} )

So, if we believe that charge is the only thing causing the actuation strength of the ESD to change over time, then -V_{EFF} / V_{B} from the OpLev measurements should directly map onto the change in actuation strength as measured by the PCAL calibration lines, a la "kappa_TST" in T1500377.

As such, I attach a brand-spankin' new money plot, where I compare the actuation strength of each quadrant as measured by the optical lever in pitch and yaw against the actuation strength of all quadrants acting collectively in longitudinal. They agree quite beautifully, as we've suspected all along. I think the only thing I'm sad about is the slope difference after ETMY's bias sign flip, but (a) we only have a few points of optical lever data, and we know the scatter is pretty large from day to day, and (b) comparing strength change in longitudinal against strength change in pitch & yaw may be a little misleading.

Anyways -- I think we can safely say that the actuation strength change we see in calibration is almost entirely attributable to the slow changes in electric field around the test mass. This time, I'm careful in my words not to just colloquially say "charge," since we haven't yet narrowed down whether the dominant mechanism is actual charge close to the ESD pattern, or slowly drifting voltage on the cage (again, see Leo's work in T1500467).

-------------
Details:

For today's charge measurements, I've followed the newly updated instructions on the aWIKI. To produce the attached .pngs, I've updated and committed the changes to
/ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts/Long_Trend.m
The charge data from this trend was subsequently exported to
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-10-27_H1SUSETMY_ChargeMeasResults.mat
For the estimate of actuation strength change from the Cal Lines, I've used Sudarshan's data from the SLM tool aLOGged here, and committed it to the CalSVN repo here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-10-27_Oct_Kappas.mat
In addition to the 60 [s] FFTs from the SLM Tool, I've taken a 1 [hr] rolling average of the data to throw out what we're now confident is mostly Gaussian noise and clean up the plot.
The script used to analyze these results collectively lives and is committed here: compare_chargevskappaTST_20151027.m
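To spell out the mapping behind the money plot (a sketch in my own words: the symbols follow Eq. 12 as quoted above, where I read V_{S} as the signal/control voltage, V_{B} as the applied bias, and V_{EFF} as the effective bias from charge, and I take the zero-effective-bias strength as the reference):

F_{lin} \propto 2 \alpha V_{S} V_{B} ( 1 - V_{EFF} / V_{B} )
=> \Delta F_{lin} / F_{lin} \approx - V_{EFF} / V_{B}
=> \kappa_{TST}(t) \approx 1 - V_{EFF}(t) / V_{B}

so the OpLev-derived -V_{EFF}/V_{B} trend can be overlaid directly on the kappa_TST trend from the PCAL calibration lines.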
Have remained in observing. 23:32 UTC Kyle back from Y28.
This is the time-varying calibration parameter trend for Sep-Oct, plotted using the data obtained from the SLM tool. The output data in .mat format is also attached to this aLOG.
Michael T., Jim, Carlos, Jonathan, Ryan, Dave
We tested the ability of a front end computer to send syslogs to the central loghost, with mixed results. We tested on h1pemmx and h1pemmy. We were able to send the dmesg output to the loghost, as well as messages from the command line "logger" utility, but we were unable to send sshd logs. When we repeated the process on the DTS (logging to Ryan's loghost) we were able to log everything. Work is ongoing.
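For reference, a minimal sketch of the kind of test involved (the loghost name and forwarding rule below are placeholders, not the production configuration):

# on the front end, push a test message through the local syslog daemon
logger -p user.info "syslog forwarding test from h1pemmx"

# forwarding itself is set in the syslog daemon configuration, e.g. a classic
# rsyslog/syslog.conf rule of the form (hostname is a placeholder):
#   *.*   @loghost.example.org      # UDP; with rsyslog, @@ selects TCP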
I have created a Beckhoff SDF overview MEDM screen, it is accessible from the sitemap via the GRD button (last entry at bottom of list).
For each of the 9 PLCs I have done the following:
1. In the target area for the Beckhoff target (not the SDF target) I created a new autoBurt.req file by parsing the Beckhoff channel list which I had created last week from the raw EPICS database file.
cd /opt/rtcds/lho/h1/target/h1ecatc1/h1ecatc1plc1epics/
cat ~david.barker/sdf/h1ecatc1_PLC1_sdf_chanlist.ini | grep "H1" | sed 's/\[//g' | sed 's/\]//g' | sort > autoBurt.req
2. Use the new autoBurt.req to snapshot the running system and create an OBSERVE.snap file in the sdf target area
cd /opt/rtcds/lho/h1/target/h1sysecatc1plc1sdf/h1sysecatc1plc1sdfepics/burt
burtrb -f /opt/rtcds/lho/h1/target/h1ecatc1/h1ecatc1plc1epics/autoBurt.req > OBSERVE.snap
3. Set the OBSERVE.snap to monitor all channels as a starting point
set_sdf_monitor 1 OBSERVE.snap
4. For each PLC, configure to use the OBSERVE.snap as the reference table file
5. Copy the OBSERVE.snaps from the target into the SVN userapps area for sys/h1/burtfiles, create symbolic links in the target area
david.barker@sysadmin0: cd h1sysecatc1plc1sdf/h1sysecatc1plc1sdfepics/burt/
david.barker@sysadmin0: ll OBSERVE.snap
lrwxrwxrwx 1 controls controls 76 Oct 27 17:30 OBSERVE.snap -> /opt/rtcds/userapps/release/sys/h1/burtfiles/h1sysecatc1plc1sdf_OBSERVE.snap
david.barker@sysadmin0:
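Putting steps 1-5 together, an illustrative per-PLC sketch (values shown for h1ecatc1 PLC1; the variable names and the exact move/link commands are my paraphrase of the steps above, not a verbatim transcript):

target=/opt/rtcds/lho/h1/target
chanlist=~david.barker/sdf/h1ecatc1_PLC1_sdf_chanlist.ini
burtfiles=/opt/rtcds/userapps/release/sys/h1/burtfiles

# 1. build the request file from the Beckhoff channel list
cd $target/h1ecatc1/h1ecatc1plc1epics/
cat $chanlist | grep "H1" | sed 's/\[//g' | sed 's/\]//g' | sort > autoBurt.req

# 2. snapshot the running system into the SDF target area
cd $target/h1sysecatc1plc1sdf/h1sysecatc1plc1sdfepics/burt
burtrb -f $target/h1ecatc1/h1ecatc1plc1epics/autoBurt.req > OBSERVE.snap

# 3. monitor all channels as a starting point
set_sdf_monitor 1 OBSERVE.snap

# 5. keep the snap in the userapps SVN area and symlink it back into the target
mv OBSERVE.snap $burtfiles/h1sysecatc1plc1sdf_OBSERVE.snap
ln -s $burtfiles/h1sysecatc1plc1sdf_OBSERVE.snap OBSERVE.snap

(Step 4, pointing each PLC's SDF at OBSERVE.snap as its reference table, is an SDF configuration step rather than a shell command.)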
I added SDF monitors for the Beckhoff PLCs. Dave is working on getting OBSERVE.snap files for them and will link the SDF_TABLE views into the sitemap as he gets them ready.
These are built through the RCG system, but have no real-time component. There are entries in the target area for them.
These run on h1build, NOT on a front end. They use Channel Access to communicate with the PLCs and so can run anywhere we have a friendly environment.
We are not ready to have the guardian monitor the difference count yet.
TITLE: 10/27 [EVE Shift]: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing @ ~71 Mpc
OUTGOING OPERATOR: Nutsinee
QUICK SUMMARY: Lights appear off in the LVEA, PSL enclosure, end X, end Y, and mid X. I cannot tell from the camera whether they are off at mid Y. Winds are less than 20 mph. ISI blends are at 45 mHz. Earthquake seismic band is between 0.01 and 0.1 um/s. Microseism is between 0.1 and 0.6 um/s. In stand-down for GRB until 23:41 UTC.
TITLE: "10/27 [DAY Shift]: 15:00-15:00UTC (8:00-16:00 PDT), all times posted in UTC"
STATE Of H1: Observing at 78 Mpc
SUPPORT: Control room folks
SHIFT SUMMARY: Maintenance period went well. There were a couple of things that happened before I was informed, and both had the potential to interfere with ongoing tasks, so please do not forget to come by or call the control room to check with the operator on duty. We had a little trouble locking DRMI; after locking PRMI everything went smoothly. Jenne ran the a2L script and Evan made some measurements before we went to Observing. Then a GRB alert came shortly after. The IFO is currently in stand-down mode.
INCOMING OPERATOR: Patrick
ACTIVITY LOG:
15:08 Richard transitioning LVEA to LASER SAFE. Bubba begins grouting work. PSL is not down. IMC still locks.
15:21 Hugh to EY to check HPI fluid and secure light pipe between table and BSC
15:23 Jeff and TJ to pipe bridge between HAM4 and HAM5
Ryan patching alog
15:25 Ken to the beamtube 250 meters from end X
Jodi et al to mid X
15:35 Fil to both ends for temperature sensors installation
Dale to LVEA taking pictures
15:40 Karen to LVEA and EY to clean
Jason to diode room to restart PSL WD
15:48 Hugh leaving EY, heading to EX. Jason out of diode room.
15:52 Christina to LVEA.
Ryan done patching alog.
Kiwamu leaves laser at high power for ISS measurement
16:00 Dale back
16:05 Mitchell taking Sprague to LVEA
Jonathan installing EPICS IOC to monitor Beckhoff PLC
16:14 Jodi to LVEA checking on 3IFO stuff
16:19 Kyle to check on HAM3 ion pump
Mitchell out
16:27 Hugh done
Jodi out of LVEA
16:36 Karen out of EY
16:52 Maddie called to make sure it's okay to restart h(t) DMT. Things went well.
17:00 Christina opening the roll-up door. Karen called saying she's at EX. Fil also there.
17:18 DMT0 - Ryan Fisher called to say he's restarting some process and shouldn't affect anything. Next time if people don't speak English-English I'm just gonna hand the phone over to Jeff Kissel.
17:26 Jeff done with charge measurement (started about an hour ago).
Evan to LVEA installing cable at ISC rack (R1 to R4).
17:41 Fil done at EX. Heading to EY.
17:44 Jeff K. begins EX charge measurement.
17:48 HAM2 tripped. Jeff K. recovered everything. HAM2 ISI switched to low gain.
18:04 TJ and Jeff B. done
18:13 Kiwamu and Sudarshan to LVEA measuring ISS second loop TF.
18:17 Richard at EX pulling some cable.
18:18 Hugh to HAM1 to check on grouting.
18:22 Hugh out.
18:44 Jeff done with EX charge measurement.
18:46 Jodi back from the Mid.
Kiwamu out
Grouting done
18:48 DAQ restart (Jim B)
18:59 Kiwamu done with ISS for the day.
19:01 NDS did not come back. DAQ restarted again.
Fil done at EY and done with all temperature sensors installation.
19:06 Richard leaving EX
19:50 Begins initial alignment. Cheryl and Jeff K to LVEA doing sweep.
19:52 Cheryl to Optics lab
20:07 Cheryl out
20:45 Kyle to anchor ion pump 275m from EY.
Richard to HAM6 to put on safety sticker.
20:58 Richard done
21:52 Evan to unplug the cable.
21:56 Evan out.
21:59 Observing/Sciencing
22:40 GRB alert. Stand down.
I don't see this reported elsewhere in the log, but the DAQ restarts seem to have introduced a large (5 minute?) lag in the calibration pipeline. To fix this, Maddie restarted the pipeline at about 21:04 UTC (14:04 PDT). This resulted in a few (unlocked data) h(t) frames being lost, but the pipeline latency is reduced to ~11s.
The HAM 1 HEPI piers have been grouted. If there is a lock loss at some point tomorrow or the following day, it would be good to go into the LVEA to remove the grout forms.
DO NOT TOUCH THE IFO (for at least an hour).
Cruising at ~78 Mpc.
I made a new screen to help Operators check the status of the CAL lines. It shows the current value of the sine waves with their amplitudes and frequencies as well as the status of the Optical Follower Servos. The OFS Oscillation Monitor will have a green border if the absolute value stays under 0.01, and red otherwise. There are links to each of the line's medms for more information, if needed.
This is located under the CS CAL on the Sitemap.
LHO's fourth-Friday public tour occurred on the afternoon of 10/23. Arrival time at LSB = 2:30 - 3:00 PM. Departure time = 5:00 PM. Group size = ~12 adults. Vehicles at the LSB = ~6 passenger cars. The group was on the overpass near 4:15 PM and in the control room from about 4:30 to 4:50.
J. Kissel, N. Kijbunchoo, S. Dwyer After a pretty breezy maintenance day, we've successfully achieved nominal low-noise in record time. Good job team! P.S. We're going to run a few commissioning measurements before heading into observation mode. Expect science in about a half hour.
Sudarshan, Kiwamu, (WP#5569)
During the maintenance period this morning, we looked at a few things in order to improve the ISS robustness.
In summary, the 2nd loop ISS is not back in great shape yet. There will be times when it needs manual engagement.
(1) The open loop measurement suggested that the UGF is at 10 kHz with a phase margin of about 50 deg. Nothing crazy was found.
(2) The PID loop seems to have been suffering from an unidentified extra offset, which explains the behavior we saw yesterday (alog 22863). I have edited the IMC_LOCK guardian so that it servos to a better locking point where the transient in the diffracted power is smaller.
(3) The IMC seems to be adding extra intensity noise below 1 Hz. This is the main reason the PID loop does not converge: the noise is simply too high.
Change in the PID locking point
After a number of 2nd loop engagement tests, I confirmed that it was the PID loop which pulled the diffraction power to a small value (alog 22870). This happens even without any of the additional gain or boosts which are engaged in CLOSE_ISS. The reason for this bad behavior was found to be a different (unidentified) offset in SECONDLOOP_SIGNAL. I do not know why it changed. Originally the guardian was supposed to servo SECONDLOOP_SIGNAL to 0.7, which had been fine in the past in terms of the transient kick to the first loop. However, as reported in recent alogs, this locking point became bad in the sense that it gave too large a kick and eventually unlocked both the 1st and 2nd loops. I experimentally adjusted the offset point and ended up at -0.4. I have edited the guardian accordingly and checked it in to the SVN.
I then tested the manual engagement with the new PID locking point multiple times (note: I did it manually because the PID loop was not able to converge due to large motion in the IMC). I confirmed that it did not pull the diffraction to a small value, though it often lifts the diffraction up to about 10%, which is acceptable.
Extra intensity noise from IMC
This is something we already knew (alog 22482), but intensity noise becomes higher at low frequencies as the light goes through the IMC. This is likely due to some static misalignment somewhere in the IMC (or perhaps a very large offset in the length control). In order to identify which optics are contributing most, I looked at several channels for SUS-MC1, MC2 and MC3, and the HAM2 and HAM3 table motion. However, sadly, all three MC optics and both HAM tables seem to contribute equally to the intensity fluctuation seen in MC2_TRANS and the ISS PD array below 1 Hz, according to the coherence. The contribution from yaw tends to be larger than that from pitch, but they are all above a coherence of 0.85 or so below 1 Hz. It is difficult to say which optics are contributing most.
We need to study why the IMC motion is so high these days.
For those who will engage the 2nd loop in the future
The manual engagement now works fine. As usual, when the 2nd loop does not engage on its own for more than 5 minutes or so, please follow the instructions for manual engagement (alog 2249). Also, when the manual attempt fails (i.e. the diffraction power is pulled by more than 5% when manually engaging), one can prevent a lockloss by manually disengaging the second loop input (the very top red button on the 2nd loop ISS screen). Subsequently one can make another attempt once the diffraction power settles.
We implemented a version of the PRMI to DRMI transition that Stefan did by hand (20698) a few weeks ago (22472). It has worked a few times and not worked a few times; operators are using this only when the DRMI alignment is bad, to improve the alignment in PRMI. Here are plots of two examples, one successful and one unsuccessful transition (failure at 20:37:19 UTC today, Oct 27; success at 10/26 23:35:05 UTC).
In the failure, the PRMI lock was lost when the SRM suspension started to realign SRM, before any feedback came on. Jeff and I changed the ramp time for SRM in the guardian to 10 seconds (I think the default is 2 seconds). We will see if this helps the PRMI lock to survive realigning the SRM.
Also, in the successful attempt the SRCL feedback came on at a time when AS90 was high and POP90 was low. It might be that triggering the SRCL output on one of these channels would be better than POP18.
GregM, RickS, DarkhanT, JeffK, SudarshanS
This was an attempt to study what the GDS output will look like with kappa factors applied. GregM volunteered to create test data with the kappas applied (kappa_C and kappa_tst) to 'C01' data for the days between October 1 and 8. The kappa-corrected data is referred to as 'X00'. The correction factors are applied by averaging the kappas over 128 s. This was loosely determined from the study done last week (alog 22753) on what the kappas look like with different time-averaging durations.
Here, comparisons are made between 'C01' and 'X00'. The first plot contains the kappa factors that are relevant to us: kappa_tst and kappa_C are applied and are thus relevant, whereas the cavity pole (f_c) varies quite a bit at the beginning of each lock stretch and is thus significant, but we don't have the infrastructure to correct for it. The first page contains kappas calculated with a 10 s FFT, plotted in red, and 120 s averaged kappas plotted in blue. Page 2 has a similar plot but with the kappas averaged over 20 minutes (it helps to see the trend more clearly).
Page 3 and onwards has plots of GDS/PCal at the PCal calibration line frequencies, both magnitude and phase, plotted for the C01 and X00 data. The most interesting plots are the magnitude plots, because applying the real part of kappa_tst and kappa_C does not have a significant impact on the phase. The most interesting thing is that applying the kappas flattens out the long-term trends in GDS/PCal at all four frequencies. However, at 36 Hz it flattens out the initial transient as well but introduces some noise into the data. At 332 Hz and 1 kHz it introduces a transient at the beginning of the lock stretch, and it does not seem to have much effect at the 3 kHz line. We think that this transient should be flattened out as well with the application of the kappas. The caveat is that we don't apply the cavity pole correction, and we know that the cavity pole has a significant effect in the frequency region above the cavity pole.
DarkhanT, RickS, SudarshanK
After seeing the ~2% transient at the beginning of almost every lock stretch in the GDS [h(t)] trend at around 332 Hz, we had a hunch that this could be a result of not correcting for cavity pole frequency fluctuations. Today, Darkhan, Rick and I looked at old carpet plots to see if we expect variation similar to what we are seeing, and indeed the carpet plots predicted a few percent error in h(t) when the cavity pole varies by 10 Hz.
So we decided to correct h(t) for the cavity pole fluctuation at the calibration line frequency. We basically assumed that h(t) is sensing dominated at 332 Hz, used the absolute value of the correction factor that a change in cavity pole would incur, C/C' = (1 + i f/f_c)/(1 + i f/f_c'), and appropriately multiplied it into the GDS output.
The result is attached below. Applying the cavity pole correction gets rid of the transient seen at the beginning of each lock stretch, as well as the ~1% overall systematic we saw in the whole trend. We used 341 Hz as the nominal cavity pole value, which is calculated from the model at the time of calibration. In the plot below, the cyan traces in both the bottom and top left are the GDS CALIB_STRAIN / PCAL output uncorrected for kappas; the green on the top left is corrected for kappa_tst, kappa_C and the cavity pole, whereas the green on the bottom left is corrected for kappa_C and kappa_tst only (we only know how to correct for these in the time domain).
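As a rough worked example of the size of this effect (illustrative numbers only: the 341 Hz nominal cavity pole quoted above, an assumed 10 Hz shift to 331 Hz, evaluated near the 332 Hz line):

|C/C'| = |1 + i (332/341)| / |1 + i (332/331)| ≈ 1.396 / 1.416 ≈ 0.985

i.e. roughly a 1.5% change in magnitude, the same order as the ~2% transients and the few-percent error predicted by the carpet plots for a 10 Hz cavity pole excursion.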