Ed, Patrick
Some oscillations have been occurring occasionally; Patrick noticed them on the previous shift and I have been watching them. They aren't destructive to the lock and they wax and wane on their own. Patrick and I also noticed that the "MICH live" DMT trace is "breathing" up from its reference trace between ~3 Hz and ~20 Hz. I also noticed that the last time these oscillations occurred there was a high-SNR glitch in the LF range of the DMT Omega plot.
Excerpt from Activity Log:
07:57 Oscillation: @08:04 IMC-F_OUTMON (tidal) grew to +/- 5 cts. ASC-DHARD_Y & P grew to +/- 2000 cts. Total duration was ~12 minutes. A possibly correlated DMT Omega glitch shows up at approximately the start time of the oscillation (08:04 UTC + 9 minutes).
TITLE: Oct 28 OWL Shift 7:00-15:00UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Observing
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: IFO is in Observing @ ~78 Mpc. EQ seismic bands are all in the 0.22 micron range. Microseism is around 0.4 µm/s. Wind is ≤10 mph. There is an occasional oscillation, which Patrick has noticed, that can be seen in POP_A_LF_OUTPUT and is also reflected in IMC-F_OUT16 (tidal) and ASC-DHARD_Y_OUTMON. It doesn't seem to be a destructive oscillation. It seems the 45 MHz RFAM plot has crashed; the live trace appeared to be right on top of the reference, and there is usually a good amount of space between the two.
TITLE: 10/27 [EVE Shift]: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing @ ~79 Mpc
SHIFT SUMMARY: Remained in Observing the entire shift. Low-frequency ASC oscillations came and went a few times. At some point the RF45 noise DMT monitor stopped updating. I restarted it but it still wouldn't run, so I left it closed. I had to restart the GraceDB query script; it is still occasionally failing. Seismic and winds remain about the same.
INCOMING OPERATOR: Ed
Low frequency oscillations in the ASC loops have come and gone twice now. Attached are full data plots from 2:30 UTC to 5:30 UTC on Oct. 28 2015.
As part of Maintenance recovery, I realigned the IMs after the HAM2 ISI tripped.
IM2 pitch showed a big change of -42 µrad from before to after the ISI trip; it is now corrected.
There had been several notifications on the CAL_INJ_CONTROL MEDM that the GraceDB querying had failed, but each time it succeeded shortly after. Finally it seemed to fail and stay failed. I logged into h1fescript0 and killed the screen listed in the PID file (4403). It looks like there are two other leftover screens in which it also failed (22363 and 21590); I'm leaving those for diagnosis. I restarted the script in a new screen (25515). The MEDM updated that it succeeded. It has failed and succeeded twice more since. Logging into the new screen, it reports:
[ext-alert 1130044041] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044071] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044343] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044373] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044644] CRITICAL: Error querying gracedb: timed out
[ext-alert 1130044674] CRITICAL: Error querying gracedb: timed out
When I restarted the GraceDB script the verbal alarms script reported a GRB alert. It appears this was an old one though.
Gamma Ray Burst (Oct 28 05:06:07 UTC) Gamma Ray Burst Acknowledge event?
Gamma Ray Burst (Oct 28 05:06:12 UTC) Gamma Ray Burst Acknowledge event?
Gamma Ray Burst (Oct 28 05:06:18 UTC) Gamma Ray Burst Acknowledge event?
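For reference, the ~30 s cadence of the timed-out queries above suggests a simple poll-and-retry loop; a minimal Python sketch of that pattern (hypothetical URL and handling, not the actual ext-alert script):

import time
import requests

QUERY_URL = "https://gracedb.ligo.org/api/events/"   # hypothetical endpoint for illustration
POLL_INTERVAL = 30                                   # seconds, matching the timestamps above

def poll_once():
    """Query once; return the decoded response, or None if the query times out."""
    try:
        r = requests.get(QUERY_URL, timeout=10)
        r.raise_for_status()
        return r.json()
    except requests.exceptions.Timeout:
        print("CRITICAL: Error querying gracedb: timed out")   # log and keep running
        return None

while True:
    events = poll_once()
    if events is not None:
        pass   # hand any new events to the alert/verbal-alarm logic here
    time.sleep(POLL_INTERVAL)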
J. Kissel

I've gathered our usual Tuesday charge measurements, but have had some fun with them. Good news all around: (1) ETMX charge accumulation is leveling off (of course, for unknown reasons, but good all the same); (2) after flipping the applied bias voltage sign from negative to positive, ETMY's charge is trending back towards zero as expected; and (3) estimates of the strength change in ETMY using PCAL vs. optical levers as a reference show excellent agreement.

I attach four charge plot collections, where I've done a good bit of work trying to make the plots more informative and relatable to other metrics. The Charge_Trend plots y'all've seen before, but there are more ticks on the X axis to give a better feel of the days passing. Further, introduced last week but refined here, is the actuation strength change, assuming all the variations in strength are due to the relative change of the effective bias from charge and our applied bias. Appropriately zoomed to the expected +/- 10%, and with lots of X ticks, one can immediately compare this to plots of the test mass stage actuation strength change over time as measured by calibration lines, because we know from Eq. 12 in T1500467 that F_{lin} \propto 2 \alpha V_{S} V_{B} (1 - V_{EFF}/V_{B}). So, if we believe that charge is the only thing causing the actuation strength of the ESD to change over time, then -V_{EFF}/V_{B} from the OpLev measurements should directly map onto the change in actuation strength as measured by the PCAL calibration lines, a la "kappa_TST" in T1500377.

As such, I attach a brand-spankin' new money plot, where I compare the actuation strength of each quadrant as measured by the optical lever in pitch and yaw against the actuation strength of all quadrants acting collectively in the longitudinal. They agree quite beautifully, as we've suspected all along. The only thing I'm sad about is the slope difference after ETMY's bias sign flip, but (a) we only have a few points of optical lever data, and we know the scatter is pretty large from day to day, and (b) comparing strength change in longitudinal against strength change in pitch & yaw may be a little bit misleading. Anyways -- I think we can safely say that the actuation strength change we see in calibration is almost entirely attributable to the slow changes in electric field around the test mass. This time, I'm careful in my words not to just colloquially say "charge," since we haven't yet narrowed down whether the dominant mechanism is actual charge close to the ESD pattern or slowly drifting voltage on the cage (again, see Leo's work in T1500467).

-------------
Details:

For today's charge measurements, I've followed the newly updated instructions on the aWIKI. To produce the attached .pngs, I've updated and committed the changes to /ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts/Long_Trend.m. The charge data from this trend was subsequently exported to /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-10-27_H1SUSETMY_ChargeMeasResults.mat.

For the estimate of actuation strength change from the cal lines, I've used Sudarshan's data from the SLM tool aLOGged here, committed to the CalSVN repo here: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-10-27_Oct_Kappas.mat. In addition to the 60 [s] FFTs from SLM Tool, I've taken a 1 [hr] rolling average of the data to throw out what we're now confident is mostly Gaussian noise, to clean up the plot.
The script used to analyze these results collectively lives and is committed here: compare_chargevskappaTST_20151027.m
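Spelling out the mapping used in the money plot (restating the argument above, and assuming kappa_TST is referenced to an epoch with negligible effective bias, which is my assumption rather than something stated in T1500377): from Eq. 12 in T1500467,

F_{\mathrm{lin}} \propto 2\,\alpha\, V_{S} V_{B} \left(1 - \frac{V_{\mathrm{EFF}}}{V_{B}}\right),

so the fractional change in ESD actuation strength relative to the zero-effective-bias case is

\frac{\Delta F_{\mathrm{lin}}}{F_{\mathrm{lin}}} = -\frac{V_{\mathrm{EFF}}}{V_{B}} \approx \kappa_{\mathrm{TST}} - 1,

which is why -V_{EFF}/V_{B} from the OpLev measurements can be overlaid directly on the PCAL-line estimate of the actuation strength change.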
Have remained in observing. 23:32 UTC Kyle back from Y28.
This is the time-varying calibration parameter trend for September-October, plotted using the data obtained from the SLM tool. The output data in .mat format is also attached to this aLOG.
Michael T., Jim, Carlos, Jonathan, Ryan, Dave
We tested the ability of a front end computer to send syslogs to the central loghost, with mixed results. We tested on h1pemmx and h1pemmy. We were able to send the dmesg output to the loghost, and output from the command-line "logger" utility, but we were unable to send sshd logs. When we repeated the process on the DTS (logging to Ryan's loghost) we were able to log everything. Work is ongoing.
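For comparison, sending to a remote syslog host from user code is straightforward; a minimal Python sketch (placeholder hostname, standard UDP syslog port 514; the front ends themselves use the system syslog daemon rather than this):

import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("loghost.example.org", 514))  # placeholder host
log = logging.getLogger("h1pemmx-test")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("test message to the central loghost")   # analogous to the command-line "logger" test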
I have created a Beckhoff SDF overview MEDM screen, it is accessible from the sitemap via the GRD button (last entry at bottom of list).
For each of the 9 PLCs I have done the following:
1. In the target area for the Beckhoff target (not the SDF target), I created a new autoBurt.req file by parsing the Beckhoff channel list which I had created last week from the raw EPICS database file (a Python equivalent of this parsing step is sketched after the terminal listing below).
cd /opt/rtcds/lho/h1/target/h1ecatc1/h1ecatc1plc1epics/
cat ~david.barker/sdf/h1ecatc1_PLC1_sdf_chanlist.ini | grep "H1" | sed 's/\[//g' | sed 's/\]//g' | sort > autoBurt.req
2. Use the new autoBurt.req to snapshot the running system and create an OBSERVE.snap file in the sdf target area
cd /opt/rtcds/lho/h1/target/h1sysecatc1plc1sdf/h1sysecatc1plc1sdfepics/burt
burtrb -f /opt/rtcds/lho/h1/target/h1ecatc1/h1ecatc1plc1epics/autoBurt.req > OBSERVE.snap
3. Set the OBSERVE.snap to monitor all channels as a starting point
set_sdf_monitor 1 OBSERVE.snap
4. For each PLC, configure to use the OBSERVE.snap as the reference table file
5. Copy the OBSERVE.snaps from the target into the SVN userapps area for sys/h1/burtfiles, create symbolic links in the target area
david.barker@sysadmin0: cd h1sysecatc1plc1sdf/h1sysecatc1plc1sdfepics/burt/
david.barker@sysadmin0: ll OBSERVE.snap
lrwxrwxrwx 1 controls controls 76 Oct 27 17:30 OBSERVE.snap -> /opt/rtcds/userapps/release/sys/h1/burtfiles/h1sysecatc1plc1sdf_OBSERVE.snap
david.barker@sysadmin0:
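As referenced in step 1, a Python equivalent of the chanlist parsing (assuming the chanlist file simply wraps each H1 channel name in square brackets, as the grep/sed pipeline implies; filenames as in step 1, illustration only):

chanlist = "h1ecatc1_PLC1_sdf_chanlist.ini"   # per-PLC channel list
names = []
with open(chanlist) as f:
    for line in f:
        if "H1" in line:
            names.append(line.strip().strip("[]"))   # drop the INI-style brackets
with open("autoBurt.req", "w") as out:
    out.write("\n".join(sorted(names)) + "\n")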
I added SDF monitors for the Beckhoff PLCs. Dave is working on getting OBSERVE.snap files for them and will link the SDF_TABLE views into the sitemap as he gets them ready.
These are built through the RCG system, but have no real-time component. There are entries in the target area for them.
These run on h1build, NOT on a front end. They use channel access to communicate with the PLCs and so can run anywhere we have a friendly environment.
We are not ready to have the guardian monitor the difference count yet.
TITLE: 10/27 [EVE Shift]: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing @ ~71 Mpc
OUTGOING OPERATOR: Nutsinee
QUICK SUMMARY: Lights appear off in the LVEA, PSL enclosure, end X, end Y and mid X. I cannot tell from the camera if they are off at mid Y. Winds are less than 20 mph. ISI blends are at 45 mHz. Earthquake seismic band is between 0.01 and 0.1 um/s. Microseism is between 0.1 and 0.6 um/s. In stand-down for GRB until 23:41 UTC.
TITLE: "10/27 [DAY Shift]: 15:00-15:00UTC (8:00-16:00 PDT), all times posted in UTC"
STATE Of H1: Observing at 78 Mpc
SUPPORT: Control room folks
SHIFT SUMMARY: The maintenance period went well. A couple of things happened before I was informed, and both had the potential to interfere with ongoing tasks, so please do not forget to come by or call the control room to check with the operator on duty. We had a little trouble locking DRMI; after locking PRMI everything went smoothly. Jenne ran the a2L script and Evan did some measurements before we went to Observing. Then a GRB alert came shortly after. The IFO is currently in stand-down mode.
INCOMING OPERATOR: Patrick
ACTIVITY LOG:
15:08 Richard transitioning LVEA to LASER SAFE. Bubba begins grouting work. PSL is not down. IMC still locks.
15:21 Hugh to EY to check HPI fluid and secure light pipe between table and BSC
15:23 Jeff and TJ to pipe bridge between HAM4 and HAM5
Ryan patching alog
15:25 Ken to the beamtube 250 meters from end X
Jodi et al to mid X
15:35 Fil to both ends for temperature sensors installation
Dale to LVEA taking pictures
15:40 Karen to LVEA and EY to clean
Jason to diode room to restart PSL WD
15:48 Hugh leaving EY, heading to EX. Jason out of diode room.
15:52 Christina to LVEA.
Ryan done patching alog.
Kiwamu leaves laser at high power for ISS measurement
16:00 Dale back
16:05 Mitchell taking Sprague to LVEA
Jonathan installing EPICS IOC to monitor Beckhoff PLC
16:14 Jodi to LVEA checking on 3IFO stuff
16:19 Kyle to check on HAM3 ion pump
Mitchell out
16:27 Hugh done
Jodi out of LVEA
16:36 Karen out of EY
16:52 Maddie called to make sure it's okay to restart h(t) DMT. Things went well.
17:00 Christina opening the roll up door. Karen called saying she's at EX. Fil also there.
17:18 DMT0 - Ryan Fisher called to say he's restarting some process and shouldn't affect anything. Next time if people don't speak English-English I'm just gonna hand the phone over to Jeff Kissel.
17:26 Jeff done with charge measurement (started about an hour ago).
Evan to LVEA installing cable at ISC rack (R1 to R4).
17:41 Fil done at EX. Heading to EY.
17:44 Jeff K. begins EX charge measurement.
17:48 HAM2 tripped. Jeff K. recovered everything. HAM2 ISI switched to low gain.
18:04 TJ and Jeff B. done
18:13 Kiwamu and Sudarshan to LVEA measuring ISS second loop TF.
18:17 Richard at EX pulling some cable.
18:18 Hugh to HAM1 to check on grouting.
18:22 Hugh out.
18:44 Jeff done with EX charge measurement.
18:46 Jodi back from the Mid.
Kiwamu out
Grouting done
18:48 DAQ restart (Jim B)
18:59 Kiwamu done with ISS for the day.
19:01 NDS did not come back. DAQ restarted again.
Fil done at EY and done with all temperature sensors installation.
19:06 Richard leaving EX
19:50 Begins initial alignment. Cheryl and Jeff K to LVEA doing sweep.
19:52 Cheryl to Optics lab
20:07 Cheryl out
20:45 Kyle to anchor ion pump 275m from EY.
Richard to HAM6 to put on safety sticker.
20:58 Richard done
21:52 Evan to unplug the cable.
21:56 Evan out.
21:59 Observing/Sciencing
22:40 GRB alert. Stand down.
I don't see this reported elsewhere in the log, but the DAQ restarts seem to have introduced a large (5 minute?) lag in the calibration pipeline. To fix this, Maddie restarted the pipeline at about 21:04 UTC (14:04 PDT). This resulted in a few (unlocked data) h(t) frames being lost, but the pipeline latency is reduced to ~11s.
Sudarshan, Kiwamu (WP#5569)
During the maintenance period this morning, we looked at a few things in order to improve the ISS robustness.
In summary, the 2nd loop ISS is not back in great shape yet. There will be times when it needs manual engagement.
(1) The open loop measurement suggested that the UGF is at 10 kHz with a phase margin of about 50 deg (see the sketch after this list for how these numbers come out of a measured transfer function). Nothing crazy was found.
(2) The PID loop seems to have been suffering from an unidentified extra offset, which explains the behavior we saw yesterday (alog 22863). I have edited the IMC_LOCK guardian so that it servos to a better locking point, where the transient in the diffraction power is smaller.
(3) The IMC seems to be adding extra intensity noise below 1 Hz. This is the main reason the PID loop does not converge: the noise is simply too high.
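For reference (as noted in item (1)), extracting the UGF and phase margin from a measured open-loop transfer function looks roughly like the following; freq and oltf here are a toy 1/f loop, not the actual ISS measurement:

import numpy as np

freq = np.logspace(2, 5, 2000)     # Hz, placeholder frequency vector
oltf = 1.0e4 / (1j * freq)         # toy open-loop TF with ~10 kHz unity-gain frequency

idx = np.argmin(np.abs(np.abs(oltf) - 1.0))           # closest point to |G| = 1
ugf = freq[idx]
phase_margin = 180.0 + np.degrees(np.angle(oltf[idx]))
print(f"UGF ~ {ugf:.0f} Hz, phase margin ~ {phase_margin:.0f} deg")   # toy loop gives 90 deg; the real ISS showed ~50 deg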
Change in the PID locking point
After a number of 2nd loop engagement tests, I confirmed that it was the PID loop that pulled the diffraction power to a small value (alog 22870). This happens even without the additional gains or boosts that are engaged in CLOSE_ISS. The cause of this bad behavior was found to be a different (unidentified) offset in SECONDLOOP_SIGNAL. I do not know why it changed. Originally the guardian was supposed to servo SECONDLOOP_SIGNAL to 0.7, which had been fine in the past in terms of the transient kick to the first loop. However, as reported in recent alogs, this locking point became bad in the sense that it gave too large a kick and eventually unlocked both the 1st and 2nd loops. I experimentally adjusted the offset, ending up at -0.4. I have edited the guardian accordingly and checked it into the SVN.
I then tested the manual engagement with the new PID locking point multiple times (note: I did it manually because the PID loop was not able to converge due to large motion in the IMC). I confirmed that it did not pull the diffraction power to a small value, though it often lifts the diffraction power up to about 10%, which is acceptable.
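For illustration only, "servoing to the locking point" amounts to a slow loop of roughly this form (pyepics calls; the channel names and gain here are placeholders, not the actual IMC_LOCK guardian code):

import time
from epics import caget, caput   # pyepics

SETPOINT = -0.4   # new locking point for the second-loop signal (was 0.7)
GAIN = 0.01       # placeholder integrator gain
SIG = "H1:PSL-ISS_SECONDLOOP_SIGNAL"   # hypothetical channel name
OFS = "H1:PSL-ISS_SECONDLOOP_OFFSET"   # hypothetical channel name

while abs(caget(SIG) - SETPOINT) > 0.01:
    error = caget(SIG) - SETPOINT
    caput(OFS, caget(OFS) - GAIN * error)   # step the offset slowly toward the setpoint
    time.sleep(0.5)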
Extra intensity noise from IMC
This is something we already knew (alog 22482): the intensity noise becomes higher at low frequencies as the light goes through the IMC. This is likely due to some static misalignment somewhere in the IMC (or perhaps a very large offset in the length control). In order to identify which optics are contributing most, I looked at several channels for SUS-MC1, MC2 and MC3, and the HAM2 and HAM3 table motion. However, sadly, all three MC optics and both HAM tables seem to contribute equally to the intensity fluctuation seen in MC2_TRANS and the ISS PD array below 1 Hz, according to coherence. The contribution from yaw tends to be larger than that of pitch, but they are all above a coherence of about 0.85 below 1 Hz. It is difficult to say which optics are contributing most.
We need to study why the IMC motion is so high these days.
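For reference, the coherence numbers quoted above come from a standard Welch-style estimate; a minimal sketch with placeholder time series (not the actual SUS/ISS data):

import numpy as np
from scipy import signal

fs = 256.0                                  # sample rate [Hz], placeholder
n = int(600 * fs)                           # ten minutes of data
sus_yaw = np.random.randn(n)                # stand-in for a SUS-MC* yaw channel
iss_pd = 0.9 * sus_yaw + 0.1 * np.random.randn(n)   # stand-in for the ISS PD array sum
f, coh = signal.coherence(sus_yaw, iss_pd, fs=fs, nperseg=int(100 * fs))
print("mean coherence below 1 Hz:", coh[f < 1.0].mean())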
For those who will engage the 2nd loop in the future
The manual engagement now works fine. As usual, when the 2nd loop does not engage on its own for more than 5 minutes or so, please follow the instructions for manual engagement (alog 2249). Also, when the manual attempt fails (i.e. the diffraction power is pulled by more than 5% when manually engaging), one can prevent a lockloss by manually disengaging the second loop input (the very top red button in the 2nd loop ISS screen). Subsequently one can make another attempt once the diffraction power settles.
Due to the violin mode problem on 10/25, Sheila has asked me to investigate when this mode really started to ring up. The first plot attached shows that the amplitude of the 1008.45 Hz mode was consistent between the day before the power glitch and three hours before the power glitch (the small difference you see is within the mode's normal fluctuation range). The second plot shows that the 1008.45 Hz mode got rung up by an order of magnitude during the first lock acquired after the power glitch, just like the others. Because this mode didn't have a damping filter at the time, ideally the amplitude should have stayed where it was. However, the final plot shows that the amplitude became worse as time progressed, while other modes were either stable or being damped, until it caused the problem on October 25th. Could anything that happened during the power loss have caused the mode to change its phase? It seems to have been slowly rung up by ETMY MODE3, which has existed since before O1. Note that this violin mode had never rung up before. The investigation continues.
To ensure that the 1008.45 Hz line hasn't been slowly ringing up all this time, I've looked back at the ASD amplitude of this mode as far back as October 1st. The first plot attached shows the amplitude/sqrt(Hz) versus frequency of this particular mode, one plot per day. The second plot shows log amplitude versus time. I only plotted one data point per day (10:00:00 UTC if data was available, or any time where the BNS range was stable and the IFO had been locked for at least an hour). The last data point is today (10/28 02:00:00 UTC). This mode fluctuated between 1e-22 and 1e-21 from the beginning of the month (10/01) up until 10/20. You can see clearly that the amplitude begins to rise above its nominal value on 10/21, after the power outage on 10/20, and continues to grow exponentially until it started to cause problems on 10/25. This indicates that the amplitude growth was caused by positive feedback, which Sheila found to be from ETMY MODE3.
To conclude this study: this mode wasn't ringing up before October 20th. Why it started to ring up after the power outage is unclear; I can't think of anything else, but something must have changed during the power outage to cause this mode to change phase.
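For reference, each daily amplitude point in the plots above is just the strain ASD read off near the mode frequency; a minimal sketch with a placeholder array (not the actual DARM data):

import numpy as np
from scipy import signal

fs = 16384.0                              # strain channel sample rate [Hz]
darm = np.random.randn(int(600 * fs))     # placeholder for one 10-minute locked stretch
f, psd = signal.welch(darm, fs=fs, nperseg=int(64 * fs))   # ~0.016 Hz resolution
asd = np.sqrt(psd)
band = (f > 1008.3) & (f < 1008.6)
print("peak ASD near 1008.45 Hz:", asd[band].max())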
Should we worry...?
Was there a significant temperature excursion during the power outage?
Yes.
I've attached the plot of average temperature in the VEAs. After the power outage the LVEA average temperature had three big dips of about half a degree. The average temperature at EY seems to fluctuate more often, and EX had a couple of large drops.
These turn out to be just a coincidence with the power outage, according to John.
GregM, RickS, DarkhanT, JeffK, SudarshanS
This was an attempt to study what the GDS output will look like with kappa factors applied. GregM volunteered to create test data with the kappas (kappa_C and kappa_tst) applied to 'C01' data for the days of October 1 through 8. The kappa-corrected data is referred to as 'X00'. The correction factors are applied by averaging the kappas over 128 s. This averaging time was loosely determined from the study done last week (alog 22753) on what the kappas look like with different time-averaging durations.
Here, comparisons are made between 'C01' and 'X00'. The first plot contains the kappa factors that are relevant to us: kappa_tst and kappa_C are applied and are thus relevant, whereas the cavity pole (f_c) varies quite a bit at the beginning of each lock stretch and is thus significant, but we don't have the infrastructure to correct for it. The first page contains the kappas calculated with 10 s FFTs, plotted in red, and the 120 s averaged kappas, plotted in blue. Page 2 has a similar plot but with the kappas averaged over 20 minutes (it helps to see the trend more clearly).
Pages 3 and onwards have plots of GDS/Pcal at the Pcal calibration line frequencies, both magnitude and phase, for the C01 and X00 data. The most interesting plots are the magnitude plots, because applying the real part of kappa_tst and kappa_C does not have a significant impact on the phase. The most interesting result is that applying the kappas flattens out the long-term trends in GDS/Pcal at all four frequencies. However, at 36 Hz it flattens out the initial transient as well but introduces some noise into the data. At 332 Hz and 1 kHz it introduces the transient at the beginning of the lock stretch, and it does not seem to have much effect on the 3 kHz line. We think that this transient should be flattened out as well with the application of the kappas. The caveat is that we don't apply a cavity pole correction, and we know that the cavity pole has a significant effect in the frequency region above the cavity pole.
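For reference, the 128 s averaging of the 10 s-FFT kappas amounts to a simple running mean; a minimal sketch with toy values (the real correction was applied by GregM when producing the X00 frames):

import numpy as np

dt = 10.0                                        # one kappa value per 10 s FFT
kappa_tst = 1.0 + 0.02 * np.random.randn(1000)   # toy kappa_tst time series
navg = int(round(128.0 / dt))                    # number of samples in ~128 s
kappa_avg = np.convolve(kappa_tst, np.ones(navg) / navg, mode="same")
# the averaged kappas would then rescale the actuation path in the h(t) pipeline
print(kappa_avg[:5])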
DarkhanT, RickS, SudarshanK
After seeing the ~2% transient at the beginning of almost every lock stretch in the GDS [h(t)] trend at around 332 Hz, we had a hunch that this could be a result of not correcting for the cavity pole frequency fluctuation. Today, Darkhan, Rick and I looked at old carpet plots to see if we expect a variation similar to what we are seeing, and indeed the carpet plots predicted a few percent error in h(t) when the cavity pole varies by 10 Hz.
So we decided to apply a cavity pole fluctuation correction to h(t) at the calibration line frequency. We basically assumed that h(t) is sensing-dominated at 332 Hz, took the absolute value of the correction factor that a change in cavity pole would incur, C/C' = (1 + i f/f_c) / (1 + i f/f_c'), and multiplied the GDS output by it appropriately.
The result is attached below. Applying the cavity pole correction gets rid of the transient seen at the beginning of each lock stretch as well as the 1% overall systematic we saw on the whole trend. We used 341 Hz as the nominal cavity pole value, which is calculated from the model at the time of calibration. In the plot below, the cyan traces in both the bottom and top left are the output of GDS CALIB_STRAIN / PCAL uncorrected for kappas; the green on the top left is corrected for kappa_tst, kappa_C and cavity pole, whereas the green on the bottom left is corrected for kappa_C and kappa_tst only (we only know how to correct for these in the time domain).
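As a quick numeric check of the size of this effect (using the 341 Hz nominal cavity pole and the 10 Hz variation quoted above; plain Python, just evaluating the expression given in the previous entry):

f = 332.0        # Pcal line frequency [Hz]
fc = 341.0       # nominal cavity pole [Hz]
fc_p = 331.0     # cavity pole shifted down by 10 Hz
ratio = (1 + 1j * f / fc) / (1 + 1j * f / fc_p)
print("|C/C'| =", abs(ratio))   # ~0.985, i.e. about a 1.5% effect, consistent with the ~2% transient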