H1 ISC
evan.hall@LIGO.ORG - posted 11:43, Friday 16 October 2015 (22586)
LSC FF coherences with DCPD sum

The attached dtt screenshot shows the coherences of MICH and SRCL feedforward with the DCPD sum.

Note in particular the high coherence with SRCL FF between 2 and 5 Hz. In fact, most of the rms SRCL FF drive comes from below 5 Hz.

The attached dataviewer screenshot shows timeseries trends of the DCPD sum (below) and the FF signals (above) during lock acquisition. One can see that turning on the feedforward appears to increase the DCPD sum rms.

Also attached are TFs of the foton filters used for the feedforward.
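
For anyone who wants to reproduce this kind of coherence measurement offline, here is a minimal gwpy sketch. The feedforward channel names and the times are my guesses/placeholders, not necessarily the ones used in the attached DTT template.

    # minimal sketch; channel names and times are placeholders
    from gwpy.timeseries import TimeSeriesDict

    chans = ['H1:OMC-DCPD_SUM_OUT_DQ',    # DCPD sum
             'H1:LSC-SRCLFF1_OUT_DQ',     # SRCL feedforward drive (assumed name)
             'H1:LSC-MICHFF_OUT_DQ']      # MICH feedforward drive (assumed name)
    data = TimeSeriesDict.get(chans, 1128500000, 1128500600)

    dcpd = data[chans[0]]
    for ff in chans[1:]:
        coh = dcpd.coherence(data[ff], fftlength=32, overlap=16)
        idx = int(3.0 / coh.df.value)     # frequency bin nearest 3 Hz
        print('%s: coherence with DCPD sum at ~3 Hz = %.2f' % (ff, coh.value[idx]))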

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:13, Friday 16 October 2015 (22583)
CDS model and DAQ restart report, Friday-Thursday 9th-15th October 2015

O-1 Days 22-28

model restarts logged for Thu 15/Oct/2015

No restarts reported

model restarts logged for Wed 14/Oct/2015

No restarts reported

model restarts logged for Tue 13/Oct/2015
2015_10_13 08:03 h1calex
2015_10_13 08:05 h1broadcast0
2015_10_13 08:05 h1dc0
2015_10_13 08:05 h1nds0
2015_10_13 08:05 h1nds1
2015_10_13 08:05 h1tw0
2015_10_13 08:05 h1tw1

Maintenance: new calex model with associated DAQ restart

model restarts logged for Mon 12/Oct/2015

No restarts reported

model restarts logged for Sun 11/Oct/2015

No restarts reported

model restarts logged for Sat 10/Oct/2015

No restarts reported

model restarts logged for Fri 09/Oct/2015

No restarts reported

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 08:00, Friday 16 October 2015 - last comment - 09:21, Friday 16 October 2015(22580)
Ops Owl Shift Summary

TITLE: "10/16 [OWL Shift]: 07:00-15:00UTC (00:00-08:00 PDT), all times posted in UTC"

STATE of H1: Observing at ~80 Mpc for the past 2 hours.

SUPPORT: Kiwamu, Sheila

SHIFT SUMMARY: Difficulty locking for more than half of the shift. First I ran into a Guardian issue where the gains for INP1 and PRC1 were not being turned on. Corey had probably turned the right gains back on by accident when he reverted the SDF differences (after the a2l script failed), so we weren't aware this was happening. After turning the gains on by hand a couple of times, and eventually reloading the Guardian code as Kiwamu suggested, I ran into another problem and kept losing lock at ENGAGE_ASC_PART3. During phone calls with Sheila and Kiwamu we thought it was an issue with the ASC. Kiwamu came over and, after noticing a huge drop in RF90 prior to a lockloss, suggested I run an initial alignment. It turned out there was a 2 urad offset in TMSY. I didn't have any trouble locking the arms on green or locking DRMI, so I didn't suspect a bad alignment (although I kept having trouble finding IR; maybe that could have been a red flag?).

After the initial alignment we lost lock at Switch to QPDS once, but everything went smoothly afterward. We got to NOMINAL_LOW_NOISE, ran the a2l script (I had no trouble running it on my account), and finally went to Observing for the first time in almost 7 hours at 12:56 UTC.

 

Tonight I learned how 2 urad can make life miserable. My world will never be the same again.....

 

INCOMING OPERATOR: Jim

ACTIVITY LOG:

See Shift Summary.

Comments related to this report
sheila.dwyer@LIGO.ORG - 09:21, Friday 16 October 2015 (22584)

I apologize for not reloading the guardian with the INP1+PRC1 fix.

The A2L script does not touch the loop gains at all; that is all handled by guardian.

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 05:59, Friday 16 October 2015 (22579)
Back Observing at 12:56:10 UTC

Back Observing at 12:56:10 UTC after running the a2l script. It went fine for me (I ran it on my account).

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 01:24, Friday 16 October 2015 - last comment - 06:16, Friday 16 October 2015(22575)
Mayday! Mayday!

POP_A_LF and AS_AIR RF90 aren't looking good. While I was wondering what was going on, I noticed differences in SDF: the ASC INP1 and PRC1 pitch and yaw gains were 0 (setpoint = 1). Was this a leftover from the a2l script error? I reverted it, accepted the changes, and hoped for the best. No luck though. The IFO just lost lock at NOMINAL LOW NOISE. CSOFT was running away.

 

Seeing how the a2l script left us with problems tonight, I WILL NOT RUN IT until I know that it's safe to run.

Images attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 02:20, Friday 16 October 2015 (22576)

Lost lock at ENGAGE ASC PART3 twice in a row. I've attached the log and the lockloss plots for both locklosses (and the ASC StripTool from the most recent one). Sorry about the terrible screenshots of the log (only showing the DOWN state).

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 02:45, Friday 16 October 2015 (22577)

I called Kiwamu to make sure the ASC loop gains were correct. It turned out the INP1 and PRC1 (pitch and yaw) gains were 0 when they were supposed to be 1. I put in the gains by hand before I realized that PRC1 had no ramp time at all. There was a spike from PRC1 yaw and the IFO lost lock shortly after, so I gave PRC1 a ramp time of 10 s.
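
For reference, a gain change with a ramp like this can also be scripted. A minimal sketch using pyepics is below; the channel names follow the usual ASC filter-module pattern and are my assumption, so check them against the MEDM screen before using.

    # minimal sketch; channel names are assumed, verify before use
    from epics import caput

    for dof in ('P', 'Y'):
        caput('H1:ASC-PRC1_%s_TRAMP' % dof, 10)   # 10 s ramp so the gain step is smooth
        caput('H1:ASC-PRC1_%s_GAIN' % dof, 1)     # nominal gain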

Images attached to this comment
kiwamu.izumi@LIGO.ORG - 06:16, Friday 16 October 2015 (22578)

Nutsinee, Sheila, Kiwamu,

Nutsinee had a number of lock losses (5-ish) in ENGAGE_ASC_PART3. We blame TMSY for these lock losses.

Nutsinee went through the initial alignment and it corrected TMSY in pitch by 2 urad. See the attached screenshot showing a 2-day trend of the TMS angles. The large step in the middle of the screenshot must be the temporary correction that Corey introduced last night (alog 22535). You can see that the TMSY angle is now back to where it was 2 days ago. Tonight the symptom was that, when we engaged the SOFT loops, something started running away and decreasing the cavity power everywhere. We think that the different TMSY angle was pulling the SOFT loops to some kind of bad alignment point. It had nothing to do with the 20 dB boost in the SOFT loops (for example, alog 21587), which we suspected at the very beginning of the investigation. After the initial alignment everything seems to be going as smoothly as usual, and we made it back to low noise (alog 22579). We are happy.

In addition, there was another issue with INP1 and PRC1: they were not engaged in ENGAGE_ASC_PART1 and PART2. This was fixed by reloading the ISC_LOCK guardian, since it had been edited the previous evening but not reloaded since then.

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 23:57, Thursday 15 October 2015 (22565)
EVE Ops Summary

TITLE:  10/15 EVE Shift:  23:00-07:00UTC (16:00-00:00PDT), all times posted in UTC     

STATE of H1:  Currently in Lock Acquisition

Incoming Operator:  Nutsinee

Support:  Jenne on phone about A2L script

Quick Summary:  Two mysterious locklosses toward the end of the shift (see specific entries).

Shift Activities:

H1 General
corey.gray@LIGO.ORG - posted 23:55, Thursday 15 October 2015 - last comment - 07:31, Friday 16 October 2015(22574)
H1 Mysterious Lockloss @ 6:15

H1 dropped out at 6:15 for unknown reasons (seismic quiet & Strip Tools looked fine).

Attempt #1:  Guardian got stuck at Check IR, so I moved the ALS PLL DIFF OFFSET slider until the Yarm power started flashing, and then Guardian continued.  Engage ASC Part3 seems to be a bit of a time sink here (it took about 6 min for the control signals to take over).  Had a lockloss during the Reduce Modulation Depth step.

Attempt #2:  Nutsinee was in early for her shift so I handed over the ifo to her.  She mentioned watching Jenne run the A2L script, so she will give it a try during her shift.

I also asked Nutsinee to take a look at the previous locklosses since she has experience running lockloss tools.

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 07:31, Friday 16 October 2015 (22582)

I took a quick look at this particular lockloss, and it seems like PRC2 P was growing prior to the lockloss. The zoomed-in version of the same plot doesn't tell me much, except that DHARD might have been glitchy. Looking at the LSC channel list, it seems like POPAIR RF90 and ASAIR_A_LF glitched right before the lockloss. The witness channel list tells me that SR3 is likely responsible for the glitch.

 

I don't really have a conclusion here. Just reporting what I observed.

Images attached to this comment
H1 General
corey.gray@LIGO.ORG - posted 22:20, Thursday 15 October 2015 - last comment - 10:20, Wednesday 21 October 2015(22572)
H1 Lockloss @ 4:10utc

H1 had a lockloss at 4:10 UTC with no obvious culprit (nothing seismic, but I forgot to look at the Strip Tools for anything obvious).

Commissioning Mode for A2L:  Once in Nominal Low Noise, I ran the A2L script, but unfortunately it had errors and left the SDF with diffs (after consulting with Jenne I was able to revert the SDF).  So I spent about another 22 min running and troubleshooting the A2L recovery.  I will send Jenne the errors from the A2L script session.

Then went to Observation Mode at 5:10 UTC.

Comments related to this report
corey.gray@LIGO.ORG - 22:25, Thursday 15 October 2015 (22573)

Operators:  Do not run the A2L script until Jenne troubleshoots it.  (I had "Permission denied" errors, so maybe there's an issue with running the script while logged in as ops?)

I will put this "pause" on running A2L in the new Ops Sticky Note page.

nutsinee.kijbunchoo@LIGO.ORG - 06:55, Friday 16 October 2015 (22581)

I was able to run the a2l script on my account. The script cleared all the SDF differences after it was done, as Jenne advertised. All is well.

jenne.driggers@LIGO.ORG - 16:57, Friday 16 October 2015 (22592)

For lack of a better place to write this, I'm leaving it as a comment to this thread. 

The problem was that the results directory wasn't writable by the Ops accounts, although it was by personal accounts.  I've chmod-ed the directory, so the A2L script should run no matter who you are signed in as.

Please continue running it (instructions) just before going to Observe, or when dropping out of Observe (i.e. for Maintenance).
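
For what it's worth, a pre-flight check along these lines (a sketch only; the results path is a placeholder, not the real one) would turn a cryptic "Permission denied" into an obvious message before the script touches anything:

    # minimal sketch of a permissions pre-check for the A2L wrapper
    import os
    import sys

    results_dir = '/path/to/a2l/results'   # placeholder for the real results directory
    if not os.access(results_dir, os.W_OK):
        sys.exit('A2L: results directory %s is not writable by this account; '
                 'fix the permissions (e.g. chmod g+w) before running.' % results_dir)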

H1 General
corey.gray@LIGO.ORG - posted 18:32, Thursday 15 October 2015 - last comment - 18:53, Thursday 15 October 2015(22568)
GRB Alarm: Stand-down 1:27-2:27UTC!
Comments related to this report
keith.thorne@LIGO.ORG - 18:53, Thursday 15 October 2015 (22569)CDS, INJ
I implemented an automatic restart system using monit for the GRB alert code; see LLO entry 21671.  For unknown reasons, this new method of running the script (as a detached process from an init script, not in a terminal window) also made all the GraceDB comm errors go away.  Implementation details are posted in that log entry if LHO wishes to follow suit.
H1 CAL
jeffrey.kissel@LIGO.ORG - posted 16:05, Thursday 15 October 2015 - last comment - 13:26, Tuesday 20 October 2015(22561)
H1 DARM OLGTFs and PCAL to CAL-CS TFs confirm that H1 ETMY Bias Sign Flip Caused Minimal Change
J. Kissel, K. Izumi, D. Tuyenbayev

As expected from the preliminary analysis of the actuation strength change (see LHO aLOG 22558), DARM open loop gain TFs and PCAL to CAL-CS transfer functions reveal minimal change in the calibration of the instrument. For these measurements, all we have changed in the CAL-CS model is switching the two signs in the ESD stage, which cancel each other. As such, the only change should be in the discrepancy between the canonical model and the actual current strength of the TST / L3 / ESD stage. Again, this change was small (sub ~10%, 5 [deg]) and has been well tracked by calibration lines, so we will continue forward, focusing our efforts on determining how to apply the time-dependent corrections.

Attached are screenshots of the raw results. Darkhan's working on the more formal results. Stay tuned!

The after-bias-flip results live here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/
2015-10-15_H1_DARM_OLGTF_7to1200Hz_AfterBiasFlip.xml

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/
2015-10-15_PCALY2DARMTF_7to1200Hz_AfterBiasFlip.xml
Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 21:10, Thursday 15 October 2015 (22571)CAL

Jeffrey K, Kiwamu, Sudarshan, Darkhan

Some conclusions based on the analysis of the DARM OLG TF and the PCAL to DARM TF measurements taken after flipping the ESD bias sign:

  • Together with the ESD bias sign flip we also inverted the sign of the ESD DriveAlign gain, so the overall L3 stage actuation function sign was not affected by today's sign flip activities;
  • Since the L3 stage actuation function sign was not affected, the CAL-CS front-end calibration and the EPICS values for calculating kappas are still valid1;
  • The actuation function analysis showed that flipping the ESD bias sign resulted in a reduction of the ESD actuation strength by 7-8% (LHO alog 22558);
  • The change in ESD actuation strength was reflected in the mean κtst calculated from calibration lines over 43 minutes of undisturbed data after the ESD bias sign flip; a comparison of the most recent actuation and sensing function measurements vs. the kappa-corrected H1 DARM model showed residuals at the same level as the pre-flip measurements vs. their kappa-corrected models (see attached plots).

1Although the total L3 stage actuation function sign did not change, the ESD bias sign flip was reflected in two of the SUS ETMY L3 stage model EPICS replicas in the CAL-CS model: H1:CAL-CS_DARM_ANALOG_ETMY_L3_GAIN and H1:CAL-CS_DARM_FE_ETMY_L3_DRIVEALIGN_L2L_GAIN.
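
For context, the cancellation can be seen from the linearized ESD force; schematically (ignoring charge and frequency dependence),

    F \propto (V_{\mathrm{bias}} - V_{\mathrm{sig}})^2
      \;\Rightarrow\;
      \frac{\partial F}{\partial V_{\mathrm{sig}}} \propto -2\, V_{\mathrm{bias}},

so flipping the bias sign flips the sign of the linear actuation coefficient, and the simultaneous sign flip of the DriveAlign gain restores the overall sign of the L3 path, even though the two CAL-CS gains named above each change sign individually.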

Below we list the kappas used in the kappa-corrected parameter file for the measurements taken after the ESD bias sign flip on Oct 15. We used 43-minute mean kappas calculated from 60 s FFTs starting at GPS 1128984060 (values prior to the bias sign flip are given in brackets; they were previously reported in LHO alog comment 22552):

κtst = 1.004472 (1.057984)
κpu = 1.028713 (1.021401)
κA = 1.003843 (1.038892)
κC = 0.985051 (0.989844)
fc = 334.512654 (335.073564) [Hz]
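
For reference, these factors enter the DARM model schematically as follows (this is the generic time-dependent correction form; the exact conventions are the ones implemented in CompareDARMOLGTFs_O1.m):

    C(f;t) = \kappa_C(t)\, \frac{C_0(f)}{1 + i f / f_c(t)}, \qquad
    A(f;t) = \kappa_{\mathrm{tst}}(t)\, A_{\mathrm{tst}}(f) + \kappa_{\mathrm{pu}}(t)\, A_{\mathrm{pu}}(f),

with the kappa-corrected response R(f;t) = [1 + A(f;t) D(f) C(f;t)] / C(f;t), where D(f) is the digital DARM filter and κA is the overall actuation scale factor in the same bookkeeping.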

An updated comparison script and parameter files for the most recent measurements were committed to calibration SVN (r1685):

/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/CompareDARMOLGTFs_O1.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128979968.m
/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1128979968_kappa_corr.m

Comparison plots were committed to CalSVN (r1684):

/trunk/Runs/O1/H1/Results/DARMOLGTFs/2015-10-15_H1DARM_O1_cmp_AfterSignFlip_*.pdf

As a double check, we created a Matlab file with the EPICS values for the kappas using the H1DARMparams_1128979968.m parameter file and made sure that the values agree with the ones currently written in the EPICS records. The logs from calculating these EPICS values were committed to CalSVN (r1682):

/trunk/Runs/O1/H1/Scripts/CAL_EPICS/D20151015_H1_CAL_EPICS_VALUES.m
/trunk/Runs/O1/H1/Scripts/CAL_EPICS/20151015_H1_CAL_EPICS_*

Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 13:26, Tuesday 20 October 2015 (22686)
For the record, we changed the requested bias voltage from -9.5 [V_DAC] (negative) to +9.5 [V_DAC] (positive).
H1 ISC
peter.fritschel@LIGO.ORG - posted 11:23, Wednesday 14 October 2015 - last comment - 10:13, Friday 16 October 2015(22513)
Residual DARM motion: comparison of H1 and L1

The first attached plot (H1L1DARMresidual.pdf) shows the residual DARM spectrum for H1 and L1, from a recent coincident lock stretch (9-10-2015, starting 16:15:00 UTC). I used the CAL-DELTAL_RESIDUAL channels, and undid the digital whitening to get the channels calibrated in meters at all frequencies. The residual and external DARM rms values are:

        residual DARM    external DARM
  H1    6 x 10^-14 m     0.62 micron
  L1    1 x 10^-14 m     0.16 micron

The 'external DARM' is the open loop DARM level (or DARM correction signal), integrated down to 0.05 Hz. The second attached plot (H1L1extDARMcomparison.pdf) shows the external DARM spectra; the higher rms for H1 is mainly due to a higher microseism.
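
For reference, here is a minimal sketch of the rms calculation described above. The channel name, times, and whitening zpk below are placeholders; the real whitening design has to come from the corresponding foton filter.

    import numpy as np
    from scipy.signal import freqs_zpk
    from gwpy.timeseries import TimeSeries

    # placeholder channel and times -- substitute the actual lock stretch
    data = TimeSeries.get('H1:CAL-DELTAL_RESIDUAL_DQ', 1128500000, 1128500600)
    asd = data.asd(fftlength=64, overlap=32)
    freqs = asd.frequencies.value

    # undo the digital whitening by dividing by |W(f)|; this zpk is a placeholder
    _, W = freqs_zpk([-2*np.pi*1.0]*2, [-2*np.pi*100.0]*2, 1.0, worN=2*np.pi*freqs)
    asd_m = asd.value / np.abs(W)           # schematically, m/rtHz

    # rms accumulated from high frequency down to 0.05 Hz
    df = asd.df.value
    keep = freqs >= 0.05
    rms = np.sqrt(np.cumsum((asd_m[keep]**2)[::-1]) * df)[::-1]
    print('residual DARM rms down to 0.05 Hz: %.2e m' % rms[0])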

Some things to note:

The 3rd attached plot (H1L1DARMcomparison.pdf) shows the two calibrated DARM spectra (external/open loop) in the band from 20-100 Hz. This plot shows that H1 and L1 are very similar in this band where the noise is unexplained. One suspect for the unexplained noise could be some non-linearity or upconversion in the photodetection. However, since the residual rms fluctuations are 6x higher on H1 than on L1, and yet their noise spectra are almost identical in the 20-100 Hz band, this seems to be ruled out, or at least not supported by this look at the data. More direct tests could (and should) be done, e.g. by changing the DARM DC offset, or by intentionally increasing the residual DARM to see if there is an effect in the excess noise band.

Non-image files attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 09:43, Thursday 15 October 2015 (22550)

We briefly tried increasing the DCPD rms by decreasing the DARM gain by 6 dB below a few hertz (more specifically, it's a zero at 2.5 Hz, a pole at 5 Hz, and an ac gain of 1... it's FM5 in LSC-OMC_DC). This increased the DCPD rms by slightly less than a factor of 2. There's no clear effect on the excess noise, but it could be that we have to be more aggressive in increasing the rms.

  • Nominal DARM configuration: 2015-10-14 20:40:00 to 20:45:00 Z
  • Reduced low-frequency gain: 2015-10-14 20:34:30 to 20:39:30 Z
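
For reference, a quick numerical check of the filter shape quoted above: with a zero at 2.5 Hz, a pole at 5 Hz, and an ac gain of 1, the DC gain is 2.5/5 = 0.5, i.e. the 6 dB reduction below a few hertz.

    # sketch: magnitude of a zero(2.5 Hz)/pole(5 Hz) filter with unity ac gain
    import numpy as np
    from scipy.signal import freqs_zpk

    f = np.logspace(-1, 2, 400)                                      # 0.1 Hz to 100 Hz
    _, H = freqs_zpk([-2*np.pi*2.5], [-2*np.pi*5.0], 1.0, worN=2*np.pi*f)
    print('gain at 0.1 Hz: %6.2f dB' % (20*np.log10(abs(H[0]))))     # about -6 dB
    print('gain at 100 Hz: %6.2f dB' % (20*np.log10(abs(H[-1]))))    # about 0 dB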
Images attached to this comment
hartmut.grote@LIGO.ORG - 10:13, Friday 16 October 2015 (22585)
Interesting, but do I interpret it correctly that you (in the experiment reported in the comment) assume that the DARM error point represents the true DARM offset/position? I thought that it is the case, at least at L1, that when DARM is locked on the heterodyne signal and the OMC is locked onto the carrier (with the usual DC offset in DARM), the power transmitted by the OMC fluctuates by several 10%. Assuming that the TEM00-carrier coupling to the OMC would be no different when DARM is locked on the OMC transmitted power, then the 'true' DARM would also fluctuate this much, impressing this fluctuation onto DARM. This fluctuation should then show up in the heterodyne signal. So in this case, increasing the DARM gain to reduce the rms would probably not do anything. Or?
H1 CDS (CDS, GRD, ISC)
sheila.dwyer@LIGO.ORG - posted 09:30, Wednesday 14 October 2015 - last comment - 20:17, Thursday 15 October 2015(22510)
Lockloss related to epics freeze

As Jim had almost relocked the IFO, we had an EPICS freeze in the guardian state RESONANCE.  ISC_LOCK had an EPICS connection error.

What is the right thing for the operator to do in this situation?

Are these EPICS freezes becoming more frequent again?

screenshot attached.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 14:39, Wednesday 14 October 2015 (22516)

EPICS freezes never fully went away and are normally only a few seconds in duration. This morning's SUS ETMX event lasted for 22 seconds, which exceeded Guardian's timeout period. To get the outage duration, I second-trended H1:IOP-SUS_EX_ADC_DT_OUTMON. Outages happen on a per-computer basis, not per-model, so I have put the IOP duotone output EPICS channels into the frame as EDCU channels (accessed via Channel Access over the network). When these channels are unavailable, the DAQ sets them to zero.

For this event the time line is (all times UTC)

16:17:22 DAQ shows EPICS has frozen on SUS EX
16:17:27 Guardian attempts connection
16:17:29 Guardian reports error, is retrying
16:17:43 Guardian times out
16:17:45 DAQ shows channel is active again

The investigation of this problem is ongoing; we could bump up the priority if it becomes a serious IFO operations issue.
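
For reference, a minimal gwpy sketch of this kind of check is below; the trend-channel suffix follows the usual NDS second-trend convention, and the time window brackets the event above.

    # count the seconds during which the DAQ wrote zeros for the EDCU channel,
    # i.e. the seconds during which EPICS on that front end was unreachable
    from gwpy.timeseries import TimeSeries

    chan = 'H1:IOP-SUS_EX_ADC_DT_OUTMON.mean,s-trend'
    trend = TimeSeries.get(chan, 'Oct 14 2015 16:10', 'Oct 14 2015 16:25')
    outage = int((trend.value == 0).sum())
    print('EPICS outage on SUS EX: %d s' % outage)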

jameson.rollins@LIGO.ORG - 14:57, Wednesday 14 October 2015 (22517)

To be clear, it sounds like there was a lockloss during acquisition that was caused by some kind of EPICS drop out.  I see how a lockloss could occur during the NOMINAL lock state just from an EPICS drop out.  guardian nodes might go into error, but that shouldn't actually affect the fast IFO controls at all.

jameson.rollins@LIGO.ORG - 20:17, Thursday 15 October 2015 (22570)

Sorry, I meant that I can not see how a guardian EPICS dropout could cause a lock loss during the nominal lock state.
