H1 ISC
sheila.dwyer@LIGO.ORG - posted 14:08, Wednesday 19 October 2016 - last comment - 14:08, Wednesday 19 October 2016(30649)
What happened to REFLAIR on Monday afternoon?

Sheila, Terra, Jeff K, Patrick, Nutsinee

Tonight we continued to have the PRMI locking difficulties of yesterday.  Nutsinee managed to lock PRMI by lowering the MICH gain and commenting the boost and offloading out of the guardian.

Patrick tracked down some differences between the "good" PRMI time that Jenne mentioned (Oct 17 16:03 UTC) and now: there was a filter missing in the BS top mass offloading, and two whitening stages (with the anti-whitening) were engaged in the middle of the lock yesterday afternoon.  We don't know why this happened, but just undoing it actually makes it impossible to lock at all.

To match the PRCL loop shape to a reference, I had to add 6 dB of gain (PRCL digital gain of 16 rather than 8).  However, it looks like we are still missing some kind of boost in PRCL (1st screenshot).  The second screenshot shows the MICH loop measured today (before and after the PRCL fix) on the left, and an old measurement on the right.

We have commented out the boost and offloading of MICH in the guardian for PRMI, and Nutsinee created new prmi_mich_gain_als and prmi_prcl_gain_als parameters in lscparams; we have set them to 0.7 and 16 for now, although the nominal values should be 1.4 and 8.  You will have to reduce the MICH gain by hand to get it to lock.
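
For reference, those factor-of-two gain changes correspond to +/-6 dB; a quick arithmetic check (a hypothetical snippet, not guardian code):

    # dB bookkeeping for the gain changes above (arithmetic only).
    from math import log10

    def db(ratio):
        return 20 * log10(ratio)

    print(db(16 / 8))     # PRCL: +6.0 dB (digital gain 16 instead of the nominal 8)
    print(db(0.7 / 1.4))  # MICH: -6.0 dB (gain 0.7 instead of the nominal 1.4)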

Could whoever did measurements that required changing the REFLAIR whitening gain please triple-check that whatever they did is not causing us problems in the morning?

Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 05:58, Wednesday 19 October 2016 (30650)
I logged in and checked the Beckhoff machines for errors. All of the terminals are in OP. There is a CRC on h1ecatc1 EndXLink R5_6 (not sure what this means). The only other thing I noticed is that the Send Frames diagnostic is very high and increasing on all three machines (also not sure what this means).

Attached is a screenshot of h1ecatc1.
Images attached to this comment
kiwamu.izumi@LIGO.ORG - 10:51, Wednesday 19 October 2016 (30658)

Jim W, Kiwamu

As Sheila suspected, the difficulty turned out to be due to wrong whitening settings on REFLAIR RF45, which we had changed this past Monday during the frequency noise study (30610). We found two whitening stages engaged with a gain of 12 dB, while according to the trend there should have been no whitening stages and 0 dB gain. So we set them back to no whitening stages with 0 dB gain. This apparently improved the situation -- we are now able to lock DRMI reliably and proceed with the rest of the full lock sequence.

The SDF is updated accordingly.

H1 General
vernon.sandberg@LIGO.ORG - posted 10:50, Wednesday 19 October 2016 (30657)
Work Permit Summary for 2016 October 18

 

Work Permit | Date | Description | alog/status
6258 | 2016-10-18 10:10 | Reconfigure DAQ to rename the old commissioning frame as the new full/raw frame. Stop writing the smaller old science frames. Reconfigure NDS to serve these renamed frames. Coordinate with LDAS on archival. | 30623, 30637
6257 | 2016-10-18 08:52 | Perform scheduled maintenance on scroll compressors #3, #4, #5 @ Y-End vent/purge-air supply skid. Maintenance activity will require the compressors to run for brief periods of time. Lock-out/tag-out power to skid. | 30631
6256 | 2016-10-18 08:28 | Update software on timing RF Counter/Comparator with release 5, which disables 4 internal LEDs. | 30625
6255 | 2016-10-17 15:59 | Activity (see also WP #6221): Start and run rotating pumps while baking RGA using variac-limited flexible heaters. | 30630
6254 | 2016-10-17 15:36 | Pick empty BSC ISI storage container with load cell attached to obtain an accurate weight of the container. (This work is in support of 3rd IFO efforts.) | 30632
6253 | 2016-10-17 14:17 | Place HWS cameras (LVEA & EX) on temporary power supplies, same configuration as EY. Will need laser hazard at EX and LVEA. ECR 1500251, FRS 4559. | 30644
6252 | 2016-10-17 13:54 | Update the DMT calibration code to gstlal-calibration 1.0.6. This release contains a few bug fixes based on testing from this week's lock stretches for the coherence gating for the kappa calculations. | 30626
6251 | 2016-10-17 12:07 | Upgrade all front end models and all DAQ systems to RCG 3.2. Will require restarts of all models and of the DAQ. During the upgrade no front end data will be recorded (EDCU will be run to trend VAC). Est. front end down time about 1 hour. | 30609, 30637
6250 | 2016-10-17 11:19 | Investigate glitches revealed by Omicron. Complete routine maintenance: diagnosing and tuning OFS, etc. May include blocking Pcal beam for extended periods to diagnose source of glitches. | 30548
6249 | 2016-10-17 09:05 | Install two thermocouples inside cryopump #3 exhaust pipe and connect to existing CDS PV signals: H0:VAC-MY_CP3_TE202A_DISCHARGE_TEMP_DEGC & H0:VAC-MY_CP3_TE202B_DISCHARGE_TEMP_DEGC. This will require removing the exhaust check valve. | 30662
6248 | 2016-10-17 08:57 | Investigate and repair broken readback on some RF amps: AMP18M, AMP24M1, and DIV40M. | 30633
6247 | 2016-10-17 08:57 | Pull power and network cable for RGA in corner station. Work will consist of climbing over HAM4 and pulling around BSC2 and BSC3. | 30635
6246 | 2016-10-17 08:55 | Investigate and repair H1:ISC-RF_C_DIV40M_POWEROK 40 MHz divider readback. | 30633
6245 | 2016-10-13 13:51 | Turn off cdswiki to clone its hard drive and move the server to newer hardware due to system failures. |
Previous Work Permits
6237 | 2016-10-10 15:43 | Remove the Solaris QFS/NFS server h1ldasgw2. This was installed in the summer to NFS-export frames to the NDS servers. It has served corrupted frames, so we will return to the previous configuration. | 30637
LHO VE
kyle.ryan@LIGO.ORG - posted 10:08, Wednesday 19 October 2016 - last comment - 14:48, Wednesday 19 October 2016(30654)
~0915 hrs. local -> Adjusted heating of RGA @ X-end
Don't have remote monitoring and will need to enter X-end VEA to make measurements sometime this afternoon.
Comments related to this report
kyle.ryan@LIGO.ORG - 14:48, Wednesday 19 October 2016 (30671)
~1320 hrs. local -> Measured zone temps and made small adjustments 
LHO General
patrick.thomas@LIGO.ORG - posted 07:59, Wednesday 19 October 2016 - last comment - 10:44, Wednesday 19 October 2016(30652)
Ops Owl Shift Summary
Helped Sheila troubleshoot. Ran through an initial alignment. Still not able to lock DRMI. Peter is in the PSL enclosure aligning the PMC. Strange behaviour with the PSL noise eater (Peter agrees): it reports oscillation, but with no apparent effect on the ISS, FSS, etc.

11:35 UTC Set request to LOCK_DRMI

Did DRMI catch very very briefly? Went to CHECK_MICH_FRINGES. Aligned BS. Back to LOCK_DRMI. No catches.

11:50 UTC Start initial alignment
11:53 UTC PSL noise eater oscillating. No one else is here and I'm not sure I'm permitted to go into the LVEA alone to fix it. Leaving it until someone comes in. FSS, ISS and PMC are still locked despite this.
12:35 UTC Finished initial alignment

Set to LOCK_DRMI. Still no catches. PSL noise eater is still in oscillation.

13:38 UTC Peter to PSL enclosure to align PMC. Set IFO to down.
13:49 UTC Bubba to mid X to check on supply fan
14:21 UTC Bubba back
Comments related to this report
peter.king@LIGO.ORG - 10:44, Wednesday 19 October 2016 (30656)
I do not recall seeing a circumstance where all of the various servos were locked and seemingly happy while the noise eater was oscillating.  I have seen the FSS, ISS and PMC happy while the noise eater was oscillating, but when that happens the HPO's PZT tends to sawtooth around because the injection locking is less than happy.

    However, today the HPO PZT was flat.
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 00:56, Wednesday 19 October 2016 (30645)
Ops EVE shift summary

TITLE: 10/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Initial alignment didn't go so smoothly, but we got through it eventually. We had issues at INPUT_ALIGN (it kept knocking the ISS, FSS, and PMC out of lock), MICH_DARK_LOCKED (MICH wouldn't catch the mode -- I had to pre-adjust the BS in DOWN before requesting MICH_DARK_LOCKED), and SRC_ALIGN (the signal kept running away instead of converging). We kept losing lock at PRMI; a similar thing happened last night. Sheila mentioned that the MICH gain was too high. We lowered the gain by half and managed to keep PRMI locked stably.

 

Note: The lockloss tool complained that ASC-AS_A_DC_SUM_OUT_DQ doesn't exist (dataviewer did too), so I removed the channel from the lockloss file.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 00:47, Wednesday 19 October 2016 - last comment - 15:59, Thursday 20 October 2016(30648)
PMC HV noise, TCS, and lump in DARM from 200-1000Hz

The first attached screenshot is a series of DCPD spectra from the last several months.  When we first turned on the HPO, the noise in DARM did not get any worse.  The noise stayed similar to O1 until July 16th; the lump from 200 Hz to 1 kHz started to appear in late July, when we started a series of changes in alignment and TCS to keep us stable with a decent recycling gain at 50 Watts.

PMC

Last night, Gabriele pointed out that the noise in DARM is coherent with the PMC HV.  The HV readback had been mostly white noise (probably ADC noise) until the last few weeks, but has been getting noisier, so that some of the jitter peaks show up in it now (second attached screenshot; colors correspond to the dates in the DCPD spectrum legend).  This may be related to the problem described in LLO alogs 16186 and 15986.  The PMC transmission has been degrading since July, which could be a symptom of misalignment.  Since July, the REFL power has nearly doubled from 22 to 38 Watts, while the transmission has dropped 28%.  The PMC sum has also dropped by 4%, roughly consistent with the 3% drop in the power out of the laser.  Peter and Jason are planning on realigning the PMC in the morning, so it will be interesting to see if we see any difference in the HV readback.

TCS + PRC alignment:

The other two main changes we have made in this time are changes in the alignment through the PRC, and changes to TCS.  These things were done to improve our recycling gain and stability without watching the noise impact carefully.  In July we were using no offsets in the POP A QPDs.  We started changing the offsets on August 1st, after the lumpy noise first appeared around July 22nd.  We have continued to change them every few weeks since then, generally moving in the same direction.

The only TCS change that directly corresponds to the dates when our noise got worse was the reduction of the ETMY RH from 0.75 W each on July 22nd; the other main TCS changes happened September 10th.  It would be nice to undo some of these changes before turning off the HPO, even if it means reducing the power to be stable.

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 07:26, Wednesday 19 October 2016 (30651)

The HVMon signal (PMC length) shows a peak at about 600 Hz/rtHz. We don't think this is an indication of frequency noise from the laser, but rather an error point offset picked up in the PMC PDH signal. As such, this is length noise added to the PMC and suppressed by the cavity storage time. Assuming this factor is about 1/10000, we would still get ~100 mHz/rtHz modulated onto the laser frequency. That seems like a lot.

HVMon measures 1/50 of the voltage sent to the PZT. With no whitening this is not very sensitive.
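
For reference, the arithmetic behind that estimate (a quick sketch; the 1/10000 suppression factor is the assumption stated above):

    # Back-of-envelope version of the estimate above (numbers from the text).
    peak = 600.0                 # apparent peak in HVMon, in Hz/rtHz
    suppression = 1.0 / 10000.0  # assumed suppression from the cavity storage time
    print(peak * suppression)    # -> 0.06 Hz/rtHz, i.e. of order ~100 mHz/rtHz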

daniel.sigg@LIGO.ORG - 11:02, Wednesday 19 October 2016 (30659)

After the PSL team adjusted the PMC alignment the ~400Hz peaks are no longer visible in the HVMon spectrum. The coherence is gone as well—except for the 1kHz peak.

Non-image files attached to this comment
sheila.dwyer@LIGO.ORG - 16:38, Wednesday 19 October 2016 (30672)

About the PMC:

The first screenshot shows the small improvement in DARM we got after the PMC realignment.  While the coherence with the PMC HV may be gone, it might just be that the PMC HV signal is now buried in the noise of the ADC.  At a lockloss I went to the floor and measured HVMon, then plugged it into one of the 560s (AC coupled, 10 Hz high pass, gain of 100) and sent the output into H1:LSC-EXTRA_AI_1.  We still see high coherence between this channel and DARM (last attachment).

Also, the PMC realignment this morning did decrease the reflected power, but the transmitted power also dropped.

            refl (W)   trans (W)   sum (W)   laser power out (W)
July           20         126        157            174
Yesterday      35         103        138            169
Today          27         100        126            169

About turning the HPO on not adding noise:

Kiwamu pointed out that the uncalibrated comparison above, showing that the noise did not get worse when the HPO came on, was not as convincing as it should have been.  This morning he and I used the Pcal line height to scale these uncalibrated spectra to something that should be proportional to meters, although we did not worry about frequency-dependent calibration (4th screenshot).  From this you can see that the noise in July was very close to what it was in March before the HPO came on, but there is some stuff in the bucket that is a little worse.

The point is made best by the last attached screenshot, which shows nearly identical noise in the last good lock I could find before the HPO came on and the first decent lock after it came on. Pcal was not working at that time, so we can't use it to verify the calibration, but the input powers were similar (20 Watts and 24 Watts), the DCPD currents were both 20 mA, and the DCPD whitening was on in both cases.  (The decimation filters were changed around the same time that the HPO came on, which accounts for the difference at high frequencies.)

Images attached to this comment
jason.oberling@LIGO.ORG - 15:59, Thursday 20 October 2016 (30693)PSL

Regarding power available to the PMC: I know this is obvious, but another thing we have to consider is the ISS.  Since the ISS AOM is before the PMC, it clearly also affects the amount of power available to the PMC.  Peter K. can correct me if I am wrong, but it is my understanding that this happens primarily in 2 ways:

  • Obviously, if the ISS AOM is diffracting more light, the total amount of power available to the PMC decreases.
  • Related to the above, the ISS AOM can cause slight distortions in the beam profile, which depend on how hard the AOM is driven; the harder the AOM is driven, the more it distorts the beam.  These distortions change the mode matching into the PMC and therefore lower the visibility of the cavity.  This has the effect of increasing the reflected power and lowering the transmitted power.

On 2016-9-21, for reasons unknown to me, the ISS control offset was changed from ~4.3 to 20.  This means we are driving the ISS AOM much harder than we were previously.  This in turn changes the beam profile, which affects the PMC mode matching and lowers the cavity visibility.  This is likely why, even though we have had only a 5 W decrease in laser power since July, the total power into and the power transmitted by the PMC are down and the power reflected by the PMC has increased, and why we cannot return to the July PMC powers Sheila listed in her table in the above comment by simply tweaking the beam alignment into the PMC.  I have attached a 120-day minute-trend of the ISS control offset (H1:PSL-ISS_CTRL_OFFSET) that shows the changes in the ISS control offset value since 2016-6-22, including the 2016-9-21 change.  There are of course other reasons why the control offset changed (as can be seen on the attachment, the offset was changed several times over the last 4 months); the one on 9-21 just really stuck out.

Is there a reason why the control offset was changed so drastically?  Something to do with the new ISS outer loop electronics?

Images attached to this comment
H1 ISC (DetChar, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 22:38, Tuesday 18 October 2016 (30646)
Input Arm HXTS Highest Vertical (V3 / Bounce) Modes Characterized -- Not the Source of Mystery Peaks in LSC Aux Loops
J. Kissel

We're continuing to struggle to even get back to DRMI locking as of last night (LHO aLOG 30614). One suspicion is the sharp feature at 27.0977 Hz (using a 0.005 Hz BW ASD) that has suddenly appeared, as shown in Jenne's attached ASD.

Because we haven't yet measured them, and we're dead in the water until we figure out the problem, I've taken the time to excite and measure all of the HAM2 and HAM3 HSTS and HLTS suspensions' highest vertical mode frequencies, modeled to be at 27.3 Hz and 28.1 Hz respectively. The results are as follows:

 Optic      Sus. Type     V3 Mode Freq (Hz)
  MC1         HSTS            27.387
  MC2         HSTS            27.742
  MC3         HSTS            27.426

  PRM         HSTS            27.594
  PR2         HSTS            27.414

  PR3         HLTS            28.211

None match up with the mystery line. The closest is MC1, at ~0.3 Hz away.
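
As a quick illustration of that comparison (an illustrative snippet, not part of any template):

    # Distance of each measured V3 mode from the 27.0977 Hz mystery line.
    mystery = 27.0977
    modes = {"MC1": 27.387, "MC2": 27.742, "MC3": 27.426,
             "PRM": 27.594, "PR2": 27.414, "PR3": 28.211}
    for optic, f in sorted(modes.items(), key=lambda kv: abs(kv[1] - mystery)):
        print(optic, f, round(abs(f - mystery), 3))
    # -> MC1 is closest, ~0.289 Hz away; none coincide with the line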

All values were measured with 5 averages at 0.005 Hz BW. I attach screenshots of the DTT and awggui templates for MC3 as an example. The DTT templates have been copied and committed into the Sus repository here:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/Common/2016-10-18_H1SUSPR3_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC1/Common/2016-10-18_H1SUSMC1_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC2/Common/2016-10-18_H1SUSMC2_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC3/Common/2016-10-18_H1SUSMC3_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PR2/Common/2016-10-18_H1SUSPR2_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/Common/2016-10-18_H1SUSPRM_V3_Mode.xml
Images attached to this report
H1 TCS (TCS)
edmond.merilh@LIGO.ORG - posted 18:26, Tuesday 18 October 2016 (30644)
HWS camera work at EX

Some work was started on the HWS camera on ISCTEX today as per WP6253. It wasn't completed, but was brought to a point where it can be completed relatively easily at the next opportunity. The camera was returned to its original configuration and confirmed to still be operational as it was before the work was started.

H1 CAL (CAL)
travis.sadecki@LIGO.ORG - posted 17:43, Tuesday 18 October 2016 - last comment - 12:58, Monday 24 October 2016(30642)
PCal X and Y OFS Optimization

Rick, Evan G., Travis

As part of the bi-annual PCal maintenance, today we optimized the drive range of the PCal OFS for both end stations.  The procedure for this was:

1) Turn off PCal lines and inject 10 Hz sine wave.

2) Break the OFS lock.

3) Note that the AOM drive is large (~1.5V) with loop open.

4) Adjust the offset in 1V steps to find the maximum OFS PD output.

5) Record max OFS PD output.

6) Close shutter and record minimum OFS PD output.

7) Set offset to half of 95% of max OFS PD output.

8) Find the amplitude of injected sine wave that gives us the maximum p-p OFS PD voltage. (See the arithmetic sketch after this list.)

9) Record magnitudes of carrier and sideband frequencies of the OFS and TX PDs.
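
A minimal sketch of the setpoint arithmetic in steps 6-8 (illustrative only, not site code; the function name and defaults are ours):

    # Setpoint arithmetic for the OFS offset and maximum drive (illustrative).
    def ofs_setpoints(v_max, v_dark=0.0, max_counts=64000, derate=0.9):
        """Return (DC offset in volts, maximum safe excitation in counts)."""
        v_range = 0.95 * (v_max - v_dark)   # step 7: use 95% of the usable PD range
        offset = v_range / 2.0              # and sit at half of it
        return offset, derate * max_counts  # keep 10% headroom on the drive

    # PCal X numbers from the Results below:
    print(ofs_setpoints(10.5, -0.01))  # -> (~5.0 V, 57600 cts), matching the ~57,000 cts quoted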

Results:

PCal X:

Max OFS PD out = 10.5 V
95% of Max OFS PD out = 10 V p-p
OFS PD output with shutter closed = -0.01 V
Offset set to 5.00 V
64000 cts peak is maximum -> ~10 V p-p
 
Harmonics:
OFS PD
carrier 14 dB
1st SB -42.7 dB
2nd SB -43.5 dB
TX PD
carrier -234 dB
1st SB -294 dB
2nd SB -291.1 dB
 
Max modulation at 90% of this value -> 57,000 cts PEAK (out of 64000 cts)

 

PCal Y:

Max OFS PD out = 7.9 V
95% of Max OFS PD out -> 7.5 Vp-p
OFS PD output with shutter closed = -0.01 V
Offset set to 3.75 V
49000 cts peak is maximum -> ~7.5 V p-p
 
Harmonics:
OFS PD
carrier 11.75 dB
1st SB -67.4 dB
2nd SB -56 dB
TX PD
carrier -230.2 dB
1st SB -294.5 dB
2nd SB -295.1 dB
 
Max modulation at 90% of this value -> 44,000 cts PEAK (out of 49000 cts)
Comments related to this report
travis.sadecki@LIGO.ORG - 10:28, Wednesday 19 October 2016 (30655)

As the final part of the maintenance for PCal Y, we took a transfer function of the OFS.  See attached traces.

Images attached to this comment
evan.goetz@LIGO.ORG - 12:58, Monday 24 October 2016 (30802)CAL, INJ
The limit on the excitation amplitude is the total number of counts that should be allowed to be sent to the Pcal. This is a frequency-independent value. So for Pcal Y, the maximum in H1:CAL-PCALY_EXE_SUM should be no larger than 44,000 counts. For Pcal X, the total of H1:CAL-PCALX_EXE_SUM and H1:CAL-PINJX_HARDWARE_OUT should be less than 57,000 counts. (A quick budget check is sketched at the end of this comment.)

Right now, Pcal Y has the following injections set:

Freq. (Hz)     Amp. (cts)
-------------------------
   7.9         20000.0
  36.7           750.0
 331.9          9000.0
1083.7         15000.0
-------------------------
Total = 44750.0

This is just above the threshold. It might be worth returning the 331.9 Hz line to its O1 level (see LHO aLOG 30476 for the increased-amplitude lines), since the detector noise in this region has recently improved.

Pcal X has the following injections set:

Freq. (Hz)     Amp. (cts)
-------------------------
1501.3         39322.0

And the CW injections on Pcal X total ~1585 counts, giving Total = 40907.
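
A hedged illustration of the budget check described above (not the actual CAL code; the numbers are the Pcal Y lines from this comment):

    # Sum the injection amplitudes and compare against the drive limit.
    pcaly_lines = {7.9: 20000.0, 36.7: 750.0, 331.9: 9000.0, 1083.7: 15000.0}
    limit_cts = 44000  # 90%-derated maximum from the OFS measurement above

    total = sum(pcaly_lines.values())
    print(total, total - limit_cts)  # -> 44750.0 cts, i.e. 750 cts over threshold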

H1 ISC (ISC)
marc.pirello@LIGO.ORG - posted 16:41, Tuesday 18 October 2016 (30633)
RF Concentrator Readback Status

M. Pirello

Work Permit: #6246 & #6248

Prior to working on this we determined that the analog voltages on the channels in question are outputting anticipated values.  The immediate problem is with the PowerOK bit for each of these signals.

Investigation into these chassis yielded the following:

1.  Upon rebooting RF Amp Concentrator #1 on ISC_C3 in the CER (S1103450), the stuck bits on that chassis, AMP18M and AMP24M1, were reset.  We are 2/3 of the way done with this task, piece of cake!

2.  Upon rebooting RF Amp Concentrator #2 on ISC_C3 in the CER (S1103451), 3 new bits were stuck for a total of 4 bits, including the original DIV40M, drat!

3.  I tried disconnecting, measuring, and reconnecting each of the 4 stuck signals on the DB25s on the back of the unit.  These voltages look good; no luck resetting the latched bits here.

     a.  DB25#1 = DIV40M = 3.64V 

     b.  DB25#2 = AMP40M = 3.75V

     c.  DB25#3 = DIV10M = 3.61V

     d.  DB25#4 = AMP10M = 3.67V

4.  I then disconnected the DB37 on RF Amp Concentrator #2 (S1103451) and checked each signal coming out of the RF Amp Concentrator.  This output is confusing.  The I couplers inside are referenced to 5V: an on state should be 5V, an off state 0V, so all of the PO signals should be either 5V or 0V.  Instead, PO11 is 2.5V, PO7 is 2.5V, and PO2 is 2.5V (see the table and the sketch after it).

Pin 1  (M1P12) 0V       Pin 11 (M1P2)  0V       Pin 21 (M1N11) 0.033V    Pin 31 (M1N1) 0.004V
Pin 2  (M1P11) 0V       Pin 12 (M1P1)  0V       Pin 22 (M1N10) 0.029V    Pin 32 (PO8)  0.009V
Pin 3  (M1P10) 3.4V     Pin 13 (PO12)  0.008V   Pin 23 (M1N9)  0.029V    Pin 33 (PO11) 2.509V
Pin 4  (M1P9)  2.598V   Pin 14 (PO4)   0.008V   Pin 24 (M1N8)  0.0228V   Pin 34 (PO3)  0.005V
Pin 5  (M1P8)  0V       Pin 15 (PO7)   2.510V   Pin 25 (M1N7)  0.002V    Pin 35 (PO6)  0.006V
Pin 6  (M1P7)  2.951V   Pin 16 (PO10)  0.007V   Pin 26 (M1N6)  0.033V    Pin 36 (PO9)  0.004V
Pin 7  (M1P6)  3.073V   Pin 17 (PO2)   2.515V   Pin 27 (M1N5)  0.033V    Pin 37 (PO1)  0.003V
Pin 8  (M1P5)  3.027V   Pin 18 (PO5)   0.004V   Pin 28 (M1N4)  0.005V
Pin 9  (M1P4)  3.051V   Pin 19 (GND)   GND      Pin 29 (M1N3)  0.004V
Pin 10 (M1P3)  0V       Pin 20 (M1N12) 0.029V   Pin 30 (M1N2)  0.004V
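
A small illustrative helper (our assumption: valid 5 V logic should sit near 0 V or near 5 V) that flags the indeterminate readbacks in the table:

    # Flag readbacks stuck between logic levels (illustrative only).
    def classify(v, vref=5.0, tol=1.0):
        if v < tol:
            return "LOW"
        if v > vref - tol:
            return "HIGH"
        return "INDETERMINATE"

    for name, v in [("PO11", 2.509), ("PO7", 2.510), ("PO2", 2.515)]:
        print(name, v, classify(v))  # all three sit near 2.5 V: neither state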

5.  I then reconnected the DB37 to H1_EtherCAT_Corner_3.  Ten out of twelve of the OK bits remained red.  I cycled power on RF Amp Concentrator #2 (S1103451) on ISC_3 and again all but 4 of the OK bits were green, like before.

6.  I put everything back together, removed the breakout boards, etc.  When I left the CER, the 4 bits were latched: DIV40M, AMP40M, DIV10M, AMP10M.  After noon I checked the OK bits again and 10 out of 12 were red, including the original four.  I am relatively sure that the issue is with the RF Amp Concentrator: the power-OK signals going into this chassis are good, while the power-OK signals coming out of it seem to latch up spontaneously and output bad voltages.  Perhaps the 5V regulator is outputting 2.5V?

Recommendation:

Strike DIV40M from WP6248 and close out WP6248 and FRS 6391, because DIV40M is connected to a different chassis. Expand FRS 6059 to encompass RF Amp Concentrator #2 (S1103451) and work to debug the source of the spontaneous latch-up/latch-down and bad voltages (including DIV40M).

Images attached to this report
H1 DAQ
david.barker@LIGO.ORG - posted 16:36, Tuesday 18 October 2016 (30641)
New DAQ overview: shows retransmission stats; science frame reporting removed

Note that fw2 and tw1 have issued retransmission requests since the last DAQ restart.

Images attached to this report
H1 DAQ
david.barker@LIGO.ORG - posted 16:26, Tuesday 18 October 2016 (30639)
DAQ daqd and OS summary

Here are the current versions of daqd and nds that Jonathan built and installed today:

daqd process category     operating system(s)    machines
data concentrator         Gentoo                 h1dc0
frame writer              Ubuntu 12, Gentoo*     h1fw0, h1fw1, h1fw2, h1tw0, h1tw1*
NDS-1 server              Gentoo                 h1nds0, h1nds1
frame broadcaster (DMT)   Gentoo                 h1broadcaster0

Note that h1tw0 was upgraded to Ubuntu 12 because it was showing RAID errors; h1tw1 has been kept back with its original Gentoo OS (these machines were the original frame writers).

Here is the nds process table (NDS-1 servers run two processes: daqd and nds):

nds process category      operating system       machines
NDS-1 server              Gentoo                 h1nds0, h1nds1
H1 TCS
betsy.weaver@LIGO.ORG - posted 16:16, Tuesday 18 October 2016 - last comment - 16:33, Tuesday 18 October 2016(30638)
TCSY CHiller... on going

Today, I calculated the volume of piping that feeds the circulation loop for the TCSY laser.  The total volume of water in the piping alone (to and from the laser and the chiller on the mech room mezzanine) comes to 36 liters.  Wow, much more than I had assumed!  The chiller reservoir holds 7 liters, for a total of 43 L in the system at any given "full" time.

Recall that the system popped a mesh filter and ran the reservoir dry on Sept 28th (alog 30041), at which time only 6 L was added to ~fill the reservoir.  At the time, they also noticed air in the (noisy) lines down at the table, so air had been pushed through some or all of the piping volume (at least the chiller->laser piping was full of air, as they mentioned, so half of the 36 L of piping would have needed to be refilled).

I've added up the small-ish amounts of water we have been adding daily to keep the chiller reservoir topped off: we have added 10.5 L over the last 3 weeks.  With the 6 L added Sept 28th, we've added 16.5 L to a 43 L system so far.  Even assuming there was some water still in the pipes during the Sept 28th "leak", we likely still have a ways to go before the system is full.

Keep on filling...

 

(From VE drawings, I estimated ~2880" of piping length round trip chiller-laser-chiller.  The piping ID looks to be 1".)
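
The same estimate in code form (numbers from this entry; the cubic-inch-to-liter conversion is standard):

    import math

    # Piping volume from the VE-drawing estimate above.
    length_in = 2880.0                        # round-trip pipe length, inches
    inner_dia_in = 1.0                        # pipe ID, inches
    volume_in3 = math.pi * (inner_dia_in / 2) ** 2 * length_in
    volume_l = volume_in3 * 16.387 / 1000.0   # 1 in^3 = 16.387 cm^3
    print(round(volume_l, 1))                 # -> ~37.1 L, consistent with the ~36 L quoted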

Comments related to this report
alastair.heptonstall@LIGO.ORG - 16:33, Tuesday 18 October 2016 (30640)

That's around the same volume I had calculated for flushing the chillers (I think I had ~10 gallons), so we're getting the same number there.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 16:14, Tuesday 18 October 2016 - last comment - 08:20, Wednesday 19 October 2016(30637)
CDS maintenance summary, Tuesday 18th October 2016

WP6251 RCG3.2 upgrade

Jonathan, Jim, Dave:

The main work today was upgrading the front end models and the DAQ systems to RCG 3.2. All models were recompiled yesterday evening (see alog 30609). This morning we installed this new code (took 1 hr 27 min). New mx code was compiled, and new daqd binaries were created for all DAQ systems. The install sequence was:

WP6258 DAQ removal of science frame

Jonathan, Greg, Jim, Dave:

All frame writers were configured to no longer write science frames, and to rename the old commissioning frames as the new raw frames.

Detailed procedure given in wiki page https://lhocds.ligo-wa.caltech.edu/wiki/MakingTheCommissioningFrameTheNewFullFrame

After this change there are no C-named frames, only R-named frames. Archived C-frames are now R-frames, and new frames are what were previously called commissioning frames, with R-names in the frames/full directories. What were called science frames are no longer written; no new frames appear in the frames/science directories.

h1nds0 and h1nds1 were reconfigured to the new R-names and restarted.

The wiper scripts on h1ldasgw0 and h1ldasgw1 were changed to not use science frames, and to give the disk space these frames used to the full frames instead.

WP6237 Remove h1ldasgw2

Jim, Dave

A third QFS/NFS server was installed in the MSR during the summer as part of the attempt to fix h1fw0/h1fw1 instability. It offloaded the exporting of the frame directories (in read-only mode) to the NDS servers. It later proved to be a liability when corrupted frame files were served by h1ldasgw2 to both NDS servers.

Today we decommissioned h1ldasgw2 and reconfigured h1ldasgw0, h1ldasgw1 to serve their respective file systems as a read-only export to the NDS machines. h1nds0 and h1nds1 were configured to no longer use h1ldasgw2.

Comments related to this report
james.batch@LIGO.ORG - 08:20, Wednesday 19 October 2016 (30653)
A new version of dataviewer was also installed for Ubuntu12 and Ubuntu14 control room workstations.  This version, 3.2, will handle the leap second to be applied Dec. 31, 2016.  Dataviewer is part of the advLigoRTS source code.
H1 CDS (Lockloss)
sheila.dwyer@LIGO.ORG - posted 01:04, Wednesday 12 October 2016 - last comment - 17:11, Tuesday 18 October 2016(30439)
lockloss tool not working

Possibly after some changes made during maintenance, the lockloss tool stopped working. I can use 'lockloss plot', but not 'select'.

sheila.dwyer@opsws4:~/Desktop/Locklosses$ lockloss -c channels_to_look_at_TR_CARM.txt select

Traceback (most recent call last):
  File "/ligo/cds/userscripts/lockloss", line 403, in <module>
    args.func(args)
  File "/ligo/cds/userscripts/lockloss", line 254, in cmd_select
    selected = select_lockloss_time(index=args.index, tz=args.tz)
  File "/ligo/cds/userscripts/lockloss", line 137, in select_lockloss_time
    times = list(get_guard_lockloss_events())[::-1]
  File "/ligo/cds/userscripts/lockloss", line 112, in get_guard_lockloss_events
    for t in guardutil.nds.find_transitions(GRD_LOCKING_NODE, t0, t1):
  File "/ligo/apps/linux-x86_64/guardian-1.0.3/lib/python2.7/site-packages/guardutil/nds.py", line 32, in find_transitions
    for buf in conn.iterate(t0, t1, [channel]):
RuntimeError: Requested data were not found.
Comments related to this report
sheila.dwyer@LIGO.ORG - 02:19, Tuesday 18 October 2016 (30615)

After noticing LLO alog 28710, I tried to svn up the lockloss script (we're at rev 14462 now), but I still get the same error when I try to use select.

jameson.rollins@LIGO.ORG - 17:11, Tuesday 18 October 2016 (30643)

The problem at LLO is totally different and unrelated.

The exception you're seeing is from an NDS access failure.  The DAQ NDS1 server is saying that it can't find the data being requested.  This can happen if you request data that is too recent.  It also used to happen because of gaps in the NDS1 lookback buffer, but those should have been mostly fixed.

Right now the lockloss tool is looking for all lock loss events from 36 hours ago to 1 second ago.  The 1 second ago part could be the problem, if that's too soon for the NDS1 server to handle.  But my testing has indicated that 1 second in the past should be sufficient for the data to be available.  In other words I've not been able to recreate the problem.

In any event, I changed the look-back to go up to only two seconds in the past.  Hopefully that fixes the issue.
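
For illustration, the change amounts to something like this (a sketch, not the actual lockloss code; gpsnow is the assumed GPS-time helper from the gpstime package):

    # Look-back window ending slightly further in the past (illustrative).
    from gpstime import gpsnow

    LOOKBACK_S = 36 * 3600  # search the last 36 hours of guardian transitions
    MARGIN_S = 2            # end 2 s ago (1 s was sometimes too recent for NDS1)

    t1 = int(gpsnow()) - MARGIN_S
    t0 = t1 - LOOKBACK_S
    # guardutil.nds.find_transitions(GRD_LOCKING_NODE, t0, t1) is then queried
    # over [t0, t1], as in the traceback above.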
