H1 SUS
arnaud.pele@LIGO.ORG - posted 09:50, Monday 24 March 2014 (10932)
ETMX Y2Y

On Friday, I had a chance to test the Y2Y UIM filter by taking an open-loop transfer function, driving in yaw from UIM and looking at the test mass optical lever. The open loop looks as expected, falling as 1/f^2 above a resonance at 0.6 Hz. The attachment shows the open loop without vs. with the filter.
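(For anyone repeating this, a minimal sketch of the underlying transfer-function estimate H(f) = Pxy/Pxx in scipy, with placeholder data standing in for the actual UIM drive and optical-lever channels; this is not the template that was used:)

import numpy as np
from scipy import signal

# Placeholder time series: in practice these would be the UIM yaw drive and
# the test-mass optical-lever yaw readback, both sampled at fs (Hz).
fs = 256.0
n = int(600 * fs)
drive = np.random.randn(n)
oplev = np.random.randn(n)

# Transfer-function estimate from drive to readback: H(f) = Pxy(f) / Pxx(f)
f, Pxy = signal.csd(drive, oplev, fs=fs, nperseg=4096)
_, Pxx = signal.welch(drive, fs=fs, nperseg=4096)
H = Pxy / Pxx

# Above the ~0.6 Hz resonance the magnitude should fall roughly as 1/f^2.
mag_db = 20 * np.log10(np.abs(H))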

Non-image files attached to this report
H1 SUS
stuart.aston@LIGO.ORG - posted 09:21, Monday 24 March 2014 - last comment - 10:04, Monday 24 March 2014(10951)
Templates added for TMSY (TMTS) DTT TFs
[Stuart A, Jeff B]

Templates have been committed to the SusSVN for the H1 TMSY (TMTS) suspension, derived from those for L1 TMSX, as follows:

2014-03-24_0900_H1SUSTMSY_M1_WhiteNoise_L_0p01to500Hz.xml
2014-03-24_0900_H1SUSTMSY_M1_WhiteNoise_T_0p01to500Hz.xml
2014-03-24_0900_H1SUSTMSY_M1_WhiteNoise_V_0p01to500Hz.xml
2014-03-24_0900_H1SUSTMSY_M1_WhiteNoise_R_0p01to500Hz.xml
2014-03-24_0900_H1SUSTMSY_M1_WhiteNoise_P_0p01to500Hz.xml
2014-03-24_0900_H1SUSTMSY_M1_WhiteNoise_Y_0p01to500Hz.xml

These can be made available by svn'ing up the /ligo/svncommon/SusSVN/sus/trunk/TMTS/H1/TMSY/SAGM1/Data/ directory.
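For example (assuming a SusSVN working copy is already checked out under /ligo/svncommon/SusSVN), something like: svn up /ligo/svncommon/SusSVN/sus/trunk/TMTS/H1/TMSY/SAGM1/Data/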

Let me know if there are any issues with their use.

n.b. the reference included in these templates is for L1 TMSX at Phase 3b (in-vacuum) of testing.
Comments related to this report
arnaud.pele@LIGO.ORG - 10:04, Monday 24 March 2014 (10953)

Thanks for adding those, Stuart!

LHO VE
john.worden@LIGO.ORG - posted 08:40, Monday 24 March 2014 (10950)
Accumulation Y1

We have accumulated for ~5 days on  Y1. Plot attached. Kyle will start to analyze the RGA data this week.

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:04, Monday 24 March 2014 (10949)
CDS model and DAQ restart report, Sunday 23rd March 2014

model restarts logged for Sun 23/Mar/2014
2014_03_23 10:44 h1broadcast0
2014_03_23 10:44 h1fw0
2014_03_23 10:44 h1fw1
2014_03_23 10:51 h1broadcast0
2014_03_23 11:19 h1dc0
2014_03_23 11:21 h1broadcast0
2014_03_23 11:21 h1fw0
2014_03_23 11:21 h1fw1
2014_03_23 11:21 h1nds0
2014_03_23 11:21 h1nds1
2014_03_23 11:36 h1broadcast0
2014_03_23 11:41 h1broadcast0
2014_03_23 11:43 h1broadcast0
2014_03_23 14:29 h1lsc
2014_03_23 14:32 h1lsc

All restarts were expected: h1lsc was restarted due to DAQ issues, and h1broadcast0 has memory-swapping issues.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:06, Sunday 23 March 2014 (10948)
Leaving arm locked with IR

The COMM guardian now has states to lock the IR to the arm using the refl bias path; I will leave this running overnight.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:05, Sunday 23 March 2014 - last comment - 10:10, Monday 24 March 2014(10947)
COMM Noise

Measuring the COMM noise with the refl DC bias path engaged gives repeatable results, while we did not get repeatable results without it. However, something seems to be wrong with our calibration. Today, when the refl bias path is not engaged, we wander over the peak in about 10 seconds, and all the way off resonance, so we know that the COMM noise is at least the FWHM of the IR resonance in the arm, 164 Hz.

When we lock the refl bias path, measure a spectrum, and correct for the gain of the refl DC bias path, the rms should be the noise of the refl bias path. However, we are consistently measuring an rms of around 35 Hz (with the cavity pole corrected for) or around 9 Hz without taking out the cavity pole, which we know is wrong.
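(As a sketch of the bookkeeping involved, with placeholder numbers rather than the real calibration: divide the measured spectrum by the refl DC bias path gain, undo the cavity-pole low-pass, and integrate the PSD for the rms. Something like:)

import numpy as np

# Placeholder ASD of the COMM error signal vs frequency; in practice this
# comes from the measured spectrum, calibrated into Hz/rtHz.
f = np.linspace(0.1, 100.0, 2000)
asd_raw = 1.0 / f                            # made-up spectrum shape

path_gain = 10.0                             # placeholder refl DC bias path gain
f_pole = 42.0                                # arm cavity pole in Hz (see the correction below)

asd = asd_raw / path_gain                    # take out the bias-path gain
asd_nopole = asd * np.sqrt(1.0 + (f / f_pole) ** 2)   # take out the cavity-pole low-pass

rms = np.sqrt(np.trapz(asd_nopole ** 2, f))  # rms frequency noise over this band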

At least with a repeatable measurement we can evaluate what makes the noise better or worse, even if we don't yet trust the calibration. Attached is a plot of the COMM noise measured with the refl DC bias engaged, with the gain of that path corrected for but not the cavity pole. In the blue trace the OpLev damping was on for both pitch and yaw; in the green trace the pitch damping was off on both the ETM and the ITM. The coherences are shown in the bottom panel: the solid lines were measured with no damping, the dashed lines with damping.

With the damping on, the noise at 0.45Hz is reduced by a factor of 5.5, although it doesn't seem to change the rms much in this plot. 

The green trace was measured with lower bandwidth because the lock was not lasting more than a few minutes today, especially with the damping off.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 10:10, Monday 24 March 2014 (10954)

Correction:

The arm cavity pole is 42 Hz, not 82 (that was the HIFOY number)! This noise measurement seems more reasonable with that in mind. 

Thank you Ryan. 

H1 INS (SEI)
jim.warner@LIGO.ORG - posted 17:44, Sunday 23 March 2014 (10945)
ETMY ISI Tf's didn't turn out...again

Hugh graciously ran new TFs on Saturday. I came in today to look at them and, once again, we have some issues. I'm betting we have something rubbing on the stops on TMS again... sigh. We had this problem before; Corey went in and did some adjusting, and my first attached image shows he mostly fixed it, although I believe Arnaud had told me they had reason to believe the TMS was still rubbing a little then (you can still see a little something at 2.5 Hz). My second image shows the TF from yesterday, and a peak that I had seen before at 2.5 Hz has re-appeared. It's not as bad as the stuff at the beginning of the month, but we were doing better just a few days ago.

Images attached to this report
H1 AOS (ISC)
stefan.ballmer@LIGO.ORG - posted 15:21, Sunday 23 March 2014 - last comment - 08:38, Tuesday 25 March 2014(10944)
POBAIR_B_RF18 demod fuse tripped / sign for REFLAIR flipped ....
Dave O, Stefan

While attempting to lock the PRC we noticed that POB_Air_B_RF_I_Err was low on signal. We traced this back to a low demod signal (see attached plot), and then down to a tripped fuse in the ISC_R2_Rack. It tripped on 17 March. We tried to reset it, but it simply re-tripped.

When we locked PRX and PRY, we were also surprised that we needed to flip the feedback sign in PRCL to lock on the carrier (compared to the settings the LSC guardian uses).
Images attached to this report
Comments related to this report
rich.abbott@LIGO.ORG - 09:56, Monday 24 March 2014 (10952)ISC
I assume by fuse you guys mean breaker, right? I can't tell if this unit is now functional (as might be implied by the successful locking mentioned in this post) or still problematic. I would like to understand this better in case it points to a potential flaw somewhere.
richard.mccarthy@LIGO.ORG - 08:10, Tuesday 25 March 2014 (10975)
The negative regulator had the Kapton insulation material under it, not the gray material that works. The -15 V rail was shorting to the case. We replaced the insulating material on both the +15 V and -15 V regulators. The unit was restored and is functional.
filiberto.clara@LIGO.ORG - 08:38, Tuesday 25 March 2014 (10976)
Unit D0902796 Serial Number S1000977 had insulation for both positive and negative regulators replaced.
H1 IOO
stefan.ballmer@LIGO.ORG - posted 13:21, Sunday 23 March 2014 (10943)
IMC ODC bitmask updated
Updated the IMC ODC bitmask to again include the min trans power checks - not sure why they went missing.
Also updated the cds/h1/scripts/h1setODCbitmask script.
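(For illustration only: the real bit definitions live in the front-end model and the h1setODCbitmask script, and the bit assignments below are made up, but the bookkeeping is just ordinary bitwise operations, e.g. in Python:)

# Hypothetical bit assignments -- not the actual IMC ODC bit definitions.
MC_LOCKED       = 1 << 0
MIN_TRANS_POWER = 1 << 1
WFS_ON          = 1 << 2

required_mask = MC_LOCKED | WFS_ON        # a mask that is missing the trans-power check
required_mask |= MIN_TRANS_POWER          # re-include the min trans power check

def odc_ok(state_word, mask=required_mask):
    """True only if every required check bit is set in the ODC state word."""
    return (state_word & mask) == mask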
H1 CDS
david.barker@LIGO.ORG - posted 11:48, Sunday 23 March 2014 (10942)
h1boot back up, CDS and DAQ Status

After about 30 minutes of disk activity, h1boot completed its reboot. Here is the current status of the systems:

DAQ

DAQ Frame Data Was Lost. Here are the frame gaps on the two frame writers:

frame writer   from (GPS / local)       to (GPS / local)
h1fw0          1079622528  08:10 PDT    1079631872  10:45 PDT
h1fw1          1079623744  08:30 PDT    1079631936  10:45 PDT

So there is no data from 08:30 to 10:45 PDT. We should investigate whether we can make the DAQ more independent of h1boot. During run time the DAQ logs to the target area, and I assume that is why the frame writer stop times are after h1boot's demise and at different times.
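(For the record, the gaps work out to 1079631872 - 1079622528 = 9344 s, about 2 h 36 min, on h1fw0, and 1079631936 - 1079623744 = 8192 s, about 2 h 17 min, on h1fw1, roughly consistent with the local times quoted above.)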

At 11:18 I performed a clean restart of the DAQ, since the frame writers and broadcaster had restarted themselves. All came back OK except:

h1lsc model is showing a DAQ error in its STATEWORD. No error from h1dc0 on this node.

h1broadcaster0 shows the same swap-memory problems; I rebooted this unit to clear the error. We will add more memory soon to fix this.

Front Ends

All of the front ends' EPICS updates became unfrozen; no restarts were needed. The DAQ restart cleared the h1iopoaf0 0x2000 status (from last week's model change), but an h1lsc DAQ error is showing (see above).

Workstations

Good news on the workstations: they all became unfrozen and appear to be operational. Some wall-mount TV servers seem to be non-operational; we can fix these tomorrow.

Remote Monitoring

The Vacuum and other MEDM screens are being updated on the CDS web page.

Remote access using 2FA now working.

SysAdmin

Looking at dmesg and the logs on h1boot, I cannot determine whether a full fsck was run. We should still schedule this for Tuesday.

I have switched the rsync backup of h1boot by cdsfs1 back on.

IFO

I'll leave the verification of IFO operations to the commissioning crew to check.

H1 CDS (DAQ, VE)
david.barker@LIGO.ORG - posted 10:33, Sunday 23 March 2014 (10941)
rebooting h1boot, status of CDS and DAQ systems

At 10:09 PDT I rebooted h1boot via the front-panel reset button after finding its console locked up. For the past 20 minutes it looks like it has been file-checking the mirrored RAID; those two disks are continuously active.

The vacuum overview medm on the VE workstation is live and everything looks good.

The DAQ units are updating their UPTIME counters, but it appears that h1broadcaster0 is down, so most probably no DMT data is being sent. I'll check the frame file status when h1boot is healthy.

MEDMs associated with front ends are visible, but frozen at 06:42:21PDT this morning (GPS time 1079617357).

H1 DAQ (CDS)
david.barker@LIGO.ORG - posted 09:01, Sunday 23 March 2014 (10940)
frame writers may no longer be writing frames

The frame-writer file listings I'm getting from h1fw0 and h1fw1 are confusing. h1fw0 says it last wrote a frame at 08:10 PDT, and h1fw1 has a last time of 08:30 PDT. I'm not sure why these are exactly on a 10-minute boundary (only second trends use that timing), and I'm not sure why they would stop at different times, over an hour after the h1boot problem.

H1 CDS (VE)
david.barker@LIGO.ORG - posted 08:57, Sunday 23 March 2014 (10939)
vacuum cryopump levels still good

The external EPICS gateway is still running, so I am able to report that CP levels are good:

HVE-LY:CP1_LT100    91.5751
HVE-LX:CP2_LT150    92.5519
HVE-MY:CP3_LT200    90.9646
HVE-MY:CP4_LT250    91.6972
HVE-MX:CP5_LT300    91.5751
HVE-MX:CP6_LT350    91.5751
HVE-EY:CP7_LT400    92.7961
HVE-EX:CP8_LT500    91.3309
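(For reference, a minimal sketch of polling these channels from a client that can reach the gateway, assuming pyepics is available; the channel names are the ones listed above:)

from epics import caget

# CP level channels quoted above
channels = [
    "HVE-LY:CP1_LT100", "HVE-LX:CP2_LT150", "HVE-MY:CP3_LT200",
    "HVE-MY:CP4_LT250", "HVE-MX:CP5_LT300", "HVE-MX:CP6_LT350",
    "HVE-EY:CP7_LT400", "HVE-EX:CP8_LT500",
]

for ch in channels:
    value = caget(ch)        # returns None if the channel cannot be reached
    print("{:25s} {}".format(ch, value))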

H1 CDS
david.barker@LIGO.ORG - posted 08:13, Sunday 23 March 2014 - last comment - 08:34, Sunday 23 March 2014(10937)
possible problem with h1boot, investigating

I'm noticing a possible problem with h1boot, the NFS server of the /opt/rtcds file system. Machines which mount this file system are not letting me log in (they freeze after accepting my password) and h1boot is not responding to ping requests.

The disk-to-disk backup of h1boot at 05:00 this morning completed normally at 05:04. MEDM snap shot images suggest the problem appeared at 06:42 this morning.

Comments related to this report
david.barker@LIGO.ORG - 08:34, Sunday 23 March 2014 (10938)

Here are the central syslogs for the event:

Mar 23 06:42:18 h1boot kernel: [12441189.877539] CPU 0
Mar 23 06:42:18 h1boot kernel: [12441189.877544] Modules linked in:
Mar 23 06:42:18 h1boot kernel: [12441189.877953]
Mar 23 06:42:18 h1boot kernel: [12441189.878157] Pid: 4652, comm: nfsd Not tainted 2.6.34.1 #7 X8DTU/X8DTU
Mar 23 06:42:18 h1boot kernel: [12441189.878369] RIP: 0010:[<ffffffff8102f70e>]  [<ffffffff8102f70e>] find_busiest_group+0x3bc/0x784
Mar 23 06:42:18 h1boot kernel: [12441189.878785] RSP: 0018:ffff8801b9cefa60  EFLAGS: 00010046
Mar 23 06:42:18 h1boot kernel: [12441189.878993] RAX: 0000000000000000 RBX: ffff880001e0e3c0 RCX: 0000000000000000
Mar 23 06:42:18 h1boot kernel: [12441189.879403] RDX: 0000000000000000 RSI: ffff880001e0e4d0 RDI: 00000000fe15cf79
Mar 23 06:45:25 script0 kernel: [12378753.814330] nfs: server h1boot not responding, still trying
Mar 23 06:45:27 script0 kernel: [12378755.721463] nfs: server h1boot not responding, still trying

LHO VE
john.worden@LIGO.ORG - posted 08:06, Sunday 23 March 2014 (10936)
Accumulation underway in Beam Tube - alarms generated.

As Kyle mentioned in his alog on Friday, he is running an accumulation measurement in Y1. Alarms were expected as the pressure rises in the module, and this morning we have them. Pressures are at 4e-8 Torr, so there is no cause for worry. The rate of rise is close to 1e-8 Torr per day. This measurement will be terminated on Monday if all goes well.
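(For context, and with the caveat that the module volume is not quoted here: in a rate-of-rise test like this the total gas load is usually estimated as Q ≈ V dP/dt, with V the volume of the valved-off module, so the measured ~1e-8 Torr/day slope converts directly to an outgassing-plus-leak rate once V is known.)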

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:00, Sunday 23 March 2014 (10935)
CDS model and DAQ restart report, Saturday 22 March 2014

model restarts logged for Sat 22/Mar/2014
2014_03_22 12:33 h1fw1

Unexpected restart of h1fw1.
