H1 TCS (ISC, TCS)
aidan.brooks@LIGO.ORG - posted 08:44, Wednesday 31 May 2017 (36556)
ITMY absorption estimate using recent HWSY data - 360ppb

I analyzed the spherical power measured by HWSY from last night's lock and compared it to the estimated thermal lens from the SIM model. Once the HWS and SIM data were put onto the same single-pass scale (the HWS measures a double pass of the test mass thermal lens, while the SIM model estimates the single-pass thermal lens), I could see that the SIM model was underestimating the thermal lens by about 25%.

The old absorption value was 280ppb. The new absorption value (required to fit the SIM model estimate to the HWS data) is 360ppb.
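A minimal sketch of this bookkeeping (the double-pass factor of 2, the ~25% ratio, and the 280 ppb starting value are from this entry; the spherical power value is a placeholder, and the quoted 360 ppb comes from the actual fit rather than this simple scaling):

```python
# Placeholder numbers except where noted; see lead-in above.
hws_double_pass = 2.0e-6                  # spherical power from HWSY [1/m] (placeholder)
hws_single_pass = hws_double_pass / 2.0   # the HWS sees the thermal lens twice

sim_single_pass = hws_single_pass / 1.25  # SIM underestimates the lens by ~25%

absorption_old_ppb = 280.0
absorption_new_ppb = absorption_old_ppb * (hws_single_pass / sim_single_pass)
print(round(absorption_new_ppb))          # 350; the full fit in this entry gives 360 ppb
```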

 

Images attached to this report
Non-image files attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 08:04, Wednesday 31 May 2017 (36555)
Shift Transition - Day

TITLE: 05/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Engineering
OUTGOING OPERATOR: N/A
CURRENT ENVIRONMENT:

Winds are calm; primary and secondary useism are calm.
QUICK SUMMARY: the ops lazy script isn't doing the transition (-t).

H1 ISC
jenne.driggers@LIGO.ORG - posted 22:25, Tuesday 30 May 2017 - last comment - 09:05, Wednesday 31 May 2017(36553)
Other locking notes from today

[Vaishali, Patrick, Sheila, Jenne]

A few other locking notes from today, although nothing ground-breaking.

* I asked Vaishali to write a note, "Xarm green weirdo - adjusting fiber polarization fixed it", thinking to myself that of course I'd remember exactly what I meant by "weirdo".  I think (especially from the attached screenshot) that I thought adjusting the fiber polarization had fixed the drifting of the Xend laser power, since the drifting stopped for about 2 hours, although obviously it didn't actually fix it.  Anyhow, Vaishali can comment here if she remembers more than I do.

* We really, really struggled with ALS DIFF while the wind was high earlier this afternoon.  For a long while the whole wind trend plot was above the 20 mph line, gusting to above 40 mph.  We weren't having trouble before that (it started pretty suddenly), and we were fine again after it died down to an average closer to 10 mph.  The seismic config was in "Windy" the whole time, per the table on the config MEDM screen.  We tried going through DIFF by hand several times, but I think the real key was just to wait until the wind was calmer.

* I saw that the power recycling gain dropped when we increased power.  I tried re-setting the SOFT offsets, and in the end it turned out that the QPD offsets that Sheila had found were being overwritten in the ISC_LOCK guardian - a remnant of a time when we had 2 states for those that we liked - so we reverted to Sheila's offsets (since mine were nearly identical) and removed the hard-coded offsets from the guardian.

* I measured DHARD yaw just after increasing the PSL power to 30W (it was actually closer to 27W during the measurement), and everything looked fine for going through the Lownoise_ASC state.  I did so by hand, and then again through at least one more lock, letting guardian run the state, and I had no problems.  We aren't sure what the underlying issue is, but it's not due to the SOFT offsets, since Sheila had been re-setting them by hand each lock at that time.

* On our final successful lock from this evening, I did the reduction of 45MHz modulation depth, skipped reducing the 9MHz modulation depth, and skipped the SRC ASC high power state, but otherwise was able to complete Noise_tunings, which basically puts us at NomLowNoise.  After a while the CSOFT / dP/dTheta instability showed up, and we lost lock.  I had increased the CSOFT pit gain from 0.6 to 0.8, but we lost lock while that was ramping, so I don't know if it would have helped or not.  If all the oplevs were functional, we'd like to try oplev damping again.  Alternatively, perhaps we'll try using the ISS 3rd loop again.

* After this, we've been unable to hold ALS_COMM locked, and we're suspicious of both the CARM / IMC loop (see the thread that starts with alog 36546), and the new ALSX laser (see alog 36550).

In conclusion, lots of measurements and testing, but no fundamental progress beyond the pre-ALS-laser-fiasco situation.  Hopefully someone has inspiration for us in the morning regarding the ALSX laser power fluctuations, and we'll be able to keep moving forward.

 

Comments related to this report
vaishali.adya@LIGO.ORG - 09:05, Wednesday 31 May 2017 (36559)

If I remember correctly, we found it strange that the transmitted power was dropping of its own accord, and when you tweaked up the alignment a bit, it would appear to increase and then drop again.

H1 ISC
jenne.driggers@LIGO.ORG - posted 22:02, Tuesday 30 May 2017 (36550)
ALSX laser confusion - laser experts please help

[Patrick, Sheila, Jenne]

We are perplexed by the behavior of the new ALSX laser.  In the first attached screenshot, for most of the time after -30 min (except a bit around -10 min), we were sitting in Locking_arms_green, so the green lasers were locked to the arms, but nothing else was going on locking-wise (this was concurrent with some IMC diagnostics).

We see that the green power transmitted through the arm drifts a lot.  It doesn't seem to be exactly following the IR output of the laser (as measured by the FIBR_A_LF_OUTPUT, which should be a combo of IR from the vertex and IR from the end station laser but dominated by the end station laser), although it does sometimes.  During these large drifts the arm was locked on TEM00 of the green beam, as seen on the transmission camera.  The drifts are mirrored by the LASER_GR_LF_OUTPUT, which is a measure of the green power pretty soon after the doubling crystal.  So, this is something that is really happening to the amount of green light going into the arm cavity. 
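For anyone wanting to reproduce this comparison offline, here is a minimal sketch using gwpy. The first two channel names are reconstructed from the abbreviations above, the third (arm green transmission) is entirely hypothetical, and the time window is a placeholder; treat all of them as assumptions.

```python
# Minimal sketch, assuming NDS access and gwpy; channel names and times are placeholders.
from gwpy.timeseries import TimeSeriesDict

chans = ['H1:ALS-X_LASER_GR_LF_OUTPUT',   # green power just after the doubler (name assumed)
         'H1:ALS-X_FIBR_A_LF_OUTPUT',     # IR beat note power (name assumed)
         'H1:ALS-X_TRAN_A_LF_OUTPUT']     # arm green transmission (hypothetical)
data = TimeSeriesDict.get(chans, 'May 31 2017 04:00', 'May 31 2017 05:00')

# Normalize each trend by its mean and overlay them. If the transmission drift
# tracks GR_LF but not FIBR_A, the drift is in the green power itself.
plot = (data[chans[0]] / data[chans[0]].mean()).plot(label='green after doubler')
ax = plot.gca()
for c in chans[1:]:
    ax.plot(data[c] / data[c].mean(), label=c.split(':')[-1])
ax.legend()
plot.show()
```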

Also note that in the second attached screenshot, the output of the new laser looks noisier than the old one's. We tried toggling the laser's noise eater switch remotely from the screen, but that didn't seem to affect the behavior.

It doesn't really look like what I think of as mode hopping of the laser, since these are slow drifts and not fast jumps.  At this point we're pretty confused, and not able to hold ALS COMM lock for very long, so we're hoping that someone can take a look in the morning and help us out.

Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 21:59, Tuesday 30 May 2017 (36552)
Ops Eve Shift Summary
TITLE: 05/30 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
LOG: Sheila, Jenne, Jeff K. and Vaishali working on relocking. Ran into possible changes in IMC/CARM loop UGF and the green power output from the end X ALS laser.

23:29 UTC Kyle and Chandra to mid Y
~23:43 UTC Jenne, Vaishali and I adjusted the fiber polarization in the MSR
01:42 UTC Sheila and Jeff to LVEA to measure CARM open loop gain
01:46 UTC Chandra and Kyle back
01:56 UTC Sheila and Jeff back
02:39 UTC Jeff to LVEA to measure IMC loop
02:51 UTC Gave Aidan remote access to take HWS measurement
Sheila to LVEA to measure IMC loop
H1 ISC (IOO, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 20:44, Tuesday 30 May 2017 - last comment - 21:58, Tuesday 30 May 2017(36546)
What happened to the CARM/IMC Gain?
J. Kissel, S. Dwyer

Looking into why we have such high frequency noise coupling around a few kHz, we re-measured the CARM UGF (with and without the IMC boost filter that comes on before nominal low noise) and the IMC UGF (with the boost OFF), and found both loops with a factor of 2 less gain than we expect, with UGFs at 7 kHz and 26 kHz respectively.

See attached transfer functions.

For reference, 
- just a few days ago, Kiwamu suggested that the IMC loop UGF was at its normally-high ~50 kHz in LHO aLOG 36354.
- I expect the current design to be similar to when Chris Whittle fully characterized the loop in LHO aLOG 29735.
- The only difference we expect is this "new" boost, which we started using in Oct 2016; see LHO aLOG 30549.

The investigation continues...
 
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 20:56, Tuesday 30 May 2017 (36547)
I attach the raw measurement data and the script I used to make the above plots.

SCRN0005.txt -- No IMC Boost CARM OLGTF Magnitude (in [dB])
SCRN0006.txt -- No IMC Boost CARM OLGTF Phase (in [deg])

SCRN0007.txt -- With IMC Boost CARM OLGTF Magnitude (in [dB])
SCRN0008.txt -- With IMC Boost CARM OLGTF Phase (in [deg])

SCRN0009.txt -- No Boost IMC OLGTF Magnitude (in [dB])
SCRN0010.txt -- No Boost IMC OLGTF Phase (in [deg])

Apologies for the arcane file format, the one GPIB setup has stopped working.
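Since the format is arcane, here is a minimal loader sketch (this is not the attached script; it assumes each SCRN file holds two whitespace-separated columns, frequency in Hz and the quoted quantity):

```python
# Hypothetical loader for the SCRN*.txt dumps; two-column format is assumed.
import numpy as np
import matplotlib.pyplot as plt

f, mag_db = np.loadtxt('SCRN0005.txt', unpack=True)   # CARM OLG magnitude, no boost
_, ph_deg = np.loadtxt('SCRN0006.txt', unpack=True)   # CARM OLG phase, no boost

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.semilogx(f, mag_db); ax1.axhline(0, ls='--')      # UGF is where magnitude crosses 0 dB
ax2.semilogx(f, ph_deg)
ax1.set_ylabel('magnitude [dB]'); ax2.set_ylabel('phase [deg]')
ax2.set_xlabel('frequency [Hz]')

# Crude UGF estimate by interpolating the 0 dB crossing
# (assumes the magnitude falls monotonically with frequency):
ugf = np.interp(0.0, mag_db[::-1], f[::-1])
print(f'UGF ~ {ugf:.0f} Hz')
plt.show()
```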
Non-image files attached to this comment
sheila.dwyer@LIGO.ORG - 21:58, Tuesday 30 May 2017 (36551)

Jenne Patrick Sheila

  • I went to the floor and measured the peak-to-peak value of the IMC PDH signal at out1, with the in1 slider set to 16dB, and saw 2.1Vpp (compare to 30549, where I measured 2.4Vpp).  
  • We also measured the crossover between the MC2 path and the laser path, and saw that it is also in good agreement with an old measurement (attached).
  • At a loss, we re-measured the IMC OLG and saw that the UGF was 49 kHz, very similar to what we expect, and a factor of 2 larger than Jeff's measurement...

I'm not sure what happened, but right now the IMC gain seems fine.
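As a rough consistency check on the first bullet (assuming the loop gain scales with the PDH peak-to-peak voltage, which is an assumption on my part):

```python
import math

# PDH peak-to-peak then vs. now (values from the bullets above); if optical
# gain scales with this signal, the implied gain change in dB is:
v_ref, v_now = 2.4, 2.1
print(f'{20 * math.log10(v_now / v_ref):.1f} dB')   # ~ -1.2 dB

# whereas a true factor-of-2 gain loss would look like:
print(f'{20 * math.log10(0.5):.1f} dB')             # -6.0 dB
```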

Images attached to this comment
Non-image files attached to this comment
H1 TCS (TCS)
aidan.brooks@LIGO.ORG - posted 20:08, Tuesday 30 May 2017 (36545)
Further HWSX measurements for lock around 1180229526

Following today's minor adjustment of HWSX alignment to clear the ETMX reflection, I reran the HWS plotting code on the lock from this evening. The point absorber is still very obvious. Here are three images from periods during the first 8 minutes.

The colored contour scale is truncated at -80nm, where everything lower than -80nm is white.

The three attached images show the OPD map after 169 s, 229 s, and 471 s.

You can see thermal diffusion taking place here - the gradient is increasing in the area surrounding the point source. As can be seen, the optical path distortion from the point source is about 80 nm over a diameter of roughly 25 mm.

Images attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 19:30, Tuesday 30 May 2017 - last comment - 21:02, Tuesday 30 May 2017(36543)
Did CP3 fully overfill last Friday?

After talking with Dave B. and Patrick T. about the set point values I gave them for the CP3/CP4 auto overfills, I discovered that CP3 stops overfilling when the thermocouples see < -30C; CP4 is set to -60C. I can't think of a reason why I would have set them differently, but now I know -30C is too high, and I'm concerned that last Friday's overfill never actually overfilled. Attached is a plot of Sunday night's manual fill right at the pump, where I stood for 50 minutes and verified LN2 coming out of the exhaust (the final dip you see in the plot). For most of those 50 minutes, the exhaust was just cold gas reading below -50C.

Also attached are the fill plots from last Wed. and Fri. You can see the usual shallow slope in the Wed. plot followed by the LN2 finale. Friday's fill fell immediately to something just below -30C, which could have been nothing more than cold gas.

We will change the set point to -100C for both CP3,4.
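To make the failure mode concrete, here is a toy version of the termination criterion (a hypothetical function, not the real vacuum controls logic; only the -30C and -100C values are from this entry):

```python
def fill_complete(tc_temps_degC, stop_setpoint=-100.0):
    """Declare an overfill complete only when a thermocouple on the exhaust
    reads below the stop set point, i.e. cold enough that LN2 (not just
    cold boil-off gas) is reaching the exhaust."""
    return min(tc_temps_degC) < stop_setpoint

# Sunday's manual fill spent most of 50 min near -50C (cold gas only):
print(fill_complete([-50.0], stop_setpoint=-30.0))    # True  -> false "complete"
print(fill_complete([-50.0], stop_setpoint=-100.0))   # False -> keep filling
```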

Images attached to this report
Comments related to this report
kyle.ryan@LIGO.ORG - 21:02, Tuesday 30 May 2017 (36549)
I think your observation of how long the thermocouples spent below -30C (the auto-fill "stop" set point) during Sunday night's manual fill, in the absence of LN2 actually exiting the exhaust, makes a convincing argument for abbreviated fill(s) on Friday (and Wednesday?).
LHO VE
chandra.romel@LIGO.ORG - posted 19:07, Tuesday 30 May 2017 (36542)
CP3 plug work

Kyle and I spent some time at the CP3 sensing line this evening. We increased GN2 flow by removing the 0-5 LPM rotameter and applying up to 30 psi - any more than this causes >10 psi exhaust pressure, and we didn't feel comfortable going above that. We also applied vacuum to the sensing line using a diaphragm pump. We saw the % full read back at 100% on occasion when applying pressure. BTW, the transducer is rated for 2000 psi. We toggled between pressure and vacuum until the GN2 bottle ran out, and connected a new bottle for overnight. Reinstalled the rotameter and left the flow at 5 LPM overnight.

After these activities I overfilled CP3 from the control room. It took 3 minutes with the valve at 50% open. Set back to 20% open.

Images attached to this report
X1 DTS
jonathan.hanks@LIGO.ORG - posted 17:22, Tuesday 30 May 2017 (36539)
LHO DTS - x1ldasgw1 restarted last week, required remounting frame directories for x1nds1, x1fw1

I needed to work on some code on the test stand and found that x1ldasgw1 had restarted last Friday ~23:48 local time, so I had to remount the frame directories for x1nds1 and x1fw1.

H1 ISC (CDS, DetChar, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 17:17, Tuesday 30 May 2017 - last comment - 13:20, Wednesday 31 May 2017(36538)
ALS Y WFS A Demod Local Oscillator Power Surpasses Threshold; Throws Error; Threshold Increased
S. Dwyer, J. Kissel

We found that the Beckhoff error handling system was reporting an intermittent error on ALS Y WFS A Demod Local Oscillator (LO) Power (H1:ALS-Y_WFS_A_DEMOD_LOMON) having surpassed its threshold (H1:ALS-Y_WFS_A_DEMOD_LONOM). 

Trending reveals that
 (a) The error bit has been green since the Beckhoff ADC was replaced on Sept 22nd 2016 (see LHO aLOG 29848), and
 (b) the LO's power has been slowly increasing over time, and is now on the hairy edge of the threshold.

After consulting Sheila, who suggested "it's probably fine," I increased the threshold from 21 [dBm?] to 22.5 [dBm?], and accepted the new threshold in both safe and OBSERVE.snaps in the SDF system. (I'm not confident that this monitor is actually calibrated into physical units, but 21 [dBm] doesn't sound crazy, so I put confidence in the designer of the system to have done so.)

Hopefully this doesn't turn out to be a canary for the demodulator like the diode laser power was to the now replaced ALSX laser...

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:37, Wednesday 31 May 2017 (36561)
Opened FRS Ticket 8248 regarding missing 6 dB attenuator on the ALS COMM VCO path.
jeffrey.kissel@LIGO.ORG - 17:55, Tuesday 30 May 2017 (36540)CDS, DetChar, ISC
Other thresholds updated / cleaned up:

H1:ALS-C_COMM_A_DEMOD_LONOM increased from 12 to 19 [dBm?]
   The monitor channel H1:ALS-C_COMM_A_DEMOD_LOMON increased from about 12.4 to 18.3 [dBm?] on March 01 2017 ~23:00 UTC (mid-afternoon local time). I could not find an associated aLOG about this.

H1:ISC-RF_C_AMP137M_OUTPUTNOM increased from 20 to 21.5 [dBm?]
   The monitor channel H1:ISC-RF_C_AMP137M_OUTPUTMON has been as high as 22 [dBm?] between Feb 7 and Mar 23rd, but then was brought back to ~21 with some small steps between then and now. No aLOGs about this one either.

All of these monitor channel changes were through the course of the observing run, so we presume that this is the new normal. New thresholds have been accepted into the SDF system.
Images attached to this comment
daniel.sigg@LIGO.ORG - 21:02, Tuesday 30 May 2017 (36548)

A nominal LO for a length sensor is around ~10-13 dBm. For a WFS the signal level is divided between the 4 segments. In software, the readbacks are added back together, so the LO sum should be similar in value. For the demodulators the RF power is measured after an internal 10-11 dB amplifier. So, a normal readback will be around 21-24 dBm for the LSC and the ASC sum. There is no amplifier for the phase-frequency discriminators, so their readbacks will be between 10-13 dBm.

For a distribution amplifier the power is measured before the 8-way splitter and is nominally around 22 dBm.
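A back-of-envelope check of this budget (a sketch only, assuming an ideal 4-way split and taking a 12 dBm LO drive and a 10.5 dB amplifier as representative values):

```python
import math

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10.0)

def mw_to_dbm(p_mw):
    return 10.0 * math.log10(p_mw)

lo_total_dbm = 12.0                                        # nominal LO for a length sensor
per_segment_dbm = mw_to_dbm(dbm_to_mw(lo_total_dbm) / 4)   # power split over 4 WFS segments
amp_gain_db = 10.5                                         # internal 10-11 dB amplifier

# software adds the four amplified segment readbacks back together:
summed_dbm = mw_to_dbm(4 * dbm_to_mw(per_segment_dbm + amp_gain_db))
print(round(summed_dbm, 1))                                # 22.5, inside the quoted 21-24 dBm
```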

The ALS-C_COMM demod is driven by the COMM VCO. Alog 34512 describes a measurement involving the COMM VCO. It looks like a 6dB attenuator was left out when the changes were reverted. This should be fixed.

Not sure what happened on Feb 7, but on March 23 the harmonics oscillator was swapped which required a readjustment of some of the RF attenuators. Looks like H1:ISC-RF_C_AMP137M_OUTPUTNOM was effectively reduced by 1dB, see alog 35051. This is fine.

daniel.sigg@LIGO.ORG - 13:20, Wednesday 31 May 2017 (36571)

Checking the ALS WFS RF readbacks:

  • X channels seem to be fine
  • In Y only ALS_Y_WFS_A shows a drift
  • Among the 4 segments, only channels 3 and 4 show the drift

This indicates that the EL3104 module L7 in EtherCAT end 3 (D1400175-v2) of EY is broken too.

Images attached to this comment
H1 ISC
daniel.sigg@LIGO.ORG - posted 15:01, Tuesday 30 May 2017 - last comment - 11:47, Wednesday 31 May 2017(36528)
EX ALS laser swapped

Temp A: 24.4 ºC
Temp B: 21.6 ºC
Diode: 1.80A
Laser Crystal Temp: 29.60 ºC
Doubler Temp: 33.62 ºC

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 15:14, Tuesday 30 May 2017 (36529)

Changed set points:

H1:ALS-X_FIBR_A_DEMOD_RFMAX: 10 from 4
H1:ALS-X_FIBR_LOCK_BEAT_RFMIN: -10 from -15
H1:ALS-X-LASER_HEAD_LASERDIODEPOWERNOMINAL: 1.80
H1:ALS-X-LASER_HEAD_LASERDIODEPOWERTOLERANCE: 0.2

daniel.sigg@LIGO.ORG - 15:21, Tuesday 30 May 2017 (36530)

The cable to the H1:ALS-X_LASER_IR_DC photodetector is intermittent and needs to be re-terminated. Currently, the readback is broken.

keita.kawabe@LIGO.ORG - 18:19, Tuesday 30 May 2017 (36541)

S/N (on the controller) Pulled out -> 2011B, Installed -> 2011A

Interlock cable was installed by Filiberto.

Handles on the controllers were swapped as the new one didn't have the tapped holes necessary for mounting on the table enclosure extension. The untapped handles will be drilled and tapped later.

sheila.dwyer@LIGO.ORG - 19:31, Tuesday 30 May 2017 (36544)

Kiwamu and I went out to the end station after they swapped the laser and tweaked the beatnote alignment. We ended up with more power on the BBPD than before the laser swap (20mW now, compared to 19mW from the old laser before the power dropped on Sunday evening). We also moved the lens in the laser path before the beatnote beam splitter 1.5 inches closer to the beamsplitter. This increased the beat note power to 8dBm, compared to about 0dBm from the old laser before Sunday evening.

kiwamu.izumi@LIGO.ORG - 05:37, Wednesday 31 May 2017 (36554)

After we finished up the hardware work on the ISCTEX table, Jenne and Ed aligned the X arm for the green laser, which resulted in a maximum (normalized) transmission of roughly 0.8 when fully resonant. Therefore the amount of light power reaching the corner station decreased by 20% from what it used to be. Since the output power of the new laser at that point was about 10% lower than the old laser's, half of the reduction in transmission can be explained by the reduced laser power. I think the remaining 10% is due to mode-matching loss.
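A quick back-of-envelope version of this accounting (numbers from this comment; splitting the deficit into laser power and mode matching assumes the two factors multiply):

```python
# Assumes transmission = laser_power_ratio * mode_matching.
trans_norm = 0.8     # normalized arm transmission after the swap
laser_ratio = 0.9    # new laser emits ~10% less green at that point

mode_matching = trans_norm / laser_ratio
print(f'implied mode matching ~ {mode_matching:.2f}')   # ~0.89, i.e. ~10% mode-matching loss
```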

jeffrey.kissel@LIGO.ORG - 11:47, Wednesday 31 May 2017 (36564)CDS
Opened FRS Ticket 8249 regarding broken readback of ALS X IR PD.
LHO VE
chandra.romel@LIGO.ORG - posted 00:13, Monday 29 May 2017 - last comment - 09:05, Wednesday 31 May 2017(36479)
Mid-Y pressure rise

We had a potentially scary situation tonight at mid-Y, and through crazy coincidence managed to fix it before it became a serious problem. Sheila contacted me around 10 pm local time about a verbal pressure alarm that was going off in the control room for BSC7 (gauge PT-170). I checked the MEDM screen from home and didn't see anything abnormal - except that the pressure is a bit high since the vent (7e-8 Torr). Most likely it's alarming because of its set point.

This alarm made me look at our site pressure trend (48 hr trend attached), and I noticed that PT-210 at mid-Y had been quickly drifting up starting around 7 pm. I suspected CP3 and/or CP4 were warming up due to the very hot temperatures we've had this weekend. Gerardo was unable to remotely log into CDS to initiate a remote overfill, even though we were supposed to have permission until June 1. I drove out to the site to manually overfill both cryopumps at the skid by opening the bypass valve 1/2 turn (just like the good ole days). Filled CP3 first and observed an almost immediate drop in pressure. It took 50 min. to overfill (verified by watching LN2 pour out of the exhaust). As soon as I started the fill, the exhaust flow increased to turbulent. CP4 didn't exhibit the same turbulent behavior, and took 30 minutes to overfill. The conclusion is that CP3's valve actuator setting from Friday, 15% open, was too low. I reset it to 18%, and also increased CP4 from 37% to 39% open. Tomorrow is supposed to be 98F!

We need to learn what the current pressure alarms are set to; I propose we tighten them just for mid-Y so vacuum staff are alerted quickly when pressure starts to rise. I also suggest we aim to maintain seconds, rather than minutes, of overfill time as we approach a hot summer.

Images attached to this report
Comments related to this report
chandra.romel@LIGO.ORG - 00:22, Monday 29 May 2017 (36480)

Based on this log entry from last June 24, it took 35 minutes to overfill CP4 until LN2 poured out of the exhaust. This was before the CP4 clog - we were experimenting with durations and flow rates to create a work-around for CP3.

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27950

kyle.ryan@LIGO.ORG - 01:22, Monday 29 May 2017 (36481)
Tonight's real life scenario has been my nightmare for the past 18 months (since ice plugs have required manual filling of CP3 and CP4).  It happened on my watch and I am responsible for it.  In my defense, this was not the result of inattention or a false sense of security on my part.  I had lowered CP3's manual LLCV %open value to 15% open, down from 17% open, in response to Friday's automated fill having only taken 17 seconds.  This would have been an appropriate response, perhaps, for springtime ambient temperature conditions but proved too much of a reduction/correction for this weekend's warmest-of-the-year weather.  

I look at the vacuum site overview screen multiple times on non-work days and am quite familiar with what the "normal" values are.  Today was no different. At around 07:30 pm local time, I looked and noticed that PT243 was 3.97 x 10^-9 torr, which is higher than normal and caught my attention.  I reasoned that this was probably hydrogen emitting from the BT steel on this "hot" day, but was concerned enough that I resolved to check it again before going to bed.  At approx. 10:30 pm local time, I looked and saw that PT243 had fallen to 2.?? x 10^-9 torr.  Minutes later, I checked my phone before going to bed and became aware of a text thread between Chandra R. and Gerardo M. which had been in progress for the previous 30 minutes.  So, the reduction in PT243 at 10:30 pm was due to the fact that Chandra was already on site and had started filling CP3 manually by opening the LLCV bypass valve.

Had Sheila D. not contacted Chandra at approx. 09:50 pm, and had Chandra not responded by doing a manual fill, the pressure shown by PT243 at 10:30 pm would have been much higher than the previously "concerning" value seen at ~07:30, and I feel that I would have responded appropriately.  Still, this didn't have to happen.  As Chandra reminded me, pressure trends are available (new location) for remote viewing and, had I reviewed these in addition to the Vacuum Site Overview, I would have noticed that the Y-mid values were increasing independently of the rest of the site pressures.  This would have dispelled my "hydrogen" theory at 07:30 and I would have done a manual refill then.



chandra.romel@LIGO.ORG - 03:57, Monday 29 May 2017 (36482)

Kyle, you shouldn't feel responsible. I'm usually the one who manipulates the LLCV settings based on temperature fluctuations and fullness of the Dewar, and have a better feel for the adjustments. Sorry I didn't explain that better before I left. We can start to think about the next level of automation on this system, which would increase/decrease the valve setting based on how long it took to fill the previous time. Folks should also recognize that the work we're proposing post O2 - decommissioning and/or regenerating these CPs - will eliminate these risks.
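A toy sketch of what that next level of automation could look like (all names, gains, and limits here are hypothetical, not an existing script):

```python
def next_llcv_setting(current_pct, last_fill_minutes,
                      target_minutes=1.0, gain_pct_per_min=0.1,
                      lo=10.0, hi=50.0):
    """Longer-than-target fills mean the steady-state flow is too low,
    so open the valve a bit more; shorter fills mean close it a bit."""
    error = last_fill_minutes - target_minutes
    new_pct = current_pct + gain_pct_per_min * error
    return max(lo, min(hi, new_pct))          # clamp to a safe range

print(next_llcv_setting(15.0, 50.0))   # after a 50 min fill at 15% open -> ~19.9%
print(next_llcv_setting(18.0, 0.3))    # after a ~17 s fill -> back off slightly
```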

Also, I doubt PT-170's alarming was actually a crazy coincidence. I forgot to trend its pressure, but am guessing it was also starting to increase due to loss of cryopump action at MY. And because its pressure was already high from the vent, it alerted us before we had to wait for 10^-8 Torr range alarms in the arm. Thank goodness Sheila was in the lab at the time to catch it!

 

michael.zucker@LIGO.ORG - 08:44, Monday 29 May 2017 (36484)

Good save; I should have thought of this. The dominant boiloff load is (should be) blackbody radiation from the tube, which is at BTE ambient temperature. I will work out a number for the fractional effect on liquid mass flow per degree, so we can add that % onto our "open-loop estimate".

EDIT: see post 36496

Worth noting, though, that the high ambient (BTE) temp is raising the hydrogen diffusion flux, which doubles every 6 C or so (harmlessly, as long as we have ion pumps). So the pressure trend (and particularly any attribution to the CP) has to be interpreted carefully.

Even before today, the ice plugs gave me nightmares. We have to fix them, and stop any more from happening. 

chandra.romel@LIGO.ORG - 08:59, Monday 29 May 2017 (36485)

Dave Barker suggested increasing to daily auto overfills rather than Mon-Wed-Fri. I like this idea. We'll discuss with vacuum group this week.

david.barker@LIGO.ORG - 09:04, Monday 29 May 2017 (36486)

The cell phone alarm system currently monitors the vacuum gauge pairs at the ends of the 2km beam tube sections. For MY those are PT243 (closest to corner) and PT246 (closest to EX). I can certainly add all the other gauges in MY (PT244, PT245, PT210) to the system if needed.

david.barker@LIGO.ORG - 09:16, Monday 29 May 2017 (36487)

I've created a remote access permit for the vacuum group, good through the end of the year.

david.barker@LIGO.ORG - 11:32, Tuesday 30 May 2017 (36517)

No cell phone alarms were raised for this event; their upper alarm range is 5.0e-08 torr, an order of magnitude higher than what MY saw Sunday night (trend attached).

Gerardo was able to remotely log into CDS from home; he had a permit open. He was unable to directly log into the vacuum1 machine to make vacuum changes due to a recent ssh cert change. I recommend that the vacuum group test remote login to vacuum1 every week to verify this is possible.

 

Images attached to this comment
chandra.romel@LIGO.ORG - 16:05, Tuesday 30 May 2017 (36533)

More clues... or confusion. Mid-Y IP9 current plotted with the PT-210 pressure increase from Sunday evening. Strange behavior in the IP.

Images attached to this comment
chandra.romel@LIGO.ORG - 08:44, Wednesday 31 May 2017 (36557)

Trended PT-170 over the weekend to understand the verbal alarm Sheila heard in the control room. PT-170 pressure has been steadily falling since the vent, and at about 9:30 pm local time on Sunday it was crossing the 7e-8 Torr alarm threshold, causing the verbal alarm. What luck!

Images attached to this comment
chandra.romel@LIGO.ORG - 09:05, Wednesday 31 May 2017 (36558)

Outside temperature plotted over 30 days along with PT-243 at mid-Y.

Images attached to this comment