H1 PSL (CDS, DetChar, ISC, PSL, TCS)
sheila.dwyer@LIGO.ORG - posted 13:53, Monday 30 May 2016 - last comment - 11:52, Tuesday 31 May 2016(27440)
laser tripped, TCS DAC problem, acoustic noise from PSL AC coupling to RF

Sheila, Terra, Craig

The PSL is off; it has been since about 17:30 UTC. The chillers don't seem to be tripped. Confusingly, the laser screen indicates that the chillers are fine (two green boxes) while the PSL_STATUS screen has a red box for the crystal chiller (screenshot attached). Jason will come out to the site to investigate/restart the laser.

We also noted that the temperature trends for the PSL have been unusual since Thursday morning's incursion (2nd screenshot). I went to the controller box and saw that the north AC unit was on, which was probably unintentional (the south unit was off, and they are normally both off in science mode). I turned it off at noon local time. Terra noted that the PSL microphone has seen an elevated level of noise in the last few days, which went back to normal as soon as the AC unit was off. (In the third attached screenshot, blue traces are from the time when the AC was on.) The monitors on the RF AM stabilization also changed when we turned off the AC, and some channels on the AM stabilization box seem to have been sensitive to some kind of switching of the PSL HVAC over the last few days.

It seems like we need a better way to monitor whether the PSL environment settings are correct, maybe adding them to DIAG_MAIN if we can find a good set of tests to write. It is also surprising to me that our RF system seems to be so sensitive to acoustic pickup in the PSL. Has anyone in DetChar looked at PSL PEM monitors to see if glitches there are correlated to the "RF45" glitches seen during O1?

The TCS chillers are also tripped, with the same DAC problem we have been having (alog 27435). This happened about 36 hours ago.

Images attached to this report
Comments related to this report
terra.hardwick@LIGO.ORG - 12:23, Monday 30 May 2016 (27442)

Bottom chiller screen; flashing between 'temperature' and 'warning'

Images attached to this comment
peter.king@LIGO.ORG - 12:38, Monday 30 May 2016 (27443)
The laser was up and running this morning when I checked it around 6 am (local). I've never seen the gibberish message on the diode chiller controller before; it is most likely a controller malfunction.

    To fix the problem, I would try (in order):
 - power cycling the chiller with the power switch located at the rear of the chiller
 - replacing the chiller controller (if Jeff Bartlett happens to have a spare handy)
 - installing the spare chiller (which will take a bit of work because ... )
     * the turbine flow sensors need to be replaced with the vortex ones
     * the 3-phase power plug needs to be installed
     * some filters need to be removed
     * the coolant lines will need to have any air pockets removed

    The problem with the first solution is that it is hard to gauge how long the "fix" might remain valid before the laser could trip out again.
terra.hardwick@LIGO.ORG - 13:25, Monday 30 May 2016 (27444)

We used Sheila's very instructive alog to kill and restart all the models on the OAF machine, reset the TCS chillers and restart the TCS laser. 

jason.oberling@LIGO.ORG - 16:14, Monday 30 May 2016 (27448)

J. Oberling, S. Dwyer

We attempted to bring the PSL back up but were ultimately unsuccessful. We came in and found the crystal chiller running and the diode chiller off, although the Laser MEDM screen indicated the diode chiller was up and running. EPICS channels frozen again?

The diode chiller turned on without an issue, although the weirdness on the main screen, seen in Terra's photos above, did not go away. We let the chiller run for several minutes and then attempted to power on the HPO. Approximately midway through the pump diode power-up everything stopped, and we found the diode chiller shut off. To see if it was a coincidence, we reset the interlocks and attempted to turn the HPO on again, this time monitoring the chillers. The HPO got to its second stability range and the diode chiller immediately shut off. We power cycled the diode chiller (which, by the way, cleared the funky front panel issue seen in the photos above). This time the HPO achieved the second stability range for 10 whole seconds before the diode chiller shut off again; it almost seems as if the chiller is shutting off as soon as it sees a heat load. During all this the crystal chiller remained up and running without issue.

At this time I'm out of ideas, although the chiller behavior coupled with the front screen weirdness makes me think we may have a control panel problem with the diode chiller (as Peter mentions above); I seem to recall that when we had the chiller flow sensor issues last year (April/May 2015) we also had some weird issues with that chiller (the one we just recently removed from service) that were solved by replacing the control panels. I left the PSL off; the diode chiller is also off and the crystal chiller is running. Please do not attempt to turn the laser on; we will investigate more fully tomorrow morning.

jason.oberling@LIGO.ORG - 11:52, Tuesday 31 May 2016 (27450)

Filed FRS #5605.

LHO VE
kyle.ryan@LIGO.ORG - posted 11:47, Monday 30 May 2016 - last comment - 09:26, Tuesday 31 May 2016(27441)
HAM4 AIP
(see attached) 
Will investigate Tues.
Non-image files attached to this report
Comments related to this report
vernon.sandberg@LIGO.ORG - 09:26, Tuesday 31 May 2016 (27449)

AIP = "annulus ion pump"

H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:07, Saturday 28 May 2016 - last comment - 14:08, Friday 03 June 2016(27437)
locklosses possibly related to RF problem or SR3 glitches

Evan and I spent most of the day trying to investigate the sudden locklosses we've had over the last 3 days.  

1) We can stay locked for ~20 minutes with ALS and DRMI if we don't turn on the REFL WFS loops. If we turn these loops on we lose lock within a minute or so. Even with these loops off we are still not stable, though, and saw last night that we can't make it through the lock acquisition sequence.

2) In almost every lockloss, you can see a glitch in the SR3 M2 UR and LL noisemons just before the lockloss, which lines up well in time with glitches in POP18. Since the UR noisemon has a lot of 60 Hz noise, the glitches can only be seen there in the OUT16 channel, but the UR glitches are much larger. (We do not actuate on this stage at all.) However, there are two reasons to be skeptical that this is the real problem:

It could be that the RF problem that started in the last few days somehow makes us more sensitive to losing lock because of tiny SR3 glitches, or that the noisemons are just showing some spurious signal which is related to the lockloss/RF problems. Some lockloss plots are attached.

It seems like the thing to do would be to try to fix the RF problem, but we don't have many ideas for what to do.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 20:25, Saturday 28 May 2016 (27438)

We also tried running Hang's automatic lockloss tool, but it is a little difficult to interpret the results from it. There are some AS 45 WFS channels that show up in the third plot that appears, which could be related to either a glitchy SR3 or an RF problem.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 20:27, Saturday 28 May 2016 (27439)

One more thing: nds1 crashed today and Dave helped us restart it over the phone.

andrew.lundgren@LIGO.ORG - 07:41, Wednesday 01 June 2016 (27470)DetChar, ISC, Lockloss
For the three locklosses that Sheila plotted, there actually is something visible on the M3 OSEM in length. It looks like about two seconds of noise from 15 to 25 Hz; see the first plot. There's also a huge ongoing burst of noise in the M2 UR NOISEMON that starts when POP18 starts to drop. The second through fourth attachments are these three channels plotted together, with causal whitening applied to the noisemon and OSEM.

Maybe the OSEM is just witnessing the same electrical problem as is affecting the noisemon, because it does seem a bit high in frequency to be real. But I'm not sure. It seems like whatever these two channels are seeing has to be related to the lockloss even if it's not the cause. It's possible that the other M2 coils are glitching as well. None of the other noisemons look as healthy as UR, so they might not be as sensitive to what's going on.
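
For anyone reproducing these plots: the whitening filter isn't specified here, so purely as an assumption, a minimal causal-whitening sketch using a standard AR prediction-error filter might look like the following (function name and model order are illustrative):

    import numpy as np
    from scipy.linalg import solve_toeplitz
    from scipy.signal import lfilter

    def causal_whiten(x, order=16):
        # Fit an AR(order) model via the Yule-Walker equations, then apply
        # the prediction-error filter [1, -a1, ..., -ap], causal by construction.
        x = np.asarray(x, dtype=float)
        x = x - np.mean(x)
        # biased autocorrelation estimate at lags 0..order
        r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order] / len(x)
        a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
        return lfilter(np.concatenate(([1.0], -a)), [1.0], x)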
Images attached to this comment
keita.kawabe@LIGO.ORG - 14:08, Friday 03 June 2016 (27501)

RF "problem" is probably not a real RF problem.

The bad RFAM excess was observed only in the out-of-loop RFAM sensor, not in the RFAM stabilization control signal. In the attached, top is out-of-loop, middle is the control signal, and bottom is the error signal.

Anyway, whatever this low-frequency excess is, it should come in after the RF splitter for the in- and out-of-loop boards. Since this is observed in both the 9 and 45 MHz RFAM chassis, it should be due to a difference in how the in- and out-of-loop boards are configured. See D0900761. I cannot pinpoint what that is, but my guess is that this is some DC stuff coming into the out-of-loop board (e.g. the auto bias adjustment feedback, which only exists in the out-of-loop board).

Note that even if it's real RFAM, 1 ppm RIN at 0.5 Hz is nothing, assuming that the calibration of that channel is correct.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 15:22, Wednesday 01 June 2016 (27486)DetChar, ISC, Lockloss
Correction: The glitches are visible on both the M2 and M3 OSEMs in length, also weakly in pitch on M3. The central frequency looks to be 20 Hz. The height of the peaks in length looks suspiciously similar between M2 and M3.
Images attached to this comment
andrew.lundgren@LIGO.ORG - 01:42, Thursday 02 June 2016 (27496)DetChar, ISC, Lockloss
Just to be complete, I've made a PDF with several plots. Every time the noise in the noisemons comes on, POP18 drops and it looks like lock is lost. There are some times when the lock comes back with the noise still there, and the buildup of POP18 is depressed. When the noise ends, the buildup goes back up to its normal value. The burst of noise in the OSEMs seems to happen each time the noise in the noisemons pops up. The noise is in a few of the noisemons, on M2 and M3.
Non-image files attached to this comment
H1 ISC
evan.hall@LIGO.ORG - posted 17:06, Saturday 28 May 2016 (27436)
Oscillator RFAM is worse since the morning of May 26

Not clear why. It may correspond to a PSL incursion the same morning.

Also, what used to be hooked up to outputs 7 and 8 on the 9 MHz distribution amplifier in the CER? Sheila and I found some terminated attenuators on these ports, but I recall it used to be cabled up somehow.

Non-image files attached to this report
H1 CDS (CDS, TCS)
sheila.dwyer@LIGO.ORG - posted 13:05, Saturday 28 May 2016 (27435)
How to fix TCS chiller tripping problem

Evan H. arrived on site this morning to find the TCS chillers tripped; he reset them and found the same behavior as described in alogs 27381 and 27374.

On the OAF IOP GDS TP screen, both the DAC and ADC bits (as well as DK) were red; the ADC error cleared with a diag reset, but the DAC error would not reset. We called Dave and he asked us to check that the DAC outputs were all zero by looking at the DAC MON screens (accessed by clicking the blue buttons labeled D0). This means that it is the same DAC problem Nutsinee described.

To fix the problem, you need to stop and restart all the models on the front end, including the IOP model. This can be done as follows:

1) Maybe check SDF before you kill the models (I forgot this step).

2) Log in to h1oaf0 as controls.

3) Run the script /etc/kill_models.sh and wait for all models to be shut down in the correct order, with the IOP model last.

4) Run the script /etc/start_models.sh (steps 2-4 are sketched as a terminal session after step 5).

5) Dave said that for the PEM model only, we restore the settings by loading OBSERVE.snap and hitting LOAD TABLE + EDB. Since I forgot to check SDF before killing the models, I am using the automatic BURT snapshots to restore the rest of the models. Confusingly, the automatic BURT restores always appear to indicate that there are no diffs, because all channels are non-monitored in them; to actually check, you need to select the full table, select all channels (monitored or not), then set the table back to showing setting diffs. Time machine doesn't work for the SDF screens, which would be handy in a situation like this.
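
For reference, a minimal terminal-session sketch of steps 2-4 (the script paths are the ones quoted above; prompts and any waiting/verification output are omitted):

    # from a control room workstation
    ssh controls@h1oaf0

    # stop all models on the front end; the script shuts them down in the
    # correct order, with the IOP model last -- wait for it to finish
    /etc/kill_models.sh

    # then bring all models back up, IOP model first
    /etc/start_models.sh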

Then go to the mezzanine and follow the instructions at https://lhocds.ligo-wa.caltech.edu/wiki/TCS to restart the chillers and the laser controllers near the TCS tables.

H1 General
edmond.merilh@LIGO.ORG - posted 21:30, Friday 27 May 2016 (27434)
Shift Summary - Evening
TITLE: 05/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Unknown
INCOMING OPERATOR: None
SHIFT SUMMARY:
  • Tour group in the Control Room.
  • Tara also gave a private tour to a couple of friends.
  • Work on TCS is done for the evening but not complete.
  • Sheila and Evan engaged in ongoing investigation of locklosses occurring around the ENGAGE_SRC_ASC stage.
  • ~2:15 UTC Going-away party for Brynley has me at the helm, solo. I was instructed to see how long I can keep things locked (with not much hope); otherwise, I can call it if I'm not having any luck.
  • 3:13 UTC EY 0.3 micron dust alarm
  • 3:00 UTC Patrick left. I didn't realize he'd remained.
  • DRMI can get past its ASC stage.
  • After the last DRMI lockloss I now have NO IR flashes in the arms. I guess it's time I called it a night.
LOG:
H1 TCS (ISC)
nutsinee.kijbunchoo@LIGO.ORG - posted 19:08, Friday 27 May 2016 (27433)
CO2Y table alignment work

Kiwamu, Tega, Nutsinee


Quick Conclusion: We are done for the day. The clipping is pretty much fixed but the heating profile remains non-uniform. We will resume the work on Tuesday. CO2Y power has been set to zero while CO2X remains at its nominal power. The beam dump has been put back in front of the FLIR camera.

Details: Today we went back to the table and tried to find the clipping point somewhere around the M4 and M4A mirrors because of the low transmissivity we saw the day before (alog 27369). Seeing that the beam profile looks good reflecting off M4, we measured the power again using a power meter with a bigger aperture. We measured more power this time and the transmissivity from point 2 to point 3 wasn't as bad (78% instead of 35% -- see first attachment). We moved on to fix the horizontal clipping on M5, starting by adjusting the M4A mirror. After centering the beam on M5 we moved M5 and M6, and moved the annulus mask position sensor away from the beam path by about a centimeter. We adjusted BS1 to align the beam through both irises and fine-tuned the alignment using the steering mirror between BS1 and the first iris. We adjusted M3 slightly to center the beam spot on the FLIR camera screen. Looking at the camera image we noticed the beam profile has a temperature gradient of 2 deg C from lowest to highest when the power was 0.15 W at the screen. We are not sure if this is critical, but we can improve it on Tuesday. The beam dump was put back in place before we closed out.


Also, by fixing this clipping we improved the maximum CO2Y power to the ITM from ~2.5 W to ~3.7 W.

FLIR image yesterday:

FLIR image today (I don't know why the image rotates. Use your imagination.):
Images attached to this report
LHO VE (VE)
david.barker@LIGO.ORG - posted 16:13, Friday 27 May 2016 (27432)
CP3 refill reminders schedule changes

The CP3 refill reminders (emails and cell phone text messages) have been rescheduled from every other day (actually every odd day of the year) to Mon, Wed, Fri. This means that reminders will no longer be sent during weekends. The change was requested by the vacuum group.

LHO General
patrick.thomas@LIGO.ORG - posted 16:11, Friday 27 May 2016 (27431)
Ops Day Summary
13:55 UTC Chris S. opened high bay outside door for approx. 30 min to remove barrels
Mode cleaner lost lock, NPRO noise eater went out of range
15:22 UTC Jim W. to LVEA to toggle noise eater switch
15:28 UTC Jim W. done
15:55 UTC Betsy to LVEA to look for equipment
16:11 UTC Travis to HAM6 with tape measure
16:22 UTC Jim B. to staging building
16:28 UTC Travis and Betsy done
16:42 UTC Nutsinee and Kiwamu to TCSY CO2 table
17:02 UTC Jeff B. and Jason using forklift by mechanical room
17:32 UTC Jim B. back
17:55 UTC Keita and Haocun to HAM6 to take measurements
18:02 UTC Jeff B. and Jason done
18:09 UTC Kyle to LVEA to look for property tag
18:11 UTC Boom lift delivery through gate
18:55 UTC Kyle back. Kyle going to mid Y.
19:05 UTC Nutsinee and Kiwamu back
19:25 UTC Kyle back from mid Y
20:05 UTC Jeff B. using forklift near mechanical building
20:21 UTC Christina to open OSB receiving door
20:31 UTC Chandra and Gerardo to mid Y to fill CP3 and look at equipment in building
20:51 UTC Kiwamu and Nutsinee back to TCSY CO2 table
21:01 UTC Keita to CER to check ISC racks
21:13 UTC Jeff B. done
21:13 UTC Chandra and Gerardo done

Sheila and Evan H. have been troubleshooting locklosses. Nutsinee and Kiwamu have been working on TCSY CO2.
H1 General
edmond.merilh@LIGO.ORG - posted 16:08, Friday 27 May 2016 (27430)
Shift Summary - Evening Transition
TITLE: 05/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
    Wind: 17mph Gusts, 7mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.07 μm/s 
QUICK SUMMARY:
LHO VE
chandra.romel@LIGO.ORG - posted 14:28, Friday 27 May 2016 (27428)
CP3 LLCV decrease
Lowered LLCV value from 20% to 19% because exhaust pressure was reading 1.2 psi. 

Chandra and Gerardo made arrangements for CP3 fill on Monday, Memorial Day.
LHO VE
chandra.romel@LIGO.ORG - posted 14:15, Friday 27 May 2016 (27427)
CP3 overfill
2pm local

1/2 turn open LLCV bypass - took 22 sec. to overfill CP3. 

H1 CDS
david.barker@LIGO.ORG - posted 12:16, Friday 27 May 2016 - last comment - 15:53, Friday 27 May 2016(27425)
Staging building SUS test stand restarted

Jim, Dave:

Thursday afternoon and this morning we brought the Staging Building SUS test stand back to life. The front end machine (bscteststand2) and its IO Chassis started with no problems. Most of our work was in getting the workstation running to permit DTT and DATAVIEWER to run. The original SUS workstation, which used to be in the office area, has been repurposed. We located the old SEI workstation in the DTS and pressed it into SUS service. In order to run the workstation outside of the clean area but not in the office, we have temporarily set up in the communications closet. Next week, when the area behind the large roll-up door becomes available, we will move the workstation there to avoid the closet's cooling issues.

Originally a NAT router (bisbee) in the FEC rack was connected to the GC switch via a wall outlet. We have decommissioned the NAT router and used the ethernet run to the closet to hook the workstation up to the iMac and the FEC. We will not be replacing the NAT router; this test stand will no longer be accessible from GC.

Next week Betsy will be able to use the iMac in the clean room to X-forward DTT sessions running on the workstation.

Comments related to this report
david.barker@LIGO.ORG - 15:53, Friday 27 May 2016 (27429)

The workstation in the comms closet has been powered down and the door has been closed.

H1 SEI
hugh.radkins@LIGO.ORG - posted 09:41, Friday 27 May 2016 (27423)
EndY BRS Drift--May want to recenter before power outage

Attached are the trends for the BRSY drift. We are close to -10000 counts and drifting at about 5000 counts per week. The power outage next weekend may suggest we just wait until after power is recovered. Will check with UW whether we should recenter before or after, as well as on the power recovery procedure.

Images attached to this report
H1 ISC
terra.hardwick@LIGO.ORG - posted 02:33, Friday 27 May 2016 - last comment - 12:43, Friday 27 May 2016(27416)
First successful electrostatic damping of PI at LHO, new PI mode?

Ross, Tega, Evan, Terra

Tonight we successfully damped a known parametric instability at 15540.6 Hz with the newly implemented ESD damping scheme. 

In April last year, this mode was detected in the X-arm during a 15 W lock. Ultimately it was avoided by turning on the ETMX ring heater (0.5 W requested power, top and bottom), shifting the optical mode peak down in frequency and away from the ~15540 Hz mechanical modes. To test the new active damping scheme, we turned off the ETMX ring heater, allowed 15540.6 Hz to start to ring up during a 24 W lock, and damped it by driving the UR and LL quadrants of the ETMX ESD.

Below we tracked the amplitude of the ~15540.6 Hz mode. The leftmost action is the important part: first we briefly rang the mode up manually (gain -1000) before switching the gain sign (gain +500) to rapidly damp it. Attached images show the power spectrum before damping and immediately after. We had planned to ring up and down again to get a better idea of the gain settings, but with the peak now small, the line tracker got confused by another peak ~1 Hz away, and then we lost lock shortly after for unrelated reasons.

Briefly, the damping setup: we grab the mechanical mode signal from the OMC transmission DCPDs (H1:OMC-PI_DCPD_64KHZ_A) and send it to the relevant end station, downconverting before the trip and upconverting after using synced oscillators set approximately to the known mechanical mode frequency. There, the mechanical mode peak is tracked with iWave. The output is run through a damping filter for gain control and finally sent to actuate on the UR and LL quadrants of the ETM LNLV ESD. Overall, we get early detection of PI from the OMC and actuation on the test mass at exactly the mechanical mode frequency that is ringing up, enabling damping to happen earlier in the lock acquisition process, before the PI has had much time to ring up. This is necessary as we increase power yet remain working with relatively low actuation force from the ESDs.
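
As a rough illustration of that chain, here is a conceptual Python sketch of the down/upconversion step (not the actual front-end RCG model; the channel name, 64 kHz rate, mode frequency, and gain values are taken from this entry, while the function and variable names are made up, and the low-pass/decimation and iWave tracking stages are omitted):

    import numpy as np

    fs = 65536.0     # 64 kHz rate of H1:OMC-PI_DCPD_64KHZ_A
    f_pi = 15540.6   # approximate mechanical mode frequency (Hz)

    def downconvert(x):
        # Mix the DCPD signal against a synced oscillator so the ~15.5 kHz
        # mode content sits near DC for the trip to the end station.
        t = np.arange(len(x)) / fs
        return x * np.exp(-2j * np.pi * f_pi * t)

    def upconvert_and_drive(z, gain):
        # Mix back up at the end station after tracking/filtering and apply
        # the damping gain; flipping the sign of the gain flips between
        # ringing the mode up (-1000 above) and damping it (+500 above).
        t = np.arange(len(z)) / fs
        return gain * np.real(z * np.exp(2j * np.pi * f_pi * t))  # -> UR/LL ESD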

PI at 15520: While working with 15540.6 Hz, we witnessed a mode at 15520 Hz begin to ring up as well. During a second 24 W lock, we allowed both to ring up for ~15 min; they grew rapidly at similar rates, ultimately producing a strong 20 Hz comb and breaking the lock. Will investigate (and attempt to damp) more this weekend.

We didn't get another good lock to test on tonight and we're still working out issues so I've left the damping system in manual mode and have turned the ETMX ring heater back on. 

Images attached to this report
Comments related to this report
ross.kennedy@LIGO.ORG - 11:33, Friday 27 May 2016 (27424)

We used offline data from the same time as this damping and tracked the amplitude and frequency of the line. At around 700 s you see the same response as discussed above. From the frequency tracking you can see that the amplitude is just from the 15540 Hz mode, i.e. our line tracker was locked on this mode. The scale of the amplitude in this plot compared to the above plot differs by ~sqrt(2) due to a forgotten factor in our h1susetmxpi model.
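
iWave is a dedicated real-time tracker; as a rough offline cross-check, a line's amplitude and frequency can be recovered by complex demodulation. A minimal sketch follows (all names illustrative; note the overall amplitude scale depends on conventions like the factor of 2 below, exactly the kind of bookkeeping that can produce sqrt(2)-type discrepancies):

    import numpy as np

    def track_line(x, fs, f0, tau=1.0):
        # Demodulate at the nominal line frequency f0, low-pass with a
        # moving average of length tau seconds, then read off amplitude
        # and instantaneous frequency from the complex baseband signal.
        t = np.arange(len(x)) / fs
        z = x * np.exp(-2j * np.pi * f0 * t)
        n = max(1, int(tau * fs))
        zf = np.convolve(z, np.ones(n) / n, mode='same')
        amp = 2.0 * np.abs(zf)  # factor 2: real signal is split between +/- f0
        dphi = np.gradient(np.unwrap(np.angle(zf)), 1.0 / fs)
        freq = f0 + dphi / (2.0 * np.pi)
        return amp, freq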

Images attached to this comment
aidan.brooks@LIGO.ORG - 12:43, Friday 27 May 2016 (27426)

Here are the estimates for the HOM spacing (in Hz) for the X and Y arm cavities over the last two days; the underlying ROC-to-spacing relation is sketched after the list below.

Remember:

  • Self-heating creates a bulge on the surface that makes the concave optic surface flatter and hence ROC becomes larger
    • responsible for short time scale spikes
  • RH turning on makes the surface more concave, hence ROC becomes smaller
    • around 3/4 of the way through this time series
  • RH turning off makes the ROC larger
    • about 1/3 of the way through this time series
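
For reference, the relation behind these bullets is the standard two-mirror cavity formula (textbook physics, not something stated in this entry): with g-factors g_i = 1 - L/R_i for cavity length L and mirror ROCs R_i,

    % higher-order transverse mode spacing of a two-mirror cavity
    \Delta f_{\mathrm{HOM}} = \frac{c}{2L}\,\frac{1}{\pi}\,
        \arccos\!\left(\pm\sqrt{g_1 g_2}\right)

so ring-heater and self-heating changes to the ROCs move the HOM spacing, shifting the optical mode peak relative to the ~15540 Hz mechanical modes as described in the main entry.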

Images attached to this comment