H1 CDS (DAQ, SUS)
david.barker@LIGO.ORG - posted 15:34, Monday 16 May 2016 - last comment - 17:52, Monday 16 May 2016(27224)
new SUS PI models installed, DAQ restarted

Tega, Ross, Jim, Dave:

We installed new code for the models h1susitmpi, h1susetmxpi and h1susetmypi. The new code required a DAQ restart, for which TEAM-PI obtained permission from TEAM-COMMISSIONING this afternoon.

The purpose of the change was to make the code more efficient and claw back some CPU time on the h1susitmpi model, which was running long (15-16 us for a 64k model). This was successful; it is now running in the 9-10 us range. ETMX is unchanged at 7 us, and there is a hint that ETMY is running one microsecond longer, from 3 us to 4 us.
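For anyone who wants to keep an eye on this, each front-end model publishes its per-cycle execution time as an EPICS record. A minimal sketch of trending it with pyepics (the channel name here is hypothetical; the real CPU-meter record depends on the model's DCU ID):

    # Minimal sketch: poll a front-end model's cycle-time meter over Channel Access.
    # The channel name is a placeholder -- substitute the real record for the model.
    import time
    from epics import caget

    CHANNEL = "H1:FEC-123_CPU_METER"  # hypothetical; real name depends on DCU ID

    samples = []
    for _ in range(60):                # one minute of 1 Hz polling
        value = caget(CHANNEL)         # execution time in microseconds
        if value is not None:
            samples.append(value)
        time.sleep(1.0)

    if samples:
        print("max %d us, mean %.1f us" % (max(samples), sum(samples) / len(samples)))
        # A 64k model has a ~15.3 us cycle budget (1/65536 s), so 15-16 us is
        # right at the edge, consistent with the overruns described above.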

Comments related to this report
tega.edo@LIGO.ORG - 17:52, Monday 16 May 2016 (27231)
Tega, Ross, Jim and Dave.

Detailed changes made to the PI models.
PI_MASTER:
1. Added OMC_PI_MODE library part.
2. Replicated the functionality of the "SUS_PI_DAMP" block in "SUS_PI_COMPUTE" and "ETM_DRIVER".
3. Removed the down-conversion blocks from SUS_PI_DAMP to avoid unnecessary computation in the h1susitmpi model (the down-conversion idea is sketched after these lists).
4. Renamed OMC_PI as OMC_PI_DOWNCONV to better reflect functionality.
5. Rearranged the library parts so that Simulink blocks related to the OMC_DCPD are on the right whilst blocks that process the QPD data are on the left. 

h1susetmxpi:
1. Replaced the ETMX_PI_DAMP block with the new library parts: SUS_PI_COMPUTE (block name: ETMX_PI_DAMP) and ETM_DRIVER (block name: ETMX_PI_ESD_DRIVER).
2. Moved the down-conversion blocks out of ETMX_PI_DAMP into a single block at the top of the model.
3. Added OMC_DCPD data into the PI control path using a switch that takes either the processed signals from the QPDs (ETMX_PI_DAMP) or the processed signals from the OMC_DCPDs (ETMX_PI_OMC_DAMP).

h1susetmypi:
1. Replaced the ETMY_PI_DAMP block with the new library parts: SUS_PI_COMPUTE (block name: ETMY_PI_DAMP) and ETM_DRIVER (block name: ETMY_PI_ESD_DRIVER).
2. Moved the down-conversion blocks out of ETMY_PI_DAMP into a single block at the top of the model.
3. Changes needed to process OMC data are on hold for now.

h1susitmpi:
1. Updated the links for ITMX_PI_DAMP and ITMY_PI_DAMP blocks to the new library part: SUS_PI_COMPUTE.
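The down-conversion these blocks perform is a standard digital heterodyne: the fast signal is multiplied by quadrature oscillators near the PI mode frequency and the products are low-passed, shifting the high-frequency mode content down to a band the slower damping logic can act on. A minimal numpy/scipy sketch of the idea, with illustrative frequencies and rates rather than the models' actual parameters:

    # Minimal sketch of the down-conversion idea in the PI damping path:
    # heterodyne a high-frequency mode signal to baseband, then low-pass.
    # All frequencies and rates are illustrative, not the models' actual values.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 65536                  # fast model rate (Hz)
    f_mode = 15540.0            # example PI mode frequency (Hz)
    f_lo = 15500.0              # local-oscillator frequency (Hz)
    t = np.arange(fs) / fs      # one second of data

    signal = np.sin(2 * np.pi * f_mode * t)      # stand-in for the sensed PI mode

    # Mix with quadrature oscillators: the 40 Hz difference frequency survives
    # the low-pass; the ~31 kHz sum frequency is rejected.
    i_mix = signal * np.cos(2 * np.pi * f_lo * t)
    q_mix = signal * np.sin(2 * np.pi * f_lo * t)
    b, a = butter(4, 100.0 / (fs / 2))           # 100 Hz low-pass
    i_bb = filtfilt(b, a, i_mix)
    q_bb = filtfilt(b, a, q_mix)

    amplitude = 2 * np.hypot(i_bb, q_bb)         # recovers the mode amplitude (~1)
    print("recovered amplitude ~ %.2f" % amplitude[fs // 2])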

The attached images show the before & after snapshots for each model.
Images attached to this comment
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 14:37, Monday 16 May 2016 (27216)
HWSMSR crashed Sunday 3 AM

Now restored. Restarting the h1hwsmsr computer did the trick.

 

Here's more detail on what happened:

I came in this morning and noticed a connection error on DIAG_MAIN. I opened up the HWS ITMs code and saw that every channel had gone white. There was no restart in the morning that could have affected TCS. I power-cycled the MSR computer and was able to rerun the code.

Dave also mentioned that various computer crashes between 2 and 4 AM local were normal during ER6; he didn't see this problem during O1. We also looked at the memory and CPU usage. Nothing is overloaded.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 13:32, Monday 16 May 2016 - last comment - 13:54, Monday 16 May 2016(27221)
weekend timeline

We had some system problems over the weekend; here is a summary of the timeline. (These would appear to be unrelated events.)

The last issue caused Guardian problems with the ALS_XARM node. We did the following to try to fix it:

The power-up of h1guardian without the EPICS gateway gave even more CA connection errors, this time with the HWS IOC and the h1seib3 FEC. These were very confusing and seemed to go away when the h1slow-h1fe EPICS gateway was restarted, which added to the confusion. We need to reproduce this error.
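As a starting point for reproducing it, here is a hedged sketch of checking CA connectivity to the suspect IOCs from the guardian machine (the channel names are placeholders, not the actual records):

    # Minimal sketch: verify Channel Access connectivity to suspect IOCs.
    # Channel names are placeholders -- substitute records actually served by
    # the HWS IOC and the h1seib3 front end.
    from epics import PV

    test_channels = [
        "H1:FEC-123_STATE_WORD",    # hypothetical h1seib3 FEC record
        "H1:TCS-ITMX_HWS_STATUS",   # hypothetical HWS IOC record
    ]

    for name in test_channels:
        pv = PV(name)
        connected = pv.wait_for_connection(timeout=5.0)
        print("%s: %s" % (name, "connected" if connected else "NO CONNECTION"))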

After Patrick restarted the h1ecatx1 IOC the guardian errors went away.

Comments related to this report
jameson.rollins@LIGO.ORG - 13:54, Monday 16 May 2016 (27222)

Rebooting the entire guardian machine just because one node was having a problem seems like extreme overkill to me.  I would not recommend that as a solution, since it obviously kills all other guardian processes, causing them to lose their state and current channel connections.  I don't see any reason to disturb the other nodes because one is having trouble.  Any problem that would supposedly be fixed by rebooting the machine should also be fixed by just killing and restarting the affected node process.

The actual problem with the node is not specified, but the only issue I know of that would cause a node to become unresponsive and immune to a simple "guardctrl restart" is the EPICS mutex thread lock issue, which has been reported at both LLO and LHO, in both cases with solutions that don't require rebooting the entire machine.  Presumably the issue being reported here is somehow different?  It would be good to have a better description of what exactly the problem was.
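For reference, a single wedged node can be bounced without disturbing its neighbors; a minimal sketch driving the standard guardctrl commands (node name taken from the report above):

    # Minimal sketch: restart one guardian node instead of rebooting the machine.
    # Assumes the standard guardctrl CLI is available on the guardian host.
    import subprocess

    node = "ALS_XARM"
    subprocess.run(["guardctrl", "restart", node], check=True)

    # If the node is wedged and immune to a plain restart (e.g. the EPICS mutex
    # lock issue mentioned above), stop it and start it fresh instead:
    # subprocess.run(["guardctrl", "stop", node], check=True)
    # subprocess.run(["guardctrl", "start", node], check=True)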

H1 CDS
patrick.thomas@LIGO.ORG - posted 12:56, Monday 16 May 2016 - last comment - 13:00, Monday 16 May 2016(27219)
Error on h1ecatx1 IOC
See attached screenshot.
Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 13:00, Monday 16 May 2016 (27220)
Restarted the IOC.
LHO VE (CDS, VE)
patrick.thomas@LIGO.ORG - posted 12:37, Monday 16 May 2016 (27218)
Forced PT100 Cold Cathode on
Forced PT100 Cold Cathode on in Beckhoff on h0vacly per Chandra's request (see attached). It is now reading ~1.01e-07.
Images attached to this report
H1 SEI (SEI)
travis.sadecki@LIGO.ORG - posted 10:01, Monday 16 May 2016 (27215)
HEPI Pump Trends - past 45 days

Attached are HEPI Pump Trends for the past 45 days.  To my untrained eye, I don't see any egregious excursions in pump pressures.  SEI folks should review.

This completes FAMIS Request 4520.

Images attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 09:46, Monday 16 May 2016 - last comment - 09:50, Monday 16 May 2016(27212)
HAM11 annulus IP
Labeled HAM11 annulus IP (physically on HAM12) fell out of scale at around noon yesterday. 

The labeled HAM12 annulus IP (physically on HAM11) needs to be replaced, i.e., the HAM11 annulus IP is working hard.

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=26804
Images attached to this report
Comments related to this report
chandra.romel@LIGO.ORG - 09:50, Monday 16 May 2016 (27214)
Diagonal pressure continues to trend down
Images attached to this comment
LHO VE
chandra.romel@LIGO.ORG - posted 09:20, Monday 16 May 2016 (27209)
CP3 overfill
9am local

1/2 turn on LLCV bypass --> took 22 seconds to overfill.

Lowered LLCV back to 20% (from 21%). Hot weather last week was likely the cause of the long overfill times last Wed. and Fri.

*watch exhaust pressure after tomorrow's Dewar fill

Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 08:42, Monday 16 May 2016 - last comment - 09:21, Monday 16 May 2016(27208)
Monday Morning Meeting Minutes

SEI - No major maintenance plans scheduled. Ongoing tweaking with BRS.

SUS - model changes for HSTS scheduled for tomorrow.

VAC - GV measurements and manual CP3 overfill scheduled for tomorrow.

CDS - Cables to be pulled for ITM ESD tomorrow. Auto restart of workstations to take place tomorrow morning.

PSL - the team sees no pressing reason to go into the enclosure at this time, other than possibly to do some DBB aligning tomorrow.

Comments related to this report
chandra.romel@LIGO.ORG - 09:21, Monday 16 May 2016 (27210)
*CP3 Dewar fill from Norco truck tomorrow
H1 General
edmond.merilh@LIGO.ORG - posted 08:24, Monday 16 May 2016 (27206)
Shift Summary - Day Transition
TITLE: 05/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 0.0Mpc
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    Wind: 4mph Gusts, 1mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s 
QUICK SUMMARY:
 
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 07:10, Monday 16 May 2016 - last comment - 08:10, Monday 16 May 2016(27204)
laser status
The laser was off this morning.  The chiller indicated that there was a "Flow sensor 1" error.
Comments related to this report
peter.king@LIGO.ORG - 08:10, Monday 16 May 2016 (27205)
Looking at the data from the various flow sensors, maybe, just maybe, the problem is with the flow sensor attached to the auxiliary circuit (which monitors flows to the power meters ...). The attached plot seems to imply that the flow to the power meters dropped before the crystal chiller flow declined.

Would need to check that the power meters are really attached to this part of the cooling circuit, because the diode chiller was running this morning.

For reference the cooling system information is under
https://dcc.ligo.org/DocDB/0067/T1100373/002/LIGO-T1100373-v2%20Coolant%20system%20operating%20and%20maintenance%20manual.pdf
Images attached to this comment
H1 PSL
robert.schofield@LIGO.ORG - posted 16:59, Sunday 15 May 2016 (27203)
Shaker mounted on PSL table

While the laser was down I mounted a B&K voice coil shaker on the +X, +Y corner of the table to study the PSL jitter contribution to DARM noise in the 80-200 Hz region. I doubt it will affect alignment, but be aware since it is quite heavy.

H1 DetChar (DetChar, PEM)
brynley.pearlstone@LIGO.ORG - posted 15:21, Sunday 15 May 2016 - last comment - 08:57, Wednesday 18 May 2016(27202)
Possible new comb in H1, and persistent 0.5Hz comb

BP

Following Friday night's lock, I looked at the spectrum and saw some regular structure, looking like a 2Hz comb at odd frequencies. This looks like a new comb. [figs 1&2]

As follow-up, I ran coherence with all of the EBAY magnetometers and saw strong coherence in some places with this 2Hz comb, as well as with the persisting 0.5Hz comb (see plots below).

0.5Hz comb: https://ldas-jobs.ligo.caltech.edu/~brynley.pearlstone/comb_investigations/May_2016_comb/H1:PEM-EX_MAG_EBAY_SEIRACK_Z_DQ_25_40Hz.png

0.5Hz comb + 2Hz comb: https://ldas-jobs.ligo.caltech.edu/~brynley.pearlstone/comb_investigations/May_2016_comb/H1:PEM-EX_MAG_EBAY_SEIRACK_X_DQ_25_40Hz.png

Note: These two are the same magnetometer (MAG_EBAY_SEIRACK), looking at 2 different axes. Both combs were also seen in other magnetometers.

Full list of plots: https://ldas-jobs.ligo.caltech.edu/~brynley.pearlstone/comb_investigations/May_2016_comb/

Previous efforts to mitigate the 0.5Hz comb were focussed on powering the timing card independently in the CS EBAY LSC-C1 I/O chassis, which handles DARM. This has not worked to eliminate the comb. I can't report any reduction yet, as Friday's lock was not sensitive at low (<100Hz) frequencies.

This 2Hz comb on a 1Hz offset is the transform of a 1Hz square wave. The 2Hz comb might have to do with the 0.5s and 1s structure seen in Keith's data folding studies here: https://alog.ligo-wa.caltech.edu/aLOG/index.php
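That identification is just the Fourier series of a square wave: a 1Hz square wave has power only at its odd harmonics (1, 3, 5, ... Hz), which is exactly a comb with 2Hz spacing and a 1Hz offset. A quick numpy check:

    # Quick check that a 1 Hz square wave transforms into a 2 Hz-spaced comb
    # offset by 1 Hz (power at the odd harmonics only).
    import numpy as np

    fs = 256                        # sample rate (Hz)
    t = np.arange(16 * fs) / fs     # 16 seconds of data
    square = np.sign(np.sin(2 * np.pi * 1.0 * t))    # 1 Hz square wave

    spec = np.abs(np.fft.rfft(square)) / len(t)
    freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

    peaks = freqs[spec > 0.05]
    print(peaks)    # -> [1. 3. 5. 7. 9. 11.]: odd harmonics, i.e. a 2 Hz comb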

Images attached to this report
Comments related to this report
brynley.pearlstone@LIGO.ORG - 08:57, Wednesday 18 May 2016 (27270)

Looking at the strain sensitivity of this lock vs a typical O1 lock, there is no way to tell whether the combs are reduced in the strain. It is clear that these new 2Hz combs aren't going to be quiet.

Images attached to this comment
H1 ISC (ISC, PSL)
sheila.dwyer@LIGO.ORG - posted 21:38, Saturday 14 May 2016 - last comment - 13:49, Sunday 15 May 2016(27199)
30 Watts for 5 minutes

Evan, Sheila

Today we blended CHARD P, with the same blend filters Craig and I used last night for yaw. This went smoothly, and the attached screenshot shows the settings, which are accepted in SDF and shouldn't need to be in the guardian since they don't need to change.  We haven't made it to low noise to see what kind of improvement this gives us.

We also had a look at dithering SRM and demodulating DARM control to get an alignment signal for SRM.  The SNR was not good for a 4Hz or 9Hz dither, but for a 15Hz dither there is clearly a signal, although the lock point was not good. I've added filters we could try to suppress the length coupling in the MICH2, SRCL2, and PRCL2 filter banks, but we didn't get a chance to try this again.
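For context, the dither scheme works like a lock-in amplifier: a small angular dither is applied to SRM, and DARM control is demodulated at the dither frequency; the recovered line amplitude measures the angle-to-length coupling and should go to zero at the aligned point. A minimal sketch of the demodulation step on synthetic data (frequencies and coupling values are illustrative only):

    # Minimal lock-in sketch of the dither-and-demodulate alignment scheme:
    # drive at f_dither, multiply the readback by the reference, and average.
    # Synthetic data; all numbers are illustrative only.
    import numpy as np

    fs = 2048                      # sample rate (Hz)
    f_dither = 15.0                # dither frequency (Hz)
    t = np.arange(30 * fs) / fs    # 30 s stretch

    coupling = 0.3                                        # stand-in misalignment
    darm = coupling * np.sin(2 * np.pi * f_dither * t)    # coupled dither line
    darm += 0.5 * np.random.randn(t.size)                 # sensing noise

    # In-phase demodulation; the factor of 2 undoes the mixing loss. The result
    # is an error signal that goes to zero when the optic is aligned.
    error = 2 * np.mean(darm * np.sin(2 * np.pi * f_dither * t))
    print("recovered coupling: %.3f (injected %.1f)" % (error, coupling))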

Evan noticed that there is an instability in CHARD P at 0.2Hz when we tried to power up past 20 Watts or so last night, so we redesigned the boost that comes on at 17 Watts to be gentler and have higher frequency zeros (the new filter is MSBoost2 and in FM1).  With this we seem to be stable at 30 Watts, at least for 5 minutes (we broke lock by trying to go to 35W).

We had a bit of trouble with the laser today, nothing that can't be fixed easily (noise eater and ISS oscillations, difficulty locking the FSS).  Evan changed the temp search ramp parameters for the FSS.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 13:49, Sunday 15 May 2016 (27201)

Also, we had to reduce the analog gain of the TMS QPDs even further in order to avoid saturation on some of the segments when going above 25 W. We used to set the gain to 9 dB once we achieved resonance; now we reduce it to 3 dB.

This is a problem that is better solved by picomotoring the TMS beams. The worst offender seems to be the Y-arm B diode; it has a factor of something like 40 in power between segments 2 and 4.

LHO VE
chandra.romel@LIGO.ORG - posted 14:15, Friday 13 May 2016 - last comment - 09:22, Monday 16 May 2016(27181)
CP3 overfill
1:30pm local

1/2 turn open on LLCV bypass --> took 33:17 min. to overfill CP3. 

Raised LLCV from 20% to 21%.
Comments related to this report
chandra.romel@LIGO.ORG - 09:22, Monday 16 May 2016 (27211)
Temp induced
Images attached to this comment
chandra.romel@LIGO.ORG - 08:05, Saturday 14 May 2016 (27197)
CP3 Dewar is being filled this Tuesday so LLCV may need to be lowered again.