Reports until 09:34, Thursday 11 June 2015
H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:34, Thursday 11 June 2015 (19070)
Clear HEPI Watchdog Counters
Cleared the HEPI L4C watchdog counter for ITMY. All others were green and clear.
LHO VE
bubba.gateley@LIGO.ORG - posted 08:51, Thursday 11 June 2015 (19069)
Beam Tube Washing
Scott L. Ed P.
6/8/15
The following are some of the dirtiest areas we have seen on the X-arm.
Cleaned 58 meters ending 8.4 meters north of HNW-4-068.

6/9/15
 Maintenance day, which means we can use the Hilti HEPA vacuum, the most effective vacuum for cleaning out the support tubes. We have been holding off on using this vacuum during ER7 because of the loud thump emitted by the internal cleaning system of the HEPA filter.
We will go back and clean 2 sets of support tubes and cap them, as well as move forward to clean as many supports as reasonable before the end of maintenance day at noon.
Cleaned 30 meters of tube ending at HNW-4-070. Test results posted on this A-log.
 
Robert Schofield came out to the area where we are cleaning to observe our procedure and tube-cleaning methods, to investigate possible glitches seen by the control room operators during lock.

6/10/15
Remove lights, cords, vacuum machines, and all related equipment from enclosure and thoroughly clean all equipment, then relocate to next section north.
Started cleaning at HNW-4-070, cleaned 15 meters of tube.

To date we have cleaned a total of 3418 meters of tube. 



Non-image files attached to this report
H1 INJ (INJ)
peter.shawhan@LIGO.ORG - posted 08:32, Thursday 11 June 2015 (19068)
Ending scheduled burst hardware injections for ER7
Since we got a few coincident burst hardware injections this morning, and the rest of ER7 (now through Sunday 8:00 PDT) will be split between local measurements/commissioning and running, I have disabled the scheduled burst hardware injections for the rest of ER7.  To be precise, I left the next few dozen in the schedule but I set their amplitudes to zero, in order to test the long-term behavior and stability of tinj; tinj will call awgstream to inject them, but because awgstream will just add zero strain to the instrument, these should have no effect and will not appear in ODC bits or database segments.
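The zero-amplitude trick works because the injected strain is simply scaled and summed into the data stream; with a scale of zero the tinj/awgstream machinery runs end to end but the instrument is untouched. A minimal sketch of that idea (hypothetical function, not the actual awgstream call):

```python
import numpy as np

def inject(data, waveform, scale):
    """Add a scaled injection waveform to a data stream.

    With scale=0 the pipeline still exercises the full code path,
    but the output is bit-for-bit identical to the input.
    """
    out = data.copy()
    n = min(len(out), len(waveform))
    out[:n] += scale * waveform[:n]
    return out

data = np.random.default_rng(0).normal(size=1024)
burst = np.sin(2 * np.pi * 100 * np.arange(256) / 16384)
assert np.array_equal(inject(data, burst, 0.0), data)  # zero amplitude: no effect
```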

There is still a plan to add a stochastic injection sometime before ER7 ends, if time permits.
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 08:04, Thursday 11 June 2015 - last comment - 19:34, Friday 12 June 2015(19067)
Owl Shift Summary

00:00 The ifo locked right before I came in. Wind speed is <20 mph. 90 mHz blend filter is used for BSC2.

           I noticed the BS oplev sum is saturated (> 80000 counts). Is this alright? It's been around this value for 10+ days.

01:55 There's a big bump at ~30 Hz that caused a big dip in the BNS range. SUS Oplev plots didn't show anything suspicious. The bump at this frequency recurred throughout the night, just not as big.

02:00 A 4.7 MAG earthquake in Ecuador shook PR3 a little and BNS range dropped slightly (from 61 Mpc to 60 Mpc), but that's all it did. No WD tripped. 

08:00 We've been locked for 8+ hours and still going strong at 61 Mpc! We had 5+ hours of coincidence with Livingston tonight. Handing the ifo over to Jeff B.

Comments related to this report
daniel.hoak@LIGO.ORG - 17:16, Thursday 11 June 2015 (19083)

Judging from the normalized spectrograms on the summary pages, the 30Hz noise looks like occasional scattering noise, likely from the alignment drives sent to the OMC suspension.  Currently the Guardian sets the OMC alignment gain at 0.2 (for a UGF of around 0.1-0.5 Hz in the QPD alignment loops).  This is probably too high from a scattering-noise perspective; it can be reduced by a factor of two without ill effects.

daniel.hoak@LIGO.ORG - 19:34, Friday 12 June 2015 (19105)DetChar

To follow up on this noise, here is a plot of one of the noise bursts around 20-30Hz, alongside the OMC alignment control signals.  The noise has the classic scattering-arch shape, and it is correlated with the ANG_Y loop, which sends a large signal to the OMC SUS.  We've seen this kind of thing before.  The start time for the plot is 09:27:10 UTC, June 11 (the time axes of the two plots are a little off, because apparently indexing for mlab PSDs is the hardest thing I've had to do in grad school.)

The second plot attached compares the OMC-DCPD_SUM and NULL channels at the time of the noise bursts in the first plot, to a quiet time one minute prior.  The scattering noise is largely coherent between the two DCPDs.
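For reference, the arch shape comes from the scattered-light fringe frequency f(t) = 2|v(t)|/lambda, so a slow, large-amplitude drive to the OMC SUS can upconvert into the 20-30 Hz band. A rough sketch with illustrative numbers (the scatterer motion amplitude is an assumption, not a measured value):

```python
import numpy as np

LAMBDA = 1.064e-6   # laser wavelength [m]
f_drive = 0.3       # alignment drive frequency [Hz], in the quoted UGF range
amp = 8e-6          # assumed scatterer motion amplitude [m]

t = np.linspace(0, 10, 100000)
v = 2 * np.pi * f_drive * amp * np.cos(2 * np.pi * f_drive * t)  # scatterer velocity
f_fringe = 2 * np.abs(v) / LAMBDA   # instantaneous fringe (arch) frequency [Hz]

print(f"peak fringe frequency: {f_fringe.max():.1f} Hz")  # lands in the 20-30 Hz band
```

Note that halving the alignment gain halves the drive velocity and hence the peak fringe frequency, which is why reducing the gain pulls the arches out of band.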

Images attached to this comment
H1 SEI
hugh.radkins@LIGO.ORG - posted 08:00, Thursday 11 June 2015 (19066)
BS ISI stage1 ISO running high wind blends

Jim must have switched these.  The 90mHz blends are on for X & Y rather than the 45s.  The SDF is red for this reason.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 07:56, Thursday 11 June 2015 (19065)
CDS model and DAQ restart report, Wednesday 10th June 2015

model restarts logged for Wed 10/Jun/2015

no restarts reported

H1 INJ (INJ)
peter.shawhan@LIGO.ORG - posted 07:16, Thursday 11 June 2015 (19064)
Transient injections now running at LHO under the 'hinj' account
Dave Barker created a shared 'hinj' account for running hardware injections (alog 19057), and we restarted tinj (the transient injection process) under that account yesterday.  Unfortunately, the injections failed overnight due to awgstream errors.  That was puzzling because executing awgstream on the command line (with zero amplitude or extremely small amplitude) worked fine.  It turns out that different versions of the gds, awgstream and root packages are installed on the LHO injection machine compared to the LLO injection machine, so the environment setup that was copied over from LLO caused awgstream to fail when executed by tinj.  I made a separate environment setup script for LHO and restarted tinj under that, and now it seems to be working fine.

After doing a zero-amplitude test injection at 1118063333 (which should have had no effect on anything), I modified the schedule to more promptly do some burst injections (coincident at both sites), at GPS times 1118063933, 1118067123, and 1118067543.  (The actual signal comes a second or two later than those waveform file start times.)  As of this writing, the first of those was picked up by the burst pipelines: see https://gracedb.ligo.org/events/view/G159516 .  We'll see about the others.
H1 SUS (ISC, SUS)
keita.kawabe@LIGO.ORG - posted 02:04, Thursday 11 June 2015 - last comment - 04:46, Thursday 11 June 2015(19060)
Charging localization measurement (Leo, Betsy, Evan, Kiwamu, Daniel, Keita)

Related:

Den's alog 16624

Den's alog 16727

Summary:

Many things are dubious.

  1. EX LL ESD is broken, as the ESD-to-length response is 3 orders of magnitude smaller than the others. Probably a shoddy connection somewhere between the driver and the ESD electrode, as the voltage readback looks normal.
  2. It looks as if either the sign of EX ESD output is flipped (positive digital out induces negative voltage) or the sign of EY ESD output is flipped but not both.
  3. I'm just assuming that the sign convention for CAL-CS_DARM_EXTERNAL_DQ is length(X)-length(Y), and that it's correct, without any confirmation.

Despite these things, it seems as if the charges on the back are on the same order as reported in Den's alog 16624.

If we assume that the sign of EY ESD is wrong and we still take 1. into account, the charges are calculated as:

        front    back
  X     4.4 nC   1.1 nC
  Y     2.2 nC   1.2 nC

This looks semi-reasonable.

If we assume that the sign of EX ESD is wrong and we still take 1. into account, the charges are:

        front    back
  X     5.7 nC   -0.9 nC
  Y     -6 nC    -0.4 nC

I don't like that the signs are all over the place.

If we assume that everything is correct except that the EX LL is broken (i.e. we ignore 2. above but take 1. into account), the charges are:

        front    back
  X     4.4 nC   1.1 nC
  Y     -6 nC    -0.4 nC

Again the signs are all over the place.

These are based on the same calculation as Den's alog 16624.

I'm assuming that the sign convention of CAL-CS_DARM_EXTERNAL_DQ is length(X)-length(Y) (i.e. positive when X stretches and Y shrinks).

Anyway, no matter how you look at the data, the back surface charges are quite similar to what was reported in Den's alog (except for the signs that don't make much sense for the latter two tables).

We tried similar measurements as described in Den's alog 16727 but the angle data for X was unusable (no coherence at all). If you're interested in Y data, all measurements were saved in Betsy's template directory.

 

The gist of the measurements:

Differences between EX and EY measurements:

Fishy sign of ESDs (Go to the floor and figure out):

The EY ESD length drivealign matrix has a negative DC gain while the corresponding EX matrix is positive, even though the LSC DARM output matrix already takes care of the sign difference necessary for DARM control at EX and EY.

It looks as if either the bias line has a wrong sign for one ETM but not the other, or LL/LR/UL/UR lines have a wrong sign for one ETM but not the other.

Raw-ish data and calculations:

Measured the zero-bias transfer coefficients from ESD segments and the ring heater (top and bottom combined) to the calibrated DARM channel in meters/volts at around 80Hz. After taking the TF of the drivers and the DAC V/cts into account, they are:

      LL [m/V]    LR [m/V]    UL [m/V]    UR [m/V]    ESD combined [m/V]   Ring Heater [m/V]
EX    +1.3E-18    +2.2E-15    +1.0E-15    +5.6E-16    +3.8E-15 * 4/3       -6.7E-16
EY    +5.6E-16    +6.5E-16    +1.4E-15    +1.5E-15    +4.1E-15             +1.9E-15

Positive data is actually about 24 deg (Y arm) or 30 deg (X arm) delayed, while negative data is about 210 deg (X arm) delayed.

EX LL is not working. Coherence is very large, the voltage readback looks OK, but it has 3 orders of magnitude smaller response than the others. EX LL did not change much when the nominal EX ESD bias was put back on.

I multiplied the ESD combined data by 4/3 only for EX to take into account that the EX LL driver is not working.

Force to length transfer function at 80Hz is -1/M/(2*pi*80Hz)^2 = -1E-7[m/N] (negative as the phase is 180 degrees relative to DC).

Also, the above is the TF to DARM, which is supposed to be X length - Y length. In order to move to a sign convention where positive means that the ETM moves closer to ITM, the sign of the X data should be flipped.

Combining these, the above raw-ish data is converted to N/V as:

      LL [N/V]    LR [N/V]    UL [N/V]    UR [N/V]    ESD combined [N/V]   Ring Heater [N/V]
EX    +1.3E-11    +2.2E-8     +1.0E-8     +5.6E-9     +3.8E-8 * 4/3        -6.7E-9
EY    -5.6E-9     -6.5E-9     -1.4E-8     -1.5E-8     -4.1E-8              -1.9E-8

The signs of this table don't really make sense (positive ESD electrode potential should move ETMX and ETMY in the same direction if the charge has the same sign).
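The m/V to N/V conversion can be checked mechanically: divide by the force-to-length response -1/(M*(2*pi*80)^2) and flip the sign of the X-arm entries for the convention where positive means the ETM moves toward the ITM. A sketch (M = 40 kg assumed for the test mass):

```python
import math

M = 40.0                                   # test mass [kg] (assumed)
f = 80.0                                   # measurement frequency [Hz]
F2L = -1.0 / (M * (2 * math.pi * f) ** 2)  # force-to-length TF, ~ -1e-7 m/N

def to_newtons_per_volt(tf_m_per_V, x_arm):
    """Convert a measured m/V coefficient to N/V, flipping the X-arm sign."""
    nv = tf_m_per_V / F2L
    return -nv if x_arm else nv

# EX LR and EY LL entries from the tables above
print(to_newtons_per_volt(2.2e-15, x_arm=True))    # ~ +2.2e-8 N/V
print(to_newtons_per_volt(5.6e-16, x_arm=False))   # ~ -5.6e-9 N/V
```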

Anyway, from here, you solve Den's rough formula:

FRH / VRH = Afront Qfront + Aback Qback

FESD / VESD = Bfront Qfront + Bback Qback

Afront = 1 / 0.20 [1/m] ; Aback = -1 / 0.04 [1/m] 

Bfront = 1 / 0.20 [1/m] ; Bback = 1 / 0.04 [1/m]
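Den's two equations form a 2x2 linear system in Qfront and Qback. Plugging in the EX row of the N/V table reproduces the 4.4 nC front / 1.1 nC back numbers quoted in the first table above (magnitudes; the signs depend on which flip hypothesis is applied). A sketch:

```python
import numpy as np

# Geometry coefficients from Den's rough formula
A_front, A_back = 1 / 0.20, -1 / 0.04   # ring heater couplings [1/m]
B_front, B_back = 1 / 0.20, 1 / 0.04    # ESD couplings [1/m]

# EX measurements: ring heater and combined ESD transfer coefficients [N/V]
F_rh = -6.7e-9
F_esd = 3.8e-8 * 4 / 3   # 4/3 factor compensates for the dead LL quadrant

coeffs = np.array([[A_front, A_back],
                   [B_front, B_back]])
Q_front, Q_back = np.linalg.solve(coeffs, [F_rh, F_esd])
print(f"Q_front = {Q_front * 1e9:.1f} nC, Q_back = {Q_back * 1e9:.1f} nC")  # 4.4, 1.1
```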

Comments related to this report
rainer.weiss@LIGO.ORG - 04:46, Thursday 11 June 2015 (19063)
It is too confusing to be sure. My guess is also that there is charge on the back of the test mass. So let me suggest
that, since you are now going to enter the chamber and use the top gun, the 10" flange with the off-axis nipple
be mounted on both X and Y ETM chambers with the associated small gate valves, so that we have this capability
of the best location for discharge in the future. Do not remove the gate valves from the middle flanges.
H1 AOS
jim.warner@LIGO.ORG - posted 00:03, Thursday 11 June 2015 (19062)
Shift Summary

~16:00 Locked IFO

~16:45 Winds pick up, lock loss

Many frustrating hours of lock losses at REFL_TRANS or thereabouts

23:15 Lock IFO again.

H1 DetChar (ISC)
jim.warner@LIGO.ORG - posted 22:34, Wednesday 10 June 2015 - last comment - 23:33, Wednesday 10 June 2015(19059)
High-ish winds making locking difficult

When I took over from Jeff this afternoon, he had successfully damped down bounce and roll modes on various optics, so I was able to proceed with locking the IFO at 17:15. About 45 minutes after that, DARM got glitchy and we noticed that the wind traces on the wall were all trending up. After 45 minutes we lost lock and I haven't been able to relock since. Winds seem to be slowly winding down, but it's not totally calm yet. Pretty consistently the IFO will get to CARM reduction, then lose lock on REFL_TRANS. Attached plots show trends for the last 6 hours at EX and EY. We lost lock pretty much right as winds topped out at EY.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 23:33, Wednesday 10 June 2015 (19061)

Winds have settled down. IFO is locked on LSC_FF, just 20 minutes after LLO went down. Oh, well, more data points about LHO locking and winds. I attach the last 7 hours of wind speeds for EX and EY; we were able to lock at roughly the beginning and at the end. So, roughly 0:00:00 UTC 6/11 and again at 5:45 UTC, and not betwixt the two. DARM is still kind of glitchy, but it seems to be holding up.

Images attached to this comment
H1 INJ
edward.daw@LIGO.ORG - posted 17:56, Wednesday 10 June 2015 (19058)
Files for simultaneous stochastic injection at both sites.
I've created 1 hour injection files intended for simultaneous injection at LHO/LLO. There is a single 3600 second text file on h1hwinj1 at this location:
h1hwinj1:/ligo/home/edward.daw/research/hardware_injections/dependencies/sources/virgo/NAPNEW/SCRIPTS/IsotropicSbGenerator/first_injframes/SB_H1_ER7_1hr.txt
This contains a single set of channel data for injection at the Hanford site. There is a corresponding one hour ascii text file at Livingston at this location:
l1hwinj1:/home/edward.daw/injections/SB_L1_ER7_1hr.txt
Both files were created using a single call to IsotropicSbGenerator.py on the l1hwinj machine as follows:
cd /ligo/home/edward.daw/research/hardware_injections/dependencies/sources/virgo/NAPNEW/SWIG/
./IsotropicSbGenerator.py -i IsotropicSbGenerator3.ini
The results were 2 frames, one containing the Hanford data, the other containing the Livingston data. The LLO file was copied by ftp
to l1hwinj1.ligo-la.caltech.edu. At each site, libframe and frgetvect were used to convert the frame data to ascii text.

The amplitudes of these injections are omega_GW=0.001 at 100Hz in each detector. This should be far more subtle than the previous injection, so no need (hopefully) for Jeff/Adam to rescale the amplitude this time.

Hope there is some simultaneous up time at the two sites to try this test; I will be delighted if this is possible, but understand of course if there isn't an opportunity.

Thanks for any help. I've left a copy of this entry on the LLO aLOG.
H1 CDS
david.barker@LIGO.ORG - posted 17:53, Wednesday 10 June 2015 (19057)
hinj user account created on h1hwinj1 machine

Following what was done recently at LLO, I have created a common hinj account on the LHO hardware injection machine h1hwinj1. To conform to security requirements, users cannot ssh into this machine using the common account; they must still use their own LIGO.ORG account. Once on the machine, they can su to the hinj account. The plan is to run all injections (continuous and transient) using the hinj account.

H1 CDS
david.barker@LIGO.ORG - posted 17:41, Wednesday 10 June 2015 (19056)
GRB EPICS alarm

Dave Barker. WP5269

I have put together a quick EPICS alarm handler for Gamma Ray Bursts and Supernovae (GRB/SN) which runs on the operator alarm machine. It is called grbsn.alhConfig (in svn under cds/h1/alarmfiles).

The GRB alert system (which runs on h1fescript0) polls the GraceDB database every 10 seconds looking for External events (Gamma Ray Bursts and Supernovae). If an event is detected, its information is written to EPICS records which are hosted on the FE model h1calcs.

To put together a quick alarm system for ER7, I am using the alarm fields of the H1:CAL-INJ_EXTTRIG_ALERT_TIME record which records the GPS time of the trigger. By setting the HIGH alarm field to the current GPS time plus one second, when the next event is recorded this PV goes into the MAJOR alarm state.
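The mechanism is just EPICS limit alarming on a GPS-time record: the HIGH field sits one second above the last handled event, so any newer event trips a MAJOR alarm, and the [P] button re-arms by moving HIGH past it. Sketched logic (hypothetical function names, not the actual alarm-handler code):

```python
def alarm_severity(value, high):
    # EPICS HIGH-limit alarming: MAJOR once the record value exceeds HIGH
    return "MAJOR" if value > high else "NO_ALARM"

def rearm(last_event_gps):
    # [P]-button behavior: set HIGH to the current event's GPS time plus one second
    return last_event_gps + 1.0

high = rearm(1118063333)                       # handle the current event
print(alarm_severity(1118063333, high))        # NO_ALARM: already handled
print(alarm_severity(1118063333 + 600, high))  # MAJOR: a newer event arrived
```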

The grbsn.alhConfig file provides the operator with two buttons:

[G] button opens the guidance text. It refers to the alog entry  https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19037

[P] button runs a script which resets the alarm level to the current event, turning off the current alarm

When a GRB/SN alarm is raised the operator should:

1. acknowledge the alarm to stop it beeping/flashing

2. press the [P] button to turn off the current alarm and prime the system for the next event

3. read the guidance to see if any further action is needed

Information on the current event can be obtained by running the script grb_latest_event_info.py. This gives you the event time as local time, so you can easily determine when the one hour stand-down time will expire.
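The GPS-to-wall-clock conversion behind a script like grb_latest_event_info.py is the GPS epoch (1980-01-06 UTC) plus the GPS seconds, minus the GPS-UTC leap-second offset (16 s as of June 2015); adding one hour gives the stand-down expiry. A sketch (the script's actual implementation may differ):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC = 16  # leap-second offset valid for June 2015

def gps_to_utc(gps_seconds):
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC)

event = gps_to_utc(1118063333)             # test injection time from alog 19064
standdown_end = event + timedelta(hours=1)
print(event.isoformat(), "->", standdown_end.isoformat())
```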

Images attached to this report
H1 IOO
daniel.sigg@LIGO.ORG - posted 17:27, Wednesday 10 June 2015 (19055)
Strange IO alignment shift

(baffled control room crew)

After the most recent lock loss the IMC wasn't relocking and looked completely misaligned. The screenshot shows about 20 minutes of the previous lock, 30 minutes of confusion and 10 minutes on a new lock. The only dof which shows a significant change is the input pointing. Strange.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:34, Wednesday 10 June 2015 (19054)
Ops Day Shift Summary
Observation Bit: Commissioning 

08:00 – Working on relocking
08:05 – Add 150ml water to PSL crystal chiller
08:11 – IFO Locked at LSC_FF
08:13 – Intent bit set to Undisturbed 
08:30 – Beam tube cleaning team working on X-Arm
09:11 – Set intent bit to Commissioning – Keita running charging measurements at both end stations
09:33 – LLO ops reported a power hit this AM. Working on recovery
09:49 – Lockloss – Commissioning activities 
11:52 – Jim Blunt on site to meet with Richard
11:55 Dan – Making delivery to VPW with PU truck
11:59 – IFO Locked at LSC_FF
12:00 Keita – Running charging measurements – Intent bit Commissioning 
12:21 – Beam tube cleaning crew breaking for lunch
13:08 Daniel – Moving charge test rig from End-Y to End-X
13:45 – Beam tube cleaning crew working on X-Arm
14:52 Daniel – Back from End stations
15:00 Keita – Finished with charge measurements
15:03 - IFO locked at LSC_FF
15:05 - Set Intent bit to Undisturbed 
15:15 Bubba – Going to Mid-X to get some tubing
15:17 - Lockloss – Possibly due to beam tube cleaning
15:42 - Beam tube cleaning crew finished for the day
16:29 - Turn over to Jim W. – Still working on relocking
H1 SUS (DetChar, ISC, SUS, SYS)
betsy.weaver@LIGO.ORG - posted 11:20, Wednesday 10 June 2015 - last comment - 16:10, Wednesday 10 June 2015(19049)
ESD/charge measurements have ~commenced - status report on IFO

We attempted to set up the IFO for some end station ESD measurements around 10am.  During some of the switch settings, the IFO lost lock.  Relocking is ongoing with help from the commissioners.  A suite of ESD/charge measurements has become increasingly important, hence time has been allotted today.

Comments related to this report
kiwamu.izumi@LIGO.ORG - 12:45, Wednesday 10 June 2015 (19050)

I was asked by Keita to update the ETMX actuator calibration path for their charge measurement in full lock. I updated the foton filter of CAL-CS and a few settings associated with the simulated ETMX actuators in CAL-CS. The SDF is updated accordingly. We are now temporarily using the simulated ETMX in CAL-CS by switching the output matrix as of around 2015-06-10 19:00 UTC. We did not touch the ETMY actuator path at all.

 

* * * * * *

The changes I made are:

  • H1:CAL-CS_DARM_ANALOG_ETMX_L3_GAIN
    • 1.42 -> 1.0
  • H1:CAL-CS_DARM_ANALOG_ETMX_L1_GAIN
    • -1.0 -> 1.0 (this must be a typo)
  • H1:CAL-CS_DARM_FE_ETMX_L1_LOCK_OUTSW_L
    • OFF -> ON
  • H1:CAL-CS_DARM_FE_ETMX_L3_LOCK_OUTSW_L
    • OFF -> ON
  • Changed the DC gain of FM1 in H1:CAL-CS_DARM_ANALOG_ETMX_L3
    • DC gain is set to 3.3491e-13 [m/cnts] (alog 18756)

 

betsy.weaver@LIGO.ORG - 13:18, Wednesday 10 June 2015 (19051)

With Keita's guidance, we calibrated the TR QPDs to the ETM OPLEVs to get a calibration constant which will be needed to interpret the charge data being taken.  We found the calibration by driving the L2 stage at 6Hz and watching the QPDs and OPLEVs.  Note, the Y-arm QPD TR B was not as well centered as the X-arm TR A and B and the Y-arm TR A.

Here are the calibration numbers:

 

                  OPLEV        QPD A        QPD B        OL/QPD Calib Const
                  [uRad/ct]    [ct/ct]      [ct/ct]      [uRad/ct]
ETMX PIT   AMP    3.26654e-8   2.30053e-8   2.31317e-8   1.419907587
           PHASE  -12          169          169

ETMX YAW   AMP    4.1988e-8    3.08716e-8   2.88229e-8   1.360084997
           PHASE  -12          169          169

ETMY PIT   AMP    3.40891e-8   2.30263e-8   5.51346e-9   1.480441929
           PHASE  -31          167          165

ETMY YAW   AMP    4.07971e-8   3.26463e-8   1.5472e-8    1.249669947
           PHASE  -31          169          169
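The calibration constant in the last column appears to be simply the OPLEV amplitude divided by the QPD A amplitude; a quick check against the ETMX PIT row:

```python
oplev_amp = 3.26654e-8   # uRad/ct, ETMX PIT
qpd_a_amp = 2.30053e-8   # ct/ct, ETMX PIT
calib_const = oplev_amp / qpd_a_amp
print(f"{calib_const:.9f}")  # matches the tabulated 1.419907587
```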
betsy.weaver@LIGO.ORG - 16:10, Wednesday 10 June 2015 (19053)

Keita has completed the ESD/RH charge measurements for ETMx and ETMy, however he is still chewing on the numbers.  He expects to post something later today.

H1 CDS (SUS)
david.barker@LIGO.ORG - posted 14:28, Tuesday 09 June 2015 - last comment - 15:41, Wednesday 10 June 2015(19030)
status of 18bit DAC calibrations

We are seeing two issues with the autocal of the 18bit DAC cards (used almost exclusively by suspension models). The first is a failure of the autocal; the second is the autocal succeeding but taking longer than normal.

There are three calibration completion times: ~5.1 s for an unmodified DAC, ~5.3 s for a modified DAC, and ~6.5 s for a modified long-running DAC

h1sush2b, h1sush34, h1susex, h1susey all have no DAC issues

h1sush2a: the third DAC is taking 6.5S to complete. This DAC is shared between PRM and PR3 models. First two channels are PRM M1 Right and Side. Last six channels are PR3 M1 T1,T2,T3,Left,Right,Side.

h1sush56: this has unmodified cards. 4th DAC is failing autocal

h1susb123: This one is strange:

On first autocal run after a computer reboot: 7th DAC failed autocal

On first restart of all models: 1st DAC failed autocal, 7th DAC succeeded

On second restart of all models: 1st DAC failed autocal, all others good (consistent restart behavior)

In all cases, 5th DAC card is running long for autocal (6.57S).

In the current situation with the 7th card now succeeding and the 1st DAC failing, the 1st DAC is driving ITMY M0 (F1,F2,F3,LF,RT,SD) and M0 (LF,RT)

Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:41, Wednesday 10 June 2015 (19052)CDS, DetChar, ISC, SUS
I've been asked to translate / expand on the above entry, and I figure my reply was worth just commenting on the aLOG itself. If one person asks, many more are just as confused.

---------
We know that DetChar has seen Major Carry Transition (or "DAC", digital-to-analog converter) glitches in some of the detector's interferometric signals, which have been pin-pointed to a few select stages of a few select suspensions (see e.g. LHO aLOGs 18983, 18938, or 18739).

To combat the issue, yesterday, we restarted all the front-end processes (including the Input Output Processor [IOP] process) on the four corner station SUS computers:
h1sush2a (controlling MC1, MC3, PRM and PR3 all in HAM2)
h1sush2b (controlling IMs 1 through 4)
h1sush34 (controlling MC2 and PR2 in HAM3 and SR2 in HAM4)
h1susb123 (controlling ITMY, BS, and ITMX in BSCs 1, 2 and 3 respectively)

Restarting the IOP process for any front end that is coupled to an I/O chassis running 18-bit DAC cards performs the auto-calibration (autocal) routine on those 18-bit DACs, recalibrating the voltage bias between the two stages (2-bit + 16-bit) of the 18-bit DAC to reduce Major Carry transition glitching. When the front-end computer is rebooted or power-cycled, the IOP process is started first, and therefore runs the auto-calibration routine as well. After autocalibration is complete, the user models for each suspension are started. 

Note that the other 3 SUS "control" computers,
h1sush56 (controlling SRM and SR3 in HAM5 and the OMC in HAM6 )
h1susex (controlling ETMX and TMSX)
h1susey (controlling ETMY and TMSY)
were NOT restarted yesterday, but have been restarted several times over recent history.

Each of these front-end I/O chassis contains many DAC cards, because it (typically) controls many suspensions (as described above), hence the distinction between the card numbers in each front end. Said differently, each DAC card has 8 channels -- because of initial attempts to save money and conserve space, the above-mentioned corner station computers / I/O chassis have some DAC cards that control OSEMs of two different suspensions. There's a map of which DAC card controls which OSEM in the wiring diagrams; there is a diagram for each of the above-mentioned computers / I/O chassis; each diagram, named after the chambers the I/O chassis controls, can be found from the suspension electronics drawing tree, E1100337.

Recall that we have recently swapped out *most* of the suspensions' 18-bit DAC cards for a "new" 18-bit DAC card with upgraded EPROMs (see LHO aLOGs 18557 and 18503, ECR E1500217, and Integration Issue 945). This is what Dave means when he references "modified" vs. "unmodified" DAC cards. All DAC cards in h1sush56 remain unmodifed, where as the other corner-station DAC cards in all other SUS I/O chassis have all been upgraded.

Dave also describes that 18-bit DACs are used "almost exclusively by suspension models." The PCAL and PEM DACs are also 18-bit DACs. Every other fast DAC (used by TCS, SEI, PSL, ISC, etc.) is a 16-bit DAC, and these have not been found to have any issues with major carry transition glitching. Note that the tip-tilt suspensions (the OMs and RMs, collectively abbreviated as the HTTS for HAM Tip-Tilt Suspensions) were purchased under ISC dollars, and ISC made the decision to do this (as with many other things) differently: they have 16-bit DACs. 

After we've restarted the IOP front-end process, we track its start-up log file to ensure that the auto-calibration was successfully completed. We've found that there are two "failure" modes to this auto-calibration process that get reported in these log files. 
(1) The IOP process log reports that the auto-calibration process has failed.
(2) The IOP process log reports that the auto-calibration process took longer than is typical. 
We don't know what either of these two failure modes means in practice. This was brought up on the CDS hardware call today, but no one had any idea. I've pushed on the action item for Rolf & Co. to follow up with the vendor to try to figure out what this means.

As for (2), 
- the typical time for an IOP process to complete the auto-calibration of one unmodified DAC card is 5.1 [sec]. 
- the typical time for a modified DAC card with the new EEPROM is 5.3 [sec].
- the atypical "errant" time appears to be 6.5 [sec]. 
But, again, since we have no idea what "running long" means, our only reason to call this a "failure mode" is that it is atypical.
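A trivial classifier for the completion times quoted above (the thresholds are assumptions based on the three typical values; outright autocal failure, mode (1), is reported separately in the IOP start-up log and isn't a duration at all):

```python
def classify_autocal(duration_s, modified):
    """Flag an 18-bit DAC autocal run as nominal or errant by its duration."""
    nominal = 5.3 if modified else 5.1   # typical completion times [s]
    if abs(duration_s - nominal) < 0.2:
        return "nominal"
    return "running long"                # the atypical ~6.5 s case, failure mode (2)

print(classify_autocal(5.1, modified=False))  # nominal
print(classify_autocal(6.57, modified=True))  # running long
```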

So re-writing Dave's above status (which is a combined status that is the results of yesterday's IOP model restarts and other previous IOP model restarts) with the jargon explained a little differently and in more detail:
- h1sush2b, h1sush34, h1susex, h1susey all have no DAC auto-calibration issues. All DAC cards in these front ends / I/O chassis report a successful auto-calibration routine.
- h1sush2a: of all the DAC cards in this I/O chassis, only the 3rd DAC (which controls some of the TOP (M1) OSEMs of PRM and some of the TOP (M1) OSEMs of PR3) is suffering from failure mode (2).
- h1sush56: the 4th DAC is suffering from failure mode (1).
- h1susb123: this computer's IOP process was restarted several times. 
	- Upon the first autocal, which was performed as a result of rebooting the computer, 
		- the 1st and 7th DAC card suffered from failure mode (1), 
		- the 5th DAC card suffered from failure mode (2),
		- all other DAC cards reported success.
	- Upon the second autocal, which was performed as a result of restarting the IOP process, 
		- the 1st DAC card suffered from failure mode (1), 
		- the 5th DAC card suffered from failure mode (2),
		- all other DAC cards reported success.
	- Upon the third autocal, which was performed as a result of restarting the IOP process again, 
		- the 1st DAC card suffered from failure mode (1), 
		- the 5th DAC card suffered from failure mode (2),
		- all other DAC cards reported success.
	The first DAC card is shared between the two TOP, M0 and R0 masses of ITMY. (note Dave's typo in the initial statement)

For previous assessments, check out LHO aLOG 18584.