H1 General
edmond.merilh@LIGO.ORG - posted 15:59, Tuesday 21 April 2015 (17968)
Daily Ops Summary

07:00 Jeff K begins RCG upgrade; on-table External went closed just after; ISS having some trouble. Sigg took control of the FE and is working on the ISS issue

08:15 ISS fixed and shutter reset

08:20 Hugh to LVEA to check HEPI accumulators (pressure and charging)

08:23 Elli & Nutsi out to HWS table by HAM4

08:25 Started un-tripping all suspensions except for BS pending the M2 stage coil driver swap.

08:27 Fil swapping the BS M2 coil driver

08:32 Corey to MY

08:40 Jim Batch restarting DAQ

08:43 Fil reports coil driver swapped. Also, seismometer in beer garden is connected.

08:48 Rick, Jason and Kiwamu headed towards PSL to fix periscope PZT

08:50 Kyle into LVEA and then to Y VEA

08:51 Elli and Nutsi out of LVEA

09:00 Brought end stations to offline and safe (SEI/SUS)

09:11 Jim and Corey to EX to restart damper on BRS

09:13 Betsy out to LVEA to look for stuff

09:25 McCarthy out to LVEA

09:31 Greg to TCSY table

09:40 ISS FE model re-started

09:44 McCarthy out of LVEA

09:45 Corey out to LVEA/Jim back from EX

09:53 Hugh out of LVEA

09:54 Hugh out to Ends to check HEPI actuators

09:55 Corey out of LVEA

09:56 RCG upgrade is COMPLETE! .....

10:00 Elli and Nutsi to end stations to put plates on cameras

10:12 Doug to EX to check opLev - did a tiny bit of alignment

10:29 Cris and Karen out of LVEA. Karen to EY

10:32 Elli and Nutsi back from ends

10:38 Betsy out to LVEA

10:41 Dick out to LVEA to ISC racks 1/2/4

10:46 Kyle back from end stations

10:53 Gerardo making deliveries to end stations

10:54 2nd LN2 delivery arrived. I don't remember the first one getting here but the alarms tell the tale.

11:01 Cris to EX

11:15 Port-O-Let maintenance on site

11:15 Hugh back from EY

11:20 original BS M2 coil driver returned to SUS C5.

11:36 Karen leaving EY

12:04 Jason/Rick/Kiwamu out of PSL

12:07 Greg, Elli & Nutsi out of LVEA

12:21 Dick out of LVEA

12:24 Fil and Hugh out to LVEA to press the centering button on the new seismometer

13:23 Fil and Andres @ MY

13:33 Gerardo out to LVEA to retrieve a PC by the X manifold.

13:35 Hugh out of LVEA

13:50 Gerardo out of LVEA

14:01 Kyle to Mid Stations

14:37 Hugh into CER

15:14 Fil and co. back from Mid station

15:15 Kyle back from Mid Stations

H1 CDS (SUS)
betsy.weaver@LIGO.ORG - posted 14:53, Tuesday 21 April 2015 - last comment - 16:09, Tuesday 21 April 2015(17983)
SDF status after upgrade

While clearing out many settings diffs on the SDF after the RCG 2.9.1 upgrade today, Kissel and I discovered a few issues; the worst offender is the first one:

-  OPTICALIGN values are not being monitored on SDF since they constantly change.  However, after reboots, these values are read in via a burt restore from the SAFE.SNAP file.  Often these files are more than a day old (in many cases weeks old) and therefore restore an old IFO alignment.  We need a better way to save local IFO alignments in the SAFE.SNAP now that we have rerouted how we transition suspensions from aligned to misaligned.

-  Precision of the SDF writing feature is different from the precision of the reading feature, so diffs do not clear from the diff list when you attempt to ACCEPT them (see the illustration after this list).

-  Usage of the CONFIRM and LOAD TABLE buttons is still confusing

-  Settings on the DIFFS screen represent the switch settings via reading SWSTAT; however, the MON list still shows the SW1 and SW2 settings.  This means the TOTAL DIFFS and NOT MONITORED counts never add up to the same number: 1 SW line item on the DIFF screen turns into 2 line items when pushed to the NOT MONITORED screen.

-  The ramp matrix (specifically looking at LSC) is constantly reporting changes.  The time stamp keeps updating even though the value is not actually changing.  Haven't looked at other ramp matrices yet.
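As an illustration of the write/read precision mismatch above, a minimal Matlab sketch (the '%.6g' format is hypothetical; the actual precisions used by SDF would need to be checked in the code):

% If the table is written with fewer digits than the live value is read
% back with, the two never compare equal, so the diff can never clear.
x      = 0.123456789012;                  % live EPICS setting (full double)
x_file = str2double(sprintf('%.6g', x));  % value as round-tripped through the file
x_file == x                               % logical 0: a diff that ACCEPT cannot clear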

Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:09, Tuesday 21 April 2015 (17985)
I've done a little bit more clean up -- this user interface is fantastic once you get to know how to do the operations you want. 

For example, I wanted to 
    - Get rid of the "errant file" red messages on the corner station BSC-ISI tables
    - Reduce the channels "not found" and "not initialized" to zero on the HAM-ISI tables
both of which require one to write (i.e. confirm) something and then force the front end to re-read that newly written table (i.e. load) to clear the errors. So, I've 
    - selected to monitor the ISI's Master Switches (which instantly become a diff, because the "safe" state is still with the master switch OFF, and we're currently using the ISIs), 
    - confirmed the change, (on the main table screen)
    - loaded the table, (on the "SDF RESTORE" screen)
    - selected to unmonitor the master switch, 
    - and confirmed. (on the main table).

Very nice!
H1 TCS (TCS)
greg.grabeel@LIGO.ORG - posted 14:22, Tuesday 21 April 2015 (17980)
TCS Y CO2 Table
The 50W laser is now running on the TCS Y table. Unfortunately, issues with the HWS kept us from being able to start trying to align everything. The projection beam is currently being dumped. We also took the opportunity to install the DC power switch; currently the Y arm has the switch installed while the X arm does NOT have one installed. While setting up, the IR fault kept triggering; if the laser suddenly shuts off, that would be my first suspect.
Images attached to this report
H1 CDS (CDS, DAQ, DetChar, IOO, ISC, PEM, PSL, SEI, SUS, TCS)
jeffrey.kissel@LIGO.ORG - posted 13:19, Tuesday 21 April 2015 - last comment - 14:48, Tuesday 21 April 2015(17979)
H1 IFO Upgraded to RCG 2.9.1
J. Kissel, D. Sigg, D. Barker, J. Batch, B. Weaver, E. Merilh

We've upgraded all front-end models to RCG 2.9.1. Recovery of the IFO is on-going, given that we had our usual chaotic array of activities, but all signs point towards success. The upgrade was a little bit harder for some models / front-ends than others, but we now have a completely green CDS overview screen (except for the saturation indicators that are always present on certain ISC models, and the currently-railing STS2 plugged into STS B causing all corner station seismic models to saturate -- more on that later). Details of the hiccups below.

Details
---------
Problem children:
- PSL ISS front-end model
This guy was the nastiest. The model somehow did not have a safe.snap file or softlink in the
/opt/rtcds/lho/h1/target/h1psliss/h1pslissepics/burt/
directory. We didn't notice this until much later, but at first this caused the model to simply not start because it couldn't find (what used to be) "the burt button" EPICS record that gave the OK status to get started. Daniel and I tried blindly restarting the model, and recompiling-reinstalling-and-restarting the model, with no success. Finally, Daniel figured out that if he burt restored *while* the front end was coming up, he could get the model to start. Later, Betsy was making the effort to turn on the PSL ISS model's SDF system; she created a table, but could not load it. After about 30 [sec] of trying, the front end model core dumped, seg faulted, and just died (only this user model crashed; every other model on the front end survived). It was only after investigating this that we found out about the missing target-area safe.snap. The userapps repo had a safe.snap, so we softlinked the target area to the userapps repo, and restarted the front-end model.
All has been well since. 
We don't understand how a front end model can exist without *something* in the target area called "safe.snap".

- The PCAL X and Y front-end model
Because it appears to be related (both in symptom and potentially in responsible parties), I mention this here. Last night, when I was capturing all settings in prep for today's model restarts, I captured new safe.snaps for the pcal front end models, because they were not yet under SDF control. In doing so, I immediately noticed that these models *also* didn't have safe.snap files in the target area. I didn't think much of it at the time, because I know the history of the pcal models, BUT now that I see a similar problem with the psliss model, I worry that the problem is systematic. Will investigate further.

- The end-station ISC, SEI, and SUS models
We had only planned to restart the end stations' ISC and PEM front ends, because the SUS and SEI had been upgraded last week. However, when we restarted the ISC models, we found lots of continuous IPC errors between the ISC models and SEI and SUS. We think we traced this down to the clearing of the entire IFO's IPC file yesterday. Dave and Jim had thought they'd recompiled and reinstalled the SEI and SUS models, which should have populated the IPC file *without* restarting the models, but this didn't appear to be successful. So, we ended up recompiling, reinstalling, and restarting the SEI and SUS models in addition to every other model at the end station (again). All errors are clear now, as mentioned above.

Also:
As mentioned by Daniel (LHO aLOG 17969), we had forgotten to update some of the ODC library parts before getting started with the recompiling yesterday, so only some models received the upgrade to their ODC. There also seem to be a few bugs in the updated version, from what we can see. Will follow up with the ODC team. However, since we've run out of time, we'll include the remainder of these in the model recompiling already planned for next Tuesday.
Models that need restarting to receive the ODC update:
- All corner station SEI and SUS models
- Corner station TCS model
- Corner station PEM
- Corner LSC, ASC, and OMC models
- Corner Station CAL model
(Note, all SUS AUX models don't have any ODC in them, so they do not need the update).
Comments related to this report
james.batch@LIGO.ORG - 14:48, Tuesday 21 April 2015 (17982)DAQ
The DAQ system was also updated to 2.9.1, h1fw0, h1fw1, h1nds0, h1nds1, h1dc0, h1broadcast0.  The NDS1 protocol is now reported as 12.2, so a few control room tools will need to be updated to handle the protocol version change.  They should function properly as they are for now.  

There were issues with duplicate channel names when restarting the data concentrator.  This was caused by the ODC part in h1calex and h1caley being named simply ODC at the top level of the model instead of EX_ODC and EY_ODC.  This delayed the restart of the data concentrator by several minutes.

The h1asc model is running a specially modified awgtpman to allow more testpoints, as it was under RCG-2.9.
H1 TCS (TCS)
aidan.brooks@LIGO.ORG - posted 12:58, Tuesday 21 April 2015 (17977)
HWSY SLED has died. RIP.

Elli. Greg. Aidan. Nutsinee.

The HWSY SLED has reached the end of its useful lifespan. The power has decayed by about 90%, as shown in the attached plot of the last 6 months.

This is due to running it consistently at a high current. At present, there is a hardware maximum current limit on the SLED driver (https://dcc.ligo.org/LIGO-T1000662) which has been set to the maximum limit specified by the SLED supplier (it's also unique for every diode). I think we need to set this limit lower to increase the lifetime of these SLEDs - we always wanted to run them at roughly half the output power, but this control has been handled in software.

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 12:57, Tuesday 21 April 2015 (17978)
h1conlog-old decommissioned
I shut down h1conlog-old and removed its disk. It was hosting data from a previous version of Conlog. I copied SQL dumps of the databases to /ligo/lho/data/conlog/h1/backups/h1conlog-old. This computer will be transitioned to backup storage for the new version of Conlog.

This completes WP 5163.
H1 PSL (DetChar, IOO, PSL)
richard.savage@LIGO.ORG - posted 12:29, Tuesday 21 April 2015 - last comment - 16:49, Tuesday 21 April 2015(17976)
H1 IO/PSL Periscope PZT mirror swap

JasonO, KiwamuI, RickS (and RobertS, in spirit)

Today, we moved the PZT-controlled mirror from the top of the IO periscope down to the surface of the optical table and swapped it with a turning mirror that was on the table.  I.e. IO_MB_M6 (top of periscope) swapped with IO_MB_M4 (turning mirror immediately downstream of the thin-film polarizers).  Note that the PZT blocks the (weak) beam transmitted through the mirror, so we removed beam dump IO_MB_BD6.

We first installed an iris at the top mirror output, using a C-clamp to attach a temporary plate to the top of the periscope.

We removed the top mirror mounting plate, installed a temporary Ameristat skirt using cleanroom tape, and used a single-edge razor blade to remove some of the periscope damping material. This allowed the top plate to drop down to the required position.

We swapped the pitch actuator on the upper mirror mount to use a non-lockable actuator that doesn't interfere with the mounting plate.

We then aligned the two mirrors we swapped using the existing irises on the table in the path transmitted by the bottom periscope mirror, plus the iris we installed and the spot on the outside of the PSL enclosure that reflects from the HAM1 input port.

We were able to re-install the protective shield for the vertical path up the periscope in its original orientation.

We expect that RobertS will assess whether moving the PZT mirror off the top of the periscope, where noise is amplified by the periscope resonance, has reduced the noise it induces.

A few images are attached below.

Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 16:49, Tuesday 21 April 2015 (17990)IOO

Some comments from the point of view of IMC control.

  • After the pzt move, we were able to lock the IMC without an issue by just steering the top and bottom periscope mirrors by hand.
    • Though we had to remove the light pipe from the shutter box (the one attached to the HAM1 viewport) to coarsely check the pointing to the IMC at the beginning.
  • Once we locked IMC, we manually offloaded the pzt digital offsets to the mechanical knobs. As a result, now the digital offsets are less than 1000 cnts in both pitch and yaw.
  • Since we mounted the pzt mirror such that it preserves the pitch and yaw relations, we did not have to introduce a funny output matrix in the IMCASC digital control.
  • We could have tuned up the output matrix for DOF_3 so as to compensate the extra lever arm length from the pzt to MC1, but we left them unchanged.
    • They seem to be working OK anyway without any change.
  • The horizontal position of the existing beam seems to have shifted. I did not correct it for now.
    • According to IM4 TRANS, the beam shifted in YAW by 0.7 counts so that it is now at -0.7 cnts.
    • On the other hand, the pitch seems to have stayed put; IM4 TRANS PIT remained at -0.5 cnts.
    • This must have changed PRM spot position. We can steer one of IMs to correct it, if necessary.
Images attached to this comment
H1 SUS (CDS, DAQ, DetChar, ISC)
betsy.weaver@LIGO.ORG - posted 12:08, Tuesday 21 April 2015 - last comment - 16:30, Tuesday 21 April 2015(17975)
All SUS 18-bit DAC re-calibrated

After today's upgrade to RCG 2.9.1 we looked to see if the DAC AUTOCAL was successful using these procedural notes.  The last check was done on the BSCs only a few weeks ago (alog 17597).  During today's check, we found errors on h1susb123 again and also on h1sush56.  Both show that the AUTOCAL failed for 1 of their DACs.  As well, Kissel logged into LLO and found all AUTOCALs reported SUCCESS on all SUS front ends since their last computer restart.

The LHO errors were as follows:

controls@h1sush56 ~ 0$ dmesg | grep AUTOCAL
[   60.359217] h1iopsush56: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   65.510812] h1iopsush56: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   70.661410] h1iopsush56: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   75.813017] h1iopsush56: DAC AUTOCAL FAILED in 5134 milliseconds
[   80.963620] h1iopsush56: DAC AUTOCAL SUCCESS in 5134 milliseconds
[8443363.521348] h1iopsush56: DAC AUTOCAL SUCCESS in 5134 milliseconds
[8443368.669944] h1iopsush56: DAC AUTOCAL SUCCESS in 5133 milliseconds
[8443373.818544] h1iopsush56: DAC AUTOCAL SUCCESS in 5133 milliseconds
[8443378.967145] h1iopsush56: DAC AUTOCAL FAILED in 5133 milliseconds
[8443384.115739] h1iopsush56: DAC AUTOCAL SUCCESS in 5133 milliseconds

The first of the 2 above calibrations was the reboot 97 days ago, on Jan 14, 2015 when we upgraded to RCG 2.9.

________________________________________________________________

controls@h1susb123 ~ 0$ dmesg | grep AUTOCAL
[   61.101850] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   66.252460] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   71.833569] h1iopsusb123: DAC AUTOCAL SUCCESS in 5133 milliseconds
[   77.416848] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   82.567454] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   87.718046] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   92.869668] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[   98.021279] h1iopsusb123: DAC AUTOCAL FAILED in 5134 milliseconds
[6643697.827654] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[6643702.976255] h1iopsusb123: DAC AUTOCAL SUCCESS in 5133 milliseconds
[6643708.553386] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[6643714.130498] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[6643719.278911] h1iopsusb123: DAC AUTOCAL SUCCESS in 5133 milliseconds
[6643724.427687] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[6643729.576295] h1iopsusb123: DAC AUTOCAL SUCCESS in 5133 milliseconds
[6643734.724703] h1iopsusb123: DAC AUTOCAL FAILED in 5133 milliseconds
[8443632.920341] h1iopsusb123: DAC AUTOCAL SUCCESS in 5133 milliseconds
[8443638.069031] h1iopsusb123: DAC AUTOCAL SUCCESS in 5133 milliseconds
[8443643.646215] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[8443649.223336] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[8443654.371885] h1iopsusb123: DAC AUTOCAL SUCCESS in 5133 milliseconds
[8443659.520452] h1iopsusb123: DAC AUTOCAL SUCCESS in 5134 milliseconds
[8443664.669135] h1iopsusb123: DAC AUTOCAL SUCCESS in 5133 milliseconds
[8443669.817631] h1iopsusb123: DAC AUTOCAL FAILED in 5133 milliseconds

 

The first of the 3 above calibrations was the reboot 97 days ago, on Jan 14, 2015 when we upgraded to RCG 2.9, then a restart/calibration by Kissel April 1 (mentioned above), then today's RCG 2.9.1 upgrade.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:30, Tuesday 21 April 2015 (17989)DetChar
J. Kissel, B. Weaver, D. Barker, R. McCarthy, and J. Batch

We've traced down which suspensions are *using* these DAC cards that always fail their calibration: susb123's is used by the ITM ESD, and sush56's is used by the bottom stage of SR3. Both are currently not used for any local or global control, so we don't *think* this should cause any glitches. 

@DetChar -- can you confirm this? I'm worried that when the DAC *noise* crosses zero, there are still glitches.

Further -- Dave has confirmed that there are 18-bit DACs on the DAQ Test stand which fail the calibration, and those cards specifically appear to be of a different generation of board than the ones that pass the calibration regularly. We suspect that this is the case with the H1 DAC cards as well. However, because they're not used by anything, we figure we'll wait until we *have* to swap out the SUS DACs for the newer-better, fixed-up EEPROM version of the board to investigate further.

That's the plan so far. Stay tuned!
H1 SEI
jim.warner@LIGO.ORG - posted 11:34, Tuesday 21 April 2015 (17974)
BRS software restarted, damper, too

A couple of weeks ago Jeff and I turned off the damper on the BRS because high winds were activating the damper and sending impulses into the BRS. This morning I took Corey and Patrick down to show them how to stop/start the software and reposition the masses. The masses were out of position by ~45 degrees, so I recentered them. We then went into the rack room, restarted the software, and re-enabled the damper. Seems to be running fine now.

H1 TCS (DetChar, PEM, TCS)
andrew.lundgren@LIGO.ORG - posted 11:24, Tuesday 21 April 2015 - last comment - 03:53, Thursday 23 April 2015(17973)
Request to turn off HWS cameras in CS at a known time
Detchar would like to request that the HWS cameras in the center building be turned off for a few minutes at a known time. We're trying to track down some glitches in PEM and ISI sensors that happen every second, and Robert suspects the HWS. Just a few minutes with them off, and then on again, would be fine; we don't need the IFO to be in any particular state, as long as the ISIs are running fine. We would need the precise times (UTC or GPS preferred), as the channels that record the camera state don't seem trustworthy (alog).
Comments related to this report
eleanor.king@LIGO.ORG - 18:39, Tuesday 21 April 2015 (17995)

This afternoon I turned all the HWS off and I will leave them off all night (both of the corner station HWS were on prior to this).

andrew.lundgren@LIGO.ORG - 00:24, Wednesday 22 April 2015 (17997)DetChar, SUS, TCS
It seems like the HWS was in fact the culprit. The HWS was turned off at Apr 21 20:46:48 UTC, according to TCS-ITMX_HWS_DALSACAMERASWITCH. I checked the BLND_Z of the GS13s on BS and ITMX, and the table 2 PSL accelerometer. All three had glitches every second before the HWS was turned off. They all continued to glitch for 11 more seconds (until the end of the minute), and then all stopped at the exact same time.

Attached is a spectrogram of the ITMX GS13. It's hard to see the glitches in the PSL by spectrogram or even Omega scan, but they're very apparent in the Omicron triggers.
Images attached to this comment
joshua.smith@LIGO.ORG - 07:41, Wednesday 22 April 2015 (17998)DetChar, SEI

Here are three better spectrograms showing the HWS transitioning off and the loud once-per-second glitches going away in the ISI-*_ST2_BLND_Z_GS13_CUT_IN1 channels. These plots are made with https://ldvw.ligo.caltech.edu using 0.25 seconds per FFT and normalization turned on. Conclusions are the same as Andy's post above.

Images attached to this comment
joshua.smith@LIGO.ORG - 08:45, Wednesday 22 April 2015 (18000)DetChar, SEI

David Shoemaker asked the good question: do these glitches even show up in DARM? Well, that's hard to say. There are once-per-second glitches that show up in the ISI channels, and once-per-second glitches that show up in DARM. We don't know if they have the same cause. Figures are: 1. DARM once-per-second glitches; 2, 3. BS and ITMX; 4. overlay of all, showing that the glitches in DARM are just slightly ahead in time (in this 0.25 sec/FFT view, unless there is some sample-rate timing bias).

In order to test whether they are both caused by the HWS it would be really useful if folks on site could turn the HWS on, then off, for a minute or so in each configuration during a low-noise lock and record the UTC times of those states. 

Images attached to this comment
Non-image files attached to this comment
sheila.dwyer@LIGO.ORG - 03:53, Thursday 23 April 2015 (18019)

We got to a low-noise state, though not as low-noise as our best, with the spectrum about a factor of 10 worse at around 90 Hz than our best reference.  We were in low noise with the HWS off from 10:42:30 UTC to 10:47:30 UTC. I turned the cameras on according to Elli's instructions, and we left the cameras on from 10:48:20 UTC to 10:53:40 UTC.

H1 GRD
jameson.rollins@LIGO.ORG - posted 11:14, Tuesday 21 April 2015 (17972)
test of new guardian version to fix SPM DIFF flickering issue

Betsy, Jamie

summary

We just tested a new guardian core version (r1438, with cdsutils r474) on SUS_SRM, to see if it gets rid of the "SPM DIFF flickering" issue that we've been seeing.  The new version does appear to fix the issue.  The test is now complete and the SUS_SRM guardian has been reverted back to the current nominal guardian core installation version (r1390, cdsutils r443).

backstory

We've been seeing a minor guardian issue since the last guardian upgrade with guardian nodes that have set point monitoring (SPM) enabled (ca_monitor = True).

When SPM is enabled, and set point differences are detected, the GRD SPM_CHANGED channel indicates the number of changed set points.  The current problem is that in the presence of set point differences, the node SPM is seemingly "flickering" between the "set point changes detected" state and the default "all clear" state.  This causes the SPM indicators on the GRD screens to flicker.

I was not able to reproduce this issue locally, but I reworked and cleaned up the SPM code path in the guardian core in the hopes that it would make the issue go away.  The issue was not seen in the new version recently deployed at LLO, so the hope was that the issue was resolved.  This test was to see if the new release fixes the issue at LHO, which it does appear to do.

I will put in a change request for an upgrade to this latest version for H1.

H1 CDS (CDS, DAQ, TCS)
andrew.lundgren@LIGO.ORG - posted 09:53, Tuesday 21 April 2015 - last comment - 09:24, Friday 24 April 2015(17971)
Minute trends, second trends, and raw data disagree on when HWS camera was off
Andy, Duncan

When looking for times when the HWS camera was on or off, I found that the minute trends indicated that it was off on Apr 18 6:30 UTC for ~27 minutes. But the second trends indicate that it was turned off 20 minutes later than that (and back on at the same time). The raw data (sampled at 16 Hz) indicates that the camera was never turned off.

This was originally found using data over NDS2, but Duncan has confirmed by using lalframe to read the frames directly. I've attached a plot below. The channels are H1:TCS-ITM{X,Y}_HWS_DALSACAMERASWITCH.
Images attached to this report
Comments related to this report
shivaraj.kandhasamy@LIGO.ORG - 14:39, Tuesday 21 April 2015 (17981)DAQ, DetChar

I was able to successfully run it on the Caltech (CIT) cluster using a Matlab code, i.e., the raw, minute, and second trends agree there. The Matlab code uses ligo_data_find. But if I run the same code on the Hanford cluster it produces the results Andy and Duncan saw, i.e., the trends disagree. So there seems to be a difference between the trend frames at these two locations. I have attached the Matlab code here in case someone wants to test it.
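For reference, the consistency check itself is simple once the data are in hand. A minimal Matlab sketch (hypothetical variable names, with fake data standing in for the fetched channels):

% Recompute the minute trend from raw 16 Hz data and compare it to the
% .mean minute trend read from the frames; they should agree sample for sample.
raw     = randn(16*60*10, 1);                  % ten minutes of stand-in raw data
m_recom = mean(reshape(raw, 16*60, []), 1).';  % one average per minute
% max(abs(m_recom - m_trend)) should be ~0 when the frames are self-consistent,
% where m_trend holds the fetched .mean minute-trend samples.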

Non-image files attached to this comment
gregory.mendell@LIGO.ORG - 14:32, Thursday 23 April 2015 (18029)

This is because the trend data from the two CDS framewriters can disagree. This happens if a framewriter restarts during the period covered by the trend file, and the averages from each framewriter are computed using a different number of values. These differences only happen with the trend data. See below for the details.

Note that at LHO, LDAS is using the CDS fw1 framewriter as the primary source of the scratch trends (saved at LHO for the past month) and the CDS fw0 framewriter as the primary source of the archive trends (copied to CIT and saved permanently at LHO and CIT).

If a framewriter goes down, it will still write out the trend data based on what data it has since it restarted.

Thus you can get trend frames that contain data averages for only part of the time period covered by the file.

For the time given in this alog, the trend files under /archive (from framewriter-0) and /scratch (from framewriter-1) differ in size:

$ ls -l /archive/frames/.../H-H1_M-1113372000-3600.gwf
-r--r--r--   1 ldas     ldas     322385896 Apr 18 00:27 /archive/frames/.../H-H1_M-1113372000-3600.gwf

$ ls -l /scratch/frames/.../H-H1_M-1113372000-3600.gwf
-r--r--r--   1 ldas     ldas     310156193 Apr 18 00:46 /scratch/frames/.../H-H1_M-1113372000-3600.gwf

Note that both files pass FrCheck (but have different checksum) and contain valid data according to framecpp_verify (e.g., run with the --verbose --data-valid options).

However, if I dump out the data for one of the channels in question, I get:

$ FrDump -i /archive/frames/.../H-H1_M-1113372000-3600.gwf -t H1:TCS-ITMX_HWS_DALSACAMERASWITCH.mean -d 5 | grep "0:"
     0:           1           1           1           1           1
      1           1           1           1           1
    10:           1           1           1           1           1
      1           1           1           1           1
    20:           1           1           1           1           1
      1           1           1           1           1
    30:           1           1           1           1           1
      1           1           1           1           1
    40:           1           1           1           1           1
      1           1           1           1           1
    50:           1           1           1           1           1
      1           1           1           1           1

$ FrDump -i /scratch/frames/.../H-H1_M-1113372000-3600.gwf -t H1:TCS-ITMX_HWS_DALSACAMERASWITCH.mean -d 5 | grep "0:"
     0:           0           0           0           0           0
      0           0           0           0           0
    10:           0           0           0           0           0
      0           0           0           0           0
    20:           0           0           0           0           0
      0           0           0           1           1
    30:           1           1           1           1           1
      1           1           1           1           1
    40:           1           1           1           1           1
      1           1           1           1           1
    50:           1           1           1           1           1
      1           1           1           1           1

These frames start at,

$ tconvert 1113372000
Apr 18 2015 05:59:44 UTC

and the 0's start about 28 minutes into the /scratch file (copied from framewriter-1), while the /archive version only contains 1's (copied from framewriter-0).

Thus, I predict framewriter-1 restarted at around Apr 18 2015 06:28:00 UTC. It seems that 0's get filled in for times before that.

If I check H1:TCS-ITMX_HWS_DALSACAMERASWITCH.n, which gives the number of values used to compute the averages, it is also 0 when the above numbers are 0, indicating the 0's came from times when framewriter-1 had no data.

Note that this behavior only occurs for second-trend and minute-trend data.

If data is missing in the raw or commissioning data, no file is written out. Thus, we never find a difference between valid raw (H1_R) or commissioning (H1_C) frames written by the two framewriters. Note that the diffH1fb0vsfb1Frames process, seen in the first row of green lights here,

http://ldas.ligo-wa.caltech.edu/ldas_outgoing/archiver/monitor/d2dMonitor.html

is continuously checking that the raw frames from the two framewriters are the same. (The same process runs at LLO too.)

If differences are found, it sends out an email alert.

I've never received an alert, except when the RAID disk-arrays have either filled up (and 0 byte files were written by one framewriter) or hung in some way that caused corrupt files to be written. In both cases, the files on the problem array never pass FrCheck and are never copied into the LDAS system.

Thus, the above behavior is a feature of the second-trend and minute-trend frames only. To avoid this issue, code should check the .n channel to make sure the full number of samples was used to obtain the average. Otherwise, some of the trend data gets filled in with zeros.
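A minimal Matlab sketch of that check (hypothetical variable names; the example values mimic one partially-filled minute of a 16 Hz channel):

fs         = 16;                        % raw sample rate of the channel [Hz]
expected   = fs * 60;                   % samples per full one-minute average
n_trend    = [960 960 432 960];         % example .n values, one partial minute
mean_trend = [1 1 0 1];                 % corresponding example .mean values
mean_trend(n_trend ~= expected) = NaN   % discard averages built from partial data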



david.barker@LIGO.ORG - 16:06, Thursday 23 April 2015 (18032)

Greg said:

Thus, I predict framewriter-1 restarted at around Apr 18 2015 06:28:00 UTC. It seems that 0's get filled in for times before that.

The restart log for 17th April says

2015_04_17 23:28 h1fw1

With local PDT time = UTC - 7, Greg gets a gold star.

 

daniel.sigg@LIGO.ORG - 09:24, Friday 24 April 2015 (18041)

There should also be a .n channel which tells you how many samples were included in the average.

H1 IOO
daniel.sigg@LIGO.ORG - posted 09:51, Tuesday 21 April 2015 (17970)
IMC ASC model

Added a feature to the IMC ASC model, so we can test a fast PZT servo to stabilize the input pointing. A new column or row has been added to the input and output matrices, respectively---with the output connected back to the input using a unit delay. The idea is that the WFS DC signal is fed through DOF4 to the PZTs with a high bandwidth servo. The current PZT output is fed to the new output matrix row, fed back to the new input matrix column, and added to the fast PZT servo as an input offset. MEDM screens have not been updated.

H1 CDS (CAL, CDS, DAQ, IOO, ISC, PSL, SEI, SUS, TCS)
jeffrey.kissel@LIGO.ORG - posted 07:03, Tuesday 21 April 2015 - last comment - 09:39, Tuesday 21 April 2015(17967)
Beginning RCG Upgrade 2.9.1
I'm beginning to upgrade the front-end models to RCG 2.9.1. 

Order of operations:
- PSL (to prep for Periscope PZT Relocation)
- OAF (for TCS and CO2 table alignment)
- End-station ISC (for Green and HWS commissioning)
- Corner Station SEI/SUS
- Corner Station LSC/ASC
- End-station PEM/CAL
- SUS AUX

All ISC Manager and IMC guardians have been transitioned to their DOWN state.

Stay tuned and wish me luck!
Comments related to this report
daniel.sigg@LIGO.ORG - 09:39, Tuesday 21 April 2015 (17969)

The ODC EPICS change to longs is only partially implemented. We half forgot to svn up the cds/common/model directory with the new ODC changes, so most of the corner models still use doubles for ODC EPICS channels. However, all end station ODC channels are long, as are the PSL and IMC ones. The PSL also has its ODC DAQ channel updated to uint32.

The bitmask is still a double, though. Seemingly, we only have a cdsEpicsOutLong and no cdsEpicsInLong. I guess this is to compensate for the fact that we have cdsEpicsBinIn but no cdsEpicsBinOut...

H1 ISC (ISC, SUS)
sheila.dwyer@LIGO.ORG - posted 01:59, Tuesday 21 April 2015 (17966)
locking with higher recycling gain

Evan, Gabriele, Sheila

Today we learned how to lock the IFO with a higher recycling gain.  First we locked the IFO and adjusted the alignment again to get a recycling gain of ~40, similar to what was done yesterday, realigned the green beams down the arms by moving TMS and the QPD offsets (QPDs were only necessary for the X arm), and updated the camera positions. With the improved recycling gain we had to make some changes to both the CARM offset reduction sequence and the ASC.

CARM offset reduction 

All the changes we made to the CARM offset reduction were consistent with having higher arm build ups because of the better recycling gain.

ASC

We found the same situation as what Evan and Gabriele noted yesterday: the sign on the PRC2 loops has flipped.  We have also turned off some roll-offs and increased some gains to make it easier for us to find the good recycling gain.  (These may introduce noise into DARM, so we may need to add them back later.) Specifically,

These changes are all in the guardians now; we have not completely tested the ASC.  We can also turn the ASC gains back down to what they used to be.  We were able to turn all the ASC on with a recycling gain of 39.6; however, the bounce, violin, and roll mode damping broke the lock.  We think the roll mode damping is the most likely problem.

SUS chassis needing to be power cycled

Also, the ETMX PUM watchdog is tripped; it seems like each time the watchdog trips on only one of the coils, we have to drive to the end station to power cycle the chassis.  We also had to drive down there earlier in the evening to power cycle the UIM coil driver chassis.

H1 AOS
robert.schofield@LIGO.ORG - posted 18:38, Monday 20 April 2015 (17964)
Magnetic coupling measurement at EY: 5e-20 m/sqrt(Hz) predicted at 11 Hz

I measured magnetic coupling at ETMY. The figure shows one of the injections as well as the predicted noise floor from the ambient background of about 10 pT/sqrt(Hz) (transients will likely be greater).  The plot also shows the upper limit to suspension coupling that Anamaria and I made at LLO in Nov. 2013 (an upper limit because the dominant coupling was to cables, not the suspension), and the prediction from my measurements of the moments of individual test mass suspension parts. The value here may also be an upper limit to suspension coupling, if coupling is dominated by cable coupling, but I took the measurement at an end station to reduce this possibility. Also, the behavior is consistent with the 1/f^5 behavior above 20 Hz that I would expect for coupling to the PUM magnets, and the coupling factor is close to what was predicted for the PUM magnets.

link to text file

Non-image files attached to this report
H1 CAL (CAL, DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 12:57, Monday 20 April 2015 - last comment - 09:08, Friday 24 April 2015(17951)
Studies on the Precision of DARM Calibration
J. Kissel, K, Izumi

I had started the weekend hoping to improve the DARM calibration in the following ways:
(1) Including the compensation for the analog and digital anti-aliasing (AA) and anti-imaging (AI) filters.

(2) Decreasing the DARM coupled cavity pole by 25% to 290 [Hz].

(3) Establishing an uncertainty estimate of the optical gain (the DC scale factor component of the sensing function).

(4) Reducing the delay time in the actuation from four 16 [kHz] clock cycles to one 16 [kHz] clock cycle.

After study, and Sunday's improvement to the power recycling gain, we've decided not to make *any* changes to the calibration, yet. 
However, for the record, I put down what I've studied here, so we can begin to understand our uncertainty budget.

%% Details
-----------
(1) Including the compensation for the analog and digital anti-aliasing (AA) and anti-imaging (AI) filters
LLO has pioneered a method to compensate for the high frequency effects of the analog and digital (or IOP) AA and AI filters, by including the *product* of all four filters in the actuation chain of the front-end CAL-CS model (see the last few pages of G1500221 and LLO aLOG 16421). 

Further, Joe has analyzed a collection of 281 real, analog AA/AI filters that were tested during CDS acceptance testing to refine the exact frequency response of these filters (see first attachment, aLIGO_AAAI_FilterResponse_T1500165.pdf). In summary, the 3rd order Butterworth's corner frequency is statistically significantly lower; measured to be 8.941 (+0.654 /-0.389, or +7%/-3%) [kHz] instead of the ~10 [kHz] Butterworth model that we have been using (which was inherited from a .mat file from the 40m). Though this does not appreciably affect the magnitude error at high frequency, it contributes as much as 3 [deg] of phase by 2 [kHz], which can throw off our estimate of the residual unknown time delay by 5 [us] when we try to account for it in our fitting of the open loop gain transfer function.

However, after exploring what LLO has implemented, we've discovered a flaw in the implementation of this compensation. In going from the continuous zpk model of the filters to discrete, because we're trying to model filters which have all of their response near, at, or above the Nyquist frequency, there is a significant difference between the continuous and discrete models of the filter response (see second attachment 2015-04-18_AAAI_FilterStudy.pdf).
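To illustrate the problem, a minimal Matlab sketch (assuming the measured 8.941 [kHz] corner and a bilinear discretization at the 16 [kHz] model rate, where the corner sits above the 8192 [Hz] Nyquist frequency):

[z, p, k] = butter(3, 2*pi*8941, 's');    % 3rd-order analog Butterworth AA/AI model
sysc = zpk(z, p, k);                      % continuous-time model
sysd = c2d(sysc, 1/16384, 'tustin');      % discrete-time model at the front-end rate
f    = logspace(2, log10(8192), 300);     % 100 Hz up to the Nyquist frequency
Hc   = squeeze(freqresp(sysc, 2*pi*f));
Hd   = squeeze(freqresp(sysd, 2*pi*f));
semilogx(f, 20*log10(abs(Hd(:)./Hc(:))))  % the discrepancy grows toward Nyquist
xlabel('Frequency [Hz]'); ylabel('|discrete / continuous| [dB]')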

As such, we will *not* begin to compensate for the AA and AI filtering until we arrive at a better method for compensating these filters.

(2) Decreasing the DARM coupled cavity pole by 25% to 290 [Hz].
Over the past few weeks, we've established that the DARM coupled cavity pole is now at 290 [Hz] instead of the predicted L1 value of 389 [Hz] (see LHO aLOG 17863). We've added one more DARM open loop gain transfer function to the list we're now comparing after the HAM6 vent,
Apr 13 2015 04:15:43 UTC % Post HAM6 Vent & UIM/TST Crossover; 10 [W] input power
Apr 13 2015 06:49:40 UTC % No loop parameter changes, but input power 15 [W]
Apr 15 2015 07:53:56 UTC % Input Power 15 [W] no change in control system from previous measurement 
with these three measurements, I made a statistical comparison of the model / measurement residual while using 290 [Hz] for the modeled coupled cavity pole frequency, and reducing the unknown time delay from 40 [us] to 30 [us] because I've used Joe's measured mean for the analog AA / AI in the model (see third attachment 2015-04-18_290HzCCP_H1DARMOLGTF.pdf). As one can see on the 3rd and 4th pages, assuming each of the residual's frequency points is a measurement of the true OLGTF value with a Gaussian distribution, the uncertainty in the frequency dependence of the OLGTF model is now a 1-sigma, 68% confidence interval of +/- 1.5% in magnitude and 1 [deg] in phase between 15 and 700 [Hz] (IF we change the CCP frequency to 290 [Hz], compensate for the AA and AI filters, and include 30 [us] of unknown delay). Note that this assumption of Gaussianity appears to be roughly true for the magnitude, but not at all in phase (I'm still thinking on this). Also note that each one of these frequency points has passed a 0.99 coherence threshold on a 10 [avg] measurement (and most have coherence above 0.995), so the individual uncertainty for each point is sqrt((1-coh)/(2*nAvgs*coh)) = 1 to 2%.
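For the record, that per-point uncertainty as a one-line Matlab check (values from the text above):

coh   = [0.99 0.995];               % coherence threshold and a typical value
nAvgs = 10;                         % averages per measurement
sqrt((1 - coh) ./ (2*nAvgs*coh))    % ~0.022 and ~0.016, i.e. the quoted 1 to 2%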

Recall the frequency dependence of the model is determined by the following components included in the model:
- The 1/f^2 dependence of the [m/N] suspension transfer function (as modeled by the QUAD state space model)
- The 2000 [Hz] ESD driver pole
- The analog and digital anti-imaging filters
- The 130 [us] of actuation delay from one 16 [kHz] cycle of SUS Computation, three 65 [kHz] cycles of IOP Error Checking, one 65 [kHz] cycle of IOP Computation, and one-half of a 65 [kHz] cycle for Zero-order Hold Delay
- The DARM filters
- The single-pole response (at 290 [Hz]) of the optical plant 
- The analog and digital anti-aliasing filters
- The 76 [us] of sensing delay from one 65 [kHz] cycle of IOP Computation and one 16 [kHz] cycle of OMC Computation
- The 30 [us] of unknown time delay

As a cross-check, I recalculated the comparison with the CCP frequency that's currently used in the model, 389 [Hz], and found that around the high-frequency PCal lines, roughly ~535 [Hz], the model / measurement discrepancy is 25-30%. This is consistent with what the PCAL calibration reports at these frequencies, a DARM / PCAL (which is equivalent to model / measurement) discrepancy of 25-30% -- see LHO aLOG 17582. At the time, the PCAL team reported their internal uncertainty to be in the few-percent range.
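A quick numerical version of this cross-check, using the single-pole sensing response from the model description above (a sketch; the full model comparison is in the attachments):

f = 535;                        % [Hz] near the high-frequency PCal lines
H = @(fp) 1 ./ (1 + 1i*f/fp);   % single-pole optical response
abs(H(389) / H(290))            % ~1.23: a 389 Hz model is ~23% off a 290 Hz plant
                                % here, the same order as the observed 25-30%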

This had convinced me on Saturday that I had enough information to "officially" change the DARM CCP frequency in the CAL-CS front end, but Gabriele and Evan have since changed the alignment scheme for the corner station to improve the power recycling cavity gain by improving the ITM DC alignment (LHO aLOG 17946). This will have an effect on the signal recycling cavity and therefore the DARM CCP frequency, so we'll wait until we get a few more OLGTFs in this new configuration before changing anything.

(3) Establishing an uncertainty estimate of the optical gain (the DC scale factor component of the sensing function).
After refining the precision of the frequency dependence in magnitude, this allows us to quantify the precision to which we can estimate the overall DC scale factor that one needs to scale the model to the measured OLGTF; a factor that we traditionally have attributed only to the change in optical gain between lock stretches. For this study, I've used *all* six DARM OLGTFs; see 2015-04-18_AllMeas_FittedCCP_H1DARMOLGTF.pdf. Note that this increases the uncertainty of the frequency dependence to a less Gaussian 2.5%, but as you'll see, this is still plenty precise.

Recall that before the transition to the OMC DCPDs, regardless of input power to the IFO, the OMC_READOUT sensor gain is changed to match the RF readout sensor gains, which are already power normalized. That should mean that input power has no effect on the measured optical gain, and this is a safe comparison. 

With 6 measurements, the mean scale factor for the OLGTFs is 1.05e6 +/- 26% [ct/ct]. This is consistent with the 34% variation in the DARM digital gain used across these 6 measurements. The current optical gain used for the sensing function in the CAL-CS front end model is 1.1e6 [ct/m]. This 4% difference from the mean of these 6 measurements is well within the 26% uncertainty, so we've concluded to *not* change anything there. 

All this being said, we have used the *same* actuation strength for all of these comparisons, but there is no guarantee that the actuation strength is not changing along with the optical gain.
- ETMY is controlled using the Test Mass (L3) and UIM (L1) stages
- The cross-over for these two stages in the two groups of measurements is ~1.2 [Hz] and 2.5 [Hz] (see 17713), and by 10 [Hz] the contribution of the UIM is roughly -25 [dB] and -15 [dB]. Therefore the ESD is the dominant actuator in the frequency region in which we're trying to calibrate.
- Static charge affects the actuation strength of the ESD by changing the effective bias voltage of the drive, as well as changing the amount of drive that's in the longitudinal direction (because the charge can migrate to different regions of the reaction mass / test mass gap), see e.g. G1500264, LLO aLOG 16611, or LLO aLOG 14853.
- If there is substantial residual charge on the ESD, the charge varies on the ESD when Ion Pumps are valved into the chamber.
- It has been shown many times over that the charge varies on the few hour time scale when there is significant residual charge on the test mass and the ion pumps are valved in (see e.g., G1401033 or as recently as LLO aLOG 17772).
Thus, it is reasonable to suspect that the actuation strength is changing between these measurements. LHO has made nowhere near enough measurements (only a one-time comparison between ETMX and ETMY, see LHO aLOG 17528) to quantify how much this is changing, but here is what is possible:
- We have a physical model of the actuation strength (or at least a more accurate equation for how the bias voltage determines the actuation strength, see above citations). I think we can take what we've seen for the variance (as high as +/- 400 [V] !!) and propagate that through to see how much of an effect it has on the strength (see the sketch after this list)
- PCAL lines at low-frequency (~30 [Hz]), compared against the DARM calibration lines should show how the optical gain is varying with time, it's just that no one has completed this study as of yet.
- Calculation of the gamma coefficient from the DARM lines should also reveal how the open loop gain transfer function is changing with time. In the past, we've assumed that changes in gamma are fluctuations in the optical gain because we've had actuators with non-fluctuating strength. 

Thus, for now, we'll incorrectly assign all of the uncertainty in the scale factor to optical gain, and call it 26%. Perhaps it will be much better to trust PCAL at this point in time, since its precision is so much greater than this "scale the OLGTF model" method, but I would need a third measurement technique to confirm the accuracy. I think a power budget propagated to a shot noise estimate compared against the measured ASD (like in LHO aLOG 17082) is the easiest thing to do, since it can be done offline. Or we should resurrect the campaign to use the IMC VCO as a frequency reference, but this has the disadvantage of being an "offline, odd configuration" measurement, just like the free-swinging Michelson.

(4) Reducing the delay time in the actuation from four 16 [kHz] clock cycles to three 16 [kHz] clock cycles.
As mentioned above, the time delays that are included in this model are 
- The 130 [us] of actuation delay from one 16 [kHz] cycle of SUS Computation, three 65 [kHz] cycles of IOP Error Checking, one 65 [kHz] cycle of IOP Computation, and one-half of a 65 [kHz] cycle for Zero-order Hold Delay
- The 76 [us] of sensing delay from one 65 [kHz] cycle of IOP Computation and one 16 [kHz] cycle of OMC Computation
- 30 [us] of unknown time delay (the equivalent of ~8-9 [deg] of phase at 700 [Hz])
for a total of 206 [us] of delay for which we've accounted, out of the total 236 [us] that's used to produce the above frequency-dependence comparison. So, there's a total of 3.4 or 3.9 16 [kHz] cycles of known or known+unknown time delay, respectively. Remember that the "L/c", light-travel time delay (13 [us]) is *less* than the one 16 [kHz] SUS clock cycle (61 [us]) delay that defines when the control signal arrives at the end station over RFM IPC, so we ignore it.

Since we only have the infrastructure to add delay in the actuation paths in CAL-CS, we can only account for the *differential* delay between the two paths. If we assign the unknown delay to the actuation side of things, then the difference in delay between the two paths is (130+30)-76 = 84 [us], or ~1.4 16 [kHz] clock cycles, leaving a residual overall delay of 76 [us]. If we assign it to the sensing function, we get 130-(76+30) = 24 [us], or ~0.4 16 [kHz] clock cycles, leaving a residual of 130 [us]. Since we can't do less than one 16 [kHz] clock cycle, we should choose to assign the unknown delay to the actuation function, apply one 16 [kHz] cycle of delay to the actuation function, suffer the remaining 84 - 61 = ~23 [us] of differential delay between the sensing and actuation paths, and account for a 76 [us] delay in offline analysis.
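The bookkeeping, as a minimal Matlab sketch (values in [us], taken from the list above):

cycle = 1e6/16384;        % one 16 kHz clock cycle, ~61.0 us
act   = 130 + 30;         % actuation delay plus the unknown delay assigned to it
sens  = 76;               % sensing delay
d     = act - sens        % 84 us of differential delay
n     = floor(d/cycle)    % 1 whole cycle to apply in the CAL-CS actuation path
d - n*cycle               % ~23 us residual to account for offline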
Non-image files attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 22:52, Monday 20 April 2015 (17965)

Your list of known delays doesn't seem to include the 13us (L/c) delay from the interferometer response (see e.g. eqn. 16 in T970101).

jeffrey.kissel@LIGO.ORG - 09:22, Wednesday 22 April 2015 (18002)
Daniel's right; details below. As such, the unknown time delay is 16 +/- 5 [us].
For clarity, I repeat the list of time delays included in this model:
- The 130 [us] of actuation delay from 
     - one 16 [kHz] cycle of SUS Computation, 
     - three 65 [kHz] cycles of IOP Error Checking, 
     - one 65 [kHz] cycle of IOP Computation, and 
     - one-half a 65 [kHz]  cycle for Zero-order Hold Delay
- The 89.3 [us] of sensing delay from 
     - one L/c delay sensing the ETM motion in the corner, 
     - one 65 [kHz] cycle of IOP Computation, and
     - one 16 [kHz] cycle of OMC Computation
- 16.7 [us] of unknown time delay (the equivalent of ~3-4 [deg] of phase at 700 [Hz])
for a total of 219.3 [us] of delay for which we've accounted, out of the total 236 [us] that's used in the model.

Details:
--------
More on the L/c time delay, as explained by Daniel:
I have said above,
"Remember that the "L/c", light-travel time delay (13 [us]) is *less* than the one 16 [kHz] SUS clock cycle (61 [us]) delay that defines when the control signal arrives at the end station over RFM IPC, so we ignore it."

Daniel agrees:
The fiber delay is n * L/c or about 20us. It doesn't matter because it is part of the SUS cycle delay.

However, there is a sensing function delay. When you push the ETM (from the DARM actuation) it takes at least L/c before you can measure a signal in the corner. This is a pure optical delay. This sensed control signal is indeed what we're measuring when we take an open loop gain transfer function.

For gravitational waves the situation is similar: the photons which travel forth and back in the arm are, on average, sampling h(t) from half a round trip ago. In reality, this is only exactly true for perpendicular incidence. 

As such, we should subtract 3994.465 (+/- 7e-4) [m] / 299792458 [m/s] = 13.3 [us] from the "unknown" time delay, leaving us with an unknown delay of 16.7 [us]. It is unclear yet what the uncertainty in this number is, since thus far it has merely been fit by eye to make the phase of the OLGTF residual flat. From playing around with the number in the fit, I would suggest a 5 [us] uncertainty on this unknown timing residual.

I'll update 
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Scripts/H1DARMmodel_preER7.m  
later today to reflect this knowledge.
koji.arai@LIGO.ORG - 10:18, Wednesday 22 April 2015 (18008)

OMC DCPDs have uncompensated poles at 13.7 kHz and 17.8 kHz due to their locations above the Nyquist frequency.
They cause a delay of ~18.5 us. The details can be found in LHO aLOG 17647.
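As a cross-check of that number: the low-frequency group delay of a real pole is its time constant, 1/(2*pi*f_p), so in Matlab:

fp = [13.7e3 17.8e3];    % [Hz] uncompensated OMC DCPD poles
sum(1 ./ (2*pi*fp))      % ~20.6 us, in line with the ~18.5-20 us quoted here and below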

jeffrey.kissel@LIGO.ORG - 08:12, Friday 24 April 2015 (18037)
I've confirmed Koji's statement with a bode plot, though I get a better "fit" with 20 [us] delay. But the point is moot.  I'll definitely just include this in the actual frequency response of the sensing function. This brings the unknown time delay to 0 +/- 5 [us] -- wow! Let's hope we don't find out about anything else. ;-) 

Also -- that means we should include this in the approximation for the super-Nyquist frequency response of the sensing function, along with the digital and analog AA filters, when we fix it in the front end.
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 09:08, Friday 24 April 2015 (18039)
I've reprocessed the results after adding the L/c arm delay and the OMC DCPD uncompensated high frequency poles mentioned above. Because we've replaced the equivalent unknown time delay with a known time delay of L/c 13.3 [us] and some very high-frequency poles, the results have actually changed very little and therefore the uncertainty in the frequency response of the OLGTF has changed very little: 
                                   Was                     Is Now
Magnitude Residual StDev:  1.0045 +/- 0.025318       1.0043  +/- 0.025309     
Phase Residual StDev:      0.4299 +/- 1.0307         0.23821 +/- 1.0534
However, there are fewer unknowns in the model, which is exactly what we want. 

As such, I stand by my earlier statement:

As one can see on the 3rd and 4th pages, assuming each of the residual's frequency points is a measurement of the true OLGTF value with a Gaussian distribution, the uncertainty in the frequency dependence of the OLGTF model is now a 1-sigma, 68% confidence interval of +/- 2.5% in magnitude and 1 [deg] between 15 and 700 [Hz] (IF we change the CCP frequency to 290 [Hz] -- which is now probably different, and find a good discrete approximation for compensating for the OMC DCPD poles, the AA, and the AI filters). Note that this assumption of Gaussianity appears to be roughly true for the magnitude, but not at all in phase (I'm *still* still thinking on this). Also note that each one of these frequency points has passed a 0.99 coherence threshold on a 10 [avg] measurement (and most have coherence above 0.995), so the individual uncertainty for each point is sqrt((1-coh)/(2*nAvgs*coh)) = 1 to 2%.


Details:
--------
I've added the following parameters to the params files:
par.C.omcdcpdpoles_Hz = [13.7e3 17.8e3]; % LHO aLOGs 18008 and 17647

par.C.armLength.x = 3994.4704; % [m] +/- 0.3e-3; LHO aLOG 9635
par.C.armLength.y = 3994.4692; % [m] +/- 0.7e-3; LHO aLOG 11611
par.C.speedoflight = 299792458; % [m/s]
and added the following lines to the DARM model
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Scripts/H1DARMmodel_preER7.m

par.C.uncompensatedomcdcpd.c = zpk([],-2*pi*par.C.omcdcpdpoles_Hz,prod(-2*pi*par.C.omcdcpdpoles_Hz));
par.C.uncompensatedomcdcpd.f = squeeze(freqresp(par.C.uncompensatedomcdcpd.c,2*pi*freq));

par.t.armDelay  = mean([par.C.armLength.x par.C.armLength.y]) ./ par.C.speedoflight; % mean of the X and Y arm lengths
Non-image files attached to this comment