H1 General
edmond.merilh@LIGO.ORG - posted 08:46, Wednesday 20 July 2016 (28517)
Morning Meeting Minutes

TCS-Y: Laser possibly damaged

SEI: Discovered that power-ups/reboots cause HEPI pump differential pressure(s) to spike. This will be added to FRS.

CDS: No full report on Timing changes yet. A long lock stretch would be more telling.

PSL: Jenne reported that noise increased by a factor of 2. ISS settings are a bit different since more RF power is being fed to the AOM.

VAC: no issues reported

FAC: no issues reported

H1 SEI
edmond.merilh@LIGO.ORG - posted 08:30, Wednesday 20 July 2016 (28516)
HEPI L4C Weekly Saturation Accumulator Clearing - FAMIS# 7063

All accumulators found to be at 0 and GREEN

H1 PSL
edmond.merilh@LIGO.ORG - posted 08:25, Wednesday 20 July 2016 (28514)
PSL Weekly Trends - 10 day trends FAMIS# 6105
Images attached to this report
H1 General (GRD, OpsInfo, PSL, SEI, SUS)
sheila.dwyer@LIGO.ORG - posted 02:20, Wednesday 20 July 2016 - last comment - 06:37, Wednesday 20 July 2016(28511)
locking tonight, SRM dither signal, some requests for help

We didn't get to DC readout until after midnight tonight. There were no huge problems, but there are several things that it would be helpful if someone could follow up on:

In the end we got to DC readout, and Evan, Carl and I had a look at the dither alignment for SRM pitch using POP 90, which is now in the ADS matrix. The script in userapps/asc/h1/scripts/ditherSRM.py sets things up, and we could see that we have an error signal that responds to moving SRM and has a zero crossing at a good alignment. We tried closing the loop, but we probably hadn't turned the gain up enough, and we broke lock for an unknown reason; by this time our alignment had become rather bad and we had a small EQ.
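For reference, the core of a dither alignment scheme like this one is just a demodulation of the power signal against the injected dither. A minimal sketch of the idea (illustrative only, not the actual ditherSRM.py):

    import numpy as np

    def dither_error(pop90, dither):
        # In-phase demodulation of POP 90 against the injected SRM dither.
        # For a power curve that is quadratic around the optimum,
        # P(theta) ~ P0 - k*(theta - theta0)**2, the component of P at the
        # dither frequency is proportional to the misalignment, so this
        # estimate crosses zero at the best alignment.
        dither = dither - np.mean(dither)
        pop90 = pop90 - np.mean(pop90)
        return np.mean(pop90 * dither) / np.mean(dither**2)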

Since the optics are drifting so much tonight I'm not going to do an alignment now, but if the morning operator could start by doing an initial alignment when the charge measurements are over, that would help us get started tomorrow.

Images attached to this report
Comments related to this report
richard.mittleman@LIGO.ORG - 06:37, Wednesday 20 July 2016 (28512)

The Yaw alignment of the ISIs has a temperature dependence (I don't remember the number, but it is something like the expansion coefficient of aluminum, 2.2e-5/K, with some geometry factor that is going to be slightly less than 1). If the platform was running with a DC offset, turning it off and then back on could produce a drifting Yaw alignment.
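To put a rough number on the size of this effect, here is a back-of-the-envelope sketch (the geometry factor and temperature swing are assumed values, not measurements):

    alpha = 2.2e-5   # thermal expansion coefficient of aluminum, 1/K
    geom = 0.8       # geometry factor, "slightly less than 1" per the comment above
    dT = 0.5         # assumed platform temperature change after a power cycle, K

    # For small angles, differential expansion across the platform maps into
    # yaw roughly as alpha * geometry * dT:
    yaw_drift = alpha * geom * dT   # radians
    print("~%.1f urad of yaw drift for a %.1f K swing" % (yaw_drift * 1e6, dT))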

H1 SUS
sheila.dwyer@LIGO.ORG - posted 01:28, Wednesday 20 July 2016 - last comment - 21:03, Wednesday 20 July 2016(28510)
MC3 saturation

At 8:09 UTC (on July 20th) we had a large glitch in interferometer signals and MC3 saturated. This seems like a suspect for some kind of suspension glitch and would be worth following up on.

Comments related to this report
andrew.lundgren@LIGO.ORG - 07:35, Wednesday 20 July 2016 (28513)DetChar, SUS
The glitch is at 8:08:24.5 UTC, and it seems to have originated from the MC3 M1 LF OSEM. The glitch there is 500 counts peak-to-peak, while it's about 40 in the RT OSEM. It looks like one short glitch that caused some ringing for three seconds, which was visible in T2 and T3 as well, but not nearly as large as LF and RT.
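For anyone following up, one way to pull and compare these signals is sketched below (the GPS time is converted from the UTC stamp above, and the DQ channel names are assumptions — verify both before use):

    from gwpy.timeseries import TimeSeriesDict

    t0 = 1153037321.5  # approx GPS for 2016-07-20 08:08:24.5 UTC
    chans = ['H1:SUS-MC3_M1_DAMP_LF_IN1_DQ',   # assumed M1 OSEM channel names
             'H1:SUS-MC3_M1_DAMP_RT_IN1_DQ']
    data = TimeSeriesDict.get(chans, t0 - 2, t0 + 4)
    for name, ts in data.items():
        print(name, 'peak-to-peak:', ts.value.max() - ts.value.min(), 'counts')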
betsy.weaver@LIGO.ORG - 09:57, Wednesday 20 July 2016 (28522)

For what it's worth - the overall trend of these signals did not change from before the little glitch event.  OSEM signals look healthy. 

Images attached to this comment
sheila.dwyer@LIGO.ORG - 21:03, Wednesday 20 July 2016 (28541)

We had a similarly huge glitch a few seconds before 23:30:30 UTC (also July 20th). It doesn't seem to be the same MC3 LF problem that Andy found. In the first attachment you can see the glitch from last night showing up clearly in MC3 M1 LF; in the second attachment you can see the very similar glitch from this afternoon without anything happening in MC3. For this afternoon's glitch I also looked at the top mass OSEMs for all the other optics and don't see much happening.

Also, all 3 MC mirrors react to the 8:08 glitch; this is because we don't have much of a cutoff on our MC WFS loops. Adding more cutoffs might be a good idea.

We've had several unexplained and sudden locklosses lately, and I wonder if whatever causes these huge glitches also causes some locklosses. 

Images attached to this comment
H1 General
jeffrey.bartlett@LIGO.ORG - posted 23:57, Tuesday 19 July 2016 (28509)
Ops Evening Shift Summary
Title: 07/19/2016, Evening Shift 23:00 – 07:00 (16:00 – 00:00) All times in UTC (PT)
State of H1: IFO unlocked. Site recovering from maintenance day.    
Commissioning: 
Outgoing Operator:  Ed 
 
Activity Log: All Times in UTC (PT)

23:00 (16:00) Start of shift
23:30 (16:30) Dave – Going into CER to reboot H1IOPSUSB123
23:34 (16:34) Dave – Finished in CER
23:36 (16:36) Dave – Going to both Mid Stations to reboot H1IOPPEMMY & H1IOPPEMMX
00:48 (17:48) Vern – Going into LVEA to restart TCS lasers
01:00 (18:00) Vern – Out of LVEA
01:23 (18:23) Peter – Going into LVEA to reset NPRO Noise heater
01:30 (18:30) Peter – Out of LVEA
03:24 (20:24) Nutsinee & Jeff K. – Going into LVEA to check TCS-Y Laser
03:46 (20:46) Nutsinee & Jeff K – Out of the LVEA
04:25 (21:25) Nutsinee – Going to TCS-Y table to power cycle laser
05:18 (22:18) Nutsinee – Going into LVEA to check cabling at TCS-Y Table
05:35 (22:35) Nutsinee – Out of LVEA


End of Shift Summary:

Title: 07/19/2016, Evening Shift 23:00 – 07:00 (16:00 – 00:00) All times in UTC (PT)
Support: Dave, Jim, Hugh, Peter, Vern, Nutsinee, Jeff K., Jenne, Sheila         
Incoming Operator: N/A

Shift Detail Summary: Site is recovering from Timing system reprogramming (WP #5993). Jenne did initial alignment, and is now working on locking. Nutsinee & Jeff K. are sorting out a problem with the TCS-Y laser. They found the TCS-Y laser is down on power (putting out about 42W vs 58W for TCS-X). IFO locked and commissioning work continues. 
H1 SEI (PEM, SEI)
david.mcmanus@LIGO.ORG - posted 23:38, Tuesday 19 July 2016 - last comment - 16:34, Thursday 21 July 2016(28507)
Newtonian Noise Array set-up (Part 1)

David McManus, Jenne Driggers

Today I set up 16 of the sensors for the corner station Newtonian noise L4C array. These were the 16 that were most out of the way and least likely to be tripping hazards, mostly focused in the 'Beer Garden' area and around the arms. The channels I connected were: 4,8,13,18,19,20,21,22,23,24,25,26,27,28,29,30. The sensors corresponding to these channels are included in the table attached to this report. The sensors are stuck to the ground using a 5 minute epoxy, and a foam cooler is placed on top of each one and taped to the ground. These foam coolers have a small hole cut near the base so that the cable can get out without touching the cooler (shown in one of the pictures). The cut surfaces are sealed with tape to prevent foam from flaking off onto the LVEA floor. The cables have basic strain relief by taping them to the ground on either side of the foam cooler, which also helps to ensure that the cable is not touching the cooler. I've attached two pictures showing what the sensors look like with and without the cooler. 

As a side note, sensor 26 is quite difficult to access, as it is placed almost directly beneath a vacuum tank. When it eventually needs to be removed, a small person may be required to fetch it. The final attached photo shows how it is positioned (BSC7). I placed it by carefully stepping into the gap between the pipes that run along that section of the arm and then crawling under the vacuum tank support.

Images attached to this report
Comments related to this report
david.mcmanus@LIGO.ORG - 16:34, Thursday 21 July 2016 (28569)

The sensor channel names are H1:NGN-CS_L4C_Z_#_OUT, where # is the channel number I reference in this post. Jenne made an MEDM screen, which can be found under the SEI tab and then 'Newtonian Seismic Array'.
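For convenience, the full list of connected channels can be generated from the numbers in the original post (a sketch; it assumes the numbers are not zero-padded in the channel names):

    connected = [4, 8, 13, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
    channels = ['H1:NGN-CS_L4C_Z_%d_OUT' % n for n in connected]
    print('\n'.join(channels))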

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 23:35, Tuesday 19 July 2016 - last comment - 10:47, Wednesday 27 July 2016(28508)
CDS maintenance summary

Upgrade of Timing Firmware

Daniel, Ansel, Jim, Dave

Most of today was spent upgrading the entire timing system to the new V3 firmware. This did not go as smoothly as planned, and took from 9am to 6pm to complete. By the end of the day we had reverted the timing master and the two CER fanouts to the original code (the end station fanouts were not upgraded). We did upgrade all the IRIG-B fanouts, all the I/O chassis timing slaves, all the comparators and all the RF amplifiers.

The general order was: stop all front end models and power down all front end computers; upgrade the MSR units; upgrade the CER fanouts; upgrade the PSL I/O chassis (h1psl0 was restarted, followed by a DAQ restart); upgrade all CER slaves (at this point the master was reverted to V2); at EY, upgrade the IRIG-B and slaves (skipping the fanout); at MY, upgrade the PEM I/O chassis; at EX, the same as EY; and at MX, the same as MY.

All remaining front ends were then powered up. The DAQ was running correctly, but the NDS were slow to complete their startup. Additional master work in the MSR required a second round of restarts; at this point the comparators which had been skipped were upgraded and the CER fanouts were downgraded. Finally, after h1iopsush56 cleared a long negative IRIG-B error, all systems were operational.

During these rounds of upgrades FEC and DAQ were restarted several times.

Addition of Beam Splitter Digital Camera

Richard, Carlos, Jim

An analog camera was replaced with a digital video GIGE-POE camera at the Beam Splitter.

New ASC code

Sheila:

New h1asc code was installed and the DAQ was restarted.

Reconfigured RAID for ldas-h1-frames file system

Dan:

The ldas-h1-frames QFS file system was reconfigured for faster disk access. This is the file system exported by h1ldasgw0 for h1fw0's use. After the system was upgraded, we reconfigured h1fw0 to write all four frame types (science, commissioning, second and minute). As expected, h1fw0 was still unstable at the 10 minute mark, similar to the test when h1fw0 wrote to its own file system. h1fw0 was returned to its science-frames-only configuration.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 08:26, Wednesday 20 July 2016 (28515)DetChar, INJ, PEM, SYS
Just curious -- it's my impression that the point of "upgrading the timing system to the new V3 firmware" was to reprogram all timing system hardware's LED lights so as to not blink every second or two, because we suspect that those LEDs are somehow coupling into the IFO and causing 1 to 2 Hz combs in the interferometer response. 

The I/O chassis, IRIG-B, comparators, and RF amplifiers are a huge chunk of the timing system. Do we think that this majority will be enough to reduce the problem to negligible, or do we think that because the timing master and fanouts -- which are the primary and secondary distributors of the timing signal -- are still at the previous version that we'll still have problems?
richard.mccarthy@LIGO.ORG - 09:27, Wednesday 20 July 2016 (28520)
With the I/O chassis timing upgrade we removed the separate power supply from the timing slaves on the LSC chassis in the corner and on both the EX and EY ISC chassis. Hopefully the timing work will eliminate the need for the separate supplies.
keith.riles@LIGO.ORG - 12:09, Wednesday 20 July 2016 (28528)
Could you clarify that last comment? Was yesterday's test of changing the LED blinking pattern
done in parallel with removal of separate power supplies for timing and other nearby electronics?
jeffrey.kissel@LIGO.ORG - 12:29, Wednesday 20 July 2016 (28529)CDS, DetChar, INJ, PEM
Ansel has been working with Richard and Robert over the past few months testing out separate power supplies for the LEDs in several I/O chassis (regrettably, there are no findable aLOGs showing results from this). Those investigations were apparently enough to push us over the edge of going forward with this upgrade of the timing system.

Indeed, as Richard says, those separate power supplies were removed yesterday, in addition to upgrading the firmware (to keep the LEDs constantly ON instead of blinking) on the majority of the timing system. 
ansel.neunzert@LIGO.ORG - 10:38, Thursday 21 July 2016 (28554)
To clarify Jeff's comment: testing on separate power supplies was done by Brynley Pearlstone, and information on that can be found in his alog entries. Per his work, there was significant evidence that the blinking LEDs were related to the DARM comb, but changing power supplies on individual timing cards did not remove the comb. This motivated changing the LED logic overall to remove blinking.

I'm not sure whether the upgrades done so far will be sufficient to fix the problem. Maybe Robert or others have a better sense of this?

Notable alog entries from Bryn:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25772
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25861
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27202
keith.riles@LIGO.ORG - 18:39, Thursday 21 July 2016 (28562)
I have gone through and manually compared FScan spectrograms and
normalized spectra for the 27 magnetocometer channels that are
processed daily: https://ldas-jobs.ligo-wa.caltech.edu/~pulsar/fscan/H1_DUAL_ARM/H1_PEM/fscanNavigation.html,
to look for changes following Tuesday's timing system intervention,
focusing on the lowest 100 Hz, where DARM 1-Hz (etc.) combs are worst.

Because of substantial non-stationarity that seems to be typical,
it's not as straightforward as I hoped it would be to spot a change
in the character of the spectra. I compared today's generated FScans (July 20-21)
to an arbitrary choice two weeks ago (July 6-7).

But these six channels seemed to improve w.r.t. narrow line proliferation:

H1_PEM-CS_MAG_EBAY_LSCRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Y_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Z_DQ
H1_PEM-EY_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EY_MAG_VEA_FLOOR_X_DQ  (before & after figures attached)

while these four channels seemed to get worse w.r.t. narrow lines:

H1_PEM-EX_MAG_VEA_FLOOR_Z_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_X_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Y_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Z_DQ

In addition, many of today's spectrograms show evidence of broad
wandering lines and a broad disturbance in the 40-70 Hz band
(including in the 2nd attached figure).

Images attached to this comment
keith.riles@LIGO.ORG - 10:47, Wednesday 27 July 2016 (28672)
Weigang Liu has results in for folded magnetometer channels for UTC days July 18 (before changes), July 19-20 (overlapping with changes) and July 21 (after changes):

Compare 1st and 4th columns of plots for each link below.

CS_MAG_EBAY_SUSRACK_X - looks slightly worse than before the changes
CS_MAG_EBAY_SUSRACK_Y - periodic glitches higher than before
CS_MAG_EBAY_SUSRACK_Z - periodicity more pronounced than before

CS_MAG_LVEA_VERTEX_X -  periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Y -  periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Z -  periodic glitches higher than before

EX_MAG_EBAY_SUSRACK_X - looks better than before
EX_MAG_EBAY_SUSRACK_Y - looks better than before
EX_MAG_EBAY_SUSRACK_Z - looks slightly worse than before

EY_MAG_EBAY_SUSRACK_Y  - looks slightly better after changes
EY_MAG_EBAY_SUSRACK_Z - looks the same after changes
(Weigang ran into a technical problem reading July 21 data for EY_MAG_EBAY_SUSRACK_X)

A summary of links for these channels from ER9 and from this July 18-21 period can be found here.
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 22:14, Tuesday 19 July 2016 - last comment - 14:44, Thursday 21 July 2016(28506)
Mysterious CO2Y output power drop after today's maintenance activities

Jeff K, Alastair (by phone), Nutsinee

Jeff noticed that TCS CO2Y was throwing a bunch of guardian error messages, which led him to investigate and find that the CO2Y actual output power had been lower since the laser recovered from this morning's maintenance activity. The timeseries shows that CO2Y power dropped at 15:41 UTC (8:41 local time) and never came back to its nominal (~57W). The chiller temperature, which is read off the front end, dropped at the same time, indicating CO2Y went down due to some front-end maintenance activity. The supply current to CO2Y was also low compared to CO2X (19A vs 22A), suggesting that the low power output was real. And indeed, we went out and measured about 40W at the table (we stuck a handheld power meter right before the first steering mirror).

We don't know why today's front-end maintenance would affect the CO2Y output power (CO2X is fine, by the way). On the plus side, the heating profile looks good on the FLIR camera, which means nothing was misaligned and we can still use the CO2Y laser. The beam dump that was in front of the FLIR screen hasn't been put back, so be mindful if you ever want to blast full power through the rotation stage.

 

I commented out the output power fault checker part of the TCS power guardian so that ISC_LOCK can still tell it to go places. I added a temporary +1 degree offset to the minimum angle parameter of the CO2Y rotation stage calibration so it would go to requested powers. We requested the TCS CO2 laser stabilization guardian to DOWN because it's not usable given the current output power.
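For context, the rotation stage calibration inverts a waveplate power curve to find the stage angle for a requested power, so shifting the minimum-angle parameter shifts the whole curve. A minimal sketch of that mapping (the functional form and names are assumptions, not the actual guardian code):

    import numpy as np

    def angle_for_power(p_req, p_max, theta_min_deg, offset_deg=1.0):
        # Invert the half-wave-plate power curve
        #   P = p_max * sin(2*(theta - theta_min))**2
        # to get the angle for a requested power.  The temporary offset
        # shifts the assumed minimum angle, the workaround used above for
        # the reduced laser output.
        frac = np.clip(p_req / p_max, 0.0, 1.0)
        return theta_min_deg + offset_deg + 0.5 * np.degrees(np.arcsin(np.sqrt(frac)))

    print(angle_for_power(10.0, 57.0, theta_min_deg=0.0))  # angle for 10 W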

 

Quick conclusion: CO2Y is still functional. The reason for the power loss is still to be investigated.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:32, Wednesday 20 July 2016 (28521)
J. Kissel, S. Dwyer, N. Kijbunchoo, J. Bartlett, V. Sandberg

A few checks we forgot to mention to Nutsinee last night:
- Nutsinee and I checked the flow rate on the mechanical flowmeter for both the supply and return of the TCSY chiller line, and it showed (what Nutsinee said was) a nominal ~3 gallons per minute. This was after we manually measured the power coming out of the laser head to be 41 W, confirming the EPICS readout.

- Sheila and I went to the TCS chillers on the mezzanine. Their front-panel display confirmed the ~20.1 deg C EPICs setting for temperature.

- On our way out, we also noticed that a power supply in the remote rack that is by the chillers marked "TCSY" was drawing ~18 mA, and was fluctuating by about +/- 2mA. We didn't know what this meant, but it was different than the power supply marked TCSX. We didn't do anything about it.

- The RF oscillator mounted in that same remote rack appeared functional, spitting out some MHz-frequency sine wave. Sheila and I did not diagnose any further than "merp -- looks like an oscillator; looks on; looks to be programmed to spit out some MHz sine wave."

nutsinee.kijbunchoo@LIGO.ORG - 14:44, Thursday 21 July 2016 (28561)

Alastair, Nutsinee

Today I went and checked the CO2Y power supply set points. The voltage limit is set to 30V and the current limit is set to 28A. Same goes for the CO2X power supply. These are the correct settings, which means the CO2Y laser is really not behaving properly.

H1 SUS (CDS)
sheila.dwyer@LIGO.ORG - posted 19:34, Tuesday 19 July 2016 - last comment - 10:53, Wednesday 20 July 2016(28503)
making SDF a little easier to deal with

For all the suspensions for which the guardian sets the SDF file to down, I have changed the safe.snap to be a softlink to the down.snap in the userapps repository. This means we have one less SDF file to worry about maintaining for these suspensions. If anyone can do a similar job for the rest of the suspensions (i.e., make sure that safe.snap is a softlink to some file that gets maintained), things will be a little easier next time we restart all models.
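For anyone doing the same for the remaining suspensions, the relinking itself is small; a sketch under the assumption that the target directories live under /opt/rtcds/lho/h1/target (do this with the models safely down):

    import os

    def relink_safe_snap(sus, kind='down',
                         target_root='/opt/rtcds/lho/h1/target',
                         burtfiles='/opt/rtcds/userapps/release/sus/h1/burtfiles'):
        # Replace <sus>'s safe.snap with a softlink to its maintained snap.
        link = os.path.join(target_root, sus, sus + 'epics', 'burt', 'safe.snap')
        dest = os.path.join(burtfiles, '%s_%s.snap' % (sus, kind))
        if os.path.lexists(link):
            os.remove(link)       # drop the old file or stale link
        os.symlink(dest, link)    # safe.snap now tracks the maintained file

    relink_safe_snap('h1susmc2')  # e.g. MC2, which has a down.snap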

Comments related to this report
matthew.evans@LIGO.ORG - 22:20, Tuesday 19 July 2016 (28504)CDS

Along the same lines, I made a script which should allow us to right-click on an EPICS field and ask that it be accepted into the currently loaded SDF file.

This script is based on "instafoton.py" in /opt/rtcds/userapps/trunk/cds/utilities, with some help from create_fe_sdf_source_file_list.py (/opt/rtcds/userapps/trunk/cds/h1/scripts/).  The idea is that it can be added to the MEDM drop-down menu like instafoton.  The script is instaSDF.py in /opt/rtcds/userapps/trunk/cds/utilities (also attached).

The script works by changing the snap file which is currently loaded in SDF (as reported by the SDF_LOADED EPICS record, e.g. safe.snap), and then asking SDF to reload. As of this writing, the script is "toothless" in that it does not take the final steps of replacing the existing snap file and reloading; the code required to do this is commented out.
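Conceptually the core operation is small: read the live value, rewrite that channel's line in the loaded snap file, and (in the real script) trigger the SDF reload. A hedged sketch of just the file edit — this is not the attached instaSDF.py, and the one-value-per-line "NAME N VALUE" snap format is an assumption:

    import epics  # pyepics

    def accept_setting(channel, snap_path):
        # Accept a channel's current live value into a loaded snap file.
        value = epics.caget(channel)
        with open(snap_path) as f:
            lines = f.readlines()
        with open(snap_path, 'w') as f:
            for line in lines:
                parts = line.split()
                if parts and parts[0] == channel:
                    f.write('%s 1 %s\n' % (channel, value))  # replace stored value
                else:
                    f.write(line)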

To do:

  1. have the CDS folks take a close look at instaSDF.py
  2. uncomment the active lines, back up the target snap file, and test (e.g., instaSDF.py H1:SUS-ETMX_L3_LOCK_P_GAIN)
  3. add an entry in the MEDM Execute menu (like instafoton)
Non-image files attached to this comment
betsy.weaver@LIGO.ORG - 09:19, Wednesday 20 July 2016 (28518)

While I still don't follow why we went from wanting more SDF files at various states, including an all-sacred SAFE.snap, to now just wanting one file, with JK's instruction I made more softlinks in the SUS burt files. This takes off where Sheila left off: for any SUS that was sitting on the OBSERVE file this morning during IFO DOWN, I set the safe.snap to be a softlink to the observe.snap. So, none of the file names shown on the SDF overview for the SUSes are literally correct -- everything is a softlink to some other file.

Someone else will have to suggest what the PI files should be softlinked to; I didn't touch those.

The following list is the state of the situation.  Good luck.

lrwxrwxrwx 1 controls     controls 67 Jan 13  2015 h1susauxasc0/h1susauxasc0epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxasc0_safe.snap
lrwxrwxrwx 1 controls     controls 67 Jan 13  2015 h1susauxb123/h1susauxb123epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxb123_safe.snap
lrwxrwxrwx 1 controls     controls 65 Jan 13  2015 h1susauxex/h1susauxexepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxex_safe.snap
lrwxrwxrwx 1 controls     controls 65 Jan 13  2015 h1susauxey/h1susauxeyepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxey_safe.snap
lrwxrwxrwx 1 controls     controls 65 Jan 13  2015 h1susauxh2/h1susauxh2epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxh2_safe.snap
lrwxrwxrwx 1 controls     controls 66 Jan 13  2015 h1susauxh34/h1susauxh34epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxh34_safe.snap
lrwxrwxrwx 1 controls     controls 66 Jan  9  2015 h1susauxh56/h1susauxh56epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxh56_safe.snap
lrwxrwxrwx 1 sheila.dwyer controls 62 Jul 19 19:00 h1susbs/h1susbsepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susbs_down.snap
lrwxrwxrwx 1 sheila.dwyer controls 64 Jul 19 19:22 h1susetmx/h1susetmxepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmx_down.snap
lrwxrwxrwx 1 controls     controls 66 Jul 27  2015 h1susetmxpi/h1susetmxpiepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmxpi_safe.snap
lrwxrwxrwx 1 sheila.dwyer controls 64 Jul 19 19:28 h1susetmy/h1susetmyepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmy_down.snap
lrwxrwxrwx 1 controls     controls 66 Jul 27  2015 h1susetmypi/h1susetmypiepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmypi_safe.snap
lrwxrwxrwx 1 betsy.weaver controls 67 Jul 20 09:02 h1sushtts/h1sushttsepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sushtts_observe.snap
lrwxrwxrwx 1 betsy.weaver controls 65 Jul 20 09:03 h1susim/h1susimepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susim_observe.snap
lrwxrwxrwx 1 controls     controls 65 May  2 16:22 h1susitmpi/h1susitmpiepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmpi_safe.snap
lrwxrwxrwx 1 sheila.dwyer controls 64 Jul 19 19:02 h1susitmx/h1susitmxepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmx_down.snap
lrwxrwxrwx 1 sheila.dwyer controls 64 Jul 19 18:59 h1susitmy/h1susitmyepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmy_down.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 09:05 h1susmc1/h1susmc1epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susmc1_observe.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:06 h1susmc2/h1susmc2epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susmc2_down.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 09:05 h1susmc3/h1susmc3epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susmc3_observe.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 08:55 h1susomc/h1susomcepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susomc_observe.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:19 h1suspr2/h1suspr2epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1suspr2_down.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:04 h1suspr3/h1suspr3epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1suspr3_down.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:03 h1susprm/h1susprmepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susprm_down.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 09:01 h1sussr2/h1sussr2epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sussr2_observe.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 08:59 h1sussr3/h1sussr3epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sussr3_observe.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:20 h1sussrm/h1sussrmepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sussrm_down.snap
lrwxrwxrwx 1 betsy.weaver controls 67 Jul 20 09:14 h1sustmsx/h1sustmsxepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sustmsx_observe.snap
lrwxrwxrwx 1 betsy.weaver controls 67 Jul 20 09:15 h1sustmsy/h1sustmsyepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sustmsy_observe.snap
 

jeffrey.kissel@LIGO.ORG - 10:53, Wednesday 20 July 2016 (28524)CDS, GRD, SEI, SYS
J. Kissel, B. Weaver, S. Dwyer

Note that this is a conscious paradigm shift regarding safe.snaps. 

Since all SUS control models are pointing to either down.snaps or OBSERVE.snaps, there will be requested output when the front end comes up from a reboot. HOWEVER, the SUS output will still be protected because the SUS USER model watchdog (based on the SEI watchdog) by design comes up tripped, preventing all output. It requires human intervention to be untripped. Even if for some reason this user watchdog fails, we still have the IOP software watchdog independently watching the SUS OSEMs, adding another layer of protection against any unlikely hardware damage.

I recommend that SEI do this as well, and then they can reduce their number of files that need maintaining to one.

Why, then, do we have some SUS point to OBSERVE.snap, and others point to down.snap?
Those suspensions which *have* a down are those manipulated by the ISC and DRMI guardians (i.e. MC2, PRM, PR3, SRM, SR2, and all of the BSC SUS besides the TMTS). Thus, there is quite a bit of difference between their nominal low noise settings (i.e. OBSERVE) and their "let's start the lock acquisition sequence" settings (i.e. down).

These down.snaps were created only for these suspensions in order to keep the scope limited enough that we accomplished what needed accomplishing when the settings for these suspensions were in question.

Thus, there remain OBSERVE.snaps for the SUS typically untouched by ISC (i.e. MC1, MC3, SR3, SR2, the IMs, OMs and RMs, the TMTS, and the OMC), because these were created for *every* front-end model prior to O1. Since we're more often in a low-noise-like state for these SUS than in the SAFE state, the OBSERVE files have been better maintained.

Note also that we're continuing to unmonitor channels and settings that are manipulated by guardian. Thus there should be a decreasing number of DIFFs between any of these models' "ground" state and their nominal low noise state.

This comes at a price, however: if guardian code is changed, then those SDF settings must be manually re-monitored, which is difficult at best to remember, especially with the new filter module monitoring system where we have individual control over each button in the bank. Further, if there are settings that are not monitored but are regularly changed by operators (like alignment sliders), or things that occasionally get tuned by scripts (like A2L DRIVEALIGN matrix elements, or Transmon QPD offsets), then they have a potential for coming up wrong. Thus, as part of "SDF reconciliation" before a planned reboot, we should look at all channel diffs, not just those that have been masked to be monitored.
H1 General
edmond.merilh@LIGO.ORG - posted 15:45, Tuesday 19 July 2016 (28502)
Shift Summary - Day

15:10 Christina bringing a load with the forklift

15:11 Interns out to Y End/MidStations to recover VME crates for Richard.

15:12 Peter reported that the AOM driver for the PSL ISS may have given out.

15:24 Paul to EY to change out a microphone box.

15:33 Betsy in and out of LVEA a few times staging equipment for viewport work.

15:34 Travis out to LVEA to start staging equipment for ITM cameras

15:35 Gerardo out to close GV 5 and 7 WP# 5988

15:37 Fil working over HAM3 for GIG-E camera install for BS WP# 6004

15:44 Someone here to service the candy machines

15:45 Christina and Karen out to clean on arms.

15:45 Kyle into LVEA

15:52 Dave McManus into LVEA  WP#6006

15:54 Peter in to PSL enclosure to search for spare AOM driver.

16:03 Interns out to X End/MidStations to recover VME crates for Richard.

16:18 Travis to begin install of ITM cameras. WP# 5988

16:26 Peter out of Enclosure until FE computers are back up

16:27 Fil out to End Stations and CER to move I/O chassis from temporary power to permanent power.

16:33 Interns done at out buildings. Headed into LVEA for VME crates.

16:34 Desk Delivery through front gate.

16:41 LN2 delivery arrived -Y arm

16:42 Jim B, Ansel et al into CER to begin Timing reprogramming.

16:46 Visitors for John Worden on site.

16:50 Richard into CER to look at camera stuff.

16:50 Hugh to End Stations for weekly HEPI fluid level checks.

16:54 Cintas on site to change out mats

16:57 entered FRS report (5900) regarding very low audio level coming from Gate Call Box

17:00 Dave B ready to restart PSL computers

17:08 Interns out of the LVEA

17:12 Kyle into LVEA to look for Ameristat

17:16 Kyle out!

17:36 Hugh done with weekly HEPI fluid checks

17:42 John taking guests on an LVEA tour

17:42 Karen into LVEA

17:46 Peter preparing to go back into the PSL enclosure to deal with the AOM

17:47 Travis et al are out of the LVEA for today. Reportedly 55% done.

17:48 Richard out of the CER.

17:51 Fil done with I/O chassis

17:53 Timing crew out of the CER

18:10 Fil out to floor to look for IO table patch panels

18:11 Dave and Ansel out to both Mid and End stations for Timing upgrade work.

18:12 Interns to outbuildings and floor again to complete the previous task (VME Crates)

18:14 Water delivery on site.

18:17 Jenne out to LVEA to assist Dave M with seismometers

18:28 Dave reporting from EY. Also,  Norco leaving site.

18:29 Gerardo and Chandra are opening GV5 and GV7

18:39 HEPI EY Diff Pressure alarm (RED)

18:41 GVs are OPEN

18:55 Jenne and David out of LVEA for lunch.

19:06 Karen out of the LVEA.

19:12 Gerardo out of the LVEA

20:02 Gerardo and John are going down Y arm, a little past Mid.

20:10 Fil out to pull cables at HAM1 and 2.

20:09 Chandra out to EY

20:16 Travis and Gerardo out to EY

20:53 Travis back from EY

20:54 Chandra and Gerardo to EX

21:19 Gerardo and Chandra back to corner

22:16 Dave at EY for reboot #3

22:44 Dave at EX for reboot #3

22:45 Handing off to Jeff

H1 PSL
peter.king@LIGO.ORG - posted 15:19, Tuesday 19 July 2016 (28500)
FSS & ISS current draw
Measured the current draw of the ISS AOM driver.
 - 980 mA at 23.54 V, with the ISS oscillating
 - 483 mA to 503 mA at 24 V, depending on the RF output

    For the FSS, 1-ohm resistors were soldered onto a DB37 breakout and the voltage
across them was measured.  Ideally the resistance would be lower but I didn't know
what current to expect and thought any voltage reading might be too small.
  * +24V, 89.5 mV and 82.5 mV for a total of 172 mA
  * -24V, -81.3 mV and -81.9 mV, for a total of 163.2 mA
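(Sanity check of the conversion, since across a 1-ohm sense resistor millivolts read directly as milliamps:

    plus_rail = 89.5 + 82.5    # mV -> mA on the +24 V rail, 172.0 mA
    minus_rail = 81.3 + 81.9   # mV -> mA on the -24 V rail, 163.2 mA

both matching the totals quoted above.)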
H1 PSL
peter.king@LIGO.ORG - posted 13:22, Tuesday 19 July 2016 - last comment - 15:42, Tuesday 19 July 2016(28496)
ISS RF
Measured the 80 MHz RF level at the output of the balun in the distribution panel in
the PSL rack.  This provides the local oscillator for the ISS AOM driver.  Originally
a 5 dB attenuator was at the output of the balun.  All RF powers were measured with
the Agilent RF power meter.

+13.0 dBm at the balun

+8.2 dBm at end of cable to the AOM driver with a 3 dB attenuator installed
+10.2 dBm at end of cable to the AOM driver with a 1 dB attenuator installed

    Left the 1 dB attenuator in place as this yields the right RF level for the AOM
driver input.
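
As a quick cross-check, the two measurements imply the same cable loss (simple arithmetic on the numbers above):

    balun = 13.0                  # dBm at the balun output
    measured = {3: 8.2, 1: 10.2}  # attenuator value (dB) -> dBm at the AOM driver
    for att, level in measured.items():
        print('%d dB pad: implied cable loss = %.1f dB' % (att, balun - att - level))
    # both cases give 1.8 dB, so the measurements are self-consistent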

    Anecdotally it seems to me that the RF level has been dropping over the past few
months.
Comments related to this report
peter.king@LIGO.ORG - 15:42, Tuesday 19 July 2016 (28501)
AOM driver RF output power versus modulation input voltage.
Images attached to this comment
H1 PSL
peter.king@LIGO.ORG - posted 13:06, Tuesday 19 July 2016 - last comment - 14:47, Tuesday 19 July 2016(28494)
FSS transfer functions
Attached are some FSS transfer functions, all with a common gain slider of 20 dB.  The input modecleaner
was locked for all these measurements.

C20F4TF.jpg: FAST gain 4 dB
C20F10TF.jpg: FAST gain 10 dB
C20F16TF.jpg: FAST gain 16 dB
C20F22TF.jpg: FAST gain 22 dB
Images attached to this report
Comments related to this report
peter.king@LIGO.ORG - 13:50, Tuesday 19 July 2016 (28497)
Attached are all the transfer functions on the one plot.  One can see that with the FAST gain
as high as 22 dB, the dip associated with the crossover between the PZT and EOM is noticeable.

    The raw ASCII data is attached: frequency, magnitude, phase, magnitude, phase .... etc.
for the 4 FAST gain settings.
Images attached to this comment
Non-image files attached to this comment
jenne.driggers@LIGO.ORG - 14:47, Tuesday 19 July 2016 (28499)

Is 0 dB = 0 dB, or is -10 dB = 0 dB? If the latter, can you please make it so 0 dB = 0 dB?

H1 ISC
sheila.dwyer@LIGO.ORG - posted 23:48, Monday 18 July 2016 - last comment - 14:57, Tuesday 19 July 2016(28480)
alignment change recovers decent recycling gain at 40 Watts

Sheila Jenne Matt Carl Jeff

Tonight we tried a repeat of what Kiwamu and I did last week, and we were able to restore a recycling gain of ~32 (according to the POP DC / IM4 channel; more like 36 according to the way we calculated it during O1) at 40 Watts. We moved PRM PIT 90 urad in the negative direction, and had to touch up SRM as we did that. We moved the spot on the POPX WFS enough that we railed the PZT, and the spot was off center by 0.5. Again the spot on POP A was mostly on just the bottom two quadrants. The power on the baffle PDs also decreased as we moved the alignment. We also tried Yaw, but this wasn't able to improve the recycling gain.

Since it seems like we might want to use picomotors on POP A, Jenne tried to repeat the PRM move at 2 Watts by moving PRM to match the witness sensors at 40 Watts; she could only go about halfway before we lost lock.

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 00:21, Tuesday 19 July 2016 (28482)

Stopping for the night so I can come in for Tues maint. in the morning, but not a lot more progress. 

This last lock, I engaged an extra offset of 0.4 in the PRC1 pit loop once we got to 40W.  This puts the PRM in the same place as Sheila mentioned earlier, and brings up the PRC gain.  I used a ramp of 100 seconds, which may have been a bit fast for the SOFT loops, but we held the lock. 

I then got bold and requested 45W (intending to stop there only momentarily on my way to 50W), but the ISS 3rd loop went unstable. Probably we need to reduce the gain, since we've got a bit more optical power, although I'm not totally sure why, since Kiwamu's measurement in alog 27940 shows that the loop is ultra stable. Probably we should measure the 3rd loop again at 40W.

sheila.dwyer@LIGO.ORG - 14:57, Tuesday 19 July 2016 (28498)

Attached are some plots showing what happened as we powered up and moved PRM last night, compared to a power-up from December.

In the first plot you can see that the recycling gain drops as we power up, with a similar trend in all three diodes that we use to monitor the power recycling gain. Last night's trends are in the left column; the trends from December are in the right column. Last night the ratio of IMC_PWR_IN_OUT to IM4_TRANS_SUM was pretty much constant, while in December it increased by 10%, which explains why the POP/IM4 power recycling gain monitor does not agree with the others in December. There is also an unknown overall gain change for IM4 sum between now and December.

The second row in the first attachment shows the baffle PD signals normalized to the input power (IMC_PWR_IN_OUT).

The second attachment shows the pitch oplevs (you can tell that CSOFT P changes as we move PRM) and QPDs during this change.

Images attached to this comment