H1 General (OpsInfo)
travis.sadecki@LIGO.ORG - posted 20:24, Friday 01 January 2016 (24624)
OPS Day mid-shift summary

Observing for 2 hours at 75-80 Mpc.  There seems to be a bit of excess noise below ~30 Hz in DARM which seems to correlate with some excess noise in PRCL and MICH.  Wind and seismic are both calm.

In getting back to Low Noise, I had to change the OMC PZT starting voltage in omcparams.py back to -24 from the -30 that Corey changed it to earlier today.  I did NOT commit the changes to SVN since they seem to be in flux.

H1 PSL
betsy.weaver@LIGO.ORG - posted 18:42, Friday 01 January 2016 - last comment - 23:40, Friday 01 January 2016(24622)
SDF PMC REF channel TEMPORARILY set to NOT MONITORED

Because the H1:PSL-PMC-REF slider was adjusted to land on a value with much more precision than the SDF file can handle, it was in alarm out to the 1e-16 digit, which could not be cleared with the ACCEPT button.  The correct way to clear this bug is to write the EPICS slider to a lower-precision value (truncate it to the 1e-8 digit or so; a short sketch follows the list below).  However, we are all the way back to Nominal Low Noise and didn't want to risk even this minute slider adjustment - SO, we set the channel to NOT MONITORED until the next lock loss, when we should:

 

- Set this channel (H1:PSL-PMC-REF) to be MONITORED again in SDF

- Type the SDF setpoint value (1.18615) into the slider and check that it is no longer in error on SDF.
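
As a reference for the truncation step, here is a minimal sketch (assuming the pyepics bindings are available on a control-room workstation; the channel name is as logged above and the 1e-8 rounding is illustrative, not a site standard):

  # Sketch only: read the slider value, round it to a precision SDF can
  # represent, and write it back so the SDF difference can be cleared/ACCEPTed.
  import epics

  CHANNEL = "H1:PSL-PMC-REF"          # channel name as logged; confirm before use

  value = epics.caget(CHANNEL)        # e.g. 1.1861500000000001 (excess precision)
  truncated = round(value, 8)         # keep ~1e-8 precision, per the note above
  epics.caput(CHANNEL, truncated)
  print("%s: %r -> %r" % (CHANNEL, value, truncated))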

Comments related to this report
betsy.weaver@LIGO.ORG - 18:43, Friday 01 January 2016 (24623)

Images attached to this comment
jason.oberling@LIGO.ORG - 23:40, Friday 01 January 2016 (24627)OpsInfo

This was actually a temporary change we had to make to help the PMC lock, which I had forgotten about.  This setting can be reverted to its original value of 1.35V once the channel is set to MONITORED again in SDF.

H1 General
travis.sadecki@LIGO.ORG - posted 18:30, Friday 01 January 2016 (24620)
Observing at 2:29 UTC

After a remarkably smooth recovery from the PSL incursion, we are back to Observing.

H1 PSL (DetChar, ISC, PSL)
jason.oberling@LIGO.ORG - posted 16:57, Friday 01 January 2016 (24619)
PSL FSS RefCav Alignment

J. Oberling, C. Gray

After this morning's NPRO shut-off, we noticed the FSS RefCav TPD was down to ~1V (the RefCav Refl spot showed that the pitch had moved significantly as well).  At the rate of decay (see attached 7-day trend), there was a good possibility of the TPD not surviving until maintenance Tuesday (1-5-2016), so we decided, with M. Landry's permission, to go in and adjust it.  In the interest of returning to operation quickly, we only adjusted the periscope mirrors to bring the TPD back up; no investigation as to the cause of the alignment shift was done.

We began by adjusting only pitch on the top periscope mirror.  This only got us back to about halfway, so I decided to include the bottom periscope mirror and walk the beam.  Walking the beam down got the TPD to ~1.47V.  We then did a small yaw tweak to the top periscope mirror only, which got us to ~1.51V.  We then measured the visibility of the cavity, turned the ISS back ON, and left the PSL enclosure.  Total time in the enclosure was ~30 minutes.

Images attached to this report
H1 PSL (PSL)
jason.oberling@LIGO.ORG - posted 16:48, Friday 01 January 2016 - last comment - 23:41, Friday 01 January 2016(24617)
PSL NPRO Turned Off

J. Oberling, C. Gray

At 21:01:51 UTC the PSL NPRO shut off (see attached trend of NPRO diode power, NPRO power, and PSL power watchdog), reason unknown.  The PSL watchdog tripped 3 seconds later at 21:01:54 UTC, so the NPRO definitely shut off first.  After Corey's phone call I drove to the site to reset the PSL.  The only thing I found wrong was that the NPRO power supply was ON but not sending power to the NPRO (key ON, power supply OFF).  The UPS the NPRO power supply is plugged into did not show an error, and there were no other obvious indications that something had gone wrong, so I turned the NPRO back ON.  It powered up without issue, and we turned the 35W FE laser on from the LDR.  After fiddling with the PMC and FSS (as usual after a laser shutdown), everything with the PSL was locked and operating properly.  It is still unknown why the NPRO shut off.  At this point I suspect a glitch with the UPS; we have seen this behavior before, where a glitch in the UPS causes the NPRO to shut off with no indication of what went wrong.

At this point we noticed the FSS TPD was down to ~1V, which is getting close to the point where ALS becomes unhappy.  A 7-day trend of the RefCav TPD is attached.  At that rate of decay the FSS might not have lasted until maintenance day on 1-5-2016, so we decided (with M. Landry's permission) to go in and do a quick adjustment of the RefCav alignment.  I will detail that in a follow-up alog.

Images attached to this report
Comments related to this report
thomas.vo@LIGO.ORG - 18:31, Friday 01 January 2016 (24621)

In order to get the PMC locked, we also tweaked H1:PSL-PMC_REF from 1.35V to 1.19V, which caused a notification in SDF.

jason.oberling@LIGO.ORG - 23:41, Friday 01 January 2016 (24628)

Thanks Thomas, I had forgotten we made that change.

LHO General
corey.gray@LIGO.ORG - posted 16:46, Friday 01 January 2016 (24618)
DAY Ops Summary

TITLE:  1/1 DAY Shift:  16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC

STATE of H1:   DOWN (PSL recovered, Initial Alignment by Travis)

Incoming Operator:   Travis

Support:  Chatted with TJ, Sheila on the phone, Thomas Vo helping with operator duties.

Quick Summary:

Nice first half of the shift; in the latter half of the shift, H1 went down due to the PSL NPRO power tripping.

Shift Activities:

 

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:39, Friday 01 January 2016 (24616)
CP3 Manual Overfill @ 00:03 UTC

After calling the control room I drove past CS @ 23:58 and arrived at Y-Mid station @ 00:01.
Started filling CP3 @ 00:03: opened the exhaust check valve bypass valve, then opened the LLCV bypass valve 1/2 turn and closed the flow 25 seconds later.  5 minutes later I closed the exhaust check valve bypass valve.

Started driving back from Y-Mid @ 00:12, arrived at the CS @ 00:16.

H1 FMP (FMP)
gregory.mendell@LIGO.ORG - posted 16:04, Friday 01 January 2016 - last comment - 16:20, Friday 01 January 2016(24614)
Daily fill of CP3 at the Y-mid station

At 23:58 UTC, Gerardo checked in about filling CP3 at the Y-mid station. He is going to Mid Y to fill the dewar.

Comments related to this report
travis.sadecki@LIGO.ORG - 16:20, Friday 01 January 2016 (24615)

Gerardo was done at 00:20 UTC.

H1 PSL
corey.gray@LIGO.ORG - posted 13:20, Friday 01 January 2016 (24613)
21:02UTC Lockloss: PSL/FSS-related?

H1 abruptly dropped out of lock.  I did not see any obvious reason why at first, but then, while waiting for Guardian to bring us back, I noticed that we had no light from the PSL.

IMC_LOCK Guardian is saying:  FSS Unlocked.  (Looking for instructions/alogs about how to address this.)

But I also see that we have 0 W output power for the laser.  Might go to the Laser Diode Room to see if there are any obvious messages.

There is no light on the PSL video screens.

Talking with Jason right now.

H1 AOS (DetChar, ISC)
joshua.smith@LIGO.ORG - posted 12:11, Friday 01 January 2016 - last comment - 08:39, Sunday 03 January 2016(24611)
RF Beats / Whistles seen at H1 on December 31

Happy New Year all!

H1 has historically not had RF beat / whistle problems as bad as L1's.  In fact, the last alog report is for data on October 30th.  But Dec 31st shows a high density of glitches above 100 Hz, densest above 500 Hz, which have the signature shape of RF beats we've seen before and are correlated with PRCL and SRCL, similar to one of the manifestations of whistles seen at LLO.

Note 1: we produce automatic omega scans for the loudest glitches that hveto finds, and these whistles are all very quiet.  If anyone wants to follow them up in more detail, you can get GPS times for the high-frequency, low-SNR triggers for Dec 31 correlations with PRCL and SRCL at those links.

Note 2: Dec 31 looks like the worst day, but there might have been some weak whistles occurring on other days too; we'd have to follow up some low-SNR triggers on those days (e.g. today, Jan 1, and the past few days).

Images attached to this report
Comments related to this report
joshua.smith@LIGO.ORG - 08:39, Sunday 03 January 2016 (24654)DetChar, ISC

Quick update: RF beats / whistles are still happening today, Jan 3.  The hveto page shows whistles in rounds 4 and 8, coincident this time with H1:ASC-REFL_B_RF45_Q_PIT_OUT_DQ (and not PRCL and SRCL as above, so a different line/VCO frequency must be at play).  They are still low SNR, but faintly visible in omega scans.  Some example omega scans are attached and linked here.  Text files of glitches coincident with ASC REFL B RF45 are here for round 4 and round 8.

https://ldas-jobs.ligo-wa.caltech.edu/~hveto/daily/201601/20160103/H1-omicron_BOTH-1135814417-28800-DARM/wscans_rd8/

Note: these whistles are not showing in PRCL or SRCL.
Images attached to this comment
H1 OpsInfo
corey.gray@LIGO.ORG - posted 11:47, Friday 01 January 2016 (24610)
RECENT --Operator Sticky Note-- UPDATES

Since one can miss recent changes to operating H1 (due to long locks keeping your hands off H1, being away between shifts....or maybe just not having a great memory, like me!), there are times when one might not be up-to-speed on the latest news, workarounds, and tricks for running H1.

This is why we have an Operator Sticky Notes wiki page.  During recent shifts, there were a few items/scenarios which were new to me.  To help reduce searching, I added several new items to the list; here are some of the newest ones (newest to oldest) I noticed/came across this week:

Please help to keep this page current with any new changes you come across.

H1 General (OpsInfo)
corey.gray@LIGO.ORG - posted 11:22, Friday 01 January 2016 - last comment - 12:13, Friday 01 January 2016(24609)
H1 Back To Observing (Post Quake)

As I walked in on shift, TJ was working on bringing H1 back.  I'm fairly certain he was just going to lock H1 back up without trying an alignment.  As Guardian was in the first few steps, TJ was touching up arm powers.  A pretty cool thing was seeing DRMI lock within seconds(!), and then Guardian continued taking H1 up, until...

Guardian Message:  "OMC not ready" In DC_READOUT_TRANSITION

I had heard of other operators having to deal with an issue of the OMC not locking over recent weeks, but I had not personally dealt with it.  So, when Guardian gave the message "OMC not ready" in DC_READOUT_TRANSITION, I had to do an alog search (and also left a voicemail with the on-call commissioner).  I eventually found some alogs which suggested some things to try, which are basically:

  1. On OMC_LOCK Guardian node, "READY_FOR_HANDOFF" will already be selected.  I re-selected  "READY_FOR_HANDOFF".  BUT, this didn't work.
  2. Then I took OMC_LOCK to Auto.  This just had Guardian repeatedly try (and fail) to lock the OMC by sweeping the OMC PZT to find the three high peaks (2 sideband, 1 carrier).  So this didn't work.
  3. I then found a TJ alog (& Kiwamu's earlier alog) which pointed to how to change the PZT sweep starting voltage.  Basically, I changed the PZT starting voltage from -24 to -30 in the parameter file (omcparams.py); these values seem to have been used a few times over the last few weeks (a sketch of this edit follows the list).  I saved the file and then hit LOAD on OMC_LOCK.  NOTE:  I did not know how to check this file into the SVN, so omcparams.py is not checked into SVN.
    • This worked!  And then ISC_LOCK immediately took over and continued on (paused at the usual Engage ISS).
    • Additional NOTE:  Since I was logged in as ops on the operator0 workstation, I was not able to edit the OMC_LOCK code.  So I went to another workstation, logged in as corey.gray, and then edited/saved the omcparams.py file.  Is there a reason why Guardian files are read-only when logged in as ops....oh, actually, it's probably because we want to keep track of who makes changes to files, yes?
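
For reference, the edit to omcparams.py would look roughly like the sketch below; the actual parameter name in that file is not reproduced in this alog, so PZT_SWEEP_START_V is a hypothetical stand-in:

  # omcparams.py (sketch -- hypothetical parameter name)
  # Starting voltage for the OMC PZT sweep that searches for the three high
  # peaks (2 sideband, 1 carrier).  Changed from -24 to -30 per the alogs above;
  # after saving, hit LOAD on OMC_LOCK so Guardian picks up the new value.
  PZT_SWEEP_START_V = -30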

Made it to NLN.  

Guardian Additional Notes:  (with TJ's help on the phone)

I did get an ISC_LOCK user message of:  "node OMC_LOCK:  STOLEN (by USER)".  This is due to my taking OMC_LOCK to Auto (#2 above).  To clear this message, one can go to ISC_LOCK, click MANUAL, and select INIT (this should take OMC_LOCK back to being managed by ISC_LOCK).

Additionally, I also happened to notice some red values for OMC_LOCK & IMC_LOCK, which were the *_LOCK_EXECTIME_LAST channels.  TJ said these are normally red, so I didn't worry about them.

FINALLY:  Back to Observing at 17:08, roughly 80 min after the EQ (could have been much quicker if I hadn't had the OMC issue!).  Had a few big glitches at the beginning of the lock, but this could have been related to my issues noted above (maybe?).  But over the last few hours, H1 has been running with a nice range around the usual 80 Mpc.

Comments related to this report
corey.gray@LIGO.ORG - 12:13, Friday 01 January 2016 (24612)OpsInfo

Sheila walked me through how to check the file I changed into the SVN.

Since I was logged in as ops on the operator0 workstation, I had to ssh into another computer:

  • ssh corey.gray@opsws1
  • [entered password]
  • userapps
  • cd omc/h1/guardian
  • svn ci -m "changed pzt voltage to -30 from -24"
H1 CDS
david.barker@LIGO.ORG - posted 10:33, Friday 01 January 2016 (24608)
NTP server issued a warning at 00:15 UTC, possible cause of conlog's problem?

The attached email shows the central NTP server reporting an NTP Stratum Change Alarm around the year change.  Maybe this is related to conlog's problem?  Further investigation is needed.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 10:16, Friday 01 January 2016 (24607)
Transition to DAY Shift Update

TITLE:  1/1 DAY Shift:  16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC

STATE of H1:   DOWN

Outgoing Operator:  TJ

Quick Summary:

Walked in to find H1 down, but TJ was taking it right back to locking....specifics on this in upcoming alog.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 09:44, Friday 01 January 2016 (24606)
CDS model restart report: Wednesday 30th and Thursday 31st December 2015

O1 days 104,105

No restarts reported for either day.

BTW: the DAQ has now been running for 31 days continuously.  Only h1broadcaster (reconfiguration) and h1nds1 (crash) have been restarted in this time period.  Both frame writers and trend writers have been stable.

H1 CDS
david.barker@LIGO.ORG - posted 19:00, Thursday 31 December 2015 - last comment - 11:33, Tuesday 05 January 2016(24595)
Conlog failed at UTC new year roll over

The conlog process on h1conlog1-master failed soon after the UTC new year. I'm pretty sure it did the same last year but I could not find an alog confirming this. I followed the wiki instructions on restarting the master process. I did initially try to mysqlcheck the databases, but after 40 minutes I abandoned that. I started the conlog process on the master and configured it for the channel list. After a couple of minutes all the channels were connected and the queue size went down to zero. H1 was out of lock at the time due to an earthquake.

For next year's occurrence, here is the log file error from this time around:

root@h1conlog1-master:~# grep conlog: /var/log/syslog

Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.

Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.

Comments related to this report
patrick.thomas@LIGO.ORG - 19:53, Thursday 31 December 2015 (24597)
This indicates that it tried to write an entry for H1:SUS-SR3_M1_DITHER_P_OFFSET twice with the same Unix time stamp of 1451606400.000453212. This corresponds to Fri, 01 Jan 2016 00:00:00 GMT. I'm guessing there was a leap second applied.
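
For reference, that conversion is easy to confirm with the Python standard library (integer-second part only, interpreted as UTC):

  from datetime import datetime

  # Integer-second part of the duplicated key's timestamp 1451606400.000453212
  print(datetime.utcfromtimestamp(1451606400))
  # prints: 2016-01-01 00:00:00  (i.e. the UTC new year roll-over)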
david.barker@LIGO.ORG - 09:42, Friday 01 January 2016 (24605)

Of course, there was no actual leap second scheduled for Dec 31 2015, so we need to take a closer look at what happened here.

patrick.thomas@LIGO.ORG - 11:33, Tuesday 05 January 2016 (24708)
The line just before the error reports the application of a leap second.  I'm not sure why since, as you say, none were scheduled.

Dec 31 15:59:59 h1conlog1-master kernel: [14099669.303998] Clock: inserting leap second 23:59:60 UTC
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.