LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:39, Friday 01 January 2016 (24616)
CP3 Manual Overfill @ 00:03 UTC

After calling the control room I drove past CS @ 23:58 and arrived at Y-Mid station @ 00:01.
Started filling CP3 @ 00:03: opened the exhaust check valve bypass valve, then opened the LLCV bypass valve 1/2 turn, and closed the flow 25 seconds later.  Five minutes later I closed the exhaust check valve bypass valve.

Started driving back from Y-Mid @ 00:12, arrived at the CS @ 00:16.

H1 FMP (FMP)
gregory.mendell@LIGO.ORG - posted 16:04, Friday 01 January 2016 - last comment - 16:20, Friday 01 January 2016(24614)
Daily fill of CP3 at the Y-mid station

At 23:58 UTC, Gerardo checked in about filling CP3 at the Y-mid station. He is going to Mid Y to fill the dewar.

Comments related to this report
travis.sadecki@LIGO.ORG - 16:20, Friday 01 January 2016 (24615)

Gerardo was done at 00:20 UTC.

H1 PSL
corey.gray@LIGO.ORG - posted 13:20, Friday 01 January 2016 (24613)
21:02 UTC Lockloss: PSL/FSS-related?

H1 abruptly dropped out of lock.  Did not see any obvious reason why at first, but then, while waiting for Guardian to bring us back, noticed that we have no light from the PSL.

IMC_LOCK Guardian is saying:  FSS Unlocked.  (Looking for instructions/alogs about how to address this.)

But I also see that we have 0 W output power for the laser.  Might go to the Laser Diode Room to see if there are any obvious messages.

There is no light on the PSL video screens.

Talking with Jason right now.

H1 AOS (DetChar, ISC)
joshua.smith@LIGO.ORG - posted 12:11, Friday 01 January 2016 - last comment - 08:39, Sunday 03 January 2016(24611)
RF Beats / Whistles seen at H1 on December 31

Happy New Year all!

H1 has historically not had RF beat / whistle problems as bad as L1's. In fact, the last alog report is for data on October 30th. But Dec 31st shows a high density of glitches above 100 Hz, densest above 500 Hz, which have the signature shape of RF beats we've seen before and are correlated with PRCL and SRCL, similar to one of the manifestations of whistles seen at LLO.

Note 1: we produce auto omega scans for the loudest glitches that hveto finds, and these whistles are all very quiet. If anyone wants to follow these up in more detail, you can get GPS times for the high-frequency, low-SNR triggers for Dec 31 correlations with PRCL and SRCL at those links.

Note 2: Dec 31 looks like the worst day, but there might have been some weak whistles occurring on other days too; we'd have to follow up some low-SNR triggers on those days (e.g. today, Jan 1, and the past few days).
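
For anyone following up those trigger times offline, here is a minimal sketch of an omega-style scan, assuming a working gwpy installation and NDS2 data access; the channel name and GPS time below are placeholders, not actual trigger values:

    from gwpy.timeseries import TimeSeries

    # Placeholder GPS time; substitute a trigger time from the hveto text files.
    gps = 1135555217
    # Fetch 64 s of h(t) around the trigger (an LSC channel such as PRCL or SRCL also works).
    data = TimeSeries.get('H1:GDS-CALIB_STRAIN', gps - 32, gps + 32)
    # Q-transform ("omega scan") zoomed to a few seconds around the trigger.
    qspec = data.q_transform(outseg=(gps - 2, gps + 2))
    plot = qspec.plot()
    plot.savefig('whistle_qscan.png')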

Images attached to this report
Comments related to this report
joshua.smith@LIGO.ORG - 08:39, Sunday 03 January 2016 (24654)DetChar, ISC

Quick update: RF beats / whistles are still happening today, Jan 3. The hveto page shows whistles in rounds 4 and 8, coincident this time with H1:ASC-REFL_B_RF45_Q_PIT_OUT_DQ (and not PRCL and SRCL as above, so a different line/VCO frequency must be at play). They are still low SNR, but faintly visible in omega scans. Some example omega scans are attached and linked here. Text files of glitches coincident with ASC REFL B RF45 are here for round 4 and round 8.

https://ldas-jobs.ligo-wa.caltech.edu/~hveto/daily/201601/20160103/H1-omicron_BOTH-1135814417-28800-DARM/wscans_rd8/

Note: these are not showing in PRCL or SRCL.
Images attached to this comment
H1 OpsInfo
corey.gray@LIGO.ORG - posted 11:47, Friday 01 January 2016 (24610)
RECENT --Operator Sticky Note-- UPDATES

Since one can miss recent changes to operating H1 (due to long locks keeping your hands off the instrument, being away between shifts, or maybe just not having a great memory, like me!), there can be times when one is not up-to-speed on the latest news, workarounds, and tricks for running H1.

This is why we have an Operator Sticky Notes wiki page.  During recent shifts, there were a few items/scenarios which were new to me.  To help reduce searching, I added several new items to the list; here are some of the newest ones (newest to oldest) I noticed/came across this week:

Please help to keep this page current with any new changes you come across.

H1 General (OpsInfo)
corey.gray@LIGO.ORG - posted 11:22, Friday 01 January 2016 - last comment - 12:13, Friday 01 January 2016(24609)
H1 Back To Observing (Post Quake)

As I walked in on shift, TJ was working on bringing H1 back.  I'm fairly certain he was just going for locking H1 back up without trying an alignment.  As Guardian was in the first few steps, TJ was touching up arm powers.  A pretty cool thing was seeing DRMI lock within seconds(!), and then Guardian continued taking H1 up, until...

Guardian Message:  "OMC not ready" in DC_READOUT_TRANSITION

I had heard of other operators dealing with an issue of the OMC not locking over recent weeks, but I had not personally dealt with it.  So, when Guardian gave the message "OMC not ready" in DC_READOUT_TRANSITION, I had to do an alog search (and also left a voicemail with the on-call commissioner).  I eventually found some alogs which gave some things to try, which are basically:

  1. On OMC_LOCK Guardian node, "READY_FOR_HANDOFF" will already be selected.  I re-selected  "READY_FOR_HANDOFF".  BUT, this didn't work.
  2. Then I took OMC_LOCK to Auto.  This just had Guardian repeatedly try (and fail) to lock the OMC by sweeping the OMC PZT to find the three high peaks (2 sideband, 1 carrier).  So this didn't work.
  3. I then found a TJ alog (& Kiwamu's earlier alog), which pointed to how to change the PZT sweep starting voltage.  Basically I changed the PZT starting voltage from -24 to -30 (it seems these values have been used a few times over the last few weeks) in the parameter file (omcparams.py); a sketch of the kind of edit appears just after this list.  I saved this file, and then hit LOAD on OMC_LOCK.  NOTE:  I did not know how to check this file into the SVN, so omcparams.py is not checked into SVN.
    • This worked!  And then ISC_LOCK immediately took over and continued on (paused at the usual Engage ISS).
    • Additional NOTE:  Since I was logged in as ops on the operator0 workstation, I was not able to edit the OMC_LOCK code.  So I went to another workstation, logged in as corey.gray, and then edited/saved the omcparams.py file.  Is there a reason why Guardian files are read-only when logged in as ops?  Oh, actually, it's probably because we want to keep track of who makes changes to files, yes?

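For reference, here is a sketch of the kind of one-line edit this was; the parameter name is illustrative only (check omcparams.py itself for the real one):

    # omcparams.py -- illustrative excerpt, not the actual file contents
    # Starting voltage for the OMC PZT sweep that hunts for the carrier/sideband peaks.
    # pzt_sweep_start_volts = -24   # previous value
    pzt_sweep_start_volts = -30     # widened so the sweep can find the three peaks
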
Made it to NLN.  

Guardian Additional Notes (with TJ's help on the phone):

Did get an ISC_LOCK user message of:  "node OMC_LOCK:  STOLEN (by USER)".  This is due to my taking OMC_LOCK to Auto (#2 above).  To clear this message, one can go to ISC_LOCK, click MANUAL, and select INIT.  (This should take OMC_LOCK back to being managed by ISC_LOCK.)

Additionally, I happened to notice some red values for OMC_LOCK & IMC_LOCK, which were the *_LOCK_EXECTIME_LAST channels.  TJ said these are normally red, so I didn't worry about them.

FINALLY:  Back to Observing at 17:08, roughly 80 min after the EQ (could have been much quicker if I hadn't had the OMC issue!).  Had a few big glitches at the beginning of the lock, but this could have been related to my issues noted above (maybe?).  Over the last few hours, H1 has been running with a nice range around the usual 80 Mpc.

Comments related to this report
corey.gray@LIGO.ORG - 12:13, Friday 01 January 2016 (24612)OpsInfo

Sheila walked me through how to check the file I changed into the SVN.

Since I was logged in as ops on the operator0 workstation, I had to ssh into another computer:

  • ssh corey.gray@opsws1
  • [entered password]
  • userapps
  • cd omc/h1/guardian
  • svn ci -m "changed pzt voltage to -30 from -24"
H1 CDS
david.barker@LIGO.ORG - posted 10:33, Friday 01 January 2016 (24608)
NTP server issued a warning at 00:15 UTC, possible cause of conlog's problem?

The attached email shows the central NTP server reporting an NTP Stratum Change Alarm around the year change. Maybe this is related to conlog's problem? Further investigation is needed.
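
A quick way to poll the NTP server's reported stratum and leap indicator is a few lines of Python with the ntplib package; the hostname below is a placeholder:

    import ntplib

    c = ntplib.NTPClient()
    r = c.request('ntp-server.example', version=3)   # placeholder hostname
    print('stratum:', r.stratum)   # the stratum change is what the alarm reported
    print('leap:', r.leap)         # leap indicator; nonzero means a leap second is pending or applied
    print('offset:', r.offset)     # offset relative to the local clock, in seconds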

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 10:16, Friday 01 January 2016 (24607)
Transition to DAY Shift Update

TITLE:  1/1 DAY Shift:  16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC

STATE of H1:   DOWN

Outgoing Operator:  TJ

Quick Summary:

Walked in to find H1 down, but TJ was taking it right back to locking... specifics on this in an upcoming alog.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 09:44, Friday 01 January 2016 (24606)
CDS model restart report Wednesday/Thursday, 30th/31st December 2015

O1 days 104,105

No restarts reported for either day.

BTW: the DAQ has now been running for 31 days continuously. Only h1broadcaster (reconfiguration) and h1nds1 (crash) have been restarted in this time period. Both frame writers and trend writers have been stable.

H1 DCS (DCS)
gregory.mendell@LIGO.ORG - posted 08:48, Friday 01 January 2016 (24604)
Visually inspected the DCS (LDAS) cluster

While we were down I walked to the onsite warehouse and visually inspected the DCS (LDAS) cluster. Monitoring of the computers and HVAC showed all things were fine, and I've now verified this visually. It was also good to see that the HVAC system is running fine in this cold (18 °F) weather.

LHO General
thomas.shaffer@LIGO.ORG - posted 08:00, Friday 01 January 2016 (24603)
Ops Owl Shift Summary
LHO General
thomas.shaffer@LIGO.ORG - posted 07:49, Friday 01 January 2016 (24602)
Lockloss 15:47 UTC

Most likely due to:

  1. M5.8, 80 km ENE of Raoul Island, New Zealand, 2016-01-01 15:02:15 UTC, depth 38.3 km
LHO General
thomas.shaffer@LIGO.ORG - posted 00:09, Friday 01 January 2016 (24601)
Ops Owl Shift Transition
H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:07, Friday 01 January 2016 (24600)
Ops Evening Shift Summary
Activity Log: All Times in UTC (PT)

00:00 (16:00) Take over from Corey
00:44 (16:44) GRB alert. Spoke to LLO. In a one-hour hold for collecting background statistical data
01:30 (17:30) Dave B. called to say Conlog was down due to UTC year switch
01:44 (17:44) End one hour GRB hold
01:50 (17:50) Power reset Video2 to free up hung FOMs 
02:40 (18:40) Lockloss – Due to mag 6.3 EQ south of Australia
02:55 (18:55) Dave B. has restarted Conlog  
03:07 (19:07) Seismic up to 0.3um/s – Put IFO into down until the earth settles down a bit 
05:10 (21:10) GRB Alert – LHO & LLO down due to EQ – Ignored alert
06:23 (22:23) IFO relocked and in Observing mode
08:00 (00:00) Turn over to TJ
  
End of Shift Summary:

Title: 12/31/2015, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)

Support:  Mike, 
 
Incoming Operator: TJ

Shift Detail Summary: Lost lock about 3 hours into the shift due to a 6.3 mag EQ. After the seismic noise quieted, did an initial alignment and relocked the IFO with relatively little trouble. IFO is currently in Observing mode with 21.8 W of power and 79 Mpc of range. Environmental conditions are generally good.

HAPPY 2016!  
H1 General
jeffrey.bartlett@LIGO.ORG - posted 22:39, Thursday 31 December 2015 (24599)
Back in Observing
   After the seismic 0.03-0.1 Hz band dropped below 0.1 um/s, ran through initial alignment.
NOTE: Had no problems with the Guardian INPUT_ALIGN process completing successfully. Was able to fine-tune MICH_DARK by hand.

   Relocked the IFO on the first try with no problems and was in Observing mode at 06:23 (22:23). Power is at 21.8 W and range is at 82 Mpc.
NOTE: This is the first relock I've done in the past couple of weeks where FIND_IR DIFF completed under Guardian control. On all previous locks I had to tune IR DIFF by hand.

    
 
H1 General
jeffrey.bartlett@LIGO.ORG - posted 20:15, Thursday 31 December 2015 (24598)
Ops Evening Mid-Shift Summary
   IFO is down due to the EQ reported in an earlier post. Seismic was dropping nicely and was just about at a point where it might be possible to relock, when it took another leg up and is back around 1.0 um/s. Will give it some more time before doing an initial alignment.
H1 CDS
david.barker@LIGO.ORG - posted 19:00, Thursday 31 December 2015 - last comment - 11:33, Tuesday 05 January 2016(24595)
Conlog failed at UTC new year roll over

The conlog process on h1conlog1-master failed soon after the UTC new year. I'm pretty sure it did the same last year but I could not find an alog confirming this. I followed the wiki instructions on restarting the master process. I did initially try to mysqlcheck the databases, but after 40 minutes I abandoned that. I started the conlog process on the master and configured it for the channel list. After a couple of minutes all the channels were connected and the queue size went down to zero. H1 was out of lock at the time due to an earthquake.

For next year's occurrence, here is the log file error this time around:

root@h1conlog1-master:~# grep conlog: /var/log/syslog

Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.

Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.

Comments related to this report
patrick.thomas@LIGO.ORG - 19:53, Thursday 31 December 2015 (24597)
This indicates that it tried to write an entry for H1:SUS-SR3_M1_DITHER_P_OFFSET twice with the same Unix time stamp of 1451606400.000453212. This corresponds to Fri, 01 Jan 2016 00:00:00 GMT. I'm guessing there was a leap second applied.
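
A quick sanity check of that timestamp (plain Python 3):

    from datetime import datetime, timezone

    ns = 1451606400000453212   # nanosecond timestamp from the duplicate-entry error
    print(datetime.fromtimestamp(ns / 1e9, tz=timezone.utc))
    # prints 2016-01-01 00:00:00.000453+00:00, i.e. the UTC new year
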
david.barker@LIGO.ORG - 09:42, Friday 01 January 2016 (24605)

Of course, there was no actual leap second scheduled for Dec 31 2015, so we need to take a closer look at what happened here.

patrick.thomas@LIGO.ORG - 11:33, Tuesday 05 January 2016 (24708)
The line just before the error reports the application of a leap second. I'm not sure why, since, as you say, none were scheduled.

Dec 31 15:59:59 h1conlog1-master kernel: [14099669.303998] Clock: inserting leap second 23:59:60 UTC
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.