Reports until 09:35, Tuesday 18 August 2015
H1 SUS
betsy.weaver@LIGO.ORG - posted 09:35, Tuesday 18 August 2015 - last comment - 10:13, Tuesday 18 August 2015(20625)
SUS Guardian issue

This morning, after we restarted the ITM and ETM SUS models, the guardians failed when attempting to bring these SUSes back up.  All 4 QUAD SUS guardians failed at the same place when requesting to go from SAFE to ALIGNED:

 

2015-08-18T15:47:25.63598 SUS_ETMX [ENABLE_ALL.main] ezca: H1:SUS-ETMX_L2_TEST_L => ON: OUTPUT
2015-08-18T15:47:25.88943 SUS_ETMX [ENABLE_ALL.main] ezca: H1:SUS-ETMX_L2_TEST_P => ON: OUTPUT
2015-08-18T15:47:26.14962 SUS_ETMX [ENABLE_ALL.main] ezca: H1:SUS-ETMX_L2_TEST_Y => ON: OUTPUT
2015-08-18T15:47:26.40621 SUS_ETMX [ENABLE_ALL.main] ezca: H1:SUS-ETMX_L3_TEST_L => ON: OUTPUT
2015-08-18T15:47:26.65966 SUS_ETMX [ENABLE_ALL.main] ezca: H1:SUS-ETMX_L3_TEST_P => ON: OUTPUT
2015-08-18T15:47:26.91415 SUS_ETMX [ENABLE_ALL.main] ezca: H1:SUS-ETMX_L3_TEST_Y => ON: OUTPUT
2015-08-18T15:47:27.16605 SUS_ETMX [ENABLE_ALL.main] ezca: H1:SUS-ETMX_L3_TEST_BIAS => ON: OUTPUT
2015-08-18T15:47:27.16619 SUS_ETMX [ENABLE_ALL.main] Turning on OL damping outputs
2015-08-18T15:47:27.18074 SUS_ETMX W: Traceback (most recent call last):
2015-08-18T15:47:27.18076   File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/worker.py", line 459, in run
2015-08-18T15:47:27.18077     retval = statefunc()
2015-08-18T15:47:27.18077   File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/state.py", line 240, in __call__
2015-08-18T15:47:27.18078     main_return = self.func.__call__(state_obj, *args, **kwargs)
2015-08-18T15:47:27.18078   File "/opt/rtcds/userapps/release/sus/h1/guardian/SUS.py", line 456, in main
2015-08-18T15:47:27.18079     sus.olDampOutputSwitchWrite('ON')
2015-08-18T15:47:27.18079 NameError: global name 'sus' is not defined
2015-08-18T15:47:27.18080
2015-08-18T15:47:27.18762 SUS_ETMX ERROR in state ENABLE_ALL: see log for more info (LOAD to reset)

 

Reloading the GRD didn't work.  In order to get around this we put the GRD to MANUAL and enabled the output switches by hand, then selected ALIGNING and ALIGNED. 
 

Comments related to this report
jenne.driggers@LIGO.ORG - 10:13, Tuesday 18 August 2015 (20627)

This is now fixed. 

An object called "susobj" is defined early in the guardian, which contains the name of the optic.  In the new state ENABLE_ALL, the optical lever turn-on was mistakenly calling this object "sus" rather than "susobj".  3 extra characters, one click of the Load button, and ITMX went nicely from DAMPED to ALIGNED.
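The failure mode is the standard Python one: a state's main() referenced a global name that was never bound in the module. A minimal sketch of the bug class (names here are illustrative stand-ins, not the actual SUS.py code):

```python
# Minimal sketch of the guardian failure mode: a state function
# references a global name that was never defined.  Names are
# hypothetical, not the actual SUS.py code.

class SusObj(object):
    """Stand-in for the per-optic suspension helper object."""
    def olDampOutputSwitchWrite(self, value):
        return "OL damping outputs %s" % value

susobj = SusObj()   # defined early in the guardian module

def main_buggy():
    # Typo: 'sus' was never defined, only 'susobj' -> NameError at runtime,
    # which the guardian worker catches and reports as an ERROR state.
    return sus.olDampOutputSwitchWrite('ON')

def main_fixed():
    return susobj.olDampOutputSwitchWrite('ON')

try:
    main_buggy()
except NameError as e:
    print("guardian would log: %s" % e)

print(main_fixed())
```

Because the name is only resolved when the state actually runs, the bug sits silently until ENABLE_ALL executes, which is why it appeared on all four QUADs at the same place.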

I have loaded the guardian code on ITMY as well, but *not* on the ETMs, since charge measurements are in progress.

UPDATE: ETMX and ETMY guardians were loaded, after charge measurements were complete.

H1 SYS
betsy.weaver@LIGO.ORG - posted 09:32, Tuesday 18 August 2015 (20624)
TUESDAY MAINTENANCE STATUS - DAQ boot failed

After the model restarts from this morning, we attempted a DAQ restart, which failed; the DAQ machine is unresponsive to login.  Waiting for CDS...

 

H1 General (SEI)
travis.sadecki@LIGO.ORG - posted 09:32, Tuesday 18 August 2015 (20623)
HEPI accumulated WD cleared

HAM5, ITMX, and BS had 1214, 44, and 253 accumulated saturations, respectively.  They have been cleared.

H1 TCS (SYS, TCS)
duncan.macleod@LIGO.ORG - posted 08:59, Tuesday 18 August 2015 (20622)
Connected TCS-ODC to ODC-MASTER via IPC

[Dave B, Nutsinee, Duncan M]

I have added an IPC receiver to the h1odcmaster model to connect the TCS-ODC from h1tcscs to ODC-MASTER. This model has been rebuilt and restarted, but this work should not be considered complete until after a restart of the DAQ.

This change, along with associated updates to the SDF/safe.snap were recorded as r11318.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 08:14, Tuesday 18 August 2015 (20621)
Owl Shift Summary
LVEA: Laser Hazard
IFO: UnLocked
Observation Bit: Commissioning  

All Times in UTC
07:00 Take over from Corey G.
07:00 IFO unlocked – Commissioners looking into problem going to high power 
07:14 Locked at DC_READOUT 
08:36 Go to INCREASE_POWER
08:39 Lockloss – Commissioners working on going to high power problem
09:19 Trying to relock at DC_READOUT – No lock at LOCK_DRMI_1F
11:50 Running initial alignment due to DRMI and bad looking spot on AS AIR
12:00 LLO starting maintenance at or around 12:00UTC
12:12 Peter – Starting PMC adjustments 
14:17 Christina & Karen – Moving stuff into the High-Bay & OSB Receiving
14:50 Christina & Karen – Finished moving stuff around the OSB
14:52 Christina & Karen – Going to End-Y for cleaning
15:00 Turn over to Travis
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 08:12, Tuesday 18 August 2015 (20620)
PMC Temperature/Length Adjustments
A quick entry to capture things as they are.  A more detailed entry will follow.

The quest was to try to bring back the alignment of the pre-modecleaner by adjusting the temperature
of the body.  In short, it didn't work.  The longer story is that the jury is still out (I'd say).

The body temperature of the pre-modecleaner was ~304.5K when the alignment was notionally good.  The elevated
temperature of the pre-modecleaner was ~305K.  Increasing the voltage on the PZT, by increasing the HVREF
signal, results in the temperature of the pre-modecleaner decreasing.  As expected.  Not surprisingly this
takes quite some time.

The fastest way to bring the temperature down was to turn off the temperature loop, set the HVREF to a low
value, re-acquire lock for the PMC, then increase the HVREF to its maximum value and turn the heater
loop on with the HEATER OFFSET set to zero.

Attached is a plot of the pitch of a pre-modecleaner transmitted beam as monitored by the quadrant photodiode
in the ISS photodiode box.  Clearly both pitch and yaw are affected by the temperature change.  Previously
it wasn't so obvious that yaw was affected but that might be due to the time scale of the temperature changes.

Also attached are plots of the transmitted and reflected output of the pre-modecleaner, and the output power
of the laser during the same time.  There is a slight increase in the transmitted power but nothing to write
home about.

In short it's undoubtedly faster to go in and re-align. 
Images attached to this report
H1 INJ (INJ)
daniel.hoak@LIGO.ORG - posted 02:17, Tuesday 18 August 2015 - last comment - 14:01, Tuesday 18 August 2015(20618)
SR3 M1 T2 FOO BAR

Evan, Dan, Jeff

The SR3 M1 T2 actuator is...unwell.

We started to have trouble staying locked after power up about four hours ago.  At first it looked like an ASC problem in the SRC, and since the AS36 loops are the scapegoat du jour, Evan started to retune the SRC1 loop.  He observed a recurring transient in the beam spots at the AS port, and saw the same thing in the SR3 oplevs.  We turned off the SR3 cage servo and the kicks kept on coming.

Eventually we looked at the SR3 M1 voltmons and found the first plot attached, which is for eight hours.  The T2 OSEM and voltmon started to get ratty about three and a half hours ago.  The noise is a series of slow transients with a several-second rise and a steep decay.  See Fig. 2.

We're pretty convinced that it's an actuator problem, something between the DAC and the voltmon readback in the coil driver box.  We have unlocked the IFO and turned off the SR3 top-stage damping and we still see the pattern of noise.  We shall leave the pleasure of power-cycling the AI chassis and coil driver to others.

 

As an amuse bouche for the maintenance day team, we also discovered that the SR3 M1 T1 voltmon is complete nonsense.  The T1 voltmon time series is a collection of step functions and spikes, 100x larger than the other M1 voltmons.

 

The SR3 M1 damping has been turned back on.

Images attached to this report
Comments related to this report
richard.mccarthy@LIGO.ORG - 12:18, Tuesday 18 August 2015 (20639)
It would appear that the glitches come and go, but tend not to stay gone for long, which slows down troubleshooting.  In an effort to get things moving I replaced the Triple Top driver S1001082 with S1001086 as a first attempt at fixing this problem.  Due to a lot of activity (it is Tuesday) it was hard to tell whether this fixed the problem.  As can be seen from Dave B.'s report we also restarted the IOP to make sure a calibration occurred.  We had what appeared to be glitches in the signals that turned out to be SEI trips, so again not the easiest item to follow up on.  After the system settled down I have not seen any excess noise on T2 for over an hour, but will continue to monitor.
keita.kawabe@LIGO.ORG - 14:01, Tuesday 18 August 2015 (20643)

Seems like we've been good for the past 2.5 hours or so.

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 23:59, Monday 17 August 2015 (20610)
Ops EVE Summary

8/17 EVE Shift:  23:00-7:00UTC (16:00-0:00PDT)

Shift started with Travis taking H1 GRD-ISC_LOCK to:  NOISE_TUNINGS

H1 Guardian Operations for Tonight:

There are two states of Guardian where H1 is having issues:

  1. INCREASE_POWER:  Due to PMC drift, H1 gets stuck checking power.  To get past this state, we have been hitting MANUAL (on the Lock State medm), selecting COIL_DRIVERS, then hitting AUTO and going to NOISE_TUNINGS.
  2. The next problem spot appears to be engaging the ISS 2nd loop.  So, once in NOISE_TUNINGS, hit MANUAL and select NOMINAL_LOW_NOISE.

Other Activities:  UTC (PDT)

H1 CDS (AOS, CDS, DAQ, DetChar, ISC, PEM, PSL, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 21:56, Monday 17 August 2015 (20616)
8/17 Maintenance Day / Re-locking Team Plan
J. Kissel, J. Oberling, B. Weaver

Up to bat for the Recovery Team tomorrow:
Operator: T. Sadecki
DetEng: B. Weaver, J. Oberling
Commissioning: J. Driggers

Attached is tomorrow's Maintenance Plan. The major activities (i.e. those with the greatest impact) will be 
- an adjustment of the PSL's PMC Temperature to try to restore the full power output, and
- moving the violin mode monitors out of the QUAD models and into the OAF model to try to reduce the computation turn-around time for the ETMs and mitigate timing overages with the slower front ends.

Full List (In Chronological Order):
1st Wave (as soon as Peter arrives in the morning)
- Adjust temperature / alignment of PMC remotely from control room

2nd Wave (~8:00 am)
- Monitor PMC temperature / alignment of PMC; if the remote change doesn't have the desired effect, then PSL incursion.
- Commissioning HEPI HAM1 inertial isolation
- Remove violin mode damping from QUADs, Install into OAF
- Updates to ODC MASTER model to include/update TCS
- New power supply on corner-station TCS Hartmann WFS
- New power supply on corner-station PEM electronics

3rd Wave (~9:00a)
- Add all newly requested channels to GDS broadcaster, low-latency frames
- Total GDS package upgrade
- Charge measurements on ETMs
- Watch for Beckhoff crash at EX

The hope is to begin recovery by 10a, such that we have a full IFO back up and running by 12p PT! This looks to be a pretty light maintenance day (finally!), but we'll see how it goes.
Images attached to this report
Non-image files attached to this report
H1 CAL
kiwamu.izumi@LIGO.ORG - posted 21:46, Monday 17 August 2015 (20617)
Semi automation of CAL CS maintenance

I wrote python scripts that copy the ETMY suspension digital filters and turn on the right combination of filters in the simulated ETMY in CAL CS.

The scripts are now in the following svn locations. It would be ideal if anyone who makes a modification runs the following scripts to update the CAL CS suspension filters.

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/CALCS/copySus2Cal.py

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/CALCS/setCalcsEtmy.py

For the time being, they are written in a hardcoded way, so they do the copy-and-paste operations only for the SUS ETMY filters and do not even look at the other quad suspensions. The first script uses the python foton bindings to copy the foton filters from H1SUSETMY.txt over to H1CALCS.txt. The second script uses nds2 to check the filter settings at a past point in time and copies the settings to the CAL CS filters. Right now the code does not look beautiful at all because I use the sfm option of the command-line cdsutils. I am hoping that I can replace them with fancier functions at some point. For now, they should be good enough. I have tested each script several times this evening and they seemed to function as intended.
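The "copy filter sections between files" step can be illustrated at the text level. This is a hedged sketch only: the real copySus2Cal.py uses the python foton bindings, and the `# MODULE <name>` section format below is an invented stand-in, not the actual foton .txt format.

```python
# Illustrative sketch of copying named filter-module sections from a
# source filter file into a destination file.  The section markers and
# file contents here are invented for demonstration.

def parse_sections(text):
    """Split text into {module_name: [body lines]}."""
    sections, name = {}, None
    for line in text.splitlines():
        if line.startswith("# MODULE "):
            name = line.split()[2]
            sections[name] = []
        elif name is not None:
            sections[name].append(line)
    return sections

def copy_modules(src_text, dst_text, modules):
    """Return dst_text with the named module bodies replaced by src's."""
    src = parse_sections(src_text)
    out, skip = [], False
    for line in dst_text.splitlines():
        if line.startswith("# MODULE "):
            name = line.split()[2]
            skip = name in modules and name in src
            out.append(line)
            if skip:
                out.extend(src[name])   # substitute the source body
        elif not skip:
            out.append(line)            # keep untouched modules as-is
    return "\n".join(out)

sus = "# MODULE L1_DAMP\nzpk([0],[1],2)\n# MODULE L2_DAMP\nzpk([3],[4],5)"
cal = "# MODULE L1_DAMP\nold\n# MODULE OTHER\nkeep"
print(copy_modules(sus, cal, ["L1_DAMP"]))
```

The real scripts additionally have to carry over switch settings (which the second script recovers from past data via nds2), which plain text copying does not capture.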

H1 CAL (CAL)
darkhan.tuyenbayev@LIGO.ORG - posted 21:33, Monday 17 August 2015 - last comment - 21:42, Monday 17 August 2015(20614)
DARM open-loop TF measurement

JeffreyK, SudarshanK, DarkhanT

Today we took a DARM open-loop transfer function measurement using a drive twice as strong at higher frequencies (> 1000 Hz) compared to the measurement taken last night (see LHO alog #20585). We will need to bump up the drive level a little bit more at higher frequencies to get better coherence in future measurements.

We also took a PCALY sweep with the same frequency vector, started 10-15 minutes after the start of the DARM sweep.

Since these two measurements were taken close in time and within the same lock stretch we assume that IFO parameters were the same in both measurements. We will try to use both sweeps to estimate DARM parameters.

DARM measurement XML file was committed to SVN:

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Measurements/DARMOLGTFs/2015-08-17_H1_DARM_OLGTF.xml

PCALX and PCALY measurements' XML files were committed to SVN:

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Measurements/PCAL/2015-08-17_PCALX2DARMTF_logscale.xml

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Measurements/PCAL/2015-08-17_PCALY2DARMTF_logscale.xml

Images attached to this report
Non-image files attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 21:42, Monday 17 August 2015 (20615)

We did a follow-up measurement down to ~25 Hz with an adjusted envelope that gave us slightly higher coherence at high frequencies, [600 .. 900] Hz.

This measurement XML file was committed to SVN:

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Measurements/DARMOLGTFs/2015-08-17_H1_DARM_OLGTF_downTo25Hz.xml

The adjusted envelope, with references from both DARM TF measurements, was saved in a template:

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Measurements/DARMOLGTFs/2015-08-17_H1_DARM_OLGTFtemplate.xml

Images attached to this comment
Non-image files attached to this comment
H1 SYS (GRD)
jameson.rollins@LIGO.ORG - posted 18:13, Monday 17 August 2015 - last comment - 11:19, Tuesday 18 August 2015(20613)
H1 now in OBSERVATION MODE

H1 now in OBSERVATION MODE

As per LIGO-M1500250, H1 has now been put into "OBSERVATION MODE".  This means:

The ODC-OPERATOR_OBSERVATION_READY bit, which is set to "Undisturbed" by the operator on the GUARD_OVERVIEW screen, will now be automatically unset by the Guardian IFO top node if any of the monitored guardian nodes drop from OK==True.

The most likely ways that this can happen are:

Operators must be aware that the INTENT bit must be manually reset after any of these changes.

The list of Guardian nodes being monitored is stored in:

USERAPPS/sys/h1/guardian/IFO_NODE_LIST.py

The current monitored node list is:

ALIGN_IFO
ALS_COMM
ALS_DIFF
ALS_XARM
ALS_YARM
HPI_BS
HPI_ETMX
HPI_ETMY
HPI_HAM1
HPI_HAM2
HPI_HAM3
HPI_HAM4
HPI_HAM5
HPI_HAM6
HPI_ITMX
HPI_ITMY
IMC_LOCK
ISC_DRMI
ISC_LOCK
ISI_BS_ST1
ISI_BS_ST2
ISI_ETMX_ST1
ISI_ETMX_ST2
ISI_ETMY_ST1
ISI_ETMY_ST2
ISI_HAM2
ISI_HAM3
ISI_HAM4
ISI_HAM5
ISI_HAM6
ISI_ITMX_ST1
ISI_ITMX_ST2
ISI_ITMY_ST1
ISI_ITMY_ST2
OMC_LOCK
SEI_BS
SEI_ETMX
SEI_ETMY
SEI_HAM2
SEI_HAM3
SEI_HAM4
SEI_HAM5
SEI_HAM6
SEI_ITMX
SEI_ITMY
SR3_CAGE_SERVO
SUS_BS
SUS_ETMX
SUS_ETMY
SUS_IM1
SUS_IM2
SUS_IM3
SUS_IM4
SUS_ITMX
SUS_ITMY
SUS_MC1
SUS_MC2
SUS_MC3
SUS_OM1
SUS_OM2
SUS_OM3
SUS_OMC
SUS_PR2
SUS_PR3
SUS_PRM
SUS_RM1
SUS_RM2
SUS_SR2
SUS_SR3
SUS_SRM
SUS_TMSX
SUS_TMSY
TCS_ITMX
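The monitoring logic described above is simple in outline: the readiness bit is only ever cleared automatically, never set. A hedged sketch of that rule (hypothetical helper names, not the actual IFO top-node code):

```python
# Sketch of the IFO top-node behavior described above: if any monitored
# guardian node drops from OK == True, the operator-readiness bit is
# automatically unset.  Names and structure are illustrative only.

NODE_LIST = ['ISC_LOCK', 'IMC_LOCK', 'SUS_ETMX']   # abbreviated list

def check_nodes(node_ok, ready_bit):
    """Return the new value of the readiness bit.

    node_ok   -- dict mapping node name -> bool (the node's OK flag)
    ready_bit -- current value of the operator-set readiness bit
    """
    if not all(node_ok.get(n, False) for n in NODE_LIST):
        return False        # automatically unset
    return ready_bit        # never set automatically, only preserved

# One node dropping OK clears the bit; the operator must re-set it.
print(check_nodes({'ISC_LOCK': True, 'IMC_LOCK': True,
                   'SUS_ETMX': False}, True))
```

The asymmetry is the point of the note that follows: after any such drop, the INTENT bit stays down until an operator manually resets it.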
Comments related to this report
vernon.sandberg@LIGO.ORG - 11:19, Tuesday 18 August 2015 (20634)

Note to Operators: the ODC-OPERATOR_OBSERVATION_READY bit, which is controlled by the button labeled "undisturbed" on the GUARD_OVERVIEW.adl screen, is distinct from the H1:ODC-OBSERVATORY_MODE channel, which is set from the OPS_OBSERVATORY_MODE.adl screen. ODC-OBSERVATORY_MODE describes the activity (wind, preventative maintenance, commissioning, etc.) and is used to generate our time-spent pie charts.  These are summarized in the DetChar summary pages at:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20150817/lock/operating_mode/

The Operators' priority is the GUARD_OVERVIEW "undisturbed" button, which signals readiness for scientifically valid observing. The LHO operating mode can be set to "OBSERVING" (on the OPS-OBSERVATORY_MODE.adl screen) after pressing the "undisturbed" button.  After a lock loss, the operator should select the appropriate LHO operating mode activity.


 

Images attached to this comment
H1 ISC
kiwamu.izumi@LIGO.ORG - posted 18:09, Monday 17 August 2015 - last comment - 03:16, Tuesday 18 August 2015(20612)
ETMY ESD elliptic roll off filter added

Sheila, Kiwamu

We have added an elliptic low pass in ETMY_L3 in order to reduce the number of saturation events.

Today we noticed that ETMY frequently saturated at the bottom stage. Looking at the spectra and time series, we found that high-frequency components above 1 kHz were the culprit. This is related to the work Evan et al. did last night (alog 20585) trying to get a better phase margin. We decided to install a roll-off to mitigate the issue. We put in an elliptic filter whose cutoff is at 950 Hz, with 20 dB rejection in the stop band and 3 dB ripple in the pass band. Since the filter banks in ETMY_L3_LOCK were full, we put it in DRIVEALIGN instead. It is now in FM7, and ISC_LOCK turns it on in the ETMY_TRANSITION state. This cost 2.6 deg of phase at 40 Hz, which should be OK according to Evan's open-loop measurement.

As a result, it reduced the RMS DAC counts by a factor of 2 to 3, which seems to have helped reduce the saturation events so far. See the attached. We could have lowered the cutoff frequency further, but since the RMS is going to be limited by frequency components below a couple of Hz anyway, we leave it as it is.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 03:16, Tuesday 18 August 2015 (20619)

We did some more DARM filter cleanup:

  • The 200 Hz pole in LSC-DARM suscomp was moved to 1000 Hz.
  • In EX L3 lock, there is now a 200 Hz / 1000 Hz p/z. It is turned on from ALS DIFF onwards.
  • In EY L3 lock, the 1000 Hz / 200 Hz p/z was removed.
  • The above ELP950 was removed from the L3 drivealign and put into L3 lock.
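The phase cost of these pole/zero shuffles near the DARM unity-gain region can be sanity-checked with a back-of-envelope calculation; the frequencies below are taken from the entries above, but this is a sketch, not the actual foton filters:

```python
# Back-of-envelope phase check for a single pole/zero pair near the
# DARM unity-gain frequency.  Frequencies from the filter changes
# above; illustrative only, not the installed foton filters.
import math
import cmath

def pz_phase_deg(f, f_zero, f_pole):
    """Phase (degrees) at frequency f of H = (1 + jf/f_zero) / (1 + jf/f_pole)."""
    h = (1 + 1j * f / f_zero) / (1 + 1j * f / f_pole)
    return math.degrees(cmath.phase(h))

# A 200 Hz pole / 1000 Hz zero pair, evaluated at 40 Hz:
print("%.2f deg" % pz_phase_deg(40.0, 1000.0, 200.0))
```

A pair like this costs roughly 9 degrees at 40 Hz, which illustrates why moving poles upward in frequency (200 Hz to 1000 Hz in suscomp) buys back phase margin there.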
H1 CDS
david.barker@LIGO.ORG - posted 17:42, Monday 17 August 2015 (20611)
conlog crashed on Saturday, again

Conlog stopped running at 10am PDT Saturday 15th:

Aug 15 10:01:21 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Out of range value for column 'value' at row 1: Error code: 1264: SQLState: 22003: Exiting.

Aug 15 10:01:21 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
 
Here are the logs from the previous crash logged in FRS3433  on Saturday August 8th:
 
Aug  8 12:25:48 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Out of range value for column 'value' at row 1: Error code: 1264: SQLState: 22003: Exiting.
 
Aug  8 12:25:49 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
 
I restarted conlog using a subset of instructions from the wikipage
 
1. Check the conlog master database:
 
> ssh cdsadmin@h1conlog1-master
cdsadmin@h1conlog1-master: mysqlcheck -u root -p --check --all-databases
Note: this takes 31 minutes to complete
 
2. Check the status of the replication processes on the master:
 
> ssh cdsadmin@h1conlog1-master
cdsadmin@h1conlog1-master: mysql -u root -p
mysql> SHOW PROCESSLIST\G
'state' should report 'Has sent all binlog to slave; waiting for binlog to be updated'.
mysql> quit
 
3. Start the Conlog process on the master:
 
cdsadmin@h1conlog1-master: sudo -b -E -u conlog /opt/conlog/bin/linux-x86_64/conlog
 
4. Set the process variable list:
 
cdsadmin@h1conlog1-master: sudo /opt/conlog/bin/linux-x86_64/conlog_admin use /ligo/lho/data/conlog/h1/input_pv_list/pv_list.txt
Note: this takes about 2 minutes for the queue size to come down to zero and the unmonitored number to come to zero
 
At the end of the process all 95444 channels are being acquired by conlog. The DAQ EDCU connected number incremented to 24159 as the 4 conlog channels were connected to the EDCU.
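The restart sequence above lends itself to a small wrapper script. A hedged sketch follows, dry-run by default since the hosts, mysql password prompts, and conlog paths are site-specific; the commands echoed are the ones from the steps above:

```shell
#!/bin/sh
# Sketch of the conlog restart sequence as a dry-run-by-default script.
# Paths are taken from the log entry above; set DRY_RUN=0 only on the
# conlog master host.  Interactive steps (ssh, mysql checks) are left out.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Check the conlog master database (takes ~31 minutes).
run mysqlcheck -u root -p --check --all-databases

# 3. Start the conlog process on the master.
run sudo -b -E -u conlog /opt/conlog/bin/linux-x86_64/conlog

# 4. Set the process variable list (queue drains over ~2 minutes).
run sudo /opt/conlog/bin/linux-x86_64/conlog_admin use \
    /ligo/lho/data/conlog/h1/input_pv_list/pv_list.txt
```

Step 2 (verifying replication with `SHOW PROCESSLIST\G`) is deliberately omitted here because it requires an interactive mysql session.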
 
 
H1 CDS
david.barker@LIGO.ORG - posted 17:20, Monday 17 August 2015 (20609)
Frame writer 1 restart due to LDAS activities

The restart reporter for Sunday shows a h1fw1 restart:

model restarts logged for Sun 16/Aug/2015
2015_08_16 17:28 h1fw1

This is the first unexpected restart of either frame writer since the DAQ reconfiguration made several weeks ago. This restart is likely attributable to LDAS starting the disk archiver at that time, which intensively scanned the frame writers' disk systems. We would expect h1fw1 to be more susceptible to crashing when this is happening, since it is writing the larger commissioning frame.

H1 CDS (CDS)
jonathan.hanks@LIGO.ORG - posted 17:03, Monday 17 August 2015 (20608)
Login problems resolved for CDS

Remote logins to LHO CDS work again.  It  was an issue with a security patch.  A long term solution will be worked on.  FRS3467 has been closed.

H1 AOS
betsy.weaver@LIGO.ORG - posted 16:33, Monday 17 August 2015 (20606)
LOAD COEFF buttons hit

There were quite a few partially loaded filter banks listed on the CDS Filter DAQ Status screen.  After looking through the alog, and verifying what filters are in play compared to the archived files, I hit the LOAD button and cleared the "MODIFIED" state on:

H1SUSSRM

H1SUSETMY

H1LSC

H1CALCS

 

Most were just a filter or two that were in development over the weekend. 

As well, we can hit LOAD COEFF on H1SUSITMX, H1SUSITMY, and H1OMC, where I verified that the modifications logged on the CDS Filter DAQ Status screen are indeed the only filters which have been commissioned and loaded locally at the bank.

H1 CDS
david.barker@LIGO.ORG - posted 11:40, Monday 17 August 2015 - last comment - 16:59, Monday 17 August 2015(20597)
remote log into CDS system not functioning this morning

We are aware of problems with the remote 2FA Yubikey login system, and we are investigating. A ticket has been opened: FRS3467.

Comments related to this report
david.barker@LIGO.ORG - 13:59, Monday 17 August 2015 (20600)

Update: Jonathan's email explaining the situation:

There are problems remotely logging into LHO CDS via lhocds.ligo-wa.caltech.edu right now.  It is being looked at, but a solution is not yet known.

It appears that a security update has changed how authentication works in the ssh server, and it is no longer allowing multiple passes/layers of authentication.  So your initial password (i.e. your LIGO.ORG password) is being checked by the LIGO.ORG authentication infrastructure and then passed on to the token system (without the chance for a token to be prompted for).  As your LIGO.ORG password is not the same as the output of your token (yubikey), the authentication fails.

However this only happens most of the time.  Some of the time you do get prompted for both passwords (LIGO.ORG and OTP/token/yubikey) and it works.

Unfortunately we do not have a proper fix at this time.  Until such a time as we do, please try again.
---
Jonathan Hanks
CDS Software Engineer
LIGO Hanford Observatory

jonathan.hanks@LIGO.ORG - 16:59, Monday 17 August 2015 (20607)CDS

Logins are now working.
