WP6438 Remove h1tw0 from DAQ EDCU
Dave:
Due to its extended downtime, h1tw0 was removed from the DAQ EDCU for now to GREEN-up the EDCU.
MR Vacuum Beckhoff Change
WP6431 Patrick, Dave:
The h0vacmr Beckhoff EPICS code was changed for IP5. H0EDCU_VAC.ini was modified accordingly, as was h0/target/h0vacmr/autoBurt.req.
The EX change is delayed to next week.
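For reference, a quick way to sanity-check an EDCU .ini edit like this is to parse out the channel sections and look for duplicates before the next DAQ restart. A minimal sketch using Python's configparser; the file fragment and channel names below are made up for illustration and are not the actual H0EDCU_VAC.ini contents:

```python
import configparser

def edcu_channels(ini_text):
    """Return channel names (section headers) from EDCU-style .ini text.

    strict=True makes configparser raise DuplicateSectionError if the
    same channel appears twice.
    """
    cfg = configparser.ConfigParser(strict=True, allow_no_value=True)
    cfg.read_string(ini_text)
    # A [default] section holds global settings, not a channel.
    return [s for s in cfg.sections() if s.lower() != 'default']

# Hypothetical fragment in the general style of an EDCU channel list
sample = """
[H0:VAC-MR_IP5_PRESSURE_TORR]
[H0:VAC-MR_IP5_VOLTS]
"""
print(edcu_channels(sample))  # → ['H0:VAC-MR_IP5_PRESSURE_TORR', 'H0:VAC-MR_IP5_VOLTS']
```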
WP6415 Install new FMCS computer in MSR
Richard, Carlos:
The new FMCS controller rack-mount computer (fmcs-compass) is being installed in the MSR.
DAQ Restart
Dave:
After running for 43 days, the DAQ was restarted at 12:29PST. This was a clean restart.
TJ reported that the Guardian DIAG_MAIN node processing time improved significantly after h1nds0 was restarted. Trends show this happened on the last h1nds0 restart as well, with the processing time gradually increasing thereafter.
The loop time (exec time) for DIAG_MAIN has been higher than usual for the past few weeks. The normal is around 2-3 s, depending on what conditions are met. Today I noticed that the exec time on DIAG_MAIN dropped back to its usual 2-3 s after the DAQ restart. I checked with Dave and nothing new had been introduced on NDS0, so on his suggestion I looked back a few weeks. The second attached shot clearly shows where the restart happened on the 6th of this month. The first shot is from today's restart.
The loop time is mostly determined by how long the handful of NDS calls take. So presumably it is these calls that change after a DAQ restart, but why?
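One way to test that hypothesis would be to wrap each NDS call in a timer and compare the accumulated wall-clock time per call against the total exec time before and after a DAQ restart. A minimal, self-contained sketch; fetch_trend and the channel names are placeholders, not the actual DIAG_MAIN code:

```python
import time
from functools import wraps

call_times = {}

def timed(name):
    """Decorator: accumulate wall-clock time spent in a named call."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                call_times[name] = call_times.get(name, 0.0) + time.perf_counter() - t0
        return wrapper
    return deco

# Hypothetical stand-in for an NDS fetch inside the DIAG_MAIN loop
@timed('nds_fetch')
def fetch_trend(channel):
    time.sleep(0.01)  # placeholder for the real network round trip
    return 0.0

for chan in ['H1:CHAN_A', 'H1:CHAN_B']:
    fetch_trend(chan)
print(call_times['nds_fetch'] >= 0.02)  # True: two ~10 ms calls accumulated
```

Comparing such a per-call breakdown across a restart would show directly whether the NDS round trips are what slow down over time.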
Jason O, Ed Merilh
We did a final sweep of the LVEA.
WP 6431: I have updated the PLC code on h0vacmr to change the IP5 controller from a MultiVac to a Gamma controller. The CP dewar at end X was being filled today, so I am holding off on updating h0vacex until next week.
There seem to be some new spots on the PRM baffle: I posted the first picture a couple of months ago; the second is from this morning's lock. This too suggests possible compensation plate changes and potential increases in scattering noise in the input arm.
I undamped ITMs and CPs, and measured the free swinging spectra (brown CPX, pink CPY).
All of the peaks appear heavily damped for CPX, which is probably a sign of rubbing/touching.
I moved CPX up by giving it a V offset of 200000 counts and a P offset of -5000 counts, and things got better (green). The seismic level differed between the green and brown traces, but it's clear that some of the peaks were recovered. It's not quite like CPY, though, and it's not clear whether CPX is better or worse.
Anyway, if it has been rubbing, it's possible that it was pushed to some odd angle by a large EQ or something and has stayed there since.
Jeff is taking TF measurements to confirm if things look good both for IX and IY.
(Note: The reason ITMY V and R are elevated at high frequency is a noisy RT BOSEM. Betsy confirmed that this has been a problem for a long time.)
Noting that HAM6 pressure is plateauing above 1e-7 Torr.
Note: There was a PSL trip last night & also ISS electronics work this morning for Maintenance.
Laser Status:
SysStat is good
Front End Power is 34.11 W (should be around 30 W)
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0.0 days, 0.0 hr 10.0 minutes (should be days/weeks)
Reflected power is 17.05 W and PowerSum = 76.71 W.
FSS:
It has been locked for 0.0 days 0.0 hr and 10.0 min (should be days/weeks)
TPD[V] = 1.157V (min 0.9V)
ISS:
The diffracted power is around 4.2% (should be 3-5%)
Last saturation event was 0.0 days 0.0 hours and 9.0 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is "noted" as being high in our PSL Report script. Attached is a 1-month trend of PMC Refl.
This closes FAMIS 7421.
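For context, the check that flags PMC reflected power is presumably comparing it against the total. A hypothetical sketch of that kind of threshold test, using the numbers from the report above; the 20% threshold is illustrative and is not the value the real PSL Report script uses:

```python
def check_pmc_refl(refl_w, power_sum_w, max_frac=0.2):
    """Flag PMC reflected power when it exceeds a fraction of PowerSum.

    max_frac=0.2 is an illustrative threshold, not the value the
    actual PSL Report script uses.
    """
    frac = refl_w / power_sum_w
    status = 'HIGH' if frac > max_frac else 'OK'
    return frac, status

# Numbers from the report above: 17.05 W reflected, PowerSum 76.71 W
frac, status = check_pmc_refl(17.05, 76.71)
print(round(frac, 3), status)  # → 0.222 HIGH
```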
Work covered by permit 6436 is complete. This was to replace the chassis power regulator board in the ISS AA chassis. Nothing "wrong" appeared with the board. The positive voltage regulator (LM2941CT) tab had some discolouration but it was not the one that failed. The negative voltage regulator (LM2991CT) tab looked just fine. There was a small amount of lint stuck to the negative voltage regulator tab but no signs of shorting or other tell-tale signs.
EY Pressure looks like it dropped a tiny bit on 1-11-2017, but I don't see any other issues.
This closes FAMIS 4528.
I have turned off the heat in Zone 3A in the LVEA.
We have increased fan flows in the corner in an attempt to stabilize temperatures. The 4 fans are now set to ~11000 cfm.
Highlights from my data quality shift last weekend (12-15 January 2017) at Hanford:
Full notes may be found here: https://wiki.ligo.org/DetChar/DataQuality/DQShiftH120170111
Lowered LLCV settings on both CP3 & CP4
CP3 from 17% to 15%
CP4 from 34% to 33%
Exhaust temps were lower than ambient, and exhaust pressures were above zero.
TITLE: 01/17 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.57 μm/s
QUICK SUMMARY: Freezing rain on the way in. Maintenance has already begun.
The IMC hasn't locked since the lockloss. Following the troubleshooting wiki instructions, I attempted to use Dataviewer to look at trends of various IMC-related channels, but it failed no matter which data rate setting I chose (see screenshot of the error). With no data to point me to what is wrong, I cleared the IMC WFS as a Hail Mary. This didn't help. I'm not sure what to do next here.
Using TimeMachine as a last resort, I also set the IMC PZTs back to values from a previous lock stretch 12 hours ago. This also did not help.
Turns out this was because some MC2 OSEMs were railed: I apparently forgot to take ISC_LOCK to DOWN before going back to INITIAL_ALIGNMENT. The IMC is locked again now.
Dataviewer has been fixed. The operating system type wasn't being determined properly, so paths to programs were incorrect. Operators should log out of the workstation and log back in.
PSL tripped at 06:15 (22:15) with a Head 1-4 Flow Sensor error. Jason recovered PSL. I reset the Noise Eater.
Jeff filed FRS 7115 for this trip.
ISC_LOCK at DOWN and observatory mode in corrective maintenance upon arrival. The HAM6 ISI is tripped. I will attempt to lock, but given the alogs from last night it sounds like I may need help from a commissioner. Running 'ops_auto_alog -t Day' reported an error:

patrick.thomas@operator0:~$ ops_auto_alog.py -t Day
Traceback (most recent call last):
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 300, in <module>
    alog.main(Transition, shift)
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 218, in main
    operator = self.get_oper_w_date('{day}-{month_name}'.format(day=lday, month_name=self.all_months[lmonth]), 'Owl')
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 130, in get_oper_w_date
    date_ln = self.get_date_linum(date) - 1
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
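The traceback suggests get_date_linum returned None (the date string wasn't found in the shift list) and the caller subtracted 1 from it unguarded. A sketch of the failure mode and an explicit guard; the function bodies are illustrative, not the actual ops_auto_alog.py code:

```python
def get_date_linum(lines, date):
    """Return the 1-based line number of the line starting with `date`,
    or None when the date is absent (the behavior the traceback implies)."""
    for linum, line in enumerate(lines, start=1):
        if line.startswith(date):
            return linum
    return None

def get_oper_line(lines, date):
    """Guarded lookup: the original code did `get_date_linum(date) - 1`,
    which raises TypeError when the lookup returns None."""
    linum = get_date_linum(lines, date)
    if linum is None:
        raise ValueError('date %r not found in shift list' % date)
    return linum - 1

# Hypothetical shift list entries for illustration
shifts = ['16-January Owl travis', '17-January Owl travis']
print(get_oper_line(shifts, '17-January'))  # → 1
```

With the guard in place a missing date produces a readable ValueError instead of the TypeError above.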
17:59 UTC Sheila, Jenne, Keita, Evan G. and Heather (new fellow) in control room.
The ops_auto_alog.py error seems to be an issue with only the ops account; I can get it to work under other accounts on both the operator0 machine and opsws12. Jim Batch mentioned some issues with the ops account this morning, so they may be related.
The EDCU is now GREEN; operators should investigate if it turns RED. I've also blanked out the TW0 slot so we don't have any INV white boxes.