Moved the barrel of HEPI fluid (thanks Mark & Tyler) to EndX in prep for the filter change, so I topped up the EndY fluid before the barrel was gone.
HEPI & ISI are now fully isolated without issue.
TVo, Patrick: An MEDM screen for the RS232 readout of the TCS chillers has been added to the TCS sitemap. Top right TCS on sitemap -> 'CO2X CHILLERS' or 'CO2Y CHILLERS' in light blue.
The supply toggles were still 'On', so this is in response to a VE glitch.
The EY vacuum rack was rebooted last Tuesday for a newly installed gauge; this caused the high voltage to trip off, because HV is interlocked with the PT-425 pressure gauge.
We believe the 16-1/2" (C1) and new 2.75" fiber feedthrough conflats (D4 2 & 3, see D1002877) are now metal to metal. Ready for leak check later.
To get a head start on tomorrow's reboots, I rebooted h1susey early. After having to power-cycle the IO chassis to regain connection with the computer, the code is now waiting for h1seib3 to restart, so I'm leaving it down for tonight. BTW, I restarted the models on h1seiey and h1iscey after they were Dolphin-glitched.
https://services.ligo-la.caltech.edu/FRS/show_bug.cgi?id=6031
https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?docid=E1600252&version=
We forgot about this modification that was done at LLO; therefore our OMC DCPD dark noise (only one stage of whitening, whitening gain = 0 dB, high-Z) is still dominated by ADC noise below ~60 Hz. See attached and compare with the LLO plot in https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=27556.
This modification will repurpose the second stage of the OMC DCPD whitening as anti-whitening, so that stages 1+2 together act as a low-frequency whitening filter zp([1,500];[10,50]) without amplifying the violin modes in analog. This way we can enable all stages to get more whitening at low frequency without worrying about the violin modes saturating the ADC.
(After the modification, CAL group should make a careful analog measurement.)
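As a sanity check on the shape described above, the combined stage 1+2 response zp([1,500];[10,50]) can be sketched with scipy. This is a hypothetical illustration, not the actual filter-board measurement: zeros/poles are taken in Hz as quoted in the log, and unity high-frequency gain is assumed.

```python
import numpy as np
from scipy import signal

# Combined stage-1+2 whitening sketch: zeros at 1 Hz and 500 Hz,
# poles at 10 Hz and 50 Hz (converted to rad/s for an analog filter).
zeros = -2 * np.pi * np.array([1.0, 500.0])
poles = -2 * np.pi * np.array([10.0, 50.0])
k = 1.0  # assumed unity gain at high frequency

f = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])  # Hz, spot-check points
w = 2 * np.pi * f
_, h = signal.freqs_zpk(zeros, poles, k, worN=w)
mag_db = 20 * np.log10(np.abs(h))
for fi, m in zip(f, mag_db):
    print(f"{fi:7.1f} Hz : {m:6.1f} dB")
```

The printout shows roughly 0 dB at DC and above ~1 kHz, with about +20 dB of boost in the 10-50 Hz band: low-frequency whitening, flat again by the violin-mode region near 500 Hz.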
FAMIS 7487 Laser Status:
- SysStat is good
- Front End Power is 0.01658 W (should be around 30 W)
- HPO Output Power is -0.03979 W
- Front End Watch is RED; HPO Watch is RED
- PMC: It has been locked 0 days, 0 hr, 0 minutes (should be days/weeks). Reflected power = 0.02814 W, Transmitted power = -0.02408 W, PowerSum = 0.004063 W
- FSS: It has been locked for 0 days, 0 hr, 0 min (should be days/weeks). TPD[V] = 0.06819 V (min 0.9 V)
- ISS: The diffracted power is around 1.9% (should be 3-5%). Last saturation event was 0 days, 1 hour, 7 minutes ago (should be days/weeks)
Possible issues: Front End Power is low, FSS TPD is low, ISS diffracted power is low, LRA out of range; see SYSSTAT.adl.
Table work related to 70W install is still ongoing.
The ETMY M0 alignment sliders were changed on April 16, misaligning the optical levers. Jeff has returned the alignment sliders to their pre-April 16 values, re-centering the optical levers. Screenshots of the optical lever and M0 OPTICALIGN trends for the last 10 days, and the current (good) slider values, are attached.
The access system at EX was taken down on Friday to allow the APS contractor to continue with their work. The entry door to the change room is now unlocked.
Chandra, Dave:
While Chandra is working on CP4, and while the MX-X1 cold cathode is trying to reacquire, I have bypassed these channels from sending cell phone texts (emails continue to be sent).
Bypass will expire:
Mon Apr 23 16:40:52 PDT 2018
For channel(s):
H0:VAC-MX_X1_PT343B_PRESS_TORR
H0:VAC-MY_CP4_TE253A_REGEN_TEMP_DEGC
We just lost H0:VAC-MX_X5_PT346B_PRESS_TORR, so I am adding that to the list.
Jonathan, Dave:
Around 21:30 PDT last night (Sunday), h1boot froze up, presumably due to the 208.5+ day bug, though the console message looks different. Every front-end model eventually stopped processing. The front ends failed at various times, so some showed IPC errors while others froze "green".
We reset h1boot via its front-panel reset button. An FSCK file scan was forced since the system had been running in excess of 232 days (boot servers can run far past 208 days before having a problem).
All frozen models came back with no restarts needed.
We are back to the weekend status: the h1seib3 and h1susex computers are down due to the 208.5+ day bug.
CDS front end model status:
h1suspr2: awgtpman process has failed
h1susomc: DAC-KILL active (user watchdog?)
h1iopseiex: SWWD trip due to loss of h1iopsusex, DAC-KILLs active
h1susex: h1iopsusex, h1susetmx, h1sustmsx, h1susetmxpi: all down due to cpu lockup
h1seib3: h1iopseib3, h1isiitmx, h1hpiitmx: all down due to cpu lockup
h1pemmy: h1ioppemmy, h1pemmy: computer powered down for overtemp protection due to CP4 bake-out
I've restarted the awgtpman process for h1suspr2.
I have verified that all the front-end models continued to run overnight while h1boot was down, and their DAQ data was good. Only EPICS channel access to the front-end IOCs was unavailable. The attached dataviewer minute-trend plot for the past 24 hours shows two LSC channels: one comes directly from the front end over MX, the other is acquired via EPICS Channel Access using the EDCU. The EDCU signal is zero during the downtime; the direct signal is unaffected.
The lasers are tripped off.
The PSL is not actually tripped off, because it is not yet connected to this interlock.
Appears to have tripped on Apr 21 2018 at 01:50:02 UTC. The ESTOP at the entrance to the VEA appears to have been engaged for a second or less. Glitch?
I've reset it.