PT-114 is alarming after being isolated from the vertex. The pressure rise from daily tube warming is causing it to cross the 5e-9 Torr alarm setting.
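As a minimal sketch (not site code), one could poll the gauge against that setting over EPICS with pyepics; the PV name below is a placeholder, since the actual PT-114 channel name is not given here:

```python
# Sketch only: compare a hypothetical PT-114 pressure PV against the 5e-9 Torr alarm setting.
from epics import caget

PT114_PV = 'H0:VAC-PT114_PRESS_TORR'  # placeholder name, not a verified channel
ALARM_TORR = 5e-9                     # alarm setting quoted above

pressure = caget(PT114_PV)
if pressure is None:
    print('PT-114 channel not reachable')
elif pressure > ALARM_TORR:
    print('PT-114 above alarm setting: %.2e Torr' % pressure)
else:
    print('PT-114 below alarm setting: %.2e Torr' % pressure)
```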
EX timing was reporting an error with the 4th Duotone slave, and h1iopiscex was in error. I found that the timing slave in h1iscex's IO Chassis was still trying to sync up (flashing LED). This system has a separate DC power supply for the timing slave, part of a past noise study. I power cycled both the timing slave and the IO Chassis; the code is now running correctly and the timing error has been resolved.
Started the h1pemmy system while at MY. Powered up h1digivideo[0,1,2] and h1auxscript0.
Started the h1pemmx system.
The DAQ EDCU is now only complaining about the corner station HEPI pump controller (Hugh is on this) and the end station HWS (TiVo is looking into this).
Hugh and Thomas have worked their magic; the EDCU is happy again.
The dust monitor alarm levels throughout the site have been set to their normal levels (shown in the table below, with a scaling sketch after it). In some cases these levels may need adjusting, depending upon what is being done in the space they are monitoring. Please get in touch with Jeff Bartlett before adjusting the dust monitor alarm levels, or with any contamination control concerns or questions.
ISO-Clean | 0.3µ Minor | 0.3µ Major | 0.5µ Minor | 0.5µ Major |
---|---|---|---|---|
100 | 200 | 300 | 70 | 100 |
1000 | 2000 | 3000 | 700 | 1000 |
10000 | 20000 | 30000 | 7000 | 10000 |
100000 | 200000 | 300000 | 70000 | 100000 |
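The normal levels scale linearly with the cleanroom class: 2x and 3x the class number for the 0.3µ minor/major levels, 0.7x and 1x for the 0.5µ levels. A minimal sketch of that scaling, useful as a cross-check if levels need adjusting for a particular space:

```python
# Dust monitor alarm levels as a function of cleanroom class, reproducing the table above.
def alarm_levels(clean_class):
    """Return (0.3u minor, 0.3u major, 0.5u minor, 0.5u major) for a given class."""
    return (2 * clean_class,          # 0.3u minor
            3 * clean_class,          # 0.3u major
            clean_class * 7 // 10,    # 0.5u minor (integer arithmetic avoids float rounding)
            clean_class)              # 0.5u major

for cls in (100, 1000, 10000, 100000):
    print(cls, alarm_levels(cls))
```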
Looks like three IP isolation valves are leaking: IP1,2,4. Turning those IPs OFF. Valves on HAM 1,6 look good and sealed.
Kyle, Gerardo, Chandra
Started (slow) venting the corner station at 10:16 am local, including HAM 1,6. Dew point measured -45 degC or lower at HAM 6 & the vertex.
Note: we don't trust the Pirani gauge on PT-120 at high pressures, based on data from the May vent. We are monitoring the Piranis on PT-170 & PT-180.
Dave B., Jeff B.,
Today we enabled the new dust monitors in the LVEA; they add trending data for the chambers being opened. The new monitor trend data will go live with a DAQ restart after HAM1 & HAM6 are vented.
LVEA Dust Monitor Locations
Dust Mon # | LVEA Location |
---|---|
2 | HAM2 Cleanroom |
3 | HAM3 Cleanroom |
4 | HAM4 Cleanroom |
5 | HAM5 Cleanroom |
6 | HAM6 Cleanroom |
10 | Biergarten - BSC1 |
30 | Biergarten - BSC3 |
Trended the dust monitors in the LVEA for the past 30 days, 7 days, and 1 day as a baseline for the vent. The three dust monitors in the LVEA are: #2 in the HAM1/2 cleanroom, #6 in the HAM5/6 cleanroom, and #30 in the Biergarten for BSC3. I have posted the 30 day and 1 day plots. The 0.3µ data on DM #2 older than 24 hours is bogus. Since there is nothing interesting in the 7 day plot, it was not posted.
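As a minimal sketch of how such a baseline trend could be pulled programmatically with the NDS2 client (the server name is the usual LHO one, but the dust monitor channel name and trend suffix below are assumptions and would need to be checked against the DAQ channel list):

```python
# Sketch: fetch 30 days of minute-trend data for one LVEA dust monitor channel.
# The channel name is a placeholder (assumption), not a verified DAQ channel.
import nds2

GPS_STOP = 1187000000                 # placeholder GPS time near the vent
GPS_START = GPS_STOP - 30 * 86400     # 30 days earlier

channels = ['H0:PEM-CS_DUST_LVEA6_300NM_PCF.mean,m-trend']  # hypothetical name

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
for buf in conn.fetch(GPS_START, GPS_STOP, channels):
    print(buf.channel.name, 'mean over span:', buf.data.mean())
```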
The problem found with the diagnostic board was that the bottom DB25 connector bundle was loose. This is because of the hardware involved, which has no jack screws to allow an adapter to screw into the mating connector and the cable into the connector. The connector/cable combination was re-seated. Changing from remote to local is now indicated on the MEDM screen, whereas previously it remained on "LOCAL". This closes work permit 7159. Richard / Peter
Turned the Kobelco ON at 7:45 am local. Working through the vent procedure, turning off/valving out equipment to prepare for the slow vent today.
To verify that front ends could be started, I first started all the non-Dolphin FECs (except for the mid station PEMs); there were no problems. Before starting h1psl0, I consulted with Peter King to see if there would be any PSL issues in doing so; he said there would be none.
I noticed a timing error in the corner station and found it to be the IO Chassis for h1seib3 (ITMX), which was powered down. Its front panel switch was in the OFF position. Richard and I think this was accidentally switched much earlier and only caused a problem after the computer was power cycled. Before powering this IO Chassis up, I first powered down the two AI chassis because of the issue with the 16-bit DACs outputting voltage when the IO Chassis is ON and the FEC is OFF (after h1seib3 was operational again, I powered the AI chassis back on).
I restarted all the Dolphin FECs, using IPMI for the end stations and the front panel switches for the MSR. Many FECs started with IRIG-B errors, in both the positive and negative directions. We know from experience that some of these take many minutes to clear, so I continued with the non-FEC restarts.
I consulted with Jim Warner on starting the end station BRS computers (it was OK to do so) and the HEPI Pump Controllers (we will leave them till Monday).
I went to EY, the first of many trips as it turned out. I powered up h1ecaty1 and h1hwsey. I noticed that h1brsey was already powered up, but its code did not appear to be running.
Back in the control room, I noticed all EY FECs had the same large positive IRIG-B error, indicating a problem with the IRIG-B Fanout. Back at EY I confirmed the IRIG-B fanout was reporting the date as mid June 1999. After some issues, I power cycled the IRIG-B chassis and rebooted the FECs. The front ends were now running correctly.
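A minimal sketch of scanning the front ends' IRIG-B diagnostics from the control room with pyepics; the per-FEC PV names and DCUIDs below are hypothetical and would need to be taken from the real CDS channel lists:

```python
# Sketch: check IRIG-B timing diagnostics for a few front ends over EPICS.
# PV names and DCUIDs are placeholders (assumptions), not verified channels.
from epics import caget

IRIGB_PVS = {
    'h1iopseib3': 'H1:FEC-60_IRIGB_TIME_DIFF',   # hypothetical
    'h1iopiscey': 'H1:FEC-96_IRIGB_TIME_DIFF',   # hypothetical
    'h1iopiscex': 'H1:FEC-91_IRIGB_TIME_DIFF',   # hypothetical
}

for fec, pv in IRIGB_PVS.items():
    diff = caget(pv)
    if diff is None:
        print(fec, ': channel unreachable')
    elif abs(diff) > 1.0:   # arbitrary threshold for a "large" error
        print(fec, ': large IRIG-B error:', diff)
    else:
        print(fec, ': IRIG-B OK:', diff)
```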
At this point I ran out of time. There is a timing issue with h1iscex, and the Beckhoff timing fanout is reporting an error with the fourth Duotone slave, which I suspect is h1iscex.
To be done:
Start mid station PEM FECs.
Power up h1ecatx1 and h1hwsex.
Start Beckhoff slow controls code on h1ecat[x,y]1. Start HWS code on h1hwse[x,y].
Investigate Duotone Timing error at EX, get h1iscex running.
Start HEPI pump controllers.
Start digital video servers.
Start BRS code.
Start PSL diode room Beckhoff computer.
Attached is the CDS site overview MEDM screen.
Was able to use remote desktop to connect to h1ecaty1. The terminal for the EPICS IOC was open, but it appeared that something had not started properly. I used the icon on the desktop to restart the computer. This appears to have worked. I cannot reach h1ecatx1; it will likely need to be turned on locally.
Jeff B. powered on h1ecatx1. I logged in with remote desktop and found the same issue as I had with h1ecaty1 (screenshot attached). I used the icon on the desktop to restart the computer. This appears to have worked.
Burtrestored the FMCS IOC to restore alarm levels:
patrick.thomas@zotws12:/ligo/cds/lho/h0/burt/2017/09/15/06:00$ burtwb -f h0fmcs.snap
All the weather stations except the CS had lost their connections. I restarted and burtrestored the IOCs on h0epics.
At approximately 8:12 this morning the site had a power outage. The GC and CDS UPS logs show mains power shutting off and returning to normal within a 16 second span. This of course means anything not on UPS shut down. Bubba was able to log in and verify FMCS was functioning fine. I came to the site and verified no fire pumps were running, as indicated by the alarms we were receiving. I have restarted the EPICS interface to FMCS to try to eliminate the text alarms.
Also started the 2FA machine so others could log in and monitor their systems.
There was a fire near the substation that provides our site power. I do not know whether whatever caused the power outage also caused the fire, whether the fire caused our outage, or whether they were unrelated; it is not likely they were unrelated. I will attempt to contact electrical dispatch Monday morning to see if they know.
Kyle is on site checking vacuum. Robert is crunching data and everything seems stable.
Dave will probably come in later today to get the front ends running again.
It looks like the only "hiccup" experienced by the vacuum system following the site-wide power outage was that IP6's controller defaulted to a "standby" state, i.e. high voltage was disabled; I re-enabled it. The vacuum bake ovens isolated from the roughing pumps at the loss of power, but the turbo pumps all spun down. As such, the bake load contents were exposed to a Torr*L or two of turbo exhaust, but at least reversed viscous flow was prevented. Both ovens remained above 100 C and the parts should clean up with the turbos now restarted. The RGAs for the VBOs were isolated at the time and should not be affected by the spun-down turbos. 1100 hrs local: Kyle leaving site now. Robert S. and Richard M. are still on site.
Richard and Jonathan have fixed remote login to CDS.
h1tw1 is reporting a failed power supply on its RAID, presumably the one connected to facility power. I can confirm the RAID is operational and raw minute trend files are being written, so the second PS is OK.
The cell phone texter is waiting for the FMCS channels to restart. This is imminent, so I'll keep it running for now.
FMCS access still not working for me. Not urgent.