We tested the CP4 regen heater today and it works! We can adjust the PID parameters in Beckhoff as needed, but the proportional-gain term alone does the job. We heated the GN2 to 30 °C and verified that the "regen over temperature" interlock works: it trips the heater and does not reset automatically.
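The control and interlock behavior described above can be illustrated with a toy sketch. This is not the Beckhoff implementation; the gain, output limits, and trip temperature below are invented numbers for illustration only:

```python
def p_control(setpoint_c, measured_c, kp, out_min=0.0, out_max=100.0):
    """Proportional-only heater drive, clamped to [out_min, out_max] percent.
    kp is a hypothetical gain in % drive per deg C of error."""
    error = setpoint_c - measured_c
    return max(out_min, min(out_max, kp * error))

class OverTempInterlock:
    """Latching over-temperature trip: once tripped, the heater stays
    disabled until an explicit operator reset (no automatic reset)."""
    def __init__(self, trip_temp_c):
        self.trip_temp_c = trip_temp_c
        self.tripped = False

    def heater_enabled(self, measured_c):
        if measured_c >= self.trip_temp_c:
            self.tripped = True  # latch the trip
        return not self.tripped

    def reset(self):
        self.tripped = False

# GN2 at 22 degC, 30 degC setpoint, hypothetical Kp = 10 %/degC -> 80% drive
print(p_control(30.0, 22.0, 10.0))
```

The latch is the important part: matching the observed interlock behavior, an over-temperature event disables the heater and stays disabled until someone resets it.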
The bake enclosure contractor was on site today to program and hard-wire thermocouples for heater over-temperature protection. The programmer will be back tomorrow morning to finish up. We are leaving the fan running overnight for testing.
Note that there was a pressure surge on PT-245 as a result of Kyle installing the N2 needle valve on CP4's turbo. It was brief enough not to trigger alarms, but it does show up on the pressure trends.
WP7381 h1oaf testing for SEI common mode motion distribution
After getting h1oaf's cpu usage back into range, the second part of this investigation is to add two new RFM IPC senders to h1oaf (one per arm) to see if they cause any problems with the existing GeFanuc RFM nodes. Attached is a snapshot of the modified portion of h1oaf.mdl.
After restarting h1oaf, there were no immediate RFM IPC errors on any node on these networks.
Note that because IPC senders do not have channels, no DAQ restart was needed after this change.
Summary:
Timeline:
Feb 27 2018 15:25:45 UTC ChrisS to all outbuildings for FAMIS/Maint.: fire ext. charging lifts
Feb 27 2018 15:53:09 UTC Mark and Mark to LVEA to get hoist and move to MY
Feb 27 2018 16:11:41 UTC JeffB to LVEA looking for parts
Feb 27 2018 16:12:26 UTC Peter in PSL, Jason heading out to PSL
Feb 27 2018 16:20:07 UTC Hugh to LVEA for fit check for LLO
Feb 27 2018 16:24:28 UTC Ed to PSL
Feb 27 2018 16:36:23 UTC Mark and Mark to all outbuildings and CS
Feb 27 2018 16:37:02 UTC - EY: removing ion pump, craning arm to move to EX
Feb 27 2018 16:37:28 UTC - MY: retrieving forklift, taking to EY
Feb 27 2018 16:38:00 UTC - EY craning arm and taking to EX
Feb 27 2018 16:38:38 UTC - EY: snorkel to MX
Feb 27 2018 16:39:03 UTC LN2 delivery
Feb 27 2018 16:40:04 UTC Richard to LVEA H2 enclosure area
Feb 27 2018 16:41:46 UTC Hugh back from LVEA, starting WP7371, HEPI maintenance
Feb 27 2018 16:48:33 UTC Hugh heading to LVEA to complete HEPI Maint.
Feb 27 2018 16:49:40 UTC JeffB back from EX
Feb 27 2018 16:49:52 UTC Richard back from LVEA
Feb 27 2018 16:51:39 UTC Rick, Jim, Niko, Stephen, Alena to EY to work on PSL baffle
Feb 27 2018 16:55:46 UTC Sheila to optics lab, SQZR bay, and HAM6 gathering parts, working outside of HAM6
Feb 27 2018 16:56:41 UTC Fil to LVEA to pull a cable from the PSL to HAM5/6, will be climbing on HAM5 chamber
Feb 27 2018 17:03:40 UTC Ken to EX, will need to power down FMCS system to work, and he and Bubba will restore, expect FMCS alarms
Feb 27 2018 17:04:15 UTC Richard and Peter to LVEA H2 PSL area
Feb 27 2018 17:14:56 UTC Gerardo to EX for ion pump removal, working with Mark and Mark
Feb 27 2018 17:15:54 UTC Jonathan SDF: reinstalling slow controls, impact should be limited to medm going white, no expected other interactions
Feb 27 2018 17:29:06 UTC Nutsinee and Terry to SQZR bay
Feb 27 2018 17:31:31 UTC ISI HAM3 watchdog tripped
Feb 27 2018 17:35:22 UTC Richard out of LVEA
Feb 27 2018 17:35:45 UTC Peter working with Fil on cable pull for temp. interlock
Feb 27 2018 17:39:12 UTC JeffK and Álvaro into LVEA to work in HAM6
Feb 27 2018 17:52:02 UTC Betsy to TCS chillers to check on coolant level after a couple TCSY chiller flow alarms
Feb 27 2018 17:54:58 UTC DaveB starting work on WP7381 OAF restart to add RFM channels
Feb 27 2018 17:55:42 UTC DaveB restarting SQZR front end to add channels
Feb 27 2018 17:57:14 UTC Peter out of LVEA
Feb 27 2018 17:58:00 UTC Betsy back from TCS chillers, all are OK
Feb 27 2018 17:58:18 UTC ChrisS done at all outbuildings, heading into LVEA
Feb 27 2018 18:02:47 UTC Sheila and TJ to optics lab then TJ to HAM6
Feb 27 2018 18:10:38 UTC Paradise water through gate
Feb 27 2018 18:12:09 UTC Richard to LVEA
Feb 27 2018 18:18:08 UTC second LN2 delivery
Feb 27 2018 18:22:35 UTC Mike taking guest through LVEA
Feb 27 2018 18:24:42 UTC Centas (mat contractor) through gate
Feb 27 2018 18:35:19 UTC talked with DaveB about TCS chiller alarms from OAF being down
Feb 27 2018 18:35:52 UTC TCS chiller sees a request of 0 deg
Feb 27 2018 18:36:19 UTC DaveB and Patrick and I talking about TCS chillers
Feb 27 2018 18:36:47 UTC timing system error - DaveB and Daniel looking into it
Feb 27 2018 18:37:36 UTC Jonathan finished restarting SDF
Feb 27 2018 18:38:25 UTC Greg starting on DMT patch
Feb 27 2018 18:40:22 UTC I could not get the Seismic DMT to run, rebooted Nuc5, DaveB looking into it
Feb 27 2018 18:42:51 UTC Betsy to Cleaning area for supplies
Feb 27 2018 18:52:40 UTC Mike and guest done in LVEA
Feb 27 2018 19:01:39 UTC TCSY laser is off, hardware to prevent a request of 0 deg being sent to the chiller is installed
Feb 27 2018 19:18:12 UTC Betsy done collecting supplies
Feb 27 2018 19:26:36 UTC ETMX HEPI tripped, will restore by the end of the day
Feb 27 2018 19:29:56 UTC cleared the watchdog on ETMX HEPI, STS X still railed at -32K, waiting in READY
Feb 27 2018 19:31:08 UTC Hugh done with HEPI, WP7371
Feb 27 2018 19:45:25 UTC Mark Mark and Gerardo done with ion pump removal at EX
Feb 27 2018 19:48:51 UTC Fil done connecting hardware to limit temperature request to TCS chillers when OAF is not on (sending zeros)
Feb 27 2018 19:53:15 UTC Rick to EY
Feb 27 2018 19:53:27 UTC Jason and Ed out of PSL
Feb 27 2018 19:54:37 UTC JeffB and Álvaro out of HAM6, will test suspension
Feb 27 2018 20:04:14 UTC DaveB restarting SQZR and DAQ
Feb 27 2018 20:23:25 UTC Sheila to optics lab
Feb 27 2018 20:23:39 UTC TJ out of HAM6
Feb 27 2018 20:28:52 UTC JeffK WP7382 - ongoing
Feb 27 2018 20:29:59 UTC DaveB WP7381 - RFM and OAF - ongoing
Feb 27 2018 20:30:42 UTC Jonathan WP7377 - install h1tw0 - ongoing
Feb 27 2018 20:31:24 UTC Jonathan WP7376 - remove old h1tw0 - ongoing
Feb 27 2018 20:31:54 UTC Greg WP7378 - DMT - ongoing
Feb 27 2018 20:32:51 UTC Sheila WP7380 - SQZ RCG model - ongoing
Feb 27 2018 20:33:31 UTC Hugh WP7372 - SEI HEPI BSC2 recenter - ongoing
Feb 27 2018 20:34:46 UTC Hugh WP7371 - complete
Feb 27 2018 20:46:23 UTC Sheila WP7380 - SQZ RCG model - complete
Feb 27 2018 20:51:53 UTC WP7372 Hugh starting, taking BS HEPI and ISI offline
Feb 27 2018 21:03:02 UTC DaveB OAF back up and running
Feb 27 2018 21:12:40 UTC Chandra to MY
Feb 27 2018 21:13:01 UTC MCE engineer through the gate heading to MY
Feb 27 2018 21:30:25 UTC Rick and Stephen back to EY
Feb 27 2018 21:31:12 UTC Jim, Niko, and Alena back to CS, then returning to EY
Feb 27 2018 21:51:37 UTC Fil pulling the TCS chiller setpoint box; unclear what it's doing, but clear that it didn't work
Feb 27 2018 21:57:34 UTC 7382 DaveB DAQ restart for JeffK work permit
Feb 27 2018 21:59:49 UTC Nutsinee out from LVEA
Feb 27 2018 22:16:59 UTC DaveB noticed EY ISC is dead, calling EY
Feb 27 2018 22:27:40 UTC glitch in communication - sending TJ to EY
Feb 27 2018 22:47:33 UTC Corey to EY to get First Contact
Feb 27 2018 22:48:32 UTC DaveB EY ISC is back, physical contact with rack wires confirmed
Feb 27 2018 22:48:55 UTC Nutsinee out to SQZ bay
Feb 27 2018 22:49:19 UTC Hugh out of LVEA, work continues from CR
Feb 27 2018 22:52:38 UTC Sheila, Álvaro, and TJ in the LVEA working in/around HAM6
Feb 27 2018 22:54:02 UTC Sheila also heading to optics lab and SQZ bay
Feb 27 2018 23:21:37 UTC Jason and Ed out of PSL
Feb 27 2018 23:21:45 UTC Corey back from EY
Feb 27 2018 23:36:37 UTC Matt and guest to LVEA and roof
Feb 28 2018 00:01:03 UTC Mark and Mark done for today
Feb 28 2018 00:11:37 UTC still working in outbuildings
Feb 28 2018 00:11:58 UTC - Chandra and MCE at MY
Feb 28 2018 00:13:15 UTC - Rick, Stephen, Jim, Niko, Alena at EY
Feb 28 2018 00:14:01 UTC Matt and guest out of LVEA
Á. Fernández-Galiana, J. Kissel

Lots of work today, culminating in a fully floating OPOS and a functional control system. Details summarized below!

First: Álvaro and I remedied the H1 / H2 OSEM in-vacuum cabling issue that resulted in the non-conformance to design identified yesterday (LHO aLOG 40727). We did so at the OSEM, so there was no need to modify the front-end code. Attached are a few pictures of the H1 (1st and 2nd picture attachments) and H2 (3rd picture attachment) OSEMs before we recabled, showing the directional kinks that Álvaro thought could not be remedied. The "problem": Álvaro didn't realize how robust the OSEM cabling is, having only the optical fibers as his primary reference for in-vacuum cabling. Not a problem! Aside from the pain of unscrewing the tiny backshell screws, we merely swapped the cables, massaged them a bit to relieve the "burned-in" stiff kinks, and cable-tied them into a better configuration. Thus, all of H1 SUS OPO's OSEM signal chains now conform to the OPOS wiring found in the (redlined version of the) squeezer wiring diagram (D1700384).

Second: We released the stops and balanced the table. Semi-surprisingly, we needed to add ~100 [grams] of mass to the table in order to bring the blades back to floating. We suspect this is due to the temperature difference between the optics lab and the chamber -- probably ~5-10 [deg C] cooler in the chamber.

Third: We adjusted the horizontal motion limiter brackets beneath the blades (see "Blade Assembly," D1500293) to be centered around the horizontal motion limiter pins on the breadboard (item 6 in D1500292), and ensured all cables were neatly arranged and free from rubbing against the suspended bench.

Fourth: Once mechanically free, we centered the OSEMs at the new equilibrium position. Because of the proximity of a large balance mass and the V1 OSEM, centering the H1 OSEM is miserable (i.e. using the PEEK nuts to expand or contract the OSEM sensor/coil w.r.t. the flag/magnet). We had the most success by temporarily removing the balance mass, turning the nut, replacing the mass, checking the centering, then rinsing and repeating until happy. This completed our in-chamber mechanical work for the morning.

In the afternoon, we found the need to make several changes to the control system:
- Identified that my naive copy-and-paste of the OSEM2EUL and EUL2OSEM matrices (see LHO aLOG 40427) was wrong. Álvaro schooled me in a bit of linear algebra, updated the center of mass of the OPOS from his SolidWorks rendering, and re-calculated the matrix given the new order of OSEM inputs. The new version of the matrix generation can (at the moment) be found in /ligo/svncommon/SusSVN/sus/trunk/VOPO/Common/MatlabTools/AOSEM_Basis_2.m
- Found that we had yet to install the updated naming convention for the OPOS on the h1susauxh56 computer for the coil driver monitor signals, so we installed and restarted that.
- Found that the OVERVIEW medm screen had a few mis-named channels that made Yaw show up instead of Transverse. This has been fixed. In the process, I've increased the visible precision of the EPICS records to reduce small-numbers vs. actually-zero confusion.

Finally -- with the control system functional, we drove the suspension to confirm free dynamics. We see a lot of cross-coupling between degrees of freedom that -- although the resonant frequencies check out -- seems a little excessive. But the chamber is pretty noisy, the ISI isn't yet floating, and I haven't yet compared against a model, so maybe it's normal. I've found some L1 SUS OPO transfer functions in the SusSVN, but they look MUCH more diagonal. Regardless, this is enough information to design some suitable damping loops.
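The OSEM2EUL / EUL2OSEM re-derivation described above amounts to building a geometric sensing matrix from the OSEM lever arms and inverting it. A minimal sketch of the idea follows; the positions below are invented for illustration and are not the real OPOS geometry, which lives in AOSEM_Basis_2.m:

```python
import numpy as np

# Hypothetical vertical-OSEM positions (x, y) in metres relative to the
# centre of mass -- NOT the real OPOS geometry.
pos = np.array([[ 0.10,  0.00],
                [-0.05,  0.08],
                [-0.05, -0.08]])

# Small-angle model: each vertical OSEM reads v_i = V + x_i*Pitch + y_i*Roll,
# so EUL2OSEM has rows [1, x_i, y_i].
eul2osem = np.column_stack([np.ones(len(pos)), pos[:, 0], pos[:, 1]])

# OSEM2EUL is the (pseudo)inverse: recover (V, Pitch, Roll) from the sensors.
osem2eul = np.linalg.pinv(eul2osem)

truth = np.array([1.0, 0.2, -0.1])   # V, Pitch, Roll
sensors = eul2osem @ truth
recovered = osem2eul @ sensors       # should match truth
```

Note that getting the OSEM input ordering wrong, as in the copy-and-paste error mentioned above, simply permutes the rows of EUL2OSEM, which silently scrambles the recovered degrees of freedom.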
On Tue 16 Jan 2018 I stopped the h1ngn model and removed it from h1oaf0. At this time the h1oaf model started running long (left-hand plot in the attachment). Note that h1oaf was not restarted at this time.
This morning I started with h1iopoaf0 and h1oaf as the only models running; h1oaf ran at its pre-h1ngn-removal CPU usage with little deviation. After starting the other models on this computer, h1oaf ramped up into the 61 µs range with large deviations, causing TIM errors.
Looking at the model/core distribution, the cores of the first physical CPU chip (6 cores, non-hyperthreaded) were fully utilized until h1ngn was stopped, leaving a "hole" in core 4. I changed h1pemcs.mdl to move it from specific_cpu=7 to specific_cpu=4. After restarting all the models, this appears to have fixed h1oaf's issues (right-hand plot of attachment). It is not immediately clear why.
Here are the core layouts:
cpu0
core | model |
0 | General Linux |
1 | h1iopoaf0 |
2 | h1calcs |
3 | h1oaf |
4 | was h1ngn, empty 1/16-2/27, now h1pemcs |
5 | h1susprocpi |
cpu1
core | model |
6 | empty |
7 | was h1pemcs, now empty |
8 | h1tcscs |
9 | h1odcmaster |
10 | empty |
11 | empty |
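On a Linux front end, which core a given model process last ran on can be checked directly from /proc. A small sketch (looking up the PID for a given model name is site-specific, so that step is left out):

```python
import os

def core_of(pid):
    """Return the CPU core a process last ran on: the 'processor' field
    (field 39) of /proc/<pid>/stat on Linux."""
    with open(f"/proc/{pid}/stat") as f:
        # The command name may contain spaces or parens, so split after ')'.
        rest = f.read().rsplit(")", 1)[1].split()
    return int(rest[36])

# e.g. core_of(os.getpid()) reports where this script itself is running
```

Comparing this against the intended specific_cpu assignments is a quick way to confirm a layout like the table above.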
The slow controls SDF installs (CA SDF) have been updated to include a fix identified for FRS 9749, which could allow a misbehaving Beckhoff system to crash the SDF model. h1sysecat[cxy]1plc[123]sdf, plus the h1 psl osc sdf and h1 hpi pump sdf, were updated.
h1iscey front end glitched at 14:15 PST. We are holding off on its restart until we contact EY group.
Killed and restarted all models on h1iscey with EY permission.
I have seen glitches (actually quite frequently) on my test-stand H1-style ISCEX machine here at LLO. They persist even with the GE FANUC RFM removed. I have not tried it in an L1-style model mix yet.
We believe this was physically due to brushing equipment past the cables which loop out of the front of the rack at the end station. Note that these racks are in the middle receiving bay, so they frequently see traffic passing in and out of the VEA.
WP7381 Oaf testing prior to ISI common motion upgrade
Dave:
It looks like I've fixed the over-run of h1oaf by moving h1tcscs to fill the core vacated by h1ngn. I'll post details in an alog.
Stopping h1tcscs reminded us of the CO2 laser chiller issue when the setpoint temperature control voltage goes to zero. Fil wired up the chiller summation box, which should resolve this issue, but we found that ITMY's water temperature increased when h1tcscs was restarted. It had reached 26 °C when Fil removed the summation box from the control circuit. More details in another alog.
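The summation-box fix is hardware, but the failure mode it guards against is easy to state in software terms: never forward an out-of-range setpoint to the chiller. A toy sketch (the limits and fallback below are invented numbers, not the real chiller configuration):

```python
def safe_setpoint(requested_c, fallback_c=20.0, lo_c=10.0, hi_c=35.0):
    """Hold a safe fallback setpoint when the requested value is out of
    range -- e.g. the 0 degC request sent while the OAF front end is down."""
    return requested_c if lo_c <= requested_c <= hi_c else fallback_c

print(safe_setpoint(0.0))   # out of range: falls back to 20.0
print(safe_setpoint(26.0))  # in range: passes through unchanged
```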
WP7378 OS upgrade for Scientific Linux Machines
Greg:
Greg updated and rebooted the DMT SL7.4 production machines (h1dmt[0,1,3]). He also upgraded h1hwinj2 from 7.3 to 7.4. h1hwinj1 is actually running 6.9, and its upgrade is not simple. We will most probably go ahead and re-install SL on this box next week.
WP7376,WP7377 Replace h1tw0 with new computer
Jonathan, Dave, Carlos:
Other issues this morning have pushed this activity to another time.
WP7319 new beckhoff SDF code
Jonathan:
All Beckhoff SDF systems were restarted other than C1PLC4 which had been upgraded some weeks ago.
WP7380 New SQZ model, add Guardian EPICS channels
Sheila, Dave:
The h1sqz model was rebuilt with LLO's latest SQZ_MASTER.mdl.
Complete renaming of VOPO to OPO
Jeff K, Dave:
Jeff modified h1susauxh56.mdl to complete the renaming of VOPO to OPO.
DAQ Restarts
Dave:
DAQ was restarted twice. First for h1sqz changes. Second for h1susauxh56 change.
Restart log is attached
(Mark D, Mark L, Gerardo M)
Ion pump 12 was removed from the beam tube, and the vacuum port on the beam tube was covered with a blank. The ion pump's port was also covered with a blank. Varian SQ051 number 70095.
Measured dew point at -43.7 °C after IP12 removal.
--with Dave B.
The hardware inj boxes are not in production use, so we went ahead and patched and rebooted them. Note that box 1 is at SL6.9, while box 2 has been updated to SL7.4.
The patch and reboot of the DMT production computers under WP7378 is done.
Attached is a 500 day trend of 16 face BOSEMs from a few different randomly sampled suspensions in the corner station.
All of the sampled BOSEMs see 200-600 counts of decay over ~365 days of the plotted data.
Note that "face"-facing BOSEMs on different types of suspensions have different reference names (T1 vs. F1 vs. RT BOSEMs are mounted in different locations on the different suspension types). For reference, search "Controls Arrangement Poster" by J. Kissel to see all of the configurations (E1100109 is the HLTS one, for example), or see the rendering on each medm screen.
Attached is the first plot I made of a few different randomly sampled suspensions, which included some vertically mounted BOSEMs. Those trends are plotted in brown and show other factors, such as temperature, in their shape over the last 500 days. Of the remaining red "face"-mounted BOSEMs on the plot, all 11 show a downward trend of a couple hundred counts.
Using the same random selection of face OSEM channels as Betsy in the original aLOG entry above, but for LLO, 500-day trends are attached below. OSEM open-light decay trends appear similar between sites, with, in general, 100-600 counts of decay over the ~500 days of plotted data. However, it should be noted that the IM suspensions also included in the trends employ AOSEMs, not BOSEMs; the decay trends for both OSEM types nonetheless appear consistent.
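The per-channel decay quoted above is essentially the slope of a linear fit to the open-light trend. A minimal sketch with synthetic data (the numbers are made up to mimic roughly one count/day of decay, not real trend data):

```python
import numpy as np

def decay_rate(days, counts):
    """Least-squares linear fit of open-light counts vs. time.
    Returns the slope in counts/day (negative means decay)."""
    slope, _intercept = np.polyfit(days, counts, 1)
    return slope

# Synthetic stand-in for a trended BOSEM open-light channel:
# 500 counts of decay over 500 days.
days = np.arange(500.0)
counts = 30000.0 - 1.0 * days
```

On real trend data, a robust fit (or median-filtered input) would be preferable, since temperature swings and glitches also show up in these channels.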
State of H1: Maintenance
PSL status checks were not run due to the PSL being down during the 70 W install.
Á. Fernández-Galiana, B. Gateley, C. Gray, J. Kissel, T. Shaffer

Many more details to come, but Álvaro, Corey, and TJ installed the new H1 Optical Parametric Oscillator Suspension (OPOS) this morning (with Bubba as safety consult, and me as photographer). I attach the highlights of my pictures from the activity as direct images below. All pictures are collected, scaled down, and merged into a .pdf, also attached.

Notes:
- During the install, we had to move:
  - the OMC TRANS path beam dump that was in the -X, -Y corner of the table (it was in the old position and will need to be moved anyway)
  - a ballast mass that was (unbolted and) shoved against AS WFS A; the table needs to be rebalanced anyway, so this isn't a big deal
  - the second iris (closest to the OPOS and/or the future position of ZM1) capturing the beam path into HAM5. Its location has been scribed in order to (hopefully) reproduce the position to high accuracy. See the .pdf for detailed pictures of this. It will be re-installed once we finalize the location of the OPOS and place ZM1.
- In order to successfully attach the installation lifting fixture to the OPOS, we had to remove a V-dump from one corner of the optical bench. This dump (and its near interference) is shown highlighted in the second picture (IMG_3648.jpg). It will be re-installed shortly.
J. Kissel for T. Shaffer The V-Dump mentioned above has been reinstalled.
Wonderful, congratulations!
Adding a few more pictures and a (late) update from yesterday.
Yesterday, Álvaro and I finished routing the cables and fibers that come from the suspension. The cables were attached to their brackets, but have not yet been connected to the cables that run to the feedthrough. The fiber feedthrough has not been installed yet, so the fibers are just coiled up and waiting for that. The AS WFS are not in the same spot as in the drawing; because of this, we had to reroute some of the cables for those WFS (see attached).
Today we will unlock the suspension and rebalance, then when we are happy with that we will recenter the OSEMs. Sometime this week we will need to add the 2" mirrors, beam dump, ZM1, and hook up the rest of the cables.
As seen in the attached pressure graph, the XBM pressure isn't following the expected pump-down curve. Since GV2 was closed, isolating the XBM from the adjacent volumes, the pressure has been rising or holding flat rather than decreasing. In this configuration, IP6 and the XBM MTP are the only pumps pumping. Chandra R. had previously suspected that IP6 may be dying. Even so, I would expect that symptom to appear as a loss of net pump speed. With the LVEA temperature constant, I'm wondering if this apparent increase is something else, like.....?
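For reference, the "expected pump-down curve" for a single pumped volume is roughly an exponential decay toward the ultimate pressure. A sketch with invented numbers (not the XBM's actual volume, pump speeds, or pressures):

```python
import numpy as np

def expected_pumpdown(t, p0, p_ult, tau):
    """Ideal single-volume pump-down (Torr): exponential decay from the
    starting pressure p0 toward the ultimate pressure p_ult, with time
    constant tau = V/S (volume over net pump speed)."""
    return p_ult + (p0 - p_ult) * np.exp(-t / tau)

# A healthy volume keeps falling toward p_ult; a measured curve that
# flattens well above p_ult, or rises, points to a leak, outgassing,
# or a loss of net pump speed (e.g. a dying ion pump).
t_days = np.linspace(0.0, 10.0, 11)
model = expected_pumpdown(t_days, 1e-7, 5e-9, tau=2.0)
```

Comparing measured trends against this kind of model is one way to separate "pump is slow" (curve flattens high) from "gas load is growing" (curve rises).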
I confirmed that the MTP and IP6 are valved in. 4.9 × 10^-9 Torr indicated at the turbo inlet -> the PT170 gauge has been questioned in the past and is suspect.
Here is a 10 day trend on both beam manifolds. The pressure in XBM seems to have settled. I need to follow up with the manufacturer of these gauges - an ongoing issue.
John and I spent time leak checking the XBM last year after noticing these pressure trends. No leaks found, but we didn't check welds.
Worth noting too that the turbo gate valve was closed during the needle valve installation so pressure rise up to 0.7 Torr was seen only on turbo side and not in CP4 volume.