Here is another snapshot of Y-arm pressures over five days. We don't see a response in Y2 from changes in CP4 temperature, but the Y1 side trends closely with CP4. GV11 (nearer to the mid station) has an outer o-ring leak on its gate, but we evacuated this annulus and valved it out. GV12 (the Y2 boundary valve) has both inner and outer o-ring leaks on its gate; it was also evacuated and valved out.
When BSC6 was replaced with a spool at mid-Y several years ago, GV11 was exposed to air, and the spool was not baked after installation (was it baked at the factory?), so we expect outgassing from these components during the CP4 bake. However, the sharp cliffs in the Y1 pressure are not what we would predict from temperature changes in the steel.
The CP4 enclosure air temperature trend over the same five days is also attached.
The replacement spool was baked at GNB but not after installation. The same is true for the MC tubes in the LVEA, which were part of the same aLIGO procurement.
Kyle and I replaced a bad solenoid valve on the compressed-air drying tower skid outside after getting alarms after hours. The loss of compressed air caused the safety valves on the three turbo stations to close and thus spin down the turbos. It also caused GV6 and GV8 to sag. After we regained compressed air we spun the turbos back up, but the YBM turbo is being difficult and keeps shutting itself down due to vibration (a known issue). We tried all the tricks with the bypass valve and air admittance to load the rotors. We'll let it spin 100% down tonight and try again tomorrow. I closed all three turbo isolation gate valves in a timely manner, so the corner should not have been affected by back streaming. Pressure rose for a period while the turbos were valved out. I didn't have time to check on the in-vacuum high-voltage equipment - I don't think anything was turned on.
FYI: we did not replace the solenoid portion of the valve at the drying tower.
The YBM turbo spun back up today with no troubles or tricks. Foreline setpoint is back to 5e-2 Torr.
The guardian upgrade has been completely aborted due to the unexplained segfaults described in elog thread 40765. The entire guardian system is back to the exact configuration it was in two weeks ago (guardian 1.0.3 on h1guardian0, Ubuntu 12). All nodes are up and running nominally.
h1guardian1 has been left in place to run demon excision tests: 20 fake guardian nodes are running under valgrind in the hope of catching a crash that would point to the problem. These tests should not affect interferometer operation in any way.
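A minimal sketch of this kind of crash hunt, assuming the fake nodes can be launched as ordinary processes (the node command, log paths, and valgrind options here are illustrative, not the actual test harness):

```python
#!/usr/bin/env python
# Sketch: launch N dummy "node" processes under valgrind and report abnormal
# exits. fake_guardian_node.py and the log paths are hypothetical.
import subprocess

N_NODES = 20
procs = {}
for i in range(N_NODES):
    cmd = [
        "valgrind",
        "--log-file=/tmp/valgrind_node_%02d.log" % i,  # one valgrind log per node
        "--error-exitcode=42",                         # surface valgrind-detected errors in the exit code
        "python", "fake_guardian_node.py", "NODE%02d" % i,
    ]
    procs[i] = subprocess.Popen(cmd)

# Wait on each node in turn; any nonzero return code (segfault, valgrind error)
# points at the corresponding log file.
for i, p in sorted(procs.items()):
    rc = p.wait()
    if rc != 0:
        print("node %02d exited with code %d; see /tmp/valgrind_node_%02d.log" % (i, rc, i))
```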
In our hunt for issues, we swapped a BOSEM on the ETMY R0 Side (reported below). The BOSEM wasn't the problem, though.
OLV has been updated in medm:
| BOSEM (s/n) | OFFSET | GAIN |
|---|---|---|
| OLD 083 | -15141 | 0.991 |
| NEW 291 (OLV 31220) | -15610 | 0.961 |
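These numbers are consistent with the common convention (an assumption here, not stated above) of setting the OSEM input offset to minus half of the open light value and the gain to 30000 counts divided by the open light value:

```python
# Sketch: OSEM input offset/gain from a BOSEM open light value (OLV), assuming
# the convention offset = -OLV/2 and gain = 30000/OLV.
def osem_cal(olv_counts):
    offset = -olv_counts / 2.0
    gain = 30000.0 / olv_counts
    return offset, gain

print(osem_cal(31220))  # new s/n 291: (-15610.0, 0.9609...), matching the table
print(osem_cal(30282))  # OLV implied for old s/n 083 under this convention: (-15141.0, 0.9907...)
```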
With all of the connecting and disconnecting of cables while hunting and fixing ground loops at EY, the side coil on R0 failed to drive the suspension. Investigating, it was obvious we had no coil connected (we could not see 40 Ohms on the pins). We checked the feedthrough and everything appeared good and tight, but it must have been seated oddly. After reseating and tightening the connection the coil was fine: 40 Ohms at the Sat Amp, 45 at the rack.
Fil and I also took this opportunity to replace the Binary IO card that had failed for the readback. All signals seem to work now.
All are aligned except ETMX and TMSX, which are damped.
Adjusted the CP4 Dewar pressure relief valve (aka economizer valve) by turning the screw 2 turns clockwise to increase head pressure. It was at 12 psig with the tank 62.1% full. This may help increase GN2 flow. However, the PSI document specifies a head pressure of 5 psig for the regen exercise.
It will take a day or so for the head pressure to stabilize to the new setting.
Backed off the economizer valve and made it one turn instead.
Dust6 appears to be off; JeffB is checking.
JeffB confirmed that it's working
After ~3 days of ground loop hunting, fixing, breaking, and fixing, as well as a relocking of the ISI, we are finally getting good results on the 18-DOF transfer functions for all 3 suspended chains in BSC10 (ETMY main, ETMY reaction, and TMS).
Attached are results of the TMS. Peaks from today overlap peaks from previous closeout sessions.
Kissel has cast an eye on these and calls them good.
Will post the ETM set shortly.
Next up: First contact and electrometer optic readings, chamber clean-out, then closeup!
Also, attached is the latest set of ETMY and TMSY spectra after all of the ground loop work. Combs are gone.
Attached are the ETMY M0 and R0 chain transfer functions taken yesterday. The 2.5 Hz noise is showing itself again, just a bit, on the Transverse DOF TFs - maybe related to the ISI being locked again. There are no knobs to turn for it, so we're going with it for now.
The h1susey computer was power cycled twice this afternoon: first to see if a power cycle of the IO Chassis would clear the binary error, and the second time to replace the Contec 6464 binary card. Even though h1susey had been taken out of the Dolphin fabric, the first power-up glitched the Dolphin-connected models (h1seiey and h1iscey). This was not an issue since the ISI was locked and the HWWD had the ISI coil drivers powered down.
After h1susey was working again I restarted all the other EY models with no problems.
2018_03_08 13:18 h1iopsusey
2018_03_08 13:18 h1susetmy
2018_03_08 13:18 h1sustmsy
2018_03_08 13:19 h1iopsusey
2018_03_08 13:19 h1susetmy
2018_03_08 13:19 h1sustmsy
2018_03_08 13:20 h1susetmypi
2018_03_08 13:45 h1iopsusey
2018_03_08 13:45 h1susetmy
2018_03_08 13:45 h1sustmsy
2018_03_08 13:46 h1iopsusey
2018_03_08 13:46 h1susetmy
2018_03_08 13:46 h1sustmsy
2018_03_08 13:47 h1susetmypi
2018_03_08 14:07 h1hpietmy
2018_03_08 14:07 h1iopseiey
2018_03_08 14:07 h1isietmy
2018_03_08 14:14 h1alsey
2018_03_08 14:14 h1caley
2018_03_08 14:14 h1iopiscey
2018_03_08 14:14 h1iscey
2018_03_08 14:14 h1pemey
Chandra, Dave:
The regen overtemp interlock alarm was added to the alarm system; it alarms with no latency. Also, the CP4 thermocouple low-alarm level was raised to 70 C.
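A minimal sketch of the kind of low-temperature check involved, assuming an EPICS readback for the thermocouple (the PV name below is hypothetical; the real alarm lives in the alarm system, not in a script like this):

```python
# Sketch: flag the CP4 thermocouple reading if it drops below the 70 C
# low-alarm level. The PV name is a placeholder, not an actual H1 channel.
from epics import caget  # pyepics

CP4_TC_PV = "HVE-MY:CP4_TC1_DEGC"  # hypothetical PV name
LOW_ALARM_C = 70.0

temp = caget(CP4_TC_PV)
if temp is not None and temp < LOW_ALARM_C:
    print("ALARM: CP4 thermocouple reads %.1f C (below %.1f C)" % (temp, LOW_ALARM_C))
```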
Guardian:
• Reverted to previous config.
• BRS in CS has been removed, so is white
• New nodes to be added
• No work next Tuesday
CDS:
• Possible model reboots during Maintenance
• DAQ reboot
PSL:
• Alignment and mode matching for new 70W amp continues
HAM6:
• Locking
• Next week, smaller crew
• Ready for sei to start balancing
• Doors? Later
• Platform: Mark is working on it
• Chamber cover may need to be up without people in the cleanroom
LVEA:
• Kyle turning purge air off for about an hour – Tuesday Maintenance
EY:
• Card access work continues
• Card access contractors
EX:
• Card access work continues
• Card access contractors
• Pcal work started
• Kyle to check purge air
OpLevs:
• ITMY is blocked by GV1
The GN2 flow survived the night (but did Kyle, who monitored the screen?). We now have an alarm enabled that will text the vacuum group almost immediately if the GN2 heater trips, in which case relatively cold GN2 would continue to flow through the warm CP4, across the bi-braze joints. Our bake setup is different from the original PSI design, which I don't think was intended to run hot GN2 for days on end.
I increased the regen setpoint to 105 C. The gas temperature was pretty steady at ~85 C last night, with a setpoint of 95 C and a proportional gain of 7. There seems to be an offset (~10 C) between the setpoint and the temperature reading that I haven't been able to remove by changing the gain, so for now I increased the setpoint to make up for it. The GN2 flow is ~20 scfhx100 and 1/4 of the vaporizer is frosted (see photo). Dewar consumption varies but doesn't look to be more than the typical consumption of the normally operating pumps. We may need to open up the vaporizer feed valve more to allow more flow.
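A persistent setpoint-to-reading gap like this is what proportional-only control tends to produce: the heater drive settles where it just balances the losses, leaving a steady-state error that shrinks with gain but never vanishes. A toy illustration (a first-order thermal model with made-up coefficients, not the actual CP4 regen controller):

```python
# Toy model: dT/dt = -k*(T - T_amb) + c*u, with proportional-only drive
# u = Kp*(SP - T). The coefficients k and c are invented for illustration.
k, c = 0.02, 0.02          # heat-loss rate and heater effectiveness (made up)
T_amb, Kp, SP = 20.0, 7.0, 95.0
T, dt = T_amb, 1.0
for _ in range(5000):
    u = max(0.0, Kp * (SP - T))           # proportional heater drive, no integral term
    T += dt * (-k * (T - T_amb) + c * u)
# The steady state sits below the setpoint by about k*(SP - T_amb)/(k + c*Kp).
print("settles near %.1f C for a %.0f C setpoint" % (T, SP))  # ~85.6 C here
```

Bumping the setpoint, as done above, is the usual workaround when the controller has no integral term to drive the error to zero.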
Increased the flow of the regen GN2 and also the temperature. The enclosure heater is outputting 100%; I suspect the GN2 is absorbing heat inside the enclosure. Because we're measuring the GN2 temperature outside, we should bump it up 10 C higher than where we want it at CP4 to compensate for losses. The new setpoint is 115 C, with the intention of raising the outside gas temperature to 105 C. Supply air at the bake enclosure is at 95 C.
The vaporizer feed valve is fully open at 2-1/8 turns, with flow measuring around 40 scfhx100. Not sure we can achieve PSI's 55 scfhx100 spec.
Continued hunting ground loops at EY. After various attempts, all shorts for ETMY are now fixed. Some of the steps taken to remedy the shorts:
1. Betsy loosened and re-tightened some of the connections at the feedthru (in chamber).
2. Shorts on the UIM and PUM were no longer present. The short on ETMY M0 was still present, and there was a new short on M0/R0.
3. Disconnected cables at feedthru (air side) and verified pin 13 was not shorted to ground inside chamber.
4. Checked that the shield and backshell on connectors were not touching (feedthru air side).
5. Disconnected all cables coming from chamber at the SUS Amplifier boxes.
6. Checked that all pins and backshells were not shorted to chamber ground. All passed.
7. Reconnected all cables back to SUS Amplifier units.
Further to the above, Fil reported that he had not secured the cables connected to the 7 SUS EY Sat Amp boxes with their mating screws.
This morning we ran the spectra again and saw that all of the combs have cleared up. So that is good.
However, a new round of TFs (taken since Jim had the ISI unlocked while we were diagnosing something else) showed no coherence in the R0 Transverse DOF. This DOF uses only the Side BOSEM. This T TF was fine on Friday, before all of the cable short remediation work. (Not sure why the problem doesn't reveal itself in the spectra...) So we think the actuation of that one OSEM is now broken.
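For illustration, a toy version of the drive-to-readback coherence test on synthetic data (not the real TF measurement scripts): a working coil shows high coherence with the excitation, while a dead drive path shows essentially none.

```python
# Toy coherence check: compare a drive signal against a readback that either
# does or does not respond to it. Synthetic data only.
import numpy as np
from scipy.signal import coherence

fs = 256.0
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(0)
drive = rng.standard_normal(t.size)                         # excitation sent to the coil
working = 0.5 * drive + 0.1 * rng.standard_normal(t.size)   # readback sees the drive plus noise
dead = 0.1 * rng.standard_normal(t.size)                    # readback sees only noise

for label, readback in [("working coil", working), ("dead drive path", dead)]:
    f, coh = coherence(drive, readback, fs=fs, nperseg=4096)
    print("%s: median coherence %.2f" % (label, np.median(coh)))
```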
Travis and I went down to EY this morning and secured all of the cables at the Sat Amp boxes. The R0 T DOF still looked bad. So, we went ahead and did a quick swap of the R0 Side BOSEM inside on the QUAD. Still no change.
Richard happened to be in the building; he power cycled the R0 Face1/2/3/Side coil driver. Still bad.
We all broke for lunch.
For the record, the old BOSEM that came off of R0 Side was s/n 083; the new one going in is s/n 291.
This morning Travis had to repoint ETMX with some bias to get it back on the OpLev, which I found odd. The OpLev had reportedly been zeroed to the ETMX SUS by Jason last ~Monday. However, besides that jump in the trend data, there is another jump on Tuesday. I am guessing that the ISI EX model work/bootfest changed the pointing of the floating ISI/SUS. Travis used the ETMX SUS to repoint back to zero.
This morning, Jim locked the ISI so hopefully there will be no more unexpected shifts due to anything ISIish.
Side note - try not to bump the OPLEV piers. Ever.
Not that it isn't possible, but it does not look to me like the reboots of the ISI did anything to the OpLev readout. Further, what can SEI do to pitch unless the optic is locked? Based on everything I know and can see, I don't think the SEI could do this; it has only been in DAMPED, READY, or TRIPPED.
The attached 3-hour plot shows the OpLev pitch along with the WD, Guardian, and CPS signals for ETMX. The step in the OpLev occurs at 19:40 on Tuesday. The Guardian is not manipulated until a full 14 minutes later, and the isietmx FE boot at 20:06 comes 26 minutes after the step. Below is the day's boot start log -- nothing in it is causal to the OpLev shift. Meanwhile, there is no hint of movement on the ISI (HEPI has been locked for more than a week) during the OpLev pitch shift (the yaw shift is tiny), and none until the signals go to zero at 20:05 at the start of the ISI boot. Just goes to show, sometimes you have to look closer.
hugh.radkins@opsws1:06 1$ more *.log
2018_03_06 11:13 h1asc
2018_03_06 11:13 h1omc
2018_03_06 11:13 h1sqzwfs
2018_03_06 11:15 h1susopo
2018_03_06 12:02 h1isiitmy
2018_03_06 12:04 h1oaf
2018_03_06 12:06 h1isietmx
2018_03_06 12:06 h1pemex
2018_03_06 12:13 h1alsex
2018_03_06 12:13 h1calex
2018_03_06 12:13 h1iopiscex
2018_03_06 12:13 h1iscex
2018_03_06 12:13 h1pemex
2018_03_06 12:24 h1oaf
2018_03_06 12:43 h1iopiscex
2018_03_06 12:43 h1pemex
2018_03_06 12:45 h1alsex
2018_03_06 12:45 h1calex
2018_03_06 12:45 h1iscex
2018_03_06 12:56 h1dc0
2018_03_06 12:57 h1dc0
2018_03_06 13:01 h1broadcast0
2018_03_06 13:01 h1dc0
2018_03_06 13:01 h1fw0
2018_03_06 13:01 h1fw1
2018_03_06 13:01 h1fw2
2018_03_06 13:01 h1nds0
2018_03_06 13:01 h1nds1
2018_03_06 13:01 h1tw1
2018_03_06 15:20 h1oaf
2018_03_06 15:20 h1pemcs
2018_03_06 22:03 h1fw0
2018_03_06 22:12 h1fw0
hugh.radkins@opsws1:06 0$ pwd
/opt/rtcds/lho/h1/data/startlog/2018/03/06