Before we understood that the noise we were seeing was from the bounce and roll modes of ETMX (see Alexa's alog), we were looking around at OSEM signals, and we saw that there is a large peak at around 25 Hz in L1_VOLTMON_LR_DQ. It was there before the bounce and roll modes were rung up, and it is there with damping on and with damping off.
(Sheila, Alexa)
We have been searching for the source of the 1/f feature present at 1-50 Hz in the noise plots. We tried a few different things, as described below. The noise spectrum under nominal conditions is shown in REF0-2.
All these plots can be found under /ligo/home/sheila.dwyer/ALS/HIFOX/COMM/COMM_COherences_April8.xml. In each set of three, the first reference is always the power spectrum, the second the RMS, and the third a coherence plot. I have attached an image which shows the nominal power spectrum along with the cases with the HEPA fan on and with the laser power reduced (NoiseSearchPic1).
For future improvement of the QPD servos, we adjusted the input beam onto the QPDs with the servo off, so that the beam was fairly centered on both QPDs. We did not get a chance to work on the actual servos, though.
We proceeded to do some more noise hunting, but halted the effort when we saw two new spikes in the noise spectra, one at 9.8 Hz and the other at 13.8 Hz (see second png). It turns out these are the bounce and roll modes of the suspension, which must have gotten rung up during an ISI trip (refer to Sheila's alog).
(Sheila, Chris, Alexa)
Since we had seen a difference in the 1/f noise when decreasing the laser power at EX, we decided to try the following: we placed an ND filter in the path of the x-arm green transmission on ISCT1. With the ND filter, the ALS-C_TRX counts dropped from 0.94 to 0.471. We saw an increase by a factor of 2 in the 1/f noise, whereas we would expect a factor of sqrt(2). We should repeat this measurement, decreasing the power further. The red line in the attached figure is the nominal spectrum, while the light blue line (REF 27) is with the ND filter in place. (Overall these spectra look higher than those posted earlier in the attached alog; it's possible that this change came from alignment drift.)
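A quick sanity check of the expectation above: if the 1/f noise were shot-noise limited, the sensing noise would scale as 1/sqrt(P), so halving the transmitted power should raise the floor by sqrt(2) rather than the factor of 2 we measured. A minimal sketch of that arithmetic, using the TRX counts from this measurement:

```python
import math

# ALS-C_TRX counts before and after inserting the ND filter (from this entry)
p_nominal = 0.94
p_nd = 0.471

power_ratio = p_nominal / p_nd        # ~2x power reduction
expected = math.sqrt(power_ratio)     # shot-noise-limited: noise scales as 1/sqrt(P)
observed = 2.0                        # measured increase in the 1/f noise floor

print(f"expected x{expected:.2f}, observed x{observed:.1f}")
```

The mismatch (x1.41 expected vs. x2 observed) is why repeating the measurement at still lower power is worthwhile: a noise floor scaling like 1/P instead of 1/sqrt(P) would point away from shot noise.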
Subsequently, we decided to look into the noise at the COMM BBPD.
Signal from the PD w/ no RF pre-amp @ 100Hz BW:
With PD blocked and no RF pre-amp @ 100Hz BW:
With PD blocked and RF pre-amp (+13dBm) @ 100Hz BW:
Instead of using the beatnote signal from the PD, we used a signal generator (-31 dBm at 79.8 MHz) to lock the PLL and measured the in-loop noise spectrum (with the input gain of the PLL at -25 dB and the two compensation filters off). We will post plots tomorrow. At first glance, however, this does not appear to be the 1/f noise source we have been hunting.
I have attached the plots and data we collected last night. The noise spectrum was measured out of the IMON of the PFD with the PLL locked to a signal generator instead of the PD signal. This noise spectrum is loop-suppressed by the COMM PLL loop. The COMM PLL had an input gain of -25 dB with the two common filters off (unlike the nominal settings). We took an OLTF of this modified loop.
The corner station controller MEDM was found white-boxed. The computer (h1hpipumpctrll0) was running--we did not have to restart it. However, the servo program was not running; it has been down since early Sunday morning. The VFD was running at 31.xx Hz, the last value it received. The system was recovered in the best way possible:
Restarted the servo--see the wiki. This database does not process any records when started and begins in manual mode, so even though its output reads zero, it hasn't actually written anything to the VFD at this point, and the VFD continues driving the pump motors at 31 Hz.
The servo motor speed slider was then rapidly dragged to the nominal running output for 80 psi; you can look in the trends at the CONTROL_VOUT channel, which for the LVEA system is ~1080. If you are dexterous enough, this will only slightly glitch the system, if at all. I got lucky and saw no pressure drops.
Once the pressure is steady, tweak the output until it reads 80 psi. Put 80 psi in the setpoint (it is zeroed by the database restart). Engage the servo (push the On button).
Following the h1isiham5 model change last Friday to correct the BIO card assignments, Jim W reported continuing binary output switching problems with h1isiham5.
We have found the problem.
The drawing D1000298 shows the following:
The IO chassis has two BIO cards installed. Each card has an input and an output connector, and each connector pigtails to two cables (lower 32 channels and upper 32 channels).
The CER rack has two binary input chassis installed (one for ham4 and one for ham5) and only one binary output chassis shared between the two chambers (ham4 uses lower 32 channels, ham5 uses upper 32 channels).
So the inputs are straightforward: for CARD0 both cables go to the back of the ham4 binary input chassis, and for CARD1 both cables go to the back of the ham5 binary input chassis.
The problem is with the binary outputs. The drawing shows that for CARD0 the lower 32 cable goes to the lower 32 of the binary output chassis and the upper 32 cable is not used. For CARD1, the lower 32 cable goes to the upper 32 of the binary output chassis and the upper 32 cable is likewise not used.
We found that CARD1's binary output is missing its cable, and both cables from CARD0's output go to the output chassis. In other words, ham4 gets two cables and ham5 gets none. The fix is to run a fourth cable to the CARD1 output and use its lower 32 for the output chassis upper 32 connector.
The h1isiham5 model is in agreement with the drawing, so no software change is needed.
(Rich M, Sheila, Alexa)
HEPI and ISI ETMX tripped because we were playing with the suspension watchdogs.
We had a hard time recovering from this trip, which caused several more trips. The WD plotting script is not working this afternoon, possibly because of our network problems, so we don't have any plots. Once HEPI was isolated, we tried letting guardian isolate the ISI. It tripped immediately on CPSs (stage 1); after we cleared this and tried again, it tripped on GS13s (stage 2).
We then paused the guardian, moved all the blends to Start, and gave the guardian another try. (Note: the ground motion is fairly quiet today.) This tripped again, on GS13s.
We then paused the guardian and tried the command script, stage 1 level 3; it tripped on the actuator limit. I then stopped taking notes.
The rough procedure that Rich used to bring it back was damping both stages, then turning on one DOF at a time (RX, RY, Z, RZ, X, Y), stage 1 first, then stage 2. Then Alexa moved the blends over to Tbetter without anything tripping.
We don't know why the things that used to work don't anymore, but now ETMX ISI is harder to untrip and neither the guardian nor the command script seem to be able to do it anymore.
Rich may not have mentioned it in the alog, but after our trip the other day while trying to switch blends, he changed the filters from zero-crossing to ramp, which should make it easier to switch blends without tripping the ISIs.
One thing about guardian: it's not clear to me how we should unpause it if we need to pause it for some reason. To pause it, we paused all three nodes: stage 1, stage 2, and the manager. When unpausing the subordinates, they don't come back in managed mode. I was able to get them back to managed mode by going to INIT on all three, but in the meantime they were doing their own thing (happily, they didn't trip the ISI). What is the best way to do this? Also, the right way to pause and unpause a manager with subordinates may be something that needs to be made more obvious to the user.
Alexa also tried pausing; she paused only the manager. Bringing that back was not a problem, but it doesn't pause the subordinates.
quoth Sheila:
We don't know why the things that used to work don't anymore, but now ETMX ISI is harder to untrip and neither the guardian nor the command script seem to be able to do it anymore.
But you did mention one likely important thing that changed: new blend filters (Tbetter?). What happens if we go back to Tcrappy? Do we have the same problems?
also quoth Sheila:
One thing about guardian: it's not clear to me how we should unpause it if we need to pause it for some reason. To pause it, we paused all three nodes: stage 1, stage 2, and the manager. When unpausing the subordinates, they don't come back in managed mode. I was able to get them back to managed mode by going to INIT on all three, but in the meantime they were doing their own thing (happily, they didn't trip the ISI). What is the best way to do this? Also, the right way to pause and unpause a manager with subordinates may be something that needs to be made more obvious to the user.
Alexa also tried pausing; she paused only the manager. Bringing that back was not a problem, but it doesn't pause the subordinates.
This definitely sounds like something that could be improved. It is true that all nodes, manager and subordinates, need to be independently paused and unpaused. I'll try to think about how to make that smoother. In the meantime, here's the procedure that should be used:
The manager INIT state resets all the workers, restarts them, and puts them into managed mode. That should be the most straightforward way to do things for the moment.
Thanks Jamie.
We did try both the command script and guardian with all the blends on start, which used to work. I don't think we tried turning sensor correction off, which is another change.
I heard rumors being spread that this alog was written during an earthquake we hadn't noticed, so that there is no reason to worry about our inability to use guardian or the command scripts to re-isolate ETMX. That is false; there was no earthquake. PS: this is Sheila, still logged in as Stefan.
Matlab has been changed to version 2011a, with limited licenses available for Linux control room workstations. Avoid using Matlab for model changes. Version 2012b will be reinstalled when licenses become available. No Matlab licenses are available for OS X at this time.
LVEA: Laser Hazard
Nitrogen delivery to Mid-X
Ed Watt – Insulation on X-Arm
08:45 Jodi – Working at Mid-Y
08:45 Ken – Replacing flow bench fan motors in Optics Lab
08:57 Filiberto – Camera work at End-Y
09:30 Corey – Survey of ITMX spool cameras
09:38 Sheila & Alexa – Alignment work at End-X
09:42 Schneider electrical contractor on site working on GC UPS system with Richard
09:50 Jonathan – Restarting the aLOG
09:50 Aaron – Cabling work in LVEA
10:00 Rick, Justin, Peter & Cris – Cleaning up in the H1-PSL enclosure
10:04 BSC2 – HEPI & ISI watchdogs tripped
10:06 BSC1 – SUS, HEPI, & ISI watchdogs tripped
10:15 Schneider electrical contractor on site working on CDS UPS system with Richard
10:30 Praxair – Nitrogen delivery to Mid-X
11:25 Cyrus – Outside internet & phones are down due to fiber & copper cut at 400 area
12:30 Betsy & Travis – Working in the LVEA SUS Test Stand area
12:30 BSC9 – SUS, HEPI, and ISI watchdogs tripped
12:50 Richard – Changing dust monitor #5 (Beer Garden) to fixed vacuum connection
12:54 Corey – Going to End-X TMS lab
13:15 Mitch & Scott – Taking forklift to Mid-X to get BSC plates
13:20 Corey – Going to End-Y
13:30 Michael – Survey tour in LVEA
13:35 Hugh & Greg – Replacing Parker valve at HAM4
13:40 Cyrus – Going to End-Y
13:45 Richard – Working on dust monitor #5 tubing
14:06 Alastair & Dave – Turning on CO2 TCS-X laser
14:20 BSC2 – HEPI, ISI watchdogs tripped – Ken working on cabling near BSC2
14:44 Kyle – Going to End-Y, working on BSC6
@ 14:20 the BSC2 HEPI and ISI watchdogs tripped. Shortly after, the HEPI watchdog on HAM3 tripped. There was work on the cable trays above these chambers at this time. Both chambers have been recovered.
For a while the BS chamber's HEPI has been an outlier because I accidentally loaded the position filters into the level 2 slot, whereas all the other chambers use the level 1 slot. I used foton to copy everything over this afternoon, so now the controllers occupy both slots. This means that the BS HEPI can now be isolated just like the other chambers by pushing the Isolate Level 1 button on the Commands2 screen. The Level 2 button also still works; I didn't change anything there.
I discovered why the reboot of h1seih45 yesterday was not logged the same way a model restart is. When the OS is rebooted, the startup procedure for the front end models is slightly different and does not create the target/<name>/logs/reboot.log file. On h1boot I modified the /diskless/root/etc/rc.local file to call a new script, /diskless/root/etc/log_fe_startup.sh, which creates the reboot.log file for each model running on the computer (including the IOP model).
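A rough sketch of the kind of logging this adds. This is not the actual log_fe_startup.sh; the TARGET path is an assumption, and the model list is hard-coded here from the h1seih45 restart list below purely for illustration (the real script would discover the models running on the host):

```shell
#!/bin/sh
# Sketch: on OS reboot, append a reboot.log entry under each model's
# target logs directory, including the IOP model, so reboots are
# recorded the same way model restarts are.
TARGET=${TARGET:-./target}   # assumed target root, for illustration only
STAMP=$(date)
for model in h1iopseih45 h1hpiham4 h1hpiham5 h1isiham4 h1isiham5; do
    logdir="$TARGET/$model/logs"
    mkdir -p "$logdir"
    echo "$STAMP $model started by OS reboot" >> "$logdir/reboot.log"
done
```

Hooking a script like this from rc.local means the per-model reboot.log exists regardless of whether the models were started by hand or came up automatically with the OS.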
Although Greg & Jim replaced the Parker valve here just last week, the replacement is performing at 133% of nominal. L124 needs calibration. We'll leave this in bleed mode for at least a couple of hours.
While a contractor was on site commissioning our new GC UPS, I asked him to look at our CDS UPS, as we have been having communication problems with it. He quickly identified the problem: we put the unit into bypass, swapped a lead/lag module around, and restored the system. We are on the alternate module, and a replacement will be ordered.
A backhoe working in the 400 area for DOE struck our fiber connection to the Hanford fiber loop. Lockheed Martin is in the process of evaluating the damage and will let us know how bad it is. Best case, the break is after our feed and on the Hanford loop, so we could reroute. Worst case, it is in our fiber run and will have to be repaired.
Jeff B
@ 10:00 BSC2 HEPI & ISI tripped. Reset the watchdogs and both have recovered.
Jeff B
@ 10:02 the BSC1 SUS, HEPI, and ISI watchdogs tripped. Reset the watchdogs and all systems have recovered.
I closed these late yesterday but then turned them off again overnight. I've now collected overnight spectra and have re-closed the loops. I'll make comparisons with the HEPI L4Cs and the ISI T240s, looking for evidence of minimal harm from the loops. Attached is the comparison--I'll examine it more closely, but I think there are elevated areas with the loops on between 0.7 and 10 Hz. Maybe we can live with this or tune it out.
Adjusted fan 1 output to 12000 SCFM, which also corresponds to the fan output at X-End. This was requested to see if we could reduce a noise peak that showed up in Jim Warner's test spectra.
model restarts logged for Mon 07/Apr/2014
none reported by the software, but...
Looks like I've got a bug in the reporting software: we rebooted the h1seih45 computer and the models started up automatically. Somehow this method of startup was not logged; here are the restarts, entered manually:
Mon Apr 7 11:51:26 2014
h1iopseih45
h1hpiham4
h1hpiham5
h1isiham4
h1isiham5