Something is wrong with ITMX HWS. I need help from experts.
As I wrote yesterday (26185), because the HWS codes were not running, I had restarted both the ITMX and ITMY HWS codes. Today I checked the output, in particular the spherical power, and noticed that the noise level was way too high (+/- 200 uD!). I restarted the code with a new reference image to see if it would help, but it actually made the noise level worse. Now it fluctuates by as much as 0.03 uD. Looking at the camera image itself, I saw a beam at the center and assumed that it was the right one. I also steered the ITMX compensation plate (which is currently intentionally steered by approximately 500 urad due to the green view port issue, 26164) back to the zero-offset angle, but this did not help at all. The laser power has dropped to 1.1 mW from the 1.4 mW it was on March 1st (25806), but I believe this should not increase the noise level so drastically.
One suspicious thing I found is the number of centroids, which now reads only 65; in the past it was 400. In summary, I have no idea what has gone wrong.
[Stefan, Sheila, Jenne, Kiwamu, Hang]
Tonight ended up being a night of retuning the PRC2 loops to make them sensible. They now have UGFs around 2 Hz, rather than ~20 mHz.
The thought was that perhaps the 0.4 Hz oscillation that we regularly see is a result of not holding the PRC2 loops tightly enough. If the PRC isn't following the motion of the SOFT loops (which move when we power up due to radiation pressure), then we get a modulation in the amount of power coupled into the arms from the PRC. This power modulation makes the SOFT oscillation even worse. If we force the PRC to follow the arm pointing, then we won't have this power modulation, so we will hopefully only have to deal with "regular" angular instabilities.
In the end, we tuned the shape of the PRC2 pitch and yaw loops while sitting at ASC_PART3 by compensating the PR3 suspension resonances. This made the loops beautifully stable, so we cranked up the gain. For the PRC2_Pit loop, we use the new shape (although low gain) for DRMI_ASC. For the PRC2_Yaw loop, we use the old loop shape and gain during DRMI_ASC, and then use the new loop shape for the main ENGAGE_ASC steps. Recall that all DRMI ASC loops except for MICH are turned off after offloading the alignments, and are left off for the carm offset reduction sequence. Also, during DRMI_ASC, the PRC2 loops actuate on PR2, while they actuate on PR3 in the final state, so it's not quite as weird as it sounds to be using different shapes at different states.
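As a rough illustration of the idea (not the actual filter we installed), here is a minimal sketch of compensating a lightly-damped suspension resonance with a matched zero pair and well-damped poles; the resonance frequency and Q below are placeholders, not PR3 values.

% Sketch only: cancel an assumed resonance at f0 (quality factor Q) with a
% matched pair of complex zeros, and re-introduce well-damped poles at the
% same frequency, so the loop gain can be raised without ringing it up.
f0 = 0.6;  Q = 20;                 % assumed resonance frequency [Hz] and Q (illustrative)
f  = logspace(-2, 1, 1000);        % frequencies to evaluate [Hz]
s  = 2i*pi*f;  w0 = 2*pi*f0;
comp = (s.^2 + (w0/Q).*s + w0^2) ./ (s.^2 + w0.*s + w0^2);   % zeros at Q=20, poles at Q=1
loglog(f, abs(comp)), grid on
xlabel('Frequency [Hz]'), ylabel('|compensation filter|')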
See the attached PRC2 open loop gain (OLG) measurements taken after our retuning and gain increases. All of these sequences have been added to the guardian.
Now however, when we try to go from 12W to 15W, we see an oscillation in PRC1 Yaw. We should probably retune the PRC1 loops, including actuating on the lower mass in addition to the top mass of the PRM (right now we only push on the top mass). We leave this for another day.
ASC plan for tomorrow: Run Hang's A2L script, and find SOFT offsets that put us at our actuation nodes. Hopefully this will reduce the angular effects of increasing the power, and help us get back to 20W.
Other things tonight:
1) I added filters with a gain of -1 to the roll mode damping for ETMY and ITMY, so that the gains for all of the loops will be positive from now on. This is in the guardian and should make things easier for people who need to damp roll by hand when it is rung up badly. I will work on doing this for bounce as well but that is not finished.
2) We had 2 locklosses in DRMI on POP tonight; looking at them, I saw that we were switching the coil drivers while the DRMI sensors were still ramping. I added a timer so that they don't switch until the ramp is over.
Jenne and Stefan also made some bug fixes to the code that switches off the soft loops when their outputs are too large.
It seems like the problem we have now is really the PRC1 Y loop going unstable, which is not too surprising since it is cross coupled with the PRC2 loop that we changed last night.
The attached screenshot shows the optical levers, the POP build-up, and the PRC control and error signals. The left panel is from a time when we rang up the half-Hz CSOFT instability, which is visible in the optical levers and somewhat in the POP build-up. The second panel shows what happened after we increased the gain in the PRC2 loops: we see nothing in the optical levers, but the PRC1 Y loop is unstable. This is the same problem Kiwamu had later in the night.
As Jenne said, it should be relatively easy to fix the PRC1 Y instability.
I added H1:VAC-LX_X2_PT170_PRESS_TORR and H1:VAC-LY_Y2_PT180_PRESS_TORR to the exclude channel list and rescanned. 528 channels were added and 2 were removed (see attached). H1:PEM-CS_DUST_LAB1_ENABLE and H1:PEM-CS_DUST_LAB2_ENABLE are still unmonitored.
Corey, Daniel, Stefan
We moved one of the two AS56/45 WFS from the AS port in-air table (ISCT6) to the REFL port in-air table (ISCT1). (Serial number S1300512)
The WFS was connected to the RF cables for the REFLAIR B WFS, and powered by the third slot on the WFS interface chassis.
One thing to note (again) is that those 5-channel RF connectors are impossible to plug in - in fact, the only way to guarantee that all connections are good is to remove both the connector backshell and the WFS box cover, and individually check that every channel pin and shield makes good contact. Corey has some pictures of this.
Also, we quickly wanted to check the RF transfer functions from the test input to each channel, but they looked broken. The chain definitely needs to be checked out again.
As Stefan notes, the "connectors" for the WFS were a bear to work with. Since the pins would move and you never knew if the connector was really connected, Stefan removed the covers from the connectors, and also removed the back panel of the WFS, to make sure the connector was seated. He did this for both.
To remove the connectors, one will need to remove the connector covers, and then disconnect the connectors (not ideal, we know).
Attached are photos of the connectors connected (with WFS panel removed and connector covers removed).
MCL controller:
PZT mirror:
WFS:
Model/DAQ:
ISCT1:
End Station PI models WP5785
Terra, Betsy, Jim, Dave:
The end station PI models (h1susetmxpi and h1susetmypi) were changed this morning to permit them to drive the PI inputs to the ESD drives. Prior to this change these channels were being driven by the main quad models. The process was:
remove the drive of the last two channels of the last DAC from ETM[X,Y]
restart the ETM[X,Y] model (to free up the DAC channels)
add the last two DAC channels to the ETM[X,Y]PI model
restart the ETM[X,Y]PI models
Now that the PI models are driving the ESD low-voltage inputs, it is important that they stop driving if the main quad model has stopped driving its DAC channels, either through the MASTERSWITCH being switched to OFF or through a WATCHDOG trip. This required two shared-memory IPC channels to be added to the ETM[X,Y] models. To get the MASTERSWITCH channel out of the QUAD_MASTER.mdl model, the model was changed to add an output port. For the end station models this was plugged into the SHMEM IPC senders; for the corner station ITM[X,Y] it was terminated.
Code checked into SVN r12907
First Version of ITM PI model installed at LHO
Dave:
The first version of the corner station SUS PI model was installed on h1susb123 in the remaining core. This is a 'work in progress' as it does not yet have any inputs or outputs and is essentially a place-holder at the moment. This completes the increase in H1 models (along with h1ngn) and brings the H1 model total to 104.
DAQ Restart
Jim, Dave:
The DAQ was restarted at 14:57 PDT to incorporate the model changes described above.
The DAQ restarted cleanly with no problems.
Filiberto, Patrick, Richard
WP 5784
The major issue encountered was that the X4 PT525 and X5 PT526 BPG402 EtherCAT gauges reported 0 torr after being connected to the end X vacuum chassis. I have not yet been able to figure out why. These gauges have been moved back to h1ecatx1 for now. All of the status information that I could find for them seemed to indicate that they were fine, and that the hot cathode ion gauge was in use (active sensor: 2). Richard measured the voltage directly from the pins on one of the gauges and it reported around 1.72 V, which would have been in the right range. I tried a factory reset by writing to CoE index FBF0 0x01, but it is unclear if a reset occurred; if one did, it did not help. I tried changing the units readback to counts and got a large changing number, but strangely the linked variable still read 0. I will connect a spare BPG402 gauge to the end Y chassis in the EE lab and do some more testing.
In regards to the other changes, I trended the pressure and high voltage for IP 12 and it is back to around what it was before the change to Beckhoff. It is hard to tell at this point if the smoothing for the CP8 pump level is noticeable or improving the PID control.
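As a quick sanity check on that 1.72 V number, assuming the standard BPG400-series log-linear analog output (the constants below should be confirmed against the BPG402 manual before trusting them):

% Assumed BPG400-series log-linear output: p [mbar] = 10^((U - 7.75)/0.75)
U = 1.72;                        % voltage measured at the gauge pins [V]
p_mbar = 10^((U - 7.75)/0.75);   % ~9e-9 mbar
p_torr = p_mbar * 0.750062;      % ~7e-9 Torr, i.e. a plausible end station pressure
fprintf('%.2e Torr\n', p_torr)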
Now that the commissioners have started to use CO2Y, I want it to be obvious if it trips.
Kiwamu, Nutsinee
In order to understand the characteristics of the CO2Y rotation stage, I ran Kiwamu's script with the TCS CO2 channels, asking the rotation stage to turn to random angles. The results are attached below. The first plot shows that the CO2Y rotation stage works 85%-93% of the time (out of 300 samples). Changing the rotation stage speed by 50% didn't make much difference (if anything, it made things worse). The nominal speed is 100%. When the rotation stage didn't go to the requested angle, the difference was always roughly 30 degrees. For comparison, the second plot shows data from the CO2X rotation stage, which behaves quite nicely: the requested and measured angles agree within half a degree (out of 100 samples).
It might be interesting to see the measured power for each of these as well. I would probably trust that more as an indication of the measured angle than the reading from the stage.
Below is a plot of measured CO2 power vs. requested angle. Notice that the CO2X power follows the sinusoidal pattern quite nicely, while the CO2Y power shows a ~30 degree phase delay from the main sine wave when the rotation stage is busted.
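For reference, the power out of a waveplate-style rotation stage should go as sin^2 of twice the stage angle, so a ~30 degree stage error shows up as a shifted sinusoid. A minimal sketch (the maximum power and the factor of two are assumptions, not calibrated CO2Y numbers):

% Sketch only: expected power vs. requested angle for a half-wave-plate
% rotation stage, with and without an assumed 30 degree stage error.
theta = 0:5:360;                            % requested angle [deg]
Pmax  = 60;                                 % assumed maximum power [W]
P_ok  = Pmax * sind(2*theta).^2;            % stage lands on the requested angle
P_bad = Pmax * sind(2*(theta - 30)).^2;     % stage off by ~30 degrees
plot(theta, P_ok, theta, P_bad), grid on
xlabel('Requested angle [deg]'), ylabel('Power [W]')
legend('stage on target', 'stage off by 30 deg')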
John, Chandra
Connected aux pumping carts to the following annuli:
1. HAM 7/8, with secondary turbo. Pressure: 3.4e-5 Torr
2. HAM 11/12, with secondary turbo. Pressure: 9.5e-5 Torr
3. HAM 9, no secondary turbo. Pressure: 7.0e-4 Torr
Still need to connect the BSC4 annulus. Two annuli systems are leaking, based on the attached plot of diagonal gauge PT-140. Leak rate O(e-3) Torr-l/s.
Tuesday evening: all three pressures on annuli are now low e-5 Torr range. I closed each turbo valve individually to see if pressure would rise in diagonal (on PT-140). No observed changes in pressure over 10 min time span for each. We have greatly reduced the leak rate by reducing pressure in the annuli.
The lock out/tag out for the HPO power supplies was removed. Both the internal and external shutters of the HPO had their flags replaced. The position of the flags was also adjusted so that the magnetic position sensor is triggered when the shutter is opened/closed; TwinCAT uses the position sensor for animation on the user screen. Each of the four laser heads was powered up, one at a time, in 5 amp increments up to the nominal pump power (50 A). No problems were observed with the fibre bundles. The diode currents were then set back to zero.
As we were closing up, a small pool of water was observed on the base plate. Fortunately none of it was spraying anywhere. The source was traced to the flow sensor in head 3. It was removed, its PTFE tape was redone, and it was re-installed. No leak was observed for a period of time afterwards, but this should be monitored and kept in mind if the crystal chiller keeps complaining about a low water level or a flow sensor problem.
If the front end laser trips out and you do not know how to bring it back on line, PLEASE ASK or get someone who knows how to do it.
Jason, Peter
I reset the 35W FE power watchdog at 20:52 UTC (13:52 PDT).
re aLog 26140
Here are snaps of 0-120 Hz spectra for HAMs 2 through 6, with the reference traces from 13 March (before the switch last week) and the current traces from this morning at 3am. There are not even really subtle differences, so I'd say at least no harm was done by the switch to the common timing source.
Here are photos of the HAM CPS Satellite racks which have been mounted onto the BoxBeam Mount. This is why the HAM switch-over took much longer than the BSCs'. The single-port Power Regulator board was swapped for a dual on the primary (CPS1) rack. All grounding was made more robust, for both the primary green ground wire and the probe cable shield (woven copper jacket). The racks were mounted on the Box Mount and the cables rerouted such that the racks can still be accessed but are more out of the way, or can be moved when needed for safety, such as when slinging heavy wrenches around while removing a door.
The photos are in order, HAM2 Rack1, Rack2, HAM3 Rack1....HAM6 Rack2.
@DetChar: Can you watch HAM3 for the next few days, and see if this has had any effect on 0.6 Hz?
I logged into H1HWSMSR and investigated the problem. As a first step, I looked at the archived HWS images directly. For reference, these are stored in the /framearchive folder and are .raw images. The images contain no header and are literally just a file containing the intensity of each pixel in 16-bit format. An image can be read into an array in MATLAB using the following code snippet.
% open the raw image file (no header, 16-bit unsigned integers)
fid = fopen('1142711735_hws_avg_image.raw', 'r');
% read the 1024x1024 pixel intensities into an array
arr = fread(fid, [1024,1024], 'uint16');
fclose(fid);
The problem seems to be corrupted images coming in from HWSX. See the bottom of the image in the attached example. I downloaded five images from random times over the last 24 hours and found they all exhibited the same problem.
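As a rough way of flagging such frames automatically, something like the following could work; the size of the band checked and the read orientation are guesses on my part, not taken from the HWS code.

% Sketch only: flag a frame as suspect if a band at one edge is constant
% (e.g. all zeros), as in the attached corrupted example.
fid = fopen('1142711735_hws_avg_image.raw', 'r');
arr = fread(fid, [1024,1024], 'uint16');
fclose(fid);
band = arr(:, end-49:end);          % last 50 columns as read (orientation assumed)
suspect = all(band(:) == band(1));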
I'm not aware of any code changes to the way the HWS images are read into the computer. There is a report of 1 zombie process on H1HWSMSR, so it may be worthwhile trying to reboot the machine but at the moment, I'm not sure why the images are now corrupted.
I'll investigate further.