CP3 was filled at ~15:00 local time. Liquid was observed 1 min 10 sec after opening the valve 1/2 turn. Next fill Thursday.
We have been flip-flopping between needing the FASTIMONs or the NOISEMONs for a variety of commissioning and DETCHAR reasons, but cannot support both sets of channels. Today we reverted a small portion of the work from alog 25329 that integrated the FASTIMONs into QUAD-land: we put the NOISEMONs back into the BS and QUAD models at their original data rates and made the FASTIMONs commissioning-frame-only. This required restarts of the h1susauxb123, h1susauxex, and h1susauxey computers.
The SUS AUX Channel Monitor MEDMs then had broken links, which I partially fixed. I fixed the readback channel, but the long colored bit display associated with every one of these channels (it holds the same info as the readback) is a pain to change, so that will have to wait until I can regrow some patience.
To help avoid problems caused by out-of-date safe.snaps, I went through and changed the safe.snap symbolic links for some models to point to down.snaps. Since most of these down.snaps were not in the userapps repository, I copied them to userapps and committed all the burt files to the svn in the relevant directories. Then I removed the down.snaps from the target area and made symbolic links called down.snap, pointing to the down.snap in the userapps repository, which will be loaded when the guardian runs down.
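The relinking step can be sketched as follows (temp directories stand in for the model's target area and the userapps checkout; the real workflow also does the svn add/commit described above):

```shell
# Illustrative sketch only; paths are stand-ins, not the real site paths.
set -e
TARGET=$(mktemp -d)     # stands in for the model's target area
USERAPPS=$(mktemp -d)   # stands in for the userapps svn checkout
echo "burt snapshot" > "$USERAPPS/down.snap"   # pretend this was committed to svn
# Point safe.snap at the repository copy of down.snap
ln -sf "$USERAPPS/down.snap" "$TARGET/safe.snap"
readlink "$TARGET/safe.snap"
```

With this arrangement, whatever guardian writes to the repository down.snap is automatically what gets restored as safe.snap on a model restart.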
I did this for:
omc, lsc, lscaux, asc, ascimc, alsex, alsey, iscex, iscey
This means that, at least for these models, there is now one fewer SDF file to worry about. Anyone restarting models should still be aware that the down may not be up to date, and should either update down before restarting or do a burt restore to make sure they are leaving things the way they found them.
John, Chandra

IP3's Varian power supply failed. We replaced it with a Gamma PS. It still needs a signal cable (a couple of PSs do). The PS is currently set to 7 kV; tomorrow we will step it down to 5 kV.
Richard, Patrick

Last week I found what turned out to be the cause of the issue I had reading the pressure from the BPG402 gauges at end X when we tried to move them from h1ecatx1 to h0veex (alog 26198). There was a type mismatch between the IO variable for the gauge pressure and the PLC variable I linked it to: the IO variable is defined as a REAL, while I had defined the PLC variable as a LREAL. Last week I updated the script that generates the code to change the PLC variable to a REAL and committed it to svn.

Today Richard moved the gauges back to the h0veex vacuum chassis. I did an svn update on h0veex, recreated the target code, and ran it. The pressure reading still remained a flat 0 in the system manager, but reported valid values in the IOC. I am calling this a success.

I did a scan for devices; the solar-powered gauge and the second fiber-to-EtherCAT converter are not seen, so I disabled them in the system manager. Richard says the fiber cable may be damaged. The channels for these gauges still exist on h1ecatx1; the next task will be to remove them.

I also installed the software necessary to run medm on h0veex: Xming-6-9-0-31, EPICS Win32 Extensions 1.40, and python-2.7.11 (to generate the medm screens from a template substitution script I wrote). To get medm to run I had to add the system environment variable DISPLAY = localhost:0. When medm started it gave messages about finding each process variable on both the CDS NIC and the Beckhoff EtherCAT NIC. I tried adding the EPICS_CAS_INTF_ADDR_LIST system environment variable set to 10.1.0.60 and restarting medm, but the same messages appeared, so I removed it. I then set the following system environment variables and restarted medm again:

EPICS_CAS_AUTO_BEACON_ADDR_LIST = NO
EPICS_CAS_BEACON_ADDR_LIST = 10.1.255.255

The messages changed to finding each channel on both 10.1.0.60 and h0veex (10.1.0.60 is the IP address for h0veex).
I'm not sure why this is, but it seems to be working, so I am leaving it with this configuration for now. To access the medm screens after a reboot it is currently necessary to start Xming, then start medm, then open H0_VAC_MENU_CUSTOM.adl in C:/SlowControls/TwinCAT3/Vacuum/MEDM/LHO/Target.
We first found that one bit in the POPX demod whitening readback, corresponding to the least significant bit of the whitening gain for the third quadrant, is stuck on (attached). The corresponding BIO cable was CAB-H1:ISC_88 (i.e. the second board in the chassis).
I was suspicious of the cable, but it turned out to be the BIO itself. We removed the field cables for POPX from the BIO chassis in the CER and used an I/O tester to look at the output; the offending bit was always stuck ON (= low). A short circuit at the connector?
This is not tragic, as we can still use the odd-dB gains (3, 9, 15, 21, 27, 33, 39, and 45 dB), but it would be a good idea to check these during the laser upgrade.
Filter BIO seems to be OK.
Cleared HEPI Accumulated WD counters for HAM2, HAM3, HAM4, ITMY, and ETMX. See attached screenshot for values before reset.
I've taken pictures of the optics on IOT2L, showing labels for all optics.
There are a small number of optics that don't have labels, and some are identifiable by the etching on the barrel.
Jason installed a new beamsplitter and beam dump on the MC REFL path to accommodate high power; a picture of the new components is attached.
Pressures are still falling, but HAM 11/12 may have an outside o-ring leak:
HAM 8: 8 mA @ IP and 4.6e-6 Torr at turbo
HAM 9: 10 mA (no more red light) and 3.4e-6 Torr at turbo
HAM 11: 10 mA+ (still with red light) and 3.0e-6 Torr at turbo
I reset the PSL 35W FE power watchdog at 15:36 UTC (8:36 PDT).
J. Kissel

Carlos suggested that all workstations need a reboot to receive a security update, and that whoever was in first should restart all workstations. It appears that no one had been in the control room before me this morning, so I've rebooted all workstations.
Jenne, Sheila, Ed Daw, Jim
Earlier today we had an alignment that was far enough off that we couldn't engage the ASC with the normal sequence. This might have been due to the IM move, which was meant to restore the IM positions from before the week-long maintenance when the cameras were also moved. Jenne moved PR3 and PRM by hand as she slowly reduced the offsets in the soft loops. Once this was done and all ASC loops were successfully engaged, we reset the green QPD offsets and updated the green camera offsets. Jim did an initial alignment with these new references, and we locked with a high recycling gain and error signals relatively close to zero.
We are again using the SRC1+2 loops during the whole CARM offset reduction sequence.
We powered up to 10 W and again noticed that the X arm soft motion is not well controlled. We looked at XTR A as a possible problem: the power on that QPD does not scale correctly as we increase power (the other QPDs all scale the same way as POP DC). We tried taking it out of the loop, but saw the same problem when we powered up. We then looked at the combination of TMS QPDs that is insensitive to DHARD, with the idea that our problem might be a cross-coupling from HARD to SOFT. We can try this tomorrow; the input matrix is in the attached screenshot.
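As an illustration of the kind of combination we are after (the numbers below are made up; the real coefficients come from the measured sensing of the TMS QPDs), one can solve for QPD weights that null the DHARD response while keeping unit response to DSOFT:

```python
import numpy as np

# Hypothetical sensing matrix: rows are two TMS QPD signals,
# columns are their responses to [DHARD, DSOFT].
M = np.array([[1.0, 0.3],
              [0.8, -0.5]])

# Choose weights c orthogonal to the DHARD column, then normalize
# so the combination has unit response to DSOFT.
c = np.array([M[1, 0], -M[0, 0]])
c = c / (c @ M[:, 1])

print("DHARD response:", c @ M[:, 0])   # ~0 by construction
print("DSOFT response:", c @ M[:, 1])   # 1 by construction
```

The input-matrix elements in the attached screenshot play the role of `c` here.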
We noticed that there was a model restart on March 14th where the safe.snap was restored, and that we still had bad values because of it. It would be good if people considered restoring the settings to a reasonable state as part of the job of doing a model restart (i.e., please make sure you are not leaving a lot of settings changed when you do model restarts, tomorrow and in the future).
Michael, Krishna
This morning, we started the pump-down of BRS-2 by temporarily removing the safety interlock valve, which exists to prevent back-filling of the vacuum in case of a power outage. We expect to pump down with the cart in a day or two and switch to the ion pump, so this arrangement is okay.
Michael and I then placed 3 sets of Piezo stacks under the BRS-2 platform. We then drove the piezo stacks using the Beckhoff DAC module and an amplifier (0-100 V).
The tilt-transfer function measurement consists of driving the platform in pure rotation and measuring the response signal in the autocollimator, at a few frequencies. As the BRS-2 is connected to the pump and is not thermally shielded yet, the low-frequency noise is substantial, hence the measurement takes several hours. The attached pdf shows the measurement and a model assuming d = 10 microns (d is the distance between the center of mass (COM) and the pivot, with positive indicating COM below the pivot). The transfer function is shown relative to the drive which was roughly 1.7 microradian amplitude.
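A minimal sketch of why the tilt response scales with d (my own toy model with assumed numbers for mass, inertia, resonance, and Q; only d = 10 microns comes from the fit above) treats the balance as a driven oscillator whose DC response goes as m*g*d/I:

```python
import numpy as np

# Assumed parameters, for illustration only (not BRS-2's measured values)
m  = 4.5          # kg, balance mass
I  = 0.9          # kg m^2, moment of inertia
f0 = 8e-3         # Hz, beam-balance resonance
Q  = 100.0        # quality factor
d  = 10e-6        # m, COM below pivot (the fitted value from the log)
g  = 9.81         # m/s^2

def tilt_tf(f):
    """Platform tilt -> balance angle, simple driven-oscillator model."""
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    return (m * g * d / I) / (w0**2 - w**2 + 1j * w * w0 / Q)

for f in (1e-3, 0.1, 1.0):
    print(f"{f} Hz: |T| = {abs(tilt_tf(f)):.3g}")
```

Shifting the COM up (reducing d, as planned below) scales this coupling down proportionally, which is the point of the washer adjustment.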
Plan for tomorrow:
1. Open the vacuum can and shift the COM up by ~10 microns.
2. Close up and begin pump down again. Remeasure the tilt-transfer function.
I've added a new measurement point and curves showing error bars on the d measurement.
Thanks to Betsy's efforts, we have a portable weighing scale, which we will use to add ~0.56 grams of weight (a small custom washer) to the top of the beam balance, shifting the COM up by 9.5 microns.
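A quick sanity check of the mass-to-COM-shift arithmetic (the balance mass and washer height are my guesses; only the 0.56 g and 9.5 micron figures come from the log):

```python
# Adding a small mass m_w at height h above the pivot shifts the COM up by
# roughly m_w * h / (M + m_w).
m_w = 0.56e-3   # kg, washer mass (from the log)
M   = 4.5       # kg, balance mass (assumed)
h   = 0.076     # m, washer height above the pivot (assumed)

delta_d = m_w * h / (M + m_w)
print(f"COM shift ~ {delta_d * 1e6:.1f} microns")
```

With these assumed numbers the shift comes out near the quoted 9.5 microns, so the washer mass is at least plausible.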
Dave, Nutsinee
TJ spotted the DIAG_MAIN guardian EZCA error (failure to grab a HWS channel) this morning, but we thought it would resolve itself. By evening the problem still persisted. Dave then found that the hwsmsr computer had crashed. We restarted the machine and re-ran the HWS code for both ITMX and ITMY. ITMX had low peak counts when we restarted it, so I stopped the code, turned the camera and frame grabber off and on, then re-ran the code. That seems to have fixed the problem. I attached images of the ITMX HWS before (low peak counts) and after the camera restart (taken while the code wasn't running).
The end station chiller cleaning is ongoing. We ran out of water at End-X before the second chiller could be completed, so we relocated to End-Y. I have called for a water delivery, but it is not expected until Friday. We are, and have been, running only one chiller at each end station; the idle chiller at each station has now been cleaned, so I have switched units at each station. Chiller 2 at EX and Chiller 1 at EY are the ones currently running.
TITLE: 03/28 Day Shift: 16:00-00:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Struggled to get past DRMI locking most of the day due to ASC issues. Jenne has been parked at the OPS station helping me out. An EQ before lunch didn't help and took ~2 hours to ring down.
LOG:
16:26 Jeff B to EX
19:00 Jeff B back
19:19 Karen leaving EY
19:44 Chandra to EX checking pump cart
19:57 Jeff B to EX
19:57 TJ done with charge measurements
21:18 Chandra back, going to MY with John, Bubba
21:26 Krishna and Michael to EY
21:39 Chandra done at EY
I took charge measurements at both ends today, they finished around 12:50 local. I'll post plots soon.
The four plots are here
A few weeks of data were not placed into the long trend, and a large fraction of the data seems to be bad. I took out what was obviously bad, but the error bars are still huge on many of the points. Plots are attached, but this definitely needs a second look, hopefully tomorrow.
Note: I think it is time to change the sign on both of these ETM ESDs. ETMX has now migrated ~20-30 volts away from 0, albeit at a slow rate. The ETMY sign flip from last month needs to be investigated, since it seems that charge is still growing (slowly) there. More to follow.
We did a little bit of ASC work today.
First, while Kiwamu was running a TCS test, I started a script to automate phasing of the WFS. It uses the lockin: it first runs a servo to set the phase of the lockin demod, then servos to minimize some signal. It is currently set up to phase the REFL WFS to minimize the PR2 pit signal in Q for both REFL 9 and REFL 45, and to minimize the SRM pit signal in AS 36 Q. There is some code for exciting DHARD, but we need to test amplitudes, phases, and gains for this. The current version of the script does its job, although it is painfully slow, and is checked into the svn under asc/h1/scripts. The resulting phases are in the attached screenshot.
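The core of the phasing servo can be sketched in a few lines (a toy model, not the checked-in script; the plant phase and gain are made up): rotate the demod phase until the chosen Q signal is nulled.

```python
import numpy as np

true_phase = 0.7   # rad, unknown plant rotation (made up for the demo)
amp = 1.0          # signal amplitude

def demod_q(phase_guess):
    """Q quadrature seen when demodulating with phase_guess."""
    return amp * np.sin(true_phase - phase_guess)

phase = 0.0
gain = 0.5         # integrator gain, small enough to converge
for _ in range(200):
    phase += gain * demod_q(phase)   # integrate Q toward zero

print(f"converged phase = {phase:.4f} rad (true {true_phase})")
```

The real script does the same thing through the lockin EPICS channels, which is why it is slow: each iteration waits on live demod data.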
We saw that the instability in CHARD pit was because LP9 somehow got turned on again; it is now off and CHARD seems fine.
We tried powering up and were fine at 10 W. We had an instability in PRC1 and PRC2 yaw at 13 W. I reduced the Q on the complex zeros at 1.1 Hz for PRC2 Y, which gives us slightly better phase and gain near the point where we seem to be unstable. Attached is a screenshot of the OLG measured with white noise at both 2 W and 10 W; we might need to do a swept sine to get a good measurement around 1 Hz.
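To see why lowering the Q helps (a generic illustration, not the installed filter; the actual Q values are not in the log, so 10 and 2 are illustrative): a complex-zero pair at 1.1 Hz contributes more phase lead near 1 Hz when its Q is reduced.

```python
import numpy as np

def zero_pair_phase_deg(f, f0=1.1, Q=10.0):
    """Phase (deg) contributed at f by a complex-zero pair at f0 with quality Q."""
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    s = 1j * w
    return np.degrees(np.angle(s**2 + (w0 / Q) * s + w0**2))

for Q in (10.0, 2.0):
    print(f"Q={Q}: phase at 1 Hz = {zero_pair_phase_deg(1.0, Q=Q):.1f} deg")
```

The widened zero pair trades a sharper magnitude feature for extra phase margin in the band where the loop was going unstable.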
After about 10 minutes at 12 Watts, we had the usual fluctuations in the recycling gain. So the high bandwidth PRC2 loops haven't totally solved the problem.
For the record, these are angle settings that give approximately good CO2 powers tonight, and the powers to aim for from Kiwamu's note:
| | X power (W) | X angle | Y power (W) | Y angle |
| unlocked | 0.5 | 76 | 0.23 | 82 |
| 10 W | | 78 | | 79 |
| 20 W | 0.3 | | 0.1 | |
We have twice had the rotation stage for CO2 Y go to an angle that was wrong by a lot (sending a few watts to the test mass for a few seconds).
I'm leaving the IFO locked at 10Watts.
Sheila,
Do you know, to any precision, how much power was transmitted at CO2 Y? Can you say what the upper limit was?
thanks
The first time H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT read back 3.2 Watts for about 20 seconds.
The second time H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT read 3 Watts for about 10 seconds.
This morning I looked at some of the data from Friday night when we had our usual CSOFT instability (2016-03-26 07:52:56 UTC).
First, I used the moment of inertia here, and the calibration of the arm circulating power from the transmon QPDs here, to estimate whether it is reasonable that radiation pressure from fluctuations in the arm circulating power (on the order of 2.5% fluctuations on 35 kW of circulating power) could cause the angular motion that we see (0.1-0.4 urad pp on the test masses). It is not: the miscentering that would be required is far too large.
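A back-of-envelope version of that estimate (the moment of inertia and pitch-mode frequency below are my assumed values, not the ones linked above; the power and angle figures are from the log):

```python
import numpy as np

c     = 2.998e8            # m/s, speed of light
dP    = 0.025 * 35e3       # W, fluctuating part of the arm power (from the log)
I     = 0.42               # kg m^2, rough test-mass pitch inertia (assumed)
w0    = 2 * np.pi * 0.5    # rad/s, assumed pitch-mode frequency
theta = 0.1e-6             # rad, low end of the observed motion (from the log)

# Radiation-pressure torque on a spot miscentered by d_mis:
#   tau = 2 * dP * d_mis / c, and theta ~ tau / (I * w0^2) below resonance.
d_mis = theta * I * w0**2 / (2 * dP / c)
print(f"required miscentering ~ {d_mis * 100:.0f} cm")
```

With these assumptions the required miscentering comes out at the several-centimeter level, far larger than any plausible spot offset, which is the sense in which radiation pressure is ruled out.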
I never attached the screenshot of the PRC2 Y OLG to the original alog. Here it is.
I've committed these SUSAUX model changes and medm edits to SVN.