Shift started out with a windy morning and the Control Room in a post-power-outage glitch state. Here are today's activities:
I ended up power cycling the PSL environmental chassis and rebooting h1ecatc1. I was testing code for the rotation stage and had put things in a bad state. It should be recovered.
Restarted IOP models for the triple test stand and the BSC test stand in the staging building after the power failure, and made sure that the DAQ was running on each.
ITMY-ISI controls were turned back ON for the 6.18Hz peak investigation.
Stage 1-2:
- Level 3 Isolation Loops
- T750mHz blend on RX, RY and RZ (750mHz blend that includes the T240s)
- T100mHz_NO.44 on X, Y and Z (100mHz blend including Trilliums, with notch)
Stage 0-1 feedforward is ON.
After Dave restarted the front ends, the HEPI ITMY matrix values were empty. I restored them as they were Friday morning, turned the master switch and all the loops off, and saved a new safe snapshot from that configuration.
A soft link from /opt/rtcds/lho/h1/target/h1hpiitmy/h1hpiitmyepics/burt/safe.snap -> /opt/rtcds/userapps/release/hpi/h1/burtfiles/h1hpiitmy_safe.snap has been created
h1hpiitmy_safe.snap has been committed to the SVN.
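The relink above amounts to one `ln -s` plus a commit. A sketch of the equivalent commands follows; the scratch PREFIX is only so the demo can run anywhere (on site the real paths are used directly), and the svn step is shown commented out:

```shell
# Demo uses a throwaway prefix mirroring the real directory layout.
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/opt/rtcds/userapps/release/hpi/h1/burtfiles"
mkdir -p "$PREFIX/opt/rtcds/lho/h1/target/h1hpiitmy/h1hpiitmyepics/burt"

# The committed safe snapshot lives in userapps...
touch "$PREFIX/opt/rtcds/userapps/release/hpi/h1/burtfiles/h1hpiitmy_safe.snap"

# ...and the target area's safe.snap is a soft link pointing at it.
ln -sfn "$PREFIX/opt/rtcds/userapps/release/hpi/h1/burtfiles/h1hpiitmy_safe.snap" \
        "$PREFIX/opt/rtcds/lho/h1/target/h1hpiitmy/h1hpiitmyepics/burt/safe.snap"

# Verify where the link points.
readlink "$PREFIX/opt/rtcds/lho/h1/target/h1hpiitmy/h1hpiitmyepics/burt/safe.snap"

# On site, the snap file is then recorded in the SVN (commit message is illustrative):
# svn commit -m "new h1hpiitmy safe snapshot" h1hpiitmy_safe.snap
```

With `-sfn`, re-running the command simply replaces an existing link, so the relink is idempotent.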
sw-msr-h1fe decided to reboot itself today at 10:45AM; the cause is unknown, but it is most likely related to the power problems earlier in the morning. When I went looking for syslogs for possible clues, I realized that none of the Netgear switches had remote syslogging enabled, so I took the time to make that configuration change on all of them, as well as on the Fujitsu DAQ broadcast switch. The installed Cisco switches and APs already had syslog logging enabled. There is no guarantee that having syslog messages will give any indication of what caused a problem (in this case a reboot), but it provides more information than none at all...
Due to a pair of power failures, the DAQ test stand needed to be restarted. All front-end computers were rebooted. The x1ldasgw computer had locked up and was rebooted. The DAQ computers that mount the SATAboy disk array were also rebooted once x1ldasgw was operational, in order to get the file system mounted and the daqd processes running. Finished about 11:45 PDT.
(David Barker, Cyrus Reed, James Batch) LHO experienced a 2-second power failure at 00:40:43 PDT and a power glitch at 06:06 PDT, both of which caused front-end reboots. Both CDS file servers were also affected: the main file server entered a read-only state, while the backup file server locked up and was unresponsive. Both file servers were restored by 08:30 PDT.

All front-end computers were powered off, as several were no longer logically attached to their I/O chassis. On power-up, the following computers still could not find their I/O chassis: h1seib1, h1seih45, h1sush2b, h1susquadtst, h1susex, h1susexaux. In each case, cycling the power switch on the front of the I/O chassis (on, off, on) brought the chassis up, and the computer was then power cycled to get it reattached.

At that point, there were IPC problems in which channels could not be received from the following computers: h1lsc0, h1asc0, h1seih23, h1seih16, h1seib2. To correct this, the Dolphin switches in the MSR had to be power cycled, which in turn required power cycling all computers in the MSR attached to the Dolphin network. Once this was accomplished and a few models manually restarted, all appears to be well. The system has been back in operation since 11:30 PDT.
After the power glitch the PSL had to be restarted. Justin and I followed the procedure to start it in low-power mode; we reset the Kepco HV supply to 200mA, as Christina wrote in her alog, instead of the 400mA in the procedure. We also reset the LRA; even though we were starting in low-power mode, I wasn't sure whether the Beckhoff computer had lost power or not. The only hitch was that we couldn't open the external shutter. The situation was similar to last time: H1:PSL-MIS_FLOW_OK was 0, although H1:PSL-MIS_FLOW_INMON was within the limits. We tried the reset button on the MEDM screen, which didn't do anything, and we tried caput, which also didn't change the value. Jim and Dave managed to open it; they will write a log about what they did. The PMC is now locked, but the MEDM screen reports -30W transmitted power. H1:PSL-PWR_PMC_TRANS_GAIN is now -0.024, while it was 1 before the power glitch. The ISS is not staying locked, which was also the case before the power glitch.
These needed to be restarted after an apparent power glitch from a wind storm.
- Joe, Paul, Cheryl
FI Isolation Measurement:
Two 2" beam splitters were installed in the main IO path, and the IMC was relocked. The beam splitters sit at about 45 degrees w.r.t. the beam, and both reduce the power going into the IMC, requiring a change in threshold to lock the IMC. Locking was briefly delayed after a reboot of the LSC brought it back with its outputs turned off, but after re-enabling the LSC the IMC relocked right away. WFS were engaged, and the offsets the WFS sent to each IMC optic were within +/-4 (units?) of the offsets during the IMC lock before we installed the two beam splitters, so input pointing was very closely restored.
The purpose of the two beam splitters is to catch the beam that makes it back through the FI and IMC, which gives us a measurement of the isolation ratio of the FI. The return beam should reflect off the back surface of the second beam splitter, but unfortunately it was either not present or overwhelmed by a direct reflection from the 2" lens downstream.
The rework of the IO path in March left that 2" lens (downstream of the second beam splitter) at a very shallow angle to the IO beam, i.e., basically 90 degrees to the beam. This means the reflection that lens makes back toward the EOM is now actually clipping on the output aperture of the EOM. Before the rework, the reflection was offset from the EOM output aperture by about 10mm. When Volker was at LHO this summer and I made him aware of the clipping, it was decided to delay the fix (rotating the lens) until we go to high power, since it will affect the input pointing to the IMC.
Rotating the lens to eliminate the clipping on the EOM will be necessary before going to high power. Rotating it now would also benefit the isolation measurement, and given the fiducials we have and the reliability of IMC locking, I believe we should go ahead and rotate the lens.
General PSL comments:
Both temporary beam splitters have their reflected beams dumped on razor-blade dumps. I installed some additional dumps on the PSL table. My alignment irises that track the pointing of the IO path on the PSL show that the beam has shifted in yaw since I aligned them. My iris that looks at the leakage beam from the steering mirror at the output of the PMC also shows a change in pointing, probably related to the recent PSL work.
Attached are plots of dust counts requested from 4 PM September 26 to 4 PM September 27.
Attached are plots of dust counts requested from 4 PM September 25 to 4 PM September 26.
[JoeG, Keita and Kiwamu]
We continued working on the HAM1 installation this afternoon. All the optics except for the detectors are now on the table, including the newly installed QPD sled. We removed the counterweights, although we haven't checked the table level yet.
In addition, there are two (inglorious) highlights to mention from today's HAM1 incursion.
A large pitch offset in a tip-tilt (RM2):
After removing the counterweights, we went through the alignment process again in the in-vac detector path. The alignment needed a slight touch, mainly in pitch. We then found that RM2, which is a tip-tilt, showed such a large pitch that the reflected beam went too low. Joe tried to correct it by screwing the bottom 8-32 screw further in; this screw is designed so that, as it is screwed in, its shifting mass changes the pitch of the whole holder (so as to lower the reflected beam). We wanted to screw it in further, but the screw was too long and was in fact already touching the back cage. Keita brought a shorter screw (8-32 x 1") and we tried this, but sadly it still hit the back cage because the pitch was too large. Joe translated the upper blades toward the front to separate the back cage and holder further, but this didn't fix the issue. Note that the mirror holder was pitching in the same direction even without the screw.
On inspection, the clamping point of the wire on the left wire clamp (viewed from the front) of the mirror holder is too close to the back side, probably about 1 mm off-center toward the back. The right side also looked close to the back, but not by as much as the left. At the moment we think this is what produces the pitch in the mirror holder. Pictures of the clamping points on both sides are attached below; you can see how they are clamped through a rectangular window in the side cage. We have no idea whether it has always been like this, or when and why it happened. Adjusting the clamping point is certainly beyond what we can do in the chamber, because it probably involves redoing the whole alignment and adjustment process. We therefore removed the tip-tilt from the chamber; it is now in the clean booth near HAM1.
Plan :
Next week we will bring another tip-tilt, which was assembled for HAM6, and swap out the bad one. Hopefully we can place the tip-tilt smoothly and move on to some measurements to check the mode matching.
Some more pictures are available in ResourceSpace.
The wire not being centered in the clamp is intentional. With the wire centered, I found that the tip-tilt mirror tips backwards, and correcting its pitch would require a fairly heavy counterbalance screw; at least that is my experience. I had started with the wire centered and ran into this problem, and solved it by displacing the wire and then sliding the clamps on the mirror holder to coarsely adjust the pitch angle.
Yes, that is correct. The wires are not clamped in the centre of the mirror wire clamp. The Wire Clamping Jig is designed to accommodate this offset in a controlled way; this is also why there are distinct 'LEFT' and 'RIGHT' suspension wire assemblies.
I will see about getting some dimensions.
12:00 Thomas V. out of the LVEA, to return later
12:14 I increased the dust alarm levels at end Y location 1 to conditions outside of a clean room
13:01 Dave B. and Jim B. to end Y
13:33 Dave B. power cycling ISCEY to fix an IPC error
13:56 Jim B. trying to start h1susey
14:10 Jim B. trying to start h1susauxey
14:16 Jim B. and Dave B. to end Y to swap the cabling between the sus and susaux front ends and their chassis (sus was going to susaux, susaux was going to sus)
14:51 fw1 has stopped writing frames; Dave B., Jim B., Cyrus R., and Dan M. investigating
15:52 Cheryl V. and Paul F. done working in the H1 PSL laser room on isolation measurements
High dust count spike at 0.3 microns in the H1 PSL diode room; investigated, lights off, nobody responded
Hugo P. waiting for work in HAM1 to complete before starting MC measurements
Filiberto C. done with SUS and SEI field cabling at end X
[Pablo and Kiwamu]
We have measured the reflectivity of another 2" P-polarizing beam splitter (E040512-B3 SN: I0822-07) for S-polarization this morning at the OSB optics lab.
The results are :
HR reflectivity R = 92.5 % for S-pol @ 1064 nm, 45 deg
AR reflectivity R = 2.8 % for S-pol @ 1064 nm, 45 deg
Background :
Even though we already installed a 90% S-polarizing beam splitter on HAM1, we still think that 90% may not be high enough to attenuate the reflected beam and avoid damaging the detectors. One way to increase the safety margin is to simply replace this 90% BS with one of higher reflectivity. Because we have a 90% P-pol BS in hand, we wanted to try measuring its reflectivity for an S-polarized beam.
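For scale, a quick back-of-the-envelope check (my addition, not from the measurement itself): the fraction of the return beam leaking past the BS toward the detectors is roughly 1 - R, ignoring AR-surface and absorption losses.

```shell
# Sketch: power fraction of the return beam transmitted past the S-pol BS
# toward the detectors, for the installed 90% BS vs. the measured 92.5% one.
for R in 0.900 0.925; do
  awk -v r="$R" 'BEGIN { printf "R = %.1f%% -> transmitted ~ %.1f%%\n", r*100, (1-r)*100 }'
done
```

This gives about 10.0% leakage for the installed BS versus 7.5% for the newly measured one, i.e., only a modest improvement in the safety margin from the swap alone.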
Setup:
Same as the previous measurement (see alog #7817).
This measurement is now archived in DCC: