Turned on the RF driver and the power supply for the CO2Y laser, connected the IR sensor, and got green lights on the control board. Using the PWM controller I tried to turn the laser on but wasn't seeing any response on the power monitor. After double-checking that the power supply was set to 28V with a 28A limit and that the RF driver was set to 40.68MHz, Fil and I found that the wrong RF cable was plugged in. After plugging in the right one, the laser powered up in PWM mode and I was able to verify that the rotation stage is working as well. Tomorrow I will check that the alignment hasn't changed significantly and that the periscope picomotors are operating correctly.
Kyle, Gerardo
It would be desirable to remove all of GV11 and GV12's external AIP plumbing to aid in the installation of a bake-out enclosure scheduled to be installed as part of the CP4 decommissioning exercise. Today we connected a tee, having one port with a conductance-limited needle valve attached, and a small turbo pump (backed by a pump cart) to GV11's annulus pump port. Bottled UHP N2 was then plumbed to the needle valve, which allowed us to administer a controlled amount of dry nitrogen into the inlet of the turbo pump. With the turbo inlet pressure at around 1 Torr or so, we de-energized GV11's AIP controller and valved in our test setup. The net effect is that we were slowly venting the annulus volume with dry nitrogen while monitoring PT210 and PT245 to see if any of GV11's inner O-rings were leaking this administered N2. As can be seen in the attached graphs, we confirmed that there was leakage into the PT210 side. We allowed the annulus pressure to increase into the tens of Torr before abandoning the experiment, shutting off the N2, and letting the turbo evacuate the administered gas.
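For the record, here is a minimal sketch of how one could trend the two gauge readings over the injection window to look for the leak signature. The channel names and GPS times below are assumptions for illustration only, not the actual channels used:

from gwpy.timeseries import TimeSeriesDict

# Assumed (placeholder) channel names for the PT210 and PT245 gauges
channels = [
    'H0:VAC-LY_PT210_PRESS_TORR',
    'H0:VAC-LY_PT245_PRESS_TORR',
]

# Placeholder GPS window spanning the N2 injection
start, end = 1200000000, 1200007200

data = TimeSeriesDict.get(channels, start, end)
for name, ts in data.items():
    print('%s: min %.2e Torr, max %.2e Torr' % (name, ts.value.min(), ts.value.max()))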
The laser power watchdogs are back on.
J. Kissel, R. McCarthy, M. Pirello
While trying to finally finish out the MEDM infrastructure for the new SQZ suspensions and the re-arranged OMs, we discovered that the RT and SD OSEMs are dead on SRM. They apparently died on Jan 10 at ~19:30 UTC (about 11:30a PT). Remember, we know the problem is highly likely out of vacuum, since I was able to take successful transfer functions on Jan 3rd (LHO aLOG 40002). Much more likely culprits are Gerardo's vacuum work mentioned in LHO aLOG 40090, or Fil and Liz's cable pulling mentioned in LHO aLOG 40085. After tracing and confirming the integrity and correctness of all HAM5/6 SUS cables from the AA chassis to the remote satellite amplifier, I noticed that two of the SRM (T3 LF RT SD) sat amp fault lights were on (see attached picture). Power cycling the respective SRM T3LFRTSD coil driver chassis didn't find the problem. Talking with Richard, he suggested it was the last cable run from the sat amp to the chamber, since it's a tight squeeze getting the cable to seat properly at the feedthrough. As suspected, he went out to diagnose and was able to wiggle the connection at the feedthrough enough to intermittently restore life to RT and SD, but he didn't have the right tools to really make the connection secure, so he (or Fil) will finish the fix tomorrow morning.
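For reference, a minimal sketch of how the failure time could be confirmed by trending the two OSEM readbacks around Jan 10 19:30 UTC; the channel names follow the usual SUS OSEMINF convention but are assumptions here, not verified against the model:

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeriesDict

# Assumed channel names for the SRM top-mass RT and SD OSEM readbacks
channels = [
    'H1:SUS-SRM_M1_OSEMINF_RT_OUT_DQ',
    'H1:SUS-SRM_M1_OSEMINF_SD_OUT_DQ',
]

start = to_gps('2018-01-10 19:00:00 UTC')
end = to_gps('2018-01-10 20:00:00 UTC')

data = TimeSeriesDict.get(channels, start, end)
plot = data.plot()
plot.gca().set_ylabel('OSEM readback [counts]')
plot.savefig('srm_rt_sd_osems.png')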
This morning I was able to remove the strain relief cover and get to the connector. The aLIGO feedthroughs are more finicky than the iLIGO ones, so clear access is needed to ensure the 25-pin cables are seated properly. All OSEM signals are present on the MEDM screens.
J. Kissel, R. McCarthy
Ran a (damping loops closed) set of top mass transfer functions for a health check after Richard properly seated the cable. Functionality appears to be completely restored. Nice work! I haven't exported and properly processed the data yet, but the data files are committed to the SVN here:
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/SRM/SAGM1/Data/
2018-01-19_1618_H1SUSSRM_M1_WhiteNoise_L_0p01to50Hz.xml
2018-01-19_1618_H1SUSSRM_M1_WhiteNoise_P_0p01to50Hz.xml
2018-01-19_1618_H1SUSSRM_M1_WhiteNoise_R_0p01to50Hz.xml
2018-01-19_1618_H1SUSSRM_M1_WhiteNoise_T_0p01to50Hz.xml
2018-01-19_1618_H1SUSSRM_M1_WhiteNoise_V_0p01to50Hz.xml
2018-01-19_1618_H1SUSSRM_M1_WhiteNoise_Y_0p01to50Hz.xml
Corresponds to FRS Ticket 9744.
[TVo, Jenne, Cheryl]
There was a lot of random walking this morning, but with cameras newly focused and pointing at useful things, we now have the beam hitting the bottom of the SRM cage. Once we found the beam, which required moving ITMX (ITMY misaligned), BS and PR3, we walked those optics back to their good locations from last night, while also moving SR2 and SR3 so that we kept the beam on the SRM cage. Now all PRMI-related optics are back at their locations from last night, and we still have the beam at SRM. I realigned ITMY and we have some nice MICH fringes. Next up will be moving the beam up onto the SRM itself, then seeing if we can either get SRMI/DRMI flashes, or maybe get the SRM beam retroreflected back to the POP port.
Attached is a screenshot of where the SR2 and SR3 mirrors need to be, to get the beam to the bottom of the SRM cage. Note that SRM's OSEMs are being looked at by the CDS/EE team, and so we may not be able to actuate it tonight, in which case we'll start here in the morning.
[Keita, TVo, Jenne]
We looked at trying to center the beam on SR2 using SR3, but we think that we'd like to misalign SRM to ensure that we're not confusing ourselves a bit. So, we'll come back to this in the morning.
Path forward: Misalign SRM, use SR3 to center on SR2 baffle aperture, use SR2 to center on SRM. Depending on time and how we're feeling, we could either work a bit on SRM alignment to get the beam back to POP and see flashes to confirm that we're really happy with SR2 and SR3, or we could go straight to HAM6 alignment since we don't care about SRM's alignment for that work.
Some might feel that the H1 BS is more susceptible to tripping in an earthquake. The HEPI Inductive Position Sensors (IPS) on the BS are outliers in that the vertical sensors have the most extreme values. I can't explain why this is the case, but through installation and commissioning these sensors have managed to get into an apparently tilted attitude. The HEPI could have done this itself through relaxation, strain relief, or tension introduction; or maybe the sensors have been disturbed. The BS has had a Parker valve replaced, and this may have tensioned something or disturbed the sensors. However it happened, the vertical sensors read -6100, -14500, -7400, & +10800 counts for V1--V4 before I changed anything.
The first thing done was to stroke the platform in its isolated state. The platform tilted and tripped the ISI when the vertical offset went from 600 to 700um. When checking into the trip, I mistakenly looked at the OUTFs thinking I was looking at the IPSs. The first plot has the ISI_WD, HPI tilts, the Z OFFSET & location, and the Actuator vertical OUTPUTs. When I looked at this and saw that V2 did not move when the OFFSET was stepped to 700um and Z started moving up, I thought this must have been the reason for the trip. So, I went to the chamber and, with the Z OFFSET at -500um, looked at the V2 sensor. With the IPS reading ~-2000 (now I was looking at the correct channels), and while the Actuator was close to the mechanical limits, I thought it should still have a little room to move up. Still, the IPS reading did not reflect the actual position; so I de-isolated, which put the Actuator at about the center position, and given that, zeroed the IPS. With this change the Z, RX & RY computed locations changed, and the TARGET POSITION for isolation was adjusted accordingly. Re-isolated the platform with no difficulty.
So, since I was on the adjustment track of the IPS, I stroked Z again, this time in the opposite direction and drove to dZ = -800um and did not trip HEPI. Okay, back the other direction. The platform was fine at dZ = 600um but tripped while attempting to move further. This time I had the channels right and it was in fact the V4 Actuator causing the trip. At the chamber, even without the Z OFFSET, the V4 Actuator appeared to be at the mechanical limit. Not sure where it was getting the ~14000ct stroke but certainly no surprise it was stopped and caused the trip; apparently lots of distortion.
I did not adjust the V4 sensor position as I did not see that getting us anything, and for it to trip at ~24000 cts is not terribly unreasonable. So, sadly, the BS HEPI is limited in its upward vertical stroke, and correcting this will require disruptive action: the Actuator will need to be, in effect, disconnected, recentered, and reconnected. I suspect I can do this in a few to several hours.
The second plot has the four IPS, with Y normalized to zero earlier. The flat line in the middle is with dZ at 600um, and the ramp following is when the offset was set to 700um. Comparing with the previous ramp up from 500 to 600um, clearly V2 and V4 are unable to drive like V1 and V3, and also not as well as on the previous ramp up. Given the visual checks, I don't think V2 is mechanically limited, but I certainly think V4 is. So it is interesting that both V2 and V4 show the same trend. Maybe the limit on V4 causes a platform distortion that V2 sees? Ah ha! All DOF loops are closed, and that is what is required to keep the tilts right. Need to do local motion tests to see if V2 is in fact limited, but not today.
Before concluding this invasive work for the day, I took photos of the four Actuators to look at their approximate positions within the mechanical limits. These were taken while isolated but with no offsets, and with the V1--V4 position sensors reading -6100, -2100, -7400, & 10800 cts. V1 & V2 look pretty much the same and centered, so I don't attach V2 (also, for some reason, the V2 file was 4 MB). V3 is slightly high of center and V4 is visibly near its mechanical limit.
WP 7297
Nutsinee Terry Daniel
We were able to lock the squeezer laser using the PZT only. BW around 25kHz with the attached parameters.
Cheryl, Jenne, TVo
Cheryl got the SRM digital camera focused and aligned correctly; attached are the lights-on/lights-off images for future beam spot reference. The view is down the manifold, looking at HAM5 from HAM4. SRM is on the left, SR3 is on the right.
Completion of 01/18/18 operator shift:
21:00 (13:00) Taking over for Jeff
21:04 (13:04) Richard -- going to LVEA
21:04 (13:04) TJ -- out for lunch
21:13 (13:14) Marc -- going to CER to check suspension cabling
21:16 (13:16) Greg -- going into LVEA to work on CO2Y cable and maybe turn on laser
21:16 (13:16) Tyler and Mark -- going to End-Y
21:27 (13:27) Richard -- out of LVEA
21:43 (13:43) Kissel and Marc -- going to CER
21:54 (13:54) Kissel and Marc -- out of CER
21:56 (13:56) Betsy, Travis, Eiichi -- to Mid-X, LVEA, then End-Y
22:27 (14:27) Kissel -- going to CER
22:36 (14:36) TJ and Sheila -- going to Optics Lab
22:45 (14:45) Nutsinee -- going to Squeezer Bay
23:22 (15:22) Kissel -- out of CER
23:29 (15:29) Richard -- going to LVEA
23:45 (15:45) Richard -- out of LVEA
00:00 (16:00) End of shift
With help from the Apollo crew, Betsy and I removed the ETMy QUAD lower structure from BSC10 using the BSC repair arm. For future reference, the elevator on the repair arm does fit without swinging back the TMS (the ACB was already swung back). The only issue we encountered was that an EQ stop screw at the PenRe had backed itself out a few threads and contacted the removable side plates of the elevator, bending the aluminum EQ stop bracket as we lowered the LS down. It would probably be best to install these plates after the LS is fully lowered in the future.
The main chain is currently residing in the welding cleanroom and the reaction chain is in the staging room of the garbing cleanroom.
[Patrick, Chandra]
Following the up-to-air vent, we degassed the filaments in the hot cathode pressure gauges PT-170 & PT-180 for 2-4 minutes via software.
Richard informed me that the laser had tripped this morning. I reset the laser without any problems. I increased the current for head 1 to the maximum of 60A and increased the diode temperature for diode 3 to 23 degC. The output of diode box 1 has taken a bit of a dive since the weekend, so much so that I'm starting to become concerned that the thermal lens in the Nd:YAG rod isn't what it should be to keep the resonator stable. At this point in time we have two options as I see it, depending on commissioning constraints:
i. replace the diode box
ii. revert to low power mode
Both would take about half a day to do.
I have not turned the power watchdogs back on.
Attached is a trend of the DB operating current increases, starting in April 2017. The ongoing failure of DB1 can be seen in the large operating current increases that have been required over the last 2 weeks to keep the laser running. The life of DB1 has been slightly extended by adjusting its operating temperature in lieu of adjusting its operating current, hence the flat spot near the end of the graph.
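For anyone wanting to reproduce this trend, a minimal sketch using minute-averaged data is below. The channel name is an assumption for illustration and has not been checked against the PSL model:

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

# Assumed (unverified) minute-trend channel name for the DB1 operating current
channel = 'H1:PSL-OSC_DB1_CURRENT.mean,m-trend'

start = to_gps('2017-04-01 00:00:00 UTC')
end = to_gps('2018-01-19 00:00:00 UTC')

current = TimeSeries.get(channel, start, end)
plot = current.plot()
plot.gca().set_ylabel('DB1 operating current [A]')
plot.savefig('db1_current_trend.png')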
The power of diode 3 in DB1 has dropped by 50% of its original power. The TEC is already at the maximum of its performance. You should lower the temperature further and observe the performance of the HPO. The current for the diode boxes can be set higher than 60A if you enter the value directly rather than using the plus and minus buttons. The diode itself will not last much longer.
If you're trying to stretch out the lifetime, I seem to recall that you can actually run these diodes up to 65A in a pinch.
848 channels added. 868 channels removed. H1:TCS-ETMX_HWS_SOFTWARE_ERROR_CODE unmonitored. List attached.
[Sheila, TVo, Jenne]
Today was an exercise in trying to get PRX aligned, so that we trusted our input beam and were on our way to being ready to align the HAM6 table. So far, we think that we have the beam aligned to ITMX, and that ITMX is aligned well enough to retroreflect the beam. We have not, however, been able to see flashes for PRX. We are seeing fringing on the REFL camera, and on REFL_RF9_I. Hoooray! (This is a little bit stream-of-consciousness...we were definitely totally absolutely going home several hours ago)
Plan for going forward:
Note from Cheryl: There is some scattered light somewhere near the output of the Faraday. She has a photo looking in one of the viewports, and it's clear that there's light in places where we don't want light. Her suggestion was that perhaps the input beam is too close to a mirror near the output of the Faraday, and we're clipping. Her proposed check was moving IM1 to see if that scattered light glow (as seen on the PRM camera view that she set up) changed. Moving by a few thousand counts on IM1 didn't change the PRM camera image, so I'm not sure that it's the Faraday or the input beam we're having a problem with. We'll continue checking on this in parallel with the main alignment work.
We don't have analog cameras for POP or SR2. They might be unplugged? And I guess we need some CDS help to get the BS camera back. It would be helpful for alignment if we could have those all working tomorrow.
Note to ourselves: Tomorrow we will try to do SRY alignment to get down to HAM6.
The gain selection doesn't seem to work for the Sqz Fiber Trans PD.
The signal is not reaching the AUX 7 chassis at HAM 6. The following voltages were observed on the chassis.
Each of these signals originates at the same terminal, an EL2124 in slot 13 of Squeezer Control Chassis 3.
Best guess: either a software initialization failure or a bad terminal.
edit: Or incorrectly connected power supply!
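As a first software check, here is a minimal sketch that compares the requested gain-select state against whatever readback is available, to help separate an initialization problem from a wiring or power problem. Both channel names are hypothetical placeholders; the real SQZ channels would need to be looked up:

from epics import caget

# Hypothetical placeholder channel names -- substitute the real SQZ channels
request = caget('H1:SQZ-FIBR_TRANS_PD_GAIN_SET')   # requested gain state
readback = caget('H1:SQZ-FIBR_TRANS_PD_GAIN_MON')  # monitored gain state

print('requested:', request, 'monitored:', readback)
if request is None or readback is None:
    print('channel unreachable -> suspect software/initialization')
elif request != readback:
    print('setpoint and monitor disagree -> suspect terminal, cabling, or power supply')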
TVo noticed a moment ago, while we were trying to align the central part of the IFO, that the BS suspension was moving a lot. A quick look showed that the oplev is quite far off the center of the QPD, and it looks like the servo is kicking the BS around a bunch. For now, I've turned off the oplev damping on the BS suspension, but we should look into this in the near future (maybe we just need to center the oplev -- are we still holding off on this for any known reason?).
Opened FRS Ticket 9737.
CO2Y is now running in remote operation mode. The beam is dumped before the periscope, though. I quickly checked the beam path and everything looks clear, although I did remove the alignment laser as it could possibly have been clipping the IR camera path. Once the laser proves to be running stably with the new chiller, the beam dump will be removed.
CO2Y ran for just under 3 days before faulting out on a flow rate trip. A quick inspection of the chiller shows a nominal water level and no faults indicated, so the flow rate sensor is the main suspect. Also note the odd drop in the power supply and power output; not sure why that happened.
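If it would help to confirm the trip sequence, here is a minimal sketch for trending the chiller flow and laser power around the fault. The channel names and GPS window are assumptions for illustration, not the actual channels:

from gwpy.timeseries import TimeSeriesDict

# Assumed (placeholder) channel names for the CO2Y flow rate and output power
channels = [
    'H1:TCS-ITMY_CO2_FLOW_RATE',
    'H1:TCS-ITMY_CO2_LASER_POWER',
]

# Placeholder GPS window around the flow-rate trip
start, end = 1200500000, 1200503600

data = TimeSeriesDict.get(channels, start, end)
plot = data.plot()
plot.savefig('co2y_flow_trip.png')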