Activities:
Details:
J. Kissel, T. Shaffer
Replacing S/N 551 (bad) with S/N 553 (hopefully good). New open light current value: 29106; OSEMINF OFFSET = -14553; OSEMINF GAIN = 1.0307.
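The new OFFSET and GAIN are consistent with the usual convention of centering at half the open light current (OLC) and normalizing to a nominal 30000-count OLC; a minimal sketch, assuming that convention:

```python
# Sketch of the assumed OSEMINF calibration convention: OFFSET centers the
# signal at minus half the open-light current (OLC), and GAIN normalizes a
# nominal 30000-count OLC to unity.
def osem_inf_settings(open_light_counts, nominal_counts=30000):
    offset = -open_light_counts / 2            # OSEMINF OFFSET
    gain = nominal_counts / open_light_counts  # OSEMINF GAIN
    return offset, gain

offset, gain = osem_inf_settings(29106)
print(offset, round(gain, 4))  # -14553.0 1.0307
```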
Daniel changed some internal wiring in h1sqz. Model was restarted, no DAQ restart needed.
F. Clara, J. Kissel, T. Shaffer

We're back at the SUS OPO, trying to solve the last problem -- that the V2 OSEM shows a poor frequency response above its resonances (aLog 41256) that appeared between 2018-02-27 and 2018-03-29. The poor frequency response has the same shape as the well-known sensor-to-actuator electronics coupling -- a high-Q zero at 10-20 Hz (a.k.a. high-frequency turn-ups, a la IIET Tickets 478040536 and 40613). The set of tests below has convinced us that the H1 SUS OPO V2 AOSEM needs to be replaced, so we'll do that this afternoon.

We conducted several electronics tests this morning:
- Ran the standard "ground loop" checks for shorting of the readout chain's shield to ground: disconnected the H1V1H2V2 DB25 cable on the back of the US satamp, plugged a DB25 breakout board into the cable, then:
  - Checked for any short of pins to chamber ground: ran a lead from the closest chamber ground (in this case we clipped to a bolt on one of the viewport blanks on the HAM5 side of the MCB Output Beam Tube between HAM4 / HAM5) into the Return (black) input of the DVM, and checked for continuity between each of the 25 pins and chamber ground. There shouldn't be any continuity, and we found none.
  - Restoring the standard (black) lead into the Return of the DVM, checked that pin 13 is continuous with the shield of the cable and that no other pin is continuous with ground. Only pin 13 should be continuous with the cable shield; that was true.
  - Checked that no pin is unintentionally/unexpectedly continuous with any other pin. For this OSEM cable fed into a US Satellite Amplifier (D1002818), we expect pins 1 & 14, 4 & 17, 7 & 20, and 10 & 23 to have low resistance (~19 ohms), because these pairs connect the positive to negative legs of the OSEM coils. No other pins should be continuous with each other. We found all of this to be true.
- Scoured the in-chamber readout cabling, looking for potential electrical grounding between the readout cable and the ISI or chamber walls. We found nothing obviously suspicious, and no change in frequency response after several moves.
- TJ physically disconnected the micro-D connector of the Quadropus (D1000239) from the V2 OSEM at the OSEM, then re-seated and re-tightened the connection. No change in response.
- TJ physically disconnected the DB25 connector of the Quadropus from the Table Cable Bracket (D1001346), then re-seated and re-tightened the connection.
- TJ found that one of the screws securing the flexi-circuit (D0901252) to the V2 AOSEM assembly (D0901065) was loose, so he tightened it. After these two changes, we saw a significant increase in the zero frequency.
- Further tightening and loosening of the AOSEM's flexi-circuit screws continued to have an effect on the zero frequency; tighter made it *better*, but not perfect like all other OSEMs.
- As a final nail in the coffin for the flexi-circuit of the V2 OSEM, we swapped the micro-D connectors of the Quadropus cables between H2 and V2, and read out each OSEM with the opposing signal chain -- taking a transfer function of the V2 OSEM with the H2 electronics chain, and of the H2 OSEM with the V2 electronics chain. V2's electronics chain reading out the H2 OSEM showed an entirely normal high-frequency response [first attachment]. H2's electronics reading out the V2 OSEM showed a similarly bad response [second attachment].

Attachment Key:
[first attachment]
- black : Perfect (in-vacuum), fully functional reference from L1 SUS OPO
- magenta : Before today, clearly identified badness (high-frequency zero) with V2 electronics reading out V2 OSEM.
- red : Fully functional, today, with V2 electronics reading out H2 OSEM.

[second attachment]
- black : Perfect (in-vacuum), fully functional reference from L1 SUS OPO
- blue : H2 electronics reading out V2 OSEM. Still bad -- the final nail in the coffin.
- magenta : Before today, clearly identified badness (high-frequency zero) with V2 electronics reading out V2 OSEM.
- red : During today, after tightening the flexi-circuit screw.
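The pin-to-pin continuity expectations above can be sketched as a quick check function; the pin pairs and ~19 ohm coil resistance come from this log, while the function itself and its pass/fail thresholds are illustrative assumptions:

```python
# Hypothetical sketch of the pin-continuity expectations for an OSEM DB25
# cable feeding a US Satellite Amplifier (D1002818): pin pairs 1&14, 4&17,
# 7&20, 10&23 are coil legs (~19 ohm); all other pin pairs should read open.
COIL_PAIRS = {(1, 14), (4, 17), (7, 20), (10, 23)}
OPEN = float('inf')  # open circuit on the DVM

def check_pair(pin_a, pin_b, resistance_ohms):
    """Return True if a measured pin-to-pin resistance matches expectations."""
    if (pin_a, pin_b) in COIL_PAIRS or (pin_b, pin_a) in COIL_PAIRS:
        return 10.0 < resistance_ohms < 30.0  # coil leg, roughly 19 ohm
    return resistance_ohms == OPEN            # everything else open

print(check_pair(1, 14, 19.2))  # True: coil pair at ~19 ohm
print(check_pair(2, 15, OPEN))  # True: unrelated pins should be open
```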
A significant roll of the ETM optic is apparent in the attached photos, which corresponds to the fibers breaking on one side of the optic. We used the welding mass jack to lift the mass off of the compressed viton EQ stops and to unload the remaining two fibers such that if they are compromised they would not catastrophically break. I would estimate that not more than 1-2 mm was required to fully unload and slacken the remaining fibers, indicating that they were already almost completely unloaded after the break.
The photo of the wipe shows a piece of metal particulate found on the lower structure and gives you an idea of the size and nature of particulate commonly found around the Quad as a result of the assembly/disassembly procedure.
To get h1seiey running again I rebooted it using the front-panel reset button. This in turn caused a Dolphin glitch of h1susey and h1iscey. I rebooted h1iscey because it had an uptime of 211 days; on h1susey I just restarted all the models (it has an uptime of 38 days).
Laser Status:
SysStat is good
Front End Power is 0.0003016 W (should be around 30 W)
HPO Output Power is 0.7154 W
Front End Watch is GREEN
HPO Watch is RED
PMC:
It has been locked 2 days, 18 hr 26 minutes (should be days/weeks)
Reflected power = 20.46 W
Transmitted power = 42.4 W
PowerSum = 62.87 W
FSS:
It has been locked for 2 days 23 hr and 39 min (should be days/weeks)
TPD[V] = 2.293 V (min 0.9 V)
ISS:
The diffracted power is around 0.13% (should be 3-5%)
Last saturation event was 0 days 3 hours and 43 minutes ago (should be days/weeks)
Possible Issues: (all accounted for, due to PSL commissioning)
Front End Power is Low
PMC reflected power is high
ISS diffracted power is Low
LRA out of range, see SYSSTAT.adl
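As a sanity check on the PMC numbers above, the PowerSum should just be the sum of the reflected and transmitted powers; a minimal sketch using the values copied from the status report:

```python
# Cross-check the PMC power budget from the status report above:
# PowerSum should equal reflected + transmitted power, to rounding.
reflected_w = 20.46
transmitted_w = 42.4
power_sum_w = 62.87  # value reported on the medm

budget_error_w = abs((reflected_w + transmitted_w) - power_sum_w)
assert budget_error_w < 0.02, "PMC power budget inconsistent"
print("PMC power budget consistent to within rounding")

# The ISS diffracted power is outside its nominal 3-5% band, as noted
# (expected here due to PSL commissioning).
diffracted_pct = 0.13
print("ISS diffracted power in range:", 3.0 <= diffracted_pct <= 5.0)
```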
Compared to past plots, the signals at EY are consistent with it being unlocked, the signals at EX are consistent with it being locked, and the Corner Station HEPI pressures are consistent with being unlocked. This matches the information from the MEDMs, so no issues.
Morning Meeting:
CS:
EY:
EX:
As has been reported in previous alogs, the 2.6.34 kernel has a timer counter overflow bug which makes the machines susceptible to freeze-up if they have been running in excess of 208.5 days. Last week h1build froze, and this weekend h1seiey, h1susauxb123 and h1susauxh34 did the same. The machines marked with an asterisk in the list below have an uptime which exceeds 208.5 days and could freeze at any time. We should work on verifying that their SDF settings are up to date and then reboot them at our earliest convenience.
* h1psl0 up 211 days
* h1seih16 up 211 days
* h1seih23 up 211 days
* h1seih45 up 211 days
* h1seib1 up 211 days
* h1seib2 up 211 days
* h1seib3 up 211 days
h1sush2a up 58 days
h1sush2b up 201 days
* h1sush34 up 211 days
h1sush56 up 192 days
* h1susb123 up 211 days
* h1susauxh2 up 211 days
h1susauxh34 up 7:01
h1susauxh56 up 191 days
h1susauxb123 up 7:03
h1oaf0 up 61 days
h1lsc0 up 58 days
h1asc0 up 202 days
* h1pemmx up 209 days
* h1susauxey up 211 days
h1susey up 38 days
* h1iscey up 211 days
* h1susauxex up 211 days
* h1susex up 211 days
h1seiex up 95 days
h1iscex up 163 days
This is a temporary problem; following LLO, we will be upgrading all LHO front ends and DAQ machines to newer kernels (which do not have this bug) in the near future.
Here is the alog which initially discussed this bug: alog 35901
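For reference, the 208.5-day figure is consistent with a nanosecond counter wrapping at 2^54 ns; a quick sketch (the 2^54 ns wrap point is my assumption about the specific counter involved, and the uptime dict is just a small subset of the list above):

```python
# The 208.5-day freeze point matches a timer counter wrapping at 2**54 ns
# (assumption about which counter overflows in the 2.6.34 kernel).
SECONDS_PER_DAY = 86400
wrap_days = 2**54 / 1e9 / SECONDS_PER_DAY
print(round(wrap_days, 1))  # 208.5

# Flag hosts whose uptime exceeds the wrap point (subset of the list above).
uptimes_days = {"h1psl0": 211, "h1sush2a": 58, "h1asc0": 202, "h1pemmx": 209}
at_risk = sorted(h for h, d in uptimes_days.items() if d > wrap_days)
print(at_risk)  # ['h1pemmx', 'h1psl0']
```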
On Friday afternoon Terry McRae and I crashed several of the ZOTAC workstations in the control room by dragging and dropping channel names from the INMONs of the OMC QPDs into the terminal. Dave Barker tried this on his workstation and it was fine for him.
In this situation the workstation completely froze and the solution was to ssh in from another workstation and pkill medm.
Now I am trying to redo the dark offsets for the AS WFS and having a similar problem. When I try to middle-click on an medm screen and drag it into the terminal, sometimes it works, but about 50% of the time medm crashes. The workstation doesn't freeze, although my dataviewer did freeze, and I cannot reopen the sitemap. Again, this is a ZOTAC workstation.
When I've experienced this I was able to "snap out of it" by center-clicking the mouse into a terminal or the like. Unlike the old workstations, where you just drag and release into the place you want to paste, I think here you have to drag, release, and then click. I think medm and X hang waiting for the copy/paste to complete.
I just noticed that four front end machines are offline: h1susauxb123, h1susauxh34, h1seiey, and h1pemmy.
All models on all these machines are dead, and the machines themselves are inaccessible via ssh.

I reset h1susauxb123 and h1susauxh34. Dave says the mid station machine has been down. I'm leaving the EY SEI machine (h1seiey) down for now.
h1pemmy was turned off several weeks ago; the CP4 bake-out is making the VEA too warm to run the front end comfortably.
All the other front end problems are due to the 208.5 day bug, please see my alog Link
I was hoping to do an OMC scan today, and it's tantalizingly close with the angular loops closed, but it seems like both OMC DCPDs are saturating even though I've turned off all the gains and whitening (de-whitening). The attachment shows the scans going negative at 2 W into the IMC.
So I turned the power into the IMC down to about 0.1 W and things got better, but the scan data was still very unclean, so I'm a little confused, because Dan Hoak's OMC scan was able to handle about 4 milliamps on OMC_DCPD Sum.
I've misaligned SR2 so that the OMC isn't flashing and returned the DCPD whitening filters to their nominal observe state (1 whitening + 1 dewhitening). Also I've turned off the OMC ASC and the AS WFS DC loop so that if the IMC loses lock the optics don't go crazy.
I turned down the power to the IMC to 0.1 W and misaligned SRM to get rid of the SRM flashing, and there are some decent, clean OMC resonances.
Attached are 3 OMC scans with SR3 at 0, 0.5, and 1.0 Watts. Before I take more data at higher SR3 heater power, I'll see if these results make sense. The columns are time in seconds, PZT offsets in Volts, and OMC_DCPD output in Amps.
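For anyone wanting to load the attached scans, a minimal parsing sketch assuming the three whitespace-separated columns described above (the sample rows here are made up, purely to exercise the function):

```python
# Parse an OMC scan file with columns: time [s], PZT offset [V],
# OMC_DCPD output [A], whitespace-separated, one sample per row.
def read_scan(lines):
    rows = [tuple(float(x) for x in ln.split()) for ln in lines if ln.strip()]
    t, pzt_v, dcpd_a = zip(*rows)
    return t, pzt_v, dcpd_a

# Made-up sample rows in the assumed format:
sample = ["0.00  0.0  1.2e-6", "0.01  0.5  3.4e-6"]
t, v, a = read_scan(sample)
print(t, v, a)
```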
Once we lock DRMI, we can try this again.
I've run your scans through my code. The plots are attached below. The "mismatch" (i.e. the ratio of the average of all second-order peaks to the average of all zeroth-order peaks) is 0.08 +/- 0.02, 0.09 +/- 0.02, and 0.083 +/- 0.005 for 0 W, 0.5 W, and 1 W respectively. The uncertainties come from the standard deviation of the heights of the peaks; most of the uncertainty comes from fluctuations in the height of the zeroth-order peak. As it stands now I can't really draw any conclusion about the SR3 heater. It could be that the pzt is scanning too quickly, or the beam/cavity could be fluctuating in time; I can't say for sure. There's also this weird thing where the resonances on the upward pzt ramp appear in different locations from the ones on the downward ramp, even though they should correspond to the same length changes in the cavity. It could be some hysteresis in the pzt response. For the analysis I only used the downward ramp of the pzt. Driving the pzt with a sawtooth waveform should get rid of the hysteresis (since it would only ramp one way), but there might be some artifacts from the sudden voltage change.
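For concreteness, here is a sketch of the mismatch estimate described above -- the ratio of the mean second-order peak height to the mean zeroth-order peak height, with the scatter in each set combined in quadrature. The error treatment and the peak heights below are illustrative assumptions, not the actual analysis code:

```python
import statistics as st

# Mismatch = mean(second-order peaks) / mean(zeroth-order peaks), with
# the uncertainty propagated from the standard error of each average.
def mismatch(zeroth_peaks, second_peaks):
    m0, m2 = st.mean(zeroth_peaks), st.mean(second_peaks)
    rel0 = st.stdev(zeroth_peaks) / (m0 * len(zeroth_peaks) ** 0.5)
    rel2 = st.stdev(second_peaks) / (m2 * len(second_peaks) ** 0.5)
    ratio = m2 / m0
    sigma = ratio * (rel0**2 + rel2**2) ** 0.5  # relative errors in quadrature
    return ratio, sigma

# Made-up peak heights, purely to exercise the function:
ratio, sigma = mismatch([1.00, 0.90, 1.10], [0.08, 0.09, 0.10])
print(f"{ratio:.3f} +/- {sigma:.3f}")
```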
Today, we performed a few iterations of weld annealing with the goal of reducing the differential pitch offset of the PUM-to-ETM from the initial welded value of ~3 mRad. After 2 iterations, we had the pitch down to ~1.6 mRad. As we were preparing to break for lunch, I noticed that the PUM prism crack (referenced in aLog 41111) seemed to have grown from what I recalled. During the break, I looked at Betsy's pictures that were taken shortly after removal from chamber, which confirmed my suspicions that the crack had increased in length. We decided this warranted a discussion with several of the decision-making staff (Dennis, Peter, Calum, Fred, and Garilynn). Together we concluded that it was probably fine to carry on, since the crack only seemed to lengthen when exposed to heat from welding (the initial cause of the crack and the cause of this case of extension) and this side of the PUM would not need to see any more heat for the annealing that was continuing. (Note that the copper prism shield that was designed to protect this from happening again after the first instance WAS installed, but apparently enough radiative heat was present to affect the crack anyway.) We continued on with another iteration of annealing, which resulted in a final differential pitch of ~350 µRad. This concludes the final post-O2 monolithic welding session.
Attached are the pictures of the crack from today along with the 2013 and extraction PDF of pictures for comparison.
Note, we did discuss various options such as adding or wicking epoxy or silicate bonding solution into the crack. However, looking closer at the various viscosity properties of each and a previous attempt at this at LHO, we decided it was likely not to help the situation. We also are not convinced this will be a problem for us. We will however mobilize to prepare a spare PUM for the next time around.
Final alignment numbers for the ETMx fiber weld. All measurements were done with the ETMx suspended; the PUM and UIM were both locked. All numbers assume the reader is looking in the -X direction (i.e. at the AR surface of the ETMx); this is opposite the notation used in the alignment notebook, which is done from the perspective of the alignment equipment (i.e. looking at the HR surface of the ETMx). All measurements (except the pre-welding roll) were done after yesterday's correction of the differential pitch; before the correction the differential pitch was 3.13 mrad down. In addition, I have also included before and after welding numbers for the roll of both the PUM and ETMx, as there was a change in roll somewhere during the welding process that needs to be documented.
We're not entirely sure why the roll changed during the welding process, especially as much as it did. The going theory is that when the ETMx was hung it pulled on the PUM, causing the observed roll issue. Maybe a small mechanical misalignment between the PUM and ETMx (my guess is possibly a horizontal position offset between the 2 masses; this is something we do not measure with the IAS alignment equipment, so I have no numbers to back this up) caused a shift of the ETMx relative to the PUM during the hang which pulled on the PUM and caused the roll change (this is just a guess on my part). The differential pitch correction work did not have an effect on the roll of either mass, as the roll measured after the correction was identical to that measured before the correction. I don't recall seeing an issue like this with any of our previous welds at LHO, either during this post-O2 vent work or during aLIGO install. It should also be noted that this roll error is a contributor to the difference in measured fiber stretch between the left and right side of the monolithic.
The serial numbers of the fibers used and their location in the monolithic are as follows:
I'd like to thank Chris Sioke for his help in lugging the alignment equipment to the end station Monday morning, and Stephen Appert for his help in getting everything set up. You both made things go quicker than if I were doing it all myself.
Pictures of fibre ends and welds taken after destress, before the initial hang -- that is, before the post-hang annealing to correct the pitch offset. These are the ETM welds,
and these are the PUM.