J. Oberling, P. King, E. Merilh
Summary
Today we went into the PSL enclosure to increase the front end (FE) diode currents and tweak the PMC and FSS alignment; details of each task are below.
Details
FE Diodes:
We added ~2 A to the 4 FE diodes to increase the FE power, which had been slowly drifting down over the last several months (the threshold for a diode current adjustment is a 5% drop; we were at ~7%). Both power supplies are now feeding ~51 A to the FE diodes. We then optimized the diode temperatures to maximize the FE power. Peter took before/after screenshots of the FE diode settings and will post them as a comment to this log. We are now reading 33.1 W out of the PSL FE.
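As a sanity check on the adjustment criterion, here is a minimal sketch of the percent-drop calculation; the two power values are hypothetical examples chosen only to illustrate a ~7% drop, not measured numbers:
# Hedged sketch: fractional FE power drop vs. the 5% adjustment threshold.
nominal_power = 34.0   # W, assumed nominal FE output (illustrative)
current_power = 31.6   # W, assumed drifted value (illustrative)
drop = (nominal_power - current_power) / nominal_power
print(f"FE power drop: {drop:.1%}")                              # ~7.1%
print("adjust diode currents" if drop > 0.05 else "within threshold")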
PMC:
We then proceeded to tweak the PMC alignment. We adjusted both pitch and yaw on mirrors M06 and M07 and were able to get 22.9 W of transmitted power, with 2.4 W of reflected power. This is obviously not ideal (we want reflected power to be <10% of transmitted power), but it was all we could do within the allotted maintenance window. We will have to go back in at a later date and adjust the 2 mode matching lenses, L02 and L03.
Looking at the PMC RPD, it has a locked reading of -0.154 V and an unlocked reading of -1.52 V, giving a PMC visibility of 89.9%.
FSS:
We tweaked the alignment of the RefCav input periscope in both pitch and yaw and improved the RefCav transmission (as read from the RefCav TPD) from 0.75 V to 1.4 V. We then adjusted the AOM in pitch/yaw to see if we could improve the TPD reading further. We were able to improve the TPD to ~1.47 V.
Looking at the FSS RefCav RPD, it has a locked reading of 0.072 V and an unlocked reading of 0.260 V, giving a RefCav visibility of 72.3%.
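For reference, both visibility numbers above follow from the locked/unlocked RPD readings, assuming visibility is defined as one minus the ratio of the locked to unlocked reflected power:
# Cavity visibility from reflected-PD readings: V = 1 - |V_locked| / |V_unlocked|.
def visibility(locked, unlocked):
    return 1.0 - abs(locked) / abs(unlocked)

print(f"PMC visibility:    {visibility(-0.154, -1.52):.1%}")   # ~89.9%
print(f"RefCav visibility: {visibility(0.072, 0.260):.1%}")    # ~72.3%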
DBB:
We opened the DBB shutters along the 35W laser path to measure the voltage on the DBB RPD; this has to be between 9 V and 11 V for proper DBB operation. We measured it at 9.53 V. We also measured the power into the DBB along the 35W laser path at 150 mW; this is >135 mW, so all is good here.
Greened up for engineering run. 119 channels added. 10 channels removed. H1:SUS-SR2_LKIN_P_OSC_SW2R and H1:SUS-SR2_LKIN_Y_OSC_SW2R were manually added to the exclude list. No unmonitored channels remain.
Copied compressed tarfile of logical database dump to /ligo/lho/data/conlog/h1/backups/h1conlog_dump_2015_05_26.sql.tgz. 5.8 GB. Completes WP 5226.
Patrick, Nutsinee
... burtrestore to 05/26 06:10.
Richard, Dave
After a failure over the weekend in which DAC and ADC cards could not be detected in the I/O chassis (alog 18603), we replaced the power supply. The removed supply will be installed in the DAQ test stand for testing.
This morning I noticed a few EXC flagged as enabled on the CDS overview. After a roll around the room, Kissel and I cleared the LSC TPs on H1CALCS and H1SUSETMY.
No. | Milestone | Date/Time
1 | Start of Calibration for ER7 (starts after maintenance at LHO) | Tuesday, 2015 May 26, 12:01 pm PDT / 2:01 pm CDT
2 | End of Calibration for ER7 | Friday, 2015 May 29, 12:01 pm PDT
3 | Start of "Annealing" for ER7 | Friday, 2015 May 29, 12:01 pm PDT
4 | End of "Annealing" for ER7 | Tuesday, 2015 June 2, 7:59 am PDT
5 | Maintenance (four hours every Tuesday, 8:00 am - noon local time) | Tuesday, 2015 June 2, 8:00 am - noon
6 | Buffer before start of ER7, to be used as needed | Tuesday, 2015 June 2, 12:01 pm PDT - Wednesday, 2015 June 3, 7:59 am PDT
7 | Start of ER7 | Wednesday, 2015 June 3, 8:00 am PDT / 10:00 am CDT
8 | Maintenance | Tuesday, 2015 June 9, 8:00 am - noon
9 | End of ER7 | Monday, 2015 June 15, 7:59 am PDT / 9:59 am CDT
10 | Start of Vent at LHO | Monday, 2015 June 15, 8:00 am PDT / 10:00 am CDT
The daily Global ER7 Run Status and Planning meeting will take place at 1:00 pm PDT / 2:00 pm CDT / 3:00 pm EDT on the JRPC "TeamSpeak" channel. The plan is for a short meeting that provides updates and plans for the two interferometers and the data analysis community. This meeting will start on Tuesday, May 26, 2015.
Dan, Kiwamu, Evan
Tonight we worked on getting the interferometer back to its low-noise state. We are stable at 10 W, but there is some instability at higher powers.
First, at 3 W we manually steered the ITMs to a good recycling gain (38 W/W), and then updated the TMS QPD offsets. We also locked the arms in green, adjusted the green QPD offsets for maximum buildup, and then updated the ITM camera references. Then we re-enabled the ITM loops in the guardian. This allowed us to power up all the way to 21 W without significant degradation of the recycling gain.
After that, we were able to consistently engage the ASC with the guardian.
However, we found that at 21 W the interferometer suddenly unlocks within a matter of minutes. There seems to be no instability in the arm or sideband buildups before the lockloss. We looked at the OMC DCPD signals for signs of PI, but we did not see anything ringing up during any of our short high-power locks. Some times to look at are 02:29:50, 02:59:50, 04:57:30, and 06:55:00, all 2015-05-26 UTC, but any of the other 21 W locklosses in the past 12 hours follows this pattern.
We measured the OLTFs of PRCL, MICH, SRCL, and DARM before and after powering up; they all look fine and did not change with power. For CARM, we start at 3 W with a UGF of 14 kHz and 47° of phase margin. During power-up, the electronic gain is automatically adjusted to compensate for the increased optical gain. The algorithm for this was shooting a little high, so after power-up the UGF was more like 27 kHz with 30° of phase margin. This is probably fine, but we adjusted the algorithm anyway so that the UGF is placed at 19 kHz with 45° of phase margin. In any case, this did not solve the lockloss issue.
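For illustration only (this is not the guardian's actual power-up code), the retargeting step amounts to measuring the open-loop magnitude at the desired UGF and scaling the electronic gain by its inverse; the 1/f loop shape below is just a placeholder for a measured CARM OLTF:
import numpy as np

# Hedged sketch: rescale the electronic gain so |G(f_target)| = 1.
def olg_mag(f, gain=1.0):
    return gain * 27e3 / f           # toy open-loop gain, unity at 27 kHz for gain = 1

f_target = 19e3                      # desired UGF [Hz]
scale = 1.0 / olg_mag(f_target)
print(f"scale the electronic gain by {scale:.2f} to move the UGF to 19 kHz")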
We also tried locking at some lower powers. At 15 W the interferometer lasted for about 15 minutes before unlocking. At 10 W, the lock time seems to be indefinite (at least 90 minutes).
Using FM9 in ETMY L1 LOCK L (zero at 2 Hz, pole at 5 Hz), we were able to push the L1 crossover from <1 Hz to 1.7 Hz by adjusting the filter gain from 0.16 to 0.31. Measurement attached, showing before and after. This is not included in the guardian. By pushing up the crossover, the rms drive to L2 decreases from >10000 ct to about 6000 ct or so.
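For anyone reproducing this, a quick sketch of the FM9 shape (one real zero at 2 Hz, one real pole at 5 Hz, normalized to unity gain at DC) and the effect of the gain change; the actual filter lives in foton, so treat this as an approximation of its response, not the real design:
import numpy as np
from scipy import signal

# Lead filter: zero at 2 Hz, pole at 5 Hz, unity gain at DC.
z, p = [-2 * np.pi * 2.0], [-2 * np.pi * 5.0]
k = 5.0 / 2.0
f = np.array([0.5, 1.0, 1.7, 3.0])                  # Hz
w, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)
for gain in (0.16, 0.31):
    print(f"gain {gain}:", np.round(gain * np.abs(h), 3))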
For the record, we did not notice any kicks to the yaw of IMC REFL tonight.
Over the weekend we were able to re-commission the damping of the bounce, roll and violin modes. The bounce & violin damping settings have been propagated to the ISC_LOCK guardian, and should be stable (maybe). The roll mode settings have already changed once over the weekend, so I'll list what's been working, but your mileage may vary.
The attached spectrum (for 10W, low-noise ESD, *not calibrated*, no LSC FF, so don't study it too closely) shows the mode-damping progress. Note this was before the 2.4k and 2.8k violin harmonics were damped.
After struggling to apply very narrow band-pass filters a la Jeff's approach from alog:18483, we reverted to the method of very broad band-passes. These are loaded as FM3 in the DARM_DAMP_V filter banks. The frequencies follow those listed by Sheila in alog:18440 (we confirmed these frequencies were correct through the course of our damping exercise).
  | ETMY | ETMX | ITMY | ITMX
Frequency [Hz] | 9.73 | 9.77 | 9.81 | 9.85 |
Filters | FM1 (+60deg), FM3 | FM3 | FM1 (+60deg), FM3, FM6 (+30deg) | FM2 (-60deg), FM3 |
Gain | -0.3 | -0.5 | +1.0 | +0.3 |
The real key to squashing the bounce mode peak was to work out the damping settings for ETMX and ITMY (the optics which couple bounce --> longitudinal motion the least). The extra 30deg of phase for ITMY turned out to be important.
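For reference, a minimal sketch of what a broad band-pass of this kind might look like (scipy; the real FM filters are designed in foton, and the 16384 Hz model rate and corner frequencies here are assumptions):
import numpy as np
from scipy import signal

fs = 16384.0                                    # assumed SUS model rate [Hz]
# Broad band-pass covering the ~9.7-9.9 Hz bounce modes.
sos = signal.butter(2, [9.5, 10.1], btype='bandpass', fs=fs, output='sos')
# Gain/phase at the ITMY bounce frequency; in the real banks an extra
# +/-30 or +/-60 deg phase filter (FM1/FM2/FM6) sets the damping sign.
w, h = signal.sosfreqz(sos, worN=[9.81], fs=fs)
print(f"|H(9.81 Hz)| = {abs(h[0]):.2f}, phase = {np.degrees(np.angle(h[0])):.0f} deg")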
We were able to identify the ITMX roll mode, thus resolving the previously unassigned set of roll-mode frequencies and assigning each peak to an optic. The ITMX roll mode wasn't rung up this weekend, however, so we didn't have a chance to work out damping settings. The sign for damping the ETMY roll mode flipped between Sunday and Monday night; otherwise these damping settings were pretty stable.
For all the TMs, the FM4 filter is a broad band-pass from 13.5 to 14.5 Hz.
  | ETMY | ETMX | ITMY | ITMX
Frequency [Hz] | 13.816 | 13.889 | 13.930 | 13.978 |
Filters | FM3 (-100dB), FM4, FM6 (+30deg) | FM3 (-100dB), FM4 | FM3 (-100dB), FM4 | ?? |
Gain | -20 | +600 | -80 | ?? |
The roll mode is rung up after every lockloss (usually it's ETMY's), so these settings need to be manually applied before the transition to DC readout. The gains listed in the table above are for the "high-gain" damping state; if the mode is very rung up, you need to start at a lower gain setting or you might saturate the M0 stage.
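As an illustration of the gain staging described above (the helper functions and thresholds are hypothetical, not the actual guardian code), the procedure is roughly:
import time

# Hypothetical staged gain ramp: step the damping gain up toward the
# "high-gain" value only while the M0 drive stays well below saturation.
def ramp_damping_gain(set_gain, read_m0_drive, target,
                      steps=(0.1, 0.3, 1.0), drive_limit=100e3, dwell=30):
    for frac in steps:
        set_gain(frac * target)
        time.sleep(dwell)                      # let the mode respond
        if abs(read_m0_drive()) > drive_limit:
            set_gain(0.0)                      # back off instead of saturating M0
            raise RuntimeError("M0 drive too large; restart from a lower gain")
    return target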
Recall that violin mode frequencies and their associated test masses were given in alogs 17365, 17502, and 17610.
All the identified modes are well damped and have been enabled in the Guardian code, with the exception of ITMX. Despite many attempts I haven't been able to actuate on the ITMX modes at all. Before the realignment/recycling-gain work the ITMX modes damped very easily; now I can't find a DOF (longitudinal, pitch, or yaw) or a phase setting that moves the modes either up or down. It's hard to believe the L2 stage of ITMX isn't working, so we're not sure what the problem is. Maybe we just need more patience.
The complete set of violin mode damping settings is too large to list here; the various filters and gains are recorded in the guardian code. Some modes require a specific filter to get the right phase; others can be grouped together with broad band-pass filters without much trouble. In particular, ITMY requires separate filters for each mode, since it's very difficult to construct a broad band-pass that catches more than one mode with the correct phase. We need to add more filter banks to the L2 DAMP section of the quad models if we want to squash the violin modes and their harmonics.
We did identify some new modes -- since we started feeding DARM back to the ETMY L2 stage we rang up the 4th, 5th, and 6th harmonics of that optic. These modes were easily damped and have been notched in the actuation path. The specific frequencies and damping settings were:
2424.38, 2427.25 Hz: Use FM6 of ETMY L2 DAMP MODE1, +60deg of phase, +100dB, gain=+20k, longitudinal direction
2878.7, 2882.5 Hz: Use FM6 of MODE2, no phase, gain=+10k, longitudinal direction
3330.6 Hz: Use FM5 of MODE3, -60deg of phase, gain=+20k, longitudinal direction
3335.7 Hz: Use FM6 of MODE3, no phase, gain=+20k, longitudinal direction
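For illustration, one of these settings could be applied by hand along these lines (pyepics; the channel name follows the usual H1 SUS naming pattern but should be treated as an assumption, and the filter-module/phase selection is normally done from the MEDM screen or via ezca):
from epics import caput

# Hypothetical example: set the gain for ETMY L2 DAMP MODE1, used above for
# the ~2.4 kHz harmonics.  Only the gain write is shown here.
caput('H1:SUS-ETMY_L2_DAMP_MODE1_GAIN', 20000)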
Keita, Sheila
In three of last night's 10 W locklosses, as well as at the 15 W lockloss, the CARM loop dropped first, when IMC-F reached something around +/- 1440 kHz (the first screen shot attached is typical; 2015-05-26 15:23:17, 13:228:28, 12:10:29, and 7:38:16 at 15 W). Now that Jeff has fixed the model and we are using tidal again, this type of lockloss has not been bothering us tonight.
The slope of IMC-F is larger in the 15 W lockloss than in the 10 W ones. A trend of IMC-F and arm transmission from last night shows there are some inflection points in the slope of IMC-F, although these don't correspond to changes in input power or changes in the state of the tidal state machine.
The other 4 locklosses that I looked at were not due to the IMC VCO, and I didn't come up with any good explanation for them. One notable feature in all of the others is a half-a-hertz oscillation in the ITM oplev damping loops that starts when the power is increased to 21 W, but it doesn't seem like this was the cause of the locklosses.
Tonight we were able to damp the roll modes with all of these settings, as well as ITMX, for which we used -100 dB, bp13.9 (FM3+FM4), and a gain of 20. We also increased the gain for ETMX to 1000.
The Symptoms
============
The laser is off, the chillers are running, and the status screen shows either "Interlock OK" as red, or both "Interlock OK" and "Epics Alarm" as red.

Possible Causes
===============
a) One of the safety relays in the interlock box is faulty.
b) There is a problem with the TwinCAT code controlling the chillers.
c) There is a problem with the TwinSafe code implementation.
d) Network delays in the PSL EtherCAT network accumulate to a point where TwinSafe initiates a shutdown.
e) The turbine-based flow sensor in the chiller gets stuck, registering a drop in flow which in turn triggers the laser interlock.

The Evidence
============
a) The Dold LG5929 safety relay extension module has a mechanical lifetime of 20 million switching cycles and a mean time to dangerous failure of 144.3 years. The contacts are normally open. Safety relays fail outright; they do not fail intermittently. Simulating an interlock box failure by switching off the interlock box results in a sea of red on the status screen.
b) I have gone through the TwinCAT code for both chillers. The only difference is that there is an extra variable declared for the diode chiller; however, this variable is never invoked in the chiller code. The instruction sets for the diode and crystal chillers are the same.
c) I have gone through the TwinSafe function blocks and have not found anything wrong with them.
d) This possibility came up because each time the laser had tripped, Patrick and I noticed that there was 1 lost frame in the TwinCAT datastream. However, with the laser running last week I noticed that something like 201 frames were lost and the laser was still running, so network delays were ruled out. Simulating a network delay by removing either the Rx or Tx fibre from the Ethernet switch results in a sea of red on the status screen.
e) The output of the flow rate sensor goes to the chiller controller via a normally open switch. If the flow rate is within the allowed range, the switch remains closed; if it goes out of range, it opens and remains open until the flow rate is restored. For the crystal chiller, opening the flow rate switch does indeed switch off the chiller. Closing the flow rate switch does not switch the chiller back on automatically; a number of fields on the status screen go red and can only be cleared by switching the chiller back on and pressing the reset button on the status screen. For the diode chiller, opening the flow rate switch does switch off the chiller, but closing the flow rate switch turns the chiller back on. What's more, all the red flags on the status screen go green automatically except "Interlock OK".

Conclusion
==========
Of the 5 cases outlined above, the one that reproduces the observed events is the last one. In particular, it seems to be the flow rate sensor in the diode chiller.
One could well ask why the diode chiller and not the crystal chiller. It might be that the diode chiller, having to cool the pump diode heatsinks, has accumulated more particulate matter (gold flakes). We should inspect the filter(s) at the back of the chiller for any obvious signs of gold or other debris.
The answer I received from Termotek was that the diode and crystal chillers do indeed behave differently when it comes to recovering from the flow switch opening. Just in case I misunderstood their reply, I have a clarification e-mail pending.
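To summarize the difference in recovery behaviour in one place, here is a toy sketch (not chiller firmware; just the observed behaviour encoded as assumptions):
# Toy model of the observed flow-switch recovery: the crystal chiller latches
# off until manually reset, while the diode chiller restarts on its own but
# leaves "Interlock OK" red, matching the reported symptom.
def flow_out_of_range(chiller):
    chiller['running'] = False
    chiller['interlock_ok'] = False            # laser interlock trips

def flow_restored(chiller):
    if chiller['name'] == 'diode':
        chiller['running'] = True              # auto-restart; interlock stays red
    # crystal: stays off until operator restart plus status-screen reset

for name in ('crystal', 'diode'):
    c = {'name': name, 'running': True, 'interlock_ok': True}
    flow_out_of_range(c)
    flow_restored(c)
    print(c)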
The MC2 suspension occasionally ran into a situation where some part of the suspension was kicked intermittently for some unknown reason. I noticed this behavior this morning, but I could not identify what was doing the kicking.
The worst part is that the kicks (or glitches) seem to have stopped sometime around 14:14 local. Very bad.
The symptoms are:
(Checking various channels)
Apparently this behavior is different from the two DAC issues we have seen in the past week (alog 18453, outputting a high constant voltage; alog 18569, outputting a nonlinear signal). As mentioned above, I disabled all the active loops, including the LSC, ASC and the top stage damping loops. Even in this condition, the suspension kept being kicked intermittently. I looked at the VOLTMONs on all the stages, but did not find any suspicious activity. Also, I checked the motion of the HAM3 ISI in L, P and Y using the suspension-coordinate witness signals (i.e. SUS-MC2_M1_ISIWIT_L(P, Y)MON), but did not find such a fast signal. In addition, I checked the bottom stage witness sensors of PR2 to see if this is some kind of table motion, but PR2 was very quiet and no glitches were found at the times when MC2 was glitching.
(A test without LSC or ASC signals)
I attach a second trend of some relevant channels. This is a two-hour trend, with the top stage damping loops fully on and with no LSC or ASC signals. You can see that the witness sensors of the bottom stage showed glitches in the first 1/3 of the trend and then suddenly stopped. Since the damping loops were engaged during this period, the VOLTMONs showed some reaction from the loops; I think these are just reactions and not the cause of the glitches.
Actually, the top stage RT OSEM now makes me think it is the culprit. Its VOLTMON showed a noticeable discrete jump (by 14 counts) in the attached trend right around the time the glitches stopped.
Thank you, Dave, Richard, John, Gerardo and those who helped us today even though it was a holiday. These pictures are for you guys.
Richard, Kiwamu, Dave:
Following Richard's suggestion, Kiwamu powered down the IO Chassis and removed the lid. After about 10 minutes the IO Chassis and then the CPU were powered up. I had removed h1seih45 from the rtsystab file to prevent any models auto starting. After several minutes I verified the Gen Std cards were still visible on the bus. I then started the IOP model. It came up with a minor IRIG-B excursion, which came back down in a few minutes. The Gen Std cards were still with us. I then started the user models, one at a time, with no problems.
Kiwamu and Evan are now untripping the watchdogs and starting isolation. Hopefully this will get us through to tomorrow.
I've put h1seih45 back into rtsystab. Kiwamu noted that the ADC and DAC Gen Std cards have LEDs which are on when powered up, and that the DAC cards' LEDs go out when the IOP is started. We should find out what these LEDs mean.
SudarshanK, DarkhanT, RickS
We performed a routine calibration of the photon calibrator photodiodes (both the transmitter module PD, TxPD, and the receiver module PD, RxPD) at X end on 20 May and at Y end on 22 May. We will post the results of the calibration soon.
Richard, Kiwamu, Dave:
The IOP model on h1seih45 has failed: it cannot see any General Standards cards in the IO Chassis when doing a software bus scan (lspci), but it can see the Contec Binary IO cards:
root@h1seih45 ~ 1# lspci -v |grep 3101
root@h1seih45 ~ 1#
root@h1seih45 ~ 1# lspci -v |grep 3120
root@h1seih45 ~ 1#
root@h1seih45 ~ 1# lspci -v |grep -i contec
13:00.0 Multimedia controller: Contec Co., Ltd Device 8682 (rev ff) (prog-if ff)
21:00.0 Multimedia controller: Contec Co., Ltd Device 8682 (rev ff) (prog-if ff)
2c:00.0 Multimedia controller: Contec Co., Ltd Device 8632 (rev ff) (prog-if ff)
Kiwamu went into the CER and confirmed that the h1seih45 chassis is powered up, the ADC interface cards have their LEDs lit, and the fans are on. We powered down the CPU and the Chassis (keeping the latter down for a minimum of 30 seconds) and then powered them back up.
This is where it gets strange. As soon as I could log back into h1seih45 I could see the General Standards cards on the PCI bus (six ADCs, two 16-bit DACs). But when the IOP model started, they disappeared from the bus and once again all I could see were the Contec cards.
controls@h1seih45 ~ 0$ lspci -v|grep 3101
Subsystem: PLX Technology, Inc. Device 3101
Subsystem: PLX Technology, Inc. Device 3101
Subsystem: PLX Technology, Inc. Device 3101
Subsystem: PLX Technology, Inc. Device 3101
Subsystem: PLX Technology, Inc. Device 3101
Subsystem: PLX Technology, Inc. Device 3101
controls@h1seih45 ~ 0$ lspci -v|grep 3120
Subsystem: PLX Technology, Inc. Device 3120
Subsystem: PLX Technology, Inc. Device 3120
controls@h1seih45 ~ 0$ lspci -v|grep -i contec
13:00.0 Multimedia controller: Contec Co., Ltd Device 8682
Subsystem: Contec Co., Ltd Device 8682
21:00.0 Multimedia controller: Contec Co., Ltd Device 8682
Subsystem: Contec Co., Ltd Device 8682
2c:00.0 Multimedia controller: Contec Co., Ltd Device 8632
Subsystem: Contec Co., Ltd Device 8632
< about here the models try to startup >
controls@h1seih45 ~ 0$ lspci -v|grep 3101
controls@h1seih45 ~ 1$
Richard suggested we power the system down for an extended time (10 minutes) to cool everything down and then run with the lid off. Kiwamu is doing that at the moment.
I'll disable the autostartup of the models, so we can manually step through the startup process.
Kiwamu and Dan reported that there is no potable water in the OSB. It looks like the RO system went into fault on May 22 around 5:30pm and this morning the potable water tank went dry.
Kiwamu visited the water room for me and reset the RO system so it now appears to be operating normally and making water. However, the building is not yet pressurized.
I suspect the Naimco water skid will need to be reset.
Water is flowing somewhere. After the RO unit was reset, the tank level started to climb, but when the building was pressurized the level began to fall at an unusual rate.
We think the water system is back to normal.
Gerardo happened to be at the site when I got there so we visited the water room together to look into the problems.
We found that the water pumps were very hot, and after breaking some fittings we found steam and hot water. After bleeding the system and cooling it down, we restarted it and were able to immediately build pressure.
The cause was likely a pressure switch which did not get reset during the initial startup. We'll investigate next week.