Svn up'd LHO's common guardian masterswitch and watchdog folders:
hugh.radkins@opsws1:masterswitch 0$ svn up
U states.py
Updated to revision 12509.
hugh.radkins@opsws1:masterswitch 127$ pwd
/opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/masterswitch
hugh.radkins@opsws1:masterswitch 0$
hugh.radkins@opsws1:masterswitch 0$ cd ../watchdog/
hugh.radkins@opsws1:watchdog 0$ svn st
hugh.radkins@opsws1:watchdog 0$ svn up
U states.py
Updated to revision 12509.
I restarted HAM6 and tested by disabling an output leg. The trip appeared to execute the functions as expected; I did not notice any problem. Restarted HAM5 and tested: it too turned off the FF and reduced the GS13 gains as expected, with no problem noticed. Restarted HAM4 but did not test (by tripping the platform).
Restarted HAM3 after enabling the GS13 Switching feature, as I wanted to test the problem of the guardian being unable to turn the GS13 gain up without tripping the platform. This is where I noticed the problem.
When guardian turned up the GS13 gain, HAM3 tripped and the guardian successfully turned off the FF and lowered the GS13 gains, but it left the DAMPing loops engaged. I thought the restart of the guardian might have been responsible for this behaviour. I cleared the trip but did not turn off the DAMPing path first. The ISI did not trip until the GS13s were again toggled to high gain, but this time the DAMPing path was turned off as I would expect. Okay, maybe a first-time-around problem. I cleared the trip and again the platform tripped when the GS13 gain changed, and again the DAMPing path was left on. I repeated, and again the DAMPing path was left on. I disabled the GS13 Gain Switching feature, we made it to isolated, and I set the GS13 gains to high with the Command Script.
I've repeated the test on HAM5 and there too, DAMPing path remained on.
Meanwhile, ITMX tripped due to LVEA activity; its DAMPing path was not turned off, and this guardian has not been restarted with the new update. Repeated this on ITMY and it too left the DAMPing path enabled. Okay, it looks like this DAMPing problem is not related to the current code upgrade.
I will continue restarting the guardians with this current upgrade, though, as turning off the GS13s on a trip is a good thing, and generally the platform can handle untripping the watchdog with signal still coming out of the DAMPing path, as long as the GS13s are in low gain. And since HAM2 & 3 can't handle guardian gain switching, they must have their gains toggled manually.
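For reference, here is a minimal sketch of the trip response being exercised above. This is not the actual isiguardianlib watchdog/states.py code; the filter and channel names are placeholders, and in real guardian code the ezca object is provided to the state automatically.

```python
# Illustrative sketch only -- not the real isiguardianlib code.
# Filter and channel names below are placeholders.
from guardian import GuardState

class TRIPPED(GuardState):
    request = False  # operators don't request this state; the watchdog drives us here

    def main(self):
        # kill the feed-forward path
        ezca.switch('FF_X', 'OUTPUT', 'OFF')
        # put the GS13s back in low gain so the trip can be cleared safely
        ezca['GS13_GAIN_SWITCH'] = 0
        # the step observed to be skipped above: turn off the DAMP outputs
        for dof in ('X', 'Y', 'Z', 'RX', 'RY', 'RZ'):
            ezca.switch('DAMP_%s' % dof, 'OUTPUT', 'OFF')
        return True
```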
The LVEA has transitioned to laser safe. A reminder that no work involving opening a table enclosure door may occur.
I again looked at the cross-correlation of POP and POPAIR 45Q during full lock, in order to find out what the vertex LSC noise looks like below the shot noise level.
This time, I added a bandstop filter to PRCL, MICH, and SRCL that gives ≥40 dB attenuation between 82 and 101 Hz. This ensures that shot noise is not impressed on the vertex dofs in this band, so that we can get a true estimate of the freerunning displacement/sensing noises.
The result of 1 hour of cross-correlation is attached. In the case of POP 9I (used for PRCL) and POP 45I (used for SRCL), there is significant coherence between the two PDs. For 9I this is not so surprising, since the same (unknown) structure can be seen in both spectra. For 45I, a similar irregular structure is seen in both PDs. The coherence in the notched portion is 0.01.
For POP 45Q (used for the Michelson), the coherence seems remarkably free of the irregular structures seen in 9I and 45I. The notched portion appears to be at the noise floor of the correlation measurement, with a coherence <0.001. The coherence of 0.01 around 130 Hz probably comes from noise being injected into the Michelson loop.
The uncontrolled quadrature (9Q) shows some evidence of irregular structure, despite the fact that POP 9 and POPAIR 9 were both phased to minimize the appearance of PRCL in Q.
Time is from 2016-02-01 04:18:00 to 05:18:00 Z.
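For anyone repeating this, below is a rough sketch of the cross-correlation estimate. The two time series here are synthetic stand-ins (a common component buried under larger independent noise) rather than the fetched POP/POPAIR channels, and the sample rate and segment length are assumptions.

```python
# Sketch of the PD cross-correlation: the coherent (common) noise survives
# averaging of the cross-spectrum, while independent shot noise on each PD
# averages away.  x and y are synthetic stand-ins for POP / POPAIR 45Q.
import numpy as np
from scipy import signal

fs = 2048.0                         # assumed (downsampled) rate [Hz]
n = int(3600 * fs)                  # one hour of data
rng = np.random.default_rng(0)
common = rng.normal(size=n)         # shared "displacement/sensing" noise
x = common + 2 * rng.normal(size=n) # PD 1: common part under larger shot noise
y = common + 2 * rng.normal(size=n) # PD 2: independent shot noise

nperseg = int(64 * fs)              # ~56 averages in one hour
f, coh = signal.coherence(x, y, fs=fs, nperseg=nperseg)
f, Pxy = signal.csd(x, y, fs=fs, nperseg=nperseg)
common_asd = np.sqrt(np.abs(Pxy))   # estimate of the noise shared by both PDs

band = (f > 82) & (f < 101)         # the band notched out of PRCL/MICH/SRCL
print('median coherence in 82-101 Hz band: %.3g' % np.median(coh[band]))
```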
Title: 2/1 Eve Shift 22:00-6:00 UTC (14:00-22:00 PST). All times in UTC.
State of H1: Lock acquisition. Lost lock in the last 15 minutes of my shift.
Shift Summary: Several locklosses at various stages and from various causes (commissioning activities notwithstanding). Locked at NLN for ~2 hours, but in Commissioning mode, while Jenne and Evan do their thing.
Incoming operator: None
Activity log:
22:44 TCS crew to CO2Y table
23:11 Joe D back from beam tube sealing
23:26 reset H1SUSETMY tim error
0:21 TCS crew done
I pushed the numbers from Gabriele's Saturday alog (25265) through the beam position calculating / calibrating script, and see that the ETMY pitch spot was moved by 2.4mm.
Optic | Old position [mm] | New position [mm] | Delta [mm]
ITMX | -4.5 | -4.5 | 0
ITMY | -0.3 | -0.8 | 0.5
ETMX | -1.8 | -1.6 | 0.3
ETMY | 2.0 | 4.4 | 2.4
I'm not sure why this happened. I need to think some more on this.
The HWS SLED temperature (H1:TCS-ITMX_HWS_SLEDTEMPERATUREMON) has never read back correctly. I tracked down the problem to a missing wire in a 15-pin cable.
Lo and behold, the thermistor was correctly wired to the SLED driver chassis and now reads back a sensible temperature. What's more, turning on the SLED with 100 mA going through it shows a 2.5 C increase in temperature, as expected.
However ... I could now finally check whether the SLED temperature changed as I changed the setpoint temperature driving the on-board TEC in the diode package.
Changing the setpoint temperature produced no response in the actual temperature.
I want to get this fixed as it will increase the lifetime and output power of the diodes.
(All of this was repeated for SLED 2).
[Alastair, Aidan]
The Y-arm CO2 laser, which has not been getting injected to the CP, has now had its beam block removed so that we can perform the alignment procedure. The AOM drive from the DAC has a very low frequency pole that would stop us from injecting a large enough signal to do the alignment this way, so we have directly bypassed the module that has this filter.
Instead we put an oscillator directly on top of the enclosure set to 23.8Hz, 0.1V pk-pk, 0.1V offset. This oscillator is running on AC power and has quite a noisy fan. The oscillator is being fed through to the table and directly connected to the AOM driver, bypassing the DAC and all other electronics.
The laser has its rotation stage set to output the minimum power we could achieve (11mW) so this signal shouldn't be visible in the interferometer at the moment. If it is critical to stop it doing this then the laser can be turned off from MEDM, however I would request that if at all possible we leave the laser running because I am still working on the long-term stability of the locking loops.
Installed the table interlock box from Richard on the shelf inside the X and Y tables. I installed all the cables that seemed to be meant to attach to this. Not sure if it was meant to have anything in the connectors for Status or Monitor.
[Alastair, Aidan]
Today we added the chiller loop to the laser servo on the Y-arm. It takes an error signal from the PZT output voltage minus 35 V, to set the PZT to the middle of its range. From the step-function measurement, the time constant for a temperature change request to actually change the laser temperature is around 110 s. The gain was taken from previous measurements of power as a function of PZT voltage and chiller temperature: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=11625.
We initially added a servo with unity gain at 0.001 Hz, which should have given a reasonable phase margin; however, we found that it was oscillating (very slowly), so we lowered the unity-gain frequency to 0.0003 Hz, and it now seems, from a few hours of operation, to be stable.
Attached is a graph of 3 hrs of laser power with the laser unlocked. The pk-pk is roughly 0.17 W. Also attached is a chart of the laser with the PZT and chiller locked, showing pk-pk of around 0.02 W, which in the full data plot looks to be the noise floor. This plot starts just after the chiller servo has been engaged and shows it bringing the PZT voltage down from 55 V, and actuating on the chiller temperature.
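Below is a toy simulation of the loop described above, not the actual servo code: an integrator acting on (PZT voltage - 35 V) drives the chiller setpoint, with the laser/chiller modelled as a first-order lag with the measured ~110 s time constant. The plant gain value and sign here are placeholders, not the number from alog 11625.

```python
# Toy chiller-loop simulation (placeholder plant gain, not the real servo).
import numpy as np

SETPOINT_V = 35.0   # keep the PZT mid-range [V]
UGF_HZ = 3e-4       # unity-gain frequency (lowered from 1e-3 Hz to stop the oscillation)
TAU_S = 110.0       # measured plant time constant [s]
PLANT_GAIN = 5.0    # placeholder: V of PZT per degC of chiller setpoint

# integrator gain so that the loop gain magnitude is 1 at the UGF
# (the UGF sits well below 1/(2*pi*tau), so the lag adds little phase there)
k_int = 2 * np.pi * UGF_HZ / PLANT_GAIN

dt = 1.0
pzt_v, temp_cmd, temp_actual = 55.0, 0.0, 0.0    # start with the PZT at 55 V
for _ in range(int(6 * 3600 / dt)):              # simulate six hours
    err = pzt_v - SETPOINT_V
    temp_cmd += k_int * err * dt                       # discrete-time integrator
    temp_actual += (temp_cmd - temp_actual) * dt / TAU_S  # 110 s first-order lag
    pzt_v = 55.0 - PLANT_GAIN * temp_actual            # toy plant: temp change re-centres PZT

print('final PZT voltage: %.1f V' % pzt_v)       # should settle near 35 V
```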
Attached are the OpLev trends from the past week.
Laser Status:
SysStat is good
Front End power is 33.19W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED
PMC:
It has been locked 5.0 days, 22.0 hr 26.0 minutes (should be days/weeks)
Reflected power is 3.105Watts and PowerSum = 26.24Watts.
FSS:
It has been locked for 0.0 days 1.0 h and 43.0 min (should be days/weeks)
TPD[V] = 1.522V (min 0.9V)
ISS:
The diffracted power is around 7.575% (should be 5-9%)
Last saturation event was 0.0 days 2.0 hours and 10.0 minutes ago (should be days/weeks)
Activity Log: All Times in UTC (PT)
16:00 (08:00) Reset IPC error on H1SUSETMY
16:15 (08:15) Adjust ISS diffracted power from -1.99 V / 9.4% to -2.0 V / 8.2%
16:20 (08:20) Goodwill on site for pickup – Bubba escorted
17:00 (09:00) Filiberto & Manny – Pulling cables at End-Y
17:03 (09:03) Christina – Opening OSB rollup door
17:51 (09:51) Corey – Working in the Optics Lab
17:57 (09:57) Chris & Joe – Beam tube sealing on X-Arm, ~800m from End station
18:00 (10:00) Filiberto – Back from End-Y
18:17 (10:17) Kyle – Going to End-X, will be in compressor room
18:18 (10:18) Platt Electrical on site – Delivery for Richard
18:18 (10:18) Kiwamu – Transition LVEA to Laser Hazard
19:11 (11:11) Aidan & Nutsinee – Going into LVEA to IO table near HAM4
19:48 (11:48) Joe & Chris – Back from X-Arm
20:00 (12:00) Dale – Big tour in the control room
20:05 (12:05) Aidan & Nutsinee – Out of the LVEA
20:09 (12:09) Corey – Out of the Optics lab
20:10 (12:10) Dale – Another big tour in the control room
20:11 (12:11) Kyle – Back from X-Arm
20:11 (12:11) After initial alignment – Locked at DC_READOUT
20:15 (12:15) Start ASC-CHARD transfer functions for Jenne
20:21 (12:21) Lockloss – Unknown
20:37 (12:37) Kyle – Going to End-Y compressor room
21:11 (13:11) Locked at DC_READOUT for commissioning work
21:12 (13:12) GRB – Ignored alert due to status of IFO
21:20 (13:20) Joe & Chris – Beam tube sealing on X-Arm
21:30 (13:30) CW Injection running & CW Injection inactive messages
21:57 (13:57) Kyle – Back from End-Y
21:59 (13:59) Aidan – Going into LVEA
22:15 (14:15) Turn over to Travis
End of Shift Summary:
Title: 02/01/2016, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: Jenne
Incoming Operator: Travis
Shift Detail Summary: After running an initial alignment, relocked the IFO at DC_READOUT for commissioning work. Started TFs for Jenne. Lockloss after about 10 minutes, not related to Jenne's TFs. Relocked at DC_READOUT. Jenne is running her TFs. Hand off to Travis.
We had two occasions on Sunday where CRC errors showed up on certain front ends. Sunday 07:12 PST, a single error showed up on some SUS and ISI models, plus the PSL DBB. Later, at 18:05 PST, all PSL models showed 8-10 errors. Normally CRC errors would be associated with model restarts, but no restarts happened on Sunday.
Attached are spectra comparing the SUS ETM L3 LV ESD channels from lock segments before (Jan 24, 2016 03:35:00 UTC) and after (Feb 1, 2016 01:50:00 UTC) the ESD Driver update last Tuesday (alog 25175) and the subsequent ESD fixes on Wednesday (alog 25204). Before the install, the channels looked like ADC noise, whereas now they show live signal. The ETMx plot is included, but of course the ETMx ESD is not ON during either lock stretch, so all that plot really says is that something happened to the channels. Whether or not the ETMy channels look like they actually should, according to the various ECRs, is to be determined.
Keita helped me make more useful plots to evaluate the new LV chassis mods. See attached.
The alog that motivated all of these is alog 22199.
In Betsy's new plots, reference traces are with the old hardware, current traces are new, both in full low noise lock. In summary, this is good.
The spectrum/coherence plot shows that the new whitening is useful: the monitor is actually monitoring what we send rather than just ADC noise. (It used to be totally useless above f~7 Hz or so, as you can see from the reference coherence.) You can also see that there's a small coherence dip at around 30 Hz, and there are deeper dips around the various band-stop filters, but otherwise it's actually very good now.
In the second plot, you see the Y channel spectrum together with X. Since we don't send anything to X during low-noise lock, X is just the sensing noise. Comparing the signal (red) and the sensing noise (light green), we can see that the signal is larger than the noise across the entire frequency range except in the stopbands.
At around 30Hz (where we saw a tiny coherence dip in the first plot) the noise is only a factor of 3 or 4 below the signal. We expect the higher frequency drive above f=100Hz to drop as we increase the power, so the signal/noise ratio there might drop in the future. There's still a large headroom before we rail the ADC (RMS is 600 counts), so if necessary I'm sure we can make some improvement, but this is already very good for now.
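A quick sketch of this kind of driven-versus-undriven comparison is below, using synthetic stand-ins for the driven (ETMY) and undriven (ETMX) monitor time series; the amplitudes and duration are placeholders, not the real data.

```python
# Sketch: compare the ASD of the driven monitor against the undriven one
# (effectively the sensing noise), and check the ADC headroom.
import numpy as np
from scipy import signal

fs = 16384.0
rng = np.random.default_rng(1)
n = int(600 * fs)                          # ten minutes of stand-in data
y_drive = 600.0 * rng.normal(size=n)       # stand-in for the driven ETMY MON (RMS ~600 cts)
x_undriven = 5.0 * rng.normal(size=n)      # stand-in for the ETMX sensing noise

f, Pyy = signal.welch(y_drive, fs=fs, nperseg=int(8 * fs))
f, Pxx = signal.welch(x_undriven, fs=fs, nperseg=int(8 * fs))

snr = np.sqrt(Pyy / Pxx)                   # signal / sensing-noise ratio
band = (f > 25) & (f < 35)
print('signal/noise around 30 Hz: %.1f' % np.median(snr[band]))

# headroom before railing the 16-bit ADC (+/-32768 counts)
print('drive RMS: %.0f counts (ADC rail at 32768)' % np.std(y_drive))
```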
The only thing is, what are these lines that appear even when we don't send anything (X)?
It seems as if 57Hz and harmonics, whatever they are, in the non-driven channel are at least as large as in the driven channel.
57Hz was the HWS at the end stations.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25383
Feb. 1 2016 18:31 UTC Stopped Conlog on conlog-test-master. It had been started 19:42 UTC Jan 27 2016 (alog 25201). Shut down conlog-test-master to install the Spectracom card.
Feb. 2 2016 02:36 UTC Connected conlog-test-master back to the same 97,469 process variables as h1conlog1-master.
Kiwamu transitioned the LVEA to laser hazard
Pulled in new CPS Sync cabling from SUS-R1 to CPS units on BSC10. Work still needed for install (EX/EY):
1. Install fanout chassis in SUS-R1
2. Run RF cable from Electronics Bay to VEA SUS-R1 (71MHz)
3. Modifications to some of the CPS units (converting units from masters to slaves)
After improving the angular decoupling and the feedforward, the noise curve looks quite smooth to me, except for a small bump between 15 and 20 Hz and some other bumps between 55 and 80 Hz, to be investigated.
So I played the noise slope fitting game, to see what kind of noise shape we would need to explain the curve. At high frequency there's shot noise (flat) and Kiwamu's 1/f noise. At very low frequency (below 14 Hz) the noise curve looks very steep, and it seems to be something like 1/f^9, although it's very difficult to properly estimate the slope here; it could easily be 1/f^10.
What's most interesting to me is that between 20 and 40 Hz, the noise floor is explainable with a 1/f^4 slope. I find this interesting because it points to suspension displacement or actuation noise in the second-to-last stage, for example excess noise in a DAC or coil driver in one of the test masses, or even in the BS.
Caveat: this is just a hint, the slopes and amplitudes I estimated might be very wrong, and there's no real indication that we only have three or four separate noise contributions.
P.S. I wanted to upload the MATLAB fig file, but it's 21 Mb and so it seems I can't attach it here.
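For what it's worth, here is a small Python version of the slope-fitting game (the attached figure was made in MATLAB): a quadrature sum of power-law terms fit to a spectrum. The data and all amplitudes below are made up for illustration only.

```python
# Toy noise-slope fit (placeholder data, not the real DARM spectrum): model the
# ASD as a quadrature sum of power laws f^0 (shot-noise-like), 1/f, 1/f^4 and
# 1/f^9, and fit their log10 amplitudes.
import numpy as np
from scipy.optimize import curve_fit

SLOPES = (0, 1, 4, 9)

def noise_model(f, *log_amps):
    """Quadrature sum of power-law terms with log10 amplitudes."""
    return np.sqrt(sum((10.0**a / f**n)**2 for a, n in zip(log_amps, SLOPES)))

# placeholder "measured" ASD; in practice f, asd would come from the real spectrum
f = np.logspace(1, 3, 400)                       # 10 Hz - 1 kHz
rng = np.random.default_rng(2)
asd = noise_model(f, -20, -19, -14, -8) * (1 + 0.05 * rng.normal(size=f.size))

# fit in log space so the steep low-frequency part doesn't dominate the residual
popt, _ = curve_fit(lambda f, *p: np.log10(noise_model(f, *p)),
                    f, np.log10(asd), p0=[-20, -19, -14, -8])
print('fitted log10 amplitudes:', popt)
```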
We have had multiple locklosses today from the beamsplitter coil driver switching.
This is puzzling, since this step was largely unproblematic during the run.
Opened FRS ticket 4325. Unfortunately for this study, the only BS coil driver monitor channels stored are the noise mons and voltmons, both of which are upstream of the output impedance network (so they don't measure the current effect of switching the "acquire" network off as is done here), *and* they're only stored at 1024 Hz at the fastest. I recommend we start by either (a) installing an analog voltage breakout pick-off in-line with the M2 BOSEM chain downstream of the coil driver, to identify the amplitude of the glitching that takes out the IFO's lock, and address it from there, or (b) changing the h1susauxb123 front-end model to store the driver's FAST I MON at 16 kHz. (These can go in the commissioning frames, and the noise mon and voltmon can be removed.)
Starting to look at this, but I have a question before I start: why is the Ramp on UL 0 sec and the other filters 10 sec?
Richard refers to the ramp time in the COILOUTF bank; however, the ramping between coil driver states is performed by the new Ramp Matrix part, not this bank. It's likely that these COILOUTF bank ramp times were "set" a long time ago (clearly more than 300 days ago!), and because the bank is not used for any ramping of control signals, they have merely remained untouched.
This problem has gotten worse and better in the past without any known cause; for example, during ER7 it was particularly bad.
I've restarted all the LHO ISI Guardians. I tested the functions/features and problems, and they are all present on the ITMX ISI too.
I modified watchdog/states.py to accommodate this additional request to the T150549 update. The update was committed to the SVN: