Nutsinee, Robert, Elli
We have temporarily switched the EY HWS to a different power supply to see if this will eliminate the 57Hz coupling into DARM. The HWS is currently powered using a cable running from the big transformer box at EY. The EY HWS is currently switched on, and can't be switched off remotely in this configuration. We plan to return the HWS to its original power supply tonight or tomorrow morning.
I have updated the safe.snaps for PEM and OAF models. These systems now have greened SDF screens.
I have "greened" up the IOP models SDF settings.
New STS-2 cable has been pulled from the CER SEI racks to the PEM area in the Beer Garden. This is to help with the huddle testing of the STS-2's in the VLEA. Work areas included HAM4, BSC2, and BSC3 where the cable was dressed in the cable tray. Work started around 8:30 and ended around 9:30.
This morning Richard and I tested the UPS used for the PSL Mephisto power supply. It is now set to log what happens. A run calibration was also done. The UPS claims that it can run for ~5-1/2 hours on the battery. We initiated a self-test, for which the UPS switched over to the battery. After a minute or so I checked the laser to find that the Mephisto power supply and frontend laser control box displayed an error condition. Interestingly the MEDM display indicated a flow sensor problem. The Beckhoff PC indicated an interlock problem and an EPICS alarm but nothing about a flow sensor problem. Both the diode and crystal chiller were still running when I entered the diode room to reset the system. Clearly the Mephisto tripped because of the UPS switching over to its batteries. Richard, Peter
Timing is pretty good, the ISIs tripped within a few seconds of each other and the HEPI tripped 16 seconds later. No issues on recovery.
Sheila, Evan
Evan struggled with the PSL tripping earlier tonight. After it stopped tripping we were able to lock long enough to get to low noise and make one DARM OLG measurement, which is attached here. Then we were hit by a large earthquake.
I made an attempt at making a filter for SRCL FF, using alpha to go to the DARM actuator. For the SRCL OUT to DARM IN TF we used the noise injection from last night; a screenshot of the TF is attached.
To get alpha, I took the SRCL OUT -> DARM IN TF * (DARM OLG / (DARM closed loop * DARM plant)), which is (DARM IN1/SRCL OUT) * (DARM IN1/DARM IN2) / ((DARM IN1/DARM EXC) * (DARM IN1/DARM OUT)). For the DARM measurements I used a cut of 0.6 on the product of all three coherences; for SRCL to DARM I used a cut of 0.8 on the coherence. I fit the product using vecfit; the attached screenshot shows the result when using 8 zeros and 8 poles. Anyone can adjust the number of zp pairs by editing line 58 in /sheila.dwyer/Noise/SRCLMICHFF/SRCL_DARM_TF.m. I did not load any filters.
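The TF combination and coherence cuts above can be sketched as follows. This is a minimal numpy sketch, assuming the measured TFs have already been exported onto a common frequency vector; the function and argument names are hypothetical, not the actual channel or script names, and the real fitting step (vecfit) is not reproduced here.

```python
import numpy as np

def combine_alpha(srcl_to_darm, darm_olg, darm_clg, darm_plant,
                  coh_srcl, coh_darm_product,
                  coh_cut_srcl=0.8, coh_cut_darm=0.6):
    """Combine measured transfer functions into
    alpha = (SRCL->DARM TF) * DARM OLG / (DARM closed loop * DARM plant),
    keeping only frequency bins that pass both coherence cuts.
    All inputs are numpy arrays on one common frequency vector."""
    mask = (coh_srcl >= coh_cut_srcl) & (coh_darm_product >= coh_cut_darm)
    alpha = srcl_to_darm * darm_olg / (darm_clg * darm_plant)
    # Bins failing either cut are set to NaN so the fitter can drop them.
    return np.where(mask, alpha, np.nan)
```

The NaN-masked result can then be handed to a rational fitter (vecfit in the original analysis) with the NaN bins excluded.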
Tripped around 2015-05-07 02:31:30 UTC. I reset it. Aside from the usual diode flow bit flipping issue, the diode chiller again appeared to have some unphysically fast jump in temperature for about 30 s.
PSL tripped again, around 2015-05-07 02:47:50 UTC. This time, the computer in the diode room showed a momentary dip in the diode chiller flow rate (see attached photo). I talked to Rick on the phone, and we agreed to restart it again.
PSL tripped a third time, around 04:23:00 UTC. I reset it again.
These trips are really an impediment to progress, not only because of the time it takes to restart the PSL but also the time it takes to relock the IFO and get back to low noise. If it is an option to replace the bad sensor soon, that would be good, although I think I remember from when Michael Rodruck was struggling with the same issue it was not a simple fix.
These recent PSL trips, as well as the power watchdog trips, are an example of how we have made the system less robust by being overzealous in trying to protect it.
Both the recent trips and many trips a few years ago when Michael Rodruck was still here were due to bad sensors. These sensors broke and indicated that the flow was low, when the flow was not actually low and there was no real danger to the laser.
Do we have any redundant sensors, so that we could check that multiple sensors agree that the flow is low before we turn off the laser?
We see H1:PSL-IL_DCHILFLOW flipping for a while and then stopping after about 30 seconds in these recent spurious laser trips. Does this bit go to 0 and stay there if the flow is actually low? If so, we could wait to trip until this bit has been low for at least half a second, or until a rolling average of the bit drops below one half.
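The proposed half-second hold-off could look something like the sketch below. This is a hypothetical illustration only, assuming the flow-OK bit is sampled at some fixed rate; it is not the actual Beckhoff interlock logic, and the sample rate is a made-up parameter.

```python
from collections import deque

def make_flow_trip_checker(sample_rate_hz=16, hold_time_s=0.5):
    """Return an update function that is fed the flow-OK bit each sample
    (1 = flow good, 0 = flow low) and only reports a trip once the bit
    has been 0 continuously for hold_time_s. A sensor that merely
    'flips' briefly therefore never shuts the laser down."""
    n_hold = max(1, int(hold_time_s * sample_rate_hz))
    history = deque(maxlen=n_hold)

    def update(flow_ok_bit):
        history.append(flow_ok_bit)
        # Trip only when the buffer is full and every sample in it is 0.
        return len(history) == n_hold and not any(history)

    return update
```

A rolling-average variant (trip when the mean of the buffer drops below 0.5) would be equally easy; the deque version above implements the stricter "continuously low" condition.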
The power watchdog seems like another example of overzealous protection of the hardware. (The power watchdog has tripped less often recently; I assume that is because the PSL maintenance is now being done.) This is a watchdog that goes off when the PSL power becomes low. It seems to me that a drop in the power is not a dangerous situation. I think it would be best to downgrade the power watchdog to an alarm or warning, or anything else that does not bring a multi-million-dollar facility to a standstill.
Sheila, Evan, et al.,
These laser trips are, of course, a big concern to the PSL "team." We are actively pursuing solutions.
For the past month or so we have been operating without engaging the watchdog on the Front End laser.
We are trying to understand the functionality of the Beckhoff interlock, which appears to be responsible for initiating the shutdowns. This week, we received some information from Maik Frede at LZH. He wrote the control software. It appears that the system is not functioning as designed and described in LIGO-T1000005.
Yesterday, Peter King and Jeff Bartlett replaced the flow sensors in the spare chillers with units that don't have moving parts. We have some indication that the flow sensors originally installed and currently operating (the ones that Michael Rodruck was replacing) are the cause of the triggers. As soon as possible, we plan to either swap in the spare chillers (with the new sensors) or shut down for an hour or so to replace the sensors in the operating chillers.
Jeff, Peter, Jason, Ed Merilh and I are meeting at 9:00 this morning to discuss progress on several fronts in trying to understand and address these shutdowns.
J. Kissel

I've processed the DARM OLG TF measurement Evan and Sheila took last night (LHO aLOG 18269), and was dismayed to find that the DARM coupled cavity pole (DCCP) frequency has decayed back down to 270 [Hz]. This is obvious from the attached residuals, where I show two different versions of model parameters for last night's measurement compared against the two previous measurements taken during the mini-run, when the DCCP frequency was 355 [Hz], near the expected value. I've again used a 0.99 coherence threshold. I trust this assessment of the DCCP frequency to within 2%, especially since that's the only thing I have to change in the model (besides the overall scale factor) to arrive at this conclusion (for the skeptics of my precision, see LHO aLOG 18213). What's going on here?

--- Total Blind Speculation ---

As Sheila mentions in the main entry (LHO aLOG 18264), the recycling gain and initial alignment had been restored to their mini-run values. There has been quite a lot of work on SRC alignment: maybe those SRC loops, which are now higher bandwidth -- though good for stable SRCL to DARM coupling (LHO aLOG 18273) -- are not so good for the DCCP frequency, perhaps because they have some bad alignment offset. Perhaps we should try very slightly altering the DC alignment of these loops to see if it has an effect on the DCCP frequency, and then optimize for it. Eventually, one might imagine using the amplitude of either the PCal or DARM calibration lines as feedback to keep the pole frequency stationary and near the design value.

The good news is that the overall scale factor (what we're assuming is either the optical gain or ESD driver strength variation) changed by less than 0.5%.
---------------------
The measurement template has been copied over and committed to the CalSVN here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/DARMOLGTFs/2015-05-06_H1_DARM_OLGTF_LHOaLOG18269.xml
The new model parameter set can be found committed here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Scripts/H1DARMparams_1114947467.m
and as usual, the model is here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Scripts/H1DARMmodel_preER7.m
and all residual comparisons are done here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Scripts/CompareDARMOLGTFs.m
I guess another possibility is some effect from the lower SRCL ugf. When the guardian goes into LSC_FF, the SRCL gain is reduced by 30% to reduce the amount of control noise appearing in DARM (the SRCL ugf goes from 50 Hz to 25 Hz). I'm assuming this reduction was not done during the minirun, when the good DARM pole was measured.
I suppose we should just run two DARM OLTFs in quick succession, one with the low ugf and one without.
Evan's idea seems unlikely: how could the SRCL loop, with a bandwidth of a few tens of Hz, affect what happens at much higher frequency? This would imply huge couplings of both DARM->SRCL and SRCL->DARM. Such large couplings should be easily visible when measuring the SRCL open loop gain.
The most likely hypothesis is that the pole frequency depends critically on the SRC alignment. One interesting test would be to inject two DARM lines, one at low frequency (50 Hz) and one at high frequency (1000 Hz) and track their variations while aligning the ITMs. We expect the low frequency to be constant (tracking the optical gain) and the high frequency to change (tracking the pole frequency).
We already inject calibration lines at 34.7 and 538.1 [Hz], which have been on since April 1 2015 (17621). They've been at a fixed amplitude that had been tuned for a 30 [Mpc] lock stretch, so the 34.7 [Hz] line may not be visible at all times, but still -- it's on the to-do list to make this comparison. Any off-site help would be much appreciated!
7:58 JimW taking ETMx ISI down to load new filters
8:55 Gerardo to valve out BSC1 aux cart
9:04 Gerardo done
9:17 Karen to MY, Cris to MX
9:49 Karen leaving MY
10:33 Cris back from MX
14:27 Richard to Mid ? with Simplex
14:57 Gerardo to LVEA to shutdown aux cart
There were a few days in the last week where we could not get the interferometer fully locked for some alignment-related reason. I studied what was going on in this period and came up with a hypothesis.
According to my hypothesis, the following must have happened:
These resulted in the dark days where we were unable to lock the interferometer.
[The unlockable days]
As shown in the above cartoon, we were unable to fully lock the interferometer between some time on Tuesday the 28th and Thursday the 30th of April. I also noted alignment-related activities in the same viewgraph to see what kind of activity we did. It is shown that there was one activity right around the time when we started noticing difficulty in locking, namely removal of the software aperture mask on the ITMY green camera (alog 18108). Nonetheless, I don't think this triggered the unlockable days. As will be discussed in the later part of this alog, I believe that the cause is a large lateral shift of the spot on ITMX.
[Optics' angle in the unlockable days]
I attach two trend plots which show the angles of various optics. They are 10-day trends. I drew two vertical lines in each plot encompassing the dark days. Looking at the two plots in the upper right corner of the first plot, one can clearly see that neither TRX nor TRY reached a high value, meaning we could not lock. Also, it is very clear that we had a different alignment in this particular period.
Even though some optics exhibited different alignment in pitch, most of the optics indicate that the alignment in yaw changed significantly. Therefore I neglect any misalignment in pitch hereafter and concentrate on misalignment in yaw.
[X arm alignment]
Since the Y arm behaves as a slave of the X arm from the point of view of the interferometer alignment, I assume that the Y arm alignment had been continuously adjusted properly, so that the Michelson became dark, the beam spot on ETMY was kept at its center, and so forth. In this way I limit the discussion to the X arm and PRC hereafter.
Here is a summary table of how much the relevant optics moved in yaw before and after the Tuesday 28th. The values were extracted from the two trend plots that I showed above.
optic | before Tuesday [urad] | after Tuesday [urad] | difference [urad] | sensor
ITMX | -8 | 0 | +8 | oplev
ETMX | 0 | -3 * 2.74 = -8.22 (oplev calibration, alog 18237) | -8.22 | oplev
TMSX | -229 | -235 | -6 | witness sensor
PR3 | 271 | 275 | +4 | alignment slider
PR2 | 3373 | 3398 | +25 | witness sensor
PRM | 420 | 540 | +120 | witness sensor
IM4 | -645 | -569 | +76 | witness sensor
Using the ITMX and ETMX angle differences, I estimated the difference in the spot position on ITMX and ETMX. I used the following matrix to compute the spot position:
The variables are defined as shown in the cartoon below.
Substituting the realistic values: L = 3994.5 m, ROC_itm = 1934 m, ROC_etm = 2245 m, Psi_i = 8 urad and Psi_e = -8.22 urad, I obtain displacements of x_i = -4.7 cm and x_e = -1.8 cm. The beam axis should look like this:
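The geometry behind these numbers can be sketched directly: the cavity eigen-axis is the line through the two mirrors' centers of curvature, and a yaw tilt of a mirror moves its center of curvature sideways by ROC * tilt. The sketch below is a hypothetical reimplementation, not the matrix actually used above; the sign convention is an arbitrary self-consistent choice, so only the magnitudes should be compared with the quoted -4.7 cm and -1.8 cm.

```python
def cavity_spot_shift(L, R_itm, R_etm, th_itm, th_etm):
    """Transverse shift of the cavity axis at the ITM (z = 0) and ETM
    (z = L) for small yaw tilts th_itm, th_etm [rad] of the two mirrors.
    R_itm, R_etm are the radii of curvature [m]."""
    # Centers of curvature along the cavity axis.
    z_ci = R_itm
    z_ce = L - R_etm
    # A tilt th moves a mirror's center of curvature sideways by R * th
    # (signs here are one possible convention, assumed for illustration).
    x_ci = R_itm * th_itm
    x_ce = -R_etm * th_etm
    # The eigen-axis is the straight line through the two centers.
    slope = (x_ce - x_ci) / (z_ce - z_ci)
    x_itm = x_ci + (0.0 - z_ci) * slope
    x_etm = x_ci + (L - z_ci) * slope
    return x_itm, x_etm
```

With L = 3994.5 m, ROC_itm = 1934 m, ROC_etm = 2245 m and the tilts from the table, this reproduces spot shifts of about 4.7 cm on ITMX and 1.8 cm on ETMX.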
Discussion of the DRMI part is held in the next section. Also, please note that in this analysis we can not estimate the absolute spot positions.
Displacement of 47 mm on ITMX is quite large. For example, one can compare it with the beam radius of the red light on ITMX, which is about 53 mm. So it moved by almost half of the beam diameter.
Since we had nonideal behavior in the PRC2 ASC loop, I am speculating that the large displacement on ITMX resulted in clipping or something similar in the PRC or Michelson part, and hence a bad ASC signal.
According to error propagation, the spot positions on the test masses are found to be very sensitive to the precision of the measured angles. For example, x_e would get an error of +/-1.2 cm if an error of +/-0.5 urad is added on the ETMX angle. Nonetheless, I think the displacement of 47 mm on ITMX is believable as this is a relatively large number. On the other hand, the displacement of 18 mm on ETMX may be fishy because of measurement error. Additionally, the green WFS servos must have steered the test masses such that the spot position on ETMX stayed at the same position. Anyway, at this point it is unclear if ETMX actually moved by 18 mm.
One thing which makes me a bit suspicious about this calculation is the angle of TMSX. As listed in the table, it moved by 6 urad. On the other hand, according to the analysis, the arm eigen-axis obtained a rotation of about 16 urad, which does not match the angle of TMSX by a factor of almost 3. I am not sure what this means.
[PRC alignment]
Doing a similar matrix approach in PRC, one can build a displacement-misalignment matrix as
where h_j is displacement and phi_j is misalignment. Subscript p stands for PRM, 2 for PR2, 3 for PR3 and i for ITMX. Substituting the difference in the measured angles as listed in the table, one can get
[hp h2 h3 hi] = [1.8 mm, -2.5 mm, 1.9 cm, -1.9cm]
which unfortunately contradicts what I said about the ITMX spot position. However, in principle, our initial alignment scheme should bring the PRC beam axis so as to match that of the arm cavity. Now, instead of simply substituting the measured values, I do a small tweak on PR3 angle. I add a magic number of 0.4 urad on top of the measured PR3 alignment. This results in displacements of:
[hp h2 h3 hi] = [-1.3 mm, 5.3 mm, -4.7 cm, 4.7 cm]
which agrees with the previous arm cavity discussion. I don't have any data to support this magical 0.4 urad on PR3, but I believe that it is not crazy to say that the calibration of the PR3 alignment slider has been off by 10%. Anyway, if we believe this result, it introduces a translation of 4.7 cm in the PR3-ITMX beam line as expected. Also the spot position on PRM did not move so much, as expected, because it should be passively determined by the IMC pointing, which I think is stable on a time scale of many days. Even though the spot on PR2 moved by 5.3 mm, I don't think this is big enough to let the beam hit the PR2 baffle and cause clipping.
[Why the yaw alignment changed ?]
It is unclear why the yaw alignment changed so drastically during the period.
One might think that this was due to a wrong reference position on the ITMX green camera. However, according to the past alogs, there were no obvious activities associated with the ITMX green camera at around the time when we started having difficulty. Keita suggested checking whether a limiter was on in some of the ALS WFSs, because this has been an issue which hindered the ALS WFS performance. Indeed both ALS_X_DOF2_P and _Y had a limiter on (see the attached conlog and trend), but the _Y_LIMIT value was set to 1e7 counts while _P_LIMIT was set to 10 counts, meaning Y_LIMIT was essentially not effective. This could cause some funny behavior in pitch, but it is still hard to believe that it introduced such a big offset in yaw.
This morning I valved out the aux pump cart, 8:50 am, the cart continued to pump on the valve.
At 3:20 pm the aux cart was turned off, cable, hose and turbo decoupled from the valve. Ion pump is on its own now, current is trending down, currently at 4.02 mA.
No change was noticed on the pressure of the BSC1 annulus system as the cart was removed.
The valve was blanked off.
So why didn't other platforms trip? Here are time series of the ITMY sensors. The BS & ETMX tripped on actuators. For ITMY, most of the signals show the characteristic earthquake arrival sequence of P, S & surface waves. The ITMY actuators (first graph) maxed out at maybe 14000 counts (trip at 30000), and the T240 signals approached 20000 counts (second graph). The other sensors see the EQ signal but do not exceed 1/3 of their maximum.