SEI - Data mining; HEPI pump investigation ongoing; performance measurements with DRMI locked.
SUS - Jamie will be coming to update Guardian infrastructure and work on SDF; TJ adding HAMAUX and TT to Guardian Overview; Thomas working on Drift Monitor
ISC/Commish - 10:00AM commissioning schedule; moving forward in a positive direction.
3IFO - BSC5 testing done; still has CPS issue; will be moving from staging building into LVEA (TBA); 1 of 4 desiccant cabinets has arrived and is going into the highbay.
Daniel, Filiberto, Dave:
The h1tcscs ADC signals were discovered today to have changed around 1pm Tuesday 27 Jan PST. We found the DC power strip feeding the non-PEM AA and the 16-bit DAC AI chassis for h1oaf0 to be powered down. We reconnected the DC power strip's cable, and these systems are now operational.
K. Venkateswara
BRS was turned off just before the vent at EX (15711). I restarted the code in the BRS laptop and noticed that the DC position of the balance had changed owing to the 2 deg C temperature change in the XVEA. The balance is just out of the nominal range but still working correctly, as far as I can tell. If it proves to be noisier, I may adjust the set point in hardware on Tuesday, since it will take a couple of hours to get right. For now, I adjusted the DC offset in the code appropriately and it all seems to be working as usual (see image). The wind-speed is barely a few mph.
The sensor correction is now using the tilt-subtracted ground super-sensor. This should make no difference to the platform, at the moment, as the sensor correction is using the "0.43-Hz only" filter.
Rana and Travis updated a newer version of the drift monitor and added it to the sitemap at LHO, and I made some further changes. I have fixed bugs in the MEDM screen (/opt/rtcds/userapps/release/sus/common/medm/SUS_DRIFT_MONITOR.adl), and the buttons for updating individual suspensions now work again. LLO: if you svn update the MEDM screen, please be aware that instances of "H1" in the code will need to be changed to "L1" to prevent horrible, epic failure. I've added a set of dictionaries to the drift monitor update script (/opt/rtcds/userapps/release/sus/common/script/driftmon_update.py) that allow setting fixed threshold values by editing the code itself (example below). The current values are somewhat arbitrary guesses, so they require tuning. The changes have been committed to svn, and the code should be LLO-friendly without any modification.

Example: To set fixed thresholds, open driftmon_update.py and scroll down until you find the following (at line 115 at the time of this post):

########## TUNE THRESHOLDS HERE ###########
# yellow thresholds = mean +- yellow_factor * BOUND value
# red thresholds = mean +- red_factor * BOUND value
yellow_factor = 1
red_factor = 2
BOUND_MC1 = {'P' : 50, 'V' : 10, 'Y' : 15}
BOUND_MC2 = {'P' : 50, 'V' : 10, 'Y' : 5}
BOUND_MC3 = {'P' : 50, 'V' : 15, 'Y' : 20}
... and so on ...

and edit the values corresponding to the suspensions and degrees of freedom you wish to tune. For instance, with the code above, if the script updates MC1 pitch and sets, say, 10 uRad as the nominal value, then the yellow alarm will trip at <-40 uRad and >60 uRad (mean +- 50 uRad), and the red alarm at <-90 uRad and >110 uRad (mean +- 2*50 uRad).
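The threshold arithmetic described above can be sketched as follows. The helper function is illustrative, not part of driftmon_update.py itself, which only defines the factors and the per-suspension BOUND dictionaries:

```python
# Illustrative sketch of the yellow/red threshold arithmetic described above.
# The thresholds() helper is hypothetical; driftmon_update.py itself just
# defines the factors and per-suspension BOUND dictionaries.
yellow_factor = 1
red_factor = 2
BOUND_MC1 = {'P': 50, 'V': 10, 'Y': 15}   # uRad bounds per degree of freedom

def thresholds(mean, bound, yellow=yellow_factor, red=red_factor):
    """Return (yellow_low, yellow_high, red_low, red_high) around a nominal value."""
    return (mean - yellow * bound, mean + yellow * bound,
            mean - red * bound, mean + red * bound)

# MC1 pitch with a nominal value of 10 uRad and a BOUND of 50 uRad:
print(thresholds(10, BOUND_MC1['P']))   # (-40, 60, -90, 110)
```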
I opened up the MEDM code for the driftmon, and changed all specific references to H1 to $(IFO), and now I believe it should run fine on either site. So please disregard my prior nonsense about having to change the MEDM code for use at LLO. Macros are awesome.
I updated the GUARD_OVERVIEW and the IM/TT medm screens to contain micro/mini Guardians for IM1, IM2, IM3, IM4, RM1, RM2, OM1, OM2, OM3. The scripts were already made available by Stuart Aston and Jameson Rollins at LLO; see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=15772 and also https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=15585. The Guardian nodes were not yet created, though, so I had to create those and figure out that they also had to be started.
Currently RM1, RM2, OM1, OM2, OM3 do not have any offset.snap files saved to them in userapps/sus/h1/burtfiles/ (hence the red border on the Guardian Minis in the overview screen shot).
Attached are the before and after of the GUARD_OVERVIEW.adl screens along with the new HAUX and one of the new HSSS IM screens.
The red border actually indicates an ERROR with the node. If there are no actual ERROR conditions in those nodes, then there might be a version incompatibility between the version of guardian in use and the indicator screen.
I'll help resolve this issue when I'm on site tomorrow.
ISS arrays are ready to be containerized. The bases for the cans needed a slight modification so they are currently at a machine shop. Once they are back and out of the air bake, work can continue for storage and shipping. I have attached a brief report with pictures for the next soul that works on this so things can be located easily. Also attached is a spreadsheet of which items are associated with which ISS and a report of progress.
There is a problem with the signal from the ITMY OSEM: it was very noisy during the lock losses last night, it is still noisy now, and it seems to have always been this way.
The attached screen shots show the signals up the chain on each quad.
It looks like the ITMy L1 stage UR sensor is the exact culprit.
Dan, Richard, Daniel, Rich

We loaded the updated Beckhoff code today to enable the full functionality of the newly installed Fast Shutter Driver. The system is now operating properly. There are a couple of things worthy of note:
1. The ISI watchdogs trip in HAM6 whenever the fast shutter fires at full speed AND whenever the fast shutter is actuated in the slow mode, either up or down. The reason for this sensitivity, as compared to the apparently less sensitive response at LLO, is not yet understood.
2. If the high voltage is not enabled (HV ready) on the front panel of the shutter driver, the shutter Beckhoff code defaults to a blocking shutter state (closed) and will not allow you to unblock the shutter. This is a code feature, and is not actually precluded in hardware.
3. We verified that the logic is correct on all signals, including: shutter controller output, LV OMC Length shutter input, Fast Shutter input, and all readbacks. All are correct.
When I came in this morning, DRMI was still locked and I had the place to myself, so I decided to try turning on the BSC RZ loops with a high blend. I was using SRCL, PRCL and MICH as witnesses. I was mostly able to turn the loops on without losing lock if I turned them on slowly, but the BS broke DRMI, and Ed has been having a hard time getting DRMI back. I tried ITMX first, then added ITMY, then tried the BS. The attached plots show the cavity spectra for the baseline (red), ITMX RZ on (blue), and both ITMs' RZ on (green). There is not much change at low frequency, although SRCL and PRCL both improve some; but between 1 and 4 Hz the RZ loops make things worse. I've returned the BSCs to their standard configuration.
Time line from this morning, times in UTC
15:28 Turned on ITMX ST1 RZ
15:51 Turned on ITMY ST1 RZ
15:55 Daniel kills IMC, breaking lock
15:57 Ed restores DRMI
16:13 I try to turn on BS ST1 RZ; this kills DRMI. Ed is unable to recover DRMI consistently after this point, probably because the end station PLLs are down
model restarts logged for Thu 29/Jan/2015
2015_01_29 01:42 h1fw0
2015_01_29 05:43 h1fw0
both unexpected restarts. Conlog frequently changing channels report attached.
Reconfigured and restarted the corner EtherCAT system to link the slow channels for the fast shutter.
The attached plot shows the uptime of the 3 EtherCAT chassis. The y-axis shows the elapsed time, since the last restart/reboot. This can be due to a hardware problem, or just a normal software update. The record starts around 11/11. For the corner there was a restart on 12/9 and then it was up for more than 50 days. (The fact that the trace wraps around from +24 days to -24 days is due to a programming error.) EY also looks good with one additional restart around January 9th. However, EX clearly has a problem with a forced reboot every other Tuesday.
Evan, Alexa, Elli, Kiwamu, Rana, Keita, Sheila
We are getting closer.
Today it has been much more reliable and robust for us to get to the state where DARM is controlled on RF. Differences from yesterday that have probably helped with this:
The ISC_LOCK guardian takes us to an offset in TR CARM of 20. There we have been trying to transition to REFL9 I/TRY. We lowered the whitening gain of REFL9I, which was saturating the ADC; we now have 0 dB whitening gain and no whitening filters. The steps that we have been doing that are not yet in guardian are:
At this point, some kind of oscillation has been starting which eventually blows the lock. Some example times are 2:47:06 UTC and 3:13:01 UTC. The attached striptool shows both of these events. You can see the oscillation in POP18, AS 90, AS DC, and REFL DC. As I've been writing, I've been letting the guardian try to lock on its own; it has lost lock twice in the REDUCE_CARM_OFFSET_MORE state, I think because the DIFF offset wasn't adjusted well. I've extended the time that the servo runs before this step.
I've left DRMI locked, with the arms misaligned for seismic people.
This DRMI lock stretch Sheila left us has been undisturbed since ~04:05 UTC. Here are some relevant conditions of the IFO:

DRMI Guardian is in requested state DRMI_1F_OFFLOADED.
IM4, PR3, BS, and SR3 alignments must have been tweaked recently, as their alignments are not saved, according to the Guardian.
Corner station WFS are OFF.
ISS Second Loop is OFF.
Wind is low, <5 [mph].

Seismic Env.: Band Limit [um/s] (12-hour trend)
  0.03 - 0.1 = 4e-1 (steady)
  0.1 - 0.3 = 2e-1 (on its way down)
  0.3 - 1.0 = 3e-2 (steady)
  1.0 - 3.0 = 1e-1 (Hanford is on shift and loud)
  3.0 - 10  = 3e-1 (steady)

Optical levers are well centered (about 5-7 [urad] off center in P&Y) for BS, ITMX, and PR3; ITMY and SR3 are a little further off center (about 15-17 [urad]).
BS Optical Lever Damping in P and Y is ON; no other Optical Lever Damping is ON.

All corner station HEPIs are ON, position-sensor only, locked to the ground, ~4 [Hz] UGFs. All DOF isolation loops are closed, including AC-coupled HP and VP. BSC HEPIs have Z sensor correction ON. Corner Station HEPI Pump Servo is ON.

All HAM-ISIs are running ~30 [Hz] isolation loops, sensors blended with 01_28 filters (including HAM3!), sensor correction ON in X, Y, and Z. All DOF isolation loops are closed. GS13s are in HIGH gain mode, with DeWhitening filters OFF.

All BSC-ISIs are running ~30 [Hz] isolation loops, sensors blended with:
  ST1: X&Y 45 mHz, Z 90 mHz, RX&RY 250a*250b, RZ 750 mHz
  ST2: X&Y 250 mHz, Z, RX, RY, RZ 750 mHz
ITMs: ST1 RZ isolation loops are NOT closed, all other DOFs are closed; on ST2 only the X&Y isolation loops are closed.
BS: ST1 RZ isolation loops are NOT closed, all other DOFs are closed; all ST2 isolation loops are OFF.
ITM GS13s are in HIGH gain mode, with DeWhitening filters OFF.
All relevant SUS are happily damped.
Keita, Alexa, Sheila
The oscillation that was breaking the lock last night was at about 0.45 Hz, and shows up in all the quad oplev pitch signals. It looks like the soft mode in both arms, but the oscillations in the two arms are not in phase with each other. Some of this noise shows up in DARM, but not in the other LSC output channels, or in the BS, PR3, or SR3 signals, until after the lockloss.
This shows up on the ITMs, and we are not actuating on the ITMs. This is strange, but one thing that we would like to try is using lower power.
We have noticed that it sometimes takes a very long time for guardian to calculate paths (the ISC_LOCK guardian).
We also just saw something stranger: it seems that the ALS_COMM guardian, which was managed by the ISC_LOCK guardian, went into EXEC mode, although we are pretty sure no one in the control room did this. A screen shot of the log is attached.
The log files and conlog report the ALS_COMM out-of-managed-mode sequence is managed->pause->exec->pause->managed. Text file is attached.
On the long times for ISC_LOCK to calculate paths, I see a log entry which suggests a transition request from LOCKING_ALS_DIFF to DARM_WFS (via LOCKING_ALS_COMM) took 12 seconds to calculate the path. Is the processing of the current state causing the delay? Log details attached.
Can you guys elaborate on this claim of overly long path calculation time? The log you posted doesn't seem to support it. From the log you posted:
2015-01-30T03:36:06.566Z ISC_LOCK [LOCKING_ALS_DIFF.run] USERMSG cleared
2015-01-30T03:36:15.586Z ISC_LOCK new request: DARM_WFS
2015-01-30T03:36:15.586Z ISC_LOCK calculating path: LOCKING_ALS_DIFF->DARM_WFS
2015-01-30T03:36:16.778Z ISC_LOCK [LOCKING_ALS_DIFF.run] USERMSG: node ALS_DIFF: NOTIFICATION
2015-01-30T03:36:27.308Z ISC_LOCK [LOCKING_ALS_DIFF.run] ALS_XARM: REQUEST => LOCKED_TRANSITION
The path calculation happens at 3:36:15.586, followed by some usercode logging about changes to the ALS_XARM request, which presumably is a subordinate of this ISC_LOCK node.
What makes you think that the ISC_LOCK is taking a long time to calculate a path? My guess is that you're confusing the manager notification about changing subordinate request with a problem with the path calculation. They're not related.
After looking at all the L4C plots, Krishna suggested we look at the IPS, thinking that with the loops closed by the HEPI platform servo, which has lots of gain, the IPS signal should be suppressed, and the L4C coherence we are seeing was from something else... like the fluid moving the piping. Seems reasonable, although really?
However, see attached: same times as the L4C coherence I posted Tuesday. I show HAM5 at the local position sensor (IPS). It is very clear the horizontal IPS have great coherence from 0.5 down to 0.02 Hz when the pressure sensor loop is closed. The vertical IPS coherence isn't nearly as strong, but it is certainly present. The output drive to the HAM5 actuators is very small, both horizontal and vertical.
I also attach the coherences from the end stations. The second attachment has ETMX in the left plots and ETMY in the right plots. Both pressure servo loops are closed, but the ETMX pressure signals do seem to be quieter (although not as quiet as I've seen the corner station's back before December). ETMX is servoing on the direct output pressure at the pump station, whereas at ETMY we are servoing on the actual remote pressures at BSC10, differenced in the EPICS database. The signals at ETMY have always been very noisy, so when I switched ETMY to the differential mode, I added serious smoothing to the signals before differencing.
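The smoothing-then-differencing scheme described above can be sketched like this (plain Python for illustration, not the actual EPICS database records; the alpha value is an assumption):

```python
# Illustrative sketch of heavily smoothing two noisy remote pressure signals
# before taking their difference, as described above. This is not the actual
# EPICS database implementation; alpha is an assumed smoothing constant.
def smooth(samples, alpha=0.05):
    """Exponential moving average; a small alpha means heavy smoothing."""
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def differential_pressure(p_a, p_b, alpha=0.05):
    """Difference the two smoothed pressure readings sample by sample."""
    return [a - b for a, b in zip(smooth(p_a, alpha), smooth(p_b, alpha))]
```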
Looking at the comparison of the ETMs, there are distinctions, but what causes them is not clear. The quieter End X is evident in the amplitude, and despite the smoothing, End Y has a much sharper and higher-amplitude peak at 150 mHz. Don't ask me what's happening at 3.5 Hz...
The lower plots are interesting in that, unlike at HAM5, there is no coherence with the vertical IPS on either platform, but the horizontals are very coherent. The coherence at ETMX is narrower in band, though, maybe somewhat more like the band at HAM5... Is that related to the quieter pressure signal at ETMX, or due to ETMY servoing on the difference rather than the direct supply pressure, or the smoothing...?
I'm going to go out on a limb and say that we expect coherence in the local channels until we fix the pringle loops, which are essentially uncontrolled now. Why horizontal and vertical show different behavior, I have no idea.
I was able to close the ISS Second Loop a few times this morning. The loop performance was on par with what we achieved when we locked it last time. The picomotor closer to the ISS PD array was also moved to optimize the light on the ISS PDs and the QPD.
Noticing that the ISS QPD pitch and yaw were off, I moved the picomotor closer to the ISS array to optimize the light on the PDs and the QPD. This work improved the light on half of the PDs by about 10-20% . This also improved the beam position on the QPD. Before and after readings are listed:
       Before (cts)  After (cts)         Before (cts)  After (cts)
PD1    4430          4460         PD5    4600          5400
PD2    4150          4900         PD6    4700          5000
PD3    4750          4800         PD7    5750          5780
PD4    5050          5300         PD8    5300          5380

         Before   After
QPD_PIT  0.73     0.08
QPD_YAW  0.75     0.01
QPD_SUM  24400    24300
I was able to close the loop without kicking the IMC out of lock. This was not robust, and the previously used script would not work because the second loop output fluctuation was much bigger than the threshold we used in the script. Rather than changing the script, I want to investigate why we are not able to obtain the same robustness that we previously had. The loop performance is on par with what we have achieved in the past. With the loop closed and the boost and integrator on, RIN was about 2E-8/sqrt(Hz) at 10 Hz. The attached plot shows the loop performance in different configurations.
For people interested in the loop performance downstream, here is a plot that shows the loop performance at IM4_TRANS and MC2_TRANS. The loop closing is still not robust because of too much noise at the second loop output, but I am working on understanding it.
Kiwamu, Elli, Alexa, Evan, Rana, Daniel, Sheila
Today we were able to reach CARM offsets around 30 pm.
We transitioned DARM to AS45Q at a CARM offset where sqrt(TRX+TRY) was -7; we then normalized the signal by sqrt(TRX) (with a factor of 0.23). One important step in getting there was to implement the ezca servo that adjusts the ALS DIFF offset to bring AS45Q into the linear range. We now use that both at a CARM offset of 1 (in sqrt(TRX+TRY)) and after we transition to the QPDs. We then change the DARM low pass filter from a 33 Hz low pass to an 80 Hz low pass, to get better phase margin since we are no longer saturating the ESD with ALS noise. We did this transition several times successfully. As Rana mentioned in alog 16334, we installed an ND1 filter on ASAIR A. Since then we haven't transitioned to RF DARM again (for reasons that seem to be unrelated to the ND filter), so we will need to check the gain before we transition again.
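Conceptually, the offset-adjusting servo mentioned above does something like the following sketch. The reader/writer callables stand in for real ezca channel access; the gain, timing, and channel references are illustrative assumptions, not the actual guardian implementation:

```python
import time

# Conceptual sketch of the ALS DIFF offset servo described above. The
# read/write callables stand in for real ezca channel access; gain, dt,
# and the channels hinted at in comments are illustrative assumptions.
def run_offset_servo(read_error, write_offset, gain=-0.01, dt=0.1, n_steps=50):
    """Slowly integrate the RF error signal (e.g. AS45Q) into the ALS DIFF
    offset so the signal sits in the middle of its linear range."""
    offset = 0.0
    for _ in range(n_steps):
        err = read_error()          # e.g. read AS45Q from an ezca channel
        offset += gain * err * dt   # integrate the error toward zero
        write_offset(offset)        # e.g. write to the ALS DIFF offset channel
        time.sleep(dt)
    return offset
```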
After making this transition, Elli, Kiwamu and Rana worked on the DHARD WFS, which we turned on to reduce the fluctuations in AS DC. This allowed us to go to CARM offsets of -25 in sqrt(TRX+TRY), which is about half of the total power we expect in the arms. If you assume that the recycling gain is 30 (we don't really know), it is something like 30 pm. In REFL DC we saw the power drop by about 20%. We saw that the linearized REFL 9 I signal had turned over, and that without linearization the signal had reached its peak. We made a few attempts to transition; we were able to turn down the gain of the TR CARM signal to 50% of nominal and turn the REFL 9 signal up to what we think the nominal gain should be (-100 in the input matrix). We lost lock when we turned off the TR CARM signal. Our next plan was to leave TR CARM engaged with reduced gain and keep reducing the CARM offset.
However, we have been having a hard time locking in the last few hours. We think that it might help to transition DARM to RF a little sooner, so that we could use the WFS.
The sequence that was working earlier this afternoon is in guardian up to the RF DARM transition, although this might need to be re-worked. We have added but not tested a state for the DHARD WFS.
Attached is a screenshot of our striptool during the sequence, which was all handled by the guardian this time.
As we were about to leave we had a nice stable lock. We were able to transition to RF DARM again at -7.0 cts CARM offset after having adjusted for the ND filters. We now have a +20 dB filter in ASAIR_RF45Q, and the input matrix element is now 25. We turned on FM4 (z4^2:p1^2) in the DARM loop, which significantly helped the DARM noise. We then proceeded to adjust the CARM offset to -20 cts. At this point we transitioned TR_REFL9 to 100% with TR CARM at 50%. We were able to reduce the CARM offset to zero, but this only lasted for about a second or so. We never fully turned off TR CARM, but we think it has zero slope here since we are at zero offset. More tomorrow when we are awake...
Lock loss time: 09:58:40 UTC Jan 29th
Great work!
Assuming I am looking at the right lock attempt, (data attached starting at 09:48:00 UTC) it seems that REFL DC is only ~30% less than at the beginning of the sequence when you transition to RF. There should be room to get closer. P.S: For comparison, trend of powers with "lossy" arms is here. The build up in the arms for same relative REFL DC power was about a factor of 3 lower (by eye numbers).
Daniel and Rana have mentioned that optical torques may become significant as we come in to resonance.
For 10 mm of miscentering and 46 kW of circulating arm power (at 0 pm of CARM offset), we get a torque of 3×10^-6 N m. I estimate the stiffness constants of each pendulum to be 4.9 N m/rad for pitch and 6.5 N m/rad for yaw (a better estimate could be made using the actual suspension models). This means that the static misalignments induced by the radiation torque could be as large as 1 μrad. The attached code computes the torsional stiffnesses of the pendula.
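A back-of-the-envelope check of those numbers, assuming the simple radiation-torque model torque = 2*P*x/c and taking the stiffnesses as the rough estimates quoted above (not values from the actual suspension models):

```python
# Back-of-the-envelope check of the numbers above, assuming the simple
# radiation-torque model torque = 2*P*x/c for a beam miscentered by x on a
# mirror carrying circulating power P. Stiffnesses are the rough estimates
# quoted in the text, not values from the suspension models.
c = 299792458.0   # speed of light [m/s]
P = 46e3          # circulating arm power [W]
x = 10e-3         # beam miscentering [m]

torque = 2 * P * x / c   # radiation torque [N m], ~3e-6

kappa_pitch = 4.9        # estimated pitch stiffness [N m/rad]
kappa_yaw = 6.5          # estimated yaw stiffness [N m/rad]

theta_pitch = torque / kappa_pitch   # static misalignment [rad]
theta_yaw = torque / kappa_yaw

print(f"torque       = {torque:.2e} N m")
print(f"pitch offset = {theta_pitch * 1e6:.2f} urad")
print(f"yaw offset   = {theta_yaw * 1e6:.2f} urad")
```

The resulting misalignments come out just below 1 urad, consistent with the "as large as 1 μrad" statement above.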
As a next step, we might also consider the stiffness of the optical springs using, e.g., eqs. 31 in the paper by Sidles and Sigg. At 46 kW of circulating power, we get 15 N m for the major mode and −0.6 N m for the minor mode.
[Edit: Also, Kiwamu has pointed out an error in the expression for the moment of inertia for the test masses. This has been fixed in this entry and in the attached code.]
Here are some oplev trends from last night's final lock attempt.
The drop in the buildup of POP18 seems correlated with a drift in ETMX pitch (0.3 μrad), and to a lesser extent BS pitch (0.2 μrad), SR3 pitch (0.4 μrad) and ITMY pitch (0.2 μrad). There may also be some drift in PR3 yaw and pitch (≈0.1 μrad). All of these drifts happen on time scales much slower than the change in TRX buildup, which supports the idea that these are thermal drifts induced, e.g., by wire heating.
For the record here are some lock loss times from last night:
Early in the evening we were trying to transition DARM from ALS_DIFF to ASAIR_A_RF45_Q and the lock dropped at the following times:
Jan 28 20:46:40 UTC, Jan 28 21:06:32 UTC, Jan 28 21:06:17 UTC, Jan 28 22:55:00 UTC.
Speculating from the lock loss plots, we think that DARM noise causes a big spike in the light leaking out of the AS port. This causes the power on the LSC-TR_X/Y_QPDs controlling CARM to fluctuate enough that CARM drops lock. Running the ASAIR centering servo should help minimise big spikes at ASAIR_A_LF. Once we were able to transition DARM to RF, this type of lock loss stopped happening.
-----------------------
Here are some lock losses from after the transition of DARM to RF. The cause of these lock losses remains unclear. MICH, PRCL and SRCL were ringing up at various frequencies (4 Hz-20 Hz), but this changed from lock loss to lock loss. Again there are big spikes in ASAIR_A_LF right before the lock loss. ETMy alignment needed frequent touching up.
Jan 29 00:28:31 UTC, Jan 29 00:49:41 UTC, Jan 29 05:34:23 UTC.
--------------------
Later in the evening we were having a hard time locking. Again we were losing lock before the DARM transition to RF. Again there are big spikes in ASAIR_A_LF, probably caused by DARM motion.
Jan 29 07:35:25 UTC, Jan 29 07:35:25 UTC, Jan 29 08:24:26 UTC, Jan 29 08:40:40 UTC
Here is a screen shot of the CARM offset reduction from earlier in the evening, when the alignment must have been slightly better. Although we didn't reduce the CARM offset, and were locked on TR CARM, we had a recycling gain of about 10.2. Also, some of the signal from POP18 and POP90 is rotated into the Q phase as the CARM offset is reduced.
Peter, Matt, Lisa

For the record, we had this theory that if f_1 was tuned such as to make 2f_1 resonant in the arms, the beat between 2f_1 and the carrier in the recycling cavity could be responsible for the decay in POP18. Looking in the L1/H1 logs and MEDM screens, we arrived at the conclusion that in H1, given the arm length of 3994.4704 m and f_1 = 9.100230 MHz, the offset of 2f_1 from resonance should be 380 Hz. In L1 the offset of 2f_1 from resonance is 500 Hz (reported here), as nominal.
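The arithmetic behind that 380 Hz number can be sketched as follows (a quick plane-wave check using the H1 values given above, not the script we actually used):

```python
# Quick check of the quoted 2*f_1 offset from arm resonance, using the H1
# values given above and simple plane-wave FSR arithmetic.
c = 299792458.0     # speed of light [m/s]
L = 3994.4704       # H1 arm length [m]
f1 = 9.100230e6     # H1 f_1 modulation frequency [Hz]

fsr = c / (2 * L)   # arm free spectral range [Hz], ~37.5 kHz
f2 = 2 * f1         # the 2*f_1 sideband

# Offset of 2*f_1 from the nearest arm resonance (integer multiple of the FSR)
offset = f2 - round(f2 / fsr) * fsr

print(f"FSR = {fsr:.1f} Hz, 2*f1 offset from resonance = {offset:.0f} Hz")
```

This lands at roughly 380 Hz, consistent with the number quoted above.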