[JimW, JeffK, Jenne]
Several times today we've had to hand-request the OMC guardian to go to ReadyForHandoff, after it fails to find the carrier during the PZT scan. I'm a little unsure why it sometimes works, and sometimes doesn't, since the scans look quite similar.
In the two attached screenshots, once the OMC scan worked, and once it didn't. The one that includes "succeed" in the filename starts at a slightly lower DCPD value (below 2mA), whereas the "failed" time the DCPD value starts at around 5mA, so maybe that's the difference in it recognizing that the first peak on the left is one of the 45MHz sidebands? I'll have to look at the OMC scan code, but if that's not it, then I'm certainly confused.
Okay, fixed.
Since the first 45MHz peak is so close to the start of the scan, the "pzt_diff" value that the OMC guardian is looking for isn't super accurate, so it's failing a test and returning the OMC to its Down state.
What the scan does is chunk the data into 7 pieces and find the max value in each piece. It then compares the max DCPD value from each of those 7 chunks. It decides that it has found a pair of 45MHz peaks if the peaks are within 25% of the same height and are within 18V of each other (where the volts are from the PZT2_MON_DC_OUT channel). Since we're at the beginning of a ramp, and it's the soft CDS ramp and not a sharp triangle ramp, I think that the peak location is maybe not getting mapped exactly correctly to the PZT value, so sometimes it looks like the 45MHz peaks are 18.03V apart, which causes the test to fail. However in some cases, like the "succeed" case from the parent comment, the difference is found to be something like 17.95V, which barely passes the test.
I have increased the acceptable peak difference to 20V. As you can see in the screenshots in the parent comment, the 45MHz peaks that are not a matching pair are more than 30V apart.
Upon our first try after this change, the pzt difference that it found was 18.05V, but the OMC kept locking happily since this is lower than the new threshold of 20V.
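The chunk-and-compare logic described above can be sketched as follows. This is a hypothetical re-implementation for illustration, not the production guardian code; the `min_peak_ma` floor (to ignore baseline-only chunks) is my own assumption and is not described in the log.

```python
import numpy as np

def find_45mhz_pair(pzt_volts, dcpd_ma, n_chunks=7, height_tol=0.25,
                    max_sep_volts=20.0, min_peak_ma=2.0):
    """Split the scan into n_chunks pieces, take the max DCPD value in
    each, and call two chunk maxima a 45 MHz sideband pair if their
    heights agree to within 25% and their PZT positions are within
    max_sep_volts (20 V after the threshold change described above).
    min_peak_ma is an added assumption to skip baseline chunks."""
    chunks_v = np.array_split(np.asarray(pzt_volts, dtype=float), n_chunks)
    chunks_d = np.array_split(np.asarray(dcpd_ma, dtype=float), n_chunks)
    peaks = []
    for v, d in zip(chunks_v, chunks_d):
        i = int(np.argmax(d))
        peaks.append((v[i], d[i]))
    # Compare every pair of chunk maxima against the two criteria.
    for j in range(len(peaks)):
        for k in range(j + 1, len(peaks)):
            (v1, d1), (v2, d2) = peaks[j], peaks[k]
            if d1 < min_peak_ma or d2 < min_peak_ma:
                continue
            if (abs(d1 - d2) <= height_tol * max(d1, d2)
                    and abs(v1 - v2) <= max_sep_volts):
                return v1, v2
    return None
```

With a scan where the two sideband peaks are 18 V apart this returns the pair; at 25 V apart (outside the 20 V window) it returns None, which is the failure mode that sent the OMC back to Down.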
I have plotted about 18 minutes of data starting at GPS time 1164290368 (2016/11/27, 13:59:11 UTC). The attached pdf file has a time series plot of the kappas and the coherences on the first page, and histograms with averages and standard deviations on the second page. The output of CALCS, SLM, and GDS are included. For kappa_tst and kappa_pu, CALCS and GDS are ~1% different, while SLM values are ~4% larger. Agreement is fairly good for kappa_c and f_c, with the exception that the CALCS value of kappa_c is ~1-2% larger.
Robert, Kiwamu,
This is a summary of the recovery of the IMC alignment after the PSL PZT mirror mount swap today (for which I think Robert will make a separate report).
Synopsis-
IMC seems to have shifted its waist location horizontally by 0.25 mm based on the change seen by the suspension witness sensors. This apparently was large enough to reduce the amount of light at IMC-TRANS by a factor of two, presumably due to worse in-vac clipping in HAM2. Nevertheless, as of now, we seem to be able to lock the interferometer on DC readout without new issues.
Recovery process-
We temporarily placed an iris after the top periscope mirror and before the beam tubing. After the swap, we checked two existing irises that had been on the table and the new one at the top of the periscope to coarsely recover the alignment. Then checking the spot on the PSL wall coming out from the light pipe between the PSL and HAM1, we did a fine alignment. This was good enough to put us back to a position where we see the main light on IMC REFL camera. A final alignment was then done by engaging IMC ASC which automatically servoed the alignment to an optimum. The temporary iris was removed before we left the PSL.
After the successful lock of IMC and successful engagement of IMC ASC, we noticed that the IMC TRANS is smaller by a factor of two. This seems to be due to a slight change in the horizontal direction in the uncontrolled degree of freedom. Here is a table listing several changes in some relevant sensors.
                | before | after | diff
MC1 PIT witness |     -3 |    -2 | +1 urad
MC2 PIT witness |    510 |   507 | 0
MC3 PIT witness |   -841 |  -842 | -1 urad
IM4 PIT         |  -0.33 | -0.34 | almost 0
MC1 YAW witness |  -1035 | -1043 | -8 urad
MC2 YAW witness |   -672 |  -672 | 0
MC3 YAW witness |   -996 |  -987 | +9 urad
IM4 YAW         |   0.25 | -0.08 | -0.33 cnts
As shown above, the only appreciable change is that in DOF2 of IMC YAW (highlighted by red texts). Using Kawazoe's formula (P1000135), one can find that this amounts to a lateral shift of the spot position by 0.25 mm toward HAM3 or away from PSL. [EDIT] Keita pointed out that the direction of the move that I initially reported was wrong. So the correct statement is that the beam shifted by 0.25 mm toward the PSL or away from HAM3.
Things we didn't optimize-
Two addenda.
Firstly, to cope with the fact that IMC TRANS decreased by a factor of two, I have edited the IMC_LOCK guardian and lowered all the FM_TRIG_THRESH_ON and _OFF values by a factor of two. In addition, I manually changed the IMC-MCL_TRIG_THRESH_ON and _OFF values by a factor of two as well; they don't seem to be under the control of any guardian. IMC locks fine with these new settings. The guardian was then checked into SVN.
Secondly, the spot positions on the PSL wall seem to have shifted by 1-2 mm towards the West. No obvious change was found in the vertical direction. See the attached picture. The new positions are recorded with black 'X' marks as shown in the picture.
Jenne and I did a repeat of what we tried a few weeks ago after different PSL work (30918): we restored the optics to their old values using the witness sensors, then moved the piezo to maximize build-up without turning on the IMC WFS. This brought the spot back to its previous position on IM4 trans, although the MC2 trans sum was low, so we think that, as expected, only the input beam has moved, not the mode cleaner optics or IMs. However, we can't fix the input beam change simply by moving the PZT.
We let the MCWFS run to increase the mode cleaner transmission, and watched the spots on both the ISS QPD and IM4 trans. We walked IM3 and IM2 in yaw to bring both QPDs back to the spot positions before this morning's PSL work, and now Cheryl is doing initial alignment.
To move this degree of freedom, we moved IM2 1.284 urad in yaw for every urad that we moved IM3. Since this is a degree of freedom that our initial alignment and ASC don't control, it may be a good idea to try moving this degree of freedom in full lock to see what impact it has on our noise and recycling gain. For the record, today we moved IM3 -2390 urad, and IM2 -3030 urad.
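The walk above can be sketched as a helper that splits a move along this uncontrolled degree of freedom into small paired steps. This is illustration only, assuming the measured 1.284 IM2/IM3 yaw ratio; writing the actual suspension alignment channels is left out.

```python
def walk_steps(im3_total_urad, ratio=1.284, n_steps=10):
    """Split a yaw move along the uncontrolled input DOF into n_steps
    equal (IM3, IM2) increments, holding IM2/IM3 = ratio at every step
    so the beam walks along the DOF that initial alignment and ASC do
    not control.  (Sketch only: applying the offsets is not shown.)"""
    d3 = im3_total_urad / n_steps
    return [(d3, ratio * d3) for _ in range(n_steps)]
```

For example, `walk_steps(-2390.0)` yields ten (IM3, IM2) increments of (-239.0, -306.9) urad each, keeping the ratio fixed so ASC can follow between steps.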
~1555 - 1605 hrs. local -> Kyle in and out of LVEA. More data - I'm letting the gauge volume pressure accumulate overnight. I noticed that the pirani gauge mounted between the two in-series turbo pumps has changed again and is now showing 1.8 x 10-3 torr, a few hours after the most recent decrease in heating power. Being positioned between these two turbos, i.e. at the inlet of the downstream turbo, this gauge should be "off-scale low" and unchanging regardless of the temperature of the now-baked parts. If this pressure change were real (which it isn't) it would have to be due to a newly developed leak (which it isn't). The real reason to isolate the pumps from the gauge volume at this stage is two-fold: firstly, the off-gassing/gas-removal bang for the buck is over now that we would be pumping on it at, or near, room temperature; secondly, by doing so, we now have (2) closed valves between the site vacuum and any failure modes of the temporary pumping setup that could result in a venting of the pump line.
Frontend Watch is GREEN
For the record, these up-times are low because we've just gone through a PSL incursion to install new, stiffer, better damped, mounts for PZTs on the PSL periscope mirrors (aLOG pending, for now see ops report in LHO aLOG 31917). No problem here.
Attached are the long trends of the ETM charge measurements with this morning's data appended. Note in the first 2 plots that the ETMy charge looks slightly ramped up as it heads away from zero in all 4 quadrants - if indeed ramping, it could be due to the nice IFO duty cycle we have, which drives through the ETMY L3. For this reason, Jenne edited the ISC_LOCK.py guardian to set the ETMY L3 bias gain in DOWN to be the opposite of what it is in-lock, in an attempt to de-charge it when not in low noise use. She also made sure that there was an appropriate re-setup step for this bias gain where needed in the guardian script.
This is similar to the change Sheila made in alog 31172 to "de-charge" ETMX by flipping the gain sign during unused times - in that case during low noise locking.
Change to the guardian script has been committed to SVN.
J. Kissel, S. Dwyer We've further modified the LOWNOISE_ETMY_ESD state to wait for this LOCK_BIAS gain change to finish ramping before moving to the transition. Also, the transition of control to ETMY (after the bias has successfully flipped sign) now happens in the run portion of the state as opposed to the main, with a couple more counters that wait for all the appropriate 30 second ramps to finish. This means that it's going to take 30 seconds longer to go through this state. Your patience is appreciated! We've successfully transitioned over to ETMY using this further updated code.
TITLE: 11/28 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Jim
SHIFT SUMMARY: After the lockloss, we have been in DOWN for PSL PZT work, with other tasks happening opportunistically.
LOG:
16:12 Joe D beamtube patching Yarm
17:37 Lockloss, possibly due to coincident PSL Noise Eater OOR
17:58 Kyle to LVEA PT180
18:06 Jason to EX for OpLev work
18:10 Robert and Kiwamu to PSL PZT work
18:15 Joe D done
18:20 Kyle out
18:22 Jeff B forklifting chiller
18:25 Hugh to EX BRS work
18:30 Jason done at EX
18:33 Jason and Betsy to TCSy table to check flow sensor
18:54 Jason to EY OpLev, SEI_CONF to SC_OFF_NOBRSXY
19:08 Jeff B done
19:21 Jason done
20:02 Hugh done, SEI_CONF set to WINDY_NO_BRSX (Hugh said not to use BRSX yet)
22:08 Switched SEI_CONF back to WINDY since Hugh reports that BRSX is back up
22:34 Dave restarting models
23:55 Kyle to LVEA to close valve
A mini circuits amplifier was placed on the following RF signals going into the PEM QUAD I&Q Demodulator Chassis:
Cable PEM_CS_Radio_Ebay_45MHZ (AMP1)
Cable PEM_SPQ_PWR_HAM2-3_2 (AMP2)
Cable PEM CS_Radio_Ebay_9MHz (AMP3)
Cable PEM SPQ_PWR_HAM2-3_1 (AMP4)
The cables were re-routed from the demodulator chassis to the network rack (next to the PEM/OAF Rack). Four mini circuits ZHL-1A+ amplifiers were mounted on a rack tray. A 5 dB attenuator was installed at the output of each amplifier.
Amplifiers are being powered by a DC power supply.
Special Monday Maintenance:
WP 6350 ALS and LSC model change
Keita, Dave:
Removed 3*2048Hz channels from each of h1alsex and h1alsey, and 1*16384Hz LSC channel, from the DAQ frame.
WP 6354 SUSPROCPI model change
Terra, Dave:
removed 28*2048Hz channels from DAQ frame.
WP 6353 ECR E1600351 Add GDS channels to DAQ broadcaster
Jess, Dave:
Added 41 channels to the DAQ frame broadcaster.
Additional data size is 2*16kHz and 39*256Hz channels.
DAQ restart
Dave:
Did one DAQ restart to install the above changes.
Attached files show the DAQ channels removed from frame due to model changes, and the added channels to the broadcaster.
The online h(t) pipeline has been restarted at Hanford at approximately GPS second 116440990. This does not include any filter change, but does include the following changes:
The full command line is below.
gstlal_compute_strain
--data-source=lvshm
--shared-memory-partition=$LIGOSMPART
--filters-file=/home1/calibration/svncommon/CalSVN/ER10/GDSFilters/H1GDS_1163857500.npz
--ifo=H1
--frame-duration=4
--frames-per-file=1
--write-to-shm-partition=$HOFTSMPART
--compression-scheme=6
--compression-level=3
--control-sample-rate=4096
--expected-fcc=341.0
--partial-calibration
--coherence-uncertainty-threshold=0.004
--kappas-default-to-median
--apply-kappatst
--apply-kappapu
--apply-kappac
Significant exhaust temperature response (<-30C) 47 seconds after changing the manual-mode %open value of the LLCV (liquid level control valve) from its as-found value of 17 to 50 -> restored this value back to 17 following this exercise.
Here is the temp trend (in seconds). Not sure why the dip prior to fill. First I've seen that.
I trended CP3 exhaust TC temp gauges over 30 days to see if the dips noted above have always been there. Looks like it started on Nov. 15th. The only change to CP3 noted was a Dewar fill while LLCV was set high to 21%. LN2 was coming out the exhaust so I lowered to 16% while the LN2 truck was still filling Dewar. The previous day on the 14th I had filled CP3 by setting LLCV to 100% and noted this was too much flow. Is this an instability in pump reservoir where it burps every 20-40 minutes? Note that from Nov. 19-21 signal was smoother. Could be a function of LLCV setting and how full exhaust line is. Fills lately have been really fast.
CP3 exhaust is also noisy since Nov. 21st.
Numerous entries of the last few days mention the BRS X being down.
First notice is LHO aLog 31792 when PEM went to EndX. A couple hours later Jim mentioned that it was very rung up and he disabled the damping. On Thursday Jim worked on the BRSX but was unsuccessful at getting it to damp reliably. On Saturday and Sunday, TJ logged that the BRS was ringing down--it was but just passively.
This morning after re-enabling the damping, it was going nowhere. The damping neither helped nor rang it up further, so I'm unsure what it was doing.
After the IFO dropped lock, I remotely logged into the BRS computer; looking at the encoder file showed it contained gibberish. At the end station, I opened up the box and saw the damper masses sitting at the 0 position. I thought I had cleared out the gibberish in the encoder file, but upon restart the damper just started doing 360s; this might be what was going on for the past few days.
With Krishna on the phone and some guidance that needs updating, I managed to clear the encoder file and reset the damper encoder angle. Once this was successfully done, the damper quickly did its job and got things under control.
For the operator--see T1600103 for some troubleshooting. After today's lesson, this needs some updating, and it contains nothing about the Encoder file problem. This is on my to-do list.
When I went to the end station on Wednesday, the damper table wasn't moving at all because it had been disabled. When I came in on Thursday, I got it to turn on once, and it seemed to servo normally for a while, but then it stopped and I couldn't recover it. It may have gone crazy on Wednesday afternoon while Anna-Maria and Robert were down there, but it wasn't moving at all after Wednesday night.
Jim's observation is not inconsistent with our conclusion that the problem was with the corrupted encoder file, I think. What I am more curious about is what caused this corruption in the first place. After the damper was upgraded, the last time we had a similar problem was when we had a series of restarts of the Beckhoff computer (see 29871). It would be interesting to look at the history of BRS status bits over the last ~2 months to see if we can get a clue.
Attached trends are all of the "bit" channels I could find associated with BRSX. The first trend is the last 10 days, the second is the last 90. The times where AMPBIT shows excursions away from 1 line up with the two times when the encoder file was corrupted. If this is diagnostic, maybe we could monitor this in the DIAG guardian.
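A DIAG-style check on this bit could look something like the sketch below. The real DIAG node tests read EPICS channels via ezca; here `read_ampbit` is any callable standing in for that read, and the nominal value of 1 is taken from the trends above. The channel name and message text are assumptions.

```python
def brsx_status_messages(read_ampbit, nominal=1):
    """Sketch of a DIAG-guardian-style check: return a list of warning
    messages, empty when the BRSX amplitude bit is at its nominal
    value.  read_ampbit is a placeholder for an ezca channel read."""
    msgs = []
    if read_ampbit() != nominal:
        msgs.append('BRSX AMPBIT away from nominal: '
                    'check for a corrupted encoder file')
    return msgs
```

An empty list means no notification; a non-empty list would surface the encoder-file warning to the operator before the damper misbehaves again.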
On Thursday we (Sheila, Daniel & I) increased the thresholds on the TIDAL RED TRIGGER when the very low value kept the ISC driving the HEPI too long. It would appear there was only one long lock stretch with these settings before Sheila reported that the tidal was not coming on in some locks. Beam diverters?... When Nutsinee reverted the values of the RED TRIGGER thresholds, things seemed to start working properly again. I don't pretend to understand this, but it is pretty clear that reverting the values solved the problem. See the attached four-day trend: lowering the THRESH_ON value started the TIDAL running (I did not zoom in, but it looks correlated.) But also seen is that the REDTRIG_INMON has dropped to 153000 from the previous 165000. If this level (power?) change was enough to disable the 8000 count THRESH_ON setting, clearly something else must be in play. I need to understand the triggering better to really understand this. We don't want the ISI to trip, as it can take several minutes for the T240s to settle and be ready for complete ISI isolation, but obviously we need the TIDAL relief to engage.
It might be that the threshold wasn't exceeded, when it tried to engage REDLOCK. The fundamental problem is that neither the arm power nor the trigger thresholds are getting scaled with the power up, so the threshold needs to be set low enough to catch the lock at low power. Once you power up, the arm powers will then be far above threshold. Too much it seems. Probably best to figure out how to scale the NSUM value with the inverse of the input power. Alternatively, one could rewrite the trigger so that it scales automatically.
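The power-scaling idea above can be sketched as follows. This is a toy illustration, not the production trigger: the 2 W reference power, the linear scaling, and the example threshold numbers are all assumptions.

```python
def normalized_nsum(nsum_counts, input_power_w, ref_power_w=2.0):
    """Scale the arm-transmission NSUM by the inverse of the input
    power, so one trigger threshold works both before and after
    power-up (reference power and linearity are assumed)."""
    return nsum_counts * ref_power_w / input_power_w

class HystTrigger:
    """Schmitt-trigger sketch with separate THRESH_ON / THRESH_OFF
    levels, applied to the power-normalized NSUM."""
    def __init__(self, thresh_on, thresh_off):
        self.thresh_on, self.thresh_off = thresh_on, thresh_off
        self.on = False
    def update(self, value):
        # Turn on above THRESH_ON, stay on until we fall below THRESH_OFF.
        if not self.on and value >= self.thresh_on:
            self.on = True
        elif self.on and value <= self.thresh_off:
            self.on = False
        return self.on
```

With the input normalized this way, the low-power acquisition value and the powered-up value land in the same range, so a single THRESH_ON/THRESH_OFF pair no longer has to be set "low enough to catch the lock at low power".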
Opened FRS Ticket 6787. This has happened a few more times this evening.
Travis, Sheila
Tonight we saw more pathological behavior in the PSL electronics.
Robert wanted to do a test of powering down the DBB, so he switched off the power supply that sits on top of the DBB chassis and unplugged the DIN connector from the back. After this, engaging the ISS second loop caused the diffracted power to drop from almost 5% to less than 0.5%, which caused locklosses as we tried to increase power.
We tried to let the offset adjustment servo (30932) run for a while, but this didn't help.
We plugged back in the DBB so we could get back to locking, but doing so tripped the PSL external shutter.
We've continued having trouble with the ISS second loop introducing a large offset into the ISS, although it is not as bad with the DBB plugged back in.
The attached screenshot shows the AC coupled ISS second loop moving the diffracted power by almost 2% as we power up, so we need to be able to keep the diffracted power high enough that we don't saturate while powering up. Right now turning on the second loop decreases the diffracted power by about 4%, so we are just setting it high to start with.
In addition to electronics problems, there is also a chance that there might be alignment differences in the beam path next to the DBB when it is shut off since it is 4 degrees warmer than the table.
How to adjust the diffraction power (when out of lock):
How to adjust the diffraction power (when out of lock) -- translated with screenshots of the relevant MEDM screens, and with the language color coded to match the colored highlighted boxes on the screenshots:
- Make sure the second loop is in its down state. This means only the input switch is off. The output switch is on, as is its input into the first loop board.
- Make sure the excitation input is enabled and the AC coupling servo switches are on (this shouldn't do anything, since the input switch is off).
- In this state, the output offset should be continuously zeroed by the offset servo. Just check that the second loop output readback reads near zero.
- Adjust the REFSIGNAL of the first ISS loop so that you roughly have 4% diffracted power.
- Turn the second loop on with the input switch (the IMC needs to be locked).
- In the AC Coupling MEDM screen there is an input offset (top left). Adjust it so that the diffracted power stays at 4%.
- Toggle the input switch and make sure that the diffracted power doesn't change.
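The REFSIGNAL step of the procedure above could be automated along these lines. This is a sketch only: both channel names are placeholders (not the real EPICS names), the gain sign and magnitude must match the actual plant response, and the real adjustment is done by hand on the MEDM screen.

```python
def set_diffracted_power(ezca, target_pct=4.0, gain=0.2, tol=0.1,
                         max_iter=50):
    """Nudge the first-loop REFSIGNAL with a simple proportional servo
    until the diffracted power readback sits near target_pct.  `ezca`
    is anything supporting channel-style item access; channel names
    here are hypothetical placeholders."""
    for _ in range(max_iter):
        err = target_pct - ezca['PSL-ISS_DIFFRACTION_AVG']
        if abs(err) <= tol:
            return True  # within tolerance of the 4% target
        ezca['PSL-ISS_REFSIGNAL'] = ezca['PSL-ISS_REFSIGNAL'] + gain * err
    return False  # did not converge; adjust by hand
```

Such a loop would only be safe out of lock, as the procedure specifies, and would still need the manual input-offset and toggle checks afterwards.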