Moved the isctey green beam shutter (Y, B) to the position that matches the isctex green beam shutter. The name is defined outside of MEDM, so it still says "green fiber shutter". Info attached.
BRSY has been continuing its slow drift off of the photodiode, and was about 200-300 counts from the edge, so this morning I went to EY to try to recenter it. I think I was successful, but will need a couple of hours to tell. Right now it's still rung up pretty badly, so we will need to wait for it to damp down on its own a bit before trying to re-engage it. For now, operators should use one of the seismic configurations that doesn't use BRSY.
Looks like BRSY is closer to center now (at ~ -3000) than before, but given the current drift of ~1500 cts/week I didn't get as much margin before the next adjustment as I'd prefer. Will probably have to do this again in ~2 months.
Remember, it will probably drift up over the next 1-2 days because of the slow thermal equilibration, likely ending up above 3k counts. I think that is very good. Good job, you have mastered the BRS!
Patrick, Kiwamu,
This morning, Patrick found that CO2Y was not outputting any laser power. In the end we could not figure out why it had shut off. The laser is now back on.
[Some more details]
I thought this was a return of the faulty behavior that we were trying to diagnose in early October (30472). However, the combination of looking at the front panel of the laser controller and trending the warning/alarm states did not show us anything conclusive. So, no conclusion again.
When I went to the floor and checked the front panel, no red LEDs were lit. The only unusual thing was the GATE LED, which was off. Pressing the red gate button brought the GATE LED back to green as expected. This could be an indication that the IR sensor momentarily went into the fault state and came back to normal, leaving the laser shut off. In this scenario, the IR sensor does not latch any LEDs, which is why I thought this could be it. However, looking at the trend around the time the laser went off, I did not find any alarm flags raised at all. Even if it were a fast transient in the IR sensor, I would expect to see it in the trend. So these two observations together cannot support the IR sensor scenario. Another plausible scenario is that somebody accidentally hit the gate button, resulting in no laser output.
I also went to the chiller and confirmed no error there - water level was mid-way (which I topped off), all seemed good.
That certainly sounds like the IR sensor. Unfortunately we don't currently have an analogue readout from that channel, or a good error-reporting system. We are already planning on fixing this with a new version of the controller, which we should be getting ready for post-O2 install.
Has there been a temperature change in the LVEA recently? And the Y-arm laser power is a bit higher than before, but not as high as during your recent testing? I'm just wondering what else could be causing this sensor to be close to its tripping point.
Alastair, if this was due to the IR sensor, how do you explain the fact that it didn't show up in ITMY_CO2_INTRLK_RTD_OR_IR_ALRM? Is it so fast that the digital system cannot record the transient?
I don't understand that. Even if it doesn't latch the laser off, it should still show up on that channel. Is it possible that the chassis itself got a brief power glitch? If that was turned off/on momentarily then it would also put the laser into this state.
From trends the laser tripped off around 15:52 UTC this morning. This was well before the work on h1oaf took it down.
It's very possible that this particular TCS laser issue was caused by the Tuesday maintenance activity involving IO chassis hardware work, which may or may not have been involved in the Dolphin network glitch -> Beckhoff issues (which lasted most of the day Tuesday). It was compounded by the later h1oaf work that day, which caused other chiller trips. Cause of this full saga TBD...
This weekend, as part of clipping investigations, I found that I could adjust the pointing of the PSL piezo mirror to minimize the jitter peaks in IMC-REFL_DC (pitch: 1846 to 2300, yaw: 2043 to 1500) with the IMC off. The figure shows the resulting reduction of peaks in REFL_DC. Shaking of HAM2 was used to enhance some of the peaks, but there was little shaking at the 280 Hz piezo mirror peak. Sheila and I tried to move the piezo mirror to these settings while keeping the IMC locked but we were unsuccessful. It might be worth adjusting the beam on the diode while trying to minimize peaks.
Robert, Sheila
added 125ml to the crystal chiller
nothing added to the diode chiller (no fault on the chiller panel, trend of 14 days shows no "check chiller" faults)
PSL:
  SysStat: All Green, except VB program offline
  Frontend Output power: 34.7W
  Frontend Watch: Red
  HPO Watch: Red
PMC:
  Locked: 0 days, 0 hours, 0 minutes
  Reflected power: 32.7W
  Transmitted power: 100.5W
  Total Power: 133.2W
ISS:
  Diffracted power: 2.576%
  Last saturation event: 0 days, 0 hours, 0 minutes
FSS:
  Locked: 0 days, 0 hours, 0 minutes
  Trans PD: 0.073V
Not much hope here. With any luck I could get all the way to locking PRMI and DRMI, but they don't last. Below I attached some plots from the lockloss tool using Sheila's ALS channel list (/ligo/home/sheila.dwyer/Desktop/Locklosses/channels_to_look_at_ALS.txt). ALS REFL glitched prior to the locklosses, but why does that matter to DRMI and PRMI, which have nothing to do with the arms?
To run the lockloss tool with a custom channel list, do:
$ lockloss -c custom_channel_list.txt select
The default channel list doesn't seem to work.
TITLE: 10/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
Had some trouble with the MC. Sometimes Clearing History (ASC WFS) would help. Another time the FSS was oscillating, so the Fast Gain was ramped down to its lowest value and then returned to 9.0. (Sheila mentioned both of these to me.)
For first attempts at locking, I simply tried to lock (no initial alignment) and would lock DRMI, but it would drop out shortly after. Jenne mentioned going to LOCK_DRMI_1F, looking at the ASC ERROR signals on a striptool, and tweaking the optics for the signals with big errors.
LOG:
During locking tonight, had the following ERROR for ISC_DRMI:
EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:LSC-PD_DOF_MTRX_SETTING_1_23
I tried a couple things: I hit "LOAD", which did nothing. Then I hit "Execute" which broke the lock.
One thing I did not do was re-request the state I was in. (Nutsinee just let me know that this is what works for her when she has had "CONNECTION ERRORS".)
After double-checking that all CDS systems are running, waiting a few minutes, and checking that you can do a caget on the channel in question, change the operating mode of the node with the connection error from EXEC to STOP. Wait for the node to change to a yellow background before requesting EXEC again. If the node was previously managed, you may need to INIT the manager (if the manager is working, one way to do this is to wait for the current state to finish, if it can, then go to manual, INIT, and then back to where it was).
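For reference, a minimal sketch of the caget check described above (hypothetical, assuming the pyepics package is available on the workstation; the channel name is the one from the error message above) - verify the channel actually responds before flipping the node from EXEC to STOP:

    # Sketch: confirm the channel in the EZCA error is reachable before restarting the node.
    from epics import caget

    chan = 'H1:LSC-PD_DOF_MTRX_SETTING_1_23'   # channel from the EZCA connection error
    val = caget(chan, timeout=5.0)
    if val is None:
        print(chan + ' is still unreachable - check CDS before touching the node')
    else:
        print(chan + ' = ' + str(val) + ' - OK to set the node from EXEC to STOP and back')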
I tuned the periscope a little and tuned and damped the piezo mirror mount. The 280 Hz peak is from the piezo mirror on the table, like the 300 Hz peak at LLO, and not from the top mount on the periscope, which is holding as I tuned it last year. I am not sure why the piezo mirror mount is suddenly more prominent than it was last year. I would like to check again for some problem like clipping. In any case, I would like to sub in one of the newer piezo mirror mounts; what I did was pretty jury-rigged and I don't expect much improvement, especially since much time was spent trying to re-lock the mode cleaner. We haven't had a fully locked interferometer since then to evaluate the changes I made.
We are having a recurrence of the glitches described in 30519, 25523, and 22184.
This time they are very intermittent (only seeming to happen about once every ten minutes), which would make it difficult to troubleshoot right now.
Sheila said to watch time series of (4) ALS Channels for glitches. They are:
Have noticed some drops on these channels corresponding with locklosses while locking.
Sheila mentioned that previously this effect would go away on its own, so there may be hope. Or, if the effect becomes more infrequent, one might be able to zip through the locking sequence and get to at least RF_DARM, and then be "home free". (So far, the glitches have been taking H1 down at most steps from LOCKING_ARMS_GREEN up to PREP_TR_CARM.)
I just finished an Initial_Alignment, and am now trying to see if we can get past RF_DARM and elude the dreaded ALS glitches. (So far I am 0 for 5 in the last 20 minutes of locking.)
Matt, Evan
We were perplexed by the steep slope of IMC F below 100 Hz, particularly since it seemed to vary with PMC gain in the same way as the flat part of the spectrum above 1 kHz.
The attached plot shows the IN1 readbacks (i.e., no digital filtering or compensation applied) for IMC F and IMC L. Evidently, they seem to have the same spectral shape above 10 Hz. Since IMC L does not have any analog whitening, this would seem to indicate that IMC F readback has no analog whitening applied (despite what is implied by the schematic for the MC board).
However, the IMC F filter module (which produces the calibrated IMC frequency control channel that we've been using to estimate the IMC control noise) has a filter consisting of two 10 Hz / 100 Hz p/z pairs, as if to compensate for some kind of analog whitening.
What is actually stuffed into the analog whitening for the IMC F readback on the MC board?
[Also, we don't claim to understand why the TF is 0 dB at the peaks but -5 dB everywhere else.]
According to the schematics both MC_L and MC_F have the same whitening: 10Hz/100Hz double zero/pole with DC gain of 1. MC_I has a simple gain of 100.
Yes, this was confusion on our part about the analog source of IMC L. Indeed, they both seem to have whitening installed (by comparison with IMC I).
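To illustrate the shape being discussed (a sketch only, using scipy rather than the actual schematic or site filter files): the whitening described above is a double 10 Hz zero / 100 Hz pole stage with unity DC gain, and the two 10 Hz / 100 Hz p/z pairs in the IMC F filter module are its inverse, so their product is flat:

    # Sketch of the 10 Hz / 100 Hz whitening and its digital compensation (assumed shapes, not site code).
    import numpy as np
    from scipy import signal

    f = np.logspace(0, 4, 500)                    # 1 Hz to 10 kHz
    w = 2 * np.pi * f

    # Analog whitening per the schematic: double zero at 10 Hz, double pole at 100 Hz, DC gain 1.
    z_wh = -2 * np.pi * np.array([10.0, 10.0])
    p_wh = -2 * np.pi * np.array([100.0, 100.0])
    k_wh = 100.0                                  # (100/10)^2, so the DC gain is 1
    _, h_wh = signal.freqs_zpk(z_wh, p_wh, k_wh, worN=w)

    # Digital compensation in the IMC F filter module: the inverse 10 Hz pole / 100 Hz zero pairs.
    _, h_comp = signal.freqs_zpk(p_wh, z_wh, 1.0 / k_wh, worN=w)

    print(np.allclose(np.abs(h_wh * h_comp), 1.0))   # True: the product is flat at 0 dB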
Summary:
Repeating the Pcal timing signals measurements made at LHO (aLOG 28942) and LLO (aLOG 27207) with more test point channels in the 65k IOP model, we now have a more complete picture of the Pcal timing signals and where there are time delays. Bottom line: 61 usec delay from user model (16 kHz) to IOP model (65 kHz); no delay from IOP model to user model; 7.5 usec zero-order-hold delay in the DAC; and 61 usec delay in the DAC or the ADC or a combination of the two. Unfortunately, we cannot determine from these measurements which of the ADC or DAC has the delay.
Details:
I turned off the nominal high-frequency Pcal X-arm excitation and the CW injections for the duration of this measurement. I injected a 960 Hz sine wave, 5000 counts amplitude, in H1:CAL-PCALX_SWEPT_SINE_EXC. Then I made transfer function measurements from H1:IOP-ISC_EX_ADC_DT_OUT to H1:CAL-PCALX_DAC_FILT_DTONE_IN1, H1:IOP-ISC_EX_MADC0_TP_CH30 to H1:CAL-PCALX_DAC_NONFILT_DTONE_IN1, and H1:CAL-PCALX_SWEPT_SINE_OUT to H1:CAL-PCALX_TX_PD_VOLTS_IN1, as well as points in between (see attached diagram and plots).
The measurements match the expectation, except for one confusing point: the transfer function from H1:IOP-ISC_EX_MADC0_TP_CH30 to H1:CAL-PCALX_DAC_NONFILT_DTONE_IN1 does not see the 7.5 usec zero-order-hold DAC delay. Why?
There is a 61 usec delay from just after the digital AI to just before the digital AA (after accounting for the known phase loss from the DAC zero-order-hold and the analog AI and AA filters). From these measurements, we cannot determine whether the delay is in the ADC or the DAC or a combination of both. For now, we have timing documentation such as LIGO-G-1501195 to suggest that there are 3 IOP clock cycles of delay in the DAC and 1 IOP clock cycle of delay at the ADC. It is important to note that there is no delay in the channels measured in the user model acquired by the ADC. In addition, the measurements show that there is a 61 usec delay when going from the user model to the IOP model.
All this being said, I'm still a little confused by various other timing measurements. See, for example, LLO aLOG 22227 and LHO aLOG 22117. I'll need a little time to digest this and try to reconcile the different results.
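As a sanity check on the numbers above (plain arithmetic, not the measurement code): a pure time delay dt shows up in a transfer function at frequency f as a phase lag of 360*f*dt degrees, so at the 960 Hz injection frequency the quoted delays correspond to:

    # Sketch: converting between time delay and phase lag at the 960 Hz injection frequency.
    f_inj = 960.0                                  # Hz, injected sine wave

    def delay_to_phase(dt):
        """Phase lag in degrees produced by a time delay dt (seconds) at f_inj."""
        return 360.0 * f_inj * dt

    def phase_to_delay(phi):
        """Time delay in seconds implied by a phase lag phi (degrees) at f_inj."""
        return phi / (360.0 * f_inj)

    print(delay_to_phase(61e-6))    # ~21.1 deg: the 61 usec user-model-to-IOP-model delay
    print(delay_to_phase(7.5e-6))   # ~2.6 deg: the DAC zero-order-hold delay
    print(phase_to_delay(2.52))     # ~7.3 usec: cf. the DuoTone phase discussed in the comments below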
By looking at the phase of the DuoTone signals we can constrain whether there is any delay on the ADC side (like Keita's analysis here). The DuoTone signals are designed such that the two sinusoidal signals, 960 Hz and 961 Hz, are maximum at the start of a GPS second (and also in phase with each other). To be precise, the maximum is delayed 6.7 µs from the integer GPS boundary (T1500513). The phase of the 960 Hz signal at the IOP (L1:IOP-ISC_EX_ADC_DT_OUT) is -92.52 degrees with respect to the GPS integer boundary (LLO aLOG 27207). Since the DuoTone signal is supposed to be maximum at the GPS integer boundary, i.e., it is a cosine function, this corresponds to a -2.52 degree phase change (the estimate of 92.52 assumes it is a sine function). Converting this phase change to a time delay, we get 7.3 µs. Since there is an inherent 6.7 µs delay by the time the DuoTone signal reaches the ADC, we are left with only a 0.6 µs delay possibly from the ADC process (or some small systematic we haven't accounted for yet). This is what Keita's measurements were showing. Combining this measurement with the above transfer function measurements, we can say that we understand the ADC chain and that there are no time delays of more than 0.6 µs in that chain. This also suggests that the 61 µs delay we see in the ADC-DAC combination exists completely on the DAC side.
The DuoTone signals are sine waves, so a minor correction to Shivaraj's comment above: the zero-crossing corresponds to the supposed GPS integer second. I looked at a time series and observed that the zero-crossing occurs at ~7.2 usec. Since the analog DuoTone signal lags behind the GPS second by ~6.7 usec, I can confirm that the ADC side has essentially no delay. Thus, the 61 usec seen through the DAC-ADC loop is entirely on the DAC side. Attached is a time series zoom showing the zero crossing of the DuoTone signal.
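For completeness, the delay budget from the two comments above written out (plain arithmetic; the 7.2 usec zero-crossing and 6.7 usec design lag are the numbers quoted above):

    # Sketch: residual ADC-side delay implied by the DuoTone measurements above.
    dt_phase    = 2.52 / (360.0 * 960.0)   # ~7.3e-6 s, from the measured DuoTone phase
    dt_crossing = 7.2e-6                   # s, from the time-series zero-crossing
    dt_design   = 6.7e-6                   # s, designed DuoTone lag behind the GPS second (T1500513)

    print(dt_phase - dt_design)            # ~0.6e-6 s
    print(dt_crossing - dt_design)         # ~0.5e-6 s: the ADC chain has essentially no delay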
When using dtt to make a transfer function measurement between an IOP model and a user model, one has to keep in mind that dtt does another decimation silently. This is due to dtt trying to match the number of data points between two models. Fortunately, this does not seem to affect the phase, see my note at https://dcc.ligo.org/T1600454.
Updated the timing diagram for consistency with other timing measurements (LHO aLOG 30965). See attached PDF to this comment.