M. Pirello, E. Castrellon, F. Clara, R. McCarthy
Per FRS Ticket 6059 and Work Permit 6284, we investigated and repaired the bad channels and restored the unit to operation.
We initially suspected the 5V linear voltage regulator in Slow Controls Concentrator #2 (S1103451) and pulled it for inspection, but found that its outputs were working correctly. We replaced the suspect regulator anyway and added a heat sink to help dissipate heat from it.
Upon reinstallation, most of the bad channels were working; only one set of related channels was not. We traced this failure to a bent pin on EtherCAT Corner Station Chassis #3 (S1107447). This pin corresponds to channel DIV40M, which Daniel says had not been working since its installation. The Beckhoff module associated with these signals was behaving poorly, acting as if it had an internal short. We pulled Chassis #3 and replaced Beckhoff module #11 on panel #9 with a brand new E1124 Digital IO Module.
After installation, other problems with this chassis surfaced, so we replaced one of the computer modules; after much testing we were satisfied that the chassis would work once installed. I have attached an image showing all blocks green as seen from the CER.
We also need to move one EDP200 beam tube roughing pump from the CS Mechanical Room to Y-mid (part of a mid-term effort, over the next 6 months, to establish an emergency beam tube rough-pumping option).
WP#6255 and WP#6293 completed.
0930 hrs. local -> Valved-in the RGA turbo to the RGA volume and energized the filament.
1130 hrs. local -> Took scans of the RGA volume with and without cal-gases -> isolated the RGA turbo from the RGA volume -> combined the RGA volume with the X-end volume and took scans of the X-end with and without calibration gases (inadvertently dumped ~5 x 10^-4 torr*L of Krypton, i.e. ~2 hrs of accumulation @ 5 x 10^-8 torr*L/sec, into the site) -> vented the RGA turbo and removed it from the RGA hardware -> installed a 1 1/2" UHV valve in its place -> pumped the volume between the two 1 1/2" valves into the 10^-4 torr range before decoupling and de-energizing all pumps, controllers and noise sources, with the exception of the RGA electronics, which was left energized with its fan running 24/7.
Leaving the RGA exposed to the X-end, with the filament off and the cal-gases isolated. Will post scan data as a comment to this entry within the next 24 hrs.
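For reference, a back-of-the-envelope sketch of the accumulation arithmetic behind the inadvertent Krypton dump quoted above; the rate and duration are the values from this entry, and this is only an order-of-magnitude check, not the actual cal-gas bookkeeping.

```python
# Rough gas-load accumulation estimate: Q = leak_rate * time.
leak_rate = 5e-8          # torr*L/s, Krypton cal-gas rate quoted above
duration_hr = 2.0         # hours of accumulation quoted above
accumulated = leak_rate * duration_hr * 3600.0   # torr*L
# Prints ~3.6e-04 torr*L, the same order as the ~5e-4 torr*L quoted in the entry.
print(f"~{accumulated:.1e} torr*L accumulated")
```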
Here are the scans from yesterday. Note the presence of amu 7, obviously "sourced" from the N2 cal-gas bottle. I will need to revisit the observation that amu 7 appears when the cal-gas isolation valve used with Vacuum Bake Oven C is closed, and its baffling disappearance when the isolation valve is opened.
I have updated the SEI_CONF configuration table to more accurately reflect our experience with higher microseism. The extremes of this table (i.e. any version of very high wind and/or microseism) are still being explored as we roll into winter, but so far the nominal "WINDY" state has been sufficient up to 1+ micron/s RMS microseism and 40 mph winds. I have also made a few of the states in SEI_CONF not "requestable", mostly states that had "microseism" in the name. These states are all versions of our high-microseism configuration from O1, which only worked in low winds. They are still available, but you will have to hit the "all" button on SEI_CONF; they no longer show up in the top-level drop-down.
We also might be close to getting some EPICS earthquake notifications, so that information might be included on this screen in the future.
Operators should reference Jeff's alog from yesterday (31029) and my alog 30848 when making decisions about the seismic configuration.
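Regarding the non-requestable SEI_CONF states mentioned above, here is a minimal sketch of how a Guardian state is typically hidden from the request list, assuming the standard guardian GuardState interface; the state name and body are hypothetical, the request attribute is the only point.

```python
# Hypothetical SEI_CONF-style Guardian state, shown only to illustrate
# the non-requestable pattern; the real SEI_CONF states differ.
from guardian import GuardState

class HIGH_MICROSEISM_LOW_WIND(GuardState):
    # Hide the state from the requestable list; it still exists and can
    # be reached via the "all" button or graph edges.
    request = False

    def main(self):
        # A real state would switch blend filters, gains, etc. here.
        return True
```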
Temporarily installed a 2 in. diameter, 45 degree thin film polariser in the output of the pre-modecleaner. Measured 0.342 W reflected from the polariser with the 10A-V2-SH power meter. Measured 92.5 W transmitted through the pre-modecleaner, with the polariser removed, using the L300W-LP power meter. The power stabilisation was on for both measurements. The output polarisation is calculated to be (1 - 0.342/92.5)*100 = 99.6% linearly polarised. Using the same thin film polariser in the input beam to the pre-modecleaner, 15.0 mW was measured in reflection and 1.51 W in transmission. The power stabilisation was off for this measurement. The input polarisation is calculated to be (1 - 0.015/1.51)*100 = 99%. It's not obvious why it would be this low, other than perhaps the angle of incidence alignment being off a little - a degree of freedom we do not have - since thin film polarisers are typically somewhat sensitive to input angle. Another high power attenuator was installed in the beam between the two Picomotor-equipped mounts. We confirmed that we could re-lock the pre-modecleaner prior to turning off the high power oscillator to allow for modifications to the field box(es) by Daniel and Keita. Jason/Peter
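For reference, a quick sketch of the polarisation-fraction arithmetic used above; the numbers are the reflected/transmitted powers quoted in this entry.

```python
# Linear polarisation fraction estimated as 1 - (power rejected by the
# polariser) / (total transmitted power), as in the entry above.
def polarisation_fraction(p_reflected, p_transmitted):
    return (1.0 - p_reflected / p_transmitted) * 100.0

print(polarisation_fraction(0.342, 92.5))   # output of the PMC: ~99.6 %
print(polarisation_fraction(0.015, 1.51))   # input to the PMC:  ~99.0 %
```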
I applied a bug fix suggested by John Zweizig in the demodulation routine in the GDS pipeline that reduces error due to finite machine precision. After this, it appears that the kappas as computed by GDS, especially the cavity pole, are significantly less noisy, but still not in agreement with the SLM tool (see this aLOG for reference: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=30888).

Below is a table of mean and standard deviation values for the data taken from GDS, SLM, and the ratio GDS / SLM:

                SLM mean   SLM std    GDS mean   GDS std     ratio mean   ratio std
Re(kappa_tst)     0.8920    0.0068      0.8916    0.0056        0.9995      0.0043
Im(kappa_tst)    -0.0158    0.0039     -0.0145    0.0008882     1.0013      0.0041
Re(kappa_pu)      0.8961    0.0080      0.8958    0.0057        0.9997      0.0065
Im(kappa_pu)     -0.0050    0.0056     -0.0035    0.0013        1.0015      0.0059
kappa_c           1.1115    0.0094      1.1154    0.0072        1.0035      0.0060
f_c             354.2338    2.9305    345.6435    0.7686        0.9758      0.0084

Here are the covariance matrices and correlation coefficient matrices between SLM and GDS:

                Covariance                      Correlation
Re(kappa_tst)   1.0e-04 * [ 0.4615  0.3157 ]    [ 1.0000  0.8238 ]
                          [ 0.3157  0.3181 ]    [ 0.8238  1.0000 ]
Im(kappa_tst)   1.0e-04 * [ 0.1506  0.0007 ]    [ 1.0000 -0.0216 ]
                          [ 0.0007  0.0079 ]    [-0.0216  1.0000 ]
Re(kappa_pu)    1.0e-04 * [ 0.6387  0.3113 ]    [ 1.0000  0.6866 ]
                          [ 0.3113  0.3219 ]    [ 0.6866  1.0000 ]
Im(kappa_pu)    1.0e-04 * [ 0.3139 -0.0036 ]    [ 1.0000 -0.0490 ]
                          [-0.0036  0.0174 ]    [-0.0490  1.0000 ]
kappa_c         1.0e-04 * [ 0.8895  0.4815 ]    [ 1.0000  0.7118 ]
                          [ 0.4815  0.5144 ]    [ 0.7118  1.0000 ]
f_c                       [ 8.5876 -0.0023 ]    [ 1.0000 -0.0010 ]
                          [-0.0023  0.5908 ]    [-0.0010  1.0000 ]

Plots and histograms are attached.
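For anyone who wants to reproduce these statistics, a minimal sketch assuming the SLM and GDS kappa time series have already been exported to text files (the file names below are placeholders; the actual loading step depends on how the data were saved):

```python
import numpy as np

# slm and gds are equal-length arrays of one parameter, e.g. kappa_c,
# sampled from the SLM tool and from the GDS pipeline (placeholder files).
slm = np.loadtxt("slm_kappa_c.txt")
gds = np.loadtxt("gds_kappa_c.txt")

ratio = gds / slm
print("SLM mean/std  :", slm.mean(), slm.std())
print("GDS mean/std  :", gds.mean(), gds.std())
print("ratio mean/std:", ratio.mean(), ratio.std())

# 2x2 covariance and correlation-coefficient matrices, as tabulated above.
print("covariance:\n", np.cov(slm, gds))
print("correlation:\n", np.corrcoef(slm, gds))
```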
During the windstorm yesterday, the PCal team attempted to complete end station calibrations at both ends. The calibration for EY went off without a hitch (results to come in a separate aLog). However, while setting up for the EX calibration, I dropped the working standard from the top of the PCal pylon onto the floor of the VEA. The working standard assembly ended up in 3 pieces: the integrating sphere, one spacer piece, and the PD with the second spacer piece. Minor damage was noted, mostly to the flanges of the integrating sphere and spacer pieces where the force of the fall had pulled the set screws through the thin mating flanges. I cleaned up and reassembled the working standard assembly and completed the end station calibration. Worried that some internal damage had occurred to the PD or integrating sphere, I immediately did a ratios measurement in the PCal lab. The results showed that the calibration of the working standard had changed by ~2%, which is at the edge of our acceptable error. As a result of this accident, we are currently working to put together a new working standard assembly from PCal spares. Unfortunately this means that we will lose the calibration history of this working standard and will start fresh with a new standard. We plan to do frequent (~daily) ratios measurements of the new working standard in the PCal lab in order to establish a new calibration trend before the beginning of O2.
Opened FRS Ticket 6576 - PCal working standard damaged.
WP6289 Dave
During the model shutdown due to the h1oaf0 ADC problem, h1guardian0 was rebooted (it had been up for 28 days). Yesterday the INJ_TRANS node needed a restart to permit hardware injections, so we felt a reboot was in order.
All nodes came back automatically and correctly.
WP6287. Richard, Jim, Dave:
This morning between 10:18 and 11:20 PDT we installed a seventh ADC card into the h1oaf0 IO Chassis for PEM expansion. Unfortunately this ADC (a PMC card on a PMC-to-PCIe adapter) has a fault which prevents the front end computer from booting. In fact, with the One-Stop fiber optic cable attached, h1oaf0 did not output anything on its VGA graphics port, so not even the BIOS was shown. When the fiber was disconnected, the computer booted.
When h1oaf0 was powered up it glitched the Dolphin network, even though it had been shut down correctly. This glitched all the Dolphined front ends in the corner station, which now include the PSL. While we were removing the new ADC card from the IO Chassis, I restarted the PSL models. On the second h1oaf0 restart the PSL was again shut down. At that point we restarted all the corner station models (starting with the PSL).
We did make two changes to the h1oaf0 IO Chassis which we left in:
WP6288 Dave, Jim:
We cleanly power cycled h1iscex and its IO Chassis this morning between 11:45 and 12:06 PDT. We were not able to reproduce the slight offset on an ADC channel (see attached). Note this channel is actually only seeing ADC bit-noise, so the offset is at the microvolt level.
Sheila, Jeff, Ed
This was an attempt to fix the drop in green arm power that happened last Sunday (alog 30884).
Since it didn't work, operators will continue to see that the green power from the X arm is low.
If this can't be fixed, we can just rescale the normalization.
Moved the ISCTEY green beam shutter (Y, B) to the position that matches the ISCTEX green beam shutter. The name is defined outside of MEDM, so it still reads "green fiber shutter". Info attached.
BRSY has been continuing its slow drift off of the photodiode and was about 200-300 counts from the edge, so this morning I went to EY to try to recenter it. I think I was successful, but it will take a couple of hours to tell. Right now it's still rung up pretty badly, so we will need to wait for it to damp down on its own a bit before trying to re-engage it. For now, operators should use one of the seismic configurations that doesn't use BRSY.
Looks like BRSY is closer to center now (at ~ -3000 counts) than before, but given the current drift of ~1500 counts/week I didn't get as much margin before the next adjustment as I'd prefer. I will probably have to do this again in ~2 months.
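For reference, a rough sketch of the margin arithmetic behind the "~2 months" estimate; the drift rate is the ~1500 counts/week quoted above, but the position and edge values below are hypothetical placeholders, not measured BRSY numbers.

```python
# Hypothetical sketch: weeks of margin ~ (distance to photodiode edge) / drift rate.
drift_rate = 1500.0    # counts per week, from the comment above
position   = 3000.0    # counts, assumed post-equilibration value (placeholder)
edge       = 16000.0   # counts, hypothetical photodiode edge (placeholder)
weeks_of_margin = (edge - position) / drift_rate
print(f"~{weeks_of_margin:.0f} weeks until the next recentering")  # ~2 months for these numbers
```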
Remember, it will probably drift up because of the slow thermal equilibration over the next 1-2 days, probably ending up above 3000 counts. I think that is very good. Good job, you have mastered the BRS!
Patrick, Kiwamu,
This morning, Patrick found that the CO2Y laser was not outputting any power. In the end we could not figure out why it had shut off. The laser is now back on.
[Some more details]
I thought this was a return of the faulty behavior that we were trying to diagnose in early October (alog 30472). However, neither looking at the front panel of the laser controller nor trending the warning/alarm states showed us anything conclusive. So, no conclusion again.
When I went to the floor and checked the front panel, no red LED was found. The only unusual thing was the GATE LED, which was off. Pressing the red gate button then brought the GATE LED back to green as expected. This could be an indication that the IR sensor momentarily went into a fault state and came back to normal, leaving the laser shut off. In this scenario the IR sensor does not latch any LEDs, which is why I thought this could be it. However, looking at the trend around the time the laser went off, I did not find any alarm flags raised at all. Even if it were a fast transient in the IR sensor, I would expect to see it in the trend, so these two observations together can't support the IR sensor scenario. Another plausible scenario would be somebody accidentally hitting the gate button, resulting in no laser output.
I also went to the chiller and confirmed there was no error there - the water level was mid-way (which I topped off) - and all seemed good.
That certainly sounds like the IR sensor. Unfortunately we don't currently have an analogue readout from that channel, or a good error reporting system. We are already planning to fix this with a new version of the controller that we should have ready for the post-O2 install.
Has there been a temperature change in the LVEA recently? Also, the Y-arm laser power is a bit higher than before, but not as high as during your recent testing. I'm just wondering what else could be causing this sensor to be close to its tripping point.
Alastair, if this was due to the IR sensor, how do you explain the fact that it didn't show up in ITMY_CO2_INTRLK_RTD_OR_IR_ALRM? Is it so fast that the digital system cannot record the transient?
I don't understand that. Even if it doesn't latch the laser off, it should still show up on that channel. Is it possible that the chassis itself got a brief power glitch? If it was turned off and back on momentarily, that would also put the laser into this state.
From trends the laser tripped off around 15:52 UTC this morning. This was well before the work on h1oaf took it down.
It's very possible that the Tuesday maintenance activity involving IO chassis hardware work - which may or may not have been responsible for the Dolphin network glitch and the resulting Beckhoff issues (which lasted most of the day Tuesday) - is what caused this particular TCS laser issue. It was compounded by the later h1oaf work that day, which caused other chiller trips. The cause of this full saga is TBD...
During locking tonight, had the following ERROR for ISC_DRMI:
EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:LSC-PD_DOF_MTRX_SETTING_1_23
I tried a couple of things: I hit "LOAD", which did nothing. Then I hit "Execute", which broke the lock.
One thing I did not do was re-request the state I was in. (Nutsinee just let me know that this is what works for her when she has had "CONNECTION ERRORS".)
After double checking that all CDS systems are running, waiting a few minutes, and checking that you can do a caget on the channel in question, change the operating mode of the node with the connection error from EXEC to STOP. Wait for the node to change to a yellow background before requesting EXEC again. If the node was previously managed, then you may need to INIT the manager (if the manager is working, one way to do this is to wait for its current state to finish, if it can, then go to MANUAL, INIT, and then back to where it was).
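As an illustration of the "can you caget the channel" check, a minimal sketch using pyepics; this assumes pyepics is available in the control-room environment (the stock command-line caget works just as well).

```python
# Quick connectivity check for the channel named in the error above.
# caget() returns None if the channel cannot be connected within the timeout.
from epics import caget

value = caget("H1:LSC-PD_DOF_MTRX_SETTING_1_23", timeout=2.0)
if value is None:
    print("Channel did not connect - check the front end / IOC before touching the node")
else:
    print("Channel is reachable, value =", value)
```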
Summary: Repeating the Pcal timing signals measurements made at LHO (aLOG 28942) and LLO (aLOG 27207) with more test point channels in the 65k IOP model, we now have a more complete picture of the Pcal timing signals and where there are time delays. Bottom line: 61 usec delay from the user model (16 kHz) to the IOP model (65 kHz); no delay from the IOP model to the user model; 7.5 usec zero-order-hold delay in the DAC; and 61 usec delay in the DAC or the ADC or a combination of the two. Unfortunately, we cannot determine from these measurements which of the ADC or DAC has the delay.

Details: I turned off the nominal high frequency Pcal x-arm excitation and the CW injections for the duration of this measurement. I injected a 960 Hz sine wave, 5000 counts amplitude, in H1:CAL-PCALX_SWEPT_SINE_EXC. Then I made transfer function measurements from H1:IOP-ISC_EX_ADC_DT_OUT to H1:CAL-PCALX_DAC_FILT_DTONE_IN1, H1:IOP-ISC_EX_MADC0_TP_CH30 to H1:CAL-PCALX_DAC_NONFILT_DTONE_IN1, and H1:CAL-PCALX_SWEPT_SINE_OUT to H1:CAL-PCALX_TX_PD_VOLTS_IN1, as well as points in between (see the attached diagram and plots).

The measurements match the expectation, except for one confusing point: the transfer function from H1:IOP-ISC_EX_MADC0_TP_CH30 to H1:CAL-PCALX_DAC_NONFILT_DTONE_IN1 does not see the 7.5 usec zero-order-hold DAC delay. Why?

There is a 61 usec delay from just after the digital AI to just before the digital AA (after accounting for the known phase loss of the DAC zero-order-hold and the analog AI and AA filters). From these measurements, we cannot determine whether the delay is in the ADC or the DAC or a combination of both. For now, we have timing documentation such as LIGO-G-1501195 to suggest that there are 3 IOP clock cycles of delay in the DAC and 1 IOP clock cycle of delay at the ADC. It is important to note that there is no delay in the channels measured in the user model acquired by the ADC. In addition, the measurements show that there is a 61 usec delay when going from the user model to the IOP model.

All this being said, I'm still a little confused by various other timing measurements. See, for example, LLO aLOG 22227 and LHO aLOG 22117. I'll need a little time to digest this and try to reconcile the different results.
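As a sanity check on those numbers, a short sketch of the sample-period arithmetic, assuming the standard 16384 Hz user-model and 65536 Hz IOP rates (the "16 kHz" and "65 kHz" quoted above):

```python
# One user-model cycle and half an IOP-model cycle, in microseconds.
f_user = 16384.0   # Hz, user model rate
f_iop  = 65536.0   # Hz, IOP model rate

user_cycle_us = 1e6 / f_user        # ~61.0 usec, matching the user -> IOP delay above
zoh_delay_us  = 0.5 * 1e6 / f_iop   # ~7.6 usec, consistent with the ~7.5 usec DAC zero-order-hold delay
iop_cycle_us  = 1e6 / f_iop         # ~15.3 usec; 4 IOP cycles also come to ~61 usec

print(user_cycle_us, zoh_delay_us, iop_cycle_us)
```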
By looking at the phase of the DuoTone signals we can constrain whether there is any delay on the ADC side (like Keita's analysis here). The DuoTone signals are designed such that the two sinusoidal signals, 960 Hz and 961 Hz, will be at maximum at the start of a GPS second (and also in phase with each other). To be precise, the maximum will be delayed by 6.7 µs from the integer GPS boundary (T1500513). The phase of the 960 Hz signal at the IOP (L1:IOP-ISC_EX_ADC_DT_OUT) is -92.52 degrees with respect to the GPS integer boundary (LLO aLOG 27207). Since the DuoTone signal is supposed to be at maximum at the GPS integer boundary, i.e. it is a cosine function, this corresponds to a -2.52 degree phase change (the estimate of 92.52 assumes it is a sine function). Converting this phase change to a time delay we get 7.3 µs. Since there is an inherent 6.7 µs delay by the time the DuoTone signal reaches the ADC, we are left with only a 0.6 µs delay possibly from the ADC process (or some small systematic we haven't accounted for yet). This is what Keita's measurements were showing. Combining this measurement with the above transfer function measurements, we can say that we understand the ADC chain and that there are no time delays larger than 0.6 µs in that chain. This also suggests that the 61 µs delay we see in the ADC-DAC combination exists entirely on the DAC side.
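A short sketch mirroring the phase-to-delay arithmetic in the comment above; the -92.52 degree phase and the 6.7 µs analog DuoTone lag are the values quoted in this thread.

```python
# Convert the measured 960 Hz DuoTone phase to an apparent time delay,
# then subtract the known ~6.7 us lag of the analog DuoTone signal itself.
f_duotone = 960.0     # Hz
phase_deg = -92.52    # measured phase w.r.t. the GPS integer boundary

# The 2.52 deg excess over the 90 deg quadrature offset corresponds to the delay.
excess_deg  = abs(phase_deg) - 90.0                 # 2.52 deg
delay_us    = excess_deg / 360.0 / f_duotone * 1e6  # ~7.3 us apparent delay
residual_us = delay_us - 6.7                        # ~0.6 us left for the ADC chain
print(delay_us, residual_us)
```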
The DuoTone signals are sine waves, so a minor correction to Shivaraj's comment above: the zero-crossing corresponds to the supposed GPS integer second. I looked at a time series and observed that the zero-crossing occurs at ~7.2 usec. Since the analog DuoTone signal lags the GPS second by ~6.7 usec, I can confirm that the ADC side has essentially no delay. Thus, the 61 usec seen through the DAC-ADC loop is entirely on the DAC side. Attached is a time series zoom showing the zero crossing of the DuoTone signal.
When using DTT to make a transfer function measurement between an IOP model and a user model, one has to keep in mind that DTT silently applies an extra decimation. This is because DTT tries to match the number of data points between the two models. Fortunately, this does not seem to affect the phase; see my note at https://dcc.ligo.org/T1600454.
Updated the timing diagram for consistency with other timing measurements (LHO aLOG 30965). See attached PDF to this comment.