(Joe, Gerardo and Peter)
After Peter blocked the beam inside the PSL enclosure, we removed the viewport at the BF1 location, then cleaned and inspected it.
We found some small scratches on the outer surface and some particulate on the inner surface; the particulate was removed without a problem, and there are no scratches on the inner surface.
The scratches were measured and probed; the results were good, and the scratches are not deep enough to concern us.
The assembly was re-installed and clocked such that the scratches are not in the path of the beam.
Removed PEM AA chassis D1001421 (SN S1300102) to troubleshoot output offset on one of its channels. Replaced U1 for CH24 on the aLIGO AA BNC Interface (x10 Gain) D1000487 SN S1300072. Unit has been reinstalled and powered on.
Peter, Alexa
We measured the digital delay from the IR TR PDs at the end station to the CARM slow path at the corner to be ~50 deg at 300 Hz. This delay will include an end station model delay and two LSC delays. We have two LSC delays because TR CARM is in the LSC model, as well as the digital slow path after the IFO common mode board. Our measurement is reasonable given Chris's delay plot (LLO#15933).
We turned off the LSC-X_TR_A_LF_OUT (IR TR PD at END X) input and sent in an excitation with an amplitude of 100 cts. We turned off any filters along the TR CARM path (i.e. LSC-TR_CARM --> LSC-REFLBIAS --> ALS-C_REFL_DC_BIAS path), modulo some gains, and measured the transfer function at LSC_REFL_SERVO_SLOW_OUT. See LHO#15489 for the path.
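As a quick sanity check on that number (my arithmetic, not part of the measurement): a pure time delay tau gives a phase lag of 360*f*tau degrees, so the snippet below backs out the equivalent delay and, assuming a 16384 Hz front-end model rate, the rough number of model cycles it corresponds to.

    # Sanity check: convert the measured phase lag into an equivalent time delay.
    # The 16384 Hz model rate is an assumption used only for the cycle count.
    f_meas = 300.0                        # Hz, measurement frequency
    phase_deg = 50.0                      # measured phase lag, degrees
    tau = phase_deg / (360.0 * f_meas)    # seconds, ~4.6e-4 s
    cycles = tau * 16384.0                # ~7.6 cycles at 16384 Hz
    print("tau = %.0f us, ~%.1f model cycles at 16384 Hz" % (tau * 1e6, cycles))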
So the above measurements only account for some of the phase delay we see in the CARM TF. We realized that there was room for improvement in the compensation filter for the transmission signals. We made another filter for LSC-TR_CARM (FM8, 35:3000) that gives us some phase back. With respect to the one currently used (FM9, 35:1000^2), we removed one of the poles @ 1 kHz and moved the other 1 kHz pole to 3 kHz. At 200 Hz we get back about 20 degrees of phase. We will test this filter as soon as we can (another earthquake now..).
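As a cross-check of the ~20 degrees quoted above, here is a sketch with scipy (not the foton/CDS implementation; filter gains are omitted since they do not affect the phase) comparing the analog zero/pole approximations of the two filters:

    import numpy as np
    import scipy.signal as sig

    def zpk_hz(zeros_hz, poles_hz):
        # Build a continuous-time ZPK system from zero/pole frequencies in Hz.
        z = [-2 * np.pi * f for f in zeros_hz]
        p = [-2 * np.pi * f for f in poles_hz]
        return sig.ZerosPolesGain(z, p, 1.0)

    f0 = 200.0                 # Hz, frequency of interest
    w0 = 2 * np.pi * f0
    for name, filt in [("FM9 35:1000^2", zpk_hz([35], [1000, 1000])),
                       ("FM8 35:3000  ", zpk_hz([35], [3000]))]:
        _, h = sig.freqresp(filt, [w0])
        print("%s : phase at 200 Hz = %5.1f deg" % (name, np.degrees(np.angle(h[0]))))
    # FM9 comes out near +57 deg and FM8 near +76 deg, i.e. roughly 19 deg more
    # phase with FM8, consistent with the ~20 degrees quoted above.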
We tested FM8, and this was indeed a good change. Attached is the sqrt(TRX+TRY) CARM OLTF with FM9 (green trace) and FM8 (brown trace). As designed, we get more phase with FM8 on. This is now implemented in the guardian.
Note: ignore the peak at about 50 Hz ... when we made this measurement we were farther away from resonance than when we normally transition to sqrt(TRX+TRY), so our zero at 35 Hz was not properly compensating for the cavity pole. This goes away and flattens as we reduce the offset.
Shut down dust monitor #2 at End-Y. This dust monitor is used to monitor the BSC10 cleanroom during chamber opening; as the chamber is currently closed, this monitor is not needed. Dust monitor #1 is running, monitoring the general room air of the VEA. Alarms on this monitor should be investigated as per normal procedures. With DM #2 powered down, the network connection with DM #1 dropped, and DM #1 would continually flip between "Sampling" and "Count Interrupted". After swapping cables, power cycling the controller, the dust monitor, and the weather station, and restarting the various bits of software, we still had no communications with DM #1. Patrick pointed out there is a pin swap between the controller and the dust monitor. After installing the correct cable, the monitor is up and running correctly. I will adjust the alarm levels to reflect general monitoring levels.
Powered down dust monitor #2 at End-X. This monitor is used for monitoring in the BSC9 cleanroom during chamber opening. With the door on, there is no need to run this monitor. Dust monitor #1 is running, monitoring the general room air of the VEA. Alarms on this monitor should be investigated as per normal procedures.
Kyle, Gerardo Used Main crane to "fly" pump cart into Beer Garden -> Connected to and pumped BSC3 annulus for a few hours -> Controller current never came on scale -> Used Genie Snorkel lift to access and swap out controller -> No help, current off scale (high) -> Pump was replaced ~5-8 weeks before problem -> squirted alcohol on conflat joints while Gerardo monitored pump cart pressure gauges -> no response -> Out of time for today - Isolated pumps from annulus pump port and shut down pumps -> To be continued
I took advantage of the resonant green light this morning and adjusted the camera focus. However, I had a little trouble focusing the End Y camera. Let me know if it is not good enough and I will try to refocus it.
The HAM-ISI MEDM screens were updated to show an orange border when any of the sensor/actuator saturation counters register one or more saturations.
More details about this update can be found in SEI aLog #692.
Work was covered by WP #5062, which can now be closed.
I attempted to replace the EPICS gateway machine today as part of maintenance and WP5053, but failed miserably. The old machine/configuration is now back in place while I figure out why the new machine does not work as expected. There will most likely be a gap for the vacuum data in the DAQ from appx. 11:40AM local time to 12:45PM (or so), since the DAQ still requires the gateway to get channels in other subnets. I need to check with Dave that the EDCU actually reconnected to the vacuum channels.
J. Kissel

In prep for maintenance activities on the SEI front ends, folks are turning off the SEI system. DARM inputs don't make any sense without a locked IFO, and the optical levers are swinging in and out of range of the QPD (simply because the SEI systems aren't isolated). As such I've done the following:

ETMX:
- turned off optical lever damping (by simply turning OFF the OUTPUT switch)
- turned off the M0 / Top mass DARM ERR damping (by ramping the gain to 0.0 -- it was 0.01)

ETMY:
- turned off optical lever damping (by simply turning OFF the OUTPUT switch)
- turned off the DARM violin mode damping (by turning OFF the INPUT and ramping the gain to 0.0 -- the MODE3 bank was ON with a gain of 1.0)
- zeroed the MODE 1 to Pitch (H1:SUS-ETMY_L2_DAMP_MODE_MTRX_2_1) matrix element
The summary page with all the coherences is available at this address:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1107840506/
I used data from the end of the lock stretch, starting from GPS time 1107840506 and using 600 seconds of data.
The main thing to notice is that DARM is coherent with MICH, PRCL and SRCL over almost all the frequencies, see the first attached plot. This is suspicious, and I'm wondering if there is something wrong with the data I'm using...
There is coherence with EY magnetometers at about 74 Hz, I guess this is the ring heater line.
Yup, this is the right data -- we saw the same coherences using DTT.
There is also coherence with the ISS second loop PDs (the loop was open during this lock).
Yes, I confirm coherence with the ISS second loop as well. The shape of the coherence and the projected noise are very similar. So my guess is that the DARM signal was actually limited almost everywhere by intensity noise. But maybe you guys already knew that...
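For anyone who wants to reproduce a single coherence by hand, here is a minimal sketch using gwpy rather than the bruco code; the channel names are examples I am assuming, not necessarily the ones used for the summary page.

    from gwpy.timeseries import TimeSeries

    start = 1107840506          # GPS start used above
    duration = 600              # seconds
    # Example channel names -- substitute the DARM and witness channels of interest.
    darm = TimeSeries.get('H1:LSC-DARM_IN1_DQ', start, start + duration)
    srcl = TimeSeries.get('H1:LSC-SRCL_OUT_DQ', start, start + duration)

    # Coherence with 4 s FFTs and 50% overlap
    coh = darm.coherence(srcl, fftlength=4, overlap=2)
    coh.plot().show()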
Restart finished at 0902pst 17Feb. We'll run this way for an hour or so.
Had stopped running sometime between 1:00 and 2:00 AM last Friday.
The guardian upgrade has started. All nodes will be moved to the latest guardian and cdsutils releases and restarted. Will start with all SUS and SEI nodes, then move to ISC.
These versions may get a couple more bumps over the next couple of days as minor cleanup is done.
A more complete log describing all changes will follow later today.
I've taken a look at DAC MCT glitches as requested by Dan in alog 16707.
After running through our usual process to check for DAC glitches coupling into DARM, none of the SUS drive signals seem to be statistically correlated to glitches in DARM when crossing zero or +/- 2^16. I wish I had a smoking gun like we've been able to find with this method in the past, but we've had no such luck.
I wanted to make sure that the triggers indicating these zero-crossings were being generated properly, so I followed up a number of them by hand. They did seem to be reporting the correct times, so it doesn't look like this non-result is a bug in the trigger generation process. I've also generated omega scans for a few times when the ETMX ESD drive signals were crossing -2^16 and didn't see any glitches that looked like the transients we'd normally expect from DAC MCT glitches. This makes me think that the DAC glitches are hiding under the current noise floor (at least in the sense that Omicron doesn't seem to witness them) and will start becoming a more problematic noise source as sensitivity increases.
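For reference, here is a rough sketch of the kind of crossing check described above (not the actual trigger-generation code): given a SUS drive timeseries in DAC counts, find the sample times at which it crosses zero or +/-2^16.

    import numpy as np

    def crossing_times(t, drive, levels=(0, 2**16, -2**16)):
        # Return, for each level, the times at which `drive` crosses that level.
        # `t` and `drive` are equal-length arrays (time in s, drive in DAC counts).
        times = {}
        for level in levels:
            s = np.sign(drive - level)
            idx = np.where(np.diff(s) != 0)[0]   # sign change => crossing
            times[level] = t[idx]
        return times

These crossing times can then be compared against Omicron/omega-scan glitch times in DARM.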
Is there any way to know when the calibration was last run for the 18-bit DACs? I really would've expected these crossings to cause fairly loud glitches from past experience.
I've attached a normalized spectrogram showing 5 minutes of data from the Feb 13th lock, and it looks like there are two separate families of glitches that we can see by eye: one populating the frequency band from ~80 Hz to 300 Hz, and one populating the band from ~40 Hz to 100 Hz (which we initially thought might be DAC glitches). I've set some tools running to see if we can identify any of these, and we're also doing some by-hand followup of the louder glitches that are likely to show up in omega scans of auxiliary channels.
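In case it is useful, here is one way to produce such a median-normalized spectrogram (a sketch with scipy, not the exact tool used for the attachment; the sample rate and the DARM data array are assumptions):

    import numpy as np
    from scipy.signal import spectrogram

    def normalized_spectrogram(darm, fs=16384.0, stride=1.0):
        # Spectrogram with each frequency bin divided by its median over time,
        # so transient excesses (glitches) show up as ratios above 1.
        nperseg = int(stride * fs)
        f, t, Sxx = spectrogram(darm, fs=fs, nperseg=nperseg,
                                noverlap=nperseg // 2)
        return f, t, Sxx / np.median(Sxx, axis=1, keepdims=True)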
RCG 2.9, which came with the band-aid, three-quarters-of-the-way fix to the 18-bit DAC Major Carry Transitions (see e.g. G1401269) that we believe lasts for ~1 month (see LHO aLOG 16376), was installed on January 13th 2015 -- see LHO aLOG 16060. This was the last time all IOP models were systematically restarted (the calibration is now performed automatically, but only when a front end's IOP model is restarted -- see T1400570 and Bugzilla entry 732). Good to hear we have some evidence that the calibration seems to last a little longer at LHO!
Thanks TJ!
The glitches in the spectrogram seem to agree with the breathing that we saw in the control room. With the DAC glitches ruled out, we suspect that this noise is related to input beam jitter, since there is some coherence with the IMC WFS signals. Later in the same lock we were able to reduce the noise by changing the BS alignment, so there is likely some bilinear coupling that changes over time. We'll try to get some longer lock stretches for glitch investigations.