New NDS2 client software has been installed on the DAQ test stand. The new version is nds-client-0.11.3, and is the default version when you log in. To facilitate the installation, a new version of SWIG (2.0.4) was installed on x1work, and a path to jdk1.7.0_60 was added to the PATH environment variable.
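As a quick smoke test of the new client, data can be fetched through the SWIG-built Python bindings. This is a minimal sketch only; the server name, port, GPS times, and channel below are illustrative, not verified:

    import nds2   # SWIG-built Python bindings of the nds2 client

    # Server, port, GPS times, and channel name are examples only.
    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    bufs = conn.fetch(1107840506, 1107840516, ['H1:LSC-DARM_IN1_DQ'])
    data = bufs[0].data   # numpy array of the fetched samples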
Today was an extended maintenance day, 8am to 5pm.
1. Removal of RFM Cards From End Station Seismic Front End Computers
Cyrus, Jim, Dave: WP5051
The RFM 5565 PCIe cards were removed from the front end computers h1seiex and h1seiey. The procedure was:
The RFM Switch was reconfigured. The port assignments are as follows (* = changed):

port | before | after
-----+--------+--------
  0  | ISC    | ISC
  1  | SEI    | SUS     *
  2  | SUS    | empty   *
  3  | empty  | empty
  4  | empty  | empty
  5  | empty  | empty
  6  | empty  | empty
  7  | UPLINK | UPLINK
When the seismic front end computers were powered back on, they glitched all the other computers in the fabric (both end stations). We restarted all the user models on h1susex, h1susey, h1iscex and h1iscey.
This closes WP5051
2. Testing New EPICS Gateways
Cyrus: WP5053
Please see Cyrus's alogs.
3. Restart of h1ecatx1
Daniel, Dave:
It was time for h1ecatx1's fortnightly freeze, which it faithfully did at 10am. We rebooted h1ecatx1 and all came back up automatically.
4. Reboot of Guardian Machine
Jamie, Dave:
The h1guardian0 machine was rebooted to test autostartup after reboot and change the NFS client mount options to turn off file attribute caching. Please refer to Jamie's logs for details.
5. Loading Filter Module Files
The following models had partially loaded filter module files. I performed a full COEFF load on: h1isiitmy, h1isiham4, h1isibs, h1isiitmx, h1suspr3, h1susim, h1sussr3, h1susomc, h1susbs, h1lsc, h1asc, h1scimc, h1sushtts
The model h1calcs reported a modified filter file, but this was a latched condition and the filter file was actually identical to the running configuration. I pressed the COEFF load button to remove the warning.
The model h1tcscs is reporting a modified filter file; it has new filters for ITMY_CO2_CHILLER_SERVO_GAIN. I have emailed Aidan and Alastair for guidance.
6. Models With Local Modifications Pending SVN Commit
Dave:
Attached is the list of FE code which has local modifications relative to the SVN repository.
7. Guardian Nodes/MEDM Check
Jamie, Dave:
Nodes which exist but are not on the OVERVIEW MEDM (temporary nodes): TIDAL_HACK.
Nodes on the OVERVIEW MEDM which don't exist (placeholders): IFO, PSL
All Op Lever whitening chassis in the corner station have been modified/verified to have the appropriate insulating film on the -15V regulator: S1101555, S1101537, S1101541, S1101547, S1101550, S1101551.
Per bug report 664, verified: (1) the 5V regulator heatsink is installed; (2) the code on the unit is version 3. Unit D1001370, SN S1201886.
A few spectra regarding the BRS at EX. The red traces are from today at about 11:00am local (when I'm pretty sure people were working at EX; I wanted something noisy), the blue traces are from ~3:00 last night (quiet), and the green traces are from the night of the 10th of this month, after Krishna did his check-up and recuperation of the BRS (alog 16434), though the peak at 8 mHz suggests the BRS was rung up. Solid traces are the STS, dashed lines are the BRS signal for each measurement (see alog 14047 regarding calibrations for these spectra). Krishna was worried that the BRS software may have crashed (as it does every ~2 weeks), but no one has had time to check it out in person today.
(Joe, Gerardo and Peter)
After Peter blocked the beam inside the PSL enclosure, we removed the viewport at the BF1 location, then cleaned and inspected it.
We found some small scratches on the outer surface and some particulate on the inside surface; the particulate was removed without a problem, and there were no scratches on the inside.
The scratches were measured and probed; the results were good, and the scratches are not deep enough to concern us.
The assembly was re-installed and clocked such that the scratches are not in the path of the beam.
Removed PEM AA chassis D1001421 (SN S1300102) to troubleshoot output offset on one of its channels. Replaced U1 for CH24 on the aLIGO AA BNC Interface (x10 Gain) D1000487 SN S1300072. Unit has been reinstalled and powered on.
Peter, Alexa
We measured the digital delay from the IR TR PDs at the end station to the CARM slow path at the corner to be ~50 deg at 300 Hz. This delay will include an end station model delay and two LSC delays. We have two LSC delays because TR CARM is in the LSC model, as well as the digital slow path after the IFO common mode board. Our measurement is reasonable given Chris's delay plot (LLO#15933).
We turned off the LSC-X_TR_A_LF_OUT (IR TR PD at END X) input and sent in an excitation with an amplitude of 100 cts. We turned off any filters along the TR CARM path (i.e. LSC-TR_CARM --> LSC-REFLBIAS --> ALS-C_REFL_DC_BIAS path), modulo some gains, and measured the transfer function at LSC_REFL_SERVO_SLOW_OUT. See LHO#15489 for the path.
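For reference, a pure time delay tau shows up as a phase lag of 360*f*tau degrees, so the ~50 deg measured at 300 Hz corresponds to roughly 460 microseconds of equivalent delay. A minimal sketch of the conversion, using only the numbers quoted above:

    # Convert a measured phase lag to an equivalent pure time delay.
    phase_deg = 50.0     # measured phase lag (degrees)
    freq_hz = 300.0      # measurement frequency (Hz)
    delay_s = phase_deg / (360.0 * freq_hz)
    print('%.0f microseconds' % (delay_s * 1e6))   # ~463 us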
So the above measurements only account for some of the phase delay we see in the CARM TF. We realized that there was room for improvement in the compensation filter for the transmission signals. We made another filter for LSC-TR_CARM (FM8, 35:3000) that gives us some phase back. With respect to the one currently used (FM9, 35:1000^2), we removed one of the poles at 1 kHz and moved the other 1 kHz pole to 3 kHz. At 200 Hz we get back about 20 degrees of phase. We will test this filter as soon as we can (another earthquake now...).
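As a sanity check of the quoted numbers, the phase of the two filters can be compared at 200 Hz. This is a minimal sketch assuming the usual zpk shorthand (FM9 = zero at 35 Hz, double pole at 1 kHz; FM8 = zero at 35 Hz, pole at 3 kHz) and ignoring overall gain, which does not affect the phase:

    import numpy as np
    from scipy import signal

    def phase_deg(zeros_hz, poles_hz, f_hz=200.0):
        # Phase of an analog zpk filter at f_hz (degrees).
        z = [-2 * np.pi * fz for fz in zeros_hz]
        p = [-2 * np.pi * fp for fp in poles_hz]
        _, h = signal.freqs_zpk(z, p, 1.0, worN=2 * np.pi * np.array([f_hz]))
        return np.degrees(np.angle(h))[0]

    fm9 = phase_deg([35], [1000, 1000])   # old compensation: 35:1000^2
    fm8 = phase_deg([35], [3000])         # new compensation: 35:3000
    print(fm9, fm8, fm8 - fm9)            # roughly +57, +76, ~19 deg gained

The ~19 degrees of extra phase at 200 Hz is consistent with the ~20 degrees quoted above.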
We tested FM8, and this was indeed a good change. Attached is the sqrt(TRX+TRY) CARM OLTF with FM9 (green trace) and FM8 (brown trace). As designed, we get more phase with FM8 on. This is now implemented in the guardian.
Note: ignore the peak at about 50 Hz ... when we made this measurement we were farther away from resonance than when we normally transition to sqrt(TRX+TRY), so our zero at 35 Hz was not properly compensating for the cavity pole. This goes away and flattens as we reduce the offset.
Shut down dust monitor #2 at End-Y. This dust monitor is used to monitor the BSC10 cleanroom during chamber opening; as the chamber is currently closed, this monitor is not needed. Dust monitor #1 is running, monitoring the general room air of the VEA. Alarms on this monitor should be investigated as per normal procedures. With DM #2 powered down, the network connection with DM #1 dropped: DM #1 would continually flip between "Sampling" and "Count Interrupted". After swapping cables, power cycling the controller, the dust monitor, and the weather station, and restarting the various bits of software, we still had no communications with DM #1. Patrick pointed out there is a pin swap between the controller and the dust monitor. After installing the correct cable, the monitor is up and running correctly. I will adjust the alarm levels to reflect general monitoring levels.
Powered down dust monitor #2 at End-X. This monitor is used for monitoring the BSC9 cleanroom during chamber opening; with the door on, there is no need to run it. Dust monitor #1 is running, monitoring the general room air of the VEA. Alarms on this monitor should be investigated as per normal procedures.
Kyle, Gerardo Used Main crane to "fly" pump cart into Beer Garden -> Connected to and pumped BSC3 annulus for a few hours -> Controller current never came on scale -> Used Genie Snorkel lift to access and swap out controller -> No help, current off scale (high) -> Pump was replaced ~5-8 weeks before problem -> squirted alcohol on conflat joints while Gerardo monitored pump cart pressure gauges -> no response -> Out of time for today - Isolated pumps from annulus pump port and shut down pumps -> To be continued
I took advantage of the resonant green light this morning and adjusted the camera focus. However, I had a little trouble focusing the End Y camera. Let me know if it is not good enough and I will try to refocus it.
The HAM-ISI MEDM screens were updated to show an orange border when any of the sensor/actuator saturation counters registers one or more saturations.
More details about this update can be found in SEI aLog #692.
Work was covered by WP #5062, which can now be closed.
I attempted to replace the EPICS gateway machine today as part of maintenance and WP5053, but failed miserably. The old machine/configuration is now back in place while I figure out why the new machine does not work as expected. There will most likely be a gap in the vacuum data in the DAQ from approximately 11:40 AM local time to 12:45 PM (or so), since the DAQ still requires the gateway to get channels in other subnets. I need to check with Dave that the EDCU actually reconnected to the vacuum channels.
The summary page with all the coherences is available at this address:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1107840506/
I used data from the end of the lock stretch, starting from GPS time 1107840506 and using 600 seconds of data.
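For reference, any single channel pair of the kind bruco scans can be cross-checked by hand. A minimal sketch, assuming the two time series have already been fetched (e.g. over NDS2) into numpy arrays at a common sample rate fs; the variable names are placeholders:

    import numpy as np
    from scipy.signal import coherence

    def check_coherence(darm, aux, fs):
        # darm, aux: numpy arrays covering the same stretch of data,
        # resampled to a common rate fs (Hz).
        f, coh = coherence(darm, aux, fs=fs, nperseg=int(fs))  # ~1 Hz bins
        return f, coh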
The main thing to notice is that DARM is coherent with MICH, PRCL and SRCL over almost all the frequencies, see the first attached plot. This is suspicious, and I'm wondering if there is something wrong with the data I'm using...
There is coherence with the EY magnetometers at about 74 Hz; I guess this is the ring heater line.
Yup, this is the right data -- we saw the same coherences using DTT.
There is also coherence with the ISS second loop PDs (the loop was open during this lock).
Yes, I confirm coherence with the ISS second loop as well. The shape of the coherence and the projected noise are very similar. So my guess is that the DARM signal was actually limited almost everywhere by intensity noise. But maybe you guys already knew that...
The guardian upgrade has started. All nodes will be moved to the latest guardian and cdsutils releases and restarted. Will start with all SUS and SEI nodes, then move to ISC.
These versions may get a couple more bumps over the next couple of days as minor cleanup is done.
A more complete log describing all changes will follow later today.
I've taken a look at DAC MCT glitches as requested by Dan in alog 16707.
After running through our usual process to check for DAC glitches coupling into DARM, none of the SUS drive signals seem to be statistically correlated to glitches in DARM when crossing zero or +/- 2^16. I wish I had a smoking gun like we've been able to find with this method in the past, but we've had no such luck.
I wanted to make sure that the triggers indicating these zero-crossings were being generated properly, so I followed up a number of them by hand. They did seem to be reporting the correct times, so it doesn't look like this non-result is a bug in the trigger generation process. I've also generated omega scans for a few times when the ETMX ESD drive signals were crossing -2^16 and didn't see any glitches that looked like the transients we'd normally expect from DAC MCT glitches. This makes me think that the DAC glitches are hiding under the current noise floor (at least in the sense that Omicron doesn't seem to witness them) and will start becoming a more problematic noise source as sensitivity increases.
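For context, the zero and +/-2^16 crossing triggers referred to above amount to flagging samples where the drive signal passes through those DAC-count levels. A minimal sketch of that detection (not the actual trigger-generation code), assuming the drive has already been fetched as a numpy array in DAC counts:

    import numpy as np

    def mct_crossing_times(drive, fs, t0, levels=(0, 2**16, -2**16)):
        # Return approximate GPS times at which the drive signal crosses
        # the given DAC-count levels (candidate major-carry-transition times).
        times = []
        for level in levels:
            s = np.sign(drive - level)
            idx = np.where(np.diff(s) != 0)[0]   # sign change between samples
            times.extend(t0 + idx / float(fs))
        return np.sort(np.array(times))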
Is there any way to know when the calibration was last run for the 18-bit DACs? I really would've expected these crossings to cause fairly loud glitches from past experience.
I've attached a normalized spectrogram showing 5 minutes of data from the Feb 13th lock, and it looks like there are two separate families of glitches that we can see by eye: one populating the frequency band from ~80 Hz to 300 Hz and one populating the band from ~40 Hz to 100 Hz (which we initially thought might be DAC glitches). I've set some tools running to see if we can identify any of these, and we're also doing some by-hand followup of the louder glitches that are likely to show up in omega scans of auxiliary channels.
RCG 2.9, which came with the band-aid, three-quarters-of-the-way fix to the 18-bit DAC Major Carry Transitions (see e.g. G1401269) that we believe lasts for ~1 month (see LHO aLOG 16376), was installed on January 13th 2015 -- see LHO aLOG 16060. That was the last time all IOP models were systematically restarted (the calibration is now performed automatically, but only when a front end's IOP model is restarted -- see T1400570 and Bugzilla entry 732). Good to hear we have some evidence that the calibration seems to last a little longer at LHO!
Thanks TJ!
The glitches in the spectrogram seem to agree with the breathing that we saw in the control room. With the DAC glitches ruled out, we suspect that this noise is related to input beam jitter; there is some coherence with the IMC WFS signals. Later in the same lock we were able to reduce the noise by changing the BS alignment, so there is likely some bilinear coupling that changes over time. We'll try to get some longer lock stretches for glitch investigations.