TITLE: 12/01 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 6 mph gusts, 4 mph 5-min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.47 μm/s
QUICK SUMMARY:
TITLE: 12/01 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: Having trouble relocking; it seems the 4.7 kHz violins are acting up
LOG:
IFO was locked when I got here. Jeff was working on tamping down second-harmonic violins. Eventually got that taken care of and got to observing. After an hour (after everyone else went home) we started getting ETMY saturations. I guess I didn't notice the 4.7 kHz modes ringing up. Haven't been able to relock since; nothing I've tried has gotten these violins to quit playing. ISC_LOCK also keeps failing to go to and come out of DOWN, so I've had to select DOWN after each lockloss and then manually go to READY. Don't know if that is a problem.
Just had a lockloss. It started with a long string of continual EY saturations; some broad "low frequency" noise (roughly 10-50 Hz) appeared in the live DARM spectra, then a lump appeared around 50 Hz. This eventually became a sharp peak at about 47 Hz that would grow for a few seconds, then settle down a little. The attached spectra show a measurement just before the lockloss (red) with the 47 Hz feature, a measurement with the low-frequency noise before the 47 Hz peak came up (brown), and a measurement from 2 hours earlier (green). 47 Hz sounds familiar, but I know not from whence.
To add: during handoff it was mentioned that the low-frequency noise had been poking up occasionally all day. It didn't get bad until after everyone else went home, but looking at the verbal log, EY saturations had been becoming more frequent over this lock stretch, though not in a noticeable way.
Looks like it might have been one of the 4.7 kHz violin modes. Red is from right before a lockloss just a couple of minutes ago, green is from the earlier lockloss, and blue is from around the time Jeff had damped this mode down. It looks like the lower-frequency mode got rung up somehow and is still pretty high.
I've made my spot position plotting scripts much simpler to use, and put them in the svn folder that holds the A2L scripts.
The script to open, modify if needed, and run is /opt/rtcds/userapps/release/isc/common/scripts/decoup/BeamPosition/Run_A2L_plotter.m
This script calls 2 sub-scripts. The first (GetA2Lspots.m) goes through the data in ..../scripts/decoup/rec_LHO/ and pulls out any measurements that have been taken since the last time it was run. It calculates the resulting beam spot positions for the test masses and saves the data. The second sub-script (PlotTestMassSpotPositions.m) is just a plotter, for which you can change some parameters (start and stop time for the time axis, etc.) in the Run_A2L_plotter script. Or you can run each script individually, using Run_A2L_plotter as an example.
I haven't yet added a few fancy options that I'd like, such as the option to plot the O1 or O2 average spot positions, but the basics are there.
I didn't check in our A2Ldata.mat file, but the scripts are all in the /opt/rtcds/userapps/release/isc/common/scripts/decoup/BeamPosition/ folder.
J. Kissel, E. Merilh, J. Warner, T. Hardwick, J. Driggers, S. Dwyer
Prompted by DetChar worries about glitching around the harmonics of violin modes, Ed, Jim, and I went on an epic campaign to damp the ~1 kHz, 2nd-harmonic violin modes. These are tricky because not all modes had been successfully damped before, and one has to alternate filters in two filter banks to hit all 8 modes for a given suspension. We've either used or updated Nutsinee's violin mode table, with the notable newly damped entries being:
994.8973 Hz  ITMY  -60 deg, + gain    MODE9:  FM2 (-60deg), FM4 (100dB), FM9 (994.87)
997.7169 Hz  ITMY    0 deg, -400 gain  MODE9:  FM4 (100dB), FM6 (997.717)  VERY slow
997.8868 Hz  ITMY    0 deg, -200 gain  MODE10: FM4 (100dB), FM6 (997.89)
Also, we inadvertently rung up modes around 4735 Hz, so we spent a LONG time fighting those. We eventually won by temporarily turning on the 4735 Hz notch in FM3 of the LSC-DARM2 filter bank and waiting a few hours. I had successfully damped the ETMY mode at 4735.09 Hz by moving the band-pass filter in FM10 of H1:SUS-ETMY_L2_DAMP_MODE9 from centered around 4735.5 Hz to centered exactly on 4735 Hz, and using positive gain with zero phase. However, there still remains a mode rung up at 4735.4 Hz from an as-of-yet unidentified test mass, and we didn't want to spend the time exploring. These 4.7 kHz lines have only appeared once before, in late October (LHO aLOG 31020). Attached is a before-vs-after ASD of DELTAL_EXTERNAL. I question the calibration, but what's important is the difference between the two traces. Pretty much all modes in this frequency band have been reduced by 2 or 3 orders of magnitude -- better than O1 levels. Hopefully these stick through the next few lock losses and acquisitions. Thanks again to the above-mentioned authors for all their help!
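The band-pass trick above (a narrow filter centered on the mode so the damping loop only acts on one resonance) can be illustrated with a sketch. The assumed 16384 Hz rate and the order-2 Butterworth design are placeholders; the actual filter lives in the foton file for H1:SUS-ETMY_L2_DAMP_MODE9 and surely differs:

```python
import numpy as np
from scipy import signal

FS = 16384  # Hz; assumed model sample rate

# A narrow band-pass centred on the 4735 Hz mode (illustrative design,
# not the actual foton filter): order-2 Butterworth, 1 Hz wide.
sos = signal.butter(2, [4734.5, 4735.5], btype="bandpass", fs=FS,
                    output="sos")

# Evaluate the response at the mode and well away from it: near-unity
# gain on the resonance, strong rejection elsewhere, so feedback with
# the right sign damps only this one mode.
freqs = np.array([4000.0, 4735.0, 5500.0])
_, h = signal.sosfreqz(sos, worN=freqs, fs=FS)
gain = np.abs(h)
```

Moving the center from 4735.5 to 4735 Hz, as described above, matters precisely because the pass band is so narrow: a half-hertz offset is enough to lose most of the gain (and phase accuracy) at the mode.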
Thanks to all for your efforts! You can really see the dramatic decrease in the glitch rate around 21:00 UTC in the attached plot. The glitch rate in the lock after you did this work (which ended around 5 UTC today) looks much more typical of what we know the glitch rate at LHO to be.
Comparing yesterday (before damping) to today, the high-frequency effect of the damping seems to be the removal of glitchy forests around 2, 3, 4, and 5 kHz (base frequency 2007.9 Hz, but wide). Great! Not sure of the mechanism producing these frequencies yet; they seem to be more than just doubles of the modes you damped. As noted above, the 4735 Hz mode is pretty large.
Attached is a spectrogram showing how the 2000 and 3000 Hz bands go away as the 1000 Hz violin modes are damped. You can also see that the bursts in these bands correspond with places where the spectrogram is 'bright' at 1000 Hz. Having two violin modes very close at 1000 Hz is like having one mode at 2000 Hz with a slow amplitude modulation. Probably that is getting turned into bursts in DARM by some non-linear process, modulated by that effective amplitude variation. The 1080 Hz band is bursting on its own time scale, and does not seem to be related.
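The "two close modes look like one mode with a slow amplitude modulation" picture can be demonstrated numerically. The frequencies here are illustrative stand-ins, and the squaring is a generic stand-in for whatever non-linear process couples into DARM:

```python
import numpy as np

fs = 16384                 # Hz, sample rate (illustrative)
t = np.arange(0, 4, 1 / fs)

# Two "violin modes" 1 Hz apart near 1 kHz -- together they beat at 1 Hz.
x = np.sin(2 * np.pi * 999.5 * t) + np.sin(2 * np.pi * 1000.5 * t)

# A quadratic non-linearity converts the beating pair into
# sum-frequency power near 2 kHz.
y = x ** 2

spec = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
band_2k = spec[(freqs > 1995) & (freqs < 2005)].max()
```

The squared signal has lines at 1999, 2000, and 2001 Hz (plus DC and the 1 Hz beat), i.e. a narrow forest at double the mode frequency whose envelope varies at the beat rate, consistent with the bursts appearing where the spectrogram is "bright" at 1000 Hz.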
Sheila D., Evan G. We investigated whether the 1080 Hz glitching seen throughout ER10 and into the start of O2 is due to jitter noise. It has been seen in various runs of Hveto that a large number of the 1080 Hz glitches are vetoed with channels like H1:IMC-PZT_PIT_OUT_DQ (see, for example, here). Note that the auxiliary channel has glitches over a broad range of frequencies that are used to veto the 1080 Hz glitches. To test whether jitter really could be a culprit, we injected a broadband excitation into IMC PZT YAW and used IMC WFS A DC segment 1 and IMC WFS B DC segment 1 as witnesses for the jitter coupling. In the witness channels, the noise goes up by a factor of 2-4. At the same time, the broad noise feature in DELTAL_EXTERNAL at 1080 Hz remains basically unchanged (see attached figure). It is possible there is some underlying jitter noise at these frequencies (because the DARM spectrum goes up a little at nearby frequencies), but the 1080 Hz feature does not seem to be primarily caused by jitter noise. We also tried excitations in IMC PZT PIT, but these had no effect on the 1080 Hz feature or the glitch level.
The free memory size on the guardian machine is about 4 GB. At the current rate of usage, we predict a reboot will be needed before next Tuesday. At the next opportune time, we will increase the memory size from 12 GB to 48 GB and perhaps schedule regular reboots on Tuesdays.
Plot of available memory for the month of November is attached (Y-axis mis-labelled, actually MB).
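The "reboot needed before next Tuesday" estimate is just a linear extrapolation of the free-memory trend; a sketch with made-up numbers of the same flavor as the November plot (the actual samples and slope differ):

```python
import numpy as np

# Free-memory samples (MB) over recent days -- illustrative, not the
# actual h1guardian0 data.
days = np.array([0, 7, 14, 21, 28], dtype=float)
free_mb = np.array([24000, 19000, 14200, 9100, 4100], dtype=float)

# Fit a line through the samples and solve free(t) = 0 for the
# predicted exhaustion day.
slope, intercept = np.polyfit(days, free_mb, 1)
day_empty = -intercept / slope
```

With a steady decline like the plot shows, the extrapolation is only a few days past the last sample, which is what drives the "before next Tuesday" prediction.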
We did similar analysis at LLO ( See LLO aLOG 30004 ). We do see increasing memory over time from the guardian process.
Does this LHO memory plot include cached memory? It would be interesting to see the amount of cache memory used along with the free memory.
The character of memory usage on the LLO guardian machine is quite different from what Dave has posted here: LLO usage seems to plateau rather than continually increase, while the LHO plot shows a very steady increase. The LHO plot looks more disturbing, as if there's a memory leak in something. Memory usage has been fairly flat when we've looked in the past, so I'm surprised to see such a high rate of increase.
I also note that something changed two Tuesdays ago, which is what we also noticed at LLO. Was there an OS upgrade on h1guardian0 on Nov. 14?
The LLO guardian script machine was rebooted 16 days ago on Nov 15 (typically done after we run an 'aptitude safe-upgrade'). The other dips are likely due to Guardian restarts for the DAQ, etc.
3:45pm local
Took 26 seconds to overfill CP3 from control room. Increased LLCV to 50% from 17%. Put it back to 17%.
I lowered the LLCV to 16% open. I think 17% was slightly too much flow, based on the exhaust pressure and periodic dips in TC temperature (and colder temperatures outside). I drove out to CP3 to inspect. Less than half the horizontal exhaust pipe is frosted. Nothing unusual.
We are having issues accessing nds2 data at the LHO nds2 server that is from the last few days. I am looking at it now. The data is available on the cluster at LHO, nds2 is just not seeing it. I'm updating the frame source lists on the server and will restart it when that is done, hopefully that will clear up the problems. ETA less than an hour.
This will take a little longer. After some digging, it turns out the NDS2 server was looking at an old frame cache file (the frame cache maps from frame type and time to frame file locations in LDAS). I'm working with Dan Moraru now to make sure we are using a good source.
Recent data can once again be fetched from nds2 @LHO.
nds2_channel_source -a -n nds.ligo-wa.caltech.edu H1:DAQ-DC0_GPS
{...
H-H1_R:1163880064-1164408128 H-H1_R:1164408320-1164578304}
So up to 1164578304 is available in LDAS right now via nds2 at LHO.
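That availability statement comes from reading the frame spans in the nds2_channel_source output above; a small parser sketch (span syntax as shown above):

```python
def latest_available_gps(source_list):
    """Return the largest end GPS time from spans like
    'H-H1_R:1163880064-1164408128', the format printed by
    nds2_channel_source above."""
    ends = []
    for token in source_list.split():
        _, span = token.split(":", 1)       # drop the frame-type prefix
        start, end = span.split("-")        # GPS start/end of the span
        ends.append(int(end))
    return max(ends)


spans = "H-H1_R:1163880064-1164408128 H-H1_R:1164408320-1164578304"
```

Note the two spans are not contiguous (1164408128 vs 1164408320), so a fetch across that boundary would still hit a small gap.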
h1cam17 was reported down this morning by the WhatsUp monitoring system. We will leave the camera down, since we are in Observation, until it is needed; if someone has an imperative need to use it, let cdsadmin know and we will reboot the camera remotely.
From: cds-alerts@LIGO.ORG
Subject: Device is Down (h1cam17.cds.ligo-wa.caltech.edu)
Date: November 30, 2016 at 9:33:38 AM PST
To: carlos.perez@ligo.org
Ping is Down on Device: h1cam17.cds.ligo-wa.caltech.edu (10.106.0.37).
Details: Monitors that are down include: Ping
Monitors that are up include:
Notes on this device (from device property page): This device was scanned by discovery on 10/6/2016 10:12:15 AM.
This mail was sent on November 30, 2016 at 09:33:32 AM
Ipswitch WhatsUp Gold
Carlos, thank you for the report. The camera was removed this Tuesday (31962). Richard will re-install it in a different place for monitoring the SRM cage at some point.
Day Shift: 16:00-00:00 UTC (08:00-16:00 PST)
With pomp & circumstance, O2 started with H1 down (& quickly on its way up) & a livestream of the start of O2 going on in the Control Room. Cheryl took H1 to NLN &, after a bit of shuffling, we are now in OBSERVING.
1) Hit LOAD on IMC_LOCK
2) Notify Carlos, so he can run to EX & EY to turn OFF the wifi routers.
20:25 Re-booted Video 1 FOM
20:30 Yellow PSL Dust Alarm
20:34 Dave B informed me that "Elli's" digital camera went down approx 9:30. It doesn't seem to be missed at the moment and I'm not sure what it was for.
20:42 Red PSL dust alarm
21:01 Intent bit set to Commissioning accidentally by Sheila, but left that way in order to do some violin mode damping. 2nd-order harmonics were showing prominent noise in the DMT Omega glitch plot.
22:00 Richard M asked us to go hands-off for a few minutes while power to the facility was being switched between sub-stations.
22:55 Lockloss - probably due to trying to get the 4735Hz mode damped
23:15 re-locking - re-aligned X/Y ALS. Fiber Polarization 17% wrong on Y. Going to correct.
23:25 It seemed that simply turning the unit on to adjust the fiber polarization caused the Y channel to jump to 27%. None of the fibers were disturbed by opening the door or activating the switch. Y is currently at 0% and X is at 4%.
23:29 reset ALS Y VCO
23:53 Nominal Low Noise - Jeff continuing to chase the HF violin mode (4.735 kHz)
23:59 Handing off to Jim
I did reload the IMC Guardian node, and I called Carlos about the WiFi at the End Stations. Both happened as the weekly meeting was starting. I don't know if he got to address these; I didn't hear from him.
I have been investigating the spike in the computed value of kappa_tst shown here in the summary pages: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161122/cal/time_varying_factors/
A closer look reveals a seeming correlation between the DARM line coherence and the values of kappa_tst (see the attached plot). Currently, kappa_tst computed values are gated with PCALY_LINE1 and SUS_LINE1, the only lines used to compute kappa_tst. Things to note in the plot:
1) kappa_tst diverges from the expected range about a minute after the DARM line coherence uncertainty skyrockets, just the amount of time it takes to corrupt the 128 running median.
2) kappa_tst as computed by CALCS also diverges when the DARM coherence goes bad.
3) The ungated kappa_pu behaves similarly to kappa_tst, but since it is gated with the DARM coherence, the GDS output is not corrupted.
Here is a plot of offline-calibrated data that includes the same time, this time adding DARM coherence gating of kappa_tst. Note that the GDS timeseries now looks good and kappa_tst is well behaved.
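A toy version of the fix (coherence gating feeding a running median) looks like the following. The window length, threshold, and function names are placeholders; the real GDS pipeline's gating and median parameters differ:

```python
import numpy as np


def gated_running_median(values, coherence_unc, window=128, threshold=0.01):
    """Running median of a kappa-like time series, skipping samples whose
    DARM-line coherence uncertainty exceeds threshold.

    Sketch only: window length and threshold are illustrative, not the
    actual GDS pipeline settings.
    """
    out = np.full(len(values), np.nan)
    good = coherence_unc < threshold
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        good_in_window = values[lo:i + 1][good[lo:i + 1]]
        if good_in_window.size:
            out[i] = np.median(good_in_window)
    return out
```

Without the gating, a stretch of bad-coherence samples poisons the running median for a full window length after the coherence recovers, which matches the ~1 minute lag between the coherence spike and the kappa_tst divergence noted above.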