Looking at the subtraction for today, I noticed that the PEM mainsmon (PEM-EY_MAINSMON_EBAY_1_DQ) was coherent enough to subtract noise at frequencies other than the power lines. This is pretty unusual, so we're trying to think up things to check, like whether the ETM is much more charged than it used to be. TVo and Sheila are looking at BruCo to see if anything else looks suspicious.
I'll take a quick look to see if other mainsmon channels are also highly coherent, but this may be an interesting avenue for understanding why we have this extra noise now.
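For anyone wanting to reproduce this kind of check, below is a minimal sketch of coherence-based subtraction with a witness channel like the mainsmon. Everything in it (sample rate, duration, the synthetic data standing in for DARM and the witness) is illustrative rather than the actual pipeline used for the subtraction above; the point is just that the best-case residual PSD is the DARM PSD scaled by (1 - coherence).

# Minimal sketch of coherence-based subtraction with a witness channel.
# Synthetic data stand in for DARM and the witness; real data would be
# fetched for the same stretch of time.
import numpy as np
from scipy.signal import welch, coherence

fs, dur = 1024, 1024                        # sample rate [Hz] and duration [s], assumed
np.random.seed(0)
witness = np.random.randn(fs * dur)         # stand-in for PEM-EY_MAINSMON_EBAY_1_DQ
darm = np.random.randn(fs * dur) + 0.3 * witness   # stand-in DARM with some coupled noise

f, p_darm = welch(darm, fs=fs, nperseg=16 * fs)
_, coh = coherence(darm, witness, fs=fs, nperseg=16 * fs)

asd_before = np.sqrt(p_darm)
asd_after = np.sqrt(p_darm * (1.0 - coh))   # best-case residual after optimal linear subtraction
print('median ASD improvement factor: %.2f' % np.median(asd_before / asd_after))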
Of note is that I didn't see this mainsmon subtraction for data from July 15th (alog 37590).
The mainsmon channels also seem more noisy than in the past. Is this something that we've seen before?
The attached spectra show that the mainsmons are noisier now than they were a few days ago. Spot checking, the noise was there on July 18th, but not July 17th. It's not clear from the time series trends that there is anything different going on.
EDIT: Spot checking more spectra, it looks like the noise may have started on 17 July 2017, between 20:00 and 21:00 UTC.
Josh, TJ
Can you explain how you're finding the broadband coherence between the MAINSMON and h(t)?
The BruCo run from today doesn't show anything other than 60 Hz + harmonics:
We did some spot checking and don't see significant broadband coherence throughout the day; it looks consistent with other days from the last few weeks (see the sketch after the trace list below):
https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=172856
Blue: 2017-07-19 03:30:00
Yellow: 2017-07-19 01:30:00
Green: 2017-06-30 14:30:00
Red: 2017-07-04 14:30:00
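For reference, the kind of spot check described above can be sketched with gwpy rather than ldvw/DTT. The channel names, times, and FFT settings below are assumptions, and it presumes NDS/frame access from wherever it is run.

# Rough sketch of a coherence spot check between h(t) and the mainsmon.
from gwpy.timeseries import TimeSeries

start, end = '2017-07-19 03:30:00', '2017-07-19 03:47:04'   # ~1024 s, illustrative
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
mains = TimeSeries.get('H1:PEM-EY_MAINSMON_EBAY_1_DQ', start, end)

# put both on a common rate before computing coherence (rates here are assumptions)
rate = min(darm.sample_rate, mains.sample_rate)
coh = darm.resample(rate).coherence(mains.resample(rate), fftlength=16, overlap=8)

idx = coh.value.argmax()
print('peak coherence %.2f at %.1f Hz' % (coh.value[idx], coh.frequencies.value[idx]))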
For the record, the analog CDS team reverted the ESD power supplies on July 11th -- see LHO aLOG 37455.
Hmmm. Looking at coherence on DTT, I'm also not seeing much. I was inferring that there would be coherence based on the subtractability of the noise. As Kiwamu pointed out, perhaps it's a series of glitches or something, where the coupling is constant but the noise isn't, so when you look at coherence averaged over long times, it doesn't show up?
EDIT: It looks like Kiwamu was right that there was a glitch, probably in the power lines. I re-did the subtraction in sections of 256 seconds rather than the full 1024: the first sets were fine and normal (no broad subtraction with the mainsmon), while the last set shows pretty significant subtraction. So maybe this is just a regular thing that happens, and I caught it by accident. The attached is a spectrum during the time of the glitch. I assume that the glitch must be on the power lines, since I get such good subtraction using them as my witness.
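As an illustration of why the chunked re-analysis works, here is a purely synthetic sketch: a short transient that couples through the witness only near the end of the stretch produces high coherence in the 256 s chunk containing it, while the other chunks stay unremarkable. Rates, durations, and amplitudes are all made up.

# Synthetic demonstration of transient coupling showing up only in one chunk.
import numpy as np
from scipy.signal import coherence

fs, dur = 256, 1024                         # made-up rate [Hz] and duration [s]
np.random.seed(1)
witness = np.random.randn(fs * dur)
darm = np.random.randn(fs * dur)
g = slice(900 * fs, 910 * fs)               # 10 s broadband glitch near the end of the stretch
witness[g] += 20 * np.random.randn(10 * fs)
darm[g] += 2 * witness[g]                   # witness couples into DARM only during the glitch

for i in range(4):                          # four 256 s chunks
    seg = slice(i * 256 * fs, (i + 1) * 256 * fs)
    _, coh = coherence(darm[seg], witness[seg], fs=fs, nperseg=8 * fs)
    print('chunk %d: mean coherence %.3f' % (i, coh.mean()))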
Sheila and I ran BruCo:
https://ldas-jobs.ligo-wa.caltech.edu/~thomas.vo/bruco_July19/
At this GPS time (1184486418), in the 25-35 Hz range, there is a lot of coherence between H1:ASC-OMC_A_PIT(YAW) and DARM. But spot checking the few hours afterwards with DTT, this coherence seems to go away, so maybe there was some transient activity during this time.
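A time-resolved coherence check is one way to confirm that this is transient. The sketch below uses gwpy's coherence_spectrogram, which is not the DTT procedure used above; the OMC channel name (with the _OUT_DQ suffix) and the stride/FFT settings are guesses.

# Sketch of time-resolved OMC ASC / DARM coherence around the suspect band.
from gwpy.timeseries import TimeSeries

start = 1184486418
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + 3600)
omc = TimeSeries.get('H1:ASC-OMC_A_PIT_OUT_DQ', start, start + 3600)   # channel name is a guess

# down-sample DARM to the (slower) ASC channel and compute coherence in 5-minute bins
darm = darm.resample(omc.sample_rate)
csg = darm.coherence_spectrogram(omc, stride=300, fftlength=10)
plot = csg.crop_frequencies(25, 35).plot(vmin=0, vmax=1)
plot.savefig('omc_pit_darm_coherence.png')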
Looking at a spectrogram of the MAINSMON channel, there are two broadband glitches near the end of the 1024 second stretch from your original plot:
https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=172923
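A spectrogram like that one can be regenerated with a few lines of gwpy; the GPS start below is illustrative rather than the exact window of the original plot, and the stride/FFT settings are arbitrary choices.

# Sketch of a MAINSMON spectrogram over a 1024 s stretch.
from gwpy.timeseries import TimeSeries

start = 1184486418                          # illustrative GPS start
mains = TimeSeries.get('H1:PEM-EY_MAINSMON_EBAY_1_DQ', start, start + 1024)
spec = mains.spectrogram(stride=4, fftlength=1) ** (1 / 2.)   # 4 s bins, ASD units
plot = spec.plot(norm='log')
plot.savefig('mainsmon_spectrogram.png')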
Attached is h1guardian0's performance plot for the month of July. The restart of DIAG_MAIN a week ago cleared the memory usage, and yesterday's reboot reset the CPU overload.
Everything looks nominally OK except for a couple of glitches in EX.
Starting at 04:25 PDT this morning (11:25 UTC), the Virgo alert system stopped working.
The log file reports a bad request error:
Traceback (most recent call last):
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 498, in <module>
    far=args.far, test=args.test))
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 136, in query_gracedb
    'label: %sOPS %d .. %d' % (ifo, start, end))
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 162, in log_query
    return list(connection.events(query))
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 618, in events
    response = self.get(uri).json()
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 374, in get
    return self.request("GET", url, headers=headers)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 488, in request
    return GsiRest.request(self, method, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 356, in request
    return self.adjustResponse(response)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 369, in adjustResponse
    raise HTTPError(response.status, response.reason, response_content)
ligo.gracedb.rest.HTTPError: (400, 'BAD REQUEST / {"error":"Invalid query"}')
After discussion with Keita, we will stop monit from trying to run vir_alert on h1fescript0 for now. I believe the plan is that these alerts will be added to the standard gracedb database prior to the next V1 engineering run. I have sent emails bringing the sysadmins' attention to a potential issue with gracedb-test in case other users are being impacted.
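For context, below is a minimal sketch of the GraceDB events query that vir_alert.py makes (see the log_query frame in the traceback), with the HTTPError caught so a rejected query logs an error instead of killing the monit-managed process. The label string and GPS window are illustrative, not the production query.

# Sketch of a GraceDB events query with the error handled.
from ligo.gracedb.rest import GraceDb, HTTPError

client = GraceDb()                                   # production server by default
query = 'label: V1OPS 1184500000 .. 1184503600'      # example; the real query is built from script args

try:
    events = list(client.events(query))
except HTTPError as err:
    print('GraceDB rejected the query: %s' % err)
    events = []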
TITLE: 07/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 53Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Smooth handoff. The A2L measurement shows some PIT issues, but nothing serious enough yet to affect DARM.
At mid shift, all is green and clear. No problems or issues to report.
All dust monitor vacuum pumps are operating within normal temperature and pressure ranges. The pump at End-Y is getting a bit noisy. Will monitor the exhaust filter for carbon build-up, which foretells a pending vane failure.
TITLE: 07/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: OBSERVING 54Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:
Post Maintenance Commissioning & Calibration activities started off the shift. Had a lockloss due to commissioning, but H1 came back with no hitches.
LOCK CLOCK appears down
Reran Verbal Alarm, since it appeared not to work at the beginning of the shift (that's 2 shifts in a row).
LOG:
H1's been in OBSERVING for just under 2hrs (& locked for over 3hrs...but the Lock Clock on nuc1 isn't working).
The lock clock was restarted without incident.
Sheila, Pep, TVo
Motivated by the extra DARM noise below 90 Hz (see Jenne's alog 37590), Sheila thought that maybe it was due to the CARM loop moving around because of optical gain changes after the Montana EQ.
So we went to the LVEA during commissioning time to take a transfer function (Picture 2), and found the UGF to be around 24 kHz with about 15 degrees of phase margin.
In a previous alog (36686), it seemed like a good UGF was around 18 kHz, so we decided to change the gain on the REFL servo from 7 dB to 6 dB. This shifted the UGF down to 17 kHz with about 65 degrees of phase margin (Picture 1).
However, this didn't resolve the mystery noise in DARM mentioned above, so the search continues.
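As a reminder of the arithmetic involved, here is a toy sketch of reading a UGF and phase margin off an open-loop transfer function; the 1/f-plus-delay model exists only to make the example run, and the real numbers come from the swept-sine measurements in the pictures.

# Toy sketch: unity-gain frequency and phase margin from an open-loop TF.
import numpy as np

f = np.logspace(3, 5, 2000)                                 # 1 kHz - 100 kHz
oltf = (24e3 / (1j * f)) * np.exp(-2j * np.pi * f * 8e-6)   # toy 1/f loop with delay, ~24 kHz UGF

idx = np.argmin(np.abs(np.abs(oltf) - 1.0))                 # unity-gain crossing
ugf = f[idx]
margin = 180.0 + np.degrees(np.angle(oltf[idx]))
print('UGF ~ %.1f kHz, phase margin ~ %.0f deg' % (ugf / 1e3, margin))

# Lowering the REFL servo gain by 1 dB scales |G| by 10**(-1.0/20) ~ 0.89,
# which moves the unity-gain crossing down in frequency and buys phase margin.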
This is a quick summary of work done by many people today:
The last plot shows a couple of sensitivity curves, illustrating the problem.
Josh, TJ, Andy, Beverly,
A status update on Detchar's efforts to follow up the excess noise reported above. We haven't found anything clear yet, but there are some small oddities.
The spectra posted above look strange because the DARM noise below 30 Hz is lower now than before, while above 30 Hz it's higher - a hint, perhaps, that a filter or switch is not engaged?
We ran BruCo: https://ldas-jobs.ligo.caltech.edu/~thomas.massinger/bruco_H1_July19_10UTC/
No smoking-gun broadband coherence around 30-80 Hz. Notable coherent features are:
We checked that h(t) and DARM IN1 both see this noise (so not a calibration filter mismatch) (yellow and blue are now/bad times).
We started looking for a switch that reports it's switched but isn't, as in LLO aLOG 33071, but there one of the FASTIMONs was way higher than the others, and so far we don't see that here. We checked all of the L2/M2/M3 FASTIMONs, and nearly all of them are the same now as they were for two quiet reference times with good range (June 30th and July 4th). Some exceptions are shown in the attached plots (yellow and blue are the now/bad times).
We checked all SUS BIO MON channels and they are all in the same state as at the reference good times.
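The channel-by-channel comparison was done by hand; a scripted version might look like the sketch below, where the channel name, GPS times, and threshold are all placeholders and the real check loops over every L2/M2/M3 FASTIMON and BIO MON channel.

# Sketch of comparing monitor channels between a bad time and a quiet reference.
from gwpy.timeseries import TimeSeries

channels = ['H1:SUS-ETMY_L2_FASTIMON_UL_OUT_DQ']    # placeholder channel name
t_bad, t_good = 1184493635, 1182868235              # illustrative GPS times (current/bad vs. quiet reference)

for chan in channels:
    bad = TimeSeries.get(chan, t_bad, t_bad + 60).value
    good = TimeSeries.get(chan, t_good, t_good + 60).value
    if abs(bad.mean() - good.mean()) > 3 * good.std():
        print('%s: mean moved from %.3g to %.3g' % (chan, good.mean(), bad.mean()))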
Took H1 back to OBSERVING at 1:22 UTC (6:22 pm). Had an SDF Diff, but it was related to the REFL Servo Gain change noted by Thomas & Pep.
Running with a range of ~54Mpc.