Everything looks nominally ok except for a couple of glitches in EX
Starting at 04:25 PDT this morning (11:25 UTC) the Virgo alert system stopped working.
The log file reports a bad request error:
Traceback (most recent call last):
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 498, in <module>
    far=args.far, test=args.test))
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 136, in query_gracedb
    'label: %sOPS %d .. %d' % (ifo, start, end))
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 162, in log_query
    return list(connection.events(query))
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 618, in events
    response = self.get(uri).json()
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 374, in get
    return self.request("GET", url, headers=headers)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 488, in request
    return GsiRest.request(self, method, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 356, in request
    return self.adjustResponse(response)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 369, in adjustResponse
    raise HTTPError(response.status, response.reason, response_content)
ligo.gracedb.rest.HTTPError: (400, 'BAD REQUEST / {"error":"Invalid query"}')
After discussion with Keita, we will stop monit from trying to run vir_alert on h1fescript0 for now. I believe the plan is that these alerts will be added to the standard GraceDB database prior to the next V1 engineering run. I have sent emails bringing a potential issue with gracedb-test to the sysadmins' attention, in case other users are being impacted.
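For reference, the query string the traceback shows being built is `'label: %sOPS %d .. %d' % (ifo, start, end)`. A minimal, hypothetical sketch of that construction with some input validation added (the validation is our suggestion, not part of the actual vir_alert.py) might look like:

```python
# Hypothetical sketch of how the failing GraceDB query string is assembled,
# with basic input validation that could turn a server-side 400 "Invalid
# query" into a clearer client-side error. Not the actual vir_alert.py code.

def build_ops_query(ifo, start, end):
    """Build a 'label: <IFO>OPS <start> .. <end>' GraceDB query string."""
    if not ifo or not ifo.isalnum():
        raise ValueError("ifo must be a non-empty alphanumeric string, e.g. 'V1'")
    start, end = int(start), int(end)
    if end < start:
        raise ValueError("end GPS time must not precede start GPS time")
    return 'label: %sOPS %d .. %d' % (ifo, start, end)

print(build_ops_query('V1', 1184500000, 1184503600))
```

The real script then passes such a string to `connection.events(query)` (per the traceback), which raised `ligo.gracedb.rest.HTTPError` when the server rejected the query.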
TITLE: 07/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 53Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Smooth handoff. The A2L measurement shows some PIT issues, but nothing yet serious enough to affect DARM.
At mid shift, all is green and clear. No problems or issues to report.
All dust monitor vacuum pumps are operating within normal temperature and pressure ranges. The pump at End-Y is getting a bit noisy. Will monitor the exhaust filter for carbon build-up, which foretells a pending vane failure.
TITLE: 07/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: OBSERVING 54Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:
Post Maintenance Commissioning & Calibration activities started off the shift. Had a lockloss due to commissioning, but H1 came back with no hitches.
LOCK CLOCK appears down
Reran Verbal Alarm, since it appeared not to work at the beginning of the shift (that's two shifts in a row).
LOG:
H1's been in OBSERVING for just under 2hrs (& locked for over 3hrs...but the Lock Clock on nuc1 isn't working).
The lock clock was restarted without incident.
Sheila, Pep, TVo
Motivated by the extra DARM noise below 90 Hz (see Jenne's Alog-37590), Sheila thought that maybe it was due to the CARM loop moving around because of optical gain changes after the Montana EQ.
So we went to the LVEA during commissioning time to take a transfer function (Picture 2), and found the UGF to be around 24 kHz with about 15 degrees of phase margin.
In a previous Alog-36686, a good UGF seemed to be around 18 kHz, so we decided to change the gain on the REFL servo from 7 dB to 6 dB. This shifted the UGF down to 17 kHz with about 65 degrees of phase margin (Picture 1).
However, this didn't really help solve DARM's mystery noise mentioned above, so the search continues.
This is a quick summary of work done by many people today:
The last plot shows a couple of sensitivity curves, illustrating the problem.
Josh, TJ, Andy, Beverly,
A status update on Detchar's efforts to follow up the excess noise reported above. We haven't found anything clear yet, but there are some small oddities.
The spectra posted above look strange because the DARM noise below 30 Hz is lower now than before, while above 30 Hz it's higher - a hint, perhaps. Perhaps a filter or switch not engaged?
We ran Bruco: https://ldas-jobs.ligo.caltech.edu/~thomas.massinger/bruco_H1_July19_10UTC/
No smoking-gun broadband coherence around 30-80 Hz. Notable coherent things are:
We checked that h(t) and DARM IN1 both see this noise (so not a calibration filter mismatch) (yellow and blue are now/bad times).
We started looking for a switch that reports it's switched but isn't, as in LLO's 33071; but there, one FASTIMON was way higher than the others, and so far we don't see that. We checked all of the L2/M2/M3 FASTIMONs, and nearly all of them are the same now as they were for two quiet reference times with good range (June 30th and July 4th). Some exceptions are (yellow and blue are now/bad times):
We checked all SUS BIO MON channels and they are all in same state as the reference good times.
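The channel-comparison described above (current readbacks vs. two quiet reference epochs) can be sketched like this; the channel names and values are invented placeholders, not actual H1 data:

```python
# Hedged sketch of the comparison described in the log: flag channels whose
# current readback deviates from every quiet reference epoch by more than a
# fractional tolerance. All names and numbers here are illustrative only.

def flag_changed_channels(now, references, rel_tol=0.1):
    """Return channels whose current value differs from all reference
    epochs by more than rel_tol (fractional)."""
    flagged = []
    for chan, value in now.items():
        refs = [epoch[chan] for epoch in references if chan in epoch]
        if refs and all(abs(value - r) > rel_tol * max(abs(r), 1e-12) for r in refs):
            flagged.append(chan)
    return sorted(flagged)

now    = {'SUS-ETMX_L2_FASTIMON': 1.30, 'SUS-ETMY_L2_FASTIMON': 0.52}
june30 = {'SUS-ETMX_L2_FASTIMON': 0.50, 'SUS-ETMY_L2_FASTIMON': 0.51}
july04 = {'SUS-ETMX_L2_FASTIMON': 0.49, 'SUS-ETMY_L2_FASTIMON': 0.53}
print(flag_changed_channels(now, [june30, july04]))  # only ETMX stands out
```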
Took H1 back to OBSERVING at 1:22 UTC (6:22 PM). Had an SDF diff, but it was related to the REFL servo gain noted by Thomas & Pep.
Running with a range of ~54Mpc.
J. Kissel
I took a set of our regularly scheduled measurements of the DARM sensing function to add to our O2 data set. Processed results to come. The data lives here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs/
2017-07-19_H1_OMCDCPDSUM_to_DARMIN1.xml == C, covering conversion from mA of DCPD sum to DARM_IN1 [ct].
2017-07-19_H1DARM_OLGTF_4to1200Hz_25min.xml == 1 / (1 + G)
2017-07-19_H1_PCAL2DARMTF_4to1200Hz_8min.xml == C / (1 + G)
2017-07-19_H1_PCAL2DARMTF_BB_5to1000Hz_0p25BW_250avgs_5min.xml == broad band excitation to compare PCAL against calibrated Detector Output
As before, we should use the TX PD as reference, since it's been confirmed that PCALY's RX PD continues to suffer from clipping (e.g. LHO aLOG 37409).
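The three measured transfer functions above are algebraically related: from m1 = 1/(1+G) and m2 = C/(1+G) one can recover both the sensing function C = m2/m1 and the open-loop gain G = 1/m1 - 1 at each frequency bin. A minimal sketch (not the actual calibration pipeline) with a round-trip check on made-up values:

```python
import numpy as np

# Sketch of how the measured quantities listed above combine:
#   m1 = 1 / (1 + G)   (DARM OLG TF measurement)
#   m2 = C / (1 + G)   (PCAL-to-DARM TF measurement)
# => C = m2 / m1  and  G = 1/m1 - 1, per frequency bin, as complex numbers.
# Not the actual calibration code; the test values below are invented.

def recover_C_and_G(m1, m2):
    """m1 = 1/(1+G), m2 = C/(1+G); returns (C, G)."""
    m1 = np.asarray(m1, dtype=complex)
    m2 = np.asarray(m2, dtype=complex)
    return m2 / m1, 1.0 / m1 - 1.0

# Round-trip check with made-up values of C and G:
C_true = np.array([3.0e6 + 0j, 2.5e6 + 1e5j])
G_true = np.array([10.0 + 0j, 0.5 - 0.2j])
m1 = 1.0 / (1.0 + G_true)
m2 = C_true / (1.0 + G_true)
C_rec, G_rec = recover_C_and_G(m1, m2)
print(np.allclose(C_rec, C_true), np.allclose(G_rec, G_true))  # True True
```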
Laser Status:
SysStat is good
Front End Power is 33.96 W (should be around 30 W)
HPO Output Power is 154.9 W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0 days, 7 hr 19 minutes (should be days/weeks)
Reflected power = 16.12 W
Transmitted power = 57.58 W
Power sum = 73.7 W
FSS:
It has been locked for 0 days 1 hr and 44 min (should be days/weeks)
TPD[V] = 2.82 V (min 0.9 V)
ISS:
The diffracted power is around 3.2% (should be 3-5%)
Last saturation event was 0 days 1 hours and 44 minutes ago (should be days/weeks)
Possible Issues:
No issues to report
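The "(should be ...)" annotations in the status list above amount to nominal ranges. A hypothetical checker that encodes a few of them and flags out-of-range readings (the readings are copied from this entry; the range bounds marked "assumed" are our guesses, and the checker itself is not a site tool):

```python
# Illustrative status checker for the readings listed above. Readings are
# copied from the log entry; the numeric range for "around 30 W" is an
# assumption on our part, and this script is hypothetical, not a site tool.

NOMINAL = {
    'FE power [W]':       (28.0, 32.0),  # "should be around 30 W" (range assumed)
    'ISS diffracted [%]': (3.0, 5.0),    # "should be 3-5%"
    'FSS TPD [V]':        (0.9, None),   # "min 0.9 V" (no upper bound)
}

READINGS = {'FE power [W]': 33.96, 'ISS diffracted [%]': 3.2, 'FSS TPD [V]': 2.82}

def out_of_range(readings, nominal):
    """Return the names of readings outside their (lo, hi) nominal range."""
    bad = []
    for name, value in readings.items():
        lo, hi = nominal[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            bad.append(name)
    return bad

print(out_of_range(READINGS, NOMINAL))  # FE power is above its assumed range
```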
TITLE: 07/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
All is quiet *knock on wood!*
QUICK SUMMARY:
Arrived to Sheila doing beam spot work. H1 then dropped out & was restored fairly quickly to NLN. Thomas is currently out on floor measuring CARM Gain & then Jeff K. will perform calibration measurement. Will aim for OBSERVING after this work.
Just had a visit from Harrah Elementary School led by Elizabeth.
Richard, Fil, Dave, Carlos, Cheryl
The two digital GIGE cameras Cheryl installed on the PSL table are now producing images.
The cameras are called h1iogige1 (10.106.0.52) and h1iogige2 (10.106.0.53).
I have reconfigured h1digivideo2's VID-CAM28 and VID-CAM29 to be IO GIGE 1 and 2. For some reason monit had VID-CAM28 commented out; I reactivated this camera. I changed the MEDM screen to show the correct names (image attached).
The GigEs were not connected to MEDM until after the installation was complete, so the cameras were aligned without being able to see the images; further alignment and more attenuation are needed.
WP 7075
To proceed further with the 72 MHz WFSs (wavefront sensors) (37042), today I made the hardware changes (mostly cabling) summarized below, while the interferometer was in a violin-mode-damping state.
I am going to leave the hardware configuration as it is. If this new setup doesn't cause extra noise in the interferometer, they will stay semi-permanently.
Here is a summary of the new configuration:
[The modifications]
Later, Jenne pointed out that the dark offset should have been readjusted, so we re-adjusted it. As a result, the -1 dB gain I originally set turned out to be inaccurate. I set it to 2 dB in order to get roughly 550 counts at the normalized in-phase output when the DRMI is locked with the arm at an off-resonance point. The RF phase was also adjusted accordingly.
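As a back-of-envelope check on that retuning: the extra gain needed to move a normalized in-phase output from `current` counts to the ~550-count target is 20*log10(target/current). The `current` value below is invented for illustration; the log only states the final 2 dB setting.

```python
import math

# Back-of-envelope sketch of the gain retuning described above. The 437-count
# starting value is invented for illustration (it happens to imply ~2 dB);
# the log entry only records the final 2 dB gain setting.

def gain_db_for_target(current_counts, target_counts=550.0):
    """dB of additional gain needed to scale current_counts to target_counts."""
    return 20.0 * math.log10(target_counts / current_counts)

print(round(gain_db_for_target(437.0), 2))  # ~2 dB of additional gain
```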
It seems that since this work there has been excess low-frequency noise in the RF9 AM stabilization control signal. The attachment shows the difference.