Monitoring Apache Spark with Prometheus


Question

I have read that Spark does not include Prometheus as one of its pre-packaged metrics sinks, so I found this post on how to monitor Apache Spark with Prometheus.

But I found it difficult to understand and to get working, because I am a beginner and this is my first time working with Apache Spark.

The first thing I do not get is what I need to do:

  • Do I need to change metrics.properties?

  • Should I add some code to the app, or something else?

I do not understand what the steps are to make it work.

What I have done so far is change the properties as described in the link, and pass this flag:

--conf spark.metrics.conf=<path_to_the_file>/metrics.properties

What else do I need to do to see metrics from Apache Spark?

I also found this link: Monitoring Apache Spark with Prometheus

https://argus-sec.com/monitoring-spark-prometheus/

But I could not make it work with that either.

I have read that there is a way to send metrics to Graphite and then export them to Prometheus, but I could not find any useful documentation.
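The Graphite route mentioned here can be sketched as follows: enable Spark's GraphiteSink in metrics.properties and point it at Prometheus's graphite_exporter, which re-exposes the metrics for scraping. The host and port below are assumptions based on graphite_exporter's defaults, not details from the original post:

```properties
# Hedged sketch: Spark -> GraphiteSink -> graphite_exporter -> Prometheus.
# localhost:9109 is an assumption (graphite_exporter's default Graphite
# ingestion port); adjust to wherever the exporter actually runs.
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=localhost
*.sink.graphite.port=9109
*.sink.graphite.period=10
*.sink.graphite.unit=seconds
```

Prometheus would then scrape graphite_exporter itself (which by default serves its /metrics endpoint on port 9108).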

Accepted Answer

There are a few ways to monitor Apache Spark with Prometheus.

One way is via JmxSink + jmx-exporter:

  • Uncomment *.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink in spark/conf/metrics.properties
  • Download the jmx-exporter jar by following the link on prometheus/jmx_exporter
  • Download the example Prometheus config file (spark.yml)
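The spark.yml handed to the javaagent is a jmx-exporter configuration. As a minimal sketch (an assumption, not the linked example file, which contains more elaborate renaming rules), a config that passes every MBean attribute through unchanged could look like this:

```yaml
# Minimal jmx-exporter config sketch: export every MBean attribute as-is.
# The linked example spark.yml rewrites metric names with explicit rules;
# this catch-all pattern is just the simplest thing that works.
lowercaseOutputName: true
rules:
  - pattern: ".*"
```

A catch-all rule is convenient for a first run; once the raw metric names are visible, specific rules can be added to rename or filter them.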

In the following command, the jmx_prometheus_javaagent-0.3.1.jar file and spark.yml were downloaded in the previous steps; the paths may need to be changed accordingly.

bin/spark-shell --conf "spark.driver.extraJavaOptions=-javaagent:jmx_prometheus_javaagent-0.3.1.jar=8080:spark.yml" 

After running it, we can access the metrics at localhost:8080/metrics.

Prometheus can then be configured to scrape the metrics from jmx-exporter.
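As a hedged sketch, the scrape job in prometheus.yml could target the javaagent port used in the command above (the job name and scrape interval are assumptions, not from the original answer):

```yaml
# Scrape the jmx-exporter javaagent exposed on port 8080 above.
# job_name and scrape_interval are illustrative choices.
scrape_configs:
  - job_name: "spark"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]
```

With static_configs each driver/executor endpoint must be listed by hand, which is why discovery matters in a cluster, as noted below.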

NOTE: We have to handle the discovery part properly if it's running in a cluster environment.

