Read Kafka topic in a Spark batch job

Question

I'm writing a Spark (v1.6.0) batch job that reads from a Kafka topic.
For this I can use org.apache.spark.streaming.kafka.KafkaUtils#createRDD; however, I need to set the offsets for all the partitions, and I also need to store them somewhere (ZK? HDFS?) to know where to start the next batch job.
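
For reference, a minimal sketch of that approach against the Spark 1.6 API (the broker list, topic name, and offset values below are placeholders):

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka.{KafkaUtils, OffsetRange}

val sc = new SparkContext(new SparkConf().setAppName("kafka-batch-read"))

// Placeholder broker list; createRDD uses the simple consumer API, no group id needed.
val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")

// One OffsetRange per partition: (topic, partition, fromOffset, untilOffset).
// The from-offsets are exactly what would need to be loaded from ZK/HDFS at the
// start of each run, and the until-offsets saved at the end.
val offsetRanges = Array(
  OffsetRange("my-topic", 0, 0L, 1000L),
  OffsetRange("my-topic", 1, 0L, 1000L)
)

val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
  sc, kafkaParams, offsetRanges)

rdd.map { case (_, value) => value }.take(10).foreach(println)
```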

What is the right approach to read from Kafka in a batch job?

I'm also considering writing a streaming job instead, one that starts reading with auto.offset.reset=smallest, saves its checkpoint to HDFS, and then starts from that checkpoint on the next run.

But in that case, how can I fetch just once and stop the streaming job after the first batch?
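
For illustration, something like this sketch could stop after the first batch, assuming a StreamingContext ssc has already been set up with checkpointing and auto.offset.reset=smallest:

```scala
import java.util.concurrent.CountDownLatch
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// Sketch: stop the job once the first batch that actually carried records completes.
def runOnce(ssc: StreamingContext): Unit = {
  val firstData = new CountDownLatch(1)
  ssc.addStreamingListener(new StreamingListener {
    override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
      if (batch.batchInfo.numRecords > 0) firstData.countDown()
    }
  })
  ssc.start()
  firstData.await() // block the driver until that batch has completed
  // Stop from the driver thread, not from inside the listener callback.
  ssc.stop(stopSparkContext = true, stopGracefully = true)
}
```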

Answer

createRDD is the right approach for reading a batch from Kafka.

To query for info about the latest / earliest available offsets, look at the KafkaCluster.scala methods getLatestLeaderOffsets and getEarliestLeaderOffsets. That file was private, but it should be public in the latest versions of Spark.
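
A sketch of how those methods could feed createRDD (KafkaCluster is private[spark] in 1.6, so this assumes a copy of the class in your own project or a Spark version where it is public; the topic name is a placeholder and kafkaParams is the same broker map as above):

```scala
import kafka.common.TopicAndPartition
import org.apache.spark.streaming.kafka.{KafkaCluster, OffsetRange}

val kc = new KafkaCluster(kafkaParams)

// Discover the topic's partitions, then ask the leaders for their offset bounds.
val partitions: Set[TopicAndPartition] =
  kc.getPartitions(Set("my-topic")).right.get

val from  = kc.getEarliestLeaderOffsets(partitions).right.get // or offsets saved by the last run
val until = kc.getLatestLeaderOffsets(partitions).right.get

// Build one OffsetRange per partition, bounded by what Kafka currently holds.
val offsetRanges: Array[OffsetRange] = partitions.toArray.map { tp =>
  OffsetRange(tp.topic, tp.partition, from(tp).offset, until(tp).offset)
}

// Feed offsetRanges into KafkaUtils.createRDD as above, and persist the
// until-offsets (ZK, HDFS, ...) as the starting point for the next run.
```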
