In Spark Streaming, is it possible to upsert batch data from Kafka to Hive?

Problem description

My plan is:

1. Use Spark Streaming to load data from Kafka periodically (for example, every 1 minute).
2. Convert each micro-batch into a DataFrame.
3. Upsert the DataFrame into a Hive table that stores all of the history data.

Currently, I have successfully implemented steps 1-2.

I want to know whether there is a practical way to realize step 3. In detail:

1. Load the latest history table for a certain partition in Spark Streaming.
2. Join the micro-batch DataFrame with that history DataFrame/partition to produce a new DataFrame.
3. Save the new DataFrame back to Hive, overwriting that partition of the history table.
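
For concreteness, here is a minimal, untested sketch of what step 3 could look like. Every specific name in it is an assumption rather than something stated in the question: the history table is called ods.zhihu_comment_history and is partitioned by a dt column, answer_id is the upsert key, batchDf stands for the micro-batch DataFrame (df in the code further below), and the SparkSession is created with enableHiveSupport(). The merged result goes through a staging table because Spark refuses to insert-overwrite a table that is still being read in the same job.

// Hypothetical names: ods.zhihu_comment_history (partitioned by dt), answer_id as the key.
String dt = "2020-04-01";

// 1. Load the history rows of the target partition.
Dataset<Row> history = spark.sql(
    "SELECT * FROM ods.zhihu_comment_history WHERE dt = '" + dt + "'");

// 2. Upsert: keep every row of the batch, plus the history rows whose key
//    does not appear in the batch (left_anti join).
Dataset<Row> batchWithDt = batchDf.withColumn("dt", org.apache.spark.sql.functions.lit(dt));
Dataset<Row> unchanged = history.join(batchWithDt,
    history.col("answer_id").equalTo(batchWithDt.col("answer_id")), "left_anti");
Dataset<Row> merged = batchWithDt.unionByName(unchanged);

// 3. Overwrite only that partition: stage the result, then insert it back,
//    because the plan above still reads the history table.
merged.write().mode(org.apache.spark.sql.SaveMode.Overwrite)
      .saveAsTable("tmp.zhihu_comment_stage");
spark.conf().set("spark.sql.sources.partitionOverwriteMode", "dynamic"); // Spark 2.3+, datasource tables
spark.table("tmp.zhihu_comment_stage").write()
      .mode(org.apache.spark.sql.SaveMode.Overwrite)
      .insertInto("ods.zhihu_comment_history");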

Here is my current code for steps 1-2:


import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.CanCommitOffsets;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.HasOffsetRanges;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import org.apache.spark.streaming.kafka010.OffsetRange;

import static org.apache.spark.sql.functions.col;

public final class SparkConsumer {

  public static void main(String[] args) throws Exception {
    String brokers = "device1:9092,device2:9092,device3:9092";
    String groupId = "spark";
    String topics = "zhihu_comment";
    String destTable = "ods.zhihu_comment";   // target Hive table for step 3 (not used yet)

    // Create a streaming context with a 60-second batch interval
    SparkConf sparkConf = new SparkConf().setAppName("TestKafkaStreaming");
    sparkConf.set("spark.streaming.backpressure.enabled", "true");
    sparkConf.set("spark.streaming.backpressure.initialRate", "10000");
    sparkConf.set("spark.streaming.kafka.maxRatePerPartition", "10000");
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(60));

    Set<String> topicsSet = new HashSet<>(Arrays.asList(topics.split(",")));
    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
    kafkaParams.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    // Key/value deserializers are required by the Kafka consumer
    kafkaParams.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    kafkaParams.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    kafkaParams.put("enable.auto.commit", false);
    kafkaParams.put("max.poll.records", "500");

    SparkSession spark = SparkSession.builder().appName(topics).getOrCreate();

    // Create a direct Kafka stream with the brokers and topics above
    JavaInputDStream<ConsumerRecord<String, String>> messages = KafkaUtils.createDirectStream(
        jssc,
        LocationStrategies.PreferConsistent(),
        ConsumerStrategies.<String, String>Subscribe(topicsSet, kafkaParams));

    // Commit offsets back to Kafka once each batch's output has completed
    messages.foreachRDD(rdd -> {
        OffsetRange[] offsetRanges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
        ((CanCommitOffsets) messages.inputDStream()).commitAsync(offsetRanges);
    });

    // Keep only the actual message value (JSON) from each record
    Column[] colList = { col("answer_id"), col("author"), col("content"), col("vote_count") };
    JavaDStream<String> recordStream = messages.map(ConsumerRecord::value);

    // Convert each micro-batch RDD into a DataFrame (steps 1-2)
    recordStream.foreachRDD(rdd -> {
        if (rdd.count() > 0) {
            Dataset<Row> df = spark.read().json(rdd.rdd());
            df.select(colList).show();
        }
    });

    jssc.start();
    jssc.awaitTermination();
  }
}

I want to know whether this approach is practical. I would appreciate any advice you could give me.

Recommended answer

Instead of re-inventing the wheel, I would strongly recommend Kafka Connect. All you need is the HDFS Sink Connector, which replicates the data from a Kafka topic to Hive:

The Kafka Connect HDFS Sink connector allows you to export data from Kafka topics to HDFS files in a variety of formats and integrates with Hive to make data immediately available for querying with HiveQL.
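
For illustration only, a sink configuration could look roughly like the following. The property names follow the Confluent HDFS Sink connector; the hostnames, database name, and sizing values are placeholders, and a schema-aware converter (for example Avro with Schema Registry) is assumed on the Connect worker.

# Hypothetical connector properties for the zhihu_comment topic
name=zhihu-comment-hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=3
topics=zhihu_comment

# Where and how the files are written
hdfs.url=hdfs://device1:8020
flush.size=10000
format.class=io.confluent.connect.hdfs.parquet.ParquetFormat

# Hive integration: the connector creates and updates the Hive table itself
hive.integration=true
hive.metastore.uris=thrift://device1:9083
hive.database=ods
schema.compatibility=BACKWARD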
