How to use Confluent Schema Registry with the from_avro standard function?


Question

My Kafka and Schema Registry are based on Confluent Community Platform 5.2.2, and my Spark version is 2.4.4. I started the Spark REPL with:

./bin/spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4,org.apache.spark:spark-avro_2.11:2.4.4

and set up the Kafka source for the Spark session:

val brokerServers = "my_confluent_server:9092"
val topicName = "my_kafka_topic_name" 
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", brokerServers)
  .option("subscribe", topicName)
  .load()

Then I retrieved the schema information for the key and value with:

import io.confluent.kafka.schemaregistry.client.rest.RestService
val schemaRegistryURL = "http://my_confluent_server:8081"
val restService = new RestService(schemaRegistryURL)
val keyRestResponseSchemaStr: String = restService.getLatestVersionSchemaOnly(topicName + "-key")
val valueRestResponseSchemaStr: String = restService.getLatestVersionSchemaOnly(topicName + "-value")
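
(Note: RestService lives in Confluent's kafka-schema-registry-client artifact, which the --packages line above does not pull in. If it is not already on the classpath, one way to add it, assuming the Confluent Maven repository, would be for example:)

./bin/spark-shell --repositories https://packages.confluent.io/maven/ --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4,org.apache.spark:spark-avro_2.11:2.4.4,io.confluent:kafka-schema-registry-client:5.2.2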

First, if I query it with writeStream for the "key", i.e.:

import org.apache.spark.sql.avro._
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.DataFrame
import java.time.LocalDateTime
val query = df.writeStream
  .outputMode("append")
  .foreachBatch((batchDF: DataFrame, batchId: Long) => {
    val rstDF = batchDF
      .select(
        from_avro($"key", keyRestResponseSchemaStr).as("key"),
        from_avro($"value", valueRestResponseSchemaStr).as("value"))

    println(s"${LocalDateTime.now} --- Batch ${batchId}, ${batchDF.count} rows")
    //rstDF.select("value").show
    rstDF.select("key").show
  })
  .trigger(Trigger.ProcessingTime("120 seconds"))
  .start()

query.awaitTermination()

there are no errors, and the row counts are shown, but I could not get any data:

2019-09-16T10:30:16.984 --- Batch 0, 0 rows
+---+
|key|
+---+
+---+

2019-09-16T10:32:00.401 --- Batch 1, 27 rows
+---+
|key|
+---+
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
+---+
only showing top 20 rows

But if I select the "value":

import org.apache.spark.sql.avro._
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.DataFrame
import java.time.LocalDateTime
val query = df.writeStream
  .outputMode("append")
  .foreachBatch((batchDF: DataFrame, batchId: Long) => {
    val rstDF = batchDF
      .select(
        from_avro($"key", keyRestResponseSchemaStr).as("key"),
        from_avro($"value", valueRestResponseSchemaStr).as("value"))

    println(s"${LocalDateTime.now} --- Batch ${batchId}, ${batchDF.count} rows")
    rstDF.select("value").show
    //rstDF.select("key").show
  })
  .trigger(Trigger.ProcessingTime("120 seconds"))
  .start()

query.awaitTermination()

I got the following messages:

2019-09-16T10:34:54.287 --- Batch 0, 0 rows
+-----+
|value|
+-----+
+-----+

2019-09-16T10:36:00.416 --- Batch 1, 19 rows
19/09/16 10:36:03 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 3)
org.apache.avro.AvroRuntimeException: Malformed data. Length is negative: -1
    at org.apache.avro.io.BinaryDecoder.doReadBytes(BinaryDecoder.java:336)
    at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:263)
    at org.apache.avro.io.ResolvingDecoder.readString(ResolvingDecoder.java:201)
    at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:422)
    at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:414)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:181)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
    at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:232)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:145)
    at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:50)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.serializefromobject_doConsume_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

So I think there are two levels of problems:

  1. First, the Avro deserialization logic differs between the key and the value, and the current "from_avro" only supports the key, not the value.

  2. Even for the key, there is no error, but the "from_avro" deserializer could not get the real data.

Do you think I have taken any wrong steps? Or should from_avro and to_avro be enhanced?

Thanks.

Answer

Your key and value are entirely byte arrays, prefixed with an integer schema ID. Spark-Avro does not support that format, only "Avro container object" formats that contain the schema as part of the record.

In other words, you need to invoke the Confluent deserializers, not the "plain Avro" deserializers, to first get Avro objects; then you can apply schemas to those.
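
To make that concrete, here is a minimal sketch (not part of the original answer) of the common manual workaround: strip the 5-byte Confluent header (one magic byte plus a 4-byte schema ID) and feed the remaining bytes to the stock from_avro. This only decodes correctly when the latest schema fetched from the registry is the same schema the producer wrote with, since it ignores the embedded schema ID:

import org.apache.spark.sql.avro._
import org.apache.spark.sql.functions.udf

// Confluent wire format: 1 magic byte (0x00) + 4-byte schema ID + Avro binary payload.
// Dropping the first 5 bytes leaves plain Avro binary that from_avro can decode.
val stripConfluentHeader = udf((bytes: Array[Byte]) =>
  if (bytes != null && bytes.length > 5) bytes.drop(5) else bytes)

val decodedDF = df.select(
  from_avro(stripConfluentHeader($"key"), keyRestResponseSchemaStr).as("key"),
  from_avro(stripConfluentHeader($"value"), valueRestResponseSchemaStr).as("value"))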

Should Spark enhance from_avro and to_avro?

They should, but they won't. See SPARK-26314. As a side note, Databricks does offer Schema Registry integration with functions of the same name, which only adds to the confusion.

The workaround would be to use this library: https://github.com/AbsaOSS/ABRiS

Or see the answers to "Integrating Spark Structured Streaming with the Confluent Schema Registry".
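
For the ABRiS route, a rough sketch based on the AbrisConfig API shown in the project's README (check the ABRiS release matrix for the version matching your Spark build; older releases exposed a different entry point such as from_confluent_avro):

// Note: this is ABRiS's from_avro, not Spark's.
import org.apache.spark.sql.functions.col
import za.co.absa.abris.avro.functions.from_avro
import za.co.absa.abris.config.AbrisConfig

// Reader schemas are downloaded from the registry; ABRiS understands the
// Confluent wire format, so no manual header stripping is needed.
val keyAbrisConfig = AbrisConfig
  .fromConfluentAvro
  .downloadReaderSchemaByLatestVersion
  .andTopicNameStrategy(topicName, isKey = true)   // "<topic>-key" subject
  .usingSchemaRegistry(schemaRegistryURL)

val valueAbrisConfig = AbrisConfig
  .fromConfluentAvro
  .downloadReaderSchemaByLatestVersion
  .andTopicNameStrategy(topicName)                 // "<topic>-value" subject
  .usingSchemaRegistry(schemaRegistryURL)

val decoded = df.select(
  from_avro(col("key"), keyAbrisConfig).as("key"),
  from_avro(col("value"), valueAbrisConfig).as("value"))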

