Kafka message codec - compress and decompress


Problem description

When using Kafka, I can set a codec by setting the kafka.compression.codec property of my Kafka producer.
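For illustration, here is a minimal sketch of a producer with snappy compression enabled, using the old (0.8-era) producer API that the compression.codec property belongs to; the broker address, serializer, and topic name are placeholders, and newer Java clients use compression.type instead (see the sketch at the end of the answer).

```java
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class CompressedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list and serializer; adjust for your cluster.
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Enable snappy compression on the producer side.
        props.put("compression.codec", "snappy");

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        // Messages are batched and compressed by the producer before being sent.
        producer.send(new KeyedMessage<>("my-topic", "key", "hello, compressed world"));
        producer.close();
    }
}
```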

Suppose I use snappy compression in my producer. When consuming messages from Kafka with some Kafka consumer, do I need to do anything to decode the data from snappy, or is that a built-in feature of the Kafka consumer?

In the relevant documentation I could not find any property related to encoding in the Kafka consumer (the setting exists only for the producer).

Can someone clear this up?

Answer

As I understand it, decompression is handled by the consumer itself. As mentioned in their official wiki page, the consumer iterator transparently decompresses compressed data and only returns an uncompressed message.
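As a sketch of that behavior, here is the old (0.8-era) high-level consumer that the "consumer iterator" wording refers to; the ZooKeeper address, group id, and topic name are placeholders. Note that no compression-related property is configured anywhere: the iterator hands back already-decompressed messages.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class TransparentDecompressionConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder ZooKeeper address and consumer group; no compression setting is needed here.
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "demo-group");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 1));

        ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();
        while (it.hasNext()) {
            // The iterator has already decompressed the batch; message() returns plain bytes.
            byte[] payload = it.next().message();
            System.out.println(new String(payload));
        }
    }
}
```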

As described in this article, the consumer works as follows:

The consumer has background "fetcher" threads that continuously fetch data in batches of 1 MB from the brokers and add it to an internal blocking queue. The consumer thread dequeues data from this blocking queue, decompresses it, and iterates through the messages.

The documentation page, under End-to-end Batch Compression, also states:

A batch of messages can be clumped together, compressed, and sent to the server in this form. The batch will be written in compressed form, will remain compressed in the log, and will only be decompressed by the consumer.

So it appears that decompression is handled by the consumer itself; all you need to do is provide a valid/supported compression type via the compression.codec ProducerConfig attribute when creating the producer. I could not find any example or explanation describing a decompression step on the consumer side. Please correct me if I am wrong.
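For reference, the newer Java producer client exposes the equivalent setting as compression.type rather than compression.codec. Below is a minimal sketch assuming that client, with a placeholder broker address and topic; the matching KafkaConsumer likewise needs no compression-related configuration.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ModernCompressedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // "compression.type" replaces the old "compression.codec" setting.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "hello, compressed world"));
        }
    }
}
```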

