If I set 'compression.type' at topic level and producer level, which takes precedence?


Question

I'm trying to understand the 'compression.type' configuration and my question is: if I set 'compression.type' at topic level and producer level, which takes precedence?

Answer

I tried out some experiments to answer this:

**Note:** server.properties has the config compression.type=producer 

./kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1 --config compression.type=producer --topic t

./kafka-console-producer.sh --broker-list node:6667  --topic t
./kafka-console-producer.sh --broker-list node:6667  --topic t --compression-codec gzip
./kafka-console-producer.sh --broker-list node:6667  --topic t

sh kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration --files /kafka-logs/t-0/00000000000000000000.log

Dumping /kafka-logs/t-0/00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 compresscodec: NONE 
offset: 1 position: 69 compresscodec: GZIP 
offset: 2 position: 158 compresscodec: NONE 

./kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1 --config compression.type=gzip --topic t1

./kafka-console-producer.sh --broker-list node:6667  --topic t1
./kafka-console-producer.sh --broker-list node:6667  --topic t1 --compression-codec gzip
./kafka-console-producer.sh --broker-list node:6667  --topic t1 --compression-codec snappy

sh kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration --files /kafka-logs/t1-0/00000000000000000000.log
Dumping /kafka-logs/t1-0/00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 compresscodec: GZIP 
offset: 1 position: 89 compresscodec: GZIP 
offset: 2 position: 178 compresscodec: GZIP 

Clearly, the topic-level setting overrides the producer-level setting.

Regarding compression and decompression, here is the relevant text from Kafka: The Definitive Guide:

The Kafka broker must decompress all message batches, however, in order to validate the checksum of the individual messages and assign offsets. It then needs to recompress the message batch in order to store it on disk.

As of version 0.10, there is a new message format that allows for relative offsets in a message batch. This means that newer producers will set relative offsets prior to sending the message batch, which allows the broker to skip recompression of the message batch.
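The decompress/recompress round trip described above can be sketched with plain gzip from the Python standard library (illustrative only; this is not Kafka's actual code path):

```python
import gzip

# A toy "message batch" standing in for a producer's compressed send.
batch = b"message-1\nmessage-2\nmessage-3\n" * 100
compressed = gzip.compress(batch)

# To validate per-message checksums, the broker must first decompress...
decompressed = gzip.decompress(compressed)
assert decompressed == batch

# ...and, before the 0.10 relative-offset format, recompress the batch
# again before writing it to disk.
recompressed = gzip.compress(decompressed)
```

With relative offsets, the broker can skip the recompress step entirely when the codecs match, which is why the producer's codec survives in that case.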

So, when the compression type is different, the topic-level compression is honoured. If it is the same, the broker will retain the original compression codec set by the producer.
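The behaviour the experiments show can be modelled as a small sketch (a hypothetical helper for illustration, not Kafka source code):

```python
def stored_codec(topic_compression: str, producer_codec: str) -> str:
    """Model which codec ends up in the on-disk log segment.

    topic_compression: the topic/broker-level `compression.type`
        ('producer', 'gzip', 'snappy', 'lz4', 'none', ...).
    producer_codec: the codec the producing client actually used.
    """
    if topic_compression == "producer":
        # 'producer' means: keep whatever codec the client sent.
        return producer_codec
    # A concrete codec at topic level wins; if it already matches the
    # producer's codec the broker can skip recompression, same result.
    return topic_compression

# Topic t (compression.type=producer): NONE, GZIP, NONE survive as sent.
# Topic t1 (compression.type=gzip): everything is stored as gzip.
print(stored_codec("producer", "gzip"))   # gzip
print(stored_codec("gzip", "snappy"))     # gzip
```

This matches the DumpLogSegments output above: topic `t` preserved each producer's codec, while topic `t1` forced gzip even for the snappy producer.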

Reference - https://kafka.apache.org/documentation/

