Why is the Kafka producer very slow on the first message?


Problem Description

I am using a Kafka producer to send prices to a topic. When I send the first message, the producer prints its configuration and then sends the message, so the first message takes much longer to send.

After the first message, each send takes barely half a millisecond.

My question is: can we do something to skip the configuration part, or run it before sending the first message?

I am using Spring Kafka in my project. I have read other questions as well, but they were not really helpful.

application.yml

server:
  port: 8081
spring:
  kafka:
    bootstrap-servers: ***.***.*.***:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

Producer values:

acks = 1
batch.size = 16384
bootstrap.servers = [192.168.1.190:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id = 
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer

I referred to the following questions, but they did not help:

  1. Why is the Camel Kafka producer very slow?
  2. Kafka producer is slow on the first message

Recommended Answer

During the first invocation of the KafkaProducer.send method, the Kafka producer fetches the partition metadata for the topic. Fetching the metadata blocks the send method from returning immediately. The producer caches this metadata, so subsequent sends are much faster. The cache is kept for metadata.max.age.ms (default 5 minutes), after which the producer fetches the metadata again to proactively discover any new brokers or partitions.
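One way to pay this metadata cost at application startup instead of on the first business message is to force a metadata fetch before any real send. Below is a sketch for a Spring Kafka setup; note that the topic name "prices", the class name, and the runner bean are illustrative assumptions, not from the original question, and running it requires a reachable broker:

```java
import org.apache.kafka.clients.producer.Producer;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class ProducerWarmUp {

    // Runs once at startup: fetching the partition list forces the producer
    // to connect to the cluster and cache the topic metadata, so the first
    // real send does not pay the metadata-fetch latency.
    @Bean
    public ApplicationRunner warmUpProducer(ProducerFactory<String, String> factory) {
        return args -> {
            Producer<String, String> producer = factory.createProducer();
            producer.partitionsFor("prices"); // blocks until metadata is fetched
            producer.close(); // returns the shared producer to the factory
        };
    }
}
```

This only helps until the metadata cache expires; after metadata.max.age.ms the next send fetches metadata again.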

When your application starts, you could invoke the KafkaProducer.partitionsFor method to fetch and cache the metadata up front, but once the cache expires after 5 minutes, the next send will be slow again because the metadata is re-fetched. If your Kafka environment is static, that is, no new brokers or partitions are created while your application is running, then consider configuring metadata.max.age.ms to a much longer duration, so the metadata stays in the cache longer.
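In a Spring Boot project, producer settings that have no dedicated configuration key, such as metadata.max.age.ms, can be passed through spring.kafka.producer.properties. A sketch extending the application.yml above; the 30-minute value is an illustrative assumption:

```yaml
spring:
  kafka:
    producer:
      properties:
        # Keep cached topic metadata for 30 minutes instead of the 5-minute
        # default; only safe if brokers/partitions are not added at runtime.
        metadata.max.age.ms: 1800000
```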

