Why can't I increase session.timeout.ms?


Question

I want to increase session.timeout.ms to allow more time for processing the messages received between poll() calls. However, when I set session.timeout.ms to a value higher than 30000, it fails to create the Consumer object and throws the error below.

Could anyone tell me why I can't increase the session.timeout.ms value, or whether I am missing something?

0    [main] INFO  org.apache.kafka.clients.consumer.ConsumerConfig  - ConsumerConfig values: 

request.timeout.ms = 40000
check.crcs = true
retry.backoff.ms = 100
ssl.truststore.password = null
ssl.keymanager.algorithm = SunX509
receive.buffer.bytes = 262144
ssl.cipher.suites = null
ssl.key.password = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.provider = null
sasl.kerberos.service.name = null
session.timeout.ms = 40000
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [server-name:9092]
client.id = 
fetch.max.wait.ms = 500
fetch.min.bytes = 50000
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
sasl.kerberos.kinit.cmd = /usr/bin/kinit
auto.offset.reset = latest
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
ssl.endpoint.identification.algorithm = null
max.partition.fetch.bytes = 2097152
ssl.keystore.location = null
ssl.truststore.location = null
ssl.keystore.password = null
metrics.sample.window.ms = 30000
metadata.max.age.ms = 300000
security.protocol = PLAINTEXT
auto.commit.interval.ms = 5000
ssl.protocol = TLS
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.trustmanager.algorithm = PKIX
group.id = test7
enable.auto.commit = false
metric.reporters = []
ssl.truststore.type = JKS
send.buffer.bytes = 131072
reconnect.backoff.ms = 50
metrics.num.samples = 2
ssl.keystore.type = JKS
heartbeat.interval.ms = 3000

Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:624)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:500)

Answer

The range of allowed consumer session timeouts is controlled by the broker settings group.max.session.timeout.ms (default 30 s) and group.min.session.timeout.ms (default 6 s).
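As a rough illustration (plain Java, no Kafka dependency), the acceptance logic boils down to a range check. The helper below is hypothetical, not Kafka's actual code, but it mirrors the constraints: the session timeout must fall inside the broker's [min, max] window, and the 0.10-era consumer client additionally required request.timeout.ms to be strictly greater than session.timeout.ms:

```java
public class TimeoutCheck {
    // Hypothetical helper: does a proposed consumer session timeout satisfy
    // the broker's accepted range and stay below request.timeout.ms?
    static boolean sessionTimeoutAccepted(int sessionMs, int groupMinMs,
                                          int groupMaxMs, int requestTimeoutMs) {
        return sessionMs >= groupMinMs
            && sessionMs <= groupMaxMs
            && sessionMs < requestTimeoutMs;
    }

    public static void main(String[] args) {
        // Broker defaults quoted in the answer:
        // group.min.session.timeout.ms = 6000, group.max.session.timeout.ms = 30000.
        int groupMin = 6000, groupMax = 30000, requestTimeout = 40000;

        // The questioner's value: rejected, since 40000 exceeds group.max (30000).
        System.out.println(sessionTimeoutAccepted(40000, groupMin, groupMax, requestTimeout));

        // A value inside the default window: accepted.
        System.out.println(sessionTimeoutAccepted(20000, groupMin, groupMax, requestTimeout));
    }
}
```

With the defaults in place, the first check prints false and the second true, which matches the failure at session.timeout.ms = 40000 seen in the log above.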

You should first increase group.max.session.timeout.ms on the broker side; otherwise the consumer will fail with "The session timeout is not within an acceptable range.".
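For example, you might raise the broker's ceiling before bumping the consumer setting. The concrete values below are assumptions for illustration, not recommendations:

```
# Broker side: server.properties
group.max.session.timeout.ms=60000

# Consumer side
session.timeout.ms=40000
request.timeout.ms=50000    # keep this above session.timeout.ms
```

After changing the broker setting, restart the broker so the new limit takes effect, then recreate the consumer with the larger session.timeout.ms.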
