Apache Kafka: Failed to Update Metadata/java.nio.channels.ClosedChannelException


Problem Description

I'm just getting started with Apache Kafka/Zookeeper and have been running into issues trying to set up a cluster on AWS. Currently I have three servers:

One running Zookeeper and two running Kafka.

I can start the Kafka servers without issue and can create topics on both of them. However, the trouble comes when I try to start a producer on one machine and a consumer on the other:

On the Kafka producer:

kafka-console-producer.sh --broker-list <kafka server 1 aws public dns>:9092,<kafka server 2 aws public dns>:9092 --topic samsa

On the Kafka consumer:

kafka-console-consumer.sh --zookeeper <zookeeper server ip>:2181 --topic samsa
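
For reference, a topic like samsa on this Kafka version would typically have been created through Zookeeper with a command along the lines of the sketch below; the partition count and replication factor here are assumptions, not details from the original post:

kafka-topics.sh --create --zookeeper <zookeeper server ip>:2181 --replication-factor 2 --partitions 1 --topic samsa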

I type in a message on the producer ("hi") and nothing happens for a while. Then I get this message:

ERROR Error when sending message to topic samsa with key: null, value: 2 bytes
with error: Failed to update metadata after 60000 ms.
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

On the consumer side I get this message, which repeats periodically:

WARN Fetching topic metadata with correlation id # for topics [Set(samsa)] from broker [BrokerEndPoint(<broker.id>,<producer's advertised.host.name>,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

After a while, the producer will then start rapidly throwing this error message with # increasing incrementally:

WARN Error while fetching metadata with correlation id # : {samsa=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Not sure where to go from here. Let me know if more details about my configuration files are needed.

Recommended Answer

This was a config problem.

In order to get it running, several changes to the config files had to happen:

In config/server.properties on each Kafka server (see the sketch after this list):

  • host.name: <Public IP>
  • advertised.host.name: <Public IP>
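
A minimal sketch of the relevant part of config/server.properties after these changes, assuming broker.id 0, the default port 9092, and the Zookeeper address used above; the second broker would use its own public IP and a different broker.id:

# config/server.properties -- illustrative values only
broker.id=0
port=9092
host.name=<this broker's public IP>
advertised.host.name=<this broker's public IP>
zookeeper.connect=<zookeeper server ip>:2181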

In config/producer.properties on each Kafka server (see the sketch below):

  • metadata.broker.list: <producer server's advertised.host.name>:<producer server's port>,<consumer server's advertised.host.name>:<consumer server's port>
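
A minimal sketch of config/producer.properties under the same assumptions, with both brokers advertising their public IPs and listening on 9092:

# config/producer.properties -- illustrative values only
metadata.broker.list=<kafka server 1 public IP>:9092,<kafka server 2 public IP>:9092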

In /etc/hosts on each Kafka server, change 127.0.0.1 localhost localhost.localdomain to:

<Public IP>  localhost localhost.localdomain
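
After restarting the brokers, one way to check that the advertised host was registered correctly is to read the broker's registration node in Zookeeper; this assumes broker.id 0, and the exact JSON layout varies by Kafka version:

zookeeper-shell.sh <zookeeper server ip>:2181 get /brokers/ids/0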

