Apache Kafka: Failed to Update Metadata/java.nio.channels.ClosedChannelException


Problem Description

I'm just getting started with Apache Kafka/Zookeeper and have been running into issues trying to set up a cluster on AWS. Currently I have three servers:

One running Zookeeper and two running Kafka.

I can start the Kafka servers without issue and can create topics on both of them. However, the trouble comes when I try to start a producer on one machine and a consumer on the other:

On the Kafka producer:

kafka-console-producer.sh --broker-list <kafka server 1 aws public dns>:9092,<kafka server 2 aws public dns>:9092 --topic samsa

On the Kafka consumer:

kafka-console-consumer.sh --zookeeper <zookeeper server ip>:2181 --topic samsa

I type in a message on the producer ("hi") and nothing happens for a while. Then I get this message:

ERROR Error when sending message to topic samsa with key: null, value: 2 bytes
with error: Failed to update metadata after 60000 ms.
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

On the consumer side I get this message, which repeats periodically:

WARN Fetching topic metadata with correlation id # for topics [Set(samsa)] from broker [BrokerEndPoint(<broker.id>,<producer's advertised.host.name>,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

After a while, the producer will then start rapidly throwing this error message with # increasing incrementally:

WARN Error while fetching metadata with correlation id # : {samsa=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Not sure where to go from here. Let me know if more details about my configuration files are needed.

Recommended Answer

This was a configuration problem.

In order to get it running several changes to config files had to happen:

In config/server.properties on each Kafka server:

  • host.name: <Public IP>
  • advertised.host.name: <AWS Public DNS Address>
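Concretely, the two changes above might look like this in config/server.properties; the IP and DNS name below are placeholders, not values from the original question (and note these are the legacy property names used by older Kafka releases):

```properties
# config/server.properties on each broker -- placeholder values,
# substitute your own instance's addresses.

# Interface the broker binds to: the instance's public IP
host.name=54.12.34.56

# Address the broker hands out to clients in metadata responses;
# must be reachable from the client machines (AWS public DNS here)
advertised.host.name=ec2-54-12-34-56.compute-1.amazonaws.com
```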

In config/producer.properties on each Kafka server:

  • metadata.broker.list: <Producer Server advertised.host.name>:<Producer Server port>,<Consumer Server advertised.host.name>:<Consumer Server port>
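Filled in with placeholder hostnames (again, not the asker's real values), that entry might look like:

```properties
# config/producer.properties -- placeholder hostnames and the default port 9092
metadata.broker.list=ec2-54-12-34-56.compute-1.amazonaws.com:9092,ec2-54-98-76-54.compute-1.amazonaws.com:9092
```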

In /etc/hosts on each Kafka server, change 127.0.0.1 localhost localhost.localdomain to:

<Public IP>  localhost localhost.localdomain
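The /etc/hosts change matters because a broker whose advertised name resolves to a loopback address hands remote clients an endpoint they cannot reach, which surfaces as the ClosedChannelException above. As a quick sanity check (my addition, not part of the original answer), a small Python sketch can test whether a given hostname resolves to loopback:

```python
import ipaddress
import socket

def resolves_to_loopback(hostname: str) -> bool:
    """Return True if `hostname` resolves to a loopback address.

    If a Kafka broker's advertised hostname resolves to loopback,
    remote producers/consumers receive an unreachable endpoint in
    metadata responses.
    """
    ip = socket.gethostbyname(hostname)
    return ipaddress.ip_address(ip).is_loopback

if __name__ == "__main__":
    # On an unmodified system, "localhost" resolves to 127.0.0.1
    print(resolves_to_loopback("localhost"))
```

Run this with each broker's advertised.host.name as the argument; a True result on a remote client machine points at exactly this kind of resolution problem.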
