Confluent kafka-rest ERROR Server died unexpectedly: At least one of bootstrap.servers or zookeeper.connect needs to be configured


Problem description

I am running Kafka via Confluent platform. I have followed the steps as per documented here, https://docs.confluent.io/2.0.0/quickstart.html#quickstart

Start ZooKeeper:

$ sudo ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties

Start Kafka:

$ sudo ./bin/kafka-server-start ./etc/kafka/server.properties

Start the Schema Registry:

$ sudo ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties

Everything works fine so far.

Next I want to run REST proxy commands, as per documented here, https://docs.confluent.io/2.0.0/kafka-rest/docs/intro.html#quickstart

$ sudo bin/kafka-rest-start

But this command fails with the following error: (ERROR Server died unexpectedly: (io.confluent.kafkarest.KafkaRestMain:63) java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured).


Everything else is running fine. I don't understand why I am getting this error; could you please help solve this?

ESDGH-C02K648W:confluent-4.0.0 user$ sudo bin/kafka-rest-start
[2018-01-09 14:44:06,922] INFO KafkaRestConfig values: 
    metric.reporters = []
    client.security.protocol = PLAINTEXT
    bootstrap.servers = 
    response.mediatype.default = application/vnd.kafka.v1+json
    authentication.realm = 
    ssl.keystore.type = JKS
    metrics.jmx.prefix = kafka.rest
    ssl.truststore.password = [hidden]
    id = 
    host.name = 
    consumer.request.max.bytes = 67108864
    client.ssl.truststore.location = 
    ssl.endpoint.identification.algorithm = 
    compression.enable = false
    client.zk.session.timeout.ms = 30000
    client.ssl.keystore.type = JKS
    client.ssl.cipher.suites = 
    client.ssl.keymanager.algorithm = SunX509
    client.ssl.protocol = TLS
    response.mediatype.preferred = [application/vnd.kafka.v1+json, application/vnd.kafka+json, application/json]
    client.sasl.kerberos.ticket.renew.window.factor = 0.8
    ssl.truststore.type = JKS
    consumer.iterator.backoff.ms = 50
    access.control.allow.origin = 
    ssl.truststore.location = 
    ssl.keystore.password = [hidden]
    zookeeper.connect = 
    port = 8082
    client.ssl.keystore.password = [hidden]
    client.ssl.provider = 
    client.init.timeout.ms = 60000
    simpleconsumer.pool.size.max = 25
    simpleconsumer.pool.timeout.ms = 1000
    ssl.client.auth = false
    consumer.iterator.timeout.ms = 1
    client.sasl.kerberos.service.name = 
    ssl.trustmanager.algorithm = 
    authentication.method = NONE
    schema.registry.url = http://localhost:8081
    client.ssl.truststore.type = JKS
    request.logger.name = io.confluent.rest-utils.requests
    ssl.key.password = [hidden]
    client.sasl.kerberos.ticket.renew.jitter = 0.05
    client.ssl.endpoint.identification.algorithm = 
    authentication.roles = [*]
    client.ssl.trustmanager.algorithm = PKIX
    metrics.num.samples = 2
    consumer.threads = 1
    ssl.protocol = TLS
    client.ssl.keystore.location = 
    debug = false
    listeners = []
    ssl.provider = 
    ssl.enabled.protocols = []
    client.sasl.kerberos.min.time.before.relogin = 60000
    producer.threads = 5
    shutdown.graceful.ms = 1000
    ssl.keystore.location = 
    consumer.request.timeout.ms = 1000
    ssl.cipher.suites = []
    client.timeout.ms = 500
    consumer.instance.timeout.ms = 300000
    client.sasl.kerberos.kinit.cmd = /usr/bin/kinit
    client.ssl.key.password = [hidden]
    access.control.allow.methods = 
    ssl.keymanager.algorithm = 
    metrics.sample.window.ms = 30000
    client.ssl.truststore.password = [hidden]
    client.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
    kafka.rest.resource.extension.class = 
    client.sasl.mechanism = GSSAPI
 (io.confluent.kafkarest.KafkaRestConfig:175)
[2018-01-09 14:44:06,954] INFO Logging initialized @402ms (org.eclipse.jetty.util.log:186)
[2018-01-09 14:44:07,154] ERROR Server died unexpectedly:  (io.confluent.kafkarest.KafkaRestMain:63)
java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured
    at io.confluent.kafkarest.KafkaRestApplication.setupInjectedResources(KafkaRestApplication.java:104)
    at io.confluent.kafkarest.KafkaRestApplication.setupResources(KafkaRestApplication.java:83)
    at io.confluent.kafkarest.KafkaRestApplication.setupResources(KafkaRestApplication.java:45)
    at io.confluent.rest.Application.createServer(Application.java:157)
    at io.confluent.rest.Application.start(Application.java:495)
    at io.confluent.kafkarest.KafkaRestMain.main(KafkaRestMain.java:56)
ESDGH-C02K648W:confluent-4.0.0 user$ 


Recommended answer

The kafka-rest-start script takes a properties file as an argument. You must pass ./etc/kafka-rest/kafka-rest.properties on the command line:

$ bin/kafka-rest-start ./etc/kafka-rest/kafka-rest.properties
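Without that argument, every setting stays at its default, which is why the log above shows empty values for both bootstrap.servers and zookeeper.connect. For reference, a minimal kafka-rest.properties might look like the following sketch; the host/port values are assumptions based on the single-node quickstart defaults (Kafka broker on localhost:9092, ZooKeeper on localhost:2181, Schema Registry on localhost:8081) and should be adjusted to match your environment:

```properties
# Port the REST proxy itself listens on (8082 is the default shown in the log above)
port=8082

# At least one of the next two settings must be present.
# bootstrap.servers points directly at the Kafka broker(s):
bootstrap.servers=localhost:9092

# Alternatively (or additionally), point at ZooKeeper:
# zookeeper.connect=localhost:2181

# Needed for Avro produce/consume through the Schema Registry
schema.registry.url=http://localhost:8081
```

Once the proxy starts cleanly, a quick sanity check is to list topics with `curl http://localhost:8082/topics`, which should return a JSON array instead of a connection error.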
