Kafka to hdfs3 sink Missing required configuration "confluent.topic.bootstrap.servers" which has no default value


Problem description

My HDFS was installed via Ambari (HDP). I'm currently trying to load Kafka topics into an HDFS sink. Kafka and HDFS are installed on the same machine, x.x.x.x. I didn't change much from the default settings, except some ports according to my needs.

This is how I run Kafka Connect:

/usr/hdp/3.1.4.0-315/kafka/bin/connect-standalone.sh /etc/kafka/connect-standalone.properties /etc/kafka-connect-hdfs/quickstart-hdfs.properties

Inside connect-standalone.properties:

bootstrap.servers=x.x.x.x:6667
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000

Inside quickstart-hdfs.properties:

name=hdfs-sink
#connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=test12
hdfs.url=hdfs://x.x.x.x:9000
flush.size=3

Here are the results I get when executing it:

[2020-06-23 03:26:00,918] INFO Started o.e.j.s.ServletContextHandler@71d9cb05{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:855)
[2020-06-23 03:26:00,928] INFO Started http_8083@329a1243{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:292)
[2020-06-23 03:26:00,928] INFO Started @10495ms (org.eclipse.jetty.server.Server:410)
[2020-06-23 03:26:00,928] INFO Advertised URI: http://x.x.x.x:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:267)
[2020-06-23 03:26:00,928] INFO REST server listening at http://x.x.x.x:8083/, advertising URL http://x.x.x.x:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:217)
[2020-06-23 03:26:00,928] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:55)
[2020-06-23 03:26:00,959] ERROR Failed to create job for quickstart-hdfs.properties (org.apache.kafka.connect.cli.ConnectStandalone:102)
[2020-06-23 03:26:00,960] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
        at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
        at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:110)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
        at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:415)
        at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:189)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
[2020-06-23 03:26:00,961] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2020-06-23 03:26:00,961] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:223)
[2020-06-23 03:26:00,964] INFO Stopped http_8083@329a1243{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:341)
[2020-06-23 03:26:00,965] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:167)
[2020-06-23 03:26:00,972] INFO Stopped o.e.j.s.ServletContextHandler@71d9cb05{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:1045)
[2020-06-23 03:26:00,974] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:241)
[2020-06-23 03:26:00,974] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:95)
[2020-06-23 03:26:00,974] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:184)
[2020-06-23 03:26:00,974] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:67)
[2020-06-23 03:26:00,975] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:205)
[2020-06-23 03:26:00,975] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:112)
[2020-06-23 03:26:00,975] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:70)

I'm really new to the Kafka and HDFS environment. Any suggestion and help will be much appreciated. Thank you.

Edit: I've added the following to my connect-standalone.properties:

bootstrap.servers=x.x.x.x:6667
confluent.license=
confluent.topic.bootstrap.server=x.x.x.x:6667
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000

Nothing changed; it still shows the same error in the log.

quickstart-hdfs.properties

name=hdfs-sink
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=test12
hdfs.url=hdfs://x.x.x.x:8020
flush.size=3
confluent.license=
confluent.topic.bootstrap.servers=x.x.x.x:6667

connect-standalone.properties

bootstrap.servers=x.x.x.x:6667
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java,/usr/share/confluent-hub-components
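The stack trace above mentions that the same error list is available from the worker's `/{connectorType}/config/validate` REST endpoint. As a sketch (assuming the worker's REST API is on the default `localhost:8083`; `properties_to_dict` is a hypothetical helper, not part of Kafka Connect), a connector config can be validated before submitting it:

```python
import json
from urllib import request

def properties_to_dict(path):
    """Read simple key=value lines from a .properties file,
    skipping blank lines and # comments."""
    props = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

def validate(props, worker="http://localhost:8083"):
    """PUT the config to /connector-plugins/<class>/config/validate
    and return only the fields the worker reports errors for."""
    cls = props["connector.class"]
    req = request.Request(
        f"{worker}/connector-plugins/{cls}/config/validate",
        data=json.dumps(props).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    return {
        c["value"]["name"]: c["value"]["errors"]
        for c in result["configs"]
        if c["value"]["errors"]
    }

# Usage (requires a running worker):
#   errors = validate(properties_to_dict("quickstart-hdfs.properties"))
```

With the config from the question, the returned dict would include an entry for the missing `confluent.topic.bootstrap.servers` field.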

Recommended answer

The error says:

Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.

The problem is that you've taken the config for the HDFS Sink connector and changed the connector class to a different one (HDFS 3 Sink), which has different configuration requirements.

You can follow the quickstart for the HDFS 3 Sink connector, or fix your existing configuration by adding:

confluent.topic.bootstrap.servers=10.64.2.236:6667
confluent.topic.replication.factor=1

Note: in your example you missed the s from confluent.topic.bootstrap.servers, which is why it didn't work.
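A typo like this (`server` vs `servers`) is easy to catch mechanically. Below is a small sketch (not part of Kafka Connect; the required-key list is an assumption based on the HDFS 3 Sink quickstart) that parses a properties file and pairs each missing required key with the closest key actually present:

```python
import difflib

# Assumed required keys for the HDFS 3 Sink connector.
REQUIRED = {"confluent.topic.bootstrap.servers", "hdfs.url", "flush.size", "topics"}

def parse_properties(text):
    """Parse simple key=value lines, ignoring blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def check_required(props, required=REQUIRED):
    """Map each missing required key to the closest key present (or None)."""
    missing = {}
    for key in required - props.keys():
        close = difflib.get_close_matches(key, props, n=1)
        missing[key] = close[0] if close else None
    return missing

config = """
name=hdfs-sink
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
topics=test12
hdfs.url=hdfs://x.x.x.x:8020
flush.size=3
confluent.topic.bootstrap.server=x.x.x.x:6667
"""
# Flags confluent.topic.bootstrap.servers as missing, paired with
# the near-miss confluent.topic.bootstrap.server as the likely typo.
print(check_required(parse_properties(config)))
```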
