Not able to connect to Kafka brokers


Problem description


I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on-prem k8s cluster. I'm trying to expose it using a TCP controller with nginx.


My TCP nginx ConfigMap looks like:

data:
  "<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092


And I've made the corresponding entries in my nginx ingress controller:

  - name: <zookeper-tcp-port>-tcp
    port: <zookeper-tcp-port>
    protocol: TCP
    targetPort: <zookeper-tcp-port>-tcp
  - name: <kafka-tcp-port>-tcp
    port: <kafka-tcp-port>
    protocol: TCP
    targetPort: <kafka-tcp-port>-tcp
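
For concreteness, the ConfigMap above would look something like this with real values filled in (hedged sketch: the external ports 2181/9092, the `tcp-services` name, the `ingress-nginx` namespace, and the `default` workload namespace are assumptions, substitute your own):

```yaml
# Hedged example of an ingress-nginx TCP services ConfigMap.
# Keys are the externally exposed ports; values are <namespace>/<service>:<port>.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "2181": default/cp-zookeeper:2181
  "9092": default/cp-kafka:9092
```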


Now I'm trying to connect to my kafka instance. When I just try to connect to the IP and port using kafka tools, I get the error message:

Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]


When I enter what I assume are the correct broker addresses (I've tried them all...), I get a timeout. There are no logs coming from the nginx controller except:

[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:14 +0000] TCP 200 0 0 0.001

And from the pod kafka-zookeeper-0:

[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port>  (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)


Though I'm not sure these have anything to do with it?


Any ideas on what I'm doing wrong? Thanks in advance.

Answer

TL;DR:

  • Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
  • Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.
  • Set bootstrap-server on your kafka tools to <Cluster_External_IP>:31090
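
The first bullet corresponds to a values change along these lines (a hedged sketch; the `firstListenerPort` key and its default are assumptions inferred from the 31090 port referenced above, check your chart version's `cp-kafka/values.yaml`):

```yaml
# cp-kafka/values.yaml (sketch) -- expose each broker via a NodePort.
# Broker i is then reachable on firstListenerPort + i (31090, 31091, ...).
nodeport:
  enabled: true            # was false by default
  firstListenerPort: 31090
```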

Explanation:


The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints. These Endpoints are then used to generate instance-specific DNS records in the form of: <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local


It creates a DNS name for each pod, e.g.:

[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
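
With the NodePorts from the TL;DR enabled, a quick external connectivity check might look like this (hedged sketch; `<cluster-external-ip>` and the topic name `test` are placeholders):

```shell
# Assumes nodeport.enabled=true, so broker 0 is reachable on NodePort 31090.
kafka-console-consumer --bootstrap-server <cluster-external-ip>:31090 \
    --topic test --from-beginning
```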
