Not able to connect to kafka brokers
Problem Description
I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on-prem k8s cluster. I'm trying to expose it using a TCP controller with nginx.
My TCP nginx ConfigMap looks like:
data:
"<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
"<kafka-tcp-port>": <namespace>/cp-kafka:9092
And I've made the corresponding entry in my nginx ingress controller:
- name: <zookeper-tcp-port>-tcp
port: <zookeper-tcp-port>
protocol: TCP
targetPort: <zookeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
port: <kafka-tcp-port>
protocol: TCP
targetPort: <kafka-tcp-port>-tcp
Now I'm trying to connect to my kafka instance. When I just try to connect to the IP and port using kafka tools, I get the error message:
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please provide bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
When I enter what I assume are the correct broker addresses (I've tried them all...), I get a timeout. There are no logs coming from the nginx controller except:
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:14 +0000] TCP 200 0 0 0.001
From the pod kafka-zookeeper-0 I'm getting loads of:
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Though I'm not sure these have anything to do with it?
Any ideas on what I'm doing wrong? Thanks in advance.
Recommended Answer
TL;DR:

- Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
- Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.
- Set bootstrap-server on your kafka tools to <Cluster_External_IP>:31090
Explanation:

The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints. These Endpoints are then used to generate instance-specific DNS records in the form of: <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
It creates a DNS name for each pod, e.g.:
[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
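The record pattern is mechanical, so it can be sketched as a tiny helper (the function name is mine, purely illustrative):

```python
def pod_dns_name(statefulset: str, ordinal: int, service: str, namespace: str) -> str:
    """Build the instance-specific DNS record a headless Service creates for a
    StatefulSet pod: <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local"""
    return f"{statefulset}-{ordinal}.{service}.{namespace}.svc.cluster.local"

# Matches the nslookup output above for broker 0:
print(pod_dns_name("my-confluent-cp-kafka", 0, "my-confluent-cp-kafka-headless", "default"))
# my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
```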
- This is what makes it possible for these services to connect to each other inside the cluster.
- The Nginx ConfigMap asks for: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>".
- I realized that you don't need to expose Zookeeper, since it's an internal service handled by the kafka brokers.
- I also realized that you are trying to expose cp-kafka:9092, which is the headless service, also only used internally, as I explained above.
- In order to get outside access you have to set the parameter nodeport.enabled to true, as stated here: External Access Parameters.
- It adds one service for each kafka-N pod during chart deployment.
- Then you change your ConfigMap to map to one of them:
I went through a lot of trial and error until I realized how it was supposed to work. Based on your TCP Nginx ConfigMap, I believe you faced the same issue.
data:
"31090": default/demo-cp-kafka-0-nodeport:31090
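The shape of each tcp-services entry is easy to get wrong, so here is a hypothetical helper that builds one (the function is mine, not part of ingress-nginx):

```python
def tcp_services_entry(expose_port: int, namespace: str, service: str, service_port: int) -> tuple:
    """Return a (key, value) pair for the ingress-nginx tcp-services ConfigMap,
    following the pattern "<PortToExpose>": "<Namespace>/<Service>:<InternallyExposedPort>"."""
    return str(expose_port), f"{namespace}/{service}:{service_port}"

print(tcp_services_entry(31090, "default", "demo-cp-kafka-0-nodeport", 31090))
# ('31090', 'default/demo-cp-kafka-0-nodeport:31090')
```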
Note that the service created has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0. This is how the service identifies the pod it is intended to connect to.
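Since each per-broker Service pins itself to exactly one pod via the statefulset.kubernetes.io/pod-name label (which Kubernetes sets automatically on StatefulSet pods), the selector can be sketched like this (illustrative helper, not part of the chart):

```python
def single_pod_selector(app: str, statefulset: str, ordinal: int) -> dict:
    """Selector that makes a Service target a single StatefulSet pod by name."""
    return {
        "app": app,
        # Kubernetes labels each StatefulSet pod with its own name:
        "statefulset.kubernetes.io/pod-name": f"{statefulset}-{ordinal}",
    }

print(single_pod_selector("cp-kafka", "demo-cp-kafka", 0))
```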
- Edit the nginx-ingress-controller:
- containerPort: 31090
hostPort: 31090
protocol: TCP
- Set your kafka tools to <Cluster_External_IP>:31090

Reproduction:

- Snippet edited in cp-kafka/values.yaml:
nodeport:
enabled: true
servicePort: 19092
firstListenerPort: 31090
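With nodeport.enabled the chart gives each broker its own NodePort, counting up from firstListenerPort (broker N gets firstListenerPort + N, as the service listing below shows). A sketch of that numbering, assuming that scheme:

```python
def broker_nodeports(first_listener_port: int, replicas: int) -> list:
    """NodePort for broker N is firstListenerPort + N, one per kafka-N pod."""
    return [first_listener_port + n for n in range(replicas)]

print(broker_nodeports(31090, 3))  # [31090, 31091, 31092]
```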
- Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
- Create the TCP ConfigMap:
$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: kube-system
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
- Edit the Nginx Ingress Controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system
$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
hostPort: 31090
protocol: TCP
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
- My ingress is on IP 35.226.189.123, now let's try to connect from outside the cluster. For that I'll connect to another VM where I have a minikube, so I can use the kafka-client pod to test:
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-client 1/1 Running 0 17h
user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
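If the consumer times out instead, it helps to first verify plain TCP reachability of the exposed port before blaming Kafka itself; a minimal sketch (host and port are the example values from above):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. is_reachable("35.226.189.123", 31090) against the ingress above
```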
As you can see, I was able to access kafka from outside the cluster.
- In case you also need external access to Zookeeper, I'll leave a service template for you:
zookeeper-external-0.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: cp-zookeeper
pod: demo-cp-zookeeper-0
name: demo-cp-zookeeper-0-nodeport
namespace: default
spec:
externalTrafficPolicy: Cluster
ports:
- name: external-broker
nodePort: 31181
port: 12181
protocol: TCP
targetPort: 31181
selector:
app: cp-zookeeper
statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
sessionAffinity: None
type: NodePort
- It will create a service for it:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
- Patch your ConfigMap:
data:
"31090": default/demo-cp-kafka-0-nodeport:31090
"31181": default/demo-cp-zookeeper-0-nodeport:31181
- Add the Ingress rule:
ports:
- containerPort: 31181
hostPort: 31181
protocol: TCP
- Test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
If you have any doubts, let me know in the comments!