Not able to connect to kafka brokers
Question
I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on-prem k8s cluster. I'm trying to expose it using a TCP controller with nginx.
My TCP nginx configmap looks like
data:
"<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
"<kafka-tcp-port>": <namespace>/cp-kafka:9092
And I've made the corresponding entry in my nginx ingress controller
- name: <zookeper-tcp-port>-tcp
port: <zookeper-tcp-port>
protocol: TCP
targetPort: <zookeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
port: <kafka-tcp-port>
protocol: TCP
targetPort: <kafka-tcp-port>-tcp
Now I'm trying to connect to my kafka instance. When I just try to connect to the IP and port using kafka tools, I get the error message
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please provide bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
When I enter what I assume are the correct broker addresses (I've tried them all...), I get a timeout. There are no logs coming from the nginx controller except
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:14 +0000]TCP200000.001
These are the logs I get from the pod kafka-zookeeper-0:
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Though I'm not sure these have anything to do with it?
Any ideas on what I'm doing wrong? Thanks in advance.
Answer
TL;DR:
- Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
- Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.
- Set bootstrap-server on your kafka tools to <Cluster_External_IP>:31090
Explanation:
The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints. These Endpoints are then used to generate instance-specific DNS records in the form of: <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
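The naming scheme can be sketched as a tiny helper (a hedged illustration only, not part of the chart; the release, service, and namespace names are assumptions mirroring the nslookup example below):

```python
# Sketch: how the headless service's Endpoints map to per-pod DNS records.
# The statefulset/service/namespace names here are illustrative assumptions.

def pod_dns_records(statefulset, service, namespace, replicas):
    """Build <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local names."""
    return [
        f"{statefulset}-{i}.{service}.{namespace}.svc.cluster.local"
        for i in range(replicas)
    ]

for record in pod_dns_records("my-confluent-cp-kafka",
                              "my-confluent-cp-kafka-headless",
                              "default", 3):
    print(record)
```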
It creates a DNS name for each pod, e.g.:
[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
- This is what makes those services connect to each other inside the cluster.
- The Nginx ConfigMap asks for: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>".
- I realized that you don't need to expose the Zookeeper, since it's an internal service and handled by kafka brokers.
- I also realized that you are trying to expose cp-kafka:9092, which is the headless service, also only used internally, as I explained above.
- In order to get outside access you have to set the parameter nodeport.enabled to true, as stated here: External Access Parameters.
- It adds one service to each kafka-N pod during chart deployment.
- Then you change your configmap to map to one of them:
I've gone through a lot of trial and error until I realized how it was supposed to be working. Based on your TCP Nginx Configmap, I believe you faced the same issue.
data:
"31090": default/demo-cp-kafka-0-nodeport:31090
Note that the service created has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0; this is how the service identifies the pod it is intended to connect to.
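For reference, the nodeport service generated by the chart should look roughly like this (a sketch reconstructed from the selector above and the zookeeper service model further down; exact field values are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-cp-kafka-0-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
    - name: external-broker
      nodePort: 31090
      port: 19092
      protocol: TCP
      targetPort: 31090
  selector:
    app: cp-kafka
    statefulset.kubernetes.io/pod-name: demo-cp-kafka-0
```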
- Edit the nginx-ingress-controller:
- containerPort: 31090
hostPort: 31090
protocol: TCP
- Set your kafka tools to <Cluster_External_IP>:31090
Reproduction:
- Snippet edited in cp-kafka/values.yaml:
nodeport:
enabled: true
servicePort: 19092
firstListenerPort: 31090
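If it helps to see the pattern: with these values, each broker pod N is assumed to get nodePort firstListenerPort + N (a sketch inferred from the demo-cp-kafka-N-nodeport services listed after deployment, not the chart's actual template code):

```python
# Sketch of the per-broker NodePort numbering implied by the values above.
# Assumption: each kafka-N pod gets nodePort = firstListenerPort + N.

def broker_node_ports(first_listener_port, replicas):
    return {f"kafka-{i}": first_listener_port + i for i in range(replicas)}

print(broker_node_ports(31090, 3))
```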
- Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
- Create the TCP configmap:
$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: kube-system
data:
  "31090": "default/demo-cp-kafka-0-nodeport:31090"
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
- Edit the nginx ingress controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system
$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
hostPort: 31090
protocol: TCP
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
- My ingress is on IP 35.226.189.123; now let's try to connect from outside the cluster. For that I'll connect to another VM where I have a minikube, so I can use the kafka-client pod to test:
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-client 1/1 Running 0 17h
user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
As you can see, I was able to access kafka from outside the cluster.
- In case you also need external access to Zookeeper, I'll leave the service model for you:
zookeeper-external-0.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: cp-zookeeper
pod: demo-cp-zookeeper-0
name: demo-cp-zookeeper-0-nodeport
namespace: default
spec:
externalTrafficPolicy: Cluster
ports:
- name: external-broker
nodePort: 31181
port: 12181
protocol: TCP
targetPort: 31181
selector:
app: cp-zookeeper
statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
sessionAffinity: None
type: NodePort
- It will create the service for it:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
- Patch your configmap:
data:
"31090": default/demo-cp-kafka-0-nodeport:31090
"31181": default/demo-cp-zookeeper-0-nodeport:31181
- Add the ingress rule:
ports:
- containerPort: 31181
hostPort: 31181
protocol: TCP
- Then test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
If you have any doubts, let me know in the comments!