Kafka Producer deployed on Kubernetes not able to produce to Kafka cluster running on local machine
Question
I have a Kafka cluster running on my local machine with default settings, outside of my minikube setup. I have created a producer in one of my web services and deployed it on minikube.
For the producer to connect to Kafka I am using the 10.0.2.2 IP, which I also use to connect to Cassandra and DGraph outside of minikube; for those it works fine.
However, the Kafka producer is not working. It does not even throw an error such as Broker may not be available or any other error while sending data, yet I am not receiving anything on the consumer side.
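One thing worth ruling out first is basic network reachability: the Kafka producer sends asynchronously, so a connection problem can stay silent unless you register a send callback or call flush() and inspect the result. As a minimal sketch (independent of Kafka's protocol-level handshake), a plain TCP probe can confirm whether the pod can reach the bootstrap address at all; the 10.0.2.2:9092 address below is the one from this question:

```python
import socket

def broker_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Note: this only proves the bootstrap address is reachable; the
    producer can still fail silently later if the broker's
    advertised.listeners points somewhere the pod cannot reach.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Address taken from this question's setup.
    print(broker_reachable("10.0.2.2", 9092))
```

If this returns True from inside the pod but messages still never arrive, the problem is usually on the Kafka protocol level (e.g. the advertised listener address), not plain connectivity.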
When I run this web service outside of Kubernetes, everything works.
Do you have any idea what might be wrong here?
Below is the Kubernetes yaml file that I am using.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: servicename
  labels:
    app: servicename
    metrics: kamon
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: servicename
        metrics: kamon
    spec:
      containers:
        - image: "image:app"
          imagePullPolicy: IfNotPresent
          name: servicename
          env:
            - name: CIRCUIT_BREAKER_MAX_FAILURES
              value: "10"
            - name: CIRCUIT_BREAKER_RESET_TIMEOUT
              value: 30s
            - name: CIRCUIT_BREAKER_CALL_TIMEOUT
              value: 30s
            - name: CONTACT_POINT_ONE
              value: "10.0.2.2"
            - name: DGRAPH_HOSTS
              value: "10.0.2.2"
            - name: DGRAPH_PORT
              value: "9080"
            - name: KAFKA_BOOTSTRAP_SERVERS
              value: "10.0.2.2:9092"
            - name: KAFKA_PRODUCER_NOTIFICATION_CLIENT_ID
              value: "notificationProducer"
            - name: KAFKA_NOTIFICATION_TOPIC
              value: "notification"
            - name: LAGOM_PERSISTENCE_READ_SIDE_OFFSET_TIMEOUT
              value: 5s
            - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_MIN
              value: 3s
            - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_MAX
              value: 30s
            - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_RANDOM_FACTOR
              value: "0.2"
            - name: LAGOM_PERSISTENCE_READ_SIDE_GLOBAL_PREPARE_TIMEOUT
              value: 30s
            - name: LAGOM_PERSISTENCE_READ_SIDE_RUN_ON_ROLE
              value: ""
            - name: LAGOM_PERSISTENCE_READ_SIDE_USE_DISPATCHER
              value: lagom.persistence.dispatcher
            - name: AKKA_TIMEOUT
              value: 30s
            - name: NUMBER_OF_DGRAPH_REPOSITORY_ACTORS
              value: "2"
            - name: DGRAPH_ACTOR_TIMEOUT_MILLIS
              value: "20000"
            - name: AKKA_ACTOR_PROVIDER
              value: "cluster"
            - name: AKKA_CLUSTER_SHUTDOWN_AFTER_UNSUCCESSFUL_JOIN_SEED_NODES
              value: 40s
            - name: AKKA_DISCOVERY_METHOD
              value: "kubernetes-api"
            - name: AKKA_IO_DNS_RESOLVER
              value: "async-dns"
            - name: AKKA_IO_DNS_ASYNC_DNS_PROVIDER_OBJECT
              value: "com.lightbend.rp.asyncdns.AsyncDnsProvider"
            - name: AKKA_IO_DNS_ASYNC_DNS_RESOLVE_SRV
              value: "true"
            - name: AKKA_IO_DNS_ASYNC_DNS_RESOLV_CONF
              value: "on"
            - name: AKKA_MANAGEMENT_HTTP_PORT
              value: "10002"
            - name: AKKA_MANAGEMENT_HTTP_BIND_HOSTNAME
              value: "0.0.0.0"
            - name: AKKA_MANAGEMENT_HTTP_BIND_PORT
              value: "10002"
            - name: AKKA_MANAGEMENT_CLUSTER_BOOTSTRAP_CONTACT_POINT_DISCOVERY_REQUIRED_CONTACT_POINT_NR
              value: "1"
            - name: AKKA_REMOTE_NETTY_TCP_PORT
              value: "10001"
            - name: AKKA_REMOTE_NETTY_TCP_BIND_HOSTNAME
              value: "0.0.0.0"
            - name: AKKA_REMOTE_NETTY_TCP_BIND_PORT
              value: "10001"
            - name: LAGOM_CLUSTER_EXIT_JVM_WHEN_SYSTEM_TERMINATED
              value: "on"
            - name: PLAY_SERVER_HTTP_ADDRESS
              value: "0.0.0.0"
            - name: PLAY_SERVER_HTTP_PORT
              value: "9000"
          ports:
            - containerPort: 9000
            - containerPort: 9095
            - containerPort: 10001
            - containerPort: 9092
              name: "akka-remote"
            - containerPort: 10002
              name: "akka-mgmt-http"
---
apiVersion: v1
kind: Service
metadata:
  name: servicename
  labels:
    app: servicename
spec:
  ports:
    - name: "http"
      port: 9000
      nodePort: 31001
      targetPort: 9000
    - name: "akka-remote"
      port: 10001
      protocol: TCP
      targetPort: 10001
    - name: "akka-mgmt-http"
      port: 10002
      protocol: TCP
      targetPort: 10002
  selector:
    app: servicename
  type: NodePort
Answer
"I am already connecting to Cassandra and Dgraph running on the same machine as Kafka"
Well, those services don't advertise their network address via Zookeeper.
"My Kafka cluster is outside of K8s. However, the producer is in K8s."
In order for services inside k8s to know Kafka's location, advertised.listeners needs to be set to an external IP or DNS address that all producer/consumer services in the k8s environment can resolve and reach; that is the address your services will actually connect to. For example PLAINTEXT://10.0.2.2:9092.
In other words, if you had not set up the listeners and Kafka was only listening on localhost, then even though the Kafka port is externally exposed, you might be able to reach one broker, but the address you get back as part of the protocol is not guaranteed to match your client's configuration. That is where the advertised listener address comes into play.
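Concretely, this is a change in the broker's server.properties. A minimal sketch, assuming the default PLAINTEXT security protocol and that 10.0.2.2 is the address minikube pods use to reach the host machine:

```
# Bind on all interfaces of the host so connections from
# minikube are accepted
listeners=PLAINTEXT://0.0.0.0:9092

# Address handed back to clients in metadata responses;
# it must be resolvable and reachable from inside minikube
advertised.listeners=PLAINTEXT://10.0.2.2:9092
```

After changing these properties, restart the broker; clients will then receive 10.0.2.2:9092 in metadata responses instead of localhost.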