Kafka Producer deployed on Kubernetes not able to produce to Kafka cluster running on local machine


Problem description

I have a Kafka cluster running on my local machine with default settings, outside of my minikube setup. I have created a producer in one of my web services and deployed it on minikube.

For the producer to connect to Kafka I am using the 10.0.2.2 IP, which I also use to connect to Cassandra and Dgraph outside of minikube; for those it works fine.

However, the Kafka producer is not working. It does not even throw an error such as "Broker may not be available" while sending data, but I am not receiving anything on the consumer side.
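One plausible reason no error appears at all: most Kafka producer clients send asynchronously, so a delivery failure surfaces only on the returned future or in a callback, not at the call site of send(). A stdlib-only sketch of that pattern (this is illustrative, not the Kafka client API):

```python
# Stdlib-only sketch of asynchronous send semantics (not the Kafka API):
# submitting the record returns immediately, so a dead broker raises nothing
# at the call site; the error only surfaces when the future is awaited.
from concurrent.futures import ThreadPoolExecutor

def deliver(record: bytes) -> None:
    raise ConnectionError("broker unreachable")  # simulated dead broker

pool = ThreadPoolExecutor(max_workers=1)
future = pool.submit(deliver, b"payload")  # no exception raised here
try:
    future.result(timeout=5)               # the failure surfaces only now
except ConnectionError as exc:
    print("delivery failed:", exc)
```

If the application never blocks on the future (or never registers an error callback), the failure is simply never seen, which matches the symptom described.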

When I run this web service outside of Kubernetes, everything works.

Does anyone have an idea of what might be wrong here?

Below is the Kubernetes yaml file that I am using.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: servicename
  labels:
    app: servicename
    metrics: kamon
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: servicename
        metrics: kamon
    spec:
      containers:
      - image: "image:app"
        imagePullPolicy: IfNotPresent
        name: servicename
        env:
        - name: CIRCUIT_BREAKER_MAX_FAILURES
          value: "10"
        - name: CIRCUIT_BREAKER_RESET_TIMEOUT
          value: 30s
        - name: CIRCUIT_BREAKER_CALL_TIMEOUT
          value: 30s
        - name: CONTACT_POINT_ONE
          value: "10.0.2.2"
        - name: DGRAPH_HOSTS
          value: "10.0.2.2"
        - name: DGRAPH_PORT
          value: "9080"
        - name: KAFKA_BOOTSTRAP_SERVERS
          value: "10.0.2.2:9092"
        - name: KAFKA_PRODUCER_NOTIFICATION_CLIENT_ID
          value: "notificationProducer"
        - name: KAFKA_NOTIFICATION_TOPIC
          value: "notification"
        - name: LAGOM_PERSISTENCE_READ_SIDE_OFFSET_TIMEOUT
          value: 5s
        - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_MIN
          value: 3s
        - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_MAX
          value: 30s
        - name: LAGOM_PERSISTENCE_READ_SIDE_FAILURE_EXPONENTIAL_BACKOFF_RANDOM_FACTOR
          value: "0.2"
        - name: LAGOM_PERSISTENCE_READ_SIDE_GLOBAL_PREPARE_TIMEOUT
          value: 30s
        - name: LAGOM_PERSISTENCE_READ_SIDE_RUN_ON_ROLE
          value: ""
        - name: LAGOM_PERSISTENCE_READ_SIDE_USE_DISPATCHER
          value: lagom.persistence.dispatcher
        - name: AKKA_TIMEOUT
          value: 30s
        - name: NUMBER_OF_DGRAPH_REPOSITORY_ACTORS
          value: "2"
        - name: DGRAPH_ACTOR_TIMEOUT_MILLIS
          value: "20000"
        - name: AKKA_ACTOR_PROVIDER
          value: "cluster"
        - name: AKKA_CLUSTER_SHUTDOWN_AFTER_UNSUCCESSFUL_JOIN_SEED_NODES
          value: 40s
        - name: AKKA_DISCOVERY_METHOD
          value: "kubernetes-api"
        - name: AKKA_IO_DNS_RESOLVER
          value: "async-dns"
        - name: AKKA_IO_DNS_ASYNC_DNS_PROVIDER_OBJECT
          value: "com.lightbend.rp.asyncdns.AsyncDnsProvider"
        - name: AKKA_IO_DNS_ASYNC_DNS_RESOLVE_SRV
          value: "true"
        - name: AKKA_IO_DNS_ASYNC_DNS_RESOLV_CONF
          value: "on"
        - name: AKKA_MANAGEMENT_HTTP_PORT
          value: "10002"
        - name: AKKA_MANAGEMENT_HTTP_BIND_HOSTNAME
          value: "0.0.0.0"
        - name: AKKA_MANAGEMENT_HTTP_BIND_PORT
          value: "10002"
        - name: AKKA_MANAGEMENT_CLUSTER_BOOTSTRAP_CONTACT_POINT_DISCOVERY_REQUIRED_CONTACT_POINT_NR
          value: "1"
        - name: AKKA_REMOTE_NETTY_TCP_PORT
          value: "10001"
        - name: AKKA_REMOTE_NETTY_TCP_BIND_HOSTNAME
          value: "0.0.0.0"
        - name: AKKA_REMOTE_NETTY_TCP_BIND_PORT
          value: "10001"
        - name: LAGOM_CLUSTER_EXIT_JVM_WHEN_SYSTEM_TERMINATED
          value: "on"
        - name: PLAY_SERVER_HTTP_ADDRESS
          value: "0.0.0.0"
        - name: PLAY_SERVER_HTTP_PORT
          value: "9000"
        ports:
        - containerPort: 9000
        - containerPort: 9095
        - containerPort: 10001
        - containerPort: 9092
          name: "akka-remote"
        - containerPort: 10002
          name: "akka-mgmt-http"
---
apiVersion: v1
kind: Service
metadata:
  name: servicename
  labels:
    app: servicename
spec:
  ports:
    - name: "http"
      port: 9000
      nodePort: 31001
      targetPort: 9000
    - name: "akka-remote"
      port: 10001
      protocol: TCP
      targetPort: 10001
    - name: "akka-mgmt-http"
      port: 10002
      protocol: TCP
      targetPort: 10002
  selector:
    app: servicename
  type: NodePort

Answer

"I am already connecting to Cassandra and Dgraph running on the same machine as Kafka"

Well, those services don't advertise their network addresses via Zookeeper.

"My Kafka cluster is outside of Kubernetes. However, the producer is inside Kubernetes."

In order for clients running inside k8s to know Kafka's location, advertised.listeners needs to be set to an external IP or DNS address that all producer/consumer services in the k8s environment can reach; that is the address your services will actually connect to. For example PLAINTEXT://10.0.2.2:9092.
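As a hedged sketch, the relevant broker settings in server.properties might look like the following (assuming a single PLAINTEXT listener on port 9092, and that 10.0.2.2 is how the host machine is reachable from inside minikube):

```properties
# Bind on all interfaces so connections from the minikube VM are accepted
listeners=PLAINTEXT://0.0.0.0:9092
# Advertise an address that pods inside minikube can actually resolve and reach
advertised.listeners=PLAINTEXT://10.0.2.2:9092
```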

In other words, if you have not set advertised.listeners and the broker only advertises localhost, then the fact that the Kafka port is externally exposed only means you might be able to reach one broker. The address you get back as part of the protocol's metadata response is not guaranteed to match your client's configuration, and that is where the advertised listener address comes into play.
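The bootstrap-then-metadata behaviour described above can be illustrated with a short sketch (illustrative Python, not the real Kafka client API): the client dials the bootstrap address, but then connects to whatever host:port the broker advertises.

```python
# Illustrative sketch, not a real Kafka client: a client bootstraps via one
# address, then connects to whatever the broker *advertises* in its metadata.
def advertised_address(advertised_listeners: str, protocol: str = "PLAINTEXT") -> str:
    """Pick the host:port a client will use, given a string like
    'PLAINTEXT://10.0.2.2:9092' from the broker's advertised.listeners."""
    for listener in advertised_listeners.split(","):
        proto, _, hostport = listener.partition("://")
        if proto == protocol:
            return hostport
    raise ValueError(f"no {protocol} listener advertised")

# If the broker advertises localhost, a producer pod in minikube resolves that
# to its *own* localhost, and the produce request silently goes nowhere useful.
print(advertised_address("PLAINTEXT://localhost:9092"))  # localhost:9092
print(advertised_address("PLAINTEXT://10.0.2.2:9092"))   # 10.0.2.2:9092
```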
