Access Kafka in Remote Host by IP Address running with Docker-Compose and Spring Boot


Problem description

I have this docker-compose.yml in which I run Zookeeper, Kafka, Kafka Connect, and Kafdrop. When I run it locally, my Spring Boot application can connect and consume messages from some topics.

What I need is to run the same configuration on a Linux machine and be able to connect from the Spring Boot application in the same way.

When I run it remotely on the Linux machine everything seems to be running OK, but when I try to connect from the Spring Boot application I receive errors showing that something is wrong with the connection.

I will try to explain step by step; maybe someone can shed some light on this:

docker-compose.yml:

version: '3'

services:

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    networks: 
      - broker-kafka
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    networks: 
      - broker-kafka
    restart: unless-stopped
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: 
         INTERNAL://kafka:29092,
         EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_LISTENERS: 
         INTERNAL://kafka:29092,
         EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 
         INTERNAL:PLAINTEXT,
         EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_RETENTION_HOURS: 12
    
  connect:
    image: cdc:latest
    networks: 
      - broker-kafka
    depends_on:
      - zookeeper
      - kafka
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:29092
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: connect-1
      CONNECT_CONFIG_STORAGE_TOPIC: connect-1-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-1-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-1-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_PARTITIONS: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_PARTITIONS: 1
      CONNECT_REST_ADVERTISED_HOST_NAME: localhost
      
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    networks: 
      - broker-kafka
    depends_on:
      - kafka
    ports:
      - 19000:9000
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
      
networks: 
  broker-kafka:
    driver: bridge

What I need is to expose this machine's IP on my network so it can be reached by my Spring Boot application. Let's suppose this Linux machine has the IP 10.12.54.99. How can I make Kafka accessible at 10.12.54.99:9092?

Here is my application.properties:

spring.kafka.bootstrap-servers=10.12.54.99:9092

spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-commit-interval=100
spring.kafka.consumer.max-poll-records=10
spring.kafka.consumer.key-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.group-id=connect-sql-server
spring.kafka.consumer.auto-offset-reset=earliest

spring.kafka.listener.ack-mode=manual-immediate
spring.kafka.listener.poll-timeout=3000
spring.kafka.listener.concurrency=3

spring.kafka.properties.spring.deserializer.key.delegate.class=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.properties.spring.deserializer.value.delegate.class=org.apache.kafka.common.serialization.StringDeserializer

This is a consumer-only application (no producers are used here).

When I run the application, I get:

2020-12-07 10:59:40.361  WARN 58716 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-connect-sql-server-1, groupId=connect-sql-server] Connection to node -1 (/10.12.54.99:9092) could not be established. Broker may not be available.
2020-12-07 10:59:40.362  WARN 58716 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-connect-sql-server-1, groupId=connect-sql-server] Bootstrap broker 10.12.54.99:9092 (id: -1 rack: null) disconnected
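The log above shows the bootstrap connection itself failing; the answer below also hinges on a second, less obvious step: after bootstrapping, a Kafka client reconnects to whatever address the broker *advertises* in its metadata, so both the bootstrap address and the advertised address must be reachable from the client's machine. A toy sketch of that two-phase flow (plain Python illustration, not the real client library; `connect` and its return strings are made up for this example):

```python
# Toy model of a Kafka client's two connection phases (illustration only):
# phase 1 reaches the bootstrap address; phase 2 reconnects to the address
# the broker ADVERTISES in its metadata response.
def connect(bootstrap, advertised, reachable):
    """reachable: the set of addresses this client's machine can actually reach."""
    if bootstrap not in reachable:
        # corresponds to "Connection to node -1 ... could not be established"
        return "bootstrap unreachable"
    if advertised not in reachable:
        return "bootstrap ok, but advertised address unreachable: " + advertised
    return "connected"

# From a remote machine, only the host's public address is reachable:
remote = {"10.12.54.99:9092"}
print(connect("10.12.54.99:9092", "localhost:9092", remote))    # advertising localhost breaks remote clients
print(connect("10.12.54.99:9092", "10.12.54.99:9092", remote))  # advertising the public IP works
```

This is why fixing the firewall alone is not enough: the advertised listener must also be an address the remote client can resolve and reach.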

All ports are open in the firewall on the Linux machine.

Any enlightenment would be very much appreciated.

Recommended answer

You need to advertise your server's public IP for the brokers to be reachable remotely. If you don't want to hardcode the IP, you can use an .env file.

Do the following:

  1. Create a config.env file.

Add this line to config.env, substituting your host's IP:

DOCKER_HOST_IP=111.111.11.111

  2. Update your docker-compose.yml:

version: '3'

services:

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    networks: 
      - broker-kafka
    ports:
      - ${DOCKER_HOST_IP:-127.0.0.1}:2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    networks: 
      - broker-kafka
    restart: unless-stopped
    depends_on:
      - zookeeper
    ports:
      - ${DOCKER_HOST_IP:-127.0.0.1}:9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: 
         INTERNAL://kafka:29092,
         EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_LISTENERS: 
         INTERNAL://kafka:29092,
         EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 
         INTERNAL:PLAINTEXT,
         EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_RETENTION_HOURS: 12
    
  connect:
    image: cdc:latest
    networks: 
      - broker-kafka
    depends_on:
      - zookeeper
      - kafka
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:29092
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: connect-1
      CONNECT_CONFIG_STORAGE_TOPIC: connect-1-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-1-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-1-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_PARTITIONS: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_PARTITIONS: 1
      CONNECT_REST_ADVERTISED_HOST_NAME: localhost
      
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    networks: 
      - broker-kafka
    depends_on:
      - kafka
    ports:
      - 19000:9000
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
      
networks: 
  broker-kafka:
    driver: bridge

If DOCKER_HOST_IP is not set, the ports fall back to binding 127.0.0.1.
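The `${DOCKER_HOST_IP:-127.0.0.1}` placeholders follow shell-style parameter expansion, which docker-compose supports: the variable's value is used when it is set and non-empty, otherwise the default after `:-` applies. A small Python sketch of that substitution rule (the `resolve` helper is illustrative, not part of compose):

```python
import re

def resolve(template, env):
    """Resolve ${VAR:-default} the way docker-compose does: fall back to
    the default when VAR is unset or empty."""
    def sub(match):
        name, default = match.group(1), match.group(2)
        value = env.get(name, "")
        return value if value != "" else default
    return re.sub(r"\$\{(\w+):-([^}]*)\}", sub, template)

# without config.env, the port mapping binds the loopback address:
print(resolve("${DOCKER_HOST_IP:-127.0.0.1}:9092:9092", {}))
# with config.env loaded, the public IP is used instead:
print(resolve("${DOCKER_HOST_IP:-127.0.0.1}:9092:9092", {"DOCKER_HOST_IP": "111.111.11.111"}))
```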

  3. Run the following command:

sudo docker-compose -f path-to-docker-compose.yml --env-file path-to-config.env up -d --force-recreate
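Once the stack is up, it helps to confirm from the client's machine that TCP port 9092 is reachable at all before debugging Kafka-level settings. A minimal check using only the standard library (the `port_reachable` helper is a hypothetical convenience; host and port are taken from the question):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. run from the remote client's machine:
# port_reachable("10.12.54.99", 9092)
```

If this returns False, the problem is still at the network/firewall/port-binding layer; if it returns True but the client logs the "Bootstrap broker ... disconnected" warning, check the advertised listeners.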

