Setting up Kafka on Openshift with Strimzi

Problem description

I am trying to set up a kafka cluster on the Openshift platform using this guide: https://developers.redhat.com/blog/2018/10/29/how-to-run-kafka-on-openshift-the-enterprise-kubernetes-with-amq-streams/

I have my ZooKeeper and Kafka clusters running (screenshot omitted), and when running my application I use the route to the my-cluster-kafka-external bootstrap as the bootstrap-servers value. But when I try to send a message to Kafka I get this message:

21:32:40.548 [http-nio-8080-exec-1] ERROR o.s.k.s.LoggingProducerListener () - Exception thrown when sending a message with key='key' and payload='Event(id=null, number=30446C77213B40000004tgst15, itemId=, serialNumber=0,  locat...' to topic tag-topic:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

The topic was created successfully, and the application runs fine against a local Kafka on my computer. So what am I doing wrong? Why can't I reach Kafka and send messages?

Here is my kafka producer config in spring-kafka:

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;    

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();

        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "........kafka.EventSerializer");
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);

        return props;
    }


    @Bean
    public ProducerFactory<String, Event> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

I set the logging level to debug and found this:

23:59:27.412 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] DEBUG o.a.k.c.NetworkClient () - [Consumer clientId=consumer-1, groupId=id] Initialize connection to node my-cluster-kafka-bootstrap-kafka-test............... (id: -1 rack: null) for sending metadata request
23:59:27.412 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] DEBUG o.a.k.c.NetworkClient () - [Consumer clientId=consumer-1, groupId=id] Initiating connection to node my-cluster-kafka-bootstrap-kafka-test............ (id: -1 rack: null)
23:59:28.010 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] DEBUG o.a.k.c.n.Selector () - [Consumer clientId=consumer-1, groupId=id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
23:59:28.010 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] DEBUG o.a.k.c.NetworkClient () - [Consumer clientId=consumer-1, groupId=id] Completed connection to node -1. Fetching API versions.
23:59:28.010 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] DEBUG o.a.k.c.NetworkClient () - [Consumer clientId=consumer-1, groupId=id] Initiating API versions fetch from node -1.
23:59:28.510 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] DEBUG o.a.k.c.n.Selector () - [Consumer clientId=consumer-1, groupId=id] Connection with my-cluster-kafka-bootstrap-kafka-test........../52.215.40.40 disconnected
java.io.EOFException: null
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124) ~[kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93) ~[kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235) ~[kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196) ~[kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:547) ~[kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:483) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:412) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:258) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:230) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:221) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:153) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:228) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:205) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:284) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146) [kafka-clients-1.0.2.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1111) [kafka-clients-1.0.2.jar:?]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:700) [spring-kafka-2.1.10.RELEASE.jar:2.1.10.RELEASE]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514) [?:?]
    at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java) [?:?]
    at java.lang.Thread.run(Thread.java:844) [?:?]
23:59:28.510 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] DEBUG o.a.k.c.NetworkClient () - [Consumer clientId=consumer-1, groupId=id] Node -1 disconnected.
23:59:28.510 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] DEBUG o.a.k.c.NetworkClient () - [Consumer clientId=consumer-1, groupId=id] Give up sending metadata request since no node is available

Could this have something to do with the broker's connections.max.idle.ms property? Someone here had a similar problem.

I tried using kafka-console-producer by running this command:

bin\windows\kafka-console-producer --broker-list https://my-cluster-kafka-bootstrap-kafka-test.domain.com:443 --topic tag-topic --producer.config config/producer.properties

and with this configuration in the producer.properties:

compression.type=none
security.protocol=SSL
ssl.truststore.location=C:\\Tools\\kafka_2.12-2.2.0\\config\\store.jks
ssl.truststore.password=password
ssl.keystore.location=C:\\Tools\\kafka_2.12-2.2.0\\config\\store.jks
ssl.keystore.password=password
ssl.key.password=password

but I get a response saying that the connection was terminated while authenticating:

[2019-05-21 16:15:58,444] WARN [Producer clientId=console-producer] Connection to node 1 (my-cluster-kafka-1-kafka-test.domain.com/52.xxx.xx.40:443) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient)

Is it possible that the certificate from OpenShift is wrong in some way?

Answer

Access through routes is only possible over TLS, using the CA certificate generated by Strimzi, which you have to extract as described in the article. You then have to create a truststore, import that certificate into it, and provide the truststore to the client application. I don't see such a configuration in your producer.
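
As a rough sketch of what that looks like (assuming the cluster is named my-cluster and runs in the kafka-test project, so the secret follows Strimzi's <cluster-name>-cluster-ca-cert naming; the alias and password are just examples), the cluster CA certificate can be extracted and imported into a truststore like this:

# Extract the cluster CA certificate that Strimzi stores in a Secret
oc extract secret/my-cluster-cluster-ca-cert -n kafka-test --keys=ca.crt --to=- > ca.crt
# Import it into a truststore for the client
keytool -import -trustcacerts -alias strimzi-ca -file ca.crt -keystore truststore.jks -storepass password -noprompt

On the Spring side, the producer configuration shown in the question would then need the TLS properties added, roughly like this (a sketch: the truststore path and password are placeholders for your own values, and the CommonClientConfigs/SslConfigs constants come from kafka-clients):

    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SslConfigs;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();

        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "........kafka.EventSerializer");
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);

        // Trust the Strimzi cluster CA so the TLS handshake with the route succeeds
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "password");

        return props;
    }

The same three settings can also be supplied to the console producer via producer.properties, as attempted above; the truststore referenced there just has to contain the extracted Strimzi cluster CA certificate.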
