spring cloud stream kinesis Binder


Problem description

I am trying to implement a Spring Boot AWS Kinesis consumer that can be auto-scaled, so that new instances share the load (split the processing of shards) with the original instance.

What I have been able to do: using the well-defined README and the examples available in the Kinesis binder docs, I have been able to start up multiple consumers that actually divide the shards for processing by supplying these properties.

On the producer, I supply partitionCount: 2 via an application property. On the consumers, I supply both instanceIndex and instanceCount.

On consumer 1 I have instanceIndex=0 and instanceCount=2; on consumer 2 I have instanceIndex=1 and instanceCount=2.

This works fine, and I have two Spring Boot applications processing their specific shards. But in this case I have to keep a pre-configured properties file per application, available at load time, for them to split the load. And if I only start up the first consumer (non-auto-scaled), it only processes the shards specific to index 0, leaving the other shards unprocessed.
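A minimal sketch of the static-partitioning setup described above, assuming the standard Spring Cloud Stream common properties (the stream name is a placeholder, and the exact binding names are taken from the configuration in the question):

```yaml
# Consumer 1; a second instance would use instanceIndex: 1.
# Each instance needs its own copy of these values at startup,
# which is exactly the pre-configured-file limitation described above.
spring:
  cloud:
    stream:
      instanceIndex: 0   # this instance's position, 0-based
      instanceCount: 2   # total number of consumer instances
      bindings:
        input:
          destination: <stream-name>
          group: mygroup
          consumer:
            partitioned: true
```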

What I would like to do, but am not sure is possible, is to deploy a single consumer that processes all shards. If I then deploy another instance, I would like it to relieve the first consumer of some of the load. In other words, if I have 2 shards and one consumer, it processes both; if I then deploy another app, I would like the first consumer to now process only a single shard, leaving the second shard to the second consumer.

I have tried to do this by not specifying instanceIndex or instanceCount on the consumers and only supplying the group name, but that leaves the second consumer idle until the first is shut down. FYI, I have also created my own metadata and locking tables, preventing the binder from creating the default ones.

Configurations: Producer -----------------

originator: KinesisProducer
server:
  port: 8090

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: <stream-name>
          content-type: application/json
          producer:
            headerMode: none
            partitionKeyExpression: headers.type
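The question mentions supplying partitionCount: 2 on the producer, but that property does not appear in the YAML above. Assuming the standard Spring Cloud Stream producer properties, it would sit alongside partitionKeyExpression, for example:

```yaml
spring:
  cloud:
    stream:
      bindings:
        output:
          producer:
            partitionKeyExpression: headers.type
            partitionCount: 2   # number of partitions (shards) to spread records over
```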

Consumers -------------------------------------

originator: KinesisSink
server:
  port: 8091

spring:
  cloud:
    stream:
      kinesis:
        bindings:
          input:
            consumer:
              listenerMode: batch
              recordsLimit: 10
              shardIteratorType: TRIM_HORIZON
        binder:
          checkpoint:
            table: <checkpoint-table>
          locks:
            table: <locking-table>
      bindings:
        input:
          destination: <stream-name>
          content-type: application/json
          consumer:
            concurrency: 1
            listenerMode: batch
            useNativeDecoding: true
            recordsLimit: 10
            idleBetweenPolls: 250
            partitioned: true
          group: mygroup

Recommended answer

That's correct. That's how it works for now: if one consumer is there, it takes all the shards for processing. The second one will take action only if the first one somehow fails for at least one shard.

Proper Kafka-like rebalancing is on our roadmap. We don't have a solid vision for it yet, so an issue on the matter and subsequent contributions are welcome!
