Amazon-Kinesis: Put record to every shard

Question

I have an Amazon Kinesis stream, consisting of multiple shards. The number of shards, and therefore the number of consumers, is not a constant.

There is an infrequent type of event that I want broadcasted to every consumer on the stream.

Is there a way for a producer to broadcast a record, i.e. to discover the shards and put the record on each one?

Answer

You can do this! Kind of...

The trick is to use the "ExplicitHashKey" parameter.

This lets you set the hash key used for the record, and therefore lets you choose which shard your data goes to.

Then you can call

aws kinesis describe-stream --stream-name name-of-your-stream

to get info about each shard, including the hash key range each shard covers.
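
For example, here is a minimal sketch of the same lookup in Python with boto3, using the ListShards API (which returns the same hash key ranges as describe-stream). The helper name is made up, and the stream name is the placeholder from the command above:

import boto3

kinesis = boto3.client("kinesis")

def open_shard_starting_hash_keys(stream_name):
    # List every shard in the stream, following NextToken pagination
    # (when NextToken is passed, StreamName must be omitted).
    resp = kinesis.list_shards(StreamName=stream_name)
    shards = resp["Shards"]
    while "NextToken" in resp:
        resp = kinesis.list_shards(NextToken=resp["NextToken"])
        shards.extend(resp["Shards"])
    # Shards closed by a reshard carry an EndingSequenceNumber and no longer
    # accept new records, so only open shards are kept.
    return [
        shard["HashKeyRange"]["StartingHashKey"]
        for shard in shards
        if "EndingSequenceNumber" not in shard["SequenceNumberRange"]
    ]

print(open_shard_starting_hash_keys("name-of-your-stream"))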

Then you only need to send your data to each shard (with PutRecord or PutRecords) and you are all set.

Unfortunately, there is no "broadcast", but you can easily write a Lambda to send the same data to each shard using the ExplicitHashKey parameter.
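
As a rough sketch of what such a broadcast could look like (for example inside that Lambda), again with boto3: the function name and the sample payload below are made up, and error handling and the 500-record-per-call PutRecords limit are ignored. Each copy of the record is pinned to one shard by setting ExplicitHashKey to that shard's StartingHashKey:

import boto3

kinesis = boto3.client("kinesis")

def broadcast(stream_name, data):
    # Discover the open shards, as in the sketch above.
    resp = kinesis.list_shards(StreamName=stream_name)
    shards = resp["Shards"]
    while "NextToken" in resp:
        resp = kinesis.list_shards(NextToken=resp["NextToken"])
        shards.extend(resp["Shards"])
    open_shards = [
        s for s in shards
        if "EndingSequenceNumber" not in s["SequenceNumberRange"]
    ]
    # One PutRecords entry per open shard. PartitionKey is still required,
    # but ExplicitHashKey overrides the hash derived from it, so each copy
    # lands on the shard whose range contains that StartingHashKey.
    records = [
        {
            "Data": data,
            "PartitionKey": "broadcast",
            "ExplicitHashKey": s["HashKeyRange"]["StartingHashKey"],
        }
        for s in open_shards
    ]
    return kinesis.put_records(StreamName=stream_name, Records=records)

broadcast("name-of-your-stream", b'{"event": "example-broadcast"}')

Because each StartingHashKey falls inside exactly one open shard's hash key range, every open shard receives exactly one copy of the record (until the stream is resharded, at which point the shard list has to be refreshed).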

Another solution is to use separate streams and write out to each one.
