What is partition key in AWS Kinesis all about?


Question

I was reading about AWS Kinesis. In the following program, I write data into the stream named TestStream. I ran this piece of code 10 times, inserting 10 records into the stream.

var AWS = require('aws-sdk');                           // AWS SDK for JavaScript (v2)
var kinesis = new AWS.Kinesis({ region: 'us-east-1' }); // region is illustrative

var params = {
    Data: 'More Sample data into the test stream ...',
    PartitionKey: 'TestKey_1',
    StreamName: 'TestStream'
};

kinesis.putRecord(params, function(err, data) {
   if (err) console.log(err, err.stack); // an error occurred
   else     console.log(data);           // successful response
});

All the records were inserted successfully. What does the partition key really mean here? What is it doing in the background? I read its documentation but did not understand what it meant.

Answer

Partition keys only matter when you have multiple shards in a stream (but they're always required). Kinesis computes the MD5 hash of the partition key to decide which shard to store the record on (if you describe the stream, you'll see the hash range as part of each shard's description).

So why does this matter?

Each shard can only accept 1,000 records and/or 1 MB per second (see the PutRecord docs). If you write to a single shard faster than this rate, you'll get a ProvisionedThroughputExceededException.

With multiple shards, you scale this limit: 4 shards give you 4,000 records and/or 4 MB per second. Of course, there are caveats.

The biggest is that you must use different partition keys. If all of your records use the same partition key, then you're still writing to a single shard, because they'll all have the same hash value. How you solve this depends on your application: if you're writing from multiple processes, it might be sufficient to use the process ID, the server's IP address, or the hostname. If you're writing from a single process, you can either use information that's in the record (for example, a unique record ID) or generate a random string.

A second caveat is that the partition key counts against the total write size, and is stored in the stream. So while you could probably get good randomness by using some textual component of the record, you'd be wasting space. On the other hand, if you have some random textual component, you can calculate your own hash from it and then stringify that for the partition key.

Lastly, if you're using PutRecords (which you should be, if you're writing a lot of data), individual records in the request may be rejected while others are accepted. This happens because those records went to a shard that was already at its write limit, and you have to re-send them (after a delay).
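Handling that partial failure looks roughly like this. A sketch: the response object below is mock data shaped like a real PutRecords response, where each entry in `Records` has either a `SequenceNumber` (success) or an `ErrorCode` (failure).

```javascript
// Given the original request entries and a PutRecords response, pick out
// the entries that were rejected so they can be re-sent after a delay.
function failedEntries(requestEntries, response) {
  return requestEntries.filter((_, i) => response.Records[i].ErrorCode);
}

// Illustrative response: the second record hit a shard at its write limit.
const entries = [
  { Data: 'a', PartitionKey: 'k1' },
  { Data: 'b', PartitionKey: 'k2' }
];
const response = {
  FailedRecordCount: 1,
  Records: [
    { SequenceNumber: '49500000000000000000', ShardId: 'shardId-000000000000' },
    { ErrorCode: 'ProvisionedThroughputExceededException',
      ErrorMessage: 'Rate exceeded for shard ...' }
  ]
};

console.log(failedEntries(entries, response)); // only the rejected entry remains
```

Because the response preserves request order, index `i` lines each result up with its request entry; a real retry loop would back off and call PutRecords again with just these entries.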
