CosmosDB Change Feed Scaling


Problem Description

I have an Azure Function with a CosmosDB trigger that listens to a collection using the lease collection mechanism. This function is hosted on the Consumption plan.
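For context, the setup looks roughly like the following sketch (Azure Functions Node.js v4 programming model in TypeScript; the connection setting, database, and container names are placeholders, not my real ones):

```typescript
import { app, InvocationContext } from '@azure/functions';

// Handler invoked with each batch of changed documents from the change feed.
export async function onDocumentsChanged(
  documents: unknown[],
  context: InvocationContext
): Promise<void> {
  context.log(`Processing ${documents.length} changed document(s)`);
}

// Trigger declaration; all names below are illustrative placeholders.
app.cosmosDB('onDocumentsChanged', {
  connection: 'CosmosDBConnection',      // app setting holding the connection string
  databaseName: 'mydb',
  containerName: 'monitored',
  leaseContainerName: 'leases',          // the lease collection mentioned above
  createLeaseContainerIfNotExists: true,
  handler: onDocumentsChanged,
});
```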

I have noticed that under heavy load, updates reach my function with a greater and greater delay. After reading the documentation, I did not find a way to improve the scaling of my setup. Is there one?

Recommended Answer

Consumption plan instances should grow based on how far behind your Function is lagging. If you are on an App Service plan instead of the Consumption plan, you can scale the instances yourself.

That being said, the current unit of work is based on the partition key value ranges. This means that, similar to Event Hub, the parallel processing has a soft limit determined by your data distribution.

One way to detect this is to check your leases collection. If you see only one lease (disregarding items with .info or .lock in their ids), your current data distribution yields a single partition key value range, and only one instance can process it, no matter how many other instances get provisioned.
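If it helps, here is a minimal sketch of that check using the @azure/cosmos SDK; the database and container names are placeholders you would swap for your own:

```typescript
import { CosmosClient } from '@azure/cosmos';

async function countLeases(): Promise<void> {
  const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);
  const container = client.database('mydb').container('leases');

  // Read the ids of all documents in the lease container.
  const { resources } = await container.items
    .query('SELECT c.id FROM c')
    .fetchAll();

  // Disregard the bookkeeping items ending in .info or .lock;
  // what remains is one lease per partition key value range.
  const leases = resources.filter(
    (doc: { id: string }) =>
      !doc.id.endsWith('.info') && !doc.id.endsWith('.lock')
  );

  // This count is the ceiling on how many instances can process in parallel.
  console.log(`Lease documents: ${leases.length}`);
}

countLeases().catch(console.error);
```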

Logs can also show how scaling is behaving and how instances pick up the different leases when there are multiple.
