CosmosDB Change Feed Scaling

Problem Description

I have an Azure Function with a CosmosDB trigger that listens to a collection using the lease collection mechanism. The function is hosted on a Consumption plan.
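For reference, a minimal sketch of such a setup using the v1 Python programming model; the database, collection, and connection-setting names ("mydb", "items", "leases", "CosmosDBConnection") are placeholders, not taken from the question:

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "documents",
      "direction": "in",
      "connectionStringSetting": "CosmosDBConnection",
      "databaseName": "mydb",
      "collectionName": "items",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": true
    }
  ]
}
```

```python
# __init__.py -- called with each batch the trigger reads from the change feed.
import logging

import azure.functions as func


def main(documents: func.DocumentList) -> None:
    if documents:
        logging.info("Received %d changed document(s)", len(documents))
```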

I have noticed that under heavy load my function receives updates with a greater and greater delay. After reading the documentation, I did not find a way to improve the scaling of this setup. Is there one?

Recommended Answer

Consumption Plan instances should grow based on how far behind your Function is lagging, so scale-out there is automatic. If you are instead using an App Service Plan, you can scale the instances yourself.
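For the App Service Plan case, the instance count can be set manually, for example with the Azure CLI; the plan and resource group names below are hypothetical:

```bash
# Scale a hypothetical plan "my-plan" in resource group "my-rg" to 4 instances.
az appservice plan update --name my-plan --resource-group my-rg --number-of-workers 4
```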

That being said, the current unit of work is based on Partition Key value ranges. This means that, similar to Event Hubs, parallel processing has a soft limit determined by your data distribution.

One way to detect this is to check your leases collection. If you see only one lease (disregarding items with .info or .lock as their ids), your current data distribution yields a single partition key value range, and only one instance can be processing it (no matter how many other instances get provisioned).
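One way to run that check is to query the leases collection directly. A minimal sketch with the azure-cosmos Python SDK, assuming placeholder account, database, and collection names:

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint, key, and names -- substitute your own.
client = CosmosClient("https://myaccount.documents.azure.com:443/",
                      credential="<account-key>")
leases = client.get_database_client("mydb").get_container_client("leases")

# Count the real leases, skipping the .info / .lock bookkeeping items.
lease_ids = [
    item["id"]
    for item in leases.query_items(query="SELECT c.id FROM c",
                                   enable_cross_partition_query=True)
    if not item["id"].endswith((".info", ".lock"))
]

# Each lease corresponds to one partition key value range, so this is the
# upper bound on parallel processing.
print(f"{len(lease_ids)} lease(s) -> at most {len(lease_ids)} parallel instance(s)")
```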

Logs can also show how scaling is behaving and how instances pick up the different leases when there are multiple.
