Azure autoscale scale-in kills in-use instances


Problem description

I'm using the Azure Autoscale feature to process hundreds of files. The system scales up correctly to 8 instances, and each instance processes one file at a time.

The problem is with scaling in. Because the scale-in rules seem to be based on ALL instances, if I tell it to reduce the instance count back to 1 after the average CPU load drops below 25%, it will arbitrarily kill instances that are still processing data.

Is there a way to prevent it from shutting down individual instances that are still in use?

Recommended answer

Scale down will remove the highest-numbered instances first. For example, if you have WorkerRole_IN_0, WorkerRole_IN_1, ..., WorkerRole_IN_8 and then scale down by 1, Azure will remove WorkerRole_IN_8 first. Azure has no idea what your code is doing (i.e. whether it is still processing a file) or whether it has finished and is ready to shut down.

You have a few options:

  1. If the file processing is quick, you can delay the shutdown for up to 5 minutes in the OnStop event, giving your instance enough time to finish processing the file. This is the easiest solution to implement, but not the most reliable.
  2. If processing a file can be broken up into shorter chunks of work, you can have the instances process chunks until the file is complete. That way it doesn't really matter if an arbitrary instance is shut down: you don't lose any significant amount of work, and another instance picks up where it left off (see the queue-based sketch after this list). See https://docs.microsoft.com/en-us/azure/architecture/patterns/pipes-and-filters for the pattern. This is the ideal solution, as it is an architecture optimized for distributed workloads, but some workloads (e.g. image/video processing) may not break up easily.
  3. You can implement your own autoscale algorithm and manually shut down the individual instances you choose. To do this you would call the Delete Role Instance API (https://msdn.microsoft.com/en-us/library/azure/dn469418.aspx); a rough REST sketch follows this list. This requires some external process to monitor your workload and execute management operations, so it may not be a good solution depending on your infrastructure.
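
For option 2, here is a minimal sketch of a chunk-consuming worker loop, assuming the chunks are queued as messages in an Azure Storage queue and using the azure-storage-queue Python package. The queue name, connection-string environment variable, and process_chunk function are illustrative placeholders, not part of the original answer. The point is that work is claimed with a visibility timeout, so a chunk held by an instance that gets scaled in simply reappears on the queue for another instance.

```python
# Sketch: each queue message describes one chunk of a file to process.
# If this instance is killed mid-chunk, the message becomes visible again
# after the visibility timeout and another instance picks it up.
import os

from azure.storage.queue import QueueClient

QUEUE_NAME = "file-chunks"  # hypothetical queue name


def process_chunk(chunk_descriptor: str) -> None:
    # Placeholder for the real per-chunk work (e.g. transcode N seconds of video).
    print(f"processing {chunk_descriptor}")


def run_worker() -> None:
    queue = QueueClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"], QUEUE_NAME
    )
    while True:
        # Hide each claimed message for 5 minutes while we work on it.
        for msg in queue.receive_messages(visibility_timeout=300):
            process_chunk(msg.content)
            # Delete only after the chunk is done, so no work is lost
            # if this instance is shut down partway through.
            queue.delete_message(msg)


if __name__ == "__main__":
    run_worker()
```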
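
For option 3, this is a rough sketch of what calling the classic Service Management "Delete Role Instances" operation from an external monitoring process might look like, using Python and requests with a management-certificate PEM file. The subscription ID, service name, deployment name, certificate path, and choice of instance are placeholders; check the documentation linked above for the exact URL, headers, and request body.

```python
# Sketch: delete one specific role instance via the classic Service
# Management API, instead of letting autoscale pick an arbitrary one.
import requests

SUBSCRIPTION_ID = "<subscription-id>"    # placeholder
SERVICE_NAME = "<cloud-service-name>"    # placeholder
DEPLOYMENT_NAME = "<deployment-name>"    # placeholder
CERT = "management-cert.pem"             # management certificate + private key


def delete_role_instance(instance_name: str) -> None:
    url = (
        f"https://management.core.windows.net/{SUBSCRIPTION_ID}"
        f"/services/hostedservices/{SERVICE_NAME}"
        f"/deployments/{DEPLOYMENT_NAME}/roleinstances/?comp=delete"
    )
    body = (
        '<RoleInstances xmlns="http://schemas.microsoft.com/windowsazure" '
        'xmlns:i="http://www.w3.org/2001/XMLSchema-instance">'
        f"<Name>{instance_name}</Name>"
        "</RoleInstances>"
    )
    resp = requests.post(
        url,
        data=body,
        headers={"x-ms-version": "2013-08-01", "Content-Type": "application/xml"},
        cert=CERT,  # client-certificate authentication
    )
    resp.raise_for_status()


# Example: the external monitor decides which instance is idle and removes
# exactly that one, which is the knowledge built-in autoscale lacks.
# delete_role_instance("WorkerRole_IN_8")
```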
