Can slave processes be dynamically provisioned based on load using Spring Cloud Data Flow?


Problem description


We are currently using Spring Batch remote chunking to scale our batch processing. We are thinking of using Cloud Data Flow, but would like to know whether slaves can be dynamically provisioned based on load. We are deployed in Google Cloud, so if Cloud Data Flow fits our needs, we would also like to consider Spring Cloud Data Flow's support for Kubernetes.

Solution

When using the batch extensions of Spring Cloud Task (specifically the DeployerPartitionHandler), workers are dynamically launched as needed. That PartitionHandler lets you configure a maximum number of workers; it then processes each partition on an independent worker, up to that max (the remaining partitions are processed as others finish). The "dynamic" aspect is really controlled by the number of partitions returned by the Partitioner: the more partitions returned, the more workers launched.
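To make the partition-count-drives-worker-count idea concrete, here is a minimal plain-Java sketch with no Spring dependencies. `PartitionDemo` and its map-based contexts are hypothetical stand-ins for Spring Batch's `Partitioner` and `ExecutionContext`; in a real job, each entry in the returned map would correspond to one worker that the DeployerPartitionHandler launches.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for Spring Batch's Partitioner contract: each map
// entry becomes one partition, and with DeployerPartitionHandler each
// partition is handled by a dynamically launched worker (up to the
// configured maximum number of workers).
public class PartitionDemo {

    // Returns one ExecutionContext-like map per partition; the number of
    // entries is what drives how many workers get launched.
    static Map<String, Map<String, Object>> partition(int gridSize) {
        Map<String, Map<String, Object>> partitions = new HashMap<>();
        for (int i = 0; i < gridSize; i++) {
            Map<String, Object> context = new HashMap<>();
            // A worker would read this value to pick its slice of the data.
            context.put("partitionIndex", i);
            partitions.put("partition" + i, context);
        }
        return partitions;
    }

    public static void main(String[] args) {
        // 8 partitions are returned; with maxWorkers set to 4, only 4 would
        // run concurrently, and the rest are processed as workers free up.
        System.out.println(partition(8).size()); // prints 8
    }
}
```

The key takeaway is that scaling is data-driven: returning more partitions (for example, one per input file or key range) is what causes more workers to be provisioned, capped by the configured maximum.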

You can see a simple example configured to use Cloud Foundry in this repo: https://github.com/mminella/S3JDBC. The main difference between it and what you'd need is that you'd swap out the CloudFoundryTaskLauncher for a KubernetesTaskLauncher, along with its appropriate configuration.
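For the Kubernetes side, that swap is largely a dependency and configuration change. As a rough sketch, the Maven dependency below pulls in the Kubernetes deployer so a KubernetesTaskLauncher can be wired up; the version number is illustrative only, so check the Spring Cloud Deployer releases that match your Spring Cloud Task version.

```xml
<!-- Replace the Cloud Foundry deployer dependency with the Kubernetes one
     so that a KubernetesTaskLauncher bean can be created.
     The version below is illustrative, not prescriptive. -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-deployer-kubernetes</artifactId>
    <version>2.9.2</version>
</dependency>
```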

