Cloud Dataflow job scaling beyond max worker value


Problem description

Dataflow Job ID: 2016-01-13_16_00_09-15016519893798477319

The pipeline was configured with the following worker/scaling config:

  • Minimum of 2 workers
  • Maximum of 50 workers

However, the job scaled to 55 workers. Why was the max worker value of 50 not honoured?

Jan 14, 2016, 11:00:10 AM
(77f7e53b4884ba02): Autoscaling: Enabled for job 2016-01-13_16_00_09-15016519893798477319 between 1 and 1000000 worker processes.

Jan 14, 2016, 11:00:17 AM
(374d4f69f65e2506): Worker configuration: n1-standard-1 in us-central1-a.

Jan 14, 2016, 11:00:18 AM
(28acda8454e90ad2): Starting 2 workers...

Jan 14, 2016, 11:01:49 AM
(cf611e5d4ce4784d): Autoscaling: Resizing worker pool from 2 to 50.

Jan 14, 2016, 11:06:20 AM
(36c68efd7f1743cf): Autoscaling: Resizing worker pool from 50 to 55.

Answer

This turned out to be a bug in our code. We were calling the wrong method: we needed to call setMaxNumWorkers, not setNumWorkers.
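
A minimal sketch of the corrected option wiring, assuming the Dataflow Java SDK's DataflowPipelineOptions and PipelineOptionsFactory (the surrounding class and main method are illustrative scaffolding, not the original job's code):

    import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
    import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;

    public class WorkerCapSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory
            .fromArgs(args).withValidation().as(DataflowPipelineOptions.class);

        // setNumWorkers only sets the *initial* worker count; it does not
        // cap autoscaling, which is why the pool could grow past 50.
        options.setNumWorkers(2);

        // setMaxNumWorkers is the autoscaling ceiling the job was meant to have.
        options.setMaxNumWorkers(50);

        // ... construct and run the pipeline with these options ...
      }
    }

With only setNumWorkers set, autoscaling remains free to grow well beyond that value, which is consistent with the "between 1 and 1000000 worker processes" line in the job log above.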
