Hadoop: Number of reducers is not equal to what I have set in the program


Problem description

I have set mapred.tasktracker.reduce.tasks.maximum to 10 in mapred-site.xml, and I also call jobConf.setNumReduceTasks(5) in my job.
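For reference, a minimal driver sketch using the old mapred API shows where a call like jobConf.setNumReduceTasks(5) typically sits. The job name, input/output arguments, and the use of the identity mapper/reducer are assumptions for illustration, not the asker's actual code:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class ReducerCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(ReducerCountDriver.class);
        conf.setJobName("reducer-count-demo"); // assumed job name

        // Identity mapper/reducer just pass records through; with the default
        // TextInputFormat the keys are LongWritable offsets and values are Text lines.
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        // Request 5 reduce tasks; the local job runner ignores this and uses one.
        conf.setNumReduceTasks(5);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));  // input path argument
        FileOutputFormat.setOutputPath(conf, new Path(args[1])); // output path argument

        JobClient.runJob(conf);
    }
}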

Everything is OK if I run the job from the shell.

But when I run the same job from Eclipse, only one reducer is launched.

I tried editing the Map/Reduce Locations in Eclipse and setting mapred.reduce.tasks to 10, but that still doesn't work.

Are there any other parameters I can adjust in Eclipse?

Recommended answer

Running it in Eclipse seems to use the local job runner, which only supports 0 or 1 reducers. If you try to set it to use more than one reducer, it ignores the setting and just uses one anyway.
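One common workaround, sketched below under the assumption of a Hadoop 1.x-style cluster, is to point the JobConf at the real cluster instead of the local runner. The host names and ports are placeholders; use your cluster's actual NameNode and JobTracker addresses:

import org.apache.hadoop.mapred.JobConf;

public class ClusterSubmitConfig {
    public static JobConf configure(JobConf conf) {
        // Submit to the cluster's JobTracker rather than the local job runner.
        conf.set("mapred.job.tracker", "jobtracker-host:9001");   // placeholder address
        // Resolve paths through HDFS instead of the local file system.
        conf.set("fs.default.name", "hdfs://namenode-host:9000"); // placeholder address
        return conf;
    }
}

When submitting from an IDE this way, the job classes typically also need to be packaged into a jar that the cluster can ship to task nodes (for example via conf.setJar(...)); otherwise the tasks will fail with ClassNotFoundException.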
