Hadoop Reducer Values in Memory?


Problem description



I'm writing a MapReduce job that may end up with a huge number of values in the reducer. I am concerned about all of these values being loaded into memory at once.

Does the underlying implementation of the Iterable<VALUEIN> values load values into memory as they are needed? Hadoop: The Definitive Guide seems to suggest this is the case, but doesn't give a "definitive" answer.

The reducer output will be far more massive than the values input, but I believe the output is written to disk as needed.

Solution

You're reading the book correctly. The reducer does not store all values in memory. Instead, as you loop through the Iterable of values, the framework re-uses a single object instance for each element, so only one value is held in memory at a given time.

For example, in the following code, the objs ArrayList will have the expected size after the loop, but every element will be the same because the Text val instance is re-used on every iteration.

import java.util.ArrayList;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Nested inside the job's driver class.
public static class ReducerExample extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text key, Iterable<Text> values, Context context) {
        ArrayList<Text> objs = new ArrayList<Text>();
        for (Text val : values) {
            // val is the same re-used Text instance on every iteration,
            // so every element of objs ends up identical.
            objs.add(val);
        }
    }
}
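
One way to observe the re-use directly (a hypothetical check, not part of the original answer) is to print each value's identity hash inside the loop; every iteration typically reports the same object:

for (Text val : values) {
    // The same identityHashCode on each iteration indicates one
    // re-used instance whose contents are overwritten.
    System.out.println(System.identityHashCode(val) + " -> " + val);
}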

(If for some reason you do want to take further action on each val, you should make a deep copy and then store it, as sketched below.)
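
A minimal sketch of that fix, assuming Text values as in the example above: copy each value with Text's copy constructor so every stored element is an independent object.

for (Text val : values) {
    // new Text(val) copies the contents out of the re-used instance.
    objs.add(new Text(val));
}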

Of course, even a single value could be larger than memory. In that case the developer should take steps to pare the data down in the preceding Mapper so that no individual value is too large.
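
As an illustration of that idea (a hedged sketch, not from the original answer; ChunkingMapper, MAX_CHUNK_CHARS, and the "record-key" output key are made-up names), a Mapper can split an oversized record into bounded chunks before emitting it:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public static class ChunkingMapper extends Mapper<LongWritable, Text, Text, Text> {
    private static final int MAX_CHUNK_CHARS = 64 * 1024; // illustrative cap

    @Override
    public void map(LongWritable offset, Text record, Context context)
            throws IOException, InterruptedException {
        String s = record.toString();
        // Emit the record in bounded pieces so no single reduce-side
        // value grows without limit.
        for (int i = 0; i < s.length(); i += MAX_CHUNK_CHARS) {
            String chunk = s.substring(i, Math.min(s.length(), i + MAX_CHUNK_CHARS));
            context.write(new Text("record-key"), new Text(chunk));
        }
    }
}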

UPDATE: See pages 199-200 of Hadoop: The Definitive Guide, 2nd Edition.

This code snippet makes it clear that the same key and value objects are used on each invocation of the map() method -- only their contents are changed (by the reader's next() method). This can be a surprise to users, who might expect keys and values to be immutable. This causes problems when a reference to a key or value object is retained outside the map() method, as its value can change without warning. If you need to do this, make a copy of the object you want to hold on to. For example, for a Text object, you can use its copy constructor: new Text(value).

The situation is similar with reducers. In this case, the value objects in the reducer's iterator are reused, so you need to copy any that you need to retain between calls to the iterator.
