Writing to a file in HDFS in Hadoop


Problem description

I was looking for a disk-intensive Hadoop application to test the I/O activity in Hadoop, but I couldn't find one that keeps disk utilization above, say, 50%, or that otherwise keeps the disks busy. I tried randomwriter, but surprisingly it is not disk-I/O intensive.

So I wrote a tiny program that creates a file in the Mapper and writes some text into it. The application works well, but disk utilization is high only on the master node, which is also the name node, job tracker, and one of the slaves. Disk utilization is nil or negligible on the other task trackers, and I can't understand why the disk I/O is so low there. Could anyone please nudge me in the right direction if I'm doing something wrong? Thanks in advance.

Here is the sample code segment from my WordCount.java file that creates a file and writes a UTF string into it:

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path outFile;
while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    context.write(word, one);

    // Create a per-task-attempt file in HDFS, write a short UTF string, then delete it
    outFile = new Path("./dummy" + context.getTaskAttemptID());
    FSDataOutputStream out = fs.create(outFile);
    out.writeUTF("helloworld");
    out.close();
    fs.delete(outFile);   // single-argument delete is deprecated; fs.delete(outFile, false) is equivalent
}
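
Note that a write this small (a ten-character UTF string per file) mostly exercises name-node metadata operations (create, allocate block, delete) rather than the data-node disks. If the goal is to keep the disks busy, writing a larger payload per file should produce more measurable I/O. The following is only an illustrative sketch, not the poster's code; the 1 MB buffer, the roughly 64 MB total, and the /tmp/dummy- path are assumptions.

byte[] buffer = new byte[1024 * 1024];                 // 1 MB of dummy data (illustrative)
Path bigFile = new Path("/tmp/dummy-" + context.getTaskAttemptID());
FSDataOutputStream bigOut = fs.create(bigFile);
for (int i = 0; i < 64; i++) {                         // ~64 MB per task attempt (illustrative)
    bigOut.write(buffer);
}
bigOut.close();
fs.delete(bigFile, false);                             // non-recursive delete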

Solution

OK, I must have been really stupid for not checking before. The actual problem was that my data nodes were not actually running. I reformatted the namenode and everything fell back into place; I was getting a utilization of 15-20%, which is not bad for WordCount. I will run TestDFSIO next and see whether I can utilize the disks even more.
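
Since the root cause was data nodes that were not actually running, a quick sanity check before running a benchmark can save some head-scratching. Below is a minimal sketch (not part of the original answer), assuming fs.defaultFS points at an HDFS cluster, that lists the data nodes the name node currently reports:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class ListDataNodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Only an actual HDFS cluster (DistributedFileSystem) can report data nodes
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.println(dn.getHostName());
                System.out.println(dn.getDatanodeReport());   // capacity, DFS used, last contact, etc.
            }
        } else {
            System.out.println("Default filesystem is not HDFS: " + fs.getUri());
        }
    }
}

The same information is available on the command line via hadoop dfsadmin -report.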
