The memory consumption of Hadoop's NameNode?


Question

Can anyone give a detailed analysis of the memory consumption of the namenode? Or is there some reference material? I cannot find any material on the web. Thank you!

Answer

I suppose the memory consumption depends on your HDFS setup: on the overall size of the HDFS, and on the block size. From the Hadoop NameNode wiki:


Use a good server with lots of RAM. The more RAM you have, the bigger the file system, or the smaller the block size.
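To make the block-size point concrete, here is a minimal sketch (not an official formula) showing why, for a fixed amount of data, a smaller block size means more blocks and therefore more NameNode metadata to hold in RAM. The 100 TB dataset size and the block sizes are illustrative values, not measurements:

```python
# Sketch: for a fixed amount of data, a smaller HDFS block size means
# more blocks, hence more objects the NameNode must track in memory.

def block_count(total_bytes, block_size_bytes):
    """Minimum number of blocks needed to store total_bytes."""
    return -(-total_bytes // block_size_bytes)  # ceiling division

TB = 1024 ** 4
MB = 1024 ** 2

data = 100 * TB  # hypothetical raw data size
for block_size in (256 * MB, 128 * MB, 64 * MB):
    print(f"{block_size // MB} MB blocks -> {block_count(data, block_size):,} blocks")
```

Halving the block size doubles the block count, which is why the wiki ties RAM requirements to both filesystem size and block size.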

From https://twiki.opensciencegrid.org/bin/view/Documentation/HadoopUnderstanding


Namenode: The core metadata server of Hadoop. This is the most critical piece of the system, and there can only be one of these. This stores both the file system image and the file system journal. The namenode keeps all of the filesystem layout information (files, blocks, directories, permissions, etc) and the block locations. The filesystem layout is persisted on disk and the block locations are kept solely in memory. When a client opens a file, the namenode tells the client the locations of all the blocks in the file; the client then no longer needs to communicate with the namenode for data transfer.

The same site recommends the following:


Namenode: We recommend at least 8GB of RAM (the minimum is 2GB), preferably 16GB or more. A rough rule of thumb is 1GB per 100TB of raw disk space; the actual requirement is around 1GB per million objects (files, directories, and blocks). The CPU requirement is any modern multi-core server CPU; typically, the namenode will only use 2-5% of your CPU.

As this is a single point of failure, the most important requirement is reliable hardware rather than high-performance hardware. We suggest a node with redundant power supplies and at least 2 hard drives.
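The "1GB per million objects" rule of thumb quoted above can be turned into a quick back-of-the-envelope heap estimate. The sketch below is only an illustration of that heuristic; the object counts are made-up inputs, not measurements from a real cluster:

```python
# Back-of-the-envelope NameNode heap estimate using the rule of thumb
# quoted above: ~1 GB of heap per million objects (files, dirs, blocks).

def estimate_heap_gb(files, directories, blocks, gb_per_million=1.0):
    """Estimate NameNode heap in GB from object counts (heuristic only)."""
    objects = files + directories + blocks
    return objects / 1_000_000 * gb_per_million

# Hypothetical cluster: 5M files, 0.5M directories, ~1.5 blocks per file.
heap = estimate_heap_gb(files=5_000_000, directories=500_000, blocks=7_500_000)
print(f"Estimated NameNode heap: ~{heap:.1f} GB")  # ~13.0 GB
```

Real memory use varies with replication settings, filename lengths, and JVM overhead, so treat this as a starting point for capacity planning, not a guarantee.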


For a more detailed analysis of memory usage, check this link out: https://issues.apache.org/jira/browse/HADOOP-1687

You might also find this question interesting: Hadoop namenode memory usage
