HDInsight not enough memory to open


Problem description


I am a new user. I just deployed a new HDInsight Spark cluster with 2 head nodes (8 cores) and 4 worker nodes (16 cores) in West Europe. I haven't done anything there yet, but it shows "not enough memory to open" for Zeppelin, Ambari, and all other services. I cannot use anything at all. Does anybody happen to know what is going on, and how to solve it?

Solution

The NameNode Java heap size depends on many factors, such as the load on the cluster, the number of files, and the number of blocks. The default size of 1 GB works well with most clusters, although some workloads can require more or less memory.

To modify the NameNode Java heap size:

HDFS => Config => Advanced => NameNode Java heap size = 2048 MB => Save
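
Since the symptom here is that the Ambari UI itself fails to open, it can also help to inspect the current setting over Ambari's REST API instead. Below is a minimal sketch; the gateway URL pattern, the placeholder cluster name and credentials, and the assumption that the NameNode heap lives in the "hadoop-env" config type under a "namenode_heapsize" property should all be verified against your own cluster.

```python
# Minimal sketch: read the current NameNode heap setting through the Ambari
# REST API, useful when the Ambari web UI itself will not open. The gateway
# URL pattern and the "namenode_heapsize" property in the "hadoop-env" config
# type are assumptions to verify against your own cluster.
import requests

CLUSTER = "CLUSTERNAME"  # hypothetical cluster name
AMBARI = f"https://{CLUSTER}.azurehdinsight.net/api/v1/clusters/{CLUSTER}"
AUTH = ("admin", "PASSWORD")  # the HDInsight cluster login credentials

# Ask Ambari which tag of each config type is currently active.
desired = requests.get(f"{AMBARI}?fields=Clusters/desired_configs", auth=AUTH).json()
tag = desired["Clusters"]["desired_configs"]["hadoop-env"]["tag"]

# Fetch the properties stored under that tag and print the heap setting.
cfg = requests.get(
    f"{AMBARI}/configurations?type=hadoop-env&tag={tag}", auth=AUTH
).json()
props = cfg["items"][0]["properties"]
print("Current NameNode heap (MB):", props.get("namenode_heapsize"))
```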

To modify the YARN Java heap size in the Ambari UI:

YARN => Config => Advanced => Resource Manager Java heap size = 2048 MB => Save
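
If the UI stays unusable, the same REST API can apply the 2048 MB values from both steps above as new desired configurations. This is only a sketch under the same assumptions: the property names ("namenode_heapsize" in hadoop-env, "resourcemanager_heapsize" in yarn-env) and the plain-MB value format should be confirmed in your cluster's current config before writing anything.

```python
# Minimal sketch: apply the 2048 MB values from both steps above as new
# desired configurations via the Ambari REST API. The property names
# ("namenode_heapsize" in hadoop-env, "resourcemanager_heapsize" in yarn-env)
# and the plain-MB value format are assumptions; confirm them in your
# cluster's current config before writing anything.
import time
import requests

CLUSTER = "CLUSTERNAME"  # hypothetical cluster name
AMBARI = f"https://{CLUSTER}.azurehdinsight.net/api/v1/clusters/{CLUSTER}"
AUTH = ("admin", "PASSWORD")
HEADERS = {"X-Requested-By": "ambari"}  # Ambari requires this on write calls

def set_heap(config_type: str, prop: str, value: str) -> None:
    """Copy the active config of `config_type`, change one property, and
    register the copy as a new desired_config under a fresh tag."""
    desired = requests.get(
        f"{AMBARI}?fields=Clusters/desired_configs", auth=AUTH
    ).json()
    tag = desired["Clusters"]["desired_configs"][config_type]["tag"]
    cfg = requests.get(
        f"{AMBARI}/configurations?type={config_type}&tag={tag}", auth=AUTH
    ).json()
    props = cfg["items"][0]["properties"]
    props[prop] = value
    body = {"Clusters": {"desired_config": {
        "type": config_type,
        "tag": f"version{int(time.time())}",  # any unused tag name works
        "properties": props,
    }}}
    requests.put(AMBARI, json=body, auth=AUTH, headers=HEADERS).raise_for_status()

set_heap("hadoop-env", "namenode_heapsize", "2048")
set_heap("yarn-env", "resourcemanager_heapsize", "2048")
```

After the new tags are registered, restart the affected HDFS and YARN services so the new heap sizes take effect.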


