Why is HDFS ACL MAX_ENTRIES set to 32?


Question


In Hadoop HDFS, when ACLs are enabled, the maximum number of ACL entries is set to 32. I found the constant in the source code, in org/apache/hadoop/hdfs/server/namenode/AclTransformation.java:

private static final int MAX_ENTRIES = 32;

What is the basis for this? What are the considerations? Can we change 32 to a larger number? I want to reconfigure it.

Solution

ACLs were implemented in HDFS-4685 - Implementation of ACLs in HDFS.
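For context, the cap is enforced when the NameNode validates an ACL spec and the final ACL it builds. Below is a simplified sketch of that check, paraphrased from AclTransformation.java; the method shape and exception message may differ between Hadoop versions.

import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.hdfs.protocol.AclException;

final class AclLimitSketch {
  // Mirrors the hardcoded constant in AclTransformation.java.
  private static final int MAX_ENTRIES = 32;

  // Paraphrased from the validation in AclTransformation: the NameNode
  // rejects any resulting ACL with more than MAX_ENTRIES entries.
  static List<AclEntry> buildAndValidateAcl(List<AclEntry> acl)
      throws AclException {
    if (acl.size() > MAX_ENTRIES) {
      // No configuration property controls this; the constant is compiled in.
      throw new AclException("Invalid ACL: ACL has " + acl.size()
          + " entries, which exceeds maximum of " + MAX_ENTRIES + ".");
    }
    return acl;
  }
}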

As far as I can tell, there was no design decision around the limit of 32. However, since most Hadoop systems run on Linux, and this feature was inspired by Linux ACLs, this value was most likely borrowed from the limits on ext3, as mentioned in POSIX Access Control Lists on Linux by Andreas Grünbacher. Note that MAX_ENTRIES is a hardcoded private constant, not a configuration property, so raising it would mean patching AclTransformation.java and rebuilding HDFS rather than changing a setting.
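Since the constant cannot be raised through configuration, code that generates ACLs programmatically can at least guard against the cap on the client side. Here is a minimal sketch using the public FileSystem ACL API; the path, user name, and local MAX_ACL_ENTRIES constant are illustrative assumptions, not part of the Hadoop API.

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class AclGuardExample {
  // Mirrors the hardcoded limit in AclTransformation.java (not a public API).
  private static final int MAX_ACL_ENTRIES = 32;

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Hypothetical ACL spec granting read/execute to one named user.
    List<AclEntry> aclSpec = Arrays.asList(
        new AclEntry.Builder()
            .setScope(AclEntryScope.ACCESS)
            .setType(AclEntryType.USER)
            .setName("alice")                     // hypothetical user
            .setPermission(FsAction.READ_EXECUTE)
            .build());

    // Fail fast locally instead of letting the NameNode reject the request.
    // (The 32-entry cap actually applies to the final merged ACL.)
    if (aclSpec.size() > MAX_ACL_ENTRIES) {
      throw new IllegalArgumentException("ACL spec has " + aclSpec.size()
          + " entries; HDFS accepts at most " + MAX_ACL_ENTRIES);
    }

    // modifyAclEntries merges the spec into the file's existing ACL.
    fs.modifyAclEntries(new Path("/data/reports"), aclSpec); // hypothetical path
  }
}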

The article goes on to mention that having too many ACL entries creates problems, and it also shows the performance differences introduced by enabling ACLs (see the section titled "EA and ACL Performance").
