vm.max_map_count and mmapfs

Question

What are the pros and cons of increasing vm.max_map_count from 64k to 256k?

Does vm.max_map_count = 65530 imply that 64k addresses * 64 KB page size = up to 4 GB of data can be referenced by the process?

And if I exceed 4 GB (the addressable space due to the vm.max_map_count limit), will the OS need to page out some of the older accessed index data?

Maybe my understanding above is not correct, as the FS cache can be pretty huge.

How does this limit result in OOM?

I posted a similar question in the Elasticsearch context at https://discuss.elastic.co/t/mmapfs-and-impact-of-vm-max-map-count/55568

Answer

Answering my own question based on further digging and a reply from Uwe Schindler (Lucene PMC):


The page size has nothing to do with max_map_count. It is the number of mappings that are allocated. Lucene's MMapDirectory maps in portions of up to 1 GiB. The number of mappings is therefore dependent on the number of segments (number of files in the index directory) and their sizes. A typical index with around 40 files in the index directory, all of them smaller than 1 GiB, needs 40 mappings. If the index is larger, has 40 files, and most segments are around 20 gigabytes, then it could take up to 800 mappings.
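To make that arithmetic concrete, here is a rough sketch of the mapping count (a minimal illustration only; the 1 GiB chunk size comes from the reply above, and the file sizes are made-up examples):

```python
import math

ONE_GIB = 1 << 30  # MMapDirectory maps files in chunks of up to 1 GiB

def mappings_needed(file_sizes_bytes):
    """Estimate how many mmap chunks an index directory needs:
    one mapping per started 1 GiB of each file."""
    return sum(max(1, math.ceil(size / ONE_GIB)) for size in file_sizes_bytes)

# 40 files, all smaller than 1 GiB -> 40 mappings
print(mappings_needed([512 * 1024 * 1024] * 40))  # 40

# 40 files of ~20 GiB each -> up to 800 mappings
print(mappings_needed([20 * ONE_GIB] * 40))       # 800
```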

The reason why the Elasticsearch people recommend raising max_map_count is because of their customer structure. Most Logstash users have Elasticsearch clouds with something like 10,000 indexes, each possibly very large, so the number of mappings could become a limiting factor.

I'd suggest not changing the default setting unless you get IOExceptions about "map failed" (please note: it will not result in OOMs with recent Lucene versions, as this is handled internally!).
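Before raising anything, here is a quick Linux-only way to see how close a node actually is to the limit; it just reads /proc, and the PID below is a hypothetical placeholder for the Elasticsearch process:

```python
from pathlib import Path

def max_map_count():
    # Current kernel limit on mappings per process (vm.max_map_count)
    return int(Path("/proc/sys/vm/max_map_count").read_text())

def mapping_count(pid):
    # Each line in /proc/<pid>/maps is one memory mapping held by the process
    with open(f"/proc/{pid}/maps") as maps:
        return sum(1 for _ in maps)

es_pid = 12345  # hypothetical PID of the Elasticsearch JVM
print("limit :", max_map_count())
print("in use:", mapping_count(es_pid))
```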

The paging of the OS has nothing to do with the mapped file count. max_map_count is just a limit on how many mappings can be used in total. A mapping needs one chunk of up to 1 GiB that is mmapped. Paging in the OS happens on a much lower level: it will swap any part of those chunks independently, according to the page size. Chunk != page size.

Summary (please correct me if I am wrong): unlike what the documentation suggests, I don't think it is required to increase max_map_count in all scenarios.

ES 2.x - In the default (hybrid nio + mmap) FS mode, only the .dvd and .tim files (maybe points too) are mmapped, which would allow for ~30,000 shards per node.

ES 5.x - There is segment throttling, so although the default moves to mmapfs, the default of 64k may still work fine.

Raising it could be useful if you plan to use mmapfs and have > 1000 shards per node. (I personally see many other issues creep in with high shards/node.)

mmapfs store - Only when the store is mmapfs and each node stores > 65,000 segment files (or 1000+ shards) will this limit come into play. I would rather add more nodes than have such a massive number of shards per node on mmapfs.
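A rough sanity check of that last point (the shard and segment-file counts below are assumptions for illustration, not measurements):

```python
MAX_MAP_COUNT = 65530            # default vm.max_map_count

shards_per_node = 1000           # assumed
segment_files_per_shard = 65     # assumed: all mmapped, each file < 1 GiB

mappings = shards_per_node * segment_files_per_shard
print(mappings)                  # 65000 -> already brushing against the 65530 default
```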
