Kafka vm.max_map_count


Problem Description

We have a Kafka cluster serving a Kafka Streams application.

After a few hours our broker went down with an OutOfMemory exception.

We saw that vm.max_map_count was not enough: the process's memory-map count was above 40K.

Can someone explain what the problem might be, or what influences that parameter?

The number always increases and never goes down.

Answer

Based on the pull request at https://github.com/apache/kafka/pull/4358/files (both the proposed change and the comments reacting to it), it appears that each log segment (i.e. file) in each partition of each topic on the broker consumes two maps.

I would expect the value to rise until you reach a steady state, where all topics have logs old enough to start being deleted due to the retention interval. At that point, each new file should appear at around the same time an older one is deleted (assuming roughly constant message rates). I would expect the value to drop if topics were deleted, or if you changed the configuration of an existing topic or the full broker (e.g. reducing the log retention time or causing the logs to roll over less frequently), and to go up if you changed the configuration in the opposite direction.
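Until that steady state is reached, the practical workaround is to raise the limit. A minimal sketch, assuming Linux; the value 262144 is a commonly used example figure, not a number mandated by Kafka:

```shell
#!/bin/sh
# Current per-process limit on memory maps.
cat /proc/sys/vm/max_map_count

# Raise it on the running system (root required; takes effect immediately).
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots.
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-kafka.conf
```

Size the limit with headroom above the expected steady-state map count, since a burst of small segments (e.g. after lowering `segment.bytes`) can push it up temporarily.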
