kafka Too many open files

Problem Description

Have you ever had a similar problem with Kafka? I am getting this error: "Too many open files", and I don't know why. Here are some logs:

[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180821_1_LOCATION-87/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180822_PARSE-136/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,269] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180813_1_STATISTICS-402/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Recommended Answer

In Kafka, every topic is (optionally) split into many partitions. For each partition, the broker maintains a number of files (for the index and the actual data).

kafka-topics --zookeeper localhost:2181 --describe --topic topic_name

will give you the number of partitions for topic topic_name. The default number of partitions per topic, num.partitions, is defined in /etc/kafka/server.properties.
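
For reference, that default is a single line in the broker configuration; the value below is only an illustrative placeholder (Kafka ships with num.partitions=1), not a value taken from the question:

# /etc/kafka/server.properties
num.partitions=6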

The total number of open files can be very large if the broker hosts many partitions and a particular partition has many log segment files.
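
As a rough sanity check, each log segment typically contributes several files on disk (a .log file plus index files such as .index and .timeindex), so you can count what the broker keeps in its log directory. The path below is the log.dirs location visible in the logs above and may differ on your setup:

find /home/weihu/kafka/kafka/logs -type f | wc -l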

You can see the current file descriptor limit by running:

ulimit -n
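
Note that ulimit -n reports the limit of your current shell, which is not necessarily the limit the broker process is running with. A hedged way to check both, assuming exactly one broker process was started with the standard kafka.Kafka main class so pgrep can find it:

ulimit -Sn    # soft limit of the current shell
ulimit -Hn    # hard limit of the current shell
grep 'open files' /proc/$(pgrep -f kafka.Kafka)/limits    # limits of the running broker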

You can also check the number of open files using lsof:

lsof | wc -l
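
Be aware that lsof | wc -l counts open files for every process on the machine (and on some systems counts the same descriptor once per thread), so it overestimates what the broker itself holds. To count only the broker's descriptors, a sketch with the same pgrep assumption as above:

ls /proc/$(pgrep -f kafka.Kafka)/fd | wc -l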

To solve the issue, you either need to raise the limit on open file descriptors:

ulimit -n <noOfFiles>
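
Keep in mind that ulimit -n only changes the limit for the current shell session and the processes started from it. To make a higher limit survive restarts, the usual options are an entry in /etc/security/limits.conf for the user that runs the broker, or LimitNOFILE in the broker's systemd unit; the user name and value below are assumptions for illustration:

# /etc/security/limits.conf
kafka  soft  nofile  100000
kafka  hard  nofile  100000

# or, if the broker is managed by systemd, in its unit file:
[Service]
LimitNOFILE=100000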

or somehow reduce the number of open files (for example, by reducing the number of partitions per topic).
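
Note that Kafka does not allow decreasing the partition count of an existing topic (kafka-topics --alter can only increase it), so "fewer partitions" in practice means creating topics with a lower --partitions value in the first place; the partition and replication values below are placeholders:

kafka-topics --zookeeper localhost:2181 --create --topic topic_name --partitions 3 --replication-factor 1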
