How to change the Kafka client logging levels/preferences?


Problem description


I am using a plain Java project to run (no framework) a Kafka producer and a consumer.

I am trying to control the logs generated by the KafkaProducer and KafkaConsumer code and I cannot influence it using the log4j.properties configuration:

log4j.rootLogger=ERROR,stdout

log4j.logger.kafka=ERROR,stdout
log4j.logger.org.apache.kafka.clients.producer.ProducerConfig=ERROR,stdout
log4j.logger.org.apache.kafka.common.utils.AppInfoParser=ERROR,stdout
log4j.logger.org.apache.kafka.clients.consumer.internals.AbstractCoordinator=ERROR,stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

Still I get log output like the one below, whatever settings I provide in the log4j.properties file:

[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
...
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
...
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
...
[main] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=UM00160, groupId=string-group] (Re-)joining group

How can I control the logging of the Kafka clients library? What am I missing to link my log4j.properties file to the Kafka clients library logging? In order not to spam the output, I have to run the Maven tests using: mvn test 2> /dev/null. Can I configure this via log4j.properties?

Context:

I have the following relevant files:

── test
   ├── java
   │   └── com
   │       └── example
   │           ├── PropertyReader.java
   │           └── strings
   │               └── TestKafkaStringValues.java
   └── resources
       ├── application.properties
       └── log4j.properties

And I am trying to run the TestKafkaStringValues.java both using the Maven surefire plugin (mvn test) or the Eclipse JUnit plugin (equivalent to java ...).

For surefire I use the following configuration in the Maven pom.xml:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.2</version>
    <configuration>
        <systemPropertyVariables>
            <log4j.configuration>file:log4j.properties</log4j.configuration>
        </systemPropertyVariables>
    </configuration>
</plugin>

and for JUnit I use the following Java VM argument: -Dlog4j.configuration=log4j.properties.

I also tried in both cases to use the absolute path to log4j.properties. Still not working.

You can see the complete code here.

Solution

The problem in the code above was in the Maven runtime dependencies: the actual Log4j logging implementation was missing. In the pom, the slf4j-simple logging implementation was provided. This implementation was:

  • able to print the Kafka logs to stdout
  • NOT able to understand the log4j.properties or -Dlog4j.* properties.
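For completeness: slf4j-simple is not entirely unconfigurable, it just ignores Log4j's property names and uses its own. A minimal sketch, assuming you stayed with slf4j-simple instead of switching to Log4j2, would be a simplelogger.properties file on the classpath (or the same keys passed as -D system properties):

```properties
# simplelogger.properties - read by slf4j-simple, NOT by Log4j
org.slf4j.simpleLogger.defaultLogLevel=error
# per-logger override, analogous to a log4j.logger.* entry
org.slf4j.simpleLogger.log.org.apache.kafka=warn
```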

Hence, one would have to include a Log4j implementation. Here one has the choice between Log4j 1.x (end of life) and Log4j2.

With the following configuration, one should be able to have a very comprehensive/granular control over the logging (including the Kafka clients).

In the pom.xml:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.13.1</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.13.1</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.13.1</version>
    <scope>test</scope>
</dependency>

log4j-api and log4j-core are the minimum requirements. In order for Log4j2 to also be able to control/configure libraries/components written on top of SLF4J (and the Kafka client is such a library), you need to add the third dependency: log4j-slf4j-impl.

NOTE: for libraries that use SLF4J 1.8.x and higher, you will need another version of this Log4j-SLF4J adapter. See the Log4j documentation for more information.
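As a sketch of what that swap would look like (the artifactId here is an assumption; verify it against the Log4j documentation for your SLF4J version), only the adapter dependency in the pom.xml changes:

```xml
<!-- hypothetical variant of the adapter for SLF4J 1.8.x+;
     check the exact artifactId for your SLF4J/Log4j versions -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j18-impl</artifactId>
    <version>2.13.1</version>
    <scope>test</scope>
</dependency>
```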

Now, regarding configuring the logging: Log4j2 automatically loads its configuration file if it finds one, searching in multiple standard locations.
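If you prefer not to rely on automatic discovery, Log4j2 also honors the log4j.configurationFile system property (note: not log4j.configuration, which is Log4j 1.x only). In the surefire setup from the question, that would become:

```xml
<systemPropertyVariables>
    <!-- Log4j2 property name; replaces the Log4j 1.x log4j.configuration -->
    <log4j.configurationFile>file:log4j2.properties</log4j.configurationFile>
</systemPropertyVariables>
```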

If you place the following log4j2.properties file on the resource classpath (in src/main/resources/ for main code and in src/test/resources/ for test code), you will get the desired outcome:

rootLogger.level = info
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT

appenders = stdout

appender.stdout.name = STDOUT
appender.stdout.type = Console
appender.stdout.layout.type = PatternLayout
appender.stdout.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} [%level] [%t] %c - %m%n

loggers = kafka, kafka-consumer

logger.kafka.name = org.apache.kafka
logger.kafka.level = warn

logger.kafka-consumer.name = org.apache.kafka.clients.consumer
logger.kafka-consumer.level = info

In the above example, all logging is written to stdout, and:

  • the root logger logs info and above
  • all org.apache.kafka-prefixed loggers log warn and above
  • all org.apache.kafka.clients.consumer-prefixed loggers log info and above
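Following the same pattern, the "(Re-)joining group" messages from the question could be silenced on their own by adding one more named logger for the coordinator class (a sketch; the loggers list must be extended accordingly):

```properties
loggers = kafka, kafka-consumer, kafka-coordinator

logger.kafka-coordinator.name = org.apache.kafka.clients.consumer.internals.AbstractCoordinator
logger.kafka-coordinator.level = error
```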

Here are some extra observations when using Log4j2:

  • if you want JSON or YAML configuration, you need extra dependencies
  • the JUnit plugin in Eclipse will silently terminate without any output if the Log4j configuration is incorrect; the mvn output, however, will show you the error.
