Logback losing my log messages to file

Problem description

I wrote a test program to verify the performance improvements of logback over log4j. But to my surprise, I ran into this strange problem. I am writing some 200k log messages in a loop to a file using their async and file appenders. But every time, it only logs about 140k or so messages and stops after that. It just prints my last log statement, indicating that it has written everything in the buffer, and the program terminates. If I run the same program with Log4j, I can see all 200k messages in the log file. Is there any fundamental architectural difference that makes this happen? Is there any way to avoid it? We are thinking of switching from log4j to logback, and now this is making me re-think.

Here is my logback configuration:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>logback.log</file>
        <encoder>
            <pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="FILE" />
    </appender>

    <root level="info">
        <appender-ref ref="ASYNC" />
    </root>
</configuration>

This is my code:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackTest {

    public static void main(String[] args) throws InterruptedException {
        Logger logbackLogger = LoggerFactory.getLogger(LogbackTest.class);

        // Number of messages to write in each run
        List<Integer> runs = Arrays.asList(1000, 5000, 50000, 200000);
        ArrayList<Long> logbackRuntimes = new ArrayList<>(4);

        for (int run = 0; run < runs.size(); run++) {
            logbackLogger.info("------------------------>Starting run: " + (run + 1));
            // logback test
            long stTime = System.nanoTime();
            int i = 0;
            for (i = 1; i <= runs.get(run); i++) {
                Thread.sleep(1);
                logbackLogger.info("This is a Logback test log, run: {}, iter: {}", run, i);
            }
            logbackRuntimes.add(System.nanoTime() - stTime);
            logbackLogger.info("logback run - " + (run + 1) + " " + i);
        }

        // Give the async appender time to drain its queue before printing results
        Thread.sleep(5000);

        // print results
        logbackLogger.info("Run times:");
        logbackLogger.info("Run\tNoOfMessages\tLogback Time(ms)");
        for (int run = 0; run < runs.size(); run++) {
            // nanoseconds -> milliseconds
            logbackLogger.info((run + 1) + "\t" + runs.get(run) + "\t"
                    + logbackRuntimes.get(run) / 1e6d);
        }
    }
}

Answer

According to the documentation:

[...] by default, when less than 20% of the queue capacity remains, AsyncAppender will drop events of level TRACE, DEBUG and INFO, keeping only events of level WARN and ERROR. This strategy ensures non-blocking handling of logging events (hence excellent performance) at the cost of losing events of level TRACE, DEBUG and INFO when the queue has less than 20% capacity. Event loss can be prevented by setting the discardingThreshold property to 0 (zero).
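In the configuration above, that means the ASYNC appender can silently discard the INFO-level test messages once its queue fills up. A minimal sketch of the adjusted appender, assuming the same FILE appender as above (the queueSize value is only an illustrative choice, not part of the required fix):

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <!-- 0 = never discard events, no matter how full the queue gets -->
    <discardingThreshold>0</discardingThreshold>
    <!-- optional: enlarge the queue (default 256) to reduce blocking under bursts -->
    <queueSize>500</queueSize>
    <appender-ref ref="FILE" />
</appender>

With discardingThreshold set to 0, the appender blocks the logging thread when the queue is full instead of dropping events, so throughput may dip under load, but all 200k messages should reach the file.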
