Dropwizard doesn't log custom loggers to file


Problem description



I have a Dropwizard app where I configured loggers and a file appender as follows:

logging:
  level: INFO

  loggers:
    "mylogger": INFO
    "com.path.to.class": INFO

  appenders:
    - type: file
      currentLogFilename: .logs/mylogs.log
      archivedLogFilenamePattern: .logs/archive.%d.log.gz
      archivedFileCount: 14

And created the loggers in my app:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;    

private final Logger OpLogger = LoggerFactory.getLogger("mylogger");
(and)
private final Logger ClassLogger = LoggerFactory.getLogger(pathToClass.class);

Do some test logging in main():

OpLogger.info("test 1");
ClassLogger.info("test 2);

The application starts and runs without problems, but I don't get any logs (except for the Jetty access logs, of course, which are correctly printed to mylogs.log), neither in stdout nor in the mylogs.log file. If I instead remove the loggers configuration from configuration.yml, all logs are printed to stdout. Is this a problem with Dropwizard, or do I have to add something to configuration.yml? I'm using Dropwizard 0.8.0.

Solution

UPDATE: Recent versions of Dropwizard support this kind of per-logger appender configuration out of the box.
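For reference, in more recent Dropwizard versions (1.x and later) the loggers block can carry its own appenders directly, which covers the original question without any custom code. A minimal sketch under that assumption (file names are placeholders, and the exact keys may differ slightly between versions):

logging:
  level: INFO
  loggers:
    "mylogger":
      level: INFO
      additive: false
      appenders:
        - type: file
          currentLogFilename: ./logs/mylogs.log
          archivedLogFilenamePattern: ./logs/mylogs-%d.log.gz
          archivedFileCount: 14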

I ran into the same issue trying to set up Dropwizard (0.8.4) with separate log files, so I dug a bit deeper and found a solution that works for me (not the cleanest, but I couldn't seem to get it working any other way).

The issue is that LoggingFactory#configure automatically adds every appender to the root logger. This is not ideal, so it needs to be overridden.
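Paraphrased (this is a rough sketch of the Dropwizard 0.8.x behaviour, not the exact source), the stock configure method does something like this with every configured appender:

// Rough sketch of the relevant part of the stock LoggingFactory#configure (Dropwizard 0.8.x, paraphrased).
for (AppenderFactory output : appenders) {
    // every configured appender is attached to the root logger
    root.addAppender(output.build(loggerContext, name, null));
}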

What I did was:

  1. Override LoggingFactory.

This is slightly messy since there are a few things that sadly need to be copied :( Here is my implementation:

import java.io.PrintStream;
import java.lang.management.ManagementFactory;
import java.util.Map;

import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;

import org.slf4j.LoggerFactory;
import org.slf4j.bridge.SLF4JBridgeHandler;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.logback.InstrumentedAppender;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.common.collect.ImmutableMap;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.PatternLayout;
import ch.qos.logback.classic.jmx.JMXConfigurator;
import ch.qos.logback.classic.jul.LevelChangePropagator;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.util.StatusPrinter;
import io.dropwizard.logging.AppenderFactory;
import io.dropwizard.logging.LoggingFactory;

public class BetterDropWizardLoggingConfig extends LoggingFactory {

    @JsonIgnore
    final LoggerContext loggerContext;

    @JsonIgnore
    final PrintStream configurationErrorsStream;

    @JsonProperty("loggerMapping")
    private ImmutableMap<String, String> loggerMappings;

    private static void hijackJDKLogging() {
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        SLF4JBridgeHandler.install();
    }

    public BetterDropWizardLoggingConfig() {
        // Register a custom "%h" pattern converter that resolves the local hostname (see HostNameConverter below).
        PatternLayout.defaultConverterMap.put("h", HostNameConverter.class.getName());
        this.loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
        this.configurationErrorsStream = System.err;
    }

    private Logger configureLevels() {
        final Logger root = loggerContext.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        loggerContext.reset();

        final LevelChangePropagator propagator = new LevelChangePropagator();
        propagator.setContext(loggerContext);
        propagator.setResetJUL(true);

        loggerContext.addListener(propagator);

        root.setLevel(getLevel());

        for (Map.Entry<String, Level> entry : getLoggers().entrySet()) {
            loggerContext.getLogger(entry.getKey()).setLevel(entry.getValue());
        }

        return root;
    }

    @Override
    public void configure(MetricRegistry metricRegistry, String name) {
        hijackJDKLogging();

        final Logger root = configureLevels();

        // Attach appenders that declare a logger name to that logger; everything else still goes to root.
        for (AppenderFactory output : getAppenders()) {
            Appender<ILoggingEvent> build = output.build(loggerContext, name, null);
            if(output instanceof MappedLogger && ((MappedLogger) output).getLoggerName() != null) {
                String appenderName = ((MappedLogger) output).getLoggerName();
                String loggerName = loggerMappings.get(appenderName);
                Logger logger = this.loggerContext.getLogger(loggerName);
                logger.addAppender(build);
            } else {
                root.addAppender(build);
            }
        }

        StatusPrinter.setPrintStream(configurationErrorsStream);
        try {
            StatusPrinter.printIfErrorsOccured(loggerContext);
        } finally {
            StatusPrinter.setPrintStream(System.out);
        }

        final MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        try {
            final ObjectName objectName = new ObjectName("io.dropwizard:type=Logging");
            if (!server.isRegistered(objectName)) {
                server.registerMBean(new JMXConfigurator(loggerContext, server, objectName), objectName);
            }
        } catch (MalformedObjectNameException | InstanceAlreadyExistsException | NotCompliantMBeanException
                | MBeanRegistrationException e) {
            throw new RuntimeException(e);
        }

        configureInstrumentation(root, metricRegistry);
    }

    private void configureInstrumentation(Logger root, MetricRegistry metricRegistry) {
        final InstrumentedAppender appender = new InstrumentedAppender(metricRegistry);
        appender.setContext(loggerContext);
        appender.start();
        root.addAppender(appender);
    }

}

As you can see, I sadly had to copy/paste a few private members and methods to make things work as intended.
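One piece referenced in the constructor above but not shown in the answer is HostNameConverter, registered for the "%h" pattern. A minimal sketch of what such a logback converter could look like, assuming it does nothing more than resolve the local hostname (the class name and fallback value are illustrative):

import java.net.InetAddress;
import java.net.UnknownHostException;

import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

// Illustrative sketch: resolves the local hostname for the "%h" pattern registered in the config above.
public class HostNameConverter extends ClassicConverter {

    @Override
    public String convert(ILoggingEvent event) {
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            return "unknown-host"; // illustrative fallback
        }
    }
}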

I added a new field:

@JsonProperty("loggerMapping")
private ImmutableMap<String, String> loggerMappings;

This allows me to configure a mapping for each logger. This isn't possible out of the box because there is no way to get hold of an appender's name (Dropwizard defaults the appender names, which is very inconvenient ...).

So I added a new appender type, which in my case also does hostname substitution that I needed for unrelated reasons. For this I extend the good old FileAppenderFactory and implement my own interface, MappedLogger. Implementation here:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.UUID;

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonTypeName;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import ch.qos.logback.core.rolling.RollingFileAppender;
import io.dropwizard.logging.AppenderFactory;
import io.dropwizard.logging.FileAppenderFactory;

@JsonTypeName("hostnameFile")
public class HostnameFileAppender extends FileAppenderFactory implements AppenderFactory, MappedLogger {

    // Fallback identifier used when the local hostname cannot be resolved.
    private static String uuid = UUID.randomUUID().toString();

    @JsonProperty
    private String name;

    @Override
    public void setCurrentLogFilename(String currentLogFilename) {
        super.setCurrentLogFilename(substitute(currentLogFilename));
    }

    private String substitute(final String pattern) {
        String substitute = null;

        try {
            substitute = InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            System.err.println("Failed to get local hostname:");
            e.printStackTrace(System.err);
            substitute = uuid;
            System.err.println("Using " + substitute + " as fallback.");
        }
        return pattern.replace("${HOSTNAME}", substitute);
    }

    @Override
    public void setArchivedLogFilenamePattern(String archivedLogFilenamePattern) {
        super.setArchivedLogFilenamePattern(substitute(archivedLogFilenamePattern));
    }

    @Override
    public String getLoggerName() {
        return name;
    }
}
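The MappedLogger interface itself is not shown in the original answer; given how it is used in the configure loop, a minimal sketch would presumably look like this (the single getter is the only thing that code relies on):

// Presumed shape of the MappedLogger interface: an appender factory that can
// optionally expose a name so its output can be mapped to a specific logger.
public interface MappedLogger {

    // The configured appender name, or null if the appender should stay on the root logger.
    String getLoggerName();
}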

Please note that in order to add a new JSON type, you will have to follow the JavaDoc in AppenderFactory (add a META-INF services entry to the classpath so the new appender is discoverable).
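In practice (assuming the factory lives in a package such as com.example.logging, which is illustrative), that usually means a plain-text service file on the classpath along these lines:

# src/main/resources/META-INF/services/io.dropwizard.logging.AppenderFactory
com.example.logging.HostnameFileAppender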

So far so good: we now have a config that can pick up logger mappings, and an appender that can carry an optional name.

In the configure method I now tie those two together:

for (AppenderFactory output : getAppenders()) {
    Appender<ILoggingEvent> build = output.build(loggerContext, name, null);
    if (output instanceof MappedLogger && ((MappedLogger) output).getLoggerName() != null) {
        String appenderName = ((MappedLogger) output).getLoggerName();
        String loggerName = loggerMappings.get(appenderName);
        Logger logger = this.loggerContext.getLogger(loggerName);
        logger.addAppender(build);
    } else {
        root.addAppender(build);
    }
}

For backwards compatibility I kept the default behaviour: if no name is defined, the appender is added to the root logger. Otherwise I resolve the mapped logger and add the appender to it as desired.

And last but not least, the good old YAML config:

logging:
  # The default level of all loggers. Can be OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL.
  level: INFO

  loggers:
    "EVENT" : INFO

  loggerMapping:
    # for easier search this is defined as: appenderName -> loggerName rather than the other way around
    "eventLog" : "EVENT"

  appenders:
   - type: console   
     threshold: ALL
     logFormat: "myformat"

   - type: hostnameFile # NOTE THE NEW TYPE WITH HOSTNAME RESOLVE
     currentLogFilename: /Users/artur/tmp/log/my-${HOSTNAME}.log
     threshold: ALL
     archive: true
     archivedLogFilenamePattern: mypattern
     archivedFileCount: 31
     timeZone: UTC
     logFormat: "myFormat"

   - type: hostnameFile
     name: eventLog # NOTE THE APPENDER NAME
     currentLogFilename: something
     threshold: ALL
     archive: true
     archivedLogFilenamePattern: something
     archivedFileCount: 31
     timeZone: UTC
     logFormat: "myFormat"

   - type: hostnameFile
     currentLogFilename: something
     threshold: ERROR
     archive: true
     archivedLogFilenamePattern: something
     archivedFileCount: 31
     timeZone: UTC
     logFormat: "myFormat"

As you can see, I map the eventLog appender to the EVENT logger. This way all my events end up in one file, while the other information ends up elsewhere.
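For completeness (this usage snippet is not part of the original answer), application code would then write to that logger by name through SLF4J, for example:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EventRecorder { // illustrative class name

    // Log statements on the "EVENT" logger end up in the file behind the eventLog appender.
    private static final Logger EVENT_LOG = LoggerFactory.getLogger("EVENT");

    public void recordSignup(String user) {
        EVENT_LOG.info("user {} signed up", user);
    }
}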

I hope this helps. It might not be the cleanest solution, but I don't think Dropwizard currently allows this out of the box.
