How to write a Java log file using the Logger API while using Hadoop


Problem Description


I wrote a map-reduce job that I want to debug.

To do so I can't use standard output, because the Hadoop platform doesn't print it to the screen unless an error occurs.

Instead I tried to use a logger in order to create a log file.

I split the output into two files using handlers; unfortunately the "severe" log file comes out empty, and the general log file only records what happens in the main thread, not in the map and reduce functions.

The question is as follows:

Is there an issue with Hadoop and log files, or is it a problem with my configuration of the logger? If so, how do I correct it?

The log configuration code (I use a single logger for the entire application, in this case the root logger):

    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    private static Logger logger;

    public static Logger configureLogging()
    {
        try
        {
            // Use the root logger so the whole application shares one logger
            logger = Logger.getLogger("");
            //FileSystem hdfs = FileSystem.get(URI.create(Misc.S3FS), getConfiguration());
            logger.setLevel(Level.ALL);

            //StreamHandler handler = new StreamHandler(hdfs.create(new Path(Misc.LOGS_PATH + "mylog.log")), new SimpleFormatter());
            // One handler records everything; the second only INFO and above
            FileHandler handler = new FileHandler(Misc.LOGS_PATH + "mylog.xml", true);
            FileHandler severeHandler = new FileHandler(Misc.LOGS_PATH + "mylogSevere.xml", true);
            severeHandler.setLevel(Level.INFO);
            logger.addHandler(handler);
            logger.addHandler(severeHandler);
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        return logger;
    }

Solution

Hadoop comes with log4j preconfigured. All you have to do is import two classes:

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

Now you can define a logger inside your mappers, reducers, and anywhere else you want:

    private static final Log LOG = LogFactory.getLog(MyClass.class);

And log what you need:

    LOG.info("My message");
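
To make this concrete, here is a minimal sketch of a mapper that logs this way. The class name MyMapper and the pass-through map body are illustrative only, and it assumes the org.apache.hadoop.mapreduce API:

    import java.io.IOException;

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical mapper showing commons-logging calls inside a map task
    public class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

        private static final Log LOG = LogFactory.getLog(MyMapper.class);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // This message goes to the task's log, not to the client's stdout
            LOG.info("Processing record at offset " + key.get());
            context.write(value, new LongWritable(1L));
        }
    }

Because LOG is resolved through commons-logging, Hadoop's bundled log4j picks it up without any extra configuration in the job code.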

The messages will show up during job execution. You can tweak the log4j configuration in conf/log4j.properties.
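
For instance, a minimal tweak might look like the following; the package name com.example.myjob is a placeholder for your own classes:

    # log4j 1.x syntax: raise verbosity for one package without touching the rest
    log4j.logger.com.example.myjob=DEBUG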

