Override log4j.properties in Hadoop


Question

How do I override the default log4j.properties in Hadoop? If I set hadoop.root.logger=WARN,console, it does not print the logs on the console, whereas what I actually want is for INFO messages to stay out of the log file. I added a log4j.properties file to my jar, but I am unable to override the default one. In short, I want the log file to print only errors and warnings (a minimal sketch of such a configuration follows the default file below).

# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log

#
# Job Summary Appender 
#
# Use following logger to send summary to separate file defined by 
# hadoop.mapreduce.jobsummary.log.file rolled daily:
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
# 
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log

# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter

# Logging Threshold
log4j.threshold=ALL

#
# Daily Rolling File Appender
#

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd

# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# console
# Add "console" to rootlogger above if you want to use this 
#

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n

#
# TaskLog Appender
#

#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12

log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}

log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

#
#Security appender
#
hadoop.security.log.file=SecurityAuth.audit
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender 
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}

log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#new logger
# Define some default values that can be overridden by system properties
hadoop.security.logger=INFO,console
log4j.category.SecurityLogger=${hadoop.security.logger}

#
# Rolling File Appender
#

#log4j.appender.RFA=org.apache.log4j.RollingFileAppender
#log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Logfile size and 30-day backups
#log4j.appender.RFA.MaxFileSize=1MB
#log4j.appender.RFA.MaxBackupIndex=30

#log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n

#
# FSNamesystem Audit logging
# All audit events are logged at INFO level
#
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=WARN

# Custom Logging levels

#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG

# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR

#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter

#
# Job Summary Appender
#
log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.JSA.DatePattern=.yyyy-MM-dd
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false

#
# MapReduce Audit Log Appender
#

# Set the MapReduce audit log filename
#hadoop.mapreduce.audit.log.file=hadoop-mapreduce.audit.log

# Appender for AuditLogger.
# Requires the following system properties to be set
#    - hadoop.log.dir (Hadoop Log directory)
#    - hadoop.mapreduce.audit.log.file (MapReduce audit log filename)

#log4j.logger.org.apache.hadoop.mapred.AuditLogger=INFO,MRAUDIT
#log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
#log4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.MRAUDIT.File=${hadoop.log.dir}/${hadoop.mapreduce.audit.log.file}
#log4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd
#log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
#log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
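For reference, the behavior asked for (only warnings and errors in the log file) corresponds to a configuration roughly like the minimal sketch below. This is not the stock file: it reuses the DRFA appender from the default configuration above and assumes, as in the default setup, that hadoop.log.dir and hadoop.log.file are supplied as system properties by the startup scripts.

# Minimal sketch: route WARN and above to the daily rolling file only
hadoop.root.logger=WARN,DRFA
log4j.rootLogger=${hadoop.root.logger}

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

Whether Hadoop actually picks such a file up from a job jar depends on classpath ordering (log4j 1.x loads the first log4j.properties it finds on the classpath), and as the answer below explains, the startup scripts can still override the root logger through environment variables.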

Solution

If you use the default log4j.properties file, the logging settings get overridden by environment variables set in the startup script. If you want to keep the default log4j.properties and simply change the logging level, use $HADOOP_CONF_DIR/hadoop-env.sh.

For example, to switch the root logger to the DEBUG level with the DRFA appender, use:

export HADOOP_ROOT_LOGGER="DEBUG,DRFA"
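
Applied to the original goal (only warnings and errors in the log file), the same mechanism should work. The following is a sketch that assumes your startup scripts read HADOOP_ROOT_LOGGER the way the stock ones do; myjob.jar is a placeholder, not a real artifact:

# In $HADOOP_CONF_DIR/hadoop-env.sh: send WARN and above to the daily rolling file
export HADOOP_ROOT_LOGGER="WARN,DRFA"

# Or per invocation, without editing hadoop-env.sh
# (myjob.jar is a placeholder for your own job jar)
HADOOP_ROOT_LOGGER="WARN,DRFA" hadoop jar myjob.jar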
