Log4j not writing to HDFS / log4j.properties


Problem description


Based on the following configuration, I expect log4j to write to the HDFS folder (/myfolder/mysubfolder), but it doesn't even create a file with the given name hadoop9.log. I tried creating hadoop9.log manually on HDFS; it still didn't work.

Am I missing anything in log4j.properties?

# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console,RFA,DRFA
hadoop.log.dir=/myfolder/mysubfolder
hadoop.log.file=hadoop9.log

# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter

# Logging Threshold
log4j.threshold=ALL

# Null Appender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender

#
# Rolling File Appender - cap space usage at 5gb.
#
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}

log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}

log4j.appender.RFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# Daily Rolling File Appender
#

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd

# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# console
# Add "console" to rootlogger above if you want to use this
#

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n

#
# TaskLog Appender
#

#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12

log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}

log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

#
# HDFS block state change log from block manager
#
# Uncomment the following to suppress normal block state change
# messages from BlockManager in NameNode.
#log4j.logger.BlockStateChange=WARN

#
#Security appender
#
hadoop.security.logger=INFO,NullAppender
hadoop.security.log.maxfilesize=256MB
hadoop.security.log.maxbackupindex=20
log4j.category.SecurityLogger=${hadoop.security.logger}
hadoop.security.log.file=SecurityAuth-${user.name}.audit
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}

#
# Daily Rolling Security appender
#
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd

#
# hadoop configuration logging
#

# Uncomment the following line to turn off configuration deprecation warnings.
# log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN

#
# hdfs audit logging
#
hdfs.audit.logger=INFO,NullAppender
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}

#
# mapred audit logging
#
mapred.audit.logger=INFO,NullAppender
mapred.audit.log.maxfilesize=256MB
mapred.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}

# Custom Logging levels

#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG

# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR

#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter

#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file :
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
#
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
hadoop.mapreduce.jobsummary.log.maxfilesize=256MB
hadoop.mapreduce.jobsummary.log.maxbackupindex=20
log4j.appender.JSA=org.apache.log4j.RollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false

#
# Yarn ResourceManager Application Summary Log
#
# Set the ResourceManager summary log filename
yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log
# Set the ResourceManager summary log level and appender
yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY

# To enable AppSummaryLogging for the RM,
# set yarn.server.resourcemanager.appsummary.logger to
# <LEVEL>,RMSUMMARY in hadoop-env.sh

# Appender for ResourceManager Application Summary Log
# Requires the following properties to be set
#    - hadoop.log.dir (Hadoop Log directory)
#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)

log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
log4j.appender.RMSUMMARY.MaxFileSize=256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=20
log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n

# HS audit log configs
#mapreduce.hs.audit.logger=INFO,HSAUDIT
#log4j.logger.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=${mapreduce.hs.audit.logger}
#log4j.additivity.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=false
#log4j.appender.HSAUDIT=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.HSAUDIT.File=${hadoop.log.dir}/hs-audit.log
#log4j.appender.HSAUDIT.layout=org.apache.log4j.PatternLayout
#log4j.appender.HSAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
#log4j.appender.HSAUDIT.DatePattern=.yyyy-MM-dd

# Http Server Request Logs
#log4j.logger.http.requests.namenode=INFO,namenoderequestlog
#log4j.appender.namenoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.namenoderequestlog.Filename=${hadoop.log.dir}/jetty-namenode-yyyy_mm_dd.log
#log4j.appender.namenoderequestlog.RetainDays=3

#log4j.logger.http.requests.datanode=INFO,datanoderequestlog
#log4j.appender.datanoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.datanoderequestlog.Filename=${hadoop.log.dir}/jetty-datanode-yyyy_mm_dd.log
#log4j.appender.datanoderequestlog.RetainDays=3

#log4j.logger.http.requests.resourcemanager=INFO,resourcemanagerrequestlog
#log4j.appender.resourcemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.resourcemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-resourcemanager-yyyy_mm_dd.log
#log4j.appender.resourcemanagerrequestlog.RetainDays=3

#log4j.logger.http.requests.jobhistory=INFO,jobhistoryrequestlog
#log4j.appender.jobhistoryrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.jobhistoryrequestlog.Filename=${hadoop.log.dir}/jetty-jobhistory-yyyy_mm_dd.log
#log4j.appender.jobhistoryrequestlog.RetainDays=3

#log4j.logger.http.requests.nodemanager=INFO,nodemanagerrequestlog
#log4j.appender.nodemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.nodemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-nodemanager-yyyy_mm_dd.log
#log4j.appender.nodemanagerrequestlog.RetainDays=3
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.com.mapr.util.zookeeper=WARN
log4j.logger.org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider=WARN

Solution

The RollingFileAppender will only write to the local disk. Unless you can somehow mount HDFS so that it looks like a local disk to your OS, this configuration won't work. You have to choose another Log4j appender type that supports remote logging, such as the Flume Appender, or roll your own.
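For the Flume route, Flume ships a Log4j appender (org.apache.flume.clients.log4jappender.Log4jAppender, in the flume-ng-sdk/flume-ng-log4jappender jars) that forwards events to a Flume agent, and the agent's HDFS sink does the actual writing. A minimal client-side sketch, where the hostname and port are placeholders for wherever your agent's Avro source is listening:

# Sketch: forward log events to a Flume agent; the agent's HDFS sink writes to HDFS.
# localhost:41414 is a placeholder for your agent's Avro source.
log4j.rootLogger=INFO,flume
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=localhost
log4j.appender.flume.Port=41414
# Don't fail the application if the agent is down
log4j.appender.flume.UnsafeMode=true

If you would rather roll your own, a custom appender can write through the Hadoop FileSystem API. The following is only a minimal sketch (no rolling, buffering, or reconnect logic), assuming Log4j 1.x and the Hadoop client jars on the classpath; the class name and its File property are made up for the example:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

/** Minimal sketch of a Log4j 1.x appender that writes to HDFS.
 *  Illustration only: no rolling, buffering, or retry logic. */
public class HdfsAppender extends AppenderSkeleton {
    private String file;          // set via log4j.appender.X.File
    private FSDataOutputStream out;

    public void setFile(String file) { this.file = file; }
    public String getFile() { return file; }

    @Override
    public void activateOptions() {
        try {
            // Picks up core-site.xml/hdfs-site.xml from the classpath
            Configuration conf = new Configuration();
            Path path = new Path(file);
            FileSystem fs = path.getFileSystem(conf);
            out = fs.exists(path) ? fs.append(path) : fs.create(path);
        } catch (IOException e) {
            errorHandler.error("Cannot open HDFS log file " + file, e, 0);
        }
    }

    @Override
    protected void append(LoggingEvent event) {
        if (out == null || layout == null) return;
        try {
            out.write(layout.format(event).getBytes("UTF-8"));
            out.hflush(); // flush to HDFS; trades throughput for durability
        } catch (IOException e) {
            errorHandler.error("Cannot write to HDFS", e, 0);
        }
    }

    @Override
    public void close() {
        try { if (out != null) out.close(); } catch (IOException ignored) {}
        closed = true;
    }

    @Override
    public boolean requiresLayout() { return true; }
}

Note also the chicken-and-egg problem with the configuration in the question: it is the Hadoop daemons' own log4j.properties, and the NameNode cannot log into a filesystem that is not up yet, which is one reason the stock configuration writes locally. For the mount route, the HDFS NFS gateway or fuse-dfs can expose HDFS as a local mount point that RollingFileAppender could write to, with the same bootstrap caveat.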
