How to configure Flink cluster for logging via web ui?


Problem description

I have a Flink cluster set up and I'd like to be able to view the logs and stdout for the JobManager and TaskManagers. When I go to the web ui, I see the following error messages on the respective tabs:

JobManager:
    Logs
        (log file unavailable)
    Stdout
        (stdout file unavailable)

TaskManager:
    Logs
        Fetching TaskManager log failed.
    Stdout
        Fetching TaskManager log failed.

I can see that there are some config parameters that could be set, notably taskmanager.log.path, jobmanager.web.log.path and env.log.dir. However, there is no mention of whether these should be network-accessible paths or local paths, etc.

What do I need to do to be able to view task manager and job manager logs?

Recommended answer

What I've found is that if you are running the official Flink Docker container (https://hub.docker.com/_/flink), by default it spits everything to the console (which is generally Docker best practice, I guess). Thus, the log4j config that seems relevant to adjust is /opt/flink/conf/log4j-console.properties. This is the case for both the jobmanager(s) and the taskmanager(s).
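If you're running that image outside Kubernetes, one way to get a modified log4j-console.properties in front of both processes is a bind mount. The compose file below is only a sketch under that assumption; the service names and the host-side `./conf` path are placeholders, not something from the original answer:

```yaml
version: "3"
services:
  jobmanager:
    image: flink:1.8.0
    command: jobmanager
    ports:
      - "8081:8081"   # web ui
    volumes:
      # overlay the image's default console-only config
      - ./conf/log4j-console.properties:/opt/flink/conf/log4j-console.properties
  taskmanager:
    image: flink:1.8.0
    command: taskmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    volumes:
      - ./conf/log4j-console.properties:/opt/flink/conf/log4j-console.properties
```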

Thus I've configured that file to spit not just to the console but also to a file (a rolling one in my case):

log4j-console.properties:

    log4j.rootLogger=INFO, console, file
    # Uncomment this if you want to _only_ change Flink's logging
    #log4j.logger.org.apache.flink=INFO
    # The following lines keep the log level of common libraries/connectors on
    # log level INFO. The root logger does not override this. You have to manually
    # change the log levels here.
    log4j.logger.akka=INFO
    log4j.logger.org.apache.kafka=INFO
    log4j.logger.org.apache.hadoop=INFO
    log4j.logger.org.apache.zookeeper=INFO
    # Log all infos to the console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    # Log all INFOs to the given rolling file
    log4j.appender.file=org.apache.log4j.RollingFileAppender
    log4j.appender.file.file=/opt/flink/log/output.log
    log4j.appender.file.MaxFileSize=5MB
    log4j.appender.file.MaxBackupIndex=5
    log4j.appender.file.append=true
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    # Suppress the irrelevant (wrong) warnings from the Netty channel handler
    log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, console, file
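As an aside, the rolling behavior those file-appender lines configure (cap the file at MaxFileSize, keep MaxBackupIndex numbered backups) can be illustrated with Python's standard RotatingFileHandler. This is just a scaled-down sketch of the same mechanism, not part of the Flink setup; the sizes are shrunk so the rollover is visible immediately:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "output.log")

logger = logging.getLogger("rolling-demo")
logger.setLevel(logging.INFO)

# Analogous to MaxFileSize=5MB / MaxBackupIndex=5 in the log4j config above,
# scaled down to 1 KB per file so a few hundred records force rollovers.
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024, backupCount=5)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)-5s %(name)s - %(message)s"))
logger.addHandler(handler)

for i in range(200):
    logger.info("event %d", i)

# output.log is the active file; output.log.1 .. output.log.5 are the
# backups, oldest dropped once backupCount is exceeded.
print(sorted(os.listdir(log_dir)))
```

The point being: the web ui only ever reads the single active file, which is why the flink-conf.yaml paths below point at output.log itself rather than at any of the rolled backups.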

The above, combined with the flink-conf.yaml below, was able to display the jobmanager's log in the JobManager's Log tab, and the taskmanager's log in the TaskManager's Log tab.

flink-conf.yaml:

    # General configuration
    taskmanager.data.port: 6121
    taskmanager.rpc.port: 6122
    jobmanager.rpc.port: 6123
    blob.server.port: 6124
    query.server.port: 6125
    jobmanager.rpc.address: <your location>
    jobmanager.heap.size: 1024m
    taskmanager.heap.size: 1024m
    taskmanager.numberOfTaskSlots: 1
    web.log.path: /opt/flink/log/output.log
    taskmanager.log.path: /opt/flink/log/output.log
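Those Log/Stdout tabs just fetch the files through the JobManager's REST API, so you can check the wiring without the UI. The host below is a placeholder, and the endpoints are as I understand them from Flink 1.8, so treat this as a sketch rather than a reference:

```shell
# JobManager log and stdout (same port as the web ui, 8081 by default)
curl http://<jobmanager-host>:8081/jobmanager/log
curl http://<jobmanager-host>:8081/jobmanager/stdout

# list TaskManagers, then fetch one's log by its id
curl http://<jobmanager-host>:8081/taskmanagers
curl http://<jobmanager-host>:8081/taskmanagers/<taskmanager-id>/log
```

If these return the same "file unavailable" errors as the UI, the problem is the paths/config rather than the web frontend.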

NOTE: I'm on Flink 1.8.0, running a small cluster in Kubernetes (i.e. separate pods for the jobmanager and taskmanagers)
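In a Kubernetes setup like this, the usual way to put both files in front of the pods is a ConfigMap mounted over /opt/flink/conf. The manifest below is a minimal sketch; the names are made up and the log4j content is elided, so adapt it to your deployments:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config
data:
  flink-conf.yaml: |
    jobmanager.rpc.address: flink-jobmanager
    web.log.path: /opt/flink/log/output.log
    taskmanager.log.path: /opt/flink/log/output.log
  log4j-console.properties: |
    # the rolling-file log4j config from above goes here
```

Each jobmanager/taskmanager Deployment then mounts this ConfigMap as a volume at /opt/flink/conf, which is also how the official Flink-on-Kubernetes examples wire their configuration in.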
