How to configure logging system in one file on Python


Question

I have two files: the first is a TCP server, the second is a Flask app. They belong to one project, but each runs inside a separate Docker container. Since they are part of the same project, they should write logs to the same file. I tried to create my own logging library and imported it into both files. I have tried lots of things. First, I deleted the code below:

    if logger.hasHandlers():
        logger.handlers.clear()

When I delete it, I get the same logs twice.
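
A minimal sketch of what seems to be going on (logging.getLogger() returns the same object for a given name, so each call to a get_logger()-style factory stacks one more handler onto it):

import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)

# simulate calling the factory twice without clearing handlers
for _ in range(2):
    logger.addHandler(logging.StreamHandler())

logger.info("hello")  # emitted twice, once per attached handler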

My structure:

docker-compose
Dockerfile
loggingLib.py
app.py
tcp.py
requirements.txt
.
.
.

My final logging code:

from logging.handlers import RotatingFileHandler
from datetime import datetime
import logging
import time
import os, os.path

project_name = "proje_name"


def get_logger():
    if not os.path.exists("logs/"):
        os.makedirs("logs/")
    now = datetime.now()
    file_name = now.strftime(project_name + '-%H-%M-%d-%m-%Y.log')
    log_handler = RotatingFileHandler('logs/' + file_name, mode='a', maxBytes=10000000, backupCount=50)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(funcName)s - %(message)s  ', '%d-%b-%y %H:%M:%S')

    formatter.converter = time.gmtime
    log_handler.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel(level=logging.INFO)
    if logger.hasHandlers():
        logger.handlers.clear()
    logger.addHandler(log_handler)
    return logger

It works, but only in one file. If app.py runs first, only it writes logs; the other file doesn't log anything.

Answer

Anything that directly uses files – config files, log files, data files – is a little trickier to manage in Docker than running locally. For logs in particular, it's usually better to set your process to log directly to stdout. Docker will collect the logs, and you can review them with docker logs. In this setup, without changing your code, you can configure Docker to send the logs somewhere else or use a log collector like fluentd or logstash to manage the logs.

In your Python code, you will usually want to configure the detailed logging setup at the top level, on the root logger:

import logging

def main():
    logging.basicConfig(
        format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s  ',
        datefmt='%d-%b-%y %H:%M:%S',
        level=logging.INFO
    )
    ...
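
Note that basicConfig() writes to standard error by default; Docker captures both stdout and stderr, but if you specifically want standard output you can pass a stream explicitly. A minimal sketch:

import logging
import sys

def main():
    logging.basicConfig(
        stream=sys.stdout,  # default is sys.stderr; Docker collects either stream
        format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s',
        datefmt='%d-%b-%y %H:%M:%S',
        level=logging.INFO
    )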

and in each individual module you can just get a local logger, which will inherit the root logger's setup:

import logging
LOGGER = logging.getLogger(__name__)
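
For example, a hypothetical tcp.py (handle_connection and its argument are only illustrative) would log through its module logger and pick up whatever main() configured on the root logger:

# tcp.py (illustrative)
import logging

LOGGER = logging.getLogger(__name__)

def handle_connection(addr):
    # level, format, and handlers are inherited from the root logger
    LOGGER.info("accepted connection from %s", addr)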

With its default setup, Docker captures log messages into JSON files on disk. If you generate a large volume of log messages in a long-running container, this can exhaust the local disk (it has no effect on the memory available to processes). The Docker logging documentation advises using the local file logging driver, which does automatic log rotation. In a Compose setup you can specify logging: options:

version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: local
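
Unlike some log drivers that only forward messages elsewhere, the local driver still supports reading logs back with docker logs.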

You can also configure log rotation on the default JSON-file logging driver:

version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: json-file # default, can be omitted
      options:
        max-size: 10m
        max-file: "50" # quoted so YAML treats it as a string, as Compose expects

You "shouldn't" directly access the logs, but they are in a fairly stable format in /var/lib/docker, and tools like fluentd and logstash know how to collect them.

If you ever decide to run this application in a cluster environment like Kubernetes, it will have its own log-management system, but again one designed around containers that log directly to their stdout. You would be able to run this application unmodified in Kubernetes, with appropriate cluster-level configuration to forward the logs somewhere. Retrieving a log file from opaque storage in a remote cluster can be tricky to set up.

