docker-py reading container logs as a generator hangs


Problem description


I am using docker-py to read container logs as a stream, by setting the stream flag to True as indicated in the docs. Basically, I am iterating through all my containers, reading each one's logs in as a generator, and writing them out to a file like the following:

for service in service_names:
    dkg = self.container.logs(service, stream=True)
    with open(path, 'wb') as output_file:
        try:
            while True:
                line = next(dkg).decode("utf-8")
                print('line is: ' + str(line))
                if not line or "\n" not in line:  # none of these work
                    print('Breaking...')
                    break
                output_file.write(str(line.strip()))

        except Exception as exc:                  # nor this
            print('an exception occurred: ' + str(exc))


However, it only reads the first service and hangs at the end of the file. It doesn't break out of the loop, nor does it raise an exception (e.g. a StopIteration exception). According to the docs, if stream=True it should return a generator. I printed out the generator type and it shows up as docker.types.daemon.CancellableStream, so I don't think it follows the traditional Python generator protocol and raises an exception when we hit the end of the container log generator and call next().
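For contrast, a plain Python generator does raise StopIteration once it is exhausted, which is the behaviour the while loop above is counting on (a minimal illustration, unrelated to docker-py):

```python
def plain_generator():
    # An ordinary generator: it has a definite end.
    yield b"line 1\n"
    yield b"line 2\n"

gen = plain_generator()
print(next(gen))  # b'line 1\n'
print(next(gen))  # b'line 2\n'

try:
    next(gen)  # the generator is exhausted, so this raises
except StopIteration:
    print("StopIteration raised as expected")
```

A CancellableStream wraps a live HTTP response from the Docker daemon, so whether it ever reaches this exhausted state depends on whether the daemon closes the stream.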


As you can see, I've tried checking whether the line is falsy or contains a newline, and even catching any type of exception, but no luck. Is there another way I can determine that the stream for a service has ended, break out of the while loop, and continue writing the next service? The reason I wanted to use a stream is that the large amount of data was causing my system to run low on memory, so I prefer to use a generator.

Recommended answer


The problem is that the stream doesn't really stop until the container is stopped; it just pauses, waiting for the next data to arrive. To illustrate this: while it hangs on the first container, if you run docker stop on that container, you'll get a StopIteration exception and your for loop will move on to the next container's logs.
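This pause-rather-than-stop behaviour can be simulated without Docker at all, using a queue-backed generator as a stand-in for the following log stream (fake_log_stream is a hypothetical name; no docker-py involved):

```python
import queue

def fake_log_stream(q):
    """Yield items from a queue; q.get() blocks when the queue is
    empty, just as a following log stream waits for new output."""
    while True:
        yield q.get()

q = queue.Queue()
q.put(b"line 1\n")
q.put(b"line 2\n")

stream = fake_log_stream(q)
print(next(stream))  # b'line 1\n'
print(next(stream))  # b'line 2\n'

# A third next(stream) would block forever: the generator is not
# exhausted, it is waiting inside q.get() for more data -- which is
# exactly what the hanging while-loop in the question runs into.
```

No StopIteration is ever raised here because the generator never returns; it only blocks, like a log stream that follows a running container.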


You can tell .logs() not to follow the logs by using follow=False. Curiously, the docs say the default value is False, but that doesn't seem to be the case, at least not when streaming.


I experienced the same problem you did, and this excerpt of code using follow=False does not hang on the first container's logs:

import docker

client = docker.from_env()
container_names = ['container1', 'container2', 'container3']
for container_name in container_names:
    dkg = client.containers.get(container_name).logs(stream=True, follow=False)
    try:
        while True:
            line = next(dkg).decode("utf-8")
            print(line)
    except StopIteration:
        print(f'log stream ended for {container_name}')

