Logging with Docker and Kubernetes: logs over 16K split up


Question


I am using Docker version 17.12.1-ce and Kubernetes version v1.10.11.

My application prints its logs in JSON format to the console. One of the fields is stackTrace, which can contain a huge stack trace.

The problem is that the log message is split up into two messages. So if I look at /var/lib/docker/containers/ ... .log, I see two messages. I read that this is done for security reasons, but I don't really understand what I can do about that.

Should I cut my stackTrace? Or customize the size? Is this permitted? Is it the correct way to deal with this issue?

P.S. I am using the json-file logging driver.

Solution

This is expected behavior. Docker chunks log messages at 16K because it uses a 16K buffer for log messages. If a message exceeds 16K, the json-file logger splits it, and it should be merged back together at the endpoint.
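As a rough illustration of what that merging involves: each chunk of a split message appears as its own JSON line in the container's log file, and only the final chunk's `log` field ends with a newline. A minimal Python sketch that reassembles such chunks (the sample lines below are invented, not taken from the question):

```python
import json

def merge_chunks(lines):
    """Reassemble Docker json-file log lines split at the 16K buffer.

    A chunk whose "log" field does not end with a newline is a partial
    message; keep buffering until a newline-terminated chunk arrives.
    """
    buffer = ""
    for line in lines:
        entry = json.loads(line)
        buffer += entry["log"]
        if buffer.endswith("\n"):
            yield buffer.rstrip("\n")
            buffer = ""
    if buffer:  # a trailing partial message that never got its final chunk
        yield buffer

# Example: one long message split into two chunks, then a normal line.
raw = [
    '{"log":"first half of a long message... ","stream":"stdout","time":"t1"}',
    '{"log":"second half\\n","stream":"stdout","time":"t2"}',
    '{"log":"short line\\n","stream":"stdout","time":"t3"}',
]
print(list(merge_chunks(raw)))
```

This is essentially what log-shipping agents do when they re-join partial messages before forwarding them.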

Docker does mark the log as a partial message, but it is up to the downstream driver/service to re-assemble it.

The Docker docs mention the different supported logging drivers.

With your architecture (stack traces), the json-file driver might not be the best option.

I've also found this thread on GitHub that adds more information on the topic (along with a lot of off-topic discussion).

Edit.

The Logging Architecture documentation says that everything a containerized application writes to stdout and stderr is handled and redirected somewhere by the container engine.

The Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in json format.

Note: The Docker json logging driver treats each line as a separate message. Another peculiarity is that when using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.

I don't really understand what I can do about that.

It's a limitation on Docker's message size. Here is another good discussion that ends with the idea of using filebeat/fluentd.

It looks like the Docker_mode option for Fluentbit might help, but I'm not sure how exactly you are parsing container logs.
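For reference, here is a sketch of what a Fluent Bit tail input with that option enabled might look like; the path and tag are assumptions about a typical Kubernetes node layout, not something taken from the question:

```ini
[INPUT]
    Name         tail
    Path         /var/lib/docker/containers/*/*.log
    Tag          kube.*
    Parser       docker
    # Docker_Mode joins log lines that the json-file driver split at 16K
    Docker_Mode  On
```

Whether this works for you depends on how your agent is tailing and parsing the container log files.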

Should I cut my stackTrace?

It depends on whether you need the traces in your logs or not.
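If you do decide to cut the trace, one option is to truncate it in the application before serializing, so the whole JSON line stays under Docker's 16K buffer. A hedged sketch (the 15000-character limit and the field names are assumptions for illustration, not part of the original question):

```python
import json
import traceback

# Stay safely below Docker's 16K per-message buffer; this margin is a guess
# that leaves room for the other JSON fields and encoding overhead.
MAX_TRACE_CHARS = 15000

def log_error(message, exc):
    """Print a one-line JSON log entry with a bounded stackTrace field."""
    trace = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )
    if len(trace) > MAX_TRACE_CHARS:
        trace = trace[:MAX_TRACE_CHARS] + "...[truncated]"
    print(json.dumps({"message": message, "stackTrace": trace}))

# Usage example:
try:
    raise ValueError("boom")
except ValueError as e:
    log_error("request failed", e)
```

The trade-off is losing the tail of very deep traces, but every log entry stays a single, unsplit line.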

Or customize the size? I have searched for some kind of "knob" to adjust on the Docker side, but can't find any as of now.

It looks like the only solution for that is to use some log processing tool that can combine split lines.
