Logging with Docker and Kubernetes. Logs more than 16k split up


Problem description

I am using Docker version 17.12.1-ce and Kubernetes version v1.10.11.

My application prints its log in JSON format to the console. One of the fields is stackTrace, which can contain a very large stack trace.

The problem is that the log message is split into two messages. So if I look at /var/lib/docker/containers/ ... .log I see two messages. I read that this is done for security reasons, but I don't really understand what I can do about it.

Should I cut my stackTrace? Or customize the size? Is that allowed? Is it the correct way to deal with this issue?

P.S. I am using the json-file logging driver.

Answer

This is expected behavior. Docker chunks log messages at 16K because it uses a 16K buffer for log messages. If a message exceeds 16K, the json-file logger splits it, and it is up to the consuming endpoint to merge the pieces back together.
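
With the json-file driver, each chunk becomes its own record in the container's log file. A sketch of what the two records might look like (timestamps and payload are invented for illustration; the real detail is that the first record's "log" value does not end with a `\n`, which is how the split is marked):

```json
{"log":"{\"level\":\"ERROR\",\"stackTrace\":\"... first 16K of the message","stream":"stdout","time":"2019-01-01T00:00:00.000000000Z"}
{"log":"... remainder of the message\n","stream":"stdout","time":"2019-01-01T00:00:00.000000001Z"}
```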

Docker does mark the log as a partial message, but it is up to the driver/service to reassemble it.
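
Since a split chunk in the json-file format can be recognized by its missing trailing newline in the "log" field, a consumer can merge chunks back into one logical message. A minimal Python sketch (the record strings here are invented stand-ins for real json-file lines):

```python
import json

def reassemble(lines):
    """Merge Docker json-file records: a record whose "log" value does
    not end with a newline was split at the 16K buffer and continues
    in the next record."""
    buffered = ""
    for line in lines:
        buffered += json.loads(line)["log"]
        if buffered.endswith("\n"):
            yield buffered.rstrip("\n")
            buffered = ""
    if buffered:  # trailing partial chunk that never got its newline
        yield buffered

# Two invented records standing in for one long application message:
records = [
    '{"log":"first half of a long message... ","stream":"stdout","time":"t1"}',
    '{"log":"...second half\\n","stream":"stdout","time":"t2"}',
]
merged = list(reassemble(records))
print(merged)
```

This is essentially what a logging agent has to do at "the endpoint" mentioned above.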

The Docker Docs mention that there are different supported logging drivers.

With your architecture (stack traces), the json-file driver might not be the best option.

And I've found this thread on GitHub, which adds more information on the topic (along with a lot of off-topic discussion).

Edit:

Logging Architecture says that everything a containerized application writes to stdout and stderr is handled and redirected somewhere by the container engine.

The Docker container engine redirects those two streams to a logging driver, which in Kubernetes is configured to write to a file in JSON format.

Note: the Docker JSON logging driver treats each line as a separate message. Another peculiarity is that there is no direct support for multi-line messages when using the Docker logging driver; you need to handle multi-line messages at the logging-agent level or above.
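
One way around the one-line-per-message behavior, assuming you control the application's log format, is to emit each record as a single JSON line: `json.dumps` escapes the newlines inside the stack trace, so the driver sees exactly one message (until the 16K limit kicks in). A Python sketch:

```python
import json
import sys
import traceback

def format_record(message):
    """Build a single-line JSON log record; json.dumps escapes the
    newlines inside the stack trace, so the json-file driver sees
    the whole record as one message (up to the 16K limit)."""
    return json.dumps({
        "level": "ERROR",
        "message": message,
        "stackTrace": traceback.format_exc(),
    })

try:
    1 / 0
except ZeroDivisionError:
    line = format_record("division failed")
    print(line, file=sys.stdout, flush=True)
```

The field names here mirror the stackTrace field from the question; adapt them to your own schema.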

"I don't really understand what I can do with that?"

It's a limitation of Docker's message size. Here is another good discussion that ends up suggesting filebeat/fluentd.

It looks like Fluent Bit's Docker_Mode option might help, but I'm not sure how exactly you are parsing container logs.
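
For reference, a minimal Fluent Bit tail input with Docker_Mode enabled might look like this (the path is an assumption about where your node keeps container logs; adjust it to your setup):

```ini
[INPUT]
    Name         tail
    Path         /var/lib/docker/containers/*/*-json.log
    Parser       docker
    Docker_Mode  On
    Tag          docker.*
```

With Docker_Mode on, Fluent Bit concatenates lines that Docker split at the 16K limit before passing them downstream.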

"Should I cut my stackTrace?"

That depends on whether you need the traces in your logs or not.
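
If you do decide to cut the trace in the application, a hedged sketch of one approach: trim the stackTrace field until the serialized line fits in Docker's 16K buffer (the limit discussed above; the field names follow the question, and the record is assumed to always contain a stackTrace key):

```python
import json

DOCKER_BUF = 16 * 1024  # Docker's per-message log buffer

def truncate_record(record, limit=DOCKER_BUF):
    """Trim the stackTrace field so the serialized JSON line plus its
    trailing newline fits in `limit` bytes. Removing N characters
    removes at least N bytes from the serialization, so the result
    is guaranteed to fit (as long as the other fields are small)."""
    line = json.dumps(record)
    overshoot = len(line.encode("utf-8")) + 1 - limit
    if overshoot > 0:
        trace = record["stackTrace"]
        record = {**record, "stackTrace": trace[: max(0, len(trace) - overshoot)]}
        line = json.dumps(record)
    return line

# An invented oversized record for illustration:
record = {"level": "ERROR", "message": "boom", "stackTrace": "x" * 20000}
line = truncate_record(record)
```

This loses the tail of the trace, so it only makes sense if you don't need full traces in your logs.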

"Or customize the size?" I have searched for some kind of "knob" to adjust on the Docker side and can't find one as of now.

It looks like the only solution is to use a log-processing tool that can combine the split lines.
