How to Send Kubernetes Logs to AWS CloudWatch?
Question
Setting the AWS CloudWatch Logs driver in Docker is done with `log-driver=awslogs` and `log-opt`, for example:
```shell
#!/bin/bash
docker run \
    --log-driver=awslogs \
    --log-opt awslogs-region=eu-central-1 \
    --log-opt awslogs-group=whatever-group \
    --log-opt awslogs-stream=whatever-stream \
    --log-opt awslogs-create-group=true \
    wernight/funbox fortune
```
My Problem

I would like to use AWS CloudWatch Logs in a Kubernetes cluster, where each pod contains a few Docker containers. Each deployment would have a separate log group, and each container would have a separate stream. I could not find a way to send the logging parameters to the Docker containers via kubectl `create` / `apply`.
How can I send the `log-driver` and `log-opt` parameters to a Docker container in a pod / deployment?
For the record, I have already tried:

- Setting the relevant parameters for the Docker daemon on each machine. It's possible, but then all the containers on the same machine would share the same stream, so it is irrelevant for my case.
- RTFM for `kubectl apply`
- Reading the relevant README in `kops`
- Reading the Kubernetes Logging Architecture documentation
Answer
From what I understand, Kubernetes prefers cluster-level logging to the Docker logging driver.
We could use fluentd to collect, transform, and push container logs to CloudWatch Logs.
All you need is to create a fluentd DaemonSet with a ConfigMap and a Secret. The files can be found on GitHub. This has been tested with Kubernetes v1.7.5.
Some explanation of how it works follows.
With a DaemonSet, fluentd collects every container's logs from the host folder `/var/lib/docker/containers`.
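As a rough sketch of such a DaemonSet (the image, resource names, and the AWS-credentials Secret below are assumptions for illustration, not taken from the answer's linked files), the host folder would be mounted read-only into the fluentd pod on every node:

```yaml
apiVersion: apps/v1  # older clusters such as v1.7 used extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-cloudwatch        # name assumed
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd-cloudwatch
  template:
    metadata:
      labels:
        app: fluentd-cloudwatch
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-cloudwatch  # image assumed
          envFrom:
            - secretRef:
                name: fluentd-aws-credentials   # Secret holding AWS keys; name assumed
          volumeMounts:
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config
              mountPath: /fluentd/etc
      volumes:
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers   # the host folder fluentd tails
        - name: config
          configMap:
            name: fluentd-config               # ConfigMap with fluent.conf; name assumed
```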
The fluent-plugin-kubernetes_metadata_filter plugin loads the pod's metadata from the Kubernetes API server.
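In the fluentd configuration, enabling this plugin is a single filter block (a sketch, assuming container logs are tailed with tags under `kubernetes.*`):

```
# Enrich each record with pod metadata from the Kubernetes API server.
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>
```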
The log record would look like this:
```json
{
  "log": "INFO: 2017/10/02 06:44:13.214543 Discovered remote MAC 62:a1:3d:f6:eb:65 at 62:a1:3d:f6:eb:65(kube-235)\n",
  "stream": "stderr",
  "docker": {
    "container_id": "5b15e87886a7ca5f7ebc73a15aa9091c9c0f880ee2974515749e16710367462c"
  },
  "kubernetes": {
    "container_name": "weave",
    "namespace_name": "kube-system",
    "pod_name": "weave-net-4n4kc",
    "pod_id": "ac4bdfc1-9dc0-11e7-8b62-005056b549b6",
    "labels": {
      "controller-revision-hash": "2720543195",
      "name": "weave-net",
      "pod-template-generation": "1"
    },
    "host": "kube-234",
    "master_url": "https://10.96.0.1:443/api"
  }
}
```
Make some tags with the fluentd record_transformer filter plugin, so the record becomes:
```
{
  "log": "...",
  "stream": "stderr",
  "docker": {
    ...
  },
  "kubernetes": {
    ...
  },
  "pod_name": "weave-net-4n4kc",
  "container_name": "weave"
}
```
Finally, the fluent-plugin-cloudwatch-logs output plugin sends the records to AWS CloudWatch Logs.
With the `log_group_name_key` and `log_stream_name_key` configuration, the log group and stream name can be any field of the record.
```
<match kubernetes.**>
  @type cloudwatch_logs
  log_group_name_key pod_name
  log_stream_name_key container_name
  auto_create_stream true
  put_log_events_retry_limit 20
</match>
```