How to Send Kubernetes Logs to AWS CloudWatch?


Problem Description

AWS CloudWatch Logs in Docker

Setting the AWS CloudWatch Logs driver in Docker is done with log-driver=awslogs and log-opt, for example -

#!/bin/bash

docker run \
    --log-driver=awslogs \
    --log-opt awslogs-region=eu-central-1 \
    --log-opt awslogs-group=whatever-group \
    --log-opt awslogs-stream=whatever-stream \
    --log-opt awslogs-create-group=true \
    wernight/funbox \
        fortune

My Problem

I would like to use AWS CloudWatch Logs in a Kubernetes cluster, where each pod contains a few Docker containers. Each deployment would have a separate log group, and each container would have a separate stream. I could not find a way to send the logging parameters to the Docker containers via Kubernetes create / apply.

My Question

How can I send the log-driver and log-opt parameters to the Docker containers in a pod / deployment?

What I have tried

  • Setting the relevant parameters for the Docker daemon on each machine. This is possible, but this way all the containers on the same machine would share the same stream - so it is irrelevant to my case.
  • RTFM for kubectl apply
  • Reading the Kubernetes Logging Architecture documentation

Solution

From what I understand, Kubernetes prefers cluster-level logging over the Docker logging driver.

We can use fluentd to collect, transform, and push container logs to CloudWatch Logs.

All you need to do is create a fluentd DaemonSet with a ConfigMap and a Secret. The files can be found on Github. It has been tested with Kubernetes v1.7.5.
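As a sketch of how the Secret plugs in (the Secret name and key names below are hypothetical, not necessarily those used in the linked repo), the DaemonSet's pod spec can surface the AWS credentials to fluentd through the standard environment variables and mount the host log folder:

```
# Fragment of the fluentd DaemonSet pod spec (sketch; names are hypothetical).
containers:
  - name: fluentd-cloudwatch
    image: fluent/fluentd-kubernetes-daemonset
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: fluentd-aws-credentials
            key: aws_access_key_id
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: fluentd-aws-credentials
            key: aws_secret_access_key
    volumeMounts:
      - name: varlibdockercontainers
        mountPath: /var/lib/docker/containers
        readOnly: true
```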

Below is a brief explanation.

In

With the DaemonSet, fluentd collects every container's logs from the host folder /var/lib/docker/containers.
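The collection step might look like the following in_tail source (a minimal sketch; the paths and options in the actual repo files may differ):

```
<source>
  @type tail
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/fluentd-docker.pos
  tag kubernetes.*
  format json
  read_from_head true
</source>
```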

Filter

The fluent-plugin-kubernetes_metadata_filter plugin loads the pod's metadata from the Kubernetes API server.
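Wiring that plugin in is a short filter section in the fluentd config (sketch):

```
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>
```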

The log record will look like this.

{
    "log": "INFO: 2017/10/02 06:44:13.214543 Discovered remote MAC 62:a1:3d:f6:eb:65 at 62:a1:3d:f6:eb:65(kube-235)\n",
    "stream": "stderr",
    "docker": {
        "container_id": "5b15e87886a7ca5f7ebc73a15aa9091c9c0f880ee2974515749e16710367462c"
    },
    "kubernetes": {
        "container_name": "weave",
        "namespace_name": "kube-system",
        "pod_name": "weave-net-4n4kc",
        "pod_id": "ac4bdfc1-9dc0-11e7-8b62-005056b549b6",
        "labels": {
            "controller-revision-hash": "2720543195",
            "name": "weave-net",
            "pod-template-generation": "1"
        },
        "host": "kube-234",
        "master_url": "https://10.96.0.1:443/api"
    }
}

Copy some fields to the top level of the record with the Fluentd record_transformer filter plugin.

{
    "log": "...",
    "stream": "stderr",
    "docker": {
        ...
    },
    "kubernetes": {
        ...
    },
    "pod_name": "weave-net-4n4kc",
    "container_name": "weave"
}
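A record_transformer filter along these lines would produce the record above (a sketch; enable_ruby is assumed here to allow the nested field access):

```
<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    pod_name ${record["kubernetes"]["pod_name"]}
    container_name ${record["kubernetes"]["container_name"]}
  </record>
</filter>
```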

Out

The fluent-plugin-cloudwatch-logs plugin sends the logs to AWS CloudWatch Logs.

With the log_group_name_key and log_stream_name_key configuration, the log group and stream name can be any field of the record.

<match kubernetes.**>
  @type cloudwatch_logs
  log_group_name_key pod_name
  log_stream_name_key container_name
  auto_create_stream true
  put_log_events_retry_limit 20
</match>
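Note that the match block above does not name an AWS region; the plugin can take it from the standard AWS environment variables, or it can be set explicitly, e.g. (sketch):

```
<match kubernetes.**>
  @type cloudwatch_logs
  region eu-central-1
  log_group_name_key pod_name
  log_stream_name_key container_name
  auto_create_stream true
  put_log_events_retry_limit 20
</match>
```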


