Streaming Cloudwatch Logs to Amazon ES


Problem description

I'm using Fargate to deploy my application. To capture the container logs, I'm using awslogs as the log driver. Now I want to ship my logs to the Amazon ES service. While going through the docs for shipping, I came across a note that says:

Streaming large amounts of CloudWatch Logs data to other destinations might result in high usage charges.

I want to understand exactly what I will be billed for while shipping the logs to ELK. How do they define "large amounts"?

Will I be billed for:

a) Cloudwatch?

b) The log driver?

c) The Lambda function? Does every log line trigger a Lambda function?

Lastly, is there still a possibility of lowering the cost further?

Answer

Personally, I would look at running Fluentd or Fluent Bit in another container alongside your application: https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch

You can then send your logs directly to ES without any CloudWatch costs.
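A minimal sketch of that Elasticsearch output, assuming a self-managed Fluent Bit sidecar; the host, index, and type values are placeholders, and depending on your domain's access policy you may also need authentication options not shown here:

    # Forward every matched record to the Amazon ES domain over TLS
    [OUTPUT]
        Name   es
        Match  *
        Host   YOUR_ES_DOMAIN_URL
        Port   443
        tls    On
        Index  INDEX_NAME
        Type   TYPE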

Edit

Here's the final solution, just in case someone is looking for a cheaper option.

Run Fluentd/Fluent Bit in another container alongside your application.

Using the GitHub config, I was able to forward the logs to ES with the configuration below.

{
    "family": "workflow",
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
        {
            "name": "log_router",
            "image": "docker.io/amazon/aws-for-fluent-bit:latest",
            "essential": true,
            "firelensConfiguration": {
                "type": "fluentbit",
                "options":{
                   "enable-ecs-log-metadata":"true"
                }
            },
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "your_log_group",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            },
            "memoryReservation": 50
        },
        {
            "name": "ContainerName",
            "image": "YourImage",
            "cpu": 0,
            "memoryReservation": 128,
            "portMappings": [
                {
                    "containerPort": 5005,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "command": [
                "YOUR COMMAND"
            ],
            "environment": [],
            "logConfiguration": {
                "logDriver": "awsfirelens",
                "secretOptions": [],
                "options": {
                    "Name": "es",
                    "Host": "YOUR_ES_DOMAIN_URL",
                    "Port": "443",
                    "tls": "On",
                    "Index": "INDEX_NAME",
                    "Type": "TYPE"
                }
            },
            "resourceRequirements": []
        }
    ]
}

The log_router container collects the logs and ships them to ES. For more info, refer to Custom Log Routing.
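As a usage note (the file name below is just a placeholder for wherever you save the JSON above), a task definition like this is registered with the standard ECS CLI call:

    aws ecs register-task-definition --cli-input-json file://task-definition.json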

Please note that the log_router container is required in the case of Fargate, but not with ECS on EC2.

This is the cheapest solution I know of; it does not involve CloudWatch, Lambdas, or Kinesis.
