My AWS Cloudwatch bill is huge. How do I work out which log stream is causing it?


Question

I got a $1,200 invoice from Amazon for CloudWatch services last month (specifically for 2 TB of log data ingestion under "AmazonCloudWatch PutLogEvents"), when I was expecting a few tens of dollars. I've logged into the CloudWatch section of the AWS Console and can see that one of my log groups used about 2 TB of data, but there are thousands of different log streams in that log group. How can I tell which one used that amount of data?
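As a sanity check, the invoice size is consistent with the ingestion volume. A minimal sketch, assuming the us-east-1 list price for CloudWatch Logs ingestion of about $0.50/GB (the exact rate is an assumption here; check current AWS pricing):

```python
# Back-of-the-envelope check: at an assumed ingestion rate of $0.50/GB,
# 2 TB of PutLogEvents data alone explains most of a ~$1,200 invoice.
PRICE_PER_GB = 0.50          # USD; assumption, verify against AWS pricing
ingested_gb = 2 * 1024       # 2 TB expressed in GB
cost = ingested_gb * PRICE_PER_GB
print(f"${cost:,.2f}")       # $1,024.00
```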

Answer

On the CloudWatch console, use the IncomingBytes metric on the Metrics page to find the amount of data (in uncompressed bytes) ingested by each log group over a particular time period. Follow these steps:

  1. Go to the CloudWatch Metrics page and click the AWS namespace "Logs" --> "Log Group Metrics".
  2. Select the IncomingBytes metric for the desired log groups, then click the "Graphed metrics" tab to see the graph.
  3. Change the start and end times so that they are 30 days apart, and change the period to 30 days. This way, you get only one data point. Also change the graph type to Number and the statistic to Sum.

This way, you will see the amount of data ingested by each log group and get an idea of which log group is ingesting how much.

You can also achieve the same result using the AWS CLI. For example, if you just want to know the total amount of data ingested by log groups over, say, 30 days, you can use the get-metric-statistics CLI command:

Sample CLI command:

aws cloudwatch get-metric-statistics --metric-name IncomingBytes --start-time 2018-05-01T00:00:00Z --end-time 2018-05-30T23:59:59Z --period 2592000 --namespace AWS/Logs --statistics Sum --region us-east-1

Sample output:

{
    "Datapoints": [
        {
            "Timestamp": "2018-05-01T00:00:00Z", 
            "Sum": 1686361672.0, 
            "Unit": "Bytes"
        }
    ], 
    "Label": "IncomingBytes"
}
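To make the Sum readable, the output can be parsed and converted to GiB, for example with a short standard-library Python snippet (a sketch using the sample output above):

```python
import json

# Parse the sample get-metric-statistics output and convert the
# ingested byte count into GiB for readability.
output = json.loads("""
{
    "Datapoints": [
        {"Timestamp": "2018-05-01T00:00:00Z", "Sum": 1686361672.0, "Unit": "Bytes"}
    ],
    "Label": "IncomingBytes"
}
""")
ingested_bytes = output["Datapoints"][0]["Sum"]
print(f"{ingested_bytes / 1024**3:.2f} GiB ingested")  # ~1.57 GiB
```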

To find the same for a particular log group, you can change the command to include dimensions, like:

aws cloudwatch get-metric-statistics --metric-name IncomingBytes --start-time 2018-05-01T00:00:00Z --end-time 2018-05-30T23:59:59Z --period 2592000 --namespace AWS/Logs --statistics Sum --region us-east-1 --dimensions Name=LogGroupName,Value=test1

You can run this command on each log group one by one, check which log group is responsible for most of the data-ingestion bill, and take corrective measures.
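The per-group command above can also be generated programmatically for a list of log group names (which in practice would come from `aws logs describe-log-groups`). A minimal sketch; the helper name and the example group names are made up for illustration:

```python
# Hypothetical helper that builds the per-log-group CLI command shown
# above, so it can be run for each log group in a loop.
def build_command(log_group,
                  start="2018-05-01T00:00:00Z",
                  end="2018-05-30T23:59:59Z",
                  region="us-east-1"):
    return (
        "aws cloudwatch get-metric-statistics --metric-name IncomingBytes "
        f"--start-time {start} --end-time {end} --period 2592000 "
        "--namespace AWS/Logs --statistics Sum "
        f"--region {region} --dimensions Name=LogGroupName,Value={log_group}"
    )

for group in ["test1", "test2"]:   # example names; substitute your own
    print(build_command(group))
```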

NOTE: Change the parameters to suit your environment and requirements.

The solution provided by the OP gives the amount of logs stored, which is different from the amount of logs ingested.

What is the difference?

Data ingested per month is not the same as data storage bytes. After data is ingested to CloudWatch, it is archived by CloudWatch, which adds 26 bytes of metadata per log event and compresses the data using gzip level 6 compression. So the storage bytes refer to the storage space Cloudwatch uses to store the logs after they're ingested.
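The effect of that archiving step can be sketched with Python's standard gzip module (the event payload and the 26-byte metadata stand-in below are made up for illustration):

```python
import gzip

# Illustrative only: CloudWatch adds 26 bytes of metadata per log event,
# then compresses the archive with gzip level 6. For repetitive log data
# the stored size ends up far below the ingested (billed) size.
events = [b'{"level":"INFO","msg":"request handled in 12ms"}'] * 1000
ingested = sum(len(e) for e in events)                  # billed IncomingBytes
archived = b"".join(e + b"\x00" * 26 for e in events)   # metadata stand-in
stored = len(gzip.compress(archived, compresslevel=6))  # gzip level 6
print(ingested, stored)
```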

Reference: https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-statistics.html

