My AWS Cloudwatch bill is huge. How do I work out which log stream is causing it?


Problem description


I got a $1,200 invoice from Amazon for Cloudwatch services last month (specifically for 2 TB of log data ingestion in "AmazonCloudWatch PutLogEvents"), when I was expecting a few tens of dollars. I've logged into the Cloudwatch section of the AWS Console and can see that one of my log groups used about 2 TB of data, but there are thousands of different log streams in that log group. How can I tell which one used that amount of data?

Recommended answer

On the CloudWatch console, use the IncomingBytes metric to find the amount of data ingested by each log group over a particular time period, in uncompressed bytes, via the Metrics page. Follow the steps below:


  1. Go to the CloudWatch Metrics page and click the AWS namespace Logs -> Log Group Metrics.

  2. Select the IncomingBytes metric for the required log groups and click the Graphed metrics tab to see the graph.

  3. Change the start and end times so that the difference between them is 30 days, and change the period to 30 days. This way, we get only one data point per log group.

This way, you will see the amount of data ingested by each log group and get an idea of which log group is ingesting how much.

You can also achieve the same result using the AWS CLI. For example, if you just want to know the total amount of data ingested by your log groups over, say, 30 days, you can use the get-metric-statistics CLI command.

Sample CLI command:

aws cloudwatch get-metric-statistics --metric-name IncomingBytes --start-time 2018-05-01T00:00:00Z --end-time 2018-05-30T23:59:59Z --period 2592000 --namespace AWS/Logs --statistics Sum --region us-east-1

Sample output:

{
    "Datapoints": [
        {
            "Timestamp": "2018-05-01T00:00:00Z", 
            "Sum": 1686361672.0, 
            "Unit": "Bytes"
        }
    ], 
    "Label": "IncomingBytes"
}
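As a quick sanity check, the Sum in the sample output above can be converted to gibibytes and a rough cost in a few lines of Python. This is only a sketch: the $0.50 per GB figure is an assumption based on the commonly quoted us-east-1 ingestion price at the time; check current CloudWatch Logs pricing for your region.

```python
import json

# The sample get-metric-statistics output from above
response = json.loads("""
{
    "Datapoints": [
        {"Timestamp": "2018-05-01T00:00:00Z", "Sum": 1686361672.0, "Unit": "Bytes"}
    ],
    "Label": "IncomingBytes"
}
""")

ingested_bytes = sum(dp["Sum"] for dp in response["Datapoints"])
ingested_gib = ingested_bytes / 2**30

# Assumed ingestion price -- $0.50/GB was the us-east-1 rate at the time;
# verify against current CloudWatch Logs pricing before relying on this.
PRICE_PER_GB = 0.50
estimated_cost = (ingested_bytes / 10**9) * PRICE_PER_GB

print(f"Ingested: {ingested_gib:.2f} GiB, estimated cost: ${estimated_cost:.2f}")
# -> Ingested: 1.57 GiB, estimated cost: $0.84
```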

To find the same for a particular log group, you can change the command to add the dimensions, like:

aws cloudwatch get-metric-statistics --metric-name IncomingBytes --start-time 2018-05-01T00:00:00Z --end-time 2018-05-30T23:59:59Z --period 2592000 --namespace AWS/Logs --statistics Sum --region us-east-1 --dimensions Name=LogGroupName,Value=test1

You can run this command on each log group, one by one, to check which log group is responsible for most of the data-ingestion bill, and take corrective measures.
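Running the command group by group and comparing the numbers by hand gets tedious, so the comparison step can be sketched in Python. The log-group names and byte counts below are made-up placeholders; in practice you would fill the dict by running the get-metric-statistics command shown above for each group returned by `aws logs describe-log-groups` (or the equivalent boto3 calls).

```python
def rank_by_ingestion(ingested_bytes_per_group):
    """Return (log_group, bytes) pairs sorted largest-first."""
    return sorted(ingested_bytes_per_group.items(),
                  key=lambda kv: kv[1], reverse=True)

# Placeholder data -- replace with real IncomingBytes sums per log group.
sample = {
    "/aws/lambda/test1": 1686361672.0,
    "/aws/lambda/test2": 2199023255552.0,  # ~2 TiB: the likely culprit
    "/ecs/web-app": 52428800.0,
}

for name, size in rank_by_ingestion(sample):
    print(f"{name}: {size / 2**30:.2f} GiB")
```

The group at the top of the list is the one to investigate first.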


NOTE: Change the parameters to match your environment and requirements.

The solution provided by the OP gives the amount of log data stored, which is different from the amount of log data ingested.

What is the difference?

Data ingested per month is not the same as data storage bytes. After data is ingested into CloudWatch, it is archived by CloudWatch, which adds 26 bytes of metadata per log event and compresses it using gzip level 6 compression. So storage bytes refers to the storage space used by Cloudwatch to store the logs after they're ingested.
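As a rough worked example of that distinction: the 26-byte metadata overhead comes from the source above, but the compression ratio below is purely an assumption, since actual gzip level 6 ratios depend heavily on how repetitive the log content is.

```python
# Rough illustration of ingested vs stored bytes.
# Assumptions (only the 26-byte overhead comes from the text above):
#   - 10 million log events of ~200 raw bytes each
#   - gzip level 6 shrinks typical text logs to roughly 20% of original size
events = 10_000_000
avg_event_bytes = 200
METADATA_BYTES_PER_EVENT = 26      # per-event metadata added by CloudWatch
ASSUMED_COMPRESSION_RATIO = 0.20   # assumption; varies with log content

ingested = events * avg_event_bytes
stored = (ingested + events * METADATA_BYTES_PER_EVENT) * ASSUMED_COMPRESSION_RATIO

print(f"Ingested: {ingested / 2**30:.2f} GiB")   # -> Ingested: 1.86 GiB
print(f"Stored:   {stored / 2**30:.2f} GiB")     # -> Stored:   0.42 GiB
```

This is why the storage-bytes view in the console can look far smaller than the IncomingBytes metric that actually drives the ingestion bill.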

Reference: https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-statistics.html

