AWS IAM Policies to connect AWS Cloudwatch Logs, Kinesis Firehose, S3 and ElasticSearch


Problem Description

I am trying to stream AWS Cloudwatch logs to ES via Kinesis Firehose. The Terraform code below gives an error. Any suggestions? The error is:

  • aws_cloudwatch_log_subscription_filter.test_kinesis_logfilter: 1 error(s) occurred:
  • aws_cloudwatch_log_subscription_filter.test_kinesis_logfilter: InvalidParameterException: Could not deliver test message to specified Firehose stream. Check if the given Firehose stream is in ACTIVE state.

resource "aws_s3_bucket" "bucket" {
  bucket = "cw-kinesis-es-bucket"
  acl    = "private"
}

resource "aws_iam_role" "firehose_role" {
  name = "firehose_test_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_elasticsearch_domain" "es" {
  domain_name           = "firehose-es-test"
  elasticsearch_version = "1.5"
  cluster_config {
    instance_type = "t2.micro.elasticsearch"
  }
  ebs_options {
    ebs_enabled = true
    volume_size = 10
  }

  advanced_options {
    "rest.action.multi.allow_explicit_index" = "true"
  }

  access_policies = <<CONFIG
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "es:*",
            "Principal": "*",
            "Effect": "Allow",
            "Condition": {
                "IpAddress": {"aws:SourceIp": ["xxxxx"]}
            }
        }
    ]
}
CONFIG

  snapshot_options {
    automated_snapshot_start_hour = 23
  }

  tags {
    Domain = "TestDomain"
  }
}

resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name        = "terraform-kinesis-firehose-test-stream"
  destination = "elasticsearch"

  s3_configuration {
    role_arn           = "${aws_iam_role.firehose_role.arn}"
    bucket_arn         = "${aws_s3_bucket.bucket.arn}"
    buffer_size        = 10
    buffer_interval    = 400
    compression_format = "GZIP"
  }

  elasticsearch_configuration {
    domain_arn = "${aws_elasticsearch_domain.es.arn}"
    role_arn   = "${aws_iam_role.firehose_role.arn}"
    index_name = "test"
    type_name  = "test"
  }
}

resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"
  assume_role_policy = <<EOF
  {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" {
  name            = "test_kinesis_logfilter"
  role_arn        = "${aws_iam_role.iam_for_lambda.arn}"
  log_group_name  = "loggorup.log"
  filter_pattern  = ""
  destination_arn = "${aws_kinesis_firehose_delivery_stream.test_stream.arn}"
}

Solution

In this configuration you are directing Cloudwatch Logs to send log records to Kinesis Firehose, which is in turn configured to write the data it receives to both S3 and ElasticSearch. Thus the AWS services you are using are talking to each other as follows:

Cloudwatch Logs → Kinesis Firehose → S3 and ElasticSearch

In order for one AWS service to talk to another the first service must assume a role that grants it access to do so. In IAM terminology, "assuming a role" means to temporarily act with the privileges granted to that role. An AWS IAM role has two key parts:

  • The assume role policy, that controls which services and/or users may assume the role.
  • The policies controlling what the role grants access to. This decides what a service or user can do once it has assumed the role.

Two separate roles are needed here. One role will grant Cloudwatch Logs access to talk to Kinesis Firehose, while the second will grant Kinesis Firehose access to talk to both S3 and ElasticSearch.

For the rest of this answer, I will assume that Terraform is running as a user with full administrative access to an AWS account. If this is not true, it will first be necessary to ensure that Terraform is running as an IAM principal that has access to create and pass roles.
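For the non-administrative case, the Terraform principal's own policy needs the relevant IAM management actions. The following is a minimal sketch under stated assumptions: the policy name is invented for illustration, the action list covers only what this configuration creates, and a real setup would likely also need delete and list actions for terraform destroy, plus narrower Resource ARNs.

```hcl
# Hypothetical policy for the principal running Terraform (not part of the
# original answer). Grants just enough IAM access to create the two roles,
# attach their inline policies, and pass them to the AWS services.
resource "aws_iam_policy" "terraform_role_management" {
  name = "terraform-role-management"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:GetRole",
        "iam:PutRolePolicy",
        "iam:GetRolePolicy",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
```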


Access for Cloudwatch Logs to Kinesis Firehose

In the example given in the question, the aws_cloudwatch_log_subscription_filter has a role_arn whose assume_role_policy is for AWS Lambda, so Cloudwatch Logs does not have access to assume this role.

To fix this, the assume role policy can be changed to use the service name for Cloudwatch Logs:

resource "aws_iam_role" "cloudwatch_logs" {
  name = "cloudwatch_logs_to_firehose"
  assume_role_policy = <<EOF
  {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "logs.us-east-1.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

The above permits the Cloudwatch Logs service to assume the role. Now the role needs an access policy that permits writing to the Firehose Delivery Stream:

resource "aws_iam_role_policy" "cloudwatch_logs" {
  role = "${aws_iam_role.cloudwatch_logs.name}"

  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:*"],
      "Resource": ["${aws_kinesis_firehose_delivery_stream.test_stream.arn}"]
    }
  ]
}
EOF
}

The above grants the Cloudwatch Logs service access to call into any Kinesis Firehose action as long as it targets the specific delivery stream created by this Terraform configuration. This is more access than is strictly necessary; for more information, see Actions and Condition Context Keys for Amazon Kinesis Firehose.
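For readers who want least privilege here: the subscription filter only needs to put records into the stream, so a narrower version of the access policy above can replace the firehose:* wildcard. The two actions below match the pattern AWS documents for Cloudwatch Logs subscription filters targeting Firehose, but verify them against the reference above before relying on them:

```hcl
# Narrower alternative to the firehose:* policy above (an assumption based
# on documented usage, not taken from the original answer).
resource "aws_iam_role_policy" "cloudwatch_logs" {
  role = "${aws_iam_role.cloudwatch_logs.name}"

  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": ["${aws_kinesis_firehose_delivery_stream.test_stream.arn}"]
    }
  ]
}
EOF
}
```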

To complete this, the aws_cloudwatch_log_subscription_filter resource must be updated to refer to this new role:

resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" {
  name            = "test_kinesis_logfilter"
  role_arn        = "${aws_iam_role.cloudwatch_logs.arn}"
  log_group_name  = "loggorup.log"
  filter_pattern  = ""
  destination_arn = "${aws_kinesis_firehose_delivery_stream.test_stream.arn}"

  # Wait until the role has required access before creating
  depends_on = ["aws_iam_role_policy.cloudwatch_logs"]
}

Unfortunately due to the internal design of AWS IAM, it can often take several minutes for a policy change to come into effect after Terraform submits it, so sometimes a policy-related error will occur when trying to create a new resource using a policy very soon after the policy itself was created. In this case, it's often sufficient to simply wait 10 minutes and then run Terraform again, at which point it should resume where it left off and retry creating the resource.
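If this race occurs repeatedly, one possible mitigation (not from the original answer) is an explicit delay between creating the policy and the resources that reference it, using the time_sleep resource from the hashicorp/time provider. Note that this provider requires a newer Terraform than the 0.11-era syntax shown elsewhere in this answer:

```hcl
# Speculative workaround for IAM propagation delay: wait a fixed interval
# after the policy exists before anything downstream is created.
# Requires Terraform 0.12+ and the hashicorp/time provider.
resource "time_sleep" "wait_for_iam_propagation" {
  depends_on      = [aws_iam_role_policy.cloudwatch_logs]
  create_duration = "30s"
}
```

The subscription filter would then set depends_on = [time_sleep.wait_for_iam_propagation] instead of depending on the policy directly.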


Access for Kinesis Firehose to S3 and Amazon ElasticSearch

The example given in the question already has an IAM role with a suitable assume role policy for Kinesis Firehose:

resource "aws_iam_role" "firehose_role" {
  name = "firehose_test_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

The above grants Kinesis Firehose access to assume this role. As before, this role also needs an access policy to grant users of the role access to the target S3 bucket:

resource "aws_iam_role_policy" "firehose_role" {
  role = "${aws_iam_role.firehose_role.name}"

  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["${aws_s3_bucket.bucket.arn}", "${aws_s3_bucket.bucket.arn}/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["es:ESHttpGet"],
      "Resource": ["${aws_elasticsearch_domain.es.arn}/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
          "logs:PutLogEvents"
      ],
      "Resource": [
          "arn:aws:logs:*:*:log-group:*:log-stream:*"
      ]
    }
  ]
}
EOF
}

The above policy allows Kinesis Firehose to perform any S3 action on the created bucket and its objects, to issue HTTP GET requests against the created ElasticSearch domain, and to write log events into any log stream in Cloudwatch Logs. The final part of this is not strictly necessary, but is important if logging is enabled for the Firehose Delivery Stream, since otherwise Kinesis Firehose would be unable to write its logs back to Cloudwatch Logs.

Again, this is more access than strictly necessary. For more information on the specific actions each service supports, see the Actions and Condition Context Keys reference pages for Amazon S3, Amazon Elasticsearch Service, and Amazon CloudWatch Logs in the IAM documentation.
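As a concrete illustration, a tighter version of the firehose_role policy might look like the following. The action lists are based on the patterns AWS documents for Firehose delivery to S3 and ElasticSearch destinations; treat them as a starting point and verify against the current documentation before use:

```hcl
# Narrower alternative to the wildcard policy above (an assumption based on
# AWS-documented Firehose destination policies, not from the original answer).
resource "aws_iam_role_policy" "firehose_role" {
  role = "${aws_iam_role.firehose_role.name}"

  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "${aws_s3_bucket.bucket.arn}",
        "${aws_s3_bucket.bucket.arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "es:DescribeElasticsearchDomain",
        "es:DescribeElasticsearchDomains",
        "es:DescribeElasticsearchDomainConfig",
        "es:ESHttpGet",
        "es:ESHttpPost",
        "es:ESHttpPut"
      ],
      "Resource": [
        "${aws_elasticsearch_domain.es.arn}",
        "${aws_elasticsearch_domain.es.arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["logs:PutLogEvents"],
      "Resource": ["arn:aws:logs:*:*:log-group:*:log-stream:*"]
    }
  ]
}
EOF
}
```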

Since this single role has access to write to both S3 and to ElasticSearch, it can be specified for both of these delivery configurations in the Kinesis Firehose delivery stream:

resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name        = "terraform-kinesis-firehose-test-stream"
  destination = "elasticsearch"

  s3_configuration {
    role_arn           = "${aws_iam_role.firehose_role.arn}"
    bucket_arn         = "${aws_s3_bucket.bucket.arn}"
    buffer_size        = 10
    buffer_interval    = 400
    compression_format = "GZIP"
  }

  elasticsearch_configuration {
    domain_arn = "${aws_elasticsearch_domain.es.arn}"
    role_arn   = "${aws_iam_role.firehose_role.arn}"
    index_name = "test"
    type_name  = "test"
  }

  # Wait until access has been granted before creating the firehose
  # delivery stream.
  depends_on = ["aws_iam_role_policy.firehose_role"]
}


With all of the above wiring complete, the services should have the access they need to connect the parts of this delivery pipeline.

This same general pattern applies to any connection between two AWS services. The important information needed for each case is:

  • The service name for the service that will initiate the requests, such as logs.us-east-1.amazonaws.com or firehose.amazonaws.com. These are unfortunately generally poorly documented and hard to find, but can usually be found in policy examples within each service's user guide.
  • The names of the actions that need to be granted. The full set of actions for each service can be found in AWS Service Actions and Condition Context Keys for Use in IAM Policies. Unfortunately again the documentation for specifically which actions are required for a given service-to-service integration is generally rather lacking, but in simple environments (notwithstanding any hard regulatory requirements or organizational policies around access) it usually suffices to grant access to all actions for a given service, using the wildcard syntax used in the above examples.
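The two bullet points above can be sketched as a reusable pair of resources. The variable names and role name below are illustrative only; substitute the real service name, actions, and target ARN for each integration:

```hcl
# Generic service-to-service role pattern (illustrative names throughout).
variable "source_service" {}  # e.g. "logs.us-east-1.amazonaws.com"
variable "allowed_actions" {
  type = "list"               # e.g. ["firehose:PutRecord", "firehose:PutRecordBatch"]
}
variable "target_arn" {}      # ARN of the resource being written to

resource "aws_iam_role" "service_link" {
  name = "example-service-link"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "${var.source_service}"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "service_link" {
  role = "${aws_iam_role.service_link.name}"

  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ${jsonencode(var.allowed_actions)},
      "Resource": ["${var.target_arn}"]
    }
  ]
}
EOF
}
```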
