Run aws_s3 task on remote with environment credentials from executor

Problem description

I would like to upload a file from a remote host to an S3 bucket, but with credentials from the local execution environment. Is that possible?

- name: Upload file
  hosts: '{{ target }}'
  gather_facts: false
  tasks:
    - name: copy file to bucket
      become: yes
      aws_s3:
        bucket: '{{ bucket_name }}'
        object: '{{ filename }}'
        src: '/var/log/{{ filename }}'
        mode: put

Is there any switch or option I could use? Ideally something like this:

AWS_PROFILE=MyProfile ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

That way I could specify the profile from my own local .aws/config file.
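
For reference, a named profile of that kind lives in the local AWS config files; a minimal sketch with placeholder values (the profile name and keys here are hypothetical):

# ~/.aws/credentials
[MyProfile]
aws_access_key_id = <KEY_ID>
aws_secret_access_key = <SECRET_KEY>

# ~/.aws/config
[profile MyProfile]
region = us-east-1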

Obviously, when running the playbook like this:

ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

I get the following error:

TASK [copy file to bucket] ******************************************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoCredentialsError: Unable to locate credentials
fatal: [somehost]: FAILED! => {"boto3_version": "1.7.50", "botocore_version": "1.10.50", "changed": false, "msg": "Failed while looking up bucket (during bucket_check) adverity-trash.: Unable to locate credentials"}

But when I try the following:

AWS_ACCESS_KEY=<OWN_VALID_KEY> AWS_SECRET_KEY=<OWN_VALID_SECRET> ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

Same error.

Ansible v2.6

Recommended answer

Here's a satisfying solution to my problem.

With the help of @einarc and Ansible hostvars, I was able to achieve remote upload with credentials coming from the local environment. Fact gathering was not necessary, and I used delegate_to to run some tasks locally. Everything is in one playbook:

- name: Transfer file
  hosts: '{{ target }}'
  gather_facts: false
  tasks:
    # lookup('env', ...) is evaluated on the control node, so these facts
    # capture the executor's credentials rather than the remote host's.
    - name: Set AWS KEY ID
      set_fact:
        aws_key_id: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
      delegate_to: 127.0.0.1
    - name: Set AWS SECRET
      set_fact:
        aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
      delegate_to: 127.0.0.1
    # Re-read the facts through hostvars so the tasks running on the
    # target host can use them.
    - name: Get AWS KEY ID
      set_fact:
        aws_key_id: "{{ hostvars[inventory_hostname]['aws_key_id'] }}"
    - name: Get AWS SECRET KEY
      set_fact:
        aws_secret_key: "{{ hostvars[inventory_hostname]['aws_secret_key'] }}"
    # aws_s3 requires boto3 on the host that runs the module.
    - name: ensure boto is available
      become: true
      pip:
        name: boto3
        state: present
    - name: copy file to bucket
      become: yes
      aws_s3:
        aws_access_key: '{{ aws_key_id }}'
        aws_secret_key: '{{ aws_secret_key }}'
        bucket: my-bucket
        object: '{{ filename }}'
        src: '/some/path/{{ filename }}'
        mode: put
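
With the playbook saved as transfer_to_s3.yml (the name the wrapper below also assumes), the invocation then looks like this; note that the env lookups read AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, not the AWS_ACCESS_KEY/AWS_SECRET_KEY names tried in the question:

AWS_ACCESS_KEY_ID=<OWN_VALID_KEY> AWS_SECRET_ACCESS_KEY=<OWN_VALID_SECRET> \
ansible-playbook transfer_to_s3.yml -e target=somehost -e filename=myfile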

Bonus: I found a way to avoid putting the AWS credentials on the command line explicitly.

I used the following bash wrapper to fetch the credentials from the config file with the help of aws-cli:

#!/bin/bash
# Usage: <wrapper> <aws-profile> <target-host> <filename>
# Pull the credentials for the given profile out of the local AWS config.
AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile "$1")
AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile "$1")

# Export them only for this single ansible-playbook invocation.
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
ansible-playbook transfer_to_s3.yml -e target=$2 -e filename=$3
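
Saved as, say, transfer.sh (a hypothetical name) and made executable, the wrapper is then called with the profile, target host, and filename:

chmod +x transfer.sh
./transfer.sh MyProfile somehost myfile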
