使用执行者提供的环境凭据在远程上运行aws_s3任务 [英] Run aws_s3 task on remote with environment credentials from executor
问题描述
我想将文件从远程主机上传到s3存储桶,但要使用本地执行环境中的凭据.有可能吗?
I would like to upload a file from remote host to a s3 bucket but with credentials from the local execution environment. Is that possible?
- name: Upload file
host: '{{target}}'
gather_facts : False
tasks:
- name: copy file to bucket
become: yes
aws_s3:
bucket={{bucket_name}}
object={{filename}}
src=/var/log/{{ filename }}
mode=put
我可以使用选项吗?最好的是这样的:
Is there any switch, option I could use?. The best would be something like that:
AWS_PROFILE=MyProfile ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile
因此,我可以从自己的本地 .aws/config
文件中指定配置文件.
So I could specify the profile from my own local .aws/config
file.
很明显,当像这样运行剧本时:
Obviously when running the playbook like this:
ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile
我遇到以下错误:
TASK [copy file to bucket] ******************************************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoCredentialsError: Unable to locate credentials
fatal: [somehost]: FAILED! => {"boto3_version": "1.7.50", "botocore_version": "1.10.50", "changed": false, "msg": "Failed while looking up bucket (during bucket_check) adverity-trash.: Unable to locate credentials"}
但是当我尝试以下操作时:
But when I try the following:
AWS_ACCESS_KEY=<OWN_VALID_KEY> AWS_SECRET_KEY=<OWN_VALID_SECRET> ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile
这是同样的错误.
Ansible v2.6
推荐答案
他是我问题的令人满意的解决方案.
He're a satisfying solution to my problem.
借助@einarc和ansible hostvars,我能够使用来自本地环境的凭据实现远程上传功能事实收集不是必需的,因此我使用了proxy_to在本地执行一些任务.一切都在一本剧本里
With help of @einarc and the ansible hostvars I was able to achieve a remote upload capability with credentials comming from local environment The facts gathering was not necessary and I used delegate_to to do some tasks locally. Everything is in one playbook
- name: Transfer file
hosts: '{{ target }}'
gather_facts : False
tasks:
- name: Set AWS KEY ID
set_fact: aws_key_id="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
delegate_to: 127.0.0.1
- name: Set AWS SECRET
set_fact: aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
delegate_to: 127.0.0.1
- name: Get AWS KEY ID
set_fact: aws_key_id={{hostvars[inventory_hostname]['aws_key_id']}}
- name: Get AWS SECRET KEY
set_fact: aws_secret_key={{hostvars[inventory_hostname]['aws_secret_key']}}
- name: ensure boto is available
become: true
pip: name=boto3 state=present
- name: copy file to bucket
become: yes
aws_s3:
aws_access_key={{aws_key_id}}
aws_secret_key={{aws_secret_key}}
bucket=my-bucket
object={{filename}}
src=/some/path/{{filename}}
mode=put
奖金:我找到了一种不将aws凭证明确放置在命令行中的方法.
Bonus: I found a way to not explicitly put the aws credentials in command line.
我已经使用以下bash包装器在 aws-cli
的帮助下从配置文件中获取凭据.
I've used the following bash wrapper to get the credentials from config file with the help of aws-cli
.
#!/bin/bash
AWS_ACCESS_KEY_ID=`aws configure get aws_access_key_id --profile $1`
AWS_SECRET_ACCESS_KEY=`aws configure get aws_secret_access_key --profile $1`
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
ansible-playbook transfer_to_s3.yml -e target=$2 -e filename=$3
这篇关于使用执行者提供的环境凭据在远程上运行aws_s3任务的文章就介绍到这了,希望我们推荐的答案对大家有所帮助,也希望大家多多支持IT屋!