Copy dynamoDB table to another aws account without S3
Question
I would like to copy all the dynamoDB tables to another aws account without using s3 to save the data. I saw solutions that copy a table with Data Pipeline, but they all use s3 to stage the data. I would like to skip the s3 step because the table contains a large amount of data, so the s3 write and s3 read steps may take a long time. I need to copy the table directly from one account to the other.
Answer
If you don't mind using Python, add the boto3 library (sudo python -m pip install boto3) and I'd do it like this (I assume you know how to fill in the keys, regions and table names in the code):
import boto3

# Client for the source account/region
dynamoclient = boto3.client('dynamodb', region_name='eu-west-1',
                            aws_access_key_id='ACCESS_KEY_SOURCE',
                            aws_secret_access_key='SECRET_KEY_SOURCE')

# Client for the target account/region
dynamotargetclient = boto3.client('dynamodb', region_name='us-west-1',
                                  aws_access_key_id='ACCESS_KEY_TARGET',
                                  aws_secret_access_key='SECRET_KEY_TARGET')

tabname = 'SOURCE_TABLE_NAME'
targettabname = 'TARGET_TABLE_NAME'

# Paginate through a full scan of the source table
dynamopaginator = dynamoclient.get_paginator('scan')
dynamoresponse = dynamopaginator.paginate(
    TableName=tabname,
    Select='ALL_ATTRIBUTES',
    ReturnConsumedCapacity='NONE',
    ConsistentRead=True
)

# Write each scanned item to the target table
for page in dynamoresponse:
    for item in page['Items']:
        dynamotargetclient.put_item(
            TableName=targettabname,
            Item=item
        )