Copy a DynamoDB table to another AWS account without S3
Question
I would like to copy all of my DynamoDB tables to another AWS account to back up the data. I have seen solutions that copy a table with Data Pipeline, but they all use S3 to stage the data. I would like to skip the S3 step, because the tables contain a large amount of data and the extra S3 write and read passes would take time. So I need to copy the tables directly from one account to the other.
Recommended answer
If you don't mind using Python and adding the boto3 library (sudo python -m pip install boto3), I'd do it like this (I assume you know how to fill in the keys, regions, and table names in the code):
import boto3

# Client for the source account/region
dynamoclient = boto3.client('dynamodb', region_name='eu-west-1',
                            aws_access_key_id='ACCESS_KEY_SOURCE',
                            aws_secret_access_key='SECRET_KEY_SOURCE')

# Client for the target account/region
dynamotargetclient = boto3.client('dynamodb', region_name='us-west-1',
                                  aws_access_key_id='ACCESS_KEY_TARGET',
                                  aws_secret_access_key='SECRET_KEY_TARGET')

dynamopaginator = dynamoclient.get_paginator('scan')
tabname = 'SOURCE_TABLE_NAME'
targettabname = 'TARGET_TABLE_NAME'

# Scan the source table page by page
dynamoresponse = dynamopaginator.paginate(
    TableName=tabname,
    Select='ALL_ATTRIBUTES',
    ReturnConsumedCapacity='NONE',
    ConsistentRead=True
)

# Write every item into the target table
for page in dynamoresponse:
    for item in page['Items']:
        dynamotargetclient.put_item(
            TableName=targettabname,
            Item=item
        )