External Backups/Snapshots for Google Cloud Spanner


Problem Description

Is it possible to snapshot a Google Cloud Spanner Database/table(s)? For compliance reasons we have to have daily snapshots of the current database that can be rolled back to in the event of a disaster: is this possible in Spanner? Is there intention to support it if not?

For those who might ask why we need this when Spanner is already replicated/redundant: replication doesn't guard against human error (dropping a table by accident) or sabotage/espionage, hence the question and requirement.

Thanks, M

Solution

Today, you can stream out a consistent snapshot by reading all the data at a specific timestamp (using Timestamp Bounds) with your favorite tool (MapReduce, Spark, Dataflow).

https://cloud.google.com/spanner/docs/timestamp-bounds

You have about an hour to do the export before the data gets garbage collected.
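As a concrete illustration of such a timestamp-bounded export, here is a minimal sketch using the Python client library (google-cloud-spanner). The instance, database, and table names are hypothetical, and a real export would write each row to a durable sink (GCS, files, BigQuery, etc.) instead of printing it; Application Default Credentials are assumed.

```python
import datetime

from google.cloud import spanner

client = spanner.Client()
instance = client.instance("my-instance")    # hypothetical instance ID
database = instance.database("my-database")  # hypothetical database ID

# Pin every read to the same timestamp so the export is a single
# consistent snapshot of the database (see the Timestamp Bounds docs).
read_ts = datetime.datetime.now(datetime.timezone.utc)

# multi_use=True lets one read-only snapshot serve several queries,
# all at the same read timestamp.
with database.snapshot(read_timestamp=read_ts, multi_use=True) as snapshot:
    for table in ("Singers", "Albums"):      # hypothetical table names
        rows = snapshot.execute_sql(f"SELECT * FROM {table}")
        for row in rows:
            # In a real export, write the row to your backup sink here.
            print(table, row)
```

Because old versions are garbage-collected after roughly an hour (the window mentioned above), the export has to finish within that window or be restarted with a fresher read timestamp.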

In the future, we will provide an Apache Beam/Dataflow connector to do this in a more scalable fashion. This will be our preferred method for doing import/export of data into Cloud Spanner.

Longer term, we will support backups and the ability to restore to a backup, but that functionality is not currently available.
