Handle different restore scenarios with Cassandra 2.2

Question

I have a Cassandra 3-node cluster and a keyspace created with a replication_factor of 3.

I make my backups for this keyspace with nodetool snapshot. As recommended by the Cassandra documentation, to make a global backup I start it with a cron job on each node (the 3 nodes are NTP synchronized). I'm not using incremental snapshots; it's always a new global snapshot.
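
For reference, a hedged sketch of such a cron entry on one node is shown below; the keyspace name "my_keyspace" and the nodetool path are assumptions, since the question does not give them.

```
# Nightly snapshot of the keyspace at 02:00 on each node (nodes are NTP-synchronized).
# Note: % must be escaped as \% inside a crontab entry.
0 2 * * * /usr/bin/nodetool snapshot -t backup_$(date +\%Y\%m\%d) my_keyspace
```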

Unfortunately, I have some trouble with the restore process.

First of all, I've set the replication factor to 3 (and the QUORUM consistency level on READ and WRITE operations) to make sure my app keeps working even if 1 node is down.
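
As an illustration of that setup (the keyspace name is hypothetical, and SimpleStrategy is just one possible choice for a single-datacenter, 3-node cluster), the keyspace definition and consistency level might look like this:

```
# Create a keyspace with a replication factor of 3 (keyspace name is an assumption).
cqlsh -e "CREATE KEYSPACE my_keyspace
          WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"

# In an interactive cqlsh session, reads and writes can then use QUORUM:
#   CONSISTENCY QUORUM;
```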

  • My first scenario is not really a restore process: one node goes down because, let's say, someone or something shut down the VM that the node was running on. The 2 other nodes keep working and receiving write/read requests. 24 hours later, I manage to restart the VM of the first node; all services and files are still there, and I'm about to restart the node. Are there any actions I should take before or after restarting it?

The second scenario is pretty much the same, but I was not able to recover the VM of the first node and I need to reinstall everything on it, including Cassandra. How should I use my backup to resync this node? Should I even use it, or is Cassandra capable of resyncing everything without me having to restore anything? What precisely should I do in this case?

My last scenario is different: I've lost all my nodes and cannot recover anything. I have my global snapshot (3 snapshots, 1 for each node, taken at the same time). What is the process in this case?

I've read the Cassandra documentation for the restore process, and I have a preference for the simple copy-restore (in other words, I'd rather not use sstableloader). I have trouble understanding when I should use the refresh and/or repair commands in these scenarios.

Answer

"I have trouble understanding when I should use the refresh and/or repair commands in those scenarios"

According to the documentation, you should perform refresh when you restore data from a snapshot, i.e. in the 2nd and 3rd scenarios.
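
For the copy-restore variant, a minimal per-table sketch on one node could look like the following; the keyspace, table, data directory, and snapshot tag are assumptions, and <table_uuid> stands for the UUID suffix Cassandra appends to the table directory:

```
# Copy the snapshot's SSTables back into the live table directory
# (paths and names are assumptions; adjust them to your installation).
cp /var/lib/cassandra/data/my_keyspace/my_table-<table_uuid>/snapshots/<snapshot_tag>/* \
   /var/lib/cassandra/data/my_keyspace/my_table-<table_uuid>/

# Load the newly placed SSTables without restarting the node.
nodetool refresh my_keyspace my_table
```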

I suppose repair is not a required step in any of the three scenarios, but I would recommend performing it, because it is an easy and useful step to get consistent data on the just-restored nodes.
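
A minimal example of that step on the restored node (the keyspace name is again an assumption):

```
# Repair the restored keyspace so this node converges with its replicas.
nodetool repair my_keyspace
```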

Furthermore, running repair on a regular basis is a recommended part of Cassandra cluster maintenance.
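
If you want to schedule it, a hedged example of a weekly cron entry is shown below; stagger the day or hour per node so repairs do not all run at once (the keyspace name is an assumption).

```
# Weekly primary-range repair of the keyspace, Sundays at 03:00 on this node.
0 3 * * 0 /usr/bin/nodetool repair -pr my_keyspace
```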
