How do atomic batches work in Cassandra?


Question


How can atomic batches guarantee that either all statements in a single batch will be executed or none?

Solution

In order to understand how batches work under the hood, it's helpful to look at the individual stages of the batch execution.

The client

Batches are supported using CQL3 or modern Cassandra client APIs. In each case you'll be able to specify a list of statements you want to execute as part of the batch, a consistency level to be used for all statements and an optional timestamp. You'll be able to batch execute INSERT, DELETE and UPDATE statements. If you choose not to provide a timestamp, the current time is automatically used and associated with the batch.
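
For illustration, here is a minimal sketch of building and executing a logged batch with the DataStax Java driver 3.x. The keyspace and table names (demo.users, demo.users_by_email) are made up for the example, and the class names differ in the 4.x driver:

```java
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class BatchExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            // A LOGGED batch is the "atomic" kind discussed in this answer.
            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            batch.add(new SimpleStatement(
                    "INSERT INTO demo.users (id, email) VALUES (42, 'a@example.com')"));
            batch.add(new SimpleStatement(
                    "UPDATE demo.users_by_email SET id = 42 WHERE email = 'a@example.com'"));

            // One consistency level and (optionally) one timestamp for the whole batch.
            batch.setConsistencyLevel(ConsistencyLevel.QUORUM);
            batch.setDefaultTimestamp(System.currentTimeMillis() * 1000); // microseconds

            session.execute(batch);
        }
    }
}
```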

The client will have to handle two exceptions in case the batch cannot be executed successfully (a sketch of the handling follows the list below).

  • UnavailableException - there are not enough nodes alive to fulfill any of the updates with the specified batch CL
  • WriteTimeoutException - timeout while either writing the batchlog or applying any of the updates within the batch. This can be checked by reading the writeType value of the exception (either BATCH_LOG or BATCH).
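
A minimal sketch of handling both cases, assuming the connected session and the batch from the previous example (exception types are from the 3.x Java driver):

```java
// Types: com.datastax.driver.core.exceptions.{UnavailableException, WriteTimeoutException}
// and com.datastax.driver.core.WriteType (DataStax Java driver 3.x).
try {
    session.execute(batch);
} catch (UnavailableException e) {
    // The coordinator rejected the request up front because not enough
    // replicas were alive for the requested CL; the batch can be retried later.
    System.err.println("Not enough replicas alive: " + e.getMessage());
} catch (WriteTimeoutException e) {
    // writeType tells us where the timeout happened:
    // BATCH_LOG = while writing the batchlog, BATCH = while applying the statements.
    System.err.println("Batch timed out, writeType = " + e.getWriteType());
}
```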

Failed writes during the batchlog stage will be retried once automatically by the DefaultRetryPolicy in the Java driver. Batchlog creation is critical to ensure that a batch will always be completed in case the coordinator fails mid-operation. Read on to find out why.
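
For reference, this is what that default looks like when set explicitly on the 3.x driver's Cluster builder (it is already the default, so the line is purely illustrative):

```java
// DefaultRetryPolicy lives in com.datastax.driver.core.policies (driver 3.x).
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        // On a write timeout it retries exactly once, and only when the
        // writeType is BATCH_LOG (i.e. the batchlog write itself timed out).
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        .build();
```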

The coordinator

All batches sent by the client will be executed by the coordinator just as with any write operation. What's different from normal write operations is that Cassandra will also make use of a dedicated log that contains all pending batches currently being executed (called the batchlog). This log is stored in the local system keyspace and is managed by each node individually. Each batch execution starts by creating a log entry with the complete batch, preferably on two nodes other than the coordinator. Once the coordinator has been able to create the batchlog on the other nodes, it will start to execute the actual statements in the batch.
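
Because the batchlog is just a node-local system table, you can peek at it for debugging. This is a hedged sketch: the table name depends on the Cassandra version (system.batchlog on older releases, system.batches on 3.0+), and each node only shows its own entries:

```java
// ResultSet and Row are from com.datastax.driver.core (driver 3.x).
// Adjust the table name to "system.batchlog" on pre-3.0 clusters.
ResultSet rs = session.execute("SELECT * FROM system.batches");
for (Row row : rs) {
    System.out.println(row); // pending batch entries on the coordinator you hit
}
```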

Each statement in the batch will be written to the replicas using the CL and timestamp of the whole batch. Aside from that, there's nothing special about the writes happening at this point. Writes may also be hinted or throw a WriteTimeoutException, which can be handled by the client (see above).

After the batch has been executed, all created batchlogs can be safely removed. Therefore the coordinator will send a batchlog delete message upon successful execution to the nodes that received the batchlog before. This happens in the background and will go unnoticed in case it fails.

Let's wrap up what the coordinator does during batch execution (a sketch of this flow follows the list):

  • sends the batchlog to two other nodes (preferably in different racks)
  • executes all statements in the batch
  • deletes the batchlog from those nodes again after successful batch execution
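
The following is illustrative pseudocode of that flow only, not Cassandra's actual internal code; every name in it is invented:

```java
// Invented names throughout -- this just mirrors the three steps above.
void executeLoggedBatch(UUID batchId, List<Mutation> statements) {
    List<Node> batchlogNodes = pickTwoNodes();            // prefer distinct racks
    writeBatchlogTo(batchlogNodes, batchId, statements);  // 1. persist the complete batch
    for (Mutation m : statements) {
        writeToReplicas(m);                               // 2. normal writes (hints/timeouts possible)
    }
    deleteBatchlogFrom(batchlogNodes, batchId);           // 3. best-effort cleanup after success
}
```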

The batchlog replica nodes

As described above, the batchlog will be replicated across two other nodes (if the cluster size allows it) before batch execution. The idea is that any of these nodes will be able to pick up pending batches in case the coordinator goes down before finishing all statements in the batch.

What makes things a bit complicated is the fact that those nodes won't notice that the coordinator is not alive anymore. The only point at which the batchlog nodes are updated with the current status of the batch execution is when the coordinator issues a delete message indicating the batch has been successfully executed. In case such a message doesn't arrive, the batchlog nodes will assume the batch hasn't been executed for some reason and replay the batch from the log.

Batchlog replay potentially takes place every minute, i.e. that is the interval at which a node will check whether there are any pending batches in its local batchlog that haven't been deleted by the (possibly dead) coordinator. To give the coordinator some time between the batchlog creation and the actual execution, a fixed grace period is used (write_request_timeout_in_ms * 2, 4 seconds by default). In case the batchlog entry still exists after those 4 seconds, it will be replayed.
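
As a hedged sketch of that timing rule (the constants and method name here are made up for illustration, not Cassandra internals):

```java
// Made-up helper illustrating the replay grace period.
static final long WRITE_REQUEST_TIMEOUT_MS = 2_000;               // cassandra.yaml default
static final long REPLAY_GRACE_MS = WRITE_REQUEST_TIMEOUT_MS * 2; // 4 seconds by default

static boolean eligibleForReplay(long entryWrittenAtMs, long nowMs) {
    // Replay only entries old enough that the original coordinator has
    // clearly had its chance to finish the batch and delete the entry.
    return nowMs - entryWrittenAtMs > REPLAY_GRACE_MS;
}
```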

Just as with any write operation in Cassandra, timeouts may occur. In this case the node will fall back to writing hints for the timed-out operations. Once the timed-out replicas are up again, the writes can be resumed from the hints. This behavior doesn't seem to be affected by whether hinted_handoff_enabled is set or not. There's also a TTL value associated with the hint, which will cause the hint to be discarded after a longer period of time (the smallest GCGraceSeconds of any involved CF).

Now you might be wondering if it isn't potentially dangerous to replay a batch on two nodes at the same time, which may happen as we replicate the batchlog on two nodes. What's important to keep in mind here is that each batch execution will be idempotent, due to the limited kinds of supported operations (updates and deletes) and the fixed timestamp associated with the batch. There won't be any conflicts even if both nodes and the coordinator retry executing the batch at the same time.
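
A small sketch of why the fixed timestamp makes a replay harmless (driver 3.x, the hypothetical demo.users table again, and the connected session from the first example):

```java
// Timestamps in Cassandra are microseconds; here we pin one explicitly,
// just as the batch pins a single timestamp for all of its statements.
SimpleStatement stmt = new SimpleStatement(
        "UPDATE demo.users SET email = 'a@example.com' WHERE id = 42");
stmt.setDefaultTimestamp(1700000000000000L); // fixed timestamp

session.execute(stmt);
session.execute(stmt); // a replay writes the same cell with the same timestamp,
                       // so the stored result is identical to a single write
```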

Atomicity guarantees

Let's get back to the atomicity aspects of "atomic batches" and review what exactly is meant by atomic (source):

"(Note that we mean "atomic" in the database sense that if any part of the batch succeeds, all of it will. No other guarantees are implied; in particular, there is no isolation; other clients will be able to read the first updated rows from the batch, while others are in progress."

So in a sense we get "all or nothing" guarantees. In most cases the coordinator will just write all the statements in the batch to the cluster. However, in case of a write timeout, we must check at which point the timeout occurred by reading the writeType value. The batch must have been written to the batchlog for those guarantees to still apply. Also, at this point other clients may read partially executed results from the batch.
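
A sketch of that check, again with the 3.x driver types and the batch and session from the earlier examples:

```java
try {
    session.execute(batch);
} catch (WriteTimeoutException e) {
    if (e.getWriteType() == WriteType.BATCH_LOG) {
        // The batchlog write itself timed out: we cannot be sure the batch was
        // persisted anywhere, so it may or may not be applied eventually.
    } else if (e.getWriteType() == WriteType.BATCH) {
        // The batchlog was written before the statements timed out: the batch
        // will eventually be completed, either by this coordinator or by
        // batchlog replay on one of the batchlog nodes.
    }
}
```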

Getting back to the question: how can Cassandra guarantee that either all of the statements in a batch will be executed or none at all? Atomic batches basically depend on successful replication and idempotent statements. It's not a 100% guaranteed solution, as in theory there might be scenarios that still cause inconsistencies. But for a lot of use cases in Cassandra it's a very useful tool if you're aware of how it works.
