Kafka GlobalKTable Latency Issue


Question

I have a topic which is read as a GlobalKTable and materialized in a store. The issue is that if I update a key on the topic and then read from the store, for a while (~0.5 s) I get the old value.

What could be the reason for this issue?
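For context, here is a minimal sketch of the setup described above. The topic name my-topic, the store name my-store, and the key some-key are placeholders, not taken from the question:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class GlobalTableDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "global-table-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();

        // Read the topic as a GlobalKTable, materialized into a local
        // RocksDB-backed store on every application instance.
        GlobalKTable<String, String> table = builder.globalTable(
                "my-topic",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("my-store"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Query the materialized store. A record written to the topic is
        // only visible here after the global state thread has consumed and
        // applied it, so a read issued right after a write can return the
        // previous value.
        ReadOnlyKeyValueStore<String, String> store = streams.store(
                StoreQueryParameters.fromNameAndType(
                        "my-store", QueryableStoreTypes.keyValueStore()));
        System.out.println(store.get("some-key"));
    }
}
```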

Is it that the GlobalKTable stores the data in RocksDB per application instance, so if a key on another partition is updated it takes some time to pull data from all partitions and update its local RocksDB? If not, please explain how the GlobalKTable store maintains its state internally.

How can the above issue be resolved? Should we not use a GlobalKTable in scenarios where consistency is expected to match that of, say, a MySQL database?

Answer

Is it that the GlobalKTable stores the data in RocksDB per application instance, so if a key on another partition is updated it takes some time to pull data from all partitions and update its local RocksDB? If not, please explain how the GlobalKTable store maintains its state internally.

Absolutely yes. There is always some latency until Kafka Streams polls the topic again and updates its local RocksDB.
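Not part of the answer, but possibly relevant to the ~0.5 s window observed: the consumer's fetch.max.wait.ms defaults to 500 ms. The consumer that feeds GlobalKTables can be tuned separately through the global.consumer. config prefix; the values below are illustrative, and tuning can shrink the lag but never eliminate it:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class GlobalConsumerTuning {
    public static Properties props() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "global-table-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // The "global.consumer." prefix applies only to the consumer that
        // populates GlobalKTables. Lowering fetch.max.wait.ms makes the
        // broker respond sooner when little data is available (default
        // 500 ms); the values here are guesses, not recommendations.
        props.put(StreamsConfig.globalConsumerPrefix(
                ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG), 100);
        props.put(StreamsConfig.globalConsumerPrefix(
                ConsumerConfig.FETCH_MIN_BYTES_CONFIG), 1);
        return props;
    }
}
```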

Should we not use a GlobalKTable in scenarios where consistency is expected to match that of, say, a MySQL database?

It depends on what guarantees you need: if the producer writes into the GlobalKTable topic and the write was successful, this does not guarantee that a Kafka Streams application has consumed this write and updated the GlobalKTable. Producers and consumers are decoupled in Kafka by design.
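If read-your-writes behavior is needed for specific keys, one workaround (a sketch, not from the answer) is to block after a successful produce until the store reflects the write:

```java
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StoreBarrier {
    /**
     * Polls the GlobalKTable's store until it returns the expected value
     * for the key, or until the timeout elapses. Trades write-side latency
     * for read-your-writes consistency on this one key.
     */
    public static String awaitValue(ReadOnlyKeyValueStore<String, String> store,
                                    String key, String expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            String value = store.get(key);
            if (expected.equals(value)) {
                return value;          // the global store has caught up
            }
            Thread.sleep(10);          // brief back-off before re-checking
        }
        return store.get(key);         // may still be stale after timeout
    }
}
```

This adds latency to the write path and only covers the checked key; it does not make a GlobalKTable behave like a transactional database.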

