Kafka GlobalKTable Latency Issue


Problem Description

I have a topic that is read as a GlobalKTable and materialized in a store. The issue is that if I update a key on the topic and then read from the store, I get the old value for a while (~0.5 s).
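
A minimal sketch of the setup described above, assuming a recent Kafka Streams version (2.5+ for StoreQueryParameters); the topic name "my-topic", store name "my-store", application id, and key are all hypothetical placeholders, not taken from the question:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import java.util.Properties;

public class GlobalTableExample {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "global-table-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        final StreamsBuilder builder = new StreamsBuilder();
        // Read the topic as a GlobalKTable, materialized into a local store.
        final GlobalKTable<String, String> table = builder.globalTable(
                "my-topic",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("my-store"));

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // (In real code, wait until the instance reaches RUNNING before querying;
        // otherwise streams.store() throws InvalidStateStoreException.)

        final ReadOnlyKeyValueStore<String, String> store = streams.store(
                StoreQueryParameters.fromNameAndType(
                        "my-store", QueryableStoreTypes.keyValueStore()));
        // Reads are served from this instance's local copy of the table and
        // can briefly lag behind the latest write to the topic.
        System.out.println(store.get("some-key"));
    }
}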

What could be the reason for this issue?

Is it that the GlobalKTable stores its data in RocksDB per application instance, so that if a key on another partition is updated, it takes some time to pull the data from all partitions and update the local RocksDB? If not, please explain how a GlobalKTable store maintains its state internally.

How can the above issue be resolved? Should we avoid using a GlobalKTable in scenarios where consistency is expected to match that of, say, a MySQL database?

Recommended Answer

Is it that the GlobalKTable stores its data in RocksDB per application instance, so that if a key on another partition is updated, it takes some time to pull the data from all partitions and update the local RocksDB? If not, please explain how a GlobalKTable store maintains its state internally.

Absolutely yes. There is always some latency until Kafka Streams polls the topic again and updates its local RocksDB.

Should we avoid using a GlobalKTable in scenarios where consistency is expected to match that of, say, a MySQL database?

It depends on what guarantees you need. If the producer writes into the GlobalKTable's topic and the write is successful, that does not guarantee that a Kafka Streams application has already consumed the write and updated its GlobalKTable. Producers and consumers are decoupled in Kafka by design.
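
If read-your-own-writes behaviour is still needed on top of a GlobalKTable, one workaround is to poll the store until it reflects the write or a timeout expires. Below is a minimal sketch of such a helper that could be added to the class above; the method name readAfterWrite, the 50 ms back-off, and the timeout handling are illustrative assumptions, not part of the original answer:

static String readAfterWrite(final ReadOnlyKeyValueStore<String, String> store,
                             final String key,
                             final String expectedValue,
                             final long timeoutMs) throws InterruptedException {
    final long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
        final String current = store.get(key);
        if (expectedValue.equals(current)) {
            return current; // the global store has caught up with the write
        }
        Thread.sleep(50); // back off; the global thread applies updates asynchronously
    }
    // Still behind after the timeout: producers and the Streams app that
    // maintains the GlobalKTable are decoupled, so this can legitimately happen.
    throw new IllegalStateException("Store did not reflect the write in time");
}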

