Can we connect to/from a Kafka compacted topic with the Flink kafka Upsert connector?

Problem description

This feels obvious, but I'm asking anyway since I can't find a clear confirmation in the documentation:

The semantics of the Flink Table API upsert kafka connector available in Flink 1.12 match the semantics of Kafka compacted topics quite well: interpreting the stream as a changelog and using NULL values as tombstones to mark deletions.
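
For concreteness, here is a minimal sketch of how such a table could be declared (the table name, topic, and connection settings are illustrative, not from the original question):

```sql
-- Minimal sketch of an upsert-kafka table (Flink 1.12+), illustrative names.
-- The PRIMARY KEY columns are serialized as the Kafka record key; a deleted
-- row is written as a record with that key and a NULL value (a tombstone).
CREATE TABLE page_views_per_user (
  user_id  STRING,
  view_cnt BIGINT,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'page-views-by-user',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);
```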

So my assumption is that it is ok to use it to consume from and produce to a compacted topic, and that it's probably made precisely for that, although it should work fine with a non-compacted topic as well, assuming its content is indeed a changelog. But I'm surprised not to find any reference to compacted topics in that part of the documentation.

Could somebody please confirm or refute this assumption?

Recommended answer

Yes, it was made for use with compacted topics. According to FLIP-149:

Generally speaking, the underlying topic of the upsert-kafka source must be compacted. Besides, the underlying topic must have all the data with the same key in the same partition, otherwise, the result will be wrong.
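
Two practical consequences, as far as I can tell: the connector does not configure compaction itself, so the topic has to be created with cleanup.policy=compact on the Kafka side; and when the upsert-kafka sink does the writing, the partitioning requirement is met automatically, because every record is keyed by the primary key and Kafka's default partitioner sends records with the same key to the same partition. A sketch of the round trip, reusing the table declared above (the source table and all names are hypothetical):

```sql
-- Hypothetical append-only source feeding the upsert table above.
CREATE TABLE page_view_events (
  user_id STRING,
  page_id STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'page-view-events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'demo',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);

-- The non-windowed GROUP BY produces an updating stream; the upsert-kafka
-- sink writes the latest row per user_id as an upsert (and would write a
-- NULL-value tombstone for any key the query retracts).
INSERT INTO page_views_per_user
SELECT user_id, COUNT(*) AS view_cnt
FROM page_view_events
GROUP BY user_id;

-- Reading the table back interprets the compacted topic as a changelog:
-- the latest value per key is materialized, tombstones become DELETEs.
SELECT * FROM page_views_per_user;
```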
