Guarantee unique global transaction for Kafka Producers


Problem description

With the latest version of Kafka, 0.11.0.0, the Apache team is introducing the idempotent producer and transactions. Is it possible to guarantee that an entire set of messages (for example, one million) we want to log will be committed only at the end? I would like that if, for example, the producer loses its connection to the brokers and cannot re-establish it, no messages will be seen by the consumers. Is that possible?

Recommended answer

Yes, this is possible using transactions in your producer. You start a transaction, publish all your messages, and then commit the transaction. The messages are written to Kafka one at a time, but consumers in the new READ_COMMITTED mode will only see them after the producer commits the transaction and a special transaction marker is added to the Kafka commit log.
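The begin/publish/commit flow described above can be sketched with the Java producer API. This is a minimal sketch, not a production implementation: the broker address, topic name (`events`), and `transactional.id` (`batch-logger-1`) are placeholders, and running it requires a live Kafka 0.11+ broker.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class TransactionalProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Setting a transactional.id enables both idempotence and transactions.
        props.put("transactional.id", "batch-logger-1");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            for (int i = 0; i < 1_000_000; i++) {
                // Messages are written to the log as they are sent ...
                producer.send(new ProducerRecord<>("events", Integer.toString(i), "msg-" + i));
            }
            // ... but only become visible to READ_COMMITTED consumers here.
            producer.commitTransaction();
        } catch (ProducerFencedException | OutOfOrderSequenceException
                | AuthorizationException e) {
            // Fatal errors: the producer cannot continue and must be closed.
            producer.close();
        } catch (KafkaException e) {
            // Recoverable error: abort so none of the messages are exposed.
            producer.abortTransaction();
        }
        producer.close();
    }
}
```

If the process dies between `beginTransaction()` and `commitTransaction()`, the coordinator eventually aborts the transaction, which is exactly the "no messages seen" behaviour the question asks for.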

Consumers not in READ_COMMITTED mode will see the messages as they are written individually, even though they may not yet (or may never) be committed.
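On the consumer side this behaviour is a single configuration switch; the default is `read_uncommitted`, so a consumer must opt in:

```
# consumer configuration — hide messages from open or aborted transactions
isolation.level=read_committed
```

With this set, the consumer only delivers records up to the last stable offset, i.e. records whose transaction has been committed.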

There is a limit to how long an open transaction can stay uncommitted, so if the producer dies without explicitly ending the transaction, it will eventually time out and roll back, and READ_COMMITTED consumers will never see those messages.
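That timeout is configurable on both sides. The setting names below are the standard Kafka configs; the values shown are the documented defaults (assumed here, so check them against your broker version):

```
# producer config — how long the coordinator waits before proactively
# aborting an in-flight transaction (default 1 minute)
transaction.timeout.ms=60000

# broker config — the maximum timeout a producer is allowed to request
# (default 15 minutes); larger producer values are rejected
transaction.max.timeout.ms=900000
```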
