kafka retention policy didn't work as expected


Problem description

I want a certain Kafka topic to keep only 1 day of data. However, it doesn't seem to delete anything at all as long as we keep sending data to the topic (i.e. while it is active). I tried the topic-level parameter (retention.ms) as well as the broker-side settings:

    log.retention.hours=1 or log.retention.ms=86400000
    cleanup.policy=delete
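
To double-check which settings are actually applied on the topic, I can describe its configuration with kafka-configs (the broker address localhost:9092 and the topic name events below are just placeholders for my setup):

    # Show the topic-level overrides (e.g. retention.ms, cleanup.policy) in effect for the topic
    bin/kafka-configs.sh --bootstrap-server localhost:9092 \
        --entity-type topics --entity-name events --describe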

But it doesn't seem to work for an active topic: as long as we keep sending data to it, nothing gets deleted. Only when we stop sending data to the topic does it follow the retention policy.

So, what is the right configuration for an active topic to retain data only for a certain amount of time?

Answer

Log retention is based on the creation date of the log file. Try setting log.roll.hours to less than 24 (by default it is 24 * 7, i.e. one week).
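
As a rough broker-side sketch, server.properties could look like this; the values are only an example for 1-day retention with hourly segment rolls:

    # Keep data for roughly one day
    log.retention.hours=24
    # Roll a new log segment every hour so closed segments become eligible for deletion
    log.roll.hours=1
    # Delete old segments rather than compacting them
    log.cleanup.policy=delete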

If you only want to control log file creation per topic, set log.roll.hours.per.topic in the topic config.

Logs are segmented, and the per-topic configuration for log segments is:

segment.ms (note: this is in milliseconds and overrides the server-wide log.roll.ms).
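
Putting it together for a single topic, retention.ms and segment.ms can be set as topic-level overrides with kafka-configs; again, localhost:9092 and events are placeholders:

    # 1-day retention, with segments rolled every hour so old data becomes deletable
    bin/kafka-configs.sh --bootstrap-server localhost:9092 \
        --entity-type topics --entity-name events \
        --alter --add-config retention.ms=86400000,segment.ms=3600000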

See also: Purge Kafka Topic
