How can I use Kafka to retain logs in Logstash for a longer period?

Question

Currently I use a Redis -> S3 -> Elasticsearch -> Kibana stack to pipe and visualise my logs. But due to the large volume of data in Elasticsearch, I can retain logs for at most 7 days.

I want to bring a Kafka cluster into this stack and retain logs for a longer period. I am thinking of the following stack:

app nodes piping logs to Kafka -> Kafka cluster -> Elasticsearch cluster -> Kibana
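
For reference, a minimal Logstash pipeline for this layout might look like the sketch below; the broker addresses, topic name, and index pattern are placeholder assumptions, not taken from the question.

input {
  kafka {
    # hypothetical broker list and topic -- adjust to your cluster
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics => ["app-logs"]
    group_id => "logstash"
    codec => "json"
  }
}

output {
  elasticsearch {
    # hypothetical Elasticsearch endpoint and daily index pattern
    hosts => ["http://elasticsearch:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}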

How can I use Kafka to retain logs for a longer period?

Answer

Looking through the Apache Kafka broker configs, there are two properties that determine when a log will get deleted, one by time and the other by size:

log.retention.{ms,minutes,hours}
log.retention.bytes

Also note that if both log.retention.hours and log.retention.bytes are set, a segment is deleted when either limit is exceeded.

Those two settings dictate when logs are deleted in Kafka. log.retention.bytes defaults to -1, and I'm pretty sure leaving it at -1 lets the time config alone determine when a log gets deleted.

So to directly answer your question: set log.retention.hours to however many hours you wish to retain your data, and don't change the log.retention.bytes configuration.
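
As a concrete sketch of the resulting broker configuration (the 30-day figure below is an arbitrary example, not from the question), server.properties would contain something like:

# keep log segments for 30 days (720 hours)
log.retention.hours=720
# leave the size-based limit at its default so only time drives deletion
log.retention.bytes=-1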
