Azure Log Analytics splitting a huge JSON log without any correlation ID


Problem description


I have a Spring Boot application running on AKS (Azure Kubernetes Service). In that app I am trying to print a log (a JSON object) that is about 100,000 characters long. When I query this in Log Analytics in Azure, the message is split into 4 or more entries without any correlation ID, and there is no way to tell whether the split logs belong to a particular transaction. The message ID, thread ID, timestamp and other details are printed only with the first entry; the second entry continues from the point where the first log entry was cut off.


Is there a way we can set the character length of the logEntry column in Azure?


The logs are split when the character count of one entry reaches 16,384.
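For illustration, here is a minimal Kusto (KQL) sketch of how the splitting shows up, assuming the AKS container logs land in the default Container insights ContainerLog table with a LogEntry column; the table name, column names and the 1-second grouping window are assumptions to adapt to your workspace. Without a correlation ID, the fragments can only be grouped approximately, by container and time:

```
// Count how many fragments each container produced in a 1-second window.
// Assumes the default Container insights schema (ContainerLog / LogEntry).
ContainerLog
| where TimeGenerated > ago(1h)
| extend EntryLength = strlen(LogEntry)
| summarize Fragments = count(), TotalChars = sum(EntryLength)
    by ContainerID, bin(TimeGenerated, 1s)
| where Fragments > 1
| order by TimeGenerated desc
```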

Recommended answer


I found that a similar question was discussed and answered on StackOverflow.


fluent-plugin-concat did the job of combining the fragments back into one log entry. It works around Docker's maximum log line length of 16 KB, which is why long lines in container logs get split into multiple lines: since the maximum size of a message seems to be 16 KB, an 85 KB message ends up as 6 messages created in different chunks.
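The answer does not show the configuration itself, but a minimal fluent-plugin-concat sketch for this scenario could look like the following. It assumes a fluentd DaemonSet tailing Docker json-file container logs into a `log` field; the `kubernetes.**` tag pattern is an assumption and must match how your tail source tags records:

```
# Illustrative fluentd filter; requires the fluent-plugin-concat gem.
# Docker writes lines longer than 16 KB as several fragments, and only the
# final fragment ends with a newline, so newline detection can re-join them.
<filter kubernetes.**>
  @type concat
  key log                      # field that holds the (possibly split) log line
  multiline_end_regexp /\n$/   # a fragment ending in \n closes the record
  separator ""                 # join fragments without inserting extra newlines
</filter>
```

The plugin also supports stitching by Docker's partial-message metadata on newer engines; whichever variant is used, the fragments are reassembled into a single record before being shipped, so the JSON reaches Log Analytics as one entry instead of several.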
