Tracking down memory leak in Google App Engine Golang application?


Problem description

I saw this Python question: App Engine Deferred: Tracking Down Memory Leaks

... Similarly, I've run into this dreaded error:

Exceeded soft private memory limit of 128 MB with 128 MB after servicing 384 requests total

...

After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.

According to that other question, it could be that the "instance class" is too small to run this application, but before increasing it I want to be sure.

After checking through the application I can't see anything obvious as to where a leak might be (for example, unclosed buffers, etc.) ... and so whatever it is, it's got to be a very small but perhaps common mistake.

Because this is running on GAE, I can't really profile it locally very easily, as far as I know, since that is the runtime environment. Might anyone have a suggestion as to how to proceed and ensure that memory is being recycled properly? I'm sort of new to Go but I've enjoyed working with it so far.

Solution

As a starting point, you might try pprof.WriteHeapProfile. It'll write to any Writer, including an http.ResponseWriter, so you can write a view that checks for some auth and gives you a heap profile. An annoying thing about that is that it's really tracking allocations, not what remains allocated after GC. So in a sense it's telling you what's RAM-hungry, but doesn't target leaks specifically.
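As a rough illustration, here is a minimal sketch of such a view under the classic App Engine Go runtime, where handlers are registered in init(). The /debug/heap path, header name, and token are made up for the example, so swap in whatever auth your app already uses:

package dump

import (
	"net/http"
	"runtime/pprof"
)

func init() {
	http.HandleFunc("/debug/heap", heapProfileHandler)
}

// heapProfileHandler writes a pprof-format heap profile to the response,
// which you can then download and inspect with `go tool pprof`.
func heapProfileHandler(w http.ResponseWriter, r *http.Request) {
	// Hypothetical check; replace with real auth before deploying.
	if r.Header.Get("X-Debug-Token") != "change-me" {
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}
	if err := pprof.WriteHeapProfile(w); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

Fetching that endpoint after the app has served a few hundred requests and feeding the result to go tool pprof will at least show which call sites account for the most allocation.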

The standard expvar package can expose some JSON including memstats, which tells you about GCs and the number of allocs and frees at particular allocation sizes (example). If there's a leak you could use allocs - frees to get a sense of whether it's large or small allocs that are growing over time, but that's not very fine-grained.
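For reference, a sketch of reading those same numbers directly from runtime.MemStats; importing expvar (even blank) is what exposes them as JSON at /debug/vars, and the /debug/allocstats handler name below is just for illustration:

package dump

import (
	_ "expvar" // registers /debug/vars, which includes memstats as JSON

	"fmt"
	"net/http"
	"runtime"
)

func init() {
	http.HandleFunc("/debug/allocstats", allocStatsHandler)
}

func allocStatsHandler(w http.ResponseWriter, r *http.Request) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// Mallocs-Frees is the count of live objects; HeapAlloc is live heap bytes.
	fmt.Fprintf(w, "live objects: %d\n", m.Mallocs-m.Frees)
	fmt.Fprintf(w, "heap bytes:   %d\n", m.HeapAlloc)
	fmt.Fprintf(w, "GC cycles:    %d\n", m.NumGC)
	// BySize lists allocs/frees per size class, which hints at whether growth
	// comes from many small objects or a few big ones.
	for _, b := range m.BySize {
		if b.Mallocs > 0 {
			fmt.Fprintf(w, "size %6d: allocs=%d frees=%d live=%d\n",
				b.Size, b.Mallocs, b.Frees, b.Mallocs-b.Frees)
		}
	}
}

Polling a view like this across requests and watching whether the live-object count keeps climbing even after GC cycles is a crude but serviceable way to confirm a leak.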

Finally, there's a function to dump the current state of the heap, but I'm not sure it works in GAE and it seems to be kind of rarely used.
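Assuming the function meant there is runtime/debug.WriteHeapDump (the answer doesn't name it, so that's my guess), a minimal local sketch follows; writing to a file descriptor like this almost certainly won't work inside the GAE sandbox, so treat it as something to run on a dev machine:

package main

import (
	"os"
	"runtime/debug"
)

func main() {
	f, err := os.Create("heap.dump")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// WriteHeapDump writes a low-level dump of all live objects and goroutines
	// to the given file descriptor; the format is read by specialized viewers,
	// not by go tool pprof.
	debug.WriteHeapDump(f.Fd())
}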

Note that, to keep GC work down, Go processes grow to be about twice as large as their actual live data as part of normal steady-state operation. (The exact % it grows before GC depends on the GOGC setting, which people sometimes increase to save collector work in exchange for using more memory.) A (very old) thread suggests App Engine processes regulate GC like any other, though they could have tweaked it since 2011. Anyhow, if you're allocating slowly (good for you!) you should expect slow process growth; it's just that usage should drop back down again after each collection cycle.
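For completeness, the GOGC knob is also settable from code via runtime/debug.SetGCPercent (equivalent to the GOGC environment variable); a small sketch showing it next to the heap target that explains the "about twice the live data" behaviour:

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	// Equivalent to running with GOGC=50: collect roughly twice as often as the
	// default of 100, trading CPU for a smaller steady-state heap.
	// Returns the previous setting.
	old := debug.SetGCPercent(50)
	fmt.Println("previous GC percent:", old)

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// NextGC is the heap size at which the next collection triggers; with the
	// default GOGC=100 it sits near 2x the live heap, which is why a healthy
	// process still looks "bigger" than its live data.
	fmt.Printf("HeapAlloc=%d NextGC=%d\n", m.HeapAlloc, m.NextGC)
}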
