Memory usage differs greatly (and strangely) between frontend and backend


Problem description

My App Engine application is having problems with memory leakage. I log memory usage along the way to find the issue.

from google.appengine.api.runtime import memory_usage
memory_usage().current()

The function that exceeds the "soft private memory limit of 128 MB" runs inside a deferred task. It should behave the same each time. I re-ran it from the console's task queue (backend) and from the frontend via a GET request. Both get the exception after the 6th log.
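For context, the measurement sits inside the deferred task roughly like this (a minimal sketch only - do_step and its fixed 10 MB allocation are placeholders for the real work, not the app's actual code):

import logging

from google.appengine.api.runtime import memory_usage
from google.appengine.ext import deferred

_chunks = []  # kept alive on purpose so every step adds to resident memory


def do_step(step):
    # stand-in for the real work: allocate roughly 10 MB per step
    _chunks.append(bytearray(10 * 1024 * 1024))


def heavy_job():
    for step in range(1, 7):
        do_step(step)
        # memory_usage().current() reports this instance's usage in MB
        logging.info('%d: %s', step, memory_usage().current())


# enqueued from a frontend GET handler, or re-run via the console task queue:
# deferred.defer(heavy_job)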



The results differ in a way I can't wrap my head around:

<Frontend-run>
1:40.3515625
2:50.3515625
3:59.71875
4:63.5234375
5:72.49609375
6:75.48046875

<Backend-run>
1:98.83203125
2:98.83203125
3:98.83203125
4:98.83203125
5:98.83203125
6:98.83203125

I have three issues with the result:

  • One third vs. two thirds of the total memory pool is allocated at the start (frontend vs. backend)
  • The backend uses twice as much memory (running the same function)
  • The backend memory usage doesn't increase over time the way the frontend's does.

Can anyone make sense of this for me?

Solution

Apart from the memory usage you'd expect based on the actual activity related to the requests they handle, instances also carry a variable cross-request memory usage offset, which includes, for example:

  • the language (Python) sandbox itself
  • additional Python libraries loaded while handling previous requests (for example, the backend may have loaded the deferred library while the frontend has not)
  • leftovers not yet cleaned up by the garbage collector (they should eventually go away, but occasional activity peaks can push usage past the limit and even cause instance death and restart - you'll notice the death happens when usage goes significantly above the limit; I saw, for example, >150 MB for the 128 MB limit)


On-demand loading of libraries is a typical way to improve instance startup time. This technique produces exactly what can look like a memory leak, but it doesn't necessarily mean there really is one.
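As a rough illustration of that effect (a sketch only - the handler function is hypothetical, and the choice of the deferred library as the lazily-imported module is just an example):

import logging

from google.appengine.api.runtime import memory_usage


def handle_heavy_request():
    # the import runs on the first request that needs it, so the instance's
    # memory jumps once and then stays flat - a fixed offset, not a leak
    before = memory_usage().current()
    from google.appengine.ext import deferred  # loaded on demand
    after = memory_usage().current()
    logging.info('lazy import cost: %.2f MB (%.2f -> %.2f MB)',
                 after - before, before, after)

An instance that never handles this kind of request never pays that cost, which is one way two instance types running "the same" app can settle at very different baselines.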



It's also possible that 128 MB is simply not enough for the app (you'd be surprised how much may actually be needed, and 128 MB is not a lot!), in which case upgrading the instance class is the only way forward. You can actually try that now and keep monitoring the usage - six requests is IMHO not enough to establish a pattern. If you upgrade and the memory usage eventually levels off, then it's likely you needed the upgrade. If it doesn't level off, then it's likely you actually have a memory leak.
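One low-effort way to gather more data points than those six requests is to log the instance's memory after every request and watch the trend; below is a minimal webapp2 sketch (the handler and route names are placeholders):

import logging

import webapp2
from google.appengine.api.runtime import memory_usage


class MonitoredHandler(webapp2.RequestHandler):
    def dispatch(self):
        # run the normal handler logic, then record how much memory the
        # instance is holding once the request is finished
        try:
            return super(MonitoredHandler, self).dispatch()
        finally:
            logging.info('instance memory after request: %.2f MB',
                         memory_usage().current())


class HomePage(MonitoredHandler):
    def get(self):
        self.response.write('ok')


app = webapp2.WSGIApplication([('/', HomePage)])

If that figure keeps climbing across many requests instead of flattening out, that points to a genuine leak rather than the fixed per-instance offset described above.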


