Memory constantly increasing in Rails app


Problem description


I recently launched a new Ruby on Rails application that worked well in development mode. After the launch I have been experiencing the memory being used is constantly increasing:

UPDATED: When this screen dump (the one below) from New Relic was taken, I had scheduled a web dyno restart every hour (one out of two web dynos). Thus it does not reach the 500 MB crash level, and the graph actually shows a bit of a sawtooth pattern. This does not resolve the problem at all, though, only some of the symptoms. As you can see, the morning is not so busy but the afternoon is busier. I deployed a small detail at 11:30; it could not have affected the problem even though it shows up in the stats.

It should also be noted that it is the MIN memory that keeps increasing, even though the graph shows AVG memory. Even when the graph seems to dip temporarily, the min memory stays the same or increases. The MIN memory never decreases!

The app would (without the dyno restarts) keep growing in memory until it reached the maximum level at Heroku, and then crash with "execution expired"-type errors.

I am not a great programmer but I have made a few apps before without having this type of problem.

Troubleshooting performed

A. I thought the problem would lie within the before_filter in the application_controller (Will variables in application controller cause a memory leak in Rails?) but that wasn't the problem.

B. I installed oink but it does not give any results (at all). It creates an oink.log but produces nothing when I run "heroku run oink -m log/oink.log", no matter what threshold I set.

C. I tried bleak_house, but it is deprecated and could not be installed.

D. I have googled and read most articles in the topic but I am none the wiser.

E. I would love to test memprof, but I can't install it (I have Ruby 1.9.x and don't really know how to downgrade to 1.8.x).

My questions:

Q1. What I really would love to know is the name(s) of the variable(s) that are increasing for each request, or at least which controller is using the most memory.

Q2. Will controller code like the below increase memory usage?

related_feed_categories = []
@gift.tags.each do |tag|
  tag.category_connections.each do |cc|
    related_feed_categories << cc.category_from_feed
  end
end

(sorry, SO won't re-format the code to be easily readable for some reason).

Do I need to "kill off" related_feed_categories with "related_feed_categories = nil" afterwards or does the Garbage Collector handle that?

Q3. What would be my major things to look for? Right now I can't narrow it down AT ALL. I don't know which part of the code to look deeper into, and I don't really know what to look for.

Q4. In case I really cannot solve the problem: are there any online consulting services where I can send my code and have them find the problem?

Thanks!

UPDATED. After receiving comments, it may have to do with sessions. This is a part of the code that I guess could be bad:

# Create sessions for last generation
friend_data_arr = [@generator.age, @generator.price_low, @generator.price_high]
friend_positive_tags_arr = []
friend_negative_tags_arr = []
friend_positive_tags_arr << @positive_tags
friend_negative_tags_arr << @negative_tags    
session["last_generator"] = [friend_data_arr, friend_positive_tags_arr, friend_negative_tags_arr]

# Clean variables
friend_data_arr = nil
friend_positive_tags_arr = nil
friend_negative_tags_arr = nil

It is used in the generator#show controller. When some gifts have been generated through my gift-generating engine, I save the input in a session (in case they want to use that info at a later stage). I never kill or expire these sessions, so this could be causing the memory increase.

Updated again: I removed this piece of code but the memory still increases, so I guess this part is not it, but similar code might be causing the error?

Solution

It's unlikely that related_feed_categories provokes this.
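Local variables like that go out of scope when the action returns and are garbage collected on their own, so the "related_feed_categories = nil" step isn't needed. For what it's worth, here is a sketch of the same loop written with flat_map; the Struct models below are only stand-ins for your Tag/CategoryConnection records:

```ruby
# Stand-in data structures for illustration -- your real app has
# ActiveRecord models with these associations.
CategoryConnection = Struct.new(:category_from_feed)
Tag = Struct.new(:category_connections)

tags = [
  Tag.new([CategoryConnection.new("books"), CategoryConnection.new("games")]),
  Tag.new([CategoryConnection.new("toys")]),
]

# Equivalent to the nested each/<< loop in the question:
related_feed_categories = tags.flat_map do |tag|
  tag.category_connections.map { |cc| cc.category_from_feed }
end
# related_feed_categories => ["books", "games", "toys"]
```

Either version allocates the same result array; neither should leak across requests.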

Are you using a lot of files?

How long do you keep session data? It looks like you have an e-commerce site; are you keeping objects in sessions?

Basically, I think it is files, or sessions, or temporary data (memcache?) that only gets flushed when the server crashes.

In the middle of the night, I guess you have fewer customers. Can you post the same memory chart for peak hours?

It may be related to this problem: Memory grows indefinitely in an empty Rails app

UPDATE:

Rails doesn't store all the data on the client side. I don't remember the default store, but unless you choose the cookie store, Rails only sends data like the session_id to the client.

There are a few guides about sessions; ActiveRecord::SessionStore seems to be the best choice for performance purposes. And you shouldn't keep large objects or secret data in sessions. More on sessions here: http://guides.rubyonrails.org/security.html#what-are-sessions

In part 2.9, there is an explanation of how to destroy sessions that have been unused for a certain time.

Instead of storing objects in sessions, I suggest you store the URL that produces the search results. You could even store it in the database, offering your customers the possibility to save a few searches, and/or loading the last used one by default.
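For example (a sketch only -- the session key, parameter names, and helper names are made up for illustration; your generator#show action has its own):

```ruby
# Instead of caching whole objects in the session, keep just the
# primitive parameters needed to reproduce the search. Plain strings
# and numbers keep the session payload tiny.
def remember_last_generation(session, params)
  session["last_generator_params"] = {
    "age"        => params["age"],
    "price_low"  => params["price_low"],
    "price_high" => params["price_high"],
  }
end

# Later, rebuild a URL (or a query) from those primitives instead of
# pulling stored objects out of the session.
def last_generation_url(session)
  p = session["last_generator_params"] or return nil
  "/generators?age=#{p['age']}&price_low=#{p['price_low']}&price_high=#{p['price_high']}"
end
```

The session then holds a handful of short strings rather than arrays of model data.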

But at this stage we are still not totally sure that sessions are the culprit. In order to be sure, you could stress test your application on a test server with expiring sessions. So basically, you create a large number of sessions, and maybe 20 minutes later Rails has to suppress them. If you find any difference in memory consumption, it will narrow things down.

First case: memory drops significantly when sessions expire; you know that it is session related.

Second case: memory increases at a faster rate but doesn't drop when sessions expire; you know that it is user related, but not session related.

Third case: nothing changes (memory increases as usual), so you know it does not depend on the number of users. But I don't know what could cause it.

When I said stress test, I meant a significant number of sessions, not a real stress test. The number of sessions you need depends on your average number of users. If you had 50 users before your app crashed, 20-30 sessions may be significant. So if you add them by hand, configure a higher expiry time limit. We are just looking for differences in memory consumption.
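If you need somewhere to configure that expiry, a sketch with the ActiveRecord session store could look like this (Rails 3.x style; the app name and cookie key are placeholders, and expire_after is the standard Rack session option passed through by Rails -- verify against your Rails version):

```ruby
# config/initializers/session_store.rb (placeholder app name)
# expire_after makes the session cookie expire; stale rows in the
# sessions table still need a periodic cleanup (e.g. a cron task).
MyApp::Application.config.session_store :active_record_store,
  :key          => "_myapp_session",
  :expire_after => 20.minutes
```

For the test described above you would set expire_after low, pile up sessions, and watch what happens to memory when they lapse.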

Update 2:

So this is most likely a memory leak. Use ObjectSpace: it has a count_objects method which will display counts of all the objects currently allocated. It should narrow things down. Use it when memory has already increased a lot.
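A sketch of how you might diff two ObjectSpace.count_objects snapshots around a suspect piece of work (standard library only; run it in a console on the bloated process):

```ruby
# Snapshot live-object counts per type, do some work, then diff the
# snapshots to see which object types grew.
def object_count_diff
  GC.disable                        # keep counts from shifting mid-measurement
  before = ObjectSpace.count_objects
  yield
  after = ObjectSpace.count_objects
  after.each_with_object({}) do |(type, count), diff|
    delta = count - before.fetch(type, 0)
    diff[type] = delta if delta > 0
  end
ensure
  GC.enable
end

# Example: retaining strings shows up as growth in :T_STRING.
leaked = []
diff = object_count_diff { 1_000.times { leaked << "payload " * 10 } }
# diff[:T_STRING] will be at least 1_000 here
```

Whichever type dominates the diff (strings, arrays, AR objects under :T_OBJECT, etc.) tells you where to dig.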

Otherwise, there is bleak_house, a gem able to find memory leaks. Ruby tooling for memory leaks is still not as efficient as Java's, but it's worth a try.

GitHub: https://github.com/evan/bleak_house

Update 3:

This may be an explanation: it is not really a memory leak, but it grows memory: http://www.tricksonrails.com/2010/06/avoid-memory-leaks-in-ruby-rails-code-and-protect-against-denial-of-service/

In short, symbols are kept in memory until you restart Ruby. So if symbols are created with random names, memory will grow until your app crashes. This doesn't happen with strings; they are GCed.

A bit old, but valid for Ruby 1.9.x. Try this: Symbol.all_symbols.size

Update 4:

So your symbols are probably the memory leak. Now we still have to find where it occurs. Use Symbol.all_symbols; it gives you the list. You could store it somewhere and diff it against a new snapshot later, in order to see what was added.
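A sketch of that diffing approach in plain Ruby (the dynamically built symbol here is just a stand-in for whatever your app is interning between snapshots):

```ruby
# Snapshot the symbol table, exercise the suspect code, then diff.
baseline = Symbol.all_symbols

# Simulate code that interns a symbol with a dynamic name -- the
# classic source of this kind of growth on Ruby 1.9, where such
# symbols are never collected.
dynamic_sym = "user_#{12345}_pref".to_sym

added = Symbol.all_symbols - baseline
# `added` now contains the new user_..._pref symbol (plus anything
# else the VM interned in between).
```

In your app you would take the baseline in one console session, let traffic run, then take the second snapshot; symbols with user data or random digits in their names point straight at the leaking call site.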

It may be i18n, or it may be something else that generates symbols in an implicit way, like i18n does. Either way, something is probably generating symbols with random data in their names, and those symbols are never used again.
