Solr uses too much memory


Question

We have a Solr 3.4 instance running on Windows 2008 R2 with Oracle Java 6 Hotspot JDK that becomes unresponsive. When we looked at the machine, we noticed that the available physical memory went to zero.

The Tomcat7.exe process was using ~70 GB (Private Working Set), but Working Set (Memory) was using all the memory on the system. There were no errors in the Tomcat / Solr logs. We used VMMap to identify that the memory was being used for memory-mapping the Solr segment files.

Restarting Tomcat fixed the problem temporarily, but it eventually came back.

We then tried decreasing the JVM size to give more space for the memory-mapped files, but then Solr eventually became unresponsive with the old generation at 100%. Again, resetting fixed the problem, but it did not throw an out-of-memory exception before we reset.
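One common way to bound the heap on Windows is Tomcat's optional setenv.bat, which catalina.bat picks up automatically. A minimal sketch, assuming Tomcat is launched via startup.bat (a Tomcat installed as a Windows service reads its JVM options from tomcat7w.exe instead); the sizes are illustrative, not recommendations:

```bat
rem %CATALINA_HOME%\bin\setenv.bat
rem Cap the heap well below physical RAM so the OS page cache keeps room
rem for the memory-mapped index segments (sizes here are illustrative).
set "CATALINA_OPTS=-Xms4g -Xmx8g"
```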

Currently our spidey sense is telling us that the cache doesn't shrink when there is memory pressure, and that maybe there are too many MappedByteBuffers hanging around, so the OS cannot free up the memory from the memory-mapped files.
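If the memory-mapped segments are the suspect, one knob worth knowing about is Solr's directoryFactory, which controls how Lucene opens index files. A sketch for solrconfig.xml (class names as shipped in Solr 3.x; verify against your version before relying on this):

```xml
<!-- solrconfig.xml: MMapDirectoryFactory maps segments into virtual
     memory; SimpleFSDirectoryFactory reads them through regular file
     I/O instead, trading some query speed for a smaller mapped
     footprint, and is often suggested on Windows. -->
<directoryFactory name="DirectoryFactory"
                  class="solr.SimpleFSDirectoryFactory"/>
```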

Answer

There are too many parameters and too little information to help with any details. This answer is also quite old as are the mentioned systems.

Here are some things that helped in my experience:

  • rather decrease RAM usage in Tomcat as well as in SOLR to reduce the risk of swapping. Leave the system room to breathe.
  • if this "starts" to appear without any changes to Tomcat or the SOLR config - then maybe it is due to the fact that the amount of data that SOLR has to index and query has increased. This can either mean that the original config was never good to begin with, or that the limits of the current resources have been reached and have to be reviewed. Which one is it?
  • check the queries (if you can influence them): move any subquery constructs that are often requested into filter queries, move very individual request constructs into the regular query parameter. Decrease the query cache, increase/keep the filter query cache - or decrease the filter cache in case filter queries aren't used that much in your system.
  • check the schema.xml of SOLR for configuration errors (could just be misconceptions really). I ran into this once: while importing, fields were created en masse, causing the RAM to overflow.
  • if it happens during import: check whether the import process is set to autocommit, commits rather often, and also optimizes - maybe it is possible to commit less often and optimize only once at the end.
  • upgrade Java, Tomcat and SOLR
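The cache- and commit-related suggestions above can be sketched in solrconfig.xml. Element names are as in Solr 3.x; the sizes and thresholds are illustrative assumptions, not recommendations:

```xml
<!-- solrconfig.xml: shrink the query result cache, keep the filter
     cache if filter queries carry most of the load (sizes illustrative). -->
<filterCache      class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
<queryResultCache class="solr.LRUCache"     size="128" initialSize="128" autowarmCount="0"/>

<!-- Commit periodically during imports instead of on every batch;
     run a single optimize at the end of the import instead. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>50000</maxDocs>   <!-- commit at most every 50k docs -->
    <maxTime>300000</maxTime>  <!-- or every 5 minutes -->
  </autoCommit>
</updateHandler>
```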
