Cassandra eats memory

This post covers how to deal with Cassandra using too much memory; the question and answer below may be a useful reference for anyone hitting the same problem.

Problem Description

I have Cassandra 2.1 set up with the following properties:

MAX_HEAP_SIZE="5G"
HEAP_NEWSIZE="800M"
memtable_allocation_type: heap_buffers
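
(For reference, and assuming a stock 2.1 layout rather than anything stated in the question: the two heap variables are normally set in conf/cassandra-env.sh, while memtable_allocation_type is a conf/cassandra.yaml setting.)

    # conf/cassandra-env.sh
    MAX_HEAP_SIZE="5G"
    HEAP_NEWSIZE="800M"

    # conf/cassandra.yaml
    memtable_allocation_type: heap_buffers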

The top utility shows that Cassandra is using 14.6G of virtual memory:

KiB Mem:  16433148 total, 16276592 used,   156556 free,    22920 buffers
KiB Swap: 16777212 total,        0 used, 16777212 free.  9295960 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
23120 cassand+  20   0 14.653g 5.475g  29132 S 318.8 34.9  27:07.43 java

It also dies with various OutOfMemoryError exceptions when I access it from Spark.

How can I prevent these OutOfMemoryErrors and reduce the memory usage?

Recommended Answer

Cassandra does consume a lot of memory, but this can be controlled by tuning the GC (garbage collection) settings.

The GC parameters are contained in the JAVA_OPTS variable in the bin/cassandra.in.sh file.

You can apply these settings in JAVA_OPTS:

    -XX:+UseConcMarkSweepGC
    -XX:ParallelCMSThreads=1
    -XX:+CMSIncrementalMode
    -XX:+CMSIncrementalPacing
    -XX:CMSIncrementalDutyCycleMin=0
    -XX:CMSIncrementalDutyCycle=10
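
As a hedged sketch of how these flags are typically applied: in a stock 2.1 install the JVM flags are assembled in conf/cassandra-env.sh via the JVM_OPTS variable (the exact file and variable name may differ by packaging), for example:

    # conf/cassandra-env.sh -- append the CMS flags to the options
    # Cassandra passes to the JVM (JAVA_OPTS in bin/cassandra.in.sh
    # plays the same role in some packagings)
    JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:ParallelCMSThreads=1"
    JVM_OPTS="$JVM_OPTS -XX:+CMSIncrementalMode"
    JVM_OPTS="$JVM_OPTS -XX:+CMSIncrementalPacing"
    JVM_OPTS="$JVM_OPTS -XX:CMSIncrementalDutyCycleMin=0"
    JVM_OPTS="$JVM_OPTS -XX:CMSIncrementalDutyCycle=10"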

Or, instead of specifying the MAX_HEAP_SIZE and HEAP_NEWSIZE parameters yourself, leave them unset and let the cassandra startup script choose them, because it will assign sensible values for these parameters (a sketch of that auto-sizing logic follows).
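
For reference, the auto-sizing lives in calculate_heap_sizes in conf/cassandra-env.sh. The shell sketch below is a simplified rendition of that calculation, not the script itself; the RAM and core counts are assumptions chosen to match the question's node:

    # Simplified sketch of Cassandra 2.1's heap auto-sizing
    # (see calculate_heap_sizes in conf/cassandra-env.sh).
    system_memory_in_mb=16048   # assumption: the ~16 GB node from the question
    system_cpu_cores=4          # assumption: core count of the node

    # Max heap: max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB))
    half_ram=$((system_memory_in_mb / 2))
    quarter_ram=$((system_memory_in_mb / 4))
    [ "$half_ram" -gt 1024 ] && half_ram=1024
    [ "$quarter_ram" -gt 8192 ] && quarter_ram=8192
    max_heap_size_in_mb=$half_ram
    [ "$quarter_ram" -gt "$half_ram" ] && max_heap_size_in_mb=$quarter_ram

    # Young generation: min(100 MB per core, 1/4 of the max heap)
    heap_new_size_in_mb=$((system_cpu_cores * 100))
    quarter_heap=$((max_heap_size_in_mb / 4))
    [ "$heap_new_size_in_mb" -gt "$quarter_heap" ] && heap_new_size_in_mb=$quarter_heap

    echo "MAX_HEAP_SIZE=${max_heap_size_in_mb}M HEAP_NEWSIZE=${heap_new_size_in_mb}M"
    # On a 16 GB, 4-core node this prints: MAX_HEAP_SIZE=4012M HEAP_NEWSIZE=400M

So with the settings left unset, this node would get roughly a 4 GB heap and a 400 MB young generation instead of the hand-picked 5G/800M.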
