Opscenter backup to S3 location fails


Problem description

Using OpsCenter 5.1.1 with DataStax Enterprise 4.5.1 on a 3-node cluster in AWS. I set up a scheduled backup to the local server and also to a bucket in S3. The On Server backup finished successfully on all 3 nodes; the S3 backup runs slowly and fails on all 3 nodes.

Some keyspaces are backed up and files are created in the S3 bucket, but it appears that not all tables are backed up. Looking at /var/log/opscenter/opscenterd.log, I see an OOM error. Why would there be an out-of-memory error when writing to S3 when the local backup succeeds?

The data is about 6GB and I'm backing up all keyspaces; there are fewer than 100 tables altogether. I've set the backup to run once daily.

Here is an excerpt from the log:

2015-03-31 14:30:34+0000 []  WARN: Marking request 15ae726b-abf6-42b6-94b6-e87e6b0cb592 as failed: {'sstables': {'solr_admin': {u'solr_resources': {'total_size': 186626, 'total_files': 18, 'done_files': 18, 'errors': []}}, 'stage_scheduler': {u'schedule_servers': {'total_size': 468839, 'total_files': 12, 'done_files': 12, 'errors': []}, u'lock_flags': {'total_size': 207313249, 'total_files': 30, 'done_files': 25, 'errors': [u'java.lang.OutOfMemoryError: Java heap space', u'java.lang.OutOfMemoryError: Java heap space', u'java.lang.OutOfMemoryError: Java heap space', u'java.lang.OutOfMemoryError: Java heap space', u'java.lang.OutOfMemoryError: Java heap space']}, u'scheduled_tasks': {'total_size': 3763468, 'total_files': 18, 'done_files': 18, 'errors': []}

Recommended answer

Increase the memory allocated to OpsCenter's datastax-agent.

One option is to try increasing the memory allocated to the datastax-agent: on each node in your cluster, look for the datastax-agent-env.sh file and modify the following properties:

-Xmx128M
-Djclouds.mpu.parts.size=16777216
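
For reference, here is a minimal sketch of how those flags might appear inside datastax-agent-env.sh; the path and the JVM_OPTS variable name are assumptions and may differ by agent version and install method:

# Hypothetical path: /usr/share/datastax-agent/bin/datastax-agent-env.sh
# Assumption: the agent reads its JVM flags from a JVM_OPTS-style variable.
JVM_OPTS="$JVM_OPTS -Xmx128M"                           # agent heap size (default)
JVM_OPTS="$JVM_OPTS -Djclouds.mpu.parts.size=16777216"  # S3 multipart chunk size, 16 MiB

Restart the datastax-agent service on each node after editing so the new settings take effect.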

The -Xmx setting controls the heap size of the agent. The -Djclouds setting controls the chunk size for files when uploading to S3. Since S3 supports multipart file uploads with a maximum of 10,000 parts, the chunk size controls how large a file we can upload. Increasing the chunk size also requires more memory on the agent, so the agent heap size needs to be increased as well. Here are example settings that allow uploading 250 GB SSTables:

-Xmx256M
-Djclouds.mpu.parts.size=32000000

These settings increase the chunk size to 32MB and the heap size to 256MB, allowing for the larger SSTable sizes.
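
To sanity-check a chunk size against your largest SSTable, multiply it by S3's 10,000-part limit. A quick shell check using the two chunk sizes discussed above:

# Max object size = multipart chunk size * 10,000 parts (S3 limit)
echo "$((16777216 * 10000 / 1024**3)) GiB"   # default 16 MiB chunks -> ~156 GiB ceiling
echo "$((32000000 * 10000 / 10**9)) GB"      # 32 MB chunks -> 320 GB ceiling

So the 32 MB chunk size comfortably covers the 250 GB SSTables mentioned above, while the default tops out at roughly 160 GB.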

Please add the following information to your post:

1) How many tables are you backing up and how large are they per node?

2) How frequently did you configure your backups?
