Set HBase properties for a Spark job using spark-submit
Question
During an HBase data migration I encountered a java.lang.IllegalArgumentException: KeyValue size too large
Long term:

I need to increase the property hbase.client.keyvalue.maxsize (from 1048576 to 10485760) in /etc/hbase/conf/hbase-site.xml, but I can't change this file right now (the change needs validation).
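In other words, the change I can't make yet would be an entry like the following in hbase-site.xml (standard HBase property syntax, with the 10485760 target value from above):

```xml
<!-- /etc/hbase/conf/hbase-site.xml -->
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>10485760</value>
</property>
```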
Short term:

I successfully imported the data using this command:
hbase org.apache.hadoop.hbase.mapreduce.Import \
-Dhbase.client.keyvalue.maxsize=10485760 \
myTable \
myBackupFile
Now I need to run a Spark job using spark-submit.
Which is the better approach:

- prefix the HBase property with "spark." (I'm not sure whether that is possible, or whether it works):
spark-submit \
--conf spark.hbase.client.keyvalue.maxsize=10485760
- pass the HBase property explicitly via 'spark.executor.extraJavaOptions' and 'spark.driver.extraJavaOptions':
spark-submit \
--conf spark.executor.extraJavaOptions=-Dhbase.client.keyvalue.maxsize=10485760 \
--conf spark.driver.extraJavaOptions=-Dhbase.client.keyvalue.maxsize=10485760
Answer
If you can change your code, you should be able to set these properties programmatically. Something like this used to work for me in Java in the past:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

Configuration conf = HBaseConfiguration.create();
// Set properties BEFORE creating the Connection below; the value must be a String.
conf.set("hbase.client.scanner.timeout.period", SCAN_TIMEOUT);
Connection conn = ConnectionFactory.createConnection(conf);
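One caveat on the extraJavaOptions route from the question: as far as I know, -D flags only set JVM system properties, and HBaseConfiguration.create() does not turn those into configuration keys by itself. A minimal sketch of bridging the two inside the job (the class and helper names here are hypothetical) is to read the system property and copy it into the Configuration:

```java
public class HbaseMaxSizeResolver {
    // Hypothetical helper: read the value passed via
    // spark.driver/executor.extraJavaOptions (-Dhbase.client.keyvalue.maxsize=...),
    // falling back to the HBase default of 1 MiB (1048576 bytes).
    static String resolveMaxSize() {
        return System.getProperty("hbase.client.keyvalue.maxsize", "1048576");
    }

    public static void main(String[] args) {
        String maxSize = resolveMaxSize();
        System.out.println("hbase.client.keyvalue.maxsize = " + maxSize);
        // In the Spark job, apply it before creating the HBase connection:
        // conf.set("hbase.client.keyvalue.maxsize", maxSize);
    }
}
```

Run without the -D flag this falls back to the default; with --conf spark.executor.extraJavaOptions=-Dhbase.client.keyvalue.maxsize=10485760 it picks up the larger value.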