Integration of Spark 2.0 and Cassandra using R


Problem description

Currently we are migrating the Hive persistence store to a Cassandra cluster. We have been using Spark 2.0 and the SparkR framework to run our analytics reports. We have just started with the Cassandra integration, and we would appreciate some sample code for initiating the Spark session from within an R module. We would also appreciate additional input on optimizing such an integration at Spark 2.0 run time.

Recommended answer

You just need to follow the Spark R documentation, use the correct Spark package to connect to Cassandra (see the Spark Cassandra Connector quick start guide: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/0_quick_start.md), and set the necessary properties:

Start R with Spark support:

SPARK_HOME=`pwd` R

Load the SparkR library:

library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))

Initialize the Spark session:

sparkR.session(master = "local[*]",
  sparkConfig = list(spark.driver.memory = "2g", 
                     spark.cassandra.connection.host = "IP"),
  sparkPackages = "com.datastax.spark:spark-cassandra-connector_2.11:2.4.0")

spark.cassandra.connection.host needs to point to a Cassandra host. The value of sparkPackages may depend on the version of Spark you're using - whether it was built with Scala 2.10 or 2.11, etc. See the connector documentation for more details.
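As an alternative sketch, the same connector package and connection host can be supplied when launching SparkR from the shell instead of inside sparkR.session (IP is a placeholder for your Cassandra contact point, and the connector version is the same assumption as above):

```shell
# Launch the SparkR shell with the Cassandra connector preloaded.
# Replace IP with your Cassandra host; pick the connector version
# matching your Spark/Scala build.
./bin/sparkR \
  --packages com.datastax.spark:spark-cassandra-connector_2.11:2.4.0 \
  --conf spark.cassandra.connection.host=IP
```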

Read data:

df <- read.df(source = "org.apache.spark.sql.cassandra", keyspace = "test", table = "tm2")

And work with the data:

> head(df)
  id          d                  ts
1  1 2019-07-10 2019-07-18 11:56:16
2  2 2019-07-18 2019-07-18 11:03:10
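The Cassandra-backed DataFrame supports the usual SparkR operations. A minimal sketch, using the column names from the sample output above (the cutoff date is purely illustrative):

```r
# Filter and project the Cassandra-backed DataFrame with standard SparkR verbs;
# predicates on partition/clustering columns can be pushed down to Cassandra.
recent <- filter(df, df$d >= "2019-07-15")
head(select(recent, "id", "ts"))
```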

You can save data into a Cassandra table the same way as for other sources - you just need to use the correct source: source = "org.apache.spark.sql.cassandra"
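A minimal sketch of the save, assuming the target table (tm2_copy is a hypothetical name) already exists in the keyspace:

```r
# Append the DataFrame's rows to an existing Cassandra table via write.df
write.df(df, source = "org.apache.spark.sql.cassandra",
         keyspace = "test", table = "tm2_copy", mode = "append")
```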
