Spark connector loading vs sstableloader performance
Question
I have a Spark job that currently pulls data from HDFS and transforms it into flat files to load into Cassandra.
The Cassandra table is essentially 3 columns, but the last two are map collections, so a "complex" data structure.
Right now I use the COPY command and get about 3k rows/sec loaded, but that's extremely slow given that I need to load about 50 million records.
I see I can convert the CSV file to SSTables, but I don't see an example involving map collections and/or lists.
Can I use the Spark connector to Cassandra to load data with map collections and lists, and get better performance than the COPY command alone?
Answer
Yes, the Spark Cassandra Connector can be much, much faster for files already in HDFS. Using Spark, you'll be able to read from HDFS and write into C* in a distributed fashion.
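A minimal sketch of what that could look like with the Spark Cassandra Connector. The keyspace `ks`, table `flat`, column names, and the tab/comma/colon flat-file layout are all assumptions for illustration; the parsing would need to match your actual file format, and the job needs the `spark-cassandra-connector` package on the classpath plus a reachable cluster:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._ // adds saveToCassandra to RDDs

object HdfsToCassandra {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("hdfs-to-cassandra")
      .set("spark.cassandra.connection.host", "127.0.0.1") // your C* contact point
    val sc = new SparkContext(conf)

    // Parse each line into (key, map1, map2). The field/entry separators
    // here (tab, comma, colon) are placeholders for your own format.
    val rows = sc.textFile("hdfs:///data/flat/*").map { line =>
      val fields = line.split('\t')
      def toMap(s: String): Map[String, String] =
        s.split(',').map { kv =>
          val Array(k, v) = kv.split(':')
          k -> v
        }.toMap
      (fields(0), toMap(fields(1)), toMap(fields(2)))
    }

    // Scala Maps are written as Cassandra map<text, text> columns.
    // Writes are distributed across the executors, unlike a single COPY.
    rows.saveToCassandra("ks", "flat", SomeColumns("key", "attrs1", "attrs2"))
    sc.stop()
  }
}
```

Because each executor writes its partitions in parallel with token-aware routing, throughput scales with the cluster rather than with one client process.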
Even without Spark, using a Java-based loader like https://github.com/brianmhess/cassandra-loader will give you a significant speed improvement over COPY.
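As a rough sketch of how that loader is invoked (the file path, host, and keyspace/table/column names below are placeholders; check the project's README for the exact options it supports):

```shell
# Load a delimited file into ks.flat, naming the target columns.
# cassandra-loader batches and parallelizes writes internally,
# which is where the speedup over cqlsh COPY comes from.
java -jar cassandra-loader \
  -f /path/to/data.csv \
  -host 127.0.0.1 \
  -schema "ks.flat(key, attrs1, attrs2)"
```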