How to access HBase from Spark (Scala)? Is there a clearly defined Scala API?


Question

How do I access HBase from Spark (Scala)? Is there a clearly defined Scala API? I am looking at the DataFrame level rather than RDDs.

There are many options available on the web, such as the Apache HBase Connector, SparkOnHBase, and others.

But it would be nice to know, and to use, whichever one is most widely adopted in the industry.

Thanks for your help.

Answer

The Spark-HBase connector (SHC) by Hortonworks is widely used to access HBase from Spark. It provides an API at both the low-level RDD and the DataFrame level.
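As a rough sketch of setup (the artifact coordinates and version below are an assumption and depend on your Spark and Scala versions; verify them against the SHC README), the dependency can be added in build.sbt along these lines:

// build.sbt -- illustrative coordinates only; check the SHC README for your versions
resolvers += "Hortonworks" at "https://repo.hortonworks.com/content/groups/public/"
libraryDependencies += "com.hortonworks" % "shc-core" % "1.1.1-2.1-s_2.11"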

The connector requires you to define a schema for the HBase table. Below is an example schema for an HBase table named table1, with row key key and a number of columns (col1-col8). Note that the row key must also be defined in detail as a column (col0) with the special column family rowkey.

// JSON catalog mapping DataFrame columns to HBase column families/qualifiers;
// the special column family "rowkey" marks the row key column.
def catalog = s"""{
        |"table":{"namespace":"default", "name":"table1"},
        |"rowkey":"key",
        |"columns":{
          |"col0":{"cf":"rowkey", "col":"key", "type":"string"},
          |"col1":{"cf":"cf1", "col":"col1", "type":"boolean"},
          |"col2":{"cf":"cf2", "col":"col2", "type":"double"},
          |"col3":{"cf":"cf3", "col":"col3", "type":"float"},
          |"col4":{"cf":"cf4", "col":"col4", "type":"int"},
          |"col5":{"cf":"cf5", "col":"col5", "type":"bigint"},
          |"col6":{"cf":"cf6", "col":"col6", "type":"smallint"},
          |"col7":{"cf":"cf7", "col":"col7", "type":"string"},
          |"col8":{"cf":"cf8", "col":"col8", "type":"tinyint"}
        |}
      |}""".stripMargin

To read the HBase table as a DataFrame:

import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

val df = spark
  .read
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .load()
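
The result is an ordinary DataFrame, so the usual Spark SQL operations apply. A minimal sketch using the column names from the catalog above (the filter value is purely illustrative):

// Query the HBase-backed DataFrame like any other DataFrame
df.select("col0", "col1", "col4")
  .filter(df("col4") > 10) // col4 is mapped as int in the catalog
  .show()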

To write a DataFrame to the HBase table:

// newTable -> "5" creates the HBase table with 5 regions if it does not already exist
df.write.options(
  Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5"))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()
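
For completeness, here is a minimal sketch of building a DataFrame whose schema matches the catalog before writing; the case class and sample values are assumptions for illustration only:

// Case class mirroring the catalog's column names and types
case class Record(col0: String, col1: Boolean, col2: Double, col3: Float,
                  col4: Int, col5: Long, col6: Short, col7: String, col8: Byte)

import spark.implicits._
val sample = Seq(
  Record("row1", true, 1.0, 1.0f, 1, 1L, 1.toShort, "a", 1.toByte),
  Record("row2", false, 2.0, 2.0f, 2, 2L, 2.toShort, "b", 2.toByte)
).toDF()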

More details: https://github.com/hortonworks-spark/shc

