HBASE SPARK Query with filter without loading all of HBase


Question

I have to query HBase and then work with the data using Spark and Scala. The problem with my current solution is that it fetches ALL the data from the HBase table and only then filters it, which is not efficient and takes too much memory. I would like to apply the filter directly, how can I do that?

def HbaseSparkQuery(table: String, gatewayINPUT: String, sparkContext: SparkContext): DataFrame = {

    val sqlContext = new SQLContext(sparkContext)

    import sqlContext.implicits._

    val conf = HBaseConfiguration.create()

    val tableName = table

    conf.set("hbase.zookeeper.quorum", "localhost")
    conf.set("hbase.master", "localhost:60000")
    conf.set(TableInputFormat.INPUT_TABLE, tableName)

    val hBaseRDD = sparkContext.newAPIHadoopRDD(conf, classOf[TableInputFormat], classOf[ImmutableBytesWritable], classOf[Result])


    val DATAFRAME = hBaseRDD.map(x => {
      (Bytes.toString(x._2.getValue(Bytes.toBytes("header"), Bytes.toBytes("gatewayIMEA"))),
        Bytes.toString(x._2.getValue(Bytes.toBytes("header"), Bytes.toBytes("eventTime"))),
        Bytes.toString(x._2.getValue(Bytes.toBytes("node"), Bytes.toBytes("imei"))),
        Bytes.toString(x._2.getValue(Bytes.toBytes("measure"), Bytes.toBytes("rssi"))))

    }).toDF()
      .withColumnRenamed("_1", "GatewayIMEA")
      .withColumnRenamed("_2", "EventTime")
      .withColumnRenamed("_3", "ap")
      .withColumnRenamed("_4", "RSSI")
      .filter($"GatewayIMEA" === gatewayINPUT)

    DATAFRAME
  }

As you can see in my code, I apply the filter after the DataFrame is created, i.e. after the HBase data has already been loaded.

Thanks in advance for your answers.

Answer

Here is the solution I found:

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.filter._
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.{TableInputFormat, TableMapReduceUtil}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object HbaseConnector {

  def main(args: Array[String]): Unit = {

//    System.setProperty("hadoop.home.dir", "/usr/local/hadoop")
    val sparkConf = new SparkConf().setAppName("CoverageAlgPipeline").setMaster("local[*]")
    val sparkContext = new SparkContext(sparkConf)

    val sqlContext = new SQLContext(sparkContext)

    import sqlContext.implicits._

    val spark = org.apache.spark.sql.SparkSession.builder
      .master("local")
      .appName("Coverage Algorithm")
      .getOrCreate

    val GatewayIMEA = "123"

    val TABLE_NAME = "TABLE"

    val conf = HBaseConfiguration.create()

    conf.set("hbase.zookeeper.quorum", "localhost")
    conf.set("hbase.master", "localhost:60000")
    conf.set(TableInputFormat.INPUT_TABLE, TABLE_NAME)

    val connection = ConnectionFactory.createConnection(conf)
    val table = connection.getTable(TableName.valueOf(TABLE_NAME))
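    // Note: this Connection/Table handle is not used by newAPIHadoopRDD below and is never closed;
    // it can be removed (or closed with connection.close()) without affecting the scan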
    val scan = new Scan

    val GatewayIDFilter = new SingleColumnValueFilter(Bytes.toBytes("header"), Bytes.toBytes("gatewayIMEA"), CompareFilter.CompareOp.EQUAL, Bytes.toBytes(String.valueOf(GatewayIMEA)))
    scan.setFilter(GatewayIDFilter)

    conf.set(TableInputFormat.SCAN, TableMapReduceUtil.convertScanToString(scan))
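    // The Scan (including its filter) is serialized into the job configuration above, so
    // TableInputFormat applies the filter on the region servers instead of loading the whole table into Spark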

    val hBaseRDD = sparkContext.newAPIHadoopRDD(conf, classOf[TableInputFormat], classOf[ImmutableBytesWritable], classOf[Result])


    val DATAFRAME = hBaseRDD.map(x => {
      (Bytes.toString(x._2.getValue(Bytes.toBytes("header"), Bytes.toBytes("gatewayIMEA"))),
        Bytes.toString(x._2.getValue(Bytes.toBytes("header"), Bytes.toBytes("eventTime"))),
        Bytes.toString(x._2.getValue(Bytes.toBytes("node"), Bytes.toBytes("imei"))),
        Bytes.toString(x._2.getValue(Bytes.toBytes("measure"), Bytes.toBytes("Measure"))))

    }).toDF()
      .withColumnRenamed("_1", "GatewayIMEA")
      .withColumnRenamed("_2", "EventTime")
      .withColumnRenamed("_3", "ap")
      .withColumnRenamed("_4", "measure")


    DATAFRAME.show()

  }

}

What this does is set the input table, define the filter, run the scan with that filter applied, load the scan result into an RDD, and then (optionally) convert the RDD to a DataFrame.
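One point to be aware of: by default a SingleColumnValueFilter lets a row pass when the filtered column is missing from that row. If you only want rows that actually contain the header:gatewayIMEA column, a minimal sketch (applied before the Scan is serialized into the configuration) would be:

// Exclude rows that do not contain the header:gatewayIMEA column at all
GatewayIDFilter.setFilterIfMissing(true)
scan.setFilter(GatewayIDFilter)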

To apply more than one filter:

val timestampFilter = new SingleColumnValueFilter(Bytes.toBytes("header"), Bytes.toBytes("eventTime"), CompareFilter.CompareOp.GREATER, Bytes.toBytes(String.valueOf(dateOfDayTimestamp)))
val GatewayIDFilter = new SingleColumnValueFilter(Bytes.toBytes("header"), Bytes.toBytes("gatewayIMEA"), CompareFilter.CompareOp.EQUAL, Bytes.toBytes(String.valueOf(GatewayIMEA)))

val filters = new FilterList(GatewayIDFilter, timestampFilter)
scan.setFilter(filters)
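
A FilterList combines its filters with MUST_PASS_ALL (logical AND) by default. If you instead wanted rows matching either condition, a minimal sketch reusing the same filters would pass the operator explicitly:

// Assumption: keep rows that match at least one of the two filters (logical OR)
val anyFilter = new FilterList(FilterList.Operator.MUST_PASS_ONE, GatewayIDFilter, timestampFilter)
scan.setFilter(anyFilter)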
