Split dataset based on column values in Spark

Problem Description

I am trying to split the Dataset into different Datasets based on the contents of the Manufacturer column. It is very slow.
Please suggest a way to improve the code so that it executes faster and uses less Java code.

    List<Row> lsts = countsByAge.collectAsList();

    for (Row lst : lsts) {
        String man = lst.toString();
        man = man.replaceAll("[\\p{Ps}\\p{Pe}]", "");
        Dataset<Row> DF = src.filter("Manufacturer='" + man + "'");
        DF.show();
    }

The code, input, and output datasets are shown below.

    package org.sparkexample;

    import java.util.List;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SQLContext;
    import org.apache.spark.sql.SparkSession;

    public class GroupBy {

        public static void main(String[] args) {
            System.setProperty("hadoop.home.dir", "C:\\winutils");
            JavaSparkContext sc = new JavaSparkContext(
                    new SparkConf().setAppName("SparkJdbcDs").setMaster("local[*]"));
            SQLContext sqlContext = new SQLContext(sc);
            SparkSession spark = SparkSession.builder().appName("split datasets").getOrCreate();
            sc.setLogLevel("ERROR");

            // read the CSV file, using the first line as the header
            Dataset<Row> src = sqlContext.read()
                    .format("com.databricks.spark.csv")
                    .option("header", "true")
                    .load("sample.csv");

            // collect the distinct manufacturers to the driver
            Dataset<Row> unq_manf = src.select("Manufacturer").distinct();
            List<Row> lsts = unq_manf.collectAsList();

            // filter the source dataset once per manufacturer and show each slice
            for (Row lst : lsts) {
                String man = lst.toString();
                man = man.replaceAll("[\\p{Ps}\\p{Pe}]", "");
                Dataset<Row> DF = src.filter("Manufacturer='" + man + "'");
                DF.show();
            }
        }
    }

        INPUT TABLE-
        +------+------------+--------------------+---+
        |ItemID|Manufacturer|       Category name|UPC|
        +------+------------+--------------------+---+
        |   804|         ael|Brush & Broom Han...|123|
        |   805|         ael|Wheel Brush Parts...|124|
        |   813|         ael|      Drivers Gloves|125|
        |   632|        west|       Pipe Wrenches|126|
        |   804|         bil|     Masonry Brushes|127|
        |   497|        west|   Power Tools Other|128|
        |   496|        west|   Power Tools Other|129|
        |   495|         bil|           Hole Saws|130|
        |   499|         bil|    Battery Chargers|131|
        |   497|        west|   Power Tools Other|132|
        +------+------------+--------------------+---+

        OUTPUT-
        +------------+
        |Manufacturer|
        +------------+
        |         ael|
        |        west|
        |         bil|
        +------------+

        +------+------------+--------------------+---+
        |ItemID|Manufacturer|       Category name|UPC|
        +------+------------+--------------------+---+
        |   804|         ael|Brush & Broom Han...|123|
        |   805|         ael|Wheel Brush Parts...|124|
        |   813|         ael|      Drivers Gloves|125|
        +------+------------+--------------------+---+

        +------+------------+-----------------+---+
        |ItemID|Manufacturer|    Category name|UPC|
        +------+------------+-----------------+---+
        |   632|        west|    Pipe Wrenches|126|
        |   497|        west|Power Tools Other|128|
        |   496|        west|Power Tools Other|129|
        |   497|        west|Power Tools Other|132|
        +------+------------+-----------------+---+

        +------+------------+----------------+---+
        |ItemID|Manufacturer|   Category name|UPC|
        +------+------------+----------------+---+
        |   804|         bil| Masonry Brushes|127|
        |   495|         bil|       Hole Saws|130|
        |   499|         bil|Battery Chargers|131|
        +------+------------+----------------+---+

Thanks

Recommended Answer

Instead of splitting the dataset/dataframe by manufacturer, it might be better to write the dataframe out with Manufacturer as the partition key, if you need to query by manufacturer frequently.
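
A minimal sketch of that partitioned write, assuming the same sample.csv input as in the question and an illustrative output directory out/items_by_manufacturer (both paths are placeholders, not part of the original answer):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partition by manufacturer").getOrCreate()
    df = spark.read.csv("sample.csv", header=True)

    # write the dataframe once, partitioned on the Manufacturer column;
    # each distinct value lands in its own Manufacturer=<value>/ subdirectory
    df.write.partitionBy("Manufacturer").mode("overwrite").parquet("out/items_by_manufacturer")

    # later reads that filter on Manufacturer only scan the matching partition
    ael_df = spark.read.parquet("out/items_by_manufacturer").where("Manufacturer = 'ael'")
    ael_df.show()

This writes the source data a single time instead of re-scanning it once per manufacturer, which is the main cost of the filter loop in the question.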

In case you still want separate dataframes based on one of the column values, one approach using pyspark and Spark 2.0+ could be:

    from pyspark.sql import functions as F

    df = spark.read.csv("sample.csv", header=True)

    # collect the distinct manufacturers as a list of Rows
    manufacturers = df.select('Manufacturer').distinct().collect()

    # loop through the manufacturers, filter df by each one and write that slice out separately
    for m in manufacturers:
        df1 = df.where(F.col('Manufacturer') == m[0])
        # optionally repartition first (e.g. df1.repartition(repartition_col))
        # and pass a write mode if needed
        df1.write.parquet(<write_path>)
