How do I collect a List of Strings from a Spark DataFrame column after a GroupBy operation?


Problem Description

The solution described here (by zero323) is very close to what I want, with two twists:


  1. How do I do this in Java?

  2. What if the column contains a List of Strings rather than a single String, and I want to collect all of those lists into a single list after the GroupBy (on some other column)?

I am using Spark 1.6 and have tried to use org.apache.spark.sql.functions.collect_list(Column col) as described in the solution to that question, but got the following error:


Exception in thread "main" org.apache.spark.sql.AnalysisException: undefined function collect_list;
	at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry$$anonfun$2.apply(FunctionRegistry.scala:65)
	at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry$$anonfun$2.apply(FunctionRegistry.scala:65)
	at scala.Option.getOrElse(Option.scala:121)
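
For reference, the failing call looked roughly like this (a sketch; df is assumed to be a DataFrame with an id grouping column and an array-of-strings column vs):

import static org.apache.spark.sql.functions.*;

df.groupBy(col("id"))
  .agg(collect_list(col("vs")))
  .show();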


Recommended Answer

The error you see suggests that you are using a plain SQLContext rather than a HiveContext. collect_list is a Hive UDF and as such requires a HiveContext. It also doesn't support complex columns, so the only option is to explode first:

import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.hive.HiveContext;
import java.util.*;
import org.apache.spark.sql.DataFrame;
import static org.apache.spark.sql.functions.*;

public class App {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(new SparkConf());
    // collect_list is a Hive UDF in Spark 1.6, so a HiveContext is required.
    SQLContext sqlContext = new HiveContext(sc);
    List<String> data = Arrays.asList(
            "{\"id\": 1, \"vs\": [\"a\", \"b\"]}",
            "{\"id\": 1, \"vs\": [\"c\", \"d\"]}",
            "{\"id\": 2, \"vs\": [\"e\", \"f\"]}",
            "{\"id\": 2, \"vs\": [\"g\", \"h\"]}"
    );
    DataFrame df = sqlContext.read().json(sc.parallelize(data));
    // collect_list cannot aggregate array columns directly, so flatten each
    // array into one row per element first, then regroup by id.
    df.withColumn("vs", explode(col("vs")))
           .groupBy(col("id"))
           .agg(collect_list(col("vs")))
           .show();
  }
}
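
Run against the sample data above, show() should print something like the following (element order inside each collected list is not guaranteed, and the aggregate column name can vary across Spark versions):

+---+----------------+
| id|collect_list(vs)|
+---+----------------+
|  1|    [a, b, c, d]|
|  2|    [e, f, g, h]|
+---+----------------+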

It is rather unlikely to perform well, though.
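
If performance does become an issue, one possible workaround (my own sketch, not part of the original answer) is to bypass explode and collect_list entirely and merge the lists at the RDD level. This assumes Java 8 lambdas and the same df as in the example above:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.sql.Row;
import scala.Tuple2;
import java.util.ArrayList;
import java.util.List;

// Pair each row's id with a copy of its "vs" array, then concatenate
// the lists per key. Ordering within the merged lists is still arbitrary.
JavaPairRDD<Long, List<String>> merged = df.javaRDD()
    .mapToPair(row -> {
        List<String> vs = new ArrayList<>(row.<String>getList(row.fieldIndex("vs")));
        return new Tuple2<>(row.getLong(row.fieldIndex("id")), vs);
    })
    .reduceByKey((a, b) -> {
        List<String> out = new ArrayList<>(a);
        out.addAll(b);
        return out;
    });

Whether this actually beats the explode route depends on the data: it avoids the row blow-up from explode, at the cost of leaving the DataFrame API.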
