scala.collection.Seq doesn't work in Java

Problem description

Using:

  • Apache Spark 2.0.1
  • Java 7

The Apache Spark Java API documentation for the Dataset class shows an example of the join method that takes a scala.collection.Seq parameter to specify the column names, but I'm not able to use it. The documentation provides the following example:

df1.join(df2, Seq("user_id", "user_name"))

Error: cannot find symbol method Seq(String)

My code:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import scala.collection.Seq;

public class UserProfiles {

    public static void calcTopShopLookup() {
        Dataset<Row> udp = Spark.getDataFrameFromMySQL("my_schema", "table_1");

        Dataset<Row> result = Spark.getSparkSession().table("table_2").join(udp, Seq("col_1", "col_2"));
    }
}

Answer

Seq(x, y, ...) is the Scala way to create a sequence. Seq has a companion object with an apply method, which lets you avoid writing new each time. There is no such Seq(...) method visible from Java, which is why the compiler reports "cannot find symbol".

It should be possible to write:

import scala.collection.JavaConversions;
import scala.collection.Seq;

import static java.util.Arrays.asList;

Dataset<Row> result = Spark.getSparkSession().table("table_2").join(udp, JavaConversions.asScalaBuffer(asList("col_1", "col_2")));
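
If you want to avoid scala.collection.JavaConversions, which is deprecated as of Scala 2.12, a roughly equivalent sketch using scala.collection.JavaConverters should also work (assuming Spark 2.0.1 on Scala 2.11, as in the question):

import scala.collection.JavaConverters;
import scala.collection.Seq;

import static java.util.Arrays.asList;

// asScala() returns a scala.collection.mutable.Buffer, which is also a scala.collection.Seq
Seq<String> cols = JavaConverters.asScalaBufferConverter(asList("col_1", "col_2")).asScala();
Dataset<Row> result = Spark.getSparkSession().table("table_2").join(udp, cols);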

Or you can create your own small helper method:

public static <T> Seq<T> asSeq(T... values) {
    return JavaConversions.asScalaBuffer(asList(values));
}
