Spark SQL thrift server can't run in cluster mode?


Problem description

In Spark 1.2.0, when I attempt to start the Spark SQL thrift server in cluster mode, I get the following output:

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: /usr/java/latest/bin/java -cp ::/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/sbin/../conf:/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-1.2.0-hadoop2.4.0.jar:/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar:/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar:/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar -XX:MaxPermSize=128m -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --deploy-mode cluster --master spark://xd-spark.xdata.data-tactics-corp.com:7077 spark-internal
========================================

Jar url 'spark-internal' is not in valid format.
Must be a jar file path in URL format (e.g. hdfs://host:port/XX.jar, file:///XX.jar)

Usage: DriverClient [options] launch <active-master> <jar-url> <main-class> [driver options]
Usage: DriverClient kill <active-master> <driver-id>

Options:
   -c CORES, --cores CORES        Number of cores to request (default: 1)
   -m MEMORY, --memory MEMORY     Megabytes of memory to request (default: 512)
   -s, --supervise                Whether to restart the driver on failure
   -v, --verbose                  Print more debugging output

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties

The "spark-internal" argument seems to be a special flag that tells spark-submit the class to be run is part of Spark's own libraries, so it doesn't need to distribute a jar. But for some reason, that doesn't seem to be working here.

Recommended answer

I filed this as SPARK-5176, and it will be addressed with an error message explaining that the Thrift server cannot run in cluster mode.
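In the meantime, the practical workaround is to launch the thrift server in the default client mode, where the driver runs on the submitting machine and the `spark-internal` pseudo-URL is never handed to `DriverClient`. A minimal sketch, assuming a standalone master at the address shown in the log above and using the `start-thriftserver.sh` helper that ships with the Spark distribution:

```shell
# Start HiveThriftServer2 via the bundled helper script.
# No --deploy-mode flag is passed, so client mode (the default) is used
# and the driver runs locally rather than on a worker.
./sbin/start-thriftserver.sh \
  --master spark://xd-spark.xdata.data-tactics-corp.com:7077
```

Dropping `--deploy-mode cluster` avoids the `DriverClient` code path that rejects the `spark-internal` jar URL.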
