how to overcome spark "cannot parse master URL" error?
Problem Description
I have the following simple code in IntelliJ IDEA on my Mac:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SparkGrep {
  def main(args: Array[String]) {
    if (args.length < 3) {
      System.err.println("Usage: SparkGrep <host> <input_file> <match_term>")
      System.exit(1)
    }
    // args(0) is used as the master URL, e.g. "local[*]"
    val conf = new SparkConf().setAppName("SparkGrep").setMaster(args(0))
    val sc = new SparkContext(conf)
    // Load the input file as an RDD of lines (2 partitions) and cache it
    val inputFile = sc.textFile(args(1), 2).cache()
    val matchTerm: String = args(2)
    // Count the lines that contain the match term
    val numMatches = inputFile.filter(line => line.contains(matchTerm)).count()
    println("%s lines in %s contain %s".format(numMatches, args(1), matchTerm))
    System.exit(0)
  }
}
In my run configuration, I have added the following program arguments:
local[*] src/SparkGrep.scala val
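For reference, the three positional arguments feed the code above as follows (a REPL-style sketch that only restates the question's code, nothing new):

// How the run-configuration arguments are consumed (sketch):
val args = Array("local[*]", "src/SparkGrep.scala", "val")
val master    = args(0) // "local[*]" -> passed to SparkConf.setMaster
val inputPath = args(1) // the file whose lines sc.textFile scans
val matchTerm = args(2) // count lines containing "val"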
When I run this code, I get the following error:
Exception in thread "main" org.apache.spark.SparkException: Could not parse Master URL: 'local[*]'
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1304)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:199)
at spark.SparkTest.SparkGrep$.main(SparkGrep.scala:26)
at spark.SparkTest.SparkGrep.main(SparkGrep.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
What can I do to overcome this error?
Recommended Answer
IntelliJ IDEA + Mac + Spark

After every step, give IntelliJ time to finish, since pulls from Maven can sometimes be slow.

(Note: the steps below put spark-core 1.6.1 on the classpath. A master URL like local[*] is only recognized by Spark 1.0 and later, so an older or missing Spark dependency is a likely cause of the "Could not parse Master URL" error.)
- Install the Scala plugin from Preferences > Plugins > Scala
- File > New > Project, select Scala on the left pane, select SBT on the right pane
- Right click on the project name > Open Module Settings > Libraries
- Press the + module icon > Maven > org.apache.spark:spark-core_2.11:1.6.1 > Enter (alternatively, the dependency can be declared in build.sbt; see the sketch after this list)
- Add the library to the project name
- The Spark library should appear under the External Libraries section
- Create a new Scala file in src/main/scala, e.g. Test.scala
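As an alternative to adding the Maven library through Open Module Settings, the same dependency can be declared in build.sbt. A minimal sketch, assuming the SBT project created in the second step (the project name and scalaVersion are assumed placeholders; 2.11.8 is chosen to match the _2.11 artifact):

// build.sbt -- minimal sketch; name and scalaVersion are assumed values
name := "SparkTest"

scalaVersion := "2.11.8"

// Resolves to org.apache.spark:spark-core_2.11:1.6.1, same as the Maven step above
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1"

After editing build.sbt, let SBT re-import so the Spark classes show up under External Libraries.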
Test.scala
import org.apache.spark.{SparkContext, SparkConf}

object Test {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("DevDemo").setMaster("local")
    val sc = new SparkContext(conf)
    // Read the log file into an RDD of lines and cache it
    val inputFile = sc.textFile("/var/log/fsck_hfs.log").cache()
    // Keep only the lines that contain "ERROR"
    val errAs = inputFile.filter(line => line.contains("ERROR"))
    println("Error count : %s".format(errAs.count()))
  }
}
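Note that this example hard-codes setMaster("local"), which runs Spark in-process with a single worker thread. With spark-core 1.6.1 on the classpath, the question's original local[*] master should parse as well; a sketch of the one-line change:

// local[*] uses one worker thread per logical core (recognized since Spark 1.0)
val conf = new SparkConf().setAppName("DevDemo").setMaster("local[*]")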
IntelliJ Run menu > Run

Result (snippet):
16/06/13 14:39:19 INFO DAGScheduler: ResultStage 0 (count at Test.scala:14) finished in 1.258 s
16/06/13 14:39:19 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/06/13 14:39:19 INFO DAGScheduler: Job 0 finished: count at Test.scala:14, took 1.829030 s
Error count : 18