Apache Spark SQL issue in a multi-node Hadoop cluster


Problem description


Hi, I am using the Spark Java APIs to fetch data from Hive. This code works on a single-node Hadoop cluster, but when I try to use it on a multi-node Hadoop cluster it throws the following error:

org.apache.spark.SparkException: Detected yarn-cluster mode, but isn't running on a cluster. Deployment to YARN is not supported directly by SparkContext. Please use spark-submit.

Note: I have used local as the master for the single-node cluster and yarn-cluster for the multi-node cluster.

And this is my Java code:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.hive.HiveContext;

SparkConf sparkConf = new SparkConf().setAppName("Hive").setMaster("yarn-cluster");
JavaSparkContext ctx = new JavaSparkContext(sparkConf);
HiveContext sqlContext = new HiveContext(ctx.sc());
org.apache.spark.sql.Row[] result = sqlContext.sql("Select * from Tablename").collect();

I have also tried changing the master to local, and now it throws an unknown host exception.
Can anyone help me with this?

Update

Error logs

15/08/05 11:30:25 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
15/08/05 11:30:25 INFO ObjectStore: Initialized ObjectStore
15/08/05 11:30:25 INFO HiveMetaStore: Added admin role in metastore
15/08/05 11:30:25 INFO HiveMetaStore: Added public role in metastore
15/08/05 11:30:25 INFO HiveMetaStore: No user is added in admin role, since config is empty
15/08/05 11:30:25 INFO SessionState: No Tez session required at this point. hive.execution.engine=mr.
15/08/05 11:30:25 INFO HiveMetaStore: 0: get_table : db=default tbl=activity
15/08/05 11:30:25 INFO audit: ugi=labuser   ip=unknown-ip-addr  cmd=get_table : db=default tbl=activity 
15/08/05 11:30:25 WARN HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
15/08/05 11:30:25 INFO deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/08/05 11:30:26 INFO MemoryStore: ensureFreeSpace(399000) called with curMem=0, maxMem=1030823608
15/08/05 11:30:26 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 389.6 KB, free 982.7 MB)
15/08/05 11:30:26 INFO MemoryStore: ensureFreeSpace(34309) called with curMem=399000, maxMem=1030823608
15/08/05 11:30:26 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 33.5 KB, free 982.7 MB)
15/08/05 11:30:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.16.100.7:61775 (size: 33.5 KB, free: 983.0 MB)
15/08/05 11:30:26 INFO SparkContext: Created broadcast 0 from collect at Hive.java:29
Exception in thread "main" java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoopcluster
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:258)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:153)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1783)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:885)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:884)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:105)
    at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1255)
    at com.Hive.main(Hive.java:29)
Caused by: java.net.UnknownHostException: hadoopcluster
    ... 44 more

Solution

As the exception indicates, yarn-cluster mode cannot be used directly from the SparkContext; deployment to YARN has to go through spark-submit. You can, however, run the application on a standalone multi-node Spark cluster through the SparkContext. First start the standalone Spark cluster, then set sparkConf.setMaster("spark://HOST:PORT"), where HOST:PORT is the URL of the Spark master. I hope this solves your problem.
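
For reference, here is a minimal sketch of that change, assuming a standalone Spark master is already running. The host name sparkmaster and port 7077 are placeholders rather than values from the original post, and the Spark 1.x Java API from the question is kept as-is.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.hive.HiveContext;

    public class Hive {
        public static void main(String[] args) {
            // Point the driver at the standalone Spark master instead of yarn-cluster.
            // "spark://sparkmaster:7077" is a placeholder; use the URL shown on your master's web UI.
            SparkConf sparkConf = new SparkConf()
                    .setAppName("Hive")
                    .setMaster("spark://sparkmaster:7077");
            JavaSparkContext ctx = new JavaSparkContext(sparkConf);
            HiveContext sqlContext = new HiveContext(ctx.sc());

            // Same query as in the question; collect() brings the rows back to the driver.
            Row[] result = sqlContext.sql("Select * from Tablename").collect();
            for (Row row : result) {
                System.out.println(row);
            }
            ctx.stop();
        }
    }

If you want to stay on YARN instead, the exception message itself points to the alternative: package the application as a jar and launch it with spark-submit rather than hard-coding a yarn-cluster master inside the program.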
