Could you give me any clue why 'Cannot call methods on a stopped SparkContext'?


Problem Description

When I run 'val lines = sc.textFile("hdfs:///input")' in yarn-client mode, a 'Cannot call methods on a stopped SparkContext' error occurs. I have been searching for two days, but I cannot find the cause. "hdfs:///input" is correct, because the same command works fine in standalone mode.

Could you give me any idea what might be wrong? I'm using Spark 1.5.2 and Hadoop 2.7.2.

starting org.apache.spark.deploy.master.Master, logging to /opt/spark-1.5.2-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out
192.168.111.203: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-1.5.2-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.out
192.168.111.202: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-1.5.2-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.out
[root@master spark-1.5.2-bin-hadoop2.6]# bin/spark-shell --master yarn-client
16/03/19 05:59:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/19 05:59:12 INFO spark.SecurityManager: Changing view acls to: root
16/03/19 05:59:12 INFO spark.SecurityManager: Changing modify acls to: root
16/03/19 05:59:12 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/03/19 05:59:13 INFO spark.HttpServer: Starting HTTP Server
16/03/19 05:59:13 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/19 05:59:13 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:46780
16/03/19 05:59:13 INFO util.Utils: Successfully started service 'HTTP class server' on port 46780.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.2
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_73)
Type in expressions to have them evaluated.
Type :help for more information.
16/03/19 05:59:17 INFO spark.SparkContext: Running Spark version 1.5.2
16/03/19 05:59:17 WARN spark.SparkConf: 
SPARK_JAVA_OPTS was detected (set to '-Dspark.driver.port=53411').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)

16/03/19 05:59:17 WARN spark.SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/03/19 05:59:17 WARN spark.SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/03/19 05:59:17 INFO spark.SecurityManager: Changing view acls to: root
16/03/19 05:59:17 INFO spark.SecurityManager: Changing modify acls to: root
16/03/19 05:59:17 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/03/19 05:59:18 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/03/19 05:59:18 INFO Remoting: Starting remoting
16/03/19 05:59:18 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.111.201:53411]
16/03/19 05:59:18 INFO util.Utils: Successfully started service 'sparkDriver' on port 53411.
16/03/19 05:59:18 INFO spark.SparkEnv: Registering MapOutputTracker
16/03/19 05:59:18 INFO spark.SparkEnv: Registering BlockManagerMaster
16/03/19 05:59:18 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-f70b1bb6-288b-4894-bb49-22d1fc3d8d89
16/03/19 05:59:18 INFO storage.MemoryStore: MemoryStore started with capacity 534.5 MB
16/03/19 05:59:18 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-58591b6b-5b19-4bc0-a993-0b846de5ef6f/httpd-fe0c46a2-1d87-4bc7-8b4f-adfc79cb762a
16/03/19 05:59:18 INFO spark.HttpServer: Starting HTTP Server
16/03/19 05:59:18 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/19 05:59:18 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:40258
16/03/19 05:59:18 INFO util.Utils: Successfully started service 'HTTP file server' on port 40258.
16/03/19 05:59:18 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/03/19 05:59:18 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/19 05:59:18 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/03/19 05:59:18 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/03/19 05:59:18 INFO ui.SparkUI: Started SparkUI at http://192.168.111.201:4040
16/03/19 05:59:19 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/03/19 05:59:19 INFO client.RMProxy: Connecting to ResourceManager at /192.168.111.201:8032
16/03/19 05:59:19 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
16/03/19 05:59:19 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/03/19 05:59:19 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/03/19 05:59:19 INFO yarn.Client: Setting up container launch context for our AM
16/03/19 05:59:19 INFO yarn.Client: Setting up the launch environment for our AM container
16/03/19 05:59:19 INFO yarn.Client: Preparing resources for our AM container
16/03/19 05:59:21 INFO yarn.Client: Uploading resource file:/opt/spark-1.5.2-bin-hadoop2.6/lib/spark-assembly-1.5.2-hadoop2.6.0.jar -> hdfs://192.168.111.201:9000/user/root/.sparkStaging/application_1458334003417_0002/spark-assembly-1.5.2-hadoop2.6.0.jar
16/03/19 05:59:25 INFO yarn.Client: Uploading resource file:/tmp/spark-58591b6b-5b19-4bc0-a993-0b846de5ef6f/__spark_conf__2052137095112870542.zip -> hdfs://192.168.111.201:9000/user/root/.sparkStaging/application_1458334003417_0002/__spark_conf__2052137095112870542.zip
16/03/19 05:59:25 INFO spark.SecurityManager: Changing view acls to: root
16/03/19 05:59:25 INFO spark.SecurityManager: Changing modify acls to: root
16/03/19 05:59:25 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/03/19 05:59:25 INFO yarn.Client: Submitting application 2 to ResourceManager
16/03/19 05:59:25 INFO impl.YarnClientImpl: Submitted application application_1458334003417_0002
16/03/19 05:59:26 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:26 INFO yarn.Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1458334765746
     final status: UNDEFINED
     tracking URL: http://master:8088/proxy/application_1458334003417_0002/
     user: root
16/03/19 05:59:27 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:28 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:29 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:30 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:31 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:32 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:33 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:34 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:35 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://sparkYarnAM@192.168.111.203:46505/user/YarnAM#149895142])
16/03/19 05:59:35 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master, PROXY_URI_BASES -> http://master:8088/proxy/application_1458334003417_0002), /proxy/application_1458334003417_0002
16/03/19 05:59:35 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/03/19 05:59:35 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:46505
16/03/19 05:59:35 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@192.168.111.203:46505] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
16/03/19 05:59:35 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:46505
16/03/19 05:59:35 INFO yarn.Client: Application report for application_1458334003417_0002 (state: RUNNING)
16/03/19 05:59:35 INFO yarn.Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.111.203
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1458334765746
     final status: UNDEFINED
     tracking URL: http://master:8088/proxy/application_1458334003417_0002/
     user: root
16/03/19 05:59:35 INFO cluster.YarnClientSchedulerBackend: Application application_1458334003417_0002 has started running.
16/03/19 05:59:36 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42938.
16/03/19 05:59:36 INFO netty.NettyBlockTransferService: Server created on 42938
16/03/19 05:59:36 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/03/19 05:59:36 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.111.201:42938 with 534.5 MB RAM, BlockManagerId(driver, 192.168.111.201, 42938)
16/03/19 05:59:36 INFO storage.BlockManagerMaster: Registered BlockManager
16/03/19 05:59:40 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://sparkYarnAM@192.168.111.203:34633/user/YarnAM#-40449267])
16/03/19 05:59:40 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master, PROXY_URI_BASES -> http://master:8088/proxy/application_1458334003417_0002), /proxy/application_1458334003417_0002
16/03/19 05:59:40 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/03/19 05:59:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:34633
16/03/19 05:59:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:34633
16/03/19 05:59:41 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@192.168.111.203:34633] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
16/03/19 05:59:41 ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/03/19 05:59:41 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.111.201:4040
16/03/19 05:59:41 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/03/19 05:59:41 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
16/03/19 05:59:41 INFO cluster.YarnClientSchedulerBackend: Asking each executor to shut down
16/03/19 05:59:41 INFO cluster.YarnClientSchedulerBackend: Stopped
16/03/19 05:59:42 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/03/19 05:59:42 INFO storage.MemoryStore: MemoryStore cleared
16/03/19 05:59:42 INFO storage.BlockManager: BlockManager stopped
16/03/19 05:59:42 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/03/19 05:59:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/03/19 05:59:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/03/19 05:59:42 INFO spark.SparkContext: Successfully stopped SparkContext
16/03/19 05:59:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/03/19 05:59:49 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/03/19 05:59:49 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/03/19 05:59:49 INFO hive.HiveContext: Initializing execution hive, version 1.2.1
16/03/19 05:59:49 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
16/03/19 05:59:49 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/03/19 05:59:50 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/03/19 05:59:50 INFO metastore.ObjectStore: ObjectStore, initialize called
16/03/19 05:59:50 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/03/19 05:59:50 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/03/19 05:59:50 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/19 05:59:51 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/19 05:59:53 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/03/19 05:59:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:56 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:56 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:56 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/03/19 05:59:56 INFO metastore.ObjectStore: Initialized ObjectStore
16/03/19 05:59:57 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/03/19 05:59:57 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
16/03/19 05:59:57 INFO metastore.HiveMetaStore: Added admin role in metastore
16/03/19 05:59:57 INFO metastore.HiveMetaStore: Added public role in metastore
16/03/19 05:59:58 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/03/19 05:59:58 INFO metastore.HiveMetaStore: 0: get_all_databases
16/03/19 05:59:58 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_all_databases   
16/03/19 05:59:58 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
16/03/19 05:59:58 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
16/03/19 05:59:58 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:58 INFO session.SessionState: Created HDFS directory: /tmp/hive/root
16/03/19 05:59:58 INFO session.SessionState: Created local directory: /tmp/root
16/03/19 05:59:58 INFO session.SessionState: Created local directory: /tmp/e16dc45f-de41-4e69-9f73-c976cc3358c9_resources
16/03/19 05:59:58 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/e16dc45f-de41-4e69-9f73-c976cc3358c9
16/03/19 05:59:58 INFO session.SessionState: Created local directory: /tmp/root/e16dc45f-de41-4e69-9f73-c976cc3358c9
16/03/19 05:59:58 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/e16dc45f-de41-4e69-9f73-c976cc3358c9/_tmp_space.db
16/03/19 05:59:58 INFO hive.HiveContext: default warehouse location is /user/hive/warehouse
16/03/19 05:59:58 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
16/03/19 05:59:58 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
16/03/19 05:59:59 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/03/19 06:00:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/19 06:00:00 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/03/19 06:00:00 INFO metastore.ObjectStore: ObjectStore, initialize called
16/03/19 06:00:00 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/03/19 06:00:00 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/03/19 06:00:00 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/19 06:00:00 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/19 06:00:01 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/03/19 06:00:02 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:02 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:04 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:04 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:04 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/03/19 06:00:04 INFO metastore.ObjectStore: Initialized ObjectStore
16/03/19 06:00:04 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/03/19 06:00:05 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
16/03/19 06:00:05 INFO metastore.HiveMetaStore: Added admin role in metastore
16/03/19 06:00:05 INFO metastore.HiveMetaStore: Added public role in metastore
16/03/19 06:00:05 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/03/19 06:00:05 INFO metastore.HiveMetaStore: 0: get_all_databases
16/03/19 06:00:05 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_all_databases   
16/03/19 06:00:06 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
16/03/19 06:00:06 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
16/03/19 06:00:06 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:06 INFO session.SessionState: Created local directory: /tmp/b046e212-ccbd-4415-aec3-5b207f147fda_resources
16/03/19 06:00:06 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b046e212-ccbd-4415-aec3-5b207f147fda
16/03/19 06:00:06 INFO session.SessionState: Created local directory: /tmp/root/b046e212-ccbd-4415-aec3-5b207f147fda
16/03/19 06:00:06 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b046e212-ccbd-4415-aec3-5b207f147fda/_tmp_space.db
16/03/19 06:00:06 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala> val lines = sc.textFile("hdfs:///input")
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
    at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:104)
    at org.apache.spark.SparkContext.defaultParallelism(SparkContext.scala:2063)
    at org.apache.spark.SparkContext.defaultMinPartitions(SparkContext.scala:2076)
    at org.apache.spark.SparkContext.textFile$default$2(SparkContext.scala:825)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:21)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:26)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:28)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
    at $iwC$$iwC$$iwC.<init>(<console>:34)
    at $iwC$$iwC.<init>(<console>:36)
    at $iwC.<init>(<console>:38)
    at <init>(<console>:40)
    at .<init>(<console>:44)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Solution

Your YARN application exits immediately after it starts:

16/03/19 05:59:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:34633
16/03/19 05:59:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:34633
16/03/19 05:59:41 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@192.168.111.203:34633] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
16/03/19 05:59:41 ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!

At that point the SparkContext is closed, so any action on it will throw the exception you see.
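
One quick way to confirm this (a sketch, using the application ID from your own log; the command is part of the standard YARN CLI in Hadoop 2.7) is to ask YARN what state the application reached:

    yarn application -status application_1458334003417_0002

If the report shows the application finished or failed only a few seconds after its start time, the ApplicationMaster died during startup, which matches the disassociation warnings above.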

Check the "Application Master" logs (visible through YARN's UI) to see the cause for the failure. This could be a memory configuration issue, network issues (e.g. host unreachable) and more - the log on the driver side (which is what you pasted) won't tell you which one it is.
