Hive job fails with MapReduce error: Call From hmaster/127.0.0.1 to localhost:44849 failed on connection exception
Problem description
When I run the following in the Hive command line:
hive > select count(*) from alogs;
On the terminal, it shows the following :
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1417084377943_0009, Tracking URL = http://localhost:8088/proxy/application_1417084377943_0009/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1417084377943_0009
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-12-02 17:59:44,068 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1417084377943_0009 with errors
Error during job, obtaining debugging information...
**FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask**
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Then I used the ResourceManager web UI to see the error details:
Application application_1417084377943_0009 failed 2 times due to Error launching appattempt_1417084377943_0009_000002. Got exception: **java.net.ConnectException: Call From hmaster/127.0.0.1 to localhost:44849 failed on connection exception: java.net.ConnectException: Connection refused;** For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
at org.apache.hadoop.ipc.Client.call(Client.java:1382)
... 9 more
. Failing the application.
Though the error message is detailed enough, I don't know where the 'localhost:44849' address is configured, or what 'Call From hmaster/127.0.0.1 to localhost:44849 failed on connection exception' means.
If you have a config file "..../hadoop-2.8.1/etc/hadoop/mapred-site.xml" in your Hadoop installation and you haven't started YARN, a Hive task may throw a "Retrying connect to server: 0.0.0.0/0.0.0.0:8032" exception. (You may find that "select *" works while "select sum()" fails, because only the latter launches a MapReduce job.)
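For reference, the setting in mapred-site.xml that sends Hive's jobs to YARN is the standard mapreduce.framework.name property; a minimal sketch of such a file (property name and default ResourceManager port are standard Hadoop; the file path is the one from the answer above):

```xml
<?xml version="1.0"?>
<!-- mapred-site.xml: when this sets the framework to "yarn", every
     MapReduce job Hive launches will try to contact the ResourceManager
     (port 8032 by default), so YARN must actually be running. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```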
You can execute "jps" to check whether YARN is running.
If YARN is not running, the output may look like this:
[cc@localhost conf]$ jps
36721 Jps
8402 DataNode
35458 RunJar
8659 SecondaryNameNode
8270 NameNode
If YARN is running, the output may look like this:
[cc@localhost sbin]$ jps
13237 Jps
9767 DataNode
9975 SecondaryNameNode
12651 ResourceManager (extra process when YARN is running)
12956 NodeManager (extra process when YARN is running)
9581 NameNode
13135 JobHistoryServer
There are two solutions:
1. Rename the mapred-site.xml file (execute the Linux command "mv mapred-site.xml mapred-site.xml.template") or delete it, then restart Hadoop.
2. Run YARN: modify the Hadoop config and use "start-yarn.sh" to start YARN.
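As a sketch of solution 1, the rename is a single mv. The demo below uses a scratch directory so it is safe to try anywhere; /tmp/hadoop-conf-demo and the empty stand-in file are hypothetical, and on a real install you would run the mv inside $HADOOP_HOME/etc/hadoop (assuming HADOOP_HOME points at your Hadoop directory) and then restart Hadoop:

```shell
# Demonstrate the rename from solution 1 in a scratch directory.
# On a real cluster, run the mv in $HADOOP_HOME/etc/hadoop instead,
# then restart Hadoop so the change takes effect.
mkdir -p /tmp/hadoop-conf-demo
cd /tmp/hadoop-conf-demo
touch mapred-site.xml                        # stand-in for the real config file
mv mapred-site.xml mapred-site.xml.template  # Hadoop ignores *.template files
ls                                           # prints: mapred-site.xml.template
```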