Hadoop cluster setup - java.net.ConnectException: Connection refused


Problem description


I want to set up a Hadoop cluster in pseudo-distributed mode. I managed to perform all the setup steps, including starting up a Namenode, Datanode, Jobtracker and a Tasktracker on my machine.

Then I tried to run some example programs and ran into the java.net.ConnectException: Connection refused error. I stepped back to the very first steps of running some operations in standalone mode and hit the same problem.

I even triple-checked all the installation steps and have no idea how to fix it. (I am new to Hadoop and a beginner Ubuntu user, so please take that into account when providing any guide or tip.)

This is the error output I keep receiving:

hduser@marta-komputer:/usr/local/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input output 'dfs[a-z.]+'
15/02/22 18:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/22 18:23:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
java.net.ConnectException: Call From marta-komputer/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.delete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:521)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.delete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1929)
    at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:638)
    at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:634)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:634)
    at org.apache.hadoop.examples.Grep.run(Grep.java:95)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.Grep.main(Grep.java:101)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 32 more


etc/hadoop/hadoop-env.sh file:

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by 
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER

.bashrc file Hadoop-related fragment:

# -- HADOOP ENVIRONMENT VARIABLES START -- #
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# -- HADOOP ENVIRONMENT VARIABLES END -- #
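
Incidentally, the repeated NativeCodeLoader warnings in the logs below are unrelated to the connection error, but they often trace back to this last export. A commonly suggested tweak (an assumption on my side, not verified on this machine) points java.library.path at the native library directory instead of lib:

# Point the JVM at the native libraries; $HADOOP_HOME/lib alone misses them
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"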

/usr/local/hadoop/etc/hadoop/core-site.xml file:

<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop_tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

</configuration>

/usr/local/hadoop/etc/hadoop/hdfs-site.xml file:

<configuration>
<property>
      <name>dfs.replication</name>
      <value>1</value>
 </property>
 <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
 </property>
 <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
 </property>
</configuration>

/usr/local/hadoop/etc/hadoop/yarn-site.xml file:

<configuration> 
<property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
</property>
<property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

/usr/local/hadoop/etc/hadoop/mapred-site.xml file:

<configuration>
<property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
</property>
</configuration>

Running hduser@marta-komputer:/usr/local/hadoop$ bin/hdfs namenode -format results in the following output (I substituted parts of it with (...)):

hduser@marta-komputer:/usr/local/hadoop$ bin/hdfs namenode -format
15/02/22 18:50:47 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = marta-komputer/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli (...)2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_31
************************************************************/
15/02/22 18:50:47 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/02/22 18:50:47 INFO namenode.NameNode: createNameNode [-format]
15/02/22 18:50:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-0b65621a-eab3-47a4-bfd0-62b5596a940c
15/02/22 18:50:48 INFO namenode.FSNamesystem: No KeyProvider found.
15/02/22 18:50:48 INFO namenode.FSNamesystem: fsLock is fair:true
15/02/22 18:50:48 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/02/22 18:50:48 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/02/22 18:50:48 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/02/22 18:50:48 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Feb 22 18:50:48
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map BlocksMap
15/02/22 18:50:48 INFO util.GSet: VM type       = 64-bit
15/02/22 18:50:48 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/02/22 18:50:48 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/02/22 18:50:48 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: defaultReplication         = 1
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxReplication             = 512
15/02/22 18:50:48 INFO blockmanagement.BlockManager: minReplication             = 1
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/02/22 18:50:48 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/02/22 18:50:48 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
15/02/22 18:50:48 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
15/02/22 18:50:48 INFO namenode.FSNamesystem: supergroup          = supergroup
15/02/22 18:50:48 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/02/22 18:50:48 INFO namenode.FSNamesystem: HA Enabled: false
15/02/22 18:50:48 INFO namenode.FSNamesystem: Append Enabled: true
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map INodeMap
15/02/22 18:50:48 INFO util.GSet: VM type       = 64-bit
15/02/22 18:50:48 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/02/22 18:50:48 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/02/22 18:50:48 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map cachedBlocks
15/02/22 18:50:48 INFO util.GSet: VM type       = 64-bit
15/02/22 18:50:48 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/02/22 18:50:48 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/02/22 18:50:48 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/02/22 18:50:48 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/02/22 18:50:48 INFO util.GSet: VM type       = 64-bit
15/02/22 18:50:48 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/02/22 18:50:48 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/02/22 18:50:48 INFO namenode.NNConf: ACLs enabled? false
15/02/22 18:50:48 INFO namenode.NNConf: XAttrs enabled? true
15/02/22 18:50:48 INFO namenode.NNConf: Maximum size of an xattr: 16384
Re-format filesystem in Storage Directory /usr/local/hadoop_tmp/hdfs/namenode ? (Y or N) Y
15/02/22 18:50:50 INFO namenode.FSImage: Allocated new BlockPoolId: BP-948369552-127.0.1.1-1424627450316
15/02/22 18:50:50 INFO common.Storage: Storage directory /usr/local/hadoop_tmp/hdfs/namenode has been successfully formatted.
15/02/22 18:50:50 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/02/22 18:50:50 INFO util.ExitUtil: Exiting with status 0
15/02/22 18:50:50 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at marta-komputer/127.0.1.1
************************************************************/

Starting dfs and yarn results in the following output:

hduser@marta-komputer:/usr/local/hadoop$ start-dfs.sh
15/02/22 18:53:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-marta-komputer.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-marta-komputer.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-marta-komputer.out
15/02/22 18:53:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@marta-komputer:/usr/local/hadoop$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-marta-komputer.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-marta-komputer.out

Calling jps shortly after that gives:

hduser@marta-komputer:/usr/local/hadoop$ jps
11696 ResourceManager
11842 NodeManager
11171 NameNode
11523 SecondaryNameNode
12167 Jps
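
Note that no DataNode appears in this list, even though start-dfs.sh reported starting one. A quick way to look for the reason (a sketch, assuming the default log location reported by start-dfs.sh above; the .log file usually carries the stack trace, while the reported .out file only holds stdout):

tail -n 50 /usr/local/hadoop/logs/hadoop-hduser-datanode-marta-komputer.log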

netstat output:

hduser@marta-komputer:/usr/local/hadoop$ sudo netstat -lpten | grep java
tcp        0      0 0.0.0.0:8088            0.0.0.0:*               LISTEN      1001       690283      11696/java      
tcp        0      0 0.0.0.0:42745           0.0.0.0:*               LISTEN      1001       684574      11842/java      
tcp        0      0 0.0.0.0:13562           0.0.0.0:*               LISTEN      1001       680955      11842/java      
tcp        0      0 0.0.0.0:8030            0.0.0.0:*               LISTEN      1001       684531      11696/java      
tcp        0      0 0.0.0.0:8031            0.0.0.0:*               LISTEN      1001       684524      11696/java      
tcp        0      0 0.0.0.0:8032            0.0.0.0:*               LISTEN      1001       680879      11696/java      
tcp        0      0 0.0.0.0:8033            0.0.0.0:*               LISTEN      1001       687392      11696/java      
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      1001       680951      11842/java      
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      1001       687242      11171/java      
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      1001       680956      11842/java      
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1001       690252      11523/java      
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      1001       687239      11171/java  

/etc/hosts file:

127.0.0.1       localhost
127.0.1.1       marta-komputer

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
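
For reference, the Hadoop wiki page linked in the exception advises checking that the cluster hostname does not resolve to a loopback address, and on Debian/Ubuntu the 127.0.1.1 entry is a frequent culprit. A commonly suggested variant (hypothetical here, not verified on this machine) maps the hostname to the LAN address that appears in UPDATE 1 below:

127.0.0.1       localhost
192.168.1.8     marta-komputer    # LAN IP instead of 127.0.1.1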

====================================================

UPDATE 1.

I updated the core-site.xml and now I have:

<property>
<name>fs.default.name</name>
<value>hdfs://marta-komputer:9000</value>
</property>
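
As an aside, fs.default.name is the deprecated key for this setting; on Hadoop 2.x the canonical key is fs.defaultFS (both are still honored through the deprecation mapping, so this is cleanup rather than a fix):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://marta-komputer:9000</value>
</property>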

but I keep receiving the error, which now starts as:

15/03/01 00:59:34 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
java.net.ConnectException: Call From marta-komputer.home/192.168.1.8 to marta-komputer:9000 failed on connection exception:     java.net.ConnectException: Connection refused; For more details see:    http://wiki.apache.org/hadoop/ConnectionRefused

I also noticed that telnet localhost 9000 does not work:

hduser@marta-komputer:~$ telnet localhost 9000
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
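
If telnet is unavailable, the same reachability check works with netcat or netstat (a sketch, assuming stock Ubuntu tools):

nc -zv localhost 9000               # succeeds only if something accepts connections on port 9000
sudo netstat -lpten | grep 9000     # shows which process, if any, listens there and on which address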

Solution

For me, these steps worked (see the commented sketch after the list):

  1. stop-all.sh
  2. hadoop namenode -format
  3. start-all.sh
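
Here is that sequence with comments (a sketch: stop-all.sh and start-all.sh are deprecated in Hadoop 2.x in favor of the per-subsystem scripts, and reformatting erases all HDFS metadata, so only do this on a fresh or disposable cluster):

# 1. Stop every daemon (stop-dfs.sh plus stop-yarn.sh on Hadoop 2.x)
stop-all.sh

# 2. Reformat the NameNode; 'hdfs namenode -format' is the current form.
#    WARNING: this wipes all HDFS metadata.
hadoop namenode -format

# 3. Bring the daemons back up (start-dfs.sh plus start-yarn.sh on Hadoop 2.x)
start-all.sh

# Verify: jps should now also show a DataNode, and port 9000 should accept connections
jps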
