How to connect to a remote DataNode using an HDFS client?


Problem description

My goal is to download a file from HDFS to the local filesystem. I am using a client that connects to a remote HDFS NameNode:

hadoop fs -get hdfs://sourceHDFS:8020/path_to_file/file /path_to_save_file

and I get an exception:

15/03/17 12:18:49 WARN client.ShortCircuitCache: ShortCircuitCache(0x11bbad83): failed to load 1073754800_BP-703742109-127.0.0.1-1398459391664
15/03/17 12:18:49 WARN hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.io.IOException: Got error for OP_READ_BLOCK, self=/127.0.0.1:57733, remote=bigdatalite.localdomain/127.0.0.1:50010, for file /user/hive/warehouse/b2_olap_hive.db/dim_deal_log/000000_0, for pool BP-703742109-127.0.0.1-1398459391664 block 1073754800_13977

My understanding of the situation: the HDFS client connects to the NameNode, but the NameNode returns the local DataNode IP (because the NameNode and DataNode are located on the same machine). For a remote client, 127.0.0.1 is the wrong address for the DataNode.

How can I connect to the correct DataNode? Or is my understanding wrong?
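Aside from fixing the cluster's host resolution, there is a client-side option worth knowing here: `dfs.client.use.datanode.hostname` tells the HDFS client to connect to DataNodes by their hostnames instead of the IP addresses the NameNode reports. This is not part of the accepted answer below, so treat it as a related option to verify against your Hadoop version; the snippet is a hypothetical fragment of the client's hdfs-site.xml:

```xml
<!-- hdfs-site.xml on the client machine (illustrative fragment) -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
```

For this to work, the DataNode's hostname (bigdatalite.localdomain in the log above) must be resolvable from the client, for example via an entry in the client's own /etc/hosts.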

Thanks in advance.

Recommended answer

You cannot bind to 127.0.0.1. Make sure the hostname entry in /etc/hosts points to the non-loopback interface, then bounce (restart) your DataNode and NameNode.
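As a sketch of that fix (the IP address below is hypothetical; the hostname is taken from the log above): on the machine running the NameNode and DataNode, map the hostname to the machine's real network interface rather than the loopback address in /etc/hosts:

```
# /etc/hosts on the NameNode/DataNode host
127.0.0.1      localhost
192.168.1.10   bigdatalite.localdomain   bigdatalite   # real interface IP (example value)
```

Then restart both daemons so they re-register under the non-loopback address (on Hadoop 2.x this is typically done with `sbin/hadoop-daemon.sh stop datanode` / `start datanode` and the same for the namenode; adjust for your distribution).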
