hadoop api configuration on the client machine

This article covers Hadoop API configuration on the client machine; the answer below may be a useful reference for anyone facing the same problem.

Problem description

I'm an ultra-noob. I have a server machine running cdh3u1 in pseudo-distributed mode, and a client machine with a Java application that uses the cdh3u1 API.

How do I configure the client to talk to the server? I've been googling for hours and can't find where the "client configuration" file lives. hdfs-default, core-default and mapred-default, and their "-site" counterparts, all look like server (namenode and datanode) configuration to me.

Is it just "multipurpose client/server" configuration, and should I cherry-pick the properties in these files that apply to the client? If so, which are they? I'm probably missing something big here...

Thanks, Ido

Solution

Make sure the client machine can reach the Hadoop server machine's IP. If you are running the Hadoop server (the cdh3 VM) in VirtualBox, add a "host-only" network interface (for details, see: host-only networking with virtualbox). I'll assume the Hadoop server's static IP is 192.168.56.101 and that you can ping it from your client.



Configure a hostname for the Hadoop server machine on both the server and the client. If you want to name your Hadoop server "local-elephant", add the following line to /etc/hosts on both machines: 192.168.56.101 local-elephant



On the server machine, go to /etc/hadoop/conf and change the values of the following properties from "localhost" to "local-elephant": fs.default.name in core-site.xml, and mapred.job.tracker in mapred-site.xml (a sketch of the edited entries follows).
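For reference, this is roughly what the edited entries might look like. The ports are placeholders I'm assuming, not something the answer specifies; keep whatever ports your existing files already use:

```xml
<!-- /etc/hadoop/conf/core-site.xml on the server -->
<property>
  <name>fs.default.name</name>
  <!-- was hdfs://localhost:...; port shown here is an assumption, keep your own -->
  <value>hdfs://local-elephant:8020</value>
</property>

<!-- /etc/hadoop/conf/mapred-site.xml on the server -->
<property>
  <name>mapred.job.tracker</name>
  <!-- was localhost:...; port shown here is an assumption, keep your own -->
  <value>local-elephant:8021</value>
</property>
```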



On the client machine, create core-site.xml and mapred-site.xml in the classpath of your Java application. In those files, put only the fs.default.name and mapred.job.tracker properties.
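As a quick sanity check that those classpath files are being picked up, here is a minimal sketch of a client program using the old (CDH3-era, pre-YARN) Hadoop API. The class name and the listed path are illustrative only; the printed URI depends on what you put in your client-side core-site.xml:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientCheck {
    public static void main(String[] args) throws IOException {
        // Loads core-site.xml and mapred-site.xml from the classpath automatically
        Configuration conf = new Configuration();

        // Should print the value from your client-side core-site.xml,
        // e.g. hdfs://local-elephant:<your-port>
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));

        // Connect to the remote HDFS and list the root directory
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}
```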


That concludes this article on Hadoop API configuration on the client machine; hopefully the answer above is helpful.
