How to build deb/rpm repos from open source Hadoop or publicly available HDP source code to be installed by Ambari


Problem Description


I am trying to install open source Hadoop, or to build HDP from source, so that it can be installed by Ambari. I can see that it is possible to build the Java packages for each component with the documentation available in the Apache repos, but how can I use those to build the rpm/deb packages that Hortonworks provides for the HDP distribution, so that they can be installed by Ambari?

Recommended Answer


@ShivamKhandelwal Building Ambari from source is a challenge, but one that can be accomplished with some persistence. In this post I have disclosed the commands I recently used to build Ambari 2.7.5 on CentOS:

Ambari 2.7.5 Install Failure on CentOS 7
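For reference, the build procedure from that post can be sketched roughly as below. This is a minimal sketch, assuming Maven 3, a JDK, and the rpm build tools are already installed; the mirror URL is an assumption, so check the Apache archive for the current location:

```shell
# Download and unpack the Ambari 2.7.5 source release
# (URL is an assumption; verify against the Apache archive).
wget https://archive.apache.org/dist/ambari/ambari-2.7.5/apache-ambari-2.7.5-src.tar.gz
tar xzf apache-ambari-2.7.5-src.tar.gz
cd apache-ambari-2.7.5-src

# Stamp the build version, then build and package every module as RPMs.
mvn versions:set -DnewVersion=2.7.5.0.0
mvn -B clean install rpm:rpm -DskipTests -Dpython.ver="python >= 2.6"

# The server RPM lands under ambari-server/target/rpm/ and can then be
# installed on the management host with yum.
```

The `rpm:rpm` goal comes from the rpm-maven-plugin wired into the Ambari build; on Debian-family systems the equivalent packaging goal produces deb packages instead.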


"Building HDP From Source" is a very big task, as it requires building each component separately and creating your own public/private repo containing all of the component repos or rpms for each operating-system flavor. This is a monumental task which was previously managed by many employees and component contributors at Hortonworks.
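The repo-creation half of that task can be sketched as follows. Everything here is a placeholder of my own (the "DDP" repo name, paths, and version), and `createrepo` must be installed on the build host for the metadata step to run:

```shell
#!/bin/sh
# Sketch: assemble locally built component RPMs into a yum repo that
# cluster hosts (and Ambari) can consume. All names are illustrative.
REPO_DIR="$PWD/ddp-repo/centos7/1.0.0.0"
mkdir -p "$REPO_DIR"

# cp /path/to/built/*.rpm "$REPO_DIR"/   # copy in the RPMs you built

# Generate yum repo metadata (requires the createrepo package).
command -v createrepo >/dev/null 2>&1 && createrepo "$REPO_DIR" \
  || echo "createrepo not installed; skipping metadata generation"

# Write the .repo definition that points hosts at this repository.
cat > ddp.repo <<EOF
[DDP-1.0]
name=DDP Version 1.0
baseurl=file://$REPO_DIR
gpgcheck=0
enabled=1
EOF
```

In practice the repo would be served over HTTP and the `baseurl` would point at that web server; a `file://` URL works for single-host testing.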


When you install Ambari from HDP, it comes out of the box with their repos, including their HDP stack (HDFS, YARN, MR, Hive, etc.). When you install Ambari from source, there is no stack. The only solution is to build your own stack, which is something I am an expert at doing.


I am currently building a DDP stack as an example to share with the public. I started this project by reverse engineering an HDF Management Pack, which includes the stack structure (files/folders) to roll out NiFi, Kafka, Zookeeper, and more. I have customized it to be my own stack with my own services and components (NiFi, Hue, Elasticsearch, etc.).
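As an illustration, the stack structure such a management pack implies looks roughly like this. The directory and service names below are hypothetical, modeled on how Ambari lays stacks out under its resources directory; they are not copied from the HDF mpack itself:

```shell
#!/bin/sh
# Skeleton of a custom Ambari stack; every name below is illustrative.
STACK="stacks/DDP/1.0"
mkdir -p "$STACK/services/NIFI/package/scripts"
mkdir -p "$STACK/services/NIFI/configuration"
mkdir -p "$STACK/repos"

# Stack-level metadata: stack version, inherited stack, OS families.
touch "$STACK/metainfo.xml"
# Repo URLs per OS flavor that Ambari registers on every host.
touch "$STACK/repos/repoinfo.xml"
# Service definition: components, commands, config dependencies.
touch "$STACK/services/NIFI/metainfo.xml"
# Lifecycle script Ambari invokes for install/start/stop/status.
touch "$STACK/services/NIFI/package/scripts/nifi.py"
```

Each service you add repeats the `metainfo.xml` + `configuration/` + `package/scripts/` pattern; Ambari discovers the service by scanning this tree at server start.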


My goal with DDP is to eventually make my own repos for the components and services I want, with the versions I want to install in my cluster. Next I will copy some HDP components like HDFS, YARN, and Hive from the HDP stack directly into my DDP stack, using the last free public HDP stack (HDP 3.1.5).
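That copy step can be sketched as shell commands. This is a sketch only: the source path assumes a host that already has the HDP 3.1 stack definitions under Ambari's resources directory, and `DDP/1.0` is my hypothetical custom stack:

```shell
# Source: HDP 3.1 stack definitions from an existing Ambari install.
SRC=/var/lib/ambari-server/resources/stacks/HDP/3.1/services
# Destination: the custom stack being assembled.
DST=/var/lib/ambari-server/resources/stacks/DDP/1.0/services

# Copy the service definitions for the components to reuse.
for svc in HDFS YARN HIVE; do
  cp -r "$SRC/$svc" "$DST/$svc"
done

# Restart ambari-server afterwards so it re-reads the stack tree:
# ambari-server restart
```

After copying, the repo URLs in the DDP stack's `repoinfo.xml` still have to point at repositories that actually contain the matching HDP 3.1.5 rpms.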

