SSH access for the headnode of FIWARE-Cosmos


Problem Description



I am following this guide on Hadoop/FIWARE-Cosmos and I have a question about the Hive part.

I can access the old cluster’s (cosmos.lab.fiware.org) headnode through SSH, but I cannot do it for the new cluster. I tried both storage.cosmos.lab.fiware.org and computing.cosmos.lab.fiware.org and failed to connect.

My intention in trying to connect via SSH was to test Hive queries on our data through the Hive CLI. After failing to do so, I checked and was able to connect to the 10000 port of computing.cosmos.lab.fiware.org with telnet. I guess Hive is served through that port. Is this the only way we can use Hive in the new cluster?

Solution

The new pair of clusters does not have SSH access enabled. This is because users tended to install a lot of software (much of it unrelated to Big Data) on the "old" cluster, which had SSH access enabled, as you mention. So the new pair of clusters is intended to be used only through the exposed APIs: WebHDFS for data I/O and Tidoop for MapReduce.
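As a concrete illustration of the WebHDFS path, here is a minimal sketch of building a WebHDFS REST request against the storage endpoint. The port (14000), the `X-Auth-Token` header, and the HDFS path layout are assumptions based on typical HttpFS/OAuth2 setups, not values confirmed by this answer; check the Cosmos documentation and substitute your own username and token.

```python
# Sketch: build a WebHDFS REST URL for the new Cosmos cluster.
# Host/port and the X-Auth-Token header are assumptions; verify
# against the official Cosmos documentation before use.
from urllib.parse import urlencode

COSMOS_HOST = "storage.cosmos.lab.fiware.org"  # storage endpoint from the question
COSMOS_PORT = 14000                            # assumed HttpFS (WebHDFS) port

def webhdfs_url(user: str, path: str, op: str = "LISTSTATUS") -> str:
    """Build a WebHDFS URL for an operation on a path under the user's HDFS home."""
    query = urlencode({"op": op, "user.name": user})
    return (f"http://{COSMOS_HOST}:{COSMOS_PORT}"
            f"/webhdfs/v1/user/{user}{path}?{query}")

# The actual call would attach the OAuth2 token, e.g. with urllib.request:
#   req = urllib.request.Request(webhdfs_url("my_cosmos_user", "/data"),
#                                headers={"X-Auth-Token": "my_oauth2_token"})
#   print(urllib.request.urlopen(req).read())

print(webhdfs_url("my_cosmos_user", "/data"))
```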

That being said, a Hive server is also running, and it should be exposing a remote service on port 10000, as you mention as well. I say "it should be" because it runs an experimental authenticator module based on OAuth2, as WebHDFS and Tidoop do. In theory, connecting to that port from a Hive client is as easy as using your Cosmos username and a valid token (the same one you use for WebHDFS and/or Tidoop).

And what about a Hive remote client? Well, that is something your application has to implement. In any case, I have uploaded some implementation examples to the Cosmos repo. For instance:

https://github.com/telefonicaid/fiware-cosmos/tree/develop/resources/java/hiveserver2-client

