Connect to kerberised Hive using JDBC from a remote Windows system
Problem description
I have set up a Hive environment with Kerberos security enabled on a Linux server (Red Hat), and I need to connect to Hive from a remote Windows machine using JDBC.
So I have hiveserver2 running on the Linux machine, and I have done "kinit".
Now I try to connect from a Java program on the Windows side with a test program like this:
Class.forName("org.apache.hive.jdbc.HiveDriver");
String url = "jdbc:hive2://<host>:10000/default;principal=hive/_HOST@<YOUR-REALM.COM>";
Connection con = DriverManager.getConnection(url);
And I got the following error:
Exception due to: Could not open client transport with JDBC Uri:
jdbc:hive2://<host>:10000/;principal=hive/_HOST@YOUR-REALM.COM>:
GSS initiate failed
What am I doing wrong here? I checked many forums, but couldn't get a proper solution. Any answer will be appreciated. Thanks.
Solution
If you were running your code in Linux, I would simply point to that post -- i.e. you must use System properties to define the Kerberos and JAAS configuration, from conf files with specific formats. But on Windows there are additional problems. On top of that, the Apache Hive driver has compatibility issues -- whenever there are changes in the wire protocol, newer clients cannot connect to older servers. So I strongly advise you to use the Cloudera JDBC driver for Hive for your Windows clients. The Cloudera site just asks for your e-mail.
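As a minimal sketch of the System-properties approach mentioned above (the file paths below are hypothetical placeholders; your krb5.conf and JAAS conf locations and contents will differ), the properties are set before the first JDBC call:

```java
// Sketch: point the JVM at the Kerberos and JAAS configuration files
// via System properties before opening the connection.
// The paths are example placeholders, not real locations.
public class KerberosJdbcSetup {
    public static void main(String[] args) {
        // krb5.conf defines the realm and KDC; the JAAS file defines the login module
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/etc/hive/jaas.conf");
        // verbose Kerberos tracing helps with the trial-and-error debugging
        System.setProperty("sun.security.krb5.debug", "true");
        System.out.println(System.getProperty("java.security.krb5.conf"));
    }
}
```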
And you have to switch the debug trace flags to understand subtle configuration issues (i.e. different flavors/versions of JVMs may have different syntax requirements, which are not documented; it's a trial-and-error process).
On Windows you also need the System properties hadoop.home.dir and java.library.path pointing to the Hadoop home dir and its bin sub-dir respectively.
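A minimal sketch of setting those two properties (the install path is a hypothetical example). Note that java.library.path is normally read once at JVM startup, so in practice it is safer to pass it on the command line, e.g. java -Djava.library.path=C:\hadoop\bin ... MyClient:

```java
// Sketch: Windows-side Hadoop properties for the Apache Hive driver.
// "C:\\hadoop" is an assumed install dir containing bin\winutils.exe etc.
public class HadoopWinProps {
    public static void main(String[] args) {
        String hadoopHome = "C:\\hadoop";  // hypothetical location
        // hadoop.home.dir can be set from code before the first Hadoop call
        System.setProperty("hadoop.home.dir", hadoopHome);
        // java.library.path is read at JVM startup; shown here only to
        // document the expected value (prefer the -D flag at launch)
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```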
After that you have an 80+ page PDF manual to read, the JARs to add to your CLASSPATH, and your JDBC URL to adapt according to the manual.
Side note: the Cloudera driver is a proper JDBC-4.x compliant driver, so there is no need for that legacy Class.forName().
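For illustration, a Kerberos connection URL for the Cloudera driver could be built like this. The property names (AuthMech, KrbRealm, KrbHostFQDN, KrbServiceName) are taken from the Cloudera driver documentation but should be treated as assumptions here; verify them against the PDF manual that ships with your driver version:

```java
// Sketch: Cloudera-driver-style Kerberos URL. Host and realm values
// are placeholders; actually connecting requires the driver JAR and
// a live HiveServer2, so the getConnection call is left commented out.
public class ClouderaHiveUrl {
    static String buildUrl(String host, String realm) {
        return "jdbc:hive2://" + host + ":10000;"
                + "AuthMech=1;"              // 1 = Kerberos, per the manual
                + "KrbRealm=" + realm + ";"
                + "KrbHostFQDN=" + host + ";"
                + "KrbServiceName=hive";
    }

    public static void main(String[] args) {
        String url = buildUrl("myhost.example.com", "YOUR-REALM.COM");
        System.out.println(url);
        // With a JDBC-4.x driver on the classpath, DriverManager finds
        // the driver by itself -- no Class.forName() needed:
        // Connection con = java.sql.DriverManager.getConnection(url);
    }
}
```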