"No common protection layer between client and server" while trying to communicate with kerberized Hadoop cluster


Problem Description

I'm trying to communicate programmatically with a Hadoop cluster which is kerberized (CDH 5.3/HDFS 2.5.0).

I have a valid Kerberos token on the client side. But I'm getting an error as below, "No common protection layer between client and server".

What does this error mean and are there any ways to fix or work around it?

Is this something related to HDFS-5688? The ticket seems to imply that the property "hadoop.rpc.protection" must be set, presumably to "authentication" (also per e.g. this).

Would this need to be set on all servers in the cluster and then the cluster bounced? I don't have easy access to the cluster, so I need to understand whether "hadoop.rpc.protection" is the actual cause. It seems that "authentication" should be the value used by default, at least according to the core-default.xml documentation.
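Before changing anything on the cluster, one way to see what the client will actually use is to load the Hadoop Configuration from the classpath and read the key back. A minimal sketch; the class name is made up for illustration, and the fallback mirrors the documented core-default.xml default:

import org.apache.hadoop.conf.Configuration;

public class CheckRpcProtection {
    public static void main(String[] args) {
        // Loads core-default.xml plus any core-site.xml found on the classpath.
        Configuration conf = new Configuration();
        // "authentication" is the documented default if nothing overrides it.
        System.out.println("hadoop.rpc.protection = "
            + conf.get("hadoop.rpc.protection", "authentication"));
    }
}

Note this only shows the client-side value; it says nothing about what protection level the server insists on.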

java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for principal1/server1.acme.net@xxx.acme.net to server2.acme.net/10.XX.XXX.XXX:8020; Host Details : local host is: "some-host.acme.net/168.XX.XXX.XX"; destination host is: "server2.acme.net":8020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
    at org.apache.hadoop.ipc.Client.call(Client.java:1415)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
    ... 11 more
Caused by: java.io.IOException: Couldn't setup connection for principal1/server1.acme.net@xxx.acme.net to server2.acme.net/10.XX.XXX.XXX:8020;
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:671)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:642)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:725)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
    at org.apache.hadoop.ipc.Client.call(Client.java:1382)
    ... 31 more
Caused by: javax.security.sasl.SaslException: No common protection layer between client and server
    at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:251)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:186)
    at org.apache.hadoop.security.SaslRpcClient.saslEvaluateToken(SaslRpcClient.java:483)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:427)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:552)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:367)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:717)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:713)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    ... 34 more

Answer

To fix the "No common protection layer between client and server" error, which comes from SASL, I needed to set "hadoop.rpc.protection" to the same value as the one set on the server side in the cluster. It happened to be "privacy" in this case. (The value maps to a SASL quality-of-protection level: "authentication", "integrity", and "privacy" correspond to "auth", "auth-int", and "auth-conf" respectively, and the handshake fails unless client and server share at least one level.)

Additionally, the cluster is configured for HA so I had to pick the right hostname to use in the HDFS URI ("fs.defaultFS") and in the "dfs.namenode.kerberos.principal" property:

import org.apache.hadoop.conf.Configuration;

Configuration config = new Configuration();
// With HA, point directly at the right NameNode host.
config.set("fs.defaultFS", "hdfs://host1.acme.com:8020");
config.set("hadoop.security.authentication", "kerberos");
// Must match the server-side setting; "privacy" on this cluster.
config.set("hadoop.rpc.protection", "privacy");
// Need this or we get the error "Server has invalid Kerberos principal":
config.set("dfs.namenode.kerberos.principal",
    "hdfs/host1.acme.com@ACME.DYN.ROOT.NET");
