Replace HDFS from local disk to S3 getting error (org.apache.hadoop.service.AbstractService)

We are trying to set up Cloudera 5.5 where HDFS will run on S3 only. For that, we have already configured the necessary properties in core-site.xml:

<property>
    <name>fs.s3a.access.key</name>
    <value>################</value>
</property>
<property>
    <name>fs.s3a.secret.key</name>
    <value>###############</value>
</property>
<property>
    <name>fs.default.name</name>
    <value>s3a://bucket_Name</value>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>s3a://bucket_Name</value>
</property>

After setting this up, we were able to browse the files of the S3 bucket with the command:

hadoop fs -ls /

It shows only the files available on S3.
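
For completeness, the same listing can also be requested with the filesystem scheme spelled out (using the bucket placeholder from the question), e.g.:

hadoop fs -ls s3a://bucket_Name/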

But when we start the YARN services, the JobHistory server fails to start with the error below, and we get the same error when launching Pig jobs:

PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
ERROR   org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils   
Unable to create default file context [s3a://kyvosps]
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)

Searching on the Internet, we found that we also need to set the following properties in core-site.xml:

<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  <description>The implementation class of the S3A Filesystem</description>
</property>
<property>
    <name>fs.AbstractFileSystem.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
    <description>The FileSystem for  S3A Filesystem</description>
</property>

After setting the above properties, we get the following error:

org.apache.hadoop.service.AbstractService   
Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
    at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:131)
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:157)
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)
    at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getDefaultFileContext(JobHistoryUtils.java:247)

The jars needed for this are in place, but we are still getting the error. Any help would be great. Thanks in advance.

Update

I tried removing the property fs.AbstractFileSystem.s3a.impl, but it gives me the same first exception I was getting previously, which is:

org.apache.hadoop.security.UserGroupInformation 
PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
ERROR   org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils   
Unable to create default file context [s3a://bucket_name]
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)

Solution

The problem is not with the location of the jars.

The problem is with the setting:

<property>
    <name>fs.AbstractFileSystem.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
    <description>The FileSystem for  S3A Filesystem</description>
</property>

This setting is not needed. Because of this setting, Hadoop searches for the following constructor in the S3AFileSystem class, and no such constructor exists:

S3AFileSystem(URI theUri, Configuration conf);

The following exception clearly shows that it is unable to find a constructor of S3AFileSystem that takes URI and Configuration parameters.

java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
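
To illustrate why this happens (this sketch is not from the original post), the following Java snippet performs roughly the same reflective lookup that AbstractFileSystem.newInstance() does, assuming hadoop-common and hadoop-aws 2.7.x are on the classpath. It reports the same missing constructor, because S3AFileSystem only provides a no-argument constructor and is configured via initialize(URI, Configuration):

import java.lang.reflect.Constructor;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;

// Reproduces the reflective (URI, Configuration) constructor lookup that
// AbstractFileSystem performs on the class configured for a scheme.
public class S3aConstructorCheck {
    public static void main(String[] args) throws Exception {
        Class<?> clazz = Class.forName("org.apache.hadoop.fs.s3a.S3AFileSystem");
        try {
            // AbstractFileSystem implementations must expose a
            // (URI, Configuration) constructor; this is the lookup that fails.
            Constructor<?> ctor =
                clazz.getDeclaredConstructor(URI.class, Configuration.class);
            System.out.println("Found constructor: " + ctor);
        } catch (NoSuchMethodException e) {
            // S3AFileSystem is a FileSystem: it only has a no-arg constructor
            // and is set up through initialize(URI, Configuration).
            System.out.println("No (URI, Configuration) constructor: " + e);
        }
    }
}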

To resolve this problem, remove the fs.AbstractFileSystem.s3a.impl setting from core-site.xml. Having just the fs.s3a.impl setting in core-site.xml should solve your problem.
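
For reference, here is a minimal sketch of what the core-site.xml from the question would then contain (keeping the bucket and key placeholders from the question; fs.default.name is only the deprecated alias of fs.defaultFS, so just the latter is shown):

<property>
    <name>fs.defaultFS</name>
    <value>s3a://bucket_Name</value>
</property>
<property>
    <name>fs.s3a.access.key</name>
    <value>################</value>
</property>
<property>
    <name>fs.s3a.secret.key</name>
    <value>###############</value>
</property>
<property>
    <name>fs.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>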

EDIT: org.apache.hadoop.fs.s3a.S3AFileSystem only extends FileSystem.

Hence, you cannot set the value of fs.AbstractFileSystem.s3a.impl to org.apache.hadoop.fs.s3a.S3AFileSystem, since org.apache.hadoop.fs.s3a.S3AFileSystem does not extend AbstractFileSystem.

I am using Hadoop 2.7.0, and in this version S3A is not exposed as an AbstractFileSystem.

There is a JIRA ticket, https://issues.apache.org/jira/browse/HADOOP-11262, to implement this, and the fix is available in Hadoop 2.8.0.

Assuming your jar exposes S3A as an AbstractFileSystem, you need to set the following for fs.AbstractFileSystem.s3a.impl:

<property>
    <name>fs.AbstractFileSystem.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3A</value>
</property>

That will solve your problem.
