Replace HDFS from local disk to S3: getting error (org.apache.hadoop.service.AbstractService)
Problem description
We are trying to set up Cloudera 5.5 where HDFS will work on S3 only; for that we have already configured the necessary properties in core-site.xml:
<property>
  <name>fs.s3a.access.key</name>
  <value>################</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>###############</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>s3a://bucket_Name</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>s3a://bucket_Name</value>
</property>
After setting this up, we were able to browse the files of the S3 bucket with the command
hadoop fs -ls /
and it shows the files available on S3 only.
But when we start the YARN services, the JobHistory server fails to start with the error below, and we get the same error when launching Pig jobs:
PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
ERROR org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils
Unable to create default file context [s3a://kyvosps]
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
Searching on the Internet, we found that we also need to set the following properties in core-site.xml:
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  <description>The implementation class of the S3A Filesystem</description>
</property>
<property>
  <name>fs.AbstractFileSystem.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  <description>The FileSystem for S3A Filesystem</description>
</property>
After setting the above properties, we get the following error:
org.apache.hadoop.service.AbstractService
Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:131)
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:157)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)
at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getDefaultFileContext(JobHistoryUtils.java:247)
The jars needed for this are in place, but we are still getting the error. Any help would be great. Thanks in advance.
UPDATE
I tried removing the property fs.AbstractFileSystem.s3a.impl, but it gives me the same exception I was getting previously:
org.apache.hadoop.security.UserGroupInformation
PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
ERROR org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils
Unable to create default file context [s3a://bucket_name]
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)
Recommended answer
The problem is not with the location of the jars.
The problem is with this setting:
<property>
  <name>fs.AbstractFileSystem.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  <description>The FileSystem for S3A Filesystem</description>
</property>
This setting is not needed. Because of this setting, Hadoop searches for the following constructor in the S3AFileSystem class, and there is no such constructor:
S3AFileSystem(URI theUri, Configuration conf);
The following exception clearly shows that it is unable to find a constructor for S3AFileSystem with URI and Configuration parameters:
java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
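The NoSuchMethodException comes from the way FileContext instantiates the class configured for a scheme: it looks up a (URI, Configuration) constructor by reflection. A minimal sketch of that lookup is shown below; this is a hypothetical simplification, not the literal Hadoop source, and the class name AfsConstructorLookupSketch is invented for illustration:

import java.lang.reflect.Constructor;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

// Hypothetical simplification of the reflective lookup done when
// creating an AbstractFileSystem for a scheme: the configured class
// must declare a (URI, Configuration) constructor. S3AFileSystem only
// has a no-argument constructor, so getDeclaredConstructor throws
// NoSuchMethodException, which Hadoop wraps in a RuntimeException.
public class AfsConstructorLookupSketch {
    static Object instantiate(Class<?> implClass, URI uri, Configuration conf)
            throws Exception {
        Constructor<?> ctor =
                implClass.getDeclaredConstructor(URI.class, Configuration.class);
        ctor.setAccessible(true);
        return ctor.newInstance(uri, conf);
    }
}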
To resolve this problem, remove the fs.AbstractFileSystem.s3a.impl setting from core-site.xml. Just having the fs.s3a.impl setting in core-site.xml should solve your problem.
org.apache.hadoop.fs.s3a.S3AFileSystem just implements FileSystem.
Hence, you cannot set the value of fs.AbstractFileSystem.s3a.impl to org.apache.hadoop.fs.s3a.S3AFileSystem, because org.apache.hadoop.fs.s3a.S3AFileSystem does not implement AbstractFileSystem.
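If you want to verify this on your own classpath (assuming the hadoop-common and hadoop-aws jars are available), a small check like the one below shows that the two hierarchies are unrelated; the class name HierarchyCheck is just illustrative:

import org.apache.hadoop.fs.AbstractFileSystem;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.s3a.S3AFileSystem;

// S3AFileSystem is a FileSystem, but not an AbstractFileSystem, which is
// why fs.AbstractFileSystem.s3a.impl must not point at it.
public class HierarchyCheck {
    public static void main(String[] args) {
        System.out.println("FileSystem?         "
                + FileSystem.class.isAssignableFrom(S3AFileSystem.class));          // expected: true
        System.out.println("AbstractFileSystem? "
                + AbstractFileSystem.class.isAssignableFrom(S3AFileSystem.class));  // expected: false
    }
}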
I am using Hadoop 2.7.0, and in this version s3A is not exposed as an AbstractFileSystem.
There is a JIRA ticket, https://issues.apache.org/jira/browse/HADOOP-11262, to implement this, and the fix is available in Hadoop 2.8.0.
Assuming your jar exposes s3A as an AbstractFileSystem, you need to set the following for fs.AbstractFileSystem.s3a.impl:
<property>
  <name>fs.AbstractFileSystem.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3A</value>
</property>
This will solve your problem.
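For reference, the binding added by HADOOP-11262 (org.apache.hadoop.fs.s3a.S3A) is essentially a thin AbstractFileSystem adapter that delegates to S3AFileSystem. A rough sketch of such an adapter follows; it assumes the DelegateToFileSystem constructor used by other Hadoop bindings, the class name S3ASketch is illustrative, and the exact signature may differ between versions:

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DelegateToFileSystem;
import org.apache.hadoop.fs.s3a.S3AFileSystem;

// Rough sketch of an AbstractFileSystem adapter around S3AFileSystem.
// The DelegateToFileSystem constructor signature is assumed from other
// Hadoop bindings and may differ slightly between versions.
public class S3ASketch extends DelegateToFileSystem {
    public S3ASketch(URI theUri, Configuration conf)
            throws IOException, URISyntaxException {
        // scheme "s3a"; the last flag indicates whether an authority
        // (bucket name) is required in the URI.
        super(theUri, new S3AFileSystem(), conf, "s3a", false);
    }
}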