Setting external jars to hadoop classpath
Problem description
I am trying to add external jars to the hadoop classpath, but no luck so far.

I have the following setup:
$ hadoop version
Hadoop 2.0.6-alpha
Subversion https://git-wip-us.apache.org/repos/asf/bigtop.git -r ca4c88898f95aaab3fd85b5e9c194ffd647c2109
Compiled by jenkins on 2013-10-31T07:55Z
From source with checksum 95e88b2a9589fa69d6d5c1dbd48d4e
This command was run using /usr/lib/hadoop/hadoop-common-2.0.6-alpha.jar
Classpath
$ echo $HADOOP_CLASSPATH
/home/tom/workspace/libs/opencsv-2.3.jar
I am able to see that the above HADOOP_CLASSPATH has been picked up by hadoop:
$ hadoop classpath
/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/home/tom/workspace/libs/opencsv-2.3.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*
Command
$ sudo hadoop jar FlightsByCarrier.jar FlightsByCarrier /user/root/1987.csv /user/root/result
I tried with the -libjars option as well:
$ sudo hadoop jar FlightsByCarrier.jar FlightsByCarrier /user/root/1987.csv /user/root/result -libjars /home/tom/workspace/libs/opencsv-2.3.jar
The stacktrace
14/11/04 16:43:23 INFO mapreduce.Job: Running job: job_1415115532989_0001
14/11/04 16:43:55 INFO mapreduce.Job: Job job_1415115532989_0001 running in uber mode : false
14/11/04 16:43:56 INFO mapreduce.Job: map 0% reduce 0%
14/11/04 16:45:27 INFO mapreduce.Job: map 50% reduce 0%
14/11/04 16:45:27 INFO mapreduce.Job: Task Id : attempt_1415115532989_0001_m_000001_0, Status : FAILED
Error: java.lang.ClassNotFoundException: au.com.bytecode.opencsv.CSVParser
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at FlightsByCarrierMapper.map(FlightsByCarrierMapper.java:19)
    at FlightsByCarrierMapper.map(FlightsByCarrierMapper.java:10)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:757)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:158)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:153)
Any help is highly appreciated.
Your external jar is missing on the nodes that run the map tasks; HADOOP_CLASSPATH only affects the client JVM that submits the job. You have to add the jar to the distributed cache to make it available on the cluster. Try:
DistributedCache.addFileToClassPath(new Path("pathToJar"), conf);
Not sure in which version DistributedCache was deprecated, but from Hadoop 2.2.0 onward you can use:
job.addFileToClassPath(new Path("pathToJar"));
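For context, here is a minimal driver sketch showing where that call fits. This is an illustration, not the original poster's code: the mapper wiring and the HDFS path of the jar are assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlightsByCarrier {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "FlightsByCarrier");
        job.setJarByClass(FlightsByCarrier.class);

        // The jar must first be copied to HDFS (or another shared filesystem),
        // e.g. with `hdfs dfs -put opencsv-2.3.jar /user/root/libs/`, so the
        // NodeManagers can localize it; a path on the client's local disk
        // will not work here. The path below is a hypothetical example.
        job.addFileToClassPath(new Path("/user/root/libs/opencsv-2.3.jar"));

        job.setMapperClass(FlightsByCarrierMapper.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Separately, this may explain why -libjars appeared to do nothing in the question: GenericOptionsParser only honors -libjars when the flag comes before the program's own arguments and the driver is run through ToolRunner (i.e. the main class implements Tool). A plain main method, as sketched above, never sees the flag.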