org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist

This article describes how to deal with org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist. It should serve as a useful reference for anyone running into the same problem.

Problem Description


When I try to import data from SQL into HDFS, I get the following error:

org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist

Here is the terminal activity from my session:

student@ubuntu:~$ sqoop import  --connect jdbc:mysql://localhost:3306/p \
> --username root \
> --password student \
> --table p \
> -m \
> 1;
Warning: /home/student/Applications/sqoop/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/student/Applications/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/student/Applications/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/student/Applications/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
15/10/23 04:23:49 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
15/10/23 04:23:49 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/10/23 04:23:52 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
15/10/23 04:23:52 INFO tool.CodeGenTool: Beginning code generation
15/10/23 04:24:00 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `p` AS t LIMIT 1
15/10/23 04:24:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `p` AS t LIMIT 1
15/10/23 04:24:01 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/student/Applications/hadoop
Note: /tmp/sqoop-student/compile/d0a3526dcf308f25f4333c8558068bb8/p.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/10/23 04:25:03 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-student/compile/d0a3526dcf308f25f4333c8558068bb8/p.jar
15/10/23 04:25:04 WARN manager.MySQLManager: It looks like you are importing from mysql.
15/10/23 04:25:04 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
15/10/23 04:25:04 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
15/10/23 04:25:04 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
15/10/23 04:25:05 INFO mapreduce.ImportJobBase: Beginning import of p
15/10/23 04:25:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/10/23 04:25:31 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/10/23 04:26:02 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/10/23 04:26:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/10/23 04:26:40 INFO db.DBInputFormat: Using read commited transaction isolation
15/10/23 04:26:41 INFO mapreduce.JobSubmitter: number of splits:1
15/10/23 04:26:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1445598425022_0001
15/10/23 04:26:53 INFO impl.YarnClientImpl: Submitted application application_1445598425022_0001
15/10/23 04:26:54 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1445598425022_0001/
15/10/23 04:26:54 INFO mapreduce.Job: Running job: job_1445598425022_0001
15/10/23 04:28:24 INFO mapreduce.Job: Job job_1445598425022_0001 running in uber mode : false
15/10/23 04:28:25 INFO mapreduce.Job:  map 0% reduce 0%
15/10/23 04:28:41 INFO mapreduce.Job: Task Id : attempt_1445598425022_0001_m_000000_0, Status : FAILED
Container launch failed for container_1445598425022_0001_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

15/10/23 04:28:43 INFO mapreduce.Job: Task Id : attempt_1445598425022_0001_m_000000_1, Status : FAILED
Container launch failed for container_1445598425022_0001_01_000003 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

15/10/23 04:28:43 INFO mapreduce.Job: Task Id : attempt_1445598425022_0001_m_000000_2, Status : FAILED
Container launch failed for container_1445598425022_0001_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

15/10/23 04:28:47 INFO mapreduce.Job:  map 100% reduce 0%
15/10/23 04:28:56 INFO mapreduce.Job: Job job_1445598425022_0001 failed with state FAILED due to: Task failed task_1445598425022_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

15/10/23 04:29:01 INFO mapreduce.Job: Counters: 3
    Job Counters 
        Other local map tasks=4
        Total time spent by all maps in occupied slots (ms)=0
        Total time spent by all reduces in occupied slots (ms)=0
15/10/23 04:29:02 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=FAILED. Redirecting to job history server
15/10/23 04:29:13 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:14 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:15 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:16 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:17 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:18 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:19 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:20 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:21 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:22 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/10/23 04:29:23 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=FAILED. Redirecting to job history server

How can I overcome this?

Solution

I have run into the same situation myself.

To overcome this, check your yarn-site.xml and make sure it matches the following snippet:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
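
If the aux-services entry alone does not resolve it, Hadoop 2.x configurations commonly also declare the shuffle handler class explicitly. The property below is a sketch of that optional addition, assuming the stock org.apache.hadoop.mapred.ShuffleHandler that ships with Hadoop; it goes inside the same <configuration> element:

  <!-- Optional: bind the mapreduce_shuffle aux service to Hadoop's ShuffleHandler -->
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>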

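For the change to take effect, the NodeManager has to be restarted after yarn-site.xml is edited, and the file must be updated on every node that runs a NodeManager. A minimal sketch for a Hadoop 2.x single-node setup, assuming HADOOP_HOME points at the installation root (for example /home/student/Applications/hadoop as in the log above):

# Restart the NodeManager so it re-reads yarn-site.xml and registers mapreduce_shuffle
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager

After the restart, re-running the sqoop import should no longer hit the InvalidAuxServiceException.
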
This concludes the article on org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist. I hope the answer recommended above helps, and thank you for supporting IT屋!
