spark shuffle memory error: failed to allocate direct memory


Problem description

When performing a couple of joins on Spark data frames (4x), I get the following error:

org.apache.spark.shuffle.FetchFailedException: failed to allocate 16777216 byte(s) of direct memory (used: 4294967296, max: 4294967296)

Even setting:

--conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \

did not resolve the issue.
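
For context, such flags are passed on the spark-submit command line. Below is a minimal sketch of where the option sits in a full invocation; the master, class name, and jar are placeholders, not details from the question:

# placeholders: adjust master, class, and jar to your own job
spark-submit \
  --master yarn \
  --class com.example.JoinJob \
  --conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \
  join-job.jar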

Recommended answer

Seems like there are too many in-flight blocks. Try with smaller values of spark.reducer.maxBlocksInFlightPerAddress. For reference, take a look at this JIRA.

Quoted text:

For configurations with external shuffle enabled, we have observed that if a very large no. of blocks are being fetched from a remote host, it puts the NM under extra pressure and can crash it. This change introduces a configuration spark.reducer.maxBlocksInFlightPerAddress, to limit the no. of map outputs being fetched from a given remote address. The changes applied here are applicable for both the scenarios - when external shuffle is enabled as well as disabled.
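
As an illustration, a minimal sketch of applying that setting at submit time; the value 50 is a hypothetical starting point, not one given in the answer (the property's default is Int.MaxValue, i.e. effectively unlimited), and the jar name is again a placeholder:

# cap concurrent shuffle-block fetches per remote address
spark-submit \
  --conf "spark.reducer.maxBlocksInFlightPerAddress=50" \
  --conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \
  join-job.jar

Capping the number of blocks fetched concurrently from a single remote address limits how many fetched shuffle blocks are held in Netty direct buffers at once, which is the memory pool the "failed to allocate ... direct memory" error refers to.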
