Apache Spark reads for S3: can't pickle thread.lock objects


Question

So I want my Spark app to read some text from Amazon's S3. I wrote the following simple script:

import boto3
s3_client = boto3.client('s3')
text_keys = ["key1.txt", "key2.txt"]
data = sc.parallelize(text_keys).flatMap(lambda key: s3_client.get_object(Bucket="my_bucket", Key=key)['Body'].read().decode('utf-8'))

When I do data.collect I get the following error:

TypeError: can't pickle thread.lock objects

and I don't seem to find any help online. Has anyone perhaps managed to solve the above?

Answer

Your s3_client isn't serialisable.

Instead of flatMap use mapPartitions, and initialise s3_client inside the lambda body to avoid overhead. That will:

  1. initialise s3_client on each worker
  2. reduce initialisation overhead
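A minimal sketch of that fix, reusing the bucket and key names from the question. The boto3 import and client construction happen inside the per-partition function, so nothing unpicklable crosses from the driver to the workers; a live `SparkContext` is assumed to be available as `sc`, so the driver-side lines are shown commented out:

```python
def make_s3_client():
    # boto3 is imported and the client built here, on the worker, when the
    # partition is processed -- so the client is never pickled by the driver.
    import boto3
    return boto3.client('s3')

def fetch_partition(keys, client_factory=make_s3_client):
    # One client per partition, reused for every key in that partition.
    client = client_factory()
    for key in keys:
        body = client.get_object(Bucket="my_bucket", Key=key)['Body']
        yield body.read().decode('utf-8')

# On the driver (assumes a live SparkContext `sc`):
# text_keys = ["key1.txt", "key2.txt"]
# data = sc.parallelize(text_keys).mapPartitions(fetch_partition)
```

Note that `mapPartitions` yields one element per key here, whereas the original `flatMap` over a decoded string would have produced a stream of individual characters; the per-partition version is almost certainly closer to what was intended.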

