Read Files from S3 bucket to Spark Dataframe using Scala in Datastax Spark Submit giving AWS Error Message: Bad Request


Problem description

I'm trying to read CSV files from an S3 bucket located in the Mumbai region (ap-south-1). I'm reading the files using DataStax's dse spark-submit.

I tried changing the hadoop-aws version to various other versions. Currently, the hadoop-aws version is 2.7.3.

spark.sparkContext.hadoopConfiguration.set("com.amazonaws.services.s3.enableV4", "true")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", "s3.ap-south-1.amazonaws.com")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", accessKeyId)
spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", secretAccessKey)
spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")

val df = spark.read.csv("s3a://bucket_path/csv_name.csv")

Upon executing, the following is the error I'm getting:


Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 8C7D34A38E359FCE, AWS Error Code: null, AWS Error Message: Bad Request at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798) at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528) at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031) at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295) at org.apache.spark.sql.execution.datasources.DataSource$.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:616) at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:350) at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:350) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.immutable.List.foreach(List.scala:392) at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241) at scala.collection.immutable.List.flatMap(List.scala:355) at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:349) at 
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178) at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:533) at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:412)

Answer

Your Signature V4 option is not being applied. Setting com.amazonaws.services.s3.enableV4 on the Hadoop configuration has no effect; it must be set as a JVM system property.

Add the Java option when you run spark-submit or spark-shell:

spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true
spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true
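These properties can be passed directly on the command line with --conf. A sketch of the dse spark-submit invocation, where the application jar and main class names are placeholders for your own:

```shell
# Pass the V4-signing system property to both the driver and executor JVMs.
# "com.example.ReadFromS3" and "my-app.jar" are placeholder names.
dse spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true" \
  --conf "spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true" \
  --class com.example.ReadFromS3 \
  my-app.jar
```

Both driver and executor options are needed: the driver initializes the S3A filesystem when resolving the path, and executors open the objects when the read tasks run.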

Or, set the system property in code, for example:

System.setProperty("com.amazonaws.services.s3.enableV4", "true");
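Putting this together, a minimal sketch of the driver program (object and app names are illustrative, not from the original question). The key point is that the property must be set before the S3A filesystem is first initialized, i.e. before any read touches an s3a:// path:

```scala
import org.apache.spark.sql.SparkSession

object ReadFromS3 {
  def main(args: Array[String]): Unit = {
    // Enable AWS Signature Version 4 in the driver JVM; must happen
    // before the S3A client is created. Executors still need it via
    // spark.executor.extraJavaOptions.
    System.setProperty("com.amazonaws.services.s3.enableV4", "true")

    val spark = SparkSession.builder().appName("s3-read").getOrCreate()
    val hc = spark.sparkContext.hadoopConfiguration
    // Region-specific endpoint is required for V4-only regions like ap-south-1.
    hc.set("fs.s3a.endpoint", "s3.ap-south-1.amazonaws.com")
    hc.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
    hc.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))

    val df = spark.read.csv("s3a://bucket_path/csv_name.csv")
    df.show()
  }
}
```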

