Spark Redshift with Python
Question
I'm trying to connect Spark with Amazon Redshift, but I'm getting this error:
My code is as follows:
from pyspark.sql import SQLContext
from pyspark import SparkContext

sc = SparkContext(appName="Connect Spark with Redshift")
sql_context = SQLContext(sc)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", <ACCESSID>)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", <ACCESSKEY>)

df = sql_context.read \
    .option("url", "jdbc:redshift://example.coyf2i236wts.eu-central-1.redshift.amazonaws.com:5439/agcdb?user=user&password=pwd") \
    .option("dbtable", "table_name") \
    .option("tempdir", "bucket") \
    .load()
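Judging from the answer below, two things stand out in this read chain: it never names the spark-redshift data source, and the tempdir is not a full S3 URI. A minimal corrected sketch (the bucket path is a placeholder, not from the original):

df = sql_context.read \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:redshift://example.coyf2i236wts.eu-central-1.redshift.amazonaws.com:5439/agcdb?user=user&password=pwd") \
    .option("dbtable", "table_name") \
    .option("tempdir", "s3n://your-bucket/temp/") \
    .load()
    # "s3n://your-bucket/temp/" is a placeholder; it should be a full S3 URI
    # reachable with the fs.s3n credentials set above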
Recommended answer
Here is a step-by-step process for connecting to Redshift.
- Download the Redshift JDBC driver. Try the following command:
wget "https://s3.amazonaws.com/redshift-downloads/drivers/RedshiftJDBC4-1.2.1.1001.jar"
- Save the code below in a Python file (the .py you want to run) and replace the credentials accordingly.
from pyspark.sql import SparkSession, HiveContext

# initialize the spark session
spark = SparkSession.builder.master("yarn").appName("Connect to redshift").enableHiveSupport().getOrCreate()
sc = spark.sparkContext
sqlContext = HiveContext(sc)

# S3 credentials used by spark-redshift to stage data in tempdir
sc._jsc.hadoopConfiguration().set("fs.s3.awsAccessKeyId", "<ACCESSKEYID>")
sc._jsc.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", "<ACCESSKEYSECRET>")

taxonomyDf = sqlContext.read \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:postgresql://url.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx") \
    .option("dbtable", "table_name") \
    .option("tempdir", "s3://mybucket/") \
    .load()
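For completeness, the same connector can also write a DataFrame back to Redshift through the same S3 staging area. A minimal sketch, assuming a hypothetical target table named output_table and the same URL and tempdir as above:

# write-back sketch; "output_table" is a hypothetical target table
taxonomyDf.write \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:postgresql://url.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx") \
    .option("dbtable", "output_table") \
    .option("tempdir", "s3://mybucket/") \
    .mode("error") \
    .save()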
- Run spark-submit as shown below:
spark-submit --packages com.databricks:spark-redshift_2.10:0.5.0 --jars RedshiftJDBC4-1.2.1.1001.jar test.py
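One caveat: the coordinate com.databricks:spark-redshift_2.10:0.5.0 is built for Scala 2.10. If your Spark build uses Scala 2.11, you would need the matching artifact instead; the exact version below is an assumption, so check Maven Central for what is available:

spark-submit --packages com.databricks:spark-redshift_2.11:2.0.1 --jars RedshiftJDBC4-1.2.1.1001.jar test.py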