pyspark mysql jdbc load: An error occurred while calling o23.load: No suitable driver


Question



I use the Docker image sequenceiq/spark on my Mac to study these spark examples. During the study I upgraded the Spark inside that image to 1.6.1 according to this answer, and an error occurred when I started the Simple Data Operations example. Here is what happened:

When I run df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load(), it raises an error; the full stack trace from the pyspark console follows:

Python 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
16/04/12 22:45:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/

Using Python version 2.6.6 (r266:84292, Jul 23 2015 15:22:56)
SparkContext available as sc, HiveContext available as sqlContext.
>>> url = "jdbc:mysql://localhost:3306/test?user=root;password=myPassWord"
>>> df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
16/04/12 22:46:05 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/04/12 22:46:06 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/04/12 22:46:11 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/04/12 22:46:11 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
16/04/12 22:46:16 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/04/12 22:46:17 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark/python/pyspark/sql/readwriter.py", line 139, in load
    return self._df(self._jreader.load())
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/usr/local/spark/python/pyspark/sql/utils.py", line 45, in deco
    return f(*a, **kw)
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o23.load.
: java.sql.SQLException: No suitable driver
    at java.sql.DriverManager.getDriver(DriverManager.java:278)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:49)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
    at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:209)
    at java.lang.Thread.run(Thread.java:744)

>>>

Here is what I have tried so far:

  1. Download mysql-connector-java-5.0.8-bin.jar and put it into /usr/local/spark/lib/. It still gives the same error.

  2. Create t.py like this:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="PythonSQL")
    sqlContext = SQLContext(sc)
    df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()

    df.printSchema()
    countsByAge = df.groupBy("age").count()
    countsByAge.show()
    countsByAge.write.format("json").save("file:///usr/local/mysql/mysql-connector-java-5.0.8/db.json")

Then I tried spark-submit --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py. The result is still the same.

  3. Then I tried pyspark --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py, both with and without the trailing t.py; the result is still the same.

During all of this, MySQL was running. Here is my OS info:

# rpm --query centos-release  
centos-release-6-5.el6.centos.11.2.x86_64

And the Hadoop version is 2.6.

Now I don't know where to go next, so I hope someone can offer some advice. Thanks!

Solution

I ran into "java.sql.SQLException: No suitable driver" when I tried to have my script write to MySQL.

Here's what I did to fix that.

In script.py

df.write.jdbc(url="jdbc:mysql://localhost:3333/my_database"
                  "?user=my_user&password=my_password",
              table="my_table",
              mode="append",
              properties={"driver": 'com.mysql.jdbc.Driver'})
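One detail worth noting alongside this snippet: MySQL Connector/J expects `&` between query parameters in the JDBC URL, while the URL in the question above used `;`, which may not be parsed as intended even once the driver loads. A small hypothetical helper (plain Python 3; the function name and arguments are illustrative, not part of Spark or Connector/J) for assembling such a URL with escaped credentials:

```python
from urllib.parse import quote_plus

def mysql_jdbc_url(host, port, database, user, password):
    # MySQL Connector/J separates query parameters with '&', not ';'.
    # quote_plus escapes characters like '@' or '&' inside credentials.
    return "jdbc:mysql://{}:{}/{}?user={}&password={}".format(
        host, port, database, quote_plus(user), quote_plus(password)
    )

url = mysql_jdbc_url("localhost", 3333, "my_database", "my_user", "my_password")
print(url)
```

The resulting string can be passed straight to `df.write.jdbc(url=..., ...)` as in the snippet above.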

Then I ran spark-submit this way:

SPARK_HOME=/usr/local/Cellar/apache-spark/1.6.1/libexec spark-submit --packages mysql:mysql-connector-java:5.1.39 ./script.py

Note that SPARK_HOME is specific to where Spark is installed. For your environment, this https://github.com/sequenceiq/docker-spark/blob/master/README.md might help.

In case all the above is confusing, try this:
In t.py replace

sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()

with

sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").option("driver", 'com.mysql.jdbc.Driver').load()

And run that with

spark-submit --packages mysql:mysql-connector-java:5.1.39 --master local[4] t.py
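Why the explicit "driver" option helps: the stack trace shows the failure inside java.sql.DriverManager.getDriver, i.e. no registered driver matched the URL; when the option is set, Spark loads that class directly instead of relying on auto-discovery. Each chained .option() call simply accumulates a key/value pair that is forwarded to the JDBC source. A toy stand-in (illustration only, not Spark's real DataFrameReader) makes that builder pattern visible:

```python
class MockJDBCReader:
    """Toy stand-in for Spark's DataFrameReader: .option() calls
    accumulate into a dict, mirroring how the real reader forwards
    options such as 'driver' to the JDBC data source."""
    def __init__(self):
        self.source = None
        self.options_map = {}

    def format(self, source):
        self.source = source
        return self  # return self so calls can be chained

    def option(self, key, value):
        self.options_map[key] = value
        return self

reader = (MockJDBCReader()
          .format("jdbc")
          .option("url", "jdbc:mysql://localhost:3306/test")
          .option("dbtable", "people")
          .option("driver", "com.mysql.jdbc.Driver"))
print(reader.options_map)
```

With the real reader, the "driver" entry ends up in the JDBC connection properties, so the class is loaded explicitly on both driver and executors (which is also why the --packages flag above must ship the jar to the cluster).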
