Spark + Cassandra connector fails with LocalNodeFirstLoadBalancingPolicy.close()


Question

I've been trying to connect Cassandra with Spark in Scala, but I've been facing a couple of issues. Here are the versions used:

    Spark 1.5.0
    Cassandra 2.1.9
    Scala 2.11.1

Here are the steps I followed:

- Downloaded Cassandra with the default configuration and started it via bin/cassandra -f. Cassandra starts fine and listens on 127.0.0.1.
- Added some mock data into the try table in the spark keyspace (a sketch of the assumed schema follows the build file below).
- Downloaded Spark and started the master via sbin/start-master.sh. I can see on localhost:8888 that the master is running fine.
- Wrote the following build.sbt:

val sparkVersion = "1.5.0"

libraryDependencies ++= Seq(
  jdbc,
  anorm,
  cache,
  ws,
  "org.scalatest" % "scalatest_2.11" % "3.0.0-M8",
  "com.typesafe" % "scalalogging-slf4j_2.10" % "1.1.0",
  "com.typesafe.scala-logging" %% "scala-logging" % "3.1.0",
  "org.apache.spark" % "spark-core_2.11" % sparkVersion,
  "org.apache.spark" % "spark-streaming_2.11" % sparkVersion,
  "org.apache.spark" % "spark-streaming-twitter_2.11" % sparkVersion,
  "org.apache.spark" %% "spark-streaming-kafka" % sparkVersion,
  "org.apache.spark" % "spark-sql_2.11" % sparkVersion,
  "com.datastax.cassandra" % "cassandra-driver-core" % "3.0.0-alpha2",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.5.0-M1",
  "org.scalatest" %% "scalatest" % "2.2.1" % "test",
  "org.mockito" % "mockito-all" % "1.9.5" % "test"
)
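
For reference, here is a minimal sketch of the schema and mock data the steps above refer to. The post does not show the actual columns, so the id/value columns below are assumptions; the snippet uses the DataStax Java driver from Scala:

// SetupMockData.scala - a sketch; "id" and "value" are placeholder columns.
import com.datastax.driver.core.Cluster

object SetupMockData extends App {
  // Connect to the local Cassandra node started via bin/cassandra -f.
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect()

  // Create the keyspace and table the Spark job reads from.
  session.execute(
    "CREATE KEYSPACE IF NOT EXISTS spark " +
      "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}")
  session.execute(
    "CREATE TABLE IF NOT EXISTS spark.try (id int PRIMARY KEY, value text)")

  // Insert one mock row.
  session.execute("INSERT INTO spark.try (id, value) VALUES (1, 'hello')")

  cluster.close()
}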


- Wrote the following Main:

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

val conf = new SparkConf(true)
  .setAppName("Test")
  .setMaster("spark://127.0.0.1:7077")
  .set("spark.cassandra.connection.host", "127.0.0.1")
  .set("spark.cassandra.connection.port", "9042")
  .set("spark.driver.allowMultipleContexts", "true")

// Connect to the Spark cluster:
val sc = new SparkContext(conf)

// Read the whole spark.try table, collect it to the driver and print each row.
val rdd = sc.cassandraTable("spark", "try")
val fileCollect = rdd.collect()
fileCollect.foreach(println)

sc.stop()


- Then I ran the program.

Here is the stack trace I get with the master set to "spark://127.0.0.1:7077":

      [error] o.a.s.u.SparkUncaughtExceptionHandler - Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
      java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@b6391a7 rejected from java.util.concurrent.ThreadPoolExecutor@7f56044a[Running, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 3]
          at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047) ~[na:1.8.0_60]
          at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) [na:1.8.0_60]
          at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) [na:1.8.0_60]
          at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) ~[na:1.8.0_60]
          at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:96) ~[spark-core_2.11-1.5.0.jar:1.5.0]
      

If I change the master to local[*], I get this stack trace instead:

      play.api.Application$$anon$1: Execution exception[[RuntimeException: java.lang.AbstractMethodError: com.datastax.spark.connector.cql.LocalNodeFirstLoadBalancingPolicy.close()V]]
          at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.8.jar:2.3.8]
          at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.8.jar:2.3.8]
          at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8]
          at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8]
          at scala.Option.map(Option.scala:146) [scala-library-2.11.7.jar:na]
      Caused by: java.lang.RuntimeException: java.lang.AbstractMethodError: com.datastax.spark.connector.cql.LocalNodeFirstLoadBalancingPolicy.close()V
          at play.api.mvc.ActionBuilder$$anon$1.apply(Action.scala:523) ~[play_2.11-2.3.8.jar:2.3.8]
          at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:130) ~[play_2.11-2.3.8.jar:2.3.8]
          at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:130) ~[play_2.11-2.3.8.jar:2.3.8]
          at play.utils.Threads$.withContextClassLoader(Threads.scala:21) ~[play_2.11-2.3.8.jar:2.3.8]
          at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:129) ~[play_2.11-2.3.8.jar:2.3.8]
      Caused by: java.lang.AbstractMethodError: com.datastax.spark.connector.cql.LocalNodeFirstLoadBalancingPolicy.close()V
          at com.datastax.driver.core.Cluster$Manager.close(Cluster.java:1423) ~[cassandra-driver-core-3.0.0-alpha2.jar:na]
          at com.datastax.driver.core.Cluster$Manager.access$200(Cluster.java:1171) ~[cassandra-driver-core-3.0.0-alpha2.jar:na]
          at com.datastax.driver.core.Cluster.closeAsync(Cluster.java:462) ~[cassandra-driver-core-3.0.0-alpha2.jar:na]
          at com.datastax.driver.core.Cluster.close(Cluster.java:473) ~[cassandra-driver-core-3.0.0-alpha2.jar:na]
          at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:163) ~[spark-cassandra-connector_2.11-1.5.0-M1.jar:1.5.0-M1]
      

Any idea where the problem comes from?

Answer

The Spark Cassandra connector supports Java driver version 2.1. Support for driver 2.2 is tracked here: https://datastax-oss.atlassian.net/browse/SPARKC-229. It will be included in spark-cassandra-connector 1.5.0-M2, or you can build it yourself. I think it will also work with the 3.0 Java driver. On the other hand, it is recommended to use the same Java driver version as your Cassandra version, so use Java driver 2.1 with Cassandra 2.1.
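
In practice that means changing the driver dependency in the build.sbt above. A minimal sketch, assuming Cassandra 2.1 and connector 1.5.0-M1 (the exact 2.1.x patch version is illustrative):

// build.sbt (excerpt) - a sketch, not from the original post.
// Either pin cassandra-driver-core to the 2.1 line that matches Cassandra 2.1
// instead of 3.0.0-alpha2 ...
libraryDependencies += "com.datastax.cassandra" % "cassandra-driver-core" % "2.1.9"

// ... or drop the explicit cassandra-driver-core dependency entirely and let
// spark-cassandra-connector pull in the driver version it was compiled against.
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.5.0-M1"

Either way the goal is the same: the LoadBalancingPolicy interface the connector was compiled against must match the one the driver loads at runtime. Driver 3.0 added close() to that interface, which the connector's LocalNodeFirstLoadBalancingPolicy does not implement, hence the AbstractMethodError when Cluster.close() is called.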
