Spark-Cassandra Connector: Failed to open native connection to Cassandra

Question

I am new to Spark and Cassandra. When I try to submit a Spark job, I get an error while connecting to Cassandra.

Details:

Versions:

Spark: 1.3.1 (built for Hadoop 2.6 or later: spark-1.3.1-bin-hadoop2.6)
Cassandra: 2.0
Spark-Cassandra-Connector: 1.3.0-M1
Scala: 2.10.5

Spark and Cassandra run on a virtual cluster.

Cluster details:

Spark Master : 192.168.101.13
Spark Slaves : 192.168.101.11 and 192.168.101.12
Cassandra Nodes: 192.168.101.11 (seed node) and 192.168.101.12

I am trying to submit a job from my client machine (laptop), 172.16.0.6. After googling this error, I made sure that I can ping all the machines in the cluster from the client machine (the Spark master/slaves and the Cassandra nodes), and I disabled the firewall on all machines. But I am still struggling with this error.
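Since ping only exercises ICMP, a TCP-level probe of the native transport port says more about reachability. A minimal sketch, with the address and timeout taken from this setup rather than anything prescribed:

import java.net.{InetSocketAddress, Socket}

// Throws an exception if 192.168.101.11:9042 cannot be reached within 5 seconds.
val socket = new Socket()
socket.connect(new InetSocketAddress("192.168.101.11", 9042), 5000)
socket.close()
println("native transport port 9042 is reachable")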

cassandra.yaml

listen_address: 192.168.101.11 (192.168.101.12 on other cassandra node)
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 192.168.101.11 (192.168.101.12 on other cassandra node)
rpc_port: 9160
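Another way to take Spark out of the equation is to open a session with the DataStax Java driver directly (its jar, cassandra-driver-core-2.1.5.jar, is already on the --jars list below). A rough sketch, using the contact point and credentials from the spark-shell command in this question:

import com.datastax.driver.core.Cluster

// Build a Cluster against the seed node and open a plain CQL session, bypassing Spark.
val cluster = Cluster.builder()
  .addContactPoint("192.168.101.11")
  .withPort(9042)
  .withCredentials("cassandra", "cassandra")
  .build()
val session = cluster.connect()
println("connected to: " + cluster.getMetadata.getClusterName)
session.close()
cluster.close()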

I am trying to run a minimal sample job:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import com.datastax.spark.connector._

val rdd = sc.cassandraTable("test", "words")
rdd.toArray.foreach(println)
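As an aside, RDD.toArray is deprecated in favor of collect in Spark 1.x (the stack trace below shows toArray delegating to collect), which is presumably what the deprecation warning in the log refers to. The equivalent call would be:

rdd.collect.foreach(println)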

To submit the job, I use spark-shell (and :paste the code into the shell):

    spark-shell --jars "/home/ameya/.m2/repository/com/datastax/spark/spark-cassandra-connector_2.10/1.3.0-M1/spark-cassandra-connector_2.10-1.3.0-M1.jar","/home/ameya/.m2/repository/com/datastax/cassandra/cassandra-driver-core/2.1.5/cassandra-driver-core-2.1.5.jar","/home/ameya/.m2/repository/com/google/collections/google-collections/1.0/google-collections-1.0.jar","/home/ameya/.m2/repository/io/netty/netty/3.8.0.Final/netty-3.8.0.Final.jar","/home/ameya/.m2/repository/com/google/guava/guava/14.0.1/guava-14.0.1.jar","/home/ameya/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.0/metrics-core-3.1.0.jar","/home/ameya/.m2/repository/org/slf4j/slf4j-api/1.7.10/slf4j-api-1.7.10.jar","/home/ameya/.m2/repository/com/google/collections/google-collections/1.0/google-collections-1.0.jar","/home/ameya/.m2/repository/io/netty/netty/3.8.0.Final/netty-3.8.0.Final.jar","/home/ameya/.m2/repository/com/google/guava/guava/14.0.1/guava-14.0.1.jar","/home/ameya/.m2/repository/org/apache/cassandra/cassandra-clientutil/2.1.5/cassandra-clientutil-2.1.5.jar","/home/ameya/.m2/repository/joda-time/joda-time/2.3/joda-time-2.3.jar","/home/ameya/.m2/repository/org/apache/cassandra/cassandra-thrift/2.1.3/cassandra-thrift-2.1.3.jar","/home/ameya/.m2/repository/org/joda/joda-convert/1.2/joda-convert-1.2.jar","/home/ameya/.m2/repository/org/apache/thrift/libthrift/0.9.2/libthrift-0.9.2.jar","/home/ameya/.m2/repository/org/apache/thrift/libthrift/0.9.2/libthrift-0.9.2.jar" --master spark://192.168.101.13:7077 --conf spark.cassandra.connection.host=192.168.101.11 --conf spark.cassandra.auth.username=cassandra --conf spark.cassandra.auth.password=cassandra

The error I am getting:

warning: there were 1 deprecation warning(s); re-run with -deprecation for details
**java.io.IOException: Failed to open native connection to Cassandra at {192.168.101.11}:9042**
    at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:181)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:167)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:167)
    at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
    at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
    at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:76)
    at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:104)
    at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:115)
    at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:243)
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:49)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:148)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:118)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1512)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:813)
    at org.apache.spark.rdd.RDD.toArray(RDD.scala:833)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:46)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:48)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:50)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:52)
    at $iwC$$iwC$$iwC.<init>(<console>:54)
    at $iwC$$iwC.<init>(<console>:56)
    at $iwC.<init>(<console>:58)
    at <init>(<console>:60)
    at .<init>(<console>:64)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
**Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.101.11:9042 (com.datastax.driver.core.TransportException: [/192.168.101.11:9042] Connection has been closed))**
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1236)
    at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:333)
    at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:174)
    ... 71 more

Can anyone point out what I am doing wrong here?

Solution

You did not specify spark.cassandra.connection.host; by default, Spark assumes that the Cassandra host is the same as the Spark master node. Set it on the SparkConf when creating the context:

var sc: SparkContext = _
val conf = new SparkConf().setAppName("Cassandra Demo").setMaster(master)
  .set("spark.cassandra.connection.host", "192.168.101.11")
sc = new SparkContext(conf)

val rdd = sc.cassandraTable("test", "words")
rdd.toArray.foreach(println)

It should work if you have properly set the seed node in cassandra.yaml.
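For reference, the seed list lives under seed_provider in cassandra.yaml. A sketch of that section for this cluster, assuming 192.168.101.11 is the only seed as described in the question:

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.168.101.11"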
