Datastax Cassandra Driver throwing CodecNotFoundException
Question

The exact exception is as follows:
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.math.BigDecimal]
These are the versions of software I am using:

Spark 1.5
Datastax-cassandra 3.2.1
CDH 5.5.1
The code I am trying to execute is a Spark program using the Java API; it basically reads data (csv's) from HDFS and loads it into Cassandra tables. I am using the spark-cassandra-connector. Initially I had a lot of issues with Google's guava library conflict, which I was able to resolve by shading the guava library and building a snapshot jar with all the dependencies.
However, I was able to load data for some files, but for others I get the Codec Exception. When I researched this issue I found the following thread on the same problem:
https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/yZyaOQ-wazk
After going through the discussion, what I understand is that either it is a wrong version of the cassandra-driver I am using, or there is still a class path issue related to the guava library, since cassandra 3.0 and later versions use guava 16.0.1, and the discussion above says there might be a lower version of guava present in the class path.
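One quick way to check whether a competing guava version is on the class path is to scan the running JVM's class path for guava jars. This is a minimal stdlib-only sketch (the class name `GuavaOnClasspath` is just for illustration, not from the original post):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class GuavaOnClasspath {

    // Return the class path entries that look like a guava jar.
    static List<String> guavaEntries(String classpath, String separator) {
        List<String> hits = new ArrayList<>();
        for (String entry : classpath.split(Pattern.quote(separator))) {
            if (entry.toLowerCase().contains("guava")) {
                hits.add(entry);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // More than one entry printed here usually means a version conflict.
        for (String jar : guavaEntries(System.getProperty("java.class.path"), File.pathSeparator)) {
            System.out.println(jar);
        }
    }
}
```

On the build side, `mvn dependency:tree -Dincludes=com.google.guava` shows which transitive dependencies pull in which guava versions before the jar is even built.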
Here is the pom.xml file:
  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.5.0</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.datastax.spark</groupId>
      <artifactId>spark-cassandra-connector-java_2.10</artifactId>
      <version>1.5.0-M3</version>
    </dependency>
    <dependency>
      <groupId>org.apache.cassandra</groupId>
      <artifactId>cassandra-clientutil</artifactId>
      <version>3.2.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.3</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <filters>
                <filter>
                  <artifact>*:*</artifact>
                  <excludes>
                    <exclude>META-INF/*.SF</exclude>
                    <exclude>META-INF/*.DSA</exclude>
                    <exclude>META-INF/*.RSA</exclude>
                  </excludes>
                </filter>
              </filters>
              <relocations>
                <relocation>
                  <pattern>com.google</pattern>
                  <shadedPattern>com.pointcross.shaded.google</shadedPattern>
                </relocation>
              </relocations>
              <minimizeJar>false</minimizeJar>
              <shadedArtifactAttached>true</shadedArtifactAttached>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
And these are the dependencies that were downloaded using the above pom:
spark-core_2.10-1.5.0.jar
spark-cassandra-connector-java_2.10-1.5.0-M3.jar
spark-cassandra-connector_2.10-1.5.0-M3.jar
spark-repl_2.10-1.5.1.jar
spark-bagel_2.10-1.5.1.jar
spark-mllib_2.10-1.5.1.jar
spark-streaming_2.10-1.5.1.jar
spark-graphx_2.10-1.5.1.jar
guava-16.0.1.jar
cassandra-clientutil-3.2.1.jar
cassandra-driver-core-3.0.0-alpha4.jar
Above are some of the main dependencies in my snapshot jar.
Why is the CodecNotFoundException thrown? Is it because of the class path (guava)? Or the cassandra-driver (cassandra-driver-core-3.0.0-alpha4.jar for datastax cassandra 3.2.1)? Or because of the code?
Another point: all the dates I am inserting go into columns whose data type is timestamp.
Also, when I do a spark-submit I see the class path in the logs; there are other guava versions under the hadoop libs. Are these causing the problem?
How do we specify a user-specific class path while doing a spark-submit? Will that help?
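For reference, Spark does have switches to make the submitted application's jars take precedence over the cluster's own classpath (including the hadoop guava). A sketch of such a submit command; the shaded jar name here is hypothetical, and the main class is taken from the stack trace below:

```shell
# spark.driver.userClassPathFirst / spark.executor.userClassPathFirst
# (available since Spark 1.3) prefer the user's jars over Spark's.
spark-submit \
  --class com.cassandra.test.LoadDataToCassandra \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  load-to-cassandra-shaded.jar
```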
Would be glad to get some pointers on these. Thanks.
Below is the stack trace:
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [timestamp <-> java.lang.String]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:689)
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:550)
at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:530)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:485)
at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:85)
at com.datastax.driver.core.BoundStatement.bind(BoundStatement.java:198)
at com.datastax.driver.core.DefaultPreparedStatement.bind(DefaultPreparedStatement.java:126)
at com.cassandra.test.LoadDataToCassandra$1.call(LoadDataToCassandra.java:223)
at com.cassandra.test.LoadDataToCassandra$1.call(LoadDataToCassandra.java:1)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1027)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1555)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I also get:
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [Math.BigDecimal <-> java.lang.String]
Answer
When you call bind(params...) on a PreparedStatement, the driver expects you to provide values with java types that map to the cql types.

This error ([timestamp <-> java.lang.String]) is telling you that there is no codec registered that maps the java String to a cql timestamp. In the java driver, the timestamp type maps to java.util.Date. So you have 2 options here:
- Where the column being bound is for a timestamp, provide a Date-typed value instead of a String.
- Create a codec that maps timestamp <-> String. To do so you could create a subclass of MappingCodec, as described on the documentation site, that maps String to timestamp:
public class TimestampAsStringCodec extends MappingCodec<String, Date> {
    public TimestampAsStringCodec() { super(TypeCodec.timestamp(), String.class); }

    @Override
    protected Date serialize(String value) {
        // The pattern here is an example -- match it to the format in your csv files.
        try { return value == null ? null : new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(value); }
        catch (ParseException e) { throw new InvalidTypeException("Cannot parse timestamp: " + value); }
    }

    @Override
    protected String deserialize(Date value) {
        return value == null ? null : new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(value);
    }
}
You then would need to register the codec:
cluster.getConfiguration().getCodecRegistry()
.register(new TimestampAsStringCodec());
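Alternatively, if you go with option 1 above, convert each csv field to java.util.Date before binding. A minimal stdlib sketch; the class name and the date pattern are assumptions, not from the original post:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class CsvDates {
    // Parse a csv field into the java.util.Date the driver expects for a cql timestamp.
    // The pattern is an assumption -- match it to your files.
    static Date toDate(String csvField) {
        try {
            return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(csvField);
        } catch (ParseException e) {
            throw new IllegalArgumentException("Bad date: " + csvField, e);
        }
    }

    public static void main(String[] args) {
        Date created = toDate("2016-01-15 10:30:00");
        System.out.println(created);
        // Then bind the Date, not the raw String: boundStatement.bind(..., created)
    }
}
```

This avoids the need for a custom codec entirely, at the cost of doing the parsing in your own loading code.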