Can't connect to CFS node


Problem description

I removed (or decommissioned, I can't remember which) a DSE analytics node (with IP 10.14.5.50) a couple of months ago. When I now try to execute a dse shark query (CREATE TABLE ccc AS SELECT ...), I receive:

15/01/22 13:23:17 ERROR parse.SharkSemanticAnalyzer: org.apache.hadoop.hive.ql.parse.SemanticException: 0:0 Error creating temporary folder on: cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db. Error encountered near token 'TOK_TMP_FILE'
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1256)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1053)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8342)
    at shark.parse.SharkSemanticAnalyzer.analyzeInternal(SharkSemanticAnalyzer.scala:105)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
    at shark.SharkDriver.compile(SharkDriver.scala:215)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:342)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:977)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
    at shark.SharkCliDriver.processCmd(SharkCliDriver.scala:347)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    at shark.SharkCliDriver$.main(SharkCliDriver.scala:240)
    at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: java.io.IOException: Error connecting to node 10.14.5.50:9160 with strategy STICKY.
    at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:216)
    at org.apache.hadoop.hive.ql.Context.getExternalScratchDir(Context.java:270)
    at org.apache.hadoop.hive.ql.Context.getExternalTmpFileURI(Context.java:363)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1253)
    ... 12 more

I guess the above error is due to my keyspace referring to the old node:

shark> DESCRIBE DATABASE mykeyspace;
OK
mykeyspace      cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db
Time taken: 0.997 seconds

Is there any way for me to fix this faulty database path?

Tried (but failed) workaround to recreate the database: in cqlsh I created a keyspace thekeyspace and added a table thetable. I then opened up dse hive (and noticed that DESCRIBE DATABASE thekeyspace gives me a correct cfs path). However, I am unable to drop the database using DROP DATABASE thekeyspace.
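
Roughly what that workaround looked like (the table schema and replication settings below are throwaway illustrative values, not anything from the real cluster):

    -- in cqlsh: create a throwaway keyspace and table
    cqlsh> CREATE KEYSPACE thekeyspace
       ...   WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
    cqlsh> CREATE TABLE thekeyspace.thetable (id int PRIMARY KEY, val text);

    -- in dse hive: the reported cfs path is correct, but the drop still fails
    hive> DESCRIBE DATABASE thekeyspace;
    hive> DROP DATABASE thekeyspace;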

Additional information:

  • I have no external tables in my keyspace.
  • Plain SELECT queries against the tables work.
  • Setting -hiveconf cassandra.host=WORKING_NODE_IP does not help (see the sketch after this list).
  • The following commands return the proper IPs (i.e. not X.X.X.50):
    • dsetool listjt
    • dsetool jobtracker
    • dsetool sparkmaster
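
Concretely, those checks look roughly like this (the dse shark invocation below is an assumption about how the -hiveconf flag was passed; WORKING_NODE_IP stands for any live analytics node):

    # starting the shell with an explicit Cassandra host -- the CTAS query still fails the same way
    dse shark -hiveconf cassandra.host=WORKING_NODE_IP

    # each of these reports live nodes only; none of them mentions X.X.X.50
    dsetool listjt
    dsetool jobtracker
    dsetool sparkmaster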

Answer

Stumbled across this page, which says you need to TRUNCATE "HiveMetaStore"."MetaStore" (in cqlsh) after removing Hive nodes. That did the trick.
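
In concrete terms the fix is a single statement in cqlsh; "HiveMetaStore"."MetaStore" is the internal table DSE uses to back the Hive metastore, and the double quotes matter because the names are case sensitive:

    cqlsh> TRUNCATE "HiveMetaStore"."MetaStore";

After the truncate, the metastore entries are rebuilt the next time the Hive/Shark shell connects, which is presumably why the stale cfs://10.14.5.50 location disappears.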
