How to load data to Hive from HDFS without removing the source file?
Question
When loading data from HDFS to Hive using the
LOAD DATA INPATH 'hdfs_file' INTO TABLE tablename;
command, it looks like it moves the hdfs_file into the hive/warehouse directory.
Is it possible (and how?) to copy it instead of moving it, so that the file can still be used by another process?
From your question I assume that you already have your data in HDFS, so you don't need LOAD DATA, which moves the files to the default Hive location /user/hive/warehouse. You can simply define the table using the external keyword, which leaves the files in place but creates the table definition in the Hive metastore. See:
Create Table DDL
For example:
create external table table_name (
id int,
myfields string
)
location '/my/location/in/hdfs';
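
A useful consequence of the external keyword, worth noting alongside the definition above: dropping an external table removes only the metastore entry, not the underlying files, so other processes can keep reading them. A minimal sketch, reusing the table_name definition from the example:

```sql
-- Hive reads the files in place from /my/location/in/hdfs
SELECT id, myfields FROM table_name LIMIT 10;

-- Dropping an EXTERNAL table removes only the table definition
-- from the metastore; the files in HDFS are left untouched.
DROP TABLE table_name;
```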
Please note that the format you use might differ from the default (as JigneshRawal mentioned in the comments). You can specify your own delimiter, for example when using Sqoop:
row format delimited fields terminated by ','
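
Putting the two pieces together, a sketch of the full DDL for comma-delimited text files such as Sqoop's default output (the column names and path are the placeholder values from the example above):

```sql
CREATE EXTERNAL TABLE table_name (
  id int,
  myfields string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/my/location/in/hdfs';
```

STORED AS TEXTFILE is the default for delimited text and is spelled out here only for clarity.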