HCatalog contains a data transfer API for parallel input and output without using MapReduce. This API uses a basic storage abstraction of tables and rows to read data from a Hadoop cluster and to write data into it.
The data transfer API contains mainly three classes; those are:
HCatReader : Reads data from a Hadoop cluster.
HCatWriter : Writes data into a Hadoop cluster.
DataTransferFactory : Generates reader and writer instances.
This API works in a master-slave node setup. Let us discuss HCatReader and HCatWriter in more detail.
HCatReader is an abstract class internal to HCatalog and abstracts away the complexities of the underlying system from which the records are to be retrieved. It contains the following methods:
Sr.No. | Method Name & Description |
---|---|
1 | Public abstract ReaderContext prepareRead() throws HCatException This should be called at the master node to obtain a ReaderContext, which should then be serialized and sent to the slave nodes. |
2 | Public abstract Iterator<HCatRecord> read() throws HCatException This should be called at the slave nodes to read HCatRecords. |
3 | Public Configuration getConf() This will return the configuration class object. |
The HCatReader class is used to read data from HDFS. Reading is a two-step process, in which the first step occurs on the master node of an external system. The second step is carried out in parallel on multiple slave nodes.
Reads are done on a ReadEntity. Before you start to read, you need to define a ReadEntity from which to read. This can be done through ReadEntity.Builder. You can specify a database name, table name, partition, and filter string. For example:
ReadEntity.Builder builder = new ReadEntity.Builder();
ReadEntity entity = builder.withDatabase("mydb").withTable("mytbl").build();
The above code snippet defines a ReadEntity object ("entity"), comprising a table named mytbl in a database named mydb, which can be used to read all the rows of this table. Note that this table must exist in HCatalog prior to the start of this operation.
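Besides the database and table name, the builder can also narrow a read down to particular partitions via a partition filter string. A minimal sketch, assuming the table is partitioned on a hypothetical column named dt:

// Hypothetical example: read only the partition where dt = "20230101".
ReadEntity filtered = new ReadEntity.Builder()
   .withDatabase("mydb")
   .withTable("mytbl")
   .withFilter("dt = \"20230101\"")   // dt is an assumed partition column
   .build();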
After defining a ReadEntity, you obtain an instance of HCatReader using the ReadEntity and the cluster configuration:
HCatReader reader = DataTransferFactory.getHCatReader(entity, config);
The next step is to obtain a ReaderContext from the reader as follows:
ReaderContext cntxt = reader.prepareRead();
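The ReaderContext carries everything the slaves need, including the number of splits to read. A minimal sketch of the slave side, assuming the context has already been shipped to the slave node and that slaveNum is a hypothetical variable holding the split number assigned to this slave (the test program at the end of this section uses the same pattern):

// On a slave node: rebuild a reader from the context and one split number.
HCatReader slaveReader = DataTransferFactory.getHCatReader(cntxt, slaveNum);
Iterator<HCatRecord> recordItr = slaveReader.read();

while (recordItr.hasNext()) {
   HCatRecord record = recordItr.next();
   System.out.println(record.get(0));   // print the first column of each row
}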
HCatWriter is an abstraction internal to HCatalog. It exists to facilitate writing to HCatalog from external systems. Do not try to instantiate it directly. Instead, use DataTransferFactory. It contains the following methods:
Sr.No. | Method Name & Description |
---|---|
1 | Public abstract WriterContext prepareWrite() throws HCatException External systems should call this method exactly once from the master node. It returns a WriterContext. This should be serialized and sent to the slave nodes to construct an HCatWriter there. |
2 | Public abstract void write(Iterator<HCatRecord> recordItr) throws HCatException This method should be used at the slave nodes to perform writes. The recordItr is an iterator object that contains the collection of records to be written into HCatalog. |
3 | Public abstract void abort(WriterContext cntxt) throws HCatException This method should be called at the master node. The primary purpose of this method is to do cleanups in case of failures. |
4 | Public abstract void commit(WriterContext cntxt) throws HCatException This method should be called at the master node. The purpose of this method is to do the metadata commit. |
Similar to reading, writing is also a two-step process in which the first step occurs on the master node. Subsequently, the second step occurs in parallel on the slave nodes.
Writes are done on a WriteEntity, which can be constructed in a fashion similar to reads:
WriteEntity.Builder builder = new WriteEntity.Builder();
WriteEntity entity = builder.withDatabase("mydb").withTable("mytbl").build();
The above code creates a WriteEntity object ("entity") which can be used to write into a table named mytbl in the database mydb.
After creating a WriteEntity, the next step is to obtain a WriterContext:
HCatWriter writer = DataTransferFactory.getHCatWriter(entity, config);
WriterContext info = writer.prepareWrite();
All of the above steps occur on the master node. The master node then serializes the WriterContext object and makes it available to all the slaves.
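WriterContext is serializable, so plain Java object serialization is one way to make it available. A minimal sketch, assuming a file named writer.ctx at a location the slaves can reach (the test program below does the same thing with a temp file):

// On the master: serialize the WriterContext for distribution to the slaves.
ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream("writer.ctx"));
oos.writeObject(info);
oos.close();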
On the slave nodes, you need to obtain an HCatWriter using the WriterContext as follows:
HCatWriter writer = DataTransferFactory.getHCatWriter(context);
Then, the writer takes an iterator as the argument for the write method:
writer.write(hCatRecordItr);
The writer then calls getNext() on this iterator in a loop and writes out all the records attached to the iterator.
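Once all slaves have finished writing, control returns to the master, which must finish the operation: commit() performs the metadata commit that makes the written data visible, while abort() cleans up after a failure. A minimal sketch, assuming a hypothetical writeSucceeded flag reflecting the outcome of the slave writes:

// Back on the master: finalize or roll back the write.
HCatWriter masterWriter = DataTransferFactory.getHCatWriter(entity, config);

if (writeSucceeded) {            // assumed status flag
   masterWriter.commit(info);    // metadata commit; data becomes visible
} else {
   masterWriter.abort(info);     // cleanup in case of failure
}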
The TestReaderWriter.java file is used to test the HCatReader and HCatWriter classes. The following program demonstrates how to use the HCatReader and HCatWriter API to read data from a source and subsequently write it onto a destination table.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.ql.CommandNeedRetryException;
import org.apache.hive.hcatalog.common.HCatException;
import org.apache.hive.hcatalog.data.DefaultHCatRecord;
import org.apache.hive.hcatalog.data.HCatRecord;
import org.apache.hive.hcatalog.data.transfer.DataTransferFactory;
import org.apache.hive.hcatalog.data.transfer.HCatReader;
import org.apache.hive.hcatalog.data.transfer.HCatWriter;
import org.apache.hive.hcatalog.data.transfer.ReadEntity;
import org.apache.hive.hcatalog.data.transfer.ReaderContext;
import org.apache.hive.hcatalog.data.transfer.WriteEntity;
import org.apache.hive.hcatalog.data.transfer.WriterContext;
import org.apache.hive.hcatalog.mapreduce.HCatBaseTest;
import org.junit.Assert;
import org.junit.Test;

public class TestReaderWriter extends HCatBaseTest {

   @Test
   public void test() throws MetaException, CommandNeedRetryException,
         IOException, ClassNotFoundException {

      driver.run("drop table mytbl");
      driver.run("create table mytbl (a string, b int)");

      // Copy the Hive configuration into a plain map for the transfer API.
      Iterator<Entry<String, String>> itr = hiveConf.iterator();
      Map<String, String> map = new HashMap<String, String>();

      while (itr.hasNext()) {
         Entry<String, String> kv = itr.next();
         map.put(kv.getKey(), kv.getValue());
      }

      // Master side: prepare the write and obtain a WriterContext.
      WriterContext cntxt = runsInMaster(map);
      File writeCntxtFile = File.createTempFile("hcat-write", "temp");
      writeCntxtFile.deleteOnExit();

      // Serialize context.
      ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(writeCntxtFile));
      oos.writeObject(cntxt);
      oos.flush();
      oos.close();

      // Now, deserialize it.
      ObjectInputStream ois = new ObjectInputStream(new FileInputStream(writeCntxtFile));
      cntxt = (WriterContext) ois.readObject();
      ois.close();

      // Slave side writes the records, then the master commits.
      runsInSlave(cntxt);
      commit(map, true, cntxt);

      // Master side: prepare the read and obtain a ReaderContext.
      ReaderContext readCntxt = runsInMaster(map, false);
      File readCntxtFile = File.createTempFile("hcat-read", "temp");
      readCntxtFile.deleteOnExit();

      oos = new ObjectOutputStream(new FileOutputStream(readCntxtFile));
      oos.writeObject(readCntxt);
      oos.flush();
      oos.close();

      ois = new ObjectInputStream(new FileInputStream(readCntxtFile));
      readCntxt = (ReaderContext) ois.readObject();
      ois.close();

      // Slave side: read each split in turn.
      for (int i = 0; i < readCntxt.numSplits(); i++) {
         runsInSlave(readCntxt, i);
      }
   }

   private WriterContext runsInMaster(Map<String, String> config) throws HCatException {
      WriteEntity.Builder builder = new WriteEntity.Builder();
      WriteEntity entity = builder.withTable("mytbl").build();
      HCatWriter writer = DataTransferFactory.getHCatWriter(entity, config);
      WriterContext info = writer.prepareWrite();
      return info;
   }

   private ReaderContext runsInMaster(Map<String, String> config, boolean bogus)
         throws HCatException {
      ReadEntity entity = new ReadEntity.Builder().withTable("mytbl").build();
      HCatReader reader = DataTransferFactory.getHCatReader(entity, config);
      ReaderContext cntxt = reader.prepareRead();
      return cntxt;
   }

   private void runsInSlave(ReaderContext cntxt, int slaveNum) throws HCatException {
      HCatReader reader = DataTransferFactory.getHCatReader(cntxt, slaveNum);
      Iterator<HCatRecord> itr = reader.read();
      int i = 1;

      while (itr.hasNext()) {
         HCatRecord read = itr.next();
         HCatRecord written = getRecord(i++);

         // Argh, HCatRecord doesn't implement equals()
         Assert.assertTrue("Read: " + read.get(0) + "Written: " + written.get(0),
            written.get(0).equals(read.get(0)));

         Assert.assertTrue("Read: " + read.get(1) + "Written: " + written.get(1),
            written.get(1).equals(read.get(1)));

         Assert.assertEquals(2, read.size());
      }
      //Assert.assertFalse(itr.hasNext());
   }

   private void runsInSlave(WriterContext context) throws HCatException {
      HCatWriter writer = DataTransferFactory.getHCatWriter(context);
      writer.write(new HCatRecordItr());
   }

   private void commit(Map<String, String> config, boolean status, WriterContext context)
         throws IOException {
      WriteEntity.Builder builder = new WriteEntity.Builder();
      WriteEntity entity = builder.withTable("mytbl").build();
      HCatWriter writer = DataTransferFactory.getHCatWriter(entity, config);

      if (status) {
         writer.commit(context);
      } else {
         writer.abort(context);
      }
   }

   private static HCatRecord getRecord(int i) {
      List<Object> list = new ArrayList<Object>(2);
      list.add("Row #: " + i);
      list.add(i);
      return new DefaultHCatRecord(list);
   }

   // A simple iterator that produces 100 records to be written.
   private static class HCatRecordItr implements Iterator<HCatRecord> {
      int i = 0;

      @Override
      public boolean hasNext() {
         return i++ < 100 ? true : false;
      }

      @Override
      public HCatRecord next() {
         return getRecord(i);
      }

      @Override
      public void remove() {
         throw new RuntimeException();
      }
   }
}
The above program reads the data from HDFS in the form of records and writes the record data into the table mytbl.