How to load tensorflow graph from memory address

Problem description

I'm using the TensorFlow C++ API to load a graph from a file and execute it. Everything is working great, but I'd like to load the graph from memory rather than from a file (so that I can embed the graph into the binary for better portability). I have variables that reference both the binary data (as an unsigned char array) and the size of the data.
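
(For illustration only: such variables are commonly produced by converting the serialized graph into a C array at build time, e.g. with xxd -i graph.pb. The symbol names below are just what that tool would emit and are not part of TensorFlow.)

// Roughly what `xxd -i graph.pb` generates: the serialized GraphDef as a
// byte array plus its length. The byte values here are placeholders.
unsigned char graph_pb[] = {0x0a, /* ... remaining serialized bytes ... */};
unsigned int graph_pb_len = sizeof(graph_pb);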

This is how I am currently loading my graph.

GraphDef graph_def;
ReadBinaryProto(tensorflow::Env::Default(), "./graph.pb", &graph_def);

It feels like this should be simple, but most of the discussion is about the Python API. I did try looking for the source of ReadBinaryProto but wasn't able to find it in the tensorflow repo.

Recommended answer

The following should work:

GraphDef graph_def;
if (!graph_def.ParseFromArray(data, len)) {
    // Handle error
}
...

This is because GraphDef is a subclass of google::protobuf::MessageLite, and thus inherits a variety of parsing methods.
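
A minimal end-to-end sketch, assuming the embedded bytes are exposed as graph_pb / graph_pb_len (placeholder names, e.g. as generated above), might look like this; the session creation is only included to round out the example:

#include <memory>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/public/session.h"

// Placeholder symbols for the embedded serialized graph.
extern unsigned char graph_pb[];
extern unsigned int graph_pb_len;

tensorflow::Status LoadEmbeddedGraph(std::unique_ptr<tensorflow::Session>* session) {
  tensorflow::GraphDef graph_def;
  // ParseFromArray is inherited from google::protobuf::MessageLite.
  if (!graph_def.ParseFromArray(graph_pb, static_cast<int>(graph_pb_len))) {
    return tensorflow::errors::InvalidArgument("could not parse embedded GraphDef");
  }
  session->reset(tensorflow::NewSession(tensorflow::SessionOptions()));
  return (*session)->Create(graph_def);
}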

Caveat: As of January 2017, the snippet above works only when the serialized graph is <64MB, because of a default protocol buffer setting. For larger graphs, take inspiration from ReadBinaryProto's implementation.
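
For an in-memory buffer larger than that, the same idea can be sketched as follows. This is not the exact code in ReadBinaryProto, just the relevant technique of raising the CodedInputStream byte limit before parsing; note that SetTotalBytesLimit took two arguments in protobuf releases of that era and takes only one in newer releases.

#include <climits>

#include "google/protobuf/io/coded_stream.h"
#include "tensorflow/core/framework/graph.pb.h"

bool ParseLargeGraphDef(const unsigned char* data, int len,
                        tensorflow::GraphDef* graph_def) {
  google::protobuf::io::CodedInputStream coded(data, len);
  // Lift the default 64MB total-bytes limit so large graphs can be parsed.
  coded.SetTotalBytesLimit(INT_MAX, INT_MAX);
  return graph_def->ParseFromCodedStream(&coded) && coded.ConsumedEntireMessage();
}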

FWIW, the code for ReadBinaryProto is in tensorflow/core/platform/env.cc.
