Run tensorflow model in CPP


Problem description


I trained my model using tf.keras. I converted this model to '.pb' by:

import os
import tensorflow as tf
from tensorflow.keras import backend as K
K.set_learning_phase(0)

from tensorflow.keras.models import load_model
model = load_model('model_checkpoint.h5')
model.save('model_tf2', save_format='tf')

This creates a folder 'model_tf2' with 'assets', variables, and saved_model.pb

I'm trying to load this model in cpp. Referring to many other posts (mainly, Using Tensorflow checkpoint to restore model in C++), I am now able to load the model.

    RunOptions run_options;
    run_options.set_timeout_in_ms(60000);
    SavedModelBundle model;
    auto status = LoadSavedModel(SessionOptions(), run_options, model_dir_path, tags, &model);
    if (!status.ok()) {
        std::cerr << "Failed: " << status;
        return -1;
    }

The status check above confirms that the model was loaded.

I have the following questions

  1. How do I do a forward pass through the model?
  2. I understand 'tag' can be gpu, serve, train.. What is the difference between serve and gpu?
  3. I don't understand the first 2 arguments to LoadSavedModel i.e. session options and run options. What purpose do they serve? Also, could you help me understand with a syntactical example? I have set run_options by looking at another stackoverflow post, however I don't understand its purpose.

Thank you!! :)

Solution

This approach worked well with TF 1.5.

load graph function

Status LoadGraph(const tensorflow::string& graph_file_name,
    std::unique_ptr<tensorflow::Session>* session, tensorflow::SessionOptions options) {
    tensorflow::GraphDef graph_def;
    // Read the serialized GraphDef from the .pb file.
    Status load_graph_status =
        ReadBinaryProto(tensorflow::Env::Default(), graph_file_name, &graph_def);
    if (!load_graph_status.ok()) {
        return tensorflow::errors::NotFound("Failed to load compute graph at '",
            graph_file_name, "'");
    }
    // Create a new session with the caller-supplied options and attach the graph to it.
    session->reset(tensorflow::NewSession(options));
    Status session_create_status = (*session)->Create(graph_def);
    if (!session_create_status.ok()) {
        return session_create_status;
    }
    return Status::OK();
}
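Regarding the question about session options and run options: `SessionOptions` carries the `ConfigProto` that governs how the whole session uses devices and threads, while `RunOptions` applies to a single `Run` call (e.g. the timeout set in the question). A minimal sketch of configuring the `options` argument passed into `LoadGraph` might look like this; the specific values are illustrative assumptions, not required settings:

```cpp
// Sketch: configuring SessionOptions before passing it to LoadGraph.
// The numeric values below are illustrative only.
tensorflow::SessionOptions options;

// Grow GPU memory on demand instead of reserving it all upfront.
options.config.mutable_gpu_options()->set_allow_growth(true);

// Cap the fraction of GPU memory this session may use.
options.config.mutable_gpu_options()->set_per_process_gpu_memory_fraction(0.5);

// Fall back to CPU when an op has no GPU kernel.
options.config.set_allow_soft_placement(true);

// RunOptions is per Run() call, e.g. an upper bound on its duration.
tensorflow::RunOptions run_options;
run_options.set_timeout_in_ms(60000);
```

In other words, choices that should hold for the lifetime of the session (device placement, memory policy) go in `SessionOptions`, and per-invocation controls (timeouts, tracing) go in `RunOptions`.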

Call the load-graph function with the path to the .pb model and any other session configuration. Once the model is loaded you can do a forward pass by calling Run:

Status load_graph_status = LoadGraph(graph_path, &session_fpass, options);

if (!load_graph_status.ok()) {
    LOG(ERROR) << load_graph_status;
    return -1;
}


std::vector<tensorflow::Tensor> outputs;

// Run the graph: the first list feeds input tensors by name, the second names
// the tensors to fetch into `outputs`, and the third names target nodes to run
// without fetching (here the output layer is also passed as a target).
Status run_status = session_fpass->Run({ {input_layer, image_in} },
    { output_layer1 }, { output_layer1 }, &outputs);

if (!run_status.ok()) {
    LOG(ERROR) << "Running model failed: " << run_status;
    return -1;
}
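The call above assumes `image_in` is already a `tensorflow::Tensor`. As a rough sketch of how it could be constructed, assuming a single 224x224 RGB float input (the shape is a hypothetical example and must match what the model was trained with):

```cpp
// Sketch: building the input tensor fed to Run() as image_in.
// Shape {1, 224, 224, 3} (batch, height, width, channels) is an assumption
// for illustration; use your model's actual input shape.
tensorflow::Tensor image_in(tensorflow::DT_FLOAT,
                            tensorflow::TensorShape({1, 224, 224, 3}));

// Fill the tensor with pixel values (zeros here as a placeholder; real code
// would copy in decoded, normalized image data).
auto flat = image_in.flat<float>();
for (int i = 0; i < flat.size(); ++i) {
    flat(i) = 0.0f;
}
```

`input_layer` and `output_layer1` are the string names of the corresponding nodes in the graph; they must match the names baked into the exported .pb file.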
