Segmentation fault (core dumped) - Inferring with Tensorflow C++ API from SavedModel


Problem description

I am using the Tensorflow C++ API to load a SavedModel and run inference. The model loads fine, but when I run inference, I get the following error:

$ ./bazel-bin/tensorflow/gan_loader/gan_loader
2020-06-21 19:29:18.669604: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /home/eduardo/Documents/GitHub/edualvarado/tensorflow/tensorflow/gan_loader/generator_model_final
2020-06-21 19:29:18.671368: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-06-21 19:29:18.671385: I tensorflow/cc/saved_model/loader.cc:295] Reading SavedModel debug info (if present) from: /home/eduardo/Documents/GitHub/edualvarado/tensorflow/tensorflow/gan_loader/generator_model_final
2020-06-21 19:29:18.671474: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
2020-06-21 19:29:18.688557: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle.
2020-06-21 19:29:18.707707: I tensorflow/cc/saved_model/loader.cc:183] Running initialization op on SavedModel bundle at path: /home/eduardo/Documents/GitHub/edualvarado/tensorflow/tensorflow/gan_loader/generator_model_final
2020-06-21 19:29:18.714949: I tensorflow/cc/saved_model/loader.cc:364] SavedModel load for tags { serve }; Status: success: OK. Took 45356 microseconds.
Segmentation fault (core dumped)

The complete inference code is the following. At the top, in a comment, you can find the SignatureDef information for the SavedModel (a note after the code shows how this listing can be regenerated).

/* INFO ABOUT SAVEDMODEL

The given SavedModel SignatureDef contains the following input(s):
  inputs['dense_1_input'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 100)
      name: serving_default_dense_1_input:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['conv2d_2'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 28, 28, 1)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
*/


#include <fstream>
#include <utility>
#include <vector>

#include "tensorflow/cc/ops/const_op.h"
#include "tensorflow/cc/ops/image_ops.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/graph/default_device.h"
#include "tensorflow/core/graph/graph_def_builder.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/core/stringpiece.h"
#include "tensorflow/core/lib/core/threadpool.h"
#include "tensorflow/core/lib/io/path.h"
#include "tensorflow/core/lib/strings/str_util.h"
#include "tensorflow/core/lib/strings/stringprintf.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/init_main.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/types.h"
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/util/command_line_flags.h"
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"

// These are all common classes it's handy to reference with no namespace.
using tensorflow::Flag;
using tensorflow::int32;
using tensorflow::Status;
using tensorflow::string;
using tensorflow::Tensor;
using tensorflow::tstring;


/*
TODO: Functions
*/
Tensor CreateLatentSpace(const int latent_dim, const int num_samples) {
  Tensor tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({num_samples, latent_dim}));
  
  auto tensor_mapped = tensor.tensor<float, 2>(); 
  for (int idx = 0; idx < tensor.dim_size(0); ++idx) {
    for (int i = 0; i < tensor.dim_size(1); ++i) {
      tensor_mapped(idx, i) = drand48() - 0.5;
    }
  }
  return tensor;
}

int main(int argc, char* argv[]) {
  // These are the command-line flags the program can understand.
  // They define where the graph and input data is located, and what kind of
  // input the model expects. 
 
  // To create latent space
  int32 latent_dim = 100;
  int32 samples_per_row = 5;
  int32 num_samples = 25;
  
  // Input/Output names
  string input_layer = "serving_default_dense_1_input";
  string output_layer = "StatefulPartitionedCall";


  // Arguments
  std::vector<Flag> flag_list = {
      Flag("latent_dim", &latent_dim, "latent dimensions"),
      Flag("samples_per_row", &samples_per_row, "samples per row"),
      Flag("num_samples", &num_samples, "number of samples"),
      Flag("input_layer", &input_layer, "name of input layer"),
      Flag("output_layer", &output_layer, "name of output layer"),
  };
  string usage = tensorflow::Flags::Usage(argv[0], flag_list);
  const bool parse_result = tensorflow::Flags::Parse(&argc, argv, flag_list);
  if (!parse_result) {
    LOG(ERROR) << usage;
    return -1;
  }

  // We need to call this to set up global state for TensorFlow.
  tensorflow::port::InitMain(argv[0], &argc, &argv);
  if (argc > 1) {
    LOG(ERROR) << "Unknown argument " << argv[1] << "\n" << usage;
    return -1;
  }

  // TODO: First we load and initialize the model.
  std::unique_ptr<tensorflow::Session> session;
  tensorflow::SavedModelBundle model;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;

  const string export_dir = "/home/eduardo/Documents/GitHub/edualvarado/tensorflow/tensorflow/gan_loader/generator_model_final";
  const std::unordered_set<std::string> tags = {"serve"};         

  auto load_graph_status = tensorflow::LoadSavedModel(session_options, run_options, export_dir, tags, &model);
  if (!load_graph_status.ok()) {
    std::cerr << "Failed: " << load_graph_status;
    return -1;
  }

  // TODO: Create latent space
  auto latent_space_tensor = CreateLatentSpace(100, 1);


  // TODO: Run the latent space through the model
  std::vector<Tensor> outputs;
  Status run_status = session->Run({{input_layer, latent_space_tensor}},
                                   {output_layer}, {}, &outputs);

  if (!run_status.ok()) {
    LOG(ERROR) << "Running model failed: " << run_status;
    return -1;
  }
  
  // TODO: Save the figure


  return 0;
}
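
As a side note, a SignatureDef listing like the one quoted in the comment at the top is what TensorFlow's saved_model_cli tool prints; it can be regenerated from the directory containing the SavedModel with:

$ saved_model_cli show --dir generator_model_final --tag_set serve --signature_def serving_default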

I think I have tried almost everything, but sadly there is not much documentation about the C++ API. Could you please give me some guidance on why this is happening?

Thanks a lot.

Environment:

  • Ubuntu 18.04
  • TensorFlow 2.2.0
  • Bazel 2.0.0

Recommended answer

In the code snippet, the session pointer is never initialized before Run(..) is called:

std::unique_ptr<tensorflow::Session> session;
Status run_status = session->Run({{input_layer, latent_space_tensor}},
                                 {output_layer}, {}, &outputs);

A default-constructed std::unique_ptr holds a null pointer, so session->Run(..) dereferences null, which is exactly the segmentation fault above. Initialise session before calling Run(..) and the crash goes away.

Note that tensorflow::Session is an abstract interface, so it cannot simply be default-constructed with std::make_unique; sessions are created by tensorflow::NewSession() or, as in this program, by LoadSavedModel(). Since LoadSavedModel has already created a session inside the SavedModelBundle, the simplest fix is to drop the separate pointer and run inference through the bundle's session:

Status run_status = model.GetSession()->Run({{input_layer, latent_space_tensor}},
                                            {output_layer}, {}, &outputs);

The bundle owns this session (it is stored as the std::unique_ptr<Session> member model.session) and closes and deallocates it automatically when the bundle goes out of scope.
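
For completeness, here is a minimal sketch of how the relevant part of main() looks with this fix, assuming the question's includes, flags, CreateLatentSpace() and export_dir are kept as they are; only the session handling changes:

  // No separate std::unique_ptr<tensorflow::Session> is needed:
  // the SavedModelBundle owns one.
  tensorflow::SavedModelBundle model;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;
  const std::unordered_set<std::string> tags = {"serve"};

  // LoadSavedModel creates and initialises the session inside model.
  auto load_graph_status = tensorflow::LoadSavedModel(
      session_options, run_options, export_dir, tags, &model);
  if (!load_graph_status.ok()) {
    std::cerr << "Failed: " << load_graph_status;
    return -1;
  }

  // Sample one latent vector of 100 dimensions, as in the question.
  Tensor latent_space_tensor = CreateLatentSpace(100, 1);

  // Run inference through the bundle's session. The feed and fetch names
  // are the ones reported by the SignatureDef; a bare node name such as
  // "serving_default_dense_1_input" resolves to its first output, so it
  // is equivalent to "serving_default_dense_1_input:0".
  std::vector<Tensor> outputs;
  Status run_status = model.GetSession()->Run(
      {{input_layer, latent_space_tensor}},
      {output_layer}, {}, &outputs);
  if (!run_status.ok()) {
    LOG(ERROR) << "Running model failed: " << run_status;
    return -1;
  }

  // outputs[0] now holds the generated image with shape {1, 28, 28, 1}.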

