TensorFlow Custom Allocator and Accessing Data from Tensor


Question

In TensorFlow, you can create custom allocators for various reasons (I am doing it for new hardware). Due to the structure of the device, I need to use a struct of a few elements as my data pointer, which the allocator returns as a void*.
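For context, here is a minimal sketch of what such an allocator might look like. The MyDeviceAllocator class, the DeviceHandle struct, and its fields are hypothetical placeholders; the virtual methods shown are the core ones declared in tensorflow/core/framework/allocator.h, and their exact signatures may differ slightly between TensorFlow versions.

#include <cstdint>
#include <string>

#include "tensorflow/core/framework/allocator.h"

// Hypothetical per-allocation descriptor for the custom hardware.
// The allocator hands this back as a void*, so kernels later need a
// way to recover it from the Tensor.
struct DeviceHandle {
  uint64_t device_offset;  // where the buffer lives on the device
  uint32_t bank;           // which memory bank was used
};

class MyDeviceAllocator : public tensorflow::Allocator {
 public:
  std::string Name() override { return "my_device_allocator"; }

  void* AllocateRaw(size_t alignment, size_t num_bytes) override {
    // Allocate on the device and wrap the bookkeeping in a struct;
    // TensorFlow only ever sees the opaque void*.
    DeviceHandle* h = new DeviceHandle{/*device_offset=*/0, /*bank=*/0};
    return h;
  }

  void DeallocateRaw(void* ptr) override {
    delete static_cast<DeviceHandle*>(ptr);
  }
};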

In the kernels that I am writing, I am given access to Tensors, but I need to get the pointer struct that I wrote. Examining the classes, it seemed that I could get this struct by doing tensor_t.buf_->data(), i.e. going through

Tensor::buf_

TensorBuffer::data()

The problem is that I can't find code that does this, and I am worried that it is unsafe (highly likely!) or that there is a more standard way to do it.

Can someone confirm whether this is a good or bad idea, and provide an alternative if one exists?

Answer

Four days later...

// GetBase() from TensorFlow's GPU utilities; DMAHelper is declared in
// dma_helper.h (under tensorflow/core/common_runtime/ in the source tree).
void* GetBase(const Tensor* src) {
  return const_cast<void*>(DMAHelper::base(src));
}

From GPUUtils

DMAHelper::base() is a friend class method that is given the ability to use the private Tensor::base() to get at the data pointer.

The implementation shows that this is all just a wrapper around what I wanted to do, hidden behind yet another abstraction. I am guessing it is a safer way to get at the pointer and should be used instead.
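For illustration, a sketch of how this could be used inside a kernel. MyOp, the input index, and DeviceHandle (the hypothetical struct from the allocator sketch above) are placeholders; only DMAHelper::base() and the OpKernel interface come from TensorFlow itself.

#include "tensorflow/core/common_runtime/dma_helper.h"
#include "tensorflow/core/framework/op_kernel.h"

class MyOp : public tensorflow::OpKernel {
 public:
  explicit MyOp(tensorflow::OpKernelConstruction* ctx)
      : tensorflow::OpKernel(ctx) {}

  void Compute(tensorflow::OpKernelContext* ctx) override {
    const tensorflow::Tensor& input = ctx->input(0);
    // DMAHelper::base() returns the tensor's underlying data pointer,
    // i.e. whatever the custom allocator returned from AllocateRaw().
    void* raw = const_cast<void*>(tensorflow::DMAHelper::base(&input));
    // Reinterpret it as the allocator's bookkeeping struct (hypothetical).
    DeviceHandle* handle = static_cast<DeviceHandle*>(raw);
    // ... drive the hardware using handle->device_offset, handle->bank ...
    (void)handle;
  }
};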
