Is there a better and a faster way to copy from CPU memory to GPU using thrust?
Problem description
Recently I have been using thrust a lot. I have noticed that in order to use thrust, one must always copy the data from the cpu memory to the gpu memory.
Let's look at the following example:
void foo(int *foo)
{
    // first copy: raw host pointer -> host_vector
    thrust::host_vector<int> m(foo, foo + 100000);
    // second copy: host_vector -> device_vector
    thrust::device_vector<int> s = m;
}
I'm not quite sure how the host_vector constructor works, but it seems like I'm copying the initial data, coming from *foo, twice: once into the host_vector when it is initialized, and again when the device_vector is initialized. Is there a better way of copying from cpu to gpu without making an intermediate data copy? I know I can use device_ptr as a wrapper, but that still doesn't fix my problem.

Thanks!
One of device_vector's constructors takes a range of elements specified by two iterators. It's smart enough to understand the raw pointer in your example, so you can construct a device_vector directly and avoid the temporary host_vector:
void my_function_taking_host_ptr(int *raw_ptr, size_t n)
{
    // device_vector assumes raw pointers point to system (host) memory
    thrust::device_vector<int> vec(raw_ptr, raw_ptr + n);
    ...
}
If your raw pointer points to CUDA memory, introduce a device_ptr:
void my_function_taking_cuda_ptr(int *raw_ptr, size_t n)
{
    // wrap raw_ptr before passing it to device_vector
    thrust::device_ptr<int> d_ptr(raw_ptr);
    thrust::device_vector<int> vec(d_ptr, d_ptr + n);
    ...
}
Using a device_ptr doesn't allocate any storage; it just encodes the location of the pointer in the type system.