Static Thrust Custom Allocator?

Problem description

Setting up a few facts:

  • Thrust doesn't operate in-place for all of its operations.
  • You can supply custom allocators to thrust::device_vectors (a minimal sketch follows this list).
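
As a rough illustration of the second point, here is a minimal sketch of attaching a custom allocator to a thrust::device_vector. The logging_allocator type is hypothetical; it simply derives from Thrust's classic default device allocator (thrust::device_malloc_allocator) and reports each allocation. Note that it only controls the vector's own storage, not the scratch memory Thrust's algorithms allocate internally, which is the crux of the question below.

```cpp
#include <thrust/device_vector.h>
#include <thrust/device_malloc_allocator.h>
#include <iostream>

// Hypothetical allocator: behaves like the default device allocator but
// logs every allocation, so you can see which memory the vector itself owns.
template <typename T>
struct logging_allocator : thrust::device_malloc_allocator<T>
{
  typedef thrust::device_malloc_allocator<T> super_t;
  typedef typename super_t::pointer   pointer;
  typedef typename super_t::size_type size_type;

  pointer allocate(size_type n)
  {
    std::cout << "logging_allocator::allocate(" << n << " elements)" << std::endl;
    return super_t::allocate(n);
  }

  void deallocate(pointer p, size_type n)
  {
    std::cout << "logging_allocator::deallocate(" << n << " elements)" << std::endl;
    super_t::deallocate(p, n);
  }
};

int main()
{
  // the vector's own storage goes through logging_allocator...
  thrust::device_vector<int, logging_allocator<int> > vec(10, 1);
  // ...but temporary buffers used inside algorithms such as thrust::sort
  // are allocated elsewhere, which is exactly the problem described below.
  return 0;
}
```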

I've looked in thrust::system and thrust::system::cuda and haven't found anything that looks like a static system allocator. By that I mean, I can't see a way of replacing the allocator that thrust uses internally to allocate extra memory for the out-of-place algorithms.

I also find it hard to believe that the functions that are not in-place use the allocators of the given thrust::device_vectors to allocate working memory.

Question: Does Thrust have a way of replacing the internal allocator with a user-defined one?

Related questions:

Hinting that Thrust operates out-of-place

Custom Thrust allocator example

Answer

Thrust's custom_temporary_allocation example demonstrates how to build your own custom allocator for the temporary storage used internally by Thrust algorithms. The example uses a caching scheme to perform allocation but in principle you could use any strategy you like.

Basically, the idea is to build a custom backend derived from the CUDA backend specifically for the purpose of customizing allocation. Then, when you'd like to use an algorithm with your custom allocator, you point Thrust at your custom backend when you call the algorithm.
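
Below is a minimal sketch of that idea, loosely following the custom_temporary_allocation example. The policy type my_policy and the logging output are invented for illustration, and the exact customization points (in particular the signatures of the get_temporary_buffer / return_temporary_buffer overloads) have shifted between Thrust releases, so the copy of the example shipped with your Thrust version is the authoritative reference.

```cpp
#include <thrust/system/cuda/execution_policy.h>
#include <thrust/device_vector.h>
#include <thrust/device_malloc.h>
#include <thrust/device_free.h>
#include <thrust/memory.h>
#include <thrust/pair.h>
#include <thrust/sort.h>
#include <cstddef>
#include <iostream>

// Hypothetical execution policy derived from the CUDA backend; it exists
// only so Thrust's temporary-buffer hooks can be overloaded for it.
struct my_policy : thrust::system::cuda::execution_policy<my_policy> {};

// Thrust finds these two overloads by argument-dependent lookup and uses
// them whenever an algorithm run under my_policy needs scratch memory.
template <typename T>
thrust::pair<thrust::pointer<T, my_policy>, std::ptrdiff_t>
get_temporary_buffer(my_policy, std::ptrdiff_t n)
{
  std::cout << "get_temporary_buffer: " << n << " elements" << std::endl;

  // delegate to device_malloc here; any allocation strategy would do
  thrust::pointer<T, my_policy> result(thrust::device_malloc<T>(n).get());
  return thrust::make_pair(result, n);
}

template <typename Pointer>
void return_temporary_buffer(my_policy, Pointer p)
{
  std::cout << "return_temporary_buffer" << std::endl;
  thrust::device_free(thrust::device_pointer_cast(p.get()));
}

int main()
{
  thrust::device_vector<int> vec(1 << 20);

  // passing the policy as the first argument routes the sort's temporary
  // allocations through the overloads above
  thrust::sort(my_policy(), vec.begin(), vec.end());
  return 0;
}
```

Newer Thrust releases also let you attach an allocator directly to the stock policy (for example thrust::cuda::par(alloc)), which achieves the same effect without deriving a backend, but the derived-backend approach is the one the referenced example describes.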

Note that this feature requires Thrust 1.6 or better.
