concurrency::task and explicit threads


Problem description

In my application I am using OpenGL, which requires an explicit thread on which the OpenGL context is initialized and used. I would like to be able to mix this explicit thread (wrapped in an active agent object) with concurrency::task without being unnecessarily delayed by the task scheduler, i.e. without doing unnecessary round-trips through the task-scheduler queue.

// See bottom for "thread_wrapper".
thread_wrapper<context> gpu_context;

concurrency::task<void> foo()
{
	concurrency::task_completion_event<void> e; // Problem...

	gpu_context.run([=](context& c) -> std::shared_ptr<buffer>
	{
		// So far so good. Executed in explicit thread.
		return c.allocate_buffer(source.size());
	})
	.then([=](concurrency::task<std::shared_ptr<buffer>> t) -> std::shared_ptr<buffer>
	{
		// Still good. Continuation which performs memcpy is executed on task scheduler.
		// Would be nice if high prio task.
		auto buf_ptr = t.get();
		memcpy(buf_ptr->data(), source.data(), source.size());
		return buf_ptr;
	})
	.then([=](concurrency::task<std::shared_ptr<buffer>> t)
	{
		// This is a problem: we just want to execute stuff in an explicit thread,
		// but we still needed to do an unnecessary round-trip to the task-scheduler
		// queue, which adds latency and harms scalability.
		auto buf_ptr = t.get();

		// Another problem: all of a sudden I must do .then inside of the continuation.
		gpu_context.run([=](context& c)
		{
			auto tex_ptr = c.allocate_texture(buf_ptr->size());
			tex_ptr->copy_from(*buf_ptr);
			e.set();
		});
	});

	return concurrency::task<void>(e);
}

// Explicit thread wrapper.
template<typename T>
class thread_wrapper
{
	T value_;
public:
	template<typename F>
	auto run(F&& f) -> concurrency::task<decltype(f(std::declval<T&>()))>
	{
		concurrency::task_completion_event<decltype(f(std::declval<T&>()))> e;
		std::function<void()> f2 = [=]
		{
			e.set(f(value_));
		};

		if(thread_id_ == std::this_thread::get_id())
			f2();
		else
			queue_.push(std::move(f2)); // Queue is processed by std::thread.

		return concurrency::task<decltype(f(std::declval<T&>()))>(e);
	}
	/* ... */
};

Any suggestions as to how I can improve on the above situation?

Recommended answer

Hi Ronag,

The Concurrency Runtime owns the scheduling in order to provide optimization, scalability, and forward-progress guarantees.

This looks like data-flow parallelism. The easiest approach I can recommend is the Asynchronous Agents Library: http://blogs.msdn.com/b/nativeconcurrency/archive/2009/06/03/introduction-to-asynchronous-agents-library.aspx. Using this data-flow model may help you achieve your goal by creating a middle agent that processes the matrices. The agent function can be a simple while loop receiving messages from (say) an unbounded buffer. We have a Cartoonizer sample that shows the use of data workflows and data pipelines.


