Is calling asio io_service poll() or poll_one() in a nested or recursive fashion (ie. within a handler) valid?
Is calling asio::io_service::poll() or poll_one() in a nested or recursive fashion (ie. from within a handler) valid?
A really basic test seems to imply that this works (I've only done the test on one platform) but I want to be sure that calling poll() again from within a handler is considered valid behavior.
I couldn't find any relevant information in the asio docs, so I'm hoping that someone with a bit more experience with asio's inner workings could verify this with an explanation or references.
Basic test:
struct NestedHandler
{
NestedHandler(std::string name, asio::io_service * service) :
name(name),
service(service)
{
// empty
}
void operator()()
{
std::cout << " { ";
std::cout << name;
std::cout << " ...calling poll again... ";
service->poll();
std::cout << " } ";
}
std::string name;
asio::io_service * service;
};
struct DefaultHandler
{
DefaultHandler(std::string name) :
name(name)
{
// empty
}
void operator()()
{
std::cout << " { ";
std::cout << name;
std::cout << " } ";
}
std::string name;
};
int main()
{
asio::io_service service;
service.post(NestedHandler("N",&service));
service.post(DefaultHandler("A"));
service.post(DefaultHandler("B"));
service.post(DefaultHandler("C"));
service.post(DefaultHandler("D"));
std::cout << "asio poll" << std::endl;
service.poll();
return 0;
}
// Output:
asio poll
{ N ...calling poll again... { A } { B } { C } { D } }
It is valid.
For the family of functions that process the io_service, run() is the only one with a restriction:

    The run() function must not be called from a thread that is currently calling one of run(), run_one(), poll() or poll_one() on the same io_service object.
However, I am inclined to think that the documentation should also include the same remark for run_one(), as a nested call can result in it blocking indefinitely in either of the following cases[1]:

- the only work in the io_service is the handler currently being executed
- for non-I/O completion port implementations, the only work was posted from within the current handler and the io_service has a concurrency hint of 1
For Windows I/O completion ports, demultiplexing is performed in all threads servicing the io_service using GetQueuedCompletionStatus(). At a high level, threads calling GetQueuedCompletionStatus() function as if they are part of a thread pool, allowing the OS to dispatch work to each thread. As no single thread is responsible for demultiplexing operations to other threads, nested calls to poll() or poll_one() do not affect operation dispatching for other threads. The documentation states:

    Demultiplexing using I/O completion ports is performed in all threads that call io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
For all other demultiplexing mechanisms, a single thread servicing the io_service is used to demultiplex I/O operations. The exact demultiplexing mechanism can be found in the Platform-Specific Implementation Notes:

    Demultiplexing using [/dev/poll, epoll, kqueue, select] is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
The implementations of the demultiplexing mechanisms differ slightly, but at a high level:

- the io_service has a main queue from which threads consume ready-to-run operations to perform
- each call to process the io_service creates a private queue on the stack that is used to manage operations in a lock-free manner
- synchronization with the main queue eventually occurs, where a lock is acquired and the private queue's operations are copied into the main queue, informing other threads and allowing them to consume from the main queue
When the io_service is constructed, it may be provided a concurrency hint, suggesting how many threads the implementation should allow to run concurrently. When non-I/O completion port implementations are provided a concurrency hint of 1, they are optimized to use the private queue as much as possible and defer synchronization with the main queue. For example, when a handler is posted via post():

- if invoked from outside of a handler, then the io_service guarantees thread safety, so it locks the main queue before enqueueing the handler
- if invoked from within a handler, the posted handler is enqueued into the private queue, deferring synchronization with the main queue until necessary
When a nested poll() or poll_one() is invoked, it becomes necessary for the private queue to be copied into the main queue, as operations to be performed will be consumed from the main queue. This case is explicitly checked within the implementation:
// We want to support nested calls to poll() and poll_one(), so any handlers
// that are already on a thread-private queue need to be put on to the main
// queue now.
if (one_thread_)
if (thread_info* outer_thread_info = ctx.next_by_key())
op_queue_.push(outer_thread_info->private_op_queue);
When either no concurrency hint or any value other than 1 is provided, posted handlers are synchronized into the main queue each time. As the private queue does not need to be copied, nested poll() and poll_one() calls will function as normal.
1. In the networking-ts draft, it is noted that run_one() must not be called from a thread that is currently calling run().