io_service::poll_one non-deterministic behaviour
Question
In the following code, I expect the output to always be 1, because I am expecting only one handler to run when poll_one()
is called. However, once in about 300 times, the output is actually 3. Based on my understanding of the boost library, this seems incorrect. Is the non-deterministic behavior a bug or expected?
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <iostream>

int main() {
  boost::asio::io_service io;
  boost::asio::io_service::work io_work(io);
  boost::asio::io_service::strand strand1(io);
  boost::asio::io_service::strand strand2(io);
  int val = 0;

  strand1.post([&val, &strand2]() {
    val = 1;
    strand2.post([&val]() {
      val = 2;
    });
    boost::asio::spawn(strand2, [&val](boost::asio::yield_context yield) {
      val = 3;
    });
  });

  io.poll_one();
  std::cout << "Last executed: " << val << std::endl;
  return 0;
}
Using boost-asio 1.60.0.6
The observed behavior is well defined and expected to occur, but one should not expect it to occur often.
Asio has a limited pool of strand implementations, and the default allocation strategy for strands is hashing. If a hash collision occurs, two strands will use the same implementation. When a hash-collision occurs, the example simplifies to the following demo:
#include <cassert>
#include <boost/asio.hpp>

int main()
{
  boost::asio::io_service io_service;
  boost::asio::io_service::strand strand1(io_service);
  // Have strand2 use the same implementation as strand1.
  boost::asio::io_service::strand strand2(strand1);
  int value = 0;

  auto handler1 = [&value, &strand1, &strand2]() {
    assert(strand1.running_in_this_thread());
    assert(strand2.running_in_this_thread());
    value = 1;

    // handler2 is queued into the strand and never invoked.
    auto handler2 = [&value]() { assert(false); };
    strand2.post(handler2);

    // handler3 is immediately executed.
    auto handler3 = [&value]() { value = 3; };
    strand2.dispatch(handler3);
    assert(value == 3);
  };

  // Enqueue handler1.
  strand1.post(handler1);
  // Run the event processing loop, executing handler1.
  assert(io_service.poll_one() == 1);
}
In the above example:

- io_service.poll_one() executes a single ready handler (handler1)
- handler2 is never invoked
- handler3 is invoked immediately within strand2.dispatch(), as strand2.dispatch() is invoked from within a handler where strand2.running_in_this_thread() returns true
There are various details contributing to the observed behavior:
- io_service::poll_one() will run the io_service's event loop without blocking, executing at most one ready-to-run handler. Handlers executed immediately within the context of a dispatch() are never enqueued into the io_service, and so are not subject to poll_one()'s limit of invoking a single handler.
- The boost::asio::spawn(strand, function) overload starts a stackful coroutine as-if by strand.dispatch():
  - if strand.running_in_this_thread() returns false for the caller, then the coroutine will be posted into the strand for deferred invocation
  - if strand.running_in_this_thread() returns true for the caller, then the coroutine will be executed immediately
- Discrete strand objects that use the same implementation still maintain the guarantees of a strand. Namely, concurrent execution will not occur and the order of handler invocation is well defined. When discrete strand objects are using discrete implementations, and multiple threads are running the io_service, then one may observe the discrete strands executing concurrently. However, when discrete strand objects use the same implementation, one will not observe concurrency even if multiple threads are running the io_service. This behavior is documented:

  "The implementation makes no guarantee that handlers posted or dispatched through different strand objects will be invoked concurrently."
- Asio has a limited pool of strand implementations. The current default is 193, and it can be controlled by defining BOOST_ASIO_STRAND_IMPLEMENTATIONS to the desired number. This feature is noted in the Boost.Asio 1.48 release notes:

  "Made the number of strand implementations configurable by defining BOOST_ASIO_STRAND_IMPLEMENTATIONS to the desired number."

  By decreasing the pool size, one increases the chance that two discrete strands will use the same implementation. With the original code, if one were to set the pool size to 1, then strand1 and strand2 would always use the same implementation, resulting in val always being 3 (demo).
- The default strategy for allocating strand implementations is to use a golden-ratio hash. As a hashing algorithm is used, there is a potential for collisions, resulting in the same implementation being used for multiple discrete strand objects. By defining BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION, one can change the allocation strategy to round-robin, preventing a collision from occurring until BOOST_ASIO_STRAND_IMPLEMENTATIONS + 1 strand allocations have occurred. This feature is noted in the Boost.Asio 1.48 release notes:

  "Added support for a new BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION flag which switches the allocation of strand implementations to use a round-robin approach rather than hashing."
Given the above details, the following occurs when 1 is observed in the original code:

- strand1 and strand2 have discrete implementations
- io_service::poll_one() executes the single handler that was posted directly into strand1
- the handler that was posted into strand1 sets val to 1
- the handler posted into strand2 is enqueued and never invoked
- the coroutine creation is deferred, as the strand's order-of-invocation guarantee prevents the coroutine from being created until after the previous handler that was posted into strand2 has executed:

  "given a strand object s, if s.post(a) happens-before s.dispatch(b), where the latter is performed outside the strand, then asio_handler_invoke(a1, &a1) happens-before asio_handler_invoke(b1, &b1)."
On the other hand, the following occurs when 3 is observed:

- a hash collision occurs for strand1 and strand2, resulting in them using the same underlying strand implementation
- io_service::poll_one() executes the single handler that was posted directly into strand1
- the handler that was posted into strand1 sets val to 1
- the handler posted into strand2 is enqueued and never invoked
- the coroutine is immediately created and invoked within boost::asio::spawn(), setting val to 3, as strand2 can safely execute the coroutine while maintaining the guarantee of non-concurrent execution and order of handler invocation