io_service::poll_one non-deterministic behaviour


Problem description


In the following code, I expect the output to always be 1, because I am expecting only one handler to run when poll_one() is called. However, once in about 300 times, the output is actually 3. Based on my understanding of the boost library, this seems incorrect. Is the non-deterministic behavior a bug or expected?

#include <boost/asio.hpp>
#include <iostream>

int main() {
  boost::asio::io_service io;
  boost::asio::io_service::work io_work(io);
  boost::asio::io_service::strand strand1(io);
  boost::asio::io_service::strand strand2(io);
  int val = 0;

  strand1.post([&val, &strand2]() {
    val = 1;
    strand2.post([&val]() {
      val = 2;
    });
    boost::asio::spawn(strand2, [&val](boost::asio::yield_context yield) {
      val = 3;
    });
  });

  io.poll_one();
  std::cout << "Last executed: " << val << std::endl;

  return 0;
}

Using boost-asio 1.60.0.6

Solution

The observed behavior is well defined and expected to occur, but one should not expect it to occur often.

Asio has a limited pool of strand implementations, and the default allocation strategy for strands is hashing. If a hash collision occurs, two strands will use the same implementation. When a hash-collision occurs, the example simplifies to the following demo:

#include <cassert>
#include <boost/asio.hpp>

int main()
{
  boost::asio::io_service io_service;
  boost::asio::io_service::strand strand1(io_service);
  // Have strand2 use the same implementation as strand1.
  boost::asio::io_service::strand strand2(strand1);

  int value = 0;
  auto handler1 = [&value, &strand1, &strand2]() {
    assert(strand1.running_in_this_thread());
    assert(strand2.running_in_this_thread());
    value = 1;

    // handler2 is queued into strand and never invoked.
    auto handler2 = [&value]() { assert(false); };
    strand2.post(handler2);

    // handler3 is immediately executed.
    auto handler3 = [&value]() { value = 3; };
    strand2.dispatch(handler3);
    assert(value == 3);
  };

  // Enqueue handler1.
  strand1.post(handler1);

  // Run the event processing loop, executing handler1.
  assert(io_service.poll_one() == 1);
}

In the above example:

  • io_service.poll_one() executes a single ready handler (handler1)
  • handler2 is never invoked
  • handler3 is invoked immediately within strand2.dispatch(), as strand2.dispatch() is invoked from within a handler where strand2.running_in_this_thread() returns true

There are various details contributing to the observed behavior:

  • io_service::poll_one() will run the io_service's event loop and without blocking, it will execute at most one ready to run handler. Handlers executed immediately within the context of a dispatch() are never enqueued into the io_service, and are not subject to poll_one()'s limit of invoking a single handler.

  • The boost::asio::spawn(strand, function) overload starts a stackful coroutine as-if by strand.dispatch():

    • if strand.running_in_this_thread() returns false for the caller, then the coroutine will be posted into the strand for deferred invocation
    • if strand.running_in_this_thread() returns true for the caller, then the coroutine will be executed immediately
  • Discrete strand objects that use the same implementation still maintain the guarantees of a strand. Namely, concurrent execution will not occur and the order of handler invocation is well defined. When discrete strand objects are using discrete implementations, and multiple threads are running the io_service, then one may observe the discrete strands executing concurrently. However, when discrete strand objects use the same implementation, one will not observe concurrency even if multiple threads are running the io_service. This behavior is documented:

    The implementation makes no guarantee that handlers posted or dispatched through different strand objects will be invoked concurrently.

  • Asio has a limited pool of strand implementations. The current default is 193 and can be controlled by defining BOOST_ASIO_STRAND_IMPLEMENTATIONS to the desired number. This feature is noted in the Boost.Asio 1.48 release notes:

    Made the number of strand implementations configurable by defining BOOST_ASIO_STRAND_IMPLEMENTATIONS to the desired number.

    By decreasing the pool size, one increases the chance that two discrete strands will use the same implementation. With the original code, if one were to set the pool size to 1, then strand1 and strand2 would always use the same implementation, resulting in val always being 3 (demo).

  • The default strategy for allocating strand implementations is to use a golden-ratio hash. As a hashing algorithm is used, there is a potential for collisions, resulting in the same implementation being used for multiple discrete strand objects. By defining BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION, one can change the allocation strategy to round-robin, preventing a collision from occurring until BOOST_ASIO_STRAND_IMPLEMENTATIONS + 1 strand allocations have occurred. This feature is noted in the Boost.Asio 1.48 release notes:

    Added support for a new BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION flag which switches the allocation of strand implementations to use a round-robin approach rather than hashing.

Given the above details, the following occurs when 1 is observed in the original code:

  • strand1 and strand2 have discrete implementations
  • io_service::poll_one() executes the single handler that was posted directly into strand1
  • the handler that was posted into strand1 sets val to 1
  • the handler posted into strand2 is enqueued and never invoked
  • the coroutine creation is deferred, as strand's order of invocation guarantee prevents the coroutine from being created until after the previous handler that was posted into strand2 has executed:

    given a strand object s, if s.post(a) happens-before s.dispatch(b), where the latter is performed outside the strand, then asio_handler_invoke(a1, &a1) happens-before asio_handler_invoke(b1, &b1).

On the other hand, when 3 is observed:

  • a hash-collision occurs for strand1 and strand2, resulting in them using the same underlying strand implementation
  • io_service::poll_one() executes the single handler that was posted directly into strand1
  • the handler that was posted into strand1 sets val to 1
  • the handler posted into strand2 is enqueued and never invoked
  • the coroutine is immediately created and invoked within boost::asio::spawn(), setting val to 3, as strand2 can safely execute the coroutine while maintaining the guarantee of non-concurrent execution and order of handler invocation
