boost::asio and Active Object


Problem Description

I have implemented a module-based Active Object design pattern. It is a very simple implementation. I have a Scheduler, an ActivationList, Requests, and Futures to get responses. My requirements were as follows:

  • Access to active object shall be serialized by executing its methods within its own thread (main req and assumption of Active Object design pattern)
  • Caller shall be able to specify the priority of request execution. It means that if there is more than zero requests waiting for execution, they shall be ordered by the priority assigned to each request. Requests with higher priority shall be executed first, so if some requests are always pending on the ActivationList with higher priority than a given request, that request will never be executed - it's OK for me
  • It shall be possible to specify the maximum number of requests pending on the list (limit the memory usage)
  • It shall be possible to invalidate all pending requests
  • Requests shall be able to return values (blocking the caller) OR just shall be executed without value return, but the caller shall be blocked until the request is processed OR the caller shall not be blocked and it is not important for it whether the given request has been processed or not
  • Just before request execution, some guard method shall be executed to check if given request shall be executed or not. If not - it shall return some undefined value to caller (in my current implementation it is boost::none, because each request return type is boost::optional)

OK, now the question: Is it possible to use boost::asio and fulfill all my requirements? My implementation works, but I would like to use something that is probably implemented in a much better way than what I have done. Also I would like to know it for the future and not "reinvent the wheel" once again.

Solution

Boost.Asio can be used to encompass the intention of Active Object: decouple method execution from method invocation. The additional requirements will need to be handled at a higher level, but that is not overly complex when using Boost.Asio in conjunction with other Boost libraries.

Scheduler could use:

  • A boost::asio::io_service functioning as a thread pool, with a boost::thread_group of threads invoking io_service::run().
  • A boost::asio::io_service::work object to keep the pool's threads running while no requests are outstanding.

ActivationList could be implemented as:

  • A Boost.MultiIndex for obtaining the highest-priority method request. With a hinted-position insert(), insertion order is preserved for requests with the same priority.
  • std::multiset or std::multimap can be used. However, C++03 leaves the order of requests with the same key (priority) unspecified.
  • If Requests do not need a guard method, then std::priority_queue could be used.

Request could be an unspecified type:

  • boost::function and boost::bind could be used to provide type erasure, binding to callable types without introducing a Request hierarchy.

Futures could use Boost.Thread's Futures support.

  • future.valid() will return true if the Request has been added to the ActivationList.
  • future.wait() will block, waiting for a result to become available.
  • future.get() will block, waiting for the result.
  • If the caller does nothing with the future, then the caller will not be blocked.
  • Another benefit of using Boost.Thread's Futures is that exceptions originating from within a Request will be passed to the Future.

Here is a complete example leveraging various Boost libraries that should meet the requirements:

// Standard includes
#include <algorithm> // std::find_if
#include <iostream>
#include <string>

// 3rd party includes
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/make_shared.hpp>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <boost/utility/result_of.hpp>

/// @brief scheduler that provides limits with prioritized jobs.
template <typename Priority,
          typename Compare = std::less<Priority> >
class scheduler
{
public:
  typedef Priority priority_type;
private:

  /// @brief method_request is used to couple the guard and call
  ///        functions for a given method.
  struct method_request
  {
    typedef boost::function<bool()> ready_func_type;
    typedef boost::function<void()> run_func_type;

    template <typename ReadyFunctor,
              typename RunFunctor>
    method_request(ReadyFunctor ready,
                   RunFunctor run)
      : ready(ready),
        run(run)
    {}

    ready_func_type ready;
    run_func_type run;
  };

  /// @brief Pair type used to associate a request with its priority.
  typedef std::pair<priority_type,
                    boost::shared_ptr<method_request> > pair_type;

  static bool is_method_ready(const pair_type& pair)
  {
    return pair.second->ready();
  }

public:

  /// @brief Construct scheduler.
  ///
  /// @param max_threads Maximum number of concurrent tasks.
  /// @param max_request Maximum number of requests.
  scheduler(std::size_t max_threads,
            std::size_t max_request)
    : work_(io_service_),
      max_request_(max_request),
      request_count_(0)
  {
    // Spawn threads, dedicating them to the io_service.
    for (std::size_t i = 0; i < max_threads; ++i)
      threads_.create_thread(
        boost::bind(&boost::asio::io_service::run, &io_service_));
  }

  /// @brief Destructor.
  ~scheduler()
  {
    // Release threads from the io_service.
    io_service_.stop();
    // Cleanup.
    threads_.join_all();
  }

  /// @brief Insert a method request into the scheduler.
  ///
  /// @param priority Priority of job.
  /// @param ready_func Invoked to check if method is ready to run.
  /// @param run_func Invoked when ready to run.
  ///
  /// @return future associated with the method.
  template <typename ReadyFunctor,
            typename RunFunctor>
  boost::unique_future<typename boost::result_of<RunFunctor()>::type>
  insert(priority_type priority, 
         const ReadyFunctor& ready_func,
         const RunFunctor& run_func)
  {
    typedef typename boost::result_of<RunFunctor()>::type result_type;
    typedef boost::unique_future<result_type> future_type;

    boost::unique_lock<mutex_type> lock(mutex_);

    // If max request has been reached, then return an invalid future.
    if (max_request_ &&
        (request_count_ == max_request_))
      return future_type();

    ++request_count_;

    // Use a packaged task to handle populating promise and future.
    typedef boost::packaged_task<result_type> task_type;

    // Bind does not work with rvalue, and packaged_task is only moveable,
    // so allocate a shared pointer.
    boost::shared_ptr<task_type> task = 
      boost::make_shared<task_type>(run_func);

    // Create method request.
    boost::shared_ptr<method_request> request =
      boost::make_shared<method_request>(
        ready_func,
        boost::bind(&task_type::operator(), task));

    // Insert into priority.  Hint to inserting as close to the end as
    // possible to preserve insertion order for request with same priority.
    activation_list_.insert(activation_list_.end(),
                            pair_type(priority, request));

    // There is now an outstanding request, so post to dispatch.
    io_service_.post(boost::bind(&scheduler::dispatch, this));

    return task->get_future();
  }

  /// @brief Insert a method request into the scheduler.
  ///
  /// @param ready_func Invoked to check if method is ready to run.
  /// @param run_func Invoked when ready to run.
  ///
  /// @return future associated with the method.
  template <typename ReadyFunctor,
            typename RunFunctor>
  boost::unique_future<typename boost::result_of<RunFunctor()>::type>
  insert(const ReadyFunctor& ready_func,
         const RunFunctor& run_func)
  {
    return insert(priority_type(), ready_func, run_func);
  }

  /// @brief Insert a method request into the scheduler.
  ///
  /// @param priority Priority of job.
  /// @param run_func Invoked when ready to run.
  ///
  /// @return future associated with the method.
  template <typename RunFunctor>
  boost::unique_future<typename boost::result_of<RunFunctor()>::type>
  insert(priority_type priority, 
         const RunFunctor& run_func)
  {
    return insert(priority, &always_ready, run_func);
  }

  /// @brief Insert a method request with default priority into the
  ///        scheduler.
  ///
  /// @param run_func Invoked when ready to run.
  ///
  /// @param functor Job to run.
  ///
  /// @return future associated with the job.
  template <typename RunFunc>
  boost::unique_future<typename boost::result_of<RunFunc()>::type>
  insert(const RunFunc& run_func)
  {
    return insert(&always_ready, run_func);
  }

  /// @brief Cancel all outstanding requests.
  void cancel()
  {
    boost::unique_lock<mutex_type> lock(mutex_);
    activation_list_.clear();
    request_count_ = 0;
  } 

private:

  /// @brief Dispatch a request.
  void dispatch()
  {
    // Get the current highest priority request ready to run from the queue.
    boost::unique_lock<mutex_type> lock(mutex_);
    if (activation_list_.empty()) return;

    // Find the highest priority method ready to run.
    typedef typename activation_list_type::iterator iterator;
    iterator end = activation_list_.end();
    iterator result = std::find_if(
      activation_list_.begin(), end, &is_method_ready);

    // If no methods are ready, then post into dispatch, as the
    // method may have become ready.
    if (end == result)
    {
      io_service_.post(boost::bind(&scheduler::dispatch, this));
      return;
    }

    // Take ownership of request.
    boost::shared_ptr<method_request> method = result->second;
    activation_list_.erase(result);

    // Run method without mutex.
    lock.unlock();
    method->run();    
    lock.lock();

    // Perform bookkeeping.
    --request_count_;
  }

  static bool always_ready() { return true; }

private:

  /// @brief List of outstanding requests.
  typedef boost::multi_index_container<
    pair_type,
    boost::multi_index::indexed_by<
      boost::multi_index::ordered_non_unique<
        boost::multi_index::member<pair_type,
                                   typename pair_type::first_type,
                                   &pair_type::first>,
        Compare
      >
    >
  > activation_list_type;
  activation_list_type activation_list_;

  /// @brief Thread group managing threads servicing pool.
  boost::thread_group threads_;

  /// @brief io_service used to function as a thread pool.
  boost::asio::io_service io_service_;

  /// @brief Work is used to keep threads servicing io_service.
  boost::asio::io_service::work work_;

  /// @brief Maximum number of requests.
  const std::size_t max_request_;

  /// @brief Count of outstanding requests.
  std::size_t request_count_;

  /// @brief Synchronize access to the activation list.
  typedef boost::mutex mutex_type;
  mutex_type mutex_;
};

typedef scheduler<unsigned int, 
                  std::greater<unsigned int> > high_priority_scheduler;

/// @brief adder is a simple proxy that will delegate work to
///        the scheduler.
class adder
{
public:
  adder(high_priority_scheduler& scheduler)
    : scheduler_(scheduler)
  {}

  /// @brief Add a and b with a priority.
  ///
  /// @return Return future result.
  template <typename T>
  boost::unique_future<T> add(
    high_priority_scheduler::priority_type priority,
    const T& a, const T& b)
  {
    // Insert method request
    return scheduler_.insert(
      priority,
      boost::bind(&adder::do_add<T>, a, b));
  }

  /// @brief Add a and b.
  ///
  /// @return Return future result.
  template <typename T>
  boost::unique_future<T> add(const T& a, const T& b)
  {
    return add(high_priority_scheduler::priority_type(), a, b);
  }

private:

  /// @brief Actual add a and b.
  template <typename T>
  static T do_add(const T& a, const T& b)
  {
    std::cout << "Starting addition of '" << a 
              << "' and '" << b << "'" << std::endl;
    // Mimic busy work.
    boost::this_thread::sleep_for(boost::chrono::seconds(2));
    std::cout << "Finished addition" << std::endl;
    return a + b;
  }

private:
  high_priority_scheduler& scheduler_;
};

bool get(bool& value) { return value; }
void guarded_call()
{
  std::cout << "guarded_call" << std::endl; 
}

int main()
{
  const unsigned int max_threads = 1;
  const unsigned int max_request = 4;

  // Scheduler
  high_priority_scheduler scheduler(max_threads, max_request);

  // Proxy
  adder adder(scheduler);

  // Client

  // Add guarded method to scheduler.
  bool ready = false;
  std::cout << "Add guarded method." << std::endl;
  boost::unique_future<void> future1 = scheduler.insert(
    boost::bind(&get, boost::ref(ready)),
    &guarded_call);

  // Add 1 + 100 with default priority.
  boost::unique_future<int> future2 = adder.add(1, 100);

  // Force sleep to try to get scheduler to run request 2 first.
  boost::this_thread::sleep_for(boost::chrono::seconds(1));

  // Add:
  //   2 + 200 with low priority (5)
  //   "test" + "this" with high priority (99)
  boost::unique_future<int> future3 = adder.add(5, 2, 200);
  boost::unique_future<std::string> future4 = adder.add(99,
    std::string("test"), std::string("this"));

  // Max requests has been reached, so this insert should fail.
  boost::unique_future<int> future5 = adder.add(3, 300);

  // Check if request was added.
  std::cout << "future1 is valid: " << future1.valid()
          << "\nfuture2 is valid: " << future2.valid()
          << "\nfuture3 is valid: " << future3.valid()
          << "\nfuture4 is valid: " << future4.valid()
          << "\nfuture5 is valid: " << future5.valid()
          << std::endl;

  // Get results for future2 and future3.  Do nothing with future4's results.
  std::cout << "future2 result: " << future2.get()
          << "\nfuture3 result: " << future3.get()
          << std::endl;

  std::cout << "Unguarding method." << std::endl;
  ready = true;
  future1.wait();
}

The execution uses a thread pool of 1 with a max of 4 requests.

  • request1 is guarded until the end of program, and should be last to run.
  • request2 (1 + 100) is inserted with default priority, and should be first to run.
  • request3 (2 + 200) is inserted low priority, and should run after request4.
  • request4 ('test' + 'this') is inserted with high priority, and should run before request3.
  • request5 should fail to insert due to max request, and should not be valid.

The output is as follows:

Add guarded method.
Starting addition of '1' and '100'
future1 is valid: 1
future2 is valid: 1
future3 is valid: 1
future4 is valid: 1
future5 is valid: 0
Finished addition
Starting addition of 'test' and 'this'
Finished addition
Starting addition of '2' and '200'
Finished addition
future2 result: 101
future3 result: 202
Unguarding method.
guarded_call
