How to limit the number of running instances in C++


Question


I have a C++ class that allocates a lot of memory. It does this by calling a third-party library that is designed to crash if it cannot allocate the memory, and sometimes my application creates several instances of my class in parallel threads. With too many threads I have a crash.

My best idea for a solution is to make sure that there are never, say, more than three instances running at the same time. (Is this a good idea?) And my current best idea for implementing that is to use a boost mutex, something along the lines of the following pseudo-code:

MyClass::MyClass(){
  my_thread_number = -1; //this is a class variable
  while (my_thread_number == -1)
    for (int i=0; i < MAX_PROCESSES; i++)
      if(try_lock a mutex named i){
        my_thread_number = i;
        break;
      }
  //Now I know that my thread has mutex number i and it is allowed to run
}

MyClass::~MyClass(){
    release mutex named my_thread_number
}


As you see, I am not quite sure of the exact syntax for mutexes here. So, summing up, my questions are:


  1. Am I on the right track in trying to fix my memory error by limiting the number of threads?

  2. If so, should I do it with mutexes or in some other way?

  3. If so, is my algorithm sound?

  4. Is there a good example somewhere of how to use try_lock with a boost mutex?



Edit: I realized I am talking about threads, not processes.

Edit: I am involved in building an application that can run on both Linux and Windows...

Answer


Here's a simplistic way to implement your own 'semaphore' (since I don't think the standard library or boost has one). This chooses a 'cooperative' approach: workers will wait for each other:

#include <boost/thread.hpp>
#include <boost/phoenix.hpp>
#include <iostream>

using namespace boost;
using namespace boost::phoenix::arg_names;

void the_work(int id)
{
    static int running = 0;
    std::cout << "worker " << id << " entered (" << running << " running)\n";

    static mutex mx;
    static condition_variable cv;

    // synchronize here, waiting until we can begin work
    {
        unique_lock<mutex> lk(mx);
        cv.wait(lk, phoenix::cref(running) < 3);
        running += 1;
    }

    std::cout << "worker " << id << " start work\n";
    this_thread::sleep_for(chrono::seconds(2));
    std::cout << "worker " << id << " done\n";

    // signal one other worker, if waiting
    {
        lock_guard<mutex> lk(mx);
        running -= 1;
        cv.notify_one(); 
    }
}

int main()
{
    thread_group pool;

    for (int i = 0; i < 10; ++i)
        pool.create_thread(bind(the_work, i));

    pool.join_all();
}


Now, I'd say it's probably better to have a dedicated pool of n workers taking their work from a queue in turns:

#include <boost/thread.hpp>
#include <boost/phoenix.hpp>
#include <boost/optional.hpp>
#include <boost/atomic.hpp>
#include <boost/function.hpp>
#include <deque>
#include <iostream>

using namespace boost;
using namespace boost::phoenix::arg_names;

class thread_pool
{
  private:
      mutex mx;
      condition_variable cv;

      typedef function<void()> job_t;
      std::deque<job_t> _queue;

      thread_group pool;

      boost::atomic_bool shutdown;
      static void worker_thread(thread_pool& q)
      {
          while (auto job = q.dequeue())
              (*job)();
      }

  public:
      thread_pool() : shutdown(false) {
          for (unsigned i = 0; i < boost::thread::hardware_concurrency(); ++i)
              pool.create_thread(bind(worker_thread, ref(*this)));
      }

      void enqueue(job_t job) 
      {
          lock_guard<mutex> lk(mx);
          _queue.push_back(std::move(job));

          cv.notify_one();
      }

      optional<job_t> dequeue() 
      {
          unique_lock<mutex> lk(mx);
          namespace phx = boost::phoenix;

          cv.wait(lk, phx::ref(shutdown) || !phx::empty(phx::ref(_queue)));

          if (_queue.empty())
              return none;

          auto job = std::move(_queue.front());
          _queue.pop_front();

          return std::move(job);
      }

      ~thread_pool()
      {
          shutdown = true;
          {
              lock_guard<mutex> lk(mx);
              cv.notify_all();
          }

          pool.join_all();
      }
};

void the_work(int id)
{
    std::cout << "worker " << id << " entered\n";

    // no more synchronization; the pool size determines max concurrency
    std::cout << "worker " << id << " start work\n";
    this_thread::sleep_for(chrono::seconds(2));
    std::cout << "worker " << id << " done\n";
}

int main()
{
    thread_pool pool; // uses 1 thread per core

    for (int i = 0; i < 10; ++i)
        pool.enqueue(bind(the_work, i));
}


PS. You can use C++11 lambdas instead of boost::phoenix there if you prefer.
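For reference, here is the waiting step from the first example rewritten with `std::condition_variable` and a C++11 lambda predicate in place of the phoenix expression. This is a sketch: the helper `run_workers` and the sleep duration are illustrative additions.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>
#include <chrono>

std::mutex mx;
std::condition_variable cv;
int running = 0;   // guarded by mx

void the_work()
{
    {
        std::unique_lock<std::mutex> lk(mx);
        // lambda predicate replaces phoenix::cref(running) < 3
        cv.wait(lk, [] { return running < 3; });
        ++running;
    }

    std::this_thread::sleep_for(std::chrono::milliseconds(20)); // the actual work

    {
        std::lock_guard<std::mutex> lk(mx);
        --running;
        cv.notify_one();  // wake one waiting worker, if any
    }
}

int run_workers()   // returns the value of running after all threads finish
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 10; ++i)
        pool.emplace_back(the_work);
    for (auto& t : pool)
        t.join();
    return running;
}
```

The lambda version needs no extra library: the predicate simply reads the shared counter, which is safe because `wait` only evaluates it while holding the lock.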
