boost::async_write fails after writing for some time


Problem description

I am having a very peculiar problem. I have written a server that writes data that it receives from a third party to connected clients. The server writes to the client(s) fine for a while, but after a while, async_write either fails or a write never returns. For my program, if an async_write never returns, then no subsequent writes will take place, and my server will queue up the data it receives from the third party until everything blows up.

I have included my code below:

// Called when data arrives from the third party; queues it and, if the
// previous write has completed, starts a new async_write to every client.
void ClientPartitionServer::HandleSignal(const CommonSessionMessage& message, int transferSize) {
  boost::lock_guard<boost::mutex> lock(m_mutex);
  if(m_clientSockets.size() != 0) {
    TransferToQueueBuffer(message.GetData(), transferSize);
  }
  if(m_writeCompleteFlag) {
    // TransferToWriteBuffer();
    for(vector<boost::asio::ip::tcp::socket*>::const_iterator i = m_clientSockets.begin(); i != m_clientSockets.end(); ++i) {
      WriteToClient(*i);
    }
  }
}

// Starts an asynchronous write of the queued data to one client socket.
void ClientPartitionServer::WriteToClient(boost::asio::ip::tcp::socket* clientSocket) {
  m_writeCompleteFlag = false;
  cout << "Initiating write: " << m_identifier << endl;
  boost::asio::async_write(
    *clientSocket,
    boost::asio::buffer(m_queueBuffer.get(), m_queueBufferSize),
    boost::bind(
      &ClientPartitionServer::HandleWrite, this,
      boost::asio::placeholders::error,
      boost::asio::placeholders::bytes_transferred
  ));
}

// Completion handler for async_write; marks the write as finished and clears the queue buffer.
void ClientPartitionServer::HandleWrite(const boost::system::error_code& ec, size_t bytes_transferred) {
  boost::lock_guard<boost::mutex> lock(m_mutex);
  if(ec != 0) {
    cerr << "Error writing to client: " << ec.message() << " " << m_identifier << endl;
    // return;
    cout << "HandleWrite Error" << endl;
    exit(0);
  }
  cout << "Write complete: " << m_identifier << endl;
  m_writeCompleteFlag = true;
  m_queueBuffer.reset();
  m_queueBufferSize = 0;
}

Any help would be appreciated.

Thank you.

Recommended answer

Without seeing all the code it's hard to say, but it's a red flag to me that you hold the mutex across multiple (or even one) WriteToClient calls. Typically holding a lock of any kind across I/O (even async as you have here) is at best bad for performance and at worst a recipe for weird deadlocks under load. What happens if the async write completes inline and you get called back on HandleWrite in the same thread/callstack, for instance?

I would try to refactor this so that the lock is released during the write calls.
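
As a minimal sketch of that refactoring, assuming the same member names as the code in the question (m_mutex, m_clientSockets, m_writeCompleteFlag, m_queueBuffer), the idea is to do the bookkeeping under the lock and then release it before any async_write is initiated, so a write that completes inline and re-enters HandleWrite on the same thread cannot deadlock on m_mutex:

// Sketch only: buffer ownership and error handling are left as in the
// original code; only the locking structure changes.
void ClientPartitionServer::HandleSignal(const CommonSessionMessage& message, int transferSize) {
  vector<boost::asio::ip::tcp::socket*> socketsToWrite;
  {
    boost::lock_guard<boost::mutex> lock(m_mutex);   // held only while touching shared state
    if(m_clientSockets.size() != 0) {
      TransferToQueueBuffer(message.GetData(), transferSize);
    }
    if(m_writeCompleteFlag) {
      m_writeCompleteFlag = false;                   // set under the lock instead of in WriteToClient
      socketsToWrite = m_clientSockets;              // snapshot the sockets to write to
    }
  }
  // The mutex is released at this point, before any I/O is started.
  for(vector<boost::asio::ip::tcp::socket*>::const_iterator i = socketsToWrite.begin();
      i != socketsToWrite.end(); ++i) {
    WriteToClient(*i);
  }
}

The same effect could also be had by posting the writes through the io_service rather than calling them directly; the essential point is simply that m_mutex is not held while async_write is being started or while a completion handler might run.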

Whatever the solution turns out to be, more general advice:


  • Don't hold locks across I/O.

  • Add some diagnostic output: which thread calls each handler, and in what order? (A small example of this follows the list.)

  • Try debugging once you hit the stall. It should be possible to diagnose the deadlock from the process state.
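
For the diagnostic-output suggestion, one possible helper (a sketch, not part of the original code) is to prefix every trace line with the id of the thread that executes it, so the ordering of handler invocations becomes visible in the log:

#include <boost/thread.hpp>
#include <iostream>
#include <string>

// Hypothetical tracing helper: logs which thread reached a given point.
static void TraceHandler(const std::string& where) {
  std::cout << "[thread " << boost::this_thread::get_id() << "] " << where << std::endl;
}

// Example call sites in the question's code:
//   TraceHandler("WriteToClient: initiating write");
//   TraceHandler("HandleWrite: write complete");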
