TNonblockingServer, TThreadedServer and TThreadPoolServer, which one fits best for my case?

Problem description

Our analytic server is written in C++. It basically queries the underlying storage engine and returns fairly big structured data via Thrift. A typical request takes about 0.05 to 0.6 seconds to finish, depending on the request size.

I noticed that there are a few options in terms of which Thrift server we can use in the C++ code, specifically TNonblockingServer, TThreadedServer, and TThreadPoolServer. It seems like TNonblockingServer is the way to go, since it can support many more concurrent requests while still using a thread pool behind the scenes to crunch through the tasks. It also avoids the cost of constructing/destructing the threads.
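
For context, a minimal sketch of such a TNonblockingServer setup in C++ is shown below. It assumes a Thrift 0.9.x-style API (constructor signatures have changed between releases), and the AnalyticsServiceHandler / AnalyticsServiceProcessor names, port 9090 and pool size of 16 are illustrative placeholders rather than anything from the original question:

    // Sketch only: event-driven I/O front end plus a worker pool that runs
    // the handlers (assumes a Thrift 0.9.x-style C++ API).
    #include <boost/shared_ptr.hpp>
    #include <thrift/concurrency/ThreadManager.h>
    #include <thrift/concurrency/PosixThreadFactory.h>
    #include <thrift/protocol/TBinaryProtocol.h>
    #include <thrift/server/TNonblockingServer.h>
    // #include "AnalyticsService.h"  // hypothetical header generated from your .thrift file

    using namespace apache::thrift;
    using namespace apache::thrift::concurrency;
    using namespace apache::thrift::protocol;
    using namespace apache::thrift::server;

    int main() {
      // Placeholder names for the classes generated from your own .thrift file.
      boost::shared_ptr<AnalyticsServiceHandler> handler(new AnalyticsServiceHandler());
      boost::shared_ptr<TProcessor> processor(new AnalyticsServiceProcessor(handler));
      boost::shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());

      // Worker pool that crunches through the queued requests.
      boost::shared_ptr<ThreadManager> threadManager =
          ThreadManager::newSimpleThreadManager(16);
      threadManager->threadFactory(
          boost::shared_ptr<PosixThreadFactory>(new PosixThreadFactory()));
      threadManager->start();

      // One event loop accepts and reads requests; handlers run on the pool.
      TNonblockingServer server(processor, protocolFactory, 9090, threadManager);
      server.serve();
      return 0;
    }

The event loop keeps a large number of idle connections cheap, while the pool size bounds how many handlers actually run at once. Note that TNonblockingServer is built on libevent and ships in the separate libthriftnb library, so it carries an extra build dependency compared with the blocking servers.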

Facebook's update on Thrift: http://www.facebook.com/note.php?note_id=16787213919


Here at Facebook, we're working on a fully asynchronous client and server for C++. This server uses event-driven I/O like the current TNonblockingServer, but its interface to the application code is all based on asynchronous callbacks. This will allow us to write servers that can service thousands of simultaneous requests (each of which requires making calls to other Thrift or Memcache servers) with only a few threads.

Related post on Stack Overflow: http://stackoverflow.com/questions/954945/large-number-of-simulteneous-connections-in-thrift


That being said, you won't necessarily be able to actually do work faster (handlers still execute in a thread pool), but more clients will be able to connect to you at once.

Just wondering, are there any other factors I'm missing here? How should I decide which one fits my needs best?

Recommended answer

Requests that take 50-600 milliseconds to complete are pretty long. The time it takes to create or destroy a thread is much less than that, so don't let that factor into your decision at this time. I would choose the one that is easiest to support and that is the least error-prone. You want to minimize the likelihood of subtle concurrency bugs.

This is why it is often easier to write single-threaded transaction handling code that blocks where it needs to, and have many of these running in parallel, than to have a more complex non-blocking model. A blocked thread may slow down an individual transaction, but it does not prevent the server from doing other work while it waits.
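
As an illustration of that simpler blocking model, a rough TThreadedServer equivalent is sketched below (same caveats: a Thrift 0.9.x-style API is assumed and the AnalyticsService names are hypothetical). Each accepted connection gets its own thread, and the handler code can simply block wherever it needs to:

    // Sketch only: one thread per client connection, blocking handlers.
    #include <boost/shared_ptr.hpp>
    #include <thrift/protocol/TBinaryProtocol.h>
    #include <thrift/server/TThreadedServer.h>
    #include <thrift/transport/TServerSocket.h>
    #include <thrift/transport/TBufferTransports.h>
    // #include "AnalyticsService.h"  // hypothetical header generated from your .thrift file

    using namespace apache::thrift;
    using namespace apache::thrift::protocol;
    using namespace apache::thrift::server;
    using namespace apache::thrift::transport;

    int main() {
      boost::shared_ptr<AnalyticsServiceHandler> handler(new AnalyticsServiceHandler());
      boost::shared_ptr<TProcessor> processor(new AnalyticsServiceProcessor(handler));

      TThreadedServer server(
          processor,
          boost::shared_ptr<TServerTransport>(new TServerSocket(9090)),
          boost::shared_ptr<TTransportFactory>(new TBufferedTransportFactory()),
          boost::shared_ptr<TProtocolFactory>(new TBinaryProtocolFactory()));
      server.serve();  // blocks; each connection is served on its own thread
      return 0;
    }

The cost is one OS thread per open connection, which is usually acceptable at moderate client counts and keeps the handler logic easy to reason about.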

If your transaction load increases (i.e. more client transactions) or the requests become faster to process (approaching 1 millisecond per transaction), then transaction overhead becomes more of a factor. The metric to pay attention to is throughput: how many transactions complete per unit time. The absolute duration of a single transaction is less important than the rate at which they are being completed, at least if it stays well below one second.
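
To make that concrete with purely illustrative numbers: 16 worker threads handling requests that average 0.3 seconds can complete at most about 16 / 0.3 ≈ 53 requests per second, whichever server class dispatches them; the per-request dispatch overhead only starts to affect throughput once it is no longer tiny compared with that 0.3 seconds of real work.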
