What is the difference between thread per connection vs thread per request?


Question

Can you please explain the two methodologies which have been implemented in various servlet implementations:


  1. Thread per connection

  2. Thread per request

Which of the above two strategies scales better and why?

Answer


Which of the above two strategies scales better and why?

Thread-per-request scales better than thread-per-connection.

Java threads are rather expensive, typically using about 1 MB of memory each for the stack, whether they are active or idle. If you give each connection its own thread, the thread typically sits idle between successive requests on the connection. Ultimately the framework needs to either stop accepting new connections (because it can't create any more threads) or start disconnecting old connections (which leads to connection churn if/when the user wakes up).
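To make the contrast concrete, here is a minimal thread-per-connection sketch in plain Java (the class name, port, and handler are illustrative assumptions, not any particular container's code): every accepted socket gets its own dedicated thread, which keeps its stack allocated even while the connection is idle.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket connection = server.accept();
                // One dedicated thread per accepted connection; it lives (and
                // mostly idles) for as long as the connection stays open.
                new Thread(() -> handleConnection(connection)).start();
            }
        }
    }

    private static void handleConnection(Socket connection) {
        try (Socket c = connection) {
            // Read each successive request on this connection and write the
            // response; between requests the thread blocks here, holding its
            // stack while doing no useful work.
        } catch (IOException e) {
            // connection dropped or timed out
        }
    }
}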

An HTTP connection requires significantly fewer resources than a thread stack, although there is a limit of roughly 64K open connections per IP address due to the way TCP/IP works.

By contrast, in the thread-per-request model, a thread is only tied up while a request is actually being processed. That usually means the service needs fewer threads to handle the same number of users, and since threads use significant resources, the service will be more scalable.

(Note that thread-per-request does not mean that the framework has to close the TCP connection between HTTP requests ...)
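For comparison, here is a deliberately simplified thread-per-request sketch (again, the class name and pool size are assumptions): work is handed to a bounded pool per request rather than per connection. A real servlet container additionally uses non-blocking I/O so that an idle keep-alive connection occupies no thread at all between requests; that selector machinery is omitted here.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestServer {
    // A bounded worker pool: a thread is borrowed for the duration of one
    // request and returned as soon as the response has been written.
    private static final ExecutorService pool = Executors.newFixedThreadPool(200);

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket connection = server.accept();
                // In a real container a non-blocking selector watches idle
                // keep-alive connections and only dispatches to the pool when
                // a request actually arrives; here we simply hand the work off.
                pool.submit(() -> handleOneRequest(connection));
            }
        }
    }

    private static void handleOneRequest(Socket connection) {
        // Parse one HTTP request, write the response, then return to the pool.
        // The connection itself can stay open (keep-alive) without pinning a thread.
    }
}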

Having said that, the thread-per-request model is not ideal when there are long pauses during the processing of each request. (It is especially non-ideal when the service uses the comet approach, which involves keeping the reply stream open for a long time.) To support this, the Servlet 3.0 spec provides an "asynchronous servlet" mechanism which allows a servlet's request method to suspend its association with the current request thread. This releases the thread to go and process another request.

If the web application can be designed to use the "asynchronous" mechanism, it is likely to be more scalable than either thread-per-request or thread-per-connection.
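As a rough illustration of the Servlet 3.0 mechanism mentioned above, here is a minimal asynchronous servlet sketch (the URL pattern, timeout, and simulated delay are arbitrary assumptions): the container's request thread is released as soon as doGet() returns, and the response is completed later from another thread.

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class SlowResourceServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Detach this exchange from the container's request thread; doGet()
        // returns immediately and the thread goes back to serving other requests.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);

        // ctx.start() runs the work on a container-managed thread; it could
        // equally be submitted to an application-owned executor.
        ctx.start(() -> {
            try {
                Thread.sleep(5_000); // stand-in for a long pause or slow back end
                ctx.getResponse().getWriter().println("done");
            } catch (Exception e) {
                // ignored in this sketch
            } finally {
                ctx.complete(); // tell the container the response is finished
            }
        });
    }
}

The same idea underlies comet-style long waits: the exchange is parked cheaply until there is something to send, instead of pinning a worker thread for the whole pause.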

Follow-up


Let's assume a single web page with 1000 images. This results in 1001 HTTP requests. Further, let's assume HTTP persistent connections are used. With the TPR strategy, this will result in 1001 thread pool management operations (TPMO). With the TPC strategy, this will result in 1 TPMO... Now, depending on the actual cost of a single TPMO, I can imagine scenarios where TPC may scale better than TPR.

I think there are some things you haven't considered:


  • A web browser faced with lots of URLs to fetch to complete a page may well open multiple connections.

  • With TPC and persistent connections, the thread has to wait for the client to receive the response and send the next request. This wait time could be significant if the network latency is high.

  • The server has no way of knowing when a given (persistent) connection can be closed. If the browser doesn't close it, it could "linger", tying down the TPC thread until the server times out the connection.

  • The TPMO overheads are not huge, especially when you separate the pool overheads from the context-switch overheads. (You need to do that, since TPC is going to incur context switches on persistent connections; see above.)

My feeling is that these factors are likely to outweigh the TPMO savings of having one thread dedicated to each connection.

