Spring WebFlux differences when Netty vs Tomcat is used under the hood


Problem description

I am learning Spring WebFlux and I've read the following series of articles (first, second, third).

In the third article I came across the following text:

Remember the same application code runs on Tomcat, Jetty or Netty. Currently, the Tomcat and Jetty support is provided on top of Servlet 3.1 asynchronous processing, so it is limited to one request per thread. When the same code runs on the Netty server platform that constraint is lifted, and the server can dispatch requests sympathetically to the web client. As long as the client doesn’t block, everyone is happy. Performance metrics for the netty server and client probably show similar characteristics, but the Netty server is not restricted to processing a single request per thread, so it doesn’t use a large thread pool and we might expect to see some differences in resource utilization. We will come back to that later in another article in this series.

First of all, I don't see a newer article in the series, although it was written in 2016. It is clear to me that Tomcat has 100 threads by default for handling requests, and one thread handles one request at a time, but I don't understand the phrase "it is limited to one request per thread". What does it mean?

Also, I would like to know how Netty works in that concrete case (I want to understand the difference from Tomcat). Can it handle 2 requests per thread?

Solution

When using Servlet 2.5, Servlet containers will assign a request to a thread until that request has been fully processed.
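To make that thread-per-request model concrete, here is a minimal sketch (not from the original answer; the servlet class and the callSlowBackend() helper are made up for illustration). The container thread that enters doGet() is held until the response has been fully written:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Classic Servlet 2.5 style: the container thread stays bound to this request
// for its whole lifetime, including any blocking I/O or slow backend calls.
public class BlockingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String result = callSlowBackend();   // blocks the container thread
        resp.getWriter().write(result);      // blocking write to the client
    }

    // Hypothetical slow dependency, simulated with a sleep.
    private String callSlowBackend() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}
```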

When using Servlet 3.0 async processing, the server can dispatch the request processing in a separate thread pool while the request is being processed by the application. However, when it comes to I/O, work always happens on a server thread and it is always blocking. This means that a "slow client" can monopolize a server thread, since the server is blocked while reading/writing to that client with a poor network connection.
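A rough sketch of that Servlet 3.0 style, assuming a hypothetical application-owned thread pool (the class, pool size and helper are illustrative). The container thread is released as soon as doGet() returns, but the eventual write to the client is still blocking:

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Servlet 3.0 async processing: the request is detached from the container
// thread and completed later from the application's own pool.
@WebServlet(urlPatterns = "/async", asyncSupported = true)
public class AsyncServlet extends HttpServlet {

    private final ExecutorService appPool = Executors.newFixedThreadPool(10);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();          // release the container thread
        appPool.submit(() -> {
            try {
                String result = callSlowBackend();    // runs on the application pool
                ctx.getResponse().getWriter().write(result); // still a blocking write
            } catch (IOException e) {
                // ignored in this sketch
            } finally {
                ctx.complete();                       // tell the container we are done
            }
        });
    }

    private String callSlowBackend() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}
```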

With Servlet 3.1, async I/O is allowed, and in that case the "one request/thread" model no longer applies. At any point, bits of request processing can be scheduled on a different thread managed by the server.
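In the Servlet 3.1 non-blocking case the container drives I/O through listener callbacks instead of blocking reads. A minimal sketch (again illustrative, not taken from the answer) of a non-blocking read:

```java
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Servlet 3.1 non-blocking I/O: the container invokes onDataAvailable() only
// when bytes can be read without blocking, so a slow client never pins a thread.
@WebServlet(urlPatterns = "/nio", asyncSupported = true)
public class NonBlockingReadServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        AsyncContext ctx = req.startAsync();
        ServletInputStream in = req.getInputStream();

        in.setReadListener(new ReadListener() {
            @Override
            public void onDataAvailable() throws IOException {
                byte[] buffer = new byte[1024];
                // isReady() must be checked before each read to stay non-blocking;
                // successive callbacks may run on different server-managed threads.
                while (in.isReady() && in.read(buffer) != -1) {
                    // process the chunk
                }
            }

            @Override
            public void onAllDataRead() throws IOException {
                ctx.getResponse().getWriter().write("done");
                ctx.complete();
            }

            @Override
            public void onError(Throwable t) {
                ctx.complete();
            }
        });
    }
}
```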

Servlet 3.1+ containers offer all those possibilities with the Servlet API. It's up to the application to leverage async processing, or non-blocking I/O. In the case of non-blocking I/O, the paradigm change is important and it's really challenging to use.

With Spring WebFlux - Tomcat, Jetty and Netty don't have the exact same runtime model, but they all support reactive backpressure and non-blocking I/O.
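To tie this back to WebFlux: a minimal router-function sketch (the route and class names are made up) that runs unchanged on Reactor Netty, or on Tomcat/Jetty via their Servlet 3.1 non-blocking support. The handler returns a Mono immediately and never blocks the calling thread; the chosen server decides how the I/O is scheduled.

```java
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;
import static org.springframework.web.reactive.function.server.ServerResponse.ok;

// The same reactive handler code is portable across servers; only the
// underlying runtime (Netty event loop vs Servlet container) differs.
public class HelloRoutes {

    public static RouterFunction<ServerResponse> routes() {
        return route(GET("/hello"),
                request -> ok().body(Mono.just("Hello, reactive world"), String.class));
    }
}
```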
