Why is the Lift web framework scalable?


Question

I want to know the technical reasons why the Lift web framework has high performance and scalability. I know it uses Scala, which has an actor library, but according to the install instructions its default configuration is with Jetty. So does it use the actor library to scale?

Also, is the scalability built in right out of the box? Just add additional servers and nodes and it will automatically scale; is that how it works? Can it handle 500,000+ concurrent connections with supporting servers?

I am trying to create a web services framework at the enterprise level that can beat what is out there and is easy to scale, configure, and maintain. My definition of scaling is just adding more servers, and you should be able to accommodate the extra load.

Thanks

Answer

Lift's approach to scalability is within a single machine. Scaling across machines is a larger, tougher topic. The short answer there is: Scala and Lift don't do anything to either help or hinder horizontal scaling.

As far as actors within a single machine go, Lift achieves better scalability because a single instance can handle more concurrent requests than most other servers. To explain, I first have to point out the flaws in the classic thread-per-request handling model. Bear with me; this is going to require some explanation.

A typical framework uses a thread to service a page request. When the client connects, the framework assigns a thread out of a pool. That thread then does three things: it reads the request from a socket; it does some computation (potentially involving I/O to the database); and it sends a response out on the socket. At pretty much every step, the thread will end up blocking for some time. When reading the request, it can block while waiting for the network. When doing the computation, it can block on disk or network I/O. It can also block while waiting for the database. Finally, while sending the response, it can block if the client receives data slowly and TCP windows get filled up. Overall, the thread might spend 30-90% of its time blocked. It spends 100% of its time, however, on that one request.
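To make that cost concrete, here is a minimal sketch of the thread-per-request model using plain JDK executors (this is illustrative code, not Lift or any real framework; `handleRequest` and its 50 ms sleep are invented stand-ins for the blocking steps described above). With 4 threads and 8 requests that each merely block for 50 ms, the requests must run in two waves, because a blocked request still pins its thread:

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Sketch of the thread-per-request model: each request occupies one
// pool thread for its entire lifetime, even while it only sits blocked
// in simulated I/O.
object ThreadPerRequest {
  // Read request, compute, send response -- here the whole request is
  // simulated as blocking I/O, during which the thread does nothing useful.
  def handleRequest(): Unit = Thread.sleep(50)

  // Serve `requests` requests on `poolSize` threads; return elapsed ms.
  def serve(requests: Int, poolSize: Int): Long = {
    val pool  = Executors.newFixedThreadPool(poolSize)
    val start = System.nanoTime()
    (1 to requests).foreach { _ =>
      pool.submit(new Runnable { def run(): Unit = handleRequest() })
    }
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
    (System.nanoTime() - start) / 1000000
  }
}
```

Eight 50 ms requests on 4 threads take at least ~100 ms of wall time, while 8 threads finish in roughly half that: the server's concurrency is capped by its thread count, not by how much CPU the requests actually use.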

A JVM can only support so many threads before it really slows down. Thread scheduling, contention for shared-memory entities (like connection pools and monitors), and native OS limits all impose restrictions on how many threads a JVM can create.

Well, if the JVM is limited in its maximum number of threads, and in a thread-per-request server each concurrent request occupies a thread for its full duration, then the thread limit caps the number of concurrent requests the server can handle.

(There are other issues that can impose lower limits, GC thrashing for example. Threads are a fundamental limiting factor, but not the only one!)

Lift decouples threads from requests. In Lift, a request does not tie up a thread. Rather, a thread does an action (like reading the request), then sends a message to an actor. Actors are an important part of the story, because they are scheduled via "lightweight" threads. A pool of threads gets used to process messages within actors. It's important to avoid blocking operations inside of actors, so these threads get returned to the pool rapidly. (Note that this pool isn't visible to the application; it's part of Scala's support for actors.) A request that's currently blocked on database or disk I/O, for example, doesn't keep a request-handling thread occupied. The request-handling thread is available, almost immediately, to receive more connections.
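A minimal sketch of that decoupling, using plain Scala `Future`s as a stand-in for Lift's actors (the names `acceptRequest` and `workerPool` are invented for illustration): the accepting thread does only the cheap parsing step and returns immediately, while the blocking work proceeds on a background pool.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// Sketch of decoupling requests from threads: the acceptor hands the
// slow, blocking part of each request to a worker pool and is free
// again almost at once, instead of staying pinned for the whole request.
object DecoupledHandling {
  val workerPool =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

  def acceptRequest(id: Int): Future[String] = {
    // Cheap work on the accepting thread: just "parse" the request ...
    val parsed = s"request-$id"
    // ... then run the blocking part elsewhere; acceptRequest returns now.
    Future {
      Thread.sleep(50) // simulated database / disk I/O
      s"response to $parsed"
    }(workerPool)
  }
}
```

Calling `acceptRequest` returns in well under the 50 ms the simulated I/O takes, so the accepting thread could immediately go read the next connection. That is the property that lets a single instance keep far more requests in flight than it has threads.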

This method of decoupling requests from threads allows a Lift server to have many more concurrent requests in flight than a thread-per-request server. (I'd also like to point out that the Grizzly library supports a similar approach without actors.) More concurrent requests means that a single Lift server can support more users than a regular Java EE server.
