Why is the Lift web framework scalable?


Question


I want to know the technical reasons why the Lift web framework has high performance and scalability. I know it uses Scala, which has an actor library, but according to the install instructions its default configuration is with Jetty. So does it use the actor library to scale?


Also, is the scalability built in right out of the box? Just add additional servers and nodes and it will automatically scale; is that how it works? Can it handle 500,000+ concurrent connections with supporting servers?


I am trying to create a web services framework for the enterprise level that can beat what is out there and is easy to scale, configure, and maintain. My definition of scaling is just adding more servers: the system should then be able to accommodate the extra load.

Thanks

Answer


Lift's approach to scalability is within a single machine. Scaling across machines is a larger, tougher topic. The short answer there is: Scala and Lift don't do anything to either help or hinder horizontal scaling.


As far as actors within a single machine, Lift achieves better scalability because a single instance can handle more concurrent requests than most other servers. To explain, I first have to point out the flaws in the classic thread-per-request handling model. Bear with me, this is going to require some explanation.


A typical framework uses a thread to service a page request. When the client connects, the framework assigns a thread out of a pool. That thread then does three things: it reads the request from a socket; it does some computation (potentially involving I/O to the database); and it sends a response out on the socket. At pretty much every step, the thread will end up blocking for some time. When reading the request, it can block while waiting for the network. When doing the computation, it can block on disk or network I/O. It can also block while waiting for the database. Finally, while sending the response, it can block if the client receives data slowly and TCP windows get filled up. Overall, the thread might spend 30-90% of its time blocked. It spends 100% of its time, however, on that one request.
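The three steps above can be sketched as a minimal thread-per-request handler. This is an illustrative toy, not Lift or Jetty code; the class name, pool size, and handler signature are all assumptions made for the example:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ThreadPerRequestSketch {
    // A fixed pool: each request borrows one thread and keeps it
    // for the whole request lifetime, including every blocking step.
    static final ExecutorService pool = Executors.newFixedThreadPool(200);

    // The three steps from the text; each may block the borrowed thread.
    static Future<?> onClientConnected(Runnable readRequest,
                                       Runnable compute,
                                       Runnable writeResponse) {
        return pool.submit(() -> {
            readRequest.run();   // may block waiting on the network
            compute.run();       // may block on DB or disk I/O
            writeResponse.run(); // may block on a slow client / full TCP window
        });                      // only now does the thread return to the pool
    }

    public static void main(String[] args) throws Exception {
        onClientConnected(
            () -> System.out.println("read request"),
            () -> System.out.println("compute response"),
            () -> System.out.println("write response")).get();
        pool.shutdown();
    }
}
```

The point of the sketch is that the thread submitted to the pool is unavailable for any other request until all three steps finish, blocked time included.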


A JVM can only support so many threads before it really slows down. Thread scheduling, contention for shared-memory entities (like connection pools and monitors), and native OS limits all impose restrictions on how many threads a JVM can create.
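One concrete limit is stack memory: each Java thread reserves a fixed stack, commonly on the order of 1 MB by default (the real value depends on the JVM and the `-Xss` flag). A back-of-the-envelope sketch, with that figure assumed:

```java
class ThreadMemorySketch {
    // Assumed default stack size; the actual value varies by JVM and -Xss.
    static final long STACK_BYTES_PER_THREAD = 1024 * 1024; // 1 MB

    // Total stack address space reserved by N threads.
    static long stackMemoryBytes(int threads) {
        return threads * STACK_BYTES_PER_THREAD;
    }

    public static void main(String[] args) {
        // 10,000 thread-per-request threads would reserve roughly
        // 10 GB of stack address space before doing any useful work.
        System.out.println(stackMemoryBytes(10_000) / (1024 * 1024) + " MB");
    }
}
```

Memory is only one of the limits the answer lists alongside scheduling overhead and lock contention, but it shows why "just add more threads" stops working well before 500,000 concurrent requests on one box.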


Well, if the JVM is limited in its maximum number of threads, and the number of threads determines how many concurrent requests a server can handle, then the server is limited in how many concurrent requests it can handle.
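The consequence can be made concrete with Little's law (throughput = concurrency / latency). The figures below are assumptions chosen to match the 30-90% blocked range mentioned earlier, not measurements:

```java
class ThreadThroughputSketch {
    // Little's law: max throughput = concurrent requests / request latency.
    static double maxRequestsPerSecond(int threads, double requestSeconds) {
        return threads / requestSeconds;
    }

    public static void main(String[] args) {
        // Assumed figures: a 200-thread pool, 0.2 s per request,
        // of which 0.15 s (75%) is spent blocked on I/O.
        double cap = maxRequestsPerSecond(200, 0.2);
        System.out.println(cap + " req/s");
        // The cap is 1000 req/s even though each request needs only
        // 0.05 s of CPU time: threads blocked on I/O still count
        // against the pool, exactly as the text describes.
    }
}
```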


(There are other issues that can impose lower limits: GC thrashing, for example. Threads are a fundamental limiting factor, but not the only one!)


Lift decouples threads from requests. In Lift, a request does not tie up a thread. Rather, a thread does an action (like reading the request), then sends a message to an actor. Actors are an important part of the story, because they are scheduled via "lightweight" threads. A pool of threads gets used to process messages within actors. It's important to avoid blocking operations inside of actors, so that these threads get returned to the pool rapidly. (Note that this pool isn't visible to the application; it's part of Scala's support for actors.) A request that's currently blocked on database or disk I/O, for example, doesn't keep a request-handling thread occupied. The request-handling thread is available, almost immediately, to receive more connections.
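The scheduling idea can be sketched as a toy actor in Java: a mailbox drained by a shared pool, so a thread processes one message and immediately returns to the pool instead of staying bound to a request. This is a rough analogue of the mechanism described, not Lift's or Scala's actual implementation:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

// A toy actor: messages queue in a mailbox, and a shared pool drains it.
// No pool thread is ever tied to one particular message or request.
class ToyActor<M> {
    private final Queue<M> mailbox = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean scheduled = new AtomicBoolean(false);
    private final ExecutorService pool;
    private final Consumer<M> behavior;

    ToyActor(ExecutorService pool, Consumer<M> behavior) {
        this.pool = pool;
        this.behavior = behavior;
    }

    void send(M message) {
        mailbox.add(message);
        // Schedule a drain only if one isn't already pending.
        if (scheduled.compareAndSet(false, true)) {
            pool.submit(this::drain);
        }
    }

    private void drain() {
        M m;
        while ((m = mailbox.poll()) != null) {
            behavior.accept(m); // should itself avoid blocking operations
        }
        scheduled.set(false);
        // Re-check: a message may have arrived after the last poll.
        if (!mailbox.isEmpty() && scheduled.compareAndSet(false, true)) {
            pool.submit(this::drain);
        }
    }
}

class ToyActorDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch done = new CountDownLatch(2);
        ToyActor<String> actor = new ToyActor<>(pool, msg -> {
            System.out.println("handled " + msg);
            done.countDown();
        });
        actor.send("request-1");
        actor.send("request-2");
        done.await(5, TimeUnit.SECONDS);
        pool.shutdown();
    }
}
```

Note the contrast with the thread-per-request model: here a blocked request would simply sit in (or re-enter) a mailbox while the pool's threads go on serving other messages.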


This method for decoupling requests from threads allows a Lift server to have many more concurrent requests than a thread-per-request server. (I'd also like to point out that the Grizzly library supports a similar approach without actors.) More concurrent requests means that a single Lift server can support more users than a regular Java EE server.
