NodeJS Event Loop Fundamentals


Question

I'm sure this is a commonly asked question, but I didn't find a concrete answer.

I kind of understand the basic concept of NodeJS and its asynchronous/non-blocking nature of processing I/O.

For argument's sake, let's take a simple example of an HTTP server written in Node that executes the unix command 'find /' and writes the result to the HTTP response (therefore displaying the result of the command in the user's browser). Let's assume that this takes 3 seconds.
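For illustration, a minimal sketch of such a server using the built-in http and child_process modules (the port number and buffer size below are arbitrary choices, not part of the question):

const http = require('http');
const { exec } = require('child_process');

http.createServer((req, res) => {
  // exec() starts the child process and returns immediately; the callback
  // runs later, once 'find /' has finished (~3 seconds in this example).
  exec('find /', { maxBuffer: 64 * 1024 * 1024 }, (err, stdout) => {
    if (err) {
      res.writeHead(500);
      res.end(String(err));
      return;
    }
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(stdout);
  });
}).listen(3000);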

Let's assume that there are two users 'A' and 'B' requesting through their browsers exactly at the same time.

As I understand it, the users' requests are queued in the event queue (Message A, Message B). Each message also has a reference to its associated callback, to be executed once the processing is done.

Since the event loop is single-threaded and processes events one by one, will it take 6 seconds in my example above for the callback of user B to get triggered? [3 seconds for user A's event processing and 3 for its own]

It sounds like I'm missing something here.

The worst case is if 100 users request in the same millisecond: the owner of the 100th event is going to be the most unfortunate user and will have to wait for an eternity.

As I understand it, there is only one event queue in the runtime, so the above problem could apply to any user in any part of the application. For example, a slow database query in web page X would slow down a different user in web page Y?

Fundamentally, I see a problem in serial processing of events and serial execution of their associated callbacks.

Am I missing something here?

Answer

A properly written node.js server will use async I/O and communication for any networking, disk I/O, timers or communication with other processes. When written this way, multiple http requests can be worked on in parallel. Though the node.js code that processes any given request only runs one piece at a time, any time one request is waiting for I/O (which is typically much of a request's lifetime), other requests can run.
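As a sketch of that difference using disk I/O (the file path here is just an illustrative placeholder):

const fs = require('fs');

// Non-blocking: readFile hands the work to the OS/thread pool and returns at
// once, so the event loop can pick up other requests in the meantime.
fs.readFile('/var/log/syslog', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log('read ' + data.length + ' characters');
});

// Blocking alternative (what not to do in a request handler): this would hold
// the single node.js thread until the entire file had been read.
// const data = fs.readFileSync('/var/log/syslog', 'utf8');

console.log('this line runs before the file has been read');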

The end result is that all requests appear to progress at the same time (though in reality, the work on them is interwoven). The Javascript event queue is the mechanism for serializing the work among all the various requests. Whenever an async operation finishes its work, or wishes to notify the main JS thread of some event, it puts something in the event queue. When the current thread of JS execution finishes (even if it has its own async operations still in progress), the JS engine looks at the event queue and executes the next item in that queue (usually some form of callback), and, in that way, the next queued operation proceeds.
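A tiny illustration of that ordering:

// The currently executing JS runs to completion first; only then does the
// engine pull the queued callback off the event queue and run it.
setTimeout(() => console.log('callback pulled from the event queue'), 0);
console.log('current synchronous code finishes first');

// Output:
//   current synchronous code finishes first
//   callback pulled from the event queue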

In your specific example, when you fire up another process and then asynchronously wait for its result, the current thread of execution finishes and the next item in the event queue gets to run. If that next item is another http request, then that request starts processing. When this second request then hits some async point, its thread of execution finishes and, again, the next item in the event queue runs. In this way, new http requests get started and asynchronous callbacks from async operations that have finished get to run. Things happen in roughly FIFO (first-in, first-out) order for how they are put in the event queue. I say "roughly" because there are actually different types of events and not all are serialized equally, but for the purposes of this discussion that implementation detail can be ignored.

So, if three http requests arrive at the exact same time, then one will run until it hits an async point. Then, the next will run until it hits an async point. Then, the third will run until it hits an async point. Then, whichever request finishes its first async operation will get a callback from that async operation and it will run until it is done or hits another async point. And, so on...
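A toy version of that interleaving, with setTimeout standing in for each request's async operation:

function handleRequest(name) {
  console.log(name + ': synchronous part runs');
  // Stand-in for an async operation (child process, disk read, DB query...).
  setTimeout(() => console.log(name + ': async result handled'), 100);
}

['A', 'B', 'C'].forEach(handleRequest);

// Output:
//   A: synchronous part runs
//   B: synchronous part runs
//   C: synchronous part runs
//   A: async result handled
//   B: async result handled
//   C: async result handled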

Since much of what usually causes a web server to take a long time to respond is some sort of I/O operation (disk or networking), which can all be programmed asynchronously in node.js, this whole process generally works quite well, and it's actually a lot more efficient with server resources than using a separate thread per request. The one case where it doesn't work very well is if there's a heavy compute-intensive operation, or some long-running but not asynchronous operation, that ties up the main node.js thread for long periods of time. Because the node.js system is a cooperative CPU-sharing system, if you have a long-running operation that ties up the main node.js thread, it will hog the system (there is no pre-emptive sharing at all with other operations like there could be with a multi-threaded system). Hogging the system makes all other requests wait until the first one is done. The node.js answer to some CPU-hogging computation would be to move that one operation to another process and communicate asynchronously with that other process from the node.js thread - thus preserving the async model for the single node.js thread.
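A sketch of that pattern using child_process.fork (the worker.js file name and the busy-loop computation are made up purely for illustration):

// main.js: hand the CPU-heavy work to a separate process and get the answer
// back asynchronously, so the main node.js thread stays responsive.
const { fork } = require('child_process');

function heavyComputeInChild(n, callback) {
  const child = fork('./worker.js');          // illustrative file name
  child.once('message', (result) => {
    callback(null, result);
    child.kill();
  });
  child.once('error', callback);
  child.send(n);                               // returns immediately
}

heavyComputeInChild(1e9, (err, result) => {
  if (err) return console.error(err);
  console.log('result from the other process:', result);
});

// worker.js (runs in its own process, so this loop cannot hog the server):
//   process.on('message', (n) => {
//     let sum = 0;
//     for (let i = 0; i < n; i++) sum += i;
//     process.send(sum);
//   });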

For node.js database operations, the database will generally provide an async interface for node.js programs to use, and it is then up to the implementation of that database interface to actually do its work in an async fashion. This will likely be done by communicating with some other process where the actual database logic is implemented (probably communicating via TCP). That actual database logic may or may not use threads - that's an implementation detail that is up to the database itself. What is important to node.js is that the computation and database work happen outside the node.js thread, in some other process, perhaps even on another host, so they do not block the node.js thread.

