Connection pool using pg-promise

Problem Description

I'm using Node js and Postgresql and trying to be most efficient in the connections implementation.
I saw that pg-promise is built on top of node-postgres and node-postgres uses pg-pool to manage pooling.
I also read that "more than 100 clients at a time is a very bad thing" (node-postgres).

I'm using pg-promise and wanted to know:

  1. What is the recommended poolSize for a very big load of data?
  2. What happens if poolSize = 100 and the application gets 101 requests simultaneously (or even more)? Does Postgres handle the ordering and make the 101st request wait until a connection becomes available?

Answer

I am the author of pg-promise.

I'm using Node js and Postgresql and trying to be most efficient in the connections implementation.

There are several levels of optimization for database communications. The most important of them is to minimize the number of queries per HTTP request, because IO is expensive, and so is the connection pool.

  • If you have to execute more than one query per HTTP request, always use tasks, via method task (see the sketch after this list).
  • If your task requires a transaction, execute it as a transaction, via method tx.
  • If you need to do multiple inserts or updates, always use multi-row operations. See Multi-row insert with pg-promise and PostgreSQL multi-row updates in Node.js.
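
By way of illustration, here is a minimal sketch of all three points using the standard pg-promise API; the connection string, table and column names are made up for the example.

```js
const pgp = require('pg-promise')();
const db = pgp('postgres://user:pass@localhost:5432/mydb'); // hypothetical connection

// 1. Several queries per HTTP request -> one task, one connection from the pool:
function getUserWithOrders(userId) {
    return db.task(async t => {
        const user = await t.one('SELECT * FROM users WHERE id = $1', [userId]);
        user.orders = await t.any('SELECT * FROM orders WHERE user_id = $1', [userId]);
        return user;
    });
}

// 2. Dependent changes -> one transaction:
function transfer(fromId, toId, amount) {
    return db.tx(async t => {
        await t.none('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]);
        await t.none('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]);
    });
}

// 3. Many inserts -> one multi-row INSERT, generated by the helpers namespace:
function insertUsers(users) {
    const cs = new pgp.helpers.ColumnSet(['name', 'email'], {table: 'users'});
    return db.none(pgp.helpers.insert(users, cs)); // single INSERT ... VALUES (...),(...),...
}
```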

I saw that pg-promise is built on top of node-postgres and node-postgres uses pg-pool to manage pooling.

node-postgres started using pg-pool from version 6.x, while pg-promise remains on version 5.x of the driver, which uses the internal connection-pool implementation. Here's the reason why.

I also read that "more than 100 clients at a time is a very bad thing"

My long practice in this area suggests: if you cannot fit your service into a pool of 20 connections, you will not be saved by going for more connections; you will need to fix your implementation instead. Also, by going over 20 you start putting additional strain on the CPU, which translates into further slow-down.
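
As a sketch of that advice, capping the pool at 20 would look roughly like this. With the node-postgres 5.x lineage described above, the pool size is a driver-level default; later versions take a `max` property in the connection object instead.

```js
const pgp = require('pg-promise')();

// node-postgres 5.x era: the pool size is a library-wide default
pgp.pg.defaults.poolSize = 20;

const db = pgp({
    host: 'localhost',
    port: 5432,
    database: 'mydb', // hypothetical database
    user: 'user',
    password: 'pass'
    // max: 20  // the equivalent option for node-postgres >= 6.x (pg-pool)
});
```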

what is the recommended poolSize for a very big load of data.

The size of the data has nothing to do with the size of the pool. You typically use just one connection for a single download or upload, no matter how large the data is. If your implementation is wrong and you end up using more than one connection, you need to fix it if you want your app to be scalable.
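
For example, a large result set can be streamed over a single pooled connection. A minimal sketch, assuming the pg-query-stream peer module is installed and a hypothetical big_table:

```js
const pgp = require('pg-promise')();
const QueryStream = require('pg-query-stream');
const db = pgp('postgres://user:pass@localhost:5432/mydb'); // hypothetical connection

// A download of any size over ONE connection: rows are streamed,
// so memory use stays flat and no extra connections are taken.
function countBigTableRows() {
    const qs = new QueryStream('SELECT * FROM big_table'); // hypothetical table
    let rows = 0;
    return db.stream(qs, s => {
        s.on('data', () => { rows++; });
    }).then(() => rows); // resolves once the stream has ended
}
```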

what happens if poolSize = 100 and the application gets 101 requests simultaneously

It will wait for the next available connection.
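
In other words, requests beyond the pool size are queued by the pool, not rejected. A quick sketch (hypothetical connection details):

```js
const pgp = require('pg-promise')();
const db = pgp('postgres://user:pass@localhost:5432/mydb'); // hypothetical connection

// 101 concurrent queries against a smaller pool: the excess ones
// simply wait in the pool's queue for the next released connection.
const work = Array.from({length: 101}, (_, i) =>
    db.one('SELECT $1::int AS id', [i])
);

Promise.all(work)
    .then(rows => console.log('completed:', rows.length)) // -> completed: 101
    .catch(console.error);
```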
