Sharing a pg-promise task across parts of an HTTP request


Problem description


I am using pg-promise in a GraphQL application, and because of the nested/iterative nature of the resolvers, each HTTP request makes a lot of database queries.

So I am wondering if there is a more effective way to share a connection from the connection pool while the resolvers are collecting data?

I understand that a pg-promise task is only alive for the duration of its callback function, and I don't see any other way to chain the queries (as documented here).
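
To illustrate the lifetime I mean, here is a minimal sketch (assuming db is an initialized pg-promise database object): queries have to be issued and chained inside the task callback, because the connection is released as soon as the callback's promise settles.

db.task(t => {
    // all queries here run on the same connection, allocated from the pool
    return t.one('select 1 as value');
})
    .then(data => {
        // by this point the connection has been released back to the pool;
        // `t` can no longer be used
    });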

Example

GraphQL Query:

{
  users {
    files {
      name
      date
    }
  }
}

Resolvers example when using Apollo Server

Query: {
    users: (obj, args, context, info) => {
        return context.db.manyOrNone('select id from users');
    }
}

and

Users: {
  files: (obj, args, context, info) => {
    const userId = obj.id;
    return context.db.manyOrNone('select * from files where user_id = $1', userId);
  }
}

For instance, if there are lots of users, this will generate lots of SQL queries: one files query per user.

NOTE

I'm aware of techniques like dataloader to address problems like N+1 Select, but I cannot afford to rearchitect this application at the moment, and simply being more efficient with database connections would be a huge performance win.

Thank you.

Solution

Each HTTP endpoint and each database connection from the pool are meant to be asynchronous.

If you attempt to reuse the same database connection across multiple HTTP endpoints, they will block each other whenever they need access to the database, which is not good.

And if the number of connections in the pool is less than the number of HTTP endpoints that access the database, you've got yourself a poorly scalable HTTP service. The number of connections needs to at least match the number of HTTP endpoints.
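
As an illustration, pg-promise delegates pooling to the underlying pg driver, so the pool size is raised via the max option in the connection config (credentials below are placeholders):

const pgp = require('pg-promise')();

const db = pgp({
    host: 'localhost',
    port: 5432,
    database: 'mydb',   // placeholder
    user: 'appuser',    // placeholder
    max: 30             // maximum pool size (pg defaults to 10)
});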

So what you are looking for, sharing a database connection across multiple HTTP endpoints, is a bad idea to begin with.

And if you want to group multiple data resolvers within a single HTTP request, you can unify the processing logic within a single task (see Tasks).
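
A rough sketch of that approach, using the question's queries (the usersWithFiles wrapper is hypothetical; map, manyOrNone and batch are standard pg-promise methods): all per-user file queries run inside one task and therefore share a single pooled connection.

function usersWithFiles(db) {
    return db.task(t =>
        t.map('select id from users', null, user =>
            t.manyOrNone('select * from files where user_id = $1', [user.id])
                .then(files => ({id: user.id, files}))
        )
            .then(t.batch) // resolve the array of per-user promises
    );
}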

There is also manual connection, via method connect, but I wouldn't recommend it for general connection reuse: it exists for specific cases, is error-prone otherwise, and defeats the idea of automated connections.
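
For completeness, this is roughly what manual connection looks like; the explicit done() call is exactly what makes it error-prone:

let sco; // shared connection object

db.connect()
    .then(obj => {
        sco = obj; // hold on to the connection
        return sco.manyOrNone('select id from users');
    })
    .then(users => {
        // ... further queries via sco, all on the same connection ...
    })
    .finally(() => {
        if (sco) {
            sco.done(); // must release the connection back to the pool
        }
    });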
