Sharing a pg-promise task across parts of an http request


Problem Description


I am using pg-promise in a GraphQL application and because of the nested/iterative nature of the resolvers each HTTP request makes a lot of database queries.


So I am wondering if there is a more effective way to share a connection from the connection pool as the resolvers are collecting data?


I understand that a pg-promise task is only alive for the callback of the function and I don't see any other way to chain the queries (as documented here).


GraphQL Query:

{
  users {
    files {
      name
      date
    }
  }
}

Using Apollo Server:

Query: {
  users: (obj, args, context, info) => {
    return context.db.manyOrNone('select id from users');
  }
}

Users: {
  files: (obj, args, context, info) => {
    const userId = obj.id;
    return context.db.manyOrNone('select * from files where user_id = $1', userId);
  }
}


For instance, this will generate a lot of SQL queries if there are many users.

Note


I'm aware of techniques like dataloader to address problems like N+1 Select but I cannot afford to rearchitect this application at the moment and simply being more efficient with database connections would be a huge performance win.

Thanks.

Answer


Each HTTP endpoint and each database connection from the pool are meant to be asynchronous.


If you attempt to reuse the same database connection across multiple HTTP endpoints, they will block each other whenever they need access to the database, which is not good.


And if the number of connections in the pool is less than the number of HTTP endpoints that access the database, you've got yourself a poorly-scalable HTTP service. You need the number of connections at least to match that of the HTTP endpoints.
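As an illustrative sketch (the connection details and credentials below are assumptions, not from the question), the pool size can be raised via the `max` option when initializing pg-promise, which passes it through to the underlying node-postgres pool:

```javascript
const pgp = require('pg-promise')();

// Sketch only: connection details are placeholders.
// `max` is forwarded to the underlying pg connection pool; size it to at
// least the number of concurrent HTTP requests that hit the database.
const db = pgp({
    host: 'localhost',
    port: 5432,
    database: 'mydb',   // assumed database name
    user: 'app',        // assumed credentials
    max: 30             // pool size (node-postgres default is 10)
});
```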


So what you are looking for, sharing a database connection across multiple HTTP endpoints, is a bad idea to begin with.


And if you want to group multiple data resolvers within a single HTTP request, you can unify the processing logic within a single task (see Tasks).
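For example, a sketch of that approach, assuming `db` is an initialized pg-promise database object and using the table names from the question: the queries from both resolvers are combined inside one task, so they all reuse a single connection from the pool, which is released automatically when the task's callback settles.

```javascript
// Sketch only: assumes `db` is an initialized pg-promise database object.
// db.task() allocates one connection from the pool and keeps it for the
// whole duration of the callback; every query on `t` reuses it.
function getUsersWithFiles() {
    return db.task('users-with-files', async t => {
        const users = await t.manyOrNone('select id from users');
        for (const user of users) {
            // Same connection as the query above:
            user.files = await t.manyOrNone(
                'select * from files where user_id = $1', [user.id]);
        }
        return users;
    });
}
```

One way to apply this to GraphQL (a design assumption, not shown in the question) is to resolve the whole request inside such a task and pass `t` as `context.db`, so the existing resolvers stay unchanged.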


There is also manual connection, via method connect, but I wouldn't recommend it for general connection reuse, as it is there for specific cases, can be error-prone otherwise and invalidates the idea of automated connections.
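For reference, a rough sketch of the manual-connection API (again assuming an initialized `db` object); note that you must release the connection yourself via `done()`, which is exactly what makes this approach error-prone:

```javascript
// Sketch only: assumes `db` is an initialized pg-promise database object.
let sco; // shared connection object
db.connect()
    .then(obj => {
        sco = obj; // holds one physical connection until done() is called
        return sco.manyOrNone('select id from users');
    })
    .then(users => {
        // ... use users ...
    })
    .catch(error => {
        console.error(error);
    })
    .finally(() => {
        if (sco) {
            sco.done(); // MUST release the connection back to the pool
        }
    });
```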

