Does the max connection pool also limit max connections to the database?


Question

I am using Hikari CP with a Spring Boot app which has more than 1000 concurrent users. I have set the max pool size:

spring.datasource.hikari.maximum-pool-size=300

When I check the connections using

show processlist;

it shows a maximum of 300, which is equal to the pool size. It never goes above the max pool size. Is this intended? I thought the pool size meant the number of connections maintained so that they can be reused for future requests to the database, but that more connections could be created when needed.

Also, when I remove the max pool config, I immediately get:

HikariPool-0 - Connection is not available, request timed out after 30000ms.

How can I resolve this problem? Thanks in advance.

Answer

Yes. Quoting the documentation:


This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections. Basically this value will determine the maximum number of actual connections to the database backend. A reasonable value for this is best determined by your execution environment. When the pool reaches this size, and no idle connections are available, calls to getConnection() will block for up to connectionTimeout milliseconds before timing out. Please read about pool sizing. Default: 10

So basically, when all 300 connections are in use, and you are trying to make your 301st connection, Hikari won't create a new one (as maximumPoolSize is the absolute maximum), but it will rather wait (by default 30 seconds) until a connection is available again.

This also explains why you get the exception you mentioned, because the default (when not configuring a maximumPoolSize) is 10 connections, which you'll probably immediately reach.
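If you do want an explicit limit rather than the default of 10, both the pool size and the timeout can be set in application.properties. A minimal sketch, using the standard HikariCP property names exposed by Spring Boot (the values shown are the ones from the question, not recommendations):

```properties
# Maximum number of connections in the pool, and therefore to the database.
spring.datasource.hikari.maximum-pool-size=300
# How long getConnection() blocks when the pool is exhausted, in milliseconds.
spring.datasource.hikari.connection-timeout=30000
```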

To solve this issue, you have to find out why these connections are blocked for more than 30 seconds. Even in a situation with 1000 concurrent users, there should be no problem if your query takes a few milliseconds or a few seconds at most.

If you are invoking really complex queries that take a long time, there are a few possibilities. The first one is to increase the pool size. This however is not recommended, as the recommended formula for calculating the maximum pool size is:

connections = ((core_count * 2) + effective_spindle_count)

Quoting the About Pool Sizing article:


A formula which has held up pretty well across a lot of benchmarks for years is that for optimal throughput the number of active connections should be somewhere near ((core_count * 2) + effective_spindle_count). Core count should not include HT threads, even if hyperthreading is enabled. Effective spindle count is zero if the active data set is fully cached, and approaches the actual number of spindles as the cache hit rate falls. ... There hasn't been any analysis so far regarding how well the formula works with SSDs.

As described within the same article, that means that a 4 core server with 1 hard disk should only have about 10 connections. Even though you might have more cores, I'm assuming that you don't have enough cores to warrant the 300 connections you're making, let alone increasing it even further.
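As a rough sketch, the formula can be evaluated like this (the core and spindle counts below are just the article's 4-core, 1-disk example, not your hardware):

```python
def recommended_pool_size(core_count: int, effective_spindle_count: int) -> int:
    # Formula quoted above: connections = (core_count * 2) + effective_spindle_count
    return (core_count * 2) + effective_spindle_count

# The article's example: a 4-core server with 1 hard disk.
print(recommended_pool_size(4, 1))  # 9, i.e. roughly the ~10 connections mentioned above
```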

Another possibility is to increase the connection timeout. As mentioned before, when all connections are in use, it will wait for 30 seconds by default, which is the connection timeout.

You can increase this value so that the application will wait longer before going in timeout. If your complex query takes 20 seconds, and you have a connection pool of 300 and 1000 concurrent users, you should theoretically configure your connection timeout to be at least 20 * 1000 / 300 = 67 seconds.
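That 67-second figure comes from a back-of-the-envelope calculation, sketched below (a rough worst case only; real queueing behaviour is more complicated):

```python
import math

query_seconds = 20       # time one complex query holds a connection
concurrent_users = 1000  # requests competing for the pool
pool_size = 300          # spring.datasource.hikari.maximum-pool-size

# Worst case, each connection must serve about 1000 / 300 queries in a row,
# so the last request waits roughly 20 * 1000 / 300 seconds for a free connection.
timeout_seconds = math.ceil(query_seconds * concurrent_users / pool_size)
print(timeout_seconds)  # 67
```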

Be aware though, that means that your application might take a long time before showing a response to the user. If you have a 67 second connection timeout and an additional 20 seconds before your complex query completes, your user might have to wait up to a minute and a half.

As mentioned before, your primary goal would be to find out why your queries are taking so long. With a connection pool of 300, a connection timeout of 30 seconds and 1000 concurrent users, it means that your queries are taking at least 9 seconds before completing, which is a lot.
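The 9-second estimate is the same arithmetic run backwards (again just a rough lower bound, assuming requests are spread evenly across the pool):

```python
connection_timeout_seconds = 30
pool_size = 300
concurrent_users = 1000

# If 1000 users hit a 30 s timeout with only 300 connections available,
# each query must be holding its connection for at least 30 * 300 / 1000 seconds.
min_query_seconds = connection_timeout_seconds * pool_size / concurrent_users
print(min_query_seconds)  # 9.0
```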

Try to improve the execution time by:


  • Adding proper indexes.
  • Writing your queries properly.
  • Improving database hardware (disks, cores, network, ...).
  • Limiting the amount of records you're dealing with by introducing pagination.
  • Dividing the work: see if the query can be split into smaller queries that produce intermediary results which can then be used in another query, and so on. As long as you're not working in transactions, the connection will be freed up in between, allowing you to serve multiple users at the cost of some performance.
  • Using caching.
  • Precalculating the results: if you're doing some resource-heavy calculation, you could try to pre-calculate the results at a time when the application isn't used as often, e.g. at night, and store them in a different table that can be queried easily.
  • ...
