ADO.NET Connection Pool concurrency issue with SQL Azure, or How we cannot scale out our Web App


Question

We are having issues scaling out our ASP.NET Web API on Azure because of our SQL Azure DB. The issue is that our Web/Business SQL Azure DB supports a maximum of 180 concurrent requests. On multiple occasions we've hit that limit, and the following exception is thrown:

Resource ID: 1. The request limit for the database is 180 and has been reached. See 'http://go.microsoft.com/fwlink/?LinkId=267637'

To prevent that from happening, we explicitly set the "Max Pool Size" property of our connection string to 180 so that ADO.NET cannot exceed that number of concurrent requests to the DB. But even after this change we kept receiving the same error. We then realized that it was because we run multiple nodes of our app, each with its own connection pool. Therefore, our idea is to set the "Max Pool Size" to 180 divided by the number of nodes we are using, but that seems nuts to me. It feels like a very inefficient use of resources. If I push that reasoning to the extreme, it means I will never be able to scale my web app to more than 180 nodes if I want to make sure I never exceed the max number of concurrent requests to the DB.
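For illustration, this is roughly what the per-node connection string looks like (a minimal sketch: server, database, and credentials are placeholders, and the pool size assumes 3 nodes sharing the 180-request limit):

    // Hypothetical connection string; server, database, and credentials
    // are placeholders. "Max Pool Size" caps connections per pool, i.e.
    // per node, so with 3 web nodes each node gets 180 / 3 = 60.
    var connectionString =
        "Server=tcp:myserver.database.windows.net,1433;" +
        "Database=mydb;User ID=user@myserver;Password=...;" +
        "Encrypt=True;Max Pool Size=60;";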

Is there a better approach to deal with this limitation in a distributed environment?

Answer

Thanks Tim, I think you're right. There are two main design issues in our code. First, we do not close connections fast enough: we let our DI container (Autofac) deal with opening and closing the connection rather than handling it on a per-method basis. Second, some of our queries are very badly written on top of EF, which creates an even worse mess. We are completely replacing EF with Dapper so we can gain full control over our insert/update queries. Once we've achieved that, we should be able to get normal performance for our writes (~30-60 ms). Once we reach that point, I guess it becomes a matter of pure probability: the likelihood of having more than 180 concurrent requests to the DB should be low enough to consider it a non-problem. Mind you, inside our repository I guess we could easily wrap our requests in a retry mechanism that would deal with those rare events. What do you think?
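A minimal sketch of what that could look like, using Dapper and a hypothetical OrderRepository. Error number 10928 is the SQL Azure code behind the quoted request-limit message, and the retry count and backoff values here are arbitrary:

    using System;
    using System.Data.SqlClient;
    using System.Threading.Tasks;
    using Dapper;

    // Hypothetical repository: it opens and disposes the connection inside
    // each method instead of letting the DI container hold it for the whole
    // request lifetime.
    public class OrderRepository
    {
        private readonly string _connectionString;

        public OrderRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public Task UpdateStatusAsync(int orderId, string status)
        {
            return ExecuteWithRetryAsync(async () =>
            {
                using (var connection = new SqlConnection(_connectionString))
                {
                    // Dapper opens the closed connection itself; disposing it
                    // at the end of the using block returns it to the pool
                    // immediately.
                    await connection.ExecuteAsync(
                        "UPDATE Orders SET Status = @status WHERE Id = @orderId",
                        new { orderId, status });
                }
            });
        }

        // Retries only on SQL Azure error 10928 ("The request limit for the
        // database is ... and has been reached"), with exponential backoff.
        private static async Task ExecuteWithRetryAsync(
            Func<Task> action, int maxRetries = 3)
        {
            for (var attempt = 0; ; attempt++)
            {
                try
                {
                    await action();
                    return;
                }
                catch (SqlException ex)
                    when (ex.Number == 10928 && attempt < maxRetries)
                {
                    await Task.Delay(
                        TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
                }
            }
        }
    }

The retry only papers over the rare spikes; the real fix is the short-lived connections, which go back to the pool as soon as each method completes.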
