MongoDB connection problems on Azure


Problem description


We have an ASP.NET MVC application deployed to an Azure Website that connects to MongoDB and does both read and write operations. The application does this iteratively. A few thousand times per minute.

We initialize the C# driver using Autofac and we set the MaxConnectionIdleTime to 45 seconds as suggested in https://groups.google.com/forum/#!topic/mongodb-user/_Z8YepNHnbI and a few other places.

We are still getting a large number of the below error:

Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. Message: {"ClassName":"System.IO.IOException","Message":"Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."}

We get this error while connecting to both a MongoDB instance deployed on a VM in the same datacenter/region on Azure and also while connecting to an external PaaS MongoDB provider.

I run the same code in my local computer and connect to the same DB and I don't receive these errors. It's only when I deploy the code to an Azure Website. Any suggestions?

Solution

A few thousand requests per minute is a big load, and the only way to do it right, is by controlling and limiting the maximum number of threads which could be running at any one time.
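As a minimal sketch of that throttling idea (Python here purely for illustration; in .NET the same pattern is a `SemaphoreSlim` or a bounded thread pool — all names below are hypothetical), a semaphore caps how many operations can run at once:

```python
import threading

MAX_CONCURRENT = 50           # cap: at most 50 operations in flight at any time
gate = threading.Semaphore(MAX_CONCURRENT)

active = 0
peak = 0
lock = threading.Lock()

def do_transaction(item):
    """Stand-in for one database read/write; records peak concurrency."""
    global active, peak
    with gate:                # blocks while 50 operations are already running
        with lock:
            active += 1
            peak = max(peak, active)
        # ... the actual MongoDB call would go here ...
        with lock:
            active -= 1

threads = [threading.Thread(target=do_transaction, args=(i,)) for i in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= MAX_CONCURRENT)   # prints: True
```

Even with 200 callers, the semaphore guarantees no more than 50 are inside the critical section at once, which is exactly the "control and limit the maximum number of threads" point above.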

As there's not much information posted as to how you've implemented this, I'm going to cover a few possible circumstances.


Time to experiment...

The constants:

  • Items to process:
    • 50 per second, or in other words...
    • 3,000 per minute, and one more way to look at it...
    • 180,000 per hour

The variables:

  • Data transfer rates:

    • How much data you can transfer per second is going to play a role no matter what we do, and it will vary throughout the day.

      The only thing we can do is fire off more requests from different CPUs to distribute the weight of the traffic we're sending back and forth.

  • Processing power:

    • I'm assuming you have this in a WebJob as opposed to having it coded inside the MVC site itself, which is highly inefficient and not fit for the purpose you're trying to achieve. By using a WebJob we can queue work items to be processed by other WebJobs. The queue in question is Azure Queue Storage.

      Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account. A storage account can contain up to 200 TB of blob, queue, and table data. See Azure Storage Scalability and Performance Targets for details about storage account capacity.

      Common uses of Queue storage include:

      • Creating a backlog of work to process asynchronously
      • Passing messages from an Azure Web role to an Azure Worker role

The issues:

  • We're attempting to complete 50 transactions per second, so each transaction should finish in under 1 second if we're utilising 50 threads. Our 45-second timeout serves no purpose at this point.
  • We're expecting 50 threads to run concurrently, all completing in under a second, every second, on a single CPU. (I'm exaggerating here, just to make a point... but imagine downloading 50 text files every single second, processing them, then trying to shoot them back over to a colleague in the hope they'll even be ready to catch them.)
  • We need retry logic in place: if after 3 attempts an item isn't processed, it needs to be placed back into the queue. Ideally we should give the server more time to respond than just one second after each failure; let's say we gave it a 2 second break on the first failure, then 4 seconds, then 10. This will greatly increase the odds of us persisting/retrieving the data we need.
  • We're assuming that our MongoDB can handle this number of requests per second. If you haven't already, start looking at ways to scale it out. The issue isn't that it's MongoDB — the data layer could have been anything; it's the fact that we're making this number of requests from a single source that is the most likely cause of your issues.
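The escalating-wait idea in the retry bullet can be sketched like this (Python for illustration; the 2 s / 4 s / 10 s delays come from the answer, everything else — `process_with_retry`, `flaky`, and so on — is a hypothetical stand-in):

```python
import time

BACKOFF = [2, 4, 10]   # seconds to wait after the 1st and 2nd failures; 3rd failure re-queues

def process_with_retry(work, do_work, requeue, sleep=time.sleep):
    """Try an item up to three times with escalating waits between attempts,
    then put it back on the queue for a later retry cycle."""
    for attempt, delay in enumerate(BACKOFF, start=1):
        try:
            return do_work(work)
        except Exception:
            if attempt == len(BACKOFF):
                requeue(work)          # third failure: back to the queue
                return None
            sleep(delay)               # give the server a breather before retrying

# Example: a flaky operation that succeeds on the second attempt.
calls = {"n": 0}
def flaky(item):
    calls["n"] += 1
    if calls["n"] < 2:
        raise IOError("transient failure")
    return item * 2

waited = []
result = process_with_retry(21, flaky, requeue=lambda w: None,
                            sleep=waited.append)
print(result, waited)   # prints: 42 [2]
```

Injecting `sleep` as a parameter keeps the sketch testable; in production you'd let it default to `time.sleep` (or, in .NET, `Task.Delay`).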

The solution:

  1. Set up a WebJob and name it EnqueueJob. This WebJob will have one sole purpose: to queue items of work to be processed, in Queue Storage.
  2. Create a Queue Storage container named WorkItemQueue; this queue will act as a trigger for the next step and kick off our scaling-out operations.
  3. Create another WebJob named DequeueJob. This WebJob will also have one sole purpose: to dequeue the work items from the WorkItemQueue and fire off the requests to your data store.
  4. Configure the DequeueJob to spin up once an item has been placed inside the WorkItemQueue, start 5 separate threads, and, while the queue is not empty, dequeue a work item on each thread and attempt to execute the dequeued job.

    1. Attempt 1, if fail, wait & retry.
    2. Attempt 2, if fail, wait & retry.
    3. Attempt 3, if fail, enqueue item back to WorkItemQueue

  5. Configure your website to autoscale out to x CPUs (note that your website and WebJobs share the same resources)
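The five steps above can be sketched end-to-end (Python for illustration, with an in-memory `queue.Queue` standing in for Azure Queue Storage; the names `EnqueueJob`, `DequeueJob` and `WorkItemQueue` mirror the answer, the rest is hypothetical):

```python
import queue
import threading

work_queue = queue.Queue()   # stands in for the WorkItemQueue in Queue Storage
MAX_ATTEMPTS = 3
WORKERS = 5                  # "start 5 separate threads"

results = []
results_lock = threading.Lock()

def enqueue_job(items):
    """EnqueueJob: its sole purpose is putting work items on the queue."""
    for item in items:
        work_queue.put((item, 1))        # (payload, attempt number)

def store(item):
    """Stand-in for the actual write to the data store."""
    with results_lock:
        results.append(item)

def dequeue_worker():
    """One DequeueJob thread: drain the queue, firing requests at the store."""
    while True:
        try:
            item, attempt = work_queue.get_nowait()
        except queue.Empty:
            return                        # queue drained; thread exits
        try:
            store(item)
        except Exception:
            if attempt < MAX_ATTEMPTS:
                work_queue.put((item, attempt + 1))   # retry later
        finally:
            work_queue.task_done()

enqueue_job(range(20))
threads = [threading.Thread(target=dequeue_worker) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results) == list(range(20)))   # prints: True
```

The real version would replace `queue.Queue` with the Azure Storage queue client and let the WebJobs SDK trigger `dequeue_worker` when messages arrive, but the producer/consumer shape is the same.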

Here's a short 10-minute video that gives an overview of how to utilise Queue Storage and WebJobs.


Edit:

Another reason you may be getting these errors could be two other factors, again caused by the code being in an MVC app...

If you're compiling the application with the DEBUG attribute applied but pushing the RELEASE version instead, you could be running into issues due to the settings in your web.config: without the DEBUG attribute, an ASP.NET web application will run a request for a maximum of 90 seconds, and if the request takes longer than this, it will dispose of the request.

To increase the timeout beyond 90 seconds you will need to change the httpRuntime property in your web.config...

<!-- Increase timeout to five minutes -->
<httpRuntime executionTimeout="300" />

The other thing you need to be aware of is the request timeout setting between your browser and the web app. I'd say that if you insist on keeping the code in MVC as opposed to extracting it and putting it into a WebJob, then you can use the following code to fire a request off to your web app and offset the timeout of the request.

string html = string.Empty;
string uri = "http://google.com";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
// Timeout is an int in milliseconds, not a TimeSpan
request.Timeout = (int)TimeSpan.FromMinutes(5).TotalMilliseconds;

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(stream))
{
    html = reader.ReadToEnd();
}
