Concurrent requests handling on Google App Engine


Problem Description



I was experimenting with concurrent request handling on a few platforms.

The aim of the experiment was to have a broad measure of the capacity bounds of some selected technologies.

I set up a Linux VM on my machine with a basic Go HTTP server (the vanilla http.HandleFunc from the default net/http package). The server computes a modified version of the fasta algorithm, restricted to 1 thread and 1 process, and returns the result. N was set to 100000, and the algorithm runs in roughly 2 seconds. I used the same algorithm and logic in a Google App Engine project.
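
For reference, here is a minimal sketch of what that local server might have looked like; the /fasta route and port 8080 are assumptions, and the fasta function body is only a placeholder since the actual modified benchmark code is not shown here:

package main

import (
	"fmt"
	"net/http"
	"runtime"
)

// fasta stands in for the modified fasta benchmark described above;
// the real implementation is not included in the question.
func fasta(n int) []byte {
	// ... CPU-bound sequence generation, roughly 2 s for n = 100000 ...
	return []byte(fmt.Sprintf("fasta result for N=%d\n", n))
}

func main() {
	// Mirror the "threads and processes limited to 1" constraint.
	runtime.GOMAXPROCS(1)

	// Hypothetical route and port.
	http.HandleFunc("/fasta", func(w http.ResponseWriter, r *http.Request) {
		w.Write(fasta(100000))
	})
	http.ListenAndServe(":8080", nil)
}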

The algorithm is written using the same code; only the handler setup is done in init() instead of main(), as per GAE requirements.
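
A corresponding sketch for the legacy App Engine Go standard runtime, where the package is not main, App Engine supplies the entry point, and handlers are registered in init(); fasta is again just a stand-in:

// Package fastaapp sketches the same handler for the legacy App Engine
// Go standard runtime: App Engine provides the entry point, so handlers
// are registered in init() rather than main().
package fastaapp

import "net/http"

// fasta is the same placeholder as in the local-server sketch.
func fasta(n int) []byte { return []byte("fasta result\n") }

func init() {
	// Hypothetical route name.
	http.HandleFunc("/fasta", func(w http.ResponseWriter, r *http.Request) {
		w.Write(fasta(100000))
	})
}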

On the other end, an Android client spawns 500 threads, each issuing a GET request in parallel to the fasta-computing server, with a request timeout of 5000 ms.
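
The real client is an Android app, but the same load pattern (500 parallel GETs with a 5 s timeout) can be sketched in Go for illustration; the endpoint URL below is hypothetical:

package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

func main() {
	// Hypothetical endpoint; point it at the local server or the GAE app.
	const url = "http://localhost:8080/fasta"

	client := &http.Client{Timeout: 5000 * time.Millisecond}

	var wg sync.WaitGroup
	var mu sync.Mutex
	ok, failed := 0, 0

	for i := 0; i < 500; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := client.Get(url)
			if err != nil { // includes timeouts
				mu.Lock()
				failed++
				mu.Unlock()
				return
			}
			io.Copy(io.Discard, resp.Body) // drain the response body
			resp.Body.Close()
			mu.Lock()
			ok++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Printf("succeeded: %d, failed or timed out: %d\n", ok, failed)
}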

I was expecting the GAE application to scale and answer every request, and the local Go server to fail on some of the 500 requests, but the results were the opposite: the local server correctly replied to every request within the timeout bounds, while the GAE application handled only 160 of the 500 requests. The remaining requests timed out.

I checked the Cloud Console and verified that 18 GAE instances had been spawned, but the vast majority of requests still failed.

I thought that most of them failed because of the start-up time of each GAE instance, so I repeated the experiment right afterwards, but I got the same results: most of the requests timed out.

I was expecting GAE to scale to accommodate ALL the requests, believing that if a single local VM could successfully reply to 500 concurrent requests, GAE would be able to do the same, but that is not what happened.

The GAE console doesn't show any error and correctly reports the number of incoming requests.

What could be the cause of this? Also, if a single instance could handle all the incoming requests on my machine using only goroutines, why did GAE need to scale so much at all?

Solution

Thanks everyone for your help. The answers I received on this topic raised many interesting points and insights.

The fact that the Cloud Console reported no errors led me to believe that the bottleneck was occurring after the actual request processing.

I found the reason why the results were not as expected: bandwidth.

Each response had a payload of roughly 1 MB, so answering 500 simultaneous connections from the same client clogged the line, resulting in timeouts. This obviously did not happen when requesting the local VM, where the available bandwidth is much larger.
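
As a rough back-of-envelope check (the ~100 Mbit/s client link assumed here is purely illustrative and is not stated anywhere above): 500 responses × 1 MB ≈ 500 MB ≈ 4,000 Mbit, which needs on the order of 40 seconds to transfer, far beyond the 5,000 ms request timeout, so most connections would time out no matter how quickly GAE produced the responses.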

Now GAE's scaling is in line with what I expected: it successfully scales to accommodate each incoming request.
