WCF HttpTransport: streamed vs buffered TransferMode


Question

I have a self-hosted WCF service (v4 framework) that is exposed through an HttpTransport-based custom binding. The binding uses a custom MessageEncoder that is pretty much a BinaryMessageEncoder with gzip compression added on top.

A Silverlight and a Windows client consume the web service.

Problem: in some cases the service has to return very large objects, and occasionally throws OutOfMemory exceptions when responding to several concurrent requests (even though Task Manager reports only ~600 MB for the process). The exception happens in the custom encoder, when the message is about to be compressed, but I believe this is just a symptom and not the cause. The exception states "failed to allocate x MB" where x is 16, 32 or 64, not an overly huge amount; for this reason I believe something else has already pushed the process near some limit before that.

The service endpoint is defined as follows:

var transport = new HttpTransportBindingElement(); // quotas omitted for simplicity
var binaryEncoder = new BinaryMessageEncodingBindingElement(); // ReaderQuotas omitted for simplicity
var customBinding = new CustomBinding(new GZipMessageEncodingBindingElement(binaryEncoder), transport);

Then I did an experiment: I changed TransferMode from Buffered to StreamedResponse (and modified the client accordingly). This is the new service definition:

var transport = new HttpTransportBindingElement()
{
    TransferMode = TransferMode.StreamedResponse // <-- this is the only change
};
var binaryEncoder = new BinaryMessageEncodingBindingElement(); // ReaderQuotas omitted for simplicity
var customBinding = new CustomBinding(new GZipMessageEncodingBindingElement(binaryEncoder), transport);
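For reference, the client had to opt in as well: streaming only works when the client's transport binding element uses a matching streamed TransferMode, and large responses usually also require raising MaxReceivedMessageSize. A sketch of what the matching desktop-client binding might look like (the GZip encoder element is assumed to be the same sample class as on the service; the quota value is purely illustrative):

```csharp
using System.ServiceModel.Channels;

// Client-side counterpart (sketch): TransferMode must match what the
// service streams, and MaxReceivedMessageSize caps the streamed body.
var transport = new HttpTransportBindingElement
{
    TransferMode = TransferMode.StreamedResponse,  // matches the service
    MaxReceivedMessageSize = 256L * 1024 * 1024    // illustrative 256 MB cap
};
var binaryEncoder = new BinaryMessageEncodingBindingElement();
var clientBinding = new CustomBinding(new GZipMessageEncodingBindingElement(binaryEncoder), transport);
```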

Magically, no more OutOfMemory exceptions. The service is a bit slower for small messages, but the difference shrinks as message size grows. The behavior (both the speed and the OutOfMemory exceptions) is reproducible; I ran several tests with both configurations and the results are consistent.

Problem solved, BUT: I cannot explain to myself what is happening here. My surprise stems from the fact that I did not change the contract in any way. I.e. I did not create a contract with a single Stream parameter, etc., as you usually do for streamed messages. I am still using my complex classes with the same DataContract and DataMember attributes. I just modified the endpoint, that's all.

I thought that setting TransferMode was just a way to enable streaming for properly formed contracts, but obviously there is more to it than that. Can anybody explain what actually happens under the hood when you change TransferMode?
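To make the question concrete, here is the mental model I have so far, sketched with plain streams (this is not WCF's actual transport code, just an illustration of the two strategies): buffered mode materializes the whole message as one byte array before handing it to the transport, while streamed mode pumps it through a small fixed-size chunk buffer.

```csharp
using System.IO;

static class TransferModes
{
    // Buffered: the whole payload is materialized as a single array before
    // anything reaches the transport. Peak memory grows with message size.
    public static void SendBuffered(Stream message, Stream transport)
    {
        using var ms = new MemoryStream();
        message.CopyTo(ms);
        byte[] whole = ms.ToArray(); // one allocation the size of the message
        transport.Write(whole, 0, whole.Length);
    }

    // Streamed: the payload is copied through a small reusable chunk buffer.
    // Peak memory is roughly the chunk size, regardless of message size.
    public static void SendStreamed(Stream message, Stream transport, int chunkSize = 64 * 1024)
    {
        byte[] chunk = new byte[chunkSize];
        int read;
        while ((read = message.Read(chunk, 0, chunk.Length)) > 0)
            transport.Write(chunk, 0, read);
    }
}
```

Both produce byte-identical output on the wire; only the transient memory footprint differs.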

Answer

Since you use GZipMessageEncodingBindingElement, I assume you are using the MS GZip sample.

Have a look at DecompressBuffer() in GZipMessageEncoderFactory.cs and you will understand what's going on in buffered mode.
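The pattern the following paragraphs walk through can be sketched like this. This is a simplified reconstruction, not the sample's literal code; in particular, the WCF BufferManager step is simulated here with a plain array rounded up to a power-of-two bucket size, since that is the pooling behavior being discussed:

```csharp
using System;
using System.IO;
using System.IO.Compression;

static class GZipDecode
{
    // Simplified sketch of the DecompressBuffer pattern. The allocation
    // numbers (1)-(4) match the discussion below: (1) the incoming
    // compressed buffer, (2) the MemoryStream's internal buffer, (3) the
    // ToArray() copy, (4) a pooled buffer at least as large as the data
    // (simulated; the real code takes it from a WCF BufferManager).
    public static ArraySegment<byte> DecompressBuffer(ArraySegment<byte> buffer) // (1)
    {
        using var decompressed = new MemoryStream();          // (2) grows to full size
        using (var gz = new GZipStream(
            new MemoryStream(buffer.Array, buffer.Offset, buffer.Count),
            CompressionMode.Decompress))
        {
            gz.CopyTo(decompressed);
        }

        byte[] data = decompressed.ToArray();                 // (3) another full-size copy

        // (4) stand-in for bufferManager.TakeBuffer(...): pools typically
        // hand back a bucket-sized array larger than what was requested.
        byte[] pooled = new byte[RoundUpToPowerOfTwo(data.Length)];
        Array.Copy(data, pooled, data.Length);
        return new ArraySegment<byte>(pooled, 0, data.Length);
    }

    static int RoundUpToPowerOfTwo(int n)
    {
        int p = 1;
        while (p < n) p *= 2;
        return p;
    }
}
```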

For the sake of example, let's say you have a message of uncompressed size 50M, compressed size 25M.

DecompressBuffer will receive an ArraySegment<byte> buffer parameter of (1) 25M. The method will then create a MemoryStream and uncompress the buffer into it, using (2) 50M. Then it will do a MemoryStream.ToArray(), copying the memory stream's buffer into a new (3) 50M byte array. Then it takes another byte array from the BufferManager of AT LEAST (4) 50M; in reality it can be a lot more, in my case it was always 67M for a 50M array.

At the end of DecompressBuffer, (1) is returned to the BufferManager (which WCF seems never to clear), while (2) and (3) are left to the GC (which is asynchronous; if you allocate faster than the GC collects, you can get OOM exceptions even though there would be enough memory once everything is cleaned up). (4) will presumably be given back to the BufferManager in BinaryMessageEncodingBindingElement.ReadMessage().

To sum up, for your 50M message the buffered scenario will temporarily take up 25 + 50 + 50 + e.g. 65 = 190M of memory, some of it subject to asynchronous GC, some of it managed by the BufferManager, which, in the worst case, means it keeps lots of unused arrays in memory that are neither usable for a subsequent request (e.g. because they are too small) nor eligible for GC. Now imagine you have multiple concurrent requests: the BufferManager will create separate buffers for all of them, and these are never cleaned up unless you manually call BufferManager.Clear(), and I don't know of a way to do that with the buffer managers WCF uses; see also this question: How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?
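The 190M figure is just the sum of the four buffers enumerated above; a toy helper makes the accounting explicit (the 65M pool-bucket value is the example figure from the answer, not something fixed by WCF):

```csharp
static class BufferedCost
{
    // Transient memory for one buffered decompression, per the accounting
    // above: compressed input + MemoryStream contents + ToArray() copy
    // + pooled output buffer (bucket size, >= uncompressed size).
    public static int PeakMb(int compressedMb, int uncompressedMb, int poolBucketMb)
        => compressedMb + uncompressedMb + uncompressedMb + poolBucketMb;
}
```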

Update: after migrating to IIS7 HTTP compression (see: WCF conditional compression), memory consumption, CPU load and startup time dropped (I don't have the numbers handy), and after then migrating from buffered to streamed TransferMode (see: How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?), memory consumption of my WCF client app dropped from 630M (peak) / 470M (continuous) to 270M (both peak and continuous)!
