WCF HttpTransport: streamed vs buffered TransferMode


Question

I have a self-hosted WCF service (v4 framework) that is exposed through an HttpTransport-based custom binding. The binding uses a custom MessageEncoder that is pretty much a BinaryMessageEncoder with the addition of gzip compression functionality.

A Silverlight and a Windows client consume the web service.

Problem: in some cases the service had to return very large objects and occasionally threw OutOfMemory exceptions when responding to several concurrent requests (even though Task Manager reported only ~600 MB for the process). The exception happened in the custom encoder, when the message was about to be compressed, but I believe this was just a symptom and not the cause. The exception stated "failed to allocate x MB", where x was 16, 32 or 64 - not an overly large amount - and for this reason I believe something else had already pushed the process near some limit before that.

The service endpoint is defined as follows:

var transport = new HttpTransportBindingElement(); // quotas omitted for simplicity
var binaryEncoder = new BinaryMessageEncodingBindingElement(); // Readerquotas omitted for simplicity
var customBinding = new CustomBinding(new GZipMessageEncodingBindingElement(binaryEncoder), transport);
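
For completeness, a binding built this way is typically attached to the self-hosted service roughly as follows. This is only a minimal sketch: the service type, contract type and base address are illustrative, not taken from the question.

var host = new ServiceHost(typeof(MyLargeObjectService),      // hypothetical service implementation
    new Uri("http://localhost:8080/MyLargeObjectService"));    // hypothetical base address
host.AddServiceEndpoint(typeof(IMyLargeObjectService),         // hypothetical contract
    customBinding, "");                                        // the custom binding built above
host.Open();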

Then I did an experiment: I changed TransferMode from Buffered to StreamedResponse (and modified the client accordingly). This is the new service definition:

var transport = new HttpTransportBindingElement()
{
    TransferMode = TransferMode.StreamedResponse // <-- this is the only change
};
var binaryEncoder = new BinaryMessageEncodingBindingElement(); // Readerquotas omitted for simplicity
var customBinding = new CustomBinding(new GZipMessageEncodingBindingElement(binaryEncoder), transport);
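
On the (non-Silverlight) client side the change is symmetric: the client builds the same custom binding, with the same TransferMode flag on its transport element. This is a sketch of the idea, not the exact client code from the question.

var clientTransport = new HttpTransportBindingElement()
{
    TransferMode = TransferMode.StreamedResponse // must be compatible with the service setting
};
var clientEncoder = new BinaryMessageEncodingBindingElement();
var clientBinding = new CustomBinding(new GZipMessageEncodingBindingElement(clientEncoder), clientTransport);
// pass clientBinding to the ChannelFactory<TChannel> or the generated proxy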

Magically, no more OutOfMemory exceptions. The service is a bit slower for small messages, but the difference gets smaller and smaller as message size grows. The behavior (both the speed and the OutOfMemory exceptions) is reproducible; I ran several tests with both configurations and the results are consistent.

Problem solved, BUT: I cannot explain to myself what is happening here. My surprise stems from the fact that I did not change the contract in any way, i.e. I did not create a contract with a single Stream parameter, etc., as you usually do for streamed messages. I am still using my complex classes with the same DataContract and DataMember attributes. I just modified the endpoint, that's all.
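
To illustrate the distinction: a contract designed for streaming usually exposes a single Stream as the message body, whereas the question keeps using ordinary data contracts. A minimal sketch for comparison (the type names are illustrative, not from the question):

using System.IO;
using System.Runtime.Serialization;
using System.ServiceModel;

// What a "streaming-friendly" contract typically looks like:
[ServiceContract]
public interface ILargeDataService
{
    [OperationContract]
    Stream GetLargeData();             // a single Stream as the message body
}

// What is actually used here: an ordinary, fully buffered data contract.
[DataContract]
public class LargeResult
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public byte[] Payload { get; set; }
}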

I thought that setting TransferMode was just a way to enable streaming for properly formed contracts, but obviously there is more to it than that. Can anybody explain what actually happens under the hood when you change TransferMode?

Answer

As you use 'GZipMessageEncodingBindingElement', I assume you are using the MS GZIP sample.

Have a look at DecompressBuffer() in GZipMessageEncoderFactory.cs and you will understand what's going on in buffered mode.

For the sake of example, let's say you have a message of uncompressed size 50M, compressed size 25M.

DecompressBuffer will receive an 'ArraySegment&lt;byte&gt; buffer' parameter of (1) 25M size. The method will then create a MemoryStream and uncompress the buffer into it, using (2) 50M. Then it will do a MemoryStream.ToArray(), copying the memory stream's buffer into a new (3) 50M byte array. Then it takes another byte array from the BufferManager of AT LEAST (4) 50M+; in reality it can be a lot more - in my case it was always 67M for a 50M array.
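
Roughly, the shape of that method is the following. This is a paraphrased sketch based on the steps described above, not the exact sample code; the numbered comments mark the four allocations.

using System;
using System.IO;
using System.IO.Compression;
using System.ServiceModel.Channels;

static ArraySegment<byte> DecompressBuffer(ArraySegment<byte> buffer, BufferManager bufferManager)
{
    // (1) 'buffer' holds the compressed message, e.g. 25M, rented from the BufferManager.
    using (var decompressedStream = new MemoryStream())
    {
        using (var gzipStream = new GZipStream(
            new MemoryStream(buffer.Array, buffer.Offset, buffer.Count),
            CompressionMode.Decompress))
        {
            gzipStream.CopyTo(decompressedStream);           // (2) the MemoryStream grows to ~50M
        }

        byte[] decompressed = decompressedStream.ToArray();  // (3) another ~50M copy

        // (4) a pooled array of AT LEAST 50M (in practice often larger, e.g. 67M)
        byte[] pooled = bufferManager.TakeBuffer(decompressed.Length);
        Buffer.BlockCopy(decompressed, 0, pooled, 0, decompressed.Length);

        bufferManager.ReturnBuffer(buffer.Array);            // (1) goes back into the pool
        return new ArraySegment<byte>(pooled, 0, decompressed.Length);
    }
}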

At the end of DecompressBuffer, (1) will be returned to the BufferManager (which never seems to get cleared by WCF), (2) and (3) are subject to GC (which is asynchronous, and if you are faster than the GC, you might get OOM exceptions even though there would be enough memory once everything is cleaned up). (4) will presumably be given back to the BufferManager in your BinaryMessageEncodingBindingElement.ReadMessage().

To sum up, for your 50M message, the buffered scenario will temporarily take up 25 + 50 + 50 + e.g. 65 = 190M of memory, some of it subject to asynchronous GC, some of it managed by the BufferManager, which - worst case - means it keeps lots of unused arrays in memory that are neither usable in a subsequent request (e.g. too small) nor eligible for GC. Now imagine you have multiple concurrent requests; in that case the BufferManager will create separate buffers for all concurrent requests, which will never be cleaned up unless you manually call BufferManager.Clear(), and I don't know of a way to do that with the buffer managers used by WCF. See also this question: How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?
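
To make the pooling behavior concrete, here is a minimal standalone illustration of System.ServiceModel.Channels.BufferManager. Note that this is your own BufferManager instance; the one WCF creates internally for a binding is not exposed, which is exactly why you cannot Clear() it.

using System.ServiceModel.Channels;

BufferManager pool = BufferManager.CreateBufferManager(
    maxBufferPoolSize: 512L * 1024 * 1024,   // how much memory the pool may retain overall
    maxBufferSize: 64 * 1024 * 1024);        // largest single buffer the pool will keep

byte[] big = pool.TakeBuffer(50 * 1024 * 1024); // returns an array of AT LEAST the requested size
pool.ReturnBuffer(big);                          // the array now stays alive inside the pool
pool.Clear();                                    // only this releases the pooled arrays for GC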

Update: after migrating to IIS7 HTTP compression (wcf conditional compression), memory consumption, CPU load and startup time dropped (I don't have the numbers handy), and after then migrating from buffered to streamed TransferMode (How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?), the memory consumption of my WCF client app dropped from 630M (peak) / 470M (continuous) to 270M (both peak and continuous)!

