REST WCF - stream download is VERY slow with 65,535 (64KB) chunks that can't be changed


Problem description



We have a WCF method, exposed via REST, that returns a stream. We compared a regular download (from a web site) with the WCF method, and we found the following for a 70MB file:

  • in the regular site - the download took ~10 seconds - 1MB chunk size
  • in the WCF method - it took ~20 seconds - the chunk size was ALWAYS 65,535 bytes

We have a custom stream that actually streams into another product, which makes the time difference even worse: 1 minute for the regular site versus 2 minutes for WCF.

Because we need to support very large files, this is becoming critical.

We stopped in the debugger and found that the Read method of the Stream that WCF calls is always invoked with a chunk size of 65,535, which causes the slowness.

We tried several server configurations, like this one:

The endpoint:

   <endpoint address="Download" binding="webHttpBinding" bindingConfiguration="webDownloadHttpBindingConfig"  behaviorConfiguration="web" contract="IAPI" />

The binding:

<binding name="webDownloadHttpBindingConfig" maxReceivedMessageSize="20000000" maxBufferSize="20000000" transferMode="Streamed">
    <readerQuotas maxDepth="32" maxStringContentLength="20000000" maxArrayLength="20000000" maxBytesPerRead="20000000" maxNameTableCharCount="20000000"/>
    <security mode="Transport">
        <transport clientCredentialType="None" proxyCredentialType="None" realm=""/>
    </security>
</binding>

The client, which is a plain REST client (it cannot use a WCF binding - we don't want to reference WCF), is built this way:

System.Net.HttpWebRequest request = (HttpWebRequest)WebRequest.Create(CombineURI(BaseURL, i_RelativeURL));

request.Proxy = null; // We are not using a proxy
request.Timeout = i_Timeout;
request.Method = i_MethodType;
request.ContentType = i_ContentType;

string actualResult = string.Empty;
TResult result = default(TResult);
if (!string.IsNullOrEmpty(m_AuthenticationToken))
{
    request.Headers.Add(ControllerConsts.AUTH_HEADER_KEY, m_AuthenticationToken);
}

using (var response = request.GetResponse())
{
    using (Stream responseStream = response.GetResponseStream())
    {
        byte[] buffer = new byte[1048576]; // 1MB client-side buffer

        int read;
        while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            o_Stream.Write(buffer, 0, read);
        }
    }
}

Basically we are just copying the response stream into another stream.
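For reference, that manual loop is equivalent to Stream.CopyTo, which has accepted an explicit buffer size since .NET 4; a minimal sketch (the helper class and method name here are ours, not part of the original client):

```csharp
using System.IO;

static class DownloadHelper
{
    // Stream.CopyTo(Stream, int) performs the same read/write loop as the
    // code above, using the given buffer size internally.
    public static void CopyWithLargeBuffer(Stream responseStream, Stream o_Stream)
    {
        responseStream.CopyTo(o_Stream, 1048576); // 1MB buffer, as in the manual loop
    }
}
```

Note that regardless of this client-side buffer size, each underlying socket read is still capped at 64KB, as the Microsoft response quoted in the edit explains.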

So, no matter what we do, the server always sees a chunk size of 65,535 (we tried several client/server configurations).

What are we missing?

Thanks!

== EDIT 8/4/15 Microsoft response ==
We worked with Microsoft on this case; this is their answer:

When the WCF client calls a WCF method that returns a Stream, it actually gets a reference to a MessageBodyStream instance. MessageBodyStream ultimately relies on WebResponseInputStream to actually read data, through this graph of relationships:

  • MessageBodyStream has a member, message, that references an InternalByteStreamMessage instance
  • InternalByteStreamMessage has a member, bodyWriter, that references a StreamBasedStreamedBodyWriter instance
  • StreamBasedStreamedBodyWriter has a member, stream, that references a MaxMessageSizeStream instance
  • MaxMessageSizeStream has a member, stream, that references a WebResponseInputStream instance

When you call Read() on the stream, WebResponseInputStream.Read() is ultimately called (you can test this yourself by setting a breakpoint in Visual Studio – one caveat: the "Just My Code" option, under Visual Studio's Debugging options, must be disabled for the breakpoint to be hit). The relevant part of WebResponseInputStream.Read() is the following:

                    return BaseStream.Read(buffer, offset, Math.Min(count, maxSocketRead));

where maxSocketRead is defined to be 64KB. The comment above maxSocketRead says "in order to avoid blowing kernel buffers, we throttle our reads. http.sys deals with this fine, but System.Net doesn't do any such throttling.". This means that if you specify too large a read value, you exceed the kernel's own buffer size and cause poorer performance, as the kernel needs to do more work.

Does this cause a performance bottleneck? No, it should not. Reading too few bytes at a time (say, 256 bytes) will cause a performance degradation. But 64KB should be a value that causes good performance. In these cases, the real bottleneck is typically the network bandwidth, not how fast data is read by the client. In order to maximize performance, it is important that the reading loop is as tight as possible (in other words, there are no significant delays between reads). Let's also keep in mind that objects larger than 80KB go to the Large Object Heap in .NET, which has less efficient memory management than the "normal" heap (compaction does not take place under normal conditions, so memory fragmentation can occur).
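The ~80KB remark refers to the .NET Large Object Heap: the actual threshold is 85,000 bytes, and LOH objects are collected as part of generation 2. A small illustration of our own (not from the Microsoft response):

```csharp
using System;

static class LohDemo
{
    // Returns the GC generation a freshly allocated byte[] lands in.
    // Arrays of 85,000 bytes or more are allocated on the Large Object
    // Heap, which the GC reports as generation 2; smaller arrays start
    // in generation 0.
    public static int GenerationOf(int arrayBytes)
    {
        return GC.GetGeneration(new byte[arrayBytes]);
    }
}
```

This is why a 64KB read buffer stays out of the LOH while the 1MB client buffer in the question does not; whether that matters depends on how often the buffer is allocated.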

Solution

We worked with Microsoft on this case; their full answer is quoted in the edit above.

Possible solution: cache bigger chunks in memory - for example, use a MemoryStream - and while WCF calls your custom stream's Read, accumulate 1MB inside it (more or less, whatever you want).

Then, when the 1MB (or other) threshold is reached, push it to your actual custom stream and continue caching the next big chunk.

This wasn't verified, but I think it should solve the performance issue.
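A sketch of that suggestion, assuming the goal is simply to turn many 64KB reads into fewer 1MB downstream writes (the class, method name, and the 1MB default are our own choices, not Microsoft's code):

```csharp
using System.IO;

static class ChunkedCopy
{
    // Accumulates the small (at most 64KB) reads the WCF stream delivers
    // into one large buffer, and pushes it downstream only when the
    // threshold (default 1MB) is full, so the slow custom stream sees a
    // few large writes instead of many small ones.
    public static void CopyInLargeChunks(Stream source, Stream destination, int chunkSize = 1048576)
    {
        byte[] buffer = new byte[chunkSize];
        int filled = 0;
        int read;
        while ((read = source.Read(buffer, filled, buffer.Length - filled)) > 0)
        {
            filled += read;
            if (filled == buffer.Length)
            {
                destination.Write(buffer, 0, filled); // one large downstream write
                filled = 0;
            }
        }
        if (filled > 0)
        {
            destination.Write(buffer, 0, filled); // flush the final partial chunk
        }
    }
}
```

Wrapping the destination in a BufferedStream with a 1MB buffer would achieve a similar effect with less code.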
