What 130 second timeout is killing my WCF streaming service call?


Question


Just recently I started to investigate a tricky problem with WCF streaming in which a CommunicationException is produced if the client waits for any longer than 130 seconds in between sends to the server.

Here is the full exception:

System.ServiceModel.CommunicationException was unhandled by user code
  HResult=-2146233087
  Message=The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '23:59:59.9110000'.
  Source=mscorlib
  StackTrace:
    Server stack trace: 
       at System.ServiceModel.Channels.HttpOutput.WebRequestHttpOutput.WebRequestOutputStream.Write(Byte[] buffer, Int32 offset, Int32 count)
       at System.IO.BufferedStream.Write(Byte[] array, Int32 offset, Int32 count)
       at System.Xml.XmlStreamNodeWriter.FlushBuffer()
       at System.Xml.XmlStreamNodeWriter.GetBuffer(Int32 count, Int32& offset)
       at System.Xml.XmlUTF8NodeWriter.InternalWriteBase64Text(Byte[] buffer, Int32 offset, Int32 count)
       at System.Xml.XmlBaseWriter.WriteBase64(Byte[] buffer, Int32 offset, Int32 count)
       at System.Xml.XmlDictionaryWriter.WriteValue(IStreamProvider value)
       at System.ServiceModel.Dispatcher.StreamFormatter.Serialize(XmlDictionaryWriter writer, Object[] parameters, Object returnValue)
       at System.ServiceModel.Dispatcher.OperationFormatter.OperationFormatterMessage.OperationFormatterBodyWriter.OnWriteBodyContents(XmlDictionaryWriter writer)
       at System.ServiceModel.Channels.Message.OnWriteMessage(XmlDictionaryWriter writer)
       at System.ServiceModel.Channels.TextMessageEncoderFactory.TextMessageEncoder.WriteMessage(Message message, Stream stream)
       at System.ServiceModel.Channels.HttpOutput.WriteStreamedMessage(TimeSpan timeout)
       at System.ServiceModel.Channels.HttpOutput.Send(TimeSpan timeout)
       at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.SendRequest(Message message, TimeSpan timeout)
       at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
       at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
    Exception rethrown at [0]: 
       at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
       at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
       at WcfService.IStreamingService.SendStream(MyStreamUpRequest request)
       at Client.Program.<Main>b__0() in c:\Users\jpierson\Documents\Visual Studio 2012\Projects\WcfStreamingTest\Client\Program.cs:line 44
       at System.Threading.Tasks.Task.Execute()
  InnerException: System.IO.IOException
       HResult=-2146232800
       Message=Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
       Source=System
       StackTrace:
            at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers)
            at System.Net.ConnectStream.InternalWrite(Boolean async, Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
            at System.Net.ConnectStream.Write(Byte[] buffer, Int32 offset, Int32 size)
            at System.ServiceModel.Channels.BytesReadPositionStream.Write(Byte[] buffer, Int32 offset, Int32 count)
            at System.ServiceModel.Channels.HttpOutput.WebRequestHttpOutput.WebRequestOutputStream.Write(Byte[] buffer, Int32 offset, Int32 count)
       InnerException: System.Net.Sockets.SocketException
            HResult=-2147467259
            Message=An existing connection was forcibly closed by the remote host
            Source=System
            ErrorCode=10054
            NativeErrorCode=10054
            StackTrace:
                 at System.Net.Sockets.Socket.MultipleSend(BufferOffsetSize[] buffers, SocketFlags socketFlags)
                 at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers)
            InnerException: 

It appears that the server has closed the connection prematurely due to inactivity on the connection. If I instead give a pulse to the server, even one byte at a time, then I never get this exception and I can continue to transfer data indefinitely. I've constructed a very simple example application to demonstrate this: it uses basicHttpBinding with Streamed transferMode, and I insert an artificial 130-second delay from within a custom stream implementation on the client. This simulates something similar to a buffer under-run condition, in which the stream provided in my service call from the client is not feeding data to the WCF infrastructure quickly enough to satisfy some sort of unidentified timeout that appears to sit around the 130-second mark.
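A minimal sketch of such a delaying stream is shown below. The wrapper class and its name (DelayedStream) are hypothetical, not part of the original test project, but it reproduces the described behavior: it sleeps once before the first Read, so WCF stalls mid-upload for the configured delay.

```csharp
using System;
using System.IO;
using System.Threading;

// Hypothetical wrapper simulating a buffer under-run: sleeps once before
// the first Read so the WCF streaming infrastructure stalls for _delay.
public class DelayedStream : Stream
{
    private readonly Stream _inner;
    private readonly TimeSpan _delay;
    private bool _hasDelayed;

    public DelayedStream(Stream inner, TimeSpan delay)
    {
        _inner = inner;
        _delay = delay;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (!_hasDelayed)
        {
            _hasDelayed = true;
            Thread.Sleep(_delay); // e.g. TimeSpan.FromSeconds(130)
        }
        return _inner.Read(buffer, offset, count);
    }

    // Remaining Stream members: read-only pass-through, no seeking/writing.
    public override bool CanRead  { get { return _inner.CanRead; } }
    public override bool CanSeek  { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length   { get { return _inner.Length; } }
    public override long Position
    {
        get { return _inner.Position; }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { _inner.Flush(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}
```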

Using the WCF service tracing tools I'm able to find an HttpException with a message that reads "The client is disconnected because the underlying request has been completed. There is no longer an HttpContext available."

From the IIS Express trace log file I see an entry that says "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)"

I've configured both server and client timeouts to use values well over the 130-second mark just to rule them out. I've tried idleTimeout in IIS Express and a host of ASP.NET-related timeout values too, in order to discover where this issue is coming from, but so far no luck. The best information I can find so far is a comment by a developer in the FireFox issue tracker describing a similar problem occurring outside of the WCF architecture. For this reason I'm guessing the issue may be more specifically related to IIS7 or possibly Windows Server.

Custom binding on server Web.config

<binding name="myHttpBindingConfiguration"
         closeTimeout="02:00:00"
         openTimeout="02:00:00"
         receiveTimeout="02:00:00"
         sendTimeout="02:00:00">
  <textMessageEncoding messageVersion="Soap11" />
  <httpTransport maxBufferSize="65536"                        
                 maxReceivedMessageSize="2147483647"
                 maxBufferPoolSize="2147483647"
                 transferMode="Streamed" />
</binding>

Client side configuration in code:

    var binding = new BasicHttpBinding();
    binding.MaxReceivedMessageSize = _maxReceivedMessageSize;
    binding.MaxBufferSize = 65536;
    binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
    binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
    binding.TransferMode = TransferMode.Streamed;
    binding.ReceiveTimeout = TimeSpan.FromDays(1);
    binding.OpenTimeout = TimeSpan.FromDays(1);
    binding.SendTimeout = TimeSpan.FromDays(1);
    binding.CloseTimeout = TimeSpan.FromDays(1);
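With that binding, the failing call can be reproduced along these lines. This is a sketch only: the contract shapes are inferred from the stack trace (WcfService.IStreamingService.SendStream(MyStreamUpRequest)), and the endpoint address is an assumption for illustration.

```csharp
using System;
using System.IO;
using System.ServiceModel;

// Sketch only: contract shapes inferred from the stack trace.
[MessageContract]
public class MyStreamUpRequest
{
    // Streamed transfer requires the body to be a single Stream member.
    [MessageBodyMember]
    public Stream Data;
}

[ServiceContract]
public interface IStreamingService
{
    [OperationContract]
    void SendStream(MyStreamUpRequest request);
}

internal static class ClientProgram
{
    private static void Main()
    {
        var binding = new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,
            SendTimeout = TimeSpan.FromDays(1)
        };

        // Endpoint address is a placeholder for illustration.
        var factory = new ChannelFactory<IStreamingService>(
            binding, new EndpointAddress("http://localhost:8733/StreamingService.svc"));
        var proxy = factory.CreateChannel();

        // In the real test, the MemoryStream below is wrapped in the custom
        // delaying stream described in the question; stalling for 130 seconds
        // mid-upload then triggers the CommunicationException shown above.
        proxy.SendStream(new MyStreamUpRequest { Data = new MemoryStream(new byte[1024]) });
    }
}
```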

In response to wal's idea to see whether I get any different results by self-hosting my service, I want to add that I did so and found that I get the same results as when hosting in IIS. What does this mean? My guess is that the issue is either in WCF or in the underlying networking infrastructure in Windows. I'm using Windows 7 64-bit, and we've discovered this issue by running various clients against the service portion hosted on a Windows 2008 Server.

Update 2013-01-15

I found some new clues thanks to DarkWanderer once I realized that WCF uses HTTP.sys underneath in self-hosting scenarios on Windows 7. This got me looking into what I could configure for HTTP.sys and also what types of issues people report for HTTP.sys that sound similar to what I'm experiencing. That led me to a log file located at C:\Windows\System32\LogFiles\HTTPERR\httperr1.log, which appears to record specific types of HTTP issues on the part of HTTP.sys. In this log I see the following type of entry each time I run my test.

2013-01-15 17:17:12 127.0.0.1 59111 127.0.0.1 52733 HTTP/1.1 POST /StreamingService.svc - - Timer_EntityBody -

So it's down to finding what conditions could cause a Timer_EntityBody error and what settings in IIS7 or elsewhere may have a bearing on when and if that error occurs.

From the official IIS website:

The connection expired before the request entity body arrived. When it is clear that a request has an entity body, the HTTP API turns on the Timer_EntityBody timer. Initially, the limit of this timer is set to the connectionTimeout value. Each time another data indication is received on this request, the HTTP API resets the timer to give the connection more minutes as specified in the connectionTimeout attribute.

Trying to modify the connectionTimeout attribute in applicationhost.config for IIS Express, as the reference above suggests, doesn't seem to make any difference. Perhaps IIS Express ignores this configuration and uses a hard-coded value internally? Trying something on my own, I discovered that there are new netsh http commands for showing and adding timeout values, so giving that a go I came up with the following command, but unfortunately it didn't seem to have any effect on this error either.

netsh http add timeout timeouttype=IdleConnectionTimeout value=300

Solution

It turns out that this issue was caused by the Connection Time-out value used by HTTP.sys, which can be specified through IIS Manager in the Advanced Settings for the individual site. By default this value closes a connection when the complete header and body haven't been received within 120 seconds. Each time a pulse of data from the body is received within that window, the server resets the timer (Timer_EntityBody) and waits for additional data.

This is just as the documentation concerning Timer_EntityBody and connectionTimeout specifies, however it was hard to pinpoint because it appears that IIS Express ignores the connectionTimeout value specified in the limits element in applicationhost.config regardless of what the documentation says. In order to determine this I had to install the full version of IIS on my development machine and modify the setting above after hosting my site there.
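For reference, when hosting under full IIS the same setting can be applied in applicationHost.config rather than through IIS Manager. A sketch of the relevant fragment (the site name is an assumption) might look like:

```xml
<!-- applicationHost.config: raise the HTTP.sys connection timeout for one
     site. The default is 00:02:00 (120 seconds); set it above the longest
     expected gap between entity-body data pulses. -->
<sites>
  <site name="WcfStreamingTest" id="1">
    <limits connectionTimeout="00:10:00" />
  </site>
</sites>
```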

Since we are hosting the real service under IIS on Windows 2008, the above solution will work for me; however, the question still remains of how to properly modify the Connection Time-out value in cases where you are self-hosting.
