HTTPClient Buffer Exceeded 2G; Cannot write more bytes to the buffer


Problem Description

The latest in what is becoming the saga of the Walmart API. I'm submitting a call to get a list of clearance items using HttpClient. Things work fine with other requests, but this one is so large it busts the HTTPRequest buffer. Odd, too, that it's a REQUEST buffer error and not a RESPONSE error, since the request is only the URL.

Exception message:

 Cannot write more bytes to the buffer than the configured maximum buffer size: 2147483647. (System.Net.Http)

 at System.Net.Http.HttpContent.LimitMemoryStream.CheckSize(Int32 countToAdd)
 at System.Net.Http.HttpContent.LimitMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
 at System.Net.Http.StreamToStreamCopy.TryStartWriteSync(Int32 bytesRead)
 at System.Net.Http.StreamToStreamCopy.StartRead()
 --- End of stack trace from previous location where exception was thrown ---
 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
 at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
 at Wal_Mart_Crawler.NET_IO.<walMart_Special_Feed_Lookup>d__4.MoveNext() in c:\users\user\documents\visual studio 2015\Projects\Wal-Mart_Inventory_Tracker\Wal-Mart_Crawler\NET_IO.cs:line 65
 --- End of stack trace from previous location where exception was thrown ---
 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
 at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
 at Wal_Mart_Crawler.Special_Feeds.d__2.MoveNext() in c:\users\user\documents\visual studio 2015\Projects\Wal-Mart_Inventory_Tracker\Wal-Mart_Crawler\Special_Feeds.cs:line 22

(for the curious, Line 22 is):

 API_Json_Special_Feeds.RootObject Items = await net.walMart_Special_Feed_Lookup(specialFeedsURLs[i].Replace("{apiKey}", Wal_Mart_Crawler.Properties.Resources.API_Key_Walmart));
 // which uses HttpClient to call the API

At first, my eyes twinkled a bit when I saw "configured," as that means I can change it, right? I'm running in x64 after all, so maybe I could get past 2G, but I can't figure out how.
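
The closest knob I could find is HttpClient.MaxResponseContentBufferSize, but if I'm reading the docs right it already defaults to int.MaxValue and rejects anything larger, so the 2G ceiling looks hard. A quick sketch (just checking the setting, not a fix):

 // The only related setting on HttpClient; it's already at its maximum.
 using (var http = new HttpClient())
 {
     // Defaults to 2147483647 (int.MaxValue); assigning a larger value
     // throws ArgumentOutOfRangeException, so this can't be pushed past 2G.
     http.MaxResponseContentBufferSize = int.MaxValue;
     Console.WriteLine(http.MaxResponseContentBufferSize); // 2147483647
 }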

Read up on SO, found that I should disable response buffering. Tried:

 var response = await http.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);

... no bueno.

Found an AllowReadStreamBuffering setting for WebRequest - can't seem to find one for HttpClient.

I can't control how much data I'm getting -- that's on Walmart. I'm also limited, shrunken head and all, in what I can do with the response stream, because it goes straight to deserialization:

   var response = await http.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);
   var result = await response.Content.ReadAsStringAsync();   // buffers the whole body as a string
   return JsonConvert.DeserializeObject<API_Json_Special_Feeds.RootObject>(result);

So even if I could break up the response, I'd be stuck, because I don't think I can feed partial data to the deserializer.

Question: How can I either increase the buffer beyond 2G or otherwise avoid the buffer size exceeded exception?

I'm hoping it's another easy fix my weary, ignorant brain just can't figure out. As always - a sincere THANK YOU for your time and in advance for any help you can provide.

Response headers:

 StatusCode: 200, ReasonPhrase: 'OK', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
 X-Mashery-Responder: prod-j-worker-us-west-1c-63.mashery.com
 transfer-encoding: chunked
 Connection: keep-alive
 Date: Sat, 10 Sep 2016 00:57:38 GMT
 Server: Mashery
 Server: Proxy
 Content-MD5: BvJMDJiZPUvmAxxmwKGSog==
 Content-Type: application/json; charset=utf-8
 Last-Modified: Fri Sep 09 15:31:08 PDT 2016
}


Recommended Answer

Will something like this work?

// Assumes: using System.IO; using System.Net.Http; using Newtonsoft.Json; using Newtonsoft.Json.Serialization;
using (var http = new HttpClient())
using (var response = await http.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
using (StreamReader sr = new StreamReader(await response.Content.ReadAsStreamAsync()))
{
    var serializer = new JsonSerializer();
    ITraceWriter tw = new MemoryTraceWriter();
    serializer.TraceWriter = tw;
    var obj = (API_Json_Special_Feeds.RootObject)serializer.Deserialize(sr, typeof(API_Json_Special_Feeds.RootObject));
    // Stop and inspect the tracewriter object here to diagnose deserialization problems
    return obj;
}
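
The idea is that HttpCompletionOption.ResponseHeadersRead lets GetAsync return as soon as the headers arrive, and ReadAsStreamAsync hands the content stream straight to the StreamReader, so the body is never copied into HttpClient's internal LimitMemoryStream (the class in your stack trace that enforces the 2147483647-byte cap). Json.NET then reads from the stream incrementally while it builds the object graph.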

Edit: By including the tracewriter, you can try to determine why the deserialization isn't behaving as you would expect. JSON deserialization tends to silently throw away information it doesn't know how to handle, which results in an empty or sparsely populated object. The tracewriter will help diagnose these issues. Example
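
For reference, here is a minimal sketch of how the trace output could be dumped after the call (the DeserializeWithTrace method name and the Console call are just illustrative; RootObject is the type from the question, everything else is standard Newtonsoft.Json):

// Assumes: using System; using System.IO; using System.Net.Http;
//          using System.Threading.Tasks; using Newtonsoft.Json; using Newtonsoft.Json.Serialization;
static async Task<API_Json_Special_Feeds.RootObject> DeserializeWithTrace(HttpClient http, string url)
{
    var serializer = new JsonSerializer();
    var tw = new MemoryTraceWriter();   // collects trace messages in memory
    serializer.TraceWriter = tw;

    using (var response = await http.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
    using (var sr = new StreamReader(await response.Content.ReadAsStreamAsync()))
    using (var jsonReader = new JsonTextReader(sr))
    {
        var obj = serializer.Deserialize<API_Json_Special_Feeds.RootObject>(jsonReader);

        // ToString() returns every message the trace writer recorded; properties that
        // Json.NET skipped or failed to convert show up here.
        Console.WriteLine(tw.ToString());
        return obj;
    }
}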
