Errors while saving blobs to the cloud


Question



Hi,

I wrote a small program that processes a local directory, takes each file, and publishes it to a blob container.

It works just fine with the local Azure storage emulator. However, when I work directly with the cloud (Azure), there are exceptions:

Error 12002: Error in: WinHttpReceiveResponse: The operation timed out

and the application throws an unhandled exception:

Microsoft C++ exception: utilities::win32_exception

I noticed this happens when I work with larger files (1 MB+) on my low-upload cable connection. When I tried on an internet connection with better upload speed, the exception was thrown on larger files (2 MB and 13 MB).

The code to publish the blobs is very simple (in a try/catch block):

auto blob = container.create_block_blob(utilities::conversions::utf8_to_utf16(EncodeName(Name)));

if (!blob.put(std::move(vecData)).get())
    return 0;

I suspect it's due to the amount of time it takes to send the files to Azure, but I couldn't find any way to set or control the timeout period. Is there a way to do that?

Since I had this error, I tried to work directly with blocks (say, 100 KB blocks). But even when I try the simple example described in the "Accessing Azure Storage" help file, I immediately get an exception on the put_block call. The code is like this (vec has data in the real program):

std::vector<unsigned char> vec;                     // filled with data in the real program
std::string id = std::string("BaseInformation-1");  // + ITA(i+1);
blob.put_block(std::move(vec), id).get();

The exception thrown:

InvalidQueryParameterValue, and the full XML is:

<?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidQueryParameterValue</Code><Message>Value for one of the query parameters specified in the request URI is invalid.
RequestId:928f89bd-8664-418f-8c52-997eeb88bcbd
Time:2012-07-30T07:36:42.3472165Z</Message><QueryParameterName>blockid</QueryParameterName><QueryParameterValue>BaseInformation-1</QueryParameterValue><Reason>Not a valid base64 string.</Reason></Error>

Can anyone help here?

Thanks,

Yoav

Solution

Hi Yoav,

Looks like you are encountering two problems here.

1. The first problem sounds like you are hitting timeout issues on requests that take a long time to upload data to live Azure storage. How long does it actually take before the put method times out? Right now we use 30 seconds for the request and response timeouts. The right thing for us to do is to expose options for setting timeout values on the underlying http_client that is used to implement our Azure storage services. Our storage library would then set better default timeout values and also expose an option for users to explicitly give a timeout value. I will make sure this gets added to our list of work for one of our future refreshes.
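
For reference, here is a minimal sketch of what an explicit timeout could look like on the underlying C++ REST SDK http_client once such an option is exposed; the header path, the http_client_config::set_timeout call, and the account URL are taken from later releases of the SDK and are assumptions, not the storage wrapper discussed in this thread:

#include <cpprest/http_client.h>
#include <chrono>

// Hypothetical sketch: configure a longer request/response timeout on the raw http_client.
web::http::client::http_client_config cfg;
cfg.set_timeout(std::chrono::seconds(120));   // give large uploads time to complete
web::http::client::http_client client(U("https://myaccount.blob.core.windows.net"), cfg);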

2. I think the second problem you are encountering is because, with put_block, you need to handle the base64 encoding of block ids yourself. Another thing to keep in mind is that all block ids within a single block blob must be the same length. We recently did some application building ourselves and considered this too much of a hassle, so we fixed the library to address these usability issues: in a future release we handle the base64 encoding for the user, and we also added an option for users who don't care about naming individual blocks in a blob to just specify an integer value for each block id and let the library take care of everything.

For now, to work around having to handle the base64 encoding yourself, use the function utilities::conversions::to_base64 in asyncrt_utils.h.
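
A rough sketch of that workaround, continuing from the blob and vec in the question above; the exact to_base64 overloads and the string type put_block expects may differ between releases, so treat the details as assumptions:

// Hypothetical sketch: give each block a fixed-length raw id and base64-encode it before put_block.
std::vector<unsigned char> rawId = { 'B', 'l', 'k', '0', '0', '1' };   // same raw length for every block in the blob
auto encodedId = utilities::conversions::to_base64(rawId);             // helper from asyncrt_utils.h
blob.put_block(std::move(vec), encodedId).get();                       // convert encodedId to the string type put_block expects if needed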

Thanks for the feedback and let us know if you encounter any other issues.

Steve

