How to retry bulk insertion failure while using BulkExecutor library


Question


Scenario:

I am using the BulkExecutor library for bulk insertion.

I have used the following settings, as mentioned in the GitHub docs, to pass full control to the BulkExecutor library.

// Set retries to 0 to pass complete control to bulk executor.
client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 0;
client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 0;

I am inserting as shown below; note that enableUpsert and disableAutomaticIdGeneration are set to false.

try
{
var bulkImportResponse = await bulkExecutor.BulkImportAsync(documents: documentsToImportInBatch, enableUpsert: false, disableAutomaticIdGeneration: false);
}
catch (Exception e)
{
    // error handling logic
}

Case:


Suppose I have passed 100 documents for bulk insertion, and an exception occurs after 60 documents have been inserted.


Questions:
1. Does BulkExecutor roll back the insertion operation?
2. If not, what is the mechanism to retry insertion for only the remaining 40 documents?
3. How do I know which 40 documents were not inserted by BulkExecutor?

I would also like to know about any other solutions you may have for this issue.

Answer


BulkExecutor does not roll back, but Cosmos DB typically returns a server exception, something like the server not accepting the request. Typically, we create a stored procedure to perform the rollback action.
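To illustrate the stored-procedure route: Cosmos DB stored procedures execute as a single transaction scoped to one partition key, so an all-or-nothing insert can be done server-side. Stored procedures must be written in JavaScript (the `getContext()` object is supplied by the Cosmos DB server runtime, not Node.js). This is only a minimal sketch, assuming all documents in the batch share the partition key the procedure is executed against; `bulkInsertAtomic` is an illustrative name, not part of any library.

```javascript
// Hypothetical Cosmos DB stored procedure: insert all docs or none.
// Any thrown error aborts the implicit transaction, rolling back every
// document created so far in this invocation.
function bulkInsertAtomic(docs) {
    var context = getContext();
    var collection = context.getCollection();
    var inserted = 0;

    if (!docs || docs.length === 0) {
        context.getResponse().setBody(0);
        return;
    }

    insertNext();

    function insertNext() {
        var accepted = collection.createDocument(
            collection.getSelfLink(),
            docs[inserted],
            function (err) {
                if (err) throw err; // throwing rolls back the whole batch
                inserted++;
                if (inserted < docs.length) {
                    insertNext();
                } else {
                    // report how many documents were committed
                    context.getResponse().setBody(inserted);
                }
            });
        // createDocument returns false when the request is not accepted
        // (e.g. the execution is nearing its resource budget); throw so the
        // transaction rolls back and the client can retry the whole batch.
        if (!accepted) {
            throw new Error("Request not accepted; retry the batch.");
        }
    }
}
```

Because the transaction is partition-scoped, a client would still need to group documents by partition key before calling the procedure.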


If you have heavy-load insertions, you can go over this:


https://stackoverflow.com/questions/41744582/fastest-way-to-insert-100-000-records-into-documentdb


Also, in case you want to skip the stored procedure, you can implement a client-side pattern that sends cancellation tokens across to check the inserted records.
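One hedged sketch of such a client-side retry, in C#: the BulkImportResponse returned by BulkImportAsync exposes NumberOfDocumentsImported and BadInputDocuments, which is how you can account for documents that did not make it in. The names RetryBulkImportAsync, maxAttempts, and LogBadDocuments below are illustrative, not part of the BulkExecutor API, and the sketch assumes the caller assigns a unique id to every document so that re-submitting the batch with enableUpsert: true is idempotent (BulkImportAsync does not preserve input order, so slicing the list by a count of 60 would not be safe).

```csharp
// Illustrative client-side retry pattern; not the library's own mechanism.
// Assumes each document carries its own unique id (hence
// disableAutomaticIdGeneration: true) so retries can safely upsert.
async Task<long> RetryBulkImportAsync(
    IBulkExecutor bulkExecutor,
    List<Document> documents,
    int maxAttempts = 3)
{
    long totalImported = 0;
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            var response = await bulkExecutor.BulkImportAsync(
                documents: documents,
                enableUpsert: attempt > 1,           // idempotent re-submission
                disableAutomaticIdGeneration: true); // ids supplied by caller

            totalImported = response.NumberOfDocumentsImported;

            // BadInputDocuments lists documents rejected as malformed;
            // retrying those unchanged will not help, so log them instead.
            if (response.BadInputDocuments?.Count > 0)
                LogBadDocuments(response.BadInputDocuments); // hypothetical helper

            if (totalImported + (response.BadInputDocuments?.Count ?? 0)
                    >= documents.Count)
                break; // every document is accounted for
        }
        catch (Exception)
        {
            if (attempt == maxAttempts) throw;
            // likely transient (e.g. throttling): back off, then retry batch
            await Task.Delay(TimeSpan.FromSeconds(attempt * 2));
        }
    }
    return totalImported;
}
```

With per-document ids you can also answer question 3 directly by querying the collection for the ids you submitted and diffing against the input list.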


There are a couple of blogs that cover this; I find this one useful: Cosmos DB Stored Procedures – handling continuation.

Cheers,

K

