Multi-threading a C# application with SQL Server database calls


Problem description

I have a SQL Server database with 500,000 records in table main. There are also three other tables called child1, child2, and child3. The many-to-many relationships between main and child1, child2, and child3 are implemented via three relationship tables: main_child1_relationship, main_child2_relationship, and main_child3_relationship. I need to read the records in main, update main, insert new rows into the relationship tables, and insert new records into the child tables. The records in the child tables have uniqueness constraints, so the pseudo-code for the actual calculation (CalculateDetails) would be something like:

for each record in main
{
   find its child1 like qualities
   for each one of its child1 qualities
   {
      find the record in child1 that matches that quality
      if found
      {
          add a record to main_child1_relationship to connect the two records
      }
      else
      {
          create a new record in child1 for the quality mentioned
          add a record to main_child1_relationship to connect the two records
      }
   }
   ...repeat the above for child2
   ...repeat the above for child3 
}
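
For concreteness, one hypothetical LINQ to SQL rendering of the child1 step is sketched below. The entity and member names (Child1s, Quality, Main_Child1_Relationships, GetChild1Qualities, LinkChild1Qualities) are assumptions for illustration, not the real schema:

    // Hypothetical LINQ to SQL sketch of the child1 step; entity and member
    // names are assumed, not taken from the actual schema.
    private static void LinkChild1Qualities(RRDataContext rrdc, Record record)
    {
        foreach (string quality in GetChild1Qualities(record))   // assumed helper
        {
            // Look for an existing child1 row matching this quality.
            Child1 child = rrdc.Child1s.SingleOrDefault(c => c.Quality == quality);

            if (child == null)
            {
                // No match: create a new child1 row for this quality.
                child = new Child1 { Quality = quality };
                rrdc.Child1s.InsertOnSubmit(child);
            }

            // Connect the two records via the relationship table.
            rrdc.Main_Child1_Relationships.InsertOnSubmit(
                new Main_Child1_Relationship { Record = record, Child1 = child });
        }

        rrdc.SubmitChanges();
    }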

This works fine as a single-threaded app, but it is too slow. The processing in C# is pretty heavy-duty and takes too long, so I want to turn this into a multi-threaded app.

What is the best way to do this? We are using LINQ to SQL.

So far my approach has been to create a new DataContext object for each batch of records from main and use ThreadPool.QueueUserWorkItem to process it. However, these batches are stepping on each other's toes: one thread adds a record, then the next thread tries to add the same one, and I get all kinds of interesting SQL Server deadlocks.

Here is the code:

    int skip = 0;
    List<int> thisBatch;
    Queue<List<int>> allBatches = new Queue<List<int>>();
    do
    {
        thisBatch = allIds
                .Skip(skip)
                .Take(numberOfRecordsToPullFromDBAtATime).ToList();
        allBatches.Enqueue(thisBatch);
        skip += numberOfRecordsToPullFromDBAtATime;

    } while (thisBatch.Count() > 0);

    while (allBatches.Count() > 0)
    {
        RRDataContext rrdc = new RRDataContext();

        var currentBatch = allBatches.Dequeue();
        lock (locker)  
        {
            runningTasks++;
        }
        System.Threading.ThreadPool.QueueUserWorkItem(x =>
                    ProcessBatch(currentBatch, rrdc));

        lock (locker) 
        {
            while (runningTasks > MAX_NUMBER_OF_THREADS)
            {
                 Monitor.Wait(locker);
                 UpdateGUI();
            }
        }
    }

And here is ProcessBatch:

    private static void ProcessBatch( 
        List<int> currentBatch, RRDataContext rrdc)
    {
        var topRecords = GetTopRecords(rrdc, currentBatch);
        CalculateDetails(rrdc, topRecords);
        rrdc.Dispose();

        lock (locker)
        {
            runningTasks--;
            Monitor.Pulse(locker);
        };
    }

    private static List<Record> GetTopRecords(RecipeRelationshipsDataContext rrdc, 
                                              List<int> thisBatch)
    {
        List<Record> topRecords;

        topRecords = rrdc.Records
                    .Where(x => thisBatch.Contains(x.Id))
                    .OrderBy(x => x.OrderByMe).ToList();
        return topRecords;
    }

CalculateDetails is best explained by the pseudo-code at the top.

I think there must be a better way to do this. Please help. Many thanks!

Answer

Here's my take on the problem:


  • When using multiple threads to insert/update/query data in SQL Server, or any database, deadlocks are a fact of life. You have to assume they will occur and handle them appropriately.

That's not to say we shouldn't attempt to limit the occurrence of deadlocks. However, while it's easy to read up on the basic causes of deadlocks and take steps to prevent them, SQL Server will always surprise you :-)

Some causes of deadlocks:


  • Too many threads - try to limit the number of threads to a minimum, but of course we want more threads for maximum performance.

  • Not enough indexes. If selects and updates aren't selective enough, SQL Server will take out larger range locks than is healthy. Try to specify appropriate indexes.

  • Too many indexes. Updating indexes causes deadlocks, so try to reduce the number of indexes to the minimum required.

  • Transaction isolation level too high. The default isolation level when using .NET (TransactionScope) is Serializable, whereas the SQL Server default is Read Committed. Reducing the isolation level can help a lot, if appropriate of course (see the sketch after this list).
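
For example, a minimal sketch of lowering the isolation level with a TransactionScope (assuming you wrap your LINQ to SQL work in one, and using the TestDataContext name from the code further below) might look like this:

    // Sketch: wrap the data access in a TransactionScope with a lower
    // isolation level than the Serializable default.
    // Requires: using System.Transactions;
    var options = new TransactionOptions
    {
        IsolationLevel = IsolationLevel.ReadCommitted,
        Timeout = TransactionManager.DefaultTimeout
    };

    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    using (var dc = new TestDataContext())
    {
        // ... queries and SubmitChanges() here ...
        scope.Complete();
    }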

This is how I might tackle your problem:


  • I wouldn't roll my own threading solution; I would use the Task Parallel Library. My main method would look something like this:

using (var dc = new TestDataContext())
{
    // Get all the ids of interest.
    // I assume you mark successfully updated rows in some way
    // in the update transaction.
    List<int> ids = dc.TestItems.Where(...).Select(item => item.Id).ToList();

    // Use a thread-safe collection, since CalculateDetails adds to this
    // from multiple threads (requires System.Collections.Concurrent).
    var problematicIds = new ConcurrentBag<ErrorType>();

    // Either allow the TaskParallel library to select what it considers
    // as the optimum degree of parallelism by omitting the 
    // ParallelOptions parameter, or specify what you want.
    Parallel.ForEach(ids, new ParallelOptions {MaxDegreeOfParallelism = 8},
                        id => CalculateDetails(id, problematicIds));
}


  • Execute the CalculateDetails method with retries for deadlock failures:

    private static void CalculateDetails(int id, ConcurrentBag<ErrorType> problematicIds)
    {
        try
        {
            // Handle deadlocks
            DeadlockRetryHelper.Execute(() => CalculateDetails(id));
        }
        catch (Exception e)
        {
            // Too many deadlock retries (or other exception). 
            // Record so we can diagnose problem or retry later
            problematicIds.Add(new ErrorType(id, e));
        }
    }
    


  • The core CalculateDetails method:

    private static void CalculateDetails(int id)
    {
        // Creating a new DataContext is not expensive,
        // so there is no need to create it outside of this method.
        using (var dc = new TestDataContext())
        {
            // TODO: adjust IsolationLevel to minimize deadlocks
            // If you don't need to change the isolation level 
            // then you can remove the TransactionScope altogether
            using (var scope = new TransactionScope(
                TransactionScopeOption.Required,
                new TransactionOptions {IsolationLevel = IsolationLevel.Serializable}))
            {
                TestItem item = dc.TestItems.Single(i => i.Id == id);
    
                // work done here
    
                dc.SubmitChanges();
                scope.Complete();
            }
        }
    }
    


  • And of course my implementation of a deadlock retry helper:

    public static class DeadlockRetryHelper
    {
        private const int MaxRetries = 4;
        private const int SqlDeadlock = 1205;
    
        public static void Execute(Action action, int maxRetries = MaxRetries)
        {
            if (HasAmbientTransaction())
            {
                // Deadlock blows out containing transaction
                // so no point retrying if already in tx.
                action();
                return;
            }
    
            int retries = 0;
    
            while (retries < maxRetries)
            {
                try
                {
                    action();
                    return;
                }
                catch (Exception e)
                {
                    if (IsSqlDeadlock(e))
                    {
                        retries++;
                        // Delay subsequent retries - not sure if this helps or not
                        Thread.Sleep(100 * retries);
                    }
                    else
                    {
                        throw;
                    }
                }
            }
    
            action();
        }
    
        private static bool HasAmbientTransaction()
        {
            return Transaction.Current != null;
        }
    
        private static bool IsSqlDeadlock(Exception exception)
        {
            if (exception == null)
            {
                return false;
            }
    
            var sqlException = exception as SqlException;
    
            if (sqlException != null && sqlException.Number == SqlDeadlock)
            {
                return true;
            }
    
            if (exception.InnerException != null)
            {
                return IsSqlDeadlock(exception.InnerException);
            }
    
            return false;
        }
    }
    


  • One further possibility is to use a partitioning strategy.

    If your tables can naturally be partitioned into several distinct sets of data, then you can either use SQL Server partitioned tables and indexes, or you could manually split your existing tables into several sets of tables (see http://www.simple-talk.com/sql/sql-tools/sql-server-partitioning-without-enterprise-edition/). I would recommend using SQL Server's partitioning, since the second option would be messy. Also, built-in partitioning is only available in SQL Server Enterprise Edition.

    If partitioning is possible for you, you could choose a partition scheme that breaks your data into, say, 8 distinct sets. You could then use your original single-threaded code, but with 8 threads, each targeting a separate partition. Now there won't be any deadlocks (or at least only a minimal number of them).

    I hope that makes sense.

