Delete Large Number of Rows Is Very Slow - SQL Server


Question


My database table contains around 3 million records. When I delete a large number of rows, around 400,000 records, the transaction takes forever to finish.

The table is not partitioned and the database is running on SQL Server 2012 Standard Edition. I'm using NHibernate as the ORM.

How do I make the transaction faster?

Here is the creation script of the table:

/****** Object:  Table [dbo].[ES_DirectorDataParameters]    Script Date: 03/10/2016 4:10:30 PM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[ES_DirectorDataParameters](
    [DDP_ID] [numeric](18, 0) IDENTITY(1,1) NOT NULL,
    [DP_Name] [varchar](255) NOT NULL,
    [D_ID] [numeric](18, 0) NOT NULL,
    [DDP_DisplayName] [varchar](255) NULL,
    [DDP_Visibility] [varchar](50) NULL,
    [DDP_Replicable] [numeric](18, 0) NOT NULL CONSTRAINT [DF_ES_DirectorDataParameters_DD_Replicable]  DEFAULT ((1)),
 CONSTRAINT [PK_ES_DirectorDataParameters] PRIMARY KEY CLUSTERED 
(
    [DP_Name] ASC,
    [D_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

SET ANSI_PADDING OFF
GO

ALTER TABLE [dbo].[ES_DirectorDataParameters]  WITH CHECK ADD  CONSTRAINT [FK_ES_DirectorDataParameters_ES_DataParameters] FOREIGN KEY([DP_Name])
REFERENCES [dbo].[ES_DataParameters] ([DP_Name])
GO

ALTER TABLE [dbo].[ES_DirectorDataParameters] CHECK CONSTRAINT [FK_ES_DirectorDataParameters_ES_DataParameters]
GO

ALTER TABLE [dbo].[ES_DirectorDataParameters]  WITH CHECK ADD  CONSTRAINT [FK_ES_DirectorDataParameters_ES_Directors] FOREIGN KEY([D_ID])
REFERENCES [dbo].[ES_Directors] ([D_ID])
GO

ALTER TABLE [dbo].[ES_DirectorDataParameters] CHECK CONSTRAINT [FK_ES_DirectorDataParameters_ES_Directors]
GO

Here is the delete statement I found in Activity Monitor while the delete was executing (deleting around 200,000 rows):

DELETE FROM ES_DirectorDataParameters WHERE DDP_ID = @p0

Note: Creating a clustered index on the DDP_ID column improved deletion performance only slightly.
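The statement above deletes one row per execution, so removing 400,000 rows this way pays the per-statement overhead 400,000 times. A common alternative (not from the original post) is to bypass the ORM and issue a set-based delete in batches, so each transaction and its log usage stay small. A minimal sketch, where `@TargetDirectorId` is a hypothetical stand-in for whatever condition identifies the rows to delete:

```sql
-- Hypothetical batched delete: removes rows in chunks so each transaction
-- stays small. @TargetDirectorId is an assumed filter, not from the question.
DECLARE @TargetDirectorId numeric(18, 0) = 42;
DECLARE @BatchSize int = 10000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.ES_DirectorDataParameters
    WHERE D_ID = @TargetDirectorId;

    -- Stop once a batch deletes fewer rows than the batch size.
    IF @@ROWCOUNT < @BatchSize BREAK;
END
```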

Thanks

Solution

Let me start: 400,000 records is small. I happen to regularly delete batches of 64 million records.

How do I make the transaction faster?

The same way you make SQL Server faster in general: you provide a lot more IO capabilities than I assume you have.

  • Split the database, tempdb and log onto separate hard discs. Oh, and make these SSDs, or at least something with a battery-backed write buffer.

SQL Server lives and dies by IO capability, and somehow in the last 15 years or so everyone complaining about the performance of "large" data operations (which in reality are tiny) that I ever talked to was running SQL Server on a hardware layout totally inadequate for any real database work. We are talking about comical, "I want to win Formula One with a truck" levels of discrepancy.

To give you an idea of my layout (for the 64 million row operations): 6 SSDs in 2x RAID 5 for data, 4 SSDs in a RAID 10 for tempdb, and 2 SSDs mirrored for logs.

Also make sure you have enough memory - generally you should keep the active set of your data in memory, to avoid hitting the discs at all.
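One way to check whether the active set actually fits in memory is to ask the buffer pool directly. A sketch using a standard DMV (requires VIEW SERVER STATE permission):

```sql
-- Approximate how much of the current database is cached in the buffer pool.
-- Each buffer page is 8 KB, so COUNT(*) * 8 / 1024 gives megabytes.
SELECT COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID();
```

Compare the result against the size of the data the delete has to touch.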

And obviously check whether the proper indices exist.
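For the delete shown in the question specifically: the clustered primary key is on (DP_Name, D_ID), so `DELETE ... WHERE DDP_ID = @p0` has no index to seek on and must scan. The asker notes an index on DDP_ID helped only slightly, so this alone may not fix it, but a nonclustered index on DDP_ID at least turns each of those per-row deletes into a seek. A sketch (the index name is my own):

```sql
-- Sketch: give the per-row deletes on DDP_ID an index to seek on.
CREATE NONCLUSTERED INDEX IX_ES_DirectorDataParameters_DDP_ID
    ON dbo.ES_DirectorDataParameters ([DDP_ID]);
```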
