Time to shrink a database


Problem description


Hi all,
I posted messages before about trying to purge many records (about 35%)
of a 200Gig database. The responses gave me a lot to think about,
especially regarding the indexes. But due to the short windows that I
have to run in, manipulating the indexes is not an option.

But this leads to another question. When all of this is done, we will
need to shrink the db to reclaim the space. We will also need to
rebuild the indexes, but this can be done one table at a time, so that
might be ok. What I am looking for is advice on how to get through a
shrink of a 200G db on a fairly slow machine. Are there any 'tricks of
the trade' that will help me get through it? I believe one of the DBAs
said that they have not been able to shrink the db in years because it
takes longer than the longest available window.
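
As an illustration of the one-table-at-a-time rebuild, a rough sketch follows; the fill factor of 90 is just a placeholder and the loop assumes all tables are owned by dbo:

-- Rebuild indexes one user table at a time, so each step can fit in a window.
DECLARE @tbl sysname;
DECLARE tbls CURSOR FOR
    SELECT name FROM sysobjects WHERE type = 'U' ORDER BY name;
OPEN tbls;
FETCH NEXT FROM tbls INTO @tbl;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Dynamic SQL so DBCC DBREINDEX gets a literal table name.
    EXEC ('DBCC DBREINDEX (''' + @tbl + ''', '''', 90)');
    FETCH NEXT FROM tbls INTO @tbl;
END
CLOSE tbls;
DEALLOCATE tbls;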

Thanks In Advance

Recommended answer


(continuation of first post)

One idea that I had, which may not work at all, is as follows.

Copy a table to a temp db.
Do the shrink on the temp db.
Drop the original table.
Move the table from the temp db to the original db.

Repeat for each table in the db.

Shrink the original db.

The thought behind this is that if you shrink a db with only one table,
you can get through it much quicker than shrinking a db with all of the
tables in it. This could be done in the available window.
Then when you shrink the original db, there is less work to do since
each table was already shrunk.
I assume that because of the files behind the DBs, this would not work.
Any thoughts on this would be appreciated. I am planning on setting up
a test of this, but if it is a waste of time, please let me know.

Thanks
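
To make those steps concrete, a rough T-SQL sketch, with MyDb as the original database, WorkDb as the scratch database, and dbo.BigTable as the table being moved (all names are made up for the example). Note that SELECT INTO copies only the data, not indexes or constraints, and as the reply below points out, this does not really achieve a per-table shrink:

-- 1. Copy the table into the scratch database.
SELECT * INTO WorkDb.dbo.BigTable FROM MyDb.dbo.BigTable;

-- 2. Shrink the scratch database.
DBCC SHRINKDATABASE (WorkDb);

-- 3. Drop the original table.
DROP TABLE MyDb.dbo.BigTable;

-- 4. Move the table back into the original database.
SELECT * INTO MyDb.dbo.BigTable FROM WorkDb.dbo.BigTable;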


ha*****@yahoo.com (ha*****@yahoo.com) writes:
I posted messages before about trying to purge many records (about 35%)
of a 200Gig database. The responses gave me a lot to think about,
especially regarding the indexes. But due to the short windows that I
have to run in, manipulating the indexes is not an option.

But this leads to another question. When all of this is done, we will
need to shrink the db to reclaim the space. We will also need to
rebuild the indexes, but this can be done one table at a time, so that
might be ok. What I am looking for is advice on how to get through a
shrink of a 200G db on a fairly slow machine. Are there any 'tricks of
the trade' that will help me get through it? I believe one of the DBAs
said that they have not been able to shrink the db in years because it
takes longer than the longest available window.




I'm not sure you are going to shrink at all. Even if you are removing
a lot of rows from the database, I assume that new rows keep coming in
all the time? There is no point in shrinking, if it will grow again.

Reindexing on the other hand is a good idea, but this is not something
you should run only when your DELETE job is done; it should be performed
regularly. Defragmenting can be performed in two ways: with DBCC DBREINDEX
and DBCC INDEXDEFRAG. The first is an offline operation, that is, the table
is not accessible while it's running. INDEXDEFRAG is an online operation.
Again, I'm assuming that data is inserted and updated in the
database on a regular basis.
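
For reference, a minimal sketch of the two commands; the table and index names are made up for the example:

-- Offline: rebuild all indexes on one table (fill factor 90 as an example).
DBCC DBREINDEX ('dbo.Orders', '', 90);

-- Online: defragment a single index in the database MyDb.
DBCC INDEXDEFRAG (MyDb, Orders, IX_Orders_OrderDate);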

According to Books Online, shrinking is an online operation in the sense
that users can keep on working. I would expect it to put some load on the
server, and I would certainly not run a shrink during office hours. A tip is
to specify a target size; that's the variation I've been most successful
with.
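
A minimal sketch of a shrink with an explicit target, assuming the database is called MyDb and its data file has the logical name MyDb_Data (both names are made up):

-- Shrink the whole database, leaving 10 percent free space.
DBCC SHRINKDATABASE (MyDb, 10);

-- Or shrink one file at a time, to a target size given in MB.
DBCC SHRINKFILE (MyDb_Data, 150000);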

If you are going to shrink, which again I don't recommend, the best is
to do this when all tables have been reduced by your deletion job.
You cannot shrink one table at a time as you outlined in your other
post.

Once you have completed any shrinking, you should definitely run
defragmentation, as shrinking causes a lot of fragmentation.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx



Ok, I think I understand your point about not shrinking the db. Does
this mean that new records will be written to the space that was freed
up by the deletes? I thought that the space would not be reused until
you did a shrink to release it.
However, my purge will be removing about 200 million records and the db
only grows at about 10 million per month. So it would take a long time
to fill up the space freed by the purge.
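
For what it's worth, a quick way to see how much unallocated space the purge actually leaves inside the database is sp_spaceused (a sketch; run it in the database in question):

-- Recalculate usage figures, then report database size and unallocated space.
EXEC sp_spaceused @updateusage = 'TRUE';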

As for the DBREINDEX and the INDEXDEFRAG, do they produce similar
results? The DBAs that I am doing this for seem to believe that a
reindex will give a bigger performance boost than the defrag.

thanks

