Azure SQL Database size growing out of control, DBCC SHRINKDATABASE doesn't work

Question

Azure SQL database size in portal is 164GB. There are a lot of binary large objects passing through the database, those records are being deleted but the space is not getting reclaimed. DBCC SHRINKDATABASE doesn't help, it reports many more used pages than the sum of used_page_count from sys.dm_db_partition_stats.
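
A sketch of the two measurements being compared - the per-file shrink report and the allocation total from the DMV. Run in the target database; 0 targets the current database, and pages are 8 KB:

-- Shrink attempt: reports CurrentSize / UsedPages / EstimatedPages per file (the results below)
DBCC SHRINKDATABASE (0);

-- Total pages the engine reports as allocated to objects
SELECT SUM(used_page_count)              AS used_pages,
       SUM(used_page_count) * 8 / 1024.0 AS used_mb
FROM sys.dm_db_partition_stats;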

DBCC SHRINKDATABASE results

DbId    FileId    CurrentSize    MinimumSize    UsedPages    EstimatedPages
5       1         19877520       2048           19877208     19877208
5       2         17024          128            17024        128

Sum of used_page_count from sys.dm_db_partition_stats: 8292675

This represents a difference of 11584533 pages or about 90GB that is not actually being used and cannot be reclaimed with DBCC SHRINKDATABASE. This difference between the database reported size and actual used page count size has been growing rapidly over the past few weeks and the database will soon hit the size limit of 250GB. What can I do to resolve this issue? Any help is much appreciated - thank you.
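
For reference, the arithmetic behind the "about 90GB": SQL Server pages are 8 KB, so the gap between the data file's UsedPages and the partition-stats total works out to roughly 88 GB.

SELECT (19877208 - 8292675)                     AS unreclaimed_pages, -- 11584533 pages
       (19877208 - 8292675) * 8.0 / 1024 / 1024 AS unreclaimed_gb;    -- about 88.4 GB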

Update: per Microsoft support, a deployment to their SQL database servers in April broke the automated ghost record cleanup. A couple weeks ago, somebody was able to manually turn it back on for our server and the database size leveled out at 174GB but did not reclaim the other space consumed by ghost records. Microsoft support recommended scaling up to a Premium tier to minimize the effects of the following I/O intensive process:

declare @db_id int = db_id()
-- cast the id so it concatenates into the command string; note the doubled quotes around the option
exec ('dbcc forceghostcleanup (' + cast(@db_id as nvarchar(10)) + ', ''visit_all_pages'')')

I scaled up to P15, assuming a quicker turnaround and less downtime. Running the command returned:

Msg 40518, Level 16, State 1, Line 1
DBCC command 'forceghostcleanup' is not supported in this version of SQL Server.

Unable to run the command, I attempted to scale back down to S3. The scale operation ran for 24 hours, reported that it had succeeded in the activity log, but the database was still P15. The next recommendation was to scale down in stages. I attempted to scale down to P6. The scale operation ran for 24 hours, reported that it had succeeded in the activity log, but the database is still P15. At this point, MS support is going back to product support and I'm waiting to hear back. I hope there's a refund in this somewhere.
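
(For anyone in the same situation: the scale operation can also be requested and verified from T-SQL rather than the portal. This is only a sketch - the database name is a placeholder and the statements assume a connection to the logical server's master database.)

-- Request the scale-down to the target service objective
ALTER DATABASE [MyDatabase] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P6');

-- Check which tier the database is actually running at once the operation completes
SELECT d.name, so.edition, so.service_objective
FROM sys.databases AS d
JOIN sys.database_service_objectives AS so ON so.database_id = d.database_id
WHERE d.name = N'MyDatabase';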

Answer

The update I added explains this issue is a ghost record issue that will hopefully be resolved by Microsoft.
