Sql Azure - maxing DTU Percentage querying an 'empty table'


Problem Description


I have been having trouble with a database for the last month or so... (it was fine in November). (S0 Standard tier - not even the lowest tier.) - Fixed in update 5

Select statements are causing my database to throttle (even time out). To make sure it wasn't just a problem with my database, I've:

  1. Copied the database... same problem on both (unless increasing the tier size).
  2. Deleted the database and created it again (a blank database) from Entity Framework code-first.

The second one proved more interesting. Now my database has 'no' data, and it still peaks the DTU and makes things unresponsive.

Firstly ... is this normal?

I do have more complicated databases at work that use at most about 10% of the DTU at the same level (S0). So I'm perplexed: this is just one user and one database, currently empty, and I can make it unresponsive.

Update 2: From the copy (the one with ~10,000 records of data): I upgraded it to Standard S2 (potentially 5x more powerful than S0). No problems. Downgraded it to S0 again and ran:

SET STATISTICS IO ON
SET STATISTICS TIME ON
select * from Competitions -- 6 records here...

SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 1 ms.

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.

(6 row(s) affected)

Table 'Competitions'. Scan count 1, logical reads 3, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 407 ms, elapsed time = 21291 ms.
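A sketch of one way to correlate slow runs like this with a resource cap: `sys.dm_db_resource_stats` is an Azure SQL Database DMV that retains roughly an hour of per-database resource samples at ~15-second intervals, so whichever column spikes shows which dimension of the DTU is saturating.

```sql
-- Recent resource usage for the current database (Azure SQL Database only);
-- one row per ~15-second interval, roughly the last hour retained.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```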

Am I misunderstanding Azure databases, in that they need to keep warming up? If I run the same query again, it is immediate. If I close the connection and run it again, it's back to ~20 seconds.

Update 3: At the S1 level, the same query above executes in ~1 second on the first run.

Update 4: S0 level again... first query...

(6 row(s) affected)

Table 'Competitions'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 16 ms, elapsed time = 35 ms.

Nothing is changing on these databases apart from the tier. After roaming around on one of my live sites (different database, schema, and data) on S0, it peaked at 14.58% (it's a stats site).
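For anyone reproducing the tier hopping above, the service objective can also be changed from T-SQL instead of the portal (a sketch; `MyDb` is a placeholder name, and the statement is issued while connected to the `master` database):

```sql
-- Scale the database between tiers; the change is asynchronous,
-- so poll sys.databases until the new service objective is active.
ALTER DATABASE MyDb MODIFY (SERVICE_OBJECTIVE = 'S0');
```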

It's not my best investigation, but I'm tired :D I can give more updates if anyone is curious.

Update 5 - fixed, sort of

The first few 100% spikes were all on the same table. After updating the schema and removing a geography field (the data in that column was null), the load has moved to the later, smaller peaks of ~1-4%, and result times are back in the very low milliseconds.

Thanks for the help, Matt

Solution

The cause of the crippling 100% DTU was a GEOGRAPHY field: http://msdn.microsoft.com/en-gb/library/cc280766.aspx

Removing this from my queries fixed the problem. Removing it from my EF models will hopefully make sure it never comes back.
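As an illustration of the workaround (all column names except the table name are hypothetical), listing columns explicitly instead of using SELECT * keeps the GEOGRAPHY value from being materialised at all, and the always-null column can be dropped outright:

```sql
-- Avoid touching the spatial column by naming only the columns needed
-- (Id and Name are assumed example columns):
SELECT Id, Name
FROM Competitions;

-- Or remove the always-null GEOGRAPHY column entirely
-- (Location is an assumed example column name):
ALTER TABLE Competitions DROP COLUMN Location;
```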

I do want to use the geography field in Azure eventually (probably not for a few months), so if anyone knows why it was causing an unexpected amount of DTU to be spent on a (currently always-null) column, that would be very useful knowledge for the future.
