Optimizing a simple MySQL select on a large table (75M+ rows)
Question
I have a statistics table which grows at a large rate (around 25M rows/day) that I'd like to optimize for selects. The table fits in memory, and the server has plenty of spare memory (32G; the table is 4G).
My simple roll-up query is:
EXPLAIN select FROM_UNIXTIME(FLOOR(endtime/3600)*3600) as ts, sum(numevent1) as success, sum(numevent2) as failure from stats where endtime > UNIX_TIMESTAMP()-3600*96 group by ts order by ts;
+----+-------------+-------+------+---------------+------+---------+------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows     | Extra                                        |
+----+-------------+-------+------+---------------+------+---------+------+----------+----------------------------------------------+
|  1 | SIMPLE      | stats | ALL  | ts            | NULL | NULL    | NULL | 78238584 | Using where; Using temporary; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+----------+----------------------------------------------+
Stats is an InnoDB table, and there is a normal index on endtime. How should I optimize this?
Note: I do plan on adding roll-up tables, but currently this is what I'm stuck with, and I'm wondering if it's possible to fix it without additional application code.
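For context, a roll-up approach could look something like the sketch below. The table name, column names, and the `@last_rollup`/`@rollup_cutoff` boundary variables are all hypothetical, not from the original question:

```sql
-- Hypothetical hourly roll-up table (names are illustrative only)
CREATE TABLE stats_hourly (
    hour_ts INT UNSIGNED NOT NULL,      -- endtime truncated to the hour
    success BIGINT UNSIGNED NOT NULL,
    failure BIGINT UNSIGNED NOT NULL,
    PRIMARY KEY (hour_ts)
) ENGINE=InnoDB;

-- Periodically fold new rows into it (e.g. from a cron job),
-- accumulating into any hour bucket that already exists
INSERT INTO stats_hourly (hour_ts, success, failure)
SELECT FLOOR(endtime/3600)*3600, SUM(numevent1), SUM(numevent2)
  FROM stats
 WHERE endtime >= @last_rollup AND endtime < @rollup_cutoff
 GROUP BY FLOOR(endtime/3600)*3600
ON DUPLICATE KEY UPDATE
    success = success + VALUES(success),
    failure = failure + VALUES(failure);
```

Reports would then read from `stats_hourly` (at most 24 rows/day) instead of scanning the raw table.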
Answer
I've been doing local tests. Try the following:
alter table stats add index (endtime, numevent1, numevent2);
And remove the ORDER BY, as it should be implicit in the GROUP BY (I guess the parser just ignores the ORDER BY in this case, but just in case :)