MySQL index configuration


Problem description

I have a table with 450,000 rows full of news. The table schema is like this:

CREATE TABLE IF NOT EXISTS `news` (
  `id` int(11) NOT NULL auto_increment,
  `cat_id` int(11) NOT NULL,
  `title` tinytext NOT NULL,
  `content` text NOT NULL,
  `date` int(11) NOT NULL,
  `readcount` int(11) NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `cat_id` (`cat_id`),
  KEY `cat_id_2` (`cat_id`,`id`),
  KEY `cat_id_date` (`cat_id`,`date`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin5 AUTO_INCREMENT=462679 ;

When I run a SQL query like the one below to fetch some news for page x of a category page, it takes more than 15 seconds if x is over 100:

select * from news where cat_id='4' order by id desc limit 150000,10;

EXPLAIN shows that it's using "where" and the index cat_id_2.
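
For reference, the plan can be reproduced by prefixing the slow query with EXPLAIN (the exact row estimates will depend on the data):

EXPLAIN SELECT * FROM news WHERE cat_id='4' ORDER BY id DESC LIMIT 150000,10;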

While writing this question, I also checked a simpler SQL query like this one, and it also took nearly a minute:

select * from haberler order by id desc limit 40000,10;

If the SQL is like the following one, it takes just a few milliseconds:

select * from haberler order by id desc limit 20,10;

My my.cnf configuration is like this:

skip-locking
skip-innodb
query_cache_limit=1M
query_cache_size=256M
query_cache_type=1
max_connections=30
interactive_timeout=600000
#wait_timeout=5
#connect_timeout=5
thread_cache_size=384
key_buffer=256M
join_buffer=4M
max_allowed_packet=16M
table_cache=1024
record_buffer=1M
sort_buffer_size=64M
read_buffer_size=16M
max_connect_errors=10
# Try number of CPU's*2 for thread_concurrency
thread_concurrency=2
myisam_sort_buffer_size=128M
long_query_time         = 1
log_slow_queries        = /var/log/mysql/mysql-slow.log
max_heap_table_size=512M
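
As a sanity check (not part of the original question), the values the running server actually picked up can be inspected at runtime; note that sort_buffer_size and read_buffer_size are allocated per connection, so large values multiply across concurrent connections:

SHOW VARIABLES LIKE 'sort_buffer_size';
SHOW VARIABLES LIKE 'read_buffer_size';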

The website is running on a Core 2 Duo with 2 GB of RAM. I think that the problem may be caused by sort_buffer_size, but I'm not sure. Thanks in advance.

Recommended answer

Update:

See this article on my blog for a more detailed analysis of the problem:

When you issue something like LIMIT 150000, 10, it means that MySQL should traverse those 150,000 records and find the next 10.

Traversing an index is slow in MySQL.

Also, MySQL is not capable of doing late row lookups.

In theory, if you do ORDER BY id LIMIT 100000, 10, it would be enough to use the index to find the values from 100000 to 100010, then look up only the 10 rows that satisfy the index and return them.

All major systems except MySQL are aware of this and look up the rows only if the values are actually going to be returned.

MySQL, however, looks up every row.

Try rewriting your query like this:

SELECT  news.*
FROM    (
        -- The inner query needs only id, which the covering index
        -- cat_id_2 (cat_id, id) supplies, so the first 150,000
        -- entries are skipped without reading the full rows.
        SELECT  id
        FROM    news
        WHERE   cat_id='4'
        ORDER BY
                id DESC
        LIMIT 150000, 10
        ) o
JOIN    news
ON      news.id = o.id;
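
Only the 10 ids that survive the inner query are then joined back to fetch the full rows. If the application can remember the last id shown on the previous page, a seek-style predicate avoids the offset scan entirely; this is an alternative not given in the original answer, and :last_id is a placeholder for that remembered value:

-- Seek ("keyset") pagination sketch: :last_id is the smallest id
-- displayed on the previous page; the (cat_id, id) index lets
-- MySQL jump straight to the starting point.
SELECT  *
FROM    news
WHERE   cat_id='4'
  AND   id < :last_id
ORDER BY id DESC
LIMIT   10;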
