Sequentially run through a MYSQL table with 1,000,000 records?

Question

I've built a site that has about 1,000,000 products in it. I want to build a set of site maps which will contain links to all the products. I understand that site maps may contain up to 50,000 links, so my plan is to build 20 such site maps.

I want to accomplish this with a series of SQL statements. My plan: 1,000 SQL statements, each of which would pull up 1,000 products. The SQL statements would be along the following lines:

SELECT name, category, catalog_number FROM product ORDER BY id LIMIT    0, 1000
SELECT name, category, catalog_number FROM product ORDER BY id LIMIT 1000, 1000
SELECT name, category, catalog_number FROM product ORDER BY id LIMIT 2000, 1000
SELECT name, category, catalog_number FROM product ORDER BY id LIMIT 3000, 1000

And so on.

The problem is that as the first argument to LIMIT (the offset) grows, the amount of time each query takes to execute grows very quickly. To be specific, when the first argument to LIMIT was anything less than 28,000, the statement executed in under 0.3 seconds. But when I tried:

SELECT name, category, catalog_number FROM product ORDER BY id LIMIT 29000, 1000

the time taken to execute the statement jumped to 30 seconds! And every statement after that took a similarly long amount of time to execute.

Another thing I tried was to increase the second argument to LIMIT -- but the same problem occurred.

Any assistance would be welcome. Thank you.

Answer

Using LIMIT with an offset requires MySQL to read every row up to the offset plus the limit and then discard the ones you're not interested in - that is, it simply doesn't return them to the client. Therefore, when there are tens of thousands of rows to fetch and throw away, execution time grows dramatically.

Instead, just keep track of the last id you saw; then always filter on that:

SELECT   name, category, catalog_number
FROM     product
WHERE    id > ?   -- the last id you saw, or 0 (or omit the WHERE) for the first batch
ORDER BY id
LIMIT    1000

This will be especially fast if there is an index on the id column.
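
For completeness, here is a minimal sketch of how the batching loop might look from application code using this keyset approach. The table and column names come from the question; the choice of Python, the mysql-connector-python driver, and the connection settings are assumptions for illustration only. Note that id is added to the SELECT list so the application can remember where each batch ended.

import mysql.connector

BATCH_SIZE = 1000

# Hypothetical connection settings -- substitute your own.
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="shop"
)
cursor = conn.cursor()

last_id = 0  # 0 means "start from the beginning"
while True:
    # Keyset pagination: filter on the last id seen instead of using OFFSET.
    cursor.execute(
        "SELECT id, name, category, catalog_number "
        "FROM product WHERE id > %s ORDER BY id LIMIT %s",
        (last_id, BATCH_SIZE),
    )
    rows = cursor.fetchall()
    if not rows:
        break  # no more products

    for row_id, name, category, catalog_number in rows:
        pass  # write a sitemap entry for this product

    last_id = rows[-1][0]  # remember where this batch ended

cursor.close()
conn.close()

Because each iteration seeks directly to the next range of ids via the index, the cost per batch stays roughly constant no matter how far into the table you are.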
