Why is MySQL InnoDB insert so slow?

Question

I am using large random numbers as keys (coming in from another system). Inserts and updates on fairly-small (as in a few million rows) tables are taking much longer than I think is reasonable.

I have distilled a very simple test to illustrate. In the test table I've tried to make it as simple as possible; my real code does not have such a simple layout and has relations and additional indices and such. However, a simpler setup shows equivalent performance.

Here are the results:

creating the MyISAM table took 0.000 seconds
creating 1024000 rows of test data took 1.243 seconds
inserting the test data took 6.335 seconds
selecting 1023742 rows of test data took 1.435 seconds
fetching 1023742 batches of test data took 0.037 seconds
dropping the table took 0.089 seconds
creating the InnoDB table took 0.276 seconds
creating 1024000 rows of test data took 1.165 seconds
inserting the test data took 3433.268 seconds
selecting 1023748 rows of test data took 4.220 seconds
fetching 1023748 batches of test data took 0.037 seconds
dropping the table took 0.288 seconds

Inserting 1M rows into MyISAM takes 6 seconds; into InnoDB takes 3433 seconds!

What am I doing wrong? What is misconfigured? (MySQL is a normal Ubuntu installation with defaults)

Here's the test code:

import sys, time, random
import MySQLdb as db

# usage: python script db_username db_password database_name

# connect and grab a cursor; note this rebinds the name 'db' from the module to the cursor
db = db.connect(host="127.0.0.1",port=3306,user=sys.argv[1],passwd=sys.argv[2],db=sys.argv[3]).cursor()

def test(engine):

    start = time.time() # fine for this purpose
    db.execute("""
CREATE TEMPORARY TABLE Testing123 (
k INTEGER PRIMARY KEY NOT NULL,
v VARCHAR(255) NOT NULL
) ENGINE=%s;"""%engine)
    duration = time.time()-start
    print "creating the %s table took %0.3f seconds"%(engine,duration)

    start = time.time()
    # 1 million rows in 100 chunks of 10K
    data = [[(str(random.getrandbits(48)) if a&1 else int(random.getrandbits(31))) for a in xrange(10*1024*2)] for b in xrange(100)]
    duration = time.time()-start
    print "creating %d rows of test data took %0.3f seconds"%(sum(len(rows)/2 for rows in data),duration)

    sql = "REPLACE INTO Testing123 (k,v) VALUES %s;"%("(%s,%s),"*(10*1024))[:-1]
    start = time.time()
    for rows in data:
        db.execute(sql,rows)
    duration = time.time()-start
    print "inserting the test data took %0.3f seconds"%duration

    # execute the query
    start = time.time()
    query = db.execute("SELECT k,v FROM Testing123;")
    duration = time.time()-start
    print "selecting %d rows of test data took %0.3f seconds"%(query,duration)

    # get the rows in chunks of 10K
    rows = 0
    start = time.time()
    while query:
        batch = min(query,10*1024)
        query -= batch
        rows += len(db.fetchmany(batch))
    duration = time.time()-start
    print "fetching %d batches of test data took %0.3f seconds"%(rows,duration)

    # drop the table
    start = time.time()
    db.execute("DROP TABLE Testing123;")
    duration = time.time()-start
    print "dropping the table took %0.3f seconds"%duration


test("MyISAM")
test("InnoDB")

Answer

InnoDB doesn't cope well with 'random' primary keys. Try a sequential key or auto-increment, and I believe you'll see better performance. Your 'real' key field could still be indexed, but for a bulk insert you might be better off dropping and recreating that index in one hit after the insert is complete. I'd be interested to see your benchmarks for that!
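
To make the suggestion concrete, here is a minimal sketch in the same Python 2 / MySQLdb style as the test script above. It is only an illustration under assumptions, not code from the original post: the table Testing123_seq, the column ext_k and the index idx_ext_k are made-up names. The idea is to let an AUTO_INCREMENT surrogate column be the clustered primary key (so inserts append in key order), keep the random external key as a secondary index, and, for a one-off bulk load, drop that secondary index and rebuild it in one hit after the load.

import sys
import MySQLdb

# connect the same way the test script does
cur = MySQLdb.connect(host="127.0.0.1", port=3306,
                      user=sys.argv[1], passwd=sys.argv[2],
                      db=sys.argv[3]).cursor()

# AUTO_INCREMENT surrogate primary key: InnoDB clusters rows on the
# primary key, so a monotonically increasing key appends to the end of
# the clustered index instead of landing on random pages.
cur.execute("""
CREATE TABLE Testing123_seq (
  id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ext_k INTEGER NOT NULL,
  v VARCHAR(255) NOT NULL,
  KEY idx_ext_k (ext_k)
) ENGINE=InnoDB;""")

def bulk_load(rows):
    # rows is a list of (ext_k, v) tuples
    # for a one-off bulk load, drop the secondary index first and
    # rebuild it in a single pass afterwards
    cur.execute("ALTER TABLE Testing123_seq DROP INDEX idx_ext_k;")
    cur.executemany("INSERT INTO Testing123_seq (ext_k, v) VALUES (%s, %s);", rows)
    cur.execute("ALTER TABLE Testing123_seq ADD INDEX idx_ext_k (ext_k);")

Whether dropping and rebuilding idx_ext_k actually pays off depends on how much data goes in per load, so it is worth timing both variants, as the answer suggests.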
