How to get high performance under a large transaction (PostgreSQL)


Question

I have 2 million rows of data to insert into PostgreSQL, but the insert performance is low. Can I get a higher-performance inserter by splitting the large transaction into smaller ones (which I would rather not do)? Or is there some other, wiser solution?

Answer

No. The main idea for making it much faster is to do all the inserts in one transaction. Using multiple transactions, or no explicit transaction at all (which gives you one implicit transaction, and one commit, per insert), is much slower.
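As a minimal sketch of this, assuming a hypothetical table `items(id, name)`: a single explicit transaction wraps every insert, so the WAL is flushed once at `COMMIT` instead of once per row.

```sql
-- Hypothetical table; one transaction around all rows avoids a
-- commit (and its disk flush) for every individual INSERT.
BEGIN;
INSERT INTO items (id, name) VALUES (1, 'first');
INSERT INTO items (id, name) VALUES (2, 'second');
-- ... repeat for the remaining rows ...
COMMIT;
```

Multi-row `VALUES` lists (`INSERT INTO items VALUES (1, 'first'), (2, 'second'), ...`) reduce round trips further within the same transaction.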

And try to use COPY, which is even faster: http://www.postgresql.org/docs/9.1/static/sql-copy.html
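A hedged sketch of COPY, again assuming the hypothetical `items` table and an illustrative file path:

```sql
-- Server-side COPY reads a file on the database server directly;
-- the path and tab-separated format here are illustrative.
COPY items (id, name) FROM '/tmp/items.tsv';

-- From a client such as psql, \copy streams a client-local file instead:
-- \copy items (id, name) FROM 'items.tsv'
```

COPY loads all rows in one command and one transaction, which is why it typically beats even batched INSERT statements.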

If you really have to use inserts, you can also try dropping all the indexes on the table and recreating them after loading the data.
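Sketched with a hypothetical index name on the same illustrative table:

```sql
-- Hypothetical index; dropping it means the load does not have to
-- update the index once per inserted row.
DROP INDEX IF EXISTS items_name_idx;

-- ... bulk-load the data here ...

-- One bulk rebuild afterwards is cheaper than per-row maintenance.
CREATE INDEX items_name_idx ON items (name);
```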

This may also be interesting: http://www.postgresql.org/docs/9.1/static/populate.html
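That guide also covers configuration for bulk loads; a hedged sketch of the kind of session settings it discusses (values are illustrative, and `items` remains a hypothetical table):

```sql
-- More memory for the CREATE INDEX step after loading (session-level).
SET maintenance_work_mem = '512MB';

-- If losing the last few transactions on a crash is acceptable,
-- skip the synchronous WAL flush at commit time.
SET synchronous_commit = off;

-- Refresh planner statistics once the load is finished.
ANALYZE items;
```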

