pg_restore takes ages


Problem description




Hi,
I use pg_restore to load a previously dumped database (10 million records).
The load of the data runs quite fast, but when it starts creating the
triggers for foreign keys it takes forever.

Isn't there a faster way? After all, the triggers in the source db
already made sure the data was clean.

Thanks,
Alex

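For reference, a minimal sketch of the workflow described above (database and file names are hypothetical):

    pg_dump -Fc sourcedb > sourcedb.dump      # custom-format dump readable by pg_restore
    pg_restore -d targetdb sourcedb.dump      # bulk data loads fast; the foreign-key step is what crawls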

Recommended answer

Alex wrote:

> Hi,
> I use pg_restore to load a previously dumped database (10 million records).
> The load of the data runs quite fast, but when it starts creating the
> triggers for foreign keys it takes forever.
>
> Isn't there a faster way? After all, the triggers in the source db
> already made sure the data was clean.

You can try creating the indexes/triggers first and then loading the data.
Either way it will take a while before you get a usable database, but see
which works faster for you.

Personally, I was in a situation where PostgreSQL was hogging space while
creating an index on a table that had 81M rows and a 3GB disk footprint. I
dropped the table, recreated it, and created the index before loading the
data. The loading was slow with this approach, but it finished in 3 hours,
and I had an up-to-date index as well; I just had to run VACUUM over it.

Take your pick..

Shridhar
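For concreteness, the two orderings discussed above, sketched in SQL against a hypothetical big_table (table, column, and file names are illustrative):

    -- Approach A: load first, build the index afterwards (usually faster overall)
    COPY big_table FROM '/tmp/big_table.dat';
    CREATE INDEX big_table_id_idx ON big_table (id);

    -- Approach B (what Shridhar describes): index first, then load.
    -- The load itself is slower, but the index is current the moment the data is in.
    CREATE INDEX big_table_id_idx ON big_table (id);
    COPY big_table FROM '/tmp/big_table.dat';
    VACUUM big_table;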




>>>>> "SD" == Shridhar Daithankar <sh*****************@persistent.co.in> writes:

SD> You can try creating index/triggers first and load the data. At the
SD> end it will take a while before you get a usable database with either
SD> approach but see what works faster for you.

The triggers and FKs don't do much at the time they are created.
They work upon update/insert/delete of data.
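This is why adding a foreign key after the load tends to be cheaper than checking it row by row during the load: ALTER TABLE ... ADD CONSTRAINT validates the existing rows in one pass. A minimal sketch, assuming hypothetical orders and customers tables:

    -- FK already present during the load: the constraint check fires for every inserted row
    COPY orders FROM '/tmp/orders.dat';

    -- FK added after the load: a single validation scan over the table
    ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id);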

SD> footprint. I dropped the table and recreated it. Also created index
SD> before loading data. The loading was slow with this approach but it
SD> finished in 3 hours. And I had an updated index as well. Just had to
SD> run vacuum over it.

I cannot believe that this was faster than loading the data followed by
CREATE INDEX. Perhaps you needed to bump sort_mem so the index could
be created more efficiently. I also find that bumping up
checkpoint_segments to a high number speeds things up considerably.
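The settings Vivek mentions, as they might be applied; the values are illustrative, and sort_mem is the pre-8.0 name of what later became work_mem:

    -- per-session sort memory (in KB) for the index build
    SET sort_mem = 65536;
    CREATE INDEX big_table_id_idx ON big_table (id);

    -- in postgresql.conf: more WAL segments between checkpoints helps bulk loads
    -- checkpoint_segments = 32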

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: kh***@kciLink.com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/



Vivek Khera wrote:
> SD> footprint. I dropped the table and recreated it. Also created index
> SD> before loading data. The loading was slow with this approach but it
> SD> finished in 3 hours. And I had an updated index as well. Just had to
> SD> run vacuum over it.
>
> I cannot believe that this was faster than loading the data followed by
> CREATE INDEX. Perhaps you needed to bump sort_mem so the index could
> be created more efficiently. I also find that bumping up
> checkpoint_segments to a high number speeds things up considerably.




Well, in my case speed wasn't the issue. I put

