postgresql: out of shared memory?
Question
I'm running a bunch of queries using Python and psycopg2. I create one large temporary table with about 2 million rows, then fetch 1000 rows at a time from it using cur.fetchmany(1000) and run more extensive queries involving those rows. The extensive queries are self-sufficient, though - once they are done, I don't need their results anymore when I move on to the next 1000.
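The batching loop described above can be sketched roughly like this (a minimal, assumed shape of the code; the generator name fetch_in_batches and the processing step are illustrative, not from the original post):

```python
# Illustrative sketch of the fetchmany() batching pattern described above.
def fetch_in_batches(cur, batch_size=1000):
    """Yield successive lists of rows from an open cursor until exhausted."""
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        yield rows
```

With psycopg2 this would be driven by something like cur.execute("SELECT * FROM my_temp_table") followed by a loop over fetch_in_batches(cur) that runs the extensive per-batch queries.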
However, about 1000000 rows in, I got an exception from psycopg2:
psycopg2.OperationalError: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Funnily enough, this happened when I was executing a query to drop some temporary tables that the more extensive queries created.
Why might this happen? Is there any way to avoid it? It was annoying that this happened halfway through, meaning I have to run it all again. What might max_locks_per_transaction have to do with anything?
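For context on the hint: PostgreSQL's shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) lock slots, so one workaround is to raise the setting. A sketch (the value 256 is just an example; changing this setting requires a server restart):

```sql
-- Check the current setting (the default is 64):
SHOW max_locks_per_transaction;

-- Raise it in postgresql.conf (restart required):
--   max_locks_per_transaction = 256
-- or, on PostgreSQL 9.4+, via ALTER SYSTEM:
ALTER SYSTEM SET max_locks_per_transaction = 256;
```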
NOTE: I'm not doing any .commit()s, but I'm deleting all the temporary tables I create, and I'm only touching the same 5 tables anyway for each "extensive" transaction, so I don't see how running out of table locks could be the problem...
Answer
Did you create multiple savepoints with the same name without releasing them?
I followed these instructions, repeatedly executing SAVEPOINT savepoint_name but without ever executing any corresponding RELEASE SAVEPOINT savepoint_name statements. PostgreSQL was just masking the old savepoints, never freeing them. It kept track of each one until it ran out of memory for locks. I think my PostgreSQL memory limits were much lower; it only took ~10,000 savepoints for me to hit max_locks_per_transaction.
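For anyone hitting the same thing: the fix is to pair every SAVEPOINT with a matching RELEASE SAVEPOINT. A hedged sketch of that pattern with psycopg2's execute() (the helper name run_with_savepoint is made up for illustration):

```python
# Illustrative sketch: wrap a unit of work in a savepoint and always
# release it, so PostgreSQL can free the associated lock bookkeeping
# instead of stacking savepoints for the life of the transaction.
def run_with_savepoint(cur, name, statements):
    cur.execute("SAVEPOINT " + name)
    try:
        for sql in statements:
            cur.execute(sql)
    except Exception:
        # ROLLBACK TO re-establishes the savepoint, so release it as well
        cur.execute("ROLLBACK TO SAVEPOINT " + name)
        cur.execute("RELEASE SAVEPOINT " + name)
        raise
    else:
        cur.execute("RELEASE SAVEPOINT " + name)
```

RELEASE SAVEPOINT also destroys any savepoints established since the named one, so releasing after each batch keeps the savepoint stack from growing.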