Can PostgreSQL 9.1 leak locks? (out of shared memory / increase max_pred_locks_per_transaction)

Question

We recently upgraded to PostgreSQL 9.1.6 (from 8.3). Our test server indicated that max_pred_locks_per_transaction should be set at least as high as 900 (which is way beyond the recommended setting of 64).

We're now in production, and I've had to increase this parameter many times, as our log will start filling with:

ERROR:  53200: out of shared memory
HINT:  You might need to increase max_pred_locks_per_transaction.
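
For reference, this parameter lives in postgresql.conf (the current value can be checked with SHOW max_pred_locks_per_transaction). A minimal sketch of the change; on 9.1 this setting sizes a shared-memory table, so it only takes effect after a full server restart:

# postgresql.conf
# Sizes the predicate (SIRead) lock table in shared memory.
# On 9.1 a change here requires a full server restart.
max_pred_locks_per_transaction = 3000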

With a client connection setting of 600 (but a pooling system that never goes over 100 clients):

max_pred_locks_per_transaction: We went to 3000. Ran out in about a day. Went to 9000, ran out in about 3 days.

I now have it set to 30000, and since this number is the average allocated per allowed client connection, I now have around 5 GB of shared memory dedicated to lock space!
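
For what it's worth, the arithmetic behind that figure (the per-entry size is inferred from my own numbers, not a documented constant): the shared predicate lock table is sized for max_pred_locks_per_transaction × (max_connections + max_prepared_transactions) entries, so roughly

30000 locks/connection × 600 connections = 18,000,000 entries
5 GB ÷ 18,000,000 entries ≈ 300 bytes per entry (implied)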

I do have shared_buffers set rather high (24GB at the moment), which is over the 40% RAM figure. (I plan to tune this down to about 25% of RAM at the next restart).

That tuning turned out to be a bad idea. My database has a lot of heavy queries, and having half of a large chunk of RAM dedicated to shared_buffers keeps it from bogging down, since it can cache the larger tables completely.

On average, I see somewhere around 5-10 active queries at a time. Our query load far outstrips our update load.

Anybody care to tell me how I might track down what is going wrong here? With such a small update set, I really can't figure out why we are running out of locks so often...it really does smell like a leak to me.

Anyone know how to examine where the locks are going? (e.g. how might I read the content of pg_locks with respect to this issue)
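
For context, predicate locks appear in pg_locks with mode 'SIReadLock', so I assume something like this would show where they accumulate (just a sketch):

-- count predicate (SIRead) locks per relation and backend
SELECT relation::regclass AS rel, locktype, pid, count(*)
FROM pg_locks
WHERE mode = 'SIReadLock'
GROUP BY relation, locktype, pid
ORDER BY count(*) DESC
LIMIT 20;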

Answer

This sounds like it is likely to be caused by a long-running transaction. Predicate locks for one transaction cannot be released until all overlapping read-write transactions complete. This includes prepared transactions.

Take a look at both pg_stat_activity and pg_prepared_xacts for any transactions which started (or were prepared) more than a few minutes ago.
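
For example, something along these lines (column names as of 9.1; procpid and current_query were renamed to pid and query in 9.2):

-- transactions that have been open for more than five minutes
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start < now() - interval '5 minutes'
ORDER BY xact_start;

-- prepared transactions that have been sitting around
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
WHERE prepared < now() - interval '5 minutes'
ORDER BY prepared;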

The only other probable, non-bug explanation I can think of is that you have tables with hundreds or thousands of partitions.
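
(On 9.1, partitions are inheritance children, so a quick sanity check is to count child tables per parent in pg_inherits:)

-- child tables (partitions) per parent, largest first
SELECT inhparent::regclass AS parent, count(*) AS children
FROM pg_inherits
GROUP BY inhparent
ORDER BY children DESC
LIMIT 10;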

If neither of these explanations makes sense, I would love to get my hands on a reproducible test case. Is there any way to create tables, populate them with queries using generate_series() and make this happen in a predictable way? With such a test case I can definitely track down the cause.
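
To sketch the shape I have in mind (table name and values are made up; the point is one long-lived serializable transaction overlapping many short read-write serializable ones, which keeps their SIRead locks pinned):

CREATE TABLE pred_test (id int PRIMARY KEY, val int);
INSERT INTO pred_test
SELECT g, g % 100 FROM generate_series(1, 100000) g;

-- session 1: start a serializable transaction and leave it open
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM pred_test WHERE val = 42;
-- ...no COMMIT yet...

-- sessions 2..N: run many short read-write serializable transactions;
-- their predicate locks cannot be freed while session 1 overlaps them
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT val FROM pred_test WHERE id = 1;
UPDATE pred_test SET val = val + 1 WHERE id = 1;
COMMIT;

-- meanwhile, watch the predicate lock count climb
SELECT count(*) FROM pg_locks WHERE mode = 'SIReadLock';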
