postgresql deadlock


Question

Sometimes PostgreSQL raises deadlock errors.

The trigger on the table uses FOR UPDATE.

http://pastebin.com/L1a8dbn4

Log (the INSERT statements are truncated):

2012-01-26 17:21:06 MSK ERROR:  deadlock detected
2012-01-26 17:21:06 MSK DETAIL:  Process 2754 waits for ExclusiveLock on tuple (40224,15) of relation 735493 of database 734745; blocked by process 2053.
Process 2053 waits for ShareLock on transaction 25162240; blocked by process 2754.
Process 2754: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
Process 2053: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (4071267066, 935967, 11372945, 'text2') RETURNING comment.id;
2012-01-26 17:21:06 MSK HINT:  See server log for query details.
2012-01-26 17:21:06 MSK CONTEXT:  SQL statement "SELECT comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE"
PL/pgSQL function "increase_comment_counter" line 5 at SQL statement
2012-01-26 17:21:06 MSK STATEMENT:  INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;

And the trigger on the comment table:

CREATE OR REPLACE FUNCTION increase_comment_counter() RETURNS TRIGGER AS $$
DECLARE
comments_count_var INTEGER;
BEGIN
    SELECT INTO comments_count_var comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE;
    UPDATE content SET comments_count = comments_count_var + 1, last_comment_dt = now()  WHERE content.id = NEW.content_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;



CREATE TRIGGER increase_comment_counter_trigger AFTER INSERT ON comment FOR EACH ROW EXECUTE PROCEDURE increase_comment_counter();

Why does this happen?

Answer

These are two comments being inserted with the same content_id. Merely inserting the comment will take out a SHARE lock on the content row, in order to stop another transaction deleting that row until the first transaction has completed.
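That implicit SHARE lock comes from the foreign-key check on comment.content_id against content.id. As an illustration of the locking behaviour (not the literal statement PostgreSQL executes internally), inserting the 'text1' comment behaves roughly as if it also ran:

SELECT 1 FROM content WHERE content.id = 935967 FOR SHARE;  -- row lock taken by the FK check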

However, the trigger then goes on to upgrade the lock to EXCLUSIVE, and this can be blocked by a concurrent transaction performing the same process. Consider the following sequence of events:

Txn 2754                      Txn 2053
Insert Comment
                              Insert Comment
Lock Content#935967 SHARE
  (performed by fkey)
                              Lock Content#935967 SHARE
                                (performed by fkey)
Trigger
Lock Content#935967 EXCLUSIVE
(blocks on 2053's share lock)
                              Trigger
                              Lock Content#935967 EXCLUSIVE
                              (blocks on 2754's share lock)

Deadlock.

One solution is to immediately take an exclusive lock on the content row before inserting the comment. i.e.

SELECT 1 FROM content WHERE content.id = 935967 FOR UPDATE
INSERT INTO comment(.....)
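A minimal sketch of that ordering, filled in with the columns and values from the log above (the literal ids are just the ones from the example):

BEGIN;
-- Take the exclusive lock on the content row up front, so the trigger's
-- SELECT ... FOR UPDATE never has to upgrade a SHARE lock that the
-- foreign-key check already holds.
SELECT 1 FROM content WHERE content.id = 935967 FOR UPDATE;
INSERT INTO comment (user_id, content_id, reply_id, text)
VALUES (1756235868, 935967, 11378142, 'text1')
RETURNING comment.id;
COMMIT;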

Another solution is simply to avoid this "cached counts" pattern completely, except where you can prove it is necessary for performance. If it is, consider keeping the cached count somewhere other than the content table, for example in a dedicated counter table. That will also cut down on the update traffic to the content table every time a comment gets added. Or just re-select the count, or cache it with memcached in the application. There is no getting round the fact that wherever you store this cached count, it will be a choke point and has to be updated safely.
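One possible shape of that dedicated counter table, as a sketch only (the table and column names here are made up, not part of the original schema):

-- Hypothetical counter table, one row per content row
CREATE TABLE content_comment_counter (
    content_id      INTEGER PRIMARY KEY REFERENCES content (id),
    comments_count  INTEGER NOT NULL DEFAULT 0,
    last_comment_dt TIMESTAMP
);

-- The trigger function then updates only this table, so the comment INSERT's
-- foreign-key SHARE lock on content never needs to be upgraded;
-- a concurrent inserter for the same content_id simply waits here.
CREATE OR REPLACE FUNCTION increase_comment_counter() RETURNS TRIGGER AS $$
BEGIN
    UPDATE content_comment_counter
       SET comments_count = comments_count + 1,
           last_comment_dt = now()
     WHERE content_id = NEW.content_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;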

