Postgres trigger and row locking


Question

Please help me understand how triggers and row locks can interact.

I bulk load records into a table with statements something like this:

BEGIN;
INSERT INTO table_a VALUES (record1) , (record2), (record3)………;
INSERT INTO table_a VALUES (record91) , (record92), (record93)………;
…..
….
COMMIT;

There can be several hundred records in a single INSERT, and there can be several dozen INSERT statements between COMMITs.

Table_a has a trigger on it defined as:

AFTER INSERT ON table_a FOR EACH ROW EXECUTE PROCEDURE foo();

The procedure foo() parses each new row as it's added, and will (amongst other things) update a record in a summary table_b (uniquely identified by primary key). So, for every record inserted into table_a, a corresponding record will be updated in table_b.
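The question does not show the body of foo(); a minimal sketch of such a trigger function might look like the following (the summary column row_count and the key column b_id are assumptions for illustration only):

```sql
-- Hypothetical sketch of foo(): the real function's body is not shown
-- in the question; column names here are invented for illustration.
CREATE OR REPLACE FUNCTION foo() RETURNS trigger AS $$
BEGIN
    UPDATE table_b
       SET row_count = row_count + 1   -- assumed summary column
     WHERE b_id = NEW.b_id;            -- assumed key parsed from the new row
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

The key point for locking is that the trigger's UPDATE runs inside the same transaction as the INSERT that fired it, so the row lock it takes on table_b belongs to that transaction.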

I have a second process that also attempts to (occasionally) update records in table_b. On very rare occasions it may attempt to update the same row in table_b that the bulk process is updating.

Questions: should anything in the bulk insert statements affect my second process's ability to update records in table_b? I understand that the bulk insert process will obtain a row lock each time it updates a row in table_b, but when will that row lock be released? When the individual record (record1, record2, record3, etc.) has been inserted? When the entire INSERT statement has completed? Or when the COMMIT is reached?
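While the second process is stuck, the lock situation can be inspected from a third session. This diagnostic query is not part of the original question, but it uses only standard system views:

```sql
-- Show which backends hold (or are waiting for) locks on table_b.
SELECT pid, locktype, mode, granted
  FROM pg_locks
 WHERE relation = 'table_b'::regclass;

-- On PostgreSQL 9.6 and later, the blocking backend(s) can be
-- identified directly for each waiting session:
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
  FROM pg_stat_activity
 WHERE wait_event_type = 'Lock';
```

If the second process shows up here with granted = false on a row lock, the `blocked_by` column will typically point at the bulk-load backend.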

Some more info: my overall purpose for this question is to try to understand why my second process occasionally pauses for a minute or more when trying to update a row in table_b that is also being updated by the bulk-load process. What appears to be happening is that the lock on the target record in table_b isn't actually released until the COMMIT is reached, which is contrary to what I thought ought to happen. (I thought a row lock should be released as soon as the UPDATE on that row is done.)

UPDATE after answers: yes, of course you're both right. In my mind I had somehow convinced myself that the individual updates performed within the trigger were somehow separate from the overall BEGIN and COMMIT of the whole transaction. Silly me.

The practice of adding multiple records with one INSERT, and multiple INSERTs between COMMITs, was introduced to improve the bulk load speed (which it does). I had forgotten about the side effect of increasing the time before locks are released.

Answer

What should happen when the transaction is rolled back? It is rather obvious that all inserts into table_a, as well as all updates to table_b, must be rolled back. This is why all rows of table_b updated by the trigger will remain locked until the transaction completes.
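The timeline below sketches the interaction (the session markers and placeholder values are illustrative, following the question's own elided notation):

```sql
-- Session 1 (bulk load):
BEGIN;
INSERT INTO table_a VALUES (record1), (record2);  -- trigger UPDATEs table_b;
                                                  -- those rows are now row-locked
-- ... dozens more INSERT statements, each extending the set of locked rows ...

-- Session 2 (second process), meanwhile:
UPDATE table_b SET ... WHERE ...;                 -- blocks: the row lock is held
                                                  -- by session 1's open transaction

-- Session 1:
COMMIT;                                           -- all row locks released here;
                                                  -- only now does session 2 proceed
```

Row locks in Postgres are always held until the end of the transaction; there is no statement-level release.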

Committing after each insert (reducing the number of rows inserted in a single transaction) will reduce the chance of conflicts with concurrent processes.
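In other words, the batch size becomes a trade-off between load throughput and lock hold time. A sketch of the smaller-transaction approach:

```sql
-- Smaller transactions shorten how long each table_b row lock is held,
-- at the cost of more COMMIT overhead (batch size here is illustrative):
BEGIN;
INSERT INTO table_a VALUES (record1), (record2), (record3);
COMMIT;   -- the trigger's locks on table_b are released here

BEGIN;
INSERT INTO table_a VALUES (record4), (record5), (record6);
COMMIT;
```

As an option not raised in the original answer, the second process could also `SET lock_timeout` so that a blocked UPDATE fails fast (and can be retried) instead of waiting out the bulk-load transaction.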
