SQLAlchemy: secondary relationship update
Question
I have two tables, say A and B. Both have a primary key id. They have a many-to-many relationship, SEC.
SEC = Table('sec', Base.metadata,
    Column('a_id', Integer, ForeignKey('A.id'), primary_key=True, nullable=False),
    Column('b_id', Integer, ForeignKey('B.id'), primary_key=True, nullable=False)
)
class A(Base):
    ...
    id = Column(Integer, primary_key=True)
    ...
    rels = relationship('B', secondary=SEC)

class B(Base):
    ...
    id = Column(Integer, primary_key=True)
    ...
Let's consider this piece of code.
a = A()
b1 = B()
b2 = B()
a.rels = [b1, b2]
...
#some place later
b3 = B()
a.rels = [b1, b3] # errors sometimes
Sometimes, I get an error at the last line saying
duplicate key value violates unique constraint a_b_pkey
In my understanding, it tries to add (a.id, b.id) into the 'sec' table again, resulting in a unique constraint error. Is that what's happening? If so, how can I avoid it? If not, why do I get this error?
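A plain-Python sketch of the likely mechanism (a simplification with no SQLAlchemy involved; the diff below only approximates the ORM's collection tracking): when the collection is reassigned, membership changes are computed per object instance, so a freshly constructed B that maps to the same primary key as an existing member still counts as an addition and triggers a second INSERT into 'sec'.

```python
# Hypothetical stand-in for the mapped class; only 'id' matters here.
class B(object):
    def __init__(self, id):
        self.id = id

old = [B(1), B(2)]          # current contents of a.rels
new = [B(1), B(3)]          # reassigned list; B(1) is a *different* instance

# Additions are detected by object identity, not by primary key value,
# so every element of 'new' looks like a brand-new association row.
added = [b for b in new if all(b is not o for o in old)]
assert [b.id for b in added] == [1, 3]   # id 1 would be INSERTed again
```

Reusing the same b1 instance (rather than an equal-valued copy) is what keeps the reassignment safe, which is exactly what the answer below enforces.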
Answer
The problem is you want to make sure the instances you create are unique. We can create an alternate constructor that checks a cache of existing uncommitted instances, or queries the database for an existing committed instance, before returning a new instance.
Here is a demonstration of such a method:
from sqlalchemy import Column, Integer, String, ForeignKey, Table, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship
engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(engine)
Base = declarative_base(engine)
session = Session()
class Role(Base):
    __tablename__ = 'role'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, unique=True)

    @classmethod
    def get_unique(cls, name):
        # get the session cache, creating it if necessary
        cache = session._unique_cache = getattr(session, '_unique_cache', {})
        # create a key for memoizing
        key = (cls, name)
        # check the cache first
        o = cache.get(key)
        if o is None:
            # check the database if it's not in the cache
            o = session.query(cls).filter_by(name=name).first()
            if o is None:
                # create a new one if it's not in the database
                o = cls(name=name)
                session.add(o)
            # update the cache
            cache[key] = o
        return o
Base.metadata.create_all()
# demonstrate cache check
r1 = Role.get_unique('admin') # this is new
r2 = Role.get_unique('admin') # from cache
session.commit() # doesn't fail
# demonstrate database check
r1 = Role.get_unique('mod') # this is new
session.commit()
session._unique_cache.clear() # empty cache
r2 = Role.get_unique('mod') # from database
session.commit() # nop
# show final state
print(session.query(Role).all())  # two unique instances from four create calls
The get_unique method was inspired by the example from the SQLAlchemy wiki. This version is much less convoluted, favoring simplicity over flexibility. I have used it in production systems with no problems.
There are obviously improvements that can be added; this is just a simple example. The get_unique method could be inherited from a UniqueMixin, to be used by any number of models. More flexible memoizing of arguments could be implemented. This also puts aside the problem of multiple threads inserting conflicting data, mentioned by Ants Aasma; handling that is more complex but should be an obvious extension. I leave that to you.
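A sketch of what such a UniqueMixin could look like, generalized from Role.get_unique above. The session is passed in rather than read from a module global, and the lookup is memoized on the identifying keyword arguments. StubSession and StubQuery below are hypothetical stand-ins so the sketch runs without a database; a real SQLAlchemy Session provides the same query/filter_by/first/add surface that get_unique relies on.

```python
class UniqueMixin(object):
    """Reusable get_unique for any model (a sketch, not a fixed API)."""

    @classmethod
    def get_unique(cls, session, **kwargs):
        # get the per-session cache, creating it if necessary
        cache = getattr(session, '_unique_cache', None)
        if cache is None:
            cache = session._unique_cache = {}
        # memoize on the class plus the identifying keyword arguments
        key = (cls, tuple(sorted(kwargs.items())))
        obj = cache.get(key)
        if obj is None:
            # fall back to the database, then to a new instance
            obj = session.query(cls).filter_by(**kwargs).first()
            if obj is None:
                obj = cls(**kwargs)
                session.add(obj)
            cache[key] = obj
        return obj


# Minimal stand-ins so the sketch runs without a database.
class StubQuery(object):
    def __init__(self, rows):
        self.rows, self.criteria = rows, {}

    def filter_by(self, **kwargs):
        self.criteria = kwargs
        return self

    def first(self):
        for row in self.rows:
            if all(getattr(row, k) == v for k, v in self.criteria.items()):
                return row
        return None


class StubSession(object):
    def __init__(self):
        self.pending = []

    def query(self, cls):
        return StubQuery([r for r in self.pending if isinstance(r, cls)])

    def add(self, obj):
        self.pending.append(obj)


class Role(UniqueMixin):
    def __init__(self, name):
        self.name = name


session = StubSession()
a1 = Role.get_unique(session, name='admin')
a2 = Role.get_unique(session, name='admin')  # cache hit, same object
assert a1 is a2 and len(session.pending) == 1
```

Even with the cache cleared, the stub's database lookup path returns the already-added instance, mirroring how the real version survives across the session cache being emptied.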