SQLalchemy Bulk insert with one to one relation


Problem description



I have the following models, where TableA and TableB have a 1-to-1 relationship:

from sqlalchemy import BigInteger, Column, ForeignKey, String
from sqlalchemy.orm import relationship

# db is the Flask-SQLAlchemy instance (db = SQLAlchemy(app))

class TableA(db.Model):
    id = Column(BigInteger, primary_key=True)
    title = Column(String(1024))
    # back_populates must name the attribute on TableB, which is 'a' below
    table_b = relationship('TableB', uselist=False, back_populates='a')


class TableB(db.Model):
    id = Column(BigInteger, ForeignKey(TableA.id), primary_key=True)
    a = relationship('TableA', back_populates='table_b')
    name = Column(String(1024))

When I insert one record, everything goes fine:

rec_a = TableA(title='hello')
rec_b = TableB(a=rec_a, name='world')
db.session.add(rec_b)
db.session.commit()

But when I try to do this for a batch of records:

bulk_ = []

for title, name in zip(titles, names):
    rec_a = TableA(title=title)
    bulk_.append(TableB(a=rec_a, name=name))

db.session.bulk_save_objects(bulk_)
db.session.commit()

I get the following exception:

sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1364, "Field 'id' doesn't have a default value")

Am I doing something wrong? Did I configure the model wrong? Is there a way to bulk commit this type of data?

Solution

The error you see is thrown by MySQL. It is complaining that the rows inserted into table_b arrive without a value for id: bulk_save_objects() does not cascade relationships, so the id that would normally come from the related TableA row is never populated, and the insert violates the primary/foreign key on table_b.

One technique could be to write all the titles in one bulk statement, then write all the names in a second bulk statement. Also, I've never successfully passed relationships to bulk operations, so this method relies on inserting simple values.

bulk_titles = [TableA(title=title) for title in titles]
session.bulk_save_objects(bulk_titles, return_defaults=True)
bulk_names = [TableB(id=title.id, name=name) for title, name in zip(bulk_titles, names)]
session.bulk_save_objects(bulk_names)

return_defaults=True is needed above because we need title.id in the 2nd bulk operation. But it greatly reduces the performance gains of the bulk operation, since the INSERTs can no longer be batched when the generated primary keys have to be fetched back.

To avoid the performance degradation due to return_defaults=True, you could generate the primary keys from the application rather than the database, e.g. using UUIDs, or by fetching the max id in each table and generating a range from that start value.
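
For example, a minimal sketch of the max-id variant could look like this (my own addition, not from the original answer; it assumes titles and names are the lists from the question, and that no other process is inserting into these tables concurrently):

from sqlalchemy import func

# Assign primary keys in the application, starting after the current max id.
start_id = (db.session.query(func.max(TableA.id)).scalar() or 0) + 1

bulk_titles = [TableA(id=start_id + i, title=title)
               for i, title in enumerate(titles)]
bulk_names = [TableB(id=start_id + i, name=name)
              for i, name in enumerate(names)]

# The ids are already known, so return_defaults is not needed and both bulk
# saves stay fast; TableA rows are inserted first so the foreign key on
# TableB.id is satisfied.
db.session.bulk_save_objects(bulk_titles)
db.session.bulk_save_objects(bulk_names)
db.session.commit()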

Another technique might be to write your bulk insert statements using SQLAlchemy Core or plain-text SQL.
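
As a rough sketch (again my own, reusing the application-assigned start_id, titles and names from the snippet above), the Core version could issue one executemany-style INSERT per table:

a_rows = [{'id': start_id + i, 'title': title} for i, title in enumerate(titles)]
b_rows = [{'id': start_id + i, 'name': name} for i, name in enumerate(names)]

with db.engine.begin() as conn:
    # Passing a list of dicts makes SQLAlchemy emit an executemany INSERT.
    conn.execute(TableA.__table__.insert(), a_rows)
    conn.execute(TableB.__table__.insert(), b_rows)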
