Strategies for speeding up batch ORM operations in Django


Question


One of my API calls can result in updates to a large number of objects (Django models). I'm running into performance issues with this since I'm updating each item individually, saving, and moving on to the next:

for item in Something.objects.filter(x='y'):
    item.a = "something"
    item.save()  # one UPDATE query per object

Sometimes my filter criterion looks like "where x in ('a','b','c',...)".

It seems the official answer to this is "won't fix". I'm wondering what strategies people are using to improve performance in these scenarios.

Solution

The ticket you linked to is for bulk creation - if you're not relying on an overridden save method or pre/post save signals to do bits of work on save, QuerySet has an update method which you can use to perform an UPDATE on the filtered rows:

Something.objects.filter(x__in=['a', 'b', 'c']).update(a='something')
