Strategies for speeding up batch ORM operations in Django
Question
One of my API calls can result in updates to a large number of objects (Django models). I'm running into performance issues with this since I'm updating each item individually, saving, and moving on to the next:

```python
for item in Something.objects.filter(x='y'):
    item.a = "something"
    item.save()
```

Sometimes my filter criterion looks like "where x in ('a','b','c',...)".

It seems the official answer to this is "won't fix". I'm wondering what strategies people are using to improve performance in these scenarios.
Solution

The ticket you linked to is for bulk creation - if you're not relying on an overridden save method or pre/post save signals to do bits of work on save, QuerySet has an update method which you can use to perform an UPDATE on the filtered rows:

```python
Something.objects.filter(x__in=['a', 'b', 'c']).update(a='something')
```