SQLAlchemy and going through a large result set
Question

I need to read data from all of the rows of a large table, but I don't want to pull all of the data into memory at one time. Is there a SQLAlchemy function that will handle paging? That is, pull several rows into memory and then fetch more when necessary.
I understand you can do this with limit and offset, as this article suggests, but I'd rather not handle that if I don't have to.
Answer
If you are using Flask-SqlAlchemy, see the paginate method of query. paginate offers several methods that simplify pagination.
record_query = Record.query.paginate(page, per_page, False)  # positional args: (page, per_page, error_out)
total = record_query.total   # total number of matching rows
record_items = record_query.items  # the rows on this page
Note that the first page should be 1, not 0; otherwise accessing .total raises a division-by-zero error.
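The page-1 caveat and the attributes shown above generalize to a loop over every page. Because running Flask-SqlAlchemy needs an app and a database, the sketch below stubs a minimal Pagination-like object (the FakePagination class is purely illustrative); with Flask-SqlAlchemy you would call Record.query.paginate(...) in its place:

```python
import math

class FakePagination:
    """Illustrative stand-in for Flask-SqlAlchemy's Pagination object."""
    def __init__(self, data, page, per_page):
        self.total = len(data)
        self.items = data[(page - 1) * per_page : page * per_page]
        self.pages = math.ceil(self.total / per_page)
        self.has_next = page < self.pages
        self.next_num = page + 1 if self.has_next else None

def fetch_all(data, per_page=3):
    """Walk every page, starting from page 1 (page 0 would break the math)."""
    page, out = 1, []
    while True:
        pagination = FakePagination(data, page, per_page)
        out.extend(pagination.items)  # only per_page items held per step
        if not pagination.has_next:
            return out
        page = pagination.next_num

collected = fetch_all(list(range(10)))
```

Each iteration holds at most per_page items, which is the memory behavior the question asks for.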