SQLAlchemy and going through a large result set
Problem description
I need to read data from all of the rows of a large table, but I don't want to pull all of the data into memory at one time. Is there a SQLAlchemy function that will handle paging? That is, pull several rows into memory and then fetch more when necessary.
I understand you can do this with `limit` and `offset` as this article suggests, but I'd rather not handle that myself if I don't have to.
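For reference, the limit/offset approach the question mentions amounts to fetching fixed-size batches in a loop until a batch comes back empty. A minimal sketch of that loop, with `fetch_page` as a hypothetical stand-in for a real query such as `session.query(Model).limit(n).offset(m).all()`:

```python
def paginate_query(fetch_page, per_page=100):
    """Yield rows batch by batch using limit/offset-style paging.

    fetch_page(limit, offset) is a placeholder for a database call;
    it must return at most `limit` rows starting at `offset`.
    """
    offset = 0
    while True:
        rows = fetch_page(per_page, offset)
        if not rows:          # an empty batch means we are past the end
            break
        for row in rows:
            yield row
        offset += per_page

# Simulated 250-row table standing in for a real query result.
table = list(range(250))

def fetch_page(limit, offset):
    return table[offset:offset + limit]

rows = list(paginate_query(fetch_page, per_page=100))
```

Only one batch of `per_page` rows is held in memory at a time, which is exactly the behavior the question is after; the cost is one query round-trip per batch.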
Recommended answer
If you are using Flask-SQLAlchemy, see the `paginate` method of `query`. `paginate` offers several options that simplify pagination.
record_query = Record.query.paginate(page, per_page, False)  # False disables error_out (no 404 on bad pages)
total = record_query.total          # total number of matching rows
record_items = record_query.items   # rows on the current page only
The first page should be 1; otherwise `.total` raises a division-by-zero exception.
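To make the page arithmetic behind the answer concrete, here is a minimal, hypothetical stand-in for the pagination object (the real Flask-SQLAlchemy `Pagination` object is richer); it shows why pages are numbered from 1 rather than 0:

```python
import math

class PageResult:
    """Toy model of a pagination result: total count, page count,
    and the slice of rows belonging to one page."""
    def __init__(self, rows, page, per_page):
        self.total = len(rows)
        self.pages = math.ceil(self.total / per_page)
        self.page = page
        # Pages are 1-based: page 1 covers rows[0:per_page].
        self.items = rows[(page - 1) * per_page : page * per_page]
        self.has_next = page < self.pages

rows = list(range(7))
p1 = PageResult(rows, 1, 3)   # items [0, 1, 2], 3 pages in total
```

With `page=0` the slice `rows[-per_page:0]` is empty, which is the kind of off-by-one that the warning above about starting at page 1 is guarding against.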