Getting all data in one query and processing in PHP vs. querying repetitively for each piece of data


Question

I know ahead of time that I'm eventually going to need a bunch of rows from a certain table, but I don't know in what order I'll need each one.

In general, from a performance and best practices point of view, is it a better idea to:

  1. Do a single general query that returns all the rows I know I'll eventually need, and then use PHP (or whatever) to pick out each one as I need it; or,

  2. Do lots of queries using WHERE to pick out only each row as I need it.

The number of rows I'll need is something like 300-600, and the page shouldn't be getting loaded more than a few times per minute (although I'd like to pick a solution that will cope under heavier loads than this).

Edit for more info: I'm using PostgreSQL. The table is going to get quite large over time, probably with many thousands of rows. I can determine before I do any queries exactly which rows I will eventually need (as I said, n ~ 500), but what I don't know is what order I'll need them in. So it's either 1 query and n PHP searches for specific values in an array, or n queries with a different WHERE clause each time. I'm leaning towards the former.
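
For illustration, here is a minimal sketch of the two alternatives being compared, using PHP's pgsql functions; the items table, id column, $conn, $neededIds, and $someId names are placeholders, not from the original question:

// Approach 1: one round trip for all the rows known to be needed,
// then direct lookups in a PHP array keyed by id.
$ids = '{' . implode(',', array_map('intval', $neededIds)) . '}';
$result = pg_query_params($conn,
    'SELECT * FROM items WHERE id = ANY($1::int[])', array($ids));
$cache = array();
while ($row = pg_fetch_assoc($result)) {
  $cache[$row['id']] = $row;
}
// ... later, in whatever order turns out to be needed:
$row = $cache[$someId];

// Approach 2: a separate query (and round trip) each time a row is needed.
$result = pg_query_params($conn,
    'SELECT * FROM items WHERE id = $1', array($someId));
$row = pg_fetch_assoc($result);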

Answer

It's nearly always faster to do one query instead of n queries when possible. If your result set is in an associative array, the individual rows won't be indexed within it. But if you iterate over the result array and key the elements by a row ID, random access to them in PHP becomes much faster (rather than having to traverse the whole array each time to find the row you need). Of course, doing so will use a bit more memory to copy the array elements.

Either way, this is much faster than returning to the database each time, especially if you will hit it hundreds of times.

The best option is to pull your records from the RDBMS in the order in which you'll need them, using ORDER BY. Then random access isn't an issue.
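
As a hedged sketch of that idea (assuming a hypothetical items table whose required order can be expressed by a sort_order column; process_row() stands in for whatever the page does with each row):

$result = pg_query_params($conn,
    'SELECT * FROM items WHERE id = ANY($1::int[]) ORDER BY sort_order',
    array('{' . implode(',', array_map('intval', $neededIds)) . '}'));
while ($row = pg_fetch_assoc($result)) {
  // Rows arrive in exactly the order the page consumes them,
  // so no searching or re-keying is needed on the PHP side.
  process_row($row);
}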

In any case, 600 rows doesn't sound like that much. Make sure you have indexes on appropriate columns so your query is optimized and you'll be fine.
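
For example (a sketch only; if id is already the primary key of the hypothetical items table, PostgreSQL has created such an index for you):

// One-time setup, normally run in psql or a migration rather than in page code:
// index the column used in the WHERE clause so the query doesn't scan the table.
pg_query($conn, 'CREATE INDEX items_id_idx ON items (id)');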

EDIT When fetching rows, create an array index. Now you don't need to search the array, as you can just access it by index.

$rowset = array();
while ($row = pg_fetch_assoc($result)) {
  // Append rows to $rowset indexed by the 'id' column
  // Use whatever column you'll need to be searching with PHP
  $rowset[$row['id']] = $row;
}
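
With the rows keyed like this, a later lookup is a direct array access rather than a search; the id value 123 and the name column below are just example placeholders:

// Later, in whatever order the page needs the rows:
$row = $rowset[123];   // direct access by id, no array search
echo $row['name'];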
