PHP - how to read big remote files efficiently and use buffer in loop


Problem Description

I would like to understand how to use the buffer of a read file.

Assuming we have a big file with a list of emails, line by line (the delimiter is a classic \n),

now we want to compare each line with each record of a table in our database, in a check like line_of_file == table_row.

This is a simple task if you have a normal file; but if you have a huge file, the server usually stops the operation after a few minutes.

So what's the best way of doing this kind of thing with the file buffer?

What I have so far is something like this:

$buffer = file_get_contents('file.txt');
while ($row = mysql_fetch_array($result)) {
  // preg_quote() escapes regex metacharacters such as "." in the address;
  // 'email' is the table column holding the address
  if (preg_match('/' . preg_quote($row['email'], '/') . '/im', $buffer)) {
    echo $row['email'];
  }
}

$buffer = file_get_contents('file.txt');
$lines = preg_split('/\n/', $buffer);
// or: $lines = explode("\n", $buffer);
// note the double quotes: explode('\n', ...) would split on a literal
// backslash-n, not on newlines
while ($row = mysql_fetch_array($result)) {
  if (in_array($row['email'], $lines)) {
    echo $row['email'];
  }
}
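As a side note, the in_array() check above rescans the whole array for every database row. A minimal sketch of a faster variant, using array_flip() so each lookup is an O(1) isset(); the demo data stands in for the real file and for mysql_fetch_array(), and the 'email' column name is an assumption:

```php
<?php
// Demo data standing in for the real file.txt
file_put_contents('emails.txt', "a@example.com\nb@example.com\n");

// file() splits the file into lines; the flags strip the trailing \n
// from each line and skip blank lines.
$lines = file('emails.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

// array_flip() turns the values into keys, so each membership test
// becomes an O(1) isset() instead of an O(n) in_array() scan.
$emails = array_flip($lines);

// Two sample rows stand in for the mysql_fetch_array($result) loop.
$rows = [['email' => 'a@example.com'], ['email' => 'c@example.com']];
foreach ($rows as $row) {
    if (isset($emails[$row['email']])) {
        echo $row['email'], "\n";
    }
}
```

This still loads the whole file into memory at once, so it only speeds up the per-row check; it does not address the memory problem itself.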


Recommended Answer

Like already suggested in my close votes to your question (hence CW):

You can use SplFileObject, which implements Iterator, to iterate over a file line by line and save memory. See my answers to

  • Least memory intensive way to read a file in PHP and
  • How to save memory when reading a file in Php?

for example.
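A minimal sketch of that suggestion; the file name and demo contents are placeholders. SplFileObject hands you one line at a time, so the whole file never has to sit in memory at once:

```php
<?php
// Demo data standing in for the big remote file.
file_put_contents('emails.txt', "a@example.com\nb@example.com\n");

// SplFileObject implements Iterator: foreach pulls one line at a time
// instead of loading everything with file_get_contents().
$file = new SplFileObject('emails.txt');
$file->setFlags(
    SplFileObject::READ_AHEAD |   // needed for SKIP_EMPTY to work reliably
    SplFileObject::SKIP_EMPTY |
    SplFileObject::DROP_NEW_LINE  // strip the trailing \n from each line
);

$seen = [];
foreach ($file as $line) {
    // The per-line database comparison would go here; collecting the
    // lines just demonstrates the iteration.
    $seen[] = $line;
}
```

Because only the current line is held in memory, this scales to files far larger than the available PHP memory limit.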

