File() too slow


Problem Description


I am trying to process a CSV file but am having trouble with my host's
maximum execution time of 30 seconds.

This is how the script works at the moment:

The user uploads their CSV file.
The script goes through the file with file() and writes smaller chunk files
(this splitting step is sketched just after this post).
The script then works through the smaller files, populating the database,
deleting each processed file and refreshing itself, thus starting again.

This system works for files of up to 15000 rows, but I need to be able to
process larger files. The bottleneck is the initial splitting, since I use
the file() function to read the entire uploaded file.

Does anyone know a quicker way to split a file into smaller chunks?

TIA
RG
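
The post does not include the actual code, but a minimal sketch of the
file()-based splitting step described above might look like this (the file
names and the 1000-line chunk size are assumptions, not the poster's code):

<?php
// Hypothetical sketch of the splitting step described in the question.
// file() loads the ENTIRE upload into memory as an array of lines,
// which is what hurts on large files.
$lines  = file('upload.csv');             // whole file in memory at once
$chunks = array_chunk($lines, 1000);      // 1000 rows per chunk file
foreach ($chunks as $i => $chunk) {
    // file() keeps each line's trailing newline, so a plain join suffices
    file_put_contents("chunk_$i.csv", implode('', $chunk));
}
?>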

Recommended Answer

> This system works for files up to 15000 rows, but I need to be able to
> process larger files.
> The bottleneck is with the initial splitting, since I use the file()
> function to read the entire uploaded file.



file() is by far NOT the bottleneck in your present problem, as it is not
any slower than reading the file with fread() and splitting it up
afterwards.

The bottleneck is (it seems) your database compare / write logic.

It sounds like you read in e.g. 15000 lines, break them down into chunks
and then work through all the chunks line by line, checking the database,
comparing the database data with your CSV and then deciding what to do
(delete, insert, update ...).

At 15000 entries THIS is what makes your script time out, not the file()
call.

If you do not believe it, try to manually read and explode it ...

regards

timo
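
For reference, Timo's "manually read and explode it" check could be
sketched like this; a minimal timing comparison, assuming a placeholder
file name (this is not code from the thread):

<?php
// Hedged sketch: time file() against fread() + explode() on the same
// file, to test the claim that file() is not the slow part.
$path = 'upload.csv';                     // placeholder name

$t0 = microtime(true);
$a  = file($path);                        // read + split in one call
$t1 = microtime(true);

$fp  = fopen($path, 'r');
$raw = fread($fp, filesize($path));       // read the whole file raw
fclose($fp);
$b  = explode("\n", $raw);                // split it manually
$t2 = microtime(true);

printf("file():            %.5f s (%d lines)\n", $t1 - $t0, count($a));
printf("fread()+explode(): %.5f s (%d lines)\n", $t2 - $t1, count($b));
?>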





"Timo Henke" <we*******@fli7e.de> wrote in message
news:bl*************@news.t-online.com...
> file() is by far NOT the bottleneck in your present problem, as it is
> not any slower than reading the file with fread() and splitting it up
> afterwards.
>
> The bottleneck is (it seems) your database compare / write logic.
>
> At 15000 entries THIS is what makes your script time out, not the
> file() call.





The initial splitting of the file is the bottleneck; I do not compare
anything in this procedure. There is no problem with the database comparing
etc.: it seems to take about 2 seconds to do around 3000 MySQL queries,
which is fine.

I just need a quick way to initially split the large file into smaller
files (see the sketch after this post).
TIA
RG
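
One common way to split without reading the entire file at once is to
stream it with fgets() and write chunk files as you go. A minimal sketch,
with the file names and the 1000-line chunk size assumed (not taken from
the thread):

<?php
// Hedged sketch: split a large CSV into chunk files without ever holding
// the whole upload in memory, by streaming it line by line with fgets().
$in        = fopen('upload.csv', 'r');
$chunkSize = 1000;
$lineNo    = 0;
$chunkNo   = 0;
$out       = null;

while (($line = fgets($in)) !== false) {
    if ($lineNo % $chunkSize === 0) {     // time to start a new chunk file
        if ($out !== null) {
            fclose($out);
        }
        $out = fopen('chunk_' . $chunkNo++ . '.csv', 'w');
    }
    fwrite($out, $line);
    $lineNo++;
}

if ($out !== null) {
    fclose($out);
}
fclose($in);
?>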


> The initial splitting of the file is the bottleneck; I do not compare
> anything in this procedure. There is no problem with the database
> comparing etc.: it seems to take about 2 seconds to do around 3000 MySQL
> queries, which is fine.




Let's talk about file sizes. I just tried it: reading (and splitting) a
19 MB file with 322000 lines took 0.39843 seconds on my machine:

<?php

list(
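
The posted snippet breaks off after "list(" in the source. As a hedged
reconstruction, a timing test of that era would typically use the
list()/explode()/microtime() timer idiom; the following sketch (the file
name is a placeholder) shows what it might have looked like:

<?php
// Hedged reconstruction of the truncated benchmark above; the original
// code is cut off after "list(", so this is a guess at the classic
// list()/explode()/microtime() timer idiom.
function getmicrotime() {
    // microtime() without arguments returns "usec sec" as a string
    list($usec, $sec) = explode(' ', microtime());
    return (float)$usec + (float)$sec;
}

$start = getmicrotime();
$lines = file('big.csv');   // 'big.csv' is a placeholder for the 19 MB file
$stop  = getmicrotime();

printf("%d lines read in %.5f seconds\n", count($lines), $stop - $start);
?>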

