PHP fwrite() file leads to internal server error


Problem description

I want to upload large files to my server. Before uploading, I split each file into chunks of at most 1 MB. When a chunk has been uploaded, it is appended to the file on the server. On my local server everything works fine, but when I test this script on my web server (hosted by Strato), the process quits with an Internal Server Error every time the appended file on the server reaches 64 MB. I assume this is (of course?) caused by some restriction on Strato's side, maybe something with memory, but I cannot explain to myself why it happens. This is the script (PHP version 5.6):

$file = $_FILES['chunk'];
// open (or create) the assembled file on the server in append mode
$server_chunk = fopen($uploadDir.$_POST['file_id'], "ab");
// open the uploaded chunk for reading
$new_chunk = fopen($file['tmp_name'], "rb");

// copy the chunk to the end of the assembled file, 1 KB at a time
while (!feof($new_chunk)) // the while-loop is optional
{
    fwrite($server_chunk, fread($new_chunk, 1024));
}
fclose($new_chunk);
fclose($server_chunk);

In my opinion there is no line in this code where the file gets loaded into memory, so how could it cause this error? Or could something different be causing it?

I checked the server logs, but there is no entry when this error happens.

php.ini

I can create multiple 63 MB files; the server only aborts once a file exceeds 64 MB.
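To pin down where the write actually fails, a small test script can keep appending 1 MB blocks and report the size after each write. This is only a sketch; the test file location and the 200 MB cap are assumptions:

<?php
// sketch: append 1 MB blocks until fwrite() fails (or the process is killed),
// and report the file size reached at each step
$path  = __DIR__ . '/limit_test.bin';       // assumed test location on the same filesystem
$block = str_repeat('A', 1024 * 1024);      // 1 MB of data
$fh    = fopen($path, 'ab');
for ($i = 0; $i < 200; $i++) {              // up to ~200 MB
    $written = fwrite($fh, $block);
    clearstatcache();                       // so filesize() reports fresh values
    if ($written === false || $written < strlen($block)) {
        echo "Write failed or was short at ~" . filesize($path) . " bytes\n";
        break;
    }
    echo "File is now " . filesize($path) . " bytes\n";
}
fclose($fh);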

UPDATE: I wrote the following script to concatenate the file chunks on the server with cat, but I always get an 8192-byte file back. Is something wrong with this script? $command is something like:

/bin/cat ../files/8_0 ../files/8_1 ../files/8_2 ../files/8_3

$command = '/bin/cat';
foreach($file_array AS $file_info)
{
    $command = $command.' ../files/'.$file_info['file_id'].'_'.$file_info['server_chunkNumber'];
}
$handle1 = popen($command, "r");
$read = fread($handle1, $_GET['size']);
echo $read;

I checked the result: the bytes in the 8192-byte file are exactly the same as at the beginning of the original file, so something seems to work...

UPDATE: I found it out. fread() returns at most one buffered block (8192 bytes here) per call when reading from a pipe, so the output has to be read in a loop:

$handle1 = popen($command, "r");
while(!feof($handle1))  
{ 
    $read = fread($handle1, 1024);
    echo $read;
}

This works; I can read piecewise from the handle. But of course this way I run into timeout limits. How can I pass the file on to the client? If this question is answered, all of my problems are gone ;)
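One way to pass those piecewise reads on to the client is to send download headers first and flush each piece as it is read. This is only a sketch (the filename, the Content-Length source, and the 8192-byte read size are assumptions), and a hard server-side running-time limit can still cut it off:

header('Content-Type: application/octet-stream');
header('Content-Length: ' . (int) $_GET['size']);                     // size passed in as in the snippet above
header('Content-Disposition: attachment; filename="download.bin"');  // filename is an assumption

$handle1 = popen($command, 'r');   // $command built as above
while (!feof($handle1)) {
    echo fread($handle1, 8192);    // at most one buffered block per call
    flush();                       // push this piece to the client immediately
}
pclose($handle1);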

Answer

(See the update at the bottom.)

A memory-limit error would look different (but you can write a script that continuously allocates memory by adding large objects to a growing array, and see for yourself what happens). Also, the memory limit covers the PHP core plus the script, its data structures, and any file content it holds; even if the file being appended to were somehow loaded or counted against the memory limit (say through mmap, odd as that would be), it is unlikely at best that the limit would kick in at exactly 64 megabytes.
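A minimal sketch of that memory test: keep adding ~1 MB strings to a growing array until memory_limit is hit, and see what the failure actually looks like (it shows up as an "Allowed memory size ... exhausted" fatal error in the PHP error log):

<?php
// sketch: allocate ~1 MB per iteration until the memory limit is reached
ini_set('display_errors', '1');
echo "memory_limit = " . ini_get('memory_limit') . "\n";
$hoard = array();
for ($i = 0; ; $i++) {
    $hoard[] = str_repeat('x', 1024 * 1024);   // ~1 MB per iteration
    if ($i % 16 === 0) {
        echo $i . " MB allocated, usage " . memory_get_usage(true) . " bytes\n";
    }
}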

So this looks to me like a filesystem limitation on the size of a single file. Several cloud filesystems have such limitations, but I know of none on local disk storage. I would ask the hosting's tech support for clues.

I would try a few things:

  • Double-check $uploadDir, unless you assigned it yourself.

  • Try creating the file in a different path than $uploadDir, unless that path is certainly on the same filesystem.

  • Try checking for errors:

if (!fwrite($server_chunk, fread($new_chunk, 1024))) {
   die("Error writing: ");
}

  • As a paranoid check, look at phpinfo() to make sure there isn't some really weird weirdness such as function overriding. You can investigate by enumerating the functions and checking them out (spoofable, yes, but unlikely to have been); one way to do that is sketched below.
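A possible sketch of that check, using only the standard reflection and ini facilities; the function names listed are simply the ones used in the upload script:

<?php
// sketch: verify the core file functions are the built-in ones and not disabled
$disabled = array_map('trim', explode(',', (string) ini_get('disable_functions')));
foreach (array('fopen', 'fwrite', 'fread', 'fclose') as $fn) {
    $exists   = function_exists($fn);
    $internal = $exists && (new ReflectionFunction($fn))->isInternal();
    printf("%s: exists=%s internal=%s disabled=%s\n",
        $fn,
        $exists ? 'yes' : 'no',
        $internal ? 'yes' : 'no',
        in_array($fn, $disabled, true) ? 'yes' : 'no');
}
// phpinfo(); // full dump, for everything else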

    It REALLY looks like a filesize limitation, unrelated to PHP. Earlier users of the same hosting reported 32 MB; see here and here. There are people unable to run mysqldump or tar backups. It has nothing to do with PHP directly.

    You can perhaps work around this problem by storing the file in chunks and downloading it in several installments, or by handing Apache a pipe to the cat program, if it is available.

    What would happen is that you would store file123.0001, file123.0002, ..., and then upon download check all the fragments, send the appropriate Content-Length, build a command line for /bin/cat (if accessible...), and connect the stream to the server. You may still run into time limits, but it's worth a shot.

    Example:

    <?php
        $pattern    = "test/zot.*"; # BEWARE OF SHELL METACHARACTERS
        $files      = glob($pattern);                            # list the chunk files
        natsort($files);                                         # natural order: zot.1, zot.2, ..., zot.10
        $size       = array_sum(array_map('filesize', $files));  # total size for Content-Length
        ob_end_clean();                                          # discard buffered output before sending headers
        Header("Content-Disposition: attachment;filename=\"test.bin\";");
        Header("Content-Type: application/octet-stream");
        Header("Content-Length: {$size}");
        passthru("/bin/cat {$pattern}");                         # stream the concatenated chunks to the client

    I have tested the above, and it downloads a single 120 MB file assembled from a bunch of 10 MB chunks ordered like zot.1, zot.2, ..., zot.10, zot.11, zot.12 (and yes, at first I did not use natsort). If I can find the time, I'll run it again in a VM with a throttled network, so that the script has a 10-second time limit while the download takes 20 seconds. It is possible that PHP won't terminate the script until passthru returns, since I have noticed that PHP's timekeeping is not very intuitive.

    The following code runs with a time limit of three seconds. It runs a command that takes four seconds, then sends the output to the browser and keeps running until its time is exhausted.

    <pre>
    <?php
        print "It is now " . date("H:i:s") . "\n";
        passthru("sleep 4; echo 'Qapla!'");
        print "It is now " . date("H:i:s") . "\n";
        $x = 0;
        for ($i = 0; $i < 5; $i++) {
            $t = microtime(true);
            while (microtime(true) < $t + 1.0) {
                $x++;
            }
            echo "OK so far. It is now " . date("H:i:s") . "\n";
        }
    

    The result is:

    It is now 20:52:56
    Qapla!
    It is now 20:53:00
    OK so far. It is now 20:53:01
    OK so far. It is now 20:53:02
    OK so far. It is now 20:53:03
    
    
    ( ! ) Fatal error: Maximum execution time of 3 seconds exceeded in /srv/www/rumenta/htdocs/test.php on line 9
    
    Call Stack
    #   Time    Memory  Function    Location
    1   0.0002  235384  {main}( )   ../test.php:0
    2   7.0191  236152  microtime ( )   ../test.php:9
    

    Of course, it is possible that Strato applies a stricter check on a script's running time. Also, I have PHP installed as a module; for CGI setups, which run as independent processes, different rules may apply.
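    If /bin/cat turns out not to be accessible, the same idea of downloading the stored chunks in several installments can be approximated in pure PHP by streaming them back-to-back. This is only a sketch; the ../files/file123.* chunk naming and paths are assumptions following the scheme described above, and a running-time limit may still apply:

    <?php
        // sketch: stream the chunks one after another with readfile(), which
        // writes to the output without holding the whole file in PHP memory
        $chunks = glob('../files/file123.*');               // assumed chunk location and naming
        natsort($chunks);                                   // natural order: .0001, .0002, ..., .0010
        $size = array_sum(array_map('filesize', $chunks));  // total size for Content-Length

        header('Content-Disposition: attachment; filename="file123.bin"');
        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . $size);

        foreach ($chunks as $chunk) {
            readfile($chunk);   // send this fragment
            flush();            // hand it to the client before reading the next one
        }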

