Stream remote file with PHP and Guzzle


Question

My application should stream back to the browser a large file that is served remotely. Currently the file is served from a local NodeJS server.

I am using a 25GB VirtualBox disk image, just to be sure that it is not stored in memory while streaming. This is the related code I'm struggling with:

    require __DIR__ . '/vendor/autoload.php';

    $client = new \GuzzleHttp\Client();
    logger('==== START REQUEST ====');
    $res = $client->request('GET', 'http://localhost:3002/', [
      'on_headers' => function (\Psr\Http\Message\ResponseInterface $response) {
        $length = $response->getHeaderLine('Content-Length');
        logger('Content length is: ' . $length);
        header('Content-Description: File Transfer');
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="testfile.zip"');
        header('Expires: 0');
        header('Cache-Control: must-revalidate');
        header('Pragma: public');
        header('Content-Length: ' . $length);

      }
    ]);

    $body = $res->getBody();
    $read = 0;
    while(!$body->eof()) {
      logger("Reading chunk. " . $read);
      $chunk = $body->read(8192);
      $read += strlen($chunk);
      echo $chunk;
    }
    logger('Read ' . $read . ' bytes');
    logger("==== END REQUEST ====\n\n");

    function logger($string) {
      $myfile = fopen("log.txt", "a") or die ('Unable to open log file');
      fwrite($myfile, "[" . date("d/m/Y H:i:s") . "] " . $string . "\n");
      fclose($myfile);
    }

Even though $body = $res->getBody(); should return a stream, it quickly fills the disk with swap data, meaning that it is trying to hold the response in memory before streaming it back to the client, but this is not the expected behavior. What am I missing?

Answer

You have to specify the stream and sink options like this:

    $res = $client->request('GET', 'http://localhost:3002/', [
        'stream' => true,
        'sink' => STDOUT, // Default output stream.
        'on_headers' => ...
    ]);

After these additions you will be able to stream the response chunk by chunk, without any additional code to copy from the response body stream to STDOUT (the echo loop). Without a sink, Guzzle buffers the body into a php://temp stream, which transparently spills to a temporary file once it outgrows its in-memory limit, which is why your disk fills up.
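As a minimal sketch of the corrected request, assuming the same local endpoint as in the question: note that under a web SAPI (php-fpm, mod_php) the STDOUT constant is not defined, so php://output is opened explicitly as the sink instead.

```php
<?php
// Sketch of the fix, assuming the endpoint from the question.
// STDIN/STDOUT/STDERR constants exist only in the CLI SAPI, so under
// php-fpm or mod_php we open php://output explicitly as the sink.
require __DIR__ . '/vendor/autoload.php';

$client = new \GuzzleHttp\Client();
$client->request('GET', 'http://localhost:3002/', [
    'stream' => true,                       // don't buffer the body into php://temp
    'sink'   => fopen('php://output', 'w'), // write each chunk straight to the client
    'on_headers' => function (\Psr\Http\Message\ResponseInterface $response) {
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="testfile.zip"');
        header('Content-Length: ' . $response->getHeaderLine('Content-Length'));
    },
]);
```

Also make sure PHP's own output buffering is disabled (e.g. call ob_end_clean() before the request), or chunks may still be collected in a buffer before being sent.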

But usually you don't want to do this, because you will need one PHP process (php-fpm or Apache's mod_php) for each active client.

If you just want to serve protected files, try an "internal redirect": through the X-Accel-Redirect header for nginx or X-Sendfile for Apache. You will get the same behavior, but with less resource usage (because of the highly optimized event loop in nginx's case). For configuration details you can read the official documentation or, of course, other SO questions.
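A sketch of the nginx variant, assuming a hypothetical /var/www/files/ directory holding the files: the location is marked internal, so it cannot be requested directly and is only reachable when the PHP backend sends header('X-Accel-Redirect: /protected/testfile.zip'); after performing its own access checks.

```nginx
# Hypothetical config: /protected/ is not reachable from outside,
# only via an X-Accel-Redirect header set by the PHP backend.
location /protected/ {
    internal;
    alias /var/www/files/;
}
```

nginx then serves the file itself (using sendfile), and the PHP worker is freed immediately instead of staying busy for the whole download.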
