Nginx PHP Failing with Large File Uploads (Over 6 GB)


Problem Description


I am having a very weird issue uploading large files over 6 GB. My process works like this:

1. Files are uploaded via Ajax to a PHP script.
2. The PHP upload script takes the $_FILE entry and copies it over in chunks, as in this answer, to a tmp location.
3. The location of the file is stored in the db.
4. A cron script later uploads the file to S3, again using fopen functions and buffering to keep memory usage low.
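The chunked copy in step 2 comes down to a bounded-buffer read/write loop. The original code is PHP (fopen/fread); a minimal stand-in sketch in Python, with a hypothetical function name:

```python
def copy_in_chunks(src_path: str, dst_path: str, chunk_size: int = 1024 * 1024) -> None:
    """Copy a file in fixed-size chunks so memory use stays around chunk_size
    regardless of file size -- the point of the PHP fopen/fread approach."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:  # empty read means end of file
                break
            dst.write(chunk)
```

With a 1 MB buffer, even an 8 GB file never occupies more than about 1 MB of memory at a time during the copy.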

My PHP (HHVM) and Nginx configurations are both set to allow files of up to 16 GB; my test file is only 8 GB.

Here is the weird part: the AJAX request will ALWAYS time out. Yet the file is successfully uploaded: it gets copied to the tmp location, the location is stored in the db, it reaches S3, etc. But the AJAX request keeps running for an hour even AFTER all the execution has finished (which takes 10-15 minutes) and only ends when it times out.

What could be causing the server to not send a response, and only for large files?

Also, the error logs on the server side are empty.

Solution

A large file upload is an expensive and error-prone operation. Nginx and the backend should have correct timeout boundaries to handle slow disk IO, if any. In theory it is straightforward to manage file uploads using the multipart/form-data encoding (RFC 1867).
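The timeout boundaries mentioned above can be widened in Nginx along these lines. The values here are illustrative assumptions, not the poster's settings; tune them to your own disk and backend speed:

```nginx
# Illustrative timeouts for slow, large uploads.
client_body_timeout   300s;  # max gap between two successive reads of the request body
send_timeout          300s;  # max gap between two successive writes of the response
proxy_connect_timeout 60s;   # establishing the connection to the backend
proxy_send_timeout    300s;  # writing the request to the backend
proxy_read_timeout    900s;  # waiting for the backend's response (covers slow post-processing)
```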

According to developer.mozilla.org, in a multipart/form-data body the HTTP Content-Disposition general header can be used on each subpart of the multipart body to give information about the field it applies to. The subpart is delimited by the boundary defined in the Content-Type header. Used on the body itself, Content-Disposition has no effect.
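For reference, a minimal multipart/form-data request looks like this on the wire (the endpoint, field name, and file name are made up for illustration):

```http
POST /upload HTTP/1.1
Content-Type: multipart/form-data; boundary=----boundary42
Content-Length: ...

------boundary42
Content-Disposition: form-data; name="file"; filename="backup.tar"
Content-Type: application/octet-stream

<raw file bytes>
------boundary42--
```

The Content-Disposition and Content-Type lines inside the body belong to the subpart, not to the request itself; they are exactly the headers the backend must strip to recover the raw file.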

Let's see what happens while a file is being uploaded:

1) the client sends an HTTP request with the file content to the webserver

2) the webserver accepts the request and initiates the data transfer (or returns error 413 if the file size exceeds the limit)

3) the webserver starts to populate buffers (depending on the file and buffer sizes)

4) the webserver sends the file content via a file/network socket to the backend

5) the backend authenticates the initial request

6) the backend reads the file and cuts off the headers (Content-Disposition, Content-Type)

7) the backend dumps the resulting file to disk

8) any follow-up procedures run, such as database changes
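Step 6 is where the multipart framing is removed. A minimal sketch of that header-cutting in Python (the function name and boundary are illustrative; a real backend uses a proper multipart parser):

```python
def extract_file_bytes(body: bytes, boundary: bytes) -> bytes:
    """Cut the subpart headers (Content-Disposition, Content-Type) from a
    single-file multipart/form-data body and return the raw file bytes."""
    delim = b"--" + boundary
    # Isolate the first subpart between the opening and closing delimiters.
    part = body.split(delim, 2)[1]
    # Subpart headers are separated from the content by a blank line (CRLF CRLF).
    headers, _, content = part.partition(b"\r\n\r\n")
    # The content ends with the CRLF that precedes the closing delimiter.
    return content.rstrip(b"\r\n")

body = (b"--XYZ\r\n"
        b'Content-Disposition: form-data; name="file"; filename="a.bin"\r\n'
        b"Content-Type: application/octet-stream\r\n"
        b"\r\n"
        b"hello world\r\n"
        b"--XYZ--\r\n")
print(extract_file_bytes(body, b"XYZ"))  # b'hello world'
```

Doing this parsing on a multi-gigabyte body is exactly the per-request cost the rest of the answer is trying to avoid.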

During large file uploads, several problems occur:

• the HTTP request body is dumped to disk and then passed to the backend, which processes and copies the file
• it is not possible to authenticate the request before the HTTP request content has been uploaded to the server
• while uploading large files, the backend rarely needs the file content itself immediately

Let's start with Nginx configured with a new location http://backend/upload to receive the large file upload. Backend interaction is minimised (Content-Length: 0) and the file is stored straight to disk. Using its buffers, Nginx dumps the file to disk (the file is stored in the temporary directory under a random name, which cannot be changed), followed by a POST request to the backend location http://backend/file with the file name in the X-File-Name header.

To keep extra information, you may use headers on the initial POST request. For instance, having an X-Original-File-Name header on the initial request helps you match the file and store the necessary mapping information in the database.

Let's see how to make it happen:

1) configure Nginx to dump the HTTP body content to a file and keep it stored: client_body_in_file_only on;

2) create a new backend endpoint http://backend/file that reads the temp file name from the X-File-Name header and handles the mapping

3) instrument the AJAX request with any extra headers (such as X-Original-File-Name) that Nginx should pass through with the post-upload request

    Configuration:

    location /upload {
      client_body_temp_path      /tmp/;
      client_body_in_file_only   on;
      client_body_buffer_size    1M;
      client_max_body_size       7G;

      proxy_pass_request_headers on;
      # $request_body_file holds the path of the temp file the body was saved to
      proxy_set_header           X-File-Name $request_body_file;
      # do not forward the (huge) request body to the backend
      proxy_pass_request_body    off;
      proxy_redirect             off;
      proxy_pass                 http://backend/file;
    }
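On the other side, the http://backend/file endpoint only has to read X-File-Name and record the mapping. A minimal sketch of such an endpoint -- in Python for illustration (the poster's backend is PHP/HHVM), with an in-memory list standing in for the database:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# In a real backend this would be a database table; a list stands in here.
MAPPINGS = []

class FileHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Nginx put the temp path it saved the request body under in X-File-Name.
        temp_path = self.headers.get("X-File-Name")
        # Optional extra header carried over from the initial AJAX request.
        original = self.headers.get("X-Original-File-Name")
        MAPPINGS.append((temp_path, original))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

# HTTPServer(("127.0.0.1", 8080), FileHandler).serve_forever()
```

Because the request body was already dumped to disk by Nginx, this handler finishes in milliseconds no matter how large the uploaded file was.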
    

The Nginx configuration option client_body_in_file_only is incompatible with multipart form-data uploads, but you can use it with AJAX, i.e. XMLHttpRequest2 (sending the binary data directly).
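What the XMLHttpRequest2 client does, in essence, is send the raw file bytes as the request body with no multipart framing, so Nginx can dump them to disk verbatim. The same request expressed in Python for illustration (the URL and header names mirror the setup above and are assumptions):

```python
import urllib.request

# Raw request body -- no multipart framing -- plus an extra header the backend
# can use for the temp-name/original-name mapping.
data = b"example file bytes"
req = urllib.request.Request(
    "http://localhost/upload",
    data=data,
    headers={"Content-Type": "application/octet-stream",
             "X-Original-File-Name": "backup.tar"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to run against a real server
```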

If you need back-end authentication, the only way to handle it is to use auth_request, for instance:

    location = /upload {
      auth_request               /upload/authenticate;
      ...
    }

    location = /upload/authenticate {
      internal;
      # auth subrequest must not carry the (huge) upload body
      proxy_pass_request_body    off;
      proxy_set_header           Content-Length "";
      proxy_pass                 http://backend;
    }
    

The pre-upload authentication logic protects against unauthenticated requests regardless of the initial POST Content-Length size.
