RequestTimeout uploading to S3 using PHP

Problem Description

I am having trouble uploading files to S3 from one of our servers. We use S3 to store our backups, and all of our servers are running Ubuntu 8.04 with PHP 5.2.4 and libcurl 7.18.0. Whenever I try to upload a file, Amazon returns a RequestTimeout error. I know there is a bug in our current version of libcurl preventing uploads of over 200MB. For that reason we split our backups into smaller files.

We have servers hosted on Amazon's EC2 and servers hosted on customers' "private clouds" (a VMware ESX box behind the company firewall). The specific server I am having trouble with is hosted on a customer's private cloud.

We use the Amazon S3 PHP Class from http://undesigned.org.za/2007/10/22/amazon-s3-php-class. I have tried 200MB, 100MB and 50MB files, all with the same results. We use the following to upload the files:

$s3 = new S3($access_key, $secret_key, false);
$success = $s3->putObjectFile($local_path, $bucket_name,
    $remote_name, S3::ACL_PRIVATE);
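
For completeness, this is how we check the result. As far as I can tell, putObjectFile() in that class returns false on failure (the class also raises a PHP warning with the HTTP detail); the error_log() marker is just our own addition so failed backup uploads are easy to spot:

if ($success === false)
{
    // The S3 class already triggers a PHP warning with the error detail;
    // log our own marker as well so failed backups stand out in the logs.
    error_log('S3 upload failed for ' . $local_path);
}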

I have tried setting curl_setopt($curl, CURLOPT_NOPROGRESS, false); to view the progress bar while it uploads the file. The first time I ran it with this option set it worked. However, every subsequent time it has failed. It seems to upload the file at around 3Mb/s for 5-10 seconds then drops to 0. After 20 seconds sitting at 0, Amazon returns the "RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed." error.
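
In case it is useful to anyone reproducing this, the option has to be set inside the class's request method, where it creates its own cURL handle. The two CURLOPT_LOW_SPEED_* lines are an assumption of mine (they are standard libcurl options, not something the S3 class sets) to make a stalled transfer fail fast locally instead of waiting for Amazon's idle-connection timeout:

// Inside the S3 class's request code, where $curl is its cURL handle.
curl_setopt($curl, CURLOPT_NOPROGRESS, false);     // show the progress meter
curl_setopt($curl, CURLOPT_LOW_SPEED_LIMIT, 1024); // bytes/sec considered "stalled"
curl_setopt($curl, CURLOPT_LOW_SPEED_TIME, 30);    // abort after 30s below that rate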

I have tried updating the S3 class to the latest version from GitHub, but it made no difference. I also found the Amazon S3 Stream Wrapper class (http://www.phpclasses.org/package/4144-PHP-Stream-wrapper-to-get-and-send-files-to-Amazon-S3.html) and gave that a try using the following code:

include 'gs3.php';
define('S3_KEY', 'ACCESSKEYGOESHERE');
define('S3_PRIVATE','SECRETKEYGOESHERE');
$local = fopen('/path/to/backup_id.tar.gz.0000', 'r');
$remote = fopen('s3://bucket-name/customer/backup_id.tar.gz.0000', 'w+r');

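// Stream the local file to S3 one megabyte at a time, logging each write.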
$count = 0;
while (!feof($local))
{
    $result = fwrite($remote, fread($local, (1024 * 1024)));
    if ($result === false)
    {
        fwrite(STDOUT, $count++.': Unable to write!'."\n");
    }
    else
    {
        fwrite(STDOUT, $count++.': Wrote '.$result.' bytes'."\n");
    }
}

fclose($local);
fclose($remote);

This code reads the file one MB at a time in order to stream it to S3. For a 50MB file, I get "1: Wrote 1048576 bytes" 49 times (the first number changes each time of course) but on the last iteration of the loop I get an error that says "Notice: fputs(): send of 8192 bytes failed with errno=11 Resource temporarily unavailable in /path/to/http.php on line 230".
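
Since errno 11 is EAGAIN ("resource temporarily unavailable", i.e. the socket's send buffer stayed full), one variation I considered was retrying the chunk instead of treating a short write as fatal. This is only a sketch of that idea on my part, not something from the wrapper's documentation:

while (!feof($local))
{
    $chunk   = fread($local, 1024 * 1024);
    $written = 0;
    $retries = 0;
    // Keep writing until the whole chunk is sent, backing off on EAGAIN.
    while ($written < strlen($chunk) && $retries < 20)
    {
        $result = fwrite($remote, substr($chunk, $written));
        if ($result === false || $result === 0)
        {
            $retries++;
            usleep(250000); // wait 0.25s for the send buffer to drain, then retry
            continue;
        }
        $written += $result;
    }
}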

My first thought was that this is a networking issue. We called up the customer and explained the issue and asked them to take a look at their firewall to see if they were dropping anything. According to their network administrator the traffic is flowing just fine.

I am at a loss as to what I can do next. I have been running the backups manually and using SCP to transfer them to another machine and upload them. This is obviously not ideal and any help would be greatly appreciated.

Update - 23 June 2011

I have tried many of the options below, but they all produced the same result. I have found that even trying to scp a file from the server in question to another server stalls immediately and eventually times out. However, I can use scp to download that same file from another machine. This makes me even more convinced that this is a networking issue on the client's end; any further suggestions would be greatly appreciated.

Recommended Answer

This problem exists because you are trying to upload the same file again. Example:

$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
$s3->putObjectFile('file.jpg','bucket-name','newname-file.jpg');

To fix it, just copy the file, give it a new name, and then upload it normally.

For example:

$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');

// Copy the file under a new name, then upload the copy normally.
copy('file.jpg', 'newname-file.jpg');
$s3->putObjectFile('newname-file.jpg','bucket-name','newname-file.jpg');
