How can I reduce my data transfer cost? Amazon S3 --> Cloudflare --> Visitor


Question

I recently started using Amazon S3 to serve images to my visitors, since this reduces the load on my server. Now there is a new problem: today I looked into my AWS billing and noticed that a huge bill is waiting for me - a total of 4 TB of AWS data transfer in 20 days.

Obviously this is because of the high amount of outgoing Amazon S3 traffic (to Cloudflare, which then serves it to the visitors). I should now reduce the number of requested files by setting a Cache-Control header (since Cloudflare's crawler will respect that). I have modified my code like this:

// Before:
$s3->putObjectFile($path, $bucket, 'images/'.$id.'.jpg', S3::ACL_PUBLIC_READ);

// After: attempt to attach a Cache-Control header
$s3->putObjectFile($path, $bucket, 'images/'.$id.'.jpg', S3::ACL_PUBLIC_READ, array('Cache-Control' => 'public,max-age=31536000'));

Still, it does not work. Cloudflare does not respect the cache settings, because Cache-Control does not show up as "Cache-Control" in the response headers but as "x-amz-meta-cachecontrol", which Cloudflare ignores.
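As a toy sketch (in Python, purely illustrative; the exact prefixing rule is an assumption inferred from the header observed above) of why a value passed as a *meta* header comes back mangled:

```python
def as_meta_header(name: str) -> str:
    """Model how the legacy S3 class appears to store user metadata:
    the header name is lower-cased, stripped of hyphens, and prefixed
    with 'x-amz-meta-' (rule assumed from the observed behavior)."""
    return "x-amz-meta-" + name.lower().replace("-", "")

# Passing Cache-Control as a meta header therefore yields a header name
# that Cloudflare does not recognize as a caching directive:
print(as_meta_header("Cache-Control"))  # x-amz-meta-cachecontrol
```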

Does anyone have an easy solution for this?

TL;DR: I have more or less the same problem as this guy: http://support.bucketexplorer.com/topic734.html (that was in 2008)

EDIT: I have stumbled upon this: Amazon S3 not caching images - but unfortunately that solution does not work for me.

EDIT 2: Turns out it didn't work because I was using an old version of the "Amazon S3 class". I updated it and the code works now.

Thank you for your time.

Answer

If you are getting "x-amz-meta-cachecontrol", it is likely you are not setting the headers correctly - it might just be the exact way you are doing it in your code. This is supposed to work. I am deducing this is PHP using the Amazon S3 PHP Class?

Try this:

$s3->putObject(file_get_contents($path), $bucket, $url, S3::ACL_PUBLIC_READ, array(), array('Cache-Control' => 'max-age=31536000, public'));

In the S3 PHP docs, putObjectFile is listed under Legacy Methods:

putObjectFile (string $file, 
               string $bucket, 
               string $uri, 
               [constant $acl = S3::ACL_PRIVATE], 
               [array $metaHeaders = array()], 
               [string $contentType = null])

Compare with:

putObject (mixed $input, 
           string $bucket, 
           string $uri, 
           [constant $acl = S3::ACL_PRIVATE], 
           [array $metaHeaders = array()], 
           [array $requestHeaders = array()])

You need to set Cache-Control as a *request* header, but it appears there is no way to set request headers with putObjectFile, only meta headers. You have to use putObject and give it an empty array for the meta headers, and then another array with the request headers (including Cache-Control).
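The distinction between the two argument slots can be modeled roughly like this (Python, hypothetical helper name; the meta-header prefixing rule is assumed from the behavior described in the question): meta headers get namespaced under x-amz-meta-, while request headers are sent on the PUT verbatim.

```python
def build_put_headers(meta_headers: dict, request_headers: dict) -> dict:
    """Toy model of how the S3 class assembles HTTP headers for a PUT.
    Meta headers are namespaced under x-amz-meta-; request headers such
    as Cache-Control pass through unchanged (prefixing rule assumed)."""
    headers = {}
    for name, value in meta_headers.items():
        headers["x-amz-meta-" + name.lower().replace("-", "")] = value
    headers.update(request_headers)  # sent verbatim on the request
    return headers

# putObject(..., array(), array('Cache-Control' => ...)) corresponds to:
headers = build_put_headers({}, {"Cache-Control": "max-age=31536000, public"})
print(headers)  # {'Cache-Control': 'max-age=31536000, public'}
```

This is why the empty meta-header array matters: anything placed in the first array would reach Cloudflare only as an x-amz-meta-* header it ignores.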

You can also try some of the other working examples I have listed below.

See also:

Updating caching headers for Amazon S3 and CloudFront (python)

Set cache-control for an entire S3 bucket automatically (using bucket policies?)

http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html?r=5225
