django-compressor, heroku, s3: Request has expired


Problem Description

I am using django-compressor on Heroku with Amazon S3 serving static files, and I keep running into the following error with the compressor-generated links to static files. I am totally new to compressor and S3:

https://xxx.s3.amazonaws.com/static/CACHE/css/989a3bfc8147.css?Signature=tBJBLUAWoA2xjGlFOIu8r3SPI5k%3D&Expires=1365267213&AWSAccessKeyId=AKIAJCWU6JPFNTTJ77IQ

<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<RequestId>FE4625EF498A9588</RequestId>
<Expires>2013-04-06T16:53:33Z</Expires>
<HostId>Fbjlk4eigroefpAsW0a533NOHgfQBG+WFRTJ392v2k2/zuG8RraifYIppLyTueFu</HostId>
<ServerTime>2013-04-06T17:04:41Z</ServerTime>
</Error>

I have two Heroku servers configured, one for staging and one for production. Each has its own database and S3 bucket. They also share the same settings file; all unique settings are configured as environment variables. I have checked that the static files are in fact being pushed to their respective buckets.

The compressor & S3 settings are as follows:

COMPRESS_ENABLED = True
COMPRESS_STORAGE = STATICFILES_STORAGE 
COMPRESS_URL = STATIC_URL
COMPRESS_ROOT = STATIC_ROOT
COMPRESS_OFFLINE = False

AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
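To make the error message less mysterious, here is a minimal sketch of how query-string-authenticated S3 URLs of this shape are built (AWS signature version 2, the scheme used by boto-era django-storages). The bucket, key, and credentials below are hypothetical placeholders; the point is that `Expires` is an absolute timestamp baked into the signature, so a cached link keeps the same deadline forever:

```python
import base64
import hmac
import time
from hashlib import sha1
from urllib.parse import quote

def signed_s3_url(bucket, key, access_key, secret_key, expires_in=3600):
    """Build an S3 query-string-authenticated URL (signature v2 style).

    Expires is an absolute Unix timestamp, so once a generated link is
    cached (e.g. inside compressor's CACHE output), it will eventually
    start returning "Request has expired".
    """
    expires = int(time.time()) + expires_in
    # Canonical string for a plain GET with no extra headers.
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return ("https://%s.s3.amazonaws.com/%s?Signature=%s&Expires=%d"
            "&AWSAccessKeyId=%s" % (bucket, key, signature, expires, access_key))

# Hypothetical credentials, matching the shape of the failing URL above.
url = signed_s3_url("xxx", "static/CACHE/css/989a3bfc8147.css",
                    "AKIAEXAMPLEKEY", "example-secret")
```

This also explains the erratic lifetimes described below: the clock starts when the link is *generated and cached*, not when it is served.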

Each time I push an update to Heroku on staging or production, I eventually run into the above issue. Sometimes it happens after an hour, sometimes a day, sometimes a week, and sometimes as soon as an update is pushed out. The odd thing is that if I push the same update to both environments, one will work while the other returns the error, or both will work at first and then one will expire in an hour and the other in a week.

I would really appreciate it if someone could explain what is going on. Obviously the Expires parameter is causing the problem, but why would the duration change with each push, and what determines the amount of time? How do you change the expiration time? Please let me know if you need any more info.

UPDATE: I temporarily solved the problem by setting AWS_QUERYSTRING_AUTH = False. There does not seem to be any way to set the expiration time in the query string, only in the request header.
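For reference, that workaround amounts to telling django-storages not to sign URLs at all, which only works if the objects are publicly readable. A minimal settings sketch, assuming the files are uploaded world-readable (AWS_DEFAULT_ACL is the django-storages setting for that; whether it fits your bucket policy is an assumption here):

```python
# settings.py -- serve plain, unsigned S3 URLs instead of signed ones.
AWS_QUERYSTRING_AUTH = False      # no Signature/Expires in generated links
AWS_DEFAULT_ACL = 'public-read'   # assumption: objects must be publicly readable
```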

Recommended Answer

Try this:

AWS_QUERYSTRING_EXPIRE = 63115200

The value is the number of seconds from the time the links are generated.
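A quick sanity check on that number: 63115200 seconds is exactly two Julian years (2 × 365.25 days), so links signed with this setting stay valid for about two years after they are generated, rather than the much shorter django-storages default (3600 seconds, i.e. one hour, if memory serves):

```python
# 63115200 seconds == 2 years * 365.25 days * 24 hours * 3600 seconds
seconds = 63115200
years = seconds / (365.25 * 24 * 3600)
print(years)  # -> 2.0
```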

