Heroku: Serving Large Dynamically-Generated Assets Without a Local Filesystem

Question

I have a question about hosting large dynamically-generated assets and Heroku.

My app will offer bulk download of a subset of its underlying data, which will consist of a large file (>100 MB) generated once every 24 hours. If I were running on a server, I'd just write the file into the public directory.

But as I understand it, this is not possible with Heroku. The /tmp directory can be written to, but the guaranteed lifetime of files there seems to be defined in terms of one request-response cycle, not a background job.

I'd like to use S3 to host the download file. The S3 gem does support streaming uploads, but only for files that already exist on the local filesystem. It looks like the content size needs to be known up-front, which won't be possible in my case.

So this looks like a catch-22. I'm trying to avoid creating a gigantic string in memory when uploading to S3, but S3 only supports streaming uploads for files that already exist on the local filesystem.

Given a Rails app in which I can't write to the local filesystem, how do I serve a large file that's generated daily without creating a large string in memory?

Answer

${RAILS_ROOT}/tmp (not /tmp, it's in your app's directory) lasts for the duration of your process. If you're running a background DJ, the files in TMP will last for the duration of that process.

Actually, the files will last longer than that; the reason we say you can't guarantee availability is that tmp isn't shared across servers, and each job/process can run on a different server depending on cloud load. You also need to make sure you delete your files when you're done with them, as part of the job.
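The workflow the answer describes, namely generating the file into the app's own tmp directory inside the background job, uploading it, and cleaning up before the job exits, might be sketched roughly like this. This is an illustrative sketch, not Heroku's or the S3 gem's actual API: `generate_bulk_data` and the `uploader` callable stand in for your data-export logic and whatever S3 client you use.

```ruby
require 'fileutils'
require 'tmpdir'

# Sketch of the job body: stream the export to a file under the job's
# own tmp directory, hand it to an uploader, and always clean up.
# `uploader` is a placeholder for an S3 client call (e.g. storing an
# open File handle); `generate_bulk_data` is a stand-in data source.
def export_bulk_download(tmp_root, uploader)
  path = File.join(tmp_root, "bulk-#{Time.now.strftime('%Y%m%d')}.csv")
  begin
    File.open(path, 'w') do |f|
      # Write rows one at a time so the full dataset never exists
      # in memory as one giant string.
      generate_bulk_data { |row| f.puts(row) }
    end
    uploader.call(path)
  ensure
    # tmp isn't shared across dynos and the job may rerun elsewhere,
    # so don't leak files between runs.
    FileUtils.rm_f(path)
  end
end

# Hypothetical data source used for illustration only.
def generate_bulk_data
  3.times { |i| yield "row-#{i}" }
end
```

Because the file exists on disk for the duration of the upload, its size is known at that point, which sidesteps the content-length problem described in the question.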

-Another Heroku employee
