Set content_type of Fog storage files on S3

Question

I'm working with Fog and Amazon S3 to manage video and image files. I've been running into a lot of trouble with setting the content_type for my files.

When working from the console, I am able to go through and individually update each file's content_type, and then run save. However, when I try to run an update on all of the files within a specific directory, I don't get an error, but nothing gets updated. I've run multiple different methods, all with the same basic idea, and all set to print "saved!" if the file saves. The methods run properly and print out "saved!", but when I go back and check the files, the content_type is still nil.

Here's an example of what I'm doing:

directory.files.each do |f|
  case f.key.split(".").last
  when "jpg"
    f.content_type = "image/jpeg"
    puts "saved!" if f.save
  when "mov"
    f.content_type = "video/quicktime"
    puts "saved!" if f.save
  end
end

Also, when I go through and individually update each file, the save works and the content_type gets updated, but the data doesn't persist.

For example:

file = directory.files.first
file.content_type = 'video/quicktime'
file.save         # returns true
file.content_type # returns 'video/quicktime'

However, when I go check the file in AWS, the content type is still nil.

Is there a better (persistent) way of going about updating content_type on Fog S3 files? I feel like I must be going about this the wrong way.

Update: Tried using the file#copy method:

directory.files.each do |f|
  content_type = case f.key.split(".").last
  when "jpg"
    "image/jpeg"
  when "mov"
    "video/quicktime"
  end
  puts "copied!" if f.copy(f.directory.key, f.key, { 'Content-Type' => content_type })
end

I got an error:

Excon::Errors::BadRequest: Expected(200) <=> Actual(400 Bad Request)

from /Users/marybethlee/.rvm/gems/ruby-2.0.0-p0@mothership/gems/excon-0.22.1/lib/excon/middlewares/expects.rb:6:in `response_call'
from /Users/marybethlee/.rvm/gems/ruby-2.0.0-p0@mothership/gems/excon-0.22.1/lib/excon/connection.rb:355:in `response'
from /Users/marybethlee/.rvm/gems/ruby-2.0.0-p0@mothership/gems/excon-0.22.1/lib/excon/connection.rb:249:in `request'
from /Users/marybethlee/.rvm/gems/ruby-2.0.0-p0@mothership/gems/fog-1.11.1/lib/fog/core/connection.rb:21:in `request'
from /Users/marybethlee/.rvm/gems/ruby-2.0.0-p0@mothership/gems/fog-1.11.1/lib/fog/aws/storage.rb:506:in `request'
from /Users/marybethlee/.rvm/gems/ruby-2.0.0-p0@mothership/gems/fog-1.11.1/lib/fog/aws/requests/storage/copy_object.rb:33:in `copy_object'
from /Users/marybethlee/.rvm/gems/ruby-2.0.0-p0@mothership/gems/fog-1.11.1/lib/fog/aws/models/storage/file.rb:93:in `copy'
from (irb):14
from /Users/marybethlee/.rvm/rubies/ruby-2.0.0-p0/bin/irb:16:in `<main>'

Solution

If you are just updating metadata (and not the body/content itself) you probably want to use copy instead of save. This is perhaps non-obvious, but that keeps the operation on the S3 side so that it will be MUCH faster.

The signature for copy looks like:

copy(target_directory_key, target_file_key, options = {})

So I think my proposed solution should look (more or less) like this:

directory.files.each do |f|
  content_type = case f.key.split(".").last
  when "jpg"
    "image/jpeg"
  when "mov"
    "video/quicktime"
  end
  options = {
    'Content-Type' => content_type,
    'x-amz-metadata-directive' => 'REPLACE'
  }
  puts "copied!" if f.copy(f.directory, f.key, options)
end

That should basically tell S3 "copy my file over the top of itself, but change this header". That way you don't have to download/reupload the file. This is probably the approach you want.
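For reference, the model-level copy above just wraps the copy_object request you can see in your backtrace, and S3 rejects a copy of an object onto itself unless the metadata directive is REPLACE, which is most likely why your first copy attempt came back with 400 Bad Request. A rough sketch of the same operation one request level lower (the connection, bucket, and key here are just placeholders, not taken from your setup) would be:

require 'fog'

# Placeholder connection; use whatever credentials/config you already have.
connection = Fog::Storage.new(
  :provider              => 'AWS',
  :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)

bucket = 'my-bucket'   # placeholder bucket name
key    = 'movie.mov'   # placeholder object key

# Copy the object over the top of itself, replacing its metadata so the
# new Content-Type gets stored; the data itself never leaves S3.
connection.copy_object(
  bucket, key,   # source bucket / key
  bucket, key,   # target bucket / key (same object)
  'Content-Type'             => 'video/quicktime',
  'x-amz-metadata-directive' => 'REPLACE'
)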

So, solution aside, still seems like you found a bug. Could you include an example of what you mean by "individually update each file"? Just want to make sure I know exactly what you mean and that I can see the working/non-working cases side by side. Also, how/why do you think it isn't updating the content-type (it might actually be updating it, but just not displaying the updated value correctly, or something like that). Bonus points if you can create an issue here to make sure I don't forget to address it: https://github.com/fog/fog/issues?state=open
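As a quick way to check whether the header really isn't being stored (versus the model just showing you a stale value), you could re-read the metadata straight from S3 instead of trusting the cached object, something like this (a rough sketch; 'movie.mov' just stands in for one of your keys):

# HEAD request against S3, so this reflects what is actually stored,
# not what the in-memory model last remembered.
fresh = directory.files.head('movie.mov')
puts fresh.content_type   # nil here would mean the Content-Type really wasn't persisted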
