AWS transcoder overwrite files on S3

Problem description

I'm using the AWS PHP SDK to upload a file to S3 and then transcode it with Elastic Transcoder.

On the first pass everything works fine; the putObject command overwrites the old file (always named the same) on S3:

$s3->putObject([
    'Bucket'     => Config::get('app.aws.S3.bucket'),
    'Key'        => $key,
    'SourceFile' => $path,
    'Metadata'   => [
        'title' => Input::get('title')
    ]
]);

However, when creating a second transcoding job, I get the error:

  The specified object could not be saved in the specified bucket because an object by that name already exists

The transcoder role has full S3 access. Is there a way around this, or will I have to delete the files using the SDK every time before they're transcoded?

My createJob call:

$result = $transcoder->createJob([
    'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
    'Input' => [
        'Key' => $key
    ],
    'Output' => [
        'Key' => 'videos/'.$user.'/'.$output_key,
        'ThumbnailPattern' => 'videos/'.$user.'/thumb-{count}',
        'Rotate' => '0',
        'PresetId' => Config::get('app.aws.ElasticTranscoder.PresetId')
    ]
]);
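
As a side note, the "delete the files using the SDK every time" fallback raised above would just be a deleteObject call on the fixed output key before creating the job. A minimal sketch, assuming the same $s3 client and the same key expression as in the createJob call; $outputKey is an illustrative variable name:

$outputKey = 'videos/'.$user.'/'.$output_key;

// Remove the previous transcoder output so the next job can write to the same key.
// deleteObject succeeds even if the key does not exist yet, so this is safe on the first run.
$s3->deleteObject([
    'Bucket' => Config::get('app.aws.S3.bucket'),
    'Key'    => $outputKey
]);

// ...then call $transcoder->createJob() exactly as above.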

Solution

The Amazon Elastic Transcoder documentation describes this as the expected behavior: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/job-settings.html#job-settings-output-key.

If your workflow requires you to overwrite the same key, then it sounds like you should have the job output somewhere unique and then issue an S3 CopyObject operation to overwrite the older file.
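
To make that concrete, here is a minimal sketch of the suggested workaround with the AWS PHP SDK, reusing the $s3 and $transcoder clients and the key variables from the question. The $finalKey and $uniqueKey names and the simple readJob polling loop are illustrative assumptions; in practice you would more likely react to the pipeline's SNS completion notification instead of polling:

// Transcode to a unique key so the job never collides with an existing object.
$bucket    = Config::get('app.aws.S3.bucket');
$finalKey  = 'videos/'.$user.'/'.$output_key;   // the key you want to keep overwriting
$uniqueKey = $finalKey.'-'.uniqid();            // unique output key for this job

$job = $transcoder->createJob([
    'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
    'Input'      => ['Key' => $key],
    'Output'     => [
        'Key'      => $uniqueKey,
        'PresetId' => Config::get('app.aws.ElasticTranscoder.PresetId')
    ]
]);
$jobId = $job['Job']['Id'];

// Wait for the job to finish (simplified polling; job statuses are
// Submitted / Progressing / Complete / Canceled / Error).
do {
    sleep(5);
    $read   = $transcoder->readJob(['Id' => $jobId]);
    $status = $read['Job']['Status'];
} while ($status === 'Submitted' || $status === 'Progressing');

if ($status === 'Complete') {
    // Overwrite the old object with the fresh output, then drop the temporary one.
    $s3->copyObject([
        'Bucket'     => $bucket,
        'Key'        => $finalKey,
        'CopySource' => $bucket.'/'.$uniqueKey
    ]);
    $s3->deleteObject([
        'Bucket' => $bucket,
        'Key'    => $uniqueKey
    ]);
}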
