How could I store uploaded images to AWS S3 on PHP

Question

I'm on an EC2 instance and I want to connect my PHP website to my Amazon S3 bucket. I have already looked at the PHP SDK here: http://aws.amazon.com/sdkforphp/, but it's not clear to me.

This is the code line I need to edit in my controller:

$thisFu['original_img']='/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

I need to connect to Amazon S3 and be able to change the code like this:

$thisFu['original_img']='my_s3_bucket/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

I already configured an IAM user for this purpose, but I don't know all the steps needed to accomplish the job.

How can I connect to and interact with Amazon S3 to upload and retrieve public images?
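From the SDK documentation, I imagine the upload would look roughly like this (just a sketch on my part, based on the AWS SDK for PHP v2; the access key, secret, region, and local file path are placeholders and I'm not sure it's complete):

// Rough sketch based on the AWS SDK for PHP v2; credentials and paths are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Client built from the IAM user's access keys (placeholders).
$s3Client = S3Client::factory(array(
    'key'    => 'MY_IAM_ACCESS_KEY_ID',
    'secret' => 'MY_IAM_SECRET_ACCESS_KEY',
    'region' => 'us-east-1',
));

$bucket  = 'my_s3_bucket';
$keyname = 'uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

// Upload a local temp file and make it publicly readable.
$s3Client->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => $keyname,
    'SourceFile'  => '/tmp/uploaded_image.jpg', // placeholder local path
    'ACL'         => 'public-read',
    'ContentType' => 'image/jpeg',
));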

UPDATE

I decided to try using s3fs as suggested, so I installed it as described here (my OS is Ubuntu 14.04).

I ran this from the console:

sudo apt-get install build-essential git libfuse-dev libcurl4-openssl-dev libxml2-dev mime-support automake libtool
sudo apt-get install pkg-config libssl-dev
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
sudo make install

Everything installed properly, but what's next? Where should I declare the credentials, and how can I use this integration in my project?

2nd UPDATE

I created a file called .passwd-s3fs with a single line containing my IAM credentials, accessKeyId:secretAccessKey.

I placed it in my /home/ubuntu directory and gave it 600 permissions with chmod 600 ~/.passwd-s3fs.

Next, from the console, I ran /usr/bin/s3fs My_S3bucket /uploads/fufu

All my bucket folders now appear inside /uploads/fufu. However, when I try this command:

s3fs -o nonempty allow_other My_S3bucket /uploads/fufu

I get this error message:

s3fs: unable to access MOUNTPOINT My_S3bucket : No such file or directory

3rd UPDATE

As suggested, I ran fusermount -u /uploads/fufu; after that I checked the fufu folder and it was empty, as expected. Then I tried this command again (with one more -o):

s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

and got this error message:

fusermount: failed to open /etc/fuse.conf: Permission denied
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf

Any other suggestions?

4th UPDATE 18/04/15

As suggested, from the console I ran sudo usermod -a -G fuse ubuntu and sudo vim /etc/fuse.conf, where I uncommented mount_max = 1000 and user_allow_other.

Then I ran s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

At first sight there were no errors, so I thought everything was fine, but it's exactly the opposite.

I'm a bit frustrated now, because I don't know what happened, but my folder /uploads/fufu is now inaccessible, and using ls -Al I see only this:

d????????? ? ?        ?              ?            ? fufu

I cannot sudo rm -r, rm -rf, or mv it; it says that /uploads/fufu is a directory.

I tried rebooting, logging out, and mount -a, but nothing changed.

I tried to unmount using fusermount and the error message was fusermount: entry for /uploads/fufu not found in /etc/mtab.

But when I opened /etc/mtab with sudo vim, I found this line: s3fs /uploads/fufu fuse.s3fs rw,nosuid,nodev,allow_other 0 0

Could someone tell me how I can unmount and finally remove this folder /uploads/fufu?

Solution

Despite to "S3fs is very reliable in recent builds", I can share my own experience with s3fs and info that we moved write operation from direct s3fs mounted folder access to aws console(SDK api possible way also) after periodic randomly system crashes .

You possibly won't have any problems with small files like images, but it definitely caused problems when we tried to write mp4 files. The last message in the log before the system crash was:

kernel: [ 9180.212990] s3fs[29994]: segfault at 0 ip 000000000042b503 sp 00007f09b4abf530 error 4 in s3fs[400000+52000]

These were rare, random occurrences, but they made the system unstable.

So we decided to keep s3fs mounted, but to use it only for read access.

Below I show how to mount s3fs with IAM role credentials, without a password file:

#!/bin/bash -x
# Mount an S3 bucket read/write via s3fs using the instance's IAM role.
# $S3_BUCKET is expected to be set in the environment before this runs.
S3_MOUNT_DIR=/media/s3
CACHE_DIR=/var/cache/s3cache

# Download and build s3fs 1.74
wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz
tar xvfz s3fs-1.74.tar.gz
cd s3fs-1.74
./configure
make
make install

mkdir $S3_MOUNT_DIR
mkdir $CACHE_DIR

chmod 0755 $S3_MOUNT_DIR
chmod 0755 $CACHE_DIR

# Read the IAM role name from the instance metadata service
export IAMROLE=`curl http://169.254.169.254/latest/meta-data/iam/security-credentials/`

# Mount using the IAM role; no password file needed
/usr/local/bin/s3fs $S3_BUCKET $S3_MOUNT_DIR -o iam_role=$IAMROLE,rw,allow_other,use_cache=$CACHE_DIR,uid=222,gid=500

You will also need to create an IAM role assigned to the instance, with a policy like this attached:

{"Statement":[{"Resource":"*","Action":["s3:*"],"Sid":"S3","Effect":"Allow"}],"Version":"2012-10-17"}

In your case it seems reasonable to use the PHP SDK (the other answer already has a usage example), but you can also write images to S3 with the AWS CLI:

aws s3 cp /path_to_image/image.jpg s3://your_bucket/path

If you have an IAM role created and assigned to your instance, you won't need to provide any additional credentials.
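For example, you could shell out to the CLI from your PHP controller; this is only a sketch, and the local path, bucket, and the assumption that the aws CLI is installed on the instance are placeholders of mine:

// Sketch: upload via the aws CLI from PHP; local path and bucket are placeholders.
$localFile = '/path_to_image/image.jpg';
$s3Target  = 's3://your_bucket/path/image.jpg';

// escapeshellarg() protects against shell injection through the file name.
$cmd = 'aws s3 cp ' . escapeshellarg($localFile) . ' ' . escapeshellarg($s3Target) . ' 2>&1';
exec($cmd, $output, $exitCode);

if ($exitCode !== 0) {
    error_log('S3 upload failed: ' . implode("\n", $output));
}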

Update - answer to your question:

  • I don't need to include the factory method to declare my IAM credentials?

Yes. If you have an IAM role assigned to the EC2 instance, then in your code you just need to create the client like this:

require 'vendor/autoload.php';   // AWS SDK for PHP v2 (Composer autoloader)

use Aws\S3\S3Client;

// No credentials needed here: the SDK picks them up from the instance's IAM role.
$s3Client = S3Client::factory();
$bucket = 'my_s3_bucket';
$keyname = $_POST['cat'].'/original_'.uniqid('fu_').'.jpg';
$localFilePath = '/local_path/some_image.jpg';

$result = $s3Client->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => $keyname,
    'SourceFile'  => $localFilePath,
    'ACL'         => 'public-read',
    'ContentType' => 'image/jpeg'
));
unlink($localFilePath);   // remove the local copy once the object is on S3

Option 2: if you don't need a local file storage stage but will put the file directly from the upload form:

 $s3Client = S3Client::factory();
 $bucket = 'my_s3_bucket';
 $keyname = $_POST['cat'].'/original_'.uniqid('fu_').'.jpg';
 $dataFromFile = file_get_contents($_FILES['uploadedfile']['tmp_name']); 

$result = $s3Client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'Body' => $dataFromFile,
    'ACL'    => 'public-read',
));

And to get the S3 link, if the object has public access:

$publicUrl = $s3Client->getObjectUrl($bucket, $keyname);

Or generate a signed URL for private content:

$validTime = '+10 minutes';
$signedUrl = $s3Client->getObjectUrl($bucket, $keyname, $validTime);
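To tie this back to the controller line from your question, the stored value could then simply be that URL; this is a sketch reusing $s3Client, $bucket, and $keyname from the snippets above:

// Sketch: save the generated URL into the controller field from the question.
$thisFu['original_img'] = $s3Client->getObjectUrl($bucket, $keyname);   // public object
// or, for private content, a time-limited link:
// $thisFu['original_img'] = $s3Client->getObjectUrl($bucket, $keyname, '+10 minutes');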
