Slowness found when base64 image select and encode from database


Problem Description


I am working in the Ionic framework. I am currently designing a posts page with text and images. Users can post their data and images, and all of it must be secure.

So, I use base64 encoding and save the image in the database:

encodeURIComponent($scope.image)

Each time the user makes a request, I select the rows from the table, decode them, and display them along with the text:

decodeURIComponent($scope.image) 

together with the HTML "data:image/jpeg;base64,_______" conversion.
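A minimal Node.js sketch of the data-URI round trip described above (the byte values here are stand-ins for a real JPEG buffer):

```javascript
// Stand-in bytes for a real JPEG buffer (starts with the JPEG magic number).
const imageBytes = Buffer.from([0xff, 0xd8, 0xff, 0xe0, 0x00, 0x10]);

// Encode: base64 the bytes and prefix the data-URI header, usable as an <img src>.
const base64 = imageBytes.toString('base64');
const dataUri = `data:image/jpeg;base64,${base64}`;

// Decode: strip the header and base64-decode back to the original bytes.
const decoded = Buffer.from(dataUri.split(',')[1], 'base64');

console.log(decoded.equals(imageBytes));        // true
console.log(base64.length / imageBytes.length); // ≈1.33 — the 33% size overhead
```

The last line shows the overhead the question complains about: base64 stores 3 bytes in 4 characters, so every image grows by roughly a third before it even reaches the database.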

It works fine, but takes much more time than I expected. The images end up 33% bigger, and the whole thing feels bloated.

Then I decided to move to Cordova's file upload plugin. But I realized that maintaining files this way is risky and complicated. I also tried saving binary data into the database, but failed.

Selecting the text without the base64 data dramatically reduces the time. What if I selected each image individually in a separate HTTP call, after selecting and displaying the other columns? Is that the right mechanism for handling secure images?

Solution

Since it's just personal files, you could store them in S3.

To be safe about file uploads, just check the file's MIME type before uploading to whatever storage you choose.

http://php.net/manual/en/function.mime-content-type.php

Just run a quick check on the uploaded file:

$mime = mime_content_type($file_path); // inspects the file's contents, not its extension
if ($mime === 'image/jpeg') return true;

No big deal!

Keeping files in the database is bad practice; it should be your last resort. S3 is great for many use cases, but it gets expensive at high usage, and local files should be used only for intranets and apps that aren't publicly available.

In my opinion, go S3.

Amazon's SDK is easy to use, and you get 1 GB of free storage for testing. You could also use your own server; just keep the files out of your database.

Solution for storing images on the filesystem

Let's say you have 100,000 users and each of them has 10 pictures. How do you handle storing them locally? Problem: the Linux filesystem breaks down after a few tens of thousands of images in one directory, so your file structure should avoid that.

Solution: Make the folder name be 'abs(userID/1000)*1000'/userID

That way, when you have the user with id 989787, their images will be stored in the folder:

989000/989787/img1.jpeg
989000/989787/img2.jpeg
989000/989787/img3.jpeg
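The bucketing rule above (where 'abs(userID/1000)*1000' amounts to integer division by 1,000 and multiplying back) can be sketched as follows; the function name is illustrative:

```javascript
// Bucket user folders in groups of 1,000 so no single directory
// accumulates hundreds of thousands of entries.
function userImageDir(userId) {
  const bucket = Math.floor(userId / 1000) * 1000; // 989787 -> 989000
  return `${bucket}/${userId}`;
}

console.log(userImageDir(989787)); // "989000/989787"
console.log(userImageDir(42));     // "0/42"
```

Each top-level bucket then holds at most 1,000 user folders, and a million users need only 1,000 buckets.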

And there you have it: a way of storing images for a million users that doesn't break the Unix filesystem.

How about storage sizes?

Last month I had to compress 1.3 million JPEGs for the e-commerce site I work on. When uploading images, compress them using Imagick with lossless flags and 80% quality. That removes invisible pixels and optimizes your storage. Since our images vary from 40x40 (thumbnails) to 1500x1500 (zoom images), we average around 700x700; times 1.3 million images, that filled around 120 GB of storage.
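As a back-of-envelope check of those numbers (taking the stated 120 GB and 1.3 million images at face value):

```javascript
// Average compressed size implied by 120 GB spread across 1.3 million JPEGs.
const totalBytes = 120 * 1024 ** 3; // 120 GB
const imageCount = 1.3e6;
const avgKB = totalBytes / imageCount / 1024;
console.log(Math.round(avgKB)); // ≈97 KB per compressed image
```

Roughly 97 KB for an average ~700x700 JPEG at 80% quality is plausible, which is why the filesystem approach holds up at this scale.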

So yeah, it's possible to store it all on your filesystem.

When things start to get slow, you hire a CDN.

How will that work?

The CDN sits in front of your image server. Whenever the CDN is asked for a file and doesn't find it in its storage (a cache miss), it copies the file from your image server. Later, when the CDN is requested again, it delivers the image from its own cache.
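That cache-miss behaviour can be modelled with a toy sketch (two Maps standing in for the origin image server and the CDN's cache):

```javascript
// Toy model of the CDN described above: serve from cache, and on a
// miss copy the file from the origin image server first.
const origin = new Map([['989000/989787/img1.jpeg', '<jpeg bytes>']]);
const cdnCache = new Map();

function cdnFetch(path) {
  if (!cdnCache.has(path)) {              // cache miss
    cdnCache.set(path, origin.get(path)); // copy from the image server
  }
  return cdnCache.get(path);              // every later request is a cache hit
}

console.log(cdnFetch('989000/989787/img1.jpeg')); // first call copies from origin
console.log(cdnCache.size);                       // 1 — cached for next time
```

The origin is only hit once per file, which is what makes the migration transparent to your application code.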

This way no code is needed to migrate to CDN image delivery; all you need to do is change the URLs in your site and hire a CDN. The same works for an S3 bucket.

It's not a cheap service, but it's way cheaper than CloudFront, and by the time you get to the point of needing it, you can probably afford it.
