Optimising concurrent ImageMagick Requests using redis/php-resque


Question

I am working on a site that uses ImageMagick to generate images. The site gets hundreds of requests every minute, and using ImageMagick to do this causes the site to crash.

So we implemented Redis and php-resque to do the ImageMagick generation in the background on a separate server, so that it doesn't crash our main one. The problem is that it still takes a very long time to get images done: a user might wait up to 2-3 minutes for an image request because the server is so busy processing these images.

I am not sure what information to give you; I'm mostly looking for advice. I think if we can cut down the initial processing time for the ImageMagick request, then obviously this will help speed up the number of images we can process.

Below is a sample of the ImageMagick command that we use:

convert -size 600x400 xc:none \
    \( ".$path."assets/images/bases/base_image_69509021433289153_8_0.png -fill rgb\(255,15,127\) -colorize 100% \) -composite \
    \( ".$path."assets/images/bases/eye_image_60444011438514404_8_0.png -fill rgb\(15,107,255\) -colorize 100% \) -composite \
    \( ".$path."assets/images/markings/marking_clan_8_marking_10_1433289499.png -fill rgb\(255,79,79\) -colorize 100% \) -composite \
    \( ".$path."assets/images/bases/shading_image_893252771433289153_8_0.png -fill rgb\(135,159,255\) -colorize 100% \) -compose Multiply -composite \
    \( ".$path."assets/images/highlight_image_629750231433289153_8_0.png -fill rgb\(27,35,36\) -colorize 100% \) -compose Overlay -composite \
    \( ".$path."assets/images/lineart_image_433715161433289153_8_0.png -fill rgb\(0,0,0\) -colorize 100% \) -compose Over -composite \
    ".$path."assets/generated/queue/tempt_preview_27992_userid_0_".$filename."_file.png

My theory is that the reason this takes quite a long time is the process of colouring the images. Is there any way to optimise this process?

If anyone has experience handling heavy loads of ImageMagick processes, or can see some glaringly easy ways to optimise our requests, I'd be very grateful.

Thanks :)

Answer

Your command essentially boils down to this:

convert -size 600x400 xc:none                                 \
    \( 1.png -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 2.png -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 3.png -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 4.png -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 5.png -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 6.png -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    result.png

My thoughts are as follows:

Point 1:

The first -composite onto a blank canvas seems pointless - presumably 1.png is a 600x400 PNG with transparency, so your first line can avoid the compositing operation, and save 16% of the processing time, by changing to:

convert -background none 1.png -fill rgb\(x,y,z\) -colorize 100%  \
    \( 2.png -fill rgb\(x,y,z\) -colorize 100% \) -composite      \
    \( 3.png -fill rgb\(x,y,z\) -colorize 100% \) -composite      \
    \( 4.png -fill rgb\(x,y,z\) -colorize 100% \) -composite      \
    \( 5.png -fill rgb\(x,y,z\) -colorize 100% \) -composite      \
    \( 6.png -fill rgb\(x,y,z\) -colorize 100% \) -composite      \
    result.png

Point 2:

I put the equivalent of your command into a loop and did 100 iterations; it takes 15 seconds. I then changed all your reads of PNG files into reads of MPC files - Magick Pixel Cache files. That reduced the processing time to just under 10 seconds, i.e. by 33%. A Magick Pixel Cache is just a pre-decompressed, pre-decoded file that can be read directly into memory without any CPU effort. You could pre-create them whenever your catalogue changes and store them alongside the PNG files. To make one, you do

convert image.png image.mpc

and you will get out image.mpc and image.cache. Then you simply change your code to look like this:

convert -size 600x400 xc:none                                 \
    \( 1.mpc -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 2.mpc -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 3.mpc -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 4.mpc -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 5.mpc -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    \( 6.mpc -fill rgb\(x,y,z\) -colorize 100% \) -composite  \
    result.png
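The MPC conversion can be batched over the whole assets catalogue with a short shell loop, run whenever the catalogue changes - a sketch, assuming the `assets/images` layout from the question:

```shell
# Pre-create an MPC sidecar next to every PNG asset.
# "convert image.png image.mpc" writes image.mpc plus image.cache.
find assets/images -name '*.png' | while read -r f; do
    convert "$f" "${f%.png}.mpc"
done
```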

Point 3:

Unfortunately you haven't answered my questions yet, but if your assets catalogue is not too big, you could put it (or the MPC equivalents above) onto a RAM disk at system startup.
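On Linux that can be a tmpfs mount done at boot - a minimal sketch, assuming root access; the mount point and 512 MB size are made-up values to adjust to your catalogue:

```shell
# Mount a RAM-backed tmpfs and copy the asset catalogue into it (needs root).
mkdir -p /mnt/ramassets
mount -t tmpfs -o size=512m tmpfs /mnt/ramassets
cp -r assets/images /mnt/ramassets/
# Then point $path at /mnt/ramassets so every asset read comes from RAM.
```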

Point 4:

You should definitely run in parallel - that will yield the biggest gains of all. It is very simple with GNU Parallel - example here.
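A sketch of that parallel fan-out: put one complete convert command per line in a job file (jobs.txt here is a hypothetical name), then run several at once. With GNU Parallel that is simply `parallel -j 4 < jobs.txt`; the same effect with the near-universal xargs -P:

```shell
# jobs.txt: one full convert command per line.
# -P 4 runs up to four at once; -I CMD substitutes each whole line into sh -c.
xargs -P 4 -I CMD sh -c CMD < jobs.txt
```

A sensible job count is the number of CPU cores, e.g. `-P "$(nproc)"` on Linux.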

If you are using Redis, it is actually easier than that. Just RPUSH your MIME-encoded images onto a Redis list like this:

#!/usr/bin/perl
################################################################################
# generator.pl <number of images> <image size in bytes>
# Mark Setchell
# Base64 encodes and sends "images" of specified size to REDIS
################################################################################
use strict;
use warnings FATAL => 'all';
use Redis;
use MIME::Base64;
use Time::HiRes qw(time);

my $Debug=0;    # set to 1 for debug messages

my $nargs = $#ARGV + 1;
if ($nargs != 2) {
    print "Usage: generator.pl <number of images> <image size in bytes>\n";
    exit 1;
}

my $nimages=$ARGV[0];
my $imsize=$ARGV[1];

# Our "image"
my $image="x"x$imsize;

printf "DEBUG($$): images: $nimages, size: $imsize\n" if $Debug;

# Connection to REDIS
my $redis = Redis->new;
my $start=time;

for(my $i=0;$i<$nimages;$i++){
   my $encoded=encode_base64($image,'');
   $redis->rpush('images'=>$encoded);
   print "DEBUG($$): Sending image $i\n" if $Debug;
}
my $elapsed=time-$start;
printf "DEBUG($$): Sent $nimages images of $imsize bytes in %.3f seconds, %d images/s\n",$elapsed,int($nimages/$elapsed);

and then run multiple workers that all sit there doing BLPOPs of jobs to do:

#!/usr/bin/perl
################################################################################
# worker.pl
# Mark Setchell
# Reads "images" from REDIS and uudecodes them as fast as possible
################################################################################
use strict;
use warnings FATAL => 'all';
use Redis;
use MIME::Base64;
use Time::HiRes qw(time);

my $Debug=0;    # set to 1 for debug messages
my $timeout=1;  # number of seconds to wait for an image
my $i=0;

# Connection to REDIS
my $redis = Redis->new;

my $start=time;

while(1){
   my (undef,$encoded)=$redis->blpop('images',$timeout);
   last if !defined $encoded;
   my $image=decode_base64($encoded);
   my $l=length($image);
   $i++; 
   print "DEBUG($$): Received image:$i, $l bytes\n" if $Debug;
}

my $elapsed=time-$start-$timeout; # since we waited that long for the last one
printf "DEBUG($$): Received $i images in %.3f seconds, %d images/s\n",$elapsed,int($i/$elapsed);

If I run one generator process as above and have it generate 100,000 images of 200kB each, and read them out with 4 worker processes on my reasonably specced iMac, it takes 59 seconds - so around 1,700 images/s can pass through Redis.
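For completeness, a sketch of how such a run can be launched, assuming the two scripts above are saved as generator.pl and worker.pl and a Redis server is listening locally:

```shell
# Start the workers first so they are already blocked in BLPOP,
# then generate 100,000 "images" of 200,000 bytes each.
for i in 1 2 3 4; do
    perl worker.pl &
done
perl generator.pl 100000 200000
wait   # each worker exits after its BLPOP times out on the drained list
```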
