curl_multi_exec: some images downloaded are missing some data / stream incomplete


Question


I have implemented a PHP function which checks & downloads a large number of images (>1'000), passed to it as an array, using PHP's curl_multi_init() method.

After reworking this a few times already (I was getting things like 0-byte files, etc.), I now have a solution which downloads all images - BUT every other image file downloaded is incomplete.

It looks to me as if I use file_put_contents() "too early", meaning before some of the images' data has been completely received via curl_multi_exec().

Unfortunately I didn't find any similar question nor any Google results for my case, in which I need to use curl_multi_exec but do NOT want to retrieve & save the images using the cURL option CURLOPT_FILE.

Hope someone is able to help me out regarding what I'm missing and why I get some broken images saved locally.

Here are some examples of the broken images retrieved:

Here's an example Array which I pass to my Multi-CURL-Function:

$curl_httpresources = [
        [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427'
        ,'/srv/www/data/images/1_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427'
        ,'/srv/www/data/images/2_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427'
        ,'/srv/www/data/images/3_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427'
        ,'/srv/www/data/images/4_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427'
        ,'/srv/www/data/images/5_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427'
        ,'/srv/www/data/images/6_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427'
        ,'/srv/www/data/images/7_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427'
        ,'/srv/www/data/images/8_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427'
        ,'/srv/www/data/images/9_unsplash.jpg' ],
        [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427'
        ,'/srv/www/data/images/10_unsplash.jpg' ],
];

My Multi-cURL PHP Function

Now for the function I'm currently using - which kind of "works", except for some partially downloaded files - this is the code:

function cURLfetch(array $resources)
{
    /** Disable PHP timelimit, because this could take a while... */
    set_time_limit(0);

    /** Validate the $resources Array (not empty, null, or alike) */
    $resources_num = count($resources);
    if ( empty($resources) || $resources_num <= 0 ) return false;

    /** Callback-Function for writing data to file */
    $callback = function($resource, $filepath)
    {
        file_put_contents($filepath, $resource);
        /** For Debug only: output <img>-Tag with saved $resource */
        printf('<img src="%s"><br>', str_replace('/srv/www', '', $filepath));
    };

    /**
     * Initialize CURL process for handling multiple parallel requests 
     */
    $curl_instance = curl_multi_init();
    $curl_multi_exec_active = null;
    $curl_request_options = [
                                CURLOPT_USERAGENT => 'PHP-Script/1.0 (+https://website.com/)',
                                CURLOPT_TIMEOUT => 10,
                                CURLOPT_FOLLOWLOCATION => true,
                                CURLOPT_VERBOSE => false,
                                CURLOPT_RETURNTRANSFER => true,
                            ];

    /**
     * Looping through all $resources
     *   $resources[$i][0] = HTTP resource
     *   $resources[$i][1] = Target Filepath
     */
    for ($i = 0; $i < $resources_num; $i++)
    {
        $curl_requests[$i] = curl_init($resources[$i][0]);
        curl_setopt_array($curl_requests[$i], $curl_request_options);
        curl_multi_add_handle($curl_instance, $curl_requests[$i]);
    }

    do {
        try {
            $curl_execute = curl_multi_exec($curl_instance, $curl_multi_exec_active);
        } catch (Exception $e) {
            error_log($e->getMessage());
        }
    } while ($curl_execute == CURLM_CALL_MULTI_PERFORM);


    /** Wait until data arrives on all sockets */
    $h = 0; // initialise a counter
    while ($curl_multi_exec_active && $curl_execute == CURLM_OK)
    {
        if (curl_multi_select($curl_instance) != -1)
        {
            do {
              $curl_data = curl_multi_exec($curl_instance, $curl_multi_exec_active);
              $curl_done = curl_multi_info_read($curl_instance);
              /** Check if there is data... */
              if ($curl_done['handle'] !== NULL)
              {
                  /** Continue ONLY if HTTP statuscode was OK (200) */
                  $info = curl_getinfo($curl_done['handle']);
                  if ($info['http_code'] == 200)
                  {
                      if (!empty(curl_multi_getcontent($curl_requests[$h]))) {
                          /** Curl request successful. Process data using the callback function. */
                          $callback(curl_multi_getcontent($curl_requests[$h]), $resources[$h][1]);
                      }
                      $h++; // count up
                   }
               }
            } while ($curl_data == CURLM_CALL_MULTI_PERFORM);
        }
    }

    /** Close all $curl_requests */
    foreach($curl_requests as $request) {
        curl_multi_remove_handle($curl_instance, $request);
    }
    curl_multi_close($curl_instance);

    return true;
}

/** Start fetching images from an Array */
cURLfetch($curl_httpresources);

Thanks a lot for any help, much appreciated!

Solution

I ended up using just regular cURL requests in a classical loop to query all >1'000 images and download the ones with a "HTTP 200 OK" response. My initial concern - that the server might cut the connection because it falsely identifies the requests as a DDoS attempt - turned out to have no effect, which is why this approach works well for my case.

Here's the final function with regular cURL requests I'm using:

function cURLfetchUrl($url, $save_as_file)
{
    /** Validate $url & $save_as_file (not empty, null, or alike) */
    if ( empty($url) || is_numeric($url) ) return false;
    if ( empty($save_as_file) || is_numeric($save_as_file) ) return false;

    /** Disable PHP timelimit, because this could take a while... */
    set_time_limit(0);

    try {
        /**
         * Set cURL options to be passed to a single request
         */
        $curl_request_options = [
                                    CURLOPT_USERAGENT => 'PHP-Script/1.0 (+https://website.com/)',
                                    CURLOPT_TIMEOUT => 5,
                                    CURLOPT_FOLLOWLOCATION => true,
                                    CURLOPT_RETURNTRANSFER => true,
                                ];

        /** Initialize & execute cURL-Request */
        $curl_instance = curl_init($url);
        curl_setopt_array($curl_instance, $curl_request_options);
        $curl_data = curl_exec($curl_instance);
        $curl_done = curl_getinfo($curl_instance);

        /** cURL request successful */
        if ($curl_done['http_code'] == 200)
        {
            /** Open a new file handle, write the file & close the file handle */
            if (file_put_contents($save_as_file, $curl_data) !== false) {
                // logging if file_put_contents was OK
            } else {
                // logging if file_put_contents FAILED
            }
        }

        /** Close the $curl_instance */
        curl_close($curl_instance);

        return true;

    } catch (Exception $e) {
        error_log($e->getMessage());
        return false;
    }
}

And to execute it:

$curl_httpresources = [
    [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427'
    ,'/srv/www/data/images/1_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427'
    ,'/srv/www/data/images/2_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427'
    ,'/srv/www/data/images/3_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427'
    ,'/srv/www/data/images/4_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427'
    ,'/srv/www/data/images/5_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427'
    ,'/srv/www/data/images/6_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427'
    ,'/srv/www/data/images/7_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427'
    ,'/srv/www/data/images/8_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427'
    ,'/srv/www/data/images/9_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427'
    ,'/srv/www/data/images/10_unsplash.jpg' ],
];

/** cURL all request from the $curl_httpresources Array */
if (count($curl_httpresources) > 0)
{
    foreach ($curl_httpresources as $resource)
    {
        cURLfetchUrl($resource[0], $resource[1]);
    }
}

Still, if someone has an idea of how to properly retrieve the file data streams using curl_multi, that would be great, as my answer to the initial question just shows a different approach rather than fixing the initial one.
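One likely culprit in the original curl_multi version is the `$h` counter: curl_multi_info_read() reports handles in *completion* order, not in the order they were added, so the counter can pair one request's body with a different request's target path, or read a handle whose transfer is still running. Below is a minimal sketch of a handle-aware variant - an assumption-laden illustration, not the asker's code: `curlMultiFetch` is a hypothetical name, and it assumes PHP >= 7.1, reachable URLs, and writable target directories, with the same `$curl_httpresources` array layout as above.

```php
<?php
/**
 * Sketch: fetch many URLs in parallel with curl_multi and write each body
 * to its own file, pairing each COMPLETED transfer with its target path via
 * the handle itself instead of an incrementing counter.
 */
function curlMultiFetch(array $resources): bool
{
    if (empty($resources)) return false;
    set_time_limit(0);

    $mh = curl_multi_init();
    $options = [
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_RETURNTRANSFER => true,
    ];

    /** Add one easy handle per resource, remembering which index it got */
    $handles = [];
    foreach ($resources as $i => [$url, $filepath]) {
        $handles[$i] = curl_init($url);
        curl_setopt_array($handles[$i], $options);
        curl_multi_add_handle($mh, $handles[$i]);
    }

    $active = 0;
    do {
        $status = curl_multi_exec($mh, $active);
        if ($active && curl_multi_select($mh) === -1) {
            usleep(1000); // select failed; avoid busy-waiting
        }
        /** Drain every transfer that has FINISHED since the last check */
        while (($info = curl_multi_info_read($mh)) !== false) {
            $ch = $info['handle'];
            // Find which request this handle belongs to (strict comparison
            // works for both PHP 7 resources and PHP 8 CurlHandle objects)
            $i = array_search($ch, $handles, true);
            if ($info['result'] === CURLE_OK
                && curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200) {
                // The transfer is done, so the body is complete here
                file_put_contents($resources[$i][1], curl_multi_getcontent($ch));
            }
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
    } while ($active && $status === CURLM_OK);

    curl_multi_close($mh);
    return true;
}
```

Because curl_multi_getcontent() is only read after curl_multi_info_read() has reported that transfer as done, the body should be complete at that point - which would address the "partial file" symptom without resorting to CURLOPT_FILE.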
