Can libcurl be used to make multiple concurrent requests?




I am using libcurl for one of my projects. I know that the curl command-line tool is not meant to make multiple concurrent requests, but does libcurl support it?

I know there are other tools like ab, but libcurl provides many features I need. Again, I know I can run curl from a script to issue multiple requests, but that's not what I am looking for.

I could not find a satisfactory answer to this except this one, and it's not conclusive.

I should be able to use multiple handles for multiple connections.

Has anyone tried this? Are there any gotchas I need to look out for?
I should be able to do something like this:

 my_app --total_connections 1000 --concurrency 100 <Other libcurl options> url

Solution

To test what you are looking for, I wrote a little C program. It executes 10 HTTP GET requests using libcurl in a loop. The loop is parallelized using OpenMP (if available).

To run it, just save it in a file named, for example, parallel_curl_test.cpp and compile it twice: first with g++ parallel_curl_test.cpp -fopenmp $(pkg-config --libs --cflags libcurl) -o parallel_curl for the parallel version, and a second time with g++ parallel_curl_test.cpp $(pkg-config --libs --cflags libcurl) -o sequential_curl, without OpenMP, for the sequential version.
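The two compile commands from the paragraph above, for copy-paste:

```shell
# Parallel version (OpenMP enabled)
g++ parallel_curl_test.cpp -fopenmp $(pkg-config --libs --cflags libcurl) -o parallel_curl

# Sequential version (no OpenMP; the pragma is simply ignored)
g++ parallel_curl_test.cpp $(pkg-config --libs --cflags libcurl) -o sequential_curl
```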

Here is the code:

#include <stdio.h>
#include <sys/time.h>   /* gettimeofday() */
#include <curl/curl.h>

void curl_request();
size_t write_data(void *, size_t, size_t, void *);

static struct timeval tm1;
static int num_requests = 10;

static inline void start()
{
    gettimeofday(&tm1, NULL);
}

static inline void stop()
{
    struct timeval tm2;
    gettimeofday(&tm2, NULL);
    unsigned long long t = 1000 * (tm2.tv_sec - tm1.tv_sec) + (tm2.tv_usec - tm1.tv_usec) / 1000;
    printf("%d requests in %llu ms\n", num_requests, t);
}

int main()
{
    /* The first call to curl_easy_init() is not thread-safe, so
       initialize libcurl once before spawning any threads. */
    curl_global_init(CURL_GLOBAL_ALL);

    start();
    #pragma omp parallel for
    for (int n = 0; n < num_requests; ++n) {
        curl_request();
    }
    stop();

    curl_global_cleanup();
    return 0;
}

void curl_request()
{
    CURL *curl;
    CURLcode res;

    curl = curl_easy_init();   /* each thread uses its own easy handle */
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);
        res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "curl_request() failed: %s\n",
                    curl_easy_strerror(res));

        curl_easy_cleanup(curl);
    }
}

/* Discard the response body; returning size * nmemb tells libcurl
   that all bytes were consumed. */
size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp)
{
    return size * nmemb;
}

The output for ./parallel_curl will look like this:

10 requests in 657 ms

The output for ./sequential_curl will look something like:

10 requests in 13794 ms

As you can see, parallel_curl, which uses concurrency, finished significantly faster than sequential_curl, which ran sequentially.

Thus the answer to your question is: yes!

Of course, sequential execution could be made much more efficient by using pipelining, keep-alive connections, and reuse of resources. But that is another question.
