c - Can libcurl be used to make multiple concurrent requests?


I am using libcurl in one of my projects. I know that curl cannot be used to make multiple concurrent requests, but does libcurl support this?

I know there are other tools like ab, but libcurl provides many more features. Again, I know I can call curl from a script to run multiple requests, but that's not what I am looking for.

I could not find a satisfactory answer to this, except this one, although it's not conclusive.

I should be able to use multiple handles for multiple connections.

Has anyone tried this? Are there any gotchas I need to look out for?
I should be able to do something like this:

 my_app --total_connections 1000 --concurrency 100 <other libcurl options> url 

To test what I was looking for, I wrote a little C program. It executes 10 HTTP GET requests using libcurl in a loop. The loop is parallelized using OpenMP (if available).

To run it, save it in a file called, for example, parallel_curl_test.cpp and compile it twice: first with g++ parallel_curl_test.cpp -fopenmp $(pkg-config --libs --cflags libcurl) -o parallel_curl for the parallel version, and a second time with g++ parallel_curl_test.cpp $(pkg-config --libs --cflags libcurl) -o sequential_curl (without OpenMP) for the sequential version.

Here is the code:

 #include <stdio.h>
 #include <sys/time.h>
 #include <curl/curl.h>

 void curl_request();
 size_t write_data(void *, size_t, size_t, void *);

 static struct timeval tm1;
 static int num_requests = 10;

 static inline void start()
 {
     gettimeofday(&tm1, NULL);
 }

 static inline void stop()
 {
     struct timeval tm2;
     gettimeofday(&tm2, NULL);
     unsigned long long t = 1000 * (tm2.tv_sec - tm1.tv_sec) + (tm2.tv_usec - tm1.tv_usec) / 1000;
     printf("%d requests in %llu ms\n", num_requests, t);
 }

 int main()
 {
     /* libcurl's global state must be set up once, before any threads
        are spawned -- curl_global_init() is not thread-safe. */
     curl_global_init(CURL_GLOBAL_ALL);

     start();
     /* "parallel for" splits the iterations among the threads; a plain
        "parallel" would make every thread run all 10 iterations. */
     #pragma omp parallel for
     for(int n = 0; n < num_requests; ++n) {
         curl_request();
     }
     stop();

     curl_global_cleanup();
     return 0;
 }

 void curl_request()
 {
     CURL *curl;
     CURLcode res;

     curl = curl_easy_init();
     if(curl) {
         curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
         curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
         curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);
         res = curl_easy_perform(curl);
         if(res != CURLE_OK)
             fprintf(stderr, "curl_request() failed: %s\n",
                 curl_easy_strerror(res));

         curl_easy_cleanup(curl);
     }
 }

 /* Discard the response body; we only time the transfers. */
 size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp)
 {
     return size * nmemb;
 }

The output of ./parallel_curl looks like this:

10 requests in 657 ms 

The output of ./sequential_curl looks like this:

10 requests in 13794 ms 

As you can see, parallel_curl, which uses concurrency, finished much faster than sequential_curl, which ran sequentially.

Thus, the answer to your question is: yes!

Of course, the sequential execution could be made more efficient using pipelining, keep-alives, and reuse of resources, but that is a different question.

