Parallel cURL Testing
To perform multiple curl transfers in parallel, we need to look at another tool: xargs.
If you aren’t familiar with xargs, it is a very powerful Linux utility that builds and runs command lines from its input. With it, we can execute multiple (dynamically built) curl commands in parallel with very little overhead. Example:
seq 1 3 | xargs -n1 -P3 bash -c 'i=$0; url="http://mytestserver.net/10m_test.html?run=${i}"; curl -O -s "$url"'
This command runs 3 curl transfers in parallel. The -P option sets the desired number of parallel executions, and the -n option limits how many arguments xargs passes per invocation (here, one). We use the seq command to feed numerical arguments to the commands so that each URL is unique, tagged with a run number. Note that -c is actually an option to bash, not xargs: it is where we specify the command to run, and the argument supplied by xargs is available inside that command as $0.
Note that this example doesn’t produce any output; it simply runs the transfers. If you want to save the output, refer back to the earlier discussion of output formats to decide what to record and how to save it.
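For instance, a minimal sketch (reusing the hypothetical test URL from above, with an arbitrary choice of stats) that records one line per transfer in results.txt could look like this:
seq 1 3 | xargs -n1 -P3 bash -c 'i=$0; url="http://mytestserver.net/10m_test.html?run=${i}"; curl -o /dev/null -s -w "run ${i}: %{time_total} %{http_code}\n" "$url"' >> results.txt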
From here, you can expand the number of iterations, pass other interesting parameters (a list of URLs from a file, perhaps), and so on. We often use this type of command when generating background traffic to simulate particular network conditions.
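As a sketch of the URL-list idea, assuming a hypothetical file urls.txt with one URL per line, something like this would download them three at a time:
xargs -n1 -P3 curl -O -s < urls.txt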
Automated cURL Testing
At some point, you will want to ramp up the number of iterations to improve the statistical significance of your test results. Fortunately, it’s easy to script cURL for your test purposes. We will go through some script examples in Bash that use many of the features we have discussed previously.
First, you have to decide what you want your output to be. What stats do you care about? HTTP Code? Transfer Time? Connect Time? All of the above?
Next you need to decide what your output format will be. CSV format? Text output? A summary only, or individual data points?
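For example, if you settle on CSV, one possible -w format string (the columns chosen here are just an illustration, against the same hypothetical test server) would be:
curl -o /dev/null -s -w "%{http_code},%{time_connect},%{time_total},%{size_download}\n" http://mytestserver.net/10k_test.html >> results.csv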
Let’s start with a simple example that gives a summary result:
#!/bin/bash

function usage() {
    echo "Usage: $0 host count size port"
}

if [ $# -ne 4 ]; then
    usage;
    exit;
fi

host=$1
count=$2
size=$3
port=$4

let i=$count-1
while [ $i -ge 0 ];
do
    curl -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host}:${port}/${size}_test.html
    let i=i-1
    # local usleep utility: pause 1000 microseconds between requests
    ./usleep 1000
done
This simple script takes four parameters: host, count, size, and port. These values are used to build the URL, and the transfer is run count times. The script assumes your server already has test files available in various predetermined sizes. Here is sample output from running the script:
$ ./curltest.sh mytestserver.net 10 10k 80
9: 0.037 200 10000 http://mytestserver.net:80/10k_test.html
8: 0.032 200 10000 http://mytestserver.net:80/10k_test.html
7: 0.034 200 10000 http://mytestserver.net:80/10k_test.html
6: 0.031 200 10000 http://mytestserver.net:80/10k_test.html
5: 0.034 200 10000 http://mytestserver.net:80/10k_test.html
4: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
3: 0.036 200 10000 http://mytestserver.net:80/10k_test.html
2: 0.040 200 10000 http://mytestserver.net:80/10k_test.html
1: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
0: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
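As noted above, the script assumes the test files already exist on the server. If you need to create them, one quick approach (the web-root path and file sizes here are just assumptions for illustration) is to generate files of random data directly on the server:
# create 10000- and 50000-byte test files in the web root (path is an example)
for bytes in 10000 50000; do
    head -c ${bytes} /dev/urandom > /var/www/html/$((bytes/1000))k_test.html
done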
That’s useful, but it would be even more useful with an average included. That means we have to keep a running total of the results as we go. Here is an updated version of the script:
#!/bin/bash

function usage() {
    echo "Usage: $0 host count size port"
    echo "Example: $0 mytestserver.net 10 5k 80";
}

if [ $# -ne 4 ]; then
    usage;
    exit;
fi

host=$1
count=$2
size=$3
port=$4

let i=$count-1
tot=0
while [ $i -ge 0 ];
do
    res=`curl -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host}:${port}/${size}_test.html`
    echo $res
    val=`echo $res | cut -f2 -d' '`
    tot=`echo "scale=3;${tot}+${val}" | bc`
    let i=i-1
    ./usleep 1000
done

avg=`echo "scale=3; ${tot}/${count}" |bc`
echo " ........................."
echo " AVG: $tot/$count = $avg"
Now if we run the above script, we get the following summary at the end:
$ ./curltest.sh mytestserver.net 10 10k 80
9: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
8: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
7: 0.040 200 10000 http://mytestserver.net:80/10k_test.html
6: 0.037 200 10000 http://mytestserver.net:80/10k_test.html
5: 0.040 200 10000 http://mytestserver.net:80/10k_test.html
4: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
3: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
2: 0.040 200 10000 http://mytestserver.net:80/10k_test.html
1: 0.034 200 10000 http://mytestserver.net:80/10k_test.html
0: 0.032 200 10000 http://mytestserver.net:80/10k_test.html
.........................
AVG: .357/10 = .035
Now we have an average, which is more useful for comparisons. If we have alternate ports set up (for example, with one going through a Badu proxy), then running subsequent tests on the respective ports gives us a meaningful measurement of improvement. To make that easier, we could further modify the script to accept multiple ports. Then it could run all the different ports for us, and we could see an immediate comparison.
Here’s what that might look like:
#!/bin/bash

function usage() {
    echo "Usage: $0 host count size port(s)"
    echo "Example: $0 mytestserver.net 20 10k 81 82";
}

if [ $# -lt 4 ]; then
    usage;
    exit;
fi

host=$1
count=$2
size=$3
shift;
shift;
shift;

for p in $*;
do
    echo "------------"
    let i=$count-1
    tot=0
    while [ $i -ge 0 ];
    do
        res=`curl -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host}:${p}/${size}_test.html`
        echo $res
        val=`echo $res | cut -f2 -d' '`
        tot=`echo "scale=3;${tot}+${val}" | bc`
        let i=i-1
        ./usleep 1000
    done
    avg=`echo "scale=3; ${tot}/${count}" |bc`
    echo " ........................."
    echo " AVG: $tot/$count = $avg"
done
Note that this implementation allows us to enter as many ports as we want. Here is sample output from the updated script with two ports:
$ ./curltest mytestserver.net 10 10k 80 81
------------
9: 0.120 200 10000 http://mytestserver.net:80/10k_test.html
8: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
7: 0.038 200 10000 http://mytestserver.net:80/10k_test.html
6: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
5: 0.032 200 10000 http://mytestserver.net:80/10k_test.html
4: 0.032 200 10000 http://mytestserver.net:80/10k_test.html
3: 0.041 200 10000 http://mytestserver.net:80/10k_test.html
2: 0.039 200 10000 http://mytestserver.net:80/10k_test.html
1: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
0: 0.030 200 10000 http://mytestserver.net:80/10k_test.html
.........................
AVG: .435/10 = .043
------------
9: 0.038 200 10000 http://mytestserver.net:81/10k_test.html
8: 0.040 200 10000 http://mytestserver.net:81/10k_test.html
7: 0.038 200 10000 http://mytestserver.net:81/10k_test.html
6: 0.032 200 10000 http://mytestserver.net:81/10k_test.html
5: 0.035 200 10000 http://mytestserver.net:81/10k_test.html
4: 0.033 200 10000 http://mytestserver.net:81/10k_test.html
3: 0.039 200 10000 http://mytestserver.net:81/10k_test.html
2: 0.031 200 10000 http://mytestserver.net:81/10k_test.html
1: 0.034 200 10000 http://mytestserver.net:81/10k_test.html
0: 0.038 200 10000 http://mytestserver.net:81/10k_test.html
.........................
AVG: .358/10 = .035
Now we can have a quick comparison of performance over two separate paths. However, we’ve discussed elsewhere that running all the tests for one path, followed by all the tests for another path, is not the most accurate way to test. The most accurate approach would be to run both paths in parallel, or at least to approximate that by alternating between the paths/ports. Because network conditions change very rapidly, we want our test runs to be as similar as possible for a fair comparison. This is especially true if your number of test runs is low.
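If you do want to run both paths truly in parallel, a minimal sketch using shell background jobs (reusing the hypothetical test server and ports) could look like this:
# launch both transfers at (roughly) the same time, then wait for both to finish
curl -o /dev/null -s -w "port 80: %{time_total}\n" http://mytestserver.net:80/50k_test.html &
curl -o /dev/null -s -w "port 81: %{time_total}\n" http://mytestserver.net:81/50k_test.html &
wait
Keep in mind that truly parallel transfers share the client’s link and can interfere with each other, which is one reason to prefer alternating instead.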
With that in mind, here is a mostly rewritten script that alternates between two paths:
#!/bin/bash

function usage() {
    echo "Usage: $0 count size udelay host1 port1 host2 port2"
    echo "Example: $0 10 50k 1000 mytestserver.net 80 mytestserver.net 81";
}

# check number of parameters
if [ $# -ne 7 ]; then
    usage;
    exit;
fi

# assign parameters to variables
count=$1
size=$2
delay=$3
host1=$4
port1=$5
host2=$6
port2=$7

# take the dns hit here with a small throwaway transfer to each host
curl -o "/dev/null" -s http://${host1}/1k_test.html &> /dev/null
curl -o "/dev/null" -s http://${host2}/1k_test.html &> /dev/null

div="==================================================================="

# print commands to be run
printf "%s%s\n" $div $div
com1="$count: curl -s http://${host1}:${port1}/${size}_test.html"
com2="$count: curl -s http://${host2}:${port2}/${size}_test.html"
printf "%s\t\t%s\n" "$com1" "$com2"
printf "%s%s\n" $div $div

# perform tests, alternating between the two hosts/ports
let i=$count-1
tot1=0
tot2=0
while [ $i -ge 0 ];
do
    # test for host1
    res1=`curl -w "$i: %{time_total} %{speed_download} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host1}:${port1}/${size}_test.html`
    val1=`echo "${res1}" | cut -f2 -d' '`
    tot1=`echo "scale=3;${tot1}+${val1}" | bc`

    # test for host2
    res2=`curl -w "$i: %{time_total} %{speed_download} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host2}:${port2}/${size}_test.html`
    val2=`echo "${res2}" | cut -f2 -d' '`
    tot2=`echo "scale=3;${tot2}+${val2}" | bc`

    printf "%s\t%s\n" "$res1" "$res2"
    let i=$i-1
    # local usleep utility: pause $delay microseconds between iterations
    ./usleep $delay
done

# print summary
avg1=`echo "scale=3; ${tot1}/${count}" |bc`
avg2=`echo "scale=3; ${tot2}/${count}" |bc`
printf "%s%s\n" $div $div
printf "%s\t\t\t\t\t\t\t%s\n" "AVG: ${tot1}/$count = ${avg1}" "AVG: ${tot2}/$count = ${avg2}"
Now our output changes to show both paths side by side (this looks best on a wide screen):
$ ./curltest 10 50k 1000 mytestserver.net 80 mytestserver.net 81
======================================================================================================================================
10: curl -s http://mytestserver.net:80/50k_test.html 10: curl -s http://mytestserver.net:81/50k_test.html
======================================================================================================================================
9: 0.043 1159070.000 200 50000 http://mytestserver.net:80/50k_test.html 9: 0.059 849300.000 200 50000 http://mytestserver.net:81/50k_test.html
8: 0.036 1400874.000 200 50000 http://mytestserver.net:80/50k_test.html 8: 0.059 846095.000 200 50000 http://mytestserver.net:81/50k_test.html
7: 0.035 1429388.000 200 50000 http://mytestserver.net:80/50k_test.html 7: 0.058 864872.000 200 50000 http://mytestserver.net:81/50k_test.html
6: 0.037 1366194.000 200 50000 http://mytestserver.net:80/50k_test.html 6: 0.056 889410.000 200 50000 http://mytestserver.net:81/50k_test.html
5: 0.036 1406944.000 200 50000 http://mytestserver.net:80/50k_test.html 5: 0.052 969612.000 200 50000 http://mytestserver.net:81/50k_test.html
4: 0.035 1419204.000 200 50000 http://mytestserver.net:80/50k_test.html 4: 0.072 698187.000 200 50000 http://mytestserver.net:81/50k_test.html
3: 0.033 1512447.000 200 50000 http://mytestserver.net:80/50k_test.html 3: 0.058 858295.000 200 50000 http://mytestserver.net:81/50k_test.html
2: 0.036 1403587.000 200 50000 http://mytestserver.net:80/50k_test.html 2: 0.060 839842.000 200 50000 http://mytestserver.net:81/50k_test.html
1: 0.033 1526717.000 200 50000 http://mytestserver.net:80/50k_test.html 1: 0.050 994233.000 200 50000 http://mytestserver.net:81/50k_test.html
0: 0.038 1325345.000 200 50000 http://mytestserver.net:80/50k_test.html 0: 0.055 915969.000 200 50000 http://mytestserver.net:81/50k_test.html
======================================================================================================================================
AVG: .362/10 = .036 AVG: .579/10 = .057
This makes our test results much more accurate.
We will leave it as an exercise for the reader to further improve this script. Some recommended next steps would be to:
- add a percentage measurement of improvement (see the sketch below)
- add uploads instead of or in addition to downloads
- use parameter flags instead of ordered parameters
- format the output for CSV
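For the first of those, a possible snippet (reusing the avg1 and avg2 variables from the last script) to print the percentage improvement of path 1 over path 2 would be:
pct=`echo "scale=1; (${avg2}-${avg1})*100/${avg2}" | bc`
echo "Path 1 improvement over path 2: ${pct}%"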
Happy testing!